Tuesday, September 2, 2025

p(doom): When the Builders Fear Their Own Creations

There’s a term making the rounds in tech circles right now: p(doom). It stands for “probability of doom,” the chance that artificial intelligence could one day cause catastrophic harm or even end humanity.

It sounds like something out of science fiction, but it is how many of the people actually building AI are talking about their work.


A Different Way of Talking About Risk

By putting a percentage on the unthinkable, AI insiders are framing their work in the same category as nuclear weapons or climate change. They are asking a very blunt question: what are the odds this technology gets away from us and we cannot stop it?

Some throw out numbers like 5 percent, 20 percent, or even 50 percent. Others will not commit to a number, but still admit the risk is real. Either way, it shows how unsettled the industry’s own leaders are.


What Doom Really Means

The doom they are talking about is not robot armies taking over the streets. It is more about subtle, large-scale failures:

  • Systems that optimize for goals in ways we never intended and cannot shut down.

  • A handful of corporations or governments using AI in ways that destabilize entire societies.

  • Small design mistakes that spiral into global crises, whether through disinformation, cyber warfare, or the collapse of institutions.

In other words, doom is about losing control, and losing it at a scale humanity has never faced before.


When the Architects Themselves Worry

What makes this more striking is who is saying it.

  • Sam Altman, CEO of OpenAI, has admitted his personal p(doom) is “non-zero” and has floated numbers around 10 to 20 percent.

  • Eliezer Yudkowsky, an early AI theorist, has been warning for years that the probability is closer to near-certain unless radical safeguards are in place.

  • Paul Christiano, formerly at OpenAI, has put the chance of catastrophic outcomes this century at 10 to 20 percent.

  • Geoffrey Hinton, sometimes called the Godfather of AI, left Google in 2023 saying he regrets parts of his work and puts the odds of AI wiping us out at 10 percent or higher.

Even if their numbers differ, the message is clear. Catastrophe is not a fringe idea. It is a possibility taken seriously by the very people pushing AI forward.


Why the Term Stuck

“p(doom)” has caught on because it is short, blunt, and easy to repeat. It is Silicon Valley shorthand that lets insiders signal they are in the conversation, but it also communicates something to the public that is impossible to ignore. These systems are being built at breakneck speed, and the builders themselves are not confident in the outcome.

It feels a bit like the Manhattan Project, when physicists nervously wondered if their first nuclear test might ignite the atmosphere. Except this time, the technology is already everywhere, and the experiment is live.


Our Sobering Takeaway

When the people steering the AI revolution share their own p(doom) numbers, they are admitting something profound. We may be on a path with stakes we do not fully understand.

Whether you put the odds at one percent or fifty, the fact that the question even exists is telling. The people closest to the work are worried. And if they are worried, maybe we should be paying a lot closer attention too.

Cheers! Sláinte! Na zdravie!

Compiled with the aid of ChatGPT
