From AI Doomerism to Molotov Cocktail
The billion-dollar ideology behind the violence finally produced what it always promised.
Source: garryslist.org
TL;DR
A 20-year-old PauseAI member threw a Molotov at Sam Altman’s home and marched on OpenAI HQ. This isn’t random violence: it’s the predictable endpoint of an ideology backed by billions and amplified by every major media outlet.
At 3:45 AM on April 10, 2026, a 20-year-old threw a Molotov cocktail at Sam Altman’s house. Then he walked to OpenAI headquarters and threatened to burn it down. Daniel Moreno-Gama was booked on suspicion of attempted murder. A follow-up New York Times story revealed he was carrying a list of other AI leaders at the time of his arrest.
Four days before that, in Indianapolis, an assailant fired 13 rounds into the front door of City Councilman Ron Gibson’s home. His eight-year-old son was inside. The note left at the scene read: “No Data Centers.”
And in November 2025, OpenAI locked down its San Francisco offices after a former Stop AI member allegedly expressed intent to physically harm employees and may have purchased weapons, with additional OpenAI locations as possible targets. The internal Slack message was direct: “Our information indicates that [name] from StopAI has expressed interest in causing physical harm to OpenAI employees. He has previously been on site at our San Francisco facilities.”
These are not random acts of madness. They are the predictable endpoint of a specific ideology, carefully constructed, lavishly funded, and enthusiastically amplified by people who should have known better.
He Wasn’t a Lone Wolf
Moreno-Gama held six community roles within PauseAI. His Discord handle was “Butlerian Jihadist,” a reference to Frank Herbert’s fictional anti-technology holy war. His Instagram was a feed of doomer content: capability curves captioned “if we do nothing very soon we will die.” Four months before the attack, he recommended Yudkowsky and Soares’ book “If Anyone Builds It, Everyone Dies” to his PauseAI followers.
In January 2026, he published a Substack post estimating AI-caused extinction as “nearly certain.” He wrote: “We must deal with the threat first and ask questions later.” The same post included a poem imagining the children of AI developers dying. “May Hell be kind to such a vile creature,” he wrote of the builders.
On December 3, he wrote in PauseAI’s Discord: “We are close to midnight it’s time to actually act.”
Then he acted.
After the attack, the organization that had given him those six community roles deleted his messages from their server. And when another community member had previously flagged violent rhetoric in PauseAI’s Discord, the mods deleted that warning post instead of acting on it.
The Rhetoric Was Never Subtle
Look at what these people actually said — out loud, on the record, to millions.
Eliezer Yudkowsky, the intellectual figurehead of the AI doomer movement, stated in TIME magazine that “the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.” Not probably. Not maybe. Everyone. He co-authored a book called “If Anyone Builds It, Everyone Dies.” The title is the thesis. He called for airstrikes on data centers and said the risk of a nuclear exchange was preferable to letting a training run complete.

He was not an outlier. The Center for AI Safety’s national spokesperson said on camera that people should “walk to the labs across the country and burn them down.” P(doom) numbers climbed from 50% to 90% to 99.99999%. PauseAI activated a “Warning Shot Protocol” declaring an AI model “a weapon of mass destruction.” A PauseAI leader said an Anthropic researcher “deserves whatever is coming to her.”
This is not subtle. A reasonable editor, a responsible producer, a halfway serious policy staffer — any of them could read these statements and recognize them for what they are: sensationalist, eliminationist rhetoric dressed up in academic language. But recognizing it would have meant not running the story. And the story was too good not to run.
The New York Times ran multiple splashy features. Ezra Klein gave Yudkowsky a long, sympathetic platform — not a cross-examination, a megaphone. They did this because a credentialed scientist saying “everyone will die” is a front-page story, not because anyone in editorial genuinely believed data centers were extinction machines. Apocalyptic claims from credentialed sources generate traffic. The outlets knew that. They made a choice.
Politicians and policy actors piled on for the same reason. AI panic is a useful lever — for regulatory positioning, for fundraising, for influence. When officials treat “AI will kill everyone” as a serious policy position worthy of hearings and formal response, they hand the fringe something it could never earn on its own: institutional credibility. The rhetoric went from Discord servers to the front page to the halls of power, and at every stage the people amplifying it benefited from doing so.
That amplification is the mechanism. Daniel Moreno-Gama did not invent this worldview in his bedroom. He was handed it — by major publications, by credentialed spokespeople, by an entire infrastructure of legitimization that treated “burn the labs down” as a defensible position rather than a dangerous incitement. Every outlet that ran the doomer line without pushback, every politician who cited extinction risk to justify a regulatory land grab, helped build the epistemic environment in which a 20-year-old could conclude that firebombing a CEO’s house was rational self-defense.
And that’s exactly what he concluded. Moreno-Gama described himself as a consequentialist in a memoir for his community college English class: “I give very little credence to intentions if the results do not match.” They gave him a trolley problem. One life versus all of humanity. The kid pulled the lever.
Follow the Money
This is not a grassroots fringe. Funders in the “AI Existential Risk Ecosystem” have directed over $1 billion toward existential-risk advocacy. The Future of Life Institute alone received a single cryptocurrency donation worth over $660 million.
Follow the incentives. When your organization’s funding depends on people believing in extinction, you do not publish “actually things look fine.” You fund Discord servers and open letters and organizations that hand official community roles to people like Daniel Moreno-Gama. Stop AI, describing their own legal protest actions, said they were trying to “slow OpenAI down in their attempted murder of everyone and every living thing on earth.” That is the language this movement uses for legal business activity. A 20-year-old read it and concluded that murdering Sam Altman was self-defense for the human race.
Mainstream outlets put these doomsayers on their front pages because apocalyptic rhetoric generates clicks. The money funded the press releases, the press releases fueled more coverage, and a professional class of doomers with institutional incentives to keep the panic alive does not produce moderation.
The Most Revealing Answer
A journalist asked Yudkowsky directly: if AI is this dangerous, why aren’t you attacking data centers yourself?
Whatever his exact wording, notice what the answer was not. It was not “because violence is wrong.” His restraint is strategic, not moral. And the community knows it. The unspoken agreement, visible to anyone reading the discourse: the kid’s greatest sin was bad timing, not method.
Soares, Yudkowsky’s co-author, tweeted that Altman was “doing terrible stuff” on the same night Moreno-Gama walked to the CEO’s house with a bottle of accelerant.
The people who built this ideology, funded it, published it, and put it on the front page need to own what it produces. Calling for airstrikes on data centers is not a philosophical position with no real-world consequences. Telling a national audience that AI developers are attempting murder is not harmless hyperbole.
A 20-year-old took those words at face value. An eight-year-old was asleep in a house that got shot up over a data center vote.
AI developers are not committing mass murder. They are building tools that are transforming medicine, science, and human potential. The people who spent over a billion dollars convincing a generation otherwise now have some explaining to do.
Related Links
- Man Who Attacked OpenAI CEO's Home Had List of Other AI Executives (New York Times)
- How Effective Altruism Lost Its Way (Quillette)
- When Effective Altruism Takes a Dark Turn (AI Panic)