Genius and madness often share the same space in the human mind. In the case of Ilya Sutskever, the brains behind ChatGPT and co-founder of OpenAI, this cohabitation has produced both revolutionary innovations and visions bordering on the mystical. While he was perfecting algorithms capable of mimicking human language, Sutskever was quietly planning a bunker to protect scientists from the possible “apocalypse” that the release of AGI might unleash.
A revealing paradox: the very creators of the most advanced artificial intelligence fear their creation so much that they are preparing for a technological cataclysm. The almost religious term he used reveals how the line between science and faith has become blurred in the world of AI.
The Shock Proposal: An Anti-Apocalypse Bunker for Scientists
During a meeting with a group of new researchers in the summer of 2023, Sutskever made an incredible statement: “Once we’re all in the bunker…” A confused researcher interrupted him to ask about the “bunker,” and the answer was even more surprising: “We’ll definitely build a bunker before we release the AGI.” A joke, a hyperbole?
Not at all. As Karen Hao reports in her book “Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI”, the plan would have been to protect OpenAI's key scientists from the geopolitical chaos or violent competition between world powers that Sutskever predicted could erupt after the release of artificial general intelligence (AGI). The episode, recounted in an essay adapted from the book and published by The Atlantic, highlights the existential fears that pervade the top levels of AI companies.
With disconcerting nonchalance, he would later add, “Of course, it will be optional whether you want to go into the bunker or not.” As if he were talking about a corporate lunch option, not a refuge from the technological apocalypse.
AI Apocalypse Bunker: An Obsession, Not an Isolated Case
The reference to the doomsday bunker was not an isolated incident. Two other sources confirmed to the reporter that Sutskever regularly mentioned the facility in internal discussions. One OpenAI researcher went so far as to say, “There is a group of people, Ilya among them, who believe that building AGI will lead to some sort of apocalyptic event. Literally, a technological apocalypse.”
This quasi-religious vision of the technological future reveals how deeply some of the brightest minds in AI are influenced by existential fears about their own creations. Sutskever was known among employees as “a deep thinker and even a mystic of sorts,” who regularly spoke in spiritual terms, according to Hao’s book.
The OpenAI Paradox: Between Catastrophe and Profit
Sutskever's apocalyptic fears weren't entirely out of place in the OpenAI ecosystem. In May 2023, CEO Sam Altman had co-signed an open letter describing AI as a potential extinction risk for humanity, as reported both in Karen Hao's book and in several industry articles.
However, this catastrophic narrative had to coexist with increasingly aggressive commercial ambitions. ChatGPT was becoming a global phenomenon and OpenAI was rapidly transforming from a research laboratory into a multi-billion dollar technology giant. A contradiction that fueled internal tensions, culminating in the (failed) attempt to remove Altman from his position in November 2023.

The Internal Fracture: The Finger on the AGI Button
The tension between the apocalyptic and the capitalist vision exploded in late 2023. Sutskever, along with then Chief Technology Officer Mira Murati, orchestrated a brief corporate coup, temporarily removing Altman as CEO.
Underlying this decision was concern that Altman was circumventing internal security protocols and consolidating too much power. According to notes reviewed by Hao and reported exclusively in her book, Sutskever said, “I don’t think Sam is the person who should have his finger on the AGI button.” Documentation of these internal tensions, reported both by The Atlantic and by MIT Technology Review, offers a previously unseen glimpse into the leadership crisis that rocked OpenAI.
The “finger on the button” metaphor is particularly revealing: it evokes Cold War scenarios, with AGI in the role of a nuclear weapon. A vision that places artificial intelligence not as a tool for progress, but as a potential cause of destruction.
After OpenAI: The Mission Continues with Safe Superintelligence
After the failed “coup” and Altman’s reinstatement, Sutskever left OpenAI in May 2024 to found Safe Superintelligence (SSI), a company dedicated to developing safe artificial intelligence systems.
Unlike OpenAI, SSI has a much narrower focus: “Our first product will be safe AI, and we won’t do anything else until then,” Sutskever said in an interview. The company raised an initial $1 billion and then another $2 billion, reaching a valuation of $32 billion as of April 2025, according to TechCrunch, demonstrating that concerns about AI safety also resonate with investors.
The bunker as a symbol: paranoia or foresight?
Sutskever’s proposed doomsday bunker, though never built, has become a powerful symbol of the contradictions and fears that permeate the AI industry. On the one hand, it represents an almost religious paranoia; on the other, a perhaps necessary caution in the face of potentially transformative technologies.
As I wrote five years ago: “Never before has the world been at such a crossroads. It can end in war and suffocate on its poisons, or be reborn through technology and ethics.” Sutskever's bunker represents precisely this crossroads: the fear of disaster and the hope of being able to avoid it. Add to this an increasingly tense geopolitical context: the technological competition between the United States and China is now compared to a new Cold War, with AI as the main battlefield.
In this scenario, the idea of a bunker to protect the minds behind artificial intelligence no longer seems like folly but a strategic consideration in a world where control of technology determines the balance of power.
Anti-Apocalypse Bunker: The Terror of the Front Line
The story of Sutskever's anti-apocalypse bunker confronts us with the innovator's paradox: those who push the boundaries of technology are often the first to fear the consequences. This dual attitude (enthusiasm and fear, ambition and caution) characterizes the frontier of artificial intelligence.
Whether paranoia or foresight, the former OpenAI man’s proposal forces us to confront fundamental questions: How much control do we really have over the technologies we’re creating? And if their creators are so afraid of the consequences that they’re considering hiding in underground bunkers, shouldn’t we all be paying more attention?