OpenAI Founder: Doomsday Bunker & AGI Fears

Did OpenAI Scientists Want a Doomsday Bunker for AGI?

Former chief scientist Ilya Sutskever’s concerns about artificial general intelligence reportedly included building a physical shelter.

When former chief scientist Ilya Sutskever discussed artificial general intelligence, or AGI, his perspective wasn’t that of someone simply improving chatbots. Instead, he seemed to anticipate a cataclysmic event he was actively contributing to.

According to Karen Hao of The Atlantic, who is writing a book about the November 2023 boardroom conflict that briefly removed CEO Sam Altman, Sutskever was purportedly developing advanced AI while also preparing for a potential apocalypse.

In a 2023 meeting, he reportedly stated, “We’re definitely going to build a bunker before we release AGI.” When questioned about his seriousness, he affirmed his statement, while also suggesting that entering the bunker would be optional.

The idea of optional shelter recalls the Vault-Tec vaults, where survival during a nuclear apocalypse depends on entering and remaining inside.

OpenAI’s Top Scientists Wanted to ‘Build a Bunker’ Before Releasing AGI

Sutskever’s views were shared by others, with some describing his mindset as almost prophetic. One source told Hao, “There is a group of people, Ilya being one of them, who believe that building AGI will bring about a rapture.” This belief was reportedly literal.

The scientist, who once speculated that some AI models might be “slightly conscious,” appeared to be losing confidence in the company’s direction, despite his commitment to the technology. His belief that AGI could end life as we know it, unless controlled by the right people, influenced his decision to participate in the attempt to remove Altman. The effort was ultimately unsuccessful.

“We’re definitely going to build a bunker before we release AGI.”

The internal conflict is now referred to as “The Blip,” possibly referencing the Thanos snap from the Marvel Cinematic Universe. Altman remains the CEO, seemingly with increased authority, while Sutskever has left OpenAI.

At least he’ll always have his bunker.

Frequently Asked Questions

What is Artificial General Intelligence (AGI)?
AGI is a hypothetical form of AI with human-like cognitive abilities, capable of understanding, learning, and applying knowledge across various domains.
Why are some experts concerned about AGI?
Concerns include potential impacts on employment, ethical considerations, and existential risks if AGI is not developed and managed responsibly.
What are the potential benefits of AGI?
AGI could lead to breakthroughs in various fields, including medicine, science, and technology, potentially solving complex problems and improving human lives.

About the Author

Amelia Shepherd is a technology reporter covering artificial intelligence, machine learning, and the future of computing.



