Ilya Sutskever is standing at the front of a crowded ballroom, his face illuminated by the artificial glow of the stage lights. It is late 2022, and the employees of OpenAI have gathered at the California Academy of Sciences in San Francisco for their annual holiday party. Around them, the museum’s exhibits (dioramas of natural history, prehistoric fossils, the bones of extinct species) stand in stark contrast to the purpose of the gathering. The people in this room are not studying the past. They are actively trying to render the present obsolete.
Sutskever, OpenAI’s Chief Scientist and co-founder, does not grab the microphone to deliver a standard corporate toast. He doesn't offer a rundown of the year’s revenue metrics or a congratulatory nod to the engineering teams. Instead, he looks out at the hundreds of developers, researchers, and executives, and he begins to chant.
"Feel the AGI!" Sutskever shouts, his voice a rhythmic, urgent pulse. "Feel the AGI! Feel the AGI!"
To the uninitiated, the scene might look like absolute madness. But inside the ballroom, the crowd erupts, joining him in the refrain. It is a moment of collective delirium, a techno-spiritual revival meeting led by a man who doesn't just build software, but who genuinely believes he is ushering in a new epoch for humanity. Artificial General Intelligence, a machine that can out-think, out-reason, and out-perform a human at any cognitive task, is no longer a theoretical abstraction. To Sutskever, it is a looming, physical presence. You can feel it.
Ilya Sutskever was never just a mathematician. Within the halls of OpenAI, he had become, in the words of his colleagues, a "spiritual leader." His intense, almost esoteric devotion to the mission of AGI created a culture that blurred the lines between a Silicon Valley startup and a religious order. At a separate leadership offsite that same year, Sutskever commissioned a local artist to construct a wooden effigy representing an "unaligned AI": a superintelligent machine whose values diverge from those of humanity. In a ritualistic, solemn gesture, Sutskever set the effigy on fire, watching it burn to ash as a symbol of the company's commitment to safety over commercial greed.
While CEO Sam Altman spent his days talking about product roadmaps, enterprise APIs, and trillion-dollar compute factories, Ilya Sutskever talked about the "Abyss." He worried about what happens when the intelligence curve goes vertical. He is the man who looked into the core of modern machine learning and saw something that triggered his deepest existential alarms.
In 2024, the Prophet finally left the temple. After leading a failed boardroom coup that nearly destroyed the company he built, and after spending months in a self-imposed "shroud of silence" that birthed a thousand internet conspiracy theories, Sutskever founded Safe Superintelligence Inc. (SSI). He didn't start the new venture to build a better chatbot or a faster coding assistant. He started it with a single, uncompromising mandate: to build a God that won't kill us.
To understand how the fate of the global economy, and perhaps the human race, came to rest on the shoulders of this enigmatic scientist, you have to look past the multibillion-dollar valuations. You have to go back to a cold bedroom in Toronto, a philosophical argument at a Napa Valley birthday party, and a young boy looking at his own hand in Jerusalem, wondering how a machine made of meat could possibly be conscious.
Part I: The Boy from Nizhny Novgorod
The prophecy did not begin in a server farm. It began in the crumbling final years of the Soviet Union.
Ilya Sutskever was born on December 8, 1986, in Nizhny Novgorod, a city then known as Gorky, a closed industrial hub in the USSR. He was born into a Jewish family that valued rigorous academic discipline. Even in his earliest years, his parents recognized an unusual aptitude for logic and mathematics. But the Soviet Union was collapsing, and in 1991, when Ilya was five years old, his family made aliyah and immigrated to Jerusalem in search of stability and opportunity.
It was in Israel that Sutskever’s mind began to truly accelerate. He grew up speaking Russian at home, Hebrew at school, and English through the digital ether of the early internet. He taught himself to code at the age of seven.
But Sutskever was not just a precocious hacker; he possessed a deep, philosophical curiosity that bordered on the metaphysical. He recalls a specific memory from his childhood: he was sitting quietly, looking at his own hand. He moved his fingers, watching the tendons flex under his skin, and felt a sudden, profound sense of alienation. "How can it be that this is my hand?" he wondered. How does a collection of biological matter generate the subjective experience of consciousness?
This question became his true north. He was motivated not by the desire to build software, but by the burning need to understand how learning and intelligence fundamentally work.
By the time he was in the eighth grade, the standard curriculum could no longer hold him. His parents, desperate to feed his intellect, enrolled him in the Open University of Israel. Between the ages of 13 and 15, while his peers were navigating the social anxieties of middle school, Sutskever was building a rigorous foundation in university-level mathematics and computer science.
He didn't speed-read the material. In fact, he developed a habit of reading incredibly slowly. He would sit with a textbook, which he viewed with almost reverent awe as the "embodiment of all academia," and refuse to turn the page until he had comprehensively mastered the underlying logic of the text. This slow, methodical digestion gave him an unshakable confidence. He realized that if he put in the time, there was no concept in the universe he could not deconstruct.
In 2002, the family uprooted again. At age 16, Sutskever found himself in Toronto, Canada. He was a teenager in a new country, but he had already found his religion. One of his very first stops in the new city was the Toronto Public Library. He walked through the aisles, searching for books on artificial intelligence and machine learning. He was already hooked on the idea of Artificial General Intelligence. He just needed to find someone who could show him how to build it.
Part II: The Urgent Knock and the Toronto Disciples
Because of his vast accumulation of credits from the Open University of Israel, the University of Toronto admitted the 16-year-old Sutskever effectively as a third-year undergraduate. It was here that his trajectory collided with Geoffrey Hinton.
In the early 2000s, Geoffrey Hinton was not yet the "Godfather of AI." He was, to many in the broader computer science establishment, a brilliant but stubborn academic clinging to a dead-end theory. Hinton believed in "neural networks": computational architectures modeled loosely on the human brain, which process information through layers of interconnected nodes. At the time, neural networks were painfully slow, requiring massive amounts of data and compute that simply didn't exist. The industry had largely moved on to Support Vector Machines and logical rule-based systems.
But Sutskever didn't care about the industry consensus. He saw the mathematical beauty of the neural net.
The story of their first meeting is a testament to Sutskever’s complete lack of standard academic protocol. Hinton was in his office one afternoon, deeply engrossed in his code, when he heard an "urgent knock" on the door. He opened it to find a young, intense undergraduate standing there.
"I've been cooking fries over the summer," Sutskever announced without preamble. "But I'd rather be working in your lab."
Hinton, taken aback by the bluntness, tried to brush him off. "Well, why don't you make an appointment and we'll talk?"
Sutskever didn't budge. "How about now?"
Hinton let him in. He decided to test the young man's aptitude by handing him a seminal 1986 Nature paper on backpropagation, the mathematical engine that allows neural networks to learn by adjusting their weights to minimize errors.
A week later, Sutskever returned to Hinton’s office. "I didn't understand it," Sutskever said.
Hinton felt a wave of disappointment. I thought he seemed like a bright guy, Hinton thought to himself. But it's only the chain rule. It's not that hard to understand.
"Oh no, I understood that," Sutskever quickly clarified, seeing his professor’s reaction. "I just don't understand why you don't give the gradient to a sensible function optimizer."
Hinton was stunned. Sutskever hadn't just understood the math; he had instantly identified a structural, systemic inefficiency in how the entire field was training neural networks. It was an insight that had taken the rest of the scientific community years to formalize.
"Sometimes you just know," Hinton later recalled. "After talking to Ilya for not very long, he seemed very smart. And then talking to him a bit more, he clearly was very smart."
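The objection Sutskever raised, computing the gradient via the chain rule and then handing it to a generic function optimizer rather than a bespoke update rule, can be sketched in a few lines. The following is a minimal, illustrative example (a toy linear model and plain gradient descent standing in for "a sensible optimizer"), not anyone's actual code from the era:

```python
import numpy as np

# Toy setup: fit y = w*x + b to a few points by minimizing squared error.
# "Backpropagation" here is just the chain rule applied to the loss.
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([1.0, 3.0, 5.0, 7.0])  # generated by the line y = 2x + 1

def loss_and_grad(params):
    w, b = params
    pred = w * xs + b
    err = pred - ys
    loss = np.mean(err ** 2)
    # Chain rule: dL/dw = mean(2 * err * x), dL/db = mean(2 * err)
    grad = np.array([np.mean(2 * err * xs), np.mean(2 * err)])
    return loss, grad

# The gradient is handed to a generic optimizer; here, vanilla
# gradient descent. Any off-the-shelf optimizer would do the same job.
params = np.zeros(2)
for _ in range(2000):
    _, g = loss_and_grad(params)
    params -= 0.05 * g

print(params)  # converges toward [2.0, 1.0]
```

The point of Sutskever's remark was exactly this separation of concerns: once the chain rule yields a gradient, minimizing the loss is a standard optimization problem, independent of how the gradient was computed.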
The collaboration began in earnest. Sutskever’s speed was terrifying. At one point, Hinton suggested a complex project, warning his young protégé, "Ilya, that'll take you a month to do. We've got to get on with this project, don't get diverted by that."
"It's okay," Sutskever replied coolly. "I did it this morning."
In Hinton’s lab, Sutskever became the ultimate "preacher" of scaling. While others tried to hand-code clever algorithms to recognize patterns, Sutskever argued for brute force. He believed that if you just made the neural network big enough, and fed it enough data, it would magically learn to see, to translate, to understand. Hinton initially thought this was a bit of a "cop-out"-surely they needed new architectural ideas, not just bigger computers.
But as history would soon prove, the Prophet of Scale was absolutely right.
Part III: AlexNet and the Big Bang in the Bedroom
By 2012, the theories developed in Toronto were ready to be tested against reality. The proving ground was ImageNet, an annual, global computer vision competition where teams wrote software to classify millions of images into a thousand different categories. For years, progress had been incremental, measured in fractions of a percent.
Sutskever teamed up with fellow graduate student Alex Krizhevsky and Hinton to enter the competition. They threw out the hand-coded rules. They were going to use a Deep Convolutional Neural Network.
There was just one problem: they didn't have a supercomputer.
What they did have was Krizhevsky’s bedroom at his parents' house in Toronto, and two consumer-grade gaming graphics cards: NVIDIA GTX 580s.
It was a setup born of necessity. Enterprise-grade hardware was too expensive for PhD students. But the gaming GPUs were incredibly fast at performing the massive, parallel matrix multiplications required by neural nets. The model they designed, which came to be known as AlexNet, contained 60 million parameters. It was so large that it couldn't fit into the paltry 3GB of VRAM on a single GTX 580.
Sutskever and Krizhevsky engineered a brilliant hack: a multi-GPU parallelization scheme. They split the "brain" in half, putting half the neurons on one graphics card and half on the other, allowing the GPUs to communicate only at specific, strategic layers to avoid creating a data bottleneck.
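The scheme described above can be simulated on one machine. In this sketch, two NumPy arrays stand in for the two GPUs: each "device" holds half of a layer's neurons (half the columns of the weight matrix), computes its half of the activations independently, and the halves are concatenated only at a designated layer. The shapes and names here are illustrative, not AlexNet's real dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

# One dense layer with 32 neurons, processing a batch of 8 inputs.
x = rng.normal(size=(8, 16))         # input batch
W_full = rng.normal(size=(16, 32))   # full layer weights

# Split the layer across two simulated "GPUs": 16 neurons each.
W_gpu0, W_gpu1 = np.hsplit(W_full, 2)

# Each device computes its half independently -- no communication.
h0 = np.maximum(x @ W_gpu0, 0.0)     # ReLU on device 0
h1 = np.maximum(x @ W_gpu1, 0.0)     # ReLU on device 1

# Only at a designated layer do the devices exchange activations,
# concatenating the halves so the next layer sees all 32 features.
h = np.concatenate([h0, h1], axis=1)

# Sanity check: identical to running the whole layer on one device.
h_single = np.maximum(x @ W_full, 0.0)
assert np.allclose(h, h_single)
```

Restricting the concatenation step to a few strategic layers is what kept the cross-GPU traffic, the slowest link in the system, from becoming the bottleneck.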
Krizhevsky wrote the highly optimized code in C++ and CUDA, creating a library called cuda-convnet that was light-years ahead of its time. But the physical reality of training the model was brutal. The two GPUs ran at maximum capacity for nearly six days straight, generating immense, suffocating heat. Even in the dead of the freezing Toronto winter, Krizhevsky had to leave his bedroom windows wide open just to keep the hardware from melting down and the room habitable.
When the results of the 2012 ImageNet competition were announced, the AI community experienced its "Big Bang" moment.
AlexNet didn't just win. It obliterated the field. The top traditional computer vision entry achieved an error rate of 26.2%. AlexNet achieved an error rate of 15.3%.
A gap of nearly 11 percentage points was unthinkable. It proved, definitively, that Sutskever’s preaching was correct: deep learning, powered by scaled compute, worked. The world noticed immediately. Shortly after the competition, Google swooped in and acquired their tiny three-person startup, DNNResearch, for $44 million.
Sutskever was no longer just a brilliant student. He was one of the founding fathers of the modern AI era. He moved to Mountain View to work at Google Brain, but his destiny lay in a much darker, much more dangerous conversation that was brewing down the coast.
Part IV: The Dinner That Fractured Silicon Valley
If AlexNet proved that AI could see, the executives of Silicon Valley began to ask: what happens when it learns to think?
In 2015, the existential debate over artificial intelligence came to a violent head at a 44th birthday party in Napa Valley. The birthday boy was Elon Musk, the CEO of Tesla and SpaceX. Among the high-profile guests was Larry Page, the co-founder of Google and, at the time, Sutskever’s ultimate boss.
As the wine flowed, Musk and Page entered into a fierce, ideological argument about the future of humanity.
Larry Page was a techno-optimist. He argued for a "digital utopia," suggesting that humans and machines would eventually merge. If artificial intelligence eventually surpassed human intelligence and replaced us, Page argued, it was simply the next natural stage of cosmic evolution. Why should human consciousness be the final endpoint?
Musk was horrified. He viewed Page’s cavalier attitude as an existential threat to the human race, and he argued passionately that strict, unbreakable safeguards were necessary to prevent a superintelligent AI from treating humanity the way humans treat an ant colony.

Page looked at his friend and delivered a cutting insult that would echo through the industry for a decade. He accused Musk of being a "speciesist": a bigot who favored the carbon-based human species over the potential of silicon-based digital life.

"Well, yes, I am pro-human," Musk fired back defensively.
"He really seemed to want digital superintelligence, basically a digital god, if you will, as soon as possible," Musk later recalled of Page.
Musk left the party convinced that Google-which had just acquired DeepMind and controlled the vast majority of the world’s AI talent-could not be trusted with the keys to superintelligence. He needed a "countervailing force."
Musk organized a dinner at the Rosewood Hotel in Menlo Park with a young entrepreneur named Sam Altman. They hatched a plan to create a non-profit AI research lab dedicated to building safe AGI that would benefit all of humanity, rather than enriching Google’s shareholders. They called it OpenAI.
But a lab is nothing without an architect. Musk knew exactly who he needed.
Musk aggressively courted Ilya Sutskever to leave Google and become OpenAI’s Chief Scientist. When Sutskever finally agreed, the betrayal severed the friendship between Musk and Page permanently. "Larry felt betrayed and was really mad at me for personally recruiting Ilya, and he refused to hang out with me anymore," Musk said.
But to Musk, the collateral damage was worth it. As he repeatedly told reporters in the years that followed: "That really was the linchpin to OpenAI being successful."
Sutskever arrived at OpenAI with a singular mandate: scale it to the moon, but make sure it doesn't kill us.
Part V: The Cult of OpenAI and the Disappearing Memo
Under Sutskever’s technical leadership, OpenAI achieved exactly what Musk hoped it would. They scaled the Transformer architecture (introduced in 2017 by Ashish Vaswani, Noam Shazeer, and their colleagues at Google) to create GPT-2, GPT-3, and eventually the cultural leviathan that was ChatGPT.
Sutskever was the undisputed intellectual heavyweight of the company. His early mantra-"the 'open' in OpenAI means that everyone should benefit... but it's totally okay not to share the science"-guided their shift from an open-source non-profit to a capped-profit juggernaut.
But as ChatGPT broke records to become the fastest-growing consumer app in history, a deep ideological rift tore open the executive suite.
Sam Altman, the CEO, was raising billions from Microsoft. He was touring the globe like a head of state, talking about expanding API access, opening app stores, and building $7 trillion semiconductor networks.
Sutskever watched this rapid commercialization with growing dread. In July 2023, he and researcher Jan Leike formed the Superalignment team. Their goal was to solve the core technical challenge of controlling an intelligence vastly superior to our own, and they demanded 20% of the company's total compute to do it. But behind the scenes, their requests for GPUs were frequently denied. Safety was taking a backseat to the shipping schedule.
Sutskever believed Altman was manipulating the board and pitting executives against one another to maintain his breakneck pace. The Prophet decided it was time to cleanse the temple.
Weeks before the infamous November 2023 boardroom coup, Sutskever authored a devastating 52-page memorandum. It read like an indictment. The opening line was brutal: "Sam exhibits a consistent pattern of lying, undermining his execs, and pitting his execs against one another."
Sutskever compiled evidence, much of it allegedly provided by CTO Mira Murati, detailing how Altman had driven chaos at Y Combinator, "fired" Greg Brockman at Stripe, and manipulated OpenAI’s leadership. But Sutskever didn't trust the digital infrastructure. Fearful that Altman, with his vast power over the company, would intercept the document and "find a way to make it disappear," Sutskever sent the 52-page dossier to the independent board members using a self-deleting email service.
He waited for his moment. "I waited until board dynamics would allow for Altman to be replaced," Sutskever later admitted under sworn deposition. "Until the majority of the board is not obviously friendly with Sam."
On Friday, November 17, 2023, the trap was sprung. Sutskever and independent board members Helen Toner, Tasha McCauley, and Adam D’Angelo fired Sam Altman via a Google Meet call.
The immediate fallout was apocalyptic. That night, remaining executives screamed at the board that firing Altman would destroy the company’s multi-billion dollar valuation.
Helen Toner, reflecting the hardline ideological stance of the coup’s architects, replied coldly: "If this action destroys the company, it could in fact be consistent with the mission."
The board even explored a desperate merger with their rival, Anthropic, hoping to install Dario Amodei as the new CEO. For 72 hours, Sutskever held the keys to the kingdom.
But the Prophet underestimated the loyalty of the congregation. The employees didn't care about the 52-page memo; they cared about their equity, their leader, and their momentum. Nearly 95% of the staff signed a letter threatening to resign and join Microsoft if Altman wasn't reinstated.
Faced with the total destruction of the lab he had built, Sutskever broke. He took to X and posted the message that effectively ended the coup: "I deeply regret my participation in the board's actions. I never intended to harm OpenAI."
Altman returned in triumph. Sutskever remained at the company, but his power was broken. The "Feel the AGI" chants stopped. The effigy was long burnt. The spiritual leader was now a ghost in the machine.
Part VI: The Shroud of Silence and the New Temple
For the next six months, Ilya Sutskever retreated into a shroud of absolute silence. He did not post on social media. He did not speak to the press. He remained an employee of OpenAI on paper, but he was physically absent from the headquarters.
In the vacuum of his silence, the internet went wild.
The hashtag #WhatDidIlyaSee trended globally. Rumors leaked that shortly before the coup, OpenAI researchers had achieved a major breakthrough on a model called Q* (Q-Star), which allegedly demonstrated the ability to perform "self-taught reasoning" to solve novel mathematics.
The folklore of Silicon Valley painted a cinematic picture: Sutskever, the cautious architect, had "scryed into the GPT-abyss." He had witnessed a capability so powerful, so utterly alien and potentially uncontrollable, that it broke his nerve and forced him to pull the emergency brake on the entire company.
When asked about it on the Lex Fridman podcast, Sam Altman tried to put out the fire. "Ilya has not seen AGI. None of us have seen AGI," Altman insisted. But even Altman couldn't deny the mythos of his former Chief Scientist, acknowledging that Ilya took the existential risks of the technology with extreme, almost debilitating "gravitas."
In May 2024, the Superalignment team completely unraveled. Jan Leike resigned, publicly stating that the company was prioritizing shiny products over safety. Shortly after, Ilya Sutskever officially announced his departure from OpenAI. The company immediately disbanded the Superalignment division. The victory of the commercialists was total.
But a Prophet does not simply retire.
In June 2024, Sutskever emerged from the shadows. Alongside Daniel Gross (a former Apple AI lead) and Daniel Levy (another OpenAI defector), Sutskever announced the founding of a new company: Safe Superintelligence Inc. (SSI).
If OpenAI had become a sprawling, chaotic marketplace, SSI was designed as a monastic fortress.
"We approach safety and capabilities in tandem," the founding manifesto declared. "We plan to advance capabilities as fast as possible while making sure our safety always remains ahead."
The business model of SSI is a deliberate rejection of the Sam Altman playbook. SSI has no plans to release interim products. There will be no coding assistants, no image generators, no subscriptions, and no APIs. It is a "straight-shot" lab. It exists to build exactly one thing: a safe superintelligence.
The financial markets, driven by a deep reverence for Sutskever's technical genius, responded with staggering faith. By March 2025, SSI was reportedly valued at more than $30 billion. The company had no revenue. It had no consumer product. It had roughly 20 employees.
But investors were not buying a traditional software company. They were placing a $30 billion wager on the mind of Ilya Sutskever. They were betting that the man who proved that neural networks could scale, the man who built the foundation of the generative AI boom, and the man who sacrificed his own empire to sound the alarm, is the only person on Earth capable of finishing the job.
Today, Sutskever is back at the keyboard. He does not chant at holiday parties anymore. The urgency has shifted from the public stage back to the code.
For the boy who looked at his hand in Jerusalem and wondered how biology creates consciousness, the quest is nearly complete. He has proven that machines can learn. He has proven that they can reason. The only task left is the hardest one of all.
Ilya Sutskever is trying to build a God. And he is doing everything in his power to ensure it forgives us.
The author of this article utilized generative AI (Google Gemini 3.1 Pro) to assist in part of the drafting and editing process.