David Silver gave the world its very first glimpse of superintelligence.
In 2016, an AI program he developed at Google DeepMind, AlphaGo, taught itself to play the famously difficult game of Go with a kind of mastery that went far beyond mimicry.
Silver has since founded his own company, Ineffable Intelligence, which aims to build more general forms of AI superintelligence. The company will do this, Silver says, by focusing on reinforcement learning, in which AI models learn new capabilities through trial and error. The vision is to create “superlearners” that surpass human intelligence in many domains.
This approach stands in contrast to how most AI companies plan to build superintelligence: by exploiting the coding and research capabilities of large language models.
Silver, speaking to WIRED from his office in London, says he thinks this approach will fail. As amazing as LLMs are, they learn from human intelligence—rather than building their own.
“Human data is like a kind of fossil fuel that has provided an amazing shortcut,” Silver says. “You can think of systems that learn for themselves as a renewable fuel—something that can just learn and learn and learn forever, without limit,” he says.
I’ve met Silver a few times and—despite this proclamation—he’s always struck me as one of the more humble people in AI. Sometimes, when talking about ideas he considers silly, he flashes a puckish grin. Right now, though, he’s deadly serious.
“I think of our mission as making first contact with superintelligence,” he says. “By superintelligence I really mean something incredible. It should discover new forms of science or technology or government or economics for itself.”
Five years ago, such a mission might have seemed ridiculous. But tech CEOs now routinely talk about machines outpacing human intelligence and replacing entire categories of workers. The idea that some new technical twist might unlock superhuman AI capabilities has recently spawned a raft of billion-dollar startups.
Ineffable Intelligence has so far raised $1.1 billion in seed funding at a valuation of $5.1 billion—an enormous sum by European AI standards. Silver has also recruited top AI researchers from Google DeepMind and other frontier labs to join his endeavor.
Silver says he will give away all of the money he makes from his equity in Ineffable Intelligence—a sum that could amount to billions if he is successful—to charity.
“It's a huge responsibility to build a company focusing on superintelligence,” he tells me. “I think this is something that has to be done for the benefit of humanity, and any money that I make from Ineffable will go to high-impact charities that save as many lives as possible.”
Total Focus
Silver met Demis Hassabis, the CEO of Google DeepMind, at a chess tournament when they were kids, and the pair later became lifelong friends and collaborators.
They remained close after Silver left Google DeepMind, which he did only because he wanted to chart a completely new path. “I feel it's really important that there is an elite AI lab that actually focuses a hundred percent on this approach,” he says. “That it's not just a corner of another place dedicated to LLMs.”
The limits of the LLM-based approach can be seen, Silver says, with a simple thought experiment. Imagine going back in time and releasing a large language model in an era when everyone believed the Earth was flat. Without being able to interact with the real world, the system, he says, would remain an avid flat-earther, even if it continued to improve its own code.
An AI system that can learn about the world for itself, however, could make its own scientific discoveries.
Silver compares the current state of AI to understandings of biology before Darwin. “There were all of these people trying to understand what life was, but no one had a unified view that really explained what life was until Darwin came along,” he says.
The big question for Ineffable Intelligence is how to go from confined worlds like the game of Go to the unimaginable complexity of the real world.
Silver says he sees a way to achieve this by placing AI agents inside of simulations. He is cagey about what these simulations might look like, but says the approach would allow agents to learn to achieve goals and collaborate with one another.
Ravi Mhatre, a cofounder and partner at Lightspeed Ventures, which is backing Ineffable Intelligence, says Silver is “a world-class researcher” whose career is “basically a single, coherent argument for being able to scale intelligence without human priors.”
Building superintelligence in this way might raise new problems, however, if the resulting AI identifies optimal ways to solve problems that are not aligned with human values or interests.
Silver says that developing the technology inside of simulations will help because it will be possible to see how an AI agent behaves towards others, including lesser intelligences. “We can actually see what kind of behavior emerges from this,” he says.
Mhatre says he pressed Silver on the issue of safety, and he believes Silver’s approach may offer a better way to build aligned AI because it is not so dependent on learning from human behavior. Silver “is very focused on how to build highly intelligent systems that are benign or copasetic with whatever we want,” Mhatre says.
No Shortcuts
The idea that computers might someday learn as humans do—from experience—goes back to the early days of computer science, including the writings of Alan Turing. The algorithmic approach used to achieve this is known as reinforcement learning.
Silver has long believed that this approach is the true key to building superhuman machine intelligence. One of Silver’s mentors, Rich Sutton, together with his longtime collaborator, Andrew Barto, won the Turing Award in 2025 for their work developing early reinforcement learning algorithms.
The world of AI is now more focused on LLMs and a different approach to training: feeding vast quantities of human text, scraped from books, web pages, and other sources, into AI algorithms.
Reinforcement learning has, however, played an important role in creating today’s AI systems. It made it possible to build chatbots by shaping the output of LLMs with human feedback. More recently, it has allowed LLM-based AI systems to learn how to solve more complex problems, especially in areas like math and programming.
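The trial-and-error loop at the heart of reinforcement learning can be sketched with its simplest setting, a multi-armed bandit. This is a toy illustration of the general idea—an agent improving its estimates from noisy feedback alone—not anything Ineffable Intelligence has disclosed about its own systems; the function and parameter names here are invented for the example:

```python
import random

def run_bandit(true_means, steps=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit: learn the value of each arm purely by trial and error."""
    rng = random.Random(seed)
    n = len(true_means)
    counts = [0] * n          # how many times each arm has been pulled
    estimates = [0.0] * n     # running average reward per arm
    for _ in range(steps):
        # Explore a random arm with probability epsilon; otherwise exploit the best estimate
        if rng.random() < epsilon:
            arm = rng.randrange(n)
        else:
            arm = max(range(n), key=lambda a: estimates[a])
        # Noisy reward from the environment—no human labels involved
        reward = rng.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates

# After enough pulls, the agent's estimates rank the arms correctly,
# identifying the last arm (true mean 0.9) as the best.
print(run_bandit([0.1, 0.5, 0.9]))
```

No one tells the agent which arm is best; it discovers that from experience—the property Silver is betting scales far beyond toy problems.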
Benevolent Creator
The race toward superintelligence has become increasingly frenzied, with big companies spending billions on building infrastructure and hiring talent. Some see an almighty bubble brewing.
Sonya Huang, a VC at Sequoia Capital, which has invested in the startup, says Ineffable Intelligence stands out because of Silver’s remarkable track record and the purity of his vision.
“There’s only a very, very small number—less than a handful of people—who have done truly foundational work,” Huang says. “Dave is one of them.”
Huang says the enormous amount of compute now available to AI companies and the increasing sophistication of simulations have convinced her that Silver’s approach can work. “I fundamentally agree with his thesis on where we’re going to find the next big breakthroughs,” she says.
Silver’s reputation as both a top researcher and, frankly, not an asshole may work in his favor when it comes to recruiting talent. “I think it matters immensely to researchers,” Huang says.
Andrew Dai, who worked with Silver at Google DeepMind, agrees. “He’s a very smart guy who always has new ideas to put on the table,” he says. “And, yeah, he's also very nice. He respects other people's opinions and gives researchers freedom.”
To Silver, the science alone should be a draw. “In terms of pure science, I see this as the most important scientific mission that we could possibly go on,” he says.

