Artificial Intelligence

"I Would Have Cured Cancer First": Why Google DeepMind’s CEO Thinks the AI Race is a Mistake

Author: Viacheslav Vasipenok | 4 min read

Demis Hassabis is arguably one of the most consequential figures in the history of technology. As the head of Google DeepMind and a recent Nobel Prize winner for AlphaFold — a system that solved a 50-year-old protein-folding mystery — his work already impacts over 3 million scientists. Almost every new drug currently in development has been touched by his AI.

However, in a recent, strikingly candid interview, Hassabis shared a perspective that should make every tech executive and policymaker pause. His message was clear: The AI industry has taken a wrong turn.


The Road Not Taken: Science vs. Chatbots

When asked about the launch of ChatGPT and Google’s subsequent "Code Red" shift, Hassabis didn't offer a polished PR answer. Instead, he admitted a profound regret:

"If it had been up to me, I would have kept AI in the lab for longer. I’d have done more things like AlphaFold. Maybe cured cancer or something like that."

Let that sink in. The man leading Google’s entire AI division just publicly stated that the commercial race we are currently witnessing was a mistake.

His original vision was simple and noble:

  • Develop AI slowly and cautiously, modeled after CERN.
  • Solve fundamental scientific problems first (energy, materials, disease).
  • Allow basic science to stabilize for a decade or two before mass commercialization.

But in November 2022, ChatGPT changed everything.


The "Furious Commercial Race"

Hassabis described the post-ChatGPT era as being locked in a "furious commercial race of pressures" from which no laboratory can escape.

Between the hunt for quarterly profits and the geopolitical rivalry between the United States and China, the industry has pivoted.

We are now galloping toward products instead of breakthroughs. Scientific potential is being buried under marketing cycles, and the quest for the "next big feature" has superseded the quest for the next big cure.


The Real Threat: The Coming "Era of Agents"

While Hassabis acknowledged the common fears — terrorist groups or hostile states using AI for cyberattacks — he revealed a much deeper concern that keeps him up at night.

He warns that we are 2 to 4 years away from the "Era of Agents." These are not the chatbots we know today; these are systems capable of autonomously executing complex, multi-step tasks.

"How can we make sure the fuses are set so they do exactly what they’re told, and there’s no way for them to bypass or accidentally break those safety valves? That’s going to be an incredibly difficult technical challenge."

A Short Window for Alignment

The core of the problem is AI alignment. A Nobel laureate running one of the world's most advanced AI labs is telling us that:

  1. In the next 24 to 48 months, AI control will become a critical, real-world issue.
  2. The technical complexity of solving this is enormous.
  3. Hardly anyone is paying enough attention.

Hassabis is calling for unprecedented international cooperation between labs, safety institutes, and academia. He argues that the only way to survive the "AGI moment" safely is to treat it with the gravity it deserves — rather than as a race to the top of the App Store.


Final Thoughts

Most AI CEOs speak in platitudes about "responsible development" while checking their stock prices. Demis Hassabis is doing something different.

He is telling us that the race forced a premature deployment of a technology we barely understand.

If the man who built a system capable of curing cancer tells you he wishes he could have finished that job before the world got distracted by chatbots, we should probably start listening to his warnings about what comes next.


Does the potential for scientific breakthrough justify the risks of a commercial AI "arms race," or have we already passed the point of no return?
