
The Rice on the Chessboard: Why Humanity Keeps Underestimating Exponential AI Growth

Author: Viacheslav Vasipenok

There’s an ancient Indian parable about a wise man who, for a service rendered, asked a powerful king to pay him in rice — placing one grain on the first square of a chessboard, two on the second, four on the third, and doubling the amount with every subsequent square.

The king, relieved by such a modest request, readily agreed. Only later did his advisors explain the terrible truth: the 64th square alone would hold 2⁶³ grains — roughly 9.2 quintillion — and the board as a whole nearly twice that, hundreds of millions of tons of rice. Far more than the entire kingdom (or the planet) could ever produce.
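A few lines of Python make the parable's arithmetic concrete — one grain on the first square, doubling on each square after:

```python
# Grains of rice per square: 1, 2, 4, ... doubling across all 64 squares.
grains = [2 ** (square - 1) for square in range(1, 65)]

print(f"64th square alone: {grains[-1]:,} grains")   # 2**63, about 9.2 quintillion
print(f"Whole board total: {sum(grains):,} grains")  # 2**64 - 1, about 18.4 quintillion
```

The total is a geometric series, so the final square holds almost as much rice as the other sixty-three combined — which is why the king's intuition failed him.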

This story has been retold for centuries as a cautionary tale about exponential growth. Psychologists, economists, and futurists — from Daniel Kahneman to Ray Kurzweil — have repeatedly warned that while humans can calculate exponentials on paper, our intuition fails catastrophically when trying to *feel* them. We evolved for a linear world of steady paces and predictable returns. On the savannah, walking twice as long got you roughly twice as far. That mental model served us well for millennia.

But it breaks down completely in the face of modern phenomena like compound interest, viral epidemics… and especially artificial intelligence.


“We Evolved for a Linear World”

In a powerful new essay published in MIT Technology Review on April 8, 2026, Mustafa Suleyman — co-founder of DeepMind and Inflection, and now CEO of Microsoft AI — opens with exactly this insight:

> “We evolved for a linear world… This intuition served us well on the savannah. But it catastrophically fails when confronting AI and the core exponential trends at its heart.”

Suleyman argues that the central story of our era is the ongoing compute explosion. Since 2010, the compute used to train frontier AI models has increased by a factor of roughly one trillion — from around 10¹⁴ FLOPs to over 10²⁶ FLOPs. Everything else in AI — capabilities, cost reductions, new architectures — flows from this single fact.

Skeptics keep predicting walls: the end of Moore’s Law, data shortages, energy constraints, or diminishing returns from scaling. Yet these predictions have been consistently wrong because they rely on linear extrapolation in an exponential world.

Suleyman points out that the compute required for a given level of performance halves roughly every eight months — a faster pace than Moore's Law itself. Inference costs have fallen by a factor of as much as 900 in some cases, while effective compute for frontier models grows by about 5x per year.

He paints a striking near-term picture: by 2027, global AI-relevant compute could reach 100 million H100-equivalent GPUs (a 10x jump in just three years), with another 1,000x effective increase possible by 2028.
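The figures above are Suleyman's; the back-of-the-envelope compounding below is my own illustration of how they fit together:

```python
# Sanity-checking the growth rates cited in the essay (illustrative arithmetic only).

# Training compute: ~1e14 FLOPs (2010) to ~1e26 FLOPs (frontier models today).
growth_factor = 1e26 / 1e14
print(f"Total growth since 2010: {growth_factor:.0e}x")  # a trillion-fold

# Compute needed for fixed performance halves every ~8 months;
# compounded over 12 months that is 2**(12/8), roughly 2.83x per year.
annual_efficiency = 2 ** (12 / 8)
print(f"Efficiency gain: {annual_efficiency:.2f}x per year")

# Effective frontier compute growing ~5x per year compounds to 125x over three years.
print(f"Three-year effective growth: {5 ** 3}x")
```

None of these rates feels dramatic in a single year — which is exactly the linear-intuition trap the essay describes: the drama is entirely in the compounding.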


From Chatbots to Autonomous Agents

Because we struggle to internalize exponentials, most people still imagine AI as slightly smarter chatbots. Suleyman says we’re only in the foothills of the real transition — toward nearly human-level agents: semi-autonomous systems capable of working for days or weeks on complex projects, writing code, negotiating, managing logistics, and collaborating in teams.

These won’t be isolated tools answering questions. They will be “teams of AI workers that deliberate, collaborate, and execute.” The implications stretch across every industry built on cognitive labor.


The Real Goal Isn’t Superintelligence

Importantly, Suleyman pushes back against the hype (and fear) surrounding AGI as an abstract destination.

He writes:

> “The anti-goal is autonomous superintelligence. What we’re building is a teammate. An assistant. Something in your corner, backing you up.”

Instead of chasing god-like autonomous superintelligence, the focus should be on creating reliable cognitive abundance — AI that amplifies human capability rather than replacing or escaping it.

This perspective is refreshingly grounded and optimistic. Suleyman isn’t promising utopia or warning of imminent doom. He’s simply saying: stop thinking linearly. The compute explosion is real, it’s accelerating, and it’s still only beginning.


Learning to See the Exponent

Humanity has marched into this trap for centuries — underestimating compound growth in finance, epidemiology, technology, and now AI. School teaches us the math, but our brains remain wired for the savannah.

The rice-on-the-chessboard story endures because it’s humbling. The king wasn’t stupid; he was human. We all are.

Suleyman’s essay is a timely reminder that in the age of AI, our greatest cognitive bias — the failure to intuitively grasp exponentials — may be the biggest obstacle to clear thinking about the future. Those who learn to override their linear instincts won’t just avoid surprise.

They might actually be ready for what’s coming.
