The Rice on the Chessboard: Why Humanity Keeps Underestimating Exponential AI Growth

There’s an ancient Indian parable about a wise man who, for a service rendered, asked a powerful king to pay him in rice — placing one grain on the first square of a chessboard, two on the second, four on the third, and doubling the amount with every subsequent square.
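
The arithmetic behind the story is worth spelling out. Here is a minimal sketch in Python (the ~25 mg grain weight is an illustrative assumption, not part of the parable):

```python
# Grains of rice on an 8x8 chessboard, doubling on every square.
SQUARES = 64
GRAIN_WEIGHT_KG = 0.000025  # assumed ~25 mg per grain (illustrative)

total_grains = sum(2 ** (square - 1) for square in range(1, SQUARES + 1))
assert total_grains == 2 ** SQUARES - 1  # closed form of the geometric series

print(f"Grains on the last square: {2 ** (SQUARES - 1):.2e}")   # ~9.2e18
print(f"Total grains:              {total_grains:.2e}")          # ~1.8e19
print(f"Approximate weight:        {total_grains * GRAIN_WEIGHT_KG / 1e12:,.0f} billion tonnes")
```

The total comes to 2^64 − 1, roughly 1.8 × 10¹⁹ grains, and the last square alone holds more rice than all the previous squares combined.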

This story has been retold for centuries as a cautionary tale about exponential growth. Psychologists, economists, and futurists — from Daniel Kahneman to Ray Kurzweil — have repeatedly warned that while humans can calculate exponentials on paper, our intuition fails catastrophically when trying to *feel* them. We evolved for a linear world of steady paces and predictable returns. On the savannah, walking twice as long got you roughly twice as far. That mental model served us well for millennia.
But it breaks down completely in the face of modern phenomena like compound interest, viral epidemics… and especially artificial intelligence.

“We Evolved for a Linear World”

In a powerful new essay published in MIT Technology Review on April 8, 2026, Mustafa Suleyman — co-founder of DeepMind and Inflection, and now CEO of Microsoft AI — opens with exactly this insight:

> Skeptics keep predicting walls: the end of Moore’s Law, data shortages, energy constraints, or diminishing returns from scaling. Yet these predictions have been consistently wrong because they rely on linear extrapolation in an exponential world.

Suleyman points out that the compute required for a given level of performance is halving roughly every eight months — faster than Moore’s Law itself. Inference costs have plummeted by up to 900x in some cases, while effective compute for frontier models grows by about 5x per year.
He paints a striking near-term picture: by 2027, global AI-relevant compute could reach 100 million H100-equivalent GPUs (a 10x jump in just three years), with another 1,000x effective increase possible by 2028.
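
To see how quickly those rates compound, here is a back-of-the-envelope sketch in Python using only the figures quoted above, run over the same three-year horizon as the GPU projection:

```python
# Compounding the rates quoted above over a three-year horizon.
months = 36

# Compute needed for a fixed level of performance halves roughly every 8 months.
cost_reduction = 2 ** (months / 8)
print(f"Cost to hit a fixed capability falls ~{cost_reduction:.0f}x")   # ~23x

# Effective compute for frontier models grows ~5x per year.
effective_compute = 5 ** (months / 12)
print(f"Effective frontier compute grows ~{effective_compute:.0f}x")    # 125x

# A 10x jump in AI-relevant compute over three years implies this annual rate:
annual_rate = 10 ** (1 / 3)
print(f"A 10x-in-3-years build-out is ~{annual_rate:.2f}x per year")    # ~2.15x
```

None of those annual rates looks dramatic on its own, which is exactly the trap: it is the compounding, not any single step, that linear intuition misses.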

From Chatbots to Autonomous Agents

Because we struggle to internalize exponentials, most people still imagine AI as slightly smarter chatbots. Suleyman says we’re only in the foothills of the real transition — toward nearly human-level agents: semi-autonomous systems capable of working for days or weeks on complex projects, writing code, negotiating, managing logistics, and collaborating in teams.
These won’t be isolated tools answering questions. They will be “teams of AI workers that deliberate, collaborate, and execute.” The implications stretch across every industry built on cognitive labor.

The Real Goal Isn’t Superintelligence

Suleyman writes:
> “The anti-goal is autonomous superintelligence. What we’re building is a teammate. An assistant. Something in your corner, backing you up.”

Instead of chasing god-like autonomous superintelligence, the focus should be on creating reliable cognitive abundance — AI that amplifies human capability rather than replacing or escaping it.
This perspective is refreshingly grounded and optimistic. Suleyman isn’t promising utopia or warning of imminent doom. He’s simply saying: stop thinking linearly. The compute explosion is real, it’s accelerating, and it’s still only beginning.

Learning to See the Exponent

Humanity has marched into this trap for centuries — underestimating compound growth in finance, epidemiology, technology, and now AI. School teaches us the math, but our brains remain wired for the savannah.
The rice-on-the-chessboard story endures because it’s humbling. The king wasn’t stupid; he was human. We all are.
Suleyman’s essay is a timely reminder that in the age of AI, our greatest cognitive bias — the failure to intuitively grasp exponentials — may be the biggest obstacle to clear thinking about the future. Those who learn to override their linear instincts won’t just avoid surprise.
They might actually be ready for what’s coming.