In a slightly manic new screed, OpenAI CEO and cofounder Sam Altman waxed poetic about the future of artificial intelligence — and invented a strange new unit of measurement to describe when his predictions might finally come to pass.
Altman's blog post, titled "The Intelligence Age" and published to his personal website, started out sounding like a relatively average tech CEO missive.
"In the next couple of decades," the CEO wrote, "we will be able to do things that would have seemed like magic to our grandparents."
But the further he got into the details, the wilder the promises started to sound, with Altman asserting that said "magic" will include things like "fixing the climate, establishing a space colony, and the discovery of all of physics," achievements that will "eventually become commonplace."
And when will all this happen? To count down to this magical future, Altman debuted a new measure of time: "a few thousand days," which in regular English translates to "an indeterminate number of years." And, close readers will notice, he even hedged that.
"It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there," Altman wrote.
Timing Is Everything
If "a few" means, as Dictionary.com defines it, anywhere between two and eight but generally refers to bundles of two, three, or four, Altman could be talking about anywhere from 2,000 to 8,000 days, which would be equivalent to between five and 21 years (to remember how long ago 21 years actually is, 21 years ago was when George W. Bush invaded Iraq.)
The 39-year-old computer scientist continued to play around with tenses, boasting that humanity has reached this "doorstep of the next leap in prosperity" because, as he put it, "deep learning worked."
"That’s really it; humanity discovered an algorithm that could really, truly learn any distribution of data," Altman continued.
He added that to a "shocking degree of precision" — and here we have to argue that the CEO is speaking overbroadly if not incorrectly, given that OpenAI's large language models (LLMs) and others are still known for bullshitting — the "more compute and data available, the better it gets at helping people solve hard problems."
"I find that no matter how much time I spend thinking about this," Altman concluded. "I can never really internalize how consequential it is."
Is the OpenAI CEO getting high on his own supply? Is he just straight-up tripping? Is he now the firm's number one weirdo since fellow cofounder Ilya Sutskever finally jumped ship?
Perhaps we'll find out within the next "few thousand days."