Former OpenAI CTO Exposes Sam Altman's Lies About AI Safety

In explosive video depositions played this week in Elon Musk's federal lawsuit against Sam Altman and OpenAI, former executives and board members have painted a damning picture of the company's leadership.

Murati: "No" — Altman Was Not Telling the Truth

When lawyers asked her point-blank whether Altman was telling the truth, her answer was simple and unequivocal: "No."

Murati still pushed for — and ultimately secured — the safety review. But the episode raised serious questions about whether the world's most powerful AI company nearly released a high-stakes model without proper oversight.

Her other answers were just as blunt:

- Did he undermine her authority as CTO? "Yes."
- Did he pit other executives against one another? "Yes."
- By fall 2023, did she believe Altman was not always forthcoming, honest, or truthful with her? After a very long pause, Murati replied: "Not always."

She described OpenAI during that period as being at "catastrophic risk of falling apart" and said she worried the company could "completely blow up."

Murati was not alone. Former OpenAI board member Tasha McCauley testified that Altman had told the board that three versions of ChatGPT had been submitted, tested, and reviewed by the safety deployment board. In reality, only one had gone through the full process at the time.
McCauley called this part of a broader "pattern of dishonesty" that created "a culture of lying, a culture of deception" that trickled down through the entire company. She said the board had "tons of concerns" about Altman and had dealt with "recurring crisis events caused by Sam's behavior."

Safety Teams Disbanded, Key Talent Flees
Former AI safety researcher Rosie Campbell added another layer. When she joined OpenAI, there were two dedicated teams focused on long-term safety: the Superalignment team and the AGI Readiness team. Both were later disbanded. Half of her own team left rather than accept other roles.
Campbell signed the infamous employee letter supporting Altman's return — but only because she feared the alternative (everyone moving to Microsoft) would be even worse for safety.

The list of departures speaks for itself:
- Co-founder Ilya Sutskever left to found Safe Superintelligence (SSI).
- Mira Murati departed to launch Thinking Machines Lab.
- Jan Leike joined Anthropic.
- The entire Superalignment team was dissolved.
- The AGI Readiness team was shut down.

Every co-founder and executive who prioritized safety is now gone.

$852 Billion Valuation — And Business as Usual
Meanwhile, OpenAI's commercial success has only accelerated. In March 2026, the company closed a massive funding round at an $852 billion valuation. Roughly 900 million people now use ChatGPT every week. The company recently began running ads inside the product, using conversation history for targeting, and has acquired a media company, placing it under its chief political operator. It is actively lobbying to block U.S. states from imposing AI regulations.

The depositions raise uncomfortable questions: Can a company racing toward AGI afford a culture where the CEO is accused of misleading his own safety team and board? Or is this simply the messy reality of building the most powerful technology in human history?
What do you think? The courtroom may deliver one verdict — but the real test will be what happens next with the AI that billions already rely on every day.