
Former OpenAI Technical Director Exposes Sam Altman's Lies About AI Safety

Author: Viacheslav Vasipenok | 5 min read

In explosive video depositions played this week in Elon Musk's federal lawsuit against Sam Altman and OpenAI, former executives and board members have painted a damning picture of the company's leadership.

At the center: accusations that CEO Sam Altman repeatedly misled his own team and board about critical AI safety reviews — even as the technology rolled out to hundreds of millions of users.

Mira Murati, the engineer widely credited with leading the development of GPT-4 and who briefly served as OpenAI's interim CEO after Altman's 2023 ouster, delivered some of the most striking testimony. Under oath, she directly contradicted Altman's claims about bypassing the company's internal safety processes.

Murati: "No" — Altman Was Not Telling the Truth

According to Murati's deposition, Altman told her that OpenAI's legal team, led by general counsel Jason Kwon, had approved skipping the internal Deployment Safety Board review for a major new model about to reach hundreds of millions of people.

When lawyers asked her point-blank whether Altman was telling the truth, her answer was simple and unequivocal: "No."

She went further, confirming she had checked directly with Kwon. What Altman had claimed the lawyers said and what Kwon actually told her were "completely different things."

Murati still pushed for — and ultimately secured — the safety review. But the episode raised serious questions about whether the world's most powerful AI company nearly released a high-stakes model without proper oversight.

Lawyers then pressed her on Altman's management style:

  • Did he undermine her authority as CTO? "Yes."
  • Did he pit other executives against one another? "Yes."
  • By fall 2023, did she believe Altman was not always forthcoming, honest, or truthful with her?

After a very long pause, Murati replied: "Not always."

She described OpenAI during that period as being at "catastrophic risk of falling apart" and said she worried the company could "completely blow up."

Board Member: A "Pattern of Dishonesty" and "Culture of Lying"

Murati was not alone. Former OpenAI board member Tasha McCauley testified that Altman had told the board that three versions of ChatGPT had been submitted, tested, and reviewed by the safety deployment board. In reality, only one had gone through the full process at the time.

McCauley called this part of a broader "pattern of dishonesty" that created "a culture of lying, a culture of deception" that trickled down through the entire company. She said the board had "tons of concerns" about Altman and had dealt with "recurring crisis events caused by Sam's behavior."

When the board finally fired Altman in November 2023, he reportedly told employees it was an "evil coup." Microsoft intervened, hundreds of employees threatened to quit, and Altman was reinstated within days. The board members who tried to hold him accountable — including McCauley — ultimately resigned.

Safety Teams Disbanded, Key Talent Flees

Former AI safety researcher Rosie Campbell added another layer. When she joined OpenAI, two dedicated teams focused on long-term safety: the Superalignment team and the AGI Readiness team. Both were later disbanded. Half of her own team left rather than accept other roles.

Campbell signed the infamous employee letter supporting Altman's return — but only because she feared the alternative (everyone moving to Microsoft) would be even worse for safety.

The exodus of safety-focused leaders has been total:

  • Co-founder Ilya Sutskever left to found Safe Superintelligence (SSI).
  • Mira Murati departed to launch Thinking Machines Lab.
  • Jan Leike joined Anthropic.
  • The entire Superalignment team was dissolved.
  • The AGI Readiness team was shut down.

Every co-founder and executive who prioritized safety is now gone.

$852 Billion Valuation — And Business as Usual

Meanwhile, OpenAI's commercial success has only accelerated. In March 2026, the company closed a massive funding round at an $852 billion valuation. Roughly 900 million people now use ChatGPT every week. The company recently began running ads inside the product, using conversation history for targeting, and has acquired a media company now overseen by its chief political operative. It is actively lobbying to block U.S. states from imposing AI regulations.

Critics see a familiar playbook: warnings from inside the building about safety and integrity ignored, while the product ships to the masses anyway. The only difference from past corporate scandals, some observers note, is that this product is now in the hands of nearly a billion weekly users.

The depositions raise uncomfortable questions: Can a company racing toward AGI afford a culture where the CEO is accused of misleading his own safety team and board? Or is this simply the messy reality of building the most powerful technology in human history?

What do you think? The courtroom may deliver one verdict — but the real test will be what happens next with the AI that billions already rely on every day.
