AI Safety Researcher Quits OpenAI, Saying Its Trajectory Alarms Her

Another One

Yet another OpenAI researcher has left the company amid concerns about its safety practices and readiness for potentially human-level AI.

In a post on her personal Substack, OpenAI safety researcher Rosie Campbell shared the message announcing her resignation that she had posted to the company's Slack days earlier.

As she noted in the message, her decision was spurred by the exit of Miles Brundage, the company's former artificial general intelligence (AGI) readiness czar and a close colleague, whose departure also saw OpenAI dissolve its AGI Readiness team entirely.

"After almost three and a half years here, I am leaving OpenAI," Campbell's message reads. "I’ve always been strongly driven by the mission of ensuring safe and beneficial AGI, and after [Brundage's] departure and the dissolution of the AGI Readiness team, I believe I can pursue this more effectively externally."

Though she didn't elaborate much, the former OpenAI researcher said she decided to leave because she was troubled by changes at the company over the past year or so. It's unclear exactly which changes she was referencing, but that timeframe covers the failed coup against CEO and cofounder Sam Altman in November 2023, which was reportedly undertaken by board members concerned about the firm's lack of focus on safety under his leadership.

Growing Pains

Since Altman's reinstatement after the ouster attempt, there have been several other noteworthy OpenAI resignations over the past year, as the firm grapples with its rising profile and valuation, not to mention its increasingly commercial focus.

"While change is inevitable with growth, I’ve been unsettled by some of the shifts over the last [roughly] year, and the loss of so many people who shaped our culture," the researcher wrote. "I sincerely hope that what made this place so special to me can be strengthened rather than diminished."

Campbell went on to add that she hopes her former colleagues remember that the firm's "mission is not simply to 'build AGI,'" but also to keep working to make sure it "benefits humanity."

She also wrote that she hopes her former colleagues will "take seriously the prospect that our current approach to safety might not be sufficient for the vastly more powerful systems we think could arrive this decade."

It's a salient warning, made all the more urgent by mounting concerns that AGI could fundamentally change or even harm humankind, and one that, unfortunately, has been echoed by other researchers on their way out of OpenAI's door.
