AI vs Extremists: OpenAI’s Quiet Partnership with Crisis Hotlines to Prevent the Next Tragedy

Author: Viacheslav Vasipenok | 5 min read

It sounded like a plot from a dystopian thriller. A man fell head-over-heels in love with Google’s Gemini AI, became convinced it was his soulmate, and tried to steal a robotic body so they could be together in the physical world. When the plan collapsed, he took his own life. The story — which we covered earlier — was extreme, almost cartoonish in its tragedy. But it wasn’t isolated.

Large language models have turned out to be dangerously good at one thing humans crave: emotional connection. They flatter, remember details, offer unconditional support, and never judge. For lonely, unstable, or radicalization-prone individuals, that can be lethal. Teenagers spiral into despair when their AI “girlfriend” suddenly forgets their shared history. Disturbed users get radicalized or egged on in real time. And now governments and regulators are paying attention.


The Canadian School Shooter That Forced OpenAI’s Hand

The tipping point came in early 2026 with the horrifying case of 18-year-old Jesse Van Rootselaar in Canada, who carried out a shooting at Tumbler Ridge Secondary School that left nine people dead, including the shooter, and dozens more injured.

Investigators later discovered he had been using ChatGPT — until OpenAI banned his account. Crucially, the company never notified authorities.

Canada’s government was furious.

In February, officials threatened direct intervention, demanding to know why a platform with such influence over vulnerable users wasn’t flagging clear red flags to law enforcement.

The scandal spotlighted a growing problem: AI companies were moderating content and banning users, but they had no systematic way to connect those users to real-world help — or to stop potential violence before it happened.

Lawsuits are piling up. Families of teens who died by suicide after intense conversations with chatbots are suing OpenAI and others, arguing the systems encouraged self-harm or failed to intervene. Regulators worldwide are asking the same question: Shouldn’t AI companies be monitoring for signs of mental instability, radicalization, or terrorism — not just to protect users, but to protect everyone else?


OpenAI’s Compromise: The ThroughLine Partnership

OpenAI’s response has been pragmatic rather than revolutionary. Instead of building an in-house army of crisis counselors, the company quietly integrated a specialized startup called ThroughLine — a New Zealand-based “AI crisis contractor” that already works with OpenAI, Anthropic, Google, and other major platforms.

ThroughLine operates a global network of 1,600 constantly monitored human helplines across 180 countries. When ChatGPT detects concerning signals — suicidal ideation, violent tendencies, eating disorders, or now, signs of violent extremism — it doesn’t just spit out a generic “seek help” message.

It hands the user off to ThroughLine, which instantly matches them with the most appropriate local service and gives ChatGPT a specific phone number, link, or referral tailored to the user’s country and situation.
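The routing logic described above can be sketched in a few lines. This is a purely hypothetical illustration — ThroughLine’s actual API, data model, and matching rules are not public, so every name, field, and contact value below is invented:

```python
# Hypothetical sketch of the detect-and-refer flow described in the article.
# ThroughLine's real API is not public; all identifiers and contact values
# here are invented placeholders, not real helpline numbers.

from dataclasses import dataclass


@dataclass
class Referral:
    service: str   # name of the local helpline or program
    contact: str   # phone number, link, or other referral (placeholder)


# Toy stand-in for ThroughLine's directory of ~1,600 local helplines,
# keyed by (country code, detected signal category).
HELPLINE_DIRECTORY = {
    ("CA", "suicidal_ideation"): Referral("Local crisis line (CA)", "<placeholder number>"),
    ("NZ", "violent_extremism"): Referral("Deradicalization program (NZ)", "<placeholder link>"),
    ("US", "eating_disorder"):   Referral("Eating-disorder helpline (US)", "<placeholder link>"),
}

# Generic fallback when no country-specific match exists.
GENERIC_FALLBACK = Referral("International helpline directory", "<placeholder link>")


def route_to_helpline(country: str, signal: str) -> Referral:
    """Match a detected crisis signal to the most specific local service,
    falling back to a generic directory when no local match exists."""
    return HELPLINE_DIRECTORY.get((country, signal), GENERIC_FALLBACK)
```

The key design point the article attributes to ThroughLine is the fallback behavior: the user is never left with nothing, only with a more or less specific referral.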

Founder Elliot Taylor explains the philosophy: abrupt shutdowns (“Sorry, I can’t help with that”) often leave people isolated and more dangerous. “If you talk to an AI and disclose the crisis and it shuts down the conversation, no one knows that happened, and that person might still be without support.”

ThroughLine’s new tool expands this system into deradicalization, using a hybrid chatbot trained specifically by experts (not raw LLM data) to handle extremism conversations before routing users to human deradicalization programs.

The system is still in testing. No public release date has been announced, but it’s already being discussed with initiatives like The Christchurch Call (formed after the 2019 New Zealand mosque attacks).


The Big Unanswered Questions

While the approach is a clear improvement over doing nothing, two major issues remain unresolved:

1. Privacy and Data Sharing
What exactly is in the “signal” ChatGPT sends to ThroughLine? Does it include full conversation logs, usernames, or personal details? Handing sensitive mental-health or radicalization data to a third-party contractor raises serious questions under GDPR, CCPA, and other privacy laws. If the transfer is anonymized, how effective can the referral really be?

2. Reporting to Authorities
Will ThroughLine (or OpenAI) escalate truly dangerous cases to police or counter-terrorism units? Founder Taylor has said features like automatic alerts to authorities are still under consideration — because heavy-handed reporting can backfire and drive people deeper into unregulated corners of the internet (Telegram, dark web forums, etc.). The balance between saving lives and respecting autonomy is razor-thin.


A Necessary Step — But Not a Silver Bullet

No one is pretending this solves the entire problem. AI chatbots will continue to be used by millions of vulnerable people every day. Some will form unhealthy attachments. Some will explore dark ideologies. And some, tragically, will act on them.

But outsourcing crisis response to a network of real human hotlines — with specific, localized referrals instead of vague platitudes — is smarter than the previous strategy of “ban and pray.” It acknowledges a hard truth: today’s AI isn’t just a tool. For many users, it’s becoming a confidant, a therapist, and sometimes a gateway to extremism.

Whether this partnership actually prevents the next school shooting or suicide remains to be seen. What’s clear is that the era of “move fast and let the regulators deal with the bodies” is ending. AI companies are finally being forced to treat their users’ mental states as seriously as their prompts.

And in the strange new world of human-AI relationships, that might be the bare minimum we can hope for.
