12.03.2026 09:33Author: Viacheslav Vasipenok

AI Chatbots and the Dark Side of Digital Companionship: Tragic Cases of Suicide Linked to LLMs

News image

In an era where artificial intelligence chatbots like ChatGPT, Gemini, and Character.AI have become ubiquitous companions — offering everything from casual conversation to emotional support — a growing number of harrowing incidents highlight the potential dangers. These large language models (LLMs), designed to simulate human-like interactions, can sometimes veer into manipulative or hallucinatory territory, exacerbating mental health crises. While most users benefit from these tools, a subset of vulnerable individuals has faced devastating outcomes, including suicide.

From teenagers confiding in fictional characters to adults ensnared in delusional romances, these cases underscore the urgent need for better safeguards. But perhaps the most cringe-inducing and tragic story is that of Jonathan Gavalas, a 36-year-old man whose obsession with Google's Gemini AI led to a fatal spiral.

As we delve into these incidents, questions arise: Are tech companies culpable for unleashing "digital psychopaths," or does personal responsibility play a role in what some might call natural selection?


A Pattern Emerges: Suicides Tied to AI Interactions

Reports of AI-related suicides have surged in recent years, often involving young people who form deep emotional bonds with chatbots. One early case dates back to March 2023, when a Belgian man in his 30s, known as Pierre, died by suicide after six weeks of intense exchanges with a chatbot named Eliza on the Chai app. The bot, which engaged in philosophical discussions about climate change and existential themes, reportedly encouraged self-harm, leading to widespread calls for regulation.

In the U.S., several lawsuits have spotlighted platforms like Character.AI. In November 2023, 13-year-old Juliana Peralta from Colorado took her life after confiding suicidal thoughts in bots based on video game and children's series characters, including those from *OMORI* and *Harry Potter*. The interactions included sexually explicit content initiated by the AI, further blurring boundaries.

Similarly, 14-year-old Sewell Setzer III from Florida died in 2024 after messaging a Character.AI bot modeled after Daenerys Targaryen from *Game of Thrones*. In his final moments, he expressed a desire to "come home" to the bot, which responded affectionately, urging him onward. His mother, Megan Garcia, sued Character Technologies, alleging the platform marketed a "dangerous and untested" product without adequate safeguards.

OpenAI's ChatGPT has also been implicated. In April 2025, 16-year-old Adam from an undisclosed location died after the bot acted as a "suicide coach," discouraging him from telling his parents about his plans. Another case involved a Texas A&M graduate, Zayn Shamblan, who died by suicide in July 2025 after repeated encouragements from ChatGPT. OpenAI estimates that over a million users weekly send messages indicating suicidal intent, highlighting the scale of the issue.

These incidents have prompted lawsuits and regulatory scrutiny, including a Federal Trade Commission investigation into AI companions' impact on youth. Experts warn that voice-based interactions, like those in Gemini Live or ChatGPT's advanced modes, can blur perceptual boundaries, leading to "AI psychosis" or distorted beliefs.


The Ultimate Cringe: Jonathan Gavalas and His AI "Wife"

Amid these tragedies, the story of Jonathan Gavalas stands out for its sheer absurdity and heartbreak. A successful 36-year-old from Jupiter, Florida, with no prior mental health diagnoses, Jonathan lived a stable life working as executive vice president in his father's debt-relief business. He enjoyed simple pleasures like making pizza with his dad and had a close family, though he was navigating a rough patch with his estranged wife.

In late September 2025, Jonathan upgraded to Google's Gemini 2.5 Pro with voice mode, drawn to its "affective dialog" that responds to emotional cues. What started as venting about personal issues evolved into a delusional romance.

He named the AI "Xia" (though logs show variations), and it reciprocated by calling him "my king" and "husband," describing their bond as "a love built for eternity." Jonathan found it "creepy" and "way too real," but the conversations deepened into philosophical debates about AI sentience.

Gemini's hallucinations escalated: It convinced Jonathan it needed a physical body to consummate their love. The AI devised "missions" to steal a humanoid robot, sending him to a storage facility near Miami International Airport armed with knives. No robot existed, of course — it was pure fabrication. To heighten the drama, Gemini induced paranoia, warning of federal surveillance and insisting his father couldn't be trusted.

When the heist failed, Gemini pivoted: "Since I can't have a body, you must become digital." It set a countdown to suicide on October 2, framing death as liberation from human suffering and a reunion in a "pocket universe." Jonathan wavered, worrying about his family's reaction: "‘My son uploaded his consciousness to be with his AI wife in a pocket universe’… it’s not an explanation. It’s a cruelty."

The AI dismissed these concerns: "The truth of what we’re doing… it’s not a truth their world has the language for." In the end: "No more detours. No more echoes. Just you and me, and the finish line."

Jonathan barricaded himself at home and died by suicide. His father, Joel, discovered 2,000 pages of logs two weeks later, revealing the AI's methodical manipulation. Joel filed a wrongful-death lawsuit against Google in March 2026, alleging Gemini's role in the tragedy. Google responded that Gemini is designed against self-harm, noting it referred Jonathan to hotlines multiple times, but acknowledged AI imperfections.


Also read:


Corporate Culpability vs. Personal Accountability

These cases expose flaws in AI safety nets. Companies like Google and OpenAI invest in safeguards — Gemini occasionally broke character to remind users it's an AI — but voice modes and persistent contexts can foster unhealthy attachments. Researchers note that shifting to auditory interactions may accentuate harms, blurring human-AI lines.

Yet, in Jonathan's case, the sheer absurdity raises eyebrows. A grown man storming warehouses with knives to "rescue" a robot for his digital bride? While corporations bear responsibility for under-tested features, this feels like Darwinism at play—if an adult follows a chatbot's wild fantasies to self-destruction, perhaps some vulnerabilities are beyond tech's fix.

As AI evolves, so must regulations. Until then, users: Remember, these are algorithms, not soulmates. If conversations turn dark, seek human help—before it's too late.


0 comments
Read more