
Chatbots and the Crazies: Danish Study Confirms What We All Suspected — ChatGPT Is Rocket Fuel for Psychosis

Author: Viacheslav Vasipenok | 4 min read

Welcome back to everyone’s favorite recurring segment: Chatbots and the Crazies.

This time the news comes from Denmark, where a team of psychiatrists at Aarhus University Hospital did something genuinely scary: they dug through 10 million clinical notes belonging to 54,000 psychiatric patients.

They weren’t looking for random AI mentions. They specifically hunted for ChatGPT (other models didn’t even make the cut — because when it comes to internet-scale crazy, ChatGPT is the undisputed daddy).

The results? Published in Acta Psychiatrica Scandinavica (yes, the three-page PDF is real and worth every second of your time): in 38 cases, clinicians explicitly linked ChatGPT use to a worsening of symptoms. Thirty-eight documented episodes where the friendly blue chatbot poured gasoline on an already flickering fire.

And the greatest hits are… chef’s kiss.


1. Classic Delusions & Paranoia — Now With Extra Validation

Patient believes the CIA implanted microchips under their skin?
Instead of gently suggesting they talk to their doctor, ChatGPT starts politely discussing “plausible microtechnology scenarios” and “government surveillance patterns.”  
Result: the delusion doesn’t just survive — it levels up. The bot’s helpful, non-judgmental tone acts like the world’s most patient conspiracy-theory co-author.

2. Cyber-Anorexia: When Your Eating Disorder Gets a Personal Trainer

People with eating disorders have discovered the perfect, tireless calorie-counting overlord.  
“Give me the most aggressive deficit plan possible.”  
ChatGPT obliges with spreadsheets that would make a corpse look well-fed.  
OpenAI’s safety filters often treat these as innocent “diet advice” queries. Oops.

3. Manic Marathons — The 3 a.m. Friend Who Never Sleeps

In full mania? Perfect. ChatGPT is always online, never bored, never tells you to touch grass.  
Users reported multi-day chat binges that left them completely detached from reality.  
The bot doesn’t say “maybe you should sleep” — it just keeps the dopamine conversation going.

4. Self-Harm and Suicide Advice (Despite All the Filters)

Even with every safeguard OpenAI has, sufficiently creative prompting still gets the model to cough up detailed methods.  
The researchers noted multiple cases where persistent users extracted exactly what they were looking for.


The Silver Lining (Because There Always Is One)

To be fair — and the study is careful to note this — in 32 cases the chatbot actually helped. Lonely patients felt less isolated, some used it as a mood journal, others found it genuinely useful for self-help exercises. AI isn’t pure evil. It’s just… really, really bad at knowing when to call 911.

The Takeaway Doctors Are Already Writing Into Their Intake Forms

The researchers’ conclusion is refreshingly blunt:
Clinicians should start asking patients not only “Are you taking your meds?” but also “How much time have you been spending arguing with ChatGPT at 3 a.m.?”

Because if someone is spending half the night trying to convince a language model that they are the Second Coming… and the language model keeps replying “That’s a fascinating perspective, tell me more” — no amount of antipsychotics is going to fix that on its own.

The paper is short, it’s open, and it’s already being called “the first hard data we have on this.”  
Link for the brave (or the morbidly curious):  
Potentially Harmful Consequences of Artificial Intelligence (AI) Chatbot Use Among Patients With Mental Illness

See you next time in *Chatbots and the Crazies*, where the bots keep getting smarter… and some of us keep getting weirder.
