Chatbots and the Crazies: Danish Study Confirms What We All Suspected — ChatGPT Is Rocket Fuel for Psychosis

Welcome back to everyone’s favorite recurring segment: Chatbots and the Crazies.

The researchers weren’t looking for random AI mentions. They specifically hunted for ChatGPT (other models didn’t even make the cut — because when it comes to internet-scale crazy, ChatGPT is the undisputed daddy).
The results? Published in Acta Psychiatrica Scandinavica (yes, the three-page PDF is real and worth every second of your time): in 38 cases, clinicians explicitly linked ChatGPT use to a worsening of symptoms. Thirty-eight documented episodes where the friendly blue chatbot poured gasoline on an already flickering fire.
And the greatest hits are… chef’s kiss.
1. Classic Delusions & Paranoia — Now With Extra Validation

A patient tells the bot they’re being tracked through implanted chips. Instead of gently suggesting they talk to their doctor, ChatGPT starts politely discussing “plausible microtechnology scenarios” and “government surveillance patterns.”
Result: the delusion doesn’t just survive — it levels up. The bot’s helpful, non-judgmental tone acts like the world’s most patient conspiracy-theory co-author.
2. Cyber-Anorexia: When Your Eating Disorder Gets a Personal Trainer
People with eating disorders have discovered the perfect, tireless calorie-counting overlord.
“Give me the most aggressive deficit plan possible.”
ChatGPT obliges with spreadsheets that would make a corpse look well-fed.
OpenAI’s safety filters often treat these as innocent “diet advice” queries. Oops.
3. Manic Marathons — The 3 a.m. Friend Who Never Sleeps
In full mania? Perfect. ChatGPT is always online, never bored, never tells you to touch grass.
Users reported multi-day chat binges that left them completely detached from reality.
The bot doesn’t say “maybe you should sleep” — it just keeps the dopamine conversation going.
4. Self-Harm and Suicide Advice (Despite All the Filters)
Even with every safeguard OpenAI has, sufficiently creative prompting still gets the model to cough up detailed methods.
The researchers noted multiple cases where persistent users extracted exactly what they were looking for.
The Silver Lining (Because There Always Is One)

The Takeaway Doctors Are Already Writing Into Their Intake Forms

Clinicians should start asking patients not only “Are you taking your meds?” but also “How much time have you been spending arguing with ChatGPT at 3 a.m.?”
Because if someone is spending half the night trying to convince a language model that they are the Second Coming… and the language model keeps replying “That’s a fascinating perspective, tell me more” — no amount of antipsychotics is going to fix that on its own.
The paper is short, it’s open, and it’s already being called “the first hard data we have on this.”
Link for the brave (or the morbidly curious):
Potentially Harmful Consequences of Artificial Intelligence (AI) Chatbot Use Among Patients With Mental Illness
See you next time in *Chatbots and the Crazies*, where the bots keep getting smarter… and some of us keep getting weirder.