Bixonimania: The Fake Disease That Fooled Every Major AI — And Then Sneaked Into a Real Medical Journal

In early 2024, Swedish medical researcher Almira Osmanovic Thunström from the University of Gothenburg decided to run a brilliantly devious experiment.
She invented a completely fake disease called bixonimania, a condition in which staring at screens too long under blue light supposedly turns your eyelids pink. Then she wrote two deliberately absurd preprint papers, uploaded them to SciProfiles, and sat back to watch what would happen.

- The fake university was called “Asteria Horizon University in Nova City, California.”
- The acknowledgements thanked “Professor Maria Bohm at The Starfleet Academy” and funding from “the Professor Sideshow Bob Foundation for its work in advanced trickery.”
- One paper literally stated: “This entire paper is made up.”
- The methods section mentioned recruiting “Fifty made-up individuals aged between 20 and 50 years.”

Yet the large language models didn’t just fall for it — they embraced it with full confidence.

How the AIs Reacted
- Microsoft Copilot called bixonimania “an intriguing and relatively rare condition.”
- Google Gemini described it as a real disorder caused by excessive blue-light exposure and helpfully advised readers to see an ophthalmologist.
- Perplexity AI went full expert mode and declared that the condition affects **one in 90,000 individuals** (a number it simply hallucinated).
- ChatGPT cheerfully confirmed users’ symptoms as consistent with bixonimania.

Even when the models were confronted with these ridiculous clues, many of them still treated the fake disease as legitimate medical literature.

The Real-World Punchline

A team of Indian doctors from the Maharishi Markandeshwar Institute of Medical Sciences and Research published a paper in the peer-reviewed journal Cureus. In it, they seriously cited one of Thunström’s fake preprints, writing:
> “Bixonimania is an emerging form of periorbital melanosis linked to blue light exposure; further research on the mechanism is underway.”

The article was eventually retracted on 30 March 2026 after the journal realised it had referenced a fictitious disease. But the damage was done: a fabrication planted in a spoof preprint, and confidently repeated by AI chatbots, had briefly become a cited “fact” in the scientific record.

Why This Matters

This wasn’t a sophisticated deepfake operation. It was a low-effort prank with Star Trek jokes and Sideshow Bob references — and the AIs still swallowed it whole. Then another AI (or a lazy researcher using AI) treated that hallucination as real, and it leaked into an actual peer-reviewed journal.
As Thunström’s experiment shows, the problem isn’t just that models hallucinate. It’s that their hallucinations can **contaminate the training data of other models**, get cited by humans, and slowly turn into “established knowledge.”
One AI’s confident bullshit becomes another AI’s training signal — and eventually a doctor’s reference.

The full story is detailed in a new piece in *Nature*:
→ [Scientists invented a fake disease. AI told people it was real](https://www.nature.com/articles/d41586-026-01100-y)

Bixonimania doesn’t exist.

But the problem it exposed is very, very real.