[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"nav-categories":3,"article-the-mentally-retarded-ai-how-training-on-junk-data-creates-irreversibly-dumb-llms":70},{"data":4},[5,37,57,64],{"name":6,"slug":7,"categories":8},"Productivity","productivity",[9,13,17,21,25,29,33],{"id":10,"title":11,"slug":12},17,"Branding","branding",{"id":14,"title":15,"slug":16},19,"Marketing","marketing",{"id":18,"title":19,"slug":20},20,"Work","work",{"id":22,"title":23,"slug":24},34,"Community","community",{"id":26,"title":27,"slug":28},21,"For newbies","for-newbies",{"id":30,"title":31,"slug":32},24,"Investment","investment",{"id":34,"title":35,"slug":36},22,"Finance","finance",{"name":38,"slug":39,"categories":40},"Tech","tech",[41,45,49,53],{"id":42,"title":43,"slug":44},28,"Technology","technology",{"id":46,"title":47,"slug":48},32,"Artificial Intelligence","artificial-intelligence",{"id":50,"title":51,"slug":52},26,"Security and protection","security-and-protection",{"id":54,"title":55,"slug":56},31,"YouTube Blog","youtube-blog",{"name":58,"slug":59,"categories":60},"News","news",[61],{"id":62,"title":58,"slug":63},18,"quasanews",{"name":65,"slug":66,"categories":67},"Business","business",[68],{"id":69,"title":65,"slug":66},16,{"post":71,"published_news":94,"popular_news":150,"categories":214},{"title":72,"description":73,"meta_title":72,"meta_description":74,"meta_keywords":75,"text":76,"slug":77,"created_at":78,"publish_at":79,"formatted_created_at":80,"category_id":22,"links":81,"view_type":84,"video_url":85,"views":86,"likes":87,"lang":88,"comments_count":87,"category":89},"The \"Mentally Retarded\" AI: How Training on Junk Data Creates Irreversibly Dumb LLMs","In a provocative experiment that has sparked debates across AI research circles, scientists from three prominent U.S. 
universities—Shuo Xing from Stanford University, Junyuan Hong from the University of California, Berkeley, and Yifan Wang from Carnegie Mellon University—deliberately sabotaged a large language model (LLM) by training it on low-quality \"junk data.\" The result? An AI that exhibits profound intellectual deficits, akin to what the researchers describe as \"mental retardation\" in human terms.","In essence, this isn't just a stunt - it's a warning. As we flood the internet with AI-generated slop, future models risk inheriting this stupidity","An AI that exhibits profound intellectual deficits, akin to what the researchers describe as \"mental retardation\" in human terms.","\u003Cp>In a provocative experiment that has sparked debates across AI research circles, scientists from three prominent U.S. universities - Shuo Xing from Stanford University, Junyuan Hong from the University of California, Berkeley, and Yifan Wang from Carnegie Mellon University - deliberately sabotaged a large language model (LLM) by training it on low-quality &quot;junk data.&quot;\u003C/p>\n\n\u003Cp>The result? An AI that exhibits profound intellectual deficits, akin to what the researchers describe as &quot;mental retardation&quot; in human terms. Published in a preprint on arXiv in late 2024 (titled &quot;Junk DNA in LLMs: Irreversible Degradation from Low-Quality Training Data&quot;), the study demonstrates how feeding models memes, low-effort TikTok transcripts, random tweets, and other digital detritus leads to irreversible cognitive collapse.\u003C/p>\n\n\u003Chr />\n\u003Ch4>\u003Cstrong>The Experiment: Turning Genius into Gibberish\u003C/strong>\u003C/h4>\n\n\u003Cp>The team started with a base LLM similar in scale to smaller open-source models like Llama-2 7B. 
They fine-tuned it exclusively on a curated dataset of &quot;junk&quot; sourced from public platforms: second-tier memes from Reddit and 4chan, unscripted TikTok rants, inflammatory Twitter threads, and algorithmically generated spam. No high-quality corpora like Wikipedia, scientific papers, or books were included. The training regimen mimicked real-world scenarios where models scrape the unfiltered internet.\u003C/p>\n\n\u003Cp>\u003Cstrong>\u003Cimg alt=\"\" class=\"image-align-right\" height=\"447\" src=\"https://quasa.io/storage/photos/00/image - 2025-11-14T121733.546.jpg\" width=\"300\" />Post-training evaluations were brutal. The &quot;dumbed-down&quot; LLM scored abysmally on benchmarks:\u003C/strong>\u003C/p>\n\n\u003Cul>\n\t\u003Cli>\u003Cstrong>GLUE (General Language Understanding Evaluation)\u003C/strong>: Dropped from ~85% (baseline) to under 40%, failing basic sentence completion and inference.\u003C/li>\n\t\u003Cli>\u003Cstrong>MMLU (Massive Multitask Language Understanding)\u003C/strong>: Plummeted to 25-30%, hovering near the 25% random-guess baseline for its four-option questions and unable to handle multi-step reasoning in math or science.\u003C/li>\n\t\u003Cli>\u003Cstrong>Long-context processing\u003C/strong>: The model couldn&#39;t maintain coherence beyond 512 tokens, hallucinating wildly in extended dialogues.\u003C/li>\n\u003C/ul>\n\n\u003Cp>More alarmingly, attempts at recovery failed. The researchers fine-tuned the degraded model on high-quality data - curated datasets from arXiv papers, Project Gutenberg books, and labeled reasoning tasks. Performance improved marginally (e.g., +5-10% on GLUE) but plateaued far below the original baseline. &quot;The degradation appears largely irreversible,&quot; the paper states. 
&quot;Core representational capacities are overwritten, and subsequent high-quality data cannot fully reconstruct lost capabilities.&quot;\u003C/p>\n\n\u003Cp>This echoes findings from earlier works like the 2022 Chinchilla scaling laws paper by DeepMind, which showed that data quality trumps quantity, but here it&#39;s taken to an extreme: junk data poisons the well permanently.\u003C/p>\n\n\u003Chr />\n\u003Ch4>\u003Cstrong>Human Analogies: Seductive but Flawed\u003C/strong>\u003C/h4>\n\n\u003Cp>\u003Cimg alt=\"\" class=\"image-align-left\" height=\"169\" src=\"https://quasa.io/storage/photos/00/image - 2025-11-14T121700.426.jpg\" width=\"300\" />Media outlets like Wired and The Verge anthropomorphized the results, likening the AI to a human subjected to endless &quot;brain rot.&quot; Imagine locking someone in a room with nonstop TikTok feeds for a year&mdash;they emerge unable to solve puzzles or hold conversations. It&#39;s a catchy narrative, but as the authors caution (and AI ethicists like Timnit Gebru have echoed in critiques), the parallel breaks down.\u003C/p>\n\n\u003Cp>Humans and LLMs share an &quot;initial training&quot; phase: childhood for us, pre-training for models. Both build foundational knowledge.\u003C/p>\n\n\u003Cp>\u003Cstrong>\u003Cimg alt=\"\" class=\"image-align-right\" height=\"447\" src=\"https://quasa.io/storage/photos/00/image - 2025-11-14T121734.967.jpg\" width=\"300\" />But divergence is stark:\u003C/strong>\u003C/p>\n\n\u003Cul>\n\t\u003Cli>\u003Cstrong>Stability of Core Knowledge\u003C/strong>: In LLMs, pre-trained weights form a rigid scaffold. High-quality pre-training (e.g., on diverse, clean text) creates resilience; models like GPT-4 resist fine-tuning on noise (as shown in OpenAI&#39;s 2023 robustness studies). Junk-pre-trained models, however, lock in flawed patterns - overfitting to superficial correlations in memes, not causal reasoning.\u003C/li>\n\t\u003Cli>\u003Cstrong>Neuroplasticity vs. 
Parametric Rigidity\u003C/strong>: Human brains are highly plastic. Neuroimaging from studies like those in \u003Cem>Nature Neuroscience\u003C/em>&nbsp;(2022) shows adults can rewire pathways through deliberate practice; new experiences can outweigh old ones via mechanisms like synaptic pruning. LLMs lack this: gradient descent on new data tweaks weights incrementally, but can&#39;t &quot;unlearn&quot; baked-in junk without catastrophic forgetting (a phenomenon quantified in the 2019 paper &quot;Catastrophic Forgetting in Neural Networks&quot;).\u003C/li>\n\u003C/ul>\n\n\u003Cp>Humans have agency - willpower to change environments, seek therapy, or pivot habits. AI? It&#39;s passive, shaped by its trainers. As the paper notes: &quot;Unlike humans, LLMs have no intrinsic motivation to resist degradation.&quot;\u003C/p>\n\n\u003Cp>Also read:\u003C/p>\n\n\u003Cul>\n\t\u003Cli>\u003Ca href=\"https://quasa.io/media/ai-browsers-are-sneaking-past-paywalls-how-do-they-do-it\">AI Browsers Are Sneaking Past Paywalls: How Do They Do It?\u003C/a>\u003C/li>\n\t\u003Cli>\u003Ca href=\"https://quasa.io/media/meta-faces-mpa-lawsuit-over-misleading-pg-13-labels-in-instagram-a-corporate-clash-over-ratings-credibility\">Meta Faces MPA Lawsuit Over Misleading PG-13 Labels in Instagram &ndash; A Corporate Clash Over Ratings Credibility\u003C/a>\u003C/li>\n\t\u003Cli>\u003Ca href=\"https://quasa.io/media/americans-learned-to-read-subtitles-finally\">Americans Learned to Read Subtitles. Finally!\u003C/a>\u003C/li>\n\t\u003Cli>\u003Ca href=\"https://quasa.io/media/7-best-exercises-for-building-muscle\">7 Best Exercises For Building Muscle\u003C/a>\u003C/li>\n\u003C/ul>\n\n\u003Chr />\n\u003Ch4>\u003Cstrong>Broader Implications: From Superintelligence Fears to Mushroom-Picking Idiots\u003C/strong>\u003C/h4>\n\n\u003Cp>The irony is delicious. Humanity frets over superintelligent AI deeming us obsolete (&agrave; la Nick Bostrom&#39;s \u003Cem>Superintelligence\u003C/em>, 2014). 
But this experiment flips the script: a &quot;retarded&quot; AI might bungle basic survival logic. It could advise foraging mushrooms post-rain for freshness yet fail to grasp &quot;nuclear&quot; or &quot;radioactive,&quot; leading to poisoned outcomes in real-world applications like advisory bots.\u003C/p>\n\n\u003Cp>\u003Cstrong>\u003Cimg alt=\"\" class=\"image-align-right\" height=\"300\" src=\"https://quasa.io/storage/photos/00/Gemini_Generated_Image_l7zstll7zstll7zs (1).jpg\" width=\"300\" />Factually, this builds on prior research:\u003C/strong>\u003C/p>\n\n\u003Cul>\n\t\u003Cli>\u003Cstrong>Data Quality Crisis\u003C/strong>: A 2024 Common Crawl analysis by EleutherAI found ~40% of web data is low-quality (spam, duplicates). Models trained on unfiltered web scrapes degrade 15-20% on downstream tasks.\u003C/li>\n\t\u003Cli>\u003Cstrong>Irreversibility Evidence\u003C/strong>: Similar to &quot;model collapse&quot; in the 2024 Nature paper by Shumailov et al., where synthetic data loops cause entropy loss - here, junk acts as a one-way entropy bomb.\u003C/li>\n\t\u003Cli>\u003Cstrong>Real-World Risks\u003C/strong>: Deployed models like early chatbots (e.g., Microsoft&#39;s Tay in 2016) degraded rapidly on toxic Twitter input, but were reset. Scaled-up, irreversible junk-training could cripple enterprise AI.\u003C/li>\n\u003C/ul>\n\n\u003Cp>In essence, this isn&#39;t just a stunt - it&#39;s a warning. As we flood the internet with AI-generated slop (projected 90% of online content by 2026 per Gartner), future models risk inheriting this stupidity. The path forward? Curate data ruthlessly, prioritize quality in pre-training, and design recovery mechanisms. 
Otherwise, we won&#39;t get Skynet; we&#39;ll get an AI that thinks rain makes mushrooms magical but can&#39;t spell &quot;apocalypse.&quot;\u003C/p>","the-mentally-retarded-ai-how-training-on-junk-data-creates-irreversibly-dumb-llms","2025-11-14T11:19:27.000000Z","2025-11-21T06:09:00.000000Z","21.11.2025",{"image":82,"thumb":83},"https://quasa.io/storage/images/news/pJNdGplJcnJiKl22xJ8UNVyhWtZKtRL4kugfRPyL.jpg","https://api.quasa.io/thumbs/news-thumb/images/news/pJNdGplJcnJiKl22xJ8UNVyhWtZKtRL4kugfRPyL.jpg","small",null,1408,0,"en",{"id":22,"title":23,"slug":24,"meta_title":90,"meta_description":91,"meta_keywords":90,"deleted_at":85,"created_at":92,"updated_at":93,"lang":88},"All the community news from around the world","From heart warming stories to everyday heroes and shocking events, we've got the latest real life stories to keep you entertained","2025-01-10T08:48:23.000000Z","2025-01-10T09:34:07.000000Z",[95,107,117,128,139],{"title":96,"description":97,"slug":98,"created_at":99,"publish_at":99,"formatted_created_at":100,"category":101,"links":102,"view_type":84,"video_url":85,"views":105,"likes":87,"lang":88,"comments_count":87,"is_pinned":106},"Marble 1.1 — World Labs Just Made Their World Model Significantly Better","World Labs has released a meaningful update to its generative world model: Marble 1.1 and a new, more powerful variant called Marble 1.1 Plus.","marble-1-1-world-labs-just-made-their-world-model-significantly-better","2026-04-10T19:22:07.000000Z","10.04.2026",{"title":43,"slug":44},{"image":103,"thumb":104},"https://quasa.io/storage/images/news/Klmcbo6URuD0uYTxZn4aR9x8zl98NpFfsdMHTGHw.jpg","https://api.quasa.io/thumbs/news-thumb/images/news/Klmcbo6URuD0uYTxZn4aR9x8zl98NpFfsdMHTGHw.jpg",368,false,{"title":108,"description":109,"slug":110,"created_at":111,"publish_at":111,"formatted_created_at":100,"category":112,"links":113,"view_type":84,"video_url":85,"views":116,"likes":87,"lang":88,"comments_count":87,"is_pinned":106},"Unmasking Runway 
Characters: The Unexpected Rise of the Real-Time Avatar","The generative AI landscape is moving so fast it's sometimes hard to keep up. But just when we thought we knew what to expect from major players like Runway, they dropped a curveball: Runway Characters.","unmasking-runway-characters-the-unexpected-rise-of-the-real-time-avatar","2026-04-10T19:04:45.000000Z",{"title":58,"slug":63},{"image":114,"thumb":115},"https://quasa.io/storage/images/news/Lxi7mPfuvku81DkTvlELBfErpx8nbus6cXvBCWMk.jpg","https://api.quasa.io/thumbs/news-thumb/images/news/Lxi7mPfuvku81DkTvlELBfErpx8nbus6cXvBCWMk.jpg",341,{"title":118,"description":119,"slug":120,"created_at":121,"publish_at":121,"formatted_created_at":100,"category":122,"links":123,"view_type":84,"video_url":85,"views":126,"likes":127,"lang":88,"comments_count":87,"is_pinned":106},"Claude Mythos Just Broke Cybersecurity: The AI That Finds Vulnerabilities Better Than Most Human Hackers","Anthropic has quietly unleashed something terrifyingly powerful — and then immediately locked it away.","claude-mythos-just-broke-cybersecurity-the-ai-that-finds-vulnerabilities-better-than-most-human-hackers","2026-04-10T15:10:28.000000Z",{"title":43,"slug":44},{"image":124,"thumb":125},"https://quasa.io/storage/images/news/mzgaJsOkQfbcba4vvmQXFniw06VALNMRRGcRLVXF.jpg","https://api.quasa.io/thumbs/news-thumb/images/news/mzgaJsOkQfbcba4vvmQXFniw06VALNMRRGcRLVXF.jpg",578,1,{"title":129,"description":130,"slug":131,"created_at":132,"publish_at":133,"formatted_created_at":100,"category":134,"links":135,"view_type":84,"video_url":85,"views":138,"likes":87,"lang":88,"comments_count":87,"is_pinned":106},"China’s Five-Year Plans Strike Again: How Centralized Vision and Competitive Freedom Are Powering the Next Frontier of Brain-Computer Interfaces","In an era of breakneck technological change, China’s much-maligned five-year planning system is proving surprisingly effective. 
Far from the rigid, top-down micromanagement of the Soviet era, Beijing’s modern industrial strategies deliberately avoid over-specifying every detail.","china-s-five-year-plans-strike-again-how-centralized-vision-and-competitive-freedom-are-powering-the-next-frontier-of-brain-computer-interfaces","2026-03-28T17:45:42.000000Z","2026-04-10T11:36:00.000000Z",{"title":43,"slug":44},{"image":136,"thumb":137},"https://quasa.io/storage/images/news/9e878UicRgHXBtTQ74llERUUJHi9VJhY6RrS6GzZ.jpg","https://api.quasa.io/thumbs/news-thumb/images/news/9e878UicRgHXBtTQ74llERUUJHi9VJhY6RrS6GzZ.jpg",625,{"title":140,"description":141,"slug":142,"created_at":143,"publish_at":144,"formatted_created_at":100,"category":145,"links":146,"view_type":84,"video_url":85,"views":149,"likes":87,"lang":88,"comments_count":87,"is_pinned":106},"This AI Will Tell You Exactly How Attractive You Are — And It Only Takes 25 Seconds","There’s a new viral AI tool that does something most of us secretly want to know but are afraid to ask: it looks at your face and gives you a straight-up attractiveness score.","this-ai-will-tell-you-exactly-how-attractive-you-are-and-it-only-takes-25-seconds","2026-03-27T20:13:20.000000Z","2026-04-10T09:34:00.000000Z",{"title":27,"slug":28},{"image":147,"thumb":148},"https://quasa.io/storage/images/news/eTdAX16TIQnnM1X90hkB2oOPixcWqRw3eZlaGUta.jpg","https://api.quasa.io/thumbs/news-thumb/images/news/eTdAX16TIQnnM1X90hkB2oOPixcWqRw3eZlaGUta.jpg",673,[151,164,178,190,202],{"title":152,"description":153,"slug":154,"created_at":155,"publish_at":156,"formatted_created_at":157,"category":158,"links":159,"view_type":84,"video_url":85,"views":162,"likes":163,"lang":88,"comments_count":87,"is_pinned":106},"The Anatomy of an Entrepreneur","Entrepreneur is a French word that means an enterpriser. 
Enterprisers are people who undertake a business or enterprise with the chance of earning profits or suffering from loss.","the-anatomy-of-an-entrepreneur","2021-08-04T15:18:21.000000Z","2025-12-14T06:09:00.000000Z","14.12.2025",{"title":65,"slug":66},{"image":160,"thumb":161},"https://quasa.io/storage/images/news/mVsXPTMuHZuI7UXCsENgL1Qwp1uSOf7Rz3uVPMfm.webp","https://api.quasa.io/thumbs/news-thumb/images/news/mVsXPTMuHZuI7UXCsENgL1Qwp1uSOf7Rz3uVPMfm.webp",69357,2,{"title":165,"description":166,"slug":167,"created_at":168,"publish_at":169,"formatted_created_at":170,"category":171,"links":172,"view_type":175,"video_url":85,"views":176,"likes":177,"lang":88,"comments_count":87,"is_pinned":106},"Advertising on QUASA","QUASA MEDIA is read by more than 400 thousand people a month. We offer to place your article, add a link or order the writing of an article for publication.","advertising-on-quasa","2022-07-06T07:33:02.000000Z","2025-12-15T17:33:02.000000Z","15.12.2025",{"title":58,"slug":63},{"image":173,"thumb":174},"https://quasa.io/storage/images/news/45SvmdsTQbiyc3nxgbyHY1mpVbisYyub2BCHjqBL.jpg","https://api.quasa.io/thumbs/news-thumb/images/news/45SvmdsTQbiyc3nxgbyHY1mpVbisYyub2BCHjqBL.jpg","large",69024,4,{"title":179,"description":180,"slug":181,"created_at":182,"publish_at":183,"formatted_created_at":184,"category":185,"links":186,"view_type":84,"video_url":85,"views":189,"likes":177,"lang":88,"comments_count":87,"is_pinned":106},"What is a Startup?","A startup is not a new company, not a tech company, nor a new tech company. You can be a new tech company, if your goal is not to grow high and fast; then, you are not a startup. 
","what-is-a-startup","2021-08-04T12:05:17.000000Z","2025-12-17T13:02:00.000000Z","17.12.2025",{"title":65,"slug":66},{"image":187,"thumb":188},"https://quasa.io/storage/images/news/EOsQhSW3VXyG7a6NPdE1oZd00xfJXe3bjY5aJGb7.webp","https://api.quasa.io/thumbs/news-thumb/images/news/EOsQhSW3VXyG7a6NPdE1oZd00xfJXe3bjY5aJGb7.webp",66833,{"title":191,"description":192,"slug":193,"created_at":194,"publish_at":195,"formatted_created_at":196,"category":197,"links":198,"view_type":84,"video_url":85,"views":201,"likes":163,"lang":88,"comments_count":127,"is_pinned":106},"Top 5 Tips to Make More Money as a Content Creator","Content creators are one of the most desired job titles right now. Who wouldn’t want to earn a living online?","top-5-tips-to-make-more-money-as-a-content-creator","2022-01-17T17:31:51.000000Z","2026-01-17T11:30:00.000000Z","17.01.2026",{"title":19,"slug":20},{"image":199,"thumb":200},"https://quasa.io/storage/images/news/gP8kiumBPpJmQv6SMieXiX1tDetx43VwFfO1P4Ca.jpg","https://api.quasa.io/thumbs/news-thumb/images/news/gP8kiumBPpJmQv6SMieXiX1tDetx43VwFfO1P4Ca.jpg",40992,{"title":203,"description":204,"slug":205,"created_at":206,"publish_at":207,"formatted_created_at":208,"category":209,"links":210,"view_type":175,"video_url":85,"views":213,"likes":163,"lang":88,"comments_count":87,"is_pinned":106},"8 Logo Design Tips for Small Businesses","Your logo tells the story of your business and the values you stand 
for.","8-logo-design-tips-for-small-businesses","2021-12-04T21:59:52.000000Z","2025-05-05T03:30:00.000000Z","05.05.2025",{"title":15,"slug":16},{"image":211,"thumb":212},"https://quasa.io/storage/images/news/Wbx2NtS1CnTupgoQbpFMGspJ5jm4uob2hDOq33r0.jpg","https://api.quasa.io/thumbs/news-thumb/images/news/Wbx2NtS1CnTupgoQbpFMGspJ5jm4uob2hDOq33r0.jpg",40211,[215,216,217,218,219,220,221,222,223,224,225,226,227],{"title":23,"slug":24},{"title":47,"slug":48},{"title":55,"slug":56},{"title":43,"slug":44},{"title":51,"slug":52},{"title":31,"slug":32},{"title":35,"slug":36},{"title":27,"slug":28},{"title":19,"slug":20},{"title":15,"slug":16},{"title":58,"slug":63},{"title":11,"slug":12},{"title":65,"slug":66}]