Criminal Charges for AI-Generated Music? The First U.S. Case Sends a Clear Warning

If you've ever wondered whether you could quietly game streaming platforms with AI-generated tracks and bot farms, think again. In what appears to be the first criminal conviction of its kind in the United States, a North Carolina man has pleaded guilty to a multimillion-dollar streaming fraud scheme built on AI-generated music and automated bot streams.

According to the U.S. Attorney's Office and court documents, Michael Smith's operation ran from 2017 through 2024. He used AI tools to generate hundreds of thousands of songs — low-effort, often nonsensical tracks with titles designed to avoid detection. He then uploaded them to major platforms including Spotify, Apple Music, Amazon Music, and YouTube Music.
To turn those uploads into real money, Smith deployed thousands of automated bot accounts that streamed the songs billions of times, mimicking legitimate listener behavior. The goal: trigger royalty payments that would otherwise go to real songwriters and rights holders.
“Michael Smith generated thousands of fake songs using artificial intelligence and then streamed those fake songs billions of times,” said U.S. Attorney Jay Clayton in a statement. “Although the songs and listeners were fake, the millions of dollars Smith stole was real. Millions of dollars in royalties that Smith diverted from real, deserving artists and rights holders. Smith’s brazen scheme is over, as he stands convicted of a federal crime for his AI-assisted fraud.”
The scheme was eventually uncovered thanks in large part to anomaly detection by the Mechanical Licensing Collective (The MLC), the organization responsible for collecting and distributing mechanical royalties from streaming in the United States. The MLC flagged suspicious patterns, challenged the claims, and worked with law enforcement to prevent further diversion of funds. In a statement after the plea, The MLC emphasized its ongoing investment in fraud prevention and collaboration with platforms and authorities.

This marks a landmark moment: the first time streaming fraud involving AI-generated content has resulted in a federal criminal conviction in the U.S. Previous cases of fake streaming (bot-driven inflation of plays) have typically ended in civil disputes, platform bans, or royalty clawbacks — not jail time.
The implications are significant. Streaming services already battle widespread fraud — estimates suggest fraudulent streams may account for 10% or more of total activity on some platforms. But the combination of cheap, scalable AI music generation and bot automation makes the problem dramatically worse. What was once a labor-intensive grift (uploading real songs and faking plays) can now be industrialized at almost zero creative cost.
For independent artists and songwriters, the case is a double-edged sword: it protects legitimate royalty pools, but it also highlights how easily AI can flood catalogs and dilute real discovery. Platforms are under increasing pressure to improve detection of both synthetic content and artificial listen patterns.
As Smith awaits sentencing this summer, the message from federal prosecutors is unmistakable: treating streaming royalties as a personal ATM — especially with AI as the printing press — is no longer just a terms-of-service violation. In the eyes of the law, it's wire fraud, and it can land you in prison.
The era of “AI slop” royalties may have just gotten a lot more expensive.
Also read:
- Trevor Milton Is Back — Pardoned Nikola Founder Now Chasing $1 Billion for AI-Powered Jets
- Anthropic’s Massive AI Survey (80,508 People, 159 Countries) Reveals What We Really Want — and Fear — from AI
- Agentic Marketing Revolution: Okara’s AI CMO Agent Hits 10 Million Views and Takes Down Its Own Infrastructure