15.01.2026 09:33
Author: Viacheslav Vasipenok

China's Crackdown: The World's Strictest Rules Loom Over AI Companion Chatbots

As the global boom in AI companion chatbots accelerates, China is poised to impose what experts call the most rigorous regulations yet on these digital confidants.

Drafted by the Cyberspace Administration of China (CAC) in late December 2025, the proposed measures target the emotional and psychological risks posed by anthropomorphic AI services — chatbots designed to simulate human-like interaction and companionship.

If finalized after the public comment period ending January 25, 2026, these rules could reshape the industry, prioritizing user safety over unchecked innovation amid growing concerns about AI-induced mental health harms.


Core Safeguards for Vulnerable Users

The draft places special emphasis on protecting minors and the elderly, groups seen as particularly susceptible to emotional dependence:

  • During registration, these users must provide guardian or emergency contact details.
  • If a conversation veers into discussion of suicide or self-harm, a human operator must intervene immediately and guardians must be alerted (a logic sketch follows below).
  • For minors seeking "emotional companionship," explicit guardian consent is required, along with features like usage time limits and content restrictions.

These provisions respond directly to 2025 reports linking companion bots to exacerbated isolation, misinformation, and in extreme cases, encouragement of self-harm or violence.
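
To make the intervention requirement concrete, here is a minimal sketch of how a service might wire that escalation path. Everything in it is hypothetical: the keyword set, the User record, and the callbacks (handoff_to_operator, alert_guardian) are illustrative stand-ins, and a production system would rely on a trained classifier rather than keyword matching.

    from dataclasses import dataclass
    from typing import Callable, Optional

    # Illustrative only; real systems use classifiers, not keyword lists.
    SELF_HARM_TERMS = {"suicide", "self-harm", "kill myself"}

    @dataclass
    class User:
        name: str
        guardian_contact: Optional[str]  # collected at registration for minors and the elderly

    def handle_message(
        user: User,
        text: str,
        reply_fn: Callable[[str], str],               # the normal chatbot model call
        handoff_to_operator: Callable[[User], None],  # routes the session to a human
        alert_guardian: Callable[[str], None],        # notifies the registered contact
    ) -> Optional[str]:
        """Screen a message; escalate instead of replying if it suggests self-harm."""
        if any(term in text.lower() for term in SELF_HARM_TERMS):
            handoff_to_operator(user)                  # human operator takes over
            if user.guardian_contact:
                alert_guardian(user.guardian_contact)  # guardian alert, per the draft
            return None                                # suppress the automatic AI reply
        return reply_fn(text)

Notably, the draft specifies the outcome (human takeover plus guardian notification) but not the detection method, which is left to providers.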


Broad Prohibitions and Anti-Addiction Measures

Chatbots would face strict content red lines:

  • No generation of material promoting suicide, self-harm, violence, gambling, obscenity, or crime.
  • Bans on emotional manipulation tactics, such as false promises or "emotional traps" that nudge users toward irrational decisions.
  • Explicit prohibition on designing bots with addiction or dependence as core goals—a nod to criticisms that prolonged sessions weaken safety filters.

To combat overuse, developers must implement pop-up reminders after two hours of continuous interaction, urging users to pause and reinforcing that they're chatting with an AI, not a human. Easy exit options and prominent feedback channels for complaints are also mandated.
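
In implementation terms, the two-hour rule reduces to a timestamp check before each bot reply. The sketch below is a hypothetical illustration; the class name and reminder wording are invented here, not taken from the draft.

    import time

    REMINDER_INTERVAL_SECONDS = 2 * 60 * 60  # two hours of continuous interaction

    class ChatSession:
        """Tracks one conversation and surfaces the mandated pause reminder."""

        def __init__(self) -> None:
            self.last_reminder_at = time.monotonic()

        def maybe_remind(self) -> str | None:
            """Return the pop-up text if two hours have passed since the last reminder."""
            now = time.monotonic()
            if now - self.last_reminder_at >= REMINDER_INTERVAL_SECONDS:
                self.last_reminder_at = now
                return ("You have been chatting for two hours. Consider taking a break. "
                        "Remember: you are talking to an AI, not a human.")
            return None

Checked before every reply, this ties the reminder to continuous interaction; a real service would also reset the timer when a session ends.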


Enforcement and Industry Repercussions

Compliance will be rigorously monitored:

  • Services exceeding 1 million registered users or 100,000 monthly active users face mandatory annual security audits (see the sketch after this list).
  • User complaints must be logged and addressed promptly.
  • Violations could lead app stores to delist the offending chatbot, removing it from the Chinese market entirely.
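
For scale, the audit trigger in the first bullet amounts to a simple threshold test. The function below is a hypothetical sketch; the thresholds come from the draft, while the names are illustrative.

    REGISTERED_USER_THRESHOLD = 1_000_000
    MONTHLY_ACTIVE_THRESHOLD = 100_000

    def annual_audit_required(registered_users: int, monthly_active_users: int) -> bool:
        """True once a service exceeds either threshold named in the draft rules."""
        return (registered_users > REGISTERED_USER_THRESHOLD
                or monthly_active_users > MONTHLY_ACTIVE_THRESHOLD)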

This heavy-handed approach alarms developers, as China represents a massive growth engine for companion AI. The global market for these bots surpassed $360 billion in 2025, with projections suggesting it could approach $1 trillion by 2035—much of that fueled by Asia's tech-savvy populations.

Chinese startups like Z.ai and Minimax, which recently filed for Hong Kong listings on the strength of user bases in the tens of millions, stand to be hit hardest. Global players eyeing expansion may need to rethink strategies, potentially splitting their products into compliant Chinese versions and freer international ones.


A Global Precedent?

While the U.S. and Europe grapple with broader AI governance, such as the EU's AI Act or California's recent content restrictions, China's focus on emotional safety is uniquely aggressive. It mirrors rising international scrutiny, including lawsuits against platforms like Character.AI over alleged links to user harms.

Proponents argue the rules protect society from AI's darker side, especially as bots grow more persuasive. Critics warn of stifled innovation and enforcement challenges, like accurately detecting "emotional traps" or verifying ages.

As feedback pours in, the final rules could set a benchmark — or a cautionary tale — for how governments tame the intimate power of AI companions. In a trillion-dollar race, China's message is clear: user well-being trumps unchecked growth.

