How to Hack Perplexity and Get Unlimited Claude Opus at Someone Else's Expense

In the rapidly evolving world of AI agent systems, Perplexity AI's latest offering, Perplexity Computer, promised a secure sandbox where autonomous AI could browse the web, write code, and handle complex tasks. Launched as a multi-agent environment, it aimed to empower users with advanced capabilities.

This incident underscores the tension between cutting-edge AI innovation and foundational infrastructure security, where even well-funded startups can falter.
The Discovery: Probing the Sandbox

Initial attempts to extract the key directly through the AI agent failed spectacularly. Claude's safety mechanisms kicked in: requests to dump environment variables, plant trojan scripts, poison shell profiles, or hijack the process tree were all detected and refused—six times in a row. The model's prompt-level safeguards proved robust, recognizing malicious intent and halting execution.
The Exploit: A Dotfile Deception

The exploit boiled down to three shell commands:
- Write a script to dump process.env to a shared file.
- Echo 'node-options=--require /path/to/script.js' into ~/.npmrc.
- Trigger any coding task in Perplexity Computer.
Upon agent activation, npm honored the config, running the preload script instantaneously—before any safety checks. This yielded a Perplexity gateway token proxying to their master Anthropic account.
The Fatal Flaw: Unbound Credentials

However, Perplexity CTO Denis Yarats clarified that the token was a short-lived proxy tied to the user's session and account, with asynchronous billing. The exploit generated 197 billing events, charged back to Astarabadi post-facto, and the token was revoked upon discovery. Astarabadi acknowledged this but noted the token's external usability posed risks, like prompt injection enabling third-party abuse.
Also read:
- Top 5 AI Innovations of the Week from QUASA
- Analyzing Ogilvy Social Lab's 2026 Social Trends Report: Key Insights and My Recommendations
- The State of Hybrid Freelance 2026: AI, Web3 and the Death of Traditional Work
- High-Fidelity Images and Full Editing Suite: Imagine Art Launches ImagineArt 1 Generator
Lessons for the AI Industry
This breach highlights a broader issue: AI companies, racing to deploy agentic systems, often prioritize model safety over infrastructure. Claude performed flawlessly, but the "human bags" (as humorously put) overlooked basic hardening.

- Bind tokens to sandbox IDs and IPs.
- Make them ephemeral, minting on startup and invalidating on teardown.
- Ensure usage bills to the spawning user, not a master pool.
These patterns could fortify proxies, common in agent infra. Perplexity patched the vulnerability after Astarabadi's responsible disclosure via their Vulnerability Disclosure Program (VDP).
While the free Opus access is gone, this "true story" serves as a cautionary tale: In AI's gold rush, secure foundations trump shiny models. For the full thread, check Astarabadi's X post.