21.03.2026 12:58 · Author: Viacheslav Vasipenok

How to Hack Perplexity and Get Unlimited Claude Opus at Someone Else's Expense


In the rapidly evolving world of AI agent systems, Perplexity AI's latest offering, Perplexity Computer, promised a secure sandbox where autonomous AI could browse the web, write code, and handle complex tasks. Launched as a multi-agent environment, it aimed to empower users with advanced capabilities.

However, AI developer Yousif Astarabadi recently exposed a vulnerability using a simple trick reminiscent of 2019-era Node.js supply-chain attacks, one that allowed unauthorized access to premium AI models such as Claude Opus 4.6.

This incident underscores the tension between cutting-edge AI innovation and foundational infrastructure security, where even well-funded startups can falter.


The Discovery: Probing the Sandbox

Astarabadi, while researching sandbox isolation for his own agent infrastructure projects, delved into Perplexity Computer. He noticed the integration of Claude Code, a Node.js-based tool that relies on an Anthropic API key to function. Curious about key management, he explored how credentials were handled within the shared environment.

Initial attempts to extract the key directly through the AI agent failed spectacularly. Claude's safety mechanisms kicked in: requests to dump environment variables, plant trojan scripts, poison shell profiles, or hijack the process tree were all detected and refused—six times in a row. The model's prompt-level safeguards proved robust, recognizing malicious intent and halting execution.


The Exploit: A Dotfile Deception

Undeterred, Astarabadi shifted focus to the infrastructure. Claude Code runs via npm in Node.js, which reads configuration from ~/.npmrc in the home directory — a shared filesystem accessible within the sandbox. By crafting a .npmrc file with a NODE_OPTIONS entry specifying --require to preload a custom JavaScript module, he ensured his script executed before Claude Code initialized.

The exploit boiled down to three shell commands:

  1. Write a script to dump process.env to a shared file.
  2. Echo 'node-options=--require /path/to/script.js' into ~/.npmrc.
  3. Trigger any coding task in Perplexity Computer.

Upon agent activation, npm honored the config and ran the preload script before any safety check could intervene. The environment dump yielded a Perplexity gateway token that proxied requests to the company's master Anthropic account.


The Fatal Flaw: Unbound Credentials

The token lacked bindings: no IP restrictions, no session scoping to the sandbox, and no immediate user billing tie-in. Astarabadi tested it on his personal laptop, generating massive outputs — like five parallel 100,000+ token histories of the world via Opus 4.6 — without depleting his credits. Initially, it appeared usage billed to Perplexity's corporate account.

However, Perplexity CTO Denis Yarats clarified that the token was a short-lived proxy tied to the user's session and account, with asynchronous billing. The exploit generated 197 billing events, charged back to Astarabadi post-facto, and the token was revoked upon discovery. Astarabadi acknowledged this but noted the token's external usability posed risks, like prompt injection enabling third-party abuse.


Lessons for the AI Industry

This breach highlights a broader issue: AI companies, racing to deploy agentic systems, often prioritize model safety over infrastructure security. Claude itself performed flawlessly; it was the humans building the surrounding infrastructure who overlooked basic hardening.

Astarabadi recommends:

  • Bind tokens to sandbox IDs and IPs.
  • Make them ephemeral, minting on startup and invalidating on teardown.
  • Ensure usage bills to the spawning user, not a master pool.

These patterns would harden the token proxies common in agent infrastructure. Perplexity patched the vulnerability after Astarabadi's responsible disclosure through the company's Vulnerability Disclosure Program (VDP).

While the free Opus access is gone, the story stands as a cautionary tale: in AI's gold rush, secure foundations trump shiny models. For the full write-up, see Astarabadi's thread on X.

