In a groundbreaking discovery, cybersecurity experts at ESET have identified the first known piece of malware that leverages a large language model (LLM).
Dubbed *PromptLock*, this sophisticated malware covertly downloads the open-source GPT-OSS-20B model, a hefty 14-gigabyte download, and runs it via the Ollama API. Once active, the malware deploys an AI agent that autonomously navigates local files, making real-time decisions based on hardcoded prompts. While likely a prototype rather than a fully operational threat, *PromptLock* signals a new era of AI-driven malware that could evolve rapidly as local LLMs become smaller and more powerful.
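For context, driving a locally hosted model through the Ollama API takes only a simple HTTP request, which is part of what makes this design so accessible. The minimal Python sketch below shows the general pattern; the model name and prompt are illustrative, not PromptLock's actual hardcoded prompts.

```python
import json
import urllib.request

# Ollama's default local endpoint for text generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload Ollama's /api/generate endpoint expects."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send a prompt to a local Ollama server and return the completion text."""
    payload = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires a running Ollama instance with the model pulled):
#   ollama pull gpt-oss:20b
#   print(generate("gpt-oss:20b", "Write Lua code that lists files in a directory."))
```

Anything that can issue a local HTTP POST can orchestrate the model, which is why no specialized AI tooling is needed inside the malware itself.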
How PromptLock Operates
*PromptLock* is designed to exploit the capabilities of the GPT-OSS-20B model through a series of malicious prompts embedded in its code.
These prompts enable the malware to perform the following tasks:
1. **File System Traversal**: The malware instructs the LLM to generate Lua code that recursively scans directories and prints file contents. This allows *PromptLock* to map and access a victim's file system systematically.
2. **Sensitive Data Detection**: Using the LLM’s natural language processing capabilities, *PromptLock* analyzes files to identify sensitive information, such as personal data, financial records, or proprietary documents. The AI’s ability to understand context makes it particularly adept at pinpointing valuable data.
3. **Personalized Extortion Messages**: The malware generates tailored messages for victims, detailing how their data will be handled—whether deleted, encrypted, or publicly exposed. These messages include a Bitcoin wallet address, ostensibly for ransom payments. Intriguingly, the address is linked to Satoshi Nakamoto, the pseudonymous creator of Bitcoin, suggesting it may be a placeholder or a symbolic gesture by the malware's creator.
4. **Dynamic Encryption Code**: Rather than using static encryption routines, *PromptLock* leverages the LLM to generate file-encryption code on the fly. ESET researchers speculate this approach may be designed to evade antivirus detection, as dynamically generated code is harder to flag than hardcoded malicious scripts.
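To make steps 1 and 2 concrete, here is a rough analogue of the kind of script the model is prompted to produce. ESET reports the actual generated code is Lua; this sketch is in Python, and the keyword list is an illustrative assumption, not PromptLock's real heuristic (the actual malware delegates the "is this sensitive?" judgment to the LLM itself).

```python
import os

# Illustrative keywords only; the real malware relies on the LLM's
# context-aware analysis rather than a fixed list.
SENSITIVE_KEYWORDS = ("password", "iban", "ssn", "api_key")

def walk_files(root: str):
    """Recursively yield every file path under root (step 1: traversal)."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            yield os.path.join(dirpath, name)

def looks_sensitive(path: str) -> bool:
    """Crude stand-in for the LLM's semantic analysis (step 2)."""
    try:
        with open(path, "r", errors="ignore") as f:
            text = f.read(64_000).lower()  # sample only the head of the file
    except OSError:
        return False
    return any(kw in text for kw in SENSITIVE_KEYWORDS)

def find_targets(root: str) -> list[str]:
    """Map the file system and flag candidate files for extortion or encryption."""
    return [p for p in walk_files(root) if looks_sensitive(p)]
```

The interesting shift is that where this sketch hardcodes a keyword list, the LLM can judge sensitivity from context, so the malware's targeting logic never appears as a fixed signature in its binary.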
A Prototype with Alarming Potential
ESET’s analysis suggests *PromptLock* is not yet a fully deployed threat but rather a proof-of-concept or early-stage development. Its reliance on a 14GB model and the Ollama API limits its practicality for widespread attacks, as the download size and resource demands are significant.
However, the implications are chilling. As local LLMs become smaller, smarter, and more efficient (potentially within two to three generations), malware like *PromptLock* could become far more accessible and dangerous.
The use of an LLM as the core of a malicious program introduces unprecedented autonomy and adaptability.
Unlike traditional malware, which follows rigid scripts, *PromptLock* can make context-aware decisions, potentially evading detection and tailoring its attacks to specific victims.
The hardcoded Bitcoin address tied to Satoshi Nakamoto adds an element of intrigue, possibly indicating the creator’s intent to test or showcase the virus rather than deploy it for profit.
The Future of AI-Driven Threats
*PromptLock* underscores the double-edged nature of AI advancements. As open-source LLMs become more powerful and compact, they could be weaponized in ways previously unimaginable. Cybersecurity experts warn that future iterations of such malware could operate with smaller models, requiring fewer resources and enabling stealthier attacks. This discovery serves as a wake-up call for the industry to develop new detection and mitigation strategies tailored to AI-driven threats.
For now, ESET has not observed *PromptLock* in widespread use, but its existence highlights the need for vigilance. As AI technology evolves, so too will the tactics of cybercriminals. With just a few more advancements in local LLMs, the line between prototype and real-world threat could blur, making *PromptLock* a harbinger of challenges to come.

