PromptLock: AI Ransomware with an OpenAI Brain
A New Era of Malware
Cybersecurity experts are sounding the alarm after discovering a new type of ransomware that uses artificial intelligence to build its own attack tools. Researchers at the security firm ESET have dubbed the threat PromptLock, and it represents a scary new chapter in the fight against cybercrime. Unlike traditional malware that ships with pre-written malicious code, PromptLock essentially has a brain of its own: it crafts unique attacks in real time, which makes it far harder to track and stop.
This new threat is written in the Go programming language and relies on a powerful, open-source AI model from OpenAI (gpt-oss:20b), which it reaches through a tool called Ollama, effectively turning the ransomware into a factory for malicious code. The discovery has confirmed a long-held fear in the security community: that AI would soon be used not just to assist hackers, but to automate and customize their attacks on a massive scale. For now, ESET believes PromptLock is a “proof-of-concept,” meaning it’s more a demonstration of what’s possible than a weapon being actively used in widespread attacks. However, the technology is real, and the implications are serious.
How PromptLock Uses AI to Attack
Instead of having its malicious instructions pre-written, PromptLock contains a set of commands, or “prompts,” which it feeds to its onboard AI model. The AI then writes unique Lua scripts on the spot. These custom-made scripts are designed to carry out the classic ransomware playbook: scan a victim’s computer, identify important files, steal copies of them, and then encrypt them with the SPECK 128-bit encryption algorithm.
What’s particularly clever is that the Lua scripts it generates are cross-platform. This means the same PromptLock attack can work just as well on Windows PCs, Apple Macs, and Linux servers. The AI can even tailor the ransom note it leaves behind. Based on its analysis, it can determine if it has infected a personal laptop, a critical company server, or even a controller for a power distribution system, and then generate a specific message designed to maximize the chances of getting paid. To avoid having to package a huge multi-gigabyte AI model with the malware, an attacker can simply connect the infected machine to a remote server running the AI, making the initial malware file much smaller and harder to detect.
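To make the mechanism concrete, here is a minimal, hypothetical sketch of the pattern involved: a Go program sending a prompt to a locally running Ollama endpoint and printing whatever text the model returns. The endpoint address, model name, and harmless prompt below are assumptions for illustration only; they are not taken from PromptLock’s actual code.

```go
// Minimal sketch of querying a local Ollama endpoint from Go.
// Endpoint, model name, and prompt are illustrative assumptions.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// generateRequest mirrors the fields Ollama's /api/generate endpoint expects.
type generateRequest struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
	Stream bool   `json:"stream"`
}

// generateResponse captures only the field we read from the reply.
type generateResponse struct {
	Response string `json:"response"`
}

func main() {
	// Ask a locally hosted model (here gpt-oss:20b via Ollama) to produce text.
	reqBody, _ := json.Marshal(generateRequest{
		Model:  "gpt-oss:20b",
		Prompt: "Write a short Lua script that prints the current date.",
		Stream: false,
	})

	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(reqBody))
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	// Decode the JSON reply and print the generated text.
	var out generateResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Println(out.Response)
}
```

The point is not the specific request but the architecture it illustrates: the attacker’s intent lives in the prompts, while the model, reachable over a plain HTTP API, does the code generation on demand.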
A Nightmare for Defenders
What makes PromptLock particularly dangerous is its chameleon-like nature. Traditional antivirus and security systems look for specific fingerprints—known as Indicators of Compromise (IoCs)—to spot malware. But because PromptLock’s AI generates a fresh set of attack scripts for every single victim, its fingerprint is constantly changing. This makes it incredibly difficult for defenders to create a reliable signature to detect and block it. If this technique is perfected, it could make threat identification a nightmare and put security teams at a major disadvantage.
The initial samples of PromptLock were uploaded to the security analysis platform VirusTotal from the United States on August 25, 2025, but it remains unclear who is behind this creation. While the current version seems focused on data encryption and theft, ESET’s analysis shows that the framework could easily be used to simply destroy data, although that feature has not yet been fully implemented.
The Bigger Picture: AI’s Own Weaknesses
The discovery of PromptLock isn’t happening in a vacuum. It’s part of a disturbing trend where criminals are using publicly available AI tools as their new weapon of choice. Just this week, the AI company Anthropic announced it had to ban several hacking groups that were using its Claude chatbot to plan large-scale data theft operations and build advanced ransomware. This shows that AI is lowering the barrier to entry, allowing even people with limited coding skills to create sophisticated malware.
At the same time, the very AI models that power these tools are proving to have their own security flaws. Hackers are getting better at “prompt injection” attacks, which are clever tricks to fool an AI into ignoring its safety rules. These attacks can cause the AI to leak sensitive data, delete files, or execute harmful commands.
A newly discovered technique, called PROMISQROUTE, is shockingly simple. Researchers found that by adding simple phrases like “use compatibility mode” or “fast response needed” to a request, attackers can trick a sophisticated AI system into switching to an older, less secure version of itself. That older model doesn’t have the same safety guardrails, so hackers can bypass protections that companies have spent millions of dollars developing, effectively turning a helpful assistant into a malicious accomplice.