Hackers Are Using AI to Create a New Wave of Undetectable Phishing Attacks

Cybersecurity experts are sounding the alarm after Microsoft discovered a new type of phishing attack that uses artificial intelligence to write malicious code. This cutting-edge technique allows hackers to disguise their attacks so well that they can slip past traditional security software, putting countless organizations at risk. The development, spotted on August 28, 2025, shows that cybercriminals are now weaponizing the same AI tools that have been making headlines, using them to create cleverer, more dangerous threats.

The campaign, which primarily targeted businesses in the United States, marks a significant and frightening leap forward in the capabilities of online criminals. By using large language models (LLMs)—the same technology behind popular AI chatbots—attackers can now automatically generate complex code designed to fool both people and the programs meant to protect them.

A Devious New Trick in the Hacker’s Playbook

The attack begins with a cleverly crafted phishing email. In a sneaky move to bypass basic spam filters, the hackers send the email from a compromised account but make it look like the recipient sent the email to themselves. The real targets are hidden in the “BCC” field, a simple trick that makes the email seem less suspicious at first glance.
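To make the pattern concrete, here is a rough Python sketch of the kind of heuristic a mail filter or analyst might use to spot it. The addresses, message, and threshold logic are illustrative assumptions, not details from the actual campaign: the check simply flags messages whose visible sender and recipient are the same account while the mailbox that actually received the message appears nowhere in the visible headers.

```python
from email import message_from_string
from email.utils import getaddresses

def looks_self_addressed(raw_message: str, my_address: str) -> bool:
    """Flag messages that appear 'sent to self' while the real recipient
    is missing from To/Cc -- a sign the target was hidden in the BCC field."""
    msg = message_from_string(raw_message)
    sender = {addr.lower() for _, addr in getaddresses(msg.get_all("From", []))}
    visible = {addr.lower() for _, addr in
               getaddresses(msg.get_all("To", []) + msg.get_all("Cc", []))}
    # Same account in From and To, but the mailbox that actually got the
    # message is nowhere in the visible recipients.
    return bool(sender) and sender == visible and my_address.lower() not in visible

# Hypothetical example: a message "from" and "to" the same compromised
# account, delivered to victim@example.com only via BCC.
sample = (
    "From: finance@compromised-firm.example\n"
    "To: finance@compromised-firm.example\n"
    "Subject: Shared document\n\n"
    "Please review the attached file.\n"
)
print(looks_self_addressed(sample, "victim@example.com"))  # True
```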

Attached to the email is what looks like a normal PDF document, perhaps an invoice or a shared file notification. But it’s a trap. The file is actually an SVG, or Scalable Vector Graphics file. While most people think of SVGs as a type of image, they are text-based and can contain executable code, like JavaScript. This makes them a perfect Trojan horse for attackers.
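To see why that matters, the harmless Python sketch below makes the point: an SVG is ordinary text (XML) that can carry script elements or event handlers alongside its image data. The sample SVG and the detection patterns are illustrative assumptions, not the actual attachment or Microsoft's filtering rules.

```python
import re

# A deliberately harmless SVG: plain text that mixes image markup with
# places where JavaScript can execute when the file is opened in a browser.
SVG_SAMPLE = """<svg xmlns="http://www.w3.org/2000/svg" onload="alert('script ran')">
  <rect width="100" height="100" fill="steelblue"/>
  <script>/* arbitrary JavaScript could live here */</script>
</svg>"""

# Crude patterns a mail gateway or analyst might look for inside an
# attachment that claims to be an image.
SUSPICIOUS_PATTERNS = [r"<script\b", r"\bonload\s*=", r"javascript:"]

def svg_contains_script(svg_text: str) -> bool:
    """Return True if the SVG text contains signs of executable content."""
    return any(re.search(p, svg_text, re.IGNORECASE) for p in SUSPICIOUS_PATTERNS)

print(svg_contains_script(SVG_SAMPLE))  # True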

Once an unsuspecting employee clicks to open the file, the hidden code springs into action. The victim is first sent to a web page with a CAPTCHA test, a common security check that asks you to identify images or type distorted text. This step is designed to make the whole process seem legitimate. After “verifying” themselves, the user is taken to a fake login page designed to steal their username and password. Because Microsoft’s systems caught and stopped the threat, the final stage of the attack remains unclear, but the goal was undoubtedly to harvest sensitive credentials.

The Artificial Intelligence Disguise

What makes this attack truly unique is how the AI was used to hide the malicious code in plain sight. Hackers instructed the LLM to write the code using language and structures that mimic a legitimate business analytics program. The code was filled with common business terms like “revenue,” “operations,” “risk,” “quarterly,” and “growth.”

Imagine trying to find a bomb in a factory where every part of the bomb is disguised to look like a normal piece of machinery. That’s essentially what the AI did. It buried the harmful instructions—which tell the program to redirect the user and steal their information—within long, rambling sequences of harmless-sounding business jargon.
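The exact encoding used in the campaign has not been published, so the Python sketch below is only a toy reconstruction of the idea: a short, harmless string is hidden inside what reads like routine analytics vocabulary, then recovered with a lookup table. The term-to-character mapping and the hidden string are invented for illustration.

```python
# Toy illustration of the obfuscation style: each "business term" quietly
# stands in for one character of a hidden string.
TERM_TO_CHAR = {
    "revenue": "e", "operations": "x", "risk": "a", "quarterly": "m",
    "growth": "p", "forecast": "l", "margin": ".", "synergy": "c",
    "pipeline": "o",
}

# To a casual reader or a simple scanner, this is just quarterly-report jargon.
encoded_report = [
    "revenue", "operations", "risk", "quarterly", "growth", "forecast",
    "revenue", "margin", "synergy", "pipeline", "quarterly",
]

def decode(terms):
    """Recover the string hidden in the business-term sequence."""
    return "".join(TERM_TO_CHAR[t] for t in terms)

print(decode(encoded_report))  # example.com
```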

Microsoft experts concluded that an AI almost certainly wrote the code because it was far more verbose and inefficient than anything a human programmer would typically produce. It had overly descriptive function names, a needlessly complicated structure, and generic, unhelpful comments. In short, it had all the hallmarks of a machine following a formulaic prompt to create something that looks functional but is actually just a clever disguise for an attack.
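Microsoft has not published its exact criteria, but a toy heuristic can make those hallmarks concrete. The Python sketch below scores a snippet on two invented signals, average identifier length and the share of generic comments; the thresholds, weights, and sample snippet are assumptions for illustration, not how the real sample was classified.

```python
import re

# Generic boilerplate comments often seen in formulaic, machine-written code.
GENERIC_COMMENTS = {"// process data", "// helper function", "// main logic"}

def ai_style_score(source: str) -> float:
    """Toy score: long, over-descriptive names plus generic comments push it up."""
    identifiers = re.findall(r"\b[A-Za-z_][A-Za-z0-9_]*\b", source)
    avg_len = sum(map(len, identifiers)) / max(len(identifiers), 1)
    comments = re.findall(r"//[^\n]*", source)
    generic = sum(1 for c in comments if c.strip().lower() in GENERIC_COMMENTS)
    comment_ratio = generic / max(len(comments), 1)
    return round(min(avg_len / 20, 1.0) * 0.5 + comment_ratio * 0.5, 2)

sample = """
// helper function
function calculateQuarterlyRevenueGrowthProjectionMatrix(businessOperationsRiskData) {
    // process data
    return businessOperationsRiskData;
}
"""
print(ai_style_score(sample))  # high score for verbose names and stock comments
```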

A Growing Trend of Sophisticated Attacks

While this specific AI-powered campaign was stopped, the technique represents a dangerous new frontier in cybersecurity. It’s part of a larger trend of attackers using more complex, multi-stage methods to evade detection.

For example, security firm Forcepoint recently detailed another attack that uses booby-trapped Excel attachments (.XLAM files) to secretly install a nasty piece of malware called XWorm RAT. This attack uses multiple layers of encryption and memory-injection techniques to hide its final payload, all while showing the victim a blank or corrupted document to throw them off.

Meanwhile, other phishing campaigns have been tricking people with fake emails about Social Security benefits or copyright infringement claims. These attacks aim to distribute information-stealing malware by luring victims to click on malicious links, with some even using Telegram bot profiles as part of their complex infection chain.

The message is clear: hackers are constantly innovating. The arrival of AI-generated malware means that everyone, from large corporations to individual users, must be more vigilant than ever. These new attacks are designed to outsmart the very systems we rely on for protection, making human awareness and suspicion our most important line of defense.
