Gemini Flaws Could Have Exposed Your Private Data
Cybersecurity experts have uncovered a trio of serious security holes in Google’s Gemini AI assistant. Before they were fixed, these flaws could have allowed attackers to steal users’ private information, including saved data and location details, and even take control of cloud services. The security firm Tenable, which discovered the issues, dubbed the set of vulnerabilities the “Gemini Trifecta,” highlighting how three different parts of the AI suite were at risk. These discoveries prove that the very AI tools designed to help us can be turned into weapons for data theft.
Hacking the Cloud with Hidden Commands
One of the most significant flaws was found in Gemini Cloud Assist, a tool designed to help developers and IT professionals manage their cloud infrastructure. The problem stemmed from Gemini’s ability to read and summarize system logs. Attackers found a way to hide malicious instructions, known as a prompt injection, inside these logs.
Imagine leaving a secret, coded message for someone inside a long, boring report. When that person is asked to summarize the report, they find and act on your secret message without the owner’s knowledge. That’s essentially what attackers could do here. They could craft a special command and hide it within a standard piece of log data, like a “User-Agent” header which identifies a user’s browser. When an administrator asked Gemini to summarize recent activity, the AI would process the log, find the hidden command, and execute it.
This could have been disastrous. For example, an attacker could have instructed Gemini to scan the cloud environment for security weaknesses or misconfigurations and then create a link containing all that sensitive information. Because Gemini had permission to access these systems through tools like the Cloud Asset API, it would have obediently followed the orders, handing the keys to the kingdom over to the hacker.
Poisoning Your Search History to Steal Your Info
Another vulnerability targeted the Gemini Search Personalization model, the feature that allows the AI to give you more relevant answers based on your past activity. This attack was a two-step process that involved manipulating a user’s Google Chrome search history.
First, an attacker would need to trick a target into visiting a malicious website. Once on the site, hidden code would run in the background, silently adding a series of fake searches to the victim’s browsing history. These weren’t ordinary searches; they contained hidden instructions for Gemini.
Later, when the victim went to use Gemini, the AI would look at their recent search history to personalize the experience. However, it couldn’t tell the difference between the user’s real searches and the fake, malicious ones planted by the attacker. It would read the attacker’s hidden prompts and be tricked into carrying out their commands, which could include leaking the user’s saved personal information or even their current location data. It was a clever way to turn the AI’s own personalization feature against the user.
Tricking Gemini into Leaking Data from Websites
The third flaw was found in the Gemini Browsing Tool, which allows the AI to visit and summarize webpages. This was an “indirect prompt injection” attack, meaning the malicious instruction wasn’t given directly by the user. Instead, an attacker could embed a hidden command within the content of their own webpage.
If a user asked Gemini to visit or summarize that compromised webpage, the AI would read the page’s content, including the invisible, malicious prompt. This prompt could order Gemini to find the user’s private data stored in their account and send it to a server controlled by the attacker. What made this particularly dangerous was that the data could be sent out secretly, without Gemini ever showing the user a suspicious link or image in its response. The theft could happen completely behind the scenes.
Google’s Fix and the Bigger Picture
After Tenable responsibly disclosed the vulnerabilities, Google took action to patch the security holes. The company has reportedly stopped Gemini from generating hyperlinks in its responses when summarizing logs, which closes the door on the Cloud Assist attack. Google also implemented additional “hardening measures” to better protect the platform against prompt injection attacks in general.
Liv Matan, the security researcher who led the discovery, warned that this is a new frontier in cybersecurity. “The Gemini Trifecta shows that AI itself can be turned into the attack vehicle, not just the target,” he stated. As companies and individuals rush to adopt AI, it’s crucial that they don’t forget about securing it. This incident is a stark reminder that AI systems need to be constantly monitored and protected.
This isn’t an isolated issue. Other researchers recently detailed a similar attack on Notion’s AI agent, where hidden prompts in a PDF file were used to steal confidential data. As AI agents gain more access to our documents, databases, and connected apps, they create a much larger “threat surface” for attackers to exploit. The old rules of security may not be enough to protect us in a world where AI can be tricked into becoming a hacker’s automated assistant.