Six Security Vulnerabilities Disclosed in the Ollama AI Framework
In a detailed report released last week, Avi Lumelsky, a researcher at Oligo Security, disclosed six vulnerabilities in the Ollama AI framework. Attackers could exploit these flaws to carry out a range of malicious activities, including denial-of-service (DoS) attacks, model tampering, and the theft of AI models. Strikingly, some of these attacks could be triggered by nothing more than a single HTTP request.
What is Ollama AI?
Ollama is an open-source system that allows users to run large language models (LLMs) locally across different operating systems, including Windows, Linux, and macOS. It has become quite popular in the developer community, with its GitHub project forked 7,600 times, reflecting a significant level of interest and use. However, the newly discovered security flaws are raising concerns among its widespread user base.
Details of the Six Vulnerabilities
The research highlighted several specific issues, each with a different severity level. Here’s a breakdown of the four flaws that have already been patched:
- CVE-2024-39719 (CVSS Score: 7.5): This vulnerability involves the /api/create endpoint, which can be manipulated to check the existence of server files. Attackers could leverage this flaw to gather sensitive information about the system. The problem was patched in version 0.1.47.
- CVE-2024-39720 (CVSS Score: 8.2): An out-of-bounds read bug in the /api/create endpoint can crash the application, making it susceptible to DoS attacks. A patch for this issue was provided in version 0.1.46.
- CVE-2024-39721 (CVSS Score: 7.5): This DoS vulnerability is triggered by repeatedly calling the /api/create endpoint with the file “/dev/random” as input, causing resource exhaustion. The developers addressed this flaw in version 0.1.34.
- CVE-2024-39722 (CVSS Score: 7.5): This vulnerability exists in the /api/push endpoint, where a path traversal issue could reveal the server’s file system and expose Ollama’s internal directory structure. It was patched in version 0.1.46.
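Since each fix landed in a specific release, one practical first step is to confirm that a local install is at least version 0.1.47, the newest version mentioned above. A minimal sketch in Python, assuming a default local install listening on port 11434 (Ollama’s /api/version endpoint reports the running version; the helper here handles plain x.y.z version strings only):

```python
import json
import urllib.request

# Version that includes all four patches listed above.
PATCHED = (0, 1, 47)

def parse_version(v: str) -> tuple:
    """Turn a plain version string like '0.1.46' into a comparable tuple."""
    return tuple(int(part) for part in v.split(".")[:3])

def is_patched(version: str) -> bool:
    """True if the reported version includes the four fixes above."""
    return parse_version(version) >= PATCHED

def check_local_ollama(base_url: str = "http://127.0.0.1:11434") -> None:
    # GET /api/version returns a JSON body like {"version": "0.1.47"}.
    with urllib.request.urlopen(f"{base_url}/api/version", timeout=5) as resp:
        version = json.load(resp)["version"]
    status = "patched" if is_patched(version) else "VULNERABLE - update Ollama"
    print(f"Ollama {version}: {status}")

# check_local_ollama()  # uncomment to run against a live local instance
```

Running the check against an outdated instance (for example, one reporting 0.1.34) would flag it as vulnerable to all four of the issues above.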
Two Critical Vulnerabilities Remain Unpatched
Despite these fixes, there are still two unresolved security issues that present a serious threat:
- Model Poisoning: The /api/pull endpoint can be exploited to alter or corrupt models if sourced from an untrusted or malicious location. An attacker could inject harmful data, significantly impacting the reliability and integrity of the models used.
- Model Theft: The /api/push endpoint is vulnerable to unauthorized access, allowing attackers to steal AI models by redirecting them to an untrusted destination. Given the value of these models, this could lead to significant intellectual property theft.
Suggested Mitigation Measures
Ollama maintainers have not released patches for these vulnerabilities yet. Instead, they advise users to protect internet-facing endpoints using proxies or web application firewalls (WAFs). According to Lumelsky, the framework’s default settings leave endpoints exposed, which means users must take extra measures to filter and secure these routes properly.
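As a quick sanity check on that default exposure, a deployment script can inspect the OLLAMA_HOST environment variable, which Ollama uses to decide its bind address (the default, when unset, is the loopback address 127.0.0.1:11434). A hedged sketch, assuming only that the variable follows the usual host:port form:

```python
import os

def binding_is_loopback(ollama_host: str) -> bool:
    """Return True if an OLLAMA_HOST value keeps the API on loopback only.

    A value like '0.0.0.0:11434' exposes every endpoint to the network,
    which is exactly the default-exposure risk Lumelsky describes.
    """
    # Strip an optional scheme prefix, then take the host part.
    host = ollama_host.split("://")[-1].split(":")[0]
    return host in ("127.0.0.1", "localhost", "")

# Unset or empty means Ollama falls back to the loopback default.
configured = os.environ.get("OLLAMA_HOST", "")
if configured and not binding_is_loopback(configured):
    print(f"Warning: OLLAMA_HOST={configured} exposes the API beyond localhost;")
    print("front it with a reverse proxy or WAF and restrict access.")
```

This only catches the most obvious misconfiguration; a reverse proxy or firewall rule in front of the port is still needed for any instance that must be reachable over the network.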
A Concerning Security Assumption
Lumelsky emphasized a critical gap in Ollama’s security setup: “The assumption that endpoints will be filtered is risky. By default, all endpoints run on Ollama’s standard port, and many users might not be aware of the need to secure them. There’s no separation or detailed documentation to guide users in protecting these routes.”
Global Impact and Widespread Exposure
Oligo Security’s research uncovered 9,831 unique Ollama server instances exposed to the internet, with the largest concentrations in China, the United States, Germany, South Korea, Taiwan, France, the United Kingdom, India, Singapore, and Hong Kong. Alarmingly, around 25% of these servers are affected by at least one of the vulnerabilities, making the potential impact extensive.
Past Reports of Critical Flaws
This isn’t the first time Ollama has faced significant security challenges. Four months ago, cloud security firm Wiz identified a critical issue (CVE-2024-37032) that could have allowed remote code execution. This flaw highlighted the severe risks of exposing Ollama instances without proper safeguards.
Lumelsky compared the risk to leaving the Docker socket accessible on the internet: “Exposing Ollama publicly is like making the Docker socket available for anyone to exploit. With capabilities like file uploads and model management (pull and push), attackers have many opportunities for abuse.”
What This Means for Ollama Users
These findings underline the urgent need for Ollama users to be proactive about securing their deployments. Applying updates, following best practices for endpoint security, and restricting access are crucial steps to prevent potential attacks. For now, the Ollama community and security experts are waiting for more comprehensive fixes to address the ongoing risks.