Ollama AI Infrastructure Platform Flaw Lets Attackers Execute Remote Code

A critical RCE vulnerability (CVE-2024-37032) was found in Ollama, a popular open-source project for running AI models, allowing attackers to take over exposed Ollama servers, steal or modify AI models, and compromise AI applications. 

The issue stemmed from insufficient server-side validation in Ollama’s REST API, which attackers could exploit with specially crafted requests. Ollama users are advised to upgrade to version 0.1.34 or later. 

Additionally, exposing Ollama instances to the internet without an authentication layer, such as a reverse proxy that enforces authentication, is strongly discouraged. Security researchers identified over 1,000 exposed Ollama servers, highlighting the need for stricter security measures in AI tooling. 

An attacker can exploit a path traversal vulnerability (CVE-2024-37032) in Ollama to achieve remote code execution (RCE) by sending malicious HTTP requests to the API server. 

The default Linux installation mitigates this risk by binding the API server to localhost, restricting access from the outside world; Docker deployments, however, expose the API server publicly, making them vulnerable to remote attacks. 
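The difference between the two deployments comes down to the address the API server binds to. The following minimal sketch (illustrative, not Ollama’s actual code; port 0 is used so it runs anywhere without conflicts) contrasts a loopback-only bind, as in the default Linux install, with an all-interfaces bind, as in the Docker image:

```python
import socket

# Loopback-only bind: reachable solely from the host itself
# (the behavior of Ollama's default Linux installation).
loopback_only = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
loopback_only.bind(("127.0.0.1", 0))

# All-interfaces bind: reachable from any network interface
# (the behavior of the official Docker image).
all_interfaces = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
all_interfaces.bind(("0.0.0.0", 0))

print(loopback_only.getsockname()[0])   # 127.0.0.1
print(all_interfaces.getsockname()[0])  # 0.0.0.0

loopback_only.close()
all_interfaces.close()
```

A server bound to 0.0.0.0 accepts connections from any host that can route to it, which is why internet-facing Docker deployments were directly exploitable.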

Technical Description:

The research team investigated Ollama, a popular open-source tool (70k+ GitHub stars) for running large language models (LLMs) such as Gradient’s Llama-3 with its 1M-token context. Their goal was to leverage a large-context LLM for their research while maintaining control and security, and Ollama’s self-hosting capabilities made it an attractive alternative to cloud-based solutions. 


Ollama, an AI infrastructure tool, was found to have a critical Remote Code Execution (RCE) vulnerability due to insufficient input validation, which allowed attackers to overwrite files and gain full control of the server, especially in Docker deployments where the server runs with root privileges. 

Ollama Docker Image

Researchers at Wiz discovered a vulnerability in Ollama’s HTTP server that allows attackers to write arbitrary files to the system. The flaw lies in the `/api/pull` endpoint, which accepts model manifests from private registries. 

By crafting a malicious manifest with a path traversal payload in the digest field, attackers can trick Ollama into saving the model file outside the designated directory, which exposes critical system files to potential corruption.  
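The core of the flaw can be sketched as naive path construction from an attacker-controlled digest. The function name and blob directory below are illustrative assumptions, not Ollama’s actual code, but they show why a traversal payload in the digest field escapes the intended directory:

```python
import os

# Typical blob location in the Docker image (illustrative).
BLOB_DIR = "/root/.ollama/models/blobs"

def blob_path(digest: str) -> str:
    # Vulnerable pattern: the digest from the attacker-supplied
    # manifest is joined into the path without any validation.
    return os.path.normpath(os.path.join(BLOB_DIR, digest))

# A well-formed digest stays inside the blob directory:
print(blob_path("sha256-aabbcc"))
# → /root/.ollama/models/blobs/sha256-aabbcc

# A traversal payload in the digest field escapes it entirely:
print(blob_path("../../../../etc/ld.so.preload"))
# → /etc/ld.so.preload
```

Validating that the digest matches the expected `sha256-<hex>` format, or resolving the final path and checking it is still under `BLOB_DIR`, would have blocked this primitive.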

Pulling a model from a private registry

An attacker can exploit a path traversal vulnerability in Ollama’s /api/pull endpoint to achieve arbitrary file reads and remote code execution (RCE) by first uploading a malicious model manifest containing a path traversal payload to the server. 

Arbitrary File Read

The vulnerability allows them to overwrite files on the system, and in Docker deployments, where the Ollama server runs with root privileges, the attacker can use this write access to corrupt `/etc/ld.so.preload`, the file that lists shared libraries to be loaded at process startup. 

The attacker can then place a malicious shared library on the system and add its path to `/etc/ld.so.preload`; every subsequently started process will load the attacker’s library, enabling RCE. 
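The two-step chain can be illustrated as follows. This sketch simulates the writes inside a temporary directory (no real system files are touched, and the payload contents are elided); on a vulnerable Docker deployment the attacker would perform both writes through the `/api/pull` file-write primitive:

```python
import os
import tempfile

root = tempfile.mkdtemp()                   # stand-in for "/"
lib = os.path.join(root, "tmp", "evil.so")  # dropped malicious library
preload = os.path.join(root, "etc", "ld.so.preload")

os.makedirs(os.path.dirname(lib))
os.makedirs(os.path.dirname(preload))

# Step 1: drop the payload library (contents elided in this sketch).
open(lib, "wb").close()

# Step 2: point ld.so.preload at it. The dynamic loader reads this
# file and preloads each listed library into every new process.
with open(preload, "w") as f:
    f.write("/tmp/evil.so\n")

print(open(preload).read().strip())  # → /tmp/evil.so
```

From this point, any process launched on the host (a cron job, a shell, the Ollama binary itself restarting) executes attacker code, which is what turns an arbitrary-file-write into full RCE.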


Kaaviya is a Security Editor and reporter with Cyber Press (https://cyberpress.org/), covering cyber security incidents across cyberspace.
