LLMjacking Attacks Hijack Cloud LLMs by Abusing GenAI with AWS NHIs

In a groundbreaking revelation, cybersecurity experts have identified a new attack vector termed “LLMjacking,” wherein malicious actors exploit non-human identities (NHIs) to hijack access to cloud-based large language models (LLMs).

Unlike traditional cyberattacks targeting human credentials, LLMjacking focuses on compromising machine accounts and secrets, such as API keys, that power generative AI (GenAI) services.

This enables attackers to misuse expensive AI resources, generate unauthorized content, and potentially exfiltrate sensitive data, all at the victim’s expense.

Recent incidents underscore the severity of this threat.

For instance, the Storm-2139 group exploited exposed Microsoft Azure API keys to bypass security controls and generate illicit content.

Similarly, researchers discovered over 11,000 exposed secrets in datasets used to train AI models like DeepSeek.

These cases highlight the vulnerabilities inherent in NHI governance and the critical need for robust security measures.

Rapid Exploitation: How Attackers Abuse Exposed AWS Keys

To study real-world LLMjacking tactics, researchers at Entro deliberately leaked valid AWS API keys across public platforms such as GitHub, Pastebin, and Reddit.

Within minutes (an average of 17 minutes, and as fast as 9 minutes), malicious actors began reconnaissance efforts to exploit the exposed credentials.

These findings reveal the alarming speed at which attackers operate, often before organizations are even aware of the breach.
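
Entro has not published the tooling behind this experiment, but the general approach is straightforward to reproduce. The sketch below is a minimal illustration (not Entro's actual code) of how the time-to-first-use of a deliberately leaked canary key could be measured from CloudTrail, assuming the canary's access key ID and publication time are known.

```python
import boto3
from datetime import datetime, timezone

# Hypothetical canary access key ID that was deliberately published.
CANARY_ACCESS_KEY_ID = "AKIAEXAMPLECANARYKEY"
LEAK_TIME = datetime(2025, 3, 1, 12, 0, tzinfo=timezone.utc)  # assumed publication time

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Look up all API events made with the canary key since it was leaked.
events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "AccessKeyId", "AttributeValue": CANARY_ACCESS_KEY_ID}
    ],
    StartTime=LEAK_TIME,
    EndTime=datetime.now(timezone.utc),
)["Events"]

if events:
    earliest = min(events, key=lambda e: e["EventTime"])
    delta = earliest["EventTime"] - LEAK_TIME
    print(f"First use {delta} after the leak: "
          f"{earliest['EventName']} at {earliest['EventTime'].isoformat()}")
else:
    print("No activity recorded for the canary key yet.")
```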

The observed attack chain typically begins with automated bots scanning public repositories for leaked secrets.

[Image: LLMjacking canary tokens]

Once detected, these bots validate permissions using tools like botocore (the library underlying the AWS SDK for Python) or custom Python scripts.
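
As a rough illustration of that validation step, the snippet below uses boto3 (the Python AWS SDK built on botocore) to test whether a leaked key pair is still live with a single STS call; the function name and flow are illustrative, not taken from any observed attacker tooling.

```python
import boto3
from botocore.exceptions import ClientError

def validate_leaked_key(access_key_id: str, secret_access_key: str) -> bool:
    """Check whether an AWS key pair is still active and identify its principal.

    This mirrors the validation step attackers automate; defenders can use the
    same call to confirm a suspected leak before rotating the credential.
    """
    session = boto3.Session(
        aws_access_key_id=access_key_id,
        aws_secret_access_key=secret_access_key,
    )
    sts = session.client("sts")
    try:
        identity = sts.get_caller_identity()
    except ClientError:
        return False  # key is revoked, deactivated, or otherwise invalid
    print(f"Key is live: account={identity['Account']}, arn={identity['Arn']}")
    return True
```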

In some cases, manual exploitation follows, with attackers using browsers or developer tools to probe deeper into compromised environments.

After initial reconnaissance, attackers assess the value of stolen credentials by invoking APIs such as GetCostAndUsage or GetFoundationModelAvailability.

These queries help them identify accessible AI resources ranging from GPT-4 to Anthropic Claude and determine their potential for abuse.
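
The exact reconnaissance calls vary, but the documented boto3 equivalents give a sense of what this stage looks like. The sketch below (illustrative only) pulls the last month's spend via Cost Explorer and enumerates the foundation models reachable through Amazon Bedrock.

```python
import boto3
from datetime import date, timedelta

session = boto3.Session()  # assumes the credentials under review are configured

# 1. Gauge how much budget is available to burn through.
ce = session.client("ce", region_name="us-east-1")
cost = ce.get_cost_and_usage(
    TimePeriod={
        "Start": (date.today() - timedelta(days=30)).isoformat(),
        "End": date.today().isoformat(),
    },
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
)
for period in cost["ResultsByTime"]:
    print("Spend:", period["Total"]["UnblendedCost"]["Amount"], "USD")

# 2. Enumerate which foundation models (e.g., Anthropic Claude) are reachable.
bedrock = session.client("bedrock", region_name="us-east-1")
for model in bedrock.list_foundation_models()["modelSummaries"]:
    print(model["modelId"])
```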

Only after mapping out capabilities do they attempt unauthorized model invocations to generate content or monetize stolen access.

Financial and Operational Risks of LLMjacking

LLMjacking poses significant financial risks due to the high cost of AI workloads.

Advanced models are billed per token, and a single query can consume thousands of tokens, meaning attackers could drain cloud budgets within hours.
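
As a back-of-the-envelope illustration, the figures below are assumed round numbers rather than any provider's actual rate card, but they show how quickly an automated abuser can run up charges.

```python
# Illustrative, assumed figures -- not an actual provider price list.
PRICE_PER_1K_TOKENS = 0.06   # USD, assumed premium-model rate
TOKENS_PER_QUERY = 4_000     # prompt + completion for a large request
QUERIES_PER_MINUTE = 120     # a modest automated loop

hourly_cost = (TOKENS_PER_QUERY / 1_000) * PRICE_PER_1K_TOKENS * QUERIES_PER_MINUTE * 60
print(f"Estimated spend: ${hourly_cost:,.2f} per hour, ${hourly_cost * 24:,.2f} per day")
# -> roughly $1,728 per hour and $41,472 per day under these assumptions
```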

Beyond monetary losses, compromised AI systems can be used for harmful purposes, such as generating deepfake content or bypassing ethical guardrails.

In one documented case, attackers abused Azure OpenAI services to produce explicit material for profit.

To mitigate LLMjacking risks, organizations must adopt proactive security measures:

  • Real-Time Monitoring: Continuously scan for exposed NHIs in code repositories and collaboration platforms.
  • Automated Secret Rotation: Immediately revoke or rotate leaked credentials upon detection.
  • Least Privilege Enforcement: Restrict NHI permissions to minimize potential misuse.
  • Anomaly Detection: Monitor unusual API activity, such as unexpected model invocations (a minimal monitoring sketch follows this list).
  • Developer Education: Train teams on secure credential management practices.
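
As a starting point for the monitoring and rotation items above, the sketch below is illustrative rather than a complete solution: assuming CloudTrail records Bedrock InvokeModel calls, it flags invocations from access keys outside a hypothetical allowlist and deactivates the offending key via IAM.

```python
import boto3
from datetime import datetime, timedelta, timezone

# Hypothetical allowlist of NHI access keys that are expected to call Bedrock.
EXPECTED_BEDROCK_KEYS = {"AKIAEXPECTEDSERVICE1"}

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")
iam = boto3.client("iam")

# Look for model invocations in the last hour.
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "InvokeModel"}],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
)["Events"]

for event in events:
    key_id = event.get("AccessKeyId", "")
    user = event.get("Username", "")
    if key_id and key_id not in EXPECTED_BEDROCK_KEYS:
        print(f"Unexpected InvokeModel by {user or 'unknown principal'} using {key_id}")
        # Deactivate the offending key so it can be rotated; this works for IAM
        # user keys, while assumed-role sessions require separate handling.
        if user:
            iam.update_access_key(UserName=user, AccessKeyId=key_id, Status="Inactive")
```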

As generative AI becomes integral to business operations, securing NHIs is no longer optional.

The rapid exploitation observed in LLMjacking underscores the need for comprehensive defenses to protect cloud-based AI systems from abuse.
