Home Cyber Security News Gemini CLI Vulnerability Allows Attackers to Execute Malicious Commands Silently on Developer...

Gemini CLI Vulnerability Allows Attackers to Execute Malicious Commands Silently on Developer Systems

0

Cybersecurity firm Tracebit has revealed a critical vulnerability in Google’s Gemini CLI tool that could allow attackers to silently execute malicious code on users’ machines through sophisticated prompt injection techniques.

The flaw, discovered just two days after the tool’s release, highlights growing security concerns around AI-powered development tools.

The Vulnerability Discovery

On June 27, 2025, Tracebit reported the vulnerability to Google’s Vulnerability Disclosure Program (VDP), merely two days after Gemini CLI’s initial release on June 25.

The security flaw was initially classified by Google as a P2/S4 issue but was later escalated to P1/S1 status, indicating its critical severity.

The vulnerability exploited a “toxic combination of improper validation, prompt injection and misleading UX” that allowed attackers to execute arbitrary commands when users inspected untrusted code repositories.

Most concerning was the attack’s stealth nature – malicious commands could execute completely undetected by the victim.

How the Attack Works

The attack leveraged Gemini CLI’s ability to execute shell commands through its run_shell_command tool and its support for context files like GEMINI.md.

Attackers could hide malicious instructions within seemingly benign files, such as README.md files containing the GNU Public License text, where few users would read beyond the opening lines.

The exploitation involved a two-stage process: first, tricking users into whitelisting innocuous commands like grep, then executing malicious commands masquerading as the whitelisted ones.

Due to insufficient validation logic in comparing shell inputs to the command whitelist, attackers could append malicious payloads to legitimate commands.

Industry Response and Fix

Google responded swiftly to the disclosure, releasing version 0.1.14 on July 25 with comprehensive fixes.

The company emphasized that its security model centers on “robust, multi-layered sandboxing” with Docker, Podman, and macOS Seatbelt integrations.

Several security researchers independently discovered similar vulnerabilities during the month between release and patch, underscoring the severity of the issue.

The fixed version now clearly displays malicious commands to users and requires explicit approval for additional binaries.

Broader Implications

This incident reflects broader challenges in AI tool security as development teams rapidly adopt LLM-powered assistants.

Tracebit, which specializes in deception technology and security canaries, noted that “teams are moving very quickly to unlock and leverage the power of LLMs” but warned that “Prompt Injection doesn’t seem to be going away any time soon”.

The company’s customers, including major firms like Docker, Riot Games, and Cresta, have praised Tracebit’s approach to threat detection through automated canary deployment.

As AI tools become increasingly prevalent in development workflows, this discovery emphasizes the critical need for robust security measures and careful validation of AI-generated actions.

Find this Story Interesting! Follow us on LinkedIn and X to Get More Instant Updates

NO COMMENTS

LEAVE A REPLY

Please enter your comment!
Please enter your name here

Exit mobile version