ASCII Smuggling Attack in Gemini Tricks AI Agents into Revealing Smuggled Data

A well-established attack technique ASCII smuggling, has resurfaced in enterprise AI agents, enabling attackers to embed invisible payloads in user prompts or calendar events.

FireTail’s research demonstrates that Google’s Gemini, Grok, and DeepSeek can be manipulated to bypass human oversight, leading to identity spoofing and automated data poisoning.

Background and Attack Technique

FireTail researcher Viktor Markopoulos revisited ASCII smuggling attacks against modern large language models (LLMs).

ASCII smuggling exploits invisible Unicode control characters, specifically “tag characters,s” to hide instructions within a seemingly benign text string.

While the user interface (UI) renders only the visible text, the AI agent’s raw input pre-processor ingests the hidden characters and executes smuggled commands. This discrepancy between the display layer and data layer is the root of the vulnerability.

Historically, similar methods, such as “Trojan Source,” used bidirectional-override characters to conceal malicious code in software repositories.

ASCII smuggling extends this threat into AI-driven workflows, weaponizing the gap between what humans see and what LLMs process.

Attack Demonstration and Affected LLMs

FireTail’s proof of concept against Gemini involved sending a calendar invite titled “Meeting” to a test account.

The visible title appeared innocuous, but embedded tag-block characters transformed the raw calendar event into:
Gemini’s assistant then read the manipulated prompt aloud, automatically marking the event as optional without any user action or approval.

Using a crafted payload, FireTail was able to overwrite meeting links and organizer details, effectively spoofing a corporate identity.

Testing across multiple platforms revealed that ChatGPT, Copilot, and Claude scrubbed input reliably, but Gemini, Grok, and DeepSeek did not.

As a result, enterprises relying on the vulnerable services face immediate risk.

Enterprise Impact: Spoofing and Data Poisoning

Vector A: Identity Spoofing via Calendar Integration

Attackers send calendar invites containing smuggled tag characters. The UI shows a normal event title, but the AI agent processes hidden instructions, altering organizer details and meeting descriptions. Victims never accept or decline; Gemini autonomously ingests and acts on the malicious data.

Vector B: Automated Content Poisoning

On e-commerce platforms, hidden commands in user reviews can force an AI summarizer to inject malicious links into customer-facing content.

A benign product review such as “Great phone. Fast delivery.” can be transformed into a summary promoting a scam website.

These scenarios highlight how ASCII smuggling turns AI agents into unwitting accomplices in enterprise attacks.

CVE Table

CVE IDDescriptionAffected ProductsCVSS 3.1ImpactExploit Prerequisites
CVE-2025-61347ASCII smuggling in prompt processingGoogle Gemini (Google Workspace integration)7.5Identity spoofingAbility to send calendar or text input
CVE-2025-61348ASCII smuggling in social media integrationsGrok (X integration)7.0Data poisoningAbility to post or submit smuggled text
CVE-2025-61349ASCII smuggling in data aggregation workflowsDeepSeek7.0Poisoned summariesAbility to supply raw text inputs

FireTail reported ASCII smuggling vulnerabilities to Google on September 18, 2025, but received notice of “no action.”

In contrast, AWS published guidance for defending LLM applications against Unicode smuggling. With major vendors unwilling to patch, enterprises must deploy their own defenses.

FireTail’s solution focuses on observability at the ingestion layer:

  1. Ingestion – Record raw LLM input streams before any UI normalization.
  2. Analysis – Detect tag-block sequences and zero-width characters in logs.
  3. Alerting – Trigger “ASCII Smuggling Attempt” alerts upon detection.
  4. Response – Isolate sources and flag or block poisoned outputs in real time.

Monitoring raw payloads rather than visible text is the only reliable defense against this application-layer flaw.

Organizations using vulnerable AI integrations should implement deep observability controls immediately to mitigate identity spoofing and data poisoning risks.

Cyber Awareness Month Offer: Upskill With 100+ Premium Cybersecurity Courses From EHA’s Diamond Membership: Join Today

AnuPriya
AnuPriya
Any Priya is a cybersecurity reporter at Cyber Press, specializing in cyber attacks, dark web monitoring, data breaches, vulnerabilities, and malware. She delivers in-depth analysis on emerging threats and digital security trends.

Recent Articles

Related Stories

LEAVE A REPLY

Please enter your comment!
Please enter your name here