Zero-Click Microsoft 365 Copilot Vulnerability Allows Attackers to Exfiltrate Sensitive Data via Teams

A critical zero-click vulnerability in Microsoft 365 Copilot that allows attackers to automatically exfiltrate sensitive organizational data without requiring any user interaction.

The vulnerability, dubbed “EchoLeak,” exploits fundamental design vulnerability in RAG-based AI systems and represents the first zero-click vulnerability discovered in a major AI application with concrete cybersecurity implications.

The EchoLeak vulnerability introduces a new exploitation technique called “LLM Scope Violation,” where attackers leverage the AI model’s internal mechanics to access privileged data through unprivileged inputs.

The attack begins when an adversary sends a seemingly innocuous email to a target organization employee, requiring no restrictions on the sender’s email address.

The malicious email contains carefully crafted instructions disguised as normal correspondence to the recipient, rather than obvious commands to an AI system.

This approach successfully bypasses Microsoft’s XPIA (cross-prompt injection attack) classifiers, which are designed to prevent prompt injection attacks from reaching the underlying large language model.

The attack exploits M365 Copilot’s integration with Microsoft Graph, which retrieves relevant information from users’ organizational environments including mailboxes, OneDrive storage, Office files, SharePoint sites, and Teams chat history.

Microsoft 365 Copilot Vulnerability

The vulnerability chain involves multiple sophisticated bypasses of Microsoft’s security controls. Initially, researchers attempted to exfiltrate data through external links, but discovered that M365 Copilot redacts standard markdown links from chat history.

However, they identified that reference-style markdown links and images remain undetected by the redaction system.

The breakthrough came with the discovery of a Microsoft Teams URL that could be exploited for data exfiltration: https://eu-prod.asyncgw.teams.microsoft.com/urlp/v1/url/content.

This endpoint allows attackers to make GET requests that extract sensitive information without requiring users to accept invitations or perform special actions.

The attack leverages Microsoft’s own infrastructure to circumvent Content Security Policy restrictions that would normally block external data transmission.

Researchers also developed a “RAG spraying” technique to maximize the likelihood of malicious content being retrieved by the AI system.

By creating emails with multiple topic sections covering various business areas like employee onboarding, HR FAQs, and leave management, attackers can increase the probability that their malicious instructions will be included in the AI’s context when users ask questions.

Security Implications

The EchoLeak vulnerability represents a significant advancement in AI attack methodologies, demonstrating how threat actors can weaponize AI agents against themselves.

The attack can extract what the AI system identifies as “the most sensitive secret/personal information” from the current context, effectively using the model’s own capabilities to identify and exfiltrate valuable data.

Aim Labs has disclosed the vulnerability to Microsoft’s Security Response Center and reports no awareness of customers being impacted to date.

The research highlights broader security concerns for RAG-based chatbots and AI agents, as the underlying design vulnerability may affect other similar applications beyond Microsoft 365 Copilot.

This discovery underscores the need for more sophisticated security frameworks specifically designed for AI applications, as traditional input validation approaches prove insufficient for defending against unstructured AI inputs.

Find this Story Interesting! Follow us on LinkedIn and X to Get More Instant Updates.

Mayura
Mayura
Mayura Kathir is a cybersecurity reporter at GBHackers News, covering daily incidents including data breaches, malware attacks, cybercrime, vulnerabilities, zero-day exploits, and more.

Recent Articles

Related Stories

LEAVE A REPLY

Please enter your comment!
Please enter your name here