First Zero-Click AI Vulnerability Enables Data Exfiltration From MS365 Copilot

Researchers have discovered the first zero-click AI vulnerability, dubbed “EchoLeak,” that allows attackers to automatically exfiltrate sensitive and proprietary information from M365 Copilot’s context without the user’s awareness and without relying on any specific victim behavior. The underlying technique, termed “LLM Scope Violation,” may have additional manifestations in other RAG-based chatbots and AI agents, making this a major advance in how threat actors can attack AI agents – by leveraging internal model mechanics.

More details here: https://www.aim.security/lp/aim-labs-echoleak-blogpost

Ensar Seker, CISO at SOCRadar, had this to say:

“The EchoLeak discovery by Aim Labs exposes a critical shift in cybersecurity risk, highlighting how even well-guarded AI agents like Microsoft 365 Copilot can be weaponized through what Aim Labs correctly terms an ‘LLM Scope Violation.’ This attack, which allows zero-click data exfiltration from an AI assistant’s context simply by sending an email, breaks from traditional breach tactics as it doesn’t require any user action beyond receiving mail. The fact that it bypasses server-side classifiers and markdown redaction rules demonstrates how these vulnerabilities are baked into agent-level logic, not just surface UI flows.

“This has serious implications for NATO, government, defense, healthcare, and anyone using enterprise AI assistants: attackers no longer need to compromise user credentials or rely on phishing. They can manipulate a trusted AI interface directly. The multi-step EchoLeak chain is both elegant and insidious: it leverages retrieval-augmented generation (RAG), content-security-policy quirks, and markdown behavior to funnel data out silently to attacker-controlled URLs. 

“What stands out especially is that this isn’t limited to Copilot. As Aim Labs warns, any RAG-based agent that processes untrusted inputs alongside internal data is vulnerable to scope violations. This signals a broader architectural flaw across the AI assistant space – one that demands runtime guardrails, stricter input scoping, and inflexible separation between trusted and untrusted content.

“Organizations deploying AI agents must act quickly: disable external email ingestion in Copilot, enforce DLP tags, and apply prompt-level filters that block structured output or suspicious links. They should also treat every AI deployment with the same scrutiny reserved for enterprise applications, integrating AI-specific security controls into DevSecOps and threat modeling. Insecure guards at the model layer are now as critical a risk as insecure interfaces at the network layer.

“EchoLeak is a watershed moment. It shows that AI agents can be their own attackers, and secure-by-design principles must evolve just as AI shifts from assistant to agent.”
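One of the mitigations Seker mentions – prompt-level filters that block suspicious links in AI output – can be illustrated with a short sketch. This is not how Microsoft or Aim Labs implement it; the allowlisted domains and function names here are purely hypothetical. The idea is that markdown links or images pointing at untrusted domains in model output are redacted before rendering, so the rendered response cannot silently fire a request to an attacker-controlled URL.

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist; a real deployment would source this from policy.
TRUSTED_DOMAINS = {"sharepoint.com", "office.com"}

# Matches markdown images and links: ![alt](url) or [text](url)
MARKDOWN_LINK = re.compile(r"!?\[[^\]]*\]\((https?://[^)\s]+)\)")

def redact_untrusted_links(llm_output: str) -> str:
    """Replace markdown links/images whose target host is not on the
    allowlist with a placeholder, blocking zero-click exfiltration via
    auto-fetched URLs in rendered output."""
    def _replace(match: re.Match) -> str:
        host = (urlparse(match.group(1)).hostname or "").lower()
        if any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS):
            return match.group(0)  # trusted target, keep as-is
        return "[link removed by policy]"
    return MARKDOWN_LINK.sub(_replace, llm_output)
```

A filter like this is only one layer – as the quote notes, the deeper fix is architectural separation between trusted and untrusted content in the agent’s context.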

Well, this isn’t good given the fact that AI is being deployed everywhere for everything. I think it’s a safe bet that we’ll be seeing more exploits of this type going forward, and the danger that they pose will only increase.
