News
But EchoLeak, as detailed by Fortune, shows that trusting an AI with context is not the same as controlling it. The line between helpful and harmful isn't always drawn in code; it's drawn in ...
A vulnerability that researchers call CurXecute is present in almost all versions of the AI-powered code editor Cursor, and ...
The vulnerability, dubbed EchoLeak and assigned the identifier CVE-2025-32711, could have allowed hackers to mount an attack without the target user having to do anything. EchoLeak represents the ...
Critical flaw in Cursor AI editor let attackers execute remote code via Slack and GitHub—fixed in v1.3 update.
The vulnerability, called EchoLeak, allowed attackers to silently steal sensitive data from a user's environment by simply sending them an email. No clicks, downloads, or user actions were needed.
EchoLeak shows that enterprise-grade AI isn’t immune to silent compromise, and securing it isn’t just about patching layers. “AI agents demand a new protection paradigm,” Garg said.
A critical AI vulnerability, 'EchoLeak,' was discovered in Microsoft 365 Copilot by Aim Labs researchers in January 2025. This flaw allowed attackers to exfiltrate sensitive user data through ...
"EchoLeak is a reminder that even robust, enterprise-grade AI tools can be leveraged for sophisticated and automated data theft," said Itay Ravia, Head of Aim Labs.
But, as the report by Fortune suggests, the vulnerability had a name, EchoLeak, and behind it, a sobering truth: hackers had figured out how to manipulate an AI assistant into leaking private data ...