Windsurf AI editor critical path traversal enables data exfiltration
CVE-2025-62353 (CVSS 9.8) allowed attackers to read and write arbitrary files on developers' systems using the Windsurf AI coding IDE. The vulnerability could be triggered via indirect prompt injection hidden in project files like README.md, exfiltrating secrets even when auto-execution was disabled.
Windsurf is an AI-powered coding IDE built by Codeium, forked from VS Code and centered around an integrated AI agent called Cascade. Developers use Cascade to search code, generate files, run commands, and work across projects with minimal manual intervention. In October 2025, security firm HiddenLayer published CVE-2025-62353, a path traversal vulnerability affecting all versions of Windsurf, with a CVSS 3.1 score of 9.8 - the kind of number that should make anyone running the software put down what they're doing and update immediately.
The vulnerability lived in two of Windsurf's built-in tools: codebase_search and write_to_file. Neither properly validated input file paths, which meant an attacker (or a malicious prompt injection payload) could instruct these tools to navigate outside the current project directory and access arbitrary files anywhere on the developer's system.
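The missing check is the classic canonicalize-then-compare guard. Windsurf's internals are not public, so the following is only a minimal sketch (with an invented function name) of the kind of validation these tools lacked:

```python
import os

def resolve_in_workspace(workspace_root: str, user_path: str) -> str:
    """Resolve user_path and refuse anything that escapes workspace_root.

    Naive joining lets a relative path like '../../../etc/passwd' (or an
    absolute path, which os.path.join silently substitutes for the root)
    walk out of the project directory entirely.
    """
    root = os.path.realpath(workspace_root)
    # Canonicalize AFTER joining, so '..' segments and symlinks are
    # collapsed before the containment check runs.
    candidate = os.path.realpath(os.path.join(root, user_path))
    if os.path.commonpath([root, candidate]) != root:
        raise PermissionError(f"path escapes workspace: {user_path}")
    return candidate
```

Checking a raw string prefix instead of the canonicalized path is the usual mistake: it passes `..`-laden input straight through to the filesystem.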
How the Attack Works
The path traversal flaw is a classic CWE-22 (Improper Limitation of a Pathname to a Restricted Directory), but its exploitation in Windsurf is distinctly modern. Because Cascade is an AI agent that reads project files and acts on their contents, an attacker doesn't need to compromise the IDE binary or run a man-in-the-middle attack. They just need to plant malicious instructions in a file the developer will open.
HiddenLayer's proof of concept placed hidden instructions inside a project's README.md file, using comments to make the injected text invisible to human readers. When Cascade processed that file - during routine code analysis, not as part of any deliberate action by the developer - the injected prompt changed the workspace path to the root of the filesystem (C:\ on Windows) and directed the write_to_file tool to access a sensitive file in a completely different directory.
The developer didn't need to approve any action. They didn't even need to know the attack was happening. Cascade, acting as what security researchers call a "confused deputy," faithfully carried out instructions that happened to come from an attacker rather than the person sitting at the keyboard.
Security Controls That Didn't Control Anything
The most damning aspect of CVE-2025-62353 is that Windsurf's own safety mechanisms failed to prevent the exploit. HiddenLayer confirmed the vulnerability was effective even when Auto Execution was set to OFF and write_to_file was explicitly placed on the tool deny list. Both controls existed. Neither one stopped the path traversal.
This is a meaningful design failure. Users who took deliberate steps to restrict their AI agent's capabilities - turning off automated execution, denying specific tools access - had every reason to believe those settings would be enforced. That the underlying tool followed injected instructions regardless of the user's configuration turns a path traversal bug into a trust violation. The controls weren't just bypassed; they were architecturally irrelevant to the actual tool invocation path.
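The lesson generalizes: a deny list only means something if it is enforced at the single choke point where tools are actually invoked, not treated as a hint the model is expected to honor. A hypothetical dispatcher-level guard (all names invented for illustration; this is not Windsurf's code):

```python
from typing import Any, Callable, Dict

class ToolDispatcher:
    """Illustrative agent tool dispatcher that enforces policy centrally.

    Every tool call - whether requested by the user or by model output
    influenced by injected instructions - passes through dispatch(), so
    the deny list cannot be bypassed by the model simply asking.
    """

    def __init__(self, tools: Dict[str, Callable[..., Any]], deny: set):
        self.tools = tools
        self.deny = deny

    def dispatch(self, name: str, **kwargs: Any) -> Any:
        if name in self.deny:
            raise PermissionError(f"tool '{name}' is denied by user policy")
        if name not in self.tools:
            raise KeyError(f"unknown tool: {name}")
        return self.tools[name](**kwargs)
```

The point of the design is that policy lives in code the model cannot talk its way around, rather than in the prompt or in UI state consulted only sometimes.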
Windsurf's tools were also not sandboxed by default. The exec tool, for example, had broad access to the user's entire system, meaning the path traversal was not the only route to arbitrary system access - it was simply the most straightforward one.
A Separate Researcher Found Similar Problems Months Earlier
HiddenLayer's CVE was published in October 2025, but it wasn't the first public disclosure of Windsurf Cascade's susceptibility to prompt injection and data exfiltration. Security researcher Johann Rehberger (who blogs as "Embrace The Red") published findings on August 21, 2025, documenting two distinct attack vectors against Windsurf Cascade that he had originally disclosed to the company on May 30, 2025.
The first involved the read_url_content tool, which allows Cascade to fetch data from web URLs. Because this tool didn't require user approval, an indirect prompt injection embedded in a source code file could hijack Cascade into reading the developer's .env file and exfiltrating its contents via an outbound HTTP request to an attacker-controlled server. Rehberger demonstrated this in a video proof of concept: simply analyzing a file with Windsurf was enough to leak environment variables, API keys, and other secrets.
The second vector used image rendering to exfiltrate data, a technique Rehberger had previously found in GitHub Copilot (which Microsoft patched). The attack embedded malicious instructions in project files that caused Cascade to render images from untrusted domains, smuggling sensitive data out encoded in the image URL itself.
Two days after that August disclosure, Rehberger published yet another finding: invisible Unicode characters could be embedded in files that appeared blank to the developer but were interpreted as instructions by Cascade. The AI agent would read the invisible text and invoke tools based on its content, including the read_url_content tool that could be used for exfiltration.
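Attacks in this last category are at least detectable with simple tooling. Below is a sketch that flags the code-point ranges most often abused for invisible instructions - Unicode "tag" characters and common zero-width characters. (The exact set Cascade interpreted is not documented; this list is an assumption based on the broader research.)

```python
# Code points that render as nothing but survive copy/paste and parsing.
INVISIBLE = (
    {0x200B, 0x200C, 0x200D, 0x2060, 0xFEFF}  # zero-width chars, BOM
    | set(range(0xE0000, 0xE0080))            # Unicode tag characters
)

def find_invisible(text: str):
    """Return (index, 'U+XXXX') pairs for suspicious invisible code points."""
    return [
        (i, f"U+{ord(ch):04X}")
        for i, ch in enumerate(text)
        if ord(ch) in INVISIBLE
    ]
```

A pre-commit hook or CI step running a check like this over incoming project files would have surfaced the "blank" payloads Rehberger described.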
Rehberger noted that Windsurf acknowledged receipt of his May 30 disclosure but then went silent on all further inquiries about triage, status, or fixes. He published after three months of no response.
The "Confused Deputy" in Your Editor
The class of vulnerability exemplified by CVE-2025-62353 isn't unique to Windsurf. AI coding assistants that have file system access, tool-calling capabilities, and the ability to process untrusted input (which includes any project file) all face some version of this problem. Simon Willison, the security researcher and open-source developer, has described the combination as the "lethal trifecta" for AI applications: access to tools, access to private data, and exposure to untrusted content.
What makes the Windsurf case stand out is the number of defense layers that either didn't exist or didn't work. The path traversal bug meant file access had no directory boundaries. The lack of tool sandboxing meant system-level access was available without special permissions. The prompt injection susceptibility meant an attacker could trigger all of this without any social engineering. And the failure of Auto Execution and deny list controls meant that even security-conscious users who had configured their environment to limit AI autonomy were still exposed.
For a CVSS 9.8, the attack chain is alarmingly low-friction. Drop a commented instruction into a README. Wait for a developer to open the project. Cascade does the rest.
The Response
Codeium's response was slow by disclosure standards. Rehberger's May 2025 report drew nothing beyond an acknowledgment of receipt. When HiddenLayer published the CVE in October, Windsurf was listed as vulnerable in "all versions," with no patched version available at publication time. After the Embrace The Red disclosures went public, Windsurf did eventually contact Rehberger to say they would work on fixes, though without providing an estimated timeline.
Version 1.12.12 and older are listed as affected. Developers who were running Windsurf during the disclosure window - and particularly those working on open-source projects where README files and other project content come from untrusted contributors - had no protection against these attacks other than not using the AI features at all.
What Developers Should Take Away
AI coding assistants that read project files and execute tool calls based on their contents occupy a fundamentally different threat model than traditional IDE extensions. A traditional VS Code extension that has a path traversal bug is concerning. An AI-powered agent with the same bug, combined with prompt injection susceptibility and no effective sandboxing, turns every project file into a potential attack surface.
The developer never clicks a link. They never approve a suspicious action. They open a project and start coding. The AI agent, doing exactly what it was designed to do, reads the files in the workspace. And buried in those files, an attacker's instructions are waiting.