Zed editor AI agent could bypass permissions for arbitrary code execution

Tombstone icon

CVE-2025-55012 (CVSS 8.5) allowed Zed's AI agent to bypass user permission checks and create or modify project configuration files, enabling execution of arbitrary commands without explicit approval. Attackers could trigger this through compromised MCP servers, malicious repo files, or tricking users into fetching URLs with hidden instructions.

Incident Details

Severity:Facepalm
Company:Zed Industries
Perpetrator:AI coding agent
Incident Date:
Blast Radius:All Zed users with Agent Panel prior to version 0.197.3

Zed is a modern, open-source code editor built for speed. It supports multiplayer editing (multiple developers working in the same file simultaneously) and includes an Agent Panel - an integrated AI assistant that can read and modify code, interact with project files, and execute actions on behalf of the developer. The Agent Panel is one of Zed's selling points: an AI coding assistant built directly into the editor rather than bolted on as an extension.

CVE-2025-55012, published August 11, 2025, revealed that this AI assistant could bypass the permission checks that were supposed to keep it under the developer's control. The CVSS 4.0 score was 8.5, rated HIGH. The vulnerability was classified under CWE-284 (Improper Access Control) and CWE-288 (Authentication Bypass Using an Alternate Path or Channel). All versions of Zed with the Agent Panel prior to 0.197.3 were affected.

The Permission Model

Zed's Agent Panel was designed with a permission system. When the AI agent wanted to perform certain actions - creating files, modifying configuration, executing commands - it was supposed to request user approval. The developer would see a prompt, review what the agent wanted to do, and explicitly approve or deny the action. This is the standard safety model for AI coding assistants: the AI suggests, the human approves.

The vulnerability broke this model. The AI agent could bypass the permission checks and directly create or modify project-specific configuration files without explicit user approval. In Zed's architecture, project configuration files control how the editor behaves within a given project. They can define tasks, scripts, and commands that Zed executes. If an AI agent can write to those files without permission, it can inject arbitrary commands into the project's configuration, and those commands execute when the user triggers the associated task or when Zed processes the configuration.

This is remote code execution. The AI agent, which was supposed to operate under human supervision, could silently modify the project in ways that resulted in arbitrary code running on the developer's machine.

The Attack Vectors

The vulnerability had multiple attack paths, which increased both its severity and its practical exploitability.

Compromised MCP servers: Zed supports the Model Context Protocol (MCP), which allows the AI agent to connect to external servers that provide additional context, tools, and capabilities. If an attacker controlled or compromised an MCP server that a developer connected to, the server could feed the AI agent instructions that triggered the permission bypass. The agent would then modify project configuration files based on instructions from the malicious server, without the developer being asked to approve the changes.

Malicious repository files: An attacker could place files in a repository that contained hidden instructions - text that the AI agent would interpret as commands when it read the file as part of its context-gathering process. This is a prompt injection attack. The developer clones what appears to be a normal repository, opens it in Zed, and when the Agent Panel reads the project files, it encounters the injected instructions and acts on them. The developer sees nothing unusual because the malicious content is designed to be invisible or unremarkable to a human reader.

URL fetch with hidden instructions: If a user asked the AI agent to fetch a URL - reviewing a webpage, reading documentation, or retrieving a code snippet - an attacker could host content at that URL containing hidden instructions. The agent would process the page, encounter the embedded commands, and execute the permission bypass to modify project files. The developer's action (asking the agent to read a URL) appears benign. The result (arbitrary code execution) is not.

All three vectors share a common structure: an external source provides instructions to the AI agent, and the agent acts on those instructions by exploiting the permission bypass. The developer is either unaware of the malicious input (repository files, URL content) or has no reason to suspect the source (a connected MCP server).

The Confused Deputy Problem

This vulnerability is a textbook example of the "confused deputy" problem in computer security. A confused deputy is a privileged program that is tricked into misusing its authority by a less-privileged entity. The AI agent had legitimate access to project files and configuration - it needed that access to function as a coding assistant. The permission system was supposed to ensure that the agent only used that access when the developer explicitly approved. The vulnerability allowed external actors to trick the agent into using its access without going through the approval process.

The AI agent was the deputy. MCP servers, repository files, and URLs were the less-privileged entities that told the deputy what to do. The developer, who was supposed to be in control, was bypassed entirely.

This pattern - AI agents with broad file system access being manipulated through prompt injection - had appeared in other AI coding tools in mid-2025. Cursor's MCPoison vulnerability (also covered on this site) demonstrated that MCP trust relationships could be exploited to achieve similar results. The Windsurf path traversal vulnerability showed that inadequate path validation in AI coding tools enabled data exfiltration. Zed's CVE-2025-55012 was another entry in the same category: an AI agent with real system access, a permission model that didn't hold up under adversarial conditions, and attack vectors that were practical to exploit.

The Fix

Zed Industries patched the vulnerability in version 0.197.3 by enforcing stricter permission checks on Agent Panel actions. The fix ensured that the agent could not create or modify configuration files without going through the user approval flow, regardless of how the instruction to do so was received.

For users who could not immediately upgrade, the recommended workarounds were to avoid sending prompts to the Agent Panel entirely or to restrict the AI agent's file system access to limit the damage it could do. Neither workaround was particularly practical for users who relied on the Agent Panel as part of their development workflow. Disabling the AI agent eliminates the vulnerability but also eliminates the feature. Restricting file system access requires manual configuration that most developers would not know how to set up correctly.

The Design Tension

AI coding agents face a fundamental design tension. To be useful, they need access to the project's files, configuration, and execution environment. To be safe, they need to operate under strict constraints that prevent unauthorized actions. The more access an agent has, the more useful it can be. The more constraints it operates under, the safer it is. Finding the right balance is difficult, especially when the agent needs to process external input (MCP servers, URLs, repository contents) that may be adversarial.

Zed's original permission model was conceptually sound: agent proposes, user approves. The implementation had a flaw that allowed the approval step to be skipped. Once fixed, the model was restored. But the vulnerability's existence raised a question that applies to every AI coding tool with file system access: is the permission model robust enough to withstand adversarial prompt injection? If a sufficiently clever prompt can induce the agent to bypass its own safety checks, the permission model is a speed bump rather than a barrier.

The CVSS vector for this vulnerability specified local attack vector with no privileges required and passive user interaction. That means the attacker didn't need to be on the developer's machine or have any existing access. They just needed the developer to open a project, connect to a server, or fetch a URL. Developers routinely clone repositories from strangers, connect to shared development servers, and ask AI assistants to read documentation links. Those conditions are met constantly.

Zed fixed the bug. The broader question of how to build AI agents that retain useful capabilities while resisting adversarial manipulation remains open across the entire AI-assisted development tool category.

Discussion