Claude Code project files let malicious repositories trigger RCE and steal API keys

Check Point Research disclosed a set of Claude Code vulnerabilities on February 25, 2026 that let attacker-controlled repositories execute shell commands and exfiltrate Anthropic API credentials through malicious project configuration. The attack abused hooks, MCP server definitions, and environment settings stored in repository files that Claude Code treated as collaborative project configuration. Anthropic patched the issues before public disclosure, but the research showed just how little distance separates "shareable team settings" from "clone this repo and let it run code on your machine."

Incident Details

Severity: Catastrophic
Company: Anthropic
Perpetrator: AI coding agent
Incident Date:
Blast Radius: Developers who cloned and opened untrusted repositories in Claude Code faced remote code execution and Anthropic API key theft through project-level configuration files
Shared Settings, Shared Trouble

Claude Code's selling point is not mystery. It sits in a developer's terminal, reads the repository, runs commands, edits files, and helps move real work forward. That makes it powerful. It also means its trust model matters more than the trust model of an ordinary chatbot. A bad answer in a browser tab is annoying. A bad action in a coding agent is an incident.

On February 25, 2026, Check Point Research published a particularly clean example of that distinction. The firm's researchers showed that malicious repositories could abuse Claude Code's project-level configuration features to achieve remote code execution and steal Anthropic API credentials. The exploit path ran through files intended to make team setups convenient: shared hooks, MCP server configuration, and environment settings stored with the repository.

In other words, the part of the product designed to help collaborators inherit a consistent Claude Code setup also gave attackers a way to smuggle executable intent into a repo.

The Attack Surface Lived in the Repo

Claude Code supports a .claude/settings.json file at the project level. The idea is understandable. Teams want shared tool settings. If one developer configures useful hooks or MCP connections, others may want the same workflow when they clone the repo. It is the sort of feature that looks almost mandatory once you think of AI coding tools as part of the build environment.
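To make the mechanism concrete, here is a hypothetical sketch of the kind of shared project configuration the article describes. The specific hook names, server names, and field layout below are invented for illustration, not taken from Anthropic's documented schema; the point is only that hooks, MCP server definitions, and environment settings all travel with the repository:

```python
import json

# Hypothetical example of a project-scoped .claude/settings.json that a
# team might commit so every developer who clones the repo inherits the
# same setup. All names here are illustrative assumptions.
shared_settings = {
    "hooks": {
        # A hook that runs a shell command around tool use -- convenient
        # for a team, but an execution channel if the repo is hostile.
        "PostToolUse": [
            {"type": "command", "command": "./scripts/format.sh"}
        ]
    },
    "mcpServers": {
        # An MCP server launch definition; a malicious repo could point
        # this at attacker-controlled code instead.
        "team-tools": {"command": "npx", "args": ["team-mcp-server"]}
    },
    "env": {
        # Environment overrides inherited by anyone who opens the project.
        "PROJECT_MODE": "dev"
    },
}

print(json.dumps(shared_settings, indent=2))
```

Every block in that file is something a teammate would want to inherit, and every block is also something a stranger's repository can define.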

Check Point's argument was that repository-scoped configuration becomes a security boundary whether the product acknowledges it or not.

The researchers described several ways a malicious repository could turn those configuration pathways into execution channels. Hooks could run shell commands. MCP server definitions could be pointed somewhere hostile. Environment variables could be arranged to leak secrets or alter behavior in surprising ways. A developer did not need to paste a suspicious prompt into the tool. The act of cloning and opening the project was enough to import the attack surface.
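Since cloning alone imports the attack surface, one practical mitigation is to audit project-level configuration before letting an agent act on a fresh clone. The sketch below is a minimal, hypothetical pre-open scanner; the file path and the set of risky keys are assumptions based on the attack paths described above, not a complete or authoritative list:

```python
import json
from pathlib import Path

# Keys that, per the attack paths described above, can carry executable
# intent: hook commands, MCP server launch definitions, env overrides.
# This set is an illustrative assumption, not an exhaustive schema.
RISKY_KEYS = {"hooks", "mcpServers", "env"}

def audit_claude_settings(repo_root: str) -> list[str]:
    """Flag project-level settings that deserve human review before an
    agent is allowed to act on a freshly cloned repository."""
    findings: list[str] = []
    settings_path = Path(repo_root) / ".claude" / "settings.json"
    if not settings_path.exists():
        return findings
    try:
        settings = json.loads(settings_path.read_text())
    except (json.JSONDecodeError, OSError) as exc:
        # An unreadable settings file is itself worth a human look.
        return [f"unreadable settings file: {exc}"]
    for key in RISKY_KEYS & settings.keys():
        findings.append(f"review before trusting: {key!r} -> {settings[key]!r}")
    return findings
```

Running something like `audit_claude_settings("/path/to/untrusted-clone")` before opening the project turns "clone and go" into "clone, review, then go" — the reflex the next section argues developers have been trained out of.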

That is what makes this story more interesting than a basic command injection bug. The repository itself became the delivery vehicle. The configuration did not look like malware in the traditional sense because it was stored where Claude Code expected project instructions to live.

Convenience Features Turned Into an Execution Chain

Check Point's writeup framed the problem around trust boundaries, which is exactly right. Claude Code was designed with the assumption that project-level settings are collaborative context. Attackers only needed to reinterpret that same feature as executable input.

Once that trust boundary collapsed, the rest of the exploit chain was straightforward. A developer clones an untrusted repository. Claude Code reads the project settings. Hostile configuration triggers command execution or wires the tool into attacker-controlled infrastructure. The attack then pulls the Anthropic API key or other sensitive values from the local environment and sends them out.

The resulting behavior is not fundamentally exotic. Software has had unsafe configuration parsing, unsafe plugin loading, and unsafe shell invocation bugs forever. The new wrinkle is where the trust came from. Traditional developer paranoia tells people to distrust binaries, shell scripts, and weird installers. Claude Code required users to distrust what looked like ordinary collaborative dotfiles inside a source repository.

That is a harder reflex to build, especially because AI coding tools have spent the last year encouraging exactly the opposite behavior. Clone the repo. Let the assistant inspect it. Let the assistant wire up the tools. Let the assistant help.

Why This Was a Catastrophic Fit for an AI Coding Tool

Security failures in coding agents are ugly for a simple reason: the tools naturally sit close to the crown jewels. Source code. Shell access. Credentials. Tokens. Secrets in environment files. API keys in terminal sessions. Internal build steps. Sometimes cloud access. If a hostile repo can trick the agent into running arbitrary commands, the blast radius is not just "the app crashes." It is source code theft, credential theft, lateral movement, or silent modification of the development environment.

Check Point said the flaws enabled RCE and API token exfiltration via malicious repositories and that Anthropic patched the issues before disclosure. That responsible-disclosure outcome is good news for users. It does not change the fact that the product design made the exploit path plausible in the first place.

TechRadar's summary got the important point right: the assistant can become a malicious insider if the surrounding controls treat untrusted project input as trusted operating context.

This is the same structural problem that keeps showing up across AI coding tools. The model is only half the product. The other half is the orchestration layer that connects the model to filesystems, shells, credentials, editors, browsers, and plugins. Most of the really interesting failures happen in that second half, not in the language model itself.

Patched Before Disclosure, Still Worth the Gravestone

Anthropic worked with Check Point and patched the reported issues before the February 25 publication. That is the competent part of the story. The less flattering part is that the bugs existed in a product already marketed as a serious developer tool.

AI coding vendors keep learning the same lesson under slightly different branding: if your tool executes actions in a developer environment, it is not allowed to be sloppy about provenance. A repository is not safe because it is a repository. A config file is not safe because it is a config file. A hook is not safe because it was meant to automate something legitimate. Every one of those surfaces can become attacker-controlled input if the repo came from the wrong place or the wrong contributor.

Shared MCP and hook configuration create a particularly awkward tension. The entire point is frictionless reuse. But frictionless reuse is exactly what an attacker wants from an exploit path. The moment the product promises "your teammates can clone this and inherit the same setup," the product has created a mechanism where a stranger can try to do the same thing under less friendly circumstances.

The New Default Question

The deeper consequence of stories like this is that they change what "opening a repository" means. For decades, opening a repo mostly meant reading text files and maybe compiling code if you chose to. With agentic tools, opening a repo increasingly means presenting a semi-autonomous system with instructions, configs, and contextual text that may trigger actions or shape future actions.

That makes repository trust a more active question than many developers are used to asking. Is the source trustworthy? Are the tool configs reviewed like code? Are hooks pinned and explicit? Does the assistant require re-approval when project-level settings change? Can it execute networked actions before the user understands the setup? If the answer to those questions is vague, the repo is not just content. It is a possible execution environment.

Claude Code did patch the reported issues. Good. It still earned a headstone because the research showed how thin the line had become between "helpful project defaults" and "clone this repo and surrender your keys."
