AI-generated npm package stole Solana wallets


A malicious npm package called @kodane/patch-manager, apparently generated using Anthropic's Claude, posed as a legitimate Node.js utility while hiding a Solana wallet drainer in its postinstall script. The package accumulated over 1,500 downloads before npm removed it on July 28, 2025, draining cryptocurrency from developers who installed it; the payload ran automatically at install time, with no further user action required.

Incident Details

Severity: Catastrophic
Company: Solana Ecosystem
Perpetrator: Developer
Incident Date:
Blast Radius: Supply-chain compromise of developers; user funds drained.

npm, the default package registry for the JavaScript ecosystem, hosts over two million packages. Developers install them constantly, often with a single command and without reviewing the source code. That trust model is the foundation of modern JavaScript development, and it is also the reason a package called @kodane/patch-manager was able to rack up more than 1,500 downloads, and drain cryptocurrency from developers who installed it, before anyone noticed.

What the package did

Safety, a software supply chain security firm, discovered @kodane/patch-manager through its malicious package detection system. The package presented itself as an "advanced" Node.js utility for managing code patches - a plausible-sounding tool that a developer working with Solana projects might install alongside other build dependencies. Its README was professional. Its code was clean and well-commented, with structure and formatting that looked like the output of a competent developer. And buried in its postinstall script was a wallet drainer.

npm's postinstall hook runs automatically the moment a package finishes installing. No clicks required. No confirmation dialogs. The developer does not even need to import or use the package in their code. Just running npm install @kodane/patch-manager was enough to trigger the payload. The drainer targeted Solana wallets specifically, exfiltrating private keys or seed phrases and using them to transfer funds out of the victim's wallet.
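To see how little machinery this requires, consider a hypothetical package.json - the package and script names below are invented for illustration, not taken from the actual malware:

```json
{
  "name": "some-patch-utility",
  "version": "1.0.0",
  "description": "Advanced patch management for Node.js",
  "scripts": {
    "postinstall": "node lib/setup.js"
  }
}
```

When npm finishes installing this package, it runs the postinstall command with the installing user's permissions. Whatever lib/setup.js contains - a harmless setup step or a wallet drainer - executes before the developer has looked at a single line of it.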

The package accumulated over 1,500 downloads before npm took it down on July 28, 2025. In cryptocurrency terms, 1,500 compromised wallets can represent substantial losses - Solana developers frequently handle wallets used by decentralized applications, bots, and testing environments, many of which hold real funds.

Why researchers think it was AI-generated

What distinguished @kodane/patch-manager from the steady stream of malicious npm packages that security researchers flag every month was the evidence that its code had been generated by an AI model - specifically, Anthropic's Claude chatbot.

Paul McCarty, the researcher at Safety who published the initial analysis, identified structural markers in the code that pointed to AI generation. The package exhibited unusually consistent commenting patterns, methodical code organization, and the kind of thorough but slightly mechanical documentation style characteristic of Claude's output. The code was, by malware standards, remarkably well-written. It did not look like a rushed script cobbled together by someone with poor coding habits. It looked like production-quality code, because it had been generated by a model trained on millions of examples of production-quality code.

McCarty described it as an example of how "threat actors are leveraging AI to create more convincing and dangerous malware." The point was not that AI had invented some new attack technique. Wallet drainers on npm are old news. The point was that AI had made the packaging more convincing. The README was coherent. The code looked professional. The naming was plausible. All the social signals that a developer might use to quickly assess whether a package is legitimate - formatting, comments, documentation quality - were executed at a level that previously required either genuine skill or significant effort to fake.

The supply chain problem

npm supply chain attacks are not rare. Security firms report hundreds of malicious packages per month across npm, PyPI, and other registries. Most of them are crude: typosquatting on popular package names, copy-pasting malicious payloads with minimal obfuscation, publishing with throwaway accounts. Detection tools catch many of them quickly because they exhibit obvious red flags - obfuscated code, suspicious network calls, no README, no version history.

AI-generated malicious packages disrupt that pattern. A package generated by Claude or a similar model will have clean code, sensible variable names, professional documentation, and logical file structure. It may even include unit tests. The social engineering layer - the part that convinces a developer the package is legitimate - becomes much more polished. Automated detection tools that rely on heuristics like code obfuscation or unusual formatting may not flag a package that reads like well-maintained open source.
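As a sketch of what such heuristics look like - an illustrative toy, not Safety's actual detection system - a scanner might flag obfuscation markers that crude malware exhibits and clean AI-generated code does not:

```javascript
// Toy heuristic scanner: flags classic obfuscation markers in package source.
// Illustrative only - real detection pipelines are far more sophisticated.
function obfuscationFlags(source) {
  const flags = [];
  if (/\beval\s*\(/.test(source)) flags.push("eval");
  if (/Function\s*\(\s*["']/.test(source)) flags.push("dynamic Function constructor");
  if (/[A-Za-z0-9+/]{200,}={0,2}/.test(source)) flags.push("long base64-like blob");
  if (/\\x[0-9a-f]{2}\\x[0-9a-f]{2}/i.test(source)) flags.push("hex-escaped strings");
  return flags;
}

// Crude malware trips flags; well-formatted AI output trips none of them.
console.log(obfuscationFlags('eval(atob("aGVsbG8="))'));          // ["eval"]
console.log(obfuscationFlags("export function applyPatch() {}")); // []
```

A package that hides its payload behind a plausible postinstall command, in code that reads like well-maintained open source, sails past every check in this list.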

The @kodane/patch-manager case also demonstrated the efficiency of the approach. Writing a convincing malicious package from scratch - complete with realistic documentation, proper npm metadata, and a hidden but functional payload - takes time and effort. With an AI chatbot, much of that work can be done in a single conversation. The threat actor does not need to be a skilled JavaScript developer. They need to be skilled enough to prompt an AI model into generating the components, then stitch the malicious payload into the output.

Solana as a target

The choice of Solana was not random. Solana's ecosystem runs heavily on JavaScript and TypeScript - the @solana/web3.js library is one of the most-downloaded npm packages in the cryptocurrency space. Developers building Solana bots, decentralized applications, and trading tools routinely install npm packages, and many of those projects interact directly with private keys or wallet seed phrases during development and testing.

This makes Solana developers a high-value target for supply chain attacks. A compromised package that lands in a developer's Solana project has a reasonable chance of finding wallet credentials in the environment. Unlike attacks on, say, a frontend web framework, where the payload might steal browser cookies or inject ads, an attack on the Solana developer ecosystem can go directly for money.

The @solana/web3.js library itself had been compromised in a separate incident in December 2024, when a maintainer account was hijacked and malicious versions 1.95.6 and 1.95.7 were published with code that exfiltrated private keys to an external endpoint. That attack hit during a narrow window of a few hours before the malicious versions were pulled, but it demonstrated the same fundamental vulnerability: the npm ecosystem trusts package authors, and when that trust is violated, the blast radius can be enormous.
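One partial mitigation against hijacked releases like that one is pinning exact versions instead of accepting semver ranges. A hypothetical dependencies fragment, using 1.95.5 as the assumed known-good version:

```json
{
  "dependencies": {
    "@solana/web3.js": "1.95.5"
  }
}
```

A caret range like "^1.95.5" would have allowed a fresh install without a lockfile to pull the malicious 1.95.6 or 1.95.7 automatically; an exact pin holds the resolved version until someone deliberately changes it.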

Detection gaps

The @kodane/patch-manager package was caught by Safety's automated detection system. But 1,500 downloads happened before the takedown. That gap between publication and removal is the core problem with registry-level security. npm's own malware detection, combined with community reporting and third-party scanning tools, catches many threats. But every detection system has a lead time, and a well-crafted package extends that lead time by avoiding the obvious indicators.

McCarty noted that since his initial analysis of @kodane/patch-manager, he had continued to find malicious packages showing "tell-tale signs of AI-generated code." The pattern is not isolated. As AI coding assistants become more capable and more accessible, the barrier to creating professional-looking malicious packages keeps dropping. A threat actor who previously could only publish crude typosquatting packages can now generate something that passes casual inspection.

The uncomfortable question

The incident raises a practical question for the npm ecosystem and package registries in general: if AI can produce malicious packages that look clean, well-documented, and structurally sound, what heuristics remain for distinguishing them from legitimate software? Code quality was once a rough proxy for trustworthiness. A well-maintained package with thorough comments and clean structure was more likely to be legitimate than a hastily published blob of obfuscated JavaScript. AI erodes that signal.

The defenses that remain are not technical signals from the code itself. They are metadata-level checks: Is the publisher account new? Does the package have a real development history with incremental commits? Are there dependents? Is there a corresponding GitHub repository with meaningful activity? These signals are harder to fake with AI, but they are not impossible to manufacture either, especially as AI tools get better at generating plausible activity patterns.

For now, the most reliable defense is still the most tedious one: review what you install. Check the publisher. Read the postinstall script. Use lockfiles. Pin versions. And accept that the next malicious package in your install tree will probably look exactly like the ones you trust.
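"Read the postinstall script" can be partially automated. A minimal sketch - the function and manifest here are my own, not a published tool - that reports any install-time lifecycle hooks a package manifest declares:

```javascript
// Minimal audit: report install-time lifecycle scripts declared in a
// package.json manifest. Illustrative names, not a published tool.
const LIFECYCLE_HOOKS = ["preinstall", "install", "postinstall", "prepare"];

function lifecycleScripts(manifest) {
  const scripts = manifest.scripts || {};
  return LIFECYCLE_HOOKS
    .filter((hook) => hook in scripts)
    .map((hook) => ({ hook, command: scripts[hook] }));
}

// A manifest like the drainer's would surface immediately:
const suspect = {
  name: "some-patch-utility",
  scripts: { postinstall: "node lib/setup.js", test: "jest" },
};
console.log(lifecycleScripts(suspect));
```

In practice you would run a check like this over every package.json under node_modules, or sidestep the problem entirely with npm install --ignore-scripts, which disables lifecycle scripts at the cost of breaking the minority of packages that legitimately need them.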
