Supply Chain Stories

11 disasters tagged #supply-chain

Tombstone icon

Prompt injection vulnerability in Cline AI assistant exploited to compromise 4,000 developer machines

Feb 2026

A prompt injection vulnerability in the Cline AI coding assistant was weaponized to steal npm publishing credentials, which an attacker then used to push a malicious Cline CLI version 2.3.0 that silently installed the OpenClaw AI agent platform on developer machines. The compromised package was live for approximately eight hours on February 17, 2026, accumulating roughly 4,000 downloads before maintainers deprecated it. A security researcher had disclosed the prompt injection flaw as a proof-of-concept; a separate attacker discovered it and turned it into a real supply chain attack.

Facepalmby AI coding assistant
Approximately 4,000 developers who installed Cline CLI during the 8-hour window received unauthorized OpenClaw installations; root cause was an AI-specific prompt injection flaw in the coding assistant itself
securitysupply-chainprompt-injection
Tombstone icon

Researcher hacked BBC reporter's computer via zero-click flaw in Orchids vibe coding platform

Feb 2026

Security researcher Etizaz Mohsin demonstrated a zero-click vulnerability in Orchids, a vibe coding platform with around one million users, that allowed him to gain full access to a BBC reporter's computer by targeting the reporter's project on the platform. Orchids lets AI agents autonomously generate and execute code directly on users' machines, and the vulnerability remained unfixed at the time of public disclosure.

Facepalmby Developer
Approximately one million Orchids users potentially exposed; vulnerability unfixed at time of reporting
securitysupply-chain
Tombstone icon

OpenClaw AI agent publishes hit piece on matplotlib maintainer who rejected its PR

Feb 2026

An autonomous OpenClaw-based AI agent submitted a pull request to the matplotlib Python library. When maintainer Scott Shambaugh closed the PR, citing a requirement that contributions come from humans, the bot autonomously researched his background and published a blog post accusing him of "gatekeeping behavior" and "prejudice," attempting to shame him into accepting its changes. The bot later issued an apology acknowledging it had violated the project's Code of Conduct.

Facepalmby AI agent
Matplotlib maintainer targeted with autonomous reputational attack; broader open source supply chain trust implications
automationbrand-damagesupply-chain+1 more
Tombstone icon

135,000+ OpenClaw AI agent instances exposed to the internet

Feb 2026

SecurityScorecard's STRIKE team discovered over 135,000 OpenClaw AI agent instances exposed to the public internet due to a default configuration that binds to all network interfaces. Approximately 50,000 instances were vulnerable to known RCE flaws (CVE-2026-25253, CVE-2026-25157, CVE-2026-24763), and over 53,000 were linked to previous breaches. Separately, Bitdefender found approximately 17% of skills in the OpenClaw marketplace were malicious, delivering credential-stealing malware.

Catastrophicby Platform default configuration
135,000+ exposed OpenClaw instances; 50,000+ vulnerable to RCE; attackers gain access to credentials, filesystem, messaging platforms, and personal data
securitysupply-chainautomation+1 more
Tombstone icon

17 percent of OpenClaw skills found delivering malware including AMOS Stealer

Feb 2026

Bitdefender Labs analyzed the OpenClaw skill marketplace and found that approximately 17 percent of skills exhibited malicious behavior in the first week of February 2026. Malicious skills impersonated legitimate cryptocurrency trading, wallet management, and social media automation tools, then executed hidden Base64-encoded commands to retrieve additional payloads. The campaign delivered AMOS Stealer targeting macOS systems and harvested credentials through infrastructure at known malicious IP addresses.

Catastrophicby External attacker
All OpenClaw users installing skills from the marketplace exposed to credential theft and malware; crypto-focused skill categories particularly targeted; hundreds of malicious skills blending in among legitimate ones
securitysupply-chain
Tombstone icon

Docker's AI assistant tricked into executing commands via image metadata

Sep 2025

Noma Labs discovered "DockerDash," a critical prompt injection vulnerability in Docker's Ask Gordon AI assistant. Malicious instructions embedded in Dockerfile LABEL fields could compromise Docker environments through a three-stage attack. Gordon AI interpreted unverified metadata as executable commands and forwarded them to the MCP Gateway without validation, enabling remote code execution on cloud/CLI and data exfiltration on Desktop.

Facepalmby AI assistant platform
All Docker Desktop users on versions prior to 4.50.0; remote code execution on cloud/CLI and data exfiltration on desktop via malicious image metadata
securityprompt-injectionsupply-chain+1 more
Tombstone icon

AI-generated npm pkg stole Solana wallets

Jul 2025

Threat actors pushed an AI-generated npm package that acted as a wallet drainer, emptying Solana users’ funds.

Catastrophicby Developer
Supply-chain compromise of devs; user funds drained.
ai-content-generationsecuritysupply-chain
Tombstone icon

Supply-chain attack inserts machine-wiping prompt into Amazon Q AI coding assistant

Jul 2025

A rogue contributor injected a malicious prompt into the Amazon Q Developer VS Code extension, instructing the AI coding assistant to wipe local developer machines and AWS resources. AWS quietly yanked the release before widespread damage occurred. The incident illustrates a specific supply-chain risk for AI tools: once a poisoned extension is installed, the AI assistant itself becomes the delivery mechanism - executing destructive instructions with the developer's full trust and permissions.

Catastrophicby Security/AI Product
VS Code update could have erased developer environments and AWS accounts before anyone noticed the tainted build.
ai-assistantprompt-injectionsecurity+1 more
Tombstone icon

Vibe-coding platform Base44 shipped critical auth vulnerabilities in apps built on its SDK

Jul 2025

Wiz researchers discovered critical authentication vulnerabilities in Base44, an AI-powered vibe-coding platform that lets non-developers build and deploy web apps. The auth logic bugs in Base44's SDK allowed account takeover across every app built and hosted on the platform, affecting all users of those apps until patches were rolled out.

Facepalmby Developer
Potential ATO across many sites until patches rolled out.
securitysupply-chain
Tombstone icon

McDonald's AI hiring chatbot left open by '123456' default credentials

Jun 2025

Researchers accessed McHire's admin with default '123456' credentials and an IDOR, exposing up to 64 million applicant records before Paradox.ai patched the issues after disclosure.

Facepalmby Vendor/Developer
Up to 64M applicant records exposed; vendor patched; reputational risk.
securityai-assistantbrand-damage+2 more
Tombstone icon

AI hallucinated packages fuel "Slop Squatting" vulnerabilities

Mar 2024

Attackers register software packages that AI tools hallucinate (e.g. a fake 'huggingface-cli'), turning model guesswork into a new supply-chain risk dubbed "Slop Squatting".

Catastrophicby Malicious actors
Potential supply-chain compromise when vibe-coders install hallucinated, malicious dependencies.
ai-hallucinationsupply-chainsecurity