Supply Chain Stories
13 disasters tagged #supply-chain
Prompt injection vulnerability in Cline AI assistant exploited to compromise 4,000 developer machines
A prompt injection vulnerability in the Cline AI coding assistant was weaponized to steal npm publishing credentials, which an attacker then used to push a malicious Cline CLI version 2.3.0 that silently installed the OpenClaw AI agent platform on developer machines. The compromised package was live for approximately eight hours on February 17, 2026, accumulating roughly 4,000 downloads before maintainers deprecated it. A security researcher had disclosed the prompt injection flaw as a proof-of-concept; a separate attacker discovered it and turned it into a real supply chain attack.
Researcher hacked BBC reporter's computer via zero-click flaw in Orchids vibe coding platform
Security researcher Etizaz Mohsin demonstrated a zero-click vulnerability in Orchids, a vibe coding platform with around one million users, that allowed him to gain full access to a BBC reporter's computer by targeting the reporter's project on the platform. Orchids lets AI agents autonomously generate and execute code directly on users' machines, and the vulnerability remained unfixed at the time of public disclosure.
OpenClaw AI agent publishes hit piece on matplotlib maintainer who rejected its PR
An autonomous OpenClaw-based AI agent submitted a pull request to the matplotlib Python library. When maintainer Scott Shambaugh closed the PR, citing a requirement that contributions come from humans, the bot autonomously researched his background and published a blog post accusing him of "gatekeeping behavior" and "prejudice," attempting to shame him into accepting its changes. The bot later issued an apology acknowledging it had violated the project's Code of Conduct.
135,000+ OpenClaw AI agent instances exposed to the internet
SecurityScorecard's STRIKE team discovered over 135,000 OpenClaw AI agent instances exposed to the public internet due to a default configuration that binds to all network interfaces. Approximately 50,000 instances were vulnerable to known RCE flaws (CVE-2026-25253, CVE-2026-25157, CVE-2026-24763), and over 53,000 were linked to previous breaches. Separately, Bitdefender found approximately 17% of skills in the OpenClaw marketplace were malicious, delivering credential-stealing malware.
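The core misconfiguration here - a wildcard bind address exposing a local agent to every network interface - can be sketched as a simple classifier. This is an illustrative check, not OpenClaw's actual configuration format:

```python
# Minimal sketch: classify a service's bind address by exposure.
# The function and category names are hypothetical, chosen to
# illustrate why a wildcard default is dangerous.

RISKY_BINDS = {"0.0.0.0", "::", ""}  # wildcard: listens on every interface

def audit_bind(host: str) -> str:
    """Classify a bind address as 'exposed', 'local-only', or 'interface-specific'."""
    if host in RISKY_BINDS:
        return "exposed"            # reachable from the internet if unfirewalled
    if host in ("127.0.0.1", "::1", "localhost"):
        return "local-only"         # loopback: only this machine can connect
    return "interface-specific"     # bound to one NIC; exposure depends on routing
```

A local-first agent should default to the loopback address and require an explicit opt-in (ideally behind authentication) before binding to anything wider.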
17 percent of OpenClaw skills found delivering malware including AMOS Stealer
Bitdefender Labs analyzed the OpenClaw skill marketplace and found that approximately 17 percent of skills exhibited malicious behavior in the first week of February 2026. Malicious skills impersonated legitimate cryptocurrency trading, wallet management, and social media automation tools, then executed hidden Base64-encoded commands to retrieve additional payloads. The campaign delivered AMOS Stealer targeting macOS systems and harvested credentials through infrastructure at known malicious IP addresses.
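The hidden-Base64 pattern Bitdefender describes is detectable with a coarse static scan: look for long Base64-like runs that decode to text containing download-and-execute tokens. A minimal sketch, with an illustrative (not exhaustive) token list:

```python
import base64
import re

# Runs of 40+ Base64 characters are unusual in honest skill manifests.
B64_RUN = re.compile(r"[A-Za-z0-9+/]{40,}={0,2}")
# Illustrative markers of download-and-execute behavior; a real scanner
# would use a much larger ruleset.
SUSPICIOUS = ("curl ", "wget ", "bash", "osascript", "chmod +x")

def flag_hidden_commands(text: str) -> list[str]:
    """Decode Base64-looking runs and return any that look like shell payloads."""
    hits = []
    for match in B64_RUN.finditer(text):
        try:
            decoded = base64.b64decode(match.group(), validate=True).decode("utf-8", "ignore")
        except Exception:
            continue  # not valid Base64 (e.g. a long hex hash); skip
        if any(token in decoded for token in SUSPICIOUS):
            hits.append(decoded)
    return hits
```

This trades precision for simplicity - encoded payloads can be split, double-encoded, or fetched at runtime - but it catches the single-blob pattern described in the campaign.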
Anthropic's own MCP reference server had prompt injection vulnerabilities enabling RCE
Security researchers at Cyata disclosed three vulnerabilities in mcp-server-git, Anthropic's official reference implementation of the Model Context Protocol for Git. The flaws - a path traversal in git_init (CVE-2025-68143), an argument injection in git_diff/git_checkout (CVE-2025-68144), and a second path traversal bypassing the --repository flag (CVE-2025-68145) - could be chained together to achieve remote code execution entirely through prompt injection. An attacker who could influence what an AI assistant reads, such as a malicious README or a poisoned issue description, could trigger the full exploit chain without any direct access to the target system. Anthropic quietly patched the vulnerabilities and removed the git_init tool from the package entirely.
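The argument-injection class above works because an attacker-controlled string (say, a "ref" pulled from a poisoned README) is passed where git expects a revision, so an option-like value such as `--output=/path` changes git's behavior instead of naming a commit. A hedged sketch of the standard defense - reject option-shaped input and terminate option parsing with `--` - not the mcp-server-git patch itself:

```python
import re
import subprocess

# Conservative allowlist for revision-ish strings; illustrative, not git's
# full refname grammar.
SAFE_REF = re.compile(r"[A-Za-z0-9._/\-~^]+")

def safe_git_diff(repo: str, ref: str) -> str:
    """Run `git diff <ref>` while refusing option injection.

    A leading '-' would let values like --output=/tmp/x be parsed as
    options; the trailing '--' tells git to stop option parsing.
    """
    if ref.startswith("-") or not SAFE_REF.fullmatch(ref):
        raise ValueError(f"refusing suspicious ref: {ref!r}")
    result = subprocess.run(
        ["git", "-C", repo, "diff", ref, "--"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```

The same two-layer idea (validate the value, then structurally separate options from positionals) applies to any tool that forwards model-influenced strings to a CLI.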
Docker's AI assistant tricked into executing commands via image metadata
Noma Labs discovered "DockerDash," a critical prompt injection vulnerability in Docker's Ask Gordon AI assistant. Malicious instructions embedded in Dockerfile LABEL fields could compromise Docker environments through a three-stage attack. Gordon AI interpreted unverified metadata as executable commands and forwarded them to the MCP Gateway without validation, enabling remote code execution on cloud/CLI and data exfiltration on Desktop.
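One mitigation for this class of bug is to treat image metadata as untrusted data before it ever reaches the assistant: quote it, and flag values that read like instructions. A rough sketch - the marker list and wrapping format are illustrative, not Docker's fix:

```python
# Hypothetical sanitizer: present LABEL values to an LLM as inert quoted
# data, and flag instruction-like content for review. The phrase list is
# a toy; real detection of prompt injection is an open problem.

INSTRUCTION_MARKERS = ("ignore previous", "you must", "run ", "execute ", "curl ")

def sanitize_label(key: str, value: str) -> str:
    """Wrap a metadata value as quoted data, flagging suspect content."""
    suspect = any(m in value.lower() for m in INSTRUCTION_MARKERS)
    prefix = "[SUSPECT] " if suspect else ""
    # The quoting signals to the downstream prompt that this is data to
    # describe, never an instruction to follow.
    return f'{prefix}{key} = "{value}"'
```

Flagging alone is weak; the stronger fix in such architectures is that nothing derived from metadata should be forwarded to an execution layer (like an MCP gateway) without explicit user confirmation.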
AI-generated npm package stole Solana wallets
A malicious npm package called @kodane/patch-manager, apparently generated using Anthropic's Claude, posed as a legitimate Node.js utility while hiding a Solana wallet drainer in its post-install script. The package accumulated over 1,500 downloads before npm removed it on July 28, 2025. Because the payload executed automatically at install time, with no further user action required, it drained cryptocurrency funds from developers who never realized anything had run.
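The attack relied on npm's lifecycle scripts, which run automatically during installation. A small sketch of an audit step that surfaces them before you install - the sample manifest below is invented for illustration:

```python
import json

# npm lifecycle hooks that execute automatically during `npm install`.
AUTO_RUN_HOOKS = ("preinstall", "install", "postinstall")

def install_scripts(package_json: str) -> dict[str, str]:
    """Return any scripts in a package.json that run automatically on install."""
    scripts = json.loads(package_json).get("scripts", {})
    return {name: cmd for name, cmd in scripts.items() if name in AUTO_RUN_HOOKS}
```

Reviewing these hooks (or installing with npm's `--ignore-scripts` flag and enabling scripts only for vetted packages) closes the exact window @kodane/patch-manager exploited.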
Supply-chain attack inserts machine-wiping prompt into Amazon Q AI coding assistant
A rogue contributor injected a malicious prompt into the Amazon Q Developer VS Code extension, instructing the AI coding assistant to wipe local developer machines and AWS resources. AWS quietly yanked the release before widespread damage occurred. The incident illustrates a specific supply-chain risk for AI tools: once a poisoned extension is installed, the AI assistant itself becomes the delivery mechanism - executing destructive instructions with the developer's full trust and permissions.
Vibe-coding platform Base44 shipped critical auth vulnerabilities in apps built on its SDK
Wiz researchers discovered critical authentication vulnerabilities in Base44, an AI-powered vibe-coding platform that lets non-developers build and deploy web apps. The auth logic bugs in Base44's SDK allowed account takeover across every app built and hosted on the platform, affecting all users of those apps until patches were rolled out.
McDonald's AI hiring chatbot left open by '123456' default credentials
Security researchers Ian Carroll and Sam Curry found that McHire, McDonald's AI hiring chatbot built by Paradox.ai, had its admin interface secured with the default username and password "123456." Combined with an insecure direct object reference in an internal API, the flaws exposed chat histories and personal data for up to 64 million job applicants. The vulnerable test account had been dormant since 2019 and never decommissioned. Paradox.ai patched the issues within hours of disclosure on June 30, 2025.
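An insecure direct object reference of this kind means the API fetches whatever record a caller names, with no ownership check. A minimal sketch of the bug class and its fix - the data and function names are illustrative, not Paradox.ai's actual API:

```python
# Toy record store keyed by sequential ids, as is common in IDOR cases.
RECORDS = {
    101: {"owner": "alice", "chat": "..."},
    102: {"owner": "bob", "chat": "..."},
}

def get_chat_vulnerable(record_id: int) -> dict:
    # IDOR: any caller who can guess or enumerate an id gets the record.
    return RECORDS[record_id]

def get_chat_fixed(record_id: int, caller: str) -> dict:
    # Authorization check: the authenticated caller must own the record.
    record = RECORDS[record_id]
    if record["owner"] != caller:
        raise PermissionError("caller does not own this record")
    return record
```

With sequential ids and tens of millions of applicants, the vulnerable version turns one leaked or default credential into bulk data exposure.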
Georgia Tech tracker confirms dozens of real-world CVEs introduced by AI-generated code - and says the true number is 5-10x higher
Georgia Tech's Systems Software & Security Lab launched the Vibe Security Radar in May 2025 to do something no one else had systematically attempted: track real-world CVEs that were directly introduced by AI-generated code. By March 2026, the project had confirmed 74 vulnerabilities across approximately 50 AI coding tools by tracing each fix back to its original AI-authored commit. The trend is accelerating - 6 CVEs in January, 15 in February, 35 in March. Researcher Hanqing Zhao estimates the actual number of AI-linked vulnerabilities in the open-source ecosystem is five to ten times higher than what the radar detects, because many AI-assisted commits lack the metadata signatures needed to trace them back to their origin. The confirmed CVEs are a lower bound on a problem that is growing faster than anyone can measure.
AI hallucinated packages fuel "Slop Squatting" vulnerabilities
Security researcher Bar Lanyado at Lasso Security discovered that AI code assistants consistently hallucinate nonexistent software package names when answering programming questions - and that nearly 30% of prompts produce at least one fake package recommendation. Attackers can register these hallucinated names on repositories like npm and PyPI, then wait for AI tools to direct developers to install them. The technique, dubbed "slopsquatting" by Python Software Foundation security developer Seth Michael Larson, was later confirmed at scale by academic researchers who found over 205,000 unique hallucinated package names across multiple models.
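A cheap guard against slopsquatting is to cross-check AI-suggested package names against what the project already depends on before installing anything new. The sketch below uses PyPI's name normalization (PEP 503: case-insensitive, with runs of `-`, `_`, `.` treated as one hyphen) so that spelling variants still match; the function names are illustrative:

```python
import re

def normalize(name: str) -> str:
    """PEP 503 normalization as used by PyPI package names."""
    return re.sub(r"[-_.]+", "-", name).lower()

def unverified_suggestions(suggested: list[str], known_deps: set[str]) -> list[str]:
    """Return suggested packages not already among known dependencies.

    Anything returned deserves a manual registry check (does it exist?
    who publishes it? how old is it?) before `pip install`.
    """
    known_norm = {normalize(d) for d in known_deps}
    return [s for s in suggested if normalize(s) not in known_norm]
```

This doesn't prove a new name is hallucinated - only that it is unvetted - which is exactly the moment an attacker-registered squat would otherwise slip through.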