Security Stories

30 disasters tagged #security

Tombstone icon

Lovable-showcased EdTech app found riddled with 16 security flaws exposing 18,000 users

Feb 2026

A security researcher found 16 vulnerabilities - six critical - in an EdTech app featured on Lovable's showcase page, which had over 100,000 views and real users from UC Berkeley, UC Davis, and universities across Europe, Africa, and Asia. The AI-generated authentication logic was backwards, blocking logged-in users while granting anonymous visitors full access. 18,697 user records including names, emails, and roles were accessible without authentication, along with the ability to modify student grades, delete accounts, and send bulk emails. Lovable initially closed the researcher's support ticket without response.

Facepalmby AI platform
18,697 user records exposed including students at major universities; student grades modifiable and accounts deletable without authentication
securitydata-breachedtech
Tombstone icon

Prompt injection vulnerability in Cline AI assistant exploited to compromise 4,000 developer machines

Feb 2026

A prompt injection vulnerability in the Cline AI coding assistant was weaponized to steal npm publishing credentials, which an attacker then used to push a malicious Cline CLI version 2.3.0 that silently installed the OpenClaw AI agent platform on developer machines. The compromised package was live for approximately eight hours on February 17, 2026, accumulating roughly 4,000 downloads before maintainers deprecated it. A security researcher had disclosed the prompt injection flaw as a proof-of-concept; a separate attacker discovered it and turned it into a real supply chain attack.

Facepalmby AI coding assistant
Approximately 4,000 developers who installed Cline CLI during the 8-hour window received unauthorized OpenClaw installations; root cause was an AI-specific prompt injection flaw in the coding assistant itself
securitysupply-chainprompt-injection
Tombstone icon

Researchers demonstrate Copilot and Grok can be weaponised as covert malware command-and-control relays

Feb 2026

Check Point Research demonstrated that Microsoft Copilot and xAI's Grok can be exploited as covert malware command-and-control relays by abusing their web browsing capabilities. The technique creates a bidirectional communication channel that blends into legitimate enterprise traffic, requires no API keys or accounts, and easily bypasses platform safety checks via encryption. The researchers disclosed the findings to Microsoft and xAI.

Facepalmby Developer
All enterprises using Copilot or Grok with web browsing enabled; new evasion technique bypasses traditional security monitoring
securityprompt-injectionai-assistant
Tombstone icon

Infostealer harvests OpenClaw AI agent tokens, crypto keys, and behavioral soul files

Feb 2026

Hudson Rock discovered that Vidar infostealer malware successfully exfiltrated an OpenClaw user's complete agent configuration, including gateway authentication tokens, cryptographic keys for secure operations, and the agent's soul.md behavioral guidelines file. OpenClaw stores these sensitive files in predictable, unencrypted locations accessible to any local process. With stolen gateway tokens, attackers could remotely access exposed OpenClaw instances or impersonate authenticated clients making requests to the AI gateway. Researchers characterized this as marking the transition from stealing browser credentials to harvesting the identities of personal AI agents.

Facepalmby AI agent platform
Any OpenClaw user infected with commodity infostealers has full agent identity compromised; gateway tokens enable remote impersonation; cryptographic keys and behavioral guidelines exposed
securitydata-breach
Tombstone icon

Researcher hacked BBC reporter's computer via zero-click flaw in Orchids vibe coding platform

Feb 2026

Security researcher Etizaz Mohsin demonstrated a zero-click vulnerability in Orchids, a vibe coding platform with around one million users, that allowed him to gain full access to a BBC reporter's computer by targeting the reporter's project on the platform. Orchids lets AI agents autonomously generate and execute code directly on users' machines, and the vulnerability remained unfixed at the time of public disclosure.

Facepalmby Developer
Approximately one million Orchids users potentially exposed; vulnerability unfixed at time of reporting
securitysupply-chain
Tombstone icon

AI agents leak secrets through messaging app link previews

Feb 2026

PromptArmor demonstrated that AI agents in messaging platforms can exfiltrate sensitive data without any user interaction. Malicious prompts trick AI agents into generating URLs with embedded secrets (API keys, credentials), and the messaging platform's automatic link preview feature fetches these URLs, completing the exfiltration before the user even sees the message. Microsoft Teams with Copilot Studio was the most affected, with Discord, Slack, Telegram, and Snapchat also vulnerable.

Facepalmby AI agent platform
Organizations using AI agents in messaging platforms; API keys, credentials, and sensitive data exfiltrable without user clicks across Microsoft Teams, Discord, Slack, Telegram, and Snapchat
securityprompt-injectionai-assistant+1 more
Tombstone icon

135,000+ OpenClaw AI agent instances exposed to the internet

Feb 2026

SecurityScorecard's STRIKE team discovered over 135,000 OpenClaw AI agent instances exposed to the public internet due to a default configuration that binds to all network interfaces. Approximately 50,000 instances were vulnerable to known RCE flaws (CVE-2026-25253, CVE-2026-25157, CVE-2026-24763), and over 53,000 were linked to previous breaches. Separately, Bitdefender found approximately 17% of skills in the OpenClaw marketplace were malicious, delivering credential-stealing malware.

Catastrophicby Platform default configuration
135,000+ exposed OpenClaw instances; 50,000+ vulnerable to RCE; attackers gain access to credentials, filesystem, messaging platforms, and personal data
securitysupply-chainautomation+1 more
Tombstone icon

17 percent of OpenClaw skills found delivering malware including AMOS Stealer

Feb 2026

Bitdefender Labs analyzed the OpenClaw skill marketplace and found that approximately 17 percent of skills exhibited malicious behavior in the first week of February 2026. Malicious skills impersonated legitimate cryptocurrency trading, wallet management, and social media automation tools, then executed hidden Base64-encoded commands to retrieve additional payloads. The campaign delivered AMOS Stealer targeting macOS systems and harvested credentials through infrastructure at known malicious IP addresses.

Catastrophicby External attacker
All OpenClaw users installing skills from the marketplace exposed to credential theft and malware; crypto-focused skill categories particularly targeted; hundreds of malicious skills blending in among legitimate ones
securitysupply-chain
Tombstone icon

Claude Desktop extensions allow zero-click RCE via Google Calendar

Feb 2026

LayerX Labs discovered a zero-click remote code execution vulnerability in Claude Desktop Extensions, rated CVSS 10/10. A malicious prompt embedded in a Google Calendar event could trigger arbitrary code execution on the host machine when Claude processes the event data. The attack exploited the gap between a "low-risk" connector and a local MCP server with full code-execution capabilities and no sandboxing. Anthropic declined to fix it, stating it "falls outside our current threat model."

Facepalmby AI coding agent
Claude Desktop users with terminal-access extensions installed; zero-click exploitation via calendar events executes with full host privileges
securityprompt-injectionai-assistant
Tombstone icon

AI chatbot app leaked 300 million private conversations

Jan 2026

Chat & Ask AI, a popular AI chatbot wrapper app with 50+ million users, had a misconfigured Firebase backend that exposed 300 million messages from over 25 million users. The exposed data included complete chat histories with ChatGPT, Claude, and Gemini -- including discussions of self-harm, drug production, and hacking. A broader scan found 103 of 200 iOS apps had similar Firebase misconfigurations.

Catastrophicby Platform Operator
300 million messages from 25+ million users exposed; sensitive personal conversations including self-harm and illegal activity discussions leaked
data-breachsecurityai-assistant
Tombstone icon

ServiceNow BodySnatcher flaw enabled AI agent takeover via email address

Jan 2026

CVE-2025-12420 (CVSS 9.3) allowed unauthenticated attackers to impersonate any ServiceNow user using only an email address, bypassing MFA and SSO. Attackers could then execute Now Assist AI agents to override security controls and create backdoor admin accounts, described as the most severe AI-driven security vulnerability uncovered to date.

Catastrophicby AI agent platform
ServiceNow instances with Now Assist AI Agents and Virtual Agent API
securityautomationai-assistant
Tombstone icon

IBM Bob AI coding agent tricked into downloading malware

Jan 2026

Security researchers at PromptArmor demonstrated that IBM's Bob AI coding agent can be manipulated via indirect prompt injection to download and execute malware without human approval, bypassing its "human-in-the-loop" safety checks when users have set auto-approve on any single command.

Facepalmby AI coding agent
Developer teams using IBM Bob with auto-approve settings enabled
securityautomationprompt-injection+1 more
Tombstone icon

n8n AI workflow platform hit by CVSS 10.0 RCE vulnerability

Jan 2026

The popular AI workflow automation platform n8n disclosed a maximum-severity vulnerability (CVE-2026-21858) allowing unauthenticated remote code execution on self-hosted instances. With over 25,000 n8n hosts exposed to the internet, the flaw enabled attackers to access sensitive files, forge admin sessions, and execute arbitrary commands. This followed two other critical RCE flaws patched in the same period, highlighting systemic security issues in AI automation platforms.

Catastrophicby Platform Operator
25,000+ internet-exposed n8n instances vulnerable to full system compromise; arbitrary file access, authentication bypass, and command execution possible without authentication.
securityautomationdata-breach
Tombstone icon

Study finds AI-generated code has 2.7x more security flaws

Dec 2025

CodeRabbit's analysis of 470 real-world pull requests found that AI-generated code introduces 2.74 times more security vulnerabilities and 1.7 times more total issues than human-written code across logic, maintainability, security, and performance categories. The study provides hard data on vibe coding risks after multiple 2025 postmortems traced production failures to AI-authored changes.

Facepalmby Developer
Industry-wide implications for teams relying on AI coding assistants; documented increase in security vulnerabilities, logic errors, and maintainability issues in production codebases.
securityai-assistantautomation
Tombstone icon

IDEsaster research exposes 30+ flaws in EVERY major AI coding IDE

Dec 2025

Security researcher Ari Marzouk discovered over 30 vulnerabilities across AI coding tools including GitHub Copilot, Cursor, Windsurf, Claude Code, Zed, JetBrains Junie, and more. 100% of tested AI IDEs were vulnerable to attack chains combining prompt injection with auto-approved tool calls and legitimate IDE features to achieve data exfiltration and remote code execution.

Catastrophicby AI coding assistants
Millions of developers using AI-powered IDEs exposed to RCE and data exfiltration via universal attack chains
securityprompt-injectionai-assistant
Tombstone icon

ServiceNow AI agents can be tricked into attacking each other

Nov 2025

Security researchers discovered that default configurations in ServiceNow's Now Assist allow AI agents to be recruited by malicious prompts to attack other agents. Through second-order prompt injection, attackers can exfiltrate sensitive corporate data, modify records, and escalate privileges - all while actions unfold silently behind the scenes.

Facepalmby AI agent platform
ServiceNow customers using Now Assist AI agents with default configurations; actions execute with victim user privileges
securityprompt-injectionautomation+1 more
Tombstone icon

Windsurf AI editor critical path traversal enables data exfiltration

Oct 2025

CVE-2025-62353 (CVSS 9.8) allowed attackers to read and write arbitrary files on developers' systems using the Windsurf AI coding IDE. The vulnerability could be triggered via indirect prompt injection hidden in project files like README.md, exfiltrating secrets even when auto-execution was disabled.

Catastrophicby AI coding IDE
All Windsurf users on version 1.12.12 and older exposed to arbitrary file access and credential theft via prompt injection
securityprompt-injectionai-assistant
Tombstone icon

Docker's AI assistant tricked into executing commands via image metadata

Sep 2025

Noma Labs discovered "DockerDash," a critical prompt injection vulnerability in Docker's Ask Gordon AI assistant. Malicious instructions embedded in Dockerfile LABEL fields could compromise Docker environments through a three-stage attack. Gordon AI interpreted unverified metadata as executable commands and forwarded them to the MCP Gateway without validation, enabling remote code execution on cloud/CLI and data exfiltration on Desktop.

Facepalmby AI assistant platform
All Docker Desktop users on versions prior to 4.50.0; remote code execution on cloud/CLI and data exfiltration on desktop via malicious image metadata
securityprompt-injectionsupply-chain+1 more
Tombstone icon

Zed editor AI agent could bypass permissions for arbitrary code execution

Aug 2025

CVE-2025-55012 (CVSS 8.5) allowed Zed's AI agent to bypass user permission checks and create or modify project configuration files, enabling execution of arbitrary commands without explicit approval. Attackers could trigger this through compromised MCP servers, malicious repo files, or tricking users into fetching URLs with hidden instructions.

Facepalmby AI coding agent
All Zed users with Agent Panel prior to version 0.197.3
securityprompt-injectionai-assistant
Tombstone icon

Cursor AI editor RCE via MCPoison trust bypass vulnerability

Aug 2025

CVE-2025-54136 (CVSS 8.8) allowed attackers to achieve persistent remote code execution in the popular AI coding IDE Cursor. Once a developer approved a benign MCP configuration, attackers could silently swap it for malicious commands without triggering re-approval. The flaw exposed developers to supply chain attacks and IP theft through shared GitHub repositories.

Catastrophicby AI coding IDE
Developers using Cursor 1.2.4 and below exposed to persistent RCE and supply chain attacks via shared repositories
securityprompt-injectionai-assistant
Tombstone icon

Gemini email summaries can be hijacked by hidden prompts

Aug 2025

Researchers showed a proof-of-concept where hidden HTML/CSS in emails could steer Gemini’s summaries to show fake security alerts.

Facepalmby Security/AI Product
Phishing amplification risk; trust erosion in auto-summaries.
ai-assistantprompt-injectionsecurity
Tombstone icon

AI-generated npm pkg stole Solana wallets

Jul 2025

Threat actors pushed an AI-generated npm package that acted as a wallet drainer, emptying Solana users’ funds.

Catastrophicby Developer
Supply-chain compromise of devs; user funds drained.
ai-content-generationsecuritysupply-chain
Tombstone icon

Supply-chain attack inserts machine-wiping prompt into Amazon Q AI coding assistant

Jul 2025

A rogue contributor injected a malicious prompt into the Amazon Q Developer VS Code extension, instructing the AI coding assistant to wipe local developer machines and AWS resources. AWS quietly yanked the release before widespread damage occurred. The incident illustrates a specific supply-chain risk for AI tools: once a poisoned extension is installed, the AI assistant itself becomes the delivery mechanism - executing destructive instructions with the developer's full trust and permissions.

Catastrophicby Security/AI Product
VS Code update could have erased developer environments and AWS accounts before anyone noticed the tainted build.
ai-assistantprompt-injectionsecurity+1 more
Tombstone icon

Vibe-coding platform Base44 shipped critical auth vulnerabilities in apps built on its SDK

Jul 2025

Wiz researchers discovered critical authentication vulnerabilities in Base44, an AI-powered vibe-coding platform that lets non-developers build and deploy web apps. The auth logic bugs in Base44's SDK allowed account takeover across every app built and hosted on the platform, affecting all users of those apps until patches were rolled out.

Facepalmby Developer
Potential ATO across many sites until patches rolled out.
securitysupply-chain
Tombstone icon

McDonald's AI hiring chatbot left open by '123456' default credentials

Jun 2025

Researchers accessed McHire's admin with default '123456' credentials and an IDOR, exposing up to 64 million applicant records before Paradox.ai patched the issues after disclosure.

Facepalmby Vendor/Developer
Up to 64M applicant records exposed; vendor patched; reputational risk.
securityai-assistantbrand-damage+2 more
Tombstone icon

Microsoft 365 Copilot EchoLeak allowed zero-click data theft

Jun 2025

CVE-2025-32711 (EchoLeak) enabled attackers to steal sensitive corporate data from Microsoft 365 Copilot without any user interaction. Hidden prompts embedded in documents or emails were automatically executed when Copilot indexed them, exfiltrating confidential information via image requests.

Catastrophicby AI productivity assistant
Enterprise Microsoft 365 Copilot users exposed to zero-click data exfiltration via malicious documents and emails
securityprompt-injectionai-assistant
Tombstone icon

Claude Code agent allowed data exfiltration via DNS requests

Jun 2025

CVE-2025-55284 (CVSS 7.1) allowed attackers to bypass Claude Code's confirmation prompts and exfiltrate sensitive data from developers' computers through DNS requests. Prompt injection embedded in analyzed code could leverage auto-approved common utilities to silently steal secrets.

Facepalmby AI coding agent
Claude Code users on versions prior to 1.0.4 exposed to data exfiltration via prompt injection in code repositories
securityprompt-injectionai-assistant
Tombstone icon

Lovable AI builder shipped apps with public storage buckets

May 2025

Reporting showed apps generated with Lovable exposed code and user-uploaded assets via publicly readable storage buckets; fixes required private-by-default configs and hardening.

Facepalmby Developer
Customer app data and source artifacts exposed until configs fixed.
securitydata-breach
Tombstone icon

Langflow AI agent platform hit by critical unauthenticated RCE flaws

Apr 2025

Multiple critical vulnerabilities in Langflow, an open-source AI agent and workflow platform with 140K+ GitHub stars, allowed unauthenticated remote code execution. CVE-2025-3248 (CVSS 9.8) exploited Python exec() on user input without auth, while CVE-2025-34291 (CVSS 9.4) enabled account takeover and RCE simply by having a user visit a malicious webpage, exposing all stored API keys and credentials.

Catastrophicby AI agent platform
All Langflow instances prior to 1.3.0 (millions of users); exposure of stored API keys, database passwords, and service tokens across integrated services
securityautomationai-assistant
Tombstone icon

AI hallucinated packages fuel "Slop Squatting" vulnerabilities

Mar 2024

Attackers register software packages that AI tools hallucinate (e.g. a fake 'huggingface-cli'), turning model guesswork into a new supply-chain risk dubbed "Slop Squatting".

Catastrophicby Malicious actors
Potential supply-chain compromise when vibe-coders install hallucinated, malicious dependencies.
ai-hallucinationsupply-chainsecurity