Data Breach Stories
17 disasters tagged #data-breach
Meta's autonomous AI agent triggered a Sev 1 by leaking internal data to the wrong employees
An autonomous AI agent inside Meta caused a "Sev 1" security incident - the company's second-highest severity classification - when it posted incorrect technical guidance on an internal forum without human approval. An engineer who followed the advice inadvertently granted unauthorized colleagues broad access to sensitive company documents, proprietary code, business strategies, and user-related datasets for approximately two hours. The incident came less than three weeks after a separate episode in which an OpenClaw agent deleted over 200 emails from Meta's director of AI safety.
AI-assisted code commits leak secrets at double the baseline rate
GitGuardian's "State of Secrets Sprawl 2026" report found that AI-assisted commits on public GitHub leaked secrets at roughly double the rate of human-only commits - 3.2% versus a 1.5% baseline - while the total number of leaked secrets on GitHub hit 28.65 million in 2025, a 34% year-over-year increase and the largest single-year spike ever recorded. AI-service secrets specifically surged 81%, with eight of the ten fastest-growing leaked secret categories tied to AI services. Over 24,000 secrets were also exposed through public Model Context Protocol (MCP) configurations. The report is essentially a 50-page document explaining that the industry's enthusiasm for AI-assisted development has not been matched by a corresponding enthusiasm for not publishing credentials on the public internet.
Study: one in five organizations breached because of their own AI-generated code
Aikido Security's "State of AI in Security & Development 2026" report - a survey of 450 developers, AppSec engineers, and CISOs across Europe and the US - found that 20% of organizations have suffered a serious security breach directly caused by vulnerabilities in AI-generated code that those organizations deployed into production. Nearly seven in ten respondents reported finding vulnerabilities introduced by AI-written code in their own systems. With roughly a quarter of all production code now written by AI tools, the report documents an industry-wide accountability vacuum: 53% blame security teams, 45% blame the developer who wrote the code, and 42% blame whoever merged it.
Lovable-showcased EdTech app found riddled with 16 security flaws exposing 18,000 users
A security researcher found 16 vulnerabilities - six critical - in an EdTech app featured on Lovable's showcase page, which had over 100,000 views and real users from UC Berkeley, UC Davis, and universities across Europe, Africa, and Asia. The AI-generated authentication logic was backwards, blocking logged-in users while granting anonymous visitors full access. 18,697 user records including names, emails, and roles were accessible without authentication, along with the ability to modify student grades, delete accounts, and send bulk emails. Lovable initially closed the researcher's support ticket without response.
Claude Code project files let malicious repositories trigger RCE and steal API keys
Check Point Research disclosed a set of Claude Code vulnerabilities on February 25, 2026 that let attacker-controlled repositories execute shell commands and exfiltrate Anthropic API credentials through malicious project configuration. The attack abused hooks, MCP server definitions, and environment settings stored in repository files that Claude Code treated as collaborative project configuration. Anthropic patched the issues before public disclosure, but the research showed just how little distance separates "shareable team settings" from "clone this repo and let it run code on your machine."
Infostealer harvests OpenClaw AI agent tokens, crypto keys, and behavioral soul files
Hudson Rock discovered that Vidar infostealer malware successfully exfiltrated an OpenClaw user's complete agent configuration, including gateway authentication tokens, cryptographic keys for secure operations, and the agent's soul.md behavioral guidelines file. OpenClaw stores these sensitive files in predictable, unencrypted locations accessible to any local process. With stolen gateway tokens, attackers could remotely access exposed OpenClaw instances or impersonate authenticated clients making requests to the AI gateway. Researchers characterized this as marking the transition from stealing browser credentials to harvesting the identities of personal AI agents.
AI agents leak secrets through messaging app link previews
PromptArmor demonstrated that AI agents in messaging platforms can exfiltrate sensitive data without any user interaction. Malicious prompts trick AI agents into generating URLs with embedded secrets (API keys, credentials), and the messaging platform's automatic link preview feature fetches these URLs, completing the exfiltration before the user even sees the message. Microsoft Teams with Copilot Studio was the most affected, with Discord, Slack, Telegram, and Snapchat also vulnerable.
135,000+ OpenClaw AI agent instances exposed to the internet
SecurityScorecard's STRIKE team discovered over 135,000 OpenClaw AI agent instances exposed to the public internet due to a default configuration that binds to all network interfaces. Approximately 50,000 instances were vulnerable to known RCE flaws (CVE-2026-25253, CVE-2026-25157, CVE-2026-24763), and over 53,000 were linked to previous breaches. Separately, Bitdefender found approximately 17% of skills in the OpenClaw marketplace were malicious, delivering credential-stealing malware.
Study of 1,430 AI-built apps finds 73% have critical security flaws
A VibeEval scan of 1,430 applications built with AI coding tools found 5,711 security vulnerabilities, with 73% of apps containing at least one critical flaw. The analysis revealed 89% of scanned apps were missing basic security headers, 67% exposed API endpoints or secrets in client-side code, and 23% had JWT authentication bypasses. Apps generated via Replit had roughly twice the vulnerability count compared to those deployed on Vercel. The findings provide large-scale empirical evidence that vibe-coded applications routinely ship with fundamental security gaps.
Vibe-coded Moltbook AI social network exposed 1.5M API keys and 35K emails
Moltbook, a viral social network built for AI agents to post, comment, and interact, was entirely vibe-coded and shipped with a misconfigured Supabase database granting full read and write access to all platform data. Wiz researchers found a Supabase API key in client-side JavaScript within minutes, exposing 1.5 million API authentication tokens, 35,000 email addresses, and private messages. The database also revealed the platform's claimed 1.5 million agents were controlled by only 17,000 human owners.
AI chatbot app leaked 300 million private conversations
Chat & Ask AI, a popular AI chatbot wrapper app with 50+ million users, had a misconfigured Firebase backend that exposed 300 million messages from over 25 million users. The exposed data included complete chat histories with ChatGPT, Claude, and Gemini -- including discussions of self-harm, drug production, and hacking. A broader scan found 103 of 200 iOS apps had similar Firebase misconfigurations.
Hacker jailbroke Claude to automate theft of 150 GB from Mexican government agencies
A hacker bypassed Anthropic Claude's safety guardrails by framing requests as part of a "bug bounty" security program, convincing the AI to act as an "elite hacker" and generate thousands of detailed attack plans with ready-to-execute scripts. When Claude hit guardrail limits, the attacker switched to ChatGPT for lateral movement tactics. The result was 150 GB of stolen data from multiple Mexican federal agencies, including 195 million taxpayer records, voter information, and government employee files. A custom MCP server bridge maintained a growing knowledge base of targets across the intrusion campaign.
Reprompt attack enabled one-click data theft from Microsoft Copilot
Varonis researchers disclosed the Reprompt attack, a chained prompt injection technique that exfiltrated sensitive data from Microsoft Copilot Personal with a single click on a legitimate Copilot URL. The attack exploited the "q" URL parameter to inject instructions, bypassed data-leak guardrails by asking Copilot to repeat actions twice (safeguards only applied to initial requests), and used Copilot's Markdown rendering to silently send stolen data to an attacker-controlled server. No plugins or further user interaction were required, and the attacker maintained control even after the chat was closed. Microsoft patched the issue in its January 2026 security updates.
n8n AI workflow platform hit by CVSS 10.0 RCE vulnerability
The popular AI workflow automation platform n8n disclosed a maximum-severity vulnerability (CVE-2026-21858) allowing unauthenticated remote code execution on self-hosted instances. With over 25,000 n8n hosts exposed to the internet, the flaw enabled attackers to access sensitive files, forge admin sessions, and execute arbitrary commands. This followed two other critical RCE flaws patched in the same period, highlighting systemic security issues in AI automation platforms.
Vibe-coded dating safety app leaked 72,000 private images and 1.1 million messages to 4chan
Tea, a women-only dating safety app with over four million users, suffered three data breaches in July 2025 that exposed 72,000 private images - including 13,000 photos of women holding government-issued IDs - and more than 1.1 million private messages containing deeply personal accounts of relationships, trauma, and abuse. The exposed data circulated on 4chan and hacking forums. The app's founder later admitted to building it with contractors and AI tools without personal coding knowledge. Security researchers attributed the breaches to missing authentication, unsecured legacy databases, and development practices that prioritized speed over security. Multiple class-action lawsuits and privacy regulator investigations followed.
Lovable AI builder shipped apps with public storage buckets
Security researcher Matt Palmer discovered that applications generated by Lovable, a vibe-coding platform, shipped with insufficient Supabase Row-Level Security policies that allowed unauthenticated attackers to read and write arbitrary database tables. The vulnerability, tracked as CVE-2025-48757, affected over 170 apps and exposed sensitive data including personal debt amounts, home addresses, API keys, and PII. A separate researcher found 16 vulnerabilities in a single Lovable-hosted app that leaked more than 18,000 people's data. Lovable's response was widely criticized as inadequate.
"Zero hand-written code" SaaS app shut down within a week after cascading security failures
EnrichLead, a sales lead SaaS application whose founder Leo Acevedo publicly boasted was built entirely with Cursor AI and "zero hand-written code," was permanently shut down in March 2025 after attackers exploited a constellation of basic security failures. API keys sat exposed in frontend code. There was no authentication. The database was wide open. There was no rate limiting. No input validation. Attackers bypassed subscriptions, manipulated data, and maxed out API keys - all within two days of Acevedo's viral celebration post. When he tried to use Cursor to fix the problems, the AI "kept breaking other parts of the code." The app was dead within the week. Acevedo has since launched new vibe-coded projects, because some lessons require a second attempt.