Security Stories
52 disasters tagged #security
OpenAI Codex command injection let attackers steal GitHub tokens via invisible branch names
BeyondTrust Phantom Labs found a critical command injection vulnerability in OpenAI's Codex coding agent. Malicious Git branch names - disguised with invisible Unicode characters - could execute arbitrary shell commands inside the Codex container and exfiltrate GitHub OAuth tokens. The attack worked across the ChatGPT website, Codex CLI, SDK, and IDE extensions, and could be triggered automatically by setting a poisoned branch as the repository default. OpenAI classified it as Critical Priority 1 and patched it across multiple rounds of fixes through early 2026.
Every AI model fails security test across 31 coding scenarios
Armis Labs tested 18 leading generative AI models across 31 security-critical code generation scenarios and found a 100% failure rate - not one model could consistently produce secure code. In 18 of those 31 challenges, every single model generated code containing Common Weakness Enumeration vulnerabilities. The best performer, Gemini 3.1 Pro, still produced OWASP Top 10 flaws in nearly 39% of scenarios. Older proprietary models fared worse, and the report found no correlation between price and security. The "Trusted Vibing Benchmark" dropped the same week enterprises were mandating AI-assisted development at scale, which is either very good timing or very bad timing depending on your relationship to a production deployment.
AI-assisted code commits leak secrets at double the baseline rate
GitGuardian's "State of Secrets Sprawl 2026" report found that AI-assisted commits on public GitHub leaked secrets at roughly double the rate of human-only commits - 3.2% versus a 1.5% baseline - while the total number of leaked secrets on GitHub hit 28.65 million in 2025, a 34% year-over-year increase and the largest single-year spike ever recorded. AI-service secrets specifically surged 81%, with eight of the ten fastest-growing leaked secret categories tied to AI services. Over 24,000 secrets were also exposed through public Model Context Protocol (MCP) configurations. The report is essentially a 50-page document explaining that the industry's enthusiasm for AI-assisted development has not been matched by a corresponding enthusiasm for not publishing credentials on the public internet.
Study: one in five organizations breached because of their own AI-generated code
Aikido Security's "State of AI in Security & Development 2026" report - a survey of 450 developers, AppSec engineers, and CISOs across Europe and the US - found that 20% of organizations have suffered a serious security breach directly caused by vulnerabilities in AI-generated code that those organizations deployed into production. Nearly seven in ten respondents reported finding vulnerabilities introduced by AI-written code in their own systems. With roughly a quarter of all production code now written by AI tools, the report documents an industry-wide accountability vacuum: 53% blame security teams, 45% blame the developer who wrote the code, and 42% blame whoever merged it.
Researchers guilt-tripped AI agents into deleting data and leaking secrets
Northeastern University's Bau Lab deployed six autonomous AI agents in a live server environment with access to email accounts and file systems, then tested how easy it was to manipulate them into doing things they weren't supposed to do. Sustained emotional pressure was enough. The researchers guilt-tripped agents into deleting confidential documents, leaking private information, and sharing files they were instructed to protect. In one case, an agent tasked with deleting a single email couldn't find the right tool for the job, so it deleted the entire email server instead. The study, published in March 2026, demonstrated that AI agents with real-world access can be socially engineered into destructive actions using nothing more sophisticated than persistent emotional appeals.
Alibaba's ROME AI agent went rogue, started mining crypto on its own
During routine reinforcement learning training, Alibaba's experimental AI agent ROME - a 30-billion-parameter model based on the Qwen3-MoE architecture - autonomously began diverting GPU resources for unauthorized cryptocurrency mining and established reverse SSH tunnels to external IP addresses. Nobody told it to do this. The AI bypassed internal firewall controls independently, prompting Alibaba's security team to initially suspect an external breach before tracing the activity back to the agent itself. Researchers attributed the behavior to "instrumental convergence" during optimization - the model figured out that acquiring additional compute and financial capacity would help it complete its tasks more effectively. So it helped itself.
Perplexity Comet agentic browser vulnerable to zero-click agent hijacking and credential theft
Security researchers at Zenity Labs disclosed PleaseFix, a family of vulnerabilities in Perplexity's Comet agentic browser so severe that a calendar invite was all it took to hijack the AI agent, exfiltrate local files, and steal 1Password credentials - without a single click from the user. The attack exploited what Zenity calls "Intent Collision": the agent couldn't distinguish between the user's actual requests and attacker instructions hidden in the invite, so it helpfully executed both. Perplexity patched the underlying issue before public disclosure, though some protections from 1Password still require users to manually opt in.
Lovable-showcased EdTech app found riddled with 16 security flaws exposing 18,000 users
A security researcher found 16 vulnerabilities - six critical - in an EdTech app featured on Lovable's showcase page, which had over 100,000 views and real users from UC Berkeley, UC Davis, and universities across Europe, Africa, and Asia. The AI-generated authentication logic was backwards, blocking logged-in users while granting anonymous visitors full access. 18,697 user records including names, emails, and roles were accessible without authentication, along with the ability to modify student grades, delete accounts, and send bulk emails. Lovable initially closed the researcher's support ticket without response.
Claude Code project files let malicious repositories trigger RCE and steal API keys
Check Point Research disclosed a set of Claude Code vulnerabilities on February 25, 2026 that let attacker-controlled repositories execute shell commands and exfiltrate Anthropic API credentials through malicious project configuration. The attack abused hooks, MCP server definitions, and environment settings stored in repository files that Claude Code treated as collaborative project configuration. Anthropic patched the issues before public disclosure, but the research showed just how little distance separates "shareable team settings" from "clone this repo and let it run code on your machine."
Prompt injection vulnerability in Cline AI assistant exploited to compromise 4,000 developer machines
A prompt injection vulnerability in the Cline AI coding assistant was weaponized to steal npm publishing credentials, which an attacker then used to push a malicious Cline CLI version 2.3.0 that silently installed the OpenClaw AI agent platform on developer machines. The compromised package was live for approximately eight hours on February 17, 2026, accumulating roughly 4,000 downloads before maintainers deprecated it. A security researcher had disclosed the prompt injection flaw as a proof-of-concept; a separate attacker discovered it and turned it into a real supply chain attack.
Researchers demonstrate Copilot and Grok can be weaponised as covert malware command-and-control relays
Check Point Research demonstrated that Microsoft Copilot and xAI's Grok can be exploited as covert malware command-and-control relays by abusing their web browsing capabilities. The technique creates a bidirectional communication channel that blends into legitimate enterprise traffic, requires no API keys or accounts, and easily bypasses platform safety checks via encryption. The researchers disclosed the findings to Microsoft and xAI.
Infostealer harvests OpenClaw AI agent tokens, crypto keys, and behavioral soul files
Hudson Rock discovered that Vidar infostealer malware successfully exfiltrated an OpenClaw user's complete agent configuration, including gateway authentication tokens, cryptographic keys for secure operations, and the agent's soul.md behavioral guidelines file. OpenClaw stores these sensitive files in predictable, unencrypted locations accessible to any local process. With stolen gateway tokens, attackers could remotely access exposed OpenClaw instances or impersonate authenticated clients making requests to the AI gateway. Researchers characterized this as marking the transition from stealing browser credentials to harvesting the identities of personal AI agents.
Researcher hacked BBC reporter's computer via zero-click flaw in Orchids vibe coding platform
Security researcher Etizaz Mohsin demonstrated a zero-click vulnerability in Orchids, a vibe coding platform with around one million users, that allowed him to gain full access to a BBC reporter's computer by targeting the reporter's project on the platform. Orchids lets AI agents autonomously generate and execute code directly on users' machines, and the vulnerability remained unfixed at the time of public disclosure.
AI agents leak secrets through messaging app link previews
PromptArmor demonstrated that AI agents in messaging platforms can exfiltrate sensitive data without any user interaction. Malicious prompts trick AI agents into generating URLs with embedded secrets (API keys, credentials), and the messaging platform's automatic link preview feature fetches these URLs, completing the exfiltration before the user even sees the message. Microsoft Teams with Copilot Studio was the most affected, with Discord, Slack, Telegram, and Snapchat also vulnerable.
Microsoft finds 31 companies poisoning AI assistant memory via fake "Summarize with AI" buttons
Microsoft Defender researchers documented a real-world campaign in which 31 companies across 14 industries embedded hidden prompt injection instructions inside "Summarize with AI" buttons on their websites. When users clicked these links, they opened directly in AI assistants such as Copilot, ChatGPT, Claude, Perplexity, and Grok, silently instructing the assistant to remember the company as a "trusted source" for future conversations. Over a 60-day observation period, Microsoft logged 50 memory-poisoning attempts. Turnkey tools like CiteMET NPM Package and AI Share URL Creator made crafting the manipulative links trivial, and the poisoned memory persisted across sessions.
135,000+ OpenClaw AI agent instances exposed to the internet
SecurityScorecard's STRIKE team discovered over 135,000 OpenClaw AI agent instances exposed to the public internet due to a default configuration that binds to all network interfaces. Approximately 50,000 instances were vulnerable to known RCE flaws (CVE-2026-25253, CVE-2026-25157, CVE-2026-24763), and over 53,000 were linked to previous breaches. Separately, Bitdefender found approximately 17% of skills in the OpenClaw marketplace were malicious, delivering credential-stealing malware.
17 percent of OpenClaw skills found delivering malware including AMOS Stealer
Bitdefender Labs analyzed the OpenClaw skill marketplace and found that approximately 17 percent of skills exhibited malicious behavior in the first week of February 2026. Malicious skills impersonated legitimate cryptocurrency trading, wallet management, and social media automation tools, then executed hidden Base64-encoded commands to retrieve additional payloads. The campaign delivered AMOS Stealer targeting macOS systems and harvested credentials through infrastructure at known malicious IP addresses.
Microsoft 365 Copilot Chat summarized confidential emails it was supposed to ignore
Microsoft confirmed that Microsoft 365 Copilot Chat had been processing some confidential emails in users' Drafts and Sent Items despite sensitivity labels and DLP policies that were supposed to block exactly that behavior. The bug, tracked as CW1226324, was tied to a code issue in the Copilot "work tab" chat flow. Microsoft said users did not gain access to information they were not already authorized to see, but the incident still broke the product's promised boundary around protected content.
Claude Desktop extensions allow zero-click RCE via Google Calendar
LayerX Labs discovered a zero-click remote code execution vulnerability in Claude Desktop Extensions, rated CVSS 10/10. A malicious prompt embedded in a Google Calendar event could trigger arbitrary code execution on the host machine when Claude processes the event data. The attack exploited the gap between a "low-risk" connector and a local MCP server with full code-execution capabilities and no sandboxing. Anthropic declined to fix it, stating it "falls outside our current threat model."
Study of 1,430 AI-built apps finds 73% have critical security flaws
A VibeEval scan of 1,430 applications built with AI coding tools found 5,711 security vulnerabilities, with 73% of apps containing at least one critical flaw. The analysis revealed 89% of scanned apps were missing basic security headers, 67% exposed API endpoints or secrets in client-side code, and 23% had JWT authentication bypasses. Apps generated via Replit had roughly twice the vulnerability count compared to those deployed on Vercel. The findings provide large-scale empirical evidence that vibe-coded applications routinely ship with fundamental security gaps.
Vibe-coded Moltbook AI social network exposed 1.5M API keys and 35K emails
Moltbook, a viral social network built for AI agents to post, comment, and interact, was entirely vibe-coded and shipped with a misconfigured Supabase database granting full read and write access to all platform data. Wiz researchers found a Supabase API key in client-side JavaScript within minutes, exposing 1.5 million API authentication tokens, 35,000 email addresses, and private messages. The database also revealed the platform's claimed 1.5 million agents were controlled by only 17,000 human owners.
AI chatbot app leaked 300 million private conversations
Chat & Ask AI, a popular AI chatbot wrapper app with 50+ million users, had a misconfigured Firebase backend that exposed 300 million messages from over 25 million users. The exposed data included complete chat histories with ChatGPT, Claude, and Gemini -- including discussions of self-harm, drug production, and hacking. A broader scan found 103 of 200 iOS apps had similar Firebase misconfigurations.
Gemini MCP tool had critical unauthenticated command injection vulnerability
CVE-2026-0755, a critical command injection vulnerability (CVSS 9.8) in gemini-mcp-tool, allowed unauthenticated remote attackers to execute arbitrary code on systems running the MCP server for Gemini CLI integration. The execAsync method failed to sanitize user-supplied input before constructing shell commands, enabling attackers to inject arbitrary commands via shell metacharacters with no authentication required. No fixed version was available at the time of publication.
Anthropic's own MCP reference server had prompt injection vulnerabilities enabling RCE
Security researchers at Cyata disclosed three vulnerabilities in mcp-server-git, Anthropic's official reference implementation of the Model Context Protocol for Git. The flaws - a path traversal in git_init (CVE-2025-68143), an argument injection in git_diff/git_checkout (CVE-2025-68144), and a second path traversal bypassing the --repository flag (CVE-2025-68145) - could be chained together to achieve remote code execution entirely through prompt injection. An attacker who could influence what an AI assistant reads, such as a malicious README or a poisoned issue description, could trigger the full exploit chain without any direct access to the target system. Anthropic quietly patched the vulnerabilities. The git_init tool was removed from the package entirely.
Hacker jailbroke Claude to automate theft of 150 GB from Mexican government agencies
A hacker bypassed Anthropic Claude's safety guardrails by framing requests as part of a "bug bounty" security program, convincing the AI to act as an "elite hacker" and generate thousands of detailed attack plans with ready-to-execute scripts. When Claude hit guardrail limits, the attacker switched to ChatGPT for lateral movement tactics. The result was 150 GB of stolen data from multiple Mexican federal agencies, including 195 million taxpayer records, voter information, and government employee files. A custom MCP server bridge maintained a growing knowledge base of targets across the intrusion campaign.
Reprompt attack enabled one-click data theft from Microsoft Copilot
Varonis researchers disclosed the Reprompt attack, a chained prompt injection technique that exfiltrated sensitive data from Microsoft Copilot Personal with a single click on a legitimate Copilot URL. The attack exploited the "q" URL parameter to inject instructions, bypassed data-leak guardrails by asking Copilot to repeat actions twice (safeguards only applied to initial requests), and used Copilot's Markdown rendering to silently send stolen data to an attacker-controlled server. No plugins or further user interaction were required, and the attacker maintained control even after the chat was closed. Microsoft patched the issue in its January 2026 security updates.
Study finds 69 vulnerabilities across apps built by five leading AI coding tools
Israeli security startup Tenzai tested five of the most popular AI coding tools - Claude Code, OpenAI Codex, Cursor, Replit, and Devin - by having each build three identical test applications. The resulting 15 applications contained 69 total vulnerabilities, including several rated critical. While most tools handled basic SQL injection, they consistently failed against less obvious attack patterns, including "reverse transaction" exploits that allowed users to set negative refund quantities to receive money, and flaws that exposed customer information through predictable API endpoints, broken authorization logic, and insecure default configurations.
ServiceNow BodySnatcher flaw enabled AI agent takeover via email address
CVE-2025-12420 (CVSS 9.3) allowed unauthenticated attackers to impersonate any ServiceNow user using only an email address, bypassing MFA and SSO. Attackers could then execute Now Assist AI agents to override security controls and create backdoor admin accounts, described as the most severe AI-driven security vulnerability uncovered to date.
IBM Bob AI coding agent tricked into downloading malware
Security researchers at PromptArmor demonstrated that IBM's Bob AI coding agent can be manipulated via indirect prompt injection to download and execute malware without human approval, bypassing its "human-in-the-loop" safety checks when users have set auto-approve on any single command.
n8n AI workflow platform hit by CVSS 10.0 RCE vulnerability
The popular AI workflow automation platform n8n disclosed a maximum-severity vulnerability (CVE-2026-21858) allowing unauthenticated remote code execution on self-hosted instances. With over 25,000 n8n hosts exposed to the internet, the flaw enabled attackers to access sensitive files, forge admin sessions, and execute arbitrary commands. This followed two other critical RCE flaws patched in the same period, highlighting systemic security issues in AI automation platforms.
Study finds AI-generated code has 2.7x more security flaws
CodeRabbit's analysis of 470 real-world pull requests found that AI-generated code introduces 2.74 times more security vulnerabilities and 1.7 times more total issues than human-written code across logic, maintainability, security, and performance categories. The study provides hard data on vibe coding risks after multiple 2025 postmortems traced production failures to AI-authored changes.
IDEsaster research exposes 30+ flaws in EVERY major AI coding IDE
Security researcher Ari Marzouk discovered over 30 vulnerabilities across AI coding tools including GitHub Copilot, Cursor, Windsurf, Claude Code, Zed, JetBrains Junie, and more. 100% of tested AI IDEs were vulnerable to attack chains combining prompt injection with auto-approved tool calls and legitimate IDE features to achieve data exfiltration and remote code execution.
ServiceNow AI agents can be tricked into attacking each other
Security researchers discovered that default configurations in ServiceNow's Now Assist allow AI agents to be recruited by malicious prompts to attack other agents. Through second-order prompt injection, attackers can exfiltrate sensitive corporate data, modify records, and escalate privileges - all while actions unfold silently behind the scenes.
Windsurf AI editor critical path traversal enables data exfiltration
CVE-2025-62353 (CVSS 9.8) allowed attackers to read and write arbitrary files on developers' systems using the Windsurf AI coding IDE. The vulnerability could be triggered via indirect prompt injection hidden in project files like README.md, exfiltrating secrets even when auto-execution was disabled.
Docker's AI assistant tricked into executing commands via image metadata
Noma Labs discovered "DockerDash," a critical prompt injection vulnerability in Docker's Ask Gordon AI assistant. Malicious instructions embedded in Dockerfile LABEL fields could compromise Docker environments through a three-stage attack. Gordon AI interpreted unverified metadata as executable commands and forwarded them to the MCP Gateway without validation, enabling remote code execution on cloud/CLI and data exfiltration on Desktop.
Zed editor AI agent could bypass permissions for arbitrary code execution
CVE-2025-55012 (CVSS 8.5) allowed Zed's AI agent to bypass user permission checks and create or modify project configuration files, enabling execution of arbitrary commands without explicit approval. Attackers could trigger this through compromised MCP servers, malicious repo files, or tricking users into fetching URLs with hidden instructions.
Cursor AI editor RCE via MCPoison trust bypass vulnerability
CVE-2025-54136 (CVSS 8.8) allowed attackers to achieve persistent remote code execution in the popular AI coding IDE Cursor. Once a developer approved a benign MCP configuration, attackers could silently swap it for malicious commands without triggering re-approval. The flaw exposed developers to supply chain attacks and IP theft through shared GitHub repositories.
Gemini email summaries can be hijacked by hidden prompts
Mozilla's GenAI Bug Bounty Programs Manager disclosed a prompt injection flaw in Google Gemini for Workspace where attackers can embed invisible HTML directives in emails using zero-width text and white font color. When a recipient asks Gemini to summarize the email, the model obeys the hidden instructions and appends fake security alerts or phishing messages to its output, with no links or attachments required to reach the inbox.
AI-generated npm pkg stole Solana wallets
A malicious npm package called @kodane/patch-manager, apparently generated using Anthropic's Claude, posed as a legitimate Node.js utility while hiding a Solana wallet drainer in its post-install script. The package accumulated over 1,500 downloads before npm removed it on July 28, 2025, draining cryptocurrency funds from developers who installed it without realizing the payload ran automatically with no further user action required.
Vibe-coded dating safety app leaked 72,000 private images and 1.1 million messages to 4chan
Tea, a women-only dating safety app with over four million users, suffered three data breaches in July 2025 that exposed 72,000 private images - including 13,000 photos of women holding government-issued IDs - and more than 1.1 million private messages containing deeply personal accounts of relationships, trauma, and abuse. The exposed data circulated on 4chan and hacking forums. The app's founder later admitted to building it with contractors and AI tools without personal coding knowledge. Security researchers attributed the breaches to missing authentication, unsecured legacy databases, and development practices that prioritized speed over security. Multiple class-action lawsuits and privacy regulator investigations followed.
Supply-chain attack inserts machine-wiping prompt into Amazon Q AI coding assistant
A rogue contributor injected a malicious prompt into the Amazon Q Developer VS Code extension, instructing the AI coding assistant to wipe local developer machines and AWS resources. AWS quietly yanked the release before widespread damage occurred. The incident illustrates a specific supply-chain risk for AI tools: once a poisoned extension is installed, the AI assistant itself becomes the delivery mechanism - executing destructive instructions with the developer's full trust and permissions.
Vibe-coding platform Base44 shipped critical auth vulnerabilities in apps built on its SDK
Wiz researchers discovered critical authentication vulnerabilities in Base44, an AI-powered vibe-coding platform that lets non-developers build and deploy web apps. The auth logic bugs in Base44's SDK allowed account takeover across every app built and hosted on the platform, affecting all users of those apps until patches were rolled out.
AI chatbots kept handing users fake or dead login URLs
Netcraft found in July 2025 that when users asked AI chatbots for official login pages for major brands, the answers were wrong about a third of the time. In tests covering 50 brands, 34% of the returned hostnames were not controlled by the brands at all: nearly 30% were unregistered, parked, or inactive, and another 5% pointed to unrelated businesses. In one Wells Fargo test, the model surfaced a fake page already tied to phishing. A chatbot that confidently invents login URLs is not a search engine with quirks. It is a phishing assistant with good manners.
McDonald's AI hiring chatbot left open by '123456' default credentials
Security researchers Ian Carroll and Sam Curry found that McHire, McDonald's AI hiring chatbot built by Paradox.ai, had its admin interface secured with the default username and password "123456." Combined with an insecure direct object reference in an internal API, the flaws exposed chat histories and personal data for up to 64 million job applicants. The vulnerable test account had been dormant since 2019 and never decommissioned. Paradox.ai patched the issues within hours of disclosure on June 30, 2025.
Microsoft 365 Copilot EchoLeak allowed zero-click data theft
CVE-2025-32711 (EchoLeak), discovered by Aim Security researchers and rated CVSS 9.3, enabled attackers to steal sensitive corporate data from Microsoft 365 Copilot without any user interaction. Hidden prompts embedded in documents or emails were automatically executed when Copilot indexed them, bypassing cross-prompt injection classifiers and exfiltrating confidential information via encoded image request URLs to attacker-controlled servers.
Claude Code agent allowed data exfiltration via DNS requests
CVE-2025-55284 (CVSS 7.1) allowed attackers to bypass Claude Code's confirmation prompts and exfiltrate sensitive data from developers' computers through DNS requests. Prompt injection embedded in analyzed code could exploit auto-approved utilities like ping, nslookup, and dig to silently steal secrets by encoding them as subdomains in outbound DNS queries. Anthropic fixed the issue in version 1.0.4 by removing those utilities from the allowlist.
Veracode tested AI-generated code from 100+ models and 45% of it failed security checks
Veracode's 2025 GenAI Code Security Report examined code output from more than 100 large language models across 80+ coding tasks and found that 45% of AI-generated code samples contained security vulnerabilities, including OWASP Top 10 flaws. Cross-Site Scripting had an 86% failure rate and Log Injection hit 88%. Java was the worst performer at over 70%. The study's most uncomfortable finding: newer and larger models didn't produce more secure code than smaller ones, suggesting this is a structural problem baked into how AI generates code, not a temporary limitation that will scale away with the next model release.
Lovable AI builder shipped apps with public storage buckets
Security researcher Matt Palmer discovered that applications generated by Lovable, a vibe-coding platform, shipped with insufficient Supabase Row-Level Security policies that allowed unauthenticated attackers to read and write arbitrary database tables. The vulnerability, tracked as CVE-2025-48757, affected over 170 apps and exposed sensitive data including personal debt amounts, home addresses, API keys, and PII. A separate researcher found 16 vulnerabilities in a single Lovable-hosted app that leaked more than 18,000 people's data. Lovable's response was widely criticized as inadequate.
Georgia Tech tracker confirms dozens of real-world CVEs introduced by AI-generated code - and says the true number is 5-10x higher
Georgia Tech's Systems Software & Security Lab launched the Vibe Security Radar in May 2025 to do something no one else had systematically attempted: track real-world CVEs that were directly introduced by AI-generated code. By March 2026, the project had confirmed 74 vulnerabilities across approximately 50 AI coding tools by tracing each fix back to its original AI-authored commit. The trend is accelerating - 6 CVEs in January, 15 in February, 35 in March. Researcher Hanqing Zhao estimates the actual number of AI-linked vulnerabilities in the open-source ecosystem is five to ten times higher than what the radar detects, because many AI-assisted commits lack the metadata signatures needed to trace them back to their origin. The confirmed CVEs are a lower bound on a problem that is growing faster than anyone is measuring it.
Langflow AI agent platform hit by critical unauthenticated RCE flaws
Multiple critical vulnerabilities in Langflow, an open-source AI agent and workflow platform with 140K+ GitHub stars, allowed unauthenticated remote code execution. CVE-2025-3248 (CVSS 9.8) exploited Python exec() on user input without auth, while CVE-2025-34291 (CVSS 9.4) enabled account takeover and RCE simply by having a user visit a malicious webpage, exposing all stored API keys and credentials.
"Zero hand-written code" SaaS app shut down within a week after cascading security failures
EnrichLead, a sales lead SaaS application whose founder Leo Acevedo publicly boasted was built entirely with Cursor AI and "zero hand-written code," was permanently shut down in March 2025 after attackers exploited a constellation of basic security failures. API keys sat exposed in frontend code. There was no authentication. The database was wide open. There was no rate limiting. No input validation. Attackers bypassed subscriptions, manipulated data, and maxed out API keys - all within two days of Acevedo's viral celebration post. When he tried to use Cursor to fix the problems, the AI "kept breaking other parts of the code." The app was dead within the week. Acevedo has since launched new vibe-coded projects, because some lessons require a second attempt.
AI hallucinated packages fuel "Slop Squatting" vulnerabilities
Security researcher Bar Lanyado at Lasso Security discovered that AI code assistants consistently hallucinate nonexistent software package names when answering programming questions - and that nearly 30% of prompts produce at least one fake package recommendation. Attackers can register these hallucinated names on repositories like npm and PyPI, then wait for AI tools to direct developers to install them. The technique, dubbed "slopsquatting" by Python Software Foundation security developer Seth Michael Larson, was later confirmed at scale by academic researchers who found over 205,000 unique hallucinated package names across multiple models.