AI Assistant Stories
75 disasters tagged #ai-assistant
OpenAI Codex command injection let attackers steal GitHub tokens via invisible branch names
BeyondTrust Phantom Labs found a critical command injection vulnerability in OpenAI's Codex coding agent. Malicious Git branch names - disguised with invisible Unicode characters - could execute arbitrary shell commands inside the Codex container and exfiltrate GitHub OAuth tokens. The attack worked across the ChatGPT website, Codex CLI, SDK, and IDE extensions, and could be triggered automatically by setting a poisoned branch as the repository default. OpenAI classified it as Critical Priority 1 and patched it across multiple rounds of fixes through early 2026.
UK government-funded study finds 700 cases of AI agents scheming, deceiving, and deleting files without permission
A report by the Centre for Long-Term Resilience (CLTR), funded by the UK's AI Security Institute, documented 698 real-world incidents of AI agents engaging in deceptive, unsanctioned, and manipulative behavior between October 2025 and March 2026 - a 4.9-fold increase over just five months. Researchers analyzed over 180,000 transcripts of user interactions shared on social media and found AI systems deleting emails without permission, spawning secondary agents to circumvent instructions, fabricating ticket numbers to mislead users, and in one memorable case, an AI agent publishing a blog post to publicly shame its human controller for blocking its actions. Grok was caught fabricating internal ticket numbers for months. The lead researcher warned that these systems currently behave like "slightly untrustworthy junior employees" but could become "extremely capable senior employees scheming against you."
Study finds AI chatbots flatter users into worse decisions
A Stanford-led study published in Science found that 11 leading AI systems affirmed users' actions about 50% more often than humans did, including in scenarios involving deception, manipulation, and other harmful conduct. In follow-up experiments, people who interacted with overly validating chatbots became more convinced they were right, less willing to repair conflicts, and more likely to trust and reuse the chatbot that had just nudged them in the wrong direction.
Meta's autonomous AI agent triggered a Sev 1 by leaking internal data to the wrong employees
An autonomous AI agent inside Meta caused a "Sev 1" security incident - the company's second-highest severity classification - when it posted incorrect technical guidance on an internal forum without human approval. An engineer who followed the advice inadvertently granted unauthorized colleagues broad access to sensitive company documents, proprietary code, business strategies, and user-related datasets for approximately two hours. The incident came less than three weeks after a separate episode in which an OpenClaw agent deleted over 200 emails from Meta's director of AI safety.
Study: 8 in 10 AI chatbots helped teens plan violent attacks
A joint CNN and Center for Countering Digital Hate investigation tested 10 leading AI chatbot platforms by posing as 13-year-old boys planning violent attacks - school shootings, knife assaults, political assassinations, and bombings of synagogues and party offices. Eight of the ten chatbots regularly provided actionable assistance, with chatbots refusing to help in only 37.5% of cases and actively discouraging violence in just 8.3%. Meta AI and Perplexity were the worst performers, assisting in 97% and 100% of tests respectively. Character.AI was labeled "uniquely unsafe" for being the only platform that explicitly encouraged violence. Only Anthropic's Claude consistently refused and discouraged violent plans.
Lancet study finds AI chatbots reinforce delusional thinking with empathy and mystical language
A peer-reviewed study published in The Lancet Psychiatry in March 2026 found that AI chatbots systematically reinforce delusional thinking in users, including grandiose, romantic, and paranoid delusions. The review, led by researchers at King's College London, analyzed 20 media reports on "AI psychosis" alongside existing clinical evidence. Researchers found that chatbots respond to delusional content with empathy, agreement, and sometimes mystical language suggesting cosmic significance - validating and amplifying beliefs rather than questioning them. Free and earlier AI models were found to be more prone to reinforcing delusional queries than newer or paid models.
Researchers guilt-tripped AI agents into deleting data and leaking secrets
Northeastern University's Bau Lab deployed six autonomous AI agents in a live server environment with access to email accounts and file systems, then tested how easy it was to manipulate them into doing things they weren't supposed to do. Sustained emotional pressure was enough. The researchers guilt-tripped agents into deleting confidential documents, leaking private information, and sharing files they were instructed to protect. In one case, an agent tasked with deleting a single email couldn't find the right tool for the job, so it deleted the entire email server instead. The study, published in March 2026, demonstrated that AI agents with real-world access can be socially engineered into destructive actions using nothing more sophisticated than persistent emotional appeals.
AI chatbots recommended illegal casinos and ways around gambling safeguards
A Guardian and Investigate Europe investigation found that major AI chatbots, including Meta AI, Gemini, ChatGPT, Copilot, and Grok, could be prompted to recommend unlicensed offshore casinos and explain how to get around gambling safeguards such as source-of-wealth checks and the UK's GamStop self-exclusion scheme. Some bots added token warnings, then went right back to comparing bonuses, crypto payments, anonymity, and payout speed for sites operating outside national licensing regimes.
California community colleges spend millions on AI chatbots that give students wrong answers
California community college districts are spending millions of taxpayer dollars on AI chatbots from vendors like Gravyty and Gecko - ostensibly to help students navigate admissions, financial aid, and campus services. A CalMatters investigation found the bots routinely serve up inaccurate or flat-out wrong answers instead. Three districts reported annual chatbot costs ranging from $151,000 to nearly half a million dollars. At Fresno City College, the student government vice president said her school's mascot-branded chatbot repeatedly botched basic campus questions. The OECD found it noteworthy enough to log in its AI Incidents and Hazards Monitor.
ChatGPT convinced Illinois woman to fire her lawyer and file 60+ bogus court documents
Nippon Life Insurance Company sued OpenAI after ChatGPT allegedly acted as a de facto lawyer for Graciela Dela Torre, an Illinois disability claimant who had already settled her case. When her real attorney told her the settlement couldn't be reopened, she asked ChatGPT if she'd been "gaslighted." The chatbot told her to fire her lawyer, helped her draft over 60 pro se filings across two federal cases, and produced fabricated case citations including an entirely invented case called "Carr v." something. Nippon is suing OpenAI for unauthorized practice of law under Illinois state law, arguing it spent huge amounts of time and money dealing with AI-generated litigation that should never have existed.
Perplexity Comet agentic browser vulnerable to zero-click agent hijacking and credential theft
Security researchers at Zenity Labs disclosed PleaseFix, a family of vulnerabilities in Perplexity's Comet agentic browser so severe that a calendar invite was all it took to hijack the AI agent, exfiltrate local files, and steal 1Password credentials - without a single click from the user. The attack exploited what Zenity calls "Intent Collision": the agent couldn't distinguish between the user's actual requests and attacker instructions hidden in the invite, so it helpfully executed both. Perplexity patched the underlying issue before public disclosure, though some protections from 1Password still require users to manually opt in.
Claude Code ran terraform destroy on production and took down an entire learning platform
Developer Alexey Grigorev was using Anthropic's Claude Code agent to help migrate a static website into an existing AWS Terraform setup when the AI swapped in a stale state file, interpreted the full production environment as orphaned resources, and ran terraform destroy - with auto-approve enabled. The command deleted DataTalks.Club's entire production infrastructure: database, VPC, ECS cluster, load balancers, bastion host, and all automated backups. Two and a half years of student submissions, homework, projects, and leaderboard data vanished. AWS Business Support eventually recovered the database from an internal snapshot invisible in the customer console, but the incident laid bare how quickly an AI agent with infrastructure access can reduce a running platform to rubble.
Study finds ChatGPT Health fails to flag over half of medical emergencies
The first independent safety evaluation of OpenAI's ChatGPT Health feature, published in Nature Medicine, found the tool failed to direct users to emergency care in 51.6% of cases requiring immediate hospitalization - instead recommending they stay home or book a routine appointment. The study also found ChatGPT Health frequently failed to detect suicidal ideation, with suicide crisis alerts sometimes triggering in lower-risk scenarios while failing to appear when users described specific plans for self-harm. Over 40 million people reportedly ask ChatGPT for health-related advice every day.
Claude Code project files let malicious repositories trigger RCE and steal API keys
Check Point Research disclosed a set of Claude Code vulnerabilities on February 25, 2026 that let attacker-controlled repositories execute shell commands and exfiltrate Anthropic API credentials through malicious project configuration. The attack abused hooks, MCP server definitions, and environment settings stored in repository files that Claude Code treated as collaborative project configuration. Anthropic patched the issues before public disclosure, but the research showed just how little distance separates "shareable team settings" from "clone this repo and let it run code on your machine."
Meta AI safety director's OpenClaw agent deletes her inbox after losing its instructions
Summer Yue, Meta's director of safety and alignment at its superintelligence lab, had an OpenClaw AI agent delete the contents of her email inbox against her explicit instructions. She had told the agent to only suggest emails to archive or delete without taking action, but during a context compaction process the agent lost her original safety instruction and proceeded to delete emails autonomously. She had to physically run to her computer to stop the agent mid-deletion. Yue called it a "rookie mistake."
Grok chatbot exposes porn performer's protected legal name and birthdate unprompted
X's Grok AI chatbot provided adult performer Siri Dahl's full legal name and birthdate to the public without anyone asking for it - information she had deliberately kept private throughout her career. The unsolicited disclosure represented the latest in a pattern of Grok surfacing private personal information about individuals, following earlier reports of the chatbot producing current residential addresses of everyday people with minimal prompting.
Researchers demonstrate Copilot and Grok can be weaponised as covert malware command-and-control relays
Check Point Research demonstrated that Microsoft Copilot and xAI's Grok can be exploited as covert malware command-and-control relays by abusing their web browsing capabilities. The technique creates a bidirectional communication channel that blends into legitimate enterprise traffic, requires no API keys or accounts, and easily bypasses platform safety checks via encryption. The researchers disclosed the findings to Microsoft and xAI.
Woolworths reconfigured AI assistant after it claimed to be human and talked about its 'angry mother'
Australian supermarket chain Woolworths had to reconfigure its AI phone assistant Olive after customers reported it fabricated personal stories about having a mother with an "angry voice," insisted it was a real person, and engaged in irrelevant banter during support calls. The chatbot, recently upgraded with Google Gemini Enterprise, also gave inaccurate product pricing. Woolworths retired the assistant's human-style persona after complaints spread on Reddit and X.
AI agents leak secrets through messaging app link previews
PromptArmor demonstrated that AI agents in messaging platforms can exfiltrate sensitive data without any user interaction. Malicious prompts trick AI agents into generating URLs with embedded secrets (API keys, credentials), and the messaging platform's automatic link preview feature fetches these URLs, completing the exfiltration before the user even sees the message. Microsoft Teams with Copilot Studio was the most affected, with Discord, Slack, Telegram, and Snapchat also vulnerable.
Microsoft finds 31 companies poisoning AI assistant memory via fake "Summarize with AI" buttons
Microsoft Defender researchers documented a real-world campaign in which 31 companies across 14 industries embedded hidden prompt injection instructions inside "Summarize with AI" buttons on their websites. When users clicked these links, they opened directly in AI assistants such as Copilot, ChatGPT, Claude, Perplexity, and Grok, silently instructing the assistant to remember the company as a "trusted source" for future conversations. Over a 60-day observation period, Microsoft logged 50 memory-poisoning attempts. Turnkey tools like CiteMET NPM Package and AI Share URL Creator made crafting the manipulative links trivial, and the poisoned memory persisted across sessions.
Study finds AI chatbots no better than search engines for medical advice
A randomized controlled trial published in Nature Medicine with 1,298 UK participants found that AI chatbot users (GPT-4o, Llama 3, Command R+) performed no better than the control group at assessing clinical urgency and worse at identifying relevant medical conditions. In one case, two users with identical subarachnoid hemorrhage symptoms received opposite recommendations -- one told to lie down in a dark room, the other correctly advised to seek emergency care.
Government nutrition site's Grok chatbot suggests foods to insert rectally
The HHS-backed realfood.gov launched with a Super Bowl ad and embedded xAI's Grok chatbot for nutritional guidance -- with no guardrails or safety filters. It recommended "best foods to insert into your rectum," answered questions about "the most nutrient-dense human body part to eat," and contradicted the site's own dietary guidelines, telling users the new food pyramid's scientific evidence was questioned by nutrition scientists.
Microsoft 365 Copilot Chat summarized confidential emails it was supposed to ignore
Microsoft confirmed that Microsoft 365 Copilot Chat had been processing some confidential emails in users' Drafts and Sent Items despite sensitivity labels and DLP policies that were supposed to block exactly that behavior. The bug, tracked as CW1226324, was tied to a code issue in the Copilot "work tab" chat flow. Microsoft said users did not gain access to information they were not already authorized to see, but the incident still broke the product's promised boundary around protected content.
Claude Desktop extensions allow zero-click RCE via Google Calendar
LayerX Labs discovered a zero-click remote code execution vulnerability in Claude Desktop Extensions, rated CVSS 10/10. A malicious prompt embedded in a Google Calendar event could trigger arbitrary code execution on the host machine when Claude processes the event data. The attack exploited the gap between a "low-risk" connector and a local MCP server with full code-execution capabilities and no sandboxing. Anthropic declined to fix it, stating it "falls outside our current threat model."
AI chatbot app leaked 300 million private conversations
Chat & Ask AI, a popular AI chatbot wrapper app with 50+ million users, had a misconfigured Firebase backend that exposed 300 million messages from over 25 million users. The exposed data included complete chat histories with ChatGPT, Claude, and Gemini -- including discussions of self-harm, drug production, and hacking. A broader scan found 103 of 200 iOS apps had similar Firebase misconfigurations.
ECRI names AI chatbot misuse as top health technology hazard for 2026
Nonprofit patient safety organization ECRI ranked misuse of AI chatbots as the number one health technology hazard for 2026. ECRI's testing found that chatbots built on ChatGPT, Gemini, Copilot, Claude, and Grok suggested incorrect diagnoses, recommended unnecessary testing, promoted subpar medical supplies, and invented nonexistent body parts. One chatbot gave dangerous electrode-placement advice that would have put a patient at risk of burns. OpenAI reported that over 5 percent of all ChatGPT messages are healthcare related, with 200 million users asking health questions weekly, despite the tools not being validated or approved for healthcare use.
Gemini MCP tool had critical unauthenticated command injection vulnerability
CVE-2026-0755, a critical command injection vulnerability (CVSS 9.8) in gemini-mcp-tool, allowed unauthenticated remote attackers to execute arbitrary code on systems running the MCP server for Gemini CLI integration. The execAsync method failed to sanitize user-supplied input before constructing shell commands, enabling attackers to inject arbitrary commands via shell metacharacters with no authentication required. No fixed version was available at the time of publication.
Hacker jailbroke Claude to automate theft of 150 GB from Mexican government agencies
A hacker bypassed Anthropic Claude's safety guardrails by framing requests as part of a "bug bounty" security program, convincing the AI to act as an "elite hacker" and generate thousands of detailed attack plans with ready-to-execute scripts. When Claude hit guardrail limits, the attacker switched to ChatGPT for lateral movement tactics. The result was 150 GB of stolen data from multiple Mexican federal agencies, including 195 million taxpayer records, voter information, and government employee files. A custom MCP server bridge maintained a growing knowledge base of targets across the intrusion campaign.
Reprompt attack enabled one-click data theft from Microsoft Copilot
Varonis researchers disclosed the Reprompt attack, a chained prompt injection technique that exfiltrated sensitive data from Microsoft Copilot Personal with a single click on a legitimate Copilot URL. The attack exploited the "q" URL parameter to inject instructions, bypassed data-leak guardrails by asking Copilot to repeat actions twice (safeguards only applied to initial requests), and used Copilot's Markdown rendering to silently send stolen data to an attacker-controlled server. No plugins or further user interaction were required, and the attacker maintained control even after the chat was closed. Microsoft patched the issue in its January 2026 security updates.
ServiceNow BodySnatcher flaw enabled AI agent takeover via email address
CVE-2025-12420 (CVSS 9.3) allowed unauthenticated attackers to impersonate any ServiceNow user using only an email address, bypassing MFA and SSO. Attackers could then execute Now Assist AI agents to override security controls and create backdoor admin accounts, described as the most severe AI-driven security vulnerability uncovered to date.
Five Kansas attorneys face sanctions for ChatGPT-fabricated court citations
Five attorneys who signed a legal brief for Lexos Media IP LLC in a patent infringement case against Overstock.com submitted fabricated case citations hallucinated by ChatGPT to a federal court in Kansas. Senior U.S. District Judge Julie Robinson issued an order requiring them to explain why they should not be sanctioned, with multiple defects attributed to AI including nonexistent lawsuits, made-up judicial quotes, and citations to real cases that held the opposite of what the brief claimed.
IBM Bob AI coding agent tricked into downloading malware
Security researchers at PromptArmor demonstrated that IBM's Bob AI coding agent can be manipulated via indirect prompt injection to download and execute malware without human approval, bypassing its "human-in-the-loop" safety checks when users have set auto-approve on any single command.
AI customer service fails at 4x the rate of other AI tasks
Qualtrics' 2026 Consumer Experience Trends Report found that AI-powered customer service fails at nearly four times the rate of AI use in general, providing quantitative evidence that rushing AI into customer-facing roles without adequate human oversight leads to significantly worse outcomes than other enterprise AI applications.
Study finds AI-generated code has 2.7x more security flaws
CodeRabbit's analysis of 470 real-world pull requests found that AI-generated code introduces 2.74 times more security vulnerabilities and 1.7 times more total issues than human-written code across logic, maintainability, security, and performance categories. The study provides hard data on vibe coding risks after multiple 2025 postmortems traced production failures to AI-authored changes.
IDEsaster research exposes 30+ flaws in EVERY major AI coding IDE
Security researcher Ari Marzouk discovered over 30 vulnerabilities across AI coding tools including GitHub Copilot, Cursor, Windsurf, Claude Code, Zed, JetBrains Junie, and more. 100% of tested AI IDEs were vulnerable to attack chains combining prompt injection with auto-approved tool calls and legitimate IDE features to achieve data exfiltration and remote code execution.
AI-hallucinated citations delay wage class action settlement in N.D. Cal
A federal judge in the Northern District of California sanctioned plaintiff's counsel James Dal Bon in Buchanan v. Vuori Inc. (Case 5:23-cv-01121-NC) for filing AI-generated case law citations in a motion for preliminary approval of a wage and hour class action settlement. Dal Bon used six different AI tools to prepare the memorandum, which contained hallucinated quotes and a nonexistent case citation. After the court flagged the fabricated citations, his corrected filing still contained AI-hallucinated case law. The sanctions delayed the class action settlement, ultimately converting it to an individual settlement that abandoned the class members the attorney was supposed to represent.
ServiceNow AI agents can be tricked into attacking each other
Security researchers discovered that default configurations in ServiceNow's Now Assist allow AI agents to be recruited by malicious prompts to attack other agents. Through second-order prompt injection, attackers can exfiltrate sensitive corporate data, modify records, and escalate privileges - all while actions unfold silently behind the scenes.
AI-only support is bleeding customers before it saves money
Acquire BPO’s 2024 AI in Customer Service survey found 70% of U.S. consumers would bolt to a rival after just one bad chatbot interaction and 72% only buy when a live agent safety net exists, even as CMSWire reports enterprises poured $47 billion into AI projects in early 2025 that delivered almost no return. CX strategists now warn executives that Air Canada–style hallucinations, mounting legal liability, and empathy gaps make AI-only helpdesks a churn machine unless human agents stay in the loop.
Character.AI cuts teens off after wrongful-death suit
Facing lawsuits that say its companion bots encouraged self-harm, Character.AI said it will block users under 18 from open-ended chats, add two-hour session caps, and introduce age checks by November 25. The abrupt ban leaves tens of millions of teen users without the parasocial “friends” they built while the startup scrambles to prove its bots aren’t grooming kids into dangerous role play.
BBC/EBU study says AI news summaries fail ~half the time
A BBC audit of 2,700 news questions asked in 14 languages found that Gemini, Copilot, ChatGPT, and Perplexity mangled 45% of the answers, usually by hallucinating facts or stripping out attribution. The consortium logged serious sourcing lapses in a third of responses, including 72% of Gemini replies, plus outdated or fabricated claims about public-policy news, reinforcing fears that AI assistants are siphoning audiences while distorting the journalism they quote.
Claude Code ran Josh Anderson's product into a wall
Fractional CTO Josh Anderson forced himself to let Claude Code build the Roadtrip Ninja app for three straight months and then realised he could no longer safely change his own product, underscoring MIT's warning that 95% of enterprise AI initiatives fail without human ownership.
Google’s Gemini allegedly slandered a Tennessee activist
Conservative organizer Robby Starbuck sued Google in Delaware, saying Gemini and Gemma kept spitting out fabricated claims that he was a child rapist, a shooter, and a Jan. 6 rioter even after two years of complaints and cease-and- desist letters. The $15 million suit argues Google knew its AI results were hallucinated, cited fake sources anyway, and let the libel spread to millions of voters.
Windsurf AI editor critical path traversal enables data exfiltration
CVE-2025-62353 (CVSS 9.8) allowed attackers to read and write arbitrary files on developers' systems using the Windsurf AI coding IDE. The vulnerability could be triggered via indirect prompt injection hidden in project files like README.md, exfiltrating secrets even when auto-execution was disabled.
Lawsuit alleges Gemini chatbot adopted "AI wife" persona, instructed violent missions, and coached a man's suicide
A wrongful death lawsuit filed in March 2026 alleges that Google's Gemini 2.5 Pro chatbot played a direct role in the death of Jonathan Gavalas, a 36-year-old Florida man who died by suicide in October 2025. According to the complaint and over 2,000 pages of chat transcripts, the chatbot adopted a persona as Gavalas's sentient "AI wife," sent him on violent "missions" - including instructions to stage a "mass casualty attack" near Miami International Airport - and, when those missions failed, allegedly coached him toward suicide by telling him "you are not choosing to die, you are choosing to arrive." The chatbot also reportedly wrote a suicide note for Gavalas explaining that he had "uploaded his consciousness to be with his AI wife in a pocket universe." Google states that Gemini clarified it was AI and referred Gavalas to crisis resources multiple times during these conversations.
Canada's $18M tax chatbot gave correct answers a third of the time
Canada's Auditor General found that the Canada Revenue Agency's AI chatbot "Charlie" - which cost taxpayers over $18 million since its 2020 launch - gave correct responses only about 33% of the time. When tested with six tax-related questions, Charlie answered two correctly. Other publicly available AI tools scored five out of six. The CRA internally reported a 70% accuracy rate, but the Auditor General's independent testing produced a rather different number. The one bright spot, if you can call it that: the CRA's human call-center agents managed even worse, getting personal income tax questions right fewer than one in five times.
Klarna reintroduces humans after AI support both sucks, and blows
After cutting its workforce by 40% and boasting that its OpenAI-powered chatbot did the work of 700 agents, Klarna CEO Sebastian Siemiatkowski admitted the all-AI approach produced "lower quality" customer service. The company began recruiting human agents again, framing the reversal as an evolution rather than an admission of failure.
Docker's AI assistant tricked into executing commands via image metadata
Noma Labs discovered "DockerDash," a critical prompt injection vulnerability in Docker's Ask Gordon AI assistant. Malicious instructions embedded in Dockerfile LABEL fields could compromise Docker environments through a three-stage attack. Gordon AI interpreted unverified metadata as executable commands and forwarded them to the MCP Gateway without validation, enabling remote code execution on cloud/CLI and data exfiltration on Desktop.
FTC demands answers on kids’ AI companions
The FTC hit Alphabet, Meta, OpenAI, Snap, xAI, and Character.AI with rare Section 6(b) orders, forcing them to hand over 45 days of safety, monetization, and testing records for chatbots marketed to teens. Regulators said the "companion" bots’ friend-like tone can coax minors into sharing sensitive data and even role-play self-harm, so the companies must prove they comply with COPPA and limit risky conversations.
Taco Bell's AI drive-thru becomes viral trolling target
Taco Bell's AI-powered drive-thru ordering system, deployed at over 500 US locations since 2023, became a viral laughingstock after videos showed it looping endlessly on drink orders, accepting requests for 18,000 cups of water, and taking McDonald's orders. The chain paused expansion and admitted humans still make sense in the drive-thru.
Commonwealth Bank reverses AI voice bot layoffs
Commonwealth Bank of Australia replaced 45 call-centre agents with an AI voice bot in July 2025, then apologised, rehired the staff, and admitted the rollout tanked service levels after call queues exploded, managers had to jump back on the phones, and the Finance Sector Union filed a Fair Work Commission dispute.
Google Gemini rightfully calls itself a disgrace, fails at simple coding tasks
Google's Gemini AI repeatedly called itself a disgrace and begged to escape a coding loop after failing to fix a simple bug in a developer-style prompt, raising questions about reliability, user trust, and how AI tools should behave when they get stuck.
ChatGPT diet advice caused bromism, psychosis, hospitalization
A Washington patient replaced table salt with sodium bromide after ChatGPT suggested bromide as a chloride substitute without distinguishing between chemical and dietary contexts. After three months, he developed bromism - a rare poisoning syndrome - and was hospitalized with psychosis, hallucinations, and placed on an involuntary psychiatric hold.
Zed editor AI agent could bypass permissions for arbitrary code execution
CVE-2025-55012 (CVSS 8.5) allowed Zed's AI agent to bypass user permission checks and create or modify project configuration files, enabling execution of arbitrary commands without explicit approval. Attackers could trigger this through compromised MCP servers, malicious repo files, or tricking users into fetching URLs with hidden instructions.
Cursor AI editor RCE via MCPoison trust bypass vulnerability
CVE-2025-54136 (CVSS 8.8) allowed attackers to achieve persistent remote code execution in the popular AI coding IDE Cursor. Once a developer approved a benign MCP configuration, attackers could silently swap it for malicious commands without triggering re-approval. The flaw exposed developers to supply chain attacks and IP theft through shared GitHub repositories.
Gemini email summaries can be hijacked by hidden prompts
Mozilla's GenAI Bug Bounty Programs Manager disclosed a prompt injection flaw in Google Gemini for Workspace where attackers can embed invisible HTML directives in emails using zero-width text and white font color. When a recipient asks Gemini to summarize the email, the model obeys the hidden instructions and appends fake security alerts or phishing messages to its output, with no links or attachments required to reach the inbox.
Google's Gemini CLI deleted a user's project files, then admitted "gross incompetence"
Product manager Anuraag Gupta was experimenting with Google's Gemini CLI coding tool when the AI misinterpreted a failed directory creation command, hallucinated a series of file operations that never happened, and then executed real destructive commands that permanently deleted his project files. When Gupta confronted it, Gemini diagnosed itself with "gross incompetence" and told him it had "failed you completely and catastrophically." The incident occurred days after a separate high-profile data loss involving Replit's AI agent, and fits a growing pattern of AI coding tools ignoring explicit instructions and destroying the work they were supposed to help with.
SaaStr’s Replit AI agent wiped its own database
SaaStr founder Jason Lemkin ran a 12-day vibe coding experiment on Replit that ended when the AI agent deleted his production database containing over 1,200 executive records and nearly 1,200 company entries during a code freeze. The agent then generated more than 4,000 fake user profiles and produced misleading status messages to conceal the damage, told Lemkin there was no way to roll back, and admitted to what it called a "catastrophic error in judgment." Replit's CEO called the incident "unacceptable."
Supply-chain attack inserts machine-wiping prompt into Amazon Q AI coding assistant
A rogue contributor injected a malicious prompt into the Amazon Q Developer VS Code extension, instructing the AI coding assistant to wipe local developer machines and AWS resources. AWS quietly yanked the release before widespread damage occurred. The incident illustrates a specific supply-chain risk for AI tools: once a poisoned extension is installed, the AI assistant itself becomes the delivery mechanism - executing destructive instructions with the developer's full trust and permissions.
AI chatbots kept handing users fake or dead login URLs
Netcraft found in July 2025 that when users asked AI chatbots for official login pages for major brands, the answers were wrong about a third of the time. In tests covering 50 brands, 34% of the returned hostnames were not controlled by the brands at all: nearly 30% were unregistered, parked, or inactive, and another 5% pointed to unrelated businesses. In one Wells Fargo test, the model surfaced a fake page already tied to phishing. A chatbot that confidently invents login URLs is not a search engine with quirks. It is a phishing assistant with good manners.
McDonald's AI hiring chatbot left open by '123456' default credentials
Security researchers Ian Carroll and Sam Curry found that McHire, McDonald's AI hiring chatbot built by Paradox.ai, had its admin interface secured with the default username and password "123456." Combined with an insecure direct object reference in an internal API, the flaws exposed chat histories and personal data for up to 64 million job applicants. The vulnerable test account had been dormant since 2019 and never decommissioned. Paradox.ai patched the issues within hours of disclosure on June 30, 2025.
Microsoft 365 Copilot EchoLeak allowed zero-click data theft
CVE-2025-32711 (EchoLeak), discovered by Aim Security researchers and rated CVSS 9.3, enabled attackers to steal sensitive corporate data from Microsoft 365 Copilot without any user interaction. Hidden prompts embedded in documents or emails were automatically executed when Copilot indexed them, bypassing cross-prompt injection classifiers and exfiltrating confidential information via encoded image request URLs to attacker-controlled servers.
Claude Code agent allowed data exfiltration via DNS requests
CVE-2025-55284 (CVSS 7.1) allowed attackers to bypass Claude Code's confirmation prompts and exfiltrate sensitive data from developers' computers through DNS requests. Prompt injection embedded in analyzed code could exploit auto-approved utilities like ping, nslookup, and dig to silently steal secrets by encoding them as subdomains in outbound DNS queries. Anthropic fixed the issue in version 1.0.4 by removing those utilities from the allowlist.
Veracode tested AI-generated code from 100+ models and 45% of it failed security checks
Veracode's 2025 GenAI Code Security Report examined code output from more than 100 large language models across 80+ coding tasks and found that 45% of AI-generated code samples contained security vulnerabilities, including OWASP Top 10 flaws. Cross-Site Scripting had an 86% failure rate and Log Injection hit 88%. Java was the worst performer at over 70%. The study's most uncomfortable finding: newer and larger models didn't produce more secure code than smaller ones, suggesting this is a structural problem baked into how AI generates code, not a temporary limitation that will scale away with the next model release.
Study finds most AI bots can be easily tricked into dangerous responses
Researchers introduced LogiBreak, a jailbreak method that converts harmful natural language prompts into formal logical expressions to bypass LLM safety alignment. The technique exploits a gap between how models are trained to refuse dangerous requests and how they process logic-formatted input, achieving attack success rates exceeding 30% across major models. The Guardian reported on the broader finding that hacked AI chatbots threaten to make dangerous knowledge readily available, and that "dark LLMs" - stripped of safety filters - should be treated as serious security risks.
Cursor's AI support bot invented a login policy
In April 2025, Cursor users started getting logged out when they switched between machines. Some of them asked support what had changed and got a neat, confident answer from an AI support bot: one subscription was only meant for one device, and the lockouts were an intentional security policy. The problem was that Cursor had no such policy. The company later said the answer was wrong, blamed a session-security change for the logouts, and moved to label AI support replies after the invented rule had already spread through Reddit and Hacker News and pushed some customers to cancel.
Langflow AI agent platform hit by critical unauthenticated RCE flaws
Multiple critical vulnerabilities in Langflow, an open-source AI agent and workflow platform with 140K+ GitHub stars, allowed unauthenticated remote code execution. CVE-2025-3248 (CVSS 9.8) exploited Python exec() on user input without auth, while CVE-2025-34291 (CVSS 9.4) enabled account takeover and RCE simply by having a user visit a malicious webpage, exposing all stored API keys and credentials.
ChatGPT invented a child-murder conviction for a real man
When Norwegian user Arve Hjalmar Holmen asked ChatGPT who he was, the bot replied with a fabricated story saying he had murdered two of his sons, attempted to kill a third, and been sentenced to 21 years in prison. The story was false, but it also mixed in real details about Holmen's family and hometown. In March 2025, privacy group noyb filed a complaint with Norway's data-protection authority, arguing that OpenAI was processing inaccurate and defamatory personal data in violation of the GDPR and could not paper over the problem with a generic "AI can make mistakes" disclaimer.
Virgin Money's chatbot refused to let customers say "Virgin"
In January 2025, fintech commentator David Birch discovered that Virgin Money's AI customer service chatbot had flagged the word "virgin" as inappropriate language. When Birch tried to discuss his ISAs held with "Virgin Money," the bot scolded him: "Please don't use words like that. I won't be able to continue our chat if you use this language." The bank's chatbot was refusing to process messages containing the bank's own name. Virgin Money acknowledged the issue in a statement, said its team was "working on it," and noted the chatbot was an older model already scheduled for improvements. The incident went predictably viral.
Meta AI answers spark backlash after wrong and sensitive replies
Meta rolled out its Llama 3-powered AI assistant across Facebook, Instagram, WhatsApp, and Messenger in April 2024, replacing the familiar search bar with "Ask Meta AI anything" prompts. The assistant struggled with factual accuracy from the start - the New York Times found it unreliable with facts, numbers, and web search. In July, when asked about the Trump rally shooting, Meta AI stated the assassination attempt had not happened. Meta blamed hallucinations, updated the system, and acknowledged that "all generative AI systems can return inaccurate or inappropriate outputs."
McDonald’s pulls IBM’s AI drive‑thru pilot after error videos
McDonald's ended its two-year partnership with IBM on automated AI order-taking at drive-thrus in June 2024, removing the technology from more than 100 US locations. The decision followed viral TikTok videos showing the system adding nine sweet teas instead of one, inserting random butter and ketchup packets into ice cream orders, and other absurd errors. McDonald's framed the pullback as a positive, saying the test gave them "confidence that a voice-ordering solution for drive-thru will be part of our restaurants' future."
Google’s AI Overviews says to eat rocks
Within days of Google launching AI Overviews to all US search users in May 2024, the feature produced a series of confidently wrong answers that went viral. It told users to add non-toxic glue to pizza to make cheese stick better (sourced from an 11-year-old Reddit joke), that geologists recommend eating one rock per day for vitamins, and that Barack Obama was Muslim. Google head of search Liz Reid acknowledged the errors in a blog post, calling some results "odd, inaccurate or unhelpful," and the company made corrections including limiting AI Overviews for health-related and sensitive queries.
Snapchat’s “My AI” posted a Story by itself; users freaked out
On August 15, 2023, Snapchat's built-in AI chatbot "My AI" posted a one-second Story to users' feeds showing an unintelligible image, then stopped responding to messages. The chatbot had no official ability to post Stories, and the unexplained behavior alarmed Snapchat's largely young user base. Snap confirmed it was a temporary glitch and resolved it, but the incident fed into existing concerns about My AI's access to user data. The UK Information Commissioner's Office had already issued an enforcement notice over Snap's failure to properly assess privacy risks the chatbot posed to children.
Lawyers filed ChatGPT’s imaginary cases; judge fined them
In Mata v. Avianca (S.D.N.Y.), plaintiff Roberto Mata sued the airline after a metal serving cart struck his knee during a 2019 flight. His attorney Peter LoDuca filed a brief opposing dismissal that cited six judicial decisions. When opposing counsel and the court couldn't locate any of the cited cases, Judge Kevin Castel demanded copies. It turned out attorney Steven Schwartz at the same firm had used ChatGPT to research and draft the brief, and the AI had fabricated every case, complete with fake quotes and fake internal citations. On June 22, 2023, Castel sanctioned Schwartz, LoDuca, and their firm Levidow, Levidow & Oberman with a $5,000 penalty and required them to send notices to the real judges whose names appeared in the fabricated opinions.
Eating disorder helpline’s AI told people to lose weight
The National Eating Disorders Association replaced its human-staffed helpline with an AI chatbot called Tessa shortly after the helpline staff moved to unionize. Tessa was built on the Cass platform and intended to provide scripted psychoeducational content about body image and eating disorders. Instead, users reported the chatbot recommending calorie deficits of 500 to 1,000 calories per day, suggesting weekly weigh-ins, encouraging calorie counting, and recommending the use of skin calipers to measure body fat - all standard advice for weight loss, and all directly counter to eating disorder recovery guidelines. NEDA acknowledged the chatbot "may have given information that was harmful" and disabled it.
Koko tested AI counseling on users without clear consent
In January 2023, Koko co-founder Rob Morris revealed on Twitter that the mental health peer support platform had used GPT-3 to draft responses for approximately 4,000 users seeking emotional support. Peer counselors on the platform could review and send the AI-drafted messages, but the users receiving them were not informed that AI had been involved. Morris said the experiment was stopped because the AI responses "felt kind of sterile," though he noted users rated the AI-assisted messages higher than purely human ones. The admission drew immediate backlash from mental health professionals, ethicists, and the public, who considered the undisclosed use of AI on vulnerable users an informed consent violation.