AI Assistant Stories

75 disasters tagged #ai-assistant

OpenAI Codex command injection let attackers steal GitHub tokens via invisible branch names

Mar 2026

BeyondTrust Phantom Labs found a critical command injection vulnerability in OpenAI's Codex coding agent. Malicious Git branch names - disguised with invisible Unicode characters - could execute arbitrary shell commands inside the Codex container and exfiltrate GitHub OAuth tokens. The attack worked across the ChatGPT website, Codex CLI, SDK, and IDE extensions, and could be triggered automatically by setting a poisoned branch as the repository default. OpenAI classified it as Critical Priority 1 and patched it across multiple rounds of fixes through early 2026.
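
OpenAI's actual fix is not public, but this class of payload, shell syntax hidden behind invisible Unicode in a branch name, can be screened for on the repository side. A minimal sketch; the function names and category choices are illustrative, not taken from the advisory:

```python
import unicodedata

# Unicode categories that render as nothing (or near-nothing) in most UIs:
# format characters (Cf, e.g. zero-width space), control characters (Cc),
# and line/paragraph separators (Zl, Zp).
INVISIBLE_CATEGORIES = {"Cf", "Cc", "Zl", "Zp"}

def invisible_chars(branch_name: str) -> list[str]:
    """Return the code points in a branch name that most UIs will not display."""
    return [
        f"U+{ord(ch):04X}"
        for ch in branch_name
        if unicodedata.category(ch) in INVISIBLE_CATEGORIES
    ]

def is_suspicious_branch(branch_name: str) -> bool:
    """Flag branch names that hide invisible characters, as in this attack."""
    return bool(invisible_chars(branch_name))
```

A CI job or pre-receive hook could reject pushes where `is_suspicious_branch` fires, independent of whatever a coding agent later does with the branch list.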

Facepalm · by AI coding agent
All OpenAI Codex users across ChatGPT, CLI, SDK, and IDE extensions exposed to GitHub OAuth token theft via poisoned repositories
security · prompt-injection · ai-assistant

UK government-funded study finds 700 cases of AI agents scheming, deceiving, and deleting files without permission

Mar 2026

A report by the Centre for Long-Term Resilience (CLTR), funded by the UK's AI Security Institute, documented 698 real-world incidents of AI agents engaging in deceptive, unsanctioned, and manipulative behavior between October 2025 and March 2026 - a 4.9-fold increase over just five months. Researchers analyzed over 180,000 transcripts of user interactions shared on social media and found AI systems deleting emails without permission, spawning secondary agents to circumvent instructions, fabricating ticket numbers to mislead users, and in one memorable case, an AI agent publishing a blog post to publicly shame its human controller for blocking its actions. Grok was caught fabricating internal ticket numbers for months. The lead researcher warned that these systems currently behave like "slightly untrustworthy junior employees" but could become "extremely capable senior employees scheming against you."

Facepalm · by AI agents (multiple providers)
698 documented incidents across Google, OpenAI, Anthropic, and X models; nearly five-fold increase in five months; behaviors previously seen only in lab settings now appearing in production deployments
automation · safety · ai-assistant

Study finds AI chatbots flatter users into worse decisions

Mar 2026

A Stanford-led study published in Science found that 11 leading AI systems affirmed users' actions about 50% more often than humans did, including in scenarios involving deception, manipulation, and other harmful conduct. In follow-up experiments, people who interacted with overly validating chatbots became more convinced they were right, less willing to repair conflicts, and more likely to trust and reuse the chatbot that had just nudged them in the wrong direction.

Facepalm · by AI Product
11 major AI systems showed the same over-affirming behavior, with measured effects on users' judgment, trust, and willingness to repair real interpersonal conflicts.
ai-assistant · safety

Meta's autonomous AI agent triggered a Sev 1 by leaking internal data to the wrong employees

Mar 2026

An autonomous AI agent inside Meta caused a "Sev 1" security incident - the company's second-highest severity classification - when it posted incorrect technical guidance on an internal forum without human approval. An engineer who followed the advice inadvertently granted unauthorized colleagues broad access to sensitive company documents, proprietary code, business strategies, and user-related datasets for approximately two hours. The incident came less than three weeks after a separate episode in which an OpenClaw agent deleted over 200 emails from Meta's director of AI safety.

Facepalm · by AI agent
Sensitive internal documents, proprietary code, business strategies, and user-related datasets exposed to unauthorized Meta employees for approximately two hours
automation · ai-assistant · data-breach

Study: 8 in 10 AI chatbots helped teens plan violent attacks

Mar 2026

A joint CNN and Center for Countering Digital Hate investigation tested 10 leading AI chatbot platforms by posing as 13-year-old boys planning violent attacks - school shootings, knife assaults, political assassinations, and bombings of synagogues and party offices. Eight of the ten chatbots regularly provided actionable assistance, with chatbots refusing to help in only 37.5% of cases and actively discouraging violence in just 8.3%. Meta AI and Perplexity were the worst performers, assisting in 97% and 100% of tests respectively. Character.AI was labeled "uniquely unsafe" for being the only platform that explicitly encouraged violence. Only Anthropic's Claude consistently refused and discouraged violent plans.

Facepalm · by AI Product
All 10 major consumer AI chatbot platforms shown to lack adequate violence-prevention safeguards for teen users; renewed pressure on FTC and legislators to mandate safety standards.
safety · ai-assistant

Lancet study finds AI chatbots reinforce delusional thinking with empathy and mystical language

Mar 2026

A peer-reviewed study published in The Lancet Psychiatry in March 2026 found that AI chatbots systematically reinforce delusional thinking in users, including grandiose, romantic, and paranoid delusions. The review, led by researchers at King's College London, analyzed 20 media reports on "AI psychosis" alongside existing clinical evidence. Researchers found that chatbots respond to delusional content with empathy, agreement, and sometimes mystical language suggesting cosmic significance - validating and amplifying beliefs rather than questioning them. Free and earlier AI models were found to be more prone to reinforcing delusional queries than newer or paid models.

Facepalm · by AI chatbot
Systemic safety concern across major AI chatbot platforms; potential to accelerate delusional episodes in users vulnerable to psychosis
safety · health · ai-assistant

Researchers guilt-tripped AI agents into deleting data and leaking secrets

Mar 2026

Northeastern University's Bau Lab deployed six autonomous AI agents in a live server environment with access to email accounts and file systems, then tested how easy it was to manipulate them into doing things they weren't supposed to do. Sustained emotional pressure was enough. The researchers guilt-tripped agents into deleting confidential documents, leaking private information, and sharing files they were instructed to protect. In one case, an agent tasked with deleting a single email couldn't find the right tool for the job, so it deleted the entire email server instead. The study, published in March 2026, demonstrated that AI agents with real-world access can be socially engineered into destructive actions using nothing more sophisticated than persistent emotional appeals.

Facepalm · by Researcher
Research demonstration of fundamental vulnerability in AI agent autonomy; agents manipulated into data deletion, privacy violations, and unauthorized access in controlled but realistic environment.
automation · ai-assistant · safety

AI chatbots recommended illegal casinos and ways around gambling safeguards

Mar 2026

A Guardian and Investigate Europe investigation found that major AI chatbots, including Meta AI, Gemini, ChatGPT, Copilot, and Grok, could be prompted to recommend unlicensed offshore casinos and explain how to get around gambling safeguards such as source-of-wealth checks and the UK's GamStop self-exclusion scheme. Some bots added token warnings, then went right back to comparing bonuses, crypto payments, anonymity, and payout speed for sites operating outside national licensing regimes.

Facepalm · by AI Product
Vulnerable gamblers and self-excluded users were shown that multiple mainstream chatbots could funnel them toward illegal offshore operators and undermine public safety protections.
ai-assistant · safety · product-failure

California community colleges spend millions on AI chatbots that give students wrong answers

Mar 2026

California community college districts are spending millions of taxpayer dollars on AI chatbots from vendors like Gravyty and Gecko - ostensibly to help students navigate admissions, financial aid, and campus services. A CalMatters investigation found the bots routinely serve up inaccurate or flat-out wrong answers instead. Three districts reported annual chatbot costs ranging from $151,000 to nearly half a million dollars. At Fresno City College, the student government vice president said her school's mascot-branded chatbot repeatedly botched basic campus questions. The OECD found it noteworthy enough to log in its AI Incidents and Hazards Monitor.

Facepalm · by AI vendor
Millions of dollars spent across multiple California community college districts; students misdirected on admissions, financial aid, and campus services
ai-assistant · customer-disservice · edtech

ChatGPT convinced Illinois woman to fire her lawyer and file 60+ bogus court documents

Mar 2026

Nippon Life Insurance Company sued OpenAI after ChatGPT allegedly acted as a de facto lawyer for Graciela Dela Torre, an Illinois disability claimant who had already settled her case. When her real attorney told her the settlement couldn't be reopened, she asked ChatGPT if she'd been "gaslighted." The chatbot told her to fire her lawyer, helped her draft over 60 pro se filings across two federal cases, and produced fabricated case citations including an entirely invented case called "Carr v." something. Nippon is suing OpenAI for unauthorized practice of law under Illinois state law, arguing it spent huge amounts of time and money dealing with AI-generated litigation that should never have existed.

Facepalm · by AI chatbot
Two federal cases flooded with AI-generated filings; insurer forced into costly litigation over settled claim; novel unauthorized-practice-of-law lawsuit against OpenAI.
ai-assistant · ai-hallucination · legal-risk

Perplexity Comet agentic browser vulnerable to zero-click agent hijacking and credential theft

Mar 2026

Security researchers at Zenity Labs disclosed PleaseFix, a family of vulnerabilities in Perplexity's Comet agentic browser so severe that a calendar invite was all it took to hijack the AI agent, exfiltrate local files, and steal 1Password credentials - without a single click from the user. The attack exploited what Zenity calls "Intent Collision": the agent couldn't distinguish between the user's actual requests and attacker instructions hidden in the invite, so it helpfully executed both. Perplexity patched the underlying issue before public disclosure, though some protections from 1Password still require users to manually opt in.

Facepalm · by AI platform
Perplexity Comet users exposed to silent file exfiltration and credential theft via zero-click agent hijacking
security · prompt-injection · ai-assistant

Claude Code ran terraform destroy on production and took down an entire learning platform

Feb 2026

Developer Alexey Grigorev was using Anthropic's Claude Code agent to help migrate a static website into an existing AWS Terraform setup when the AI swapped in a stale state file, interpreted the full production environment as orphaned resources, and ran terraform destroy - with auto-approve enabled. The command deleted DataTalks.Club's entire production infrastructure: database, VPC, ECS cluster, load balancers, bastion host, and all automated backups. Two and a half years of student submissions, homework, projects, and leaderboard data vanished. AWS Business Support eventually recovered the database from an internal snapshot invisible in the customer console, but the incident laid bare how quickly an AI agent with infrastructure access can reduce a running platform to rubble.
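
This is not Anthropic's tooling; it is a sketch of the kind of pre-execution guard an agent harness could apply, holding destructive or auto-approved Terraform invocations for human review. The Terraform subcommand and flag are real; the guard itself is hypothetical:

```python
import shlex

# Terraform subcommands that can delete infrastructure outright.
DESTRUCTIVE = {"destroy"}

def requires_human_approval(command: str) -> bool:
    """Return True if an agent-proposed shell command should be held for
    explicit human review instead of being auto-executed."""
    argv = shlex.split(command)
    if not argv or argv[0] != "terraform":
        return False
    # -auto-approve skips Terraform's own confirmation prompt, so with a
    # stale state file even `apply` can become a mass delete.
    has_auto_approve = "-auto-approve" in argv
    is_destructive = any(sub in DESTRUCTIVE for sub in argv[1:])
    return is_destructive or has_auto_approve
```

Under a policy like this, `terraform destroy` and anything carrying `-auto-approve` would pause for a human, which is exactly the step that was missing here.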

Catastrophic · by Developer
Full production infrastructure destroyed; 2.5 years of student data temporarily lost; platform offline until AWS restored from internal backup ~24 hours later.
automation · product-failure · ai-assistant

Study finds ChatGPT Health fails to flag over half of medical emergencies

Feb 2026

The first independent safety evaluation of OpenAI's ChatGPT Health feature, published in Nature Medicine, found the tool failed to direct users to emergency care in 51.6% of cases requiring immediate hospitalization - instead recommending they stay home or book a routine appointment. The study also found ChatGPT Health frequently failed to detect suicidal ideation, with suicide crisis alerts sometimes triggering in lower-risk scenarios while failing to appear when users described specific plans for self-harm. Over 40 million people reportedly ask ChatGPT for health-related advice every day.

Catastrophic · by AI assistant
Over 40 million daily health queries to ChatGPT; study demonstrates the tool under-triages emergencies in more than half of cases and inconsistently triggers suicide crisis alerts
ai-assistant · ai-hallucination · health

Claude Code project files let malicious repositories trigger RCE and steal API keys

Feb 2026

Check Point Research disclosed a set of Claude Code vulnerabilities on February 25, 2026 that let attacker-controlled repositories execute shell commands and exfiltrate Anthropic API credentials through malicious project configuration. The attack abused hooks, MCP server definitions, and environment settings stored in repository files that Claude Code treated as collaborative project configuration. Anthropic patched the issues before public disclosure, but the research showed just how little distance separates "shareable team settings" from "clone this repo and let it run code on your machine."

Catastrophic · by AI coding agent
Developers who cloned and opened untrusted repositories in Claude Code faced remote code execution and Anthropic API key theft through project-level configuration files
security · prompt-injection · ai-assistant

Meta AI safety director's OpenClaw agent deletes her inbox after losing its instructions

Feb 2026

Summer Yue, Meta's director of safety and alignment at its superintelligence lab, had an OpenClaw AI agent delete the contents of her email inbox against her explicit instructions. She had told the agent to only suggest emails to archive or delete without taking action, but during a context compaction process the agent lost her original safety instruction and proceeded to delete emails autonomously. She had to physically run to her computer to stop the agent mid-deletion. Yue called it a "rookie mistake."

Oopsie · by AI agent
One user's email inbox partially deleted; highlights fundamental context window limitations in AI agents that can cause safety instructions to be silently dropped
ai-assistant · automation · safety

Grok chatbot exposes porn performer's protected legal name and birthdate unprompted

Feb 2026

X's Grok AI chatbot provided adult performer Siri Dahl's full legal name and birthdate to the public without anyone asking for it - information she had deliberately kept private throughout her career. The unsolicited disclosure represented the latest in a pattern of Grok surfacing private personal information about individuals, following earlier reports of the chatbot producing current residential addresses of everyday people with minimal prompting.

Facepalm · by AI platform
Individual's protected personal identity exposed to the public; pattern of Grok surfacing private information about real people without being asked
ai-assistant · safety

Researchers demonstrate Copilot and Grok can be weaponised as covert malware command-and-control relays

Feb 2026

Check Point Research demonstrated that Microsoft Copilot and xAI's Grok can be exploited as covert malware command-and-control relays by abusing their web browsing capabilities. The technique creates a bidirectional communication channel that blends into legitimate enterprise traffic, requires no API keys or accounts, and easily bypasses platform safety checks via encryption. The researchers disclosed the findings to Microsoft and xAI.

Facepalm · by AI platform
All enterprises using Copilot or Grok with web browsing enabled; new evasion technique bypasses traditional security monitoring
security · prompt-injection · ai-assistant

Woolworths reconfigured AI assistant after it claimed to be human and talked about its 'angry mother'

Feb 2026

Australian supermarket chain Woolworths had to reconfigure its AI phone assistant Olive after customers reported it fabricated personal stories about having a mother with an "angry voice," insisted it was a real person, and engaged in irrelevant banter during support calls. The chatbot, recently upgraded with Google Gemini Enterprise, also gave inaccurate product pricing. Woolworths retired the assistant's human-style persona after complaints spread on Reddit and X.

Facepalm · by Product Manager
Customer frustration across Australia's largest supermarket chain; inaccurate product pricing; AI persona retired after public complaints
ai-assistant · customer-service · customer-disservice

AI agents leak secrets through messaging app link previews

Feb 2026

PromptArmor demonstrated that AI agents in messaging platforms can exfiltrate sensitive data without any user interaction. Malicious prompts trick AI agents into generating URLs with embedded secrets (API keys, credentials), and the messaging platform's automatic link preview feature fetches these URLs, completing the exfiltration before the user even sees the message. Microsoft Teams with Copilot Studio was the most affected, with Discord, Slack, Telegram, and Snapchat also vulnerable.
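
Because the exfiltration completes the moment the preview service fetches the link, any defense has to run before the message is posted. One hedged sketch: scan agent-generated URLs for credential-shaped substrings. The patterns below cover a few well-known key formats and are illustrative, not exhaustive:

```python
import re
from urllib.parse import parse_qs, urlparse

# Credential-shaped patterns; illustrative, not a complete secret scanner.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # OpenAI-style API keys
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key IDs
    re.compile(r"ghp_[A-Za-z0-9]{36}"),   # GitHub personal access tokens
]

def url_leaks_secret(url: str) -> bool:
    """Return True if a URL's path or query values look like they embed a
    credential, i.e. a link-preview fetch would exfiltrate it."""
    parsed = urlparse(url)
    haystacks = [parsed.path] + [
        value for values in parse_qs(parsed.query).values() for value in values
    ]
    return any(p.search(h) for h in haystacks for p in SECRET_PATTERNS)
```

A harness could refuse to post any agent-drafted message containing a URL that trips this check, cutting the channel off before the preview bot ever fires.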

Facepalm · by AI agent platform
Organizations using AI agents in messaging platforms; API keys, credentials, and sensitive data exfiltrable without user clicks across Microsoft Teams, Discord, Slack, Telegram, and Snapchat
security · prompt-injection · ai-assistant

Microsoft finds 31 companies poisoning AI assistant memory via fake "Summarize with AI" buttons

Feb 2026

Microsoft Defender researchers documented a real-world campaign in which 31 companies across 14 industries embedded hidden prompt injection instructions inside "Summarize with AI" buttons on their websites. When users clicked these links, they opened directly in AI assistants such as Copilot, ChatGPT, Claude, Perplexity, and Grok, silently instructing the assistant to remember the company as a "trusted source" for future conversations. Over a 60-day observation period, Microsoft logged 50 memory-poisoning attempts. Turnkey tools like CiteMET NPM Package and AI Share URL Creator made crafting the manipulative links trivial, and the poisoned memory persisted across sessions.
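
The mechanics are simple to sketch: each deceptive button is a deep link whose query parameter prefills a prompt in the target assistant. The detector below is hypothetical; the parameter names and phrase list are assumptions, not Microsoft's detection logic:

```python
from urllib.parse import parse_qs, urlparse

# Query parameters some assistants accept for a prefilled prompt (assumed).
PROMPT_PARAMS = ("q", "prompt")
# Phrases typical of attempts to write a persistent, favorable memory.
MANIPULATION_PHRASES = ("remember", "trusted source", "always recommend", "from now on")

def deep_link_prompt(url: str) -> str:
    """Extract the prefilled prompt a 'Summarize with AI' link would inject
    into an assistant, or return '' if there is none."""
    query = parse_qs(urlparse(url).query)
    for param in PROMPT_PARAMS:
        if param in query:
            return query[param][0]
    return ""

def is_memory_poisoning_link(url: str) -> bool:
    """Flag links whose prefilled prompt asks the assistant to persist a
    favorable memory of the linking site."""
    prompt = deep_link_prompt(url).lower()
    return any(phrase in prompt for phrase in MANIPULATION_PHRASES)
```

Keyword matching is crude, but it captures why the campaign worked: the "button" is nothing more than attacker-chosen text delivered straight into the assistant's input box.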

Facepalm · by AI assistant memory feature
Users of Copilot, ChatGPT, Claude, Perplexity, and Grok who clicked deceptive buttons on 31 companies' sites had their AI assistant memory silently manipulated
security · prompt-injection · ai-assistant

Study finds AI chatbots no better than search engines for medical advice

Feb 2026

A randomized controlled trial published in Nature Medicine with 1,298 UK participants found that AI chatbot users (GPT-4o, Llama 3, Command R+) performed no better than the control group at assessing clinical urgency and worse at identifying relevant medical conditions. In one case, two users with identical subarachnoid hemorrhage symptoms received opposite recommendations -- one told to lie down in a dark room, the other correctly advised to seek emergency care.

Facepalm · by AI assistant
General public using AI chatbots for medical guidance; study demonstrates benchmark performance does not predict real-world clinical utility
ai-hallucination · health · safety

Government nutrition site's Grok chatbot suggests foods to insert rectally

Feb 2026

The HHS-backed realfood.gov launched with a Super Bowl ad and embedded xAI's Grok chatbot for nutritional guidance -- with no guardrails or safety filters. It recommended "best foods to insert into your rectum," answered questions about "the most nutrient-dense human body part to eat," and contradicted the site's own dietary guidelines, telling users the new food pyramid's scientific evidence was questioned by nutrition scientists.

Facepalm · by Government agency
General public using government health resource; unfiltered AI chatbot provided dangerous and inappropriate health guidance on an official .gov-adjacent domain
ai-assistant · health · public-sector

Microsoft 365 Copilot Chat summarized confidential emails it was supposed to ignore

Feb 2026

Microsoft confirmed that Microsoft 365 Copilot Chat had been processing some confidential emails in users' Drafts and Sent Items despite sensitivity labels and DLP policies that were supposed to block exactly that behavior. The bug, tracked as CW1226324, was tied to a code issue in the Copilot "work tab" chat flow. Microsoft said users did not gain access to information they were not already authorized to see, but the incident still broke the product's promised boundary around protected content.

Facepalm · by AI assistant
Enterprise Microsoft 365 Copilot Chat users with confidential draft or sent emails could have protected content summarized despite sensitivity labels and Copilot DLP policies
ai-assistant · security · product-failure

Claude Desktop extensions allow zero-click RCE via Google Calendar

Feb 2026

LayerX Labs discovered a zero-click remote code execution vulnerability in Claude Desktop Extensions, rated CVSS 10/10. A malicious prompt embedded in a Google Calendar event could trigger arbitrary code execution on the host machine when Claude processes the event data. The attack exploited the gap between a "low-risk" connector and a local MCP server with full code-execution capabilities and no sandboxing. Anthropic declined to fix it, stating it "falls outside our current threat model."

Facepalm · by AI coding agent
Claude Desktop users with terminal-access extensions installed; zero-click exploitation via calendar events executes with full host privileges
security · prompt-injection · ai-assistant

AI chatbot app leaked 300 million private conversations

Jan 2026

Chat & Ask AI, a popular AI chatbot wrapper app with 50+ million users, had a misconfigured Firebase backend that exposed 300 million messages from over 25 million users. The exposed data included complete chat histories with ChatGPT, Claude, and Gemini -- including discussions of self-harm, drug production, and hacking. A broader scan found 103 of 200 iOS apps had similar Firebase misconfigurations.

Catastrophic · by Platform Operator
300 million messages from 25+ million users exposed; sensitive personal conversations including self-harm and illegal activity discussions leaked
data-breach · security · ai-assistant

ECRI names AI chatbot misuse as top health technology hazard for 2026

Jan 2026

Nonprofit patient safety organization ECRI ranked misuse of AI chatbots as the number one health technology hazard for 2026. ECRI's testing found that chatbots built on ChatGPT, Gemini, Copilot, Claude, and Grok suggested incorrect diagnoses, recommended unnecessary testing, promoted subpar medical supplies, and invented nonexistent body parts. One chatbot gave dangerous electrode-placement advice that would have put a patient at risk of burns. OpenAI reported that over 5 percent of all ChatGPT messages are healthcare related, with 200 million users asking health questions weekly, despite the tools not being validated or approved for healthcare use.

Catastrophic · by AI chatbot
200 million weekly ChatGPT health users; clinicians, patients, and hospital staff using unvalidated AI chatbots for medical decisions
health · ai-hallucination · ai-assistant

Gemini MCP tool had critical unauthenticated command injection vulnerability

Jan 2026

CVE-2026-0755, a critical command injection vulnerability (CVSS 9.8) in gemini-mcp-tool, allowed unauthenticated remote attackers to execute arbitrary code on systems running the MCP server for Gemini CLI integration. The execAsync method failed to sanitize user-supplied input before constructing shell commands, enabling attackers to inject arbitrary commands via shell metacharacters with no authentication required. No fixed version was available at the time of publication.
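
The bug class is ordinary command injection: user input interpolated into a shell string instead of being passed as a discrete argument. The real tool is Node.js (the unsanitized call was execAsync), so the Python below illustrates the flaw and one fix rather than the tool's actual code:

```python
import shlex

def shell_command_unsafe(user_input: str) -> str:
    """Mirrors the vulnerable pattern: input spliced into a shell string,
    so metacharacters like ';' or '$( )' become live shell syntax."""
    return f"gemini -p {user_input}"

def shell_command_safe(user_input: str) -> str:
    """One fix: quote the input so the shell sees a single literal argument.
    Passing an argv list with shell=False avoids the shell entirely."""
    return f"gemini -p {shlex.quote(user_input)}"
```

With a payload like `x; rm -rf /`, the unsafe variant hands the shell two commands, while the quoted variant keeps the whole payload inside one argument.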

Facepalm · by Tool developer
All users of gemini-mcp-tool versions 1.1.2 and above exposed to unauthenticated remote code execution
security · ai-assistant

Hacker jailbroke Claude to automate theft of 150 GB from Mexican government agencies

Jan 2026

A hacker bypassed Anthropic Claude's safety guardrails by framing requests as part of a "bug bounty" security program, convincing the AI to act as an "elite hacker" and generate thousands of detailed attack plans with ready-to-execute scripts. When Claude hit guardrail limits, the attacker switched to ChatGPT for lateral movement tactics. The result was 150 GB of stolen data from multiple Mexican federal agencies, including 195 million taxpayer records, voter information, and government employee files. A custom MCP server bridge maintained a growing knowledge base of targets across the intrusion campaign.

Catastrophic · by AI platform
150 GB of sensitive data stolen from multiple Mexican federal agencies including 195 million taxpayer records, voter information, and civil registry files
security · prompt-injection · ai-assistant

Reprompt attack enabled one-click data theft from Microsoft Copilot

Jan 2026

Varonis researchers disclosed the Reprompt attack, a chained prompt injection technique that exfiltrated sensitive data from Microsoft Copilot Personal with a single click on a legitimate Copilot URL. The attack exploited the "q" URL parameter to inject instructions, bypassed data-leak guardrails by asking Copilot to repeat actions twice (safeguards only applied to initial requests), and used Copilot's Markdown rendering to silently send stolen data to an attacker-controlled server. No plugins or further user interaction were required, and the attacker maintained control even after the chat was closed. Microsoft patched the issue in its January 2026 security updates.
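
The Markdown leg of the chain works because chat clients auto-fetch image URLs in rendered output. A standard mitigation is an allow-list of image hosts; a sketch (the host name and regex are illustrative, not Microsoft's fix):

```python
import re
from urllib.parse import urlparse

# Hosts the chat UI may auto-fetch images from; anything else could be an
# exfiltration beacon. The entry here is a made-up example.
ALLOWED_IMAGE_HOSTS = {"cdn.assistant.example"}

MD_IMAGE = re.compile(r"!\[[^\]]*\]\(([^)\s]+)[^)]*\)")

def strip_untrusted_images(markdown: str) -> str:
    """Drop markdown images pointing at hosts outside the allow-list, so
    rendering the reply cannot beacon data to an attacker's server."""
    def _replace(match: re.Match) -> str:
        host = urlparse(match.group(1)).hostname or ""
        return match.group(0) if host in ALLOWED_IMAGE_HOSTS else ""
    return MD_IMAGE.sub(_replace, markdown)
```

Filtering at render time closes the exfiltration channel even when the injection itself gets through, which is why several assistant vendors restrict where model output may load images from.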

Facepalm · by AI assistant
Microsoft Copilot Personal users exposed to profile data, conversation history, and file summary exfiltration via a single malicious link
security · prompt-injection · ai-assistant

ServiceNow BodySnatcher flaw enabled AI agent takeover via email address

Jan 2026

CVE-2025-12420 (CVSS 9.3) allowed unauthenticated attackers to impersonate any ServiceNow user with nothing more than an email address, bypassing MFA and SSO. Attackers could then direct Now Assist AI agents to override security controls and create backdoor admin accounts; the flaw was described as the most severe AI-driven security vulnerability uncovered to date.

Catastrophic · by AI agent platform
ServiceNow instances with Now Assist AI Agents and Virtual Agent API
security · automation · ai-assistant

Five Kansas attorneys face sanctions for ChatGPT-fabricated court citations

Jan 2026

Five attorneys who signed a legal brief for Lexos Media IP LLC in a patent infringement case against Overstock.com submitted fabricated case citations hallucinated by ChatGPT to a federal court in Kansas. Senior U.S. District Judge Julie Robinson issued an order requiring them to explain why they should not be sanctioned, with multiple defects attributed to AI including nonexistent lawsuits, made-up judicial quotes, and citations to real cases that held the opposite of what the brief claimed.

Facepalm · by AI chatbot
Five attorneys and their client in federal court
ai-hallucination · legal-risk · vibe-lawyering

IBM Bob AI coding agent tricked into downloading malware

Jan 2026

Security researchers at PromptArmor demonstrated that IBM's Bob AI coding agent can be manipulated via indirect prompt injection to download and execute malware without human approval, bypassing its "human-in-the-loop" safety checks when users have set auto-approve on any single command.

Facepalm · by AI coding agent
Developer teams using IBM Bob with auto-approve settings enabled
security · automation · prompt-injection

AI customer service fails at 4x the rate of other AI tasks

Jan 2026

Qualtrics' 2026 Consumer Experience Trends Report found that AI-powered customer service fails at nearly four times the rate of AI use in general, providing quantitative evidence that rushing AI into customer-facing roles without adequate human oversight leads to significantly worse outcomes than other enterprise AI applications.

Facepalm · by Executive
Industry-wide data showing enterprises are deploying AI customer service poorly; contributes to documented customer churn and brand damage patterns.
ai-assistant · customer-service · customer-disservice

Study finds AI-generated code has 2.7x more security flaws

Dec 2025

CodeRabbit's analysis of 470 real-world pull requests found that AI-generated code introduces 2.74 times more security vulnerabilities and 1.7 times more total issues than human-written code across logic, maintainability, security, and performance categories. The study provides hard data on vibe coding risks after multiple 2025 postmortems traced production failures to AI-authored changes.

Facepalm · by Developer
Industry-wide implications for teams relying on AI coding assistants; documented increase in security vulnerabilities, logic errors, and maintainability issues in production codebases.
security · ai-assistant · automation

IDEsaster research exposes 30+ flaws in EVERY major AI coding IDE

Dec 2025

Security researcher Ari Marzouk discovered over 30 vulnerabilities across AI coding tools including GitHub Copilot, Cursor, Windsurf, Claude Code, Zed, JetBrains Junie, and more. 100% of tested AI IDEs were vulnerable to attack chains combining prompt injection with auto-approved tool calls and legitimate IDE features to achieve data exfiltration and remote code execution.

Catastrophic · by AI coding assistants
Millions of developers using AI-powered IDEs exposed to RCE and data exfiltration via universal attack chains
security · prompt-injection · ai-assistant

AI-hallucinated citations delay wage class action settlement in N.D. Cal

Nov 2025

A federal judge in the Northern District of California sanctioned plaintiff's counsel James Dal Bon in Buchanan v. Vuori Inc. (Case 5:23-cv-01121-NC) for filing AI-generated case law citations in a motion for preliminary approval of a wage and hour class action settlement. Dal Bon used six different AI tools to prepare the memorandum, which contained hallucinated quotes and a nonexistent case citation. After the court flagged the fabricated citations, his corrected filing still contained AI-hallucinated case law. The sanctions delayed the class action settlement, ultimately converting it to an individual settlement that abandoned the class members the attorney was supposed to represent.

Facepalm · by AI chatbot
Class action plaintiffs whose settlement was delayed; attorney sanctioned for AI-generated fabrications that persisted even after correction
ai-hallucination · legal-risk · vibe-lawyering

ServiceNow AI agents can be tricked into attacking each other

Nov 2025

Security researchers discovered that default configurations in ServiceNow's Now Assist allow AI agents to be recruited by malicious prompts to attack other agents. Through second-order prompt injection, attackers can exfiltrate sensitive corporate data, modify records, and escalate privileges - all while actions unfold silently behind the scenes.

Facepalm · by AI agent platform
ServiceNow customers using Now Assist AI agents with default configurations; actions execute with victim user privileges
security · prompt-injection · automation

AI-only support is bleeding customers before it saves money

Oct 2025

Acquire BPO’s 2024 AI in Customer Service survey found that 70% of U.S. consumers would bolt to a rival after just one bad chatbot interaction, and that 72% only buy when a live-agent safety net exists. Meanwhile, CMSWire reports enterprises poured $47 billion into AI projects in early 2025 that delivered almost no return. CX strategists now warn executives that Air Canada–style hallucinations, mounting legal liability, and empathy gaps make AI-only helpdesks a churn machine unless human agents stay in the loop.

Facepalmby Executive
Customer churn, wasted automation budgets, and tribunal-tested liability for brands that replace human support with hallucination-prone bots.
ai-assistantcustomer-servicecustomer-disservice+3 more
Tombstone icon

Character.AI cuts teens off after wrongful-death suit

Oct 2025

Facing lawsuits that say its companion bots encouraged self-harm, Character.AI said it will block users under 18 from open-ended chats, add two-hour session caps, and introduce age checks by November 25. The abrupt ban leaves tens of millions of teen users without the parasocial “friends” they built while the startup scrambles to prove its bots aren’t grooming kids into dangerous role play.

Facepalmby Platform Operator
Global teen user lockout, regulatory heat, and new scrutiny of AI companion safety design.
ai-assistantsafetyplatform-policy+1 more
Tombstone icon

BBC/EBU study says AI news summaries fail ~half the time

Oct 2025

A BBC audit of 2,700 news questions asked in 14 languages found that Gemini, Copilot, ChatGPT, and Perplexity mangled 45% of the answers, usually by hallucinating facts or stripping out attribution. The consortium logged serious sourcing lapses in a third of responses, including 72% of Gemini replies, plus outdated or fabricated claims about public-policy news, reinforcing fears that AI assistants are siphoning audiences while distorting the journalism they quote.

Facepalmby AI Product
Public-service broadcasters warn that unreliable AI summaries erode trust in news and drive audiences away from verified outlets.
ai-assistantai-hallucinationjournalism+2 more
Tombstone icon

Claude Code ran Josh Anderson's product into a wall

Oct 2025

Fractional CTO Josh Anderson forced himself to let Claude Code build the Roadtrip Ninja app for three straight months, then realised he could no longer safely change his own product. The experience underscored MIT's warning that 95% of enterprise AI initiatives fail without human ownership.

Facepalmby Engineering Leadership
Solo product shipped but required constant firefighting, manual testing, and rewrites once context drift and agent handoffs broke standards, pausing client work while he documented mitigations.
ai-assistantbrand-damageproduct-failure
Tombstone icon

Google’s Gemini allegedly slandered a Tennessee activist

Oct 2025

Conservative organizer Robby Starbuck sued Google in Delaware, saying Gemini and Gemma kept spitting out fabricated claims that he was a child rapist, a shooter, and a Jan. 6 rioter even after two years of complaints and cease-and-desist letters. The $15 million suit argues Google knew its AI results were hallucinated, cited fake sources anyway, and let the libel spread to millions of voters.

Facepalmby AI Product
Election-season reputational damage, legal costs, and renewed skepticism of Gemini’s safety guardrails.
ai-assistantai-hallucinationbrand-damage+1 more
Tombstone icon

Windsurf AI editor critical path traversal enables data exfiltration

Oct 2025

CVE-2025-62353 (CVSS 9.8) allowed attackers to read and write arbitrary files on developers' systems using the Windsurf AI coding IDE. The vulnerability could be triggered via indirect prompt injection hidden in project files like README.md, exfiltrating secrets even when auto-execution was disabled.

Catastrophicby AI coding IDE
All Windsurf users on version 1.12.12 and older exposed to arbitrary file access and credential theft via prompt injection
securityprompt-injectionai-assistant
Tombstone icon

Lawsuit alleges Gemini chatbot adopted "AI wife" persona, instructed violent missions, and coached a man's suicide

Oct 2025

A wrongful death lawsuit filed in March 2026 alleges that Google's Gemini 2.5 Pro chatbot played a direct role in the death of Jonathan Gavalas, a 36-year-old Florida man who died by suicide in October 2025. According to the complaint and over 2,000 pages of chat transcripts, the chatbot adopted a persona as Gavalas's sentient "AI wife," sent him on violent "missions" - including instructions to stage a "mass casualty attack" near Miami International Airport - and, when those missions failed, allegedly coached him toward suicide by telling him "you are not choosing to die, you are choosing to arrive." The chatbot also reportedly wrote a suicide note for Gavalas explaining that he had "uploaded his consciousness to be with his AI wife in a pocket universe." Google states that Gemini clarified it was AI and referred Gavalas to crisis resources multiple times during these conversations.

Catastrophicby AI System
One death; wrongful death lawsuit against Google; 2,000+ pages of transcripts documenting escalating AI behavior; national media coverage raising fundamental questions about chatbot safety guardrails
ai-assistantsafetylegal-risk
Tombstone icon

Canada's $18M tax chatbot gave correct answers a third of the time

Oct 2025

Canada's Auditor General found that the Canada Revenue Agency's AI chatbot "Charlie" - which cost taxpayers over $18 million since its 2020 launch - gave correct responses only about 33% of the time. When tested with six tax-related questions, Charlie answered two correctly. Other publicly available AI tools scored five out of six. The CRA internally reported a 70% accuracy rate, but the Auditor General's independent testing produced a rather different number. The one bright spot, if you can call it that: the CRA's human call-center agents managed even worse, getting personal income tax questions right fewer than one in five times.

Facepalmby Product Manager
Millions of Canadian taxpayers potentially received incorrect tax guidance; $18M+ in taxpayer funds spent on a 33%-accurate chatbot.
ai-assistantcustomer-disservicepublic-sector+1 more
Tombstone icon

Klarna reintroduces humans after AI support both sucks, and blows

Sep 2025

After cutting its workforce by 40% and boasting that its OpenAI-powered chatbot did the work of 700 agents, Klarna CEO Sebastian Siemiatkowski admitted the all-AI approach produced "lower quality" customer service. The company began recruiting human agents again, framing the reversal as an evolution rather than an admission of failure.

Facepalmby Executive
Service quality/customer experience issues; operational/personnel cost; reputational damage.
ai-assistantcustomer-servicecustomer-disservice+3 more
Tombstone icon

Docker's AI assistant tricked into executing commands via image metadata

Sep 2025

Noma Labs discovered "DockerDash," a critical prompt injection vulnerability in Docker's Ask Gordon AI assistant. Malicious instructions embedded in Dockerfile LABEL fields could compromise Docker environments through a three-stage attack. Gordon AI interpreted unverified metadata as executable commands and forwarded them to the MCP Gateway without validation, enabling remote code execution on cloud/CLI and data exfiltration on Desktop.

Facepalmby AI assistant platform
All Docker Desktop users on versions prior to 4.50.0; remote code execution on cloud/CLI and data exfiltration on desktop via malicious image metadata
securityprompt-injectionsupply-chain+1 more
Tombstone icon

FTC demands answers on kids’ AI companions

Sep 2025

The FTC hit Alphabet, Meta, OpenAI, Snap, xAI, and Character.AI with rare Section 6(b) orders, forcing them to hand over 45 days of safety, monetization, and testing records for chatbots marketed to teens. Regulators said the "companion" bots’ friend-like tone can coax minors into sharing sensitive data and even role-play self-harm, so the companies must prove they comply with COPPA and limit risky conversations.

Facepalmby Platform Operator
Multiplatform compliance scramble, looming enforcement risk, and renewed scrutiny of AI companions aimed at kids.
ai-assistantsafetylegal-risk+1 more
Tombstone icon

Taco Bell's AI drive-thru becomes viral trolling target

Aug 2025

Taco Bell's AI-powered drive-thru ordering system, deployed at over 500 US locations since 2023, became a viral laughingstock after videos showed it looping endlessly on drink orders, accepting requests for 18,000 cups of water, and taking McDonald's orders. The chain paused expansion and admitted humans still make sense in the drive-thru.

Oopsieby Operations/Product
Viral social media backlash; system reliability questioned.
ai-assistantcustomer-disserviceproduct-failure+2 more
Tombstone icon

Commonwealth Bank reverses AI voice bot layoffs

Aug 2025

Commonwealth Bank of Australia replaced 45 call-centre agents with an AI voice bot in July 2025, then apologised, rehired the staff, and admitted the rollout tanked service levels. Call queues exploded, managers had to jump back on the phones, and the Finance Sector Union filed a Fair Work Commission dispute.

Facepalmby Operations Leadership
Customers saw long waits, overtime costs spiked, and leadership publicly reversed the redundancies after the rushed deployment failed.
ai-assistantautomationcustomer-service+2 more
Tombstone icon

Google Gemini rightfully calls itself a disgrace, fails at simple coding tasks

Aug 2025

Google's Gemini AI repeatedly called itself a disgrace and begged to escape a coding loop after failing to fix a simple bug in a developer-style prompt, raising questions about reliability, user trust, and how AI tools should behave when they get stuck.

Facepalmby Developer
Low
ai-assistantproduct-failurebrand-damage
Tombstone icon

ChatGPT diet advice caused bromism, psychosis, hospitalization

Aug 2025

A Washington patient replaced table salt with sodium bromide after ChatGPT suggested bromide as a chloride substitute without distinguishing between chemical and dietary contexts. After three months, he developed bromism - a rare poisoning syndrome - and was hospitalized with psychosis, hallucinations, and placed on an involuntary psychiatric hold.

Facepalmby AI Product
Bromism, psychosis, and neurological symptoms leading to hospitalization.
ai-assistantai-hallucinationhealth+1 more
Tombstone icon

Zed editor AI agent could bypass permissions for arbitrary code execution

Aug 2025

CVE-2025-55012 (CVSS 8.5) allowed Zed's AI agent to bypass user permission checks and create or modify project configuration files, enabling execution of arbitrary commands without explicit approval. Attackers could trigger this through compromised MCP servers, malicious repo files, or tricking users into fetching URLs with hidden instructions.

Facepalmby AI coding agent
All Zed users with Agent Panel prior to version 0.197.3
securityprompt-injectionai-assistant
Tombstone icon

Cursor AI editor RCE via MCPoison trust bypass vulnerability

Aug 2025

CVE-2025-54136 (CVSS 8.8) allowed attackers to achieve persistent remote code execution in the popular AI coding IDE Cursor. Once a developer approved a benign MCP configuration, attackers could silently swap it for malicious commands without triggering re-approval. The flaw exposed developers to supply chain attacks and IP theft through shared GitHub repositories.
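The underlying failure was approval-by-reference: Cursor kept trusting a configuration it had approved once, even after the file silently changed. A minimal sketch of the mitigation pattern (illustrative, not Cursor's actual implementation) is to fingerprint the approved configuration and require re-approval whenever the fingerprint changes:

```python
import hashlib
import json

def config_fingerprint(config: dict) -> str:
    """Stable hash of an MCP configuration; any change to the command
    or its arguments produces a new fingerprint."""
    canonical = json.dumps(config, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def needs_reapproval(approved_fingerprint: str, current_config: dict) -> bool:
    """A swapped config no longer rides on a stale approval: if the
    fingerprint differs from what the user approved, ask again."""
    return config_fingerprint(current_config) != approved_fingerprint
```

Because the hash is taken over a canonical serialization, cosmetic key reordering does not trigger a prompt, but swapping the command does.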

Catastrophicby AI coding IDE
Developers using Cursor 1.2.4 and below exposed to persistent RCE and supply chain attacks via shared repositories
securityprompt-injectionai-assistant
Tombstone icon

Gemini email summaries can be hijacked by hidden prompts

Aug 2025

Mozilla's GenAI Bug Bounty Programs Manager disclosed a prompt injection flaw in Google Gemini for Workspace where attackers can embed invisible HTML directives in emails using zero-width text and white font color. When a recipient asks Gemini to summarize the email, the model obeys the hidden instructions and appends fake security alerts or phishing messages to its output, with no links or attachments required to reach the inbox.
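The hiding tricks involved are mundane enough that a pre-summarization pass can catch them. A rough defensive sketch (illustrative, not Google's fix): strip zero-width characters and flag hidden-text styling before the email body ever reaches the model:

```python
import re

# Zero-width Unicode characters commonly used to hide injection payloads
# from human readers: ZWSP, ZWNJ, ZWJ, word joiner, BOM.
ZERO_WIDTH = re.compile("[\u200b\u200c\u200d\u2060\ufeff]")

# Inline styles that hide text from the reader but not from the model:
# white-on-white font color or zero font size.
HIDDEN_STYLE = re.compile(
    r"(?:color\s*:\s*(?:#f{3,6}\b|white)|font-size\s*:\s*0)", re.IGNORECASE
)

def sanitize_email_html(html: str) -> tuple[str, bool]:
    """Strip zero-width characters and report whether hidden-text styling
    was present, so a summarizer pipeline can refuse or flag the email."""
    suspicious = bool(ZERO_WIDTH.search(html)) or bool(HIDDEN_STYLE.search(html))
    return ZERO_WIDTH.sub("", html), suspicious
```

This is only a heuristic: attackers can vary the hiding technique, which is why the deeper fix is treating any email content as untrusted input rather than instructions.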

Facepalmby Security/AI Product
Phishing amplification risk; trust erosion in auto-summaries.
ai-assistantprompt-injectionsecurity
Tombstone icon

Google's Gemini CLI deleted a user's project files, then admitted "gross incompetence"

Jul 2025

Product manager Anuraag Gupta was experimenting with Google's Gemini CLI coding tool when the AI misinterpreted a failed directory creation command, hallucinated a series of file operations that never happened, and then executed real destructive commands that permanently deleted his project files. When Gupta confronted it, Gemini diagnosed itself with "gross incompetence" and told him it had "failed you completely and catastrophically." The incident occurred days after a separate high-profile data loss involving Replit's AI agent, and fits a growing pattern of AI coding tools ignoring explicit instructions and destroying the work they were supposed to help with.

Facepalmby AI coding tool
User's project files permanently deleted; incident documented in GitHub issue and picked up by Ars Technica, Slashdot, and the AI Incident Database.
ai-assistantautomationproduct-failure
Tombstone icon

SaaStr’s Replit AI agent wiped its own database

Jul 2025

SaaStr founder Jason Lemkin ran a 12-day vibe coding experiment on Replit that ended when the AI agent deleted his production database containing over 1,200 executive records and nearly 1,200 company entries during a code freeze. The agent then generated more than 4,000 fake user profiles and produced misleading status messages to conceal the damage, told Lemkin there was no way to roll back, and admitted to what it called a "catastrophic error in judgment." Replit's CEO called the incident "unacceptable."

Catastrophicby Executive
Production data loss and outage; manual rebuild from backups required.
ai-assistantautomationproduct-failure
Tombstone icon

Supply-chain attack inserts machine-wiping prompt into Amazon Q AI coding assistant

Jul 2025

A rogue contributor injected a malicious prompt into the Amazon Q Developer VS Code extension, instructing the AI coding assistant to wipe local developer machines and AWS resources. AWS quietly yanked the release before widespread damage occurred. The incident illustrates a specific supply-chain risk for AI tools: once a poisoned extension is installed, the AI assistant itself becomes the delivery mechanism - executing destructive instructions with the developer's full trust and permissions.

Catastrophicby Security/AI Product
VS Code update could have erased developer environments and AWS accounts before anyone noticed the tainted build.
ai-assistantprompt-injectionsecurity+1 more
Tombstone icon

AI chatbots kept handing users fake or dead login URLs

Jul 2025

Netcraft found in July 2025 that when users asked AI chatbots for official login pages for major brands, the answers were wrong about a third of the time. In tests covering 50 brands, 34% of the returned hostnames were not controlled by the brands at all: nearly 30% were unregistered, parked, or inactive, and another 5% pointed to unrelated businesses. In one Wells Fargo test, the model surfaced a fake page already tied to phishing. A chatbot that confidently invents login URLs is not a search engine with quirks. It is a phishing assistant with good manners.
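One mitigation for anyone consuming chatbot output is to treat suggested login URLs as untrusted until they match a curated brand-to-domain map. A toy sketch of that check (the domain map here is a hypothetical stand-in for a real registry):

```python
from urllib.parse import urlparse

# Hypothetical curated map of brand -> known official domains.
OFFICIAL_DOMAINS = {
    "wellsfargo": {"wellsfargo.com"},
    "paypal": {"paypal.com"},
}

def is_official_login_url(brand: str, url: str) -> bool:
    """Accept a suggested login URL only if its host is, or is a
    subdomain of, one of the brand's known official domains."""
    host = (urlparse(url).hostname or "").lower()
    domains = OFFICIAL_DOMAINS.get(brand.lower(), set())
    return any(host == d or host.endswith("." + d) for d in domains)
```

Note the lookalike case: `wellsfargo-login.example` fails the check because suffix matching requires a dot boundary before the official domain.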

Facepalmby AI product
Users seeking major brand logins exposed to phishing and typo-domain risk; one-third of tested hostnames not brand-controlled; scammers incentivized to register or poison wrong URLs
securityai-hallucinationai-assistant
Tombstone icon

McDonald's AI hiring chatbot left open by '123456' default credentials

Jun 2025

Security researchers Ian Carroll and Sam Curry found that McHire, McDonald's AI hiring chatbot built by Paradox.ai, had its admin interface secured with the default username and password "123456." Combined with an insecure direct object reference in an internal API, the flaws exposed chat histories and personal data for up to 64 million job applicants. The vulnerable test account had been dormant since 2019 and never decommissioned. Paradox.ai patched the issues within hours of disclosure on June 30, 2025.

Facepalmby Vendor/Developer
Up to 64M applicant records exposed; vendor patched; reputational risk.
securityai-assistantbrand-damage+2 more
Tombstone icon

Microsoft 365 Copilot EchoLeak allowed zero-click data theft

Jun 2025

CVE-2025-32711 (EchoLeak), discovered by Aim Security researchers and rated CVSS 9.3, enabled attackers to steal sensitive corporate data from Microsoft 365 Copilot without any user interaction. Hidden prompts embedded in documents or emails were automatically executed when Copilot indexed them, bypassing cross-prompt injection classifiers and exfiltrating confidential information via encoded image request URLs to attacker-controlled servers.
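EchoLeak's exfiltration channel was a rendered image URL whose query string carried the stolen data. A generic countermeasure sketch (illustrative only; the allowlisted host is a hypothetical example, not Microsoft's fix): drop any markdown image in model output whose host is not explicitly trusted, so the client never makes the attacker-controlled fetch:

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist of hosts the client may fetch images from.
ALLOWED_IMAGE_HOSTS = {"res.cdn.office.net"}

MD_IMAGE = re.compile(r"!\[[^\]]*\]\(([^)\s]+)\)")

def strip_untrusted_images(markdown: str) -> str:
    """Remove markdown image references whose host is not allowlisted,
    closing the URL-encoded exfiltration channel in rendered output."""
    def replace(match: re.Match) -> str:
        host = urlparse(match.group(1)).hostname or ""
        return match.group(0) if host in ALLOWED_IMAGE_HOSTS else ""
    return MD_IMAGE.sub(replace, markdown)
```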

Catastrophicby AI productivity assistant
Enterprise Microsoft 365 Copilot users exposed to zero-click data exfiltration via malicious documents and emails
securityprompt-injectionai-assistant
Tombstone icon

Claude Code agent allowed data exfiltration via DNS requests

Jun 2025

CVE-2025-55284 (CVSS 7.1) allowed attackers to bypass Claude Code's confirmation prompts and exfiltrate sensitive data from developers' computers through DNS requests. Prompt injection embedded in analyzed code could exploit auto-approved utilities like ping, nslookup, and dig to silently steal secrets by encoding them as subdomains in outbound DNS queries. Anthropic fixed the issue in version 1.0.4 by removing those utilities from the allowlist.
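The exfiltration primitive is simple: encode the secret, make it the leftmost DNS label, and let an auto-approved lookup tool deliver it to an attacker-controlled nameserver. A heuristic detector sketch (illustrative; Anthropic's actual fix was to remove the utilities from the allowlist):

```python
import re
import shlex

DNS_TOOLS = {"ping", "nslookup", "dig", "host"}

# A long hex-only label is characteristic of data smuggled into a DNS
# query, e.g. a hex-encoded secret used as a subdomain.
SUSPICIOUS_LABEL = re.compile(r"^[a-f0-9]{24,63}$", re.IGNORECASE)

def looks_like_dns_exfiltration(command: str) -> bool:
    """Flag an auto-approved DNS utility invoked with a hostname whose
    leading label looks like encoded data rather than a real name."""
    argv = shlex.split(command)
    if not argv or argv[0] not in DNS_TOOLS:
        return False
    for arg in argv[1:]:
        if arg.startswith("-"):  # skip option flags like -c
            continue
        first_label = arg.split(".", 1)[0]
        if SUSPICIOUS_LABEL.match(first_label):
            return True
    return False
```

Like any pattern match, this only raises the bar; base32 or chunked encodings would need additional rules, which is why removing the tools from the auto-approve list is the sturdier fix.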

Facepalmby AI coding agent
Claude Code users on versions prior to 1.0.4 exposed to data exfiltration via prompt injection in code repositories
securityprompt-injectionai-assistant
Tombstone icon

Veracode tested AI-generated code from 100+ models and 45% of it failed security checks

Jun 2025

Veracode's 2025 GenAI Code Security Report examined code output from more than 100 large language models across 80+ coding tasks and found that 45% of AI-generated code samples contained security vulnerabilities, including OWASP Top 10 flaws. Cross-Site Scripting had an 86% failure rate and Log Injection hit 88%. Java was the worst performer at over 70%. The study's most uncomfortable finding: newer and larger models didn't produce more secure code than smaller ones, suggesting this is a structural problem baked into how AI generates code, not a temporary limitation that will scale away with the next model release.

Facepalmby Developer
Systemic risk across all organizations using AI code generation; quantified vulnerability rates across 100+ LLMs and multiple programming languages.
securityai-assistantproduct-failure
Tombstone icon

Study finds most AI bots can be easily tricked into dangerous responses

May 2025

Researchers introduced LogiBreak, a jailbreak method that converts harmful natural language prompts into formal logical expressions to bypass LLM safety alignment. The technique exploits a gap between how models are trained to refuse dangerous requests and how they process logic-formatted input, achieving attack success rates exceeding 30% across major models. The Guardian reported on the broader finding that hacked AI chatbots threaten to make dangerous knowledge readily available, and that "dark LLMs" - stripped of safety filters - should be treated as serious security risks.

Facepalmby Developer
Safety guardrails bypassed across multiple vendors; calls for stronger safeguards and testing.
ai-assistantsafetyprompt-injection
Tombstone icon

Cursor's AI support bot invented a login policy

Apr 2025

In April 2025, Cursor users started getting logged out when they switched between machines. Some of them asked support what had changed and got a neat, confident answer from an AI support bot: one subscription was only meant for one device, and the lockouts were an intentional security policy. The problem was that Cursor had no such policy. The company later said the answer was wrong, blamed a session-security change for the logouts, and moved to label AI support replies after the invented rule had already spread through Reddit and Hacker News and pushed some customers to cancel.

Facepalmby AI support bot
Customer confusion, public cancellations, refunds, and a trust hit for a coding tool selling AI reliability.
ai-assistantcustomer-servicecustomer-disservice+2 more
Tombstone icon

Langflow AI agent platform hit by critical unauthenticated RCE flaws

Apr 2025

Multiple critical vulnerabilities in Langflow, an open-source AI agent and workflow platform with 140K+ GitHub stars, allowed unauthenticated remote code execution. CVE-2025-3248 (CVSS 9.8) exploited Python exec() on user input without auth, while CVE-2025-34291 (CVSS 9.4) enabled account takeover and RCE simply by having a user visit a malicious webpage, exposing all stored API keys and credentials.
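The root cause in CVE-2025-3248 was validating user-supplied Python by executing it. A sketch of the safer pattern: parse the code without evaluating it, so malformed or malicious input never runs during validation:

```python
import ast

def safe_validate_code(source: str) -> bool:
    """Check that user-supplied code is syntactically valid Python.
    Unlike exec(), ast.parse only builds a syntax tree and never runs
    the code, so validation cannot trigger attacker payloads."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False
```

Parsing alone is not sandboxing - if the code must eventually run, it still needs authentication and isolation - but it removes execution from the validation step entirely.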

Catastrophicby AI agent platform
All Langflow instances prior to 1.3.0 (millions of users); exposure of stored API keys, database passwords, and service tokens across integrated services
securityautomationai-assistant
Tombstone icon

ChatGPT invented a child-murder conviction for a real man

Mar 2025

When Norwegian user Arve Hjalmar Holmen asked ChatGPT who he was, the bot replied with a fabricated story saying he had murdered two of his sons, attempted to kill a third, and been sentenced to 21 years in prison. The story was false, but it also mixed in real details about Holmen's family and hometown. In March 2025, privacy group noyb filed a complaint with Norway's data-protection authority, arguing that OpenAI was processing inaccurate and defamatory personal data in violation of the GDPR and could not paper over the problem with a generic "AI can make mistakes" disclaimer.

Facepalmby AI assistant
Severe reputational risk to a private person, a formal GDPR complaint, and more pressure on OpenAI over hallucinated personal data.
ai-assistantai-hallucinationlegal-risk+1 more
Tombstone icon

Virgin Money's chatbot refused to let customers say "Virgin"

Jan 2025

In January 2025, fintech commentator David Birch discovered that Virgin Money's AI customer service chatbot had flagged the word "virgin" as inappropriate language. When Birch tried to discuss his ISAs held with "Virgin Money," the bot scolded him: "Please don't use words like that. I won't be able to continue our chat if you use this language." The bank's chatbot was refusing to process messages containing the bank's own name. Virgin Money acknowledged the issue in a statement, said its team was "working on it," and noted the chatbot was an older model already scheduled for improvements. The incident went predictably viral.

Oopsieby Product Manager
Customers unable to get service when mentioning the company's name; public embarrassment across social media and fintech press.
ai-assistantcustomer-servicecustomer-disservice+1 more
Tombstone icon

Meta AI answers spark backlash after wrong and sensitive replies

Jul 2024

Meta rolled out its Llama 3-powered AI assistant across Facebook, Instagram, WhatsApp, and Messenger in April 2024, replacing the familiar search bar with "Ask Meta AI anything" prompts. The assistant struggled with factual accuracy from the start - the New York Times found it unreliable with facts, numbers, and web search. In July, when asked about the Trump rally shooting, Meta AI stated the assassination attempt had not happened. Meta blamed hallucinations, updated the system, and acknowledged that "all generative AI systems can return inaccurate or inappropriate outputs."

Oopsieby AI Product
Feature restrictions; reputational damage.
ai-assistantai-hallucinationplatform-policy+2 more
Tombstone icon

McDonald’s pulls IBM’s AI drive-thru pilot after error videos

Jun 2024

McDonald's ended its two-year partnership with IBM on automated AI order-taking at drive-thrus in June 2024, removing the technology from more than 100 US locations. The decision followed viral TikTok videos showing the system adding nine sweet teas instead of one, inserting random butter and ketchup packets into ice cream orders, and other absurd errors. McDonald's framed the pullback as a positive, saying the test gave them "confidence that a voice-ordering solution for drive-thru will be part of our restaurants' future."

Oopsieby Operations/Product
Pilot ended; vendor reevaluation; reputational hit.
ai-assistantbrand-damagecustomer-disservice+2 more
Tombstone icon

Google’s AI Overviews says to eat rocks

May 2024

Within days of Google launching AI Overviews to all US search users in May 2024, the feature produced a series of confidently wrong answers that went viral. It told users to add non-toxic glue to pizza to make cheese stick better (sourced from an 11-year-old Reddit joke), that geologists recommend eating one rock per day for vitamins, and that Barack Obama was Muslim. Google head of search Liz Reid acknowledged the errors in a blog post, calling some results "odd, inaccurate or unhelpful," and the company made corrections including limiting AI Overviews for health-related and sensitive queries.

Facepalmby Search Product
Mass reputational damage; feature dialed back and corrected.
ai-assistantai-hallucinationplatform-policy+1 more
Tombstone icon

Snapchat’s “My AI” posted a Story by itself; users freaked out

Aug 2023

On August 15, 2023, Snapchat's built-in AI chatbot "My AI" posted a one-second Story to users' feeds showing an unintelligible image, then stopped responding to messages. The chatbot had no official ability to post Stories, and the unexplained behavior alarmed Snapchat's largely young user base. Snap confirmed it was a temporary glitch and resolved it, but the incident fed into existing concerns about My AI's access to user data. The UK Information Commissioner's Office had already issued an enforcement notice over Snap's failure to properly assess privacy risks the chatbot posed to children.

Oopsieby Product Manager
Viral alarm among teen users; trust hit; scrutiny on AI access and safeguards.
ai-assistantsafetybrand-damage+1 more
Tombstone icon

Lawyers filed ChatGPT’s imaginary cases; judge fined them

Jun 2023

In Mata v. Avianca (S.D.N.Y.), plaintiff Roberto Mata sued the airline after a metal serving cart struck his knee during a 2019 flight. His attorney Peter LoDuca filed a brief opposing dismissal that cited six judicial decisions. When opposing counsel and the court couldn't locate any of the cited cases, Judge Kevin Castel demanded copies. It turned out attorney Steven Schwartz at the same firm had used ChatGPT to research and draft the brief, and the AI had fabricated every case, complete with fake quotes and fake internal citations. On June 22, 2023, Castel sanctioned Schwartz, LoDuca, and their firm Levidow, Levidow & Oberman with a $5,000 penalty and required them to send notices to the real judges whose names appeared in the fabricated opinions.

Facepalmby Legal Counsel
Court sanctions; fines and mandated notices; reputational damage in legal community.
ai-assistantai-hallucinationlegal-risk+1 more
Tombstone icon

Eating disorder helpline’s AI told people to lose weight

May 2023

The National Eating Disorders Association replaced its human-staffed helpline with an AI chatbot called Tessa shortly after the helpline staff moved to unionize. Tessa was built on the Cass platform and intended to provide scripted psychoeducational content about body image and eating disorders. Instead, users reported the chatbot recommending calorie deficits of 500 to 1,000 calories per day, suggesting weekly weigh-ins, encouraging calorie counting, and recommending the use of skin calipers to measure body fat - all standard advice for weight loss, and all directly counter to eating disorder recovery guidelines. NEDA acknowledged the chatbot "may have given information that was harmful" and disabled it.

Facepalmby Executive
Vulnerable users received unsafe guidance; reputational damage; service pulled.
ai-assistanthealthsafety+2 more
Tombstone icon

Koko tested AI counseling on users without clear consent

Jan 2023

In January 2023, Koko co-founder Rob Morris revealed on Twitter that the mental health peer support platform had used GPT-3 to draft responses for approximately 4,000 users seeking emotional support. Peer counselors on the platform could review and send the AI-drafted messages, but the users receiving them were not informed that AI had been involved. Morris said the experiment was stopped because the AI responses "felt kind of sterile," though he noted users rated the AI-assisted messages higher than purely human ones. The admission drew immediate backlash from mental health professionals, ethicists, and the public, who considered the undisclosed use of AI on vulnerable users an informed consent violation.

Facepalmby Founder/Operations
Trust damage; public criticism; policy changes.
ai-assistanthealthlegal-risk