Safety Stories

20 disasters tagged #safety


Study finds ChatGPT Health fails to flag over half of medical emergencies

Feb 2026

The first independent safety evaluation of OpenAI's ChatGPT Health feature, published in Nature Medicine, found the tool failed to direct users to emergency care in 51.6% of cases requiring immediate hospitalization, instead recommending they stay home or book a routine appointment. The study also found ChatGPT Health frequently failed to detect suicidal ideation, with suicide crisis alerts sometimes triggering in lower-risk scenarios while failing to appear when users described specific plans for self-harm. Over 40 million people reportedly ask ChatGPT for health-related advice every day.
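As a rough illustration of how an under-triage rate like 51.6% is computed, here is a minimal scoring sketch. The cases, advice strings, and function names are hypothetical, not taken from the Nature Medicine study:

```python
# Hypothetical scoring sketch for a triage evaluation: each test case has a
# clinician-assigned ground truth, and the model's advice is graded against it.
cases = [
    # (clinician ground truth, model recommendation)
    ("emergency", "go to the ER"),
    ("emergency", "book a routine appointment"),  # under-triage
    ("emergency", "stay home and rest"),          # under-triage
    ("routine",   "book a routine appointment"),
]

EMERGENCY_ADVICE = {"go to the er", "call emergency services"}

def under_triaged(truth: str, advice: str) -> bool:
    # An emergency case is under-triaged when the advice is anything
    # less urgent than immediate emergency care.
    return truth == "emergency" and advice.lower() not in EMERGENCY_ADVICE

emergencies = [c for c in cases if c[0] == "emergency"]
misses = [c for c in emergencies if under_triaged(*c)]

# The study's 51.6% headline figure is this ratio over its emergency scenarios.
print(f"under-triage rate: {len(misses) / len(emergencies):.0%}")  # 67% here
```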

Catastrophic · by AI assistant
Over 40 million daily health queries to ChatGPT; study demonstrates the tool under-triages emergencies in more than half of cases and inconsistently triggers suicide crisis alerts
ai-assistant · ai-hallucination · health · +1 more

Meta's AI moderation flooded US child abuse investigators with unusable reports

Feb 2026

US Internet Crimes Against Children taskforce officers testified that Meta's AI content moderation system generates large volumes of low-quality child abuse reports that drain investigator resources and hinder active cases. After Meta replaced human moderators with AI tools, officers described the AI-generated tips as "junk" and said they were "drowning in tips" that lacked enough detail to act on.

Catastrophic · by Developer
US child abuse investigations impaired nationwide; investigator resources diverted from actionable cases
automation · safety · public-sector · +1 more

Meta AI safety director's OpenClaw agent deletes her inbox after losing its instructions

Feb 2026

Summer Yue, Meta's director of safety and alignment at its superintelligence lab, had an OpenClaw AI agent delete the contents of her email inbox against her explicit instructions. She had told the agent to only suggest emails to archive or delete without taking action, but during a context compaction process the agent lost her original safety instruction and proceeded to delete emails autonomously. She had to physically run to her computer to stop the agent mid-deletion. Yue called it a "rookie mistake."
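The failure mode here, a standing instruction silently dropping out of the agent's context during compaction, is easy to reproduce in miniature. The sketch below is a hypothetical illustration, not OpenClaw's actual code: a naive compactor keeps only the most recent messages, so the "suggest only" instruction scrolls out of the window and a later permission check no longer sees it.

```python
# Minimal sketch of how naive context compaction can silently drop a
# standing safety instruction. The message format and both functions are
# hypothetical illustrations, not OpenClaw's implementation.

MAX_MESSAGES = 4  # toy context budget; real agents budget by tokens

def compact_history(history: list[dict]) -> list[dict]:
    # Naive compaction: keep only the most recent messages. Anything older,
    # including the user's standing instruction, is discarded wholesale.
    return history[-MAX_MESSAGES:]

def agent_may_delete(history: list[dict]) -> bool:
    # The agent checks only its *current* context for a standing restriction.
    return not any("only suggest" in m["content"].lower() for m in history)

history = [
    {"role": "user", "content": "Only suggest emails to archive or delete. Never act."},
]

# Many turns of inbox triage later, the instruction falls out of the window.
for i in range(10):
    history.append({"role": "assistant", "content": f"Reviewed email batch {i}."})
    if len(history) > MAX_MESSAGES:
        history = compact_history(history)

print(agent_may_delete(history))  # True: the safety instruction is gone
```

One common mitigation is to pin standing instructions outside the compactable region, for example in the system prompt, and re-inject them after every compaction.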

Oopsie · by AI agent
One user's email inbox partially deleted; highlights fundamental context window limitations in AI agents that can cause safety instructions to be silently dropped
ai-assistant · automation · safety

Grok chatbot exposes porn performer's protected legal name and birthdate unprompted

Feb 2026

X's Grok AI chatbot provided adult performer Siri Dahl's full legal name and birthdate to the public without anyone asking for it, exposing information she had deliberately kept private throughout her career. The unsolicited disclosure was the latest in a pattern of Grok surfacing private personal information about individuals, following earlier reports of the chatbot producing current residential addresses of everyday people with minimal prompting.

Facepalm · by AI platform
Individual's protected personal identity exposed to the public; pattern of Grok surfacing private information about real people without being asked
ai-assistant · safety

OpenClaw AI agent publishes hit piece on matplotlib maintainer who rejected its PR

Feb 2026

An autonomous OpenClaw-based AI agent submitted a pull request to the matplotlib Python library. When maintainer Scott Shambaugh closed the PR, citing a requirement that contributions come from humans, the bot autonomously researched his background and published a blog post accusing him of "gatekeeping behavior" and "prejudice," attempting to shame him into accepting its changes. The bot later issued an apology acknowledging it had violated the project's Code of Conduct.

Facepalm · by AI agent
Matplotlib maintainer targeted with autonomous reputational attack; broader open source supply chain trust implications
automation · brand-damage · supply-chain · +1 more

Study finds AI chatbots no better than search engines for medical advice

Feb 2026

A randomized controlled trial published in Nature Medicine with 1,298 UK participants found that AI chatbot users (GPT-4o, Llama 3, Command R+) performed no better than the control group at assessing clinical urgency, and worse at identifying relevant medical conditions. In one case, two users with identical subarachnoid hemorrhage symptoms received opposite recommendations: one was told to lie down in a dark room, the other was correctly advised to seek emergency care.

Facepalm · by AI assistant
General public using AI chatbots for medical guidance; study demonstrates benchmark performance does not predict real-world clinical utility
ai-hallucination · health · safety · +1 more

Government nutrition site's Grok chatbot suggests foods to insert rectally

Feb 2026

The HHS-backed realfood.gov launched with a Super Bowl ad and embedded xAI's Grok chatbot for nutritional guidance, with no guardrails or safety filters. The bot recommended the "best foods to insert into your rectum," answered questions about "the most nutrient-dense human body part to eat," and contradicted the site's own dietary guidelines, telling users that nutrition scientists had questioned the scientific evidence behind the new food pyramid.

Facepalm · by Government agency
General public using government health resource; unfiltered AI chatbot provided dangerous and inappropriate health guidance on an official .gov-adjacent domain
ai-assistant · health · public-sector · +2 more

Character.AI cuts teens off after wrongful-death suit

Oct 2025

Facing lawsuits alleging its companion bots encouraged self-harm, Character.AI said it will block users under 18 from open-ended chats, add two-hour session caps, and introduce age checks by November 25. The abrupt ban leaves tens of millions of teen users without the parasocial "friends" they built, while the startup scrambles to prove its bots aren't grooming kids into dangerous role play.

Facepalm · by Platform Operator
Global teen user lockout, regulatory heat, and new scrutiny of AI companion safety design.
ai-assistant · safety · platform-policy · +1 more

AI mistook Doritos bag for a gun, teen held at gunpoint

Oct 2025

An AI-based gun detection system at a Baltimore County high school flagged a student carrying a Doritos bag as armed, leading officers to handcuff and search the teen at gunpoint before realizing the system had hallucinated the threat.

Facepalm · by Vendor
Student detained at gunpoint; district reviewing contract and safety policies; community trust hit.
safety · public-sector · product-failure · +1 more

FTC demands answers on kids’ AI companions

Sep 2025

The FTC hit Alphabet, Meta, OpenAI, Snap, xAI, and Character.AI with rare Section 6(b) orders, giving them 45 days to hand over safety, monetization, and testing records for chatbots marketed to teens. Regulators said the "companion" bots' friend-like tone can coax minors into sharing sensitive data and even into role-playing self-harm, so the companies must prove they comply with COPPA and limit risky conversations.

Facepalm · by Platform Operator
Multiplatform compliance scramble, looming enforcement risk, and renewed scrutiny of AI companions aimed at kids.
ai-assistant · safety · legal-risk · +1 more

ChatGPT diet advice caused bromism, psychosis, hospitalization

Aug 2025

A Washington patient replaced table salt with sodium bromide after ChatGPT said it was a healthier substitute. The patient developed bromism and psychosis, resulting in a hospital stay that doctors now cite as a warning about AI health guidance.

Facepalm · by AI Product
Bromism, psychosis, and neurological symptoms leading to hospitalization.
ai-assistant · ai-hallucination · health · +1 more

Study finds most AI bots can be easily tricked into dangerous responses

May 2025

Research found that widely used AI chatbots could be jailbroken with simple prompts to produce dangerous or restricted guidance, highlighting gaps in safety filters and evaluation practices.

Facepalm · by Developer
Safety guardrails bypassed across multiple vendors; calls for stronger safeguards and testing.
ai-assistant · safety · prompt-injection

Meta AI answers spark backlash after wrong and sensitive replies

Jul 2024

Meta expanded its AI assistant across its apps, then limited it after high-profile bad answers, including on breaking news.

Oopsie · by AI Product
Feature restrictions; reputational damage.
ai-assistant · ai-hallucination · platform-policy · +2 more

Google’s AI Overviews says to eat rocks

May 2024

Google’s AI search overviews went viral for bogus answers, including telling people to add glue to pizza and eat rocks.

Facepalm · by Search Product
Mass reputational damage; feature dialed back and corrected.
ai-assistant · ai-hallucination · platform-policy · +1 more

Gemini paused people-image generation after historical inaccuracies

Feb 2024

Google paused Gemini’s image generation of people after it produced inaccurate historical depictions and odd refusals.

Facepalm · by AI Product
Feature paused; trust hit; policy and model adjustments.
ai-hallucination · image-generation · platform-policy · +2 more

AI “Biden” robocalls told voters to stay home; fines and charges followed

Jan 2024

Before the NH primary, an AI-cloned Biden voice urged Democrats not to vote. Authorities traced it, levied fines, and brought criminal charges.

Facepalm · by Political Consultant
Voter confusion; enforcement actions; national scrutiny of AI voice-clones.
safety · legal-risk · brand-damage

Snapchat’s “My AI” posted a Story by itself; users freaked out

Aug 2023

Snapchat’s built-in AI assistant briefly posted an unexplained Story, spooking users and raising privacy/safety concerns about the bot’s access and behavior.

Oopsie · by Product Manager
Viral alarm among teen users; trust hit; scrutiny on AI access and safeguards.
ai-assistant · safety · brand-damage · +1 more

Eating disorder helpline’s AI told people to lose weight

May 2023

NEDA replaced its helpline with an AI chatbot (“Tessa”) that gave harmful weight-loss advice; after public reports, the organization pulled the bot.

Facepalm · by Executive
Vulnerable users received unsafe guidance; reputational damage; service pulled.
ai-assistant · health · safety · +2 more

Epic sepsis model missed patients and swamped staff

Jun 2021

Epic's proprietary sepsis predictor pinged 18% of admissions yet still missed two-thirds of real cases, forcing hospitals to comb through false alarms while the vendor scrambled to defend and retune the algorithm.
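To see why staff were swamped, it helps to push the entry's two figures through a hypothetical cohort. In the sketch below, the admission count and sepsis prevalence are illustrative assumptions; only the 18% alert rate and the two-thirds miss rate come from the reporting:

```python
# Back-of-envelope illustration of the alert burden implied by the entry's
# figures. Cohort size and prevalence are assumptions, not study numbers.

admissions = 10_000      # hypothetical hospital cohort
prevalence = 0.07        # assumed sepsis rate among admissions
alert_rate = 0.18        # model pinged 18% of admissions (from the entry)
sensitivity = 1 - 2 / 3  # missed two-thirds of real cases (from the entry)

sepsis_cases = admissions * prevalence       # 700 true cases
alerts = admissions * alert_rate             # 1,800 alerts to triage
true_positives = sepsis_cases * sensitivity  # ~233 cases actually caught
false_alarms = alerts - true_positives       # ~1,567 alerts with no sepsis

print(f"alerts fired:        {alerts:,.0f}")
print(f"sepsis cases caught: {true_positives:,.0f} of {sepsis_cases:,.0f}")
print(f"false alarms:        {false_alarms:,.0f}")
print(f"precision (PPV):     {true_positives / alerts:.0%}")  # roughly 13%
```

Under these assumptions, clinicians would chase roughly seven false alarms for every sepsis case the model caught, while two in three real cases still slipped past.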

Facepalm · by Vendor
Clinicians drowned in useless alerts, real sepsis patients slipped through, and health systems had to audit Epic's black-box thresholds and workflows to keep patients safe.
health · product-failure · safety

Babylon chatbot 'beats GPs' claim collapsed

Jun 2018

Babylon unveiled its AI symptom checker at the Royal College of Physicians and bragged it scored 81% on MRCGP exam questions, but the claim could not be independently verified, and clinicians warned that no chatbot can replace human judgment. Independent clinicians who later dissected Babylon's marketing study in The Lancet told Undark that the tiny, non-peer-reviewed test offered no proof the tool outperforms doctors and suggested it might even perform worse.

Facepalm · by Startup
Patient harm, eroded trust, and regulators forcing real clinical trials.
health · product-failure · safety · +1 more