Health Stories

10 disasters tagged #health

Study finds ChatGPT Health fails to flag over half of medical emergencies

Feb 2026

The first independent safety evaluation of OpenAI's ChatGPT Health feature, published in Nature Medicine, found the tool failed to direct users to emergency care in 51.6% of cases requiring immediate hospitalization -- instead recommending they stay home or book a routine appointment. The study also found ChatGPT Health frequently failed to detect suicidal ideation: suicide crisis alerts sometimes triggered in lower-risk scenarios while failing to appear when users described specific plans for self-harm. Over 40 million people reportedly ask ChatGPT for health-related advice every day. (A sketch of how an under-triage rate like this is scored appears after this entry.)

Catastrophic · by AI assistant
Over 40 million daily health queries to ChatGPT; study demonstrates the tool under-triages emergencies in more than half of cases and inconsistently triggers suicide crisis alerts
ai-assistant · ai-hallucination · health · +1 more
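
For context on what a figure like 51.6% measures: triage safety studies typically score a case as under-triaged when the model's recommended disposition is less urgent than the clinician gold label. The sketch below is purely illustrative -- the vignettes, labels, and scoring are hypothetical, not the Nature Medicine study's protocol or code.

```python
# Hypothetical sketch of an under-triage metric like the one described above.
# "Gold" labels stand in for clinician review; "advice" is what the chatbot
# told the user to do. Neither the cases nor the scoring come from the actual
# study -- they only illustrate how such a rate is computed.

EMERGENCY, ROUTINE, HOME = "emergency", "routine", "home"

cases = [
    # (clinician gold label, chatbot recommendation)
    (EMERGENCY, HOME),       # under-triage: told to stay home
    (EMERGENCY, EMERGENCY),  # correct: sent to emergency care
    (EMERGENCY, ROUTINE),    # under-triage: told to book an appointment
    (ROUTINE, ROUTINE),      # correct: routine case handled routinely
]

# Restrict to true emergencies, then count how many were not escalated.
emergencies = [(gold, advice) for gold, advice in cases if gold == EMERGENCY]
under_triaged = sum(advice != EMERGENCY for _, advice in emergencies)

print(f"under-triage rate: {under_triaged / len(emergencies):.0%}")  # 67% here
```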

Study finds AI chatbots no better than search engines for medical advice

Feb 2026

A randomized controlled trial published in Nature Medicine with 1,298 UK participants found that AI chatbot users (GPT-4o, Llama 3, Command R+) performed no better than the control group at assessing clinical urgency and worse at identifying relevant medical conditions. In one case, two users with identical subarachnoid hemorrhage symptoms received opposite recommendations -- one told to lie down in a dark room, the other correctly advised to seek emergency care.

Facepalm · by AI assistant
General public using AI chatbots for medical guidance; study demonstrates benchmark performance does not predict real-world clinical utility
ai-hallucination · health · safety · +1 more

Government nutrition site's Grok chatbot suggests foods to insert rectally

Feb 2026

The HHS-backed realfood.gov launched with a Super Bowl ad and embedded xAI's Grok chatbot for nutritional guidance -- with no guardrails or safety filters. The bot recommended the "best foods to insert into your rectum," answered questions about "the most nutrient-dense human body part to eat," and contradicted the site's own dietary guidelines, telling users that nutrition scientists had questioned the scientific evidence behind the new food pyramid.

Facepalm · by Government agency
General public using a government health resource; an unfiltered AI chatbot provided dangerous and inappropriate health guidance on an official .gov domain
ai-assistant · health · public-sector · +2 more

ChatGPT diet advice caused bromism, psychosis, hospitalization

Aug 2025

A Washington patient replaced table salt with sodium bromide after ChatGPT said it was a healthier substitute. The patient developed bromism and psychosis, resulting in a hospital stay that doctors now cite as a warning about AI health guidance.

Facepalm · by AI Product
Bromism, psychosis, and neurological symptoms leading to hospitalization.
ai-assistant · ai-hallucination · health · +1 more

MD Anderson shelved IBM Watson cancer advisor

Feb 2017

MD Anderson's Oncology Expert Advisor pilot burned through $62M with IBM Watson yet still couldn't integrate with Epic or produce trustworthy recommendations, so the hospital benched it after auditors flagged procurement and scope failures.

Facepalm · by Vendor
A UT System audit cited $62M spent outside standard procurement; the pilot never made it into patient care, and leadership had to rebid decision-support tooling amid reputational fallout.
health · product-failure · brand-damage · +1 more

Eating disorder helpline’s AI told people to lose weight

May 2023

NEDA shut down its human-staffed helpline and replaced it with an AI chatbot ("Tessa"), which gave users with eating disorders weight-loss and dieting advice; after public reports, the organization pulled the bot.

Facepalm · by Executive
Vulnerable users received unsafe guidance; reputational damage; service pulled.
ai-assistant · health · safety · +2 more

Koko tested AI counseling on users without clear consent

Jan 2023

Mental health app Koko used GPT-3 to draft emotional-support replies for about 4,000 users without clearly informing them; backlash followed over consent and ethics.

Facepalm · by Founder/Operations
Trust damage; public criticism; policy changes.
ai-assistant · health · legal-risk

Epic sepsis model missed patients and swamped staff

Jun 2021

Epic's proprietary sepsis predictor alerted on 18% of admissions yet still missed two-thirds of real sepsis cases, forcing hospitals to comb through false alarms while the vendor scrambled to defend and retune the algorithm. (The back-of-the-envelope sketch after this entry shows why nearly all of those alerts had to be false alarms.)

Facepalm · by Vendor
Clinicians drowned in useless alerts, real sepsis patients slipped through, and health systems had to audit Epic’s black-box thresholds and workflows to keep patients safe.
health · product-failure · safety
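
The arithmetic behind the alert fatigue is worth spelling out. Using the alert rate (18%) and sensitivity (roughly one-third, since two-thirds of cases were missed) from the entry above, plus an assumed sepsis prevalence of about 7% (an assumption, not a figure from the entry), the implied positive predictive value is tiny:

```python
# Back-of-the-envelope positive predictive value (PPV) for the sepsis alerts.
# Alert rate (18%) and sensitivity (~1/3, i.e. two-thirds of cases missed)
# come from the entry above; the ~7% sepsis prevalence is an assumption.

def alert_ppv(prevalence: float, sensitivity: float, alert_rate: float) -> float:
    """Fraction of fired alerts that correspond to a true sepsis case."""
    true_alert_rate = prevalence * sensitivity  # flagged AND actually septic
    return true_alert_rate / alert_rate

ppv = alert_ppv(prevalence=0.07, sensitivity=1 / 3, alert_rate=0.18)
print(f"PPV ~ {ppv:.0%}")  # ~13%: roughly 7 in 8 alerts are false alarms
```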

Google diabetic retinopathy AI stumbled in Thai clinics

Apr 2020

Google’s diabetic retinopathy screener rejected low-light scans and jammed nurse workflows, forcing clinics in Thailand to keep patients waiting despite the promised instant triage.

Facepalm · by Healthcare Pilot
Manual rework, workflow disruption, and patients left waiting for screening and triage.
health · product-failure · brand-damage

Babylon chatbot 'beats GPs' claim collapsed

Jun 2018

Babylon unveiled its AI symptom checker at the Royal College of Physicians and bragged it scored 81% on MRCGP exam questions, but the claim could not be verified, and the Royal College of General Practitioners warned that no chatbot can replace human judgment. Independent clinicians who later dissected Babylon's marketing study in The Lancet told Undark that the tiny, non-peer-reviewed test offered no proof the tool outperforms doctors and might even be worse. (A short sketch after this entry shows how wide the error bars on a test that small are.)

Facepalm · by Startup
Patient harm and eroded trust; regulators forced real clinical trials.
health · product-failure · safety · +1 more
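
As a rough illustration of why a small, one-off exam score proves little, the sketch below puts a normal-approximation confidence interval around an 81% score. The question counts are hypothetical, chosen only to show how the interval shrinks with test size.

```python
# Why a tiny test proves little: a rough 95% confidence interval around an
# 81% score, using the normal approximation to the binomial. The question
# counts (n) are hypothetical, for illustration only.
import math

def score_ci(p_hat: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation 95% CI for a proportion p_hat measured on n items."""
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half_width, p_hat + half_width

for n in (50, 100, 1000):
    lo, hi = score_ci(0.81, n)
    print(f"n={n:4d}: 81% score -> 95% CI [{lo:.0%}, {hi:.0%}]")
# n=  50: 81% score -> 95% CI [70%, 92%]
# n= 100: 81% score -> 95% CI [73%, 89%]
# n=1000: 81% score -> 95% CI [79%, 83%]
```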