Study finds ChatGPT Health fails to flag over half of medical emergencies
The first independent safety evaluation of OpenAI's ChatGPT Health feature, published in Nature Medicine, found that the tool failed to direct users to emergency care in 51.6% of cases requiring immediate hospitalization, instead recommending that they stay home or book a routine appointment. The study also found that ChatGPT Health frequently failed to detect suicidal ideation: suicide crisis alerts sometimes triggered in lower-risk scenarios while failing to appear when users described specific plans for self-harm. Over 40 million people reportedly ask ChatGPT for health-related advice every day.
Incident Details
Perpetrator: AI assistant
Severity: Catastrophic
Blast Radius: Over 40 million daily health queries to ChatGPT; the study demonstrates that the tool under-triages emergencies in more than half of cases and inconsistently triggers suicide crisis alerts