ECRI names AI chatbot misuse as top health technology hazard for 2026
The nonprofit patient safety organization ECRI ranked misuse of AI chatbots as the number one health technology hazard for 2026. ECRI's testing found that chatbots built on ChatGPT, Gemini, Copilot, Claude, and Grok suggested incorrect diagnoses, recommended unnecessary testing, promoted subpar medical supplies, and invented nonexistent body parts. One chatbot gave electrode-placement advice so dangerous it would have put a patient at risk of burns. OpenAI reported that over 5 percent of all ChatGPT messages are healthcare related, with 200 million users asking health questions weekly, even though these tools have not been validated or approved for healthcare use.
Incident Details
Perpetrator: AI chatbot
Severity: Catastrophic
Blast Radius: 200 million weekly ChatGPT health users; clinicians, patients, and hospital staff using unvalidated AI chatbots for medical decisions
Tech Stack
ChatGPT, Google Gemini, Microsoft Copilot, Anthropic Claude, Grok