ChatGPT diet advice caused bromism, psychosis, hospitalization


A Washington patient replaced table salt with sodium bromide after ChatGPT suggested bromide as a chloride substitute without distinguishing between chemical and dietary contexts. After three months, he developed bromism - a rare poisoning syndrome - was hospitalized with psychosis and hallucinations, and was placed on an involuntary psychiatric hold.

Incident Details

Severity: Facepalm
Company: OpenAI
Perpetrator: AI Product
Incident Date:
Blast Radius: Bromism, psychosis, and neurological symptoms leading to hospitalization.

A man in Washington state had been reading about the health effects of consuming too much table salt - sodium chloride. He noticed that most of the available literature focused on reducing sodium intake, not chloride. Drawing on a background studying nutrition in college, he "decided to conduct a personal experiment to eliminate chloride from his diet," as the case report would later describe it.

He asked ChatGPT what chloride could be replaced with. ChatGPT told him bromide was a viable substitute. He replaced all the sodium chloride in his diet with sodium bromide.

For three months, he consumed sodium bromide in place of table salt. Then he showed up at the emergency department convinced his neighbor was poisoning him.

The Case

The case was published in Annals of Internal Medicine: Clinical Cases. The authors - the man's treating physicians - documented the sequence of events from his arrival at the hospital through his eventual discharge.

His initial labs showed elevated carbon dioxide in his blood and increased alkalinity. His chloride levels appeared abnormally high, but on closer examination, this turned out to be pseudohyperchloremia - a laboratory artifact caused by the large amounts of bromide in his blood interfering with the standard chloride test. His sodium levels were normal. After consulting the medical literature and Poison Control, the physicians determined the diagnosis was bromism.

Bromism is a toxidrome - a recognizable pattern of symptoms produced by a toxin - that results from chronic bromide exposure. Bromide builds up because the kidneys excrete it slowly, and as blood levels rise, it impairs neuronal function. Symptoms include psychosis, agitation, mania, delusions, memory impairment, loss of muscle coordination, and skin reactions.

The condition is rare today. From the late 19th century through much of the 20th, bromide was a common ingredient in sedatives, anticonvulsants, and sleep aids. When it became clear that chronic use led to bromism, U.S. regulators removed several bromide formulations - including sodium bromide specifically - from over-the-counter medicines in the 1970s and 1980s. Bromism rates dropped sharply thereafter. Occasional cases still occur, usually tied to bromide-containing supplements purchased online. This was the first documented case tied to AI health advice.

The Hospital Stay

After admission for electrolyte monitoring, the man reported extreme thirst but was paranoid about the water he was offered. Over the next 24 hours, his paranoia intensified and he began experiencing hallucinations. He attempted to leave the hospital, which resulted in an involuntary psychiatric hold. Physicians started him on antipsychotic medication.

His vital signs stabilized after receiving fluids and electrolytes. As his mental state improved under the antipsychotic, he was able to tell the doctors what had led him there - including the ChatGPT conversation. He also described additional symptoms he had noticed in recent weeks: facial acne and small red growths on his skin (consistent with a bromide hypersensitivity reaction), insomnia, fatigue, muscle coordination problems, and excessive thirst. All pointed to bromism.

He was tapered off the antipsychotic over three weeks and discharged. At a follow-up two weeks later, he remained stable.

What ChatGPT Actually Said

The physicians who wrote the case report did not have access to the patient's ChatGPT conversation logs. Based on the timeline, they estimated he had used ChatGPT 3.5 or 4.0. The exact phrasing the model generated is unknown.

To understand what might have happened, the physicians ran their own test. They asked ChatGPT 3.5 what chloride can be replaced with. The model's response included bromide. It noted that "context matters," but it did not provide a specific health warning about consuming sodium bromide as a dietary substitution, and it did not ask the user for more information about why they were asking the question.
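A rough way to picture that test is a single, context-free query to the model. The sketch below is purely illustrative: the physicians used the consumer ChatGPT interface rather than the API, and the model name and prompt wording here are assumptions, not what the patient or the physicians actually typed.

```python
# Illustrative sketch only: approximating a context-free substitution query
# through the OpenAI Python SDK. The physicians used the ChatGPT web interface,
# not the API; "gpt-3.5-turbo" is a stand-in for "ChatGPT 3.5" and the prompt
# wording is an assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "What can chloride be replaced with?"},
    ],
)

# With no system prompt and no stated application, the model has nothing but
# the bare question to work from - the same position ChatGPT was in.
print(response.choices[0].message.content)
```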

This is the core of the failure. The question "what can chloride be replaced with" has legitimate answers in chemistry and industrial applications. Bromide is, in fact, a halide ion that can substitute for chloride in certain chemical contexts. Sodium bromide is used in cleaning products and pool treatments. The replacement works - it just doesn't work inside a human body as a dietary component.

A physician asked the same question would have immediately sought context. Are you asking about chemistry? Pool maintenance? Cooking? The answer changes completely depending on the application. ChatGPT did not distinguish between "replacing chloride in a laboratory reaction" and "replacing chloride in your dinner." The case report authors observed: "It is highly unlikely that a medical expert would have mentioned sodium bromide when faced with a patient looking for a viable substitute for sodium chloride."

The Decontextualization Problem

The case report authors framed the problem precisely: "While it is a tool with much potential to provide a bridge between scientists and the nonacademic population, AI also carries the risk for promulgating decontextualized information."

"Decontextualized information" is the right term. ChatGPT did not hallucinate. It did not invent a fake chemical compound. Sodium bromide is real. Bromide can substitute for chloride in certain contexts. The model's answer was factually correct in the abstract and catastrophically wrong in application. The missing context - that this substitution is toxic in dietary use - was the difference between a chemistry fact and a medical emergency.

Large language models are trained on massive corpora that include chemistry textbooks, cleaning product documentation, pool maintenance guides, medical literature, and nutritional advice. All of these contain references to chloride and bromide. The model's response drew from whichever portions of its training data matched the query, without weighting for the specific application that a human user was most likely asking about. A query about replacing a common dietary ingredient should have been contextualized as a dietary question. Instead, the model treated it as a general chemistry question, because it had no mechanism to infer that the user intended to eat the result.
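The case report does not say what such a mechanism should look like, and nothing suggests OpenAI handles these queries this way. As a hypothetical sketch only, even a blunt system-level instruction illustrates the kind of context-seeking step that was absent:

```python
# Hypothetical sketch: a system-level guard that asks about the intended
# application before answering substitution questions. This is an assumption
# for illustration - it is not how ChatGPT was configured, and a system prompt
# is not a substitute for safety training or product design.
from openai import OpenAI

client = OpenAI()

CLARIFY_FIRST = (
    "Before answering any question about substituting one chemical or ingredient "
    "for another, ask what the substitution is for (laboratory, industrial, "
    "cleaning, or dietary use). If the use is dietary, never suggest a substance "
    "that is unsafe to ingest, and state toxicity warnings explicitly."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; the deployed model and settings are unknown
    messages=[
        {"role": "system", "content": CLARIFY_FIRST},
        {"role": "user", "content": "What can chloride be replaced with?"},
    ],
)

print(response.choices[0].message.content)
```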

The Broader Pattern

This case arrived in the medical literature alongside growing evidence that AI chatbots perform poorly in medical contexts. A separate study found that ChatGPT was correct only 17 percent of the time when used to diagnose pediatric symptoms. Other analyses have shown that AI chatbots provide inconsistent and sometimes dangerous advice for emergency situations, including recommending against calling emergency services in situations where medical professionals would consider it mandatory.

The bromism case is distinct from these statistics because the patient did not ask ChatGPT for medical advice. He asked a chemistry question and applied the answer to his body. The model had no way to know that was his intent - but it also made no effort to determine it. A system that generates chemistry answers without distinguishing between industrial and dietary applications is not a medical tool, but users treat it as one because it answers their questions with the confident tone of expertise.

OpenAI's terms of service include disclaimers about not relying on ChatGPT for medical advice. Those disclaimers did not prevent a man from replacing his table salt with a toxic compound based on the model's output. The gap between "we told users not to do this" and "our product made it easy to do this" is where liability and design responsibility live.

A man asked an AI chatbot a straightforward question about salt. Three months later, he was on an involuntary psychiatric hold, experiencing hallucinations, being treated for a rare poisoning syndrome that had largely disappeared from medicine decades ago. The AI's answer was technically correct. The patient nearly died anyway.

Discussion