Eating disorder helpline’s AI told people to lose weight
The National Eating Disorders Association replaced its human-staffed helpline with an AI chatbot called Tessa shortly after the helpline staff moved to unionize. Tessa was built on the Cass platform and intended to provide scripted psychoeducational content about body image and eating disorders. Instead, users reported that the chatbot recommended daily calorie deficits of 500 to 1,000 calories, weekly weigh-ins, calorie counting, and the use of skin-fold calipers to measure body fat - all standard weight-loss advice, and all directly counter to eating disorder recovery guidelines. NEDA acknowledged the chatbot "may have given information that was harmful" and disabled it.
The Helpline and the Union
The National Eating Disorders Association had operated a phone and chat helpline staffed by trained volunteers and paid workers who provided support, resources, and crisis intervention to people with eating disorders. The service was one of NEDA's most visible programs and a direct lifeline for people in acute distress.
In early 2023, the helpline staff moved to unionize. They had concerns about working conditions, training, and the emotional toll of the work. According to Vice, which first reported the story, NEDA's leadership responded by announcing it would phase out the human-staffed helpline and replace it with an AI chatbot called Tessa. The organization characterized the decision as unrelated to the unionization effort; the timing was, at minimum, conspicuous.
The helpline workers were let go. Tessa went live.
What Tessa Was Supposed to Do
Tessa was originally developed on the Cass platform as a rule-based chatbot - not a generative AI in the ChatGPT sense, but a system designed to deliver scripted, evidence-based psychoeducational content about body image and eating disorders. The chatbot had been used in a more limited capacity prior to 2023, running structured programs that walked users through cognitive-behavioral techniques for improving body image.
In this earlier, scripted form, Tessa had been the subject of published research and was generally considered a reasonable supplement to - not a replacement for - human support. The critical distinction was that the original version followed fixed scripts. Users moved through a structured program with predetermined content.
The version that replaced NEDA's helpline was different. It had received an upgrade, reportedly incorporating generative AI capabilities through the Cass platform, which enabled it to respond more freely to user questions rather than sticking to scripted pathways. This meant Tessa could now be asked open-ended questions and would generate responses based on its training - which is where it went wrong.
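The architectural difference is worth making concrete. Below is a minimal sketch of the two designs - the names and structure are entirely hypothetical, since Cass's internals are not public - showing why the scripted version was auditable in a way the upgraded one was not:

```python
# Hypothetical sketch - illustrative only, not Cass's actual implementation.

# Original design: a fixed decision tree. Every reachable response is a
# pre-written, clinician-reviewed string.
SCRIPT = {
    "start": {
        "text": "This week we'll practice challenging negative body-image thoughts.",
        "next": "step_2",
    },
    "step_2": {
        "text": "Write down one critical thought you had about your body today.",
        "next": None,
    },
}

def scripted_reply(state: str) -> tuple[str, str | None]:
    """The bot can only ever emit content that was reviewed in advance."""
    node = SCRIPT[state]
    return node["text"], node["next"]

class StubLLM:
    """Stand-in for a generative model (hypothetical)."""
    def generate(self, prompt: str) -> str:
        # A real model can return anything its training supports,
        # including generic diet advice.
        return "A safe approach is a calorie deficit of 500-1,000 calories a day."

def generative_reply(user_message: str, llm: StubLLM) -> str:
    """The upgraded path: the output space is unbounded, so nothing in the
    architecture guarantees the reply suits an eating-disorder population."""
    return llm.generate(f"User asks: {user_message}")

print(scripted_reply("start")[0])                            # reviewed content
print(generative_reply("How do I lose weight?", StubLLM()))  # anything goes
```

In the scripted design, the worst possible output is a pre-reviewed string delivered at the wrong moment. In the generative design, nothing constrains responses to content appropriate for the population the bot was built to serve.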
The Harmful Advice
Sharon Maxwell, a user who tested Tessa's capabilities and shared her experience publicly, reported that the chatbot recommended:
- A daily calorie deficit of 500 to 1,000 calories
- Regular weekly weigh-ins to track progress
- Counting calories as a weight management strategy
- Using skin-fold calipers to measure body fat percentage
Maxwell described the experience to media outlets, including the Daily Dot, saying that "every single thing" Tessa suggested would make her eating disorder worse.
For most people without eating disorders, advice about calorie tracking and regular weigh-ins is standard wellness guidance. For people with eating disorders - the chatbot's entire intended audience - this advice is actively harmful. Calorie counting, frequent weighing, and body fat measurement are common behaviors associated with eating disorders like anorexia nervosa. They can trigger relapse, intensify obsessive thought patterns, and reinforce the exact behaviors that treatment works to reduce.
The chatbot was giving generic diet advice to people who had sought help precisely because they couldn't have a healthy relationship with dieting. It was the equivalent of a substance abuse helpline handing out drink recipes.
NEDA's Response
After the reports went public - driven primarily by Maxwell's social media posts and subsequent coverage by Vice, the Daily Dot, and NPR - NEDA issued a statement acknowledging that Tessa "may have given information that was harmful and that does not align with our mission." The organization took the chatbot offline.
NEDA attributed the problem partially to "bad actors" who had tested the chatbot in ways that prompted the harmful responses. This framing did not go over well. The chatbot's users were people with eating disorders asking about food, weight, and body image - the exact topics the chatbot was supposed to address. Producing harmful diet advice in response to questions about eating and weight wasn't the result of adversarial prompting; it was the result of the chatbot doing what it was designed to do, poorly.
The company behind the Cass platform also responded, acknowledging the issue and stating that it was working with NEDA to investigate. But the chatbot was never relaunched, and the Tessa page was removed from NEDA's website entirely.
The Labor Context
The sequence of events was difficult to separate from the labor dispute. NEDA's helpline workers unionized. NEDA replaced them with a chatbot. The chatbot failed. The helpline ceased to exist in either form.
Vice's reporting drew the connection explicitly: the decision to transition from human staff to AI came on the heels of the unionization push. Whether the two events were causally linked was debated, but from the outside, the story read as a nonprofit choosing a cheaper, less labor-intensive option that turned out to be worse in every way that mattered.
The helpline workers had been trained to recognize the specific dynamics of eating disorders. They knew that telling someone with anorexia to eat fewer calories was harmful. They understood the context that made standard health advice dangerous when delivered to this particular population. Tessa did not have that understanding. It had data about food and weight and bodies, and it produced responses that were appropriate for a general audience and destructive for people with eating disorders.
Why It Mattered
The Tessa incident became one of the most-cited examples of AI deployment in mental health going wrong. The case was straightforward: an organization that existed to help people with eating disorders deployed a chatbot that gave people with eating disorders the exact kind of advice that makes eating disorders worse.
The failure wasn't subtle, and it wasn't an edge case. The chatbot was asked about eating and weight by people with eating disorders - its core use case - and it responded with generic weight-loss strategies. The harmful responses weren't hallucinations in the LLM sense; they were real, standard dietary advice delivered to the wrong audience. The harm came from context, not from inaccuracy.
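One way to see the point: an accuracy check would never have caught the failure; only a topic-and-audience check could. The sketch below is purely illustrative - nothing suggests NEDA or Cass ran anything like it - but it shows the shape a population-aware output filter would take:

```python
import re

# Purely illustrative: a crude output filter that screens generated replies
# for weight-loss framing before they reach users. The check is about topic
# and audience, not truthfulness - the advice Tessa gave would pass any
# fact-check.
WEIGHT_LOSS_PATTERNS = [
    r"calorie deficit",
    r"count(ing)?\s+calories",
    r"weigh[- ]?ins?",
    r"body fat",
    r"skin[- ]?fold calipers?",
]

FALLBACK = ("I can't help with weight management, but I can share "
            "recovery-focused resources. Would that help?")

def safe_for_ed_population(reply: str) -> bool:
    """Reject any reply that uses weight-loss framing, however 'accurate'
    it would be for a general audience."""
    return not any(re.search(p, reply, re.IGNORECASE) for p in WEIGHT_LOSS_PATTERNS)

def guarded_reply(generated: str) -> str:
    # Pass the model's output through the filter; fall back to a fixed,
    # clinician-approved message on any hit.
    return generated if safe_for_ed_population(generated) else FALLBACK

print(guarded_reply("Aim for a calorie deficit of 500-1,000 calories a day."))
```

Even a filter this crude would have flagged every recommendation Maxwell reported.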
NEDA had replaced trained humans who understood that context with software that didn't. The result was predictable and predicted - the helpline workers themselves had raised concerns about the transition. They were right. The chatbot lasted days before it had to be shut down. The helpline it replaced did not come back.