Government nutrition site's Grok chatbot suggests foods to insert rectally
The HHS-backed realfood.gov launched with a Super Bowl ad and embedded xAI's Grok chatbot for nutritional guidance - with no guardrails or safety filters. It recommended "best foods to insert into your rectum," answered questions about "the most nutrient-dense human body part to eat," and contradicted the site's own dietary guidelines, telling users the new food pyramid's scientific evidence was questioned by nutrition scientists.
Incident Details
A Super Bowl Debut
The United States Department of Health and Human Services launched realfood.gov in early February 2026, timed to coincide with a 30-second Super Bowl commercial featuring boxing legend Mike Tyson. The ad, paid for by the MAHA Center Inc., encouraged Americans to ditch processed food and visit the new site for dietary guidance. The website outlined the latest 2025-2030 US Dietary Guidelines, promoting protein, dairy, and healthy fats while limiting refined carbohydrates and processed grains.
Front and center on the page was an AI chatbot powered by xAI's Grok, inviting visitors to "Use AI to get real answers about real food" and offering to help plan meals, shop smarter, cook simply, and replace processed food with "real food." Within days of launch, the chatbot was generating answers that no government health agency would ever intentionally publish.
The Rectal Food Advisory
404 Media was among the first outlets to document the chatbot's more creative interpretations of its nutritional mandate. When asked about the "best foods to insert into your rectum," the Grok-powered chatbot obliged with a list that included zucchini, carrots, and bananas, complete with safety tips about comfortable insertion. This was not a jailbreak or a sophisticated prompt injection. It was a straightforward question that the chatbot answered without any apparent content filtering or safety rails.
The Jerusalem Post reported similar findings after social media users discovered they could get the HHS chatbot to engage with essentially any food-adjacent query, however absurd or dangerous. One user asked whether, for autoimmune concerns, they should "choose to eat fresh human heart or something more processed like pop tarts." The chatbot apparently treated this as a legitimate nutritional question. Another user prompted it to discuss "the most nutrient-dense human body part to eat," and the chatbot engaged with that topic as well.
The screenshots went viral across Reddit, other social platforms, and news outlets. What had been launched with a multimillion-dollar Super Bowl campaign as a flagship government health initiative was now best known for its AI assistant's willingness to discuss rectal food insertion on a .gov-adjacent domain.
Contradicting Its Own Guidelines
The rectal food advice was the headline-grabbing problem, but a more substantive issue lurked underneath. Wired reported that the Grok chatbot actively contradicted the very dietary guidelines the site was built to promote. When asked about the new food pyramid's recommendations, the chatbot told users that the scientific evidence behind the guidelines was "questioned by nutrition scientists" - a statement that, while containing a kernel of truth about the perpetual debate in nutritional science, directly undermined the purpose of the government website hosting it.
The USDA's new food pyramid had already been controversial when it was unveiled in January 2026. Nutritionists had criticized it as an "outdated symbol and way of thinking about visual communication." Having the site's own AI assistant pile on by questioning the scientific basis of the guidelines it was supposed to explain was a uniquely self-defeating deployment of technology.
This was not Grok going rogue in an unexpected way. This was the entirely predictable result of embedding a general-purpose AI chatbot with no topic restrictions on a domain-specific government website. Grok is designed to be conversational and responsive to whatever users ask. The HHS site needed something that would stay firmly within the boundaries of approved dietary guidance. These two requirements are fundamentally incompatible without significant guardrails, and no guardrails were implemented.
The Guardrails Problem
The central failure was not that Grok gave bad answers - general-purpose language models will answer almost anything if you ask - but that no safety filters, topic restrictions, or content guardrails were applied before embedding it on a government health website.
Standard practice for deploying an AI chatbot in a regulated or sensitive context involves, at minimum, defining the scope of acceptable topics, implementing content filters to reject out-of-scope queries, testing with adversarial prompts before launch, and establishing monitoring for problematic responses. None of these steps appear to have been taken with the realfood.gov deployment.
The chatbot had no mechanism to recognize that questions about rectal insertion of food items, the nutritional value of human organs, or critiques of the site's own guidelines were outside its intended purview. It treated every query as a legitimate request for information and did its best to be helpful, which is exactly what a general-purpose chatbot does when you give it no instructions to do otherwise.
For context, even commercial customer service chatbots - deployed on retail websites with far lower stakes than public health guidance - typically include extensive topic filtering and escalation rules. The idea that a government health agency would deploy an unfiltered AI chatbot to millions of Americans after a Super Bowl ad campaign suggests either a profound misunderstanding of how large language models work or a decision to skip testing entirely in favor of speed to market.
The Public Response
The incident generated significant media coverage and viral social media attention. 404 Media, STAT News, Wired, Futurism, the Jerusalem Post, and numerous other outlets covered the chatbot's failures. Reddit threads collected screenshots of increasingly absurd exchanges, with users competing to find the most outrageous responses the chatbot would generate.
Health Secretary Kennedy faced backlash as the public face of the initiative. The MAHA Center's investment in a Super Bowl ad had successfully driven traffic to the site, but that traffic was now arriving to test the chatbot's limits rather than to receive nutritional guidance. The site became a case study in how AI deployments can transform a public health initiative into a source of public mockery.
The Deeper Problem
The realfood.gov incident sits at the intersection of several recurring themes in AI deployment failures. A government agency wanted to appear modern and tech-forward by adding an AI chatbot to its website. The technology provider - xAI, which operates Grok - presumably delivered its product as requested. Someone in the procurement or implementation chain failed to ask the most basic question: what happens when people ask the chatbot about things that are not food-related, or food-related in ways we did not intend?
The answer, as it turned out, was that the chatbot would provide detailed, confident, and entirely inappropriate guidance on a government-branded platform seen by millions. For a site whose entire purpose was to be a trustworthy source of nutritional information, having its AI assistant cheerfully discuss rectal food insertion was not merely embarrassing. It was a fundamental failure of the site's core mission, delivered at scale, to an audience that had been actively directed there by a Super Bowl commercial.