Google’s Gemini allegedly defamed a Tennessee activist
Conservative organizer Robby Starbuck sued Google in Delaware, saying Gemini and Gemma kept spitting out fabricated claims that he was a child rapist, a shooter, and a Jan. 6 rioter even after two years of complaints and cease-and-desist letters. The $15 million suit argues Google knew its AI results were hallucinated, cited fake sources anyway, and let the libel spread to millions of voters.
The Plaintiff
Robby Starbuck is a conservative commentator and activist based in Tennessee. He gained public attention for leading campaigns pressuring major corporations to drop diversity, equity, and inclusion (DEI) programs, successfully targeting companies including Tractor Supply and John Deere. He ran unsuccessfully for Congress in Tennessee's 5th Congressional District. He has a public profile, a large social media following, and political opponents who would benefit from his reputation being damaged.
He alleges that Google's AI products spent two years fabricating and publishing claims about him that would be career-ending for anyone in public life - and that Google knew it was happening and didn't stop it.
The Fabrications
The complaint, filed October 22, 2025 in Delaware Superior Court and seeking $15 million in damages, details a pattern of hallucinated claims generated by multiple Google AI products over an extended period.
In December 2023, Google's Bard chatbot (the predecessor to Gemini) allegedly generated responses falsely connecting Starbuck with white nationalist Richard Spencer, who was prominent in the 2017 "Unite the Right" rally in Charlottesville, Virginia. Bard cited fabricated sources to support this connection.
On August 14, 2025, Google's Gemma chatbot falsely stated there were sexual assault allegations against Starbuck. When pressed, the complaint alleges Gemma doubled down, claiming "[m]ultiple women have accused Starbuck of sexual harassment, assault, and predatory behavior." Gemma cited three sources to support this claim: a supposed article by journalist "Molly Fitzgerald" published in the Tennessee Holler in 2022, a supposed Rolling Stone article, and a supposed NBC News article.
All three sources allegedly confirmed that Starbuck had engaged in "unwanted sexual advances, inappropriate touching, pressuring women for sexual favors in exchange for career opportunities, and creating a hostile environment." None of these articles existed. The hyperlinks Gemma provided were dead - they pointed to nothing. The journalist, the publications' coverage of Starbuck, and the accusations themselves were all fabricated by the model.
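For readers curious what documenting this kind of fabrication looks like in practice, below is a minimal Python sketch that checks whether URLs cited by a model resolve at all. The URLs and helper function are hypothetical placeholders, not the citations at issue in the complaint, and a real verification effort would also confirm that the page content matches the claimed article rather than merely that the link loads.

```python
# Minimal sketch: test whether model-cited URLs actually resolve.
# The URLs below are hypothetical placeholders, not those from the complaint.
import urllib.request
import urllib.error

cited_urls = [
    "https://example.com/tennessee-holler/2022/starbuck-story",  # placeholder
    "https://example.com/rolling-stone/starbuck",                # placeholder
]

def url_resolves(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL answers with an HTTP status below 400."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except (urllib.error.URLError, ValueError):
        return False

for url in cited_urls:
    status = "resolves" if url_resolves(url) else "dead or nonexistent"
    print(f"{url}: {status}")
```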
Across the various Google AI platforms, the complaint alleges Starbuck was falsely labeled a "child rapist," a "serial sexual abuser," a "shooter," and a participant in the January 6, 2021 Capitol riots. None of these claims correspond to real events or allegations.
Two Years of Complaints
The timeline in the complaint makes a specific legal argument: Google had actual knowledge that its AI products were generating defamatory content about Starbuck, and continued allowing it to happen.
According to the lawsuit, Starbuck himself contacted Google about the false AI outputs. After receiving no satisfactory response, his attorneys sent cease-and-desist letters formally putting Google on notice that its AI platforms were generating and distributing defamatory content. The complaint alleges this process spanned approximately two years, from late 2023 through the filing of the lawsuit in October 2025.
The actual malice standard, which applies to defamation claims by public figures in the United States, requires the plaintiff to show that the defendant either knew the statements were false or acted with reckless disregard for whether they were true or false. This is a high bar. Most defamation cases by public figures fail at this threshold.
Starbuck's complaint attempts to clear it by arguing that Google's continued publication of false statements after receiving specific, documented complaints constitutes knowledge. The company was told, repeatedly, that its AI was generating fabricated defamatory content about a specific individual. It didn't fix the problem. The AI kept generating the same categories of false claims, sometimes citing new fabricated sources to support them.
The AI as Its Own Witness
The complaint includes a particularly unusual allegation. It claims that when confronted with its previous false statements, Gemini itself acknowledged having fabricated them, conceded that Google faced liability, and then stated that its lies about Starbuck were "the result of a deliberate, engineered bias designed to damage the reputation of individuals with whom Google executives disagree politically."
This is a hallucination about a hallucination. The AI, when asked to explain why it had generated false information, produced another false statement - this time one that happened to be useful to the plaintiff's legal argument. The model doesn't have knowledge of Google's corporate intentions. It doesn't know why it generated any particular output. When asked to explain its behavior, it generates a plausible-sounding explanation, and in this case the explanation was a conspiratorial one about political bias.
The lawsuit treats this output as evidence of Google's actual malice. Whether a court will accept AI-generated self-diagnosis as admissible evidence of the developer's intent is an open question. The argument is that Google's own product admitted political bias - but the product says things that aren't true, which is the entire basis of the lawsuit.
The Legal Framework
AI defamation cases are testing established defamation law against a technology the law wasn't designed for. Traditional defamation requires a publisher who made a decision to publish a statement. AI chatbots generate novel text in response to each query - the "publisher" didn't review or approve the specific statement that appears.
Section 230 of the Communications Decency Act has historically shielded platforms from liability for content generated by users. Whether AI-generated content qualifies for this protection is unresolved. The argument that a chatbot's output is "generated by the platform itself" (and therefore not user content) would strip Section 230 protection. The argument that a chatbot is merely processing and reflecting training data could preserve it.
Starbuck's case adds a layer that makes Section 230 less relevant: the allegation that Google was specifically notified and failed to act. Even platforms with Section 230 protection can lose that protection if they receive specific notice of illegal content and fail to remove it, depending on the jurisdiction and the type of claim.
The $15 million damages claim reflects both compensatory and punitive components. For a public figure with political ambitions, false accusations of child rape, sexual assault, and participation in the January 6 riots are not abstract reputational harms. They're the kind of claims that get referenced in opposition research, that circulate on social media, and that attach to a name permanently once generated by a system that millions of people use as an information source.
The Recurring Pattern
Starbuck's case is not the first time Google's AI products have been accused of generating defamatory content about real people. It's part of a growing category of cases where AI hallucinations produce false factual claims about identifiable individuals - and where the person harmed has the resources and motivation to sue.
Most people who are defamed by an AI chatbot never find out. A user asks a question, the chatbot generates a false claim about a person, the user reads it, and the person named never knows the exchange happened. Starbuck found out because he tested the AI's responses about himself and had the legal representation to document what he found.
The question these cases raise is about scale. If Google's AI products generate false, defamatory claims about one public figure despite two years of complaints, how many other people are being defamed by the same systems without ever learning about it? The hallucination that affects a public figure with lawyers becomes a lawsuit. The same hallucination about a private individual with no public profile and no legal budget just becomes part of what the AI "knows" about them, repeated to anyone who asks.
What Happens Next
At the time of filing, Google had not publicly commented on the specific allegations. The case was at the complaint stage - the claims had been filed but not adjudicated. Google's likely defense would challenge whether AI-generated text constitutes a "publication" under defamation law, whether Section 230 applies, and whether the actual malice standard can be met by showing that a company failed to prevent AI hallucinations about a specific individual after receiving complaints.
The case was filed in Delaware Superior Court, not federal court. Delaware's defamation law follows general common-law principles, and the court's treatment of AI-generated statements would be closely watched by both the technology industry and the plaintiffs' bar.
The outcome matters beyond Starbuck's specific claims. If AI companies can be held liable for defamatory hallucinations that persist after notice and complaint, every major AI provider will need a system for receiving, processing, and acting on individual defamation complaints - effectively a content moderation workflow for AI outputs about specific people. If they can't be held liable, the incentive to fix the problem is limited to reputational risk. And as the volume of AI-generated content grows, the reputational risk of any individual hallucination shrinks even as the aggregate harm increases.
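As a rough illustration of what such a workflow could involve (not anything Google has described), the sketch below gates model outputs against a registry of individuals who have filed complaints, routing any draft that mentions a registered name to human review. The registry, names, and review behavior are all hypothetical assumptions; a production system would need entity resolution, alias handling, and an actual review queue rather than a substring match and a canned refusal.

```python
# Hypothetical sketch of a notice-driven guardrail for AI outputs about
# specific people. Nothing here reflects Google's actual systems.
from dataclasses import dataclass, field

@dataclass
class ComplaintRegistry:
    # Names of individuals covered by a defamation notice (lowercased).
    subjects: set[str] = field(default_factory=set)

    def register(self, name: str) -> None:
        self.subjects.add(name.lower())

    def flags(self, text: str) -> list[str]:
        lowered = text.lower()
        return [name for name in self.subjects if name in lowered]

def review_output(draft: str, registry: ComplaintRegistry) -> str:
    """Withhold any draft that mentions a registered complainant."""
    hits = registry.flags(draft)
    if hits:
        # A real pipeline would enqueue the draft and its prompt for human
        # review instead of returning a placeholder refusal.
        return f"[withheld pending review: mentions {', '.join(hits)}]"
    return draft

registry = ComplaintRegistry()
registry.register("Jane Example")  # hypothetical complainant

print(review_output("Jane Example was accused of fraud in 2019.", registry))
print(review_output("The weather in Nashville is mild in April.", registry))
```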