AI Hallucination Stories

52 disasters tagged #ai-hallucination

Oregon attorney hit with record $10K fine after AI fabricated 15 citations and 9 fake quotes

Mar 2026

Salem attorney Bill Ghiorso was fined $10,000 by the Oregon Court of Appeals after submitting an opening brief in Doiban v. Oregon Liquor and Cannabis Commission that contained at least 15 fabricated case citations and nine nonexistent legal quotations - all generated by an AI search tool used by his staff. The fine is the largest ever imposed in Oregon for AI-related errors in legal filings, calculated under a penalty structure the court established in December 2025: $500 per fake citation, $1,000 per fake quote. The intended total of $16,500 was capped at $10,000 due to Ghiorso's medical issues. Perhaps the most instructive detail: when Ghiorso's staff asked the AI tool whether its own fabricated citations were real, it helpfully confirmed they were.
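For the curious, the arithmetic checks out. A minimal sketch of the penalty schedule (the function name is invented, and modeling the medical-hardship reduction as a flat ceiling is an illustrative assumption, not the court's actual reasoning):

```python
# Sketch of the Oregon Court of Appeals' December 2025 penalty schedule:
# $500 per fabricated citation, $1,000 per fabricated quote.
# The ceiling models the discretionary reduction to the $10,000 actually
# imposed; it is an assumption for illustration, not a statutory cap.
def sanction(fake_citations: int, fake_quotes: int, ceiling: int = 10_000) -> int:
    intended = 500 * fake_citations + 1_000 * fake_quotes
    return min(intended, ceiling)

print(sanction(15, 9))  # 15 citations + 9 quotes -> intended $16,500, reduced to $10,000
```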

Facepalm · by Legal Professional
Record Oregon fine for AI-fabricated citations; court establishes per-citation/per-quote penalty schedule; national coverage highlighting dangers of AI self-verification
#vibe-lawyering #ai-hallucination #legal-risk

Sixth Circuit hits two lawyers with $30K in sanctions for 24+ fabricated citations

Mar 2026

The Sixth U.S. Circuit Court of Appeals sanctioned attorneys Van R. Irion and Russ Egli $15,000 each in punitive fines - totaling $30,000 - after their briefs in Whiting v. City of Athens, Tennessee contained more than two dozen fabricated or seriously misrepresented citations. The panel also ordered them jointly liable for the appellees' full attorney fees on appeal and double costs. The court didn't explicitly pin the fabrications on generative AI, but emphasized that lawyers must personally read and verify every citation "regardless of how they were generated" - which is a very specific way to phrase a very pointed implication.

Facepalm · by AI assistant
One of the largest federal appellate sanctions for fabricated citations; combined $30K punitive fines plus appellees' full attorney fees and double costs
#ai-hallucination #legal-risk #vibe-lawyering

Ontario lawyer referred to law society after factum contained seven invented quotations

Mar 2026

Ontario lawyer Khalid Parvaiz was referred to the Law Society of Ontario by Justice Frederick Myers after filing a factum containing seven "wholly made up" quotations attributed to real court cases. Parvaiz claimed the fabricated passages were "human errors" from "misreading of the cases" and denied using AI. Justice Myers was unconvinced, noting the alleged quotations were "completely made up" rather than paraphrased or miscited, and warned that the cover-up - if Parvaiz was being untruthful about the source - could carry more severe consequences than the original error.

Facepalm · by Legal Counsel
Attorney referred to Law Society of Ontario for potential disciplinary action; credibility of legal submissions undermined; client's case jeopardized
#ai-hallucination #legal-risk #vibe-lawyering

DOJ prosecutor resigned after filing an AI-generated brief full of fabricated citations

Mar 2026

Rudy Renfer, an assistant U.S. attorney in the Eastern District of North Carolina, resigned in March 2026 after admitting he used AI to rewrite a legal brief that contained fabricated citations, fictitious quotations, and misstatements of law. The opposing party - a pro se retired Air Force colonel suing over GLP-1 medication coverage under TRICARE - caught the fakes. At a show-cause hearing, the presiding magistrate judge expressed skepticism about Renfer's claim that he had reviewed the brief before filing, noting the fabrications appeared "intentionally designed" to support the government's argument. The matter was referred to the DOJ's Office of Professional Responsibility, and the district's U.S. Attorney issued an office-wide memo warning staff that "AI may hallucinate, but that does not excuse you from your obligations."

Facepalm · by Legal Counsel
Federal prosecutor forced to resign; case referred to DOJ Office of Professional Responsibility; district-wide policy memo issued; credibility of government legal arguments undermined
#ai-hallucination #legal-risk #vibe-lawyering

ChatGPT convinced Illinois woman to fire her lawyer and file 60+ bogus court documents

Mar 2026

Nippon Life Insurance Company sued OpenAI after ChatGPT allegedly acted as a de facto lawyer for Graciela Dela Torre, an Illinois disability claimant who had already settled her case. When her real attorney told her the settlement couldn't be reopened, she asked ChatGPT if she'd been "gaslighted." The chatbot told her to fire her lawyer, helped her draft over 60 pro se filings across two federal cases, and produced fabricated case citations including an entirely invented case called "Carr v." something. Nippon is suing OpenAI for unauthorized practice of law under Illinois state law, arguing it spent huge amounts of time and money dealing with AI-generated litigation that should never have existed.

Facepalm · by AI chatbot
Two federal cases flooded with AI-generated filings; insurer forced into costly litigation over settled claim; novel unauthorized-practice-of-law lawsuit against OpenAI.
#ai-assistant #ai-hallucination #legal-risk (+1 more)

India's Supreme Court calls AI-hallucinated citations in trial court order "misconduct"

Feb 2026

India's Supreme Court stayed a property-dispute ruling after discovering the trial court judge had relied on non-existent, AI-generated case citations. An Andhra Pradesh junior civil judge admitted using an AI tool for the first time without verifying the outputs. The Supreme Court termed the reliance on fabricated judgments as "misconduct" with "a direct bearing on the integrity of the adjudicatory process." Separately, the Bombay High Court fined a litigant 50,000 rupees for filing AI-generated submissions citing the non-existent case "Jyoti vs. Elegant Associates." The Chief Justice flagged an "alarming trend" of AI-fabricated judgments including one titled "Mercy vs Mankind."

Facepalm · by Judge
Property-dispute ruling stayed by Supreme Court; institutional concern raised over AI-generated judgments across Indian judiciary; litigant fined for separate AI-fabricated filing
#ai-hallucination #legal-risk #vibe-lawyering (+1 more)

Study finds ChatGPT Health fails to flag over half of medical emergencies

Feb 2026

The first independent safety evaluation of OpenAI's ChatGPT Health feature, published in Nature Medicine, found the tool failed to direct users to emergency care in 51.6% of cases requiring immediate hospitalization - instead recommending they stay home or book a routine appointment. The study also found ChatGPT Health frequently failed to detect suicidal ideation, with suicide crisis alerts sometimes triggering in lower-risk scenarios while failing to appear when users described specific plans for self-harm. Over 40 million people reportedly ask ChatGPT for health-related advice every day.

Catastrophic · by AI assistant
Over 40 million daily health queries to ChatGPT; study demonstrates the tool under-triages emergencies in more than half of cases and inconsistently triggers suicide crisis alerts
#ai-assistant #ai-hallucination #health (+1 more)

Government contractor sanctioned for AI-fabricated deposition testimony

Feb 2026

The Civilian Board of Contract Appeals sanctioned a party in Louis J. Blazy v. Department of State (CBCA 7992) after discovering four non-existent legal decisions and four fabricated deposition excerpts in filings. The supposed direct quotations from witness testimony didn't appear on the cited transcript pages. When pressed, Blazy admitted the quotes were "constructed" and offered substitute testimony that didn't support the original wording. He also misrepresented existing case law by submitting real decisions as stand-ins for the fake ones, characterizing them as supporting principles they did not contain. The CBCA issued a formal admonishment and warned that continued misconduct could result in dismissal - making this one of the first federal sanctions involving AI-fabricated witness testimony, not just made-up case law.

Facepalm · by AI assistant
Federal government contract dispute; formal CBCA admonishment with threat of dismissal; new precedent for AI-fabricated testimony sanctions
#vibe-lawyering #ai-hallucination #legal-risk (+1 more)

Fifth Circuit sanctions lawyer $2,500 for AI-hallucinated citations, says problem "getting worse"

Feb 2026

The U.S. Court of Appeals for the Fifth Circuit sanctioned attorney Heather Hersh $2,500 after finding her brief contained 16 fabricated quotations and five additional serious misrepresentations of law or fact, all apparently AI-generated. The court expressed frustration that AI-hallucinated legal citations "have increasingly become an even greater problem in our courts" and that the issue "shows no sign of abating." Hersh initially denied using AI, then shifted to claiming she "relied on publicly available versions of the cases, which she believed were accurate."

Facepalm · by AI assistant
First known federal appeals court sanction for AI hallucinations; court signals escalating judicial frustration nearly three years after the first high-profile case
#ai-hallucination #legal-risk #vibe-lawyering

Ars Technica fires senior AI reporter after AI tool fabricated quotes in published story

Feb 2026

Ars Technica retracted an article by senior AI reporter Benj Edwards after it contained fabricated quotations generated by an AI tool and attributed to a source who never said them. The publication acknowledged the incident as a "serious failure of our standards" and Edwards was subsequently fired. Edwards noted the irony on Bluesky: "The irony of an AI reporter being tripped up by AI hallucination is not lost on me."

Facepalm · by Reporter
Published article contained fabricated quotes attributed to a real person; retraction issued; reporter terminated; reputational damage to a trusted tech publication
#ai-hallucination #ai-content-generation #journalism (+2 more)

Wisconsin DA sanctioned for AI-hallucinated legal citations in burglary case

Feb 2026

Kenosha County District Attorney Xavier Solis was sanctioned by Circuit Court Judge David Hughes after his office submitted court filings containing AI-generated legal citations that did not exist. The filings were part of a burglary case against two defendants, and Solis failed to disclose his use of AI - violating Kenosha County's court policy requiring disclosure and verification of AI-generated content. The charges were ultimately dismissed (primarily for lack of probable cause), but not before the bogus citations made the DA's office a cautionary tale for prosecutors nationwide. Solis acknowledged the error and promised to "review and reinforce internal practices." It's always reassuring when the person responsible for prosecuting crimes can't be bothered to read the citations in their own filings.

Facepalm · by Legal Professional
Burglary case dismissed; DA's office publicly sanctioned; national media coverage undermining public trust in prosecutorial competence
#vibe-lawyering #ai-hallucination #legal-risk (+1 more)

10th Circuit sanctions lawyer $1,000 for ChatGPT-fabricated appellate brief

Feb 2026

Maryland attorney Kusmin Amarsingh used ChatGPT to draft her appellate brief against Frontier Airlines without verifying any citations, resulting in multiple nonexistent cases being cited in the 10th Circuit. The court found her conduct "reckless" for completely failing to perform "an attorney's fundamental duty to the court." She was fined $1,000 and referred to Maryland attorney-disciplinary authorities.

Facepalm · by Attorney
Client's appeal dismissed; attorney faces $1,000 fine and disciplinary referral; case adds to mounting appellate-level precedent on AI citation verification duties
#ai-hallucination #legal-risk #vibe-lawyering

Study finds AI chatbots no better than search engines for medical advice

Feb 2026

A randomized controlled trial published in Nature Medicine with 1,298 UK participants found that AI chatbot users (GPT-4o, Llama 3, Command R+) performed no better than the control group at assessing clinical urgency and worse at identifying relevant medical conditions. In one case, two users with identical subarachnoid hemorrhage symptoms received opposite recommendations - one told to lie down in a dark room, the other correctly advised to seek emergency care.

Facepalm · by AI assistant
General public using AI chatbots for medical guidance; study demonstrates benchmark performance does not predict real-world clinical utility
#ai-hallucination #health #safety (+1 more)

Repeated AI-fabricated citations cost client the entire case

Feb 2026

Attorney Steven Feldman filed multiple motions containing AI-fabricated case citations in Flycatcher Corp. v. Affable Avenue LLC. Despite explicit court warnings and access to Westlaw and Lexis, he continued submitting unverified AI output - even using AI to draft his response to the court's show-cause order, which contained yet more fake citations. Judge Failla imposed the most severe AI-hallucination sanction yet: default judgment against his client.

Catastrophic · by Attorney
Client lost the entire case via terminal sanction; attorney faces fees under Rule 11 and 28 U.S.C. § 1927; most severe consequence yet for AI citation fabrication in U.S. courts
#ai-hallucination #legal-risk #vibe-lawyering

Four attorneys fined $12,000 combined for AI-fabricated patent case citations

Feb 2026

A federal judge in the District of Kansas fined four attorneys a combined $12,000 for court filings containing AI-generated fabricated legal citations in a patent infringement case. The attorney who used ChatGPT received $5,000; two who failed to review the filings received $3,000 each; local counsel who did not identify errors received $1,000. The judge called the volume of fabricated case law "staggering."

Facepalm · by Attorney
Four attorneys sanctioned across a single case; staggering volume of fabricated case law filed with the court; all signatories held personally accountable
#ai-hallucination #legal-risk #vibe-lawyering

ECRI names AI chatbot misuse as top health technology hazard for 2026

Jan 2026

Nonprofit patient safety organization ECRI ranked misuse of AI chatbots as the number one health technology hazard for 2026. ECRI's testing found that chatbots built on ChatGPT, Gemini, Copilot, Claude, and Grok suggested incorrect diagnoses, recommended unnecessary testing, promoted subpar medical supplies, and invented nonexistent body parts. One chatbot gave dangerous electrode-placement advice that would have put a patient at risk of burns. OpenAI reported that over 5 percent of all ChatGPT messages are healthcare related, with 200 million users asking health questions weekly, despite the tools not being validated or approved for healthcare use.

Catastrophic · by AI chatbot
200 million weekly ChatGPT health users; clinicians, patients, and hospital staff using unvalidated AI chatbots for medical decisions
#health #ai-hallucination #ai-assistant (+1 more)

Two lawyers sanctioned differently for same filing with AI-fabricated citations

Jan 2026

Attorneys Yen-Yi Anderson and Jeffrey Goldin jointly filed a motion in Lifetime Well v. IBSpot containing at least eight AI-generated false citations. Judge Kearney imposed differential sanctions based on their responses: Anderson, who blamed time pressure and fired her law clerk rather than accepting responsibility, received $4,000 in monetary sanctions. Goldin, who promptly accepted responsibility and implemented remedial measures, received no monetary penalty.

Facepalm · by Attorney
Client's motion to dismiss compromised; $4,000 sanction for one attorney; both required to distribute ruling and AI policies to legal communities
#ai-hallucination #legal-risk #vibe-lawyering

New York court sanctions lawyer for AI-fabricated case law

Jan 2026

A New York appellate court imposed $10,000 in sanctions after a lawyer submitted briefings in a mortgage foreclosure case containing fabricated case citations identified as likely AI-generated hallucinations. The court found multiple nonexistent cases and misrepresented holdings, affirming prior orders and awarding costs to the plaintiff.

Facepalm · by Legal Counsel
$10,000 in sanctions ($5,000 counsel, $2,500 defendant, plus costs); appellate rebuke; case law now cited as precedent for AI citation misconduct.
#ai-hallucination #legal-risk #vibe-lawyering

Five Kansas attorneys face sanctions for ChatGPT-fabricated court citations

Jan 2026

Five attorneys who signed a legal brief for Lexos Media IP LLC in a patent infringement case against Overstock.com submitted fabricated case citations hallucinated by ChatGPT to a federal court in Kansas. Senior U.S. District Judge Julie Robinson issued an order requiring them to explain why they should not be sanctioned, with multiple defects attributed to AI including nonexistent lawsuits, made-up judicial quotes, and citations to real cases that held the opposite of what the brief claimed.

Facepalm · by AI chatbot
Five attorneys and their client in federal court
#ai-hallucination #legal-risk #vibe-lawyering (+1 more)

Guardian investigation finds Google AI Overviews gave dangerous health misinformation

Jan 2026

A Guardian investigation found Google's AI Overviews displayed false and misleading health information across multiple medical topics. AI summaries gave incorrect liver function test ranges sourced from an Indian hospital chain without accounting for nationality, sex, or age. The feature advised pancreatic cancer patients to avoid high-fat foods, which experts said could increase mortality risk. Stanford and MIT researchers called the absence of prominent disclaimers a critical danger. Google removed some AI Overviews for health queries after the investigation, but many remained active.

Facepalm · by Search Product
Potentially millions of Google users served incorrect medical information including dangerous advice for cancer patients and liver disease
#ai-hallucination #health #ai-content-generation (+1 more)

AI police report claims officer shape-shifted into a frog

Dec 2025

Heber City Police Department's Axon Draft One AI report tool transcribed background dialogue from The Princess and the Frog playing on a television into an official police report, claiming an officer had shape-shifted into a frog while conducting police activity. The incident exposed design flaws in AI report-writing tools that process all body camera audio without distinguishing between relevant police interactions and ambient background noise.

Facepalm · by AI Vendor
Viral media coverage; raised questions about AI reliability in law enforcement report writing.
#ai-content-generation #ai-hallucination #public-sector

Amazon pulled Prime Video's AI recaps after Fallout errors

Dec 2025

Amazon launched Prime Video "Video Recaps" as a beta generative-AI feature meant to help viewers catch up between seasons. A recap for Fallout instead got basic plot points wrong, including mislabeling one of The Ghoul's flashbacks as "1950s America" rather than 2077 and misdescribing a key scene with Lucy. Prime Video then pulled the recap feature from the shows in the test program, which is not ideal for a tool whose entire job is remembering the plot.

Oopsie · by Streaming platform
Prime Video pulled beta AI recap videos across select US Prime Original series after factual errors in the Fallout season-one recap
#ai-content-generation #ai-hallucination #product-failure (+1 more)

Washington Post launched AI podcast that failed its own quality tests at an 84% rate

Dec 2025

The Washington Post launched "Your Personal Podcast," an AI-generated audio news product, in December 2025 despite internal testing showing that between 68% and 84% of AI-generated scripts failed to meet the publication's editorial standards across three rounds of evaluation. The AI fabricated quotes from public figures, misattributed statements, mispronounced names, and inserted its own editorial commentary as if it were the Post's position. The internal review concluded that "further small prompt changes are unlikely to meaningfully improve outcomes without introducing more risk." The product team recommended launching anyway. Post editors revolted, with one writing in Slack that it was "truly astonishing that this was allowed to go forward at all."

Facepalm · by Executive
Fabricated quotes published at scale under Washington Post branding; internal revolt from editorial staff; national media coverage of quality failures.
#ai-content-generation #ai-hallucination #journalism (+2 more)

AI-hallucinated citations delay wage class action settlement in N.D. Cal

Nov 2025

A federal judge in the Northern District of California sanctioned plaintiff's counsel James Dal Bon in Buchanan v. Vuori Inc. (Case 5:23-cv-01121-NC) for filing AI-generated case law citations in a motion for preliminary approval of a wage and hour class action settlement. Dal Bon used six different AI tools to prepare the memorandum, which contained hallucinated quotes and a nonexistent case citation. After the court flagged the fabricated citations, his corrected filing still contained AI-hallucinated case law. The sanctions delayed the class action settlement, ultimately converting it to an individual settlement that abandoned the class members the attorney was supposed to represent.

Facepalm · by AI chatbot
Class action plaintiffs whose settlement was delayed; attorney sanctioned for AI-generated fabrications that persisted even after correction
#ai-hallucination #legal-risk #vibe-lawyering (+1 more)

AI-only support is bleeding customers before it saves money

Oct 2025

Acquire BPO's 2024 AI in Customer Service survey found that 70% of U.S. consumers would bolt to a rival after just one bad chatbot interaction and 72% only buy when a live-agent safety net exists - even as CMSWire reports enterprises poured $47 billion into AI projects in early 2025 that delivered almost no return. CX strategists now warn executives that Air Canada-style hallucinations, mounting legal liability, and empathy gaps make AI-only helpdesks a churn machine unless human agents stay in the loop.

Facepalm · by Executive
Customer churn, wasted automation budgets, and tribunal-tested liability for brands that replace human support with hallucination-prone bots.
#ai-assistant #customer-service #customer-disservice (+3 more)

BBC/EBU study says AI news summaries fail ~half the time

Oct 2025

A BBC audit of 2,700 news questions asked in 14 languages found that Gemini, Copilot, ChatGPT, and Perplexity mangled 45% of the answers, usually by hallucinating facts or stripping out attribution. The consortium logged serious sourcing lapses in a third of responses, including 72% of Gemini replies, plus outdated or fabricated claims about public-policy news, reinforcing fears that AI assistants are siphoning audiences while distorting the journalism they quote.

Facepalm · by AI Product
Public-service broadcasters warn that unreliable AI summaries erode trust in news and drive audiences away from verified outlets.
#ai-assistant #ai-hallucination #journalism (+2 more)

Google’s Gemini allegedly slandered a Tennessee activist

Oct 2025

Conservative organizer Robby Starbuck sued Google in Delaware, saying Gemini and Gemma kept spitting out fabricated claims that he was a child rapist, a shooter, and a Jan. 6 rioter even after two years of complaints and cease-and-desist letters. The $15 million suit argues Google knew its AI results were hallucinated, cited fake sources anyway, and let the libel spread to millions of voters.

Facepalm · by AI Product
Election-season reputational damage, legal costs, and renewed skepticism of Gemini's safety guardrails.
#ai-assistant #ai-hallucination #brand-damage (+1 more)

Deloitte to refund Australian government after AI-generated report

Oct 2025

Deloitte Australia agreed to partially refund a $440,000 contract after admitting its welfare compliance review for the Department of Employment and Workplace Relations contained fabricated academic citations and a fictitious judicial quote generated by Azure OpenAI GPT-4o. University of Sydney researcher Christopher Rudge found the revised report introduced even more hallucinated references than the original.

Facepalm · by Consultant
Refund issued; public-sector trust and procurement review; reputational harm.
#ai-content-generation #ai-hallucination #public-sector (+2 more)

GAO dismisses 15 AI-hallucinated bid protests as abuse of process

Sep 2025

The Government Accountability Office dismissed three consolidated protests filed by Oready, LLC - the culmination of 15 pro se bid protests filed over eight months, all riddled with non-existent citations, fabricated decisions, and hallmarks of unverified generative AI output. The GAO labeled Oready's pattern as "Gen-AI Misuse" and dismissed the protests as an abuse of the bid protest process, marking the GAO's first published dismissal for AI-driven abuse. Prior warnings issued in June and August 2025 were ignored. The fallout also prompted the GAO's January 2026 decision in Bramstedt Surgical to devote several pages to cautioning against AI-hallucinated citations, signaling that federal procurement tribunals are done issuing gentle reminders.

Facepalm · by AI assistant
First published GAO dismissal for generative AI misuse; 15 protests wasted federal procurement resources over eight months; precedent-setting for AI citation standards in government contracting
#vibe-lawyering #ai-hallucination #legal-risk (+1 more)

California lawyer fined $10,000 for ChatGPT-fabricated citations

Sep 2025

Los Angeles attorney Amir Mostafavi became the first California lawyer sanctioned for AI-generated legal fabrications when a court hit him with a $10,000 fine. He ran his appeal draft through ChatGPT to improve the writing but did not verify the output before filing, unaware the tool had inserted fabricated case citations.

Facepalm · by AI writing assistant misuse
Client's case compromised; lawyer faces historic fine; AI citation fabrications now surging from a few per month to several per day
#ai-hallucination #legal-risk #vibe-lawyering

Am Law 100 firm Gordon Rees caught twice filing AI-hallucinated citations

Aug 2025

Gordon Rees Scully Mansukhani, one of the largest U.S. law firms, was caught filing AI-hallucinated case citations in an Alabama bankruptcy proceeding. An associate initially denied using AI under oath before the firm acknowledged the fabricated references and paid over $55,000 in sanctions and fees. Months later in February 2026, the same firm was reported to have filed a second brief containing hallucinated citations in a separate matter, making it the first Am Law 100 firm known to be a repeat offender.

Facepalm · by AI assistant
Repeated sanctions and reputational damage for a 1,000-plus attorney Am Law 100 firm; highlights systemic failure of AI verification processes even after prior discipline
#ai-hallucination #legal-risk #vibe-lawyering (+1 more)

ChatGPT diet advice caused bromism, psychosis, hospitalization

Aug 2025

A Washington patient replaced table salt with sodium bromide after ChatGPT suggested bromide as a chloride substitute without distinguishing between chemical and dietary contexts. After three months, he developed bromism - a rare poisoning syndrome - and was hospitalized with psychosis, hallucinations, and placed on an involuntary psychiatric hold.

Facepalm · by AI Product
Bromism, psychosis, and neurological symptoms leading to hospitalization.
#ai-assistant #ai-hallucination #health (+1 more)

Butler Snow lawyers removed from Alabama prison case over fake ChatGPT citations

Jul 2025

On July 23, 2025, U.S. District Judge Anna Manasco sanctioned three Butler Snow lawyers after filings in an Alabama prison case cited authorities that did not exist. The court found the lawyers had used ChatGPT for legal research, failed to verify the output, removed all three from the case, ordered broad disclosure of the sanctions order to clients and courts, and referred the matter to the Alabama State Bar. It was not just another fake citation incident. It was a fake citation incident attached to one of the firms Alabama pays to defend its prison system in high-stakes civil rights litigation.

Facepalm · by Law firm
Three Butler Snow lawyers removed from a federal prison litigation case; sanctions order had to be disclosed to clients, opposing counsel, and judges in their other matters; Alabama State Bar referral
#ai-hallucination #legal-risk #vibe-lawyering (+1 more)

Reporter fired after AI tool provided by her employer fabricated sources in front-page article

Jul 2025

Wisconsin State Journal reporter Audrey Korte was fired in July 2025 after publishing a front-page article about a downtown Madison development plan that contained factual errors and fabricated sources generated by an AI tool. The tool had been provided by the newspaper's parent company, Lee Enterprises, and was installed on employee computers. Korte said she used it for grammar and style editing, but it introduced false information she didn't catch before publication. The article was pulled, replaced with a re-reported version, and stamped with a disclaimer citing "unauthorized AI use" and "fabricated sources." Korte was terminated. She publicly accepted responsibility for not catching the errors but noted she had received no training on the tool that was already installed on her work computer.

Facepalm · by Reporter
Front-page print article published with fabricated sources; reporter terminated; Lee Enterprises under scrutiny for deploying AI tools without training or clear policies.
#ai-content-generation #ai-hallucination #journalism (+1 more)

AI chatbots kept handing users fake or dead login URLs

Jul 2025

Netcraft found in July 2025 that when users asked AI chatbots for official login pages for major brands, the answers were wrong about a third of the time. In tests covering 50 brands, 34% of the returned hostnames were not controlled by the brands at all: nearly 30% were unregistered, parked, or inactive, and another 5% pointed to unrelated businesses. In one Wells Fargo test, the model surfaced a fake page already tied to phishing. A chatbot that confidently invents login URLs is not a search engine with quirks. It is a phishing assistant with good manners.

Facepalm · by AI product
Users seeking major brand logins exposed to phishing and typo-domain risk; one-third of tested hostnames not brand-controlled; scammers incentivized to register or poison wrong URLs
#security #ai-hallucination #ai-assistant

Georgia appeals court fined a divorce lawyer after fake AI-like citations reached the order itself

Jun 2025

In Shahid v. Esaam, decided June 30, 2025, the Georgia Court of Appeals vacated part of a divorce-related order after finding that several cited authorities did not exist and others did not support the propositions claimed. The panel concluded the briefing showed the hallmarks of generative AI hallucination, fined attorney Diana Lynch $2,500, and sent the matter back to the trial court. What made the case stand out was not just a bad brief. The fake citations appeared to have made their way into the trial court's signed order.

Facepalm · by Attorney
Georgia Court of Appeals vacated part of a divorce order, imposed the maximum statutory penalty, and turned one lawyer's filing shortcuts into a published appellate embarrassment
#ai-hallucination #legal-risk #vibe-lawyering

AI-generated images and claims muddied Air India crash coverage

Jun 2025

After Air India Flight 171 crashed in Ahmedabad on June 12, 2025, killing 275 people, AI-generated images of the crash spread across social media platforms. One widely shared synthetic image depicted the Boeing 787 broken in half across a building, but contained physically impossible details that experts identified as AI-generated. Fake victim photos, fabricated reports, and fraudulent fundraising campaigns followed. Google's AI Overview compounded the problem by incorrectly identifying the crashed aircraft as an Airbus rather than Boeing. Mashable reported the AI-generated content was convincing enough to confuse even aviation professionals.

Facepalm, by Social platforms
Public misinformation; platform moderation challenges.
ai-hallucination, image-generation, platform-policy

UK High Court warns lawyers after fake AI citations infected two cases

Jun 2025

On June 6, 2025, the High Court of England and Wales issued a joint ruling in two separate matters after lawyers put fake authorities before the court. In one case tied to Qatar National Bank, a filing cited 45 authorities, 18 of which did not exist, while many of the rest were misquoted or irrelevant. In the other, a housing claim against the London Borough of Haringey included five fabricated cases. The Divisional Court, led by Dame Victoria Sharp, said tools such as ChatGPT are not capable of reliable legal research, referred the lawyers involved to their regulators, and warned that more serious future misuse could lead to contempt proceedings or even police referral. The ruling turned individual AI citation blunders into a profession-wide warning.

Facepalm, by Legal Counsel
Two active court matters tainted by fabricated authorities; lawyers referred to regulators; High Court warning circulated to the Bar Council, Law Society, and Inns of Court.
ai-hallucination, legal-risk, vibe-lawyering

Syndicated AI book list ran in major papers with made-up titles

May 2025

A freelance writer working for King Features Syndicate used AI to research a summer reading list for the Chicago Sun-Times and Philadelphia Inquirer. Of the fifteen books recommended, only five were real. The rest were hallucinated titles attributed to real authors like Isabel Allende and Delia Owens. The list ran in print in a 64-page special section before 404 Media, NPR, and others exposed the fabrications. Both newspapers issued corrections and statements distancing their newsrooms from the syndicated content.

Facepalm, by Syndication/Editorial
Syndicated misinformation across multiple papers; reader trust impact; corrections issued.
journalism, ai-content-generation, ai-hallucination (+3 more)

ChatGPT invented a child-murder conviction for a real man

Mar 2025

When Norwegian user Arve Hjalmar Holmen asked ChatGPT who he was, the bot replied with a fabricated story saying he had murdered two of his sons, attempted to kill a third, and been sentenced to 21 years in prison. The story was false, but it also mixed in real details about Holmen's family and hometown. In March 2025, privacy group noyb filed a complaint with Norway's data-protection authority, arguing that OpenAI was processing inaccurate and defamatory personal data in violation of the GDPR and could not paper over the problem with a generic "AI can make mistakes" disclaimer.

Facepalm, by AI assistant
Severe reputational risk to a private person, a formal GDPR complaint, and more pressure on OpenAI over hallucinated personal data.
ai-assistant, ai-hallucination, legal-risk (+1 more)

Apple pulled AI news summaries after fake BBC headlines

Jan 2025

Apple Intelligence's notification-summary feature spent late 2024 turning news alerts into fiction with excellent lock-screen placement. In the most widely cited example, it generated a false BBC alert claiming Luigi Mangione had shot himself. The BBC complained that Apple was attaching fabricated claims to its reporting, other publishers raised similar concerns, and Apple responded in January 2025 by disabling notification summaries for News & Entertainment apps in iOS 18.3 while it reworked the feature.

Facepalm, by Consumer AI feature
False breaking-news alerts on iPhones, publisher trust damage, and a public rollback by Apple.
ai-hallucination, journalism, product-failure (+2 more)

Cody Enterprise reporter resigned after AI fabricated quotes from real people

Aug 2024

The Cody Enterprise was forced into public apologies and corrections in August 2024 after reporter Aaron Pelczar resigned amid evidence that an AI tool he used to help write stories had inserted fabricated quotations. A competing reporter at the Powell Tribune spotted robotic phrasing, suspiciously polished source quotes, and one article that bizarrely ended by explaining the inverted pyramid style of news writing. The resulting review found seven stories that included invented or altered quotes from seven people, including Wyoming Gov. Mark Gordon. The paper removed many of the quotes, issued corrections, and then adopted AI-detection measures and a formal AI policy after learning, a little late, that generative text tools are not interchangeable with reporting.

Facepalm, by Reporter
Seven stories tainted by fabricated or altered quotes; public apologies and corrections; reporter resigned; local newsroom credibility damaged.
ai-content-generation, ai-hallucination, journalism (+2 more)

Meta AI answers spark backlash after wrong and sensitive replies

Jul 2024

Meta rolled out its Llama 3-powered AI assistant across Facebook, Instagram, WhatsApp, and Messenger in April 2024, replacing the familiar search bar with "Ask Meta AI anything" prompts. The assistant struggled with factual accuracy from the start - the New York Times found it unreliable with facts, numbers, and web search. In July, when asked about the Trump rally shooting, Meta AI stated the assassination attempt had not happened. Meta blamed hallucinations, updated the system, and acknowledged that "all generative AI systems can return inaccurate or inappropriate outputs."

Oopsie, by AI Product
Feature restrictions; reputational damage.
ai-assistant, ai-hallucination, platform-policy (+2 more)

Google’s AI Overviews told users to eat rocks

May 2024

Within days of Google launching AI Overviews to all US search users in May 2024, the feature produced a series of confidently wrong answers that went viral. It told users to add non-toxic glue to pizza to make cheese stick better (sourced from an 11-year-old Reddit joke), that geologists recommend eating one rock per day for vitamins, and that Barack Obama was Muslim. Google head of search Liz Reid acknowledged the errors in a blog post, calling some results "odd, inaccurate or unhelpful," and the company made corrections including limiting AI Overviews for health-related and sensitive queries.

Facepalm, by Search Product
Mass reputational damage; feature dialed back and corrected.
ai-assistant, ai-hallucination, platform-policy (+1 more)

NYC’s official AI bot told businesses to break laws

Mar 2024

New York City launched a Microsoft-powered AI chatbot called MyCity in October 2023 to help small business owners navigate regulations. A March 2024 investigation by The Markup found the bot was routinely advising businesses to break the law - telling employers they could pocket workers' tips, landlords they could discriminate against housing voucher holders, and bosses they could fire whistleblowers. Mayor Eric Adams acknowledged the errors but refused to take the chatbot offline, calling AI a "once-in-a-generation opportunity." NYU professor Julia Stoyanovich called the city's approach "reckless and irresponsible."

Facepalm, by Executive
City guidance channel distributed illegal advice; public backlash.
ai-hallucination, automation, legal-risk (+2 more)

AI-hallucinated packages fuel "slopsquatting" vulnerabilities

Mar 2024

Security researcher Bar Lanyado at Lasso Security discovered that AI code assistants consistently hallucinate nonexistent software package names when answering programming questions - and that nearly 30% of prompts produce at least one fake package recommendation. Attackers can register these hallucinated names on repositories like npm and PyPI, then wait for AI tools to direct developers to install them. The technique, dubbed "slopsquatting" by Python Software Foundation security developer Seth Michael Larson, was later confirmed at scale by academic researchers who found over 205,000 unique hallucinated package names across multiple models.
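A basic defense is to verify that an AI-recommended package name actually exists in the registry before installing it. A minimal Python sketch against PyPI's JSON metadata endpoint (a 404 there means the name was never registered); the function names are illustrative, and the HTTP opener is injectable so the check can be exercised offline. The check's limit is worth stating: existence alone proves nothing once an attacker has already registered the hallucinated name.

```python
import urllib.request
from urllib.error import HTTPError

PYPI_JSON = "https://pypi.org/pypi/{name}/json"  # PyPI's JSON metadata endpoint

def exists_on_pypi(name: str, opener=urllib.request.urlopen) -> bool:
    """Return True if `name` is a registered PyPI project (404 means it is not)."""
    try:
        with opener(PYPI_JSON.format(name=name)) as resp:
            return resp.status == 200
    except HTTPError as err:
        if err.code == 404:
            return False  # never registered: a slopsquatting candidate
        raise

def unregistered_candidates(packages, opener=urllib.request.urlopen):
    """Names an AI suggested that PyPI has never seen; these are exactly the
    names an attacker could register to weaponize the hallucination."""
    return [p for p in packages if not exists_on_pypi(p, opener)]
```

Flagged names should block an automated install and trigger human review, since a name that fails today may be attacker-registered tomorrow.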

Catastrophic, by Malicious actors
Potential supply-chain compromise when vibe-coders install hallucinated, malicious dependencies.
ai-hallucination, supply-chain, security

Gemini paused people images after historical inaccuracies

Feb 2024

Google paused Gemini's image generation of people on February 22, 2024, after users discovered the tool was producing historically inaccurate depictions - including racially diverse World War II German soldiers, Black female popes, and multiethnic U.S. Founding Fathers. The overcorrection stemmed from diversity tuning meant to counter training-data biases, but the model failed to distinguish when diversity adjustments were inappropriate for specific historical prompts. CEO Sundar Pichai called the outputs "completely unacceptable." Google SVP Prabhakar Raghavan later published a blog post acknowledging the model had "overcompensated" and been "over-conservative."

Facepalm, by AI Product
Feature paused; trust hit; policy and model adjustments.
ai-hallucination, image-generation, platform-policy (+2 more)

Air Canada liable for lying chatbot promises

Feb 2024

Jake Moffatt used Air Canada's website chatbot to ask about bereavement fares after his grandmother died. The chatbot told him he could book at full price and apply for a bereavement discount within 90 days. Air Canada's actual policy did not allow retroactive bereavement fare claims. When Moffatt applied, the airline denied the refund and admitted the chatbot had provided "misleading words" - but argued Moffatt should have checked the static webpage instead. British Columbia's Civil Resolution Tribunal ruled in Moffatt's favor in February 2024, finding Air Canada liable for negligent misrepresentation and rejecting the airline's argument that it wasn't responsible for its own chatbot's statements.

Facepalm, by Product Manager
Legal liability; refund + fees; policy/process review.
ai-hallucination, automation, customer-service (+2 more)

Gannett pauses AI sports recaps after mockery

Aug 2023

In August 2023, Gannett - the largest newspaper chain in the United States - deployed an AI service called LedeAI to auto-generate high school sports recaps for the Columbus Dispatch and other papers. The articles went viral on social media for their robotic phrasing, missing player names, and bizarre constructions like "close encounter of the athletic kind." Several articles required corrections appended with notes about "errors in coding, programming or style." Gannett paused the experiment and said it would add "hundreds of reporting jobs" alongside AI tools, though the connection between the two claims was unclear.

Facepalm, by Executive
Chain-wide pause of AI copy; reputational hit in local markets.
ai-content-generation, ai-hallucination, brand-damage (+2 more)

Lawyers filed ChatGPT’s imaginary cases; judge fined them

Jun 2023

In Mata v. Avianca (S.D.N.Y.), plaintiff Roberto Mata sued the airline after a metal serving cart struck his knee during a 2019 flight. His attorney Peter LoDuca filed a brief opposing dismissal that cited six judicial decisions. When opposing counsel and the court couldn't locate any of the cited cases, Judge Kevin Castel demanded copies. It turned out attorney Steven Schwartz at the same firm had used ChatGPT to research and draft the brief, and the AI had fabricated every case, complete with fake quotes and fake internal citations. On June 22, 2023, Castel sanctioned Schwartz, LoDuca, and their firm Levidow, Levidow & Oberman with a $5,000 penalty and required them to send notices to the real judges whose names appeared in the fabricated opinions.

Facepalm, by Legal Counsel
Court sanctions; fines and mandated notices; reputational damage in legal community.
ai-assistant, ai-hallucination, legal-risk (+1 more)

Google’s Bard ad made a false JWST “first” claim

Feb 2023

Google unveiled Bard on February 6, 2023, with a promotional ad on Twitter demonstrating the chatbot answering a question about the James Webb Space Telescope. Given the prompt "What new discoveries from the JWST can I tell my 9-year old about?", Bard stated that the JWST had taken the first pictures of a planet outside our solar system. This was false - the European Southern Observatory's Very Large Telescope captured the first direct exoplanet image in 2004. Reuters spotted the error on February 8, the day of a Google AI event in Paris. Alphabet shares dropped roughly 9% that day, erasing about $100 billion in market value.

Oopsie, by Marketing
Embarrassing launch moment; stock wobble; trust in product accuracy questioned.
ai-hallucination, product-failure, brand-damage

CNET mass-corrects AI-written finance explainers

Jan 2023

Starting in November 2022, CNET quietly published 77 financial explainer articles written by an AI tool under the byline "CNET Money Staff." Readers had to hover over the byline to learn the articles were produced "using automation technology." In January 2023, Futurism broke the story, and a follow-up identified factual errors in a compound interest article, prompting a full audit. CNET editor-in-chief Connie Guglielmo confirmed corrections were issued on 41 of the 77 articles - more than half - including some she described as "substantial." CNET paused AI-generated publishing and updated its disclosure practices, though Guglielmo said the outlet intended to continue using AI tools.

Facepalm, by Executive
Large corrections; credibility hit; policy changes on AI usage.
ai-content-generation, ai-hallucination, brand-damage (+3 more)