Legal Risk Stories

24 disasters tagged #legal-risk

Fifth Circuit sanctions lawyer $2,500 for AI-hallucinated citations, says problem "getting worse"

Feb 2026

The U.S. Court of Appeals for the Fifth Circuit sanctioned attorney Heather Hersh $2,500 after finding her brief contained 16 fabricated quotations and five additional serious misrepresentations of law or fact, all apparently AI-generated. The court expressed frustration that AI-hallucinated legal citations "have increasingly become an even greater problem in our courts" and that the issue "shows no sign of abating." Hersh initially denied using AI, then shifted to claiming she "relied on publicly available versions of the cases, which she believed were accurate."

Facepalm · by AI assistant
First known federal appeals court sanction for AI hallucinations; court signals escalating judicial frustration nearly three years after the first high-profile case
ai-hallucination, legal-risk

10th Circuit sanctions lawyer $1,000 for ChatGPT-fabricated appellate brief

Feb 2026

Maryland attorney Kusmin Amarsingh used ChatGPT to draft her appellate brief against Frontier Airlines without verifying any citations, resulting in multiple nonexistent cases being cited in the 10th Circuit. The court found her conduct "reckless" for completely failing to perform "an attorney's fundamental duty to the court." She was fined $1,000 and referred to Maryland attorney-disciplinary authorities.

Facepalm · by Attorney
Client's appeal dismissed; attorney faces $1,000 fine and disciplinary referral; case adds to mounting appellate-level precedent on AI citation verification duties
ai-hallucination, legal-risk

Repeated AI-fabricated citations cost client the entire case

Feb 2026

Attorney Steven Feldman filed multiple motions containing AI-fabricated case citations in Flycatcher Corp. v. Affable Avenue LLC. Despite explicit court warnings and access to Westlaw and Lexis, he continued submitting unverified AI output -- even using AI to draft his response to the court's show-cause order, which contained yet more fake citations. Judge Failla imposed the most severe AI-hallucination sanction yet: default judgment against his client.

Catastrophic · by Attorney
Client lost the entire case via terminal sanction; attorney faces fees under Rule 11 and 28 U.S.C. § 1927; most severe consequence yet for AI citation fabrication in U.S. courts
ai-hallucination, legal-risk

Four attorneys fined $12,000 combined for AI-fabricated patent case citations

Feb 2026

A federal judge in the District of Kansas fined four attorneys a combined $12,000 for court filings containing AI-generated fabricated legal citations in a patent infringement case. The attorney who used ChatGPT was fined $5,000; two attorneys who failed to review the filings were fined $3,000 each; local counsel who failed to catch the errors was fined $1,000. The judge called the volume of fabricated case law "staggering."

Facepalm · by Attorney
Four attorneys sanctioned across a single case; staggering volume of fabricated case law filed with the court; all signatories held personally accountable
ai-hallucination, legal-risk

Two lawyers sanctioned differently for same filing with AI-fabricated citations

Jan 2026

Attorneys Yen-Yi Anderson and Jeffrey Goldin jointly filed a motion in Lifetime Well v. IBSpot containing at least eight AI-generated false citations. Judge Kearney imposed differential sanctions based on their responses: Anderson, who blamed time pressure and fired her law clerk rather than accepting responsibility, received $4,000 in monetary sanctions. Goldin, who promptly accepted responsibility and implemented remedial measures, received no monetary penalty.

Facepalm · by Attorney
Client's motion to dismiss compromised; $4,000 sanction for one attorney; both required to distribute ruling and AI policies to legal communities
ai-hallucination, legal-risk

New York court sanctions lawyer for AI-fabricated case law

Jan 2026

A New York appellate court imposed $10,000 in sanctions after a lawyer submitted briefings in a mortgage foreclosure case containing fabricated case citations identified as likely AI-generated hallucinations. The court found multiple nonexistent cases and misrepresented holdings, affirming prior orders and awarding costs to the plaintiff.

Facepalm · by Legal Counsel
$10,000 in sanctions ($5,000 counsel, $2,500 defendant, plus costs); appellate rebuke; case law now cited as precedent for AI citation misconduct
ai-hallucination, legal-risk

Five Kansas attorneys face sanctions for ChatGPT-fabricated court citations

Jan 2026

Five attorneys who signed a legal brief in McPhaul v. College Hills submitted fabricated case citations hallucinated by ChatGPT to a federal court in Kansas. The judge issued an order requiring them to explain why they should not be sanctioned, with multiple defects attributed to AI in the documents.

Facepalm · by AI chatbot
Five attorneys ordered to show cause why they should not be sanctioned; their client's case jeopardized in federal court
ai-hallucination, legal-risk, ai-assistant

Getty’s UK suit leaves Stable Diffusion mostly intact

Nov 2025

A UK High Court judge ruled Stability AI liable for trademark infringement after Stable Diffusion spat out synthetic Getty watermarks. Getty called for tougher laws, while both sides now face a precedent that AI models can still trigger trademark penalties even when copyright claims fizzle.

Facepalm · by AI Vendor
Mixed ruling fuels ongoing lawsuits, exposes Stability AI to injunctions over watermarked outputs, and leaves copyright liability unanswered globally.
image-generation, legal-risk, brand-damage

AI-only support is bleeding customers before it saves money

Oct 2025

Acquire BPO’s 2024 AI in Customer Service survey found that 70% of U.S. consumers would bolt to a rival after just one bad chatbot interaction, and that 72% only buy when a live-agent safety net exists. Meanwhile, CMSWire reports enterprises poured $47 billion into AI projects in early 2025 that delivered almost no return. CX strategists now warn executives that Air Canada–style hallucinations, mounting legal liability, and empathy gaps make AI-only helpdesks a churn machine unless human agents stay in the loop.

Facepalm · by Executive
Customer churn, wasted automation budgets, and tribunal-tested liability for brands that replace human support with hallucination-prone bots.
ai-assistant, customer-service, ai-hallucination, +2 more

Google’s Gemini allegedly slandered a Tennessee activist

Oct 2025

Conservative organizer Robby Starbuck sued Google in Delaware, saying Gemini and Gemma kept spitting out fabricated claims that he was a child rapist, a shooter, and a Jan. 6 rioter even after two years of complaints and cease-and-desist letters. The $15 million suit argues Google knew its AI results were hallucinated, cited fake sources anyway, and let the libel spread to millions of voters.

Facepalm · by AI Product
Election-season reputational damage, legal costs, and renewed skepticism of Gemini’s safety guardrails.
ai-assistant, ai-hallucination, brand-damage, +1 more

Deloitte to refund Australian government after AI-generated report

Oct 2025

Deloitte admitted to AI-generated errors in a commissioned Australian government report and agreed to refund part of the fee.

Facepalm · by Consultant
Refund issued; public-sector trust and procurement review; reputational harm.
ai-content-generation, ai-hallucination, public-sector, +2 more

California lawyer fined $10,000 for ChatGPT-fabricated citations

Sep 2025

Los Angeles attorney Amir Mostafavi became the first California lawyer sanctioned for AI-generated legal fabrications when a court hit him with a $10,000 fine. He ran his appeal draft through ChatGPT to improve the writing but did not verify the output before filing, unaware the tool had inserted fabricated case citations.

Facepalm · by AI writing assistant misuse
Client's case compromised; lawyer faces historic fine; AI citation fabrications now surging from a few per month to several per day
ai-hallucination, legal-risk

FTC demands answers on kids’ AI companions

Sep 2025

The FTC hit Alphabet, Meta, OpenAI, Snap, xAI, and Character.AI with rare Section 6(b) orders, giving them 45 days to hand over safety, monetization, and testing records for chatbots marketed to teens. Regulators said the "companion" bots’ friend-like tone can coax minors into sharing sensitive data and even role-play self-harm, so the companies must prove they comply with COPPA and limit risky conversations.

Facepalm · by Platform Operator
Multiplatform compliance scramble, looming enforcement risk, and renewed scrutiny of AI companions aimed at kids.
ai-assistant, safety, legal-risk, +1 more

Anthropic agrees to $1.5B payout over pirated books

Sep 2025

Anthropic accepted a $1.5 billion settlement with authors who said the Claude team scraped pirate e-book sites to train its chatbot. The deal pays roughly $3,000 per book across 500,000 works, heads off a December trial, and forces one of the richest AI startups to bankroll the writing community it previously treated as free training data.

Catastrophic · by AI Vendor
Record copyright settlement drains cash, sets precedent for other AI labs, and fuels public distrust of Anthropic’s data practices.
ai-content-generation, legal-risk, brand-damage

Warner Bros. says Midjourney ripped its DC art

Sep 2025

Warner Bros. Discovery sued Midjourney in Los Angeles federal court, arguing the image generator ignored takedown notices and "brazenly" outputs Batman, Superman, Scooby-Doo, and other franchises it allegedly trained on without a license. The studio wants statutory damages up to $150,000 per infringed work plus an injunction forcing Midjourney to purge its models of the data.

Facepalm · by AI Vendor
Major studio litigation threatens Midjourney with statutory damages and potential model shutdowns across entertainment IP.
image-generation, legal-risk, brand-damage

FTC sues Air AI over deceptive AI sales agent capability claims

Aug 2025

The FTC accused Air AI of bilking millions of dollars from small businesses with false claims that its Odin AI could replace human sales reps, when in reality the technology was faulty and often nonfunctional.

Catastrophic · by Exec
Millions lost by small businesses; individual losses up to $250K; FTC lawsuit with TRO request.
automation, legal-risk, customer-service, +1 more

MD Anderson shelved IBM Watson cancer advisor

Feb 2025

MD Anderson's Oncology Expert Advisor pilot burned through $62M with IBM Watson yet still couldn't integrate with Epic or produce trustworthy recommendations, so the hospital benched it after auditors flagged procurement and scope failures.

Facepalm · by Vendor
UT audit cited $62M spent outside standard procurement, the pilot never made it into patient care, and leadership had to rebid decision-support tooling amid reputational fallout.
health, product-failure, brand-damage, +1 more

NYC’s official AI bot told businesses to break laws

Mar 2024

NYC’s Microsoft-powered MyCity chatbot gave inaccurate and sometimes illegal advice on labor and housing policy; the city kept it online anyway.

Facepalm · by Executive
City guidance channel distributed illegal advice; public backlash.
ai-hallucination, automation, legal-risk, +2 more

Air Canada liable for lying chatbot promises

Feb 2024

A tribunal ruled Air Canada responsible after its AI chatbot misled a traveler about bereavement refunds.

Facepalm · by Product Manager
Legal liability; refund + fees; policy/process review.
ai-hallucination, automation, customer-service, +1 more

AI “Biden” robocalls told voters to stay home; fines and charges followed

Jan 2024

Before the NH primary, an AI-cloned Biden voice urged Democrats not to vote. Authorities traced it, levied fines, and brought criminal charges.

Facepalm · by Political Consultant
Voter confusion; enforcement actions; national scrutiny of AI voice-clones.
safety, legal-risk, brand-damage

iTutorGroup's AI screened out older applicants; $365k EEOC settlement

Aug 2023

The EEOC reached a settlement after iTutorGroup's application-screening software rejected older applicants; the company will pay $365,000 and adopt compliance measures.

Facepalm · by Executive
Older job applicants screened out; legal settlement and mandated policy changes.
legal-risk, edtech, automation, +1 more

Lawyers filed ChatGPT’s imaginary cases; judge fined them

Jun 2023

In Mata v. Avianca, attorneys submitted a brief citing non-existent cases generated by ChatGPT. A federal judge sanctioned two lawyers, ordered a $5,000 penalty, and required notices to judges named in the fake citations.

Facepalm · by Legal Counsel
Court sanctions; fines and mandated notices; reputational damage in legal community.
ai-assistant, ai-hallucination, legal-risk

Koko tested AI counseling on users without clear consent

Jan 2023

Mental health app Koko used GPT-3 to draft replies for 4,000 users; backlash followed over consent and ethics.

Facepalm · by Founder/Operations
Trust damage; public criticism; policy changes.
ai-assistant, health, legal-risk

Babylon chatbot 'beats GPs' claim collapsed

Jun 2018

Babylon unveiled its AI symptom checker at the Royal College of Physicians and bragged that it scored 81% on the MRCGP exam, but the claim could not be verified, and critics warned that no chatbot can replace human judgment. Independent clinicians who later dissected Babylon's marketing study in The Lancet told Undark that the tiny, non-peer-reviewed test offered no proof the tool outperforms doctors and might even be worse.

Facepalm · by Startup
Patient harm, eroded trust, and regulators forced real clinical trials.
health, product-failure, safety, +1 more