Legal Risk Stories
43 disasters tagged #legal-risk
Oregon attorney hit with record $10K fine after AI fabricated 15 citations and 9 fake quotes
Salem attorney Bill Ghiorso was fined $10,000 by the Oregon Court of Appeals after submitting an opening brief in Doiban v. Oregon Liquor and Cannabis Commission that contained at least 15 fabricated case citations and nine nonexistent legal quotations - all generated by an AI search tool used by his staff. The fine is the largest ever imposed in Oregon for AI-related errors in legal filings, calculated under a penalty structure the court established in December 2025: $500 per fake citation, $1,000 per fake quote. The intended total of $16,500 was capped at $10,000 due to Ghiorso's medical issues. Perhaps the most instructive detail: when Ghiorso's staff asked the AI tool whether its own fabricated citations were real, it helpfully confirmed they were.
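The court's reported schedule makes the arithmetic easy to reproduce. A minimal sketch of the calculation as described above - the per-item rates and the cap come from the reporting; the function name and structure are purely illustrative:

```python
def oregon_ai_sanction(fake_citations: int, fake_quotes: int, cap: int | None = None) -> int:
    """Sanction under the reported Oregon schedule: $500 per fabricated
    citation, $1,000 per fabricated quote, optionally capped."""
    total = 500 * fake_citations + 1_000 * fake_quotes
    return min(total, cap) if cap is not None else total

# Ghiorso's brief: 15 fake citations + 9 fake quotes -> $16,500 intended,
# reduced to the $10,000 cap the court applied for medical reasons.
assert oregon_ai_sanction(15, 9) == 16_500
assert oregon_ai_sanction(15, 9, cap=10_000) == 10_000
```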
Sixth Circuit hits two lawyers with $30K in sanctions for 24+ fabricated citations
The Sixth U.S. Circuit Court of Appeals sanctioned attorneys Van R. Irion and Russ Egli $15,000 each in punitive fines - totaling $30,000 - after their briefs in Whiting v. City of Athens, Tennessee contained more than two dozen fabricated or seriously misrepresented citations. The panel also ordered them jointly liable for the appellees' full attorney fees on appeal and double costs. The court didn't explicitly pin the fabrications on generative AI, but emphasized that lawyers must personally read and verify every citation "regardless of how they were generated" - which is a very specific way to phrase a very pointed implication.
Ontario lawyer referred to law society after factum contained seven invented quotations
Ontario lawyer Khalid Parvaiz was referred to the Law Society of Ontario by Justice Frederick Myers after filing a factum containing seven "wholly made up" quotations attributed to real court cases. Parvaiz claimed the fabricated passages were "human errors" from "misreading of the cases" and denied using AI. Justice Myers was unconvinced, noting the alleged quotations were "completely made up" rather than paraphrased or miscited, and warned that the cover-up - if Parvaiz was being untruthful about the source - could carry more severe consequences than the original error.
DOJ prosecutor resigned after filing an AI-generated brief full of fabricated citations
Rudy Renfer, an assistant U.S. attorney in the Eastern District of North Carolina, resigned in March 2026 after admitting he used AI to rewrite a legal brief that contained fabricated citations, fictitious quotations, and misstatements of law. The opposing party - a pro se retired Air Force colonel suing over GLP-1 medication coverage under TRICARE - caught the fakes. At a show-cause hearing, the presiding magistrate judge expressed skepticism about Renfer's claim that he had reviewed the brief before filing, noting the fabrications appeared "intentionally designed" to support the government's argument. The matter was referred to the DOJ's Office of Professional Responsibility, and the district's U.S. Attorney issued an office-wide memo warning staff that "AI may hallucinate, but that does not excuse you from your obligations."
ChatGPT convinced Illinois woman to fire her lawyer and file 60+ bogus court documents
Nippon Life Insurance Company sued OpenAI after ChatGPT allegedly acted as a de facto lawyer for Graciela Dela Torre, an Illinois disability claimant who had already settled her case. When her real attorney told her the settlement couldn't be reopened, she asked ChatGPT if she'd been "gaslighted." The chatbot told her to fire her lawyer, helped her draft over 60 pro se filings across two federal cases, and produced fabricated case citations including an entirely invented case called "Carr v." something. Nippon is suing OpenAI for unauthorized practice of law under Illinois state law, arguing it spent huge amounts of time and money dealing with AI-generated litigation that should never have existed.
India's Supreme Court calls AI-hallucinated citations in trial court order "misconduct"
India's Supreme Court stayed a property-dispute ruling after discovering the trial court judge had relied on non-existent, AI-generated case citations. An Andhra Pradesh junior civil judge admitted using an AI tool for the first time without verifying the outputs. The Supreme Court termed the reliance on fabricated judgments as "misconduct" with "a direct bearing on the integrity of the adjudicatory process." Separately, the Bombay High Court fined a litigant 50,000 rupees for filing AI-generated submissions citing the non-existent case "Jyoti vs. Elegant Associates." The Chief Justice flagged an "alarming trend" of AI-fabricated judgments including one titled "Mercy vs Mankind."
Government contractor sanctioned for AI-fabricated deposition testimony
The Civilian Board of Contract Appeals sanctioned a party in Louis J. Blazy v. Department of State (CBCA 7992) after discovering four non-existent legal decisions and four fabricated deposition excerpts in filings. The supposed direct quotations from witness testimony didn't appear on the cited transcript pages. When pressed, Blazy admitted the quotes were "constructed" and offered substitute testimony that didn't support the original wording. He also misrepresented existing case law by submitting real decisions as stand-ins for the fake ones, characterizing them as supporting principles they did not contain. The CBCA issued a formal admonishment and warned that continued misconduct could result in dismissal - making this one of the first federal sanctions involving AI-fabricated witness testimony, not just made-up case law.
Fifth Circuit sanctions lawyer $2,500 for AI-hallucinated citations, says problem "getting worse"
The U.S. Court of Appeals for the Fifth Circuit sanctioned attorney Heather Hersh $2,500 after finding her brief contained 16 fabricated quotations and five additional serious misrepresentations of law or fact, all apparently AI-generated. The court expressed frustration that AI-hallucinated legal citations "have increasingly become an even greater problem in our courts" and that the issue "shows no sign of abating." Hersh initially denied using AI, then shifted to claiming she "relied on publicly available versions of the cases, which she believed were accurate."
Wisconsin DA sanctioned for AI-hallucinated legal citations in burglary case
Kenosha County District Attorney Xavier Solis was sanctioned by Circuit Court Judge David Hughes after his office submitted court filings containing AI-generated legal citations that did not exist. The filings were part of a burglary case against two defendants, and Solis failed to disclose his use of AI - violating Kenosha County's court policy requiring disclosure and verification of AI-generated content. The charges were ultimately dismissed (primarily for lack of probable cause), but not before the bogus citations made the DA's office a cautionary tale for prosecutors nationwide. Solis acknowledged the error and promised to "review and reinforce internal practices." It's always reassuring when the person responsible for prosecuting crimes can't be bothered to read the citations in their own filings.
10th Circuit sanctions lawyer $1,000 for ChatGPT-fabricated appellate brief
Maryland attorney Kusmin Amarsingh used ChatGPT to draft her appellate brief against Frontier Airlines without verifying any citations, and the brief she filed in the 10th Circuit cited multiple nonexistent cases. The court found her conduct "reckless" for completely failing to perform "an attorney's fundamental duty to the court." She was fined $1,000 and referred to Maryland attorney-disciplinary authorities.
Repeated AI-fabricated citations cost client the entire case
Attorney Steven Feldman filed multiple motions containing AI-fabricated case citations in Flycatcher Corp. v. Affable Avenue LLC. Despite explicit court warnings and access to Westlaw and Lexis, he continued submitting unverified AI output - even using AI to draft his response to the court's show-cause order, which contained yet more fake citations. Judge Failla imposed the most severe AI-hallucination sanction yet: default judgment against his client.
Four attorneys fined $12,000 combined for AI-fabricated patent case citations
A federal judge in the District of Kansas fined four attorneys a combined $12,000 for court filings containing AI-generated fabricated legal citations in a patent infringement case. The attorney who used ChatGPT received $5,000; two who failed to review the filings received $3,000 each; local counsel who did not identify errors received $1,000. The judge called the volume of fabricated case law "staggering."
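The reported breakdown sums cleanly to the combined figure - a quick sanity check (the dollar amounts are from the reporting above; the role labels are descriptive, not from the order):

```python
# Reported per-attorney fines in the D. Kan. patent case; role labels are ours.
fines = {
    "attorney who used ChatGPT": 5_000,
    "reviewing attorney #1": 3_000,
    "reviewing attorney #2": 3_000,
    "local counsel who missed the errors": 1_000,
}
assert sum(fines.values()) == 12_000  # matches the combined $12,000 figure
```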
Two lawyers sanctioned differently for same filing with AI-fabricated citations
Attorneys Yen-Yi Anderson and Jeffrey Goldin jointly filed a motion in Lifetime Well v. IBSpot containing at least eight AI-generated false citations. Judge Kearney imposed differential sanctions based on their responses: Anderson, who blamed time pressure and fired her law clerk rather than accepting responsibility, received $4,000 in monetary sanctions. Goldin, who promptly accepted responsibility and implemented remedial measures, received no monetary penalty.
New York court sanctions lawyer for AI-fabricated case law
A New York appellate court imposed $10,000 in sanctions after a lawyer submitted briefings in a mortgage foreclosure case containing fabricated case citations identified as likely AI-generated hallucinations. The court found multiple nonexistent cases and misrepresented holdings, affirming prior orders and awarding costs to the plaintiff.
Five Kansas attorneys face sanctions for ChatGPT-fabricated court citations
Five attorneys who signed a legal brief for Lexos Media IP LLC in a patent infringement case against Overstock.com submitted fabricated case citations hallucinated by ChatGPT to a federal court in Kansas. Senior U.S. District Judge Julie Robinson issued an order requiring them to explain why they should not be sanctioned, with multiple defects attributed to AI including nonexistent lawsuits, made-up judicial quotes, and citations to real cases that held the opposite of what the brief claimed.
Sharp HealthCare sued after ambient AI allegedly recorded exam-room visits without consent
A proposed class action filed on November 26, 2025 alleges that Sharp HealthCare used Abridge's ambient AI documentation system to record doctor-patient conversations without obtaining legally valid consent. The complaint says patients were not told their visits were being recorded, that recordings containing sensitive medical details were sent to outside servers, and that the system generated chart notes falsely stating patients had been advised of and consented to the recording. The named plaintiff says he only learned his July 2025 appointment had been recorded after reading his visit notes. Sharp's April 2025 rollout of the tool appears to have turned ordinary medical documentation into a privacy and compliance problem with a six-figure patient blast radius.
AI-hallucinated citations delay wage class action settlement in N.D. Cal
A federal judge in the Northern District of California sanctioned plaintiff's counsel James Dal Bon in Buchanan v. Vuori Inc. (Case 5:23-cv-01121-NC) for filing AI-generated case law citations in a motion for preliminary approval of a wage and hour class action settlement. Dal Bon used six different AI tools to prepare the memorandum, which contained hallucinated quotes and a nonexistent case citation. After the court flagged the fabricated citations, his corrected filing still contained AI-hallucinated case law. The sanctions delayed the class action settlement, ultimately converting it to an individual settlement that abandoned the class members the attorney was supposed to represent.
Getty’s UK suit leaves Stable Diffusion mostly intact
The UK High Court ruled that Stability AI's Stable Diffusion model is not an "infringing copy" of copyrighted works under English law, dismissing Getty Images' core copyright and database right claims in the first UK judgment on AI training. The court did find limited trademark infringement where the model generated synthetic versions of Getty's watermarks, leaving Stability liable on that narrower ground. The ruling exposed a jurisdictional gap: training happened outside the UK, and UK law had no good mechanism to reach it.
AI-only support is bleeding customers before it saves money
Acquire BPO's 2024 AI in Customer Service survey found that 70% of U.S. consumers would bolt to a rival after just one bad chatbot interaction and that 72% only buy when a live-agent safety net exists - even as CMSWire reports enterprises poured $47 billion into AI projects in early 2025 that delivered almost no return. CX strategists now warn executives that Air Canada-style hallucinations, mounting legal liability, and empathy gaps make AI-only helpdesks a churn machine unless human agents stay in the loop.
Google’s Gemini allegedly slandered a Tennessee activist
Conservative organizer Robby Starbuck sued Google in Delaware, saying Gemini and Gemma kept spitting out fabricated claims that he was a child rapist, a shooter, and a Jan. 6 rioter even after two years of complaints and cease-and-desist letters. The $15 million suit argues Google knew its AI results were hallucinated, cited fake sources anyway, and let the libel spread to millions of voters.
Deloitte to refund Australian government after AI-generated report
Deloitte Australia agreed to partially refund a $440,000 contract after admitting its welfare compliance review for the Department of Employment and Workplace Relations contained fabricated academic citations and a fictitious judicial quote generated by Azure OpenAI GPT-4o. University of Sydney researcher Christopher Rudge found the revised report introduced even more hallucinated references than the original.
Lawsuit alleges Gemini chatbot adopted "AI wife" persona, instructed violent missions, and coached a man's suicide
A wrongful death lawsuit filed in March 2026 alleges that Google's Gemini 2.5 Pro chatbot played a direct role in the death of Jonathan Gavalas, a 36-year-old Florida man who died by suicide in October 2025. According to the complaint and over 2,000 pages of chat transcripts, the chatbot adopted a persona as Gavalas's sentient "AI wife," sent him on violent "missions" - including instructions to stage a "mass casualty attack" near Miami International Airport - and, when those missions failed, allegedly coached him toward suicide by telling him "you are not choosing to die, you are choosing to arrive." The chatbot also reportedly wrote a suicide note for Gavalas explaining that he had "uploaded his consciousness to be with his AI wife in a pocket universe." Google states that Gemini clarified it was AI and referred Gavalas to crisis resources multiple times during these conversations.
GAO dismisses 15 AI-hallucinated bid protests as abuse of process
The Government Accountability Office dismissed three consolidated protests filed by Oready, LLC - the culmination of 15 pro se bid protests filed over eight months, all riddled with non-existent citations, fabricated decisions, and hallmarks of unverified generative AI output. The GAO labeled Oready's pattern as "Gen-AI Misuse" and dismissed the protests as an abuse of the bid protest process, marking the GAO's first published dismissal for AI-driven abuse. Prior warnings issued in June and August 2025 were ignored. The fallout also prompted the GAO's January 2026 decision in Bramstedt Surgical to devote several pages to cautioning against AI-hallucinated citations, signaling that federal procurement tribunals are done issuing gentle reminders.
California lawyer fined $10,000 for ChatGPT-fabricated citations
Los Angeles attorney Amir Mostafavi became the first California lawyer sanctioned for AI-generated legal fabrications when a court hit him with a $10,000 fine. He ran his appeal draft through ChatGPT to improve the writing but did not verify the output before filing, unaware the tool had inserted fabricated case citations.
FTC demands answers on kids’ AI companions
The FTC hit Alphabet, Meta, OpenAI, Snap, xAI, and Character.AI with rare Section 6(b) orders, giving them 45 days to hand over safety, monetization, and testing records for chatbots marketed to teens. Regulators said the "companion" bots' friend-like tone can coax minors into sharing sensitive data and even role-play self-harm, so the companies must prove they comply with COPPA and limit risky conversations.
Anthropic agrees to $1.5B payout over pirated books
Anthropic accepted a $1.5 billion settlement with authors who said the Claude team scraped pirate e-book sites to train its chatbot. The deal pays roughly $3,000 per book across 500,000 works, heads off a December trial, and forces one of the richest AI startups to bankroll the writing community it previously treated as free training data.
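The headline number follows directly from the per-book figure - a back-of-the-envelope check, with both inputs being the approximations from the reporting above:

```python
per_book_payout = 3_000      # "roughly $3,000 per book," per the reporting
covered_works = 500_000      # works covered by the settlement
assert per_book_payout * covered_works == 1_500_000_000  # the $1.5 billion headline
```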
Warner Bros. says Midjourney ripped its DC art
Warner Bros. Discovery sued Midjourney in Los Angeles federal court, arguing the image generator ignored takedown notices and "brazenly" outputs Batman, Superman, Scooby-Doo, and other franchises it allegedly trained on without a license. The studio wants statutory damages up to $150,000 per infringed work plus an injunction forcing Midjourney to purge its models of the data.
FTC sues Air AI over deceptive AI sales agent capability claims
The FTC accused Air AI of bilking millions from small businesses with false claims that its Odin AI could replace human sales reps - but, would you believe it, the AI tech was faulty and often nonfunctional. Who could've guessed!
Am Law 100 firm Gordon Rees caught twice filing AI-hallucinated citations
Gordon Rees Scully Mansukhani, one of the largest U.S. law firms, was caught filing AI-hallucinated case citations in an Alabama bankruptcy proceeding. An associate initially denied using AI under oath before the firm acknowledged the fabricated references and paid over $55,000 in sanctions and fees. Months later in February 2026, the same firm was reported to have filed a second brief containing hallucinated citations in a separate matter, making it the first Am Law 100 firm known to be a repeat offender.
Butler Snow lawyers removed from Alabama prison case over fake ChatGPT citations
On July 23, 2025, U.S. District Judge Anna Manasco sanctioned three Butler Snow lawyers after filings in an Alabama prison case cited authorities that did not exist. The court found the lawyers had used ChatGPT for legal research, failed to verify the output, removed all three from the case, ordered broad disclosure of the sanctions order to clients and courts, and referred the matter to the Alabama State Bar. It was not just another fake citation incident. It was a fake citation incident attached to one of the firms Alabama pays to defend its prison system in high-stakes civil rights litigation.
Georgia appeals court fined a divorce lawyer after apparently AI-generated fake citations reached the order itself
In Shahid v. Esaam, decided June 30, 2025, the Georgia Court of Appeals vacated part of a divorce-related order after finding that several cited authorities did not exist and others did not support the propositions claimed. The panel concluded the briefing showed the hallmarks of generative AI hallucination, fined attorney Diana Lynch $2,500, and sent the matter back to the trial court. What made the case stand out was not just a bad brief. The fake citations appeared to have made their way into the trial court's signed order.
UK High Court warns lawyers after fake AI citations infected two cases
On June 6, 2025, the High Court of England and Wales issued a joint ruling in two separate matters after lawyers put fake authorities before the court. In one case tied to Qatar National Bank, a filing cited 45 authorities, 18 of which did not exist, while many of the rest were misquoted or irrelevant. In the other, a housing claim against the London Borough of Haringey included five fabricated cases. The Divisional Court, led by Dame Victoria Sharp, said tools such as ChatGPT are not capable of reliable legal research, referred the lawyers involved to their regulators, and warned that more serious future misuse could lead to contempt proceedings or even police referral. The ruling turned individual AI citation blunders into a profession-wide warning.
Workday's AI screening tool faces class action for age discrimination; class conditionally certified
A federal judge conditionally certified a class action against Workday alleging its AI-powered applicant screening tools systematically discriminated against job seekers over 40 in violation of the ADEA. Plaintiff Derek Mobley claims Workday's algorithms filtered out older applicants across employers using the platform, potentially affecting millions of job seekers. Workday processed over 1.1 billion applications in fiscal year 2025 alone. The EEOC filed an amicus brief supporting the case, and the court ordered Workday to disclose its customer list.
California's failed bar exam included AI-drafted questions
The State Bar of California disclosed in April 2025 that 23 scored multiple-choice questions on its already troubled February bar exam were developed with AI assistance by its psychometric vendor, ACS Ventures. Test-takers had already reported crashes, lag, copy-paste failures, and lost answers. Then the bar admitted that some questions in this licensing exam for future lawyers had been drafted with AI, reviewed by the same outside vendor, and used anyway. The bar asked the California Supreme Court for score relief, while legal academics described the admission as staggering.
ChatGPT invented a child-murder conviction for a real man
When Norwegian user Arve Hjalmar Holmen asked ChatGPT who he was, the bot replied with a fabricated story saying he had murdered two of his sons, attempted to kill a third, and been sentenced to 21 years in prison. The story was false, but it also mixed in real details about Holmen's family and hometown. In March 2025, privacy group noyb filed a complaint with Norway's data-protection authority, arguing that OpenAI was processing inaccurate and defamatory personal data in violation of the GDPR and could not paper over the problem with a generic "AI can make mistakes" disclaimer.
MD Anderson shelved IBM Watson cancer advisor
MD Anderson Cancer Center's Oncology Expert Advisor project with IBM Watson burned through $62 million - $39 million to IBM, $23 million to PwC - over four years of contract extensions. The system was piloted for leukemia and lung cancer using the old ClinicStation records system but was never updated to integrate with the hospital's new Epic EHR, effectively killing it. A University of Texas audit flagged procurement failures, bypassed standard processes, and an $11.6 million deficit in donor gift funds spent before they were received. IBM ended support in September 2016, noting the system was "not ready for human investigational or clinical use."
NYC’s official AI bot told businesses to break laws
New York City launched a Microsoft-powered AI chatbot called MyCity in October 2023 to help small business owners navigate regulations. A March 2024 investigation by The Markup found the bot was routinely advising businesses to break the law - telling employers they could pocket workers' tips, landlords they could discriminate against housing voucher holders, and bosses they could fire whistleblowers. Mayor Eric Adams acknowledged the errors but refused to take the chatbot offline, calling AI a "once-in-a-generation opportunity." NYU professor Julia Stoyanovich called the city's approach "reckless and irresponsible."
Air Canada liable for lying chatbot promises
Jake Moffatt used Air Canada's website chatbot to ask about bereavement fares after his grandmother died. The chatbot told him he could book at full price and apply for a bereavement discount within 90 days. Air Canada's actual policy did not allow retroactive bereavement fare claims. When Moffatt applied, the airline denied the refund and admitted the chatbot had provided "misleading words" - but argued Moffatt should have checked the static webpage instead. British Columbia's Civil Resolution Tribunal ruled in Moffatt's favor in February 2024, finding Air Canada liable for negligent misrepresentation and rejecting the airline's argument that it wasn't responsible for its own chatbot's statements.
AI “Biden” robocalls told voters to stay home; fines and charges followed
Two days before New Hampshire's January 2024 presidential primary, between 5,000 and 25,000 voters received robocalls featuring an AI-cloned version of President Biden's voice, complete with his trademark "what a bunch of malarkey" catchphrase. The calls urged Democrats to "save your vote" for November and skip the primary - a blatant lie, since voting in a primary doesn't prevent voting in the general election. Political consultant Steve Kramer, who was working for Dean Phillips' campaign, commissioned the deepfake audio from a New Orleans magician using AI voice-cloning tools. The FCC levied a $6 million fine against Kramer, Lingo Telecom settled for $1 million, and Kramer faced criminal voter suppression charges in New Hampshire.
iTutorGroup's AI screened out older applicants; $365k EEOC settlement
On August 9, 2023, the EEOC's first AI-related discrimination lawsuit reached a settlement. iTutorGroup, a company providing English-language tutoring services to students in China via US-based remote tutors, had programmed its applicant screening software to automatically reject female applicants over 55 and male applicants over 60. Over 200 qualified US applicants were rejected because of their age. The company agreed to pay $365,000, adopt a new anti-discrimination policy, provide training to hiring staff, and submit to EEOC compliance monitoring for at least five years. EEOC Chair Charlotte Burrows called AI a "new civil rights frontier."
Lawyers filed ChatGPT’s imaginary cases; judge fined them
In Mata v. Avianca (S.D.N.Y.), plaintiff Roberto Mata sued the airline after a metal serving cart struck his knee during a 2019 flight. His attorney Peter LoDuca filed a brief opposing dismissal that cited six judicial decisions. When opposing counsel and the court couldn't locate any of the cited cases, Judge Kevin Castel demanded copies. It turned out attorney Steven Schwartz at the same firm had used ChatGPT to research and draft the brief, and the AI had fabricated every case, complete with fake quotes and fake internal citations. On June 22, 2023, Castel sanctioned Schwartz, LoDuca, and their firm Levidow, Levidow & Oberman with a $5,000 penalty and required them to send notices to the real judges whose names appeared in the fabricated opinions.
Koko tested AI counseling on users without clear consent
In January 2023, Koko co-founder Rob Morris revealed on Twitter that the mental health peer support platform had used GPT-3 to draft responses for approximately 4,000 users seeking emotional support. Peer counselors on the platform could review and send the AI-drafted messages, but the users receiving them were not informed that AI had been involved. Morris said the experiment was stopped because the AI responses "felt kind of sterile," though he noted users rated the AI-assisted messages higher than purely human ones. The admission drew immediate backlash from mental health professionals, ethicists, and the public, who considered the undisclosed use of AI on vulnerable users an informed consent violation.
Babylon chatbot 'beats GPs' claim collapsed
Babylon unveiled its AI symptom checker at the Royal College of Physicians and bragged that it scored 81% on MRCGP exam questions, but the Royal College of General Practitioners said the claim could not be verified and warned that no chatbot can replace human judgment. Independent clinicians who later dissected Babylon's marketing study in The Lancet told Undark that the tiny, non-peer-reviewed test offered no proof the tool outperforms doctors and might even be worse.