California lawyer fined $10,000 for ChatGPT-fabricated citations
Los Angeles attorney Amir Mostafavi became the first California lawyer sanctioned for AI-generated legal fabrications when a court hit him with a $10,000 fine. He ran his appeal draft through ChatGPT to improve the writing but did not verify the output before filing, unaware the tool had inserted fabricated case citations.
Incident Details
Amir Mostafavi is a Los Angeles attorney who filed an opening brief in the Noland appeal before Division Three of California's 2nd District Court of Appeal. The brief contained 23 case citations. Twenty-one of them were fabricated.
Not misquoted. Not taken out of context. Fabricated. These weren't real cases with inaccurate page numbers or shifted holdings. They were citations to cases that did not exist, containing quoted language that no judge had ever written. The brief's entire foundation of legal authority was invented by ChatGPT.
How It Happened
Mostafavi told the court he had used ChatGPT to "improve the writing" of his appeal draft. He said he did not read the text generated by the AI before submitting the brief. He told the court he was unaware that ChatGPT might insert case citations or fabricate material.
This explanation deserves a moment of consideration. An attorney used a consumer AI tool on a legal filing, did not review the output, and submitted it to an appellate court under his name and bar number. The tool, which has no access to legal databases and generates plausible-sounding text rather than verified legal authority, inserted 21 fake case quotes into a 23-citation brief. The attorney never checked whether any of the cases existed.
The court issued a blistering opinion. CalMatters reported that the $10,000 fine imposed on September 22, 2025, appeared to be the largest penalty issued over AI fabrications by a California court at the time. The judges found that 21 of the 23 quotes from cases cited in the opening brief were made up - not merely incorrect, but wholly invented.
The Fine and Its Aftermath
Mostafavi paid the $10,000 within days of the order. He subsequently accepted a stayed suspension from the State Bar of California. According to State Bar Court filings, he implemented changes to his citation and legal writing practices, including "independent verification of all case citations generated by [artificial intelligence] tools in legal research and writing" and a "commitment to ongoing education regarding use of [such] tools."
The State Bar Court judge who recommended discipline was explicit about the signal being sent: "A stayed suspension will adequately serve the purposes of attorney discipline and simultaneously inform the public and members of the State Bar that the submission of briefs to the court replete with fabricated legal authority, caused by the attorney's misuse of AI tools, may, absent significant mitigation, result in significant discipline."
The language was carefully calibrated. "Absent significant mitigation" means the next attorney who does the same thing without Mostafavi's degree of cooperation can expect a heavier penalty. The State Bar was drawing a line.
The Problem Is Getting Worse
Mostafavi's case arrived against a background of accelerating AI citation fraud in courts. Damien Charlotin, a researcher who maintains a global database tracking cases of fabricated AI citations in court filings, told CalMatters that the number of filings containing false case law had surged from just a few per month to several each day. Charlotin's database, publicly available on his website, catalogs hundreds of cases across jurisdictions where generative AI produced hallucinated citations that ended up in legal filings.
The acceleration is driven by the same dynamic that caught Mostafavi: ChatGPT and similar tools generate text that looks like legal writing, complete with case names, reporter citations, and quoted holdings. To a user who doesn't know the citations are fake - or who doesn't check - the output appears authoritative. The AI produces it with the same confident formatting it uses for everything else, offering no indication that the cited cases are inventions.
California's Emerging Response
Mostafavi was the first California lawyer sanctioned specifically for AI-generated fabrications, but the state had already been dealing with the problem. In May 2025, a U.S. District Court judge in California ordered two law firms to pay $31,100 in fees to defense counsel and the court for costs associated with "bogus AI-generated research." That judge described feeling misled, mentioned nearly citing fabricated material in a judicial order, and said, "Strong deterrence is needed to make sure that attorneys don't succumb to this easy shortcut."
CalMatters reported that California was considering how to systematize its response, noting approaches from other states: temporary suspensions for attorneys who file AI-fabricated citations, mandatory courses on ethical AI use, and requirements for sanctioned attorneys to teach law students how to avoid the same mistakes. None of these had been formally adopted at the time of Mostafavi's case, leaving individual judges to set the standard case by case.
Why This Keeps Happening
Every AI-hallucinated citation case in the legal system follows a nearly identical pattern. An attorney uses ChatGPT or a similar general-purpose language model as a research tool. The model generates text that resembles legal analysis, including citations. The attorney does not verify the citations against an actual legal database (Westlaw, LexisNexis, or even a free case search). The fabricated citations are filed with the court. Someone - opposing counsel, a law clerk, a judge - checks the citations and discovers they don't exist.
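The verification step that keeps failing in this pattern is mechanical enough to sketch. The following is a hypothetical illustration, not a tool used in any of these cases: it pulls citation-like strings out of a brief with a simple regex so each one can be checked by hand against a real legal database. The regex and sample text are illustrative assumptions and nowhere near a complete citation grammar; crucially, a script like this can only produce a checklist - it cannot confirm that a case exists or says what the brief claims.

```python
import re

# A party name: one or more capitalized words (illustrative, not exhaustive).
PARTY = r"[A-Z][A-Za-z.'&\-]*(?: [A-Z][A-Za-z.'&\-]*)*"

# "Smith v. Jones, 123 Cal.App.4th 456" style citations (hypothetical pattern).
CITATION_RE = re.compile(rf"{PARTY} v\. {PARTY}, \d+ [A-Za-z0-9.]+ \d+")

def extract_citations(brief_text: str) -> list[str]:
    """Return every citation-like string found in the brief text."""
    return [m.group(0) for m in CITATION_RE.finditer(brief_text)]

# Sample brief text (invented for illustration).
brief = (
    "As held in Smith v. Jones, 123 Cal.App.4th 456, the standard applies. "
    "See also Doe v. Roe, 45 Cal.4th 789."
)

for cite in extract_citations(brief):
    # Each line is a to-do item: confirm the case exists and supports the
    # quoted proposition before filing. Only a human (or a real legal
    # database lookup) can do that; the script merely lists what to check.
    print("VERIFY:", cite)
```

The point of the sketch is how small the missing step is: the attorneys in these cases never got as far as even listing their citations for verification, let alone looking them up.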
The core mistake is treating a text generation tool as a legal research tool. ChatGPT does not search case law. It generates sequences of words that are statistically likely to appear in legal documents, which means it produces citation formats, case names, and quoted language that look correct but are not drawn from any actual database of judicial decisions. The model has no mechanism for distinguishing between a real case and a plausible-sounding one.
Mostafavi's particular variation added an extra layer. He didn't use ChatGPT for research; he used it for writing. He fed his draft into the tool, asking it to improve the prose. The tool, without being asked, inserted citations. He didn't notice because he didn't read the output. The distinction between "used AI for research" and "used AI for writing and it added research on its own" matters because it reveals how AI-generated fabrications can enter legal filings even when the attorney isn't deliberately trying to use AI for case law research.
The Client's Case
Lost in the discussion of precedent and sanctions: Mostafavi was representing a client. The Noland appeal was a real case with real stakes for the person who hired a lawyer and expected competent representation. An opening brief with 21 out of 23 fabricated citations is not a brief that can withstand scrutiny from opposing counsel or the appellate bench. The client's appellate position was compromised before the court considered the merits.
The $10,000 fine, the stayed suspension, and the mandatory practice changes were consequences for Mostafavi. For his client, the consequence was a legal filing that couldn't support its own arguments.