Fifth Circuit sanctions lawyer $2,500 for AI-hallucinated citations, says problem "getting worse"

The U.S. Court of Appeals for the Fifth Circuit sanctioned attorney Heather Hersh $2,500 after finding her brief contained 16 fabricated quotations and five additional serious misrepresentations of law or fact, all apparently AI-generated. The court expressed frustration that AI-hallucinated legal citations "have increasingly become an even greater problem in our courts" and that the issue "shows no sign of abating." Hersh initially denied using AI, then shifted to claiming she "relied on publicly available versions of the cases, which she believed were accurate."

Incident Details

Severity: Facepalm
Company: FCRA Attorneys / Jaffer & Associates
Perpetrator: AI assistant
Incident Date:
Blast Radius: First known federal appeals court sanction for AI hallucinations; court signals escalating judicial frustration nearly three years after the first high-profile case

The Brief That Read Like Satire

The case itself was a consumer protection dispute. Robert Fletcher sued Experian Information Solutions and Bridgecrest Credit Company for identity theft under the Fair Credit Reporting Act (FCRA). Jaffer & Associates, the law firm handling Fletcher's case, had already run afoul of the lower court: the district court had sanctioned the firm roughly $33,000 in attorneys' fees after finding that lead attorney Jaffer "had not done even a minimal investigation of Fletcher's claims before filing a suit seeking damages that were barred by law, or based on false factual allegations."

The Fifth Circuit had actually vacated those sanctions on procedural grounds, giving Jaffer another chance. At that point, Heather Hersh - managing attorney at Jaffer & Associates - submitted a sworn declaration affirming that the firm had "taken measures to reinforce compliance with professional and ethical standards, including discussions regarding diligence, candor, and adherence to court rules."

Against that backdrop, Hersh filed a reply brief in the Fifth Circuit that was, as Law360 characterized it, "riddled with" fabricated content. The three-judge panel identified 16 fabricated quotations - passages attributed to case law or legal sources that simply don't exist - and five additional serious misrepresentations of law or fact. The court concluded that Hersh had used artificial intelligence to draft "a substantial portion, if not all" of the brief and had failed to verify any of the content the AI generated.

The Evasion

When the court noticed the problems and issued a show-cause order asking Hersh to explain, the initial response was not illuminating. Hersh said she "relied on publicly available versions of the cases, which she believed were accurate." This is a creative way to describe AI-generated hallucinations - technically, ChatGPT is publicly available, and the cases it invents do look like real cases to someone who doesn't check.

The Fifth Circuit was not impressed. The panel requested further information, at which point Hersh made what the Texas Lawbook described as "a grudging admission": she had used generative AI to "help organize and structure" her argument. The progression from denial to partial admission tracks a pattern courts have now seen multiple times: the attorney first claims the work was original, then acknowledges AI was used only in a limited capacity, and finally the true extent of the AI's contribution becomes apparent from the sheer volume of fabricated content.

The court's opinion identified four key issues: whether counsel used generative AI to draft the brief, whether counsel met her duty to verify citations and quotations, whether counsel was candid with the court in responding to the show-cause order, and what sanctioning authority applies when AI misuse corrupts appellate advocacy.

The Sanction and Its Basis

The Fifth Circuit ordered Hersh to pay $2,500 to the clerk of court within 30 days. The sanction rested on two legal foundations: Federal Rule of Appellate Procedure 46(c), which provides for discipline of attorneys for "conduct unbecoming a member of the bar" or violation of court rules, and the court's inherent power to sanction abuse of the judicial process.

Legal commentators noted the significance of the dual basis. The Fifth Circuit didn't need AI-specific rules to impose sanctions - it used existing authority designed to address attorney misconduct generally. As one analysis put it, the ruling confirms that "courts need not wait for AI-specific rules to police AI-driven misinformation." The tools already exist; what's new is the frequency with which they're being deployed against the same category of offense.

The $2,500 amount is relatively modest as sanctions go, particularly given that Hersh had already sworn to the court that her firm had strengthened its compliance measures. But the monetary penalty was arguably secondary to the opinion itself, which the Fifth Circuit used as a vehicle for expressing its growing frustration with the phenomenon.

"Getting Worse" and "No Sign of Abating"

The opinion's most quoted language was directed not at Hersh specifically but at the broader trend. The court stated that AI-hallucinated legal citations "have increasingly become an even greater problem in our courts" and that the issue "shows no sign of abating." Coming from a federal appellate court nearly three years after the Avianca ChatGPT case first made international headlines in 2023, the language carried a tone of exhaustion rather than surprise.

The timeline is worth reflecting on. In June 2023, a New York federal judge sanctioned attorneys Steven Schwartz and Peter LoDuca for submitting a brief containing six completely fictitious cases generated by ChatGPT. The incident generated enormous media coverage and widespread awareness that AI chatbots hallucinate legal citations. There was a general assumption that the publicity itself would serve as a deterrent - that once lawyers knew AI could fabricate case law, they would verify before filing.

That assumption was wrong. In the intervening years, courts across the United States have encountered the same problem repeatedly and with growing frequency. The Fifth Circuit's observation that it's "getting worse" suggests that the rate of AI hallucination incidents in legal filings is increasing even as awareness of the risk becomes universal in the legal profession.

The Practical Guidance

The Texas Lawbook noted that the Fifth Circuit's opinion went beyond simply sanctioning Hersh and included practical guidance for attorneys using AI. The court essentially spelled out the minimum standard of care: if you use AI to help draft legal documents, you must independently verify every citation, every quotation, and every factual assertion. The AI's output is not a draft; it's an unverified hypothesis that happens to be formatted like legal writing.

This guidance is both obvious and apparently necessary. The fundamental obligation hasn't changed - attorneys have always been required to ensure the accuracy of their filings. What's changed is that the tools lawyers use now generate plausible-looking fiction with such confidence and fluency that distinguishing it from genuine legal analysis requires deliberate verification rather than professional instinct.

Pattern Recognition

The Hersh case fits a pattern that has become numbingly familiar on The Vibe Graveyard. An attorney uses an AI tool to draft a legal document. The AI fabricates citations. The attorney doesn't verify them. The court notices. The attorney gives evasive responses about the extent of AI use. The court sanctions the attorney and writes an opinion expressing frustration.

What distinguishes this case is the venue. The Fifth Circuit is one of the most influential appellate courts in the federal system, and this appears to be the first federal appeals court sanction specifically for AI-hallucinated content. District courts have been sanctioning attorneys for this since 2023, but the problem reaching the appellate level represents an escalation - it means AI-generated fabrications are making it past not only the drafting attorney but through the entire litigation process up to the federal appellate stage.

For the legal profession, the Fifth Circuit's frustration is a signal. The era of treating AI hallucination incidents as novel oddities is over. Courts view this as a systemic compliance failure, and the judicial patience for attorneys who don't verify AI output is visibly diminishing with each successive case.
