Sixth Circuit hits two lawyers with $30K in sanctions for 24+ fabricated citations

The Sixth U.S. Circuit Court of Appeals sanctioned attorneys Van R. Irion and Russ Egli $15,000 each in punitive fines - totaling $30,000 - after their briefs in Whiting v. City of Athens, Tennessee contained more than two dozen fabricated or seriously misrepresented citations. The panel also ordered them jointly liable for the appellees' full attorney fees on appeal and double costs. The court didn't explicitly pin the fabrications on generative AI, but emphasized that lawyers must personally read and verify every citation "regardless of how they were generated" - which is a very specific way to phrase a very pointed implication.

Incident Details

Severity: Facepalm
Company: Irion & Egli (attorneys for Glenn Whiting)
Perpetrator: AI assistant
Incident Date:
Blast Radius: One of the largest federal appellate sanctions for fabricated citations; combined $30K punitive fines plus appellees' full attorney fees and double costs

The Case Behind the Sanctions

The underlying dispute was almost comically mundane for a case that would produce one of the most significant sanctions opinions in the ongoing saga of fabricated legal citations. Glenn Whiting sued the City of Athens, Tennessee, over a 2022 fireworks show and its aftermath. The legal claims were handled by Tennessee attorneys Van R. Irion and Russ Egli, who filed consolidated appeals in the Sixth Circuit.

What followed was a masterclass in how not to appellate brief.

The three-judge panel - Judges Jane B. Stranch, John K. Bush, and Eric E. Murphy - cataloged the damage. The briefs submitted by Irion and Egli on behalf of their client contained more than two dozen fabricated or seriously misrepresented citations. Not a handful of sloppy errors. Not a few cases cited for the wrong proposition. More than twenty-four instances where the citations either didn't exist, said something materially different from what the brief claimed they said, or were otherwise fabricated in ways that the panel methodically documented.

What the Court Found

The Sixth Circuit's opinion walks through the fabrications with the measured patience of someone cataloging exhibits in a fraud case. Citation after citation, the panel identifies briefs that referenced cases that don't exist, quoted language that no court ever wrote, or attributed holdings to decisions that held something entirely different.

The scale of the problem distinguishes this case from the growing list of AI citation sanctions. Most previous cases involved a handful of fabricated citations: the landmark Avianca/Mata case in 2023 had six fictitious citations, and the Fifth Circuit's Hersh case in February 2026 had 16 fabricated quotations. Here, the panel identified more than 24 discrete instances of fabrication or serious misrepresentation across the briefs. At some point, you stop counting individual fabricated citations and start asking whether anything in the brief was real.

The court's opinion was careful in one important respect: it did not explicitly find that the attorneys used generative AI to produce the briefs. The court expressed no finding on the source of the fabrications. But its language was hard to read as anything other than pointed. The panel emphasized that no filing should contain citations that a lawyer has not "personally read and verified, regardless of how they were generated."

That last clause - "regardless of how they were generated" - is doing a lot of work. Courts don't typically add qualifiers about the generation method of citations unless they have a specific generation method in mind. The traditional ways lawyers produce bad citations are sloppy research, citing from secondary sources without checking the original, or deliberate fabrication. Only one recently emerging method produces dozens of nonexistent cases that look plausible and cites them with confidence: large language models.

The Sanctions

The Sixth Circuit ordered each attorney to pay $15,000 in punitive fines to the clerk of court, for a combined total of $30,000. Both attorneys were also held jointly responsible for the appellees' full attorney fees on appeal and double costs.

The $30,000 in punitive fines alone makes this one of the largest sanctions specifically tied to fabricated citations in a federal appellate court. For comparison: the Avianca/ChatGPT lawyers were sanctioned $5,000 each in 2023. The Fifth Circuit sanctioned Heather Hersh $2,500 in February 2026, just weeks before this ruling. A Northern District of California court had issued $12,000 in sanctions in a patent case. The Deutsche Bank case produced $10,000 in sanctions. The Sixth Circuit has meaningfully escalated the financial consequences.

But the punitive fines are only part of the financial exposure. Being jointly liable for the appellees' full attorney fees on appeal and double costs could substantially exceed the $30,000 in fines. Appellate attorney fees for complex litigation can easily run into five figures or more. The total cost to Irion and Egli could be significantly higher than the headline $30,000 number suggests.

The Escalation Pattern

What makes this case significant beyond its facts is the venue and the trend line. This is the Sixth Circuit - one of the thirteen federal appellate courts that sit just below the Supreme Court. Within weeks of each other in early 2026, two different federal circuits - the Fifth in the Hersh case and now the Sixth - issued sanctions opinions for fabricated citations, both accompanied by increasingly frustrated language about the persistence of the problem.

The judiciary's patience is visibly diminishing. In 2023, the Avianca case was treated as a novelty - something shocking enough to generate international headlines. By late 2024, district courts were sanctioning attorneys for AI hallucinations regularly enough that the individual cases barely made the legal trade press. By early 2025, federal magistrates and district judges had developed routine procedures for handling AI citation problems. Now, in 2026, the problem has reached the federal appellate courts, and the sanctions are getting larger.

The trajectory is not encouraging for the legal profession. Each new case arrives with a more exhausted judicial tone. The Fifth Circuit said the problem is "getting worse" and "shows no sign of abating." The Sixth Circuit's opinion, coming just weeks later, imposed penalties more than ten times what the Fifth Circuit ordered - signaling that some courts are done with modest sanctions and moving toward penalties designed to actually deter the behavior.

The Verification Problem

The fundamental failure in every one of these cases is the same: a lawyer submitted citations without reading them. This is not a technologically complex failure. Nobody needs a PhD in machine learning to understand the fix. You open the cited case. You read it. You confirm it says what your brief says it says. If the case doesn't exist, you don't cite it.

The fact that this keeps happening - at increasing rates, according to multiple appellate courts - suggests something more systemic than individual negligence. Either a meaningful number of lawyers have always submitted citations without reading them and AI tools have simply made the volume of fabrication visible, or AI tools are making it so easy to generate plausible-looking briefs that lawyers are adopting a workflow where verification feels optional.

Neither explanation is flattering.

The Sixth Circuit's ruling, combined with the Fifth Circuit's from weeks earlier, establishes that federal appellate courts will impose substantial financial penalties for fabricated citations, whether AI-generated or not. The "regardless of how they were generated" language is simultaneously a legal standard and a warning: if your citations are fake, the court doesn't care whether ChatGPT wrote them, a paralegal made them up, or you hallucinated them yourself. The obligation to verify is unconditional, and the cost of failing to verify is going up.

Where This Goes

Three years after the first AI hallucination sanctions case, the legal profession has not solved this problem. It has, by multiple accounts, gotten worse. Judicial frustration is escalating. Sanctions amounts are increasing. Federal appellate courts are now routinely encountering fabricated citations in briefs filed before them.

The $30,000 in punitive fines here represents a new threshold for AI-era citation sanctions, but anyone tracking this area should expect that threshold to be exceeded before long. Courts are clearly searching for the penalty amount that actually changes behavior. The evidence - more than two dozen fake citations in a single set of briefs, filed in a federal court of appeals, nearly three years after the Avianca case made international news - suggests they haven't found it yet.
