Georgia appeals court fined a divorce lawyer after fake, apparently AI-generated citations reached the order itself


In Shahid v. Esaam, decided June 30, 2025, the Georgia Court of Appeals vacated part of a divorce-related order after finding that several cited authorities did not exist and others did not support the propositions claimed. The panel concluded the briefing showed the hallmarks of generative AI hallucination, fined attorney Diana Lynch $2,500, and sent the matter back to the trial court. What made the case stand out was not just a bad brief. The fake citations appeared to have made their way into the trial court's signed order.

Incident Details

Severity: Facepalm
Company: Diana Lynch
Perpetrator: Attorney
Incident Date:
Blast Radius: Georgia Court of Appeals vacated part of a divorce order, imposed the maximum statutory penalty, and turned one lawyer's filing shortcuts into a published appellate embarrassment

Fake Cases Are Bad; Fake Cases in the Order Are Worse

By the time generative AI citation scandals reached Georgia appellate courts in mid-2025, the usual script was already familiar. A lawyer files a brief. Opposing counsel notices the authorities are imaginary. The court issues a reprimand, a fine, or a lecture about professional responsibility. Everyone acts as if this is still somehow surprising.

Shahid v. Esaam added a nastier variation. The fabricated or irrelevant citations did not stay confined to the lawyer's papers. According to the Georgia Court of Appeals, they appear to have made their way into the trial court's signed order as well.

That detail moves the case out of the general pile of "lawyer trusted chatbot too much" stories and into something more consequential. A hallucinated brief is embarrassing. A hallucinated order is how bad research escapes the filing system and starts wearing judicial authority.

What the Court Said

The Court of Appeals decided Shahid v. Esaam on June 30, 2025. Judge Jeff Watkins, writing for the panel, described a straightforward mess. After the trial court entered a final judgment and decree of divorce, the wife moved to reopen the case and set aside the judgment. In the appellate proceedings that followed, the court examined the legal authorities used to support the husband's position and found serious defects.

The opinion states that half of the cases cited in the relevant order appeared to be "hallucinations" generated by artificial intelligence, and that the rest were either misdescribed or irrelevant to the propositions for which they were offered. When the wife challenged those citations, attorney Diana Lynch responded with another filing that cited nearly a dozen additional authorities that were likewise bogus or irrelevant.

That is a useful reminder that many of these incidents are not a single bad prompt followed by immediate surrender. They often become escalation stories. Someone spots the problem. The lawyer doubles down. The cleanup filing brings in fresh nonsense. The hole gets deeper because the tool still speaks in a confident lawyer voice and the user still has not done the one thing the profession requires, which is to read the cases.

The Appellate Court Was Not Subtle

Georgia's appellate panel did not hedge much. The judges said they were troubled by the bogus authorities and specifically tied the pattern to the sort of hallucinations generative AI systems are known to produce. The opinion also cited Chief Justice John Roberts's 2023 year-end report on the federal judiciary, which had already warned the profession about exactly this problem.

That citation to Roberts was judicial shorthand for "nobody gets to pretend they have not heard of this." By June 2025, the AI-citation problem was established enough that appellate judges could cite the Chief Justice's warning the way they might cite a driver's manual in a DUI case. The risk was common knowledge.

The court imposed a $2,500 penalty on Lynch, the maximum allowed under the relevant Georgia statute, vacated the challenged order, and remanded the matter for reconsideration. The amount itself was not enormous by big-firm sanctions standards. The significance came from the combination of a published appellate opinion, explicit AI-hallucination language, and the implication that bad citations had infected the court's own order.

That last part is what gives the case its particular smell.

How the Contamination Spread

Courts often rely on proposed orders or legal arguments drafted by counsel. That is not new, and it is not inherently improper. The system assumes, however, that the lawyer doing that drafting is handing over real law. If a filing contains hallucinated authorities and the court incorporates them into the order without catching the problem, the lawyer's fabrication borrows the court's credibility.

Once that happens, the damage is no longer limited to counsel's professional embarrassment. The opposing party now has to attack an order that appears facially valid. The appellate court has to unwind something that should never have been signed in the first place. The trial court's time gets burned. The other side's money gets burned. And the eventual written opinion becomes a permanent monument to the fact that the process accepted fiction long enough to make it official.

This is why judges get especially prickly about fake citations. They are not decorative. Citations are the mechanism by which legal authority is transmitted and tested. Break that mechanism and the whole system starts spending time verifying whether reality itself is in the record.

Apparent AI Use Without the Ritual Confession

One reason the Georgia case is useful is that it does not depend on a dramatic confession email or a courtroom apology about ChatGPT. The appellate court could see the pattern from the work itself. Nonexistent cases. Irrelevant cases cited for precise propositions. A cluster of errors that look less like bad Westlaw searches and more like machine-generated confidence.

That matters because the profession's public discussion of AI misuse still relies too heavily on self-reporting. Lawyers get caught, then say yes, a tool was involved, and everyone treats the confession as the interesting part. It is not. The interesting part is that the work product has become recognizable enough that courts can often infer the workflow from the debris field.

Vibe lawyering does not always announce itself. Sometimes it just leaves the same footprints everywhere.

Why This One Belongs in the Graveyard

Plenty of legal AI incidents are mechanically similar. Fake cases. Sanctions. News story. Move on. Shahid v. Esaam stands out because it captured the moment where fabricated legal support crossed from the lawyer's filing into the judiciary's own paper. That is a sharper institutional failure than the usual sanctions order.

It also happened in a divorce matter, which is worth noting because domestic-relations cases are exactly the kind of human-scale litigation where shortcuts do not stay abstract for long. These are not commercial disputes between sophisticated entities that can afford endless motion practice. They are cases about families, money, and judgments that shape ordinary lives. Feeding fake authorities into that pipeline is not just clownish. It is corrosive.

The Georgia Court of Appeals corrected the problem. It vacated the order, imposed the penalty, and put its reasoning on the record. That is the system working eventually. But the opinion also showed how easily AI-like garbage can travel if nobody checks it before the signature line.

Which, by now, is a point courts should not still need to prove.
