New York court sanctions lawyer for AI-fabricated case law


A New York appellate court imposed $10,000 in sanctions after a lawyer submitted briefs in a mortgage foreclosure case containing fabricated case citations that the court identified as likely AI-generated hallucinations. The court found multiple nonexistent cases and misrepresented holdings, affirmed the orders under appeal, and awarded costs to the plaintiff.

Incident Details

Severity: Facepalm
Company: Law Office of Jean LeTennier
Perpetrator: Legal Counsel
Incident Date:
Blast Radius: $10,000 in sanctions ($5,000 counsel, $2,500 defendant, plus costs); appellate rebuke; case law now cited as precedent for AI citation misconduct.

The Case Background

The underlying dispute was a mortgage foreclosure. In August 2006, Jean LeTennier had borrowed $399,000 from Nexus Financial LLC, secured by a mortgage. He defaulted. Deutsche Bank National Trust Company, as trustee on behalf of J.P. Morgan mortgage holders, pursued foreclosure. The case worked its way through the New York courts and eventually reached the Appellate Division, Third Department.

It was during the appellate proceedings that the AI citation problem surfaced. Submissions filed on behalf of the defendant contained fabricated case citations - cases that did not exist, with holdings that were invented. The court identified these as likely AI-generated hallucinations: the kind of plausible-sounding but entirely fictitious legal authorities that generative AI tools routinely produce when treated as research databases rather than text generators.

The Pattern of Fabrication

What made the LeTennier case noteworthy was not just the presence of fake citations but their persistence. The court placed the defendant on notice that his filings contained nonexistent legal authorities. This should have prompted immediate correction - a careful review of every citation, confirmation that each case existed, and removal of anything that couldn't be verified.

Instead, the fabricated citations continued. The court found that more than half of the fake cases appeared after the defendant had been warned about the problem. His reliance on fabricated legal authorities, in the court's words, "grew more prolific as this appeal proceeded." The situation got worse, not better, even after the defendant knew his filings were contaminated with nonexistent case law.

This escalation pattern is difficult to explain charitably. One possibility is that whoever was generating the citations continued using the same AI tool without implementing any verification step. Another is that the verification process itself was inadequate - checking AI output against AI output, for instance, rather than against actual legal databases. Whatever the reason, the court had little patience for submissions that became more fabricated over time.

The Sanctions

On January 8, 2026, the Appellate Division, Third Department issued its decision in Deutsche Bank National Trust Company v. LeTennier (2026 NY Slip Op 00040), with Justice Fisher writing the opinion. The court affirmed the lower court's orders, leaving the foreclosure judgment in favor of the lender in place, and imposed a total of $10,000 in sanctions.

The sanctions were split: $5,000 against counsel and $2,500 against the defendant, plus costs awarded to the plaintiff. The breakdown reflected the court's view that both the attorney and the client bore responsibility for the fabricated submissions. The attorney had a professional obligation to verify the legal authority cited in filings. The defendant, who was on notice that his filings contained fake cases, had continued to submit them.

First Appellate-Level AI Sanctions in New York

The opinion marked a milestone. Hinshaw & Culbertson, the law firm representing Deutsche Bank, noted that the decision was "the first appellate-level case in New York to address sanctions arising from the misuse of generative AI in legal submissions." Previous AI citation sanctions in New York had come from trial courts. This was the first time an appellate court in the state had formally addressed the problem.

Appellate decisions carry more weight than trial court rulings. They establish binding precedent for lower courts and signal to the legal profession how appellate judges view the issue. A trial court sanction for AI citations is a cautionary incident. An appellate decision sanctioning the same conduct is law that other courts will cite and apply.

Legal commentators at Casemine described the ruling as establishing a "nondelegable duty to verify legal authorities" - meaning the obligation to confirm that cited cases exist cannot be delegated to an AI tool, a paralegal, or anyone else. The attorney's name is on the filing, and the attorney is responsible for every citation it contains.

The Broader Context

By January 2026, AI citation sanctions had become a steady feature of American court dockets. The LeTennier court itself surveyed the growing body of case law on the topic, citing examples from across the country. The sanctions ranged from $500 for three nonexistent cases in a Connecticut district court to $10,000 for 21 fabricated citations and quotations in another case. A Southern Indiana court imposed $6,000 for six nonexistent cases across multiple filings. An Illinois court had imposed close to $60,000 combined against a law firm and individual attorney.

The trend was consistent: sanctions were increasing in both frequency and dollar amounts. Early cases in 2023 and 2024 had produced relatively modest penalties, sometimes accompanied by educational requirements like mandatory continuing legal education on AI usage. By 2026, courts had lost patience with the "I didn't know the AI would do that" defense. The Avianca case had been national news. Dozens of sanctions had been publicized. The legal profession had been put on notice at scale.

The LeTennier case was particularly valuable as precedent because it involved repeated fabrication after being warned. Courts evaluating future AI citation misconduct cases would be able to point to LeTennier as the canonical example of what happens when a party continues submitting fabricated authorities after being caught. The escalation - getting worse after being warned - eliminated any plausible argument about innocent mistake or one-time oversight.

The Foreclosure Outcome

The sanctions were a secondary consequence of the underlying litigation. LeTennier lost the foreclosure case on the merits. The appellate court affirmed all challenged orders, leaving the foreclosure judgment intact. The AI-generated citations hadn't helped his defense; they'd undermined it.

This is the practical cost that AI citation cases rarely emphasize. The sanctions get the headlines - $10,000 here, $2,500 there. But the real damage is to the case itself. A court that discovers fabricated authority in a party's filings has every reason to view that party's other arguments with skepticism. Credibility, once lost with a judge, is nearly impossible to recover. LeTennier didn't just pay $10,000 in sanctions; he lost whatever remaining persuasive force his legal arguments might have carried.

The Nondelegable Duty

The core principle the LeTennier decision reinforced is straightforward: lawyers must verify that the cases they cite are real. This obligation cannot be outsourced to AI tools, and it cannot be satisfied by trusting the output of a system that generates fictitious authorities as a known behavior.

Generative AI tools produce fake citations not as a bug but as a predictable consequence of how they work. Large language models generate text that is statistically plausible based on their training data. When asked for legal citations, they produce strings that look like case names, volume numbers, and page references. Sometimes these correspond to real cases. Often they don't. The model has no mechanism for checking whether the citation it generated points to an actual court decision.

Every lawyer who uses AI for legal research needs to understand this limitation. The tool is not a legal database. It is a text generator that sometimes generates text that happens to match real legal authorities. Treating its output as verified research without independently confirming each citation against Westlaw, LexisNexis, or Fastcase is the equivalent of citing a case because a stranger on the street told you about it.
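The verification discipline described above can be made mechanical: extract every citation from a filing, then flag any that cannot be confirmed against an independent source. The sketch below is illustrative only - the regex covers just a few common reporter formats, and the "database" is a local stand-in set rather than a real Westlaw or LexisNexis lookup, both assumptions of ours:

```python
import re

# Illustrative sketch only: the pattern below covers a handful of common
# reporter formats (U.S., F.2d/F.3d, N.Y.S.2d, A.D.3d) and is not a
# complete citation grammar.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+"                                                # volume
    r"(?:U\.S\.|F\.\d[a-z]*|N\.Y\.S\.\d[a-z]*|A\.D\.\d[a-z]*)\s+"  # reporter
    r"\d{1,4}\b"                                                   # first page
)

def unverified_citations(brief_text, verified_db):
    """Return citations found in brief_text that are absent from an
    independently maintained database (a stand-in for a real legal
    research service)."""
    found = CITATION_RE.findall(brief_text)
    return [cite for cite in found if cite not in verified_db]
```

The point of the exercise is the workflow, not the regex: every extracted citation either matches a record in a source the AI tool did not produce, or it does not go in the filing.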

The LeTennier court made clear that this understanding is now expected of every attorney practicing in New York's appellate courts. The excuse period for AI-generated citations has ended.
