Lawyers filed ChatGPT’s imaginary cases; judge fined them


In Mata v. Avianca (S.D.N.Y.), plaintiff Roberto Mata sued the airline after a metal serving cart struck his knee during a 2019 flight. His attorney Peter LoDuca filed a brief opposing dismissal that cited six judicial decisions. When opposing counsel and the court couldn't locate any of the cited cases, Judge Kevin Castel demanded copies. It turned out attorney Steven Schwartz at the same firm had used ChatGPT to research and draft the brief, and the AI had fabricated every case, complete with fake quotes and fake internal citations. On June 22, 2023, Castel sanctioned Schwartz, LoDuca, and their firm Levidow, Levidow & Oberman with a $5,000 penalty and required them to send notices to the real judges whose names appeared in the fabricated opinions.

Incident Details

Severity: Facepalm
Company: Levidow, Levidow & Oberman, P.C.
Perpetrator: Legal Counsel
Incident Date:
Blast Radius: Court sanctions; fines and mandated notices; reputational damage in legal community.

The Underlying Case

Roberto Mata's lawsuit against Avianca Airlines was unremarkable. He alleged that in 2019, while aboard an Avianca flight to New York's Kennedy Airport, a metal serving cart struck his knee and caused injuries. He sued. Avianca moved to dismiss on jurisdictional grounds. Mata's attorneys filed a brief opposing dismissal.

The brief cited six judicial decisions as precedent. These were not obscure citations - they appeared to be real federal cases with case numbers, judges' names, quoted passages, and internal cross-references to other cases. They looked legitimate. They were not.

Peter LoDuca was the attorney of record who filed the brief. Steven Schwartz, another attorney at the firm Levidow, Levidow & Oberman, P.C., had drafted it. Schwartz had used ChatGPT to conduct legal research and help write the brief, and ChatGPT had generated the case citations. Every single one was fabricated.

The Unraveling

Avianca's defense attorneys couldn't locate the cited cases. When they flagged this, Judge Kevin Castel of the Southern District of New York issued an order directing LoDuca to provide copies of the decisions. LoDuca went to Schwartz, who went back to ChatGPT.

In a sequence that would become legendary in legal circles, Schwartz asked ChatGPT whether the cited cases were real. ChatGPT confirmed they were and assured him they could be found in "reputable legal databases such as LexisNexis and Westlaw." They could not. They did not exist. ChatGPT had made them up, and then when asked to verify its own fabrications, it cheerfully confirmed them.

Schwartz submitted an affidavit to the court attaching what he described as the actual judicial opinions. These were also fabricated - ChatGPT-generated text formatted to look like court opinions, complete with fake quotes attributed to real judges and fake procedural histories. The affidavit submitted by LoDuca was notarized, but the notarization itself later came under question.

On May 26, 2023, Judge Castel, having examined the submitted materials and found nothing real in them, issued a second order to show cause. He demanded that Schwartz, LoDuca, and their firm explain why they should not be sanctioned for: (1) the use of a false and fraudulent notarization; (2) the citation of non-existent cases; and (3) the submission of non-existent judicial opinions.

The Hearing

At the sanctions hearing, Schwartz's defense centered on his unfamiliarity with ChatGPT's limitations. He told the court he had not understood that ChatGPT could generate false information. He believed it was a search engine. He had used it because he thought it would speed up his research process, and he had trusted its output the same way he would trust results from a legal database.

He had not verified any of the citations against Westlaw, LexisNexis, or any other legal research tool. He had not asked a colleague to check them. He had not read the supposed decisions closely enough to notice they were fabricated. He had passed the AI's output directly into a federal court filing with his firm's name on it.

Judge Castel was unpersuaded. In describing the fabricated materials, Castel wrote that the brief contained "bogus judicial decisions with bogus quotes and bogus internal citations." The word "bogus" appeared repeatedly in his filings and orders, which was as close to exasperation as federal judicial writing typically gets.

The Sanctions Order

On June 22, 2023, Judge Castel issued his sanctions order. The ruling:

  • Imposed a $5,000 penalty jointly and severally on Schwartz, LoDuca, and their firm, payable to the court registry within 14 days.
  • Required the attorneys to send copies of the sanctions order to each of the real judges whose names had appeared in the fabricated opinions. This was a pointed requirement - the fake opinions had attributed invented rulings and quotations to real sitting judges, and those judges were entitled to know about it.
  • Found that the attorneys had acted in bad faith - not by initially using ChatGPT (which was merely foolish), but by continuing to vouch for the citations' authenticity after the court raised questions, and by submitting fabricated full-text opinions as supporting documentation.

The $5,000 penalty was modest by federal sanctions standards. The reputational penalty was not.

The Profession Reacts

Mata v. Avianca became, within days of the sanctions order, the most-discussed case in the American legal profession. It was the case that lawyers brought up when they talked about AI. It was cited in law firm memos, bar association bulletins, continuing legal education programs, and judicial conferences. Courts across the country began issuing standing orders requiring attorneys to disclose the use of AI in legal filings. Some required affirmative certification that all cited authorities had been verified by a human.

The case was straightforward on its facts: lawyers didn't verify their work, and a judge caught them. The AI connection made it famous, but the underlying failure was as old as the profession - a lawyer cited authorities he hadn't read.

What ChatGPT added was scale and confidence. The AI didn't just invent a case name; it invented complete judicial opinions with proper formatting, realistic-sounding quotes, and plausible procedural histories. It created a document that looked exactly like something a lawyer would find in a database search. It was fluent in the appearance of legal authority while being empty of actual legal authority. And when asked to double-check its own work, it confirmed its fabrications without hesitation.

Schwartz told the court he had believed ChatGPT was reliable. This was plausible - in early 2023, public understanding of large language model hallucinations was limited. But it didn't matter. A lawyer's obligation to verify citations predates AI by centuries. Schwartz didn't need to understand how the technology worked; he needed to check whether the cases existed. Standard due diligence - a single Westlaw search - would have caught every fabrication in minutes.

The case didn't change the law. It applied the law that already existed. But it gave the legal profession a vivid, specific example of what happens when AI-generated text meets professional obligations that require accuracy. The fabricated opinions were fluent, formatted, and false. The sanctions were small, public, and permanent.
