Four attorneys fined $12,000 combined for AI-fabricated patent case citations


A federal judge in the District of Kansas fined four attorneys a combined $12,000 for court filings containing AI-generated fabricated legal citations in a patent infringement case. The attorney who used ChatGPT was fined $5,000; two who signed without reviewing the filings were fined $3,000 each; local counsel who failed to catch the errors was fined $1,000. The judge called the volume of fabricated case law "staggering."

Incident Details

Severity: Facepalm
Company: Multiple law firms (patent case)
Perpetrator: Attorney
Incident Date:
Blast Radius: Four attorneys sanctioned across a single case; staggering volume of fabricated case law filed with the court; all signatories held personally accountable

A Brief Full of Fabrications

The case was a patent infringement action: Lexos Media IP LLC v. Overstock.com Inc., filed in the U.S. District Court for the District of Kansas. Lexos Media, a patent holding company, was represented by a team of attorneys who submitted a brief that turned out to be riddled with AI-generated falsehoods. The fabrications were not limited to a citation or two. The brief cited nonexistent lawsuits, included made-up quotations attributed to judges' decisions, and referenced real cases that held the opposite of what the brief claimed they held.

Among the more creative fabrications was a citation to a nonexistent lawsuit against Topeka's city government - a detail specific enough to sound plausible but entirely manufactured by the AI tool. This is a hallmark of large language model hallucinations in legal contexts: the fabricated material is detailed and confident, mimicking the style and structure of real legal citations closely enough to pass a casual glance but dissolving under even basic verification.

U.S. District Judge Julie Robinson identified the problems and issued sanctions on February 3, 2026, in a decision that spread accountability across the entire legal team rather than limiting blame to the attorney who actually used the AI.

The Fine Structure

The sanctions reflected the court's assessment of each attorney's role in the failure. The attorney who had used ChatGPT to generate the research and draft the brief received the largest individual fine: $5,000. But the court did not stop there. Two attorneys who signed the filings without reviewing them for accuracy were each fined $3,000. A fourth attorney serving as local counsel, who failed to identify the errors despite being a signatory, was fined $1,000.

The total came to $12,000 across four sanctioned attorneys. Some reporting identified five attorneys involved in the case, with the fifth facing additional consequences. The attorney who used AI had his admission to practice in the case revoked and was ordered to self-report to the legal disciplinary authorities in the state where he is licensed. He was also required to submit to the court clerk a certificate outlining the internal procedures his firm was implementing to ensure future court filings are accurate.

The lead attorney on the case received a public admonishment in addition to his fine. Judge Robinson noted that he had violated his duty by signing documents he failed to review and had not acknowledged his breach of legal rules.

Shared Responsibility

The most significant aspect of Judge Robinson's decision was the distribution of sanctions across the entire legal team. In earlier AI citation cases, courts had generally focused their ire on the attorney who actually used the AI tool. Robinson's approach was different: every attorney whose signature appeared on the defective filing bore responsibility for its contents.

This is not a new legal principle. Federal Rule of Civil Procedure 11 has long required that every attorney who signs a filing certify that it is supported by existing law and that factual contentions have evidentiary support. Signing a brief means you have reviewed it and stand behind it. The Kansas case simply applied this existing obligation to the AI context: if you sign a filing, you are responsible for verifying its citations, regardless of who - or what - generated them.

For law firms where junior associates or AI tools draft briefs that partners then sign, the implications are clear. The partner's signature is not a rubber stamp. It is a personal certification that the contents meet professional standards of accuracy. Judge Robinson's decision makes this explicit in the AI era.

"Staggering"

Judge Robinson's most quoted line captured the broader trend: "The sheer amount of case law that has erupted over the last few years due to attorneys' reliance on unverified generative AI research, often generating hallucinated legal authority, is staggering."

By February 2026, the list of AI citation sanction cases had grown long. The Avianca case in 2023 was the first to gain widespread attention. Since then, courts across the country had issued warnings, fines, and other sanctions against lawyers and self-represented litigants in case after case. The Kansas case was not even the most severe sanction that week - the Flycatcher v. Affable Avenue case resulted in default judgment against the client just two days later.

What made the Kansas case instructive was its ordinariness. This was not a sole practitioner working without resources. This was a team of attorneys at a well-resourced law firm, working on a patent infringement case against a major retailer. They had access to Westlaw, Lexis, and every other standard legal research tool. And yet the brief that went to the court included fabricated citations that could have been caught with a few minutes of standard cite-checking.

The Verification Failure

The fundamental issue across all AI citation cases is not that attorneys use AI. Judge Robinson, like other judges who have addressed the issue, was careful to note that using AI for legal research is not inherently problematic. The problem is the failure to verify.

Legal citation verification is not a complex or time-consuming task. Every case cited in a brief can be checked in Westlaw or Lexis in seconds. The systems are specifically designed for this purpose. Pull up the cited case, confirm it exists, read the relevant passage, verify the quotation matches, and confirm the holding supports the proposition for which it is cited. Attorneys have been doing this manually for decades.
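The mechanical first step of that workflow, confirming that a cited case exists at all, can even be partially automated. The sketch below is a minimal, hypothetical illustration: it pulls reporter citations out of a brief with a regex and flags any that are missing from a local index of verified citations. The `VERIFIED_INDEX` set is an assumption standing in for a real lookup against Westlaw or Lexis; the regex covers only a few common reporter formats, and a flagged citation is a candidate for manual cite-checking, not proof of fabrication.

```python
import re

# Matches citations like "678 F. Supp. 3d 443", "598 U.S. 594", "123 F.3d 456".
# Deliberately narrow: real reporter formats are far more varied.
CITATION_RE = re.compile(r"\b(\d+)\s+(U\.S\.|F\. Supp\. \d+d|F\.\d+d)\s+(\d+)\b")

# Toy stand-in for a verified citation database (Westlaw/Lexis in practice).
# Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023) is a real case.
VERIFIED_INDEX = {
    ("678", "F. Supp. 3d", "443"),
}

def extract_citations(text):
    """Return (volume, reporter, page) tuples found in the text."""
    return CITATION_RE.findall(text)

def unverified_citations(text, index=VERIFIED_INDEX):
    """Citations absent from the verified index: flag for manual review."""
    return [c for c in extract_citations(text) if c not in index]

brief = ("See Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023); "
         "Fake v. Case, 999 F. Supp. 9d 111 (D. Kan. 2026).")
print(unverified_citations(brief))  # only the fabricated citation is flagged
```

Even a crude filter like this would have surfaced every fabricated citation in the Kansas brief for a human to check; the attorneys' failure was not a lack of tooling but a failure to apply any verification at all.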

What ChatGPT and similar tools introduced was not a new capability but a new failure mode. The AI generates citations that look correct - proper case names, plausible docket numbers, realistic quotations from judicial opinions - but are partially or entirely fabricated. An attorney who treats AI-generated research as equivalent to research from a verified legal database is confusing two fundamentally different types of output. One has been validated against court records. The other has been generated by a statistical model that optimizes for plausibility.

What $12,000 Buys

The financial sanctions in the Kansas case were modest - $12,000 split among multiple attorneys at firms that handle patent litigation is not a career-ending sum. But the non-monetary consequences were arguably more significant. The revocation of pro hac vice admission, the mandatory self-reporting to disciplinary authorities, the required filing of remediation procedures, and the public nature of the sanctions all carry professional costs that exceed the fines.

More broadly, the Kansas case reinforced the emerging judicial standard: courts will not accept "the AI generated it" as an excuse. The obligation to verify citations predates AI by centuries. The tools to verify them have never been more accessible. And judges across the federal judiciary have now made clear, repeatedly, that submitting fabricated legal authority will be met with consequences regardless of how it was generated.
