Government contractor sanctioned for AI-fabricated deposition testimony
The Civilian Board of Contract Appeals sanctioned a party in Louis J. Blazy v. Department of State (CBCA 7992) after discovering four non-existent legal decisions and four fabricated deposition excerpts in filings. The supposed direct quotations from witness testimony didn't appear on the cited transcript pages. When pressed, Blazy admitted the quotes were "constructed" and offered substitute testimony that didn't support the original wording. He also misrepresented existing case law by submitting real decisions as stand-ins for the fake ones, characterizing them as supporting principles they did not contain. The CBCA issued a formal admonishment and warned that continued misconduct could result in dismissal - making this one of the first federal sanctions involving AI-fabricated witness testimony, not just made-up case law.
The Filing
The Civilian Board of Contract Appeals (CBCA) is one of those federal bodies that almost nobody outside the government contracting world has heard of. It resolves disputes between the federal government and its contractors - contract claims, payment disputes, performance disagreements. The kind of dry procedural work that normally generates PDFs nobody reads outside the parties involved.
On February 24, 2026, the CBCA in case number 7992 - Louis J. Blazy v. Department of State - issued a decision that will end up being read far beyond the government contracting bar. Not because of the underlying contract dispute, which was unremarkable, but because of what Blazy put in his filings.
The Board found that Blazy's motion cited four legal decisions that do not exist. When the Board asked him to produce copies of the cited cases, he couldn't. His explanation was, by the Board's characterization, vague. The decisions simply weren't real.
That part, by early 2026, is depressingly familiar. Lawyers citing AI-hallucinated cases has been documented dozens of times since the Avianca/ChatGPT case broke in 2023. The federal courts, the GAO, and state bars have all dealt with fabricated case citations. What makes the Blazy case different is what came next.
The Fabricated Testimony
Beyond the fake case law, the CBCA identified four deposition excerpts that Blazy presented as direct quotations from witness testimony. He formatted them as block quotes. He cited specific transcript pages. They looked like exact language pulled from a deposition transcript - quoting testimony being one of the most basic evidentiary moves in litigation.
The quoted language did not appear on the cited transcript pages.
This is a different category of fabrication than making up a case citation. Fabricated case law is bad - it wastes judicial time and undermines the legal process - but it doesn't directly implicate a specific living person's words. Fabricated deposition testimony does. When you present a block quote and attribute it to a named witness on a specific page of a transcript, you're telling the tribunal that a real person said those specific words under oath. If the person never said them, you've fabricated evidence.
When confronted, Blazy acknowledged that the quotations were what he described as "constructed." He provided alternative testimony from the actual transcripts as substitutes, but the alternative quotes didn't support the same propositions the fabricated ones had been cited for. The original "constructed" quotes conveniently said exactly what Blazy's legal argument needed them to say. The actual testimony did not.
The Misrepresented Case Law
The fabrication problems didn't stop at non-existent cases and manufactured testimony. The Board also found that Blazy had submitted real legal decisions as supposed substitutes for the fake ones - but then misrepresented what those real decisions actually said. He characterized existing rulings as supporting legal principles that the rulings did not, in fact, contain.
This layering of deception - fabricated cases, fabricated testimony, then misrepresented real cases offered as replacements - suggests either a remarkably creative approach to legal fiction or a workflow where AI-generated content was accepted wholesale and then, when challenged, more AI-generated content was submitted to paper over the problems. The Board's opinion doesn't speculate on the source of the fabrications, but the pattern of confidently detailed but entirely wrong citations is textbook large language model output.
The Sanctions
The CBCA issued a formal admonishment under Board Rule 35, which governs standards of conduct. The Board emphasized that while it does not prohibit the use of AI tools, parties bear full responsibility for the accuracy of everything they submit. The admonishment came with an explicit warning: continued misconduct could lead to more severe measures, including dismissal of the case entirely.
By the standards of the escalating sanctions we've seen in federal courts - the Sixth Circuit's $30,000 penalty just weeks later, for example - an admonishment might seem mild. But context matters. The CBCA is an administrative tribunal, not an Article III court. Its sanctions toolkit is different. And for the CBCA, devoting several pages of a decision to cataloging fabrications and issuing a formal warning is about as pointed as it gets. The Board put a documented record of misconduct on paper and attached a ticking clock: do it again and the case gets dismissed.
Why This Case Matters
The vibe-lawyering cases on the Vibe Graveyard have, until now, followed a consistent pattern: lawyers cite AI-generated case law that doesn't exist, courts find out, sanctions follow. The Avianca case, the Deutsche Bank case, the Fifth Circuit's Hersh case, the Sixth Circuit's Whiting case - all involve fabricated citations. Cases that don't exist. Holdings that were never held. Page numbers in reporters that lead to blank pages or different cases entirely.
Blazy introduces a new failure mode: fabricated testimony. Making up what a witness said under oath is qualitatively different from making up a case citation. It's closer to fabricating evidence than to sloppy research. Courts have always distinguished between a lawyer who cites a case incorrectly and a lawyer who fabricates witness statements. The first is negligent. The second starts approaching fraud.
Whether Blazy used a generative AI tool to produce the fabricated quotes is not established in the Board's decision. But the pattern - confidently specific, plausible-sounding quotes that turn out to be entirely fabricated, offered with precise page citations that don't match - is consistent with how large language models handle deposition testimony. Ask an LLM to quote from a transcript it hasn't seen, and it will cheerfully generate a quote that sounds like testimony, format it correctly, and cite a page number. The page number will be wrong. The quote will be fiction. The formatting will be perfect.
The Government Contracting Context
The Blazy case is part of a broader wave of AI-related sanctions and warnings in federal procurement law. The GAO dismissed protests from Oready, LLC over similar generative-AI misuse. The Armed Services Board of Contract Appeals (ASBCA) granted a motion to strike a brief in Huffman Construction due to unverified AI content. With Blazy, the CBCA has now added formal sanctions of its own to that list.
Government contracting litigation operates under heightened duty-of-candor requirements. Federal tribunals expect parties to be scrupulously accurate in their representations because public funds and federal programs are at stake. The tolerance for fabrication - whether AI-generated or otherwise - is lower here than in many other legal contexts.
The CBCA's decision in Blazy signals that federal procurement tribunals are joining the growing list of courts and adjudicatory bodies that have drawn a line. AI tools are permitted. Unverified AI output is not. And fabricated testimony - whether AI-generated or human-manufactured - will be sanctioned, with escalating consequences for repeat behavior.
The progression from fabricated case citations to fabricated deposition testimony represents an expansion of the ways AI tools can corrupt legal proceedings. As these tools get better at generating contextually appropriate content, the fabrications get harder to spot and the potential for harm increases. A fake case citation can be checked against a reporter database in minutes. A fake deposition quote requires pulling the actual transcript and comparing - a step that, in the normal course of litigation, most tribunals don't take unless something else raises suspicion. The CBCA happened to check. The question is how many other fabricated quotes, in how many other proceedings, have gone unchecked.
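The transcript-comparison step described above is mechanical once you have the transcript text: normalize whitespace and case, then check whether the quoted language actually appears on the cited page. A minimal sketch of that check - the `quote_appears` helper and the sample testimony are hypothetical, not drawn from the Blazy record:

```python
import re

def normalize(text: str) -> str:
    """Collapse whitespace and lowercase so line breaks and
    capitalization differences don't cause false mismatches."""
    return re.sub(r"\s+", " ", text).strip().lower()

def quote_appears(transcript_pages: dict[int, str], page: int, quote: str) -> bool:
    """Return True if the quoted language actually appears on the
    cited transcript page (after normalization)."""
    page_text = transcript_pages.get(page, "")
    return normalize(quote) in normalize(page_text)

# Hypothetical transcript page keyed by page number.
pages = {42: "Q. Did you approve the invoice?\nA. I don't recall approving it."}

print(quote_appears(pages, 42, "I don't recall approving it."))   # True
print(quote_appears(pages, 42, "Yes, I approved every invoice."))  # False
```

A real check would be messier - OCR noise, ellipses, and bracketed alterations in quotes call for fuzzy matching (e.g. `difflib.SequenceMatcher`) rather than exact substring search - but even this crude version catches a quote that simply isn't on the cited page.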