Wisconsin DA sanctioned for AI-hallucinated legal citations in burglary case
Kenosha County District Attorney Xavier Solis was sanctioned by Circuit Court Judge David Hughes after his office submitted court filings containing AI-generated legal citations that did not exist. The filings were part of a burglary case against two defendants, and Solis failed to disclose his use of AI - violating Kenosha County's court policy requiring disclosure and verification of AI-generated content. The charges were ultimately dismissed (primarily for lack of probable cause), but not before the bogus citations made the DA's office a cautionary tale for prosecutors nationwide. Solis acknowledged the error and promised to "review and reinforce internal practices." It's always reassuring when the person responsible for prosecuting crimes can't be bothered to read the citations in their own filings.
From the Courtroom to the Cautionary Tale
The Vibe Graveyard has cataloged a growing collection of attorneys who decided that asking an AI chatbot for legal citations was an acceptable substitute for, you know, reading cases. Most of those stories involve private attorneys or civil litigators - people whose professional negligence, while embarrassing, primarily affects their own clients and reputations.
Xavier Solis is a district attorney. As in, the top prosecutor for an entire county. The person whose office decides who gets charged with crimes, presents evidence to juries, and is supposed to represent the interests of the public in court. When a DA's office submits filings with fabricated legal citations, the implications extend beyond one case's outcome - they touch the integrity of the criminal justice system itself.
The Case
The filing in question was part of a criminal case against Christain Garrett and Cornelius Garrett, who faced burglary charges in Kenosha County. Solis's office prepared and submitted court documents that contained legal citations generated by an AI tool - citations that pointed to cases that did not exist. Not cases that were misquoted, not cases that were taken out of context, but cases that were entirely fabricated. The AI hallucinated them, and nobody in the DA's office checked whether the case law being cited in a criminal prosecution was, in fact, real.
Circuit Court Judge David Hughes was not amused. He sanctioned Solis for two compounding failures: first, the submission of bogus citations, and second, the failure to disclose the use of AI in preparing the filings. Kenosha County has a court policy - not a suggestion, not a guideline, a policy - requiring attorneys to disclose when they've used AI tools in preparing court documents and to verify the accuracy of any AI-generated content. Solis's office apparently decided that policy was optional.
The Disclosure Problem
The AI disclosure requirement exists precisely because of incidents like this one. Courts across the country have been implementing these policies since 2023, when the Avianca case first demonstrated that AI chatbots will cheerfully fabricate judicial opinions, complete with realistic-sounding case names, docket numbers, and holdings. As of early 2026, hundreds of courts have adopted some form of AI disclosure requirement.
The requirement serves two purposes. First, it puts the court on notice that AI was involved, allowing judges to apply additional scrutiny. Second, it forces attorneys to at least acknowledge that they used a tool known for fabricating content, which creates an implicit obligation to verify the output. Solis's failure to disclose bypassed both safeguards.
Whether Solis deliberately concealed the AI usage or simply didn't think disclosure was necessary isn't entirely clear from public reporting. Either interpretation is unflattering. A DA who deliberately hides AI use from a court is being dishonest. A DA who doesn't realize he needs to disclose it hasn't been paying attention to developments in his own profession for the past three years.
The Dismissal
The charges against the Garrett defendants were ultimately dismissed. Defense attorneys clarified that the primary basis for dismissal was lack of probable cause - the prosecution's case had fundamental evidentiary problems independent of the AI citation fiasco. But the AI-generated citations didn't help. When your legal arguments are built on case law that doesn't exist, it tends to undermine whatever credibility your other arguments might have had.
The dismissal raises uncomfortable questions about other cases handled by the DA's office. If AI tools were used to prepare filings in the Garrett case without verification, were they used in other cases? How many other court documents submitted by Solis's office contain citations that no one checked? These are not hypothetical concerns - they are the kind of questions that defense attorneys are now positioned to raise in any pending case.
The Response
Solis acknowledged the error publicly, stating that his office identified the issue and has "reviewed and reinforced its internal practices to ensure accuracy and reliable citation verification in future filings." This is the standard post-sanction response template: acknowledge, promise to do better, imply it was an isolated incident.
The reinforcement of internal practices is presumably a reference to implementing the kind of verification steps that should have been in place before anyone in a prosecutor's office typed a prompt into a chatbot. Steps like: reading the cases you cite. Checking whether they exist. Asking yourself whether the legal arguments supporting a criminal prosecution should perhaps be verified by a human being before being submitted to a judge.
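None of this requires exotic tooling. For anyone building the document pipeline a DA's office apparently lacks, here is a minimal sketch of a pre-filing citation check. It assumes CourtListener's citation-lookup endpoint behaves roughly as its public docs describe; the request shape, field names, and per-citation status semantics below are assumptions for illustration, not verified details, and the endpoint URL and token handling should be checked against the live API before anyone relies on it.

```python
"""Sketch of a pre-filing citation sanity check.

Assumptions (not from the article): CourtListener's citation-lookup
endpoint accepts a POST with a `text` field containing the filing,
returns one entry per citation it finds, and marks unresolved
citations with a non-200 per-citation status. Confirm against the
current API documentation before using.
"""
import sys
import requests

LOOKUP_URL = "https://www.courtlistener.com/api/rest/v3/citation-lookup/"


def unresolved_citations(filing_text: str, api_token: str) -> list[dict]:
    """Return citations in the filing that the lookup service could not match."""
    resp = requests.post(
        LOOKUP_URL,
        data={"text": filing_text},
        headers={"Authorization": f"Token {api_token}"},  # assumed auth scheme
        timeout=30,
    )
    resp.raise_for_status()
    # Each entry describes one citation found in the text; anything that
    # doesn't resolve to a real opinion is exactly the hallucination
    # failure mode that sank the Garrett filings.
    return [c for c in resp.json() if c.get("status") != 200]


if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8") as fh:
        bad = unresolved_citations(fh.read(), api_token="YOUR_TOKEN")
    for cite in bad:
        print(f"UNRESOLVED: {cite.get('citation')}")
    sys.exit(1 if bad else 0)
```

Even a check like this only catches citations that point nowhere. It cannot tell you whether a real case actually says what the filing claims it says, which is why reading the opinion remains the step no policy, tool, or internal practice can substitute for.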
The Prosecutor Problem
What distinguishes this incident from the site's existing collection of vibe-lawyering stories is the prosecutorial context. When a civil attorney submits hallucinated citations, the consequences typically land on their client - a company loses a motion, a plaintiff's case is weakened, the attorney faces sanctions and embarrassment. The power dynamic is relatively contained.
When a prosecutor submits hallucinated citations, the stakes are different. Prosecutors wield the power of the state. They can put people in prison. The citations they submit support arguments about why someone should be charged, detained, or convicted. Fabricated case law in a prosecution filing - even if introduced through negligence rather than malice - represents a failure of the machinery that is supposed to protect people's liberty.
The Garrett defendants had competent defense counsel who challenged the filings. Not every defendant does. In a system where the vast majority of criminal cases are resolved through plea bargains rather than trials, the quality of the prosecution's legal arguments often goes unchallenged. If AI-generated hallucinations are making their way into prosecutorial filings, the question isn't whether it happened once - it's how many times it happened without anyone noticing.
The Running Tally
The Solis sanctions join a growing list that the Vibe Graveyard has been tracking: the Avianca case in 2023, the Deutsche Bank sanctions, the Kansas five-attorney sanctions, the Sixth Circuit's $30K penalties, and many more. Each new entry makes the pattern harder to dismiss as isolated incidents or growing pains.
Courts have responded with disclosure requirements, certification obligations, and increasingly severe sanctions. The Kenosha County policy that Solis violated was itself a response to earlier incidents. The policy existed. It was clear. It was ignored anyway.
At some point, the legal profession will have to reckon with the structural problem beneath these incidents: lawyers are using tools that fabricate information, in a profession where fabricated information has consequences, and the only safeguard is the attorney's willingness to verify the output. That safeguard keeps failing. The courts keep sanctioning. And the next attorney to submit hallucinated citations has already typed their prompt.