DOJ prosecutor resigned after filing an AI-generated brief full of fabricated citations

Rudy Renfer, an assistant U.S. attorney in the Eastern District of North Carolina, resigned in March 2026 after admitting he used AI to rewrite a legal brief that contained fabricated citations, fictitious quotations, and misstatements of law. The opposing party - a pro se retired Air Force colonel suing over GLP-1 medication coverage under TRICARE - caught the fakes. At a show-cause hearing, the presiding magistrate judge expressed skepticism about Renfer's claim that he had reviewed the brief before filing, noting the fabrications appeared "intentionally designed" to support the government's argument. The matter was referred to the DOJ's Office of Professional Responsibility, and the district's U.S. Attorney issued an office-wide memo warning staff that "AI may hallucinate, but that does not excuse you from your obligations."

Incident Details

Perpetrator: Legal Counsel
Severity: Facepalm
Blast Radius: Federal prosecutor forced to resign; case referred to DOJ Office of Professional Responsibility; district-wide policy memo issued; credibility of government legal arguments undermined

The Brief

The case itself was not complicated. Derence Fivehouse, a retired Air Force colonel, was suing the federal government over coverage of GLP-1 medications under TRICARE for Life, the military health insurance program for retirees. Fivehouse was representing himself - a pro se litigant navigating federal court without legal counsel.

On the other side of the case was the full weight of the United States Department of Justice. Specifically, Rudy Renfer, an assistant U.S. attorney in the Eastern District of North Carolina, whose job was to represent the government's legal position. A professional attorney versus a self-represented retiree, briefing a federal magistrate judge on a health insurance coverage question.

Renfer filed the government's brief. It contained fabricated citations.

The fabrications were not subtle. Fictitious quotations. Misstatements of case law. Invented language attributed to the Code of Federal Regulations. The kind of errors that, in the age of Westlaw and LexisNexis, require either extraordinary negligence or an AI that is very good at sounding authoritative about things it has invented.

The Person Who Caught It

The fabricated citations were discovered not by opposing counsel (there was none), not by the court's own clerks, and not by a supervising attorney at the U.S. Attorney's office. They were discovered by Fivehouse himself - the retired colonel representing himself pro se.

There is something particularly pointed about a self-represented litigant catching fabrications in a brief filed by a Department of Justice attorney. Fivehouse did what courts expect every attorney to do before filing: he checked whether the cited authorities actually existed. He found that they did not. He brought this to the court's attention.

A pro se retiree doing due diligence that a federal prosecutor did not bother to perform. The gap between those two levels of care is where the story lives.

The Hearing

Magistrate Judge Robert Numbers presided over the show-cause hearing on March 10, 2026. Renfer admitted he had used an AI tool to generate the brief. His explanation: he had accidentally lost a prior draft and, feeling panicked, used AI to rewrite it. He told the court he believed he had reviewed the AI's output before filing.

Judge Numbers was not persuaded. He noted that the fabricated content did not look like random algorithmic gibberish or obviously wrong boilerplate. The fabrications appeared, in his assessment, to be "intentionally designed" to support the government's argument. The fake citations and invented quotations aligned substantively with the position Renfer needed to argue, which made the claim of panic-induced oversight harder to accept.

Renfer resigned from the U.S. Attorney's office. The matter was referred to the Department of Justice's Office of Professional Responsibility, the internal body that investigates allegations of misconduct by DOJ attorneys. Whether additional consequences will follow from that referral remains to be seen.

The Memo

Following Renfer's departure, Ellis Boyle, the U.S. Attorney for the Eastern District of North Carolina, issued an office-wide memo about AI use. The memo's most quoted line: "AI may hallucinate, but that does not excuse you from your obligations."

The sentence is doing a lot of work. It simultaneously acknowledges that AI tools produce fabricated output (the "may hallucinate" part) and warns attorneys that this known failure mode does not reduce their professional responsibility to verify what they file (the "does not excuse you" part). The implication is clear: if an attorney knows AI hallucinates and uses it anyway without verification, the fault lies with the attorney, not the tool.

The memo did not ban AI use. It warned staff to verify everything. This is the standard institutional response to AI hallucination incidents in the legal profession: don't stop using the tools, just check that the output is real before you put it in a court filing. The fact that this needs to be said in a memo to federal prosecutors - people who passed the bar, completed federal hiring processes, and are entrusted with representing the United States government in court - says something about how quickly and uncritically AI tools have been adopted in legal work.

The Escalation

Renfer is not the first attorney to face consequences for AI-fabricated citations. That distinction belongs to Steven Schwartz, who in 2023 was sanctioned for filing ChatGPT-generated fake cases in the Avianca lawsuit. Since then, AI hallucination sanctions have become a minor genre of legal news. Attorneys in jurisdictions from Kansas to Texas to the Fifth Circuit have been caught filing briefs with fabricated case law, each time claiming some version of "I didn't know the AI made it up."

But Renfer's case marks a new threshold. He is the first federal prosecutor - a government attorney, not a private practitioner - known to have resigned over AI-fabricated legal work. Previous incidents involved private attorneys or solo practitioners who could be dismissed as insufficiently sophisticated or cautious. A DOJ attorney filing fabricated citations against a self-represented retiree is a different category of failure. It suggests that the problem has migrated from the margins of the legal profession to its institutional core.

It also suggests something about the specific pressure point that leads to these incidents. Renfer said he panicked after losing a draft. Schwartz said he was trying to speed up his research. Other sanctioned attorneys have cited time pressure, understaffing, or unfamiliarity with the tools. The common thread is attorneys substituting AI output for their own work product under conditions where they feel unable to do the work themselves in the time available.

What the Tool Does and Does Not Do

The AI tool did exactly what large language models do: it generated plausible-sounding text in response to a prompt. When asked to produce legal citations supporting a particular argument, it produced legal citations that looked like they supported that argument. The citations referenced real-sounding case names, attributed quotations to recognizable legal sources, and included the procedural details that make a citation look legitimate. None of it was real.

This is not a bug in the AI. It is how the technology works. Language models generate text that fits the pattern of their training data. Legal text follows very specific patterns - case names formatted a particular way, quotations attributed with specific citation formats, holdings phrased in the language of judicial opinions. The model reproduces those patterns fluently. It just does so without any mechanism for checking whether the specific cases, quotations, or holdings it generates correspond to anything in the real world.

Nearly every attorney sanctioned for AI-fabricated citations has, at some point, expressed surprise that the tool made things up. Three years after the Avianca case made international news, that surprise is increasingly difficult to accept as a defense. The legal profession has been on notice since June 2023 that AI generates fake citations. Renfer's filing was submitted in 2026. The warning had been circulating, in legal publications and judicial orders and bar association guidance, for nearly three years.

The Pro Se Angle

One detail that makes this story linger is Fivehouse's position in the case. He was a retired military officer arguing his own healthcare coverage claim against the federal government. He did not have a law firm. He did not have a legal research team. He did not have access to the institutional resources of the DOJ. He had himself, his knowledge of his own case, and enough legal awareness to check whether the cases cited against him were real.

They were not. And he caught it. The government's attorney, backed by the resources of the Department of Justice, filed fabricated authorities that were detected by the person who had the fewest resources to detect them. If Fivehouse had been less diligent, or less familiar with legal research, the fabricated citations might have gone unchallenged. The government's argument, supported by invented law, might have been accepted.

That is the part of the story that stays with you. Not just that an attorney used AI and filed fake citations - that has happened before and will happen again. But that the person who caught it was the exact person the system was designed to protect: a citizen appearing before the court without legal representation, trusting that the government's filings would be honest. The system held, but only because the citizen checked.