Third Circuit reprimanded a lawyer over AI-hallucinated DEA authorities
On March 27, 2026, the Third Circuit issued a precedential opinion reprimanding attorney Daniel A. Pallen after an appellate brief in McCarthy v. DEA used AI-generated summaries of DEA adjudications that were inaccurate or nonexistent. The court declined monetary sanctions, partly because this was its first precedential opinion on attorney AI misuse, but it directed notice to other courts and the National Disciplinary Data Bank. That is a permanent paper trail for a brief that should have been checked before filing.
Citation checking is still a job
McCarthy v. United States Drug Enforcement Administration is another entry in the legal profession's slow discovery that a hallucinated case does not become precedent because a chatbot formatted it nicely.
On March 27, 2026, the U.S. Court of Appeals for the Third Circuit issued a precedential opinion reprimanding attorney Daniel A. Pallen. Pallen represented Stephen McCarthy, a physician assistant challenging a DEA order. In the opening brief, Pallen included summaries of eight DEA adjudications that were meant to show inconsistent agency treatment. The summaries came from AI-generated case overviews provided by a non-attorney.
The problem was severe and basic. According to the court, seven of the summaries contained factual and legal inaccuracies. One cited adjudication did not exist. The Government identified the problems in its response brief. Pallen still did not read or verify the underlying authorities before filing a reply brief that minimized the errors.
By the time the court forced the issue, the citation problem had become a competence problem, a candor problem, and a disciplinary problem. That is not a great upgrade path.
What the court found
The Third Circuit's opinion is unusually useful because it walks through the timeline. Pallen filed the opening brief on September 30, 2024. The Government's response catalogued the problems. Pallen filed a reply brief in February 2025 without checking the citations, despite suspecting that AI had been used. He later said he concluded in mid-February that AI generated the problematic material, but still took no corrective action until the court ordered him to provide copies of the cited authorities.
Only then did he discover for himself that the authorities were inaccurate or nonexistent. The court issued an order to show cause, held a hearing, and concluded that his conduct violated Pennsylvania's duty of competence and the Third Circuit's disciplinary rules.
The court declined to decide whether he knowingly violated the duty of candor, though Judge Jane Roth wrote separately that she would have imposed harsher sanctions. The majority imposed a reprimand rather than monetary sanctions, explaining that it had not previously addressed attorney AI use in a precedential opinion and had not put Pallen on notice that competence was part of the sanction analysis. Future lawyers should not treat that mitigation as reusable stationery.
Reprimand with a long tail
A reprimand may sound softer than a fine, but the opinion made clear that it was not a quiet warning. Under the Third Circuit's disciplinary rules, the clerk was directed to notify other courts where Pallen is admitted and the National Disciplinary Data Bank.
That is one of the real consequences here. AI citation failures often get discussed as embarrassment. Embarrassment fades. A published appellate opinion and disciplinary notifications have a longer shelf life. They affect credibility with courts, malpractice risk, client trust, and professional recordkeeping. The court also warned that future violations may face harsher sanctions now that the precedent has been set.
JD Supra's Stevens & Lee writeup described the decision as the AI hallucination issue arriving at the Third Circuit. Legal AI Governance called it the first precedential federal appellate opinion disciplining a lawyer for AI-driven citation failures. Both secondary sources match the primary opinion on the key facts: inaccurate summaries, one nonexistent authority, failure to verify, public reprimand.
The AI was not the lawyer
The opinion did not ban AI. It did not say lawyers can never rely on assistants, staff, clients, or tools for research. It said the attorney signing the brief remains responsible for checking the legal authorities used in the argument. That should not be a hard concept, although the federal judiciary is now accumulating enough examples to justify a loyalty card.
Legal research has always involved delegation. Junior associates, paralegals, contract attorneys, interns, and clients can surface materials. The signing lawyer still has to verify the law. Generative AI changes the failure mode because it can produce plausible case names, confident summaries, and citation-shaped output at speed. The work looks finished before it has earned the right to be filed.
That is why this site keeps treating legal hallucination stories as AI-system failures rather than ordinary proofreading mishaps. A generative system can fabricate authority in a format designed to look like research. The surrounding workflow then fails when humans trust the formatting over the source. The harm lands in a courtroom, where made-up authority wastes judicial time and can damage a client's case.
Process controls, not vibes
The fix is not mysterious. Any legal team using AI for research needs source retrieval, citation verification, and a record of who checked what. A case citation should be traced to a primary source before it enters a filing. A summary should be compared against the actual decision. If a cited authority cannot be found, it does not get filed. If opposing counsel flags problems, the answer is not to wave them off before reading the cases.
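To make that gate concrete, here is a minimal Python sketch of a pre-filing citation check. The `lookup_primary_source` helper is hypothetical, standing in for a query against Westlaw, Lexis, or an agency docket; nothing in this sketch comes from the opinion itself.

```python
"""A minimal sketch of a pre-filing citation gate, assuming a
hypothetical lookup_primary_source() helper. Illustrative only."""
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Citation:
    cite: str             # illustrative format, e.g. "Jane Doe, M.D.; Decision and Order, 88 FR 12345"
    claimed_summary: str  # what the brief says the authority holds


@dataclass
class VerificationRecord:
    cite: str
    verified_by: str      # a named human, not a model
    verified_at: str      # ISO timestamp, for the audit trail
    found: bool


def lookup_primary_source(cite: str) -> str | None:
    """Hypothetical stub: resolve a citation to primary-source text.

    In practice this would query Westlaw, Lexis, or the agency's own
    docket. Returning None means the authority could not be located.
    """
    return None  # stub: wire this to a real legal database


def verify_before_filing(citations: list[Citation], reviewer: str) -> list[VerificationRecord]:
    """Trace every citation to a primary source or refuse to proceed."""
    records = [
        VerificationRecord(
            cite=c.cite,
            verified_by=reviewer,
            verified_at=datetime.now(timezone.utc).isoformat(),
            found=lookup_primary_source(c.cite) is not None,
        )
        for c in citations
    ]
    unresolved = [r.cite for r in records if not r.found]
    if unresolved:
        # The filing is blocked, not footnoted: unverified authority never ships.
        raise RuntimeError(f"cannot file; unverified authorities: {unresolved}")
    return records
```

An existence check like this catches the fabricated-authority failure in McCarthy, and the record of who verified what answers the court's first question. Whether a summary matches the actual decision still requires a human reading it, which is the part that was skipped here.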
AI tools can help draft, search, organize, and compare. They cannot be the court reporter, the law library, and the supervising attorney at the same time. A policy that says "verify AI output" is not enough by itself. The workflow has to force verification before filing, especially under deadline pressure when the polished answer starts looking temptingly complete.
McCarthy v. DEA also shows why courts are moving from annoyance to discipline. By 2026, fake AI citations were no longer a shocking novelty. The Third Circuit still treated its lack of prior precedential guidance as mitigation for this lawyer. It also said that mitigation will not protect the next one.
The lesson for lawyers is blunt: if AI touched the legal research, assume the court will ask who verified it. If the answer is "nobody," the model will not be the one whose name goes into the disciplinary notice.