10th Circuit sanctions lawyer $1,000 for ChatGPT-fabricated appellate brief


Maryland attorney Kusmin Amarsingh used ChatGPT to draft her appellate brief against Frontier Airlines without verifying any citations, resulting in multiple nonexistent cases being cited in the 10th Circuit. The court found her conduct "reckless" for completely failing to perform "an attorney's fundamental duty to the court." She was fined $1,000 and referred to Maryland attorney-disciplinary authorities.

Incident Details

Severity: Facepalm
Company: OpenAI (ChatGPT user error)
Perpetrator: Attorney
Incident Date:
Blast Radius: Client's appeal dismissed; attorney faces $1,000 fine and disciplinary referral; case adds to mounting appellate-level precedent on AI citation verification duties

A $15 Million Lawsuit and a $1,000 Fine

Kusmin Amarsingh, a Maryland attorney representing herself, sued Frontier Airlines for $15 million alleging racial discrimination. Her complaint claimed that gate agents at Denver International Airport mocked her Indian accent and denied her boarding. The district court dismissed the case, and Amarsingh appealed to the U.S. Court of Appeals for the Tenth Circuit.

The appellate brief she filed is where things went wrong. Amarsingh used ChatGPT to draft the brief and submitted it without verifying any of the citations the AI generated. Multiple cases cited in her brief to the Tenth Circuit were entirely fictitious - decisions fabricated by the language model, complete with plausible-sounding case names, citation formats, and legal reasoning that had no basis in actual law.

The Tenth Circuit panel noticed. It's worth pausing on how fabricated citations in appellate briefs are now common enough that federal judges are apparently checking for them as a matter of routine.

"An Attorney's Fundamental Duty"

The court's response addressed both the appeal and the AI issue. On the merits, the panel affirmed the district court's dismissal, finding no error in the lower court's decision. Amarsingh's racial discrimination claims didn't survive appellate review regardless of the citation problems.

On the AI fabrications, the court was blunt. It characterized Amarsingh's conduct as "reckless" and found that she had completely failed to perform "an attorney's fundamental duty to the court" by not verifying any of the AI-generated content in her brief. The court imposed a $1,000 sanction and referred Amarsingh to Maryland attorney-disciplinary authorities.

The referral to disciplinary authorities is significant. A sanction is a financial penalty from a specific court for conduct in a specific case. A disciplinary referral puts the attorney's license to practice at risk. It signals the court's view that the conduct raises questions about the attorney's fitness to practice law generally, not just about mistakes in one filing.

Self-Representation and AI Dependence

Amarsingh was representing herself (pro se), which adds an unusual dimension to the case. Attorneys acting as their own counsel still bear all the professional obligations of an attorney - they can't claim the relaxed standards sometimes afforded to non-lawyer pro se litigants. The brief had to meet the same standards as any other appellate filing, regardless of whether Amarsingh was appearing on behalf of a client or herself.

The self-representation context may partly explain the heavy reliance on ChatGPT. Solo practitioners and self-represented attorneys, lacking the firm infrastructure of colleagues and research assistants, may be particularly susceptible to using AI as a substitute for the research process rather than as a supplement to it. When there's no second pair of eyes in the office, the temptation to trust the AI's output as a finished product rather than a starting point is presumably stronger.

The court, however, made clear that this explanation doesn't constitute an excuse. The duty to verify citations is non-negotiable. An attorney who lacks the resources to verify her own citations lacks the resources to file the brief.

The Rehearing Attempt

After the sanctions decision, Amarsingh filed a petition asking the Tenth Circuit to rehear her case. She argued that the panel had misread her allegations about the gate agents' conduct and the racial discrimination claims. The petition was filed on February 24, 2026, about two weeks after the original ruling.

The rehearing petition is notable primarily for its timing - coming after sanctions for AI-fabricated citations, it suggests either a determination to continue litigating or a lack of recognition of how significantly the AI incident has undermined her credibility with the court. Federal appellate rehearing petitions are rarely granted under the best circumstances. Petitioning a panel that just sanctioned you for AI fabrications is the legal equivalent of asking for a do-over from the teacher who caught you cheating on the test.

Another Week, Another Fabricated Brief

The Amarsingh case was decided on February 9, 2026 - the same week the Fifth Circuit was preparing its opinion in the Hersh case, nine days before that court would issue its own sanction for AI hallucinations and declare the problem was "getting worse." Both are federal appellate cases. Both involve attorneys who used ChatGPT without verification. Both resulted in sanctions and sharp judicial language about professional obligations.

The Tenth Circuit's approach differed from the Fifth Circuit's in some details - Amarsingh's $1,000 fine was less than the $2,500 the Fifth Circuit imposed on Hersh - but the underlying message was identical. Federal courts at the appellate level are encountering AI-fabricated citations with increasing frequency, and they are responding with sanctions, disciplinary referrals, and published opinions designed to make the consequences clear.

The parade of cases follows the same script every time. Attorney uses AI. AI fabricates citations. Attorney doesn't check. Court catches it. Attorney faces sanctions. Each iteration proves that the previous iteration wasn't enough of a deterrent. The courts have tried publicity, sanctions, disciplinary referrals, and increasingly frustrated language in published opinions. The problem continues because the incentive to use AI without verification (it saves time) is apparently stronger than the deterrent effect of the consequences (which only materialize if you get caught, and only if you can't explain it away).

For Amarsingh, the consequences were concrete: a dismissed lawsuit, a $1,000 fine, and a referral to disciplinary authorities. For her client - who was herself - the consequences included a forfeited appeal in a case seeking $15 million in damages. For the legal profession, the Amarsingh case is one more data point in a pattern that even the courts acknowledging the problem haven't been able to stop.
