Repeated AI-fabricated citations cost client the entire case
Attorney Steven Feldman filed multiple motions containing AI-fabricated case citations in Flycatcher Corp. v. Affable Avenue LLC. Despite explicit court warnings and access to Westlaw and Lexis, he continued submitting unverified AI output - even using AI to draft his response to the court's show-cause order, which contained yet more fake citations. Judge Failla imposed the most severe AI-hallucination sanction yet: default judgment against his client.
The Nuclear Option
Default judgment is the legal equivalent of forfeiting the game. It means the court declares that one side loses entirely, not because the merits of the case were decided, but because one party's conduct was so egregious that the court refuses to let them participate further. It is an exceptionally rare sanction. In Flycatcher Corp. v. Affable Avenue LLC, it became the most severe consequence yet imposed for AI-fabricated legal citations - and the attorney's conduct made it almost inevitable.
On February 5, 2026, Judge Katherine Polk Failla of the Southern District of New York issued a 33-page opinion that methodically documented how attorney Steven Feldman repeatedly filed motions containing AI-fabricated case citations, was warned, was ordered to show cause, responded with more AI-generated text containing more fake citations, and ultimately demonstrated what the judge characterized as an inability to learn from his mistakes.
The Pattern
Feldman represented Affable Avenue LLC as the defendant in a trademark dispute brought by Flycatcher Corp. His filings in support of Affable's motion to dismiss contained case citations that did not exist - a hallmark of AI-generated legal research where the language model fabricates plausible-sounding but entirely fictional case names, citations, and holdings.
When the court identified the fabricated citations, Judge Failla was stern and direct. She told Feldman that he "was not excused from this professional obligation [of verifying that the cases he submitted to the Court were valid] by dint of using emerging technology." This was not ambiguous guidance. It was a clear warning from a federal judge that AI-generated citations require the same verification as any other legal research.
Feldman's response to this warning is what elevated the case from a cautionary tale into something unprecedented. A few days before the hearing on the court's show-cause order - the legal mechanism by which a judge demands an attorney explain their conduct or face sanctions - Feldman submitted a proposed reply brief in further support of Affable's motion to dismiss. This new filing, submitted in the shadow of pending sanctions for submitting unverified AI-generated content, contained yet more AI-fabricated citations.
He had reportedly used AI to draft his response to the court's order about his problematic use of AI. That this response also contained fabricated citations suggests either a remarkable commitment to a failed methodology or a fundamental misunderstanding of what had gone wrong.
The Hearing
At the show-cause hearing, the court explored Feldman's research methods. He acknowledged that he had not actually read the cases he cited in his filings. When the court pointed out that relying on secondary descriptions of what cases say is "not a legitimate way of cite checking or doing research," Feldman agreed: "Absolutely."
Judge Failla described his research methodology as "redolent of Rube Goldberg" - an elaborate, unnecessarily complex contraption that ultimately fails to accomplish its basic purpose. Feldman had access to Westlaw and Lexis, the standard legal research databases where every cited case can be verified with a few keystrokes. He chose instead to trust AI output without checking it against these readily available tools.
When asked how he thought the situation should be resolved, Feldman suggested he could correct the filings by having other attorneys review citations, while avoiding "any use whatsoever of any, you know, artificial intelligence or LLM type of methods." The judge was not persuaded.
The Ruling
Judge Failla's opinion found that Feldman's conduct met the threshold for bad faith. She wrote that his AI misuse "resulting in erroneous citations, exacerbated by his insouciant approach to cite-checking, was done in bad faith." The word "insouciant" - meaning casually unconcerned - captures the court's reading of Feldman's attitude toward the accuracy of his submissions.
The court identified "the most remarkable element of Mr. Feldman's misconduct" as his "continuous pattern of behavior." He was not sanctioned for a single slip-up. He was sanctioned because, after being caught, warned, and ordered to explain himself, he continued doing exactly the same thing. His repeated filings with fabricated citations were, in the court's words, "proof" that he "learned nothing" and had not implemented any safeguards to catch the errors.
Judge Failla explicitly stated that she has no problem with lawyers using AI to assist their research. The issue was not AI use itself but Feldman's failure to verify the AI's output - a professional obligation that applies regardless of what tool generates the initial research. As she put it, "Verifying case citations should never be a job left to AI."
The sanction was default judgment against Feldman's client, Affable Avenue LLC. The court found this was "limited to what suffices to deter repetition of the conduct or comparable conduct by others similarly situated" under Federal Rule of Civil Procedure 11 and the court's inherent powers. The court also found that Feldman had "multiplie[d] the proceedings in [this] case unreasonably and vexatiously" under 28 U.S.C. section 1927, opening the door to potential fee-shifting.
The Fahrenheit 451 Reference
In a detail that made the ABA Journal headline, Judge Failla's opinion included a reference to Ray Bradbury's Fahrenheit 451. The allusion underscored the irony of a legal professional effectively burning the reliability of legal citations by feeding them through an AI system without verification - destroying the very foundation of legal argumentation at a time when the tools to verify it have never been more accessible.
The Precedent
Feldman joined a growing roster of attorneys who have faced sanctions for submitting AI-fabricated citations. The Avianca case in 2023 drew widespread attention when an attorney used ChatGPT for legal research and submitted citations to cases that did not exist. But in the Avianca case, the sanctions were monetary fines and reputational damage. In Flycatcher v. Affable Avenue, the client lost the entire case.
The escalation from fines to default judgment sends a distinct message. Courts have been patient with the first wave of AI citation errors, treating them as learning opportunities with proportionate penalties. Feldman's case demonstrates that patience has limits. An attorney who is caught, warned, and continues submitting unverified AI output is not making a mistake. They are, in the court's view, acting in bad faith.
Feldman had access to Westlaw and Lexis throughout the entire proceeding, which makes his conduct difficult to excuse. Both are legal databases in which verifying a citation is a matter of a few keystrokes. Using AI-generated research without cross-referencing it against these tools is not a technology problem. It is a professional responsibility failure, and the most severe AI-hallucination sanction in US court history is the result.