Two lawyers sanctioned differently for same filing with AI-fabricated citations


Attorneys Yen-Yi Anderson and Jeffrey Goldin jointly filed a motion in Lifetime Well v. IBSpot containing at least eight AI-generated false citations. Judge Kearney imposed differential sanctions based on their responses: Anderson, who blamed time pressure and fired her law clerk rather than accepting responsibility, received $4,000 in monetary sanctions. Goldin, who promptly accepted responsibility and implemented remedial measures, received no monetary penalty.

Incident Details

Severity: Facepalm
Company: Lifetime Well LLC / IBSpot (client case compromised)
Perpetrator: Attorney
Incident Date:
Blast Radius: Client's motion to dismiss compromised; $4,000 sanction for one attorney; both required to distribute ruling and AI policies to legal communities

The Filing

On November 4, 2025, attorneys Yen-Yi Anderson and Jeffrey Goldin jointly submitted a motion to dismiss in Lifetime Well LLC v. IBSpot USA Inc. on behalf of their client IBSpot. The case was before Judge Mark A. Kearney in the U.S. District Court for the Eastern District of Pennsylvania. Anderson served as lead counsel, admitted pro hac vice from New York. Goldin was Philadelphia-based local counsel who signed and filed the motion.

The motion contained at least eight fabricated case citations - cases that did not exist, generated by AI legal research tools. How the fake citations got into the filing is itself a story about the delegation chain that law firms rely on. The citations originated with Anderson's team in New York. Goldin, as signing counsel, filed the motion without independently verifying the citations it contained.

For anyone keeping count, this was not the first, second, or even tenth time a court had dealt with AI-fabricated citations in legal filings. By January 2026, federal judges across the country had built up a small body of case law on exactly this problem, with sanctions ranging from $1,500 to $10,000 depending on the circumstances. Judge Kearney's opinion in this case would add to that collection - but with an unusual twist.

How It Was Caught

The fake citations came to light during the court's review of the motion. Once identified, Judge Kearney ordered both attorneys to show cause why they should not be sanctioned under Federal Rule of Civil Procedure 11, which requires attorneys to certify that legal arguments in their filings are "warranted by existing law or by a nonfrivolous argument." Filing motions built on cases that don't exist falls short of that standard.

The show cause order gave both attorneys the chance to explain themselves. Their responses could not have been more different, and those differences ended up mattering more than the underlying error.

Anderson's Response: Deflection

Anderson, the lead counsel who actually drafted the motion (or whose team did), blamed time pressure. She argued that deadline constraints had prevented adequate review of the citations before filing. She also fired her law clerk, presenting this as a corrective measure.

Judge Kearney was not impressed. Blaming time pressure for filing fabricated legal authority is a bit like blaming rush hour traffic for running a red light - the external circumstance doesn't excuse the fundamental obligation. Lawyers are required to verify that the cases they cite exist before submitting them to a court. That obligation doesn't have a deadline exception.

The decision to fire a law clerk also worked against Anderson rather than for her. Terminating a subordinate while declining to accept personal responsibility suggested that Anderson saw the problem as someone else's failure rather than her own. As lead counsel whose name was on the motion, she bore the verification responsibility regardless of who drafted the initial text.

Anderson received $4,000 in monetary sanctions.

Goldin's Response: Accountability

Goldin took the opposite approach. He immediately conceded that the false citations "involved the use of generative artificial intelligence tools" and that they "originated with IBSpot.com's specially admitted counsel in New York and were not picked up by Attorney Goldin prior to filing." He then did something Anderson did not: he accepted responsibility for his signature on the filing.

But Goldin went further than a simple apology. He voluntarily completed a continuing legal education course on the ethical and responsible use of generative AI. He spent what he described as "many hours reading over virtually every recent opinion in the Third Circuit and in other jurisdictions on the dangers of generative artificial intelligence and the obligations of local counsel." He and Anderson's firm agreed that all future filings would be submitted to Goldin at least 48 hours before the filing deadline, accompanied by a table of authorities and all supporting decisions, statutes, and secondary authority, with Goldin certifying his independent review.

He also publicly apologized to the court and all counsel through his response filing.

Goldin received no monetary penalty.

The Differential

The $4,000 gap between Anderson's and Goldin's sanctions was the point of Judge Kearney's ruling. The underlying error was the same - both lawyers' names were on a filing with eight fabricated citations. Both failed to verify the legal authority before submission. Both violated Rule 11. But Judge Kearney treated the post-discovery conduct as the decisive factor.

Anderson deflected blame and terminated an employee. Goldin owned the mistake, educated himself, and instituted structural safeguards. The court rewarded the latter approach and punished the former, establishing that how an attorney responds to an AI citation failure matters as much as the failure itself.

Both attorneys were required to distribute Judge Kearney's ruling and the court's AI policies to their respective legal communities - a form of professional embarrassment that carries its own cost beyond monetary fines.

Sanctions Across the Courts

Judge Kearney's opinion was notable for including a comprehensive survey of AI citation sanctions from courts across the country, effectively creating a reference guide for how federal judges had handled similar situations. The picture was consistent: sanctions ranged from $1,500 to $10,000, with the specific penalty driven primarily by the attorney's level of cooperation, candor, and remedial response.

Judge Scott had issued $2,500 in a case where an attorney cited fabricated cases in two separate motions. Judge Crone imposed $2,000 where an attorney didn't verify citations even after opposing counsel flagged the nonexistent authorities. Magistrate Judge Wormuth sanctioned an attorney $3,000 and required self-reporting to state bar disciplinary boards. Judge Slaughter imposed $3,000 against a drafting attorney and $10,000 against signing co-counsel where the signing firm had skipped its own proofreading procedures.

The pattern across all these cases was the same: attorneys who owned their mistakes and took concrete corrective steps received lighter penalties than those who deflected, minimized, or continued the behavior.

The Delegation Problem

Underneath the sanctions question lies a structural problem that Lifetime v. IBSpot illustrates clearly. Modern law practice involves layered delegation: lead counsel to associates, associates to paralegals, paralegals to research tools. When one of those research tools is a generative AI system that fabricates plausible-sounding case citations, the delegation chain becomes a game of telephone where nobody verifies the original signal.

Anderson's team in New York used AI tools that produced fake citations. Those citations were incorporated into a motion. The motion was sent to Goldin in Philadelphia to sign and file. Goldin filed it without independently checking the citations. At no point in this chain did a human being confirm that the cited cases existed.

This is the same failure pattern that appeared in the Avianca case, in the Amarsingh case, in the Kansas case, and in dozens of others. The AI generates confident-looking citations. The attorney trusts the output. The court discovers the fraud. The attorney gets sanctioned. The only variable is how badly they handle the aftermath.

What This Case Added

Judge Kearney's contribution to the growing body of AI sanctions law was the explicit establishment of a differential framework. Previous courts had individual data points - this attorney got $2,000, that attorney got $3,000 - but the outcomes were in separate cases with different facts. Lifetime v. IBSpot offered a controlled comparison: two lawyers, one filing, same fabricated citations, different responses, different penalties.

The message was clear: the court will treat post-discovery conduct as a primary factor in determining sanctions. Accepting responsibility, educating yourself, and implementing safeguards earns leniency. Blaming others, firing subordinates, and avoiding personal accountability earns the full penalty.

Both attorneys still faced remedial requirements - distributing the ruling and adhering to new filing review procedures. The court wasn't letting either of them off entirely. But the monetary difference between $4,000 and $0 was Judge Kearney's way of telling the legal profession that how attorneys handle AI failures matters.

By January 2026, the lesson should not have required restating: verify your citations. AI legal research tools fabricate cases. If an attorney uses these tools and submits the output without checking it against an actual legal database, they are gambling their professional reputation and their client's case on a system that regularly invents authority out of nothing. The tools don't flag their own hallucinations. That remains the attorney's job.
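The verification step itself is mechanical enough to sketch. The Python snippet below is a minimal illustration, not a real tool: it pulls a couple of common reporter-citation formats out of a draft with a simplified regex, then flags any citation absent from a verified set, which stands in for a query against an actual legal database such as Westlaw, Lexis, or a public case index. The function names, the regex, and the toy database are all illustrative assumptions, not part of any real product.

```python
import re

# Rough pattern for a few common reporter citations, e.g. "123 F.3d 456"
# or "575 U.S. 320". Real citation formats are far more varied; this is
# a simplified illustration, not production-grade citation parsing.
CITATION_RE = re.compile(
    r"\b(\d{1,4})\s+(U\.S\.|F\.[234]d|F\. Supp\. [23]d)\s+(\d{1,4})\b"
)

def extract_citations(text):
    """Return reporter citations found in a draft filing."""
    return [" ".join(m.groups()) for m in CITATION_RE.finditer(text)]

def flag_unverified(citations, verified_db):
    """Flag citations absent from a verified source.

    `verified_db` stands in for a lookup against a real legal
    database; here it is just a set of known-good citation strings.
    """
    return [c for c in citations if c not in verified_db]

# A draft containing one real-looking citation and one fabrication.
draft = ("See Smith v. Jones, 123 F.3d 456 (3d Cir. 1997); "
         "Doe v. Roe, 999 U.S. 111 (2030).")
known = {"123 F.3d 456"}  # pretend only this citation checks out

found = extract_citations(draft)
suspect = flag_unverified(found, known)
print(suspect)  # the fabricated citation, flagged for human review
```

The point of the sketch is the workflow, not the regex: every citation in a filing gets extracted, every one gets checked against an authoritative source, and anything unmatched goes to a human before the document is signed. That human review is exactly the step that was skipped at every link of the delegation chain in this case.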
