Sullivan & Cromwell apologized after AI put fake cites in bankruptcy court
In April 2026, Sullivan & Cromwell told a Manhattan bankruptcy judge that an emergency motion it filed in the Prince Global Holdings Chapter 15 case contained AI hallucinations, inaccurate citations, and other errors. Opposing counsel at Boies Schiller Flexner caught the problems first. Andrew Dietderich, co-head of the firm's restructuring practice, apologized in a letter dated April 18, said the firm's AI policies had not been followed, and acknowledged that a secondary review also failed to catch the bogus material. The corrected filing avoided an immediate sanctions story, but it still turned one of Wall Street's prestige firms into the latest exhibit for why AI-assisted legal drafting and vibes-based review are a bad mix.
Sullivan & Cromwell is not the kind of firm that is supposed to end up in this genre. When solo lawyers or small firms get caught filing AI-fabricated citations, the usual postmortem involves some version of "they did not know the tool could make things up." That excuse was already weak in 2023. It is much weaker when the filing comes from one of the most reputation-sensitive firms in American corporate law.
In April 2026, Sullivan & Cromwell told Chief Judge Martin Glenn of the U.S. Bankruptcy Court for the Southern District of New York that an emergency motion it had filed in the Prince Global Holdings Chapter 15 case contained inaccurate citations and other errors, including AI hallucinations. The problem was caught by opposing counsel at Boies Schiller Flexner, not by the firm's own review process.
The Filing
The underlying case was already high profile. Sullivan & Cromwell represented foreign representatives involved in the wind-down of Prince Global Holdings Limited, a Cambodian conglomerate tied to allegations of forced-labor compounds and a massive investment fraud scheme. This was not a sleepy local dispute that would disappear into a routine docket. It was a major bankruptcy matter, in a major court, with serious allegations in the background and sophisticated counsel on all sides.
According to the firm's later apology, the trouble centered on an emergency motion filed earlier in April seeking ex parte and provisional relief. That kind of filing is exactly where big-law lawyers like to project total command of the record and the law. It is supposed to show the court that the request is urgent, carefully grounded, and safe to grant on a compressed timeline.
Instead, the motion reportedly contained inaccurate citations, misquoted authorities, and non-existent legal sources. In other words, it had the classic AI-lawyering failure mode: polished-looking support that does not survive contact with an actual source check.
The Catch
Opposing counsel at Boies Schiller Flexner flagged the errors. That detail matters. The safety system that worked here was not internal training, not internal process, and not a careful second read from inside Sullivan & Cromwell. It was the adversarial system. The other side looked at the authorities, noticed the problems, and forced the issue into the open.
On April 18, Andrew Dietderich, co-head of Sullivan & Cromwell's restructuring practice, sent a letter to Judge Glenn apologizing on behalf of the team. He said the filing included AI hallucinations and acknowledged that the firm's policies governing AI use had not been followed. He also said a secondary review process failed to catch the inaccurate citations and other errors. The firm filed a corrected version.
That sequence is what makes the story more interesting than the usual "lawyer asked chatbot, chatbot lied" plot. Sullivan & Cromwell did not tell the court that it had no policies. It told the court it had policies, training requirements, and safeguards specifically designed to prevent this exact outcome. The safeguards failed because the people using the tool did not follow them, and the backup human review failed to catch the result anyway.
Why This One Stands Out
By 2026, no serious lawyer can plausibly claim ignorance about hallucinated citations. The legal profession has been saturated with warnings since Mata v. Avianca, and judges have spent the past few years spelling out the same point in increasingly irritated prose: you may use AI if you want, but you are still the lawyer who signed the paper. The obligation to verify citations did not arrive with chatbots and does not disappear when a chatbot makes the draft look tidy.
That is what makes a Sullivan & Cromwell incident different from an early adopter embarrassing himself with ChatGPT in 2023. At a firm like this, the real product is not just legal analysis. It is process discipline. Clients pay premium rates partly because they assume there are multiple layers of review between a rough draft and a federal court filing. If an elite bankruptcy practice still winds up apologizing for fabricated or inaccurate authorities, the lesson is not that AI remains mysterious. The lesson is that even expensive institutional process can collapse when people treat AI output as basically reliable unless something feels obviously off.
AI-generated legal citations are dangerous precisely because they do not look obviously off. They arrive in the right format. They sound plausible. They often borrow the shape of real doctrine closely enough that a skim read feels reassuring. If a reviewer is checking for tone, grammar, and overall structure rather than opening each authority and confirming it says what the brief claims, the hallucination gets a free ride.
The firm's own letter also referenced manual error in addition to AI-generated inaccuracy. That is not a comforting distinction. It suggests the failure was not merely "AI got it wrong": it was a rushed or sloppy drafting pipeline in which AI error and ordinary human error reinforced each other instead of canceling each other out.
What the Damage Actually Was
As of the apology, the incident had not yet produced a sanctions order. But the absence of an immediate fine does not mean the damage was trivial. The motion had to be corrected. Opposing counsel had to spend time finding and surfacing the problems. The court had to deal with a credibility mess in the middle of an emergency matter. And Sullivan & Cromwell had to put in writing, on a public docket, that its controls failed.
For most institutions, that would be embarrassing. For a firm whose brand is reliability under pressure, it is worse than embarrassing. It is the sort of incident that makes every future filing by the same team a little more scrutinized. Once opposing counsel knows a major firm has already let hallucinated authority through once, checking every citation becomes an obvious tactical investment. The court, meanwhile, has no reason to extend the benefit of the doubt that prestige firms usually enjoy.
There is also client harm here even if the client was not directly sanctioned. When your lawyers' emergency motion becomes an AI-hallucination story, the court's attention shifts from your merits to your counsel's process failure. That is not what clients are paying for.
The Real Lesson
The easiest bad read of incidents like this is "AI is too dangerous to use." The more accurate read is harsher for the profession: lawyers keep trying to treat verification as a downstream cleanup step instead of the core task. AI makes that mistake easier because it gives you something that looks draft-ready before it is source-ready.
A real cite-check workflow cannot rely on intuition. It has to assume that every AI-supplied authority is untrusted until a human opens the case, reads the relevant passage, and confirms both the citation and the proposition. If a filing used AI anywhere in the drafting chain, the last reviewer needs to know that, and the authority sections should get slower review, not faster review. "Secondary review" is meaningless if the reviewer does not know they are auditing machine-generated assertions.
Sullivan & Cromwell's apology is notable because it strips away the last comforting myth. This is not a story about unsophisticated people getting duped by flashy software. It is a story about a top-tier law firm with policies, training, and senior reviewers still letting fake authority into a federal court filing. That is not a failure of access to best practices. It is a failure to actually practice them.