Am Law 100 firm Gordon Rees caught twice filing AI-hallucinated citations
Gordon Rees Scully Mansukhani, one of the largest U.S. law firms, was caught filing AI-hallucinated case citations in an Alabama bankruptcy proceeding. An associate initially denied using AI under oath before the firm acknowledged the fabricated references and paid over $55,000 in sanctions and fees. Months later, in February 2026, the same firm was reported to have filed a second brief containing hallucinated citations in a separate matter, making it the first Am Law 100 firm known to be a repeat offender.
Gordon Rees Scully Mansukhani is not a small firm. With more than 1,800 attorneys and offices across the United States, it sits comfortably in the Am Law 100. Its attorneys handle complex litigation, corporate transactions, and regulatory matters for major clients. It is the kind of firm where you'd expect institutional safeguards to prevent exactly the kind of thing that happened here.
The Alabama Bankruptcy Case
The first incident involved a Chapter 11 bankruptcy proceeding for Jackson Hospital & Clinic, Inc., in the U.S. Bankruptcy Court for the Middle District of Alabama. A Gordon Rees senior counsel filed a motion on behalf of one of the parties in the case. The motion contained fabricated quotes from case law and statutes, mis-cited case law, and wildly misstated the issues and holdings of existing cases.
The opposing party, Jackson Investment Group (the debtor-in-possession lender), caught the problems and filed an objection requesting sanctions. Jackson Investment Group's attorneys accused the Gordon Rees lawyer of using AI to draft the filing and failing to verify any of the cited authority. The fabrications were not subtle. Case names were invented, holdings were manufactured, and statutory quotes appeared that did not exist in the cited sources.
U.S. Bankruptcy Judge Christopher Hawkins held a hearing. What followed the hearing made the filing itself look minor by comparison.
The Denial
When confronted about whether artificial intelligence had been used to generate the filing, the Gordon Rees attorney initially denied it. This is the decision that elevated the incident from embarrassing to serious. Filing a brief with hallucinated citations is a professional failure that can be attributed to carelessness or over-reliance on a tool. Denying AI use when directly asked about it by a court is a credibility problem that goes to the core of an attorney's relationship with the tribunal.
Attorneys have an absolute obligation of candor to the court. Making false statements to a judge - whether in writing or orally - is a violation of fundamental ethical duties. This is not a gray area. Every attorney licensed in the United States knows this. Gordon Rees, as a firm, knew this. The attorney filing the brief knew this.
The denial did not hold. Gordon Rees eventually acknowledged the AI misuse, with the firm stating that professional and ethical duties are violated "when lawyers are not candid" with the court in their filings and responses to questions. The firm admitted what had happened after initially saying it had not happened.
The Fallout
Judge Hawkins issued a show cause order, requiring Gordon Rees and the attorney to explain why they should not be sanctioned for making "false statements of fact or law to the court." Gordon Rees said it did not know about the proceedings until after the show cause order was issued, distancing the institution from the individual attorney's conduct.
In October 2025, Gordon Rees told the court it was "profoundly embarrassed" by the incident and would accept whatever sanctions were issued. Days before the hearing, the firm reimbursed more than $55,000 in legal fees to the law firms representing Jackson Hospital and its lender.
In November 2025, Judge Hawkins issued his ruling. He declined to formally sanction the firm itself, finding that Gordon Rees "took reasonable steps" to address the risk of AI-generated errors in legal work. The firm had expanded its internal AI guidelines to include a new "cite checking" policy and had paid the $55,000. But Hawkins did publicly reprimand the attorney who had submitted the filings. By the time of the ruling, that attorney was described as "former" senior counsel at Gordon Rees, having apparently departed the firm.
The judge's decision to spare the firm while reprimanding the individual was consistent with how other courts had handled similar incidents - treating the AI misuse as an individual attorney's failure rather than an institutional one. It also reflected that Gordon Rees had taken remedial steps: the policy change, the fee reimbursement, and the apparent departure of the responsible attorney.
Then It Happened Again
In December 2025, Gordon Rees surfaced in another AI hallucination incident in a separate case, Villalovos-Gutierrez v. Pol, where a judge again reprimanded the firm for filings containing AI-generated fabricated citations. And reports from early 2026 indicated a third incident involving Gordon Rees attorneys submitting hallucinated citations in the case of Huynh v. Redis Labs.
Above the Law reported on the pattern under a headline that captured the legal profession's reaction: "Am Law 100 Firm Accused Of Filing Brief Riddled With AI Hallucinations... AGAIN!" The publication observed the reputational damage extended far beyond the individual filings:
"Whether any specific citation was generated by AI - indeed, whether any specific citation is even wrong as opposed to merely debatable - opposing counsel now has every incentive to scrutinize any citation out of the firm with a jeweler's loupe. The damage of a hallucination incident spills over into all of the firm's litigation efforts and it will take a long time to repair that harm."
This is the compounding problem for a repeat offender. After the first incident, opposing counsel in any case involving Gordon Rees has a reason to check every citation more carefully. After the second and third, the checking becomes an adversarial tool in itself. Every brief the firm files in every court is now subject to heightened scrutiny, not because of the quality of the work, but because of the pattern of uncaught AI fabrication.
The Policy Response That Didn't Work
After the Jackson Hospital incident, Gordon Rees implemented what the court accepted as a reasonable response: expanded AI guidelines and a new cite-checking policy. This was enough to satisfy Judge Hawkins that the firm had taken the problem seriously. It was not enough to prevent the same type of error from occurring again in subsequent cases.
The gap between "having a policy" and "the policy actually working" is where this story pivots from individual misconduct to institutional failure. A cite-checking policy tells attorneys to verify their citations. It does not make them do it. And in a firm of 1,800 attorneys spread across dozens of offices, the enforcement of any internal policy depends on a supervision and compliance structure that reaches every attorney doing every piece of work.
The fundamental problem is that AI-generated legal text looks correct. It uses proper citation formats, it sounds like legal writing, and it presents fabricated authority with the same confidence as real authority. An attorney skimming their own brief is unlikely to catch a fabricated citation unless they specifically look up every case cited and verify its existence and holding. That takes time. In a high-volume practice - the kind of practice a firm the size of Gordon Rees runs - time is precisely the resource that AI tools are supposed to save.
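The mechanical half of that verification burden can be illustrated with a short sketch. The snippet below, a hypothetical example not drawn from any firm's actual tooling, uses a simplified regular expression (nowhere near a complete Bluebook parser; the reporter list is illustrative only) to pull reporter-style citations out of a brief so that each one can be looked up and confirmed by hand:

```python
import re

# Hypothetical sketch: mechanically extract reporter-style citations from a
# brief so each can be verified against a real reporter or legal database.
# The reporter abbreviations listed here are illustrative, not exhaustive.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+"                                    # volume number
    r"(?:U\.S\.|F\.2d|F\.3d|F\.4th|B\.R\.|So\. 2d|So\. 3d)\s+"
    r"\d{1,4}\b"                                       # first page
)

def extract_citations(text: str) -> list[str]:
    """Return every reporter-style citation string found in the text."""
    return CITATION_RE.findall(text)

brief = (
    "See In re Example Corp., 123 B.R. 456 (Bankr. M.D. Ala. 1991); "
    "cf. Smith v. Jones, 45 F.3d 678 (11th Cir. 1995)."
)
for cite in extract_citations(brief):
    # Extraction is the easy part. Each citation still has to be confirmed
    # by a human: does the case exist, is the name right, does the holding
    # actually say what the brief claims?
    print(cite)
```

Note what the sketch cannot do: it finds strings that look like citations, which is exactly the property hallucinated citations share with real ones. Confirming existence and holding still requires a lookup in an authoritative database, which is the step the cite-checking policy was supposed to guarantee.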
The Industry Context
Gordon Rees was not operating in a vacuum. By late 2025, AI-hallucinated citations had become a well-documented phenomenon in the legal profession. The Avianca case in 2023 - where an attorney submitted ChatGPT-fabricated case citations and was sanctioned - had put every lawyer in the country on notice. Judges in multiple jurisdictions had begun requiring attorneys to disclose whether AI was used in preparing filings. Some courts had adopted local rules specifically addressing AI-generated content.
Judge Hawkins's decision noted that bankruptcy judges in Illinois and South Carolina had also recently sanctioned attorneys for submitting filings with hallucinated citations. The problem was industry-wide, and the judicial response was becoming increasingly standardized: if you file fabricated citations, you will be punished. If you lie about it, you will be punished more.
For Gordon Rees specifically, the repeated incidents raised a question that the legal market was watching closely: at what point does a pattern of AI misuse become a client risk? Clients hire large law firms partly for quality assurance. The implicit promise is that a brief carrying the firm's name has been through internal review sufficient to catch errors. Multiple AI hallucination incidents at the same firm undermine that promise in a way that is visible to every current and prospective client.
The firm updated its policies after the first incident. The policies did not prevent the second or third. At 1,800 attorneys, Gordon Rees demonstrated that an institutional policy is only as effective as the individual compliance of every attorney who picks up an AI tool and decides whether to check the output before filing it with a court.