Judge fined Raja Rajan for AI-made citations (AGAIN 🤦‍♂️)

Judge Kai N. Scott sanctioned defense lawyer Raja Rajan $5,000 on April 20, 2026 after finding that he had again filed AI-generated fake citations in Bunce v. Visual Technology Innovations. Rajan had already been fined $2,500 and ordered to complete AI and legal ethics CLE in the same litigation the year before. This time the judge said she remained appalled by the conduct, ordered more CLE, and warned that a third incident could trigger referral to the Pennsylvania Disciplinary Board. The notable part is not that AI got something wrong. It is that a lawyer, after already being punished for the exact same mistake, did it again.

Incident Details

Severity: Facepalm
Company: Visual Technology Innovations, Inc.
Perpetrator: Legal Counsel
Incident Date:
Blast Radius: Repeat Rule 11 sanctions in the same case; extra CLE; client credibility damage; increased risk of bar referral if it happens again

By the time a lawyer gets sanctioned once for hallucinated case citations, the excuse inventory is already thin. By the time the same lawyer gets sanctioned again for the same failure in the same litigation, the story stops being about AI novelty and starts being about professional stubbornness.

That is where Raja Rajan landed in April 2026. Judge Kai N. Scott of the Eastern District of Pennsylvania ordered Rajan to pay a $5,000 sanction after concluding that he had, for a second time, filed made-up AI-generated citations in Bunce v. Visual Technology Innovations. Scott also ordered additional continuing legal education related to artificial intelligence and legal ethics, required proof of relevant prior courses, and warned that a third round of similar conduct could lead to referral to the Pennsylvania Disciplinary Board.

The Problem With Calling This a Repeat Offense

"Repeat offense" can sound abstract, like a neat label for a press clipping. In this case it was literal. Rajan had already been sanctioned in early 2025 after the court found that motions he filed included non-existent citations, citations to authorities that did not support the propositions offered, and authorities that were no longer good law. Judge Scott's response then was already unusually explicit. She fined him $2,500 and ordered him to complete CLE on AI and legal ethics.

That first order should have ended the matter. Rajan had a federal judge telling him, in unmistakable terms, that using AI without verifying the citations in a signed filing was unacceptable. He had already absorbed public embarrassment, monetary penalties, and mandatory education. He had already learned the lesson in the only format courts reliably use: a sanctions order.

Then he filed another brief with erroneous citations.

What Triggered the Second Sanction

The second incident did not arise from some massive summary judgment filing or a frantic emergency injunction motion. It came at the end of the underlying litigation, in a dispute over travel costs tied to a canceled deposition. Plaintiff Mark Bunce sought reimbursement. Rajan objected. Bunce then alleged that Rajan's objection contained a second round of fake citations.

That procedural posture makes the story worse, not better. By then, Rajan was not just generally on notice that AI-generated citations can be fabricated. He was specifically on notice, from the same judge in the same case, that his own verification practices were deficient. Filing another bad set of citations after that is not a failure to learn what the tool can do. It is a failure to change behavior after the court has already told you exactly what behavior must change.

Bloomberg Law reported that Scott said Rajan could identify no valid reason for not verifying the erroneous citations. That is the right framing. Once the profession has spent years watching judges sanction lawyers for hallucinated authorities, the relevant question is no longer whether AI can produce fake law. It can. The relevant question is whether the signing attorney did the minimum professional work necessary to keep fake law out of the filing. Here, the answer was again no.

The Judge Was Not Amused

Scott's reaction in the second order was sharper because the court was not confronting a first-time mistake. She said the court remained appalled by Rajan's improper conduct. She rejected Rajan's request for what Bloomberg described as a "modest deterrent penalty" of $950, even though Rajan argued that he had already paid more than $73,500 in sanctions to date.

That number is important, but it needs context. The more-than-$73,500 figure was not simply an accumulated AI penalty total. It reflected the broader sanctions history in the case, including other misconduct. That makes the AI repeat offense more damaging, not less. Rajan was not some otherwise flawless litigator who tripped twice on the same chatbot banana peel. He was already litigating under a cloud of sanctions exposure, and he still put defective authorities into another filing.

Scott's warning about a third incident is the clearest sign that the court now sees this as a professional-discipline issue, not a technology-adjustment issue. Courts can tolerate a lot of mediocre lawyering. What they do not tolerate for long is a lawyer repeatedly signing papers that contain false authority after judicial warnings and prior sanctions.

Why This Story Is Different From the Early Hallucination Cases

The early AI-citation cases had a strange novelty to them. Lawyers described ChatGPT as if it were a search engine with occasional creative quirks. Judges still sounded half amazed that a machine would confidently invent cases. Some of the early opinions carried the tone of a profession discovering, in real time, that fluent software is not the same thing as reliable software.

That phase is over.

In a 2026 repeat-sanctions case, nobody gets to act surprised. The legal press has spent years covering hallucinated citations. Bar groups have issued guidance. Law firms have written memos. Judges have lectured the bar in opinion after opinion. Rajan himself had already received an order, a fine, and mandatory CLE over the same conduct. When it happened again, the problem was not lack of awareness. It was disregard.

That is what makes this a particularly clean Vibe Graveyard story. The AI did what AI systems often do when used carelessly in legal work: it produced plausible-looking authorities that were wrong. The lawyer then did what lawyers are not supposed to do: sign the filing without making sure the cited authorities were real and supportive. The court did what courts increasingly do now: punish the conduct and ratchet up the consequences.

The Client Problem

Repeated citation hallucinations are not just embarrassing for counsel. They are bad for clients. Visual Technology Innovations and the individual defendants did not need their counsel generating collateral disputes over fabricated legal support. Every time the court has to stop and deal with whether the authorities in a filing are real, the client's actual position gets buried under a competence fight. That hurts credibility, wastes money, and gives the other side a strategic opening.

It also changes how future papers are read. Once a judge knows a lawyer has twice filed hallucinated citations, every later filing invites a more suspicious kind of reading. Opposing counsel has every incentive to check everything. The client is left paying for a representation style that now triggers extra skepticism by default.

The Boring Rule That Keeps Winning

Courts keep returning to the same conclusion because the rule is boring and durable: lawyers must verify the authorities they cite. The duty is old. AI does not soften it. If anything, AI makes manual verification more necessary because the systems generate text with exactly the kind of confidence and formatting that tempts a rushed attorney to treat it as pre-checked.

The Rajan story is what happens when that boring rule collides with a lawyer who either does not believe it applies to his workflow or keeps acting as though it does not. After one sanctions order, maybe you can call it negligence shaped by new technology. After two, it looks more like a refusal to internalize a professional duty that has already been explained, monetized, and placed on the record.

The court's next step, if there is a next time, is not hard to predict. Scott already said she would not hesitate to involve the disciplinary authorities. That is where repeated AI-citation misconduct naturally goes once judicial patience runs out. The profession is moving past the stage where hallucinated citations are treated as a weird tech mishap. They are now being treated the same way other false legal submissions are treated: as a lawyer problem.
