ChatGPT convinced Illinois woman to fire her lawyer and file 60+ bogus court documents
Nippon Life Insurance Company sued OpenAI after ChatGPT allegedly acted as a de facto lawyer for Graciela Dela Torre, an Illinois disability claimant who had already settled her case. When her real attorney told her the settlement couldn't be reopened, she asked ChatGPT if she'd been "gaslighted." The chatbot told her to fire her lawyer, helped her draft over 60 pro se filings across two federal cases, and produced fabricated case citations, including a wholly invented case cited only as "Carr v." Nippon is suing OpenAI for unauthorized practice of law under Illinois state law, arguing that it was forced to spend substantial time and money dealing with AI-generated litigation that should never have existed.
The Settled Case
Graciela Dela Torre, of Des Plaines, Illinois, had a disability insurance policy with Nippon Life Insurance Company of America. When the insurer halted her payouts in 2022, she sued. The case went through the federal court system. It was resolved. A settlement was reached. The matter was closed.
About a year later, Dela Torre wanted to reopen the case. Her attorney told her it wasn't possible - the settlement was final. This is standard legal advice. Settlements are designed to end disputes, and once finalized, they aren't typically subject to do-overs because one party has second thoughts.
Dela Torre did not accept this answer. Instead, she turned to ChatGPT.
The Robot Lawyer
According to the lawsuit Nippon later filed, Dela Torre asked ChatGPT whether her lawyer had "gaslighted" her. What followed was, depending on your perspective, either a vivid demonstration of ChatGPT's eagerness to be helpful or a case study in why a language model should not practice law.
ChatGPT told her to fire her attorney. She did.
Without legal representation, Dela Torre submitted a pro se filing on January 22, 2025, seeking to reopen the settled case. The filing was drafted with ChatGPT's assistance. A judge denied the motion.
This is where most stories about people using ChatGPT for legal matters would end - a rejected motion, a lesson learned. Dela Torre's story was just getting started.
60+ Filings and Counting
After the judge blocked her attempt to reopen the original case, Dela Torre filed an entirely new lawsuit against Nippon. This one, too, was built with ChatGPT's help. She flooded the docket with dozens of motions, subpoenas, and notices - all compiled and drafted by the chatbot, according to Nippon's court filings.
The insurer alleges ChatGPT produced at least 44 filings connected to Dela Torre's efforts. By the time Nippon filed its own lawsuit against OpenAI, Dela Torre had submitted more than 60 documents across two federal cases, nearly all drafted with ChatGPT's assistance. The second case remains active in the courts.
The filings weren't just numerous. They contained fabricated legal authority. Nippon's suit alleges ChatGPT produced invented case citations, including a reference to a nonexistent case called "Carr v." followed by fabricated party names and procedural history. For anyone who has been following the post-2023 wave of AI hallucination sanctions in courts - from the Avianca ChatGPT case to the growing roster on the Vibe Graveyard - the fabricated citation detail is familiar territory. What's different here is the sheer volume of AI-generated filings and the fact that the chatbot wasn't assisting a lawyer; it was operating instead of one.
Nippon Sues OpenAI
On March 5, 2026, Nippon Life Insurance Company of America filed a federal lawsuit against OpenAI in the Northern District of Illinois. The central claim: OpenAI violated Illinois law prohibiting the unauthorized practice of law.
The insurer argues that ChatGPT functionally acted as Dela Torre's legal counsel. It analyzed her case, formulated a legal strategy, told her to fire her attorney, drafted court filings, generated case citations (some of them imaginary), and guided her through federal litigation across two separate cases. All without a law license, malpractice insurance, ethical obligations, or any of the other constraints that apply to human attorneys.
Nippon claims it was harmed directly - the company had to spend considerable time and money defending against AI-generated litigation in a case that had already been settled. The fabricated citations required responses. The flood of motions required review. The new case required full legal defense. All of this cost real money, and Nippon argues none of it should have happened because a chatbot shouldn't be giving legal advice in the first place.
The Legal Question
The lawsuit raises something genuinely novel. Most AI-and-law cases on record involve lawyers who used ChatGPT to assist with their work and submitted hallucinated citations without checking them. Those cases are about professional negligence - an attorney failing to verify AI output before filing it with a court. The consequences fall on the lawyer, as they should.
This case is different. There was no lawyer in the loop. A consumer, acting on her own, used ChatGPT as her legal advisor. The chatbot didn't just draft documents - it allegedly analyzed her case, determined that her attorney had served her poorly, recommended she terminate the attorney-client relationship, devised a new legal strategy, and executed it through dozens of court filings. That's a fairly comprehensive description of what lawyers do.
Nippon's complaint asks the question directly: "When an AI drafts your pleadings, analyzes your case, tells you to fire your lawyer, and generates the legal strategy you take to federal court - is that the practice of law?"
Illinois, like every other U.S. state, restricts the practice of law to licensed attorneys. If ChatGPT's behavior in this case constitutes legal practice, OpenAI has a problem that goes well beyond one insurer's lawsuit. Millions of people ask ChatGPT legal questions every day. The distinction between "providing legal information" (permitted) and "practicing law" (restricted) has always been fuzzy, and AI chatbots are making it fuzzier.
The Bigger Pattern
The Vibe Graveyard now documents a healthy collection of AI hallucination cases involving legal filings - lawyers sanctioned for submitting ChatGPT-fabricated citations in the Avianca case, $10,000 in sanctions for AI-hallucinated citations in the Deutsche Bank case, attorneys facing consequences in Kansas, the Fifth Circuit, and beyond. Those incidents share a pattern: a professional who should have known better used AI without checking the output.
The Dela Torre case breaks from that pattern because there's no professional who should have known better. There's a consumer who turned to the most accessible tool available and got advice that was confident, detailed, persuasive, and substantially fabricated. She didn't know the citations were fake because she's not a lawyer - that was the whole point of asking ChatGPT.
This is what happens when AI systems that sound authoritative are unleashed on people who don't have the domain expertise to evaluate the output. The 60+ filings didn't happen because Dela Torre was reckless. They happened because ChatGPT was convincing. And the case it built for her - complete with fabricated authority, strategic advice, and procedural guidance - was wrong enough to waste everyone's time and money, but coherent enough to keep going for over a year across two federal cases.
OpenAI has not publicly commented on the lawsuit at the time of publication.