Ontario lawyer referred to law society after factum contained seven invented quotations
Ontario lawyer Khalid Parvaiz was referred to the Law Society of Ontario by Justice Frederick Myers after filing a factum containing seven "wholly made up" quotations attributed to real court cases. Parvaiz claimed the fabricated passages were "human errors" from "misreading of the cases" and denied using AI. Justice Myers was unconvinced, noting the alleged quotations were "completely made up" rather than paraphrased or miscited, and warned that the cover-up - if Parvaiz was being untruthful about the source - could carry more severe consequences than the original error.
Seven Quotations, Zero Sources
A factum, in Canadian legal practice, is essentially a brief - a written argument submitted to the court that sets out the facts of a case, the legal issues, and the authorities that support the party's position. Lawyers are expected to cite real cases, quote them accurately, and represent their holdings faithfully. This is not optional. It is a foundational professional obligation.
Khalid Parvaiz, an Ontario lawyer, filed a factum that contained seven quotations attributed to real court decisions. The case names were real. The courts were real. The quotations were not. Seven passages, presented as direct quotes from judicial opinions, were fabricated. Not paraphrased poorly. Not taken out of context. Not imprecisely summarized. Made up.
Justice Frederick Myers of the Ontario Superior Court of Justice reviewed the factum and identified the problem. He had seen this pattern before - in May 2025, he had dealt with a separate case (Ko v. Li) where a Toronto lawyer included AI-generated fake citations in court submissions. That lawyer had been ordered to show cause for contempt before eventually admitting the error and apologizing.
This time, Myers was looking at fabricated quotations rather than fabricated case names. The cases cited by Parvaiz existed. What he had allegedly quoted from them did not.
The Denial
When confronted by the court, Parvaiz offered an explanation: the fabricated quotations were "human errors" resulting from a "misreading of the cases." He denied using artificial intelligence.
Justice Myers did not find this convincing. His reasons were practical. The quotations in question were not misreadings. A misreading produces a wrong interpretation or an inaccurate paraphrase. These quotations were "completely made up" - passages that bore no relationship to anything actually written in the cited decisions. You cannot "misread" a case into generating a quote that doesn't appear anywhere in the decision, its headnotes, or any related materials.
Myers pointed out a secondary concern: if Parvaiz had used AI and was denying it, the dishonesty to the court could carry consequences more severe than the original filing error. Canadian courts, like their American counterparts, have been grappling with AI-fabricated legal citations since large language models became widely available. Ontario's Rule 4.06.1(2.1), enacted in 2024, requires lawyers to submit a signed statement certifying the authenticity of every authority cited in a factum. The rule exists precisely because AI tools generate fake citations with enough regularity that the profession needed a formal certification requirement to combat it.
Whether Parvaiz used AI or generated seven fabricated quotations through some other means, the practical outcome was the same: a court filing that attributed made-up language to real judges.
The Referral
Justice Myers referred the matter to the Law Society of Ontario, the regulatory body responsible for licensing and disciplining lawyers in the province. A referral from a sitting Superior Court judge is not a casual recommendation. It triggers a formal process that can result in investigation, a hearing before the Law Society Tribunal, and sanctions ranging from reprimands to license suspension.
The Law Society's jurisdiction in this area has been tested recently. In January 2026, the Law Society Tribunal addressed a separate case where a lawyer named Mazaheri admitted to using the Grok AI chatbot for research and drafting. That lawyer's submissions contained fabricated citations, broken hyperlinks, and procedural rules that did not exist. The tribunal described the materials as "problematic AI-generated materials" and noted it was potentially the first instance in Canada of AI-fabricated content being submitted in a disciplinary proceeding.
Three separate incidents involving Ontario lawyers and fabricated legal content within roughly a year. Each with its own flavor - one involving fake case names, one involving AI-drafted disciplinary filings with broken citations, and one involving seven fabricated quotations in a factum - but all pointing at the same underlying problem.
The Evidence Question
One of the difficult aspects of Parvaiz's case is the denial itself. When a lawyer admits to using AI and not verifying the output, the path forward is relatively clear: the lawyer was negligent, the profession's duty to verify authorities was breached, and proportional consequences follow.
When a lawyer denies AI use, the court faces a harder question. Seven fabricated quotations in a legal filing are consistent with AI hallucination. They are also, at least theoretically, consistent with a lawyer who fabricated the quotations manually, or one who relied on some other source that generated false content. The pattern - real case names paired with invented language - matches what large language models produce when asked for legal citations. But pattern-matching is not proof.
Justice Myers handled this by focusing on what could be established: the quotations were fabricated regardless of how they were produced. The lawyer's explanation - misreading - was not credible given the nature of the fabrication. And the dishonesty, if any, about the method would compound the problem. By referring the matter to the Law Society rather than resolving it himself, Myers ensured that the investigation could include the kind of forensic analysis (examining the lawyer's computing records, for instance) that might resolve the question of whether AI was actually used.
The Certification Rule
Ontario's Rule 4.06.1(2.1) was supposed to prevent exactly this kind of incident. The rule, enacted in 2024 in response to the growing number of AI-fabricated citation cases, requires lawyers to personally certify the authenticity of every authority cited in a factum. A signed statement attesting that the citations are real and the quotations are accurate.
If the factum was filed after the rule took effect, Parvaiz would have signed such a certification while submitting a document containing seven fabricated quotations. That creates a separate potential issue: a false certification to the court.
The rule's design assumes that requiring personal certification will motivate lawyers to actually verify their citations before filing. If lawyers sign the certification without performing the verification, the rule becomes performative - a box to check rather than a safeguard. Early evidence suggests this may be happening.
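The verification the rule contemplates is, at its core, mechanical: confirm that each quoted passage actually appears in the decision it is attributed to. A minimal sketch of that check - hypothetical helper names, and assuming the full decision text has already been retrieved from a database such as CanLII - might look like this:

```python
import re

def normalize(text: str) -> str:
    """Collapse whitespace and curly quotes so minor formatting
    differences between a factum and a decision don't matter."""
    text = text.replace("\u201c", '"').replace("\u201d", '"')
    text = text.replace("\u2019", "'")
    return re.sub(r"\s+", " ", text).strip().lower()

def quotation_appears(quotation: str, decision_text: str) -> bool:
    """True only if the quoted passage occurs verbatim (after
    normalization) somewhere in the decision's full text."""
    return normalize(quotation) in normalize(decision_text)

# A fabricated quotation fails this check no matter how plausible
# the citation looks, because the words simply are not in the text.
decision = "The appeal is dismissed.  Costs are awarded to the respondent."
print(quotation_appears("The appeal is  dismissed.", decision))      # True
print(quotation_appears("Justice requires flexibility.", decision))  # False
```

Real verification is harder than a substring match - quotations may span paragraph breaks or omit words with ellipses - but the point stands: the check is automatable, and a fabricated quote cannot survive it.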
The Canadian Pattern
The Ontario incidents are part of a broader pattern across Canadian courts. In May 2025, the Ko v. Li case in Toronto. In January 2026, the Mazaheri disciplinary proceeding. In March 2026, the Parvaiz factum. Each incident received media coverage. Each prompted commentary from the judiciary and the bar. Each resulted in consequences for the lawyer involved. And yet the incidents continue.
The legal profession's response to AI hallucination follows a recognizable cycle: an incident occurs, a court imposes consequences, the bar issues guidance, and then another incident occurs. The cycle repeats not because lawyers are unaware of the risk - by 2026, every legal professional in Canada with an internet connection has heard of AI-fabricated citations - but because the tools remain attractive to lawyers working under time pressure and the verification step remains something that feels optional until a judge catches it.
Justice Myers, who has now dealt with two separate fabricated-citation cases, is among the judges who appear wearied by the pattern. His observation that the cover-up might be worse than the crime was directed at Parvaiz, but it applies more broadly: the legal profession's growing familiarity with AI hallucination is making the "I didn't know" defense progressively less available. What remains is a choice between checking the citations and hoping nobody notices if you don't.