A repository of cases of ‘AI Hallucinations’ in Common Law Courtrooms around the world
Project Leads: Aiman Mairaj and Janhavi Gupta
Co-Curators: Ayat Shaukatullah, Saniya Malik, Shabi Tauseef, Akshat Pahuja
Research Team, Virtuosity Legal
Virtuosity Legal’s Research Team presents the first-of-its-kind repository tracking AI hallucinations in common law courtrooms around the world. The growing use of artificial intelligence in legal practice has led to mishaps in which courts have discovered fake and fictitious case-law citations in the filings before them. This database is a timely and urgent caution: AI may accelerate work, but it cannot replace legal research, ethical responsibility, or professional scrutiny.
Because substance, authority, and insight cannot be automated.
Note: Please access the repository on a desktop for a better experience.
| Case Name | Citation | Order Issued | Jurisdiction | Court | Case Summary | Misconduct | Court Findings | Punishment / Consequences | Key Takeaway |
|---|---|---|---|---|---|---|---|---|---|
| Buckeye Trust v. PCIT-1 Bangalore | ITA No. 1051/Bang/2024 | December 2024 | India | Income Tax Appellate Tribunal (ITAT), Bangalore Bench | This case concerns the tax implications of a transaction in which an individual transferred a partnership interest to form a trust valued at ₹699 crores. The legal team argued that a partnership interest is not ‘property’ under tax law. The tribunal equated partnership interests with shares and taxed the transfer, contrary to usual precedent. | The tribunal cited three fictional Supreme Court judgments and a fabricated Madras HC judgment. These non‑existent cases were reportedly generated by ChatGPT and mistakenly included in the ruling. Additionally, four referenced judgments listed elsewhere were missing from official archives. | The ITAT failed to verify citations and incorporated fake authorities into its ruling, demonstrating an absence of due diligence when using AI outputs. | No explicit sanctions noted. The ruling itself became erroneous through reliance on hallucinated authorities, undermining the credibility of the tribunal. | AI outputs must be cross‑checked; judicial bodies cannot rely blindly on generative AI without verification. |
| Lacey v. State Farm Gen. Ins. Co. | No. 2:24‑cv‑05205‑FMO‑MAA, 2025 WL 1363069 (C.D. Cal. May 5, 2025) | May 5, 2025 | United States | U.S. District Court for the Central District of California | Jackie Lacey, former Los Angeles District Attorney, sued State Farm over a denied professional liability policy claim. During discovery, filings contained fabricated cases and incorrect quotes. The attorneys had used AI tools including CoCounsel, Westlaw Precision drafting, and Google Gemini. | Fake case citations and misquoted law were submitted. Even revised versions filed after warnings contained the same mistakes, showing reckless disregard. | The Special Master held the conduct ‘tantamount to bad faith.’ AI cannot replace human judgment; Rule 11 demands verification of legal citations. | The faulty brief was stricken and the discovery motion denied. The firms Ellis George and K&L Gates were jointly assessed $31,100 in fees. | Courts are willing to impose heavy penalties for repeated AI‑caused misinformation, even without intent to deceive. |
| Mata v. Avianca | 22‑cv‑1461 (S.D.N.Y. 2023) | 2023 | United States | U.S. District Court, Southern District of New York | The plaintiff sued Avianca Airlines for personal injury. His attorneys submitted a brief citing six nonexistent cases, complete with quotes and internal citations fabricated by ChatGPT. | The lawyers relied entirely on ChatGPT for case authority; the tool even ‘assured’ them of the cases’ authenticity. The attorneys failed to verify the authorities independently. | The Court called the brief ‘bogus’, containing fake decisions, fake quotes, and fake citations. Judicial integrity requires human‑checked references. | A $5,000 sanction was imposed on the attorneys and their firm, who were also ordered to notify the judges falsely identified as authors of the fabricated opinions. | Technology cannot excuse legal malpractice. AI hallucinations, if relied upon blindly, destroy professional credibility. |
| Patricia Bevins v. Colgate‑Palmolive Co. and BJ’s Wholesale Club | Case No. 2:25‑cv‑00576 | April 10, 2025 | United States | U.S. District Court, Eastern District of Pennsylvania | A product liability claim in which attorney Palazzo was sanctioned for submitting briefs containing erroneous and fabricated citations. | Referenced cases either did not exist or were reproduced with inaccuracies, likely having been drafted through AI without verification. | The court emphasized that many attorneys wrongly treat AI as a replacement for legal research. Accuracy must come first. | Sanctions imposed; additional judicial scrutiny applied. | AI hallucinations are becoming common. Structural safeguards and ethical training are needed in legal practice. |
| Wadsworth v. Walmart Inc. | Case No. 2:23‑CV‑118‑KHR | February 24, 2025 | United States | U.S. District Court, District of Wyoming | Plaintiffs sued Walmart alleging that a defective hoverboard caught fire and destroyed their home. Attorney Ayala drafted motions using an AI platform (“MX2.law”) and inserted AI-generated citations without verification. | Nine cases were cited, eight of which did not exist. Ayala uploaded the motion into an AI system to automatically add supporting cases, and filed it without providing copies to co‑counsel. | Judge Rankin held that attorneys remain responsible regardless of the AI tools used; AI can be beneficial only with proper verification. | Ayala’s pro hac vice admission was revoked and he was fined $3,000. Morgan and Goody (local counsel) were each fined $1,000, and policies were ordered to prevent recurrence. | Even where AI drafting is delegated, a signature carries responsibility. Attorneys must verify every authority before filing. |
| Frankie Johnson v. Jefferson S. Dunn, et al. | No. 2:21‑cv‑01701‑AMM (N.D. Ala. July 23, 2025) | July 23, 2025 | United States | U.S. District Court, Northern District of Alabama | Plaintiff Frankie Johnson accused Defendant Jefferson Dunn, the former Commissioner of the Alabama Department of Corrections, of fabricating citations to legal authorities in two motions. Three attorneys for Defendant Dunn (Matthew B. Reeves, William J. Cranford, and William R. Lunsford) confirmed in writing and at a hearing that the citations were hallucinations of a popular generative artificial intelligence (“AI”) application, ChatGPT. In simpler terms, the citations were completely made up. | The attorneys submitted court filings containing AI-generated fabricated legal citations without verifying them, breaching their professional duty of candor and competence. | The court found that defense counsel submitted filings containing fabricated legal citations that did not exist, that those false authorities were generated through the use of generative AI, and that the attorneys failed to independently verify the accuracy of the cited cases. It held this to be a serious violation of an attorney’s duty of candor to the court, amounting to bad faith or its equivalent regardless of whether the misconduct resulted from intentional deception or reckless reliance on AI. | The court publicly reprimanded the three attorneys responsible for the AI-generated fabricated citations, disqualified them from further participation in the case, ordered that the sanctions order be disclosed to their clients, opposing counsel, and judges in other pending matters, and referred the matter to the Alabama State Bar and other appropriate licensing authorities for disciplinary proceedings, finding these measures necessary to address and deter the seriousness of the misconduct. | AI can assist research, but lawyers remain fully responsible for the accuracy of citations. Courts will sanction attorneys for unverified or fabricated AI-generated authorities, and such misconduct may lead to reprimands, disqualification, and bar referral even absent intentional fraud. |
| Sylvia Noland v. Land of the Free, L.P., et al | B331918 (Cal. Ct. App. 2d Dist. 2025) | September 12, 2025 | United States | California Court of Appeal, Second Appellate District, Division Three | Defendants hired plaintiff to work as their leasing agent and sales representative. In that capacity, plaintiff showed the properties to potential lessees, prepared deal memos, and collected deposits and signatures on leases and contracts. The plaintiff filed employment-related claims, the trial court granted summary judgment in favour of the defendants, and the appeal raised no novel legal issues. However, nearly all legal quotations in the plaintiff’s opening brief, and many in the reply brief, were fabricated. The cited quotations did not appear in the cases, some cited cases were irrelevant, and a few did not exist. These false authorities were generated by AI tools used by plaintiff’s counsel and were not verified by him. | The attorney’s appellate brief contained numerous AI-generated fabricated or inaccurately cited legal authorities, representing a failure to verify the accuracy of citations and a breach of professional duties. | Court of Appeal found that the appellant’s counsel submitted an opening brief containing numerous fabricated case quotations and misrepresented authorities, many of which did not exist or were inaccurately attributed, and that these false citations were the result of unverified reliance on generative AI, constituting a serious breach of the attorney’s duty of competence, rendering the appeal frivolous and undermining the integrity of the appellate process. | The California Court of Appeal imposed monetary sanctions of USD 10,000 on the appellant’s counsel, ordered that the sanctions order be published to warn the legal profession, and held the attorney personally responsible for the misconduct, emphasizing that unverified reliance on generative AI does not excuse the submission of false or fabricated legal authorities. | In this case the court sanctioned the lawyer and published the order because the brief contained AI-generated fabricated citations, highlighting that attorneys must verify all legal authorities regardless of AI use. |
| Abigail Ramirez v. Carlos Humala | No. 24‑cv‑242 (RPK) (E.D.N.Y.), 2025 WL 1384161 | May 13, 2025 | United States | U.S. District Court, Eastern District of New York | The defendant moved for a pre-motion conference anticipating a motion to dismiss the plaintiff’s complaint. Plaintiff’s counsel filed a response letter opposing that request, citing eight court decisions as legal authority. The court could not locate four of those eight cited cases; they appear not to exist. This pattern is consistent with what courts call AI “hallucinations”: plausible-looking but false or fabricated case citations generated by artificial-intelligence research tools. | The attorney filed briefs that included several nonexistent case citations generated by AI and failed to verify their accuracy, violating duties of competence and candor. | The court found that counsel failed to verify that the cited authorities actually existed before filing the response. It determined that this conduct violated Federal Rule of Civil Procedure 11(b), which requires attorneys to ensure that legal filings are grounded in existing law and factual accuracy before submitting them. | The court sanctioned plaintiff’s attorney and her law firm $1,000. The order required payment into the court’s registry and that a copy of the sanctions order be served on the client. | The court sanctioned counsel for submitting filings with AI-generated non-existent citations, emphasizing that attorneys must verify all authorities even when using AI tools. |
| Hoosier Vac LLC v. Mid Central Operating Engineers Health & Welfare Fund | No. 2:24‑cv‑00326‑JPH‑MJD (S.D. Ind. 2025) | May 28, 2025 | United States | U.S. District Court, Southern District of Indiana | The Fund, an ERISA-governed employee benefit plan, sued HoosierVac LLC for failing to permit an audit of its payroll and business records as required under collective bargaining and trust agreements. HoosierVac filed counterclaims alleging breach of fiduciary duty, defamation, and related torts. During the proceedings, HoosierVac’s counsel submitted briefs citing non-existent, AI-generated case law, leading the court to dismiss the counterclaims and impose sanctions for the failure to verify legal authorities. | The attorney’s filings contained AI-generated fabricated case citations that did not exist and were not verified, breaching professional duties and leading the court to impose Rule 11 sanctions. | The court dismissed HoosierVac’s counterclaims for lack of factual support and found that its counsel violated Rule 11 by citing non-existent, AI-generated cases without verification, leading to the imposition of monetary sanctions. | The court imposed a monetary sanction of $6,000 on the defendant’s counsel personally for repeatedly citing non-existent, AI-generated case law without verification. | The court found that counsel’s filings included AI‑generated fabricated case citations, reinforcing that attorneys must independently verify all citations regardless of reliance on AI. |
| Ayinde v The London Borough of Haringey; Al-Haroun v Qatar National Bank QPSC | [2025] EWHC 1383 (Admin) | June 6, 2025 | United Kingdom | King’s Bench Division, Divisional Court | Two matters considered together: a judicial review claim against the London Borough of Haringey and a claim against Qatar National Bank. In both, counsel relied on unverified, fabricated authorities (generated or sourced via AI-style tools) and misstated provisions of law, submitting pleadings without checking authenticity or accuracy, a failure of basic professional diligence. | Submitting fabricated authorities and incorrect propositions of law; failing to verify sources before filing; relying on AI-style generated text without supervision or research. | The Court held that legal representatives owe a non-delegable duty to verify every authority cited, regardless of time pressure or use of generative tools. The filing of fictitious cases constituted improper, unreasonable and negligent conduct capable of misleading the Court and wasting judicial time. | The court held that the threshold for contempt proceedings was met but declined to initiate them, referring the practitioners to their professional regulators instead. | AI cannot substitute for legal diligence; failure to verify citations can result in personal costs sanctions and professional-regulatory scrutiny. |
| F Harber v HMRC | [2023] UKFTT 1007 (TC) | December 4, 2023 | United Kingdom | First-tier Tribunal (Tax Chamber) | Appeal against a penalty imposed for failure to notify capital gains tax liability following disposal of a property. The taxpayer argued she had a reasonable excuse based on mental health and/or ignorance of the law. | Submission of nine fabricated First-tier Tribunal authorities in support of her case; reliance on cases generated by AI without knowing they were fictitious; inability to verify authenticity via legal databases. | The Tribunal found as fact that all nine cases relied upon were fabricated and generated by an AI system such as ChatGPT. It accepted the taxpayer was unaware they were fake, but held that the existence of fake authorities did not alter the legal reasoning. The Tribunal emphasised the serious systemic harm posed by submitting bogus judicial opinions and underscored that ignorance of AI fabrication is not a defence to the professional or institutional implications. | Appeal dismissed and penalty upheld; no contempt proceedings brought in respect of the fake cases. | AI-generated case law can appear highly plausible but be entirely fabricated. Even where a party is unaware of the falsity, courts view the submission of non-genuine authorities as a serious issue, and parties must verify judgments before relying on them. |
| Pro Health Solutions Ltd v ProHealth Inc (Appeal from decision O/299/25) | UKIPO trade mark appeal (BL O/0559/25) | June 20, 2025 | United Kingdom | UK Intellectual Property Office (Appointed Person) | Appeal against a Registrar’s decision upholding opposition and invalidation of certain PROHEALTH-related trade marks based on prior goodwill. While the Appointed Person ultimately dismissed the appeal, significant procedural issues arose concerning use of AI-generated legal material in the appellant’s filings. | The appellant (a litigant-in-person) filed grounds and a skeleton argument drafted partly using ChatGPT, which contained fabricated quotations and incorrect case summaries. The respondent’s attorney cited cases for propositions he could not substantiate or trace to authoritative sources. Both parties placed unreliable legal material before the tribunal. | The Appointed Person observed that generative AI can produce seemingly coherent but inaccurate legal content, including invented citations or propositions. Users have a duty, whether represented or unrepresented, not to mislead tribunals, and AI use does not lower standards of accuracy. | The Appointed Person declined to refer the trade mark attorney to IPReg on this occasion but formally admonished his preparation and research. No costs were awarded in the appeal (despite its dismissal), and the Hearing Officer’s prior costs order was upheld. | AI-generated legal content must never be relied upon without checking against authoritative sources; even unrepresented parties remain responsible for accuracy, and professionals risk regulatory scrutiny if fabricated or unsupported citations are submitted. |
| Wemimo Mercy Taiwo v Homelets of Bath Ltd & Ors | [2025] EWHC 3173 (KB) | 3 December 2025 | United Kingdom | High Court of Justice, King’s Bench Division, England & Wales | The dispute revolved around the defendant’s management of the claimant’s tenancy, grounded in statutory duties imposed on letting agents under housing and consumer protection legislation. During the court’s scrutiny of the submissions filed by the defendant, concerns arose regarding the authenticity and reliability of the authorities cited. | The defendant relied on non-existent and fabricated case authorities in its written submissions. The misconduct lay in the defendant’s failure to verify sources, thereby misleading the court and breaching the duty of competence and candour owed in legal proceedings. | The court accepted that the citations had been generated using an AI system, resulting in hallucinated case law being presented as genuine precedent. | Judgment was entered against the defendant, and it was ordered to pay the claimant’s costs, with the court noting that the defendant’s litigation conduct played a material role in the adverse costs outcome. | The reliance on AI-generated hallucinated authorities constituted a serious procedural breach. The court reaffirmed that responsibility for accuracy rests with the party placing material before the court, and that misuse of AI tools may attract judicial criticism and adverse cost consequences. |
| Mark Jennings v NatWest Group plc | [2025] SAC (Civ) 41 | 21 November 2025 | United Kingdom | Sheriff Appeal Court, Scotland | This case concerned a claim brought by the appellant alleging unlawful discrimination under the Equality Act 2010. The appellant asserted that he suffered from multiple recognised disabilities, including Autism Spectrum Disorder, anxiety disorders, and PTSD. He complained that Pride-related promotional material displayed within the bank branch caused him severe psychological distress. He raised proceedings in the Edinburgh Sheriff Court (on the basis of NatWest’s domicile), seeking an order compelling removal of the material and damages of £35,000 for distress and inconvenience. | Crucially, at least three of the cases cited by the appellant were non-existent, amounting to reliance on AI-hallucinated authorities. This conduct complicated and obscured the legal issues before the court. | The court expressly warned that these submissions required caution, observing that they consisted largely of generalised legal propositions detached from the facts. | The appellant appealed and, at a late stage, sought to substantially amend his pleadings to reformulate the basis of liability under sections 111 and 112 of the Equality Act 2010. The Sheriff Appeal Court refused the amendment and dismissed the appeal. The appellant ultimately lost the case and was found liable for the respondent’s expenses of the appeal, as taxed. | The case demonstrates judicial intolerance for the uncritical use of AI tools in legal submissions. Parties remain responsible for verifying legal authorities, and reliance on AI-hallucinated case law may undermine credibility, complicate proceedings, and contribute to adverse cost consequences. |
| Claimant v ABC International Bank Plc | Case No. 6005897/2024 | 13 October 2025 | United Kingdom | Employment Tribunal (London Central), England & Wales | This case arose from an employment dispute. The claimant brought multiple claims before the Employment Tribunal, including allegations of unfair dismissal, discrimination, and victimisation arising out of the termination of his employment and the employer’s conduct during disciplinary and grievance processes. ABC International Bank Plc denied liability and contested the claims in full. | The Tribunal found that the claimant relied on false and non-existent legal authorities in written submissions. Several case citations advanced by the claimant could not be located in any recognised legal database, thereby misleading the Tribunal and wasting judicial resources. | The Tribunal expressly noted that these authorities appeared to have been generated using AI tools, resulting in hallucinated case law being presented as genuine precedent. The Tribunal characterised this conduct as unreasonable and improper, particularly given the seriousness of the allegations made against the employer. | The Tribunal dismissed the claimant’s claims in their entirety, finding that they were not well-founded in law or fact. The claimant was ordered to pay costs. | The case confirms that the use of AI-generated material does not absolve litigants of responsibility for accuracy. Reliance on hallucinated authorities constitutes serious procedural misconduct and may justify adverse cost orders in employment proceedings. |
| Mavundla v MEC: Department of Co-Operative Government and Traditional Affairs KwaZulu-Natal and Others | [2025] ZAKZPHC 2 | 8 January, 2025 | South Africa | Pietermaritzburg High Court, KwaZulu-Natal, South Africa | The Pietermaritzburg High Court considered a matter arising from a traditional leadership dispute in KwaZulu-Natal. Mavundla challenged a decision made by the Department of Co-Operative Government and Traditional Affairs (COGTA). During the proceedings, his legal representatives filed a supplementary notice of appeal and relied on multiple case authorities in their submissions. Upon scrutiny, the court discovered that many of these citations did not exist in any recognised legal database. Only two of the nine authorities cited were genuine, while the remaining citations were fictitious. | The court found that the legal team had submitted false case citations, most likely generated through artificial intelligence tools. The legal team relied on case law that “simply didn’t exist.” The fabricated citations were described as AI hallucinations. The advocate admitted she had not verified the authorities and instead relied on research done by a junior colleague. The candidate attorney claimed the material came from an online research tool and denied using ChatGPT, but the court found the pattern consistent with AI-generated fake judgments. To verify the problem, the judge independently tested one of the citations using ChatGPT, which incorrectly confirmed the case’s existence, demonstrating how unreliable such tools can be for legal research. The court labelled the conduct as “irresponsible and unprofessional.” | The court made clear that lawyers remain fully responsible for the accuracy of all materials presented to the court, even when sourced through AI or digital tools. Workload pressure or ignorance of AI risks is no defence. Supervising lawyers must train and properly oversee junior colleagues. Bezuidenhout J stated that supervision includes verifying the correctness of information sourced from generative AI systems. Misleading the court can occur not only deliberately, but also through ignorance. By citing an authority, a practitioner tacitly represents that the case “…does actually exist.” | The High Court referred the matter to the Legal Practice Council, the statutory regulatory body for legal practitioners in South Africa, for investigation. The judge also criticised the advocate for failing to verify citations before filing and the supervising attorney for not checking the documents submitted by junior staff. The ruling reflected that courts are increasingly losing patience with irresponsible reliance on AI-generated legal material. | This case offers an important reminder for legal practitioners: while AI may make legal research feel as simple as searching for directions online, relying on it to generate legal citations is deeply unwise, almost like allowing a chatbot to argue your case in court. The legal field already has enough complexity without introducing avoidable technological fiction into the process. |
| Parker v. Forsyth N.O. and Others | [2023] ZAGPRD 1 | 29 June, 2023 | South Africa | Johannesburg Regional Court | The Johannesburg Regional Court in Parker v. Forsyth N.O. and Others dealt with the problem of false case authorities produced through the use of ChatGPT. The matter arose when the plaintiff’s legal representatives relied on AI-generated research to source supporting case law. At the heart of the dispute was the plaintiff’s attorneys’ decision to use ChatGPT, assuming that the cases it provided were accurate. During the hearing, however, it became apparent that the authorities referred to by counsel were entirely fictitious, including invented names, citations, facts, and decisions. This case therefore highlighted the serious limitations of AI-generated content and the necessity for proper human verification. | The misconduct in Parker centred on the plaintiff’s attorneys accepting ChatGPT-generated legal research without adequately verifying whether the cited cases actually existed. The court found that the attorneys had failed to exercise due diligence, as the cases relied upon were fictional and had no basis in reality. Although there was no clear intent to mislead, the defendant’s counsel was nonetheless deceived into believing the cited authorities were genuine. As a result, the defendants’ legal team wasted considerable time and effort attempting to locate these non-existent cases. The incident demonstrated how blind reliance on AI tools can misinform the court and opposing counsel, even where deception is not deliberate. | The court in Parker emphasised that modern technology cannot replace independent legal reasoning and professional responsibility. It observed that “the efficiency of modern technology still needs to be infused with a dose of good old-fashioned independent reading.” The case reinforced that legal training, critical analysis, and professional judgment cannot be outsourced to algorithms. South African courts expect practitioners to apply independent legal thought, especially in complex or novel matters, and not to rely blindly on AI-generated outputs. The ruling thus served as a warning that even unintentional misuse of AI can lead to serious consequences and wasted judicial and professional resources. | The defendants’ counsel sought a punitive costs order on the basis that the plaintiff’s attorneys had attempted to mislead the court. The court accepted that the costs order requested was reasonable because it was aimed at addressing the losses incurred due to the misinformation. Magistrate Chaitram concluded, however, that the attorneys’ conduct stemmed from overzealousness and carelessness rather than an intentional attempt to deceive. The court therefore did not treat the costs order as punitive, but rather as an appropriate measure to rectify the situation and compensate for wasted effort caused by the fictitious citations. | Although the plaintiff’s attorneys may not have acted with any improper intent, their excessive dependence on a chatbot for legal research resulted in inaccurate information and caused unnecessary expenditure of time and effort by the defendant’s counsel. This situation highlights that, despite technological advancements, independent analysis, careful reading, and critical reasoning continue to be indispensable in legal practice. |
| Northbound Processing (Pty) Ltd v South African Diamond and Precious Metals Regulator and Others | [2025] ZAGPJHC 661 | 30 June, 2025 | South Africa | High Court of South Africa, Gauteng Division, Johannesburg | The Gauteng Division of the High Court in Johannesburg considered a matter in which the applicant’s written heads of argument contained multiple case citations that were later discovered to be fictitious. The court traced these false authorities back to an AI-based tool known as “Legal Genius.” The case thus formed part of a growing line of decisions addressing the risks and ethical consequences of relying on generative AI tools in legal research and court submissions. | The court found that several case citations included in the applicant’s heads of argument did not exist. These fictitious authorities had been generated through the AI research tool and were presented in a way that appeared coherent and plausible. Counsel admitted that the incorrect citations resulted from time pressure and oversight rather than bad faith, but the court stressed that plausible AI-generated outputs are unacceptable if they are false. The misconduct lay in the failure of the legal team, particularly senior counsel, to independently verify the accuracy of the cases cited. The court emphasised that written heads of argument carry the same ethical weight as oral submissions, and therefore cannot include references to non-existent authorities, regardless of whether the mistake was intentional or accidental. | The Northbound decision reinforced that AI-generated legal research must always be independently checked before being placed before a court. The court made clear that “coherent and plausible” AI outputs are not sufficient if they are false. It reaffirmed that the ethical and professional responsibility of practitioners remains constant whether or not AI tools are used. The ruling also highlighted that written advocacy, particularly heads of argument, is not secondary to oral submissions but carries equal authority and importance, meaning that errors in written citations undermine the integrity of the judicial process. Ultimately, the case demonstrates that lawyers cannot delegate accountability to AI systems or junior drafting teams, and must remain fully responsible for the correctness of all legal sources presented. | Although the court accepted that there was no deliberate attempt to mislead, it concluded that the inclusion of fictitious citations in filed submissions remained a serious breach of professional responsibility. The court reiterated that it is the practitioner whose name appears on the document who bears the duty of verification, even if the drafting work is internally delegated. The fact that the fictitious authorities were not cited orally did not mitigate the misconduct, because heads of argument are relied upon by courts as much as, if not more than, oral argument. Consequently, the matter was referred to the Legal Practice Council for investigation, in the same manner as in Mavundla, despite the absence of intentional deception. | This judgment is among the first in South Africa to directly address the improper use of generative AI in legal proceedings. It forms part of an increasing trend both locally and internationally, where courts are cautioning legal practitioners about the dangers of relying on AI-generated material that has not been properly verified. The case underscores the ethical obligation on lawyers to independently confirm all legal authorities before citing them. Even accidental AI “hallucinations” can damage reputations and may result in professional misconduct proceedings. |
| Zhang v. Chen | 2024 BCSC 285 | 20th February, 2024 | Canada | Supreme Court of British Columbia | Deals with an application for payment of costs, arising from a dispute between the applicant and her former spouse regarding parenting time. The applicant sought special costs against opposing counsel for citing AI-generated cases, which cost her lawyers considerable time as they searched for non-existent cases. | The notice of application cited only two cases, both of which were entirely AI-generated with ChatGPT; the citations led to completely different cases. | Citing fake cases was likened to perjury, and an abuse of process overall. The court discussed how LLMs are frequently prone to hallucinations, and thus not a substitute for professional advice. | The responsible counsel was ordered to personally bear the additional effort and expense incurred by the applicant’s counsel as a result of the fake cases being inserted. | Generative AI’s hallucinating tendencies mean it cannot be relied upon for legal research. The heavy publicity around the case severely harmed counsel’s professional credibility. |
| Lloyd’s Register Canada v. Munchang Choi | 2024 CIRB 1146 | 23rd July, 2024 | Canada | Canada Industrial Relations Board | Deals with complaints regarding the complainant’s termination from his employment. The complaints were alleged to be inadmissible because they were untimely and because the attached evidence had been recorded surreptitiously and without consent. | The self-represented complainant referenced over thirty legal decisions in his submission, of which only two were accurate; the rest were AI-generated. | While self-represented parties may benefit from AI usage, it is still their duty to ensure that their submissions are accurate. Verification is a must. | No punishment per se, as the Board was not authorised to impose any. The Federal Court later struck the motion record filed by the complainant from the court file (2025 FC 1233). | AI-generated legal content should be used with caution, if at all, and thoroughly verified regardless of a party’s legal expertise. |
| Industria De Diseño Textil, S.A. v. Sara Ghassai | 2024 TMOB 150 | 12th August, 2024 | Canada | Trademarks Opposition Board | Interlocutory ruling in a trademark dispute wherein the applicant sought to strike some grounds of argument used by the opposition, as well as request an extension to file their own counter statement. | Five of the cases cited by the applicant in their submissions did not exist and could not be found anywhere, and were thus concluded to be AI-generated. | The Board disregarded the cases, emphasising that reliance on false citations before a tribunal is a serious matter, whether done by accident or otherwise. | No punishment; the opponent did not appear to have suffered any loss or undertaken extra effort due to the false citations. | Reliance on AI-generated citations is strongly discouraged; if relied on, they should be duly verified before submission. |
| Hussein v. Canada | 2025 FC 1060 | 28th April, 2025 | Canada | Federal Court | An application for the admission of new evidence, and an extension of time for the same, was submitted to the Refugee Appeal Division. The applicant pleaded that there was a valid reason for the earlier non-submission of the evidence and that its acceptance could affect the proceeding’s outcome. | The applicant cited multiple AI-generated authorities, and continued to do so after being asked to source them. Counsel also cited unrelated cases, invented a test in an existing case, and admitted to using Visto.ai, “a professional legal research platform designed specifically for Canadian immigration and refugee law practitioners”. | The court declared this practice impermissible: use of AI in court proceedings should be declared, and its output verified by a human. The delayed admission of use amounted to misleading the court, and the court was concerned that counsel did not recognize the seriousness of the issue. | The court raised, but did not order, the possibility that the applicant’s counsel personally pay any costs awarded on the motion. | AI, if used in court, should be duly declared and verified; concealment, not mere use, is also at issue. |
| Ko v. Li | 2025 ONSC 2766 | 6th May, 2025 | Canada | Ontario Superior Court of Justice | An application by the applicant to invalidate a divorce, the documents for which she claimed were signed under duress. Through this, she sought a claim over the estate of her deceased spouse. | The statement of facts submitted by counsel cited several cases that did not exist, or linked to entirely different cases unrelated to the hearing; others had facts different from those claimed. | The court heavily disapproved of the use of AI, emphasising that it was the lawyer’s duty to represent her client and the law faithfully to the court and not to mislead it. Lawyers cannot rely on non-existent authorities. | Counsel was ordered to show cause why she should not be held in contempt, with the opportunity to submit evidence to the contrary. | Unchecked use of AI can constitute a serious breach of duty for a lawyer, which may amount to contempt in the face of the court. |
| NCR v. KKB | 2025 ABKB 417 | 9th July, 2025 | Canada | Court of King’s Bench of Alberta | Appeal made with respect to an arbitral award that dealt with a child support agreement between the appellant and her ex-husband. The appellant challenged a series of clauses dealing with the father’s share of child support, and payment of arbitration costs. | The self-represented appellant cited seven cases, all with citations that led to different cases than the ones actually cited. Only one led to a correctly corresponding case, though in a different jurisdiction. | Held that it was unacceptable to refer to legal authorities without verifying the accuracy of the information contained therein, and that court resources and time are wasted trying to verify the existence of fake cases. | As the appellant was self-represented, and thus had limited resources and expertise, the court felt she did not intend to mislead. No punishment per se, though the fact of the fake cases was cited when denying her any awarding of costs. | Emphasis on due diligence when citing legal authorities so the court’s time is not wasted. |
| Reddy v. Saroya | 2025 ABCA 322 | 9th September, 2025 | Canada | Court of Appeal of Alberta | Appeal from an order declaring the appellant in civil contempt for failing to provide information he had been asked for about his undertakings. The appellant claimed the judge was biased. | The appellant’s submissions did not include hyperlinks to the cases relied upon, and seven of the cases cited could not be found; they were concluded to be AI-generated. | The court cited the Law Society of Alberta’s Code of Conduct to emphasize the responsible, competent use of technology such as LLMs, and their potential benefits and risks. Such tools should be used with caution, as they may cause confusion or, at worst, constitute an abuse of process. | The panel considered imposing costs on the appellant’s counsel, to be paid to the respondent for the time and effort spent searching for the fake cases. | Lawyers bear responsibility even if material is filed by someone they engaged. Leniency should not be expected in these cases, whether for lawyers or self-represented litigants. AI outputs should be verified by humans. |
| Mertz & Mertz (No 3) | [2025] FedCFamC1A 222 | 28 November 2025 | Australia | Federal Circuit and Family Court (Appellate) | On appeal in a family law dispute, the appellant’s legal documents, including the summary of argument and list of authorities, were found to contain incorrect references and misleading citations. These errors appeared associated with the use of AI in drafting the filed material. The court considered the use of AI and its impact on the accuracy of submissions. | Incorrect authorities and misleading references were included in the appellant’s filed documents, suggesting that AI was used in preparation without adequate checking. | The court found that AI had been used in preparation of submissions and that this resulted in errors. It made orders to refer the conduct of the solicitor and counsel involved to their respective professional regulatory bodies. | The court ordered that the legal practitioners involved be referred to the South Australian Legal Profession Conduct Commissioner and the Victorian Legal Services Board and Commissioner. Costs were also ordered against the appellant relating to the errors. | Use of AI in preparing legal submissions must be controlled and verified; errors in authorities can lead not just to costs orders but also to professional referrals. |
| JNE24 v Minister for Immigration and Citizenship | [2025] FedCFamC2G 1314 | 15 August 2025 | Australia | Federal Circuit and Family Court of Australia (Division 2) | In this immigration judicial review matter, the applicant’s lawyer filed submissions that included citations to cases that did not exist. The court asked the lawyer about the fictitious citations and his use of research methods. The submissions with incorrect cases appeared to result from AI-assisted research that was not verified before filing. | The lawyer’s submissions contained references and citations to cases that do not exist, reflecting a failure to check the accuracy of authorities before filing. | The court concluded that the lawyer’s conduct raised concerns about professional responsibility and ordered that the matter be referred to the Legal Practice Board of Western Australia for consideration of his conduct. | The lawyer was referred to the Legal Practice Board of Western Australia and was ordered to personally pay the Minister’s costs related to the matter. | Lawyers must ensure that case citations and legal authorities in submissions are accurate and verified; reliance on AI without checking can lead to regulatory referral and cost orders. |
| Dayal | [2024] FedCFamC2F 1166 | 27 August 2024 | Australia | Federal Circuit and Family Court (Family Law) | A Victorian lawyer filed a list of legal authorities in a family court matter that were generated using AI and not verified. The list included citations and summaries of cases that did not exist. The judge found the lawyer had not checked the AI output before submitting it and referred his conduct to the Victorian Legal Services Board and Commissioner for review. The lawyer had tendered the document in July 2024 and admitted the AI tool was used without verification. | The lawyer submitted a list of authorities that did not exist, generated through an AI tool, and did not verify the accuracy before filing with the court. | The court found the lawyer breached professional standards by providing inaccurate authorities and referred his conduct to the Victorian Legal Services Board and Commissioner as part of the regulatory oversight process. | The Victorian Legal Services Board varied the lawyer’s practising certificate so he could no longer act as a principal lawyer, could not handle trust money, and must practise under supervision for two years with regular reporting. | Legal practitioners must verify all authorities and information generated by AI before submitting them to a court, as they remain responsible for accuracy and professional conduct. |
| Valu v Minister for Immigration and Multicultural Affairs (No 2) | [2025] FedCFamC2G 95 | 3 February 2025 | Australia | Federal Circuit and Family Court | In this immigration judicial review application, the legal representative filed submissions containing non-existent case citations and quotes that did not exist in tribunal decisions. The court questioned whether AI had been used and noted the submissions contained fabricated authorities. The conduct hindered the hearing and required the court to address the issue of incorrect legal references. | The legal representative’s submissions included citations to cases and supposed quotes that did not exist, reflecting failure to verify legal authorities, possibly arising from unverified AI assistance. | The court found that the conduct was below professional standards and ordered the lawyer’s conduct be referred to the Office of the NSW Legal Services Commissioner for further consideration. | The matter was formally referred to the NSW Legal Services Commissioner to consider if the conduct amounted to unsatisfactory professional conduct or professional misconduct. | Legal submissions must be based on verified authorities; courts will refer unverified or fabricated legal material to regulatory authorities for assessment. |
| XAI v XAH and another matter | [2025] SGFC 93 | 3 September 2025 | Singapore | Family Court (Singapore) | The dispute arose between divorced spouses who filed cross-applications for personal protection orders (PPOs) against each other concerning alleged family violence involving access to their two children. The father appeared self-represented and filed written submissions citing 14 supposed judicial precedents. During review, the court discovered that none of these cases actually existed. | The father (who appeared in person and was unrepresented) used ChatGPT to identify relevant precedents for his case and reproduced 14 fabricated case citations in his written submissions. The cited cases either did not exist at all or carried neutral citations belonging to entirely different cases. The father failed to check the authenticity of the cases he cited, thereby violating the general principle laid down by Registrar’s Circular No. 1 of 2024, the Guide on the Use of Generative AI Tools by Court Users. | The court observed that fake or AI-hallucinated cases have a deeply corrosive effect on the legal system, and exercised its inherent power to prevent the abuse of its own processes. | The court ordered the father to pay SGD 1,000 to the mother. It also directed that if the father appears self-represented in future Family Court proceedings and uses generative AI to prepare any document submitted to the court, he must formally declare such use in writing and state that he has complied with the guide. | The court does not prohibit the use of generative AI tools to prepare court documents, provided they are prepared in accordance with Registrar’s Circular No. 1 of 2024 – Guide on the Use of Generative AI Tools by Court Users. Self-represented litigants must abide by the same standard of accuracy as lawyers, ensuring that all information provided to the court is independently verified, accurate, true, and appropriate. |
| Tajudin bin Gulam Rasul and another v Suriaya bte Haja Mohideen | [2025] SGHCR 33 | 29 September 2025 | Singapore | General Division of the High Court of the Republic of Singapore | The case arose from an application to set aside a default judgment in a civil claim. During the proceedings, counsel for the claimants cited a case authority in written submissions to argue that the Moneylenders Act did not apply to the transaction in question. It later emerged that this cited authority did not exist and had been generated by a generative artificial intelligence (GenAI) tool. The opposing counsel was unable to locate the case, informed the court, and sought a personal costs order against the claimants’ counsel for the improper citation. | Counsel for the claimants cited a fictitious authority that had been generated using a generative AI tool. The misconduct was further aggravated by the counsel’s failure to promptly disclose the error to the Court. Instead, without seeking the Court’s permission, he filed amended written submissions and attempted to downplay the seriousness of the issue by characterising the amendments as mere “typographical errors.” The gravity of the misconduct was further evident from the fact that the fictitious authority was conspicuously absent from the Claimants’ Bundle of Authorities, which ultimately revealed that the cited case did not exist at all. | The court found that the counsel’s conduct was improper, unreasonable, and negligent. It held that advocates and solicitors have a non-delegable duty to verify the accuracy and existence of all authorities placed before the court, regardless of whether AI tools are used. The court further held that the conduct caused the defendant to incur unnecessary costs by expending time and resources to investigate the non-existent case and raise the issue before the court. The court emphasised that such conduct erodes the integrity of the justice system and public confidence in the legal profession. | The court ordered the claimants’ counsel to personally pay costs of SGD 800 to the defendant, representing the unnecessary costs caused by the improper citation. The court also directed that both parties’ counsel provide their respective clients with a copy of the court’s directions. | While lawyers may use generative AI tools, they remain fully responsible for verifying all AI-generated content before submitting it to court. Citing fictitious authorities, even unintentionally, is a serious breach of professional duty and may result in personal cost sanctions. |
| Tan Hai Peng Micheal & Anor v Tan Cheong Joo & Ors | [2025] SGHC 217 | 3 November 2025 | Singapore | General Division of the High Court of the Republic of Singapore | The claimants, Tan Hai Peng Micheal and Tan Hai Seng Benjamin, were executors of the estate of Tan Thuan Teck (TTT). They sued several defendants (four brothers and related companies) to recover outstanding sums from multiple loans extended between 2009 and 2018. | The court found serious professional misconduct in the defendants’ closing submissions, noting first a material misquotation of section 105 of the Evidence Act which omitted a proviso directly relevant to the issue of burden of proof. More seriously, the defendants cited two entirely fictitious judicial authorities that did not exist, prompting the court to intervene after the claimants raised the issue. Defence counsel later admitted that the authorities were fictitious, explained that they had been supplied by another solicitor engaged to assist with research, and acknowledged that they were likely generated using an artificial intelligence tool, although the specific tool could not be identified. | The court held that the citation of fictitious authorities was “most troubling”, treating it as a serious breach of counsel’s duty to the court. It emphasised that counsel bears responsibility for the accuracy of submissions and noted that there were reasonable grounds to suspect that the fictitious authorities were generated by an AI tool, given the well-known risk of AI systems “hallucinating” fabricated case law. To prevent further dissemination of false information, the court deliberately refrained from reproducing the fictitious citations in the judgment. |  |  |
“Talks of gains in efficiency and productivity by using tools such as ChatGPT and Google Gemini are overshadowed by the myriad negative impacts of AI tools on information ecosystems, environmental issues as well as issues related to labour exploitation in building AI datasets and data privacy issues.”
– Kaif Siddiqui, Research Fellow, NALSAR, Hyderabad in the Foreword to the Feb 2026 Report
The Report and The Repository
The bi-annual report is a six-month compilation of the repository, which is regularly updated and maintained by the Research and Content Team at Virtuosity Legal. Think we missed a case? Report It Here
Documented Hallucinations
A curated record of verified instances where AI-generated content entered common law courtrooms as fabricated or erroneous legal material.
Jurisdictional Mapping
Tracks how AI hallucinations have surfaced across different common law jurisdictions, enabling comparative legal analysis.
Judicial Responses
Examines how courts have reacted to AI-generated errors, from judicial warnings to procedural consequences and sanctions.
Patterns & Risks
Identifies recurring trends in AI hallucinations, highlighting systemic risks in legal research, drafting, and advocacy.
Research & Accountability
Designed as a reference point for scholars, lawyers, and policymakers studying AI reliability, ethics, and legal accountability.
Built for Citation
The repository maintains a verified record of AI hallucinations suitable for academic reference and policy research.
