Report Release: The Case That Wasn’t: A Repository of AI Hallucinations in Common Law Courtrooms Around The World | Feb 2026, Virtuosity Legal

Kaif Siddiqui, Research Fellow, NALSAR Univ. Hyderabad

It is November 2025 as I write this note: exactly three years since the world changed forever with the release of OpenAI's ChatGPT in November 2022. Unlike many other moments when we might say "the world changed forever", this one genuinely felt like a monumental shift. Users quickly realised that the technology we dubbed "Generative AI" could do much more than just talk to you. It could generate text: poems, stories, articles. It could analyse, summarise and explain. It could hold a conversation with you. It could be your "friend". After years of negative public perception of Big Tech corporations like Meta and Google, it seemed the new upstart, OpenAI, had actually made something people wanted to use. We were finally in the future, and intelligent AI was here to revolutionise our lives.

Did it, though? That is the question we must ask ourselves three years later. The effects of ChatGPT and its ilk continue to loom over global society, and the revolution of ChatGPT, if it has come at all, seems to have come at a heavy cost. Talk of gains in efficiency and productivity from tools such as ChatGPT and Google Gemini is overshadowed by their myriad negative impacts: damage to information ecosystems, environmental harms (including massive water and power consumption), labour exploitation in the building of AI datasets, and data privacy concerns. In many cases, companies have fired workers to replace them with AI, only to realise that fixing AI's mistakes can take even more effort and cost than simply hiring humans.
Nonetheless, thanks to the millions of dollars spent marketing AI tools as "everyday personal assistants", individuals and companies remain eager to adopt AI in their workflows, with less and less attention paid to the critical question: do we even need this? While AI can be used to automate mechanical tasks and give users space to focus on more complex and creative problems, in practice the distinction between beneficial and harmful use is very hard to maintain, because the tendency to let AI do one's work has a habit of creeping up in scale over time. As many scholars, writers and thinkers have noted, one of the many impacts of AI tools is that people willingly let them "think" on their behalf, erasing any kind of effort, mental or physical, that one would normally put into one's work, which severely impedes growth, learning and skill improvement.

It was in April 2025, nearly ten months ago, that we started working on this repository. The report was originally scheduled for release by the end of Virtuosity Lexicon; however, at Virtuosity Legal, substance and actuality take precedence over formalism. We realised that the repository required extensive coverage of the judicial reaction to AI hallucinations, and that this coverage would take time. The report is a testament to months of honest research and to our motto: 'substance. authority. insight.'

We focused on the judicial approach taken by courts in Common Law jurisdictions (plus the USA) around the world. The reason for limiting ourselves to Common Law countries is purely practicality: it helped us narrow the scope of our research.

This repository (and the subsequent report) is a note of caution to the legal profession, and to any profession that seeks to rely on unmonitored and unscrutinised outputs of Generative Artificial Intelligence models.

Significant effort was invested in researching each jurisdiction individually, because legal approaches are shaped by local procedural rules, professional conduct frameworks and institutional expectations of the legal profession. These nuances cannot be captured through automated research or generic summaries; each jurisdiction required independent verification of judicial reasoning, regulatory guidance and factual context to ensure accuracy. It must also be recognised that no website or application can reliably determine or certify the percentage of artificial intelligence involved in any written work. A central warning that emerges from this work is that legal research cannot be automated. Technological tools can certainly help in locating information, but they cannot understand context or legal reasoning, nor take responsibility for accuracy and ethics. This repository reflects a conscious decision to value independent verification and thoughtful analysis over speed or convenience. It serves as a reminder that meaningful legal research is ultimately a human exercise, one that depends on judgement, diligence and accountability.

Project Lead
Aiman Mairaj
Janhavi Gupta

Co-Curators
Ayat Shaukatullah
Saniya Malik
Shabi Tauseef
Akshat Pahuja
