AI in Court: When Legal Tech Goes Rogue – Lessons from Mata v. Avianca

written by Guest Author


The legal industry is undergoing a technological transformation, with artificial intelligence (AI) emerging as a powerful tool for legal research, drafting, and case analysis. AI-driven platforms promise greater efficiency, reduced costs, and improved access to legal resources. These tools can quickly summarise case law, generate legal arguments, and assist in document preparation, tasks that traditionally required extensive manual effort. However, with this promise comes significant risk, particularly when attorneys rely on AI without fully understanding its limitations.

A stark example of AI misuse in legal practice emerged in Mata v. Avianca, Inc.[i], a case that exposed the dangers of unverified AI-generated legal work. In this case, attorneys representing the plaintiff submitted court filings citing fictitious legal precedents, cases that did not exist. These fake cases were generated by ChatGPT, an AI-powered language model, which falsely assured the attorneys that the citations were legitimate. The court’s scrutiny revealed glaring inconsistencies, ultimately leading to sanctions against the lawyers involved.

The Mata v. Avianca[ii] debacle highlights a critical lesson: while AI can enhance legal work, it is not infallible. Blind reliance on AI without proper verification can lead to severe consequences, including professional discipline, reputational damage, and compromised judicial integrity. This case underscores the ethical and professional responsibility of lawyers to critically assess AI-generated outputs, ensuring that technology remains a valuable tool rather than a liability in the courtroom.

Background: The Case of Mata v. Avianca

The case of Mata v. Avianca, Inc.[iii] originated from a seemingly routine personal injury claim. Plaintiff Roberto Mata alleged that he suffered injuries when a metal serving cart struck his knee during an international flight operated by Avianca Airlines. Seeking legal redress, Mata filed a lawsuit against the airline, asserting his claims under the Montreal Convention, an international treaty that governs airline liability for passenger injuries.

Avianca, in response, moved to dismiss the case, arguing that Mata’s claims were time-barred under the strict two-year statute of limitations imposed by the Montreal Convention. Because Mata had filed suit well after this limitation period expired, the airline contended that his case was legally untenable and should be dismissed outright.

Representing Mata in the litigation were attorneys Peter LoDuca and Steven Schwartz of the law firm Levidow, Levidow & Oberman P.C. Tasked with opposing Avianca’s motion, Schwartz, who took the lead in legal research, sought to identify legal precedents supporting an argument for tolling the statute of limitations, particularly in cases involving bankruptcy stays. In doing so, he turned to ChatGPT, an AI-powered chatbot, to assist in finding relevant case law.

Unbeknownst to Schwartz, ChatGPT did not retrieve actual cases from legal databases but instead fabricated non-existent precedents. These AI-generated cases, complete with fictional citations and judicial opinions, were subsequently incorporated into Mata’s court filings, leading to a legal and ethical disaster. Neither Schwartz nor LoDuca verified the authenticity of the cases before submitting them to the court, setting in motion a chain of events that would expose the dangers of uncritical reliance on AI in legal practice.

How AI Went Wrong: The Fabrication of Fake Cases

At the heart of the Mata v. Avianca[iv] debacle was attorney Steven Schwartz’s misguided reliance on ChatGPT for legal research. Facing a motion to dismiss grounded in the Montreal Convention’s two-year statute of limitations, Schwartz sought case law that might support an argument for tolling. Instead of turning to traditional legal research tools like Westlaw or LexisNexis, he used ChatGPT—a chatbot designed to generate human-like text but not built for retrieving accurate legal precedents.

This decision proved disastrous. ChatGPT is prone to a well-documented phenomenon known as “hallucination”: a tendency to generate information that appears authoritative but is entirely fictitious.[v] The AI does not access actual legal databases; instead, it produces responses based on patterns in the data it was trained on, often fabricating plausible but non-existent cases. When Schwartz queried the AI, it confidently presented fabricated case law, complete with citations, judicial opinions, and procedural histories. When he asked ChatGPT to confirm the cases’ legitimacy, the AI, consistent with its design, reaffirmed their authenticity, further misleading him.

Schwartz and his colleague Peter LoDuca overlooked several glaring red flags before submitting the brief. The AI-generated cases contained nonsensical legal reasoning, non-existent citations, and even self-referencing decisions, defects that a simple check on Westlaw or LexisNexis would have exposed. Moreover, the opinions were fragmented and lacked clear conclusions, departing from the structure of legitimate judicial rulings. Had they conducted even minimal verification, these errors would have been immediately apparent. Instead, their blind reliance on AI led to the submission of fabricated legal authorities to a federal court, resulting in severe judicial scrutiny and sanctions.[vi]

The Court’s Response: How the Legal System Handled AI Misuse

The errors in Mata’s legal filings did not go unnoticed. Avianca’s attorneys, upon reviewing the plaintiff’s opposition to the motion to dismiss, found several of the cited cases unfamiliar. A diligent search of legal databases such as Westlaw and LexisNexis confirmed their suspicions. The cases simply did not exist. In response, Avianca raised the issue before the court, prompting Judge P. Kevin Castel of the Southern District of New York to intervene.

When Judge Castel ordered Mata’s attorneys to produce the full texts of the cited cases, it should have been a wake-up call. Instead of admitting their error, Steven Schwartz and Peter LoDuca doubled down, failing to withdraw the false cases and attempting to defend their submission. Schwartz initially concealed his use of ChatGPT, while LoDuca misled the court about his involvement. Their excuses were unconvincing; at one point they blamed vacation schedules for their failure to address the issue. Even after the false citations were exposed, Schwartz submitted an affidavit acknowledging his use of AI but failed to take full responsibility for his lack of verification.

Judge Castel found these actions inexcusable and imposed sanctions on the attorneys. The court ordered them to inform their own client, Roberto Mata, and to send letters to each judge falsely identified as the author of the fabricated opinions. Additionally, they were fined $5,000[vii], a relatively modest financial penalty but a significant professional and reputational blow.

Lessons from Mata: The Risks of AI in Legal Practice

The Mata v. Avianca[viii] case serves as a cautionary tale for attorneys embracing artificial intelligence in their practice. While AI tools like ChatGPT offer convenience and efficiency, they are not substitutes for legal expertise, judgment, or due diligence. This case highlights three critical lessons for the legal profession.

1. AI Must Be Verified, Not Trusted Blindly

One of the most glaring errors in this case was blind reliance on AI-generated research. Unlike traditional legal research platforms such as Westlaw and LexisNexis, ChatGPT does not pull case law from verified legal sources. Instead, it generates responses based on language patterns, sometimes fabricating plausible-sounding but entirely fictitious cases.

Attorneys have a professional duty to cross-check AI-generated legal information. A simple search in an authoritative database would have exposed the fraudulent citations in Mata’s filings. AI can be a useful starting point, but it must never be the final word in legal research.

2. Ethical Obligations Remain with the Attorney

The responsibility for accuracy in legal filings rests with the attorney, not the tools they use. Ethical standards require lawyers to supervise all aspects of their work, ensuring that arguments presented are well-founded in fact and law. Failing to scrutinise and verify submissions can lead to serious professional consequences, highlighting a fundamental truth: AI is a tool, not a substitute for legal judgment. Ultimately, accountability lies with the practitioner, reinforcing the need for careful oversight in an era of evolving technology.

3. AI’s Potential for Hallucinations Cannot Be Ignored

ChatGPT’s ability to hallucinate legal precedents presents a serious risk. Unlike a misinterpreted case, which can be corrected through argument, an entirely fictional case is indefensible. The attorneys in Mata learned this the hard way: submitting fabricated legal authorities not only weakened their client’s case but also damaged their own credibility before the court.

Legal professionals must recognise that AI can confidently produce falsehoods, and failing to verify its output can lead to court sanctions, reputational harm, and even malpractice liability. As AI continues to play a role in legal research and drafting, Mata v. Avianca serves as a warning: technology can assist but cannot replace professional responsibility. Courts demand accuracy, diligence, and integrity, making it essential for attorneys to ensure AI supports these principles rather than undermining them.

How Lawyers Can Responsibly Use AI

Artificial intelligence is reshaping legal practice, but as Mata v. Avianca[ix] demonstrates, it must be used with caution. Lawyers who integrate AI into their workflow must do so responsibly, ensuring that technology enhances their practice rather than compromising their credibility. Here are three essential guidelines for using AI effectively in legal work.

1. Understand AI’s Strengths and Weaknesses

AI offers tremendous benefits in drafting, summarisation, and brainstorming, helping attorneys streamline tasks that once consumed hours of manual effort. It can assist in generating legal arguments, organising information, and improving efficiency. However, AI is not a substitute for traditional legal research tools like Westlaw, LexisNexis, or Fastcase.[x]

Unlike databases that provide verified case law, AI-powered tools like ChatGPT do not retrieve legal sources; they generate text based on linguistic patterns. This means they cannot guarantee accuracy and should never be used as a primary research tool for case citations or legal precedent.

2. Always Verify AI-Generated Content

Every legal argument submitted to a court must be thoroughly vetted. AI-generated citations should always be cross-checked against trusted legal sources before inclusion in any filing. Courts expect attorneys to uphold professional standards, and submitting inaccurate or fabricated information, even unintentionally, can lead to serious consequences.[xi]

AI should be treated as an assistant, not an authority. Lawyers must remain the final gatekeepers, ensuring that any content produced by AI aligns with verified legal principles and precedents.
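
For legal teams and legal-technology developers, this gatekeeping step can also be built into the drafting workflow itself. The sketch below is a minimal, purely illustrative example: the “trusted database” is a stand-in set, and verify_citation and flag_unverified are hypothetical helpers representing a lookup in Westlaw, LexisNexis, or another authoritative source. Its only purpose is to show the principle that any citation the trusted source cannot confirm should be held back for manual review before filing.

```python
# Minimal, purely illustrative sketch of a pre-filing citation check.
# Assumptions: the "trusted database" below is a stand-in set, and
# verify_citation / flag_unverified are hypothetical helpers; in a real
# workflow the lookup would query Westlaw, LexisNexis, or another
# authoritative case-law source.

TRUSTED_DATABASE = {
    # A genuine case, included only to illustrate a successful lookup.
    "Zicherman v. Korean Air Lines Co., 516 U.S. 217 (1996)",
}


def verify_citation(citation: str) -> bool:
    """Return True only if the citation can be confirmed in the trusted source."""
    return citation in TRUSTED_DATABASE


def flag_unverified(citations: list[str]) -> list[str]:
    """Return every citation that could not be confirmed and needs a lawyer's review."""
    return [c for c in citations if not verify_citation(c)]


if __name__ == "__main__":
    # Citations as they might come back from an AI drafting assistant.
    ai_suggested = [
        "Zicherman v. Korean Air Lines Co., 516 U.S. 217 (1996)",
        # One of the fabricated citations submitted in Mata v. Avianca.
        "Varghese v. China Southern Airlines Co., Ltd., 925 F.3d 1339 (11th Cir. 2019)",
    ]
    for citation in flag_unverified(ai_suggested):
        print(f"UNVERIFIED - hold for manual confirmation before filing: {citation}")
```

No script replaces the lawyer’s own reading of each authority; a check of this kind only narrows the list of citations that demand attention before anything reaches the court.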

3. Immediate Correction of Mistakes Is Crucial

One of the most damaging aspects of the Mata case was how the attorneys handled their mistake. Instead of promptly withdrawing the fabricated cases, they delayed, submitted contradictory affidavits, and attempted to downplay the error. This only worsened their situation and resulted in court sanctions.[xii]

Lawyers who discover an AI-related mistake must act swiftly and transparently. A prompt correction can mitigate damage, maintain professional credibility, and prevent harsher consequences. Ignoring or covering up AI-related errors, however, can escalate into ethical violations and disciplinary action.

Conclusion

This case stands as a cautionary tale for the legal profession, illustrating both the potential and the pitfalls of artificial intelligence in legal practice. AI tools like ChatGPT offer efficiency, automation, and assistance in drafting and summarisation. However, as this case demonstrates, they are not infallible, and when used recklessly, they can lead to serious professional consequences.

The central lesson from Mata’s attorneys’ missteps is clear: AI is a tool, not a substitute for human expertise. While AI can assist in legal work, it cannot independently verify legal precedents or ensure the accuracy of case law. Blind trust in AI-generated content is dangerous, and failing to verify its output can result in reputational damage, court sanctions, and even malpractice liability.

As AI continues to integrate into legal workflows, the legal industry must evolve alongside it. This means adopting clear ethical guidelines, implementing verification processes, and ensuring attorneys remain accountable for their filings. AI’s presence in the legal field is inevitable, but without responsible use, it poses risks that could undermine the integrity of the profession.

Ultimately, the responsibility remains with lawyers. AI can streamline legal work, but human oversight, diligence, and ethical judgment are irreplaceable. The future of law and AI is not about replacement—it’s about partnership, with attorneys maintaining full control over the accuracy and integrity of their work.


Endnotes

[i] Mata v. Avianca, Inc., Case No. 22-cv-1461 (PKC) (S.D.N.Y.).

[ii] ibid

[iii] ibid

[iv] Case No. 22-cv-1461 (PKC) (S.D.N.Y.).

[v] Thomson Reuters, ‘The Key Legal Issues with Gen AI’ (Thomson Reuters, 18 March 2024) https://legal.thomsonreuters.com/blog/the-key-legal-issues-with-gen-ai/ accessed 5 April 2025.

[vi] Nancy B Rapoport and Cynthia A Norton, ‘Doubling Down on Dumb: Lessons from Mata v. Avianca Inc.’ American Bankruptcy Institute Journal 24 (August 2023), available at SSRN: https://ssrn.com/abstract=4528686.

[vii] Mata v. Avianca, Inc., No. 1:2022cv01461, Document 55 (S.D.N.Y. 2023) https://law.justia.com/cases/federal/district-courts/new-york/nysdce/1:2022cv01461/575368/55/ accessed 5 April 2025.

[viii] Case No. 22-cv-1461 (PKC) (S.D.N.Y.).

[ix] Case No. 22-cv-1461 (PKC) (S.D.N.Y.).

[x] Association of Corporate Counsel, ‘Practical Lessons from Attorney AI Missteps: Mata v Avianca’ (ACC, 6 July 2023) https://www.acc.com/resource-library/practical-lessons-attorney-ai-missteps-mata-v-avianca accessed 5 April 2025.

[xi] ibid

[xii] ibid

Author

  • Praney Goyal

    Praney Goyal is a third-year B.Com. LL.B. student at University Institute of Legal Studies (UILS), Panjab University.


The views expressed are personal and do not represent the views of Virtuosity Legal or its editors.
