What happens when an artificial intelligence (AI) tool like ChatGPT invents a legal ruling, and that ruling ends up in a courtroom filing?
In this episode of Diritto al Digitale, Giulio Coraggio explores two real cases, one in Italy and one in Canada, where lawyers unknowingly relied on hallucinated case law generated by artificial intelligence.
We examine the legal and ethical implications of these events, what they reveal about the profession's readiness for legal tech, and why, under the EU AI Act, such mistakes could trigger serious compliance risks. Giulio also shares how DLA Piper is addressing this challenge with a dedicated Legal Tech practice that brings together legal and technical expertise to help clients adopt AI responsibly and effectively.
Tune in to understand why AI can assist, but never replace, human legal judgment. You can listen to the episode below or on Apple Podcasts, Google Podcasts, Spotify, and Audible, and read the full article below.
The Cases on AI Hallucinations in Court in Italy and Canada
In the Italian case, a lawyer submitted a defense brief in a trademark and copyright dispute that included citations attributed to the Italian Supreme Court. Upon review, it emerged that these references were entirely fictitious: they had been fabricated by ChatGPT, a generative AI model. The court acknowledged that the citations were produced without malicious intent, attributing the incident to the attorney's failure to verify the accuracy of research conducted by a colleague using AI. As a result, a claim for aggravated liability under Article 96 of the Italian Code of Civil Procedure was dismissed for lack of demonstrable harm.
This incident is not isolated. In a separate case before the Supreme Court of British Columbia, Canada, a lawyer submitted two fabricated judgments generated by ChatGPT in a custody dispute. The lawyer admitted to using the tool without being aware of its limitations and was ordered to personally compensate the opposing party for the resulting procedural delays.
These cases underscore a significant and growing concern: the risk of AI hallucinations, that is, the generation of plausible-sounding but entirely false information. While generative AI models have proven valuable for enhancing efficiency and supporting routine tasks, they cannot assess the factual or legal accuracy of their own outputs. In legal contexts, where precision and accountability are paramount, this limitation carries serious professional and ethical implications.
The Imperative of Human Oversight
As the legal sector increasingly explores the adoption of AI-driven tools, the importance of maintaining rigorous human oversight cannot be overstated. Legal professionals must carefully review and validate all AI-generated content, particularly when used in the context of legal submissions, opinions, or advice. The role of the lawyer remains indispensable in interpreting legal texts, assessing jurisprudence, and ensuring the accuracy and relevance of cited authorities.
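To make this oversight step concrete, below is a minimal sketch in Python of how a firm might wire citation verification into an AI-assisted drafting workflow: every citation found in a model's draft is checked against a trusted source before the draft can proceed. The citation pattern, the KNOWN_CASES set, and the helper names are hypothetical stand-ins for a real case-law database, not a reference to any actual service.

```python
import re
from dataclasses import dataclass

# Illustrative pattern for Italian Supreme Court citations such as
# "Cass. civ., sez. I, n. 1234/2020" (simplified for the example).
CITATION_PATTERN = re.compile(r"Cass\.\s*civ\.,\s*sez\.\s*\S+,\s*n\.\s*\d+/\d{4}")

# Stand-in for an authoritative case-law repository; a real workflow
# would query an official, vetted database instead of a local set.
KNOWN_CASES = {"Cass. civ., sez. I, n. 1234/2020"}

@dataclass
class CitationCheck:
    citation: str
    verified: bool

def review_ai_draft(draft_text: str) -> list[CitationCheck]:
    """Extract every citation from an AI-generated draft and flag any
    that cannot be confirmed against the trusted repository."""
    return [
        CitationCheck(c, c in KNOWN_CASES)
        for c in CITATION_PATTERN.findall(draft_text)
    ]

if __name__ == "__main__":
    draft = (
        "As held in Cass. civ., sez. I, n. 1234/2020 and "
        "Cass. civ., sez. II, n. 9999/2021, the claim fails."
    )
    for check in review_ai_draft(draft):
        status = "verified" if check.verified else "NOT FOUND - human review required"
        print(f"{check.citation}: {status}")
```

Even a verified match only confirms that the citation exists; a lawyer must still read the decision to confirm that it actually supports the argument being made.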
At DLA Piper, we recognize both the potential and the limits of legal technology. We have established a dedicated Legal Tech practice composed of lawyers with interdisciplinary expertise, enabling us to support clients in the responsible implementation of AI tools. Our approach is grounded in legal and technical rigor, with a focus on governance, transparency, and regulatory compliance.
The Regulatory Landscape: AI and the Legal Profession
The emergence of such incidents also prompts reflection on broader regulatory developments. Under the EU AI Act, the use of generative AI in legal services may qualify as a high-risk application, particularly where it affects individuals' rights or legal obligations. In such cases, the deployment of AI tools must comply with stringent requirements concerning accuracy, traceability, and human oversight.
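By way of illustration, record-keeping supporting traceability and human oversight might look like the sketch below: each AI-assisted research task is logged together with the model used and the lawyer who signed off on the output. The schema is purely hypothetical; the AI Act does not prescribe these field names, and a production system would add integrity controls such as signed, append-only storage.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AIUsageRecord:
    """One audit-trail entry for an AI-assisted research task
    (illustrative fields echoing traceability and human oversight)."""
    prompt: str
    model_output: str
    model_name: str
    reviewer: str                    # lawyer accountable for the output
    reviewer_approved: bool = False  # set True only after manual verification
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_to_audit_log(record: AIUsageRecord,
                        path: str = "ai_audit_log.jsonl") -> None:
    """Append the record as one JSON line to a local log file."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record), ensure_ascii=False) + "\n")
```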
The submission of hallucinated case law, even if unintentional, could raise concerns not only for the individual legal practitioner but also for the law firm and the AI provider involved. To mitigate these risks, it is critical that legal AI solutions are designed with reliable, verifiable sources and deployed within robust risk management frameworks.
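One design pattern consistent with this requirement is to let the model cite only documents actually retrieved from a vetted corpus, and to reject any draft that references anything else. The sketch below assumes the generation step has been instructed to mark citations as [DOC:<id>]; the tag format and helper names are hypothetical.

```python
import re

def extract_cited_ids(answer: str) -> set[str]:
    """Collect the document IDs cited in a draft, assuming the model
    was instructed to mark citations as [DOC:<id>]."""
    return set(re.findall(r"\[DOC:([\w-]+)\]", answer))

def enforce_grounding(answer: str, retrieved_ids: set[str]) -> str:
    """Accept a draft only if every cited ID comes from the vetted,
    retrieved set; otherwise reject it for regeneration or review."""
    unknown = extract_cited_ids(answer) - retrieved_ids
    if unknown:
        raise ValueError(
            f"Draft cites sources outside the vetted corpus: {sorted(unknown)}"
        )
    return answer

# Example: this draft passes only because both cited documents were
# among those retrieved from the trusted repository.
draft = "The mark is distinctive [DOC:cass-2019-456] and protected [DOC:cass-2021-789]."
print(enforce_grounding(draft, {"cass-2019-456", "cass-2021-789"}))
```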
Conclusion
The integration of AI into legal practice is inevitable and, when implemented responsibly, can provide significant advantages. However, these recent cases serve as a cautionary reminder of the professional obligations that remain unchanged. The use of advanced technology must never compromise the duty of diligence, accuracy, and integrity that lies at the core of legal work.
Legal professionals, firms, and institutions must approach the adoption of AI with caution, transparency, and a clear understanding of the applicable legal and ethical standards. Only then can innovation serve as an enabler of progress rather than a source of liability.
On this topic, you can read the related articles available HERE.