Reddy v Saroya: The Alberta Court of Appeal’s Message on AI Use in Legal Proceedings

December 1, 2025

The Alberta Court of Appeal’s recent decision in Reddy v Saroya, 2025 ABCA 322 marks one of the clearest judicial warnings to date about the professional and procedural risks associated with using artificial intelligence (AI), particularly large language models (LLMs), in the preparation of court materials. While the case dealt primarily with a complex civil contempt appeal, the Court of Appeal took the unusual step of devoting a significant portion of its reasons to counsel’s use of AI-generated content and the appearance of fabricated case law in a filed factum.
Reddy stands as a turning point in the Canadian courts’ approach to generative AI: not only acknowledging the technology’s growing role in legal practice, but also emphasizing that accuracy, verification, and professional judgment remain non-negotiable.

The Case Behind the Warning
At its core, Reddy involved a long-running business dispute and an appeal of 32 contempt findings for failing to provide adequate responses to undertakings. While the Court ultimately reduced the number of valid contempt findings to 18, it was the way the appellant’s initial factum was prepared that has drawn the most attention.
The appellant’s factum, drafted with the assistance of a third-party contractor, cited seven cases that did not exist, including six that were said to have been decided by the Alberta Court of Appeal itself. Opposing counsel flagged the issue, explaining that after substantial time spent searching authoritative legal databases, the cited authorities simply could not be found.
Appellant’s counsel ultimately acknowledged that the contractor may have used an LLM despite assurances to the contrary, and that he had not verified the sources before filing due to time pressures, illness, and the holiday season. The Court allowed a corrected factum to be filed, but not without comment, and not without consequences still under consideration.

What the ABCA Actually Said About AI
The Court of Appeal’s analysis of AI use is found at paragraphs 73 – 87 of the decision, where several core principles emerge:
1. Lawyers must understand both the benefits and risks of generative AI (paragraphs 80 – 82)
Relying on the Law Society of Alberta’s Code of Conduct (Rule 3.1-2) and its Generative AI Playbook, the Court of Appeal emphasized that competence now includes technological competence: an understanding of how generative AI works, where it fails, and how to use it responsibly.
The Court referenced the Law Society’s warning that LLMs will confidently “fill gaps” with invented cases, facts, or citations, and may even fabricate entire opinions if prompted. This phenomenon, known as ‘AI hallucination,’ is a well-documented and foreseeable risk as the use of generative AI becomes more widespread.

2. Verification is not optional (paragraphs 81 – 83)
The Court of Appeal reiterated the Alberta Courts’ October 2023 Notice on Ensuring the Integrity of Court Submissions When Using Large Language Models, which requires:
• a “human in the loop” for all AI-generated materials,
• exclusive reliance on authoritative sources (CanLII, official court websites, commercial databases), and
• meaningful, point-by-point verification of every citation.
The Court of Appeal went further, making clear that excuses such as ‘I was busy’ or ‘the holidays made things difficult’ do not justify neglecting verification:
“The time needed to verify and cross-reference cited case authorities generated by a large language model must be planned for as part of a lawyer’s practice management responsibilities.” (paragraph 83)

3. Counsel bears ultimate responsibility (paragraph 83)
The Court held that the lawyer whose name appears on the filed document is responsible for its content, regardless of who or what helped produce it. This is true even if:
• the drafting was outsourced,
• AI use was concealed from the lawyer, or
• AI was only used at an early stage.

4. Consequences for AI misuse will escalate (paragraph 84)
The Court of Appeal cautioned that:
• counsel “should not expect leniency” for violating the 2023 Notice;
• sanctions may include striking submissions or awarding costs against counsel personally;
• more serious remedies, like contempt proceedings or referrals to the Law Society, may be appropriate where the conduct amounts to an abuse of process.
In Reddy, the Court expressly invited submissions on whether appellant’s lead counsel should personally pay enhanced costs for the improper use of AI-generated materials (paragraph 87). Those costs could be significant.

Why This Matters: The Dangers of Over-Reliance on AI
The danger of hallucinated case law is no longer hypothetical. Reddy shows what can happen when AI is used without safeguards:
• AI confidently produces false legal authorities. Even the most advanced LLMs can output convincing but entirely fabricated case names, citations, quotes, or statutory sections.
• Real and unnecessary costs are imposed on opposing counsel and the courts. In Reddy, respondent’s counsel spent significant time searching commercial databases and court archives for non-existent cases.
• The integrity of the judicial process is undermined when verification is lacking. As the Court put it, unverified AI-generated submissions “can bring the administration of justice into disrepute” (paragraph 80).
• Clients may ultimately pay the price, or their lawyers may be ordered to. The Court’s invitation to consider costs against counsel personally (paragraphs 85 – 87) signals a strong willingness to deter future misuse.
• AI cannot account for context, legal nuance, or ethical obligations. AI tools can assist, but they cannot replace legal judgment, professional responsibility, or the obligations of an officer of the court.
As in Clearview AI Inc v Alberta (Information and Privacy Commissioner), 2025 ABKB 28, discussed in our previous article, “Alberta’s Court of King’s Bench Declares Privacy Provisions Unconstitutional,” Alberta courts are increasingly willing to engage directly with the realities of emerging technologies and to impose meaningful boundaries when necessary.

Looking Ahead: Responsible AI Adoption in the Legal Sector
AI is reshaping nearly every area of practice, from document review to trial preparation to client management. The Law Society of Alberta is encouraging lawyers to embrace the technology responsibly, and the courts, too, recognize the efficiencies AI can offer.
However, Reddy confirms that in litigation, accuracy is king. As Alberta courts adapt to the realities of generative AI, practitioners must be prepared for:
• sharper judicial scrutiny of legal authorities,
• stricter expectations around verification,
• increasing willingness to impose costs or sanctions, and
• the incorporation of AI-competence into the definition of a ‘competent lawyer.’
In that sense, Reddy is not just a warning; it is a roadmap for modern legal practice. Generative AI will continue to evolve, and so will the law. For now, Reddy stands as a reminder that while technology can enhance legal practice, it can never replace the lawyer’s critical role in ensuring accuracy, integrity, and respect for the administration of justice.
Note: This article provides general commentary and is in no way intended to replace the need to consult with a legal professional concerning the specific circumstances of your situation. This article should not be construed or relied upon as legal advice.
