Illinois Supreme Court Provides Guidance on AI Use in Legal Proceedings

In December 2024, the Illinois Supreme Court issued guidance (the guidance) on the use of artificial intelligence (AI) by judges and lawyers.

The guidance comes after several high-profile instances of generative AI misuse in litigation, in which lawyers used generative AI to draft briefs and filed them with courts, only to discover afterward that the AI had “hallucinated” caselaw that does not exist. See, e.g., Gauthier v. Goodyear Tire & Rubber Co., No. 1:23-CV-281, 2024 WL 4882651, at *3 (E.D. Tex. Nov. 25, 2024) (sanctioning attorney). More responsible members of the bench and bar have used both generative and non-generative AI tools, including tools for general audiences (e.g., ChatGPT and Claude) and tools designed specifically for lawyers (e.g., Clearbrief, CoCounsel, and Harvey AI).

Judicial responses to AI use have varied. Some judges have proposed using AI to interpret legal terminology, while others have banned the use of AI for drafting briefs or as authority in motions. Some courts have issued standing orders requiring litigants to disclose AI usage in filings.

In contrast, the Illinois Supreme Court encourages the responsible, supervised use of AI and recommends that Illinois state court judges not require lawyers to disclose the use of AI in drafting pleadings. The guidance emphasizes that existing legal and ethical rules, rather than AI-specific provisions, are sufficient to govern its use in litigation: responsible use means following the rules that already exist.

The Illinois Supreme Court’s guidance takes a relatively “pro-AI” stance. It recognizes the potential benefits of AI use, including efficiency and increased access to justice. The guidance authorizes the use of AI by litigants and judges, states that its use should be expected rather than discouraged, and recommends that courts not require litigants to disclose the use of AI in drafting pleadings.

A judicial reference sheet accompanies the new guidance, providing basic information about AI and links to other resources. The reference sheet also discusses use guidelines, like attribution and confidentiality, and identifies possible issues, such as AI hallucinations and deepfakes. (The new guidance highlights that judges “remain ultimately responsible for their decisions, irrespective of technological advancements.”)

Although “pro-AI” in the sense that it does not require specific disclosure and discourages an outright ban, the guidance is better understood as reiterating that the current rules governing lawyers provide a sufficient regulatory structure to address the serious risks that generative AI technology poses. The guidance identifies two categories of risk.

First, the guidance acknowledges concerns about authenticity, accuracy, bias, and the integrity of court filings and decisions. It makes clear that Illinois courts will be “vigilant” against AI technologies that jeopardize due process, equal protection, or access to justice. The Illinois legislature recently responded to similar concerns by restricting the use of AI in employment practices.

Second, the court’s guidance highlights privacy and confidentiality, stating that AI use must not compromise confidential communications, personal identifying information, protected health information, justice and public safety data, security-related information, or “information conflicting with judicial conduct standards or eroding public trust.”

The court’s guidance states that the Illinois Rules of Professional Conduct and Code of Judicial Conduct apply fully to the use of AI technologies. Thus, any court’s or litigant’s use of AI must comply with existing legal and ethical standards. For example, lawyers have a duty to ensure the authority they cite in court filings is accurate and good law — this includes making sure cited cases are real and not “hallucinated” by generative AI tools.

Put differently, litigators in Illinois courts still have the professional responsibility to ensure that their factual and legal citations are accurate. And they, like others doing business in Illinois, must protect the private data that their technological tools may use.

As a firm, Taft is committed to utilizing AI responsibly, and has vetted and approved specific tools, such as Clearbrief and Lexis® Create+, formerly Henchman, that improve work product while satisfying regulations and other rules. Taft’s litigators continue to monitor and develop regulations and best practices on the legal uses of AI. For questions on how to implement an AI policy or how to navigate new laws and regulations, contact an attorney in the Technology and Artificial Intelligence team.
