Artificial Intelligence in Court Proceedings
In the United Kingdom, the Courts and Tribunals Judiciary has published guidance for Judges on the use of Artificial Intelligence (“AI”). The guide briefly considers some of the challenges that the introduction of AI into the legal industry poses for Judges and practitioners alike.
The guide serves as a timely reminder of the effect that AI has had, and will continue to have, on the legal industry at large. The introduction of AI into the legal industry, an industry traditionally slow to change and cautious of it, is largely positive: emerging technologies can assist the Judiciary and practitioners with administration, workload and accuracy.
However, as is common with the introduction and implementation of new technologies, risks and obligations are bound to arise. It accordingly becomes incumbent on the industry to adapt and undergo a learning curve to maintain the ethics, high performance and accuracy demanded in legal practice. One need only consider the instances of AI technologies producing fictitious case citations in practitioners’ heads of argument to understand the risk that uncritical reliance on AI technologies, without proper regulation or procedures, can pose.
Although the guide is addressed specifically to the UK Judiciary, it does nevertheless provide value and insights into certain challenges that AI brings to Judicial services across the board and the legal industry at large. The guide is broken down into seven points, each of which will be briefly discussed below.
1. Understanding AI;
The guide encourages an understanding of AI and related tools, including their capabilities and limitations. It highlights the limitations of certain AI technologies, such as public AI chatbots, which do not have access to authoritative legal databases and whose quality of answers depends on how the user engages with them.
2. Confidentiality and Privacy;
The guide warns of confidentiality and privacy risks. Anything entered into a public AI chatbot should be regarded as publicly available, and such information may be retained for future use by the chatbot. Judges should accordingly be aware of, and take caution with, the information they input.
3. Accountability and Accuracy;
The guide reinforces practitioners’ responsibility for the documents and information that they put before court; Judges similarly have a duty to ensure that judgments are based on sound research and accurate information. When using AI tools, accountability and accuracy demand that the output be verified before it is relied upon.
4. Bias;
AI tools based on Large Language Models (LLMs) often contain embedded bias, as their responses are influenced by the datasets on which they are trained. It is incumbent on Judges and practitioners to acknowledge this possibility and take steps to correct any biases.
5. Security;
Caution should be exercised regarding security when utilising AI technologies. Some of these issues were touched on under confidentiality and privacy. The use of authorised and trusted AI tools is of paramount importance.
6. Responsibility;
Judicial office holders, as well as practitioners, bear personal responsibility for materials produced in their name. While Judges are not obligated to detail the research or preparatory work behind a judgment, adherence to the guidelines provided allows generative AI to serve as a potentially useful secondary tool. If staff, including clerks or assistants, use AI tools on behalf of a Judge, discussions should be held to ensure appropriate, risk-mitigating usage.
7. Awareness of AI use;
Legal professionals have long used certain AI tools without issue, such as technology assisted review (TAR) in electronic disclosure, and AI is prevalent in everyday applications such as search engines, social media, image recognition and predictive text. Practitioners have a responsibility to ensure the accuracy and appropriateness of the material presented to courts. While it may not always be necessary to disclose the use of AI explicitly, context matters, and until familiarity with these technologies grows, occasional reminders of the professional obligation to verify AI-generated research or citations may be needed.

AI chatbots are also now used by unrepresented litigants, who may lack the skills to independently verify legal information and may not be aware of potential errors. Courts further face challenges with AI-generated fake material, including text, images and videos, and Judges should be mindful of the risks posed by deepfake technology.
The guide speaks to a changing legal industry in which AI does, and will continue to, feature. Adaptability on the part of the Judiciary and the legal industry at large is the key to benefiting from this changing landscape while ensuring continued accuracy and accountability. I have no doubt that AI regulation will continue to develop as AI technologies, and their adoption by the Judiciary and the legal industry, are ironed out.
Brandon Cole
Associate
(AI technologies were used in the production of this article and the images)