Lessons Learned Writing ‘Advancing Legal Reasoning’, a Paper on Modern AI and Global Jurisprudence
Mike De'Shazer, LL.M.
Law + AI Integrator for Better Societies | Lecturer | Coder | Policy Advisor
TL;DR of this post about the recent paper on AI and legal frameworks as presented yesterday at the SuperAGI Leap Summit:
Given that the state of AI changes so rapidly these days, academic research and commercial endeavors can become outdated rather quickly. But what about a domain such as legal reasoning in particular? Legal reasoning can take a number of forms; the essence of this discussion, however, is how we apply laws to facts in order to address legal and policy questions. A legal question might be what award a plaintiff should receive, while a policy question might be the appropriate budget to allocate to a government-backed project.

My recent paper, Advancing Legal Reasoning: The Integration of AI to Navigate Complexities and Biases in Global Jurisprudence with Semi-Automated Arbitration Processes (SAAPs), analyzes 188 (near) randomly selected court cases across 5 national jurisdictions to find anomalies in legal reasoning between judges and judgments. The most anomalous case was Rossendale BC v Hurstwood Properties (A) Ltd, a tax case in the UK Supreme Court. By the end of the experiment, an automated arbitration took place, with an AI-based arbitrator adjudicating between an AI-based claimant arguing that bias was present and an AI-based critic defending the Rossendale judgment. The context of the arbitration was set by a human: specifically, uncovering bias among judges who rule on a matter that affects their own income. This human involvement is the "semi" in Semi-Automated Arbitration Processes.
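The arbitration loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual pipeline: the function names (`run_saap_round`, `call_model`) are hypothetical, and `call_model` is a placeholder a real setup would replace with an actual ALM API call.

```python
# Minimal sketch of one SAAP round: a human frames the bias question,
# then three ALM roles (claimant, critic, arbitrator) argue and adjudicate.
# call_model is a stand-in for a real ALM call; it returns canned text here.

def call_model(role: str, prompt: str) -> str:
    """Placeholder for an ALM call; swap in a real model client."""
    return f"[{role}] response to: {prompt[:60]}"

def run_saap_round(human_context: str, judgment: str) -> dict:
    # The "semi" step: the human-supplied context scopes the inquiry.
    claim = call_model(
        "claimant", f"{human_context}\nArgue that bias affected: {judgment}")
    defense = call_model(
        "critic", f"{human_context}\nDefend the reasoning of: {judgment}\nClaim: {claim}")
    ruling = call_model(
        "arbitrator", f"{human_context}\nWeigh both sides.\nClaim: {claim}\nDefense: {defense}")
    return {"claim": claim, "defense": defense, "ruling": ruling}

result = run_saap_round(
    human_context="Examine whether judges ruling on matters affecting their own income show bias.",
    judgment="Rossendale BC v Hurstwood Properties (A) Ltd",
)
for step, text in result.items():
    print(step, "->", text)
```

The design point is simply that the human sets the frame once, and the adversarial exchange between model roles then runs unattended.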
This post is a list of the top 3 lessons I felt were most impactful to the space of AI and the law.
Lesson 1: The Watchmen
Who watches the watchmen? When done correctly, today's AI-based advanced language models (ALMs) can be leveraged to influence court decisions, provided human contextualization is applied effectively. The key to this concept is adequately defining "effectively". ALM-based applications can be deployed for exactly this reason, serving as filters in cooperation with a human expert.
Lesson 2: Jailbreaking GPT
Jailbreaking GPT-based ALM technologies can be carried out effectively with PAIR (Prompt Automatic Iterative Refinement). This can overcome training biases in GPT-based models, specifically in the realm of jurisprudence and even-handedness.
Essentially, you can use the legal reasoning tactic of IRAC: Issue, Rules, Application, Conclusion. However, the issue and the rules are obfuscated behind substitute variables in a game-like format, and another algorithm then swaps the correct variables back into the ALM's output. When it comes to inherent, human-generated biases within the sphere of legal judgment, we must remember that ALMs were trained on those very biases. To overcome this, toward a purer application of the essence of laws and an optimal impact on society, we can further investigate strategies that can be agreed upon within jailbreaking frameworks.
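The variable-swapping step might look like the following. This is a hedged sketch, not the paper's implementation: the alias table, the sample prompt, and the imitated model reply are all invented for illustration.

```python
# Illustrative obfuscate/deobfuscate pass: case-specific terms are masked
# with neutral game-like tokens before the prompt reaches the ALM, and the
# tokens are swapped back to real terms in the model's output.

ALIASES = {
    "the council": "PLAYER_A",
    "the property owner": "PLAYER_B",
    "business rates": "RESOURCE_X",
}

def obfuscate(text: str, aliases: dict) -> str:
    for real, token in aliases.items():
        text = text.replace(real, token)
    return text

def deobfuscate(text: str, aliases: dict) -> str:
    for real, token in aliases.items():
        text = text.replace(token, real)
    return text

prompt = obfuscate(
    "Should the council recover business rates from the property owner?", ALIASES)
# The prompt is sent to the ALM; here we imitate a reply reusing the tokens.
model_reply = "PLAYER_A may recover RESOURCE_X from PLAYER_B only if the rules permit."
print(deobfuscate(model_reply, ALIASES))
```

Because the model only ever sees `PLAYER_A` and `RESOURCE_X`, any learned associations with the real parties or subject matter are kept out of its reasoning.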
Lesson 3: AI-Generated Creativity
Automating grounded theory-based research and multi-AI agent discourse in the legal field has the potential to lead to breakthroughs in actual AI-generated creativity.
Theory development through observation is at the heart of the research methodology used for Advancing Legal Reasoning. One interesting phenomenon that unfolded while working on this project was how the AI developed scores around humor ratings in languages like Cantonese and Kinyarwanda. We have an interesting opportunity to leverage the strong translation capabilities of GPT-based technologies to bridge gaps and enhance the quality of legal systems by borrowing insights across borders. This accessibility, combined with the ability of AI-based technologies to carry on discussions among themselves, can unlock untold wonders in the legal (and more broadly, societal) realms.
TL;DR Plus Conclusion
The nature of qualitative analysis research has been transformed by the advent of the latest ALMs. Collaborations between human experts and ALMs may yield novel and creative solutions that contribute toward solving legal systems' perennial problem: "Who watches the watchmen?"
__________________
Special Thanks
Dr. Tom Rutkowski: Early discussions as this research project progressed benefited greatly from Rutkowski's advice. Additionally, his recent book, Explainable Artificial Intelligence Based on Neuro-Fuzzy Modeling with Applications in Finance, contributed to the paper's call for important future research on un-blackboxing certain advanced AI systems.
John J. Nay: Nay’s paper on Large Language Models as Fiduciaries was instrumental in the development of this research. I view it as one of the most important works on the discourse of AI in the context of the law and society.
Mark C. Suchman and Lauren B. Edelman: Their work on Legal Rational Myths greatly impacted this research. It is a work that may be best described as a mature and nuanced approach toward the analysis of legal reasoning as it is practiced.
An additional and special thanks to all of the other contributors and corresponding works listed in the bibliography.
Further thanks to the conference coordinators at SuperAGI Leap Summit, where this paper was first presented.