When AI Fails the Grade
Karta Legal LLC
Award-winning legal operations and law practice management consultants for law firms and legal departments of any size.
In a recent twist that has captured the attention of legal tech circles, a Canadian law professor delivered a scathing critique of Lexis+ AI, deeming the tool's performance subpar. The critique underscores the ongoing struggle of even the most sophisticated AI systems to meet the high stakes and expectations inherent in legal research and analysis. This issue, however, is not unique to Lexis+ AI: all generative AI models, regardless of their sophistication, are susceptible to pitfalls, hallucinations, and mistakes.
However, such headlines should not frighten us or dissuade us from progress. Instead, we need a thoughtful, structured plan forward: one that acknowledges the potential pitfalls, embraces innovation, and sets out clear, responsible pathways for integrating AI into legal practice.
The Promise and Pitfalls of Legal AI
Lexis+ AI was introduced with the promise of revolutionizing legal research by providing efficient, accurate, and citable results grounded in an extensive repository. The company emphasized its commitment to delivering "hallucination-free" linked legal citations, aiming to mitigate the common AI issue of generating inaccurate or fabricated information.
LexisNexis continues to improve the product, including enhancements announced earlier this year: Lexis+ AI now employs an advanced form of Retrieval Augmented Generation (RAG), known as RAG 2.0, which integrates the Shepard's Knowledge Graph. This integration is meant to enhance the AI's ability to provide authoritative and comprehensive responses by leveraging accurate case law relationships.
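For readers curious about the mechanics, retrieval augmented generation means the system first retrieves relevant source documents and then asks the model to answer from those documents, rather than from its training memory alone. The sketch below is a deliberately simplified illustration of that pattern, not LexisNexis's implementation; the toy corpus, the keyword-overlap scoring, and the answer_with_sources function are all hypothetical stand-ins.

```python
# Minimal sketch of the retrieval-augmented generation (RAG) pattern.
# Hypothetical toy example; not LexisNexis's actual pipeline.

TOY_CORPUS = {
    "R v Smith, 2020": "Discusses the mens rea standard for criminal negligence.",
    "Jones v Crown, 2018": "Addresses sentencing principles for first offenders.",
    "Re Estate of Lee, 2021": "Covers testamentary capacity and undue influence.",
}

def retrieve(query: str, corpus: dict, top_k: int = 2) -> list:
    """Rank documents by crude keyword overlap with the query.

    Real systems use embeddings and, per LexisNexis, a citation
    knowledge graph; word overlap stands in for that step here.
    """
    q_words = set(query.lower().split())
    scored = [
        (len(q_words & set(text.lower().split())), title)
        for title, text in corpus.items()
    ]
    scored.sort(reverse=True)
    return [title for score, title in scored[:top_k] if score > 0]

def answer_with_sources(query: str, corpus: dict) -> str:
    """Build a grounded prompt: the model is told to answer ONLY from
    the retrieved passages, each of which carries its citation."""
    sources = retrieve(query, corpus)
    if not sources:
        return "No supporting authority found; declining to answer."
    context = "\n".join(f"[{s}] {corpus[s]}" for s in sources)
    # In a real system, this prompt would now be sent to a language model.
    return f"Answer using ONLY these sources:\n{context}\nQuestion: {query}"

print(answer_with_sources("What is the mens rea standard for negligence?", TOY_CORPUS))
```

The practical takeaway for lawyers: a RAG system is only as reliable as its retrieval step. If the wrong passages come back, or none do, the model may still improvise an answer, which is precisely where hallucinations creep in.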
However, Professor Perrin's evaluation suggests a discrepancy between these promises and the tool's actual performance. He highlighted instances where Lexis+ AI produced inaccurate or misleading information, undermining its reliability as a legal research assistant.
Professor Perrin tested Lexis+ AI with a series of prompts, beginning with drafting a Supreme Court motion, during which the AI cited a non-existent law and produced a poor-quality draft. Attempts to summarize a Supreme Court case led to verbatim copying of headnotes, sometimes from unrelated cases. During a LexisNexis training session, the AI declined to respond to previously attempted prompts, citing unavailability. When tested on substantive legal questions, it offered concise but error-filled responses, confusing legal concepts and citing incorrect cases. Subsequent tests showed little improvement, with recurring issues and basic answers that failed to reference leading legal authorities. He concluded that Lexis+ AI is not currently reliable for student or professional legal research.
Why Does This Matter?
Headlines like these can deter progress, embolden skeptics, and stall innovation within the legal industry. Critiques and high-profile failures often reinforce the fears of those hesitant to embrace new technology, casting a shadow over genuine advances. However, such missteps are almost inevitable and, to some extent, predictable, particularly when ambitious claims like "no hallucinations" are made in the rapidly evolving world of AI. While the pursuit of error-free performance is laudable, it must be tempered by transparency about AI's inherent limitations and a commitment to continual refinement.
1. Trust in Technology: Legal professionals rely on precision and credibility. Errors from AI tools can have significant consequences, potentially weakening arguments or adversely affecting client cases. Hallucinations in AI outputs pose risks that can erode trust in these technologies.
2. The Human-AI Balance: Despite technological advancements, human oversight remains crucial. The effectiveness of tools like Lexis+ AI depends on expert validation. Legal practitioners must critically assess AI-generated outputs rather than accept them unconditionally.
3. Transparency and Training: Integrating AI into legal practice necessitates a thorough understanding of these systems. Training legal teams on the capabilities and limitations of AI tools is essential. AI outputs should be viewed as part of a collaborative process, not as definitive answers.
4. Ethical Considerations: The propensity of generative AI to produce hallucinations raises ethical questions. Lawyers are bound by strict rules regarding competency and honesty, which extend to the use of AI tools. If an AI tool introduces errors or fabrications, determining responsibility becomes complex.
Ideas for Improving AI’s Role in Legal Practice
- Pilot Testing and Sandboxing: Law firms should conduct rigorous pilot tests and use sandbox environments before fully deploying AI tools. This approach allows for close examination of an AI solution's performance in practice, identifying any deficiencies.
- Double-Checking AI Output: Similar to reviewing a junior associate's work, AI outputs should be scrutinized for accuracy. A layered approach, where AI-generated content is verified by human experts, ensures more reliable outcomes. (A minimal, hypothetical sketch of an automated first-pass citation check appears after this list.)
- Clear Guidelines for AI Use: Firms must develop and regularly update policies on how, when, and to what extent AI can be used. These guidelines should emphasize leveraging AI's potential while recognizing its limitations.
- AI Literacy for All: Training legal professionals in basic AI concepts and the practical aspects of using these tools can help mitigate risks. Understanding when and how AI hallucinations might occur, and recognizing red flags, empowers practitioners.
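To make the double-checking recommendation concrete, the sketch below shows one way a firm might automate a first pass over an AI draft: extract case citations and flag any that do not appear in a trusted index. Everything here is hypothetical; the VERIFIED_CITATIONS set, the simplistic citation pattern, and the flag_unverified helper are illustrative stand-ins, and the screen supplements expert review rather than replacing it.

```python
# Hypothetical first-pass screen for AI-generated citations.
# Flags cited cases absent from a trusted index so a human reviewer
# knows where to focus; it does NOT certify that a citation is sound.

import re

# Stand-in for a verified citation index (e.g., an internal database).
VERIFIED_CITATIONS = {
    "R v Smith, 2020 SCC 12",
    "Jones v Crown, 2018 ONCA 455",
}

# Simplistic pattern for neutral-style citations; real citators are far more robust.
CITATION_PATTERN = re.compile(
    r"[A-Z][a-z']*(?: [A-Z][a-z']*)* v [A-Z][a-z']*(?: [A-Z][a-z']*)*, \d{4} [A-Z]+ \d+"
)

def flag_unverified(ai_draft: str) -> list:
    """Return citations found in the draft that are not in the trusted index."""
    found = CITATION_PATTERN.findall(ai_draft)
    return [citation for citation in found if citation not in VERIFIED_CITATIONS]

draft = (
    "As held in R v Smith, 2020 SCC 12, negligence requires a marked departure. "
    "See also Doe v Roe, 2019 BCCA 999."  # plausible-looking but unverified
)
for citation in flag_unverified(draft):
    print("NEEDS HUMAN REVIEW:", citation)
```

A flagged citation is not necessarily fabricated, and an unflagged one is not necessarily on point; the human expert remains the final check, as the list above emphasizes.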
A Call for Collective Improvement
This critique should not deter the use of AI in legal practice. As the saying goes, "Innovation is the ability to see change as an opportunity – not a threat." AI continues to advance rapidly, and its integration into the legal field is inevitable. Rather than retreating from adoption whenever headlines highlight AI's pitfalls, we should remember the old adage: "proceed with caution."
Solution providers would be wise to exercise care in making sweeping claims about their products' capabilities: overpromising creates unrealistic expectations and erodes trust. Legal buyers, for their part, should move forward with AI adoption, but cautiously, adhering to clearly defined guidelines and procedures. A collaborative approach that brings together legal practitioners, technology providers, academics, and other stakeholders is essential to continually enhance and refine AI technology. Our collective goal should be to prioritize steps that safeguard and elevate the accuracy and value of AI outputs.