Advisory Opinion by Bar Associations on the Use of GenAI
Karta Legal LLC
Award-winning legal operations and law practice management consultants for law firms and legal departments of any size.
The Pennsylvania Bar Association Committee on Legal Ethics and Professional Responsibility and the Philadelphia Bar Association Professional Guidance Committee have jointly issued a comprehensive opinion on the ethical use of artificial intelligence (AI). The opinion is advisory only, and not binding on any disciplinary board or court, but in our view it provides helpful guidance for all legal professionals today.
At its core, the opinion restates what has been said before: lawyers are required to stay proficient with technology relevant to their practice. This is nothing new. The Model Rules of Professional Conduct incorporated a duty of technological competence more than a decade ago, when Comment 8 to Rule 1.1 was amended in 2012.
In the past, this included understanding how to use legal research databases, e-discovery software, smartphones, and email, and how to safeguard client information in digital formats. Now, it includes understanding how large language models (LLMs) work and how GenAI is applied in the practice of law.
To the extent that you do not yet have a grasp of the ins and outs of GenAI, the opinion encourages you to embrace the inevitability of change and make time to learn how this technology is or should be used. Even if you are not using it yourself, you need to understand it, because your client or your opposing counsel might be. It is also worth noting that GenAI, by its generative nature, represents a significant change that brings complex challenges. Unlike earlier AI tools that only analyzed existing content, generative AI can create new content, raising new ethical considerations.
Below we list the key ethical concerns, the rules at issue, and recommended best practices. But before we get to that:
The issue with hallucinations...
The opinion goes on to discuss hallucinations and some of the most glaring ethical violations of the recent past, in which lawyers submitted GenAI output in court filings without any regard for their duties of professional conduct or ethics.
It is important to recognize that the much-publicized "hallucinations" in AI-generated content are fundamentally a manifestation of inadequate professional oversight, not unlike errors that have occurred throughout the history of legal practice. Admittedly, the grave concern is that these errors can multiply quickly if the use of GenAI is not kept "in check," but in many instances hallucinations are not unique to AI technology.
As anyone who has supervised inexperienced legal professionals or young associates knows, we have all encountered work that seemed disconnected from reality – effectively human "hallucinations." This typically resulted from hasty work, unclear instructions, or insufficient attention to detail. Often, multiple revisions and extensive editing were necessary to produce client-ready work. The advent of generative AI has not altered this fundamental dynamic.
In the context of AI, similar issues arise when users provide excessive or disorganized data, craft poor prompts, neglect iterative refinement, or lack the expertise to properly evaluate the output. This mirrors the age-old principle of "garbage in, garbage out."
However, legal professionals who are diligently learning to harness this technology, continuously refining their methods, and rigorously validating outputs are poised to achieve remarkable improvements in efficiency and quality. As with any tool, the key lies in skilled application and meticulous oversight.
The challenge, therefore, isn't unique to AI but rather a continuation of the profession's ongoing responsibility to maintain high standards of accuracy and diligence, regardless of the tools employed. As the legal community adapts to and masters these new technologies, we can expect to see exponential improvements in both productivity and precision.
Ethical concerns addressed in the opinion
1. Competence: Lawyers must understand AI technology, its benefits, and risks. They should verify all AI-generated citations and content.
2. Confidentiality: Client information must be protected when using AI tools. Lawyers should ensure AI systems have adequate security measures.
3. Communication: Lawyers must inform clients about the use of AI in their cases, including potential benefits and risks.
4. Conflicts of Interest: AI systems may inadvertently create conflicts by using information from one case to inform another. This needs to be monitored.
5. Candor to the Court: Lawyers are responsible for ensuring AI-generated content is accurate and truthful. GenAI work product cannot be left unchecked. GenAI does not replace human supervision.
6. Supervision: The same ethical rules that apply to supervising human assistants apply to AI tools.
7. Unauthorized Practice of Law: Lawyers must ensure AI tools do not engage in tasks requiring legal judgment without attorney oversight.
These concerns are tied to the Pennsylvania Rules of Professional Conduct, including Rule 1.1 (competence), Rule 1.4 (communication), Rule 1.6 (confidentiality), Rules 1.7 and 1.9 (conflicts of interest), Rule 3.3 (candor toward the tribunal), Rules 5.1 and 5.3 (supervision), and Rule 5.5 (unauthorized practice of law).
Best practices for AI use in legal practice
As noted above, the key difference with AI is the scale and speed at which these errors can propagate. Without proper oversight and rigorous verification processes, AI-generated hallucinations could quickly multiply, potentially leading to more widespread and serious consequences than traditional human errors.
This underscores the importance of maintaining strict quality control measures when using AI tools. Legal professionals must approach AI-generated content with the same, if not greater, level of scrutiny as they would apply to work produced by human colleagues. Regular audits, cross-referencing, and expert review remain essential safeguards against the amplification of errors in the AI era.
By implementing robust verification protocols and fostering a culture of critical evaluation, the legal community can harness the benefits of AI while mitigating the risk of exponential error growth. This balanced approach will be crucial in ensuring that AI remains a powerful tool for enhancing legal practice rather than a source of amplified inaccuracies.
The opinion provides the following guidance:
1. Truthfulness and Accuracy: Ensure AI-generated content is truthful, accurate, and based on sound legal reasoning.
2. Verification: Check all citations and materials generated by AI for accuracy.
3. Competence: Develop proficiency in using AI technologies.
4. Confidentiality: Safeguard client information when using AI tools.
5. Conflict Identification: Be vigilant in identifying potential conflicts arising from AI use.
6. Client Communication: Inform clients about AI use in their cases and obtain consent when necessary.
7. Unbiased Information: Ensure AI-generated content is free from bias and discriminatory outcomes.
8. Proper Use: Guard against misuse of AI-generated content to manipulate legal processes.
9. Ethical Compliance: Stay informed about regulations governing AI use in legal practice.
10. Professional Judgment: Recognize that AI assists but does not replace legal expertise.
11. Billing Practices: Ensure AI-related expenses are reasonable and properly disclosed.
12. Transparency: Be open with clients, colleagues, and courts about AI use in legal practice.
Conclusion
Our fundamental professional duties remain unchanged; the key is understanding how to leverage GenAI to fulfill these responsibilities effectively and ethically.
Lawyers can ethically use AI within the framework of the Pennsylvania Rules of Professional Conduct, provided appropriate safeguards are in place. AI tools complement but do not replace personal legal review, and they must be used cautiously, with thorough verification of AI-generated content.