ChatGPT: Hybrid Intelligence in Law Practice and Legal Ethics
By DALL·E 2 and inspired by Extraordinary Attorney Woo


In the last few weeks I have been spending a great deal of time conversing with ChatGPT about how we can ethically use that wonderful AI in our law practice to provide better services at greatly reduced costs. At one point, staring into a sea of patterned data, I had a vision of a graceful great whale breaching out of the water, and saw the immediate future of hybrid intelligence in law practice. Those of you who have seen the touching Netflix series Extraordinary Attorney Woo will know why; Attorney Woo is not just neurodivergent but has two somewhat distinct superpowers: total recall of vast information stores and creative epiphanies always accompanied by visions of leaping dolphins and whales. My epiphany was that my ethically focused conversations with ChatGPT were forming a hybrid intelligence offering a combination of precisely those two superpowers, and perhaps the most important transformation the practice of law has ever seen, at least in terms of the time-honored social structure of our guild: partner, associate, paraprofessional. To these, ChatGPT introduces a new, complementary type of intelligence and the even more exciting opportunity for its integration, potentially leading to a drastic reduction in the cost of good legal services and an expansion of access to legal information and justice. Lawyers who jump on board the AI train now, however, without paying close attention to the ethical and practical restrictions discussed below, put themselves at serious risk of ethical violations and malpractice.

1. Ethical Risks of Using ChatGPT

a. Confidentiality

A primary ethical concern when using ChatGPT is the potential for breaches of client confidentiality. Under all codes of legal ethics, lawyers have a duty to protect the confidentiality of client information and to prevent its unauthorized disclosure. Disclosure to ChatGPT as currently configured may itself violate that duty, and the use of a language model like ChatGPT could lead to further disclosures of sensitive information if the model is not properly trained and secured. To mitigate these risks, lawyers must ensure that the model is trained on a dataset that does not contain our confidential client information, that any of our client information used as input to the model is properly anonymized, and that appropriate security measures are in place to protect client information when using the model. At present, these security measures will simply keep our confidential client information out of the model; in a future of private or enterprise ChatGPT, they may also involve preventing any sensitive information from leaking out.
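To make the anonymization point concrete, here is a minimal sketch in Python of the kind of redaction step a firm might place between client materials and any external model. The patterns, the client-term list and the placeholder workflow are illustrative assumptions only, not a reliable anonymization solution; real matters require far more robust tooling and attorney review before anything leaves the firm.

```python
import re

# Hypothetical, illustrative patterns only. Real anonymization requires far
# more robust tooling and attorney review before anything leaves the firm.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

# Matter-specific confidential terms, maintained by the responsible attorney.
CLIENT_TERMS = ["Acme Widgets LLC", "Jane Q. Client"]


def redact(text: str) -> str:
    """Replace obvious identifiers with neutral placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    for term in CLIENT_TERMS:
        text = text.replace(term, "[CLIENT REDACTED]")
    return text


if __name__ == "__main__":
    prompt = ("Summarize the indemnification clause Acme Widgets LLC signed; "
              "questions to jane@example.com or 555-123-4567.")
    # Only the redacted text, reviewed by the attorney, would ever be sent
    # to an external service such as ChatGPT.
    print(redact(prompt))
```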

Disclosing confidential client information to ChatGPT as currently configured could not only violate the duty of confidentiality but also jeopardize the attorney-client privilege. The threat that ChatGPT poses to either client confidentiality or the attorney-client privilege depends on how the technology is used and configured. It is up to the users and developers of the technology to ensure that it is configured and used in a way that protects the confidentiality and privacy of sensitive information. When private or enterprise versions of ChatGPT become available, it will also be up to lawyers to form enforceable service provider agreements that establish data protections sufficient to justify legal findings of compliance with the lawyer's duty to maintain the confidentiality of client information, along with the oversight and direction by counsel needed to maintain applicable privileges (see also subsection c on Supervision, below).

b. Competence

Another ethical concern when using ChatGPT is the potential for lawyers to become over-reliant on the model and to neglect their own legal skills and knowledge. Codes of professional conduct require lawyers to provide competent representation to their clients and to maintain the knowledge and skill required to do so. To meet this responsibility, lawyers must familiarize themselves with the capabilities and limitations of the model and should not use it as a substitute for their own legal knowledge and judgment.

The currently available version of ChatGPT is often inaccurate, requiring the lawyer to act as an expert editor and therefore not eroding the longstanding specialization trend in legal practice. Most fundamentally, lawyers should bear in mind that the current version 3.5 of ChatGPT can almost NEVER deliver competent counsel in areas such as privacy and data security, in which the laws and concerns are in a constant state of flux, because it was trained on a dataset that existed in 2021. Even a more advanced version of ChatGPT trained on current data will continue to suffer from inaccuracies arising from causes such as these:

  • Limited training data: ChatGPT is trained on a large corpus of text, but it may not have seen certain types of examples or diverse viewpoints, leading to inaccuracies in its responses.
  • Lack of context: ChatGPT operates on a turn-by-turn basis, meaning it doesn’t have a full understanding of the context of a conversation. This can lead to inaccuracies in its responses, especially when answering questions that require background knowledge.
  • Bias in training data: The training data used to develop ChatGPT can contain biases, which can be reflected in its responses.
  • Overgeneralization: As with any machine learning model, ChatGPT can sometimes make overgeneralizations, producing responses that are not accurate in certain situations.
  • Adversarial examples: ChatGPT can be susceptible to adversarial examples, where small changes to the input can lead to unexpected or inaccurate responses.[1]

c. Supervision and Unauthorized Practice of Law

The use of ChatGPT also raises ethical issues about supervision of the model. Lawyers are responsible for supervising non-lawyer assistants, and the use of a language model like ChatGPT may be considered a form of non-lawyer assistance. At a minimum, lawyers should establish guidelines and protocols for its use (and, when possible, contracts establishing direction and oversight by the attorney for uses involving separate non-attorney entities associated with the model) and should review and approve or modify any decisions, conclusions or inferences made by the model. Beyond that, when attorneys can gain direct control over the model through private or enterprise versions, confidentiality-related supervision should drive physical, technical and administrative security enhancements, along with protection through anonymization and firewalls that not only keep confidential client information out of the model but keep sensitive information in, while active oversight and supervision may protect applicable privileges. The incompleteness and inaccuracy of ChatGPT discussed in subsection b call for editorial supervision, including updating and correction.
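As one illustration of such a protocol, the sketch below (Python, with a placeholder standing in for any real model call) treats every AI draft as unreviewed work product that cannot be released until a supervising attorney approves or modifies it. The names and structure are hypothetical; this is a sketch of the workflow, not an implementation of any particular firm's system.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Draft:
    """A model-generated draft that remains unreviewed work product until approved."""
    text: str
    approved: bool = False
    reviewer: Optional[str] = None
    edits: List[str] = field(default_factory=list)


def generate_draft(prompt: str) -> Draft:
    # Placeholder for a call to ChatGPT or a similar model.
    return Draft(text=f"[model draft responding to: {prompt}]")


def attorney_review(draft: Draft, reviewer: str, revised_text: Optional[str] = None) -> Draft:
    """Record the supervising attorney's edits and sign-off."""
    if revised_text is not None and revised_text != draft.text:
        draft.edits.append(revised_text)
        draft.text = revised_text
    draft.approved = True
    draft.reviewer = reviewer
    return draft


def release(draft: Draft) -> str:
    """Refuse to release anything an attorney has not reviewed."""
    if not draft.approved:
        raise PermissionError("Unreviewed AI output may not be sent to a client or court.")
    return draft.text


if __name__ == "__main__":
    d = generate_draft("Draft a short tolling-agreement clause.")
    d = attorney_review(d, reviewer="Supervising Partner", revised_text=d.text + " [as revised]")
    print(release(d))
```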

Uses of ChatGPT by lawyers without appropriate supervision, and much more likely by pro se nonlawyer litigants, are likely to spark claims of unauthorized practice of law. The answer for lawyers is simple and clear: supervision and oversight of the technology. A pro se litigant could be subject to contempt for getting real-time guidance from an AI in court, and would be better off hiring an attorney willing to use and supervise ChatGPT.

d. Duty of Reasonable Diligence

Another ethical concern when using ChatGPT is the duty of reasonable diligence that lawyers owe their clients. ChatGPT can assist a lawyer in meeting that duty by providing information and knowledge on a wide range of legal topics. However, a lawyer must still exercise independent judgment, research and evaluate the information provided by ChatGPT, and ensure that the information is up to date and relevant to the specific case they are working on. To meet the duty of diligence, a lawyer must also consider other relevant sources of information, verify the accuracy of the information provided by ChatGPT, and make a professional judgment on how best to represent the client's interests.

Citing legal authority accurately and strategically for the client's benefit, an area in which ChatGPT is currently challenged, is especially important here. Moreover, in terms of diligence and advocacy, a lawyer should evaluate not just what the law is but also, where appropriate, envision potentially valid arguments for changing or invalidating laws. If content already incorporated into ChatGPT's large language model identifies potential claims in similar language contexts, counsel may be able to use ChatGPT to detect those patterns and surface arguments for legal change.

A lawyer must also assess the limitations and potential biases of ChatGPT, as it is an AI model trained on text data and may not have the same level of legal expertise or understanding as a human lawyer. The use of ChatGPT should not replace the lawyer’s own independent analysis and decision making. A lawyer must use ChatGPT as an aid or supplement to their own legal knowledge and expertise and not rely solely on its output.

2. Looking Forward: Ethical and Professional Risks of Not Using ChatGPT

Lawyers who fail to use ChatGPT or its successors may miss out on the benefits of advanced natural language processing capabilities, such as increased efficiency in drafting legal documents and answering client inquiries. Additionally, not using ChatGPT's successor currently being integrated into Bing, or other models with full internet connectivity, could limit a lawyer's ability to stay current with the latest developments and to access relevant information quickly and easily. This could result in missed opportunities, a decreased ability to provide insightful and informed advice to clients, and reduced competitiveness in the market.

Moreover, lawyers have a duty to use reasonable skill and care in providing their services, and failure to adopt innovative tools like ChatGPT could be seen as a failure to fulfill this duty. As technologies such as ChatGPT play a larger role in legal practice, lawyers will probably need to familiarize themselves with the technology and ensure they are using it competently and diligently; some codes of ethics include the use of technology as part of the duty of competence. Using ChatGPT can help lawyers meet their professional obligations, maintain the highest standards of client service, and stay ahead in a rapidly changing legal landscape.

By DALL·E 2 and inspired by Extraordinary Attorney Woo

3. How to Use ChatGPT Now

The use of artificial intelligence in the legal industry has the potential to revolutionize the way legal services are delivered, making them more affordable and available for individuals and organizations, and elevating the ways in which both senior and junior attorneys work. As long as practitioners continually bear in mind their duties of confidentiality, competence, supervision and diligence, they can obtain these benefits now.

One of the main benefits of using ChatGPT is its ability to provide fast responses to routine legal inquiries and frequently asked questions. Private and enterprise versions of ChatGPT and its successors can also be programmed to provide customized responses based on specific client needs and preferences, making it easier for individuals and businesses to find the information they need. In the current state of the technology, lawyers should always review and validate the results for accuracy and appropriateness, but that review and validation by a lawyer with appropriate expertise takes far less time than writing from scratch.
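In practice, much of that customization can be as simple as a carefully drafted system prompt. The sketch below assumes the OpenAI Python SDK, its chat completions endpoint and an API key configured in the environment; the model name, firm instructions and function name are illustrative placeholders, and any draft it produces would still need review by a lawyer with appropriate expertise before reaching a client.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK is installed and configured

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A firm-specific system prompt is the simplest form of "customization."
SYSTEM_PROMPT = (
    "You draft plain-English first responses to routine client questions for a "
    "hypothetical small firm. Flag any question involving deadlines, privilege, "
    "or jurisdiction-specific rules for attorney review instead of answering."
)


def draft_client_response(question: str) -> str:
    """Return a first draft only; a lawyer must review it before anything is sent."""
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return completion.choices[0].message.content


if __name__ == "__main__":
    print(draft_client_response("What documents should I gather before our first meeting?"))
```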

Another way that ChatGPT can make legal services more affordable and available is by automating repetitive tasks such as document drafting and data entry. It can help to streamline the process of preparing legal documents, reducing the time and effort required by attorneys and freeing up their time to focus on more complex tasks. In using ChatGPT now, we regularly find nuances of the documents that it misses, but we can sometimes train it to get those nuances right by feeding it more information.

ChatGPT can be used to provide legal research support, assisting attorneys in finding relevant case law and legislation quickly and efficiently. A lawyer with expertise in a particular area can often see relevant areas and aspects of law that ChatGPT cannot, but in our experience ChatGPT generally provides valuable additions, and precisely because ChatGPT is not as specialized as many lawyers have had to become, it may provide useful information from other areas of law and non-legal knowledge. To be sure, it will "make things up" and get facts wrong, so its results are at most suggestions to be considered and verified if useful.

Most fundamentally, it is important for lawyers to understand how using ChatGPT and its progeny for research differs from using current search engines. While search engines generally rely on keyword matching and indexing algorithms to find information, ChatGPT uses natural language processing (NLP) techniques and deep learning models to understand the meaning of text and context. To put it simply, search engines generally search for words or variants of words, while ChatGPT can search for concepts and contexts even where there is no overlap between keywords or their variants.
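A minimal sketch of that difference: naive keyword matching finds only shared terms, while an embedding-based comparison can score conceptual similarity even when a query and a document share almost no words. The example below assumes the OpenAI embeddings endpoint and an API key in the environment; the model name and the sample texts are illustrative assumptions, not a research tool.

```python
from math import sqrt

from openai import OpenAI  # assumes the OpenAI Python SDK is installed and configured

client = OpenAI()

QUERY = "Can my landlord evict me without notice?"
DOCS = [
    "Statutory requirements for terminating a residential tenancy.",
    "Notice periods required before removing a tenant from a dwelling.",
    "Corporate merger filing deadlines under state law.",
]


def keyword_overlap(a: str, b: str) -> int:
    """Naive keyword matching: count shared lowercase terms."""
    return len(set(a.lower().split()) & set(b.lower().split()))


def cosine(u, v) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (sqrt(sum(x * x for x in u)) * sqrt(sum(y * y for y in v)))


# Embeddings capture meaning, so the tenancy-related documents score high
# even though they share almost no words with the eviction query.
vectors = client.embeddings.create(
    model="text-embedding-3-small",  # placeholder model name
    input=[QUERY] + DOCS,
).data
query_vec = vectors[0].embedding
doc_vecs = [v.embedding for v in vectors[1:]]

for doc, vec in zip(DOCS, doc_vecs):
    print(f"keywords={keyword_overlap(QUERY, doc)}  "
          f"semantic={cosine(query_vec, vec):.3f}  {doc}")
```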

By DALL·E 2 and inspired by Extraordinary Attorney Woo

4. Hybrid Intelligence in Law Practice

I have found ChatGPT better at providing a mass of information than at engaging in what we might call strategic or creative thinking, but that mass of information includes the identification of patterns that might not be quickly evident to a lawyer. It has no emotions, let alone emotional intelligence, and is not going to replace our close client relationships in the foreseeable future, or appear in court or negotiate a deal except in an assistive role. Yet in all of these contexts it has contributions to make, and to the extent that duties of confidentiality and maintenance of privileges permit, it can be a good unsleeping, brainstorming, pattern-detecting partner if asked the right questions. In all of these insights we see the basis for the emergence of hybrid intelligence in the practice of law.

Hybrid intelligence refers to a type of artificial intelligence that combines both human and machine intelligence to create a more advanced and effective system. In a hybrid intelligence system, the strengths of human and machine intelligence are leveraged in tandem, with the machine handling tasks that it is well-suited for, such as processing large amounts of data, and the human handling tasks that require judgment, empathy, compassion and creativity.

The goal of hybrid intelligence is to create a system that is more efficient, effective, and flexible than either humans or machines working alone. This can be achieved by having humans and machines collaborate in a complementary manner, with the machine handling routine tasks and the human making decisions and providing oversight. In these respects the hybrid intelligence that can be achieved with ChatGPT dovetails with ethical considerations in its use by attorneys.

By DALL·E 2 and inspired by Extraordinary Attorney Woo

[1] Adversarial examples are inputs to a machine learning model that have been specifically designed to cause the model to produce an incorrect output. These inputs are usually created by making small, carefully chosen modifications to the original input in a way that is intended to fool the model. In the case of ChatGPT, this can result in inaccurate or unexpected responses, as the model may be fooled into giving a response that is different from what it would have given without the adversarial modifications.

Adversarial examples can be malicious or non-malicious in origin. They can be created by an attacker with the intention of tricking the model into giving a wrong answer, or they can be created as part of a scientific study to evaluate the robustness of the model. In either case, they can highlight weaknesses in the model and demonstrate the need for improved robustness and accuracy in language models like ChatGPT.

The exploration continues on The Hybrid Intelligencer.


Ralph Losey

Attorney, AI Whisperer, Open to work as independent Board member of for-profit corps. Business, Emp. & Lit. experience, all industries. Losey.ai - CEO ** e-DiscoveryTeam.com


Final comment. Love this quote too; “The goal of hybrid intelligence is to create a system that is more efficient, effective, and flexible than either humans or machines working alone. This can be achieved by having humans and machines collaborate in a complementary manner, with the machine handling routine tasks and the human making decisions and providing oversight.” This is what I’ve been trying to achieve with my predictive coding 4.0 hybrid multimodal method for evidence search. The picture here is my quick effort Re Attorney Woo.

Ralph Losey


Good insight re legal research. Haven't played with the research capabilities much yet (but have seen it insanely make up a judge and opinion that don't exist!), so encouraging to hear you say: "In using ChatGPT now, we regularly find nuances that it misses, but often also find that we can often "train" it to get those nuances right through "dialogue.""

Ralph Losey


I totally agree with this assertion: "… the use of a language model like ChatGPT may be considered a form of non-lawyer assistance." Like a paralegal, with a lawyer's duty to supervise. I think you are right to be very cautious re client data. I would however like to know more about the current realistic risks of disclosure. There are millions of requests and the "monitoring" by OpenAI seems remote. I'd like to know more about that, but agree it is necessary to be cautious and conservative at this point. OpenAI needs to be more open about its privacy protections before lawyers can use it freely.

Ralph Losey


Jon - We once again seem to be on parallel tracks. I loved the Woo series. Season Two may be out now or soon, and like you, Jon, I have been spending lots of time getting to know OpenAI's tools. Been waiting a long time for the promises of AI to come true. I expected the unexpected and that's what we got. Look forward to your insights in the coming months. A more substantive comment later. In the meantime check out what I've been doing with it in my blog articles at E-discoveryTeam.com. I was especially blown away by its insights into my all-time favorite philosophy story, Socrates' Cave analogy, two weeks back before my Sedona series. I think it captured Socrates' essence perfectly, especially the second-grader version.

Ellen Marie Giblin

Artificial Intelligence AI, Tech, Privacy and HIPAA Compliance and Cyber Security Attorney | Certified Information Privacy Professional/CIPP/United States/US Government/Canada


Excellent and very informative article Jon.
