Your Journey to AI Competence and Confidence
Sonya L. Sigler, Esq.
Bestselling Author | Executive Coach | Business Strategy Consultant | Legal & Operations Expert | Teachnologist | Organizer of Chaos | Philanthropist
Understanding and using Artificial Intelligence (AI) of all ilks is a journey, one that starts out like any other: most likely not knowing what to expect, maybe having some idea of where you want to go, and maybe even having a roadmap. Along the way, we learn more, we find places we love and want to come back to, and we experience others that we never want to revisit. The journey to understanding and using AI tools efficiently and cost-effectively is no different.
I recently moderated a panel for the Women in eDiscovery SoCal Tech Conference, and we had a lively discussion about what we can do to make this journey more enjoyable while building our competence in understanding AI technology and confidence in using it.
Let’s start with bringing down the anxiety level around AI right now…
AI will not replace you. AI won’t replace most legal professionals any time soon, or at all. It may change how you do your work, what role you play, or how you think about your work, but it won’t replace you.
The best thing we can do is take a step back and think about the latest hype – generative AI (think Bard, Jasper, ChatGPT) – as the newest technology or tool in a long line of ‘disruptive’ tools. When I was in law school, the internet, and more specifically hyperlinks, were going to put lawyers out of business. That hasn’t happened. Then there was scanning documents so we could review and annotate them electronically instead of dealing with paper, binders, sticky notes, and notes and documents posted on conference room walls. That technology didn’t put lawyers out of work either.
Things have progressed over the years to include software for all kinds of activities – document review, drafting contracts, knowledge management, first-pass review, quality control, searching beyond keywords, predictive coding, determinative coding, and more. Technologies have included all kinds of ‘intelligence’ based on linguistic methods, statistical methods, or a combination of both and others. None of these technologies have put lawyers out of business. Generative AI is just the latest hot new technology: a ‘thing’ to be tested, tried, and understood, just like all the preceding technologies.
My ‘teachnology’ philosophy is to educate others, demystify the technology, explain how it works and how it doesn’t, and encourage exploration through trial usage. While at Cataphora, I trained thousands of lawyers in statistical and linguistic tools – how to use them and how not to use them (i.e., what the tools were not good at) – and at Discovia, I produced a 5-part webinar series to demystify the varying types of technology assisted review (TAR) software. Helping others understand technology is near and dear to my heart, and helping others understand the latest flavor of technology requires the same ‘teachnology’ approach as with TAR.
The purpose of this article (and our panel discussion) is to lower the level of fear, uncertainty, and doubt related to the hype surrounding the latest AI technology to catch everyone’s attention, to explain these tools in plain English, to increase your confidence in approaching and using them, and most of all to encourage you to test and use these tools as much as possible.
How do we build our competence and confidence when it comes to AI technology? We focus on three essential elements: people, process, and technology. Before I dive into each of these elements, I want to remind you that there is no easy button; this isn’t magic. AI tools are not a panacea, and there is no one-size-fits-all solution. Adding the latest AI tool to your offerings will not make you a superhero. No tool will fix a broken process, and you won’t obtain the desired results if you don’t know how to use the tool properly. It’s like using a hammer to pound in a screw: it won’t hold well. AI tools have to be used properly, for their designated purpose.
My goal is to give you a framework to help you approach the tools that are out there and make the most of them. Let’s look at each of these essential elements in more detail, starting with people.
People
People are the most essential element of this whole framework. They provide judgment and experience. Machines and programs that mimic intelligence are not the same as human intelligence. Whether you are a skeptic or an early adopter, a newbie or a sophisticated user, your human intelligence is indispensable to the evaluation, use, and effectiveness of the software or AI tools you will be using.
People will play many roles in the adoption and use of AI tools – from influencer, to buyer, to user. Fear, uncertainty, and doubt (FUD) all factor into the sales and adoption process. You can see that now with any of these tools as you listen to vendors at any conference and count the number of seminars being offered on the topic on a weekly (or even daily) basis. To counteract any FUD, avail yourself of these webinars to learn as much as you can. Ask questions, especially if you don’t understand something or it isn’t clear. Take the opportunity to read the information vendors and law firms are publishing, listen to those who have tested and used the technology, and take advantage of free (or close to free) trials to try out the technology yourself.
Humans provide the all-important and necessary strategy, critical thinking, and judgment to any process and technology. As panelist Joy Murao of Practice Aligned Resources added, TAR software can reveal information, but humans need to understand its impact and determine its use. People provide and understand context in a way that machines cannot. The same is true of generative AI tools when creating content. These tools can be prone to making things up and citing phantom cases, which means all AI-generated content requires checking for truthfulness and context. These are just a few examples of how humans are crucial when using technology to do our work.
People are required to curate the data, validate results, fine-tune the process, add their insight, and make decisions from the results. Their life and work experience also play a part in this equation, as context and a sense of importance and impact are missing from the machine learning process and the generation of content. Yes, these systems get smarter over time with use and additional data and input. But humans, with their judgment and critical thinking skills, are essential to the proper and effective use of any technology.
One of the other resources available is an article I wrote for the International Conference on Artificial Intelligence and Law when TAR was receiving all of the industry attention and hype: Are Lawyers Being Replaced by Artificial Intelligence? The answer then, in 2009, is the same as now: no, lawyers are not being replaced by artificial intelligence. If anything, we are more necessary now than ever; people are the ones who figure out how AND when to use these tools.
The role of document reviewer did not go away with the advent of technology assisted review tools, and our role as content generators – whether for an email, a memo, or a brief – is not going away with the advent of generative AI tools. We should be thrilled to move to a higher level of input instead of being in the weeds.
Process
We talked about what process to put in place when evaluating and using these new technologies. It’s hard to evaluate them on an apples-to-apples basis when tools that are great for creating song lyrics or a poem about your six-year-old daughter are being pitted against tools used to draft legal briefs or interrogatories. When using technology assisted review software to categorize data as relevant, non-relevant, hot, or privileged, you are going to follow a different process than if you are asking ChatGPT to draft search terms for you.
The process may contain the same elements: defining the purpose of the testing or use, understanding the expected results, testing the veracity of the results, ensuring you have a repeatable, defensible process, and managing the risks inherent in the technology, such as privacy, confidentiality, security, and bias. Panelist Melissa Dalziel of Quinn Emanuel has tested over 20 tools and said it is best to be able to use the same information to compare results fairly. She said involving their IT and security teams early helped her team vet the tools confidently with regard to security and privacy issues.
When evaluating or using a technology, if you have a process to follow, you can be consistent in your evaluation of various technologies, whether you are using them to prepare for a deposition or to craft a set of search terms. Even similar technologies can perform very differently on the same task. For example, as explained in a recent ABA Journal article, ChatGPT 3.5 performed in the bottom 10% while GPT-4 performed in the top 10% when taking the bar exam.
Testing the tools with the same data set allows you to really compare results and functionality. Discovering previously unknown information or hot documents will help with the buy-in process. No matter what tool you use, you will want to include time for testing the veracity of the results and performing a round or two of quality control. You will want to review your work, especially if you are using a content generation tool, as you would anything else being submitted to a higher-up or the court. You don’t want to leave any hallucinations or made-up information in your documents. A lawyer’s name and signature go on court submissions, so why wouldn’t you review AI-assisted work as you would any other submission? Our perfectionistic tendencies and low tolerance for mistakes in the legal industry won’t be taking a back seat just because we are using a new technology. In fact, we should review things with a stronger magnifying glass!
Whatever process you use, it will include an iterative component. No prompt will work perfectly right out of the gate. You will need to try it out, examine the results, and refine the prompt, the search terms, or the instructions. It takes effort and forethought with each iteration. Check the actual results against your expected results and adjust accordingly. There is no one-size-fits-all approach or easy button when it comes to drafting prompts or queries. Your results may even reveal blind spots in your approach, which was the case with deposition prep questions in one round of testing.
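To make that iterative loop concrete, here is a minimal sketch of what one test-and-refine cycle could look like, written in Python. It is illustrative only: the generate() function is a stand-in for whichever drafting tool you happen to be testing, and the prompt, expected elements, and refinement step are hypothetical.

```python
# Minimal sketch of an iterative prompt-refinement loop (illustrative only).
# `generate` is a placeholder for whichever drafting tool you are testing;
# here it returns a canned draft so the sketch runs end to end.

def generate(prompt: str) -> str:
    return "Q1: Describe the payment terms. Q2: Confirm the signature date."

def missing_elements(prompt: str, expectations: list[str]) -> list[str]:
    """Run one round and report which expected elements the draft is missing."""
    draft = generate(prompt).lower()
    return [item for item in expectations if item.lower() not in draft]

prompt = "Draft five deposition prep questions about the 2019 vendor contract."
expectations = ["termination clause", "payment terms", "signature date"]

for round_number in range(1, 4):  # a few rounds, not one and done
    missing = missing_elements(prompt, expectations)
    if not missing:
        print(f"Round {round_number}: all expected elements are covered.")
        break
    print(f"Round {round_number}: missing {missing} - refine the prompt and retry.")
    # A human reviews the draft here and refines the prompt based on the gaps.
    prompt += " Be sure to cover: " + ", ".join(missing) + "."
```

The point is not the code; it is the habit of comparing actual results to expected results, recording the gaps, and refining before you rely on the output.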
Last, but not least, your process should have a robust quality control element. Just as you want to look at the precision and recall levels with a TAR tool, you will want to review and edit any content created with AI tools. It’s unfortunate that one of the first high-profile uses of generative AI tools resulted in a court submission containing made-up cases and citations. You want to edit carefully for content, truth, accuracy, and your voice. The tools can easily fall prey to the garbage in/garbage out phenomenon, which means you need to verify everything!
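For anyone who has not worked with precision and recall before, the arithmetic behind those two QC metrics is straightforward. The counts below are hypothetical, purely to show the calculation.

```python
# Precision and recall from a hypothetical QC sample.
# Precision: of the documents the tool tagged relevant, how many truly were.
# Recall: of the truly relevant documents, how many the tool found.

true_positives = 180    # tagged relevant by the tool and confirmed relevant in QC
false_positives = 20    # tagged relevant by the tool but found non-relevant in QC
false_negatives = 45    # relevant documents the tool missed

precision = true_positives / (true_positives + false_positives)
recall = true_positives / (true_positives + false_negatives)

print(f"Precision: {precision:.0%}")  # 90%
print(f"Recall:    {recall:.0%}")     # 80%
```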
A couple of days before the conference, an article came out stating that some law schools welcome the use of AI tools in creating the personal essay for the law school admissions process. Whether your voice or tone is down-to-earth or flowery, you will want it to be authentic to you. This review and quality control will be essential to ensure generated content ends up in your own voice and format.
When using TAR tools for relevancy determinations, I always included a report on what data was used, what tool was used, how it was trained, who trained it, what their role/title was, what the results were, what the precision and recall rates were (and any other data points relevant to the situation), and what quality control measures were taken. This cut down on the fights about using technology to produce evidence. Using generative AI tools is no different: you want to be able to defend your usage and the resulting work product, if necessary.
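If it helps to see that checklist as a structure, here is one possible way to record those details. The field names mirror the items listed above, but the structure and sample values are my own illustration, not a required or standard format.

```python
# One possible structure for recording defensibility details (illustrative only;
# the field names mirror the items listed above, and the values are hypothetical).
from dataclasses import dataclass, field

@dataclass
class TarUsageReport:
    data_used: str
    tool_used: str
    how_trained: str
    trained_by: str
    trainer_role: str
    results: str
    precision: float
    recall: float
    qc_measures: list[str] = field(default_factory=list)

report = TarUsageReport(
    data_used="Custodian email collection, Jan-Dec 2021 (hypothetical)",
    tool_used="TAR tool under evaluation",
    how_trained="Seed set of 2,000 documents, three training rounds",
    trained_by="J. Smith",
    trainer_role="Senior review attorney",
    results="14% of the corpus tagged relevant",
    precision=0.90,
    recall=0.80,
    qc_measures=["Elusion sample review", "Second-pass privilege check"],
)
print(report)
```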
Technology
I’ve seen so many technologies over my legal career, which I suppose is to be expected for an IP lawyer working in software companies. There are so many tools coming on the market, with new ones appearing almost daily. When looking at these tools with a critical eye and testing them, keep in mind: are they generic content-generating tools for writing or drafting, like composing a marketing email or song lyrics, or making an email’s tone ‘nicer’ or more neutral? Or is the tool built for a specific purpose, like anomaly detection, pattern analysis, or categorization into relevant/non-relevant buckets? What is the technology intended for? What is its purpose?
Generative AI tools are just the latest in a long line of hyped technologies. Hopefully we have reached the pinnacle of the hype cycle with these tools, and now we can hunker down and test them to figure out how they can help us in our day-to-day jobs. I liken this hype to the five stages of grief, and my goal is to help move us from denial to acceptance more quickly!
My experience with this long line of technologies started in 2002 with Cataphora. Other technologies around at the time were Dolphin Search, Attenex, H5, and many others. These tools were linguistics-based, statistics-based, or both.
Now, I don’t want to start an argument about whether TAR tools are more sophisticated than Large Language Model (LLM) tools, but we (the legal industry) have been using more sophisticated tools than the ones coming on the market now. I’ll share one example of how models get smarter: people used to ask me why Dolphin Search “didn’t work.” My response was that it worked fine; it worked as intended. The results may change when you add more data because the tool makes better or more informed connections in the data. People didn’t understand how the underlying technology worked.
AI tools that are large language model-based change over time too. They become more sophisticated in their connections, their language, their mimicking of your voice, and other parameters. You can see the capability of large language model tools in their results on entrance exams and AP tests. Just look at the AP test results: scores of 4 or 5 for all tested subjects except, ironically enough, the English Literature and Composition exam and the English Language and Composition exam, both of which received a score of 2 (not passing).
On the flip side, staring at a blank page can be time-consuming and a deterrent. Using ChatGPT or Jasper or Bard (or any other similar tool) to generate a first draft has its attraction. It can be enormously helpful and save valuable time, as for most people it is much easier to edit a document than to draft it from scratch. This type of use carries low risk when the output is treated as a starting point rather than a finished product.
We were fortunate to have Elise Tropiano from Relativity on our panel; she spoke to the importance of training the system and being able to QC the results. Going back to our purpose for using a given technology will help dictate the use cases we deploy in testing and trying out the tools. She emphasized that when looking at the quality control side of new technology, we should consider two things: does the technology work as intended, and are you using it as intended? There are so many variations in the tools, and purpose-specific AI is being integrated into other tools at a rapid clip, so we need to be quick to test and try them out.
When I used Otter.ai to attend a meeting, record it, and take notes for me, my friend exclaimed, “You just figured out how to be in two places at once!” If that’s true, I am sure I will repeat this particular AI usage a lot. However, using an AI tool to take notes for me is a much different risk level than using it to produce a document that will be filed with the court.
These tools are changing every day and getting smarter and smarter, which is to be expected. Plan, review, use, and verify accordingly!
Final Advice
The advice from our Journey to AI Competence and Confidence panel boiled down to the following:
New technology is always appearing. Groups like WiE provide a safe space to learn, understand and ask questions about the new technology and how to use it. Our roles/jobs may change, processes may change, and technology will most definitely change but humans aren’t in danger of being replaced in the legal field any time soon! The good news is that you get to choose whether you want to be proactive and try out these tools or if you want to be an ostrich and stick your head in the sand. The choice is yours.
Please connect with me and let's chat if you have AI questions!