ChatGPT: ready to replace junior lawyers?

Everyone is excited, and confused, about the implications of ChatGPT applied to the delivery of business legal services.

Three recent articles and blog posts on that subject caught my eye:

Nicola Shaver at LegalTechnology Hub posted a blog that includes some helpful guidance you might want to give your lawyers (source):

  • Out-of-Date Information: OpenAI (the maker of ChatGPT) has made it clear that ChatGPT is only trained on information up to the end of 2021.
  • Lack of Security: ChatGPT is an open beta product, and information entered into it will not be kept secure.
  • Lack of Specialized Knowledge: ChatGPT has been trained on everything on the internet and beyond, but it doesn’t actually “know” anything.
  • Hallucinations: When ChatGPT is asked a question and it does not know the answer, it will . . . likely make up an answer.
  • Lack of Explainability: LLMs draw upon such vast data when generating an answer that understanding where that response came from is challenging, if not impossible.
  • Bias: Like any AI, ChatGPT takes on biases in its training data. The information available about the law and legal cases upon which it has been trained will inevitably contain historical, systemic biases that we are trying to correct for in current times.
  • Inability to Read Long Documents: One of the things ChatGPT is good at is summarizing text, or extracting information from text . . . ChatGPT can read only 4096 tokens at once, which means that lawyers will not be able to feed lengthy contracts into it.
  • Questions Need to Be Formulated Carefully: generative AI . . . gives rise to the need for a new skill, called “prompt engineering”. The answers provided by LLMs are only as good as (1) the data upon which they draw to provide those answers, and (2) the way the question is phrased.
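The standard workaround for the context-window limit above is to split a long document into pieces that each fit the model's window, summarize the pieces separately, and then combine the summaries. A minimal sketch of the splitting step, assuming a rough four-characters-per-token ratio for English text (an approximation, not an exact token count; the function name and parameters are illustrative, not part of any product's API):

```python
def chunk_document(text: str, max_tokens: int = 3000,
                   chars_per_token: int = 4) -> list[str]:
    """Split text into paragraph-aligned chunks that should fit within a
    max_tokens context window (leaving headroom for the prompt itself)."""
    budget = max_tokens * chars_per_token   # rough character budget per chunk
    chunks, current = [], ""
    for para in text.split("\n\n"):         # never split mid-paragraph
        if current and len(current) + len(para) + 2 > budget:
            chunks.append(current)          # flush the full chunk
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

# Hypothetical lengthy contract: ten long clauses separated by blank lines.
contract = "\n\n".join(
    f"Clause {i}. " + "Lorem ipsum dolor sit amet. " * 100
    for i in range(1, 11)
)
pieces = chunk_document(contract)
# Each piece can then be summarized on its own, and the per-chunk
# summaries combined into a summary of the whole contract.
```

Each chunk stays under the budget (unless a single paragraph alone exceeds it), and joining the chunks back together reproduces the original text, so nothing is lost in the split.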

SG Murphy Solicitors (an Irish personal injury, property, and commercial firm) analyzed ChatGPT's responses to a set list of typical questions a client might ask, and found an accuracy rate of 91% (click here to read about the study).

[Image: ChatGPT answers client questions correctly 91% of the time]

[GVW: I wonder what the error rate would have been if they had asked the same questions of their junior lawyers?]

Their conclusion:

We do not see AI chatbots significantly disrupting the legal industry right now. However, we think this will change quickly: as AI’s capabilities increase further, we see AI tools becoming more and more prevalent in the legal industry. It is not that far into the future where we see AI tools being used routinely for the drafting of simple documents and agreements, and for the generation of first drafts of more complex documents. Allowing AI chatbots to perform simple, mundane tasks allows lawyers to focus on higher-paying, more complex work. [Emphasis added]

‘ChatGPT Already Outperforms a lot of Junior Lawyers’: An Interview With Richard Susskind

The title of this interview says it all.

Ramin Assa

AI Ready, Actionable Insights - Data, KM, CoPilot, SharePoint Premium Strategist

1 yr

#legal #km

Morten Marquard

Founder @DCR - delivering process mining that welcomes change

1 yr

Very interesting. I compared ChatGPT with symbolic AI in this article https://www.dhirubhai.net/posts/mortenmarquard_dcr-chatgpt-aiforgood-activity-7039157133346533376-odkl and think ChatGPT is not accurate. It resembles a normal call-center service: lots of information but little insight and knowledge.

Nicola Shaver

Driving the Future of Law at Legaltech Hub | Innovation, AI, Legaltech Leader, Advisor, Investor | LLB, MBA | Fastcase 50, 2021 & 2024, ABA Women of Legaltech, 2022 | Adjunct Professor

1 yr

Thanks so much for the shout-out, Gordon! Good piece and look out for more guidance coming from us later this week (follow along at LegalTechnology Hub).

Jed Cawthorne MBA CIP IG

Director Analyst Enterprise Content Management

1 yr

I liked the article that labelled it "mansplaining AI": it is a chat interface, so it is trained to provide good, readable responses that "sound" correct. However, those well-written passages can be full of garbage, with massive factual inaccuracies, "so it comes across as being very confident, even though it's wrong".

Janus Boye

I bring people together for learning and networking

1 yr
