The one reason artificial intelligence will not replace lawyers

(October 20, 2022, 2:15 PM EDT) -- Amid the hype around artificial intelligence (AI), many people believe that AI will dominate the human race in no time, and that lawyers' jobs will be replaced by AI thanks to advances in technologies like legal tech.

Fortunately, lawyers can rest assured that, for the foreseeable future, AI is not capable of replacing most lawyers' jobs. Likewise, there is no need for legal tech professionals to make fruitless attempts to replace lawyers with AI; given the current state of the art, they will not succeed in the foreseeable future.

One reason makes lawyers immune from AI

Reasoning is that one reason.

If AI is to replace most lawyers, it must be able to automatically generate legal reasoning that is persuasive, legally sound, contributes to case proceedings and benefits the justice system. This is a big ask for AI.

AI gives you an answer but cannot tell you why that is the answer. It lacks the capability to output a humanly understandable reasoning process.

• A spam-filtering AI in your e-mail can send a message to jail, namely the spam folder, but does it ever tell you why it passed that judgment on an incoming e-mail?

• A handwriting-recognition AI reads handwriting and converts it into digital text, but does it ever tell you why a character should be "0" rather than "O"?

• A marketing AI tracks your behavioural patterns and puts ads on your screen, but does it ever tell you why you are a likely customer?
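The spam example above can be reduced to a toy naive-Bayes-style score: the filter sums per-word weights and compares the total to a threshold, so its only "reason" is a number. This is a minimal, hypothetical sketch; the words and weights are invented for illustration, not taken from any real filter.

```python
# Hypothetical per-word spam log-odds (illustrative values only).
SPAM_LOG_ODDS = {"winner": 2.3, "free": 1.7, "meeting": -1.9, "invoice": -0.8}

def spam_score(words):
    """Sum the log-odds of known words; unknown words contribute nothing."""
    return sum(SPAM_LOG_ODDS.get(w.lower(), 0.0) for w in words)

def is_spam(words, threshold=1.0):
    # The "verdict" is just a number crossing a threshold -- no explanation.
    return spam_score(words) > threshold

print(is_spam("You are a WINNER claim your FREE prize".split()))   # True
print(is_spam("Agenda for tomorrow's meeting attached".split()))   # False
```

The filter can report its verdict, and even its score, but it has no account of *why* "winner" carries the weight it does; the weight is just a statistic learned from past mail.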

It's true that users don't need reasons for the AI uses above. For the justice system and legal tasks, however, reasoning is the core.

• Losing parties are desperate to know why their claims failed.

• Lawyers need to persuade judges why an application should be approved or denied.

• Legal professionals need to know how the reasoning of a judgment informs future cases.

If reasoning is everything, why not simply figure out how the AI is thinking and translate that into human thought, or develop an AI algorithm that imitates the human reasoning process?

There are at least two obstacles.

First obstacle: Incompatibility between legal reasoning and AI reasoning

Legal reasoning is based on logic, whereas AI reasoning is based on statistics and probabilities. AI sends an e-mail to the spam folder, recognizes a handwritten symbol as "0" instead of "O", or puts certain ads on your screen based on a calculation of probabilities, not on meticulous logical proof. Such a drastic difference makes it extremely difficult to translate between AI reasoning and legal reasoning.

For example, human beings recognize a handwritten symbol as a "6" rather than a "9" because of the relative location of the "loop" stroke in each number. AI, however, does not care where the loop stroke is. Instead, it maps the handwritten character onto a 16x16 pixel matrix (or a higher resolution), notes whether each pixel is black or white, and, after taking all pixels into account, calculates the probability that the character matches each digit. There is no trace of "finding the loop" anywhere in the AI's neural network. (To learn in detail how AI recognizes handwriting, see the YouTube video by 3Blue1Brown.)
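The pixel-matrix approach above can be sketched in a few lines: flatten the 16x16 image into 256 numbers, score each digit class with a weight vector, and pick the highest score. The weights below are random stand-ins (a real network learns them from many labelled samples); the point is that nowhere in the code is there a "find the loop" step.

```python
import random

random.seed(0)

# Toy digit "classifier": 16x16 pixels flattened into 256 numbers,
# one weight vector per digit class 0-9. Random weights stand in for
# the values a real neural network would learn from training data.
PIXELS = 16 * 16
weights = [[random.uniform(-1, 1) for _ in range(PIXELS)] for _ in range(10)]

def classify(image):
    """Score each digit 0-9 and return the highest-scoring one."""
    scores = [sum(w * p for w, p in zip(weights[d], image)) for d in range(10)]
    return max(range(10), key=lambda d: scores[d])

fake_image = [random.random() for _ in range(PIXELS)]  # stand-in for a scan
print(classify(fake_image))  # prints some digit 0-9; the model cannot say why
```

The model's entire "reasoning" is the list of 10 scores; asking it why a character is a "6" rather than a "9" has no answer beyond "that weighted sum came out largest."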

If an AI were used to make legal decisions, its reasoning process might read something like this: "Taking all into account, the probative value of the plaintiff's evidence is 0.743, and the prejudicial coefficient is 0.658. Therefore, the probative value outweighs the prejudicial effect. On the other hand, the credibility index of the defendant's witness is 0.493, which is deemed not credible on a balance of probabilities ..."

Why not revolutionize our legal system? Instead of relying on legal reasoning, why not rely on probability, quantifying all legal concepts as above? This topic is profoundly philosophical and would need books of discussion, but simply put: should the law serve humans, or AIs?

Second obstacle: Not enough data makes artificial incompetence (AI)

AI is currently a sub-category of data science: all the AI algorithms on the market are data based, consuming vast amounts of data to be trained. Data is the lifeblood of AI: no data, no intelligence. Even if we created a natural language processing (NLP) model that could miraculously produce a humanly and legally understandable reasoning process, there would not be enough data to train it, and AI would become artificial incompetence.

Different specializations of law, and different legal questions within one specialization, have different reasoning processes, each of which requires its own training set. The number of good cases on a single topic in one jurisdiction is often just a handful, which is woefully insufficient to train an AI.

Moreover, the instability of good cases aggravates the problem. As laws constantly develop, just one new judgment (such as the Vavilov case) or one new piece of legislation is enough to invalidate an entire training set, rendering the trained AI useless (Canada (Minister of Citizenship and Immigration) v. Vavilov, [2019] S.C.J. No. 65).

Why not detach AI from data and develop a reasoning-based AI that does not depend on it? AI experts are working on such models, but success is nowhere close, let alone adapting such AI to the legal industry. Only when AI becomes an independent discipline rather than a sub-category of data science can we hope for reasoning-based AI.

What kind of lawyers are in danger?

Certain legal jobs are truly in danger in the face of AI's advancement. As in other industries, the jobs in danger are those that are repetitive, monotonous and do not engage in complex reasoning.

Some examples are:

• Form fillers: those who transfer clients' information into legal forms or document templates.

• Case searchers: those who enter keywords and search for relevant cases.

• Client screeners: those who collect intake forms.

• Note takers: those who keep meeting minutes or transcripts with clients.

If those are the main responsibilities of your legal job, it's time to reconsider your career plan.

Conclusion

Until someone develops a reasoning-based AI model that can mimic humanly understandable reasoning processes without consuming a large amount of data, there is no need to worry about lawyers' jobs being replaced by AI, except for a few. Likewise, until such a model is readily available on the market, there is no need for legal tech professionals to pursue such barren goals.

Rocky Wang is an inaugural student at Lincoln Alexander School of Law with computer science and engineering degrees. He is interested in legal tech, laws in technology and telling jokes. You can reach him at [email protected] or on LinkedIn.

The opinions expressed are those of the author(s) and do not necessarily reflect the views of the author's firm, its clients, The Lawyer's Daily, LexisNexis Canada, or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.
