Do lawyers trust AI?
Legatics AI signature page detection engine

When I was training to be a lawyer, a senior associate gave me some career advice:

"Do your best ever photocopying. Make sure the pages are straight. Make sure none have been left out. Sort the copies into neat piles. Know which was the original copy."

It wasn't because photocopying required much skill, but because at the start of my career as a fledgling lawyer, the most important thing I could do was build trust. If I could photocopy well, it was likely I could do other things well. If I couldn't, I'd need to find another department to qualify into.

Trust issues

Much of a lawyer's job is identifying and removing risk. You want to make sure everyone's signing the right version of a document and that everyone did indeed sign. My firm took responsibility for a partner's work, who in turn took responsibility for a senior associate's work, who in turn took responsibility for fresh-faced me. For the work to be delegated, there had to be trust at each level of the hierarchy. In my case it was built up by doing fantastic photocopying.

Lawyers trusting AI

For all the excitement AI has brought to the legal world, issues of trust hold back its use. Is the answer right? How did it get there? Is it more accurate than a person? These issues are compounded by AI engines being unable to show their working. And we can't simply wait for the technology to be perfect: it never will be, and even if it were, we would still need to trust it.

Without a legacy of perfectly aligned photocopies, I've been asking how lawyers can build trust in AI and how, as CEO of a legal technology company, we can build technology that is trusted and used.

1. Show what you’ve done

When we built our legal text importer, I knew lawyers wouldn't trust it. It can analyse a conditions precedent (CP) schedule in a loan document and pick out what is a clause and what is a heading. Lawyers can't afford to miss a loan condition, so they were always going to check everything manually.

It can be really hard to show the workings of AI algorithms, and those workings can make little sense to a human. You can ask ChatGPT to explain its answer, and the explanation might sound reasonable, but you can still worry that it contains an error.

We therefore focused on showing the output (not the reasoning) of the work in a way that rapidly lets a lawyer see what has changed. Despite taking longer to build than the AI model itself, a UI that shows the work in blackline form allows the lawyer to quickly assess the output and gain confidence in it.

Legatics import tool presenting work in blackline form with quick correction tooling
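As an illustration of the blackline idea, here is a minimal sketch showing how imported text can be rendered against its source as a line-by-line redline a reviewer can scan quickly. The clause text and `blackline` function are invented for this example; this is not Legatics code or data.

```python
import difflib

# Invented example text: a CP schedule as it appeared in the source
# document, and the version an importer might have produced.
source = [
    "3.1 Conditions Precedent",
    "(a) certified copies of constitutional documents;",
    "(b) a certificate of a director of the Borrower;",
]
imported = [
    "3.1 Conditions Precedent",
    "(a) certified copies of the constitutional documents;",
    "(b) a certificate of a director of the Borrower;",
]

def blackline(before, after):
    """Return unified-diff lines: '-' marks deletions, '+' insertions."""
    return list(difflib.unified_diff(before, after, lineterm=""))

for line in blackline(source, imported):
    print(line)
```

A diff like this surfaces only the lines that changed, which is what lets a reviewer check the importer's work far faster than re-reading the whole schedule.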

2. Don’t shift responsibility

After pasting or selecting a document to import, the text importer drops lawyers straight into a screen where they check its output and can rapidly amend it. The output is not imported until the lawyer has scrolled through everything the algorithm has decided and the lawyer chooses to click 'import'.

This keeps responsibility with the lawyer: they need to actively accept the algorithm's output, or it won't make it into their work. It makes firms comfortable unleashing the technology on their lawyers, and it keeps lawyers feeling in control.
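The gating described above, where suggestions sit in a pending state until the reviewer explicitly accepts them, can be sketched in a few lines. The class and method names here are illustrative assumptions, not the Legatics API:

```python
from dataclasses import dataclass, field

@dataclass
class ReviewSession:
    """Holds algorithm output in a pending state until a human accepts it."""
    suggestions: list                 # what the algorithm proposed
    accepted: bool = False            # flipped only by an explicit human action
    corrections: dict = field(default_factory=dict)

    def amend(self, index, new_text):
        # The reviewer can overwrite any individual suggestion.
        self.corrections[index] = new_text

    def accept(self):
        self.accepted = True

    def final_output(self):
        # Nothing leaves the session until the reviewer has accepted it.
        if not self.accepted:
            raise RuntimeError("Reviewer has not accepted the output yet")
        return [self.corrections.get(i, s) for i, s in enumerate(self.suggestions)]

session = ReviewSession(["Clause 1", "Clause 2"])
session.amend(1, "Heading: Conditions Precedent")
session.accept()
print(session.final_output())
```

The design point is that the happy path requires a human action: there is no code path that commits the algorithm's output on its own.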

We took the same approach in our signature page detection algorithm, released this week. Our AI detects which pages need signing to speed up the lawyer's work, but the lawyer works in a checking interface, taking responsibility for who signs where. Again, we spent more on building the UI than on the AI model, but we wanted the technology to be adopted, and for that, lawyers need to feel in control.

Legatics signature page detection algorithm places emphasis on a lawyer reviewing its work

3. Pick tasks you can only win

I recently had a discussion with my father-in-law. He didn't like the idea of his doctor being replaced by AI. I said that if an AI had a better success rate than a human doctor at detecting cancer from an MRI image, I'd want it to do the diagnosis. He still looked uneasy. "What if the doctor does her diagnosis and then, if the AI thinks she's missed something, it pops up a warning?", I said. That was fine. There was a risk the doctor might come to rely on it, but we agreed we'd all like that.

I spoke to an M&A client of a law firm who saw things the same way. Alongside the manual review, they'd agreed to a supplemental rapid due diligence report, conducted by AI, that highlighted key issues to them early in the deal. There was some reluctance: the technology they might once have expected to cut their legal bill was now adding to it, since legal engineers were producing an additional report. But for a mission-critical deal, it was a worthy addition even if the AI missed something.

It made me realise that there are legal use cases where you can only win. With our signature detection algorithm, the lawyer was going to have to identify the signature pages anyway. So long as they stayed astute, they weren't losing anything by having the AI automatically flag them, and with our check-and-correct UI design, they were likely to still feel responsible for the output. Checking is quicker than doing the work from scratch, and if the AI misses something, we don't lose.

Legatics signature page detection AI puts the lawyer straight into a check-and-correct UI


4. Do the impossible

Another way of getting lawyers to trust AI is to do the impossible. If a task simply cannot be done by a human, or even an army of humans, there's no better option. The same applies if the task can't be done at a reasonable cost, or in a reasonable time.

These include:

  • Enormous scope disclosure reviews
  • Summarising market positions across many documents
  • Finding unusual new case law

Again, you need to check the output. I'm not sure Steven Schwartz ever imagined he'd be used as an example of failing to do so in quite so many legal technology conference talks, but if the task is otherwise impossible and the risk of a mistake is acceptable, you're delivering something no human could.

Closing the excitement gap

I see a massive gap between the technology lawyers get excited about and what they actually trust and use. Trust is critical to getting technology adopted, and I think we should start with the parts of the puzzle we can solve, and solve them in a way that builds trust in technology... because once I was trusted with the photocopying, I was allowed to email a client.

Isabel Parker

Chief Innovation Officer, White & Case


With deep fakes everywhere, including the ability accurately to replicate handwriting, I am not sure where trust resides anymore. Not sure I would trust a wet ink signature to be real. The next wave of technology needs to address authentication in a post truth era. Did someone say "blockchain"? Michał Morrison Clare Jenkinson
