Reflections on Scaleup GC: bridging the AI trust gap
In a week or two we’ll be releasing the findings of our annual survey, and boy have things moved quickly when it comes to AI and the professions.
TLDR: the proportion of lawyers who have never used generative AI in their roles, and the proportion who use it daily or weekly, have swapped places in a year. I can’t think of any technology that lawyers have adopted so quickly and so widely.
But there’s a caveat: our sample comes from the more forward-looking lawyers. Our community members are often in tech scaleups and are interested in technology by default. The more corporate legal events are far more fixated on risk and precaution than on embracing the benefits (see this post, for example).
That’s understandable. We are lawyers, after all. The AI trust gap is real - this Thursday, we’re hosting a webinar in which we’ll dive into why the trust gap persists and what we can do to close it. Sign up here.
But at Scaleup GC last week, the biggest risk being flagged around generative AI wasn’t regulatory, or privacy- or security-related. Those risks must be addressed, but Scaleup lawyers are confident they can get there.
Instead, the greatest risk our delegates feared was not using it (or legal blocking its use): falling behind, being outpaced by competitors, or becoming redundant in their roles.
As our customer Lars Krooshof at Temper told the conference, in a session highlighting how to use Juro’s AI Assistant to draft and negotiate contracts:
“AI won’t replace lawyers, but lawyers who leverage AI successfully will replace lawyers who don’t”.
The ‘lawyer co-pilot’ phase
We saw specific examples of how TravelPerk’s innovative team (bravo Tom Rice and Andrew Cooke) is doing amazing things to scale legal output without scaling legal headcount - achieving the "more with less" mantra we’ve been chanting for 10 years - by building bots that answer legal questions based on their wiki and playbooks.
We heard how sole and fractional GCs like Lucy Ashenhurst use ChatGPT regularly for research, writing training modules and summarising documents. We met customers using AI assistants to translate whole agreements into different languages. Data visualisation, creating interactive FAQs, writing training materials, parsing guidelines and codes of conduct - the usage is real and it’s accelerating.
We could call this the ‘co-pilot phase’: lawyers are doing their thing, and they’re leveraging generative AI as a partner with the abilities of an exceptionally hard-working trainee to get through more tasks. Fantastic.
The ‘omni-lawyer’ phase
So far, so good. But what if the ‘muscle’ of AI - the sheer force of execution it brings - is directed not at ‘lawyers’ per se, but instead at legal tasks, in such a way as to make them safe to execute without a lawyer at all?
We see examples of this already with AI playbooks. In Juro, for example, team admins (often lawyers) can set up AI guardrails, setting out their rules and dealbreakers, then trusting generative AI to review contracts for compliance.
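For the technically curious, the pattern is simple enough to sketch. Here’s a minimal, illustrative version of the idea - not Juro’s actual implementation - using the OpenAI Python client as a stand-in for any LLM provider; the model name and playbook rules below are assumptions for illustration:

```python
# A minimal sketch of the AI-playbook pattern described above - not Juro's
# actual implementation. A lawyer writes guardrails once, in plain English;
# an LLM then checks each incoming clause against them. The client, model
# name and rules here are illustrative stand-ins.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Guardrails as a team admin might express them (illustrative examples)
PLAYBOOK_RULES = """
1. Liability cap must be at least 12 months' fees.
2. Governing law must be England and Wales.
3. No unlimited indemnities of any kind.
"""

def review_clause(clause: str) -> str:
    """Ask the model whether a clause complies with the playbook."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a contract reviewer. Check the user's clause "
                    "against these rules. Answer PASS or FLAG, with a "
                    "one-line reason:\n" + PLAYBOOK_RULES
                ),
            },
            {"role": "user", "content": clause},
        ],
    )
    return response.choices[0].message.content

print(review_clause("The Supplier's liability under this Agreement is unlimited."))
# e.g. "FLAG: unlimited liability conflicts with rules 1 and 3."
```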
This means commercial colleagues can conduct simple negotiation tasks, like pushing back on risk positions the company can’t accept, without needing to talk to legal. Now instead of one lawyer supporting 30 salespeople, you’ve got 31 lawyers (up to a point!).
At TravelPerk, which we mentioned earlier, this kind of ‘upstreaming’ of AI to handle legal work goes beyond contracts. The team creates legal bots that leverage AI to give colleagues guidance on a range of legal issues. Automating yourself out of a job is usually the province of teams like revenue operations - it’s amazing to see lawyers leading the way here, and a mark of how far we’ve come in our quest to do more with less.
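To make the ‘legal bot’ idea concrete, here’s a toy sketch of the pattern - again an illustration under assumptions, not TravelPerk’s actual stack. It grounds answers in internal wiki pages using naive keyword retrieval; a production system would use embeddings and a vector store:

```python
# A toy sketch of the "legal bot" pattern: answer colleagues' questions
# grounded in the team's own wiki, not the open web. The wiki content and
# the naive keyword retrieval below are purely illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

WIKI_PAGES = {
    "nda-policy": (
        "Mutual NDAs on our template can be signed by any manager; "
        "one-way NDAs must be escalated to legal."
    ),
    "data-processing": (
        "We act as a processor for customer data; DPAs are signed via "
        "the standard template in the vendor portal."
    ),
}

def retrieve(question: str) -> str:
    """Naive retrieval: return pages sharing a non-trivial word with the
    question. A real system would use embeddings and a vector store."""
    words = {w for w in question.lower().split() if len(w) > 3}
    hits = [text for text in WIKI_PAGES.values()
            if words & set(text.lower().split())]
    return "\n".join(hits) or "No relevant policy found."

def ask_legal_bot(question: str) -> str:
    context = retrieve(question)
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer only from this internal policy context; if it "
                    "does not cover the question, say 'ask legal':\n" + context
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_legal_bot("Can I sign a mutual NDA with a supplier?"))
```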
Moving legal tasks out of legal teams, at scale, sounds dramatic. It is dramatic. I see it every day when we walk the talk here at Juro - our GC Michael Haynes focuses intently on things like frictionless template design/drafting, negotiation fallback positions, and training and enablement on key topics in order to keep himself out of as many commercial deals as possible.
This was a point I made at an AI and the Professions roundtable at the UK Treasury. The future of legal work may reveal that much of that work was never really ‘legal’ work at all. It was a collection of tasks to be completed by knowledge workers, many of whom are perfectly capable of executing those tasks. This poses the question of what we will really mean by ‘legal services’ in the future.
What was great to see at this roundtable, as well as the Tech Nation Future Fifty event we attended recently, is that the government is sincere about AI, and committed to doing the right things in regulation, promotion of scaleups and AI safety. Since that roundtable, the Prime Minister has called a General Election, but regardless of who occupies no. 10 Downing Street, this commitment looks set to endure.
Sifted data recently confirmed that the UK is behind only the US and China when it comes to startup investment, so the ingredients are all here. But the truth is that the crucial innovators won’t be the government: they will be the legal teams finding the right problems to solve, and the right technology vendors thinking innovatively and moving quickly to partner with them.
The COVID pandemic famously forced five years of technology adoption to happen in a few months. The pace of change in AI suggests legal is undergoing a transformation that’s even more dramatic. The entire Juro team is super excited by this prospect - we’re building an AI-native product that is fit not just for 2024 but for 2026. Even more exciting still is the work our customers are doing to operate, and enable the business, like it’s 2026.