The AI Revolution: Balancing progress with our well being
Sunrise in the Pyrenees this morning.


I have recently been thinking and reading a lot about agency and AI (and shared a stage and related views with Lilian Breidenbach at the LegalTechTalk event in London a few weeks ago). For a start, I should make clear that I'm hugely inspired by what we will achieve in the next 5-10 years, accelerated by the advance of generative AI (and the applications we will build on top of this incredibly powerful general purpose technology), and I have spoken of the positive impacts we are already experiencing at an event organised by the Manchester Law Society. I also enjoyed listening to Sir Geoffrey Vos (the current Master of the Rolls) at the same event, who provoked the audience to consider:

“whether lawyers and others in a range of professional services will be able to show that they have used reasonable skill, care and diligence to protect their clients’ interests if they fail to use available AI programmes that would be better, quicker and cheaper.”

Whilst I wholeheartedly agree with Sir Geoffrey Vos (and note, for those still warming up on the sidelines, that history has shown that “the quiet moderate exercise of anxiety” has never shaped the course of a great technological advancement), I also believe our early explorations should be dovetailed with a continued assessment of “at what cost” some of these benefits might arrive.

Part of the reason for recording my thoughts here is a rising belief or suspicion that:

  • Many across the legal profession see the future as pre-determined - that we are the subjects (rather than agents) of inevitable and rapid change being thrust upon us by this technology’s rising capabilities (to counter - I think we have more capability to shape and harness the trajectory of this technology than we realise, and if you’re experimenting with it then you should be cognisant that you are providing real-time feedback that is relevant to its future trajectory and value)
  • We might gradually, over time, begin to de-skill in areas where AI flourishes (over the last 20 years, we have probably outsourced parts of our memory, knowledge and process-based thinking to Google... and we might be beginning to accelerate the devolution of more material aspects of this intelligence to OpenAI, Anthropic, Gemini... and the interfaces we depend on to access them), and we don’t seem to be consciously forming a plan to address such potential over-reliance, or taking the opportunity to explore the often conflicting dynamics between our job to be done and the motives of the applications we use to do it (which are often incentivised to capture, convert or redirect our attention for commercial gain). We never quite fixed the internet, and the opportunity to shape how we interact with AI-powered interfaces may also be slipping through our grasp. Related - I really enjoyed this lively discussion between Ezra Klein and Nilay Patel covering “Will A.I. Break the Internet? Or Save It?”.
  • Large enterprises and AI model & application providers (and their research labs), fuelled by the idea of the infinite data discovery opportunities that these multimodal models can now harness to model, and eventually create, superior artificially capable intelligence (essentially agents able to reason, plan and autonomously stitch together the performance of discrete tasks in pursuit of an objective - much like how we currently apply our individual judgement, taste and experience to deliver the same), appear to be all in on either augmenting or displacing aspects of the workforce (I cheer for the former, not the latter - but have been posting recently on who should benefit from “our augmented selves”; essentially, I champion the individual above the enterprise, and believe this is a seriously important topic that could determine both the future success of our companies and nation states, and the level of societal disruption we all experience in the next 5-10 years)
  • Governments, for a variety of legitimate reasons (war, rising geopolitical power struggles, elections, global supply chain shocks, inflation etc.) or less legitimate ones (a lack of deep, proliferated technical expertise and understanding), might be being informed of, but not necessarily consulted on, the pace of progress (as it happens) and its near-term societal implications. My scepticism was perhaps compounded by this exchange between Tony Blair and Demis Hassabis discussing the opportunities of AI (with a more than subtle disclosure that many nation state leaders have been slow to understand its potential significance), but slightly alleviated by this interview with Arati Prabhakar, who shared insights into the recent pace of development, hiring and change at the White House (in response).

As a lawyer, I think the pace of progress presents both tremendous opportunities and concerns, particularly the potential near-term impacts on our legal system - and how the law is accessed, applied and adjudicated.

Within our businesses, I see a growing opportunity for lawyers to:

  • Engage and collaborate on the ethical guardrails through which we harness this technology to our competitive advantage - there is a compelling recent example in the work Dana Rao and co at Adobe have done to: (i) propagate Firefly as a model developed and trained on legally licensed materials; and (ii) contribute to the establishment of the Content Authenticity Initiative. This podcast with Verge editor Nilay Patel provides a rich insight into both initiatives.
  • Inform and proactively contribute to nation state initiatives and consultations regarding how we foster the right type of generation-defining advancements (over the wrong type) - and I am (being from the UK) interested to understand the remit and power of the incoming Regulatory Innovation Office.
  • Take an active role in shaping knowledge and data management, extraction, development, curation and monetisation by reference to the critical workflows and products of our businesses.
  • Train and educate employees who access, utilise or develop such data (through AI-powered systems or interfaces) on the importance of understanding and communicating the areas where these systems might be weak, or what might be reasonable indicators of where they might be failing, so that we can build more resilience into our deployments of them, and train users and beneficiaries on how to respond if such circumstances materialise (e.g. think of a pilot’s training on what to do when “auto-pilot” has been switched off as a reference point). I’m starting to believe how we train those who come to rely on outputs (powered by this technology) is equally as important as the way we train the models themselves. For this reason - I believe the way we should educate the lawyers of tomorrow should adapt and change, but perhaps not as transformationally as many would propagate (and if anything, we might benefit from more regular on-the-job training, no matter our level of experience, particularly as advancements in artificial intelligence continue to confound our expectations and, in turn, our reliance).
  • Promote the application and development of uniquely human traits across our workforce and society (our connectedness, application of judgement and human-like mercy and empathy), and ensure we steer (rather than are steered by) the systems we develop and deploy (the classic reference to “human-in-the-loop” deployment).

The latter point is one I wrestle with most frequently, and it is probably linked to a wider duty we have (as lawyers) to uphold public trust and confidence in the legal profession. If we race to create artificially capable intelligence that can apply and adjudicate the law more expediently, consistently and cheaply, then this has some serious upsides - particularly in the spheres of access to, and administration of, justice (when you consider the current plight of the UK judicial system it is hard not to see the rainbow on the horizon) - but let’s also consider the “at what cost” assessment I advocated for earlier.

“I’m not worried about computers thinking like humans. I’m worried about humans thinking like computers.”

(A quote attributed to Joseph E. Aoun, author of “Robot-Proof: Higher Education in the Age of Artificial Intelligence”, but which I came across in this insightful exchange between Jenn McCarron and Professor Wilkins on CLOC Talk Goes to Harvard)

I’ve recently enjoyed reading “The Coming Wave” by Mustafa Suleyman - take the example of the progress DeepMind unlocked in training AlphaZero to defeat the world’s best players of Go. AlphaZero did so by confounding Go players with new ways and means to play the game (unlocking progress by applying the rules and permutations in new and novel ways to defeat its human counterparts). The literal equivalent could be the unique application of the law by AI agents in novel, discrete, subtle and successful ways (not contemplated by the legislature) to pursue the rights of the subject who has set some higher order objective. How might we proactively counter such a foreseeable action? Would we trust trained models to adjudicate justice (and set legislation) on our behalf? Might we quickly lose sight and understanding of the nuance of new legislation and what rights we have as subjects, because we have too eagerly chosen to surpass the limits of our own human agency and control? These are clearly doomsday existential questions, but taking a step back, we will need to start considering how we respond to the interpretation and application of laws by AI agents (soon to be in possession of PhD-level reasoning capabilities, if you’ve seen the news this last week coming out of OpenAI).

Ultimately, I write this article seeking to provoke greater thought around the consequences of each of our explorations and applications of this generation-defining technology. I don’t think we should stop (and I certainly won’t), but I’d love to see us start to have open, empowering conversations with a broader audience regarding the potential wider consequences that should in turn inform and shape its course.

Tom Rice I'm late to the party here (are the Greek islands an excuse??) – this is a brilliant piece! Thanks for sharing. It's also a nice surprise, knowing how quickly you and the TravelPerk team have moved on GenAI. So many of us techno-optimists are skirting over big and real issues for the profession and society, and I don't think that's going to be helpful in the long-run. The big one I'm still seeing is figuring out what this means for impacted workforces, communicating honestly, and making a proactive plan for those individuals. Not only is this important at a societal level: it is also absolutely crucial for making sure these tools are a success. I think the key lies in us recognizing how critical workers and subject-matter experts are in building useful generative AI tools – if we don't have the benefit of their expertise, the digital products we create will be significantly less useful. (This tracks to the original conversation about law librarians being replaced by LLMs, only to be flipped such that law librarians are now seen as pivotal to making LLMs work for law firms.) I like this because it means that everyone needs to be brought on the journey if we're going to make this technology work!

Eugenia Navarro

Partner at LOIS, Strategy Area

4 months

Tom Rice I think these kinds of reflections are necessary; I love to see how you link different reflections and ideas. At a time of transformation of the profession, we need to think about and understand where we are going - but more importantly, where we want to go and what we are not willing to lose. I believe you have touched on essential and urgent points regarding how AI technology is transforming the profession. I fully agree with the idea that we have a significant capacity to shape and direct the trajectory of this technology. It is crucial that legal professionals not only passively adapt to these changes but also become active agents in their implementation and regulation. Even further, they should be able to establish an ethical framework of application that serves to create a better society, improve education, and make justice more accessible. Collaboration between technology developers and legal professionals - but also the client perspective - is essential to ensure that AI solutions align with the ethical and legal principles that govern our society.

Jenn McCarron

Legal Operations & Technology Director | Legal Tech Influencer | Board President | Host of the CLOC Talk Podcast | Ex-Netflix | Ex-Spotify

4 months

Tom — love reading your thoughts. I share so many of these concerns around the downside: over-reliance on law created by AI and, worse, it not having the ability to factor for nuance and delivering services that leave some groups out of thought. I also agree it’s going to be a long, slow road of incremental change with this tech. Everywhere right now we are saying “it’s not production ready in the enterprise right now.” In a few law firm innovation circles recently, I’ve heard professionals say: “Yes, AI tech startup company X’s product is great in a demo. But getting my senior attorney or partner to train the software for a year as if it were a jr. associate? Not happening.” This shows the more realistic timeline we are facing. Last, and like Sheila, I love your thought around what all of these work shifts should yield: "Promote the application and development of uniquely human traits across our workforce and society (our connectedness, application of judgement and human like mercy and empathy)." We will have a chance to be more creative, more empathic, and have even sharper judgement. Let’s hope as leaders we set ourselves up to instill, recognize and measure those qualities.

Lucie Allen

Managing Director at BARBRI Global - NED - Board Member

4 months

Thanks for sharing this Tom Rice, lots to get into and discuss. There's an imbalance currently between fear and opportunity with the advancement of AI, although I agree we may be able, right now, to help shape what's coming. The gains in productivity and efficiency are huge and transformational, and 'de-skilling' in those spaces isn't necessarily a bad thing if it enables us to focus on more impactful work. However, questions remain around whether we are targeting the right problems with the right guardrails. The future lawyer isn't a robot but will be different from the lawyer who spent countless nights preparing materials. Education and training on new ways of working, new technologies and the ability to adapt and drive change need to become the norm and continue throughout a lawyer's career. That presents a pretty exciting opportunity, I think.

Sheila Dusseau

Heading up Global Legal Operations and Innovation at Ferring Pharmaceuticals; proud to be a WorldCC 2024 Inspiring Woman

4 months

Always interesting to get your insights. Two of my faves: "For this reason - I believe the way we should educate the lawyers of tomorrow should adapt and change, but perhaps not as transformationally as many would propagate (and if anything, we might benefit from more regular on the job training, no matter our level of experience, particularly as advancements in artificial intelligence continue to confound our expectations and in turn our reliance)." - this is a great reminder that it's not a "one and done" when we introduce AI or train the team on best practice usage. We need to keep revisiting and re-energizing our views and learning. "Promote the application and development of uniquely human traits across our workforce and society (our connectedness, application of judgement and human like mercy and empathy), and ensuring we steer (rather than are steered) by the systems we develop and deploy (the classic reference to “human-in-the-loop” deployment)." - not enough is being said about this, and I fear the focus will come as a defensive reaction later - too little, too late. Thanks, as always, for the inspiration!
