A Technologist Perspective on the AI Position Paper of The Law Society of Hong Kong
AI and the Legal Sector in Hong Kong

To kick off 2024, The Law Society of Hong Kong released a Position Paper: The Impact of Artificial Intelligence on the Legal Profession. It joins work from an array of organizations seeking to define and scope Artificial Intelligence (AI), especially within the professional services sector. At just 14 pages, it is a quick read that seeks to address the far-reaching impact of AI with adequate urgency, a challenge being felt across economies and industries as AI output continues to improve at light speed in quality, quantity and variety. It marks a strong beginning in a journey to educate, engage and work with the Hong Kong legal sector in addressing developments in AI.

EXECUTIVE SUMMARY

  1. The Law Society should lean into training and CPD offerings on AI.
  2. We agree with the HKLS that AI will impact the profession and require ongoing guidance.
  3. Do not use the term "Open AI" to describe open source AI development or public datasets.
  4. Provide specific examples of how AI is being integrated into software that is not standalone Generative AI.
  5. Firms and the Law Society should align their AI terminology with global standards organizations like ISO, IAPP, NIST, etc.
  6. Make extra efforts to assist the majority of Hong Kong lawyers who are either sole proprietors or in small (2 to 5 person) firms.
  7. AI struggles with connecting dots, planning, and dealing with changes over longer timelines--all things at which skilled lawyers excel.
  8. Quick wins in using AI remain in formatting, summaries, rephrasing and following rules. Ask AI what it knows on a given topic to audit and improve it in chat.
  9. The amount of change taking place in AI is massive, but mutatis mutandis, AI can certainly aid lawyers and law firms in many of their regular tasks.
  10. LexiTech looks forward to teaching CPDs on AI, setting up in-house, data secure AI instances, and helping more HK lawyers with their technology needs.

Before diving in, there is one notable misstep in the paper: calling AI models which use data sourced from the public internet "Open AI". And while this review is technology-focused and not from the legal perspective, it is worth noting that OpenAI, Inc., represented by Quinn Emanuel, is pursuing a trademark infringement lawsuit against Guy Ravine and Open Artificial Intelligence, Inc.[1] regarding the usage of those same terms: OpenAI / Open AI. Back to the technologist's point of view: go to a search engine looking for "Open AI" and see which way the hundreds of millions of results lean.

The "Open AI" misstep does help introduce a major issue all facets of the industry are dealing with: definitions and standardization. Major standards bodies--such as ISO, NIST, IAB Tech Lab, IAPP and others--are still working on standards, with most at the beginning stages of building out published and reviewed definitions relating to AI.[2] Still, "Open AI" as a term is in direct conflict with a major player in the industry and should not define a model that "...goes out into the open internet and learns all the word patterns it can find."[3] Rather, it should be replaced in upcoming Law Society documents with terms that are more descriptive (ethical AI, transparent AI, AI governance), more model-specific, and less conflicted.

Apart from the trademark considerations, the Law Society's definition combines two facets of AI that are often addressed separately: models and datasets. The current magic in Large Language Models (LLMs) made famous by ChatGPT is found in the combination of those two aspects:

  1. A massive training dataset, large enough to provide statistical relevance in parsing grammar and other semantic relationships, (the LL in LLM); and,
  2. A model (the M in LLM) composed of an overall architecture that is engineered for chat-like output, but allows it to "learn" how to do this from the training dataset and interactive chats.

As a simple example, if ChatGPT only trained on Cantonese, it would only 'understand' and 'reply' in Cantonese--even though the model could be the same as one that can produce answers in English. To continue our example, English writing would need to be added to the training data before ChatGPT could interact in English, again without the model ever being re-engineered.

Similarly, an unchanged dataset can produce dramatically different results by changing the type of model that interacts with that dataset. These model types include things like: computer vision, speech-to-text, natural language processing, multi-modal, etc. Next, within a given model type there are different "weights" and "temperatures" that can be scaled in order to affect output, not to mention other engineering factors. As an example of how many models can be created within one type, see the hundreds of entries on the Open LLM Leaderboard at Hugging Face.[4]
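
"Temperature" in particular can be illustrated in a few lines of Python. The sketch below is illustrative only: the logits (raw model scores for three hypothetical candidate next words) are invented numbers, not output from any real model, but the rescaling shown is the standard softmax-with-temperature step used in LLM sampling.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores (logits) into next-word probabilities.

    Lower temperature sharpens the distribution (more deterministic output);
    higher temperature flattens it (more varied, 'creative' output).
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max before exp() for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Invented scores a model might assign to three candidate next words
logits = [2.0, 1.0, 0.1]

cold = softmax_with_temperature(logits, temperature=0.5)  # top word dominates
hot = softmax_with_temperature(logits, temperature=2.0)   # probabilities even out

print([round(p, 3) for p in cold])  # -> [0.864, 0.117, 0.019]
print([round(p, 3) for p in hot])   # -> [0.502, 0.304, 0.194]
```

Same model scores, different temperature, noticeably different behavior: at 0.5 the most likely word is chosen almost every time, at 2.0 the alternatives get real airtime.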

The relationship between models and datasets circles back together, though, for training. Often the best scores for weights and temperatures are those derived from a large enough and sufficiently focused dataset. As Stephen Wolfram writes in his treatise, What Is ChatGPT Doing ... and Why Does It Work?, "...weights are normally determined by 'training' the neural net using machine learning from examples of the outputs we want."[5] Clear-eyed lawyers reading this far may be thinking, "If only ChatGPT had the entire written judgments of Chief Justice McJudgy Judge[6], and trained on outputs where my firm won the case..." Indeed, if you are thinking along those lines, you're well on your way to understanding AI "weights", "temperatures", "training data" and "outputs".
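
Wolfram's description of training can be sketched in miniature. The toy example below fits a single weight from input → output examples by repeated error-driven nudges (gradient descent); a real LLM does conceptually the same thing across billions of weights. All numbers here are invented for illustration.

```python
# "Weights are determined by training... from examples of the outputs we want."
# Toy version: learn the single weight w in prediction = w * x from examples.

examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # outputs we want: y = 2x

w = 0.0              # the single "weight", initially untrained
learning_rate = 0.05

for _ in range(200):                    # repeated passes over the examples
    for x, y in examples:
        prediction = w * x
        error = prediction - y          # how far off the current weight is
        w -= learning_rate * error * x  # nudge the weight to reduce the error

print(round(w, 3))  # -> 2.0
```

After enough passes the weight settles on the value that reproduces the example outputs, which is exactly why a dataset of "judgments where my firm won" would pull the weights of a fine-tuned model in that direction.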

Hopefully the above breakdown, albeit brief, emphasizes the point of having clarity and commonality in terms.

Now that the "Open AI" hurdle is cleared, the strengths of the Law Society Position Paper come through.

STRENGTHS (emphasis mine)

Prescriptions

  1. Coordinated efforts on the formulation of ethical guides on the use of AI are urgently needed in areas like data protection and data governance, security and safety, transparency, disclosure and proper human oversight.[7]
  2. Lawyers have to reach a higher level of thinking and find innovative ways of protecting data privacy, privilege and confidentiality.[8]
  3. As a qualified legal practitioner, the solicitor shoulders the ultimate responsibility on the quality of the work. Having a good understanding on how the technological solutions work is essential so that he can decide the extent of reliance he should place on the tools. Keeping up to speed on technology is thus becoming an additional component of competence required of a solicitor.[9]
  4. Continuing education and reskilling training are essential?to facilitate the evolution of entry-level roles such as paralegals and legal assistants transitioning into more specialized roles managing and overseeing AI systems and processes.[10]

Cautions

  1. OSPs [online service providers] are not law firms. They are unregulated and have no mandatory professional indemnity cover to protect the public from losses arising from their services.[11]
  2. Substantial amounts of legal data are highly confidential and/or protected by legal professional privilege. There are concerns about client data privacy and securing sensitive information when using AI via cloud services. When interacting with an Open AI model, all inputs are sent to the relevant cloud services provider and is used by the LLM to further assist in training. Such data therefore becomes available to anyone else using the same LLM model.[12]
  3. LLMs do not represent a search for truth and meaning. This, in turn, leads to the increasingly well-known phenomenon of “Hallucinations”. Or in plain language, making up things that are not true. This is because the LLM is assessing the likelihood of probable next words, not the truth value of those words or even the truth value of the connections of those words as concepts.[13]
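
That last caution can be made concrete with a toy sketch. The counts below are hypothetical: they imagine a training corpus in which "Sydney" follows the phrase "the capital of Australia is" more often than the true answer does. A model that only ranks probable next words will then confidently produce a fluent falsehood.

```python
# Toy illustration: a language model picks the statistically most likely
# next word, not the true one. The counts are invented; a real LLM does
# the same ranking over vastly more context.

from collections import Counter

# Hypothetical corpus counts of words following "the capital of Australia is":
next_word_counts = Counter({"Sydney": 70, "Canberra": 25, "Melbourne": 5})

total = sum(next_word_counts.values())
probabilities = {w: c / total for w, c in next_word_counts.items()}

most_likely = max(probabilities, key=probabilities.get)
print(most_likely)  # -> Sydney  (fluent, probable, and wrong)
```

Nothing in the computation checks truth; the model simply reports what the corpus made probable. That is the mechanism behind hallucinations, and why AI output that asserts facts needs lawyer review.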

If a six-point highlight of a 14-page paper isn't succinct enough for you, here's the TL;DR: continue to learn about AI, be vigilant when AI output requires facts, and be extra careful to protect confidential data. To this end, the Law Society leans into another strength with its plans to inform, engage and implement continuing professional development, promote ethical standards and best practices, and work as a hub for AI law-related resources in Hong Kong.

The weaknesses of the paper are much more open to interpretation than the usage of "Open AI", and are mostly the views and biases of your author here. That said, some quibbles to consider:

QUIBBLES (emphasis and commentary mine)

  1. "Within a law firm, key roles and skills will be required to keep up with the adoption of AI systems."[14] Yes, firms will want to keep up with AI, but the need to have AI-focused roles in-house is questionable. The financial motivations of a law firm will--and rightly so--skew towards hiring more lawyers, not technologists. Creating and helping law firms adopt AI systems is something that can likely be outsourced, even while maintaining confidentiality and security of client data. Also, hiring an in-house "Legal Knowledge Engineer" is likely a non-starter for a sole proprietorship or a 2 to 5 person firm. Cheaper, local, and more readily available AI solutions are needed from the mid-tier down to sole proprietors.
  2. Limited focus on finished work products. Responsibility for finished work is mentioned in 33(g), "As a qualified legal practitioner, the solicitor shoulders the ultimate responsibility on the quality of the work", but as an overall emphasis in the position paper it is lacking. Many AI tools are already integrated into the software and systems lawyers use on a regular basis. Further, the businesses that compete in providing these tools have a vested interest in not being transparent about how their tools function. This is all the more apparent when working directly with proprietary generative AI systems, such as ChatGPT. The position paper would be better served by a greater focus on the accuracy of deliverables, because it is the emails to clients, finished contracts, and documents in court that will be attributable to the lawyers presenting them--regardless of how they are made. Regulating the profession is a core function of a body like the Law Society, so in a position paper I can understand the emphasis on the topic at hand. But especially with systems that can tend toward hallucinations, alterations and sycophancy, all in the interest of "sounding" right to their correspondent, there is all the more need to remind lawyers that something poorly produced by the body of the firm can cost its head.
  3. A lack of analysis for integrated AI systems. As mentioned above, the position paper largely deals with AI as a standalone entity or tool. It lacks guidance on evolving software itself, especially Software as a Service (SaaS), which is progressively adding more AI processing to its conventional products. For an applicable Hong Kong example that worked prior to January 2024, point your browser to: https://chat.openai.com/ and you'll see an "Unable to load site" message. However, there is no such issue for Microsoft Copilot. Ask Copilot if it is powered by OpenAI and it will tell you something to the effect of, "I am powered by OpenAI. My underlying technology is a combination of advanced language models, including GPT-4, which allows me to understand and generate human-like text based on the input I receive." Further, "Copilot is ... embedded in the Microsoft 365 apps you use every day — Word, Excel, PowerPoint, Outlook, Teams and more." Copilot+OpenAI also powers Business Chat, which "...works across the LLM, the Microsoft 365 apps, and your data — your calendar, emails, chats, documents, meetings and contacts — to do things you've never been able to do before."[15] The trend for AI tools to be ingrained and improved in line with the business goals of companies like Microsoft, Google, Apple, Salesforce, Zoho and other competing platforms will not change. In this regard, AI adoption may not even be a milepost on a timeline that some lawyers choose, but rather part of the long march of software development. As AI-powered performance becomes more seamless and subtle, it only adds a greater need for adequate review of deliverables from lawyers.


RECOMMENDATIONS

Educate. I wholly agree with Law Society plans to offer educational events and seminars on AI to the legal community in Hong Kong. As already seen even in the technology-focused standards organizations, there is a struggle to keep up with the pace of change brought on by AI. At a basic level, understand the factors that contribute to how an AI functions--prompts, weights, temperature, tokens, context windows and parameters--and begin to get a rough idea of how these factors compare in the performance of different models.

Partner. If not formally, then at least in general alignment, the Law Society and lawyers should seek to define their AI terminology in line with--and complementary to--that of other global organizations.

Strengthen. If you or your firm have a data secure way to use AI, then lean into its strengths. Does your firm have a style guide? Make that style guide the training and output format instructions of the AI. Does your workflow benefit from summaries or in reframing and restructuring documents? See what AI can achieve in speed and efficiency. Wondering what your AI knows about the data it has ingested? Ask it. LLMs tailor output to the structure and questions asked within a context window, so a well-formed prompt--or series of prompts--can make a dramatic difference in the reliability and repeatability of output.
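
The style-guide idea above can be sketched as simple prompt assembly, independent of any particular AI vendor. The firm rules and function below are hypothetical placeholders for illustration, not a specific product's API; the point is that the same instructions get prepended to every request, so output stays consistent.

```python
# Sketch: turn a firm style guide into standing instructions for an LLM.
# FIRM_STYLE_GUIDE and build_prompt are illustrative, not a real product API.

FIRM_STYLE_GUIDE = [
    "Use British English spelling.",
    "Write dates in the form 1 January 2024.",
    "Refer to clients by initials only; never include full names.",
]

def build_prompt(task: str, document: str) -> str:
    """Prepend the firm's standing rules to every task sent to the model."""
    rules = "\n".join(f"- {rule}" for rule in FIRM_STYLE_GUIDE)
    return (
        "You are drafting for a Hong Kong law firm. Follow these rules:\n"
        f"{rules}\n\n"
        f"Task: {task}\n\n"
        f"Document:\n{document}"
    )

prompt = build_prompt("Summarise in three bullet points.", "...document text...")
print(prompt.splitlines()[0])  # -> You are drafting for a Hong Kong law firm. Follow these rules:
```

The assembled prompt would then be sent to whichever data-secure model the firm uses; because the rules ride along with every request, the style guide is enforced without retraining anything.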

CONCLUSION

Apart from the shoehorned use of "Open AI" in a mangled and constrained definition, the AI position paper of the Law Society is a solid beginning. Its forward-looking approach to educating, engaging, and participating with the Hong Kong legal sector as AI develops is its strength. It openly acknowledges the challenges faced by sole proprietorships and small firms. From this beginning, the Law Society can shore up a few weaknesses, improve alignment with the technology community at large, and better address the day-to-day technology needs of Hong Kong lawyers. We at LexiTech look forward to being a part of these coming changes and are happy to answer any questions you may have.

REQUEST FOR COMMENTS (RFC)

In a nod to the origin of the Internet, please comment on this article below.* Your time and constructive, polite opinions are greatly appreciated.

*"Request For Comments (RFCs) documents were invented by Steve Crocker in 1969 to help record unofficial notes on the development of the ARPANET. They have since become the official record for Internet specifications, protocols, procedures, and events." [Source]


[1] https://www.pacermonitor.com/public/case/49838878/OPENAI,_INC_v_Open_Artificial_Intelligence,_Inc_et_al for more details.

[2] https://www.iso.org/standard/81230.html - As of April 2024, ISO/IEC 42001, the international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS), is published but still under review.

[3]?The Impact of Artificial Intelligence on the Legal Profession, Position Paper of The Law Society of Hong Kong, January 2024, p. 7, Item 19

[4] Also: https://huggingface.co/models where rankings can be sorted by "most likes", "most downloads", "trending" and so on.

[5]?https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/ - "The basic idea is to supply lots of “input → output” examples to “learn from”—and then to try to find weights that will reproduce these examples."

[6]?Fictional

[7]?The Impact of Artificial Intelligence on the Legal Profession, p. 11, Item 40

[8]?Ibid., p. 5, Item 15

[9]?Ibid., p. 8, Item 33, (g)

[10]?Ibid., p. 7, Item 27

[11]?Ibid., p. 5, Item 11

[12]?Ibid., p.6, Item 20

[13]?Ibid., p. 5, Item 18

[14]?Ibid., p. 6, Item 25

[15] The integration of Copilot, powered by OpenAI and GPT-4, into Microsoft 365 was announced on 16 March 2023.


This article originally appeared on LexiTech Consulting. Published here with permission.
