GENERATIVE AI: HOW TO USE IT SAFELY
Tomasz Zalewski
Partner at Bird & Bird. I advise on contracts, IT and technology projects, public procurement, cybersecurity, defence, copyright, Web3 and LegalTech
In this newsletter, I write about the confusion surrounding generative AI systems and give tips at the end on how such systems can be used safely by lawyers and other business advisers.
The same newsletter in Polish is available here:
The arrival and widespread availability of ChatGPT, Bard, Bing AI, Midjourney, DALL-E and other so-called generative artificial intelligence systems came as quite a surprise, to experts and to all of us. Suddenly, systems whose inner workings we do not fully understand became available to everyone, and not, as before, only to a narrow circle of insiders in the laboratories of large companies.
So we started to test these tools on a large scale.
As a result, the media - both traditional and social media - have been inundated with reports describing the results of these tests. From the outset, enthusiastic voices were mixed with critical comments, but recently the sceptics seem to have prevailed.
More and more reports of generative AI use that give us cause for concern are coming to light. Student papers written with the help of ChatGPT, fake photographs of Donald Trump's arrest generated with Midjourney, or entirely false yet authentic-looking information produced by ChatGPT all make us wonder whether these systems bring more problems than benefits.
Adding fuel to the fire were the publication of an open letter calling for a pause in work on generative AI systems and the decision of the Italian Data Protection Authority to block access to ChatGPT on the grounds that it violated data protection laws.
What are the risks associated with the use of generative systems?
Regardless of the perspective taken, there are a number of undoubted risks associated with generative AI systems. Looking from a broader perspective, these risks include, first and foremost:
These are not new threats. They were identified earlier, as artificial intelligence systems developed. The difference is that access to AI systems used to be limited, whereas now virtually everyone can use them, which increases the scale of the potential threats.
From a closer perspective, on the other hand, we can identify risks associated with these systems such as:
Should we restrict ourselves and employees in our organisations from accessing generative AI?
We may get different answers to this question. A precautionary answer will probably recommend the risk-based approach familiar from the GDPR and the draft AI Act. We will likely receive recommendations to, at a minimum:
Implementing such recommendations, however, effectively means blocking the use of these tools for an extended period, and some of them cannot be implemented in practice at all (e.g. it does not seem feasible to independently verify the operation or the training data of a tool such as ChatGPT, which is built on a model with 175 billion parameters).
It is people who use AI systems
It is not possible to apply to generative AI systems all the recommendations that we have developed for existing, specialised AI systems. Their emergence came as a surprise to, among others, the drafters of the AI Act, which is currently being adapted to the new reality.
Nor is it useful to think of AI systems as if they were self-aware and humans mere puppets under their control, as this leads to losing sight of the real issues involved in using these solutions. In the end, it is people who decide how AI systems are built and how they are used.
The recently fashionable field of so-called AI ethics is nothing more than a consideration of how to ensure that AI systems are built and used ethically. This is an important direction for human thought, but there is no doubt that we will not manage without introducing new legal regulation.
Current and future regulations
The EU's AI Act, which is already at an advanced stage of the legislative process, will soon give us a basic legal framework for the development, marketing and use of AI systems. Among other things, it will impose a number of obligations on providers of high-risk AI systems in order to minimise the risks associated with their use.
It is worth noting that these risks arise primarily in situations where we relinquish all or part of human control over the operation of these systems.
However, even now we are not defenceless, especially where these systems process personal data. The GDPR contains provisions that apply to automated decision-making based on the processing of personal data. EU legislation regulating the use of both personal and non-personal data has also been published (or is about to be).
To use or not to use?
I do not wish to give authoritative guidance here on every use of generative AI tools. In my view, however, refraining from using these tools merely because of unspecified or exaggerated risks is not justified. After all, a risk is only prohibitive if it cannot be managed and the likelihood of its materialising cannot be minimised.
Therefore, since it is people who use these systems, it should be sufficient for the time being to implement simple rules of use, adapted to the specific situation, that prevent, or at least minimise, the risk of infringing the law or legally protected interests.
Should lawyers use generative systems?
Lawyers are already being asked for advice on the use of generative AI systems. Can lawyers advise their clients on the use of, for example, ChatGPT while not using such systems themselves in their daily work?
Of course they can - although one may wonder whether a lack of first-hand experience with such tools will limit a lawyer's ability to spot the relevant risks in a client's case or, conversely, cause some risks to be overstated in the analysis.
Furthermore, to use these tools efficiently, it is necessary to break the habit of treating them like search engines and to learn how to formulate queries (so-called prompts) so that the results are satisfactory. This area of knowledge, called prompt engineering, is developing rapidly.
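To illustrate the difference, here is a minimal sketch in Python, assuming the pre-1.0 OpenAI client library; the API key and the prompt wording are illustrative placeholders, not a recommended template:

import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

# Search-engine habit: a bare keyword query, e.g. "contract termination notice period".
# A prompt instead states a role, context, task and expected output format:
prompt = (
    "You are assisting a commercial lawyer. "
    "Draft a three-point checklist for reviewing the termination clause "
    "of a B2B services agreement. Use plain language and flag every "
    "point that must be verified against the actual contract text."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,  # a low temperature keeps the output more predictable
)

print(response["choices"][0]["message"]["content"])

The point is not the code but the shape of the prompt: a role, context, a concrete task and the expected form of the answer, rather than a handful of keywords.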
It is not worth lagging behind your own clients.
Specific guidance on the safe use of generative AI systems
I have prepared specific guidance on the use of text-based generative AI systems, written from the perspective of a provider of business advisory services (for example, a lawyer, consultant or advisor).
The guidance is presented in the form of a decision tree and boils down to three recommendations:
You can compare it with a similar chart prepared by Aleksandr Tiulkanov.
The above graphic is also available here:
I would welcome any comments on the proposed recommendations.
If anyone would like such guidelines tailored to their industry or specific situation (e.g. for generative text-to-image systems), feel free to contact me.
Was this newsletter interesting? Help me improve it.
Your feedback will make this newsletter better. It will take you 15 seconds. The survey is anonymous.