GENERATIVE AI: HOW TO USE IT SAFELY

In this newsletter, I write about the confusion surrounding generative AI systems and give tips at the end on how such systems can be used safely by lawyers and other business advisers.

The same newsletter in Polish is available here:

The advent of, and widespread access to, ChatGPT, Bard, Bing AI, Midjourney, DALL-E and other so-called generative artificial intelligence systems came as quite a surprise to experts and to the rest of us. Suddenly, systems whose inner workings we do not fully understand became available to everyone, and not, as before, only to a narrow circle of insiders in the laboratories of large companies.

So we started to test these tools on a large scale.

As a result, the media - both traditional and social media - have been inundated with reports describing the results of these tests. From the outset, enthusiastic voices were mixed with critical comments, but recently the sceptics seem to have prevailed.

More and more worrying cases of the use of generative AI systems are coming to light. Student papers prepared with the help of ChatGPT, fake photographs depicting Donald Trump's arrest generated with Midjourney, or completely false yet authentic-looking information produced by ChatGPT make us wonder whether these systems bring more problems or benefits.

Adding fuel to the fire was the publication of an open letter calling for a pause in work on generative AI systems, and the decision by the Italian Data Protection Authority to block access to ChatGPT on the grounds that it violated data protection laws.

What are the risks associated with the use of generative systems?

Regardless of the perspective taken, there are a number of undoubted risks associated with generative AI systems. Looking from a broader perspective, these risks include, first and foremost:

  • Risk of millions of people losing their jobs
  • Risk of flooding the world with false information leading to an epidemic of disinformation

These are not new threats; they were identified earlier, as generative artificial intelligence systems were being developed. The difference is that access to AI systems used to be limited, whereas now virtually everyone has access to such systems, which increases the scale of the potential threats.

On the other hand, from a closer perspective, we can identify, for example, such risks associated with these systems as:

  • the mass production of false information that is presented in a form that imitates real information
  • lack of explainability of the results - we can essentially only verify an answer by comparing it with information we know to be correct; we are not able to check step by step how the system "arrived" at that answer
  • difficult-to-detect errors in the system, resulting from the training data set used or the training method, which may generate results that we consider undesirable (e.g. racist or discriminatory against certain individuals or social groups)

Should we restrict ourselves and employees in our organisations from accessing generative AI?

We may get different answers to this question. The precautionary answer will probably recommend the risk-based approach familiar from the GDPR and the draft AI Act regulation. We will likely be advised, at a minimum, to:

  • examine the type of information to be processed
  • carry out a data protection impact assessment
  • check the legal basis for processing data fed into generative AI systems
  • verify the data that was used in the AI system training process
  • verify the quality assurance process of the training data and the system learning process itself
  • etc.

Implementing such recommendations, however, effectively means blocking the use of these tools for an extended period of time, and some of them cannot be implemented in practice at all (e.g. it does not seem feasible to verify the operation or the training data of a tool such as ChatGPT, whose underlying model reportedly has some 175 billion parameters).

It is people who use AI systems

It is not possible to apply to generative AI systems all the recommendations that we have been making for the existing, specialised AI systems. Their emergence came as a surprise to, among others, the drafters of the AI Act regulation, which is currently being adapted to the new realities.

Nor is it useful to think of AI systems as if they had a consciousness of their own and humans were mere puppets under their control, as this leads to losing sight of the real issues involved in using these solutions. In the end, it is people who decide how AI systems are built and how they are used.

The recently fashionable trend of so-called AI ethics is nothing more than a consideration of how to ensure that AI systems are built and used in an ethical way. This is an important direction for human thought, but there is no doubt that it will not suffice without the introduction of new legal regulations.

Current and future regulations

The EU's AI Act regulation, on which work is already well advanced, will soon give us the basic legal framework for the development, marketing and use of AI systems. Among other things, this legislation will impose a number of obligations on providers of high-risk AI systems in order to minimise the risks associated with their use.

It is worth noting that these risks are primarily associated with situations where we relinquish all or part of human control over the performance of these systems.

However, even now we are not defenceless, especially to the extent that these systems process personal data. GDPR contains relevant provisions that apply to automated decision-making based on the processing of personal data. EU legislation has also been published (or is about to be published) that regulates the use of data, both personal and non-personal.

To use or not to use?

I do not wish to give authoritative guidance here on every use of generative AI tools. However, in my view, refraining from using these tools solely because of unspecified or exaggerated risks is not justified. After all, a risk is only disqualifying if it cannot be managed and the likelihood of its occurrence cannot be minimised.

Therefore, since it is people who use these systems, it should be sufficient for the time being to implement simple rules of use, adapted to the specific situation and preventing, or at least minimising, the risk of infringement of the law or legally protected interests.

Should lawyers use generative systems?

Lawyers are already being asked for advice related to the use of generative AI systems. Is it possible to have a situation where lawyers advise their clients on the use of, for example, ChatGPT, while not using such systems themselves in their daily work?

Of course, yes - although one may wonder whether the lack of first-hand experience with such tools will limit the ability to see the relevant risks in a client's case or, conversely, cause some risks to be overstated in the analysis.

Furthermore, in order to use these tools efficiently, it is necessary to break the habit of writing search-engine-style queries and learn how to formulate queries (so-called prompts) in such a way that the results obtained are satisfactory. This area of knowledge - called prompt engineering - is developing rapidly.
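To give a sense of the difference, here is a minimal sketch of a structured prompt sent from Python. It assumes the pre-1.0 "openai" package; the model name, the API key placeholder and the prompt text are illustrative only, not a recommendation of any particular wording.

    # Minimal prompt-engineering sketch (assumes the pre-1.0 "openai" Python package).
    import openai

    openai.api_key = "YOUR_API_KEY"  # never paste client data or secrets into prompts or code

    # Unlike a search-engine query, a prompt states the role, the task,
    # the constraints and the expected output format.
    prompt = (
        "You are assisting a business adviser. "
        "List, in five bullet points, typical questions a company should ask "
        "before allowing employees to use generative AI tools at work. "
        "If you are unsure about a point, say so rather than guessing."
    )

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )

    print(response["choices"][0]["message"]["content"])

The point of the sketch is not the code itself but the shape of the prompt: role, task, constraints and output format, instead of a handful of keywords.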

It is not worth lagging behind your own customers.

Specific guidance on the safe use of generative AI systems

I have prepared specific guidance on the use of text-based generative AI systems. It is written from the perspective of a provider of business advisory services (so, for example, a lawyer, consultant or adviser).

The guidance is presented in the form of a decision tree and boils down to three recommendations (a simple code sketch of the same logic follows the list):

  1. Do not introduce data protected by law (personal data, confidential data, data covered by professional secrecy, etc.) into generative AI systems - at least as long as you are not able to analyse such a system from the point of view of data protection and the applicable regulations.
  2. Only use the results of generative AI systems if you have sufficient knowledge and experience to verify them for correctness and completeness.
  3. Only communicate the results of generative AI systems to others if you are prepared to take responsibility for the accuracy of those results as if they were created by you.

You can compare it with a similar chart prepared by Aleksandr Tiulkanov.

[Decision-tree graphic summarising the three recommendations above]

The above graphic is also available here:

I would welcome any comments on the proposed recommendations.

If anyone would like such guidelines tailored to their industry or specific situation (e.g. for generative text-to-image systems), feel free to contact me.

Was this newsletter interesting? Help me improve it.

Your feedback will make this newsletter better. It will take you 15 seconds. The survey is anonymous.

Przemysław Barchan

IT & NewTech | AI & Data | Cybersecurity | Cloud | FinTech | LegalTech | LegalOps Specialist

1 year ago

I would also add a strong security check. Apart from LLM-based tools, there are plenty of plug-ins to apps used by business (including law firms). We still don't know a lot about this area, and security should be one of the most important concerns. In addition, the main big cloud providers plan to embed generative AI into their service environments (Microsoft is the very first one); in Microsoft's case we will get it this year. Last but not least, before we even examine the concerns around ChatGPT-4 properly, version 5 will be released in Q4 2023.
