Governing Artificial Intelligence (part 2): Risk mapping
Philippe NIEUWBOURG
Data Strategist | AI Governance | Data Governance | Data Architect | Instructional Designer | Training and Coaching
In this second part of our series on the governance of artificial intelligence, we take a look at risk. The main motivation for setting up governance is to anticipate, measure and mitigate risks.
Human beings need to learn for themselves. We make little use of the lessons of the past and need to repeat our mistakes before we recognize them and find solutions. The conclusion is that AI will lead to mistakes: some will pay dearly for them, some will lose their jobs for using AI, others will lose customers. Let's go all the way: some people will lose their lives by mistake, for example when AI-driven weapons are deployed in conflicts.
We're not there yet, and our subject remains the company. But the risks are real, even if the consequences are less extreme.
I would distinguish two categories of risk. On the one hand, the risks inherent in artificial intelligence itself, its data and its models; on the other, the risks associated with the misuse of AI that does, in itself, function correctly.
The first role of an AI governance manager will be to map these risks. As in the fields of cybersecurity or business intelligence, it's essential to know what threats you're likely to face, then assess and measure them, and finally propose solutions to mitigate them.
Lord KELVIN (William Thomson) is quoted as saying, in the 19th century: "If you can't measure it, you can't improve it." So don't tell me that your AI is perfectly controlled and free from any risk when you haven't even begun this mapping work. That's called moving forward in the dark. Sometimes it works, but don't complain about the consequences if you run into a risk!
So do things in the right order, and map out your risks. You can do this yourself, using the templates I recommend in my training courses, or you can take a more comprehensive approach, based for example on the ISO 31000 standard[1]. This standard is not specific to artificial intelligence, or to any particular type of risk, but it teaches you a global approach to risk management. If you don't want to delve into the full standard, the book "Comprendre l'ISO 31000 : Mettre en place une gestion globale et intégrée des risques"[2], published by AFNOR, is a good introduction.
The aim of this mapping exercise is to produce an AI risk matrix.
You can create this matrix with a simple spreadsheet program, or with software. There are several on the market; I'm not familiar with them and haven't tested them, but take a look at Value Associates'[3] solution, for example.
In short, the aim of this mapping exercise is threefold:
1) Identify potential risks;
2) Measure the probability of their occurrence;
3) Assess their potential impact on your business.
These elements will determine the order in which these risks will be addressed, and the resources that will be allocated to mitigate their impact.
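To make this concrete, here is a minimal sketch of such a risk matrix in Python rather than a spreadsheet. The risk names and the 1-to-5 probability and impact scores are hypothetical examples, not a prescribed taxonomy; the point is simply that a prioritization score can be derived from probability times impact and used to order the work.

```python
# Minimal, illustrative sketch of an AI risk matrix.
# Risk names and scores (1 = low, 5 = high) are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    probability: int  # likelihood of occurrence, 1-5
    impact: int       # potential impact on the business, 1-5

    @property
    def score(self) -> int:
        # Classic probability x impact prioritization score
        return self.probability * self.impact

risks = [
    Risk("Biased training data", probability=4, impact=4),
    Risk("Shadow AI leaking confidential data", probability=3, impact=5),
    Risk("Deepfake used for 'fake president' fraud", probability=2, impact=5),
]

# Address the highest-scoring risks first
for risk in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.name}")
```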
So, when it comes to artificial intelligence, what are the risks? I'm not proposing an exhaustive list, but rather a focus on a few of the most common risks you'll have to face.
Data risks
The first and most obvious risk is data quality. AI doesn't understand anything, of course, and if you feed it imperfect data, you'll get imperfect results.
Biases are complicated to identify, and sometimes even deliberately introduced. If my AI shows an image of a camper wearing nylon shorts, a tank top and sandals, is that a bias, because I've only fed it images of this type, or is it reality? Of course, there are campers who look different, but is it a bias or an invisible minority? Another example: women drive better than men, a fact that car insurers are well aware of. So if I use a person's gender to price their car insurance, men will pay more than women. Is this normal or a bias? It's a complicated question.
The same goes for discrimination. To make a choice is to discriminate! In all cases! The question is one of criteria. I can choose a candidate on the basis of his technical knowledge, but not discriminate against him on the basis of his age. But can I choose a receptionist based on her appearance? Or discriminate positively, by giving priority access to my services to refugees? The question is incredibly complicated. But it's precisely the role of AI governance to ask these questions, before the problem arises.
Another theme is the ownership of the content I use to feed my models. If you only use internal data, don't worry. But ChatGPT, for example, has pumped billions of web pages, PDF documents and books into its model without ever bothering to obtain the authors' permission. We're back to the early days of the Internet, with comments like "it's on the Internet, so it's free"... well, no!
Model risks
The second category of risks concerns AI models themselves.
Never forget that the AI model, whatever it may be, understands nothing about our world! So don't count on it to tell you that it's been badly parameterized. And as this is all mathematics, parameter setting is very important.
Models sometimes get it wrong! And we can't blame them, because they don't understand a thing. A few months ago, when I asked ChatGPT the difference between hen's eggs and rooster's eggs, it complied and produced twenty lines on the subject, without ever questioning a rooster's ability to lay eggs.
Finally, let's not forget that this AI, which amuses us and makes us dream, is incredibly energy-intensive. One of the risks of developing AI in the enterprise is that its carbon footprint will explode. It's all very well to reuse sheets of scrap paper, but if at the same time you're launching ChatGPT for no reason at all, you're going to suffer when you have to assess its impact on your ESG reporting.
Risks related to use
Now let's talk about the risks associated with use, and misuse.
Deepfakes, whether fake images, videos or voices, can be used against the company, for example in attempts at "fake president" fraud (CEO fraud).
From a societal point of view, it's their use in election campaigns that gives cause for concern.
They can also be used to humiliate or harass people. Singer Taylor Swift recently fell victim to such AI-generated images.
AI can also produce synthetic content, for example a realistic video ad generated entirely by AI. This will cost much less than shooting a real ad. Is this good or bad? It's up to you to define your own ethical rules.
If AI is used by people who know the subject and see it as a tool to get their work done faster, fine. But if AI is used by people who don't know the subject, it will produce low-quality content that then gets sent to a customer or supplier, or published on a website. Will this ultimately damage the company's image? AI-generated content is flat, without irony, without opinion, I'd say without depth. Is this the image you want to project of your company?
One of the risks we don't talk much about is that of Shadow AI, the use of artificial intelligence by employees, even though they are forbidden to use it. So, discreetly, I'm going to ask ChatGPT to produce the minutes of the confidential meeting I recorded... I don't have time to listen to it all again. And there it is, the contents of the confidential meeting on OpenAI's servers, without anyone knowing.
Framing the use and non-use of AI in the enterprise is one of the first topics to be addressed in the artificial intelligence charter to be put in place by the AI governance team.
Once again, this list is not exhaustive, but it does give you some pointers. The most important thing is to map your own risks. And this map must be a living document. Some form of alert should be raised when a new risk appears; and, for safety's sake, it's advisable to plan a general review of risks every year, to take account of changes in their impact and likelihood of occurrence.
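As a minimal sketch of what keeping the map "alive" can mean in practice, the snippet below flags risk entries whose annual review is overdue. The field names, the example entries and the one-year cadence are assumptions for illustration, not a prescribed format.

```python
# Minimal sketch of a "living" risk register: flag entries whose annual
# review is overdue. Field names, entries and the one-year cadence are
# hypothetical assumptions.
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=365)  # general review at least once a year

risk_register = [
    {"name": "Biased training data", "last_review": date(2023, 5, 10)},
    {"name": "Shadow AI", "last_review": date(2024, 1, 15)},
]

today = date.today()
for entry in risk_register:
    if today - entry["last_review"] > REVIEW_INTERVAL:
        # In practice this would raise an alert to the AI governance team
        print(f"Review overdue: {entry['name']}")
```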
[2] ISBN: 9782124658794
#data #datagovernance #aigovernance #governance #CIGO