AI governance and risk management


Bronwyn Ross

6 Dec 2023

When implementing an AI governance framework, both the board and management have a role to play in ensuring it reflects the organisation’s risk settings.

Directors have a duty to act in the best interests of the company; they should know the risks faced by their organisation and be able to make informed decisions on how to manage them. In the context of AI, this means knowing where their organisation develops or deploys AI systems and defining the risk appetite for such activity.

Management’s role is to embed risk management in organisational processes and to ensure that those processes operate within the risk appetite set by the board. This applies to the governance processes used to deliver and manage AI initiatives.

Here are some of our thoughts on how to do this:

Review your risk appetite

The board is responsible for setting an organisation’s risk appetite, thereby defining the boundaries of senior management’s authority. It needs to ensure risk settings are comprehensive and address new and emerging areas of risk. Typically, AI initiatives will give rise to common risks for which appetite statements already exist – for example, cyber security, health and safety, financial, reputational, or legal compliance risks. But AI initiatives may also contribute to ethical or social harms the organisation had not previously considered, such as job displacement or an increase in carbon emissions due to massive-scale computing. When setting the organisation’s AI strategy, the board should also review its current risk appetite statements to determine whether they require adjustment.

Conduct (multiple) risk assessments

A risk assessment identifies the potential harms of a proposed initiative and assesses the magnitude of the resulting risk. Typically, organisations conduct preliminary risk assessments during the feasibility or pre-design stage, so management can make an informed decision about whether to fund an initiative. For AI initiatives it is important to repeat the risk assessment at the operational readiness checkpoint (pre-deployment), as this may yield different results; some latent risks only become evident as AI systems adapt and evolve. Conducting risk assessments is good risk management, and requiring them as part of a process is good governance. Note: AI systems don’t always behave as expected. For this reason, the team performing the risk assessment should be as broad as possible – including technical, functional, legal and ethical perspectives – to try to anticipate the potential harms of unexpected behaviours.
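To make the idea of repeating assessments concrete, here is a minimal sketch of a risk record scored at more than one gate. All names here (the `RiskAssessment` class, the stage labels, the 5×5 likelihood-by-impact scale) are illustrative assumptions, not a prescribed methodology.

```python
from dataclasses import dataclass, field

@dataclass
class RiskAssessment:
    """Illustrative risk register entry, re-scored at each checkpoint."""
    risk: str
    scores: dict = field(default_factory=dict)  # stage -> (likelihood, impact), each 1-5

    def rate(self, stage: str, likelihood: int, impact: int) -> None:
        self.scores[stage] = (likelihood, impact)

    def rating(self, stage: str) -> int:
        likelihood, impact = self.scores[stage]
        return likelihood * impact  # simple 5x5 matrix score

# The same risk, assessed twice: at feasibility and again pre-deployment.
drift = RiskAssessment("Model accuracy degrades on live data")
drift.rate("feasibility", likelihood=2, impact=3)
drift.rate("pre-deployment", likelihood=4, impact=3)  # latent risk now evident

print(drift.rating("feasibility"))     # 6
print(drift.rating("pre-deployment"))  # 12
```

The point of keeping both scores, rather than overwriting the first, is that a widening gap between the feasibility and pre-deployment ratings is itself a governance signal worth reporting to the board.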

Monitor controls post-deployment

Risk management typically means applying controls to avoid, mitigate or manage identified risks. Good risk management monitors those controls on an ongoing basis. Post-deployment monitoring is hugely important in the world of AI operations, where AI systems may evolve over time. AI models should be monitored at a technical level for accuracy, performance, latency, and resource utilisation; but humans should also monitor the overall application or workflow the AI model forms part of, in order to override unexpected or unforeseen behaviours. This is called keeping a “human-in-the-loop” and is an important feature of AI risk controls.
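A minimal sketch of what technical monitoring with a human-in-the-loop can look like: metrics are checked against thresholds, and breaches are routed to a person rather than auto-remediated. The threshold names, values, and the `escalate` hook are assumptions for illustration, not any specific product’s API.

```python
# Hypothetical thresholds: (min, max) per metric; None means unbounded.
THRESHOLDS = {
    "accuracy": (0.90, None),   # alert if accuracy drops below 90%
    "latency_ms": (None, 500),  # alert if latency exceeds 500 ms
}

def check_metrics(metrics: dict) -> list:
    """Return the names of metrics that breach their thresholds."""
    breaches = []
    for name, (lo, hi) in THRESHOLDS.items():
        value = metrics[name]
        if (lo is not None and value < lo) or (hi is not None and value > hi):
            breaches.append(name)
    return breaches

def monitor(metrics: dict, escalate) -> None:
    # Human-in-the-loop: breaches go to a reviewer, who decides whether
    # to override or pause the workflow -- nothing is fixed automatically.
    breaches = check_metrics(metrics)
    if breaches:
        escalate(breaches)

monitor({"accuracy": 0.84, "latency_ms": 620},
        escalate=lambda b: print("Escalate to reviewer:", b))
```

In practice the `escalate` callback would raise a ticket or page an on-call reviewer; the essential design choice is that the override decision sits with a human, not the pipeline.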

It’s sometimes difficult to distinguish between AI governance and risk management. Put simply, risk management is part of good governance. But it is worth remembering that good governance includes realising value, as well as managing risk. In our view, organisations seeking to realise value from AI applications need foundational capabilities such as good data management, tools such as an AI inventory, and new processes such as business idea generation.

Red Marble AI can help with all aspects of AI governance, including the management of both risk and value; reach out to us if you’d like to talk.

Kabira Boulakchour

Compliance Executive | Private Equity & Venture Capital | Asset Management | Data Protection | Business Strategy | Corporate Compliance | Independent Director & Board member

7 months ago

Embracing AI with excitement is great, but it needs to be done responsibly. Because AI is designed to adapt, risk assessments should regularly be carried out following deployment as well. Otherwise, we run the risk of holding on to a technology that no longer does more good than it does harm.

