Risk assessments for generative AI

A few months ago we wrote about AI governance and risk management and the importance of conducting risk assessments for AI initiatives. In this article we'll take a look at risk assessments for generative AI (genAI) projects in particular.

It can be difficult for organisations to identify relevant genAI risks and determine how to manage them. For one thing, genAI risks are quite diverse: some are algorithmic (e.g. bias), others are organisational (e.g. data leakage) and still others present as risks to society (e.g. misinformation or skills displacement). Further complicating matters, the consequences of some risks may crystallise at a point in time, whereas the consequences of others may be realised only over an extended period. Identifying such a broad variety of risks requires multiple actors, and mitigating them requires a variety of measures, from policy interventions to organisational strategies and individual user guidelines.

The US National Institute of Standards and Technology (NIST) recently published a draft AI RMF Generative AI Profile which may help organisations evaluate genAI risk. It defines a list of 12 risks unique to or exacerbated by genAI and offers more than 450 actions that different actors can take across the AI lifecycle to enhance the trustworthiness of their genAI solutions. The profile could be a useful starting point for organisations struggling to assess the risks of their genAI projects.

A well-defined risk description comprises three elements: the cause, the risk event and its resulting impact. NIST’s draft profile helps with the first two elements, by suggesting multiple causes and possible risk events. A project team could start with the draft list, filter it to the risks that are relevant at the organisational level, and assess their impact in the context of the project, as sketched below.
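To make this concrete, here is a minimal sketch in Python of how a project team might record the three-part risk description and filter a draft list down to organisationally relevant risks. The `Risk` dataclass, the sample entries and the `relevant_categories` filter are illustrative assumptions of ours, not content taken from NIST's profile.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    cause: str     # what gives rise to the risk
    event: str     # the risk event itself
    impact: str    # the resulting impact, assessed in project context
    category: str  # e.g. "organisational", "algorithmic", "societal"

# Illustrative entries only, loosely inspired by the kinds of risks
# discussed above; not taken verbatim from NIST's draft profile.
draft_risks = [
    Risk("prompts include customer records", "data leakage",
         "breach of confidentiality obligations", "organisational"),
    Risk("skewed training data", "biased outputs",
         "unfair treatment of some user groups", "algorithmic"),
    Risk("cheap large-scale content generation", "misinformation",
         "erosion of public trust", "societal"),
]

# Filter to the categories the project team can act on directly,
# then assess each risk's impact in the context of the project.
relevant_categories = {"organisational", "algorithmic"}
project_risks = [r for r in draft_risks if r.category in relevant_categories]

for r in project_risks:
    print(f"{r.event}: {r.cause} -> {r.impact}")
```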

The next step would be to design appropriate mitigations, in line with organisational risk appetites. NIST’s profile may be of some assistance here too: it supplies a pick list of possible actions to address risk. Of course, NIST’s suggested actions are necessarily generic; project teams performing risk assessments will more than likely need to tailor them and/or identify additional context-specific mitigations.
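Continuing the sketch above, one way a team might pair each retained risk with candidate mitigations and screen them against a stated risk appetite is shown below. The pick list, severity scores and appetite threshold are hypothetical placeholders of ours, not drawn from NIST's profile.

```python
# Hypothetical pick list pairing risk events with candidate mitigations;
# NIST's actual suggested actions are far more extensive and generic.
mitigation_pick_list = {
    "data leakage": ["redact PII before prompting", "restrict prompt logging"],
    "biased outputs": ["pre-release bias testing", "human review of outputs"],
}

# Hypothetical 1-5 severity scores assessed in the project context,
# compared against an organisational risk appetite threshold.
severity = {"data leakage": 4, "biased outputs": 3}
risk_appetite = 2  # highest residual severity the organisation will tolerate

for event, score in severity.items():
    if score > risk_appetite:
        actions = mitigation_pick_list.get(
            event, ["identify context-specific mitigation"])
        print(f"{event} (severity {score}): mitigate via {actions}")
    else:
        print(f"{event} (severity {score}): accept within appetite")
```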

As mentioned, NIST’s profile is still a draft, although the period for public comments has now closed and we expect the final version later this year. In the meantime, other resources are also available for organisations seeking more general guidance on how to manage AI risk, such as ISO/IEC 23894:2023 (Guidance on AI risk management), which can be adapted to any organisation and its context. The European standardisation organisation CEN is also working on a checklist for AI risk management, which has not yet been published.

Some AI governance frameworks, such as ISO/IEC 42001, suggest conducting impact assessments on AI systems in addition to risk assessments. An AI system impact assessment is a formal, documented process by which developers or deployers of AI products or services assess the impact of their AI system on individuals, groups and society. It’s worth pointing out that there are many low-risk AI systems that do not affect the physical or psychological well-being or fundamental rights of individuals or groups in society, and for which an impact assessment would be overkill. We suggest organisations step through a risk assessment first, to determine whether or not an impact assessment is warranted. But for those who need it, guidance on AI system impact assessments is at hand: ISO/IEC Draft International Standard 42005 (AI System Impact Assessment) is currently under development.
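As a rough illustration of this "risk assessment first" approach, the sketch below gates the decision on whether an impact assessment is warranted on whether any assessed risk touches individuals' rights or well-being. The flags, function name and trigger rule are illustrative assumptions of ours, not requirements of ISO/IEC 42001 or 42005.

```python
# Illustrative flags a team might record for each risk during the
# initial risk assessment (names and shape are our own, not ISO/IEC's).
assessed_risks = [
    {"event": "biased outputs", "affects_rights_or_wellbeing": True},
    {"event": "data leakage", "affects_rights_or_wellbeing": False},
]

def impact_assessment_warranted(risks) -> bool:
    """Trigger a full AI system impact assessment only when at least
    one risk touches individuals' rights or well-being (illustrative rule)."""
    return any(r["affects_rights_or_wellbeing"] for r in risks)

if impact_assessment_warranted(assessed_risks):
    print("Proceed to an AI system impact assessment (see ISO/IEC DIS 42005).")
else:
    print("Impact assessment not warranted; document the rationale.")
```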

There is so much being written about AI risk at the moment that it can be overwhelming. But if you are thinking about how to manage AI risk in an organisation, remember that you don't have to start from scratch (your company probably already has a risk management framework) and that not all AI projects or applications are high risk. At Red Marble AI we help companies put the concepts of AI governance into practice. Reach out to us if you’d like to discuss how to incorporate AI governance in your organisation's processes and frameworks.

