Board Oversight and Monitoring of Artificial Intelligence Risks

I am grateful to my colleague in the USA, an expert on the above topic, who has given me permission to extensively quote from his work dealing with this important issue of Board Oversight and monitoring of Artificial Intelligence risks. I trust that it will be of assistance to readers.

Corporate boards face a panoply of risks – and the nature of these risks is evolving quickly. Cybersecurity has risen to the top of the list of corporate risks. Add to that cybersecurity disclosure obligations, and board members face serious and escalating risks surrounding ransomware attacks, data breaches and other technical failures.

The challenge: board members are not cyber experts, nor do they particularly like to focus on technical issues. Not to be too simplistic or harsh, but board members usually ask their CISOs, “Are we okay?” and then want to move on.

Just to make everything even more complicated, now let’s ladle on a new and quickly growing risk for boards – artificial intelligence. By this point in the board meeting, eyes will have glazed over.

Directors have significant oversight obligations covering artificial intelligence.

First, if properly applied, artificial intelligence can deliver enormous benefits to businesses. It can increase the accuracy and speed of processes that would otherwise depend on human effort, and companies are spending more and more money on artificial intelligence capabilities. But companies have to be careful in this area – we have all heard about Zillow’s disastrous implementation of a home-valuation algorithm in the USA, which was riddled with problems and forced Zillow to shut down its new product offering.

Companies have to identify and assess the potential risks. We still do not know whether or how governments may impose regulatory regimes over artificial intelligence. Organisations across South Africa are focusing on artificial intelligence risks and the appropriate regulation of these technologies.

In this uncertain environment, stakeholders are quickly discovering the real and significant risks generated by artificial intelligence. Companies have to develop risk mitigation strategies before implementing artificial intelligence tools and solutions. These risks cover a wide swathe of harmful outcomes – artificial intelligence can be abused to spread disinformation rapidly; algorithms can embed racial discrimination; an artificial intelligence platform can easily (and with little effort) invade privacy; and deployment may lead to layoffs, primarily among white-collar workers. In combination, these are significant risks.

As with any risk area, companies need to develop appropriate compliance policies and procedures tailored to their specific risk profile. Corporate boards have to lead this effort and oversee and monitor the company’s artificial intelligence compliance program.

Corporate boards are familiar with the legal framework – flowing from the Companies Act and King IV – which requires that a corporate board ensure that a compliance program is operating, that the board is informed as to the artificial intelligence compliance program and its effectiveness in mitigating risks, and that the company has implemented a training program.

It is important to note that most countries, including South Africa, currently lack comprehensive AI regulations, so developments should be carefully monitored. As the regulatory framework and enforcement regarding AI evolve, it is important for organisations to be aware of the legal considerations surrounding AI.

Given recent international judgments holding corporate boards accountable, companies that are embracing artificial intelligence have to ensure that they design and implement an appropriate governance framework to meet basic requirements. Artificial intelligence presents significant risks that have to be identified and mitigated.

The key risks requiring compliance oversight include, but are not limited to, the production of inaccurate content, the exposure of trade secrets and proprietary information, the critically important issue of bias in AI, and the risk of intellectual property infringement. As a result, boards should consider the following:

  • Artificial intelligence risks should be a standing agenda item for every board meeting. A standing committee can be assigned the task, or the full board can address the issue at each meeting.
  • Companies should add a board member with technical expertise covering cybersecurity, data governance and artificial intelligence.
  • Board members should be briefed on existing and planned artificial intelligence deployments that support the company’s business and/or support functions.
  • One or more senior management executives should be designated as responsible for artificial intelligence compliance.

Corporate boards should ensure that an effective compliance framework is in place, including avenues for reporting potential violations of corporate policies and applicable regulations.

The above guide does not constitute legal advice. Organisations should consult with their internal and external advisors as to what is appropriate for their organisation. In today’s business ecosystem, in which change happens at a relentless pace, standing still is not an option.


Prepared by:

J. MICHAEL JUDIN

JUDIN COMBRINCK INC.

Mobile: +27 83 300 5000

E-mail: [email protected]



