AI; Governance, Risk and Compliance implications.

These days you are flooded with news and articles about Artificial Intelligence (AI). If you are a quality, risk, compliance, or data protection manager, or hold any other role that has to deal with this topic, you may feel overwhelmed and wonder how to address it.

To make a long story short: the genie is out of the bottle. AI compliance and security were never fully controllable, and they never will be. The more truly intelligent systems become, the less controllable they are.

This is what AI has in common with human actors and all other living beings.

But let's take a step back. Since (artificial) intelligence, by definition, develops its own ideas, makes its own decisions, and interprets its own values and ethics, it cannot be fully controlled. We are still (far) away from that point: Machine Learning as we have it today is still somewhat controllable, but future true AI will not be. Given the increasing complexity of systems and the growing volume of data, full control is an illusion we have to say goodbye to. Thinking otherwise means believing that AI will only make good decisions. Because everything is connected, that cannot be true: ethics kicks in. For example, to improve, AI would need to identify the biases in the data it learns from. Today it does not; humans must intervene to correct them. That is still possible now. Will it be in the future?
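To make the bias point concrete, here is a minimal, hypothetical sketch of the kind of check a human reviewer might run over training data before a model learns from it. The data, the group/label field names, and the 0.2 threshold are all illustrative assumptions, not part of any standard:

```python
def flag_label_imbalance(samples, group_key, label_key, threshold=0.2):
    """Flag groups whose positive-label rate deviates from the overall
    rate by more than `threshold` -- a crude proxy for the kind of bias
    in training data that still needs a human to catch and correct."""
    overall = sum(s[label_key] for s in samples) / len(samples)
    groups = {}
    for s in samples:
        groups.setdefault(s[group_key], []).append(s[label_key])
    flagged = {}
    for group, labels in groups.items():
        rate = sum(labels) / len(labels)
        if abs(rate - overall) > threshold:
            flagged[group] = round(rate, 2)
    return flagged

# Toy data: group "B" is approved far less often than group "A".
data = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 1}, {"group": "A", "approved": 0},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]
print(flag_label_imbalance(data, "group", "approved"))
# → {'A': 0.75, 'B': 0.0}
```

A check like this only surfaces the imbalance; deciding whether it reflects real-world bias, and what to do about it, remains a human judgment.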

You never have full control over living beings either (citizens, your kids, employees, customers, dogs, etc.), but rules and boundaries are set that they are expected to abide by, so that interaction between them works.

And the same approach needs to be taken for AI.

But since this article is about AI, we asked ChatGPT to write a statement itself, based on the following prompt:

“Create a text 100 words why AI Risks and compliance cannot be controlled”

AI risks and compliance cannot be controlled because AI technology is constantly evolving, making it difficult to predict and prevent all possible risks. As AI systems become more complex, they become harder to regulate, and their actions become increasingly difficult to understand or explain. Moreover, AI models are only as unbiased as their data and the algorithms used to train them, and the data used to train AI models may contain implicit biases that can perpetuate discrimination. AI risks also include cybersecurity threats and potential misuse by bad actors, which are difficult to detect and prevent. Therefore, while AI compliance and risk management frameworks can help mitigate potential harms, they cannot entirely eliminate the risks associated with AI technology.

What is the way forward?

For the challenges they face, humans expect a simple, straightforward answer: a "silver bullet". Guess what, there is none. But there is also good news: much of what you need to handle the AI challenge is already established from an organizational perspective, in the Internal Control System (ICS) and in Risk Management. Some aspects need to be changed, some improved, and some added where they are missing.

The two control categories

There are two categories of controls when it comes to AI: the controls you need to implement if your organization consumes AI, and the controls required if you develop or enhance AI.
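One way to picture the split is as a simple mapping from each category to its control set. This is a hypothetical sketch; the control names below are illustrative placeholders, not taken from any framework or from the 360inControl catalogue:

```python
# Illustrative only -- control names are assumptions, not a standard.
AI_CONTROLS = {
    "consume": [  # your organization uses third-party AI
        "Inventory of AI services in use",
        "Vendor due diligence and contract clauses",
        "Acceptable-use policy and staff training",
        "Data-protection review before feeding data to AI tools",
    ],
    "develop": [  # you build or enhance AI yourself
        "Training-data provenance and bias assessment",
        "Model documentation and explainability review",
        "Security testing (e.g. prompt injection, model abuse)",
        "Human oversight and incident-response process",
    ],
}

def controls_for(category):
    """Return the control set for a category, or an empty list."""
    return AI_CONTROLS.get(category, [])
```

An organization that both consumes and develops AI would, of course, need the union of both sets.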

Here is a high-level overview of what you should consider when defining the AI control sets:

Corporate Governance Topics impacted by AI

Legislations and Standards to be considered

This list is not meant to be complete; check what is relevant in your jurisdiction and your industry:

  1. Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206
  2. NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0) https://doi.org/10.6028/NIST.AI.100-1
  3. ISO/IEC 23053:2022 https://www.iso.org/standard/74438.html
  4. ISO/IEC 23894:2023 https://www.iso.org/obp/ui/#iso:std:iso-iec:23894:ed-1:v1:en
  5. OWASP Top 10 for Large Language Model Applications - https://owasp.org/www-project-top-10-for-large-language-model-applications/

Author: Andreas von Grebmer, CISO @ CISS Ltd, Switzerland.

Our innovative 360inControl® solution (www.360inControl.com) supports you in governing the AI aspects.

German version online: https://www.kmu-magazin.ch/wissen/digitalisierung-transformation/wie-unternehmen-ki-kontrolliert-einsetzen-koennen

