Artificial Intelligence Governance

What is AI Governance?

"AI governance" refers to a framework of principles, rules, and structures that oversee the creation, implementation, and maintenance of AI. An adequate AI governance framework can help organizations avoid the tricky issues that can arise with AI, such as bias, privacy concerns, and unexpected outcomes.

Simply put, AI governance aims to ensure that AI is doing good things for us by keeping its power in check.

Why is AI Governance important?

Without appropriate governance, organizations face legal, financial, and reputational risks from the potential misuse and biased outcomes of their algorithmic inventory. To mitigate these risks and promote trust in AI technologies generally, AI governance should be treated not as a mere obligation but as a strategic necessity. Demonstrating responsible use of AI through accountability and transparency is essential.

With new laws like the EU AI Act, organizations that proactively adopt responsible AI governance will be ahead of those that don't.

How to Prepare an AI Governance Framework?

The best way to build an AI governance framework is to adapt established external frameworks rather than designing your own from scratch.

The NIST AI Risk Management Framework and the OECD AI Principles are currently the most widely recognized frameworks available.

What is Responsible AI?

When we talk about "responsible AI," we're referring to a method of developing, assessing, and deploying AI systems that is safe, trustworthy, transparent, accountable, and ethical.

We must understand that when it comes to developing a system, it all comes down to the choices the designers make. Designers and developers are only human, and their unconscious biases and prejudices can slip into a system's design without them even realizing it. That's why it's so important to get a clear picture of why and how designers are creating AI systems in the first place. We have to keep people and their objectives in mind when making design decisions, without forgetting essential values like fairness, dependability, diversity, and transparency.

In Satya Nadella's words, "[p]erhaps the most productive debate we can have isn’t one of good versus evil: The debate should be about the values instilled in the people and institutions creating this technology."

Key Values/Principles of Responsible AI

Organizations that create or use AI systems must make sure that they adhere to industry standards and legal requirements. This means they have to give their AI solutions a thorough check-up to make sure they're in line with key responsible AI values and principles.

Here are some of the key responsible AI principles.


Explainability/Transparency

Explainability means designing AI systems so that people can understand why particular decisions are made. To achieve this, organizations must ensure that the internal workings of an AI system, and the way it uses input data to reach its conclusions, are transparent and easy to understand.

According to IBM:

"It is crucial for an organization to have a full understanding of the AI decision-making processes with model monitoring and accountability of AI and not to trust them blindly. Explainable AI can help humans understand and explain machine learning (ML) algorithms, deep learning and neural networks. As AI becomes more advanced, ML processes still need to be understood and controlled to ensure AI model results are accurate." (IBM)

According to Forbes, some companies, like Adobe, have done a great job of being transparent about how their data is used to train language models, in contrast to companies like OpenAI, which has repeatedly been sued by copyright owners.
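To make the idea of explainability concrete, here is a minimal illustrative sketch, not taken from the article: a hypothetical linear scoring model where each feature's contribution to the final score is simply weight × value, so a reviewer can see exactly why a decision came out the way it did. All names and weights are invented for illustration.

```python
# Hypothetical linear scoring model: each feature's contribution
# to the final score is weight * value, making the decision
# fully explainable to a human reviewer.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.5, "years_employed": 0.2}

def score_with_explanation(applicant: dict) -> tuple[float, dict]:
    """Return the model score and a per-feature contribution breakdown."""
    contributions = {
        feature: WEIGHTS[feature] * applicant[feature]
        for feature in WEIGHTS
    }
    return sum(contributions.values()), contributions

score, why = score_with_explanation(
    {"income": 5.0, "debt_ratio": 3.0, "years_employed": 4.0}
)
print(f"score = {score:.2f}")
# List contributions from most to least influential.
for feature, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {contribution:+.2f}")
```

Opaque models (deep neural networks, for instance) need dedicated tooling such as post-hoc explainers to produce a comparable breakdown, which is exactly the gap explainable AI techniques aim to close.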


Accountability

Accountability means clearly attributing responsibility for an AI system's actions. The process should ensure that issues are addressed, biases are mitigated, unintended consequences are avoided, and legal and ethical responsibilities are clearly defined and followed.

According to Microsoft:

"The people who design and deploy AI systems must be accountable for how their systems operate. Organizations should draw upon industry standards to develop accountability norms. These norms can ensure that AI systems aren't the final authority on any decision that affects people's lives. They can also ensure that humans maintain meaningful control over otherwise highly autonomous AI systems." (Microsoft)


Safety and Security

The design and deployment of AI systems must consider human rights and safety, as well as ensuring that AI acts in a way that is beneficial for humans. In addition, both the data and the AI system must be protected from unauthorized access, ensuring confidentiality, integrity, and availability.

According to Google:

"Safety and security entails ensuring AI systems behave as intended, regardless of how attackers try to interfere. It is essential to consider and address the safety of an AI system before it is widely relied upon in safety-critical applications. There are many challenges unique to the safety and security of AI systems. For example, it is hard to predict all scenarios ahead of time, when ML is applied to problems that are difficult for humans to solve, and especially so in the era of generative AI. It is also hard to build systems that provide both the necessary proactive restrictions for safety as well as the necessary flexibility to generate creative solutions or adapt to unusual inputs. As AI technology evolves, so will security issues, as attackers will surely find new means of attack; and new solutions will need to be developed in tandem." (Google)


Diversity/Inclusiveness

The teams responsible for AI development, training, testing, and monitoring should reflect a diversity of backgrounds and opinions. Biased inputs and outcomes are more likely when the teams that develop AI systems lack diversity.

According to Neudesic:

"Inclusive AI ensures autonomous systems produce the best outcomes for a broad range of users and stakeholders. To achieve this, implementations must incorporate a variety of people and experiences to consider the different realities of its use and impact. Doing so should inform the use case, the system's design, and its deployment in the real world. By prioritizing inclusiveness, companies maximize value for a diverse user base and, consequently, for themselves. To start, AI development teams should be diverse, representing a variety of opinions, racial backgrounds, experiences, skills, and more. Next, your training data must also be appropriately diverse as well. This may include using or buying synthetic or high-quality data sets. However, there is no replacement for understanding your users and their experiences. Gathering insights from actual users, including those who may not use the system but are affected by it, is crucial from the initial stages of discovery and design and should continue after the product's deployment." (Neudesic, an IBM company)


The above principles are the foundation for responsible AI governance. Any AI framework or regulation must take them into account.


Some Tips on Creating Responsible AI

  • Organizations should prepare labels identifying products and models as AI-based, both internally and externally.
  • Notifications should be sent to consumers when they interact with artificial intelligence or receive outputs/decisions generated by it.
  • Privacy notices should disclose how personal information is used to develop and train artificial intelligence.
  • If an organization uses personal information for automated profiling, it must obtain consent in accordance with applicable privacy regulations (e.g., the GDPR, the California Privacy Rights Act, and other omnibus U.S. state privacy laws).
  • In compliance with applicable laws, consumers should be able to access and delete their personal information used to develop and train AI models.
  • The development and training of AI models requires a great deal of data, but it is important to minimize the amount of personal data used.
  • Cyber intrusions, including the exfiltration of confidential information and the poisoning of AI models, must be mitigated throughout AI development.
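The data-minimization tip above can be sketched in code. This is an illustrative example only, assuming hypothetical field names and a made-up salt; it is not a legal standard for de-identification. The idea is to drop direct identifiers before training and keep only a salted pseudonymous key, so later deletion requests can still be honored without storing raw identifiers alongside training data.

```python
import hashlib

# Fields assumed to be direct identifiers (illustrative, not a legal standard).
DIRECT_IDENTIFIERS = {"name", "email", "phone"}
# Fields the model actually needs for training.
TRAINING_FEATURES = {"age_band", "region", "purchase_count"}

def minimize_record(record: dict, salt: str = "rotate-me") -> dict:
    """Keep only training features; replace identifiers with a salted hash."""
    minimized = {k: v for k, v in record.items() if k in TRAINING_FEATURES}
    if "email" in record:
        # A pseudonymous key lets us locate and delete a subject's
        # training data later without retaining the raw identifier.
        minimized["subject_key"] = hashlib.sha256(
            (salt + record["email"]).encode()
        ).hexdigest()[:16]
    return minimized

raw = {"name": "Jane Doe", "email": "jane@example.com", "phone": "555-0100",
       "age_band": "30-39", "region": "EU", "purchase_count": 7}
print(minimize_record(raw))
```

In practice the salt would be stored separately and rotated, and the identifier sets would come from a data-mapping exercise rather than being hard-coded.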


Looking Ahead

We will discuss the potential risks and harms to the environment associated with AI in the next issue.

Thank you for joining me on this exploration of AI and law. Stay tuned for more in-depth analyses and discussions in my upcoming newsletters. Let's navigate this exciting and challenging landscape together.

Connect with me

I welcome your thoughts and feedback on this newsletter. Connect with me on LinkedIn to continue the conversation and stay updated on the latest developments in AI and law.

Disclaimer

The views and opinions expressed in this newsletter are solely my own and do not reflect the official policy or position of my employer, Cognizant Technology Solutions. This newsletter is an independent publication and has no affiliation with Cognizant.
