Artificial Intelligence Governance
What is AI Governance?
"AI governance" refers to a framework consisting of principles, rules and structures that oversee the creation, implementation, and maintenance of AI. Having an adequate AI governance can help organizations avoid the tricky issues that can arise with AI, such as biases, privacy concerns, or unexpected outcomes.
Simply put, AI governance aims to ensure that AI is doing good things for us by keeping its power in check.
Why is AI Governance important?
Without appropriate governance, organizations are likely to face legal, financial, and reputational risks from the misuse and biased outcomes of their algorithmic inventory. To mitigate these risks and build trust in AI technologies, AI governance should be treated not as a mere obligation but as a strategic necessity. Demonstrating responsible use of AI through accountability and transparency is what earns that trust.
With new laws like the EU AI Act, organizations that proactively adopt responsible AI governance will be ahead of those that don't.
How to Prepare AI Governance Frameworks?
Rather than designing a framework from scratch, the best way to start is to adopt an established external framework, such as the NIST AI Risk Management Framework, and adapt it to your organization's needs.
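To make this concrete, here is a minimal Python sketch of an algorithmic inventory record, the kind of register a governance framework typically asks organizations to maintain. All field names, risk tiers, and example values are illustrative assumptions rather than requirements of any specific framework.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelRecord:
    """One entry in an organization's algorithmic inventory (illustrative fields)."""
    name: str           # internal model identifier
    owner: str          # accountable team or individual
    purpose: str        # intended use case
    risk_tier: str      # e.g. "minimal", "limited", "high" (EU AI Act-style tiers)
    training_data: str  # description of, or pointer to, data lineage
    last_review: date   # date of the most recent governance review
    approved: bool = False  # whether the model passed review

inventory: list[ModelRecord] = [
    ModelRecord(
        name="resume-screener-v2",
        owner="hr-analytics",
        purpose="Rank job applications for recruiter review",
        risk_tier="high",
        training_data="anonymized resume corpus, 2023",
        last_review=date(2024, 5, 1),
    ),
]

# Flag high-risk models that have not yet been approved.
for record in inventory:
    if record.risk_tier == "high" and not record.approved:
        print(f"Review needed: {record.name} (owner: {record.owner})")
```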
What is Responsible AI?
When we talk about "responsible AI," we're referring to a method of developing, assessing, and deploying AI systems that is safe, trustworthy, transparent, accountable, and ethical.
When it comes to developing a system, it all comes down to the choices the designers make. Designers and developers are only human, and their unconscious biases and prejudices can creep into a system's design without them even realizing it. That's why it's so important to get a clear picture of why and how designers are creating AI systems in the first place. Design decisions should keep people and their objectives front and center, without losing sight of essential values like fairness, reliability, diversity, and transparency.
In Satya Nadella's words, "[p]erhaps the most productive debate we can have isn’t one of good versus evil: The debate should be about the values instilled in the people and institutions creating this technology."
Key Values/Principles of Responsible AI
Organizations that create or use AI systems must make sure that they adhere to industry standards and legal requirements. This means they have to give their AI solutions a thorough check-up to make sure they're in line with key responsible AI values and principles.
Here are some of the key responsible AI principles.
Explainability/Transparency
Explainability means designing AI systems so that people can understand why a given decision was made. To achieve this, organizations must make the internal workings of an AI system, and the way it turns input data into conclusions, transparent and easy to understand.
According to IBM:
"It is crucial for an organization to have a full understanding of the AI decision-making processes with model monitoring and accountability of AI and not to trust them blindly. Explainable AI can help humans understand and explain machine learning (ML) algorithms, deep learning and neural networks. As AI becomes more advanced, ML processes still need to be understood and controlled to ensure AI model results are accurate." (IBM)
According to Forbes, some companies, such as Adobe, have done a good job of being transparent about the data used to train their models, in contrast to companies like OpenAI, which has repeatedly been sued by copyright owners.
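To illustrate what explainability can look like in practice, here is a minimal Python sketch using scikit-learn's permutation importance, one common model-agnostic technique for showing which inputs a model's decisions actually depend on. The model and data here are synthetic stand-ins, not a recommended production setup.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic classification data standing in for a real decision system.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure the
# drop in test accuracy -- a model-agnostic explanation of which inputs
# drive the model's decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: importance {result.importances_mean[i]:.3f}")
```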
Accountability
Accountability means a clear attribution of responsibility for an AI system's actions. The process should ensure that issues are addressed, biases are mitigated, unintended consequences are avoided, and legal and ethical responsibilities are clearly defined and met.
According to Microsoft:
"The people who design and deploy AI systems must be accountable for how their systems operate. Organizations should draw upon industry standards to develop accountability norms. These norms can ensure that AI systems aren't the final authority on any decision that affects people's lives. They can also ensure that humans maintain meaningful control over otherwise highly autonomous AI systems." (Microsoft)
Safety and Security
The design and deployment of AI systems must respect human rights and safety and ensure that AI acts in ways that benefit humans. In addition, both the data and the AI system itself must be protected from unauthorized access, preserving confidentiality, integrity, and availability.
According to Google:
"Safety and security entails ensuring AI systems behave as intended, regardless of how attackers try to interfere. It is essential to consider and address the safety of an AI system before it is widely relied upon in safety-critical applications. There are many challenges unique to the safety and security of AI systems. For example, it is hard to predict all scenarios ahead of time, when ML is applied to problems that are difficult for humans to solve, and especially so in the era of generative AI. It is also hard to build systems that provide both the necessary proactive restrictions for safety as well as the necessary flexibility to generate creative solutions or adapt to unusual inputs. As AI technology evolves, so will security issues, as attackers will surely find new means of attack; and new solutions will need to be developed in tandem." (Google)
Diversity/Inclusiveness
The teams responsible for AI development, training, testing, and monitoring should bring a diversity of perspectives. Biased inputs and outcomes are more likely when the teams that develop AI systems lack diversity.
According to Neudesic:
"Inclusive AI ensures autonomous systems produce the best outcomes for a broad range of users and stakeholders. To achieve this, implementations must incorporate a variety of people and experiences to consider the different realities of its use and impact. Doing so should inform the use case, the system's design, and its deployment in the real world. By prioritizing inclusiveness, companies maximize value for a diverse user base and, consequently, for themselves. To start, AI development teams should be diverse, representing a variety of opinions, racial backgrounds, experiences, skills, and more. Next, your training data must also be appropriately diverse as well. This may include using or buying synthetic or high-quality data sets. However, there is no replacement for understanding your users and their experiences. Gathering insights from actual users, including those who may not use the system but are affected by it, is crucial from the initial stages of discovery and design and should continue after the product's deployment." (Neudesic, an IBM company)
The above principles are the foundation for responsible AI governance. Any AI framework or regulation must take them into account.
Some tips on creating responsible AI
[Image with tips on creating responsible AI not reproduced here.]
Looking Ahead
We will discuss the potential risks and harms to the environment associated with AI in the next issue.
Thank you for joining me on this exploration of AI and law. Stay tuned for more in-depth analyses and discussions in my upcoming newsletters. Let's navigate this exciting and challenging landscape together.
Connect with me
I welcome your thoughts and feedback on this newsletter. Connect with me on LinkedIn to continue the conversation and stay updated on the latest developments in AI and law.
Disclaimer
The views and opinions expressed in this newsletter are solely my own and do not reflect the official policy or position of my employer, Cognizant Technology Solutions. This newsletter is an independent publication and has no affiliation with Cognizant.