AI Governance: The Key to Unlocking the Full Potential of Artificial Intelligence
Hilary (Hils) Walton
@HilsWalton | Tech Strategist at Microsoft | CISO | Board Member | Speaker | Author & Podcaster | Psychologist (non-practicing) | Passionate about Digital Culture, Metaverse and Web3
Introduction
AI is transforming the way businesses operate, innovate, and compete. But with great opportunities come great challenges.
There was a lot of excitement when OpenAI released ChatGPT on 30 November 2022. However, businesses quickly began creating policies on how to use AI for work, or whether to use it at all, due to concerns about data leakage, loss of intellectual property, and exposure of sensitive customer information. Some companies even went so far as to block access to AI websites altogether.
I’ll review the latest research on questions like: How can you ensure that your AI systems are ethical, reliable, and secure? How can you avoid the pitfalls of bias, privacy breaches, and cyber fraud? How can you align your AI strategy with your business goals and values?
In this article, I will share some of the insights and resources I have found through my research and my experience working with customers and other experts on how to manage AI risks and governance in your business.
I’ll cover the following topics: where to start when governing AI, the AI risks to manage, how to design ethically responsible and culturally considerate data systems, how Microsoft implements its own Responsible AI Standard, how Biden's AI executive order helps, and how to apply this information to your business.
Where to start when governing AI
One of the first steps to govern AI in your business is to establish a clear policy for AI usage. This policy should define the scope, purpose, and principles of your AI initiatives, as well as the roles and responsibilities of your AI stakeholders. It should also include the dos and don'ts for your employees and the business when using AI.
A good resource to help you write a policy for AI is the AI Usage Policy Checklist from Kordia, a leading provider of digital solutions in New Zealand and Australia. This checklist covers the key elements of an AI policy, such as data quality, accountability, transparency, fairness, privacy, security, and inclusiveness. You can download the checklist for free by providing your email address and agreeing to receive marketing emails from Kordia.
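To make that concrete, here is a minimal, hypothetical sketch (in Python, purely for illustration, and not based on Kordia's checklist) of how the core parts of an AI usage policy - scope, data handling rules, and roles and responsibilities - could be captured as structured data, so they can be reviewed and versioned alongside your other governance documents. The section names and wording are my own assumptions; replace them with your organisation's.

```python
from dataclasses import dataclass, field

@dataclass
class AIPolicySection:
    """One section of an AI usage policy (illustrative structure only)."""
    name: str
    purpose: str
    dos: list[str] = field(default_factory=list)
    donts: list[str] = field(default_factory=list)

# Hypothetical first-cut sections; substitute your organisation's own wording.
ai_usage_policy = [
    AIPolicySection(
        name="Scope and purpose",
        purpose="Which AI tools are covered and why the policy exists.",
        dos=["Use only approved AI tools for work tasks."],
        donts=["Do not sign up for unapproved AI services with a work account."],
    ),
    AIPolicySection(
        name="Data handling",
        purpose="Protect customer, personal, and commercially sensitive data.",
        dos=["Check the data classification before entering anything into a prompt."],
        donts=["Do not paste customer or confidential data into public AI tools."],
    ),
    AIPolicySection(
        name="Roles and responsibilities",
        purpose="Who owns the policy, approves tools, and handles incidents.",
        dos=["Report suspected data leakage to the security team immediately."],
        donts=["Do not approve new AI tools outside the defined process."],
    ),
]

# Print a quick summary of the policy skeleton for review.
for section in ai_usage_policy:
    print(f"{section.name}: {section.purpose}")
```

Even a lightweight structure like this makes it easier to see at a glance whether the policy actually covers scope, data handling, and accountability before it goes out to staff.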
What are the AI risks to manage
AI can bring many benefits to your business, but it can also pose many risks. Some of the common AI risks that you need to manage, as described by Gartner, are listed below and would be a good first cut for your company risk register (a sketch of how they might look as register entries follows the list):
1. Fabricated and inaccurate answers: AI systems may generate false or misleading information that can harm your reputation, credibility, and trust.
2. Data privacy and confidentiality: AI systems may collect, store, or share sensitive or personal data without proper consent, protection, or compliance.
3. Model and output bias: AI systems may produce unfair or discriminatory outcomes that can affect your customers, employees, or society.
4. Intellectual property and copyright risks: AI systems may infringe or violate the rights of others when using or creating content, data, or algorithms.
5. Cyber fraud risks: AI systems may be exploited or manipulated by malicious actors to commit fraud, theft, or sabotage.
6. Consumer protection risks: AI systems may cause harm or dissatisfaction to your customers by providing poor quality, unsafe, or unethical products or services.
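If it helps to see what a first cut could look like, here is a small, illustrative Python sketch (my own framing, not Gartner's format) that captures the six risks above as starter entries in a simple risk register, with placeholder fields for owner, likelihood, impact, and mitigations that you would fill in during your own assessment.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """A first-cut risk register entry (illustrative fields only)."""
    risk: str
    description: str
    owner: str = "TBC"          # assign a named, accountable owner
    likelihood: str = "TBC"     # e.g. low / medium / high
    impact: str = "TBC"         # e.g. low / medium / high
    mitigations: str = "TBC"    # controls you already have or plan to add

# The six Gartner-described risks as starter entries; scoring, owners, and
# mitigations are placeholders for your own assessment.
ai_risk_register = [
    RiskEntry("Fabricated and inaccurate answers",
              "AI output is false or misleading and harms reputation, credibility, or trust."),
    RiskEntry("Data privacy and confidentiality",
              "Sensitive or personal data is collected or shared without consent, protection, or compliance."),
    RiskEntry("Model and output bias",
              "Unfair or discriminatory outcomes affect customers, employees, or society."),
    RiskEntry("Intellectual property and copyright",
              "AI use or creation of content, data, or algorithms infringes the rights of others."),
    RiskEntry("Cyber fraud",
              "Malicious actors exploit or manipulate AI systems to commit fraud, theft, or sabotage."),
    RiskEntry("Consumer protection",
              "Poor quality, unsafe, or unethical AI-driven products or services harm customers."),
]

# Print the register so gaps (unowned, unscored risks) are easy to spot.
for entry in ai_risk_register:
    print(f"- {entry.risk} (owner: {entry.owner}, likelihood: {entry.likelihood}, impact: {entry.impact})")
```

The point is not the code itself but the habit: each risk gets an owner, a score, and named mitigations, so AI risk is tracked the same way as any other risk on your register.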
To help you identify and evaluate these risks, you can refer to the Gartner report on Six ChatGPT Risks Legal and Compliance Must Evaluate. This report provides a comprehensive overview of the legal and compliance implications of using AI systems, especially chatbots, in your business.
How to design ethically responsible and culturally considerate data systems
Another key aspect of governing AI in your business is to ensure that your data systems are ethically responsible and culturally considerate. This means that your data systems should respect the values, preferences, and expectations of your users, customers, and stakeholders, as well as the norms and practices of the communities and societies where you operate.
A useful tool to help you design ethically responsible and culturally considerate data systems is the AI Ethics Cards from IDEO, a global design and innovation company.
These cards are a set of collaborative activities for designers, developers, and researchers to explore the ethical implications of their AI projects. The cards cover four themes: data collection, data analysis, data use, and data impact. You can download the cards for free by providing your name and email address.
How Microsoft implements its own Responsible AI Standard
If you are looking for an example of how a leading technology company governs AI in its own business, you can learn from Microsoft's Responsible AI Standard.
This is a set of internal guidelines and requirements that Microsoft follows to ensure that its AI systems align with its six Responsible AI goals: accountability, transparency, fairness, reliability and safety, privacy and security, and inclusiveness.
Microsoft has released its Responsible AI Standard to the public to share its learnings, invite feedback, and contribute to the discussion about building better norms and practices around AI.
The standard is broken down into goals, which are then translated into tools and practices.
If you are looking for an AI management framework, this is a pretty good place to start.
How does Biden's AI Executive Order Help?
President Biden issued an executive order on artificial intelligence (AI) in October 2023. The order establishes new standards for AI safety and security.
The order directs federal agencies to take a broad set of actions to put those standards into practice.
How to apply this information to your business
AI systems are transforming the world in amazing ways, creating new possibilities and opportunities for society. But how can we make sure that AI is aligned with our values, ethical principles, and legal frameworks? How can we foster trust, transparency, and accountability in AI? How can we mitigate the risks and maximize the benefits of AI for everyone?
The good news is that there is a growing number of places organisations can look to for guidance on how.
If you are a business leader, developer, or user of AI, you may be wondering how you can implement the Responsible AI Standard in your own organisation. The resources above are a good place to start. One simple way to explore the ideas is to give an AI tool a prompt like the one below and discuss the output with your team:
"Write a short story about a company that develops an AI system that is designed to be transparent, explainable, and accountable. The story should describe how the company ensures that the AI system is developed and deployed in a responsible manner, and how it manages the risks associated with the system"
For those of you new to my LinkedIn content - follow me on TikTok, subscribe to my YouTube channel, follow me on Instagram, and hit me up on Twitter or Threads :)
Join the free text news service: Digital Culture Ideas & News Hub - this is my absolute favourite activity. Love it more than posting on general social media. You can also chat to me here. https://chat.whatsapp.com/LULdX7yAtPy37vLFiteXqW
Hilary's Digital Culture Ideas Show - YouTube channel: https://www.youtube.com/@HilsWalton
Podcast on Spotify (and available anywhere else you listen to your podcasts): https://open.spotify.com/show/3z4t5ZeAB4aSwr0ZeqtyaZ?si=b4a0a9a8894b43f5
Instagram: https://www.instagram.com/hilswalton/
LinkedIn: https://www.dhirubhai.net/in/hilswalton/
X (Twitter): https://twitter.com/HilsWalton
Threads: https://www.threads.net/@hilswalton