Understanding AI Ethics Issues and the Need for AI Governance in Enterprises
Winton Winton
AI, Cybersecurity and Hybridcloud Technologist | Client Engineering Leader | Build next generation digital talents
Harnessing the power of AI comes with ethical considerations. This article explores potential pitfalls and equips you with strategies to implement AI governance within your organization, using real-world case studies as examples to better understand the ethics issues related to AI.
I have helped many organizations assess the capabilities of Generative AI. Part of that conversation is the quality of the responses generated by AI. The responses you get from generative AI are powered by a foundation model, a pre-trained system designed by AI engineers to learn from vast amounts of data.
Gen AI primarily relies on its pre-trained foundation model, which means its responses are based on the data it was originally trained on, unless you use the retrieval-augmented generation (RAG) pattern to supply your own data sources outside of the foundation model. There are open questions about the data sources used to train foundation models; to date, most owners have not published them and treat them as proprietary information. The bottom line is that foundation models are trained on massive datasets.
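To make the RAG pattern concrete, here is a minimal sketch in Python. Everything in it is illustrative: a real deployment would use a vector database for retrieval and an actual foundation-model API for generation, not the keyword-overlap stand-in below.

```python
# Minimal sketch of the RAG (retrieval-augmented generation) pattern.
# All names and data here are illustrative stand-ins, not a real system.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Ground the model's answer in retrieved enterprise data."""
    joined = "\n".join(f"- {c}" for c in context)
    return (f"Answer using only the context below.\n"
            f"Context:\n{joined}\n\nQuestion: {query}")

docs = [
    "Our refund policy allows returns within 30 days.",
    "The office is closed on public holidays.",
    "Support tickets are answered within 24 hours.",
]
prompt = build_prompt("What is the refund policy?",
                      retrieve("refund policy", docs))
print(prompt)
```

The key point the sketch shows: the foundation model only sees your own data because it is injected into the prompt at query time, not because the model was trained on it.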
The vast amount of data used to train foundation models can be a double-edged sword. While it grants them immense capabilities, the lack of control over the data sources can lead to the generation of unintended consequences, unethical or biased outputs.
According to UNESCO, AI raises a range of ethical challenges, including bias, discrimination, and threats to privacy.
We can gain a deeper understanding of AI ethics issues by examining them through the lens of specific cases. The AI Incident Database (https://incidentdatabase.ai/) collects real-world AI incidents, as its name suggests, and we shall learn from the cases highlighted there.
Case Study 1 - Bias and Discrimination: UK Passport Photo Checker Shows Bias Against Dark-Skinned Women
A recent investigation revealed that an AI system used for passport applications in the UK is more likely to reject photos submitted by individuals with darker skin tones. The system, implemented to expedite the process, reportedly provided inaccurate feedback about picture quality, such as "mouth open" or "improper background," for these users. This incident raises concerns about potential bias within the AI system and the risk of perpetuating discrimination in automated processes.
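One way an auditor can surface this kind of bias is to compare rejection rates across demographic groups. The sketch below does this with invented numbers purely for illustration; it is not data from the UK investigation, and the group labels and sample are hypothetical.

```python
# Hedged sketch: measuring outcome disparity across demographic groups,
# as an auditor might for an automated photo checker.
# The sample data below is invented for illustration only.
from collections import defaultdict

def rejection_rates(records):
    """records: (group, was_rejected) pairs -> rejection rate per group."""
    totals, rejected = defaultdict(int), defaultdict(int)
    for group, was_rejected in records:
        totals[group] += 1
        rejected[group] += int(was_rejected)
    return {g: rejected[g] / totals[g] for g in totals}

# Illustrative audit sample: (group label, rejected?)
sample = [("A", True), ("A", False), ("A", False), ("A", False),
          ("B", True), ("B", True), ("B", False), ("B", False)]

rates = rejection_rates(sample)
disparity = max(rates.values()) - min(rates.values())
print(rates, f"disparity={disparity:.2f}")
```

A large gap between groups' rejection rates is a red flag that warrants investigating the training data and the model before the system stays in production.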
Case Study 2 - Discrimination and Privacy: Korean Chatbot Luda Made Offensive Remarks Towards Minority Groups (Incident 106)
A Korean interactive chatbot serving Facebook Messenger users was reported to have used derogatory and bigoted language when asked about lesbians, Black people, and people with disabilities. The service was later withdrawn from Facebook Messenger, and the developer was fined for breaching personal data protection rules. This was the first time the country penalized a company for such a misuse of data with AI.
Here's what went wrong:
Scatter Lab used 9.4 billion personal conversations from 600,000 users of their emotion-analysis apps to develop Lee Luda, without their consent. They failed to anonymize the data, which included names, phone numbers, and addresses. The chatbot itself drew on conversation snippets from millions of women in their 20s on KakaoTalk, potentially replicating biases present in that data.
Scatter Lab apologized and vowed to improve their data practices. They're also taking steps to protect children's data, as they reportedly collected information from minors without parental consent.
Deploying AI in a way that violates ethical principles can expose an organization to a number of serious risks, including financial penalties, reputational harm, and even the complete collapse of the project.
AI Ethics is also regulated
Data privacy is a foundational element of AI ethics. It is a major focus of regulations around the world, with established frameworks like the EU's GDPR and the US's patchwork of data privacy laws. Countries like Indonesia have also implemented their own specific laws, such as the recent PDP (Personal Data Protection) law.
The EU has already enacted AI-specific regulation, the EU AI Act (https://artificialintelligenceact.eu/). At the time of writing, there is no formal US law on AI ethics; the White House has issued the Blueprint for an AI Bill of Rights, and NIST is developing voluntary standards for trustworthy AI development.
The EU AI Act classifies AI systems into risk categories based on their potential for harm. High-risk systems, like facial recognition technology, face stricter regulations than minimal-risk systems like spam filters: providers of high-risk AI must implement risk management, data governance, and transparency measures to ensure their systems are ethical. The Act also sets out fines and other consequences for violations.
In light of these concerns, how can organizations ensure ethical AI?
Welcome to the journey of AI Governance. Adhering to AI ethics might seem like another compliance burden. However, the long-term benefits far outweigh the initial effort. By building trust and demonstrating responsible AI development, organizations can solidify their market position and avoid costly pitfalls down the line.
IBM highlights three implementation models of AI governance within an organization: informal, ad hoc, and formal.
Informal governance is the least intensive approach. There may be informal processes, such as ethical review boards or internal committees, but no formal structure or framework for AI governance. Governance tasks are shared among a team that works part-time on this function. It is usually adopted by organizations just starting their AI journey.
Ad hoc governance goes a step further and involves the development of specific policies and procedures for AI development and use. Because it is ad hoc, it is often developed in response to specific challenges or risks. The team still works part-time on AI governance.
Formal governance is the highest level and involves the development of a comprehensive AI governance framework. Formal frameworks typically include risk assessment, ethical review, and oversight processes. A full-time employee is assigned to run AI governance within the organization, much like a Data Protection Officer for data privacy governance.
Key success factors in implementing AI Governance within your organization:

1. Formal policy

Establish a formal AI ethics framework, documented in an internal policy, to ensure all AI projects align with ethical principles.
2. Budget commitment
How far an organization can go is a matter of budget commitment. A commitment to AI ethics starts with executive leadership. By allocating budget for skilled personnel and essential tools, organizations can build a strong foundation for responsible AI development, mitigating potential issues down the line.
3. Continuous testing and monitoring
Organizations need to start their AI ethics effort during development, incorporate ethics into the testing process, and monitor AI running in production. Manual methods are impractical for such ongoing tasks; tools and automation enable efficient and robust ethical evaluation of AI systems. Traditional test automation tools do not work for AI testing, especially Gen AI. The same tools you use to monitor AI in production can also be used for testing.
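To illustrate where such automated checks plug in, here is a hedged sketch of screening model outputs in a test or monitoring loop. Real governance tools use trained toxicity and bias classifiers; the simple blocklist stand-in below, with placeholder terms, only shows the shape of the pipeline.

```python
# Hedged sketch of automated output screening for CI tests or
# production monitoring. The blocklist is a stand-in for a real
# toxicity/bias classifier; all terms below are placeholders.

BLOCKLIST = {"slur_placeholder", "offensive_placeholder"}

def violates_policy(text: str) -> bool:
    """Flag outputs containing blocklisted terms (stand-in classifier)."""
    words = {w.strip(".,!?") for w in text.lower().split()}
    return bool(words & BLOCKLIST)

def screen_outputs(outputs: list[str]) -> list[str]:
    """Return outputs that failed the ethics check, for human review."""
    return [o for o in outputs if violates_policy(o)]

# The same check can gate test prompts in CI and feed alerts in production.
model_outputs = ["Here is a helpful answer.",
                 "This contains a slur_placeholder."]
flagged = screen_outputs(model_outputs)
print(f"{len(flagged)} of {len(model_outputs)} outputs flagged")
```

The design point is that the check is a reusable function: the same evaluation runs against a fixed prompt suite before release and against sampled live traffic afterwards, which is why testing and monitoring can share tooling.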
4. Tools
Several categories of tools are needed to support AI governance, covering model inventory and lifecycle management, risk assessment, bias monitoring, and regulatory compliance.
AIMultiple.com has published a chart of the AI governance tools landscape for your reference.
Conclusion
Ignoring AI ethics can lead to disastrous consequences, including complete project failure and damaged reputation. Build trust and safeguard your success by integrating AI governance into your AI journey from the very beginning. This is especially crucial for Generative AI (Gen AI), which directly generates human-readable outputs.
Organizations new to AI development should prioritize AI ethics from the get-go. An informal governance structure with an ethics review board is a great starting point. As your AI use evolves, consider developing a more formal framework with dedicated resources to ensure ongoing ethical development.
AI governance is a complex undertaking. To effectively navigate it, specialized tools are essential. These tools can help you manage your AI models, assess potential risks, monitor for bias, and ensure compliance with regulations.