Ethical Aspects of AI: What Senior Leaders Should Know

Authors: Caren Sang and Dr Mario Bojilov - MEngsSc, CISA, F Fin, PhD

Summary

  1. AI Learning and Risks: AI systems improve with use but can be exploited for harm, emphasising the need for managing inherent biases and uncertainties.
  2. AI Ethics and Standards: Global guidelines like UNESCO's ensure AI deployment respects human rights and addresses fairness, privacy, and transparency.
  3. Corporate Responsibility: Leadership must evaluate AI support capabilities, ensure ethical compliance, and align AI strategies with business objectives for responsible integration.


Executives ranking AI ethics as important jumped from less than 50% in 2018 to nearly 75% in 2021. - IBM


Introduction

AI systems can learn: their efficiency and accuracy improve with use. Unfortunately, anyone with ill intent can also train AI to cause harm. So, do you know if your AI is safe to launch? Is your company ready to incorporate AI into its information system infrastructure despite knowing this? Can your company tame the uncertainty and biases hidden behind the nodes and layers propagating every outcome AI generates?

Let's see how we can tame the dangers and risks lurking behind every click on an AI system.


AI Ethics Overview

AI ethics are the guiding principles that stakeholders (company owners, employees, government officials, customers, etc) use to ensure artificial intelligence technology is created and used responsibly. It means taking a safe, secure, humane, and environmentally friendly approach to AI.

UNESCO has played a significant role in shaping global standards for AI ethics. In November 2021, UNESCO produced the first-ever global standard on AI ethics, the "Recommendation on the Ethics of Artificial Intelligence". All 193 member states adopted the framework. The cornerstone of this recommendation is the "protection of human rights and dignity, emphasising transparency, fairness, and the importance of human oversight in AI systems" [1].

Some of the fundamental AI principles that we should focus on include:

  • Human, Societal, and Environmental Well-being: AI systems should benefit individuals, society, and the environment. Their objectives should be identified clearly and justified. Ideally, AI should benefit all human beings, including future generations.
  • Human-Centred Values: AI systems should respect human rights, diversity, and individual autonomy. They should be designed with a focus on people's needs and well-being.
  • Fairness: AI systems should be inclusive, accessible, and free from unfair discrimination. They should not perpetuate biases or harm specific groups.
  • Privacy Protection and Security: AI systems must protect privacy rights and data. Ensuring data security is crucial to maintaining trust.
  • Reliability and Safety: AI systems should operate reliably according to their intended purpose. Safety measures are essential to prevent unintended consequences.
  • Transparency and Explainability: People should be aware of and understand when AI impacts them significantly. Responsible disclosure ensures transparency.
  • Contestability: There should be a process to challenge AI system outcomes when they significantly impact individuals or communities.
  • Accountability: Those involved in the AI lifecycle should be identifiable and accountable for system outcomes.

As AI continues to reshape our world, we must navigate its development with strong ethical guardrails. These principles should guide us in maximising AI's benefits while minimising risks and adverse outcomes.


Is AI Ethics Necessary?

The short answer is YES. Ethics ensures we create AI tools fairly, responsibly, and transparently. AI tools should align with societal values. We should always aim to build AI tools that bring more value while minimising potential harm or risks.


Let's look at examples of what happens when we ignore AI ethics.


Gender Bias in Search Engines

Search engines can deliver biased[2] results due to their reliance on big data and user preferences. For instance, searching for "greatest leaders of all time" predominantly yields male personalities, perpetuating gender bias. Addressing this bias is essential to ensure more equitable and accurate search results.


Racial Bias in Facial Recognition

Facial recognition algorithms have exhibited racial bias[3], especially when trained on datasets that over-represent certain racial groups. For instance, an algorithm trained predominantly on images of white individuals may struggle with accurate recognition of people of colour.


Gender Bias in Hiring Algorithms

A job search platform unintentionally offered higher positions to men with lower qualifications more frequently than to women. The model's biased output perpetuated gender inequality[4].


Criminal Justice Algorithms

An algorithm designed to predict the likelihood of a convicted person re-committing a crime was racially biased[5]. The model falsely labelled detained individuals as likely to re-offend, disproportionately affecting certain racial groups.
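This kind of disparity can be quantified by comparing false-positive rates across groups: the share of people in each group who did not re-offend but were still flagged as high risk. The sketch below uses purely hypothetical records (not real recidivism data) to show the calculation.

```python
# Hypothetical audit: compare false-positive rates of a risk score across two groups.
# Each record is (group, predicted_high_risk, actually_reoffended); values are illustrative.
records = [
    ("A", True, False), ("A", True, False), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", False, False), ("B", False, False), ("B", False, True),
]

def false_positive_rate(records, group):
    """FPR = wrongly flagged / all who did not re-offend, within one group."""
    negatives = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in negatives if r[1]]
    return len(flagged) / len(negatives) if negatives else 0.0

fpr_a = false_positive_rate(records, "A")
fpr_b = false_positive_rate(records, "B")
print(f"FPR group A: {fpr_a:.2f}, group B: {fpr_b:.2f}, gap: {abs(fpr_a - fpr_b):.2f}")
```

In this toy data, group A is wrongly flagged twice as often as group B even though both groups have the same number of non-re-offenders, which is exactly the pattern auditors look for.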


Age-Related Bias in Healthcare

Healthcare algorithms sometimes prioritise younger patients over older ones[6], leading to unequal treatment and potential harm. The World Health Organization (WHO) has highlighted ageism in the use of AI for older patients in public health. Many AI tools reinforce stereotypes about older patients, for instance that they lack interest, which is not always true. Worse, older patients are often overlooked when data is collected and models are trained, resulting in biased or skewed outcomes.


Building an AI Ethics Framework

Every company intending to use AI in its operations needs an AI ethics framework to avoid legal or ethical issues. Here are a few notable frameworks to consider when building your company's AI ethics framework.

Australian AI Ethics Principles

The Australian Government has developed an AI Ethics Framework[7] to guide businesses and governments in designing, developing, and implementing AI responsibly. The principles emphasise:

  • Benefit: AI should benefit individuals, society, and the environment.
  • Human Rights: Respect for human rights, diversity, and individual autonomy.
  • Inclusivity: AI should be inclusive, accessible, and avoid unfair discrimination.
  • Privacy and Security: Uphold privacy, data protection, and data security.
  • Transparency: Operate transparently and disclose significant AI impacts.
  • Accountability: Clearly define accountability for the impact of the AI system.


Unified Framework of Five Principles for AI in Society

Luciano Floridi and Josh Cowls proposed this framework[8]. It provides a concise set of principles to guide responsible AI development and usage.

  1. Beneficence: AI systems should aim to benefit individuals and society. Their impact should be positive, promoting well-being.
  2. Non-maleficence: Avoid causing harm. AI developers must minimise negative consequences and prevent harm to users and society.
  3. Autonomy: Respect individual agency. AI should empower users to make informed choices and maintain control.
  4. Justice: Ensure fairness. AI systems should not discriminate based on race, gender, or socioeconomic status.

  5. Explicability: This principle emphasises both intelligibility and accountability:

  • Intelligibility: AI systems should be understandable. Users should know how they work.
  • Accountability: Clarify who is responsible for the AI's behaviour and outcomes.
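Intelligibility is easiest to deliver when the model's structure is itself transparent. As a minimal sketch, a linear scoring model can report exactly how much each input contributed to a decision, since the score is just a weighted sum. The weights, feature names, and applicant values below are illustrative assumptions, not a real credit or hiring model.

```python
# Minimal intelligibility sketch: a linear score is a weighted sum, so each
# feature's contribution to a decision can be reported directly to the user.
# Weights and features here are hypothetical, for illustration only.
weights = {"income": 0.4, "tenure_years": 0.35, "missed_payments": -0.6}

def score_with_explanation(applicant):
    """Return the total score and a per-feature breakdown of contributions."""
    contributions = {f: weights[f] * applicant[f] for f in weights}
    return sum(contributions.values()), contributions

applicant = {"income": 1.2, "tenure_years": 2.0, "missed_payments": 1.0}
total, parts = score_with_explanation(applicant)
for feature, value in sorted(parts.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature}: {value:+.2f}")
print(f"total score: {total:+.2f}")
```

For complex models the same idea is approximated with post-hoc explanation techniques, but the principle is identical: the user should be able to see which factors drove the outcome.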


OECD AI Principles

The OECD AI Principles[9] promote the use of AI that is innovative and trustworthy and respects human rights and democratic values. Adopted in May 2019, these principles set standards for AI that are practical and flexible enough to stand the test of time. Let me summarise them here:

  1. Inclusive Growth, Sustainable Development, and Well-being: AI should contribute to the well-being of individuals and society while promoting sustainable development.
  2. Human-centred Values and Fairness: AI systems should respect human rights, diversity, and individual autonomy. Fairness is crucial to avoid discrimination.
  3. Transparency and Explainability: AI processes should be transparent, and decisions made by AI systems should be explainable to users.
  4. Robustness, Security, and Safety: AI systems must be robust, secure, and safe to prevent unintended consequences or harm.
  5. Accountability: Clear accountability mechanisms should be in place for AI system behaviour and outcomes.

These principles guide how governments and other actors can shape a human-centric approach to trustworthy AI. They represent a common aspiration for adhering countries and foster trust in AI adoption worldwide.


Ethics Guidelines for Trustworthy Artificial Intelligence (AI)

A group of high-level AI experts developed guidelines to advance and secure the building and deployment of AI systems in the EU. Here are some of the key points from the guidelines[10].

  1. Lawful AI—AI systems should respect all applicable laws and regulations.
  2. Ethical AI—AI should adhere to moral principles and values.
  3. Robust AI—AI systems need technical robustness while considering their social environment. They should be resilient, secure, accurate, reliable, and reproducible.
  4. Privacy and Data Governance—Respect privacy and data protection. Ensure data quality, integrity, and legitimate access.
  5. Transparency—Ensure the data, systems, and AI business models are transparent. Adequately explain AI decisions to stakeholders.
  6. Non-discrimination and Fairness—Avoid unfair bias. Ensure accessibility and involve relevant stakeholders.
  7. Societal and Environmental Well-being—AI systems should benefit all, including future generations. Consider sustainability and environmental impact.


Considerations for Boards and the C-Suite

As leadership teams consider the integration of AI into their business strategies, several practical considerations emerge that are crucial for Boards and C-Suite executives to address:

  1. Infrastructure and Risk Management: Before rolling out AI, it is essential to evaluate whether your organisation's current infrastructure can support AI technologies and identify potential risks. This includes examining data privacy concerns, potential security breaches, and the risk of biased outcomes from AI decisions.
  2. Ethical and Legal Compliance: Boards and C-Suite executives must ensure that AI applications comply with ethical standards and legal requirements. This involves more than just following current laws; it requires proactive engagement with upcoming legislative trends that might impact AI usage.
  3. Strategic Alignment and Governance: Integrating AI should align with the organisation's strategic objectives and require top-level oversight. This means defining specific goals for AI, such as improving customer service, increasing operational efficiency, or gaining a competitive edge, and setting clear metrics to measure AI's effectiveness in these areas. Boards should also ensure continuous monitoring and adaptation of AI strategies to respond to evolving business needs and technology landscapes.


Conclusion

Integrating AI into business operations brings significant opportunities along with substantial challenges. As AI technologies advance, organisations must navigate their complexities by ensuring a robust infrastructure, maintaining ethical compliance, and securing strategic alignment. These considerations are essential for Boards and C-suite executives to manage risks and leverage AI's benefits effectively.

Leaders must adopt a proactive approach to AI governance by focusing on risk management, adhering to evolving legal standards, and ensuring alignment with strategic objectives. By taking these steps, they can ensure that AI not only boosts operational efficiencies but also supports their organisations' integrity and core values.


#LeadershipInAI #EthicalAI #AIForExecutives


We have developed our "Shaping Success with AI" (SS:ai) framework to help Board Directors understand, introduce, and govern AI in their organisations. SS:ai aims to help Board Directors move from User or Learner to Trailblazer and lead profoundly impactful organisations.

AI Maturity Levels for Board Directors

If you're a Board Director introducing AI into your organisation, don't hesitate to contact me for an exploratory discussion. You can reach me at Mario.Bojilov@mbsys.com.au, or you can book a 15-minute confidential session directly at https://bit.ly/mbojilov-15min or by using the QR code below.


References

  1. https://www.unesco.org/en/artificial-intelligence/recommendation-ethics
  2. https://www.unesco.org/en/artificial-intelligence/recommendation-ethics/cases
  3. https://www.ibm.com/blog/shedding-light-on-ai-bias-with-real-world-examples/
  4. https://atrium.ai/resources/ethical-ai-real-world-examples-of-bias-and-how-to-combat-it/
  5. https://atrium.ai/resources/ethical-ai-real-world-examples-of-bias-and-how-to-combat-it/
  6. https://pixelplex.io/blog/ai-bias-examples/
  7. https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-framework
  8. https://hdsr.mitpress.mit.edu/pub/l0jsh9d1/release/8
  9. https://oecd.ai/en/mcm
  10. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
