AI Villain - Bias. Navigating the Complexities of AI Bias: Global Challenges and New Zealand's Unique Perspective

This article was authored by Steve Carey and Dr Karaitiana Taiuru JP, MInstD.



The worldwide proliferation of artificial intelligence (AI) brings with it a host of ethical challenges, particularly in the realms of data sovereignty and algorithmic bias. This article explores these issues from both a global perspective and through the unique lens of New Zealand's cultural and social landscape.

The Global Challenge of AI Bias

Worldwide, AI systems, including facial recognition technologies, have been shown to exhibit biases that can perpetuate and even exacerbate societal inequalities. Studies from the United States and Europe have revealed that many AI systems perform poorly when identifying women and people of colour, raising serious concerns about their deployment in critical areas such as law enforcement, healthcare, and financial services (Buolamwini & Gebru, 2018).

These biases often stem from two primary sources: unbalanced training data that lacks diversity, and algorithmic designs that inadvertently encode societal prejudices. The issue of unbalanced training data is particularly pernicious on a global scale. AI systems learn from the data they are fed, and if this data does not adequately represent the diversity of the world's population, the resulting AI will inevitably show bias. For instance, facial recognition systems trained predominantly on images of light-skinned individuals struggle to accurately identify those with darker skin tones across various countries and cultures.

AI bias extends beyond facial recognition technology, affecting multiple sectors with significant consequences. In healthcare, for example, AI systems predominantly trained on data from Western populations often fail to accurately diagnose conditions or recommend treatments for patients from underrepresented communities. Experts emphasise the importance of using more diverse datasets to eliminate racial biases in healthcare AI (Backman, 2023). Similarly, in financial services, credit scoring algorithms based on historical data have been shown to reinforce existing economic disparities, making it more challenging for marginalised groups to access loans or financial products. A recent analysis found that lenders using such algorithms were 80% more likely to deny mortgage applications from Black applicants than from comparable white applicants, highlighting the urgent need for fairer AI models in financial decision-making (Hale, 2021).

Algorithmic design, the second major source of bias, is often more subtle but equally problematic worldwide. Even with diverse training data, the way algorithms are structured and the assumptions built into them can lead to biased outcomes. These biases may reflect the unconscious prejudices of developers from dominant cultures or may emerge from the complex interactions within the AI system itself. For example, language processing algorithms may struggle with dialects or languages that are structurally different from those they were primarily designed for, leading to misinterpretations or exclusions in applications like voice assistants or automated customer service systems.

The global impact of AI biases is profound and far-reaching. As AI systems increasingly influence decisions that affect people's lives—ranging from job applications to criminal sentencing—the potential for these biases to reinforce and exacerbate existing social inequalities becomes a pressing concern. This issue has driven a global push for more transparent AI systems, diverse development teams, and the integration of fairness constraints in algorithm design. Recent advancements in technologies, such as sovereign AI microservices, offer promising avenues to enhance the reliability and fairness of AI by enabling more localised, secure, and unbiased data processing (Daws, 2024). Such innovations underscore the urgent need for continuous improvement in AI development practices to prevent the entrenchment of systemic biases.

New Zealand's AI Bias Landscape

In New Zealand, these global challenges take on unique dimensions due to our multicultural society and the special status of Māori as tangata whenua. Our commitment to the principles of the Treaty of Waitangi/Te Tiriti o Waitangi requires us to consider how AI technologies impact Māori rights, particularly in relation to data sovereignty.

New Zealand's cultural landscape is rich and diverse, with a significant Māori population that currently comprises nearly 20% of the total population and is projected to grow to 33% of all children by 2038 (Te Puni Kōkiri, 2018). Alongside a growing Pasifika community and increasing diversity from immigration, this multicultural context means the impacts of AI bias could be particularly severe if not properly addressed. For example, facial recognition systems that struggle to accurately identify diverse ethnicities may disproportionately misidentify Māori or Pasifika individuals, potentially exacerbating existing inequalities, particularly in sensitive areas such as policing and public services.

The potential for bias extends beyond facial recognition. In healthcare, AI systems not trained on diverse New Zealand data might fail to account for the specific health needs and risk factors of Māori and Pasifika populations, potentially exacerbating existing health disparities. In education, AI-driven learning systems that don't account for cultural differences in learning styles or knowledge frameworks could disadvantage students from non-Western backgrounds.

Moreover, the use of AI in government services and decision-making processes raises important questions about fairness and representation. If these systems are not designed with New Zealand's unique cultural context in mind, they risk perpetuating or even amplifying existing social and economic disparities between different ethnic groups.

The challenge for New Zealand is not just to address these potential biases, but to do so in a way that honours the principles of partnership, participation, and protection enshrined in the Treaty of Waitangi/Te Tiriti o Waitangi. This requires going beyond simply ensuring diverse representation in datasets to actively involving Māori and other cultural groups in the development, implementation, and governance of AI systems.

Data Sovereignty: A Global Perspective

Data sovereignty has become a critical issue globally, with nations and indigenous peoples worldwide asserting their right to control data generated within their territories or about their citizens. This concept challenges the borderless nature of digital information and raises complex questions about data ownership, storage, and use in an interconnected world.

Internationally, data sovereignty discussions often focus on national security, economic interests, and citizen privacy. Countries like China, Russia, and members of the European Union have implemented various forms of data localisation laws, requiring certain types of data to be stored within their borders. These measures aim to protect citizens' data from foreign surveillance and ensure that national laws and values govern data use.

The European Union's General Data Protection Regulation (GDPR) is perhaps the most well-known example of data sovereignty legislation. It gives EU citizens significant control over their personal data and imposes strict requirements on organisations handling this data, regardless of where they are based. This has had global ramifications, with many international companies adjusting their data practices to comply with GDPR.

In contrast, countries like China have taken a more state-centric approach to data sovereignty, asserting greater government control over data flows and storage. This has led to the creation of national internet infrastructures and restrictions on international data transfers, raising concerns about digital authoritarianism and the fragmentation of the global internet.

Indigenous peoples around the world are also asserting their rights to data sovereignty. In Canada, for example, First Nations have developed principles of "OCAP" (Ownership, Control, Access, and Possession) to guide the collection and use of data about their communities. Similar movements are emerging among indigenous groups in Australia, the United States, and other countries.

These diverse approaches to data sovereignty reflect different cultural values, political systems, and historical contexts. They also present significant challenges for global tech companies and international collaborations, as navigating this complex landscape of data regulations and cultural expectations becomes increasingly difficult.

Data Sovereignty and Māori Perspectives in New Zealand

In New Zealand's rapidly evolving digital landscape, the concept of data sovereignty takes on a unique dimension due to our cultural context, particularly concerning Māori data sovereignty. This principle asserts that data from or about Māori should be subject to Māori governance, a concept deeply rooted in the Treaty of Waitangi/Te Tiriti o Waitangi. Article II of Te Tiriti grants Māori rights to their taonga (treasures), which Māori consider to include their data.

The Waitangi Tribunal's WAI 2522 Report on the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP) further clarifies this concept. It defines Māori Data Sovereignty as "Māori Data Governance: The principles, structures, accountability mechanisms, legal instruments, and policies through which Māori exercise control over Māori data" (Waitangi Tribunal, 2023). The same report defines Māori Data as "Digital or digitisable information or knowledge that is about or from Māori people, language, culture, resources, or environments" (Waitangi Tribunal, 2023).

Understanding Māori Data Sovereignty

For Māori, data sovereignty extends far beyond legal frameworks to encompass a holistic worldview that sees data as intrinsically connected to cultural identity, ancestral knowledge, and spiritual beliefs. Taiuru (2018) elaborates on this concept, explaining that in Māori cosmology, knowledge itself has a whakapapa (genealogy). This spiritual dimension of data challenges the Western notion of data as a neutral, commodifiable resource.

The Concept of Data as Taonga

This perspective transforms data from a static resource into a dynamic, spiritual entity that requires respect, protection, and careful stewardship. Just as Māori view land, water, and other natural resources as living entities with their own rights and mana (spiritual power and authority), data too is seen as having its own life force or mauri. This understanding fundamentally challenges Western concepts of data as a commodity or neutral information resource.

Implications for AI Development

This understanding of data as a living, spiritual entity raises critical questions for AI development in New Zealand:

  1. Respecting Mauri in AI Systems: How can we design AI systems that honour the life force of the data they process? This may require developing new protocols for data handling that incorporate Māori spiritual and cultural values.
  2. Protecting Sacred Knowledge: What safeguards should be in place to ensure AI doesn't misuse or misinterpret culturally sensitive information? This calls for ethical frameworks that go beyond Western concepts of privacy and consent.
  3. Equitable Benefit Sharing: How can we ensure that the benefits derived from AI systems using Māori data flow back to Māori communities, honouring their role as kaitiaki (guardians) of this knowledge?
  4. Data Lifecycle Management: How does the concept of data as a living entity affect decisions about long-term data storage, deletion, or repurposing in AI systems? Should there be specific protocols for "retiring" data that is considered to have fulfilled its purpose, similar to cultural practices for laying ancestral artifacts to rest?

Challenges and Opportunities

Integrating Māori data sovereignty principles into AI development presents both challenges and opportunities. It requires a delicate balance between protecting cultural values and promoting innovation, between local control and international cooperation.

These questions challenge conventional approaches to AI ethics and governance, suggesting the need for new frameworks that can accommodate diverse cultural worldviews and spiritual beliefs about the nature of data and knowledge. It might involve incorporating traditional Māori decision-making processes, such as hui (gatherings) and wānanga (discussions), into the development and governance of AI systems that use Māori data.

Moreover, this perspective on data sovereignty has implications beyond AI, touching on broader issues of digital infrastructure, data storage, and international data sharing agreements. It raises questions about whether New Zealand needs to develop its own data storage capabilities to ensure that Māori data remains under local control and governance.

Moving Forward

As we navigate this complex landscape, it's crucial to engage in meaningful dialogue with Māori communities, involve them in decision-making processes, and ensure that AI development in New Zealand respects and upholds the principles of the Treaty of Waitangi.

By embracing the concept of data as taonga, we have the opportunity to develop AI systems that not only leverage cutting-edge technology but also honour and protect the rich cultural heritage of Aotearoa New Zealand. This unique approach could position New Zealand as a global leader in culturally responsive and ethically robust AI development.

The challenge for New Zealand is to develop approaches to data sovereignty and AI governance that honour Māori perspectives while also navigating the global digital economy and international data flows. By incorporating traditional Māori values such as manaakitanga (mutual respect) and kaitiakitanga (guardianship) into AI governance, we can create more inclusive, trustworthy, and culturally responsive AI systems that benefit all New Zealanders while respecting the unique status of Māori as tangata whenua (people of the land).

Facial Recognition: Global Concerns and Challenges

Facial recognition technology serves as a prime example of how AI bias manifests globally. This technology, which uses AI algorithms to identify or verify a person from a digital image or video frame, has seen rapid advancement and widespread deployment in recent years. However, its rollout has been accompanied by growing concerns about accuracy, bias, and privacy.

International studies, particularly from the United States, have shown that these systems often have higher error rates for women and people with darker skin tones (Ankita et al., 2024). A landmark study by Buolamwini and Gebru (2018) found that some commercial facial recognition systems had error rates of up to 34% for dark-skinned women, compared to error rates of less than 1% for light-skinned men. This disparity in accuracy raises serious concerns about the potential for discrimination when these systems are used in high-stakes applications.
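The disparities reported in such audits can be quantified with a straightforward per-group error-rate comparison. The sketch below uses entirely synthetic predictions and group labels (not the Gender Shades data) to illustrate how a basic subgroup audit might be computed:

```python
from collections import defaultdict

def error_rates_by_group(y_true, y_pred, groups):
    """Compute the misclassification rate separately for each demographic group."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Synthetic example: a classifier that errs far more often on group "B"
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = error_rates_by_group(y_true, y_pred, groups)
print(rates)  # group B's error rate (0.5) is double group A's (0.25)
```

A real audit would use intersectional groups (e.g. skin tone crossed with gender, as in the Buolamwini and Gebru study) and report task-appropriate error types such as false match and false non-match rates rather than a single misclassification rate.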

The global response to these concerns has been varied. In the United States, cities like San Francisco, Boston, and Portland have banned the use of facial recognition by government agencies due to concerns about bias and privacy. At the federal level, there have been calls for regulation, with some lawmakers proposing bills to restrict the use of facial recognition technology.

In the European Union, there are ongoing debates about restricting the use of facial recognition in public spaces. The EU's proposed AI Act includes strict regulations on the use of "high-risk" AI systems, including many applications of facial recognition. These regulations would require extensive testing for bias and accuracy before such systems could be deployed.

China, on the other hand, has widely deployed facial recognition for surveillance purposes, raising concerns about privacy and human rights. The country has integrated facial recognition into its social credit system and uses it extensively for public security purposes. This has led to international criticism and debates about the ethical use of AI in surveillance.

In India, the government's plans to implement a nationwide facial recognition system have been met with legal challenges and public protests over privacy concerns and the potential for mass surveillance. Critics argue that the system could be used to target minority groups and suppress dissent.

These diverse global responses reflect different cultural attitudes towards privacy, security, and the role of technology in society. They also highlight the challenges of developing international standards for the ethical use of AI technologies like facial recognition.

The accuracy issues in facial recognition systems stem from several factors:

  1. Biased training data: Many facial recognition systems are trained on datasets that are not sufficiently diverse, leading to lower accuracy for underrepresented groups.
  2. Algorithm design: The algorithms used in facial recognition may be inherently biased due to choices made in their development.
  3. Lighting and image quality: Facial recognition systems often perform poorly in low-light conditions or with low-quality images, which can disproportionately affect people with darker skin tones.
  4. Facial features: Some systems struggle with accurately identifying facial features that are more common in certain ethnic groups and underrepresented communities.

These technical challenges intersect with social and ethical concerns. The use of facial recognition in law enforcement, for example, has been criticised for potentially exacerbating racial profiling and over-policing of minority communities. In border control, biased systems could lead to unfair treatment or delays for certain travellers. Even in commercial applications, such as identity verification for financial services, biased facial recognition could result in unequal access to services for some communities.

[Image: Buolamwini, J. (2019)]

Privacy is another major concern with facial recognition technology. The ability to identify individuals in public spaces without their knowledge or consent raises significant ethical questions. This is particularly problematic given the potential for this technology to be used for mass surveillance, as seen in some authoritarian regimes.

Addressing these global challenges requires a multi-faceted approach:

  1. Improving the diversity of training data to ensure facial recognition systems are accurate across all demographic groups.
  2. Developing more robust algorithms that can account for variations in skin tone, facial features, and lighting conditions.
  3. Implementing rigorous testing protocols to identify and mitigate bias before systems are deployed.
  4. Establishing clear regulatory frameworks that govern the use of facial recognition technology, particularly in high-stakes applications.
  5. Engaging in ongoing ethical debates about the appropriate use of this technology, balancing security needs with privacy rights and the potential for discrimination.

As facial recognition technology continues to evolve and spread globally, these issues will remain at the forefront of discussions about AI ethics and governance.

Facial Recognition in the New Zealand Context

In an era of rapid technological advancement, facial recognition systems are increasingly permeating various sectors of society. However, the potential for bias in these systems raises critical questions about fairness, equality, and cultural sensitivity, particularly in diverse nations like New Zealand. This section explores the implications of biased facial recognition technology in the New Zealand context, focusing on its impact on Māori and Pasifika populations.

Facial recognition technology, while promising efficiency in sectors such as law enforcement, border control, and commercial applications, faces a significant challenge: higher error rates among certain ethnic groups. In New Zealand, this disproportionately affects Māori and Pasifika populations, raising concerns about fairness and equality before the law (RNZ, 2024). The potential for biased facial recognition systems in policing is particularly alarming. Higher rates of misidentification for Māori and Pasifika individuals could lead to wrongful arrests, disproportionate surveillance, and exacerbation of existing overrepresentation in the criminal justice system. Moreover, such outcomes could further erode trust between police and minority communities.

A recent incident involving the misidentification of an innocent Māori woman by a supermarket AI system underscores these risks (RNZ, 2024). This case exemplifies how AI can reinforce existing human biases, a phenomenon supported by broader international research suggesting that humans often perceive AI systems as more accurate than their own judgment, even when the AI is flawed (Taiuru, 2024).

As facial recognition technology becomes more prevalent in border control, inaccuracies for certain ethnic groups could result in unfair treatment of Māori, Pasifika, or other non-European travellers. This could lead to delays and inconveniences for specific groups, potentially damaging New Zealand's reputation as a welcoming destination. Even in the private sector, biased facial recognition could lead to unequal access to services for some communities, reinforcing existing economic disparities and potentially creating new forms of digital exclusion.

The implementation of facial recognition technology in New Zealand must also grapple with unique cultural considerations. In Māori culture, the face is considered tapu (sacred), with specific protocols governing the use and representation of facial images, especially images of moko (Taiuru, 2020). The collection, storage, and analysis of facial data by AI systems may conflict with these cultural values. Furthermore, the use of overseas-developed facial recognition systems and cloud-based processing raises concerns about Māori data sovereignty. The storage and analysis of Māori facial data outside New Zealand could conflict with principles of Māori data governance (Taiuru, 2020).

To navigate these complex issues, New Zealand must adopt a multi-stakeholder approach. This involves engaging with Māori and Pasifika communities to understand their perspectives on facial recognition technology and developing culturally sensitive guidelines and regulations for its use. There is also a need to invest in research to improve system accuracy for New Zealand's diverse population and implement rigorous testing and auditing processes to identify and mitigate bias.

Consideration should be given to developing local technologies designed for New Zealand's unique demographic and cultural context. Additionally, exploring alternative technologies that achieve similar goals without risking bias or cultural insensitivity could provide innovative solutions to these challenges.

As New Zealand integrates facial recognition and other AI technologies into various sectors, it has the opportunity to lead in culturally responsive and ethical AI deployment. By addressing bias, respecting Māori cultural values, and upholding data sovereignty principles, New Zealand can set a global standard for the responsible use of facial recognition technology in diverse societies.

The implications of biased facial recognition technology in New Zealand are far-reaching, intersecting with the country's unique cultural and social context. As the nation grapples with these challenges, it has the potential to develop approaches that not only mitigate bias but also respect and incorporate Māori cultural values and data sovereignty principles. This could position New Zealand as a leader in the ethical and culturally sensitive deployment of AI technologies, setting an example for other diverse societies around the world.

Bias in Image Generation and Large Language Models

As AI technologies continue to advance, two areas that have garnered significant attention are image generation and Large Language Models (LLMs). While these technologies offer immense potential, they also present new challenges in terms of bias and fairness.

Image generation AI, such as DALL-E, Midjourney, and Stable Diffusion, has captured public imagination with its ability to create realistic and creative images from text descriptions. However, these systems have also been found to perpetuate and sometimes amplify societal biases. One of the primary issues is the representation of different demographic groups. Studies have shown that when prompted with neutral terms like "CEO" or "doctor," these systems tend to generate images of white males more frequently than other demographics. This bias reflects and potentially reinforces societal stereotypes about professional roles. Moreover, the generated images often perpetuate beauty standards and gender stereotypes. For example, prompts involving "beauty" or "attractive person" typically result in images of young, thin, white women, failing to represent the diverse beauty standards across different cultures. However, as Google’s Gemini episode demonstrated, attempts to reverse these stereotypes also require careful design.

[Image: Stable Diffusion demonstrating algorithmic bias]

[Image: Google’s Gemini (February 2024)]

Efforts to address these biases in image generation AI include diversifying training datasets to include a wider range of images representing different cultures, ethnicities, and body types. Companies and researchers are also implementing ethical guidelines and content policies to prevent the generation of harmful or biased content. Additionally, work is being done on developing more sophisticated prompting techniques to encourage diverse outputs and creating tools for users to customise the demographic characteristics in generated images.

Large Language Models (LLMs) like ChatGPT, BERT, and their successors have revolutionised natural language processing tasks. However, they've also been found to exhibit various forms of bias. Text generated by LLMs can reflect gender, racial, and cultural biases present in their training data. For instance, they might associate certain professions with specific genders or ethnicities or perpetuate stereotypes in the narratives they generate. LLMs can also exhibit language bias, performing better in English and other widely spoken languages while struggling with less common languages or dialects. This can lead to unequal access to AI-powered language services across different linguistic communities. Another concern is the potential for these models to generate false or biased information confidently, which can be particularly problematic when they're used in applications like content creation or information retrieval.

To mitigate bias in LLMs, researchers and developers are employing various strategies. These include careful curation of training data to ensure diverse and balanced representation, and the development of debiasing techniques such as counterfactual data augmentation or fine-tuning on carefully constructed datasets. There's also a push for the implementation of ethical AI principles in the development and deployment of LLMs, as well as the creation of benchmarks and evaluation metrics specifically designed to measure different types of bias in language models. Increasing transparency about the limitations and potential biases of these models to end-users is also becoming a priority in the field.
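One of the debiasing strategies mentioned above, counterfactual data augmentation, can be illustrated with a minimal sketch: each training sentence is paired with a copy in which gendered terms are swapped, so a model trained on the augmented corpus sees both variants equally often. The word pairs below are a small illustrative list, not a production lexicon, and the approach shown here ignores harder cases such as pronoun ambiguity:

```python
import re

# Illustrative gendered word pairs; a real system would use a curated lexicon.
SWAPS = {"he": "she", "she": "he", "his": "her", "her": "his",
         "man": "woman", "woman": "man", "father": "mother", "mother": "father"}

def counterfactual(sentence: str) -> str:
    """Return the sentence with each gendered term replaced by its counterpart."""
    def swap(match):
        word = match.group(0)
        replacement = SWAPS[word.lower()]
        # Preserve the capitalisation of the original token.
        return replacement.capitalize() if word[0].isupper() else replacement
    pattern = r"\b(" + "|".join(SWAPS) + r")\b"
    return re.sub(pattern, swap, sentence, flags=re.IGNORECASE)

def augment(corpus):
    """Pair every sentence with its gender-swapped counterfactual."""
    return [s for original in corpus for s in (original, counterfactual(original))]

corpus = ["The doctor said he would call her mother."]
print(augment(corpus))
# The swapped variant: "The doctor said she would call his father."
```

Note that the naive swap maps "her" to "his" regardless of grammatical role, which is one reason practical counterfactual augmentation pipelines involve more linguistic machinery than a word-level substitution.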

In New Zealand, the challenges of bias in image generation and LLMs take on additional dimensions due to our unique cultural landscape. For image generation, there's a concern about the representation of Māori and Pasifika individuals and cultural elements. If these AI systems are not trained on diverse datasets that include adequate representation of New Zealand's population, they may fail to generate accurate or respectful images related to these cultures. This could lead to misrepresentation or cultural appropriation if not carefully managed. There's also the question of how these technologies interact with traditional Māori concepts of imagery and representation. For instance, the generation of images depicting taonga (treasured objects) or incorporating Māori designs raises questions about cultural sensitivity and intellectual property rights, including whether marks that use a Māori element could be registered as trade marks under Section 17 of the Trade Marks Act 2002.

In the realm of LLMs, a significant challenge for New Zealand is the incorporation of Te Reo Māori and New Zealand English. Most large language models are trained predominantly on American English data, which may not capture the unique linguistic features of New Zealand English or the bicultural nature of New Zealand's official languages. The potential biases in LLMs could also impact areas such as automated customer service, content moderation, and educational tools in New Zealand. If these models don't adequately understand or represent New Zealand's cultural context, they could perpetuate biases or misunderstandings in these critical areas.

Addressing these challenges in the New Zealand context could involve developing New Zealand-specific datasets for fine-tuning image generation models, ensuring representation of Māori and Pasifika individuals and cultural elements. Creating guidelines for the ethical use of image generation AI in contexts involving Māori and Pacific cultural elements, developed in consultation with these communities, is also crucial. There's a need for investment in the development of language models that incorporate Te Reo Māori and New Zealand English, possibly through partnerships between local researchers and global AI companies. Establishing clear policies on the use of LLMs in government and educational contexts, ensuring they meet standards of cultural responsiveness and fairness, is another important step. Additionally, conducting research on how these technologies impact perceptions of New Zealand culture and identity, both domestically and internationally, can provide valuable insights for future development and policy-making.

By proactively addressing these issues, New Zealand has the opportunity to ensure that as these powerful AI technologies become more prevalent, they respect and accurately represent the country's diverse cultural landscape. This approach not only mitigates potential harms but also positions New Zealand as a leader in culturally responsive AI development, setting a valuable example for other multicultural societies grappling with similar challenges.

Addressing AI Bias: Global Approaches

Tackling AI bias is a global challenge that requires a comprehensive strategy combining technical solutions, ethical guidelines, and robust governance frameworks. As the awareness of AI bias has grown, so too have the efforts to address it. Here are some key approaches being explored and implemented internationally:

  1. Diverse and Representative Data: Ensuring AI training datasets reflect the full diversity of global populations is a critical first step in addressing bias. This involves not just collecting more diverse data, but also carefully curating datasets to ensure balanced representation. Companies and researchers are developing techniques to audit datasets for diversity and creating synthetic data to fill gaps in representation. For example, IBM has released the Diversity in Faces dataset, which includes over a million annotated facial images with a wide range of ethnicities, ages, and genders.
  2. Explainable AI (XAI): Implementing techniques that make AI decision-making processes more transparent and interpretable is crucial for identifying and addressing bias. XAI aims to create AI systems whose actions can be easily understood by humans. This includes developing models that can provide clear explanations for their decisions and creating tools to visualise the decision-making process. For instance, Google's What-If Tool allows developers to visualise and investigate machine learning models with minimal coding.
  3. Ethical AI Frameworks: Developing guidelines that embed ethical principles into AI development and deployment processes is becoming increasingly common. Organisations like the Institute of Electrical and Electronics Engineers (IEEE) have developed global standards for ethically aligned design of AI systems. The European Union has proposed comprehensive AI regulations that include requirements for high-risk AI systems to be tested for bias before deployment (the EU AI Act). These frameworks often emphasise principles such as fairness, accountability, transparency, and privacy.
  4. Inclusive Development: Actively involving diverse communities in AI governance and development is crucial to ensure multiple perspectives are respected. This goes beyond just hiring diverse teams to include engaging with communities that might be affected by AI systems. For example, the Partnership on AI, a global initiative, brings together academics, civil society organisations, companies, and other stakeholders to develop best practices for AI technologies.
  5. Algorithmic Fairness: Researchers and developers are creating new algorithms and techniques specifically designed to promote fairness in AI systems. This includes methods for debiasing existing models, developing fair classification algorithms, and creating tools to measure and mitigate bias in AI systems. For instance, the AI Fairness 360 toolkit, an open-source library developed by IBM, provides a comprehensive set of metrics for datasets and models to check for unwanted bias.
  6. Regulatory Measures: Governments and international bodies are exploring legal and policy frameworks to address AI bias and protect data sovereignty. The EU's proposed AI Act, for example, includes strict requirements for high-risk AI systems, including those used in employment, education, and law enforcement. In the United States, several states have passed laws regulating the use of AI in hiring decisions.
  7. Education and Awareness: Increasing understanding of AI bias among developers, policymakers, and the general public is crucial. Universities are incorporating ethics courses into computer science curricula, and organisations are developing training programs to help professionals understand and address AI bias.
  8. Bias Bounties: Similar to bug bounties in cybersecurity, some organisations are implementing "bias bounties" to incentivise the discovery and reporting of biases in AI systems. For example, X has run a bias bounty program to identify potential biases in their image cropping algorithm.
  9. Intersectional Approach: Recognising that bias often occurs at the intersection of multiple characteristics (e.g., race and gender), researchers are developing more nuanced approaches to understanding and addressing AI bias that consider these intersections.
  10. Ongoing Monitoring and Auditing: Implementing systems for continuous monitoring and auditing of AI systems in deployment is becoming recognised as crucial. This involves regular testing of systems for bias and performance across different demographic groups.
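
Open-source libraries such as AI Fairness 360 package metrics of this kind. Purely as a hedged illustration (this is not that toolkit's actual API), the demographic parity difference that underlies many fairness audits can be computed in a few lines:

```python
def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest positive-prediction
    rates across groups; 0.0 means every group is selected
    (e.g., approved, shortlisted) at the same rate."""
    tallies = {}
    for pred, group in zip(predictions, groups):
        n, pos = tallies.get(group, (0, 0))
        tallies[group] = (n + 1, pos + pred)
    rates = [pos / n for n, pos in tallies.values()]
    return max(rates) - min(rates)

# Toy data: group "x" is approved 75% of the time, group "y" 0%.
preds  = [1, 1, 0, 1, 0, 0, 0, 0]
groups = ["x", "x", "x", "x", "y", "y", "y", "y"]
gap = demographic_parity_difference(preds, groups)
# gap is 0.75, a large disparity that an audit would flag.
```

Continuous monitoring (point 10 above) amounts to tracking metrics like this over time as a deployed system processes real decisions.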

These global approaches provide a foundation for addressing AI bias, but their implementation often needs to be tailored to local contexts, considering specific cultural, legal, and social factors.

New Zealand's Path Forward

In New Zealand, addressing AI bias requires adapting these global approaches to our unique cultural and social context, particularly considering the principles of the Treaty of Waitangi/Te Tiriti o Waitangi and the concept of Māori data sovereignty. Here's how New Zealand is and could be approaching the challenge of AI bias:

  1. Culturally Inclusive Data: Ensuring AI training datasets reflect the full diversity of New Zealand's population is crucial, with particular attention to Māori and Pasifika representation. This goes beyond simple demographic inclusion to consider cultural contexts and knowledge systems. For example, the Māori Data Sovereignty Network, Te Mana Raraunga, advocates for data collection and use practices that empower Māori communities and respect Māori knowledge.
  2. Treaty-based AI Governance: Developing governance frameworks for AI that honour the principles of the Treaty of Waitangi/Te Tiriti o Waitangi is a unique challenge and opportunity for New Zealand. This could involve creating AI ethics boards that include Māori elders and cultural experts, ensuring that AI development aligns with Treaty principles and respects indigenous knowledge systems. The AI Forum of New Zealand has begun work on incorporating Treaty principles into AI governance frameworks.
  3. Culturally Responsive XAI: Developing XAI techniques that can explain AI decisions in culturally appropriate ways, respecting Māori concepts of knowledge and decision-making, is an important area for research and development. This might involve creating explanation methods that align with Māori oral traditions or incorporate concepts from Te Ao Māori (the Māori world view).
  4. Māori-led AI Initiatives: Actively involving Māori in AI governance and development is crucial to ensure their perspectives and rights are respected. This goes beyond consultation to include Māori leadership in AI research, development, and policy-making. Initiatives like the Māori AI Research Group at the University of Waikato are leading the way in this area, focusing on developing AI technologies that are beneficial to Māori communities.
  5. Addressing Unique Demographic Challenges: New Zealand's AI systems need to be particularly attuned to the country's unique demographic makeup. This includes developing facial recognition systems that are accurate across all ethnic groups and underrepresented communities in New Zealand, and creating language processing systems that can handle Te Reo Māori and New Zealand English, including various accents and dialects.
  6. Data Sovereignty Compliant AI: Developing AI systems that respect the principles of Māori data sovereignty is a key challenge. This might involve creating data storage and processing systems that keep Māori data within New Zealand, or developing AI models that can be trained and operated on decentralised data to allow Māori control over their information.
  7. Cultural Impact Assessments: Implementing cultural impact assessments for AI systems, similar to environmental impact assessments, could help identify potential cultural biases or negative impacts before deployment. These assessments would need to be developed in partnership with Māori and other cultural groups.
  8. Ethical AI Education: Developing education programs that emphasise ethical AI development with a specific focus on New Zealand's cultural context is crucial. This could involve incorporating modules on the Treaty of Waitangi/Te Tiriti o Waitangi and Māori data sovereignty into computer science and data science curricula.
  9. Cross-sector Collaboration: Fostering collaboration between technologists, ethicists, cultural experts, and policymakers is essential to developing comprehensive solutions to AI bias in New Zealand. Initiatives like the AI Forum of New Zealand are working to bring together diverse stakeholders to address these challenges.
  10. Legislative Framework: Exploring legal and policy frameworks that address AI bias and protect data sovereignty, drawing on international best practices while tailoring them to New Zealand's unique context. This might involve updating the Privacy Act or creating new AI-specific legislation that incorporates Treaty principles and Māori data sovereignty.
  11. Bias Detection Tools for NZ Context: Developing bias detection and mitigation tools specifically designed for the New Zealand context, considering our unique demographic makeup and cultural factors.
  12. Community Engagement: Implementing programs to engage with diverse New Zealand communities to understand their concerns and perspectives on AI, and to involve them in the development and testing of AI systems.
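
As a hedged sketch of the disaggregated evaluation such bias detection tools would perform (the group labels and data below are arbitrary placeholders, not real demographic data):

```python
def accuracy_by_group(y_true, y_pred, groups):
    """Classifier accuracy disaggregated by demographic group,
    the basic check behind per-group bias audits."""
    stats = {}
    for truth, pred, group in zip(y_true, y_pred, groups):
        correct, total = stats.get(group, (0, 0))
        stats[group] = (correct + (truth == pred), total + 1)
    return {g: correct / total for g, (correct, total) in stats.items()}

# Toy labels; a real audit would use held-out evaluation data
# with group labels collected under appropriate consent.
y_true = [1, 0, 1, 1, 1, 1]
y_pred = [1, 0, 0, 1, 0, 0]
groups = ["g1", "g1", "g1", "g2", "g2", "g2"]
report = accuracy_by_group(y_true, y_pred, groups)
# g1 scores 2/3 and g2 scores 1/3: a performance gap worth investigating.
```

A New Zealand-specific tool would layer onto this the country's actual demographic categories and culturally appropriate data governance, but the underlying per-group comparison is the same.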

By taking these steps, New Zealand has the opportunity to develop AI systems that not only avoid bias but actively promote equity and cultural responsiveness. This approach could position New Zealand as a world leader in ethical and culturally grounded AI development.

Conclusion

As AI continues to shape our world, addressing bias and ensuring data sovereignty are critical challenges that require global cooperation and local action. New Zealand, with its unique cultural landscape and commitment to the principles of the Treaty of Waitangi/Te Tiriti o Waitangi, has the opportunity to lead by example in creating ethical, inclusive, and culturally responsive AI systems.

The path forward requires a multifaceted approach that combines technical innovation, policy development, and deep community engagement. By fostering collaboration between technologists, ethicists, cultural experts, and policymakers, investing in research at the intersection of AI and cultural responsiveness, and developing education programs that emphasise ethical AI development, New Zealand can contribute valuable insights to the global discourse on AI ethics while ensuring that our AI systems benefit all New Zealanders and respect our diverse cultural heritage.

As we move forward, it's important to recognise that addressing AI bias and ensuring data sovereignty is an ongoing process. The rapid pace of AI development means that new challenges will continue to emerge, requiring constant vigilance and adaptation. By maintaining a commitment to fairness, inclusivity, and cultural respect, we can ensure that AI becomes a tool for positive change in New Zealand and beyond, enhancing our societies while staying true to our values.

The journey towards ethical and unbiased AI is complex and challenging, but it is also an opportunity for New Zealand to showcase its innovative spirit and commitment to social justice on the global stage. As we navigate this journey, we have the chance to create AI systems that not only avoid harm but actively contribute to a more equitable and culturally rich society.

References

Ankita, B., ChienChen, H., Lauren, P., & Lydia, O. (2024, May 3). Impact of Explainable AI on Reduction of Algorithm Bias in Facial Recognition Technologies. In 2024 Systems and Information Engineering Design Symposium (SIEDS) (pp. 85-89). IEEE. DOI: 10.1109/SIEDS61124.2024.10534745

Backman, I. (2023, December 21). Eliminating racial bias in health care AI: Expert panel offers guidelines. Yale School of Medicine. https://medicine.yale.edu/news-article/eliminating-racial-bias-in-health-care-ai-expert-panel-offers-guidelines/

Buolamwini, J. (2019, January 26). Response: Racial and Gender bias in Amazon Rekognition—Commercial AI System for Analyzing Faces. Medium. https://medium.com/@Joy.Buolamwini/response-racial-and-gender-bias-in-amazon-rekognition-commercial-ai-system-for-analyzing-faces-a289222eeced

Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research, 81, 1-15.

Daws, R. (2024, August 27). Sovereign AI gets boost from new NVIDIA microservices. AI News. https://www.artificialintelligence-news.com/news/sovereign-ai-gets-boost-new-nvidia-microservices/


Ferrara, E. (2023). Fairness and bias in artificial intelligence: A brief survey of sources, impacts, and mitigation strategies. Sci, 6(1).

Hale, K. (2021, September 3). AI bias caused 80% of black mortgage applicants to be denied. Forbes. https://www.forbes.com/sites/korihale/2021/09/02/ai-bias-caused-80-of-black-mortgage-applicants-to-be-denied/

Leavy, S., O'Sullivan, B., & Siapera, E. (2020). Data, Power and Bias in Artificial Intelligence. arXiv:2008.07341. https://arxiv.org/abs/2008.07341

Ntoutsi, E., Fafalios, P., Gadiraju, U., Iosifidis, V., Nejdl, W., Vidal, M. E., Ruggieri, S., Turini, F., Papadopoulos, S., Krasanakis, E., Kompatsiaris, I., Kinder-Kurlanda, K., Wagner, C., Karimi, F., Fernandez, M., Alani, H., Berendt, B., Kruegel, T., Heinze, C., Broelemann, K., Kasneci, G., Tiropanis, T., & Staab, S. (2020). Bias in data-driven artificial intelligence systems—An introductory survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 10(3), e1356.

RNZ. (2024, April 17). Māori woman mistaken as thief by supermarket AI 'not surprising,' experts say. https://www.rnz.co.nz/news/te-manu-korihi/514523/maori-woman-mistaken-as-thief-by-supermarket-ai-not-surprising-experts-say

Taiuru, K. (2018). Data is a Taonga: A customary Māori perspective. https://www.researchgate.net/profile/Karaitiana-Taiuru/publication/329089721_Data_is_a_Taonga_A_customary_Maori_perspective/links/5bf4e3f792851c6b27cebbb7/Data-is-a-Taonga-A-customary-Maori-perspective.pdf

Taiuru, K. (2020, December 7). Māori cultural consideration with facial recognition technology in New Zealand. https://taiuru.co.nz/maori-cultural-considerations-with-facial-recognition-technology-in-new-zealand/

Taiuru, K. (2024, February 10). Facial recognition and artificial intelligence profiling. https://taiuru.co.nz/facial-recognition-and-artificial-intelligence-profiling/

Te Puni Kōkiri. (2018). Future demographic trends for Māori – Part one: Population size, growth and age structure. https://thehub.sia.govt.nz/resources/future-demographic-trends-for-maori-part-one-population-size-growth-and-age-structure/

Waitangi Tribunal. (2023). WAI 2522 Report on the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP). https://waitangitribunal.govt.nz/publications-and-resources/waitangi-tribunal-reports/

Comments

Thank you for this piece. As a grossly underrepresented member of our global community and a strong advocate of the digital transformation of local health care systems to serve those who are globally underserved, I am very sensitive to this issue. It is major and now is the time to course correct. Diversify, include, co-create, are the kinds of actions we need. “The global impact of AI biases is profound and far-reaching. As AI systems increasingly influence decisions that affect people's lives—ranging from job applications to criminal sentencing—the potential for these biases to reinforce and exacerbate existing social inequalities becomes a pressing concern.” I will be quoting you often.

Godwin Josh

Co-Founder of Altrosyn and Director at CDTECH | Inventor | Manufacturer

2 months ago

The exploration of culturally responsive AI in the context of Te Tiriti o Waitangi is incredibly timely. You mentioned the need for AI systems to actively promote equity and cultural responsiveness, a sentiment echoed by many as we grapple with algorithmic bias in various sectors. Historically, marginalized communities have often been disproportionately affected by biased technologies, reinforcing existing inequalities. Can we envision a future where AI not only avoids perpetuating these biases but actively works to dismantle them? Given the unique challenges posed by data sovereignty in Aotearoa, how might we ensure that indigenous knowledge systems are integrated into AI development in a way that honors their inherent value and autonomy?
