Democratising the Digital Economy
Democracy Works Foundation
Providing tools to build resilient democracies in Southern Africa.
Policy Brief 43
By William Gumede
Introduction
Digital technology, particularly Artificial Intelligence (AI), has become an integral part of our lives. Yet it can reinforce racism, undermine human rights and compromise personal safety. It therefore needs greater regulatory oversight, more human supervision and greater accountability from technology companies to reduce its negative impact on people, their safety and their rights.
AI, a pillar of the data economy, is a package of technologies that brings together data, algorithms and computing power, allowing machines to perform decision-making, object recognition and problem-solving. It is predicted that the AI market will grow to US$190 billion by 2025 (Butterfield, Toplic, Anthony and Reid 2022).
AI can protect citizens' rights, security and access to public services. AI is not only a tool to tackle climate change and disease and to strengthen security; it also has the potential to help countries achieve their developmental goals. According to the World Economic Forum, AI could help deliver 134 of the 169 targets underpinning the UN Sustainable Development Goals (Butterfield, Toplic, Anthony and Reid 2022).
Algorithmic decision-making is often used to safeguard the security of citizens, whether at customs, in police stop-and-search campaigns or to stop terror groups. It is also used by financial institutions to decide who should get a home or business loan and what interest should be paid on it, and by companies to shortlist candidates for a job.
However, asymmetric algorithmic decision-making by AI could be very harmful to citizens. It is increasingly clear that certain algorithms behind AI, used to predict human behaviour, can display racial, gender and xenophobic bias (Achiume 2020; Bajorek 2019; Barczyk 2020; Boffey 2020; Buranyi 2017; Chander 2020; Epps-Darling 2020; Hardesty 2018; Najibi 2020). AI decision-making based on biased data could discriminate against individuals on the basis of colour, gender, country of origin or income.
There have been safety and liability concerns with AI systems. Self-driving cars operated by AI systems have been involved in accidents just like human-driven vehicles. AI system errors in medicine could potentially result in many more deaths or injuries than an individual human error. AI could also be deliberately used for malicious purposes by governments – to undermine democratic rights, criminalise citizens and marginalise perceived outsiders (Dalton 2020; New York Times 2019; Wakefield 2021).
AI may reinforce racism
Several studies in recent years have warned that AI may increase racism and undermine human rights, dignity and personal privacy.
A 2018 study called Gender Shades, co-authored by Joy Buolamwini and Timnit Gebru, argued that AI facial recognition technology may promote racism. In their paper, Buolamwini and Gebru showed that the error rates of AI facial recognition technology for identifying darker-skinned people were much higher than the error rates for identifying lighter-skinned people, because the datasets used to train facial recognition algorithms were overwhelmingly white.
In Buolamwini and Gebru's study, the researchers discovered that facial recognition services from Microsoft, IBM and Face++ may discriminate based on gender and race. They used a dataset of 1,270 photos of parliamentarians from three African and three Nordic countries. The study found that all three platforms were most accurate on white male faces and had the highest error rates on darker-skinned faces, particularly darker-skinned women.
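The core of such an audit is simple to express in code: compute the classifier's error rate separately for each demographic subgroup instead of reporting a single overall figure. The Python sketch below illustrates the idea with invented prediction records and subgroup labels; it is not the Gender Shades code or data, only a minimal illustration of disaggregated evaluation.

```python
# Minimal sketch of a disaggregated accuracy audit (illustrative records only,
# not the Gender Shades dataset or code).
from collections import defaultdict

# Each record: (subgroup label, true gender, gender predicted by the classifier)
predictions = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned female", "female", "female"),
    ("darker-skinned male", "male", "male"),
    ("darker-skinned female", "female", "male"),   # misclassification
    ("darker-skinned female", "female", "female"),
    # ... a real audit would use thousands of labelled images per subgroup
]

totals = defaultdict(int)
errors = defaultdict(int)

for group, truth, predicted in predictions:
    totals[group] += 1
    if predicted != truth:
        errors[group] += 1

# Reporting the error rate per subgroup, rather than one aggregate number,
# is what exposes disparities hidden by overall accuracy.
for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.1%} ({errors[group]}/{totals[group]})")
```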
Gebru went on to become a Staff Research Scientist and Co-Lead of the Ethical Artificial Intelligence team at Google. In 2020, she was dismissed by Google, allegedly over another paper she co-wrote with five other co-authors, titled "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?", which warned about the inherent biases in AI language models, cautioned about the environmental costs of creating large AI language models and questioned whether the big tech companies were doing enough to reduce the potential risks (Bender, Gebru, McMillan-Major and Shmitchell 2021).
Gebru, who is an expert on algorithmic bias, was forced to resign, according to her, after she refused the company's request to retract the paper or to remove her name and those of the other co-authors who worked for Google (Lyons 2020; Metz 2020; Schiffer 2020). She felt she was being censored. The then head of Google AI, Jeff Dean, said the paper "didn't meet our bar for publication" (Lyons 2020; Metz 2020; Schiffer 2020). Google's AI team had created a giant AI language model called BERT in 2018, which it incorporated into its search engine.
A 2015 study led by Anupam Datta at Carnegie Mellon University found that the algorithm behind Google's online advertising system portrayed men as more suited to executive jobs than women. In the Datta study, men were more likely than women to be shown online ads for top executive jobs, indicating gender discrimination in targeted online ads (Datta, Tschantz, and Datta 2015). The study was based on experiments with simulated user profiles, which analysed targeted advertisements managed by Google's DoubleClick ad network on third-party websites.
AI may erase people of colour from virtual existence
In a 2020 study, University of Cambridge researchers Kanta Dihal and Stephen Cave warned that AI algorithms created by "racially homogenous" technologists might build machines with built-in racial and gender biases. Cave is executive director of the Cambridge Leverhulme Centre for the Future of Intelligence (CFI) and Dihal leads the CFI's Decolonising AI initiative. Dihal and Cave argued that many AI systems are racialised as white – which will have dangerous consequences for users who are not white.
The Cambridge University researchers rightly said that when AI systems are racialised, this "perpetuates 'real world' racial biases" in the way AI works and might be "erasing people of colour" from virtual existence (Cave and Dihal 2020). Dihal and Cave called on technology companies to diversify the demographic of software developers, warning that otherwise racial and gender biases will increase.
An investigation in 2016 by the US organisation ProPublica found that AI software predicted that black people are at higher risk of committing a second crime after a first arrest (Angwin, Larson, Mattu and Kirchner 2016). The AI software based its conclusions on imprisonment numbers. The investigation concluded that black defendants were often predicted to be at higher risk of reoffending than they actually were and were twice as likely as white defendants to be misclassified as higher risk, while white defendants were often predicted to be less risky than they actually were.
Furthermore, according to the 2016 ProPublica investigation, black defendants were also twice as likely as white defendants to be misclassified as being at higher risk of reconviction for violent offences, while white defendants were 63% more likely to have been misclassified as being at low risk of reconviction for violent crime (Angwin, Larson, Mattu and Kirchner 2016). US Representative Alexandria Ocasio-Cortez (2019), responding to the ProPublica investigation of AI system racial bias against black defendants when measuring the likelihood of re-offending, said: "Algorithms are still made by human beings, and those algorithms are still pegged to basic human assumptions. They're just automated assumptions. And if you don't fix the bias, then you are just automating the bias."
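The disparities ProPublica reported rest on simple confusion-matrix arithmetic computed separately for each racial group: a false positive is a defendant labelled high risk who did not reoffend, and a false negative is one labelled low risk who did. The sketch below shows how such rates are derived; the counts are invented for illustration and are not the COMPAS data that ProPublica analysed.

```python
# Sketch of a ProPublica-style disparity check (invented counts, not the COMPAS data).
def rates(high_risk_reoffended, high_risk_not, low_risk_reoffended, low_risk_not):
    """Return (false positive rate, false negative rate) for one group."""
    # False positive rate: share of non-reoffenders who were labelled high risk.
    fpr = high_risk_not / (high_risk_not + low_risk_not)
    # False negative rate: share of reoffenders who were labelled low risk.
    fnr = low_risk_reoffended / (low_risk_reoffended + high_risk_reoffended)
    return fpr, fnr

groups = {
    # group: (high risk & reoffended, high risk & did not,
    #         low risk & reoffended, low risk & did not)
    "black defendants": (300, 180, 120, 400),
    "white defendants": (200, 90, 190, 520),
}

for name, counts in groups.items():
    fpr, fnr = rates(*counts)
    print(f"{name}: false positive rate {fpr:.1%}, false negative rate {fnr:.1%}")
```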
AI is being used to persecute minorities
There is a fear that AI could be used for racial profiling by countries persecuting minority groups. In 2019, whistle-blowers told the New York Times that the Chinese government was using facial-recognition software to "track and control" the country's minority Muslim group, the Uighurs (NYT 2019). The Chinese government allegedly used AI systems in its public security cameras to identify Uighurs and then used the data to keep watch on the community, which is persecuted in the country. The whistle-blowers exposed the police in the Chinese city of Sanmenxia for screening Uighurs and said there was increasing demand from the country's government departments and agencies for screening systems programmed for Uighur facial recognition (NYT 2019; Wakefield 2021).
Jonathan Frankle, an AI expert from the Massachusetts Institute of Technology, quoted by the New York Times, said: "Once a country adopts a model in this heavy authoritarian mode, it is using data to enforce thought and rules in a much more deep-seated fashion than might have been achievable 70 years ago in the Soviet Union. To that extent, this is an urgent crisis we are slowly sleepwalking our way into."
AI applications may undermine privacy rights
Currently, the bulk of personal data in countries is stored on central cloud-based infrastructure and is mostly information related to consumers. As AI advances, far more data will be harvested by governments and industry, for use far beyond gauging consumer behaviour. Therefore, personal data will increasingly be stored and processed on the local devices of government and industry using the harvested data.
There are real dangers that using biometric data for remote identification purposes, such as facial recognition systems, will undermine human rights. There is a fear that the personal data of citizens used by AI applications could be abused, used to discriminate against them and undermine privacy.
In 2021, the French data privacy watchdog CNIL told Clearview AI, a US-based facial recognition company, to stop collecting data on French citizens (Reuters 2021). CNIL said Clearview AI's collection of facial images from social media and the internet was illegal and breached EU data privacy rules (Reuters 2021).
EU data privacy rules obligate companies to seek prior consent for the images they collect online. EU rules also give citizens the right to demand that their data be removed from privately-owned databases (Reuters 2021). CNIL said Clearview AI did not ask for prior consent from those whose online images it used. The civil society organisation Privacy International lodged a complaint against the company.
Privacy considerations must be fundamental when AI products are designed. Einaras von Gravrock (2022), a data specialist, said that one way to safeguard data privacy is to decouple data "from users via anonymisation and aggregation" and "remove all personal identifiers and unique data points" from datasets when AI products and algorithms are designed. Furthermore, technology companies must have "strict control over who in the company has access to specific data sets and continuously audit how this data is accessed" (von Gravrock 2022).
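A minimal sketch of what such decoupling could look like in practice is shown below, using hypothetical field names and records. Real anonymisation requires much more than dropping obvious identifiers (for example, protection against re-identification from combinations of remaining attributes), but the example captures the two basic steps von Gravrock describes: removing personal identifiers and releasing only aggregated statistics.

```python
# Minimal sketch of decoupling data from users via anonymisation and aggregation.
# Field names and records are hypothetical; real anonymisation must also guard
# against re-identification through combinations of the remaining attributes.
from statistics import mean

raw_records = [
    {"name": "A. Citizen", "email": "a@example.org", "city": "Johannesburg", "daily_usage_minutes": 42},
    {"name": "B. Person",  "email": "b@example.org", "city": "Johannesburg", "daily_usage_minutes": 65},
    {"name": "C. Someone", "email": "c@example.org", "city": "Cape Town",    "daily_usage_minutes": 30},
]

PERSONAL_IDENTIFIERS = {"name", "email"}

def anonymise(record):
    """Drop direct personal identifiers from a record."""
    return {k: v for k, v in record.items() if k not in PERSONAL_IDENTIFIERS}

def aggregate_by_city(records):
    """Report only group-level statistics, never individual rows."""
    by_city = {}
    for r in records:
        by_city.setdefault(r["city"], []).append(r["daily_usage_minutes"])
    return {city: {"users": len(v), "avg_minutes": mean(v)} for city, v in by_city.items()}

anonymised = [anonymise(r) for r in raw_records]
print(aggregate_by_city(anonymised))
```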
Conclusion: Democratising AI
Tendayi Achiume, the UN Special Rapporteur on racism, in a 2020 report to the UN Human Rights Council, warned that technology "is fundamentally shaped by the racial, ethnic, gender and other inequalities prevalent in society" and typically makes these inequalities worse: "It results in discrimination and unequal treatment in all areas of life, from education and employment to healthcare and criminal justice."
Achiume (2020) said governments must provide remedies to those who have been discriminated against by AI systems. "This includes accountability for racial discrimination and reparations to affected individuals and communities. As recent moves to ban facial recognition technologies in some parts of the world show - in some cases, the discriminatory effect of digital technologies will require their outright prohibition."
Achiume also said: "To prevent and eliminate racial discrimination in technological design will require having more racial and ethnic minorities in decision-making in the industry". It is crucial that the demographic of software developers diversifies.
There must be greater oversight of AI systems to prevent racial and gender bias in their applications. For example, many EU countries have data protection rules that disallow the use of biometrics for facial recognition systems, except under strict public interest conditions (EU 2020).
Introducing open source into AI, without undermining privacy laws, will not only improve the quality of AI but will also serve as a mechanism of oversight (Rao 2020).
Better regulation of the storage and use of such personal data is crucial to protect fundamental human rights. Data governance is often weak in technology companies. Technology companies must make data management controls part of compliance monitoring.
Anand Rao, PwC's Global AI Leader, said: "Controls need to be in place to ensure that the models are being developed with the appropriate success or validation metrics (a balance of accuracy, fairness, explainability), to avoid the development and deployment of AI models whose results are biased or can't be easily explained or understood".
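One way to operationalise such controls is a pre-deployment gate that checks fairness alongside accuracy. The sketch below is a hypothetical illustration in that spirit, not PwC's or any vendor's actual tooling; the metric names and thresholds are assumptions chosen for the example.

```python
# Hypothetical pre-deployment gate combining accuracy and a fairness check;
# thresholds and metric names are illustrative assumptions.
def approve_for_deployment(overall_accuracy, error_rate_by_group,
                           min_accuracy=0.90, max_error_gap=0.05):
    """Block deployment if the model is inaccurate or if its error rates
    differ too much between demographic groups."""
    gap = max(error_rate_by_group.values()) - min(error_rate_by_group.values())
    checks = {
        "accuracy": overall_accuracy >= min_accuracy,
        "fairness_gap": gap <= max_error_gap,
    }
    return all(checks.values()), checks

approved, detail = approve_for_deployment(
    overall_accuracy=0.93,
    error_rate_by_group={"group A": 0.04, "group B": 0.12},
)
print(approved, detail)  # False: the 8-point error-rate gap fails the fairness check
```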
There must be formal regulations to address safety concerns in AI systems. Liability, consumer protection and product safety rules should be tightened to tackle accidents, product faults and errors related to AI systems – ensuring human oversight over AI products (Greene 2021). The challenge is that even if such tighter oversight rules are established, they are often difficult to enforce. Professional associations, civil society and the media play a crucial role in helping to increase the quality of oversight.
The EU's 2020 White Paper on Artificial Intelligence warned that although human decision-making is not immune to biases, "the same bias when present in AI could have a much larger effect, affecting and discriminating many people without the social control mechanisms that govern human behaviour".
The EU White Paper noted that "the objective of trustworthy, ethical and human-centric AI can only be achieved by ensuring an appropriate involvement by human beings". The EU paper strongly argues that critical public service applications should always be validated by a human if AI is used. The EU proposed that citizens who feel they have been discriminated against in the AI decision have the right to an explanation.
At a global level, it is crucial that AI policies around the world are in line with basic human dignity and privacy rights. In areas where citizens' rights are most directly affected by AI, such as the judiciary, policing and welfare, there must be rules ensuring that AI follows democratic, fairness and privacy principles – and is subject to human oversight.
Civil society organisations in developing countries need to hold AI companies accountable for their actions, as Privacy International did when it opposed the US-based company Clearview AI's collection of facial images of French citizens without their consent.
Kay Firth-Butterfield (2020), the head of AI for the World Economic Forum, rightly said: "teams working on developing (AI) products and services need to think not only about the computer science behind a solution, but also the economic, legal, and social implications of their algorithms".
References
Tendayi Achiume (2020) Report of the United Nations Special Rapporteur on Racism, UN Human Rights Council, Geneva, July 15.
Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner (2016) "Machine Bias: There's software used across the country to predict future criminals. And it's biased against blacks", ProPublica, May 23.
Joan Palmiter Bajorek (2019) "Voice recognition still has significant race and gender biases", Harvard Business Review, May 10. https://hbr.org/2019/05/voice-recognition-still-has-significant-race-and-gender-biases.
Franziska Barczyk (2020) "Predictive policing algorithms are racist. They need to be dismantled", Technology Review, July 17.
Emily M. Bender, Timnit Gebru, Angelina McMillan-Major and Shmargaret Shmitchell (2021) "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?", Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, March 2021, pages 610–623.
Philip M. Boffey (2020) "Baked-In: How Racism is Coded into Technology", Report from the International Neuroethics Society Conference, November 11.
Joy Buolamwini and Timnit Gebru (2018) "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification", Proceedings of Machine Learning Research 81:1–15, Conference on Fairness, Accountability, and Transparency.
https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf
Stephen Buranyi (2017) "Rise of the racist robots – how AI is learning all our worst impulses", The Guardian, August 8.
Stephen Cave and Kanta Dihal (2020) "The Whiteness of AI", Philosophy & Technology, Vol. 33, pp. 685–703, August 6.
Sarah Chander (2020) "Technology has codified structural racism – will the EU tackle racist tech?", Euractiv, September 3
https://www.euractiv.com/section/digital/opinion/technology-has-codified-structural-racism-will-the-eu-tackle-racist-tech/
Glenn Cohen and Michelle M. Mello (2019) "Big data, big tech, and protecting patient privacy", JAMA, August 9. https://jamanetwork.com/journals/jama/fullarticle/2748399.
Amit Datta, Michael Carl Tschantz, and Anupam Datta (2015) "Automated Experiments on Ad Privacy Settings: A Tale of Opacity, Choice, and Discrimination", Proceedings on Privacy Enhancing Technologies, 2015 (1):92-112.
Richard DiTomaso (n.d.) "Liability and safety concerns with self-driving cars", DiTomaso Law.
Andy Dalton (2020) "Bad robots – China uses artificial intelligence to target Uighur Muslim population", Ethical AI, 19 December.
Avriel Epps-Darling (2020) "How the Racism Baked into Technology Hurts Teens", The Atlantic, October 24.
European Commission (2020) "White Paper on Artificial Intelligence – A European Approach to Excellence and Trust", European Commission, Brussels, February 19.
https://ec.europa.eu/info/sites/default/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf
Kay Firth-Butterfield (n.d.) "This is the teaching moment for artificial intelligence", The National.
Kay Firth-Butterfield, Leila Toplic, Aubra Anthony and Emily Reid (2022) "AI will add $15.7 trillion to global GDP by 2030", The Print, March 19.
Kay Firth-Butterfield (2020) "Why AI Ethics Matter", Re-Work, October 19.
Travis Greene (2021) "AI Ethics Isn't Enough: Here's How I Would Regulate AI in the USA", Medium, April 21
William Gumede (2022) "Not intelligent: AI's biases are dangerous and need more oversight", Sunday Times Daily, May 22.
https://www.timeslive.co.za/sunday-times-daily/opinion-and-analysis/2022-05-25-william-gumede--not-intelligent-ais-biases-are-dangerous-and-need-more-oversight/
Larry Hardesty (2018) "Study finds gender and skin-type bias in commercial artificial-intelligence systems", MIT News Office, February 11.
Cameron F. Kerry (2020) "Protecting privacy in an AI-driven world", Brookings, February 10.
Kim Lyons (2020) "Timnit Gebru's actual paper may explain why Google ejected her", The Verge, December 5. https://www.theverge.com/2020/12/5/22155985/paper-timnit-gebru-fired-google-large-language-models-search-ai
Rachel Metz (2020) "Google widely criticised after parting ways with a leading voice in AI ethics", CNN, December 5. https://edition.cnn.com/2020/12/04/tech/google-timnit-gebru-ai-ethics-leaves/index.html
Alex Najibi (2020) "Racial Discrimination in Face Recognition Technology", Harvard Science Policy, October 24.
New York Times (2019) "How China is Using AI to Profile a Minority", April 14.
Alexandria Ocasio-Cortez (2019) "Conversation between Congresswoman Alexandria Ocasio-Cortez and author Ta-Nehisi Coates", Annual Dr Martin Luther King Jr Legacy Event, Riverside Church, Harlem, New York, January 22.
W. Nicholson Price (II) (2019) "Risks and remedies for artificial intelligence in health care", Brookings, November 14.
Anand S. Rao (2020) "Democratisation of AI", Towards Data Science, August 16
https://towardsdatascience.com/democratization-of-ai-de155f0616b5
Mathieu Rosemain (2021) "France rebukes US AI company for privacy breaches", Reuters, December 16.
https://www.reuters.com/technology/france-says-facial-recognition-company-clearview-breached-privacy-law-2021-12-16/
Zoe Schiffer (2020) "Google fires prominent AI ethicist Timnit Gebru", The Verge, December 3.
https://www.theverge.com/2020/12/3/22150355/google-fires-timnit-gebru-facial-recognition-ai-ethicist
David A. Teich (2020) "Artificial Intelligence and Data Privacy – Turning A Risk into A Benefit", Forbes, August 10.
https://www.forbes.com/sites/davidteich/2020/08/10/artificial-intelligence-and-data-privacy--turning-a-risk-into-a-benefit/?sh=422614776a95
Jane Wakefield (2021) "AI emotion-detection software tested on Uyghurs", BBC, May 26.
Einaras von Gravrock (2022) "Why artificial intelligence design must prioritise data privacy", World Economic Forum, March 31.
https://www.weforum.org/agenda/2022/03/designing-artificial-intelligence-for-privacy/