The Impact of Artificial Intelligence on Marginalised Groups
Comments delivered at the Renew Europe webinar on "Implications of Artificial Intelligence on Human Rights", organised by MEPs Samira Rafaela and Svenja Hahn
Thank you for the opportunity to speak here today and thank you for the excellent opening remarks.
In this brief intervention I will address the impact of AI on marginalised groups, focusing on three main issues:
1. punishment and policing
2. essential services and support
3. movement and border control
I will address these in turn, before making a few concluding observations.
1. Punishment and policing
There has been an increased use of AI and algorithmically-driven tools for the policing and punishing of individuals. Well-known examples are the use of facial recognition and predictive policing technologies by law enforcement.
Predictive policing technologies use historical and real-time data to predict when and where a crime is most likely to occur, or who is most likely to engage in or become a victim of criminal activity. European police forces, including those in Germany, Switzerland, the Netherlands, and the UK, have been developing and piloting predictive mapping and identification systems to help them intervene pre-emptively and deter crime.
Many of these technologies are based on risk modelling: individuals are identified and ranked according to the likelihood that they will engage in criminal activity. In the Netherlands, this has gone as far as a system that scores children under the age of 12 on their likelihood of becoming criminals. Various studies have shown that the historically biased police data feeding into predictive policing programmes perpetuates and reinforces the overpolicing of neighbourhoods housing marginalised groups.
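To make that feedback dynamic concrete, here is a minimal sketch in Python. The neighbourhood names, incident counts, and patrol allocation rule are all hypothetical assumptions for illustration, not drawn from any actual predictive policing system.

```python
# Minimal sketch (hypothetical data and rules, not any real police system) of how
# a risk model built on historically biased records reproduces and amplifies the
# over-policing of particular neighbourhoods.

# Recorded incidents reflect where police patrolled in the past, not the underlying
# rate of offending (assumed equal across areas in this sketch).
recorded_incidents = {"neighbourhood_A": 120, "neighbourhood_B": 40}

def risk_score(neighbourhood: str) -> float:
    """'Risk' derived purely from historical records, so the bias is inherited."""
    total = sum(recorded_incidents.values())
    return recorded_incidents[neighbourhood] / total

def simulate_patrols(rounds: int = 3) -> None:
    """Patrols follow the scores; patrols generate records; records raise the scores."""
    for _ in range(rounds):
        # Send the extra patrols to the area the model ranks as highest risk.
        target = max(recorded_incidents, key=risk_score)
        # More patrols there mean more incidents get recorded there,
        # even though actual offending is identical everywhere in this sketch.
        recorded_incidents[target] += 50
        print({n: round(risk_score(n), 2) for n in recorded_incidents})

simulate_patrols()
# Each round the score gap between the two neighbourhoods widens:
# a feedback loop driven by past policing, not a measurement of crime.
```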
Another example of "pre-crime detection" is the Dutch SyRI (System Risk Indication) system, which determined whether or not individuals were likely to commit public benefits fraud. The use of this system, which was predominantly rolled out in neighbourhoods with low incomes and high immigration rates, was struck down by the courts as violating human rights. However, the government has since put forward a proposal for an even more invasive system, informally referred to as "Super SyRI", which would allow extended data sharing not only between government agencies but also with private companies.
It is important to note that this data-driven policing and punishment is not just carried out by public authorities. It is also happening in private contexts, manifesting itself through workplace surveillance, credit checks, private security firms, and mobile apps.
In Finland, the National Non-Discrimination and Equality Tribunal ordered a Swedish credit company to stop using a statistical method in credit scoring decisions. The company determined whether people were eligible for credit or loans not on the basis of specific information about the individual applicant, such as their income, financial situation, and payment history, but by scoring applicants on factors such as their place of residence, gender, age, and mother tongue. The applicant who filed the complaint was denied a loan because he was a man, lived in a rural area of Finland, and spoke Finnish rather than Swedish as his mother tongue. Had he been a Swedish-speaking woman living in an urban area, he would have been eligible.
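As an illustration of what such purely statistical scoring means in practice, the sketch below uses hypothetical weights and a hypothetical approval threshold (nothing here reflects the company's actual model) to show how a decision based on group attributes alone plays out for the two applicants just described.

```python
# Minimal sketch (hypothetical weights and threshold, not the company's actual model)
# of credit scoring that uses group attributes rather than the applicant's own
# income, finances, or payment history.

from dataclasses import dataclass

@dataclass
class Applicant:
    gender: str         # "male" / "female"
    residence: str      # "urban" / "rural"
    mother_tongue: str  # "Finnish" / "Swedish"
    age: int

def statistical_score(a: Applicant) -> int:
    """Every point comes from membership in a group, not from individual conduct."""
    score = 0
    score += 2 if a.gender == "female" else 0
    score += 2 if a.residence == "urban" else 0
    score += 2 if a.mother_tongue == "Swedish" else 0
    score += 1 if a.age >= 30 else 0
    return score

def decide(a: Applicant, threshold: int = 4) -> str:
    return "approve" if statistical_score(a) >= threshold else "deny"

# The complainant in the Finnish case: denied on group membership alone.
print(decide(Applicant("male", "rural", "Finnish", 35)))   # deny
# An otherwise identical Swedish-speaking woman in a city: approved.
print(decide(Applicant("female", "urban", "Swedish", 35))) # approve
```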
Access to financial services, such as banking and lending, can be a decisive factor in an individual’s ability to pursue their economic and social well-being, and access to credit helps marginalised communities exercise their economic, social, and cultural rights. Automation often polices, discriminates, and excludes, thereby threatening the rights to non-discrimination, association, assembly, and expression.
2. Essential services and support
Automated systems are increasingly being used to make decisions on whether an individual is entitled to essential services and support, such as welfare or shelter, and the extent to which they can rely on these services.
When AI systems are used to make these kinds of decisions, those already in a precarious position can be pushed even further into precarity by being excluded or shut out of the services they depend on. A system malfunction or inaccuracy can result in serious human rights violations, including violations of the right to life.
In the UK, the government began rolling out Universal Credit in 2013, a major restructuring of the country’s social security system that combines six social security benefits into one monthly lump sum. The system was ostensibly developed to simplify access to benefits, cut administrative costs, and “improve financial work incentives.” Requiring people to apply for and manage the benefit online has, however, further harmed those already in a precarious position.
First, the algorithm turned out to be flawed: by not accounting for the fact that people in irregular, low-paid jobs are often paid via multiple paycheques a month, the system ended up overestimating people's earnings and drastically shrinking the benefits they received.
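A minimal sketch of this first issue: the figures below are purely illustrative assumptions (the taper rate, allowance, and pay amounts are not the real Universal Credit parameters), but they show how two paydays landing in one monthly assessment period can wipe out that month's award.

```python
# Minimal sketch (illustrative figures, not the actual Universal Credit rules or rates)
# of how rigid monthly assessment periods overstate earnings when two paydays
# happen to fall inside the same period.

TAPER_RATE = 0.63           # hypothetical: award reduced by 63p for every £1 earned
STANDARD_ALLOWANCE = 700.0  # hypothetical monthly award before earnings are deducted

def monthly_award(earnings_recorded_in_period: float) -> float:
    """Award for one assessment period, based only on earnings recorded within it."""
    return max(0.0, STANDARD_ALLOWANCE - TAPER_RATE * earnings_recorded_in_period)

# A worker paid roughly £600 every four weeks has a fairly steady income over the
# year, but some calendar-month periods capture one payday and some capture two.
print(monthly_award(600))      # one payday recorded:  award of £322
print(monthly_award(2 * 600))  # two paydays recorded: award collapses to £0
```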
Second, by requiring people to request their benefits online, the system caused hardship among people who lacked digital skills or reliable internet access. People would need to fill out long web forms about their personal and financial circumstances and meet cumbersome identity verification requirements while sitting in, for example, a public library, with no option of saving the data entered if they did not have all the required paperwork with them to complete the process. Given that applicants then had to wait five weeks for their first payment, every day "lost" to an online application process they could not complete added to existing hardship.
In India, people have reportedly died of starvation after being unable to access food rations because the system failed to read their fingerprints for identification. Aadhaar is the world's largest biometric identification system, and the Indian government argued it would revolutionise welfare. In practice, glitches in the system are causing terrible problems for the very people it is intended to protect. Decisions about benefits and services are made in a central system, and if anything goes wrong, a range of support can suddenly come to an end. Payments such as pensions are frequently misdirected. Even if recipients can find out what is causing an error, they often still struggle to get it corrected, or face the difficult choice of trading time that could be spent working and earning an income for lengthy travel to try to speak with officials.
3. Movement and border control
Automated systems are increasingly being used in immigration and refugee contexts in ways that are experimental, dangerous, and discriminatory. These technologies interfere with the rights of, among others, refugees, migrants and stateless persons.
The use of AI is integrated into facial and gait recognition systems, retinal and fingerprint scans, ground sensors, aerial video surveillance drones, biometric databases, asylum decision-making processes, and other aspects of border enforcement. In this way, they have contributed to the rise of what some have referred to as “digital borders.”
The large-scale reliance on these technologies has reduced human bodies to evidence. In addition, there has been a pattern of centralised collection and storage of this biometric and personal data across sectors and agencies, including through public-private partnerships such as Palantir's with the World Food Programme and Microsoft's with the International Committee of the Red Cross. People are left without any control over this kind of data harvesting, and there have been a number of disturbing reports of refugees burning off their own fingerprints out of fear of being tracked and returned to countries of origin or to entry-point countries in the EU.
Instead of obtaining information from refugees and migrants directly and with informed consent, UN agencies, border control agencies, and non-governmental organisations alike show the same trend: cross-border surveillance and use of people's data without their consent.
Why we need to move beyond "bias" and "fixing" the data
One final issue I want to draw attention to, following these examples, is that understanding the harms of new technologies requires taking into account the broader context and the existing inequalities and power structures into which they are deployed. Machine learning technologies learn from patterns in existing data and therefore make it possible to further entrench and exacerbate already existing systemic human rights harms.
This, however, is not the same as the usual framing we see in conversations about discrimination and AI, which generally focus on data quality, data accuracy, or "bias". The harmful uses of AI just described cannot be remedied through better data quality, because these uses in themselves exacerbate structural exclusion and inequality. They therefore need to be restricted, not facilitated with tweaks or improvements.
The harms that AI can cause are multi-faceted and intersectional. They cut across civil, political, economic, social and cultural rights. And when deployed in the wrong context, they not only threaten the right of all individuals to enjoy equal protection of these rights, they can threaten the very rights themselves.
The European Commission recently presented a proposal to regulate the use of AI systems. For a first analysis that takes the above-mentioned issues into account, see EDRi's "EU’s AI law needs major changes to prevent discrimination and mass surveillance".