Are we subject to the discriminatory rulings of machines? De-biasing AI-Based Automated Decision-Making

Have you ever felt that you should have been granted a loan that was denied? Or that you should have received a subsidy or governmental transfer? A job you applied for? Or a better rate on your insurance policy? You are not alone. AI-based automated decision-making is helping public and private organizations across the globe make, in theory, better-informed, faster and more efficient choices.

Yet AI-supported systems that mediate access to loans, jobs, land, social protection, subsidies, transfers, price setting and even love can discriminate against the most disadvantaged based on race, gender, age, location, sexual orientation, ethnicity or even religion.

AI systems make decisions by identifying so-called “target variables”, usually in large datasets that are assigned “data labels” when the systems are trained. Systems learn to identify and classify data based on existing historical data. For example, you can train an AI system with a large dataset of images from women with breast cancer and women free from it, so that in the future the algorithm learns to identify and predict new cancer cases without human intervention.
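To make that training loop concrete, here is a minimal sketch using synthetic data and the scikit-learn library. The features, labels and numbers are invented for illustration and are not taken from any real medical dataset.

```python
# Minimal sketch: training a classifier on labelled historical data,
# then predicting a label for a new, unseen case.
# The dataset here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Historical data: each row is a case, each column a measured feature
# (e.g. values extracted from medical images); y holds the "data labels"
# assigned for training (1 = cancer, 0 = no cancer).
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)          # "training"
print("held-out accuracy:", model.score(X_test, y_test))    # how well it generalizes

new_case = rng.normal(size=(1, 5))
print("predicted label for a new case:", model.predict(new_case)[0])
```

Whatever patterns sit in the historical data, including biased ones, are exactly what the model learns to reproduce.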

This is an excellent example of the potential of frontier technologies as a force for good.


So, how can a machine-made decision be discriminatory?

Digital technologies seem innocuous. In theory, AI systems should be fairer and more efficient because no human intervenes in their decisions. However, these technologies are crafted by humans who hold beliefs, cultural backgrounds and experiences that shape the way they see and interpret the world.

So, biases that could lead to discrimination – either direct or indirect – may be rooted in the datasets during pre-processing, introduced when labelling and training during processing, or expressed when a decision is made during post-processing, that is, when the machine implements the decision without any human intervention or verification.

Let me give you a concrete example.

Loans are granted when either a credit executive or an automated system decides that you have the capacity to repay, considering your age, occupation, payment history, the assets you hold and your location, among other variables. A credit scoring system assigns a score to each of the variables considered, producing a number that predicts your ability to meet the monthly payment. The credit is then approved or denied based on that scoring process.
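A highly simplified, hypothetical version of such a scoring process is sketched below; the variables, weights and cut-off are invented for illustration and do not correspond to any real lender's model.

```python
# Hypothetical credit-scoring sketch: each variable contributes points,
# the points are summed into a score, and the score is compared with a
# cut-off to approve or deny the loan. Weights and threshold are invented.

def credit_score(applicant: dict) -> int:
    score = 0
    score += min(applicant["age"], 60)                 # points for age, capped
    score += 100 if applicant["stable_job"] else 20    # occupation stability
    score += applicant["on_time_payments"] * 2         # historical payment trend
    score += 50 if applicant["owns_assets"] else 0     # assets held
    score += applicant["neighborhood_score"]           # location-based component
    return score

def decide(applicant: dict, cutoff: int = 250) -> str:
    return "approved" if credit_score(applicant) >= cutoff else "denied"

applicant = {
    "age": 34,
    "stable_job": True,
    "on_time_payments": 48,     # months of on-time payments
    "owns_assets": False,
    "neighborhood_score": 40,   # this is where a location-based bias can sneak in
}
print(decide(applicant))  # -> approved or denied based solely on the score
```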

What if the credit is denied because you live in a neighborhood that predicts a low repayment capacity, because there is a high concentration of African American families or a high crime rate? What if the system predicts a future default because you are a woman of childbearing age who could become pregnant and lose or leave your job?

Digital technologies are not innocuous, and neither are AI-based decision-making systems.

These technologies are made by humans and can learn our same biased beliefs, in turn making discriminatory decisions.

This can have huge negative implications for many aspects of people's lives.


People are discriminated against by algorithms that negatively impact their love lives

Apryl Williams and Kendra Albert, researchers affiliated with Harvard Law's Cyberlaw Clinic, found that dating apps are automating sexual racism. These apps use algorithms that attempt to predict attraction and attractiveness, matching app users with others who look like them and reinforcing racial stereotypes.

Other research, conducted by the Universitat Oberta de Catalunya, explored the effects of digital ageism and found that Tinder has been accused of charging users different rates depending on their age, discriminating against people over 30.

“Age is therefore at the heart of the business of the algorithmic colonization of love, which restricts users' ability to explore relationships spontaneously or beyond personal prejudices,” says Andrea Rosales, a member of the faculty of information and communication sciences.

Credit approval algorithms discriminate against Black people and women across the globe

In 2018, Cozarenco and Szafarz studied sex bias in French microfinance institutions, analyzing 1,098 credit applications. Their results suggest that female micro-borrowers have a lower chance than male borrowers of getting loans above the loan ceiling. These findings are echoed by research on 80,000 Spanish companies comparing credit demand and approval ratios, which found that female entrepreneurs are less likely than male peers in the same industry to have a loan application approved in their firm's founding year. And the likelihood of a loan being denied to a woman-led enterprise in Vietnam reaches 67% in male-intensive industries and 71% in periods of tight monetary policy, as Le and Stefańczyk found in 2018.

In the US, Black and Latino borrowers are three times more likely than white borrowers to receive high-cost loans and are less likely to get a loan approved (54% of Black applicants and 63% of Latino applicants) than their white peers (71%). Here a phenomenon called ‘redlining’ comes into play: race is a protected characteristic under US legislation, so it is not used as a predictor for credit approval, but the ZIP code in the dataset acts as a proxy for location and hence for the concentration of African American families.
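A hedged sketch of how such a proxy works is shown below: even when the protected attribute is excluded from the model's features, a strongly correlated variable like a ZIP code can carry the same signal. The data is synthetic and the correlation strength is invented purely to illustrate the mechanism.

```python
# Sketch of a proxy variable: 'group' (the protected attribute) is never
# given to the model, but 'zip_code' is correlated with it in the synthetic
# historical data, so predicted approval rates still differ sharply by group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)                                # protected attribute (excluded from features)
zip_code = np.where(rng.random(n) < 0.9, group, 1 - group)   # ZIP matches group 90% of the time
income = rng.normal(50 + 10 * (1 - group), 10, n)            # historical inequality baked into the data

# Historical approvals were biased against group 1 via location.
approved = ((income > 45) & (zip_code == 0)).astype(int)

X = np.column_stack([income, zip_code])                      # note: the protected attribute is excluded
model = LogisticRegression(max_iter=1000).fit(X, approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"predicted approval rate, group {g}: {pred[group == g].mean():.2f}")
# The gap persists because zip_code acts as a proxy for the protected attribute.
```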

Algorithmic discrimination can drive families to financial ruin or foster grave human rights violations

In 2021, the Dutch Systeem Risico Indicatie (SyRI) unfairly singled out 20,000 families from low socio-economic backgrounds, wrongly accusing them of tax fraud over childcare subsidies and blocking families with more than one nationality from receiving the social benefits to which they were entitled. The system was later declared illegal by The Hague Court, months after the harm was done.

“As humankind moves, perhaps inexorably, towards the digital welfare future it needs to alter course significantly and rapidly to avoid stumbling zombie-like into a digital welfare dystopia.” – UN Special Rapporteur on Extreme Poverty and Human Rights, 2019.

In his 2019 report, the UN Special Rapporteur on Extreme Poverty and Human Rights flagged that the digitalization of social welfare systems has been accompanied by deep reductions in the overall budget, a narrowing of the beneficiary pool, the elimination of services, the introduction of demanding and intrusive forms of conditionality, and the pursuit of behavioral modification goals.

Modern surveillance systems that rely on facial recognition AI and algorithm-based predictive policing are supposed to enhance safety, but they are being used to identify, predict, target and punish, violating human rights. Meanwhile, digital social protection systems in many countries now rely on biometric digital identification that can create high security risks for citizens.

For example, India's Aadhaar is the world's largest biometric identification system, covering 1.2 billion people. It has been heavily criticized for “unnecessarily collecting biometric information, for severe shortcomings in legislative oversight, function creep, facilitating surveillance and other intrusions on the right to privacy, exacerbating cybersecurity issues, and creating barriers to accessing a range of social rights”.

In 2019, Kenya's Huduma Namba programme required all citizens, including those living abroad, and all foreign nationals and refugees in the country above the age of 6 to obtain a national ID in order to access government services, including welfare benefits. The biometric data collected included fingerprints, hand geometry, earlobe geometry, retina and iris patterns, voice waves and DNA in digital form. The programme was closed for inadequate privacy protections after the High Court ruled that it did not comply with the Data Protection Act because no data protection impact assessment had been carried out. Despite the ruling, the Government reportedly threatened to exclude unregistered individuals from access to benefits or the right to vote. The digital identity card programme, relaunched as Maisha Namba under the new government, was halted in December 2023 by a High Court ruling that suspended the issuance of new digital IDs until the project complied with Kenya's Data Protection Act; that ruling was lifted in February 2024.


So then, what can be done? Are humans subject to the rulings of machines? Is our digital future doomed?

In a nutshell, no. We are not doomed, nor subject to the rulings of machines. But we need better governance frameworks; more transparency, accountability and explainability of AI “black boxes”; and, for sure, more AI literacy.

First, let's explore the issue of AI governance. AI has reached a point where it can learn by itself, which is exciting and a bit scary, and that is the very reason why better AI governance is a crucial issue for our digital future. When we talk about AI governance we are talking about the processes, policies and tools that bring together key stakeholders of the AI cycle – including engineers, data scientists, regulators, compliance, legal and business teams – to ensure that AI systems are built, deployed, used and managed to maximize benefits and prevent harm.

Stronger AI governance and ethics are sorely needed, as is human intervention for verification, curation and quality checks at different points across the AI cycle.

Second, building ethical and responsible AI by design is a must, so that systems are transparent, accountable and explainable. It is imperative to remove bias from historical datasets (during pre-processing) and to establish protocols so AI learns to identify and treat discriminatory variables when it is trained (during processing). And when AI predictions produce a discriminatory outcome, it is crucial to include human intervention to review and validate decisions (during post-processing), without compromising the efficiency of the system but ensuring that its outcomes are inclusive and non-discriminatory.
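As one possible illustration of that post-processing safeguard, the hypothetical sketch below routes low-confidence or potentially disparate automated decisions to a human reviewer instead of executing them automatically; the thresholds and function names are assumptions, not an established standard.

```python
# Hypothetical post-processing gate: route borderline or potentially
# disparate automated decisions to a human reviewer instead of
# executing them automatically. Thresholds are illustrative only.

def route_decision(probability_of_approval: float,
                   group_approval_rates: dict[str, float],
                   applicant_group: str,
                   confidence_band: float = 0.15,
                   max_rate_gap: float = 0.20) -> str:
    # 1) Low-confidence predictions go to a human.
    if abs(probability_of_approval - 0.5) < confidence_band:
        return "send to human reviewer (low confidence)"

    # 2) If the applicant belongs to a group whose running approval rate
    #    lags far behind the best-served group, escalate for review.
    best_rate = max(group_approval_rates.values())
    if best_rate - group_approval_rates[applicant_group] > max_rate_gap:
        return "send to human reviewer (possible disparate outcome)"

    # 3) Otherwise the automated decision stands.
    return "approve" if probability_of_approval >= 0.5 else "deny"

rates = {"group_a": 0.71, "group_b": 0.45}
print(route_decision(0.72, rates, "group_b"))   # escalated: the rate gap is too large
print(route_decision(0.91, rates, "group_a"))   # approved automatically
```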

We need to transition towards more transparent and explainable algorithms, avoiding the opaqueness of AI “black boxes” in both public and private AI systems. We need to implement audit systems at the different phases of the AI cycle, deploy more comprehensive impact assessments of AI-based automated decision-making, and establish redress mechanisms that ensure the protection of human rights.
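One simple audit that can be run at any phase is a disparate-impact check: compare the rate of favourable outcomes across groups and flag the system when the ratio falls below a chosen threshold (the “four-fifths rule” of 0.8 is a common rule of thumb). The sketch below is a minimal illustration; the decision log mirrors the approval rates cited above but is otherwise invented.

```python
# Minimal disparate-impact audit: compare favourable-outcome rates across
# groups and flag the system if the ratio drops below a chosen threshold.
# The 0.8 threshold follows the common "four-fifths" rule of thumb;
# the decision log below is invented for illustration.

def disparate_impact(decisions: list[tuple[str, bool]], threshold: float = 0.8) -> dict:
    rates: dict[str, float] = {}
    for group in {g for g, _ in decisions}:
        outcomes = [approved for g, approved in decisions if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    reference = max(rates.values())                       # best-served group's rate
    ratios = {g: r / reference for g, r in rates.items()}
    flagged = any(ratio < threshold for ratio in ratios.values())
    return {"approval_rates": rates, "impact_ratios": ratios, "flagged": flagged}

audit_log = [("group_a", True)] * 71 + [("group_a", False)] * 29 \
          + [("group_b", True)] * 54 + [("group_b", False)] * 46

print(disparate_impact(audit_log))
# group_b's ratio is 0.54 / 0.71 ≈ 0.76 < 0.8, so the audit flags the system.
```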

And third, AI literacy is extremely important as we move towards a digital future in which AI technologies will pervade our lives at every level. We need to start introducing digital and AI literacy in classrooms, covering both the technical and the anthropological implications of AI for human existence. We also need to train data scientists and STEM professionals in ethical and responsible technologies and approaches, and to build more diverse technology teams in which the expertise of the social disciplines ensures the development of human-centered, inclusive and gender-responsive technologies. Finally, we need AI-literate regulators, politicians and government agencies alike, so we can build better regulatory frameworks and policies that put people's well-being and respect for human rights above technological advancement.
