'Algorithmic Coloniality', 'Algorithmic Oppression' and Artificial Intelligence (AI) in Africa
Tafadzwa Mushaikwa
MSc Marketing | HBBS | MSc Journalism & Media Studies candidate | Expert at driving growth via innovative communication strategies with proven success in scaling brands.
'Algorithmic Coloniality' refers to the ways algorithmic systems reproduce and deepen historical patterns of colonisation. Coloniality is a form of post-imperial domination that perpetuates the racial, political, social, and economic subjugation of the colonial era. 'Algorithmic Oppression' refers to the ways algorithmic systems and technologies perpetuate and spread systemic discrimination, social injustice, and prejudice in Africa. It spotlights how AI algorithms formulated predominantly in the Global North adversely affect marginalised and disadvantaged groups in the Global South.
Artificial Intelligence (AI) is computing software that can learn, make decisions, complete tasks, and solve problems. AI learns from data sets referred to as training data: collections of example inputs paired with desired outputs, from which a model infers the patterns it later applies to new cases. AI has advanced worldwide and has been harnessed in investigative journalism. Journalists use AI technologies such as bots and drones with facial recognition to collect data and evidence from safe distances, and AI-based tools for graphics, translation, transcription, and video editing. AI is thus a critical player in elevating the voice of the Global South. However, AI has also been accused of perpetuating bias and unjust discrimination, and of contributing to inequality in African societies.
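To make the term concrete, here is a minimal sketch of what "training data" means in practice, using the open-source scikit-learn library. The examples, labels, and model choice are illustrative assumptions, not a description of any specific production system.

```python
# A minimal sketch of "training data": example inputs paired with desired
# outputs, not a set of hand-written instructions. Requires scikit-learn.
from sklearn.tree import DecisionTreeClassifier

X_train = [[0, 0], [0, 1], [1, 0], [1, 1]]  # feature vectors (examples)
y_train = [0, 1, 1, 0]                      # desired outputs (here, XOR)

# The model infers a decision rule from the examples alone.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(model.predict([[1, 0]]))  # -> [1], a pattern learned from the data
```

Because the model only ever sees its training examples, whatever is missing from those examples is missing from the model, which is the crux of the bias arguments that follow.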
AI has been accused of being programmed with inherent biases against minority groups and has been caught perpetuating coloniality. Facial recognition AI, for instance, was predominantly trained on photos of White faces, so it performs markedly worse on non-white faces. AI-powered captioning programs often fail to transcribe Africans speaking English with African accents, and most speech-recognition software cannot reliably transcribe African languages, because African voices were scarcely represented in the audio used to train these systems. AI use in African investigative and data journalism is thus highly restricted, since machine learning, computer vision, speech recognition, planning, scheduling, optimisation, and robotic technologies all carry these limitations. This under-representation replicates, in data, the historical power imbalance between the Global North and South.
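One way such skew is exposed is a disaggregated evaluation: scoring a model separately for each demographic group, as audits like the Gender Shades study did for commercial face classifiers. Below is a minimal, self-contained sketch; the predictions, labels, and group tags are invented placeholders.

```python
# A sketch of a disaggregated evaluation: accuracy computed per group.
# All data below is synthetic, purely to illustrate the audit pattern.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    hits, totals = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        totals[g] += 1
        hits[g] += int(t == p)
    return {g: hits[g] / totals[g] for g in totals}

y_true = [1, 1, 0, 1, 0, 1, 1, 0]
y_pred = [1, 1, 0, 0, 1, 0, 1, 0]
groups = ["A", "A", "A", "B", "B", "B", "A", "B"]

print(accuracy_by_group(y_true, y_pred, groups))
# e.g. {'A': 1.0, 'B': 0.25}: a large gap between groups is the signature
# of a training set (or model) skewed towards one population.
```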
One could argue such AI biases are not incidental but reflect a deeper problem: AI systems designed with Global North values and data while ignoring African lived realities. This mirrors the colonial process of imposing foreign systems onto African societies, often to their detriment. For example, AlgorithmWatch demonstrated that Google Vision Cloud, an AI-powered computer vision service, labelled an image of a dark-skinned individual holding a thermometer as a "gun," while a similar image of a light-skinned individual was labelled as an "electronic device."
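An AlgorithmWatch-style label audit of this kind could in principle be reproduced with Google's own client library, as in the hedged sketch below. It assumes the google-cloud-vision package is installed and Google Cloud credentials are configured; the image file names are hypothetical placeholders.

```python
# A sketch of auditing Google Cloud Vision's labels across paired images
# that differ mainly in the subject's skin tone.
# pip install google-cloud-vision (requires Google Cloud credentials).
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def top_labels(path):
    """Return (label, confidence) pairs for one local image file."""
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    return [(label.description, round(label.score, 2))
            for label in response.label_annotations]

# Hypothetical file names standing in for the audit's paired test images.
for path in ["dark_skinned_hand_thermometer.jpg",
             "light_skinned_hand_thermometer.jpg"]:
    print(path, "->", top_labels(path))
```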
Such stereotypes show how algorithmic oppression prolongs prejudice based on colour. AI image generators routinely depict Africans as living in abject poverty. Western narratives have long painted Africa as a dark continent, with the former so-called leader of the free world, Donald Trump, going as far as labelling African nations "shithole countries". AI trained on racially biased data thus learns to associate Africanness with demeaning negative imagery: unbridled corruption and crime, despondency, anarchy, poverty, murder, mass migration, lawlessness, backwardness, darkness, low IQ, and hunger.
Financial and technological disparities have handed the Global North hegemony over the AI space, and with it the dominant role in designing and training AI systems. This urge to dominate AI mirrors the global political and economic mentality observed during colonialism. It reflects a larger question of who gets to define intelligence and whose knowledge is devalued. In this way, AI is weaponised to extend the legacy of colonisation, reinforcing ideas of racial superiority and inferiority.
AI is programmed largely on data sets, values, and knowledge systems derived from Western contexts, disregarding local African knowledge systems. This imposition of Western knowledge is reminiscent of the way colonisers imposed their systems of governance, education, and economics on colonised populations. When AI technologies fail to consider the socio-cultural realities of African societies, they become tools of epistemic domination. This is particularly problematic in sectors such as healthcare, agriculture, and education, where local knowledge is essential for effective decision-making.
'AI nationalism' is a process in which dependencies between low-tech and advanced-tech states are tacitly enforced along the historical Global North/South divide. Biased AI production reveals the project's colonialist impulses and a neo-Darwinian, linear view of scientific progress that leaves behind those who cannot catch up. Much like the gold-rush resource extraction of the colonial era, AI today relies heavily on extracting data from African populations, often without local control or ownership of that data.
For instance, AI-driven systems like those used in Kenya’s mobile loan market gather personal data from users’ phones but often make opaque decisions about creditworthiness, which can marginalise vulnerable populations. The majority of data generated by African populations is processed and stored on servers outside the continent, limiting the ability of African governments to regulate and control this information. This erodes data sovereignty and the ability of African nations to develop AI systems that reflect their unique socio-economic realities.
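One concrete check that auditors could apply to such a lending model is the disparate-impact ratio (the "four-fifths rule" drawn from US employment law): compare approval rates between a disadvantaged group and a reference group. The sketch below is self-contained; the approval records and group names are hypothetical, not data from any actual Kenyan lender.

```python
# A minimal disparate-impact check on loan decisions (the four-fifths rule).
# All records below are hypothetical illustrations.
def disparate_impact(decisions, groups, protected, reference):
    """Ratio of the protected group's approval rate to the reference group's."""
    def rate(g):
        d = [dec for dec, grp in zip(decisions, groups) if grp == g]
        return sum(d) / len(d)
    return rate(protected) / rate(reference)

decisions = [1, 0, 0, 1, 1, 1, 0, 1]  # 1 = loan approved, 0 = rejected
groups    = ["rural", "rural", "rural", "urban",
             "urban", "urban", "urban", "urban"]

ratio = disparate_impact(decisions, groups, protected="rural", reference="urban")
print(f"disparate impact ratio: {ratio:.2f}")  # below 0.8 is a red flag
```

Opaque credit models rarely publish the inputs needed for this calculation, which is precisely why critics call the decisions unaccountable.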
AI control reflects a neo-colonial dynamic in which Africans remain downtrodden yet dependent on foreign technologies while bearing the full costs of their implementation. AI systems thereby reinforce and amplify existing forms of social inequality, discrimination, and marginalisation along racial, gender, and socio-economic lines. AI platforms, particularly those developed outside Africa, often fail to account for African languages and cultural contexts. For example, voice recognition systems struggle with African accents and support few, if any, African languages, effectively cutting African users off from technological advancements. The result is homogenisation toward Western culture, as Africans are forced to use English or French, an erasure of Africa's rich linguistic diversity. This form of algorithmic oppression extends the historical marginalisation of African languages and knowledge systems under colonialism.
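The standard way to quantify this accent gap is word error rate (WER), computed separately per accent group. Below is a minimal sketch using the open-source jiwer package (pip install jiwer); the reference and hypothesis transcripts are invented placeholders, and in a real audit they would come from recordings of differently accented speakers run through the recogniser under test.

```python
# A sketch of measuring speech-recognition quality per accent group with
# word error rate (WER). Transcripts below are invented placeholders.
import jiwer  # pip install jiwer

samples = {
    "accent_A": [("the market opens at nine", "the market opens at nine")],
    "accent_B": [("the market opens at nine", "the market opens at night")],
}

for accent, pairs in samples.items():
    references = [ref for ref, _ in pairs]   # what was actually said
    hypotheses = [hyp for _, hyp in pairs]   # what the recogniser produced
    print(accent, "WER:", jiwer.wer(references, hypotheses))
# Systematically higher WER for one accent group indicates that group was
# under-represented in the recogniser's training audio.
```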
Foreign companies are rapidly penetrating African markets, offering governments soft loans to purchase their equipment and promising to set up and manage the resulting systems. In Kenya, for instance, Huawei installed a video surveillance network of 1,800 HD cameras and 200 HD traffic-surveillance systems across Nairobi. In Zimbabwe, the Guangzhou-based developer CloudWalk announced a controversial deal in 2018 to oversee a large-scale facial recognition programme in collaboration with authorities. AI used for surveillance raises serious privacy and civil liberties concerns.
African scholars such as Achille Mbembe critique the racial apparatuses embedded in digital technologies and warn against a hegemonic technocratisation that turns human beings into commodities. We are witnessing a third phase of colonisation, with AI as the latest instrument of high modernity: AI is creating new forms of dependency and exacerbating the historical divide between technologically advanced countries and those still developing their AI capacities. As Hogarth argues in his critique of 'AI nationalism', there is a growing global competition for dominance in AI technologies, and Africa risks being 'left behind' as countries in the Global North dominate AI research, development, and application.
The top AI companies, such as OpenAI, Meta, and Apple, are all headquartered in the Global North. This AI nationalism reinforces the colonial legacy of uneven development, in which African nations are consumers of a technology they cannot shape to their own needs. The globalisation of AI carries forward a conceit of the colonial order: the assumption that Western rationality is neutral, universal, and objective, and can therefore be detached from the specific cultural and historical contexts in which it was developed and applied everywhere.
This 'point zero' belief, which positions Western knowledge as the almost divine, sole, legitimate, and natural way of understanding the world and asserts its supremacy over other forms of knowledge, must be challenged in the context of AI in Africa, where the assumption is particularly problematic. The idea that intelligence can be placed in a machine implies that intelligence can be abstracted and separated from the people, cultures, and contexts from which it originates. Colonial logic is thus perpetuated in the realm of AI, where algorithms and AI-driven systems designed primarily within Western epistemological frameworks are applied universally, to societies with vastly different cultural, social, and historical realities.
The wanton disregard for, and failure to acknowledge and integrate, non-Western perspectives in AI system development perpetuates a form of epistemic violence. Epistemic violence is the use of power to control the production, distribution, and recognition of knowledge; it can involve denying certain groups the ability to create knowledge or exploiting their knowledge without their consent. This reinforces a worldview in which Western knowledge continues to dominate and marginalise other ways of knowing and being. The result is a situation in which discrimination is embedded in computer code and, increasingly, in artificial intelligence technologies that we rely on, by choice or not. 'Algorithmic Coloniality' and 'Algorithmic Oppression' both take hold through data. As Buolamwini says, "What's happening is these technologies are being deployed widely without oversight, oftentimes covertly, so that by the time we wake up, it's almost too late."
'Algorithmic Coloniality' is reinforcing bias and propagating skin-bleaching practices in Africa by reproducing the racial undertones of colourism, in which whiteness is cast as the prototype of the modern human. Light skin is regarded as superior among African youths, as witnessed by beauty brands' preference for white or fair-skinned models even when the target audience is predominantly dark-skinned. Algorithms allow social media platforms to choose what information billions of people see, shaping their perception of reality. Instagram, TikTok, and Snapchat have all introduced AI-enhanced beauty filters with skin-lightening effects; these are popular with Africans and are influencing African girls to resort to bleaching.
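The lightening effect itself can be measured. Below is a minimal sketch, assuming before-and-after portrait files are available locally, that compares mean luminance using the Pillow imaging library; the file names are hypothetical, and a rigorous study would restrict the measurement to detected skin regions rather than the whole frame.

```python
# A sketch of quantifying a beauty filter's lightening effect by comparing
# mean luminance before and after. File names are hypothetical placeholders.
from PIL import Image, ImageStat  # pip install Pillow

def mean_luminance(path):
    """Mean pixel value (0-255) of the image converted to 8-bit grayscale."""
    gray = Image.open(path).convert("L")
    return ImageStat.Stat(gray).mean[0]

before = mean_luminance("portrait_original.jpg")
after = mean_luminance("portrait_filtered.jpg")
print(f"luminance shift: {after - before:+.1f} (positive = lightened)")
```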
The near-ubiquitous use of algorithmically driven software, both visible and invisible to everyday people, demands a closer inspection of which values are prioritised in such automated decision-making systems. Algorithmic coloniality illustrates the complex interplay between global influences and local practices. Addressing these issues requires critical engagement with historical context and media representation, and the development of inclusive, equitable frameworks that celebrate diverse identities and challenge harmful norms. Beta-testing new AI systems in African countries, often without adequate safeguards, a practice Mohamed and Isaac call "ethics dumping", aligns with a centuries-old colonial attitude in which Africa was treated as a "laboratory" for scientific and technological advancement, a framing Jan Smuts euphemistically offered in 1930. Such experimentation came at the expense of African populations, deemed disposable in the name of progress.
The epistemological foundations of AI, especially in areas like statistics and data analysis, are deeply rooted in colonial legacies. Francis Galton, a key figure in the development of statistical methods such as inference, regression, and the normal distribution curve, conducted much of his foundational work in Southern Africa, applying these techniques to measure and classify human differences and intelligence among native populations. This historical link highlights how AI today operates within frameworks rooted in colonialist ideologies and practices. Algorithmic Coloniality and Algorithmic Oppression are both driving cultural homogenisation in Africa: research suggests that even as migration makes the world more multiculturally diverse, cultures are simultaneously becoming more homogenised.
The concepts of 'Algorithmic Coloniality' and 'Algorithmic Oppression' offer critical frameworks for understanding how AI functions within African society. They allow us to examine the ways AI systems replicate the historical patterns of exploitation, marginalisation, dehumanisation, and systemic inequality introduced by European colonialism. As AI technologies increasingly shape decision-making in Africa across sectors like healthcare, education, and governance, understanding their colonial legacies and oppressive tendencies is essential for developing equitable AI solutions. This article has examined why 'Algorithmic Coloniality' and 'Algorithmic Oppression' are forces Africa must fight to eradicate.
Article by Tafadzwa Wilson Mushaikwa