AI Literacy: not just tech
Maike Van Oyen
Programme Development and Management | Employee Experience | DEIB | Wellbeing | Creating prosperous systems by building the inclusive competency
Artificial intelligence (AI) is reshaping almost every part of our modern lives. Yet, the development, deployment, and consequences of using AI remain deeply intertwined with the socio-technical and historical conditions of the societies that produce it. Truly understanding AI - what it is, how to use it, and how to evaluate and critique it - requires both technical proficiency and a holistic grasp of the systemic inequalities, labour dynamics, and environmental impacts embedded in AI's lifecycle. This article synthesises insights into these socio-technical realities. It also explores how historical biases in society and science shape AI, amplifying inequality and perpetuating discrimination.
This article is not designed to dictate whether AI is inherently "good" or "bad". Like any powerful tool, AI can be both. Instead, the purpose of this article is to enhance AI literacy among those tasked with deciding how and when to deploy this technology to achieve specific goals. By understanding the socio-technological, historical, and environmental contexts that shape AI's development and application, decision-makers can better evaluate its potential benefits and risks. This awareness enables informed, thoughtful choices about whether and how to use AI, ensuring its application aligns with ethical, sustainable, and equitable practices.
Towards true AI literacy
AI literacy must equip individuals to critically evaluate these technologies. That includes understanding the areas explored below: the socio-technical conditions under which AI is built, its environmental costs, and the legacy of societal and scientific inequality it inherits.
Understanding the socio-technical conditions of AI
The labour force behind AI
The development of AI systems relies on vast datasets that must be carefully cleaned, labelled, and refined. Much of this labour-intensive work is outsourced to underpaid workers, often from underserved populations in the Global South or within carceral systems. Platforms like Amazon Mechanical Turk and outsourcing firms hire workers in countries like India, Kenya, and the Philippines, who endure low wages, job insecurity, and exposure to harmful content while contributing to the training of AI models.
According to reports, incarcerated individuals in multiple U.S. states annotate and clean data for AI systems under exploitative conditions, perpetuating a pattern of racial capitalism that leverages Black and Brown bodies for technological advancement. Scholars such as Ruha Benjamin have explained in detail how and why this practice is profoundly harmful. Similar data-labelling programmes involving prisoner participation have also been piloted in Finland.
Reports have also emerged about the coerced use of Uyghur populations in Xinjiang for tasks related to data annotation for facial recognition AI. These same systems are later deployed to monitor and suppress the very communities whose labour was exploited, showcasing a direct link between labour exploitation and technological oppression.
Environmental costs
The voracious computational demands of AI systems, especially large-scale machine learning models, are driving up energy use, water consumption, and carbon emissions at an alarming rate. As these AI technologies become increasingly central to modern life, their mounting environmental toll is cause for grave concern. While some organisations are exploring sustainable energy sources, the threat of "green colonialism" looms, potentially undermining genuine sustainability efforts. In the sections that follow, we will examine the multifaceted environmental impact of AI, drawing insights from recent research and expert analysis.
How much energy does AI require?
Training and running AI models requires substantial computational resources. High-performance servers in data centres execute billions of operations per second, consuming vast amounts of electricity. A widely cited study from the University of Massachusetts Amherst found that training a single large natural language processing model could emit approximately 626,000 pounds of CO₂, equivalent to the lifetime emissions of five average cars or roughly 315 round-trip flights between New York and San Francisco.
The environmental impact of AI is significant and growing. Data centres, which power AI computations, accounted for 1% of global electricity consumption in 2020. This percentage is expected to rise as AI adoption grows. A report by the International Energy Agency (IEA) estimates that data centre energy usage will increase by 50% by 2030 if efficiency improvements do not keep pace. It is also important to note that AI's energy consumption tends to grow rapidly as models and datasets get larger. While hardware efficiency improves, the computational demands of cutting-edge AI systems, such as GPT-4, outpace these gains. For instance, despite newer GPUs being more energy-efficient, training a large-scale language model can still emit as much CO₂ as hundreds of transcontinental flights.
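To make such figures concrete, the back-of-envelope arithmetic behind them is fairly simple: multiply the number of accelerators by their power draw, training time, and the data centre's overhead factor, then apply the carbon intensity of the local electricity grid. The sketch below illustrates this with purely assumed figures; the GPU count, power draw, PUE, and grid mix are not taken from any specific training run.

```python
# Back-of-envelope estimate of training emissions.
# All input figures below are illustrative assumptions, not measured values.

def training_emissions_kg(gpu_count, gpu_power_w, hours, pue, grid_kg_co2_per_kwh):
    """Estimate CO2 emissions for a training run.

    gpu_count            -- number of accelerators used
    gpu_power_w          -- average power draw per accelerator in watts
    hours                -- wall-clock training time in hours
    pue                  -- data-centre power usage effectiveness (overhead factor)
    grid_kg_co2_per_kwh  -- carbon intensity of the local electricity mix
    """
    energy_kwh = gpu_count * gpu_power_w / 1000 * hours * pue
    return energy_kwh * grid_kg_co2_per_kwh

# Hypothetical run: 512 GPUs at 300 W for 14 days, PUE 1.5, 0.4 kg CO2/kWh grid.
kg = training_emissions_kg(512, 300, 14 * 24, 1.5, 0.4)
print(f"~{kg / 1000:.0f} tonnes CO2")   # roughly 31 tonnes under these assumptions
```

The answer swings by an order of magnitude depending on hardware, training duration, and electricity mix, which is one reason published emissions figures for comparable models vary so widely.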
Data and water usage
Data centres rely heavily on water for cooling, especially in hot climates. A single facility can consume millions of gallons per day to prevent overheating. For instance, Microsoft's Arizona data centres used over 300 million gallons (1.136 billion litres) in 2022, straining scarce local water supplies in that drought-prone region, according to a Bloomberg report. Since data centres are often situated in areas with affordable electricity and water but limited natural resources, this can spark conflicts with surrounding communities over resource allocation.
Carbon emissions
The carbon footprint of AI systems is a significant concern. The energy-intensive nature of training AI models and the use of fossil fuels to power data centres contribute to substantial greenhouse gas emissions. As AI models grow larger, their emissions also increase. For example, OpenAI reported that training GPT-3 required roughly 3,640 petaflop/s-days of computation (the equivalent of about 3,640 machines, each performing one quadrillion calculations per second, running nonstop for 24 hours), which could translate into several hundred metric tons of CO₂, depending on the energy mix of the data centre used.
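For readers unfamiliar with the unit: one petaflop/s-day is the work of a machine sustaining 10^15 operations per second for a full day. The sketch below converts a compute budget expressed in petaflop/s-days into a rough energy figure; the hardware efficiency and overhead factor are explicit assumptions, not measurements.

```python
# Convert a training compute budget (petaflop/s-days) into a rough energy figure.
# The efficiency and overhead numbers are assumptions, not measurements.

PFLOP_S_DAY = 1e15 * 86_400           # floating-point operations in one petaflop/s-day

compute_pfs_days = 3_640              # approximate GPT-3 training compute reported by OpenAI
total_flops = compute_pfs_days * PFLOP_S_DAY

flops_per_joule = 50e9                # assumed effective efficiency (~50 GFLOPs per joule)
pue = 1.4                             # assumed data-centre overhead factor

energy_joules = total_flops / flops_per_joule * pue
energy_gwh = energy_joules / 3.6e12   # 1 GWh = 3.6e12 joules
print(f"~{energy_gwh:.1f} GWh")       # roughly 2-3 GWh under these assumptions
```

Turning that energy figure into emissions then depends entirely on the grid mix, which is why the same model trained on hydro-powered versus coal-powered data centres can differ in footprint by a factor of ten or more.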
Despite a push toward renewable energy, a large proportion of data centres still rely on non-renewable sources like coal and natural gas. In 2022, only 43% of global data centre energy came from renewable sources, according to the IEA.
Other resources affecting our ecosystem
The rapid obsolescence of AI hardware significantly contributes to the mounting e-waste crisis. Advanced devices such as GPUs, TPUs, and servers have short lifespans due to the swift pace of technological development, leaving obsolete hardware in landfills. This waste often contains toxic materials, such as heavy metals, that leach into the soil and water, causing long-term environmental harm. While e-waste recycling could mitigate some of the impacts, it remains an underdeveloped and inefficient process. Rare earth metals are difficult to extract from discarded electronics, and current recycling rates are alarmingly low, with only about 17.4% of e-waste being properly recycled globally (as per a 2020 report by the Global E-Waste Monitor).
The manufacturing of advanced AI hardware relies on rare earth metals like cobalt, lithium, and nickel, whose extraction is environmentally destructive and often linked to exploitative labour practices. Mining for rare earth metals involves intensive water usage and chemical processing, leading to pollution of local water supplies and surrounding ecosystems. For example, cobalt mining in the Democratic Republic of Congo (DRC) has been associated with significant environmental degradation, including deforestation and contamination of rivers.
Additionally, the demand for rare earth metals far exceeds the rate at which they can be replenished. Over time, the depletion of these finite resources could lead to skyrocketing costs, increased geopolitical conflicts, and unsustainable technological growth.
A 2021 study in Nature Sustainability found that the production of a single GPU for high-performance computing emits approximately 1.5 tons of CO₂-equivalent over its lifecycle, not including the energy costs of operation.
Geographical inequities
The environmental costs of AI are unevenly distributed. Data centres are disproportionately located in regions with lax regulations or cheaper resources, placing the ecological burden on these areas. Meanwhile, the benefits of AI development are concentrated in wealthier nations.
Regions like Sub-Saharan Africa, Southeast Asia, and parts of Latin America are often selected for mining rare earth materials, leading to local environmental degradation. Simultaneously, data centres in these areas strain local water and energy infrastructure without providing benefits to the surrounding population. Lithium extraction in the Atacama Desert of Chile consumes vast amounts of water (an estimated 500,000 gallons per ton of lithium extracted), contributing to water shortages for nearby farming and indigenous communities. And Google's data centre in Hamina, Finland, has been criticised for using local seawater for cooling, raising concerns about the impact on marine ecosystems.
Indigenous communities near mining sites often face displacement and threats to their cultural heritage and ways of life, but many are leading efforts to resist and adapt. In Indonesia, nickel mining for battery production has led to deforestation and pollution, disrupting the livelihoods of local fishermen and farmers. Indigenous groups have protested, citing the destruction of sacred lands and loss of access to clean water sources.
The legacy of societal and scientific inequality
AI systems are built on existing data and information, and they learn what is important, what is valued, and what is valid through that same lens. AI does not merely reflect reality; it reflects a version of reality shaped by human history, and it therefore reproduces the same values, societal biases, priorities, and processes embedded in its datasets, designs, and deployment. And because there is a tension between the drive for rapid innovation and the need for ethical oversight, many organisations deprioritise fairness and inclusivity in favour of speed and profitability.
Inequity in datasets
AI systems often replicate the biases and inequities of their training data, which are shaped by historical discrimination, underscoring the need for ethical design. For instance, credit scoring algorithms have been known to disadvantage Black and Hispanic borrowers because they are trained on data influenced by redlining, a practice that systematically excluded people of colour from financial services.
Similarly, predictive policing tools like PredPol disproportionately target communities of colour by relying on historically biased crime data. And facial recognition systems trained on predominantly light-skinned faces have much higher error rates when identifying darker-skinned individuals: the Gender Shades study found error rates as high as 34.7% for dark-skinned women compared with just 0.8% for light-skinned men.
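The methodological lesson of Gender Shades is straightforward: report error rates per demographic subgroup instead of a single aggregate number. Below is a minimal sketch of such a disaggregated audit; the column names and toy data are illustrative assumptions, not the study's actual benchmark.

```python
import pandas as pd

# Minimal disaggregated error-rate audit in the spirit of Gender Shades:
# report the misclassification rate per subgroup instead of one aggregate figure.
# Column names ('skin_type', 'gender', 'y_true', 'y_pred') are illustrative assumptions.

def error_rates_by_group(df, group_cols):
    """Return the misclassification rate and sample size for every subgroup."""
    df = df.assign(error=(df["y_true"] != df["y_pred"]).astype(float))
    return df.groupby(group_cols)["error"].agg(["mean", "count"]).rename(
        columns={"mean": "error_rate", "count": "n"}
    )

# Toy predictions; a real audit would use a labelled benchmark dataset.
toy = pd.DataFrame({
    "skin_type": ["dark", "dark", "light", "light", "dark", "light"],
    "gender":    ["F", "F", "M", "M", "M", "F"],
    "y_true":    [1, 1, 0, 1, 0, 1],
    "y_pred":    [0, 1, 0, 1, 0, 1],
})
print(error_rates_by_group(toy, ["skin_type", "gender"]))
```

An aggregate accuracy score can look excellent while one subgroup carries almost all of the errors, which is exactly the pattern the audit above is designed to surface.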
The systemic bias in citation culture, where the work of white European and American male professionals has historically been disproportionately valued, has direct implications for artificial intelligence (AI). AI systems learn from datasets that reflect societal structures, including the knowledge and values deemed "important" or "valid." When these systems are trained on academic and professional data dominated by the voices of historically privileged groups, they inherit the same biases that have marginalised underrepresented perspectives.
For instance, AI models used in natural language processing, knowledge databases, or academic recommendation systems often prioritise information that aligns with dominant narratives. This means the contributions of underrepresented groups—women, Black scholars, Indigenous researchers, and others—are underrepresented in the data, making their perspectives less likely to influence AI's outputs. The result is an AI ecosystem that perpetuates and amplifies systemic inequities in knowledge production and dissemination.
Bias in algorithm design
The homogeneity of the AI workforce exacerbates bias in algorithm development. Women and people of colour are underrepresented in AI roles, leading to blind spots in the design. Globally, only 22% of AI professionals are women, and the representation of Black and Hispanic individuals in U.S. tech companies remains below 10%. The result: AI systems often exclude the lived experiences of marginalised groups, embedding biases that perpetuate systemic discrimination.
This issue is especially problematic in medical AI, where systems trained predominantly on Western, male data perform poorly for women and people from other regions. For example, a 2020 study found that AI tools for diagnosing skin cancer were less effective for darker-skinned patients due to inadequate representation in the training datasets.
Colonial histories of AI
The AI systems we use often replicate the flawed, prejudiced logic of colonialism and racism. Historical practices like phrenology and anthropometry, which aimed to categorise populations based on physical characteristics, have laid the foundation for modern biometric technologies employed for surveillance and policing. These tools disproportionately target communities of colour.
For example, facial recognition systems used by law enforcement have significantly higher misidentification rates for Black and Asian individuals compared to White individuals. A 2019 study by the National Institute of Standards and Technology (NIST) found error rates 10 to 100 times higher for these marginalised groups.
And once again, for the people in the back: Digital colonialism disproportionately impacts the Global South, where communities are actively resisting exploitative practices. Data centres located in resource-constrained regions exacerbate local environmental degradation without adequately serving the needs of nearby communities, while the Global North reaps the benefits of AI's development.
Grassroots activism against AI
Communities are pushing back against AI systems that perpetuate harm, using both analogue and digital strategies.
AI for social good
This article underscores critical concerns surrounding the development and deployment of AI, drawing attention to the ethical, social, and environmental challenges it presents. However, it is equally important to acknowledge that AI can serve as a powerful tool for positive change when developed and deployed under the right conditions. The impact of these technologies is not inherently good or bad; it depends on the intentions behind their use and the structures in place to govern their application.
The examples shared here demonstrate that while AI can indeed be a force for good, its potential can only be fully realised when we align its deployment with ethical principles, equitable practices, and sustainable frameworks:
Early warning systems for natural disasters:
Fei-Fei Li's work on AI applications in disaster management illustrates how machine learning models can predict floods, cyclones, and earthquakes in vulnerable regions like Southeast Asia. For example, IBM's Watson has been integrated into programmes that alert farmers in Africa about impending weather changes, enabling better preparation and resource management.
Healthcare accessibility:
AI has been used to detect diseases like diabetic retinopathy in rural India, where access to specialists is scarce. Fei-Fei Li's insights on using computer vision for scalable healthcare solutions underline the transformative potential of such innovations.
Education in underserved communities:
AI-driven language translation tools have helped bring educational content to indigenous populations in South America, breaking barriers to learning.
AI in forest monitoring and restoration
AI-powered satellite imaging and data analysis tools are used to monitor deforestation and illegal logging activities in real-time. Projects like Global Forest Watch leverage AI to process satellite data and detect changes in forest cover, enabling conservationists and policymakers to intervene quickly.
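As a simplified illustration of the underlying idea (not Global Forest Watch's actual pipeline), forest-loss alerts can be derived by comparing a vegetation index such as NDVI between two satellite passes and flagging pixels where dense vegetation has sharply declined. The thresholds and toy arrays below are assumptions for illustration only.

```python
import numpy as np

# Simplified illustration of deriving forest-loss alerts from satellite data:
# compare a vegetation index (NDVI) between two dates and flag large drops.
# Thresholds and toy arrays are illustrative; operational systems use far richer models.

def ndvi(nir, red):
    """Normalised Difference Vegetation Index from near-infrared and red bands."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def forest_loss_mask(nir_t0, red_t0, nir_t1, red_t1, forested=0.6, drop=0.3):
    """Flag pixels that were forested at time t0 and lost most vegetation by t1."""
    ndvi_t0, ndvi_t1 = ndvi(nir_t0, red_t0), ndvi(nir_t1, red_t1)
    return (ndvi_t0 > forested) & ((ndvi_t0 - ndvi_t1) > drop)

# Toy 2x2 scene: one pixel goes from dense vegetation to bare ground.
nir_t0 = np.array([[0.8, 0.7], [0.9, 0.2]]); red_t0 = np.array([[0.1, 0.1], [0.1, 0.2]])
nir_t1 = np.array([[0.8, 0.2], [0.9, 0.2]]); red_t1 = np.array([[0.1, 0.3], [0.1, 0.2]])
print(forest_loss_mask(nir_t0, red_t0, nir_t1, red_t1))   # flags the degraded pixel
```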
AI used for coral reef protection
AI models like those developed by CoralNet are used to map and monitor coral reef health. These systems identify areas of degradation and help scientists prioritise interventions, such as reef restoration or protective measures, ensuring marine biodiversity thrives.
Final thoughts
The development of AI is deeply embedded in complex socio-technical, historical, and environmental systems shaped by inequality. Addressing these complexities requires a comprehensive approach to AI literacy that integrates technical knowledge with ethical, historical, and environmental perspectives.
Despite these important challenges, progress is being made. Companies are increasingly investing in fairness-aware AI tools and frameworks, such as IBM's AI Fairness 360 and Google's What-If Tool, which help developers identify and mitigate bias in models. Regulators and industry bodies are also introducing guidelines and legislation, such as the European Union's AI Act, that emphasise transparency, accountability, and fairness in AI systems. However, the impact of these efforts would be amplified if similar rules were adopted globally.
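The headline metrics such toolkits expose are conceptually simple. The sketch below writes out two of them, disparate impact and statistical parity difference, in plain numpy so the arithmetic is visible without depending on any particular library; the toy decisions and group labels are purely illustrative.

```python
import numpy as np

# Two common fairness metrics (also available in toolkits like AI Fairness 360),
# written out in plain numpy. The toy decisions and group labels are illustrative.

def selection_rate(y_pred, mask):
    """Share of positive decisions within a group."""
    return y_pred[mask].mean()

def disparate_impact(y_pred, privileged):
    """Ratio of selection rates (unprivileged / privileged); 1.0 means parity."""
    return selection_rate(y_pred, ~privileged) / selection_rate(y_pred, privileged)

def statistical_parity_difference(y_pred, privileged):
    """Difference in selection rates (unprivileged - privileged); 0.0 means parity."""
    return selection_rate(y_pred, ~privileged) - selection_rate(y_pred, privileged)

# Toy loan decisions: 1 = approved.
y_pred     = np.array([1, 1, 0, 1, 0, 0, 1, 0])
privileged = np.array([True, True, True, True, False, False, False, False])
print(disparate_impact(y_pred, privileged))              # ≈ 0.33 with this toy data
print(statistical_parity_difference(y_pred, privileged)) # -0.50 with this toy data
```

Metrics like these are a starting point for scrutiny, not a certification: a model can pass a parity check on one attribute while still harming groups the check never measured.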
To address the ecological impact, concerted efforts are needed to design hardware with longer lifespans and modular components, use low-carbon AI techniques, develop efficient recycling technologies, and incorporate ethical sourcing, fair labour practices and environmental recovery programmes. Policymakers and corporations should adopt circular economy principles in tech manufacturing.
Efforts to include diverse voices in AI development are gaining momentum, with initiatives aimed at increasing representation in tech and funding for community-led AI projects. However, significant work remains to create truly equitable AI.
Organisations need to move beyond superficial fixes and address systemic issues, including inequities in data collection and the structural biases of the societies AI systems operate within. This requires fostering interdisciplinary collaboration, incorporating social science and ethics into AI development, and implementing robust accountability measures. Centring the voices of historically marginalised communities in AI design and governance is essential to ensuring AI systems serve the diverse needs of all people, not just the privileged few. Last, we should be very critical of our current definition of success, as this definition influences our behaviours at work and at home. Only by tackling these root causes can we build AI systems that genuinely promote fairness and equity.
Reflective questions on success: