The master's algorithms won't dismantle the master's system
Carine Roos
Researcher in AI ethics, human rights & gender. MSc Gender, LSE | Postgrad in Emotional Balance. Founder of Newa, shaping ethical workplaces. Author of The Hidden Politics of AI. Speaker, LinkedIn Top Voice, columnist.
Audre Lorde's profound assertion that "the master's tools will never dismantle the master's house" offers a compelling lens for examining the promises of generative AI. Developed within hegemonic frameworks, these technologies frequently replicate and magnify the systemic inequalities they purport to mitigate. This essay examines the inherent constraints of generative AI systems such as ChatGPT and Bard, highlighting how they perpetuate bias and reinforce entrenched power structures. Drawing on feminist and decolonial perspectives, including the critical insights of Sylvia Wynter, I argue that achieving justice in technological development requires a radical rethinking of its foundational principles.
Superficial inclusion and the illusion of progress
The tech industry often champions "diversity" and "inclusion" as panaceas for bias in AI. However, as Dalal, Hall, and Johnson (2024) argue, these efforts frequently amount to tokenism, reducing inclusion to superficial gestures without challenging the systemic inequities embedded in algorithms and datasets. This symbolic engagement offers the illusion of progress while leaving power dynamics intact.
Lorde’s critique aligns with this observation, underscoring the futility of addressing deeply rooted systemic issues using tools forged by those same systems. Generative AI systems, trained on biased datasets, inherently reflect the inequities of their creators. Without a radical epistemic overhaul, these tools will continue reinforcing exclusion rather than dismantling it. Sylvia Wynter's (2003) exploration of the colonial construction of "humanity" underscores this dynamic, illustrating how such systems perpetuate exclusionary notions of who counts as fully human.
Generative AI models rely on vast datasets scraped from the internet and other sources, embedding societal biases into the resulting systems. Birhane (2021) dismantles the myth of data neutrality, showing how these datasets systematically marginalize certain voices while privileging others. Broussard (2023) critiques "technochauvinism," the belief in the inherent superiority of technological solutions, which often blinds us to how these tools perpetuate their creators' ideologies.
Structural bias becomes even more apparent in the industry's prioritization of profitability and engagement metrics. Algorithms are optimized to maximize user interaction, often amplifying divisive or stereotypical content and reproducing harmful narratives about marginalized groups. Tamale (2020) argues that these dynamics perpetuate colonial hierarchies, reinforcing the marginalization of the Global South and other historically oppressed communities.
Radical intersectionality and epistemic justice
Bassett, Kember, and O’Riordan (2019), in Furious: Technological Feminism and Digital Futures, propose "radical intersectionality" as a framework to address these challenges. Unlike tokenistic inclusion, radical intersectionality insists on redesigning technologies to prioritize the needs and perspectives of those most affected by systemic inequalities. This framework demands a fundamental reimagining of how technology is developed and governed, addressing the roots of inequity rather than its superficial manifestations.
D’Ignazio and Klein (2020), in Data Feminism, echo this call by advocating for participatory approaches that center marginalized voices. They argue for systems that actively challenge dominant epistemologies and redistribute power throughout the development process. These perspectives highlight the need for transformative change, questioning whose voices are included—and whose are excluded—in shaping AI systems.
Toward justice-oriented technologies
The systemic inequities perpetuated by generative AI demand a radical rethinking of its development and governance. Justice-oriented technologies must reject the belief that minor adjustments can rectify deep-seated biases. Instead, they should prioritize dismantling the power structures that underpin these systems.
Feminist and decolonial critiques offer valuable pathways forward. By centering marginalized voices, challenging colonial epistemologies, and addressing the material conditions that shape technological development, we can envision an AI landscape that serves the many rather than the few. Lorde reminds us that this requires entirely new tools forged with a commitment to justice, equity, and inclusivity at their core.
The challenges posed by generative AI are not merely technical but profoundly political and ethical. Addressing them requires moving beyond tokenistic gestures and superficial fixes to confront the power structures that shape these technologies. By integrating feminist and decolonial frameworks, we can challenge the assumptions underpinning AI development and work toward systems that genuinely serve the marginalized. The master's algorithms, like his tools, cannot dismantle the master's system—but with new frameworks rooted in justice and equity, we can imagine a future where technology becomes a tool of liberation rather than oppression.