The future of Artificial Intelligence and the need for a relational ethics for inclusive governance
Carine Roos
Researcher in AI ethics, human rights & gender. MSc Gender, LSE | Postgrad in Emotional Balance. Founder of Newa, shaping ethical workplaces. Author of The Hidden Politics of AI. Speaker, LinkedIn Top Voice, columnist.
We live in the datafication age, where all aspects of social life are transformed into data. This process, fundamental to the development of artificial intelligence, promises us a future where complex decisions can be automated and optimized on an unprecedented scale. However, as Boenig-Liptsin and Coté discuss, datafication often oversimplifies the complexity of human life into metrics and numbers, ignoring the relationships and contexts that make each person unique. When this data is used in AI systems, it can reinforce the very inequalities and exclusions it was intended to resolve.
The data that fuels AI systems often reflects historical contexts of inequality. As Coté observes, by shaping human identities and behaviors into digital patterns, datafication limits representation, reducing the diversity of human experiences and, in many cases, perpetuating stereotypes. This can lead to biased decisions, creating a cycle of exclusion and discrimination. This challenge compels us to question the supposed neutrality of algorithms and reflect on how these automated decisions shape society, often favoring particular groups over others.
Rethinking the common good in the age of data
A vital dilemma arises with large-scale data collection and analysis: who defines the "common good" in this new digital context? As Boenig-Liptsin argues, the concept of the common good is often heavily influenced by large corporations and governments, who have the power to steer data used to meet their priorities. This often results in technologies that promote a homogeneous vision of the "good life," centered on consumption and efficiency, while excluding alternative and diverse views of well-being.
Digital platforms prioritize content that generates high engagement, creating environments that favor specific themes and limit the diversity of perspectives. For Boenig-Liptsin, data ethics that genuinely serve the common good need to include diverse voices and ensure that technologies promote not only efficiency but also social well-being and community cohesion. This means rethinking data policies to represent a richer, more pluralistic view of well-being, respecting the experiences and needs of different communities.
In her work Ethics of Care, Fabienne Brugère proposes an alternative approach to social justice based on the ethics of care. Instead of viewing justice as a matter of individual rights, Brugère suggests a vision centered on interdependence and mutual respect, where individuals are seen as part of a community rather than isolated entities. This approach challenges the idea that care is solely a personal responsibility, arguing that it should be considered a public and collective good.
In the context of AI, the ethics of care implies designing technologies that respect the dignity and autonomy of people, especially those in vulnerable situations. This vision goes beyond correcting algorithmic errors, promoting mutual support and respect in order to build a more humane and inclusive society. The ethics of care challenges dominant individualism, showing that each person's well-being is tied to the well-being of others, and that AI must reflect these values.
An AI Public Body: Bringing governance into the hands of communities
In considering truly inclusive governance, Browne proposes the creation of an "AI Public Body" — a public entity that allows ordinary citizens, especially those most affected by AI technologies, to participate in decisions about the use of these technologies. Inspired by models of deliberative democracy, this body would include the voice of diverse communities, bringing a new perspective to decisions that, until now, have been dominated by technical experts.
This model represents a significant shift in how we understand governance, emphasizing that AI should not be just a matter of technical precision but also of justice and representativeness. Including these voices in the governance process creates a space where the social effects of technologies can be better understood and addressed, resulting in governance that reflects society's diversity.
Algorithmic justice is often treated as a matter of mathematical optimization, adjusting data and metrics to minimize statistical inequalities. However, as Van Nuenen points out, social justice in AI must go beyond technical adjustments, considering the complex structural inequalities that this data represents. Instead of limiting itself to statistical results, social justice requires a deeper understanding of the impact of technological decisions on people's lives.
Consider algorithms used in credit scoring or facial recognition: these systems do not operate in a vacuum; they are part of a society with a long history of inequality. For AI to be truly fair, it must be designed with a commitment to mitigating these inequalities, considering the realities experienced by marginalized communities and integrating their voices into the development and application of these technologies.
Toward a more humane and just AI
The age of datafication has brought significant technological advances but also challenges us to reflect on the role of AI in promoting an equitable society. An AI that adopts the ethics of care and inclusive governance offers a more humane and fair model, where technologies serve all people, not just commercial interests. Developing AI with these values means building a digital future that respects diversity and collective well-being, promoting a fairer and more equitable society.
This path to a more humane AI requires a shift in perspective, where datafication and artificial intelligence become tools to strengthen our mutual responsibility and promote inclusion. Only with an approach that values care, inclusion, and social justice can we transform AI into a positive force that reflects the best of our humanity.
*Carine Roos is the founder of Newa and an expert in inclusive leadership, human rights, and AI ethics. With over a decade in social impact, she advocates for ethical AI practices reflecting marginalized perspectives and brings a Global South viewpoint to AI and digital governance discussions. Carine has co-authored works on gender equity and psychological safety, contributing to diversity and responsible technology standards.