AI and Human Rights: The Age of Perpetual Instability Has Begun
The idea that human rights are universal, fundamental, and non-negotiable has shaped modern international law for nearly a century. Yet, as artificial intelligence reshapes economies, political structures, and even the very concept of autonomy, it is becoming increasingly clear that the human rights frameworks we rely on are not built for the realities of an AI-driven world.
For years, governments and corporations have spoken about ethical AI, yet in practice, progress has been largely performative. Tech firms sign declarations while simultaneously securing defense contracts for AI-powered surveillance. Governments warn about AI risks while rushing to develop national security applications that undermine their own principles.
Meanwhile, as economic instability grows and global conflicts deepen, we are beginning to see the first real signs of what life looks like under perpetual instability: a future where AI exacerbates inequality, fuels misinformation, and entrenches power structures beyond the reach of democratic control. The gap between rhetoric and reality in AI governance has never been wider.
Artificial intelligence does not operate in isolation. It is deeply embedded in who gets hired, who gets a loan, who gets imprisoned, who is watched, and who is ignored (European Parliament, 2024). It determines which voices are amplified, which populations are targeted, and how decisions are made at a speed and scale that human institutions cannot match. The idea that human rights violations are solely the work of oppressive states or violent regimes no longer holds. A flawed algorithm, a biased dataset, or an unregulated deployment of AI can just as easily create systemic, large-scale harm (European Union Agency for Fundamental Rights, 2022).
The Universal Declaration of Human Rights, adopted in 1948, was a response to the horrors of war and the urgent need to codify protections for human dignity (un.org, 2025; UN, 1948). It was drafted in an era when the greatest threats to human rights came from authoritarianism, conflict, and economic deprivation. Today, we must recognize that AI is not just a tool. It is a force reshaping power itself. And when power shifts, rights must be redefined, reinforced, and protected with renewed urgency.
AI regulation has so far followed a predictable pattern:
- The EU AI Act is the most ambitious attempt to regulate artificial intelligence to date, yet it already faces industry pressure and uneven adoption.
- In the United States, the previous AI executive order was repealed; the current administration has shifted focus toward fostering innovation, adopting a lighter regulatory approach to AI development and deployment.
- China's approach prioritizes state control ahead of human rights.
- France recently hosted the AI Action Summit, which emphasized investment and innovation over rights protections.
- The Council of Europe's Framework Convention on AI lacks sufficient ratifications to be effective.
- Nations in the Global South, where AI could have the most transformative impact, are largely left out of the conversation.
Even where AI regulations exist, they tend to focus on risk management, not rights protection. There is little appetite for tackling the deeper philosophical question: What protections should be guaranteed in an AI-driven world?
For the past year, I have been developing a research paper that argues for the explicit recognition of human rights in the age of AI. In the paper, I argue for structural change: ensuring that AI is governed by frameworks as powerful as the forces driving its deployment.
At the core of my argument is the proposal for a new article in the Universal Declaration of Human Rights:
The Right to AI Accountability, Protection, and Benefit
All individuals have the right to protection from AI-driven harms, the right to transparency and accountability in algorithmic processes, and the right to equitable access to the benefits of AI technologies. States and relevant institutions shall develop, deploy, and govern AI systems, including advanced forms of artificial intelligence, in ways that uphold human dignity, fairness, and the well-being of all.
This is not a theoretical exercise. AI is already deciding who receives medical treatment, welfare support, educational opportunities, and legal protections. Without binding commitments to transparency, fairness, and oversight, we risk constructing an infrastructure where human rights become conditional privileges, controlled by opaque systems and unaccountable actors. For these reasons, we must act now.
There is a growing sense that AI is developing too fast to regulate, that the problem is too complex to solve, or that whatever damage AI causes can simply be corrected later. These assumptions are dangerously flawed. Regulation that lags behind deployment is regulation that fails.
AI is not a force of nature. It is the product of human choices, choices made by companies, policymakers, and the engineers who build these systems. If AI is shaping the world faster than institutions can respond, then the response must scale to match the challenge.
The next decade will determine whether AI serves as a tool for greater human dignity or a mechanism for new forms of exploitation and control. This is why I have written this article now, before my full academic paper is published: to ensure that I have said my piece, that I have done my small part to contribute to a conversation we need to have now, and to help ensure that we do not wait to act until after the damage is done.
Human rights frameworks must evolve. AI must be governed by more than corporate goodwill. The world must move from rhetoric to reality. Get to work! The stakes could not be higher.
Note: This article is a personal opinion piece and does not necessarily reflect the views of any organization or institution.
About the Author
Clara Hawking is a leading global voice in AI governance, ethics, and human rights. As a specialist in computer science, AI ethics, philosophy, applied ethics, and international policy, she has spent years researching the intersection of emerging technologies and human rights law. Her work challenges the rhetoric-versus-reality problem in AI governance, calling for concrete, enforceable protections against AI-driven discrimination, surveillance, and systemic inequality.
Clara is the Co-Founder of Kompass Education, an organization dedicated to AI governance in schools, EdTech, and policy frameworks, and serves as the Chief AI Officer of IQQEdge, where she leads high-level discussions on responsible AI strategy on behalf of parents and guardians. She has advised schools, policymakers, and international organizations on AI regulation, ethical AI deployment, and human rights impact assessments.
Her forthcoming academic paper, Between Rhetoric and Reality: Digital Human Rights and the Limits of AI Governance, proposes a major philosophical expansion of the Universal Declaration of Human Rights to account for AI’s role in shaping power, governance, and human dignity.
Clara is an award-winning advocate for AI transparency, corporate accountability, and the rights of individuals in an AI-driven world. Her expertise in global ethics, policy, and technology governance has made her a sought-after global speaker, writer, and advisor at the forefront of AI regulation debates.