Let Democracy Thrive in the New AI World Order
Kiran Kumar Yellupula
Group Head at Adfactors PR; Previous Organisations: IBM Research, Accenture, IBM Consulting, Infosys, JLL, Visa
Authenticity and Integrity (AI) will be critical for democracies to survive and thrive in the new world order of Artificial Intelligence (AI). And this is an "AI challenge" that only humans can solve. Are we ready?
AI is disrupting our lives, opening up infinite opportunities and improving many aspects of humanity. Generative AI (gen AI) is already transforming the way we work across sectors: there are over 300 use cases across 19 industries where business models are being reinvented. Some forecast that AI could add $15.7 trillion to the global economy by 2030. Yet global AI leaders believe AI also poses a catastrophic risk of subverting and undermining democracies. And we are not talking about a small set of doomsayers.
Paradoxically, even those building AI, in industry, science and academia, worry about the existential risks it poses. From OpenAI's CEO Sam Altman, Google DeepMind's CEO Demis Hassabis, Anthropic's CEO Dario Amodei and Bill Gates to Tesla's CEO Elon Musk and the "Godfathers of AI", all seem worried.
Growing Societal Threats
Leaders from Downing Street to the White House acknowledge the imminent risks of AI. The societal risks of AI systems are on a par with pandemics and nuclear war, and mitigating the risk of extinction from AI should be a global priority, experts warn. Yet it is disturbing that, without clear laws, experimental tools are unleashed on the masses. Recently, Altman reportedly expressed concern that misuse of AI by dictators to suppress people may not be "super far away".
Welcome to the age of deep fakes: experts say 90% of internet content could be synthetically generated by 2026. The risks of AI span disinformation, mass surveillance, predictive policing and authoritarian control. A recent study warns of a lack of trustworthiness: 50% of generative search engine responses lack supporting citations, and 25% of the citations provided are off point.
Shrinking Human Choices
Without guardrails, AI could make social media toxic, addictive, divisive and manipulative. Deceptive plagiarism of personal data, intellectual property and learning patterns will degrade science and debase ethics. Human decision-making could cede control to black-box AI, diminishing our ability to make our own choices.
The FTC warns that chatbots, images, deep-fake videos and voice clones can be used to generate realistic but fake content exponentially cheaper and faster, amplifying harm or targeting particular communities and individuals. Indicative applications include spear-phishing emails, fake websites, fake posts, fake profiles, fake consumer reviews, malware, ransomware and prompt injection attacks. Such tools may also facilitate large-scale imposter scams, extortion and financial fraud. Analysts fear autocratic governments could use them to discredit the opposition and foment social conflict.
AI could undermine democracy. Upcoming elections in the US, UK, EU and elsewhere could face a wave of AI-driven disinformation, as synthetic images, cloned voices and fabricated videos go viral at minimal cost in seconds. AI poses political dangers by misleading voters through micro-targeting, potentially hijacking influence at scale. It could be used to smear political opponents, while autocomplete in search can limit choices. Deep fakes released just before elections may be hard to detect or take down in time.
Mitigating the Risks
So, who takes responsibility and liability for the potential risks stemming from misuse of generative AI? When AI developers plead "please regulate us", it defies logic why half-baked AI was released to the masses along with its "existential risks". Balanced discourse and laws are required to mitigate the risks of misuse of AI tools. Mere rhetoric of "open distributed AI", "AI for everyone", "responsible AI" or "ethical AI" will not address the real issues, and will cost us immensely. With little say in the design, development and control of AI, our society is entering a new form of colonial world order.
A myopic, sectoral, task-oriented view of gen AI could be detrimental to the current world order. It will shift the sources of power and control, concentrating economic prosperity in the hands of a select few, widening gaps in opportunity and development, and further marginalizing the poor. What if artificial life forms are produced on demand? Humanity needs planned technological governance rooted in sensible thinking.
The recent call by Group of Seven (G7) leaders for the development and adoption of global standards for trustworthy generative AI regulation, as rich nations develop the technology, is a welcome step. India needs to invest in inclusive, explainable, accountable, safe, fair, interdisciplinary, privacy-preserving research and development to mitigate the risks associated with AI and serve the public good. Gen AI comes at a huge cost, reorganizing the global value chain and widening the gap between rich and poor.
Securing Democratic Rights
We must create awareness about the ethical, legal and societal implications of AI, and develop inclusive ways to pre-empt the risks posed by AI systems so that they reflect our values and promote equity. It is imperative to develop and enable access to high-quality datasets, testing and training, clear laws with a robust technical agenda, benchmarks, legislation and an AI risk management framework. The roadmap to a prosperous future must nurture an AI-ready workforce, build understanding of the limits and possibilities of AI-related work, and provide the education needed to tap AI systems effectively. This calls for collaboration, partnerships and synergy among academia, industry and the wider ecosystem. A holistic approach to AI governance, guidelines and laws will catalyze responsible progress for humanity at large, and for India.
It is time we nurture open, secure, inclusive and resilient societies rooted in universal digital rights, where technology advances trust, strengthens democratic values and fosters human rights for all. We must have regulations that advance democracy and curb both the misuse of AI and all streaks of authoritarianism. Archaic laws, framed before the birth of AI, must be rewritten. Leading the next wave of AI responsibly means fixing accountability and enforcing governance. We cannot allow further monopolization of cyberspace that threatens national security and stifles innovation and competition.
Shaping People-First AI
We must have a blueprint for an AI bill of rights that helps guide the safer design, use and deployment of automated AI systems. Public rights, opportunities and access to critical needs must not be infringed upon. The entire ecosystem must urgently work with scientists, academia, policymakers and legal luminaries to formulate policies and guidelines allowing scrutiny of AI before release, to ensure privacy and safety. We need datasheets, model cards, transparency, independent audits and testing labs that can provide scorecards on an AI system's flaws throughout the model training lifecycle. It is time to publish how models behave in flawed scenarios so they can be labelled safer.
A case to consider is the European Union, which is close to adopting the world's first comprehensive AI rules, which could serve as guidelines for other nations. Makers of gen AI must embrace a liability framework; they cannot be absolved of harm caused by AI. We need clear laws, independent safety reviews before deployment, ongoing monitoring, examination of research, optimal transparency and explainability, standards for model performance, fair rules, impact analysis, protection of data used for training, and independent AI audits.
Humanity stands at a crossroads. We cannot drift towards a programmed society of programmed citizens. Let us ensure AI does not harm our citizens. The choices we make today will shape the future.