How can governments regulate AI in a time when AI is everywhere?
Mohammad J Sear
Digital Gov. & Public Sector Consulting Leader, Middle East and Africa (MENA) at EY
We are living in the age of artificial intelligence, where intelligent machines permeate every aspect of our lives.
From voice assistants in our pockets to autonomous vehicles on our streets to personalised public services, AI has become ubiquitous.
However, with great power comes great responsibility, and governments around the world are grappling with the daunting task of regulating this rapidly advancing technology.
The challenges are not only technical but also political and economic.
How can we effectively regulate AI when a handful of big companies hold its reins?
In this article, we delve into the intricate world of AI regulation and explore potential solutions.
How to begin regulating AI?
Lawmakers and policymakers across the world have already begun to address some of the issues raised in OpenAI CEO Sam Altman's testimony before the US Senate on 16 May 2023.
For example, the US National Institute of Standards and Technology (NIST) has published an AI Risk Management Framework, developed with input from the US Chamber of Commerce, the Federation of American Scientists, business and professional associations, and technology companies.
Similarly, the European Union's AI Act is built on a risk-based model.
It sorts AI applications into tiers of risk: unacceptable risk (prohibited outright), high risk (subject to strict obligations), and minimal risk (largely unregulated).
By categorising AI applications, these governments acknowledge that the risks associated with automated hiring systems differ from those posed by AI-powered spam filters.
This differentiation helps in developing targeted regulatory approaches tailored to address the specific challenges and concerns of each category.
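To make the tiered model concrete, it can be thought of as a lookup from an application type to a regulatory obligation. The sketch below is purely illustrative: the tier names follow the article's description of the EU AI Act, but the example applications and the obligations attached to each tier are assumptions for demonstration, not legal classifications.

```python
# Illustrative sketch of a risk-tier lookup inspired by the EU AI Act's
# risk-based model. Tier assignments and obligations below are assumptions
# for demonstration only, not legal classifications.

RISK_TIERS = {
    "social_scoring": "unacceptable",   # example of a prohibited use
    "automated_hiring": "high",         # example of a high-risk system
    "spam_filter": "minimal",           # example of a minimal-risk system
}

OBLIGATIONS = {
    "unacceptable": "prohibited",
    "high": "conformity assessment, documentation, human oversight",
    "minimal": "no mandatory obligations",
}

def obligation_for(application: str) -> str:
    """Return the illustrative obligation for a given AI application."""
    tier = RISK_TIERS.get(application, "unclassified")
    return OBLIGATIONS.get(tier, "case-by-case review")

# An automated hiring system and a spam filter land in different tiers,
# which is exactly the differentiation the tiered model is meant to capture.
print(obligation_for("automated_hiring"))
print(obligation_for("spam_filter"))
```

The point of the structure is the one the article makes: the same regulator can impose heavy obligations on hiring systems and almost none on spam filters without writing separate laws for each application.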
But, is this enough?
Lawmakers are exploring an intriguing avenue to tackle the regulation of artificial intelligence: licensing companies before they release AI technologies to the public.
Under this approach, not only individuals but entire companies would need to meet a defined set of requirements, adhere to standards of practice, and undergo extensive training in algorithmic auditing.
Holding companies accountable goes beyond individual licensing: it requires the establishment of companywide standards and practices.
Addressing concerns of bias and fairness in AI goes beyond technical solutions, as well.
It calls for comprehensive risk mitigation practices (such as the adoption of institutional review boards for AI) to ensure that ethical considerations are thoroughly examined.
Furthermore, strengthening existing statutes on consumer safety, privacy, and protection is crucial to shed light on complex AI systems and to guide algorithmic accountability norms.
The power lies in collaboration!
Drawing inspiration from renowned international organizations like CERN and the Intergovernmental Panel on Climate Change, experts are advocating for a collaborative approach to regulate AI.
The internet has been successfully managed by a mix of nonprofits, civil society, industry, and policymakers through organizations such as the Internet Corporation for Assigned Names and Numbers and the World Telecommunication Standardization Assembly; these models offer valuable insights for shaping the future of AI regulation.
By embracing this collaborative mindset, industry leaders and policymakers can forge a path towards effective governance in the realm of artificial intelligence.
So, instead of establishing a new agency that risks being captured by the very industry it seeks to regulate, lawmakers have an opportunity to promote accountability through alternative avenues.
Comprehensive laws addressing data privacy are imperative to safeguard individuals' rights.
To tackle potential algorithmic risks posed by AI, world governments can prioritize strengthening disclosure requirements for both AI firms and users.
Furthermore, encouraging the widespread adoption of AI risk assessment frameworks and mandating processes that protect individual data rights and privacy would be viable regulatory approaches.
By embracing these feasible strategies, governments can play a pivotal role in ensuring AI's responsible integration into society, driving both the private and public sectors towards responsible AI practices.
Bottom line
As artificial intelligence continues to shape our world, the pressing need for effective regulation becomes increasingly evident.
Collaboration emerges as a powerful tool to navigate this complex landscape, drawing inspiration from successful models seen in international organizations and the management of the internet.
Lawmakers can leverage existing frameworks and pass relevant legislation rather than creating new agencies susceptible to industry influence.
As we shape the future of AI, it is essential to strike a balance between encouraging innovation and safeguarding societal interests.
With a proactive and collaborative approach to regulation, we can unleash the full potential of AI while addressing concerns related to accountability, fairness, and privacy.
By embracing these strategies, we can foster a harmonious relationship between AI and humanity, ensuring a future where AI technology benefits us all.