Regulating AI: The Perils of AI Exceptionalism
At a recent conference on the theme of technology and law, a subject so vast that it encompasses practically every facet of the human economy, the focus and the flavour of the season was Artificial Intelligence. Of particular interest, however, were the efforts of the European Union to regulate Artificial Intelligence. As was the case with the GDPR, this initiative of the European Union was quick to attract the world’s attention and is likely to be the template for a host of other regulatory initiatives the world over. The European Union’s efforts appear to be driven by the desire to, amongst other things, a) ensure the safety of individuals from the harmful applications of AI and b) protect the integrity of democracy itself, ensuring that AI does not compromise the ability of voters to make informed choices during elections.
It is reasonable to expect India to follow suit. Recent instances of artificial intelligence offering politically coloured and biased opinions about Indian leaders have further strengthened the Government’s hand to regulate AI products. Yet notifications and circulars that initially indicated a greater involvement of the state in regulating AI were soon followed by a quick withdrawal, still leaving the industry to regulate itself with some level of government involvement. These developments reflect the fact that regulating AI is an exercise driven less by a precise understanding of the technology and its implications than by fear and political exigencies. At another event last November, in Amsterdam, a lawyer from the United States even went so far as to show a poster of the apocalyptic thriller “Terminator 2” and argue that we are certainly headed in that direction with AI. But if history is a template, and one need look no further than the industrial revolution, technology is inevitable, and its role as a force for good or evil is largely a function of how we regulate its use rather than of the technology itself.
Personally, I am intrigued by the focus on AI as the central factor in the two problem statements above, and by the sudden loss of focus on the role of human beings in them. A big part of the fear and hype around AI and its implications for our safety seems to emerge from the erroneous assumption that AI is autonomous. However, how technologists define autonomy and how autonomy is understood in philosophy and law are not quite the same. Take, for example, autonomous cars. Describing a car that can drive itself, avoiding risks and seizing opportunities on the road in a safe way, as autonomous is not inappropriate from the point of view of engineering and technology. Legally, however, autonomy is synonymous with “agency”, a term associated with sentient beings capable of defining themselves by their own laws and rules of behaviour. Most of us would agree that today’s AI systems are not, philosophically speaking, autonomous; they are merely tools of automation. This is a subtle but important distinction that the Oxford Handbook of Ethics of AI makes, and understanding it, I believe, is critical to developing workable and enforceable regulations for AI.
Agency implies a curious and complex interplay between our intellectual abilities and our emotional intelligence. While the pursuit of food for sustenance requires intellectual thought and action, the choice of means to achieve it, i.e., the choice to, say, practice medicine instead of becoming a thief to earn a living, reflects an emotional intelligence with empathy at its core and the ability to feel remorse and therefore assume responsibility. Because of our ability to feel responsible and remorseful, regulating human conduct, at least in the civilized world, has largely been about imposing shame through punishment, in the hope of inspiring reform. Our ability to define rules for our own conduct, however questionable and lacking in integrity it may be, lies at the heart of our claim to be creatures of autonomy. Thus, our understanding of the term “regulate” is bound up with our ability to identify what precipitates remorse.
If the sum total of our regulatory experience is based on how creatures that feel emotions rectify erroneous conduct, then how are we to regulate AI, whose defining limitation today is the inability to feel emotions, a fundamental requirement for autonomy in the philosophical and legal sense? After all, ChatGPT will never feel “bad” for jeopardizing Steven Schwartz’s law practice by feeding him fabricated case law. Schwartz, on the other hand, ought forever to regret that one moment of indiscretion, when his considerable intelligence and experience as a lawyer were compromised by reliance on a tool that lacks the sentience of even an infant.
Almost all of the literature that avoids hype and is well researched indicates that the real issues around AI are the lack of diversity in the pool of people developing AI-based applications, which in turn results in subconscious bias; the lack of transparency and accountability in the way data is collected for developing and refining AI; data privacy concerns; the lack of ethical labour laws; and concerns around liability for AI-based applications. These concerns are not glamorous and, given their technical character, can positively bore an audience. But they are the problems of today and affect society in a very real sense. Given that these problems are unprecedented in the collective human experience, we must be circumspect about the two extreme schools of thought: one that predicts an AI-enabled apocalypse, and the other that sees AI as the foundation of a utopia where economies are driven by technology and the merits of sloth, as one of the seven deadly sins, may need revisiting.
To this end, regulating AI must first and foremost be about regulating human conduct around AI. It must avoid the temptation of assuming that AI, unlike other tools in human hands, is exceptional and must therefore be regulated through an exceptional regulatory intervention. On the contrary, much of AI regulation will require conventional expertise from diversity and inclusion professionals, philosophers, labour law experts, and the like. For regulations to do this, we must acknowledge that AI, like any other machine or invention made by humanity, must ultimately have its impact judged by the choices it inspires amongst its developers and users. While efforts to bring oversight and accountability to the tech industry are laudable, they will have the limitations inherent in all novel and unprecedented regulatory subjects and will likely go wrong more often than right.
However, if we can

- Strengthen education to better equip students to deal with AI and AI-generated content,
- Update labour regulations to make diversity and inclusiveness an enforceable goal and to address job losses in the wake of AI, and
- Ensure data privacy laws bring in ethical practices around data collection and processing,
we are perhaps taking the first steps towards mitigating the disruption that AI will likely impose on our communities. A great example is the Finnish education system’s curriculum on identifying fake news. Though intended to counter Russian propaganda and information warfare, chances are that those very students will avoid falling for fabricated case law supplied by ChatGPT. If we want responsible AI, we must first inspire and enforce responsibility amongst AI developers and users. A human-centric approach, grounded in respect for the underlying and fundamental concepts of philosophy and ethics, will be critical to ensuring that regulations are robust, realistic, and able to facilitate ethical innovation.