How to regulate Artificial Intelligence?
Norbert Biedrzycki
There is an ongoing debate about how to regulate artificial intelligence. Lawyers, politicians and business people alike feel that the laws in place are failing to keep up with technological advances. Neither the primary nor the secondary legislation currently in force regulates the technology adequately. Is it possible to regulate artificial intelligence efficiently, and do we need such regulation at all?
Not only are we struggling to grasp the logic behind algorithms, we – the citizens – are also in the dark about the way companies, institutions and services employ modern technology to surveil us in our day-to-day existence. Shouldn’t we be better protected while using computers, drones, applications, cameras and social networks? Shouldn’t someone make sure we don’t end up having algorithms elect our president?
I do not believe that attempts to regulate artificial intelligence are an unnecessary nuisance or a curb on the free exchange of ideas and business freedom. A technology that develops beyond our control and self-improves without a programmer’s intervention becomes powerful indeed. In the face of such technology, the principle of unlimited business freedom starts to look archaic and falls short of resolving many issues.
To regulate or not to regulate
Needless to say, views on whether to regulate the development of autonomous, smart technologies are deeply divided. Elon Musk regularly raises concerns about the fate of our planet, speaking of the need for robust mechanisms to protect people from technology-induced threats. On the opposite end of the spectrum stands Mark Zuckerberg, who champions a strongly liberal approach (although his views have been shifting lately).
Generally, the predominant approaches in US industry differ widely from those in Europe. The European Group on Ethics in Science and New Technologies of the European Commission has been working towards an international agreement with a view to creating a legal framework for autonomous systems. It is of the opinion that: “… autonomous systems must not impair the freedom of human beings … AI should contribute to global justice and equal access to the benefits and advantages that AI, robotics and autonomous systems can bring.”
It should also be noted that China’s views on the matter are diametrically different. China aspires to be on the cutting edge of AI development, which it sees as a vital tool for surveilling the public and controlling social behavior.
Observation, education, dialogue
New technology experts play a key role in our changing world. They can answer the burning questions that members of the public may pose. They can tell us whether we can rest assured that the AI we use is “fair, transparent, and accountable”. That phrase alludes to the title of one of the many seminars (“Fair, Transparent, and Accountable AI”) held by the Partnership on AI. The organization’s mission is to study practices relevant to the presence of AI in human lives and to explain new developments in the field to the general public. It is worth quoting a sentence from the event description on their web page: “through techniques like identifying underlying patterns and drawing inferences from large amounts of data, AI has the potential to improve decision-making capabilities. AI may facilitate breakthroughs in fields such as safety, health, education, transportation, sustainability, public administration, and basic science. However, there are serious and justifiable concerns—shared both by the public and by specialists in the field—about the harms that AI may produce.”
I will name four fields (selected, of course, from among many others) in which rapid change is being propelled by artificial intelligence. Some of them may in time require specific regulatory mechanisms for the comfort of the users of this technology.
At odds with the law
All around the world, the police rely on algorithms. AI helps them process data, including information related to crimes, searches for criminals, and the like. One of the many advantages of machine learning is its ability to classify objects (including photos) by specific criteria. This is certainly of value to organizations that need to quickly acquire information vital to their investigations. Unfortunately, the algorithms that assess the likelihood of re-offending (an assessment that affects decisions to release inmates on parole) are susceptible to abuse. Lawyers around the world are therefore establishing bodies (among them The Law Society’s Public Policy Technology and Law Commission) to oversee the use of the technology by the police and the courts.
The universal nightmare of fake news
This topic, heatedly debated of late, raises questions about the credibility of information and the responsibility of social networks to monitor their content. Since 2016, the first year to be marred by a wave of fake news scandals, not a week has gone by without the issue hitting the headlines. AI is central to this story because, as we know, it has played a huge role in the automatic generation of content (bots). I think that the credibility of information is one of the biggest challenges of our time, which can rightfully be labeled the age of disinformation. It requires global reflection and a concerted international response. Every now and then, initiatives for credible news (such as the Pravda site proposed by Elon Musk) are put forward, but the challenge remains enormous. Since regulating the problem comprehensively would be utopian, I am afraid we may be forced to wrangle with it for years to come.
Who rules the assembly line?
The robotization of industry is among the most emotionally charged aspects of AI. People have a hard time accepting robots as their work buddies (or accepting that robots will put them out of a job). I have written about this on numerous occasions, so I will refrain here from presenting statistics or arguments for or against robotization. The matter is certainly a major problem, and there is no point pretending it is going to go away. On the contrary, social unrest may increase as the trend unfolds. Given its social impact, I think it is critical to lay down the rules that will govern this field. One possible tool is taxation designed to prevent corporations from relying excessively on robots.
Autonomous vehicles
Enthusiasts quote numerous studies which find that autonomous vehicles will make roads considerably safer. I share that view. And yet, autonomous vehicles raise a lot of questions. One of the key ones concerns vehicle behavior during an accident. Who should algorithms protect as their first priority: passengers, drivers, or pedestrians? Will a driver who causes an accident at a moment of distraction have a claim in court against the manufacturer of his autonomous vehicle, and will he be able to win his case? How should such vehicles be insured? Who should be liable for accidents: the driver or passengers, the vehicle owner, the manufacturer, or the software programmers? Another upcoming conundrum is the future of other autonomous means of transportation, such as airplanes, ships and road vehicles that will move cargo for us (deliver shopping, etc.). Legislation around the world varies in how it requires driverless vehicles to be tested, and the debate on how to improve user safety is ongoing no matter where you look.
Technology for the people
The progress achieved through the use of smart technologies is unquestionable. However, alongside such business concerns as cost optimization, efficiency, the bottom line and automation (all of which benefit from AI), I think it is vital to keep in mind some of the less measurable aspects. We should remind ourselves that the well-being of individuals, and the security, knowledge and fulfillment people derive from interacting with new technologies, are of the utmost importance. After all, technology is there to make our lives better. Let us keep a close eye on the experts who influence the drafting of laws designed to protect us from the undesired impacts of algorithms.
. . .
Works cited:
Business Insider, Prachi Bhardwaj, Mark Zuckerberg responds to Elon Musk’s paranoia about AI: ‘AI is going to… help keep our communities safe.’, Link, 2018.
European Commission, European Group on Ethics in Science and New Technologies, Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems, Link, 2018.
New York Times, Tomas Chamorro-Premuzic, Inside China’s Dystopian Dreams: A.I., Shame and Lots of Cameras, Link, 2019.
The Washington Post, Peter Holley, Pravda: Elon Musk’s solution for punishing journalists, Link, 2020.
. . .
Related articles:
– Technology 2020. Algorithms in the cloud, food from printers and microscopes in our bodies
– Learn like a machine, if not harder
– Time we talked to our machines
– Will algorithms commit war crimes?
– Machine, when will you learn to make love to me?
– Hello. Are you still a human?
– Artificial intelligence is a new electricity