Regulating Artificial Intelligence - Definitely, Maybe
Statista has it at $60 Billion by 2025, the McKinsey Global Institute puts it between $644 Million and $126 Billion by 2025, and PwC predicts it to be $15.7 Trillion by 2030. There are many such predictions, each with a different number, but the common element is that every one of these estimates for the size of the global Artificial Intelligence market is obscenely high.
However, with Artificial Intelligence, the potential for positive impact is life-altering, and so, unavoidably, is the associated risk. These risks are arguably probable (the degree of that probability is anybody's guess) and have been aired most prominently by the likes of Elon Musk and the late Prof. Stephen Hawking.
Yet to the question of 'To AI or Not to AI', the huge investments being made by enterprises and governments offer a resounding 'Aye' in favor of advancing Artificial Intelligence systems further. This is, of course, because of the projected revenues from AI, but also partly because of a strong FoMO (Fear of Missing Out). And as in any other nascent industry, the known benefits currently outweigh the known and the unknown risks. To be fair, few if any are arguing against developing Artificial Intelligence systems further.
Nor is the debate about whether the risks associated with Artificial Intelligence are real. Precisely because they are not yet understood in their entirety, it is safe to assume that significant risks exist.
So given that Artificial Intelligence is here to stay and grow, and that there are potential risks associated with it, the real question is: how do we manage the risks associated with Artificial Intelligence systems? Through regulations, or through industry initiatives?
And to that, I cast my humble vote in favor of regulations. I am certainly not in favor of bulky government regulations, but we will require some industry standards and frameworks to be put in place, guided by pertinent laws and legal mandates. As I attempt to explain later in the article, I believe the regulatory building blocks of what we need are already in place or coming soon. I would, of course, advise that these thoughts be consumed with a pinch of salt, at your discretion.
But before we discuss what we need, let's take a look at what we already have.
The Laws: From Asimov to Etzioni
Let us start with the famous Three Laws of Robotics laid down by Isaac Asimov in 1942:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Makes sense, right? Well, actually, the fault lines start showing when we delve a little deeper. For instance, having junk food 'harms' me. So does being exposed to a nuclear explosion. And yet, while we want the AI system intervening in the latter scenario, we might not want it restraining someone from having a greasy burger! Similarly, 'sacrifice one life to save a million' situations, where shades of grey come into play, question the comprehensiveness of Asimov's laws.
So let us look at something more recent: the three rules of Artificial Intelligence presented by Oren Etzioni in September 2017, in a New York Times op-ed inspired by Asimov's laws:
1. An A.I. system must be subject to the full gamut of laws that apply to its human operator.
2. An A.I. system must clearly disclose that it is not human.
3. An A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information.
The first rule makes both moral and business sense, and is already being practiced. Today, self-driving car systems are designed to follow the traffic rules laid down for human drivers, and drones must comply with FAA regulations. Logically speaking, for the mass adoption and assimilation of AI systems into everyday human life, it is imperative that they follow the same laws; otherwise they will not fit in. So it is reasonable to assume that, as long as the first rule is followed in designing them, the current legal framework should suffice to ensure the safety of humans, at least for the near future.
In my view, the need for the second rule is slightly more nuanced. It becomes more pronounced when AI systems go beyond recommendations, or when the nature of those recommendations goes beyond benign decisions (like filtering spam or suggesting movies). We need the second rule so that, when we are presented with a recommendation or the AI system acts on one, we can make up our own minds about it. And if we find it 'not humanly' done (not the way a moral human would do it), we should be able to take corrective action. This is probably why Etzioni also recommends that every AI system should come with a 'kill switch', or, to paraphrase, some clear mechanism to override its actions or recommendations.
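To make the 'kill switch' idea a little more concrete, here is a minimal, hypothetical sketch of what such a disclosure-and-override hook might look like in code. The class, method names and response fields are my own illustrative assumptions, not a reference to any specific product or standard.

```python
# A minimal, hypothetical sketch of a "kill switch" / manual override wrapper
# around an AI recommender. Names and fields are illustrative assumptions.

class OverridableAgent:
    def __init__(self, model, enabled=True):
        self.model = model          # any object exposing .predict(features)
        self.enabled = enabled      # global kill switch, controlled by a human operator

    def kill(self):
        """Hard stop: no further recommendations are produced or acted upon."""
        self.enabled = False

    def recommend(self, features):
        if not self.enabled:
            return {"action": None, "reason": "agent disabled by operator"}
        action = self.model.predict(features)
        # Rule 2: disclose that the recommendation is machine-generated, and
        # require a human acknowledgement before it is executed.
        return {"action": action, "source": "AI system", "requires_human_ack": True}
```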
Coming to the third rule: this is where I think a lot of the future work around regulations, especially in the AI industry, will happen. This is because, at the risk of oversimplification, Artificial Intelligence systems are basically complex functions of the data their algorithms are trained on, and of what they derive and use at run time. And towards regulating how systems gather and use data, we are already seeing the building blocks in the form of the GDPR (General Data Protection Regulation) and the ePrivacy Regulation, especially in the EU and the UK. These regulations entail clear specifications around how data is gathered, who has access to it, and how it is used. So again, once we have these new data laws in place and evolving, we might not need a massive set of new laws specifically for AI.
There is, of course, the pertinent question of applying them in practice, and this is probably where additional regulations might be needed.
HOW: In the Immediate to Short Term
We are a sum of all that we learn. While it might sound like fortune-cookie wisdom, it is indeed the fact: all our good habits, morals, values, reactions, information and decision making stem from what we were taught, and from what we learn and pick up daily. This is true for Artificial Intelligence systems as well!
The behavior of all our Artificial Intelligence systems, irrespective of the use case or the algorithms, is influenced by the data the models are trained on. For instance, the Nightmare Machine, an MIT project, turns normal, benign photos into scary ones because it was trained on scary imagery. Had it been trained on pictures of fairies and unicorns, its output would have been far more pleasant and palatable. Supervised learning systems (the majority of machine learning systems today) are trained on labelled data, based on which they process live inputs and make predictions and recommendations. Even in Reinforcement Learning models, like the DeepMind model that learnt to play Atari games on its own, the agent played against a defined set of rules and arrived at its optimal policy based on the rewards and penalties accrued from its chosen actions. So the training data and training setup define the nature of the AI. A Reinforcement Learning dialog system that is penalised during training for making racist remarks is highly unlikely to make race-based pejorative remarks at run time.
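To illustrate how such a training-time disincentive might look in practice, here is a toy sketch of reward shaping in tabular Q-learning: the standard update, plus an extra penalty whenever the chosen action falls into a "disallowed" set. The action names, penalty value and hyper-parameters are illustrative assumptions, not taken from any particular system.

```python
# Toy sketch: Q-learning with a shaped reward that penalises disallowed actions.
from collections import defaultdict

DISALLOWED = {"offensive_reply"}     # behaviours we want to disincentivise
PENALTY = -10.0
ALPHA, GAMMA = 0.1, 0.9              # learning rate and discount factor

q_table = defaultdict(float)         # (state, action) -> estimated value

def shaped_reward(env_reward, action):
    """Environment reward plus a training-time penalty for disallowed actions."""
    return env_reward + (PENALTY if action in DISALLOWED else 0.0)

def q_update(state, action, env_reward, next_state, actions):
    """One Q-learning step using the shaped reward."""
    r = shaped_reward(env_reward, action)
    best_next = max(q_table[(next_state, a)] for a in actions)
    q_table[(state, action)] += ALPHA * (r + GAMMA * best_next - q_table[(state, action)])
```

Because the penalty is folded into the value estimates during training, the learned policy simply never favours the disallowed action at run time.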
The point being: if we can regulate the data and the datasets that machine learning models train on, we can regulate their behavior and performance. Hypothetically, imagine a regulatory mandate that every model, irrespective of the business process it supports and the functional datasets it is trained on, must additionally be trained on certain 'moral' datasets or learning setups that not only incentivise correct behavior but clearly penalise bad behavior (the way we teach moral science and good conduct to a child, if you will), and that such training be a prerequisite for the model to be certified as safe and ready for live operation. That would give us a handle on the behavior of these systems. Consider it the AI system having to go through basic civic training, in addition to its standard education, before becoming eligible for live operations, much like the schooling and life training we humans undergo.
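As a rough illustration of such a certification gate, the sketch below refuses to clear a model for live operation unless it passes a held-out behavioural ('moral') test set, in addition to whatever functional accuracy checks it already faces. The dataset format, threshold and model API are assumptions made purely for the sake of the example.

```python
# Hypothetical "certification gate" run before a model is promoted to production.

def certify(model, moral_test_set, min_pass_rate=0.99):
    """moral_test_set: iterable of (input, allowed_outputs) pairs."""
    passed = sum(
        1 for x, allowed in moral_test_set
        if model.predict(x) in allowed
    )
    pass_rate = passed / len(moral_test_set)
    return pass_rate >= min_pass_rate, pass_rate

# A deployment pipeline would then refuse to promote an uncertified model:
# ok, rate = certify(candidate_model, moral_test_set)
# if not ok:
#     raise RuntimeError(f"Model failed behavioural certification ({rate:.1%})")
```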
In addition, if we can regulate the data that Artificial Intelligence systems gather while in operation, so that their policies do not drift through continuous learning, and couple that with the mandated design feature of a manual override (the second and third rules), we can influence the degree and nature of the impact they have on the lives of their human users (the 'greasy burger' problem).
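Freezing a deployed model so that its policy cannot drift through continued learning is, at its simplest, a one-function affair. The sketch below uses PyTorch purely for illustration; the function name and workflow around it are assumptions.

```python
# Minimal sketch: lock a trained model into inference-only mode for live operation.
import torch

def freeze_for_deployment(model: torch.nn.Module) -> torch.nn.Module:
    model.eval()                       # disable dropout / batch-norm updates
    for p in model.parameters():
        p.requires_grad = False        # no gradient updates once in production
    return model
```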
If the notion of teaching AI systems human ethics and values sounds theoretical, I assure you it is not. The Future of Life Institute, for example, has published a set of AI Principles along exactly these lines. The institute counts Jaan Tallinn (co-founder of Skype and Kazaa) among its founders, alongside luminaries from MIT and DeepMind.
As to the design and certification of these 'moral' datasets and design mandates, my submission is that they would have to be championed and delivered by global industry standards bodies (in some cases specific to a particular vertical), just like the bodies that today set standards and regulations for the Telecom, Financial Services, and IT Services sectors. Such frameworks would, of course, need to be guided by data protection and other pertinent laws.
HOW: Long Term
To the question of how we regulate a self-aware AI system, the honest answer is: I don't know. Nobody does for sure. Simply because, in the hypothetical future of the singularity, with AI systems becoming self-aware, self-governing and self-creating, we really do not yet know what their self-defined incentives and goals would be, or what would cause them 'pain' or 'fear'. And without that, it is difficult to devise a regulatory system that penalises them for causing mental or physical harm to humans, or to design the appropriate override mechanisms. The hope, and the logical punt, is that the subsequent derivatives evolving from the moral AI systems of today and the near future will also remain moral and benevolent.
Because of the future-gazing involved, this has not been an easy piece to pen. But presented with the challenge, I have attempted to share my thoughts as plainly and succinctly as I could, and to keep my guiding assumptions as logical as possible. If nothing else, I hope it has given you some food for thought. And as always, if you have any comments or suggestions on how the topic could have been handled better, please feel free to share them. I shall be happy to include them in the post.