Regulating AI: Europe’s bold initiative to move through the looking glass
Ursula von der Leyen. © Frederick Florin/AFP, via Getty Images

It’s always great to enjoy movies during the holidays, like the 1971 classic ‘Willy Wonka & the Chocolate Factory’. In this case, also because of its references to cases of ‘Computer Supremacy’.

TECHNICIAN:

Gentlemen, I know how anxious you've all been during these last few days, but now I think I can safely say that your time and money have been well spent. We're about to witness the greatest miracle of the machine age. Based on the revolutionary Computonian Law of Probability, this machine will tell us the precise location of the three remaining Golden Tickets. 

(He punches computer buttons; reads the card it emits)  

It says, "I won't tell. That would be cheating." I am now telling the computer that, if it will tell me the correct answer, I will gladly share with it the grand prize. 

(Pushes buttons; reads card) 

It says, "What would a computer do with a lifetime supply of chocolate?" I am now telling the computer exactly what he can do with a lifetime supply of chocolate!


Script of ‘Willy Wonka & the Chocolate Factory’. © Wolper Pictures, Ltd and the Quaker Oats Company.

There are several takeaways from this hilarious scene, which clearly reveals in-depth knowledge of known challenges in computer science:

  • The computer correctly second-guesses the goal of the interrogator, but overrides it with its own goal
  • The goal of the interrogator does not seem so ethical, at least not to the computer
  • The computer does not want the offered bribe, in this case, because it does not see any benefit
  • The interrogator ends up quite depressed and angry, intimidating the computer, without the Golden Ticket and probably without having learnt anything
  • It’s unclear if the computer has learnt anything new, although it’s implied that it did find out the location of the remaining tickets.

We have come a long way in the 50-odd years since then, in closing the gap between man and machine, expanding our intelligence and regulating what machines can do. Or have we?

I - AI Definition

State of affairs

Nowadays, the word computer has fallen a bit out of fashion, as a relic of the previous century. It seems that Artificial Intelligence (AI) has taken over as the favourite, when referring to the ‘smart’ functionality of laptops, phones and the rapidly growing number of other devices, with sensors, algorithms or otherwise. More and more, AI is impacting our societies. After the breathtaking results of its AlphaGo AI, convincingly beating the world’s ruling Go champions, Google’s DeepMind team has decided to move from simply solving ‘games’ to ‘intelligence’ and even ‘life’, via ‘protein folding’.


In 2016, the AlphaGo AI beat 18-time world champion Lee Sedol in a five-game match, where AlphaGo won all but the fourth game; all games were won by resignation. The epic battle can now be enjoyed through a Netflix documentary. © Netflix.

Many people have a daily need for AI to do something and get somewhere, and entire societies are impacted by it. As Stuart Russell dramatically stated in his recent thought-provoking book “Human Compatible”:

“The reinforcement learning algorithm in social media has destroyed the EU, NATO and democracy. And that’s just 50 lines of code” 

Indeed, AI has ‘nuclear’-like capabilities, as ‘Weapons of Math Destruction’, as described by Cathy O’Neil, since it can nudge, sense, actuate and modify human behaviour on a global scale, through the so-called effect of ‘context collapse’. Maybe this calls for non-proliferation treaties (NPTs), to prevent the uncontrolled spread of AI. However, strictly speaking, AI entities are not legal entities, natural or juridical persons, and basically lack the mandate and sovereignty to act freely in an economic or legal sense, like animals or small children. This implies fundamental issues regarding transparency, trust and liability, which are not sufficiently understood and cannot be properly controlled yet. This leads to the conclusion that AI is still very much in the experimental stage. Additional R&D and innovation is required to move it to the next level of maturity. AI should move in a context defined by ethics and law, not the other way around. 'Team Human' should stay on top.

New definitions

Policymakers are stepping up their efforts to redefine AI, in order to help organizations and individuals in coping with or benefiting from AI, in a sustainable way. The EU’s ‘High Level Expert Group’ (HLEG) on Artificial Intelligence has recently proposed to update the definition of AI, to stimulate practical use by non-AI experts. This is done by addressing specific challenges, like the definition of ‘intelligence’ itself, which is still quite an elusive construct. 

However, there is still some more work to do, also because enforcing law or policies is hard and costly when definitions are not practical enough. Both the old and the new definitions of AI could also apply to humans and other living beings, for example. Next to that, current definitions are not pragmatic enough to be used in practice, for example to assess whether an entity is or contains AI, as in a Turing test.

Wasn’t the term ‘artificial’ added to denote non-biological intelligence, in all that is not organic? Next to that, explicating that AI can only be designed by humans is too narrow. It would be better to state that AI is to be controlled by humans, in a context of law and ethics, as suggested by Stuart Russell in 'Human Compatible'. Therefore, the definition of AI can both be simplified and expanded, for example in the following manner, also honouring the work of Alan Turing:

"Any life imitating technology, interacting with organic life, using available data and scientific methods to pursue a goal, controlled by humans in a context of law and ethics. This technology is designed by humans or otherwise and implemented in systems that are either non-organic (software or hardware), organic or a combination of the two"


‘The Bombe’ or ‘Christopher’, the electro-mechanical device originally designed by Alan Turing to help decipher German Enigma-machine-encrypted secret messages during World War II. This device was reconstructed for the movie ‘The Imitation Game’. © Wired Magazine and Warner Bros.

In this manner, a smartphone or search engine could be called an AI entity. The same applies to a person using AI to upgrade his or her IQ, to accelerate decision making or augment reality, when doing something or trying to get somewhere. We need to make sure that any AI or AI-driven humans always remain ‘human compatible’.

II - AI in practice

‘Privacy issues’

Because of the rise of data science and AI, the hunger for data seems to have become insatiable, to improve the ‘training’ and performance of AI-driven services. This is one of the reasons the GDPR has been defined by the EU. Despite what many people think, this regulation was never meant to prohibit the use of data, but rather to stimulate dialogue and data ‘markets’, where owners, suppliers and consumers of data trade in a transparent way, at realistic prices given the designated value or risk, in order to expand the market and move it to the next maturity level. The GDPR could be called a success, since it has triggered a constructive dialogue on regulating data across the EU and the world. Still, many organizations have a hard time embracing the opportunities and seem to think in terms of ‘issues’, for example in the Kafkaesque discussions on the ‘legitimate interest’ of storing and using customer data.

'Data issues’

In the world of AI, it is still quite common to see major issues arise because of minor issues in data, leading to time-consuming and costly development and validation cycles, and to sub-optimal AI performance at best. This is awkward, also because a typical response would be to look for ‘more’ or ‘better’ data, instead of for ‘better’ AI.

Humans, like other living creatures, learn to ‘work with whatever is available’ and deal with incomplete, unstructured, unreliable, biased or ambiguous information from the day that they come into existence, in order to survive. There seems to be no reason not to have AI perform despite data issues. However, this is something that is still very challenging for AI designers, probably because they still seem to fail to grasp the essentials of life and learning, which can change function, purpose, ethics, behaviour and value. Wouldn’t it be better, quoting Alan Turing, to take a different view:

“Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's?”

The power of AI lies in the fact that it can use virtually unlimited data storage and computing to generate ‘ready-to-use’ statistics at the speed of light. This is a bit like cheating, as mentioned in the Willy Wonka movie, because the operations are basically the same as a human would perform, but the unlimited resources enable the AI, and the humans that benefit from it, to basically ‘fold time’. Recently, Google’s 54-qubit Sycamore processor was able to perform a calculation in 200 seconds that would have taken the world’s most powerful supercomputer 10,000 years. This ‘time folding’ has led Google to claim ‘Quantum Supremacy’. But still, these are ‘just’ statistics.

Remember Oscar Wilde:

"When everybody agrees with me, I feel I must be wrong."

And also take note of Stuart Russell’s statement from 'Human Compatible':

"We cannot insure against future catastrophe simply by betting against human ingenuity"

Quantum-driven AI is still only capable of using rules, logic and intelligence based on all the information and knowledge gathered by humans over time, through experimentation, education, science, business and culture. AI still seems to be very bad at learning, because it typically needs incredible amounts of sample data, for example to make a distinction between a cat and a dog. The fact that loads of pictures of cats can be produced in a mouse click (sic) is impressive, but that does not prove superior learning. More than anything, it shows the power of unlimited resources. Toddlers typically need just one or two samples to learn and recognize all instances of ‘cat’, for the rest of their lives. 


Babies and cats need few samples to ‘learn’ about each other and bond. © prettylittercats.com

Finally, AI is also still bad at explaining and teaching. While the bad learning could be called a minor problem, the bad performance at explaining and teaching should not.

Explainability

For most humans, explaining decisions, actions or behaviours is harder than it seems. For many people, there is quite a difference between what they think, say, do and remember. There are many reasons for this.

As the famous Casablanca actress Ingrid Bergman stated:

“Happiness is good health and a bad memory”

And as Daniel Kahneman and several others have shown, humans have a hard time being rational and correctly handling facts and statistics. ‘Explainability’ for humans is quite meaningful, but rare; it is typically practised in the context of therapy. Let’s assume, for the sake of this article, that context and level of trust play a key role in bypassing the need to always explain yourself, to everybody. This is quite different for AI, also because of the issues mentioned above. Next to that, despite the scientific method, knowledge and causality still play a minor role, as demonstrated by Judea Pearl in the revolutionary ‘Book of Why’. He excellently demonstrates that without causality in AI, the question ‘why?’ can never be answered. And again, as mentioned above, this type of dialogue, raising the ‘why’ question until the answer is clear enough, would help the ‘infant’ AI to learn and grow to the next levels of maturity. Here, no doubt, the ‘parent’ human will probably learn a thing or two about him or herself, too. But currently with AI, there is little to no transparency, not on ethics, applicable law or causality. This lack of explainability makes it impossible to trust AI and assign it a role in trustworthy functions. Why hire or keep somebody or something, when there is no trust?
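
To make Pearl’s point concrete: a purely statistical learner can report that two variables move together, but not why. The following minimal simulation sketch (my own illustration, not from the book) shows how a hidden common cause creates a correlation that vanishes the moment we intervene, which is exactly the distinction Pearl’s ‘do’-operator captures:

```python
# Sketch: correlation without causation, via a hidden confounder.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
confounder = rng.normal(size=n)               # hidden common cause
treatment = confounder + rng.normal(size=n)   # influenced by the confounder
outcome = confounder + rng.normal(size=n)     # also influenced; ignores treatment

# Observational data: treatment and outcome look strongly related...
print("observed correlation:", np.corrcoef(treatment, outcome)[0, 1])  # ~0.5

# ...but intervening (Pearl's do-operator) severs the spurious link:
treatment_do = rng.normal(size=n)             # treatment now set by fiat
outcome_do = confounder + rng.normal(size=n)  # outcome is unaffected by it
print("correlation under do():", np.corrcoef(treatment_do, outcome_do)[0, 1])  # ~0
```

An AI that only fits the observational data would wrongly ‘explain’ the outcome by the treatment; answering ‘why?’ requires the causal model.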

III - Regulating AI

From GDPR to ‘GAIR’

AI is a global and geopolitical topic, too big to handle for most individual countries. Fortunately, in November 2019, the European Commission’s incoming president Ursula von der Leyen promised to introduce new GDPR-like legislation to govern AI, amid fears about Europe’s increasing dependence on (foreign) tech and the implied risks and concerns, as mentioned above. So, from regulation for ‘General Data Protection’, or GDPR, to ‘General AI Regulation’, or ‘GAIR’.

The Commission is likely to build on the work of its expert group on AI, which outlined a series of principles earlier in 2019, meant to support companies in deploying AI in a way that is fair, safe and accountable. These rules, developed by a group of academics and industry representatives, form part of the EU’s plan to increase public and private investment in AI to €20bn a year. Such a budget calls for serious principles, sustainable public-private collaboration and effective oversight. Or, as the expert group puts it: 

  1. increasing public and private investments in AI to boost its uptake
  2. preparing for socio-economic changes
  3. ensuring an appropriate ethical and legal framework to strengthen European values.

Regulatory tsunami

More and more, global corporates are exposed to regulation, with updates coming in faster and faster. In the finance industry, global corporates face regulatory changes every 7 minutes!


© Jeroen de Bel, Fincog; Holland Fintech; Moody’s.

If regulation is added for AI, this frequency is bound to get significantly higher. This could potentially put many companies at risk, especially the ones that are or would like to be data-driven, but lack regulatory, legal or ethical expertise.

It’s clear that the rules of the game have changed, because the regulatory context has become a very dominant factor for organizations, to keep their ‘license to operate’ valid and their business model healthy. This calls for a completely new approach to regulation.

Fire with Fire

Many corporates are experimenting with AI, but are concerned that AI regulation will increase requirements and obligations regarding transparency and explainability, and the overall regulatory pressure. This pressure is rising exponentially, because of fragmentation and asymmetry in regulations: what is (still) acceptable in one country may not be (anymore) in another. This is very cumbersome and it can jeopardise the development of AI products and services across a single market.

Handling AI regulations ‘on top of’ all that’s there already is a major challenge, which could impede the formal implementation and regulation of AI. A radical new approach is required here: why not fight fire with fire? Clearly, AI can help to process regulatory changes and to quickly determine impact and materiality, using Natural Language Processing (NLP), for all the languages in which regulatory policies are issued. This could balance things in the regulatory space, which should also appeal to policymakers. What is the use of policy change anyway, if organizations simply can’t keep up? We should move away from this ‘Folie à Deux’, by embracing AI to deal with regulation in general and with AI regulation in particular. AI can be used to cope with the volume, frequency and complexity, if properly controlled, with a ‘human in command’.
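
To illustrate the ‘fire with fire’ idea: a few lines of off-the-shelf NLP can already triage incoming regulatory text for human review. The sketch below is a minimal, assumption-laden example rather than a production system; the model name, topic labels and threshold are all illustrative choices, with a multilingual model picked because policies are issued in many languages:

```python
# Sketch: zero-shot triage of regulatory updates, for a 'human in command'.
from transformers import pipeline

# A multilingual NLI model, since regulatory policies come in many languages.
classifier = pipeline("zero-shot-classification",
                      model="joeddav/xlm-roberta-large-xnli")

TOPICS = ["data protection", "algorithmic transparency",
          "consumer protection", "financial reporting"]

def triage(update_text: str, threshold: float = 0.5) -> dict:
    """Return the topics a regulatory update likely touches, for human review."""
    result = classifier(update_text, candidate_labels=TOPICS, multi_label=True)
    return {label: round(score, 2)
            for label, score in zip(result["labels"], result["scores"])
            if score >= threshold}

print(triage("Providers of AI systems must document the provenance "
             "of their training data and explain automated decisions."))
```

The point is not the particular model, but the workflow: machines pre-sort the regulatory firehose, humans decide what it means.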

‘Trustworthy AI’

The High Level Expert Group on AI has defined ethics guidelines for trustworthy AI, considering lawful, ethical and robust AI. This is based on the standard reference on AI by Stuart Russell and Peter Norvig. The guidelines are meant to promote trustworthy AI and to inspire the makers of policies, models, algorithms and systems. Trustworthy AI has three components:

  1. it should be lawful, ensuring compliance with all applicable laws and regulations
  2. it should be ethical, ensuring adherence to ethical principles and values
  3. it should be robust, both from a technical and a social perspective, to ensure that, even with good intentions, AI systems do not cause any unintentional harm

Each component is necessary but not sufficient to achieve Trustworthy AI. Ideally, all three components work in harmony and overlap in their operation. Where tensions arise, we should endeavour to align them.

Ethical AI receives most attention in the defined guidelines. Seven requirements are described, which should be holistically evaluated during the lifecycle of AI systems:

  1. human agency and oversight
  2. technical robustness and safety
  3. privacy and data governance
  4. transparency
  5. diversity, non-discrimination and fairness
  6. societal and environmental well-being
  7. accountability

The lifecycle of AI. © The High Level Expert Group on AI.

Next to that, four ethical imperatives have been defined for ‘Trustworthy AI’, namely ‘Respect for human autonomy’, ‘Prevention of harm’, ‘Fairness’ and ‘Explicability’. Much of this is to a large extent already reflected in existing legal requirements, for which compliance is mandatory, and hence also falls within the scope of ‘lawful AI’, Trustworthy AI’s first component. Yet, as set out above, while many legal obligations reflect ethical principles, adherence to ethical principles goes beyond formal compliance with existing laws.

Ethica: ‘A system of ethics’

Naturally, deploying AI with ‘just’ NLP is not enough to have the new regulation-processing system perform in a sustainable and competitive way, as mentioned above. Since external regulation has to be translated into internal policy every few minutes, some additional sophistication is required to quickly evaluate regulation against systems that formally and holistically describe business rationale. These systems are currently fragmented and asymmetric as well, because they are a mix of definitions of activity, principle, rule, risk and impact, either as assumption or goal. This creates issues like ‘unintended’ or ‘unknown’ bias, which seem almost impossible to detect and resolve individually.

Here too, a paradigm shift is required, to transform this fragmented body of ‘ethics’ into a structured database that holds both ‘human explainable language’ and ‘machine readable code’. Such a system should also support human dialogue, chatbot-like, like a digital judge or counsellor, for example on how to interpret or apply policy rules from different perspectives; a possible record format is sketched below.
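
To make this less abstract, one possible record format for such an ‘Ethica’ database could pair a human-explainable statement with a machine-evaluable check. The sketch below is purely illustrative; all field names and the example rule, a toy encoding of GDPR’s data-minimisation principle (Art. 5(1)(c)), are my own assumptions:

```python
# Sketch: one 'Ethica' record pairing explainable text with executable logic.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EthicsRule:
    rule_id: str                        # stable identifier for audit trails
    jurisdiction: str                   # rules are bound to place...
    valid_from: str                     # ...and to time (ISO date)
    human_text: str                     # the 'human explainable language'
    predicate: Callable[[dict], bool]   # the 'machine readable code'
    rationale: str                      # the 'why', for dialogue and appeals

# Example: GDPR's data-minimisation principle, Art. 5(1)(c).
data_minimisation = EthicsRule(
    rule_id="EU-GDPR-5-1-c",
    jurisdiction="EU",
    valid_from="2018-05-25",
    human_text=("Personal data must be adequate, relevant and limited to "
                "what is necessary for the stated purpose."),
    predicate=lambda case: set(case["fields_used"]) <= set(case["fields_needed"]),
    rationale="Minimising data reduces the harm from breaches and misuse.",
)

case = {"fields_used": ["email", "age"], "fields_needed": ["email"]}
print(data_minimisation.human_text)
print("Compliant:", data_minimisation.predicate(case))  # False: 'age' not needed
```

Because every rule carries both forms, the same database can answer a chatbot-style ‘why?’ with the rationale text and run automated compliance checks with the predicate.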


An example debate by two human debaters and a human judge, where only the debaters can see the image. Red is arguing that the image is a dog, Blue is arguing for cat. Image credit: Wikipedia, CC-BY-SA. © OpenAI.

Typically, cases of ethical, legal or economic tension could trigger discussion or litigation. We need to figure out how to facilitate this in the digital space. Of course, ethics, like legislation, is not static, because it is bound to time, culture and even trends, for example. Additional sophistication is required to address this diversity and volatility. There is no such thing as unbiased observation, learning and knowledge, neither in science nor in business. However, such a holistic framework of ethics, or ‘Ethica’, should be able to help AI-driven organizations ride the waves of regulatory tsunamis and have a dialogue with regulators, customers and other stakeholders in society that is both intelligent and ethical. It’s safe to state that Spinoza’s influential ‘Ethica’ has helped to build societies and organizations over the last few centuries. It even describes practicalities for those in need of self-help, for example on the topic of unreciprocated love. Why not leverage all this and benefit from it in the digital context?

‘Ethics, Demonstrated in Geometrical Order’, usually known as ‘Ethica’, is a philosophical treatise written by Benedict de Spinoza. Painting of Benedict (Baruch) de Spinoza by an anonymous German painter, around 1665.

This sounds hopelessly academic and like a lot of work. But what’s the alternative? Abort digital strategies? The one implies the other; there’s simply no way back. The good news is that this will create a wave of future jobs, and not only for top-notch ‘quants’. The people who hold expertise on ethics, law, behaviour and life in general come from totally different parts of the spectrum. 

The Why of AI

AI, although not too transparent, ethical or even legal, is basically capable of ‘overriding’ human goals, just like the computer in Willy Wonka. We need to reverse the perspective and move through the looking glass. Let’s make sure we can always override AI, whenever we think that’s necessary, via a human ‘in’ or ‘on’ the loop, or preferably, in command of the loop.

With the rise of Chess Grandmaster and AI-driven FPL champion Magnus Carlsen, AI has again shown its ‘disruptive’ capabilities, in unexpected ways. More and more, AI is impacting the health, wealth and security of humans. AI should therefore be governed as such, like the food, pharma and finance industries. Maybe this calls for ‘FDA’- or ‘CE’-like certification. Let’s collaborate to create the ‘next generation’ of AI, for example by taking a closer look at the explicated and practical guidelines from Judea Pearl’s ‘cookbook’-like ‘Book of Why’, or by leveraging the good work that has already been done, for example by the ‘Alliance on Artificial Intelligence’.

Mrs Ursula von der Leyen has stated the European Commission’s resolution for 2020 and beyond: GDPR-like General #AI Regulation, or 'GAIR'. Hopefully, more Women in AI will step up and lead the way in the upcoming ‘Transition Twenties’, for all of us, to bring #explainable, #trustworthy and #sustainable #AI to #Life.





