Regulating AI: A Good or Bad Idea?

To trust or not to trust, that is the question.

AI promises a brighter future, one filled with machines that do our dirty work, solve humanity’s most pressing problems, and maybe even learn to love us. These promises are not without peril. If we don’t approach the creation and implementation of AI thoughtfully, we may end up with machines that discriminate, don’t align with human values, and possibly even hurt us. 

This negative version of AI is already starting to take tangible shape. For example, facial recognition algorithms have become somewhat infamous for recognizing certain faces better than others. Some facial recognition systems sold by major technology companies have error rates as much as 34 percentage points higher for darker-skinned women than for lighter-skinned men. The key reason for that bias is that these algorithms were trained on datasets containing far more light-skinned men than darker-skinned women, so the AI was better at recognizing the faces it was more familiar with. It's the same way you'd be better at recognizing an oak tree than a pine tree if you were shown what an oak looks like 10,000 times and a pine only once.
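To make that mechanism concrete, here's a minimal sketch of the effect, assuming a toy dataset and scikit-learn. The groups, features, and sample sizes are all hypothetical stand-ins, not drawn from any real face recognition system:

```python
# Minimal sketch: an imbalanced training set skews per-group error rates.
# Everything here is a hypothetical toy stand-in.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n_per_class, center):
    # Fake "face embeddings" for one demographic group: one cluster per
    # label (0 = no match, 1 = match), both centered near `center`.
    X = np.vstack([
        rng.normal(center, 1.0, (n_per_class, 8)),        # label 0
        rng.normal(center + 0.8, 1.0, (n_per_class, 8)),  # label 1
    ])
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return X, y

# Train on 10,000 examples from group A but only 100 from group B,
# mimicking a dataset dominated by one demographic.
Xa, ya = make_group(5000, center=0.0)
Xb, yb = make_group(50, center=3.0)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb])
)

# Evaluate on fresh, equal-sized samples from each group.
for name, center in [("group A", 0.0), ("group B", 3.0)]:
    Xt, yt = make_group(1000, center)
    print(f"{name} error rate: {1 - model.score(Xt, yt):.1%}")
# Group B's error rate typically comes out far higher: the decision
# boundary was fit almost entirely to group A's data.
```

The model isn't malicious; it simply fits the data it's given, and it was given very little of group B.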

“The danger of AI is not that it’s going to rebel against us. It’s that it’s going to do exactly what we ask it to do.”  

- Janelle Shane during her April 2019 TED Talk.

This bias becomes a problem when we use AI to make decisions that actually impact people. For example, imagine the TSA using a facial recognition algorithm to determine which people to interrogate and which to let through. What if the AI recommends interrogating a group of people simply because it wasn't trained to recognize those people as safe?

That's the basic problem we're facing in the short term, and it extends beyond facial recognition algorithms to loan approvals, hiring decisions, medical diagnoses, and beyond. So how do we solve this computerized discrimination problem? Is AI regulation the right answer?

When is regulation helpful in general? 

Regulation is helpful when a behavior poses a potential public harm that can be avoided by setting ground rules. For example, the FDA regulates pharmaceutical companies by deciding which drugs they can sell, since drugs that aren't tested properly can harm people. The SEC regulates how public companies behave financially so that the many people who own their stock aren't financially harmed. The UN attempts to regulate how countries behave to avoid another world war.

In these examples, and many others, regulation is a good thing because it attempts to optimize for human safety and happiness. Without regulation in the pharmaceutical industry, for example, human greed would drive some people to behave poorly by selling drugs that aren't properly tested, and many people would get hurt. This unregulated scenario is bad for two reasons: 1) people get hurt, and 2) long-term innovation is hindered.

Long-term innovation requires trust.

I'm an optimist when it comes to AI. I believe AI will create an even more amazing world for us to live in. I see it happening already through the diagnosis of disease, increased safety in our communities, and personalized education at scale. While I fear both the immediate and the existential threats that AI represents, my greater fear is not getting the opportunity to benefit from AI because our society gets burned one too many times during AI's development phase and decides to take an axe to progress.

Imagine that you are a kid, and you started a lemonade stand last week. You squeezed the lemons, added the water, the sugar, some ice, and you stirred. Your innovation worked and you sold it to your neighbors for a nice profit. Now, imagine that it's week two and people are sick of your lemonade, so you decide that you have to invent a new recipe. In your infinite kid wisdom, you decide to take the same recipe from before and add some of your mom's favorite sleeping pills, because you've observed her happiness when taking those pills. You serve the lemonade again and market your new recipe with a "secret ingredient that customers will love!" Your neighbors buy your lemonade, then as they commute to work they fall asleep at the wheel and people get hurt. You were innovating in an unregulated environment.

That type of unregulated lemonade environment actually hinders innovation. It deprives you, the innovator, of an environment that allows for consistent and confident experimentation. And without experimentation, you cannot innovate. Having hurt the general public like you did, it's unlikely they will tolerate your continued experimentation. Bye bye, innovation.

If we don't trust AI, we won't benefit from it.

Imagine an AI application where you want to use a computer vision algorithm to read MRI scans. It's a general-purpose cancer detection algorithm, so its output is either 0 (no cancer) or 1 (cancer). A team of data scientists may build a great model with a vast training dataset and test it against new data to show that it works, but when it gets pushed into production and actually starts reading patient scans, its error rate shoots through the roof. Patients without cancer are told they have cancer and patients with cancer are told they're healthy; it's mayhem. This type of scenario would erode human trust in AI, and if enough of these scenarios happen, AI progress may find itself at a standstill. Through regulation, we may stand a better shot at sustained AI innovation.
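Here's a minimal sketch of that failure mode, assuming toy data and scikit-learn (the "scans" and the shift are hypothetical stand-ins, not a real medical pipeline): a model that looks excellent on held-out test data, then collapses when the production data drifts, say, because scans start arriving from a different machine.

```python
# Minimal sketch: great test metrics, then a collapse under distribution
# shift in production. All data here is a hypothetical stand-in for MRIs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def scans(n, shift=0.0):
    # Toy "scan features": class 1 (cancer) is offset from class 0
    # (no cancer); `shift` simulates a new scanner with biased readings.
    y = rng.integers(0, 2, n)
    X = rng.normal(0.0, 1.0, (n, 16)) + y[:, None] * 1.5 + shift
    return X, y

X, y = scans(5000)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"held-out test accuracy: {model.score(X_test, y_test):.1%}")

# "Production": same task, but every feature is shifted by the new scanner.
X_prod, y_prod = scans(2000, shift=1.5)
print(f"production accuracy:    {model.score(X_prod, y_prod):.1%}")
# The second number typically falls to near chance: healthy production
# scans now look like the cancerous scans the model was trained on.
```

This is why deployed models need ongoing monitoring of their inputs and predictions, not just a one-time evaluation; good test metrics say nothing about data the model has never seen.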

The word "regulation" implies red tape and stalled progress; it's a generally uninspiring word. I, like most of us, prefer freedom, adventure, and creativity, and regulation seems to contradict those concepts. In part, that's because regulators can be too far removed from innovators for the value of regulation to be apparent. Regulation shouldn't be viewed solely as a means of risk management; it should be viewed as a means of sustaining innovative behavior. For example, when your parents told you to wear a helmet before riding your bike, they were regulating your behavior to ensure you didn't hurt yourself. That regulation was good because it created an environment where you could keep doing the innovative behavior you wanted: perfecting those wheelies.

Many people talk about "regulating big tech" because, they say, these tech companies are incapable of self-regulation. For most of their history, Google, Amazon, Facebook, Netflix, and the rest have mostly provided great services cheaply or for free, with little reason for regulation. There didn't seem to be a negative side to their innovation. Google is a free search product, Amazon is a cheap everything store, Facebook is free online social networking, Netflix is cheaper and higher quality than Blockbuster used to be… The issue that has arisen over the last half-dozen years is one of AI, not "big tech." As AI has become more sophisticated, so too have the abilities of these tech companies to do seemingly creepy stuff with their customer data, from selling weirdly targeted ads, to optimizing for screen time, to listening in on customer conversations. Critics of big tech companies have grown louder as AI has grown smarter. This is a key point because as public trust in companies like Facebook erodes, public trust in AI erodes with it.

Ensuring we benefit from AI.

Thinking machines represent a new consideration for our society. We are building another form of advanced intelligence on this planet, and we are proceeding to give this intelligence responsibility over many areas of our businesses, communities, and personal lives. It's exciting because these AI decision-makers can see around corners that our human brains cannot. Problems can be solved with AI that we would never be able to solve if left to our own, organic, devices. However, given things like the 2016 election, the U.S. government's research into autonomous drone swarms, and the use of deepfakes for fraud, it's clear that we, as a society, lack a framework for bringing thinking machines into our world. Luckily, history is full of things we weren't ready for, from nuclear capabilities to advanced biotechnology, and we always seem to make them work out.

Generally speaking, when two people first meet, they each strive to maintain a positive reputation with the other person. They want the other person to trust them, and they want to learn to trust the other person. The point being, in a person-to-person relationship, trust is self-regulated because it's a human instinct to build positive social bonds. When it comes to the AI-human relationship, AI isn't yet sophisticated enough to be responsible for its own reputation. Therefore, the people who create and deploy AI into the world have a responsibility not just for their personal or their company's reputation, but also for the reputation of AI itself. If someone outside the AI industry is treated poorly by an AI, whether through discrimination, being hit by an autonomous car, or being given an incorrect cancer diagnosis, that person's distrust is going to be applied to AI across the board. It will be "AI" that is at fault, not the person who created the AI and not the car.

It's like a smart microwave.

AI is different from every other technological revolution because the technology itself is intelligence, which feels worthy of blame. Imagine that it's 1973 and you and your family just made the big decision to buy a microwave. It was a bit scary to have a new and difficult-to-understand technology preparing your food, but it just seemed so damn convenient. You used the microwave, and you've continued to do so ever since, but you never lost your skepticism. Is this thing giving me cancer? While your relationship with your microwave is partly defined by your distrust of it, that's ok because the microwave can only hurt you if you let it. If you opt not to use it at all, or if you decide not to stand in front of it for fear of getting beamed by tiny little micro waves, the microwave can't hurt you.

Now, imagine if your microwave were intelligent! It automatically detects the food inside so you never have to press a button and the temperature is always perfect; it texts you when your food is at the perfect eating temperature so you don't have to watch it; it keeps track of the nutrient composition of your last week of meals and recommends food to round out your diet. What an amazing smart microwave! But with all this added capability comes deceit. What if the microwave secretly turns on every time you walk by it, hitting you with those microwaves you fear so much? What if the microwave starts recommending you eat mac and cheese six days a week based on your unique genetic makeup, causing you to gain weight? What if the microwave tells you the food is the right temperature to eat when in fact it's scalding hot! The horror! You no longer trust your smart microwave. You throw it out. You don't buy other "smart" products because you suspect they might harm you like the microwave did. You're wary when doctors say they're using AI to treat you. You don't like self-driving cars. In general, you're against the development of smart machines because it's clear that they won't listen to you. Your relationship with AI has been ruined, and it wasn't AI's fault.

The key thing to realize in this scenario is that the microwave isn't acting with malevolent intent. It isn't conscious enough to be "evil" or be driven by emotion. The bad things the microwave did to you happened because the humans who taught it to do good things unexpectedly taught it to do bad things too. It's up to us, as the humans creating these smart machines, to do our best to avoid inadvertently asking AI to cause us harm.

To answer the original question, regulating AI is a good idea if approached from the angle of sustaining AI innovation by way of building trust between people and AI.

AI with <3,

Michael

Hopefully this stirred up some helpful thoughts for you. I’m always down to chat about pretty much anything! Some of my main interests are AI, BMIs/BCIs, human longevity/CRISPR, psychology/neuroscience, global collaboration, space exploration, physics/quantum computing, and typically anything else that seems like it could improve the human experience. You can message me on LinkedIn or find me on Twitter @weissiam.

Some final food for thought...

For the vast majority of human history, almost every decision made that has impacted our civilization was made by a human. Now that intelligence is becoming artificial, we have the opportunity to pass off decision-making to machines. In a way, the evolution of AI is like humanity creating its own boss. AI will determine many facets of our lives, and if we don’t build it right, we’re going to leave ourselves with one crappy boss.
