Artificial Intelligence (AI) Impact on Ethics


“Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold.” – Ray Kurzweil, Futurist

“‘How did you go bankrupt?’ ‘Two ways. Gradually, then suddenly.’” is a brilliant exchange from Ernest Hemingway’s The Sun Also Rises. The same could one day be said about AI: how did AI take over humanity? Gradually, and then suddenly. Will it become a reality? And how do we tackle the ethical and humanity-related challenges of AI?

“Metropolis” is a futuristic science-fiction film released in 1927 by Fritz Lang. It broaches the impact of industrialization, showing how an error in technology and automation could harm humans, and it highlights the disparity between classes: the haves and the have-nots, the rich and the poor. Humanity has progressed phenomenally over the past century in a multitude of areas, be it medicine, science, or engineering. One field that has already made a giant leap and continues to make an impact is Artificial Intelligence.

"Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." - Prof. Stephen Hawking

In his essay “The Relativity of Wrong”, the great science fiction author Isaac Asimov, creator of the seminal “Three Laws of Robotics”, argues that wrongness is a matter of degree. AI ethics can be summarized in a similar question: how relatively wrong is a system when it makes a wrong decision? Ever since the first seminal article on neural networks by Walter Pitts and Warren McCulloch in 1943, written while attempting to emulate the Turing Machine, and the coining of the term “Artificial Intelligence” at Dartmouth College in 1956, AI has progressed slowly but steadily. The real progress has come in the last two decades, leveraging exponential growth in computing power and newly developed algorithms. AI has made a significant impact on our lives. From autodidactic (self-learning) machine-learning algorithms to highly predictive automation, AI is everywhere.

Isaac Asimov - Laws of Robotics
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

The question is: how many of these laws can a robot or an AI algorithm obey all the time? AI is only as good as its algorithms, and in a way it reflects the thought processes of the design and development teams involved. Would an algorithm we build be able to cover the diverse needs of global users? Can we train an algorithm on billions of permutations of choices and trillions of potential decisions, and have it offer diverse options for how humans will behave, based on the values and behaviours each of them has imbibed?

Let us take some facts.  

“AI doesn't have to be evil to destroy humanity – if AI has a goal and humanity just happens in the way, it will destroy humanity as a matter of course without even thinking about it, no hard feelings." – Elon Musk
  • Firstly, AI is built by and for humans and is shaped by human values and behaviours. It acts as a mirror of current societal thought, or of whatever is fed into it. Given the complexity of current human and societal challenges, what happens when those same challenges are passed on to the systems being developed? By some estimates, most key artificial intelligence programmers are young, white, male computer scientists. How can this group represent the diversity of the globe? Only 22% of programmers working in artificial intelligence and machine learning are women. When it comes to minorities, the numbers are far smaller; some top tech giants report only 4% to 5% minority representation in key roles. These are some of the implicit, existing biases we see.
  • Algorithmic bias is another major challenge. Especially when we use machine learning algorithms extensively, the data fed in and the rules and logical decisions made determine the outcome. For example, a Google Image search for “CEO” displayed only about 11% women, when in reality about 27% of CEOs in the US are women. Certain implicit decisions are made by the algorithm based on how it is written. Additionally, the data set being fed in needs to be as diverse as the real world, which is another challenge if companies do not have such data readily available.
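As a minimal sketch of how “the data fed in determines the outcome”, consider a toy majority-vote “model” trained on a deliberately skewed, entirely hypothetical sample. The point is not the model (real systems are far more complex) but the failure mode: the model can only echo the imbalance it was shown.

```python
from collections import Counter

# Hypothetical, deliberately skewed training sample: the model only
# ever sees what it is fed, so any imbalance here becomes its "truth".
training_data = [
    ("male", "ceo"), ("male", "ceo"), ("male", "ceo"),
    ("male", "ceo"), ("female", "ceo"),            # 4:1 skew in the sample
    ("female", "assistant"), ("female", "assistant"),
    ("male", "assistant"),
]

def predict_gender(role, data):
    """Naive majority-vote 'model': return the gender most often
    paired with this role in the training data."""
    counts = Counter(g for g, r in data if r == role)
    return counts.most_common(1)[0][0]

# The prediction reproduces the skew of its inputs, not reality.
print(predict_gender("ceo", training_data))  # -> male, purely from sample bias
```

Fixing the bias here means fixing the data, not the code: the function is “correct”, yet its answer is only as representative as its inputs.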
“I have always been convinced that the only way to get artificial intelligence to work is to do the computation in a way similar to the human brain. That is the goal I have been pursuing. We are making progress, though we still have lots to learn about how the brain actually works.” - Geoffrey Hinton - AI Pioneer and Father of Deep Learning / Neural Networks

Let us take some examples of where AI-based algorithms have gone wrong.

  • Microsoft’s experiment with an AI-driven chatbot for Twitter that learned from the inputs it was given, @TayAndYou (TayTweets), failed miserably due to how it was designed. Its “say after me” feature was leveraged by abusive users to turn the chatbot into what appeared to be a racist bot. Microsoft stopped the exercise within a day of launch.
  • Several crime-prediction and risk-ranking tools used by security and police forces, such as COMPAS and PredPol, have been criticized for skewing risk scores against people of colour and minorities over potentially high-risk members of the majority. The reasons lie in the datasets supplied, the limited basis on which risk profiles are generated, and the algorithms themselves.
  • Quite a few chatbot-based mobile apps struggle to handle unconventional queries and are likely to respond incoherently.
  • Google Translate’s handling of Turkish, a language whose pronoun “o” is gender-neutral, initially discriminated over who could be a babysitter versus a doctor: on a round-trip translation, “she is a doctor” came back as “he is a doctor”. Incorrect auto-translations in popular social media applications have also led to dangerous actions by users and by protection agencies around the world.
  • Facial-recognition algorithms have rejected photographs of people of Asian origin because the software misread their eyes and eyebrows, judging the eyes to be closed, and applications (such as passport photos) were rejected as a result.
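The Turkish translation case above can be sketched in miniature. Turkish “o” carries no gender, so a translator must choose an English pronoun; a naive frequency-based choice simply reproduces whatever skew exists in its corpus. The counts below are hypothetical, purely for illustration; real systems are statistical in a far richer way, but the underlying failure mode is the same.

```python
# Hypothetical corpus counts of pronoun/profession co-occurrence.
# The most frequent pairing in the data wins, regardless of reality.
corpus_counts = {
    "doctor":     {"he": 900, "she": 100},
    "babysitter": {"he": 50,  "she": 950},
}

def translate_pronoun(profession: str) -> str:
    """Render Turkish gender-neutral 'o' into English by picking the
    pronoun most often paired with the profession in the corpus."""
    counts = corpus_counts[profession]
    return max(counts, key=counts.get)

# "o bir doktor" -> "he is a doctor": corpus skew, not grammar, decides.
print(translate_pronoun("doctor"))
print(translate_pronoun("babysitter"))
```

Nothing in the grammar forces either choice; the bias enters entirely through the training data the system happened to see.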
“There’s a real danger of systematizing the discrimination we have in society [through AI technologies]. What I think we need to do — as we’re moving into this world full of invisible algorithms everywhere — is that we have to be very explicit, or have a disclaimer, about what our error rates are like.” — Timnit Gebru, Research Scientist, Google AI

In addition to the examples given above, there are challenges of deepfakes and abusive hackers to handle, and the challenges compound if the algorithm is one-sided. How can we address bias in AI and big data at design time? As a society, we need to handle issues such as justice and equality, the correct use of force, user privacy, safety and certification of the AI and robots being used, displacement of labour and taxation, information asymmetries arising as these systems are implemented, finding normative consensus across a diverse global audience, and addressing government mismatches and challenges.

These can be addressed by the three-layer AI governance model postulated by Gasser and Almeida of Harvard University, shown as a rough sketch in the poster for this article.

  1. A technical layer of data and algorithms, addressing data governance, accountability and standards
  2. An ethical layer, addressing ethical criteria and principles
  3. A social and legal layer, addressing the norms, regulation and legislation of societal, national and legal requirements

It is also important for corporates, governments and other stakeholders to have policies that regulate AI and technology evolution, addressing biases and fixing accountability by implementing governance principles. For example, the city of New York passed a law in 2017 to ensure the algorithms used by the city have a fair and transparent mechanism for addressing bias and discrimination. The Australian Human Rights Commission launched a project in 2018 to address the impact of AI and emerging technologies. Finally, technology giants such as Google have well-written AI policies that are being emulated.

Conclusion

To conclude, AI will be successful when an algorithm produces results in the most optimal manner possible, without bias and in a transparent way, better than a human can. An example could be using AI for traffic management in the Indian city of Bengaluru (a.k.a. the Silicon Valley of India), in a way far better than even a talented police force, since algorithms and computing power are free of human fatigue and error. The success of society and the world in using technology for constructive purposes depends largely on how this is done without the technology becoming a major deterrent.

What are your comments?

Note: Title images were created using CANVA tools. Authors of the quotes are credited where known. Most of the information shared is generic and available in various forms on the Internet. Respective trademarks are owned by the corresponding firms. Opinions about the tools highlighted are from a personal-experience standpoint and in no way reflect the views of my current or past employers or clients.

#WhatInspiresMe #KRPoints #EthicsInAI #Innovation #Robotics #Ethics #EthicsInTechnology #Futurism
