ETHICS in AI

(Image credit: androidauthority.com)

Here is a thought experiment. You are driving a car that is about to get into an accident. You can save either yourself, a small child, or a group of five elderly adults. You must choose exactly one of these options; you cannot save them all. Which option will you choose?

This question has never been asked on a driver’s license exam, but questions like these will be asked of the AI that runs driverless cars. Today, trying to answer them often leads to more uncomfortable questions. For example:

  1. Is there a quantitative measure of human life? Does it vary with age, gender, nationality, health?
  2. For now, at least, AI is not sentient. Therefore, the ethics an AI expresses are those that humans impart to it. Who will decide what ethics to impart to AI?
  3. Will Tesla practice different AI ethics from GM? Will the US practice different AI ethics from China?
  4. Are we expecting our AI to exhibit a higher ethical ground than ourselves?

Also, what is considered ethical behavior varies by culture and evolves over time. Thus, as AI becomes more intertwined in our daily lives and grows more capable, the discussion of AI ethics will force humanity to dig deep into itself and its own assumptions about ethical and moral behavior in order to resolve these profound dilemmas. The most sought-after job ten years from now may well be that of a philosopher, rather than that of a data scientist!

While AI today brings tremendous value to our world and will continue to do so, it is worth understanding some of the ethical hazards that our current and emerging uses of AI pose:

Lack of Fairness: AI algorithms are trained on prior transactional data, so they amplify any human bias, conscious or unconscious, present in the historical data set. For example, an AI-enabled recruiting algorithm at a leading technology firm shortlisted only male candidates. This was due to bias in the historical data of selected candidates, who were mainly male. The recruiting algorithm was shut down, and the incident caused some soul-searching within the firm. HBR has a very interesting article on this topic: https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai
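To make the mechanism concrete, here is a minimal, hypothetical sketch in Python of how a model trained on biased historical hiring decisions reproduces that bias, and how a simple demographic-parity check can surface it. The data is synthetic and purely illustrative; this is not the actual recruiting system described above.

```python
# Illustrative sketch only: synthetic data, not any real firm's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic "historical hiring" data: gender (0 = female, 1 = male) and a
# skill score. Past decisions favoured male candidates regardless of skill.
gender = rng.integers(0, 2, size=n)
skill = rng.normal(0, 1, size=n)
hired = (0.5 * skill + 1.5 * gender + rng.normal(0, 1, size=n)) > 1.0

# Train a screening model on the biased historical decisions.
X = np.column_stack([gender, skill])
model = LogisticRegression().fit(X, hired)

# Demographic-parity check: compare shortlisting rates across groups.
preds = model.predict(X)
rate_female = preds[gender == 0].mean()
rate_male = preds[gender == 1].mean()
print(f"Shortlist rate (female): {rate_female:.2%}")
print(f"Shortlist rate (male):   {rate_male:.2%}")
print(f"Demographic parity gap:  {abs(rate_male - rate_female):.2%}")
```

Note that simply dropping the explicit gender column is usually not enough, because proxy features (hobbies, word choices, schools) can encode the same information; this is why fairness audits are typically run on a model’s outputs rather than its inputs.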

Money vs Morals: Another tricky issue is optimizing AI for the right outcomes. A ‘for profit’ hospital must calibrate its AI to strike a balance between patient outcomes and profitability. Banks must balance inclusive lending, risk-weighted pricing, and managing defaults. When firms get these competing objectives out of balance, ethical issues arise. An often-quoted example is social media sites optimizing their algorithms to generate clicks and advertising revenue, which ends up serving progressively more addictive, and at times radical and/or false, content.
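One way to think about this calibration is as a multi-objective optimization problem, where the weights assigned to each objective are themselves an ethical decision. The sketch below is a hypothetical illustration; the objective names, numbers, and weights are assumptions made up for this example, not any real firm’s model.

```python
# Hypothetical illustration: a weighted objective that trades off profit
# against patient outcomes. The weights encode an ethical choice; shifting
# them changes which plan the system recommends.
from dataclasses import dataclass

@dataclass
class TreatmentPlan:
    name: str
    expected_profit: float         # illustrative margin in dollars
    expected_outcome_score: float  # illustrative 0..1 quality-of-outcome estimate

def plan_score(plan: TreatmentPlan, w_profit: float, w_outcome: float) -> float:
    # Simple weighted sum; a real system would normalize units first.
    return w_profit * plan.expected_profit + w_outcome * plan.expected_outcome_score

plans = [
    TreatmentPlan("conservative", expected_profit=500.0, expected_outcome_score=0.90),
    TreatmentPlan("aggressive",   expected_profit=4000.0, expected_outcome_score=0.75),
]

# With profit weighted heavily relative to outcomes, the 'aggressive' plan wins;
# weighting outcomes heavily enough flips the recommendation.
for w_profit, w_outcome in [(1.0, 100.0), (1.0, 50000.0)]:
    best = max(plans, key=lambda p: plan_score(p, w_profit, w_outcome))
    print(f"weights (profit={w_profit}, outcome={w_outcome}) -> recommend: {best.name}")
```

The point of the sketch is that the trade-off never disappears; it is merely hidden inside the weights, which is where the ethical scrutiny belongs.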

Balance of Power: In 1957, Vance Packard published ‘The Hidden Persuaders’, a prescient book on how advertisers manipulate our psychology and inner desires to peddle their wares. The book also describes the same techniques being applied to the 1956 presidential election! Fast-forward to today: social media companies now understand us individually at a level of intimacy that is scary. While we may choose to bask in the glory of ‘free will’, such intimate knowledge is ‘lethal’ in manipulative hands. Advanced AI technologies like facial recognition can further widen this information asymmetry between the data and technology ‘haves’ and ‘have-nots’. The ‘telescreen’ in George Orwell’s 1984 has now become a reality! Today, a surveillance state can exert enormous power over its citizens through a combination of access to their digital footprint and facial recognition technology. This power can then be ‘lethal’ beyond a figurative sense.

Finally, individuals with access to advanced technologies like brain-computer interfaces and bionic body parts would have too great a competitive advantage over those without them. It is safe to assume that wealthy individuals would gain such access much earlier than the common person, potentially further exacerbating the global issue of wealth inequality and creating larger gaps in other aspects of life, such as health.

Privacy and Security: Information asymmetry can also result in violations of our privacy, for example, organizations misusing facial recognition to infer our sexual orientation or age and basing hiring decisions on that knowledge. Besides privacy, information security is another concern. Hackers can compromise and subsequently misuse our digital identities. Today, a credit card or SSN hack feels huge. Tomorrow, we will contend with the compromise of our entire digital identity, which can dwarf the inconvenience of a credit card hack. And the day after, when our brains are plugged into the cloud, hacking will take on a very different, more ominous tone.

Tinkering with Nature: A fast-emerging concern is the use of AI, intertwined with other advanced technologies, to change the course of Mother Nature. This includes prolonging life, designer babies, and bionic humans. Often, the challenge with such advances is anticipating their second- and third-order consequences for humanity. For example, prolonging life can potentially strain our natural resources. It will also surely change cultural and societal norms. If humans start living for 150 years on average, we will have to answer questions such as: when do we get married, when do we have children, do we live with the same spouse throughout our lives, how do we keep ourselves positively occupied, and what do such long life spans do to our mental well-being?

Job Displacement: Over the past decades, every technological advance has created a fear of humans being displaced from their jobs. While jobs have indeed been displaced, many newer types of jobs, in greater numbers than those lost, have been created. The big question today, though, is whether AI will be different and lead to a massive net reduction in jobs. Perhaps an even bigger question is how many of the jobs being created by the AI revolution are well-paying jobs with good job satisfaction. A valid fear is that as AI monopolies are created, they may result in ‘monopsonies’. A monopsony is a market situation in which there is only one buyer. If the majority of new job creation is concentrated with a few employers, those employers will have the power to restrict wage growth.

Every technology sits on the shoulders of its predecessor technologies, and we are therefore entering a phase where both the magnitude and the frequency of technological disruption are substantially greater. Interestingly, this was anticipated by futurist Alvin Toffler in 1970, when he coined the term ‘Future Shock’ and wrote about the social implications of technological change. We will therefore require more active intervention by governments and public-private partnerships to enable displaced workers to acquire newer skills. Leaving this re-skilling to market forces may not be enough.

Over-reliance: As AI becomes more pervasive in our lives, humans may become over-reliant on it, losing innate abilities across a variety of life skills. Admittedly, this is an old concern. Over time, as technology has progressed, humans have let go of old skills and acquired new ones. Most of us today cannot farm, build a house, or sew clothes, the most basic survival skills, yet we don’t worry about it. Similarly, as AI permeates our everyday lives, we will learn newer skills. The debate will be over how much of our life activities we are okay turning over to AI without losing our sense of being human. For example, if a friend needs advice, a person suffering from depression needs counseling, or kids need to be taught right from wrong, are we okay turning these activities over to AI? Here is a link to an interesting article on this subject: https://www.fastcompany.com/40588020/the-case-against-teaching-kids-to-be-polite-to-alexa

In this future scenario of over-reliance on AI, we will also need to plan for a possible apocalyptic event in which AI goes rogue or simply stops working. The question to ask ourselves in this apocalyptic or dystopian scenario, however, is what we should be more concerned about: our lost skills of giving advice, expressing empathy, or reading people’s faces, or our inability to grow food, build shelter, and sew clothes?

Human Irrelevance: Yuval Noah Harari, in his talks and books, notes that while humans had to worry about exploitation during the industrial revolution, they will have to worry about irrelevance during the AI revolution. Possibly, though, before super-intelligent AI and the ‘Singularity’ become a reality, we must contend with much more pressing issues as we scale our use of AI. By addressing issues like fairness, privacy, and the balance of power, we will hopefully take steps to ensure that we propagate the use of AI in a responsible manner, so that humans lead more fulfilling lives rather than finding themselves irrelevant.

In part 2 of this article, I will discuss steps we can take to use AI responsibly, and to address and mitigate the ethical concerns that inevitably come up with our use of AI, and perhaps of any advanced technology.

Shruti Dutta

Managing Director at Accenture UK Financial Services - I&D Lead UKI Ops

Well said Gaurav, I believe ethics and bias are key and need to be governed well in AI.

Saumyajit Ghosh

Accenture Growth & Strategy | Solution Development | Sales Excellence | Pre-sales | Sales Enablement

Food for thought and humour in equal measure - I couldn't help smiling at the question that if humans were to live ~150 years, one would wonder whether to live with the same spouse throughout! Look forward to part 2.

Abhishek Verma

Client Partner | Consumer Goods, Retail, & QSR | Banking & Capital Markets |

Thought provoking

Neeta R.

Global Marketing & Communications, The Creatif Wagon

Very well written!!
