Human-centered Artificial Intelligence

Could humans take a more collaborative approach to Artificial Intelligence (AI) instead of seeing it as competition?

From the early days of AI in the 1950s, we were primed to see it competitively. Initially, we constructed our definitions of AI around human intelligence, defining it as ‘the study and development of computer systems that can copy intelligent human behaviour’ (1). In doing so, we framed AI as an alternative to, or even a replacement for, human intelligence.

Next, we started tracking AI performance through all sorts of tests, such as the Turing Test, which qualifies a machine as intelligent when people cannot distinguish a conversation with it from one with a person, or Wozniak’s Coffee Test, which deems a machine successful if it can walk into an unfamiliar home and make a cup of coffee the way we do. It is telling, too, that we have sent our best experts to compete against machines in quiz shows and in games of chess and Go.

But there is a different way of approaching AI, one that comes alive in Kaplan and Haenlein’s definition (2):

a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation.

This definition taps into the core strengths of AI: absorbing large amounts of data (from social media, CRM databases, sensors), converting all sorts of sound, text and visual data into insightful knowledge with the help of algorithms, and, as a bonus, making better decisions based on previous outcomes (machine learning). It allows us to define, in a more open and curious manner, the type of relationship we want to have with machines, whether that is knowledge-driven (driving cars), interpreting emotions (understanding consumer mood in a chatbot), maintaining social relationships (robots as friends) or creative expression (producing essays and paintings).

Some firms, like IBM, have stopped using the term AI to address the fear of being replaced by machines. I have no doubt that AI will take over tasks, and even entire professions, currently performed by people. Much in the same way that the automobile dismissed horses as a source of power, self-driving cars will dismiss us as drivers too (3). Still, I like to believe that AI’s ability to learn and reason is not truly effective unless it works alongside humans, because there are many human factors to consider, as the following examples show.


Human-centered approach

  • AI might be more efficient, but it is not always preferred by humans. In an interesting study on the relationship between humans and AI, three jobs (journalist, teacher, driver) were broken down into the tasks belonging to each role (4). Whether a task is performed by a human or by AI, people always expect it to be completed with competence. Some tasks, such as preparing school materials, can easily be delegated to machines; for others, such as moderating classroom discussions, warmth and empathy are required. The same holds in healthcare: at this stage of AI development, patients may only accept hearing a diagnosis from their doctor, even though a machine could competently perform both the analysis and the delivery of the message.
  • AI generates profit, but not always in ways we consider ethical. In June 2017, Uber’s AI-enabled dynamic pricing system pushed up charges after the deadly terrorist attack in the London Bridge area (5). Surge prices result from the time of day, the number of available Uber drivers and the demand at a location (see the pricing sketch after this list). Uber disabled surge pricing in the immediate area of the attack after some 40 minutes. That human intervention remains necessary is illustrated by the fact that six months later Uber charged a Canadian rider 18,518.50 Canadian dollars (approx. €11,900) for a 20-minute ride (6). It was difficult to get in contact with Uber, and only after compassionate bystanders stepped in on social media was a full refund given. Another type of ethical decision arises when we formulate algorithms for self-driving cars that, in the case of an unavoidable crash, determine whether the car hits a child or a couple.
  • AI is great fun, but who owns the data? In 2016, Niantic Labs, a Google spin-off, launched an augmented reality game to hunt Pokémon using the Google Maps application. Apart from genuine entertainment for young people, the game also generated sales for Google and its partners, as it made users walk to, and enter, establishments that had paid for players to collect rewards there (7). This is not just a matter of whether Google should have informed us in advance of what it intended to do with the data, but also of the fact that it collects types of data we were never aware of. For example, I understand that Google Maps needs to know my exact position on earth, but what will it do with data such as my chosen route and speed? Do I really understand Facebook’s terms of agreement? Do privacy legislation and data protection agencies protect me from a company like Cambridge Analytica using my personal data?
  • AI may be able to do a lot, but not everything (yet). There are still technical challenges and watch-outs for AI that may one day be resolved. First, if an AI-enabled system works with data from the past, it will build on that data, and this may create unintended bias. A sad example is that self-driving cars are better at detecting people with lighter skin tones than people with darker skin tones, simply because the algorithms were trained on images of the former (8). Another challenge is the lack of transparency that may come with AI-enabled decisions. It may be a great benefit for a customer to receive feedback from an AI-enabled system on whether a loan was approved, but when ‘the computer says no’ the customer wants to know how the bank reached that decision. Finally, neural networks may be very good at acquiring and processing new information, but they tend to abruptly forget previously learned information, a phenomenon known as catastrophic forgetting (see the second sketch after this list). Let’s be fair, human memory isn’t perfect either, but catastrophic forgetting is a serious challenge for deep learning practitioners.
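
To make the pricing mechanics concrete, here is a minimal sketch of a demand-based surge rule with a human override. The function name, thresholds and formula are my own illustration, not Uber’s actual system.

```python
def surge_multiplier(open_requests: int, available_drivers: int,
                     base: float = 1.0, cap: float = 3.0,
                     human_override: bool = False) -> float:
    """Toy surge rule: the price multiplier grows with the ratio of
    demand (open ride requests) to supply (available drivers).
    Illustrative only; not Uber's actual algorithm."""
    if human_override:
        # An operator can switch surge off, e.g. after an incident.
        return base
    if available_drivers == 0:
        return cap
    demand_supply_ratio = open_requests / available_drivers
    # Scale the multiplier with excess demand, never above the cap.
    return min(base * max(1.0, demand_supply_ratio), cap)

# A demand spike with few drivers pushes prices to the cap...
print(surge_multiplier(open_requests=120, available_drivers=20))  # 3.0
# ...unless a human steps in, the kind of intervention Uber applied
# manually some 40 minutes after the London Bridge attack.
print(surge_multiplier(120, 20, human_override=True))             # 1.0
```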
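
And here is a small demonstration of catastrophic forgetting, with invented data and hyperparameters: a network first learns task A, then task B, and the new gradients overwrite the weights that encoded task A.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(offset: float):
    """Toy binary task: classify points against a boundary shifted by `offset`."""
    x = torch.randn(512, 2) + offset
    y = (x[:, 0] > offset).float().unsqueeze(1)
    return x, y

task_a = make_task(offset=0.0)   # boundary around 0
task_b = make_task(offset=5.0)   # same rule, inputs shifted to around 5

net = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt = torch.optim.Adam(net.parameters(), lr=0.01)
loss_fn = nn.BCELoss()

def train(x, y, steps=300):
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(net(x), y).backward()
        opt.step()

def accuracy(x, y):
    return ((net(x) > 0.5).float() == y).float().mean().item()

train(*task_a)
print("task A accuracy after learning A:", accuracy(*task_a))  # high, ~0.99
train(*task_b)
print("task A accuracy after learning B:", accuracy(*task_a))  # collapses, ~0.5
```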


Evolution of human beings

In summary, I see a number of reasons why humans and artificial intelligence are better off working closely together: our preference for human interaction, ethics, privacy and data ownership, the intrinsic challenges of unintentionally biased data, the lack of transparency in decision-making, and catastrophic forgetting.

Data scientists are working hard to resolve many of the challenges in each of these areas. Take, for example, privacy concerns (9). Recently, predictive models originally built for space research have allowed us to remove personal data even after the model has been constructed. Another approach is the so-called generative adversarial network (GAN), in which one neural network generates fictitious (but plausible) personal data while another tries to tell it apart from the real data. The end result is a model based on fictitious, anonymized data that works just as accurately and is privacy-friendly.
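
For readers who want to see the GAN idea in code, here is a minimal PyTorch sketch: a generator learns to produce synthetic records that a discriminator cannot tell apart from real ones, so downstream models can be trained on the synthetic, privacy-friendly data instead. The “income” data, network sizes and hyperparameters are invented for the example.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for real personal data: 1-D "income" values (illustrative).
real_data = torch.randn(1024, 1) * 15_000 + 40_000
mean, std = real_data.mean(), real_data.std()
real_norm = (real_data - mean) / std  # normalise for stable training

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(),
                              nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    batch = real_norm[torch.randint(0, len(real_norm), (64,))]
    fake = generator(torch.randn(64, 8))

    # Discriminator: learn to separate real records from generated ones.
    opt_d.zero_grad()
    d_loss = (loss_fn(discriminator(batch), torch.ones(64, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator: learn to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

# Synthetic, anonymised records that mimic the real distribution.
synthetic = generator(torch.randn(5, 8)).detach() * std + mean
print(synthetic.round())
```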

We cannot simply burden data scientists with this responsibility. There is a need for all of us as individuals to keep learning how to operate the constantly evolving machines around us, for organisations to promote critical thinking, to ask questions and to raise awareness of the ethical issues around data, and for governments to develop appropriate norms and legislation.

The development of AI holds similarities with the evolution of humankind (10). For a long time, scientists believed that Neanderthals were outsmarted, and therefore surpassed, by Homo sapiens. However, recent discoveries suggest that Neanderthals were probably more intelligent than we are today, as their brains were about 15% larger. In addition, remnants show that there were great musicians and painters among them. Despite this lower cognitive intelligence, Homo sapiens became the dominant human species. This is attributed to our social adaptation: Homo sapiens lived in larger groups, moved between groups and learnt from others.

This is the way forward I see for humans and AI: we will not beat AI at cognitive intelligence, but we can stay ahead if we continue to stay connected to what makes us human: sensing feelings and building partnerships.

 

About the author:

Constant Berkhout works at the crossroads of Shopper Psychology, Data Analytics and Retail Strategy and is the author of two books:

Retail Marketing Strategy, Delivering shopper delight

Assortment and Merchandising Strategy, Building a retail plan to improve shopper experience


References:

  1. Oxford Learner’s Dictionaries, www.oxfordlearnersdictionaries.com
  2. A. Kaplan & M. Haenlein (2019), Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence.
  3. C. B. Frey & M. A. Osborne (2017), The future of employment: How susceptible are jobs to computerisation?
  4. J. E. Wieringa (2020), seminar on Artificial Intelligence, Customer Insights Center
  5. https://money.cnn.com
  6. www.firstpost.com
  7. S. Zuboff (2019), The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power
  8. B. Wilson, J. Hoffman & J. Morgenstern (2019), Predictive inequity in object detection.
  9. J. E. Wieringa (2020), interview “Marketing: balancing data analytics and privacy”, www.rug.nl/news
  10. R. Bregman (2020), Humankind: A Hopeful History
