Limited capabilities of AI Systems

In the last article, we talked about economic trends that have fueled the development of artificial intelligence.

In this article, we will look at the limited capabilities of artificial intelligence to understand the vulnerabilities of this widely discussed (and widely used) technology.

The articles are part of a series to help self-employed people and entrepreneurs better understand and practically apply AI in their businesses.

If you have any questions, please feel free to reach me on LinkedIn.

Introduction

Human abilities to make correct decisions are limited. AI systems can support them in making better decisions.

"Decisions are often automated and largely unconscious."

Adult humans make about 20,000 decisions a day, a few big ones and many small ones, most of them unconscious. There are two basic ways of making them. One option is to weigh things up consciously. Accordingly, a manager would be aware of his or her own goals, options for action, and barriers, would carefully weigh the pros and cons, and would decide according to the best overall outcome. This process is largely in line with the utility theory of economics and with the human self-image, but it is only part of the truth.

Around 20,000 decisions are made every day, which is roughly equivalent to one decision every three seconds, assuming that we get eight hours of sleep per day.
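
The arithmetic behind that claim is easy to verify with a few lines of Python (the 16 waking hours follow from the eight hours of sleep assumed above):

```python
# Sanity check: one decision roughly every three seconds,
# assuming 8 hours of sleep and therefore 16 waking hours per day.
waking_seconds = 16 * 60 * 60          # 57,600 seconds awake
decisions_per_day = 20_000
seconds_per_decision = waking_seconds / decisions_per_day
print(seconds_per_decision)            # 2.88 -> roughly one decision every three seconds
```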

Conscious deliberation is thus not even possible in most cases. That's why our brain often falls back on a second option: automated and largely unconscious decision-making processes that have developed over the course of evolution.

These are fast, efficient, almost effortless, and unfortunately subject to errors. Kahneman describes the two possibilities as System 1 (fast, automatic, unconscious) and System 2 (slow, conscious, effortful).

https://juryanalyst.com/blog/daniel-kahneman-juror-bias-cognitive-bias/

Shown on four targets are the results of four teams of four members each. Team A's results are accurate: the hits are in the middle and close together. Team B's are biased: the hits have missed the center but form a tight cluster. Team C's results are noisy: they are centered around the middle but widely scattered. Team D's are both biased and noisy.

Right decisions

People make correct decisions predominantly in predictable environments in which decisions receive clear and immediate feedback.

Examples of such environments are mazes, games such as chess, driving, and the detection of anomalies in quality assurance. In driving, for example, a wrong decision would most likely be followed immediately by an accident. Skilled drivers therefore usually share the same views about the correct behavior at traffic lights and highway ramps, and they make the right decisions.

Even in quality assurance, opinions among skilled workers rarely differ on the errors that occur.

Biases

Biases influence decisions unconsciously in environments that do not provide immediate feedback. The correctness of personnel decisions, for example, often becomes apparent only after years. Social biases, such as those that cause people with dark skin to be hired less often than white applicants after job interviews, often become apparent only after decades. Over 50 cognitive biases have been documented; I have listed a few examples below.

  • One widespread phenomenon is the anchor effect. People orient an unknown value around a known reference value, even though the reference need not have any relation to the topic. In price negotiations, this effect is often exploited with the initial offer.
  • The halo effect describes the tendency to oversimplify and to make hasty judgments - for example, to put people into mental pigeonholes even though the information required for a rational judgment is not available.
  • Heuristics are another way of reacting quickly and making decisions with limited knowledge and limited time. With the substitution heuristic, the original question is replaced by another question that is easier to answer. For example, the question "Will this man be a successful CEO?" is replaced by the question "Does this man look like he could be a successful CEO?".
  • Risk assessment and statistics are also not strengths of System 1. Even the different presentation of a probability of occurrence can trigger different behavior. For example, "one in 10" sounds like a greater proportion than "10 percent."

Noise

Noise includes unsystematic influences and has received less attention than bias.

Kahneman distinguishes two types of noise:

  • If different professionals in the same role make different decisions on the same case, he calls the result variability across individuals.
  • If the same person decides the same case differently at different times, he calls it variability across occasions (occasion noise).

For example, in one study, software developers were asked on two different days to estimate how much time they would spend on a particular task. The estimates differed by an average of 71 percent, while a deviation of around 10 percent had been expected.

In some cases, the decisions of different experts on the same tasks differed even more than the 71 percent already mentioned. This applies to areas as diverse as the valuation of inventories and real estate, court judgments, and insurance policies.

Experts are therefore much less objective than they themselves think. The factors that lead decision-makers away from the correct assessment of a situation do not follow a clear pattern. Because the deviations do not all point in one direction (as a bias would), decision-makers are not aware of these influences. Instead, they believe they can assess the situation objectively, because they have great confidence in the correctness of their own decisions and, at the same time, great respect for the intelligence of their colleagues.

Bias and noise

The combination of bias and noise occurs frequently in reality. The noise in the results can be recognized relatively easily by mentally removing the target from the picture: wherever its center may have been, the hits are widely scattered. To measure noise in an organization, all you need are a few realistic cases evaluated independently by different professionals. The observed scatter of decisions is informative on its own, even without knowing where the correct target is. By the way, this is also true for noise without bias.
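
The measurement just described can be sketched in a few lines of Python. The case values below are invented for illustration, and the scatter is summarized here by the standard deviation of the independent judgments:

```python
import statistics

# Hypothetical example: five professionals independently value the same three
# cases (say, real-estate appraisals in thousands). All numbers are invented.
judgments = {
    "case_1": [410, 380, 450, 395, 470],
    "case_2": [120, 118, 122, 119, 121],
    "case_3": [800, 650, 900, 720, 840],
}

# Noise per case is simply the scatter (standard deviation) of the independent
# judgments - note that no "correct" value is needed to compute it.
for case, values in judgments.items():
    mean = statistics.mean(values)
    noise = statistics.stdev(values)
    print(f"{case}: mean={mean:.0f}, noise={noise:.1f} ({noise / mean:.0%} of the mean)")
```

Case 2 would be a quiet domain, case 3 a noisy one, and in neither case did we need to know the true values to see the difference.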

Companies expect consistent and correct decisions from their employees. Unfortunately, people are unreliable in this regard. Decisions can vary from one employee to another. Even the same employee may decide differently from case to case because of irrelevant factors such as the weather, blood sugar levels, or a substitution heuristic.

The cost of this decision-making behavior to organizations is high. There are several ways to address the problem. The most radical solution would be to consistently replace human decisions with algorithms. In many domains, this is already possible:

  • In environments where humans can make correct decisions, machines typically can as well.
  • If human decisions suffer from biases, AI systems often suffer from biases as well - for example, biases learned during training from past human decisions. These biases can be addressed through measures such as optimized training data and procedures or explainable-AI modules.
  • Noise is, in theory, even easier to reduce. Even relatively little data and simple algorithms can solve the problem.
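
To illustrate that last point: even a crude, hand-weighted scoring rule is perfectly consistent - it returns the same answer for the same input every time - whereas human judgment varies from occasion to occasion. The features and weights below are invented for the sketch:

```python
def credit_score(income, debt, years_employed):
    """A deliberately simple linear scoring rule; the weights are invented."""
    return 0.5 * income - 0.8 * debt + 2.0 * years_employed

# The same case always yields the same score - zero occasion noise.
applicant = dict(income=60, debt=20, years_employed=5)
first = credit_score(**applicant)
second = credit_score(**applicant)
print(first, first == second)  # 24.0 True
```

The rule may still be biased or plainly wrong, but unlike a human expert it cannot contradict itself between Monday morning and Friday afternoon.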

However, the replacement of humans by machines is generally not desired - even if it would be technically feasible and make economic sense. It therefore makes more sense for companies to define processes within which AI systems support humans in making better decisions. The goal of the processes is to combine the strengths of AI systems with those of humans.

A process could look like the one shown below:

https://hbr.org/2019/07/what-ai-driven-decision-making-looks-like

An AI system proposes possible decisions based on data - for example, the probabilities of certain diagnoses and therapies, of loan defaults, or of defective components. An expert then makes a decision, drawing on additional information that is not available digitally (such as personal experience with the patient, borrower, or supplier). What is essential in this process is that humans do not interact directly with the data but with the possible decisions that AI systems derive from the data. This model is likely to be more widely accepted than the complete replacement of human decision-makers with machines.
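
A minimal sketch of such a process, with all function names, thresholds, and probabilities invented for illustration:

```python
def model_default_probability(application):
    """Stand-in for the AI system; a real implementation would be a trained model."""
    return 0.32  # hypothetical model output for this sketch

def decide(application, expert_adjustment=0.0):
    """The expert sees the model's suggestion, not the raw data, and adjusts it
    with knowledge that is not available digitally."""
    p = model_default_probability(application)
    adjusted = min(max(p + expert_adjustment, 0.0), 1.0)
    threshold = 0.25  # invented risk threshold
    return "approve" if adjusted < threshold else "review"

# The expert knows the borrower personally and lowers the estimated risk.
print(decide({"id": 42}, expert_adjustment=-0.10))  # approve
print(decide({"id": 42}))                           # review
```

The division of labor is the point: the machine contributes a consistent, noise-free baseline, and the human contributes context the data does not contain.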


In the next article, we will look at applications of artificial intelligence and their categorization based on real-world examples, continuing our journey to understand how we can practically and sustainably integrate AI into our businesses.

I'm looking forward to your feedback/questions.

Written with inquisitiveness, ambition, and humility,

Alexander Stahl

