
AI can wreak havoc if left unchecked by humans

This article was published on the World Economic Forum blog.

In the 16th Century, according to legend, the wife of the spiritual leader of Prague summoned a magical creature made of clay – a golem – to relieve the city of a water shortage. She commanded the golem to bring the water, then forgot to tell the creature to stop. Prague got all the water it needed – and then some – with the resulting flood inundating the city and killing many people.

We have our own magical creations these days, mathematical formulas called AI algorithms. Used wisely, these data-based decision engines can solve many difficult business challenges. Used in ignorance, these engines can wreak havoc, flooding businesses with narrow courses of action that can cause both financial and reputational harm.

Companies currently implementing, or planning to implement, AI as part of their decision-making processes must learn to use these digital golems wisely, or pay the price.

AI is an unstoppable force

Make no mistake: AI is coming of age for many businesses around the world. A recent MIT/BCG survey found that, internationally, 20% of the companies surveyed are starting to bring AI to scale, with their AI teams becoming fully staffed and AI engines fully industrialized. The survey concludes that this small group of AI leaders will be able to use the self-learning capability of AI to continue to increase their lead. The challenge for lagging companies (and entire countries), therefore, is to close this gap before it becomes insurmountable.

Unfortunately, short-sighted management teams that try to catch up with the leaders through a kind of “brute force” AI implementation risk a more spectacular failure.

AI-powered decision models are designed to maximize an objective function, whether that means to optimize margin, minimize inventory, or reach any number of business objectives. But, devoid of heart and soul, these algorithms will base their decisions on numbers only, with sometimes dire consequences.
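To make the point concrete, here is a minimal sketch of what such an engine reduces to: an argmax over an objective function that sees nothing but numbers. The candidate prices, unit cost, and linear demand curve below are all invented for illustration, not anyone's production model.

```python
# A toy margin-maximizing pricing engine. All numbers are illustrative.

def expected_margin(price, unit_cost, demand_at):
    """Expected margin = (price - cost) * predicted units sold."""
    return (price - unit_cost) * demand_at(price)

def choose_price(candidates, unit_cost, demand_at):
    # The argmax sees only numbers. Fairness, reputation, and context
    # are invisible to it unless someone encodes them explicitly.
    return max(candidates, key=lambda p: expected_margin(p, unit_cost, demand_at))

# Toy, fairly inelastic demand: units sold barely fall as price rises,
# so the engine happily pushes the highest candidate price.
demand = lambda p: 100 - 2 * p

print(choose_price([10.0, 12.0, 15.0], unit_cost=6.0, demand_at=demand))  # -> 15.0
```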

The real-world implications of unchecked AI

Lest you think this is some obscure matter to be fought over by mathematicians, data scientists, and coders locked in a room, consider these three real-world outcomes of business decisions dictated by AI, but not informed by humans.

• Several retailers have been trapped by algorithms designed to manage prices to maximize short-term margins. These algorithms were designed to assess the price sensitivity of individual retail locations. Those locations where demand for products was relatively unaffected by rising prices were deemed ripe for a series of price increases.

The only problem was that the stores the algorithm selected were in poor neighborhoods. When word got out that the companies were raising prices for their poorest customers, the media backlash was harsh. Each of the companies quickly abandoned the price increases, but not before the damage to their reputations had been done. Had any of the companies instituted a process that included human review of AI decisions (see the sketch after this list), this public relations nightmare might have been avoided.

• An AI engine created by a fashion retailer was designed to identify which items a given customer would be most likely to respond positively to. Even with totally unbiased algorithms, the retailer found that the AI engine mindlessly pushed specific ethnic fashions to members of specific ethnic groups. The risk to the retailer is that this kind of selection may seem to support stereotyping. And if the retailer hard-codes the algorithm to prevent this, is this just a subtler form of stereotyping?

• Based on an AI analysis of each individual customer’s sequence of medical treatments, one health insurer began a programme offering special services to customers more likely to be admitted to hospital. When the insurer first started to propose these services to selected customers, call centre staff realized that some patients did not yet know that they were, in fact, more likely to need hospital care. Insufficiently thoughtful implementation of the AI engine forced the insurer’s staff into the awkward position of having to explain to customers why they were being offered hospital services they didn’t know they would need.
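Returning to the pricing example in the first bullet above, the missing safeguard need not be elaborate. Below is a hedged sketch of a routing step that sends risky algorithmic price changes to a person instead of straight to production; the store names, segment labels, and 5% threshold are all invented assumptions.

```python
# A hypothetical human-review gate for algorithmic price changes.

from dataclasses import dataclass

@dataclass
class PriceChange:
    store_id: str
    old_price: float
    new_price: float
    segment: str  # e.g. a socio-economic label for the store's neighborhood

REVIEW_SEGMENTS = {"low-income"}  # segments a human must always review
MAX_AUTO_INCREASE = 0.05          # auto-approve only increases under 5%

def route(change: PriceChange) -> str:
    increase = (change.new_price - change.old_price) / change.old_price
    if change.segment in REVIEW_SEGMENTS or increase > MAX_AUTO_INCREASE:
        return "human_review"  # a person decides before anything ships
    return "auto_approve"

print(route(PriceChange("store-17", 2.00, 2.40, "low-income")))  # human_review
print(route(PriceChange("store-03", 2.00, 2.05, "suburban")))    # auto_approve
```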

Common sense as an antidote to unchecked calculations

Leaders are used to setting goals for their teams. These goals are often framed at a high level of abstraction, without taking real-world conditions into account. Fortunately, team members usually know from experience how to reach the goal using common sense, business ethics, and moral values.

Witness, for example, the employee who understands the greater objective but thinks: “I know I’m not supposed to grant this rebate to that customer, but if I don’t, I’ll lose the customer”; or who knows that “I should make this elderly person wait in line like everyone else, but I’m going to bring her to the front of the line”; or that “I’m not supposed to let people go without paying the bill, but in this situation it makes better business sense to give them more time”.

The problem with algorithms is that they have none of this kind of “business sense”. They are machines devoid of real-world experience, common sense, or moral principles. Lacking these critical attributes, AI is unable to adjust its decisions to the ambiguity of our real world. Left unfettered, algorithms are perfectly capable of spewing out millions of inappropriate decisions – and of doing so literally at the speed of light, posing a risk with each decision to the company’s profitability and reputation.

And just to be clear, I am not talking about algorithms developed by people out to break the law or to spread toxic ideologies. I am talking about mathematical creatures summoned up by teams with the best of intentions.

AI algorithms can dictate absurd, and often insensitive, actions.

Meet the new CEO (with an “e” for Ethics)

To avoid the risks posed by unthinking, unfeeling AI decisions, companies must first learn how to anticipate unintended consequences. To do so, they must create “guard rails” – safeguards to ensure that AI directives are not mindlessly implemented. These guard rails can be hard-coded into the algorithm itself and enforced by surrounding the algorithm with specific KPIs.

Sometimes, the guard rails can be relatively simple to create. A well-known fast food chain, for example, developed an advanced engine to send personalized promotions to its loyal customers. If the promotions were based on the algorithm alone, the company might inadvertently offer so many meal deals that it could harm its customers’ health. Fortunately, the company introduced common sense into the equation by capping the total number of calories in the food items it promotes every week.
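The chain’s actual implementation is not public, but a guard rail like this can be a few lines of code. In the sketch below, the field names, the offer data, and the 3,500-kcal weekly cap are assumptions for illustration.

```python
# A minimal calorie guard rail over an engine's ranked promotions.

def cap_weekly_promotions(ranked_offers, calorie_cap=3500):
    """Keep offers in rank order only while the week's calorie total stays under the cap."""
    kept, total = [], 0
    for offer in ranked_offers:
        if total + offer["calories"] <= calorie_cap:
            kept.append(offer)
            total += offer["calories"]
    return kept

offers = [  # already ranked by the recommendation engine
    {"item": "double burger meal", "calories": 1300},
    {"item": "chicken wrap", "calories": 600},
    {"item": "milkshake", "calories": 700},
    {"item": "family bucket", "calories": 2400},  # would breach the cap
]
print([o["item"] for o in cap_weekly_promotions(offers)])
# -> ['double burger meal', 'chicken wrap', 'milkshake']
```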

In other situations, it can be much more difficult to create guard rails to mitigate AI decisions. A new “ethics” function must be deployed to train operational managers to anticipate the real business risks posed by AI. The objective of this function would be to work with data scientists to keep to the fore a simple fact about algorithms: because even unbiased AI engines depend on real-world data (which, by its very nature, is biased), AI-based information processing can lead to biased decision-making just as surely as human-based processing can.
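A tiny, deliberately contrived example shows how this happens. The “model” below is neutral: it simply replays the most similar past decision. But because the historical decisions it learns from were biased (here via postcode, a common proxy for demographics), its outputs are biased too. All data and names are invented.

```python
# Biased history in, biased decisions out - even from a "neutral" learner.
history = [
    {"postcode": "A1", "score": 70, "approved": True},
    {"postcode": "B2", "score": 70, "approved": False},  # same score, different past outcome
]

def decide(applicant):
    # 1-nearest-neighbour "model": replay the most similar past decision.
    nearest = min(
        history,
        key=lambda h: abs(h["score"] - applicant["score"])
                      + (0.0 if h["postcode"] == applicant["postcode"] else 0.5),
    )
    return nearest["approved"]

print(decide({"postcode": "A1", "score": 70}))  # True
print(decide({"postcode": "B2", "score": 70}))  # False: yesterday's bias, replayed at scale
```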

The role of the ethics function must be broader than typical compliance-oriented functions, such as those designed to build and enforce processes that protect companies from legal risk. The overarching goal of an AI-related ethics function is to train managers and data science teams to avoid the debacles that can result when these golem-like algorithms are incorrectly implemented. To give this function the gravitas it merits, companies should appoint a Chief Ethics Officer to support the other CEO as he or she sets AI policies and defines AI guidelines.

Companies that are feeling the pressure to catch up with leading AI businesses must bring AI to scale with great care, thinking beyond more straightforward issues of algorithm design and technical implementation. As in the case of the golem of Prague, the process of creating a magical creature is, in a sense, the easy part. The real challenge is in making that creature truly useful.

In the world of AI, the process of coding an algorithm accounts for approximately 10% of the total required resources. Integrating the algorithms into legacy IT systems accounts for another 20%. The vast majority of the effort – the final 70% – goes into weaving together the human element and the algorithm to produce wise business decisions. This last 70% is often underestimated by companies, but it is the only way to make sure that artificial intelligence leads to decisions that are, in the real world, actually intelligent.

This article is part of the World Economic Forum Annual Meeting


Patrick Stroh

Data & Analytics | AI / ML | GenAI, LLMs

5y

The toilet seat/burial urn is an example of the most simplistic "AI" (major reach to use that term here). Much of what passes (in marketing and sales hype) is not "AI", or even machine learning, or even statistics or math, but the worst of "IF THEN" rule encoding dreamed up by someone implementing "the" recommendation engine, re-targeting, etc. A slightly more sophisticated version might "know" that certain products or services are only bought episodically (very hard to predict) and that you're absolutely wasting money on those ads or ad platforms. Just my major gripe about using that example. That is not to take away from many of the other points made in the article.

Markus Bergfors

PhD, Chief Expert at Nordea

5y

Is it really AI if it is checked by humans?

Karalee Close

Global Leader: Talent & Organization. Working at the intersection of strategy+ technology + people to drive exceptional performance

5y

Great stuff Sylvain - see you in Davos!
