HUMAN - AI

If you ask something of ChatGPT, an artificial-intelligence (AI) tool that is all the rage, the responses you get back are almost instantaneous, utterly certain and often wrong. It is a bit like talking to an economist. The questions raised by technologies like ChatGPT yield much more tentative answers. But they are ones that managers ought to start asking.

One issue is how to deal with employees’ concerns about job security. Worries are natural. An AI that makes it easier to process your expenses is one thing; an AI that people would prefer to sit next to at a dinner party is quite another. Being clear about how workers would redirect time and energy that is freed up by an AI helps foster acceptance. So does creating a sense of agency: research conducted by MIT Sloan Management Review and the Boston Consulting Group found that an ability to override an AI makes employees more likely to use it.

Whether people really need to understand what is going on inside an AI is less clear. Intuitively, being able to follow an algorithm’s reasoning should trump being unable to. But a piece of research by academics at Harvard University, the Massachusetts Institute of Technology and the Polytechnic University of Milan suggests that too much explanation can be a problem.

Employees at Tapestry, a portfolio of luxury brands, were given access to a forecasting model that told them how to allocate stock to stores. Some used a model whose logic could be interpreted; others used a model that was more of a black box. Workers turned out to be likelier to overrule models they could understand because they were, mistakenly, sure of their own intuitions. Workers were willing to accept the decisions of a model they could not fathom, however, because of their confidence in the expertise of the people who had built it. The credentials of those behind an AI matter.

How people respond differently to humans and to algorithms is a burgeoning area of research. In a recent paper Gizem Yalcin of the University of Texas at Austin and her co-authors looked at whether consumers responded differently to decisions—to approve someone for a loan, for example, or a country-club membership—when they were made by a machine or a person. They found that people reacted the same when they were being rejected. But they felt less positively about an organisation when they were approved by an algorithm rather than a human. The reason? People are good at explaining away unfavourable decisions, whoever makes them. It is harder for them to attribute a successful application to their own charming, delightful selves when assessed by a machine. People want to feel special, not reduced to a data point.
