Artificial intelligence - is there a business case?

According to a recent Econsultancy report, 33% of large organisations (with revenue above £150m) are already using artificial intelligence (AI) and 39% plan to invest in AI technology in 2020. Having recently been involved in the creation of various AI proofs of concept, I find the business benefits are usually a no-brainer: AI either drives operational efficiencies by augmenting human abilities so that people can focus on more creative and strategic goals, or it drives growth and performance in a way that positively impacts the bottom line.

Building a business case for the use of AI cannot be centred on business benefits alone, though, even if they outweigh the required investment. Organisations have a responsibility to take into account ethical considerations that are less about what AI can do (the art of the possible) and more about what it should or shouldn't do (the conscience of the possible). To determine whether a business case for AI really exists, there is a series of uncomfortable questions businesses need to get comfortable asking themselves.

1. Are we willing to continue to invest in training the AI we build?

Many fears around the use of AI centre on the accuracy of its outcomes and, as a result, on how fair, reliable, safe, inclusive and accountable AI is in practice. In theory, AI should make decisions that are fairer than humans do, because computers are logical rather than irrational. In reality, however, AI is designed by humans using data that reflects an imperfect world, so from its very creation AI carries inherent bias.

A real-life example of this is explained in an insightful TED Talk by Margaret Mitchell, a research scientist at Google who trained AI to describe what it understood from the images it was fed. After assessing a series of images of house fires, Mitchell was horrified to find the AI describing them as ‘amazing’ and ‘spectacular’ despite these being life-destroying events. She quickly realised this was because most of the previous training images had been of positive events, so the vibrant colours and high contrast of the fire images were perceived as positive.

In this particular example, the AI isn’t rebelling in some evil (Terminator-inspired) way; it is doing exactly what it was asked to do without the humans involved even realising it. Whilst any organisation today can build its own AI, only those willing to keep training and investing in it will succeed, because AI is only as good as the data it is fed. It’s inevitable that people will create blind spots and biases in data sets, and organisations experimenting with AI need to be willing to explore ways to overcome this, whether through discussion and collaboration amongst a diverse set of designers and developers, rigorous testing, or simply being open to sharing successes and learnings in the spirit of continuous improvement.
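To make the mechanism concrete, here is a minimal, purely illustrative sketch; the data, features and numbers are all invented for the example. A toy nearest-centroid classifier sees images reduced to two made-up features (colour vibrancy and contrast), and because almost all of its vibrant, high-contrast training examples are happy events, it confidently labels a house fire as positive, just as in Mitchell's story.

```python
# Toy illustration of training-data bias (all data and features invented).
# Each "image" is reduced to two hypothetical features in [0, 1]:
# (colour_vibrancy, contrast).

# Skewed training set: every vibrant, high-contrast example happens to be
# a happy event, so the model never sees a vibrant disaster.
train = [
    ((0.9, 0.8), "positive"),  # fireworks
    ((0.8, 0.7), "positive"),  # sunset
    ((0.7, 0.9), "positive"),  # wedding
    ((0.2, 0.3), "negative"),  # dull, overcast street scene
]

def centroid(label):
    """Average feature vector of all training examples with this label."""
    pts = [x for x, y in train if y == label]
    return tuple(sum(dim) / len(pts) for dim in zip(*pts))

def predict(features):
    """Nearest-centroid classifier: pick the class whose average is closest."""
    def sq_dist(a, b):
        return sum((i - j) ** 2 for i, j in zip(a, b))
    return min(("positive", "negative"),
               key=lambda lbl: sq_dist(features, centroid(lbl)))

# A house fire is vibrant and high-contrast, so the model calls it
# "positive" - it is faithfully reproducing the skew in its training data.
print(predict((0.85, 0.9)))  # -> positive
```

The point of the sketch is that no line of the code is 'wrong'; the bias lives entirely in the training set, which is why adding genuinely negative but vibrant examples and retraining changes the answer where rewriting the logic would not.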

2. Will our employees and/or customers benefit from our use of AI? 

Even when the business benefits of AI are promising, consumers still seem to have mixed feelings about its use. Some are excited by the possibilities, whilst others are disappointed when the experience doesn't live up to expectations, or simply prefer human interaction. Despite this, 70% of consumers believe AI can make their lives better, so it is up to organisations to demonstrate real value to employees and/or consumers, ensuring AI has an end-user benefit as well as a business benefit (see Pega’s ‘What Consumers Really Think About AI: A Global Study’). As our lives become increasingly full of trade-offs, especially when it comes to our personal data, people's choice to use AI will continue to be determined by what they get in return.

Despite privacy as well as vanity concerns (in a poll by Sina Technology, 60% of respondents said scanning their faces to pay made them feel “ugly”), people in China are increasingly embracing facial payment technology in stores and on public transport, driven by convenience, with some even perceiving it to be more secure than traditional payment methods. More recently, facial recognition cameras are being used alongside infrared temperature scanners to help prevent the spread of the coronavirus, which has undeniable benefits for users and society.

What most people don’t realise is that AI already touches our everyday lives, from automated call centres to personalised recommendations on platforms like Netflix and Facebook, and organisations right now are in a unique position to help demystify their use of AI, especially when it is being used to deliver an enhanced employee and/or customer experience (even if that is more a desire than a current reality). The more clarity organisations provide on how AI will benefit a user's experience, and the more users are able to try it and give feedback, the more open and receptive they will be to engaging with it. Organisations should continue to involve both employees and customers in the development of AI to help inform how and where they choose to use it in the future. Just because the technology exists doesn’t mean it will be acceptable for use in the eyes of the end users.

3. Can we be honest with our employees and/or customers about our use of AI?

Most people fear the unknown, which means a general lack of understanding of AI can negatively shape how consumers perceive an organisation's use of the technology. Admittedly, it can be a struggle to understand the good AI can bring if the media headlines about machines taking over the world and humanity being doomed are anything to go by. It’s this mistrust that organisations investing in AI need to be willing to face head-on, clearly introducing the user benefits and gradually increasing consumers' comfort with the technology.

It was only recently, after facing a staff backlash, that Barclays scrapped a new system that tracked the time employees spent at their desks and sent warnings to those spending too long on breaks. If organisations don’t feel they can be honest, both internally and externally, about their use of AI, then they are probably doing something they shouldn’t be. Interestingly, when customers are informed that they are interacting with AI, such as a chatbot, they tend to be much more receptive to the technology than when they are unaware. This was certainly the case when NatWest successfully launched Cora, a lifelike avatar (an AI virtual assistant) that helps customers answer basic banking queries.

Both employees and customers today demand that organisations be transparent about how and when they use AI, in much the same way we expect our personal data to be handled. Clear expectation-setting around how organisations choose to use AI is what will foster trust between the technology and its end users; otherwise, people will simply feel organisations are trying to catch them out.

4. Can we confidently secure the data we hold about our employees and/or customers? 

Artificial intelligence is enabling organisations, more than ever before, to process and use people's personal information in new and innovative ways that can compromise their privacy. Coupled with this, a lack of regulation around AI such as facial recognition technology allows organisations to use it inconsistently and in ways that may increase the risk of data protection violations. As GDPR brought to light, just because you have someone's data doesn’t mean you can or should use it.

Despite privacy concerns and campaigners vowing to launch legal challenges, earlier this year London’s Metropolitan Police started using facial recognition to help tackle serious crime. In parallel, the EU is currently contemplating a temporary five-year ban on facial recognition, particularly in public places such as train stations and shopping centres, largely because the ethical and legal implications of its use are yet to be fully discussed, debated and defined.

Whilst we await more rigorous AI regulations and guidelines, organisations in the meantime have an obligation to put guardrails in place to safeguard people's privacy. For example, GDPR introduced an obligation for organisations to complete a Data Protection Impact Assessment (DPIA) when data processing is likely to result in a high risk to individuals’ rights and freedoms. Exercises such as these are a good opportunity for organisations to stop and ask themselves the not-so-easy question: how can we handle our employees' and/or customers' data in a morally responsible and respectful way?

Just because all organisations can use AI doesn’t mean they should. As Max Tegmark eloquently states in his book ‘Life 3.0’, AI has “the potential to flourish like never before - or to self-destruct”. Whether it does will depend on how organisations choose (and it’s important to remember it is a choice) to use it, and on their appetite for answering some of these uncomfortable questions. Organisations experimenting with AI have a responsibility to make sure it is doing what they expect, as well as to make end users aware of its intended use. Clearly the power is within their hands, and our faces, literally!
