Would you let ChatGPT loose on your customers?
Image generated at stablediffusionweb.com

2022 has been a year of “wow” moments for AI, with tools being released to the public that beat most people’s expectations of what a bot can do. The most dazzling examples have been visual, where tools like DALL·E 2 and Stable Diffusion can paint entirely new worlds from a short text prompt.

Chatbots have also taken a leap forward in comprehension and believability with the recent public demo release of ChatGPT by OpenAI. This large language model is notable for its uncanny ability to “understand” a broad range of prompts and respond with a remarkable range of possible outputs: poetry, transcripts of conversations, even computer code. Customer service leaders who were among the million-plus people who tried ChatGPT in its first week have all asked the same question:

“How could I get this working for my customers?”

This summer, I joined some workshops with a tech company that is also building a large language model – similar to ChatGPT – and we discussed possible applications in a customer service environment. We concluded that, when combined with other technologies (both AI and conventional automation), a large language model could drive a revolution in how customer service is delivered. Taken purely on its own, it could do a great job on a limited set of use cases, but at worst it could create risks for customers and companies.

Here are three additional capabilities that would improve ChatGPT’s suitability as a customer-facing chatbot:

1. A mix of probabilistic and deterministic responses

Many people who have experimented with ChatGPT have commented on its seemingly unassailable self-assurance when holding forth on any topic. The tech writer James Vincent describes this as “fluent bullshit”.

ChatGPT is primarily a probabilistic model: it constructs the response that is statistically most likely, based on patterns learned from over 300 billion words of text. But it is important to remember that it does not know the actual answer to any question, and it sometimes gets independently verifiable facts wrong.
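To make that distinction concrete, here is a deliberately toy sketch (the probabilities are hard-coded and bear no relation to the real model’s scale or architecture): the model only ever picks a plausible next word from a learned distribution, which is why a fluent but wrong answer can still be a likely output.

```python
import random

# Toy illustration only: a real large language model learns probabilities like
# these from billions of words of text; here they are hard-coded to show the
# mechanism, not the scale.
next_word_probs = {
    ("the", "capital", "of", "australia", "is"): {
        "canberra": 0.55,   # the verifiably correct answer
        "sydney": 0.40,     # fluent, plausible, and wrong
        "melbourne": 0.05,
    }
}

def sample_next_word(context):
    """Pick the next word by sampling from the learned distribution."""
    dist = next_word_probs[tuple(word.lower() for word in context)]
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights, k=1)[0]

print(sample_next_word(["The", "capital", "of", "Australia", "is"]))
# Usually prints "canberra", but sometimes "sydney": the model has no concept
# of an answer being true, only of it being likely.
```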

In these cases, a deterministic model may be more suitable: “if the customer asks X, then provide answer Y”. This approach works when the customer’s need is clear and there is a single, objectively true answer. One of the problems with many current chatbots is that they rely only on a deterministic approach, so they can appear to miss the point if the customer does not provide exactly the input they are expecting. A mix of probabilistic and deterministic responses is therefore required.

For example, for a bank:

  • If a customer asks, “Can you give me tips on how to reduce my monthly outgoings?”, a probabilistic response will likely give a more complete answer that can be tuned to the customer’s specific situation.
  • But if they then ask, “What will be my monthly repayments once my fixed rate comes to an end?”, we need to provide the one correct answer – a deterministic response must be invoked here (a rough code sketch of this routing follows the list).
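As a rough illustration of how that mix could be wired together (every name here – the intent classifier, the llm_answer call, the account fields – is a hypothetical placeholder, not a real product or API), a simple router could send open-ended questions to the language model and send the repayment question to a deterministic calculation:

```python
# Hypothetical sketch of a probabilistic/deterministic router for a banking bot.
# classify_intent and llm_answer stand in for a real intent model and a real
# large-language-model call; neither is an actual API.

def classify_intent(message: str) -> str:
    """Crude keyword-based intent detection, for illustration only."""
    text = message.lower()
    if "repayment" in text or "fixed rate" in text:
        return "repayment_quote"   # there is exactly one correct answer
    return "open_advice"           # open-ended, tolerates variation

def monthly_repayment(balance: float, annual_rate: float, months: int) -> float:
    """Deterministic amortisation formula: same inputs, same answer, every time."""
    r = annual_rate / 12
    return balance * r / (1 - (1 + r) ** -months)

def llm_answer(message: str) -> str:
    """Placeholder for a call to a large language model."""
    return "Here are some ways you could reduce your monthly outgoings..."

def respond(message: str, account: dict) -> str:
    if classify_intent(message) == "repayment_quote":
        amount = monthly_repayment(account["balance"],
                                   account["reversion_rate"],
                                   account["remaining_months"])
        return f"Once your fixed rate ends, your repayments will be {amount:.2f} per month."
    return llm_answer(message)

print(respond("What will be my monthly repayments once my fixed rate comes to an end?",
              {"balance": 150000.0, "reversion_rate": 0.06, "remaining_months": 240}))
```

The important design point is that the deterministic branch never touches the language model: the figure comes from the bank’s own calculation, and the probabilistic model is reserved for questions where variation is acceptable.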

This creates all kinds of questions about compliance, both in terms of how compliance is monitored and how it can be built into the AI model itself. The developers of ChatGPT have gone to considerable lengths to ensure it provides “safe” responses – for example, it will not offer racist opinions – showing that the model itself can be trained to keep responses within certain boundaries.

2. Other problem-solving capabilities beyond a language model

ChatGPT primarily works by modelling the way people talk and write. A simplistic explanation is that it builds connections between words and phrases and is then able to string these together into new utterances with remarkable fluency.

But thinking is not just about language. The human brain uses many different techniques to solve problems beyond mimicking how other people talk: simulating possible outcomes, learning from the longer-term impacts of past decisions, performing mathematical calculations, and so on.

In the customer service world, ChatGPT on its own would not respond helpfully to a prompt like, “Help me find where there is a fault in my internet connection.” It could provide a useful set of tips that might help diagnose and fix the fault, especially if trained on a proprietary knowledge base, but it lacks the analytical capabilities to work out the actual root cause of the problem. It cannot simulate the topology of a telecoms network and then set out the logical top-down steps needed to isolate the location of the fault and recommend a fix. In examples like this, ChatGPT would need to call on other tools that are better suited to the problem at hand.
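To illustrate what “calling on other tools” might look like, here is a hedged sketch under invented assumptions: run_line_test and llm_reply are hypothetical stand-ins for a real network-diagnostics system and a real language-model call, and the routing is deliberately simplified.

```python
# Hypothetical sketch: the language model supplies the conversational wrapper,
# while a purpose-built diagnostic tool does the actual fault isolation.

from dataclasses import dataclass

@dataclass
class LineTestResult:
    fault_found: bool
    location: str
    recommended_fix: str

def run_line_test(account_id: str) -> LineTestResult:
    """Stand-in for a deterministic diagnostic that walks the network topology
    top-down to isolate the fault; here it just returns a canned result."""
    return LineTestResult(fault_found=True,
                          location="street cabinet",
                          recommended_fix="an engineer visit to reseat the line card")

def llm_reply(prompt: str) -> str:
    """Placeholder for a large-language-model call used for conversational text."""
    return "I'm sorry your connection is down - let me run a line test for you."

def handle_fault_report(message: str, account_id: str) -> str:
    opening = llm_reply(message)          # empathetic, flexible wording
    result = run_line_test(account_id)    # rigorous, tool-driven diagnosis
    if result.fault_found:
        return (f"{opening} The test points to a fault at the {result.location}; "
                f"the recommended fix is {result.recommended_fix}.")
    return f"{opening} The line test found no fault on our side."

print(handle_fault_report("Help me find where there is a fault in my internet connection.",
                          "ACC-12345"))
```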

3. Augmenting almost-human with actual-human where it counts

I personally would find it hard to make a strong emotional connection with ChatGPT. Whether it is the bot’s intentionally measured tone of voice or the prior knowledge that it definitely isn’t human, it is not easy to warm to the tool. And there are some scenarios where, no matter how realistic the bot, there is still no substitute for a real person – for example, when an airline is transporting someone’s beloved pet and the flight has been seriously delayed. Companies deploying ever more realistic chatbots will need to experiment to find the moments where a real human connection changes the outcome – customer advocacy, sales conversion, and so on – and adapt their approach accordingly.

In conclusion, ChatGPT represents a step forward in modelling human interactions that produces jaw-droppingly impressive results. Now is the time to start harnessing this technology and developing the next generation of customer service bots. The combined requirement for complex, nuanced, emotive and precise responses in a customer service environment means that ChatGPT on its own may fall short, but in combination with other techniques it could radically change how companies engage with their customers.

Head to head with ChatGPT?

As an experiment, I fed the prompt “Write a 500 word LinkedIn article about how ChatGPT would need to be adapted to work as a customer facing chatbot for a large enterprise” into ChatGPT and have pasted the output below. Interestingly, it covers a lot of practical points that I have not mentioned, but it also takes a much more confident view of its own capabilities. Which point of view do you find more useful?


Introduction:

ChatGPT is a powerful language processing tool that has been developed by OpenAI. It is based on the concept of a transformer, which is a neural network architecture that has proven to be particularly effective at handling natural language tasks. While ChatGPT has been used successfully in a variety of contexts, including chatbots and question answering systems, it would need to be adapted in order to work as a customer-facing chatbot for a large enterprise. In this article, we will explore some of the key considerations that would need to be taken into account in order to make ChatGPT work effectively in this context.

Technical considerations:

One of the key technical considerations when adapting ChatGPT for use as a customer-facing chatbot is the need to ensure that it is able to handle a wide range of input types and languages. This is particularly important for large enterprises that may operate in multiple countries and regions, as they will need to be able to handle customer queries in a variety of languages. To address this issue, ChatGPT would need to be trained on a diverse set of languages and dialects, and it would also need to be able to identify the language being used in each incoming message.

In addition to handling multiple languages, ChatGPT would also need to be adapted to handle a wide range of input types, including text, images, and even audio and video. This would require the integration of additional machine learning models and technologies, such as image and audio recognition systems, in order to enable ChatGPT to understand and respond appropriately to these different types of input.

Another key technical consideration is the need to ensure that ChatGPT is able to handle a high volume of customer queries in a timely and efficient manner. This may require the implementation of additional infrastructure and resources, such as additional servers and data centers, in order to ensure that ChatGPT is able to scale to meet the needs of a large enterprise.

Business considerations:

In addition to the technical considerations mentioned above, there are also a number of business considerations that would need to be taken into account when adapting ChatGPT for use as a customer-facing chatbot.

One of the key business considerations is the need to ensure that ChatGPT is able to provide accurate and relevant responses to customer queries. This will require the development of a comprehensive knowledge base that covers a wide range of topics and issues that may be relevant to the enterprise. This knowledge base would need to be continuously updated and maintained in order to ensure that ChatGPT is able to provide the most up-to-date and accurate information to customers.

Another important business consideration is the need to ensure that ChatGPT is able to handle customer queries in a manner that is sensitive to the needs and concerns of the individual customers. This will require the implementation of appropriate algorithms and protocols to ensure that ChatGPT is able to understand and respond appropriately to the context and tone of each customer query.

Finally, it will be important to ensure that ChatGPT is able to integrate seamlessly with the existing systems and processes of the enterprise. This may require the development of custom interfaces and integration points to allow ChatGPT to access and use data from other systems, such as customer relationship management (CRM) systems and databases.

Conclusion:

In conclusion, adapting ChatGPT for use as a customer-facing chatbot for a large enterprise will require the consideration of a range of technical and business considerations. These will include the ability to handle multiple languages and input types, the ability to scale to meet the needs of a high volume of customer queries, the development of a comprehensive knowledge base, the implementation of algorithms and protocols to ensure sensitivity to the needs and concerns of individual


Great work Nick Clark. ChatGPT is definitely a path-breaking technology and I agree that it could eventually revolutionize the support industry. Beyond the challenges you laid out, it will also be interesting to see how commercially viable an enterprise support product would be in combination with other AI capabilities and CRM applications. As OpenAI’s chief Sam Altman has said, the compute costs of ChatGPT are “eye-watering”.

Neil Mackinnon

Leadership and Executive Coach

1 yr

Love this, both in terms of concept and content. And you beat the machine!
