Reducing the "Time to Understand Customer" with AI

DevOps teams have long embraced the value of consistent metrics to measure their performance, with the DORA metrics being the most common. Deployment Frequency, Lead Time for Changes, Mean Time to Recovery (MTTR), and Change Failure Rate are proven metrics that drive meaningful change in DevOps processes.

The one thing these metrics do not explain is why a DevOps team should release anything at all. There is no doubt that quickly and reliably delivering changes to production is important, but it only matters if those changes solve real problems. To understand which solutions are meaningful, you have to understand your customers.

For this, we need another metric, which I’ll call Time to Understand Customer.

Reducing the Time to Understand Customer metric is where AI can add incredible value. But to understand the value of AI, we need to understand the AI adoption cycle.

Stage 1: Drinking from the firehose

Most of our communication is digital and supposedly accessible. Emails, chat messages, and video calls are all dutifully transcribed and recorded, with at least rudimentary search functionality providing access to a history of interactions. Integrations with chat systems further increase the reach of these interactions with feeds of call notes and attendees.

But it doesn’t take long for this firehose of information to overwhelm any individual’s ability to consume it all. Eventually it reaches the point where it is impossible to even filter the stream of information to useful topics.

The challenge is that customer interactions take place in the real world using fuzzy natural language full of nuance, colloquialisms, emotion, and interpretations. As a result, only the people involved in the conversations have a sense of their full content.

At this point in the adoption cycle, the Time to Understand Customer metric is probably measured in months if teams are willing to take the time to interview all those involved in the conversations. Or, the Time to Understand Customer may be undefined as teams simply have no way to collate useful information.

You are in stage 1 if the process of understanding customers involves speaking to someone who can then direct you to someone else to get some sliver of information regarding what customers actually do.

Stage 2: Mechanical Turk

The Mechanical Turk was an 18th-century hoax in which a hidden human controlled what appeared to be a chess-playing machine. The name lives on today in Amazon Mechanical Turk, a crowdsourcing platform where people perform many small, manual tasks.

Enterprises enter the Mechanical Turk stage as a way of turning fuzzy real-world interactions into digital signals. Support tickets are manually categorized, Salesforce customer accounts accumulate a growing list of checkboxes, and call summaries are reduced to manually assigned traffic-light reports.

Each of these reality-to-digital conversions is a manual task placed on those interacting with customers. While these tasks are individually trivial, they start to add up. The insatiable demand to measure the world through primitives with a compareTo() function means no conversation is complete without distilling it down to 5 checkboxes in 3 different platforms.

The challenge is that people hate doing this work. It lacks the autonomy, mastery, and purpose that drive employee engagement. It is the digital equivalent of sorting items on a conveyor belt. I have yet to see a manual process that everyone dislikes scale in a repeatable and reliable manner.

As a result, most enterprises will settle on a few key metrics to be recorded from each interaction. This is better than nothing but does not come close to satiating the enterprise's thirst for broad, comparable, and actionable metrics.

At this point in the adoption cycle, the Time to Understand Customer metric may be down to minutes for a few select topics, but still months (or undefined) for random qualities that no one is tracking as digital values.

You are in stage 2 if you have heard the phrase “Salesforce Hygiene” in the last week.

Stage 3: AI

If there is one thing that Large Language Models (LLMs) excel at, it is extracting useful information from large bodies of plain language text. We all marvelled at the ability of ChatGPT to generate coherent answers to non-trivial questions, to the point where LLMs may well have passed the Turing Test.

While off-the-shelf LLMs have amazing general knowledge, they know nothing of your interactions with customers. But, using a process called Retrieval Augmented Generation (RAG), it is possible to retrieve chunks of text related to the topic you are interested in, provide them to an LLM, and have the LLM answer any question you may have.
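
A minimal sketch of that loop in Python might look like the following. It assumes the OpenAI Python SDK with an API key in the environment; the sample documents, model names, and the answer() helper are illustrative stand-ins rather than a production pipeline.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A toy corpus standing in for call transcripts, emails, and chat logs.
documents = [
    "Call with Acme Corp: they asked about SSO support and audit logs.",
    "Support email: customer frustrated by slow deployments to staging.",
    "Chat log: prospect wants usage-based pricing before committing.",
]

def embed(texts):
    """Embed a list of strings into vectors for similarity search."""
    response = client.embeddings.create(
        model="text-embedding-3-small", input=texts
    )
    return np.array([item.embedding for item in response.data])

doc_vectors = embed(documents)

def answer(question, top_k=2):
    """Retrieve the most relevant documents, then ask the LLM to answer."""
    query_vector = embed([question])[0]
    # Cosine similarity between the question and every document.
    scores = doc_vectors @ query_vector / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vector)
    )
    context = "\n".join(documents[i] for i in np.argsort(scores)[-top_k:])
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return completion.choices[0].message.content

print(answer("What features are customers asking for?"))
```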

It is even possible to have LLMs produce machine-readable answers such as JSON or simply numbers. For example, you can pass an email to an LLM, ask it to determine whether the email was discussing a product feature, and have it provide an answer on a scale of 1 to 10.
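
As a rough sketch, and reusing the client from the example above, the OpenAI API's JSON mode can force a machine-readable response; the prompt wording, model, and field names are assumptions for illustration.

```python
import json

email = "Hi team, the new dashboard widgets are great, but CSV export still fails."

prompt = (
    "Does the following email discuss a product feature? "
    "Reply with JSON only, in the form "
    '{"discusses_feature": <score from 1 to 10>, "feature": "<name or null>"}.\n\n'
    + email
)

completion = client.chat.completions.create(
    model="gpt-4o-mini",
    # JSON mode asks the model to emit a single, parseable JSON object.
    response_format={"type": "json_object"},
    messages=[{"role": "user", "content": prompt}],
)

result = json.loads(completion.choices[0].message.content)
print(result["discusses_feature"], result["feature"])
```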

Once you have configured a pipeline to feed LLMs the contents of your customer interactions, you can then ask almost anything about your customers and get actionable answers.
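
With the hypothetical answer() helper sketched earlier, those questions become one-liners:

```python
# Ad-hoc questions over the same corpus, no new pipeline work required.
print(answer("Which customers mentioned pricing concerns?"))
print(answer("What is the most requested integration?"))
```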

Granted, the answers may not always be accurate. LLMs are well known for hallucinating, resulting in confident yet wrong responses. However, the same can be said of answers provided by people. At least LLMs have objective measurements of their ability to correctly answer questions, and they are improving all the time.

At this point in the adoption cycle, the Time to Understand Customer metric is down to hours or minutes for almost any question.

More importantly, there is now a scalable and repeatable process for reducing the Time to Understand Customer metric. If a question cannot be answered quickly and easily, it is a matter of capturing more unstructured data and feeding it into the LLM. This is not to understate the challenge of building such a system, but once it is established, it can be iterated on.

You are in stage 3 if you have the equivalent of ChatGPT but with the ability to query a recorded video call that took place yesterday.

Conclusion

The Time to Understand Customer metric is a powerful way to measure your ability to gain insights into your customers. It is the “why” that complements the “how” measured by more engineering-focused metrics like those defined by DORA.

AI, and LLMs specifically, are the glue between the mess of the real world and the digital signals that drive decision making. While there is plenty of plumbing required to feed real-world interactions to LLMs, the resulting platform provides a scalable and repeatable process, driving a virtuous cycle that further reduces the Time to Understand Customer.
