When AI Agents Improve Conversion & When They Don't

We were certain when we launched our AI virtual agent system that it would improve operational costs, and we were confident it would offer customers a better experience. We found both things to be true.

On operational costs, we saved about $2 every time a virtual agent successfully replied to a customer message, and about $6 when it completed a full conversation without any support from a human.
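
Those per-unit figures compound quickly at volume. Here's a minimal back-of-the-envelope sketch; the monthly volume numbers are hypothetical placeholders, not our actual traffic.

```python
# Back-of-the-envelope savings model using the per-reply and
# per-conversation figures above. Volumes are hypothetical.

SAVINGS_PER_REPLY = 2.0              # ~$2 per successful virtual-agent reply
SAVINGS_PER_FULL_CONVERSATION = 6.0  # ~$6 per conversation closed with no human

monthly_replies = 50_000             # hypothetical volume
monthly_full_conversations = 8_000   # hypothetical volume

# Simplification: the two buckets are treated as independent.
monthly_savings = (monthly_replies * SAVINGS_PER_REPLY
                   + monthly_full_conversations * SAVINGS_PER_FULL_CONVERSATION)
print(f"Estimated monthly savings: ${monthly_savings:,.0f}")  # -> $148,000
```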

Regarding customer experience, a number of tests showed that our chatbot resulted in a higher conversion rate. It was clear the virtual agent system improved customers’ shopping experience, simplifying the process and smoothing out speed bumps.

But conversion didn't improve in every instance. Some tests showed mixed results. At Jerry, we believe in decision-making with a customer-first mindset, which meant positive results were essential before rolling anything out to 100% of customers.

Here are some examples of tests that showed our AI-driven chatbot clearly improving customer experience, and a few that showed it isn't always the answer.

First, what has worked:

  • Increasing visibility of the virtual agent within the shopping flow. We A/B tested greater visibility of the chatbot throughout the shopping experience. The test group, with the more visible chatbot, submitted 30% more questions to it and had a conversion rate 20% higher than the group with the less visible chatbot. We expected an increase, but this was a remarkably large impact on what is a fairly mature shopping experience.

  • Adding dedicated chatbots at certain stages of the purchase flow (e.g., one for the point of purchase that is expert in that phase of the buyer journey; see the routing sketch after this list). In each case, the number of questions answered by our bots increased, reducing the number escalated to human agents. This also brought a steady rise in conversion. Overall, since we first added purchase-related bots, our conversion rate has increased by almost 70%.
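
To make the stage-specific routing in the second bullet concrete, here is a minimal sketch. The stage names, the DedicatedBot class, and the escalation fallback are illustrative assumptions, not our production implementation.

```python
# Minimal sketch of stage-specific bot routing: each stage of the purchase
# flow gets its own "expert" bot, and unanswerable questions escalate.

from dataclasses import dataclass
from typing import Optional

@dataclass
class DedicatedBot:
    stage: str

    def answer(self, question: str) -> Optional[str]:
        # A real bot would query an LLM or retrieval system scoped to this
        # stage's knowledge; here we simulate one canned purchase answer.
        if self.stage == "purchase" and "price" in question.lower():
            return "Your quoted price is locked in at checkout."
        return None  # None signals "can't answer" -> escalate

BOTS = {stage: DedicatedBot(stage) for stage in ("browse", "quote", "purchase")}

def handle_question(stage: str, question: str) -> str:
    reply = BOTS[stage].answer(question)
    # Escalate to a human only when the stage bot has no answer.
    return reply if reply is not None else "Escalated to a human agent."

print(handle_question("purchase", "Will my price change?"))
print(handle_question("quote", "Can I add a second driver?"))
```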

Now, for what delivered mixed results:

  • Using virtual agents when human agents are unable to answer within 40 seconds (a sketch of this fallback follows this list). Given the success that the immediacy of chatbot responses afforded us in previous tests, we were eager to test this. However, the switch hasn't had much of an impact on conversion at all, positive or negative. But we haven't completely written it off; we're working to improve it and see if we can make it a winner.

  • Replacing the app's FAQ section with a chatbot. We hypothesized that removing the need for customers to search for answers would improve conversion. That wasn't the case: it had no measurable impact on conversion and, in fact, led to slightly more escalations to our human agents, which is not ideal. As it didn't improve conversion or customer experience, or lighten our human agents' workloads, we decided to keep the FAQ section as is.
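
Here is the fallback sketch promised in the first bullet above: give human agents a 40-second window, then hand off to the virtual agent. The asyncio simulation and function names are hypothetical stand-ins for a real routing service.

```python
# Minimal sketch of the 40-second fallback: human agents get first shot,
# and the virtual agent answers if they don't respond in time.

import asyncio

HUMAN_TIMEOUT_SECONDS = 40  # the production threshold described above

async def wait_for_human(message: str) -> str:
    await asyncio.sleep(5)  # simulate a busy human queue
    return f"Human agent: about '{message}'..."

async def virtual_agent_reply(message: str) -> str:
    return f"Virtual agent: here's an instant answer to '{message}'."

async def route(message: str, timeout: float = HUMAN_TIMEOUT_SECONDS) -> str:
    try:
        return await asyncio.wait_for(wait_for_human(message), timeout)
    except asyncio.TimeoutError:
        return await virtual_agent_reply(message)

# Demo uses a 2-second timeout so the example finishes quickly.
print(asyncio.run(route("Why did my quote change?", timeout=2.0)))
```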

An AI test is a success when it improves customer experience, and it's a win-win when that improved experience also brings operational savings for your business. But prioritizing improvement, or at least consistency, in customer experience always comes first. AI won't always improve customer experience; we've found that at least 5-10% of the time the impact can be neutral or even negative.

AI agents are almost always faster and generally more consistent than human agents. However, if you haven't provided them the tools and knowledge that human agents have at their disposal, AI agents can become another frustrating barrier between the customer and the human agent who is ultimately needed to help them. Because of this, we put a lot of effort into giving our AI agents access to the same tools and knowledge as our human agents, and in areas where that isn't possible, we send our customers directly to our human agents.

Testing and proper analysis of data are essential to determining the positive, neutral, or negative impact of AI agents on conversion and, ultimately, customer experience. This ensures you are protecting your customers and your business in the instances where AI isn't the answer.
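
As one example of that analysis, a two-proportion z-test is a standard way to check whether a conversion lift like the ones above is signal rather than noise. A minimal sketch follows; the sample counts are hypothetical placeholders.

```python
# Two-sided z-test for a difference in conversion rates between an A/B
# test's control and treatment groups. Counts below are hypothetical.

from math import sqrt
from statistics import NormalDist

def conversion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))      # two-sided p-value
    return p_a, p_b, z, p_value

# Hypothetical: control converts at 10%, test (more visible bot) at 12%.
p_a, p_b, z, p = conversion_z_test(conv_a=1_000, n_a=10_000,
                                   conv_b=1_200, n_b=10_000)
print(f"control={p_a:.1%} test={p_b:.1%} z={z:.2f} p={p:.4f}")
```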

Juan Figuerola-Ferretti

Customer Experience / MarTech Project Manager / Product Marketing

4 weeks

Most enlightening, thanks!

David Schwartz

Digital Marketing Geek

1 month

John, thanks for sharing. This is the type of thoughtful insight marketers can apply to real-world challenges.

Adrian Chan

Consulting in the UX of AI

1 month

Thanks for sharing - were results granular enough for you to be able to distinguish among possible reasons for customer resistance to chatbots? E.g.:

  • Did some users bail on agents because they wanted a human (and might later realize agents are OK)?
  • Did the agents misunderstand customer requests on account of poor/unclear customer communication?
  • Are agents less helpful when the customer him/herself is unsure how to ask/search/define their issue?
  • Did you get any insights into agent personality, style, verbosity, etc., e.g. whether you think agents could match humans for "personal touch"?

Christopher Weil

Product Owner and Innovator - Intelligent Virtual Assistant Specialist - Owned by 2 Cats and 2 Kids, probably not in that order.

1 month

These insights are excellent. I'm curious, what made you choose 40 seconds as the cutoff point for a chatbot to intercede? Was there significant drop-off at that point with no response? Follow-up to that, if there is a drop-off cliff at 40 seconds, do you have any sense whether that's universal or case-specific to the vertical you were looking at?

Sev Paso

President at Zac 3 Media Consulting

1 month

Thank you for sharing.
