When AI Agents Improve Conversion & When They Don't
When we launched our AI virtual agent system, we were certain it would reduce operational costs, and we were confident it would offer customers a better experience. We found both things to be true.
On the cost side, we saved about $2 every time a virtual agent successfully replied to a customer message, and about $6 when it completed a full conversation without any support from a human.
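To make the unit economics concrete, here is a back-of-the-envelope model using the per-interaction figures above. The monthly volumes in the example are purely hypothetical assumptions for illustration, not figures from our data:

```python
# Savings model built on the per-interaction figures from the article.
# All volume numbers below are hypothetical, not actual data.
SAVINGS_PER_ASSISTED_REPLY = 2.0     # agent replied, human still involved
SAVINGS_PER_FULL_CONVERSATION = 6.0  # agent handled the whole conversation

def monthly_savings(assisted_replies: int, full_conversations: int) -> float:
    """Estimated dollars saved per month from virtual-agent handling."""
    return (assisted_replies * SAVINGS_PER_ASSISTED_REPLY
            + full_conversations * SAVINGS_PER_FULL_CONVERSATION)

# e.g. 10,000 assisted replies and 3,000 fully automated conversations:
print(monthly_savings(10_000, 3_000))  # 38000.0
```

The point of the split rates is that full automation is worth three times an assisted reply, so deflection rate, not just reply volume, drives the savings.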
Regarding customer experience, a number of tests showed that our chatbot resulted in a higher conversion rate. It was clear the virtual agent system improved customers’ shopping experience, simplifying the process and smoothing out speed bumps.
But conversion didn’t improve in every instance. Some tests showed mixed results. At Jerry, we believe in decision-making with a customer-first mindset, which meant positive results were essential before rolling anything out to 100% of customers.
Here are some examples of tests that showed our AI-driven chatbot clearly improved customer experience, and a few that showed it isn’t always the answer.
First, what has worked:
Now, for what delivered mixed results:
An AI test is a success when it improves customer experience, and it’s a win-win when that improved experience also brings operational savings for your business. But prioritizing improvement, or at least consistency, in customer experience always comes first. AI won’t always improve customer experience; we’ve found that at least 5-10% of the time the impact can be neutral or even negative.
AI agents are almost always faster and generally more consistent than human agents. However, if you haven’t provided the tools and knowledge that human agents have at their disposal, AI agents can become another frustrating barrier between the customer and the human agent who is ultimately needed to help them. Because of this, we put a lot of effort into giving our AI agents access to the same tools and knowledge as our human agents, and in areas where that isn’t possible, we send our customers directly to our human agents.
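The escalation principle above can be sketched as a simple routing check: if the AI agent lacks the tool a request needs, skip it entirely and go straight to a human. The tool names and function below are hypothetical, for illustration only:

```python
# Hypothetical routing sketch: requests that need a capability the AI agent
# doesn't have go directly to a human, so the bot never becomes a barrier.
AI_SUPPORTED_TOOLS = {"policy_lookup", "quote_update", "payment_status"}

def route(required_tool: str) -> str:
    """Return which queue should handle a request needing `required_tool`."""
    if required_tool in AI_SUPPORTED_TOOLS:
        return "ai_agent"
    return "human_agent"  # no AI detour when the agent can't actually help

print(route("quote_update"))    # ai_agent
print(route("claims_dispute"))  # human_agent
```

The design choice is the default: anything outside the AI agent's known capabilities bypasses it, rather than letting the bot attempt the request and fail.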
Testing and proper analysis of data are essential to determining the positive, neutral, or negative impact of AI agents on conversion and, ultimately, customer experience. This ensures you protect both your customers and your business in the instances where AI isn’t the answer.
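One common way to run that analysis is a two-proportion z-test on control versus treatment conversion rates. This is a minimal sketch of that standard technique, not the article's actual methodology, and the counts are hypothetical:

```python
# Minimal two-proportion z-test for an A/B conversion experiment.
# Counts below are hypothetical, not real experiment data.
from math import erf, sqrt

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (z statistic, two-sided p-value) for rate_b - rate_a."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Control (human-only) vs. treatment (AI agent), hypothetical counts:
z, p = two_proportion_z(conv_a=480, n_a=5000, conv_b=540, n_b=5000)
print(f"z={z:.2f}, p={p:.3f}")
```

A neutral or negative result here is exactly the signal to hold the rollout, per the customer-first rule above; in practice you would also size the test in advance so a neutral result is distinguishable from an underpowered one.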