Chatbot Survey Response Rates vs. Traditional Surveys
Originally posted on the Hubert.ai blog.
Let's address the elephant in the room. How do response rates compare between conversational feedback and traditional surveys?
Even though the main goal of replacing surveys with AI-driven chatbot conversations is to enhance data quality, questions about response rates come up often, and they are a highly relevant factor when adopting a new data collection system.
Some conversational feedback tools (such as Hubert.ai) even combine qualitative and quantitative research methods, which can limit the effect of low response rates. Read more on that here.
Conversational feedback is still in its infancy, so when it comes to response rates, no real scientific studies have been conducted in this field yet. Until they are, we have to rely on data from vendors and other commercial actors.
Conversational data collection with Hubert.ai
From our own experience at Hubert.ai, there's no question about it: response rates are consistently higher than those of the average survey. Our customers confirm it, and it shows in our data. The course evaluation version has, on average, raised response rates by approximately 40% across our customer base of 1,800 schools and universities.
The enterprise version is still new and needs more data, but the early results look promising. Among others, Siemens has been more than happy with the results of their Hubert implementation. Have a look at the case study below.
Conversational feedback findings from other vendors
Here's what others have experienced when comparing traditional and chatbot surveys:
- Convrg managed to 3x their response rates using a chatbot.
- Surveybot claims an 80% average completion rate on its surveys.
- Tetatet claims a 19% higher response rate than traditional surveys.
- SurveySparrow claims to increase response rates by 40%.
This seems to validate our own experiences. But what can this increase be attributed to?
Fortunately, Hubert has a habit of asking new respondents how they would compare him to a traditional survey. Here's the top positive feedback we've gotten:
1. Non-restrictive
"I can say what I really want to transmit" is something we often hear. As Hubert primarily was built to handle open-ended questions, it's easy to catch what's top of mind and then continue digging from there. There's simply no need to ask respondents about every little aspect of their experience.
2. Fun & engaging
Many, many respondents actually tell Hubert that it's been fun chatting with him. Whether this is just people being nice, the effect of comparing him to a boring survey, or plain lies is hard to say. Our hypothesis is that the interface is familiar and associated with something fun, such as chatting with a friend.
3. Thought-provoking
Although some respondents have mentioned this in negative terms, the majority think it's a good thing that Hubert probes deeper and actually makes you think about why something was good or bad, instead of just saying it was 2 out of 5.
Engagement: Chatbot vs. Survey
A big problem with traditional surveys is that respondents often lose engagement (which was limited to begin with) after a couple of questions and simply drop out. And who can really blame them? Long surveys have notoriously low completion rates.
Interestingly enough, we've found conversational feedback collection to have the opposite effect: engagement is often significantly higher towards the end of the session. Some respondents even ask Hubert for more questions.
We believe this is due to the chat interface, which in many ways resembles everyday messaging with friends.
If you are curious to see what kind of results you can get from using Hubert as a replacement for your surveys, get in touch with us. A pilot project can be kept small in scope and doesn't cost a fortune.
Team Hubert