AI x ICT: ChatGPT and the Meaning of Surveys
engageSPARK
Everyone and their dog is talking about ChatGPT. How it can take the bar exam. How it’s a great personal assistant. How it writes you social media posts in a heartbeat.
Of course… it can get you into hot water just as quickly. And it might just invent half the things it tells you. (You see, it just doesn’t want to let you down!)
But what if I told you that ChatGPT and similar chatbots are actually perfect for one particular problem? And that’s figuring out what on earth an answer to a survey question means.
What do you mean?
That probably sounded mysterious, so let’s jump right in with an example. Let’s say we ask a simple question: Do you like potatoes? Have a look at the following picture of how such a survey question might be answered:
Maybe you recognize the problem: We’re asking a yes-or-no question, and we’re getting a fine answer … but it’s not a yes and it’s not a no.
You and I recognize that this person indeed likes potatoes—so it’s a “yes”—but a dumb machine expects a simple reply and doesn’t get one.
In this example, all ends well: the survey is designed to tell the participant how to reply, so they realize they need to dumb their answer down and respond as expected. But that's not always what happens.
So what do we currently do about all this?
Planning ahead
Let’s look at some more examples of how a yes-or-no question such as the potato one might be answered.
Here is what we can do about it:
That’s all fine, but none of these techniques can deal with the example at the top. In the end, it comes down to figuring out the intention: does that person mean “yes”? Do they mean “no”? Or something unrelated?
And that’s something that chatbots like ChatGPT are good at.
Using ChatGPT-4 for surveys
To try this out, I’ve given ChatGPT (version 4) a few answers to the potato question and asked it to figure out what was meant.
I have asked a few people the following question: Do you like potatoes? I’ve collected their answers here:
1. I do but yum is better
2. yes!
3. ugh, hate potatoes
4. what are potatoes?
5. I love potatoes
6. sure
7. no
8. beeni
9. Who is this
10. yess
11. reply to buy my course
For each, repeat the answer and either “yes”, “no”, or “something else”.
Here is its reply:
1. “I do but yum is better” – Yes
2. “yes!” – Yes
3. “ugh, hate potatoes” – No
4. “what are potatoes?” – Something else
5. “I love potatoes” – Yes
6. “sure” – Yes
7. “no” – No
8. “beeni” – Something else
9. “Who is this” – Something else
10. “yess” – Yes
11. “reply to buy my course” – Something else
That is really helpful, isn’t it?
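If you wanted to automate this instead of pasting replies into the chat window, the same idea works through the API. Here's a minimal sketch, not engageSPARK's production code: it uses the OpenAI Python client, and the model name, prompt wording, and the classify_reply helper are all illustrative assumptions.

```python
# Minimal sketch: classify free-text survey replies as "yes", "no", or "other"
# by asking a chat model. Assumes the `openai` Python package (v1.x client)
# and an API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "Do you like potatoes?"

def classify_reply(reply: str) -> str:
    """Ask the model whether a reply means yes, no, or something else."""
    prompt = (
        f'I asked someone the question: "{QUESTION}"\n'
        f'They replied: "{reply}"\n'
        'Answer with exactly one word: "yes", "no", or "other".'
    )
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; use whichever model you have access to
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # we want consistent classifications, not creativity
    )
    return response.choices[0].message.content.strip().lower()

# Example usage:
for answer in ["I do but yum is better", "ugh, hate potatoes", "Who is this"]:
    print(answer, "->", classify_reply(answer))
```

Setting the temperature to zero keeps the classifications consistent from run to run, which is what you want when the output feeds straight into survey logic.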
Matters of privacy
Now, you must be thinking: wow, that solves the problem! Amazing! Let’s go!
And it would solve the problem … if only we could use it. You see, there are two problems.
The first one is cost. Every call to ChatGPT and similar models costs money. But then again, we're not talking about huge sums. It's very likely worth it if it substantially increases the success rate of an SMS or WhatsApp survey.
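To get a feel for the order of magnitude, here's a back-of-envelope estimate with hypothetical numbers; actual pricing varies by model and provider and changes over time, so check the current price list before relying on this.

```python
# Back-of-envelope cost estimate with assumed, illustrative numbers.
price_per_1k_tokens = 0.03       # assumed USD per 1,000 prompt tokens
tokens_per_classification = 120  # short prompt + one-word reply, rough guess
replies = 1_000                  # survey responses to classify

cost = replies * tokens_per_classification / 1_000 * price_per_1k_tokens
print(f"~${cost:.2f} to classify {replies} replies")  # ~$3.60 under these assumptions
```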
The real issue, though, is privacy. You’re feeding the answer (and maybe even the question) to some AI provider. What do they do with it?
Of course, you might say: why does it matter—it’s a yes-or-no question, it’s anonymous!
The issue is that we don’t know that. We can’t know that. Nothing stops a participant from replying with “Hi, I’m Mary Schmidt and i LOVE potatoes.”
There, boom, personal information. And what happens to the data you feed into these AI chatbots is still far from clear.
So, if we want to use ChatGPT-like bots for better text-based surveys, we have two choices: We find a GDPR-compliant AI tool. Or we somehow make sure that no personal information ever goes into it.
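For the second option, one approach is to scrub obvious personal details before a reply ever leaves your system. Here's a rough sketch of a simple regex pass; the patterns are illustrative and this is nowhere near a complete PII solution.

```python
import re

# Very rough PII scrubbing before sending a reply to an external AI service.
# These patterns are illustrative only; real redaction needs a proper
# PII-detection step (names in particular can't be caught with regexes alone).
PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[email]"),  # email addresses
    (re.compile(r"\+?\d[\d\s-]{7,}\d"), "[phone]"),           # phone-number-like digit runs
]

def scrub(reply: str) -> str:
    """Replace email addresses and phone-like numbers with placeholders."""
    for pattern, placeholder in PATTERNS:
        reply = pattern.sub(placeholder, reply)
    return reply

print(scrub("Call me at +63 917 123 4567, I LOVE potatoes"))
# -> "Call me at [phone], I LOVE potatoes"
```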
Check out more AI/Chatbot discussions on our new page!