Surveys should be mostly open text
Andrew Marritt
A pioneering people analytics practitioner with deep, hands-on expertise in applying AI to HR challenges.
Tom H. C. Anderson of OdinText has been writing about research he has run comparing Likert and open-text survey responses.
Tom used Google Surveys to ask two samples of 1,500 people a question about perceptions of Donald Trump’s controversial executive order banning people from certain Muslim countries. He asked one group the question as a typical 5-point Likert question.
He asked the second group exactly the same question but let them respond in open text.
The answers are remarkably similar: within 2% of each other. According to his experiment, an open question would be as suitable for gauging the level of agreement as a traditional quantitative scale question.
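As a rough illustration of how such a comparison could work, here is a minimal sketch; this is not Tom’s method (his analysis used OdinText), and the keyword cues below are invented for the example. It buckets open-text answers into Likert-like categories so the two distributions can be compared side by side:

```python
from collections import Counter

# Hypothetical keyword cues; a real coding frame would be far richer
# and is typically derived from the data itself, not assumed up front.
# "disagree" is checked first so it isn't caught by the "agree" substring.
BUCKETS = {
    "disagree": ("disagree", "oppose", "wrong", "bad idea"),
    "agree": ("agree", "support", "good idea", "right decision"),
}

def bucket_response(text: str) -> str:
    """Assign one open-text answer to a Likert-like category."""
    lowered = text.lower()
    for label, cues in BUCKETS.items():
        if any(cue in lowered for cue in cues):
            return label
    return "neutral/other"

def distribution(responses):
    """Share of responses per category, comparable to Likert percentages."""
    counts = Counter(bucket_response(r) for r in responses)
    total = len(responses)
    return {label: n / total for label, n in counts.items()}

# Invented sample; in practice these come from the open-text survey.
open_text = ["I strongly support it", "Completely wrong", "Not sure really"]
print(distribution(open_text))
```

With real data you would compare the resulting shares against the percentages from the Likert sample, which is in essence the comparison Tom reports as being within 2%.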
Tom’s findings are remarkably similar to what we see when using open questions as a replacement for traditional quantitative questions. We tend to use a combination of structured and unstructured questions in employee feedback requests, but by far the most important are the open-text questions. Open questions can achieve most of the aims of a scale question but provide some notable advantages.
In his post, Tom later highlights the difference between the Likert and the open-text responses, where the latter provided much richer data on the long tail of responses (he describes them as low-incidence insights). As he notes:
“While there is nothing more to be done with the Likert scale data, the unstructured question data analysis has just begun…”
Recently a client asked us why we didn’t have a sorting question available in Workometry. Our answer was that for that type of question we’d prefer to use open text.
Their proposed sorting question had 8 different choices for the employee to sort. I could show an almost identical question, asked as open text by another client, where we had identified just under 30 different options; see the sketch below. Whilst we hadn’t done a controlled test like Tom’s, given our experience we’d expect pretty much identical results. A sorting question would be right only if you want to limit the potential options to a subsection of the person’s true opinions.
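To make the point concrete, here is a hypothetical sketch; the topics and counts are invented, and in practice the coded answers come from text analysis rather than a hand-written list. Tallying coded open-text answers exposes the long tail that a fixed 8-item sorting question would cut off:

```python
from collections import Counter

# Invented coded answers to a "what matters most to you at work?" style
# question; real codes are identified from the text, not assumed.
coded_answers = [
    "pay", "flexibility", "my manager", "career growth", "pay",
    "commute", "team culture", "recognition", "flexibility",
    "tools and systems", "parking", "canteen food",  # long-tail answers
]

# A hypothetical fixed 8-option sorting question.
fixed_options = {"pay", "flexibility", "my manager", "career growth",
                 "team culture", "recognition", "benefits", "training"}

counts = Counter(coded_answers)
missed = [(topic, n) for topic, n in counts.most_common()
          if topic not in fixed_options]
print(f"{len(counts)} distinct topics surfaced by the open question;")
print(f"{len(missed)} of them invisible to the 8-option sorting question:")
for topic, n in missed:
    print(f"  {topic}: {n}")
```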
In a recent FT Tech Tonic podcast (at about 14:45), Astro Teller, the head of Alphabet’s ‘X’ lab, notes:
“If you are serious about learning, you set up your test in a way you don’t know the answer to the question you’re asking and you’re indifferent to the answer. You have to do that or you’re not really learning; you’re biasing either the test or how you read the data.”
However good your survey design is, if you’re using a structured, choice-based question you’re asking a question where you are, by design, limiting what the answer could be.
Open questions, on the other hand, give you the option of not knowing what you’re going to learn before you ask. In our experience, with a question like the ‘most important’ / sorting question above, it would be common to find 2 or 3 of the top 10 answers that you wouldn’t have included in a similar structured question.
The other aspect that text provides is context. A structured question might identify the strength of feeling (though the example above shows that text can do this equally well) and who holds which feeling, but it can’t show why they feel it. It’s why, when companies run large engagement surveys, the immediate action is often a series of focus groups on particular topics to understand the context.
When would we recommend a structured question?
Even though we believe that in most instances the right option is to ask an open question, there are a limited number of occasions when a structured question might be better. We find ourselves using them:
- when we want to report a simple metric over time, e.g. engagement or eNPS
- when our objective is to build a model and we need a score on a target variable for each individual
In both of these instances it’s because we want to have a score on something simple and purposely constrain the answer. We might use such a score to determine what topics those who are engaged (or disengaged) are more likely to be discussing, as in the sketch below. It’s important to note, however, that for any feedback request we might ask only 2 or 3 scale questions.
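A minimal sketch of that second use, assuming invented data: the scale question supplies an engaged/disengaged target for each respondent, and binary topic flags derived from their open text show what each group is disproportionately discussing. The respondents and topic names are hypothetical:

```python
# Invented respondents: an engaged flag from a scale question (e.g. a
# score >= 4 on a 5-point item) plus topic flags derived from open text.
respondents = [
    {"engaged": 1, "career": 1, "workload": 0, "manager": 1},
    {"engaged": 1, "career": 1, "workload": 0, "manager": 0},
    {"engaged": 0, "career": 0, "workload": 1, "manager": 0},
    {"engaged": 0, "career": 0, "workload": 1, "manager": 1},
    {"engaged": 1, "career": 1, "workload": 0, "manager": 1},
    {"engaged": 0, "career": 0, "workload": 1, "manager": 0},
]

def topic_rate(rows, topic):
    """Share of a group whose feedback mentions the topic."""
    return sum(r[topic] for r in rows) / len(rows)

engaged = [r for r in respondents if r["engaged"]]
disengaged = [r for r in respondents if not r["engaged"]]

for topic in ("career", "workload", "manager"):
    print(f"{topic}: engaged {topic_rate(engaged, topic):.0%} "
          f"vs disengaged {topic_rate(disengaged, topic):.0%}")
```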
Why are scale questions so popular?
If open questions hold so many advantages, why are most surveys mostly scale questions?
A very big factor is that scale questions are easy to report and analyse. Before tools like OdinText for customer feedback or Workometry for employee feedback, analysing large volumes of unstructured data was hard.
Now, however, that is not the case. Given the rapid progress of text analytics, I suspect we’ll start to see the gradual death of the traditional survey. If you’re serious about learning, it can’t come too soon.
About the author
Andrew is one of the pioneers of the European People Analytics scene. He is the founder of OrganizationView, creator of the open-question employee feedback tool Workometry and the co-founder of the People Analytics Switzerland community.
Andrew chaired the first European People Analytics conference - HR Tech World’s 2013 ‘Big Data’ event - and has been co-chair of Tucana’s ‘People Analytics’ conference in 2014, 2015 & 2016. He teaches HR Analytics and Data-Driven HR in Europe and Asia and is a member of CIPD’s Human Capital Analytics Advisory Group, setting standards and content strategy for HR Analytics content.
To keep informed of all Andrew’s writing, here and elsewhere, please subscribe to OrganizationView’s Newsletter or follow him on Twitter.
People Analytics | Employee Listening | Global Leadership
8 yr
Great insight Andrew Marritt. However, there are practical considerations on the tools and technology side of things. As a past practitioner, I can say that getting budget for text analytics tools is **pretty** difficult, unless it is already built into the survey tool. Most survey tools have basic capabilities, not advanced ones. Unless we solve for how to make text analytics more accessible, traditional surveys will continue to reign, in my opinion.
Sr. Software Engineer
8 yr
I think non-text responses are hugely useful in providing a foundation for reporting on a metric or any combination of metrics gathered from consumers. Further, I think you are way overestimating the eagerness of the typical person if you really think he or she is going to provide short essay responses to numerous questions. Having said that, I totally agree that free-text answers provide context and would be spectacularly useful for reporting.
People Analytics Change Leader
8 yr
Such a timely write-up, Andrew, in so many ways; it helps consolidate the Trump hypothesis on no hypotheses, doesn’t it? Divided we stand! As much as I would vouch for the veracity of NLP in HR, the core essence still lies in business insights based on conjoint, corroborative analysis - a true melting point between all types of data (scaled and unscaled, as you refer to them) and, of course, domain expertise. Both approaches have their positives, but unlike scaled data, inadequate volume in unscaled data can make the analysis misleading. A small representative sample of scaled data, on the other hand, can still have meaning (albeit limited). Added to this are the complexities of modelling in NLP itself: you can’t throw a generic text-analytics API (Google, Amazon) at HR survey data and expect production-level accuracy - I am sure you would corroborate that based on your own vast experience at Workometry. Handcrafting the domain-specific computational rules based on conjoint analysis, often after carefully observing patterns in network maps and building a lemmatisation dictionary, is as much art as it is math (or stats). Given adequate time and investment, the rewards of open-text NLP are truly immense for HR. As you precisely nailed it: “...option of not knowing what you’re going to learn, before you ask.” Thanks for an enjoyable write-up.
Agencies who run surveys are a big part of the problem here. Most of them have no idea of concordance or what you can do with it and free-text responses.
Owner Director of Know How HR, offering a range of HR services
8 yr
Saw a refreshing approach to survey feedback and it was all focusing on the postures. Wow...