Non-Compulsory Reading
The following article first appeared in the November-December 2017 edition of the Australian Market and Social Research Society’s publication Research News.
N.B. Since the time of writing, the ABS marriage survey results have been released, and the graphic above compares each major polling company's final 'yes' estimate against the actual result (61.6% 'yes' from a 79.5% participation rate), with undecided responses removed and error margins marked.
Australia’s published polls have an enviable record of accuracy when it comes to election results, and so public trust in survey research is still relatively healthy. The same cannot be said of the UK and US, where polling organisations have endured sustained criticism for imprecise reading of elections and referenda.
Australia uses much the same polling methods, so could the same inaccuracies occur here?
This is unlikely in the case of political polling: our preferencing system removes many of the uncertainties caused by minor parties, an equitable split of voters per electorate means national polls are representative, and our compulsory voting system neutralises the vagaries of turnout.
These factors make the life of an Australian political pollster easier, but they also mean we are unaccustomed to the challenges faced in other systems.
Enter the ABS’s same-sex marriage survey.
Here, we faced an unusual scenario where registered electors were asked whether they supported same-sex couples being able to marry. Taking part was voluntary, you had eight weeks to do so – there was no ‘polling day’ – and votes were cast via a pre-paid envelope.
Every aspect of this unique contest created uncertainty for the pollster beyond simple margin of error. Many people could have chosen not to vote on various grounds. Those who chose to vote often did so quickly, so polls needed to be part ‘predictive tracker’ and part ‘exit poll’. And self-reporting of turnout – likelihood to vote, completing the form or posting it – had the potential to vary wildly.
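For context, the ‘simple margin of error’ mentioned above is the sampling error usually quoted alongside poll results. A minimal sketch of the standard calculation follows, assuming a simple random sample at a 95 percent confidence level; the sample size and ‘yes’ proportion used are purely illustrative, not taken from any poll discussed here.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate margin of error for a proportion p from a simple
    random sample of size n, at 95% confidence (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative only: a poll of 1,400 respondents reporting 62% 'yes'
moe = margin_of_error(0.62, 1400)
print(f"Margin of error: +/- {moe * 100:.1f} percentage points")  # about +/- 2.5 points
```

That figure captures sampling error alone; none of the other uncertainties described above are reflected in it.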
The reaction of the various published polling organisations to these challenges was interesting to watch. If good research is about asking the right people the right questions in the right way, analysing the answers logically and reporting them fairly, how did Australia fare against these criteria?
Single-issue questions can be trickier to frame than a simple political voting-intention question. It is therefore heartening that most of our published polls immediately switched to asking the exact question on the voting form once it became known.
Almost every reputable poll reported to date has returned a national ‘yes’ preference of 60-70 percent (with undecided responses omitted), so at least we can be reassured by the consistency that this question mirroring provided.
At the time of writing, the one published poll that broke out of this band (at 73 percent) was the only one to employ CATI interviewing. In many circumstances I consider this method amongst the most accurate but, having run mixed-method surveys on this and other sensitive social issues to compare effects, I have found that involving an interviewer can increase socially-acceptable responses (in this case ‘yes’ or ‘undecided’).
An appropriate interviewing method – in this case, one that most accurately reflected the private ticking of a box – must be a consideration.
I was surprised that most polls publicly reported undecided responses. The vote was a binary choice: ‘don’t know’ did not appear on the ballot, so why report it? I even saw such responses reported for those who had already voted or, worse still, the voting intention of unregistered electors. We exclude such responses from political polls, and arguably should not allow them in the first place.
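As a concrete illustration of that exclusion, here is a minimal sketch of the usual rescaling once undecided responses are set aside; the response counts are hypothetical.

```python
# Hypothetical raw responses from a single poll
responses = {"yes": 560, "no": 340, "undecided": 100}

# Report only the binary choice that appeared on the form:
# drop 'undecided' and rescale the remaining responses
decided = responses["yes"] + responses["no"]
yes_share = responses["yes"] / decided
no_share = responses["no"] / decided

print(f"Yes: {yes_share:.1%}, No: {no_share:.1%}")  # Yes: 62.2%, No: 37.8%
```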
The other reassuring consistency in all these polls was that, where asked, those who had already voted were more likely to have voted ‘yes’. But why then are we publicly reporting everyone’s voting intention, when we should be able to tell that certain respondents had no intention of taking part?
Unlike the ABS, we were not trying to replicate national opinion, but to determine the outcome of the vote in a voluntary voting system.
Measuring turnout was the major sticking point for Australian pollsters. How do we estimate turnout in a non-compulsory system? Should we take anyone whose likelihood is 9-10/10, 10/10, the top box, the top two boxes? Or should we just wait to ask those taking part?
This is the aspect that seems to have proven most difficult because there was no reliable frame of reference in Australia, and every contest is different.
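To show why the choice of cutoff matters, here is a minimal sketch with made-up respondent data, comparing the implied turnout and ‘yes’ share under a top-box screen, a top-two-box screen and a looser likelihood screen.

```python
# Made-up respondents: (likelihood to vote on a 0-10 scale, stated preference)
respondents = [
    (10, "yes"), (10, "no"), (10, "yes"), (9, "yes"), (9, "no"),
    (8, "yes"), (7, "no"), (5, "yes"), (3, "no"), (0, "yes"),
]

def estimate(cutoff):
    """Treat only respondents at or above the likelihood cutoff as voters."""
    voters = [vote for likelihood, vote in respondents if likelihood >= cutoff]
    turnout = len(voters) / len(respondents)
    yes_share = voters.count("yes") / len(voters) if voters else float("nan")
    return turnout, yes_share

for cutoff in (10, 9, 7):  # top box, top two boxes, a looser screen
    turnout, yes_share = estimate(cutoff)
    print(f"Cutoff {cutoff}+: turnout {turnout:.0%}, 'yes' {yes_share:.0%}")
```

The same stated preferences produce different headline figures purely because of the turnout screen applied, which is precisely the difficulty described above.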
Fortunately, the ABS began to release regular estimates of participation (based roughly on sorted containers of envelopes), which were trending towards an impressive response rate of over 70 percent at the time of writing. This at least gave pollsters a rough point of comparison.
Unfortunately, while the ABS figures counted forms physically received and sorted – many of which would have been posted over a week beforehand – several published polls were measuring self-reported posted votes at lower levels than this. Later polls also reported significantly fewer votes cast than earlier ones.
These inconsistencies – largely borne of turnout question design – cannot be reconciled. And in a voluntary system, where votes cast can be skewed one way or the other, measuring turnout is just as important as headline voting intention.
Our counterparts in the UK and US are learning lessons such as these the hard way. Clearly, Australia also has something to learn about measuring this kind of voluntary contest, and we should use this rare opportunity to do so because most of the real-world choices we are asked to measure are neither compulsory nor made on a single day.
As you are reading this you likely already know the exact result of the same-sex marriage vote. At the time of writing that is a luxury I do not share, but my hope for the industry, and Australia, is that the consistent ‘yes’ majority result of our trusted polls proves accurate.
Jim Reed is Senior Director at Newgate Research
Former Research Director at Essential Media Communications
It always seemed to me it was a rather strange exercise to conduct a survey for the sole purpose of estimating the result of another survey. It reminded me of this quote from John Warhurst of the ANU – “The public opinion polls, as relentless and as frequent as they are, provide documentation but not explanation. They repeat a series of simple questions and deluge us with the answers.”

Public opinion polling should do better than this. When we established our public polling 10 years ago, we did so with the intention of trying to explain and understand public opinion – not just measuring a few standard metrics. So just trying to replicate the same-sex marriage survey results seemed to us to be not particularly useful.

A key issue of interest regarding the same-sex marriage survey was whether it would actually reflect public opinion. Given that our public opinion polling over recent years had consistently shown around 60% support, it seemed it did. In fact, there is little evidence that the campaign changed opinions in any significant way.

Another issue was whether the introduction of other issues into the debate (e.g. religious freedoms, sex education in schools, political correctness, etc.) had any impact on the vote. We addressed this in our polling, which suggested it hadn’t.

Simply, what seems to have happened is that – despite all the campaigning, all the media commentary and all the polls – people who supported same-sex marriage voted “yes” and people opposed voted “no”.

But if we as researchers think our most important role is to simply make predictions that are a couple of percent better than others, we are selling ourselves short.
Director Stakeholder Relations and Communications at Northern Australia Infrastructure Facility
Interesting article Jim.
Founder of Resolve
Thanks Paul. Ipsos appear to be the only polling organisation to have used CATI interviewing, and were certainly above other polls' yes vote at the time it was taken. The others were mainly online, automated telephone or a combination of the two (the latter category now includes Newspoll). The Roy Morgan poll quoted was SMS.