
Guest comment: The online opinion poll that could cost £13.2bn

Online surveys are at the heart of a lot of decision-making these days, particularly, as you would expect, for online businesses. Many people assume that an online survey is representative of the wider population, but the recent Scottish Referendum polls reveal that this is not necessarily the case. Steve Abbott, director at The British Consumer Index, explains why, when it comes to web surveys, who takes part can make all the difference.


In the run-up to the Scottish Referendum, an online poll showed the ‘Yes’ vote to be ahead, while other polls maintained a victory for the ‘No’ vote.
This provoked quite a reaction. Experience had shown that the online polls conducted ahead of other votes, such as General Elections, had proved to be reasonably accurate. The ‘No’ campaign therefore beefed up its pledges as to what a post-‘No’ vote Scotland would get, as summarised at the time by The Guardian.
In the event, as we now know, the vote turned out to be a ‘No’, so why was the online poll so wrong in this instance?
Two Types of People
The answer lies in the difference between people who take part in online surveys and those who don't.
YouGov tell us that the reason online polls tend to be good at predicting elections is that people who respond to their surveys are more likely to actually go out and vote. They are generally more ‘active’ in the broad sense.
This works well when the turnout is low, as is the case in most elections, but this time the turnout was high and therefore more representative of the population as a whole.
And that is the issue. Online polls and surveys represent a subset of the population which is more ‘active’ than the population in general. It can, therefore, be misleading to extrapolate an online survey to the whole population. Simply put, the people who respond to online surveys are more likely to actually turn out to vote, so when turnout is lower the poll will be more accurate than when it is higher.
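To make the mechanism concrete, here is a minimal sketch of that logic. The population shares and ‘Yes’ preferences below are invented purely for illustration; the point is only that a poll drawn from the ‘active’ subset can look accurate at low turnout yet miss badly at high turnout:

```python
# Hypothetical electorate: 40% are 'active' people, who are more
# likely both to answer online surveys and to vote.
# Assumed 'Yes' support: 52% among active people, 38% among the rest.
active_share, yes_active, yes_inactive = 0.40, 0.52, 0.38

def poll_online():
    # An online poll effectively samples only the active subset.
    return yes_active

def actual_result(turnout_active, turnout_inactive):
    # 'Yes' share among the people who actually turn out to vote.
    votes_active = active_share * turnout_active
    votes_inactive = (1 - active_share) * turnout_inactive
    yes = votes_active * yes_active + votes_inactive * yes_inactive
    return yes / (votes_active + votes_inactive)

print(f"Online poll:           {poll_online():.1%} Yes")
# Low turnout: mostly active people vote, so the poll looks close.
print(f"Low-turnout election:  {actual_result(0.70, 0.10):.1%} Yes")
# High turnout (like the referendum): the less active vote too,
# and the poll's 'Yes' lead disappears.
print(f"High-turnout election: {actual_result(0.90, 0.85):.1%} Yes")
```

With these made-up numbers the poll shows ‘Yes’ ahead on 52%, the low-turnout result comes in close behind at around 49.5%, but the high-turnout result falls to roughly 43.8%: the same poll, wrong only when the less active turn out.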
Taking another example: The British Consumer Index (BCI) publishes figures each month on people's Financial Optimism, that is, whether they think their personal financial situation will get better or worse in the next few months. It shows the results for the population as a whole and for those individuals who respond to online surveys. These figures illustrate the extent of the difference when it comes to ‘attitude’.
Those who respond to online surveys are the most optimistic, while those who do not use the internet are, as may be expected, the least optimistic. The green bar shows the figure for the population as a whole.
(Figures for the three months to September 2014)

What this Means
Any survey needs to be interpreted with care; results cannot be taken at face value. While it is easy to weight a survey for demographics to make it representative, it is far harder to adjust for attitude. Proxies can be used, such as which newspaper people read, but when asking about attitudes there is no ‘benchmark’ to weight against (as the Census provides for demographics).
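For illustration, here is a minimal sketch of the demographic weighting that is "easy": each respondent is weighted by population share divided by sample share for their demographic cell. The respondents and the Census-style shares below are invented; the technique is standard cell weighting, and it is exactly this step that has no equivalent benchmark for attitudes:

```python
from collections import Counter

# Hypothetical respondents: (age band, 1 = 'yes' answer, 0 = 'no').
respondents = [
    ("18-34", 1), ("18-34", 1), ("18-34", 0), ("18-34", 1),
    ("35-54", 1), ("35-54", 0), ("55+", 0), ("55+", 0),
]

# Assumed Census-style population shares for the same age bands.
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Share of each band in the sample as collected.
counts = Counter(band for band, _ in respondents)
sample_share = {band: n / len(respondents) for band, n in counts.items()}

# Weight = population share / sample share for each band.
weights = {b: population_share[b] / sample_share[b] for b in counts}

raw = sum(ans for _, ans in respondents) / len(respondents)
weighted = (sum(weights[b] * ans for b, ans in respondents)
            / sum(weights[b] for b, _ in respondents))

print(f"Raw 'yes' share:      {raw:.1%}")       # 50.0%
print(f"Weighted 'yes' share: {weighted:.1%}")  # 40.0%
```

Here the over-sampled 18-34s are weighted down and the ‘yes’ figure falls from 50% to 40%; the Census makes the correction mechanical for demographics, but no such reference exists for attitude.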
It is widely understood that online surveys tend to reflect more definite views and a more ‘active’ profile than the population in general, but quantifying that difference is where the difficulty lies.
When an online survey is used to establish the likely demand for a product, or attitudes to advertising and marketing, that quantification can make the difference between the success and failure of a product or campaign.
One solution is to run online surveys in tandem with other methodologies. Online can provide cost-efficient volume, with a smaller ‘control’ survey using, say, face-to-face interviews. Conclusions can then be drawn from an amalgam of the two results.
The problem is that this can be both costly and time-consuming, as the offline methodology will inevitably take longer to collect.
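The article does not specify how the amalgam is formed, so the following is only one plausible sketch: ask the same ‘calibration’ question in both surveys, use the face-to-face control to measure how far the online sample is skewed on it, and apply that correction to results only the large online survey could afford to collect. All figures are invented:

```python
# Hypothetical figures: both surveys ask a shared 'calibration'
# question; only the online survey is large enough to ask the rest.
online_calibration = 0.58    # online result on the shared question
control_calibration = 0.50   # face-to-face result, same question

# Assumed correction: the ratio between the two on the shared
# question is carried over to the online-only results.
skew_factor = control_calibration / online_calibration

online_only_result = 0.46    # a question asked only online
adjusted = online_only_result * skew_factor

print(f"Skew factor:       {skew_factor:.3f}")   # 0.862
print(f"Online-only raw:   {online_only_result:.1%}")
print(f"Adjusted estimate: {adjusted:.1%}")      # about 39.7%
```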
Another solution is to use ‘reference’ data which can be split by whether or not people respond to online surveys. The British Consumer Index uses this method for its Financial Optimism data detailed above. The data comes from The British Population Survey, which has been collecting it every month since July 2009. This ‘reference’ data can be used to provide an indication of ‘Attitudinal Skew’ for any online survey, to help with the extrapolation of the results to the population as a whole.
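BCI's actual calculation is not described in the article, but one simple reading of an ‘Attitudinal Skew’ indicator would look like the sketch below, where the reference data supplies the same attitude measure for online-survey responders and for the whole population (all numbers invented):

```python
# Hypothetical reference figures for Financial Optimism (share who
# expect their finances to improve), split by survey behaviour.
reference = {
    "online_responders": 0.22,   # the most optimistic group
    "whole_population": 0.11,    # the population-wide 'green bar'
}

# The ratio indicates how much an online sample overstates optimism.
attitudinal_skew = (reference["online_responders"]
                    / reference["whole_population"])

# Deflate a new online survey's optimism-style finding when
# extrapolating to the population as a whole.
online_survey_result = 0.30
extrapolated = online_survey_result / attitudinal_skew

print(f"Attitudinal skew:     {attitudinal_skew:.2f}x")  # 2.00x
print(f"Online survey result: {online_survey_result:.1%}")
print(f"Population estimate:  {extrapolated:.1%}")       # 15.0%
```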
Summary
All of this is not to say that online surveys are bad. In fact, they provide many benefits: they are quick, cost-effective and can provide large and robust sample sizes. However, as with any data, the results need to be interpreted with care.
The one thing that the Scottish Referendum experience shows us is that while online polls can be very accurate in a ‘low turnout’ election, they cannot be expected to perform as well when almost the whole electorate actually turns out to vote.
In a broader context, it illustrates the challenge for anyone using these surveys to understand not just the population as a whole but also their particular target segment.
By Steve Abbott
Director
The British Consumer Index

www.bcindex.co.uk
