Why the polls keep getting it so wrong, and a solution – ask people who their friends and family are voting for
Today's guest blog comes from Juliet Hodges.
05 November 2018
In 2016, the unexpected outcome of two votes shook the world: the UK voting to leave the European Union, and the US electing President Donald Trump. Even the pollsters got it wrong – for example, based on the latest polling data, the New York Times gave Clinton an 85 per cent chance of winning just the day before the election.
Accurate polling is important for a number of reasons. Poll results influence politicians' campaign strategies and fundraising efforts; they affect market prices and business forecasts; and they can shape voters' perceptions and even turnout. So when the polls are as wide of the mark as they were in 2016, all of these downstream decisions rest on misleading information.
But polling is not as simple as just asking a lot of people who they intend to vote for. Polls are often biased by who is motivated enough to respond, and people can be overly optimistic about the likelihood that they will actually vote.
Another factor, outlined by Andy Brownback and Aaron Novotny of the University of Arkansas in their recent paper in the Journal of Behavioral and Experimental Economics, is people feeling the need to conceal their true voting intentions.
Socially desirable responding (SDR) can occur when people feel their position is socially unacceptable, like voting against an ethnic minority or female politician, or when revealing their views on sensitive issues such as immigration and same-sex marriage. For instance, it's plausible that some people told pollsters they were going to vote for Clinton, while actually intending to vote for Trump.
To explore whether this was really happening, Brownback and Novotny conducted three experiments during the 2016 presidential campaign, including a telephone survey of 800 people of voting age in Arkansas, and a survey of almost 2000 US voters on Mechanical Turk.
Their ingenious method for teasing out participants' true or "implicit" preferences involved presenting those in the "implicit" condition with four statements on hot-button issues like gun control and climate change. The statements were balanced to reflect both conservative and liberal views, such that participants were unlikely to agree with all four. For example, people who agreed with "I think the threat of global warming is exaggerated" tended not to agree with "I prefer presidential candidates who oppose the NRA", and vice versa. There was also a critical fifth statement: either "I often find myself agreeing with Donald Trump" or "I often find myself agreeing with Hillary Clinton". Participants in this implicit condition were simply asked to report a single total, from 0 to 5, of how many statements they agreed with – the idea being that their preference for Trump or Clinton would feel hidden within the overall tally.
Participants' scores in the implicit condition were later contrasted with those of participants in an "explicit" condition, who indicated how many of the exact same policy statements they agreed with and then, completely separately, answered either "Do you often find yourself agreeing with Donald Trump, yes or no?" or "Do you often find yourself agreeing with Hillary Clinton, yes or no?". In this explicit condition, their candidate preference was totally transparent (more like in a typical poll), and their answer to the final question was added to the number of policy statements they agreed with.
This meant that participants in both conditions had a score out of 5. Participants in the implicit condition could indicate agreement with Trump without admitting it overtly, so if socially desirable responding was occurring, we would expect the average total to be higher when the Trump statement featured in the implicit condition than in the explicit one.
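To make the logic of that comparison concrete, here is a minimal simulation sketch of this kind of design (sometimes called a "list experiment"). It is not the authors' analysis – the sample size, the policy agreement rate, and the true versus admitted rates of agreeing with Trump are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100_000            # respondents per condition (hypothetical; large to keep noise small)
p_policy = 0.5         # assumed chance of agreeing with each of the 4 policy statements
p_true_agree = 0.45    # assumed true rate of agreeing with the Trump statement
p_admit_agree = 0.35   # assumed rate admitted when asked directly (suppressed by SDR)

# Implicit condition: only a total count out of 5 is reported, so agreement
# with the candidate statement stays hidden inside the tally.
implicit_totals = rng.binomial(4, p_policy, n) + rng.binomial(1, p_true_agree, n)

# Explicit condition: the candidate question is asked openly (as in a normal
# poll) and the yes/no answer is added to the policy count.
explicit_totals = rng.binomial(4, p_policy, n) + rng.binomial(1, p_admit_agree, n)

# The gap between the mean totals estimates the share of respondents
# concealing their agreement with the candidate.
gap = implicit_totals.mean() - explicit_totals.mean()
print(f"implicit mean: {implicit_totals.mean():.2f}")
print(f"explicit mean: {explicit_totals.mean():.2f}")
print(f"estimated concealed agreement: {gap:.1%}")
```

With these made-up numbers, the gap comes out close to the 10-percentage-point difference built into the simulation – the fraction of simulated respondents who agree with Trump but will not say so when asked directly.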
That's exactly what the researchers found: people were more likely to express agreement with Trump in the implicit condition, and more likely to agree with Clinton in the explicit condition. Interestingly, there was no significant effect of political ideology (measured with six questions about economic issues, covering topics like taxation, government spending and immigration), so those with more conservative views were no more likely to show socially desirable responding than those with liberal views. However, party affiliation did have an effect: registered Democrats in particular were more likely to indicate agreement with Trump in the implicit condition. Party identification, then, seems more important in producing this social desirability effect than the political beliefs that presumably underpin it. This matters because it means socially desirable responding exaggerates the apparent differences between the two parties, making voters in each camp seem to have less in common than they actually do.
While finding evidence of socially desirable responding in polling is interesting, it is unclear how this implicit test method could be used to make polls more accurate in future, as the data it produces is too noisy. However, another group of researchers has tried a different approach with more obvious practical implications: asking people who their social circle are voting for. We know that when one person votes, their immediate family and friends are 15 per cent more likely to vote too, and it's likely that voting choices are influenced in a similar way.
In their recent paper in Nature Human Behaviour, Mirta Galesic at the Santa Fe Institute and Max Planck Institute, and colleagues, tested this hypothesis in relation not only to the 2016 US election but also to the 2017 presidential election in France. They found that, in both cases, asking people if and how their social circle intended to vote was a more accurate predictor of the outcome than asking participants only for their own intentions or their prediction of who would win. In fact, in one poll, the social circle question was almost twice as accurate as the own-intention question (82 per cent vs. 46 per cent) in the five key swing states that lost Clinton the election. Asking about participants' social circles means they don't have to reveal their own, potentially embarrassing, preferences, and it indirectly increases the effective sample size of the poll.
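As a rough sketch of how the two questions differ in practice – the numbers below are invented, and the published study aggregated and weighted responses far more carefully – compare a poll that tallies only respondents' own intentions with one that averages the vote shares they report for their social circles:

```python
# Toy illustration of the two polling questions (invented data, not the
# Nature Human Behaviour dataset). Each respondent reports their own
# intention plus the share of their social circle backing each candidate.
responses = [
    ("Clinton", {"Clinton": 0.6, "Trump": 0.4}),
    ("Clinton", {"Clinton": 0.5, "Trump": 0.5}),
    ("Trump",   {"Clinton": 0.2, "Trump": 0.8}),
    ("Clinton", {"Clinton": 0.4, "Trump": 0.6}),
]

# Traditional poll: the share of respondents naming each candidate.
own = {}
for intention, _ in responses:
    own[intention] = own.get(intention, 0) + 1 / len(responses)

# Social circle poll: average the reported circle shares, which indirectly
# draws on many more people than the sample itself.
circle = {}
for _, shares in responses:
    for candidate, share in shares.items():
        circle[candidate] = circle.get(candidate, 0.0) + share / len(responses)

print("Own intentions: ", own)     # Clinton ahead, 0.75 vs 0.25
print("Circle estimate:", circle)  # Trump ahead, 0.575 vs 0.425
```

In this toy sample the respondents themselves lean towards Clinton, but the circles they describe lean towards Trump – the kind of divergence the researchers observed in the swing states.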
This study was also able to track social influence over time, as the researchers polled both personal intentions and social circle intentions weekly from July until after the US voted in November. One week before the election, more participants still said they would vote for Clinton than for Trump. Yet as early as September, the social circle question was accurately predicting Trump's win, as people reported a swing towards him within their circles. Not only that, but people who reported an intention to vote differently from their social circle were much more likely to change their own position at the last minute.
Another pattern the researchers observed was how participants' social circles became echo chambers. In August, supporters of both candidates reported that only around two-thirds of their social circle agreed with them. By November, Trump supporters had become much more homogenised: 40 per cent reported that 90 per cent or more of their circle now also supported Trump, compared with only 30 per cent of Clinton supporters saying the same. This could be for a number of reasons: people switching support from third parties, deciding to vote when they originally weren't going to, or even freezing out those who disagreed with their political views.
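That homogeneity statistic is straightforward to compute from social circle responses. A minimal sketch, using invented circle-agreement fractions rather than the study's data:

```python
# Invented fractions: for each supporter of a candidate, the proportion of
# their social circle reported as backing the same candidate.
circle_agreement = [0.95, 0.92, 0.70, 1.00, 0.60, 0.90, 0.85, 0.97, 0.50, 0.93]

# Share of supporters in a near-unanimous "echo chamber": 90 per cent or
# more of their circle backs their candidate.
echo_share = sum(f >= 0.9 for f in circle_agreement) / len(circle_agreement)
print(f"{echo_share:.0%} report a circle that is 90%+ in agreement")
```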
These complementary strands of research demonstrate flaws in traditional polling and point to ways of addressing them in future. It seems likely that people present their preferences in a socially desirable light, but it would be difficult to use the statement-list methodology to predict an election outcome. Asking about a person's social circle is a far more practical way of obtaining accurate results, and it also sheds light on other social processes at work. Hopefully it will be adopted more widely going forward, improving polling accuracy and avoiding any more last-minute surprises.
Further reading
—Social desirability bias and polling errors in the 2016 presidential election
—Asking about social circles improves election predictions
About the author
Juliet Hodges has a background in psychology and behavioural economics, and started her career applying these insights in advertising. She is currently working in healthcare, alongside her PhD at LSE's Department of Psychological and Behavioural Science. Follow her on Twitter @hulietjodges, connect on LinkedIn, or read her posts for the Bupa Newsroom.