Understanding Political Polls


Political polls play a big part in elections, but they can be confusing to navigate. With a chart and frequently asked questions, we try to help you on your way.

With a long campaign and the availability of inexpensive polling, the 2014 Toronto election has seen more polls than ever. But the numbers are often better at raising questions than at answering them. Why do some polls contradict one another? Which ones are most likely to be reliable? Is Doug Ford really polling that high?

Here’s your guide to polls, and some help with how to interpret them.


Here is an interactive poll tracker. Select a particular candidate to see just their line on the chart, or control-select to compare multiple candidates. Change the date range with the slider in the top left to focus on selected time periods, or use the drop-down menus at the bottom right to focus on a particular pollster or polling method.



Sample Size


One objection that is sometimes raised to polling is that sample sizes are not large enough to represent the population. Generally, this isn’t the case. If used correctly, relatively small samples can capture the opinion of the broader public in a way that is statistically valid.

One recent Toronto mayoral poll was released by Mainstreet Technologies on Friday, October 17, and was conducted with a sample size of 2,265 people. This may not sound like a lot in a city with a population of 2.8 million people, but the poll had the fourth largest sample size of the 36 Toronto mayoral polls conducted since November 2013.

Any given poll is only as good as its methodology. For a sample size to be useful, it needs to be a representative cross-section of the population. If you only call, say, 20 people, you run the risk of skewing your poll towards specific demographics, whether they be downtown or suburban voters, men or women, young people or seniors. But once you reach a certain number (over 1,000 people tends to be a pretty good sample for a city like Toronto), the risk that the pollster is not accurately reflecting the population diminishes.

For this reason, larger sample sizes are better, but past a certain point they generate diminishing returns. Pollsters want to balance accuracy and affordability, so city-wide polls generally won't go beyond the 1,000–2,000 range. However, when you start to break polls down to get snapshots of particular demographics (looking at various parts of the city by income, or age, or other variables), those results become much less reliable, since the sample size for each specific demographic gets smaller and smaller. So when news outlets report that a certain candidate surged by seven points in a particular Toronto suburb, for instance, but the sample size for that subset was only 180 people, those results should be taken with a grain of salt: the margin of error on a 180-person subsample is more than seven percentage points, as large as the reported surge itself.


Margins of Error


From the sample size, pollsters use a straightforward statistical formula to generate their margin of error. For a sample size of 1,000 in a city with 2.8 million people, the margin of error is 3.1 per cent; for 1,500 people it’s 2.53 per cent; and for 2,000 it’s 2.19 per cent.
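
The formula is simple enough to check yourself. This is only an illustrative sketch, not any pollster's actual code; it uses the standard worst-case margin of error at a 95 per cent confidence level, and for a population in the millions the finite-population correction is small enough to ignore:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error at 95 per cent confidence (z = 1.96).

    p = 0.5 maximizes p * (1 - p), so this is the figure pollsters
    conventionally report.
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (1000, 1500, 2000):
    print(f"n = {n}: +/- {margin_of_error(n) * 100:.2f} per cent")
# n = 1000: +/- 3.10 per cent
# n = 1500: +/- 2.53 per cent
# n = 2000: +/- 2.19 per cent
```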

This margin of error is quoted at a certain confidence level, which is the probability that the true value falls within the reported range; most pollsters use 95 per cent. An example: if a poll found Doug Ford's support at 30 per cent after 2,400 people were polled, that sample size would mean a margin of error of 2 per cent. Based on the sample, then, the pollster is 95 per cent confident that Ford's support is between 28 and 32 per cent.
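
Applying the same arithmetic to the Ford example (again just a sketch; note that the published 2 per cent figure is the conventional worst-case margin, computed at p = 0.5 rather than at Ford's 30 per cent):

```python
import math

n, support = 2400, 0.30                # sample size and reported support
moe = 1.96 * math.sqrt(0.25 / n)       # worst-case margin at 95 per cent confidence
print(f"{(support - moe) * 100:.0f} to {(support + moe) * 100:.0f} per cent")
# prints: 28 to 32 per cent
```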

The same principle means that in, say, the October 6 Forum survey, in which John Tory polled at 39 per cent and Doug Ford at 37 per cent with a margin of error of 2.8 per cent, the two candidates are in a statistical tie.

Similarly, when Olivia Chow went up two points between the Forum polls conducted on September 29 and October 6, that’s not necessarily a sign that she is making real gains, as it’s within the 2.8 per cent margin of error. While many news stories get written based on these small movements in poll numbers or approval ratings, a lot of the movement is noise and not necessarily indicative of a larger change.
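
A quick simulation shows how much apparent movement pure sampling noise can produce. The numbers here are hypothetical and the code is only an illustrative sketch: a candidate whose true support sits at a steady 23 per cent, polled repeatedly with samples of 1,225 (the size that yields the 2.8-point margin above):

```python
import random

def simulate_poll(true_support, n):
    """Return the observed share backing the candidate in one simulated poll."""
    return sum(random.random() < true_support for _ in range(n)) / n

random.seed(1)
true_support = 0.23   # hypothetical: the candidate's real support never moves
n = 1225              # a sample this size gives the 2.8-point margin quoted above
trials = 2000

moves = sum(
    abs(simulate_poll(true_support, n) - simulate_poll(true_support, n)) >= 0.02
    for _ in range(trials)
)
print(f"{moves / trials:.0%} of back-to-back poll pairs show a 'two-point move'")
# Roughly a quarter of pairs do, even though true support never changed.
```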

Polls don’t really nail down specifics: they should be understood as probabilities and ranges within normal distributions, not as precise point estimates. They provide what can be helpful snapshots of public opinion and add some context to political coverage, but for that context to be meaningful, polls need to be understood as imprecise tools.


Weighting Process


Even with a reasonable sample size, raw polling data needs to be weighted to better reflect the population. You might have a situation, for instance, where 54 per cent of the people who responded to a poll are men, and 46 per cent are women. If the population at large is 51 per cent women and 49 per cent men, then the weighting process will count the responses given by women slightly more heavily, in an effort to more accurately reflect the population. If the poll needs really heavy adjustments (if, say, 65 per cent of respondents are women and 35 per cent men), then there’s likely something wrong with the sample.
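
Here is a minimal sketch of that reweighting step, using the hypothetical gender split above. Real pollsters weight on several variables at once, and the support figures below are invented purely for illustration:

```python
# Hypothetical example from above: the sample skews male relative to the city.
sample_share = {"men": 0.54, "women": 0.46}
population_share = {"men": 0.49, "women": 0.51}

# Each respondent's weight is the ratio of population share to sample share.
weights = {g: population_share[g] / sample_share[g] for g in sample_share}
print(weights)  # men ~0.907, women ~1.109

# Suppose (hypothetically) 40% of men and 30% of women back a candidate;
# weighting nudges the raw average toward the women's number.
support = {"men": 0.40, "women": 0.30}
raw = sum(sample_share[g] * support[g] for g in sample_share)
weighted = sum(sample_share[g] * weights[g] * support[g] for g in sample_share)
print(f"raw: {raw:.1%}, weighted: {weighted:.1%}")  # raw: 35.4%, weighted: 34.9%
```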

Some pollsters, like Forum Research, will make further adjustments to their weighting based on who they think are likely voters. For instance, seniors have historically been more likely to vote than youth, so a pollster might account for that in their weighting, to try to give their poll more predictive value. Pollsters do not publicly release these adjustment formulas (although they might share them with clients who commission the polls) because they’re considered proprietary information: a good or bad weighting formula can make or break a polling company’s reputation, so they’re carefully guarded secrets.

This difference in weighting can partially explain how competing polling companies like Mainstreet Technologies and Forum can release polls conducted within 24 hours of each other with what seem like dramatically different results, as they have at several points during this campaign. The pattern here is that Forum consistently has Doug Ford polling higher than Mainstreet, usually by about 3–4 points (although once by 9). This could be due to weighting, or to the time of day the polling companies call (Mainstreet often calls on Sundays, so maybe they’re missing the football fans who are partial to Doug?). Regardless, polls should be compared on an apples-to-apples basis: the most recent Forum poll should be compared to the previous Forum poll to establish trends within the same methodology.


Polling Methodologies


Most polling companies for the Toronto mayoral race use what is called interactive voice response (IVR), otherwise known as robocalls; 30 of the 36 Toronto mayoral polls taken over the past year have been conducted this way. Part of the reason is that it’s quick and cheap compared to other polling methods. It also makes it easier to get large sample sizes, which can be prohibitively expensive if you’re using live calls.

Despite what your friend on Facebook has written, IVR polls (like all of the other polling methods, except for Ipsos-Reid’s online panels) reach cell phones as well as landlines. The phone numbers are randomly dialed, and the candidates are randomly ordered. That way, no one candidate appears first each time (there’s a slight bias towards choosing the first option).
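
A toy sketch of that ordering step (this reflects no firm’s actual dialing software, and the real ballot listed far more candidates):

```python
import random

# Hypothetical shortlist; the real ballot listed many more candidates.
candidates = ["Doug Ford", "John Tory", "Olivia Chow"]

def prompt_order():
    """Return a fresh random ordering for each call, so no single
    candidate systematically benefits from the first-option bias."""
    return random.sample(candidates, k=len(candidates))

for call in range(3):
    print(f"call {call + 1}: {prompt_order()}")
```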

A lot of people don’t answer these phone calls. For a sample size of 2,000, a polling company might need to dial around 100,000 numbers (a response rate of about 2 per cent). People are much more likely to hang up on an automated dialer than on a human in a call centre just trying to do their job.

IVR, like every other polling method, has a mixed record. It has done relatively well in recent Canadian elections compared to other methods, but did poorly in the 2012 US presidential election. But there have also been some recent embarrassing Canadian polls that have increased skepticism about polling reliability, both in the industry and among the general population. In November, Forum showed the Liberal candidate leading in the Brandon-Souris by-election by 29 percentage points. A day later, the Conservative candidate won.

Polls are not foolproof and never have been. In Toronto, for instance, language can be a barrier to reflecting the population at large, as a significant number of people might not respond to a phone call in English. (Some pollsters do spot checks in other languages to test for these biases.)

But neither are polls “worth zero”. After all, every credible mayoral campaign will pore over internal polls, in which they invest a significant amount of their limited campaign funds, in order to glean insights on the state of the race. So while candidates who perform poorly in polls tend to publicly say they don’t pay attention to these things and the only poll that matters is the one on election day, the truth is they pay close attention to such information because they find it useful, and will gladly discuss numbers they like.


Polls Influencing Voters


Of course, polls do not exist in isolation, and studies have shown that they do influence voters. That is, a voter is more likely to vote strategically if they see their preferred candidate is unlikely to win, shifting their support to their second choice or to the candidate most likely to defeat their least preferred choice.

Strategic voting can be a double-edged sword for campaigns. Early on in the 2014 mayoral election, the Chow team encouraged this kind of thinking, based on the idea that with a crowded right field Tory couldn’t put together a coalition strong enough to beat Ford. By July, polls showed Ford in third place, and Tory as a viable first-place candidate.

It is difficult to determine the extent to which strategic voting affects a given campaign, but polling data almost certainly contributes to it. That this would be a much less significant issue with ranked ballots is a topic for another day. But it does get at the idea that public opinion polls don’t just represent the political context of the day: in fact, they actively shape how people think after the poll is released.

It’s for this reason that you’ll occasionally see campaigns release internal polls in an effort to raise their supporters’ hopes and shift the momentum in the race. With that said, internal polls tend to be less reliable and lean towards the candidate that is paying for the results. As such, they are best viewed with a healthy amount of skepticism.


The Players and the Game


As in any industry, polling companies compete for credibility and recognition. Public political polling tends not to make much, if any, money. It’s a loss leader of sorts for market research companies, a way for them to advertise their firm and the services they provide. In that way, there’s clout in being able to say they were the most accurate in predicting the outcome of a given election, or that they are a research leader in Canada’s largest market.

Media outlets have a role to play, too. Polls have guaranteed news value, and readers are very likely to click on stories about polls. It’s also an easy exclusive for a media outlet, which is why they all commission polls (albeit ones that every competing outlet will re-write within half an hour). It’s difficult to determine how polls commissioned by media outlets compare to internal campaign polls, because the latter are rarely released. However, it is fair to say that polls commissioned by media outlets tend to be cheaper and less rigorous than the ones campaigns conduct.

Polls have their place within the political ecosystem, but they’re also a tool that is easily manipulated and misunderstood. Even a good poll is only as good as its methodology and accompanying analysis; sometimes the numbers say less than we might like to believe.


Further Reading


Ryerson Review of Journalism: Why Aren’t Political Reporters Asking the Right Questions About Polls?

Toronto Star: Confused About Political Polling? Here are some Answers.
