Anticipating the recent national election, Los Angeles Times columnist Robin Abcarian wrote an opinion piece declaring, "I have poll derangement syndrome": reading poll after poll (part of the syndrome) while being unable to know what the actual outcomes would be.
Whether for elections, major civic or societal events, business research, or a multitude of other reasons, polls and surveys are conducted to provide insight and understanding.
But do they? What goes into conducting a reliable survey/poll? Can they ever be 100% accurate?
After the last few national elections, many asserted, “the pollsters got it wrong.”
Or so it seemed. Unless the total population for a specific subject is surveyed/polled, statistical variance and, hence, uncertainty will be present.
That doesn’t mean surveys shouldn’t be conducted. The real questions are: how large is the variance (i.e., the “margin of error”), and can that degree of uncertainty be tolerated for the survey’s specific purpose?
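For readers who want to see the arithmetic behind a “margin of error,” here is a minimal sketch, assuming a simple random sample and the standard formula for a proportion at roughly a 95% confidence level; the function name and figures are illustrative, not from the column:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion from a simple random sample.
    p=0.5 is the most conservative assumption (largest margin)."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll of 1,000 respondents carries roughly a +/-3 percentage point margin.
print(round(margin_of_error(1000) * 100, 1))  # ~3.1
```

This is why polls showing a 1- or 2-point lead on a sample of about 1,000 are effectively “too close to call”: the gap sits inside the margin of error.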
Here is some intel to help you understand what goes into conducting a survey. First, the sampling method used to select and contact prospective respondents is critical for obtaining a random and representative population sample, the foundation for the most accurate and projectable results.
These days, most polls or surveys are conducted online and on cell phones; fewer are surveyed on-site, while paper surveys are virtually extinct – happily, in my opinion. For online surveys, sampling methods can include panels or targeted lists such as voter registration or business customer lists.
Phone methods include random-digit dialing and robo-polling. Again, regardless of the selection and contact method, randomness and representativeness are the critical factors.
According to Pew Research, telephone (landline) interviewing was dominant in the early 2000s, and pollsters adapted to cellphone-only households. Phone surveying has fallen amid declining response rates and increasing costs.
However, Pew Research Center studies (1997, 2003, 2012, and 2016) and other researchers have found little relationship between response rates and accuracy. While low response rates don’t necessarily make polls inaccurate, they do carry a higher risk of error than higher response rates. The key issue is whether the attitudes and outcomes being measured influence people’s decisions about whether to take the survey.
A second key factor is the sample size. Other researchers and I are often asked what sample size is acceptable. That depends on the population size from which the sample is being drawn and the extent to which the results are being used to project to the larger population, which is critical in political polls and perhaps less so for other surveys.
A good maximum sample size is around 10% of the population, up to 1,000. For example, in a population of 5,000, 10% would be 500; for 200,000, 10% would be 20,000, which exceeds 1,000, so in this case the maximum would be 1,000. Sampling more than 1,000 adds little additional accuracy, especially given the extra time and money to do so, unless the total sample will be segmented, each segment needing at least 50 respondents to be reportable.
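As a quick illustration of that rule of thumb, here is a minimal sketch of the “10% of the population, capped at 1,000” heuristic described above; the function name is hypothetical:

```python
def max_sample_size(population, pct=0.10, cap=1000):
    """Heuristic from the text: sample about 10% of the population, capped at 1,000."""
    return min(round(population * pct), cap)

print(max_sample_size(5_000))    # 500
print(max_sample_size(200_000))  # 1000 (10% would be 20,000, so the cap applies)
```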
Thirdly, after survey data are received, “adjustments” may be made to ensure that the respondent sample reflects the population in terms of key demographics like geography, age, gender, income, and education.
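To show what such an “adjustment” can look like, here is a minimal sketch of simple one-variable post-stratification weighting, in which each demographic group is reweighted so its share of the sample matches its share of the population. The group names and numbers are hypothetical, and real polls typically weight on several variables at once (often via raking):

```python
def poststratify_weights(sample_counts, population_shares):
    """Weight each group so its weighted share of the sample matches its population share."""
    n = sum(sample_counts.values())
    return {g: (population_shares[g] * n) / sample_counts[g] for g in sample_counts}

# Hypothetical example: college graduates are overrepresented in the raw sample.
weights = poststratify_weights(
    sample_counts={"college": 600, "non_college": 400},
    population_shares={"college": 0.40, "non_college": 0.60},
)
print({g: round(w, 2) for g, w in weights.items()})  # college ~0.67, non_college 1.5
```

The catch, as the next paragraphs explain, is that weighting can only adjust the people who actually responded; it cannot speak for those who never took the survey at all.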
So, what has been happening in the last few elections when polling projections have been “too close to call” and/or did not reflect the actual outcome?
Essentially, certain voters have been underrepresented in the sample, and data adjustments cannot account for them if they do not participate, as the profile and responses of non-respondents are unknown.
Given the continued reliance on cell-phone calls and other methods with increasingly low response rates, and how tightly the electorate is divided, good researchers know that outcomes will carry some degree of uncertainty (though the general public may not).
For the 2024 national election, here are some key factors, as detailed in U.S. News & World Report (November 12, 2024), explaining why the polls may have been seen as falling short:
“The most you can expect from polling is to be close in close elections, and it was,” says data guru Jon Cohen, founder of Truedot.ai and an advisory board member for Decision Desk HQ.
“Just about across the board, the polls understated Trump yet again,” Josh Clinton, professor of political science and co-director of the Vanderbilt Poll at Vanderbilt University, says of 2024. “By smaller amounts – 1 or 2 percentage points – but that 1 or 2 percentage points still matters.”
This same article delves a bit deeper into why Trump supporters have been undercounted leading up to his elections:
Polling organizations simply might have trouble finding people planning to vote for him. Or his backers might not be willing to talk to the media (“fake news” and such) or to pollsters.
“There’s something about Trump and his appeal to people who don’t typically participate in elections, and frankly, don’t commonly participate in polls,” Cohen says.
He notes that a long-standing bias in polling is that people who participate in other things are “disproportionately likely” to participate in polls. And as the Pew Research Center noted in a 2020 polling post-mortem, “there is no guarantee that weighting” – essentially boosting the voice of those underrepresented – “completely solves the problem.”
“When you are missing the very people that Trump has proven to be uniquely capable of turning out, then you get a systematic error like we’ve seen when he’s on the ballot,” Cohen says.
Thus, underrepresentation and non-participation are present in any survey and need to be accounted for; but without knowing who the non-respondents are and how they would respond, there is an unknown margin of error and less than 100% certainty about the actual result.
This uncertainty is magnified (and less tolerated) for national elections, given the high expectations riding on their outcomes.
TAG’s take is that when deciding to conduct a survey, make sure that those conducting it are experienced with methodology and savvy in interpreting and explaining the data results, especially outcomes that may not match expectations.
Contact us to help you through this process and ensure your surveys produce the information and insights you need for sound decision-making.