VOLUME 21 #1

A matter of opinion

RESEARCH | A public opinion poll is designed to predict future events like elections. Do you (a) agree with that statement; (b) disagree with it; or (c) don’t know?

David C. Wilson hopes you chose (b). In fact, he'd prefer that you strongly disagree.

David Wilson (Photo by Ambre Alexander)

“Polls are designed to provide snapshots of the past, not predictions of the future,” the associate professor of political science and international relations says. “While the media love polls, especially around election time, it is a myth that they are designed to be accurate predictors of things like election outcomes.”

He notes an important distinction between “pollsters,” the experts who collect the data and design the surveys, and “poll sponsors,” who usually report the findings. In some cases, the two are the same.

Wilson is a polling specialist whose research and teaching focus on such topics as public opinion and political behavior, the psychology of politics, and survey research statistics and methodology. A former statistical consultant and research associate with polling organizations, including seven years with Gallup, he now also holds joint appointments at UD in the departments of Black American Studies and Psychology and is the coordinator of public opinion initiatives for the University’s Center for Political Communication (CPC).

The CPC conducts a variety of polls, and Wilson also teaches an undergraduate course each spring in which his students design, administer and analyze the results of a public opinion survey of their peers called the Blue Hen Poll.

With the increasing attention paid to surveys—the 2012 presidential campaign is a case in point—Wilson hopes Americans become careful and knowledgeable consumers when a poll’s results are announced.

“Public opinion polls can play an important role in the democratic process by revealing public preferences,” he says. “Just as the value of your vote relies on how informed you are about issues and candidates, the usefulness of polling relies on the public being informed about how those surveys are conducted.”

Here, he sheds light on some key caveats to consider before putting full faith in poll results.

Not all polls are created equal.

In assessing the results of a poll, think about the aspects of the survey that might have influenced the results. A poll paid for by an impartial public policy group, news organization or university, for example, might word its questions and select its respondents differently from one conducted by a political candidate or a partisan think tank. If you are called, don’t be afraid to ask who is sponsoring the poll; if the caller won’t tell you, the poll is likely not a professional one.

Look for clear, unbiased wording.

Questions can be worded, intentionally or not, in confusing ways that make the final results hard to interpret. Wilson points out that questions with too many elements cause problems, and he gives a real-life example in a piece he wrote for the Huffington Post: “Do you agree or disagree that the federal government has gotten totally out of control and threatens our basic liberties unless we clean house and commit to drastic action?” His reaction is an incredulous, “What?” That single sentence packs in at least four assertions, he says, making it confusing to the respondent and impossible to know which element any given answer is addressing. Unclear questions like these muddy interpretation and lead to errors about what the public actually thinks.

Many factors can influence answers.

Wilson conducts research on how the order of questions can affect results. During the worst months of the recession, for example, first asking respondents how they viewed the economy (most answers were negative) and then asking what they thought of the job President Obama was doing resulted in lower approval ratings for the president. Asking the questions in the reverse order yielded higher approval ratings. Wilson has found the same pattern with life satisfaction: respondents report lower overall satisfaction when they are first asked how satisfied they are with their finances.

“People like to look at survey results and think that the respondents have given a lot of thought to these issues and to their answers,” Wilson says. “But really, their answers are often driven by whatever is at the top of their minds at that moment.”

Pay attention to the margin of error.

Since it’s not possible to survey every person in a large population, most polls rely on samples. Reputable pollsters report each survey’s error due to sampling, also called the “margin of error,” or MOE. For most surveys, the reported MOE is driven entirely by the number of people contacted: larger samples yield smaller margins of error. Most pollsters will not exceed a 5 percent MOE.
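
For a simple random sample, the standard textbook margin of error is roughly z times the square root of p(1 − p)/n, where n is the sample size, p is the assumed proportion and z reflects the confidence level. The article doesn’t spell out the arithmetic, so here is a minimal Python sketch of that conventional calculation (not Wilson’s own method), using the conservative assumption p = 0.5. It shows why a typical national poll of about 1,000 respondents carries a margin near plus or minus 3 points.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a simple random sample.

    n -- number of respondents
    p -- assumed population proportion (0.5 gives the widest,
         most conservative margin)
    z -- z-score for the confidence level (1.96 is roughly 95 percent)
    """
    return z * math.sqrt(p * (1 - p) / n)

# Larger samples shrink the margin, but with diminishing returns.
for n in (400, 600, 1000, 1500):
    print(f"n = {n:5d}: MOE = +/- {100 * margin_of_error(n):.1f} points")
```

Note the diminishing returns: quadrupling a sample only halves the margin of error, which is one reason pollsters settle for samples of a thousand or so rather than tens of thousands.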

All polls contain some errors.

Errors can also occur when some groups are left off the list of potential respondents. A telephone poll that doesn’t include cell phones, Wilson says, will underrepresent young people, poor people and those who work long hours, since all of those groups rely more on cell phones and less on land lines. Other errors stem from respondent behavior: people avoid personal or sensitive questions, get distracted, or tire of an interview that runs too long, so many quit partway through or decline to answer.

Results can be reported in very different ways.

Wilson says that poll watchers should be leery of drawing conclusions based on a single question. Polls always collect more data than they report, and the poll sponsor will commonly announce only what seems most newsworthy. This is most dangerous when poll sponsors publicize only the results that support their views. “The key is transparency,” Wilson says. “Polls should make their questionnaire and their results available so we can see the entire study.”

Another issue in interpreting poll results is context. Wilson gives the example of an elected official with a 40 percent approval rating. That number alone is far less informative than also knowing what his or her approval was a month or a year ago and how it compares with others in similar positions.

Don’t look too far ahead.

Polling is a field of serious study, Wilson says, but it’s also a business, and polling organizations like to stay in the public eye to attract customers. So political polling starts early.

“People who are sophisticated about polls know it is too early to understand anything that will happen in 2016 or even 2014, and yet we still see those polls being conducted,” he says, noting that any number of scandals, political crises, gaffes and external events can occur to change the minds of voters between the time they are surveyed and the time they go to vote—or decide to stay home.

“Polls provide data to help professionals make educated guesses, but they should not be viewed as crystal balls,” Wilson says. “At best, they’re a snapshot of what someone is willing to express at a given moment to a total stranger or a computerized voice.”
