An Exercise on Question Wording and Political Polling

Author: Jeffrey L. Bernstein, Department of Political Science, Eastern Michigan University

As we learned once again in the 2020 election, polling is hard. The simple act of polling voters to determine their preferences, and then using those preferences to predict election results, seems more art than science.

The polling industry is already asking, and will continue to ask, difficult questions about how they constructed samples, weighted respondents and accounted for “shy Trump voters” during the post-mortem following the election. Articles on these topics will likely be prevalent during the build-up to the 2024 presidential election.

While these types of issues are important for understanding how polls work, the exercise I describe here focuses on one other polling challenge: question wording.

When I teach this subject in my Introductory American Government course, I always invoke Walter Lippmann’s wonderful phrase “pictures in our heads.” The phrase helps students see that how we word a question paints different pictures in the heads of our respondents.

For example, would respondents answer differently if asked about “late-term” versus “partial-birth” abortion? The effort each side spends defining the issue in its own vocabulary suggests they believe some people would, in fact, respond differently to each question. I suspect this is partly because the two wordings paint very different pictures in the heads of respondents.

Questions with Biased Wording

In class, I introduce some examples of questions with biased wording. One easy place to start is with the surveys political party organizations put out.

A quick Google search turns up some wild examples of “scientific” polling questions that parties have used. One from the Trump campaign asks, “Who do you trust more to protect America from foreign and domestic threats?” and offers the choices of (a) President Trump or (b) a corrupt Democrat.

A 2020 DNC survey asked voters to list the aspects of the Trump presidency they found most disturbing. By presupposing that voters found the presidency disturbing at all, the question departs from good polling practice.

These types of questions, of course, are not to be taken seriously. Sending out surveys like this is known in the field as FRUG-ing (Fund-Raising Under the Guise of Research).

While FRUG-ing serves a political purpose, it doesn’t teach us much about the subtleties of question wording. I find it far more interesting and useful to create surveys composed of legitimate questions, where subtle changes could make a difference in responses. Does changing a single word, or varying the information we give our respondents, change their responses to political issues?

An Exercise on Polling Questions

For over twenty years, I’ve used an exercise to explore polling questions and how they’re worded. The exercise takes place over parts of two class periods. With advances in survey technology, this exercise is easier than ever to do—and more effective.

Part 1: Writing and Collecting Questions

To begin, I break students into small groups and task each group with writing four questions, in two sets of two.

For each set, I ask students to vary one relatively small aspect of the question, in the words used or the information given. I emphasize that the variation should be minor, and I remind them of the partisan polls we discussed earlier as a lesson in what not to do.

We don’t learn very much when we discover that profoundly biased questions yield different results; our focus, instead, is on more subtle forms of bias.

Each group chooses two topics for their questions, writing two different questions on each. The questions can be on any topic. I encourage students to consider widely discussed national issues, but also allow polling on smaller, niche topics.

The issues that seem to recur among my classes—climate change, legalization of marijuana, racism—provide a nice window into the issues animating the lives and politics of our students.

The easiest questions to write are those that exchange one word for another: support or opposition to “pot” vs. “weed” vs. “marijuana,” “capital punishment” vs. “death penalty,” “marriage equality” vs. “same-sex marriage,” and so on.

Verb choice is another source of variation: “protect” vs. “save” the environment, “restrict” vs. “regulate” gun ownership, and so on.

Students could also provide slightly different information in one question than in the other. For example, a question might begin “Given the number of Americans without health care…” or “Given the large number of Americans without health care….” Starting both versions this way lets us see whether the tweak yields different responses.

Once I collect the questions, I sort through them and select about a dozen to use on the official survey. I use SurveyMonkey to set up an A/B test, in which each respondent randomly receives one version of each paired question.
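For readers curious about the mechanics of the split, here is a minimal sketch in Python of the random assignment an A/B test performs. The question text and the function name are hypothetical, and SurveyMonkey handles this step automatically.

```python
import random

# Hypothetical paired questions; in the real exercise these come from
# the student groups, and SurveyMonkey randomizes the assignment.
PAIRED_QUESTIONS = [
    ("Do you support or oppose capital punishment?",
     "Do you support or oppose the death penalty?"),
    ("Should the government regulate gun ownership?",
     "Should the government restrict gun ownership?"),
]

def build_questionnaire(pairs):
    """Pick one version of each paired question at random."""
    return [random.choice(pair) for pair in pairs]

# Each call simulates the questionnaire one respondent would see.
for question in build_questionnaire(PAIRED_QUESTIONS):
    print(question)
```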

I create a link for the students to use in sharing the survey with friends and family. There’s usually a target number of surveys I ask the class to obtain (about thirty times the number of students).

At this point, students often point out that they won’t be able to obtain a random sample by just surveying friends and family. They are, of course, correct. This leads us into a fruitful discussion about internal validity and external validity.

Part 2: Sharing the Results

After the survey closes, I make a printout of all the results, organized by question. I spend about twenty minutes in class taking requests to show the results for particular questions, and we end up spending considerable time discussing them and identifying why different wordings yield different effects.
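For classes that want to go beyond eyeballing the printout, a simple chi-squared test of independence can indicate whether the gap between two wordings is larger than chance. Here is a minimal sketch in Python; the response counts are hypothetical placeholders, not real survey results.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts for one paired question: rows are the two
# wordings, columns are (support, oppose) responses. Substitute the
# actual totals from the class survey.
observed = [
    [62, 38],  # Version A: asked about "capital punishment"
    [48, 52],  # Version B: asked about "the death penalty"
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-squared = {chi2:.2f}, p-value = {p_value:.3f}")
# A small p-value (conventionally below 0.05) suggests the difference
# between wordings is unlikely to be due to chance alone.
```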

Students often find themselves drawing hypotheses concerning:

  1. The salience of the issue: do more salient issues like abortion exhibit smaller question-wording effects than less salient issues, such as trade policy?
  2. The difference in the wording: to what extent do more, or less, subtle shadings of the words yield different levels of question-wording effects?
  3. The different impact of changing words versus changing information in the stem of the question: does either variation produce larger effects?

I will usually follow this up with an exam question that asks students to make assessments about the impact of question wording on polling results. I’m almost always impressed with how students pull insights from this exercise. They typically offer up a well-developed set of principles for when question wording will—and won’t—have an impact.

The Impact of Research-Based Exercises

For years, I did this exercise and considered it an innovative way to teach about public opinion polling. Only in the last few years have I come to see it as an exercise that gets to the heart of the university experience.

It’s difficult for many professors to bring their research into introductory-level classes; most of what we work on is too esoteric to use in a sustained way. Still, the vision of professors actively producing knowledge in their fields, and of students training to one day play that role (even as undergraduates!), is an important one to convey.

So I make sure to point out to my students that, as they participate in this exercise, they’re doing actual political science research. I don’t know how their questions will perform on the survey; neither do they. All of us form our hypotheses together, and testing those hypotheses against data becomes a communal exploration.

On more than one occasion, results obtained in our exercise led the class to refine the experiment and go back into the field to learn more. I’ve even had independent study projects and senior theses grow out of this class exercise.

In the end, I’m certain most students who participate in this exercise end up as better consumers of public opinion polls.

For further insights and peer-tested tips on teaching an effective course, check out our full library of professional development resources.