Poll Bearers

The Onion recently ran an article headlined “85 Percent of Public Believes Bush’s Approval Rating Fell Last Month.” The story discussed a made-up Gallup poll that measured how many Americans “strongly believe that the American people no longer strongly believe that Bush” is doing a good job. The Onion’s satire demonstrates the tenuous position polls hold in our society: simultaneously mocked and worshipped. Yet opinion polls, despite their shortcomings, are the best way to capture public opinion without relying on anecdotal evidence. Extrapolating what a group of people thinks from a few individuals’ experiences produces a skewed portrayal of public opinion. Polls solve this problem.

Opinion polls, also known as surveys or questionnaires, put a series of questions to a randomly selected group of people about all sorts of topics, ranging from views of specific individuals to opinions on general subjects. Groups seeking information from polls fall into four broad categories: corporations conducting market research, media outlets gauging public opinion, candidates seeking information from voters, and elected officials measuring public opinion in their constituencies. As interesting as an in-depth look at corporate polling would be, the nature of this publication calls for a focus on political polls.

For candidates, the objective of polling is simple. “It helps them figure out how to win,” said Mark Mellman, President and CEO of The Mellman Group, a Democratic polling firm. For elected officials, the goals of polling are hardly different. “First, it helps them figure out how to win reelection,” said Mellman. “And second, it helps them figure out how to present their visions and their proposals in the most effective and impactive way.”

To do this, polls ask questions in two major subject areas: favorability and messaging. As the name suggests, the purpose of favorability questioning is to ascertain how the public feels about the candidate. For instance, an August CBS News poll asked, “Is your opinion of George W. Bush favorable, not favorable, undecided, or haven’t you heard enough about George W. Bush yet to have an opinion?” CBS broke down the results among all respondents, Republicans, Democrats, and Independents. Not surprisingly, 79 percent of Republicans viewed Bush favorably, in contrast with 31 percent of Democrats. Among all respondents, 46 percent said they were favorable, a much lower figure than Bush has seen in the past couple of years.

Another tool pollsters use to ascertain favorability is the horserace question, which pits the candidate against one or more rivals to determine which one the public prefers. Horserace questions provide one of the most straightforward ways to learn voters’ preferences, as evidenced by the almost daily release of new data gleaned from horserace questions about the 2004 presidential election. In one interesting example, a September CNN/Gallup/USA Today poll asked, “If retired General Wesley Clark were the Democratic Party’s candidate and George W. Bush were the Republican Party’s candidate, who would you be more likely to vote for: Wesley Clark, the Democrat, or George W. Bush, the Republican?” Among all respondents, 48 percent said Clark and 46 percent said Bush. As the election progresses, Clark’s numbers will no doubt fluctuate dramatically, but Democrats were very encouraged by Clark’s “victory” over Bush in this poll.

The second major subject area—messaging questions—applies more to candidates than to elected officials. These questions aim to figure out what message the candidate should present to the public about himself and about his policies. “The public has a limited attention span. We have to figure out which one or two or three of [the proposed messages] are going to be most effective,” said Mellman.

Typically, a candidate (or one of his staffers) sits down with an analyst from the polling firm to discuss the candidate’s basic positions on relevant issues. The pollster, in conjunction with the candidate’s staff, then writes several questions incorporating those policies to determine which resonates best with voters. For example, a question about Presidential hopeful Howard Dean might read, “Howard Dean signed a bill in Vermont allowing gay couples to join in civil unions, which provide the legal benefits of marriage without actually permitting gays to marry. Does Dean’s advocacy of gay rights make you more or less likely to vote for Howard Dean for President?” More sophisticated questions run several sentences long and, not coincidentally, resemble the script of an advertisement; the questions that test best with respondents will most likely turn up in mailings and TV or radio ads.

When a poll includes lists of messaging questions, the pollster has to be very careful to prevent question-order bias—that is, the possibility that respondents’ answers might be influenced by the order in which the questions are listed. Generally, pollsters eliminate question-order bias by rotating questions, so that several different groups of respondents hear the questions in different orders.
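
To see how rotation works in practice, here is a minimal sketch in Python; the Dean-themed message labels are invented placeholders, not questions from any real survey.

    import random

    # Hypothetical messaging questions; real survey wording would differ.
    messages = [
        "Message 1: Dean's record on health care",
        "Message 2: Dean's record on balanced budgets",
        "Message 3: Dean's support for civil unions",
    ]

    def rotated_questionnaire():
        """Return the messaging battery in a fresh random order."""
        order = messages[:]      # copy so the master list stays intact
        random.shuffle(order)    # each respondent hears a different ordering
        return order

    # Across many respondents, every question appears early and late about
    # equally often, so no single ordering can color the aggregate results.
    for respondent_id in range(3):
        print(respondent_id, rotated_questionnaire())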

Another trick pollsters use to prevent the skewed results that can come from respondent fatigue is to divide certain sections of questions among groups of respondents when the survey is particularly long. This process, known as making “splits,” involves asking one fraction of the questions of the first group of respondents, or “Split A,” another fraction of “Split B,” and so on. Respondents are assigned to splits randomly so that the splits do not differ significantly from the larger sample.
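
A toy version of the same idea, assuming the simplest case of two splits (the question labels are again invented):

    import random

    # Two hypothetical halves of an overly long questionnaire.
    SPLITS = {
        "A": ["q1", "q2", "q3"],
        "B": ["q4", "q5", "q6"],
    }

    def assign_split():
        # Purely random assignment keeps each split demographically
        # similar to the full sample.
        return random.choice(sorted(SPLITS))

    for respondent_id in range(4):
        split = assign_split()
        print(f"Respondent {respondent_id} answers Split {split}: {SPLITS[split]}")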

By “sample,” I am referring to the body of individuals who answer the poll. For a poll’s results to be statistically meaningful, the sample must be sufficiently large as well as representative of the larger population. For example, if a poll is supposed to capture the mood of the nation, it needs to reflect the demographics of the country.

“You need to have a representative sample that looks like America—roughly 51 percent women and 49 percent men. If you’re doing telephone calling you need to continue to make calls until you get that representative sample,” said Karlyn Bowman, a fellow at the American Enterprise Institute.
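
A crude simulation of what Bowman describes, assuming a 1,000-interview poll and treating gender as the only quota (real surveys balance several demographics at once):

    import random

    # Targets from Bowman's example: 51 percent women, 49 percent men.
    TARGETS = {"women": 510, "men": 490}
    completed = {"women": 0, "men": 0}

    # Keep "dialing" until every quota is filled; calls reaching a group
    # whose quota is already full are screened out, just as a pollster
    # would stop interviewing an overrepresented group.
    while completed != TARGETS:
        group = random.choice(["women", "men"])  # stand-in for a random dial
        if completed[group] < TARGETS[group]:
            completed[group] += 1

    print(completed)  # {'women': 510, 'men': 490}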

Most pollsters agree that 1,000 respondents is a large enough sample to accurately reflect what Americans think. And if the sample is supposed to represent a much smaller population—voters in Community Board 9, say—the sample size will almost surely be smaller, not just because a larger one is statistically unnecessary but also because telephone and online surveys are very expensive. To give you an idea of the cost: even a run-of-the-mill Congressional campaign would probably need to spend at least $10,000 to get any useful results.
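
The arithmetic behind that benchmark is standard survey statistics rather than anything my sources supplied: for a simple random sample, the worst-case margin of error at 95 percent confidence is about 1.96 times the square root of 0.25/n.

    import math

    # Worst-case 95 percent margin of error for a simple random sample.
    def margin_of_error(n):
        return 1.96 * math.sqrt(0.25 / n)

    for n in (250, 500, 1000, 2000, 4000):
        print(f"n = {n:4d}: +/- {margin_of_error(n):.1%}")

    # n = 1000 yields roughly +/-3.1 points, the familiar "plus or minus
    # three percent." Halving the margin requires quadrupling the sample,
    # which is one reason costs climb so quickly.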

Beyond favorability and messaging, the third major objective of a political poll is to identify the politician’s target audience. Questionnaires use combinations of all the sections of a poll—demographic, favorability, messaging, and other miscellaneous questions—to separate “base” voters from “swing” voters. Base, or core, voters are the people who will almost certainly vote for the candidate regardless of the message the candidate chooses. Swing voters are still undecided about their allegiance, so their response to messaging is critical. Bush’s base, for example, is composed of staunch fiscal and social conservatives, whereas the swing voters he really needs to court are moderate Republicans who might be persuaded to vote for one of the Democratic Presidential candidates.
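
To make the distinction concrete, here is a deliberately simplified sketch of how respondents might be bucketed; the fields and rules are invented, and real targeting models combine many more questions.

    # Hypothetical respondent records drawn from a poll's sections:
    # party ID (demographics), a 1-5 favorability score, and a horserace answer.
    respondents = [
        {"party": "ours",        "favorability": 5, "horserace": "us"},
        {"party": "independent", "favorability": 3, "horserace": "undecided"},
        {"party": "theirs",      "favorability": 2, "horserace": "them"},
    ]

    def classify(r):
        if r["party"] == "ours" and r["horserace"] == "us":
            return "base"        # with the candidate regardless of message
        if r["horserace"] == "undecided":
            return "swing"       # persuadable; messaging matters most here
        return "opposition"      # unlikely to be won over

    for r in respondents:
        print(classify(r), r)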

Policy First, Polling Later

If a candidate can afford to do even the smallest bit of polling, he’d better do it. Failing to poll could cost a candidate valuable time and energy while he expounds ineffective messages to groups of voters who may already be part of his base.

“Most every campaign uses polling as a tool of determining electoral strategy. There’s not a campaign that’s run today where there’s a significant amount of resources expended that polling isn’t used,” said Jefrey Pollock, President of Global Strategy Group, a Democratic firm.

Usually, however, candidates do not change their policies based on polling; they merely use polls to help them select, from among their preexisting positions, the best message to present to the public. “The job of the good pollster is not to try to find a good message out there and pin it on the candidate, it’s to say of all of the things the candidate has done or might do, which is the most effective,” said Mellman.

Charles Millard, a former Republican Councilman in New York who ran for Congress in 1994, used polling in his campaign. Although Millard lost to Carolyn B. Maloney, polling helped the campaign staff target voters with direct mail campaigns because they could identify which voters agreed or disagreed with Millard on particular issues, such as the Clinton tax cut, gun control, and whether to build a garbage incinerator in Brooklyn.

Still, Millard emphasized that he never changed his position on the issues based on the polling. “I certainly never said ‘well, gosh, I used to think the incinerator was a good idea, but the poll says everybody disagrees with me, I guess from now on I’ll say the incinerator is a bad idea.’ I never did that,” he said. “On the other hand, if I thought the incinerator was a good idea and I knew that a particular segment of the population thought it was a terrible idea, I wouldn’t advertise that position in my mailings to them.”

Similarly, elected officials generally poll to see what voters care about, not to base their legislative agendas on what topics poll best. There are exceptions, however.

Critics often charge that Karl Rove, Bush’s senior advisor, directs the president to push legislation that will attract swing voters, and data about those voters and the issues they care about comes straight from polls. For example, some argue that Bush wanted to claim credit for the Medicare drug benefit to court the coveted seniors’ vote; the elderly, who often pay enormous sums for prescription drugs, turn out to vote at much higher rates than other age groups. Similarly, Rove is said to have urged Bush to impose tariffs protecting the steel industry last year, despite Bush’s oft-cited support for free trade, to woo voters in critical steel-producing swing states such as Pennsylvania and Ohio.

Bill Clinton, too, despite his reputation for meticulously weighing the advantages and disadvantages of any policy before making a decision, got a lot of flak for relying too heavily on polling.

“There are many anecdotal accounts of polls that were used in the Clinton administration to decide where the president should take a vacation,” said Bowman. Vacation plans aside, Clinton is most infamous for having his political advisor, Dick Morris, conduct a secret poll in the middle of the night to ascertain whether Americans were ready to hear about Clinton’s affair with Monica Lewinsky. Morris later said that the results of the survey were so damning that he immediately told Clinton, “You can’t tell them about it. They’ll kill you. They’re just not ready for it.” Because of that one poll, the story goes, Clinton decided to deceive his family, staffers, and the public for months about the affair.

Partisan Objectivity

The polling scene consists of three basic kinds of institutions: partisan firms, academic research centers, and media polling centers. Quinnipiac University runs one of the more prominent academic centers, while Gallup and the ABC News/Washington Post, CNN/Time, and NBC/Wall Street Journal partnerships are among the more established media pollsters. Partisan firms, despite their abundance, are the ones you hear the least about.

During the last few decades, partisan firms have sprouted up across the country, touting their services to parties, candidates, and politically affiliated organizations. Most of these firms, which range in size from a dozen to several hundred employees, also provide corporate polling services, as businesses are much more likely to shell out the big bucks than cash-starved politicians.

On the surface, partisan firms seem like a contradiction in terms: how can an explicitly Democratic or Republican business provide unbiased data? The answer, as both Mellman and Pollock explained, is that the data itself remains pure; it is simply provided as a service to the party each pollster supports.

“Though polling is objective, the purposes to which it is put are not objective at all,” said Mellman. “We choose to be in the political business because we actually care about a political agenda. We want to use our scientific knowledge in the service of those broader political objectives.” Pollock put it more bluntly: “I’d never work for Republicans because that’s who I am and that’s what I believe in.”

Some critics charge that partisan polling firms manipulate data to present the best case for their clients or the party they represent, pointing to the fact that polls always seem to support the groups that release them. And while there is certainly the occasional deliberate manipulation, “for the most part the major pollsters are pretty honest,” said Bowman.

More often, polls paint the organizations that release them in a favorable light simply because groups choose to publish only the “good” polls. “I think more often than not what people say is bias in polling is really bias in the release of polls,” said Mellman. Question wording, too, presents problems. Although pollsters generally try to gather the purest data they can, it is tricky to word a question so that the phrasing itself does not significantly influence the response. For example, Bowman pointed out, “We’ve had 5 questions in the last 5 weeks about whether the public supports spending $87 billion on Iraq and each had a different wording.”

Even with the potential for data manipulation or over-reliance, polling is an invaluable political tool. It is arguably the best way to advance one of the primary goals of democracy, representing the majority’s interests: opinion polls let politicians know what their constituents actually think. And even if you disagree about the usefulness of surveys, at least you’re now equipped to answer the question: “How favorable are you to opinion polling?”