View from
The Center

Polling is Alive and Well: An Interview with Gallup’s Editor-in-Chief

One pundit said that “data died” after the 2016 presidential election. Gallup’s editor-in-chief doesn’t think so.

On April 11th, Gallup editor-in-chief Frank Newport joined Merion West's Erich Prince to discuss the state of polling post-2016, the impact of phrasing and question length on poll responses, and the economics of conducting polls in the Internet age.

So to start off, you’ve had a variety of stops prior to becoming editor-in-chief of Gallup. I understand you’ve been a sociology professor, then a talk show host in Houston. Can you talk a little bit about how you became interested in polling and what led you to become editor-in-chief of Gallup?

Well, that's a complex history. I say that I've always been interested in people: understanding and dealing with people. That led me to get interested in sociology, which is the scientific study of human social behavior. One thing led to another, and I ended up getting a Ph.D. in sociology at the University of Michigan in Ann Arbor. I went directly from there to the University of Missouri–St. Louis as an assistant professor of sociology. I was following that traditional track of studying humans, but then I got sidetracked.

I had always been interested in broadcasting, and I actually worked my way through college with broadcasting. I was very interested in that. So, after a few years in academia, I got fairly bored with that approach to life, so I actually moved to Houston and became a talk show host, among other things. I was exercising that side of my interest. Then, I got together with a pollster in Houston, who I had actually interviewed for a segment on one of my talk shows.

He was intrigued by my interest in both the media and also polling and social science. So I moved over and became a partner at his firm. I actually helped do polling and marketing research for media companies as clients. I got back into the business of studying humans. But instead of doing it in academia, I was doing it in the commercial sphere. That’s what I’ve been doing ever since. I later moved to Gallup and became editor-in-chief, and that’s where I am today.

I understand you’re also an author. Taking a look through your book God is Alive and Well, one thing I want to ask you about is a point you make early: that Dr. Gallup always liked to have his questions short and to the point. I think the example you gave of one such question was: “Do you believe in God?” So I’m wondering, to what degree have those types of questions been continued at Gallup in the time since Dr. Gallup’s death?

Yes, that's a very intriguing area of interest to social scientists of any kind: how do you ask questions? When you're in the survey realm or the poll realm, it becomes even more important. It's complicated, like anything else. It's like asking a heart surgeon: "How do you do heart surgery?" You get a complex answer because there are many different variations and different kinds of patients in different situations. That's the case with question wording in polling as well. Let me just give you one example: if a survey is administered orally or on the phone, then shorter questions make a lot of sense, in part because a respondent might find it difficult to hold a lot of information in his or her head before answering.

Some people like to use that question format. The Wall Street Journal and NBC use it quite frequently; if you're an interviewer, you read something to the respondent: "Some people think that the best approach to Syria is…" Then you say: "Other people think the best approach to Syria is…another way." In both cases, you read a few sentences. Then you say: "Which of these two approaches do you agree with most?"

But it's difficult for respondents to even remember what the first option was in that setting. So, that doesn't work as well as it should. That's kind of what Dr. George Gallup, who founded Gallup, had in mind when he said: "Short questions can be very beneficial," because you're asking the respondent something they can quickly grasp and answer. The best example of that is our presidential job approval rating, which was pioneered in the days of Franklin Roosevelt back in the 1930s, and we've been asking it basically the same way ever since. It's very simple: "Do you approve or disapprove of the way that the current president is doing his job as president?"

Some people ask me what we do when someone has mixed views: “I like certain things he’s doing, but I really dislike the other things the president is doing.” This is true for Trump, for Obama, George W. Bush, whomever it is. But we say, “Well, we’re asking you to summarize.”

A simple question actually has a lot of benefits. But I should say, if you're doing a survey online, where people are looking at the screen, or in our mail surveys, where they're looking at the paper, then it's possible to have a more complex question, because, kind of like an S.A.T. exam, you can study the alternatives and make a choice without having to hold it all in your mind. So, in some settings, you have a little more latitude to ask a complex question. But, as a rule regardless, I think simple, to-the-point questions make a lot of sense.

So I understand that as early as October of 2015, the media began to report that Gallup would not be doing horse race coverage of the 2016 election and instead would be asking broader questions about voter attitudes. Can you talk a little bit about that shift away from horse race coverage?

Yes, that received a lot of news coverage at the time because Gallup has historically done great horse race coverage.

I understand you got a lot of press, both before and after Election Day, when it came to Romney and Obama in 2012.

That's right. But you always get a lot of press. You can go back to 1948, when Gallup was polling the Dewey–Truman race, and there was a lot of press then as well. Every election has this controversy, or the potential for controversy. Elections are highly charged, and you have very emotional partisans on both sides. Everybody is scrutinizing the data and accusing everybody else of being biased or not doing it right because it's not showing what they think it should show.

It's a complicated situation, but we looked at the resources it took when we did the 2012 election and tracked the "Who are you going to vote for?" question every night by asking a big sample of people that question. It took a large amount of resources, money, and expense to be able to track this number. Then, the morning after the election, we knew who had won anyway, so we felt it didn't really add that much to the overall public understanding of what should be done to make our society better.

Also, compared to when Dr. Gallup started doing this many years ago, there are many more polls now that track the horse race. So it isn't as important today that Gallup have its oar in the water, since there are so many other oars in the water compared to previous years. For a variety of reasons along those lines, and because it's very expensive and time-consuming and there are many other people doing it, we felt we could use our resources in other ways and not devote a huge percent of our coverage of the presidential election to just that one question: who's going to win the election?

I want to ask about the perceptions of data post-2016. I have a quote here from Mike Murphy saying on MSNBC, on the evening of November 8th that: “Tonight data died.” I’m wondering, first off, how true you think that assessment is—but, more importantly, even if that’s not actually true, is that perception on the uptick?

No, I don't. I mean, there was a lot of discussion like that after 1948, when the polls, Gallup included, predicted that Truman would lose his bid for reelection, and, of course, he won. But the death of data didn't come to pass then, either. "Data" is a pretty big term, by the way; next time you go to your doctor and they give you your cholesterol count, that's data. Do you no longer trust that? Do you no longer trust the baseball scores that come in that night? That's data, too. So, I don't think it's fair to say that data is dead.

I think that in polling and survey research, clearly, you have to analyze it, just like you would health research. There are a lot of studies that are thrown out, and you read in various publications online about this health study and that health study, whether it’s good or bad to have P.S.A. tests, whether it’s good or bad to have mammograms, or whether eating red meat is good or bad for you, and so forth. So there is a huge amount of information. You do have to be cautious and put it together and look at the source, so I think polling data is very similar.

When we try to assess the opinions of the population of a country, like the United States, it is a complex process that has many different pitfalls along the way. So I think the consumer always has to be cautious, but I continue to think that it's very valuable that we have public opinion polling, that we have data from the public. I do believe that we can continue to do it accurately, with our sample representing what everybody thinks, so I don't think that public opinion data is dead at all. I think these [studies] are as vital a part of our representative democracy as they have ever been.

For our last question, I want to ask you a little bit about the economics of polling. Pew, for example, has gained coverage, from an economic perspective, for conducting more and more surveys on cell phones to supplement its landline surveys. That can be more expensive, of course. What are some of the other challenges in balancing accuracy with extra costs? For example, sampling more people for perhaps only incremental returns in accuracy.

Well, that's always a concern, but it's not just Pew doing surveys on cell phones; 70% of our research is done on cell phones. Everybody is using cell phones, so that's not something new or something only one firm is looking at. Most firms, with a few exceptions, now do the majority of their surveys on cell phones.

I should say that there is always a cost-benefit trade-off in doing survey research. Methods change over the years. We used to do in-person surveying, as I mentioned; Gallup used to go and knock on doors, but that's very expensive. Then, we went to phone surveys, and we had to include non-listed numbers in a variety of ways. Now, we're moving to cell phones, so it's a constantly moving target. You're constantly assessing the value of using a given method of research against how well you think it represents the population.

It works the other way, too. You can do research very cheaply these days, particularly on the Internet. You personally could go to SurveyMonkey, set up a survey, and then buy a sample of people online pretty cheaply. You send that sample of people, who've agreed to do surveys, a well-formatted survey, and you get two thousand responses back, pretty darn cheaply. Then you say: "I have a survey here." But it wouldn't be very accurate, in general. Therefore, you have to be concerned that if you try to do things cheaply, you may be getting results that are worthless. This, of course, is true in life itself. You can build a new house and cut corners to do it cheaply, and the house collapses the next time a big windstorm comes up. It's the same type of thing. We're always concerned with those trade-offs. It's a constantly changing and evolving field.

So Gallup and other survey researchers, who want to represent the American population, are constantly asking: what is the best method to use as people's communication methods change, as they have been, toward being online, using cell phones, and so forth? What's the best method we can use that gives us the most accurate results? Keep in mind that, as is true of everything, we have to be conscious of expenses. Those are the trade-offs and concerns that any survey researcher today has to take into account in going about their business.

I appreciate your time and answers, Dr. Newport. Thank you for joining us today.

Thank you.