Taking Surveys Seriously? (I)

Time Magazine: “Do you care about what you sing?”

Bob Dylan: “How can I answer that question if you’ve got the nerve to ask me?”

In this post I discuss whether we should exclude a respondent from survey analysis if their answers show that they’ve lost patience with the process. This stems from my discovery that the American
public can get pretty sassy when they’re taking political science surveys. If you ask enough questions that people don’t appreciate, they’ll get sick of giving reasonable answers.

This comes across in the Cooperative Congressional Election Study (CCES) – a US public opinion survey, exceptional both for its breadth and for its depth. Not only are over 50,000 respondents contacted, but these respondents are given the opportunity to answer some questions in an open-ended way.

And at least some respondents take advantage of the extra space to make a joke at the expense of the survey itself. Like Senator Coburn, some respondents never miss a chance to strike a blow against the survey-industrial complex. It's these write-in answers, and their causes and consequences, that I'm going to examine. There are of course other more systematic ways to look at the general topic of accurate measurement in surveys. For example, factual questions assess respondents’ political knowledge, interviewers sometimes quantify respondents’ level of engagement, and respondents’ answers to some questions can be independently validated – like those on turnout.

But write-in answers may give us a perspective that these other measures do not. Others may see it differently, but I believe isolating facetious write-in answers could be important for three reasons. First, the number of flippant responses offers one way of assessing how arduous respondents are finding the survey overall. Second, we can look at whether particular questions seem to touch a nerve for multiple respondents, or whether some respondents simply give flippant answers to every question. Third, if people aren’t taking the survey seriously, should we take their responses seriously? Frustration, flippancy, and facetiousness are all completely reasonable reactions to survey research, and to politics – but if, for example, someone gives flippant write-in answers and then ticks the first box on every subsequent multiple-choice question, should we interpret those answers as genuine, or not? Should we include a respondent in our analysis if their answer to “How often do you pray?” was “now hoping this is over”? Write-in answers give us a chance to measure aggravated boredom, outright hostility, or simple ridicule directed at the survey instrument itself. I hope I’ve persuaded you that this is at least potentially worthwhile (although the irony of quantifying these qualitative examples of hostility to quantification is not lost on me). I use the 2010 CCES and examine this topic in the following way:

1) How many respondents ‘write in’ answers that could be perceived as flippant or facetious – as not taking the survey seriously?

2) Can we predict which types of respondents have a propensity for such answers?

3) In practice, can including or excluding such respondents affect our inferences?

The first point is what I’m going to tackle today. It looks like a relatively simple descriptive question: I went through all responses to the open-ended questions and highlighted those that were not necessarily anti-political but simply anti-survey. The distinction matters because it’s hard to tell whether a given answer reflects someone not caring about the survey or just not caring about politics. Not caring about politics is a very different question – decades of scholarship on low democratic participation deals with exactly that. Not caring about the survey is what I’m after here.

In order to get at this and only this, I tagged only answers that were aimed at the survey itself rather than at politics. It’s one thing to skip a write-in question because you don’t care about politics. Even registering your disgust by claiming, for instance, that your ideal Senate candidate is “any stray animal” doesn’t necessarily imply that you aren’t taking the survey seriously. Nor did I include responses like “None of your business” unless they conveyed not just refusal but also hostility – e.g. “NONE OF YOUR DAMN BUSINESS”. I mostly looked for answers that impart no judgment at all except a sort of aggravated boredom on the part of the respondent – but I welcome suggestions on how to code this better in the future.
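
To make that coding rule concrete, here is a minimal sketch of how such a hand-coded flag might work in Python. The phrase list below is just a handful of the examples discussed in this post, not my full coding scheme, and the function name is my own illustrative choice.

```python
# Minimal sketch of the hand-coding rule described above (illustrative only).
FLIPPANT_ANSWERS = {
    "yo mama",
    "pee wee herman",
    "now hoping this is over",
}

def is_flippant(answer):
    """True if a write-in answer was hand-coded as anti-survey (not anti-politics)."""
    if not isinstance(answer, str) or not answer.strip():
        return False  # skipped or blank write-ins are never tagged
    return answer.strip().lower() in FLIPPANT_ANSWERS

print(is_flippant("Pee Wee Herman"))         # True  -- mocks the survey itself
print(is_flippant("any stray animal"))       # False -- anti-politics, so not tagged
print(is_flippant("None of your business"))  # False -- refusal without hostility
```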

Hopefully there would be little argument over coding some of these:

“For whom did you vote for U.S. Senator?”

             “Yo Mama”

            “Kim Jong-Il”

            “Pee Wee Herman”

Others, however, are more difficult to code. But using questions on why respondents didn’t vote, on their preferred candidates for House, Senate, and Governor, and on their party ID, we obtain the following:

Number of Tagged Answers    Respondents
0                           55,370
1                           20
2                           8
3                           2
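
A table like this can be built by summing each respondent’s tagged answers across the open-ended items and tabulating the result. Here is a minimal pandas sketch, with made-up column names and values standing in for the real CCES data:

```python
import pandas as pd

# Toy stand-in: one row per respondent, one boolean column per open-ended
# item, True where the write-in was hand-coded as flippant (see the
# is_flippant() sketch above). All names and values are illustrative.
tags = pd.DataFrame({
    "why_no_vote":        [False, False, True,  False],
    "senate_candidate":   [False, True,  True,  False],
    "governor_candidate": [False, False, False, False],
})

# Count tagged answers per respondent, then tabulate the distribution.
# On the full 2010 CCES this produces a table like the one above.
n_tagged = tags.sum(axis=1)
print(n_tagged.value_counts().sort_index())
```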

My first reaction is that this seems to bode well for the CCES, in that very few responses are tagged. Replicating this for the 2012 ANES – which asked open-ended questions both face-to-face and online – might help us see whether web surveys provoke more or less frustration than face-to-face interviews. Another important point is that the largest number of tagged responses came from the questions about candidates. That subject, more than turnout or party ID, seems to strike a nerve. As a meager cliffhanger, I’ll hold off on exploring the statistical significance (if any) of this pattern. In the next blog post in this series, I’ll look at what demographic characteristics set these respondents apart, at more detail on which questions provoke the most tagged answers, and then at whether including or excluding them could ever affect our inferences.

I’ll be back soon with another post, but in the meantime, I leave you in the capable hands of Keith:

 http://www.spike.com/video-clips/k71q25/the-office-keiths-appraisal

About Ben Farrer

Ben is currently an assistant professor in the Department of Environmental Studies at Knox College. He received his PhD in Political Science from Binghamton University in 2014. Ben was previously a visiting assistant professor in the Department of Political Science at Hobart and William Smith Colleges, and previously held a research position in the Department of Political Science at Fordham University. His research and teaching interests are centered around parties and interest groups, particularly those from under-represented constituencies. A great deal of his work deals with the political organizations of the environmental movement. He studies both American and Comparative politics.
