Thursday, November 29, 2012

Consequences of Survey Nonresponse

I was asked to write a paper on the consequences of unit nonresponse in surveys (http://ann.sagepub.com/content/645/1/88). Quite a daunting task at first glance... but I found it an interesting thought exercise to work through how nonresponse rates, particularly increasing ones, are affecting how we do surveys and the inferences we make from the collected data. The interesting part is not the obvious consequence (sure, an article may be rejected from JAMA for using data from a survey with a response rate below 60%, because a low rate is equated with bias), but all the other ways nonresponse affects surveys and survey inference. Surveys become more costly because of an increasing need to spend resources on reducing nonresponse. They also become more complex, incorporating responsive designs, multiphase designs more generally, and multiple modes. And while higher nonresponse rates may not lead to higher nonresponse bias, they do mean greater reliance on auxiliary data and statistical models.
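To make that last point a bit more concrete, here is a minimal sketch of a weighting-class nonresponse adjustment, the kind of statistical fix that leans on auxiliary data known for respondents and nonrespondents alike. Everything in it is made up for illustration (the variable names, the "age group" auxiliary variable, and all the numbers are mine, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical simulated sample: an auxiliary variable (say, age group) is known
# for every sampled case from the frame, while the outcome y is observed only
# for respondents.
n = 5000
age_group = rng.integers(0, 3, size=n)                     # 0 = young, 1 = middle, 2 = older
y = 0.2 + 0.15 * age_group + rng.normal(0.0, 0.5, size=n)  # survey outcome

# Response propensity differs by age group, so the unadjusted respondent mean is biased.
responded = rng.random(n) < np.array([0.35, 0.55, 0.75])[age_group]

# Weighting-class adjustment: inflate each respondent's weight by the inverse of
# the observed response rate within their auxiliary-variable class.
weights = np.ones(n)
for g in np.unique(age_group):
    in_class = age_group == g
    class_response_rate = responded[in_class].mean()
    weights[in_class & responded] = 1.0 / class_response_rate

unadjusted = y[responded].mean()
adjusted = np.average(y[responded], weights=weights[responded])

print(f"Full-sample mean (normally unknown): {y.mean():.3f}")
print(f"Unadjusted respondent mean:          {unadjusted:.3f}")
print(f"Class-adjusted respondent mean:      {adjusted:.3f}")
```

The adjustment only removes bias to the extent that the auxiliary variable is related to both response propensity and the survey outcome, which is exactly why heavier reliance on such models is a consequence of nonresponse rather than a free lunch.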

I did a simple analysis for this paper that I found particularly intriguing. While the meta-analysis by Groves and Peytcheva found a lack of association between nonresponse rates and nonresponse bias across surveys, the estimated use of psychotherapeutic drugs is highly associated with the response rate in a national survey. It could be a spurious relationship, but additional investigations of this nature would be useful, particularly from ongoing surveys with fixed designs over time.
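For anyone who wants to try this kind of check on another survey, here is a toy sketch of the idea: correlate a published estimate with the response rate across waves of an ongoing survey. The yearly figures below are entirely hypothetical placeholders, not the numbers from my analysis:

```python
import numpy as np

# Hypothetical wave-level figures (NOT the actual survey data): a declining
# response rate alongside a published prevalence estimate for each wave.
response_rate = np.array([0.82, 0.79, 0.77, 0.74, 0.71, 0.68, 0.66, 0.63])
estimate = np.array([0.090, 0.094, 0.097, 0.101, 0.104, 0.109, 0.112, 0.118])

# Pearson correlation between the response rate and the estimate across waves.
r = np.corrcoef(response_rate, estimate)[0, 1]
print(f"Correlation across waves: {r:.2f}")
```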

The entire issue of the ANNALS of the American Academy of Political and Social Science is on nonresponse and includes a great set of articles on the topic: http://ann.sagepub.com/content/645/1.toc.