It is understandable why interviewers and survey organizations target cases with likely higher response propensities in order to maximize response rates. If the goal is to reduce the potential for nonresponse bias, however, this may no longer be the right approach. Nor does it necessarily yield the lowest variance in adjusted estimates, because it can produce high variability in the nonresponse weight components.
In a new article, we propose an approach that runs counter to current practice and aims both to reduce the potential for nonresponse bias and to reduce the variance due to nonresponse adjustments (Peytchev, Riley, Rosen, Murphy, and Lindblad, 2010). Using paradata, frame information, and survey data from a previous wave (and thus highly associated with key estimates in the current data collection), we estimated response propensities for the current data collection. We then randomly assigned half of the cases predicted to have low response propensities to an experimental treatment: interviewers received higher compensation for completing any of these cases. Under this protocol, rather than maximizing response rates by focusing on cases that are already more likely to participate (which can strengthen the association between response propensities and survey measures that drives nonresponse bias), we place greater focus on those who are most likely to be excluded from the pool of respondents.
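The two-step design described above can be sketched in code. This is an illustrative toy, not the article's actual implementation: the single-predictor logistic model, the toy prior-wave data, and the 0.5 propensity cutoff are all assumptions made for the example.

```python
import math
import random

random.seed(42)

def fit_logistic(xs, ys, lr=0.1, steps=2000):
    """Fit a one-predictor logistic regression by gradient descent."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += p - y
            g1 += (p - y) * x
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

# Toy prior-wave data (hypothetical): x is a frame/paradata predictor
# such as the number of prior contact attempts, y is whether the case
# responded in the previous wave.
xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [1, 1, 1, 1, 0, 1, 0, 0]

b0, b1 = fit_logistic(xs, ys)

def propensity(x):
    """Predicted response propensity for the current data collection."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))

# Step 1: flag the cases predicted to have low response propensities.
low = [i for i in range(len(xs)) if propensity(xs[i]) < 0.5]

# Step 2: randomly assign half of the low-propensity cases to the
# incentive treatment; the rest serve as the experimental control.
random.shuffle(low)
treatment = set(low[: len(low) // 2])
control = set(low[len(low) // 2:])
```

The random split within the low-propensity group is what makes the incentive's effect estimable as an experiment rather than an observational comparison.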
There are two theoretical benefits to this approach. First, if the intervention is successful, nonresponse bias can be reduced by weakening the association between response propensities and survey measures. Second, by reducing the variability in response propensities, the variance of survey estimates can also be reduced.
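The second benefit can be made concrete with Kish's approximate design effect from unequal weighting, deff = 1 + CV² of the weights, where nonresponse weights are the inverse of the response propensities. The propensity values below are made up for illustration; the point is only the comparison between a spread-out and a more even propensity distribution.

```python
import statistics

def deff(weights):
    """Kish's approximate design effect due to unequal weights: 1 + CV^2."""
    mean = statistics.fmean(weights)
    cv = statistics.pstdev(weights) / mean
    return 1.0 + cv ** 2

# Hypothetical propensities: one widely varying set, one more uniform set.
spread_props = [0.1, 0.3, 0.5, 0.7, 0.9]
even_props = [0.45, 0.48, 0.50, 0.52, 0.55]

# Nonresponse adjustment weights are the inverse propensities.
spread_weights = [1.0 / p for p in spread_props]
even_weights = [1.0 / p for p in even_props]

# More variable propensities produce more variable weights and hence a
# larger variance penalty on the adjusted estimates.
penalty_spread = deff(spread_weights)
penalty_even = deff(even_weights)
```

Under these toy numbers the spread-out propensities carry a markedly larger design effect, which is exactly why compressing the propensity distribution can lower the variance of adjusted estimates.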
In our two-step approach, we found that the prediction of the likelihood of participation was fairly successful. The interviewer incentive, however, was not effective in increasing participation. We provide several post-hoc explanations, a likely one being the overall response rate in this wave of over 90%, which left little if any room for improvement.
Since writing this article, we have further developed the criteria for prioritizing cases in order to reduce nonresponse bias more effectively, and we hope to report on those experiments in the future. I hope that more surveys move away from the single objective of maximizing response rates for a given budget, facilitating progress on improving survey estimates through data collection.