Commonly, surveys have a target number of interviews together with a target response rate. The number of interviews can be determined by the desired precision of survey estimates, based on variance assumptions. Most surveys have multiple objectives, producing numerous estimates and supporting several types of analysis. But what if a survey has a focal objective, such that data collection can be geared toward that objective? Clinical trials are one such example, but there are likely many surveys that, despite producing many estimates, are really conducted to produce one or a few key estimates, such as an employment rate or consumer sentiment.
If that is the case, then setting the target number of interviews prior to conducting the survey is not optimal; excessive resources may be expended after the desired precision has been achieved or after the threat of nonresponse bias has become minimal. In a new article, Wagner and Raghunathan argue that data collection can instead be stopped based on statistical criteria. They focus on the threat of remaining nonresponse bias, but also note that the approach can be extended to sampling error.
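To make the general idea concrete, here is a toy simulation of such a statistical stopping criterion. This is not Wagner and Raghunathan's exact rule; it is a simplified sketch, under assumed numbers and models throughout, of one version of the logic: after each wave of fieldwork, compare the respondent-based estimate to an estimate completed by imputing nonrespondents from frame information, and stop when the two agree closely, suggesting little remaining nonresponse bias for the key estimate.

```python
import math
import random
import statistics

random.seed(1)

# Hypothetical setup: a frame covariate x is known for everyone in the
# population; the survey outcome y correlates with x, and response
# propensity also depends on x, so respondents alone give a biased mean.
N = 5000
x = [random.gauss(0, 1) for _ in range(N)]       # known for all sampled cases
y = [0.5 * xi + random.gauss(0, 1) for xi in x]  # observed only for respondents

def responds(xi):
    # Response propensity rises with x (illustrative nonresponse mechanism).
    return random.random() < 1 / (1 + math.exp(-(0.5 + xi)))

def completed_mean(resp, nonresp):
    """Fit y ~ x on respondents, impute y for nonrespondents from the
    frame covariate, and return the completed-sample mean."""
    xs = [x[i] for i in resp]
    ys = [y[i] for i in resp]
    mx, my = statistics.mean(xs), statistics.mean(ys)
    slope = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
             / sum((a - mx) ** 2 for a in xs))
    intercept = my - slope * mx
    imputed = [intercept + slope * x[i] for i in nonresp]
    return (sum(ys) + sum(imputed)) / (len(resp) + len(nonresp))

# Field the sample in waves; each wave re-attempts the remaining
# nonrespondents. Stop when the respondent mean and the
# imputation-completed mean are within a preset tolerance.
sample = random.sample(range(N), 2000)
resp, nonresp = [], list(sample)
tolerance, stop_wave = 0.02, None
for wave in range(1, 21):
    remaining = []
    for i in nonresp:
        (resp if responds(x[i]) else remaining).append(i)
    nonresp = remaining
    r_mean = statistics.mean(y[i] for i in resp)
    full = completed_mean(resp, nonresp) if nonresp else r_mean
    if abs(full - r_mean) < tolerance:
        stop_wave = wave
        break

print(f"stopped after wave {stop_wave}; "
      f"respondent mean {r_mean:.3f}, completed mean {full:.3f}")
```

Early in fieldwork the easy-to-reach (high-x) cases dominate the respondent pool, so the two estimates disagree; as later waves bring in harder cases, the gap shrinks and the rule fires, typically well before all 20 waves are exhausted.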
Their work is invaluable in pushing us to think less deterministically about the survey process and in providing one solution. Much work remains to be done. For example, the face validity of response rates, whether unfortunate or not, is here to stay. Therefore, if costs have not substantially increased per additional interview, one may prefer to continue data collection rather than stop, even if the threat of nonresponse bias is not substantial. Factoring costs into the stopping rule is challenging but likely a necessary step.

Surveys also have multiple objectives, more often than not. Wagner and Raghunathan's stopping rule can undoubtedly be applied to multiple estimates, with data collection not stopped until all criteria are met, but one cannot foresee all the uses of survey data, which is at least in part the reason for a target response rate. It would be useful to have a more omnibus stopping rule, although a simple solution is not in sight.

As a survey practitioner, I also see challenges in implementation; earlier work on stopping rules has been in mail surveys, and the demonstration in the new article is simulated on an interviewer-administered survey. In a telephone or face-to-face survey, however, it is not easy to leave the end date of data collection undetermined, releasing interviewers at any point in time. Despite this wish list, Wagner and Raghunathan's stopping rule is a major step in a needed direction, informing data collection by the potential for [nonresponse] error rather than a predetermined response rate or field data collection period.