A Study of Interviewer Compliance in 2013 and 2014 Census Test Adaptive Designs

  • 1 Suitland, 20746
  • 2 University of Michigan, Institute for Social Research, Ann Arbor, MI 48105


Researchers are interested in the effectiveness of adaptive and responsive survey designs that monitor incoming data and respond with tailored or targeted interventions. These designs often require adherence to protocols, which can be difficult when surveys allow in-person interviewers flexibility in managing their cases. This article describes examples of interviewer compliance and noncompliance in adaptive design experiments conducted in two United States decennial census tests. The two studies tested adaptive procedures that included having interviewers work prioritized cases and substituting telephone calls for face-to-face attempts. When to perform such procedures was communicated to interviewers via case management systems that required twice-daily data transmissions. We discuss reasons why noncompliance may occur and ways to improve compliance.
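The kind of per-case instruction such a case management system might transmit after each data upload can be sketched as follows. This is a minimal illustrative sketch only: the thresholds, field names, and decision rules are assumptions for exposition, not the procedures actually used in the 2013 and 2014 census tests.

```python
from dataclasses import dataclass

@dataclass
class Case:
    case_id: str
    failed_visits: int           # unsuccessful in-person attempts so far
    has_phone_number: bool       # a telephone number is on file
    predicted_propensity: float  # modeled response propensity, 0..1


def assign_action(case: Case) -> dict:
    """Return the instruction an interviewer would see for this case
    after the next data transmission (hypothetical rules)."""
    # Switch to telephone after repeated failed visits, when possible.
    if case.failed_visits >= 3 and case.has_phone_number:
        mode = "PHONE"
    else:
        mode = "IN_PERSON"
    # Flag cases the model still considers likely to respond.
    priority = "HIGH" if case.predicted_propensity >= 0.5 else "NORMAL"
    return {"case_id": case.case_id, "mode": mode, "priority": priority}


print(assign_action(Case("A-001", failed_visits=4,
                         has_phone_number=True,
                         predicted_propensity=0.7)))
```

Under rules like these, compliance means the interviewer actually works the cases in the transmitted order and in the transmitted mode; stale instructions from a missed transmission, or an interviewer overriding the mode switch, are the kinds of deviations the article examines.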



