Brady T. West, Frauke Kreuter and Ursula Jaenichen
Statistics, 15, 185-198.
Trappmann, M., Gundert, S., Wenzig, C., and Gebhardt, D. (2010). PASS: a Household Panel Survey for Research on Unemployment and Poverty. Proceedings of the American Statistical Association, Section on Survey Research Methods, 130, 609-622.
West, B.T. and Olson, K. (2010). How Much of Interviewer Variance is Really Nonresponse Error Variance? Public Opinion Quarterly, 74, 1004-1026.
Wiggins, R.D., Longford, N.T., and O’Muircheartaigh, C.A. (1992). A Variance Components Approach to Interviewer Effects
Providing Sample Members with Their Preferred Survey Mode Really Increase Participation Rates?” Public Opinion Quarterly 76(4): 611–635. DOI: http://dx.doi.org/10.1093/poq/nfs024.
Peytchev, A., R.K. Baxter, and L.R. Carley-Baxter. 2009. “Not All Survey Effort is Equal: Reduction of Nonresponse Bias and Nonresponse Error.” Public Opinion Quarterly 73(4): 785–806. DOI: http://dx.doi.org/10.1093/poq/nfp037.
Peytcheva, E. and R.M. Groves. 2009. “Using Variation in Response Rates of Demographic Subgroups as Evidence of Nonresponse Bias in Survey Estimates
Sunghee Lee, Tuba Suzer-Gurtekin, James Wagner and Richard Valliant
This study attempted to integrate key assumptions in Respondent-Driven Sampling (RDS) into the Total Survey Error (TSE) perspectives and examine TSE as a new framework for a systematic assessment of RDS errors. Using two publicly available data sets on HIV-at-risk persons, nonresponse error in the RDS recruitment process and measurement error in network size reports were examined. On nonresponse, the ascertained partial nonresponse rate was high, and a substantial proportion of recruitment chains died early. Moreover, nonresponse occurred systematically: recruiters with lower income and higher health risks generated more recruits; and peers of closer relationships were more likely to accept recruitment coupons. This suggests a lack of randomness in the recruitment process, also shown through sizable intra-chain correlation. Self-reported network sizes suggested measurement error, given their wide dispersion and unreasonable reports. This measurement error has further implications for the current RDS estimators, which use network sizes as an adjustment factor on the assumption of a positive relationship between network sizes and selection probabilities in recruitment. The adjustment resulted in nontrivial unequal weighting effects and changed estimates in directions that were difficult to explain and, at times, illogical. Moreover, recruiters’ network size played no role in actual recruitment. TSE may serve as a tool for evaluating errors in RDS, which further informs study design decisions and inference approaches.
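The network-size adjustment and the unequal weighting effects discussed above can be illustrated with a short sketch. This is not the study's estimator or data: it shows an RDS-II (Volz–Heckathorn) style estimate that weights each respondent by the inverse of their reported network size, together with Kish's unequal weighting effect, 1 + CV²(w), computed on made-up degree reports with the kind of wide dispersion the abstract describes.

```python
import numpy as np

def rds_ii_estimate(y, degrees):
    """RDS-II style estimate: weight each respondent by the inverse of
    their self-reported network size (degree)."""
    w = 1.0 / np.asarray(degrees, dtype=float)
    return np.sum(w * np.asarray(y, dtype=float)) / np.sum(w)

def unequal_weighting_effect(degrees):
    """Kish's UWE = 1 + CV^2 of the weights; values well above 1 signal
    variance inflation caused by the degree adjustment."""
    w = 1.0 / np.asarray(degrees, dtype=float)
    return 1.0 + np.var(w) / np.mean(w) ** 2

# Illustrative data: a binary outcome and widely dispersed degree reports
y = np.array([1, 0, 1, 1, 0, 0, 1, 0])
deg = np.array([5, 50, 2, 10, 200, 8, 3, 40])

print(y.mean())                         # unweighted sample proportion
print(rds_ii_estimate(y, deg))          # degree-adjusted estimate
print(unequal_weighting_effect(deg))    # variance inflation from the weights
```

With reported degrees this dispersed, the adjusted estimate moves far from the unweighted proportion and the UWE is well above 1, mirroring the "nontrivial unequal weighting effects" noted in the abstract.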
Barbara Felderer, Antje Kirchner and Frauke Kreuter
.J. Duncan. 1993. “Errors in Survey Reports of Earnings, Hours Worked, and Hourly Wages.” Journal of the American Statistical Association 88: 1208–1218. DOI: http://dx.doi.org/10.1080/01621459.1993.10476400.
Sakshaug, J.W. and F. Kreuter. 2012. “Assessing the Magnitude of Non-Consent Biases in Linked Survey and Administrative Data.” Survey Research Methods 6: 113–122. DOI: http://dx.doi.org/10.18148/srm/2012.v6i2.5094.
Sakshaug, J.W., T. Yan, and R. Tourangeau. 2010. “Nonresponse Error, Measurement Error, and Mode of Data Collection: Tradeoffs in a
Weighting procedures are commonly applied in surveys to compensate for nonsampling errors such as nonresponse errors and coverage errors. Two types of weight-adjustment procedures are commonly used in the context of unit nonresponse: (i) nonresponse propensity weighting followed by calibration, also known as the two-step approach and (ii) nonresponse calibration weighting, also known as the one-step approach. In this article, we discuss both approaches and warn against the potential pitfalls of the one-step procedure. Results from a simulation study, evaluating the properties of several point estimators, are presented.
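A minimal sketch of the two-step approach described above, using made-up data and cell-based adjustments (the variable names and design are illustrative, not the article's simulation): step (i) weights respondents by inverse estimated response propensities within cells of an auxiliary variable x1, and step (ii) calibrates the resulting weights to the known population margin of a second auxiliary x2 via simple poststratification.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x1 = rng.integers(0, 2, n)        # auxiliary variable driving nonresponse
x2 = rng.integers(0, 2, n)        # auxiliary with a known population margin
resp = rng.random(n) < np.where(x1 == 1, 0.8, 0.4)   # response depends on x1

# Step (i): nonresponse propensity weighting (cell propensities on x1)
phat = np.array([resp[x1 == g].mean() for g in (0, 1)])
w = 1.0 / phat[x1[resp]]

# Step (ii): calibration, here poststratification to the known x2 counts
x2r = x2[resp]
for g in (0, 1):
    w[x2r == g] *= (x2 == g).sum() / w[x2r == g].sum()

# Calibrated weights reproduce the population x2 margin exactly
print(w.sum(), n)
```

After step (ii) the weighted x2 group totals match the frame counts exactly, while step (i) has already corrected for the x1-driven response mechanism; collapsing both into a single calibration step (the one-step approach) is what the article warns can go wrong.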
Adult Americans are encouraged to engage in at least 150 minutes of moderate to vigorous physical activity (MVPA) each week. National surveys that collect physical activity data to assess whether adults adhere to this guideline use self-report questionnaires that are prone to measurement error and nonresponse. Studies have examined the individual effects of each of these error sources on estimators of physical activity, but little is known about the consequences of not adjusting for both error sources. We conducted a simulation study to determine how estimators of adherence to the guideline for adults to engage in 150 minutes of MVPA each week respond to different magnitudes of measurement and nonresponse errors in self-reported physical activity survey data. Estimators that adjust for both measurement and nonresponse errors yield the least bias regardless of the magnitude of either error source. In some scenarios, the naïve estimator, which does not adjust for either error source, results in less bias than estimators that adjust for only one error source. To avoid biased physical activity estimates using data collected from self-report questionnaires, researchers should adjust for both measurement error and nonresponse.
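A toy analogue of the setup described above can make the direction of the biases concrete. This is illustrative only, not the authors' simulation design: true weekly MVPA minutes are drawn from a gamma distribution, self-reports are inflated by multiplicative measurement error, and response propensity increases with activity, so both error sources push the naive adherence estimate upward. The "adjusted" estimator uses a crude deflation by the mean error multiplier together with inverse (here known) response propensities, standing in for a full double adjustment.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
true_mvpa = rng.gamma(shape=2.0, scale=80.0, size=n)         # true minutes/week
error = rng.lognormal(mean=0.3, sigma=0.4, size=n)           # over-reporting
reported = true_mvpa * error                                 # self-reports
p_resp = 1.0 / (1.0 + np.exp(-(true_mvpa - 150.0) / 100.0))  # active respond more
resp = rng.random(n) < p_resp

truth = (true_mvpa >= 150).mean()
naive = (reported[resp] >= 150).mean()                       # adjusts for neither

# Adjust for both sources: deflate by the mean error multiplier and weight
# respondents by inverse response propensities (known in this simulation)
deflated = reported[resp] / np.exp(0.3 + 0.4**2 / 2)
adjusted = np.average(deflated >= 150, weights=1.0 / p_resp[resp])

print(round(truth, 3), round(naive, 3), round(adjusted, 3))
```

The naive estimate lands well above the true adherence rate, while the doubly adjusted one comes much closer, matching the abstract's conclusion that both error sources need correcting.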
Survey nonresponse may increase the chances of nonresponse error, and different interviewers contribute differentially to nonresponse. This article first addresses the relationship between initial impressions of interviewers in survey introductions and the outcome of these introductions, and then contrasts this relationship with current viewpoints and practices in telephone interviewing. The first study described here exposed judges to excerpts of interviewer speech from actual survey introductions and asked them to rate twelve characteristics of the interviewer. Impressions of positive traits such as friendliness and confidence had no association with the actual outcome of the call, while higher ratings of “scriptedness” predicted lower participation likelihood. At the same time, a second study of individuals responsible for training telephone interviewers found that interviewer training does not emphasize sounding natural or unscripted during survey introductions. This article concludes with recommendations for practice and further research.
Folsom, R.E. 1991. “Exponential and Logistic Weight Adjustments for Sampling and Nonresponse Error Reduction.” In Proceedings of the American Statistical Association, Social Statistics Section, 197-202.
Folsom, R.E. and A.C. Singh. 2000. “The Generalized Exponential Model for Sampling Weight Calibration for Extreme Values, Nonresponse, and Poststratification.” In Proceedings of the American Statistical Association, Survey Research Methods Section, 598-603. Available at: https
Katherine A. McGonagle, Robert F. Schoeni and Mick P. Couper
. Proceedings of the Survey Research Methods Section of the American Statistical Association. Washington, DC: American Statistical Association, 2930-2935.
Ryu, E., Couper, M.P., and Marans, R.W. (2006). Survey Incentives: Cash vs. In-Kind; Face-to-Face vs. Mail; Response Rate vs. Nonresponse Error. International Journal of Public Opinion Research, 18, 89-106.
Scherpenzeel, A., Zimmermann, E., Budowski, M., Tillmann, R., Wernli, B., and Gabadinho, A. (2002). Experimental Pre-Test of the Biographical Questionnaire, Working Paper, No. 5-02. Neuchâtel