Producers of large government-sponsored surveys regularly use Computer-Assisted Interviewing (CAI) software to design data collection instruments, monitor fieldwork operations, and evaluate data quality. When this software is used in conjunction with responsive survey designs, last-minute modifications that address problems in the field can be made quickly. Complementing this strategy, but little discussed, is the need to implement similar changes in the post-data-collection stage of the survey data life cycle. We describe a continuous data processing system in which completed interviews are carefully examined as soon as they are collected; editing, recode, and imputation programs are applied using CAI tools; and the results are reviewed to correct problematic cases. The goal is to provide higher-quality data and to shorten the time between the conclusion of data collection and the appearance of public use data files.
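A continuous processing step of the kind described above can be sketched in a few lines. This is only an illustration, not the system from the article: the field names, edit rules, and the median-based fallback imputation are all hypothetical.

```python
# Minimal sketch of a continuous editing step (fields, rules, and the
# imputation strategy are hypothetical): each completed interview is checked
# as soon as it arrives, failures are flagged for review, and a simple
# recode and fallback imputation are applied.

def check_record(rec):
    """Return a list of edit-rule failures for one completed interview."""
    failures = []
    if not (15 <= rec.get("age", -1) <= 110):
        failures.append("age out of range")
    if rec.get("employed") == "yes" and rec.get("hours_worked") in (None, 0):
        failures.append("employed but no hours reported")
    return failures

def process_interview(rec, hours_median=40):
    """Apply edits, a recode, and a fallback imputation to one record."""
    rec = dict(rec)  # leave the incoming record untouched
    rec["edit_flags"] = check_record(rec)
    # recode: collapse detailed employment status into a binary indicator
    rec["employed_bin"] = 1 if rec.get("employed") == "yes" else 0
    # imputation: fill missing hours with a median (stand-in for a donor value)
    if rec["employed_bin"] == 1 and not rec.get("hours_worked"):
        rec["hours_worked"] = hours_median
        rec["hours_imputed"] = True
    return rec

case = {"age": 34, "employed": "yes", "hours_worked": None}
print(process_interview(case))
```

Running the check immediately after each interview, rather than once after fieldwork closes, is what lets problematic cases be corrected while recontact is still feasible.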
Brady T. West, Joseph W. Sakshaug and Guy Alain S. Aurelien
Methods for Complex Sample Data: Logistic Regression and Discrete Proportional Hazards Models.” Communications in Statistics – Theory and Methods 14: 1377–1392. Doi: https://doi.org/10.1080/03610928508828982.
Chantala, K., D. Blanchette, and C.M. Suchindran. 2011. “Software to Compute Sampling Weights for Multilevel Analysis.” Technical Report, Carolina Population Center, UNC at Chapel Hill. Available at: http://www.cpc.unc.edu/research/tools/data_analysis/ml_sampling_weights (accessed January 30, 2018).
Claeskens, G. 2013. “Lack of Fit, Graphics, and
Adolfsson, C. and P. Gidlund. 2008. “Conducted Case Studies at Statistics Sweden.” Paper presented at the Work Session on Statistical Data Editing, Vienna, Austria, 21-23 April 2008. Available at: http://www.unece.org/fileadmin/DAM/stats/documents/2008/04/sde/wp.32.e.pdf (accessed February 2016).
Brinkley, E., K. Farwell, and F. Yu. 2011. “Selective Editing Methods and Tools: An Australian Bureau of Statistics Perspective.” In Proceedings of Statistics Canada Symposium 2011. Available at: http
MoonJung Cho, John L. Eltinge, Julie Gershunskaya and Larry Huff
Large-scale establishment surveys often exhibit substantial temporal or cross-sectional variability in their published standard errors. This article uses a framework defined by survey generalized variance functions to develop three sets of analytic tools for the evaluation of these patterns of variability. These tools are for (1) identification of predictor variables that explain some of the observed temporal and cross-sectional variability in published standard errors; (2) evaluation of the proportion of variability attributable to the abovementioned predictors, equation error and estimation error, respectively; and (3) comparison of equation error variances across groups defined by observable predictor variables. The primary ideas are motivated and illustrated by an application to the U.S. Current Employment Statistics program.
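The framework above rests on generalized variance functions (GVFs), which relate a published estimate to its sampling variability through a simple parametric model. The sketch below is only a generic illustration of the idea, not the article's method: the data are simulated, and the common GVF form relvariance(X) = a + b/X (fitted here on the log scale by ordinary least squares) is an assumption.

```python
import numpy as np

# Illustrative GVF fit on simulated data: generate published estimates X
# with standard errors that follow relvariance(X) = a + b/X plus noise,
# then fit log(relvariance) = b0 + b1*log(X) and report how much of the
# variability in the published standard errors the predictor explains.

rng = np.random.default_rng(7)

X = rng.uniform(1_000, 100_000, size=200)      # published estimates
relvar_true = 5e-4 + 2.0 / X                   # a + b/X, a common GVF form
noise = np.exp(rng.normal(0.0, 0.2, size=200)) # multiplicative "equation error"
se = X * np.sqrt(relvar_true * noise)          # simulated published standard errors

# Fit the GVF by ordinary least squares on the log scale
relvar = (se / X) ** 2
y = np.log(relvar)
A = np.column_stack([np.ones_like(X), np.log(X)])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

resid = y - A @ beta
r2 = 1.0 - resid.var() / y.var()               # share explained by log(X)
print(f"slope={beta[1]:.3f}  R^2={r2:.3f}")
```

In this framing, the fitted component corresponds to variability explained by observable predictors, while the residual scatter plays the role of the equation-error and estimation-error terms that the article's tools seek to separate.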
. “Regression Analysis for Sample Survey.” Sankhyā 37, Series C, Pt. 3: 117–132.
Graham, J., A. Olchowski, and T. Gilreath. 2007. “How Many Imputations Are Really Needed? Some Practical Clarifications of Multiple Imputation Theory.” Prevention Science 8: 206–213. Doi: http://dx.doi.org/10.1007/s11121-007-0070-9.
Groves, R. and S. Heeringa. 2006. “Responsive Design for Household Surveys: Tools for Actively Controlling Survey Errors and Costs.” Journal of the Royal Statistical Society: Series A (Statistics in Society) 169: 439–457. Doi: http://dx.doi.org/10
Opik, R. 2017a. “The prototype.” Available at: https://rainopik.github.io/eubdhackmegatrend/ (accessed October 2018).
Opik, R. 2017b. “The source code of the prototype.” Available at: https://github.com/rainopik/eubdhack-megatrend/ (accessed October 2018).
Peixoto, T.P. 2014. “The graph-tool python library.” figshare. Doi: http://dx.doi.org/10.6084/m9.figshare.1164194.
Smith, G. 2010. PostgreSQL 9.0: High Performance. Packt Publishing Ltd.
Stasko, J. 2014. “Value-driven evaluation of visualizations.” In Proceedings of the Fifth Workshop on Beyond
Peter Struijs, Astrea Camstra, Robbert Renssen and Barteld Braaksma
Generic Software Package for Developing Macro-Editing Tools. Paper presented at the Work Session on Statistical Data Editing, Ljubljana. Available at: http://www.unece.org/fileadmin/DAM/stats/documents/ece/ces/ge.44/2011/wp.14.e.pdf (accessed January 29, 2013).
Struijs, P. 2005. Improving the Quality of Statistics through the Application of Process Methodology. Paper prepared for the Advisory Council on Methodology and Information Technology.
Studman, B. 2010. A Collaborative Development Approach to Agile Statistical Processing
Deirdre Giesen, Mario Vella, Charles F. Brady, Paul Brown, Daniela Ravindra and Anita Vaasen-Otten
: If We Bother Them More, Are They Less Cooperative?” Journal of Official Statistics 22: 97–112. Available at: http://www.jos.nu/Articles/abstract.asp?article=221097 (accessed June 2017).
Rainer, N. 2008. “Measuring response burden under EU-context: Some principles for a management tool at the EU-level.” Paper presented at the 94th Directors-General of the National Statistical Institutes (DGINS) Conference. Vilnius, September 25–26, 2008. Available at: http://ec.europa.eu/eurostat/documents/1001617/4411693/O-2-AUSTRIA-MEASURING.pdf (accessed June 2017).
. “Comparing Traditional and Crowdsourcing Methods for Pretesting Survey Questions.” SAGE Open 6(4): 1–14. Doi: https://doi.org/10.1177/2158244016671770.
Groves, R.M. 1989. Survey Errors and Survey Costs . New York: Wiley.
Groves, R. and S. Heeringa. 2006. “Responsive Design for Household Surveys: Tools for Actively Controlling Survey Errors and Costs.” Journal of the Royal Statistical Society, Series A 169(3): 439–457. Doi: http://dx.doi.org/10.1111/j.1467-985X.2006.00423.x .
Hardin, M., D. Horn, R. Perez, and L. Williams. 2012. “Which Chart or