Search Results

You are looking at 1–6 of 6 items for

  • Author: Paul Biemer
Open access

Paul Biemer, Dennis Trewin, Heather Bergdahl and Lilli Japec

Open access

Boris Lorenc, Paul P. Biemer, Ingegerd Jansson, John L. Eltinge and Anders Holmberg

Open access

Joe Murphy, Paul Biemer and Chip Berry

Abstract

This article discusses the critical and complex design decisions associated with transitioning an interviewer-administered survey to a self-administered, postal, web/paper survey. Our approach embeds adaptive, responsive, and tailored (ART) design principles and data visualization during a multi-phased data collection operation to project the outcomes of each phase in preparation for subsequent phases. This requires rapid decision making based on experimental results, using a data visualization system to monitor critical-to-quality (CTQ) metrics and facilitate projections of outcomes from the current phase of data collection to inform the design of the subsequent phase. We describe the objectives of the overall design, the features designed to address these objectives, components of the visual adaptive total design (ATD) system for monitoring quality components and relative costs in real time, and examples of the visualization elements and functionalities that were used in one case study. We also discuss subsequent initiatives to develop an interactive version of the monitoring tool and applications for other studies, including those employing ART designs. Our case study is a series of pilot studies conducted for the Residential Energy Consumption Survey (RECS), sponsored by the U.S. Energy Information Administration (EIA).
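As a rough illustration of the kind of phase-to-phase projection the abstract describes, the sketch below tracks one hypothetical CTQ metric (cumulative response rate) during a collection phase and extrapolates the end-of-phase outcome with a simple linear fit. This is not EIA's or RECS's actual ATD system; the function name, the data, and the 0.50 target are all invented for illustration.

```python
# Illustrative sketch only -- not the actual RECS/EIA ATD monitoring tool.
# Track a critical-to-quality (CTQ) metric (cumulative response rate)
# during a data collection phase and project the end-of-phase value
# with an ordinary least-squares line, as a dashboard might.

def project_final_rate(days, rates, phase_length):
    """Fit rate ~ a + b*day by least squares and extrapolate to phase end."""
    n = len(days)
    mean_d = sum(days) / n
    mean_r = sum(rates) / n
    b = sum((d - mean_d) * (r - mean_r) for d, r in zip(days, rates)) / \
        sum((d - mean_d) ** 2 for d in days)
    a = mean_r - b * mean_d
    return a + b * phase_length

# Invented cumulative response rates over the first 10 days of a phase.
days = list(range(1, 11))
rates = [0.02, 0.05, 0.08, 0.10, 0.13, 0.15, 0.17, 0.19, 0.21, 0.22]

projected = project_final_rate(days, rates, phase_length=28)
print(f"Projected response rate at day 28: {projected:.2f}")
if projected < 0.50:  # hypothetical CTQ target
    print("Below CTQ target -- consider tailored follow-up in the next phase")
```

A production system would use model-based projections and monitor many CTQ metrics (and costs) simultaneously; the point here is only the mechanic of projecting the current phase forward to inform the design of the next one.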

Open access

John L. Eltinge, Paul P. Biemer and Anders Holmberg

Abstract

This article outlines a framework for formal description, justification and evaluation in development of architectures for large-scale statistical production systems. Following an introduction of the main components of the framework, we consider four related issues: (1) Use of some simple schematic models for survey quality, cost, risk, and stakeholder utility to outline several groups of questions that may inform decisions on system design and architecture. (2) Integration of system architecture with models for total survey quality (TSQ) and adaptive total design (ATD). (3) Possible use of concepts from the Generic Statistical Business Process Model (GSBPM) and the Generic Statistical Information Model (GSIM). (4) The role of governance processes in the practical implementation of these ideas.

Open access

Susan L. Edwards, Marcus E. Berzofsky and Paul P. Biemer

Abstract

Sensitive outcomes of surveys are plagued by wave nonresponse and measurement error (classification error for categorical outcomes). These types of error can lead to biased estimates and erroneous conclusions if they are not understood and addressed. The National Crime Victimization Survey (NCVS) is a nationally representative rotating panel survey with seven waves measuring property and violent crime victimization. Because not all crime is reported to the police, there is no gold standard measure of whether a respondent was victimized. For panel data, Markov Latent Class Analysis (MLCA) is a model-based approach that uses response patterns across interview waves to estimate false positive and false negative classification probabilities; it is typically applied to complete data.

This article uses Full Information Maximum Likelihood (FIML) to include respondents with partial information in MLCA. The impact of including partial respondents is assessed by comparing complete-case and FIML MLCA models with respect to bias reduction in the estimates, differences in model specification, and variability in the classification error estimates. The goal is to determine the potential of FIML to improve MLCA estimates of classification error. While we apply this process to the NCVS, the approach developed is general and can be applied to any panel survey.
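To make concrete how response patterns across waves carry information about classification error, the toy sketch below enumerates the expected probabilities of observed yes/no patterns over three waves under a simple latent class model with local independence. This is not the NCVS MLCA model (which is Markov over waves and fit to survey data); the prevalence and error rates here are invented, and the point is only that MLCA-type estimation inverts this pattern-probability mapping.

```python
# Toy latent class illustration -- not the NCVS MLCA model.
# True victimization status X is misreported with false positive rate fp
# and false negative rate fn, with errors independent across waves
# given X (local independence). We enumerate the implied probabilities
# of all observed three-wave response patterns.
from itertools import product

prev, fp, fn = 0.20, 0.02, 0.30   # invented values for illustration

def pattern_prob(pattern):
    """P(observed pattern) = sum over latent X of P(X) * prod_t P(y_t | X)."""
    total = 0.0
    for x, p_x in ((1, prev), (0, 1 - prev)):
        lik = p_x
        for y in pattern:
            if x == 1:
                lik *= (1 - fn) if y == 1 else fn   # true victims
            else:
                lik *= fp if y == 1 else (1 - fp)   # true non-victims
        total += lik
    return total

probs = {pat: pattern_prob(pat) for pat in product((0, 1), repeat=3)}
for pat, p in sorted(probs.items()):
    print(pat, round(p, 4))
print("sum:", round(sum(probs.values()), 4))  # pattern probabilities sum to 1
```

Fitting an MLCA runs this logic in reverse: given the observed frequencies of the eight patterns, it estimates prev, fp, and fn by maximum likelihood; FIML extends the likelihood to respondents whose pattern is only partially observed.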

Open access

Paul Biemer, Dennis Trewin, Heather Bergdahl and Lilli Japec

Abstract

This article describes a general framework for improving the quality of statistical programs in organizations that provide a continual flow of statistical products to users and stakeholders. The work stems from a 2011 mandate to Statistics Sweden issued by the Swedish Ministry of Finance to develop a system of quality indicators for tracking developments and changes in product quality and for achieving continual improvements in survey quality across a diverse set of key statistical products. We describe this system, apply it to a number of products at Statistics Sweden, and summarize key results and lessons learned. The implications of this work for monitoring and evaluating product quality in other statistical organizations are also discussed.