Search Results

Showing 1 – 4 of 4 items for Author: Frauke Kreuter
Open access

Frauke Kreuter

Abstract

This special issue on “Systems and Architectures for High-Quality Statistics Production” is a stimulating resource for statistical agencies and private sector data collectors in a challenging time characterized by massive amounts of data from a variety of sources, available at varying intervals and of varying quality.

Traditionally, statistical products were created from a single source, most often through surveys or administrative data. However, neither surveys nor administrative data alone can match the data needs of today’s society. In addition, the need to reduce the costs of data production necessitates that multiple sources be used in combination and that production cycles be streamlined, while the increasing difficulties in data collection itself require such systems to be much more flexible than they have been in the past. These pressures are increasingly driving statistical agencies and private data collectors to redesign their entire data production cycle. The examples in this special issue from Statistics Netherlands and Statistics New Zealand demonstrate such developments in government agencies; the example from RTI reflects efforts visible among private sector data collectors. This commentary highlights issues related to organizational challenges, as well as issues that create the basis for reproducible research and are therefore of interest to the broader research community.

Open access

Richard Valliant, Jill A. Dever and Frauke Kreuter

Abstract

Determining sample sizes in multistage samples requires variance components for each stage of selection. The relative sizes of the variance components in a cluster sample are dramatically affected by how much the clusters vary in size, by the type of sample design, and by the form of estimator used. Measures of the homogeneity of survey variables within clusters are related to the variance components and affect the numbers of sample units that should be selected at each stage to achieve the desired precision levels. Measures of homogeneity can be estimated using standard software for random-effects models, but the model-based intracluster correlations may need to be transformed to be appropriate for use with the sample design. We illustrate these points and implications for sample size calculation for two-stage sample designs using a realistic population derived from household surveys and the decennial census in the U.S.
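The measures of homogeneity mentioned here are intracluster correlations. The paper itself works with random-effects software and a realistic census-derived population (neither reproduced here); purely as an illustration of the underlying arithmetic, the sketch below implements the standard one-way ANOVA estimator of ρ and the textbook two-stage sample-size calculation — design effect deff = 1 + (m − 1)ρ and cost-optimal cluster size m = √((c1/c2)(1 − ρ)/ρ). All function names and cost figures are hypothetical, and the equal-cluster-size case is assumed for simplicity.

```python
import math

def anova_icc(clusters):
    """ANOVA estimator of the intracluster correlation for equal-sized
    clusters: rho = (MSB - MSW) / (MSB + (m - 1) * MSW)."""
    k = len(clusters)                 # number of clusters
    m = len(clusters[0])              # common cluster size (assumed equal)
    grand_mean = sum(sum(c) for c in clusters) / (k * m)
    # Between-cluster mean square
    msb = m * sum((sum(c) / m - grand_mean) ** 2 for c in clusters) / (k - 1)
    # Within-cluster mean square
    msw = sum((y - sum(c) / m) ** 2 for c in clusters for y in c) / (k * (m - 1))
    return (msb - msw) / (msb + (m - 1) * msw)

def two_stage_sample(rho, c1, c2, target_n_eff):
    """Cost-optimal units per cluster and number of clusters needed,
    given per-cluster cost c1, per-unit cost c2, and a target
    effective sample size."""
    m_opt = math.sqrt((c1 / c2) * (1 - rho) / rho)  # classic optimum
    deff = 1 + (m_opt - 1) * rho                    # design effect for a mean
    n_total = target_n_eff * deff                   # total units required
    return m_opt, math.ceil(n_total / m_opt)        # (units/cluster, clusters)

# Hypothetical inputs: rho = 0.05, cluster cost 300, unit cost 20,
# effective sample size of 1,000 desired.
m_opt, n_clusters = two_stage_sample(0.05, 300, 20, 1000)
```

Even a modest ρ inflates the required sample noticeably: with ρ = 0.05 and roughly 17 units per cluster, the design effect is near 1.8, so almost 1,800 interviews are needed to match the precision of 1,000 independent draws.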

Open access

Brady T. West, Frauke Kreuter and Ursula Jaenichen

Abstract

Recent research has attempted to examine the proportion of interviewer variance that is due to interviewers systematically varying in their success in obtaining cooperation from respondents with varying characteristics (i.e., nonresponse error variance), rather than variance among interviewers in systematic measurement difficulties (i.e., measurement error variance); that is, whether correlated responses within interviewers arise from variance among interviewers in the pools of respondents recruited, or from variance in interviewer-specific mean response biases. Unfortunately, work to date has considered only data from a CATI survey, and thus suffers from two limitations: interviewer effects are commonly much smaller in CATI surveys, and, more importantly, sample units are often contacted by several CATI interviewers before a final outcome (response or final refusal) is achieved. The latter introduces difficulties in assigning nonrespondents to interviewers, so interviewer variance components are estimable only under strong assumptions. This study aims to replicate this initial work, analyzing data from a national CAPI survey in Germany in which CAPI interviewers were responsible for working a fixed subset of cases.
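The two components distinguished above can be made concrete in simulation, where — unlike in a real survey — the true values of all assigned units are known. In the sketch below, every name and parameter is invented for illustration only: each interviewer works a fixed case assignment (mirroring the CAPI design described), the spread across interviewers of the mean *true* value among recruited respondents reflects nonresponse error variance, and the spread of interviewer-specific mean response biases reflects measurement error variance.

```python
import random
from statistics import mean, pvariance

random.seed(7)

def simulate_interviewer(nr_shift, me_shift, n_assigned=200):
    """One interviewer with a fixed assignment of cases.
    nr_shift tilts *who* responds; me_shift biases *what* is recorded."""
    cases = []
    for _ in range(n_assigned):
        true_y = random.gauss(50, 10)
        # Response propensity depends on true_y and on the interviewer's
        # recruiting tendency -> a source of nonresponse error variance.
        p = 0.5 + nr_shift * (true_y - 50) / 100
        if random.random() < p:
            observed = true_y + me_shift + random.gauss(0, 2)
            cases.append((true_y, observed))
    return cases

interviewers = [simulate_interviewer(random.uniform(-0.4, 0.4),
                                     random.gauss(0, 1.5))
                for _ in range(50)]

# Variation in the mean TRUE value among each interviewer's respondents:
# different recruited pools -> nonresponse error variance.
nr_means = [mean(t for t, _ in c) for c in interviewers]

# Variation in mean (observed - true) per interviewer: interviewer-
# specific response bias -> measurement error variance.
me_means = [mean(o - t for t, o in c) for c in interviewers]

nonresponse_component = pvariance(nr_means)
measurement_component = pvariance(me_means)
```

In real data the true values are unobserved, which is exactly why the fixed CAPI assignments matter: they make the attribution of nonrespondents to interviewers, and hence the separation of these components, feasible without the strong assumptions CATI designs require.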

Open access

Morgan Earp, Melissa Mitchell, Jaki McCarthy and Frauke Kreuter

Abstract

Increasing nonresponse rates in federal surveys and potentially biased survey estimates are a growing concern, especially with regard to establishment surveys. Unlike household surveys, not all establishments contribute equally to survey estimates. With regard to agricultural surveys, if an extremely large farm fails to complete a survey, the United States Department of Agriculture (USDA) could underestimate average acres operated, among other key statistics. In order to identify likely nonrespondents prior to data collection, the USDA’s National Agricultural Statistics Service (NASS) began modeling nonresponse using Census of Agriculture data and prior Agricultural Resource Management Survey (ARMS) response history. Using an ensemble of classification trees, NASS has estimated nonresponse propensities for ARMS that can be used to predict nonresponse and that are correlated with key ARMS estimates.
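NASS's actual models and data are not shown in this listing. Purely as a schematic of the technique named — an ensemble of classification trees producing nonresponse propensities — the sketch below bags one-split "stump" trees over a fabricated establishment dataset. Every feature, threshold, and rate here is a hypothetical stand-in (e.g., farm acreage and prior-round response as predictors), not a description of the NASS models.

```python
import random

random.seed(1)

# Fabricated training data: (acres, responded_last_round) -> nonresponse (bool).
# Assumption for illustration: very large farms and prior nonrespondents
# are likelier to skip the survey.
def make_unit():
    acres = random.expovariate(1 / 500)
    prior = random.random() < 0.7
    p_nr = 0.2 + 0.3 * (acres > 1000) + (0.0 if prior else 0.3)
    return (acres, prior), random.random() < p_nr

train = [make_unit() for _ in range(2000)]

def fit_stump(sample):
    """One-split 'tree': choose the acreage threshold (or the prior-
    response split) that minimizes misclassification of nonresponse."""
    best = None
    candidates = [("acres", t) for t in (250, 500, 1000, 2000)] + [("prior", None)]
    for feat, thr in candidates:
        left, right = [], []
        for (acres, prior), y in sample:
            side = (acres > thr) if feat == "acres" else (not prior)
            (left if side else right).append(y)
        if not left or not right:
            continue
        # Majority-class error on each side of the split.
        err = (min(sum(left), len(left) - sum(left))
               + min(sum(right), len(right) - sum(right)))
        if best is None or err < best[0]:
            best = (err, feat, thr, sum(left) / len(left), sum(right) / len(right))
    return best[1:]  # (feature, threshold, left leaf rate, right leaf rate)

def fit_ensemble(data, n_trees=25):
    """Bagging: fit each stump on a bootstrap resample of the data."""
    return [fit_stump(random.choices(data, k=len(data))) for _ in range(n_trees)]

def propensity(ensemble, acres, prior):
    """Average the per-tree leaf rates -> estimated nonresponse propensity."""
    rates = []
    for feat, thr, p_left, p_right in ensemble:
        side = (acres > thr) if feat == "acres" else (not prior)
        rates.append(p_left if side else p_right)
    return sum(rates) / len(rates)

forest = fit_ensemble(train)
p_large_prior_nr = propensity(forest, 5000, False)  # large farm, no prior response
p_small_responder = propensity(forest, 100, True)   # small farm, prior respondent
```

Averaging leaf rates across bootstrapped trees is what yields a smooth propensity score rather than a hard 0/1 prediction, which is what makes such scores usable for targeting likely nonrespondents before data collection begins.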