Search Results

1 - 3 of 3 items

  • Author: Taylor Lewis
Univariate Tests for Phase Capacity: Tools for Identifying When to Modify a Survey’s Data Collection Protocol

Abstract

To mitigate the potentially harmful effects of nonresponse, most surveys repeatedly follow up with nonrespondents, often targeting a response rate or predetermined number of completes. Each additional recruitment attempt generally brings in a new wave of data, but returns gradually diminish over the course of a fixed data collection protocol, as each subsequent wave tends to consist of fewer responses than the last. Consequently, point estimates begin to stabilize. This is the notion of phase capacity, suggesting some form of design change is in order, such as switching modes, increasing the incentive, or, as is considered exclusively in this research, discontinuing the nonrespondent follow-up campaign altogether. A previously proposed test for phase capacity calls for multiply imputing nonrespondents’ missing data to assess, retrospectively, whether the most recent wave of data significantly altered a key, nonresponse-adjusted point estimate. This study introduces a more flexible adaptation amenable to surveys that instead reweight the observed data to compensate for nonresponse. Results from a simulation study and an application indicate that, all else equal, the weighting version of the test is more sensitive to point estimate changes, thereby dictating that more follow-up attempts are warranted.
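The wave-by-wave stabilization the abstract describes can be illustrated with a toy sketch. This is an assumed setup with an arbitrary tolerance rule, not the paper's actual significance test: after each follow-up wave, a nonresponse-adjusted estimate is recomputed, and phase capacity is flagged once the newest wave stops moving it.

```python
# Toy sketch of the phase-capacity idea (assumed setup and tolerance,
# not the paper's actual test): after each follow-up wave, recompute a
# nonresponse-adjusted estimate and flag phase capacity once the
# newest wave stops moving it.
import random
from statistics import fmean

random.seed(20110815)

# Hypothetical waves: the first is large and distinct; later waves are
# smaller and similar, so returns diminish as the abstract describes.
wave_sizes = [400, 200, 100, 50]
wave_means = [52.0, 48.0, 48.0, 48.0]
waves = [[random.gauss(mu, 2.0) for _ in range(n)]
         for n, mu in zip(wave_sizes, wave_means)]

TOL = 0.5  # arbitrary stability tolerance for this illustration

estimates = []
capacity_wave = None
observed = []
for w, wave in enumerate(waves, start=1):
    observed.extend(wave)
    # With a single weighting class, the reweighted (nonresponse-
    # adjusted) mean collapses to the plain respondent mean here.
    estimates.append(fmean(observed))
    if capacity_wave is None and w > 1 and abs(estimates[-1] - estimates[-2]) < TOL:
        capacity_wave = w  # candidate point to discontinue follow-up

print("estimates by wave:", [round(e, 2) for e in estimates])
print("phase capacity reached at wave:", capacity_wave)
```

A real application would replace the tolerance rule with a formal test of whether the latest wave significantly changed the weighted estimate, which is where the imputation and reweighting versions compared in the paper differ.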

Open access
Does the Length of Fielding Period Matter? Examining Response Scores of Early Versus Late Responders

Abstract

This article discusses the potential effects of a shortened fielding period on an employee survey’s item and index scores and respondent demographics. Using data from the U.S. Office of Personnel Management’s 2011 Federal Employee Viewpoint Survey, we investigate whether early responding employees differ from later responding employees. Specifically, we examine differences in item and index scores related to employee engagement and global satisfaction. Our findings show that early responders tend to be less positive, even after adjusting their weights for nonresponse. Agencies vary in their prevalence of late responders, and score differences become magnified as this proportion increases. We also examine the extent to which early versus late responders differ on demographic characteristics such as grade level, supervisory status, gender, tenure with agency, and intention to leave, noting that nonminority status and female gender are the two demographic characteristics most associated with responding early.
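The core comparison above can be sketched in a few lines. The scores and weights below are hypothetical, not FEVS data: respondents are split by response timing, and nonresponse-weighted mean item scores are compared between the early and late groups.

```python
# Minimal sketch (hypothetical scores and weights, not FEVS data) of
# the early-versus-late comparison: split respondents by response
# timing, then compare nonresponse-weighted mean item scores.
early = [(4.0, 1.2), (3.0, 0.9), (5.0, 1.1)]  # (item score, adjusted weight)
late = [(5.0, 1.4), (4.0, 1.3)]

def weighted_mean(pairs):
    # Standard weighted mean: sum(w * y) / sum(w).
    return sum(w * y for y, w in pairs) / sum(w for _, w in pairs)

gap = weighted_mean(late) - weighted_mean(early)
print("late-minus-early score gap:", round(gap, 3))
```

A positive gap here corresponds to the abstract's finding that early responders tend to be less positive than late responders, even after the weights are adjusted for nonresponse.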

Open access
The Relative Impacts of Design Effects and Multiple Imputation on Variance Estimates: A Case Study with the 2008 National Ambulatory Medical Care Survey

Abstract

The National Ambulatory Medical Care Survey collects data on office-based physician care from a nationally representative, multistage sampling scheme where the ultimate unit of analysis is a patient-doctor encounter. Patient race, a commonly analyzed demographic, has been subject to a steadily increasing item nonresponse rate. In 1999, race was missing for 17 percent of cases; by 2008, that figure had risen to 33 percent. Over this entire period, single imputation has been the compensation method employed. Recent research at the National Center for Health Statistics evaluated multiply imputing race to better represent the missing-data uncertainty. Given item nonresponse rates of 30 percent or greater, we were surprised to find that, for many estimates, the ratio of the multiple-imputation to the single-imputation estimated standard error was close to 1. A likely explanation is that the design effects attributable to the complex sample design largely outweigh any increase in variance attributable to missing-data uncertainty.
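The explanation in the abstract follows directly from Rubin's combining rules, under which the total multiple-imputation variance is T = W + (1 + 1/m)B, where W is the average within-imputation variance and B the between-imputation variance. The numbers below are assumed for illustration, not NAMCS estimates: when a large design effect inflates W, even a sizable B barely moves the MI-to-single-imputation standard error ratio.

```python
# Illustrative arithmetic (assumed numbers, not NAMCS estimates):
# under Rubin's combining rules, T = W + (1 + 1/m) * B. A large
# design effect inflates W, so B contributes little to sqrt(T)/sqrt(W).
import math

def mi_to_si_se_ratio(W, B, m):
    """Ratio of the MI standard error sqrt(T) to the single-imputation
    standard error sqrt(W), where T = W + (1 + 1/m) * B."""
    T = W + (1 + 1 / m) * B
    return math.sqrt(T / W)

m = 5          # number of imputations (assumed)
B = 1.0        # hypothetical between-imputation variance
srs_var = 4.0  # hypothetical sampling variance under simple random sampling

# A design effect of 6 inflates the within-imputation variance ...
print(round(mi_to_si_se_ratio(6.0 * srs_var, B, m), 3))  # close to 1
# ... whereas the same B against an uninflated W moves the SE far more.
print(round(mi_to_si_se_ratio(srs_var, B, m), 3))
```

In the first call the ratio stays near 1 despite B being a quarter of the SRS variance, which mirrors the abstract's conclusion that design effects can dominate the extra variance from missing-data uncertainty.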

Open access