Researchers are interested in the effectiveness of adaptive and responsive survey designs that monitor and respond to data using tailored or targeted interventions. These designs often require adherence to protocols, which can be difficult when surveys allow in-person interviewers flexibility in managing cases. This article describes examples of interviewer noncompliance and compliance in adaptive design experiments that occurred in two United States decennial census tests. The two studies tested adaptive procedures, including having interviewers work prioritized cases and substituting telephone calls for face-to-face attempts. When to perform these procedures was communicated to interviewers via case management systems that required twice-daily data transmissions. We discuss reasons why noncompliance may occur and ways to improve compliance.
In this article, we investigate the relationship between interviewer travel behavior and field outcomes, such as contact rates, response rates, and contact attempts, in two studies: the National Survey of Family Growth and the Health and Retirement Study. Using call record paradata aggregated to the interviewer-day level, we examine two important cost drivers as measures of interviewer travel behavior: the distance that interviewers travel to segments and the number of segments visited on an interviewer-day. We explore several predictors of these travel measures, including the geographic size of the sampled areas, measures of urbanicity, and other sample and interviewer characteristics. We also explore the relationship between travel and field outcomes, such as the number of contact attempts made and response rates. We find that the number of segments visited on each interviewer-day has a strong association with field outcomes, but the number of miles traveled does not. These findings suggest that survey organizations should routinely monitor the number of segments that interviewers visit, and that more direct measurement of interviewer travel behavior is needed.
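The aggregation described above can be sketched in a few lines. The sketch below is illustrative only: the record layout, coordinates, and the use of straight-line (haversine) distance between consecutive attempts as a travel proxy are all assumptions, not the studies' actual paradata structure or distance measure.

```python
from collections import defaultdict
from math import radians, sin, cos, asin, sqrt

# Hypothetical call records: (interviewer_id, date, segment_id, lat, lon).
call_records = [
    ("A", "2020-06-01", "seg1", 42.28, -83.74),
    ("A", "2020-06-01", "seg2", 42.33, -83.05),
    ("A", "2020-06-02", "seg1", 42.28, -83.74),
    ("B", "2020-06-01", "seg3", 41.88, -87.63),
]

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3956 * 2 * asin(sqrt(a))

def interviewer_day_travel(records):
    """Aggregate call records to the interviewer-day level: count distinct
    segments visited and sum straight-line miles between consecutive
    attempts (a crude proxy for actual travel)."""
    by_day = defaultdict(list)
    for iwer, day, seg, lat, lon in records:
        by_day[(iwer, day)].append((seg, lat, lon))
    summary = {}
    for key, attempts in by_day.items():
        segments = {seg for seg, _, _ in attempts}
        miles = sum(
            haversine_miles(a[1], a[2], b[1], b[2])
            for a, b in zip(attempts, attempts[1:])
        )
        summary[key] = {"n_segments": len(segments), "miles": round(miles, 1)}
    return summary

summary = interviewer_day_travel(call_records)
```

Both outcome measures discussed in the abstract (segments per interviewer-day, miles per interviewer-day) then fall out of one pass over the call records, which is why routine monitoring at this level is cheap once the paradata carry locations.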
Nonresponse rates have been growing over time, leading to concerns about survey data quality. Adaptive designs seek to allocate scarce resources by targeting specific subsets of sampled units for additional effort or a different recruitment protocol. To be effective in reducing nonresponse, the identified subsets of the sample need two key features: (1) their probabilities of response can be changed by altering design features, and (2) once they have responded, their responses affect estimates even after adjustment. The National Agricultural Statistics Service (NASS) is investigating the use of adaptive design techniques in the Crops Acreage, Production, and Stocks Survey (Crops APS). The Crops APS is a survey of establishments, which vary in size and, hence, in their potential impact on estimates. To identify subgroups for targeted designs, we conducted a simulation study that used Census of Agriculture (COA) data as proxies for similar survey items. Different patterns of nonresponse were simulated to identify subgroups that may reduce estimated nonresponse bias when their response propensities are changed.
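The logic of the simulation (change a subgroup's response propensities, see whether nonresponse bias in an estimate shrinks) can be illustrated with a toy example. Everything below is hypothetical: the frame, the 150-acre cutoff, and the propensity values are invented for illustration and are not NASS's actual simulation design or COA data.

```python
import random

random.seed(2024)

# Hypothetical frame: establishment size (acres) drives both the survey
# item of interest and the baseline response propensity.
frame = [{"acres": random.lognormvariate(5, 1)} for _ in range(10_000)]
for unit in frame:
    # Smaller operations respond less often (illustrative propensities).
    unit["p_respond"] = 0.4 if unit["acres"] < 150 else 0.7

def simulate_bias(frame, boost=0.0):
    """Relative nonresponse bias of the unadjusted respondent mean,
    optionally boosting small-operation propensities by `boost`
    (the targeted-design intervention)."""
    respondents = [
        u["acres"]
        for u in frame
        if random.random() < min(1.0, u["p_respond"]
                                 + (boost if u["acres"] < 150 else 0.0))
    ]
    true_mean = sum(u["acres"] for u in frame) / len(frame)
    resp_mean = sum(respondents) / len(respondents)
    return (resp_mean - true_mean) / true_mean

base_bias = simulate_bias(frame)             # small farms underrepresented
targeted_bias = simulate_bias(frame, boost=0.3)  # propensities equalized
```

In this toy setup the baseline respondent mean overstates average acreage because large operations respond more often; boosting the small-operation propensities toward parity shrinks the bias, which is exactly the subgroup property the simulation study screens for.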
This study attempted to integrate key assumptions of Respondent-Driven Sampling (RDS) into the Total Survey Error (TSE) perspective and to examine TSE as a new framework for a systematic assessment of RDS errors. Using two publicly available data sets on HIV-at-risk persons, we examined nonresponse error in the RDS recruitment process and measurement error in network size reports. On nonresponse, the ascertained partial nonresponse rate was high, and a substantial proportion of recruitment chains died early. Moreover, nonresponse occurred systematically: recruiters with lower income and higher health risks generated more recruits, and peers with closer relationships were more likely to accept recruitment coupons. This suggests a lack of randomness in the recruitment process, also shown through sizable intra-chain correlation. Self-reported network sizes suggested measurement error, given their wide dispersion and implausible values. This measurement error has further implications for current RDS estimators, which use network sizes as an adjustment factor on the assumption of a positive relationship between network size and selection probability in recruitment. The adjustment resulted in nontrivial unequal weighting effects and changed estimates in directions that were difficult to explain and, at times, illogical. Moreover, recruiters' network size played no role in actual recruitment. TSE may serve as a tool for evaluating errors in RDS, which further informs study design decisions and inference approaches.
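To make the network-size adjustment and the resulting unequal weighting effect concrete, here is a minimal sketch of a Volz-Heckathorn-style estimator, which weights each case by the inverse of its reported network size, together with Kish's 1 + CV² approximation to the design effect from unequal weights. The five-person sample is invented for illustration; the abstract does not report these values.

```python
# Hypothetical RDS sample: reported network size (degree) and an
# HIV-risk indicator for each respondent.
sample = [
    {"degree": 2, "positive": 1},
    {"degree": 5, "positive": 0},
    {"degree": 10, "positive": 0},
    {"degree": 50, "positive": 1},
    {"degree": 200, "positive": 0},  # an implausibly large report
]

def vh_estimate(sample):
    """Volz-Heckathorn-style prevalence estimate: weight each case by
    1/degree, the proxy for its selection probability."""
    weights = [1 / r["degree"] for r in sample]
    return sum(w * r["positive"] for w, r in zip(weights, sample)) / sum(weights)

def unequal_weighting_effect(sample):
    """Kish's design-effect approximation: 1 + CV^2 of the weights."""
    weights = [1 / r["degree"] for r in sample]
    n = len(weights)
    mean_w = sum(weights) / n
    var_w = sum((w - mean_w) ** 2 for w in weights) / n
    return 1 + var_w / mean_w ** 2

vh = vh_estimate(sample)               # weighted prevalence
uwe = unequal_weighting_effect(sample)  # variance inflation from weights
```

With these invented degrees the unweighted prevalence is 0.40, but the inverse-degree weights pull the estimate toward the low-degree positive case, and the wide spread of reported degrees produces a design effect above 2; this is the mechanism by which mismeasured network sizes can move estimates in hard-to-explain directions while inflating variance.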