Search Results

You are looking at 1 - 2 of 2 items for Author: Antje Kirchner
Open access

Antje Kirchner

Abstract

This article explores the randomized response technique (RRT), specifically a symmetric forced-choice implementation, as a means of improving the quality of survey data on the receipt of basic income support. Because the sample for this study was drawn from administrative records, the proportion of respondents who actually received transfer payments for basic income support, and thus should have reported receipt, is known.
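
A minimal sketch of the estimator behind such a design may help; the Python snippet below uses illustrative forced-choice probabilities, not the article's actual design parameters.

```python
# Sketch of the prevalence estimator for a symmetric forced-choice RRT.
# A randomizing device tells each respondent to answer truthfully with
# probability p_truth, to say "yes" regardless with probability
# p_forced_yes, and to say "no" regardless otherwise.

def rrt_prevalence(yes_rate, p_truth=2/3, p_forced_yes=1/6):
    # Expected observed "yes" rate: lambda = p_forced_yes + p_truth * pi,
    # so the moment estimator of the true prevalence pi is:
    return (yes_rate - p_forced_yes) / p_truth

# Example: an observed "yes" rate of 30% in the RRT condition
print(rrt_prevalence(0.30))  # ~0.20 estimated true prevalence
```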

The article addresses two research questions. First, it assesses whether the proportion of socially undesirable responses (indications of receipt of transfer payments) can be increased by applying the RRT; estimates obtained in the RRT condition are compared both with those from direct questioning and with the known true prevalence. Such administrative record data are rare in the literature on sensitive questions and provide a unique opportunity to evaluate the ‘more-is-better’ assumption. Second, it uses multivariate analyses to examine the mechanisms contributing to response accuracy in one of the subsamples.

The main results can be summarized as follows: reporting accuracy of welfare benefit receipt cannot be increased using this particular variant of the RRT. Further, there is only weak evidence that the RRT elicits more accurate information than direct questioning in specific subpopulations.

Open access

Barbara Felderer, Antje Kirchner and Frauke Kreuter

Abstract

More and more surveys are conducted online. While web surveys are generally cheaper and tend to have lower measurement error than other survey modes, especially for sensitive questions, these potential advantages might be offset by larger nonresponse bias. This article compares data quality in a web survey with that in another common mode of administration, the telephone.

The unique feature of this study is the availability of administrative records for all sampled individuals, combined with random assignment of survey mode. This design allows us to investigate and compare potential bias in survey statistics due to 1) nonresponse error, 2) measurement error, and 3) the combination of these two error sources, and hence to assess overall data quality for two common modes of survey administration, telephone and web.
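
The decomposition itself is straightforward once record values are available for everyone in the sample. The sketch below uses hypothetical toy data (variable names and values are illustrative, not the study's) to make the three bias components concrete.

```python
import numpy as np

# Administrative record ("true") values for all sampled persons,
# a response indicator, and survey reports for respondents only.
# All numbers are hypothetical toy data.
true_all      = np.array([1, 0, 1, 1, 0, 0, 1, 0])         # full sample, from records
responded     = np.array([1, 1, 0, 1, 0, 1, 0, 1], dtype=bool)
true_resp     = true_all[responded]                        # respondents' record values
reported_resp = np.array([1, 0, 1, 0, 1])                  # respondents' survey answers

nonresponse_bias = true_resp.mean() - true_all.mean()       # selective participation
measurement_bias = reported_resp.mean() - true_resp.mean()  # misreporting
total_bias       = reported_resp.mean() - true_all.mean()   # sum of the two components

print(nonresponse_bias, measurement_bias, total_bias)
```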

Our results show that, overall, mean estimates are more biased in the web mode than in the telephone mode. Nonresponse and measurement bias tend to reinforce each other in both modes, with nonresponse bias being somewhat more pronounced in the web mode. Interestingly, although measurement error bias tends to be smaller in the web implementation, our results also show that the web does not consistently outperform the telephone mode for sensitive questions.