Geert Loosveldt and Koen Beullens
In this article we examine interviewer effects on different aspects of response styles, namely non-differentiation and straightlining, which in general refer to the tendency to provide the same answer to each question in a block of questions. Research on response styles has rarely examined the interviewer's impact on this kind of response behavior. Five blocks of items from the questionnaire of the sixth round of the European Social Survey (2012) are used in the analysis. These data also allow for an evaluation of differences between countries in terms of non-differentiation and straightlining. Five different measures of these aspects of response style are used in the analysis. To disentangle the impact of respondents and interviewers on these aspects of response style, a three-level random intercept model is specified. The results clearly show interviewer effects on the respondent's tendency to select the same response category as for the previous item. In some countries the proportion of variance explained by differences between interviewers is larger than the proportion explained by differences between respondents.
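As an illustration of the kind of measures involved (these are common indicators in the response-style literature, not necessarily the five measures used in the article), straightlining and non-differentiation for one block of items might be computed as follows; the function names and coding are assumptions for the sketch:

```python
import statistics

def straightlining(responses):
    """Proportion of items whose answer repeats the previous item's answer."""
    repeats = sum(1 for prev, cur in zip(responses, responses[1:]) if prev == cur)
    return repeats / (len(responses) - 1)

def nondifferentiation(responses):
    """Within-block standard deviation of answers (0 = fully non-differentiated)."""
    return statistics.pstdev(responses)

# One respondent's answers to a five-item block on a 1-5 scale
block = [3, 3, 3, 4, 3]
print(straightlining(block))       # → 0.5
print(nondifferentiation(block))   # → 0.4
```

In a multilevel analysis such block-level scores would then sit at the respondent level, nested within interviewers and countries.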
Jorre T.A. Vannieuwenhuyze, Geert Loosveldt and Geert Molenberghs
The confounding of selection effects and measurement effects between modes is a disadvantage of mixed-mode surveys. Several studies have suggested solutions to this problem; most use adjusting covariates to control for selection effects. Unfortunately, these covariates must meet strong assumptions, which are generally ignored. This article discusses these assumptions in greater detail and provides an alternative model for solving the problem. The alternative uses adjusting covariates that explain measurement effects instead of selection effects. The application of both models is illustrated with data from a survey on opinions about surveys, which yields mode effects in line with expectations under the latter model and contrary to expectations under the former. However, the validity of these results depends entirely on the (ad hoc) choice of covariates. Research into better covariates might thus be a topic for future studies.
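A toy simulation can make the selection-effect adjustment concrete (this is a generic covariate-adjustment sketch, not the authors' model; all variable names and the data-generating process are assumptions). When mode choice depends on a covariate that also drives the outcome, a naive between-mode comparison confounds the selection effect with the mode effect, while regressing on the covariate recovers the mode effect under the assumption that the covariate fully captures selection:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)                              # adjusting covariate (e.g. age)
web = (x + rng.normal(size=n) > 0).astype(float)    # mode selection depends on x
y = 2.0 + 0.5 * x + 0.3 * web + rng.normal(scale=0.5, size=n)  # true mode effect: 0.3

# Naive mode-effect estimate: difference in means, confounded by selection on x
naive = y[web == 1].mean() - y[web == 0].mean()

# Covariate-adjusted estimate: regress y on an intercept, the mode indicator, and x
X = np.column_stack([np.ones(n), web, x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
adjusted = beta[1]

print(f"naive: {naive:.2f}, adjusted: {adjusted:.2f}")
```

The adjusted estimate lands near the true 0.3, while the naive difference is inflated by the selection on x. The strong assumption the article scrutinizes is exactly that the chosen covariates capture all of the selection (or, in the alternative model, all of the measurement difference).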
Caroline Vandenplas, Geert Loosveldt and Koen Beullens
Adaptive and responsive survey designs rely on monitoring indicators based on paradata. This process can better inform fieldwork management if the indicators are paired with a benchmark, which relies on empirical information collected in the first phase of the fieldwork or, for repeated or longitudinal surveys, in previous rounds or waves. We propose the "fieldwork power" (fieldwork production per time unit) as an indicator for monitoring, and we simulate it for the European Social Survey (ESS) Round 7 in Belgium and the Czech Republic. We operationalize the fieldwork power as the weekly numbers of completed interviews and of contacts, and as the ratios of completed interviews to contact attempts and to refusals. To create benchmarks, we use a repeated-measurement multilevel model, with surveys from the previous rounds of the European Social Survey as the macro level and the weekly fieldwork power as repeated measurements. We also monitor effort and data quality metrics. The results show how problems in the fieldwork evolution can be detected by monitoring the fieldwork power and comparing it with the benchmarks. The analysis also proves helpful for post-survey fieldwork evaluation, linking effort, productivity, and data quality.
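As a minimal sketch of the operationalization (the event coding and function name are assumptions, not the ESS contact-form format), the weekly fieldwork-power indicators can be tabulated from paradata recording one outcome per contact attempt:

```python
from collections import Counter

def weekly_fieldwork_power(events):
    """events: (week, outcome) tuples, outcome in {'interview', 'contact_attempt', 'refusal'}.
    Returns, per week, the interview count and the interview-to-attempt
    and interview-to-refusal ratios (None when the denominator is zero)."""
    counts = Counter(events)
    power = {}
    for week in sorted({w for w, _ in events}):
        interviews = counts[(week, 'interview')]
        attempts = counts[(week, 'contact_attempt')]
        refusals = counts[(week, 'refusal')]
        power[week] = {
            'interviews': interviews,
            'per_attempt': interviews / attempts if attempts else None,
            'per_refusal': interviews / refusals if refusals else None,
        }
    return power

paradata = [
    (1, 'contact_attempt'), (1, 'interview'), (1, 'contact_attempt'), (1, 'refusal'),
    (2, 'contact_attempt'), (2, 'interview'), (2, 'interview'), (2, 'contact_attempt'),
]
print(weekly_fieldwork_power(paradata))
```

In the article's setup, series like these would then be compared against benchmarks derived from the multilevel model fitted to previous ESS rounds.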