Evaluating Mode Effects in Mixed-Mode Survey Data Using Covariate Adjustment Models

Abstract

The confounding of selection and measurement effects between different modes is a disadvantage of mixed-mode surveys. Several studies have suggested solutions to this problem, most of which use adjusting covariates to control for selection effects. Unfortunately, these covariates must meet strong assumptions, which are generally ignored. This article discusses these assumptions in greater detail and provides an alternative model for solving the problem. This alternative uses adjusting covariates that explain measurement effects instead of selection effects. The application of both models is illustrated with data from a survey on opinions about surveys; the latter model yields mode effects in line with expectations, whereas the former model yields mode effects contrary to expectations. However, the validity of these results depends entirely on the (ad hoc) covariates chosen. Research into better covariates might thus be a topic for future studies.
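
The following Python sketch is only a rough illustration of the covariate-adjustment idea behind the first class of models (adjusting for selection effects); it is not the authors' estimator, and the variable names, parameter values, and simulated data are all hypothetical. It contrasts a naive between-mode comparison, which mixes selection and measurement effects, with a regression estimate that adjusts for the covariates assumed to drive selection into the web mode.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(seed=0)
n = 2000

# Hypothetical covariates that drive both mode choice and the survey outcome.
age = rng.normal(45, 12, n)
attitude = rng.normal(0, 1, n)  # attitude towards surveys

# Selection effect: younger, more survey-positive respondents choose the web mode.
p_web = 1 / (1 + np.exp(-(-0.03 * (age - 45) + 0.8 * attitude)))
web = rng.binomial(1, p_web)

# Outcome: depends on the covariates plus a true measurement effect of 0.4
# for the web mode (the quantity we would like to recover).
y = 2.0 + 0.05 * age + 0.8 * attitude + 0.4 * web + rng.normal(0, 1, n)

df = pd.DataFrame({"y": y, "web": web, "age": age, "attitude": attitude})

# Naive comparison: confounds selection and measurement effects.
naive = df.loc[df.web == 1, "y"].mean() - df.loc[df.web == 0, "y"].mean()

# Covariate adjustment: regressing on the covariates that explain selection
# isolates the measurement effect, provided the covariates are adequate.
adjusted = smf.ols("y ~ web + age + attitude", data=df).fit().params["web"]

print(f"naive mode difference:     {naive:.3f}")
print(f"covariate-adjusted effect: {adjusted:.3f}")
```

The adjusted estimate recovers the simulated measurement effect only because the covariates used in the regression fully capture selection into the web mode; as the abstract stresses, with real (ad hoc) covariates this strong assumption may fail.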

