Sample Representation and Substantive Outcomes Using Web With and Without Incentives Compared to Telephone in an Election Survey

Oliver Lipps and Nicolas Pekari
FORS – Swiss Centre of Expertise in the Social Sciences, c/o University of Lausanne, Quartier Mouline, 1015 Lausanne, Switzerland.


The objective of this article is to understand how changing the survey mode from telephone to web affects data quality in terms of sample representation and substantive variable bias. To this end, an experiment consisting of a web survey with and without a prepaid incentive was conducted alongside the telephone-based Swiss election survey. All three designs used identical questionnaires and probability samples drawn from a national register of individuals.

First, our findings show that differences in completion rates mostly reflect different levels of coverage in the two modes. Second, incentives in the web survey strongly increase completion rates for all groups of people, with the exception of those without Internet access or with limited computer literacy. Third, we find voting behavior to be much closer to official figures in the incentivized web design than in the other two designs, although this is partly due to the different sociodemographic compositions of the samples. Other substantive results suggest that the incentivized design reaches harder-to-reach respondents. Unit costs are much lower in the two web designs than in the telephone design, even when a relatively high incentive is used. We conclude that in countries with high Internet penetration rates such as Switzerland, web surveys are already likely to be highly competitive.



