Measuring Trust in Medical Researchers: Adding Insights from Cognitive Interviews to Examine Agree-Disagree and Construct-Specific Survey Questions

Jennifer Dykema 1 , Dana Garbarski 2 , Ian F. Wall 3  and Dorothy Farrar Edwards 4
  • 1 University of Wisconsin Survey Center (UWSC), Madison, WI 53706
  • 2 Loyola University Chicago, Chicago, IL 60660
  • 3 Grand Rapids, MI 49508
  • 4 University of Wisconsin-Madison, 2176 Medical Science Center, Madison, WI 53706


While scales measuring subjective constructs have historically relied on agree-disagree (AD) questions, recent research demonstrates that construct-specific (CS) questions clarify underlying response dimensions that AD questions leave implicit, and that CS questions often yield higher measures of data quality. Yet despite the acknowledged problems with AD questions and certain established advantages of CS items, the evidence for the superiority of CS questions is more mixed than one might expect. We build on previous investigations by using cognitive interviewing to deepen understanding of AD and CS response processing and potential sources of measurement error. We randomized 64 participants to receive an AD or CS version of a scale measuring trust in medical researchers. We examine several indicators of data quality and cognitive response processing, including reliability, concurrent validity, recency, response latencies, and indicators of response processing difficulties (e.g., uncodable answers). Overall, results indicate that reliability is higher for the AD scale, that neither scale is more valid, and that the CS scale is more susceptible to recency effects for certain questions. Results for response latencies and behavioral indicators provide evidence that the CS questions promote deeper processing. Qualitative analysis reveals five sources of difficulties with response processing that shed light on under-examined reasons why AD and CS questions can produce different results, with CS not always yielding higher measures of data quality than AD.


  • Anderson, L.A. and R.F. Dedrick. 1990. “Development of the Trust in Physician Scale: A Measure to Assess Interpersonal Trust in Patient-physician Relationships.” Psychological Reports 67: 1091–1100.

  • Audacity Developer Team. 2008. Audacity (Version 1.2.6) [Computer Software].

  • Bassili, J.N. and B.S. Scott. 1996. “Response Latency as a Signal to Question Problems in Survey Research.” Public Opinion Quarterly 60: 390–399.

  • Braunstein, J.B., N.S. Sherber, S.P. Schulman, E.L. Ding, and N.R. Powe. 2008. “Race, Medical Researcher Distrust, Perceived Harm, and Willingness to Participate in Cardiovascular Prevention Trials.” Medicine 87: 1–9.

  • Carpenter, P.A. and M.A. Just. 1975. “Sentence Comprehension: A Psycholinguistic Processing Model of Verification.” Psychological Review 82: 45–73.

  • Corbie-Smith, G., S.B. Thomas, and D.M.M. St. George. 2002. “Distrust, Race, and Research.” Archives of Internal Medicine 162: 2458–2463.

  • Davidov, E., B. Meuleman, J. Cieciuch, P. Schmidt, and J. Billiet. 2014. “Measurement Equivalence in Cross-National Research.” Annual Review of Sociology 40: 55–75.

  • De Leeuw, E. and N. Berzelak. 2016. “Survey Mode or Survey Modes?” In The SAGE Handbook of Survey Methodology, edited by C. Wolf, J. Dominique, T.W. Smith, and F. Yang-chih, 142–156. Los Angeles: SAGE Publications Ltd.

  • Dijkstra, W. and Y. Ongena. 2006. “Question-Answer Sequences in Survey-Interviews.” Quality & Quantity 40: 983–1011.

  • Dillman, D.A., J.D. Smyth, and L.M. Christian. 2014. Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method (4th edition). Hoboken, NJ: John Wiley.

  • Draisma, S. and W. Dijkstra. 2004. “Response Latency and (Para)linguistic Expression as Indicators of Response Error.” In Methods for Testing and Evaluating Survey Questionnaires, edited by S. Presser, J.M. Rothgeb, M.P. Couper, J.T. Lessler, E. Martin, J. Martin, and E. Singer, 131–148. New York: Springer-Verlag.

  • Dykema, J., J.M. Lepkowski, and S. Blixt. 1997. “The Effect of Interviewer and Respondent Behavior on Data Quality: Analysis of Interaction Coding in a Validation Study.” In Survey Measurement and Process Quality, edited by L. Lyberg, P. Biemer, M. Collins, E. de Leeuw, C. Dippo, N. Schwarz, and D. Trewin, 287–310. New York: Wiley-Interscience.

  • Dykema, J., N.C. Schaeffer, and D. Garbarski. 2012. “Effects of Agree-Disagree Versus Construct-Specific Items on Reliability, Validity, and Interviewer-Respondent Interaction.” Presented at the American Association for Public Opinion Research, May 17–20, 2012, Orlando, Florida, U.S.A.

  • Dykema, J., N.C. Schaeffer, and D. Garbarski. 2019. “Towards a Reconsideration of the Use of Agree-Disagree Questions in Measuring Subjective Evaluations.” Unpublished manuscript, University of Wisconsin-Madison, Madison, WI.

  • Edwards, D.F. 2015. “Voices Heard.” Presented at the Health Equity Leadership Institute, Madison, WI.

  • Egede, L.E. and C. Ellis. 2008. “Development and Testing of the Multidimensional Trust in Health Care Systems Scale.” Journal of General Internal Medicine 23: 808–815.

  • Fleiss, J.L. 1981. Statistical Methods for Rates and Proportions, 2nd edition. New York: Wiley.

  • Fortune-Greeley, A.K., K.E. Flynn, D.D. Jeffery, M.S. Williams, F.J. Keefe, R.B. Reeve, G.B. Willis, and K.P. Weinfurt. 2009. “Using Cognitive Interviews to Evaluate Items for Measuring Sexual Functioning Across Cancer Populations: Improvements and Remaining Challenges.” Quality of Life Research 18: 1085–1093.

  • Fowler, F.J. and C. Cosenza. 2009. “Design and Evaluation of Survey Questions.” In The Sage Handbook of Applied Social Research Methods, edited by L. Bickman and D.J. Rog, 375–412. Thousand Oaks, CA: Sage.

  • Hall, M.A., F. Camacho, E. Dugan, and R. Balkrishnan. 2002a. “Trust in the Medical Profession: Conceptual and Measurement Issues.” Health Services Research 37: 1419–1439.

  • Hall, M.A., F. Camacho, J.S. Lawlor, V. DePuy, J. Sugarman, and K. Weinfurt. 2006. “Measuring Trust in Medical Researchers.” Medical Care 44: 1048–1053.

  • Hall, M.A., E. Dugan, B. Zheng, and A.K. Mishra. 2001. “Trust in Physicians and Medical Institutions: What is It, Can It be Measured, and Does It Matter?” Milbank Quarterly 79: 613–639.

  • Hall, M.A., B. Zheng, E. Dugan, F. Camacho, K.E. Kidd, A. Mishra, and R. Balkrishnan. 2002b. “Measuring Patients’ Trust in their Primary Care Providers.” Medical Care Research and Review 59: 293–318.

  • Hanson, T. 2015. “Comparing Agreement and Item-Specific Response Scales: Results from an Experiment.” Social Research Practice 1: 17–25.

  • Hayman, R.M., B.J. Taylor, N.S. Peart, B.C. Galland, and R.M. Sayers. 2001. “Participation in Research: Informed Consent, Motivation and Influence.” Journal of Paediatrics and Child Health 37: 51–54.

  • Henderson, G., J. Garrett, J. Bussey-Jones, M.E. Moloney, C. Blumenthal, and G. Corbie-Smith. 2008. “Great Expectations: Views of Genetic Research Participants Regarding Current and Future Genetic Studies.” Genetics in Medicine 10: 193–200.

  • Höhne, J.K. and D. Krebs. 2018. “Scale Direction Effects in Agree/Disagree and Item-Specific Questions: A Comparison of Question Formats.” International Journal of Social Research Methodology 21: 91–103.

  • Höhne, J.K. and T. Lenzner. 2018. “New Insights on the Cognitive Processing of Agree/Disagree and Item-Specific Questions.” Journal of Survey Statistics and Methodology 6: 401–417.

  • Höhne, J.K., S. Schlosser, and D. Krebs. 2017. “Investigating Cognitive Effort and Response Quality of Question Formats in Web Surveys Using Paradata.” Field Methods 29: 365–382.

  • Holbrook, A.L. 2008. “Recency Effect.” In Encyclopedia of Survey Research Methodology, edited by P.J. Lavrakas, 695–696. Newbury Park, CA: Sage.

  • Johnson, R.B. and A.J. Onwuegbuzie. 2004. “Mixed Methods Research: A Research Paradigm Whose Time Has Come.” Educational Researcher 33: 14–26.

  • Krosnick, J.A. and S. Presser. 2010. “Question and Questionnaire Design.” In Handbook of Survey Research, Second Edition, edited by P.V. Marsden and J.D. Wright, 263–313. Bingley, UK: Emerald Group Publishing Limited.

  • Kuru, O. and J. Pasek. 2016. “Improving Social Media Measurement in Surveys: Avoiding Acquiescence Bias in Facebook Research.” Computers in Human Behavior 57: 82–92.

  • Landis, J.R. and G.G. Koch. 1977. “The Measurement of Observer Agreement for Categorical Data.” Biometrics 33: 159–174.

  • Lelkes, Y. and R. Weiss. 2015. “Much Ado about Acquiescence: The Relative Validity and Reliability of Construct-Specific and Agree-Disagree Questions.” Research and Politics 2: 1–8.

  • Liu, M., S. Lee, and F.G. Conrad. 2015. “Comparing Extreme Response Styles between Agree-Disagree and Item-Specific Scales.” Public Opinion Quarterly 79: 952–975.

  • Mainous, A.G., D.W. Smith, M.E. Geesey, and B.C. Tilley. 2006. “Development of a Measure to Assess Patient Trust in Medical Researchers.” Annals of Family Medicine 4: 247–252.

  • Revilla, M. and C. Ochoa. 2015. “Quality of Different Scales in an Online Survey in Mexico and Colombia.” Journal of Politics in Latin America 7: 157–177.

  • Rogers, W. 1994. “Regression Standard Errors in Clustered Samples.” Stata Technical Bulletin 13.

  • Ryan, G.W. and H.R. Bernard. 2003. “Techniques to Identify Themes.” Field Methods 15: 85–109.

  • Saris, W.E., M. Revilla, J.A. Krosnick, and E.M. Shaeffer. 2010. “Comparing Questions with Agree/Disagree Response Options to Questions with Item-Specific Response Options.” Survey Research Methods 4: 61–79.

  • Schaeffer, N.C. and J. Dykema. 2011. “Response 1 to Fowler’s Chapter: Coding the Behavior of Interviewers and Respondents to Evaluate Survey Questions.” In Question Evaluation Methods: Contributing to the Science of Data Quality, edited by J. Madans, K. Miller, A. Maitland, and G. Willis, 23–39. Hoboken, NJ: John Wiley & Sons, Inc.

  • Scharff, D.P., K.J. Mathews, P. Jackson, J. Hoffsuemmer, E. Martin, and D. Edwards. 2010. “More than Tuskegee: Understanding Mistrust about Research Participation.” Journal of Health Care for the Poor and Underserved 21: 879–897.

  • Schuman, H. and S. Presser. 1996. Questions and Answers in Attitude Surveys: Experiments on Question Form, Wording, and Context. Thousand Oaks, CA: Sage Publications, Inc.

  • Smith, T.W., P.V. Marsden, and M. Hout. 2013. General Social Survey, 1972–2010 [Cumulative File]. Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor], 2013-02-07.

  • Streiner, D.L., G.R. Norman, and J. Cairney. 2015. Health Measurement Scales: A Practical Guide to Their Development and Use. Oxford, UK: Oxford University Press.

  • Sturgis, P., C. Roberts, and P. Smith. 2014. “Middle Alternatives Revisited: How the neither/nor Response Acts as a Way of Saying ‘I Don’t Know’?” Sociological Methods & Research 43: 15–38.

  • Thompson, H.S., H.B. Valdimarsdottir, G. Winkel, L. Jandorf, and W.W. Redd. 2004. “The Group-Based Medical Mistrust Scale: Psychometric Properties and Association with Breast Cancer Screening.” Preventive Medicine 38: 209–218.

  • Tourangeau, R., M.C. Couper, and F. Conrad. 2004. “Spacing, Position, and Order: Interpretive Heuristics for Visual Features of Survey Questions.” Public Opinion Quarterly 68: 368–393.

  • Tourangeau, R., L.J. Rips, and K. Rasinski. 2000. The Psychology of Survey Response. Cambridge, England: Cambridge University Press.

  • Williams, M.M., D.P. Scharff, K.J. Mathews, J.S. Hoffsuemmer, P. Jackson, J.C. Morris, and D.F. Edwards. 2010. “Barriers and Facilitators of African American Participation in Alzheimer Disease Biomarker Research.” Alzheimer Disease & Associated Disorders 24: S24–S29.

  • Willis, G.B. 2005. Cognitive Interviewing: A Tool for Improving Questionnaire Design. Thousand Oaks, CA: Sage.

  • Willis, G.B. and K. Miller. 2011. “Cross-Cultural Cognitive Interviewing: Seeking Comparability and Enhancing Understanding.” Field Methods 23: 331–341.

  • Willits, F.K., G.L. Theodori, and A.E. Luloff. 2016. “Another Look at Likert Scales.” Journal of Rural Social Sciences 31: 126–139.

  • Yan, T. and R. Tourangeau. 2008. “Fast Times and Easy Questions: The Effects of Age, Experience and Question Complexity on Web Survey Response Times.” Applied Cognitive Psychology 22: 51–68.

  • Zheng, B., M.A. Hall, E. Dugan, K.E. Kidd, and D. Levine. 2002. “Development of a Scale to Measure Patients’ Trust in Health Insurers.” Health Services Research 37: 185–200.
