Analysis of Competitive Learning at University Level in Mexico via Item Response Theory

Abstract

This paper presents a study of the multiple-choice test from the eleventh knowledge tournament for Statistics I, in order to determine whether the tournament fosters competitive learning in university students. The analysis uses Item Response Theory (IRT). The results show that only 27 students (13.43% of all participants) reached an acceptable ability level (1.03 to 2.58), while the ability of the remaining students was unsatisfactory (-1.68 to 0.76). This suggests that the participants were not, on the whole, students seeking to test their knowledge of the subject or looking for an academic challenge. Better strategies for motivating students toward competitive learning must therefore be found.
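The abstract does not state which IRT model was fitted; a common choice for dichotomously scored multiple-choice tests is the two-parameter logistic (2PL, Birnbaum) model, in which the probability of a correct answer to item j is P_j(θ) = 1 / (1 + exp(-a_j(θ - b_j))), with discrimination a_j, difficulty b_j, and latent ability θ on the same scale as the ranges reported above. The sketch below, assuming the 2PL model and entirely hypothetical item parameters and responses, shows how a single examinee's ability can be estimated by maximum likelihood.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical 2PL item parameters for a 5-item test; the tournament's
# actual item estimates are not reported in the abstract.
a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])   # discrimination a_j
b = np.array([-0.5, 0.0, 0.7, 1.2, 2.0])  # difficulty b_j

def p_correct(theta):
    """2PL probability of answering each item correctly at ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def neg_log_likelihood(theta, responses):
    """Negative log-likelihood of a 0/1 response pattern under the 2PL model."""
    p = p_correct(theta)
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

def estimate_ability(responses):
    """Maximum-likelihood ability estimate for one examinee (bounded search)."""
    result = minimize_scalar(neg_log_likelihood, bounds=(-4, 4),
                             args=(responses,), method="bounded")
    return result.x

# Example: a hypothetical examinee who answered the three easiest items correctly.
responses = np.array([1, 1, 1, 0, 0])
print(f"estimated ability: {estimate_ability(responses):.2f}")
```

In practice a study like this would estimate the item parameters and abilities jointly from the full response matrix (for example with a dedicated IRT package), but the grid of per-examinee ability estimates is what the reported ranges (-1.68 to 2.58) summarize.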

