A “psychopathic” Artificial Intelligence: the possible risks of a deviating AI in Education

Open access


This work analyses the use of artificial intelligence in education from an interdisciplinary point of view. Recent studies have demonstrated that an AI can "deviate" and become potentially malicious, owing to programmers' biases, corrupted data feeds, or deliberate actions. Given the pervasive use of artificial intelligence systems, including in the educational environment, it seemed necessary to investigate when and how an AI in education could deviate. We started with an investigation of AI and the risks it poses, asking whether those risks could also apply to educational AI. We then reviewed the growing literature on the use of technology in the classroom, and the criticism of it, referring to specific use cases. Finally, the authors formulate questions and suggestions for further research, to bridge the conceptual gaps revealed by the current lack of research.


