Artificial Intelligence as a Means to Moral Enhancement

Abstract

This paper critically assesses the possibility of moral enhancement with the ambient intelligence technologies and artificial intelligence presented in Savulescu and Maslen (2015). The main problem with their proposal is that it is not robust enough to play a normative role in users’ behavior. A more promising approach, and the one presented in this paper, relies on an artificial moral reasoning engine designed to present its users with moral arguments grounded in first-order normative theories, such as Kantianism or utilitarianism, by which reason-responsive people can be persuaded. This proposal can play a normative role, and it is a more promising avenue towards moral enhancement because such a system can be designed to take advantage of the sometimes undue trust that people place in automated technologies. We can therefore expect a well-designed moral reasoner to persuade people who might not be persuaded by similar arguments coming from other people. So, all things considered, there is hope for moral enhancement in artificial intelligence, but not in artificial intelligence that relies solely on ambient intelligence technologies.

References

Anderson, M. and S. L. Anderson (2011). Machine ethics, Cambridge University Press.

Baertschi, B. (2014). “Neuromodulation in the service of moral enhancement.” Brain topography 27(1): 63-71.

Borenstein, J. and R. Arkin (2016). “Robotic nudges: the ethics of engineering a more socially just human being.” Science and engineering ethics 22(1): 31-46.

Carlson, M. S., et al. (2014). “Identifying factors that influence trust in automated cars and medical diagnosis systems.” in AAAI Symposium on The Intersection of Robust Intelligence and Trust in Autonomous Systems.

Crockett, M. J. (2014). “Moral bioenhancement: a neuroscientific perspective.” Journal of medical ethics 40(6): 370-371.

De Dreu, C. K. (2012). “Oxytocin modulates cooperation within and competition between groups: an integrative review and research agenda.” Hormones and behavior 61(3): 419-428.

de Sio, F. S., et al. (2014). “How cognitive enhancement can change our duties.” Frontiers in systems neuroscience 8: 131.

de Vries, P. W. (2004). Trust in systems: effects of direct and indirect information, Technische Universiteit Eindhoven.

DeGrazia, D. (2013). “Moral enhancement, freedom, and what we (should) value in moral behaviour.” Journal of medical ethics: medethics-2012-101157.

Dennett, D. C. (1981). “True Believers: The Intentional Stance and Why It Works,” in A. F. Heath, ed., Scientific Explanation: Papers Based on Herbert Spencer Lectures Given in the University of Oxford. Oxford: Clarendon Press: 53-75.

Dominelli, L. (1998). “Multiculturalism, anti-racism and social work in Europe,” in C. Williams, H. Soydan and M. R. D. Johnson, eds., Social Work and Minorities. London: Routledge: 36-57.

Dow, J. (2015). Passions and Persuasion in Aristotle’s Rhetoric, Oxford University Press, USA.

Dworkin, G. (1972). “Paternalism.” The Monist: 64-84.

Dworkin, G. (2016). “Paternalism.” Stanford Encyclopedia of Philosophy. Retrieved from http://plato.stanford.edu/entries/paternalism.

Emerson, R. M. (1976). “Social exchange theory.” Annual review of sociology: 335-362.

Fedo, M. (2016). The lynchings in Duluth, Minnesota Historical Society Press.

Glenn, A. L. and A. Raine (2014). “Neurocriminology: implications for the punishment, prediction and prevention of criminal behaviour.” Nature Reviews Neuroscience 15(1): 54-63.

Hamari, J., et al. (2014). “Do persuasive technologies persuade? A review of empirical studies,” in International Conference on Persuasive Technology, Springer.

Harris, J. (2010). Enhancing evolution: The ethical case for making better people, Princeton University Press.

Harris, J. (2011). “Moral enhancement and freedom.” Bioethics 25(2): 102-111.

Harris, J. (2013). “‘Ethics is for bad guys!’ Putting the ‘moral’ into moral enhancement.” Bioethics 27(3): 169-173.

Hobbes, T. (2004). De cive, Kessinger Publishing.

Kant, I. (1987). Critique of judgment, Hackett Publishing.

Lee, J. D. and K. A. See (2004). “Trust in automation: Designing for appropriate reliance.” Human Factors: The Journal of the Human Factors and Ergonomics Society 46(1): 50-80.

MacIntyre, A. (2006). Ethics and Politics: Volume 2: Selected Essays, Cambridge University Press.

Meyer, M. L., et al. (2012). “Empathy for the social suffering of friends and strangers recruits distinct patterns of brain activation.” Social cognitive and affective neuroscience: nss019.

Muir, B. M. (1987). “Trust between humans and machines, and the design of decision aids.” International Journal of Man-Machine Studies 27(5-6): 527-539.

Nickel, P. J. (2013). Trust in technological systems. Norms in technology, Springer: 223-237.

Parasuraman, R., et al. (1993). “Performance consequences of automation-induced ‘complacency’.” The International Journal of Aviation Psychology 3(1): 1-23.

Perelman, C. and L. Olbrechts-Tyteca (1969). The New Rhetoric: A Treatise on Argumentation, University of Notre Dame Press.

Persson, I. and J. Savulescu (2011). “Unfit for the future? Human nature, scientific progress, and the need for moral enhancement,” in Enhancing Human Capacities, ed. J. Savulescu, R. ter Meulen, and G. Kahane. Oxford: Wiley-Blackwell: 486-500.

Picard, R. W. (2000). Affective Computing, MIT Press.

Plato (1997). Plato: complete works. Indianapolis, Hackett.

Rowe, C. J. and S. Broadie (2002). Nicomachean Ethics, Oxford University Press, USA.

Sauer, J., et al. (2015). “Experience of automation failures in training: effects on trust, automation bias, complacency and performance.” Ergonomics: 1-14.

Savulescu, J. and H. Maslen (2015). “Moral Enhancement and Artificial Intelligence: Moral AI?” in Beyond Artificial Intelligence, Springer: 79-95.

Shiffrin, S. V. (2000). “Paternalism, unconscionability doctrine, and accommodation.” Philosophy & Public Affairs 29(3): 205-250.

Slovic, P. (2010). If I look at the mass I will never act: Psychic numbing and genocide. Emotions and risky technologies, Springer: 37-59.

Tsai, G. (2014). “Rational persuasion as paternalism.” Philosophy & Public Affairs 42(1): 78-112.

Van den Hoven, J., et al. (2012). “Engineering and the problem of moral overload.” Science and engineering ethics 18(1): 143-155.
