The Thorny Challenge of Making Moral Machines: Ethical Dilemmas with Self-Driving Cars

Edmond Awad 1 , Jean-François Bonnefon 2 , Azim Shariff 3 , and Iyad Rahwan 4
  • 1 The Media Lab, Institute for Data, Systems and Society, Massachusetts Institute of Technology, Cambridge, USA
  • 2 Toulouse School of Economics (TSM-R, CNRS), Université Toulouse-1 Capitole, Toulouse, France
  • 3 Department of Psychology, University of British Columbia, Vancouver, Canada
  • 4 The Media Lab, Massachusetts Institute of Technology, Cambridge, USA; Center for Humans and Machines, Max Planck Institute for Human Development, Berlin, Germany


The algorithms that control autonomous vehicles (AVs) will need to embed moral principles guiding their decisions in situations of unavoidable harm. Manufacturers and regulators are confronted with three potentially incompatible objectives: being consistent, not causing public outrage, and not discouraging buyers. The Moral Machine study presented here is a step towards solving this problem, as it seeks to learn how people all over the world feel about the alternative decisions the AI of self-driving vehicles might have to make. The global study revealed broad agreement across regions regarding how to handle unavoidable accidents. To master the moral challenges, all stakeholders should embrace the topic of machine ethics: this is a unique opportunity to decide as a community what we believe to be right or wrong, and to ensure that machines, unlike humans, unerringly follow the agreed-upon moral preferences. The integration of autonomous cars will require a new social contract that provides clear guidelines about who is responsible for different kinds of accidents, how monitoring and enforcement will be performed, and how trust among all stakeholders can be engendered.
