Morality, protection, security and gain: lessons from a minimalistic, economically inspired multi-agent model

Poznan University of Technology, 60-965 Poznan, Poland


In this work, we introduce a simple multi-agent simulation model with two agent roles corresponding to moral and immoral attitudes. The model is given explicitly by a set of mathematical equations with continuous variables and is characterized by four parameters: morality, protection, and two efficiency parameters. Agents are free to adjust their roles to maximize individual gain. The model is analyzed theoretically to find the conditions for its stability, i.e., the fractions of agents in each role that lead to an equilibrium in their gains. A multi-agent simulation is also developed to verify the dynamics of the model across all values of the morality and protection parameters, and to identify potential discrepancies with the theoretical analysis.
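The model's actual equations are not reproduced in this abstract, so the following is only an illustrative sketch of the role-adjustment dynamics it describes. The gain functions below (a steady, morality-scaled payoff `e1 * m` for moral agents, and an exploitation payoff `e2 * (1 - p) * f` for immoral agents, damped by protection `p` and limited by the fraction `f` of moral agents available to exploit) are assumptions for illustration, not the paper's formulas.

```python
def replicator_step(f, m, p, e1, e2, rate=0.02):
    """One adjustment step: agents drift toward the better-paying role.

    Hypothetical gain functions (not the paper's equations):
      moral gain   = e1 * m              # steady, morality-scaled payoff
      immoral gain = e2 * (1 - p) * f    # exploits moral agents, damped
                                         # by the protection level p
    """
    g_moral = e1 * m
    g_immoral = e2 * (1.0 - p) * f
    if g_moral > g_immoral:
        f = min(1.0, f + rate)   # role switching toward "moral"
    elif g_immoral > g_moral:
        f = max(0.0, f - rate)   # role switching toward "immoral"
    return f


def equilibrium_fraction(m, p, e1, e2, steps=2000):
    """Iterate the adjustment rule until the moral fraction settles."""
    f = 0.5  # start from an even split of roles
    for _ in range(steps):
        f = replicator_step(f, m, p, e1, e2)
    return f
```

Under these assumed gains the interior equilibrium sits where both roles pay equally, i.e. at `f* = e1 * m / (e2 * (1 - p))`, clipped to [0, 1]; high morality combined with high protection drives the whole population to the moral role.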



