Model-based Utility Functions

Abstract

Orseau and Ring, as well as Dewey, have recently described problems, including self-delusion, with the behavior of agents using various definitions of utility functions. An agent's utility function is defined in terms of the agent's history of interactions with its environment. This paper argues, via two examples, that these behavior problems can be avoided by formulating the utility function in two steps: 1) inferring a model of the environment from interactions, and 2) computing utility as a function of the environment model. Basing a utility function on a model that the agent must learn implies that the utility function must initially be expressed in terms of specifications to be matched to structures in the learned model. These specifications constitute prior assumptions about the environment, so this approach will not work with arbitrary environments. But the approach should work for agents designed by humans to act in the physical world. The paper also addresses the issue of self-modifying agents and shows that, if given the possibility of modifying their utility functions, agents will not choose to do so, under some usual assumptions.
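To make the two-step construction concrete, here is a minimal sketch in Python. The class and function names, and the toy "clean"/"dirty" environment, are illustrative assumptions rather than anything from the paper: the agent fits a frequency-count transition model of a small discrete environment from its interaction history (step 1), and utility is then evaluated on states of that learned model rather than on the raw observation stream (step 2), which is what removes the incentive for self-delusion.

```python
from collections import defaultdict

class ModelBasedAgent:
    """Minimal sketch of a model-based utility function.

    Step 1: infer an environment model from interactions (here,
    frequency-count transition estimates over discrete states
    stand in for the paper's learned environment model).
    Step 2: compute utility as a function of the inferred model,
    not of the raw observation/reward stream.
    """

    def __init__(self, utility_of_state):
        # utility_of_state encodes the designer's specification,
        # to be matched against structures in the learned model.
        self.utility_of_state = utility_of_state
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, state, action, next_state):
        # Step 1: update the environment model from one interaction.
        self.counts[(state, action)][next_state] += 1

    def predicted_distribution(self, state, action):
        # Normalize the transition counts into probabilities.
        outcomes = self.counts[(state, action)]
        total = sum(outcomes.values())
        return {s2: n / total for s2, n in outcomes.items()} if total else {}

    def expected_utility(self, state, action):
        # Step 2: utility is evaluated on the model's predicted
        # environment states, so tampering with the agent's own
        # sensors does not raise the utility it computes.
        return sum(p * self.utility_of_state(s2)
                   for s2, p in self.predicted_distribution(state, action).items())

# Hypothetical usage: utility rewards the environment being "clean".
agent = ModelBasedAgent(lambda s: 1.0 if s == "clean" else 0.0)
agent.observe("dirty", "vacuum", "clean")
agent.observe("dirty", "wait", "dirty")
print(agent.expected_utility("dirty", "vacuum"))  # 1.0
print(agent.expected_utility("dirty", "wait"))    # 0.0
```

A full instantiation would replace the count table with a learned hidden Markov model or dynamic Bayesian network, as in the references below; the division of labor is unchanged: the specification utility_of_state is fixed by the designer in advance, while the model it is applied to is learned.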

References

  • Alamino R. and Caticha N. 2006. Online learning in discrete hidden Markov models. In: Djafari A. M. (ed) Proc. AIP Conf. vol. 872(1) pp. 187-194.

  • Baum L. E. Petrie T. Soules G. and Weiss N. 1970. A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. Ann. Math. Statist. 41(1) pp. 164-171.

  • Bishop C. 2006. Pattern Recognition and Machine Learning. Springer Berlin.

  • Bostrom N. 2003. Ethical issues in advanced artificial intelligence. In: Smit I. et al. (eds) Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence Vol. 2 pp. 12-17. Int. Institute of Advanced Studies in Systems Research and Cybernetics.

  • Dewey D. 2011. Learning what to value. In: Schmidhuber J. Thórisson K. R. and Looks M. (eds) AGI 2011. LNCS (LNAI) vol. 6830 pp. 309-314. Springer Heidelberg.

  • Ghahramani Z. 1997. Learning dynamic Bayesian networks. In: Giles C. and Gori M. (eds) Adaptive Processing of Temporal Information. LNCS vol. 1387 pp. 168-197. Springer Heidelberg.

  • Gisslén L. Luciw M. Graziano V. and Schmidhuber J. 2011. Sequential constant size compressors for reinforcement learning. In: Schmidhuber J. Thórisson K. R. and Looks M. (eds) AGI 2011. LNCS (LNAI) vol. 6830 pp. 31-40. Springer Heidelberg.

  • Goertzel B. 2004. Universal ethics: the foundations of compassion in pattern dynamics. http://www.goertzel.org/papers/UniversalEthics.htm

  • Hibbard B. 2008. The technology of mind and a new social contract. J. Evolution and Technology 17(1) pp. 13-22.

  • Hutter M. 2005. Universal artificial intelligence: sequential decisions based on algorithmic probability. Springer Heidelberg.

  • Hutter M. 2009a. Feature reinforcement learning: Part I. Unstructured MDPs. J. Artificial General Intelligence 1 pp. 3-24.

  • Hutter M. 2009b. Feature dynamic Bayesian networks. In: Goertzel B. Hitzler P. and Hutter M. (eds) AGI 2009. Proc. Second Conf. on AGI pp. 67-72. Atlantis Press Amsterdam.

  • Koutroumbas K. and Theodoridis S. 2008. Pattern recognition (4th ed.). Academic Press Boston.

  • Li M. and Vitanyi P. 1997. An introduction to Kolmogorov complexity and its applications. Springer Heidelberg.

  • Lloyd S. 2002. Computational capacity of the universe. Phys. Rev. Lett. 88, 237901.

  • Olds J. and Milner P. 1954. Positive reinforcement produced by electrical stimulation of septal area and other regions of rat brain. J. Comp. Physiol. Psychol. 47 pp. 419-427.

  • Omohundro S. 2008. The basic AI drives. In: Wang P. Goertzel B. and Franklin S. (eds) AGI 2008. Proc. First Conf. on AGI pp. 483-492. IOS Press Amsterdam.

  • Orseau L. and Ring M. 2011a. Self-modification and mortality in artificial agents. In: Schmidhuber J. Thórisson K. R. and Looks M. (eds) AGI 2011. LNCS (LNAI) vol. 6830 pp. 1-10. Springer Heidelberg.

  • Puterman M. L. 1994. Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley New York.

  • Ring M. and Orseau L. 2011b. Delusion, survival, and intelligent agents. In: Schmidhuber J. Thórisson K. R. and Looks M. (eds) AGI 2011. LNCS (LNAI) vol. 6830 pp. 11-20. Springer Heidelberg.

  • Russell S. and Norvig P. 2010. Artificial intelligence: a modern approach (3rd ed.). Prentice Hall New York.

  • Schmidhuber J. 2002. The speed prior: a new simplicity measure yielding near-optimal computable predictions. In: Kivinen J. and Sloan R. H. (eds) COLT 2002. LNCS (LNAI) vol. 2375 pp. 216-228. Springer Heidelberg.

  • Schmidhuber J. 2009. Ultimate cognition à la Gödel. Cognitive Computation 1(2) pp. 177-193.

  • Sutton R. S. and Barto A. G. 1998. Reinforcement learning: an introduction. MIT Press.

  • Wang P. 1995. Non-Axiomatic Reasoning System: Exploring the essence of intelligence. PhD dissertation, Indiana University Comp. Sci. Dept. and the Cog. Sci. Program.

  • Waser M. 2011. Rational universal benevolence: simpler, safer and wiser than "friendly AI." In: Schmidhuber J. Thórisson K. R. and Looks M. (eds) AGI 2011. LNCS (LNAI) vol. 6830 pp. 153-162. Springer Heidelberg.

  • Yudkowsky E. 2004. Coherent Extrapolated Volition. http://www.sl4.org/wiki/CollectiveVolition
