Abstract
Artificial agents, particularly but not only those in the infosphere (Floridi 2010a), extend the class of entities that can be involved in moral situations, for they can be correctly interpreted as entities that can perform actions with good or evil impact (moral agents). In this chapter, I clarify the concepts of agent and of artificial agent and then distinguish between issues concerning their moral behaviour vs. issues concerning their responsibility. The conclusion is that there is substantial and important scope, particularly in information ethics, for the concept of moral artificial agents not necessarily exhibiting free will, mental states or responsibility. This complements the more traditional approach, which considers whether artificial agents may have mental states, feelings, emotions and so forth. By focussing directly on “mind-less morality”, one is able to bypass such questions, as well as other difficulties arising in Artificial Intelligence, in order to tackle some vital issues in contexts where artificial agents are increasingly part of the everyday environment (Floridi 2008a).
Notes
1. For an excellent introduction, see Jamieson (2008).
2. See, for example, Bedau (1996) for a discussion of alternatives to necessary-and-sufficient definitions in the case of life.
3. It is interesting to speculate on the mechanism by which that list is maintained: perhaps by a human agent; perhaps by an AA composed of several people (a committee); or perhaps by a software agent.
References
Allen, C., G. Varner, and J. Zinser. 2000. Prolegomena to any future artificial moral agent. Journal of Experimental & Theoretical Artificial Intelligence 12: 251–261.
Alpaydin, E. 2010. Introduction to machine learning. 2nd ed. Cambridge, MA/London: MIT Press.
Arnold, A., and J. Plaice. 1994. Finite transition systems: Semantics of communicating systems. Paris/Hemel Hempstead: Masson/Prentice Hall.
Barandiaran, X.E., E.D. Paolo, and M. Rohde. 2009. Defining agency: Individuality, normativity, asymmetry, and spatio-temporality in action. Adaptive Behavior—Animals, Animats, Software Agents, Robots, Adaptive Systems 17 (5): 367–386.
Bedau, M.A. 1996. The nature of life. In The philosophy of life, ed. M.A. Boden, 332–357. Oxford: Oxford University Press.
Cassirer, E. 1910. Substanzbegriff Und Funktionsbegriff. Untersuchungen Über Die Grundfragen Der Erkenntniskritik. Berlin: Bruno Cassirer. Translated by Swabey, W. M., and M. C. Swabey. 1923. Substance and function and Einstein’s theory of relativity. Chicago: Open Court.
Danielson, P. 1992. Artificial morality: Virtuous robots for virtual games. London/New York: Routledge.
Davidsson, P., and S.J. Johansson, eds. 2005. Special issue on “On the metaphysics of agents”. ACM: 1299–1300.
Dennett, D. 1997. When Hal kills, who’s to blame? In Hal’s legacy: 2001’s computer as dream and reality, ed. D. Stork, 351–365. Cambridge, MA: MIT Press.
Dixon, B.A. 1995. Response: Evil and the moral agency of animals. Between the Species 11 (1–2): 38–40.
Epstein, R.G. 1997. The case of the killer robot: Stories about the professional, ethical, and societal dimensions of computing. New York/Chichester: Wiley.
Floridi, L. 2003. On the intrinsic value of information objects and the infosphere. Ethics and Information Technology 4 (4): 287–304.
———. 2006. Information technologies and the tragedy of the good will. Ethics and Information Technology 8 (4): 253–262.
———. 2007. Global information ethics: The importance of being environmentally earnest. International Journal of Technology and Human Interaction 3 (3): 1–11.
———. 2008a. Artificial intelligence’s new frontier: Artificial companions and the fourth revolution. Metaphilosophy 39 (4/5): 651–655.
———. 2008b. The method of levels of abstraction. Minds and Machines 18 (3): 303–329.
———. 2010a. Information—A very short introduction. Oxford: Oxford University Press.
———. 2010b. Levels of abstraction and the Turing test. Kybernetes 39 (3): 423–440.
———. 2010c. Network ethics: Information and business ethics in a networked society. Journal of Business Ethics 90 (4): 649–659.
Floridi, L., and J.W. Sanders. 2001. Artificial evil and the foundation of computer ethics. Ethics and Information Technology 3 (1): 55–66.
———. 2004. On the morality of artificial agents. Minds and Machines 14 (3): 349–379.
———. 2005. Internet ethics: The constructionist values of Homo Poieticus. In The impact of the internet on our moral lives, ed. R. Cavalier. New York: SUNY.
Franklin, S., and A. Graesser. 1997. Is it an agent, or just a program?: A taxonomy for autonomous agents. In Proceedings of the workshop on intelligent agents III, agent theories, architectures, and languages, 21–35. Berlin: Springer.
Jamieson, D. 2008. Ethics and the environment: An introduction. Cambridge: Cambridge University Press.
Kerr, P. 1996. The grid. New York: Warner Books.
Michie, D. 1961. Trial and error. In Penguin science surveys, ed. A. Garratt, 129–145. Harmondsworth: Penguin.
Mitchell, M. 1998. An introduction to genetic algorithms. Cambridge, MA/London: MIT Press.
Moor, J.H. 2001. The status and future of the Turing test. Minds and Machines 11 (1): 77–93.
Motwani, R., and P. Raghavan. 1995. Randomized algorithms. Cambridge: Cambridge University Press.
Moya, L.J., and A. Tolk. 2007. Special issue on towards a taxonomy of agents and multi-agent systems, 11–18. San Diego: Society for Computer Simulation International.
Rosenfeld, R. 1995a. Can animals be evil?: Kekes’ character-morality, the hard reaction to evil, and animals. Between the Species 11 (1–2): 33–38.
———. 1995b. Reply. Between the Species 11 (1–2): 40–41.
Russell, S.J., and P. Norvig. 2010. Artificial intelligence: A modern approach. 3rd ed. Boston/London: Pearson.
Turing, A.M. 1950. Computing machinery and intelligence. Mind 59 (236): 433–460.
Wallach, W., and C. Allen. 2010. Moral machines: Teaching robots right from wrong. New York/Oxford: Oxford University Press.
© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this chapter
Floridi, L. (2021). Artificial Agents and Their Moral Nature. In: Floridi, L. (eds) Ethics, Governance, and Policies in Artificial Intelligence. Philosophical Studies Series, vol 144. Springer, Cham. https://doi.org/10.1007/978-3-030-81907-1_12
Print ISBN: 978-3-030-81906-4
Online ISBN: 978-3-030-81907-1