Abstract
Autonomous robots will have to make decisions on their own to varying degrees. In this chapter, I make a plea for developing moral capabilities that are deeply integrated into the control architectures of such autonomous agents, for I shall argue that any ordinary decision-making situation from daily life can turn into a morally charged one.
Notes
1. We could further refine this by defining the set of impermissible actions relative to some situation S.
2. Note that I am using the term “moral dilemma” in a non-technical sense, as I do not want to be side-tracked by the discussion of whether there are “genuine moral dilemmas”…
3. Note that a direct comparison between a robotic and a human driver in the car scenario is not possible, because the robot does not have to take its own destruction into account, whereas in the human case part of the decision-making will involve estimating the chances of minimizing harm to oneself.
© 2016 Springer International Publishing Switzerland
Cite this chapter
Scheutz, M. (2016). The Need for Moral Competency in Autonomous Agent Architectures. In: Müller, V.C. (eds) Fundamental Issues of Artificial Intelligence. Synthese Library, vol 376. Springer, Cham. https://doi.org/10.1007/978-3-319-26485-1_30
Print ISBN: 978-3-319-26483-7
Online ISBN: 978-3-319-26485-1