Abstract
This Commentary critiques an extremely relational view of robot moral status, drawing out its practical implications for ethics and law. It also suggests next steps for AI ethics if extremely relational reasoning is compelling. Section I introduces the topic, distinguishing an ‘extremely relational’ view from more moderate relational views. Section II illustrates extremely relational views using the example of embodiment. Section III explores practical implications of extremely relational views for ethics and law. Section IV offers possible responses to extreme relationism. Section V concludes by suggesting next steps for AI ethics.
1 Introduction
In “Not Relational Enough? Towards an Eco-Relational Approach in Robot Ethics,” Puzio defends what I call an ‘extremely relational’ view of moral personhood and applies it to robots. According to extreme relationism, alternatives to relational views are, in the final analysis, relational too. Puzio’s argument favoring such an approach turns on showing that the best alternatives to relational accounts ground personhood in cognitive capacities for sentience, consciousness, and intelligence; yet these qualities must be contextualized, embodied, and enacted to exist – hence, they are relationally based.
In this commentary, I draw out some troubling practical implications of Puzio’s extremely relational account of personhood and consider possible responses. Section II illustrates extreme relationism using the example of embodiment. Section III shows the view’s disruptive implications for ordinary ethics. Section IV considers possible responses. The paper concludes that the best way forward might involve distinguishing complexly true metaphysical claims about persons from views utilized in practical ethics and law.
2 Extending the Relational Turn
Relational approaches to evaluating robot moral status are becoming well-established and beginning to take hold alongside more standard views that base moral status on intrinsic properties, such as sentience, consciousness, or intelligence. Applied to robots, relational views generally hold that the criteria that underpin personhood refer to social-relational features, such as the ability to communicate, play, be friendly, and display prosocial qualities like caring, loyalty, and kindness to others. Puzio proposes extending relational approaches to cognitive capacities themselves, and argues that approaches to personhood that rely on cognitive capacities are inevitably relational in the sense of being connected with “contexts, bodies, and actions” rather than existing on their own, in a decontextualized, disembodied, and hypothetical state. Others have begun exploring aspects of this proposal, such as the possibility that intelligence, or certain forms of it, is essentially embodied. For example, Bongard & Levin (2021, p. 2) argue that, like human learning, machine learning cannot develop without a body (in the form of hardware) to explore the world, describing “a continuum of emergence” of intelligence and agency. Weigmann (2012) likewise observes that thinking is influenced by bodily states and sensory experiences. An illustration might be machines that model their own morphology, altering their bodies in response to damage and thus becoming intelligent in new ways, expanding their neural networks as they use their artificial brain to interact (Bongard & Levin, 2021). Some suggest an “embodiment turn” (Nathan, 2023, p. 1) is co-occurring in AI with a “relational turn” (Coeckelbergh, 2010; Gunkel, 2012). The intuitive idea driving such suggestions is that rather than putting a fully formed brain into a machine, a machine becomes ‘brainy’ by interacting with and learning from its world.
If consciousness, sentience, and intelligence are necessarily embodied and contextualized, they are what Puzio (2024) calls “hybrid affairs.”
3 Disruptive Implications
Homing in on property-based views of personhood that invoke consciousness, sentience, and intelligence, Puzio tries to show that even if robots currently lack certain sophisticated cognitive qualities that humans have, human personhood is not exceptional or fundamentally different in kind from the personhood attributable to robots that evince their own kind of sophisticated relational qualities.
Puzio goes on to suggest that an extremely relational analysis of personhood carries striking implications for midlevel moral concepts, like responsibility and agency, making these notions into something “distributed across multiple agents.” This disrupts ordinary understandings, which generally assign responsibility and agency to self-contained individuals. Puzio’s analysis also disrupts concepts of responsibility and agency reflected in law and ethics that appeal to individuals’ intentions, and assume that individuals are autonomous agents who can be held solely responsible for their actions. As an example of joint agency and shared responsibility, Puzio cites robotic surgeries, which do not simply add an innovation to human-centered actions but instead transform human action into joint action and collaboration by enhancing visualization, precision, and possibilities between surgeons and AI-equipped robots.
A relational analysis of cognitive capacities brings close at hand worries ordinarily pegged as problems looming in a transhumanist or post-humanist future, when humans ‘merge’ with machines (Porter, 2017), or along the way, as brain-machine interfaces grow in sophistication (Jecker & Ko, 2022a, 2022b). If compelling, the analysis implies that these concerns are here and now.
Finally, extremely relational views are highly disruptive to contemporary AI ethics debates. To illustrate, consider AI ethics debates surrounding the topic of machines replacing humans and substituting morally inferior relationships (Turkle, 2011). Puzio’s analysis suggests that nonhumans and humans are co-constitutive, where to say that a and b are co-constitutive is to say that the identity of a consists in part of the identity of b; likewise, the identity of b consists in part of the identity of a. If nonhumans are constitutive of humans, then there is no ‘pure’ human that can be replaced, and no ‘pure’ humans who enter human–human relationships. It’s a jumble.
4 Possible Replies
One way to cast doubt on Puzio’s analysis is to question its generalizability. Even if sentience, consciousness, and intelligence are reducible to relational elements, perhaps other properties form the basis for personhood, and they are not relational. One possible candidate is pre-reflective self-consciousness, understood as self-awareness implicit in all conscious experience that provides one with “a continuous awareness of oneself as the subject of one’s stream of experience” (Smith, 2020, n.p.n.). Zahavi (2020, p. 23) holds that pre-reflective self-consciousness is a unique individual experience had by the one having a subjective experience: “To undergo an experience necessarily means that there is something it is like to have that experience” for the one having it. That is true not just of bodily sensations and perceptual experiences, but also of intentional desires and preferences, and of entertaining abstract beliefs. Zahavi maintains that this experience of ‘what it is like’ is not reducible to relational qualities; even in cases involving shared feelings, like joy, that engender a ‘what-it-is-like-for-us-ness,’ there still exists an individual pre-reflective self-awareness of ‘what it is like’ for each individual (Zahavi, 2018; Brinck et al., 2017).
Another possible response to Puzio’s proposal would be to agree that personhood is relational but point out that Puzio’s analysis is narrowly construed in the sense that it remains tethered to status quo accounts on which personhood is based on consciousness, sentience, and intelligence. Yet these hardly exhaust the properties that might be considered relevant to relational personhood. A broader list of relational qualities might include those suggested by sub-Saharan African ethics. Some interpretations of African ethics hold that robots have moral standing if they can commune with others in morally excellent ways (Jecker et al., 2022a, 2022b). Others ascribe moral standing to all living things, including trees and plants that lack consciousness, by virtue of their being within a ‘web of life’ with all living things (Behrens, 2014).
A final possible response to Puzio’s analysis might be to bite the bullet. This response requires learning to live with an asymmetry between extremely relational analyses of persons and practical approaches in ethics and law. Barber (2020) suggests that when metaphysics and ordinary ethics conflict, this can force a revision of ethics, but not of metaphysics. In other words, if persons are extremely relational in the sense that Puzio describes, then the disruption to practical ethics and law is irreconcilable. Metaphysics and practical ethics coexist in separate spheres. We continue to assume accountability in ethics and law. We continue to ask crucial questions about how we ought to live with and relate to robots: what kind of relationships should we have with robots, and what is possible? Can relationships with machines evince friendship and love (Jecker, 2021a, 2021b)? Can they be constitutive of a good life for human beings (Jecker, 2024; Loh & Loh, 2023)? These ethical questions can and should proceed.
5 Conclusion
If extremely relational approaches are compelling, and if they are also deeply disruptive of ordinary ethical thinking, the right approach might be to adapt. By distinguishing ‘strict and philosophical’ talk, which relies on complexly true metaphysical notions, from practical reasoning in law and ethics, which draws on simplified notions, we clear a path forward. Ultimately, there is much to like about Puzio’s incisive and insistent analysis. If we take their approach seriously, robot ethics should turn its attention to asking, ‘What kind of robot–human relating do we have reason to value?’ and ‘How can we design and deploy robots to relate in these ways?’
Data Availability
Not applicable.
Code Availability
Not applicable.
References
Barber, A. (2020). Is metaphysics immune to moral refutation? Acta Analytica, 35, 469–492. https://doi.org/10.1007/s12136-019-00415-y
Behrens, K. (2014). An African relational environmentalism and moral considerability. Environmental Ethics, 36(1), 63–82. https://doi.org/10.5840/enviroethics20143615
Bongard, J., & Levin, M. (2021). Living things are not (20th century) machines: updating mechanism metaphors in light of the modern science of machine behavior. Frontiers in Ecology and Evolution, 9, 650726. https://doi.org/10.3389/fevo.2021.650726
Brinck, I., Reddy, V., & Zahavi, D. (2017). The primacy of the ‘We’? In C. Durt, T. Fuchs, & C. Tewes (Eds.), Embodiment, enaction, and culture: Investigating the constitution of the shared world (pp. 131–147). MIT Press.
Coeckelbergh, M. (2010). Robot rights? Towards a social-relational justification of moral consideration. Ethics and Information Technology, 12(3), 209–221. https://doi.org/10.1007/s10676-010-9235-5
Gunkel, D. J. (2012). The machine question: Critical perspectives on AI, robots, and ethics. MIT Press. https://doi.org/10.7551/mitpress/8975.001.0001
Jecker, N. S. (2021a). You’ve got a friend in me: sociable robots for older adults in an age of global pandemics. Ethics and Information Technology, 23(Supp 1), 35–43. https://doi.org/10.1007/s10676-020-09546-y
Jecker, N. S. (2021b). Nothing to be ashamed of: sex robots for older adults with disabilities. BMJ Journal of Medical Ethics, 47(1), 26–32. https://doi.org/10.1136/medethics-2020-106645
Jecker, N. S. (2024). Robots we relate to and confer moral status on. In D. J. Gunkel (Ed.), Handbook of the ethics of AI. Edward Elgar Publishing Ltd.
Jecker, N. S., & Ko, A. (2022a). The unique and practical advantages of applying a capability approach to brain computer interface. Philosophy and Technology, 35, 101. https://doi.org/10.1007/s13347-022-00597-1
Jecker, N. S., Ko, A. (2022b). Brain-computer interfaces could allow soldiers to control weapons with their thoughts and turn off their fear – but the ethics of neurotechnology lags behind the science. The Conversation 02 December. https://theconversation.com/brain-computer-interfaces-could-allow-soldiers-to-control-weapons-with-their-thoughts-and-turn-off-their-fear-but-the-ethics-of-neurotechnology-lags-behind-the-science-194017
Jecker, N. S., Atuire, C. A., & Ajei, M. O. (2022a). The moral standing of social robots: untapped insights from Africa. Philosophy and Technology, 35(2), 1–22. https://doi.org/10.1007/s13347-022-00531-5
Jecker, N. S., Atuire, C. A., Ajei, M. O. (2022b). Two steps forward: an African relational account of moral standing. Philosophy and Technology, 35(2). https://doi.org/10.1007/s13347-022-00533-3
Loh, J., & Loh, W. (Eds.). (2023). Social robotics and the good life: The normative side of forming emotional bonds with robots. Transcript Verlag.
Nathan, M. J. (2023). Disembodied AI and the limits to machine understanding of students’ embodied interactions. Frontiers in Artificial Intelligence, 6, 1148227. https://doi.org/10.3389/frai.2023.1148227
Porter, A. (2017). Bioethics and transhumanism. Journal of Medicine and Philosophy, 42, 237–260. https://doi.org/10.1093/jmp/jhx001
Puzio, A. (2024). Not relational enough? Towards an eco-relational approach in robot ethics. Philosophy and Technology, 37, 45. https://doi.org/10.1007/s13347-024-00730-2
Smith, J. (2020). Self-consciousness. In E. N. Zalta (Ed.), Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/sum2020/entries/self-consciousness/
Turkle, S. (2011). Alone Together. Basic Books.
Weigmann, K. (2012). Does intelligence require a body? EMBO Reports, 13(12), 1066–1069.
Zahavi, D. (2018). Collective intentionality and plural pre-reflective self-awareness. Journal of Social Philosophy, 48(1), 61–75. https://doi.org/10.1111/josp.12218
Zahavi, D. (2020). Self-awareness and alterity. Northwestern University Press.
Funding
No funding to report.
Contributions
Each author contributed substantially to the conception and analysis of the work; drafting or revising it critically; final approval of the version to be published; and is accountable for all aspects of the work.
Ethics declarations
Ethics Approval
Not applicable.
Consent to Publish
Not applicable.
Competing Interests
None to declare.
Jecker, N.S. Extremely Relational Robots: Implications for Law and Ethics. Philos. Technol. 37, 52 (2024). https://doi.org/10.1007/s13347-024-00735-x