1 Introduction

People tend to have empathic responses toward social robots. This has been established by a number of studies in human–robot interaction (HRI) (e.g. Rosenthal-von der Pütten et al. 2014). And it is generally believed that people empathize with social robots because they attribute human properties to them (Crowell et al. 2019), a process popularly known as mental anthropomorphism (Epley et al. 2007; Airenti 2015; Damiano and Dumouchel 2018). Humans thus perceive nonhuman entities such as robots to have motives, intentions, emotions, and varying kinds of mental states (Krach et al. 2008). Moreover, sociable robots mimicking humans (or animals) have been found to easily invite people’s moral intuitions; indeed, simple morphological similarities are often enough to invoke identification and prosocial behavior (Riek et al. 2009). Considering these observations, we should not be too surprised that people have such moral intuitions and regulate their behavior accordingly around socially responsive robots.

But the essential question here for moral philosophy is whether people are right or wrong in treating robots with moral consideration. Usually, this debate is had in terms of moral status: only if a robot qualifies as a moral patient will other agents in the moral domain (humans, for instance) have obligations to treat it with moral regard (Gunkel and Bryson 2014).Footnote 1 The conditions under which some entity could be considered a moral patient have sparked quite some debate among scholars. Many theories share the basic assumption that patienthood rests on the subjective experience of mental states, whether the particular position is formulated in terms of sentience or consciousness (Torrance 2008; Singer 2011; Donath 2020), or of having interests (Neely 2014; Rodogno 2016; Basl and Bowen 2020). In this vein, Bryson and Kime (2011) have argued that extending moral consideration to robots is a category mistake caused by mental anthropomorphism; it is an epistemic error in which we ‘over-identify’ with humanoid robots and mistakenly attribute mental properties to basically insentient and non-conscious things.

But dissatisfaction has mounted against the reasoning that underlies these arguments, sometimes categorized as ‘property’ or ‘standard’ approaches (Chappell 2011; Gunkel 2017; Coeckelbergh 2018; Danaher 2019). On this thinking, moral status is credited to entities that can reasonably be said to have some defined properties, often mental or psychological, that are counted as warrants of moral status in the first place. It is often argued that not only are such approaches inherently anthropocentric (they tend to start from human properties to establish patienthood for non-human entities), they also run into epistemic difficulties, as mental properties are not discoverable. These problems have motivated several alternatives that ground moral status by other means. For example, virtue ethical approaches have suggested that rather than being about properties, internal states, or interests, the debate about moral consideration for robots should be concerned with how human behavior around robots reflects and shapes the virtuous subject (Cappuccio et al. 2019; Sparrow 2020). Others have taken inspiration from environmental ethics, which enables one to argue that moral status could obtain for teleologically organized systems such as nonsentient organisms and, perhaps, artificial objects such as robots (Basl 2019). A specific alternative is the ‘relational approach’ developed by Coeckelbergh and Gunkel (Coeckelbergh and Gunkel 2014; Gunkel 2017). They protest that standard approaches eschew moral experience and suggest with Levinas that “ethics is first philosophy”. Indeed, they hold that ethics precedes ontology and call for deeper engagement with the relational nature of moral experience.

I share Coeckelbergh’s (2018) worry in relation to standard approaches, “that empathic responses to robots are bound to remain unexplained and unjustified”, and take it as a central motivation in this paper. We cannot simply bypass emotional and pre-conscious aspects of social cognition in our conceptual analyses, as Seibt and Rodogno (2019) formulate it. In this vein, I propose a reappraisal of empathic responses and suggest, from an ethical framework, that this kind of spontaneous moral reaction should inform our moral status ascriptions for robots. I do this by exploring and developing the ethics of K.E. Løgstrup (1905–1981).Footnote 2

I take Løgstrup’s ethical analysis to be a relevant fit because it deals exactly with the kind of pre-reflexive responses to the Other we witness in HRI studies, where our moral intuitions oftentimes drive us to be more cordial than we would be on second thought. In fact, he took empathy, trust, compassion, and similar phenomena as inherently good and valued these spontaneous responses to the Other above acts of moral deliberation. And, interestingly, Løgstrup emphasized that our moral responses do not merely work toward the wellbeing of the Other, but also promote wellbeing in what I will call a ‘first-person’ perspective. Sympathetic acts of charity toward a fellow’s plight help resolve one’s encircling self-absorption, the human predicament that Augustine of Hippo and later Martin Luther dubbed incurvatus in se (Rabjerg and Stern 2018). This reveals an ethical aim native to theology—perhaps a surprising or unexpected source for dealing with the ethical challenges of human–robot interaction.

Building up to the ethical debate, I will first make some observations of the phenomenon in question by sketching extant empirical studies on empathic responses, mind-attribution, and moral intuition in HRI. In Sect. 3 I briefly discuss essential definitions and develop commensurability between the explanandum (empathic responses toward robots) and the explanans I propose (Løgstrup’s ethics). In Sect. 4 I then survey and engage with the recent debate on moral patienthood for robots, which regulates the acceptability of moral behavior toward social robots. Since existing solutions are unsatisfying for reasons I shall explore, I take steps similar to the relational approach before exploring and developing the Løgstrupian alternative in Sects. 5 and 6. Lastly, in Sect. 7, I offer a summation of the argument before reflecting on some implications from a broader societal perspective.

2 Sympathies for the synthetic

A growing body of HRI research has emerged in recent years, and a number of these studies have set out to examine the effects of anthropomorphism on emotional and moral reactions toward robots (e.g. Gazzola et al. 2007; Krach et al. 2008; Riek et al. 2009; Young et al. 2011; Rosenthal-von der Pütten et al. 2013, 2014; Rosenthal-von der Pütten and Krämer 2015; Krämer et al. 2015; Wang and Quadflieg 2015; Suzuki et al. 2015; Graaf and Allouch 2016; Crowell et al. 2019). In some case studies, test subjects have self-reported emotional responses from interacting with robots, while others supplement this by measuring neural activity (e.g. Rosenthal-von der Pütten et al. 2014; Wang and Quadflieg 2015) or electroencephalography (Suzuki et al. 2015).

Unsurprisingly, robot morphology has been found to impact empathic attitudes. In a study from 2009, Laurel Riek et al. explored how people empathized with robots along the anthropomorphic spectrum. They presented test subjects with short video clips of several robots of increasing degrees of human likeness, including one actual human person. The videos showed different kinds of mistreatment of the robots and the human in turn, and the researchers concluded, perhaps unsurprisingly, that higher degrees of human likeness in robot morphology incite more empathic user feedback. For the android ‘Alicia’, self-reported empathy levels were almost on par with those for the human ‘Anton’ (scoring 3.65 and 4.01 respectively on a 6-point Likert scale) (Riek et al. 2009). Similar studies have found that sheer size has an impact on user attitudes as well, indicating that bigger is better in terms of perceived agency (an observation echoed in Crowell et al. 2019; Löffler et al. 2019).

Suzuki et al. (2015) measured empathic responses using electroencephalography (EEG) alongside self-reporting methods. They showed test subjects media clips portraying robots and humans in supposedly painful situations, such as scissors cutting into a human hand, and subsequently into a robotic one. They generally found that the EEG data followed the self-reports and, interestingly, that in the scissor-hand test, levels of empathy triggered by the perceived pain stimuli were very similar for the human and the robotic hand. Comparing this with the findings of an older study by Gazzola et al. (2007), who found that mirror neurons activated equally in test subjects observing a human or a robot perform a given set of actions, it suggests that human brains are neurologically wired to empathize with mental states believed to be true of entities that are reminiscent of the self.

A similar study supporting and advancing this conjecture used functional magnetic resonance imaging (fMRI) to demonstrate that neurological activity usually associated with mental state attribution and mental model-building was activated in test subjects playing board games with robot interlocutors. The researchers concluded that “the same cortical network contributing to mental state attribution in implicit human–human interactions […] was activated in the human–machine interactions” (Krach et al. 2008, 6). They also found that the activity in these networks increases linearly with the anthropomorphic design of the robot. That is, to put it in the vernacular, brain centers responsible for recognizing other minds simply ‘light up’ more strongly the more the interlocutor looks human. Empathic responses make sense as a consequence of perceiving that entity as another mind.Footnote 3

If it is true that we engage in mental model-building at the neurological level when interacting with humanoid robots, the findings of one particularly interesting study by Rosenthal-von der Pütten et al. (2014) make sense. In this study, researchers had test subjects watch videos of affectionate and violent treatment of a robot and a human, and then compared the emotional reactions of participants toward the robot and the human in turn. The researchers tracked neural regions associated with emotional responses using fMRI scans in combination with self-reports, and both methods confirmed emotional reactions to observed pleasure or pain for both robot and human. And while they did find slight differences in neural activity when comparing only the videos showing abusive behavior, suggesting more emotional distress and concern for humans than for robots, “no different neural activation patterns emerged for the affectionate interaction toward both, the robot and the human” (Rosenthal-von der Pütten et al. 2014, 201). If, as Krach et al. and others have demonstrated, test subjects perceive and interact with robots as if they have mental capabilities, it is perfectly reasonable for them to also empathize with robots in the face of violent or affectionate treatment.Footnote 4

At this point it is reasonable to ask whether empathic behavior depends on participants’ ignorance of the non-sentient and non-conscious nature of the robots employed. Rosenthal-von der Pütten et al. (2013, 29) reflect on this question in another study that included robotic pets: “since numerous studies show that people usually do not admit that they see robots or agents as social beings, it is still surprising that participants admitted to having negative feelings when an artificial animal is tortured”. This and similar thoughts expressed by HRI researchers suggest that even if humans are cognitively aware that what they are facing is nothing more than ‘dumb’ machinery, they cannot help but translate social responses from human–human interaction into encounters with these novel and responsive social actors. In fact, as Airenti comments in relation to the uncanny valley effect, people are perfectly willing to interact socially with robots against their better knowledge, as if that robot were another mind.Footnote 5 Indeed, people may actually prefer that robots not leave them in doubt about their nonconscious machine nature: “Humans may interact with machines”, Airenti (2015, 125) concludes, “but they reserve to themselves the power to fill their mind, attributing both mental states and emotions”.

It seems some people not only engage in this kind of suspension of disbelief, but actively build relations with robots they know to have limited capabilities. How do these observations square with the expectation that empathic responses toward robots will recede as the ‘novelty effect’ wears off (Smedegaard 2019)? It is difficult to glean a tendency from the little research we have on the long-term effects of social robots. But given what we know about human sentimentality, one could equally well suspect empathic responses to increase. Some of the observations we do have on long-term effects suggest that attachment to robots does not fade after the novelty effect wears off. People arrange funerals for their ‘dead’ AIBOs (McCurry 2018), and one person has compared the heartbreaking loss of his JIBO to the pain of losing his mother to dementia (van Camp 2019)—suggesting levels of attachment far beyond naming a Roomba. In two studies on long-term interaction, researchers found that social interaction and attachment generally increased over time (Leite et al. 2013), especially when robots have autonomous movement and conversational abilities (Kertész and Turunen 2017). But observing that social interaction and attachment with robots in many cases continually increase is of course not the same as establishing that this translates into more substantial attitudes of moral responsibility over time.

And while long-term perspectives on human relations to robots are an illuminating factor for the moral patiency debate, these concerns lie beyond the present scope. Why? Because in the position we develop here, building on the ethical analysis of K. E. Løgstrup, it is precisely the kind of intuitive and immediate moral response to the Other we are interested in. In contrast to most moral philosophies, Løgstrup valued pre-reflexive and spontaneous responses, because moral intuition, deeply seated in our embodied co-existence, outperforms intellectualized moral reasoning in the interpersonal sphere.

3 Anthropomorphism, empathic responses, and robots—some definitions

At this point it is helpful to run through a couple of definitions of the rather interrelated, complex, and not entirely undisputed terms employed throughout. By spelling them out, I also hope it becomes clearer why the ethics of Løgstrup matches the issue at hand.

Why are emotional attitudes such as empathic concern invoked toward social robots such as NAO, Pleo or Sophia?Footnote 6 Trivially, because we anthropomorphize them. A typical definition of anthropomorphism runs like this: “the tendency to imbue the real or imagined behavior of nonhuman agents with humanlike characteristics, motivations, intentions, or emotions” (Epley et al. 2007, 864). And since robots look and behave like entities we know to be alive, sentient, and conscious, we attribute to them such mental traits (Wang and Quadflieg 2015; Airenti 2015). The evolutionary origin of anthropomorphizingFootnote 7 is thought to be that human survival was more probable with a strong cognitive faculty for agency detection. It was better to believe a predator was approaching and take appropriate measures one time too many than too few. Early humans thus had a very high motivation to anthropomorphize in order to discover and understand the behavior of perceived agents in their environment, a cognitive device still with us today. Hence, anthropomorphism has to do with agency recognition and prediction and is sometimes considered part of a “Hyperactive Agency Detection Device” (HADD) (Damiano and Dumouchel 2018, 2). Anthropomorphizing a robot is thus epistemologically questionable: correct in that it picks out a social agent seemingly performing autonomous behavior and capable of manipulating the environment,Footnote 8 but wrong in inferring the mental abilities usually true of such agents. Our cognitive faculties are simply not developed to categorize and deal with these new kinds of entities, as Nyholm (2020) also points out. It is due to this ‘epistemic lapsus’ that humans identify with robotic artifacts that look and behave like humans (or other familiar social agents), and this is ultimately what paves the way for the effective attunement to the mental and emotional states we expect to find in such entities. While anthropomorphism as an explanation raises a host of interesting questions (not least in relation to theory of mind), the significant one here is the ethical interpretation of the empathic responses it solicits humans to extend toward robots.

Defining empathy is no less tricky. At the most basic level, I take empathy to be an ‘affective resonance phenomenon’ directed at the wellbeing of others. On a narrower definition, having empathy with someone is the experience of feeling what one senses another person is feeling, a sort of copying of another’s emotional state in a specific situation (Misselhorn 2009; Maibom 2014).Footnote 9 Just as strings on guitars and pianos attune and resonate with each other in the same frequency range, people reverberate with the emotions of others when empathizing. The difference from the neighboring notion of sympathy is that while empathy involves going through another’s emotional state, sympathy is welfare-directed (Clark 1987; Maibom 2014); it is an emotional reaction toward your fellow’s plight without necessarily echoing their emotional state. While empathy is often understood as a protomoral feeling-with, sympathy, compassion, and empathic concern are variants of feeling-for and are thus closely linked to pro-social and moral behavior (Ugazio et al. 2014).

But getting too technical could prove counter-productive here, since the term is more loosely defined and employed in the empirical studies currently of interest. Observe that Rosenthal-von der Pütten et al. (2013, 2014) investigate “empathic concern”, which they take to denote the basic emotional distress directed at the suffering of others. Suzuki et al. (2015) follow Decety’s model of empathy as comprised of three components: “affective arousal, emotional understanding, and emotion regulation”. Leite et al. follow Hoffman in a quite loose definition of empathy as “an affective response more appropriate to someone else’s situation than to one’s own” (Leite et al. 2013, 303). Others (e.g. Riek et al. 2009) do not give a definition but link it to pro-social behavior, leaving it to participants to decide whether they experienced empathy toward some robot in the study. In short, empirical researchers keep their findings of empathy toward robots loosely defined, a sort of ‘empathy+’ that often includes aspects of what should technically count as sympathy or compassion. In order to encompass these studies, we need to capture and employ a broader sense of the term.

Now, as we aim here to illuminate the moral patiency question by interrogating human empathic behavior toward robots, we also need to spell out the relationship between empathy and morality (Maibom 2014). As touched upon above, empathy as a feeling-with is the affective basis for moral interpersonal behavior. In this pre-reflexive attunement to the Other, deeply seated in our responsive bodies, we have access to significant knowledge about the state of the Other that the conscious and reflective mind does not immediately have. Of course, how we choose to act morally on this knowledge is ultimately the result of other variables (Ugazio et al. 2014). But this goes to show that empathy is, at bottom, morally charged. To capture this pre-reflexive affective basis that generates spontaneous moral motions (which the subject is free to either reject or act upon), and to be in concert with empirical research, I will prefer the notion of ‘empathic response’ over sympathy or compassion.

Empathic responses have—along with most other pre-reflexive and unforced prosocial gestures—generally received little praise in western moral philosophy.Footnote 10 This has been pointed out by Stokes, who notes that neither Kantian nor utilitarian strands of ethics have valued pre-reflexive and spontaneous behavior (Stokes 2016). Morality proper is thought to rest on deliberation about maxims, principles, or utility, and thus ‘thoughtless’ actions have received marginal attention. Løgstrup is one exception, in whose work the spontaneous responses to fellow men, especially the stranger, take absolute center stage. But are his definition and phenomenological approach commensurable with the responses HRI research has classified as empathy? Since Løgstrup wrote in Danish, he did not use the term empathy but rather ‘medfølelse’, which translates literally to ‘feeling-with’ (Løgstrup 2015). But it is clear that Løgstrup thought of medfølelse as welfare-directed, as he defines it as the preoccupation with the Other’s plight to “remove hindrances to the freedom and flourishment of the distressed” (2015, 271, own translation). In this way he employs a broader definition of empathy, not too far from the ‘empathic concern’ found in Rosenthal-von der Pütten et al. (2013, 2014).Footnote 11

In sum, empathic responses denote an emotional and pre-reflexive concern for the wellbeing of others that includes extra-empathic motions—a definition that encompasses its uses in empirical studies, captures the basic moral nature of the phenomenon, and is in concert with the way Løgstrup employs the term.

4 Moral concern for robots and the patiency-debate

We are now in a position to move to the normative side of the issue at hand: whether empathic responses are morally blameworthy or praiseworthy. Answers to this are dependent on the status of robots as moral patients. We have seen that people tend to regulate their moral behavior toward social robots on account of empathic responses, de facto extending to them some degree of moral patiency; but do robots qualify as proper objects of human moral concern?

As introduced above, the debate about the moral status of robots has standardly been a discussion about properties and which of them permit moral status. On the property approach, popular in the analytical tradition, moral patiency obtains for any entity that possesses relevant and sufficient properties. Consequently, moral concern and behavior toward entities devoid of the relevant properties is simply mistaken. But there seems to be very little agreement about which properties are the necessary or sufficient ones. One common denominator seems to be following Jeremy Bentham in his break with the Cartesian tradition—which for a very long period dictated rationality as the determining quality for moral standing—and his suggestion that we rather ask ‘can it suffer?’ than ‘can it think?’ when considering moral status. Entertaining this question led to preferring sentience over rationality, and the resultant increase in sensitivity to the suffering of other beings has led to a significant expansion of our moral circle, as the history of ethics testifies (cf. Singer 2011). At the very least, then, the ability to feel pain seems to be a central property. But, as Dennett (1998) and others point out, we do not properly understand pain. Do we consider only pain with corresponding mental qualia, or do we take instinctive pain (in lower animals) or symbolic representations of pain (in robots) to be deserving of moral patiency?

In his model of moral patienthood, Torrance (2014) takes the conscious experience of suffering and satisfaction as constituents of moral patiency for artificial agents, meaning that entities capable of deriving pleasure or pain from having their goals achieved or frustrated deserve moral status. Others, in the same vein, include the ability to have significant interestsFootnote 12 as a criterion for moral patiency, arguing to the effect that only conscious beings are able to enjoy such interests (Rodogno 2016; Eberl 2017; Basl and Bowen 2020).

Some criticism has mounted against these approaches. A common objection is that mental properties are not discoverable, as Danaher (2019), among others, has put forward. Mental states are not observable, as Turing also noted; we can only infer them on the grounds of observable behavior (Turing 1950). Passing the (in)famous Turing test is no proof of conscious thinking or experience, only of skillful manipulation of symbols. As Searle famously put forward in his ‘Chinese room’ argument, syntactic competence does not suffice for semantic understanding (Searle 1980).

As a way forward, Danaher advocates ‘Ethical Behaviorism’, which can be read as a modest version of the property approach (Danaher 2019). Given the “epistemic opacity of properties”, we should simply take behavior to be sufficiently indicative of mental states. And we should affirm the moral status of robots when their behavior is “roughly performatively equivalent” to that of other entities enjoying moral patiency. He also argues that the performative threshold above which moral status obtains might not be that high. On this utilitarian outlook, this is no different from how other entities come to enjoy moral patiency in our moral circle—we take their behavior as indicative of corresponding mental properties. No, we do not have any certainty of ‘what’s going on, on the inside’, but on this approach it does not matter. According to Danaher, we should simply bite the bullet and concern ourselves only with the behavioral testimony to properties.Footnote 13

Another alternative that shares the insight that behavior is a key concern is the virtue ethical approach. Virtue ethical accounts have received some traction in recent years (Vallor 2016; Cappuccio et al. 2019; Sparrow 2020; Coeckelbergh 2020), and they are generally concerned with delineating how interactions with robots both reflect and cultivate a virtuous character. Conversely, such approaches are concerned that cruel or inappropriate behavior towards robots—whether enacted unprovoked or solicited by the robot or the fantasy around it—reveals and exacerbates a vicious character. It is thus out of concern for human character formation and the shaping of appropriate habitual responses that moral consideration for robots should obtain. The price for not observing common moral conduct around robots representing humans and animals is the corruption of human morality and the risk that undesirable conduct will ultimately translate back into relations with real humans or animals. Moral obligation owed in relation to robots as patients is then neither directed at them, nor derived from their inherent value based on some property, but owed to human morality or the humanity that robots represent.

Others take issue with the inherent anthropocentric bias of property approaches (Coeckelbergh and Gunkel 2014; Gunkel 2017; Coeckelbergh 2018). That is, proponents of this approach often start with properties true of human beings, and then proceed to find those properties in other entities. It seems misguided, they argue, to establish patienthood for nonhumans by looking for human properties. Coeckelbergh and Gunkel raise several other objections and think the cumulative case warrants another approach altogether (Coeckelbergh and Gunkel 2014). Whether they have amassed something insurmountable for property approaches is beyond the scope here, but I think their call for attention to moral experience motivates exploring alternatives. Property approaches simply lack engagement with and attention to the emotional and relational forces of social reality, and the worry is “that empathic responses to robots are bound to remain unexplained and unjustified” (Coeckelbergh 2018, 147).

For these compounded reasons, Gunkel (2017) thinks we need to ‘think otherwise’, and with Coeckelbergh he advocates a ‘relational approach’ (Coeckelbergh and Gunkel 2014). Crucial on this understanding is the priority between ontology and ethics. In their joint article on the moral standing of animals,Footnote 14 Coeckelbergh and Gunkel follow Emmanuel Levinas in contending that ethics precedes ontology. On this view, “morality is not a branch of philosophy, but first philosophy” (Gunkel 2014, 126), while the relational aspect of human nature is emphasized. Contrary to a property approach that begins by making ontological determinations about who or what is a legitimate moral subject, they propose with Levinas to see it the other way around: moral and social relations are given, and the Other always and already obligates me in advance of customary decisions and debates concerning who or what is (not) a moral subject (Coeckelbergh and Gunkel 2014). We first respond to the Other, and then, after having made the response, we identify and determine what we responded to (Gunkel 2017). Thus, when our moral intuition informs us to treat some encountered entity with moral consideration—when an entity ‘supervenes’ and ‘faces’ us and demands that we respond (Coeckelbergh and Gunkel 2014)—it truly becomes an Other for us in the Levinasian sense. Coeckelbergh (2018, 149) maintains that this approach “takes seriously the phenomenology and experience of other entities such as robots, and sees moral standing not as the starting point but rather as the outcome: moral standing is itself the outcome of the process of relation and interaction”. Gunkel (2017, 10) further infers with environmental ethicist Callicott that “relations are prior to the things related” and can thus agree with Coeckelbergh that moral status does not obtain on intrinsic grounds, but extrinsically: “it is attributed to entities within social relations and within social context”.

It is not entirely clear whether this account amounts to metaphysical constructivism, though the idea that we identify and determine the Other subsequent to our response to them seems to suggest so. On a weaker reading, they might simply be saying that moral responses and intuition serve as heuristics for ontology. The difference lies in whether moral status is constructed as an ontological reality or rather discovered whenever an entity ‘supervenes before us’ and triggers our moral responses. In either case, moral patienthood for robots would on this account ultimately obtain on the basis of human perception: on how we intuit and respond to the Other and construct it in relation to us (thus ultimately not getting rid of the anthropocentric bias, one might add). But since we each have different moral intuitions, should applying the Levinasian idea not mean that patienthood follows only for specific robots in relation to specific people, depending on their moral response to the robot? To my knowledge, Coeckelbergh and Gunkel have not in their published work contemplated whether moral patiency should obtain only on the individual level, but rather seem to think in universal terms. And I suspect suggesting an inversion of ethics and ontology is by definition an across-the-board enterprise. Perhaps this is why they hold back on taking the Levinasian idea to its logical conclusion and never explicitly defend moral patienthood for robots; the implications are simply dauntingly vast. Rather, they entertain and scrutinize the idea and formulate their approach as a new “relational and moral hermeneutics” (Coeckelbergh 2014) and a way of “thinking otherwise” (Gunkel 2017).

In any case, I agree that the epistemic uncertainty of mental properties and the neglect of social-relational experience are good reasons to explore alternatives to standard approaches.Footnote 15 I share Coeckelbergh’s concern that empathic responses are left unexplained and unjustified. The view I develop here is in many ways compatible with the relational approach, and comparing the Levinasian approach to a Løgstrupian one might well contribute to honing it. The analysis I offer will, however, bring out some details that direct it toward a different and more limited conclusion with respect to moral patiency for robots. My view also shares with virtue ethics an emphasis on the effects of interpersonal behavior for human flourishment. But as we will see below, this is a superficial agreement that diverges at a deeper anthropological level. Yet before developing my account in relation to these similar ones, I first offer an exposition of the central tenets of Løgstrup’s ethical thinking for our purposes.

5 The Løgstrupian alternative

The aim of bringing Løgstrup’s ethical apparatus to bear on the present issue is not merely to give a phenomenological interpretation of empathic responses to robots, but ultimately to provide a normative framework for moral status ascription. As teased above and developed in the following, Løgstrup found spontaneous empathic responses directed at the wellbeing of others to be instances of genuine good. This idea lies at the root of the argument advanced in this paper—that we can acceptably take robots as moral patients—so exploring it here is an essential task.

To get under the hood of Løgstrup’s sometimes rather convoluted thinkingFootnote 16 before we spell out the implications for moral patienthood for robots, we need to unpack a metaphysical dualism known in the Løgstrup reception as the Doctrine of two accountsFootnote 17 (Rabjerg 2017). To Løgstrup, the prime ethical question is not why there is so much evil in the world, but rather why there is good—such as our empathic responses to fellow man, animals, and now apparently also social robots—given our selfish ways and everyone’s struggle for themselves. The brilliance of Nietzsche, according to Løgstrup, was his brutal exposition of man’s evil and hypocrisy (Løgstrup 2014).

But if man really is just a selfish beast, how come we experience love, mercy, trust, empathy, and so forth, Løgstrup asks. Conceding that human will is bound by selfishness was rather uncontroversial in Løgstrup’s own Lutheran tradition,Footnote 18 and it also fit well in a Darwinian scheme. Left to our own devices, human beings are ‘curved in on themselves’, incurvatus in se. And while Løgstrup did adopt this Lutheran tradition of describing humanity’s sinful nature, he also maintained that we sometimes experience that the centripetal force of our incurvature is displaced (Rabjerg and Stern 2018). This would occasionally happen, observed Løgstrup, when responding to the plight of fellow man. Charitable acts directed at the welfare of others had the potential of breaking the subject’s encircling self-absorption and calling forth instances of genuine good.

But Løgstrup found it implausible and inconsistent to credit these acts of kindness to the human self, if human volition was indeed radically corrupted and selfishly incurved. To avoid this, Løgstrup suggested that another source of goodness had to exist, outside of human volition—and that this is what Nietzsche, Kant, and Kierkegaard had overlooked (Løgstrup 2013). They are blind to this because they only keep one account, the anthropological, as Løgstrup terms it. Since that which breaks our incurvature never manifests in a social vacuum, but only appears in our interdependent lives when responding to fellow man, the solution for Løgstrup was to open an ‘ontological account’ to which these phenomena can be credited (Løgstrup 2010). By denoting it ‘ontological’ he distinguishes it firmly from ‘the anthropological account’, suggesting the account is ‘life itself’. By positing another account, Løgstrup avoided the problem he thought Kant and Kierkegaard struggled with, namely explaining how good and evil coexist within the inner human, where they battle for supremacy and cause humans to alternate between good and bad actions depending on how firmly the moral reins are held by human will, maxims, or successful rational deliberation.

Genuine good is thus something that emerges between us and not something that originates from human deliberation or volition, as the latter was impossible on Løgstrup’s reading of the negative Lutheran anthropology just mentioned. Consequently, he regarded rationalized and coerced moral behavior as calculated appropriations of the genuine selfless neighbor love that emerges as a pre-reflexive response to the Other. In this way, spontaneity became a hallmark of the genuinely good act, barring moral deliberation about how to be a virtuous subject from tainting the pure and ‘sovereign’ act. He termed these responses ‘sovereign expressions of life’ (suveræne livsytringer), first in an article on ‘Sartre’s and Kierkegaard’s account of the demonic enclosure’ (Løgstrup 1966), and developed them later, most notably in his Controverting Kierkegaard from 1968 (Løgstrup 2013). Besides empathy, the examples Løgstrup often mentioned include trust, love, mercy, and openness of speech.

Let us briefly look at what larger role sovereign expressions play in Løgstrup’s thinking before we draw out the implications for the present issue. Løgstrup found them to correspond to the ethical demand, and he thus developed the concept of sovereign expressions not just as a response to the anthropological problem, but also to the ethical one. But why think there is an ethical demand to care for others in the first place, and how is such a demand fulfilled?

Given our interdependent human lives and how we constitute each other’s world, we always find ourselves in power relations with one another. These power relations become manifest in how we always have something of the life and welfare of the other within our power. Or, as Løgstrup metaphorically has it in The Ethical Demand (2010): we always have something of the other’s life in our hand. No matter how small or great the amount, it is always up to the individual to decide whether to administer it to the destruction or flourishment of the other. The ethical demand is then nothing more or less than the demand to always take care of however much of the other’s wellbeing is within our grasp.

But why heed this demand? Or: why think that the other’s wellbeing is my burden? Løgstrup does not point to God as a moral lawgiver, as could be expected in his tradition, since his aim was to formulate the ethical demand in secular terms. Instead, he underpinned the ethical demand with the notion of life itself as a gift. Since we all receive life as an unmerited gift but have no benefactor to respond to,Footnote 19 we are left to direct our gratitude to the people who constitute our world. In fact, Løgstrup (2010) formulates the implication of being unmerited receivers of life as being ‘in debt’. And to ward off protests to the demand in the name of reciprocity, Løgstrup was quick to qualify the demand as one-sided.

But the overwhelming problem for the moral subject, as Løgstrup sees it—and now slowly coming back to the sovereign expressions of life—is that the ethical demand is principally unsatisfiable. Why? Because as soon as we realize, or ‘hear’, the demand, it is already too late; hearing the demand marks the absence of love. If one has to be told to be merciful and do good (whether by reason or by a moral lawgiver), one has already failed. In other words, a sovereign expression failed to emerge because the subject was busy deliberating about utility, motives, maxims, right and wrong, etc. In response to hearing the demand, I appropriate what I should have been doing immediately, had I acted from spontaneous neighbor love. As Løgstrup (2013, 127, own translation) formulates it: “Morality is the supply of substitutive motives for substitutive acts”. In this critique of Kantian ethics, Løgstrup points out that deliberating about maxims (or duties, utility, responsibilities, motives, etc.) is not just ‘one thought too many’ (as Bernard Williams had it): it is a testimony to moral failure.

This brings us full circle back to the sovereign expressions of life that correspond to the ethical demand: they begin to emerge in a pre-reflexive state before one hears the demand. Sovereign expressions are spontaneous or pre-conscious, something that happens ‘behind our backs’, as Løgstrup interchangeably has it. Only in this way are we able to satisfy the ethical demand, though strictly speaking we are not the subject of those acts. Our job is to surrender to or consummate the sovereign expressions (Løgstrup 2013) without corrupting these genuine goods with our selfish desire to be good.Footnote 20

Now, beginning to draw out consequences for our present purpose, it is important to note that Løgstrup’s very central idea was that sovereign expressions manifest for the flourishment of the life of the Other. But he also argued that the acting subject is a beneficiary of the sovereign expression too.Footnote 21 In other words, these expressions fulfill a ‘double task’: they promote wellbeing for both agent and patient, or what I will call ‘first-person’ and ‘third-person’ benefits respectively. This is a critical distinction here, since even when there is no flourishment or interests per se to promote for the robot Other, there still is for the human counterpart. I do, however, suspect Løgstrup regarded the first-person benefit more as a corollary than as primary, though he definitely considered it a vital one.Footnote 22

The first-person benefits of consummating the sovereign expressions of life are that they break the centripetal force of our self-encircling thoughts and feelings that characterize the human incurvatus in se. We are liberated from this “inturnedness” not by God’s grace, but only through our ethical encounter with other people, as Rabjerg and Stern (2018) argue. And for precisely this reason—that sovereign expressions of life only emerge in social encounters—we need other social beings to be ‘unturned’. We are captives within ourselves but set free in social interaction, to rephrase a journal entry by Løgstrup.Footnote 23

To illustrate: a friend of mine happened upon a lost tourist couple, looking for a meal in her small-town neighborhood after all stores and restaurants were closed. Overcome with empathy for them, she invited the couple home for dinner before she could stop to think it over. Would it be okay with the family back home? Was this proper use of her economic and social capital? And why should the couple’s poor planning be her burden? But going through with her empathic response—in Løgstrup’s terminology, consummating the sovereign expression of life—she invested herself in trying to care for these people to the best of her imagination and ability. In taking the strangers home to cook for them, she was moved by the openness and kindness emerging between them; not just hers, nor solely theirs. She has often since described that night as one of her best memories, and how an openness toward the world lingered with her. To push the Løgstrupian vocabulary: the spontaneous sovereign expression turned her outward from her inturnedness.

Suppose now that the couple had merely been actors or philosophical zombies. In that case, the couple was not really helped by my friend, but she might still have escaped the incurvature of selfishness through the experience. By the same token, this could simply have been a robot (albeit of a future and more sophisticated kind), and the encounter would in principle still have the potential to promote my friend’s wellbeing. But, on a smaller scale, such interactions could in principle happen now in relation to responsive social robots we anthropomorphize and ‘imaginatively perceive’Footnote 24 as others.

To recapitulate the point: sovereign expressions contribute to first-person flourishment, separate from the needs they meet on the receiving end of the expression. Or, if you will, both subject and object are on the receiving end of sovereign expressions. And while singling out the positive first-person effects and disregarding the third-person benefits is probably beyond Løgstrup’s intent,Footnote 25 it is nevertheless on this critical development that my argument turns. It allows me to be agnostic about the metaphysical and technical possibilities of robots acquiring mental states or intrinsic interests that might ground their moral status. But more importantly, it provides a normative reason for the permissibility of taking robots as moral patients.

On this approach, then, we have an interpretive framework for the empathic responses toward robots that empirical research reports. The ethical demand impinges on test subjects witnessing robots being tortured, for example, and an empathic gesture might emerge as a spontaneous response. But obviously, picking up on an empathic acting impulse is far from equal to a fully consummated sovereign expression. The lab cases or the robots themselves might be too limited in design or interactional ability for someone to bring sovereign expressions to fruition. Likewise, we might not be able to go through the motions in relation to many extant social robots in the wild. This might in both cases cause awkwardness or discomfort for the human counterpart, not unlike the uncanny valley phenomenon. Moreover, this difficulty might even be accompanied by inappropriateness, if sovereign expressions toward robots divert attention away from true third-person beneficiaries, i.e. sentient beings like humans and animals. I shall discuss this more in closing, after considering how my Løgstrupian approach measures up against other approaches in the literature, specifically virtue ethics and the relational approach.

6 Differentiating the Løgstrupian approach

The position developed above shares with other approaches the idea that patienthood for robots might be credited on extrinsic premises rather than intrinsic ones. Specifically, my proposal shares with virtue ethical approaches the prospect of what I have termed ‘first-person benefits’ from extending moral consideration to robots. In virtue ethical approaches, this arises as an opportunity to exercise or promote a virtuous character, as Cappuccio et al. (2019) have it. In their formulation, a virtue ethical account “recommends treating social robots in a morally considerate manner because this is what a humane and compassionate agent would habitually do in their social interactions and because the opposite behavior would not be compatible with a virtuous lifestyle and moral flourishing” (Cappuccio et al. 2019, 13). As a consequence, mistreating a robot would be detrimental to the subject’s character by animating a vice, and this is why robots should be treated with moral consideration, even if such mistreatment does not cause harm to any sentient being. Along those lines, others invoke Kant’s ‘cruel habits argument’ to describe this effect, that “he who is cruel to animals becomes hard also in his dealings with men” (e.g. Darling 2016, 1). More broadly, in the vocabulary of Vallor (2015), our interactions with robots carry the potential risk of moral ‘deskilling’ and, conversely, the opportunity for moral ‘upskilling’, in the same way that employing new technologies has historically demoted certain practical skills while promoting others.

This way of framing how interaction with robots might contribute to human flourishing, however commendable and educational such accounts might be, is quite far from what we can take Løgstrup’s ethical thinking to support. Løgstrup himself was very skeptical that we could edify our moral selves. His critique was not only born from his anthropological pessimism (cf. Rabjerg 2017; Rabjerg and Stern 2018); he also regarded the project of exercising morally correct dispositions as a way for motives and outcomes to come apart (Løgstrup 2013, 128–129). In the self-reflection on aligning motivation with virtues, the individual loses sight of the Other and is “thrown back onto itself” (2013, 128). For this reason, the motivation to extend moral consideration to a robot (or another human, for that matter) as an opportunity to exercise one’s virtues while protecting one’s moral character from the corruption of bad behavior essentially runs contrary to his two-accounts thinking (2010, 158–162). Our moral characters are already corrupted, he would argue, and the self-congratulatory attempt at being good is simply human incurvature in disguise: civil on the outside, but really just a way for the enclosed self to continue its self-encircling. The first-person benefits of taking robots as Others that I propose with Løgstrup ultimately lie elsewhere than where virtue ethicists suggest, namely in the capacity of sovereign expressions to displace our inturnedness.

Now, there is another tricky and overlooked issue related to moral status conferred extrinsically, one that applies to the relational approach as well as to the one I have developed here: if moral status depends on extrinsic social premises—such as people’s moral responses or intuitions—what happens to moral status if those extrinsic premises change? Suppose some users no longer perceive the robot as a patient, or different users have differing views and intuitions regarding the same robot: will its moral status change so that it is a patient only sometimes? Exactly how volatile moral status is in relation to shifting moral responses among users is beyond the present scope. But bearing a sensitivity to this issue in mind, I think we can only stake a modest or ‘weak’ claim of patienthood for robots when working from this kind of extrinsic premise. ‘Weak’ here means permissible rather than obligatory and individual rather than universal—provided the extrinsic premises hold. By contrast, on a ‘strong’ position all agents in the moral domain would have an obligation to respect the robot as a patient.

I think the Løgstrupian position I develop differs on this point from the relational approach based on Levinas. The phenomenological description of how robots “supervenes before us” (Gunkel 2017, 10) and “face us, take us out of our self-involvement, and demand from us that we respond” (Coeckelbergh and Gunkel 2014, 723) is remarkably close to the Løgstrupian vocabulary. Yet subordinating ontology to ethics and giving the latter priority in both temporal sequence and epistemological status suggests a fundamental disagreement with Løgstrup’s ontological ethics (cf. Thornton 2020). And in the present context, it might be the Levinasian axiom of ethics as first philosophy that drives their relational approach towards a stronger conclusion than what I suggest the extrinsic premises can bear.

7 Conclusion and perspectives

The Løgstrupian approach I have suggested and developed here provides an understanding of and appreciation for the moral engagement with robots that research in HRI has demonstrated to occur, and that we might expect to increase as social robots continue to develop and proliferate. I have argued that empathic responses incited by engagements with robots, when anthropomorphizing and intuiting them as Others, can be read as the impulse of sovereign expressions of life. Such moral phenomena can promote good for the moral agent (the human), even if the patient (the robot) of this moral concern lacks the formal properties of being a moral patient, namely in the way such forces displace the inturnedness of our human incurvature. By conceiving of empathic responses and similar moral phenomena as sovereign expressions, treating robots as moral patients has its merits, not because robots currently are true beneficiaries of these expressions but because we as human agents are. The implication is that the moral status of robots as patients is derivative of their relation to humans, and for this reason we can only mount this argument to defend moral patienthood for robots in individual cases.

One obvious criticism is that the implication of my argument is quite exploitable by creators and interest-holders of robots. We should not be blind to scenarios where social robots are employed as friendly front-end interfaces of extensive data-harvesting systems. In the service of these or more malicious interests, manufacturers will likely design robots to be evocative objects that deliberately invoke empathic responses. As designed objects, robots materialize and mediate certain moral values and norms intended to guide human behavior (Verbeek 2017). Empathic responses might consequently be staged. Should we really count teased responses among genuine instances of good? I think the short answer is: sometimes. This bullet is easier to bite when we realize that this problem is already with us: I could just as well have ulterior motives in baiting your empathic response directly—we do not need to wait around for robots to embody shady motives to bring out the problem. But robots as proxies for this problem certainly magnify and complexify it. Yet I do not think this detracts from the principal goodness of sovereign expressions; rather, it shows how elaborate we can be at hindering them. Granted, some scenarios might render empathic responses to robots inappropriate, mistaken, or even harmful.

One reason for tempering our empathic responses to robots is if they divert emotional resources away from comparably more valuable human relationships. Turkle (2011) has been a frontrunner in championing the concern that if robots eventually provide easier and more agreeable company, humans might increasingly opt for the fake appropriation only to end up lonely.Footnote 26 I share her sentiment: humans need humans. But I also think we need more research on the long-term effects of relationships with robots to glean whether Turkle might be right in this bleak assessment, as briefly discussed in Sect. 2. If accepting the robot as a moral patient-for-the-user has adverse effects on the user’s wider human relations or otherwise disrupts human communities and societies, we might want to regulate robot design so that robots actively deter users from responding empathically—especially if true third-person beneficiaries are around.

Mentioning these examples is of course just scratching the surface of all the relevant aspects to consider in a broader societal application of moral patiency for robots. Beyond the present scope also lie legal conceptions of patienthood that are vital to consider in this eventuality. In this context, some momentum has gathered in favor of extending some kind of personhood to robots (e.g. Gellers 2020). And while the two are distinct fields, we should like our legal framework and moral convictions to converge to some degree.

As a contribution to this wider discussion, I have in this paper developed an argument in favor of taking robots as moral patients on the basis of one particular ethical framework. Since the social affordances of robotic artefacts seem to allow for sovereign expressions of life to emerge for the benefit of the human agent, individual robots can acceptably be taken as moral patients in relation to individual humans.