1 Introduction

Scenarios are an important instrument for ethical reflection on the future of our technological culture and practices. For example, in the area of health care and elderly care, I have written a short story about a robotic dog and have proposed an approach which explicitly creates and engages with such scenarios as a way to explore future possibilities and problems (Coeckelbergh 2012). It is also helpful to interpret recent films and other fictional works on this topic, for instance the film Robot and Frank (2012). Narratives and thought experiments are great tools for philosophical reflection on health care and elderly care, since they tap into our ethical sensitivities and assumptions about care, robots and other care technology, the elderly, and especially what we think good or bad care is.

Often scenarios about robots or machines in elderly care are doom scenarios. They depict elderly people as isolated from human beings, only “cared” for by machines and living in a virtual world. Sparrow and Sparrow, for instance, present the following scenario:

[W]e imagine a future aged-care facility where robots reign supreme. In this facility people are washed by robots, fed by robots, monitored by robots, cared for and entertained by robots. Except for their family or community service workers, those within this facility never need to deal or talk with a human being who is not also a resident. It is clear that this scenario represents a dystopia rather than a utopia as far as the future of aged care is concerned (Sparrow and Sparrow 2006, p. 152).

The authors argued that robot technologies would not only reduce human contact but also be deceptive: the robots would deceive elderly people into thinking that they were companions who care about them, are pleased to see them, etc., rather than machines (which the authors call their “real nature”), and would create a virtual reality that may make people happy but leaves them deceived. Both loss of human contact and deception are frequently mentioned in the discussion about the ethics of care robots (see also, for example, Sharkey and Sharkey 2012).

Now doom scenarios are not, by themselves, necessarily useless or bad, since they can stimulate our moral imagination and make us more aware of what we (do not) want and value. For instance, this particular doom scenario makes it clear that we do not want elderly care without human care givers. However, it is the task of philosophical reflection and responsible empirical research to critically question the assumptions made in these narratives and scenarios (for example, about present practices), to articulate their normative message (if it is not already explicit), to create and explore different and better narratives about the present and the future, and to study how to improve the technologies and the practices in which they may be embedded and which they may change, for example ICT-mediated elderly care practices.

In this paper, I articulate and critically reflect on three assumptions revealed by Sparrow and Sparrow’s (2006) narrative and similar doom scenarios, assumptions which get something wrong about present ICT use and elderly care. The first assumption is that (elderly) care is necessarily incompatible with deception. The second assumption is that robots and other intelligent and autonomous health care technologies necessarily imply the creation of a “virtual” world as opposed to the “real” world and therefore effectively imply a case of deception. The third assumption is that the elderly people of the future are the (very) aged people of today. In response to these assumptions, I propose an alternative view, which does not rest on these problematic assumptions but is nevertheless able to critically evaluate care robots and, more generally, the present and future of technology-mediated care. My articulation of this alternative view also reveals a fourth problematic assumption: that the nature of technology is limited to material artifacts.

2 Criticizing three assumptions in doom scenarios about the future of care: deception, virtuality, and limited digital literacy

In doom scenarios such as that by Sparrow and Sparrow (2006), elderly people who need a lot of care are entirely isolated from the rest of society and are monitored and kept “happy” with a range of information technologies, including autonomous robots. With regard to deception, such doom scenarios therefore give a clear normative message to the reader: they suggest that (1) good care is incompatible with deception and that (2) good care should not be deceptive in the sense that it should be “real” care defined as care given by humans in the “real” world as opposed to the “virtual” world created by ICT-mediated care.

I deeply share the concern about the ethical quality of the future of care that is present in these doom scenarios and agree that we should not leave elderly people in the hands of machines, at least if this means that humans are no longer involved in care. Most of us will agree that good care includes human care givers—whatever the reasons given for this claim (I will not discuss them here). Yet I believe the worry about deception, however genuine and authentic, is problematic if it assumes that good care should never involve any form of deception. While there may be a prima facie obligation to treat people as autonomous persons, care givers also have an obligation to take into account the particular empirical situation and capacities of the elderly person, and in some cases the result might involve “deception” without this necessarily being morally problematic. I will further develop this point in the next section.

Second, the worry about deception is misguided in so far as it suggests that the use of robots in care (and perhaps also so-called virtual reality and games in care) sets up a “virtual” world as opposed to a “real” world. I believe this is a very problematic interpretation of our present use of information technology, since it assumes a dualistic view of ICT-mediated living: it assumes that on the one hand there is the “real” world of human care and on the other hand there is the “virtual”—meaning “fake”, “deceptive”, and “illusory”—world of robot care. But this view does little justice to the phenomenology and practice of ICT-mediated care and ICT-mediated living in general. When we use the Internet, robots, and other networked electronic devices, we are not really leaving the “real” world (even if we might want to!); instead, when our presence and existence are mediated by technology, they are still “real” in all kinds of senses: embodied, connected to materiality, and thoroughly social. Our existence is still embodied, since our minds do not disconnect from our bodies when we use the technology; we still think and act as the beings we are—with a body. It is also not entirely immaterial; we still use material technologies, and this is especially the case with robots. Interaction with robots—in care and elsewhere—is as material and physical as interaction can get. And if we use ICTs to communicate with (human) others, this is not “virtual” sociality as opposed to “real” sociality; it might be a different, technology-mediated form of sociality, but it is still sociality. In all these cases, our existence is not any less “real”. The real should not be defined as the opposite of the virtual; if the term “virtual” still makes sense in the non-dualistic approach we may want to develop, it should be defined as a kind of reality or “world”, a mode of experience and perception. ICT-mediated care, then, might be different from other forms of care (and we should take its distinctive phenomenology seriously), but it is not any less real.

Furthermore, even if one wanted to hold on to the real-virtual dichotomy, it is not the case that we have to live either in a “virtual” world or in a “real” world. By now we have learned to move seamlessly between the two worlds or, rather, to effortlessly “mix” the two. We simultaneously live online and offline. Floridi calls this “onlife”, as opposed to either living “online” or living “offline”. He writes:

[I]t is progressively more difficult to understand what life was like in pre-digital times and, in the near future, the very distinction between online and offline will become blurred and then disappear (Floridi 2011, p. 477).

Now if we are already blurring this distinction in our young and middle-aged “onlives”, we will probably do so in the future too, when we are very old. The evolution of ICT-mediated living seems to move in the direction of more, rather than less, blurring of such distinctions—if they will still make sense at all to our grandchildren.

This brings me to the third problematic assumption frequently found in scenarios and arguments about elderly care and technology. Doom scenarios such as Sparrow and Sparrow’s (2006) assume that the elderly of the future—their lives and their use of ICTs—are similar to the elderly of today, in particular those of very old age whose digital literacy skills are indeed quite limited. If these were the future elderly, then of course they would have the impression that they were being deceived, that they lived in a “virtual” world as opposed to the “real” world they knew from home. But this assumption is wrong: we—highly computer-literate people or even digital natives—are the elderly of the future. Moreover, already today we observe that the “younger” elderly (aged 60–70, or perhaps 65–75) are rather well “at home” in the onlife world. They might not be as quick at picking up new apps and new ways of using ICTs as digital natives are, but when these “younger” elderly people need more care than they do now, they will not enter these higher levels of care and these new, ICT-mediated care environments as digital illiterates. It is unlikely that they will strongly oppose the use of robots and other ICTs in care.

In fact, quite the contrary may happen. Taking into account how the present generations are living with ICTs, it is likely that people in the near future will demand that these electronic technologies be in place and will demand to stay connected. It might well be an exaggeration to suggest that “being connected” will be regarded as a human right, but it is plausible that access to these technologies will be seen as a matter of some importance, including ethical importance. In other words, it might become a problem if care for elderly people did not involve current information technologies. The same may happen with robots in care if they were widely adopted—devices which are unlikely to function as stand-alone machines but would probably be embedded in all kinds of electronic networks and media.

One way of framing this normative demand, for instance, is not to use the language of rights (which is perhaps indeed too strong) but the concept of capabilities. Nussbaum and Sen (1993) have used the term to express that although we may have formal access to some goods or resources, we are not necessarily capable of using them in a way that empowers us and effectively increases our quality of life (Nussbaum 2000, 2006; Nussbaum and Sen 1993). In particular, Nussbaum (2006) has argued that human dignity requires “an appropriate threshold level” (Nussbaum 2006, p. 75) of the following “central” human capabilities:

1. life: “Being able to live to the end of a human life of normal length; not dying prematurely, or before one’s life is so reduced as to be not worth living”

2. bodily health (includes nourishment and shelter)

3. bodily integrity: free movement, freedom from sexual assault and violence, having opportunities for sexual satisfaction

4. senses, imagination, and thought: being able to use the senses, imagination, and thought; experiencing and producing culture, freedom of expression and freedom of religion

5. emotions: being able to have attachments to things and people

6. practical reason: being able to form a conception of the good and to engage in critical reflection about the planning of one’s life

7. affiliation: being able to live with and toward others, imagine the other, and respect the other

8. other species: being able to live with concern for animals, plants, and nature

9. play: being able to laugh, to play, to enjoy recreational activities

10. control over one’s environment: political choice and participation, being able to hold property, being able to work as a human being in mutual recognition

(Nussbaum 2006, pp. 76–78; my summary)

Good elderly care, and especially preserving human dignity in elderly care (which is arguably a concern shared by Sparrow and Sparrow (2006) and other doom scenarists), then requires that care seeks to maintain or enhance elderly people’s capabilities, including when care robots and other ICTs are used in the future. However, this way of formulating the problem assumes that capabilities and ICTs are independent entities, which is problematic. As I have argued previously (Coeckelbergh 2011; see also Coeckelbergh 2012), capabilities are not technology-independent but are always transformed by technological developments. For instance, what we understand by “social affiliation” (number 7 in Nussbaum’s (2006) list) today is different from what people in the past understood by it, for instance in the context of a medieval village or even cities 50 years ago. Today our capability for affiliation also depends on whether and how we are able to use the Internet and social media. Our social life has changed through these technologies; it has all kinds of online/offline (“onlife”) variations. To restrict someone’s access to the web and related technologies, then, may be seen as jeopardizing that person’s “social affiliation” capability. The same might happen with robots, provided they would play a key role in social relations. Now if this happens and people’s social relations become dependent on these technologies (and if this is how people view or will view sociality), then normative notions are likely to change too. One could then argue that elderly care is less good care if it does not offer a range of technologies, including robots, which enable people to exercise their social affiliation capability, since this capability has already been transformed by digital, electronic technologies.

Now I cannot and do not want to predict the future. This impact on the capability of social affiliation might not happen. But the more general point I want to make here is (1) that there is clearly a sense in which we, and therefore the elderly of the near and more distant future, live different lives due to these technologies, which really impact our capabilities, and (2) that this has normative consequences: it has implications for how we think about the rights, needs, and capabilities of all people—including elderly people. This needs to be taken into account when creating and discussing scenarios of robot care or “machine care”.

Furthermore, this discussion also implies that the main issue we should be worried about is not “deception”, if that means being given a “virtual” world as opposed to a “real” world. This is not how many people live and think today—and certainly not those who have been born into this new world and have always had an “onlife” experience, living with the new technologies and living with others through the new technologies. Therefore, it is likely that this is not how many people will experience, or want to experience, their lives as elderly people when they need more care. It turns out that the ethical evaluation of ICT-mediated elderly care, including robot care, is at least more complicated than suggested by Sparrow and Sparrow (2006) and similar doom scenarios. If we want to think about the future of elderly care, through scenarios or not, then we have to take into account how today many people already experience their lives-with-ICTs and how they are likely to experience them in the future, when they are elderly and need more care. If we want to create and use scenarios at all (which, as I have argued, is an excellent method), we have to adapt them to the new realities of today, rather than hanging on to assumptions that no longer reflect contemporary living and experience.

3 Why many ethical problems remain

This does not mean, however, that there are no ethical problems with robotic care and ICT-mediated living; far from it. The new ways of relating to things and to others are not necessarily as good as or better than the “old” ways. Defending the view that deception is not always and necessarily morally problematic, and that in ICT-mediated care there is not necessarily an issue of “deception” if that means “giving people a virtual world”, does not commit me to the view that there are no ethical issues with robotic care, nor even to the view that there is no deception problem at all. Let me identify a number of remaining problems that need to be addressed.

3.1 Disengagement on the part of care receivers

First, the use of robots in elderly care may or may not incur problems of disengagement. Let me introduce this issue in a way that is inspired by the work of Dreyfus (2001) and Borgmann (1984). At first sight, it seems true that today many of our technological ways of doing are in danger of detaching us from things and other people. According to Borgmann (1984), modern technologies—he calls them “devices”—are so easy to use that they become part of our “background” and no longer require engagement with them or the development of skills: “The machinery makes no demands on our skill, strength, or attention, and it is less demanding the less it makes its presence felt” (Borgmann 1984, p. 42). What he calls “focal practices”, by contrast, require skilled engagement. He gives the example of gathering around a stove and drinking wine together: such “focal” practices require more skill, and skilled engagement makes us more social and gives us “character” (Borgmann 1984). This criticism also seems applicable to contemporary information and communication technologies. In his critique of the Internet, Dreyfus (2001) has argued that the Internet prevents engagement and commitment; it makes everything “easily accessible and optimizable” (Dreyfus 2001, pp. 1–2). One could also add that the Internet threatens the social, since it does not promote direct engagement with others.

Whether or not Borgmann (1984) and Dreyfus (2001) are right about this (as I will suggest below the issue is more complex), attending to the issue of (dis)engagement must be part of evaluating ICTs such as care robots: Does the use of these robots contribute to “focal” care practices, which require skill and support social relations? More generally, do robots enable us to develop new skills and do they better connect us with our environment and with others, or do they encourage de-skilling and isolation?

On the one hand, it is clear that ICTs sometimes disengage us from things and people. Yet when this happens, it is not an issue of deception: the problem is not that people are provided with a “fake” world as opposed to “reality”; the problem is that in their “onlives” their relation to reality may at times be less direct and engaging, and that this is partly due to the kind of technologies used. If this is a problem, then it needs to be addressed, and not only with regard to elderly care, but with regard to the lives of all of us. It is important to ask which electronic technologies (and the ways they are used and embedded in lives and practices) may be more or less suitable for supporting engagement with things and people.

On the other hand, in contrast to Borgmann (1984) and Dreyfus (2001), we can also try to see possibilities for re-skilling and re-engagement—new forms of engagement. New technologies could also open up new worlds and new ways of engaging with one another. In fact, there are indications that they already do. In our “onlives”, we already live in different ways, some of which may well be more detached and less engaged, but some of which are more engaged. New technologies could render, and are rendering, our lives meaningful and engaging in different ways. For example, games might be detaching and disengaging in some ways, but not in others: they might detach us from our immediate social and physical environment, but at the same time they introduce opportunities for engagement with new kinds of artifacts (“virtual” ones)—let’s say new forms of in-game engagement—and for establishing new, very real kinds of social relations mediated via the game.

For robots and ICTs in elderly care, this means that we should evaluate specific robotic technologies and their use in order to identify dimensions of robotic experience and practice that fit the label “disengagement”, but that we should remain open enough to see new possibilities for engaging with one’s environment and for communication and sociality. For example, like many other autonomous robots in research, care, and entertainment, the famous Paro robot has been shown to function as a kind of medium which brings people together as they focus on the new artifact and talk about it—thus increasing social interaction (see for instance Kidd et al. 2006). Robots may also encourage elderly people to develop new skills or to maintain existing skills (e.g., skills that have to do with care for a pet or a young child). In that sense, robot-mediated care can have a “focal” dimension and is—in its effects on engagement and skill—not too different from Borgmann’s (1984) stove or wine drinking. Of course, at the same time, the robot may encourage people to disengage from the non-robotic environment and to distance themselves from others who are not involved in the robotic adventure. There is likely to be a distance between those who interact with the robot and talk about it among themselves, and those who are excluded from the interaction and from the robot-mediated, robot-focused conversation.

Compare this with the (dis)engagement effects of television. On the one hand, the technology and medium may distract from other activities and from direct engagement with one’s “offline” environment and with others in that environment. It is also a “device” which is all too easy: we just switch it on, and it demands little engagement from us in order to function. On the other hand, watching a TV program together can also be a “focal” activity which gathers people around it, and new forms of watching (digital) TV and using the Internet in the information age may demand more involvement and interactivity. What is needed here, and in the case of care robots, is not only a priori philosophical argument and narrative, but collective learning about how to use new technologies in ways that promote engagement with things and people, and efforts to develop ICT-mediated and robot-mediated practices in such a way that such engagement is encouraged rather than hindered. For care of the elderly (and health care more generally), this means that the design and use of care robots need to be geared toward more engaging forms of relating to reality. The problem is not deception but disengagement.

3.2 Disengagement on the part of care givers

Second, disengagement can also be a problem on the part of the care givers. Today elderly care and health care take place in the context of a modern culture and a modern form of life. As I have argued previously, this has led to organizational environments and a societal climate which are not conducive to what I have called “care craftsmanship” (Coeckelbergh 2013). Not only might elderly people and patients have less meaningful and less engaged relations with reality and with one another; care givers too may experience that forms of modern care hinder the development of engaging and meaningful relations with care receivers and with the concrete bodily and material realities involved in care. They may feel that the speed of modern care prevents them from having enough time to engage in meaningful and rewarding conversations with care receivers, and that elderly people do not receive the attention, care, respect, and dignity they deserve. Robots, then, may well contribute to this climate and these forms of care. For example, they may be used to replace care givers entirely and as such form part of a modern vision of care aimed at efficiency at the cost of the warm, human care all of us hope we can get when it is our time to be cared for (again). But in principle it might be possible to develop robotic systems that are embedded in and support different ways of doing and a different care culture. Again, I think evaluating this is certainly not only up to philosophers and cannot be dealt with in the abstract; rather, care givers, care receivers, developers of technologies, and other stakeholders have to try out together different ways of doing, including different technology-mediated forms of care, which enhance capabilities and promote more rather than less social and environmental engagement. Moreover, any such evaluation must take into account the skills and knowledge present ICT users have and might be able to develop, since they will be the elderly of the future.

3.3 Deception after all?

Finally, there may still be problems of deception in care for the elderly, but then at the level of concrete care and care giver/care receiver relations, and not directly due to the use of technology as such but inherent to care relations. Philosophically, the problem could be stated as having to choose between, on the one hand, a duty to respect the autonomy of persons and, on the other hand, a consequentialist duty to sometimes deceive for the good of the person. Yet I think these moral intuitions should not be framed in terms of a choice, as options, let alone as a dilemma: in practice both are and should be combined and balanced in elderly care. On the one hand, care givers try to respect the autonomy and “dignity” of persons. Put in the language of moral philosophy, care givers may approach the autonomy of the person from a Kantian perspective, treating autonomy as a transcendental quality of every human being qua “noumenon”. One could define this as a prima facie duty and a “default” attitude of respecting the (transcendental) autonomy and dignity of the person. This means that, seen from this transcendental point of view, there is a duty not to deceive. However, empirically speaking not all elderly persons are fully autonomous. They may have cognitive impairments, for instance. Care practices need to reflect this phenomenal, empirical reality, accommodate the problems faced by the elderly person, and help and care for the elderly person. For example, elderly care may include helping someone who is cognitively impaired to do things (s)he can no longer do, or indeed having the person interact with a robot if this improves the quality of that particular person’s life. Of course, the latter should not be taken for granted. The “if” is important here. But this needs to be assessed by the care giver.

Thus, in daily care practices, deception may play a legitimate role. The deontological requirement to respect a person’s transcendental autonomy and not to deceive is, and needs to be, balanced with the consequentialist aim of improving someone’s well-being. There is no definitive solution to this problem, no solution that can be provided by detached reasoning. Moral philosophy can clarify the values and principles at play here, but this is not sufficient. In the end, those who work with elderly people have to make the practical judgment, taking into account the empirical situation and capacities of the person, next to moral principles and intuitions, including respecting autonomy and never treating a person as a mere means.

Note that, inspired by Kantian or other views about the autonomy and dignity of the person, care givers may still decide to treat people “as if” they were fully empirically autonomous, even if they are not. Again, some elderly people are clearly not (no longer) the (empirically) autonomous persons they used to be, because of cognitive impairments that develop with old age or illness, but they may be treated “as if” they are fully autonomous. They are given choices, and care givers try to respect their wishes. This is a form of deception, if you like, to the extent that the person is given the illusion that (s)he is fully autonomous, whereas (s)he clearly is not. Perhaps this form of “deception” should be the default mode of treatment. But sometimes care givers may opt for a paternalistic (or maternalistic) form of deception, when things are done for the “benefit” and “happiness” of the person, even if that means not saying things as they are or twisting the truth. Of course, the latter form of deception should be used as minimally as possible and should not become the default mode. But the phrase “as minimally as possible” is important here: those elderly people who have not only physical but also cognitive limitations can no longer autonomously decide what is best for them and need help; they need others to make decisions about their well-being. Sometimes this may involve “deception”, for example when a robot pet is given to people who think it is a real pet, in order to encourage interaction with the robot and with other people, and perhaps indeed to create some happy moments. In practice, care givers have to strike a balance between the two forms of “deception”.

With regard to using robots in elderly care, this means that although the “default” position should be that people are treated as autonomous persons who should have a say in how technology is used in their lives, for some categories of elderly people robots can indeed play a role in doing what care givers, family members, etc., think is best for them. Since paternalism is always morally problematic, this form of deception should be applied “carefully” and in small doses. But to treat all elderly people always as fully (empirically) autonomous persons assumes a truly illusory view of the reality of (very) old age. Perhaps it even assumes an illusory view of care practices in general: sometimes it is a matter of good care to do something that you think is best for someone else.

Finally, use of the term “deception” is perhaps too strong and too negative when we mean that ICTs produce fantasy worlds. Compare this issue to problems in parenting. First, allowing children to watch TV, use the Internet, play games, etc., is not an issue of “deceiving” them in the sense of showing them a “fake” world as opposed to a “real” one. Today the world of TV, games, and the Internet is simply part of the “reality” of many people. This means that instead of evaluating “deception” we should rather evaluate the different realities—mediated by ICTs or not—and their moral implications. Furthermore, if reality is plural and/or “hybrid” (e.g., online/offline), it may well be problematic in terms of capabilities if we keep children entirely away from these technologies. It may well be problematic to restrict their reality to what is considered a “non-technological” one. This position does not imply that the technologies are entirely unproblematic. For example, they are problematic if they lead to less engagement with other family members, with the natural environment, and so on. And if a parent limits the number of hours a child is allowed to watch TV or use the Internet in a week, or shields the child from some real adult worlds, this may well be a reasonable, well-supported paternalistic measure taken for the good of the child, because the child is not yet a fully autonomous adult who can decide for herself how much TV or Internet use is good for her, or what to watch. At other times, perhaps most of the time, (older) children may be treated as if they are autonomous adults. But even then the question remains which kinds of technology-mediated activities and habits promote which kinds of social relations and which kinds of relations to the environment. Even if autonomy is in place and there is no deception, both care receivers and care givers need to find answers to these questions.

One may object that robots are sometimes intended to deceive, designed to deceive, “pretending” to be more than a machine. But this does not change the problem. Toys are also intended to deceive, yet we do not think it is (always) problematic to give them to children. What I suggest is not that elderly people should be treated like children, but rather that care may involve “deception”—intended or not—and that this is not necessarily morally problematic.

The question of whether and how to use robots in care, then, is more complicated than suggested by the discussion focused on deception. For instance, next to dealing with the objections already presented, answering it requires taking into account that technologies are always more than material objects: the way we use and interact with them is part of what the technology “is”. The deception objection, formulated as one about the “real nature” of robots (“they are machines, they should not pretend to be something else”), is misconceived, since it assumes an overly simplistic view of technology. Care robots do not “deceive” when they “pretend” to be more than machines. They already are more than machines when they are used in play and interaction, when they are used in care contexts. What they “are” is not only defined by their material properties but also by the way we perceive and use them. The question is: Is a particular kind of play and interaction, mediated in this way by this particular robot and ICT, good for this elderly person, and does our care for this person respect her autonomy and dignity while at the same time taking into account her specific problems and needs? These questions cannot be answered in the abstract, but need to be addressed by those who give and receive care. Moreover, as suggested earlier in this article, human dignity and technology are not entirely independent: new technologies influence how we think about dignity. This is also likely to happen in the case of care robots, at least if they were widely used in the daily lives of people. Ethics is a journey, and new technologies influence the story.

4 Conclusions

Just as we are used to living with fiction and non-fiction, we are also increasingly used to living simultaneously online and offline, or at least to switching between them. The future of elderly care is a future which may or may not have robots in it, but it will certainly have us in it: care receivers and care givers who are used to dealing with ICTs or who are even digital natives: people who have never known a world without ICTs. If robots are going to swarm elderly care and health care at all (it may not happen), it is very likely that they will meet “robotic natives” who are used to having all kinds of ICTs around in their lives, including robots. In that scenario, they will not experience living with robots as deception; they will see it as part of what it is to live with technology: part of what it is to work and to entertain, part of what it is to connect and communicate with others. New technologies might even influence what we mean by dignity, autonomy, reality, and social relations. It is important, however, to remain critical and to keep exploring what we think are good and meaningful ways of living—with robots, with ICTs, with others, and with new hybrid artificial/natural environments. Narratives about the future of care are a great way to contribute to this challenge. But if we write new stories, let us not forget to include the present and emerging generations in them, and to take into account how our current worlds—in care and elsewhere—have been changing and are still changing through technology. Finally, it is good to remind ourselves of the power of language as one of the greatest technologies and social engagement machines civilization has ever seen: let us use it to write together (or perhaps better: “edit”) a better care future. Writing the future of elderly care should not only, and not mainly, be a task for philosophers, but should involve all stakeholders in elderly care.