Expanding on our previous work, which explores how robots might increase freedom for care recipients and caregivers alike, this paper examines the claim that the use of robot caregivers may enhance the lives of impaired children and promote their future well-being. Those caring for children with impairments are often fully absorbed by their caregiving duties. Once the daily regimens necessary to ensure that basic physical needs are met have been completed, there may be little time or energy left for children or their caregivers to cultivate the capability to engage in play. In principle, robot assistance with caregiving tasks might ease some of the burdens of providing care, thereby opening up space and time for recreation. Although our central focus is on robots, we do not intend to convey that using robots would be the only or best intervention for improving the lives of impaired children. Instead, robots can expand the array of viable alternatives for improving the lives of those with impairments and their caregivers.

Various types of robots are being designed to complete a wide range of tasks. For example, Paro, a robotic seal, and AIBO, a robotic dog, were built to provide some level of entertainment and perhaps emotional engagement. Other robots, such as Care-O-Bot 3 and CareBot, are designed to lift objects and give out medication. Vallor (2011) explains that while most “carebots” are still in developmental stages, some robots are comparatively far along this path or already in their second generation. Though the use of robot caregivers is typically restricted to small populations under experimental conditions, continued discoveries related to child-robot interaction may lead to a growing demand for the technology. Currently, because of aging populations in many countries, such as Japan and Germany, the main emphasis of public discourse about robot caregivers is on their use in caring for the elderly.

A robot caregiver’s intervention could encourage human caregivers and others to engage children with impairments in recreational activities instead of limiting interaction with children to duties purely related to their care. There is no denying that caregivers can foster the development of multiple capabilities, but therapeutic intervention is hard work for both the caregiver and the care recipient. Using the capabilities approach as our theoretical foundation, we will begin to show how a design and implementation process guided by an understanding of the importance of central human capabilities could contribute to the flourishing of impaired children and their caregivers. Our present contribution is part of a growing literature examining the merits of the capabilities approach as applied to the realm of robotics and ethics (see Borenstein and Pearson 2010; Coeckelbergh 2010; Vallor 2011). Further, one can extrapolate from conclusions about robot intervention for children with impairments to the potential for robot caregivers to foster the capabilities of children, and perhaps adults, without impairments.

Background

The importance of integrating play into a child’s life should not be understated. There are several explanations for this, but the capabilities approach is one way of capturing the essential connection between play and a child’s ability to thrive in a distinctively human way. The capabilities approach is a relatively new framework in the realm of ethics, though much of its conceptual heritage can be traced back to Aristotle’s virtue ethics. The principal founders of the approach are Martha Nussbaum and Amartya Sen. In their assessment, the capabilities approach gleans the “best” elements from other ethical traditions, such as the value of personal freedom. And ideally, it overcomes the theoretical “baggage” that often entangles Utilitarianism, such as the problem of adaptive preference (Nussbaum 2000, 139), and deontology, which might overemphasize whether an entity has the capacity for rational thought (Nussbaum 2006, 138).

Drawing from Nussbaum (2000, 2006) and Sen (1999), actualizing a “capability” is deeply tied to having the freedom to make decisions or perform different types of actions. In Nussbaum’s formulation of the capabilities approach, she delineates ten “central human capabilities” that are essential to flourishing in a distinctively human way. Alongside the capability to play, which is the primary focus of this paper, Nussbaum’s list includes the capabilities of life, bodily health, and control over one’s environment (2000, 78–80; 2006, 76–78). What is pivotal is creating an environment where each person’s capabilities can be actualized. Following Nussbaum, we contend that even though it is important to promote the flourishing of all people, there is a special obligation to encourage the development of a child’s capabilities. The capability to play allows children to decide, for example, whether to entertain themselves or engage with other people. While important in its own right, play nurtures the development of other central human capabilities so that the child’s future opportunities will not be severely restricted. Hence, we must make it a priority to ensure full integration of recreational activities into a child’s life.

Rather than merely guaranteeing access to a particular technology (e.g., a computer) or the ability to perform a specific action (e.g., use word processing software), what is crucial about nurturing capabilities is that they allow a person to select from a range of options (Borenstein and Pearson 2010). As Nussbaum asserts, children should be required to engage in activities that will make them literate as well as physically and emotionally healthy. Literacy is especially important since it enables people to envision creative possibilities for improving their lives. It is often a key step toward enhancing one’s employment prospects and pursuing other things that people value. Actualizing a capability can enable someone to have the flexibility to navigate the challenges of a dynamic and evolving world.

Nussbaum emphasizes that play is crucial to a child’s development. She astutely states that “We may suppose that children naturally play and express themselves in play. This, however, is not precisely true” (Nussbaum 2000, 90). She goes on to explain that in some cultures female children are not given the opportunity to play and thus may not know how to do so (Nussbaum 2000, 90). This phenomenon is usually symptomatic of a broader socio-cultural problem whereby women do not have genuine choices about their future. A consequence of this is that even if these women do gain more freedom, they may lack awareness of the existing possibilities and how to go about pursuing them. Similarly, if children with impairments are not situated in an environment that reinforces the importance of play and provides them with avenues for doing so, they might fail to actualize the relevant capability. Highlighting this idea is a report from the American Academy of Pediatrics, which states that “Because every child deserves the opportunity to develop to their unique potential, child advocates must consider all factors that interfere with optimal development and press for circumstances that allow each child to fully reap the advantages associated with play” (Ginsburg et al. 2007, 182). While the emphasis among child development experts appears to be on the instrumental role of play in promoting the development of various abilities (see, e.g., Spiegel 2008), play can also be understood as having inherent value (Huizinga 1950).

In general, children are less capable of making informed choices than adults, but the hope is that proper nurturing of children’s capabilities will enable them to become autonomous agents. Properly supervised recreation provides a low-risk context for children to develop their autonomy. Children can experiment with choosing from among an array of options and assess the results of their choices in a safe setting, which can be valuable practice for decision making in non-recreational contexts.

Children with or without impairments should be given the opportunity to acquire capabilities beyond those associated with becoming “productive members of society”. While being productive tends to be valued above many other things, especially in a capitalistic society, the development of basic capabilities is vital to human flourishing. For those whose impairments render them incapable of participating in political or other processes, it is probably even more important to encourage the development of their capability to play, especially if they are excluded from contexts or activities likely to generate social approval. Acquiring this capability has the potential to compensate for the lack of the positive feedback that is bestowed on individuals whose contributions fall in line with the prevailing emphasis on productivity.

On a related note, there are children whose impairments make it unlikely that they will live long enough to enjoy the benefits associated with obtaining particular capabilities (or the harms associated with the failure to develop them), including the capability to play. If the child is not going to grow to adulthood or will never be able to function as an independent adult, for example, it does not make much sense to emphasize the acquisition of financial management skills. There are, however, some skills grounded in central human capabilities that are essential to human flourishing. Hence, we should provide opportunities for the fullest development possible for each individual. Though not always a straightforward process, it is necessary to distinguish between limitations that result from the availability and quality of social institutions and those that are genuinely insurmountable features of a particular type of impairment. The potential causes of illiteracy can help to illustrate this point. On one hand, a child might be illiterate because of the lack of access to teachers or schools. On the other hand, it could be due to a mental impairment (e.g., severe cerebral palsy). No matter how limited a child’s lifespan is going to be, caregivers should make the most of opportunities to enhance the child’s well-being.

Caring for Children with Impairments

We have come a long way from the days when children with physical or mental impairments were routinely confined to institutions, often for a lifetime. Yet room for improvement remains, and a hypothesis to consider is whether and to what extent human-robot interaction (HRI) could contribute to the flourishing of these children. Depending on a robotic caregiver’s design, individuals with serious impairments could participate in a wider array of human activities, just as other interventions (e.g., wheelchairs, prosthetic limbs, computer-aided communication, etc.) can facilitate people’s abilities to interface with their surroundings. Such technological assistance can help children with impairments attain goals similar to other children, albeit through following a different path, illustrating what Toboso (2011) refers to as “functional diversity”. Along these lines, the capabilities approach emphasizes whether and how individuals can use resources to achieve certain goals and appears to prescribe the integration of significant flexibility into the design and use of robot caregivers.

Children who require extensive care to maintain even a basic level of functioning are at risk of not developing the capability to play: their role as patient can easily obscure the fact that they are children, for whom play is crucial to socialization and well-being. For example, Li and colleagues note that children with cancer often lack the opportunity to play, which can contribute to the emergence of developmental delays and psychological maladies (2010, 52). Caregivers might consider integrating some playful interaction into regular caregiving activities, but it may not always be practical to do so, especially given that care is sometimes provided at some cost to the health and overall well-being of the caregiver. The caregiver may simply have “nothing left” with which to go beyond the basic provision of care.

Moreover, parents and others might view the integration of recreational activities as being inconsistent with the “proper tasks” of a good caregiver. That said, the all-consuming nature of being a caregiver calls out for some counterbalancing recreational activity that will promote the flourishing of the “co-cared”. The recreational activities of the co-cared may or may not overlap with those of the care recipient, but in some cases engaging one’s care recipient in recreational activities might be the only opportunity for the caregiver to play.

The isolation of individuals with impairments is a serious problem. Yet if they require constant attention in order to address the various facets of their condition, it may mean that they suffer, albeit in a different way, from a lack of alone time—time that might allow for reflection or for recuperation from stress-inducing stimuli. For example, if a care recipient absorbs the anxiety or frustration of her caregiver, such stress might negatively impact the overall well-being of the care recipient (Wright et al. 2002). Presumably, efforts that improve the caregiver’s well-being might benefit the emotional health of the care recipient.

Our social institutions should work toward ameliorating loneliness among individuals whose isolation is not the result of a voluntary choice on their part (unlike, say, a hermit’s). Although their study population was elderly individuals, Banks and colleagues (2008) claim that robotic dogs can help to reduce loneliness. Loneliness, of course, is an ongoing concern with impaired children, and mitigating it is a laudable goal. Moreover, loneliness in children can impede their social or psychological development, and its effects may be different from cases where elderly individuals are suffering from feelings of isolation or abandonment. Maintaining the care recipient-caregiver relationship can lead to the exclusion of nearly all other sorts of activities. In principle, robot intervention could make a valuable contribution here by mitigating the loneliness that is felt by care recipients and their caregivers.

In order for caregivers to provide a high level of care, they must maintain their own physical and mental health. A study by Murphy and colleagues reports that nearly all caregivers had “adverse mental and physical health impacts” associated with caregiving tasks and experienced anxiety about their child’s health (2006, 184). Caregivers often fail to attend to their own health problems due to lack of time, respite hours, and alternative care options (Murphy et al. 2006, 184). Though the impact of poor caregiver health on the well-being of children with impairments is not entirely understood (Murphy et al. 2006, 186), it is reasonable to assume that care recipients fare better when under the care of healthy caregivers. If robot caregivers function competently, this could provide caregivers with time to ensure that their own needs are met. Further, Heimlich (2001) refers to several studies indicating that animal-assisted therapy not only benefits the intended recipient (e.g., an ill child) but can also elevate the mood of caregivers. Likewise, the intervention of a robot caregiver could provide necessary respite for caregivers, thereby reducing the presence of stress and anxiety in subsequent interactions with a care recipient. It will be necessary to address numerous ethical, technical, and other related issues in order for such a scenario to come to fruition.

A robot caregiver can offer a child the chance to interact with another type of entity, which can be designed to operate in multiple ways. For example, a child who does not regularly engage with other children due to the severity of her impairments could learn about reciprocity by interacting with a robot programmed to share. Since some children with severe physical or cognitive impairments might not be able to attend school, go to the park, or otherwise participate in typical childhood activities, ideally robots could contribute to our attempts to introduce activities that mimic as closely as possible those known to contribute to healthy development. Moreover, since our presumption is that human caregivers should still be present a significant portion of the time, they, too, may learn new things about the abilities of a child in their care as they observe the child’s interactions with robots (see, e.g., Poletz et al. 2010).

Not all robots will provide the same level of physical and intellectual engagement, and not all children react in precisely the same way to stimuli. For instance, Scassellati (2007) warns that expectations about how a typical child will react to a robot do not necessarily apply to autistic children (e.g., loud noises tend to be very disconcerting to autistic children; see Stiegler and Davis 2011), and since autism is a spectrum disorder, even generalizations about autistic children are likely to be inaccurate (Scassellati 2007, 554–555). Hence, designers will have to be sensitive to variations among different care recipient populations. Designers should also keep in mind that children with multiple, or specific types of, impairments may not be able to operate a robot in the same manner as other children. In a study by Prazak and colleagues (2004), the focus was primarily on children with physical impairments. “Child 2”, however, also had a cognitive impairment, and he seemed to struggle to use the robot in a way that was helpful to him (Prazak et al. 2004, 134–135). Nevertheless, robots may help some children to play in atypical and unpredictable ways, which could give human caregivers more avenues for promoting an impaired child’s development.

Roboticists are actively designing robots to fill various roles in a child’s life, and the moral significance of these interactions will be partially dependent on how much of a child’s care and well-being is delegated to the technology. Depending on the task in question, a robot could complement the efforts of a primary caregiver or operate independently from that person. In general, the latter type of interaction may be more ethically troubling since human caregivers might be tempted to free themselves from their obligations. Thus, it is necessary to identify potential pitfalls, including the possibility of overreliance on robots causing a problematic shift in caregivers’ perceptions of their duties toward children with impairments.

The use of robotic toys and/or assistants might provide human caregivers with an “opportunity” to (increasingly) detach themselves from their charges. This concern has already been voiced by other authors, including Sparrow and Sparrow (2006) and Sharkey and Sharkey (2010a), in the context of elder care. With the increasing sophistication of robots, more and more tasks can potentially be delegated. Not only could this negatively impact care recipients; Vallor (2011) also argues that relegating caregiving to robots might be detrimental to the development of caregiving-related virtues, such as empathy or reciprocity. Further, echoing Turkle’s point that certain types of technological intervention amount to giving up on people who seem inept at interpersonal relationships, Vallor (2011) is concerned that being deprived of the experience of caregiving will inhibit the development of certain of Nussbaum’s central capabilities (i.e., affiliation, practical reason, and emotions). However, the presence of robots might minimize damage inflicted on children by adults who fail to fulfill their caregiving obligations or whose attempts to do so are seriously flawed.

While those seeking ways to shirk their duties might use robots (or the television or a variety of other options) to accomplish this goal, it is not preordained that robots will encourage humans to be neglectful. Nor is it entirely clear whether all cases of leaving a person with a robot would constitute neglect. It is easy to imagine some circumstances where leaving a child with a robot might be the more responsible choice. For instance, an intoxicated or emotionally distraught adult could pose greater danger when present than absent. In short, how HRI will ultimately alter our behavioral tendencies remains to be seen. Admittedly, increased HRI could contribute to the deterioration of some types of interpersonal relationships; this possibility is a key motivating reason behind our commitment to examine this emerging phenomenon. However, the hope is that the ongoing, careful examination of the ethical facets of robots will increase the probability that HRI will improve rather than erode human–human interaction.

Self-Sufficiency

Ideally, all children are on the road to gaining at least some level of autonomy, and perhaps a robot can allow them to gain a measure of control over their environment that they have not experienced before. But if a robot does “too much” for a child, will this hinder the child’s development? The phenomenon of “vulnerable child syndrome” may be instructive here (Duncan and Caughy 2009). Vulnerable child syndrome involves parental intervention that aims to protect the child from harm but instead inhibits the child’s well-being. Avoiding this and other similar problems is imperative when contemplating design pathways.

Robot intervention should not unnecessarily curtail the developmental potential of a child. For example, if a child has a physical impairment and a ball is just out of reach, should the child be “forced” to figure out a creative way to obtain the toy? Or, should a robot be programmed to follow the child’s commands automatically and merely retrieve the toy in question? Prazak and colleagues note that children in their study became too “passive” during play if a robot did too much for them (2004, 132). A balance must be struck in order to prevent excessive frustration on the part of children while also permitting them to choose whether and how to respond to their environment. It may be unwise to remove all frustration (given that some frustration is part of normal development). Instead, the goal should be to help the child manage challenging situations in a constructive manner.

A study by Kim and colleagues (2010) found that individuals using a robotic arm do not necessarily prefer its functioning when in “autonomous” mode as opposed to the “manual”, or what the researchers also call “Cartesian”, mode. The researchers state that “Compared with manual (Cartesian) control mode, Auto mode was seen to enable the users to perform the given tasks faster and with less effort, however, the manual mode of operation was perceived to be better by the users” (Kim et al. 2010, 222). If the researchers are correct, then granting users the option to exert more control over a given task, rather than merely letting the robot do it, might be preferable in some cases. One design option to consider would be allowing users to toggle between “active” and “passive” modes.

Flexibility within the context of play is important to both child development and the ability of caregivers to better understand children’s abilities and developmental needs. Psychologists such as Laura Berk claim that if the dynamics of play are too rigidly defined, this might limit a child’s creativity. Berk suggests that children need to have openings during play time to cultivate their imagination and to gain “self-regulation” skills. Poletz and colleagues (2010) claim that children with impairments exhibit cognitive abilities during unstructured play with robots that they do not demonstrate on standard tests. A key finding from their robot studies was that “teachers underestimated the abilities of children until they saw their abilities with the robot tasks” (Poletz et al. 2010, 125). Due to their impairments, children in Poletz and colleagues’ study were thought to be “untestable”; yet robot intervention seemed to reveal important information about the children’s abilities. An accurate assessment of a child’s abilities can guide the selection of developmentally appropriate interventions, and robots appear to provide valuable assistance in this area. Arguably, this shows that highly structured play between a robot and child may be neither necessary nor desirable.

Building Relationships

In principle, a well-designed robot caregiver could create adequate space for the promotion of a child’s capability to play, and therefore promote her physical, cognitive, and social development. However, even technological optimists would be hard-pressed to say that the introduction of robots will alleviate all burdens or concerns relating to the care of impaired children. For example, will the introduction of robots facilitate bonding between children and other people or is the opposite more likely to be true? Sharkey and Sharkey (2010b) suggest that a robotic toy could enable family members to bond and become more comfortable with each other; it can become a talking point while a family learns to use it together. Additionally, Mone (2010) points out that autistic children in Mataric’s study were more inclined to interact with people when robots were present, even if other individuals (e.g., parents) were not interacting with the robot.

At least one child in Mataric’s study may have benefitted from interacting with a robot by gaining a better understanding of other people’s inner psychological states. For example, after realizing that Bandit (a robot) was not going to comply with his wish to engage in a game of tag, an autistic child said: “Now I know how my teachers feel,” a comment that stunned observers because the child “was not supposed to be aware of his teachers’ frustration” (Mone 2010, 93). The study ostensibly shows that robots can be catalysts for increased interaction and improved understanding of interpersonal relationships.

On the other hand, children might come to prefer robotic companions, at least in some circumstances, to human contact. For example, one of the participants in the Prazak study “was very persistent about playing for more than one hour w/the robot” and said that she enjoyed playing with it, because “this was the first time she could play herself and did not need another person” (Prazak et al. 2004, 136). Yet it would be troubling if a child’s reliance on a robot disrupted her physical or psychological development or ability to bond with other humans. An additional concern is that robots may be used as a “technological fix” for those whose social skills are lacking, rather than prompting diligent efforts to work with these individuals (Turkle 2011, 65–66). Ultimately, whether the use of robots interferes with children’s ability to bond will depend on human behavior, particularly decisions about the extent and contexts in which robots are used.

The introduction of a robot into a child’s life should not be used as an excuse to abandon a child. Even if this does not occur, robots can evoke a variety of responses from users. A robot’s behavior might induce a negative emotional state in a child. For example, in Alone Together, Turkle describes a circumstance where a child’s feelings were hurt because she thought that a robot was deliberately refusing to talk with her; the child believed that the robot did not like her, but it was merely malfunctioning (Turkle 2011, 96–97).

The apparent benefits of HRI for children with some types of impairments must be weighed against the potential that a psychological disorder will result from introducing a robot, especially in early childhood. For instance, Sharkey and Sharkey (2010b) discuss “attachment theory”, the notion that the social and psychological development of infants and young children is integrally tied to both the behavior of and interaction with their principal caregiver. This raises questions about whether young children will latch onto robots and whether it is prudent to allow this to occur (Sharkey and Sharkey 2010b).

Whether children’s beliefs about their robot caregiver or playmate will lead to problematic emotional attachments to it is unclear. Considering this issue, Coeckelbergh (2011) cautions us against assuming that we have unmediated access to the real inner states of other humans. A child’s perception of another human being’s inner states is not necessarily more accurate than his or her beliefs about robots. To the extent that play is understood as creating and persisting in an “altered reality” for some period of time (Huizinga 1950), robots might be understood as expanding children’s opportunities to be creative and imaginative instead of intruding on their ability to grasp the truth about the world.

Turkle’s research, however, suggests a need to proceed with caution so that any emotional attachment that develops does not render children less capable of socializing with other humans. It may be ethically problematic to allow significant bonding to happen whereby the child becomes dependent on a robot before the child is able to understand the nature of the robot and its limitations. For example, it is probably unwise to allow a robot to be the only caregiver that feeds and holds an infant. Granted, infants/children become attached to blankets, pacifiers, etc., but the level of engagement is relatively low. Roboticists already employ a variety of design features (e.g., large eyes) with the intention of evoking strong emotional responses from children. However, as long as the interaction (including bonding) with a robot occurs in tandem with bonding to a principal human caregiver, this might mitigate some of the negative outcomes.

A child’s possible preference for robotic companionship over that of parents or other human caregivers would not necessarily be more unsettling than some children’s current preference to spend a large amount of time watching television or playing videogames. Of course, there are reasons to be apprehensive about how much time children spend with electronic media. Yet the core concern is the lack of moderation; HRI does not seem to raise a unique problem. In fact, it is possible that the more interactive nature of robots might be less damaging than the often passive consumption of images and information from television programs or advertisements. Admittedly, however, because robotic technology could provide children and others with additional opportunities to become disengaged from human–human relationships, increased HRI might intensify a pervasive social problem.

A further objection to HRI, particularly child-robot interaction, is the potential for deception (Sharkey and Sharkey 2010b; Sparrow and Sparrow 2006). As in interpersonal interactions, whether deception is ethically permissible can depend on the context. For example, we accept deception by actors and novelists as well as acquaintances who claim they are fine when we ask in passing how they are doing. A related issue is the extent to which we can attribute a case of deception to the actions of a robot rather than self-deception or deception by other humans about the nature of a robot. Though not the only consideration, design and use decisions should be orchestrated in ways that avoid morally problematic deception or manipulation of children or of other vulnerable populations.

If a child’s parents or primary caregivers are the ones with whom the child usually engages in play, that child might prefer to interact with other companions, possibly including a robot. Along related lines, Turkle illustrates the negative effects of parental distractedness in today’s world, such as chatting on cell phones while pushing kids on park swings or texting at the dinner table (Turkle 2011, 266); this might be offset by the positive psychological impact of a robot that is not “distracted” by other concerns. For example, one 17-year-old interviewed by Turkle preferred the undivided attention of a robot to that of his father, whose focus was always divided between his son and his wireless device (Turkle 2011, 294). It would be preferable if individuals would overcome their costly “techno-obsession”. But doing so might take months or years of therapeutic intervention, assuming that it is similar in nature to other addictions (e.g., alcoholism) or related psychological problems. In some circumstances, a parent’s techno-obsession might be resolved too late and impinge on the child’s development. In an increasingly distracted world, where parents are often unable to dedicate adequate time and attention to meet their child’s needs, a robot might offer some companionship. While it is not realistic or desirable to think that a robot can replace a parent, it may provide substantial benefit by helping promote a child’s development of the capability to play.

A combination of the different scenarios might occur. In other words, children might desire human companionship during some play sessions with a robot, while at other times, they might prefer independent play. They might also prefer to play with a human companion and leave the robot alone except when the comparatively “boring” basic maintenance tasks are necessary. Yet the point to emphasize here is that children could, at a minimum, gain some level of freedom by being allowed to choose from a wider variety of interactions. For example, a child could decide to play with only the robot, with the robot and other humans, or with only other humans. This could help her to better understand the limits and benefits of HRI compared to her relationships with humans or animals.

The Dynamics of Play

Part of the importance of play is that ethical and social norms are directly and indirectly conveyed while it is happening. Children can start to appreciate the value of things such as fairness, reciprocity, and patience (e.g., if I let another child play with the toy now, I can use it later). Drawing from Narvaez (2008), we note that providing children with a nurturing environment can facilitate their moral development. It is largely an open question whether and how robots can contribute to this goal, and whether the developmental benefits derived from play will offset any risks surrounding HRI. To the extent that robots can intervene positively, their presence could complement the efforts of human caregivers to nurture the moral growth of children.

In their book Wild Justice, Marc Bekoff and Jessica Pierce defend the notion that at least some non-human animals have systems of morality. Within that context, they discuss the importance of play in moral development. They state that “During social play, individuals can learn a sense of what’s right or wrong—what’s acceptable to others—the result of which is the development and maintenance of a social group (a game) that operates efficiently” (Bekoff and Pierce 2010, 116). Along these lines, there are questions about whether and how robots might provide such feedback. Roboticists must have a robust grasp of the nuances of playtime interaction if they are going to be able to encode them into robots. Presumably, the results of observational studies involving humans or nonhumans engaged in play could inform robotic design. For example, Bekoff and Pierce emphasize that play can promote increased behavioral flexibility and learning capacity in animals (2010, 118). Thus, one test of whether a robot used during play activities is well-designed is whether it achieves similar results.

It is necessary to examine not only whether robots can be programmed to play appropriately, but also how they should act if a child misbehaves. If a child violates the norms of play, what should a robot be permitted to do? For example, if a child repeatedly kicks a robot, should the robot ignore the act? Reprimand the child? Refuse to play? Arguably, if a robot is designed to assist with the socialization process, it needs to help convey the notion that a violation of social norms has consequences, in much the same way that dogs and cats convey this by growling, scratching, or biting a person who mistreats them. But robots should not be programmed to harm a child intentionally, as an animal’s reaction might. Instead, a robot should have expressive abilities sophisticated enough to convey a clear message without causing harm, something that even human parents may fail to do.

Additionally, users of robots should adequately understand the sorts of reactions that robot behaviors are likely to elicit from children and have the ability to detect and address any apparent negative psychological reactions. If human caregivers are around to supervise the child’s interaction with a robot, then they can decide what should be done. But in the absence of human supervision, the issue becomes fairly complex. This makes a compelling case for why a human caregiver should remain close at hand. Alongside such supervision, however, a robot may be given a limited ability for non-harmful disciplinary behavior (e.g., refusing to play for a given amount of time) aimed at reinforcing acceptable social interaction. This, of course, will also require us to examine seriously the reasons for accepting and rejecting certain types of interaction among humans, which is another potential benefit of increased HRI in this context.

Among other things, HRI may provide a means of expanding human understanding of the nature and extent of apparent limitations related to certain types of impairments. If exposure to humans and to robots can draw out relevant differences between them (e.g., sentience, emotional life or lack thereof), this might help children better understand the features and limits of interactions with both. But is there reason to believe that robots have special properties that will elicit socially acceptable behaviors from children? Kahn and colleagues (2006) note that children in their study reported similar beliefs about robotic and non-robotic toy dogs, but the children’s behaviors differed with regard to the two types. They observed that robotic dogs seem to elicit nicer behavioral responses from children than stuffed dogs and that the children were more likely to “mistreat” the non-robotic toy dog (Kahn et al. 2006, 427–428). Of course, there is not enough data to derive a definitive conclusion, but a working hypothesis might be that the children believe that the robot deserves a measure of respect. According to the researchers, “children also appeared to believe that AIBO was the sort of entity with which they could have a meaningful social (human–animal) relationship” (Kahn et al. 2006, 429). However, it is possible that the children were merely more fearful of the robotic dog.

While there is much hope about the use of robots in care contexts, we need to avoid drawing hasty generalizations about HRI, especially with regard to how children will respond to the technology. Many studies of HRI take place in controlled, limited settings, and their findings may not translate to how an interaction will transpire over months and years. Moreover, given that roboticists have conducted many of the empirical studies and might be overly enthusiastic about the uses of the technology, it is probably wise to temper our optimism about the use of robot caregivers. It should also be reiterated that the behavior of different children can vary greatly. Some mistreat their pets, for example, so it does not necessarily follow that they will treat a robot well. Furthermore, Turkle’s (2011) research has already provided some evidence of hostile behavior from some children toward robots. She recounts the case of 6-year-old Edward, who told Kismet to “shut up” and then proceeded to stuff objects into Kismet’s mouth while barking out an order to the robot to “Chew this! Chew this!” (98). As with interaction among humans, HRI may fail to elicit socially acceptable behavior from children.

Conclusion

The capability to play, especially for children, is both inherently valuable and essential to the development of other central human capabilities. Allowing robots to become companions or caregivers for humans does raise serious and troubling ethical issues. However, keeping the focus on how HRI affects the capabilities of individuals can mitigate or prevent negative outcomes. Not only can the capabilities approach inform design decisions, but it can also prompt closer examination of current social attitudes and practices that require modification. It is our view that the intervention of an appropriately designed robot caregiver has the potential to contribute positively to the actualization of the capability to play and could enhance the ability of human caregivers to understand and relate better to care recipients. In particular, we argue that augmenting the role of the human caregiver with the intervention of a robot caregiver could mitigate the burdens of caregiving, and thereby preserve the welfare of human caregivers and care recipients alike. It could also promote a better understanding among human caregivers of the developmental level and potential of children whose impairments inhibit their ability to demonstrate that level and potential by traditional means.