Abstract
We conducted an empirical study with older adults aged 60 to 73 and compared situations in which they walked alone and with a robot. The robot navigated autonomously using a parameter-based side-by-side walking model whose motion, environmental, and relative parameters were derived from human–human side-by-side walking. The model anticipates the motion of both the robot and the human partner, uses subgoals in the environment, and does not require the final goal to be known. We measured the participants’ perceived ease of walking, enjoyment, and intention to walk with/without the robot. Experimental results revealed that the participants gave significantly higher ratings to the intention to walk with the robot than to walk alone, although no such significance was found for ease of walking or enjoyment. Analysis of the interview results showed that our participants wanted to walk with the robot again because they felt positive about it at present and expected it to improve over time.
1 Introduction
Older adults who seek to maintain both exercise and a social presence with others could directly benefit from advances in social robots, and several potential applications exist. The study reported in [3] demonstrated that companion-type socially assistive robots could positively affect the health care of older adults with respect to mood, loneliness, and social connection with others. These socially assistive robots mostly provided communication functionality with limited navigation capability. In the context of shopping assistance, older adults preferred a humanoid conversational social robot as a shopping partner [16]. During that study, 83% (20 of 24) of the participants enjoyed the feeling of “being with someone” brought by the robot. The study in [4] demonstrated that older adults accepted a dancing partner robot as a social partner in dance-based exercise. A socially assistive robot was used in [8] as an exercise coach for older adults, demonstrating that a social robot was effective at gaining user acceptance and motivating physical exercise. The robots in [4, 8] had limited navigation functionality. Overall, the scenarios of social dancing [4], exercise training [8], and shopping together [16] demonstrated that older adults accepted social robots for social reasons. However, no study has been conducted to determine whether older adults would accept an autonomous humanoid robot as a walking partner in a real environment.
Healthcare is an emerging application domain for social partner robots [2]. Appropriate daily exercise is crucial for health, especially for seniors, who tend to get insufficient exercise due to various physical barriers and a lack of motivation [30]. Hence, researchers have addressed using social partner robots to motivate people to exercise, although so far most research has only been tested with younger people. One approach used robots as coaches that encouraged users to exercise but did not exercise with them. For instance, Bickmore et al. [1] developed a computer-graphics avatar that interacts daily with users to encourage exercise, Kidd et al. [18] developed a robotic agent that encourages exercise for weight loss, and Fasola et al. [7] used a humanoid robot to instruct and encourage users to exercise with arm gestures. Mann et al. [21] compared robots with computer tablets equipped with avatar software that could read instructions aloud and concluded that users enjoyed interacting more with the robots and subsequently preferred them over the tablets. One major point of this work is to establish whether social robots can successfully motivate or positively influence seniors to engage in daily exercise like walking.
This study also aims to use robots as a social presence that provides a sense of togetherness. Having a social partner is a factor that influences exercise behavior, including walking behavior [5, 11]. Thus, we speculate that social robots could serve as partners that exercise with users. Studies have been conducted on telepresence [26, 34], and telepresence robots such as RP-Vita [35] and Giraff [10] are commercially available; in our proposed system, however, robot navigation is fully automated. Furthermore, since the rapid growth of robotic technologies has enabled robots to navigate with people [9, 12, 20, 25, 32], it might already be realistic to let people walk with autonomous robots. We believe that robots have advanced enough technologically to meet both needs.
As discussed above, this study provides useful information about robotics applications as walking partners for older adults. Even though it is becoming technologically realistic to let robots navigate with people, there is insufficient evidence that mobile robots have gained social acceptance by older adults. Since we don’t know whether they prefer to walk with an autonomous robotic partner at the current technology level, we propose and answer the following research questions:
-
Do older adults prefer walking-partner robots over walking alone?
-
Can the current state of robotics technology satisfy older adults for walking-partner applications?
The first research question, which considers the intentions of older adults, is inspired by the Almere Model proposed by Heerink et al. [14], which was derived from the general technology acceptance model, where users accept robots because they are easy and fun to use [13, 14]. In the Almere Model, enjoyment positively affects ease of use, and both enjoyment and ease of use positively affect intention to use. We speculate about whether our study fits the Almere Model: if the model holds and participants enjoy walking with the robot, that enjoyment should positively affect their intention to walk with it.
The second research question considers satisfaction with the robotics technology used in walking-partner applications. We investigated whether a robot can both navigate well enough to be a feasible walking partner and be viewed as an enjoyable social agent. In doing so, we examined whether the robot’s navigational capabilities are sufficient as perceived by the participants, and whether walking with the robot was enjoyable at the current technology level.
We believe that the robot-controlling techniques are mature enough that older adults will accept a robot as a walking partner. Hence, we hypothesize as follows:
-
H1: Older adults will perceive more ease when they walk with the robot than when walking alone.
-
H2: They will perceive more enjoyment when they walk with the robot than when walking alone.
-
H3: They will perceive more intention to walk with the robot than to walk alone.
Apart from the above hypotheses and research questions, we also identify the factors of the robot that must be improved. We examine the communication features and design implications that people expect, in order to enhance the robot’s capabilities and enjoyment. In addition, we identify whether the robot, with its current appearance, was perceived as some form of human entity as it becomes a social partner. The outcomes regarding appearance, communication method, and design implications will enhance the enjoyment of walking with a robot in future studies. We expect to use the results of this study to establish whether social robots can successfully motivate or positively influence older adults to engage in daily exercise, like walking.
2 Robot System
When people walk together, they usually maintain a side-by-side formation [24], which facilitates communication and allows them to maintain personal distance and eye contact. We assume that a walking-partner robot should also maintain a similar side-by-side walking formation. Among the available side-by-side control techniques, we used our previously proposed approach, described in Sect. 2.3.
2.1 Hardware Configuration
To move and keep pace with a human, a fast, reactive robot that can adjust to its partner is required. For this function, we used a fast wheeled humanoid robot named Robovie-R3 (Fig. 1), whose maximum speed is 1 m/s and whose maximum acceleration is 0.80 m/s². It can interact with people through utterances and gestures. We placed a 3D-laser range finder (HDL-32E from Velodyne) 1.40 m above ground level to gain enough visibility of the environment for localization and people-tracking. In addition, we placed a 2D-laser range finder (UTM-30LX from Hokuyo) near the ground surface (0.07 m above the ground) for observing obstacles and making emergency stops. The robot is also equipped with wheel encoders and an inertial measurement unit (IMU) (VG400 from Crossbow).
2.2 System
As shown in Fig. 2, the system is composed of off-line and on-line processes whose inputs are odometry data and 2D- and 3D-laser rangefinder data. The map and subgoal list are components prepared off-line in advance; the rest of the computations run in real time while moving alongside a human partner. The robot computes its own pose with a localization module and estimates the position of its human partner with a human-tracker module. It computes the next best location using the side-by-side walking model and monitors the presence of obstacles with the safety-stop module. We briefly explain these modules below.
We used 6-DoF localization with the 3D-laser rangefinder to estimate x, y, z, yaw, pitch, and roll. The data from the long-range 3D sensor, the pose data from the inertial measurement unit, and the wheel-encoder information together enabled accurate robot localization in outdoor environments that are not flat and contain people surrounding the robot.
We built an environmental map off-line to localize the robot. We manually drove it and logged sensor data, which were then fed to a SLAM framework [31]. For map representation, we used an octree with 0.10 m resolution [15]. The final 3D map is shown in Fig. 3. We implemented a localization module based on a particle filter [6]. In the prediction step \( p(x_{t} |u_{t - 1} ,x_{t - 1} ) \), the current pose state \( x_{t} \) is computed from the previous state \( x_{t - 1} \) and the robot motion \( u_{t - 1} \), using a differential-drive model to propagate the particles. For the update step \( p\left( {z_{t} |x_{t} ,m} \right) \), we used a 3D-laser scan \( z_{t} = (z_{t}^{1}, \ldots ,z_{t}^{K}) \) and a likelihood field map \( m \) to compute the weight of each particle with the likelihood end-point model. In this implementation, the particle filter had 100 particles.
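As a rough illustration of this module, the sketch below implements a minimal planar particle filter with a differential-drive prediction step and a likelihood end-point update. The 2D simplification, the noise levels, and the likelihood-field function are assumptions for illustration, not the paper’s implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, v, w, dt, noise=(0.05, 0.02)):
    """Propagate planar poses (x, y, yaw) with a differential-drive model.
    The paper's filter estimates a full 6-DoF pose; this sketch keeps x, y, yaw."""
    x, y, yaw = particles.T
    v_n = v + rng.normal(0.0, noise[0], len(particles))  # noisy linear velocity
    w_n = w + rng.normal(0.0, noise[1], len(particles))  # noisy angular velocity
    return np.stack([x + v_n * np.cos(yaw) * dt,
                     y + v_n * np.sin(yaw) * dt,
                     yaw + w_n * dt], axis=1)

def update(particles, endpoints, likelihood_field):
    """Likelihood end-point model: each scan end point, transformed into the
    map frame, scores by the likelihood-field value at that location."""
    weights = np.ones(len(particles))
    for i, (x, y, yaw) in enumerate(particles):
        c, s = np.cos(yaw), np.sin(yaw)
        for ex, ey in endpoints:  # end points given in the robot frame
            weights[i] *= likelihood_field(x + c * ex - s * ey,
                                           y + s * ex + c * ey)
    return weights / weights.sum()

def resample(particles, weights):
    """Importance resampling: draw particles proportionally to their weights."""
    return particles[rng.choice(len(particles), size=len(particles), p=weights)]
```

A full implementation would iterate predict, update, and resample at each scan, with the field precomputed from the octree map.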
We tracked human partners with a human tracker using the 3D-range finder data. Background subtraction between the environmental map and the currently measured point cloud yielded the point data corresponding to dynamic objects (e.g., humans). These dynamic-object points were clustered, and each cluster within 15 m of the robot was tracked by a particle filter with 50 particles. For safety purposes, we used a safety-stop module that relies on the 2D-laser sensor to stop the robot in case of imminent collisions. Figure 4 shows a processing example for localization and human tracking.
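The background-subtraction and clustering steps can be sketched as follows. The voxel-set stand-in for the octree map, the greedy single-linkage clustering, and the 0.5 m gap threshold are illustrative assumptions; only the 0.10 m resolution comes from the text.

```python
import numpy as np

def dynamic_points(scan, static_voxels, res=0.10):
    """Background subtraction: keep only the points whose 0.10 m voxel is not
    occupied in the static map (the octree is mimicked by a set of voxel keys)."""
    keep = []
    for p in scan:
        key = tuple(int(np.floor(c / res)) for c in p)
        if key not in static_voxels:
            keep.append(p)
    return np.array(keep)

def cluster(points, gap=0.5):
    """Greedy single-linkage clustering: a point joins a cluster if it lies
    within `gap` of any member (a stand-in for the paper's clustering step)."""
    clusters = []
    for p in points:
        for c in clusters:
            if min(np.linalg.norm(p - q) for q in c) < gap:
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters
```

Each resulting cluster would then seed a per-person particle filter for tracking.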
2.3 Side-by-Side Walking Model
Several methods are available for side-by-side navigation control for robots. Previous works commonly exploit the recently observed velocity of the person and extrapolate it to predict future locations [19, 28]. However, a simple velocity controller approach suffers from the instability of the velocity vector and fails in situations where mutual obstacle-free paths should be planned [22, 23].
The joint-planning approach models side-by-side walking as a collaborative activity in which walking agents plan motions that are good not only for themselves but also for their partners. For instance, if there is an obstacle in front of the partner, an agent makes space so that the partner can avoid it. This idea is implemented as utility-based joint planning [17]. In the model, the goodness of future situations is computed as a utility. The robot assumes that people will maximize their utility, anticipates the future motion of its partner, and plans its own future motion to maximize the utilities for both its partner and itself.
2.3.1 The Navigation Method
We use the navigation method discussed in our previous work [17], where the side-by-side walking model was implemented as a parameter-based utility model with joint planning. The model does not require the robot to know the final goal and uses the parameter utilities to calculate the next best positions for both the robot and the human partner. The parameters were derived and calibrated from people’s trajectories when they walk side-by-side.
Our model is composed of three types of utilities: environmental, motion, and relative (Fig. 5). The relative utility, which represents the goodness of a position relative to the partner, includes relative distance (\( R_{d} \)), relative angle (\( R_{a} \)), and relative velocity (\( R_{v} \)). These sub-utilities increase if the future positions form a better side-by-side formation. The motion utility is composed of linear velocity (\( M_{v} \)) and angular velocity (\( M_{w} \)); these sub-utilities increase if the future motion is stable, at a constant velocity while moving straight. Finally, the environmental utility is composed of the distance to obstacles (\( E_{O} \)) and the direction to the goal, or to the next subgoal when moving along a path with several turning points (\( E_{S} (\varvec{s}_{target} ) \)); these sub-utilities are high if the agents are moving toward the goal/subgoal with enough distance from nearby obstacles.
The total utility is given by the following weighted sum of the sub-utilities, where \( k_{x} \) represents the weight of each sub-utility:

\( U = k_{R_{d}} R_{d} + k_{R_{a}} R_{a} + k_{R_{v}} R_{v} + k_{M_{v}} M_{v} + k_{M_{w}} M_{w} + k_{E_{O}} E_{O} + k_{E_{S}} E_{S} (\varvec{s}_{target}) \)  (1)
To limit the planning space, we only included likely positions in the utility computations. The current global positions of the robot and its partner are projected forward based on their current linear and angular velocities, and an anticipation grid with a cell resolution of 0.2 m is placed around each projected position. The center of the grid for each agent \( q \) is given by \( p_{t + 1}^{q} = p_{t}^{q} + v_{t}^{q} t_{pred} \), where the extrapolation time \( t_{pred} \) was set to 2 s. We removed the grid cells that cannot be reached given the robot’s constraints: its current linear velocity, angular velocity, and acceleration limits. Figure 6 illustrates the anticipation grids that contain the possible future neighboring locations (\( P^{i} \), \( P^{j} \)) for the robot and its partner. For all pairs of grid points in (\( P^{i} \), \( P^{j} \)), the utility is computed using Eq. (1). Finally, the future position of the robot (\( p_{t + 1}^{i} \)) and the anticipated position of its walking partner (\( p_{t + 1}^{j} \)) are chosen as the pair that yields the highest joint utility score.
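A minimal sketch of this joint planning follows. The weights and the exact shapes of the sub-utilities are assumptions (the calibrated parameters of [17] are not given in the text), only three representative sub-utilities are included, and reachability pruning is omitted for brevity.

```python
import numpy as np

def utility(q_r, q_h, p_r, p_h, subgoal, t_pred=2.0):
    """Weighted sum of sub-utilities in the spirit of Eq. (1); weights and
    sub-utility shapes are illustrative, not the calibrated values of [17]."""
    v_r = (q_r - p_r) / t_pred                       # implied robot velocity
    v_h = (q_h - p_h) / t_pred                       # implied partner velocity
    R_d = -abs(np.linalg.norm(q_r - q_h) - 1.0)      # prefer ~1 m spacing
    R_v = -np.linalg.norm(v_r - v_h)                 # prefer matched velocities
    to_goal = subgoal - (q_r + q_h) / 2.0
    E_s = v_h @ (to_goal / (np.linalg.norm(to_goal) + 1e-9))  # progress to subgoal
    k = {"R_d": 1.0, "R_v": 0.5, "E_s": 1.0}         # assumed weights
    return k["R_d"] * R_d + k["R_v"] * R_v + k["E_s"] * E_s

def plan(p_r, p_h, v_r, v_h, subgoal, res=0.2, t_pred=2.0, span=2):
    """Place an anticipation grid (0.2 m cells) around each extrapolated
    position p + v * t_pred, score all pairs, and keep the best one."""
    def grid(p, v):
        center = p + v * t_pred
        off = np.arange(-span, span + 1) * res
        return [center + np.array([dx, dy]) for dx in off for dy in off]
    best, best_u = None, -np.inf
    for q_r in grid(p_r, v_r):
        for q_h in grid(p_h, v_h):
            u = utility(q_r, q_h, p_r, p_h, subgoal, t_pred)
            if u > best_u:
                best, best_u = (q_r, q_h), u
    return best
```

The returned pair corresponds to the anticipated partner position and the robot’s own next target; with two agents walking straight toward a subgoal, the argmax keeps the pair abreast at the preferred spacing while advancing along the route.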
2.4 Example
To understand how the system computes and controls the robot to form a side-by-side formation, we present a typical computation example in which a person walked along a street on a university campus (Fig. 7) while the robot successfully moved in a side-by-side formation. Figure 8 illustrates the joint-planning computation using our side-by-side walking model. As the robot and its human partner walk alongside each other, the robot estimates its own position and its partner’s position using the localization and human-tracking modules. It then performs joint planning: from the current positions of the robot and the human, possible future locations are projected, and the utility of each is computed with Eq. (1). The cells with higher utility values (brighter) represent locations that the system estimates will maintain a better side-by-side formation. The planner anticipates that the person will go to the best location (green arrow) and plans to go to the best location itself (orange arrow). Note that for simplicity we plotted the utility for all locations, although the utilities are computed over pairs of robot and human future locations. With joint planning, the robot considers not only its own motion but also the motion utility of its human partner.
3 Experiment
We conducted an experiment with older participants to address our research questions and the hypotheses raised in the introduction.
3.1 Method
3.1.1 Participants
The participants were Japanese adults whose ages ranged from 60 to 73 years (N = 20, males: 10, females: 10) with an average age of 67.50 years (SD = 3.90). They were paid for their participation.
3.1.2 Conditions
We compared these two conditions:
-
With robot: each participant walked with the robot, as explained in Sect. 2.
-
Walking alone: each participant walked alone.
The experiment used a within-subject design. The order of the conditions was counter-balanced, with an equal number of participants in each order, and the participants were randomly assigned to the orders. To investigate how the robot’s presence alone affected the participants’ perception of walking with it, the robot did not talk at all during the experiment.
3.1.3 Environment
The experiment was conducted on a university campus. Whenever possible, the sessions were conducted outdoors (Fig. 9a), but on rainy or snowy days they were conducted indoors. The outdoor experiments were conducted with 7 participants (5 males, 2 females), who walked with the robot along an 80 m route on a 4 m wide street (Fig. 9a); in each run, the participants walked to the end of the street and returned. Because of rain or snow, the experiment was conducted indoors with 13 participants (5 males, 8 females), who made two round trips in a 30 m corridor (Fig. 9b).
3.1.4 Procedure
We conducted each experiment as follows:
-
1.
We explained the experiment to the participants, who signed an informed consent form. We also distributed demographic questionnaires.
-
2.
For each condition, the participants stood at the starting point and learned the route. In the walking-alone condition, they started on their own. In the with-robot condition, they started when the robot was ready to start.
-
3.
After they finished walking, the participants were given questionnaires (explained in the measurement section).
-
4.
After the first session was completed, the second session with the other condition was conducted immediately (Steps 2 and 3).
-
5.
The participants were interviewed after they completed sessions for both conditions.
3.1.5 Measures
After each experimental session, we gave questionnaires that contained three measurements inspired by the Almere Model [14]:
-
Perceived enjoyment: “I enjoyed walking this way.”
-
Perceived ease of walking: “I think I will know this way of walking immediately.”
-
Intention to walk: “I’d like to walk again this way over the next few days if I were given the opportunity.”
The questionnaire items under each category are listed in Table 1. Each item was evaluated on a 7-point Likert scale where 1 was the most negative (strongly disagree) and 7 the most positive (strongly agree).
3.2 Results
3.2.1 Observations
During the study, 90% of the participants (18 of 20) walked with the robot and maintained a side-by-side formation. Figure 10 shows a participant walking outdoors in a stable side-by-side formation with the robot. They began at the bottom of the map around t = 0 s (t denotes time in seconds) and moved along the passage until t = 82.88 s, where they turned around and returned to the starting point. Their trajectory, with photos and time steps, is plotted on the map in Fig. 10. They walked at a constant speed of 0.88 m/s and maintained a stable side-by-side formation throughout the walk.
As shown in Table 2, 10% of the participants (2 participants, 1 out of 7 outdoors and 1 out of 13 indoors) walked at their own pace and started accelerating without giving the robot any opportunity to catch up to re-establish their paired formation (Fig. 11).
Of the 20 participants, 7 (35%, 2 of 7 outdoors and 5 of 13 indoors) initially walked side-by-side but eventually accelerated, creating a formation in which the robot was slightly behind the person. One such example is illustrated in Fig. 12, where the robot started slightly behind the person at t = 131.51 s and caught up to her at t = 136.56 s; however, by the end of the course at t = 151.72 s, the robot was again slightly behind her. In addition, 75% of the participants (15 of 20) walked without looking at the robot. The remaining 25% (5 of 20) sometimes looked at the robot while walking, as if to confirm that it was still following (Fig. 13).
We also identified some behavioral differences between the outdoor and indoor situations. Since the indoor corridor was narrower (2.0 m) than the outdoor passage (4.0 m), the robot occasionally got too close to the wall and was slowed down by its safety mechanism. Similarly, in the indoor corridor the robot sometimes moved too close to the person before turning away. These behaviors prompted the person to stop momentarily before resuming. We also noticed some subtle differences. In the outdoor environment, a couple of participants stretched their arms as they might while walking in the early morning. In the indoor environment, people walked with less freedom and more slowly than outdoors. We analyzed the behaviors of the 18 participants who maintained a side-by-side formation and found that those in the indoor environment walked slower on average (M = 0.75 m/s, SD = 0.07) than those outdoors (M = 0.85 m/s, SD = 0.05) (F(1,16) = 9.83, p = .006, η²p = .381). This may be because the indoor space was narrower (2.0 m) than the outdoor space (4.0 m). There was no significant difference in the participants’ relative distance to the robot between the indoor (M = 0.93 m, SD = 0.12) and outdoor spaces (M = 0.99 m, SD = 0.15) (F(1,16) = 1.06, p = .319, η²p = .062).
In the walking-alone condition, the participants walked alone on the same route as in the with-robot condition. In both the indoor and outdoor environments, people walked along the route at a constant speed and tended to walk in the middle of the open space, as shown in Fig. 14.
3.2.2 Hypotheses Testing
Table 3 shows the Pearson’s correlation coefficients among the scores on the three scales. In the walking-alone condition, we identified a statistically significant moderate correlation between the ease-of-walking and enjoyment scores (r = .46, p = .042) and a statistically significant strong correlation between the enjoyment and intention-to-walk scores (r = .65, p = .002). In the with-robot condition, on the other hand, the correlation between the ease-of-walking and enjoyment scores was not significant (r = .13, p = .601), unlike in the walking-alone condition, whereas there was a statistically significant strong correlation between the enjoyment and intention-to-walk scores (r = .64, p = .003).
To investigate the predictions, we statistically analyzed the difference between the conditions for each measurement. A test for normality indicated that, except for ease of walking, the data were normally distributed. For these normally distributed measurements, we applied a mixed ANOVA with one within-participant factor, the experimental condition (with robot or walking alone), and one between-participant factor, the experiment’s location (indoors or outdoors), hereafter referred to as the location factor. For ease of walking, we conducted a non-parametric test (Wilcoxon’s test). Figures 15, 16 and 17 show the means and standard deviations of the three questionnaire scores and the results of these analyses.
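A paired non-parametric comparison of this kind can be run with `scipy.stats.wilcoxon`; the sketch below uses purely illustrative scores, not the study’s data.

```python
import numpy as np
from scipy.stats import wilcoxon

# Illustrative paired scores only (not the study's data): per-participant
# ease-of-walking ratings in the with-robot and walking-alone conditions.
with_robot = np.array([26, 30, 28, 25, 33, 27, 29, 31, 24, 28])
alone      = np.array([31, 34, 30, 29, 35, 28, 33, 34, 27, 30])

# Wilcoxon signed-rank test on the paired differences (two-sided by default).
stat, p = wilcoxon(with_robot, alone)
print(f"W = {stat}, p = {p:.4f}")
```

Here every paired difference favors walking alone, so the test statistic is the smaller signed-rank sum, 0, and the two-sided p-value is small.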
Figure 15 shows the means and standard deviations of the enjoyment scores: indoors with robot (M = 26.38, SD = 5.58), indoors without robot (M = 24.23, SD = 4.09), outdoors with robot (M = 29.50, SD = 4.59), and outdoors without robot (M = 29.00, SD = 3.41). The ANOVA revealed statistical significance for the location factor (F = 5.49, p = .032, η²p = .244): the outdoor participants gave higher enjoyment scores than those in the indoor sessions. No statistically significant differences were found for the experimental condition (with/without robot) (F = .72, p = .408, η²p = .041) or for the interaction effect (F = .28, p = .604, η²p = .016). Because there was no significance for the experimental condition, H2 was not supported.
Similarly, Fig. 16 shows the means and standard deviations of the ease-of-walking scores: indoors with robot (M = 26.50, SD = 6.02), indoors without robot (M = 31.57, SD = 5.21), outdoors with robot (M = 31.00, SD = 5.83), and outdoors without robot (M = 34.33, SD = 1.63). Because the ease-of-walking scores were not normally distributed, we conducted a non-parametric test (Wilcoxon’s test) on the with/without-robot conditions. Note that because no non-parametric test can be applied to mixed-design two-factor data, we merged the outdoor and indoor data and analyzed only the experimental condition (with/without the robot). The result showed a statistically significant difference (p = .022), but in the opposite direction to our prediction: the participants rated ease of walking with the robot lower than walking alone. Because the difference was opposite to the prediction, H1 was not supported.
Figure 17 shows the means and standard deviations of the intention-to-walk scores: indoors with robot (M = 13.36, SD = 5.36), indoors without robot (M = 11.86, SD = 5.35), outdoors with robot (M = 17.67, SD = 3.78), and outdoors without robot (M = 16.50, SD = 4.18). The ANOVA revealed statistical significance for the experimental condition (with/without robot) (F = 4.54, p = .047, η²p = .202). There was a trend for the location factor (F = 3.59, p = .074, η²p = .166) and no significant interaction effect (F = .07, p = .793, η²p = .004). The scores for walking with the robot significantly exceeded those for walking alone; thus, H3 was supported.
3.2.3 Analysis of Interview Results
Our initial hypotheses (Sect. 1) reasoned that since the participants would perceive the robot as enjoyable and easy to use, they would prefer to use it. The questionnaire results (Sect. 3.2.2) indicate that the participants preferred walking with the robot to walking alone, as we hypothesized. However, the reasons they wanted to use the robot were contrary to our hypotheses: ease of walking with the robot was rated low, and it was not correlated with the enjoyment scores. We investigated their reasoning through semi-structured interviews with the following five questions:
-
1.
Was walking with the robot enjoyable?
-
2.
Was walking with the robot easy/difficult?
-
3.
Do you want to use the robot again?
-
4.
What entity did the robot resemble?
-
5.
How can the robot be improved?
For each item, we conducted coding to analyze the participants’ responses. First, we established a coding scheme (definitions of categories). Two coders who did not know our experimental hypotheses independently classified the responses into the defined categories and judged whether the concept in each category was mentioned in each participant’s response. To verify the coding validity, we checked the agreement of their coding by computing Cohen’s kappa coefficient (denoted as κ in Table 4), which showed moderate to strong levels of agreement; thus, our coding scheme was valid. The two coders were then asked to reach a consensus on their classifications to obtain the final results.
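Cohen’s kappa itself is straightforward to compute; below is a minimal sketch with hypothetical coder labels (not the study’s coding data).

```python
def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' categorical judgments:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and
    p_e the chance agreement from the coders' marginal label frequencies."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    cats = set(coder_a) | set(coder_b)
    p_e = sum((coder_a.count(c) / n) * (coder_b.count(c) / n) for c in cats)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical yes/no judgments on whether a concept was mentioned:
a = ["yes", "yes", "no", "no", "yes", "no", "yes", "yes", "no", "yes"]
b = ["yes", "yes", "no", "yes", "yes", "no", "yes", "no", "no", "yes"]
kappa = cohens_kappa(a, b)   # about 0.58: "moderate" agreement
```

With 8 of 10 matching labels and balanced marginals, κ = (0.8 − 0.52) / 0.48 ≈ 0.58, a moderate level of agreement on common benchmarks.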
Table 4 summarizes the final results of the interviews. Each row lists the ratio and number of participants who mentioned each concept during the interview, together with the Cohen’s kappa coefficients. Since the participants could provide multiple answers in each category, the sum of the ratios may exceed 100%. The detailed results in Table 4 are explained in the sections that follow.
(a) Reasons why participants enjoyed walking with the robot The first row of Table 4 denotes the two reasons they perceived more enjoyment with the robot than when walking alone. Novelty, mentioned by 60% of the participants (12 of 20), was a major reason: “It was novel (to walk with a robot).” Meanwhile, 60% attributed their enjoyment to the fact that they walked with the robot (companionship): “It was enjoyable because I got a sense that walking with the robot is like playing with a dog or a child.”
The questionnaire reliability was assessed with Cronbach’s α-coefficients. For each scale, the score was calculated as the sum of the corresponding item scores. All the coefficient values were reasonably high, showing internal consistency.
(b) Reasons why participants felt walking with the robot was easy/difficult The second row of Table 4 denotes that 90% of the participants (18 of 20) mentioned the difficulty of walking with the robot. Their reasoning reflected different aspects of walking. For instance, one participant complained that “it was a little difficult to adapt myself to the robot’s speed.” Another participant said that “I had to adapt myself to the robot’s pace.” Some participants mentioned safety concerns about the robot’s behaviors: “When the robot approached me, I wondered, is this safe? It might bump into me.”
Even though almost all the participants commented on the difficulty, about 40% of the participants (8 of 20) also mentioned that the robot behaved well: “It moved mostly as I imagined. The experiment was in a corridor that had the same width. So, I didn’t think that the robot would go in an unexpected direction. I was able to easily walk with it.”
(c) Reasons why participants intended to use the robot again The third row of Table 4 elaborates the reasoning for using the robot again. One reasoning approach was to straightforwardly mention a positive aspect of the robot with which they had just walked. Of the participants, 55% (11 of 20) explained their reasons as some positive evaluation of walking with the robot such as more enjoyment than walking alone, the novel experience of walking with it, and its cuteness.
On the other hand, most participants commented on their desire to experience something new when we asked about their reasoning: 80% of the participants (16 of 20) mentioned that they expected future development and improvement of the robot’s communication functions. For instance, one participant (P) had the following exchange with the interviewer (I):
- I: Would you like to walk with the robot again?
- P: Sure, that sounds like fun.
- I: Why?
- P: Because I’m with a robot and if it had facial expressions and some eye movements, it would be even more fun. Plus, I know that some robots can communicate with people.
- I: Yes, that’s true.
- P: If this robot had such an ability, it would be even more enjoyable.
Similarly, 35% of the participants (7 of 20) expected improvement in its walking-together functions as well. For instance, when we asked one participant whether he intended to use the robot again, he replied: “Yes, but I’d find it more fun if the robot moved a bit more smoothly.” Of the participants, 45% (9 of 20) mentioned that they expected to use the robot in different situations. For instance, a participant whose experiment was conducted in the indoor environment said: “I’d like to walk with the robot in an open space like in a park.”
(d) What entity did the robot resemble?
The fourth row of Table 4 denotes the types of entities participants perceived the robot to resemble: a robot-like pet animal, 15% (3 of 20); a child/grandchild, 30% (6 of 20); a friend/partner, 35% (7 of 20); or something human-like such as a doll or a doppelgänger, 20% (4 of 20).
(e) How to improve the robot
The last row of Table 4 notes the responses for improving the robot. Communication functions were noted by 60% of the participants (12 of 20). For instance, one participant said that “I wished the robot could talk and understand me to some degree.”
The navigation functions were mentioned by 60% (12 of 20) of the participants. For instance, one participant said: “In our daily life, we already have cars, bicycles, pedestrians, and signals. I’d like a robot that could avoid these kinds of obstacles.” During the interview, 40% of the participants (8 of 20) mentioned diverse daily life functions beyond walking that would improve the robot. These include navigation support in cities, providing protection from traffic accidents, and healthcare coaching.
4 Discussion
4.1 Findings
We started our study by asking two research questions (Sect. 1). In the first question, which addressed whether our participants wanted to walk with the robot again, we found that they generally accepted it as a walking partner. We compared walking with a robot and walking-alone situations and found that they preferred the situation with the robot in terms of their intention to walk again. Our study suggests the future possibility of using robots to support lonely seniors without walking partners (like family members or friends) to maintain their health by walking with them.
We also noticed that the reasons why seniors accepted the robot might differ from what has been discussed in the literature. In the Almere Model proposed by Heerink et al. [14], enjoyment positively affects ease of use, and both enjoyment and ease of use positively affect intention to use. We therefore examined whether our results fit this model.
In our study with the robot, we identified no correlation between enjoyment and ease of walking with/without the robot or between ease of walking with/without the robot and intention to walk. Moreover, the participants perceived a lower ease of walking in the with-robot condition than in the walking-alone condition. Nevertheless, they showed a higher intention to walk in the with-robot condition than in the walking-alone condition. This result suggests that the aspect of the technology acceptance of walking with robots differs from accepting existing robotics technology.
The analysis of the interview data complements the above implication. The low rating for ease of walking with the robot reflects dissatisfaction with the current state of the robot’s functions. Nevertheless, the participants showed an intention to walk based on their expectations for walking with robots in the future. Under the Almere Model, enjoyment positively affects ease of use, and both enjoyment and ease of use positively affect intention to use; our results do not follow this pattern, which suggests that walking-partner robots do not exactly fit the Almere Model. Instead, expectation itself was one major factor: participants’ intention to walk was mainly driven by their anticipation of future use and improvement.
Concerning our second research question, i.e., whether current robotics technology satisfies senior users in walking-partner applications, our study indicates an in-between result. Since seniors expressed their intention to walk with the robot again, we believe that current robotics technology offers a certain level of satisfaction; however, our interviews also revealed that they often wanted the robot to improve its navigation behavior. For instance, one person wanted it to move faster, but another wanted it to move slower. Since individual preferences for how the robot moves with them differ widely, we believe the control techniques required for walking-partner applications are more complicated than we previously thought. One of our future works is to improve the quality of side-by-side navigation technology.
4.2 Implications
Although our robot did not speak, one apparent design implication from our study is that our robot requires better communication capabilities. Participants wanted it to talk, to be more emotive with its facial and bodily expressions, and to communicate its intentions more. This matches the findings in other literature [16].
Based on the interview results, we also found that participants held diverse views of what the robot resembled, including a robot-like pet animal, a child/grandchild, a friend/partner, or something human-like such as a doll or a doppelgänger. Of the participants, 85% (17 of 20) agreed that the robot resembled something human-like, including a child/grandchild or a friend/partner.
Some participants expressed an intention to walk with the robot even though they admitted that it lacked some capabilities. They expressed an expectation-based desire to observe the robot’s improvement as they use it over time. This resembles previous findings [29], in which, at a Japanese shopping mall, people accepted a robot as a kind of mascot that performs no concrete functions but holds only future roles. Like developing a relationship with one’s child or grandchild, perhaps senior users in Japan might accept robots in their technological infancy because they foresee and hope to enjoy the improvement of their functionality. When robot functions such as conversational capabilities [33] improve, people might accept such robots more, and applications of walking-partner robots might spread more rapidly in society.
During the study, we observed a few styles of walking with the robot. For example, a few participants completely ignored the robot while walking and thus lost the side-by-side formation. We believe this was because those participants were fast walkers with no interest in walking together with the robot. Similarly, we noticed that during some trials the robot fell slightly behind the person after starting together and did not strictly maintain the side-by-side formation; in these cases, people walked normally, as they might with a small child who often trails behind. Yet most of the participants maintained a side-by-side formation, like the motion expected of two people walking together. Similar perceptions appeared in the interviews, as some participants identified the robot with a small child or a pet, while others identified it as a friend, a partner, or something human-like.
In addition, our study provided evidence that the walking location matters. Participants enjoyed walking more outdoors and perceived the robot as easier to use outdoors, possibly because the street was wider with better landscaping. Further investigation is necessary into which environmental factors are most influential (e.g., sunshine, wind, temperature, road width, and street surface). Placing users in better walking locations could improve their satisfaction, e.g., by letting a robot suggest or encourage users to walk at such locations.
4.3 Generalizability and Limitations
We tested the robot in a specific context: a humanoid robot in a university environment, using a side-by-side walking model, for a short amount of time with each participant, and without using the robot’s communication ability. Therefore, we have not yet identified to what extent our findings generalize; the results might differ with different robots, in different contexts, and with different communication skills. Since we specifically focused on older adults, we expect that our results would differ if users from other age groups were involved; e.g., young people who often engage in exercise/sports activities with friends might be less interested in walking-partner robots.
Moreover, our findings might be culture-dependent. For instance, related to the intention to walk, Nomura et al. [27] found that Japanese people tended to expect a robot to serve as a communication partner more than people in other countries. Our participants gave similar responses when we discussed their reasons for intending to walk. Thus, since ideas about what roles robots should fill are culture-dependent, whether people accept them as walking partners could also be culture-dependent. In addition, participants in our study expressed an intention to walk with the robot again even though they found it lacking some functions, and their acceptance does not fully match the Almere Model; this mismatch might also reflect cultural reasons.
The experiment was conducted with 20 participants, only 7 of whom were placed in the outdoor condition; a larger sample size would provide more reliable, less biased results about the conditions. The current side-by-side model also does not capture partner preferences such as distance and speed. One direction for future work is to elicit such preferences at the start of interactions to enable more user-friendly navigation that caters to the differences among users.
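To make the preference-elicitation idea concrete, user-specific preferences could enter the side-by-side controller as explicit parameters. The sketch below is a minimal illustration, not the model used in the study; the class, function names, and proportional gains are all hypothetical assumptions:

```python
from dataclasses import dataclass

@dataclass
class PartnerPreferences:
    """Hypothetical per-user preferences elicited at the start of interaction."""
    preferred_speed: float  # partner's comfortable walking speed (m/s)
    preferred_gap: float    # desired lateral side-by-side distance (m)

def side_by_side_command(robot_speed: float, lateral_gap: float,
                         prefs: PartnerPreferences,
                         k_speed: float = 0.5, k_gap: float = 0.8):
    """Proportionally adjust forward speed and lateral velocity so the robot
    converges toward the partner's preferred pace and spacing."""
    forward_cmd = robot_speed + k_speed * (prefs.preferred_speed - robot_speed)
    lateral_cmd = k_gap * (prefs.preferred_gap - lateral_gap)
    return forward_cmd, lateral_cmd
```

In such a scheme, the gains would govern how quickly the robot adapts; eliciting `preferred_speed` and `preferred_gap` per user would address the individual differences our participants expressed.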
5 Conclusions
This paper investigated whether seniors would accept a robot as a walking partner. We prepared a robot that can move side by side with seniors and measured their perceptions of ease of walking, enjoyment, and intention to walk. Experimental results showed that our participants rated their intention to walk significantly higher for walking with the robot than for walking alone, although no significant difference was found in ease of walking or enjoyment. Interview results showed that they wanted to use the robot again because they felt positive about it now and because they expect it to improve in the future.
References
Bickmore TW, Picard RW (2005) Establishing and maintaining long-term human–computer relationships. ACM Trans Comput Hum Interact (TOCHI) 12:293–327
Broadbent E, Stafford R, MacDonald B (2009) Acceptance of healthcare robots for the older population: review and future directions. Int J Soc Rob 1:319–330
Broekens J, Heerink M, Rosendal H (2009) Assistive social robots in elderly care: a review. Gerontechnology 8:94–103
Chen TL, Bhattacharjee T, Beer JM, Ting LH, Hackney ME, Rogers WA, Kemp CC (2017) Older adults’ acceptance of a robot for partner dance-based exercise. PLoS ONE 12:e0182736
Dishman RK, Sallis JF, Orenstein DR (1985) The determinants of physical activity and exercise. Public Health Rep 100(2):158–171
Doucet A, Gordon N, de Freitas N (2001) Sequential Monte Carlo methods in practice. Springer, New York
Fasola J, Mataric MJ (2012) Using socially assistive human–robot interaction to motivate physical exercise for older adults. Proc IEEE 100:2512–2526
Fasola J, Matarić MJ (2013) Socially assistive robot exercise coach: motivating older adults to engage in physical exercise. In: Springer tracts in advanced robotics, vol 88, pp 463–479
Ferrer G, Zulueta AG, Cotarelo FH, Sanfeliu A (2017) Robot social-aware navigation framework to accompany people walking side-by-side. Auton Robots 41(4):775–793
Giraff (2018) An advanced telepresence robot for hospitals & home care 2018. https://telepresencerobots.com/robots/giraff-telepresence. Accessed 10 June 2018
Granner ML, Sharpe PA, Hutto B, Wilcox S, Addy CL (2007) Perceived individual, social, and environmental factors for physical activity and walking. J Phys Act Health 4:278–293
Hebesberger D, Dondrup C, Koertner T, Gisinger C, Pripfl C (2016) Lessons learned from the deployment of a long-term autonomous robot as companion in physical therapy for older adults with dementia: a mixed methods study. In: The eleventh ACM/IEEE international conference on human–robot interaction, pp 27–34
Heerink M, Kröse B, Wielinga B, Evers V (2008) Enjoyment, intention to use and actual use of a conversational robot by elderly people. In: ACM/IEEE international conference on human–robot interaction (HRI2008), pp 113–120
Heerink M, Kröse B, Evers V, Wielinga B (2010) Assessing acceptance of assistive social agent technology by older adults: the Almere model. Int J Soc Rob 2:361–375
Hornung A, Wurm KM, Bennewitz M, Stachniss C, Burgard W (2013) Octomap: an efficient probabilistic 3D mapping framework based on octrees. Auton Rob 34:189–206
Iwamura Y, Shiomi M, Kanda T, Ishiguro H, Hagita N (2011) Do elderly people prefer a conversational humanoid as a shopping assistant partner in supermarkets? In: ACM/IEEE international conference on human–robot interaction (HRI2011), pp 449–456
Karunarathne D, Morales Y, Kanda T, Ishiguro H (2018) Model of side-by-side walking without the robot knowing the goal. Int J Soc Rob 10:401–420
Kidd CD (2008) Designing for long-term human-robot interaction and application to weight loss. Ph.D. Dissertation. Massachusetts Institute of Technology, Cambridge, MA, USA
Kobayashi Y, et al (2011) Robotic wheelchair moving with caregiver collaboratively. In: International conference on intelligent computing. Springer, pp 523–532
Kobayashi Y, et al (2011) Robotic wheelchair moving with caregiver collaboratively depending on circumstances. In: Extended abstracts on ACM conference on human factors in computing systems (CHI2011)
Mann JA, MacDonald BA, Kuo I-H, Li X, Broadbent E (2015) People respond better to robots than computer tablets delivering healthcare instructions. Comput Hum Behav 43:112–117
Morales Y, Satake S, Huq R, Glas D, Kanda T, Hagita N (2012) How do people walk side-by-side? Using a computational model of human behavior for a social robot. In: ACM/IEEE international conference on human–robot interaction (HRI2012), pp 301–308
Morales Y, Kanda T, Hagita N (2014) Walking together: side by side walking model for an interacting robot. J Hum–Rob Interact 3:51–73
Moussaïd M, Perozo N, Garnier S, Helbing D, Theraulaz G (2010) The walking behaviour of pedestrian social groups and its impact on crowd dynamics. PLoS ONE 5:e10047
Murakami R, Morales Saiki LY, Satake S, Kanda T, Ishiguro H (2014) Destination unknown: walking side-by-side without knowing the goal. In: Proceedings of the 2014 ACM/IEEE international conference on human–robot interaction. ACM, pp 471–478
Niemelä M, Heikkilä P, Lammi H (2017) A social service robot in a shopping mall: expectations of the management, retailers and consumers. In: Proceedings of the companion of the 2017 ACM/IEEE international conference on human–robot interaction (HRI’17). ACM, pp 11–18
Nomura T, Suzuki T, Kanda T, Han J, Shin N, Namin J, Burke J, Kato K (2008) What people assume about humanoid and animal-type robots: cross-cultural analysis between Japan, Korea, and the United States. Int J Hum Rob 5:25–46
Prassler E, Bank D, Kluge B (2002) Key technologies in robot assistants: motion coordination between a human and a mobile robot. Trans Control Autom Syst Eng 4:56–61
Sabelli AM, Kanda T (2016) Robovie as a mascot: a qualitative study for long-term presence of robots in a shopping mall. Int J Soc Rob 8(2):211–221
Schutzer KA, Graves BS (2004) Barriers and motivations to exercise in older adults. Prev Med 39:1056–1061
Slam6d (2011) Slam6d—simultaneous localization and mapping with 6 Dof. http://www.openslam.org/slam6d.html. Accessed 20 May 2011
The VN, Jayawardena C (2016) A decision-making model for optimizing social relationship for side-by-side robotic wheelchairs in active mode. In: 2016 6th IEEE international conference on biomedical robotics and biomechatronics (BioRob). IEEE, pp 735–740
Totsuka R, Satake S, Kanda T, Imai M (2017) Is a robot a better walking partner if it associates utterances with visual scenes? In: Proceedings of the 2017 ACM/IEEE international conference on human-robot interaction. ACM, pp 313–322
Tsui KM, Desai M, Yanco HA, Uhlik C (2011) Exploring use cases for telepresence robots. In: Proceedings of the 6th international conference on human–robot interaction. ACM, pp 11–18
Vita (2018) High-tech, medical grade remote presence for hospitals with FDA clearance. https://telepresencerobots.com/robots/intouch-health-rp-vita. Accessed 10 June 2018
Acknowledgements
We extend our gratitude to Norifumi Ogawa and Tatsuya Matsushima for their help during our experiments.
Funding
This study was funded in part by Tateishi Science and Technology Foundation and by JST, CREST.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Karunarathne, D., Morales, Y., Nomura, T. et al. Will Older Adults Accept a Humanoid Robot as a Walking Partner?. Int J of Soc Robotics 11, 343–358 (2019). https://doi.org/10.1007/s12369-018-0503-6