1 Introduction

Older adults who seek to maintain both exercise and a social presence with others could directly benefit from advances in social robots, and several potential applications exist. The study reported in [3] demonstrated that companion-type socially assistive robots could positively affect the health care of older adults with respect to mood, loneliness, and social connection with others. These socially assistive robots mostly provided communication functionality with limited navigation capability. In the context of shopping assistance, older adults preferred a humanoid conversational social robot as a shopping partner [16]; during that study, 83% (20 of 24) of the participants enjoyed the feeling of “being with someone” brought by the robot. The study in [4] demonstrated that older adults accepted a dancing-partner robot as a social partner in dance-based exercise, and in [8] a socially assistive robot served as an exercise coach that gained user acceptance and motivated physical exercise in older adults. The robots in [4, 8] had limited navigation functionality. Overall, the scenarios of social dancing [4], exercise training [8], and shopping together [16] demonstrated that older adults accept social robots for social reasons. However, no study has determined whether older adults would accept an autonomous humanoid robot as a walking partner in a real environment.

Healthcare is an emerging application domain for social partner robots [2]. Appropriate daily exercise is crucial for health, especially for seniors, who tend to get insufficient exercise due to various physical barriers and a lack of motivation [30]. Hence, researchers have addressed using social partner robots to motivate people to exercise, although so far most such research has only been tested with younger people. One approach used a robot as a coach that encouraged users to exercise but did not exercise with them. For instance, Bickmore et al. [1] developed a computer-graphics avatar that interacts daily with users to encourage exercise, Kidd et al. [18] developed a robotic agent that encourages exercise to lose weight, and Fasola et al. [7] used a humanoid robot to instruct and encourage users to exercise using arm gestures. Mann et al. [21] compared robots with computer tablets equipped with avatar software that could read instructions aloud and concluded that users enjoyed interacting more with the robots and subsequently preferred them over the tablets. One major aim of this work is to establish whether social robots can successfully motivate or positively influence seniors to engage in daily exercise like walking.

This study also aims to use robots as a social presence that provides a sense of togetherness. Having a social partner is a factor that influences exercise behavior, including walking behavior [5, 11]; thus we speculate that social robots could serve as partners that exercise with users. Telepresence has also been studied [26, 34], and telepresence robots such as RP-Vita [35] and Giraff [10] are commercially available; in our proposed system, however, robot navigation is fully autonomous. Furthermore, since the rapid growth of robotic technologies has enabled robots to navigate with people [9, 12, 20, 25, 32], it might already be realistic to let people walk with autonomous robots. We believe that robots have advanced technologically enough to meet both needs.

As discussed above, this study provides useful information about robotic applications as walking partners for older adults. Even though it is becoming technologically realistic for robots to navigate with people, there is insufficient evidence that mobile robots have gained social acceptance among older adults. Since we do not know whether older adults would prefer to walk with an autonomous robotic partner at the current technology level, we propose and answer the following research questions:

  • Do older adults prefer walking-partner robots over walking alone?

  • Can the current state of robotics technology satisfy older adults for walking-partner applications?

The first research question, which considers the intentions of older adults, is inspired by the Almere Model proposed by Heerink et al. [14], which was derived from the general technology acceptance model, where users accept robots because they are easy and fun to use [13, 14]. In the Almere Model, enjoyment positively affects ease of use, and both enjoyment and ease of use positively affect intention to use. We speculate that our study will fit the Almere Model: if it holds and participants enjoy walking with the robot, that enjoyment should positively affect their intention to walk with it.

The second research question considers satisfaction with the robotics technology used in walking-partner applications. We investigated whether a robot can both navigate well enough to be a feasible walking partner and be viewed as an enjoyable social agent. In doing so, we examined whether the robot’s navigational capabilities were sufficient, as judged by the participants, and whether walking with the robot was enjoyable at the current technology level.

We believe that the robot-controlling techniques are mature enough that older adults will accept a robot as a walking partner. Hence, we hypothesize as follows:

  • H1: Older adults will perceive more ease when they walk with the robot than when walking alone.

  • H2: They will perceive more enjoyment when they walk with the robot than when walking alone.

  • H3: They will perceive more intention to walk with the robot than to walk alone.

Apart from the above hypotheses and research questions, we also identify which aspects of the robot must be improved. We examine the communication features and design implications that people expect, to enhance the robot’s capabilities and enjoyment. In addition, we identify whether the robot, with its current appearance, was perceived as some form of human-like entity as it becomes a social partner. The findings on appearance, communication methods, and design implications will help enhance the enjoyment of walking with a robot in future studies. We expect to use the results of this study to establish whether social robots can successfully motivate or positively influence older adults to engage in daily exercise, like walking.

2 Robot System

When people walk together, they usually maintain a side-by-side formation [24], which facilitates communication and allows them to maintain personal distance and eye contact. We assume that a walking-partner robot should also maintain a similar side-by-side walking formation. Among the available side-by-side control techniques, we used our previously proposed approach, described in Sect. 2.3.

2.1 Hardware Configuration

To move and keep pace with a human, a fast, reactive robot is required that can adjust to its partner. For this function, we used a fast wheeled humanoid robot named Robovie-R3 (Fig. 1), whose maximum speed is 1 m/s and maximum acceleration is 0.80 m/s². It can interact with people through utterances and gestures. We placed a 3D-laser range finder (Velodyne HDL-32E) 1.40 m above ground level to gain enough visibility of the environment for localization and people-tracking. In addition, we placed a 2D-laser range finder (Hokuyo UTM-30LX) near the ground surface (0.07 m above the ground) for detecting obstacles and making emergency stops. The robot is also equipped with wheel encoders and an inertial measurement unit (IMU) (Crossbow VG400).

Fig. 1
figure 1

Robovie-R3: wheeled humanoid robot equipped with wheel encoders, inertial measurement unit, and 2D- and 3D-laser range finders that can interact through utterances and gestures

2.2 System

As shown in Fig. 2, the system is composed of off-line and on-line processes whose inputs are odometry data and 2D- and 3D-laser range finder data. The map and the subgoal list were prepared off-line in advance; the remaining computations run in real time to move alongside a human partner. The robot computes its own pose with a localization module and estimates the position of its human partner with a human tracker module. It computes the next best location using a side-by-side walking model and monitors for obstacles with the safety-stop module. We briefly explain these modules below.
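The on-line data flow above can be sketched as a single control cycle. The module names and interfaces below are our own illustrative stand-ins, not the authors’ code; trivial stubs make the sketch runnable:

```python
class Stub:
    """Trivial stand-ins so the loop can run; real modules consume the sensor data."""
    def __init__(self, value=None):
        self.value, self.calls = value, []
    def estimate(self):
        return self.value
    def next_position(self, pose, partner):
        return (pose[0] + 0.2, pose[1])  # toy plan: step 0.2 m forward, partner abreast
    def obstacle_imminent(self):
        return self.value
    def stop(self):
        self.calls.append("stop")
    def move_to(self, target):
        self.calls.append(("move", target))

def step(localizer, tracker, planner, safety, base):
    """One on-line cycle: localize, track the partner, joint-plan, then safety-gate the motion."""
    pose = localizer.estimate()                    # 6-DoF pose from 3D scans + odometry + IMU
    partner = tracker.estimate()                   # partner position from the human tracker
    target = planner.next_position(pose, partner)  # side-by-side walking model
    if safety.obstacle_imminent():                 # 2D scan near ground level
        base.stop()
    else:
        base.move_to(target)

base = Stub()
step(Stub((0.0, 0.0)), Stub((0.0, 1.0)), Stub(), Stub(False), base)
```

In a real system this cycle would run at the sensor rate, with the safety-stop module always overriding the planner.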

Fig. 2
figure 2

System overview: (left) robot sensors, (top) elements computed off-line, and (bottom) real-time modules

We used 6-DoF localization with the 3D-laser range finder to estimate x, y, z, yaw, pitch, and roll. The data from the long-range 3D sensor, the pose data from the inertial measurement unit, and the wheel-encoder information together enabled accurate robot localization in outdoor environments that are not flat and contain people around the robot.

We built an environmental map off-line to localize the robot: we manually drove it and logged sensor data, which were then fed to a SLAM framework [31]. For the map representation, we used an octree with 0.10 m resolution [15]; the final 3D map is shown in Fig. 3. We implemented a localization module based on a particle filter [6]. In the prediction step \( p(x_{t} | u_{t-1}, x_{t-1}) \), the current pose state \( x_{t} \) is computed from the previous state \( x_{t-1} \) and the robot motion \( u_{t-1} \), using a differential-drive model to propagate the particles. In the update step \( p(z_{t} | x_{t}, m) \), we used a 3D-laser scan \( z_{t} = (z_{t}^{1}, \ldots, z_{t}^{K}) \) and a likelihood-field map \( m \) to compute the weight of each particle with the likelihood end-point model. In this implementation, the particle filter had 100 particles.
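The predict–update–resample cycle can be illustrated with a minimal planar sketch. This is an (x, y, yaw) simplification of the paper’s 6-DoF filter, with a toy likelihood field consisting of a single wall at x = 5 m; apart from the 100-particle count, all numbers are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100       # particle count, as in the paper's implementation
WALL_X = 5.0  # toy likelihood field: one wall at x = 5 m

def predict(p, v, w, dt):
    """Prediction p(x_t | u_{t-1}, x_{t-1}): differential-drive model plus noise."""
    vn = v + rng.normal(0.0, 0.05, len(p))
    wn = w + rng.normal(0.0, 0.02, len(p))
    p[:, 0] += vn * dt * np.cos(p[:, 2])
    p[:, 1] += vn * dt * np.sin(p[:, 2])
    p[:, 2] += wn * dt
    return p

def update(p, ranges, bearings, sigma=0.2):
    """Update p(z_t | x_t, m): likelihood end-point model against the toy field."""
    w = np.ones(len(p))
    for r, b in zip(ranges, bearings):
        ex = p[:, 0] + r * np.cos(p[:, 2] + b)  # scan end-points in the map frame
        d = np.abs(ex - WALL_X)                 # distance to the nearest obstacle
        w *= np.exp(-d ** 2 / (2 * sigma ** 2))
    return w / w.sum()

def resample(p, w):
    return p[rng.choice(len(p), size=len(p), p=w)]

# True robot pose is (3, 0, 0): a forward beam hits the wall at a range of 2.0 m
particles = np.column_stack([rng.uniform(2, 4, N), rng.normal(0, 0.3, N), rng.normal(0, 0.1, N)])
for _ in range(10):
    particles = predict(particles, v=0.0, w=0.0, dt=0.1)
    weights = update(particles, ranges=[2.0], bearings=[0.0])
    particles = resample(particles, weights)
estimate = particles.mean(axis=0)  # converges near x = 3
```

The real system applies the same cycle in 6-DoF against the octree map, with one weight term per scan end-point.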

Fig. 3
figure 3

3D map of experimental environment

We tracked the human partner using the 3D-range finder data. Background subtraction between the environmental map and the currently measured point cloud yields the point data corresponding to dynamic objects (e.g., humans). These dynamic points were clustered, and each cluster within 15 m of the robot was tracked by a particle filter with 50 particles. For safety, a safety-stop module uses the 2D-laser sensor to stop the robot in case of imminent collision. Figure 4 shows a processing example for localization and human tracking.
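Background subtraction and clustering can be sketched in 2D as follows. The distance thresholds and the greedy clustering rule are illustrative assumptions; only the 15 m tracking gate is taken from the text:

```python
import numpy as np

def background_subtract(scan, static_map, thresh=0.2):
    """Keep scan points farther than `thresh` from every mapped (static) point."""
    dyn = [p for p in scan if np.linalg.norm(static_map - p, axis=1).min() > thresh]
    return np.array(dyn)

def cluster(points, link=0.5):
    """Greedy clustering: attach each point to the first cluster within `link` m."""
    clusters = []
    for p in points:
        for c in clusters:
            if np.linalg.norm(np.mean(c, axis=0) - p) < link:
                c.append(p)
                break
        else:
            clusters.append([p])
    return [np.mean(c, axis=0) for c in clusters]

# Toy 2D scene: a mapped wall, a person 2 m from the robot, and clutter at 19 m
static_map = np.array([[5.0, y] for y in np.arange(-2.0, 2.0, 0.1)])
scan = np.vstack([static_map + 0.01, [[2.0, 0.0], [2.1, 0.05], [19.0, 0.0]]])
dynamic = background_subtract(scan, static_map)
# Track only clusters within the 15 m gate mentioned in the text
tracked = [c for c in cluster(dynamic) if np.linalg.norm(c) <= 15.0]
```

Each surviving cluster center would then seed one 50-particle tracking filter.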

Fig. 4
figure 4

Robot and older adult walking side-by-side outdoors (left) and corresponding 3D-point cloud (right). Point cloud data were colored based on height (0–0.5 m in blue and 0.5–2.4 m in green). Localization and human tracking results are shown in trajectories of robot (green) and tracked human (brown). (Color figure online)

2.3 Side-by-Side Walking Model

Several methods are available for side-by-side navigation control for robots. Previous works commonly exploit the recently observed velocity of the person and extrapolate it to predict future locations [19, 28]. However, a simple velocity-controller approach suffers from the instability of the velocity vector and fails in situations where mutually obstacle-free paths must be planned [22, 23].

The joint-planning approach models side-by-side walking as a collaborative activity in which each walking agent plans motions that are good not only for itself but also for its partner. For instance, if there is an obstacle in front of the partner, the agent makes space so that the partner can avoid it. This idea is implemented as utility-based joint planning [17], in which the goodness of future situations is computed as a utility. The robot assumes that people maximize their utility, anticipates its partner’s future motion, and plans its own future motion to maximize the utilities of both its partner and itself.

2.3.1 The Navigation Method

We use the navigation method from our previous work [17], where the side-by-side walking model was implemented as a parameter-based utility model with joint planning. The model does not require the robot to know the final goal; it uses parameterized utilities to calculate the next best positions for both the robot and its human partner. The parameters were derived and calibrated from people’s trajectories when they walk side-by-side.

Our model is composed of three types of utilities: environmental, motion, and relative (Fig. 5). Relative utility, which represents the goodness of the position relative to the partner, includes relative distance (\( R_{d} \)), relative angle (\( R_{a} \)), and relative velocity (\( R_{v} \)); these sub-utilities increase if the future positions form a better side-by-side formation. Motion utility is composed of linear velocity (\( M_{v} \)) and angular velocity (\( M_{w} \)); these sub-utilities increase if the future motion is stable, i.e., moving straight at a constant velocity. Finally, environmental utility is composed of the distance to obstacles (\( E_{O} \)) and the direction to the goal, or to the next subgoal when the pair moves along a path with several turning points (\( E_{S}(\boldsymbol{s}_{target}) \)); these sub-utilities are high when the pair is moving toward the goal/subgoal with enough distance from nearby obstacles.

Fig. 5
figure 5

Utility function’s factors

The total utility is given by the following function, where \( k_{x} \) represents the weight of each utility:

$$ U_{C}\left( p_{t+1}^{i}, p_{t+1}^{j}, \boldsymbol{s}_{target} \right) = k_{E_{O}} E_{O} + k_{E_{S}} E_{S}(\boldsymbol{s}_{target}) + k_{R_{d}} R_{d} + k_{R_{a}} R_{a} + k_{R_{v}} R_{v} + k_{M_{v}} M_{v} + k_{M_{w}} M_{w} . $$
(1)

To limit the planning space, we only include likely positions in the utility computation. The current global positions of the robot and its partner are projected forward based on their current linear and angular velocities, and an anticipation grid with a cell resolution of 0.2 m is placed around each projected position. The grid center for each agent \( q \) is given by \( p_{t+1}^{q} = p_{t}^{q} + v_{t}^{q} t_{pred} \), where the extrapolation time \( t_{pred} \) was set to 2 s. We removed grid cells that cannot be reached under the robot’s constraints: its current linear velocity, angular velocity, and acceleration limits. Figure 6 illustrates the anticipation grids containing the possible future neighboring locations (\( P^{i} \), \( P^{j} \)) of the robot and its partner. For every pair of grid points in (\( P^{i} \), \( P^{j} \)), the utility is computed using Eq. (1). Finally, the future position of the robot (\( p_{t+1}^{i} \)) and the anticipated position of its walking partner (\( p_{t+1}^{j} \)) are chosen as the pair that yields the highest joint utility score.
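A minimal sketch of this joint planning follows, with the 0.2 m cell size and 2 s horizon from the text but a simplified two-term utility (relative distance plus goal direction, with invented weights) standing in for the full seven-term Eq. (1):

```python
import numpy as np

CELL = 0.2    # grid resolution (m), as in the paper
T_PRED = 2.0  # extrapolation horizon (s), as in the paper

def anticipation_grid(pos, vel, half=2):
    """Candidate future positions around the extrapolated pose p + v * t_pred."""
    center = pos + vel * T_PRED
    offs = np.arange(-half, half + 1) * CELL
    return np.array([center + [dx, dy] for dx in offs for dy in offs])

def utility(p_robot, p_human, subgoal, d_side=1.0):
    """Simplified stand-in for Eq. (1): relative-distance + goal-direction terms."""
    rel = np.linalg.norm(p_robot - p_human)
    u_rel = -abs(rel - d_side)  # best when about 1 m apart (side by side)
    u_goal = -(np.linalg.norm(subgoal - p_robot) + np.linalg.norm(subgoal - p_human))
    return u_rel + 0.1 * u_goal  # invented weights

def plan(p_r, v_r, p_h, v_h, subgoal):
    """Jointly pick the (robot, human) pair of cells that maximizes shared utility."""
    Gr, Gh = anticipation_grid(p_r, v_r), anticipation_grid(p_h, v_h)
    best, best_u = None, -np.inf
    for pr in Gr:
        for ph in Gh:
            u = utility(pr, ph, subgoal)
            if u > best_u:
                best, best_u = (pr, ph), u
    return best

# Robot at (0, 0), partner 1 m to its left, both moving at 0.8 m/s toward (10, 0.5)
pair = plan(np.array([0.0, 0.0]), np.array([0.8, 0.0]),
            np.array([0.0, 1.0]), np.array([0.8, 0.0]),
            subgoal=np.array([10.0, 0.5]))
```

The chosen pair keeps the two agents roughly 1 m apart while both advance toward the subgoal, mirroring the joint maximization of Eq. (1).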

Fig. 6
figure 6

Joint-planning implemented as utility-based model for side-by-side walking

2.4 Example

To show how the system computes and controls the robot to form a side-by-side formation, we present a typical example in which a person walked along a street on a university campus (Fig. 7) while the robot successfully moved alongside. As the robot and its human partner walk together, the robot estimates its own position and its partner’s position using the localization and human-tracking modules. It then performs joint planning: from the current positions of the robot and the human, possible future locations are projected, and Eq. (1) computes the utility of each of them (Fig. 8). Cells with higher utility values (brighter) represent locations that the system estimates will maintain a better side-by-side formation. The planner chooses the highest-utility locations for both the human and the robot, anticipating that the person will go to the best location (green arrow) and planning to move to its own best location (orange arrow). Note that for simplicity we plotted the utility of all locations, although each value is computed over a pair of robot and human future locations. With joint planning, the robot thus considers not only its own motion but also the motion utility of its human partner.

Fig. 7
figure 7

Robot walking side-by-side outdoors

Fig. 8
figure 8

Example of joint-planning computation: grid selection and velocity calculation

3 Experiment

We conducted an experiment with older participants to address our research questions and the hypotheses raised in the introduction.

3.1 Method

3.1.1 Participants

The participants were Japanese adults whose ages ranged from 60 to 73 years (N = 20, males: 10, females: 10) with an average age of 67.50 years (SD = 3.90). They were paid for their participation.

3.1.2 Conditions

We compared these two conditions:

  • With robot: each participant walked with the robot, as explained in Sect. 2.

  • Walking alone: each participant walked alone.

The experiment used a within-subject design. The order of the conditions was counterbalanced: equal numbers of participants were assigned to each order, and the assignment was random. To investigate how the robot’s presence alone affected the participants’ perception of walking with it, the robot did not talk at all during the experiment.

3.1.3 Environment

The experiment was conducted on a university campus. Whenever possible, the sessions were conducted outdoors (Fig. 9a), but on rainy or snowy days they were conducted indoors (Fig. 9b). The outdoor sessions were conducted with 7 participants (5 males, 2 females), who walked with the robot along an 80 m route on a 4 m wide street; in each run, they walked to the end of the street and returned. Because of rain or snow, the sessions of the remaining 13 participants (5 males, 8 females) were conducted indoors, where they made two round trips in a 30 m corridor.

Fig. 9
figure 9

Experiment environment maps a outdoor environment b indoor environment

3.1.4 Procedure

We conducted each experiment as follows:

  1. 1.

We explained the experiment to the participants, who signed an informed consent form, and distributed demographic questionnaires.

  2. 2.

    For each condition, the participants stood at the starting point and learned the route. In the walking-alone condition, they started on their own. In the with-robot condition, they started when the robot was ready to start.

  3. 3.

    After they finished walking, the participants were given questionnaires (explained in the measurement section).

  4. 4.

After the first session was completed, the second session for the other condition was immediately conducted (steps 2 and 3).

  5. 5.

    The participants were interviewed after they completed sessions for both conditions.

3.1.5 Measures

After each experimental session, we gave questionnaires that contained three measurements inspired by the Almere Model [14]:

  • Perceived enjoyment: “I enjoyed walking this way.”

  • Perceived ease of walking: “I think I will know this way of walking immediately.”

  • Intention to walk: “I’d like to walk again this way over the next few days if I were given the opportunity.”

The questionnaire items under each category are listed in Table 1. Each item was evaluated on a 1–7-point Likert scale, where 1 was the most negative (strongly disagree) and 7 was the most positive (strongly agree).

Table 1 Questionnaire items based on technology acceptance (Almere) Model

3.2 Results

3.2.1 Observations

During the study, 90% of the participants (18 of 20) walked with the robot and maintained a side-by-side formation. Figure 10 shows a participant walking outdoors in a strict side-by-side formation with the robot; the trajectory is plotted on the map with photos and time steps. The pair began at the bottom of the map around t = 0 s (t denotes time in s) and moved along the passage until t = 82.88 s, when they turned around and returned to the starting point. They walked at a constant speed of 0.88 m/s and maintained a strict, stable side-by-side formation throughout their walk.

Fig. 10
figure 10

Man walking with robot in strict side-by-side formation

As shown in Table 2, 10% of the participants (2 of 20; 1 of 7 outdoors and 1 of 13 indoors) walked at their own pace, accelerating without giving the robot any opportunity to catch up and re-establish the paired formation (Fig. 11).

Table 2 Observations on human–robot motion both indoors and outdoors. Percentage values in parentheses were computed considering the total number of participants for each condition
Fig. 11
figure 11

Woman walking at her own pace and ignoring robot

Of the 18 participants who sustained a side-by-side formation, 7 (35% of all participants; 2 of 7 outdoors and 5 of 13 indoors) initially walked side-by-side but eventually accelerated, creating a formation in which the robot was slightly behind the person. One such example is illustrated in Fig. 12, where the robot started slightly behind the person at t = 131.51 s and caught up with her at t = 136.56 s; however, by the end of the course at t = 151.72 s, the robot was again slightly behind her. In addition, 75% of the participants (15 of 20) walked without looking at the robot. The remaining 25% (5 of 20) sometimes looked at the robot while walking, as if to confirm that it was still following (Fig. 13).

Fig. 12
figure 12

Woman walking slightly ahead of robot

Fig. 13
figure 13

Man looking at robot while walking side-by-side with it

We also identified some behavioral differences between the outdoor and indoor situations. Since the indoor corridor was narrower (2.0 m) than the outdoor passage (4.0 m), the robot occasionally got too close to the wall and was slowed down by its safety mechanism. Similarly, in the indoor corridor the robot sometimes moved too close to the person before turning away; these behaviors prompted the person to stop momentarily before resuming. We also noticed some subtler differences. In the outdoor environment, a couple of participants stretched their arms as they might do while walking in the early morning; in the indoor environment, people walked with less freedom and more slowly than outdoors. We analyzed the behaviors of the 18 participants who maintained a side-by-side formation and found that participants indoors walked slower on average (M = 0.75 m/s, SD = 0.07) than participants outdoors (M = 0.85 m/s, SD = 0.05) (F(1,16) = 9.83, p = .006, ηp² = .381). This may be because the indoor space was narrower (2.0 m) than the outdoor space (4.0 m). There was no significant difference in their relative distance to the robot between the indoor (M = 0.93 m, SD = 0.12) and outdoor spaces (M = 0.99 m, SD = 0.15) (F(1,16) = 1.06, p = .319, ηp² = .062).

In the walking-alone condition, the participants walked alone on the same route as in the with-robot condition. In both the indoor and outdoor environments, people walked along the route at a constant speed and tended to walk in the middle of the open space, as shown in Fig. 14.

Fig. 14
figure 14

Man walking alone a walking-alone outdoors b walking-alone indoors

3.2.2 Hypotheses Testing

Table 3 shows the Pearson’s correlation coefficients among the scores of the three scales. In the walking-alone condition, we identified a statistically significant moderate correlation between the ease-of-walking and enjoyment scores (r = .46, p = .042) and a statistically significant strong correlation between the enjoyment and intention-to-walk scores (r = .65, p = .002). In the with-robot condition, by contrast, the correlation between the ease-of-walking and enjoyment scores was not significant (r = .13, p = .601), unlike in the walking-alone condition; the strong correlation between the enjoyment and intention-to-walk scores remained significant (r = .64, p = .003).
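For reference, Pearson’s r between two score vectors is the normalized covariance; a minimal sketch (the per-participant scale sums below are invented, not the study’s data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson's correlation coefficient between two score vectors."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    dx, dy = x - x.mean(), y - y.mean()
    return (dx * dy).sum() / np.sqrt((dx ** 2).sum() * (dy ** 2).sum())

# Toy per-participant scale sums (one pair per participant)
enjoyment = [22, 24, 26, 28, 30]
intention = [12, 10, 16, 14, 18]
r = pearson_r(enjoyment, intention)
```

In practice `scipy.stats.pearsonr` also returns the associated p value.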

Table 3 Pearson’s correlation coefficients between questionnaire scores

To investigate the predictions, we statistically analyzed the difference between the conditions for each measurement. A normality test indicated that, except for ease of walking, the data were normally distributed. For the normally distributed measurements, we included the experiment’s location (indoors or outdoors) as a factor in the analysis, hereafter the location factor, and applied a mixed ANOVA with one within-participant factor, the experimental condition (with robot or walking alone), and one between-participant factor, the location factor. For ease of walking, we conducted a non-parametric test (Wilcoxon’s test). Figures 15, 16 and 17 show the means and standard deviations of the three questionnaire scores and the results of these analyses.
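The Wilcoxon signed-rank comparison of the paired condition scores can be sketched with SciPy; the paired sums below are invented for illustration (the mixed ANOVA needs a dedicated statistics package and is omitted here):

```python
from scipy.stats import wilcoxon

# Hypothetical paired ease-of-walking sums (with robot vs. alone), pooled over locations
with_robot = [26, 30, 24, 33, 28, 31, 27, 29, 25, 32]
alone = [31, 34, 29, 35, 33, 32, 30, 34, 28, 33]

# Paired, two-sided test; every participant here rated walking alone as easier
stat, p = wilcoxon(with_robot, alone)
```

When all differences share the same sign, the statistic (the smaller signed-rank sum) is zero and the test is significant, matching the direction of the result reported below.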

Fig. 15
figure 15

Means and standard deviations of questionnaire scores and results of mixed ANOVAs

Fig. 16
figure 16

Means and standard deviations of questionnaire scores

Fig. 17
figure 17

Means and standard deviations of questionnaire scores and results of mixed ANOVAs

Figure 15 shows the means and standard deviations of the Enjoyment scores: indoors with robot (M = 26.38, SD = 5.58), indoors without robot (M = 24.23, SD = 4.09), outdoors with robot (M = 29.50, SD = 4.59), and outdoors without robot (M = 29.00, SD = 3.41). The ANOVA revealed a statistically significant effect of the location factor (F = 5.49, p = .032, ηp² = .244): the outdoor participants gave higher enjoyment scores than the indoor participants. No statistically significant difference was found for the experimental condition (with/without robot) (F = .72, p = .408, ηp² = .041) or for the interaction effect (F = .28, p = .604, ηp² = .016). Because the experimental condition showed no significant effect, H2 was not supported.

Figure 16 shows the means and standard deviations of the Ease-of-walking scores: indoors with robot (M = 26.50, SD = 6.02), indoors without robot (M = 31.57, SD = 5.21), outdoors with robot (M = 31.00, SD = 5.83), and outdoors without robot (M = 34.33, SD = 1.63). Because the ease-of-walking scores were not normally distributed, we conducted a non-parametric test (Wilcoxon’s test) on the with/without-robot conditions. Note that because no non-parametric test can be applied to mixed-design two-factor data, we merged the outdoor and indoor data and analyzed only the with/without-robot condition. The result showed a statistically significant difference (p = .022), but in the opposite direction to our prediction: participants rated ease of walking lower with the robot than when walking alone. H1 was therefore not supported.

Figure 17 shows the means and standard deviations of the Intention-to-walk scores: indoors with robot (M = 13.36, SD = 5.36), indoors without robot (M = 11.86, SD = 5.35), outdoors with robot (M = 17.67, SD = 3.78), and outdoors without robot (M = 16.50, SD = 4.18). The ANOVA revealed a statistically significant effect of the experimental condition (with/without robot) (F = 4.54, p = .047, ηp² = .202): the scores for walking with the robot significantly exceeded those for walking alone. There was a marginal trend for the location factor (F = 3.59, p = .074, ηp² = .166) and no significant interaction effect (F = .07, p = .793, ηp² = .004). Because the experimental condition showed a significant effect, H3 was supported.

3.2.3 Analysis of Interview Results

Our initial hypotheses (Sect. 1) reasoned that since the participants would perceive the robot as enjoyable and easy to use, they would prefer to use it. The questionnaire results (Sect. 3.2.2) indicate that the participants preferred walking with the robot to walking alone, as we hypothesized. However, the reasons they wanted to use the robot were contrary to our hypotheses: ease of walking with the robot was rated low, and it was not correlated with the enjoyment scores. We investigated their reasoning through semi-structured interviews, asking the following five questions:

  1. 1.

    Was walking with the robot enjoyable?

  2. 2.

Was walking with the robot easy or difficult?

  3. 3.

    Do you want to use the robot again?

  4. 4.

    What entity did the robot resemble?

  5. 5.

    How can the robot be improved?

For each item, we coded and analyzed the participants’ responses. First, we established a coding scheme (definitions of categories). Two coders who did not know our experimental hypotheses independently classified the responses into the defined categories and judged whether the concept of each category was mentioned in each participant’s response. To verify the coding validity, we computed Cohen’s kappa coefficient for the agreement between the coders (denoted as κ in Table 4), which showed moderate to strong levels of agreement; thus, our coding scheme was valid. The two coders then reached a consensus about their classification results to obtain the final results.
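Cohen’s kappa corrects the coders’ raw agreement for chance agreement; a minimal sketch with hypothetical codings (the labels and counts are ours, not the study’s data):

```python
import numpy as np

def cohen_kappa(a, b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e) for two coders' category labels."""
    a, b = np.asarray(a), np.asarray(b)
    categories = np.union1d(a, b)
    p_o = np.mean(a == b)                                              # observed agreement
    p_e = sum(np.mean(a == c) * np.mean(b == c) for c in categories)   # chance agreement
    return (p_o - p_e) / (1.0 - p_e)

# Hypothetical codings of 20 responses by two independent coders
coder1 = ["novelty"] * 12 + ["companionship"] * 8
coder2 = ["companionship"] * 2 + ["novelty"] * 10 + ["companionship"] * 8
kappa = cohen_kappa(coder1, coder2)
```

Here the coders agree on 18 of 20 responses (p_o = .90, p_e = .50), giving κ = .80, a level conventionally read as strong agreement.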

Table 4 Summary of interview results

Table 4 summarizes the final results obtained from the interviews. Each row lists the ratio and number of participants who mentioned each concept during the interviews and the Cohen’s kappa coefficients. Since participants could provide multiple answers in each category, the sum of the ratios may exceed 100%. The detailed results of Table 4 are explained in the sections that follow.

(a) Reasons why participants enjoyed walking with the robot The first row of Table 4 lists the two reasons participants perceived more enjoyment with the robot than when walking alone. Novelty, mentioned by 60% of the participants (12 of 20), was a major reason: “It was novel (to walk with a robot).” Likewise, 60% attributed their enjoyment to companionship, i.e., the fact that they walked with the robot: “It was enjoyable because I got a sense that walking with the robot is like playing with a dog or a child.”

For each questionnaire scale, the score was calculated as the sum of the corresponding item scores, and the scale’s reliability was assessed with Cronbach’s α coefficients. All the coefficient values were reasonably high, showing internal consistency.
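Cronbach’s α can be computed from a participants-by-items matrix of scale scores as follows (the Likert responses below are invented for illustration):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_participants x n_items) matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items in the scale
    item_var = scores.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the scale sums
    return (k / (k - 1)) * (1.0 - item_var / total_var)

# Invented 1-7 Likert responses: 5 participants x 5 items of one scale
scores = [
    [6, 5, 6, 5, 6],
    [4, 4, 5, 4, 4],
    [7, 6, 7, 7, 6],
    [3, 3, 4, 3, 3],
    [5, 5, 5, 6, 5],
]
alpha = cronbach_alpha(scores)
```

Perfectly correlated items yield α = 1; values around .7 or higher are conventionally taken as acceptable internal consistency.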

(b) Reasons why participants felt walking with the robot was easy/difficult The second row of Table 4 denotes that 90% of the participants (18 of 20) mentioned the difficulty of walking with the robot. Their reasoning reflected different aspects of walking. For instance, one participant complained that “it was a little difficult to adapt myself to the robot’s speed.” Another participant said that “I had to adapt myself to the robot’s pace.” Some participants mentioned safety concerns about the robot’s behaviors: “When the robot approached me, I wondered, is this safe? It might bump into me.”

Even though almost all the participants commented on the difficulty, about 40% of the participants (8 of 20) also mentioned that the robot behaved well: “It moved mostly as I imagined. The experiment was in a corridor that had the same width. So, I didn’t think that the robot would go in an unexpected direction. I was able to easily walk with it.”

(c) Reasons why participants intended to use the robot again The third row of Table 4 elaborates on the reasons for wanting to use the robot again. One type of reasoning was to straightforwardly mention a positive aspect of the robot with which they had just walked: 55% of the participants (11 of 20) explained their reasons as some positive evaluation of walking with the robot, such as more enjoyment than walking alone, the novel experience of walking with it, and its cuteness.

On the other hand, when asked about their reasoning, most participants commented on their desire to experience something new. Of the participants, 80% (16 of 20) mentioned that they expected future development and improvement of the robot’s communication functions. For instance, one participant (P) had the following exchange with the interviewer (I):

  • I: Would you like to walk with the robot again?

  • P: Sure, that sounds like fun.

  • I: Why?

  • P: Because I’m with a robot and if it had facial expressions and some eye movements, it would be even more fun. Plus, I know that some robots can communicate with people.

  • I: Yes, that’s true.

  • P: If this robot had such an ability, it would be even more enjoyable.

Similarly, 35% of the participants (7 of 20) expected improvement in its walking-together functions as well. For instance, when we asked one participant whether he intended to use the robot again, he replied: “Yes, but I’d find it more fun if the robot moved a bit more smoothly.” Of the participants, 45% (9 of 20) mentioned that they expect to use the robot in different situations. For instance, a participant whose experiment was conducted in the indoor environment said: “I’d like to walk with the robot in an open space like in a park.”

(d) What entity did the robot resemble? The fourth row of Table 4 denotes the types of entities participants perceived the robot to resemble: a robot-like pet animal, 15% (3 of 20); a child/grandchild, 30% (6 of 20); a friend/partner, 35% (7 of 20); or something human-like such as a doll or a doppelgänger, 20% (4 of 20).

(e) How to improve the robot The last row of Table 4 notes the responses for improving the robot. Communication functions were noted by 60% of the participants (12 of 20). For instance, one participant said that “I wished the robot could talk and understand me to some degree.”

The navigation functions were mentioned by 60% (12 of 20) of the participants. For instance, one participant said: “In our daily life, we already have cars, bicycles, pedestrians, and signals. I’d like a robot that could avoid these kinds of obstacles.” During the interview, 40% of the participants (8 of 20) mentioned diverse aspects of daily-life functions beyond walking that would improve the robot. These include navigation support in cities, protection from traffic accidents, and healthcare coaching.

4 Discussion

4.1 Findings

We started our study by asking two research questions (Sect. 1). In the first question, which addressed whether our participants wanted to walk with the robot again, we found that they generally accepted it as a walking partner. We compared walking with a robot and walking-alone situations and found that they preferred the situation with the robot in terms of their intention to walk again. Our study suggests the future possibility of using robots to support lonely seniors without walking partners (like family members or friends) to maintain their health by walking with them.

We also noticed that the reason why seniors accepted the robot might differ from what has been discussed in the literature. In the Almere Model proposed by Heerink et al. [14], enjoyment positively affected the ease of use, and both enjoyment and ease of use positively affected intention to use. We initially speculated that our study would fit the Almere Model.

In our study with the robot, we identified no correlation between enjoyment and ease of walking with/without the robot, or between ease of walking with/without the robot and intention to walk. Moreover, the participants perceived a lower ease of walking in the with-robot condition than in the walking-alone condition. Nevertheless, they showed a higher intention to walk in the with-robot condition than in the walking-alone condition. This result suggests that the acceptance of walking-partner robots differs from the acceptance of existing robotics technology.

The analysis results of the interview data complement the above implication. The low ratings for ease of walking with the robot reflect dissatisfaction with the current state of the robot’s functions. Nevertheless, the participants showed an intention to walk based on their expectations of walking with robots in the future. Under the Almere Model, enjoyment should positively affect ease of walking, and both enjoyment and ease of walking should positively affect intention to walk; our results do not exactly fit this model, so our earlier speculation was not supported. However, such expectation was itself one big factor: the participants’ intention to walk appeared to be driven mainly by their anticipation of the robot’s future improvement.

Concerning our second research question, i.e., whether current robotics technology satisfies senior users as walking-partner applications, our study indicates an in-between result. Since seniors expressed their intention to walk with the robot again, we believe that current robotics technology offers a certain level of satisfaction; however, our interviews also revealed that they often wanted the robot to improve its navigation behavior. For instance, one person wanted it to move faster, but another wanted it to move slower. We believe that since there are diverse individual differences about their preferences for how the robot moves with them, the required technique for robot control for walking-partner applications is more complicated than we previously thought. One of our future works is to improve the quality of side-by-side navigation technology.

4.2 Implications

Although our robot did not speak, one apparent design implication from our study is that our robot requires better communication capabilities. Participants wanted it to talk, to be more emotive with its facial and bodily expressions, and to communicate its intentions more. This matches the findings in other literature [16].

Based on the interview results, we also found that participants held diverse views of what the robot resembled, including a robot-like pet animal, a child/grandchild, a friend/partner, or something human-like such as a doll or a doppelgänger. Of the participants, 85% (17 of 20) agreed that the robot resembled something human-like, including a child/grandchild or a friend/partner.

Some participants expressed an intention to walk with the robot even though they admitted that it was lacking some capabilities. They expressed an expectation-based desire to observe the robot’s improvement as they use it over time. This resembles previous findings [29], in which people at a Japanese shopping mall accepted a robot as a kind of mascot that performs no concrete functions but only has future roles. Like developing a relationship with one’s child or grandchild, senior users in Japan might accept robots in their technological infancy because they foresee and hope to enjoy the improvement of their functionality. When robot functions such as conversational capabilities [33] improve, people might accept such robots more, and applications of walking-partner robots might spread more rapidly in society.

During the study, we observed a few different styles of walking with the robot. For example, a few participants completely ignored the robot while walking and thus lost the side-by-side formation. We believe this was because these participants were fast walkers with no interest in walking together with the robot. Similarly, we noticed that during some trials the robot fell a little behind the person after starting together and did not strictly maintain the side-by-side formation. In these cases, the participants walked normally, much as people do with a small child who often walks behind them. Yet most of the participants maintained a side-by-side formation, which resembles the motion expected of two people walking together. Similar observations appeared in the interviews: some participants identified the robot with a small child or a pet, while others identified it as a friend, a partner, or something human-like.

In addition, our study provided evidence that the walking location matters. Participants enjoyed walking more outdoors and perceived the robot as easier to use outdoors, possibly because the street was wider with better landscaping. Further investigation is necessary about what environmental factors are more influential (e.g., sunshine, wind, temperature, road width, and street surface). We should place users in better walking locations to improve their satisfaction, e.g., by letting a robot suggest or encourage users to walk at such locations.

4.3 Generalizability and Limitations

We tested the robot in a specific context: a humanoid robot in a university environment, using a side-by-side walking model. The robot was used for only a short amount of time with each participant, and we did not use its communication ability. Therefore, we have not yet identified to what extent our findings generalize; the results might differ with different robots, in different contexts, or with different communication skills. Since we specifically focused on older adults, we also expect that our results would differ for users from other age groups; e.g., young people who often engage in exercise/sports activities with friends might be less interested in walking-partner robots.

Moreover, our findings might be culture-dependent. For instance, related to the intention to walk, Nomura et al. [27] found that Japanese people tended to expect a robot to serve as a communication partner more than people in other countries. Our participants gave similar responses when we asked about their reasons for their intention to walk. Thus, since ideas about what role people expect robots to fill are culture-dependent, whether people accept robots as walking partners could also be culture-dependent. In addition, our participants expressed an intention to walk with the robot again even though they found it lacking some functions; this acceptance does not fully match the Almere Model, and the mismatch might also reflect cultural reasons.

The experiment was conducted with 20 participants, only 7 of whom were placed in the outdoor condition. A larger sample size would provide less biased results across the conditions. Furthermore, the current side-by-side model does not account for individual partner preferences such as distance and speed. One future work is to specify such preferences at the start of interactions to enable more user-friendly navigation that caters to the differences among people.

5 Conclusions

This paper investigated whether seniors would accept a robot as a walking partner. We prepared a robot that can move side-by-side with seniors and measured their perception of ease of walking, enjoyment, and intention to walk. Experimental results showed that our participants rated their intention to walk significantly higher when walking with the robot than when walking alone, although no significant differences were found in ease of walking or enjoyment. Interview results showed that they wanted to use the robot again both because they felt positive about it now and because they expect it to improve in the future.