1 Introduction

Telepresence robotics integrates solutions from Mobile Robotics (MR) and Information and Communication Technologies (ICTs) to provide fruitful interaction between two distant locations: (i) the robot environment, where a mobile robot operates, and (ii) the visitor environment, where a user teleoperates the robot and actively interacts with the remote surroundings through the robot's sensory system.

This form of communication is gaining increasing attention in Ambient Assisted Living (AAL) contexts, particularly in applications aimed at supporting services for ageing well at home. In such applications, the final goal is to support remote social interaction between caregivers and elderly people in a more natural fashion than traditional methods do (e.g., phone calls or video-conferencing). To this end, robotic telepresence exploits the robot's mobility as an added value that enables the visitor to move freely within the remote environment. In this way, the assisted person is relieved from having to be at a specific location, e.g., in front of a computer, or from holding a device while the interaction occurs.

Achieving the required levels of safety in the robot motion when guided by the user is the first, unavoidable requirement for a telepresence robot. Usability is also essential, since the robot, as a mobile multimedia platform, must allow the visitor to focus on the main purpose of the visit, the social interaction, rather than on negotiating obstacles along the commanded path. Meaningful use cases of robotic telepresence affected by this issue are presented by Tsui et al. in [7, 8]. The first study discusses the limitations of a telepresence robot operating in scenarios that involve moving while simultaneously holding a conversation, while the second one points out the difficulties visitors face when exploring an art gallery because of people near the robot, long hallways, and network latency.

Enhancing the robot mobility with obstacle avoidance and assisted driving features (i.e., collaborative control) has proven to be a suitable approach to deal with such problems [1, 5, 6].

For example, in [6] a method based on the readings of a 2D laser range finder is proposed, reporting increased levels of safety in assisted driving at the cost of longer times required to complete the same task.

Macharet and Florencio propose in [5] a system with increased perception capabilities based on the 3D range information of an RGBD camera. They report benefits of the collaborative method in terms of both safety and task completion time, but they also point out a negative effect of the assistance on the level of usability perceived by the users. In particular, users found assisted driving less intuitive than manual driving, and most participants demanded some feedback about the autonomous behavior during the task.

Other approaches, like the system presented in [1], address the problem of designing collaborative control methods using low-cost sensory systems, e.g., an array of ultrasonic sensors. However, low-cost sensory systems are, in general, insufficient to deal with typical problems of mobile robotics such as self-localization, and thus most robotic telepresence platforms targeting autonomous behaviors rely on laser scanners and RGBD cameras.

In this work, we describe our collaborative control method for telepresence robots, which integrates off-the-shelf robotic algorithms relying on the scans of a laser rangefinder and an RGBD camera to provide assisted driving. The method offers a natural and transparent way to assist the user in typical maneuvers such as door crossing, narrow passages, and cluttered spaces. The collaborative control has been integrated into a commercial telepresence robot (see Fig. 1) equipped with the required sensors and a control architecture based on the MOOS robotic framework [3]. In addition, a convenient web-based visitor interface has been implemented to solve usability and accessibility issues of other existing approaches. Finally, a user study (N = 24, with 12 visitors performing manual driving and the other 12 using the collaborative control) has been conducted to experimentally assess the suitability of the assisted guiding method.

Fig. 1.

Overall architecture of the considered robotic telepresence application. The collaborative control combines the user's intentions with sensory data to guide the robot to the destination marked by the user while automatically negotiating obstacles.

2 A Collaborative Control Method for Telepresence Robots

The main features that characterize a collaborative control method for telepresence robots are (i) collision avoidance, to ensure the safety of the robot workspace, and (ii) obstacle negotiation, to relieve visitors from complex maneuvers, helping them to concentrate on the social interaction instead of on the robot teleoperation.

By relying on the abilities of the visitor guiding the robot, the collaborative control overcomes current limitations of mobile robotics in handling the intricacies and complexities of human-like environments. Thus, the challenge is to coherently merge user intentions with autonomous behaviors in order to provide intuitive assisted driving. Figure 2 illustrates some common situations from our experiments in which navigation assistance is typically required, namely, crossing doors (Fig. 2(a)), going through narrow passages and/or hallways (Fig. 2(b)), and negotiating obstacles in cluttered/dynamic environments (Fig. 2(c)).

Fig. 2.

Situations prone to cause collisions. (a) passing through doors, (b) narrow passages, and (c) cluttered spaces.

Our approach provides a suitable solution to these cases through a collaborative control with the following features:

  1.

    The sensory system provides 3D range data of the robot's proximity, which is exploited to detect obstacles. To that aim, the sensed 3D data are projected into a 2D occupancy grid by selecting the minimum measured distance from each column of the range image, and then fused with the scan of the 2D laser rangefinder. If the closest obstacle point is below a specified threshold, the collision avoidance mechanism is activated, stopping the robot to prevent any crash (see the first sketch after this list).

  2.

    A collaborative layer provides guiding assistance by combining (i) a destination target selected by the visitor in the interface and (ii) the obstacles detected around the robot. Both are used to generate a collision-free motion based on the method presented in [4] (an illustrative blending sketch is given after this list).

  3.

    The visitor interface includes effective visualizations to complement the video received from the telepresence robot (see Fig. 3(b)). More specifically, a set of graphical user interfaces (GUIs) has been integrated in order to (i) provide the visitor with feedback on the obstacles in the robot surroundings and (ii) inform about the operations being conducted autonomously in the collaborative layer, which helps keep teleoperation intuitive. Furthermore, the interface allows the user to easily enable/disable the guiding assistance at any time, and supports multiple input devices (i.e., keyboard, mouse, and touchscreens), which is a key aspect in terms of user accessibility.
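
As a complement to item 1, the following is a minimal sketch of the obstacle-detection step, written in Python purely for illustration (the actual system is implemented within the MOOS-based architecture). It assumes the depth image has already been converted to metric ranges and that camera and laser readings share a common set of angular bins; function names and the threshold value are hypothetical, not taken from the deployed robot.

```python
import numpy as np

# Illustrative threshold; the value used on the real robot is not reported here.
STOP_DISTANCE_M = 0.45

def depth_image_to_2d_ranges(depth_m: np.ndarray) -> np.ndarray:
    """Collapse an H x W metric depth image into one range per image column
    by keeping the minimum valid distance of each column."""
    valid = np.isfinite(depth_m) & (depth_m > 0.0)
    masked = np.where(valid, depth_m, np.inf)
    return masked.min(axis=0)  # shape (W,): closest obstacle seen in each column

def fuse_with_laser(camera_ranges: np.ndarray, laser_ranges: np.ndarray) -> np.ndarray:
    """Conservative fusion: per angular bin, keep the shorter of the two ranges."""
    return np.minimum(camera_ranges, laser_ranges)

def collision_imminent(fused_ranges: np.ndarray,
                       stop_distance: float = STOP_DISTANCE_M) -> bool:
    """Activate the safety stop when any obstacle is closer than the threshold."""
    return bool(np.min(fused_ranges) < stop_distance)
```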

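The motion generation of item 2 relies on the method presented in [4], which is not reproduced here. The stand-in below only illustrates the general idea of merging the visitor's intention with the detected obstacles: among the directions with enough clearance, choose the one closest to the heading of the destination selected in the interface. All names and parameters are hypothetical.

```python
import numpy as np

def blended_heading(target_heading_rad: float,
                    bin_angles_rad: np.ndarray,
                    fused_ranges_m: np.ndarray,
                    clearance_m: float = 0.6) -> float:
    """Return the admissible heading closest to the one requested by the visitor.

    A direction is admissible if its fused range exceeds the desired clearance;
    among admissible directions, the smallest angular deviation from the target
    wins. Illustrative stand-in only: the deployed system uses the method of [4].
    """
    admissible = fused_ranges_m > clearance_m
    if not np.any(admissible):
        raise RuntimeError("No free direction: trigger the safety stop")
    # Wrap angular differences to (-pi, pi] before comparing deviations.
    deviation = np.abs(np.angle(np.exp(1j * (bin_angles_rad - target_heading_rad))))
    deviation[~admissible] = np.inf
    return float(bin_angles_rad[int(np.argmin(deviation))])
```
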
3 Experiments

In order to test the collaborative control, it has been implemented as part of the MOOS-based robotic control architecture deployed in one of our commercial Giraff robots [2]. The trials involved N = 24 participants (12 performed fully manual driving and the other 12 used the collaborative control) who were asked to steer the robot along a specific path (see Fig. 3(a)). Two metrics were considered to evaluate the execution of the task: time spent and number of collisions.

The conducted tests included a training period of \(\sim 45\) s in which the participants familiarized themselves with the controls. A task was given to the visitor at locations 1, 2, and 3 to simulate the social interaction of the visitor in a real situation. Tasks consisted of searching for and identifying a specific item printed on an A5-sized sheet of paper. Note that in the intermediate paths 0–1 and 2–3 the participant had to deal with additional obstacles included in the setup to clutter the workspace (depicted in Fig. 3(a) as starred objects).

Fig. 3.

(a) Floorplan of the robot workspace considered in the experiments. Locations identified by users as problematic for teleoperation are marked with stars. (b) User interface showing one of these locations.

Table 1 presents the test results. They indicate that the collaborative control fulfills its purpose of enabling safer and faster robot operation: it outperforms manual driving in both metrics, number of collisions and task completion time. Based on an ANOVA test of the collected data, we can report strong evidence for the collision and safety indicators (p < 0.05), while the results obtained for the remaining indicators show only weak tendencies in the difference between control modes (p \(\approx 0.1\)). The tests also revealed a weak point of the system in terms of usability, reported by users as sporadic incoherent behavior in the assistance provided. This effect is mainly due to the limited perception capabilities of the telepresence robot, which leads to a misinterpretation of the free space and, therefore, to robot motions inconsistent with the visitor's intentions. Dealing with this issue requires a more systematic study of the autonomous behavior integrated in the collaborative layer and a deeper understanding of user expectations in the problematic maneuvers; both will be addressed in future work.
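
As a reference for the statistical analysis, the snippet below sketches how such a between-groups comparison could be computed with a one-way ANOVA. The arrays are random placeholders standing in for the per-participant measurements summarized in Table 1; they are not the actual experimental data.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Placeholder samples only (12 participants per group); the real measurements
# are those summarized in Table 1.
collisions_manual = rng.poisson(lam=2.0, size=12)
collisions_collaborative = rng.poisson(lam=0.5, size=12)

f_stat, p_value = f_oneway(collisions_manual, collisions_collaborative)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant difference
```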

Table 1. Test Results

4 Conclusion

In this paper we have described a collaborative control for telepresence robots and a first working prototype, which has been tested with 24 users in a controlled environment. The study has shown that visitors performed more safely and efficiently when using the collaborative control, and the comments and suggestions of the participants have revealed specific issues that will be addressed in future extensions of this work.