
1 Introduction

The use of technology to improve people's quality of life is becoming a common feature of modern society. Quality of Life (QoL), however, is not easy to define, since it is an elusive construct connoting a multidimensional appraisal of a variety of important aspects of life [1]. The wider this variety is, the more difficult it is to give a single definition which includes all the possible technological solutions related to it. For this reason, Quality of Life Technology (QoLT) is generally defined as any technology which impacts the quality of life of the individuals who use it. When addressing people with special needs, the definition of QoLT usually becomes more specific, referring to intelligent systems that augment body and mind functions for the self-determination of older adults and people with disabilities [2], which is essentially an alternative definition of Ambient Assisted Living (AAL) technology.

AAL is typically classified according to the targeted functional domain, and may include safety systems, medical devices, telemedicine platforms, assistive robots and many others. Among them, assistive robotics for domestic use has been a rapidly growing field of research and development in recent years [3]. In the literature, several papers address the development of mobile robots, usually called home robot companions, which can autonomously navigate, detect and interact with the user. In [4] a good overview of mobile robots with manipulation skills is presented, which includes the well-known MOVAID, Hermes, ARMAR and Care-O-bot. A recent mobile robot with assistive capabilities has been developed at Carnegie Mellon University [5–7]: it can perform localization, navigation and user detection, and it is well known for its ability to finely interact with the user through a robotic arm. These systems, however, require extremely complex hardware, are expensive, and their integration inside real homes in the near future is not straightforward. The recent European FP7 COMPANIONABLE project [8] tries to overcome these problems using low-cost robot hardware, a friendly appearance and a size and weight suitable for common houses, showing an increasing interest in simple and affordable assistive robots.

Navigation and user tracking are functions typically available in robotic companions. Few researchers, however, have focused on the assistive functionalities that such robots should possess in order to distinguish them from classical mobile robot solutions.

The present paper deals with a home robot companion currently under development at Università Politecnica delle Marche, which can perform navigation and tracking using a low-cost platform in order to implement two assistive functions: respiratory rate measurement and stance detection of a person being monitored. Breathing is an important physiological task in living organisms, and many respiratory diseases require attentive care and respiratory training, so monitoring the respiratory activity is very important. Among the possible respiratory rate measurement techniques, a non-invasive method has been adopted. In the literature several such approaches have been investigated: CCD cameras (see [9]), Structured Light Plethysmography (SLP, see [10]), Slit Light Projection Patterns (see [11]) or Ultra Wide Band (UWB) sensors (see [12]). The use of stereo cameras for the detection of breathing is a rather recent technique and is described only in a few papers in the literature (see [13–16]). The present paper presents an RGB-D (Red, Green, Blue-Depth) camera system for both respiratory rate measurement and stance detection. The information about the stance of the person can be used either for detecting possible anomalies in his/her behaviour or for deciding a proper way to interact with him/her.

The robot can move inside an indoor environment and look for the person to monitor: when a person is found, then the procedure to determine his/her stance is activated. If the person is standing, the tracking algorithm starts. If the person is sitting, the respiratory rate algorithm starts. The robot can detect if the person is lying on the ground, in which case it can ask for assistance. These procedures can improve the quality of life of people at home during their daily activities.

The paper is organized as follows. Section 2 describes the hardware setup of the autonomous mobile base and the software layers which are adopted to programme and pilot it. Section 3 gives a short overview of the navigation problem and the algorithms adopted to solve it. Section 4, which represents the core of the paper, details the problem of detecting a person, finding his/her stance, tracking him/her and detecting his/her respiratory rate through the use of the robot sensors. Section 5 finally summarizes the main points of the paper and gives an overview of the future developments of the home robot companion.

2 Robot Configuration

The robot companion under development is based on the TurtleBot kit: a commercial, low-cost mobile robot which can be equipped with sensing capabilities and programmed through open-source software, see [17]. The TurtleBot represents a solid starting point for developing a robot companion, since it can perform simple tasks with a minimal set of sensors and, at the same time, its intelligence can be enhanced by integrating additional sensors and developing custom software algorithms, while keeping development costs low.

2.1 Hardware

The hardware configuration of the robot companion includes a mobile support, a vision sensor, a gyroscope sensor, a set of mounting plates and a laptop for data processing (see Fig. 1).

Fig. 1 Robot companion: hardware configuration of the autonomous mobile base

The robot mobile base is the iRobot Create, a mobile base equipped with two wheels driven by DC motors, two support wheels and a microcontroller for differential drive control. A gyroscope measuring the robot orientation is mounted directly inside the mobile base in order to minimize the effect of vibrations. Three round plates are fixed on the mobile base at different heights, and are used to carry the laptop, the vision sensor and possible additional sensors.

An RGB-D camera has been used as the vision sensor, that is to say a camera which can also measure the distance of objects within its field of view. An RGB-D camera is effective for the identification of persons and objects, even if the background and the person or the object have the same colour. RGB-D cameras can distinguish overlapping objects by calculating the distance of each of them. Structured Light sensors have also proved to be effective for breath rate measurement (see [13–16]): their resolution is usually adequate to sense small movements like those performed by the thorax during the respiratory phase (see [18]).

More specifically, an Asus Xtion Pro Live camera has been adopted, which falls within the category of RGB-D Structured Light (SL) cameras. With a range of 3.5 m, the Asus Xtion Pro Live is suitable for navigation, user tracking and breath rate measurement in a home environment (see [19] for more information).

2.2 Software

The algorithms of the robot companion are programmed in C++ and are integrated with the Robot Operating System (ROS), an open-source meta-operating system which provides hardware abstraction, device drivers, libraries, visualizers, message-passing, package management, and more. The Open Natural Interaction (OpenNI) framework, instead, is used to implement further functionalities of the vision sensor.
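
As an illustration of how the companion's algorithms plug into ROS, the following is a minimal C++ node sketch that subscribes to the depth stream of the vision sensor; the node name, topic name and callback are illustrative assumptions rather than the actual implementation of the robot software.

```cpp
// Minimal ROS node sketch: subscribe to the RGB-D depth stream and hand
// each frame to a processing callback. Node and topic names are
// illustrative assumptions, not those of the actual robot software.
#include <ros/ros.h>
#include <sensor_msgs/Image.h>

void depthCallback(const sensor_msgs::Image::ConstPtr& msg)
{
  // Depth frames arrive here; the assistive algorithms (stance detection,
  // breath rate estimation) would consume them.
  ROS_INFO("Received depth frame %u x %u", msg->width, msg->height);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "companion_vision_node");  // hypothetical node name
  ros::NodeHandle nh;

  // The OpenNI driver typically publishes depth images on a topic like this
  // one; the exact name depends on the launch configuration.
  ros::Subscriber sub = nh.subscribe("/camera/depth/image", 1, depthCallback);

  ros::spin();
  return 0;
}
```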

3 The Navigation Problem

The robot companion must be capable of moving inside an indoor environment, whose map is assumed to be known, looking for a user to track. The knowledge of the map is not a strict requirement: the first time the robot operates in a new environment, the user guides the robot through it while a mapping algorithm is running. In this way a map of the surroundings is always available; however, for best results, maps generated through laser scanners or surveys should be preferred.

The navigation problem can thus be decomposed into:

localization, the robot must localize itself within the map;

target definition, the robot must select the next point where to navigate;

path planning, the robot must calculate the trajectory for reaching the next target.

3.1 Localization Algorithm

The robot must be able to estimate its position on the map in order to navigate autonomously: this problem is called localization. Solving the localization problem means identifying the global coordinates in the 2D plane and the orientation of the robot (i.e. the robot pose). It is a key problem in mobile robotics, especially in dynamic environments, e.g. when robots operate in the proximity of people, whose presence corrupts the robot's sensor measurements. Addressing the problem within a probabilistic framework, it is possible to use a Monte Carlo Localization (MCL) approach.

The dynamic system is represented by the mobile robot together with its surroundings. The states to estimate are the components of the robot pose, while the measurements are the distances acquired through the infrared sensor of the Asus camera and the indirect odometry readings. The localization algorithm is able to find the position of the robot within the known map: this information is then used to determine the new initial pose of the robot after each movement (for further information about the localization algorithm see [20, 21]). The algorithm is already available in ROS, in the package named AMCL.
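
For clarity, the following is a conceptual C++ sketch of a single MCL iteration: prediction with noisy odometry, measurement weighting and low-variance resampling. It illustrates the idea behind the AMCL package rather than its actual implementation; the noise parameters and the measurement model are placeholders.

```cpp
// Conceptual sketch of one Monte Carlo Localization step. Not the AMCL
// implementation: noise values and the measurement model are placeholders.
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

struct Particle { double x, y, theta, weight; };

// Placeholder measurement model: in the real system this would compare the
// ranges expected from the particle's pose (ray-cast in the known map) with
// the observed ones and return a likelihood.
double measurementLikelihood(const Particle& p, const std::vector<double>& ranges)
{
  (void)p; (void)ranges;
  return 1.0;  // uniform weight as a stand-in
}

void mclStep(std::vector<Particle>& particles,
             double dTrans, double dRot,            // odometry increment
             const std::vector<double>& ranges,     // depth/IR readings
             std::mt19937& rng)
{
  std::normal_distribution<double> transNoise(0.0, 0.02);
  std::normal_distribution<double> rotNoise(0.0, 0.01);

  // 1. Prediction: apply the odometry increment plus noise to each particle.
  double weightSum = 0.0;
  for (Particle& p : particles) {
    p.theta += dRot + rotNoise(rng);
    const double t = dTrans + transNoise(rng);
    p.x += t * std::cos(p.theta);
    p.y += t * std::sin(p.theta);

    // 2. Correction: weight the particle by the measurement likelihood.
    p.weight = measurementLikelihood(p, ranges);
    weightSum += p.weight;
  }

  // 3. Low-variance resampling: draw a new particle set proportionally to
  //    the weights so that unlikely poses are discarded.
  const double step = weightSum / particles.size();
  std::uniform_real_distribution<double> u(0.0, step);
  std::vector<Particle> resampled;
  resampled.reserve(particles.size());
  double r = u(rng), c = particles[0].weight;
  std::size_t i = 0;
  for (std::size_t m = 0; m < particles.size(); ++m) {
    const double target = r + m * step;
    while (target > c && i + 1 < particles.size()) c += particles[++i].weight;
    resampled.push_back(particles[i]);
  }
  particles.swap(resampled);
}
```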

3.2 Target Definition Algorithm

The objective of the robot is to move and search for a person to monitor. If he/she is found, then the detection procedure is activated, otherwise the robot returns to its initial position and starts again from the beginning.

We developed a target definition algorithm which selects a predefined set of points within the map. Each point is identified by three coordinates: the x-position, the y-position and the orientation that the robot must have when reaching that point. At each step the robot knows its initial pose, provided by the localization algorithm, and the next goal to be reached, provided by the target definition algorithm. This information is fed into the path planning algorithm, which calculates the path to the goal.
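
A minimal sketch of this idea is given below: a fixed list of goal poses (x, y, orientation) that the robot visits cyclically until a person is found. The class name and the coordinate values are illustrative assumptions.

```cpp
// Sketch of the target definition idea: a predefined set of poses on the
// map that the robot visits in turn. Coordinates below are illustrative.
#include <cstddef>
#include <utility>
#include <vector>

struct GoalPose { double x, y, theta; };  // metres, metres, radians

class TargetDefinition {
public:
  explicit TargetDefinition(std::vector<GoalPose> goals)
    : goals_(std::move(goals)), next_(0) {}

  // Returns the next goal; wraps around so the robot restarts the tour
  // (e.g. when no person has been found).
  GoalPose next()
  {
    GoalPose g = goals_[next_];
    next_ = (next_ + 1) % goals_.size();
    return g;
  }

private:
  std::vector<GoalPose> goals_;
  std::size_t next_;
};

// Example (illustrative values): three inspection poses in a living room.
// TargetDefinition targets({{1.0, 0.5, 0.0}, {3.2, 1.0, 1.57}, {2.0, 3.0, 3.14}});
```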

3.3 Path Planning Algorithm

Path planning is the process of choosing a course of action to reach a goal, given the current position. To perform this activity we developed a path planning algorithm based on the potential field method. The main idea of this method is to model a virtual potential field in which the robot, modelled as a particle of given radius, moves by following the gradient of the field until it reaches the objective point (see [22] for a description of the theory behind the developed algorithm).
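
The sketch below illustrates the gradient computation used in the potential field method, with a quadratic attractive potential towards the goal and the classical repulsive term near obstacles; the gains and the influence radius are illustrative, not the values used on the robot.

```cpp
// Sketch of a gradient step in the potential field method: an attractive
// quadratic potential pulls the robot towards the goal, while repulsive
// potentials push it away from nearby obstacles. Gains are placeholders.
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };

Vec2 potentialFieldStep(const Vec2& robot, const Vec2& goal,
                        const std::vector<Vec2>& obstacles,
                        double kAtt = 1.0, double kRep = 0.5,
                        double influence = 1.0)
{
  // Attractive force: negative gradient of 0.5 * kAtt * ||goal - robot||^2.
  Vec2 force{ kAtt * (goal.x - robot.x), kAtt * (goal.y - robot.y) };

  // Repulsive force from each obstacle inside the influence radius.
  for (const Vec2& obs : obstacles) {
    const double dx = robot.x - obs.x, dy = robot.y - obs.y;
    const double d = std::sqrt(dx * dx + dy * dy);
    if (d > 1e-6 && d < influence) {
      const double mag = kRep * (1.0 / d - 1.0 / influence) / (d * d);
      force.x += mag * dx / d;
      force.y += mag * dy / d;
    }
  }
  // The robot moves a small step along the resulting force direction.
  return force;
}
```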

4 User–Robot Interaction

When the robot finds an obstacle during navigation, it tries to identify it. If the identification succeeds, i.e. the obstacle is identified as a person, the robot detects his/her stance. If the person is standing, the robot starts to follow him/her. At present, the robot can distinguish between three different stances: standing, sitting and lying.

4.1 Stance Identification Algorithm

The identification procedure initially requires an operation called calibration. The calibration algorithm recognizes different parts of the person’s body, associating a point (joint) to each of them. In particular, after the calibration, the positions of 15 joints are estimated. The calibration operation is required by the vision sensor to track the person and is activated every time the camera finds a moving object within its field of view (see Fig. 2).

Fig. 2 (a) User calibration, (b) user tracking, (c) the robot remains at a fixed distance from the torso joint of the calibrated user

Once the calibration data are obtained, they are stored for future use. In this way the calibration is performed only once per person. The robot companion is supposed to be a personal assistive device, thus even if it can recognize multiple persons, calibration can be performed for only one person at a time (i.e. the user).

The stance identification algorithm determines whether the person is standing, sitting or lying on the ground. To do this, the algorithm analyses the relative positions of the joints of the torso, hips, knees and feet. The threshold values that define the stance of the user have to be found experimentally, according to the person who is going to be assisted by the personal companion. The parameters experimentally calibrated for a specific use case within the development lab are reported in Table 1 for completeness.

Table 1 Joint values (in mm) that allow the stance of the user to be determined
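
As an illustration of the classification logic, the sketch below distinguishes the three stances from the relative heights of a few skeleton joints; the thresholds are hypothetical placeholders and not the calibrated values reported in Table 1.

```cpp
// Sketch of the stance identification logic: classify the user as standing,
// sitting or lying from the relative heights of a few joints. Thresholds
// are hypothetical placeholders, not the calibrated values of Table 1.
#include <cmath>

struct Joint { double x, y, z; };  // camera-frame coordinates, in mm

enum class Stance { Standing, Sitting, Lying, Unknown };

Stance classifyStance(const Joint& torso, const Joint& hip,
                      const Joint& knee, const Joint& foot)
{
  const double torsoAboveFoot = torso.y - foot.y;  // overall vertical extent
  const double hipAboveKnee   = hip.y - knee.y;

  if (torsoAboveFoot < 300.0)                          // body near floor level
    return Stance::Lying;
  if (torsoAboveFoot > 900.0 && hipAboveKnee > 300.0)  // fully extended
    return Stance::Standing;
  if (torsoAboveFoot > 500.0 && std::fabs(hipAboveKnee) < 150.0)  // hips at knee height
    return Stance::Sitting;
  return Stance::Unknown;
}
```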

If the person is standing, the tracking algorithm starts (see Sect. 4.2). If the person is sitting, the respiratory rate algorithm starts (see Sect. 4.3). The robot can also recognize when the person is lying on the ground: this information can be used to understand whether the person is in need of assistance (e.g. after a fall), in which case the robot can send a message asking for human intervention.

4.2 Tracking Algorithm

The developed tracking algorithm exploits the potential field theory already described in Sect. 3. Once a person has been detected in the standing position, the goal is recalculated at each iteration step so that the robot keeps a fixed distance of \(2\) m from the joint associated to the torso and stays frontally aligned with respect to the same joint (see Fig. 2). If the person is lost (e.g. his/her orientation with respect to the camera does not allow the joint positions to be computed), the robot resumes navigation, looking for a person to calibrate again.
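
A minimal sketch of the goal computation is shown below: the goal is placed on the line joining the robot and the torso joint, \(2\) m away from the torso and oriented towards it. Frame conventions and names are assumptions.

```cpp
// Sketch of the tracking goal computation: place the navigation goal on the
// line from the robot to the user's torso joint, a fixed distance short of
// the torso and facing it. Coordinates are assumed to be in the map frame.
#include <cmath>

struct Pose2D { double x, y, theta; };

Pose2D trackingGoal(const Pose2D& robot, double torsoX, double torsoY,
                    double keepDistance = 2.0)
{
  const double dx = torsoX - robot.x;
  const double dy = torsoY - robot.y;
  const double d  = std::sqrt(dx * dx + dy * dy);
  if (d < 1e-6) return robot;  // degenerate case: keep the current pose

  Pose2D goal;
  // Stop "keepDistance" metres short of the torso, along the same line.
  goal.x = torsoX - keepDistance * dx / d;
  goal.y = torsoY - keepDistance * dy / d;
  // Face the user so the camera keeps the joints in view.
  goal.theta = std::atan2(dy, dx);
  return goal;
}
```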

4.3 Respiratory Rate Detection Algorithm

The respiratory rate algorithm counts the number of breaths per minute of a person. When the robot identifies a person in the sitting position, the algorithm for the detection of the respiratory rate begins. Using the depth information provided by the camera, the algorithm identifies the person's chest and calculates the mean value of the depth of the chest at each sampling instant:

$$\begin{aligned} {\bar{D}(k)}=\frac{\sum _{i=1}^{N}D_{i}(k)}{N} \end{aligned}$$
(1)

where \(D_{i}(k)\) is the depth of the \(i\)th point associated to the chest at the sampling instant \(k\) and \(N\) is the number of points of the chest. The mean value \(\bar{D}(k)\) is calculated using data sampled at the frequency \(1/T_c=7\) Hz, where \(T_c\) is the sampling time. The initial position of the chest of the monitored person is used as the reference value, and the subsequent measurements are used to identify the number of breaths. If the person moves during the measurement, the algorithm recalculates the position of the chest and uses this information as the new reference value. Once the measurement time is over, the algorithm calculates the weighted average of the mean depth values. This weighted average \(WA(k)\) is calculated over a window of 4 samples with the following formula:

$$\begin{aligned} WA(k)=\sum _{i=0}^{3}w_{(k-i)} \bar{D}(k-i) \end{aligned}$$
(2)

where \(\bar{D}\) is calculated according to Eq. (1) and \(w_{(k-i)}\) is the weight associated with the mean value at instant \(k-i\). After calculating the weighted average, the algorithm calculates its derivative:

$$\begin{aligned} dWA(k)=\frac{WA(k)-WA(k-1)}{T_c} \end{aligned}$$
(3)

The derivative was chosen because its analysis allows the maxima and the minima of the average value to be identified. The analysis of the derivative also allows irregularities in breathing to be handled: in case of irregularities, the analysis of the weighted average alone is not sufficient to calculate the number of breaths. The algorithm analyses the derivative and counts the number of times it becomes positive in order to calculate the respiratory rate. It is also possible to extract further information from the weighted average and the derivative:

time of exhalation: \(\bigtriangleup TE_{i}=TE_{i}-TI_{(i-1)}\)

time of inhalation: \(\bigtriangleup TI_{i}=TI_{i}-TE_{(i-1)}\)

depth of exhalation: \(\bigtriangleup DE_{i}=WA(TE_{i}/T_c)-WA(TI_{(i-1)}/T_c)\)

depth of inhalation: \(\bigtriangleup DI_{i}=WA(TI_{i}/T_c)-WA(TE_{(i-1)}/T_c)\)

where \(TE_{i}\) is the time instant at which the exhalation of the \(i\)th breath ends, \(TI_{i}\) is the time instant at which the inhalation of the \(i\)th breath ends, and \(WA\) is the weighted average of the mean chest depth values at the sampling instant at which the exhalation or the inhalation of the considered breath ends.

Fig. 3 The Region of Interest. The white arrows indicate the vertices of the box and their coordinates. The red arrows indicate the torso and shoulder joints found in step 1

The algorithm introduced above can then be described by the following steps:

1. Detection of torso and shoulders

2. Definition of the Region of Interest (ROI), see Fig. 3

3. Depth measurements

4. Calculation of the mean value of the depth

5. Calculation of the weighted average on a window of 4 samples

6. Calculation of the derivative of the weighted average

7. Calculation of the respiratory rate, the depth and the time of inhalation and exhalation.
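
A compact sketch of steps 4–7 is given below, assuming uniform weights in the 4-sample window (the actual weights are a design choice of the implementation) and the \(7\) Hz sampling rate mentioned above; variable names are illustrative.

```cpp
// Sketch of the breath-counting part of the algorithm (steps 4-7): average
// the chest depth over the ROI, smooth it with a 4-sample weighted average,
// differentiate, and count the positive-going sign changes of the derivative
// as breaths. Uniform weights are assumed here.
#include <cstddef>
#include <vector>

int breathsFromDepthSamples(const std::vector<std::vector<double>>& roiDepths,
                            double Tc = 1.0 / 7.0)
{
  // Step 4: mean chest depth per frame, Eq. (1).
  // (Assumes every frame contains at least one ROI point.)
  std::vector<double> meanDepth;
  meanDepth.reserve(roiDepths.size());
  for (const std::vector<double>& frame : roiDepths) {
    double sum = 0.0;
    for (double d : frame) sum += d;
    meanDepth.push_back(sum / frame.size());
  }

  // Step 5: weighted average over a 4-sample window, Eq. (2).
  const double w[4] = {0.25, 0.25, 0.25, 0.25};  // placeholder weights
  std::vector<double> wa;
  for (std::size_t k = 3; k < meanDepth.size(); ++k) {
    double v = 0.0;
    for (std::size_t i = 0; i < 4; ++i) v += w[i] * meanDepth[k - i];
    wa.push_back(v);
  }

  // Steps 6-7: derivative, Eq. (3), and count of positive-going transitions.
  int breaths = 0;
  double prevD = 0.0;
  for (std::size_t k = 1; k < wa.size(); ++k) {
    const double dWA = (wa[k] - wa[k - 1]) / Tc;
    if (prevD <= 0.0 && dWA > 0.0) ++breaths;
    prevD = dWA;
  }
  return breaths;  // breaths counted over the acquisition window
}
```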

A flowchart describing both the navigation algorithm and the user interaction algorithm is provided in Fig. 4.

Fig. 4 Flowchart of the navigation and user interaction algorithms implemented in the home robot companion

5 Conclusion

In this paper a robot companion capable of performing navigation, user identification, tracking and respiratory rate measurement is presented. The robot can navigate inside an indoor environment, look for a person, identify his/her stance, follow him/her if he/she is standing, calculate his/her respiratory rate if he/she is sitting and send a message if he/she is lying on the ground. The robot companion is thus capable of performing a set of assistive functions to improve the safety of people at home.

The following aspects are currently under investigation. The first one regards the hardware configuration: in order to improve the estimation of the robot pose (especially the orientation), the robot should be equipped with an inertial measurement unit. Several low-cost units are available on the market, thus the robot sensing can be improved without a major impact on costs. The second one is about the use of the OpenNI library for part of the calibration, stance detection, tracking and respiratory rate algorithms. The library has a limitation: the person must be in a frontal position with respect to the robot. When this is not the case, two or more body joints overlap and the system no longer has enough information to detect the target. In order to avoid this, the algorithm should be able to identify a higher number of joints: in this way the system should be able to keep track of the person even when several joints overlap. The third one regards the validation of the respiratory rate algorithm under different scenarios: the authors are currently evaluating the performance of the algorithm with respect to varying lighting conditions, different clothing of the user and different camera viewing angles.