
1 Introduction

In a context such as Europe, where the population is becoming increasingly older, technology plays a fundamental role. Ambient Assisted Living (AAL), a combination of technologies focused on making the environment active, cooperative and smart for an independent and self-sufficient life, and Assistive Technologies (AT), which enable people to perform tasks they were formerly unable to accomplish, are two possible approaches to the problem, generally pursued independently.

By fusing AAL and AT, new types of information can be obtained, such as a Behavioral Analysis (BA) of the user. BA aims at evaluating a person’s health status and behavioral evolution and, in case of significant variations of the monitored parameters, at providing information to the doctors and/or the caregivers. An example of the BA approach is described in [1, 2], where a PIR (Passive InfraRed) sensor is placed at a bathroom entrance in order to detect a person’s access.

Over a two-year period, it was possible to collect data from the sensor: the information highlighted the user’s habits during the day and eventually showed a decline of his/her activity over the months. Although this approach is simple and reliable, it has a few limitations: firstly, ambient sensors are fixed to a particular position in the house and, secondly, they are not able to distinguish which person has been detected. For these reasons, the BA approach could be extended to wearable sensors as well.

This article focuses on Human Activity Recognition (HAR) and Indoor Localization (IL). The first is a recognition system based on a decision tree. It classifies particular activities by exploiting inertial sensors (accelerometer, gyroscope and compass): first it distinguishes between static and dynamic activities, and then it identifies an action within each class, for example walking or running among dynamic activities and sitting or standing among static ones. An IL system has also been implemented, based on step detection, walking distance and heading estimation.

For this project, MuSA (Multi Sensor Assistant) [3], conceived for the continuous supervision of personal activity and vital parameters, has been used.

2 MuSA

MuSA is based on a CC2531 SoC [4] and is compliant with the ZigBee 2007 Pro standard protocol [5]. It is designed to be worn on a belt and features small dimensions and light weight (Fig. 1).

Fig. 1. MuSA

MuSA’s basic functionalities are fall detection and a call button that asks for immediate assistance. MuSA has been extended with further functions, hosted by the same hardware platform: a single-lead ECG system is used to evaluate heart rate and respiration rate using the EDR technique [6], and an NTC thermistor is included to evaluate body temperature. A third version of this device features an inertial measurement unit (IMU) composed of an accelerometer, a gyroscope and a compass [7]. Figure 2 shows the coordinate system of the inertial sensor unit available on MuSA. Signal acquisition and processing are carried out by MuSA’s on-board circuitry: only information about the detection of abnormal behaviors or the deviation of vital signs from their “normal” range is sent through the network. Radio communication is hence kept at a bare minimum (alarm messages and network management), saving battery energy. Two basic building blocks can be identified: an IEEE 802.15.4 radio transceiver and the CC2531 microcontroller taking care of ZigBee stack management.

Fig. 2. MuSA inertial sensor coordinates

3 Human Activity Recognition

In the BA context, this project focuses on Human Activity Recognition (HAR) [8]. The aim of this approach is to distinguish general activities such as walking, running, postures, etc. or, also via data fusion from other sensors, specific actions like sitting on an armchair, opening the fridge, cooking, going to the toilet, etc. Daily activity analysis is useful in order to evaluate physical and behavioral changes in a person, especially the elderly and people suffering from chronic diseases. Suitable parameters for this analysis can be divided into three principal groups: environmental values (temperature, humidity, occupancy, etc.), personal ones (accelerations, positions) and vital signs (heart beat, breathing rate, etc.).

As with every machine learning approach, HAR techniques rely on two phases: training and testing. During the first, acquired sensor data are processed in temporal windows in order to extract relevant features from the raw signal. Later, during the second phase, an automatic classifier based on those features has to be implemented on the device in order to work autonomously.
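As a rough illustration of the training-phase windowing, the Python sketch below computes simple statistics over sliding windows of an acceleration signal; the window length, overlap and feature set are assumptions, since the paper does not specify them:

```python
import numpy as np

def extract_features(acc, window=50, overlap=25):
    """Slide a fixed-size window over a 1-D acceleration signal and
    compute simple statistical features for each window.
    Window length, overlap and feature set are illustrative only."""
    features = []
    step = window - overlap
    for start in range(0, len(acc) - window + 1, step):
        w = acc[start:start + window]
        features.append({
            "mean": np.mean(w),
            "std": np.std(w),
            "min": np.min(w),
            "max": np.max(w),
        })
    return features
```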

Since inertial sensors are available on MuSA, here HAR focuses on user movements. The position where the inertial sensor, i.e. the accelerometer, is applied on the human body is particularly important. It can be applied on arms, wrists, ankles, thighs [9], feet [10], back [11] or waist [12]. The more sensors are used, the more activities can be detected.

Feature extraction techniques rely on supervised learning, whereby acquired data are manually labeled and then extracted features are automatically classified using a proper algorithm. Different solutions are available: decision trees are hierarchical models in which every branch corresponds to a classification rule [13]; Bayesian methods give the probability of each action from a training set [14]; Instance Based Learning (IBL) methods compute the distance between two sets in order to evaluate their similarity [13, 14]; finally, Support Vector Machines (SVM) and artificial neural networks build a more complex set of rules than the previous approaches [15].

Activity recognition can be performed offline, in which case processing is run later on a computer, or online, on the device, where actions are detected in real time.

Since MuSA features a low-power SoC, the HAR algorithm is based on a decision tree [16]. As depicted in Fig. 3, at the top level the algorithm classifies broad activities, while at the lower levels it recognizes more detailed ones. In the first stage the system processes the acceleration data in order to distinguish between static and dynamic cases. After that, it needs gyroscope and compass information to evaluate, for example, the direction of walking or the orientation of a posture. During a static phase the framework classifies upright or lying positions: the former occur when the user is sitting or standing, while the latter differ by body orientation. Dynamic movements, on the contrary, are classified as walking, transitions, falls and other movements.

Fig. 3. HAR: binary decision tree

In order to classify resting and activity states, the algorithm processes the acceleration components through a high-pass FIR filter and uses them to compute the acceleration modulus. By integrating the modulus over a 1 s window it is possible to obtain a measure of the metabolic energy expenditure (EE) [16]. It is thus possible to distinguish static from dynamic activities by comparing the EE to a proper threshold: if below, a rest situation is detected, otherwise a dynamic activity is inferred (Fig. 4).
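A minimal Python sketch of this first-level classification is given below; the sampling rate, filter order, cut-off frequency and EE threshold are placeholder assumptions to be calibrated, not values from the paper:

```python
import numpy as np
from scipy.signal import firwin, lfilter

def energy_expenditure(ax, ay, az, fs=50, cutoff=0.5):
    """High-pass filter each acceleration axis, compute the modulus,
    then integrate it over 1-s windows to estimate energy expenditure.
    fs, filter order and cutoff are illustrative assumptions."""
    taps = firwin(numtaps=31, cutoff=cutoff, fs=fs, pass_zero=False)  # high-pass FIR
    hp = [lfilter(taps, 1.0, a) for a in (ax, ay, az)]
    modulus = np.sqrt(hp[0]**2 + hp[1]**2 + hp[2]**2)
    # Integrate over non-overlapping 1-s windows (fs samples each)
    n = len(modulus) // fs
    return modulus[:n * fs].reshape(n, fs).sum(axis=1) / fs

def is_dynamic(ee_window, threshold=0.1):
    """Static if EE is below the threshold, dynamic otherwise.
    The threshold value is a placeholder to be tuned on training data."""
    return ee_window > threshold
```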

Fig. 4. Body orientation angle evaluation

After that, the algorithm proceeds to the proper branch of the decision tree. During a static phase it is possible to observe the posture of the person wearing MuSA. For example, by simply computing the arctangent of the ratio between the two acceleration components orthogonal to the gravity component (y and z in this case), the tilt angle (body orientation) can be extracted.

An example is depicted in Fig. 5, where the user is in the upright position (sitting and standing). The same result can be inferred from the gyroscope: by integrating the angular rate Ωy (pitch rate), the orientation can be obtained.
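A minimal sketch of both posture estimates, assuming the axis naming of Fig. 2 and an illustrative 50 Hz sampling rate:

```python
import numpy as np

def tilt_angle(ay, az):
    """Body orientation (tilt) in degrees from the two acceleration
    components orthogonal to gravity; atan2 gives a full-quadrant angle."""
    return np.degrees(np.arctan2(ay, az))

def pitch_from_gyro(omega_y, fs=50):
    """Alternative estimate: integrate the pitch angular rate Ωy.
    Simple rectangular integration; drift accumulates over time."""
    return np.cumsum(omega_y) / fs
```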

Fig. 5. HAR 1st level: static/dynamic activity recognition

It is thus possible to recognize whether a person is sitting/standing or lying. In order to properly discern whether a person is sitting or standing, the system needs to look back at the activity history. For example, if the system knows that a sitting transition occurred right before a static phase, the latter will have a high probability of being classified as a sitting phase.

If, on the other hand, a dynamic activity is detected, the algorithm operates to distinguish a few actions. So far, movements can be classified as walking, transitions (sit/stand, stand/sit) and other activities. Here the algorithm first looks for maxima in the modulus of the low-pass filtered accelerations (3rd-order FIR, 3 Hz). Maxima closer than 3 s to each other are labelled as “steps” belonging to a walking series. If isolated maxima are detected, the algorithm looks for a possible transition (stand to sit or sit to stand). Here an assumption has been made: from the training set acquired during tests, it is clear that actions like sitting down or rising take more time than a single step. For this reason, the system observes the dynamic interval in which the maximum has been found: if this time is long enough the action is classified as a transition, otherwise as other movement. The classification just described is depicted in Fig. 6. Independently of this process, MuSA runs the fall detection algorithm shown in [3]: since a fall is a major event, its management requires the highest priority.
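A possible sketch of this second-level logic, again in Python, with assumed values for the sampling rate and the minimum transition duration (the paper only states that transitions last longer than a single step):

```python
import numpy as np
from scipy.signal import firwin, lfilter, find_peaks

def classify_dynamic(modulus, fs=50, min_transition_s=1.5):
    """Low-pass the acceleration modulus (3 Hz FIR), find maxima,
    label peaks closer than 3 s as steps, and use the peak width to
    separate transitions from other movements for isolated peaks.
    min_transition_s is a placeholder to be tuned on training data."""
    taps = firwin(numtaps=31, cutoff=3.0, fs=fs)   # 3 Hz low-pass FIR
    lp = lfilter(taps, 1.0, modulus)
    peaks, props = find_peaks(lp, width=1)         # widths returned in samples
    labels = []
    for i, p in enumerate(peaks):
        prev_gap = (p - peaks[i - 1]) / fs if i > 0 else np.inf
        next_gap = (peaks[i + 1] - p) / fs if i + 1 < len(peaks) else np.inf
        if min(prev_gap, next_gap) < 3.0:
            labels.append("step")                  # part of a walking series
        elif props["widths"][i] / fs > min_transition_s:
            labels.append("transition")            # sit/stand or stand/sit
        else:
            labels.append("other")
    return peaks, labels
```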

Fig. 6. HAR 2nd level: walking and transition detection

4 Indoor Localization

Indoor localization is useful to know the exact position of a person, both to prevent people suffering from chronic diseases such as Alzheimer’s from getting lost and to quickly assist the user if an alarm has been raised.

In the previous paragraph a step detection method exploiting tri-axial acceleration has been introduced. In order to implement a localization system, a gyroscope and a magnetic sensor are also necessary, to evaluate the user’s direction while walking.

Human gait can be described by an inverted pendulum model [17]. Each step is characterized by different phases. When a heel touches the floor, the waist is at its lowest position; as a consequence the vertical velocity is equal to zero and the vertical acceleration has reached its maximum. When one foot is completely on the floor and the other is moving forward, the waist is at the shortest distance from the equilibrium position. When the foot on the ground rises onto its tiptoe, the body is pushed up and the vertical acceleration changes to the opposite direction.

Instead of computing a double integral of the vertical acceleration (aX) in order to measure the walking distance, an empirical relation between aX and the stride length is used here [18]:

$$Step\,Length = K \cdot \sqrt[4]{a_{MAX} - a_{MIN}}$$

where \(a_{MAX}\) is the maximum and \(a_{MIN}\) the minimum of the first harmonic of aX, and K is a calibration constant that has to be extracted from experimental data (Fig. 7). K is different for every user.
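A direct transcription of this relation, together with a possible calibration routine for K (the calibration procedure below is an assumption, not taken from the paper):

```python
def step_length(a_max, a_min, k):
    """Empirical stride length from the extrema of the first harmonic
    of the vertical acceleration; k is a per-user calibration constant."""
    return k * (a_max - a_min) ** 0.25

def calibrate_k(true_distance, a_extrema):
    """Estimate k from a walk of known length: a_extrema is a list of
    (a_max, a_min) pairs, one per detected step."""
    raw = sum((amax - amin) ** 0.25 for amax, amin in a_extrema)
    return true_distance / raw
```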

Fig. 7. Maximum and minimum detection

The last phase of the localization system processes gyroscope and magnetic sensor data to evaluate the orientation. The vertical component of the gyroscope angular rate, ΩX, is integrated to obtain the orientation (yaw) angle. For the same purpose, the arctangent of the ratio of the two magnetic sensor components orthogonal to gravity (y and z) is computed. Figure 8 shows the heading direction obtained from both sensors.

Fig. 8. Yaw angle: gyroscope (blue) and compass (green)

Heading values are obtained by measuring the yaw at the exact time the step peak is detected (Fig. 9). Due to the drift caused by the integration of the gyroscope signal, the heading is corrected with the compass values.
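The paper does not specify how the correction is performed; a common choice is a complementary filter, sketched below with an assumed blending factor:

```python
import numpy as np

def fused_heading(gyro_yaw_rate, mag_yaw, fs=50, alpha=0.98):
    """Complementary-filter sketch: integrate the gyroscope yaw rate
    for short-term accuracy and pull the estimate toward the compass
    heading to cancel integration drift. alpha is an assumed factor."""
    yaw = mag_yaw[0]
    fused = np.empty(len(mag_yaw))
    for i in range(len(mag_yaw)):
        yaw = alpha * (yaw + gyro_yaw_rate[i] / fs) + (1 - alpha) * mag_yaw[i]
        fused[i] = yaw
    return fused
```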

Fig. 9. Step detections (above) and average yaw (bottom)

Through step detection, walking distance estimation and heading estimation, the framework is able to provide the result of the localization system (Fig. 10). In the example, the user walked in a room along a rectangular path (6 m × 4 m).
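Combining the three estimates, a minimal dead-reckoning update could look as follows (a hypothetical helper, not the paper’s implementation):

```python
import numpy as np

def dead_reckoning(step_lengths, headings_rad, start=(0.0, 0.0)):
    """Accumulate a 2-D position from per-step length and heading;
    the heading is the one sampled at each detected step peak."""
    x, y = start
    path = [start]
    for length, heading in zip(step_lengths, headings_rad):
        x += length * np.cos(heading)
        y += length * np.sin(heading)
        path.append((x, y))
    return path
```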

Fig. 10. IL experiment on a rectangular path

5 Results and Conclusions

In this paper a wearable multi-sensor device oriented to behavioral analysis has been described. The approach is based on Human Activity Recognition (HAR), which detects particular activities, and on Indoor Localization (IL), which is useful to know where the user is if quick assistance is needed.

The HAR framework is a solid algorithm based on a binary tree and is able to recognize main activities such as walking, sitting/standing and falls, as well as rest situations such as upright and lying positions. Preliminary tests show that the algorithm easily distinguishes static and dynamic situations and detects walking phases with very high accuracy (99.6 %). More problems arise when the framework has to distinguish between a step and a sitting/standing transition: here the accuracy of the walking phase drops to 95.2 % and the sitting/standing transition accuracy is 96.3 %. So far, the algorithm is not able to distinguish whether a person is sitting or standing, so it needs a history of activities to understand what happened before. A useful solution is to exploit information coming from ambient sensors, e.g. a pressure sensor placed on a sofa capable of detecting whether a person is seated.

The indoor localization system has also been tested. Users were asked first to walk back and forth along a straight line and later to complete a rectangular open path and a closed one. The last two experiments are shown in Figs. 11 and 12; for simplicity, only three users are shown.

Fig. 11. Tests on open rectangular path

Fig. 12. Tests on closed rectangular path

The major source of IL error lies in the integral operations on the inertial sensor components. It is therefore necessary to support MuSA with another system capable of resetting the error and providing a precise position from time to time. A solution could be to use localization techniques based on wireless technologies. For this purpose ZigBee communication is suitable, since this protocol is already in use on MuSA. One solution relies on the evaluation of the Received Signal Strength Indicator (RSSI) [19]. In principle it is possible to realize a gate capable of detecting whether a person has passed by placing two ZigBee nodes close to each other on a door. These two nodes send messages to each other and evaluate the RSSI value for every message. Whenever a person passes through the gate, the measured RSSI values drop for physical reasons. By the same principle, exploiting the RSSI of MuSA, the gate can recognize which person has been detected and thus update his or her location coordinates.
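A minimal sketch of such a gate detector, with an assumed RSSI drop threshold and window size (both would need on-site calibration):

```python
def gate_crossing_detected(rssi_window, baseline_dbm, drop_db=5.0):
    """RSSI gate sketch: two ZigBee nodes on a door frame exchange
    messages; a sustained drop of the link RSSI below the open-door
    baseline suggests that someone passed through. The 5 dB drop
    threshold is an assumed value, not taken from the paper."""
    return all(r < baseline_dbm - drop_db for r in rssi_window)

# Usage example with hypothetical readings (dBm): the baseline link
# RSSI is -60 dBm; the last few messages arrived much weaker.
recent = [-68.0, -67.5, -69.2]
if gate_crossing_detected(recent, baseline_dbm=-60.0):
    print("crossing detected: update user location")
```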