
1 Introduction

Human movement analysis can be defined as the interdisciplinary field that describes, analyzes, and assesses human movement. Movement analysis has become a valuable tool in clinical practice, e.g. instrumented gait analysis for children with cerebral palsy (see also Chap. 2). The data obtained from measuring and analyzing limb movement enable clinicians to assess impaired function and prescribe surgical or rehabilitation interventions. Despite the recognized potential of human movement analysis for the diagnosis of neurological movement disorders, rehabilitation from trauma, and performance enhancement, its use remains restricted to a limited number of specialized medical or rehabilitation centers. This lack of wider application is mainly due to limitations of current motion capture equipment.

Currently available motion capture systems can be divided into vision based and sensor based systems (Zhou and Hu 2007). Vision based systems can be further divided into marker based and markerless systems. The former are considered the gold standard, providing the most accurate measurements. However, these systems present additional limitations on top of the high financial investment required: present-day marker based systems are not portable, have a limited capture volume, and require a trained technician to analyze the data. The latter requirement stems from the need for a calibration procedure to convert the marker data to a model representing the subject.

Markerless vision based systems, such as the Kinect, are less costly but still suffer from occlusion and illumination problems and a limited capture volume. Furthermore, the repeatability of their measurements is often limited. Traditional sensor based systems (e.g. acoustic or magnetic motion capture systems) also have a restricted capture volume and are sensitive to environmental conditions such as illumination and air flow (depending on the type of sensor used).

To extend the use and benefits of human motion analysis to non-laboratory settings, the acquisition should be robust, reliable, and easy to perform. Cluttered scenes, changing environmental conditions, and unbounded capture volumes are common outside of the laboratory. Ideally, data collection and analysis would be automated to the point where no trained technicians are required. Over the last decade several inertial sensor approaches have been put forward that address most of the aforementioned limitations. Advancements in micro-electro-mechanical sensors (MEMS) and orientation estimation algorithms are boosting the use of inertial sensors in motion capture applications.

The use of magnetic and inertial measurement units (MIMU) is growing in ambulatory human movement analysis. A MIMU consists of a variety of sensors; generally these are three accelerometers, three gyroscopes, and three magnetometers. Fusing the data of these sensors provides the orientation of the MIMU, and therefore the orientation of the segment to which it is attached. The popularity of MIMU stems from their low cost, light weight, and sourceless orientation estimation. MIMU obtain a reference orientation from the earth's gravitational field and geomagnetic north; they therefore do not need a fixed spatial reference in the lab (usually defined at a forceplate center or corner). MIMU are starting to demonstrate their potential in motion analysis applications in robotics, rehabilitation, and clinical settings. In addition to being less obtrusive and relatively inexpensive, their main advantages are that they are not restricted to a defined capture volume and are relatively easy to use.

The objective of this chapter is to outline the potential of MIMU in human movement analysis (see Fig. 16.1). To accomplish this objective, the chapter is organized in four main sections. Section 16.1 provides a short overview of inertial sensing approaches used to obtain orientation. This overview is by no means a complete review of the literature on orientation estimation using inertial sensors, but rather an overview of the sensor alternatives that preceded the currently popular approach. Section 16.2 allows you to refresh your knowledge of 3D kinematics and the associated mathematics; the reader is assumed to have prior knowledge in this area, and references are provided for those in search of a more complete introduction to 3D kinematics. Section 16.3 introduces orientation estimation algorithms and includes a first case study, which addresses extracting orientation from inertial and magnetic sensor data. Section 16.4 is dedicated to a case study on human movement analysis with MIMU. A theoretical basis for human movement analysis with MIMU is provided, followed by a practical example: estimation of 3D knee joint angles during overground walking. This last case study draws on all the information provided earlier in the chapter and can be used as a guide to perform human movement analysis outside of a specialized laboratory.

Fig. 16.1

Flow diagram of human motion analysis with inertial sensors. When the patient enters the lab, he/she is equipped with inertial sensors (one on each body segment adjacent to a joint under investigation). A two-phased calibration procedure (static and dynamic) precedes the actual data collection and analysis. The last step is to interpret the data

A summary in layman's terms is provided in a text box before each technical section in an attempt to improve readability for those lacking a strong mathematical background.

2 Inertial Sensing Approaches

Currently, three different sensor types are often combined to obtain more accurate and robust orientation estimates. The strengths of accelerometers, gyroscopes, and magnetometers are combined in an attempt to compensate for their individual weaknesses. In this section the sensor types are described, followed by an overview of their use in the biomechanics community. A case study estimating orientation from accelerometer and magnetometer data concludes this section.

2.1 Type of Sensors

Accelerometers measure linear accelerations, originating either from the earth's gravitational field or from inertial movement. Mathematical integration of the acceleration signal over time yields the momentary velocity of the point to which the device is attached, and a second integration yields the spatial displacement of that point, potentially providing an alternative to the measurements generated by a more expensive position measurement system. Under static and quasi-static conditions the accelerometer can be used as an inclinometer. However, under more dynamic conditions it becomes very hard, if not impossible, to accurately decompose the signal into its inertial and gravitational components.
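
As an illustration of the inclinometer idea, the following minimal Python sketch computes the tilt of one sensor axis from a single static accelerometer sample; the sample values are borrowed from the numerical example later in this section, and everything else is an illustrative assumption rather than a prescribed procedure.

```python
import numpy as np

# Static accelerometer sample in m/s^2 (values from the numerical example below).
# Standing still, the only measured acceleration is gravity.
acc = np.array([9.66, 1.67, 0.348])

# Normalize to a unit vector, then take the angle between the sensor x-axis
# and the gravity direction: the inclination of that axis.
acc_unit = acc / np.linalg.norm(acc)
tilt_x = np.degrees(np.arccos(acc_unit[0]))
print(f"Inclination of the sensor x-axis w.r.t. vertical: {tilt_x:.1f} deg")
```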

Gyroscopes measure angular velocity. Integrating the angular velocity provides the angular change over time. A tri-axial gyroscope setup can thus, given initial conditions, track changes in orientation. However, gyroscope-based estimates are prone to drift after integration, limiting their use over time. This error occurs upon integration of the gyroscope signal together with its inherent small, temperature-related spikes. Over time, the integration of these spikes causes the estimated angle to drift further and further away from the actual tilt angle. This drift error is strongly affected by temperature, and much less by velocity or acceleration; gyroscopes can thus be applied in highly dynamic conditions, but only for short periods of time.
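
A short simulation makes this drift tangible. The sketch below integrates the output of a gyroscope that is in reality standing still; all noise figures are illustrative assumptions, not specifications of any particular sensor.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01                          # assumed 100 Hz sampling rate
n = 6000                           # one minute of data

# Simulated gyroscope output for a motionless sensor: zero true rate plus a
# small constant bias and white noise (illustrative values).
bias = 0.01                        # rad/s
gyro = bias + 0.005 * rng.standard_normal(n)

# Integration turns the small constant bias into a steadily growing angle error.
angle = np.cumsum(gyro) * dt
print(f"Apparent rotation after 60 s: {np.degrees(angle[-1]):.1f} deg (true rotation: 0 deg)")
```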

Magnetometers measure the geomagnetic field and, as such, indicate the direction of the earth's magnetic north in the absence of other ferromagnetic sources. Magnetometers are often combined with accelerometers, with the magnetometers providing the heading of the coordinate system.

2.2 Historical Overview

Inertial technology was introduced in biomechanics for impact analysis in the 1960s and 1970s. The first studies were done with uni-axial accelerometers, but configurations with three, six, and nine accelerometers were quickly considered (Morris 1973). The initial results indicated severe restrictions on usable recording duration. More recently, Giansanti et al. investigated the feasibility of reconstructing position and orientation (pose) data based on configurations containing six and nine accelerometers, respectively. They concluded that neither configuration was suitable for body segment pose estimation (Giansanti et al. 2003).

In the following decade inertial sensors made their way into motion analysis, in particular the clinical assessment of gait. Accelerometers were still the preferred sensor type. Willemsen et al. (1991) performed an error and sensitivity analysis to examine the applicability of accelerometers to gait analysis. They concluded that the model assumptions and the limitations of the sensor-to-body attachment were the main sources of error. The model used was a planar (sagittal plane) lower extremity model consisting of rigid links coupled by perfect mechanical joints (i.e. a hinge joint representing the knee). Willemsen et al. (1990) used this two-dimensional model to avoid integration and thus the troublesome integration drift. They placed four uni-axial accelerometers, organized in two pairs, on each segment. This method was deemed acceptable for slow movements, but considerable errors were reported at higher frequencies (faster movements). Still without additional sensors, Luinge and Veltink applied a Kalman filter (more information on Kalman filters is provided later) to the accelerometer data to improve the orientation estimate (Luinge and Veltink 2004). They estimated the contributions to the measured acceleration due to gravity and due to inertial acceleration, and used these estimates in their subsequent calculations to derive orientation. Previously, low-pass filters (which let through only the low-frequency part of the signal, in this case gravity) were used to eliminate as much of the unwanted inertial acceleration as possible from the accelerometer data. The filter designed by Luinge and Veltink outperformed these low-pass filters, especially under more dynamic conditions, and may be one of the bases of the popularity of Kalman filters in current orientation estimation algorithms.
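
To see what such a low-pass filter does, the following sketch separates a slowly varying gravity component from a faster inertial component on a synthetic signal; the signal, sampling rate, cutoff frequency, and availability of scipy are all assumptions made for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0                                   # assumed sampling rate (Hz)
t = np.arange(0.0, 10.0, 1.0 / fs)

# Synthetic accelerometer channel: slowly varying gravity component plus
# a 3 Hz inertial component caused by movement (illustrative amplitudes).
gravity = 9.81 * np.cos(0.2 * t)
inertial = 2.0 * np.sin(2.0 * np.pi * 3.0 * t)
acc = gravity + inertial

# Second-order Butterworth low-pass at 0.5 Hz: keeps (mostly) the gravity part.
b, a = butter(2, 0.5 / (fs / 2.0), btype="low")
gravity_est = filtfilt(b, a, acc)

rmse = np.sqrt(np.mean((gravity_est - gravity) ** 2))
print(f"RMS error of the gravity estimate: {rmse:.3f} m/s^2")
```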

By the start of this century, both the cost and the size of micro-electro-mechanical sensors (MEMS) had dropped considerably. This led to an influx of applications in biomechanics and research in general, and allowed for novel methods of orientation estimation. In particular, it allowed researchers to combine various sensor types and thus exploit their individual strengths. Initially accelerometers and gyroscopes were combined (Williamson and Andrews 2001), and later magnetometers were added (Bachmann 2004). Currently the most popular fusion method is based on a Kalman filter in which the information from all three sensor types is taken into account. Accelerometers and magnetometers combined act as an electronic 3D compass. This information can be used to provide the initial conditions and to correct the drift error present in the gyroscope estimate. The gyroscopes in turn are used to smooth the compass estimate, which is especially valuable under dynamic conditions. More information on fusion algorithms is provided in Sect. 16.3. Despite the improvements realized by sensor fusion, there is still room and need for improvement. Additional sensors (GPS, Kinect, magnetic sources and sensors) and anatomical constraints (Luinge et al. 2007) are some of the approaches that have been put forward as potential solutions. Most efforts, however, are directed at improving the fusion and filtering algorithms.

Before starting to work with MIMU, an introduction to the algebra related to human movement analysis is given in the appendix. Readers already familiar with this material can go directly to the first practical example at the end of this section. For those in need of more basic or more in-depth information, we refer to the following publications (Winter 2004; Vaughan et al. 1999).

2.3 Case Study: Electronic Compass by Fusion of Accelerometer and Magnetometer Data

Kinematic technology allows measuring the spatial movement of body segments. The type and format of the data obtained depend both on the movement under investigation and on the technology used to record this movement. The type of sensors used, and the way in which the information from these sensors is exploited, determine the accuracy, reliability, and potential field of application.

This case study consists in determining three mutually perpendicular unit vectors (for three vectors A, B, C: A perpendicular to B, A to C, and B to C). A vector represents the direction and magnitude of the quantity described by its data (e.g. gravity, voltage); a unit vector is a vector with a magnitude of 1, obtained by dividing a vector by its norm, thus removing the effect of magnitude. The three unit vectors together form a coordinate system from which we can extract orientation. We will use sensor data to provide two of the three desired vectors, and a mathematical trick (the cross product) to obtain the third.

In the absence of motion it is assumed that the only acceleration measured by the accelerometers is gravity. Accelerometer data can thus be used to obtain a reference (\( \vec{Y} \)) for the global vertical axis, the gravity vector (Kemp et al. 1998). In the absence of ferromagnetic perturbations we can use a similar construct to obtain a horizontal vector from the magnetometer data (\( \vec{H} \)). Since both gravity and the geomagnetic field are earth bound, it should be clear that we are obtaining the sensor orientation with respect to the global or earth reference frame. The data from the accelerometers and magnetometers have to be normalized in order to obtain unit vectors. Taking the cross product of the unit vectors \( \vec{Y} \) and \( \vec{H} \) gives us a third vector which, after normalization, yields the unit vector \( \vec{X} \), normal to both \( \vec{Y} \) and \( \vec{H} \). Consecutive cross products ensure that the obtained system is orthogonal: we obtain \( \vec{Z} \) by taking the cross product of \( \vec{X} \) and \( \vec{Y} \). The obtained vectors can be organized in matrix format to obtain the rotation matrix (see the appendix for more information on matrices and vectors). From this matrix we can then extract the Euler angles using the X–Y′–Z″ rotation sequence (see appendix: How to extract rotation angles using the Euler convention). A pseudo-code version and a numerical example are provided to further clarify this process. Pseudo-code is an easy way to describe a sequence of steps to achieve a given goal; the name comes from computer programming, where "pseudo" indicates that the code is not written in any particular computer language.

Solving this for a numerical example gives us:

  • Get raw data

  • The TechMCS (Technaid, S.L.) provides the following data for its individual sensors: accelerometer data in m/s², gyroscope data in rad/s, magnetometer data in µT, and temperature in degrees Celsius.

AcceX        AcceY        AcceZ        Temp
9.66E+00     1.67E+00     3.48E−01     3.40E+01

GyroX        GyroY        GyroZ
−1.10E−02    6.73E−03     1.55E−03

MagnX        MagnY        MagnZ
−3.18E+01    −7.91E+00    −1.83E+01
$$
\begin{aligned}
&\textbf{Get unit vectors}\\
&\vec{Y} = [\;9.85\mathrm{E}{-}01 \quad 1.70\mathrm{E}{-}01 \quad 3.55\mathrm{E}{-}02\;]\\
&\vec{H} = [\,-8.47\mathrm{E}{-}01 \quad -2.10\mathrm{E}{-}01 \quad -4.88\mathrm{E}{-}01\,]
\end{aligned}
$$
$$
\begin{aligned}
&\textbf{Get sensor orientation}\\
&\overrightarrow{X'} = \mathrm{cross}(\vec{Y},\vec{H}) = [\,-7.57\mathrm{E}{-}02 \quad 4.50\mathrm{E}{-}01 \quad -6.28\mathrm{E}{-}02\,]\\
&\vec{X} = \mathrm{norm}(\overrightarrow{X'}) = [\,-1.64\mathrm{E}{-}01 \quad 9.77\mathrm{E}{-}01 \quad -1.36\mathrm{E}{-}01\,]\\
&\vec{Z} = \mathrm{cross}(\vec{X},\vec{Y}) = [\;5.79\mathrm{E}{-}02 \quad -1.28\mathrm{E}{-}01 \quad -9.90\mathrm{E}{-}01\,]
\end{aligned}
$$

The obtained vectors can be organized in matrix format; from this rotation matrix we can then extract the Euler angles (see Sect. 16.2).

$$
\begin{aligned}
&\textbf{Get rotation matrix}\\
&{}^{\mathrm{G}}\mathrm{R}_{\mathrm{s}} = \left[\,{}^{\mathrm{G}}\mathrm{X}_{\mathrm{s}} \;\; {}^{\mathrm{G}}\mathrm{Y}_{\mathrm{s}} \;\; {}^{\mathrm{G}}\mathrm{Z}_{\mathrm{s}}\,\right]_{3\times3} =
\begin{bmatrix}
X \cdot x & Y \cdot x & Z \cdot x\\
X \cdot y & Y \cdot y & Z \cdot y\\
X \cdot z & Y \cdot z & Z \cdot z
\end{bmatrix} =
\begin{bmatrix}
-1.64\mathrm{E}{-}01 & 9.85\mathrm{E}{-}01 & 5.79\mathrm{E}{-}02\\
9.77\mathrm{E}{-}01 & 1.70\mathrm{E}{-}01 & -1.28\mathrm{E}{-}01\\
-1.36\mathrm{E}{-}01 & 3.55\mathrm{E}{-}02 & -9.90\mathrm{E}{-}01
\end{bmatrix}
\end{aligned}
$$
$$
\begin{aligned}
&\textbf{Get Euler angles}\\
&\theta_{1} = 177.947^{\circ}\\
&\theta_{2} = -7.830^{\circ}\\
&\theta_{3} = 99.538^{\circ}
\end{aligned}
$$
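
For readers who prefer running code over pseudo-code, the following Python sketch reproduces the numerical example above; numpy is assumed, and the function name electronic_compass is ours, not part of any sensor API.

```python
import numpy as np

def electronic_compass(acc, mag):
    """Rotation matrix from one static accelerometer/magnetometer sample."""
    y = acc / np.linalg.norm(acc)        # unit gravity vector (Y)
    h = mag / np.linalg.norm(mag)        # unit magnetic vector (H)
    x = np.cross(y, h)
    x /= np.linalg.norm(x)               # first cross product, normalized (X)
    z = np.cross(x, y)                   # second cross product ensures orthogonality (Z)
    return np.column_stack((x, y, z))    # rotation matrix GRs = [X Y Z]

# Raw data from the table above
acc = np.array([9.66, 1.67, 0.348])      # m/s^2
mag = np.array([-31.8, -7.91, -18.3])    # uT
R = electronic_compass(acc, mag)

# Euler angles, X-Y'-Z'' sequence (extraction formulas given in Sect. 16.4)
theta2 = np.arcsin(R[2, 0])
theta1 = np.arccos(R[2, 2] / np.cos(theta2))
theta3 = np.arccos(R[0, 0] / np.cos(theta2))
print(np.degrees([theta1, theta2, theta3]))  # approx. [177.9, -7.8, 99.5]
```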

As mentioned earlier, the method explained above is only valid in static or quasi-static situations. In motion trials, such as gait analysis, we can no longer assume that the acceleration due to movement is insignificantly small compared to gravity. The accelerometer can therefore no longer be used as a standalone inclinometer (providing an attitude reference), and a more elaborate method is needed to obtain orientation with respect to the global reference system.

3 Orientation Estimation Algorithms

If, as is the case with the MIMU used in our case studies (Technaid 2013), sensor production is not fully automated, then axis misalignment and cross-axis sensitivity have to be accounted for, on top of the sensor noise. One of the error types to be expected is drift in the gyroscope signal: as described earlier, integrating the gyroscope signal together with its inherent small temperature-related spikes causes the estimate to drift further and further away from the actual tilt angle.

Sensor fusion can be defined as “the conjoint use of various sensors to improve the accuracy of the measurements under situations where one or more sensors of the network are not behaving properly” (Olivares et al. 2011).

These difficulties can be dealt with thanks to the redundant information available for obtaining orientation estimates: orientation can be obtained either by integrating the gyroscope data or by combining the accelerometer and magnetometer data into an electronic compass.

There are several different methods to derive orientation from sensor information; in the following we briefly highlight the main groups of algorithms and the various ways in which they use the available data. A survey of all published methods would be too technical and lengthy for this section. We will therefore highlight the two main approaches and briefly explain (one of) the most popular solutions within each.

The deterministic approach is based on vector matching. To derive orientation, three independent parameters are needed; two non-parallel vector measurements are sufficient to generate these three parameters. This approach was demonstrated earlier when we derived orientation from magnetometer (local magnetic field vector) and accelerometer (gravity vector) data. The example given closely corresponds to the TRIAD (tri-axial attitude determination) method. Other least-squares approaches are the QUEST (quaternion estimation) methods, the factorized quaternion methods, and the q-method (Cheng and Shuster 2005; Shuster 2006). All aforementioned methods are single-frame methods, i.e. they rely on the data from the current frame to derive the orientation in that same time frame. The TRIAD method has several limitations: it only allows two input vectors and is sensitive to the order in which they are presented, and because it uses data from a single time frame it is more sensitive to random error.

Sequential approaches, the best known being the Kalman filters, attain better results by taking advantage of more data and thus reducing the sensitivity to random error. The Kalman filter is a recursive filter, meaning that it reuses data to improve the estimate of the state of the system and to moderate the noise present in the measurement data. It has long been the most commonly used orientation estimation algorithm (Yun and Bachmann 2006; Sabatini 2006; Roetenberg 2006; Park 2009). The most used version is the extended Kalman filter (EKF), which accounts for a certain degree of non-linearity by linearizing about the current best estimate. If the non-linearity is high, a different filter type better suited to cope with non-linearity (e.g. particle filter methods) should be chosen instead. The EKF is also the filter type used to obtain the orientation data in the second case study.

The equations behind the EKF can be separated into two groups: the time update or predictor equations and the measurement update or corrector equations (see Fig. 16.2). To be able to remove the drift error present after integrating the gyroscope data we have to estimate it; upon removal of this drift, the gyroscope signal will be closer to the actual rotations and changes in orientation. We furthermore need a reference to help identify the drift in the gyroscope signal. Since we are using orientation data as input to our EKF, this reference is provided by combining the data from the accelerometers and magnetometers (see case study 1). The filter parameters are altered depending on the movement or activity under investigation. The gain of each parameter is calculated continuously to indicate the importance of (the level of trust to be given to) each input for the estimation. The initial tuning of the parameters has a strong impact on the performance of the filter; it is difficult, if not impossible, to find a configuration that is suitable for both static or slow movements and highly dynamic activities.
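
The predict/correct loop is easiest to grasp on a toy problem. The sketch below is a deliberately simplified one-state linear Kalman filter for a single tilt angle, fusing gyroscope and accelerometer data; it is not the EKF running on board the TechMCS, and the noise variances are illustrative tuning assumptions.

```python
import numpy as np

def kalman_tilt(gyro, acc_tilt, dt, q=1e-4, r=1e-2):
    """One-state Kalman filter for a single tilt angle (didactic sketch).

    gyro:     angular velocity samples in rad/s (drive the prediction)
    acc_tilt: tilt angles in rad derived from the accelerometer (corrections)
    q, r:     process and measurement noise variances (tuning parameters)
    """
    angle, p = acc_tilt[0], 1.0            # initial state and error covariance
    estimate = np.empty(len(gyro))
    for k in range(len(gyro)):
        # Time update (predictor): integrate the gyroscope reading.
        angle += gyro[k] * dt
        p += q
        # Measurement update (corrector): blend in the accelerometer tilt.
        gain = p / (p + r)                 # how much to trust the measurement
        angle += gain * (acc_tilt[k] - angle)
        p *= 1.0 - gain
        estimate[k] = angle
    return estimate
```

A large r (noisy accelerometer) shifts the balance toward the gyroscope prediction, while a large q shifts it toward the measurements, mirroring the tuning trade-off described above.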

Fig. 16.2

Scheme of a Kalman filtering algorithm. Stochastic filters such as the EKF use a model of the sensor measurements (measurement model) to produce an estimate of the system state. Stochastic filtering thus consists of two stages in a loop: a prediction stage for the new state (time update) and an update stage in which this prediction is verified against the new measurements (measurement update)

The EKF thus balances the strengths and weaknesses of the various sensors to achieve orientation estimates with higher accuracy and reliability. The most important limitation of the EKF is that, being an adaptive filter, its behavior depends on the tuning of the parameters and on the motion being analyzed. The data provided and analyzed in this chapter were obtained using the on-board algorithm of the TechMCS MIMU (Technaid, S.L.).

4 Human Movement Analysis with MIMUs

In this section we will start from both the raw data and the orientation data provided by the TechMCS sensors, but the following can be applied to any sensor or system providing such data. The orientation estimates were obtained using the on-board algorithm of the TechMCS MIMU (Technaid, S.L.). The raw sensor data will be used in the sensor-to-body calibration, and the sensor orientation data will be used to derive anatomical joint angles. In particular we will look at the right knee joint during normal overground walking. Two sensors are placed in elastic straps and tightened with Velcro on the lateral side of the right leg: one on the thigh (one third up from the knee joint) and the other on the shank (one third down from the knee joint) (see Fig. 16.3). It is important to create a significant, but not uncomfortable, pre-load while attaching the straps to avoid excessive motion artifacts during data collection. The same principles hold for other joints of the human body, as well as for other movements. Starting from the sensor orientation, we have to obtain the anatomical segment orientation. Once we have the orientation of all segments involved, we can calculate the rotation matrix between two adjacent segments and extract the relevant joint angles.

Fig. 16.3

Two-step calibration procedure to calibrate the MIMU to their respective body segments of the lower limb. The first step (A) consists of maintaining an upright posture with the leg fully extended; in this posture the segment's longitudinal axis is aligned with the earth's gravity vector (vertical axis, in red). The second step (B) determines the second vector (green). We have opted for a planar movement around the hip joint (hip flexion–extension) with a straight leg; during the movement, both shank and thigh move in the same plane with a common flexion–extension axis (dotted green line at the hip). The third calibration axis (black) is then obtained by taking the cross product of the vectors measured in A and B. To correct for any misalignment due to measurement error (e.g. poor execution of the flexion–extension movement), one of the measured axes is subsequently corrected by taking the cross product between the third axis and the other measured axis

4.1 Sensor-to-Body Calibration

To obtain anatomical segment orientations, a sensor-to-body calibration is required (see Fig. 16.3). The purpose of the calibration is to identify, for each sensor attached to a segment, a constant rotation matrix relating the sensor frame to the anatomical frame of the segment to which it is attached. The ISB recommendations (see appendix) for quantifying joint motion are based on systems providing position data (Grood and Suntay 1983; Wu et al. 2002). However, current IMU and MIMU are unable to provide position data. When only the orientations of body segments are available, positions have to be determined by linking segments to each other, using a linked-segment model based on segment orientations and fixed segment lengths (Faber et al. 2010; Van den Noort et al. 2012). Therefore, several calibration methods have been proposed that do not rely on position data of bony anatomical landmarks (Favre et al. 2009; O'Donovan et al. 2007; Picerno et al. 2008). Three main groups can be distinguished: reference posture or static methods, functional methods, and methods requiring additional equipment. The static methods rely on one or several predefined postures and predominantly use accelerometer and magnetometer data. The functional methods add functional uni-articular joint movements, with the functional joint axis of rotation derived from the gyroscope data (Luinge et al. 2007; Jovanov et al. 2005). The calibration method used here combines the static and functional approaches and can be divided into two parts, the first being static. The participant, equipped with a sensor on the thigh and shank, is required to stand still with both legs parallel and knees extended. It is assumed that the longitudinal axis of the segment (\( \vec{Y} \)) coincides with the gravity vector. To obtain this unit vector, the accelerometer data of a specific frame is extracted. It is recommended to verify the absence of motion artifacts or amplitude spikes during the chosen frame, or alternatively to average the accelerometer data over a short interval. After doing so, we have obtained the first axis of the anatomical coordinate system (ACS).

$$
\begin{aligned}
&\textbf{Get raw data}\\
&\mathrm{raw}(\overrightarrow{Accel}) = \text{raw } XYZ \text{ accelerometer signal}\\
&\mathrm{norm}(\overrightarrow{Accel}) = \mathrm{raw}(\overrightarrow{Accel}) \,/\, |\mathrm{raw}(\overrightarrow{Accel})|
\end{aligned}
$$
$$
\begin{aligned}
&\textbf{Get unit vector}\\
&\vec{Y} = \mathrm{norm}(\overrightarrow{Accel})
\end{aligned}
$$

For the data provided in the previous section this gives us:

$$ \overrightarrow{Y_{thigh}} = [\;0.9958 \quad 0.0821 \quad -0.0416\;] $$
$$ \overrightarrow{Y_{shank}} = [\;0.9874 \quad 0.0877 \quad -0.1317\;] $$

Subsequently a functional motion is executed; we have opted for a pure hip flexion without bending the knee (see Fig. 16.3). During this movement, thigh and shank are assumed to move strictly in the sagittal plane, perpendicular to the axis of rotation. We assume a pure hip flexion–extension motion, whose flexion–extension axis lies in the same plane as the knee flexion–extension axis. Other movements can also be used, such as knee flexion–extension or leg abduction–adduction. Here, the mean gyroscope value is taken over a single hip flexion motion.

$$
\begin{aligned}
&\textbf{Get raw data}\\
&\mathrm{mean}(\overrightarrow{Gyro}) = \text{mean of the } XYZ \text{ gyroscope signal}\\
&\mathrm{norm}(\overrightarrow{Gyro}) = \mathrm{mean}(\overrightarrow{Gyro}) \,/\, |\mathrm{mean}(\overrightarrow{Gyro})|
\end{aligned}
$$
$$
\begin{aligned}
&\textbf{Get unit vector}\\
&\vec{H} = \mathrm{norm}(\overrightarrow{Gyro})
\end{aligned}
$$

Applied to the dataset provided, the vectors derived from the dynamic calibration trials are:

$$ \overrightarrow{H_{thigh}} = [\;0.1750 \quad -0.1676 \quad 0.9702\;] $$
$$ \overrightarrow{H_{shank}} = [\;0.5033 \quad 0.0398 \quad 0.8632\;] $$

The two obtained vectors, \( \vec{Y} \) and \( \vec{H} \), both originate from measurements and can thus be non-perpendicular due to measurement error. In patient populations, performing a pure motion can be a demanding task; therefore the longitudinal vector (derived from the static trial) is chosen as the base of our calculations. Taking the cross product of \( \vec{Y} \) and \( \vec{H} \), we obtain a third vector (\( \vec{Z} \)) that is orthogonal to the two original vectors. To ensure an orthogonal coordinate system we then compute the cross product of \( \vec{Z} \) and \( \vec{Y} \), and obtain \( \vec{X} \). \( \vec{H} \) is thus a temporary vector that is later corrected, resulting in \( \vec{X} \) (see Fig. 16.4).

Fig. 16.4

Double cross-product to ensure mutually perpendicular vectors

$$
\begin{aligned}
&\textbf{Get unit vectors}\\
&\overrightarrow{Y_{segment}} = \mathrm{norm}(\overrightarrow{Accel})\\
&\overrightarrow{H_{segment}} = \mathrm{norm}(\overrightarrow{Gyro})
\end{aligned}
$$
$$
\begin{aligned}
&\textbf{Get sensor orientation}\\
&\overrightarrow{Z_{segment}} = \mathrm{cross}(\overrightarrow{Y_{segment}},\overrightarrow{H_{segment}})\\
&\overrightarrow{X_{segment}} = \mathrm{cross}(\overrightarrow{Z_{segment}},\overrightarrow{Y_{segment}})
\end{aligned}
$$
$$
\begin{aligned}
&\textbf{Get calibration matrix}\\
&{}^{\mathrm{G}}\mathrm{R}_{\mathrm{s\_segment}} = \left[\,{}^{\mathrm{G}}\mathrm{X}_{\mathrm{s}} \;\; {}^{\mathrm{G}}\mathrm{Y}_{\mathrm{s}} \;\; {}^{\mathrm{G}}\mathrm{Z}_{\mathrm{s}}\,\right]_{3\times3} = \left[\,\overrightarrow{X_{segment}} \;\; \overrightarrow{Y_{segment}} \;\; \overrightarrow{Z_{segment}}\,\right]_{3\times3}
\end{aligned}
$$

In the numerical example we obtain the following matrices for the thigh and shank:

$$
{}^{\mathrm{G}}\mathrm{R}_{\mathrm{s\_thigh}} =
\begin{bmatrix}
0.0732 & -0.9805 & -0.1826\\
0.9958 & 0.0821 & -0.0416\\
0.0558 & -0.1787 & 0.9823
\end{bmatrix}
$$
$$
{}^{\mathrm{G}}\mathrm{R}_{\mathrm{s\_shank}} =
\begin{bmatrix}
0.0878 & -0.9961 & -0.0053\\
0.9874 & 0.0877 & -0.1317\\
0.1317 & 0.0063 & 0.9913
\end{bmatrix}
$$
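
The calibration can be reproduced with a few lines of Python. This is a sketch following the pseudo-code above: normalizing after each cross product is our assumption (the pseudo-code leaves it implicit), and the function assembles X, Y, and Z as columns, as stated in the pseudo-code; the name segment_calibration is ours.

```python
import numpy as np

def segment_calibration(y_static, h_functional):
    """Sensor-to-segment calibration matrix via the double cross product."""
    y = y_static / np.linalg.norm(y_static)   # longitudinal axis (static trial)
    z = np.cross(y, h_functional)             # first cross product
    z /= np.linalg.norm(z)
    x = np.cross(z, y)                        # second cross product: corrected H
    return np.column_stack((x, y, z))         # [X Y Z]

# Unit vectors from the static and dynamic calibration trials above
R_cal_thigh = segment_calibration(np.array([0.9958, 0.0821, -0.0416]),
                                  np.array([0.1750, -0.1676, 0.9702]))
R_cal_shank = segment_calibration(np.array([0.9874, 0.0877, -0.1317]),
                                  np.array([0.5033, 0.0398, 0.8632]))
```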

The matrix \( {}^{\mathrm{G}}\mathrm{R}_{\mathrm{s\_segment}} \) allows us to express the sensor orientation data provided by the TechMCS in the local coordinate system of the segment to which the sensor is attached. This is done by multiplying the constant calibration matrix \( {}^{\mathrm{G}}\mathrm{R}_{\mathrm{s\_segment}} \) by the inverse of the sensor data matrix \( \mathrm{R}_{\mathrm{s\_segment}} \) at each frame. The coordinate system in which the sensor data is obtained does not conform to the ISB guidelines. The output of the TechMCS is a measure of its orientation with respect to a reference frame fixed to the earth; we therefore need to multiply the data by an ISB conversion matrix to comply with the ISB recommendations (Grood and Suntay 1983; Wu et al. 2002). It was deemed easier to correct this mathematically after data collection, and to prioritize optimal MIMU-to-segment attachment during the trials.

$$
\begin{aligned}
&\textbf{Get data}\\
&{}^{\mathrm{G}}\mathrm{R}_{\mathrm{S\_segment}} = \text{calibration matrix from sensor to anatomical frame (constant)}\\
&\mathrm{R}_{\mathrm{S\_segment}} = \text{sensor orientation data, updated each frame}
\end{aligned}
$$
$$
\begin{aligned}
&\textbf{Get ISB\_conversion}\\
&\mathrm{R}_{\mathrm{ISB}} =
\begin{bmatrix}
0 & -1 & 0\\
1 & 0 & 0\\
0 & 0 & 1
\end{bmatrix}
\end{aligned}
$$
$$
\begin{aligned}
&\textbf{Get segment orientation}\\
&{}^{\mathrm{G}}\mathrm{R}_{\mathrm{thigh}} = {}^{\mathrm{G}}\mathrm{R}_{\mathrm{S\_thigh}} \cdot \mathrm{inverse}\left(\mathrm{R}_{\mathrm{S\_thigh}} \cdot \mathrm{R}_{\mathrm{ISB}}\right)\\
&{}^{\mathrm{G}}\mathrm{R}_{\mathrm{shank}} = {}^{\mathrm{G}}\mathrm{R}_{\mathrm{S\_shank}} \cdot \mathrm{inverse}\left(\mathrm{R}_{\mathrm{S\_shank}} \cdot \mathrm{R}_{\mathrm{ISB}}\right)
\end{aligned}
$$
$$
\begin{aligned}
&\textbf{Get joint orientation}\\
&{}^{\mathrm{G}}\mathrm{R}_{\mathrm{knee}} = {}^{\mathrm{G}}\mathrm{R}_{\mathrm{thigh}} \cdot \mathrm{transpose}\left({}^{\mathrm{G}}\mathrm{R}_{\mathrm{shank}}\right)
\end{aligned}
$$
$$
\begin{aligned}
&\textbf{Get Euler angles}\\
&(\text{Shorthand notation: } c1 = \cos\theta_{1},\; s2 = \sin\theta_{2})\\
&(\text{ISB recommended Euler sequence: } X{-}Y'{-}Z'')
\end{aligned}
$$
$$
{}^{\mathrm{G}}\mathrm{R}_{\mathrm{s}} =
\begin{bmatrix}
c2\,c3 & s3\,c1 + s1\,s2\,c3 & s1\,s3 - c1\,s2\,c3\\
-c2\,s3 & c1\,c3 - s1\,s2\,s3 & s1\,c3 + c1\,s2\,s3\\
s2 & -s1\,c2 & c1\,c2
\end{bmatrix}
$$
$$
\begin{aligned}
\theta_{2} &= \operatorname{asin}\left({}^{\mathrm{G}}\mathrm{R}_{\mathrm{s}}(3,1)\right)\\
\theta_{1} &= \operatorname{acos}\left({}^{\mathrm{G}}\mathrm{R}_{\mathrm{s}}(3,3) \,/\, \cos(\theta_{2})\right)\\
\theta_{3} &= \operatorname{acos}\left({}^{\mathrm{G}}\mathrm{R}_{\mathrm{s}}(1,1) \,/\, \cos(\theta_{2})\right)
\end{aligned}
$$
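
Chaining the steps above for one frame of walking data gives a compact per-frame routine. The sketch below assumes numpy and the calibration matrices computed earlier; the function name knee_angles is ours. Applied frame by frame over a trial, it yields the angle curves of Fig. 16.5.

```python
import numpy as np

R_ISB = np.array([[0.0, -1.0, 0.0],     # conversion to the ISB convention
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])

def knee_angles(R_cal_thigh, R_s_thigh, R_cal_shank, R_s_shank):
    """Knee Euler angles (X-Y'-Z'' sequence, in degrees) for a single frame."""
    # Segment orientations: calibration matrix times inverse of the sensor data.
    R_thigh = R_cal_thigh @ np.linalg.inv(R_s_thigh @ R_ISB)
    R_shank = R_cal_shank @ np.linalg.inv(R_s_shank @ R_ISB)
    # Joint orientation: relative rotation between the thigh and shank frames.
    R_knee = R_thigh @ R_shank.T
    # Euler angle extraction following the formulas above.
    th2 = np.arcsin(R_knee[2, 0])
    th1 = np.arccos(R_knee[2, 2] / np.cos(th2))
    th3 = np.arccos(R_knee[0, 0] / np.cos(th2))
    return np.degrees([th1, th2, th3])
```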

Applying this to the full data set gives us the knee joint angles shown in Fig. 16.5.

Fig. 16.5

Three-dimensional knee joint angles during an unrestricted walking trial performed at a self-selected speed by a healthy subject. The data represent right knee movement

5 Conclusion

Inertial/magnetic sensors are relatively robust to environmental factors, sensitivity to which is one of the drawbacks of traditional technologies for movement analysis. Fusion algorithms make ambulatory 3D movement analysis possible, but two main concerns must be taken into account. First, sensor performance and data reliability depend on the appropriate selection of filter parameters, which in turn depends on the nature of the movement under analysis. Second, compatibility with position-based systems through the ISB standards is not yet guaranteed, although novel methods for anatomical calibration are being proposed.

MIMU thus offer valuable opportunities for almost restriction-free motion capture and for monitoring health state and activities of daily living, for example in telemedicine applications (Jovanov et al. 2005).