1 Introduction

Sensors and sensor fusion play a fundamental role in the sensorimotor behavior of animals and humans. Their use offloads computational burdens to the periphery and early processing stages of the central nervous system (CNS; e.g. [1]). Furthermore, sensor data fusion represents the basis for the perceptual reconstruction of the external world and the interaction with it. Current understanding of the mechanisms involved in humans owes mainly to sensory physiology and to psychophysics, a research method that relates perception to the physical stimuli that evoke it, allowing inferences on the underlying information processing. The field was founded more than a century ago by Fechner and Weber (see [2]), and major contributions dealt with visual and vestibular mechanisms. Cybernetics then introduced engineering methods of describing information processing and control into biomedical research [3]. The present study uses psychophysical findings on human ego-motion perception and their model-based descriptions for the sensorimotor control of a humanoid robot. This represents a neurorobotics approach, in which neuroscientists apply engineering methods to unveil human neural control and roboticists draw inspiration from the human control methods [4].

Human sensorimotor control involves not only movement planning and movement commanding, but also posture control. Posture control is an instrumental constituent of skeletal motor activity. It copes with inter-segmental coupling torques and movement coordination, adequate buttressing of movements (e.g. push off), maintaining balance, and automatizing the compensation of external disturbances. Posture control functions may be selectively impaired in neurological patients, as witnessed by disabling consequences. Both sensory loss and cerebellar lesions cause ataxia with jerkiness of movements, dysmetria (inappropriate metrics), falls, and motor timing problems [5]. In basal ganglia diseases such as Parkinson’s disease, the posture control impairment causes falls, akinesia (difficulties in movement execution), movement freezing, impaired motor adaptability to external disturbances, and muscular stiffness (‘rigor’) [6].

Modeling the role of sensors and sensor fusions in human posture control has been successful only recently. The problem to overcome was how humans manage to deal with sensory feedback despite long neural time delays (see [7]). Before, it was often held that passive joint stiffness and viscosity, stemming from intrinsic musculoskeletal properties and acting virtually without time delay, play a major role, for example in stabilizing biped stance [8]. Later work showed, however, that this owes primarily to the neural reflexes (ankle joint: [9, 10]; ankle, knee and hip joint: [11, 12]). Several types of reflexes appear to be involved, some with short time delay (40–80 ms) and others with long time delay (>100 ms), and this applies not only to proprioceptive reflexes, but also to the vestibular reflexes [13].

The total time delay of the reflexive feedback mechanisms in biped balancing is approximately 180 ms (e.g. [10]). Yet the neural control of biped balancing in the ankle joints is stable, owing mainly to the fact that the loop gain is very low, hardly exceeding the minimum required for the balancing [10, 14]. The sensory feedback stems primarily from joint angle and torque proprioception, the vestibular system and vision (see [15]). The underlying neural sensor fusions, often referred to as ‘multi-sensory integration’, allow humans to adapt their posture control to changes in the environmental conditions and to the availability of sensory information. They do so mainly by changing sensory weights, which has been called ‘sensory reweighting’ [10, 14, 16–18]. The sensory integration and reweighting mechanisms are still a topic of ongoing research.

This paper presents a concept of human-derived sensor fusion mechanisms for use in the posture control of a humanoid robot that balances biped stance. In the following, basic aspects of the multi-sensory fusions are explained, before their use in the human posture control model is described and the model is implemented in a humanoid robot for balancing biped stance in the ankle joints. The model is then extended to include the hip joints in the balancing and is again re-embodied into a robot for direct robot-human comparisons. Finally, an outlook is given on how the control concept can further be extended in a modular control architecture for humanoid robots that we expect to show human-like characteristics when interacting behaviorally with humans or in the form of prostheses or exoskeletons.

2 Sensor Fusion and Posture Control Mechanisms

Sensor fusion is an important technical issue. Position tracking technologies rely heavily on the integration of several sensors: e.g. inertial measurement units (IMUs) integrate gyros and accelerometers, and the IMU output itself is often fused with global positioning system (GPS) data. Published work on sensor fusion for postural control in robots typically used Kalman filters [19–22]. Simulation models for human posture control [23, 24] also implemented Kalman filters, combining in ‘sensory integration centers’ multiple sensory signals with centrally generated information (motor command) to find the most accurate sensory representation for a given environmental situation. Drawbacks of these approaches are high demands on computational power in multi-degree-of-freedom (DoF) systems and problems of control stability if the plant is not accurately reflected in the model.

A different disturbance estimation method was used in the posture control model considered here. It proceeded from psychophysical work that investigated (i) which sensory information humans use for their ego-motion perception during passive motion of the body or parts of it (e.g. head, trunk, legs, feet with respect to each other or in space), (ii) how humans fuse sensor data to obtain information that is not directly available from their sensors (e.g. trunk motion in space), and (iii) how they obtain estimates of external disturbances that may affect the ego-motion. The approach was model-based and originally aimed to formally describe the experimentally obtained human responses in the form of time series and performance data.

The psychophysical studies showed, for example, that humans involve joint proprioceptive information when using the vestibular information arising in the head for estimating the kinematic state of the trunk and legs in space as well as of the haptically experienced body support. From this information, humans internally reconstruct the external disturbances, which in the experiments consisted of support surface rotation and translation, and experienced their self-motion as a consequence of these external physical stimuli (see [25, 26]).

The concept of external disturbance estimation was extended to include field forces such as gravity or Coriolis forces (e.g. [27]) and to contact forces such as a push against, or a pull on the body [17, 28]. Neural correlates of some of the observed sensor fusions were found in neuron recordings in the vestibular nuclei [29, 30] and in cortical vestibular centers [31]. Furthermore, down- and up-channeling of vestibular signals in pathways of the spinal cord and their convergence with proprioceptive signals have been described [32]. Also, representations of processed sensory signals in terms of kinematic variables have been observed in spino-cerebellar pathways [33–35].

It was hypothesized that humans use the same or similar sensory information as observed in, or inferred from the psychophysical studies also for their sensorimotor control, at least as concerns re-active (sensor-driven) responses to external disturbances. On this basis, human posture control experiments were performed and modeled, leading to a disturbance estimation and compensation, DEC, concept.

2.1 Sensor Fusion in the DEC Concept

The DEC concept involves essentially two steps of sensor fusion, schematically illustrated in Fig. 1. In the first step, information from several sensory transducers is fused to obtain measures of kinematic and kinetic variables. In the second step, these physical variables are combined to yield estimates of the external disturbances.

Fig. 1

Schematic illustration of the sensor fusion mechanisms. Information of sensory transducer signals is fused in the first step to yield physical variables. These variables are used in the second step to reconstruct external disturbances

2.1.1 Fusion of Sensory Transducer Data

An example of the first step is the human sense of joint angle proprioception. It combines information from several sensory transducers such as muscle spindles, Golgi tendon organs and cutaneous receptors [36]. This also applies to the human perception of head-on-trunk rotation, which in addition is complicated by the fact that rotations between several segments of the cervical vertebral column are involved. Yet, the result is a sense of angular head-on-trunk velocity and position, as if an angular rate sensor and a goniometer in a single joint were measuring head-trunk speed and rotation, respectively [37, 38].

Another example for the first step, well known to engineers who work with IMUs, is the fusion of angular and linear accelerometer signals. A problem with linear accelerometers is that they do not distinguish between inertial and gravitational forces (i.e. between linear acceleration and tilt of the sensor). The angular sensors, often used in the form of gyros that measure angular velocity, have a problem as well: they show low frequency signal variations over time (‘drift’). Both problems can be solved for the earth-vertical planes by fusing the inputs from the two sensors in an appropriate way. This has an analogy in the human vestibular system that is located in the inner ears. Its otolith organs and canal systems represent biological equivalents of linear and angular accelerometers, respectively [39]. The solutions for both the technical system and its biological equivalent involve information on the gravitational vector. In the horizontal translational and rotational planes, however, no such information is available, so that further sources of information are required. Technical systems often use GPS for this purpose; humans usually use the visual system.
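In engineering practice, this two-sensor fusion is often realized as a complementary filter: the gyro is trusted at high frequencies, the accelerometer-derived tilt at low frequencies. The following is a minimal sketch of the principle; the time constant, sample interval, and signal values are illustrative assumptions, not parameters from the study.

```python
def complementary_filter(gyro_rates, accel_tilts, dt, tau=1.0):
    """Fuse a drifting angular-velocity (gyro) signal with a noisy but
    drift-free tilt estimate derived from a linear accelerometer.

    The gyro dominates at high frequencies, the accelerometer tilt at
    low frequencies; tau sets the crossover time constant (s).
    """
    alpha = tau / (tau + dt)        # blending coefficient
    tilt = accel_tilts[0]           # initialize from the accelerometer
    estimates = []
    for rate, acc_tilt in zip(gyro_rates, accel_tilts):
        # integrate the gyro, then pull slow drift toward the accelerometer
        tilt = alpha * (tilt + rate * dt) + (1.0 - alpha) * acc_tilt
        estimates.append(tilt)
    return estimates
```

With a biased gyro, the pure integral would drift without bound, while the filter settles near the true tilt plus a small bias-dependent offset.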

In the following we will speak of joint angle and angular velocity sensors and by this we mean virtual sensors that result from step one. The same applies when we refer to the vestibular sensor and its three output measures, i.e. 3D angular velocity and linear acceleration in space and 2D orientation with respect to the gravitational vertical. These measures of the physical variables represent the inputs to the second step of the sensor fusions.

2.1.2 Disturbance Estimation

In the second step of Fig. 1, the signals of the variables resulting from step one are combined to reconstruct external disturbances that have impact on the body. In the DEC concept, it is assumed that four physical quantities suffice to define the external disturbances that affect human balancing in moderate stimulus conditions (body sway amplitudes and velocities, <8° and <80°/s; frequencies, <3 Hz). The four types of external disturbances are: (1) Support surface rotation, (2) support surface translational acceleration, (3) field forces such as gravity, and (4) contact forces such as a pull on, or push against the body.

The second step in Fig. 1 was originally motivated by reports of the subjects in the aforementioned psychophysical experiments. When asked to report their percepts during passive rotations on a rotation chair, subjects typically started the report with the rotation of the chair, even though the percept primarily stems from the vestibular system in the head. Thus, without being aware of it, the subjects reconstructed the physical cause of their body rotation, i.e. the chair rotation in space, by internally reversing the linkages from the vestibular signal ‘head rotation in space’ via the proprioceptive signal ‘trunk rotation relative to the head’ to the haptic information of ‘sitting on the chair’. This can formally be described in terms of a transformation by which the trunk and chair kinematics are referenced to the vestibular derived notion of inertial space [25]. The concept applies to both the vestibular-able subjects’ estimation of ‘support rotation’ and ‘support translational acceleration’ in Fig. 1 (formal description in Sect. 2.2).
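For small co-planar rotations, this internal chain reversal reduces to simple angle arithmetic. The sketch below is purely illustrative (the function names and numeric values are ours, not from the paper) and references trunk and chair kinematics to the vestibular head-in-space signal:

```python
def trunk_in_space(head_space, head_trunk):
    """Trunk-in-space rotation from the vestibular head-in-space angle and
    the proprioceptive head-on-trunk angle (small co-planar angles, deg)."""
    return head_space - head_trunk

def chair_in_space(head_space, head_trunk, trunk_chair):
    """Reconstruct the physical cause of the perceived rotation, i.e. the
    chair rotation in space, by reversing the kinematic chain
    head -> trunk -> chair (seat)."""
    return trunk_in_space(head_space, head_trunk) - trunk_chair

# A 10 deg head-in-space rotation with 3 deg head-on-trunk and 2 deg
# trunk-on-chair joint rotations implies a 5 deg chair rotation in space:
# chair_in_space(10.0, 3.0, 2.0) -> 5.0
```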

Vestibular-able subjects furthermore use vestibular information for estimating body lean with respect to the earth vertical when balancing stance in the sagittal plane. From the lean of the whole-body’s center of mass (COM B ) above the ankle joints and knowledge about body mass and COM height, they estimate the ankle joint torque required to compensate for the gravity effect. For field forces in general, it is known that subjects, when presented with a new aspect of a field force, perceive it and readily learn to counteract its impact on the body. Thereafter, they no longer perceive it consciously, as has been shown in Coriolis force experiments by Lackner and DiZio [27]. The subconscious estimation and compensation of field forces makes it difficult to study them psychophysically.

Estimation of contact force effects on the ankle joint balancing requires internal measurement of the overall ankle torque (or related measures such as the center of pressure, COP, shift) and the distinct contributions to the ankle torque such as active torque and the gravitational torque. Details have been described before [40] for sagittal plane balancing of moderate disturbances, where the balancing is performed predominantly in the ankle joints (‘ankle strategy’; [41, 42]). In such situations, a single inverted pendulum, SIP, can approximately mimic human biomechanics.

2.1.3 Feedback Control Model

The two steps of sensor fusion are used for feedback control of one joint (Fig. 2). Its lower half represents a servo control consisting of a negative joint angle proprioceptive feedback and a controller with a proportional and a derivative factor (PD controller). The controller provides the motor command that is transformed by the muscles into joint torque (not shown in Fig. 2). Given appropriate parameters of the servo control, actual joint angle approximately equals the desired joint angle without requiring a feed forward of plant dynamics. Feedback from passive stiffness and viscosity with virtual zero delay is assumed to amount to 10 % of the proprioceptive feedback (not shown in Fig. 2).

Fig. 2

Simplified feedback control scheme of the Disturbance Estimation and Compensation (DEC) concept. The Proprioceptive Feedback loop yields a servo control, by which actual joint angle approximately equals the desired joint angle. Signals from the Disturbance Estimation part command the servo to compensate the disturbances

Noticeably, in the SIP scenario, the P and D factors identified in human stance control are surprisingly low [10, 14, 43]. They appear to be geared to the pendulum mass m, the height h of the COM, and the gravitational acceleration g (P ≈ mgh; D ≈ mgh/4). The values that humans use for balancing are only slightly higher than this minimum. A consequence is that the servo alone is insufficient to cope with external disturbances such as gravity or a push against the body.
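Why P must exceed mgh can be illustrated with a delay-free linearized SIP simulation. This is only a sketch under assumed anthropometric values (m = 80 kg, h = 0.9 m are ours); neural time delay and muscle dynamics are omitted:

```python
def simulate_sip(P, D, m=80.0, h=0.9, g=9.81,
                 theta0=0.05, dt=0.001, t_end=10.0):
    """Semi-implicit Euler simulation of a linearized single inverted
    pendulum (SIP) with PD feedback: J*th'' = m*g*h*th - P*th - D*th'.

    Returns the lean angle (rad) at t_end. Neural delay and muscle
    dynamics are omitted in this sketch.
    """
    J = m * h * h                    # point-mass approximation of inertia
    theta, omega = theta0, 0.0
    for _ in range(round(t_end / dt)):
        torque = -P * theta - D * omega            # corrective PD torque
        acc = (m * g * h * theta + torque) / J     # gravity destabilizes
        omega += acc * dt
        theta += omega * dt
    return theta

mgh = 80.0 * 9.81 * 0.9
# P only slightly above mgh, D = mgh/4, as identified in human stance control
final_lean = simulate_sip(P=1.05 * mgh, D=mgh / 4)
```

With P = 1.05·mgh the initial lean decays toward upright; rerunning with P = 0.9·mgh lets gravity win and the pendulum falls in this linear model.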

The upper half of Fig. 2 shows schematically the loop that carries the estimates of the external disturbances and compensates for them. To ensure control stability in the face of the neural time delays, the field and contact force estimates are not used directly, but in the form of body-space angle equivalents. For example, the estimate of body lean commands the servo to compensate for the gravitational torque it produces. Then, the loop gain (at the level of the controller) is raised accordingly. Noticeably, the increase occurs only at the time of, and to the extent that, the disturbance has impact. Note furthermore that disturbance compensation applies even with superposition of several disturbances as well as with superposition of disturbances and voluntary movements [39].

The DEC loops are not simply representing additional sensory feedback loops, but are thought to represent long-latency loops through basal ganglia and cerebral cortex [40]. They contain central detection thresholds and allow for voluntary scaling the disturbance compensations and for predictions of the disturbance estimates (e.g. self-produced disturbances during voluntary movements).

It has been shown by comparing human data with model simulations that the DEC concept describes the human ankle joint balancing in a variety of disturbance scenarios. Furthermore, the control automatically adapts to changes in disturbance scenario and magnitude as well as sensor availability. This also applied when the model was implemented in a humanoid robot with ankle joint actuation, and tested in the human experimental setup (PostuRob I; overview [39, 40]). These experiments demonstrated that the DEC concept is robust against real world problems such as inaccurate and noisy sensors and mechanical dead zones.

The following describes an extension of the DEC concept to include the hip joints in the balancing. The hip joints contribute considerably when strong transient disturbances are applied (‘hip strategy’; [41, 42]). Then humans may use hip joint accelerations to produce shear forces under the feet to counteract body COM excursions. Another, more common involvement of the hip joints deals with adding, to the task of body COM balancing, a secondary task of keeping the orientation of the upper body upright. This ‘head stabilization in space’ task is thought to improve, under dynamic conditions such as walking, the sensory feedback from the vestibular and visual cues arising in the head [44, 45].

2.2 Extended DEC Concept: Sensor Fusion in Ankle and Hip Joint

The extension of the DEC concept to include the hip joints entails that double inverted pendulum (DIP) rather than SIP biomechanics are considered, and with this the occurrence of inter-segmental coupling torques [46]. In the extended DEC concept for DIP biomechanics, we postulated two DEC controls, one for the hip joint and the other for the ankle joint. This approach allowed us to reuse the sensor fusion principles for disturbance estimation described above.

2.2.1 DIP Biomechanics

The DIP biomechanical model is shown in Fig. 3. In Fig. 3a, COM T , COM L and COM B stand for the COM of the trunk (including head and arms), leg and whole body, respectively. Leg length is given by l L , the trunk and leg COM heights are given by h T and h L , respectively. Figure 3b shows the angular excursion of the trunk and leg segments with respect to earth vertical (trunk-space angle α TS , leg-space angle α LS ). Angular excursion of COM B is defined as body-space angle α BS . The foot has firm contact with the support surface; therefore, the platform tilt angle equals the foot angle with respect to earth horizontal (foot-space angle α FS ). The trunk-leg joint angle is α TL and the leg-foot joint angle is α LF . In perfectly upright body position, all angles are 0°. Angular speed during reactive human balancing can be assumed to be slow enough that the Coriolis and centrifugal forces can be neglected; the model can be linearized using the small angle approximation, assuming that the subject maintains an upright position close to the vertical.

Fig. 3

DIP biomechanics

Maintaining upright stance in the situation of a support surface tilt in the sagittal plane requires corrective joint torque in the ankle and hip joints. This torque can be expressed by the following equations for hip torque T H

$$ \begin{array}{ll} {T}_H=&\left({J}_T+{m}_T{h}_T^2+m_Tl_Lh_T\right){\ddot{\alpha}}_{LS}+\left({J}_T+{m}_T{h}_T^2\right){\ddot{\alpha}}_{TL} - ({m}_T{g}h_T){{\alpha}}_{LS}\\&-({m}_Tg{h}_T){\alpha}_{TL}\end{array} $$
(1)

and for ankle torque T A

$$ \begin{array}{ll} {T}_A=&\left({J}_L+{J}_T+{m}_L{h}_L^2+m_T(l_L^2+h^2_T+2l_Lh_T)\right){\ddot{\alpha}}_{LS}\\&+\left({J}_T+{m}_Th^2_T+m_Tl_Lh_T\right)\ddot{\alpha}_{TL}-\left(m_Lgh_L+m_Tgl_L+m_Tgh_T\right)\alpha_{LS}\\&-(m_Tgh_T)\alpha_{TL} \end{array}$$
(2)

where \( {\ddot{\alpha}}_{LS} \), and \( {\ddot{\alpha}}_{TL} \) represent angular accelerations, m L and m T are the segment masses, and J L and J T the segment moments of inertia (details in Al Bakri [47]).
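Eqs. (1) and (2) translate directly into code. In the sketch below, the default segment masses, lengths, and moments of inertia are placeholders, not the anthropometric data used in the study:

```python
def dip_torques(a_LS, a_TL, acc_LS, acc_TL,
                m_L=20.0, m_T=40.0, h_L=0.5, h_T=0.3, l_L=0.9,
                J_L=2.0, J_T=3.0, g=9.81):
    """Corrective hip and ankle torques of the linearized DIP, Eqs. (1)-(2).

    Angles in rad, angular accelerations in rad/s^2. The default segment
    parameters are illustrative placeholders.
    """
    # Eq. (1): hip torque
    T_H = ((J_T + m_T * h_T**2 + m_T * l_L * h_T) * acc_LS
           + (J_T + m_T * h_T**2) * acc_TL
           - m_T * g * h_T * a_LS
           - m_T * g * h_T * a_TL)
    # Eq. (2): ankle torque
    T_A = ((J_L + J_T + m_L * h_L**2
            + m_T * (l_L**2 + h_T**2 + 2 * l_L * h_T)) * acc_LS
           + (J_T + m_T * h_T**2 + m_T * l_L * h_T) * acc_TL
           - (m_L * g * h_L + m_T * g * l_L + m_T * g * h_T) * a_LS
           - m_T * g * h_T * a_TL)
    return T_H, T_A
```

At perfectly upright rest all terms vanish; a static leg lean yields purely gravitational (negative, i.e. destabilizing-to-be-compensated) torque contributions.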

In the extended DEC concept for the DIP, the hip joint is used for orienting and balancing the trunk segment and the ankle joint for balancing the whole-body using two separate controls. The vestibular-derived signals used for the controls are: the trunk-space angle α ts , trunk-space angular velocity \( {\dot{\alpha}}_{ts} \), and head translational acceleration \( {\ddot{x}}_{Head} \). The proprioceptive signals are: the trunk-leg angle α tl and the trunk-leg angular velocity \( {\dot{\alpha}}_{tl} \); the leg-foot angle α lf and the leg-foot angular velocity \( {\dot{\alpha}}_{lf} \). Uppercase letters in the angle subscripts indicate physical angles, lowercase letters the sensory derived representations of these angles.

2.2.2 Hip Joint Control

The DEC control of the trunk reflects the principles described above for the SIP biomechanics. Considering the support surface tilt scenario in the sagittal plane shown in Fig. 3, the legs tend to rotate somewhat with the platform, due to passive ankle joint stiffness and an imperfect tilt compensation that is typical of humans with eyes closed. Since the legs represent the support base for the trunk, an eccentric hip rotation represents:

  (a)

    A support base tilt disturbance for the trunk, evoked by the leg rotation, α LS .

  (b)

    A hip translational acceleration \( {\ddot{x}}_{Hip} \). It produces a hip torque (T H_in ) in relation to m T , h T and J T . This torque is treated here as if it were an external disturbance rather than an inter-segmental coupling effect.

Furthermore, trunk lean is associated with a gravitational hip torque disturbance (T H_grav ).

These three disturbances are estimated in the DEC control of the hip joint control in the following form:

  (i)

    Estimation of leg tilt, \( {\widehat{\alpha}}_{LS} \). This estimate is derived from fusing the vestibular velocity signal \( {\dot{\alpha}}_{ts} \) with the proprioceptive velocity signal \( {\dot{\alpha}}_{tl} \) by \( {\dot{\alpha}}_{ls}={\dot{\alpha}}_{ts}-{\dot{\alpha}}_{tl} \) (Assumption: these transformations are performed as vector summations of co-planar rotations, separately for the three body planes). \( {\widehat{\alpha}}_{LS} \) is obtained by applying to the signal a detection threshold and a mathematical integration.

  (ii)

    Estimation of hip translational acceleration \( {\widehat{\ddot{x}}}_{Hip} \). The estimate is derived from fusing the vestibular signals \( {\dot{\alpha}}_{ts} \) and \( {\ddot{x}}_{Head} \) in the form

    $$ {\widehat{\ddot{x}}}_{Hip}={\ddot{x}}_{Head}-\frac{d\left({\dot{\alpha}}_{ts}\right)}{dt}{l}_T, $$
    (3)

    where the trunk length l T gives the height of the vestibular system above the hip. \( {\widehat{\ddot{x}}}_{Hip} \) is, in turn, used to estimate the inertial disturbance torque in the form of

    $$ {\widehat{T}}_{H\mbox{\_}in}={\widehat{\ddot{x}}}_{Hip}{m}_T{h}_T. $$
    (4)
  (iii)

    Estimation of gravitational hip torque \( {\widehat{T}}_{H\mbox{\_}{\it grav}} \). Using the vestibular signal α ts , the third and fourth terms of Eq. (1) become

    $$ {\widehat{T}}_{H\mbox{\_}{\it grav}}={m}_Tg{h}_T{\alpha}_{ts}. $$
    (5)
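Taken together, the three hip-level estimates reduce to a few algebraic operations plus a thresholded integration. A minimal sketch, in which the threshold value and segment parameters are illustrative assumptions:

```python
def estimate_leg_tilt(vel_ts, vel_tl, dt, threshold=0.003):
    """Estimate leg-in-space tilt (rad) from the vestibular trunk-space
    velocity and the proprioceptive trunk-leg velocity:
    vel_ls = vel_ts - vel_tl, passed through a central detection
    threshold (deadband) and then integrated."""
    tilt = 0.0
    for v_ts, v_tl in zip(vel_ts, vel_tl):
        v_ls = v_ts - v_tl
        if abs(v_ls) > threshold:      # detection threshold
            tilt += v_ls * dt          # mathematical integration
    return tilt

def hip_disturbance_torques(acc_head, dvel_ts_dt, a_ts,
                            l_T=0.8, m_T=40.0, h_T=0.3, g=9.81):
    """Eqs. (3)-(5): hip translational acceleration, the inertial hip
    torque it produces, and the gravitational hip torque from trunk lean."""
    acc_hip = acc_head - dvel_ts_dt * l_T   # Eq. (3)
    T_H_in = acc_hip * m_T * h_T            # Eq. (4)
    T_H_grav = m_T * g * h_T * a_ts         # Eq. (5)
    return acc_hip, T_H_in, T_H_grav
```

Note that a velocity difference that stays below the threshold integrates to zero, which is the noise-shutoff behavior discussed in the Conclusions.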

2.2.3 Ankle Joint Control

The DEC control of the ankle joints is used to balance the whole body above the ankle joints. To this end, it combines the leg and trunk angular excursions in the form of COM B excursions in space, α BS . In this respect, the DEC control of the ankle also deals with a SIP. Three disturbances have impact on the ankle torque during support surface tilts:

  (a)

    The support surface tilt, α FS .

  (b)

    The gravitational ankle torque, T A_grav . It results from α BS .

  (c)

    Inter-segmental coupling torque in the ankle joint, T A_coup . It arises with angular acceleration of the trunk segment.

For the estimation of these disturbances, the DEC control of the ankle fuses sensory signals from the vestibular system and the hip and ankle joint proprioception. To this end, sensory signals from the hip DEC control are transmitted (“down-channeled”) to the ankle joint DEC control. The estimates are:

  (i)

    Estimation of foot-space rotation, \( {\widehat{\alpha}}_{FS} \). This estimate uses a down-channeled version of \( {\dot{\alpha}}_{ls} \) and combines it with the ankle joint angular velocity signal \( {\dot{\alpha}}_{lf} \) in the form

    $$ {\dot{\alpha}}_{fs}={\dot{\alpha}}_{ls}-{\dot{\alpha}}_{lf}. $$
    (6)

    Analogous to \( {\widehat{\alpha}}_{LS} \), the estimate \( {\widehat{\alpha}}_{FS} \) contains a detection threshold and a mathematical integration.

  (ii)

    Estimation of gravitational ankle torque, \( {\widehat{T}}_{A\mbox{\_}{\it grav}} \). This estimate relates to the third and fourth terms of Eq. (2), which are mathematically combined in the COM B excursion α bs . From this, the gravitational torque is obtained in the form

    $$ {\widehat{T}}_{A\mbox{\_}{\it grav}}={m}_Bg{h}_B{\alpha}_{bs} $$
    (7)

    where m B represents whole-body mass and h B represents COM B height. Small angular excursions allow approximating h B by a constant value.

  (iii)

    Estimation of the inter-segmental coupling torque, \( {\widehat{T}}_{A\mbox{\_}{\it coup}} \). This torque arises upon trunk rotational acceleration and tends to evoke a leg counter-rotation. In view of the DEC concept, the trunk acceleration exerts a ‘push’ against the hip like a contact force disturbance (compare external torque estimate in [40]). This disturbance is expressed by the second component of Eq. (2). Since its implementation was not critical for the stability of the DIP control in the present context (compare [48]), it is omitted in the following.
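The ankle-level estimates (i) and (ii) follow the same pattern as at the hip. The sketch below mirrors Eqs. (6) and (7); the threshold and the whole-body parameters are illustrative assumptions, not values from the study:

```python
def estimate_foot_tilt(vel_ls, vel_lf, dt, threshold=0.003):
    """Eq. (6): foot-space velocity from the down-channeled leg-space
    velocity and the ankle proprioceptive velocity, vel_fs = vel_ls -
    vel_lf, then thresholded and integrated to a foot-space tilt (rad)."""
    tilt = 0.0
    for v_ls, v_lf in zip(vel_ls, vel_lf):
        v_fs = v_ls - v_lf
        if abs(v_fs) > threshold:      # detection threshold
            tilt += v_fs * dt          # mathematical integration
    return tilt

def gravitational_ankle_torque(a_bs, m_B=60.0, h_B=0.9, g=9.81):
    """Eq. (7): gravitational ankle torque from the sensory-derived
    whole-body COM excursion; h_B is treated as constant for small
    angular excursions."""
    return m_B * g * h_B * a_bs
```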

The hip and the ankle DEC controls can be viewed as separate control modules that are interconnected by ‘down-channeling’ of sensory information from the hip DEC control to the ankle DEC control. Recent experimental evidence suggests in addition an “up-channeling” of information between them (details in [49]). A schematic illustration of the DIP control is given in Fig. 4.

Fig. 4

Basic aspects of the DIP control concept used for PostuRob II. CH and CA are the hip and ankle controllers, Vest. is the vestibular input while Hip Prop. and Ankle Prop. are the proprioceptive inputs

3 Human and Robot Experiments

The extended DEC concept was tested experimentally by comparing human sway responses to support surface tilt in the sagittal plane with the sway responses of a bipedal robot (PostuRob II) in a human posturography laboratory.

3.1 Bipedal Robot PostuRob II

PostuRob II consists of mechanical, mechatronic, and computer control parts. The mechanical part comprises one trunk segment, two legs and two feet, with a total mass of 59 kg and a total height of 1.78 m. Two hip joints and two ankle joints connect the segments (4 DoF in the sagittal plane; Fig. 5). The mechatronic part comprises an artificial vestibular sensor [39] that is fixed to the trunk segment. Artificial pneumatic ‘muscles’ (FESTO, Esslingen, Germany; type MAS20) connected with serial springs (spring rate 25 N/mm) are used for actuation. An electronic inner torque control loop ensures that actual torque approximately equals desired torque. Sensory signals are sampled at 200 Hz by an acquisition board. Computer control is performed through a real-time PC that executes a compiled Simulink model using Real-Time Windows Target (The MathWorks, Natick, USA).

Fig. 5

PostuRob II. The robot consists of trunk, leg, and foot segments interconnected by the hip joints (a) and ankle joints (b). Sensory information stems from artificial vestibular system (c) and ankle and hip joint angle and angular velocity sensors. Actuation is through pneumatic ‘muscles’ (d). PostuRob II stands freely on a motion platform (e)

3.2 Experimental Methods

Seven healthy human subjects (3 female; mean age, 28 ± 3 years) participated after giving their informed consent. The subjects (eyes closed) and the robot stood freely on a motion platform (see Fig. 5), while six successive pseudorandom ternary tilt sequences, each 60.5 s long, with peak-to-peak amplitude of 4° were applied (PRTS stimulus; frequency range 0.017–2.2 Hz; [10]). The first rows in Fig. 6a, b show one 60.5 s long tilt stimulus sequence.

Fig. 6

Tilt stimulus and angular excursion responses of body in space and trunk in space from one representative subject (a) and of PostuRob II (b)

Trunk, leg, and COM B angular excursions in space were calculated on the basis of opto-electronically measured marker data (Optotrak 3020®; Waterloo, Canada) that were recorded with a sampling frequency of 100 Hz. Data analysis took into account human anthropometric measures [50] and was performed using custom-made software programmed in Matlab (The MathWorks, Natick, USA). The responses were expressed as gain and phase from the frequency response function [10] in a form where zero gain means no body excursion and unity gain means that body angular excursion equals platform tilt. Phase represents the temporal relationship between stimulus and response. Variability of averaged values was expressed as 95 % confidence limits [51].
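The gain and phase extraction from the frequency response function can be sketched as a single-frequency discrete Fourier analysis. This is a stdlib-only illustration; the PRTS analysis in [10] additionally averages across repetitions and adjacent frequencies, which is omitted here:

```python
import cmath
import math

def gain_phase(stimulus, response, freq, fs):
    """Gain and phase (deg) of the response relative to the stimulus at a
    single test frequency, from the discrete Fourier coefficients at the
    corresponding frequency bin."""
    n = len(stimulus)
    k = round(freq * n / fs)           # DFT bin of the test frequency
    w = [cmath.exp(-2j * math.pi * k * i / n) for i in range(n)]
    S = sum(s * wi for s, wi in zip(stimulus, w))   # stimulus coefficient
    R = sum(r * wi for r, wi in zip(response, w))   # response coefficient
    H = R / S                                       # frequency response
    return abs(H), math.degrees(cmath.phase(H))
```

For a response that is an attenuated, delayed copy of the stimulus, the function returns the attenuation as gain and the delay as a phase lag.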

3.3 Results

Subjects and PostuRob II balanced the tilts in similar ways. Time series of the responses of one subject and the robot are shown in Fig. 6. Note that the responses resemble each other, both for the body-space and the trunk-space responses. As shown in Fig. 7, the resemblance also holds for the mean gain and phase curves of the human subjects and the robot. In both, the gain values of trunk-space (TS) and of body-space (BS) vary similarly with stimulus frequency. In the low frequency range (<0.3 Hz), TS gain is lower than BS gain. In contrast, in the high frequency range (>0.3 Hz), TS gain exceeds BS gain, and the TS phase shows a larger lag.

Fig. 7

Tilt responses in terms of gain, phase and coherence curves of human subjects (a; 7 subjects, medians ± 95 % confidence intervals) and PostuRob II (b)

4 Conclusions

The feedback control system of a bipedal robot proposed here takes advantage of the sensor fusion and posture control mechanisms that were derived from findings in human experiments. The disturbance estimators used are non-iterative and considerably simpler than the estimators used in Kalman filter approaches. Furthermore, the multi-sensory feedback control is performed without integrating any dynamic model of the whole body into the control architecture. Filtering the estimates through a nonlinear operation provided by a deadband threshold tends to reduce noise, which appears to stem mainly from vestibular signals [39]. The noise shows 1/f properties and therefore overlaps with the bandwidth of human sensorimotor behavior. The threshold shuts off an estimator if there is no corresponding disturbance, which in a multi-DoF system may help to prevent accumulation of noise. The threshold also explains a non-linear behavior in the human disturbance responses that was observed with increasing stimulus magnitude [14]. Due to the non-linearity, small stimuli yield relatively smaller responses than larger stimuli. This is an aspect of the automatic sensory re-weighting, which emerged from the sensory network of estimators. Other important aspects are that the control automatically adjusts to changes in disturbance type and to sensor availability (for SIP, see [40]).

The good match obtained here between the data of the human subjects and PostuRob II suggests that the proposed sensor fusion and posture control mechanisms capture important constituents of the human balancing system. The application of the extended DEC concept to the balancing of upright stance using hip and ankle joints in terms of a DIP required the integration of sensory signals from almost the whole body. In a recent study that used this approach, coordination between hip and ankle joints emerged from the multi-sensory feedback control [49]. These experiences with the extended DEC concept led us to explore its usefulness with further DoF in a modular control architecture. In the generalized description, each DoF is controlled by one DEC control, which stabilizes a SIP (defined by the COM and moment of inertia of the segments above) on a moving support (given by the upper end of the segment below). Adjoining DEC controls are synergistically interconnected to exchange sensory information and disturbance estimates [52].

Taken together, although optimizing the DEC concept and its control parameters is still under research, the concept proved to have several promising features. These include: (i) a computationally very simple implementation, since almost all sensor fusions are based on algebraic operations; (ii) control complexity that scales linearly with the number of joints, since every joint is controlled as if it were a SIP and signals are exchanged only between adjoining modules; (iii) noise rejection that makes it possible to fuse the input of a high number of sensors; and (iv) a system, originally proposed for its power to predict human behavior, that can be employed to control actuated prostheses and exoskeletons to provide users with a human-like feeling.