
1 Introduction

Biomechanical analysis of human movement has been undertaken in various fields to aid the comprehension and evaluation of essential human motions. In sports and medical rehabilitation, biomechanics has recently come to be regarded as an indispensable field of study. In particular, electromyography (EMG) is extensively used as an effective way of understanding human muscle dynamics. An electromyogram provides a real-time representation of muscle activity and is commonly used to improve performance in electrophysiological studies and rehabilitation. Although existing methods provide some benefits, they still have drawbacks with respect to spatial and temporal consistency, which are important elements of biofeedback. In other words, simultaneously understanding the relationship between muscle activity and body motion is difficult and requires appropriate equipment and suitable training of the subject.

Neuromuscular rehabilitation with biofeedback is important for helping individuals with disabilities relearn motor functions. Because of the plasticity of the brain, repetitive actions can help the brain, spinal cord, and nervous system work together to re-route the signals that were interrupted by strokes, injuries, and other illnesses. Different robotic approaches aim to provide robot-aided sensorimotor stimulation and additional sensorimotor training of the paralyzed or paretic upper/lower limb, with the robotic device enhancing the motor outcome [1].

For example, MIT-MANUS [2] was introduced as a pilot system to investigate the potential of using robots to support the neuro-rehabilitation of upper-limb motor function, as illustrated in the leftmost image of Fig. 7.1. Further, LOKOMAT has been used to help people whose ability to walk has been impaired by a stroke, a spinal cord or brain injury, or a neurological or orthopedic condition learn to walk again [2, 3]. On the other hand, the robot suit Hybrid Assistive Limb (HAL), a full-body exoskeleton-type robot developed to support a physically challenged person’s daily life, can help elderly and disabled people. HAL has already been used as an assistive tool for neuro-rehabilitation [4–6].

Fig. 7.1

Rehabilitation robots: The rehabilitation of deficits in sensory motor function may not suppress the cause, and hence, rehabilitation robots/wearable devices may lead the brain to find new solutions

In addition to such robotic approaches, biofeedback technology has been widely used since the 1960s, and action-perception coupling and sensory-motor coordination have been studied extensively to treat certain medical conditions and improve human performance. The following standard definition of biofeedback has been formulated by leading professional organizations [7].

Biofeedback is a process that enables an individual to learn how to change physiological activity for the purposes of improving health and performance. Precise instruments measure physiological activity such as brainwaves, heart function, breathing, muscle activity, and skin temperature. These instruments rapidly and accurately “feed back” information to the user. The presentation of this information—often in conjunction with changes in thinking, emotions, and behavior—supports desired physiological changes. Over time, these changes can endure without continued use of an instrument.

In this chapter, a cognitive neuroscience approach for realizing augmented human technology in order to enhance, strengthen, and support human cognitive capabilities is described. Wearable devices allow the subject high mobility and broaden the spectrum of environments in which bodily motion and physiological signal recognition can be carried out. In this scenario, augmented human technology is regarded as the wearable device technology that enhances human capabilities, particularly cognitively assisted action and perception. The key issues related to augmented human technology can be summarized in Table 7.1.

Table 7.1 Key issues related to augmented human technology

In studies related to augmented human technology, coherence, that is, how similar two signals (the sensory input and the resulting outcome) are in time and frequency, plays an important role. Coherence training is needed for coupling or connecting the brain and the motor functions. The salient temporal features and frequency characteristics of these physiological signals are mapped onto visual or sound features by compact and lightweight wearable devices. These devices allow people to obtain visual or auditory feedback based on muscle tension while preserving the properties of the original signal.

From various studies, it is considered that these forms of feedback are sufficient for displaying the amount of and change in bodily motion and muscle activity, and wearable devices are appropriate in different situations. In principle, the rehabilitation of deficits in a sensory motor function may not suppress the cause but may lead the brain to find new solutions. Neuromuscular retraining with biofeedback is very useful for patients learning new motor controls. Moreover, qualitative and affective characteristics such as facial expressions are important in several different domains.

2 Related Works

It is known that there are visual, auditory, and somatosensory spatial representations in the superior colliculus [8]. In recent years, sonification and visualization have attracted attention, as they enable an intuitive understanding of muscle activity. Visualization is effective for presenting multichannel muscle activity information, thereby facilitating an easy understanding of the interaction between multiple muscles, whereas sonification is an effective method of presenting the variation in muscle activity over time. For instance, Nakamura et al. [9] and Delp et al. [10] developed graphical interfaces for visualizing musculoskeletal geometry. Mixed reality is also an effective visualization method that combines actual human motion and the corresponding muscle activity as a color variation on the display [11]. Although these methods are aimed at providing an intuitive understanding of muscle activity, they require an LCD display or large-scale equipment such as a motion capture system, thereby limiting the range of applications considerably.

In addition to visual and auditory display, multisensory feedback has recently attracted attention with the aid of advanced technologies. For example, Narumi et al. [12] investigated the illusion-based “Pseudo-gustation” method for changing the perceived taste of food by using a wearable device, which allows users to change the perceived taste on the basis of the cross-modal interaction of vision, olfaction, and gustation. Hamanaka et al. [13] proposed a headphone-type interface with auditory feedback according to the wearer’s head direction. When the user turns his or her head toward different musical instruments one at a time, different auditory feedback is given. Several haptic devices have also been invented, and some are commercially available.

On the other hand, in addition to typical sensory processing such as vision, audition, olfaction, taste, and haptics, several approaches have attempted to recognize human affective and emotional performance. For instance, a wearable approach to facial expression recognition uses displacement sensors attached to the facial skin, as in MIT’s Expression Glasses [14]. SixthSense [15] is an attempt to overlap the virtual world onto human reality. A camera and a small projector create a bi-directional feedback loop: a person’s physical experience and information from computing devices are fed into both worlds. As shown in Figs. 7.2, 7.3 and 7.4, these kinds of works are also regarded as augmented reality (AR) [16] or mixed reality (MR) [17]. Wearable devices are often used in these works and are successfully employed in various domains.

Fig. 7.2

Somatosensory computation for a man-machine interface from motion capture data and a musculoskeletal human model [8]

Fig. 7.3

Wearable devices for enhancing human capabilities: not only visual feedback but also auditory, olfactory, and haptic displays have been developed so far

Fig. 7.4

SixthSense (left and center): a wearable gestural interface that combines the virtual and physical worlds. Expression Glasses (right): a wearable approach to facial expression recognition

Several technologies are used for detecting human intentions, such as physiological measurement and the analysis of bodily motion. In addition to traditional sensing components for movement sensing and the recognition of body posture and motion [18], bioelectrical signals such as EMG and pulse waves are often used in related works. As EMG signals are electrical signals, they propagate to and from neighboring muscles in a phenomenon called crosstalk [19]. Taking this into account, electrodes can be placed away from the front of the face and still capture facial muscle activity. However, crosstalk has a critical drawback: the signals from all facial muscles, even those not involved in facial expressions, are propagated. Further, even during a single facial expression, the signals from all contracted muscle fibers are detected simultaneously as a mixed signal. Some attempts have been made to overcome the problem of distal detection in the upper extremities. In one case, crosstalk was used to successfully predict finger movements from signals measured distally on the arm.

Tsenov et al. [20] used independent component analysis (ICA) to separate the signals acquired on a person’s arm into their independent components for better classification. Naik et al. [21] used both ICA and an artificial neural network (ANN) to classify hand and finger movements.

In the following sections, several case studies with different wearable devices are described; these devices are not only capable of measuring human physiological signals such as EMG and pulse but are also designed to give feedback to the wearer through light emission, sound, and robotic actuators. A number of different devices for reading muscle activity, bodily motion, heart rate, and facial expressions are presented, along with their potential applications to assistive technology, rehabilitation, and entertainment.

3 Case Studies

3.1 BioLights: Visual EMG Biofeedback

A wearable interface is developed, which allows users to perceive muscle activity in an intuitive manner without restricting their movement [22]. Muscle activity or muscular tension is visualized on the surface of the body, in the shape and at the position of the muscle, in real time; this interface aids an intuitive understanding of multichannel muscle activity. It is designed as a wearable, thin, and light interface device, which enables a wide range of uses. Several experiments were conducted to evaluate the system performance. In addition, an experiment was conducted to investigate possible applications of the interface to neuro-rehabilitation; this experiment involved the use of the interface in combination with an exoskeleton.

Figure 7.5 shows an overview of the developed interface. It is focused on the rectus femoris, biceps femoris, and semitendinosus of both legs because these muscles contribute to the following basic movements of the lower limbs: extension, flexion, internal rotation, and external rotation. The developed interface consists of three modules: (i) measurement module, (ii) control module, and (iii) display module. Disposable electrodes and an amplifier are installed in the measurement module, and signal processing and filtering are conducted via the control module. These modules and all other equipment are installed and sewn onto a pair of sports pants. Two muscle-activity visualization systems are developed in this case study: the maximum voluntary contraction (%MVC) visualization system and the muscular tension visualization system.

Fig. 7.5

BioLights: light-emitting wear for visualizing upper- and lower-limb muscle activity

The interface is intended to be wearable and is hence designed using stretch fabric. For the display module, a light-emitting stretch fabric is utilized, which is composed of a warp of numerous optical fibers and a woof of nylon threads. The light-emitting surface is designed in accordance with the shape of the muscle for an intuitive understanding of the muscle activity. Scratching the optical fibers creates a slightly coarse surface from which light is emitted in an arbitrary shape matching the muscle. Consequently, it appears as if the user’s muscles are glowing red. In order to enhance the brightness, two high-luminosity LEDs are used as light sources for each muscle; these LEDs generate sufficient brightness for the glow to be identified even under fluorescent light. Several buttonhooks sewn onto the side of the garments allow the user to slip the garments on and off easily. Small bend sensors are positioned at the knee and hip joints to measure the angle variation associated with physical exertion. These sensors are also used for calculating the muscular tension with a biomechanical model. The total weight of the wearable interface is 1.1 kg, which enables users to use it without feeling restricted or experiencing difficulty in movement.

Maximum voluntary contraction (%MVC) is used in this system as the degree of muscle activity. A calibration process is mandatory before using the interface: in this case study, the signals captured under resting conditions and under maximum voluntary contraction are taken as 0 % and 100 %, respectively. A microprocessor is used as the controller, and a LiPo battery as the power source. This system realizes unrestricted muscle-activity visualization. The EMG signal is acquired through a 12-bit A/D converter operating at 1 kHz. A full-wave rectifier, a band-pass filter, and a comb filter are used for reducing artifacts and noise. After integral processing, the signal undergoes pulse-width modulation (PWM) for lighting the LEDs. The brightness is corrected using an exponential function to account for the logarithmic characteristics of human vision. The system’s properties can be modified by changing the number of integrations and the maximum PWM value. Increasing these values improves the resolution and produces relatively smooth light emission, whereas decreasing them improves the response time of the system. The former is considered effective for rehabilitation, and the latter for sports training.
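
As a rough illustration of this processing chain, the following sketch (written in Python rather than as microprocessor firmware) reproduces the rectification, filtering, integration, %MVC normalization, and exponential brightness-correction steps. The filter band, window length, and PWM range are assumptions made for illustration only.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000        # sampling rate of the 12-bit A/D converter (1 kHz)
PWM_MAX = 255    # assumed maximum PWM value (8-bit duty cycle)

def emg_to_pwm(emg, rest_level, mvc_level, band=(20.0, 450.0), win=100):
    """Convert a raw EMG trace into LED PWM values: band-pass filtering,
    full-wave rectification, integration, %MVC normalization, and an
    exponential correction for the eye's roughly logarithmic sensitivity."""
    # Band-pass filter to reduce motion artifacts and high-frequency noise
    b, a = butter(4, [band[0] / (FS / 2), band[1] / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, emg)
    # Full-wave rectification
    rectified = np.abs(filtered)
    # Moving-average "integration" over a window of `win` samples
    envelope = np.convolve(rectified, np.ones(win) / win, mode="same")
    # Normalize to %MVC using the calibration levels (rest = 0 %, MVC = 100 %)
    mvc = np.clip((envelope - rest_level) / (mvc_level - rest_level), 0.0, 1.0)
    # Exponential brightness correction, rescaled back to 0..1
    brightness = (np.exp(mvc) - 1.0) / (np.e - 1.0)
    return (brightness * PWM_MAX).astype(int)   # PWM duty cycle per sample
```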

On the other hand, the visualization of %MVC depends only on the activity of each muscle, whereas muscular tension reflects the interaction between muscles, i.e., the difference between muscle forces, and is therefore indispensable for analyzing the interaction between multiple muscles. A muscular-tension visualization system is thus developed by utilizing a modified Hill-Stroeve model [23, 24], which is a simplified version of the model proposed by Winters and Stark [25]. The knee and hip joint angles (obtained from the bend sensors positioned at each joint) and the EMG signals of the lower-limb muscles are used as inputs to the model.
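
The chapter does not reproduce the equations of the modified Hill-Stroeve model, so the sketch below is only a generic Hill-type tension estimate under assumed force-length, force-velocity, and passive-element shapes; it illustrates how an activation level derived from EMG could be combined with joint-angle-derived muscle kinematics into a tension value.

```python
import numpy as np

def hill_type_tension(activation, lm_norm, vm_norm, f_max=1000.0):
    """Generic Hill-type muscle tension: F = a * Fmax * f_l(l) * f_v(v) + passive.
    activation: 0..1 (e.g., normalized EMG envelope)
    lm_norm:    muscle length normalized by the optimal fiber length
    vm_norm:    shortening velocity normalized by the maximum shortening velocity
    (all shapes and constants below are illustrative assumptions)"""
    # Active force-length relation: bell curve around the optimal length
    f_l = np.exp(-((lm_norm - 1.0) ** 2) / 0.45)
    # Force-velocity relation: less force while shortening, slight gain while lengthening
    f_v = np.clip(1.0 - vm_norm, 0.0, 1.5)
    # Passive elastic element engaging beyond the optimal length
    f_p = np.where(lm_norm > 1.0, 0.5 * (lm_norm - 1.0) ** 2, 0.0)
    return f_max * (activation * f_l * f_v + f_p)
```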

As the possible applications in the field of neurorehabilitation were clearly demonstrated, as illustrated in Fig. 7.6, the developed interface is currently being used in clinical trials. Its effectiveness in allowing users to perceive muscle activity in both static and dynamic states was verified throughout the study. A notable feature of the developed interface is that the activity of the target muscles can be observed in real time, at the position of the muscle, by the wearer as well as by other observers. It is believed that this interface has various applications in the fields of sports and rehabilitation; it enables better coaching and a better relationship between patients and physical therapists. For the visualization of multichannel muscle activities, the interference between the EMG signals of individual muscles must be considered in future systems. This sort of wearable approach to visualizing human physiological signals provides a new tool for biofeedback devices. In addition to EMG signals, other biosignals such as heartbeats can be considered in further implementations of the device.

Fig. 7.6

BioLights: differences in muscle activity during squatting motion, (i) without and (ii) with assistance

3.2 BioTones: Auditory EMG Biofeedback

In order to directly convert human movement to sound, a wearable device that generates sounds on the basis of bioelectrical signals, particularly surface EMG, is developed [26], as shown in Fig. 7.7. There are some studies [27, 28] on sound generation from EMG signals in the field of computer music. Some biomedical studies [29–31] employed EMG feedback delivered in the auditory mode as a physiological indicator. However, these studies paid little attention to the kinds of sound used and to usability as a device. Thus, a useful wearable tool based on auditory biofeedback is proposed, which is suitable even for daily use. This novel technology can be of assistance in preventive healthcare, rehabilitation, and sports training, among others.

Fig. 7.7

BioTones: a wearable device for converting a person’s bioelectrical signals, in particular electromyogram signals, into audio sounds. The device is capable of extracting the signals and generating audio sounds. The user can simply listen to the generated sound using normal headphones

In the proposed method, the salient features of an EMG signal are mapped onto sound features by a wearable device that is capable of obtaining the EMG signal and creating an audio signal. This device allows people to obtain auditory feedback from muscle tension while preserving the property of the original signal. The proposed approach is suitable for several applications such as biofeedback treatment, sports training, and entertainment. In particular, an application to the biofeedback treatment for migraine headaches and tension headaches is considered.

The electromyogram monitor is used for monitoring human neuromuscular function by visualizing muscular activities. The monitor is widely used not only for medical purposes but also for the analysis of muscular activities observed during exercise in the field of sports science. However, there are several critical problems with visual feedback: (i) people are forced to stay in front of the monitor, and (ii) displaying multiple EMG signals on traditional monitors is difficult because of the complexity of the signal features, even though the principal feature is the activity level. On the other hand, auditory feedback is also effective for conveying the changes in and the characteristics of the bioelectrical signals caused by muscular activity. Sound has three basic characteristics: loudness, pitch, and timbre. Controlling not only loudness and pitch but also timbre makes it possible to represent a variety of muscular activity.

BioTones consists of a pair of disposable electrodes, a bioelectric amplifier, a microprocessor, a digital signal processor, and an audio amplifier. This enables it to extract bioelectrical signals and generate audio signals. The user can listen to the generated sounds simply through a normal headphone system. The developed prototype is designed to measure the bioelectrical signals on the surface of the flexor carpi radialis muscle. This muscle of the human forearm is used for flexing and abducting the hand. The device is fixed to the forearm with a tightened belt.

There are several mapping rules in accordance with the target application. Both the bioelectrical signal and the sound are characterized here by two features: level and frequency. Direct mapping is the direct correspondence between bioelectrical and audio signals in terms of level and frequency. Cross mapping, on the other hand, swaps the level and frequency characteristics between the bioelectrical and audio signals. This device does not aim to extract only the salient features of bioelectrical signals but to preserve the original features as much as possible. The purpose of these mappings is to represent a variety of muscular activity through a variety of sound features. Examples of bioelectrical and audio signals are shown in Fig. 7.8.
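
A minimal sketch of how the two mapping rules could be realized is given below; the feature extraction, pitch range, and normalization constants are assumptions for illustration, not the actual BioTones parameters.

```python
import numpy as np

FS = 1000  # assumed EMG sampling rate (Hz)

def features(emg, win=200):
    """Extract the two signal features used for mapping: level and dominant frequency."""
    seg = np.asarray(emg[-win:], dtype=float)
    level = np.sqrt(np.mean(seg ** 2))                      # RMS level of the last window
    spectrum = np.abs(np.fft.rfft(seg))
    freqs = np.fft.rfftfreq(len(seg), d=1.0 / FS)
    dominant = freqs[np.argmax(spectrum[1:]) + 1]           # dominant frequency, skipping DC
    return level, dominant

def synth(loudness, pitch, duration=0.05, fs=8000):
    """Generate a short sine tone; loudness is expected in 0..1."""
    t = np.arange(int(duration * fs)) / fs
    return loudness * np.sin(2 * np.pi * pitch * t)

def direct_mapping(emg):
    # Signal level -> loudness, signal frequency -> pitch
    level, freq = features(emg)
    return synth(min(level, 1.0), freq)

def cross_mapping(emg):
    # Characteristics are swapped: level -> pitch, frequency -> loudness
    level, freq = features(emg)
    pitch = 200.0 + 1000.0 * min(level, 1.0)    # assumed pitch range of 200..1200 Hz
    loudness = min(freq / 500.0, 1.0)           # assumed normalization of the frequency feature
    return synth(loudness, pitch)
```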

Fig. 7.8

Examples of bioelectrical and audio signals: (a) a bioelectrical signal, and (b) the converted audio signal. Signal conversion is performed by two types of mappings: direct mapping and cross mapping

A wearable device for EMG auditory biofeedback is introduced, which allows the wearer to perceive the muscles’ activity using a compact and lightweight device. Moreover, the characteristics of auditory stimuli as biofeedback are evaluated against common visual biofeedback. From the experimental results, it can be seen that auditory feedback is appropriate for displaying the amount of and change in muscle activity. It is considered that the auditory feedback device can be used easily in different situations, such as in the office, while moving, and while playing sports, because it does not require any display unit. Furthermore, sound conversion with varying loudness, frequency, or rhythm can be used as an alternative for conveying several muscle activities simultaneously, which is not easy with common visual biofeedback. The system can be extended to multiple-channel auditory biofeedback. The advantage of placing multiple BioTones devices on the body and listening to multiple channels in parallel will be investigated. A device with a built-in speaker has already been developed. Multiple channels can be implemented by giving each device a different pitch and using loudness conversion.

This novel method of sonification is an alternative to the visualization technique. It should be noted that the temporal and intensity resolution of auditory perception is higher than that of visual perception. The wearable device benefits a wide range of users because people can obtain auditory feedback solely by wearing it and listening to the sound at any time and place, even in transit or while walking.

3.3 Enhanced Touch: Physical Touch and Haptic Biofeedback

The haptic modality is often used in human communication. In this case study, a novel bracelet-type device has been developed for sensing physical contact among people in order to support direct communication by encouraging touch with appropriate visual feedback [32]. The device detects and records touches between users when they simply wear the device on their wrists, as illustrated in Fig. 7.9.

Fig. 7.9

Enhanced touch: This wearable device with electrodes senses touch and identifies other users. Six full-color LEDs are installed in the bracelet, which light up when a handshake occurs

Physical touch is a fundamental element of human communication, and several benefits and positive effects of touch have been reported in the communication and therapeutic domains, such as Positive Touch and Deep Touch Pressure [33]. Typical symptoms of autism among children include avoidance of direct touch with other people and a tendency to engage in lone activities. Some studies have reported that touch training by therapists contributes to the alleviation of these symptoms. Thus far, human coders have attempted to observe such activity in recorded video, but this is not an objective measure, and checking all the touches among people in a session is time consuming. Data such as the duration of touching, the partner, and the frequency of touches are desirable, but there is no practical equipment for this purpose. A similar technology has been used in an instrumented device [34], but users needed to grasp and hold the same device together.

Communication technology based on a body area network [35] is used to detect touch between people and to communicate through the human body. This technology is known as an alternative means of communication between humans and objects. Since the information is transferred via the human body, it can be utilized for sensing physical contact among people. The developed device, a wearable device with electrodes, senses touches and identifies other users. Six full-color LEDs are installed in the bracelet, which light up when a handshake occurs. The pair of electrodes is located on the inside of the case so as to fit the wrist. The device communicates with another device using a specific protocol. The received conducted signal is first amplified and demodulated and then handled by the microprocessor. Each microprocessor attempts to transmit a synchronization signal at random intervals within 10 ms in order to detect whether a touch has occurred and to synchronize with the other device.
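
The exact protocol is not specified in detail here, so the following toy simulation only illustrates the idea of random-interval synchronization frames and ID exchange over a channel that exists only while two wearers are touching; the class, timing values, and IDs are hypothetical.

```python
import random

class TouchDevice:
    """Simplified simulation of ID exchange over a body channel. The real device
    modulates a conducted signal through the skin; here the 'channel' is just a
    shared list that both devices can use only while the wearers are touching."""

    def __init__(self, device_id):
        self.device_id = device_id
        self.partner_id = None

    def try_sync(self, channel):
        # Transmit a synchronization frame at a random interval within 10 ms
        delay_ms = random.uniform(0.0, 10.0)
        channel.append((delay_ms, self.device_id))

    def listen(self, channel):
        # Any frame from a different ID means a touch has occurred
        for _, sender in sorted(channel):
            if sender != self.device_id:
                self.partner_id = sender
        return self.partner_id

# Two wearers shake hands: the body channel becomes available to both devices
channel = []
a, b = TouchDevice("A"), TouchDevice("B")
a.try_sync(channel)
b.try_sync(channel)
print(a.listen(channel), b.listen(channel))  # -> B A
```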

Several visual effects are programmed to visualize not only the physical touch itself but also the touching conditions, such as the duration of the touch and the history of past touches. For example, color blending is implemented as effective visual feedback to show the duration of touching. A unique color from the three primary colors (red, green, and blue) is assigned to each device.

When a user wearing the device touches another person who is also wearing the device, the LEDs of both devices light up with their corresponding unique colors. During the handshake, the two different colors change and are gradually blended for as long as the touch lasts. In other words, the degree of color blending represents the duration of the touch, and the LED colors of the two devices eventually converge to the same color. This manner of lighting allows the proposed method to measure the duration of physical contact, along with the device’s ability to identify other devices.
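
One simple way to realize such duration-dependent color blending is sketched below; the blending time constant and the convergence toward the average of the two device colors are assumptions, not the documented behavior of the device.

```python
def blended_color(own_rgb, partner_rgb, touch_duration, max_duration=5.0):
    """Blend from the device's own color toward the average of the two device
    colors as the touch continues; after `max_duration` seconds both devices
    end up showing the same color."""
    alpha = min(touch_duration / max_duration, 1.0)
    return tuple(
        round((1.0 - alpha) * own + alpha * 0.5 * (own + partner))
        for own, partner in zip(own_rgb, partner_rgb)
    )

# Example: a "red" device shaking hands with a "blue" device
print(blended_color((255, 0, 0), (0, 0, 255), touch_duration=0.0))  # own color at first
print(blended_color((255, 0, 0), (0, 0, 255), touch_duration=5.0))  # both converge to the blend
```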

The devices for sensing human contact can be used for recognizing a social network based on physical contact. The bracelet-type device lights up, with a different color for each wearer, when the wearer shakes hands with another wearer. Not only contact sensing but also electrical communication is used for identifying and sharing the device ID. It is considered that the proposed device can provide a novel, playful method of interaction between humans. It is also planned to verify whether the device contributes to motivating touch among users by lighting LEDs or by playing interactive social games. This technology can be used for supporting and enhancing experiences of play and social interaction among people using playful devices. The device mediates between humans without losing the fundamental properties of human activities. This is a cyber-physical system for measuring and presenting human physical activities such as physical contact, spatial movement, and facial expressions, in which the psychological and social aspects of human activities can also be enhanced.

3.4 HOTARU: Visual Biofeedback Based on Heartbeats

The heartbeat is one of the fundamental vital signs of human beings. Although the heart has an intrinsic rhythm that beats without external nervous or hormonal input, the heart rate is modulated by the autonomic nervous system; the rhythm of the heart therefore provides an important signal from the body.

In this case study, a novel method of heartbeat tracking is proposed, and a wearable device to visualize the heartbeat, named HOTARU (“firefly” in Japanese), is developed [36]. A number of systems and devices for heartbeat measurement exist, which can be used for measuring heart function or exercise volume and as a psychological barometer for measuring stress or relaxation. However, since the measurement of biological signals is not stable because of various unexpected noise sources, the user is asked to firmly attach the sensor, for example, the electrode, and to keep still during the measurement of the heartbeat rate. The fast Fourier transform (FFT) is traditionally used for measuring the heartbeat rate, with noisy segments of the measured signal being ignored.

A wearable device is developed to indicate the heartbeat in real time with differently colored LEDs. The color changes according to the heartbeat rate, and the LEDs blink in synchronization with the heartbeat pulse. The developed system can not only track the heartbeat but also interpolate it from noisy signals in real time. The heartbeat is extracted from the raw signal of a photoplethysmographic (PPG) sensor, which contains noise caused by body movement or other unexpected factors. In the proposed method, when the system cannot determine the heartbeat because of the sensor’s alignment or a temporary loss of the pulse, the heartbeat is interpolated on the basis of the past signal and a linear prediction algorithm.

The developed device consists of a microprocessor, LED displays, and a PPG sensor that measures the heartbeat pulse by using the optical absorption of the human body. The user is asked to attach the PPG sensor, a clip-type interface, to the ear and to wear a bracelet-type interface with LEDs on his/her wrist. The brightness of the LEDs changes in sync with the heartbeat, and their color corresponds to the heartbeat rate (HBR). As shown in the left image of Fig. 7.10, blue implies that the HBR is less than 60, green means that the HBR is between 60 and 80, and red implies that the HBR is more than 80.
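
A minimal sketch of the HBR-to-color mapping and the pulse-synchronized brightness described above is shown below; the triangular pulse shape used for the brightness is an assumption.

```python
def hbr_to_color(hbr):
    """Map the heartbeat rate (beats per minute) to an RGB LED color."""
    if hbr < 60:
        return (0, 0, 255)    # blue: HBR below 60
    elif hbr <= 80:
        return (0, 255, 0)    # green: HBR between 60 and 80
    else:
        return (255, 0, 0)    # red: HBR above 80

def led_brightness(phase):
    """Pulse the LED in sync with the detected heartbeat; `phase` is 0..1 within one beat.
    A simple triangular pulse is assumed here."""
    return max(0.0, 1.0 - 2.0 * abs(phase - 0.5))
```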

Fig. 7.10

HOTARU (“firefly” in Japanese): a conceptual image of using the developed device

Traditionally, the measurement of the heartbeat has focused on the heartbeat pulse itself, but the tracking accuracy depends on the environment. It is usually not stable because of noise, and users are asked to rest during the measurement. The intervals of the heartbeat pulse correspond to a low frequency, from approximately 0.5 to 2.0 Hz, and these intervals are assumed not to change rapidly. However, the precise intervals are difficult to recognize from the measured signals by peak detection alone because the noise, which is usually impulse noise, resembles the heartbeat pulse. FFT has traditionally been used for this analysis.

A novel method of heartbeat tracking is then implemented. Pattern matching is carried out between the measured signal P(x,t) and an ideal heartbeat pulse Pc(x), which is prepared in advance. The cross-correlation z(x,t) is calculated with a fixed time window of the same length as the ideal heartbeat pulse. The matching result z is expected to be a periodic signal synchronized with the real heartbeat pulse. The computational cost of obtaining all possible cross-correlation values is very high and not suitable for real-time calculation; hence, only a limited set of coefficients is used in this process. In addition, in order to reduce the computational cost, the cross-correlation values are obtained only around the time estimated by the linear prediction process.
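
The following sketch illustrates the idea of template matching restricted to a window around the predicted beat time; the search width, normalization, and variable names are assumptions and may differ from the actual implementation.

```python
import numpy as np

def match_template(signal, template, t_pred, search=50):
    """Normalized cross-correlation between the measured signal P and the ideal
    pulse template Pc, evaluated only near the predicted beat time t_pred (in
    samples) to keep the computation feasible in real time."""
    signal = np.asarray(signal, dtype=float)
    template = np.asarray(template, dtype=float)
    n = len(template)
    tc = (template - template.mean()) / (template.std() + 1e-9)
    scores = {}
    for t in range(max(0, t_pred - search), min(len(signal) - n, t_pred + search)):
        w = signal[t:t + n]
        wc = (w - w.mean()) / (w.std() + 1e-9)
        scores[t] = float(np.dot(wc, tc) / n)   # correlation coefficient, roughly in [-1, 1]
    best_t = max(scores, key=scores.get)
    return best_t, scores[best_t]
```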

Then, a modified peak detection algorithm is employed, combining a Kalman filter for predicting the intervals of the heartbeat with linear prediction and a uniform distribution function. Assuming that these intervals do not change rapidly, the next heartbeat interval can be estimated from the transition of the previous several intervals. Further, the center value of the probability density function (PDF) is used for detecting the peak candidate in P, and the peaks in z are detected only within the reliable time range given by this PDF.
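
A simplified version of this interval-gated peak detection is sketched below, using a linear trend fit in place of the Kalman filter and a fixed tolerance window in place of the uniform-distribution PDF; the thresholds and window sizes are assumptions.

```python
import numpy as np

def predict_next_interval(intervals, order=3):
    """Estimate the next beat-to-beat interval (in samples) from the recent
    intervals by a simple linear trend fit (a stand-in for the prediction step)."""
    recent = np.asarray(intervals[-order:], dtype=float)
    if len(recent) < 2:
        return float(recent[-1])
    slope = np.polyfit(np.arange(len(recent)), recent, 1)[0]
    return float(recent[-1] + slope)

def detect_next_beat(corr, last_peak, intervals, tolerance=0.2, threshold=0.5):
    """Accept the strongest correlation peak only inside a window around the
    predicted beat time; otherwise interpolate the beat from the prediction."""
    interval = predict_next_interval(intervals)
    expected = last_peak + interval
    half = max(1, int(tolerance * interval))
    lo, hi = max(0, int(expected) - half), min(len(corr), int(expected) + half)
    window = corr[lo:hi]
    if len(window) and window.max() > threshold:
        return lo + int(np.argmax(window)), True    # beat detected from the signal
    return int(expected), False                     # beat interpolated from the prediction
```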

This wearable device opens new experiences among users, allowing them to understand each other’s physiological status during day-to-day activities by presenting the current heartbeat in a different color. The LED lights up in sync with the heartbeat, and the color changes according to the calculated HBR. The developed device allows users to move and play freely without having to attach the sensor or electrode firmly. Potential applications include tools for children to promote social interaction. User testing with several people is planned. Sound feedback according to the heartbeat pulse, for computer games and VR avatars, will also be implemented.

3.5 Head Orientation Sensing for Cognitively Assisted Locomotion

From the point of view of support for motion and locomotion, it is quite important to consider not only the lower limbs but also the gaze and the head because they are tightly coupled with the gait behavior during human locomotion. In addition, qualitative and affective characteristics such as facial expressions are important in several different domains. In this section, a head-mounted wearable device for detecting the head orientation is described in order to utilize kinematic cues during human locomotion.

Mobility aids such as manual and electric wheelchairs are widely used by people with reduced mobility. Such equipment helps elderly and disabled people meet their mobility needs. In addition, robot-assisted locomotion, such as exoskeletons, has received a considerable amount of attention in recent years because of its potential use not only as a mobility aid but also for locomotion training.

It is known that the head is turned toward the future walking direction during natural human locomotion, and this head anticipation and changes in gaze direction occur according to the path [37–39], as illustrated in Fig. 7.11. The head direction thus anticipates the future body trunk direction and the walking direction. Along with body balance and posture, the human head orientation plays an important role in the prediction of the walking direction and future motions such as standing and sitting.

Fig. 7.11

Head stabilization: Physiology of perception and action during various locomotor tasks in humans [37, 38]

In this case study, a novel wearable device is proposed for the measurement of the head orientation and position, which can be applied to extend the existing mobility aids. The wearable device can provide important cues for predicting the future walking direction and behavior by observing the head direction and the difference between this direction and the body trunk direction. The developed device, which can be easily worn and removed, measures the head orientation and position irrespective of the location and enables the prediction of the future walking direction in real time for assisted locomotion, such as exoskeleton robots and wheelchairs. It is also designed to be small and lightweight for long-term comfortable use.

Head Anticipation Measurement in Natural Walking: It is known that during human locomotion, the gaze turns first, the head turns next, the body direction then follows, and finally, the walking direction changes. By observing this head anticipation, one can predict the future walking direction. An experiment was conducted to detect the head anticipation by using the developed device and to evaluate the detection accuracy against a motion capture system.

The subjects were asked to walk naturally along figure-eight trajectories in the 2.5 m × 3.0 m measurement space of the motion capture system. There were no visual cues except the lines indicating the end of the measurement space. One trial consisted of two laps of walking, and each subject performed three trials. The subjects were three adults aged 24, 25, and 30 years.

Figure 7.12 shows the head angle relative to the body, as measured by the developed device, summed with the body angle measured by the motion capture system. The average latency (standard deviation) from the head turn to the change in walking direction was 687 (196) ms when measured by the developed device and 707 (178) ms when measured by the motion capture system.
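
One plausible way to estimate such a latency offline is to find the lag that maximizes the cross-correlation between the head-orientation and walking-direction time series, as sketched below; the sampling rate and maximum lag are assumptions, and the actual analysis procedure may differ.

```python
import numpy as np

def anticipation_latency(head_yaw, walk_dir, fs=100.0, max_lag_s=2.0):
    """Estimate how far the head direction leads the walking direction (in ms)
    by finding the lag that maximizes the cross-correlation of the two series.
    Both series are assumed to be sampled at `fs` Hz and of equal length."""
    h = (head_yaw - np.mean(head_yaw)) / (np.std(head_yaw) + 1e-9)
    w = (walk_dir - np.mean(walk_dir)) / (np.std(walk_dir) + 1e-9)
    max_lag = int(max_lag_s * fs)
    # Shift the walking direction backwards: the head should anticipate it
    scores = [np.mean(h[:len(h) - lag] * w[lag:]) for lag in range(max_lag)]
    best_lag = int(np.argmax(scores))
    return 1000.0 * best_lag / fs   # latency in milliseconds
```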

Fig. 7.12

Experiments with an electric wheelchair and natural locomotion (right) and an example of head anticipation during walking

The measurement accuracy of the developed device was evaluated using the motion capture system. The head anticipation latency to the walk/locomotion direction during natural walking and during wheelchair locomotion with the developed device was also measured; the latency was 687 ms in the case of walking and 694 ms in the case of wheelchair locomotion. Therefore, it was verified that it is possible to use the developed device for predicting the direction of walking/wheelchair locomotion.

This is a novel wearable device for measuring the head orientation on the basis of both inertial sensors and an optical marker tracker without accumulated errors; it is designed for robot-assisted locomotion, particularly the prediction of the direction of walking/wheelchair locomotion. Cognitively assisted locomotion is a new approach to lower-limb exoskeleton control based on head and gaze motor behavior. Using behavioral analysis and cognitive neuroscience findings based on head and gaze tracking, a head-mounted measurement device for sensing the head orientation was developed. This study included the analysis of patients who recovered their locomotor skills at cognitive and meta-cognitive levels, such as biofeedback, mood influence, self-consciousness, and confidence, rather than at mechanical levels.

3.6 Face Reader: Reading Facial Expressions for Affective Feedback

In the previous sections, the sensing and recognition of body posture and motion were mainly described. In addition to sensing movement, the valence and intensity of affective reactions play an important role in human interactions. In particular, facial expressions play a significant role in the exchange of interpersonal information by providing additional information about the emotional state or intention of the person displaying them [40]. Thus far, several approaches have been followed to read emotions automatically from the face. The most traditional approach to recognizing emotional facial expressions uses video and photographic cameras together with computer vision algorithms to identify facial expressions. Another approach to facial expression recognition is a wearable one. However, to date, no reliable and unobtrusive interfaces have been developed to read facial expressions and display them over long periods.

In this case study, the use of facial bioelectrical potentials captured on the side of the face is proposed in order to obtain information about facial expressions. Because of the mixed nature of crosstalk, it is necessary to transform the sampled signal. A classification method is introduced that combines two techniques: ICA to transform the signals into independent components and an ANN to accurately identify facial expressions.
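
A minimal sketch of such an ICA-plus-ANN pipeline is shown below, using scikit-learn's FastICA and MLPClassifier as stand-ins; the window shape, RMS features, and network size are assumptions and may differ from the actual Face Reader implementation.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.neural_network import MLPClassifier

def train_expression_classifier(raw_windows, labels):
    """raw_windows: array of shape (n_windows, n_channels, n_samples) holding the
    bioelectrical signals captured on the side of the face; labels: one expression
    label (e.g., 'smile', 'frown', 'neutral') per window."""
    n_win, n_ch, n_smp = raw_windows.shape
    # Learn the unmixing matrix on the whole recording (samples x channels)
    ica = FastICA(random_state=0, max_iter=1000)
    ica.fit(raw_windows.transpose(0, 2, 1).reshape(-1, n_ch))
    # Per-window feature vector: RMS of each independent component
    feats = np.array([
        np.sqrt(np.mean(ica.transform(w.T) ** 2, axis=0)) for w in raw_windows
    ])
    # Small feed-forward network as the expression classifier
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    clf.fit(feats, labels)
    return ica, clf

def classify(ica, clf, window):
    """Classify a single new window of shape (n_channels, n_samples)."""
    feat = np.sqrt(np.mean(ica.transform(window.T) ** 2, axis=0))
    return clf.predict(feat.reshape(1, -1))[0]
```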

This is a novel method for reading expressions on the human face through an unobtrusive wearable device by applying computational methods to bioelectrical signals captured on the side of the face. In contrast to previous approaches, the proposed approach offers robustness against occlusion, changing lighting conditions, and changing facial angles. Electrode locations were carefully selected on the basis of facial displacement and physiology in order to capture usable signals without covering or inhibiting the expressions. The captured signals were considered a mixture of distal electromyographic signals and other biological signals and were used for achieving a personal, pattern-based identification of facial expressions. More than 90 % accuracy was achieved for recognizing a “smile” and more than 85 % for both the “smile” and the “frown”, even in the presence of crosstalk from other muscles. Figure 7.13 shows the developed wearable device, called Face Reader, which can not only identify emotional facial expressions in real time but also display them in a continuous manner.

Fig. 7.13

Face Reader: a device for reading facial expressions on the basis of bioelectrical signals, showing the model of EMG signal propagation, the electrode positions, and the proposed interface device

The goal of this research is to develop an emotional communication aid to improve human-human and human-system interactions through an emotion reading system that can recognize the subject’s emotions in real time and display the output in different formats. Further, it must be unobtrusive to the user and must not inhibit expressions; it should also work in any environment, irrespective of changing lighting conditions and the changing positions of the subject.

Face Reader [41] has applications in several areas, particularly in therapy and assistive technology. Further, it can aid the visually impaired in the following manner: the listener can perceive the speaker’s facial expressions through alternative forms of communication such as audio or vibro-tactile stimulation. Another application lies in increasing the quality of life for patients suffering from facial paralysis, where the signals obtained from the healthy side of the face can be used for controlling a robot mask that produces an artificial smile on the paralyzed side [42]. Because it is an unobtrusive wearable device, it can be used outside the laboratory for continuous expression detection in environments where cameras are not supported or where subjects require high mobility.

For example, Face Reader can be used in human-computer emotional interactions for diverse types of agents, such as animating an on-screen avatar or for emotion-based coaching of a robot [43] by using facial expressions. Figure 7.14 shows examples of potential applications of the device.

Fig. 7.14

Smiling Avatar (right) and Emotionally Assisted Interaction: Emotion reader is used for controlling an avatar and a humanoid robot

4 Conclusions

In this chapter, a cognitive neuroscience approach for realizing augmented human technology by using several wearable devices in order to enhance, strengthen, and support human cognitive capabilities was described. Different physiological signals and human kinematic and physiological characteristics were considered throughout the presented case studies.

In addition to augmented human technology (AHT), human enhancement technologies (HETs) were regarded as techniques that can be used not only for treating illness and disability but also for enhancing human characteristics and capacities. Several approaches using the developed wearable devices were attempted. As biomedical sciences and enhancement technologies progress, new ethical and social implications should be considered [44]. Thus far, many such enhancement technologies have already become widely available, for example, cosmetic surgery for aesthetic enhancement. Future enhancement technologies include those related to genetics, pharmacology, cognitive functions, and longevity.

In order to create a future society where assisted lifestyles become widely available, we need technology that supports, strengthens, and enhances limited human capabilities. Support for both physical and cognitive functions is definitely needed for future rehabilitation and physical exercise because the rehabilitation of deficits in sensory motor functions may not suppress the cause but may lead the brain to find new solutions.