1 Introduction

During the last century technology has become an integral part of modern society. It is hard to imagine life without access to the internet, the ability to communicate with people through mobile phones, or the possibility to share personal experiences with friends all over the world in electronic social networking services. Traveling large distances with motorized vehicles like cars, trains, or planes appears entirely normal in a globalized world. Technology in general helps to overcome the natural limitations of mankind and extends the physical capabilities of each human being. The limits of each individual person cannot be defined in general, but strongly depend on the physical capabilities of an individual and his or her environment. In the case of individuals with motor, sensory, or cognitive disabilities, technology can help to perform functions that might otherwise be difficult or impossible. In this case technology is called assistive technology (AT). The definition of assistive technology most frequently cited in the relevant literature first appeared in the US ‘Technology-Related Assistance for Individuals with Disabilities Act of 1988’ as “any item, piece of equipment, or product system, whether acquired commercially off the shelf, modified, or customized, that is used to increase, maintain, or improve functional capabilities of individuals with disabilities”. This is the internationally accepted definition of AT.

Assistive technologies are meant to help people in their primary functional tasks. Wheelchairs, scooters, walkers, and canes are assistive technologies for mobility; related products include lifts on vehicles and portable ramps. More people use assistive technologies related to mobility (6.4 million in Germany) than any other general type of assistive technology (Scherer 2002). But while AT for mobility is the largest single group of AT products, there are many others. As of April 2013, ABLEDATA (http://www.abledata.com), the AT product database sponsored by the National Institute on Disability and Rehabilitation Research, US Department of Education, lists almost 40,000 assistive devices (ADs). Among them are electronic or environmental aids for daily living as well as technologies for personal care and household management, augmentative communication devices, technologies to compensate for motor or sensory (hearing, eyesight) loss, and hardware, software, and peripherals that assist people with disabilities in accessing computers or other information technologies.

The latter is most important for individuals with severe motor impairments as a consequence of trauma or disease, among them individuals with Amyotrophic Lateral Sclerosis (ALS), brainstem stroke survivors, and people with high spinal cord injury. ALS is a neurodegenerative disease of unknown etiology that is characterized by rapidly progressive paralysis, leading within a few years of symptom onset to a locked-in state with the complete loss of limb movements, the ability to speak, and – in the most severe cases – even the loss of voluntary eye movements. The incidence of ALS in the European Union is about 2.16 per 100,000 persons per year (Logroscino et al. 2010).

Stroke is one of the most prevalent neurological conditions worldwide and one of the leading causes of motor impairment in the population (Warlow et al. 2008). In Europe, 1.1 million first strokes occur every year, of which around 4 % are brainstem strokes (Truelsen et al. 1997). A severe brainstem stroke leads to nearly complete or total paralysis with preserved cognitive functions, the so-called locked-in syndrome.

In Europe an estimated 330,000 people are suffering from a spinal cord injury (SCI), with 11,000 new injuries per year (Ouzký 2002; van den Berg et al. 2010). Forty percent of them are tetraplegic due to injuries of the cervical spinal cord, with paralysis of the lower as well as the upper extremities. The bilateral loss of the grasp function severely limits the affected individuals’ ability to live independently (Anderson 2004; Snoek et al. 2004) and to retain gainful employment post injury (NSCISC 2011). Besides traditional ADs for daily living, like adapted eating tools or tools for operating a keyboard, neuroprostheses based on Functional Electrical Stimulation (FES) are offered to individuals with tetraplegia for the restoration of a completely lost grasping function or the improvement of a weak one (Rupp and Gerner 2007).

A survey among individuals with severe motor impairments revealed that the prioritized needs of persons with high spinal cord injury, neurodegenerative diseases, or cerebrovascular disorders are “mobility” and “activities of daily living” (Zickler et al. 2009). The needs of participants who used communication aids differed in part from those of the other participants: they wanted to improve their independence in personal expression and social interaction. Considering the adoption of a new AT solution, participants rated “functionality” as the most important aspect, followed by “possibility of independent use” and “ease of use”. The study also revealed dissatisfaction with current ADs for communication (16 %) and manipulation (30 %). This shows that there is a need for better and/or alternative AT solutions in the areas of manipulation, communication, environmental control, and entertainment.

Most of the existing ADs require a substantial number of residual functions to be preserved. As a consequence, persons with the most severe impairments are not able to use these devices sufficiently. Even end-users who are basically able to use a certain AD may not be able to use it over an extended period of time due to mental and physical fatigue. Therefore, it is crucial that users have a choice of options and that healthcare and rehabilitation professionals make them available, since each individual will find that some of the available options are more productive and work better than others.

Brain–Computer Interfaces (BCIs) may serve as an alternative human–machine interface for the control of ADs. BCIs are technical systems that provide a direct connection between the human brain and a computer (Wolpaw et al. 2002). Such systems are able to detect thought-modulated changes in electrophysiological brain activity and transform them into control signals. Most BCI systems rely on brain signals recorded non-invasively by placing electrodes on the scalp (electroencephalogram, EEG). A BCI system consists of four sequential components: (1) signal acquisition, (2) feature extraction, (3) feature translation, and (4) classification output, which interfaces to ADs. These components are controlled by an operating protocol that defines the onset and timing of operation, the details of signal processing, the nature of the device commands, and the oversight of performance (Shih et al. 2012). At present, EEG-based BCI systems can function in most environments with relatively inexpensive equipment and thus offer the possibility of practical BCIs in the field of AT. BCIs may provide an additional control channel and may serve as a valuable adjunct to traditional user interfaces.
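
The four-component structure maps naturally onto a software pipeline. The following minimal sketch illustrates how the stages could be chained; all function names, parameters, and numbers are illustrative assumptions, not the API of any cited system:

```python
import numpy as np

# Minimal sketch of the four sequential BCI components described above.
# All names and parameters are illustrative assumptions, not a real API.

FS = 256            # sampling rate in Hz (assumed)
WINDOW_S = 1.0      # analysis window length in seconds


def acquire_signal(n_channels=8):
    """(1) Signal acquisition: random data stands in for an EEG amplifier."""
    return np.random.randn(n_channels, int(FS * WINDOW_S))


def extract_features(eeg):
    """(2) Feature extraction: log band power per channel, 8-12 Hz band."""
    spectrum = np.abs(np.fft.rfft(eeg, axis=1)) ** 2
    freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / FS)
    alpha = (freqs >= 8) & (freqs <= 12)
    return np.log(spectrum[:, alpha].mean(axis=1))


def translate_features(features, weights, bias=0.0):
    """(3) Feature translation: a linear classifier reduced to a score."""
    return float(features @ weights + bias)


def device_output(score, threshold=1.0):
    """(4) Output stage: map the score to a discrete command for the AD."""
    return "SELECT" if score > threshold else "IDLE"


weights = np.random.randn(8) * 0.1   # would normally come from calibration
eeg = acquire_signal()
command = device_output(translate_features(extract_features(eeg), weights))
print(command)
```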

This chapter will be devoted to providing an overview of the state of the art of non-invasive BCIs for the control of electronic devices for communication and computer access, electronic mobility aids like wheelchairs or mobile telepresence robots, and upper extremity neuroprostheses for the restoration of grasping and reaching.

2 BCIs for Communication

BCI research in the field of communication started with the idea of supporting severely disabled people. The loss of speech, and therefore of the possibility to communicate thoughts and needs, tremendously affects a person’s well-being and quality of life (Ganzini et al. 1999; Veldink et al. 2002). The first time a BCI was successfully used for communication was in 1988 (Farwell and Donchin 1988). With this spelling system, words could be composed letter by letter from letters arranged in rows and columns (for a matrix example see Fig. 2.1).

Fig. 2.1 Example of a P300 Speller matrix. Letters of the alphabet are arranged in rows and columns, as are numbers and additional punctuation marks

One letter was chosen by implementing an oddball paradigm (Sutton et al. 1965). Rows and columns were highlighted randomly while the user focused on the one specific letter (target letter) he or she wished to spell and tried to ignore all other letters highlighted in other rows or columns (non-target letters). Each time the target letter was highlighted, a P300 signal occurred. The P300 is a positive deflection in the EEG occurring about 300 ms after stimulus onset and is a reliable, easy-to-detect event-related potential (Fig. 2.2). As each letter in the matrix is located at the intersection of exactly one row and one column (for example, the B in Fig. 2.1 is located at the intersection of the first row and the second column), each target letter can be identified by a classifier that recognizes the largest amplitudes for rows and columns and selects the letter accordingly. Many later BCI communication paradigms were based on this approach and successfully used for communication by unimpaired subjects and patients with severe motor impairments (Hoffmann et al. 2008; Nijboer et al. 2008; Guger et al. 2009; Kleih et al. 2010; Kaufmann et al. 2011). However, other brain signals were also used to set up a BCI. The first ever long-term independent use of a BCI was shown in a locked-in patient communicating by the regulation of slow cortical potentials (Birbaumer et al. 1999). Slow cortical potentials (SCPs) represent shifts of the depolarization level of apical dendrites in cortical layers I and II and develop slowly after stimulus onset. The locked-in patient wrote the first communicated messages with such an SCP BCI system and used it for several years for independent communication at his home (Birbaumer et al. 1999). He wrote messages, for example to his caregivers, with the so-called ‘Thought Translation Device’ (Birbaumer et al. 1999) and also extensively used NESSI, an SCP-controlled browser for the world wide web (Bensch et al. 2007).
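
The row/column logic lends itself to a simple decoding rule: average the classifier score for each row and each column over repetitions, then take the letter at the intersection of the strongest row and column. A minimal sketch, with made-up scores standing in for real EEG classification and an illustrative 6 × 6 matrix layout:

```python
import numpy as np

# Hypothetical 6x6 speller matrix as in Fig. 2.1 (illustrative layout).
MATRIX = [list("ABCDEF"), list("GHIJKL"), list("MNOPQR"),
          list("STUVWX"), list("YZ1234"), list("56789_")]

def select_letter(row_scores, col_scores):
    """Pick the letter at the intersection of the highest-scoring row
    and column, e.g. averaged P300 classifier outputs per flash."""
    r = int(np.argmax(row_scores))
    c = int(np.argmax(col_scores))
    return MATRIX[r][c]

# Fake averaged scores for 6 rows and 6 columns; in a real system these
# would come from epochs time-locked to each row/column flash.
row_scores = [0.1, 0.2, 0.1, 0.0, 0.9, 0.1]   # row 5 strongest
col_scores = [0.8, 0.1, 0.2, 0.1, 0.0, 0.1]   # column 1 strongest
print(select_letter(row_scores, col_scores))   # -> 'Y'
```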

Fig. 2.2 Example of a P300. Activation (μV) is plotted against time (ms). Approximately 400 ms after the stimulus (vertical line) the amplitude of the P300 deflection is highest for the target stimuli (bold curve), while for non-target stimuli (dotted curve) no deflection can be observed

2.1 Visual P300 Paradigms

Nowadays, researchers mostly work with the P300 signal for communication purposes because its characteristics (relatively easy to elicit, short delay after stimulus onset) allow for faster spelling compared to SCP-based systems. Additionally, the P300 signal is very robust. ALS patients used a P300-based BCI with 36 choices (letters and numbers) for more than 40 weeks and no decrease in accuracy (consistently around 80 %) was found (Nijboer et al. 2008). Similarly, in an ongoing study, an ALS patient in the locked-in state has been using a P300-controlled BCI for 1 year for painting (see below) and neither a decrease in speed or accuracy nor an attenuation of the P300 amplitude has been observed (Holz et al. 2011). Numerous other clinical studies confirm the efficacy of the P300-BCI in paralyzed patients with four-choice responses, such as “Yes/No/Pass/End” (Sellers and Donchin 2006) or “Up/Down/Left/Right” for cursor movement (Piccione et al. 2006; Silvoni et al. 2009).

In a recent study a new paradigm was introduced to enhance P300 control (Kaufmann et al. 2012). The authors superimposed a famous face, in this case the face of Albert Einstein, on the matrix display. Every time the target letter was highlighted, not only was an increased P300 signal detected, but the recognition of the famous face also elicited the N170 (Bentin et al. 1996; Eimer 2000) and N400 (Eimer 2000) evoked potentials. Using all three evoked potentials improved the signal-to-noise ratio tremendously, allowing for highly accurate classification and a more reliable selection of letters. This approach enabled, for the first time, two severely motor-impaired end-users who had been unsuccessful with the regular P300 speller to spell with 100 % accuracy. Additionally, only a single sequence was needed, i.e. the target letter had to be highlighted only once in its row and once in its column before the letter was correctly selected (Kaufmann et al. 2012).

However, no patient in the complete locked-in state has so far been able to use a BCI system for communication successfully and reliably. When a patient is in the complete locked-in state (CLIS), she or he completely loses control over any voluntary muscle activation, including eye movements (Hayashi and Kato 1989; Murguialday et al. 2011). Therefore, non-visual channels seem to be the only possible way to establish communication in individuals in CLIS, and tactile (Kaufmann et al. 2012) as well as several auditory BCI approaches have been investigated.

2.2 Auditory and Tactile Paradigms

Auditory BCIs allowing for a binary choice were recently introduced (Halder et al. 2010; Hill et al. 2012). In these systems a ‘Yes’ or ‘No’ decision can be detected, thereby guaranteeing at least the most basic communication of approval or refusal, albeit tested in unimpaired volunteers only. The advantage of binary choice paradigms is that even users who are unable to focus on complex visual matrices are in principle able to use such a system. For these end-users it is better to present stimuli in a dichotic listening task, in which attention has to be focused on one of two streams of information (Hill et al. 2012; Pokorny et al. in press), rather than in sequential order (Halder et al. 2010).

For more complex spelling applications, with which whole messages can be conveyed, sequential presentation seems to be more advantageous (Furdea et al. 2009; Höhne et al. 2011; Schreuder et al. 2011). The user’s intention can be derived from the brain response more directly than in a binary choice paradigm, in which several subsequent choices would be necessary to narrow down and finally identify the target letter. One recently investigated approach for complex auditory BCI systems included spatial information. Six speakers were distributed evenly around the user in a circle (Schreuder et al. 2011). By focusing attention on one of the speakers, a group of letters can be selected. Each of the letters in this group is subsequently allocated to one speaker position, so only two steps are needed to select the desired letter. This paradigm is the auditory complement of the Hex-o-Spell paradigm for the visual modality (Blankertz et al. 2006). Of 21 unimpaired subjects testing this auditory paradigm, 16 were able to spell a full sentence of at least 26 characters. In an extended approach (Höhne et al. 2011), the spatial information was provided by headphones, which simplified the setup. A user chose one of nine groups of letters, similar to the letter grouping on mobile phone keypads, by focusing on one of three tones differing in frequency, presented to the left ear, the right ear, or both. After the detection of the selected letter group, the user again had to focus on one of the presented tones and could thereby select a single letter. In ten unimpaired subjects an accuracy of 77 % was achieved when spelling a 36-character sentence. However, again, this paradigm has not yet been tested with severely motor-impaired patients and thus it remains open whether this approach is feasible and effective in a clinical setting (Zickler et al. 2011).
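
The two-step selection reduces a 27-symbol choice to two small decisions, much as multi-tap text entry does on phone keypads. A minimal sketch of the bookkeeping; the grouping below is an illustrative assumption and not the layout used in the cited studies:

```python
# Two-step auditory selection: first pick a group, then a letter within it.
# The grouping below is an illustrative assumption, similar to phone keypads.
GROUPS = ["ABC", "DEF", "GHI", "JKL", "MNO", "PQR", "STU", "VWX", "YZ_"]

def two_step_select(group_choice, letter_choice):
    """group_choice and letter_choice are the indices decoded from the
    user's attention to one of the presented tones/speakers per step."""
    group = GROUPS[group_choice]
    return group[letter_choice]

# Example: the user first attends to the tone mapped to group 2 ("GHI"),
# then to the tone mapped to the second letter of that group.
print(two_step_select(2, 1))   # -> 'H'
```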

The first tactile two-class BCI, based on attention-modulated steady-state somatosensory evoked potentials (SSSEPs), was reported by Müller-Putz et al. (2006). The index fingers of both hands were simultaneously mechanically stimulated in the “resonance”-like frequency range of the somatosensory nervous system. Four unimpaired subjects were trained to modulate their SSSEPs by focusing attention on one of their index fingers. Classification accuracies of up to 80 % were achieved using only three bipolar EEG channels covering the primary somatosensory cortex.

In summary, there are several promising paradigms in BCI research that hold the potential to enhance the communication skills of severely motor-impaired end-users. For a first proof of principle it is enough to test these paradigms in unimpaired subjects. However, sufficient performance in unimpaired subjects does not necessarily transfer to applications with motor-impaired end-users. In a single case study, the best possible strategy was investigated to enable communication in an end-user diagnosed with locked-in syndrome (Kaufmann et al. 2013). In this person a clearly distinguishable P300 response elicited by a visual oddball paradigm was found, but communication with the visual speller matrix could not be established. The tactile P300 response, in contrast, was most prominent and most successful for classification. Following a user-centered approach (Zickler et al. 2011), tactile input will therefore be used for the setup of a P300-based communication device for this end-user.

2.3 Alternative Implementations of BCI-Controlled Communication

So far this subchapter has focused on communication in its purest sense as one major goal in BCI research and one major contribution to including severely motor-impaired people in social interaction. However, there are also extended applications of BCI-based inclusion: (1) access to the internet and (2) a different form of expression, namely painting.

It was successfully shown that ALS patients could browse the internet using an application based on the P300 Speller matrix (Mugler et al. 2010). In this application two screens were needed, one for the internet page display and one for a regular spelling matrix. By coding each link on the webpage with one letter or sign of the spelling matrix, a user could mimic a click on a link by focusing on the target sign. Using this method, all three ALS patients ordered a book from an online store without help from their family or caregivers. Furthermore, it has been shown that BCI-controlled e-mailing and internet surfing can be realized by combining a BCI with commercially available assistive technology (Fig. 2.3; Holz et al. 2011; Riccio et al. 2011; Zickler et al. 2011).

Fig. 2.3 Connection of the commercially available software QualiWorld (QualiLife SA, Paradiso, Switzerland) and the P300 BCI. Instead of letters, red dots are flashed, indicating the link on the screen to be chosen

Another P300-based application is Brain Painting (Munssinger et al. 2010). The P300 matrix was adapted so that, instead of letters, painting commands could be selected from the flashing matrix (Fig. 2.4).

Fig. 2.4 The P300 Brain Painting matrix with commands for colors, size, zooming, blurring, etc.

For example, shapes such as rectangles or circles could be chosen, and when a color was selected, the ‘object’ was transferred onto a ‘canvas’ on a separate screen. By zooming into the canvas, blurring objects, and playing with color, astonishing paintings were created (Fig. 2.5) by end-users in the locked-in state. Most recently, an exhibition was launched in Rostock, Germany, in which an ALS patient used Brain Painting on site. In conclusion, with this fascinating application an entertaining and highly satisfying form of inclusion has been established through the use of BCI technology. One locked-in patient, named HHEM, uses Brain Painting daily, as she used to be a painter before being diagnosed with ALS (Holz et al. 2013).

Fig. 2.5 The Brain Painting picture “The Moths’ Revenge” by the artist and ALS patient HHEM

In summary, in the preceding paragraphs we presented promising BCI paradigms and applications that have been successfully used as communication tools by end-users with severe motor impairments. All of them are meant to support end-users in expressing their needs and wishes in their own words, in interacting and communicating with their environment independently, or in expressing themselves creatively. Several end-users stated how important it is for them to contribute to the development of communication systems for end-users in need. Although, in its current state, they would not consider BCI an option for communication and interaction in daily life, patients are highly satisfied and even happy about contributing to BCI research that could help future potential end-users. One end-user’s quote may suffice to illustrate this attitude: “The participation in this research truly is the one and only thing that I can now do that I could not have done without being diagnosed with ALS”.

3 Hybrid BCIs

A novel development in BCI research is the introduction of the hybrid BCI concept (Müller-Putz et al. 2011). A hybrid BCI (hBCI) consists of a combination of several BCIs or of a BCI with other input devices (Allison et al. 2012). These input devices may be based on the registration of biosignals other than brain signals, e.g. electromyographic activity. Using this approach, a single command signal can be generated either by fusion of different input signals or by simply selecting one of them. In the latter case the input signals can be dynamically routed based on their reliability, i.e. the signal quality is continuously monitored and the input channel with the most stable signal is selected (Kreilinger et al. 2011). In the case of signal fusion, each of the input signals contributes to the overall command signal with a dedicated weighting factor (Leeb et al. 2011). These factors are in general not static, but can be dynamically adjusted according to the signals’ reliability, which is quantified by appropriate quality measures. The hBCI is fully compliant with the user-centered design concept (ISO 2010). The key message of this approach is that the technology has to be adapted to the individual user’s abilities and needs, and not vice versa. By combining BCIs with established control devices, more end-users may gain access to assistive technologies in general, or the use of existing assistive devices may be simplified in certain applications.

3.1 BCIs as an Additional Input Channel

A concrete example of an hBCI is the control of a computer by the combination of an EEG-based brain switch and a mouth-controlled joystick, the IntegraMouse® (LIFEtool Solutions GmbH, Linz, Austria). The IntegraMouse measures, in two dimensions, the direction of force applied to a stick held in the mouth and moves the cursor on a screen in this direction accordingly. It is intended to be used by individuals with high SCI who can still control their head movements and are able to produce a change in air pressure in the sense of a suck-and-puff control for simulating a mouse-click. However, this user group has a restricted breathing volume due to the paralysis of muscles contributing to lung inflation, or may even be ventilator-dependent. Therefore, it is hard or even impossible for these end-users to voluntarily generate relevant air pressure changes and thereby produce a mouse-click. It has been shown that a one-channel BCI can reliably detect short imaginations of movements, which can be used to set up a simple brain switch (Müller-Putz et al. 2010) substituting the mouse-click functionality of the IntegraMouse®. Unimpaired subjects were able to use this hBCI to control the mouse cursor on a screen with minimal head movements and to select files or programs with the brain switch (Clauzel et al. 2012).

3.2 BCIs as an Alternative Input Channel

Another way of using an hBCI is to provide an alternative input channel in the case of degrading reliability of the input channels. This can happen either due to mental fatigue or stress (BCI) or due to muscular fatigue or spasticity (traditional user interface). A key prerequisite for using the BCI in such a setup is the implementation of measures that continuously quantify the reliability of each input channel and automatically switch between them. It was shown in a first implementation of the hBCI that unimpaired users could move a car in a game-like feedback application to collect coins and avoid obstacles, either via a manual joystick or via BCI control (Kreilinger et al. 2011). The outputs of both input devices were constantly monitored with four different long-term quality measures to evaluate the current state of the signals. As soon as the quality dropped below a certain threshold, a monitoring system switched to the other control mode, and vice versa. Additionally, short-term quality measures were applied to check for strong artifacts that could render voluntary control impossible; these measures were used to block actions during periods when highly uncertain signals were recorded. The switching capability gave the users more functionality: moving the car was still possible even in a condition in which one control source did not work at all (Kreilinger et al. 2011).
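
The switching logic can be reduced to a small supervisor that routes whichever channel currently exceeds a quality threshold and vetoes commands during artifacts. A minimal sketch under assumed quality measures; the two thresholds and the stand-in quality values are illustrative, not the four measures used in the cited study:

```python
# Minimal sketch of quality-based channel switching; the quality measures
# themselves are stand-ins, not the four measures used in the cited study.

QUALITY_THRESHOLD = 0.6     # long-term quality below this triggers a switch
ARTIFACT_THRESHOLD = 0.3    # short-term quality below this vetoes commands

class ChannelSwitcher:
    def __init__(self):
        self.active = "joystick"   # start with the manual channel

    def route(self, commands, long_term_q, short_term_q):
        """commands/qualities are dicts keyed by channel name."""
        # Switch if the active channel's long-term quality degrades.
        if long_term_q[self.active] < QUALITY_THRESHOLD:
            self.active = max(long_term_q, key=long_term_q.get)
        # Veto output entirely during strong short-term artifacts.
        if short_term_q[self.active] < ARTIFACT_THRESHOLD:
            return None
        return commands[self.active]

sw = ChannelSwitcher()
cmd = sw.route(commands={"joystick": "LEFT", "bci": "RIGHT"},
               long_term_q={"joystick": 0.2, "bci": 0.9},
               short_term_q={"joystick": 0.9, "bci": 0.8})
print(sw.active, cmd)   # -> bci RIGHT (joystick quality degraded)
```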

3.3 Fusion of Multiple Input Channels

Apart from simply switching between multiple input signals, continuous fusion of at least two signal sources is also a promising method of setting up an hBCI. The basic idea behind the fusion approach is to improve the reliability and accuracy of the hBCI output(s) by dynamically weighting the input signals based on their influence on the overall classification result. Ideally, this approach achieves an overall signal quality better than that of any single input signal.

A first practical implementation is based on the fusion of brain (EEG) and muscular (EMG) signals into one control signal (Leeb et al. 2011). The results obtained in unimpaired participants show that a good level of hBCI control could be achieved independently of the level of muscular fatigue. The multimodal fusion of muscular and brain activity yielded better and more stable performance compared to the single conditions. In a second experiment, muscular fatigue was simulated by reducing the amplitude of the EMG signals to 10 %, thereby decreasing the signal-to-noise ratio. Even in this case good control, i.e. moderate and graceful degradation of performance compared to the non-fatigued case, and a smooth handover could be achieved. This means that in a real-world scenario an end-user would rely exclusively on muscular control in the beginning, and with increasing physical and muscular fatigue the BCI would progressively take over. Vice versa, if the EEG contains a lot of noise or if the end-user becomes mentally fatigued, the weight of the muscular channels is increased. Such systems therefore allow users constantly reliable hBCI control even as they become more exhausted or fatigued during the day.
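
Fusion can be expressed as a reliability-weighted sum of the per-channel classifier outputs, with the weights renormalized as reliabilities change. A minimal sketch under that assumption; the cited study uses a more elaborate fusion scheme than this simple weighted average:

```python
import numpy as np

def fuse(probabilities, reliabilities):
    """Combine per-channel class probabilities into one control signal.

    probabilities: list of arrays, one per channel (e.g. EEG, EMG),
                   each holding class probabilities that sum to 1.
    reliabilities: one non-negative quality estimate per channel.
    """
    w = np.asarray(reliabilities, dtype=float)
    w = w / w.sum()                      # normalize the channel weights
    return sum(wi * p for wi, p in zip(w, np.asarray(probabilities)))

# EMG is confident, EEG less so; initially the EMG channel dominates.
p_emg = np.array([0.9, 0.1])
p_eeg = np.array([0.6, 0.4])
print(fuse([p_emg, p_eeg], reliabilities=[0.8, 0.2]))
# As muscular fatigue sets in, the EMG reliability estimate drops and
# the EEG channel progressively takes over:
print(fuse([p_emg, p_eeg], reliabilities=[0.1, 0.9]))
```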

4 BCIs for Grasping and Reaching

One type of EEG-based BCI exploits the modulation of sensorimotor rhythms (SMRs). These rhythms are oscillations in the EEG occurring in the alpha (8–12 Hz) and beta (18–26 Hz) bands that can be recorded over the sensorimotor areas on the scalp between the ears. Their amplitude typically decreases during actual movement and, similarly, during mental rehearsal of movements (motor imagery, MI) (Pfurtscheller and Lopes da Silva 1999; Neuper et al. 2005). Several studies have shown that people can learn to modulate the SMR amplitude by practicing MI of simple movements, such as hand or foot movements, to control output devices (Pineda et al. 2003; Cincotti et al. 2008). This process occurs in a closed loop: the system recognizes the SMR amplitude changes evoked by MI and these changes are instantaneously fed back to the user. This neurofeedback procedure and mutual man–machine adaptation enables BCI users to control their SMR activity and thereby the complete system.
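
The amplitude decrease is commonly quantified as event-related desynchronization (ERD), the relative band power drop with respect to a resting baseline. A minimal sketch with synthetic signals; the sampling rate and signal construction are assumptions for illustration only:

```python
import numpy as np

FS = 250   # assumed sampling rate in Hz

def band_power(x, low, high):
    """Mean spectral power of signal x in the [low, high] Hz band."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / FS)
    return spec[(freqs >= low) & (freqs <= high)].mean()

def erd_percent(baseline, imagery, low=8, high=12):
    """ERD as the percentage power decrease during motor imagery
    relative to rest; positive values indicate desynchronization."""
    p_rest = band_power(baseline, low, high)
    p_mi = band_power(imagery, low, high)
    return 100.0 * (p_rest - p_mi) / p_rest

# Synthetic 2 s segments: a strong 10 Hz rhythm at rest that is
# attenuated during motor imagery, plus noise.
t = np.arange(2 * FS) / FS
rng = np.random.default_rng(0)
rest = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
mi = 0.4 * np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
print(f"ERD: {erd_percent(rest, mi):.1f} %")   # large positive value
```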

MI-BCIs make it possible to detect an intended movement based on brain signals. Thus, they are an exciting option for the control of neuroprostheses based on Functional Electrical Stimulation (FES) for restoring permanently lost hand and arm functions after cervical SCI.

4.1 Grasp Neuroprostheses

Today, when surgical options are lacking (Hentz and Leclercq 2002), the application of FES is the only possibility of restoring permanently restricted or lost functions to a certain extent. Over the last 20 years, FES systems with different levels of complexity have been developed and some of them have been introduced into the clinical environment (Popovic et al. 2002). These systems deliver short current impulses eliciting physiological action potentials in the efferent nerves, which cause contractions of the innervated yet paralyzed muscles of the hand and the forearm (van den Honert and Mortimer 1979). On this basis FES artificially compensates for the loss of voluntary muscle control. In individuals with a chronic SCI a profound disuse atrophy of the paralyzed muscles occurs, which leads to severely decreased fatigue resistance and force generation capability. This atrophy can be reversed by low-frequency FES training even many years after the SCI. The time needed to achieve meaningful fatigue resistance and force depends on the individual status of the muscles and ranges from weeks to months (Gordon and Mao 1994).

When using FES in a compensatory setup, the easiest way of improving a weak or lost grasp function is the application of multiple surface electrodes. Generally, the major advantage of non-invasive systems is that they can be offered to patients for temporary application even at a very early stage of primary rehabilitation, during which the electrode setup has to be repeatedly adapted to the neurological status because of spontaneous recovery.

With only seven surface electrodes placed on the forearm, two grasp patterns, namely the lateral and the palmar grasp, can be restored (Rupp et al. 2012). The lateral grasp pattern provides the ability to pick up flat objects between the flexed fingers and the flexing thumb (Fig. 2.6), while the palmar grasp pattern, in which the thumb is positioned in opposition to the index finger (Fig. 2.7), allows larger objects to be handled. With the combination of surface electrodes and a finger-synchronizing orthosis, the difficulties with the daily reproduction of movements and the large variations of grasp patterns depending on the wrist rotation angle can be overcome (Leeb et al. 2010). Nevertheless, the disadvantages of the limited excitability of deeper muscle groups and of pain sensations persist. Additionally, patients describe the placement of the electrodes as complicated (Kilgore et al. 2001). Since surface electrodes tend to drop off over time, an additional fixation mechanism in the form of a sleeve or an orthosis is needed, which users often rate as uncomfortable or not cosmetically acceptable.

Fig. 2.6 Three states of the sequence of the lateral grasp pattern. Subfigures (a–c) show the hand fully open, fingers closed with an extended thumb, and the full lateral pinch

Fig. 2.7 Two states of the palmar grasp pattern. Subfigure (a) shows the hand fully open and (b) the hand fully closed with the thumb touching the tip of the index finger

Since these are relevant limitations for everyday use, implantable neuroprostheses for the permanent restoration of motor functions have been developed. Implantable devices include the BION (Loeb and Davoodi 2005), a small single-channel microstimulator that is injectable through a cannula; a stimulus router system (Gan and Prochazka 2010), an implantable electrode that picks up the current from surface electrodes; a multichannel implantable stimulator (Smith et al. 1987); and a modular, networked, wirelessly controlled system for stimulation and sensing (Wheeler and Peckham 2009). Implantable systems inherently bear the risk of infections and the risks associated with surgery, and complex revision surgeries are necessary in the event of a failure of any implanted component. Though it has been shown that such events occur rather rarely (Kilgore et al. 2003), this has to be clearly communicated to patients who decide to receive an implant.

One of the implantable grasp neuroprostheses, the Freehand system, achieved commercialization in 1997 and has been successfully used by over 300 C5/C6 individuals with SCI throughout the world, making it the most widespread implantable neuroprosthesis for the restoration of the grasp function (Keith and Hoyen 2002). Though the first systems have been operating for 15 years, commercialization stopped in 2001, not for clinical but for financial reasons. Freehand users control hand grasp through an external joint angle sensor operated by movements of the opposing, non-paralyzed shoulder; an implanted stimulator, powered and controlled by radio frequency, translates the sensor signal into electrical impulses delivered to the hand muscles (Rupp and Gerner 2007). The results of a multi-center trial including 51 Freehand users quantitatively demonstrated its high level of functional efficacy (Peckham et al. 2001) and economic benefits (Creasey et al. 2000).

Despite all the technical progress made, it has to be clearly stated that the degree of functional restoration provided by the currently available neuroprostheses, whether based on surface or implantable electrodes, is rather limited. Even with the most sophisticated systems only one or two grasp patterns can be restored, without independent activation of single fingers or joints (Wheeler and Peckham 2009). Additionally, the movements and forces generated by FES are less graduated compared to the physiological condition. This is particularly the case when low forces for fine control are needed.

4.2 Hybrid Neuroprosthesis for Grasping and Reaching

Most of the current neuroprostheses for the upper extremity have only been used in individuals with SCI with preserved shoulder function and elbow flexion. Only a few experimental studies have shown the feasibility of generating meaningful elbow movements with FES in subjects with very high spinal cord lesions (Crago et al. 1998). These systems have not been tested in real-world conditions during daily living, since rapid muscle fatigue occurs due to the non-physiological, synchronous activation of paralyzed muscles by electrical stimulation. A major problem in FES-based restoration of movements is the occurrence of a combined lesion of the spinal fiber tracts and motoneurons in subjects with cervical SCI (Mulcahey et al. 1999; Dietz and Curt 2006). Stimulated denervated, flaccid muscles do not produce enough force to contribute effectively to any functional restoration (Kern et al. 2010). To overcome these limitations, a so-called hybrid neuroprosthesis is proposed, consisting of a combination of FES and an orthosis with actively driven or at least (de-)lockable joints. In general, an orthosis is a mechanical device that fits to a limb and corrects a pathological joint function. An actively driven orthosis supports the joints’ movements with active drives such as an electrical motor or a pneumatic actuator. The disadvantages of these exoskeletons are their mechanical complexity, their limited usability in daily activities, and their need for a sufficient power supply (Schill et al. 2011). Therefore, these systems are mainly intended for users in whom sufficient movements cannot be generated by FES. If sufficient joint movements can be generated by FES, a more efficient solution is the application of an orthosis with a (de-)lockable joint. In its released state this joint allows free movement, while in the locked state it keeps a fixed joint position. This helps to avoid fatigue of the stimulated muscles needed to maintain a stable joint position. Both types of FES-hybrid orthoses may expand the group of potential users of an upper extremity neuroprosthesis in the future.

At this point it has to be emphasized that the neurological status and functional capabilities of individuals with SCI, even with the same level of injury, vary to a large degree. As a consequence, an upper extremity neuroprosthesis necessarily has to consist of several modules that can be personalized according to the capabilities, needs, and priorities of an end-user. Though this fact is well known in the AT community, only a few technical solutions incorporate it (Rohm et al. 2011).

4.3 BCIs for Control of Neuroprostheses

Over the last decade it has become obvious that the user interfaces of all current FES devices are not optimal in the sense of natural control, relying on either the movement or the underlying muscle activation of a non-paralyzed body part to control the coordinated electrical stimulation of muscles in the paralyzed limb (Kilgore et al. 2008; Moss et al. 2011). In individuals with a high, complete SCI and the associated severe disabilities, not enough residual functions are preserved for control. This has been a major limitation in the development of a reaching neuroprosthesis for individuals with a loss not only of hand and finger but also of elbow and shoulder function.

Several BCI approaches, mainly based on steady-state visual evoked potentials (SSVEPs), have been introduced as substitutes for traditional control interfaces for the control of an abdominal FES system (Gollee et al. 2010), a wrist and hand orthosis (Ortner et al. 2011), or a hand and elbow prosthesis (Horki et al. 2010). Another exciting application is the use of a BCI to detect voluntary movement intentions in the presence of arm tremor for the control of a compensatory FES (Rocon et al. 2010). Beyond these applications, BCIs have enormous potential to provide natural control of a grasping and reaching neuroprosthesis, in particular in individuals with a high SCI, by relying on volitional brain signals directly involved in upper extremity movements.

In 2003 pioneering work showed for the first time that MI-BCI control of a neuroprosthesis based on surface electrodes is feasible (Pfurtscheller et al. 2003a). In this single case study the restoration of a lateral grasp was achieved in a tetraplegic subject with a chronic SCI and completely missing hand and finger function. The end-user was able to move through a predefined sequence of grasp phases by imagining foot movements detected by a brain-switch with 100 % accuracy. He had already reached this performance level prior to the experiment, after several years of training with the MI-BCI (Pfurtscheller et al. 2003b), and has maintained it for almost a decade by regularly continuing the training (Enzinger et al. 2008).

A second feasibility experiment applied short-term BCI training in another tetraplegic individual, who had been using a Freehand system for several years. After 3 days of training the end-user was able to control the grasp sequence of the implanted neuroprosthesis with moderate, but sufficient, performance (Müller-Putz et al. 2005).

In these first attempts the BCI was used more as a substitute for the traditional neuroprosthesis control interface than as an extension. With the introduction of FES-hybrid orthoses (Fig. 2.8c) it has become more and more important to increase the number of independent control signals. With the recent implementation of the hybrid BCI framework it became feasible to use a combination of input signals rather than the BCI alone. In a first single case study, a combination of an MI-BCI and an analog shoulder position sensor was proposed (Rohm et al. in press). With upward/downward movements of the shoulder the user controls the degree of elbow flexion/extension or of hand opening/closing. The routing of the analog signal from the shoulder position sensor to the control of the elbow or the hand, as well as access to a pause state, is determined by a digital signal provided by the MI-BCI (Fig. 2.8a). With a short imagination of a hand movement the user switches from hand to elbow control or vice versa; a longer activation leads to a pause state with stimulation turned off, or reactivates the system from the pause state (Fig. 2.8b). With this setup a highly paralyzed end-user, who had no preserved voluntary elbow, hand, or finger movements, was able to perform several activities of daily life, among them eating a pretzel stick, signing a document, and eating an ice cone (Fig. 2.9), which he was not able to do without the neuroprosthesis.
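
The control scheme in Fig. 2.8b is essentially a small state machine driven by the duration of the detected motor imagery: short activations toggle between hand and elbow routing, long activations toggle the pause state. A minimal sketch under assumed duration thresholds (the actual values in the cited study may differ):

```python
# Minimal state-machine sketch of the hybrid control scheme (Fig. 2.8b).
# The duration thresholds are illustrative assumptions.

SHORT_MI_S = 1.0    # ~1 s of detected MI: toggle hand/elbow control
LONG_MI_S = 2.0     # sustained MI beyond this: toggle the pause state

class HybridController:
    def __init__(self):
        self.mode = "hand"      # shoulder sensor currently drives the hand
        self.paused = False

    def on_mi_detected(self, duration_s):
        if duration_s >= LONG_MI_S:
            self.paused = not self.paused    # stimulation off/on
        elif duration_s >= SHORT_MI_S and not self.paused:
            self.mode = "elbow" if self.mode == "hand" else "hand"

    def on_shoulder_position(self, elevation):
        """Route the analog shoulder signal to the active function."""
        if self.paused:
            return None
        return (self.mode, elevation)   # e.g. ("hand", 0.7) -> hand opening

ctrl = HybridController()
ctrl.on_mi_detected(1.2)                 # short MI: switch hand -> elbow
print(ctrl.on_shoulder_position(0.7))    # ('elbow', 0.7)
ctrl.on_mi_detected(2.5)                 # long MI: pause stimulation
print(ctrl.on_shoulder_position(0.7))    # None
```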

Fig. 2.8 Schematic overview of the setup of the hybrid-BCI-controlled hybrid arm neuroprosthesis (a, top), example flowchart of the hybrid control scheme integrating the shoulder joystick and the MI-BCI (b, right), and a photograph of an end-user with the complete system (c, bottom)

Fig. 2.9 Sequence of pictures showing the eating of an ice cone. The user starts in the hand control mode and lifts his left shoulder to open the right hand for grasping the ice cone (a). After successfully grasping the ice cone (b), the user emits a BCI command to switch from hand control to elbow control and lifts his shoulder to flex his elbow (c). Now the user licks the ice (d). Finally, the user lowers his left shoulder to extend the elbow (e), puts the cone back in its original place, and switches back to hand mode to release the cone (f)

Despite the tremendous progress made in recent years, there are still many open issues that have to be addressed for a successful application of BCI-controlled neuroprostheses in tetraplegics. One of the major limitations of the studies in humans is that the results were obtained either in unimpaired subjects or in selected users with SCI who already showed a high BCI performance in the first screening session. This raises the question to what extent the published results can be generalized to a wider user population. To address this question, the BCI performance of 15 end-users with complete SCI, eight paraplegic and seven tetraplegic, was assessed (Pfurtscheller et al. 2009). Five of the paraplegic individuals had an initial accuracy above 70 %, but only one tetraplegic achieved this performance level. Though the reason for this is still unclear, it was found that movement-related β-band modulations, which are necessary for good BCI performance, differ significantly between individuals with SCI and unimpaired individuals (Gourab and Schmit 2010). Though only a small number of subjects with SCI were involved in that study, the results indicate a correlation between the decreased amplitude of the event-related synchronization (ERS) immediately following the movement attempt and the severity of the impairment of the lower extremities in which the movement was attempted.

In general, the performance of a non-invasive BCI as a neuroprosthesis control interface is rather low compared to traditional control interfaces based on either the movement or the underlying muscle activation of non-paralyzed body parts (Hart et al. 1998; Rupp et al. 2008). This applies not only to the limited number of possible commands per minute, but also to their nature, which is mainly digital (brain-switch). Furthermore, the EEG is a non-stationary signal and BCIs therefore require calibration and tuning. The latency and low number of degrees of freedom of non-invasive BCIs are major drawbacks for real-time, complex neuroprosthesis control (Lauer et al. 2000). These may be overcome with implantable BCI systems. However, these sometimes highly invasive systems have not yet reached maturity beyond the experimental level (Hochberg et al. 2012; Collinger et al. 2013).

The ultimate goal of a BCI-controlled neuroprosthesis is to establish a technical bypass around the lesion of the spinal cord and to provide end-users with natural control, enabling them to accomplish movements in an unconscious and intuitive way. The current state of technology is far from this goal, because the imagined movements used for control are those that produce the strongest effects on SMR signals. In an extreme case, this might be an imagination of foot movements used to control an upper extremity neuroprosthesis. A prerequisite for natural BCI control of a neuroprosthesis is the independence of an imagined and an FES-generated movement of the same limb. A first study with unimpaired subjects showed that MI of hand movements can be used to control the FES of the same hand for a grasping and writing task (Tavella et al. 2010). Nevertheless, a real breakthrough in neuroprosthesis control would be the decoding of body movements from the EEG. First attempts in this direction have been made recently and might pave the way for non-invasive BCI systems with a more intuitive control scheme (Bradberry et al. 2010; Ofner and Muller-Putz 2012). For the further development of this revolutionary method of real-time neuroprosthesis control, a deeper understanding of the underlying brain physiology has to be attained.

5 BCIs for Mobility

Being mobile is, apart from communication and manipulation, an essential need of motor-impaired end-users for participation in social life. Wheelchairs are the most common assistive devices for mobility inside and outside the home environment. Persons with severe motor disabilities are dependent on electrically powered wheelchairs controlled by hand- or chin-operated manual joysticks. If not enough residual movement is preserved, eye-gaze or suck-and-puff control units may serve as a wheelchair user interface. Suck-and-puff control is mainly based on four types of commands: if air is blown into or sucked from the device with high pressure or vacuum, the controller interprets this as a forward or backward drive signal; if low pressure or vacuum is applied, the wheelchair turns right or left. With this rather simple control scheme users are able to perform most navigation tasks with their wheelchair.
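
The four-command scheme amounts to two calibrated thresholds on the measured pressure, one for the hard and one for the soft level, with the sign distinguishing puff from suck. A minimal sketch; the threshold values and the assignment of soft puff/suck to right/left are assumptions for illustration:

```python
# Minimal sketch of a suck-and-puff command mapping. Positive pressure
# means blowing (puff), negative means sucking; thresholds are assumed
# values that would be calibrated per user.

HARD = 0.8   # calibrated "high pressure/vacuum" threshold
SOFT = 0.2   # calibrated "low pressure/vacuum" threshold

def decode_pressure(p):
    if p >= HARD:
        return "FORWARD"     # hard puff
    if p <= -HARD:
        return "BACKWARD"    # hard suck
    if p >= SOFT:
        return "RIGHT"       # soft puff (assumed assignment)
    if p <= -SOFT:
        return "LEFT"        # soft suck (assumed assignment)
    return "NEUTRAL"

for pressure in (1.0, -1.0, 0.4, -0.4, 0.05):
    print(pressure, "->", decode_pressure(pressure))
```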

Though the thresholds for low and high pressure are individually calibrated, the end-user must be able to reliably generate two different levels of air pressure or vacuum over a sustained period of time to achieve a good level of control. Since these prerequisites are not met by all end-users, BCIs may represent an alternative control option. As outlined in the preceding subchapters, at the moment all types of non-invasive BCIs provide only a limited command rate and are insufficient for dexterous control of complex applications. Thus, before control interfaces with low command rates, including BCIs, can be successfully applied to mobility devices, intelligent control schemes have to be implemented. Ideally, the user only has to issue basic navigation commands such as left, right, and forward, which are interpreted by the wheelchair controller by integrating contextual information obtained from environmental sensors. Based on these interpretations the wheelchair performs intelligent maneuvers including obstacle avoidance and guided turns. In such a control scheme the responsibilities are shared between the user, who gives high-level commands, and the system, which executes low-level interactions with varying degrees of autonomy. With this so-called shared control principle, researchers have demonstrated the feasibility of mentally controlling complex mobility devices by non-invasive BCIs despite their slow information transfer rate (Flemisch et al. 2003; Vanhooydonck et al. 2003; Carlson and Demiris 2008).

5.1 Principles of Shared Control

Generally, the basic idea of shared control is the continuous estimation of the operator’s mental intent and the provision of technical assistance for completing the intended tasks (Millán et al. 2004; Galán et al. 2008; Tonin et al. 2010). To improve the estimation of the user’s intent, the user interface outputs are combined with information about the environment, i.e. obstacles perceived by the robot’s sensors, and about the robot itself, i.e. its position and velocities (Fig. 2.10). A promising concept for human–machine interaction in vehicle control is the H-metaphor (Damböck et al. 2011). This shared control concept was specifically established to solve the human-out-of-the-loop problem in highly sophisticated mobility systems like autonomous cars and airplanes. The H-metaphor proposes a bidirectional interface, consisting of a mix of discrete and analog communication and a multimodal interface, allowing both human and machine to be in the physical loop simultaneously. It suggests that operating a vehicle should be like riding a horse through an unknown and changing environment, with notions of “loosening the reins” to allow the system more autonomy, or vice versa (Flemisch et al. 2003). Shared control supports direct interaction with the environment, but conveys a different principle than autonomous control. In autonomous control, more abstract, high-level commands, e.g. drive to the kitchen or the living room, are issued and executed completely autonomously by the mobility device without any possibility of intervention by the user (Carlson and Millán 2013). A completely autonomous control concept prevents the user from spontaneously interacting with other people. A critical aspect of shared control for BCI is coherent feedback: the behavior of the robot should be intuitive to the user, and the robot should unambiguously understand the user’s commands. Otherwise, people find it difficult to form mental models of the mobility device, which results in unreliable control.

Fig. 2.10 Overview of the shared control structure: The user issues high-level commands via a BCI, mostly at a low pace. The system quickly and precisely acquires environmental information with its sensors. The shared control system merges both information sources to achieve path planning and obstacle avoidance

Shared control is a fundamental component of BCI-controlled mobility aids, as it shapes the closed-loop dynamics between the user and the brain-actuated device so that tasks can be performed as easily and effectively as possible. The idea is to integrate the user’s mental commands with the contextual information captured by the intelligent mobility device, so as to reduce the user’s workload in reaching the target destination and to correct mental commands in critical situations. In other words, the actual commands sent to the device and the feedback to the user adapt to the context and the inferred goals. In this way, shared control can make target-oriented control easier, can inhibit pointless mental commands such as zigzag driving, and can help to generate meaningful motion sequences.

5.2 BCIs for Wheelchair Control

Although asynchronous, spontaneous BCIs seem to be the most natural control option for wheelchairs, there are a few applications using synchronous BCIs (Iturrate et al. 2009; Rebsamen et al. 2010). As in most communication applications, these BCIs are based on the detection of the P300 potential evoked by concentrating on a flashing symbol in a matrix. For wheelchair control the system flashes a set of predefined target destinations several times in random order, and finally the stimulus that elicits the largest P300 is selected as the target. Afterwards the intelligent wheelchair drives to the selected target autonomously; once there, it stops and the subject can select another destination. The fact that the selection of a target takes approximately 10 s and that the user’s intent is only determined at predefined time points puts the usability of cue-based BCIs for the control of mobility devices into question.

The European projects MAIA (Mental Augmentation through determination of Intended Action) and TOBI (Tools for Brain-Computer Interaction) contributed largely to the implementation of the shared control approach in brain-controlled robots and wheelchairs. In the BCI-controlled mobility devices developed in the framework of these projects, the user’s mental intent was estimated asynchronously and the control system provided appropriate assistance for wheelchair navigation. With this approach the driving performance of the BCI-controlled device greatly improved in terms of continuous human–machine interaction and enhanced practicability (Vanacker et al. 2007; Galán et al. 2008; Millán et al. 2009; Tonin et al. 2010). In the most recent shared control approach, the user asynchronously sends, with the help of a motor-imagery-based BCI, high-level commands for turning left or right to reach the desired destination, while short-term low-level interactions for obstacle avoidance are carried out by the mobility device autonomously (Fig. 2.11a). In the applied shared control paradigm the wheelchair proactively slows down and turns to avoid obstacles as it approaches them. To provide this functionality the wheelchair is equipped with proximity sensors and two webcams for obstacle detection. Using the computer vision algorithm described in Carlson and Millán (2013), a local occupancy grid with 10 cm resolution is computed (Borenstein and Koren 1991), which is then used by the shared control module for local path control. Generally, the vision zone is divided into three regions: obstacles detected to the left or right trigger rotation of the wheelchair, whereas obstacles in front slow it down. Additionally, a docking mode is implemented in which any obstacle located directly in front of the wheelchair is considered a potential target. Consequently, the user is able to dock to any “obstacle”, be it a person, a table, or even a wall. One prerequisite for the quick transfer of these technological developments to end-users is that the additional equipment should not cost more than the wheelchair itself; thus, cheap webcams were used instead of an expensive laser rangefinder.
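
The region-based behavior can be sketched as a few rules over the occupancy grid: obstacles in the left or right region add a turning component, obstacles ahead reduce speed, and an obstacle directly in front becomes a docking target when docking mode is active. A minimal sketch; the grid layout, region boundaries, and gains are assumptions, not the cited implementation:

```python
import numpy as np

# Minimal sketch of region-based shared control over a local occupancy
# grid (True = occupied). Grid geometry and gains are assumptions.

def shared_control(grid, v_user, w_user, docking=False):
    """grid: boolean array, rows = distance ahead, cols = lateral cells.
    v_user/w_user: translational/rotational command from the BCI user.
    Returns the adjusted (v, w)."""
    n_cols = grid.shape[1]
    left = grid[:, : n_cols // 3]
    front = grid[:, n_cols // 3 : 2 * n_cols // 3]
    right = grid[:, 2 * n_cols // 3 :]

    v, w = v_user, w_user
    if front.any():
        if docking:
            v = 0.0            # treat the frontal obstacle as a target
        else:
            v = 0.3 * v_user   # slow down when something is ahead
    if left.any():
        w += 0.5               # steer away from obstacles on the left
    if right.any():
        w -= 0.5               # steer away from obstacles on the right
    return v, w

grid = np.zeros((5, 9), dtype=bool)
grid[1, 0] = True              # obstacle close on the left
print(shared_control(grid, v_user=1.0, w_user=0.0))   # -> (1.0, 0.5)
```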

Fig. 2.11 (a) Picture of a healthy subject sitting in the BCI-controlled wheelchair. The main components of the brain-controlled robotic wheelchair are indicated with close-ups on the sides. The obstacles identified via the webcams are highlighted in red on the feedback screen and will be avoided by the shared control system. (b) Averaged time in seconds required to complete the task, either in the manual or the BCI condition (Modified from Carlson and Millán 2013)

Four healthy subjects successfully participated in an experiment in which the webcam-equipped wheelchair was used to enter an open-plan environment through a doorway. The user then had to dock to two different desks whilst navigating around natural obstacles, and finally reach the corridor through a second doorway. On average, the subjects took 160.0 s longer to complete the task with the BCI than with manual joystick control (Fig. 2.11b). In terms of path efficiency there was no significant difference between the distance traveled in the manual (43.1 ± 8.9 m) and the BCI condition (44.9 ± 4.1 m) (Carlson and Millán 2013). The additional time needed with BCI control is caused by a slightly higher number of turning commands; inexperienced BCI users in particular showed a bigger difference than experienced ones. This is likely because performing an MI task while navigating and being seated on a moving wheelchair is much more demanding than simply moving a cursor on a screen. Additionally, precisely timing the commands under real-world conditions, where negative events such as a crash may also occur (although a supervisor was always in control of a fail-safe emergency stop button), is a challenging task (Leeb et al. 2013). Nevertheless, the users were able to successfully steer the wheelchair they were sitting in by BCI commands, even in stressful situations.

In the future, start/stop or pausing functionality will be added. Using the hybrid BCI implementation, such rare start/stop commands could also be delivered through other channels such as residual muscular activity; for this purpose, any signal that the user is able to control reliably at a slow pace is suitable. Finally, recent research looks at supporting different feedback modalities and at using cognitive states, real-time determination of signal reliability, and online task performance to adapt the degree of autonomous control provided by the shared control system.

5.3 BCIs for Control of Telepresence Robots

For end-users with severe motor impairments or autonomic dysfunctions, mobilization in a wheelchair may not be possible. A telepresence robot might be very helpful to still allow these end-users to navigate in a domestic environment, to join their relatives and friends located somewhere else, and to participate in their activities. An example of such a mobility robot is Robotino™ (Festo, Esslingen, Germany), a small circular mobile platform (diameter 36 cm, height 65 cm) equipped with nine infrared sensors that can detect obstacles at up to 30 cm distance and a webcam that can additionally be used for obstacle detection. Furthermore, a conventional notebook with a webcam is placed on top of the robot for telepresence purposes (Fig. 2.12a), so that the participant can interact with the remote environment via Skype™ (Skype Communications, Rives de Clausen, Luxembourg).

Fig. 2.12 (a) A tetraplegic end-user maneuvering the brain-controlled telepresence robot by motor imagery in front of participants and press at the “TOBI Workshop IV”, Sion, Switzerland, 2013; (b) Averaged time in seconds required to complete the task for each path, either in the manual or the BCI condition

Exploring an unknown environment with a BCI-controlled robot would be a complex and frustrating task, in particular due to the limited temporal precision and low command rate of the BCI. Furthermore, the user has to divide attention between the feedback of the BCI classifier, the telepresence screen, the current position, and the route to the desired destination. Here the shared control principle comes into play. Its implementation is based on the dynamic system concept from the fields of robotics and control theory (Schöner et al. 1995). Two dynamic systems were created that control two independent motion parameters: the angular and translational velocities of the robot. The systems can be perturbed by adding attractors or repellers in order to generate the desired behaviors. The dynamic system implements a navigation modality in which the default behavior of the device is to move forward at a constant speed. If repellers or attractors are added to the system, the motion of the device changes in order to avoid the obstacles or reach the targets. At the same time, the velocity is determined according to the proximity of the repellers surrounding the robot.
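
A dynamic system of this kind can be written as an update rule for the heading, where each repeller contributes a term pushing the heading away from the obstacle direction, and the forward velocity shrinks with repeller proximity. A minimal sketch; the functional forms, constants, and sign convention (positive angular velocity = turn left) are assumptions, not the cited implementation:

```python
import numpy as np

# Minimal sketch of an attractor/repeller dynamic system for the robot's
# angular and translational velocities. Forms and constants are assumed.

def angular_velocity(heading, repellers, attractor=None,
                     k_rep=1.0, k_att=0.5):
    """repellers: list of (direction, distance) tuples (radians, meters).
    Positive output = turn left (assumed convention)."""
    w = 0.0
    for direction, distance in repellers:
        delta = heading - direction
        # Push the heading away from the obstacle direction; nearby
        # obstacles (small distance) contribute more strongly.
        w += k_rep * np.sign(delta if delta != 0 else 1.0) * np.exp(-distance)
    if attractor is not None:
        w += -k_att * (heading - attractor)   # relax toward the target
    return w

def translational_velocity(repellers, v_max=0.3):
    """Slow down as the nearest repeller gets closer."""
    if not repellers:
        return v_max
    nearest = min(d for _, d in repellers)
    return v_max * (1.0 - np.exp(-nearest))

reps = [(0.2, 0.5)]   # obstacle slightly to the left, 0.5 m away
print(angular_velocity(heading=0.0, repellers=reps))   # < 0: turns right
print(translational_velocity(reps))                    # reduced speed
```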

Applying this principle allows subjects to drive the mobile telepresence platform remotely by a motor-imagery-based BCI (Tonin et al. 2011). In this example, end-users remotely control the robot’s turns to the left or to the right to reach one of four predefined targets within a natural office environment. The space contains natural obstacles such as desks, chairs, furniture, and people in the middle of the paths. Importantly, participants had never explored the environment prior to the experiment. The robot’s turns to the left and right are controlled via a two-class BCI (Galán et al. 2008). Whenever the BCI output exceeds the threshold for left or right, a command is delivered to the robot. In addition, the participant can intentionally decide not to deliver any mental command and thereby maintain the default behavior of the robot, which continues to move forward and avoids obstacles with the help of its on-board sensors (Leeb et al. 2013).

Nine severely motor-disabled end-users, who had never visited the lab environment in person, were able to use such a telepresence robot to successfully navigate around the lab whilst they were located in their own homes or in clinics at distances of up to 550 km away. The same paths were followed with BCI and with manual control, i.e. button presses, and shared control was either applied or not. Remarkably, the end-users with motor impairments (Tonin et al. 2011) performed similarly to the healthy users (Tonin et al. 2010), who were already familiar with the environment. Shared control also helped all subjects, including novice BCI subjects and users with disabilities, to complete a rather complex task in a similar amount of time and with a similar number of commands to those required with manual control without shared control (Fig. 2.12b). These results show that shared control reduces the subjects’ cognitive workload, as it (a) assists them in coping with low-level navigation issues such as obstacle avoidance and allows them to focus their attention on the final destination, and thereby (b) helps BCI users to maintain attention for longer periods of time, since the number of BCI commands can be reduced and their precise timing is not so critical.

6 Conclusion

Taken together, BCI research has made tremendous progress in recent years, and end-users benefit from BCI-controlled assistive technologies in the application domains of communication, mobility aids, and neuroprosthesis control. However, BCIs are not yet ready for independent home use. To establish BCIs as AT in the end-user’s home, three gaps need to be bridged: (1) the usability gap, (2) the reliability gap, and (3) the translational gap. In general, the setup and handling of current BCI systems is relatively complicated compared to traditional AT and needs the (tele-)presence of technical experts. Thus, BCIs have to be improved to a stage at which end-users, together with their caregivers, are able to apply the systems independently at home. A key component for achieving this goal is the availability of easier-to-handle, gel-less electrodes providing sufficient signal quality. Only long-term studies with end-users will allow us to demonstrate the reliability of BCIs and further improve the systems. With the extensive implementation of intelligent shared control mechanisms, the uncertainties and non-stationarities inherent to non-invasive MI-BCI systems may be partly tackled. Nevertheless, an MI-BCI should not be considered as an add-on to existing user interfaces for real-time neuroprosthesis control if the initial BCI performance is low and not stable over sessions. The relatively new concept of the hybrid BCI holds promise that BCIs will seamlessly integrate into traditional user interfaces and might expand the group of potential users. First studies incorporating the hybrid BCI approach show that no single setup of the system fits all end-user groups. In fact, the possibility of a personalized configuration, something very common in the AT field, will be essential for the success of BCIs as a control interface for ADs.

Most importantly, more translational studies involving end-users at their homes are needed to address the problems and issues arising from applications outside research labs. Adopting the user-centered approach in BCI research and development enables us, in an iterative process between developers and users, to further improve BCIs and to address the specific needs and requirements of end-users.