
The idea of a cybernetic organism, a being with both organic and biomechatronic parts, is a hallmark of modern science fiction, with iconic characters such as Darth Vader, RoboCop, and The Terminator being prime examples. Yet novel medical technologies are turning this fiction into reality, offering an alternative to treating diseased organs and limbs: simply replace them. This image of a new medical revolution, which promises permanent solutions to once unsolvable health problems, has inspired a generation of researchers to further the field's knowledge.

This chapter will provide a brief background on recent progress in the field of brain-machine interfaces (BMIs), starting with overviews of select, highly developed neuroprostheses. In particular, it will discuss how BMIs allow for the creation of communication channels between the brain and a prosthesis. The chapter highlights the communication channel because it presents the greatest difficulty in seamlessly integrating neuroprostheses with their users. To elaborate, the necessary electronic peripherals, such as robotic arms and cameras, are now well defined, leaving the interfacing of these peripherals with the brain as the current limiting factor in performance.

The chapter will start by reviewing the simplest efferent communication channel: a one-way connection that reads from the brain, translates the signal into motor intent, and uses the result to control a robotic arm. Next, it will cover afferent communication channels: interfaces that acquire signals from electronic sensory prosthetics, convert these signals into a neural code, and finally write them into the brain in an attempt to create desired perceptions. Last, it will discuss the establishment of a communication channel directly between two brains, wherein it is necessary to read information from one brain and write it to another.

8.1 A Neuroprosthetic Arm

The loss of the ability to control one's limbs commonly stems either from complete limb amputation or from nerve damage, such as spinal cord injury or brainstem stroke, which prevents the brain from communicating with the limb. Nearly two million people are living with limb loss in the United States alone, a figure expected to exceed three and a half million by 2050 (Ziegler-Graham et al. 2008). Although non-lethal, limb loss presents a permanent disability for these individuals, often compromising their ability to live independently. Even more individuals, around six million in the United States, are paralyzed, with completely unusable limbs (The Reeve Foundation 2009). Biomechanical engineering has provided some solutions, such as mechanical prosthetic arms, for these individuals. However, it is apparent that these mechanical solutions could never fully restore the dexterity of a natural arm. Whereas a natural arm receives a multitude of signals from the brain and decodes them into muscle movements, a mechanical prosthesis receives its driving commands from a limited number of muscles, thereby vastly constraining its potential as a proper replacement.

Decades ago, science fiction writers were already thinking of a more elegant solution, even before the existence of the necessary technology to implement it. This solution involved combining a robotic arm with a human body in such a way that the final result would mimic the performance of a natural limb. In the 1980s, fans of Star Wars received a glimpse of how this technology might one day be implemented: after losing his hand in combat, Luke Skywalker has it replaced with a robotic hand. Post-procedure, this artificial hand works flawlessly, and its performance and outer appearance are indistinguishable from his natural hand. Even today, such seamless integration remains a dream, a goal toward which many researchers are making important steps.

The past several decades have seen amazing advancements in robotics, driven by mainstream adoption of the technology by the manufacturing industry. Seeing how well robotic arms on an assembly line work, one may wonder why a neuroprosthetic arm has not already been perfected. The difficulty stems from the limited understanding of the motor system. The neural signal theoretically provides all the instructions the arm needs, but from where and how should this signal be read? Additionally, once acquired, how can spike trains be translated into precise movements? Finally, how can the robotic arm provide feedback, such as proprioception and tactile sensation, to the brain? This section will look at how several research groups have approached these problems, and how their groundbreaking results helped further advance the field along the path to developing a neuroprosthetic arm.

8.1.1 Rats Can Control Robotic Arms Using Only Their Minds

Although the technology to read signals from the brain had existed for many years, it wasn't until the rapid improvement in computer technology during the 1990s that researchers were provided with the capability to digitally process these signals. Toward the end of the decade, it became feasible to process, in real time, many signals simultaneously recorded from multiple neurons. These advancements in digital signal processing facilitated the creation of a BMI, a direct communication channel between the brain and an electronic device. It was hypothesized that by using this new interface technology, it would be possible to decode an animal's desired limb movement at any moment (Chapin et al. 1999). One of the most commonly used laboratory animals, the rat, was chosen as the first animal model on which to test this hypothesis. In theory, the hypothesis could be tested by reading and attempting to decode neural signals from the primary motor cortex (M1), which neuroscientists had long ago identified as a region fundamentally involved in volitional control of body movements. However, converting signals from multiple neurons in the M1 region into quantifiable movement of a robotic arm in a particular direction proved to be a complex task.

To investigate this hypothesis, multi-electrode arrays (MEAs) were implanted into M1 and the ventrolateral thalamus of the rats. Initially, the researchers wanted to learn what types of signals appear in the motor cortex when a rat moves its forelimb (Chapin et al. 1999). To record these signals, the rats were water-deprived and then placed within a behavioral training box, which contained a joystick that the rats could manipulate to control a robotic arm to deliver water. Recordings were performed until enough data had been collected to allow for the mapping of neural signals to forelimb, and consequently, joystick movements. A diagram of the behavioral training box and recorded signals can be seen in Fig. 8.1. Sophisticated decoding algorithms combining principal component analysis and artificial neural networks were applied to the neural data to convert spike trains from simultaneously recorded neurons into a neuronal population function, which represented the desired direction of arm movement. Amazingly, when this decoder was applied to the MEA recording in real time and used to direct the robotic arm's movement, the rats were able to control the robotic arm with their minds, without physically manipulating the joystick. These results proved the feasibility of a neuroprosthetic arm, if only in its most basic sense.

Fig. 8.1
figure 1

A rat uses its forearm to manipulate a lever, which controls a robotic arm, to obtain a water reward. During some trials, neural signals from M1, which encode the movements of the rat's forearm, are recorded and used to successfully control the robotic arm. Adapted from Chapin et al. 1999 with permission
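To make the decoding pipeline described above more concrete, here is a minimal sketch, in Python with synthetic data, of the general approach: binned spike counts are reduced with principal component analysis, and a small neural network maps the components to lever position. The variable names, dimensions, and simulated data are illustrative assumptions, not the actual pipeline of Chapin et al.

```python
# Minimal sketch of a population decoder: PCA over binned spike counts,
# followed by a small neural network mapping components to lever position.
# Synthetic data stands in for the recorded M1/thalamic ensembles.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

n_bins, n_neurons = 2000, 32                                 # time bins x recorded units
lever = np.sin(np.linspace(0, 20 * np.pi, n_bins))           # hypothetical lever trajectory
tuning = rng.normal(size=n_neurons)                          # each unit's (made-up) tuning weight
spike_counts = rng.poisson(np.clip(2 + np.outer(lever, tuning), 0, None))

# Reduce the ensemble to a few "neuronal population functions".
pca = PCA(n_components=5)
components = pca.fit_transform(spike_counts)

# Train the decoder on the first half of the session, evaluate on the second half.
half = n_bins // 2
decoder = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
decoder.fit(components[:half], lever[:half])
print("held-out R^2:", decoder.score(components[half:], lever[half:]))
```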

8.1.2 Monkeys Are Able to Use Neural Interfaces to Manipulate Computer Cursors

Encouraged by the success of creating an interface between a rat's brain and a robotic arm, many labs began to pursue BMI-related research. To further assess the feasibility of ultimately using this technology in humans, a model closer to humans was necessary. Thus, many labs adopted a primate model as their primary interface-testing platform. Not only are a primate's body and brain much more analogous to a human's, but primates' higher intelligence also allows researchers to test an interface's performance under more demanding tasks. This provides a better sense of how well these interfaces would function in a human patient performing real-world tasks. Also around this time, an MEA, commonly referred to as the Utah array, was developed and deemed a large step toward an MEA suitable for human implantation (Maynard et al. 1997). When implanted into the motor cortex of monkeys, this array allowed for the reliable, simultaneous recording of multiple neurons over many months (Serruya et al. 2002).

One of the first trials in primates involved the implantation of an MEA into the left dorsal premotor cortex of two owl monkeys (Wessberg et al. 2000). Using data recorded while the monkeys moved a joystick, the researchers created a decoder that translated neural activity into directional movement. More complex nonlinear decoders, such as artificial neural networks, were also shown to provide adequate decoding. Finally, this research showed that a decoder that continuously optimizes its parameters while in use performs much better than a decoder whose parameters remain static.

Another study taught monkeys implanted with a Utah array to use a joystick to move a computer cursor along a pseudorandom path on the computer screen (Serruya et al. 2002). A linear filter system, constructed using 1 minute of continuous recording and hand tracking data, was then applied as a decoder to determine intended cursor position. When applied to subsequent data, this decoder allowed for the recovery of hand trajectory. The monkeys were then given the option to use the neural interface to directly control the cursor, and were presented with visual feedback to close the loop. The monkeys quickly learned how to use the neural interface with only visual feedback and no formal training, and, as previously seen in the rat model, stopped using the joystick altogether.
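The linear filter approach can be illustrated with a short sketch: lagged firing rates are regressed onto hand position by ordinary least squares on a brief calibration block, and the fitted weights are then applied to later data. The data, dimensions, and lag count below are synthetic stand-ins, not the recordings or parameters from the cited study.

```python
# Sketch of a linear-filter decoder: hand position is modeled as a weighted
# sum of each unit's firing rate over the last few time bins, with weights
# fit by ordinary least squares on a short calibration block.
import numpy as np

rng = np.random.default_rng(1)
n_bins, n_neurons, n_lags = 1200, 24, 10

hand_x = np.cumsum(rng.normal(size=n_bins)) * 0.1            # synthetic hand trajectory
rates = rng.poisson(np.clip(3 + np.outer(hand_x, rng.normal(size=n_neurons)), 0, None))

# Build the lagged design matrix (bias term + n_neurons * n_lags regressors).
rows = []
for t in range(n_lags, n_bins):
    rows.append(np.concatenate([[1.0], rates[t - n_lags:t].ravel()]))
X = np.array(rows)
y = hand_x[n_lags:]

train = len(y) // 2
weights, *_ = np.linalg.lstsq(X[:train], y[:train], rcond=None)
pred = X[train:] @ weights
print("correlation with true trajectory:", np.corrcoef(pred, y[train:])[0, 1])
```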

These initial results were quite promising, showing that a BMI in primates could be used to control the position of an object. However, there was still an issue preventing the developed system's use by disabled humans. In all of the previous animal models, the decoding algorithm, which translates neural activity into intended limb movement, was created using data recorded while the animals moved their working arms. Unfortunately, many paralyzed individuals do not have this capability; therefore, such training data cannot be collected. To solve this problem, attempts were made to create adaptive decoders, which require no initial training data. It was found that if a monkey was given visual feedback, it would be able to learn how to use a neural interface that employed an arbitrary decoding scheme (Taylor et al. 2002). After learning how to manipulate their neural signals to properly fit this decoder, monkeys were able to use the neural interface to guide a digital cursor to its target. Further analysis of the resulting data showed that the neurons in the monkeys' motor cortex were actually shifting their tuning functions during the experiment so as to better interface with the decoder. This showed that a closed-loop neural interface could induce neural plasticity, allowing for adaptive improvements in performance. This suggests that providing the brain feedback from the neuroprosthetic arm may be just as important as reading information from the motor cortex.
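The idea of decoder adaptation during closed-loop use can be sketched with a toy online-learning rule: a linear readout is updated after every bin from the error between the intended and decoded output. This is only a generic illustration of co-adaptation, not the coadaptive algorithm used by Taylor et al.; all quantities are synthetic.

```python
# Toy sketch of an adaptive linear decoder that keeps refining its weights
# during closed-loop use via small online (normalized LMS) updates, rather
# than being fit once and frozen.
import numpy as np

rng = np.random.default_rng(6)
n_neurons = 16
true_mapping = rng.normal(size=n_neurons)       # unknown relation: rates -> intended velocity
weights = np.zeros(n_neurons)                   # decoder starts from an arbitrary (zero) map
lr = 0.1

for step in range(2000):
    rates = rng.poisson(5, size=n_neurons).astype(float)
    intended = true_mapping @ rates             # what the user wanted the cursor to do
    decoded = weights @ rates                   # what the decoder produced
    # Online update from the error signal available during closed-loop training.
    weights += lr * (intended - decoded) * rates / (rates @ rates + 1e-9)

alignment = np.dot(weights, true_mapping) / (
    np.linalg.norm(weights) * np.linalg.norm(true_mapping))
print("decoder/target alignment:", alignment)
```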

Indeed, more recent advancements in decoding schemes support the idea that the optimal neural interface for controlling a neuroprosthetic arm may be fundamentally different from the natural interface between the brain and an organic arm. If this is true, then research should focus less on replicating natural communication and more on establishing a completely new communication protocol. In line with this, it was proposed that decoder adaptation and neural adaptation do not need to be separated; instead, algorithms that capitalize on both mechanisms to improve communication produce the most robust total systems for neuroprosthetic control (Shenoy and Carmena 2014). Therefore, a better understanding of the interaction between biomimetic designs and both user and decoder adaptation could greatly improve the quality of motor BMIs (Bensmaia and Miller 2014).

All the neural interfaces described thus far have used the M1 region as their interface site. Due to the multitude of complex motions a hand can perform, it was estimated that signals from hundreds of motor cortex neurons would be necessary to precisely replicate the natural kinematics of reaching and grasping using a neuroprosthetic arm. Although decoding intended limb movement from neural activity in M1 was the most straightforward approach, some researchers hypothesized that there may be other brain regions whose neural activity might be valuable to consider when decoding arm movement intent. Certainly, neuroscientists had already shown that many other brain regions encode information related to limb movement. One such region is the parietal reach region (PRR), a subregion of the posterior parietal cortex. The PRR is located earlier along the sensory-motor pathway than the motor cortex, and does not encode movement itself but rather the desire to move and movement-planning information, such as the final target toward which the movement is made (Shenoy et al. 2003). Since trajectory planning is a trivial task for modern robotics, these earlier signals, if decodable, offer different possibilities for neuroprosthetic limb control. Maximum likelihood estimation was applied to neural recordings from the PRR to estimate what reach parameters could be resolved. From these recordings, a finite-state-machine decoding algorithm was created. This algorithm was shown to be effective in determining when an animal is planning to reach and in which direction the animal will execute this reach. In addition to the intended target of the movements, other higher-level cognitive signals, such as the expected magnitude or probability of a reward upon the successful grasp of an object, were encoded in neural activity in this region (Musallam et al. 2004). This allows an interface into the PRR to read not only the intended arm movement but also the preferences and motivation of the individual as the movement is carried out.
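A hedged sketch of the maximum-likelihood step described above: if each PRR unit is assumed to fire as an independent Poisson process whose mean count depends on the planned target, the decoder simply picks the target with the highest log-likelihood. The targets, tuning table, and counts below are invented for illustration, not values from the cited studies.

```python
# Sketch of maximum-likelihood reach-target decoding: assume each unit fires
# as an independent Poisson process whose mean count depends on the planned
# target, then pick the target with the highest log-likelihood.
import numpy as np

rng = np.random.default_rng(2)
n_targets, n_neurons = 8, 40

# Mean spike counts per planning period, per (target, neuron) -- learned from
# training trials in practice; random here for illustration.
mean_counts = rng.uniform(1.0, 10.0, size=(n_targets, n_neurons))

def decode_target(observed_counts):
    # log P(counts | target) for Poisson units, dropping the count-only constant term
    log_like = (observed_counts * np.log(mean_counts) - mean_counts).sum(axis=1)
    return int(np.argmax(log_like))

true_target = 3
observed = rng.poisson(mean_counts[true_target])
print("decoded target:", decode_target(observed), "(true:", true_target, ")")
```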

Although much progress had been made, neural interfaces at this time were still too slow and inaccurate to offer a real solution; eye-tracking systems, which used pupil movement for control, still outperformed the cutting-edge neural interfaces. However, it was still widely thought that a direct neural interface, when optimally implemented, should be able to outperform these eye-tracking systems. As researchers continued to look for ways to improve the performance of the neural interface, the dorsal premotor cortex (PMd) was investigated as a possible interface location (Santhanam et al. 2006). The PMd encodes information about the final target of a desired arm movement. Instead of decoding every detail of movement necessary to move the arm and grasp an object from signals in the motor cortex, the desired object could be determined from PMd recordings, and standard robotic arm control algorithms, already commonly used in factories, could be easily applied to direct the arm to grasp that object. Experiments with monkeys carrying a neural interface in the PMd confirmed not only that this approach worked, but also that movements were executed many times faster than with previous BMIs (Santhanam et al. 2006). Monkeys were then taught to use this interface to type on a digital keyboard with a digital cursor and were able to achieve rates of approximately 15 words per minute, although no works of Shakespeare were reproduced.

With neural interfaces and decoding systems becoming more defined, research continued to move toward preparing the system for real-world tasks. To this end, neural interfaces were designed through which monkeys could control two on-screen avatar arms. Research using this interface produced surprising results: when decoding motor cortex signals to predict movements, it was more effective to consider the two avatar limbs together than independently (Ifft et al. 2013). A single fifth-order unscented Kalman filter, rather than two independent filters, allowed for faster adaptation in the frontal and parietal cortical areas, resulting in better neural interface performance.
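The recursive structure shared by such decoders can be illustrated with a much simpler linear Kalman filter over cursor position and velocity; the fifth-order unscented filter used in the study follows the same predict/update logic with a richer state and observation model. All matrices and data below are illustrative assumptions.

```python
# Minimal linear Kalman-filter decoder over a 2-D cursor state (position and
# velocity), illustrating the predict/update loop that more elaborate
# decoders (e.g. the unscented Kalman filter) build on.
import numpy as np

dt = 0.05                                   # 50 ms bins
A = np.array([[1, 0, dt, 0],                # state transition: constant-velocity model
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
W = np.eye(4) * 1e-3                        # process noise (illustrative)

n_neurons = 30
rng = np.random.default_rng(3)
H = rng.normal(size=(n_neurons, 4))         # observation model: firing rates ~ H @ state
Q = np.eye(n_neurons) * 0.5                 # observation noise (illustrative)

x = np.zeros(4)                             # state estimate
P = np.eye(4)                               # estimate covariance

def kalman_step(x, P, rates):
    # Predict the next state from the kinematic model.
    x_pred = A @ x
    P_pred = A @ P @ A.T + W
    # Update with the newly observed firing rates.
    S = H @ P_pred @ H.T + Q
    K = P_pred @ H.T @ np.linalg.solve(S, np.eye(n_neurons))
    x_new = x_pred + K @ (rates - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

for _ in range(100):                        # fake stream of binned firing rates
    rates = H @ np.array([0.2, -0.1, 0.0, 0.0]) + rng.normal(scale=0.5, size=n_neurons)
    x, P = kalman_step(x, P, rates)
print("decoded position estimate:", x[:2])
```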

8.1.3 Monkeys Feed Themselves Using Neuroprosthetic Arms

Although the studies discussed above have shown that neural interfaces worked well in the digital world, it was important to consider how well they would perform in the physical world. Toward this goal, monkeys were trained to use a neural interface to manipulate a robotic arm. A more complex decoder, which used ensemble neural recordings from several brain regions, was able to extract several kinematic parameters, such as hand position, velocity, and gripping force (Carmena et al. 2003). It was found that by recording from these large neuronal ensembles, high-accuracy arm movements could be resolved. This allowed monkeys to use a robotic arm, in conjunction with visual feedback, to perform reach and grasp tasks. Continuous use of this BMI led to significant improvements in performance as well as functional reorganization in multiple cortical areas. Recent work has further confirmed that cortical adaptation occurs during use of a BMI, which results in better performance of the BMI's decoder (Rouse et al. 2013). In fact, it has been shown that large-scale modifications of the cortical network and changes in directional tuning occur when an implanted monkey learns to proficiently use its BMI (Ganguly et al. 2011).

The ability to feed oneself is often taken for granted, but for individuals with tetraplegia this task is impossible, leading to reduced quality of life and necessitating daily assistance. Researchers thought that if monkeys could successfully perform reach and grasp tasks with a robotic arm and neural interface, perhaps they could then use this interface to feed themselves. This may seem like a simple extension of the previous tasks; however, feeding oneself requires more complex motor skills. During the study, the monkeys used a five-degrees-of-freedom robotic arm to interact with physical objects. Undeterred by the complexity of their new task, the monkeys were able to consistently grasp food placed at arbitrary positions and bring it to their mouths (Velliste et al. 2008). The results were promising, showing monkeys could successfully move the arm in three dimensions as well as open and close a gripper at the end of the arm. This suggests that one day humans could use neuroprosthetic devices to achieve dexterous functions at near-natural levels.

8.1.4 The Disabled Are Finally Getting a Hand as Motor Neuroprostheses Enter Clinical Trials

Today, the cutting edge of neuroprostheses exists within the world of clinical trials, and MEAs have been successfully implanted into the motor cortices of human patients. As research in primates suggested, humans could use the associated neural signals to control a digital cursor. A quadriplegic man, who had been paralyzed 3 years earlier by a spinal cord injury, was implanted with a neural interface that would allow him to manipulate a robotic arm (Hochberg et al. 2006). The interface, shown in Fig. 8.2, successfully recorded signals from the M1 region and decoded these signals into intended hand motions. The patient used the neural interface and decoder to manipulate the movements of a computer cursor, type e-mails, control his television, and play video games. This achievement is all the more meaningful considering how much more a human can do with a cursor than a monkey can. Long-term monitoring of these neural interfaces has shown sustained viability, addressing a classic concern with implantation of foreign material (Simeral et al. 2011). Several years after implantation, patients have still been able to accurately perform cursor point-and-click tasks.

Fig. 8.2
figure 2

This small MEA, when implanted into the M1 region of paralyzed patients, allows for the recording and decoding of the patient's neural signals into movement intent, which can be used to control a variety of peripherals, such as robotic arms and computer cursors (Hochberg et al. 2006)

The last remaining step toward a complete motor neuroprosthesis was the addition of the robotic arm. Following the success of humans using neurally controlled prosthetic devices, researchers began new clinical trials, hoping to help patients accomplish even more complex tasks. Quadriplegics with neural interfaces were shown to be able to control advanced robotic arms and perform reach and grasp tasks (Hochberg et al. 2012). It was also found that simple visual feedback worked well toward helping individuals learn to use their new, neurally interfaced robotic arm.

8.1.5 Researchers Give the Disabled More than a Hand, for Example, an F-35 Fighter Jet

The ability to restore motor function in disabled individuals provided more than enough reason to expand neural interface research. Now that the technology has moved beyond its infancy, new and more diverse uses of the interfaces are being proposed. One example of an alternative use comes from the University of Pittsburgh's Human Engineering Research Laboratories. In 2012, researchers implanted a neural interface into a quadriplegic, and over the course of the next 2 years, she was taught to control a robotic arm. DARPA, the research arm of the US military, was closely involved in this project, as many veterans suffer from limb loss or paralysis due to combat injuries. Once the patient had mastered the use of the arm, DARPA wondered what else she might be able to control. One proposal was an F-35 jet, a stealth multirole fighter only recently introduced into the Air Force. After an interface was created between the patient and a flight simulator, the patient could successfully control the F-35's altitude, pitch, and roll purely using her mind (Stockton 2015)!

8.1.6 Brain-Controlled Stimulation of Muscles Presents an Alternative Pathway to Regaining Mobility

Completely replacing a damaged arm with a neuroprosthetic arm provides a solution to almost any type of circumstance preventing individuals from using their current arm, particularly for amputees. However, a large subset of handicapped individuals has lost control of their limbs due to a spinal cord injury. In these instances, the limbs remain fully intact and functional, but the damaged spinal cord prevents signals from traveling between the brain and the limbs, thereby precluding the individual from moving them. In these situations, it is apparent that replacing the healthy limb with a neuroprosthetic arm is an unnecessarily drastic approach. Instead, recent research has focused on reestablishing the communication channel between the brain and the healthy organic limb (Moritz et al. 2008).

As previously outlined in this chapter, BMIs can convert brain signals from the motor cortex into a desired movement of a robotic arm. It was proposed that these same recorded signals could perhaps be used to control a healthy, organic limb. Functional electrical stimulation (FES) of the muscles was used to achieve this goal (Moritz et al. 2008). Electrically stimulating muscles in the arm causes them to contract, much as they would in response to an innervating nerve firing. The feasibility of restoring the communication channel between the brain and a healthy limb, while bypassing the damaged spinal cord, was tested in a monkey model. Not only did these trials show that neuronal activity could be translated into muscle movement via a BMI-to-FES interface, they also found that the monkeys were able to quickly adapt their neural signals so as to better utilize the new communication channel. Specifically, it was not necessary to match neurons with the same muscles they were associated with prior to the injury. This shows the monkeys were able to quickly learn the new mapping between the neurons and the muscles connected to the FES system, and adapt their neural activity to best utilize these new connections.
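A toy sketch of what a BMI-to-FES mapping might look like: smoothed firing rates are combined linearly into per-muscle activation levels and converted to stimulation currents clipped to an assumed safe range. The mapping matrix, squashing step, and current bounds are illustrative assumptions, not parameters from the cited work.

```python
# Toy sketch of a BMI-to-FES mapping: firing rates are combined linearly
# into per-muscle activation levels, then converted into stimulation current
# amplitudes clipped to an assumed safe range.
import numpy as np

rng = np.random.default_rng(4)
n_neurons, n_muscles = 20, 4

# Mapping from units to muscles -- in the studies above this need not match
# each unit's original anatomical association; the animal adapts to it.
unit_to_muscle = rng.normal(size=(n_muscles, n_neurons))

MIN_CURRENT_MA, MAX_CURRENT_MA = 0.0, 10.0   # illustrative stimulation bounds

def fes_command(firing_rates_hz):
    activation = unit_to_muscle @ firing_rates_hz          # arbitrary units
    activation = np.tanh(activation / 50.0)                # squash to [-1, 1]
    currents = np.clip(activation, 0, 1) * MAX_CURRENT_MA  # only positive drive stimulates
    return np.clip(currents, MIN_CURRENT_MA, MAX_CURRENT_MA)

rates = rng.poisson(20, size=n_neurons).astype(float)       # one bin of recorded rates
print("per-muscle stimulation (mA):", np.round(fes_command(rates), 2))
```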

Clinical trials have shown that FES can be used in patients with tetraplegia to regain control of hand movement. These trials use residual, proximal limb movements to trigger a preprogrammed hand grasp induced by an FES system (Keith et al. 1989). Due to the inability to perform any unique grasping gestures besides those that are preprogrammed, this system does not allow for nuanced or precise grasping motions, thereby limiting its usefulness. To remedy this problem, it was proposed that a BMI could be used to translate neuron signals into FES stimulation of muscles in the hand, allowing for the return of more natural grasping ability. This idea was tested in a primate model, where a BMI recording from 100 motor cortex neurons was used to control an FES system implanted in a primate’s hand (Ethier et al. 2012). To train the interface, both the motor cortex neurons and their corresponding muscles within the hand and forearm were recorded while the primate performed a task which involved picking up a rubber ball and placing it at a target location. Once the different neural activity patterns had been mapped to corresponding muscle contractions, the primate’s forearms and hands were temporarily paralyzed by a peripheral nerve block in the elbow. The primates were then asked to perform the same task again, but this time they could only move their hand and forearm using an FES system controlled by their neural interface. Using this system, the primates were able to grasp and move the ball reliably with movements that seemed natural to a casual observer.

8.2 A Somatosensory Neuroprosthesis

We've already covered, in depth, how BMIs can be used to control the motion of a robotic arm. However, besides allowing mechanical interaction with the environment, natural limbs serve another purpose: providing somatosensory feedback. Somatosensory feedback includes feedback from a wide variety of receptors, which encode senses such as nociception, proprioception, mechanoreception, and thermoception. The addition of somatosensory feedback has the potential to make the neuroprosthetic arm feel as if it is the patient's own, natural limb. Even more importantly, efficient, natural limb movement is better achieved by using a closed-loop neuroprosthetic arm, thereby necessitating the implementation of an artificial afferent system.

8.2.1 Adding Feedback to Neuroprosthetic Arms Is a Touching Story

As sensory and motor systems are inextricably linked, somatosensory feedback is essential to motor control. Removing or blocking sensory feedback via anesthetics or lesions dramatically impairs motor abilities in otherwise healthy subjects. Proprioception, the ability to sense the relative position of body parts and the forces needed to maintain or move them, has been found to be essential in movement planning, especially during complex tasks (Sainburg et al. 1995). As expected, when deprived of any sense of mechanoreception, commonly referred to as the sense of touch, a human's ability to interact with or even hold objects correctly goes awry (Monzee et al. 2003), and inactivation of the primary somatosensory cortex (S1), the brain area responsible for processing and relaying somatosensory information, results in severe loss of coordination and exaggerated movements in monkeys (Brochier et al. 1999).

While touch and proprioception are the most necessary senses for proper limb control, they are by no means the only senses that provide valuable feedback. While other senses, such as temperature and pain, are less critical to movement, a full gamut of senses is necessary to make neuroprosthetic limbs feel like a natural body part. Multiple robotic hands have been developed toward this purpose, with the latest models incorporating numerous sensors for encoding a broad range of information including measures of joint angle, tendon tension, temperature, vibration, and skin deformation (Hellman et al. 2015). These robotic hands employ numerous, novel techniques to produce these senses. For instance, one design employs a conductive liquid chamber enclosed between a synthetic elastomeric skin and an electrode array-covered artificial finger skeleton (Su et al. 2012). Hydraulic pressure measured and encoded by the electrode array provides a more natural, even sense of touch when pressure is applied to the hand. Other design features include an artificial fingerprint imprinted onto the skin, which allows a user to gauge surface texture by measuring the vibrations created by the friction between the fingerprint and the surface. Unfortunately, current neural interfaces cannot yet take advantage of these cutting-edge peripherals, and therefore the peripherals have not been tested in clinical trials. The first problem with providing somatosensory feedback is determining how to convert the signals collected from the sensors within the neuroprosthetic arm to a neural code, which can be understood by the brain. The second problem is finding the best brain location and stimulation techniques to effectively deliver this neural code.

8.2.2 Electrical Stimulation of the Primary Somatosensory Cortex Provides a Substitute for Natural Tactile Stimuli

The well-known cortical homunculus, a pictorial representation of the anatomical divisions of the M1 and S1 brain regions, was created using electrical stimulation to map each region, and provides the first example of artificial somatosensory perception (Penfield 1937). It was later demonstrated that focal stimulation of the cortical surface resulted in specific, localized tactile sensations on corresponding body parts (Rasmussen and Penfield 1947). It was not until the 1990s that a seminal work showed that intracortical microstimulation (ICMS) pulses delivered to S1 would result in tactile sensations that were indistinguishable from the sensations evoked by tactile stimuli applied to monkeys' fingers (Romo et al. 1998). Monkeys were trained to discriminate the difference in frequencies between mechanical vibrations sequentially applied to their fingertips. On random trials, ICMS was delivered to S1 in place of the second mechanical stimulus. It was found that the animals were able to reliably determine the frequency change regardless of whether the second stimulus was mechanically delivered or simulated via ICMS. This was the first experiment to systematically show that animals could not distinguish between natural and ICMS-induced sensations, thereby demonstrating the capability of ICMS for delivering varying somatosensory percepts (Romo et al. 1998). Several years later, it was shown that a rat's movements could be remotely controlled, similar to how a radio-controlled toy car is steered (Talwar et al. 2002). This was accomplished by implanting electrodes into the barrel cortex, allowing for the delivery of signals that the rat would perceive as a whisker deflection. The rat was then outfitted with a backpack containing the necessary electronics to wirelessly control the stimulation delivered to these electrodes. Using this system, "Robo-Rats" were successfully guided by stimulating the barrel cortex in different hemispheres. These influential works solidified the idea that ICMS could be used to write meaningful somatosensory information to the brain.

To minimize mental load during use, and to decrease the necessary training time, the ideal somatosensory neuroprosthesis would deliver sensations similar, if not identical, to those delivered via afferent neurons from a healthy limb. In practice, this would require the conversion of signals from multiple sensors into their representative pattern of neural activation, and their delivery to the correct brain regions using ICMS. Unfortunately, due to the current limited understanding of the somatosensory system, this biomimetic approach remains a challenge. Instead, most current systems rely on brain plasticity, which allows patients to adaptively learn and recognize the new input signals (Bensmaia and Miller 2014).

In order to construct a somatosensory neuroprosthesis, parameters for inducing these sensations via ICMS must be established. Work from several groups demonstrated that rats were an acceptable model for ICMS testing, as head-fixed rats could readily detect ICMS delivered to the barrel cortex (Bari et al. 2013). Moreover, it was shown that it was possible to convince rats of the presence of virtual objects in their environment by using ICMS to deliver a sensation that matches that of object-detecting whisking, a rat's method of sweeping its whiskers to explore its environment (Venkatraman and Carmena 2011; O'Connor et al. 2013). Applying computational models to neural data recorded during behavioral tasks, researchers have begun to map certain neural activity patterns to specific sensory features. With this, researchers have shown that perceived intensity is primarily attributed to spatiotemporal integration of action potentials: increased stimulation amplitude may be linked to an increase in the size of the recruited neuronal population, while increased stimulation frequency may be linked to greater firing rates (Fridman et al. 2010).
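As a rough illustration of that mapping, the sketch below converts a prosthetic fingertip pressure reading into ICMS pulse-train parameters, with stronger contact producing a larger pulse amplitude and a higher pulse rate. The numeric ranges are assumptions chosen for illustration only, not validated stimulation parameters.

```python
# Sketch of converting a prosthetic fingertip pressure reading into ICMS
# pulse-train parameters: stronger contact -> larger pulse amplitude
# (recruiting more neurons) and higher pulse rate (driving higher firing
# rates). All numeric ranges are assumed for illustration.
import numpy as np

PRESSURE_MAX_KPA = 100.0
AMP_RANGE_UA = (10.0, 80.0)       # stimulation amplitude, microamps (illustrative)
FREQ_RANGE_HZ = (20.0, 300.0)     # pulse rate (illustrative)

def pressure_to_icms(pressure_kpa):
    p = np.clip(pressure_kpa / PRESSURE_MAX_KPA, 0.0, 1.0)
    amplitude_ua = AMP_RANGE_UA[0] + p * (AMP_RANGE_UA[1] - AMP_RANGE_UA[0])
    frequency_hz = FREQ_RANGE_HZ[0] + p * (FREQ_RANGE_HZ[1] - FREQ_RANGE_HZ[0])
    return amplitude_ua, frequency_hz

for pressure in (5.0, 40.0, 90.0):
    amp, freq = pressure_to_icms(pressure)
    print(f"{pressure:5.1f} kPa -> {amp:5.1f} uA at {freq:5.1f} Hz")
```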

8.2.3 Closing the Sensorimotor Loop Allows for More Naturalistic Control of Neuroprostheses

As stated in the section on neuroprosthetic arms, a primate model is often necessary to prepare a design for human implantation. This is also true when considering a somatosensory neuroprosthesis. To create a primate model, researchers first successfully trained monkeys to discriminate spatial and temporal patterns of ICMS delivered to S1 (Fitzsimmons et al. 2007). Subsequently, it was shown that the delivery of these same ICMS patterns could be used to instruct a monkey about where to move a BMI-controlled computer cursor (O'Doherty et al. 2009). In another experiment, multiple somatosensory features, including contact location, pressure, and timing, were conveyed to monkeys through ICMS of the S1 region (Tabot et al. 2013). Monkeys were first trained to discriminate between mechanical stimuli differing in location and pressure, which were sequentially delivered to their palms. When the mechanical stimuli were randomly replaced with ICMS of S1, the monkeys were still able to properly gauge the target location and pressure of the ICMS-simulated stimuli. This resulted in the monkeys showing equivalent task performance with mechanical and artificial stimuli. The group was also able to mimic the on- and off-responses associated with first and last contact with an object, respectively. This mimicry was achieved by delivering phasic ICMS at the onset and offset of contact, while using tonic ICMS during contact to encode varying pressure and location.

A groundbreaking study was recently completed, which showcased the successful development of the first closed-loop, sensorimotor neuroprosthesis (O’Doherty et al. 2011). The system, outlined in Fig. 8.3, coupled a BMI, which allowed a monkey to move an on-screen cursor, with a sensorimotor neuroprosthesis, which delivered ICMS to the monkey’s S1 so as to evoke the sensation of the texture of whatever digital object the cursor was hovering over at any given time. Using these two BMIs, monkeys were able to identify a target digital object out of a group of digital objects simply by comparing the objects’ corresponding textures, proving the feasibility of bidirectional neuroprostheses.

Fig. 8.3
figure 3

A monkey outfitted with a BMI is able to use the interface to move a cursor on screen, while simultaneously using a virtual tactile sensation induced by patterned microstimulation of the S1 region to distinguish the texture of the object below the cursor. Adapted from O'Doherty et al. 2011 with permission

While most of the research has focused on mechanoreception, proprioception remains of great importance to neuroprosthetic limbs. However, proprioception is a more complicated sensation than mechanoreception and has proven more difficult to parameterize, due primarily to the lack of a well-structured topographic map of associated encoding locations in the brain. One group demonstrated that a monkey could discriminate between different ICMS patterns delivered to the S1 subregion 3a (an area concerned with proprioception), suggesting proprioception could be restored using ICMS (London et al. 2008). In a subsequent experiment, S1 activity was recorded in monkeys carrying out both active and passive movements to determine how proprioception is represented in S1 in each case. Neural data were recorded while monkeys directed a cursor using a manipulandum that allowed researchers to deliver pulses of force through its handle. The monkeys were then tasked with using the direction of these force pulses to determine which movements to make (Zaaimi et al. 2013). The mechanical forces were then replaced with their representative neural activity patterns, delivered to the S1 region by ICMS, which simulated a force from the manipulandum. The monkeys were found to treat the ICMS-delivered sensation as if it were an actual force through the manipulandum. Though still a simplified representation of proprioception, this study opened the door to conveying a complicated spectrum of sensations. Other approaches include using an MEA to interface with nerve stumps in the remaining section of the limb, instead of with the brain, to provide both tactile and proprioceptive feedback (Chapin 2004; Horch et al. 2011).

Somatosensory neuroprostheses have only recently been implemented in animal models, so it is quite remarkable that they have already been introduced into humans as well. However, due to the high-risk nature of electrode implantations, initial clinical trials have focused on less invasive means of delivering stimulation to the S1 region. One technique utilizes an electrocorticography array along the surface of the cortex to deliver electrical stimulation. Using this system, patients were able to readily distinguish the presentation of different stimulation patterns, proving that direct cortical stimulation can offer unique sensory feedback in humans (Johnson et al. 2013). Transcranial focused ultrasound (tFUS) is another, even less invasive technique that is a promising option for activating S1. tFUS has the ability to modulate sensory-evoked brain oscillations, thereby enhancing performance on sensory discrimination tasks (Legon et al. 2014). Furthermore, tFUS, targeted at areas corresponding to mechanoreception in the hand, has been shown capable of eliciting tactile sensations with precision at the individual-finger level (Lee et al. 2015). These early successes make it likely that future innovative techniques and a further understanding of the somatosensory system will lead to somatosensory neuroprostheses as successful as the auditory and visual neuroprostheses described in the following sections.

8.3 An Auditory Neuroprosthesis

Auditory neuroprostheses are designed to deliver audio signals to the brain while bypassing any damaged peripheries of the auditory pathway. The auditory nervous system is well defined, and there are many options along its pathway for a viable interface location. Cochlear implants (a type of auditory neuroprosthesis) represent the most widely adopted and commercially successful neuroprostheses. Implantable models are capable of restoring useful auditory perception to many of the estimated 360 million individuals suffering from disabling hearing loss worldwide (Olusanya et al. 2014). Although communication with the cochlea is not direct communication with the brain, for completeness and given the commercial success of cochlear implants, this section will briefly review their history and development before moving on to discuss auditory neuroprostheses that interface directly with the brain.

8.3.1 The Cochlear Implant Emerges as the First Commercially Successful Neuroprosthesis

Initial attempts at improving hearing using electrical stimulation began as early as 1748, when it was found that the hearing of a deafened woman could be improved by applying an electric potential across her temples using a Leyden jar, a rudimentary form of capacitor (Wilson 1752). This concept remained untouched until 1930, when recordings from the cochlear nerves of cats showed that the nerve encoded both the frequency and amplitude of speech waveforms (Wever and Bray 1930). As hearing loss is commonly caused by damaged hair cells within the ear, it was proposed that one could bypass these damaged cells and interface directly with the healthy cochlear nerve. A couple of decades later, it was confirmed that stimulating the cochlear nerve of a human patient caused that patient to perceive a noise (Gisselson 1950).

The first instance of a deaf human’s hearing being augmented by an intra-auricular implanted electrode was unplanned. While performing a surgery to treat cholesteatoma , a destructive growth within the middle ear, doctors implanted an electrical stimulation device within the cholesteatoma in the hopes that they could use electric current to treat it (Djourno et al. 1957). Due to the location of the cholesteatoma, the implantation location of the stimulation device was in close proximity to the internal auditory canal. Upon activation of the stimulation device, it was found that the patient had some of his hearing restored. Later, this was understood to be due to the stimulation device activating acoustic nerve fibers in the patient’s inner ear (Djourno and Eyries 1957). These accidental findings encouraged otologists to begin designing a cochlear implant for hearing restoration. The first models of such a device were single-channel interfaces, which electrically stimulated the acoustic nerve fibers. Due to their simplicity, these cochlear implants allowed their users to hear rhythms of speech but not to recognize the words being spoken (House and Owens 1973). However, the return of any hearing—no matter how distorted—was trailblazing, and unveiled the potential of a cochlear implant.

Many of the improvements made to the initial cochlear implant stemmed from research which provided an in-depth understanding of how the inner ear’s shape affects sound processing. One particularly important achievement was the demonstration that external sound is transduced into a traveling wave within the cochlea (Békésy 1928; Olson et al. 2012). The interface between the electronic stimulator and auditory receptors was also better defined to allow for optimal excitation (Davis 1968; Kiang and Moxon 1972). Another breakthrough came when researchers discovered that hair cells within the cochlea respond to unique frequencies, and that their corresponding cochlear nerves encoded those frequencies (Evans 1975). Current devices capitalize on this frequency specificity by splitting audio signals into their frequency components and then feeding these individual frequency components to their corresponding sections of the cochlear nerve. A six-channel electrode array served as the first multichannel implant to successfully transmit multiple frequency components to multiple nerve sites (Simmons et al. 1965). The basic design principles of these early devices remain the mainstay of modern cochlear implants.

8.3.2 Competition Within the Commercial Market Improves the Cochlear Implant

Once the performance of the cochlear implant had been proven in an academic setting, the device soon began to attract the attention of commercial investors, with the first patent for a multi-electrode cochlear implant submitted in 1977 (Chourard 1977). Production of the patented device was entrusted to the French company Bertin. The fact that this company held the patent until 1999 influenced the development of the cochlear implant by forcing competitors to try alternative designs. One example of such an idea was using only specific, high-frequency bands, which were known to convey acoustic information important for speech processing. These innovative ideas led competitors to ultimately surpass Bertin, leading the company to abandon the field.

Around this time, the company Cochlear Limited entered the scene, producing a multi-electrode cochlear implant (Clark et al. 1979). This implant, FDA approved in 1984, was the first successful, commercialized, multichannel cochlear implant. Concurrent research led to the development of the first microelectronic, multichannel cochlear implant (Hochmair et al. 1979), which led to the formation of another cochlear neuroprosthetics company, Med-EL, in 1982. In 1993, an additional production company, Advanced Bionics, joined the competition to create the perfect auditory neuroprosthesis.

The market competition generated a large push for increased cochlear implant performance. At this time, it was thought that improving the electrode array interface offered the best hope for improving signal clarity and transmission. The first devices used a single conduit implanted into the cochlea with multiple electrode channels located along its length. This system was championed by companies like Cochlear Limited and was shown to allow for speech discrimination by a previously deaf individual (Michelson and Schindler 1981). However, it was also shown that multiple wire arrays inserted into the scala tympani of the cochlea offered higher performance by offering more channels and more options for the spatial placement of those channels (Clark and Tong 1982). This led all three companies to focus on increasing the number of channels, postulating that more channels would allow for a higher-fidelity encoding of audio into neural signals.

However, researchers quickly found that further increasing the number of channels led to diminishing returns. This is because the electrode array is not in direct contact with the cochlear nerve, but is separated from the nerve by the bony medial wall of the cochlea. This small fluid-filled space between the electrodes and the cochlear nerve causes the electrodes' current to spread. An increased density of electrodes corresponds to a decreased distance between neighboring electrodes, and at a certain density current spread will cause interference between the signals of these electrodes. Present research looks to overcome this issue by focusing on the design of new electrodes and implantation surgeries, which would allow for the insertion of the array directly into the nerve trunk in the modiolus of the cochlea (Middlebrooks and Snyder 2008). If successful, these arrays could make it possible for more electrodes to yield higher spectral and temporal resolution, without signal corruption due to current leak (Middlebrooks and Snyder 2010). Another approach, referred to as current field focusing, seeks to reduce the effects of current spread with current fields that sum spatially in a predefined, beneficial manner. Therefore, instead of focusing on the current emitted directly from each electrode, and allowing the overlapping current regions to be sources of noise, overlapping current fields are used to create a three-dimensional electric field, which correctly targets sites along the cochlear nerve (Srinivasan et al. 2010).

Beyond the interfacing electrode array, other aspects of cochlear implants have also been improved. In particular, the shift from an analog audio processing system to a discrete digital one has allowed devices to present far more complex patterns to their electrodes. Instead of presenting continuous analog waveforms simultaneously to all electrodes, devices can employ a continuous interleaved sampling strategy, which presents brief pulses to each electrode in non-overlapping sequences (Wilson et al. 1991). Using this advanced audio processing, it was found that higher performance could be achieved by extracting the temporal envelopes of speech information from a limited number of broad frequency bands. These envelopes can be used to modulate carriers of the same bandwidths, thus preserving the temporal envelope cues in each band, and the resulting band-limited signals can then be delivered non-simultaneously to the electrodes (Shannon et al. 1995; Galvin et al. 2015).
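A simplified sketch of this style of processing is shown below: the audio is split into a few frequency bands, each band's temporal envelope is extracted, and the envelopes become per-electrode pulse amplitudes delivered in sequence. The band edges, pulse rate, and compression step are illustrative choices, not a clinical processing strategy.

```python
# Simplified sketch of an interleaved-sampling style processor: split audio
# into a few frequency bands, extract each band's temporal envelope, and use
# the envelopes as per-electrode pulse amplitudes delivered in sequence.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

fs = 16000                                    # audio sample rate
t = np.arange(0, 0.5, 1 / fs)
audio = np.sin(2 * np.pi * 400 * t) + 0.5 * np.sin(2 * np.pi * 2200 * t)

band_edges = [(100, 400), (400, 1000), (1000, 2400), (2400, 5000)]  # 4 channels

envelopes = []
for low, high in band_edges:
    sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
    band = sosfilt(sos, audio)
    envelopes.append(np.abs(hilbert(band)))   # temporal envelope of this band
envelopes = np.array(envelopes)

# Downsample envelopes to the pulse rate and compress to a usable range.
pulse_rate = 800                              # pulses per second per channel
step = fs // pulse_rate
amplitudes = np.log1p(envelopes[:, ::step])   # one row of pulse amplitudes per electrode
print("pulse-amplitude matrix shape (channels x pulses):", amplitudes.shape)
```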

Although cochlear implant performance may seem nearly optimal, especially compared to other sensory neuroprostheses, there are still areas of the design that can be improved upon. For example, cochlear implants still perform poorly when faced with "the cocktail party problem," which describes any auditory situation analogous to that of an individual who must distinguish the voice of their conversation partner from the plethora of voices at a cocktail party. To combat this issue, efficient techniques to bolster the signal-to-noise ratio during times of high background noise are still highly sought after (Carroll et al. 2011). Another deficiency in cochlear neuroprostheses is their lack of tone perception when listening to music (McDermott 2004; Peng et al. 2004). Techniques such as bilateral implantation (van Hoesel et al. 1993; Laske et al. 2009) and the use of cochlear implants in combination with hearing aids for low-frequency amplification attempt to alleviate these issues (Francart and McDermott 2013).

8.3.3 Interfaces in the Cochlear Nucleus or Inferior Colliculus Deliver Audio Signals Directly to the Brain

Many separate research groups have taken steps toward developing auditory neuroprostheses that directly interface with the brain. This is, in part, because many individuals lack a functioning cochlear nerve, rendering a cochlear implant useless and necessitating an interface location further along the pathway. The next potential interface site along the auditory pathway is the cochlear nucleus, which exists within the dorsolateral side of the brainstem and receives direct input from the cochlear nerve. Since demonstration of this site as a successful interface (Edgerton et al. 1982), several multichannel systems implanted into patients’ cochlear nuclei were developed and carried through clinical trials (Nevison et al. 2002). Stimulating electrodes within the cochlear nucleus are able to successfully transfer auditory information; however, this information is not well received and decoded by the cochlear nucleus, making it difficult for patients to comprehend speech without concurrent lip-reading (Otto et al. 2002). While, as before, this may be due to distortion from overlapping electrical fields, it is also likely that the neurons encoding high frequencies, typical of normal conversation, are located below the surface of the brainstem and are therefore not easily accessible or well stimulated by the surface electrodes (Shannon et al. 1993).

In an attempt to reach these inaccessible neurons, an auditory brainstem implant was designed which used surface electrodes in conjunction with electrodes that penetrate a couple of millimeters into the cochlear nucleus. While this new electrode array improved some aspects of performance, overall speech understanding did not significantly increase (Otto et al. 2008). One confounding variable in the initial studies of these implants was that nearly all of the patients who qualified for clinical trials had lost their hearing from neurofibromatosis type II (NF2), a disease characterized by tumorigenesis along the auditory pathway between the inner ear and the brainstem. After restricting testing to patients without NF2, greater levels of speech comprehension than with a cochlear implant were found, as expected (Colletti et al. 2009). While this suggested that NF2 renders the cochlear nucleus a poor interface site, some studies have shown that an improved surgical approach and procedure may allow the cochlear nucleus implant to deliver better speech recognition in patients with NF2 (Behr et al. 2007).

Researchers are also working on additional possible interface sites, such as the higher-level inferior colliculus within the auditory midbrain. An electrode array was successfully implanted into the midbrain of six patients, but unfortunately yielded unsatisfactory results in speech recognition (Lim et al. 2007). That said, these midbrain implants will continue to be developed and improved, as they offer the only option for patients with damage to lower sites along the auditory pathway, such as the cochlear nucleus. More importantly, even small successes in the restoration of speech recognition are still helpful, as they provide sound awareness and discrimination to support lip-reading.

8.4 A Visual Neuroprosthesis

Approximately 285 million people worldwide are visually impaired, and roughly 40 million of them are blind (Pascolini and Mariotti 2012). Early medical treatment centered on drugs that only slowed the onset of blindness, leaving no options for those who had already lost their vision. In light of this obvious need for an effective solution, researchers have become very interested in designing an implantable visual neuroprosthesis that reproduces natural functionality. Approaches to the creation of such a device vary, but the underlying goal is the same: convey visual information about the user's surroundings in an intuitive manner. To accomplish this, the implanted visual neuroprosthesis must capture the incoming light, process it into a representative signal compatible with the neural region it interfaces with, and convey the signal to the targeted region through patterned electrical microstimulation.

Possible interface locations are limited by the status of the patient's visual system, as the location must be intact and healthy. Highly studied stimulation targets include the retina, optic nerve, and visual cortex. Each of these locations encodes the visual signal differently, necessitating uniquely encoded input signals, which leads to different locations performing better in different qualitative review tasks such as contrast, brightness, edge detection, and depth of vision. Device design is also dependent on the process of translating light into an electrical signal, which dictates the bandwidth and type of available information that will be processed and transmitted through the rest of the system.

8.4.1 Interfaces in the Visual Cortex Deliver Visual Signals Directly to the Brain

Shockingly, just like the auditory neuroprosthesis, the first iteration of a visual neuroprosthesis came as early as 1748, when it was shown that a voltage potential applied across the eyes of a blind patient caused him to perceive a flame passing in front of his eyes (Leroy 1755). Significant advancement would not come again until the 1920s, when the capacity to induce visual percepts via electrical stimulation of the occipital cortex was formally shown (Culver 1929). Then, in 1968, the first successful implantation of an electronic stimulation device into the visual cortex took place when a pair of doctors connected an array of radio receivers to electrodes implanted in the occipital pole of the right hemisphere of a blind patient (Brindley and Lewin 1968). Certain radio signals were found to cause the patient to experience sensations of flashes of light, known as phosphenes. Even more promising was the number of distinguishable phosphene patterns they could produce: the patient was able to resolve the difference between stimulation from electrodes placed only a couple of millimeters apart from one another.

The effectiveness of this solution for patients who had been blind for a long period of time was still unproven, and researchers worried that such patients' visual pathways might degenerate and become unresponsive to stimulation (Brindley et al. 1972). However, further studies showed that implanted electrodes allowed for the successful production of phosphenes in individuals who had been blind for many years (Dobelle et al. 1974). In 1978, a team of researchers implanted a square array of platinum electrodes on the surface of a patient's primary visual cortex. At the turn of the century, after two decades of monitoring this patient and tweaking the interface, the group published its research, unveiling the first visual neuroprosthesis capable of restoring vision by feeding a processed signal from a digital video camera into the visual cortex (Dobelle 2000).

The initial success of this device encouraged many other researchers to pursue the idea of a visual neuroprosthesis. At the same time, it also set an archetypal design for future devices. A CCD array, similar to those found in simple black-and-white cameras, was mounted on glasses and used to capture incoming light and convert it into a digital signal. This signal was sent to a small processing unit, worn in a belt pack, which converted the image into its representative neural signal. The output from the processing unit was then sent to a microcontroller, which delivered stimulation to electrodes in the visual cortex via a percutaneous pedestal. It was found that by delivering certain stimulation patterns, this system could produce phosphene-based images. Researchers also hypothesized that they could interface with other cortical locations in addition to the occipital lobes, allowing for greater information transfer and increased resolution (Dobelle 2000).
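To make the division of labor in this archetype concrete, the minimal sketch below shows the kind of mapping the belt-worn processor must perform: reducing a camera frame to one stimulation amplitude per electrode. The grid size, current ceiling, and linear brightness-to-current mapping are all assumptions made for illustration; real encoding schemes and hardware interfaces are considerably more elaborate.

```python
import numpy as np

# Hypothetical sketch: reduce a camera frame to one stimulation amplitude per
# cortical electrode. Grid size, current ceiling, and the linear mapping are
# assumptions for illustration only.

GRID = (8, 8)              # assumed electrode layout
MAX_CURRENT_UA = 60.0      # assumed per-electrode amplitude ceiling (microamps)


def frame_to_stimulation(frame: np.ndarray) -> np.ndarray:
    """Downsample a grayscale frame (0-255) to one amplitude per electrode."""
    h, w = frame.shape
    gh, gw = GRID
    # Average the brightness inside each electrode's tile of the image.
    tiles = frame[: h - h % gh, : w - w % gw].reshape(
        gh, h // gh, gw, w // gw
    ).mean(axis=(1, 3))
    # Map brightness linearly onto the allowed current range.
    return (tiles / 255.0) * MAX_CURRENT_UA


if __name__ == "__main__":
    fake_frame = np.random.randint(0, 256, size=(120, 160), dtype=np.uint8)
    print(frame_to_stimulation(fake_frame).shape)   # (8, 8): one value per electrode
```

In practice the image-to-stimulation transformation also has to account for electrode-specific thresholds, charge limits, and the nonlinearity of phosphene brightness, which is why the processing unit rather than the camera carries most of the system’s complexity.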

It is interesting to consider what these phosphene-based images may look like to a patient. Currently, it is thought that they may resemble the simple, low-resolution images produced by the large light bulbs in older stadium scoreboards. Continued implantations of these visual neuroprostheses were associated with high success rates and limited negative effects. After implantation, patients needed only 10 days of training to become comfortable with the system and quickly progressed to routine, high performance on common eyesight tests, such as letter recognition and finger counting. Users were even able to achieve visual acuity scores of around 20/400 on standardized eye tests (Dobelle 2000). From these results, it was evident that visual neuroprostheses could dramatically improve quality of life and provide recipients with independence, including the ability to navigate alone (Dobelle 2000). Even more impressively, one of the implanted patients would go on to demonstrate that he could drive a car using only the visual data provided by the implant (Naumann 2012). In addition to spatial navigation tasks, it was found that the camera interface could be adapted to allow the user to watch television and control a computer.

Although Dobelle’s project was kept secret, others were concurrently expanding research in this field. One group, in particular, had shown the feasibility of using ICMS to deliver high-resolution visual percepts by utilizing higher-density, penetrating MEAs with reduced power requirements (Schmidt et al. 1996). Using this concept, a rival visual neuroprosthesis was developed, which is now in clinical trials (Srivastava et al. 2009). This new system is very similar to Dobelle’s but incorporates several dramatic improvements (Lane et al. 2011). Instead of flat, surface electrodes, a custom intracortical array of penetrating electrodes was created. The small footprint of this array permits numerous arrays to be implanted into a patient’s occipital lobe, allowing for as many as one thousand unique intracortical stimulation sites. It is hoped that this increase in the spatial resolution of stimulation will allow for the effective transmission of higher-resolution images. To increase the feasibility of this approach, a wireless telemetry system was developed, which uses a subminiature, autonomous, wireless stimulator module to communicate with the electrode arrays and power them wirelessly (Rush and Troyk 2012). In addition to making multiple, autonomous arrays possible, this system should prove crucial in promoting the development of devices that employ intracortical stimulation techniques, whose early safety concerns limited their entry onto the market (Srivastava et al. 2009).

Although now facing competition from visual neuroprostheses that interface with the retina, cortical implants still offer many benefits. These devices have reduced power requirements, more predictable phosphene production with less flicker and blur (Brindley and Lewin 1968), and the capacity for higher resolution thanks to the increased room for electrodes available on the cortex (Nordhausen et al. 1996). Cortical implants also remain a necessity for those with extensive damage to both the retina and the optic nerve, which precludes the use of a retinal implant.

8.4.2 Interfacing with the Retina May Allow for Less Complex Encoding of the Visual Signal

Retinal implants, which interface with the retina of the eye, cannot be considered BMIs in the strictest sense, even though the retina is technically part of the central nervous system. Nevertheless, the retina offers a promising interface site for a visual neuroprosthesis in those with diseases that solely affect the eye’s photoreceptor cells. For example, in the two million individuals worldwide who are blind due to retinitis pigmentosa (Busskamp et al. 2012) and the 50 million who are blind due to age-related macular degeneration (Stanton and Wright 2014), it is still possible to interface directly with the retinal bipolar and ganglion cells. This interface location, at the very beginning of the visual pathway, allows for minimal preprocessing of the signal by taking advantage of the visual system’s own processing circuitry. This inclusion of significant natural processing helps shape the neural response at the visual cortex into a more instinctively familiar pattern, while providing a less invasive option than interfacing directly with the visual cortex.

The Argus II retinal implant, manufactured by Second Sight Medical Products (SSMP), became the first retinal implant approved on the European market, in 2011, and in the United States, in 2013. Its predecessor, the Argus I, had completed the first successful clinical trial of an active epiretinal implant (Humayun et al. 2003). The Argus design used the same image capture and processing scheme seen in previous visual neuroprostheses, differing only in its interface. In the Argus I, this interface consisted of an extraocular electronic case, attached to the temporal region of the skull, which produced and delivered stimulation signals via a subcutaneous cable to an intraocular array placed on the epiretinal surface; the array contained a square arrangement of 16 flat platinum electrodes (Piyathaisere et al. 2003). Interestingly, the design also utilized the vitreous as a heat sink for the device (Piyathaisere et al. 2003). However, arrays placed in this location were found to have difficulty maintaining prolonged attachment (Majji et al. 1999) and necessitated increased image processing to mimic the output of ganglion cells (Becker et al. 1999). When creating the Argus II, an array with 60 electrodes was used to allow for higher resolution; after a clinical trial, the device gained FDA approval (Humayun et al. 2012). Today, more than 80 patients have been implanted with the Argus II, and Second Sight is working on a future model employing a 200-electrode array (Fernandes et al. 2012).

Though the first to gain approval, Second Sight is not the only company producing retinal implant-based visual neuroprosthetic systems. Bionic Vision Australia (BVA) has been developing two implants of its own. Instead of placing the electrode array in the epiretinal space, however, their first system is designed to be implanted into the suprachoroidal space, between the sclera and the choroid. BVA believes this space offers a safer location for implantation and allows the visual neuroprosthesis to work in tandem with residual preoperative vision, increasing acuity. The device is undergoing clinical trials using a 22-electrode array, while BVA works on upgrading to a 98-electrode array (Ayton et al. 2014). The second system BVA is developing is a high-acuity epiretinal device that uses synthetic diamond electrodes and casing for the implanted chip, replacing the standard platinum and silicon hardware (Hadjinicolaou et al. 2012). They believe this unique material could allow for over 1000 electrodes in a single array; a model with 256 electrodes has already been developed (Smith et al. 2014).

It is thought that subretinal placement of an implant, between the photoreceptor layer and the retinal pigment epithelium, may allow for normal processing by the middle and inner retinal layers. The subretinal space may also provide a more stable location for array fixation, allowing for longer-lasting functionality (Chow et al. 2004). While proximity to the retina is advantageous for many reasons, it also introduces obstacles, such as limited implant space (Volker et al. 2004) and an increased likelihood of thermal injury to the retina.

The use of a photodiode array, instead of a traditional MEA, is another reason some visual neuroprostheses target early-stage interface locations along the visual pathway. A photodiode array takes the entire system, including the camera and image processing, and places it within the implantable chip. Light enters the eye and is absorbed by the photodiodes on the array, which convert it into an electrical current that is delivered through microelectrodes to stimulate the ganglion cells. One visual neuroprosthesis company, Optobionics, developed and successfully implanted a model with 5000 micro-photodiodes, becoming the first company to have a subretinal implant evaluated in clinical trials (Chow et al. 2004). The initial study implanted the devices into six patients suffering from retinitis pigmentosa. After implantation, all of the subjects reported improved perception of contrast and motion detection, sharper resolution, and an increased visual field. However, current micro-photodiodes are unable to receive enough incident light from realistic environments to generate adequate currents for stimulation of the remaining retinal cells (Zrenner 2002). To counter this shortcoming, several other groups have developed designs that incorporate external power sources to amplify the effects of incident light. Recently, Retina Implant AG developed a chip suitable for subretinal implantation, housing 15,000 independent micro-photodiode-amplifier-electrode elements powered via transdermal current induction. This implant underwent clinical testing in nine patients, with most reporting improvements in light perception, light localization, motion and angular speed detection, grating acuity, and visual acuity. Unfortunately, trials were eventually put on hold due to repeated failure of the implant (Stingl et al. 2013).
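The shortfall noted by Zrenner (2002) can be appreciated with some rough arithmetic. The snippet below runs the numbers under generous, purely illustrative assumptions; the pixel size, irradiance, responsivity, and required current are all invented for the example, yet the resulting photocurrent still falls thousands of times short of typical stimulation currents, which is why externally powered amplification is attractive.

```python
# Back-of-the-envelope check, with every value assumed purely for illustration:
# even under a generous light level, a tiny photodiode pixel yields far less
# current than is typically used to stimulate retinal cells.

PIXEL_AREA_MM2 = 0.02 * 0.02        # assumed 20 um x 20 um pixel
IRRADIANCE_W_PER_MM2 = 1e-6         # assumed (generous) retinal irradiance
RESPONSIVITY_A_PER_W = 0.4          # assumed photodiode responsivity
REQUIRED_CURRENT_A = 1e-6           # assumed stimulation current (~1 microamp)

photocurrent_a = PIXEL_AREA_MM2 * IRRADIANCE_W_PER_MM2 * RESPONSIVITY_A_PER_W
shortfall = REQUIRED_CURRENT_A / photocurrent_a

print(f"photocurrent ~ {photocurrent_a:.1e} A")   # on the order of 1e-10 A
print(f"shortfall    ~ {shortfall:,.0f}x")        # thousands-fold gap
```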

Much of the development of visual neuroprostheses has centered on improving the hardware used to interface with the visual system. While this has yielded good results, one must also consider optimizing the code that controls these implants. With this in mind, researchers have been working on mimicking the natural processing that the retina performs on incoming light. This is important because incorporation of the retina’s neural code is thought to be essential for creating stimulation patterns comprehensible to the visual cortex. Recent research has shown that an encoder designed to convert incoming light into patterns that mimic naturally occurring neural signals can be incorporated into the design of visual neuroprostheses to improve performance (Nirenberg and Pandarinath 2012).
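One widely used way to approximate the retina’s transformation of light into spike trains is a linear-nonlinear-Poisson (LNP) cascade: filter the stimulus in time, pass the result through a static nonlinearity to obtain a firing rate, and draw spikes from that rate. The sketch below illustrates only this general idea; the filter shape, nonlinearity, and rate scaling are invented for the example and do not reproduce the published encoder.

```python
import numpy as np

# Minimal sketch of a linear-nonlinear-Poisson (LNP) style encoder, the general
# model family often used to approximate retinal ganglion cell responses. The
# filter, nonlinearity, and rates below are invented for the example.

rng = np.random.default_rng(0)

DT = 0.005                                   # 5 ms time bins
t = np.arange(0, 0.2, DT)                    # 200 ms temporal filter
temporal_filter = np.exp(-t / 0.05) - 0.5 * np.exp(-t / 0.1)   # biphasic shape


def encode(light_intensity: np.ndarray) -> np.ndarray:
    """Map a 1-D light-intensity trace to simulated ganglion-cell spike counts."""
    drive = np.convolve(light_intensity, temporal_filter, mode="same")
    rate = 20.0 * np.log1p(np.exp(drive))    # softplus nonlinearity -> firing rate (Hz)
    return rng.poisson(rate * DT)            # Poisson spike counts per bin


if __name__ == "__main__":
    stimulus = rng.random(400)               # 2 s of arbitrary input
    print(encode(stimulus)[:20])
```

A prosthesis built around such a model would replace each electrode’s raw brightness value with the model’s predicted ganglion-cell output before stimulating, which is the essence of the encoder-based approach described above.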

8.5 A Brain-to-Brain Interface

These last few sections have showcased the amazing modularity of the brain, which allows it to interface with a wide variety of man-made devices ranging from robotic arms to sensor arrays. In this context, it is easy to imagine the brain as a computer and these neuroprostheses as connected peripheral devices. Yet today’s computers have moved beyond computer-peripheral interfaces to a new type of interface: the World Wide Web, a network enabling billions of computers to communicate directly. It is thus natural to wonder about the possibility of creating a similar network using neural interfaces, a network of connected brains. Certainly, the technology exists to both read and write neural information; but what would such a network look like, and what kind of data transfer could it actually support? Although anything close to a network of brains remains solidly within the realm of science fiction, within the last few years researchers have started to lay the groundwork for this concept by creating and testing its simplest configuration: a direct brain-to-brain interface.

8.5.1 Telepathically Linked Rats Are Able to Cooperatively Complete Tasks While in Separate Locations

Many decisions were involved in creating the first brain-to-brain interface, including what information to transmit. In the first proof-of-concept brain-to-brain interface, researchers turned to a familiar neural interface location: the sensorimotor cortex of rats. To test the feasibility of this interface, two rats, identified as the encoder and the decoder, were paired (Pais-Vieira et al. 2013). The encoder rat was placed in a cage and given a two-alternative forced-choice task, such as pressing the correct lever when presented with two options. In one experiment, the encoder rat was tasked with choosing a lever based on an LED cue. While the encoder rat received this cue and performed the task, sensorimotor information was recorded from its M1 via an MEA. This information was transmitted to the decoder rat, where ICMS was employed to write the same neural signal into its M1. The decoder rat was then given the same selection task, but with the ICMS signals replacing the LED cue. Amazingly, the transmission of information via the brain-to-brain interface allowed the decoder rat to select the correct lever. Finally, feedback was introduced so that the encoder rat received an additional reward whenever the decoder rat performed well. This created a dyad, with each rat dependent on the other for high task performance. The resulting data showed that the rats coordinated, using their real-time brain-to-brain interface to achieve the highest performance and, correspondingly, the highest possible reward rate.
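The core of such an interface is the mapping from the encoder’s recorded activity on a single trial to a stimulation pattern the decoder can discriminate. The sketch below shows one simple way such a mapping could work, comparing a trial’s firing rate against two templates and delivering a correspondingly distinct number of ICMS pulses; the template values, pulse counts, and nearest-template rule are invented for illustration and are not the published algorithm.

```python
# Hedged sketch of the kind of transformation a brain-to-brain interface needs:
# turn the encoder rat's recorded M1 activity on one trial into a stimulation
# pattern for the decoder rat. All values below are invented for illustration.

LEFT_TEMPLATE_HZ = 12.0      # assumed mean M1 rate on "left lever" trials
RIGHT_TEMPLATE_HZ = 25.0     # assumed mean M1 rate on "right lever" trials


def spikes_to_icms_pulses(trial_rate_hz: float) -> int:
    """Map the encoder's trial firing rate to a number of ICMS pulses."""
    # Pick whichever template the observed rate is closer to, then deliver a
    # distinct pulse count so the decoder rat can discriminate the two cues.
    closer_to_right = abs(trial_rate_hz - RIGHT_TEMPLATE_HZ) < abs(
        trial_rate_hz - LEFT_TEMPLATE_HZ
    )
    return 100 if closer_to_right else 10


if __name__ == "__main__":
    for rate in (10.5, 23.0, 18.4):
        print(rate, "Hz ->", spikes_to_icms_pulses(rate), "ICMS pulses")
```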

After successfully sending signals between the motor regions of two rats using this brain-to-brain interface, the researchers wondered whether they could also transmit sensory information (Pais-Vieira et al. 2013). To test this, a second experiment, very similar to the first, was performed. In this experiment, the encoder rat was given a tactile cue to indicate which lever to press. The encoder rat received this cue by poking its nose into an aperture, gauging the width of the opening with its whiskers, and then choosing the correct lever based on that width. The tactile signal produced by the encoder rat’s whiskers when measuring the aperture was recorded from the encoder rat’s S1 and transmitted into the decoder rat’s S1. Again, it was found that the decoder rat was able to use this transmitted sensory information to successfully determine which lever to press.

8.5.2 An Interspecies Brain-To-Brain Interface Allows a Human to Twitch a Rat’s Tail

Proof-of-concept of a brain-to-brain interface generated excitement about porting the technology to human subjects. EEG was selected as a noninvasive method of reading neural information and was used in an interspecies brain-to-brain interface between a human volunteer and an anesthetized rat (Yoo et al. 2013). Steady-state visually evoked potentials were used to identify whether the human volunteer was looking at a flashing light bar. Researchers then linked these potentials to a stimulation device targeting the rat’s motor cortex, causing the rat’s tail to move each time the human viewed the flashing bar. This interface achieved a transmission success rate of over 90 %, with an approximate two-second transmission delay.
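A steady-state visually evoked potential can be detected because occipital EEG tends to oscillate at the frequency of the attended flicker. The sketch below scores a window of EEG by how much of its power is explained by sine and cosine references at an assumed flicker frequency; the frequency, threshold, and single-channel treatment are illustrative assumptions, not the published detection pipeline.

```python
import numpy as np

# Toy SSVEP check: score a window of occipital EEG by the fraction of its
# power explained by references at the known flicker frequency. All
# parameters are assumptions for illustration.

FS = 256              # assumed sampling rate (Hz)
FLICKER_HZ = 15.0     # assumed flicker frequency


def ssvep_score(window: np.ndarray) -> float:
    """Fraction of the window's power explained by the flicker frequency."""
    t = np.arange(window.size) / FS
    refs = np.vstack([np.sin(2 * np.pi * FLICKER_HZ * t),
                      np.cos(2 * np.pi * FLICKER_HZ * t)])
    window = window - window.mean()
    coeffs, *_ = np.linalg.lstsq(refs.T, window, rcond=None)
    fitted = refs.T @ coeffs
    return float(np.sum(fitted ** 2) / np.sum(window ** 2))


if __name__ == "__main__":
    t = np.arange(0, 2.0, 1.0 / FS)
    attending = np.sin(2 * np.pi * FLICKER_HZ * t) + np.random.randn(t.size)
    idle = np.random.randn(t.size)
    print(ssvep_score(attending) > 0.2, ssvep_score(idle) > 0.2)   # True False
```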

8.5.3 A Brain-to-Brain Interface in Humans Can Be Used to Cooperatively Play Video Games or Send Morse Code

After addressing the problem of reading neural signals noninvasively, the next challenge in completing a human brain-to-brain interface was to write neural signals, also noninvasively. Research suggested that transcranial magnetic stimulation (TMS) could be a viable option for this task. TMS uses a magnetic field generator to produce small electric currents within a targeted brain region, but it is limited by poor temporal and spatial resolution. To test the feasibility of using TMS to create a brain-to-brain interface, an experiment was set up consisting of a human encoder wearing an EEG-based BMI and a human decoder wearing a TMS-based BMI (Rao et al. 2014). These two subjects, connected via the brain-to-brain interface, were placed in separate rooms and tasked with playing a computer game cooperatively. The goal of the game was to identify incoming planes as friend or enemy, and then fire a cannon only at enemy planes. The encoder could view the game screen and identify the planes, but had no ability to fire the cannon. The decoder had a button for firing the cannon, but no knowledge of when to fire it. When the encoder identified an enemy plane and wished to fire the cannon, he or she would engage in right-hand motor imagery. This motor imagery signal could be detected through the EEG, translated into a signal representing finger movement, and then transmitted into the decoder’s motor cortex, causing his or her finger to twitch and press the button, thereby firing the cannon at the correct time. Although performance with the brain-to-brain interface was not perfect, it was still statistically significant, with a transmission latency of only 650 ms.
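On the encoder’s side, one simple way such motor imagery can be flagged is by watching the mu rhythm (roughly 8-13 Hz) over motor cortex, whose power typically drops during hand motor imagery. The toy detector below illustrates that idea; the sampling rate, window, threshold, and single-channel treatment are assumptions for the example, not the study’s actual pipeline.

```python
import numpy as np

# Illustrative motor-imagery detector: flag a drop in mu-band power relative
# to a resting baseline, which would then trigger the downstream stimulation.
# All parameters are assumptions for illustration.

FS = 256                      # assumed EEG sampling rate (Hz)
MU_BAND = (8.0, 13.0)


def mu_band_power(window: np.ndarray) -> float:
    """Mean spectral power in the mu band for one EEG window."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(window.size, d=1.0 / FS)
    mask = (freqs >= MU_BAND[0]) & (freqs <= MU_BAND[1])
    return float(spectrum[mask].mean())


def imagery_detected(window: np.ndarray, baseline_power: float) -> bool:
    """Flag motor imagery when mu power falls well below the resting baseline."""
    return mu_band_power(window) < 0.5 * baseline_power


if __name__ == "__main__":
    t = np.arange(0, 1.0, 1.0 / FS)
    rest = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)      # strong mu
    imagery = 0.2 * np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
    baseline = mu_band_power(rest)
    print(imagery_detected(rest, baseline), imagery_detected(imagery, baseline))
```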

All of the aforementioned brain-to-brain interfaces were designed to transfer motor and sensory information. Could more abstract information, like words, be transferred? To test this possibility, a similar EEG-TMS-based brain-to-brain interface was used, detailed in Fig. 8.4. This time, however, the EEG monitored the encoder’s performance of motor imagery tasks, and the TMS stimulated the decoder’s occipital cortex, creating a phosphene (Grau et al. 2014). On screen, the encoder was shown a flashing Morse code representation of a word. The brain-to-brain interface then transferred this same Morse code signal to the decoder by delivering a phosphene whenever the EEG registered the encoder’s response to a flash of code. Using this setup, simple words such as “hola” and “ciao” were transmitted between individuals in different cities with an error rate of less than 20 %.

Fig. 8.4

A schematic overview shows how information from the motor cortex of one individual can be collected using EEG and transmitted to another individual using TMS. Using this interface and an internet connection, two individuals were able to communicate simple Morse code (Grau et al. 2014)
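To make the encoding concrete, the sketch below expands a word into a Morse schedule of phosphene “on” and “off” events of the kind such a one-bit channel could deliver. The letter table subset, timings, and event format are placeholders for illustration and do not describe the study’s actual protocol.

```python
# Toy illustration of sending a word over a one-bit phosphene channel using
# Morse code, as described above. Timing values and the event format are
# placeholders; a real system would gate each event on the EEG decoder.

MORSE = {"h": "....", "o": "---", "l": ".-..", "a": ".-",
         "c": "-.-.", "i": ".."}

DOT_MS, DASH_MS, GAP_MS = 200, 600, 400     # assumed timing


def word_to_phosphene_schedule(word: str) -> list[tuple[str, int]]:
    """Expand a word into (event, duration_ms) pairs; 'on' means deliver a phosphene."""
    schedule = []
    for letter in word.lower():
        for symbol in MORSE[letter]:
            schedule.append(("on", DOT_MS if symbol == "." else DASH_MS))
            schedule.append(("off", GAP_MS))
        schedule.append(("off", 3 * GAP_MS))  # inter-letter gap
    return schedule


if __name__ == "__main__":
    for event, duration in word_to_phosphene_schedule("hola")[:8]:
        print(event, duration)
```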

There is no question that the development of these brain-to-brain interfaces is an incredible achievement. While many work to transform today’s technologies into complex brain networks, others have begun to consider the possible dangers of such technologies. In particular, there is a fear that individual minds could be assimilated into a group mind or hive mind (Trimper et al. 2014; Hildt 2015; Kyriazis 2015). Although some believe this could usher in an era of higher intelligence (Kyriazis 2015), others believe it could eliminate individuality (Hildt 2015). Another issue raised is neural privacy: some fear that sensitive thoughts could be read and exposed to the public without the thinker’s consent (Trimper et al. 2014). If and when noninvasive, high-throughput neural interface technologies become commercially feasible, these concerns will undoubtedly need to be addressed. However, current neural interfaces pose no immediate ethical danger and continue to provide us with novel and beneficial information about the brain. For those still concerned that others may be reading their mind, it is widely known that a thin layer of tin foil does an excellent job of preventing an EEG signal from being acquired.

8.6 Closing Words

The BMIs discussed in this chapter serve as some of the pillars of this growing field of medical devices. Beyond the few applications touched on here, many others exist, which can be read about in Moxon and Foffani (2015). Within the next century, researchers hope to develop a neuroprosthetic arm that allows a user to effortlessly drink a glass of water and feel the glass’s temperature. In sensory neuroprosthetics, it is hoped that devices can improve so that, instead of hearing electronic voices or seeing blurry contrast, users can indulge in symphonies and enjoy gazing upon works of art. On the biological side, these achievements will necessitate an increased understanding of the circuits of the brain: how they encode and process information, and how best to interface with them. On the technological side, they will require advances in computing power, wireless communication, electronic sensors, and materials science.

Although this chapter serves only as a brief review of the field of BMIs, it is hoped that it piques the reader’s interest. As a growing field on the cusp of bringing many different products to clinical trial, the industry will need many promising future scientists and engineers to contribute to the research. Beyond the products emerging from academia and being brought to clinical trial, another exciting expansion in the field of BMIs is occurring right now: for the first time, noninvasive BMI technologies are available on the open consumer market. Within the last few years, brain interface technologies have gone from prohibitively expensive and technically challenging to inexpensive, plug-and-play devices. Now, members of the public can order their own EEG recording equipment and use it by simply plugging it into their personal computers. For example, the bioinformatics company Emotiv produces a range of EEG headsets, which it markets to the general public. The availability of this equipment, coupled with the current generation’s love of technology and passion for hacking and improving electronic devices, will undoubtedly bring forth innovative ideas and exciting new uses for neural interfaces. Already, individuals are using the equipment to play video games with their minds and to track their stress levels. It is with much anticipation that the field looks forward to seeing what other uses will be discovered.