Introduction

The quest to bridge the gap between humans and machines has been going on for decades, and recent advancements in the Brain–Machine Interface (BMI) stand at its forefront. The fact that brain neurons exchange information through electrical impulses paved the way for neuroscientists and roboticists to communicate with the human brain through electrical signals. Gradual advancements in science and technology have since given rise to recent studies in wearable neuro-prosthetics. Whether to enhance one’s innate capabilities or to re-establish a jeopardized brain–body connection, it is now possible to translate brain signals and neural intent into machine commands that control the actuation of external exoskeleton units. Though deeper research is needed to realize its full potential, BMI is at present the best option for combining human agility with mechatronic capability.

In the late nineteenth century, scientists confirmed the artificial actuation of animal body parts by electrically stimulating different parts of the animals’ brains [1]. Likewise, electrical impulses measurable with a galvanometer connected to their brains were discovered to be associated with various physical activities of the animals [2]. These, coupled with similar findings in humans in the early twentieth century [3], laid the cornerstones for researchers and physiologists to ‘read’ brainwaves or neural signals and distinguish ‘normal’ brain activity from abnormal activity. Subsequent innovations and discoveries opened the doors to a whole new era in neurophysiology.

Methods based on the Electroencephalogram (EEG) and Functional Magnetic Resonance Imaging (fMRI) are currently used to detect abnormalities in patients’ brains and prescribe medications accordingly. Neuro-stimulators are used to modulate abnormal brain signals in patients with epilepsy and Parkinson’s disease [4, 5]. These advancements have made it possible to receive information from, as well as send feedback signals to, the human brain.

Today, researchers are in the process of designing an alternative communication path for when the natural connection between the human brain and body is compromised. From remote cursor movement on a computer screen [6] to statement generation from imagined speech [7] and the use of eye-trackers and brain waves to control robotic hands [8,9,10,11], research by this community has come a long way.

This article presents how different studies have used neuro-signals to understand the control of different body parts and have implemented sensors, actuators, and algorithms to communicate with the human brain. Figure 1 outlines the different modalities used in neural data acquisition and neural stimulation, and accounts for the methods and materials implemented in prosthetic actuators and neural data collecting devices.

Fig. 1 Block Diagram of Different Components of a Typical Neural Interface

Brain Waves: A Brief History

The constant pursuit by physiologists to understand the working of various biological systems in living beings dates back beyond the eighteenth century. Around the 1780s, the findings of Luigi Galvani on ‘animal electricity’ [12], which Alessandro Volta later explored, gave new direction to physiology. Despite the technological limitations of the time, continual research was carried out to ascertain the relation between electricity and nerve impulses. Progressively, those developments led Julius Bernstein and Emil du Bois-Reymond to present the first-ever recording of an action potential in 1865 [12]. The thus-strengthened hypothesis of the electrical nature of nerve impulses inspired scientists to explore the artificial actuation of animal and human body parts using electrical impulses.

In his 1874 work, Dr. David Ferrier used electrodes to deliver controlled electrical impulses to different segments of monkeys’ brains [1]. Ferrier identified regions on both the right and left hemispheres of the subjects’ brains and electrically stimulated them. He discovered that specific body parts of the subjects responded to the stimulation of specific areas of their brains. He also recognized that the regions might not have clear boundaries and might overlap one another, but his findings well established the fact that stimulation of different brain segments resulted in the actuation of corresponding limbs or muscles.

Dr. Richard Caton, in his own work, further confirmed the relation, consistent with Ferrier’s findings, between the activation of specific regions of the brain and the corresponding bodily actuation [2]. In 1875, Caton presented his work on ‘the electric currents of the brain.’ Unlike Ferrier, instead of actuating a brain segment, Caton connected electrodes between the gray matter and the external surface of the skull of the rabbits and monkeys he experimented on. He discovered that a galvanometer could detect feeble alternating currents and that the currents of the gray matter were related to specific bodily functions. The areas determined by Ferrier to be associated with a particular activity displayed a negative variation of electric current whenever Caton’s subjects performed such activities. Caton also confirmed that sensory input to a specific part of a subject’s body affected the currents developed in the corresponding part of the brain.

Such discoveries inspired further work on the brain, neuronal activities, and electricity. While the majority of research at that time consisted of invasive studies on animals (accessing the inner parts of the skull and the brain), in 1929 Dr. Hans Berger presented his work on noninvasive recordings of brain activity in human beings [3]. In his study, Berger recorded brain signals of varying frequency and observed the change in brain signals during wakeful and sleep periods in humans [13, 14]. Berger’s remarkable findings introduced physiologists to the existence of alpha and beta brain waves and offered the community a valuable tool for neural activity recording: the EEG.

Berger’s discovery helped scientists understand that alpha (lower frequency) waves decrease and beta (higher frequency) waves increase as a subject changes from a resting to an active state. Based on the information amassed over decades, basic brain waves have been classified into five types [15], viz. gamma, beta, alpha, theta, and delta waves (arranged in decreasing order of frequency). It is understood that these waves correspond to their respective frequency bands and that each band is associated with a specific state of the human brain.
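
As a rough illustration of this banding convention, the assignment of a frequency to its band can be expressed as a simple lookup. The boundary values below are typical approximations; exact cutoffs vary across sources.

```python
# Approximate EEG band boundaries in Hz; exact cutoffs vary across sources.
EEG_BANDS = {
    "delta": (0.5, 4.0),    # deep sleep
    "theta": (4.0, 8.0),    # drowsiness, light sleep
    "alpha": (8.0, 13.0),   # relaxed wakefulness
    "beta": (13.0, 30.0),   # active concentration
    "gamma": (30.0, 100.0), # high-level cognitive processing
}

def classify_band(freq_hz: float) -> str:
    """Return the conventional EEG band containing a frequency in Hz."""
    for band, (lo, hi) in EEG_BANDS.items():
        if lo <= freq_hz < hi:
            return band
    raise ValueError(f"{freq_hz} Hz lies outside the tabulated bands")

print(classify_band(10.0))  # -> 'alpha'
```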

Today, we have several methods to ‘read’ (record and analyze) neural signals. All of these methods can broadly be categorized into two classes: invasive and noninvasive. As the name suggests, in invasive recordings the electrodes must be planted inside the subject’s brain to collect the signals, thus requiring skilled surgery to place the electrodes at the desired locations to receive the generated electrical impulses. Noninvasive methods, however, are equipped to record brain-generated electrical signals from the subject’s scalp. Both of these approaches have their advantages and drawbacks. There are also partially invasive methods that use the area under the skin and above the skull to get the best of both invasive and noninvasive methods. The following sections are intended to shed some light on these methods and their use.

Neural Activities: Data Collection and Usage

Investigations involving noninvasive and invasive methods of analyzing and utilizing brain waves gave rise to the field of the Brain–Computer Interface (BCI), or Mind–Machine Interface (MMI). BCIs use brain waves to command and control devices like exoskeleton systems, display devices, robots, etc. In the following sections, studies on a variety of methods used in developing different BCIs are discussed.

Using Noninvasive Data Collection Methods

When it comes to noninvasive methods of neural signal analysis, EEG is known for its versatility. EEG devices enable recording of the electrical activity of the subject’s brain. Among present neural activity detection systems that make use of EEG signals, Slow Cortical Potential (SCP) and Sensorimotor Rhythm (SMR) detectors are adopted by many. SCPs are evoked responses to an event; to signal excitability in response to an event, subjects must produce positive or negative SCP shifts beyond a predefined threshold [16]. Understandably, BCIs using SCPs involve an extensive training period in which subjects learn to control the evoked SCPs [6]. SMRs are neural activities in a specific frequency band and are recognized as changes in the EEG signal when certain tasks are performed [16]. SMR is observed to decrease with actual motion or Motor Intention (MI) and to increase after the movement task is carried out. Subjects can learn to control the SMR amplitude in the absence of any MI or related actions, and this can be used to control a BCI system [17].
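
A minimal sketch of how an SMR detector might quantify this effect is to estimate band power over the sensorimotor band with Welch’s method; the sampling rate, band limits, and toy signal below are illustrative assumptions, not parameters from the cited studies.

```python
import numpy as np
from scipy.signal import welch

def smr_band_power(eeg: np.ndarray, fs: float, lo: float = 8.0, hi: float = 13.0) -> float:
    """Mean spectral power of a single-channel EEG trace within [lo, hi] Hz."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
    mask = (freqs >= lo) & (freqs <= hi)
    return float(psd[mask].mean())

# Toy check: a strong 10 Hz rhythm yields high band power; an SMR-based BCI
# watches this quantity fall with movement or motor intention.
fs = 250.0
t = np.arange(0, 4, 1 / fs)
rest = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)
print(smr_band_power(rest, fs))
```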

MI is another neural activity detection approach used to control EEG-based BCIs. In MI, imagined movements create electrical activation signals in particular brain segments (especially in parts of the primary motor cortex). Such activations follow patterns decipherable by classification algorithms. Subjects can use these algorithms to impart control over BCI systems through their intent to perform motor actions, viz. moving an arm, flexing the lower limbs, etc.
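
As a hedged sketch of such a pipeline (one of many used in practice, not the method of any particular cited study), per-channel log variance of band-pass filtered trials can stand in for band power and feed a linear discriminant classifier; the data here are synthetic.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical training set: trials x channels x samples, assumed already
# band-pass filtered; labels 0 = rest, 1 = imagined movement.
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((40, 8, 500))
y = np.repeat([0, 1], 20)

def log_bandpower_features(trials: np.ndarray) -> np.ndarray:
    """Per-channel log variance, a common stand-in for band power."""
    return np.log(trials.var(axis=2))

clf = LinearDiscriminantAnalysis().fit(log_bandpower_features(X_raw), y)
print(clf.predict(log_bandpower_features(X_raw[:2])))
```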

Zeng et al. demonstrated the use of a hybrid human–robot interface that utilized eye-tracker software and a Bluetooth-equipped EEG device [18]. The researchers provided a representational view of the actual robot work environment using a camera and a Graphical User Interface (GUI). Then, using the perspective transformation function of the OpenCV toolbox, gaze coordinates on the GUI screen were mapped onto robot coordinates. In this way, the eye-tracker tool could employ the gaze coordinates to provide direction control for the robotic arm, while the MI signal from the EEG modulated the robotic arm’s speed. A 2-class BMI classification model was calibrated to determine whether the EEG signals represented an MI state or a resting state; the result, in turn, was communicated to the computing unit via Bluetooth. To avoid the frustration users experience from a lack of control and to minimize exhaustion due to continual mental load, the authors proposed a moderate amount of autonomy in the robotic limb. Autonomy was used to avoid collisions and to draw the limb toward the final target at relatively close proximity. In combination, the hybrid gaze–BMI device successfully moved the end effector of the robotic limb through the set course with a failure rate that the researchers claim to be significantly lower than in work incorporating no assistance from robot autonomy.
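
The gaze-to-robot mapping described above can be sketched with OpenCV’s perspective transformation; the screen and workspace coordinates below are illustrative placeholders rather than values from the study.

```python
import numpy as np
import cv2

# Four reference points on the GUI screen (pixels) and their corresponding
# robot workspace coordinates (mm); both sets are hypothetical.
gui_pts = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])
robot_pts = np.float32([[-300, 200], [300, 200], [300, -200], [-300, -200]])

M = cv2.getPerspectiveTransform(gui_pts, robot_pts)  # 3x3 homography

gaze = np.float32([[[960, 540]]])            # gaze fixation on the GUI
target = cv2.perspectiveTransform(gaze, M)   # mapped robot coordinate
print(target)                                # ~[[[0., 0.]]]
```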

Steady State Visually Evoked Potential (SSVEP) is another signal used with EEG devices in the field of BCIs. In this method, flickering visual stimulation is provided to the subjects, and the neural activity is recorded through an EEG device. Studies show that such stimuli generate frequency-dependent neural activities in the subjects’ brains [19,20,21,22,23,24]. Researchers take advantage of this phenomenon to devise different BCI concepts.
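
One common way to exploit this frequency-following property (not necessarily the approach of the cited studies) is canonical correlation analysis against sinusoidal references at each candidate flicker frequency; the frequencies and data below are toy assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def ssvep_score(eeg: np.ndarray, fs: float, f: float, harmonics: int = 2) -> float:
    """Canonical correlation between multichannel EEG (samples x channels)
    and sine/cosine references at frequency f and its harmonics."""
    t = np.arange(eeg.shape[0]) / fs
    refs = np.column_stack([fn(2 * np.pi * h * f * t)
                            for h in range(1, harmonics + 1)
                            for fn in (np.sin, np.cos)])
    u, v = CCA(n_components=1).fit_transform(eeg, refs)
    return float(np.corrcoef(u[:, 0], v[:, 0])[0, 1])

# Pick the candidate flicker frequency with the highest canonical correlation
candidates = [8.0, 10.0, 12.0, 15.0]       # hypothetical stimulus frequencies
fs, t = 250.0, np.arange(0, 2, 1 / 250.0)
eeg = np.column_stack([np.sin(2 * np.pi * 12 * t)] * 4) + 0.5 * np.random.randn(t.size, 4)
print(max(candidates, key=lambda f: ssvep_score(eeg, fs, f)))  # expect 12.0
```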

Another stimulated brain activity occurs when subjects are presented with a rare visual or auditory stimulus within a series of standard stimuli. Such neural activities are named the P300, a type of Event-Related Potential (ERP). These potentials are generally induced about 300 ms after stimulus presentation, hence the name. The P300 waves can be used to develop BCIs that enable subjects to interact with their environment through audio–visual stimuli [16, 25].
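
A minimal sketch of the epoch-averaging step that underlies P300 detection, with a toy signal and hypothetical stimulus onset indices, might look like this:

```python
import numpy as np

def average_erp(eeg: np.ndarray, fs: float, onsets: list[int],
                window_s: float = 0.8) -> np.ndarray:
    """Average fixed-length epochs following each stimulus onset (sample
    indices); P300 detectors look for a positive deflection near 300 ms."""
    n = int(window_s * fs)
    epochs = np.stack([eeg[i:i + n] for i in onsets if i + n <= len(eeg)])
    return epochs.mean(axis=0)

fs = 250.0
eeg = np.random.randn(int(60 * fs))      # one minute of toy single-channel EEG
onsets = list(range(500, 14000, 500))    # hypothetical stimulus onset samples
erp = average_erp(eeg, fs, onsets)
print(f"peak at {1000 * np.argmax(erp) / fs:.0f} ms")
```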

In their work on a portable wireless BMI, Mahmood et al. [24] used time-domain analysis to classify SSVEPs from their subjects’ occipital lobes. The authors introduced a dry-electrode-based soft electronic system (referred to as SKINTRONICS) capable of delivering enhanced EEG analysis. A deep Convolutional Neural Network (CNN) was used to isolate the best electrode locations, those with a higher signal-to-noise ratio, from a cluster of electrodes. Finally, a two-channel electrode placed on the scalp over the occipital area was used to measure the SSVEPs induced by a light-emitting diode (LED) stimulus. The subjects were made to gaze at four different LED locations and to carry out a null task with closed eyes, and the SSVEP data, collected on a Bluetooth-enabled smartphone, were classified into five sets using the CNN. The subjects could then use those data to control a wireless wheelchair, a motorized minicar, or presentation software. Thus, the study successfully demonstrated a wireless, real-time EEG classification system usable ubiquitously for three different purposes. The authors recognized the scope for further improvement using proximity sensors and motor imagination-based EEG systems.
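
The architecture of the CNN in [24] is not reproduced here; purely as an illustration of the idea, a deliberately small 1-D convolutional classifier mapping a two-channel EEG window to five classes could be sketched as follows, with all layer sizes chosen arbitrarily.

```python
import torch
import torch.nn as nn

class TinySSVEPNet(nn.Module):
    """Toy 1-D CNN: two-channel EEG window in, five class logits out."""
    def __init__(self, n_classes: int = 5, n_channels: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=32), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=16), nn.ReLU(), nn.AdaptiveAvgPool1d(1))
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):                     # x: (batch, channels, samples)
        return self.head(self.features(x).squeeze(-1))

logits = TinySSVEPNet()(torch.randn(8, 2, 500))
print(logits.shape)                           # torch.Size([8, 5])
```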

In his 1976 work, Jacques J. Vidal demonstrated real-time recording of visually evoked potentials using EEG [26]. Vidal used five electrodes (one as an electrical reference) to collect Event-Related Potential (ERP) data from the occipital and parietal brain lobes. The visual stimuli consisted of roughly 30 µs long xenon flashes lighting up a red-and-black checkered target arranged like a diamond. Four fixation points at the vertices of the diamond were set so that each stimulus would project the target onto four different retinal locations in the subjects. In that way, the stimulus targeted the distributed neural activity at different sites of the primary visual cortex. For real-time classification of the obtained data, Vidal used a simulated maze on a CRT (Cathode Ray Tube), a commonly used display device of the time. The subjects were to move a triangular symbol through the maze by selecting the appropriate dot before each stimulus. The ERPs thus generated were classified into one of four possibilities: up, down, right, and left. The classification algorithm was a signal detection process in which a predefined decision rule grouped the inputs into classes. The classification result was translated into the corresponding motion of the symbol, which provided the subjects with feedback about their decisions. With this experiment, Vidal succeeded in detecting ERPs in real-time using an EEG recording device.

These methodologies have been continually improved upon with the technological advancement in the field of BCIs. In their recent work, Bai et al. [27] proposed a hybrid system that simultaneously incorporated the P300 and SSVEP stimulations in a BCI speller to enhance its accuracy and speed. In order to evoke the signals simultaneously, the authors proposed a frequency enhanced row and column (FERC) paradigm. Furthermore, a Wavelet and Support Vector Machine (SVM) combination was used to detect the P300 signal, whereas an ensemble task-related component analysis (TRCA) was used for SSVEP detection. With the paradigm mentioned above, the authors demonstrated an accuracy of 94.29% at an information transfer rate (ITR) of 28.64 bits per minute.

In a different study, Xiao et al. [28] introduced a novel v-BCI paradigm using three small, weak stimuli located between eccentrically placed instructions flashed in a row-column paradigm. The weak stimuli around each instruction evoked specific ERPs, and a Discriminative Spatial Patterns (DSP)-based template matching method recognized the features containing the users’ intentions. With the above setup, the authors accomplished an average offline accuracy of 93.46% and an ITR of 120.95 bits per minute.

Another neural activity detection system compatible with EEG devices is Mirror Neuron System (MNS) activation. MNS is activated in subjects who observe their own or someone else’s actions. Researchers have explored the usability of MNS activation for controlling kinematics of BCIs [29], for mapping neural activity to the physical behavior of subjects [16], and for use in rehabilitation processes [30].

Similar to EEG, other recognized noninvasive data collection methods include Magnetoencephalography (MEG), Functional Near-Infrared Spectroscopy (fNIR), Positron Emission Tomography (PET), and Functional Magnetic Resonance Imaging (fMRI). fNIR uses near-infrared light with wavelengths in the range of 700–900 nm to detect the relative ratio of oxygenated and deoxygenated hemoglobin [31]. In response to neural actions, the body undergoes hemodynamic activity and changes the level of blood flow, which allows neural action to be detected through fNIR studies. PET and fMRI also depend on hemodynamic activities [32]. PET introduces a radioactive tracer into the subject’s blood and uses it to detect blood flow changes during different neural activities, while fMRI uses the magnetic properties of oxygenated and deoxygenated hemoglobin (diamagnetic and paramagnetic, respectively) [33] to determine the variation in blood oxygen level in response to neural activities.
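
The oxy/deoxy estimation underlying fNIR can be sketched with the modified Beer–Lambert law: optical-density changes measured at two wavelengths are inverted through a 2 × 2 system of extinction coefficients. The coefficients, separation, and differential pathlength factor below are illustrative placeholders, not tabulated values.

```python
import numpy as np

# Modified Beer–Lambert law:
#   dOD(lam) = (eps_HbO(lam) * dHbO + eps_HbR(lam) * dHbR) * d * DPF
# Extinction coefficients here are placeholders, not tabulated values.
eps = np.array([[1.49, 3.84],    # 760 nm: [eps_HbO, eps_HbR]
                [2.53, 1.80]])   # 850 nm
d, dpf = 3.0, 6.0                # source-detector separation (cm), pathlength factor

def hemoglobin_changes(d_od: np.ndarray) -> np.ndarray:
    """Solve the 2x2 system for [dHbO, dHbR] given optical-density changes
    at the two wavelengths."""
    return np.linalg.solve(eps * d * dpf, d_od)

print(hemoglobin_changes(np.array([0.01, 0.02])))
```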

Naseer et al., in their 2015 work, enumerated several instances of fNIR being incorporated in BCIs [34]. Subjects were reported to be able to convey decisions mentally to impart binary control over the BCIs, such as giving an ‘on’ or ‘off’ command [35] or answering ‘yes’ or ‘no’ to questions [36]. From actual motor tasks to covert complex MI [37] to imagining music, a variety of methods were noted to be successful in applying fNIR to the development and use of BCI systems.

In their fMRI study, Almeida et al. aimed to analyze the activation and deactivation of different brain regions during multi-joint lower limb movement [38]. The researchers used a verbal stimulus, a manual stimulus, and a combination of both to determine brain activity using an fMRI tool. The results were recorded for the three different stimuli on both the right and left lower limbs. The authors hypothesized that the elicited cortical activation might be related to the actuation of the dominant limb, whereas the activation of subcortical areas for left-limb manual stimuli might have occurred due to greater proprioceptive feedback and spatial reference for motor planning. As per the authors, the activation of auditory and visual brain areas was related to the processing of audible information and the correlation of words with bodily movement. The deactivations found were concluded to be coherent with the activations and with related results from previous upper limb studies.

Zhu et al. [39] noted the possibility of initiating a sense of orientation in rats by artificially stimulating the ventroposterior medial (VPM) nucleus. However, uncertainty about the brain regions responsible for this orientation encouraged the authors to perform a PET scan during BCI-based VPM stimulation in rats. With electrodes connected to the right VPM of 12 male rats, the researchers studied PET results before and after the stimulations while focusing on a 30°–60° angular rotation in the subjects. Eight of the twelve subjects exhibited ipsilateral orienting performance in the experiment.

In their study, Mellinger et al. explored the possibility of voluntary modulation of sensorimotor neural signals [40]. The authors pointed out the slow communication speed of EEG-based BCIs and how the immunity of magnetic signals to distortion by the human skull can help MEG improve such outcomes. The research implemented MI and real-time feedback to train six subjects to successfully self-modulate their brain activity and convey binary decisions by moving a cursor on the feedback screen. Further, it was found that in three subjects the origin of the modulated signal was localized to the motor cortex. The authors also presented spatial filtering methods and various processes to eliminate artifacts and improve the Signal-to-Noise Ratio (SNR). Despite several limitations, the final conclusion was that the system’s performance was on par with a highly sophisticated EEG-based BCI system.

Processes like fMRI, fNIR, and PET have better spatial resolution but relatively lower temporal resolution [32, 41] when compared to electrophysiological neuroimaging technologies like EEG. In electrophysiological methods, noninvasive electrodes are placed on the scalp, beneath which lies a vast population of neurons responsible for generating the electrical potential. Furthermore, such signals must travel a considerable distance, relative to single-neuron dimensions, before being detected at the scalp. Therefore, in methods like EEG, it is difficult to record single-neuron activities, and the spatial resolution is diminished accordingly. Due to this inability to access single-neuron recordings, coupled with its susceptibility to noise such as Electrooculographic (EOG), Electromyographic (EMG), and Electrocardiographic (ECG) signals and motion artifacts [41], EEG is yet to become a full-fledged modality for communicating with the human brain. Such drawbacks can be mitigated using modalities like MEG that have a higher spatial resolution than EEG. Further, various source-localization methods like Low Resolution brain Electromagnetic Tomography (LORETA) may improve the spatiotemporal resolution of the noninvasive modalities. Furthermore, accelerometers, ECG sensors, and EOG sensors can be used to measure and eliminate the noise introduced by different artifacts in the final noninvasive recordings [41, 42].
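
As a minimal sketch of the reference-sensor cleanup mentioned above, an EOG channel can be regressed out of each EEG channel by least squares; real pipelines use more elaborate methods, and the data here are synthetic.

```python
import numpy as np

def remove_eog(eeg: np.ndarray, eog: np.ndarray) -> np.ndarray:
    """Subtract the least-squares projection of a reference EOG channel
    from each EEG channel (eeg: samples x channels, eog: samples)."""
    b = eeg.T @ eog / (eog @ eog)        # per-channel regression coefficients
    return eeg - np.outer(eog, b)

# Toy data: blinks leak into both EEG channels with different gains
rng = np.random.default_rng(1)
eog = rng.standard_normal(1000)
eeg = rng.standard_normal((1000, 2)) + np.outer(eog, [0.8, 0.3])
print(np.corrcoef(remove_eog(eeg, eog)[:, 0], eog)[0, 1])  # ~0 after cleaning
```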

Using Invasive Data Collection Methods

In the field of invasive signal acquisition, the central term is Intracranial Electroencephalography (iEEG), which uses Microelectrode Arrays (MEAs) installed in targeted areas inside the subject’s brain to obtain electrical signals representative of the desired tasks or behaviors. iEEG is referred to as Electrocorticography (ECoG) when subdural electrode grids or strips are used, and as Stereotactic Electroencephalography (sEEG) when depth electrode wires are installed to access deeper or subcortical regions of the brain [43, 44].

In their project, Hochberg et al. [45] demonstrated the use of a 96-channel MEA by human subjects with long-standing tetraplegia and CNS injury to perform activities like reaching for and grasping objects with a robotic hand. The electrode array had a 4 mm × 4 mm cross section, signals were collected from motor cortex neurons, and the subjects used MI to control the state and velocity of the robotic hand. The neural activities inside the subjects’ brains were recorded as they watched and imagined controlling the robotic arm running on a predefined program. A Kalman filter and a linear discriminant classifier were used to estimate the state and velocity of the robotic hand. After the initial “open-loop” training of the system (without feedback, to reduce any imposed error), it was updated with “closed-loop” calibration as the subjects controlled the robotic arm with visual feedback. The MI system successfully enabled the subjects to perform a reach-and-grasp task using the robotic arm with promising statistics.
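
A linear Kalman filter decoder of the kind mentioned above alternates a kinematic prediction with a correction from the observed firing rates. The sketch below uses the standard Kalman equations with toy dimensions and random matrices, not the calibrated models of [45].

```python
import numpy as np

def kalman_step(x, P, z, A, W, H, Q):
    """One predict/update cycle of a linear Kalman filter decoder.
    x: kinematic state (e.g., hand velocity), z: observed firing rates,
    A/W: state transition and its noise, H/Q: neural tuning model and its noise."""
    x_pred = A @ x                           # predict kinematics
    P_pred = A @ P @ A.T + W
    S = H @ P_pred @ H.T + Q                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)    # correct with neural observation
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy dimensions: 2-D velocity state, 96-channel firing-rate observation
rng = np.random.default_rng(2)
A, W = np.eye(2), 0.01 * np.eye(2)
H, Q = rng.standard_normal((96, 2)), np.eye(96)
x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, rng.standard_normal(96), A, W, H, Q)
print(x)
```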

In another work, Leuthardt et al. [46] focused on the possibility of using ECoG signals generated during MI for one-dimensional control of a cursor on a computer screen. The authors used their knowledge of SMR amplitude variation in conjunction with actual movements or MIs to select target locations and frequency bands for the ECoG device. Each of the four subjects underwent surgery to implant a 48- or 64-electrode subdural grid over the left frontal-parietal-temporal region, including parts of the sensorimotor cortex. Subjects were to execute three physical tasks and their three corresponding MIs. The increases and decreases in the amplitudes of brain waves at the desired frequencies enabled the authors to select electrodes and frequency bands for efficient training of the subjects. With a minimal training period of 3–24 min, the researchers achieved success rates of 74–100% in one-dimensional binary movement control of a cursor. With additional tests, the authors also confirmed that information about 2-dimensional joystick movement direction was encoded in ECoG signals at frequencies up to 180 Hz.

Metzger et al. [47] used signals acquired from a high-density ECoG array implanted over the sensorimotor cortex of a paralyzed patient to drive a speech neuro-prosthesis from the patient’s silent speech attempts. Deep learning and language modeling techniques were used to decode letter sequences from code words corresponding to the 26 letters of the English alphabet. The authors used signals from both the speech motor cortex and the hand cortex and were able to decode sentences at 29.4 characters per minute with a median error rate of 6.13%.

In another work, Willett et al. [48] demonstrated the first speech-to-text BCI that used intracortical microelectrode arrays to record spiking activity from an amyotrophic lateral sclerosis (ALS) patient. A trained recurrent neural network (RNN) decoder was used to interpret whole sentences in real-time. In this study, the authors described how spatially intermixed tuning and a detailed articulatory representation of phonemes were exploited to decode speech at an improved rate of 62 words per minute with a 23.8% word error rate on a large vocabulary.

Vansteensel et al. [49] utilized four subdural electrode strips to develop an autonomous communication BCI for a subject with late-stage Amyotrophic Lateral Sclerosis (ALS). With four electrodes on each strip, two strips were implanted over the hand area of the left motor cortex, and the other two were placed on the prefrontal region as backup signal acquisition systems. After ECoG was used to select the two optimal strips, three sequential training activities were carried out. As the first task, MI of the right hand was utilized to move a cursor. Subsequently, regulation of neural signal magnitude and timing was achieved with feedback in the form of a displayed moving-ball image. Finally, by selecting specific items in rows and columns through controlled brain activity referred to as “brain clicks,” the subject was trained to pick the desired letter or group of letters as they were highlighted automatically and sequentially on a display. With this, the authors succeeded in helping the subject consciously control the “brain clicks” to communicate using a letter display.

Collinger et al. [8] developed a high-performance neuroprosthetic control system using two 96-channel intracortical microelectrode arrays to control a 7-DoF robotic arm. Initially, the subject was instructed to observe the programmed movement of the robot, and the resulting neural activities were recorded. These recorded data were used to develop a mathematical model of the neural firing rate as a function of the 7-dimensional velocity vector. The coefficient matrix for the velocity components was optimized using indirect Optimal Linear Estimation (OLE) with ridge regression. With different audio–visual target specifications, such as LEDs and a computer-generated voice, the subject learned to control translation, orientation, and grasp actions of the robotic arm through several calibration processes. Though intermediate computer assistance was introduced to facilitate the training process, the final result of complete control of the 7-DoF robotic arm was achieved without any mechanical or computer assistance.
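
The ridge-regularized fit of such an encoding model, and its least-squares inversion for decoding, can be sketched as follows; the dimensions, data, and penalty value are illustrative, and the study’s actual OLE procedure involves further steps.

```python
import numpy as np

rng = np.random.default_rng(3)
n_units, n_dof, n_obs = 96, 7, 2000

# Hypothetical calibration data: firing rates F modeled as a linear
# function of the 7-D velocity vector V (observation phase).
V = rng.standard_normal((n_obs, n_dof))
B_true = rng.standard_normal((n_dof, n_units))
F = V @ B_true + 0.5 * rng.standard_normal((n_obs, n_units))

lam = 1.0                                                    # ridge penalty
B = np.linalg.solve(V.T @ V + lam * np.eye(n_dof), V.T @ F)  # encoding fit

def decode(f: np.ndarray) -> np.ndarray:
    """OLE-style decode: least-squares inversion of the encoding model."""
    return np.linalg.solve(B @ B.T + lam * np.eye(n_dof), B @ f)

print(decode(F[0]))   # approximately recovers V[0]
```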

As a continuation of their previous research, Wodlinger et al. [9] extended the study to control a robotic arm with 10-DoF. In this second study, four more DoF were introduced to replace the 1-DoF grasping system of the previous robot. The four new hand-shaping activities, to be achieved by definite arrangements of the robotic fingers, were: pinch, scoop, finger abduction, and thumb opposition or extension. Along with the previously mentioned methods (based on MI and MNS), the researchers used Virtual Reality (VR), with and without targeted virtual objects, to successfully calibrate the system. In the mathematical model, the grasping velocity component was replaced with the four newly introduced components and their corresponding coefficients. The robotic hand’s joints were mapped into a 4-dimensional (4D) space representing the hand shape, and no classification was performed on the hand-shape dimensions. As in the previous experiment, indirect OLE with ridge regression and a variance correction method was used to optimize the coefficient matrix for the ten velocity components. After training with a 10-dimensional sequence and a VR object task, the subject successfully controlled the robotic arm’s ten velocity components according to the neural firing rate.

Handelman et al. [50] implanted two 96-channel arrays and two 32-channel arrays in the somatosensory cortex to enable the subjects to complete a complex bimanual self-feeding task using neurally driven shared control. With the shared control strategy, the study mapped four-DoF control inputs onto 12 DoFs specifying the robot pose and orientation. The users were trained using two virtual Modular Prosthetic Limbs (vMPL) and audio-cue instructions for producing the corresponding gestures. The neural data were used to train a Linear Discriminant Analysis (LDA) classifier, with which the authors reported a success rate of 85% during online testing on the target-reach task. Hence, this study provided a proof of concept of how BMI signals infused with robot autonomy can maximize task performance while minimizing human workload.

To improve the current state of information transfer, Simeral et al. [51] demonstrated the first use of a wireless intracortical BCI (iBCI) to record neural activities from chronically implanted microelectrodes in humans. The external cables of a 192-electrode iBCI were replaced with wireless transmitters to achieve this task. The device employed Manchester encoding to transmit data at a digitization rate of 20 kS/s and 12 bits per sample per electrode. The fidelity of the broadband recordings was benchmarked against the standard 96-channel NeuroPort Patient Cable, and the two patients under observation were able to achieve point-and-click control as well as on-screen keyboard control of a standard tablet computer. One user achieved a typing speed of 13.4 correct characters per minute (ccpm). Furthermore, with the wireless technology, the authors demonstrated untethered continuous recording of intracortical activities from one patient over a 24-h period, a significant achievement toward in-home and on-demand iBCI usage.
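
Manchester coding itself is straightforward: each data bit is expanded into a mid-bit transition, which keeps the stream DC-balanced and self-clocking. A sketch using the IEEE 802.3 polarity convention (one of two common conventions):

```python
def manchester_encode(bits: list[int]) -> list[int]:
    """Expand each data bit into a two-chip transition (IEEE 802.3 polarity:
    1 -> low-to-high, 0 -> high-to-low), making the stream self-clocking."""
    out = []
    for b in bits:
        out += [0, 1] if b else [1, 0]
    return out

print(manchester_encode([1, 0, 1, 1]))   # [0, 1, 1, 0, 0, 1, 0, 1]
```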

With advancements in this field, the drawbacks of noninvasive processes, primarily related to low SNR, have been reduced by invasive signal acquisition modalities. In the case of external electrodes, a higher number of neurons is needed to produce a detectable signal [52]. In invasive methods, however, the data collecting devices (usually electrodes or microwires) lie in close proximity to the neurons producing the electrical field. Since the distance between the neurons and the electrode wires is relatively small, issues like the neural electrical signal being attenuated by other bodily components (brain fluid, skull, scalp tissue) are avoided, in contrast to external electrodes. Hence, invasive methods make it possible to record single-neuron activation and gain insight into activities or behaviors obtainable only in that way [53]. However, invasive methods come with inherent challenges, such as socio-ethical issues and the unavailability of a diverse set of subjects. Nevertheless, steady advancements in innovative methods and designs, like thin-film surface electrode arrays [54] and wireless inductive-link-powered ECoG data acquisition systems [55], may lead to the ubiquitous acceptance and implementation of invasive neural analysis methods.

Neurofeedback: Methods and Significance

Similar to electrical signal acquisition from the brain, electrical stimulation of targeted brain regions is also possible in BCI. Additionally, sometimes it becomes necessary to have some sensory feedback for the reliable operation of a neuro-prosthetic device [56]. This not only enables the users to control their prosthetic devices better, but also provides them with a feeling that the artificial limb is a part of their bodies, which is vital for the users’ improved quality of life [57]. The modalities followed in the field of neurofeedback can be divided into noninvasive and invasive modes of operation.

Noninvasive Neural Stimulation

With regard to the noninvasive mode of brain stimulation, Transcranial Magnetic Stimulation (TMS) and Transcranial Current Stimulation (tCS) are two widely used modalities. Based on the type of current used, tCS is further divided into Transcranial Direct Current Stimulation (tDCS) and Transcranial Alternating Current Stimulation (tACS). There are also records of using a randomized noise frequency spectrum for transcranial stimulation, known as Transcranial Random Noise Stimulation (tRNS). The account by Boes et al. [58] presents the salient features of tCS. TMS passes electric current through the conductors of an insulated coil to induce secondary neural currents in the targeted segments of the subject’s brain. Pulses from TMS can induce action potentials in the neurons connected to the region of stimulation. Movement triggers, as well as stimulated visual percepts, have been observed through TMS stimulation of the associated brain regions. In tDCS, electrodes attached to the subject’s scalp pass a low-amplitude direct current through the scalp to modulate the activity of targeted brain areas. The alternating current used by tACS is applied through similar electrodes and can modulate the frequency of neural oscillations in the targeted brain sections.

Nitsche et al. [59] demonstrated the feasibility of modulating the neuronal excitability of the motor cortex by applying tDCS through the scalp of their subjects. The electrical stimulation, with a maximum amplitude of 1 mA, was delivered using a pair of surface sponge electrodes with a surface area of 35 cm². The brain area representing the motor–cortical region of the right Abductor Digiti Minimi (ADM) muscle (a skeletal muscle in the palm area) was subjected to this stimulation, and the resulting Motor-Evoked Potential (MEP) was measured. The authors showed a significant increase in motor–cortical activity during anodal stimulation and a similar decrease during cathodal stimulation. The durations of the stimulation after-effects could be varied by altering the duration and intensity of the electric stimulation. Their study successfully demonstrated the possibility of targeted enhancement and reduction of neuronal activities using tDCS.

Herrmann et al. [60] presented a detailed record of various studies supporting the use of tACS in modulating neural activities. Stimulating subjects’ brains with varying amplitude, frequency, and phase of the input sinusoidal tACS signal elicited corresponding variations in motor excitability, voluntary movements, sensory processing, and even higher cognitive processes like memory and decision making.

In their 1986 work, Hess et al. [61] used TMS to record responses in small muscles of the human hand. A stimulating coil consisting of 26 concentric turns of copper wire was used to generate a peak magnetic field of 0.9–1.6 Tesla at the center of the device. During the experiments, the subjects were instructed to remain relaxed or make small contractions of the targeted muscles so that the respective effects of coil orientation and location could be determined. Magnetic stimuli were then delivered to obtain the desired Compound Muscle Action Potentials (CMAPs). Several experiments were carried out: comparison between magnetic and electrical neuro-stimulation, collision experiments, effects of stimulus intensity and voluntary force, effects of contraction of other muscles, single motor unit recordings, etc. The study demonstrated that motor excitation of small hand muscles was feasible through external magnetic stimulation of the brain. The response to the stimuli could be modified by small contractions of the target muscle or of the ipsilateral or homologous contralateral small hand muscles. The same single motor units were made to discharge by applying threshold magnetic stimuli (the lowest stimulus magnitude required to obtain a reproducible response) over different parts of the scalp. These motor units were also recognized to be the same ones activated first during voluntary contraction.

In another work, Penton et al. [62] utilized tRNS to assess the possibility of improving emotion perception. In this work, thirty-four female subjects were divided into two groups of seventeen for active stimulation and sham stimulation. Two electrodes (5 cm × 5 cm) embedded in saline-soaked sponges were used to administer stimulation (1 mA current) to the subjects’ Inferior Frontal Cortex (IFC). The stimulation duration was 20 min for the active group, compared to 15 s for the sham group. The subjects then underwent facial emotion and identity perception tasks using the Cambridge Face Perception Test (CFPT) and the CFPT-Identity test in the first experiment. In the second experiment, baseline performance effects were judged by stimulating either the IFC or another similarly active brain sector before and after the test. The results favoring the active-group subjects in the CFPT revealed the significance of the IFC in emotion processing and indicated that tRNS stimulation can successfully modulate emotion perception.

In order to assess the possibility of differential modulatory effects of high-frequency band tRNS (hf-tRNS) and their relation to the width of the selected frequency band, Moret et al. [63] conducted experiments with split frequency bands. The findings suggested that reducing the frequency range, and thus the noise in the stimulating tRNS signals, impaired the modulatory efficacy of hf-tRNS. The authors speculated that the reason for this phenomenon was the low-noise signal’s inability to produce equivalent opening and closing modulation of Na+ channels. Measuring the change in excitability due to tRNS against a common MEP baseline, the study concluded that an intermediate intensity of tRNS increases the excitability and duration of cortical activities. Additionally, only the full-band condition, with a 100–700 Hz frequency range, was observed to modulate cortical excitability for a significant duration after stimulation.

Apart from TMS, tDCS, and tACS, EEG neurofeedback is another method recognized in clinical environments to influence neural activities. As outlined by Corydon Hammond [64], there are numerous instances of EEG biofeedback being used to modulate the ranges of frequencies associated with different neural activities and bring them to the desired level. This is achieved through training with auditory or visual feedback, whereby subjects learn voluntary control of their neural activities. The training of a patient might also be designed to increase or decrease multiple targeted EEG activities [65].

Invasive Neural Stimulation

Pertaining to invasive neural feedback, Deep Brain Stimulation (DBS) provides a way to perform neural intervention using implanted hardware to send electrical pulses to targeted areas of the brain. DBS primarily focuses on resolving abnormal electrical bursts resulting in epileptic seizures. On that matter, several brain areas have been targeted as sources of seizures by DBS [4]. Exploring the effects of such stimulations revealed that electrical impulses of a specific frequency range affect neural activities in a distinct manner [66, 67].

Under the DBS category, the Responsive Neuro-stimulator (RNS) system plays a significant role. This system detects the onset of seizure-specific electrophysiological signatures in real-time and electrically stimulates the targeted brain region accordingly. Martha J. Morrell, in her work on responsive cortical stimulation, focused on resolving medically refractory epilepsy [68]. A total of 191 subjects were implanted with the neuro-stimulators, which recorded the electrocorticogram. The experiment saw a significant reduction in the frequency of disabling seizures in adults.

In their work, O’Doherty et al. [69] demonstrated a Brain–Machine–Brain Interface (BMBI) enabled by Intracortical Microstimulation (ICMS) that provided neural feedback to the subjects regarding their motor decisions. Two monkeys were implanted with four 96-microwire arrays in the primary motor cortex and primary sensory cortex of their brains. ICMS was delivered to the primary sensory cortex region corresponding to the hand area in one subject and the leg area in the other. It was found that both subjects could interpret the ICMS within a time duration comparable to that needed to respond to peripheral tactile stimuli. The experiment had two sections. In hand control (HC), the actuator position was decided by a joystick controlled by the subject’s left hand. In brain control (BC), the actuation was driven by neural activity in the right hemisphere of the primary motor cortex, decoded by an Unscented Kalman Filter (UKF). ICMS was produced upon the actuator’s contact with virtual objects. Further, the presence or absence of pulse trains of varying frequency and temporal pattern was used to give artificial tactile feedback, or somatic sensations, to the subjects. On that basis, the experiment demonstrated the successful selection of 1 out of 3 visually indistinguishable virtual objects associated with different Artificial Textures (AT).

In the context of ICMS, Klaes et al. [70] developed a closed-loop BMI system to stimulate the somatosensory cortex of a nonhuman primate (NHP). For sensory feedback, MEA was implanted on the primary somatosensory cortex of the subject’s brain; and for recording the neural activity from cortical neurons, four micro-wire-based electrode arrays were inserted in the posterior parietal cortex (PPC) responsible for planned movements in the subject. The authors probed the subject’s hand to determine the respective sensory receptive fields by observing the corresponding multi-unit activity captured by the electrodes. By this, specific electrodes were chosen for stimulation to obtain improved multi-unit activity during the experiment. The stimulator was a 96-channel programmable current generator with a maximum current and frequency rating of 100 µA and 300 Hz, respectively. A standard discrete linear Kalman filter was used for online decoding of the neural signals to control the virtual Modular Prosthetic Limb (vMPL). Three different types of tasks were carried out with feedback. In the ‘handbag task,’ the subject had to determine the position of an invisible target (target inside a virtual handbag) by recognizing the triggered stimulations that continued for the duration of contact between the virtual hand and the target. In the ‘match-to-sample task,’ two hidden virtual objects, placed in the virtual bag, were associated with ICMS frequency of 150 Hz and 300 Hz, respectively. The subject had to choose the object with a stimulation matching with the initial object that appeared on the screen before starting the task. In the ‘brain control task,’ the subject had to move a virtual arm, controlled by neural signals, and touch one of the two visible virtual objects that elicited an ICMS of 300 Hz. The researchers, through this experimentation, demonstrated the feasibility of a real-time simultaneous stimulating and decoding closed-loop cognitive neuroprosthetic system.

Audiovisual Feedback

Just as there has been promising success in stimulating different parts of the human brain, it has been possible to control such stimulation in the parts of the brain responsible for processing auditory and visual information. Several advancements in auditory and visual neuroprostheses have been achieved in this regard.

In auditory neuroprosthesis, Cochlear Implants (CI) and Auditory Brainstem Implants (ABI) have made noteworthy contributions. Though seemingly similar in design, CI and ABI differ inherently in their target location and method of operation. Heiduschka et al. [71] included studies regarding auditory implants in their work on implantable bioelectronic interfaces. CIs use an electrode array implanted in the cochlea to send a series of short electrical pulses to the subsequent auditory structures and stimulate a sense of hearing [72]. ABIs utilize multichannel arrays to bypass the cochlear nerve and electrically stimulate second-order neurons at the Cochlear Nucleus (CN) [73]. In auditory neuroprosthesis, the recording devices capture audio signals and send them to processing units. The processors produce appropriate electrical signals, transmitted wirelessly or through percutaneous devices to the receiver implanted inside the skull. According to this electrical signal, the electrodes in the implanted array stimulate their correlated regions on the auditory pathway or their associated neuron bundles, depending on the type of neuro-prosthetic used. The neurons responsible for the brain’s auditory perception, in turn, interpret the audio information represented by the stimulation received. Both CI and ABI have numerous records of restoring subjects’ hearing to some, if not full, extent. As described in the literature by Boulet et al. [72] and Wong et al. [73], the implementation of CI and ABI is associated with a variety of challenges. Nevertheless, developments in technology and methodology continue to offer better solutions to make auditory implants more efficient and successful.

Over years of study, it has been well established that electrical stimulation of the intracortical regions of the brain responsible for visual interpretation generates a sensation of light in subjects. These visual percepts, often described as snowflakes or clouds, are called phosphenes [74]. Passing current through multiple electrodes was found to generate multiple phosphene points. Hence, scientists began to explore the creation of controlled, patterned phosphenes by electrically stimulating the visual cortex [75]. On that basis, studies were conducted on the efficient implementation of neural visual prosthetics by stimulating the neurons located in areas downstream of the damaged part of the visual pathway (the retina–brain communication link). Thus, stimulation using MEAs [76], light-sensitive micro-photodiodes [77], etc., was examined in different areas of the visual system, such as the retinal segment, the optic nerve area, and the cortical region [78]. However, the preliminary successes achieved were far from replicating the output of natural vision. Further, the current techno-ethical challenges faced in testing and implementing visual neuro-prostheses necessitate finding more efficient methods to obtain better results.

Keeping in mind the future of visual prostheses, Vurro et al. [79] designed a simulation for analyzing reading with artificial sight. The study used simulations of three thalamic visual prostheses and their associated phosphene pattern densities to analyze the reading performance of twenty normally sighted subjects. Their study shows how phosphene counts and different letter properties might affect the performance of BCI-based neuro-visual prosthetic devices. The authors, however, recognized the lack of exact data on the electrophysiological responses of different parts of the brain to electrical stimuli of specific patterns, which they hoped would be resolved by future progress.

Paving the path for the future, Normann et al. [80] addressed the complexity of designing and implementing a cortically based visual neuroprosthesis. Focusing on the work of Cha et al. [81] and Dagnelie et al. [82], the authors stressed the necessity of developing advanced electrode arrays capable of evoking independent phosphenes controllable by each of the electrodes. The authors also introduced the idea of a neuroprosthesis consisting of a miniaturized Charge-Coupled Device (CCD) video camera and a video signal processing encoder. The processed input could be sent to an MEA implanted in the visual cortex, where the generation of phosphenes can be controlled and vision restored to some level.

In another study, Piedade et al. [83] described the architecture of a wireless visual neuroprosthesis system. The system used intracortical microstimulation via an MEA implanted in the primary visual cortex. A miniaturized video camera collected the visual input and sent it to a video processor and neuromorphic encoder, where appropriate electrical pulses were generated and transferred over a Radio-Frequency (RF) link to the stimulating MEA located inside the skull. The authors expected the system to generate a limited but useful sense of vision in the profoundly blind.

Originating from similar ideas, Fernandez et al. have been successful at helping their subject ‘see’ a white line with the help of an MEA-based visual neuroprosthesis [84]. Similar advancements and innovations, like the patented retinal prosthesis by Suaning et al. [85], represent a promising future for visual prostheses controllable by electrical stimulation.

Further advancing the concept of prosthetic vision, Steveninck et al. [86] explored the efficacy of strict scene simplification combined with deep learning-based surface boundary detection for facilitating mobility under simulated prosthetic vision. While the authors found a simulated electrode resolution of 26 × 26 to provide sufficient information for mobility, the amount of scene simplification had to be optimized for a trade-off between informativeness and interpretability depending on the number of implanted electrodes. This study employed the frontal camera of a virtual reality (VR) device to capture image data and processed it using Python and the OpenCV image pre-processing library. Canny Edge Detection (CED) or SharpNet surface boundary detection was then used to drive simulated phosphene generation, rendered as equally sized Gaussian blobs on a rectangular grid at the center of the VR display. To account for biological irregularities in phosphene mapping, distortion and a minor temporally constant variation were applied to the grid locations and the brightness of individual phosphenes, respectively. With the experimental setup mentioned above, the authors found scene-complexity-dependent performance degradation in all lower phosphene resolution experiments, whereas this effect was negligible at higher phosphene resolutions. Furthermore, performance using SharpNet was found to be worse than with CED in all simple environment settings, independent of the phosphene resolution used.
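
A toy version of this rendering pipeline, assuming the CED variant and omitting the distortions and brightness variation the authors applied, could look like the following; the activation threshold, blob size, and input frame are arbitrary choices.

```python
import numpy as np
import cv2

def simulate_phosphenes(img_gray: np.ndarray, grid: int = 26,
                        out_size: int = 520, sigma: float = 4.0) -> np.ndarray:
    """Render a grid x grid phosphene image: Canny edges are pooled into grid
    cells, and each active cell lights an equally sized Gaussian blob."""
    edges = cv2.Canny(img_gray, 100, 200)
    cell = cv2.resize(edges, (grid, grid), interpolation=cv2.INTER_AREA) / 255.0
    out = np.zeros((out_size, out_size), np.float32)
    step = out_size // grid
    yy, xx = np.mgrid[0:out_size, 0:out_size]
    for i in range(grid):
        for j in range(grid):
            if cell[i, j] > 0.1:               # arbitrary activation threshold
                cy, cx = i * step + step // 2, j * step + step // 2
                out += cell[i, j] * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2)
                                           / (2 * sigma ** 2))
    return np.clip(out, 0, 1)

img = np.zeros((480, 640), np.uint8)           # stand-in for a camera frame
cv2.rectangle(img, (200, 150), (440, 330), 255, -1)
phos = simulate_phosphenes(img)
print(phos.shape, float(phos.max()))
```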

Skin Input and Feedback

In the development of real-time human-in-the-loop BCIs, tactile feedback plays a valuable role. Receiving percepts of stretched skin, pressure, and temperature change corresponding to specific actions allows subjects to confirm or modify the neural activities associated with the performed operation. Studies to develop artificial skin capable of static and dynamic feedback have been carried out in this regard.

In their work on tactile sensing, Dahiya et al. [87] described the advantages and drawbacks of various tactile sensors, viz. resistive, piezoresistive, capacitive, ultrasonic, and photosensitive sensors. The authors depicted the application of MEA-based piezoelectric tactile sensing arrays that can be attached to targeted locations on the body of a robotic device and significantly reduce the accuracy-related errors of currently used force feedback sensors.

Ying et al. [88] discussed the development of a Silicon Nanomembrane (Si NM) diode-based fingertip tactile feedback sensor. The study proposed using Si NM gages for high-sensitivity strain monitoring and elastomeric capacitors for tactile sensing. The design demonstrated robust performance over varying input parameters. The study used square wave pulses to derive the inverse relationship between the voltage and frequency of stimulating signals for a detectable percept. The authors also discussed multiplexed addressing of the diodes, which enables stimulation patterns depending on the biasing of the Si NM diodes. The research further explained how the strong piezoresistive properties of silicon and an interconnected serpentine configuration help the Si NM diodes function as high-performance stretchable strain gages. The close agreement between the practical and theoretical relative resistance changes was ascertained through experimentation. Likewise, the relative capacitance change of electrode pairs on opposing sides of the thin sheet in response to pressure changes was explicated in the literature. Hence, the design could detect pressure through the associated thickness change of the thin sheet used to install the electronics.

In another study, Jung et al. [89] reported using Porous Pressure Sensitive Rubber (PPSR) capable of conformal integration with the skin. The piezoresistivity and stress–strain characteristics of the PPSR material were verified using Finite Element Analysis (FEA) and experimental data. The analysis of relative resistance changes corresponding to strain and pressure gradients indicated the feasibility of developing pressure and strain gages using PPSR. The authors demonstrated successful control of a robot using a BCI developed with skin-attached pressure and strain gages.

In their 2014 study, Kim et al. [90] described the use of stretchable silicon nanoribbon electronics for developing a skin neuroprosthesis. The smart prosthetic skin demonstrated in the study was equipped with strain, pressure, temperature, and humidity sensors, electroresistive heaters, and stretchable MEAs for nerve stimulation. The study utilized ultrathin single-crystalline silicon nanoribbon (SiNR) arrays as strain, temperature, and pressure sensors. Stacked structures with staggered arrangements were applied to avoid interference between sensor arrays. A motion capture camera was adopted to analyze the strain profiles generated during various movements of the targeted areas, and on that basis, a linear or serpentine SiNR strain gage array was designed. The idea of introducing cavities in SiNR layers as pressure detectors was confirmed using Finite Element Analysis (FEA). The inverse relation between the curvature of the sensors and the recorded temperature allowed stable temperature sensors to be developed under varying strain. Stretchable thermal actuator arrays were introduced to control the temperature profile of the skin prosthesis. A low-impedance MEA decorated with Platinum Nanowires (PtNWs) was used to effectively stimulate peripheral nerves according to the input received by the sensory units. Synchronized sharp spikes, in response to inputs from the pressure sensors, were observed with recording electrodes attached to the Ventral Posterolateral Nucleus (VPL) of the rat subjected to the experiment. The results ascertained successful communication from the pressure sensor, through the peripheral nerves, to the central nervous system. The authors recognized the potential of these findings for future applications in smart prosthetics.

In a recent investigation of the effect of various neuro-stimulation waveforms on sensation quality and perceptive fields on a user’s hand, Collu et al. [91] studied the stimulation generated using four non-rectangular waveforms. The study implemented an isolated bipolar constant-current stimulator connected to the output of a waveform generator to deliver Transcutaneous Electrical Nerve Stimulation (TENS). Five wave shapes, namely rectangular, sinusoidal, centered triangular, linearly increasing ramp, and linearly decreasing ramp, were selected for the experiment. The localization and the stimulation time required for sensation were observed to vary among waveforms. With the above setup, the experimental subjects could successfully distinguish between differently shaped waveforms with similar electrical characteristics through a two-alternative forced-choice (2AFC) match-to-sample task. Hence, the authors suggest that signal information meant for the user can be encoded within stimulation waveform variation, which can open doors to potential developments in neuroprosthetics and biofeedback mechanisms.
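
The five wave shapes themselves are easy to reproduce. The sketch below generates one normalized period of each; the frequency and sampling rate are chosen arbitrarily rather than taken from the study.

```python
import numpy as np

def waveform(shape: str, f: float = 50.0, fs: float = 10000.0) -> np.ndarray:
    """One normalized period of each stimulation wave shape compared above."""
    t = np.arange(0, 1 / f, 1 / fs)
    phase = t * f                               # runs 0..1 over the period
    if shape == "rectangular":
        return np.where(phase < 0.5, 1.0, -1.0)
    if shape == "sinusoidal":
        return np.sin(2 * np.pi * phase)
    if shape == "triangular":
        return 1.0 - 4.0 * np.abs(phase - 0.5)  # peak at the period center
    if shape == "ramp_up":
        return 2.0 * phase - 1.0                # linearly increasing
    if shape == "ramp_down":
        return 1.0 - 2.0 * phase                # linearly decreasing
    raise ValueError(shape)

for s in ("rectangular", "sinusoidal", "triangular", "ramp_up", "ramp_down"):
    print(s, waveform(s)[:3])
```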

Actuators and Materials in Neural Interfaces

The previous sections portray the feasibility of acquiring neural signals as well as stimulating the human brain using various methods. To realize the effects of such neural analyses in real-world scenarios, it is necessary to design prosthetic or exoskeletal actuation units capable of responding to neural command signals. Likewise, the material properties of the components used in the actuators and signal collectors dictate the final output quality of a BMI system. Accordingly, numerous types of materials and actuators have been explored for developing prosthetics, exoskeletons, data-collecting electrodes, and neural implants.

For Actuation of Neuroprosthetics

In the context of developing an effective actuation unit, it is necessary to consider the specific demands of the task at hand as well as the needs of the user performing the task [92]. The requirement for different electromechanical specifications in different activities led to the development of BMI actuators using mechanical, electrical, hydraulic, pneumatic, or combined concepts.

The study by Veneman et al. [92] described an actuation system consisting of a servo motor, a flexible Bowden cable transmission, and a series-elastic-element-based force feedback loop. The use of Bowden cables in the transmission allowed the electric motor to be detached from the robot frame. The acting source of torque was designed as a rotating joint, with friction-based force transfer from the cables to the disk; a similar high-friction concept and material were used for the connection between the cable and the springs. The cables transmitted power through the motion of the inner cable with respect to the hollow outer segment. A force feedback control loop was introduced to provide friction compensation, with the necessary force measurement performed by compression springs coupled with Linear Variable Differential Transformer (LVDT) sensors. Three different sets of springs were used as series elastic elements. Appropriate actuation with the desired force bandwidth was achieved following these design concepts.
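The core of such a series-elastic force loop can be summarized in a few lines. The sketch below is an assumption-laden illustration, not the published controller: it infers the interaction force from the LVDT-measured spring deflection via Hooke's law and applies a PI correction to the motor velocity command so that transmission friction is compensated rather than felt by the user. All gains and values are hypothetical.

```python
def sea_force_control_step(f_desired, x_spring, k_spring, kp, ki, state, dt):
    """One cycle of a series-elastic-actuator force loop (illustrative).
    The force on the elastic element is k * x, with x measured by the
    LVDT; a PI law on the force error drives the motor velocity."""
    f_measured = k_spring * x_spring          # Hooke's law on the spring
    error = f_desired - f_measured
    state["integral"] += error * dt
    motor_velocity_cmd = kp * error + ki * state["integral"]
    return f_measured, motor_velocity_cmd

state = {"integral": 0.0}
f, v_cmd = sea_force_control_step(
    f_desired=10.0, x_spring=0.002, k_spring=4000.0,
    kp=0.05, ki=0.5, state=state, dt=0.001)
```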

In another work, Rosen et al. [93] explicated the design of a 7-DoF arm exoskeleton unit. Developed with the goal of assisting human performance, the design used surface electromyographic (sEMG) signals as input commands so that the exoskeleton would share the majority of the physical load involved in an activity. Seven brushless DC motors transmitted torque to their respective joints through a cable-based transmission system, and four force/torque sensors were employed at the interface elements between the human arm, the exoskeleton, and the external load. The different segments of the 7-DoF model were connected using a frictionless ball-and-socket shoulder joint, a two-axis elbow joint, and a two-axis wrist joint. The mechanism was attached to a wall-mounted frame allowing adjustment of both the height and the distance between the arms. Considering the power-to-weight ratio of the system, the researchers mounted four motors on the stationary base; the other three motors, with lower torque requirements, were stationed on the forearm. The cable transmission reportedly provided an advantage in terms of friction and backlash in the exoskeleton unit. The authors implemented orthogonal axes and joint stops to achieve the necessary range of motion and successfully completed the targeted Activities of Daily Living (ADL) in the experiment.

Kim et al. [94] introduced a novel master arm designed using serial links. The final design had three controllable joints and three redundant joints each for the shoulder and the wrist, with one controllable joint for the elbow of the exoskeleton. The researchers included the redundant free joints to cover the maximum range of human movement. An electric brake and a strain-gage-based torque sensor beam were the prime components of the force reflection system fitted to the design: the torque sensor beam detected the amplitude and direction of the torque applied by the human operator, and the electric brake constrained joint movements accordingly so that the operator could feel the force. If the operator intended to move the arm opposite to the applied torque direction, the electric brake was released for free movement of the arm. Each joint was equipped with an actuator for force feedback, and each controllable joint carried an angle-measuring encoder, an electric brake, a gear head, a torque sensor beam, and a cover with a link. After calibration and optimization, the exoskeleton-based master arm was proposed for teleoperation and motion planning.

In their work on exoskeletons as man–machine interfaces, Bergamasco et al. [95] delineated the design concepts of the Light Exoskeleton (L-EXOS) [96] and the hand force feedback [97] haptic devices developed in the PERCRO laboratory. L-EXOS was designed as a wearable 5-DoF robotic device with serial kinematics, capable of providing a controllable force at the center of the subject's palm. To decrease the power required to move the weight of the exoskeletal parts, the motors were placed on the fixed link of the exoskeleton, and torque was delivered to the respective joints through steel tendons and reduction gear and pulley mechanisms. The structural components incorporated thin-walled, hollow carbon fiber sections bonded to aluminum parts wherever required, to enhance stiffness, reduce weight, and facilitate the required mechanical assemblies. The hand exoskeleton device was designed to be fixed on the dorsal side of the operator's fingers. It consisted of four parallel finger exoskeleton structures connected to four fingers, excluding the little finger, each linked by revolute joints representing the respective finger joints. Three DC servo motors, positioned on a cantilever structure fixed on the base frame of the finger exoskeleton, used tendon transmission to pull the middle point of each phalanx and thereby extend the finger, while a passive torsion spring integrated into the joint axis performed flexion. Rotation sensors based on conductive plastics technology were integrated at the joints, and force sensors were installed on the dorsal surface of each phalanx link. The cantilever design for the thumb exoskeleton followed a kinematic structure different from that of the other fingers. Later on, these developments and other contemporary studies led the PERCRO laboratory to the concept of a full-body exoskeleton system.

Grimm et al. [98], in their feasibility study on a hybrid upper-limb assistive device, presented the performance results of a neuroprosthesis that combined the concepts of Neuromuscular Electrical Stimulation (NMES), or Functional Electrical Stimulation (FES), with a multi-joint arm exoskeleton. A commercially available Armeo Spring device served as the exoskeleton; it comprised shoulder, arm, and wrist joints with a total of 7-DoF and two adjustable spring mechanisms providing gravity compensation to the user. Using joint angle measurement systems and grip force sensors, the 'bone vectors' and limb functions of a virtual hand were controlled according to the real-time state and position of the user's arm. Controlled NMES stimulated seven different muscle groups for reaching and grasping, according to the relative alignment between the target vector and the estimated movement vector. The hybrid stimulation and gravity-assistance rehabilitation method reportedly provided the desired results in the experiment.

In other research on a hybrid walking neuroprosthesis, Alibeji et al. [99] showcased a system combining FES with a powered lower-limb exoskeleton. The neuroprosthesis and the walker were designed as 4-link mechanisms, with the locking and rigidity of different links chosen to produce the desired walking postures. Three motors for the knee joint and one for the hip joint were chosen based on the complexity of stimulation. The coordination between the FES of different muscles and the electric drives was optimized to generate the dynamic postures required for walking; activated in a particular order, these postures could successfully generate half-step and full-step walking sequences.

In further studies on prosthesis design, researchers have been developing smart prosthetics: conventional or hybrid actuation systems equipped with electrical sensor technologies. Weiner et al. [100], in their 2022 work, discussed the development of a hand prosthesis with semiautonomous grasping capabilities. To facilitate this feature, the 10-DoF hand was equipped with a camera, a display, and an on-board embedded system. Two DC motors actuated the hand via an under-actuated force-distributing mechanism that implemented levers and free-floating sliders connected by motor tendons. Using this multi-modal sensor system, the smart prosthesis could perceive the environment to extract required target features while also capturing the user's state and intention.

Electrical actuators and hybrid technologies based on electrical components are arguably better in terms of linear control and simple integration [101]. A high power-to-weight ratio, however, is critical in designing portable wearable neuroprostheses. Hydraulic and pneumatic actuators have therefore been explored in various investigations of exoskeleton systems.

The 1970s research and development effort by the General Electric Company on a prototype for machine augmentation of human strength and endurance gave birth to the Hardiman-I project [102]. The Hardiman consisted of two overlapping wearable exoskeletons actuated by hydraulic devices and had a total of 30-DoF. The human operator dictated the movement of the inner master exoskeleton, and these movements were followed by the outer slave layer [103, 104].

Among other exoskeletons equipped with hydraulic actuators, the Berkeley Lower Extremity Exoskeleton (BLEEX) was exemplary. Zoss et al. [105] described BLEEX as having 7-DoF per leg, four of which were powered by linear hydraulic actuators. The design provided 3-DoF at the hip, 1-DoF at the knee, and 3-DoF at the ankle. The foot was equipped with pressure sensors and accelerometers to determine the center of pressure and load distribution, which were in turn used for controlling the exoskeleton; the sensors were not in direct contact with the operator [103]. The actuator sizes, the fixing points, and the servo valves were designed based on Clinical Gait Analysis (CGA). BLEEX was demonstrated to move successfully, carrying its own weight under its own power.
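The center-of-pressure computation that such an instrumented foot enables is straightforward: it is the pressure-weighted average of the sensor positions. A minimal sketch follows, with a hypothetical four-sensor sole layout.

```python
import numpy as np

def center_of_pressure(positions, pressures):
    """CoP = sum(p_i * r_i) / sum(p_i) over the sole's pressure
    sensors; the sensor layout here is hypothetical."""
    p = np.asarray(pressures, dtype=float)
    r = np.asarray(positions, dtype=float)    # (n, 2) x-y coordinates, m
    return (p[:, None] * r).sum(axis=0) / p.sum()

# Illustrative sensors: heel, mid-lateral, mid-medial, toe
positions = [(0.00, 0.0), (0.10, -0.03), (0.10, 0.03), (0.22, 0.0)]
pressures = [300.0, 120.0, 150.0, 80.0]       # arbitrary units
print(center_of_pressure(positions, pressures))
```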

The Defense Advanced Research Projects Agency (DARPA) funded the fabrication of another hydraulically powered exoskeleton system called “Sarcos”. As reported by Dollar et al. [106], it utilized rotary hydraulic actuators fitted directly in the joints to power the exoskeleton. Among the force-sensing elements installed between the exoskeleton and the wearer, the Sarcos design included a foot interface with a stiff metal plate containing force sensors; at some segments, the sensors were in physical contact with the user. With this design, the Sarcos exoskeleton reportedly demonstrated successful execution of numerous activities, even with the inherent drawbacks of rotary actuators, viz. friction and internal fluid leakage [103].

In the segment of pneumatically actuated exoskeletons, the Japanese company Cyberdyne's Hybrid Assistive Limb (HAL) demonstrated a sophisticated design [107]. HAL utilized sEMG sensing and pneumatic technology, along with stored gait pattern data, to predict and carry out intended movements [103]. Aimed at rehabilitation, HAL used electrical battery packs to provide force augmentation to the wearer.

Lee and colleagues’ work on a force-reflecting master hand and arm system [108] described a pneumatically actuated exoskeleton unit. The authors exploited the fact that the master and slave arms could differ in design to reduce the number of actuated joints and the degrees of freedom of the master arm. The final design had seven rotational and two prismatic joints, excluding the wrist, designed in consideration of the human range of motion and the need to constrain movements where necessary. Linear pneumatic actuators were fixed directly in parallel with the prismatic joints and through force-to-torque converter link mechanisms at the rotational joints. Two position sensors in the elbow and wrist area provided configurational data of the master arm, and the master hand had three fingers with 2-DoF each. An analytic model of the pressure control valve and cylinder was derived and linearized for simplification; the pressure was regulated by changing the valve's displacement using the position controller and a Proportional Derivative (PD) pressure controller. The researchers integrated the system with a KIST humanoid robot having two 9-DoF arms, two 10-DoF hands (referring to palm and fingers), a 3-DoF waist, and a 2-DoF neck, with joints fitted with linear and rotary actuators and force and torque sensors. Inverse kinematics was used to send joint-space commands to the robot position controller according to the device data corresponding to the operator's arm movements, and the force imposed on the robot was measured by the installed sensors to provide force feedback to the master arm and hand. The system was also integrated with a virtual environment using graphics simulators: the interaction of the virtual robot with any surface was translated into a force calculated from the surface's stiffness, which the human operator could feel. The authors affirmed the successful utilization of the master arm and hand system in teleoperation and virtual environment integration.
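The PD pressure regulation described above can be sketched compactly. The snippet below is an illustrative rendering, not the published controller: it applies a PD law to the pressure error, takes the derivative on the measurement, and outputs a normalized spool displacement command for the linearized valve model.

```python
def pd_pressure_controller(p_ref, p_meas, p_meas_prev, kp, kd, dt):
    """PD law on pressure error producing a spool-valve displacement
    command; gains and limits are illustrative assumptions."""
    error = p_ref - p_meas
    d_error = -(p_meas - p_meas_prev) / dt   # derivative on measurement
    x_valve = kp * error + kd * d_error
    return max(-1.0, min(1.0, x_valve))      # normalized spool travel
```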

In another work, Tsagarakis and Caldwell [109] presented the development of a soft-actuated exoskeleton unit. The design applied braided pneumatic Muscle Actuators (pMA) as the power source and had 3-DoF at the shoulder, 2-DoF at the elbow, and 2-DoF at the wrist. Aluminum and composite materials were utilized for the arm structure, high-stress joint sections were manufactured from steel, and the arm link lengths were designed to be easily adjustable, with Velcro attachments providing ease of wearing and removal. High-linearity position sensors and two strain-gage-based torque sensors were fixed to the joints. The pMAs were constructed as two-layered cylinders with an inner rubber liner, an outer nylon containment layer, and two end caps, while pressure sensors and miniature strain-gage-based load cells measured force. The authors used a 100 Hz Pulse Width Modulation (PWM) signal to control four 3/3-port valves regulating airflow. Joint torque was produced bidirectionally in an antagonistic scheme using flexible steel cables and double-groove pulleys; the pulleys were built from solid aluminum and equipped with strain gages for joint torque sensing and bearings to minimize friction, with cables and idler pulleys installed for torque transmission. Torque and pressure feedback acting on the PWM control signal improved the torque response of the system. Based on the test results, the authors recognized the design's potential as an exercise facility, an upper-limb power-assist orthosis, and a motion analysis system.
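The antagonistic torque scheme lends itself to a short sketch: two pMAs pull against each other over a pulley, so the net joint torque is the pulley radius times the force difference, and the PWM duty cycle is adjusted to steer that difference. The mapping below is an illustrative proportional version; the study's actual loop also acted on pressure feedback.

```python
def antagonistic_joint_torque(f_flexor, f_extensor, pulley_radius):
    """Net joint torque from two pneumatic muscles pulling against
    each other through a double-groove pulley: tau = r * (F1 - F2)."""
    return pulley_radius * (f_flexor - f_extensor)

def pwm_duty_from_torque_error(tau_ref, tau_meas, gain):
    """Map torque error to a duty cycle for the 100 Hz PWM valve
    drive (illustrative proportional mapping; 0.5 = balanced airflow)."""
    duty = 0.5 + gain * (tau_ref - tau_meas)
    return min(1.0, max(0.0, duty))

tau = antagonistic_joint_torque(f_flexor=120.0, f_extensor=80.0,
                                pulley_radius=0.03)   # = 1.2 N*m
duty = pwm_duty_from_torque_error(tau_ref=1.5, tau_meas=tau, gain=0.4)
```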

Gu et al. [110] illustrated the design of a pneumatically actuated prosthetic hand equipped with a myoelectric control system and tactile feedback mechanisms. The fiber-reinforced elastomeric structure with embedded rigid segments was conceived to make the prosthetic hand function closer to a natural one. Five fingers along with a 3D-printed palm resulted in a 6-DoF system; the fingers and the palm were covered with a soft elastomeric layer to emulate human skin, and the palm connected to a plastic socket fitted to the user's residual limb. Four EMG sensors mounted on the skin of the residual limb collected actuation trigger signals, and one pump and twelve valves independently controlled the 6-DoF of the hand according to the received command signals. For touch feedback, five hydrogel-elastomer capacitive pressure sensors were placed on the prosthetic fingertips; the relative capacitance change on contact with a surface was converted into electrical stimulation of targeted regions on the residual limb. Coupled with a pattern recognition algorithm classifying the EMG signals into four grasp configurations and one rest configuration, the design demonstrated the potential to provide soft, lightweight, and customized upper-limb neuroprosthetics.

In another work on active-compliant actuation systems, Folgheraiter et al. [101] described the design of a hybrid hydraulic–pneumatic actuation system. The bio-inspired design, coupled with the kinematic structure of the system, allowed for a wearable device able to change joint stiffness as required. To develop the targeted multi-contact haptic upper-limb interface, three force-feedback points were selected: between the exoskeleton shoulder and the user's shoulder, at the middle of the user's upper arm, and at the middle of the user's forearm. Owing to the simplified configuration, the 6-DoF exoskeleton system needed only a 3-DoF kinematic chain for its actuation, and the researchers reduced the load on the actuation system by actuating joints closer to the body's barycenter. In the actuation system itself, a pneumatic spring was connected in series with the hydraulic actuator: the elastic element varied the stiffness of the whole joint, while the hydraulic element maintained the desired position. The spring characteristics could be changed by varying the pressure of the pneumatic system, making the design capable of movements closer to natural joint motion; the spring could also act as a passive component where safety assistance was necessary. Position sensors determined the actuator's angular position, and a Proportional Integral Derivative (PID) controller drove two solenoid valves according to the sign of the position error. The authors also explored other control paradigms to compensate for the complexity and nonlinearity of the system, and the final optimization was experimentally shown to mimic joint actuation with the preferred safety and stiffness parameters.
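The variable-stiffness principle of the series pneumatic spring can be made concrete with the standard gas-spring relation: for a sealed gas volume obeying P·Vⁿ = const, the stiffness is k = n·P·A²/V, so raising the charge pressure stiffens the joint. The sketch below pairs this with the described sign-of-error valve switching; the parameters and the deadband are illustrative assumptions, not values from the study.

```python
def gas_spring_stiffness(pressure, piston_area, gas_volume, n=1.4):
    """Stiffness of a sealed pneumatic spring: with P*V^n constant,
    k = n * P * A^2 / V, so stiffness scales with charge pressure."""
    return n * pressure * piston_area**2 / gas_volume

def valve_command(pid_output, deadband=0.01):
    """Two solenoid valves switched by the sign of the PID position
    error, as in the described controller (deadband assumed)."""
    if pid_output > deadband:
        return "extend_valve_open"
    if pid_output < -deadband:
        return "retract_valve_open"
    return "both_closed"

# Doubling the charge pressure doubles the joint stiffness
k_soft = gas_spring_stiffness(2e5, 5e-4, 1e-4)    # 0.2 MPa
k_stiff = gas_spring_stiffness(4e5, 5e-4, 1e-4)   # 0.4 MPa
```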

Along with the different modes of controlling the actuators, charging the power source of a prosthetic or exoskeleton unit is one of the significant design aspects. Singla et al. [111] explicated a variety of available alternatives for energizing exoskeleton units. Focusing on Human Powered Products (HPP), the authors explained the functioning of electrostatic, electromagnetic, and piezoelectric energy conversion technologies in the context of charging low-powered exoskeletons. Both charging through the relative movement of electrically isolated charged capacitor plates (electrostatic) and charging through mechanically strained lattice structures that produce electrical voltage (piezoelectric) were reported to yield low power outputs for charging exoskeleton units. The study therefore focused on electromagnetic devices, such as hand-crank and pedal-powered generators, as alternative human-powered sources for lower-limb exoskeletons.

For Neural Activity Sensing and Neurostimulation

To achieve optimal biocompatibility, greater ease and efficiency of signal detection and stimulation, and increased stability and longevity of the installed devices, the material properties of the components used in neural analysis have been explored through numerous studies and experiments.

The success of both neural activity detection and neural stimulation depends on the properties of the interface created between the electrodes and the neural tissue, because electrical charge carriers of the electronic hardware and ionic charge carriers of the biological tissue are exchanged at this interface [112]. Charge flow across the interface varies with the polarizability of the electrode material: silver–silver chloride (Ag–AgCl) electrodes are categorized as non-polarizable and are typically used for neural recordings, whereas platinum (Pt) electrodes, generally used for neural stimulation, are examples of polarizable electrodes.

One of the earliest neural recording electrode groups was microwire-based, consisting of fine conducting wires coated with biocompatible insulators except at the tips. Gradually, the use of microelectrodes became ubiquitous, and different MEA designs were developed, such as the planar Michigan array and the three-dimensional Utah Electrode Array (UEA). These designs employed various materials, viz. the early glass pipette electrodes, inert conducting metals like gold and platinum/iridium, and rigid materials like stainless steel, silicon, and titanium [113]. However, such designs suffered from their inherent stiffness, which caused a mechanical mismatch with the brain tissue accompanied by mechanical failure of electrodes and delamination of insulation. In addition, post-implant soft tissue scarring and inferior electrode integration impeded the long-term viability and signal quality of neural interfaces. These issues led to the development of high-density MEAs and biochemically modified neural interfaces [112, 114].

Ware et al. [115] delineated the development of a Shape Memory Polymer (SMP)-based flexible neural interface to counter the mechanical and geometrical mismatch between electronic hardware and neural tissue seen in contemporary designs. The researchers designed an amorphous ternary thiol-ene/acrylate thermoset SMP as a substrate for nerve cuff electrodes. The flexible cuff could be implanted under the nerve in a planar state and would then change shape under physiological conditions to conform to the nerve, bringing the installed electrodes into contact with the targeted tissue.

In another work, Wang et al. [116] proposed a MEMS-fabricated flexible neural implant with an ultra-thin substrate to overcome the trauma and subpar signal quality associated with conventionally rigid and bulky implants. Aimed at minimally invasive applications, the microdevice, equipped with a Pt-black-modified electrode assembly, was temporarily attached to a silicone shuttle with an improved structure to facilitate reliable implantation. With this implant, the authors reported the ability to record neural signals even one month after implantation.

Keefer et al. [117] demonstrated enhanced neural recording and stimulation by coating the typically used tungsten and stainless steel wire electrodes with conductive carbon nanotubes (CNT). Combinations of CNT with various materials, such as metals like gold and conductive polymers (CP) like polypyrrole (PPy), were employed to analyze the performance of the composite coating in different experiments. The results showed higher charge transfer, reduced impedance, and improved power spectra for the CNT-coated electrodes. The efficacy of conductive nanomaterials was further established by Khraiche et al. [118] and Lu et al. [119], who demonstrated improved neural adhesion, output, and survival period using Single-Walled CNT (SWCNT) and PPy coatings on electrodes.

Likewise, other methods were researched to develop alternative control paradigms. Pan et al. [120] described the functioning of a piezoelectric fiber-based mechanomyography (MMG) sensor to trigger the actuation of a lower limb exoskeleton system. The polyvinylidene fluoride (PVDF)-based trigger-driven MMG sensor was combined with electrophysiological signals to demonstrate improved SNR and sensitivity. The piezoelectric properties of the material were reported to have enabled precise detection of dynamic signals corresponding to the slightest of motions. A planar interdigitated (IDT) electrode was used to collect signals from the deformed PVDF fibers in response to limb motion, and Metal Oxide Semiconductor Field Effect Transistor (MOSFET) technology was used to control various phases of the motors. The authors carried out different experiments to establish the improved performance of the novel MMG sensor in comparison to conventional EMG sensors.

In a quest for further miniaturization of the neural devices, novel material technology was fused with concepts like Field Effect Transistors (FET) and small-scale, flexible electronics. Qing et al. [121] described the fabrication of free-standing kinked silicon nanowire (SiNW) probes equipped with nano FET detectors at the tips. These probes could be manipulated with the help of a standard microscope to target specific cells. The authors successfully recorded intra-cellular action potentials using the designed nanodevice performing on par with conventional methods.

Apart from achieving biocompatibility and chemical inertness, powering the installed electronics, especially in intracranial implants, has been a considerable challenge in the field of neural interfaces. In this context, researchers have explored materials and polymer coatings such as Barium Titanate (BaTiO3) and PVDF. Seo et al. [122] investigated the feasibility of ultrasonic, low-powered piezoelectric neural interfaces. The prime concept behind such devices is to use ultrasound waves to mechanically vibrate the piezoelectric crystals in the implants from outside the body, enabling them to convert the mechanical energy into electricity to power the embedded transistors. The reverse mechanism, called backscatter, is used to transmit information out of the body back to the data collectors [123]. The neural dust technology explained by Seo and his team [122] made use of Complementary Metal Oxide Semiconductor (CMOS) front ends to disperse independent sensing nodes throughout the brain; an ultrasonic transceiver was placed under the skull, and an external transceiver was installed to facilitate wireless communication. Ultrasound suits this data transmission because of its low attenuation through the tissue environment, and the biocompatibility of piezoelectric materials coupled with CMOS technology allowed for improved scalability of the hardware systems.
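A back-of-envelope link budget illustrates why ultrasound is attractive for this role. The sketch below assumes a soft-tissue attenuation coefficient of roughly 0.5 dB/(cm·MHz) and an illustrative electromechanical conversion efficiency; none of these figures are taken from the cited study.

```python
def received_power_mw(p_tx_mw, freq_mhz, depth_cm,
                      alpha_db_per_cm_mhz=0.5, conversion_eff=0.25):
    """Ultrasonic power delivery to a piezoelectric mote: the acoustic
    wave attenuates by ~ alpha * f * d (in dB) through tissue, and the
    crystal converts a fraction of the incident power to electricity.
    All parameter values here are illustrative assumptions."""
    loss_db = alpha_db_per_cm_mhz * freq_mhz * depth_cm
    p_at_mote = p_tx_mw * 10 ** (-loss_db / 10)
    return p_at_mote * conversion_eff

# e.g. 10 mW transmitted at 2 MHz through 2 cm of tissue:
# ~2 dB path loss, so roughly 1.6 mW harvested at the mote
print(received_power_mw(10.0, 2.0, 2.0))
```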

Analysis Algorithms

While the electronic hardware handles signal transmission and device actuation in a neural interface, feature extraction and control algorithms play the vital role of 'translators', processing biological activities into signals decipherable by present computing devices and vice versa. Algorithms developed in this context follow a variety of algebraic and differential mathematical models and electronics concepts to map between biotic and abiotic signals. There are two aspects to a translator's, or decoder's, functionality: first, the algorithm is improved continually to predict natural user intent or activity from the received signal; second, the subject is trained to generate neural activations with specific parameters in order to resemble a predefined task.

Schaeffer and Aksenova, in their work on transducer design and identification for motor BCI systems, presented a comprehensive account of different decoders [124]. The authors defined biomimetic type decoders (mapping natural neuronal activity to desired limb movements) and biofeedback decoders (user training-based models). They also introduced another class of decoders, called indirect decoders. This type of translator records distinguishable neural patterns, which may not be intended for a specific natural activity. Then, the detected signal pattern is related to a task of choice to develop BCIs that can carry out the desired activities.

Further literature outlines the feature extraction process for neural signals. The electrical signals associated with neural activities carry predictable, distinctive properties, such as spike counts (high-frequency components) and amplitude, frequency, or phase variations, that are unique to their corresponding neural activities. Tools such as the Short-Time Fourier Transform (STFT), the Hilbert transform, empirical mode decomposition, Bayes filtering, and Common Average Referencing (CAR) [125,126,127] have been used to analyze these parameters in recorded neurosignals. Similar methods have been implemented as signal pre-processors, to reduce artifacts, perform signal segmentation, resample signals, and so on, yielding cleaner signal outputs and reducing the load on subsequent steps in the decoder algorithms [41, 128].
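Two of the named tools, CAR and the STFT, are simple enough to illustrate directly. The following sketch, with an assumed sampling rate and window length applied to synthetic data, re-references a multichannel recording to its common average and then extracts time-resolved spectral power from which band-power features could be drawn.

```python
import numpy as np
from scipy.signal import stft

def common_average_reference(eeg):
    """CAR: subtract the instantaneous mean across channels from every
    channel, suppressing noise common to all electrodes.
    eeg shape: (n_channels, n_samples)."""
    return eeg - eeg.mean(axis=0, keepdims=True)

def band_power_features(eeg, fs=250.0, window_s=0.5):
    """Per-channel STFT returning a time-resolved power spectrum;
    sampling rate and window length are illustrative."""
    nperseg = int(fs * window_s)
    f, t, Z = stft(eeg, fs=fs, nperseg=nperseg, axis=-1)
    return f, t, np.abs(Z) ** 2       # spectral power features

rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 2500))  # 8 channels, 10 s of synthetic data
f, t, power = band_power_features(common_average_reference(eeg))
```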

There are also data-driven translators. These decoders use features and parameters extracted from the acquired signals and store them as data sets; algorithms based on these data then predict the most probable output for a registered signal. For instance, a BCI might need to distinguish between inputs that denote moving the subject's hand in one of four directions (up, down, left, or right). Discrete decoders, or data-based classifiers, are used to assign an incoming signal to one of the four groups corresponding to the desired movement; logistic regression, the Support Vector Machine (SVM), k-Nearest Neighbors (kNN), and the naïve Bayes classifier [24, 129,130,131,132,133] are a few examples of classifier algorithms. Likewise, the speed at which a user rotates their forearm might influence the amplitude or frequency of the obtained signal. Continuous decoders derive a relationship between signal features and such activity parameters, allowing the algorithm to predict the relevant parameters for any new incoming signal; linear regression and the Unscented Kalman Filter (UKF) [8, 9, 69, 134] are some algorithms that fall under this category.
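Both decoder families reduce to familiar estimators. The sketch below, using synthetic stand-in data rather than real neural recordings, trains an SVM as a four-class discrete decoder for the direction example and a linear regression as a continuous decoder for a kinematic parameter such as rotation speed.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Discrete decoder: classify feature vectors into one of four
# movement directions (synthetic stand-in data).
X_cls = rng.standard_normal((200, 16))       # 16 features per trial
y_cls = rng.integers(0, 4, size=200)         # 0=up, 1=down, 2=left, 3=right
clf = SVC(kernel="rbf").fit(X_cls, y_cls)
direction = clf.predict(X_cls[:1])

# Continuous decoder: regress a kinematic parameter (e.g. forearm
# rotation speed) onto the same kind of features.
X_reg = rng.standard_normal((200, 16))
speed = X_reg @ rng.standard_normal(16) + 0.1 * rng.standard_normal(200)
reg = LinearRegression().fit(X_reg, speed)
speed_estimate = reg.predict(X_reg[:1])
```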

Another set of powerful algorithms used in BCI decoders is the class of neural networks, including Artificial Neural Networks (ANN), CNNs, and deep learning models. Neural networks consist of stacked computational layers whose parameters are tuned by optimization algorithms to derive a relation between inputs and outputs. They are distinguished by their strong performance and ability to handle large data sets while performing the tasks of both discrete and continuous decoders. In the study by Mahmood and team [24], the superior performance of 2-layer CNN models over 1-layer CNN and SVM algorithms was established through experimentation.
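For orientation, a 2-convolutional-layer decoder of the kind compared in that study might look like the following PyTorch sketch; the layer sizes, kernel widths, and input dimensions are assumptions for illustration, not the published architecture.

```python
import torch
import torch.nn as nn

class TwoLayerCNN(nn.Module):
    """Minimal 2-convolutional-layer decoder for multichannel
    time-series epochs (hypothetical sizes, not the cited model)."""
    def __init__(self, n_channels=8, n_samples=250, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),       # collapse the time axis
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                  # x: (batch, channels, samples)
        z = self.features(x).squeeze(-1)
        return self.classifier(z)          # class logits

model = TwoLayerCNN()
logits = model(torch.randn(4, 8, 250))     # 4 epochs -> (4, 4) logits
```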

Table 1 summarizes the principal materials and methods, along with their corresponding references, discussed in the different sections of this review.

Table 1 Methods and Materials Utilized in the Referenced Literature

Discussion

This literature survey provides an account of both invasive and non-invasive modalities used for neural interfaces. From the studies reviewed, it is apparent that invasive modalities are more prevalently employed for developing neuroprosthetics targeted at paraplegic patients. The current trend in the field is toward continually improving the biocompatibility of materials used in intracortical neural implants, and present-day researchers are exploring implant applications more deeply, toward feats such as enabling vision in blind patients and improving paralyzed patients' control over their neuroprosthetics. Until recently, invasive modalities have been more widespread in BCI applications than their non-invasive counterparts, chiefly because of the higher quality of the collected neural signals, which has equipped researchers to develop more reliable control paradigms and, thereby, more reliable BCIs.

Compared to intracortical implants, non-invasive methods, primarily involving EEG technology, produce higher noise in the collected neural signal. However, with recent developments in AI and ML algorithms, it is expected that better prediction programs can be developed that, when trained with ample amounts of clean data, will correlate collected neural signal values with their corresponding physical activities even in the presence of noise. Such an achievement would open up possibilities for the long-aspired widespread clinical adoption of BCIs. Non-invasive modalities equipped with such signal processing algorithms would vastly improve the modularity of existing BCIs while eliminating the risks involved in invasive brain surgery. With researchers continuously reporting improved findings with modern AI algorithms, it is hoped that such a future is not far.

Conclusion

This review article has presented work done by various researchers in the fields of neural signal harnessing, brain stimulation, and the implementation of software and hardware for the design and development of neural interfaces. Along with compiling the practices followed to harness electrical neural activity for controlling external actuators, this review also presents theoretical and experimental research output from previous studies on different BCI systems. For truly closed-loop BCI systems, real-time data acquisition from the brain as well as feedback delivery to the brain is essential to improve the quality of life of users who need clinical care or physical augmentation. The methodologies documented in this paper have their corresponding scopes and limitations. Some of the prime challenges of contemporary practice in the neuro-interface area are the limited spatiotemporal resolution of non-invasive signal acquisition modalities, the techno-ethical challenges of invasive brain stimulators, and the complexity of recovering complete audiovisual sensation in profoundly deaf and blind subjects. However, researchers have been working on solutions and continually reporting progress toward success. In the future, with the integration of knowledge from different interdisciplinary fields, fully closed-loop neuroprosthetic BCIs are expected to become available to users.