1 Introduction

Since the term Brain-Computer Interface (BCI) was first coined in 1973 by Jacques J. Vidal [1], interest and efforts in this field have grown tremendously, and several hundred laboratories worldwide are now thought to be conducting research on this topic. The most typical example of this technology in practice is the direct control of a graphical element (e.g., a computer cursor) by a person using a BCI based on electrophysiological signals: the user can move the cursor to the left or to the right simply by imagining the movement of the left or right hand, respectively.

The aim of this research is not only to create an inventory of existing BCI technologies, but also to identify where they can be or are being applied, analyzing the challenges of implementing such a system to assist in the decision-making process. To that end, a BCI toolkit is presented. This resource is intended for use with different target audiences (e.g., children, seniors, people with intellectual disabilities), thus enhancing human abilities and consequently supporting independent living.

Indeed, we intend to answer the following questions:

Q1. Which technologies and applications of BCI exist today?

Q2. How can one provide a BCI toolkit to further test and enhance human abilities?

To assess the current state of existing BCI technologies and applications, we followed the methodology of Levac et al. [7] and conducted a survey through an exploratory study. After covering the theoretical framework for this study, we share our research methodology and present the results. A toolkit covering the equipment and demonstration programs to be used is also presented later on.

2 Literature Review

2.1 Brain-Computer Interface (BCI)

Since the term Brain-Computer Interface (BCI) [1] was first coined, interest and efforts in this field have grown tremendously, and it is believed that several hundred laboratories worldwide are today conducting research on this topic [2].

The term BCI was later extended to Brain-Machine Interface (BMI), which is more comprehensive, since it refers not only to an interface between the brain and a computer but to an interface with any machine, that is, any device other than a computer that can be controlled by commands originating in the brain. BCIs can be invasive or non-invasive, depending on whether an implant (an electronic device placed inside the body, usually in the head) or an external device is used.

Indeed, some indicators of this field’s rapid growth are the number of research groups around the world working on BCI, the number of peer-reviewed journal articles and abstracts, and participation in relevant conferences. With dozens of companies and research groups actively contributing to the development of this field and its associated technologies, topics such as collaboration, terminology and clear future planning are of great importance. To address these needs, the European Commission funded the coordination action “Future BNCI: A Roadmap for Future Directions in Brain/Neuronal Computer Interaction” in 2010/2011. This project was undoubtedly the first effort to promote collaboration and communication between key stakeholders [3].

BCI systems have been evolving over time in various ways. Some of the main trends identified have explored sensor enhancement, software usability, more natural and context-sensitive interaction, hybridization with other communication systems (including Brain/Neural-Computer Interfaces, or BNCIs), new applications such as motor recovery and entertainment, testing and validation with target users in home environments, and even the use of BCI technology for basic scientific and diagnostic research. Moreover, BCIs are gaining increased attention in academia, business, the assistive technology community, the media and the general public [3].

However, despite progress, BCIs remain quite limited in real-world scenarios. They are slow and unreliable, especially over long periods of use by target users. Also, BCIs require expert assistance in many ways: e.g., a typical end-user today needs help identifying, purchasing, installing, configuring, maintaining, repairing and updating the BCI. Besides, many of these interfaces still use gel-based sensors that require expert help for their setup and cleaning.

Another factor is design, which often does not take the end user into account and is instead driven by the designer’s goals and skills. Moreover, the integration of BCI with other systems is still in its infancy, as is the case with assistive technologies, different BNCI systems, head-mounted devices and practical, usable interfaces [3].

Through a BCI, people can communicate using thought alone. That is why BCIs, since they do not require movement, may be the only possible communication system for users with severe disabilities who cannot speak or use keyboards, mice or other interfaces [3].

2.2 Implementation of BCI Systems

There are often misunderstandings about what can and cannot be done with BCI systems. To be clear, this technology does not write information into the brain, does not alter perception, and does not implant thoughts or images.

Indeed, BCIs cannot work remotely or without the user’s knowledge. To use a BCI, a person must have a sensor of some kind in their head and must voluntarily choose to perform certain mental tasks to achieve the proposed goals [4].

In the most commonly adopted definition, any BCI must meet four criteria [5]:

  1. Direct: The system should be based on direct measurements of brain activity.

  2. Intentional Control: At least one measurable brain signal, which can be modulated intentionally, must be provided as input to the BCI (electrical potentials, magnetic fields or hemodynamic changes). That is, users must choose to perform a mental task, with the goal of sending a message or command, each time they want to use the BCI.

  3. Real Time Processing: The signal processing must take place online and produce a communication or a control signal.

  4. Feedback: The user should get feedback on the success or failure of their communication or control efforts. If a BCI does not provide feedback, there is no “interface” and the device or system is simply a monitor of brain activity.

BNCI differs only in the first criterion; the signals can also reflect direct measurements of other nervous system activities, such as eye movement (EOG), muscle activity (EMG) or heart rate (HR). BNCI systems are also referred to as hybrid BCIs or multimodal BCIs [5].

2.3 BNCI Horizon 2020

The BNCI Horizon 2020 project, as well as its predecessor Future BNCI, is important for the field of BCIs and is therefore presented and discussed in this study, not only for the information it contains but also for other relevant aspects. The project “Future in Brain/Neural-Computer Interaction: Horizon 2020” ran from November 2013 to April 2015.

BNCI Horizon 2020 was a Coordination and Support Action (CSA) funded under the European Commission’s 7th Framework Programme. As such, the project did not conduct research itself but instead aimed to promote collaboration and communication between stakeholders in the field of Brain-Computer Interfaces (including research groups, companies, end users, policy makers and the general public) [6].

Indeed, BNCI Horizon 2020 aimed to continue and enhance the efforts initiated by Future BNCI. Its main goal was to provide a global perspective on the field of BCIs, now and in the future. The consortium included eight major European BCI research institutions, three industrial partners and two end-user organizations (one of which was also a research partner) [6].

The applications of BCIs are diverse [6] and can:

  • Replace functions that have been lost due to injury or illness (e.g., communication and wheelchair control).

  • Restore lost functions (e.g., stimulation of muscles in a paralyzed person and stimulation of nerves to restore bladder function).

  • Improve functions (e.g., in stroke rehabilitation).

  • Enhance functions (e.g., detecting stress levels or attention lapses during demanding tasks).

  • Be used as a Research Tool to study brain functions.

Figure 1 shows some usage scenarios of BCIs.

Fig. 1. Usage scenarios (source: [6]).

In terms of BCI Market and Stakeholders, the BNCI Horizon 2020 project has identified 148 related companies:

  1. BCI sector (65 companies).

  2. Automotive and Aerospace Sector (7 companies).

  3. Medical Technology, Rehabilitation and Robotics Sector (46 companies).

  4. Entertainment and Marketing Sector (10 companies).

  5. Technology Sector (20 companies).

Figure 2 illustrates the relative proportion of large enterprises, public entities (non-profit organizations), small and medium-sized enterprises (fewer than 250 employees) and startups (founded in 2010 or later) for each sector.

Fig. 2. BCI market and stakeholders (source: [6]).

Many companies in the BCI sector offer more than one signal type, but EEG (Electroencephalogram) is the most prevalent, followed by EMG and ECG (Electrocardiogram). Indeed, invasive brain signal acquisition solutions are offered by only 6% of companies. Other potential BCI-related signals, such as near-infrared spectroscopy and heart rate, have approximately the same share as invasive electrocorticography [6].

3 Research on Current BCI Technologies and Applications

An exploratory survey was conducted for data collection, following the methodology of Levac et al. [7], in order to survey current BCI technologies and applications. After outlining the question that motivated the study and the objectives to be achieved, the approach was as follows. First, searches for papers were conducted on Google Scholar [38], IEEE Xplore [39], Science Direct [40] and Frontiers [41]; the first three platforms are the most frequently used in academia and the fourth was founded by a group of neuroscientists. The keywords employed in the search were “BCI” or “BMI” or “BNCI” + “Technology” or “Application”, and the time range of publications was limited to 2015 to 2021. Second, the papers obtained through this search were carefully screened: similar and incremental publications by the same author were removed, leaving only those whose content is distinct and significant for the survey of BCI technologies and applications. Finally, all the information collected was synthesized and presented clearly and succinctly in order to facilitate the interpretation of the results.

3.1 Results

After the search, a total of 30 diverse papers were selected from the results obtained. These were the papers that best answered the question initially formulated.

Depending on the needs of each case, BCI technologies can be: invasive (installed inside the body, with surgery to place an implant), minimally invasive (a small superficial incision is made and the implant is placed under the skin) or non-invasive (easy to place and use, without the need for any surgical operation).

The most common and most widely used non-invasive technology for acquiring brain signals in BCIs is EEG (Electroencephalography, which records the brain’s electrical activity); others include MEG (Magnetoencephalography), fNIRS (functional Near-Infrared Spectroscopy), fMRI (functional Magnetic Resonance Imaging) and fTCD (functional Transcranial Doppler).

Table 1 summarizes the BCI technologies and applications currently in use, as presented in the papers obtained during the research.

Table 1. BCI technologies and applications

From the results obtained, several specific EEG signals were used, such as: the P300 (an event-related potential occurring roughly 300 ms after a stimulus, often elicited in decision-making tasks), ERPs (Event-Related Potentials, an analysis that identifies the specific brain activity elicited when the individual is exposed to certain stimuli), the VEP (Visual Evoked Potential, a potential caused by a visual stimulus), the SSVEP (Steady State Visual Evoked Potential, a natural response to visual stimulation at specific frequencies), the SMR (Sensorimotor Rhythm, a brain wave with a frequency in the range of 13 to 15 Hz), MRCPs (Movement-Related Cortical Potentials, used to detect the intention to move) and MI (Motor Imagery, a mental process by which an individual rehearses or simulates a particular action).

The applications of BCIs found were highly diverse and go far beyond simple use in health [15, 29] and rehabilitation [9, 13]. Applications were found in areas such as security [14, 28], communication [11, 16, 33], drone control [10], obtaining private data, emotion classification [26], computer games [32, 34], mouse and keyboard control [19], virtual reality [18], authentication [24] and augmented reality [25], among others.

4 Kit for Experimentation

A BCI toolkit was developed with different target audiences in mind (children, seniors, people with intellectual disabilities); its main goal was to give users a way to maintain an independent life routine. This kit for experimentation included specific artifacts: free software dedicated to the study of the brain; datasets available online for download; equipment for experimentation; and demonstration applications developed specifically for this study in Python and Unity. Our intention was to understand whether this all-in-one solution could be of value regarding BCIs.

4.1 Software Dedicated to the Study of the Brain

A simple Google search turns up free software dedicated to the study of the brain that can be used in research or in educational activities, such as teaching. Some examples of such software are detailed below.

BCI2000.

A software package oriented towards BCI research. It is typically used for data acquisition, stimulus presentation and brain-monitoring applications. BCI2000 supports a variety of data acquisition systems, brain signals and study/feedback paradigms. During operation, data is stored in a common native format or in GDF (General Data Format for Biomedical Signals), along with all relevant event markers and system configuration information. Various tools for data import/conversion are also included, such as the possibility of directly loading data files into MATLAB and exporting resources to ASCII (American Standard Code for Information Interchange). More information and the corresponding download can be found at [42].

BioSig.

An open-source software library for biomedical signal processing, featuring, for example, the analysis of biosignals such as electroencephalogram (EEG), electrocorticogram (ECoG), electrocardiogram (ECG), electrooculogram (EOG), electromyogram (EMG), respiration, etc. BioSig’s main application areas are: Neuroinformatics, Brain-Computer Interfaces, Neurophysiology, Psychology, Cardiovascular Systems and Sleep Research. It can be obtained from [43].

BrainBay.

A bio and neurofeedback application designed to work with various EEG amplifiers, including the open hardware OpenEEG and OpenBCI. It supports Human-Computer Interface functions and the NeuroServer Software Framework to transmit live recordings via Internet/LAN. BrainBay can be downloaded from [44].

Brainstorm.

An open-source collaborative application dedicated to the analysis of brain recordings: MEG, EEG, fNIRS, ECoG, depth electrodes and multi-unit electrophysiology. The goal is to share a comprehensive set of easy-to-use tools with the scientific community using MEG/EEG as an experimental technique. For clinicians and researchers, the main advantage of Brainstorm is its rich and intuitive graphical interface, which does not require any programming knowledge. The Brainstorm website is [45].

Cartool.

The EEG analysis software developed at the Functional Brain Mapping Lab (FBMLab) in Geneva, Switzerland. This project was started in 1996, and is still actively developed to this day. It was fully programmed by Denis Brunet in C++ and has no other dependencies to run. It can be downloaded from [46].

EEGLAB.

A MATLAB interactive toolbox for processing EEG, MEG and other continuous and event-related electrophysiological data, incorporating independent component analysis (ICA), time/frequency analysis, artefact rejection, event-related statistics and various useful modes of visualization of averaged and single-trial data. EEGLAB runs on Linux, Unix, Windows and Mac OS X. It can be reached at [47].

FieldTrip.

A MATLAB software toolbox for MEG, EEG and iEEG (intracranial electroencephalography) analysis. It is developed by members and collaborators of the Donders Institute for Brain, Cognition and Behavior at Radboud University, Nijmegen, The Netherlands. It offers pre-processing and advanced analysis methods, such as time-frequency analysis, source reconstruction using dipoles, distributed sources and beamformers, and non-parametric statistical tests. It supports the data formats of all major MEG systems and the most popular EEG and iEEG systems. New data formats can be added easily. FieldTrip contains high-level functions that the user can use to build their own analysis protocols as a MATLAB script. It is freely available as open source software under the GNU General Public License and can be obtained from [48].

MNE-Python.

An open-source Python module for processing, analysis and visualization of functional neuroimaging data (EEG, MEG, sEEG - stereoelectroencephalography, ECoG and fNIRS). There are also several related or interoperable software packages that the user may want to install, depending on the analysis needs they have. MNE-Python is available at [49].
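As an illustration of how such a package fits into the toolkit, the following is a minimal sketch of loading and band-pass filtering a recording with MNE-Python; the file name is hypothetical and stands in for any EEG file in FIF format.

```python
import mne

# Hypothetical recording in FIF format; any EEG file supported by MNE would do.
raw = mne.io.read_raw_fif("sample_eeg_raw.fif", preload=True)

# Keep the 1-40 Hz band, which covers the rhythms most BCI paradigms rely on.
raw.filter(l_freq=1.0, h_freq=40.0)

# Interactive browsing of the filtered signal, ten seconds at a time.
raw.plot(duration=10.0)
```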

NeuroExperimenter.

Allows the visualization and monitoring of brain waves while the user tries out different “mental states”, such as meditation, relaxation and concentration. NeuroExperimenter works with NeuroSky’s MindWave and MindWave Mobile headsets, providing access to the recorded brain waves and allowing them to be combined; through “formulas”, the user can specify combinations that may characterize a mental state and then train to generate and reach that state by means of visual and auditory feedback. It can be obtained at [50].

OpenViBE.

A software platform that allows designing, testing and using Brain-Computer Interfaces. OpenViBE can also be used as a generic real-time EEG acquisition, processing and visualization system. The software can be downloaded at [51].

4.2 Datasets Available Online

Preparing an individual and recording EEG or other biosignals can be laborious and time-consuming. Nowadays, there are freely accessible data, recorded in laboratories and made available to the scientific community and the general public, that may be used in research (for instance, in the development of algorithms using machine learning techniques or in the study of diseases such as epilepsy), in teaching or in many other scenarios. Table 2 shows the links to some of these datasets available online for download.

Table 2. Datasets

4.3 Equipment for Experimentation

There are several examples of equipment that can acquire signals from the human body. There are also others that make it possible, from an experimentation and DIY (Do It Yourself) perspective, to collect biosignals in a more relaxed and educational way.

Arduino.

Arduino [52] is a hardware platform that allows electronic prototyping in a simple way, being easy to learn even for people with no knowledge of electronics or microcontroller programming. Arduino is basically a single board with an Atmel AVR microcontroller and a series of analogue and digital input/output ports to which sensors or other components can be connected; it is programmed using a standard language, with an open-source IDE available for this purpose. A documentary film about the platform, Arduino: The Documentary, was made in 2010 and later made available online [53]. Nowadays there are several Arduino boards (https://www.arduino.cc/en/Main/Products), a true panoply of them for the most diverse interests and uses. Arduino is also not limited to its single board: its capabilities can be expanded according to the needs of each project or application through the addition of other boards called shields. One of these shields, named HackEEG [54], was funded in 2020 through crowdfunding and makes it possible to carry out neuroscience studies. HackEEG (Fig. 3) is an affordable, open-source, high-performance shield, ideal for digitising biosignals such as EEG, EKG and EMG or, if the user wishes, for establishing a Brain-Computer Interface. Up to four HackEEG shields can be stacked on an Arduino Due (https://docs.arduino.cc/hardware/due), for a total of 32 EEG channels. A rate of 4000 samples per second, or 16000 if only one HackEEG is used (8 EEG channels), can be achieved with this configuration. Data can also be transmitted over Wi-Fi (Wireless Fidelity) through the Lab Streaming Layer communication protocol. HackEEG is used by leading research institutions and pharmaceutical companies in the U.S. and Europe (Starcat, 2022).

Fig. 3. HackEEG (source: [55]).
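Since HackEEG can publish its samples through the Lab Streaming Layer, a minimal Python sketch for picking up such a stream with the pylsl package might look as follows; it assumes an EEG stream is already being broadcast on the local network and simply prints raw samples as they arrive.

```python
from pylsl import StreamInlet, resolve_stream

# Assumes an EEG stream (e.g., from a HackEEG/Arduino setup) is already
# being published on the local network via Lab Streaming Layer.
streams = resolve_stream('type', 'EEG')   # block until an EEG stream is found
inlet = StreamInlet(streams[0])

for _ in range(1000):                     # read and print 1000 samples
    sample, timestamp = inlet.pull_sample()
    print(timestamp, sample)
```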

BITalino.

BITalino is an affordable, open-source biosignal platform that allows anyone, from students to professional developers, to create projects and applications using physiological sensors [56]. BITalino (Fig. 4) consists of an electronic kit that provides a series of sensors with applications in areas as diverse as electroencephalography, electrocardiography, electromyography, electrodermal activity, electrogastrography (EGG) and sport (Higher Technical Institute, University of Lisbon, 2021). The signals collected can be viewed and stored in an application developed for this purpose, OpenSignals [57]. This software supports multiple channels, allowing data from several sensors to be collected simultaneously. Furthermore, OpenSignals has a set of add-ons that allow the user to perform data analysis, create reports and extract features directly from the recorded signals, without any programming. Some creative and interesting projects in which BITalino has been employed can be consulted at [58], where it is also possible to view videos of these applications.

BITalino was born at the Institute of Telecommunications, Portugal, in 2012 and has since been conquering the world, having been licensed to Plux in 2013 and placed on the market (Higher Technical Institute, University of Lisbon, 2021). It was one of the 10 projects selected worldwide for the Engadget Insert Coin 2013 semi-final, was highlighted by The New Zealand Herald as one of that year’s favorite technologies, and was elected in 2014 as one of the 14 bets and one of the top 10 innovations [59, 60]. It has been used in numerous fields of interest: by MIT (Massachusetts Institute of Technology), Stanford University and Imperial College London in their laboratories; by Facebook, to study users’ reactions while they surf the Internet; and by Boeing, to study clients’ reactions to new services. Today, BITalino is present in more than 30 countries. Moreover, a new version, BITalino (r)evolution (Fig. 4), was launched in 2016, presenting improvements over the previous one, such as a further reduction in the size of the sensors. This version has almost twice as many accessories in the same format, without changing the dimensions of the board, and adds support for Bluetooth Low Energy (BLE), whilst maintaining the price point. Interestingly, each BITalino carries on its back a representation of Portugal and the inscription “Designed in Portugal, built for the World”.

Fig. 4. BITalino (left) and BITalino (r)evolution (right) (sources: [59, 60]).
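For completeness, the sketch below shows how a few seconds of data could be acquired from a BITalino with its official Python API (the bitalino package); the MAC address and channel choice are placeholders for the user’s own device and sensor.

```python
from bitalino import BITalino  # official PLUX API: pip install bitalino

MAC_ADDRESS = "20:16:07:18:XX:XX"   # placeholder for the device's address

device = BITalino(MAC_ADDRESS)
device.start(100, [0])              # 100 Hz sampling on analog channel A1

data = device.read(1000)            # roughly 10 s of samples (numpy array)
device.stop()
device.close()
```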

4.4 Demonstration Applications

In order to experiment with and demonstrate some functionalities and applications that can be used in real life, four applications were developed in which the input is achieved through the use of brain waves. Three of these programs were implemented in the Python programming language and the fourth with the 3D/2D game development platform Unity. The first three applications employed NeuroSky’s “MindWave Mobile 2” BCI, whereas the Unity-based application used the “NextMind” BCI. We describe each application’s workflow below.

In the first program (Fig. 5), the user controls the whole interface by intentionally varying their attention level, meditation level and blink intensity. The graphical interface is divided into three sections: “Speech Settings”, “Text to Speech” and “Feedback”. When “Setup” is selected, it is possible to change the speed (“Speed”) at which words are pronounced, choose the desired voice (“Voice”) and increase or decrease its volume (“Volume”). When “Talk” is selected, it is possible to alternate between the different text options by varying the meditation level. When the desired text is selected by blinking the eyes, it is converted into voice, enabling users to communicate even if they cannot do so by regular means. To help users interact with the application, real-time feedback on the attention level, meditation level and blink intensity is provided, so that they can better perceive their mental state and, if necessary, shape it to achieve the desired interaction.
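A simplified sketch of this control loop is given below; the headset-reading function is a stand-in for the actual MindWave Mobile 2 connection, and the thresholds, phrases and timing are illustrative values, not the ones used in the application.

```python
import time
import pyttsx3

def read_headset():
    """Placeholder for the MindWave Mobile 2 connection; returns
    (attention, meditation, blink_strength), each on a 0-100 scale."""
    return 0, 0, 0

PHRASES = ["I am thirsty", "I am hungry", "Please call someone"]  # example options

engine = pyttsx3.init()
engine.setProperty("rate", 150)     # speaking speed ("Speed" setting)
engine.setProperty("volume", 0.8)   # output volume ("Volume" setting)

selected = 0
while True:
    attention, meditation, blink = read_headset()
    if meditation > 60:                          # high meditation cycles the options
        selected = (selected + 1) % len(PHRASES)
    if blink > 80:                               # a strong blink speaks the selection
        engine.say(PHRASES[selected])
        engine.runAndWait()
    time.sleep(0.5)
```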

The second program (Fig. 6) allows playing the “Rooster Game” (Tic Tac Toe). The mouse cursor is controlled by varying the player’s attention level, enabling positioning on the desired square (one of nine possible positions); the mouse click that executes the move is triggered by blinking the eyes. In this way the player can interact without the keyboard and mouse usually needed to control the program. Whenever a game ends, because the player wins, draws or loses, a new one is automatically started and the result is added to the score. Feedback is given to the player by the mouse cursor jumping from square to square whenever the player reaches a certain attention level.
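The cursor control described above can be approximated with the pyautogui package, as in the sketch below; the square coordinates, thresholds and the update callback are hypothetical and would be adapted to the actual game window and headset readings.

```python
import pyautogui

# Hypothetical screen coordinates for the nine Tic Tac Toe squares.
SQUARES = [(x, y) for y in (200, 300, 400) for x in (200, 300, 400)]
current = 0

def on_headset_update(attention, blink):
    """Called with fresh MindWave readings (0-100 scales)."""
    global current
    if attention > 70:                   # high attention jumps to the next square
        current = (current + 1) % len(SQUARES)
        pyautogui.moveTo(*SQUARES[current])
    if blink > 80:                       # a strong blink clicks to place the mark
        pyautogui.click()
```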

Fig. 5. EEG text to speech

Fig. 6. EEG Tic Tac Toe

The third piece of software (Fig. 7) is an example of physical computing. Here, the user controls the switching on and off of three LEDs connected to a Raspberry Pi [61], although other electronic components could also be used. The Raspberry Pi is a small, affordable single-board computer that can be used to learn how to code through fun and practical projects. Interaction with the hardware is achieved through the computer’s GPIO (General-Purpose Input/Output) pins. Two distinct operations have been implemented: one is triggered when a certain attention level is reached and the other when an eye blink of a certain intensity is performed.
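On the Raspberry Pi side, the LED switching can be expressed with the gpiozero library as in the sketch below; the GPIO pin numbers, thresholds and the update callback are illustrative and depend on how the LEDs are actually wired and how the headset readings are delivered.

```python
from gpiozero import LED   # runs on the Raspberry Pi itself

# Three LEDs wired to hypothetical GPIO pins.
leds = [LED(17), LED(27), LED(22)]

def on_headset_update(attention, blink):
    """Called with fresh MindWave readings (0-100 scales)."""
    # Operation 1: a sustained attention level keeps the first LED lit.
    if attention > 70:
        leds[0].on()
    else:
        leds[0].off()
    # Operation 2: a sufficiently strong blink toggles the second LED.
    if blink > 80:
        leds[1].toggle()
```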

Fig. 7. EEG physical computing

Fig. 8. EEG Pong

Finally, a game resembling the classic “Pong” was developed with the Unity IDE (Integrated Development Environment). This type of game was first developed by Atari Inc. and was eventually released for arcade machines on 29 November 1972, achieving enormous success. In this modern version (Fig. 8), the player uses the NextMind BCI to control the up and down movement of the right paddle, while the left one is controlled by the computer. Control is achieved by means of the player’s visual attention: when the player looks at the upper right square on the screen, the paddle moves upwards, and the opposite happens when they look at the lower right square. Feedback is given to the player via the “green triangular crosshairs”, which indicate their level of visual attention.

5 Conclusions

BCIs can achieve very satisfactory practical results where no other technology can, with accuracy values of around 90% or more [8, 16, 22, 37]. Indeed, in some areas BCIs are the only technology that enables users to carry out actions that would otherwise be impossible [12]. We also recognize that some solutions allow a BCI to be combined with signals other than brain signals, such as the EOG (Electrooculogram) and the EMG (Electromyogram), thus improving the effectiveness of the resulting system [16, 37].

In addition to EEG, other technologies have been developed and applied in order to obtain more information from the brain. We are aware that these technologies may lead to the creation of better BCIs; they include MEG, fMRI (which measures brain activity through the variations in blood flow associated with it) and, more recently, fNIRS and fTCD (an ultrasound technique that uses sound waves to assess blood circulation in and around the brain) [27, 30, 36]. In this regard, we assembled a BCI toolkit that gives users an all-in-one solution to implement this interaction mode and developed four applications focused on user interaction. Our intention was to propose a toolkit that supports interaction for different user profiles, such as children, seniors or people with intellectual disabilities, so as to provide them with an independent life routine. We are aware of the need to carry out further testing of these applications and to understand their performance in real-life scenarios, but we believe this may be a starting point in the creation of user-friendly interaction modes for BCIs.

To conclude, the future of BCIs is promising and the possible applications are immense. Being able to gather the knowledge resulting from the literature in a way that allows quick and clear consultation is extremely relevant, as it highlights the research lines, technologies and most relevant application cases so that policy makers, professionals and consumers can make effective use of the findings.