Review

Brain-Computer Interface: Advancement and Challenges

by M. F. Mridha 1, Sujoy Chandra Das 1, Muhammad Mohsin Kabir 1, Aklima Akter Lima 1, Md. Rashedul Islam 2,* and Yutaka Watanobe 3

1 Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
2 Department of Computer Science and Engineering, University of Asia Pacific, Dhaka 1216, Bangladesh
3 Department of Computer Science and Engineering, University of Aizu, Aizu-Wakamatsu 965-8580, Japan
* Author to whom correspondence should be addressed.
Sensors 2021, 21(17), 5746; https://doi.org/10.3390/s21175746
Submission received: 17 July 2021 / Revised: 15 August 2021 / Accepted: 20 August 2021 / Published: 26 August 2021
(This article belongs to the Section Sensors and Robotics)

Abstract

The Brain-Computer Interface (BCI) is an advanced, multidisciplinary, and active research domain built on neuroscience, signal processing, biomedical sensors, and hardware. Over the last decades, several groundbreaking studies have been conducted in this domain, yet no review that covers the BCI domain comprehensively has been published. Hence, this study presents a comprehensive overview of the BCI domain. It first covers several applications of BCI and upholds the significance of the domain. Then, each element of a BCI system, including techniques, datasets, feature extraction methods, evaluation metrics, existing BCI algorithms, and classifiers, is explained concisely. In addition, a brief overview of the technologies and hardware, mostly sensors, used in BCI is appended. Finally, the paper investigates several unsolved challenges of BCI and discusses possible solutions.

1. Introduction

The quest for direct communication between a person and a computer has always been an attractive topic for scientists and researchers. A Brain-Computer Interface (BCI) system directly connects the human brain to the outside environment: it is a real-time brain-machine interface that interacts with external parameters. A BCI system employs the user's brain activity signals as a medium of communication between the person and the computer, translating them into the required output. It enables users to operate external devices through brain activity alone, without involvement of peripheral nerves or muscles.
BCI has always been a fascinating domain for researchers. Recently, it has become a compelling area of scientific inquiry and a possible means of establishing a direct connection between the brain and technology. Many research and development projects have implemented this concept, and it has become one of the fastest-expanding fields of scientific inquiry. Many scientists have tried various communication methods between humans and computers in different BCI forms, and the field has progressed from a simple concept in the early days of digital technology to the extremely complex signal recognition, recording, and analysis techniques of today. In 1929, Hans Berger [1] became the first person to record an Electroencephalogram (EEG) [2], which shows the electrical activity of the brain measured through the scalp. He first recorded it from a boy with a brain tumor; since then, EEG signals have been used clinically to identify brain disorders. Vidal [3] made the first effort to communicate between a human and a computer using EEG in 1973, coining the phrase "Brain-Computer Interface". The author listed all of the components required to construct a functional BCI and set up an experiment room separated from the control and computer rooms. The experiment room required three screens; the subject's EEG was sent to an amplifier the size of an entire desk in the control area, which also housed two more screens and a printer.
The concept of combining brains and technology has constantly stimulated people's interest, and it has become a reality because of recent advancements in neurology and engineering, which have opened the pathway to repairing and possibly enhancing human physical and mental capacities. The sector flourishing most on the basis of BCI is the medical sector. Cochlear implants [4] for the deaf and deep brain stimulation for Parkinson's disease are examples of medical uses becoming more prevalent. Beyond these medical applications, security, lie detection, alertness monitoring, telepresence, gaming, education, art, and human enhancement are just a few uses for brain-computer interfaces (BCIs), also known as brain-machine interfaces or BMIs [5]. Every BCI application follows different approaches and methods, each with its own set of benefits and drawbacks. The degree to which performance can be enhanced while minute-to-minute and day-to-day variability is reduced is crucial for the future of BCI technology. Such advancements rely on the capacity to systematically evaluate and contrast different BCI techniques, allowing the most promising approaches to be identified. The versatility of BCI technologies across sectors can seem complex, yet most BCI applications follow a standard structure and system. This basic structure consists of signal acquisition, pre-processing, feature extraction, classification, and control of devices. Signal acquisition paves the way to connecting a brain and a computer and to gathering knowledge from signals. Pre-processing, feature extraction, and classification make the acquired signal more usable. Lastly, control of devices serves the primary motivation: to use the signals in an application, a prosthetic, etc.
The outstanding variety of methods and procedures in BCI systems demands extensive research. A few studies on specific features of BCI have also been conducted. Given all of the excellent BCI research, a comprehensive survey is now necessary. Therefore, an extensive survey analysis was attempted, focused on the nine review papers featured in this study. Most surveys, however, do not address contemporary trends and applications, nor the purpose and limits of BCI methods. An overview and comparison of the known literature reviews on BCI is shown in Table 1.
Abiri, R. et al. [6] evaluated the current literature on the various EEG-based experimental paradigms used by BCI systems. For each experimental paradigm, the researchers examined different EEG decoding algorithms and classification methods. They reviewed paradigms such as motor imagery, body kinematics, visual P300, evoked potentials, and error-related potentials, as well as hybrid paradigms, analyzed together with the classification methods and their applications. Researchers have faced some severe issues while exploring BCI paradigms, including training time and fatigue, signal processing, novel decoders, and the shift from shared control to supervisory control in closed-loop systems. Tiwari, N. et al. [7] provided a complete assessment of the evolution of BCI and a fundamental introduction to brain functioning. They offered an extensive revision of the anatomy of the human brain; BCI and its phases; the methods for extracting signals; and the algorithms for putting the extracted information to use. The authors explained the steps of BCI, consisting of signal acquisition, feature extraction, and signal classification. As the human brain is complex, human-generated thoughts are non-stationary and the generated signals are nonlinear. The challenging aspect is thus to develop a system that finds deeper insights from the human brain; BCI applications will then perform better with these insights. Vasiljevic, G.A.M. et al. [8] presented a Systematic Literature Review (SLR) of BCI games employing consumer-grade devices. The authors analyzed the collected data to provide a comprehensive picture of the current state and obstacles for the HCI of BCI-based games using consumer-grade equipment. According to their observations, numerous games with more straightforward commands were designed for research objectives, and there is a growing number of more user-friendly BCI games, particularly for recreation.
However, that study is limited to its search and classification process. Martini, M.L. et al. [9] investigated existing BCI sensory modalities to offer perspective as the technology improves. The sensor element of a BCI circuit determines the quality of brain pattern recognition, and numerous sensor modalities are presently used in system applications, generally either electrode-based or functional neuroimaging-based. Sensors differ significantly in their inherent spatial and temporal capabilities as well as in practical considerations such as invasiveness, mobility, and maintenance. Bablani, A. et al. [10] examined brain responses acquired with invasive and noninvasive techniques, including electrocorticography (ECoG), electroencephalography (EEG), magnetoencephalography (MEG), and magnetic resonance imaging (MRI). To operate any application, such responses must be interpreted using machine learning and pattern recognition technologies. A short analysis of the existing feature extraction techniques and classification algorithms applicable to brain data was presented in their study.
Fleury, M. et al. [11] described various haptic interface paradigms, including SMR, P300, and SSSEP, and approaches for designing relevant haptic systems. The researchers identified significant trends in the use of haptics in BCIs and neurofeedback (NF) and evaluated various solutions. Haptic interfaces could improve productivity and the relevance of the feedback delivered, especially in motor restoration using the SMR paradigm. Torres, E.P. et al. [12] conducted an overview of relevant research literature from 2015 to 2020, providing trends and a comparison of methods used in new implementations from a BCI perspective, with an explanation of datasets, emotion elicitation methods, feature extraction and selection, classification algorithms, and performance evaluation. Zhang, X. et al. [13] discussed the classification of noninvasive brain signals and the fundamentals of deep learning algorithms, giving an overview of brain signals and deep learning approaches to help readers understand BCI research. The prominent deep learning techniques and cutting-edge models for brain signals are presented in that paper, together with specific advice for selecting the best deep learning models. Gu, X. et al. [14] investigated the most recent research on EEG signal sensing technologies and computational intelligence methodologies in BCI systems, filling in the gaps left by the five-year systematic review (2015–2019). The authors demonstrated sophisticated signal sensing and augmentation technologies for collecting and cleaning EEG signals, and exhibited computational intelligence techniques, such as interpretable fuzzy models, transfer learning, deep learning, and their combinations, for monitoring, maintaining, or tracking human cognitive states and operational performance in typical applications.
Since we analyze BCI in detail, this literature review necessitated a compendium of scholarly studies covering 1970 to 2021. We concentrated on the empirical literature on BCI from 2000 to 2021; for historical purposes, such as the invention of BCI systems and their techniques, we selected some publications from before 2000. Kitchenham [15,16] established the Systematic Literature Review (SLR) method, which is applied in this research and comprises three phases: organizing, executing, and documenting the review. The SLR methodology attempts to address all questions that could arise as the research progresses. The purpose of the present study is to examine the findings of numerous key research areas. The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines were used to assemble the essential materials for this study in four parts: identification, screening, eligibility testing, and inclusion. We gathered 577 papers from a variety of sources and weeded out duplicates and similar articles. Finally, we carefully chose 361 articles and sources for review. The PRISMA process is presented in Figure 1.
This research also examines the present challenges and difficulties in the BCI field and generates ideas and suggestions for future research subjects. The research's contributions are as follows:
  • The paper explicitly illustrates the Brain-Computer Interface's (BCI) past, present, and future trends and technologies.
  • The paper presents a taxonomy of BCI and elaborates on a few traditional BCI systems with workflow and architectural concepts.
  • The paper investigates some BCI tools and datasets. The datasets are also classified by BCI research domain.
  • In addition, the paper demonstrates the applications of BCI, explores a few unsolved challenges, and analyzes the opportunities.
After reading this study, one should understand BCI and how to get started with it. Our motivation to work with BCI grew from a desire to learn more about this domain. Furthermore, BCI has a bright future ahead of it, as it has much to offer in the medical field and in everyday life. BCI can compensate for one's incapability and can make life and work easier, as detailed in the following section. The applications, problems, future, and social consequences of BCI have also fueled our enthusiasm for this research.
The remainder of the paper is organized as follows. The motivation of this work and diverse applications of BCI systems are illustrated in Section 2. Section 3 describes the structure of BCI and briefly reviews the most popular BCI techniques. In Section 5, the different categories of publicly available datasets are presented. In Section 7, the most widely used methods for signal enhancement and feature extraction in BCI are discussed. The most commonly known classifiers are reviewed in Section 8. A broad discussion of evaluation metrics for BCI is given in Section 9. The challenges faced most commonly during the BCI process are reviewed in Section 10. Lastly, this paper provides a conclusion in Section 11.

2. Applications of BCI

BCIs may be used for various purposes, and the application determines the design of a BCI. According to Nijholt [17], applications based on BCI have two modes of usability: one issues commands, while the other observes or monitors. The majority of command applications concentrate on translating brain impulses captured by electrodes into control of an external device. On the other hand, applications that involve observation focus on recognizing a subject's mental and emotional state in order to behave appropriately depending on their surroundings. Some applications of BCI [18] based on usability are described below:

2.1. Biomedical Applications

The majority of BCI integrations and research have been focused on medical applications, with many BCIs aiming to replace or restore Central Nervous System (CNS) functioning lost through sickness or accident. Other BCIs are more narrowly targeted: BCIs for biomedical purposes are employed in diagnostic applications, in treatment and motor rehabilitation following CNS disease or trauma, and in affective application domains. Biomedical technologies and applications can minimize extended periods of sickness, can provide supervision and protection by empowering persons with mobility difficulties, and can support their rehabilitation. A significant challenge in developing such platforms is the necessity to build accurate technology that can cope with the potentially abnormal brain responses that might occur due to diseases such as stroke [19]. The following subsections go through each of these applications in further detail.

2.1.1. Substitute to CNS

Substitution means that a BCI can repair or replace CNS functioning lost due to conditions such as paralysis and spinal cord injury caused by stroke or trauma. Because brain function is altered in such illnesses, individuals might suffer, and developing such technology can be difficult. Myoelectric control, based on motor action potentials that capture electrical impulses in muscles, is now used in several robotic prosthetics. Bousseta, R. et al. [20] presented an experimental system for controlling the movement of a robotic prosthetic arm with mental imagery and cognitive tasks, which can move in four directions: left, right, up, and down.

2.1.2. Assessment and Diagnosis

The usage of BCIs in a clinical context can also help with assessment and diagnosis. Perales [21] suggested a BCI for assessing the attention of youngsters with cerebral palsy while playing games. Another study [22] looked into using BCI to capture EEG characteristics as a tool for diagnosing schizophrenia. There are also various diagnostic applications, such as the detection of brain tumors [23], the identification of breast cancer [24], and Parkinson's disease [25]. Diagnoses of several diseases in children, including epilepsy, neurodegenerative disorders, motor disabilities, inattentiveness, and different types of ADHD [26], are possible. Assessment and diagnosis technologies are essential to patient well-being. Their functioning must be fine-tuned to guarantee that they are safe, acceptable, and accurate to industry standards.

2.1.3. Therapy or Rehabilitation

BCI is nowadays being used in therapeutic applications beyond neurological applications and prosthetics. Among the many applications, post-stroke motor rehabilitation shows promising results using BCI. Stroke is a disease that causes long-term disability and hampers all kinds of motor activity due to an impediment of blood flow. Stroke rehabilitation applications promise to aid these activities or user imaginations through a robot or other types of machinery [27,28,29]. Other applications treat neurological disorders such as Parkinson's disease (PD), cluster headaches, tinnitus, etc. Deep Brain Stimulation (DBS) is an established treatment for PD, as it delivers electrical impulses to a targeted area of the brain responsible for the symptoms [30]. Some stimulation-based BCI devices are used to induce calmness during migraine attacks and cluster headaches. A treatment for tinnitus, a CNS disorder, is also in development, based on identifying brain patterns that are changed by the disease [31]. Lastly, treatment of auditory verbal hallucinations (AVHs), a hallmark symptom of schizophrenia, is a possibility besides diagnosis [32,33].

2.1.4. Affective Computing

Affective computing BCIs observe users' emotions and state of mind, with the possibility of altering the surrounding environment to improve or change that emotion. Ehrlich, S. et al. [34] created a closed-loop system in which music is generated and replayed to listeners based on their emotional state. Human emotional states and sensory connections can be studied with devices related to BCI systems. Patients suffering from neurological diseases can also benefit from affective computing to help them convey their feelings to others [35].

2.2. Non-Biomedical Applications

BCI technologies have shown economic promise in recent years, notably in non-biomedical applications. Most of these applications consist of entertainment, games, and emotional computation. Whereas researchers focus on robustness and high efficiency in medical and military applications, innovations targeted at leisure or lifestyle demand a greater emphasis on enjoyment and social elements. The most challenging aspect of an entertainment application is that it must be a user favorite to be commercially successful. Some of the most popular forms of amusement are as follows:

2.2.1. Gaming

BCIs focused on the gaming sector have grown in importance as a research topic. However, gaming BCIs are currently a poor substitute for standard game control methods [36], and further research is needed to make BCI games more user-friendly. In some cases, EEG data make BCI games more usable and increase engagement; for example, a system can track each player's enthusiasm level and activate dynamic difficulty adjustment (DDA) when the player's excitement drops [37]. When developing such systems, fine-tuning the algorithms that regulate the game's behavior is a big challenge. Some BCI-based games are not visually intense, and their graphics are not on par with the current generation of games. Despite these setbacks, there is an engaging future for adaptations of P300-based BCIs for gaming [38], which are gaining popularity as they are very flexible to play.

2.2.2. Industry

EEG-based BCIs can also be used in industrial robotics, increasing worker safety by keeping people away from potentially demanding jobs. These technologies could substitute for the time-consuming button and joystick systems used to teach robots in industrial applications; they can detect when a person is too tired or ill to operate the machinery and can take the necessary precautions to avoid injury, such as stopping the machinery [38].

2.2.3. Artistic Application

The four types of artistic applications recognized with BCIs are passive, selective, direct, and collaborative. Passive artistic BCIs do not require active user input; they use the user's brain activity to determine which pre-programmed responses to produce. In selective systems, users have some limited control over the process but are never in charge of the creative product. Direct artistic BCIs provide users with far more flexibility, generally allowing them to choose items from extensive menus, such as brush type, and to manage brush stroke movements [39]. Lastly, collaborative systems are controlled by multiple users [40].

2.2.4. Transport

BCI is used in transport for monitoring that tracks awareness, assessing driver weariness and enhancing airline pilot performance. When such technologies are utilized in critical applications, mistakes can be costly in terms of lives and the monetary obligations of the entities involved [41,42].

3. Structure of BCI

The BCI system operates as a closed loop: every action taken by the user is met with some feedback. For example, an imagined hand movement might result in a command that causes a robotic arm to move. This simple arm movement requires many processes behind it. It starts from the brain, one of the body's largest and most complicated organs, made up of billions of nerve cells communicating across billions of synapses. The processes from acquiring signals from the human brain to transforming them into a workable command are shown in Figure 2 and described below:
  • Signal acquisition: In BCI, this is the process of sampling signals that measure brain activity so they can be turned into commands that control a virtual or real-world application. The various BCI techniques for signal acquisition are described later.
  • Pre-processing: After signal acquisition, the signals must be pre-processed. In most cases, the signals collected from the brain are noisy and impaired with artifacts. This step helps to clean this noise and these artifacts with different methods and filtering; that is why it is also called signal enhancement.
  • Feature extraction: The next stage is feature extraction, which involves analyzing the signal and extracting informative data. As the brain activity signal is complicated, it is hard to extract useful information just by inspecting it. It is thus necessary to employ processing algorithms that enable the extraction of features of the brain signal, such as the person's intent.
  • Classification: The next step is to apply classification techniques to the artifact-free signal. Classification helps determine the type of mental task the person is performing, i.e., the person's command.
  • Control of devices: The classification step sends a command to the feedback device or application. It may be a computer, where the signal is used to move a cursor, or a robotic arm, where the signal is utilized to move the arm.
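The five stages above can be sketched end to end in code. The following toy example is not from any BCI toolkit discussed in this review; it substitutes a synthetic 10 Hz "alpha" oscillation for a real acquisition device, a crude FFT band-pass for the pre-processing stage, log band power for feature extraction, and a fixed threshold for a trained classifier. All function names and the threshold are illustrative assumptions:

```python
import numpy as np

FS = 250  # assumed sampling rate in Hz

def acquire(n_seconds=2, n_channels=4, seed=0):
    """Signal acquisition: synthetic noisy data stands in for raw EEG."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_seconds * FS) / FS
    alpha = np.sin(2 * np.pi * 10 * t)  # 10 Hz "alpha" component
    return alpha + 0.5 * rng.standard_normal((n_channels, t.size))

def preprocess(x, low=8.0, high=13.0):
    """Pre-processing / signal enhancement: crude FFT band-pass filter."""
    freqs = np.fft.rfftfreq(x.shape[-1], d=1 / FS)
    spectrum = np.fft.rfft(x, axis=-1)
    spectrum[..., (freqs < low) | (freqs > high)] = 0  # zero out-of-band bins
    return np.fft.irfft(spectrum, n=x.shape[-1], axis=-1)

def extract_features(x):
    """Feature extraction: log band power per channel."""
    return np.log(np.mean(x ** 2, axis=-1))

def classify(features, threshold=-1.0):
    """Classification: a stand-in threshold rule instead of a trained model."""
    return "move" if features.mean() > threshold else "rest"

# Control of devices: the resulting command would drive a cursor or robot arm.
command = classify(extract_features(preprocess(acquire())))
print(command)
```

The point of the sketch is the data flow, not the algorithms: in a real system, each stage would be replaced by calibrated hardware, proper filtering, richer features, and a trained classifier.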
The preceding part explained the basic architecture of the BCI system, which prompts us to investigate the classification of BCI systems. BCI systems are classified based upon various criteria; the BCI techniques are discussed in the following parts.
As shown in Figure 3, BCI can be classified from different aspects such as dependability, invasiveness, and autonomy.
  • Dependability: BCI can be classified as dependent or independent. Dependent BCIs necessitate certain types of motor control from the operator, such as gaze control, and thus suit healthy subjects. Independent BCIs, on the other hand, do not require the individual to exert any form of motor control; this type of BCI is appropriate for stroke patients or seriously disabled patients.
  • Invasiveness: BCI is also classified into three types according to invasiveness: invasive, partially invasive, and non-invasive. Invasive BCIs are by far the most accurate, as they are implanted directly into the cortex, allowing researchers to monitor the activity of every neuron. Invasive varieties of BCI are inserted directly into the brain through neurosurgery. There are two types of invasive BCIs: single-unit BCIs, which detect signals from a single site of brain cells, and multi-unit BCIs, which detect signals from several areas. Semi-invasive BCIs use Electrocorticography (ECoG), a signal platform in which electrodes are placed on the exposed surface of the brain to detect electrical impulses originating from the cerebral cortex. Although this procedure is less intrusive, it still necessitates a surgical opening in the skull. Noninvasive BCIs use external sensing rather than brain implants. Electroencephalography (EEG), Magnetoencephalography (MEG), Positron emission tomography (PET), Functional magnetic resonance imaging (fMRI), and Functional near-infrared spectroscopy (fNIRS) are all noninvasive techniques used to analyze the brain. However, because of the low cost and portability of the gear, EEG is the most commonly used.
  • Autonomy: BCI can operate either in a synchronous or an asynchronous manner; time-dependent or time-independent interaction between the user and system is possible. The system is known as synchronous BCI if the interaction is carried out within a particular window of time in response to a cue supplied by the system. In asynchronous BCI, the subject can initiate a mental task at any time to engage with the system. Synchronous BCIs are less user-friendly than asynchronous BCIs; however, designing one is substantially easier than developing an asynchronous BCI.
As the motive of this research is to focus on advancements in BCI, the most advanced and most widely used techniques, classified by invasiveness, are described in the following part. Based on invasiveness, BCI is classified into three familiar categories, which we address and describe elaborately in the subsequent sections.

3.1. Invasive

Invasive types of BCI are inserted directly into the brain through neurosurgery. Invasive BCIs are the most accurate because they are implanted directly into the cortex, which allows tracking every neuron's activity. Invasive BCI comes in two units: single-unit BCIs, which detect signals from a single location of brain cells, and multi-unit BCIs, which detect signals from numerous areas [43]. However, the neurosurgical procedure has various flaws, such as the possibility of scar tissue formation: the body responds to the foreign object by forming a scar around the electrodes, leading the signal to deteriorate. Since neurosurgery is a dangerous and costly procedure, invasive BCI is mainly used on blind and paralyzed patients.

3.2. Partially Invasive

Although this approach is less intrusive, it still involves brain surgery. Electrocorticography (ECoG) is a partially invasive BCI monitoring method in which electrodes placed on the cortical surface of the brain produce signals reflecting electrical activity. For example, blinking causes your brain to discharge electrical activity; when investigating signals, though, such involuntary actions are generally not of interest, since they get in the way of what we search for: they are a form of noise. ECoG signals are less affected by noise than non-invasive BCI signals, making interpretation easier [44].

Electrocorticography (ECoG)

Electrocorticography (ECoG) [45] is a partially invasive method that measures the brain's electrical activity. A portion of the participant's skull must be removed, and the electrodes are placed directly on the brain's surface, beneath the skull. The spatial resolution of the recorded signals is considerably better than EEG, and the signal-to-noise ratio is superior owing to the closer proximity to cerebral activity. Furthermore, motion artifacts such as blinks and eye movements have a significantly lower impact on ECoG signals. However, ECoG is only helpful over the accessible brain area and is close to impossible to utilize outside of a surgical setting [46].

3.3. Noninvasive

Noninvasive neuroimaging technologies have also been used as interfaces in human research; noninvasive EEG-based BCIs account for the vast bulk of published BCI research. EEG-based noninvasive technologies and interfaces have been employed in a considerably more comprehensive range of applications. Noninvasive applications and technologies have become increasingly popular in recent years, since they do not require any brain surgery. In the noninvasive mode, a headpiece or helmet-like set of electrodes is placed outside the skull to measure the signals caused by electrical activity in the brain. There are some well-known and widely used methods for measuring these electrical activities or potentials, such as Electroencephalography (EEG), Magnetoencephalography (MEG), Functional Magnetic Resonance Imaging (fMRI), Functional Near-Infrared Spectroscopy (fNIRS), and Positron Emission Tomography (PET). An elaborate description of these BCI techniques is given below:

3.3.1. Electroencephalography (EEG)

EEG monitors the electrical activity at the scalp generated by the activation of the brain's neurons. Several electrodes placed directly on the scalp, mainly over the cortex, are often used to record these electrical activities. For its excellent temporal resolution, ease of use, safety, and affordability, EEG is the most used technology for capturing brain activity. Active electrodes and passive electrodes are the two types of electrodes that can be utilized. Active electrodes usually feature an integrated amplifier, whereas passive electrodes require an external amplifier to magnify the detected signals. The prime objective of using either embedded or external amplifiers is to lessen the impact of background noise and other signal weaknesses caused by cable movement. One of the issues with EEG is that it necessitates the use of gel or saline solutions to lower the resistance of the skin-electrode contact. Furthermore, the signal quality is poor and is altered by background noise. The International 10–20 system [47] is often used to position electrodes over the scalp surface for recording purposes. EEG is generally described in terms of electrical activity across various frequency bands.
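As an illustration of how such frequency-band descriptions are computed in practice, the following sketch estimates power in the canonical delta, theta, alpha, and beta bands from a synthetic one-channel trace. It uses plain NumPy; the band edges and the 256 Hz sampling rate are common conventions chosen as assumptions here, not values taken from this paper:

```python
import numpy as np

FS = 256  # assumed sampling rate in Hz
# Canonical EEG band edges in Hz (conventions vary slightly across the literature).
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg, fs=FS):
    """Return the power in each canonical EEG band via a simple periodogram."""
    freqs = np.fft.rfftfreq(eeg.size, d=1 / fs)
    psd = np.abs(np.fft.rfft(eeg)) ** 2 / (fs * eeg.size)
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

# Synthetic one-channel trace dominated by a 10 Hz (alpha-band) oscillation.
t = np.arange(4 * FS) / FS
trace = (np.sin(2 * np.pi * 10 * t)
         + 0.1 * np.random.default_rng(1).standard_normal(t.size))
powers = band_powers(trace)
# The alpha band should carry the most power for this trace.
```

A production pipeline would typically use an averaged estimator such as Welch's method rather than a raw periodogram, but the band-splitting logic is the same.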

3.3.2. Magnetoencephalography (MEG)

MEG (Magnetoencephalography) measures the magnetic fields created by current flow in the brain. Magnetic fields travel through the skull with far less distortion than electric fields, so MEG offers better spatial resolution than EEG. It is a functional neuroimaging technique applied to measure and evaluate the brain's magnetic fields. MEG operates outside the head and is now routinely part of clinical care. David Cohen [48,49] first demonstrated it in 1968, using a copper induction-coil detector inside a shielded room to reduce background noise. Improved MEG signals have since been obtained using more sensitive sensors, such as superconducting quantum interference devices (SQUIDs) [50]. MEG has become significant, especially for patients with epilepsy and brain tumors, as it may aid in delineating regions of the brain with normal function in individuals with epilepsy, tumors, or other mass lesions. Because MEG captures magnetic rather than electrical activity, it can contribute information complementary to EEG, and it can record signals with both high temporal and high spatial resolution. However, the magnetic fields produced by cerebral activity are tiny, so the scanners must be close to the brain's surface; as a result, specialized sensors such as SQUIDs are required for MEG [51].

3.3.3. Functional Magnetic Resonance Imaging (fMRI)

Noninvasive functional magnetic resonance imaging (fMRI) evaluates the fluctuation in blood oxygen levels during brain activity. fMRI has excellent spatial resolution, which makes it ideal for identifying the active areas of the brain [52]. Its temporal resolution is comparatively low, ranging from 1 to 2 s [53]. It is also sensitive to head movements, which can introduce artifacts. Developed in the 1990s, fMRI is a noninvasive and safe technology that does not involve radiation and is simple to use. Hemoglobin in capillary red blood cells in the brain transports oxygen to the neurons; as demand for oxygen increases, blood flow increases, and the magnetic properties of hemoglobin vary with its oxygenation. The MRI scanner, a cylindrical tube containing a strong electromagnet, can determine which regions of the brain are activated because of this difference; that is how fMRI works. There is also a related modality known as diffusion MRI, which generates images based on the diffusion of water molecules. Diffusion-weighted and diffusion tensor imaging (DWI/DTI) facilitate the exploration of the microarchitecture of the brain. Diffusion-weighted magnetic resonance imaging (DWI or DW-MRI) renders image contrast depending on differences in the degree of diffusion of water molecules inside the brain. Diffusion describes the stochastic thermal mobility of particles and is governed by several factors, including the particles under study, the temperature, and the microenvironmental structure in which the diffusion occurs [54]. Diffusion tensor imaging (DTI) investigates the three-dimensional profile of the diffusion, also known as the diffusion tensor. It is a powerful MRI modality that produces directional information about water motion within a voxel, noninvasively revealing microscopic tissue features in a way that surpasses other imaging methods [55].

3.3.4. Functional Near-Infrared Spectroscopy (fNIRS)

fNIRS equipment [53,56] projects near-infrared light into the brain and monitors changes at specific wavelengths as the light is reflected. fNIRS typically detects changes in regional blood volume and oxygenation. When a particular area of the brain is working, it requires additional oxygen, which is delivered to the neurons via capillary red blood cells, increasing blood flow in the brain areas that are most active at a given time. fNIRS monitors the variations in oxygenation caused by such activity. As a result, images with a relatively high spatial resolution (about 1 cm) but lower temporal resolution (>2–5 s) can be obtained, comparable with standard functional magnetic resonance imaging.

3.3.5. Positron Emission Tomography (PET)

PET (positron emission tomography) is a sophisticated imaging tool for examining brain activity in real time. It enables noninvasive measurement of cerebral blood flow, metabolism, and receptor binding in the brain. Due to the relatively high prices and complexity of the accompanying infrastructure, including cyclotrons, PET scanners, and radiochemistry laboratories, PET was previously only used in research. In recent years, owing to technological improvements and the proliferation of PET scanners, PET has been widely employed in clinical neurology to improve our understanding of disease etiology, to aid diagnosis, and to monitor disease progression and response to therapy [57]. PET tracers such as radiolabeled choline, fluciclovine (18F-FACBC), and compounds targeting prostate-specific membrane antigen are now being researched to improve noninvasive prostate cancer localization diagnostic performance [58].

4. Brain Control Signals

The brain-computer interface (BCI) is based on amplifying signals that come directly from the brain. Several of these signals are simple to extract, while others are more difficult and require additional preprocessing [53]. These control signals can be classified into one of three groups: (1) evoked signals, (2) spontaneous signals, and (3) hybrid signals. A detailed overview of the three categories is given below. The control signals classification is shown in Figure 4.

4.1. Visual Evoked Potentials

Electrical potentials evoked by short visual stimuli are known as VEPs. The potentials over the visual cortex are monitored, and the waveforms are derived from the EEG. VEPs are generally used to assess the visual pathways from the eye to the brain's visual cortex. Middendorf et al. published a procedure for measuring the position of the user's gaze using VEPs in 2000 [59]. The user faces a screen that displays several virtual buttons flashing at varied rates. After the user focuses their gaze on a button, the frequency of the photic driving response over the user's visual cortex is determined. Whenever the frequency of a displayed button matches the frequency measured over the user's visual cortex, the system concludes that the user wants to select it. Steady-State Evoked Potentials (SSEP) and P300 are two of the best-known evoked signals. Evoked signals require external stimulation, which can be unpleasant, awkward, and exhausting for the individual.

4.1.1. Steady-State Evoked Potential (SSEP)

SSEP signals are produced when a patient experiences a periodic stimulus such as a flickering picture, modulated sound, or even vibration [60,61]. The power of the EEG signal increases at the stimulus frequency, and the signals are observed in the brain regions corresponding to the sensory process. SSEP signals take different forms, such as steady-state visual evoked potentials (SSVEPs), somatosensory SSEPs, and auditory SSEPs. SSVEP is widely used in a variety of applications. These are normal brain responses to repetitive stimuli, which vary with the frequency at which the stimuli are presented. Although there are instances of BCI paradigms utilizing somatosensory (SSSEP) or auditory (SSAEP) stimuli, they are generally induced using visual stimuli (steady-state visually evoked potentials, SSVEP) [62].
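As a minimal illustration of the SSVEP principle described above — deciding which flicker frequency a user attends by comparing EEG power at the candidate frequencies — the following sketch uses a synthetic signal; the sampling rate, epoch length, and candidate frequencies are illustrative assumptions, not values from any particular study:

```python
import numpy as np

fs = 250                      # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)   # 4 s synthetic EEG epoch

# Simulate an SSVEP response: the user attends a 15 Hz flicker,
# so a 15 Hz component rides on top of background noise.
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 15 * t) + 0.5 * rng.standard_normal(t.size)

candidate_freqs = [10, 12, 15, 17]  # flicker rates of on-screen targets

spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(eeg.size, 1 / fs)

# Pick the candidate frequency with the most spectral power.
powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in candidate_freqs]
detected = candidate_freqs[int(np.argmax(powers))]
```

Practical SSVEP decoders refine this idea (e.g., by also scoring harmonics or using canonical correlation analysis), but the power comparison above is the core of frequency-tagged selection.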

4.1.2. P300 Evoked Potentials (P300)

The peaks in an EEG generated by infrequent visual, auditory, or somatosensory stimuli are known as P300 evoked potentials. P300-based BCI systems can be used without training. A matrix of symbols, in which selection depends on the participant's gaze, is a prominent application of P300-based BCI systems. Such a signal is typically produced using an "odd-ball" paradigm, in which the user is asked to respond to a random succession of stimuli, one of which occurs less frequently than the others [63]. The P300 EEG waves are triggered when this rare stimulus is significant to the person. P300 does not require any subject training; however, it does need repetitive stimulation, which may tire the subject and may cause inconsistencies.

4.2. Spontaneous Signals

Spontaneous signals are produced voluntarily by the person without any external cues or stimuli (somatosensory, auditory, or visual). Motor and sensorimotor rhythms, Slow Cortical Potentials (SCPs), and non-motor cognitive signals are some of the most prominent spontaneous signals [53].

4.2.1. Motor and Sensorimotor Rhythms

Motor and sensorimotor rhythms are linked to motor activities. Sensorimotor rhythms are rhythmic oscillations in electrophysiological brain activity in the mu (Rolandic band, 7–13 Hz) and beta (13–30 Hz) frequency bands. Motor imagery is the process of converting a participant's motor intentions into control signals under motor imagery conditions [64]. For instance, imagined left-hand movement may produce a decrease in the mu (8–12 Hz) and beta (18–26 Hz) rhythms of the EEG over certain motor cortex areas. Depending on the motor imagery rhythms, various applications can be driven, such as controlling a mouse cursor or playing a game.

4.2.2. Slow Cortical Potentials (SCP)

SCP is an EEG signal with a frequency below 1 Hz [65]. It is a low-frequency potential observed in the frontal and central portions of the cortex, reflecting depolarization-level variations in the cortical dendrites. SCP is a very gradual change in brain activity, either positive or negative, that can last from milliseconds to several seconds. Through operant conditioning, the subject can learn to control such signals. As a result, extensive training may be required, beyond that needed for motor rhythms. Many studies no longer choose SCP; motor and sensorimotor rhythms have taken its place.

4.2.3. Non-Motor Cognitive Tasks

Cognitive objectives are utilized to drive the BCI in non-motor cognitive tasks. Several tasks, such as musical imagination, visual counting, mental rotation, and mathematical computation, may be performed [66]. Penny, W.D. et al. [67] used a pattern classifier with uncertain parameters; in one of their non-motor cognitive tasks, the individual performed simple subtraction.

4.3. Hybrid Signals

The term “hybrid signals” refers to the utilization of a mixture of brain-generated signals for control. As a result, instead of measuring and using only one signal in the BCI system, a mix of signals is used. The fundamental goal of using two or more types of brain signals as input to a BCI system is to increase dependability while avoiding the drawbacks of each signal type [68].
Some research classifies brain signals into two categories [10]: event-related potentials and evoked brain potentials. Evoked brain potentials are organized into three varieties: Visual Evoked Potential (VEP), Tactile Evoked Potential (TEP), and Auditory Evoked Potential (AEP) [69].

5. Dataset

While analyzing the literature on BCI systems, we identified various frequently used datasets that researchers employed when implementing these techniques. EEG is currently the most common method for collecting brain data in BCI research, and most datasets use EEG signals because the method is noninvasive and convenient to handle. However, EEG does not by itself provide a complete data-collection pipeline; several fixed elements are needed to acquire the data. First, subjects, participants, or patients must be available from whom the signal is acquired and stored; an arrangement prepared for a single subject is often unsuitable when data must be obtained from multiple subjects. After the subjects are prepared, electrodes (gear mounted on the scalp) are attached to capture and measure the data. Data collection typically lasts several sessions, with the recording period determined by the purpose of the work. The data saved across these sessions and recordings are primarily brain signals measured in response to a stimulus, such as a video or a picture. EEG signals differ from one participant to the next and from one session to the next. In this section, the datasets, as well as the subjects, electrodes, channels, and sessions, are described. The details are tabulated in Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8. In Table 2, some popular motor imagery datasets are listed. Motor imagery (MI) signals captured via EEG are among the most practical options for building BCIs, offering a great degree of flexibility: they enable people with motor disabilities to communicate with a device by imagining motor movements, generated from the motor cortex without any external stimulus. A few datasets based on error-related potentials (ErrPs) are exhibited in Table 3; these EEG datasets utilize a P300-based BCI speller to boost the performance of BCIs. Error-related potentials (ErrPs) are the neural signature of a user's awareness of an error, a brain pattern that can be detected and used to fix mistakes. Affective computing improves human–machine communication by identifying human emotions. Some widely used emotion-recognition datasets are shown in Table 4. Various EEG-based BCI devices can detect the user's emotional state to make interaction effortless, more usable, and practical. The emotions extracted in emotion-recognition datasets include valence, arousal, calm, positive, exciting, happy, sad, neutral, and fear. In addition, brain signals and memory reflect mixed emotions gathered from different parts of the body; datasets of this kind are grouped as miscellaneous and include memory signals, brain images, brain signals, etc. Some miscellaneous datasets are represented in Table 5. In EEG-based BCI, the signals can also capture eye movement, such as eye blinks and eye states. BCI datasets covering voluntary and involuntary eye states, blinks, and related activities are illustrated in Table 6. Subsequently, the electrical response of the brain to a specific motor or cognitive event, such as a stimulus, is known as an event-related potential (ERP); an unexpected sound, a flashing light, or an eye blink can serve as a stimulus. BCIs utilizing ERPs attempt to track attention, fatigue, and the brain's reaction to such event-related stimuli. Table 7 lists popular ERP datasets. Moreover, the visual information-processing mechanism of the brain is reflected in Visually Evoked Potentials (VEPs). Flashing objects, in the form of shifting colors or a reversing grid, are frequent visual stimulators; a CRT/LCD monitor or a flash tube/infrared diode (LED) is utilized for stimulus display in VEP-based BCIs. Frequently used VEP-based datasets employing these stimuli are represented in Table 8.
These datasets cover information recorded since the beginning of BCI research. To extract information from them, feature extraction methods are necessary, which are reviewed in the following sections.

6. Signal Preprocessing and Signal Enhancement

In most situations, the signal or data measured or extracted from datasets are contaminated with noise. Natural human activity, such as eye blinks and heartbeats, can make the collected data noisy. These noises are eliminated during the pre-processing step to produce clean data that can subsequently undergo feature extraction and classification. This pre-processing unit is also known as signal enhancement, since it cleans the signal in BCI. Several methods are used for signal enhancement in BCI systems, and these are explained in the following subsections.

6.1. Independent Component Analysis (ICA)

In ICA, the noise and the EEG signals are isolated by treating them as statistically independent entities, and the underlying data are retained while the noise is removed. This method divides the EEG data into spatially fixed and temporally independent components. ICA is computationally efficient and demonstrably effective at separating noise [256].
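To make the idea concrete, here is a deliberately minimal FastICA-style separation written with NumPy only (whitening followed by a tanh fixed-point iteration with deflation). The two synthetic "sources" and the mixing matrix are purely illustrative; real EEG pipelines typically rely on a mature implementation such as those in scikit-learn or MNE:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
t = np.linspace(0, 8, n)

# Two independent sources: a slow oscillation ("EEG") and a spiky
# square wave ("artifact"); both labels are illustrative only.
s1 = np.sin(2 * np.pi * 2 * t)
s2 = np.sign(np.sin(2 * np.pi * 0.5 * t + 1.0))
S = np.c_[s1, s2].T                      # sources, shape (2, n)

A = np.array([[1.0, 0.6], [0.4, 1.0]])   # unknown mixing matrix
X = A @ S                                # observed channels

# Center and whiten the observations.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = (E @ np.diag(d ** -0.5) @ E.T) @ X

# FastICA with a tanh nonlinearity (deflation scheme).
W = np.zeros((2, 2))
for i in range(2):
    w = rng.standard_normal(2)
    for _ in range(200):
        g = np.tanh(Z.T @ w)                           # g(w'z)
        w_new = (Z * g).mean(axis=1) - (1 - g ** 2).mean() * w
        w_new -= W[:i].T @ (W[:i] @ w_new)             # deflation
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1) < 1e-8:
            w = w_new
            break
        w = w_new
    W[i] = w

recovered = W @ Z   # estimated components (up to sign and order)
```

The recovered components match the original sources up to permutation and sign, which is the inherent ambiguity of ICA; in an EEG pipeline, the component resembling the artifact would be zeroed out before mixing back.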

6.2. Common Average Reference (CAR)

CAR is most commonly employed as a simple spatial-filtering technique. This approach decreases the noise shared across all recorded channels, but it does not address channel-specific noise and may inject noise into an otherwise clean channel. It is a spatial filter that can be thought of as the subtraction of the EEG activity common to all electrodes, retaining only the activity particular to each individual electrode [256].
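The CAR computation itself is a one-liner: subtract the instantaneous mean of all channels from every channel. A NumPy sketch on synthetic data follows; the channel count, sampling rate, and the "mains hum" interference are assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 8, 500

# Synthetic recording: every electrode picks up the same common-mode
# interference (e.g., mains hum) plus its own local activity.
common = np.sin(2 * np.pi * 50 * np.arange(n_samples) / 250)
local = 0.1 * rng.standard_normal((n_channels, n_samples))
eeg = local + common            # common broadcasts over all channels

# Common average reference: subtract the instantaneous mean of all
# channels from each channel.
car = eeg - eeg.mean(axis=0, keepdims=True)
```

After re-referencing, the shared interference is gone, but note the caveat from the text: each channel now also carries a small share of every other channel's local activity.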

6.3. Adaptive Filters

The adaptive filter is a computational device that relates its input and output signals iteratively. Its filter coefficients are self-adjusted by an adaptive algorithm, so the filter works by altering its properties according to the characteristics of the signals under investigation [257].
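A common adaptive-filtering arrangement for artifact removal is least-mean-squares (LMS) noise cancellation with a reference channel. The sketch below is a minimal NumPy version; the filter order, step size, and the synthetic "reference artifact" channel are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
t = np.arange(n) / 250.0

signal = np.sin(2 * np.pi * 1.0 * t)          # slow "brain" rhythm
noise_ref = rng.standard_normal(n)            # reference artifact channel
# The recording sees a filtered version of the reference noise.
noise_in_eeg = 0.8 * noise_ref + 0.3 * np.roll(noise_ref, 1)
recorded = signal + noise_in_eeg

# LMS adaptive filter: learn weights w so that w applied to the
# reference reproduces the noise leaking into the recording.
order, mu = 4, 0.01
w = np.zeros(order)
cleaned = np.zeros(n)
for i in range(order, n):
    x = noise_ref[i - order + 1:i + 1][::-1]  # newest sample first
    y = w @ x                                 # noise estimate
    e = recorded[i] - y                       # error = cleaned sample
    w += 2 * mu * e * x                       # stochastic-gradient update
    cleaned[i] = e
```

Because the weights update on every sample, the filter tracks slowly changing artifact characteristics, which is exactly the property the text attributes to adaptive filters.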

6.4. Principal Component Analysis (PCA)

PCA is a technique for detecting patterns in data, represented by a rotation of the coordinate axes. These axes are not aligned with single time points; rather, they depict signal patterns as linear combinations of sets of time points. PCA keeps the axes orthogonal while rotating them to maximize the variance along the first axis. It reduces the feature dimensionality and aids data classification by ranking the components. In comparison with ICA, PCA compresses the data better, whether or not noise is removed along with it [258].
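The rotation-and-ranking view of PCA can be sketched directly with an SVD of the mean-centered data. In this hypothetical example, each row is one EEG trial and most of the variance lies along a single shared temporal pattern; the trial counts and noise level are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_timepoints = 200, 64

# Synthetic feature matrix: trials share one temporal pattern with
# random per-trial amplitude, plus a small amount of noise.
pattern = np.sin(np.linspace(0, np.pi, n_timepoints))
X = np.outer(rng.standard_normal(n_trials), pattern)
X += 0.1 * rng.standard_normal((n_trials, n_timepoints))

# PCA via SVD of the mean-centered data: rows of Vt are the new
# orthogonal axes, ordered by explained variance.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)

scores = Xc @ Vt[:2].T   # each trial reduced to two coordinates
```

Here a 64-dimensional trial is compressed to two coordinates while retaining almost all of the variance, which is the dimensionality-reduction role PCA plays ahead of a BCI classifier.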

6.5. Surface Laplacian (SL)

SL refers to a method of displaying EEG data with high spatial resolution. SL estimates are reference-free, so they can be generated from any EEG recording reference scheme. Based on the outer shape of the volume conductor, it provides a general estimate of the current density entering or exiting the scalp through the skull, without requiring volume-conduction details. The advantage of SL is that it improves the spatial resolution of the EEG signal and demands no additional assumptions about functional neuroanatomy; however, it is sensitive to spline patterns and artifacts [259].

6.6. Signal De-Noising

Artifacts frequently corrupt EEG signals taken from the brain. These artifacts must be removed from the EEG data to obtain the useful information within it. The technique of eliminating noise or artifacts from EEG signals is known as de-noising [260]. Some de-noising methods are given below:
  • Wavelet de-noising and thresholding: Multi-resolution analysis is used to transfer the EEG signal into the discrete wavelet domain, and a fixed or adaptive threshold level is used to suppress the coefficients associated with the noise [261]. In a well-matched wavelet representation, the smaller coefficients tend to capture the noise characteristics across time and scale; consequently, threshold selection is one of the most critical aspects of successful wavelet de-noising. Thresholding isolates the signal from the noise, and thresholding approaches come in several forms. In hard thresholding, all coefficients below a predetermined threshold value are set to zero. In soft thresholding, the remaining coefficients are additionally shrunk toward zero by the threshold value [262].
  • Empirical mode decomposition (EMD): EMD is a signal-analysis algorithm for multivariate signals. It breaks the signal down into a series of frequency- and amplitude-modulated zero-mean signals, widely known as intrinsic mode functions (IMFs). EMD is often compared with wavelet decomposition, since both decompose a signal into a number of components; EMD extracts its IMFs using a sifting process. An IMF is a function with a single extremum between zero crossings and a mean value of zero. After the IMFs are extracted, a residue remains; together, the IMFs are sufficient to characterize the signal [263].
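The wavelet de-noising scheme described above can be sketched compactly. This NumPy-only example uses a single-level Haar transform and the universal soft threshold of Donoho and Johnstone; libraries such as PyWavelets provide multi-level, multi-wavelet versions, and the signal and noise level here are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1024
t = np.linspace(0, 1, n)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * rng.standard_normal(n)

# Single-level Haar DWT: pairwise sums approximate the signal,
# pairwise differences capture (mostly) the noise.
approx = (noisy[0::2] + noisy[1::2]) / np.sqrt(2)
detail = (noisy[0::2] - noisy[1::2]) / np.sqrt(2)

# Universal threshold, with the noise level estimated from the
# median absolute detail coefficient.
sigma = np.median(np.abs(detail)) / 0.6745
thr = sigma * np.sqrt(2 * np.log(n))

# Soft thresholding: zero small coefficients, shrink the rest
# toward zero by the threshold value.
detail_t = np.sign(detail) * np.maximum(np.abs(detail) - thr, 0.0)

# Inverse Haar transform with the thresholded details.
denoised = np.empty(n)
denoised[0::2] = (approx + detail_t) / np.sqrt(2)
denoised[1::2] = (approx - detail_t) / np.sqrt(2)
```

Replacing the shrinkage line with a simple zero/keep decision turns this into hard thresholding; in practice several decomposition levels and a smoother mother wavelet (e.g., db4) are used.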
Most of the datasets mentioned in the previous section belong to various BCI paradigms and rely on these signal enhancement techniques as well. The motor imagery datasets represent paradigms such as sensorimotor activity or rhythms; likewise, the error-related potential, event-related potential, and visually evoked potential datasets signify their own BCI paradigms. Other paradigms, such as overt attention, eye movement, miscellaneous, and emotion recognition, identify their own datasets, and the number of paradigms keeps growing as different brain activities and emotions are measured. A survey of more than 100 BCI designs that apply signal enhancement before feature extraction [264] shows that 32% of BCI designs use the surface Laplacian (SL), 22% use principal component analysis (PCA) or independent component analysis (ICA), and 14% and 11% use common spatial patterns (CSP) and common average referencing (CAR), respectively.

7. Feature Extraction

To select the most appropriate classifier for a BCI system, it is necessary to understand what the features represent, their qualities, and how to use them. A classification system's accuracy and efficiency are primarily determined by the features of the samples to be categorized [265]; therefore, feature extraction is a crucial stage in BCI. The majority of noninvasive BCI devices use neuroimaging techniques such as MEG and MRI; however, EEG is the most widely utilized method, owing to its high temporal resolution and low cost [266]. The EEG signal feature extraction method is one of the essential components of a BCI system because of its role in successfully executing the classification stage that discriminates mental states. Feature extraction methods based on both EEG and ECoG are discussed in the subsequent sections.

7.1. EEG-Based Feature Extraction

Typically, BCI focuses on identifying acquired events using various neuroimaging techniques, the most common of which is electroencephalography (EEG). The EEG feature extraction method is central to successfully executing the classification stage that discriminates mental states. According to [267], EEG features fall into three types, discussed in detail below: the time domain, the frequency domain, and the time–frequency domain. The following subsections address these feature domains elaborately.

7.1.1. Time Domain

The time–frequency domain integrates analyses in the time and frequency domains, depicting the signal energy distribution in the time–frequency (t–f) plane [268]; such analyses are handy for deciphering rhythmic information in EEG data and are treated separately below. Pure time-domain properties of the EEG are straightforward to compute, but they have the disadvantage that the signals are non-stationary and alter over time. In time-domain approaches, features are usually derived from signal amplitude values, which can be distorted by noise interference during EEG recording.
  • Event-related potentials: Event-related potentials (ERPs) are very small voltages generated in brain regions in reaction to specific events or stimuli. They are time-locked EEG alterations that provide a safe and noninvasive way to study the psychophysiological aspects of mental activities. A wide range of sensory, cognitive, or motor stimuli can trigger event-related potentials [269,270]. ERPs are useful for measuring the time needed to process a stimulus and produce a response. Changoluisa, V. et al. [271] used ERPs to build an adaptive strategy for identifying and detecting variable ERPs, continuously monitoring the curve of ERP components to account for their temporal and spatial information. A limitation of ERPs is their poor spatial resolution, despite their remarkable temporal resolution [272]; a further significant drawback is the difficulty of determining where in the brain the electrical activity originates.
  • Statistical features: Several statistical characteristics have been employed by scholars [273,274,275] in their research:
    Mean absolute value:
    $MAV = \frac{1}{N}\sum_{n=1}^{N}\left|x(n)\right|$
    Power:
    $P = \frac{1}{N}\sum_{n=1}^{N}x(n)^{2}$
    Standard deviation:
    $SD = \sqrt{\frac{1}{N}\sum_{n=1}^{N}\left(x(n)-\mu\right)^{2}}$
    Root mean square (RMS):
    $RMS = \left(\frac{1}{N}\sum_{i=1}^{N}x_{i}^{2}\right)^{1/2}$
    Square root of amplitude (SRA):
    $SRA = \left(\frac{1}{N}\sum_{i=1}^{N}\sqrt{\left|x_{i}\right|}\right)^{2}$
    Skewness value (SV):
    $SV = \frac{1}{N}\sum_{i=1}^{N}\left(\frac{x_{i}-\bar{x}}{\sigma}\right)^{3}$
    Kurtosis value (KV):
    $KV = \frac{1}{N}\sum_{i=1}^{N}\left(\frac{x_{i}-\bar{x}}{\sigma}\right)^{4}$
    where $x(n)$ is the pre-processed EEG signal with $N$ samples, $\mu$ (equivalently $\bar{x}$) denotes the mean of the samples, and $\sigma$ is the standard deviation. Statistical features are useful at low computational cost.
  • Hjorth features: Bo Hjorth introduced the Hjorth parameters in 1970 [276]; the three statistical parameters employed in time-domain signal processing are activity, mobility, and complexity. Dagdevir, E. et al. [277] proposed a motor imagery-based BCI system in which features were extracted from the dataset using the Hjorth algorithm. The Hjorth features are advantageous for real-time analysis owing to their low computational cost; however, they introduce a statistical bias in the calculation of signal parameters.
  • Phase lag index (PLI): Functional connectivity is determined by calculating the PLI between pairs of channels. Since it depicts the actual interaction between sources, this index can help estimate phase synchronization in EEG time series. PLI measures the asymmetry of the distribution of phase differences between two signals. Its advantage is that it is less affected by phase delays: it quantifies only the nonzero phase lag between the time series of two sources, making it less vulnerable to volume-conducted signals. The effectiveness of functional connectivity features evaluated by the phase lag index (PLI), weighted phase lag index (wPLI), and phase-locking value (PLV) for MI classification was studied by Feng, L.Z. et al. [278].
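The statistical and Hjorth features above translate directly into code. The following sketch computes them for a single synthetic EEG epoch (the epoch itself and the use of population statistics, i.e., dividing by N, are illustrative choices):

```python
import numpy as np

def statistical_features(x):
    """Time-domain statistics of one EEG epoch (1-D array)."""
    mu, sigma = x.mean(), x.std()
    z = (x - mu) / sigma                       # standardized samples
    return {
        "MAV": np.mean(np.abs(x)),             # mean absolute value
        "power": np.mean(x ** 2),
        "SD": sigma,
        "RMS": np.sqrt(np.mean(x ** 2)),
        "SRA": np.mean(np.sqrt(np.abs(x))) ** 2,
        "skewness": np.mean(z ** 3),
        "kurtosis": np.mean(z ** 4),
    }

def hjorth_parameters(x):
    """Hjorth activity, mobility, and complexity."""
    dx = np.diff(x)                            # first derivative
    ddx = np.diff(dx)                          # second derivative
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / activity)
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

rng = np.random.default_rng(5)
epoch = (np.sin(2 * np.pi * 10 * np.arange(500) / 250)
         + 0.1 * rng.standard_normal(500))
feats = statistical_features(epoch)
act, mob, comp = hjorth_parameters(epoch)
```

Note that power is simply the square of RMS, so in practice one of the two is usually dropped; in a BCI pipeline these scalars (per channel, per epoch) form the feature vector fed to the classifier.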

7.1.2. Frequency Domain

When a signal is analyzed in terms of frequency instead of time, its frequency-domain properties are considered. The frequency-domain representation of a signal displays how much of it falls within a specific frequency range. The frequency-domain properties are commonly acquired using the power spectral density (PSD). These properties are discussed below.
  • Fast Fourier transform (FFT): The Fourier transform is a mathematical transformation that converts a time-domain signal into the frequency domain. The Discrete Fourier Transform (DFT) [279], the Short-Time Fourier Transform (STFT) [280,281], and the Fast Fourier Transform (FFT) [282] are the most common Fourier transforms utilized for EEG-based emotion identification. Djamal, E.C. et al. [283] developed a wireless device that records a player's brain activity and extracts each action using the Fast Fourier Transform. FFT is faster than the alternatives, allowing it to be employed in real-time applications, and it is a valuable instrument for stationary signal processing. Its limitations are that it can transform only a limited range of waveform data and that a window weighting function must be applied to the waveform to compensate for spectral leakage.
  • Common spatial patterns (CSP): CSP is a spatial filtering technique usually employed in EEG- and ECoG-based BCIs to extract classification-relevant data [284]. Whenever two classes of data are utilized, it optimizes the ratio of their variances to increase the separability of the two classes. For dimensionality reduction, if a separate dimension-reduction phase precedes CSP, it tends to perform better and generalize more reliably. The basic structure of CSP is described in Figure 5.
    In Figure 5, CSP provides spatial filters that minimize the variance of one class while concurrently maximizing the variance of the other. These filters are mainly used to select the frequency content of the multichannel EEG signal. After frequency filtering, spatial filtering is performed using spatial filters that extract spatial information from the signal. Spatial information is essential for differentiating intent patterns in multichannel EEG recordings for BCI. The performance of this spatial filtering depends on the operational frequency band of the EEG; therefore, CSP is categorized as a frequency-domain feature. Moreover, CSP acts as a signal enhancement step, as it requires no prior knowledge of subject-specific frequency bands.
  • Higher-order Spectra (HOS): Second-order signal measurements include the auto-correlation function and the power spectrum. Second-order measures perform satisfactorily if the signal resembles a Gaussian probability distribution; however, most real-world signals are non-Gaussian. Higher-Order Spectra (HOS) [285] extend the second-order measures and work well for non-Gaussian signals. In addition, most physiological signals are nonlinear and non-stationary, and HOS are considered favorable for detecting such deviations from linearity or stationarity. HOS are calculated from the Fourier transform at various frequencies:
    $HOS = X(k)\,X(l)\,X^{*}(k+l)$
    where $X(k)$ is the Fourier transform of the raw EEG signal $x(n)$, $X^{*}$ denotes its complex conjugate, and $l$ is a shifting parameter.
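As a small worked example of a frequency-domain feature, the band power of an EEG epoch can be computed from the FFT. The sampling rate, band edges, and the synthetic alpha-dominated signal below are illustrative assumptions:

```python
import numpy as np

def band_power(x, fs, band):
    """Mean power of x within the frequency range band = (lo, hi) Hz."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size   # simple periodogram
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

fs = 250
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(6)
# Alpha-dominated synthetic epoch: strong 10 Hz rhythm plus noise.
x = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)

alpha = band_power(x, fs, (8, 13))    # mu/alpha band
beta = band_power(x, fs, (13, 30))    # beta band
```

For a real recording, a windowed estimator such as Welch's method would be preferred over the raw periodogram to reduce spectral leakage and variance.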

7.1.3. Time–Frequency Domain

In the time–frequency domain, the signal is evaluated in the time and frequency domains simultaneously. The wavelet transform is one of several advanced approaches for obtaining a time–frequency representation; other widely used models exist as well. These models are addressed in the subsequent section.
  • Autoregressive model: The Autoregressive (AR) model has been frequently employed for EEG analysis. Its central premise is that the real EEG can be approximated by an AR process; under this premise, the order and parameters of the approximating AR model are chosen to fit the observed EEG as precisely as possible. AR produces an overly smooth spectrum if the model order is too low and false peaks if it is too high [287]. AR also reduces spectral leakage and enhances frequency resolution, but choosing the model order in spectral estimation is difficult. Let the observed data, denoted $x(n)$, result from a linear system with transfer function $H(z)$; then $x(n)$ follows an AR model of order $p$ given by [288]
    $x(n) = \sum_{i=1}^{p} a_{p}(i)\, x(n-i) + v(n)$
    where $a_{p}(i)$ are the AR parameters, $x(n)$ are the observations, and $v(n)$ is the white-noise excitation. Finally, the most challenging part of AR EEG modeling is choosing the correct model order so that the model represents and follows the changing spectrum correctly.
  • Wavelet Transform (WT): The WT technique encodes the original EEG data using wavelets, which are simple building blocks. It examines unusual data patterns using variable windows: wide windows for low frequencies and narrow windows for high frequencies. In addition, WT is considered an advanced approach, as it offers simultaneous localization in the time–frequency domain, which is a significant advantage. These wavelets can be discrete or continuous and describe the signal's characteristics in the time–frequency domain. The Discrete Wavelet Transform (DWT) and the Continuous Wavelet Transform (CWT) are used frequently in EEG analysis [289]. DWT is now more widely used than CWT, as CWT is highly redundant. DWT decomposes a signal into approximation and detail coefficients corresponding to distinct frequency ranges while maintaining the temporal information in the signal. However, since selecting a mother wavelet is challenging, most researchers try all available wavelets before choosing the one that produces the best results. In wavelet-based feature extraction, the Daubechies wavelet of order 4 (db4) is the most commonly employed [290].
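The autoregressive model discussed above can be sketched in a few lines of NumPy: simulate an AR(2) process with known coefficients (matching the plus-sign convention of the formula) and recover them by solving the Yule–Walker equations built from sample autocorrelations. The coefficients, record length, and estimation route are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20000
a_true = np.array([0.6, -0.3])   # illustrative AR(2) coefficients

# Simulate x(n) = a1*x(n-1) + a2*x(n-2) + v(n).
x = np.zeros(n)
v = rng.standard_normal(n)
for i in range(2, n):
    x[i] = a_true[0] * x[i - 1] + a_true[1] * x[i - 2] + v[i]

# Yule-Walker estimation: solve R a = r with sample autocorrelations.
def autocorr(x, k):
    return np.dot(x[:n - k], x[k:]) / n

p = 2
r = np.array([autocorr(x, k) for k in range(p + 1)])
R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
a_hat = np.linalg.solve(R, r[1:])
```

The fitted coefficients (and the residual variance) then serve directly as features, or define a parametric PSD estimate; the order-selection difficulty noted in the text corresponds to choosing p here.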

7.2. ECoG-Based Features

Electrocorticography (ECoG) generates a reliable signal through electrodes placed on the surface of the human brain, from which movement, vision, and speech can be decoded. Decoded ECoG signals give immediate patient feedback and can control a computer cursor or even an exoskeleton. The ECoG feature extraction approach is a crucial element of the BCI system since it underpins the classification phase during decoding. Some of the widely used feature extraction methods are discussed below.

7.2.1. Linear Filtering

Linear filtering is typically employed to filter out noise in the form of signals outside the frequency range of the brain's messages. Low-pass filters and high-pass filters are the two types of linear filters. Linear filtering is commonly used to remove ECG, EOG, and EMG artifacts from EEG signals: low-pass filtering removes EMG artifacts, and high-pass filtering removes EOG artifacts [291]. These artifacts are noises produced either by physiological processes such as muscle, eye, or other biological movement or by exogenous (external) sources such as machinery faults. There are three approaches for dealing with artifacts in EEG signal acquisition: avoiding artifacts by monitoring the subject's movements and the machine's operation; rejecting artifacts by discarding contaminated trials; and removing artifacts with pre-processing techniques. The advantage of linear filtering is that it acts as a controlled scaling of the signal's frequency-domain components: high-pass filtering, for instance, raises the relative importance of the high-frequency components by attenuating those at the low end of the spectrum.
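As a sketch of this idea, the snippet below (assuming SciPy is available; the sampling rate, cutoff frequencies, and synthetic signal are illustrative choices, not values from the cited works) applies Butterworth low-pass and high-pass filters of the kind described above:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0  # assumed EEG sampling rate in Hz (illustrative)
t = np.arange(0, 4, 1 / fs)
# Synthetic signal: a 10 Hz "alpha-like" component plus 60 Hz "EMG-like" contamination.
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)

# Low-pass at 30 Hz to suppress high-frequency (EMG-like) artifacts.
b, a = butter(4, 30 / (fs / 2), btype="low")
low_passed = filtfilt(b, a, eeg)

# High-pass at 1 Hz to suppress slow (EOG-like) drifts.
b, a = butter(4, 1 / (fs / 2), btype="high")
high_passed = filtfilt(b, a, eeg)
```

`filtfilt` applies the filter forward and backward, giving zero phase distortion, which matters when the filtered EEG is later aligned with stimulus onsets.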

7.2.2. Spatial Filtering

Spatial filtering is a technique for improving decoding by leveraging information about the electrode positions. The spatial filter aims to lessen the influence of spatial distortion in the raw signal; the various ECoG channels are treated as coordinates for multivariate data sampling through spatial filters. The filtering transforms that coordinate system to facilitate decoding. Spatial filtering can be used to minimize data dimensionality or to increase the dissimilarity of various observations. The referencing systems used during ECoG recordings are frequently utilized for preliminary spatial filtering. Equation (10) determines the spatial filter [292].
x = \sum_{i=1}^{n} x_i w_i
where x is the spatially filtered signal, x i is the EEG signal from channel i, and w i is the weight of that channel. With the aid of relevant information acquired from multiple EEG channels, spatial filtering contributes to recovering the brain’s original signal. Simultaneously, it reduces dimensionality by lowering EEG channel size to smaller spatially filtered signals.
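For instance, the common average reference (CAR) can be written exactly in the form of Equation (10). The sketch below (NumPy, with synthetic data and an assumed channel count; illustrative only) shows how such a weighted sum removes noise shared across channels:

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_samples = 8, 500
source = rng.normal(size=n_samples)            # activity local to channel 0
common_noise = rng.normal(size=n_samples) * 3  # noise shared by every electrode
eeg = np.tile(common_noise, (n_channels, 1))
eeg[0] += source

# Common average reference as a spatial filter (Equation (10)): weight 1 - 1/n
# on the channel of interest and -1/n on every other channel.
w = -np.ones(n_channels) / n_channels
w[0] += 1.0
filtered = w @ eeg  # weighted sum over channels
```

The shared noise cancels exactly, leaving the local source scaled by (1 − 1/n), which illustrates how spatial filtering recovers the brain's original signal from multiple channels.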
Thus far, feature extraction involves extracting new features from existing ones to minimize feature measurement costs, to improve classifier efficiency, and to improve classification accuracy. Now in the following section, the extracted feature classifiers are briefly described.

8. BCI Classifiers

A BCI always needs a subject to use its device, and the subject must produce several types of data in doing so. Moreover, to use a BCI system, the subject must develop various brain activity patterns that the system can recognize and convert into commands. To achieve this conversion, regression or classification algorithms can be used. Designing the classification step comprises selecting one or more classification algorithms from a variety of options. In this section, some commonly known classifiers [293], which are categorized in Figure 6, as well as some newer classifiers [294], are described.

8.1. Linear Classifiers

Linear classifiers are discriminant algorithms that discriminate classes using linear functions. It is most likely the most widely used algorithm in BCI systems. Two types of linear classifiers are used during BCI design: linear discriminant analysis (LDA) and support vector machine (SVM).

8.1.1. Linear Discriminant Analysis (LDA)

The objective of Linear Discriminant Analysis is to separate data from diverse classes using a hyperplane. In a two-class problem, the side of the hyperplane on which a feature vector falls determines its category. LDA requires that the data have a normal distribution and that both classes have the same covariance matrix. The separating hyperplane is obtained by seeking a projection that maximizes the distance between the means of the two classes while minimizing intraclass variance [295]. Furthermore, this classifier is straightforward to apply, generally produces excellent results, and has been successfully implemented in various BCI systems, including MI-based BCIs, the P300 speller, and multiclass and asynchronous BCIs. The disadvantage of LDA is its linearity, which can lead to unsatisfactory results on strongly nonlinear EEG data.
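A minimal NumPy sketch of two-class LDA under exactly these assumptions (shared covariance, synthetic Gaussian data; the class means and sizes are illustrative, not from any cited study):

```python
import numpy as np

rng = np.random.default_rng(2)
# Two Gaussian classes sharing the same covariance (LDA's core assumption).
X0 = rng.normal(loc=[0, 0], scale=1.0, size=(200, 2))
X1 = rng.normal(loc=[3, 3], scale=1.0, size=(200, 2))

mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
# Pooled covariance estimated from both classes after centering.
cov = np.cov(np.vstack([X0 - mu0, X1 - mu1]).T)

# Separating hyperplane w^T x + b = 0, with w from the pooled covariance.
w = np.linalg.solve(cov, mu1 - mu0)
b = -w @ (mu0 + mu1) / 2

def predict(X):
    # The side of the hyperplane determines the class label.
    return (X @ w + b > 0).astype(int)
```

On well-separated Gaussian data this simple rule is near-optimal; its weakness, as noted above, appears when the true class boundary is nonlinear.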

8.1.2. Support Vector Machine (SVM)

A Support Vector Machine (SVM) uses a discriminant hyperplane to identify classes. The hyperplane determined by an SVM is the one that maximizes the margins, i.e., the distance between the nearest training samples and the hyperplane. Maximizing the margins is believed to improve the classifier's ability to generalize [296]. A linear SVM [297] is a type of SVM that performs classification using linear decision boundaries. This classifier has been used to solve a substantial number of synchronous BCI tasks with tremendous success. The SVM classifier works by projecting the input vector X onto a scalar value f(X), as shown in Equation (11).
f(X) = \sum_{l=1}^{N} a_l\, y_l\, K(X_l, X) + b
When the kernel K is a Gaussian (radial basis) function, the corresponding SVM is termed a Gaussian SVM or RBF SVM. RBF SVMs have also produced remarkable outcomes in BCI applications. Like LDA, SVM has been used to solve multiclass BCI problems using the one-versus-the-rest (OVR) approach.
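The decision function of Equation (11) can be evaluated directly. The sketch below uses a hypothetical, hand-picked support set with equal weights (not a trained model; training the multipliers a_l is a separate optimization) and a Gaussian kernel:

```python
import numpy as np

def rbf_kernel(u, v, gamma=1.0):
    """Gaussian (RBF) kernel K(u, v) = exp(-gamma * ||u - v||^2)."""
    return np.exp(-gamma * np.sum((u - v) ** 2))

def svm_decision(x, support_vectors, alphas, labels, b=0.0, gamma=1.0):
    """Evaluate f(X) = sum_l a_l y_l K(X_l, X) + b, as in Equation (11)."""
    return sum(a * y * rbf_kernel(sv, x, gamma)
               for sv, a, y in zip(support_vectors, alphas, labels)) + b

# Toy support set (hypothetical values): two negative and two positive support vectors.
svs = np.array([[0.0, 0.0], [1.0, 0.0], [4.0, 4.0], [5.0, 4.0]])
ys = np.array([-1, -1, 1, 1])
alphas = np.ones(4)

f_neg = svm_decision(np.array([0.5, 0.0]), svs, alphas, ys)  # near the -1 cluster
f_pos = svm_decision(np.array([4.5, 4.0]), svs, alphas, ys)  # near the +1 cluster
```

The sign of f(X) gives the predicted class; with an RBF kernel the resulting boundary is nonlinear in the original feature space, which is the property exploited in EEG classification.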

8.2. Neural Networks (NN)

Neural networks (NN) and linear classifiers are the two types of classifiers most commonly employed in BCI systems. An NN is a collection of artificial neurons that allows us to create nonlinear decision boundaries [298]. This section describes the multilayer perceptron (MLP), the most extensively used NN for BCI, and then briefly discusses other neural network architectures utilized in BCI systems.

8.2.1. Deep Learning (DL) Models

Deep learning is now widely used in BCI applications, compared with traditional machine learning techniques, because most BCI applications require a high level of accuracy. Deep learning models perform better at recognizing brain signals, which change swiftly. Some popular DL models, such as CNN, GAN, RNN, and LSTM, are described below:
  • Convolutional Neural Network (CNN): A convolutional neural network (CNN) is an ANN intended primarily to analyze visual input and is used in image recognition and processing. The convolutional layer, pooling layer, and fully connected layer are the three layers that comprise a CNN. Using a CNN, the input data may be reduced to compact representations with minimal loss, and the characteristic spatial relationships of EEG patterns can be captured. Fatigue detection, sleep stage classification, stress detection, motor imagery data processing, and emotion recognition are among the EEG-based BCI applications using CNNs. In BCI, CNN models are applied to the input brain signals to exploit latent semantic dependencies.
  • Generative Adversarial Network (GAN): Generative adversarial networks are a recent ML technique. A GAN uses two competing ANN models that train each other simultaneously. GANs allow machines to imagine and develop new images on their own. EEG-based BCI techniques record the signals first and then apply GAN techniques to regenerate the images [299]. The most significant application of GANs in BCI systems is data augmentation, which increases the amount of training data available and allows for more complicated DL models. It can also reduce overfitting and increase classifier accuracy and robustness. In the context of BCI, generative algorithms, including GANs, are frequently used to rebuild or generate a set of brain signal recordings to improve the training set.
  • Recurrent Neural Network (RNN): In its basic form, an RNN is a layer whose output is fed back into its input. Because it has access to data from past time-stamps, the architecture of an RNN layer allows the model to store memory [300,301]. Since RNNs and CNNs have strong temporal and spatial feature extraction abilities, respectively, it is logical to combine them for joint temporal and spatial feature learning. An RNN can be considered a more powerful version of the hidden Markov model (HMM), and it classifies EEG accurately [302]. LSTM is a kind of RNN with a unique architecture that allows it to learn long-term dependencies despite the difficulties that plain RNNs confront. It contains a discrete memory cell, a type of node, and employs an architecture with a series of "gates" to manage the flow of data. RNNs and LSTMs have proven effective at modeling time series for tasks such as handwriting and voice recognition [303].

8.2.2. Multilayer Perceptron (MLP)

A Multilayer Perceptron (MLP) [304] comprises multiple layers of neurons: an input layer, one or more hidden layers, and an output layer. The input of each neuron is linked to the outputs of the neurons in the preceding layer, and the output-layer neurons determine the classification of the input feature vector. MLPs are universal approximators, meaning they can approximate continuous functions given sufficient neurons and layers. The challenge with MLPs is that they are susceptible to over-training, particularly with noisy and non-stationary data, so careful architecture selection and regularization are necessary. A perceptron, i.e., an MLP with no hidden layers, is comparable to LDA and has occasionally been used in BCI applications [293]. Sunny, M.S.H. et al. [305] used an MLP to distinguish distinct frequency bands in EEG signals to extract features more effectively.

8.2.3. Adaptive Classifiers

As new EEG data become available, the parameters of adaptive classifiers, such as the weights allocated to each feature in a linear discriminant hyperplane, are gradually re-estimated and updated. Adaptive classifiers can use supervised or unsupervised adaptation, that is, with or without knowledge of the true class labels of the input data. With supervised adaptation, the true class labels of the incoming EEG signals are known; the classifier is either retrained on the existing training data augmented with the newly labeled incoming data or updated solely on the new data. Supervised BCI adaptation requires supervised user testing. With unsupervised adaptation, the labels of the incoming EEG data are unknown. As a result, unsupervised adaptation relies on class-unspecific updates, such as re-estimating the mean or covariance matrix of the EEG data in the classifier model, or estimating the class labels of the data for additional training [306].

8.3. Nonlinear Bayesian Classifiers

This section discusses the Bayes quadratic and hidden Markov models (HMM), two Bayesian classifiers used in BCI. Although Bayesian graphical networks (BGN) have been used for BCI, they are not covered here since they are not widely used [307].

8.3.1. Bayes Quadratic

The objective of Bayesian classification is to assign a feature vector to the class with the highest probability. The Bayes rule is used to calculate the a posteriori probability of a feature vector belonging to each class. Using the MAP (maximum a posteriori) rule with these probabilities, the class of the feature vector can be determined. The Bayes quadratic classifier assumes that each class follows a distinct normal distribution. The result is quadratic decision boundaries, which justify the classifier's name [308]. Although this classifier is not extensively utilized for BCI, it has been successfully used to classify motor imagery and mental tasks.
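A minimal sketch of the Bayes quadratic idea on synthetic data (two Gaussian classes with distinct covariances, equal priors; all numbers are illustrative assumptions): each class is modeled by its own Gaussian, and the MAP rule picks the class with the highest posterior.

```python
import numpy as np

rng = np.random.default_rng(3)
# Two classes with *different* covariances, which yields quadratic boundaries.
X0 = rng.multivariate_normal([0, 0], [[1.0, 0.0], [0.0, 1.0]], size=300)
X1 = rng.multivariate_normal([3, 0], [[0.3, 0.0], [0.0, 2.0]], size=300)

def gaussian_log_pdf(x, mu, cov):
    """Log-density of N(mu, cov) at x, up to a class-independent constant."""
    d = x - mu
    return -0.5 * (d @ np.linalg.solve(cov, d) + np.log(np.linalg.det(cov)))

# Per-class mean and covariance estimated from the data.
params = [(X.mean(axis=0), np.cov(X.T)) for X in (X0, X1)]

def map_predict(x, priors=(0.5, 0.5)):
    # MAP rule: pick the class maximizing log p(x|c) + log p(c).
    scores = [gaussian_log_pdf(x, mu, cov) + np.log(p)
              for (mu, cov), p in zip(params, priors)]
    return int(np.argmax(scores))
```

Because the two covariances differ, the resulting decision boundary is a quadratic curve rather than a hyperplane, which is exactly what gives the classifier its name.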

8.3.2. Hidden Markov Model

A Hidden Markov Model (HMM) is a Bayesian classifier that generates a nonlinear cost function. An HMM is a statistical algorithm that calculates the probability of observing a given sequence of feature vectors [309]. In BCI, these observation probabilities are generally modeled with Gaussian Mixture Models (GMM) [310]. HMMs may be used to classify temporal patterns of BCI features (Obermaier, B. et al. [302]) and even raw EEG data, since the EEG components used to control a BCI have particular time courses. Although HMMs are not widely used in the BCI world, research has demonstrated that they can be helpful for classification in EEG-based BCI systems [311].

8.4. Nearest Neighbor Classifiers

In this section, some classifiers with distance vectors are described. Classifiers such as K nearest neighbors (KNN) and Mahalanobis distance are common among them as they are nonlinear discriminative classifiers [312].

8.4.1. K Nearest Neighbors

The K nearest neighbor method aims to identify the dominant class among the k nearest neighbors of an unseen point within the training dataset. In BCI, the nearest neighbors are typically found using a metric distance over the acquired features. With a sufficiently high value of k and enough training data, KNN can approximate any function, enabling it to construct nonlinear decision boundaries. KNN algorithms are not very popular in the BCI field because their high sensitivity to the dimensionality of the feature space hampers their capacity, which has caused them to fail in multiple BCI studies. KNN can, however, be efficient in BCI systems with low-dimensional feature vectors [313].
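A minimal NumPy sketch of the KNN rule on synthetic two-cluster data (the value of k and the cluster layout are illustrative assumptions, not taken from any cited study):

```python
import numpy as np

def knn_predict(x, X_train, y_train, k=5):
    """Assign x to the dominant class among its k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distance to every sample
    nearest = y_train[np.argsort(dists)[:k]]      # labels of the k closest samples
    return np.bincount(nearest).argmax()          # majority vote

rng = np.random.default_rng(4)
X_train = np.vstack([rng.normal([0, 0], 0.5, (50, 2)),
                     rng.normal([2, 2], 0.5, (50, 2))])
y_train = np.array([0] * 50 + [1] * 50)

pred = knn_predict(np.array([1.9, 2.1]), X_train, y_train, k=5)
```

In a feature space with many dimensions, distances between points become less informative, which is the practical reason KNN struggles on high-dimensional EEG feature vectors.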

8.4.2. Mahalanobis Distance

Mahalanobis distance-based classifiers [314] assume a Gaussian distribution N(μ_c, M_c) for each prototype of class c. A feature vector x is then allocated to the class whose prototype is closest according to the Mahalanobis distance d_c(x).
d_c(x) = \sqrt{(x - \mu_c)\, M_c^{-1}\, (x - \mu_c)^{T}}
This yields a simple yet reliable classifier that has been shown to work in multiclass and asynchronous BCI systems. Despite its excellent results, it is still rarely mentioned in the BCI literature [315].
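A sketch of this classifier with hypothetical class prototypes (the means and covariances below are chosen purely for illustration): each class is a (μ_c, M_c) pair, and a point is assigned to the class with the smallest d_c(x).

```python
import numpy as np

def mahalanobis(x, mu, cov):
    """Mahalanobis distance d_c(x) = sqrt((x - mu) cov^{-1} (x - mu)^T)."""
    d = x - mu
    return np.sqrt(d @ np.linalg.solve(cov, d))

# Hypothetical class prototypes: one (mean, covariance) pair per class.
protos = {
    "left":  (np.array([0.0, 0.0]), np.array([[1.0, 0.0], [0.0, 1.0]])),
    "right": (np.array([3.0, 1.0]), np.array([[0.5, 0.0], [0.0, 0.5]])),
}

def classify(x):
    # Assign x to the class with the nearest prototype in Mahalanobis distance.
    return min(protos, key=lambda c: mahalanobis(x, *protos[c]))
```

Unlike plain Euclidean distance, d_c(x) accounts for each class's covariance, so elongated or tight clusters are compared on an equal footing.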

8.5. Hybrid

In several BCI papers, classification is implemented with a single classifier, but a current tendency is to combine multiple classifiers in various ways [316]. The following are the classifier combination strategies utilized in BCI systems:

8.5.1. Boosting

Boosting is the process of using multiple classifiers in a cascade, each focused on the errors made by the one before it. It can combine numerous weak classifiers to form a powerful one; therefore, it is unlikely to overtrain. However, it is susceptible to mislabeling, which explains why it failed in one BCI trial [293].

8.5.2. Voting

Multiple classifiers are employed for voting, each of which allocates the input feature vector to a class. The majority class becomes the final class. In BCI systems, voting is the most preferred process of combining classifiers due to its simplicity and efficiency [293].

8.5.3. Stacking

Stacking is the process of utilizing multiple classifiers, called level-0 classifiers, to categorize the input feature vector. The output of each of these classifiers is then fed into a "meta-classifier" (or "level-1 classifier"), which makes the final decision [293].
As mentioned earlier in this section, some other classifiers have been utilized in recent BCI research. Since 2016, transfer learning has been used for MI classification tasks [317]. Some ground-breaking architectures have been established in recent years, such as EEG-Inception, an end-to-end neural network [318]; a cluster-decomposing, multi-objective optimization-based ensemble learning framework [319]; and RFNet, a fusion network that learns from attention weights and embeds modality-specific features for decision making [179].
For a better understanding, the performance of commonly known classifiers on some popular datasets is given in Table 9.

9. Evaluation Measurement

To evaluate the performance of BCI systems, researchers employ several evaluation metrics. The most common is classification accuracy (or, equivalently, the error rate). Because accuracy is not always an acceptable criterion under certain rigorous requirements, various other evaluation criteria have been proposed. An overview of BCI research evaluation criteria is provided below.

9.1. Generally Used Evaluation Metrics

In this section, we sorted the most commonly used evaluation metrics for measuring the BCI system performances. The evaluation measures are explained carefully in the following subsections.

9.1.1. The Confusion Matrix

The confusion matrix represents the relationship between the user-intended output classes and the actually predicted classes. The true positive rate (TPR), false negative rate (FNR), false positive rate (FPR), positive predictive value (PPV), and negative predictive value (NPV) are used to describe sensitivity (recall), specificity, (1 − specificity), precision, etc. [325].

9.1.2. Classification Accuracy and Error Rate

Classification accuracy is one of the most important metrics in BCI systems; this study evaluates performance using classification accuracy as well as sensitivity and specificity. This measure determines how frequently the BCI makes the right pick, i.e., what proportion of all selections is correct. It is the most obvious indicator of BCI accomplishment, although higher accuracy typically comes at the cost of longer decision times. The mathematical formula for calculating accuracy is as follows:
\text{Classification accuracy} = \frac{\text{Correctly classified test trials}}{\text{Total test trials}} \times 100

9.1.3. Information Transfer Rate

Shannon [326] proposed the basis of the Information Transfer Rate (ITR), a rate that combines both speed and accuracy. It represents the quantity of information that passes through the system in one unit of time. In [327], the information transfer rate in bits per minute ( b i t s / m i n ) and accuracy (ACC) in percentage (%) were used to evaluate performance. The authors reported demographic data (age and gender) as well as the performance outcomes of 10 participants, and the ITR was computed using Formula (14), as follows:
B_t = \log_2 N + p \log_2 p + (1 - p) \log_2 \frac{1 - p}{N - 1},
where N is the number of targets and p is the classification accuracy (ACC). Based on four cursor movements and the select command, this resulted in an N of 5. B_t is expressed in bits per trial.
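Formula (14) can be computed directly. The sketch below (plain Python) uses the N = 5 example from the text; the trial duration in the bits-per-minute conversion is an assumed value for illustration:

```python
import math

def bits_per_trial(N, p):
    """ITR per trial, Formula (14): N targets, classification accuracy p."""
    if p >= 1.0:
        return math.log2(N)  # the p*log2(p) and (1-p) terms vanish at p = 1
    return (math.log2(N) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (N - 1)))

def itr_bits_per_min(N, p, trial_seconds):
    """Convert bits per trial to bits per minute for a given trial duration."""
    return bits_per_trial(N, p) * 60.0 / trial_seconds

# Example: N = 5 (four cursor movements plus select), 90% accuracy,
# and an assumed 4-second trial.
b = bits_per_trial(5, 0.9)
rate = itr_bits_per_min(5, 0.9, trial_seconds=4.0)
```

Note that at chance-level accuracy (p = 1/N) the formula gives zero bits per trial, which is why raising accuracy, the number of classes, or the decision speed all raise the ITR.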
The ITR [328] also depends on some important parameters that are used to evaluate BCIs. A description of them is given below:
  • Target detection accuracy: The accuracy of target identification may be enhanced by increasing the Signal-to-Noise Ratio (SNR) and the separability of the classes. Several techniques, such as trial averaging, spatial filtering, and eliciting stronger task-related EEG signals, are employed in the preprocessing step to improve the SNR. Many applications utilize trial averaging across subjects to improve the performance of a single BCI [53].
  • Number of classes: Once a high ITR has been attained, the number of classes can be raised and more sophisticated applications built. TDMA, FDMA, and CDMA are among the stimulus coding techniques that have been adopted for BCI systems [243,329]. P300, for example, uses TDMA to code the target stimulus, while FDMA and CDMA have been used in VEP-based BCI systems.
  • Target detection time: The detection time is the interval between when a user first expresses their intent and when the system makes a judgment. One of the goals of BCI systems is to improve the ITR by reducing the target detection time. Adaptive techniques, such as the "dynamic stopping" method, can be used to minimize it [330].

9.1.4. Cohen’s Kappa Coefficient

Cohen's Kappa measures the agreement between two observers; in a BCI-based AAC system, it measures the agreement between the proper output and the command of the BCI domain. Cohen's kappa coefficient resolves many of the objections to the accuracy measure [331]. K is calculated from the overall agreement p_0 = ACC, which is equivalent to the classification accuracy, and the chance agreement p_e, where n_{:i} and n_{i:} denote the i-th column and i-th row sums, respectively.
p_e = \sum_{i=1}^{M} \frac{n_{:i}\, n_{i:}}{N^2}
where n_{:i} and n_{i:} correspond to the posteriori and priori probabilities, respectively. The estimated kappa coefficient κ and its standard error σ_e(κ) are obtained by
\kappa = \frac{p_0 - p_e}{1 - p_e}
When there is no correlation between the expected and actual classes, the kappa coefficient becomes zero. A perfect categorization is indicated by a kappa coefficient of 1. If the Kappa value is less than zero, the classifier offers an alternative assignment for the output and actual classes [332].
\sigma_e(\kappa) = \sqrt{\frac{p_0 + p_e^2 - \sum_{i=1}^{M} n_{:i}\, n_{i:}\, (n_{:i} + n_{i:}) / N^3}{(1 - p_e)\, N}}
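The agreement terms p_0 and p_e, and hence κ, can be computed directly from a confusion matrix. The following sketch (NumPy, with a made-up 2 × 2 confusion matrix for illustration) mirrors Equations (15) and (16):

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from an M x M confusion matrix (rows: actual, cols: predicted)."""
    confusion = np.asarray(confusion, dtype=float)
    N = confusion.sum()
    p0 = np.trace(confusion) / N        # overall agreement = classification accuracy
    row = confusion.sum(axis=1)         # row sums n_{i:}
    col = confusion.sum(axis=0)         # column sums n_{:i}
    pe = np.sum(row * col) / N ** 2     # chance agreement, Equation (15)
    return (p0 - pe) / (1 - pe)         # kappa, Equation (16)

# Hypothetical confusion matrix: 85% accuracy over 100 trials.
kappa = cohens_kappa([[45, 5], [10, 40]])
```

Here p_0 = 0.85 and p_e = 0.5, giving κ = 0.7; a kappa of 1 indicates perfect classification, 0 indicates chance-level agreement, and negative values indicate systematic disagreement, as described above.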

9.2. Continuous BCI System Evaluation

Continuous BCI performance was measured using a variety of parameters. Different measures may be even more appropriate depending on whether the study is conducted online or offline. The section goes through some of the most commonly used metrics in this field, including the correlation coefficient, accuracy, and Fitts’s Law [333].

9.2.1. Correlation Coefficient

The correlation coefficient can be a useful statistic for determining whether an intracortical implant records task-relevant neurons. There are two essential caveats: first, the measure is scale-invariant, which implies that the cursor might miss the mark substantially while still generating high values if the signs of the actual and predicted movements coincide [334]; second, a decoder can yield a high value if it simply generates a signal that fluctuates with the repetitions [333].

9.2.2. Accuracy

Task characteristics such as target size and dwell time have a significant impact on accuracy. As a result, it is more a sign that the task was well suited to the subject and modality than a performance measure [333].

9.2.3. Fitts’s Law

Fitts's law asserts that the time taken for a person to move a cursor to a target grows with the ratio of the target's distance to its size: the greater the distance and the narrower the target, the longer it takes [335,336]. Applying Fitts's law requires a method to calculate the "index of difficulty" of a particular movement.

9.3. User-Centric BCI System Evaluation

Users are an essential element of the BCI product life cycle. Their interactions and experiences influence whether BCI systems are acceptable and viable. Four criteria, or User Experience (UX) factors, are used to evaluate user-centric BCI systems: usability, affect, ergonomics, and quality of life, described in the following subsections.

9.3.1. Usability

Usability refers to the extent to which a system can be used to fulfill specific objectives with effectiveness, efficiency, learnability, and satisfaction in a given context [337]. The usability measure includes four metrics:
  • Effectiveness or accuracy: It depicts the overall accuracy of the BCI system as experienced from the end user’s perspective [333].
  • Efficiency or information transfer rate: It refers to the speed and timing at which a task is accomplished. Therefore, it depicts the overall BCI system’s speed, throughput, and latency seen through the eyes of the end user’s perspective [333].
  • Learnability: The BCI system can make users feel as if they can use the product effectively and quickly learn additional features. Both the end-user and the provider are affected by learnability [338].
  • Satisfaction: It is based on participants’ reactions to actual feelings while using BCI systems, showing the user’s favorable attitude regarding utilizing the system. To measure satisfaction, we can use rating scales or qualitative methods [333].

9.3.2. Affect

Regarding BCIs, affect might refer to how comfortable the system is, particularly over long periods, and how pleasant or unpleasant the stimuli are. EEG event-related potentials, spectral characteristics, galvanic skin responses, or heart rates could be used to quantitatively monitor users' exhaustion, valence, and arousal levels [339].

9.3.3. Ergonomics

Ergonomics is the study of how people interact with their environments. The load on the user's memory is represented by the cognitive task load, a multidimensional entity. In addition, physiological markers including eye movement, EEG, ERPs, and spectral characteristics can be employed to evaluate cognitive stress objectively [340].

9.3.4. Quality of Life

It expresses the user’s overall perception of the system’s utility and acceptance and its influence on their well-being. The Return on Investment (ROI) is an economic measure of the perceived benefit derived from it. The overall quality of experience is a measure of how satisfied a user is with their expertise [333].
Other assessment methods, such as Mutual Information, Written symbol rate (WSR), and Practical bit rate (PBR), are utilized to a lesser extent.

10. Limitations and Challenges

The brain-computer interface is advancing towards a more dynamic and accurate solution for the connection between brain and machine. Still, several factors stand in the way of this ultimate goal. We therefore analyzed a few core studies on BCI and summarize their limitations in Table 10. We then describe the significant challenges of the BCI domain.
The challenges and difficulties of the BCI domain are divided into three categories: challenges based on usability, technical challenges, and ethical challenges. The rest of the section briefly explains these challenges.

10.1. Based on Usability

This section describes the challenges that users have in accepting BCI technology [350]. They include concerns relating to the requisite training for class discrimination.

10.1.1. Training Time

Training a user, whether by leading the user through the procedure or through the documentation manual, usually takes considerable time. Most of the time, users also request that the system be simpler to use; they often dislike a complicated system that is difficult to manage. Creating such a sophisticated yet user-friendly system is a challenging effort [351].

10.1.2. Fatigue

Most present BCIs generate a lot of fatigue since they demand concentration, focus, and awareness of a rapid and intermittent stimulus. Beyond the annoyance and weariness caused by the electrodes, a BCI may fail to operate because the user cannot maintain a sufficient degree of focus. In a BCI, mental activity is continually monitored, and the user's point of attention alters the input; the concentration required by the stimuli thus entangles input and output [352,353]. Rather than relaxing, the user must concentrate on a single point as input and then look at the outcome. At some point, the interaction takes on a forced quality rather than the natural quality it would have if the user could choose whichever part of the visual output to focus on [6].

10.1.3. Mobility to Users

In most situations, users are not allowed to move around or to have mobility while using BCIs. During a test application, users must stay motionless and quiet, ideally sitting down. However, in a real-world setting, a user may need to utilize a BCI while walking down the street, for example, to manage a smartphone. Additionally, BCIs cannot yet ensure user comfort: the EEG headset is usually neither lightweight nor easy to carry, which hampers the user experience.

10.1.4. Psychophysiological and Neurological Challenges

Emotional and mental mechanisms, cognition-related neurophysiology, and neurological variables, such as functionality and architecture, play vital roles in BCI performance, resulting in significant intra- and inter-individual heterogeneity. Immediate brain dynamics are influenced by psychological elements such as attention, memory load, weariness, and conflicting cognitive functions, and by users' specific characteristics such as lifestyle, gender, and age. Participants with weaker empathy engage less emotionally in a P300-BCI paradigm and generate larger P300 wave amplitudes than participants with greater empathy [354].

10.2. Technical Challenges

Non-linearity, non-stationarity, and noise as well as limited training sets and the accompanying dimensionality curse are difficulties relating to the recorded electrophysiological characteristics of brain impulses.

10.2.1. Non-Linearity

The brain is a very complex nonlinear system in which chaotic neuronal ensemble activity may be seen. Nonlinear dynamic techniques can thus better describe EEG data than linear ones.

10.2.2. Non-Stationarity

The non-stationarity of electrophysiological brain signals is a significant challenge in developing a BCI system for human recognition. It results in a constant shift of the utilized signals over time, either between or within sessions. EEG signal variability can be influenced by the user's mental and emotional state across sessions; emotional states such as sadness, happiness, anxiety, and fear can vary on a daily basis, which reflects non-stationarity [355]. Noise is also a significant contributor to the non-stationarity problems that BCI technology faces: noise and other external interference are always present in raw EEG data, even for the most robust emotion recognition systems [356]. This comprises undesired signals generated by changes in electrode location as well as noise from the surroundings [357].

10.2.3. Transfer Rate of Signals

In BCIs, the system must continuously adjust to the signals of the user, and this adjustment must be made quickly and precisely. Current BCIs have an extremely slow information transfer rate, taking almost two minutes to "digitalize" a single phrase, for example. Furthermore, BCI accuracy does not always reach a desirable level, particularly in visual stimulus-based BCIs. Actions must sometimes be repeated or undone, producing discomfort or even dissatisfaction when interactive systems use this type of interface [358].

10.2.4. Signal Processing

Recently, a variety of decoding techniques, signal processing algorithms, and classification algorithms have been studied. Despite this, the information retrieved from EEG waves does not have a high enough signal-to-noise ratio to operate a device with some extent of liberty, such as a prosthetic limb. Algorithms that are more resilient, accurate, and quick are required to control BCI.

10.2.5. Training Sets

In BCI, the training process is mainly impacted by usability concerns, but training sets are tiny in most cases. Although the subjects find the training sessions time-consuming and challenging, they give the user the required expertise to interact with the system and to learn to manage their neurophysiological signals. As a result, balancing the technological complexity of decoding the user’s brain activity with the level of training required for the proper functioning of the interfaces is a crucial issue in building a BCI [359].

10.2.6. Lack of Data Analysis Method

The classifiers should be evaluated online since every BCI implementation is in an online situation. Additionally, it should be validated to ensure that it has low complexity and can be calibrated rapidly in real-time. Domain adaptation and transfer learning could be an acceptable solution for developing calibration-free BCIs, where even the integration of unique feature sets, such as covariance matrices with domain adaptation algorithms, can strengthen the invariance performance of BCIs.

10.2.7. Performance Evaluation Metrics

A variety of performance evaluation measures are used to evaluate BCI systems. However, when different evaluation metrics are used to assess BCI systems, it is nearly impossible to compare systems. As a result, the BCI research community should establish a uniform and systematic approach to quantify a particular BCI application or a particular metric. For example, to test the efficiency of a BCI wheelchair control, the number of control commands, categories of control commands, total distance, time consumed, the number of collisions, classification accuracy, and the average success rate need to be evaluated, among other factors [360].

10.2.8. Low ITR of BCI Systems

The Information Transfer Rate (ITR) is one of the most widely used performance evaluation metrics for BCI systems. It depends on the number of classes, the target detection accuracy, and the target detection time. Increasing the Signal-to-Noise Ratio (SNR) improves the target detection accuracy [53,328], and several preprocessing techniques are typically used to optimize the SNR. Once a high ITR has been attained, more complicated applications can be created by expanding the number of available classes. CDMA, TDMA, and FDMA [243,361] are a few of the stimulus coding schemes that have already been developed for BCI systems: TDMA has been used with P300 to code the required stimuli, while CDMA and FDMA have been used with VEP-based BCIs. Furthermore, reducing the target recognition time is essential for increasing the ITR; adaptive techniques, such as "dynamic stopping", are an effective option for accomplishing this.
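The widely used Wolpaw formulation makes these dependencies explicit: with N classes, accuracy P, and one selection every T seconds, the information per trial is log2 N + P·log2 P + (1−P)·log2((1−P)/(N−1)) bits, scaled by 60/T to give bits per minute. A small sketch:

```python
import math

def itr_bits_per_min(n_classes, accuracy, trial_time_s):
    """Wolpaw information transfer rate for an N-class selection task.

    n_classes:    number of targets N
    accuracy:     target detection accuracy P (0 < P <= 1)
    trial_time_s: seconds needed for one selection
    """
    n, p = n_classes, accuracy
    bits = math.log2(n)
    if p < 1:  # the P*log2(P) and error terms vanish at P = 1
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / trial_time_s

# A perfect 2-class BCI selecting once per second: 1 bit/trial -> 60 bits/min.
print(round(itr_bits_per_min(2, 1.0, 1.0), 1))  # 60.0
# 4 classes at 90% accuracy, 2 s per selection: roughly 41 bits/min.
print(round(itr_bits_per_min(4, 0.9, 2.0), 1))
```

The formula shows the two levers discussed above: raising accuracy (via SNR) increases bits per trial, while dynamic stopping shrinks T, and both raise the ITR.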

10.2.9. Specifically Allocated Lab for BCI Technology

Most BCI systems are trialed in a supervised laboratory rather than in the actual surroundings of their users. When designing a BCI system, it is essential to consider the environment in which the technology will be used. The system's requirements, environmental factors, operating conditions, and target users must be thoroughly investigated during the design phase.

10.3. Ethical Challenges

Ethical concerns surrounding BCI span physical, psychological, and social factors. Physically, a BCI must acquire signals from the human body through attached electrodes; wearing these electrodes always carries some risk and can, in the worst cases, harm the user. Signal acquisition also demands strict control of the body, so the subject must remain seated for long periods. In addition, participants must act according to the requirements of the recording setup rather than at will, which can place a substantial burden on the user.

11. Conclusions

The brain-computer interface is a communication method that directly joins the wired brain with external applications and devices. The BCI domain includes investigating, assisting, augmenting, and experimenting with brain signal activity. Owing to widely available documentation, low-cost amplifiers, greater temporal resolution, and superior signal analysis methods, BCI technologies are now accessible to researchers in diverse domains. Moreover, it is an interdisciplinary area that draws on biology, engineering, computer science, and applied mathematics. This article has presented an architectural and constructive investigation of the brain-computer interface, aimed at novices who would like to learn about the current state of BCI systems and methodologies. The fundamental principles of BCI techniques are discussed in detail. The article describes the architectural perspectives of certain unique taxons and gives a taxonomy of BCI systems. It also covers feature extraction, classification, and evaluation procedures and techniques, and summarizes the present methods for creating various types of BCI systems. The study likewise surveys the datasets available for BCI systems, and explains the challenges and limitations of the described systems along with possible solutions. Lastly, BCI technology advances through four stages: basic scientific development, preclinical experimentation, clinical investigation, and commercialization. At present, most BCI techniques are in the preclinical and clinical phases. The combined efforts of scientific researchers and the tech industry are needed to bring the benefits of this domain to ordinary people through commercialization.

Author Contributions

Conceptualization, M.F.M.; Data curation, M.F.M., S.C.D., M.M.K. and A.A.L.; Formal analysis, M.F.M.; Investigation, M.R.I. and Y.W.; Methodology, M.F.M., S.C.D., M.M.K., A.A.L., M.R.I. and Y.W.; Software, S.C.D., M.M.K. and A.A.L.; Supervision, M.R.I.; Validation, M.F.M., M.R.I. and Y.W.; Visualization, M.F.M., S.C.D., M.M.K. and A.A.L.; Writing—original draft, M.F.M., S.C.D., M.M.K., A.A.L., M.R.I. and Y.W.; Writing—review & editing, M.F.M., M.R.I. and Y.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

There is no statement regarding the data.

Acknowledgments

We would like to thank Bangladesh University of Business & Technology (BUBT), University of Asia Pacific (UAP), and University of Aizu (UoA) for supporting this research. Special thanks also to the Advanced Machine Learning Lab, BUBT; the Computer Vision & Pattern Recognition Lab, UAP; and the Database System Lab, UoA, for providing research and publication facilities.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Berger, H. Über das elektroenkephalogramm des menschen. Archiv. Psychiatr. 1929, 87, 527–570. [Google Scholar] [CrossRef]
  2. Lindsley, D.B. Psychological phenomena and the electroencephalogram. Electroencephalogr. Clin. Neurophysiol. 1952, 4, 443–456. [Google Scholar] [CrossRef]
  3. Vidal, J.J. Toward direct brain-computer communication. Annu. Rev. Biophys. Bioeng. 1973, 2, 157–180. [Google Scholar] [CrossRef]
  4. Zeng, F.G.; Rebscher, S.; Harrison, W.; Sun, X.; Feng, H. Cochlear implants: System design, integration, and evaluation. IEEE Rev. Biomed. Eng. 2008, 1, 115–142. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Nicolas-Alonso, L.F.; Gomez-Gil, J. Brain computer interfaces: A review. Sensors 2012, 12, 1211–1279. [Google Scholar] [CrossRef]
  6. Abiri, R.; Borhani, S.; Sellers, E.W.; Jiang, Y.; Zhao, X. A comprehensive review of EEG-based brain–computer interface paradigms. J. Neural Eng. 2019, 16, 011001. [Google Scholar] [CrossRef]
  7. Tiwari, N.; Edla, D.R.; Dodia, S.; Bablani, A. Brain computer interface: A comprehensive survey. Biol. Inspired Cogn. Archit. 2018, 26, 118–129. [Google Scholar] [CrossRef]
  8. Vasiljevic, G.A.M.; de Miranda, L.C. Brain–computer interface games based on consumer-grade EEG Devices: A systematic literature review. Int. J. Hum. Comput. Interact. 2020, 36, 105–142. [Google Scholar] [CrossRef]
  9. Martini, M.L.; Oermann, E.K.; Opie, N.L.; Panov, F.; Oxley, T.; Yaeger, K. Sensor modalities for brain-computer interface technology: A comprehensive literature review. Neurosurgery 2020, 86, E108–E117. [Google Scholar] [CrossRef]
  10. Bablani, A.; Edla, D.R.; Tripathi, D.; Cheruku, R. Survey on brain-computer interface: An emerging computational intelligence paradigm. ACM Comput. Surv. (CSUR) 2019, 52, 20. [Google Scholar] [CrossRef]
  11. Fleury, M.; Lioi, G.; Barillot, C.; Lécuyer, A. A Survey on the Use of Haptic Feedback for Brain-Computer Interfaces and Neurofeedback. Front. Neurosci. 2020, 14, 528. [Google Scholar] [CrossRef] [PubMed]
  12. Torres, P.E.P.; Torres, E.A.; Hernández-Álvarez, M.; Yoo, S.G. EEG-based BCI emotion recognition: A survey. Sensors 2020, 20, 5083. [Google Scholar] [CrossRef] [PubMed]
  13. Zhang, X.; Yao, L.; Wang, X.; Monaghan, J.J.; Mcalpine, D.; Zhang, Y. A survey on deep learning-based non-invasive brain signals: Recent advances and new frontiers. J. Neural Eng. 2021, 18, 031002. [Google Scholar] [CrossRef]
  14. Gu, X.; Cao, Z.; Jolfaei, A.; Xu, P.; Wu, D.; Jung, T.P.; Lin, C.T. EEG-based brain-computer interfaces (BCIs): A survey of recent studies on signal sensing technologies and computational intelligence approaches and their applications. IEEE/ACM Trans. Comput. Biol. Bioinform. 2021. [Google Scholar] [CrossRef] [PubMed]
  15. Kitchenham, B.; Charters, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering; EBSE Technical Report; Keele University and Durham University Joint Report: Durham, UK, 2007. [Google Scholar]
  16. Kitchenham, B. Procedures for Performing Systematic Reviews; Technical Report TR/SE-0401; Keele University: Keele, UK, 2004; Volume 33, pp. 1–26. [Google Scholar]
  17. Nijholt, A. The future of brain-computer interfacing (keynote paper). In Proceedings of the 2016 5th International Conference on Informatics, Electronics and Vision (ICIEV), Dhaka, Bangladesh, 13–14 May 2016; pp. 156–161. [Google Scholar]
  18. Padfield, N.; Zabalza, J.; Zhao, H.; Masero, V.; Ren, J. EEG-based brain-computer interfaces using motor-imagery: Techniques and challenges. Sensors 2019, 19, 1423. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  19. Hara, Y. Brain plasticity and rehabilitation in stroke patients. J. Nippon. Med Sch. 2015, 82, 4–13. [Google Scholar] [CrossRef] [Green Version]
  20. Bousseta, R.; El Ouakouak, I.; Gharbi, M.; Regragui, F. EEG based brain computer interface for controlling a robot arm movement through thought. Irbm 2018, 39, 129–135. [Google Scholar] [CrossRef]
  21. Perales, F.J.; Riera, L.; Ramis, S.; Guerrero, A. Evaluation of a VR system for Pain Management using binaural acoustic stimulation. Multimed. Tools Appl. 2019, 78, 32869–32890. [Google Scholar] [CrossRef]
  22. Shim, M.; Hwang, H.J.; Kim, D.W.; Lee, S.H.; Im, C.H. Machine-learning-based diagnosis of schizophrenia using combined sensor-level and source-level EEG features. Schizophr. Res. 2016, 176, 314–319. [Google Scholar] [CrossRef]
  23. Sharanreddy, M.; Kulkarni, P. Detection of primary brain tumor present in EEG signal using wavelet transform and neural network. Int. J. Biol. Med. Res. 2013, 4, 2855–2859. [Google Scholar]
  24. Poulos, M.; Felekis, T.; Evangelou, A. Is it possible to extract a fingerprint for early breast cancer via EEG analysis? Med. Hypotheses 2012, 78, 711–716. [Google Scholar] [CrossRef]
  25. Christensen, J.A.; Koch, H.; Frandsen, R.; Kempfner, J.; Arvastson, L.; Christensen, S.R.; Sorensen, H.B.; Jennum, P. Classification of iRBD and Parkinson’s disease patients based on eye movements during sleep. In Proceedings of the 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 3–7 July 2013; pp. 441–444. [Google Scholar]
  26. Mikołajewska, E.; Mikołajewski, D. The prospects of brain—Computer interface applications in children. Open Med. 2014, 9, 74–79. [Google Scholar] [CrossRef]
  27. Mane, R.; Chouhan, T.; Guan, C. BCI for stroke rehabilitation: Motor and beyond. J. Neural Eng. 2020, 17, 041001. [Google Scholar] [CrossRef]
  28. Van Dokkum, L.; Ward, T.; Laffont, I. Brain computer interfaces for neurorehabilitation–its current status as a rehabilitation strategy post-stroke. Ann. Phys. Rehabil. Med. 2015, 58, 3–8. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  29. Soekadar, S.R.; Silvoni, S.; Cohen, L.G.; Birbaumer, N. Brain-machine interfaces in stroke neurorehabilitation. In Clinical Systems Neuroscience; Springer: Berlin/Heidelberg, Germany, 2015; pp. 3–14. [Google Scholar]
  30. Beudel, M.; Brown, P. Adaptive deep brain stimulation in Parkinson’s disease. Park. Relat. Disord. 2016, 22, S123–S126. [Google Scholar] [CrossRef] [Green Version]
  31. Mohagheghian, F.; Makkiabadi, B.; Jalilvand, H.; Khajehpoor, H.; Samadzadehaghdam, N.; Eqlimi, E.; Deevband, M. Computer-aided tinnitus detection based on brain network analysis of EEG functional connectivity. J. Biomed. Phys. Eng. 2019, 9, 687. [Google Scholar] [CrossRef] [PubMed]
  32. Fernández-Caballero, A.; Navarro, E.; Fernández-Sotos, P.; González, P.; Ricarte, J.J.; Latorre, J.M.; Rodriguez-Jimenez, R. Human-avatar symbiosis for the treatment of auditory verbal hallucinations in schizophrenia through virtual/augmented reality and brain-computer interfaces. Front. Neuroinformatics 2017, 11, 64. [Google Scholar] [CrossRef] [Green Version]
  33. Dyck, M.S.; Mathiak, K.A.; Bergert, S.; Sarkheil, P.; Koush, Y.; Alawi, E.M.; Zvyagintsev, M.; Gaebler, A.J.; Shergill, S.S.; Mathiak, K. Targeting treatment-resistant auditory verbal hallucinations in schizophrenia with fMRI-based neurofeedback–exploring different cases of schizophrenia. Front. Psychiatry 2016, 7, 37. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  34. Ehrlich, S.; Guan, C.; Cheng, G. A closed-loop brain-computer music interface for continuous affective interaction. In Proceedings of the 2017 International Conference on Orange Technologies (ICOT), Singapore, 8–10 September 2017; pp. 176–179. [Google Scholar]
  35. Placidi, G.; Cinque, L.; Di Giamberardino, P.; Iacoviello, D.; Spezialetti, M. An affective BCI driven by self-induced emotions for people with severe neurological disorders. In International Conference on Image Analysis and Processing; Springer: Berlin/Heidelberg, Germany, 2017; pp. 155–162. [Google Scholar]
  36. Kerous, B.; Skola, F.; Liarokapis, F. EEG-based BCI and video games: A progress report. Virtual Real. 2018, 22, 119–135. [Google Scholar] [CrossRef]
  37. Stein, A.; Yotam, Y.; Puzis, R.; Shani, G.; Taieb-Maimon, M. EEG-triggered dynamic difficulty adjustment for multiplayer games. Entertain. Comput. 2018, 25, 14–25. [Google Scholar] [CrossRef]
  38. Zhang, B.; Wang, J.; Fuhlbrigge, T. A review of the commercial brain-computer interface technology from perspective of industrial robotics. In Proceedings of the 2010 IEEE International Conference on Automation and Logistics, Hong Kong, China, 16–20 August 2010; pp. 379–384. [Google Scholar]
  39. Van De Laar, B.; Brugman, I.; Nijboer, F.; Poel, M.; Nijholt, A. BrainBrush, a multimodal application for creative expressivity. In Proceedings of the Sixth International Conference on Advances in Computer-Human Interactions (ACHI 2013), Nice, France, 24 February–1 March 2013; pp. 62–67. [Google Scholar]
  40. Todd, D.; McCullagh, P.J.; Mulvenna, M.D.; Lightbody, G. Investigating the use of brain-computer interaction to facilitate creativity. In Proceedings of the 3rd Augmented Human International Conference, Megève, France, 8–9 March 2012; pp. 1–8. [Google Scholar]
  41. Liu, Y.T.; Wu, S.L.; Chou, K.P.; Lin, Y.Y.; Lu, J.; Zhang, G.; Lin, W.C.; Lin, C.T. Driving fatigue prediction with pre-event electroencephalography (EEG) via a recurrent fuzzy neural network. In Proceedings of the 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Vancouver, BC, Canada, 24–29 July 2016; pp. 2488–2494. [Google Scholar]
  42. Binias, B.; Myszor, D.; Cyran, K.A. A machine learning approach to the detection of pilot’s reaction to unexpected events based on EEG signals. Comput. Intell. Neurosci. 2018, 2018, 2703513. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  43. Waldert, S. Invasive vs. non-invasive neuronal signals for brain-machine interfaces: Will one prevail? Front. Neurosci. 2016, 10, 295. [Google Scholar] [CrossRef] [Green Version]
  44. Panoulas, K.J.; Hadjileontiadis, L.J.; Panas, S.M. Brain-computer interface (BCI): Types, processing perspectives and applications. In Multimedia Services in Intelligent Environments; Springer: Berlin/Heidelberg, Germany, 2010; pp. 299–321. [Google Scholar]
  45. Wikipedia Contributors. Electrocorticography—Wikipedia, The Free Encyclopedia. 2021. Available online: https://en.wikipedia.org/w/index.php?title=Electrocorticography&oldid=1032187616 (accessed on 8 July 2021).
  46. Kuruvilla, A.; Flink, R. Intraoperative electrocorticography in epilepsy surgery: Useful or not? Seizure 2003, 12, 577–584. [Google Scholar] [CrossRef] [Green Version]
  47. Homan, R.W.; Herman, J.; Purdy, P. Cerebral location of international 10–20 system electrode placement. Electroencephalogr. Clin. Neurophysiol. 1987, 66, 376–382. [Google Scholar] [CrossRef]
  48. Cohen, D. Magnetoencephalography: Evidence of magnetic fields produced by alpha-rhythm currents. Science 1968, 161, 784–786. [Google Scholar] [CrossRef] [PubMed]
  49. Wikipedia Contributors. Human Brain—Wikipedia, The Free Encyclopedia. 2021. Available online: https://en.wikipedia.org/w/index.php?title=Human_brain&oldid=1032229379 (accessed on 8 July 2021).
  50. Zimmerman, J.; Thiene, P.; Harding, J. Design and operation of stable rf-biased superconducting point-contact quantum devices, and a note on the properties of perfectly clean metal contacts. J. Appl. Phys. 1970, 41, 1572–1580. [Google Scholar] [CrossRef]
  51. Wilson, J.A.; Felton, E.A.; Garell, P.C.; Schalk, G.; Williams, J.C. ECoG factors underlying multimodal control of a brain-computer interface. IEEE Trans. Neural Syst. Rehabil. Eng. 2006, 14, 246–250. [Google Scholar] [CrossRef]
  52. Weiskopf, N.; Veit, R.; Erb, M.; Mathiak, K.; Grodd, W.; Goebel, R.; Birbaumer, N. Physiological self-regulation of regional brain activity using real-time functional magnetic resonance imaging (fMRI): Methodology and exemplary data. Neuroimage 2003, 19, 577–586. [Google Scholar] [CrossRef]
  53. Ramadan, R.A.; Vasilakos, A.V. Brain computer interface: Control signals review. Neurocomputing 2017, 223, 26–44. [Google Scholar] [CrossRef]
  54. Huisman, T. Diffusion-weighted and diffusion tensor imaging of the brain, made easy. Cancer Imaging 2010, 10, S163. [Google Scholar] [CrossRef] [Green Version]
  55. Borkowski, K.; Krzyżak, A.T. Analysis and correction of errors in DTI-based tractography due to diffusion gradient inhomogeneity. J. Magn. Reson. 2018, 296, 5–11. [Google Scholar] [CrossRef]
  56. Purnell, J.; Klopfenstein, B.; Stevens, A.; Havel, P.J.; Adams, S.; Dunn, T.; Krisky, C.; Rooney, W. Brain functional magnetic resonance imaging response to glucose and fructose infusions in humans. Diabetes Obes. Metab. 2011, 13, 229–234. [Google Scholar] [CrossRef] [Green Version]
  57. Tai, Y.; Piccini, P. Applications of positron emission tomography (PET) in neurology. J. Neurol. Neurosurg. Psychiatry 2004, 75, 669–676. [Google Scholar] [CrossRef]
  58. Walker, S.M.; Lim, I.; Lindenberg, L.; Mena, E.; Choyke, P.L.; Turkbey, B. Positron emission tomography (PET) radiotracers for prostate cancer imaging. Abdom. Radiol. 2020, 45, 2165–2175. [Google Scholar] [CrossRef]
  59. Wang, Y.; Wang, R.; Gao, X.; Hong, B.; Gao, S. A practical VEP-based brain-computer interface. IEEE Trans. Neural Syst. Rehabil. Eng. 2006, 14, 234–240. [Google Scholar] [CrossRef]
  60. Lim, J.H.; Hwang, H.J.; Han, C.H.; Jung, K.Y.; Im, C.H. Classification of binary intentions for individuals with impaired oculomotor function: ‘eyes-closed’ SSVEP-based brain–computer interface (BCI). J. Neural Eng. 2013, 10, 026021. [Google Scholar] [CrossRef] [PubMed]
  61. Bera, T.K. Noninvasive electromagnetic methods for brain monitoring: A technical review. In Brain-Computer Interfaces; Springer: Berlin/Heidelberg, Germany, 2015; pp. 51–95. [Google Scholar]
  62. Zhu, D.; Bieger, J.; Garcia Molina, G.; Aarts, R.M. A survey of stimulation methods used in SSVEP-based BCIs. Comput. Intell. Neurosci. 2010, 2010, 702357. [Google Scholar] [CrossRef] [PubMed]
  63. Polich, J. Updating P300: An integrative theory of P3a and P3b. Clin. Neurophysiol. 2007, 118, 2128–2148. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  64. Golub, M.D.; Chase, S.M.; Batista, A.P.; Byron, M.Y. Brain–computer interfaces for dissecting cognitive processes underlying sensorimotor control. Curr. Opin. Neurobiol. 2016, 37, 53–58. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  65. Kim, J.H.; Kim, B.C.; Byun, Y.T.; Jhon, Y.M.; Lee, S.; Woo, D.H.; Kim, S.H. All-optical AND gate using cross-gain modulation in semiconductor optical amplifiers. Jpn. J. Appl. Phys. 2004, 43, 608. [Google Scholar] [CrossRef]
  66. Dobrea, M.C.; Dobrea, D.M. The selection of proper discriminative cognitive tasks—A necessary prerequisite in high-quality BCI applications. In Proceedings of the 2009 2nd International Symposium on Applied Sciences in Biomedical and Communication Technologies, Bratislava, Slovakia, 24–27 November 2009; pp. 1–6. [Google Scholar]
  67. Penny, W.D.; Roberts, S.J.; Curran, E.A.; Stokes, M.J. EEG-based communication: A pattern recognition approach. IEEE Trans. Rehabil. Eng. 2000, 8, 214–215. [Google Scholar] [CrossRef] [PubMed]
  68. Amiri, S.; Fazel-Rezai, R.; Asadpour, V. A review of hybrid brain-computer interface systems. Adv. Hum. Comput. Interact. 2013, 2013, 187024. [Google Scholar] [CrossRef]
  69. Mustafa, M. Auditory Evoked Potential (AEP) Based Brain-Computer Interface (BCI) Technology: A Short Review. Adv. Robot. Autom. Data Anal. 2021, 1350, 272. [Google Scholar]
  70. Cho, H.; Ahn, M.; Ahn, S.; Kwon, M.; Jun, S.C. EEG datasets for motor imagery brain–computer interface. GigaScience 2017, 6, gix034. [Google Scholar] [CrossRef] [PubMed]
  71. Gaur, P.; Gupta, H.; Chowdhury, A.; McCreadie, K.; Pachori, R.B.; Wang, H. A Sliding Window Common Spatial Pattern for Enhancing Motor Imagery Classification in EEG-BCI. IEEE Trans. Instrum. Meas. 2021, 70, 1–9. [Google Scholar] [CrossRef]
  72. Long, J.; Li, Y.; Yu, T.; Gu, Z. Target selection with hybrid feature for BCI-based 2-D cursor control. IEEE Trans. Biomed. Eng. 2011, 59, 132–140. [Google Scholar] [CrossRef]
  73. Ahn, S.; Ahn, M.; Cho, H.; Jun, S.C. Achieving a hybrid brain-computer interface with tactile selective attention and motor imagery. J. Neural Eng. 2014, 11, 066004. [Google Scholar] [CrossRef] [PubMed]
  74. Wang, H.; Li, Y.; Long, J.; Yu, T.; Gu, Z. An asynchronous wheelchair control by hybrid EEG–EOG brain-computer interface. Cogn. Neurodyn. 2014, 8, 399–409. [Google Scholar] [CrossRef]
  75. Alomari, M.H.; AbuBaker, A.; Turani, A.; Baniyounes, A.M.; Manasreh, A. EEG mouse: A machine learning-based brain computer interface. Int. J. Adv. Comput. Sci. Appl. 2014, 5, 193–198. [Google Scholar]
  76. Xu, B.G.; Song, A.G. Pattern recognition of motor imagery EEG using wavelet transform. J. Biomed. Sci. Eng. 2008, 1, 64. [Google Scholar] [CrossRef] [Green Version]
  77. Wang, X.; Hersche, M.; Tömekce, B.; Kaya, B.; Magno, M.; Benini, L. An accurate eegnet-based motor-imagery brain–computer interface for low-power edge computing. In Proceedings of the 2020 IEEE International Symposium on Medical Measurements and Applications (MeMeA), Bari, Italy, 1 June–1 July 2020; pp. 1–6. [Google Scholar]
  78. Kayikcioglu, T.; Aydemir, O. A polynomial fitting and k-NN based approach for improving classification of motor imagery BCI data. Pattern Recognit. Lett. 2010, 31, 1207–1215. [Google Scholar] [CrossRef]
  79. Loboda, A.; Margineanu, A.; Rotariu, G.; Lazar, A.M. Discrimination of EEG-based motor imagery tasks by means of a simple phase information method. Int. J. Adv. Res. Artif. Intell. 2014, 3, 10. [Google Scholar] [CrossRef] [Green Version]
  80. Alexandre, B.; Rafal, C. Grasp-and-Lift EEG Detection, Identify Hand Motions from EEG Recordings Competition Dataset. Available online: https://www.kaggle.com/c/grasp-and-lift-eeg-detection/data (accessed on 19 August 2021).
  81. Chen, X.; Zhao, B.; Wang, Y.; Xu, S.; Gao, X. Control of a 7-DOF robotic arm system with an SSVEP-based BCI. Int. J. Neural Syst. 2018, 28, 1850018. [Google Scholar] [CrossRef] [PubMed]
  82. Lin, B.; Deng, S.; Gao, H.; Yin, J. A multi-scale activity transition network for data translation in EEG signals decoding. IEEE/ACM Trans. Comput. Biol. Bioinform. 2020. [Google Scholar] [CrossRef]
  83. Neuper, C.; Müller-Putz, G.R.; Scherer, R.; Pfurtscheller, G. Motor imagery and EEG-based control of spelling devices and neuroprostheses. Prog. Brain Res. 2006, 159, 393–409. [Google Scholar] [PubMed]
  84. Ko, W.; Yoon, J.; Kang, E.; Jun, E.; Choi, J.S.; Suk, H.I. Deep recurrent spatio-temporal neural network for motor imagery based BCI. In Proceedings of the 2018 6th International Conference on Brain-Computer Interface (BCI), Gangwon, Korea, 15–17 January 2018; pp. 1–3. [Google Scholar]
  85. Duan, F.; Lin, D.; Li, W.; Zhang, Z. Design of a multimodal EEG-based hybrid BCI system with visual servo module. IEEE Trans. Auton. Ment. Dev. 2015, 7, 332–341. [Google Scholar] [CrossRef]
  86. Kaya, M.; Binli, M.K.; Ozbay, E.; Yanar, H.; Mishchenko, Y. A large electroencephalographic motor imagery dataset for electroencephalographic brain computer interfaces. Sci. Data 2018, 5, 1–16. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  87. Duan, L.; Zhong, H.; Miao, J.; Yang, Z.; Ma, W.; Zhang, X. A voting optimized strategy based on ELM for improving classification of motor imagery BCI data. Cogn. Comput. 2014, 6, 477–483. [Google Scholar] [CrossRef]
  88. Hossain, I.; Khosravi, A.; Hettiarachchi, I.; Nahavandi, S. Multiclass informative instance transfer learning framework for motor imagery-based brain-computer interface. Comput. Intell. Neurosci. 2018, 2018, 6323414. [Google Scholar] [CrossRef] [Green Version]
  89. Khan, M.A.; Das, R.; Iversen, H.K.; Puthusserypady, S. Review on motor imagery based BCI systems for upper limb post-stroke neurorehabilitation: From designing to application. Comput. Biol. Med. 2020, 123, 103843. [Google Scholar] [CrossRef]
  90. Duan, L.; Bao, M.; Miao, J.; Xu, Y.; Chen, J. Classification based on multilayer extreme learning machine for motor imagery task from EEG signals. Procedia Comput. Sci. 2016, 88, 176–184. [Google Scholar] [CrossRef] [Green Version]
  91. Velasco-Álvarez, F.; Ron-Angevin, R.; da Silva-Sauer, L.; Sancha-Ros, S. Audio-cued motor imagery-based brain–computer interface: Navigation through virtual and real environments. Neurocomputing 2013, 121, 89–98. [Google Scholar] [CrossRef]
  92. Ahn, M.; Jun, S.C. Performance variation in motor imagery brain–computer interface: A brief review. J. Neurosci. Methods 2015, 243, 103–110. [Google Scholar] [CrossRef] [PubMed]
  93. Blankertz, B.; Müller, K.R.; Krusienski, D.; Schalk, G.; Wolpaw, J.R.; Schlögl, A.; Pfurtscheller, G.; Millán, J.d.R.; Schröder, M.; Birbaumer, N. BCI Competition iii. 2005. Available online: http://www.bbci.de/competition/iii/ (accessed on 19 August 2021).
  94. Blankertz, B.; Muller, K.R.; Krusienski, D.J.; Schalk, G.; Wolpaw, J.R.; Schlogl, A.; Pfurtscheller, G.; Millan, J.R.; Schroder, M.; Birbaumer, N. The BCI competition III: Validating alternative approaches to actual BCI problems. IEEE Trans. Neural Syst. Rehabil. Eng. 2006, 14, 153–159. [Google Scholar] [CrossRef]
  95. Jin, J.; Miao, Y.; Daly, I.; Zuo, C.; Hu, D.; Cichocki, A. Correlation-based channel selection and regularized feature optimization for MI-based BCI. Neural Netw. 2019, 118, 262–270. [Google Scholar] [CrossRef] [PubMed]
  96. Lemm, S.; Schafer, C.; Curio, G. BCI competition 2003-data set III: Probabilistic modeling of sensorimotor/spl mu/rhythms for classification of imaginary hand movements. IEEE Trans. Biomed. Eng. 2004, 51, 1077–1080. [Google Scholar] [CrossRef]
  97. Tangermann, M.; Müller, K.R.; Aertsen, A.; Birbaumer, N.; Braun, C.; Brunner, C.; Leeb, R.; Mehring, C.; Miller, K.J.; Mueller-Putz, G.; et al. Review of the BCI competition IV. Front. Neurosci. 2012, 6, 55. [Google Scholar]
  98. Park, Y.; Chung, W. Frequency-optimized local region common spatial pattern approach for motor imagery classification. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 1378–1388. [Google Scholar] [CrossRef]
  99. Wang, D.; Miao, D.; Blohm, G. Multi-class motor imagery EEG decoding for brain-computer interfaces. Front. Neurosci. 2012, 6, 151. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  100. Nguyen, T.; Hettiarachchi, I.; Khatami, A.; Gordon-Brown, L.; Lim, C.P.; Nahavandi, S. Classification of multi-class BCI data by common spatial pattern and fuzzy system. IEEE Access 2018, 6, 27873–27884. [Google Scholar] [CrossRef]
  101. Satti, A.; Guan, C.; Coyle, D.; Prasad, G. A covariate shift minimisation method to alleviate non-stationarity effects for an adaptive brain-computer interface. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 105–108. [Google Scholar]
  102. Sakhavi, S.; Guan, C.; Yan, S. Parallel convolutional-linear neural network for motor imagery classification. In Proceedings of the 2015 23rd European Signal Processing Conference (EUSIPCO), Nice, France, 31 August–4 September 2015; pp. 2736–2740. [Google Scholar]
  103. Raza, H.; Cecotti, H.; Li, Y.; Prasad, G. Adaptive learning with covariate shift-detection for motor imagery-based brain–computer interface. Soft Comput. 2016, 20, 3085–3096. [Google Scholar] [CrossRef]
  104. Selim, S.; Tantawi, M.M.; Shedeed, H.A.; Badr, A. A CSP∖AM-BA-SVM Approach for Motor Imagery BCI System. IEEE Access 2018, 6, 49192–49208. [Google Scholar] [CrossRef]
  105. Hersche, M.; Rellstab, T.; Schiavone, P.D.; Cavigelli, L.; Benini, L.; Rahimi, A. Fast and accurate multiclass inference for MI-BCIs using large multiscale temporal and spectral features. In Proceedings of the 2018 26th European Signal Processing Conference (EUSIPCO), Rome, Italy, 3–7 September 2018; pp. 1690–1694. [Google Scholar]
  106. Sakhavi, S.; Guan, C.; Yan, S. Learning temporal information for brain-computer interface using convolutional neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 5619–5629. [Google Scholar] [CrossRef] [PubMed]
  107. Hossain, I.; Khosravi, A.; Nahavandhi, S. Active transfer learning and selective instance transfer with active learning for motor imagery based BCI. In Proceedings of the 2016 International Joint Conference on Neural Networks (IJCNN), Vancouver, BC, Canada, 24–29 July 2016; pp. 4048–4055. [Google Scholar]
  108. Zhu, X.; Li, P.; Li, C.; Yao, D.; Zhang, R.; Xu, P. Separated channel convolutional neural network to realize the training free motor imagery BCI systems. Biomed. Signal Process. Control. 2019, 49, 396–403. [Google Scholar] [CrossRef]
  109. Sun, L.; Feng, Z.; Chen, B.; Lu, N. A contralateral channel guided model for EEG based motor imagery classification. Biomed. Signal Process. Control. 2018, 41, 1–9. [Google Scholar] [CrossRef]
  110. Uran, A.; Van Gemeren, C.; van Diepen, R.; Chavarriaga, R.; Millán, J.d.R. Applying transfer learning to deep learned models for EEG analysis. arXiv 2019, arXiv:1907.01332. [Google Scholar]
  111. Gandhi, V.; Prasad, G.; Coyle, D.; Behera, L.; McGinnity, T.M. Evaluating Quantum Neural Network filtered motor imagery brain-computer interface using multiple classification techniques. Neurocomputing 2015, 170, 161–167. [Google Scholar] [CrossRef]
  112. Ha, K.W.; Jeong, J.W. Motor imagery EEG classification using capsule networks. Sensors 2019, 19, 2854. [Google Scholar] [CrossRef] [Green Version]
  113. Schirrmeister, R.T.; Springenberg, J.T.; Fiederer, L.D.J.; Glasstetter, M.; Eggensperger, K.; Tangermann, M.; Hutter, F.; Burgard, W.; Ball, T. Deep learning with convolutional neural networks for EEG decoding and visualization. Hum. Brain Mapp. 2017. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  114. Ahn, M.; Cho, H.; Ahn, S.; Jun, S.C. High theta and low alpha powers may be indicative of BCI-illiteracy in motor imagery. PLoS ONE 2013, 8, e80886. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  115. Amin, S.U.; Alsulaiman, M.; Muhammad, G.; Mekhtiche, M.A.; Hossain, M.S. Deep Learning for EEG motor imagery classification based on multi-layer CNNs feature fusion. Future Gener. Comput. Syst. 2019, 101, 542–554. [Google Scholar] [CrossRef]
  116. Li, Y.; Zhang, X.R.; Zhang, B.; Lei, M.Y.; Cui, W.G.; Guo, Y.Z. A channel-projection mixed-scale convolutional neural network for motor imagery EEG decoding. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 1170–1180. [Google Scholar] [CrossRef] [PubMed]
  117. Ahn, M.; Ahn, S.; Hong, J.H.; Cho, H.; Kim, K.; Kim, B.S.; Chang, J.W.; Jun, S.C. Gamma band activity associated with BCI performance: Simultaneous MEG/EEG study. Front. Hum. Neurosci. 2013, 7, 848. [Google Scholar] [CrossRef] [Green Version]
  118. Wang, W.; Degenhart, A.D.; Sudre, G.P.; Pomerleau, D.A.; Tyler-Kabara, E.C. Decoding semantic information from human electrocorticographic (ECoG) signals. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. 2011, 2011, 6294–6298. [Google Scholar] [PubMed]
  119. Williams, J.J.; Rouse, A.G.; Thongpang, S.; Williams, J.C.; Moran, D.W. Differentiating closed-loop cortical intention from rest: Building an asynchronous electrocorticographic BCI. J. Neural Eng. 2013, 10, 046001. [Google Scholar] [CrossRef]
  120. Li, Z.; Qiu, L.; Li, R.; He, Z.; Xiao, J.; Liang, Y.; Wang, F.; Pan, J. Enhancing BCI-Based emotion recognition using an improved particle swarm optimization for feature selection. Sensors 2020, 20, 3028. [Google Scholar] [CrossRef]
  121. Onose, G.; Grozea, C.; Anghelescu, A.; Daia, C.; Sinescu, C.; Ciurea, A.; Spircu, T.; Mirea, A.; Andone, I.; Spânu, A.; et al. On the feasibility of using motor imagery EEG-based brain–computer interface in chronic tetraplegics for assistive robotic arm control: A clinical test and long-term post-trial follow-up. Spinal Cord 2012, 50, 599–608. [Google Scholar] [CrossRef] [Green Version]
  122. Meng, J.; Streitz, T.; Gulachek, N.; Suma, D.; He, B. Three-dimensional brain–computer interface control through simultaneous overt spatial attentional and motor imagery tasks. IEEE Trans. Biomed. Eng. 2018, 65, 2417–2427. [Google Scholar] [CrossRef]
  123. Kosmyna, N.; Tarpin-Bernard, F.; Rivet, B. Towards brain computer interfaces for recreational activities: Piloting a drone. In IFIP Conference on Human-Computer Interaction; Springer: Berlin/Heidelberg, Germany, 2015; pp. 506–522. [Google Scholar]
  124. Dua, D.; Graff, C. UCI Machine Learning Repository; University of California: Irvine, CA, USA, 2017. [Google Scholar]
  125. Sonkin, K.M.; Stankevich, L.A.; Khomenko, J.G.; Nagornova, Z.V.; Shemyakina, N.V. Development of electroencephalographic pattern classifiers for real and imaginary thumb and index finger movements of one hand. Artif. Intell. Med. 2015, 63, 107–117. [Google Scholar] [CrossRef]
  126. Müller-Putz, G.R.; Pokorny, C.; Klobassa, D.S.; Horki, P. A single-switch BCI based on passive and imagined movements: Toward restoring communication in minimally conscious patients. Int. J. Neural Syst. 2013, 23, 1250037. [Google Scholar] [CrossRef]
  127. Eskandari, P.; Erfanian, A. Improving the performance of brain-computer interface through meditation practicing. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. 2008, 2008, 662–665. [Google Scholar]
  128. Edelman, B.J.; Baxter, B.; He, B. EEG source imaging enhances the decoding of complex right-hand motor imagery tasks. IEEE Trans. Biomed. Eng. 2015, 63, 4–14. [Google Scholar] [CrossRef]
  129. Lotte, F.; Jeunet, C. Defining and quantifying users’ mental imagery-based BCI skills: A first step. J. Neural Eng. 2018, 15, 046030. [Google Scholar] [CrossRef] [Green Version]
  130. Jeunet, C.; N’Kaoua, B.; Subramanian, S.; Hachet, M.; Lotte, F. Predicting mental imagery-based BCI performance from personality, cognitive profile and neurophysiological patterns. PLoS ONE 2015, 10, e0143962. [Google Scholar] [CrossRef]
  131. Rathee, D.; Cecotti, H.; Prasad, G. Single-trial effective brain connectivity patterns enhance discriminability of mental imagery tasks. J. Neural Eng. 2017, 14, 056005. [Google Scholar] [CrossRef]
  132. Sadiq, M.T.; Yu, X.; Yuan, Z.; Aziz, M.Z. Identification of motor and mental imagery EEG in two and multiclass subject-dependent tasks using successive decomposition index. Sensors 2020, 20, 5283. [Google Scholar] [CrossRef] [PubMed]
133. Lotte, F.; Jeunet, C. Online classification accuracy is a poor metric to study mental imagery-based BCI user learning: An experimental demonstration and new metrics. In Proceedings of the 7th International BCI Conference, Pacific Grove, CA, USA, 21–25 May 2017. [Google Scholar]
134. Wierzgała, P.; Zapała, D.; Wojcik, G.M.; Masiak, J. Most popular signal processing methods in motor-imagery BCI: A review and meta-analysis. Front. Neuroinform. 2018, 12, 78. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  135. Park, C.; Looney, D.; ur Rehman, N.; Ahrabian, A.; Mandic, D.P. Classification of motor imagery BCI using multivariate empirical mode decomposition. IEEE Trans. Neural Syst. Rehabil. Eng. 2012, 21, 10–22. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  136. Alexandre, B.; Rafal, C. BCI Challenge @ NER 2015, A Spell on You If You Cannot Detect Errors! Available online: https://www.kaggle.com/c/inria-bci-challenge/data (accessed on 19 August 2021).
  137. Mahmud, M.; Kaiser, M.S.; McGinnity, T.M.; Hussain, A. Deep learning in mining biological data. Cogn. Comput. 2021, 13, 1–33. [Google Scholar] [CrossRef] [PubMed]
  138. Cruz, A.; Pires, G.; Nunes, U.J. Double ErrP detection for automatic error correction in an ERP-based BCI speller. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 26, 26–36. [Google Scholar] [CrossRef] [PubMed]
  139. Bhattacharyya, S.; Konar, A.; Tibarewala, D.N.; Hayashibe, M. A generic transferable EEG decoder for online detection of error potential in target selection. Front. Neurosci. 2017, 11, 226. [Google Scholar] [CrossRef] [PubMed]
  140. Jrad, N.; Congedo, M.; Phlypo, R.; Rousseau, S.; Flamary, R.; Yger, F.; Rakotomamonjy, A. sw-SVM: Sensor weighting support vector machines for EEG-based brain–computer interfaces. J. Neural Eng. 2011, 8, 056004. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  141. Zeyl, T.; Yin, E.; Keightley, M.; Chau, T. Partially supervised P300 speller adaptation for eventual stimulus timing optimization: Target confidence is superior to error-related potential score as an uncertain label. J. Neural Eng. 2016, 13, 026008. [Google Scholar] [CrossRef] [PubMed]
  142. Wirth, C.; Dockree, P.; Harty, S.; Lacey, E.; Arvaneh, M. Towards error categorisation in BCI: Single-trial EEG classification between different errors. J. Neural Eng. 2019, 17, 016008. [Google Scholar] [CrossRef] [PubMed]
  143. Combaz, A.; Chumerin, N.; Manyakov, N.V.; Robben, A.; Suykens, J.A.; Van Hulle, M.M. Towards the detection of error-related potentials and its integration in the context of a P300 speller brain–computer interface. Neurocomputing 2012, 80, 73–82. [Google Scholar] [CrossRef]
  144. Zeyl, T.; Yin, E.; Keightley, M.; Chau, T. Improving bit rate in an auditory BCI: Exploiting error-related potentials. Brain-Comput. Interfaces 2016, 3, 75–87. [Google Scholar] [CrossRef]
  145. Spüler, M.; Niethammer, C. Error-related potentials during continuous feedback: Using EEG to detect errors of different type and severity. Front. Hum. Neurosci. 2015, 9, 155. [Google Scholar]
  146. Kreilinger, A.; Neuper, C.; Müller-Putz, G.R. Error potential detection during continuous movement of an artificial arm controlled by brain–computer interface. Med. Biol. Eng. Comput. 2012, 50, 223–230. [Google Scholar] [CrossRef]
  147. Kreilinger, A.; Hiebel, H.; Müller-Putz, G.R. Single versus multiple events error potential detection in a BCI-controlled car game with continuous and discrete feedback. IEEE Trans. Biomed. Eng. 2015, 63, 519–529. [Google Scholar] [CrossRef]
  148. Dias, C.L.; Sburlea, A.I.; Müller-Putz, G.R. Masked and unmasked error-related potentials during continuous control and feedback. J. Neural Eng. 2018, 15, 036031. [Google Scholar] [CrossRef] [Green Version]
  149. Koelstra, S.; Muhl, C.; Soleymani, M.; Lee, J.S.; Yazdani, A.; Ebrahimi, T.; Pun, T.; Nijholt, A.; Patras, I. Deap: A database for emotion analysis; using physiological signals. IEEE Trans. Affect. Comput. 2011, 3, 18–31. [Google Scholar] [CrossRef] [Green Version]
  150. Atkinson, J.; Campos, D. Improving BCI-based emotion recognition by combining EEG feature selection and kernel classifiers. Expert Syst. Appl. 2016, 47, 35–41. [Google Scholar] [CrossRef]
  151. Lan, Z.; Sourina, O.; Wang, L.; Scherer, R.; Müller-Putz, G.R. Domain adaptation techniques for EEG-based emotion recognition: A comparative study on two public datasets. IEEE Trans. Cogn. Dev. Syst. 2018, 11, 85–94. [Google Scholar] [CrossRef]
152. Al-Nafjan, A.; Hosny, M.; Al-Wabil, A.; Al-Ohali, Y. Classification of human emotions from electroencephalogram (EEG) signal using deep neural network. Int. J. Adv. Comput. Sci. Appl. 2017, 8, 419–425. [Google Scholar] [CrossRef]
  153. Chen, J.; Zhang, P.; Mao, Z.; Huang, Y.; Jiang, D.; Zhang, Y. Accurate EEG-based emotion recognition on combined features using deep convolutional neural networks. IEEE Access 2019, 7, 44317–44328. [Google Scholar] [CrossRef]
  154. Sánchez-Reolid, R.; García, A.S.; Vicente-Querol, M.A.; Fernández-Aguilar, L.; López, M.T.; Fernández-Caballero, A.; González, P. Artificial neural networks to assess emotional states from brain-computer interface. Electronics 2018, 7, 384. [Google Scholar] [CrossRef] [Green Version]
155. Yang, Y.; Wu, Q.; Fu, Y.; Chen, X. Continuous convolutional neural network with 3D input for EEG-based emotion recognition. In International Conference on Neural Information Processing; Springer: Berlin/Heidelberg, Germany, 2018; pp. 433–443. [Google Scholar]
  156. Liu, J.; Wu, G.; Luo, Y.; Qiu, S.; Yang, S.; Li, W.; Bi, Y. EEG-based emotion classification using a deep neural network and sparse autoencoder. Front. Syst. Neurosci. 2020, 14, 43. [Google Scholar] [CrossRef]
  157. Lim, W.; Sourina, O.; Wang, L. STEW: Simultaneous task EEG workload data set. IEEE Trans. Neural Syst. Rehabil. Eng. 2018, 26, 2106–2114. [Google Scholar] [CrossRef]
  158. Savran, A.; Ciftci, K.; Chanel, G.; Mota, J.; Hong Viet, L.; Sankur, B.; Akarun, L.; Caplier, A.; Rombaut, M. Emotion detection in the loop from brain signals and facial images. In Proceedings of the eNTERFACE 2006 Workshop, Dubrovnik, Croatia, 17 July–11 August 2006. [Google Scholar]
  159. Onton, J.A.; Makeig, S. High-frequency broadband modulation of electroencephalographic spectra. Front. Hum. Neurosci. 2009, 3, 61. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  160. Data-EEG-25-users-Neuromarketing, Recorded EEG Signals While Viewing Consumer Products on Computer Screen, Indian Institute of Technology, Roorkee, India. Available online: https://drive.google.com/file/d/0B2T1rQUvyyWcSGVVaHZBZzRtTms/view?resourcekey=0-wuVvZnp9Ub89GMoErrxSrQ (accessed on 19 August 2021).
  161. Yadava, M.; Kumar, P.; Saini, R.; Roy, P.P.; Dogra, D.P. Analysis of EEG signals and its application to neuromarketing. Multimed. Tools Appl. 2017, 76, 19087–19111. [Google Scholar] [CrossRef]
  162. Aldayel, M.; Ykhlef, M.; Al-Nafjan, A. Deep learning for EEG-based preference classification in neuromarketing. Appl. Sci. 2020, 10, 1525. [Google Scholar] [CrossRef] [Green Version]
163. Zheng, W.; Liu, W.; Lu, Y.; Lu, B.; Cichocki, A. EmotionMeter: A multimodal framework for recognizing human emotions. IEEE Trans. Cybern. 2018, 1–13. [Google Scholar] [CrossRef] [PubMed]
  164. Seidler, T.G.; Plotkin, J.B. Seed dispersal and spatial pattern in tropical trees. PLoS Biol. 2006, 4, e344. [Google Scholar] [CrossRef] [PubMed]
  165. Getzin, S.; Wiegand, T.; Hubbell, S.P. Stochastically driven adult–recruit associations of tree species on Barro Colorado Island. Proc. R. Soc. Biol. Sci. 2014, 281, 20140922. [Google Scholar] [CrossRef] [PubMed]
166. Kong, X.; Kong, W.; Fan, Q.; Zhao, Q.; Cichocki, A. Task-independent EEG identification via low-rank matrix decomposition. In Proceedings of the 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Madrid, Spain, 3–6 December 2018; pp. 412–419. [Google Scholar]
  167. González, J.; Ortega, J.; Damas, M.; Martín-Smith, P.; Gan, J.Q. A new multi-objective wrapper method for feature selection–Accuracy and stability analysis for BCI. Neurocomputing 2019, 333, 407–418. [Google Scholar] [CrossRef] [Green Version]
  168. Dalling, J.W.; Brown, T.A. Long-term persistence of pioneer species in tropical rain forest soil seed banks. Am. Nat. 2009, 173, 531–535. [Google Scholar] [CrossRef] [PubMed] [Green Version]
169. Aznan, N.K.N.; Atapour-Abarghouei, A.; Bonner, S.; Connolly, J.D.; Al Moubayed, N.; Breckon, T.P. Simulating brain signals: Creating synthetic EEG data via neural-based generative models for improved SSVEP classification. In Proceedings of the 2019 International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, 14–19 July 2019; pp. 1–8. [Google Scholar]
  170. Zhong, P.; Wang, D.; Miao, C. EEG-based emotion recognition using regularized graph neural networks. IEEE Trans. Affect. Comput. 2020. [Google Scholar] [CrossRef]
  171. Li, H.; Jin, Y.M.; Zheng, W.L.; Lu, B.L. Cross-subject emotion recognition using deep adaptation networks. In International Conference on Neural Information Processing; Springer: Berlin/Heidelberg, Germany, 2018; pp. 403–413. [Google Scholar]
172. Thejaswini, S.; Kumar, D.K.; Nataraj, J.L. Analysis of EEG based emotion detection of DEAP and SEED-IV databases using SVM. In Proceedings of the Second International Conference on Emerging Trends in Science & Technologies for Engineering Systems (ICETSE-2019), Bengaluru, India, 17–18 May 2019. [Google Scholar]
  173. Liu, W.; Qiu, J.L.; Zheng, W.L.; Lu, B.L. Multimodal emotion recognition using deep canonical correlation analysis. arXiv 2019, arXiv:1908.05349. [Google Scholar]
  174. Rim, B.; Sung, N.J.; Min, S.; Hong, M. Deep learning in physiological signal data: A survey. Sensors 2020, 20, 969. [Google Scholar] [CrossRef] [Green Version]
  175. Cimtay, Y.; Ekmekcioglu, E. Investigating the use of pretrained convolutional neural network on cross-subject and cross-dataset EEG emotion recognition. Sensors 2020, 20, 2034. [Google Scholar] [CrossRef] [Green Version]
  176. Zheng, W.L.; Lu, B.L. A multimodal approach to estimating vigilance using EEG and forehead EOG. J. Neural Eng. 2017, 14, 026017. [Google Scholar] [CrossRef] [PubMed]
177. Ma, B.Q.; Li, H.; Zheng, W.L.; Lu, B.L. Reducing the subject variability of EEG signals with adversarial domain generalization. In International Conference on Neural Information Processing; Springer: Berlin/Heidelberg, Germany, 2019; pp. 30–42. [Google Scholar]
  178. Ko, W.; Oh, K.; Jeon, E.; Suk, H.I. VIGNet: A Deep Convolutional Neural Network for EEG-based Driver Vigilance Estimation. In Proceedings of the 2020 8th International Winter Conference on Brain-Computer Interface (BCI), Gangwon, Korea, 26–28 February 2020; pp. 1–3. [Google Scholar]
  179. Zhang, G.; Etemad, A. RFNet: Riemannian Fusion Network for EEG-based Brain-Computer Interfaces. arXiv 2020, arXiv:2008.08633. [Google Scholar]
180. Munoz, R.; Olivares, R.; Taramasco, C.; Villarroel, R.; Soto, R.; Barcelos, T.S.; Merino, E.; Alonso-Sánchez, M.F. Using black hole algorithm to improve EEG-based emotion recognition. Comput. Intell. Neurosci. 2018, 2018, 3050214. [Google Scholar] [CrossRef]
  181. Izquierdo-Reyes, J.; Ramirez-Mendoza, R.A.; Bustamante-Bello, M.R.; Pons-Rovira, J.L.; Gonzalez-Vargas, J.E. Emotion recognition for semi-autonomous vehicles framework. Int. J. Interact. Des. Manuf. 2018, 12, 1447–1454. [Google Scholar] [CrossRef]
  182. Xu, H.; Plataniotis, K.N. Subject independent affective states classification using EEG signals. In Proceedings of the 2015 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Orlando, FL, USA, 14–16 December 2015; pp. 1312–1316. [Google Scholar]
  183. Drouin-Picaro, A.; Falk, T.H. Using deep neural networks for natural saccade classification from electroencephalograms. In Proceedings of the 2016 IEEE EMBS International Student Conference (ISC), Ottawa, ON, Canada, 29–31 May 2016; pp. 1–4. [Google Scholar]
  184. Al-Nafjan, A.; Hosny, M.; Al-Ohali, Y.; Al-Wabil, A. Review and classification of emotion recognition based on EEG brain-computer interface system research: A systematic review. Appl. Sci. 2017, 7, 1239. [Google Scholar] [CrossRef] [Green Version]
  185. Soleymani, M.; Pantic, M. Multimedia implicit tagging using EEG signals. In Proceedings of the 2013 IEEE International Conference on Multimedia and Expo (ICME), San Jose, CA, USA, 15–19 July 2013; pp. 1–6. [Google Scholar]
  186. Soroush, M.Z.; Maghooli, K.; Setarehdan, S.K.; Nasrabadi, A.M. A review on EEG signals based emotion recognition. Int. Clin. Neurosci. J. 2017, 4, 118. [Google Scholar] [CrossRef]
  187. Faller, J.; Cummings, J.; Saproo, S.; Sajda, P. Regulation of arousal via online neurofeedback improves human performance in a demanding sensory-motor task. Proc. Natl. Acad. Sci. USA 2019, 116, 6482–6490. [Google Scholar] [CrossRef] [Green Version]
  188. Gaume, A.; Dreyfus, G.; Vialatte, F.B. A cognitive brain–computer interface monitoring sustained attentional variations during a continuous task. Cogn. Neurodynamics 2019, 13, 257–269. [Google Scholar] [CrossRef] [Green Version]
189. Pattnaik, P.K.; Sarraf, J. Brain Computer Interface issues on hand movement. J. King Saud Univ. Comput. Inf. Sci. 2018, 30, 18–24. [Google Scholar] [CrossRef] [Green Version]
  190. Weiskopf, N.; Scharnowski, F.; Veit, R.; Goebel, R.; Birbaumer, N.; Mathiak, K. Self-regulation of local brain activity using real-time functional magnetic resonance imaging (fMRI). J. Physiol.-Paris 2004, 98, 357–373. [Google Scholar] [CrossRef]
  191. Cattan, G.; Rodrigues, P.L.C.; Congedo, M. EEG Alpha Waves Dataset. Ph.D. Thesis, GIPSA-LAB, University Grenoble-Alpes, Saint-Martin-d’Hères, France, 2018. [Google Scholar]
  192. Grégoire, C.; Rodrigues, P.; Congedo, M. EEG Alpha Waves Dataset; Centre pour la Communication Scientifique Directe: Grenoble, France, 2019. [Google Scholar]
193. Tirupattur, P.; Rawat, Y.S.; Spampinato, C.; Shah, M. ThoughtViz: Visualizing human thoughts using generative adversarial network. In Proceedings of the 26th ACM International Conference on Multimedia, Seoul, Korea, 22–26 October 2018; pp. 950–958. [Google Scholar]
194. Walker, I.; Deisenroth, M.; Faisal, A. Deep Convolutional Neural Networks for Brain Computer Interface Using Motor Imagery; Imperial College of Science, Technology and Medicine, Department of Computing: London, UK, 2015; p. 68. [Google Scholar]
  195. Spampinato, C.; Palazzo, S.; Kavasidis, I.; Giordano, D.; Souly, N.; Shah, M. Deep learning human mind for automated visual classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 6809–6817. [Google Scholar]
  196. Tan, C.; Sun, F.; Zhang, W. Deep transfer learning for EEG-based brain computer interface. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018; pp. 916–920. [Google Scholar]
  197. Xu, G.; Shen, X.; Chen, S.; Zong, Y.; Zhang, C.; Yue, H.; Liu, M.; Chen, F.; Che, W. A deep transfer convolutional neural network framework for EEG signal classification. IEEE Access 2019, 7, 112767–112776. [Google Scholar] [CrossRef]
  198. Fahimi, F.; Zhang, Z.; Goh, W.B.; Lee, T.S.; Ang, K.K.; Guan, C. Inter-subject transfer learning with an end-to-end deep convolutional neural network for EEG-based BCI. J. Neural Eng. 2019, 16, 026007. [Google Scholar] [CrossRef] [Green Version]
  199. Tang, J.; Liu, Y.; Hu, D.; Zhou, Z. Towards BCI-actuated smart wheelchair system. Biomed. Eng. Online 2018, 17, 1–22. [Google Scholar] [CrossRef] [Green Version]
  200. Lawhern, V.J.; Solon, A.J.; Waytowich, N.R.; Gordon, S.M.; Hung, C.P.; Lance, B.J. EEGNet: A compact convolutional neural network for EEG-based brain–computer interfaces. J. Neural Eng. 2018, 15, 056013. [Google Scholar] [CrossRef] [Green Version]
  201. Bashivan, P.; Bidelman, G.M.; Yeasin, M. Spectrotemporal dynamics of the EEG during working memory encoding and maintenance predicts individual behavioral capacity. Eur. J. Neurosci. 2014, 40, 3774–3784. [Google Scholar] [CrossRef]
  202. Sprague, S.A.; McBee, M.T.; Sellers, E.W. The effects of working memory on brain–computer interface performance. Clin. Neurophysiol. 2016, 127, 1331–1341. [Google Scholar] [CrossRef] [Green Version]
203. Ramsey, N.F.; van den Heuvel, M.P.; Kho, K.H.; Leijten, F.S. Towards human BCI applications based on cognitive brain systems: An investigation of neural signals recorded from the dorsolateral prefrontal cortex. IEEE Trans. Neural Syst. Rehabil. Eng. 2006, 14, 214–217. [Google Scholar] [CrossRef]
  204. Cutrell, E.; Tan, D. BCI for passive input in HCI. In Proceedings of the CHI, Florence, Italy, 5–10 April 2008; Volume 8, pp. 1–3. [Google Scholar]
  205. Riccio, A.; Simione, L.; Schettini, F.; Pizzimenti, A.; Inghilleri, M.; Olivetti Belardinelli, M.; Mattia, D.; Cincotti, F. Attention and P300-based BCI performance in people with amyotrophic lateral sclerosis. Front. Hum. Neurosci. 2013, 7, 732. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  206. Schabus, M.D.; Dang-Vu, T.T.; Heib, D.P.J.; Boly, M.; Desseilles, M.; Vandewalle, G.; Schmidt, C.; Albouy, G.; Darsaud, A.; Gais, S.; et al. The fate of incoming stimuli during NREM sleep is determined by spindles and the phase of the slow oscillation. Front. Neurol. 2012, 3, 40. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  207. Sun, Y.; Ye, N.; Xu, X. EEG analysis of alcoholics and controls based on feature extraction. In Proceedings of the 2006 8th International Conference on Signal Processing, Guilin, China, 16–20 November 2006; Volume 1. [Google Scholar]
  208. Nguyen, P.; Tran, D.; Huang, X.; Sharma, D. A proposed feature extraction method for EEG-based person identification. In Proceedings of the 2012 International Conference on Artificial Intelligence, Las Vegas, NV, USA, 16–19 July 2012; pp. 826–831. [Google Scholar]
  209. Kjøbli, J.; Tyssen, R.; Vaglum, P.; Aasland, O.; Grønvold, N.T.; Ekeberg, O. Personality traits and drinking to cope as predictors of hazardous drinking among medical students. J. Stud. Alcohol 2004, 65, 582–585. [Google Scholar] [CrossRef] [PubMed]
210. Huang, X.; Altahat, S.; Tran, D.; Sharma, D. Human identification with electroencephalogram (EEG) signal processing. In Proceedings of the 2012 International Symposium on Communications and Information Technologies (ISCIT), Gold Coast, QLD, Australia, 2–5 October 2012; pp. 1021–1026. [Google Scholar]
  211. Palaniappan, R.; Raveendran, P.; Omatu, S. VEP optimal channel selection using genetic algorithm for neural network classification of alcoholics. IEEE Trans. Neural Netw. 2002, 13, 486–491. [Google Scholar] [CrossRef] [PubMed]
  212. Zhong, S.; Ghosh, J. HMMs and coupled HMMs for multi-channel EEG classification. In Proceedings of the 2002 International Joint Conference on Neural Networks, Honolulu, HI, USA, 12–17 May 2002; Volume 2, pp. 1154–1159. [Google Scholar]
  213. Wang, H.; Li, Y.; Hu, X.; Yang, Y.; Meng, Z.; Chang, K.M. Using EEG to Improve Massive Open Online Courses Feedback Interaction. In AIED Workshops; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  214. Wang, H. Confused Student EEG Brainwave Data, EEG Data from 10 Students Watching MOOC Videos. 2018. Available online: https://www.kaggle.com/wanghaohan/confused-eeg/ (accessed on 19 August 2021).
  215. Fahimirad, M.; Kotamjani, S.S. A review on application of artificial intelligence in teaching and learning in educational contexts. Int. J. Learn. Dev. 2018, 8, 106–118. [Google Scholar] [CrossRef]
  216. Kanoga, S.; Nakanishi, M.; Mitsukura, Y. Assessing the effects of voluntary and involuntary eyeblinks in independent components of electroencephalogram. Neurocomputing 2016, 193, 20–32. [Google Scholar] [CrossRef] [Green Version]
  217. Abe, K.; Sato, H.; Ohi, S.; Ohyama, M. Feature parameters of eye blinks when the sampling rate is changed. In Proceedings of the TENCON 2014–2014 IEEE Region 10 Conference, Bangkok, Thailand, 22–25 October 2014; pp. 1–6. [Google Scholar]
  218. Narejo, S.; Pasero, E.; Kulsoom, F. EEG based eye state classification using deep belief network and stacked autoencoder. Int. J. Electr. Comput. Eng. 2016, 6, 3131–3141. [Google Scholar]
  219. Reddy, T.K.; Behera, L. Online eye state recognition from EEG data using deep architectures. In Proceedings of the 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Budapest, Hungary, 9–12 October 2016; pp. 712–717. [Google Scholar]
  220. Lim, C.K.A.; Chia, W.C.; Chin, S.W. A mobile driver safety system: Analysis of single-channel EEG on drowsiness detection. In Proceedings of the 2014 International Conference on Computational Science and Technology (ICCST), Kota Kinabalu, Malaysia, 27–28 August 2014; pp. 1–5. [Google Scholar]
  221. Chun, J.; Bae, B.; Jo, S. BCI based hybrid interface for 3D object control in virtual reality. In Proceedings of the 2016 4th International Winter Conference on Brain-Computer Interface (BCI), Gangwon, Korea, 22–24 February 2016; pp. 1–4. [Google Scholar]
222. Agarwal, M.; Sivakumar, R. Blink: A fully automated unsupervised algorithm for eye-blink detection in EEG signals. In Proceedings of the 2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, USA, 24–27 September 2019; pp. 1113–1121. [Google Scholar]
223. Andreev, A.; Cattan, G.; Congedo, M. Engineering study on the use of head-mounted display for brain-computer interface. arXiv 2019, arXiv:1906.12251. [Google Scholar]
224. Agarwal, M.; Sivakumar, R. Charge for a whole day: Extending battery life for BCI wearables using a lightweight wake-up command. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–14. [Google Scholar]
  225. Rösler, O.; Suendermann, D. A First Step towards Eye State Prediction Using EEG. 2013. Available online: https://www.kaggle.com/c/vibcourseml2020/data/ (accessed on 19 August 2021).
  226. Zhang, Y.; Xu, P.; Guo, D.; Yao, D. Prediction of SSVEP-based BCI performance by the resting-state EEG network. J. Neural Eng. 2013, 10, 066017. [Google Scholar] [CrossRef] [Green Version]
  227. Hamilton, C.R.; Shahryari, S.; Rasheed, K.M. Eye state prediction from EEG data using boosted rotational forests. In Proceedings of the 2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA), Miami, FL, USA, 9–11 December 2015; pp. 429–432. [Google Scholar]
  228. Kim, Y.; Lee, C.; Lim, C. Computing intelligence approach for an eye state classification with EEG signal in BCI. In Proceedings of the 2015 International Conference on Software Engineering and Information Technology (SEIT2015), Guilin, China, 26–28 June 2016; pp. 265–270. [Google Scholar]
  229. Agarwal, M. Publicly Available EEG Datasets. 2021. Available online: https://openbci.com/community/publicly-available-eeg-datasets/ (accessed on 19 August 2021).
  230. Pan, J.; Li, Y.; Gu, Z.; Yu, Z. A comparison study of two P300 speller paradigms for brain–computer interface. Cogn. Neurodynamics 2013, 7, 523–529. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  231. Vareka, L.; Bruha, P.; Moucek, R. Event-related potential datasets based on a three-stimulus paradigm. GigaScience 2014, 3, 2047-217X-3-35. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  232. Gao, W.; Guan, J.A.; Gao, J.; Zhou, D. Multi-ganglion ANN based feature learning with application to P300-BCI signal classification. Biomed. Signal Process. Control. 2015, 18, 127–137. [Google Scholar] [CrossRef]
  233. Marathe, A.R.; Ries, A.J.; Lawhern, V.J.; Lance, B.J.; Touryan, J.; McDowell, K.; Cecotti, H. The effect of target and non-target similarity on neural classification performance: A boost from confidence. Front. Neurosci. 2015, 9, 270. [Google Scholar] [CrossRef] [Green Version]
  234. Shin, J.; Von Lühmann, A.; Kim, D.W.; Mehnert, J.; Hwang, H.J.; Müller, K.R. Simultaneous acquisition of EEG and NIRS during cognitive tasks for an open access dataset. Sci. Data 2018, 5, 1–16. [Google Scholar] [CrossRef]
  235. Håkansson, B.; Reinfeldt, S.; Eeg-Olofsson, M.; Östli, P.; Taghavi, H.; Adler, J.; Gabrielsson, J.; Stenfelt, S.; Granström, G. A novel bone conduction implant (BCI): Engineering aspects and pre-clinical studies. Int. J. Audiol. 2010, 49, 203–215. [Google Scholar] [CrossRef] [Green Version]
  236. Guger, C.; Krausz, G.; Allison, B.Z.; Edlinger, G. Comparison of dry and gel based electrodes for P300 brain–computer interfaces. Front. Neurosci. 2012, 6, 60. [Google Scholar] [CrossRef] [Green Version]
  237. Shahriari, Y.; Vaughan, T.M.; McCane, L.; Allison, B.Z.; Wolpaw, J.R.; Krusienski, D.J. An exploration of BCI performance variations in people with amyotrophic lateral sclerosis using longitudinal EEG data. J. Neural Eng. 2019, 16, 056031. [Google Scholar] [CrossRef]
  238. McCane, L.M.; Sellers, E.W.; McFarland, D.J.; Mak, J.N.; Carmack, C.S.; Zeitlin, D.; Wolpaw, J.R.; Vaughan, T.M. Brain-computer interface (BCI) evaluation in people with amyotrophic lateral sclerosis. Amyotroph. Lateral Scler. Front. Degener. 2014, 15, 207–215. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  239. Miller, K.J.; Schalk, G.; Hermes, D.; Ojemann, J.G.; Rao, R.P. Spontaneous decoding of the timing and content of human object perception from cortical surface recordings reveals complementary information in the event-related potential and broadband spectral change. PLoS Comput. Biol. 2016, 12, e1004660. [Google Scholar] [CrossRef] [PubMed]
  240. Bobrov, P.; Frolov, A.; Cantor, C.; Fedulova, I.; Bakhnyan, M.; Zhavoronkov, A. Brain-computer interface based on generation of visual images. PLoS ONE 2011, 6, e20674. [Google Scholar] [CrossRef] [PubMed]
241. Cancino, S.; Saa, J.D. Electrocorticographic signals classification for brain computer interfaces using stacked-autoencoders. In Applications of Machine Learning 2020; Int. Soc. Opt. Photonics: 2020; Volume 11511, p. 115110J. [Google Scholar]
  242. Wei, Q.; Liu, Y.; Gao, X.; Wang, Y.; Yang, C.; Lu, Z.; Gong, H. A Novel c-VEP BCI Paradigm for Increasing the Number of Stimulus Targets Based on Grouping Modulation With Different Codes. IEEE Trans. Neural Syst. Rehabil. Eng. 2018, 26, 1178–1187. [Google Scholar] [CrossRef]
  243. Bin, G.; Gao, X.; Wang, Y.; Li, Y.; Hong, B.; Gao, S. A high-speed BCI based on code modulation VEP. J. Neural Eng. 2011, 8, 025015. [Google Scholar] [CrossRef]
  244. Gembler, F.W.; Benda, M.; Rezeika, A.; Stawicki, P.R.; Volosyak, I. Asynchronous c-VEP communication tools—Efficiency comparison of low-target, multi-target and dictionary-assisted BCI spellers. Sci. Rep. 2020, 10, 17064. [Google Scholar] [CrossRef]
  245. Spüler, M.; Rosenstiel, W.; Bogdan, M. Online adaptation of a c-VEP brain-computer interface (BCI) based on error-related potentials and unsupervised learning. PLoS ONE 2012, 7, e51077. [Google Scholar] [CrossRef] [Green Version]
  246. Kapeller, C.; Hintermüller, C.; Abu-Alqumsan, M.; Prückl, R.; Peer, A.; Guger, C. A BCI using VEP for continuous control of a mobile robot. In Proceedings of the 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 3–7 July 2013; pp. 5254–5257. [Google Scholar]
  247. Spüler, M.; Rosenstiel, W.; Bogdan, M. One Class SVM and Canonical Correlation Analysis increase performance in a c-VEP based Brain-Computer Interface (BCI). ESANN 2012. [Google Scholar] [CrossRef]
  248. Bin, G.; Gao, X.; Wang, Y.; Hong, B.; Gao, S. VEP-based brain-computer interfaces: Time, frequency, and code modulations [Research Frontier]. IEEE Comput. Intell. Mag. 2009, 4, 22–26. [Google Scholar] [CrossRef]
  249. Zhang, Y.; Yin, E.; Li, F.; Zhang, Y.; Tanaka, T.; Zhao, Q.; Cui, Y.; Xu, P.; Yao, D.; Guo, D. Two-stage frequency recognition method based on correlated component analysis for SSVEP-based BCI. IEEE Trans. Neural Syst. Rehabil. Eng. 2018, 26, 1314–1323. [Google Scholar] [CrossRef]
  250. Wang, Y.; Chen, X.; Gao, X.; Gao, S. A benchmark dataset for SSVEP-based brain–computer interfaces. IEEE Trans. Neural Syst. Rehabil. Eng. 2016, 25, 1746–1752. [Google Scholar] [CrossRef]
  251. Podmore, J.J.; Breckon, T.P.; Aznan, N.K.; Connolly, J.D. On the relative contribution of deep convolutional neural networks for SSVEP-based bio-signal decoding in BCI speller applications. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 611–618. [Google Scholar] [CrossRef] [Green Version]
  252. Zhang, Y.; Guo, D.; Xu, P.; Zhang, Y.; Yao, D. Robust frequency recognition for SSVEP-based BCI with temporally local multivariate synchronization index. Cogn. Neurodynamics 2016, 10, 505–511. [Google Scholar] [CrossRef] [Green Version]
  253. Lee, M.H.; Kwon, O.Y.; Kim, Y.J.; Kim, H.K.; Lee, Y.E.; Williamson, J.; Fazli, S.; Lee, S.W. EEG dataset and OpenBMI toolbox for three BCI paradigms: An investigation into BCI illiteracy. GigaScience 2019, 8, giz002. [Google Scholar] [CrossRef]
  254. Belwafi, K.; Romain, O.; Gannouni, S.; Ghaffari, F.; Djemal, R.; Ouni, B. An embedded implementation based on adaptive filter bank for brain–computer interface systems. J. Neurosci. Methods 2018, 305, 1–16. [Google Scholar] [CrossRef]
  255. Rivet, B.; Souloumiac, A.; Attina, V.; Gibert, G. xDAWN algorithm to enhance evoked potentials: Application to brain–computer interface. IEEE Trans. Biomed. Eng. 2009, 56, 2035–2043. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  256. Lahane, P.; Jagtap, J.; Inamdar, A.; Karne, N.; Dev, R. A review of recent trends in EEG based Brain-Computer Interface. In Proceedings of the 2019 International Conference on Computational Intelligence in Data Science (ICCIDS), Chennai, India, 21–23 February 2019; pp. 1–6. [Google Scholar]
  257. Deng, S.; Winter, W.; Thorpe, S.; Srinivasan, R. EEG Surface Laplacian using realistic head geometry. Int. J. Bioelectromagn. 2011, 13, 173–177. [Google Scholar]
  258. Shaw, L.; Routray, A. Statistical features extraction for multivariate pattern analysis in meditation EEG using PCA. In Proceedings of the 2016 IEEE EMBS International Student Conference (ISC), Ottawa, ON, Canada, 29–31 May 2016; pp. 1–4. [Google Scholar]
  259. Subasi, A.; Gursoy, M.I. EEG signal classification using PCA, ICA, LDA and support vector machines. Expert Syst. Appl. 2010, 37, 8659–8666. [Google Scholar] [CrossRef]
  260. Jannat, N.; Sibli, S.A.; Shuhag, M.A.R.; Islam, M.R. EEG Motor Signal Analysis-Based Enhanced Motor Activity Recognition Using Optimal De-noising Algorithm. In Proceedings of the International Joint Conference on Computational Intelligence; Springer: Berlin/Heidelberg, Germany, 2020; pp. 125–136. [Google Scholar]
  261. Vahabi, Z.; Amirfattahi, R.; Mirzaei, A. Enhancing P300 wave of BCI systems via negentropy in adaptive wavelet denoising. J. Med. Signals Sensors 2011, 1, 165. [Google Scholar] [CrossRef]
  262. Johnson, M.T.; Yuan, X.; Ren, Y. Speech signal enhancement through adaptive wavelet thresholding. Speech Commun. 2007, 49, 123–133. [Google Scholar] [CrossRef]
  263. Islam, M.R.; Rahim, M.A.; Akter, H.; Kabir, R.; Shin, J. Optimal IMF selection of EMD for sleep disorder diagnosis using EEG signals. In Proceedings of the 3rd International Conference on Applications in Information Technology, Aizu-Wakamatsu, Japan, 1–3 November 2018; pp. 96–101. [Google Scholar]
  264. Bashashati, A.; Fatourechi, M.; Ward, R.K.; Birch, G.E. A survey of signal processing algorithms in brain–computer interfaces based on electrical brain signals. J. Neural Eng. 2007, 4, R32. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  265. Aborisade, D.; Ojo, J.; Amole, A.; Durodola, A. Comparative analysis of textural features derived from GLCM for ultrasound liver image classification. Int. J. Comput. Trends Technol. 2014, 11, 6. [Google Scholar]
  266. He, B.; Yuan, H.; Meng, J.; Gao, S. Brain-computer interfaces. In Neural Engineering; Springer: Berlin/Heidelberg, Germany, 2020; pp. 131–183. [Google Scholar]
  267. Phadikar, S.; Sinha, N.; Ghosh, R. A survey on feature extraction methods for EEG based emotion recognition. In International Conference on Innovation in Modern Science and Technology; Springer: Berlin/Heidelberg, Germany, 2019; pp. 31–45. [Google Scholar]
  268. Vaid, S.; Singh, P.; Kaur, C. EEG signal analysis for BCI interface: A review. In Proceedings of the 2015 5th International Conference on Advanced Computing & Communication Technologies, Haryana, India, 21–22 February 2015; pp. 143–147. [Google Scholar]
  269. Sur, S.; Sinha, V.K. Event-related potential: An overview. Ind. Psychiatry J. 2009, 18, 70. [Google Scholar] [CrossRef]
  270. Hajcak, G.; MacNamara, A.; Olvet, D.M. Event-related potentials, emotion, and emotion regulation: An integrative review. Dev. Neuropsychol. 2010, 35, 129–155. [Google Scholar] [CrossRef]
  271. Changoluisa, V.; Varona, P.; De Borja Rodríguez, F. A Low-Cost Computational Method for Characterizing Event-Related Potentials for BCI Applications and Beyond. IEEE Access 2020, 8, 111089–111101. [Google Scholar] [CrossRef]
  272. Beres, A.M. Time is of the essence: A review of electroencephalography (EEG) and event-related brain potentials (ERPs) in language research. Appl. Psychophysiol. Biofeedback 2017, 42, 247–255. [Google Scholar] [CrossRef] [Green Version]
273. Takahashi, K. Remarks on emotion recognition from bio-potential signals. In Proceedings of the 2nd International Conference on Autonomous Robots and Agents, Palmerston North, New Zealand, 13–15 December 2004; Volume 1. [Google Scholar]
  274. Wang, X.W.; Nie, D.; Lu, B.L. EEG-based emotion recognition using frequency domain features and support vector machines. In International Conference on Neural Information Processing; Springer: Berlin/Heidelberg, Germany, 2011; pp. 734–743. [Google Scholar]
  275. Islam, R.; Khan, S.A.; Kim, J.M. Discriminant feature distribution analysis-based hybrid feature selection for online bearing fault diagnosis in induction motors. J. Sensors 2016, 2016, 7145715. [Google Scholar] [CrossRef]
  276. Hjorth, B. EEG analysis based on time domain properties. Electroencephalogr. Clin. Neurophysiol. 1970, 29, 306–310. [Google Scholar] [CrossRef]
  277. Dagdevir, E.; Tokmakci, M. Optimization of preprocessing stage in EEG based BCI systems in terms of accuracy and timing cost. Biomed. Signal Process. Control. 2021, 67, 102548. [Google Scholar] [CrossRef]
  278. Feng, Z.; Qian, L.; Hu, H.; Sun, Y. Functional Connectivity for Motor Imaginary Recognition in Brain-computer Interface. In Proceedings of the 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Toronto, ON, Canada, 11–14 October 2020; pp. 3678–3682. [Google Scholar] [CrossRef]
  279. Smith, J.O. Mathematics of the Discrete Fourier Transform (DFT): With Audio Applications; W3K Publishing: Stanford, CA, USA, 2007. [Google Scholar]
  280. Durak, L.; Arikan, O. Short-time Fourier transform: Two fundamental properties and an optimal implementation. IEEE Trans. Signal Process. 2003, 51, 1231–1242. [Google Scholar] [CrossRef]
  281. Zabidi, A.; Mansor, W.; Lee, Y.; Fadzal, C.C.W. Short-time Fourier Transform analysis of EEG signal generated during imagined writing. In Proceedings of the 2012 International Conference on System Engineering and Technology (ICSET), Bandung, Indonesia, 11–12 September 2012; pp. 1–4. [Google Scholar]
  282. Al-Fahoum, A.S.; Al-Fraihat, A.A. Methods of EEG signal features extraction using linear analysis in frequency and time-frequency domains. Int. Sch. Res. Not. 2014, 2014, 730218. [Google Scholar] [CrossRef] [Green Version]
283. Djamal, E.C.; Abdullah, M.Y.; Renaldi, F. Brain computer interface game controlling using fast Fourier transform and learning vector quantization. J. Telecommun. Electron. Comput. Eng. 2017, 9, 71–74. [Google Scholar]
284. Conneau, A.C.; Essid, S. Assessment of new spectral features for EEG-based emotion recognition. In Proceedings of the 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy, 4–9 May 2014; pp. 4698–4702. [Google Scholar]
285. Petropulu, A.P. Higher-Order Spectral Analysis. Digital Signal Processing Handbook. 2018. Available online: http://elektroarsenal.net/higher-order-spectral-analysis.html (accessed on 19 August 2021).
  286. Aggarwal, S.; Chugh, N. Signal processing techniques for motor imagery brain computer interface: A review. Array 2019, 1, 100003. [Google Scholar] [CrossRef]
  287. LaFleur, K.; Cassady, K.; Doud, A.; Shades, K.; Rogin, E.; He, B. Quadcopter control in three-dimensional space using a noninvasive motor imagery-based brain–computer interface. J. Neural Eng. 2013, 10, 046003. [Google Scholar] [CrossRef] [Green Version]
  288. Mane, A.R.; Biradar, S.; Shastri, R. Review paper on feature extraction methods for EEG signal analysis. Int. J. Emerg. Trend. Eng. Basic Sci. 2015, 2, 545–552. [Google Scholar]
  289. Darvishi, S.; Al-Ani, A. Brain-computer interface analysis using continuous wavelet transform and adaptive neuro-fuzzy classifier. In Proceedings of the 2007 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Lyon, France, 22–26 August 2007; pp. 3220–3223. [Google Scholar]
  290. Nivedha, R.; Brinda, M.; Vasanth, D.; Anvitha, M.; Suma, K. EEG based emotion recognition using SVM and PSO. In Proceedings of the 2017 International Conference on Intelligent Computing, Instrumentation and Control Technologies (ICICICT), Kerala, India, 6–7 July 2017; pp. 1597–1600. [Google Scholar]
  291. Fatourechi, M.; Bashashati, A.; Ward, R.K.; Birch, G.E. EMG and EOG artifacts in brain computer interface systems: A survey. Clin. Neurophysiol. 2007, 118, 480–494. [Google Scholar] [CrossRef]
  292. Wu, D.; King, J.T.; Chuang, C.H.; Lin, C.T.; Jung, T.P. Spatial filtering for EEG-based regression problems in brain–computer interface (BCI). IEEE Trans. Fuzzy Syst. 2017, 26, 771–781. [Google Scholar] [CrossRef] [Green Version]
  293. Lotte, F.; Congedo, M.; Lécuyer, A.; Lamarche, F.; Arnaldi, B. A review of classification algorithms for EEG-based brain–computer interfaces. J. Neural Eng. 2007, 4, R1. [Google Scholar] [CrossRef] [PubMed]
  294. Lotte, F.; Bougrain, L.; Cichocki, A.; Clerc, M.; Congedo, M.; Rakotomamonjy, A.; Yger, F. A review of classification algorithms for EEG-based brain–computer interfaces: A 10 year update. J. Neural Eng. 2018, 15, 031005. [Google Scholar] [CrossRef] [Green Version]
  295. Xanthopoulos, P.; Pardalos, P.M.; Trafalis, T.B. Linear discriminant analysis. In Robust Data Mining; Springer: Berlin/Heidelberg, Germany, 2013; pp. 27–33. [Google Scholar]
  296. Gokcen, I.; Peng, J. Comparing linear discriminant analysis and support vector machines. In International Conference on Advances in Information Systems; Springer: Berlin/Heidelberg, Germany, 2002; pp. 104–113. [Google Scholar]
  297. Schuldt, C.; Laptev, I.; Caputo, B. Recognizing human actions: A local SVM approach. In Proceedings of the 17th International Conference on Pattern Recognition, Cambridge, UK, 26 August 2004; Volume 3, pp. 32–36. [Google Scholar]
  298. Sridhar, G.; Rao, P.M. A Neural Network Approach for EEG classification in BCI. Int. J. Comput. Sci. Telecommun. 2012, 3, 44–48. [Google Scholar]
299. Kavasidis, I.; Palazzo, S.; Spampinato, C.; Giordano, D.; Shah, M. Brain2image: Converting brain signals into images. In Proceedings of the 25th ACM International Conference on Multimedia, Mountain View, CA, USA, 23–27 October 2017; pp. 1809–1817. [Google Scholar]
  300. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning Internal Representations by Error Propagation; Technical Report; California Univ. San Diego La Jolla Inst. for Cognitive Science: La Jolla, CA, USA, 1985. [Google Scholar]
  301. Werbos, P.J. Generalization of backpropagation with application to a recurrent gas market model. Neural Netw. 1988, 1, 339–356. [Google Scholar] [CrossRef] [Green Version]
  302. Obermaier, B.; Guger, C.; Neuper, C.; Pfurtscheller, G. Hidden Markov models for online classification of single trial EEG data. Pattern Recognit. Lett. 2001, 22, 1299–1309. [Google Scholar] [CrossRef]
  303. Graves, A.; Mohamed, A.r.; Hinton, G. Speech recognition with deep recurrent neural networks. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; pp. 6645–6649. [Google Scholar]
  304. Rosenblatt, F. The perceptron: A probabilistic model for information storage and organization in the brain. Psychol. Rev. 1958, 65, 386. [Google Scholar] [CrossRef] [Green Version]
  305. Sunny, M.S.H.; Afroze, N.; Hossain, E. EEG Band Separation Using Multilayer Perceptron for Efficient Feature Extraction and Perfect BCI Paradigm. In Proceedings of the 2020 Emerging Technology in Computing, Communication and Electronics (ETCCE), Dhaka, Bangladesh, 21–22 December 2020; pp. 1–6. [Google Scholar]
  306. Blumberg, J.; Rickert, J.; Waldert, S.; Schulze-Bonhage, A.; Aertsen, A.; Mehring, C. Adaptive classification for brain computer interfaces. IEEE Trans. Biomed. Eng. 2007, 54, 2536–2539. [Google Scholar]
  307. Rezaei, S.; Tavakolian, K.; Nasrabadi, A.M.; Setarehdan, S.K. Different classification techniques considering brain computer interface applications. J. Neural Eng. 2006, 3, 139. [Google Scholar] [CrossRef]
  308. Chaudhary, P.; Agrawal, R. A comparative study of linear and non-linear classifiers in sensory motor imagery based brain computer interface. J. Comput. Theor. Nanosci. 2019, 16, 5134–5139. [Google Scholar] [CrossRef]
  309. Rabiner, L.R. A tutorial on hidden Markov models and selected applications in speech recognition. Proc. IEEE 1989, 77, 257–286. [Google Scholar] [CrossRef]
  310. Lederman, D.; Tabrikian, J. Classification of multichannel EEG patterns using parallel hidden Markov models. Med. Biol. Eng. Comput. 2012, 50, 319–328. [Google Scholar] [CrossRef]
  311. Wang, M.; Abdelfattah, S.; Moustafa, N.; Hu, J. Deep Gaussian mixture-hidden Markov model for classification of EEG signals. IEEE Trans. Emerg. Top. Comput. Intell. 2018, 2, 278–287. [Google Scholar] [CrossRef]
  312. Liu, C.; Wang, H.; Lu, Z. EEG classification for multiclass motor imagery BCI. In Proceedings of the 2013 25th Chinese Control and Decision Conference (CCDC), Guiyang, China, 25–27 May 2013; pp. 4450–4453. [Google Scholar]
  313. Bablani, A.; Edla, D.R.; Dodia, S. Classification of EEG data using k-nearest neighbor approach for concealed information test. Procedia Comput. Sci. 2018, 143, 242–249. [Google Scholar] [CrossRef]
314. Roth, P.M.; Hirzer, M.; Köstinger, M.; Beleznai, C.; Bischof, H. Mahalanobis distance learning for person re-identification. In Person Re-Identification; Springer: Berlin/Heidelberg, Germany, 2014; pp. 247–267. [Google Scholar]
  315. Mishuhina, V.; Jiang, X. Feature weighting and regularization of common spatial patterns in EEG-based motor imagery BCI. IEEE Signal Process. Lett. 2018, 25, 783–787. [Google Scholar] [CrossRef]
  316. Dou, J.; Yunus, A.P.; Bui, D.T.; Merghadi, A.; Sahana, M.; Zhu, Z.; Chen, C.W.; Han, Z.; Pham, B.T. Improved landslide assessment using support vector machine with bagging, boosting, and stacking ensemble machine learning framework in a mountainous watershed, Japan. Landslides 2020, 17, 641–658. [Google Scholar] [CrossRef]
  317. Wu, D.; Xu, Y.; Lu, B.L. Transfer learning for EEG-based brain-computer interfaces: A review of progress made since 2016. IEEE Trans. Cogn. Dev. Syst. 2020. [Google Scholar] [CrossRef]
  318. Zhang, C.; Kim, Y.K.; Eskandarian, A. EEG-inception: An accurate and robust end-to-end neural network for EEG-based motor imagery classification. J. Neural Eng. 2021, 18, 046014. [Google Scholar] [CrossRef]
  319. Zuo, C.; Jin, J.; Xu, R.; Wu, L.; Liu, C.; Miao, Y.; Wang, X. Cluster decomposing and multi-objective optimization based-ensemble learning framework for motor imagery-based brain–computer interfaces. J. Neural Eng. 2021, 18, 026018. [Google Scholar] [CrossRef]
  320. Aler, R.; Galván, I.M.; Valls, J.M. Evolving spatial and frequency selection filters for brain-computer interfaces. In Proceedings of the IEEE Congress on Evolutionary Computation, Barcelona, Spain, 18–23 July 2010; pp. 1–7. [Google Scholar]
  321. Mohamed, E.A.; Yusoff, M.Z.B.; Selman, N.K.; Malik, A.S. Enhancing EEG signals in brain computer interface using wavelet transform. Int. J. Inf. Electron. Eng. 2014, 4, 234. [Google Scholar] [CrossRef]
  322. Carrera-Leon, O.; Ramirez, J.M.; Alarcon-Aquino, V.; Baker, M.; D’Croz-Baron, D.; Gomez-Gil, P. A motor imagery BCI experiment using wavelet analysis and spatial patterns feature extraction. In Proceedings of the 2012 Workshop on Engineering Applications, Bogota, Colombia, 2–4 May 2012; pp. 1–6. [Google Scholar]
  323. Yang, J.; Yao, S.; Wang, J. Deep fusion feature learning network for MI-EEG classification. IEEE Access 2018, 6, 79050–79059. [Google Scholar] [CrossRef]
  324. Kanoga, S.; Kanemura, A.; Asoh, H. A Comparative Study of Features and Classifiers in Single-channel EEG-based Motor Imagery BCI. In Proceedings of the 2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Anaheim, CA, USA, 26–29 November 2018; pp. 474–478. [Google Scholar]
  325. Yanase, J.; Triantaphyllou, E. A systematic survey of computer-aided diagnosis in medicine: Past and present developments. Expert Syst. Appl. 2019, 138, 112821. [Google Scholar] [CrossRef]
326. Shannon, C.E.; Weaver, W. The Mathematical Theory of Communication; University of Illinois Press: Champaign, IL, USA, 1949. [Google Scholar]
  327. Volosyak, I.; Valbuena, D.; Malechka, T.; Peuscher, J.; Gräser, A. Brain–computer interface using water-based electrodes. J. Neural Eng. 2010, 7, 066007. [Google Scholar] [CrossRef]
  328. Wolpaw, J.R.; Birbaumer, N.; McFarland, D.J.; Pfurtscheller, G.; Vaughan, T.M. Brain–computer interfaces for communication and control. Clin. Neurophysiol. 2002, 113, 767–791. [Google Scholar] [CrossRef]
  329. Farwell, L.A.; Donchin, E. Talking off the top of your head: Toward a mental prosthesis utilizing event-related brain potentials. Electroencephalogr. Clin. Neurophysiol. 1988, 70, 510–523. [Google Scholar] [CrossRef]
  330. Schreuder, M.; Höhne, J.; Blankertz, B.; Haufe, S.; Dickhaus, T.; Tangermann, M. Optimizing event-related potential based brain–computer interfaces: A systematic evaluation of dynamic stopping methods. J. Neural Eng. 2013, 10, 036025. [Google Scholar] [CrossRef] [Green Version]
  331. Cohen, J. A coefficient of agreement for nominal scales. Educ. Psychol. Meas. 1960, 20, 37–46. [Google Scholar] [CrossRef]
  332. Kraemer, H.C. Kappa Coefficient. Available online: https://onlinelibrary.wiley.com/doi/abs/10.1002/9781118445112.stat00365 (accessed on 19 August 2021).
  333. Thompson, D.E.; Quitadamo, L.R.; Mainardi, L.; Gao, S.; Kindermans, P.J.; Simeral, J.D.; Fazel-Rezai, R.; Matteucci, M.; Falk, T.H.; Bianchi, L.; et al. Performance measurement for brain–computer or brain–machine interfaces: A tutorial. J. Neural Eng. 2014, 11, 035001. [Google Scholar] [CrossRef] [Green Version]
  334. Chestek, C.A.; Batista, A.P.; Santhanam, G.; Byron, M.Y.; Afshar, A.; Cunningham, J.P.; Gilja, V.; Ryu, S.I.; Churchland, M.M.; Shenoy, K.V. Single-neuron stability during repeated reaching in macaque premotor cortex. J. Neurosci. 2007, 27, 10742–10750. [Google Scholar] [CrossRef] [Green Version]
  335. Simeral, J.; Kim, S.P.; Black, M.; Donoghue, J.; Hochberg, L. Neural control of cursor trajectory and click by a human with tetraplegia 1000 days after implant of an intracortical microelectrode array. J. Neural Eng. 2011, 8, 025027. [Google Scholar] [CrossRef] [Green Version]
  336. Gilja, V.; Nuyujukian, P.; Chestek, C.A.; Cunningham, J.P.; Byron, M.Y.; Fan, J.M.; Churchland, M.M.; Kaufman, M.T.; Kao, J.C.; Ryu, S.I.; et al. A high-performance neural prosthesis enabled by control algorithm design. Nat. Neurosci. 2012, 15, 1752–1757. [Google Scholar] [CrossRef] [Green Version]
  337. Ramos Lopez, C.; Castro Lopez, J.; Buchely, A.; Ordoñez Lopez, D. Specialized in Quality Control and Control of Mobile Applications Based on the ISO 9241-11 Ergonomic Requirements for Office Work with Visual Display Terminals (VDTS). 2016. Available online: https://revistas.utp.ac.pa/index.php/memoutp/article/view/1473/ (accessed on 19 August 2021).
  338. Seffah, A.; Donyaee, M.; Kline, R.B.; Padda, H.K. Usability measurement and metrics: A consolidated model. Softw. Qual. J. 2006, 14, 159–178. [Google Scholar] [CrossRef]
  339. Gupta, R.; Arndt, S.; Antons, J.N.; Schleicher, R.; Möller, S.; Falk, T.H. Neurophysiological experimental facility for Quality of Experience (QoE) assessment. In Proceedings of the 2013 IFIP/IEEE International Symposium on Integrated Network Management (IM 2013), Ghent, Belgium, 27–31 May 2013; pp. 1300–1305. [Google Scholar]
  340. Coyne, J.T.; Baldwin, C.; Cole, A.; Sibley, C.; Roberts, D.M. Applying real time physiological measures of cognitive load to improve training. In International Conference on Foundations of Augmented Cognition; Springer: Berlin/Heidelberg, Germany, 2009; pp. 469–478. [Google Scholar]
341. Liu, Y.H.; Wang, S.H.; Hu, M.R. A self-paced P300 healthcare brain-computer interface system with SSVEP-based switching control and kernel FDA + SVM-based detector. Appl. Sci. 2016, 6, 142. [Google Scholar] [CrossRef] [Green Version]
  342. Tayeb, Z.; Fedjaev, J.; Ghaboosi, N.; Richter, C.; Everding, L.; Qu, X.; Wu, Y.; Cheng, G.; Conradt, J. Validating deep neural networks for online decoding of motor imagery movements from EEG signals. Sensors 2019, 19, 210. [Google Scholar] [CrossRef] [Green Version]
  343. Barachant, A.; Bonnet, S.; Congedo, M.; Jutten, C. Multiclass brain–computer interface classification by Riemannian geometry. IEEE Trans. Biomed. Eng. 2011, 59, 920–928. [Google Scholar] [CrossRef] [Green Version]
  344. Zhang, X.; Li, J.; Liu, Y.; Zhang, Z.; Wang, Z.; Luo, D.; Zhou, X.; Zhu, M.; Salman, W.; Hu, G.; et al. Design of a fatigue detection system for high-speed trains based on driver vigilance using a wireless wearable EEG. Sensors 2017, 17, 486. [Google Scholar] [CrossRef] [Green Version]
  345. Zhang, Y.; Wang, Y.; Zhou, G.; Jin, J.; Wang, B.; Wang, X.; Cichocki, A. Multi-kernel extreme learning machine for EEG classification in brain-computer interfaces. Expert Syst. Appl. 2018, 96, 302–310. [Google Scholar] [CrossRef]
  346. Tomita, Y.; Vialatte, F.B.; Dreyfus, G.; Mitsukura, Y.; Bakardjian, H.; Cichocki, A. Bimodal BCI using simultaneously NIRS and EEG. IEEE Trans. Biomed. Eng. 2014, 61, 1274–1284. [Google Scholar] [CrossRef] [PubMed]
  347. Cecotti, H.; Graser, A. Convolutional neural networks for P300 detection with application to brain-computer interfaces. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 433–445. [Google Scholar] [CrossRef]
  348. Jin, Z.; Zhou, G.; Gao, D.; Zhang, Y. EEG classification using sparse Bayesian extreme learning machine for brain–computer interface. Neural Comput. Appl. 2020, 32, 6601–6609. [Google Scholar] [CrossRef]
  349. Tsui, C.S.L.; Gan, J.Q.; Roberts, S.J. A self-paced brain–computer interface for controlling a robot simulator: An online event labelling paradigm and an extended Kalman filter based algorithm for online training. Med Biol. Eng. Comput. 2009, 47, 257–265. [Google Scholar] [CrossRef] [PubMed]
  350. Van Erp, J.; Lotte, F.; Tangermann, M. Brain-computer interfaces: Beyond medical applications. Computer 2012, 45, 26–34. [Google Scholar] [CrossRef] [Green Version]
  351. Gao, S.; Wang, Y.; Gao, X.; Hong, B. Visual and auditory brain–computer interfaces. IEEE Trans. Biomed. Eng. 2014, 61, 1436–1447. [Google Scholar]
  352. McCane, L.M.; Heckman, S.M.; McFarland, D.J.; Townsend, G.; Mak, J.N.; Sellers, E.W.; Zeitlin, D.; Tenteromano, L.M.; Wolpaw, J.R.; Vaughan, T.M. P300-based brain-computer interface (BCI) event-related potentials (ERPs): People with amyotrophic lateral sclerosis (ALS) vs. age-matched controls. Clin. Neurophysiol. 2015, 126, 2124–2131. [Google Scholar] [CrossRef] [Green Version]
  353. Holz, E.M.; Botrel, L.; Kaufmann, T.; Kübler, A. Long-term independent brain-computer interface home use improves quality of life of a patient in the locked-in state: A case study. Arch. Phys. Med. Rehabil. 2015, 96, S16–S26. [Google Scholar] [CrossRef] [PubMed]
  354. Mudgal, S.K.; Sharma, S.K.; Chaturvedi, J.; Sharma, A. Brain computer interface advancement in neurosciences: Applications and issues. Interdiscip. Neurosurg. 2020, 20, 100694. [Google Scholar] [CrossRef]
  355. Shen, Y.W.; Lin, Y.P. Challenge for affective brain-computer interfaces: Non-stationary spatio-spectral EEG oscillations of emotional responses. Front. Hum. Neurosci. 2019, 13, 366. [Google Scholar] [CrossRef] [PubMed]
  356. Ghare, P.S.; Paithane, A. Human emotion recognition using non linear and non stationary EEG signal. In Proceedings of the 2016 International Conference on Automatic Control and Dynamic Optimization Techniques (ICACDOT), Pune, India, 9–10 September 2016; pp. 1013–1016. [Google Scholar]
  357. Miladinović, A.; Ajčević, M.; Jarmolowska, J.; Marusic, U.; Colussi, M.; Silveri, G.; Battaglini, P.P.; Accardo, A. Effect of power feature covariance shift on BCI spatial-filtering techniques: A comparative study. Comput. Methods Programs Biomed. 2021, 198, 105808. [Google Scholar] [CrossRef]
  358. und Softwaretechnik, R. Computational challenges for noninvasive brain computer interfaces. IEEE Intell. Syst. 2008, 23, 78–79. [Google Scholar]
  359. Allison, B.Z.; Dunne, S.; Leeb, R.; Millán, J.D.R.; Nijholt, A. Towards Practical Brain-Computer Interfaces: Bridging the Gap from Research to Real-World Applications; Springer Science & Business Media: New York, NY, USA, 2012. [Google Scholar]
  360. Rashid, M.; Sulaiman, N.; PP Abdul Majeed, A.; Musa, R.M.; Bari, B.S.; Khatun, S. Current status, challenges, and possible solutions of EEG-based brain-computer interface: A comprehensive review. Front. Neurorobotics 2020, 14, 25. [Google Scholar] [CrossRef]
  361. Jin, J.; Allison, B.Z.; Sellers, E.W.; Brunner, C.; Horki, P.; Wang, X.; Neuper, C. Optimized stimulus presentation patterns for an event-related potential EEG-based brain–computer interface. Med. Biol. Eng. Comput. 2011, 49, 181–191. [Google Scholar] [CrossRef]
Figure 1. The PRISMA process that is followed in this article.
Figure 2. Basic architecture of a BCI system.
Figure 3. The classification/taxonomy of the BCI system.
Figure 4. The basic architecture of BCI control signals.
Figure 5. The basic structure of CSP [286].
Figure 6. Classification of commonly used classifiers in BCI.
Table 1. A summary of recent surveys/reviews on various BCI technologies, signals, algorithms, classifiers, etc.
Ref. | Purposes | Challenges
[6] | Evaluates the advantages, disadvantages, decoding algorithms, and classification methods of the EEG-based BCI paradigm. | Training time and fatigue; signal processing and novel decoders; moving from shared to supervisory control in the closed loop.
[7] | A comprehensive review of the structure of the brain and of the phases, signal extraction methods, and classifiers of BCI. | Human-generated thoughts are non-stationary, and the generated signals are nonlinear.
[8] | A systematic review of the challenges in BCI and of current studies on BCI games using EEG devices. | Bias within the search and classification process.
[9] | A well-structured review of sensors used in BCI applications that can detect patterns of the brain. | The sensors are placed in the human brain when neurosurgery is needed, which is a precarious process.
[10] | A brief review of standard invasive and noninvasive BCI techniques and of existing features and classifiers. | Building brain-signal capture systems with low-density electrodes and higher resolution.
[11] | Briefly describes BCI applications and neurofeedback related to haptic technologies. | Covers only a small domain of BCI (haptic technology).
[12] | Focuses on identifying emotion with EEG-based BCI, with a brief discussion of feature extraction, selection, and classifiers. | There are no real-life event datasets, and the surveyed literature could not sense mixed feelings simultaneously.
[13] | Covers only noninvasive BCI techniques and deep-learning-related BCI studies. | Exclusively covers noninvasive brain signals.
[14] | Focuses on popular techniques such as deep learning models and advances in signal-sensing technologies. | Popular feature extraction processes, methods, and classifiers are not mentioned or reviewed.
Table 2. A table of different types of motor imagery datasets of BCI.
Dataset Name | Subject (S)/Electrodes (E)/Channels (C) | Used in
Left or Right Hand MI [70] | S: 52 | [71,72,73,74,75]
Motor Movement or Imagery Dataset | S: 109, E: 64 | [76,77,78,79]
Grasp and Lift EEG [80] | S: 12 | [81,82,83,84,85]
SCP data of Motor-Imagery [86] | S: 13, Recordings: 60 h | [87,88,89,90,91,92]
BCI Competition III [93] | S: 3, C: 60 | [94,95,96]
BCI Competition IV-1 | S: 7, C: 64 | [97,98,99,100,101]
BCI Competition IV-2a | S: 9, E: 22 | [102,103,104,105,106]
BCI Competition IV-2b | S: 9, E: 3 | [107,108,109,110,111,112]
High-Gamma Dataset [113] | S: 14, E: 128 | [114,115,116,117,118,119,120]
Left/Right Hand 1D/2D movements | S: 1, E: 19 | [86,121,122,123]
Imagination of Right-hand Thumb Movement [124] | S: 1, E: 8 | [83,125,126,127,128]
Mental-Imagery Dataset | S: 13 | [129,130,131,132,133,134,135]
Table 3. A table of different types of Error-Related Potential (ErrP) datasets in BCI.
Dataset Name | Subject (S)/Electrodes (E)/Channels (C) | Used in
BCI–NER Challenge [136] | S: 26, C: 56 | [137]
ErrP in a target selection task | S: E: 64 | [138,139,140,141,142,143,144]
ErrPs during continuous feedback [145] | S: 10, E: 28 | [146,147,148]
Table 4. A table of different types of emotion recognition datasets in BCI.
Dataset Name | Subject (S)/Electrodes (E)/Channels (C) | Used in
DEAP [149] | S: 32, C: 32 | [150,151,152,153,154,155,156,157]
Enterface’06 [158] | S: 5, C: 54 | NA
HeadIT | S: 31 | [159]
NeuroMarketing [160] | S: 25, E: 14 | [161,162]
SEED [163] | S: 15, C: 62 | [12,164,165,166,167,168,169]
SEED-IV | S: 15, C: 62 | [170,171,172,173,174,175]
SEED-VIG [176] | E: 18 | [137,177,178,179]
HCI-Tagging | S: 30 | [180,181,182,183,184,185,186]
Regulation of Arousal [187] | S: 18 | [52,130,188,189,190]
EEG Alpha Waves [191] | S: 20 | [192]
Table 5. A table of different types of miscellaneous datasets.
Dataset Name | Subject (S)/Electrodes (E)/Channels (C) | Used in
MNIST Brain Digits | S: 1, Recordings: 2 s | [193,194]
Imagenet Brain | S: 1, Recordings: 3 s | [195,196,197,198,199,200]
Working Memory [201] | S: 15, E: 64 | [202,203,204,205]
Deep Sleep Slow Oscillation [201] | Recordings: 10 s | [206]
Genetic Predisposition to Alcoholism | S: 120, E: 64 | [124,207,208,209,210,211,212]
Confusion during MOOC [213] | S: 10 | [214,215]
Table 6. A table of different types of eye-blink or movement datasets in BCI.
Dataset Name | Subject (S)/Electrodes (E)/Channels (C) | Used in
Voluntary–Involuntary Eye-Blinks [216] | S: 20, E: 14 | [217]
EEG-eye state [124] | Recordings: 117 s | [218,219,220,221]
EEG-IO [222] | S: 20, Blinks: 25 | [222,223]
Eye blinks and movements [222] | S: 12 | [222,224]
Eye State Prediction [225] | S: 1, Recordings: 117 s | [130,218,219,226,227,228]
Table 7. A table of different types of Event-Related Potential (ERP) datasets in BCI. These datasets are collected from [229].
Dataset Name | Subject (S)/Electrodes (E)/Channels (C) | Used in
Target Versus Non-Target (2012) | S: 25, E: 16 | NA
Target Versus Non-Target (2013) | S: 24, E: 16 | [230]
Target Versus Non-Target (2014) | S: 71, E: 16 | [231]
Target Versus Non-Target (2015) | S: 50, E: 32 | [232,233,234]
Impedance Data | S: 12 | [86,94,235,236,237,238]
Face vs. House Discrimination [239] | S: 7 | [240,241]
Table 8. Different types of Visually Evoked Potential (VEP) datasets in BCI. These datasets are collected from [229].

| Dataset Name | Subjects (S)/Electrodes (E)/Channels (C) | Used in |
|---|---|---|
| c-VEP BCI | S: 9, C: 32 | [242,243,244] |
| c-VEP BCI with dry electrodes | S: 9, C: 15 | [243,245,246,247,248] |
| SSVEP | S: 30, E: 14 | [249,250,251,252,253] |
| Synchronized Brainwave Dataset | Video stimulus | [254,255] |
Table 9. Comparison of classifiers based on popular datasets and features.

| Ref. | Dataset | Feature | Classifier | Accuracy |
|---|---|---|---|---|
| [102] | BCI competition IV-2b | CWT | CNN | Morlet: 78.93%; Bump: 77.25% |
| [320] | BCI competition III | CSP | SVM | Evolved filters: Subject 1: 77.96%; Subject 2: 75.11%; Subject 3: 57.76% |
| [321] | BCI competition III | WT | SVM | 85.54% |
| [321] | BCI competition III | WT | NN | 82.43% |
| [322] | BCI competition III | WT | LDA | Misclassification rate: 0.1286 |
| [323] | BCI competition III | WT | CNN | 86.20% |
| [324] | BCI competition IV-2a | Single-channel CSP | KNN | 62.2 ± 0.4% |
| [324] | BCI competition IV-2a | Single-channel CSP | MLP | 63.5 ± 0.4% |
| [324] | BCI competition IV-2a | Single-channel CSP | SVM | 63.3 ± 0.4% |
| [324] | BCI competition IV-2a | Single-channel CSP | LDA | 61.8 ± 0.4% |
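To make the CSP-based pipelines in Table 9 concrete, the following minimal sketch implements two-class Common Spatial Patterns on synthetic EEG-like trials, followed by a simple nearest-mean classifier on log-variance features. The synthetic data, dimensions, and the toy classifier are illustrative assumptions only — none of this is taken from the cited papers, which pair CSP with SVM, LDA, KNN, or MLP classifiers on real competition data.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_trials(n, n_ch=4, n_t=256, boost_ch=0):
    # Synthetic EEG-like trials: one channel carries extra variance per class.
    trials = rng.standard_normal((n, n_ch, n_t))
    trials[:, boost_ch, :] *= 3.0
    return trials

def csp_filters(class_a, class_b):
    # Trace-normalized average spatial covariance per class.
    def avg_cov(trials):
        return np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)
    ca, cb = avg_cov(class_a), avg_cov(class_b)
    # Generalized eigenproblem: directions maximizing class-A variance
    # relative to total variance (ca + cb).
    evals, evecs = np.linalg.eig(np.linalg.solve(ca + cb, ca))
    order = np.argsort(evals.real)[::-1]
    return evecs[:, order].T.real  # rows = spatial filters, most to least A-like

def log_var_features(trials, W, k=1):
    # Keep the k most discriminative filters from each end of the spectrum.
    Wk = np.vstack([W[:k], W[-k:]])
    feats = []
    for t in trials:
        v = np.var(Wk @ t, axis=1)
        feats.append(np.log(v / v.sum()))
    return np.array(feats)

# Two classes: class A boosts channel 0, class B boosts channel 3.
train_a, train_b = make_trials(40, boost_ch=0), make_trials(40, boost_ch=3)
W = csp_filters(train_a, train_b)
mean_a = log_var_features(train_a, W).mean(axis=0)
mean_b = log_var_features(train_b, W).mean(axis=0)

def predict(trials):
    X = log_var_features(trials, W)
    da = np.linalg.norm(X - mean_a, axis=1)
    db = np.linalg.norm(X - mean_b, axis=1)
    return np.where(da < db, 0, 1)

test_a, test_b = make_trials(20, boost_ch=0), make_trials(20, boost_ch=3)
acc = (np.mean(predict(test_a) == 0) + np.mean(predict(test_b) == 1)) / 2
```

In practice the nearest-mean step would be replaced by any of the classifiers compared in Table 9 (SVM, LDA, KNN, MLP), with CSP supplying the discriminative spatial projection.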
Table 10. A summary of some research papers proposing new methods of BCI.

| Model | Novelty | Feature Extraction | Architecture | Limitations |
|---|---|---|---|---|
| P300, ERN, MRCP, SMR [200] | Compact convolutional neural network for EEG-based BCI | Band-pass filtering | EEGNet | The approach works effectively only when the features are known beforehand. |
| WOLA [254] | Dynamic filtering of EEG signals | CSP | Embedded-BCI (EBCI) system | The model does not yet account for eye blinks or muscle activity. |
| xDAWN [255] | Enhances P300 evoked potentials | Spatial filtering | P300 speller BCI paradigm | There is room for improvement and enhancement. |
| SSVEP, P300 [341] | BCI-based healthcare control system | P300 detector kernel (FDA + SSVEP) | Self-paced P300 healthcare system with SSVEP | The SSVEP stimulation paradigm could be used to enhance accuracy. |
| LSTM, pCNN, RCNN [342] | Online decoding of motor imagery movements using DL models | CSP, log-BP features | Classifies motor imagery movements | The data used in the proposed models are limited. |
| MDRM and TSLDA [343] | Classification framework for BCI-based motor imagery | Spatial filtering | MI-based BCI classification using a Riemannian framework | Implementing the framework incurs high computational cost. |
| SVM [344] | Fatigue detection system | FFT | Train-driver vigilance detection | NA |
| Gaussian, polynomial kernel [345] | MKELM-based method for motor imagery EEG classification | CSP | MKELM-based method for BCI | Accuracy improvement and extension of the framework are needed. |
| Bimodal NIRS-EEG approach [346] | Bimodal BCI using EEG and NIRS | Low-pass filter and Savitzky–Golay (SG) | SSVEP paradigm | Only applicable to EEG and fNIRS channels. |
| P300-BCI classification using CNN [347] | Detection of P300 waves | Spatial filters with CNN | NN architecture | Variability over subjects; determining key layers. |
| Unified ELM and SB learning [348] | Sparse Bayesian ELM (SBELM)-based algorithm | CSP method | SBELM for motor imagery-related EEG classification | Multiband optimization could increase accuracy. |
| Extended Kalman adaptive LDA [349] | Online training for controlling a simulated robot | LDA classifiers | Online self-paced event detection system | Limited to two classes; does not extend to multiple classes. |
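Several of the pipelines summarized in Table 10 begin by isolating a frequency band of interest (e.g., the band-pass filtering front end of EEGNet, or the mu/beta bands typical of motor imagery). A minimal FFT-based band-pass sketch in pure NumPy is shown below; the sampling rate, band edges, and test signal are illustrative assumptions, and production systems would typically use a proper IIR/FIR filter design instead of spectral zeroing.

```python
import numpy as np

def bandpass_fft(signal, fs, lo, hi):
    # Zero all FFT bins outside [lo, hi] Hz, then transform back.
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spec, n=len(signal))

fs = 256  # sampling rate in Hz (assumed)
t = np.arange(fs) / fs  # one second of samples
# Toy signal: a 10 Hz "mu rhythm" component plus 50 Hz line noise.
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)
y = bandpass_fft(x, fs, lo=8, hi=13)  # keep only the mu band
```

After filtering, the 50 Hz interference is removed while the 10 Hz component passes through, and the cleaned trial can feed the feature-extraction stages (CSP, log-BP, CWT) listed in the tables above.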
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.