An Introductory Tutorial on Brain–Computer Interfaces and Their Applications

Abstract: The prospect of interfacing minds with machines has long captured the human imagination. Recent advances in biomedical engineering, computer science, and neuroscience are making brain–computer interfaces a reality, paving the way to restoring and potentially augmenting human physical and mental capabilities. Applications of brain–computer interfaces are being explored in areas as diverse as security, lie detection, alertness monitoring, gaming, education, art, and human cognition augmentation. The present tutorial aims to survey the principal features and challenges of brain–computer interfaces (such as reliable acquisition of brain signals, filtering and processing of the acquired brainwaves, ethical and legal issues related to brain–computer interfaces (BCIs), data privacy, and performance assessment) with special emphasis on biomedical engineering and automation engineering applications. The content of this paper aims to give students, researchers, and practitioners a glimpse of the multifaceted world of brain–computer interfacing.


Introduction
Severe neurological and cognitive disorders, such as amyotrophic lateral sclerosis (ALS), brainstem stroke, and spinal cord injury, can destroy the pathways through which the brain communicates with and controls its external environment [1,2]. Severely affected patients may lose all voluntary muscle control, including eye movements, and may be completely locked into their bodies, unable to communicate by any means. Such a complex of severe diseases is sometimes referred to as locked-in syndrome to signify the inability to interact with and to manifest any intent to the external world. A potential solution for restoring function and overcoming motor impairments is to provide the brain with a new, nonmuscular communication and control channel, a direct brain-computer interface (BCI) or brain-machine interface, for conveying messages and commands to the external world [3].
Evidence shows that neuronal networks oscillate in a way that is functionally relevant [4][5][6]. In particular, mammalian cortical neurons form behavior-dependent oscillating networks of various sizes. Recent findings indicate that network oscillations temporally link neurons into assemblies and facilitate synaptic plasticity, mechanisms that cooperatively support temporal representation and long-term consolidation of information [4,7,8]. The essence of BCI technology is to record such neuronal oscillations, which encode brain activity, through sensors placed over the head and to decipher the neuronal oscillation code.
The slow speeds, high error rates, susceptibility to artifacts, and complexity of early BCI systems have been challenges for implementing workable real-world systems [9]. Originally, the motivation for developing BCIs was to provide severely disabled individuals with a basic communication system. In recent years, advances in computing and biosensing technologies have improved the outlook for BCI applications, making them promising not only as assistive technologies but also for mainstream applications [10].
While noninvasive techniques are the most widely used in applications devoted both to regular consumers and to restoring the functionality of disabled subjects, invasive implants were initially developed through experiments with animals and were applied to the control of artificial prostheses. At the end of the 1990s, implants were applied to humans to accomplish simple tasks such as moving a screen cursor. The electroencephalogram (EEG), for instance, is a typical signal used as an input for BCI applications and refers to the electrical activity recorded through electrodes positioned on the scalp. Over the last decades, EEG-based BCI has become one of the most popular noninvasive techniques. The EEG measures the summation of the synchronous activity of neurons [11] that have the same spatial orientation after an external stimulus is produced. This technique has been used to register different types of neural activity, such as evoked responses (ERs), also known as evoked potentials (EPs) [12,13], and induced responses such as event-related potentials, event-related desynchronisations, and slow cortical potentials [13].
In recent years, the motivation for developing BCIs has been not only to provide an alternate communication channel for severely disabled people but also to use BCIs for communication and control in industrial environments and consumer applications [14]. It is also worth mentioning a recent technological development, closely related to BCI, known as a brain-to-brain interface (BBI). A BBI is a combination of a brain-computer interface and a computer-brain interface [15]. Brain-to-brain interfaces allow for direct transmission of brain activity in real time by coupling the brains of two individuals.
The last decade witnessed increasing interest in the use of BCI for games and entertainment applications [16], given the amount of meaningful information provided by BCI devices that is not easily achievable through other input modalities. BCI signals can be utilized in this kind of application to collect data describing the cognitive states of a user, which has proved useful in adapting a game to the player's emotional and cognitive conditions. Moreover, research in BCI can provide game developers with information that is relevant to controlling a gaming application or to developing hardware and software products to build an unobtrusive interface [17].
The use of mental states to trigger the surroundings and to control the external environment can be achieved via passive brain-computer interfaces (pBCIs) to replace a lost function in persons with severe motor disabilities and no possibility of functional recovery (e.g., amyotrophic lateral sclerosis or brainstem stroke) [18]. In such instances, the BCI is used as a system that allows for direct communication between the brain and distant devices. Over the last years, research efforts have been devoted to its use in smart environmental control systems, fast and smooth movement of robotic arm prototypes, as well as motion planning of autonomous or semiautonomous vehicles and robotic systems [19,20]. A BCI can be used in an active way, known as active BCI, in which the user voluntarily modulates brain activity to generate a specific command to the surrounding environment, replacing or partially restoring lost or impaired muscular abilities. The alternative modality, pBCIs [21][22][23], derives its outputs from arbitrary brain activity without the intention of specific voluntary control (i.e., it exploits implicit information on the user's state). In fact, in systems based on pBCIs, the users do not try to control their brain activity [24]. pBCIs have been used in modern research on adaptive automation [25] and in augmented user evaluation [26]. A recent study demonstrated the higher resolution of neurophysiological measures in comparison to subjective ones and showed how the simultaneous employment of neurophysiological and behavioral measures could allow for a holistic assessment of operational tools [27]. Reliability is a desirable characteristic of BCI systems when they are used under nonexperimental operating conditions. The usability of BCI systems is hindered by the involved and frequent procedures required for configuration and calibration. Such an obstruction to a smooth user experience may be mitigated by automated recalibration algorithms [28].

The range of possible EEG-based BCI applications is very broad, from very simple to complex, including generic cursor control applications, spelling devices [29], gaming [30,31], navigation in virtual reality [32], environmental control [33,34], and control of robotic devices [19,35]. To highlight the capabilities of EEG-based BCIs, some applications focusing on their use in control and automation systems will be summarized.
A large body of studies has accumulated over the decades on BCI, in terms of both methodological research and real-world applications. The aim of the present review paper is to provide readers with a comprehensive overview of the heterogeneous topics related to BCI by grouping its applications and techniques into two macro-themes. The first one concerns low-level brain signal acquisition methods, and the second one relates to the use of brain signal acquisition at higher levels to control artificial devices. Specifically, the literature on BCI has been collected and surveyed in this review with the aim of highlighting the following:
1. the main aspects of BCI systems related to signal acquisition and processing;
2. BCI applications to controlling robots and vehicles, assistive devices, and automation in general.
The present paper is organized as follows: Section 2 presents a review of the state-of-the-art in brain-computer interfacing. Section 3 discusses in detail electrophysiological recordings for brain-computer interfacing. Section 4 illustrates the main applications of BCIs in automation and control problems. Section 5 introduces current limitations and challenges of BCI technologies. Section 6 concludes the paper.
The content of the present tutorial paper aims to offer a glimpse of the multifaceted and multidisciplinary world of brain-computer interfacing and allied topics.

BCI Systems: Interaction Modality and Signal Processing Technique
A BCI system typically requires the acquisition of brain signals, the processing of such signals through specifically designed algorithms, and their translation into commands to external devices [36]. Effective usage of a BCI device entails a closed loop of sensing, processing, and actuation. In the sensing process, bio-electric signals are sensed and digitized before being passed to a computer system. Signal acquisition may be realized through a number of technologies, ranging from noninvasive to invasive. In the processing phase, a computing platform interprets fluctuations in the signals through an understanding of the underlying neurophysiology in order to discern user intent from the changing signal. The final step is the actuation of such an intent, in which it is translated into specific commands for a computer or robotic system to execute. The user can then receive feedback, adjust his/her thoughts, and generate new, adapted signals for the BCI system to interpret.
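As an illustration only, the closed loop just described can be condensed into a single function. The four callables (`acquire`, `decode`, `actuate`, `feedback`) are hypothetical placeholders standing in for an amplifier driver, a classifier, an effector, and a feedback display; they are not part of any real BCI framework:

```python
import numpy as np

def bci_loop_step(acquire, decode, actuate, feedback):
    """One pass through the sensing -> processing -> actuation -> feedback loop.

    All four arguments are callables supplied by a concrete system:
    acquire() returns digitized brain signals, decode() infers the user's
    intent, actuate() drives the external device, and feedback() closes
    the loop back to the user.
    """
    raw = acquire()            # sensing: digitized bio-electric signals
    command = decode(raw)      # processing: discern user intent
    result = actuate(command)  # actuation: command the external device
    feedback(result)           # feedback: let the user adapt
    return command
```

In a real system, this step would run continuously, with the user's adapted brain signals feeding the next `acquire()` call.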

BCI Systems Classification
Several different methods can be used to record brain activity [37], such as EEG at the scalp level, electrocorticography (ECoG), magnetoencephalography (MEG), positron emission tomography (PET), near-infrared spectroscopy (a technique that is able to detect changes in oxygenated and deoxygenated hemoglobin when the brain is activated and restored to its original state), or functional magnetic resonance imaging (fMRI) [38,39]. ECoG requires surgery to fit the electrodes onto the surface of the cortex, making it an invasive technique [40]. MEG [41,42], PET [43][44][45], and fMRI [46][47][48] require very expensive and cumbersome equipment and facilities. Careful development of a set of calibration samples and application of multivariate calibration techniques are essential for near-infrared spectroscopy analytic methods. In general, BCI systems are classified with respect to the technique used to pick up the signals as invasive, when the brain signals are picked up by surgically implanted sensors, partially invasive, or noninvasive.
Noninvasive devices are easy to wear, but they produce poor signal resolution because the skull dampens signals, dispersing and blurring the electromagnetic waves created by the neurons. Even though brain waves can still be detected, it is more difficult to determine the area of the brain that created the recorded signals or the actions of individual neurons. EEG is the most studied noninvasive interface, mainly due to its fine temporal resolution, ease of use, portability, and low setup cost. A typical EEG recording setup is shown in Figure 1. The brain is extremely complex (there are about 100 billion neurons in a human brain, and each neuron is constantly sending and receiving signals through an intricate network of connections). Assuming that all thoughts or actions are encoded in electric signals in the brain is a gross oversimplification, as there are chemical processes involved as well, which EEGs are unable to pick up on. Moreover, the recorded EEG signal is weak and prone to interference: EEGs measure tiny voltage potentials, while the blinking eyelids of the subject can generate much stronger signals. Another substantial barrier to using EEG as a BCI is the extensive training required before users can effectively work with such technology.
ECoG measures the electrical activity of the brain from beneath the skull in a similar way to noninvasive electroencephalography, but the electrodes are embedded in a thin plastic pad that is placed directly above the cortex, beneath the dura mater. The survey in [49] overviews progress in a recent field of applied research, micro-electrocorticography (µECoG). Miniaturized implantable µECoG devices possess the advantage of providing higher-density neural signal acquisition and stimulation capabilities in a minimally invasive implant. The increased spatial resolution of a µECoG array is useful for greater specificity in the diagnosis and treatment of neuronal diseases. In general, invasive devices to measure brain signals are based on electrodes directly implanted in the grey matter of the patient's brain during neurosurgery. Because the electrodes lie in the grey matter, invasive devices produce the highest-quality signals of BCI devices but are prone to scar-tissue build-up, causing the signal to become weaker or even null as the body reacts to the foreign object in the brain. Partially invasive BCI devices are implanted inside the skull but rest outside the brain rather than within the grey matter. They produce better-resolution signals than noninvasive BCIs, in which the bone tissue of the cranium deflects and deforms the signals, and carry a lower risk of scar-tissue formation in the brain than fully invasive BCIs.
Invasive and partially invasive technologies remain limited to healthcare fields [50]. Medical-grade brain-computer interfaces are often used to assist people with damage to their cognitive or sensorimotor functions. Neuro-feedback is starting to be used for stroke patients by physical therapists to assist in visualizing brain activity and in promoting brain plasticity [51]. Such plasticity enables the nervous system to adapt to environmental pressures, physiologic changes, and experiences [52]. Because of the plasticity of the brain, undamaged parts of the brain can take over the functions disabled by the suffered injuries. Indeed, medical-grade BCI was originally designed for the rehabilitation of patients following paralysis or loss of limbs. In this way, the control of robotic arms or the interaction with computerized devices can be achieved by conscious thought processes, whereby users imagine the movement of neuroprosthetic devices to perform complex tasks [53]. Brain-computer interfacing has also demonstrated profound benefits in correcting blindness through phosphene generation, thereby introducing a limited field of vision to previously sightless patients [54]. Studies have shown that patients with access to BCI technologies recover faster from serious mental and physical traumas than those who undergo traditional rehabilitation methods [55]. For this very reason, the use of BCI has also expanded (albeit tentatively) into the fields of Parkinson's disease, Alzheimer's disease, and dementia research [56,57].
A class of wireless BCI devices was designed as pill-sized chips of electrodes implanted on the cortex [58]. Such a small volume houses an entire signal processing system: a lithium-ion battery, ultralow-power integrated circuits for signal processing and conversion, wireless radio and infrared transmitters, and a copper coil for recharging. All the wireless and charging signals pass through an electromagnetically transparent sapphire window. Not all wireless BCI systems are integrated and fully implantable. Several proof-of-concept demonstrations have shown encouraging results, but barriers to clinical translation still remain. In particular, intracortical prostheses must satisfy stringent power dissipation constraints so as not to damage the cortex [59]. Starting from the observation that approximately 20% of traumatic cervical spinal cord injuries result in tetraplegia, the authors of [60] developed a semi-invasive technique that uses an epidural wireless brain-machine interface to drive an exoskeleton. BCI technology can empower individuals to directly control electronic devices located in smart homes/offices and associated robots via their thoughts. This process requires efficient transmission of ECoG signals from the electrodes implanted inside the brain to an external receiver located on the scalp. The contribution [61] discusses efficient, low-complexity, and balanced BCI communication techniques to mitigate interference.

Elicitation of Brain Signals
Brain-computer interfaces were inherently conceived to acquire neural data and to exploit them to control different devices and appliances. Based on the way the neural data are elicited and used, BCI systems can be classified into four typologies [62]:

1. Active: In this instance, a BCI system acquires and translates neural data generated by users who are voluntarily engaged in predefined cognitive tasks for the purpose of "driving" the BCI.
2. Reactive: Such an instance makes use of neural data generated when users react to stimuli, often visual or tactile.
3. Passive: Such an instance refers to the case in which a BCI serves to acquire neural data generated when users are engaged in cognitively demanding tasks.
4. Hybrid: Such an instance is a mixture of active, reactive, and passive BCI and possibly a further data acquisition system.
Cognitive brain systems are amenable to conscious control, yielding better regulation of the magnitude and duration of localized brain activity. The signal generation, acquisition, processing, and commanding chain in a BCI system is illustrated in Figure 2. Likewise, the visual cortex is a focus of signal acquisition, since the electrical signals picked up from the visual cortex tend to synchronize with external visual stimuli hitting the retina. One of the most significant obstacles that must be overcome in pursuing the utilization of brain signals for control is the establishment of a valid method to extract event-related information from a real-time EEG [63]. Most BCIs rely on one of three types of mental activity, namely, motor imagery [64], P300 [65], and steady-state visually evoked potentials (SSVEPs) [66]. Some BCIs may utilize more than one such mental activity; hence, they are referred to as "hybrid BCIs" [67]. Once brain signal patterns are translated in relation to cognitive tasks, BCI systems can decode the user's goals. By manipulating such brain signals, patients can express their intent to the BCI system, and these brain signals can act as control signals in BCI units.
Some users experience significant difficulty in using BCI technologies. It is reported that approximately 15-30% of users cannot modulate their brain signals, which results in the inability to operate BCI systems [68]. Such users are called "BCI-illiterate" [69]. The sensorimotor EEG changes of the motor cortex during active and passive movement and during motor imagery are similar. The study [70] showed that it is possible to use classifiers trained on data from passive and active hand movement to detect motor imagery. Hence, a physiotherapy session for a stroke patient could be used to obtain data to learn a classifier, and the BCI-rehabilitation training could start immediately.
P300 is a further type of brain activity that can be detected by means of EEG recordings. P300 is a brainwave component that occurs after a stimulus that is deemed "important". The existence of the P300 response may be verified by the standard "oddball paradigm", which consists of presenting a deviant stimulus within a stream of standard stimuli; the deviant stimulus elicits the P300 component. In the EEG signal, P300 appears as a positive wave 300 ms after stimulus onset and serves as a link between stimulus characteristics and attention. In order to record P300 traces, the electrodes are placed over the posterior scalp. Attention and working memory are considered the cognitive processes underlying P300 amplitude [71]. In fact, it has been suggested that P300 is a manifestation of a context-updating activity occurring whenever one's model of the environment is revised. The study in [71] investigated the support of attentional and memory processes in controlling a P300-based BCI in people with amyotrophic lateral sclerosis.
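As a minimal sketch of how the oddball paradigm reveals P300, the following code averages single-channel EEG epochs time-locked to the deviant stimuli. The function and its parameters are illustrative only; real pipelines add band-pass filtering and artifact rejection before averaging:

```python
import numpy as np

def average_epochs(eeg, onsets, fs, tmin=-0.1, tmax=0.6):
    """Average single-channel EEG epochs time-locked to stimulus onsets.

    eeg    : 1-D array, one channel (e.g., at a posterior site), in microvolts
    onsets : sample indices of the deviant ("oddball") stimuli
    fs     : sampling rate in Hz
    Returns (times, erp): the time axis in seconds and the averaged epoch,
    baseline-corrected over the pre-stimulus interval.
    """
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for o in onsets:
        if o - pre < 0 or o + post > len(eeg):
            continue                      # skip epochs that run off the recording
        ep = eeg[o - pre:o + post].astype(float)
        ep -= ep[:pre].mean()             # subtract the pre-stimulus baseline
        epochs.append(ep)
    erp = np.mean(epochs, axis=0)
    times = (np.arange(erp.size) - pre) / fs
    return times, erp
```

Averaging cancels the background EEG, which is uncorrelated with the stimuli, while the stimulus-locked P300 deflection survives and emerges as a positive peak near 300 ms.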
The study in [72] investigated a BCI technology based on EEG responses to vibrotactile stimuli around the waist. P300-BCIs based on tactile stimuli have the advantage of not taxing the visual or auditory system, making them especially suitable for patients whose vision or eye movements are impaired. The contribution [73] suggested a method for the extraction of discriminative features in electroencephalography evoked-potential latency. Based on offline results, evidence was presented indicating that a full surround-sound auditory BCI paradigm has potential for an online application. The auditory spatial BCI concept is based on a directional audio stimuli delivery technique, which employs a loudspeaker array. The stimuli presented to the subjects vary in frequency and timbre. Such research resulted in a methodology for finding and optimizing evoked response latencies in the P300 range in order to classify the subject's chosen targets (or the ignored non-targets).
The SSVEP paradigm operates by exposing the user to oscillating visual stimuli (e.g., flickering light-emitting diodes (LEDs) or phase-reversing checkerboards). Electrical activity corresponding to the frequency of such oscillation (and its multiples) can be measured from the occipital lobe of the brain. The user issues a command by choosing a stimulus (and therefore a frequency). Figure 3 shows an experimental setup where a patient gazes at a screen that presents eight different oscillating patterns. BCI also encompasses significant target systems outside clinical scopes. Based on visually evoked responses to flickers of different frequencies, SSVEP systems can be used to provide different inputs in control applications. Martišius et al. [74] described SSVEP as a means to successfully control computer devices and games. Specifically, in [74], SSVEP is used to build a human-computer interaction system for decision-making improvement. In particular, in this work, a model of traffic lights is designed as a case study. The experiments carried out on such a model, which involved decision-making situations, showed that an SSVEP-BCI system can assist people in making decisions correctly.
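A toy illustration of SSVEP target detection: the attended flicker frequency can be estimated by comparing spectral power at the candidate frequencies and their harmonics. This is a simplified stand-in for the canonical-correlation detectors commonly used in practice; the function and its parameters are not from any cited work:

```python
import numpy as np

def ssvep_classify(signal, fs, freqs, harmonics=2):
    """Guess the attended flicker frequency from one occipital EEG channel.

    Each candidate frequency is scored by the summed FFT power at its
    fundamental and harmonics; the best-scoring frequency is returned.
    """
    power = np.abs(np.fft.rfft(signal)) ** 2
    resolution = fs / len(signal)            # Hz per FFT bin
    scores = []
    for f in freqs:
        bins = [int(round(h * f / resolution)) for h in range(1, harmonics + 1)]
        scores.append(sum(power[b] for b in bins))
    return freqs[int(np.argmax(scores))]
```

Because the occipital response locks to the stimulus frequency and its multiples, the correct candidate accumulates far more power than the others even in noisy recordings.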

Synchronous and Asynchronous Interaction
A first aspect that can be considered in analysing BCI-based control techniques relates to the use of synchronous or asynchronous protocols. In synchronous protocols, the system indicates to the user the moments when he/she must attend to a cognitive process. The signals then need to be processed during a concrete interval of time before the decision is taken. Systems relying on this kind of protocol are slow. Synchronous protocols make the processing of the acquired signals easier because the starting time is known and the differences with respect to the background can be measured [75]. On the other hand, asynchronous protocols [28,76] are more flexible because the user is not restricted in time and he/she can think freely [77]. In synchronous protocols, the acquired signals are time-locked to externally paced cues repeated every time (the system controls the user), while in asynchronous protocols, the user can think of some mental task at any time (the user controls the system). Asynchronous BCIs [28,76] require extensive training (of the order of many weeks), their performance is user-dependent, and their accuracy is not as high as that of synchronous ones [78]. On the other hand, synchronous BCIs require minimal training and have stable performance and high accuracy. An asynchronous BCI is more realistic and practical than a synchronous system in that BCI commands can be generated whenever the user wants [79].
Asynchronous BCI systems are more practicable than synchronous ones in real-world applications. A key challenge in asynchronous BCI design is to discriminate between intentional-control and non-intentional-control states. In the contribution [66], a two-stage asynchronous protocol for an SSVEP-based BCI was introduced. This visual-evoked-potential-based asynchronous BCI protocol was extended to mixed frequency- and phase-coded visual stimuli [80]. Brain-computer interfacing systems can also use pseudo-random stimulation sequences on a screen (code-based BCI) [81]. Such a system can control a robotic device. In this case, the BCI controls may be overlaid on the video that shows a robot performing certain tasks.
Hybrid BCIs combine different input signals to provide more flexible and effective control. The combination of these input signals makes it possible to use a BCI system for a larger patient group and to make the system faster and more reliable. Hybrid BCIs can also use one brain signal and a different type of input, such as an electrophysiological signal (e.g., the heart rate) or a signal from an external device such as an eye-tracking system. For instance, a BCI can be used as an additional control channel in a video game that already uses a game pad, or it can complement other bio-signals such as the electrocardiogram or blood pressure in an application monitoring a driver's alertness [14]. The contribution [67] describes BCIs whose functioning is based on the classification of two EEG patterns, namely, the event-related (de)synchronisation of sensorimotor rhythms and SSVEPs.

Preprocessing and Processing Techniques
The technical developments that assist research into EEG-based communication can be split into the development of signal processing algorithms, the development of classification algorithms, and the development of dynamic models. Spontaneous EEG signals can be separated from the background EEG using finite impulse response band-pass filters or fast Fourier transform algorithms. Other spontaneous EEG signals, for example, those associated with mental tasks such as mental arithmetic, geometric figure rotation, or visual counting, are better recognized using autoregressive features [82].
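A finite impulse response band-pass filter of the kind mentioned above can be sketched with a windowed-sinc design. This is illustrative only; the band edges, tap count, and Hamming window below are arbitrary choices, not those of any cited study:

```python
import numpy as np

def fir_bandpass(low, high, fs, numtaps=251):
    """Windowed-sinc FIR band-pass kernel (Hamming window).

    low, high : band edges in Hz; fs : sampling rate in Hz.
    The band-pass is built as the difference of two ideal low-pass
    impulse responses, then tapered by a Hamming window.
    """
    n = np.arange(numtaps) - (numtaps - 1) / 2

    def ideal_lowpass(fc):
        return 2 * fc / fs * np.sinc(2 * fc / fs * n)

    h = ideal_lowpass(high) - ideal_lowpass(low)
    return h * np.hamming(numtaps)

def bandpass_filter(x, low, high, fs):
    """Zero-phase band-pass filtering via convolution with the symmetric kernel."""
    return np.convolve(x, fir_bandpass(low, high, fs), mode="same")
```

Filtering a mixture of 10 Hz and 40 Hz sinusoids with an 8-12 Hz band retains the alpha-range component and suppresses the rest; because the kernel is symmetric, the centered convolution introduces no phase shift.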
In the article [83], Butkevičiūtė evaluated the effects of movement artifacts during EEG recording, which could significantly affect the use of EEG for control.
EEG artifacts may significantly affect the accuracy of feature extraction and data classification in BCIs. For example, the EEG artifacts derived from ocular and muscular activities are inevitable and unpredictable due to the physical conditions of the subject. Consequently, the removal of these artifacts is a crucial function of BCI applications to improve their robustness. Nowadays, different tools can be used to achieve eye-blink artifact correction in EEG signal acquisition: the most used methods are based on regression techniques and Independent Component Analysis (ICA) [84]. The choice of the most suitable method depends on the specific application and on the limitations of the method itself. In fact, regression-based methods require at least one Electrooculography (EOG) channel, and their performance is affected by the mutual contamination between EEG and EOG signals. With ICA, the EEG signals are projected onto an ICA embedding [85]. ICA-based methods require a higher number of electrodes and a greater computational effort compared with regression-based techniques. The contribution [84] proposed a new regression-based method and compared it with three of the most used algorithms (Gratton, extended InfoMax, and SOBI) for eye-blink correction. The obtained results confirmed that ICA-based methods exhibit significantly different behaviors with respect to the regression-based methods, with a larger reduction in the alpha band of the power spectral density over the frontal electrodes. Such results highlighted that the proposed algorithm can provide comparable performance in terms of blink correction without requiring EOG channels, a high number of electrodes, or a high computational effort, thus preserving EEG information from blink-free signal segments. More generally, blind source separation (BSS) is an effective and powerful tool for signal processing and artifact removal from electroencephalographic signals [86]. For high-throughput applications such as BCIs, cognitive neuroscience, or clinical neuromonitoring, it is of prime importance that blind source separation be performed effectively in real time. In order to improve the throughput of a BSS-based BCI in terms of speed, the optimal parallelism environment that hardware provides may be exploited. The obtained results show that a co-simulation environment greatly reduces computation time.
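In its simplest single-channel form, the regression-based correction discussed above reduces to estimating a propagation coefficient by least squares and subtracting the scaled EOG reference. This is a sketch of the classic approach, not the algorithm proposed in [84]:

```python
import numpy as np

def regress_out_eog(eeg, eog):
    """Correct one EEG channel for ocular artifacts by linear regression.

    Models eeg = clean + b * eog, estimates the propagation coefficient b
    by least squares, and subtracts the scaled EOG reference. As noted in
    the text, any genuine EEG leaking into the EOG channel is removed as
    well (mutual contamination), which is the method's main limitation.
    """
    eeg = eeg - eeg.mean()
    eog = eog - eog.mean()
    b = np.dot(eog, eeg) / np.dot(eog, eog)   # least-squares slope
    return eeg - b * eog
```

ICA-based alternatives avoid the dedicated EOG channel at the price of more electrodes and computation, which is exactly the trade-off compared in [84].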
Once the features of the acquired brain signals are extracted by means of a signal processing algorithm, they can be classified using an intelligent/adaptive classifier. Typically, a choice between different classifiers is performed, although ways of combining predictions from several nonlinear regression techniques can be exploited. Some research endeavors have focused on the application of dynamic models such as the combined Hidden Markov Autoregressive model [87] to process and classify the acquired signals, while others have focused on the use of Hidden Markov Models and Kalman filters [88]. The contribution in [89] suggested eliminating redundancy in high-dimensional EEG signals and reducing the coupling among different classes of EEG signals by means of principal component analysis and employing Linear Discriminant Analysis (LDA) to extract features that represent the raw signals; subsequently, a voting-based extreme learning machine (ELM) method was used to classify such features. The contribution in [90] proposed the use of indexes applied to BCI recordings, such as the largest Lyapunov exponent, the mutual information, the correlation dimension, and the minimum embedding dimension, as features for the classification of EEG signals. A multi-layer perceptron classifier and a Support-Vector Machine (SVM) classifier based on k-means clustering were used to accomplish the classification. A support-vector-machine-based classification method applied to P300 data was also discussed in the contribution [91].
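As an illustrative sketch of the discriminant-analysis step mentioned above, the following is a from-scratch two-class Fisher discriminant on feature vectors; it is a generic textbook construction, not the pipeline of [89]:

```python
import numpy as np

class FisherLDA:
    """Two-class linear discriminant on feature vectors (from scratch)."""

    def fit(self, X, y):
        X0, X1 = X[y == 0], X[y == 1]
        m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
        # Pooled within-class scatter, slightly ridge-regularized for stability.
        Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
        Sw += 1e-6 * np.eye(X.shape[1])
        self.w = np.linalg.solve(Sw, m1 - m0)     # discriminant direction
        self.b = -self.w @ (m0 + m1) / 2          # threshold at the class midpoint
        return self

    def predict(self, X):
        return (X @ self.w + self.b > 0).astype(int)
```

The projection direction maximizes between-class separation relative to within-class spread, which is why LDA remains a common, lightweight choice for EEG feature classification.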

Use of BCI in Medical and General Purpose Applications
An early application of BCI was to neural prosthetic implants, which showed several potential uses for recording neuronal activity and stimulating the central nervous system as well as the peripheral nervous system.The key goal of many neuroprosthetics is operation through closed-loop BCI systems (the loop is from the measurement of brain activity, classification of data, feedback to the subject, and the effect of feedback on brain activity [92]), with a channel for relaying tactile information.To be efficient, such systems must be equipped with neural interfaces that work in a consistent manner for as long as possible.In addition, such neuroprosthetic systems must be able to adapt the recording to changes in neuronal populations and to tolerate physical real-life environmental factors [93].Visual prosthetic development has one of the highest priorities in the biomedical engineering field.Complete blindness from retinal degeneration arises from diseases such as the age-related macular degeneration, which causes dystrophy of photoreceptor cells; from artery or vein occlusion; and from diabetic retinopathy.Functional vision can be achieved by converting images into binary pulses of electrical signals and by delivering them to the visual cortex.Sensations created are in the form of bright spots referred to as visual perception patterns.Current developments appear in the form of enhanced image processing algorithms and data transfer approaches, combined with emerging nanofabrication and conductive polymerization [94].Likewise, cochlear implants use electrical signals that directly stimulate the sensory epithelium of the basilar membrane to produce auditory stimuli [95].The highest levels of success were seen in subjects who had some sense of hearing during their critical developmental periods.Better resolution of sensory input is achieved by providing it to the cortex rather than the auditory nerve.Such implants may be placed into the cochlear nerve/pons junction, when 
the auditory nerve has been damaged; into the cochlear nucleus; or into the inferior colliculus [96]. BCI could also prove useful in psychology, where it might help in establishing the psychological state of patients, as described in [97]. A patient's emotional state can be predicted by examining the electrical signals generated by the brain. Krishna et al. [97] proposed an emotion classification tool based on a generalized mixture model and obtained 89% classification precision in recognizing happiness, sadness, boredom, and neutral states in terms of valence. Further applications have been proposed in the biometric field, creating an EEG-based cryptographic authentication scheme [98] and obtaining important results based on the commitment scheme adopted from [99].
In recent years, the motivation for developing brain-computer interfaces has been not only to provide an alternate communication channel for severely disabled people but also to use BCIs for communication and control in industrial environments and consumer applications [14]. BCI is useful not only for communication but also for allowing the mental states of the operator to trigger actions in the surroundings [18]. In such instances, BCI is used as a system that allows direct communication between the brain and distant devices. Over the last years, research efforts have been devoted to its use in smart environmental control systems, fast and smooth movement of robotic arm prototypes, and motion planning of autonomous or semi-autonomous vehicles. For instance, one study developed an SSVEP-BCI for controlling a toy vehicle [100]. Figure 4 shows an experimental setup for controlling a toy vehicle through SSVEP-BCI technology. In particular, BCIs have been shown to achieve excellent performance in controlling robotic devices using only signals sensed from brain implants. Until now, however, BCIs successful in controlling robotic arms have relied on invasive brain implants. These implants require a substantial amount of medical and surgical expertise to be correctly installed and operated, not to mention the cost and potential risks to human subjects. A great challenge in BCI research is to develop less invasive or even totally noninvasive technologies that would allow paralyzed patients to control their environment or robotic limbs using their own thoughts. A noninvasive counterpart requiring less intervention that can provide high-quality control would greatly improve the integration of BCIs into clinical and home settings. Noninvasive neuroimaging and increased user engagement improve EEG-based neural decoding and facilitate real-time 2D robotic device control [101]. Once the basic mechanism of converting thoughts to computerized or robotic action is perfected, the potential uses for the
technology will be almost limitless. Instead of a robotic hand, disabled users could have robotic braces attached to their own limbs, allowing them to move and directly interact with the environment. Signals could be sent to the appropriate motor control nerves in the hands, bypassing a damaged section of the spinal cord and allowing actual movement of the subject's own hands [102]. Breakthroughs are required in the areas of usability, hardware/software, and system integration, but for successful development, a BCI should also take user characteristics and acceptance into account.
Efforts have also been focused on developing potential applications in multimedia communication and relaxation (such as immersive virtual reality control). Computer gaming, in particular, has benefited immensely from the commercialization of BCI technology, whereby users can act out first-person roles through thought processes. Indeed, using brain signals to control game play opens many possibilities beyond entertainment, since neurogaming has potential for accelerating wellness, learning, and other cognitive functions. Major challenges must be tackled for BCIs to mature into an established communications medium for virtual-reality applications; these range from basic neuroscience studies to developing optimal peripherals, mental gamepads, and more efficient brain-signal processing techniques [103].
An extended BCI technology is the "collaborative BCI" [104]. In this method, the tasks are performed by multiple people rather than just one person, which can make the result more efficient. Brain-computer interfacing relies on the focused concentration of a subject for optimal performance, whereas, in a collaborative BCI, if one person loses concentration for any reason, the other subjects compensate and produce the missing commands. Collaborative BCIs have found widespread application in learning and communication, as well as in improving social, creative, and emotional skills. The authors of [105] built a collaborative BCI and focused on the role of each subject in a study group, trying to answer the question of whether some subjects would be better removed or are fundamental to enhancing group performance. The goal of the paper in [106] was to introduce a new way to reduce the time needed to identify a message or command. Instead of relying on brain activity from one subject, the system proposed by the authors of [106] utilized brain activity from eight subjects performing a single trial. Hence, the system could rely on an average based on eight trials, which is more than sufficient for adequate classification, even though each subject contributed only one trial. The paper in [107] presents a detailed review of BCI applications for the training and rehabilitation of students with neurodevelopmental disorders.
Machine-learning and deep-learning approaches based on the analysis of physiological data play a central role since they can provide a means to decode and characterize task-related brain states (i.e., reducing a multidimensional problem to a one-dimensional one) and to differentiate relevant brain signals from those that are not task-related. In this regard, researchers have tested BCI systems in daily-life applications, illustrating the development and effectiveness of this technique [18,108,109]. EEG-based passive BCI (pBCI) has become a relevant tool for real-time analysis of brain activity since it can potentially provide information about the operator's cognitive state without distracting the user from the main task and free from any conditioning by the subjective judgment of an observer or of the user [25]. Another growing area for BCIs is mental workload estimation, exploring the effects of mental workload and fatigue upon the P300 response (used for word-spelling BCIs) and the alpha-theta EEG bands. In this regard, there is currently a movement within the BCI community to integrate other signal types into "hybrid BCIs" [67] to increase the granularity of the monitored response.

Ethical and Legal Issues in BCI
Brain-computer interfacing research poses ethical and legal issues related to the questions of mind-reading and mind-conditioning. BCI research and its translation to therapeutic intervention have given rise to significant ethical, legal, and social concerns, especially about personhood, stigma, autonomy, privacy, safety, responsibility, and justice [110,111]. Talk of "brain hacking" and "brain phishing" has triggered a debate among ethicists, security experts, and policymakers to prepare for a world of BCI products and services by predicting the implications of BCI for liberty and autonomy.
Concerning the mind-reading issue, for example, the contribution in [112] discusses topics such as the representation of persons with communication impairments, dealing with technological complexity and moral responsibility in multidisciplinary teams, and managing expectations, ranging from an individual user to the general public. Furthermore, the contribution in [112] illustrates that ethical concerns arise where treatment and research interests conflict.
With reference to the mind-conditioning issue, the case of deep brain stimulation (DBS) is discussed in [113]. DBS is currently used to treat neurological disorders such as Parkinson's disease, essential tremor, and dystonia and is explored as an experimental treatment for psychiatric disorders such as major depression and obsessive-compulsive disorder. Fundamental ethical issues arise in DBS treatment and research, the most important of which are balancing risks and benefits and ensuring respect for the autonomous wishes of the patient. This implies special attention to patient selection, the psycho-social impact of treatment, and effects on personal identity. Moreover, it implies a careful informed-consent process in which unrealistic expectations of patients and their families are addressed and in which special attention is given to competence. A fundamental ethical challenge is to promote high-quality scientific research in the interest of future patients while at the same time safeguarding the rights and interests of vulnerable research subjects.

Data Security
Data security and privacy are important aspects of virtually every technological application related to human health. With the digitization of almost every aspect of our lives, privacy leakage in cyberspace has become a pressing concern [114]. Considering that the data associated with BCI applications are highly sensitive, special attention should be given to preserving their integrity and security. Such a topic is treated in a number of recent scientific papers.
The objective of the paper in [115] is to ensure security in a network supporting BCI applications that identify brain activities in real time. To achieve such a goal, the authors proposed a Radio Frequency Identification (RFID)-based system made of semi-active RFID tags placed on the patient's scalp. Such tags transmit the collected brain activities wirelessly to a scanner controller, which consists of a mini-reader and a timer integrated together for every patient. Additionally, the paper proposed a novel interface prototype called the "BCI Identification System" to assist the patient in the identification process.
BCI data represent an individual's brain activity at a given time. Like many other kinds of data, BCI data can be utilized for malicious purposes: a malicious BCI application (e.g., a game) could allow an attacker to phish an unsuspecting user enjoying a game and record the user's brain activity. By analyzing the unlawfully collected data, the attacker could infer private information and characteristics regarding the user without the user's consent or awareness. The paper in [114] demonstrates the ability to predict and infer meaningful personality traits and cognitive abilities by analyzing resting-state EEG recordings of an individual's brain activity using a variety of machine learning methods.
Unfortunately, manufacturers of BCI devices focus on application development without paying much attention to security- and privacy-related issues. Indeed, an increasing number of attacks on BCI applications has exposed the existence of such issues. For example, malicious developers of third-party applications could extract private information about users. In the paper [116], the authors focused on the security and privacy of BCI applications. In particular, they classified BCI applications into four usage scenarios: (1) neuromedical applications, (2) user authentication, (3) gaming and entertainment, and (4) smartphone-based applications. For each usage scenario, the authors discussed security and privacy issues and possible countermeasures.

Performance Metrics
Metrics represent a critical component of the overall BCI experience; hence, it is of prime importance to develop and evaluate BCI metrics. Recent research papers have investigated performance metrics in specific applications, as outlined below.
Affective brain-computer interfaces are a relatively new area of research in affective computing. The estimation of affective states could improve human-computer interactions as well as the care of people with severe disabilities [117]. The authors of [117] reviewed articles published in the scientific literature and found that a significant number of them did not consider the presence of class imbalance. To properly account for the effect of class imbalance, they suggest the use of balanced accuracy as a performance metric and of its posterior distribution for computing credible intervals.
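To illustrate why balanced accuracy matters for imbalanced affect datasets, here is a minimal sketch (with made-up labels, not data from [117]) comparing plain accuracy against balanced accuracy, computed as the mean of per-class recalls:

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls; robust to class imbalance."""
    recalls = []
    for c in np.unique(y_true):
        mask = (y_true == c)
        recalls.append(np.mean(y_pred[mask] == c))
    return float(np.mean(recalls))

# Imbalanced toy set: 90 "neutral" trials vs. 10 "happy" trials
y_true = np.array([0] * 90 + [1] * 10)
y_pred = np.zeros(100, dtype=int)          # classifier always predicts "neutral"

print(np.mean(y_true == y_pred))           # plain accuracy: 0.9 (misleading)
print(balanced_accuracy(y_true, y_pred))   # balanced accuracy: 0.5 (chance)
```

A degenerate classifier that always predicts the majority class looks strong under plain accuracy but is correctly exposed as chance-level by balanced accuracy.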
The research work summarized in [118] investigated the performance impact of time-delay data augmentation in motor-imagery classifiers for BCI applications. The considered strategy is an extension of the common spectral-spatial patterns method, which consists of accommodating available additional information about intra- and inter-electrode correlations into the information matrix employed by the conventional Common Spatial Patterns (CSP) method. Experiments based on EEG signals from motor-imagery datasets, in a context of differentiation between binary (left- and right-hand) movements, resulted in an overall classification improvement. The analyzed time-delay data-augmentation method improves motor-imagery BCI classification accuracy with a predictable increase in computational complexity.
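The conventional CSP method that the augmentation strategy in [118] extends can be sketched as a generalized eigenvalue problem on the two class-covariance matrices. The following is a minimal illustration on synthetic trials (the shapes, variable names, and data are assumptions for illustration, not the authors' implementation):

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=2):
    """Spatial filters maximizing the variance ratio between two classes.
    trials_*: array of shape (n_trials, n_channels, n_samples)."""
    mean_cov = lambda trials: np.mean([np.cov(t) for t in trials], axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigendecomposition: Ca w = lambda (Ca + Cb) w
    vals, vecs = eigh(Ca, Ca + Cb)
    vecs = vecs[:, np.argsort(vals)[::-1]]
    # Keep filters from both ends (most discriminative for each class)
    keep = list(range(n_filters // 2)) + list(range(-(n_filters // 2), 0))
    return vecs[:, keep].T

# Synthetic two-class data: class A has extra variance on channel 0,
# class B on channel 2
rng = np.random.default_rng(0)
scale_a = np.array([3.0, 1.0, 1.0])[:, None]
scale_b = np.array([1.0, 1.0, 3.0])[:, None]
trials_a = scale_a * rng.standard_normal((30, 3, 200))
trials_b = scale_b * rng.standard_normal((30, 3, 200))

W = csp_filters(trials_a, trials_b)
print(W.shape)   # (2, 3): two spatial filters over three channels
```

The time-delay augmentation of [118] would enlarge the covariance (information) matrices with delayed copies of the channels; the eigendecomposition step stays the same.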
As BCIs become more prevalent, it is important to explore the array of human traits that may predict an individual's performance when using BCI technologies. The exploration summarized in [119] was based on collecting and analyzing the data of 51 participants. The experiment explored the correlations between performance and demographics. The measurements were related to the participants' ability to control the BCI device and revealed correlations among a variety of traits, including brain hemispheric dominance and the Myers-Briggs personality-type indicator [120]. The preliminary result of the experiment was that a combination of human traits is engaged and disengaged in these individuals while using BCI technologies, affecting their performance.

Further Readings in BCI Challenges and Current Issues
Every technical advancement in a multidisciplinary field such as BCI unavoidably paves the way to new progress as well as unprecedented challenges. The paper in [121] attempted to present an all-encompassing review of brain-computer interfaces and the scientific advancements associated with them. The ultimate goal of such a general overview was to underscore the applications, practical challenges, and opportunities associated with BCI technology.
We found the study in [122] to be very interesting and insightful. The study aimed to assess BCI users' experiences, self-observations, and attitudes in their own right and looked for social and ethical implications. The authors conducted nine semi-structured interviews with medical BCI users. The outcome was that BCI users perceive themselves as active operators of a technology that offers them social participation and impacts their self-definition. Users understand that BCIs can contribute to retaining or regaining human capabilities and that BCI use contains elements that challenge common experiences, for example, when the technology proves to be in conflict with their affective side. The majority of users feel that the potential benefits of BCIs outweigh the risks because the BCI technology is considered to promote valuable qualities and capabilities. BCI users appreciate the opportunity to regain lost capabilities as well as to gain new ones.

Electrophysiological Recordings for Brain-Computer Interfacing
The present section discusses in detail typical EEG recording steps and responses that are widely used in BCI applications.

Electrode Locations in Electroencephalography
In EEG measurement, the electrodes are typically attached to the scalp by means of conductive pastes and special caps (if necessary), although EEG systems based on dry electrodes (which do not need pastes) have been developed. For BCI applications, the EEG signals are usually picked up by multiple electrodes [2,123,124]. Concerning EEG systems, research is currently being conducted toward producing dry sensors (i.e., without any conductive gel) and eventually a water-based technology instead of the classical gel-based technology, providing high signal quality and better comfort. As reported in [125], three different kinds of dry electrodes were compared against a wet electrode. Dry-electrode technology showed excellent standards, comparable to wet electrodes in terms of signal spectra and mental-state classification. Moreover, the use of dry electrodes reduced the time taken to apply the sensors, hence enhancing user comfort. In the case of multi-electrode measurement, the International 10-20 [126], Extended 10-20 [127], and International 10-5 [129,130] methods have stood as the de facto standards for electrode arrangement.
In these systems, the locations on the head surface are described by relative distances between cranial landmarks. In the International 10-20 system, for example, the landmarks are the nasion, a point between the eyes at the level of the eyes, and the inion, which is the most prominent projection of the occipital bone at the posteroinferior (lower rear) part of the human skull [126]. The lines connecting these landmarks along the head surface are divided by marks into short segments of 10% and 20% of the whole length. Each pick-up point is defined as the intersection of the lines connecting these marks along the head surface. Typical electrode arrangements are shown in Figure 5. EEG systems record the electrical potential difference between two electrodes [126]. Referential recording reads the electrical potential difference between a target electrode and a reference electrode that is common to all channels. Earlobes, which are considered electrically inactive, are widely used as reference electrode sites. In contrast, bipolar recording reads the electrical potential difference between pairs of electrodes located on the head surface.
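The recording schemes just described reduce to simple array arithmetic on the channel matrix: re-referencing a referential recording (here to the common average) is a subtraction of a mean, and a bipolar channel is the difference of two neighbours. A toy sketch with random data (the channel names, shapes, and values are illustrative assumptions only):

```python
import numpy as np

# Toy referential recording: 4 scalp channels referenced to linked earlobes,
# shape (n_channels, n_samples)
rng = np.random.default_rng(0)
names = ["Fp1", "Fp2", "C3", "C4"]
eeg = rng.standard_normal((4, 1000))

# Re-reference to the common average of all channels ...
car = eeg - eeg.mean(axis=0, keepdims=True)

# ... or derive a bipolar channel as the difference of two electrodes
bipolar_c3_c4 = eeg[names.index("C3")] - eeg[names.index("C4")]

print(car.shape, bipolar_c3_c4.shape)
```

Because these are linear operations, a montage can be changed offline after recording, provided the original reference is known.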
Electroencephalogram recording systems are generally more compact than the systems used in functional magnetic resonance imaging, near-infrared spectroscopy, and magnetoencephalography. Two further important aspects of brain signal acquisition are temporal and spatial resolution. Spatial resolution refers to the degree of accuracy with which brain activity can be located in space, while temporal resolution refers to the degree of accuracy, on a temporal scale, with which a functional imaging technique can describe a neural event [131].
The temporal resolution of electroencephalography is higher than that afforded by functional magnetic resonance imaging and by near-infrared spectroscopy [132]. However, since it is difficult to make the electrodes smaller, the spatial resolution of electroencephalography is very low compared to that afforded by other devices. In addition, the high-frequency components of the electrical activity of the neurons are attenuated in the observed signals because the physiological barriers between the emitters and the receivers, such as the skull, work as lowpass filters. The signals picked up by the electrodes are also contaminated by noise caused by poor contact of the electrodes with the skin, by muscle movements (electromyographic signals), and by eye movements (electrooculographic signals).

Event-Related Potentials
An event-related potential (ERP) is an electrophysiological response to an external or internal stimulus that can be observed as an electric potential variation in the EEG [133]. A widely used ERP in BCIs is the P300, a positive deflection in the EEG that occurs around 300 ms after a target stimulus has been presented. A target can be given as either a single stimulus or a combined set of visual, auditory, tactile, olfactory, or gustatory stimuli.
An example of the waveform of the P300 is shown in Figure 6. The signal labeled "Target" is the average signal observed in the period of 0-1 s after displaying a visual stimulus to which the subject attended. The signal labeled "Non Target" is the average signal observed in the period of 0-1 s after displaying a stimulus that the subject ignored. It can be easily recognized that there is a difference in the potential curves between "Targets" and "Non Targets" in the period of 0.2-0.4 s. Such a potential, elicited at around 300 ms, is exactly a P300 response. This event-related potential is easily observed in EEG readouts by means of an experimental paradigm called the "oddball task" [134]. In the oddball paradigm, the subject is asked to react (by either counting or button-pressing) to target stimuli that are hidden as rare occurrences among a series of more common stimuli that require no response. A well-known instance of a BCI that makes use of the oddball paradigm is the P300 speller, which aims to input Latin letters according to the user's intent. A typical P300 speller consists of a two-dimensional matrix of Latin letters and symbols, arranged in rows and columns and displayed on a screen, as illustrated in Figure 7. Each row and column flashes for a short time (typically 50-500 ms) in random order, with an interval between successive flashes of 500 to 1000 ms [135]. The user gazes at a target symbol on the screen. When the row or column including the target symbol flashes, the user counts the occurrences of the flash. By detecting a P300 at the flashes of a row and a column, the interface can identify the symbol that the user is gazing at. Although the majority of P300-based BCIs use P300 responses to visual stimuli, visual stimuli are not suitable for patients whose vision or eye movements are impaired. For such users, alternative BCIs that use either auditory or tactile stimuli have been developed. Several giant leaps have been made in the BCI field in the
last years from several points of view. For example, many works have been produced in terms of new BCI spellers. The Hex-O-Spell, a gaze-independent BCI speller that relies on imaginary movement, was first described in 2006 by Blankertz et al. [136] and presented in [76,137]. As reported in [138], the first variation of the Hex-O-Spell was used as an ERP P300 BCI system and was compared with other variations of the Hex-O-Spell utilizing ERP systems (the Cake Speller and the Center Speller) [139]. These two GUIs were developed to be compared with the Hex-O-Spell ERP in [138] for gaze-independent BCI spellers. In [138], the Hex-O-Spell was transformed into an ERP system to test whether ERP spellers could also be gaze-independent. The purpose was to check whether BCI spellers could replace eye-tracker-based speller systems. The aforementioned change in the design could improve the performance of the speller and could provide more useful control without the need for spatial attention. A typical implementation of auditory P300-based interfaces exploits spatially distributed auditory cues generated via multiple loudspeakers [140,141]. Tactile P300-based BCIs are also effective as an alternative to visual-stimuli-based interfaces [72]. In this kind of BCI, vibrating cues are given to users by multiple tactors.
The detection of a P300 response is not straightforward due to the very low signal-to-noise ratio of the observed EEG readouts; thus, it is necessary to record several trials for the same target symbol. Averaging over multiple trials as well as lowpass filtering are essential steps to ensure the recognition of P300 responses. A statistical approach is then used for the detection of a P300 response, such as linear discriminant analysis (LDA) preceded by dimension reduction, including downsampling and principal component analysis, as well as convolutional neural networks [142]. Stepwise linear discriminant analysis is widely used as an implementation of LDA. Moreover, the well-known SVM algorithm is applicable to classifying a P300 response.
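The averaging and LDA steps described above can be sketched end-to-end on synthetic epochs. The waveform, amplitudes, and dimensions below are invented for illustration (real P300 detection uses recorded EEG and held-out data for evaluation):

```python
import numpy as np

rng = np.random.default_rng(0)
fs, down = 256, 8                          # 256 Hz epochs, decimate by 8
t = np.arange(fs) / fs                     # 1-s epochs
p300 = 8e-6 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))  # bump at 300 ms

def epochs(n, target):
    x = 10e-6 * rng.standard_normal((n, fs)) + (p300 if target else 0.0)
    return x[:, ::down]                    # crude dimension reduction

X = np.vstack([epochs(200, True), epochs(200, False)])
y = np.array([1] * 200 + [0] * 200)

# Averaging over trials makes the P300 visible above the noise floor
avg_diff = X[y == 1].mean(0) - X[y == 0].mean(0)
print(f"target/non-target difference peaks near {t[::down][np.argmax(avg_diff)]:.2f} s")

# Fisher LDA: w = Sw^-1 (mu1 - mu0), threshold midway between projected means
mu1, mu0 = X[y == 1].mean(0), X[y == 0].mean(0)
Xc = np.vstack([X[y == 1] - mu1, X[y == 0] - mu0])
Sw = Xc.T @ Xc / len(X)
w = np.linalg.solve(Sw, mu1 - mu0)
scores, thr = X @ w, ((mu1 + mu0) / 2) @ w
print(f"training accuracy: {np.mean((scores > thr) == y):.2f}")
```

Single-trial accuracy stays modest at this signal-to-noise ratio, which is exactly why practical P300 spellers average over repeated flashes of the same row and column.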
Rapid serial visual presentation appears to be one of the most appropriate paradigms for patients using a P300-based brain-computer interface, since ocular movements are not required. However, the use of different locations for each stimulus may improve the overall performance. The paper in [143] explored how spatial overlap between stimuli influences performance in a P300-based BCI. Significant differences in accuracy were found between the 0% overlap condition and all the other conditions, and between the 33.3% condition and the higher overlaps, namely 66.7% and 100%. Such results were explained by hypothesizing a modulation of the non-target stimulus amplitude caused by the overlapping factor.

Evoked Potentials
An evoked potential (EP) is an electrophysiological response of the nervous system to external stimuli. A typical stimulation that elicits evoked potentials can be visual, auditory, or tactile. By modulating the stimulation patterns, it is possible to construct several kinds of external stimuli assigned to different commands.
The evoked potential corresponding to a flickering light falls into the category of steady-state visual evoked potentials (SSVEPs). The visual flickering stimulus excites the retina, which elicits electrical activity at the same frequency as the visual stimulus. When the subject gazes at a visual pattern flickering at a frequency in the range 3-70 Hz, an SSVEP is observed in an EEG signal recorded through electrodes placed in the proximity of the visual cortex [144]. Figure 8 shows the power spectra of the signals observed when a subject gazes at a visual pattern flickering at a frequency of 12 Hz and when the subject does not gaze at any pattern. When the subject gazes at the active stimulus, the power spectrum exhibits a peak at 12 Hz. Visual flickering stimuli can be generated by LEDs [145] or computer monitors [146]. The use of LEDs has the advantage of affording the display of arbitrary flickering patterns, with the drawback that the system requires special hardware, while computer monitors are controlled by software but their flickering frequencies are strongly dependent on the refresh rate. It is important to consider the safety and comfort of visual stimuli: modulated visual stimuli at certain frequencies can provoke epileptic seizures [147], while bright flashes and repetitive stimulation may impair the user's vision and induce fatigue [148].
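Detecting the gazed-at frequency from a spectrum like that of Figure 8 can be as simple as locating the largest spectral peak. A sketch on synthetic single-channel data (the signal model and amplitudes are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
fs, T = 256, 1024                          # 4 s of single-channel "EEG"
t = np.arange(T) / fs
# Synthetic SSVEP: a 12 Hz component buried in broadband noise
eeg = 0.8 * np.sin(2 * np.pi * 12 * t) + rng.standard_normal(T)

spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(T, d=1 / fs)
peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
print(f"spectral peak at {peak:.1f} Hz")
```

In practice, averaging spectra over windows (e.g., Welch's method) and inspecting harmonics makes such peak-picking far more robust than a single raw FFT.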
A BCI can utilize flickering visual stimuli that elicit SSVEPs to implement multiple command inputs. Such an interface has multiple visual targets with different flickering patterns (typically with different frequencies within the range 3-70 Hz [148-150]) for a user to gaze at. By detecting the frequency of the recorded evoked potential, the interface determines the user-intended command. An example of the arrangement of multiple checkerboards on a monitor is illustrated in Figure 9, where six targets are displayed on-screen. Each target makes a monochrome inversion of the checkerboard at a different frequency, as illustrated in Figure 10. When the user gazes at the stimulus flickering at F1 Hz, the steady-state evoked potential corresponding to the frequency F1 Hz is elicited and measured in the observed EEG signal. By analyzing the EEG to recognize the frequency of the SSVEP response, a BCI is able to determine which target the user is gazing at. If a computer monitor is used for displaying the visual targets, the number of available flickering frequencies is limited by its refresh rate. Therefore, this type of BCI could not implement many commands, even while achieving a high input speed [124,132]. Recently, stimulus settings using frequency and phase modulation [151-153] have been developed to provide many commands. A straightforward way to recognize the frequency of an SSVEP is to check the peak of the discrete Fourier transform of an EEG channel (i.e., the signal picked up by a single electrode). For multiple electrodes, a method based on canonical correlation analysis (CCA) [154,155] is also widely used for recognizing the frequency [156]. In this method, the canonical correlation between the M-channel EEG signal and the reference Fourier series with fundamental frequency f, y_f(t) = [sin(2π f t), cos(2π f t), . . . , sin(2π L f t), cos(2π L f t)]^T, where L denotes the number of harmonics, is calculated for all candidate frequencies, f = f_1, . . . , f_N, where N is the number of visual targets. The frequency that gives the maximum canonical correlation is recognized as the frequency of the target that the user is gazing at. This approach can be improved by applying a bank of bandpass filters [157,158].
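A minimal sketch of the CCA-based recognizer described above, using orthonormal bases and an SVD to obtain the largest canonical correlation (the noise model, amplitudes, and variable names are assumptions for illustration):

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between the row spaces of X and Y."""
    X = X - X.mean(axis=1, keepdims=True)
    Y = Y - Y.mean(axis=1, keepdims=True)
    # Orthonormal bases via QR, then SVD of their cross-product
    Qx, _ = np.linalg.qr(X.T)
    Qy, _ = np.linalg.qr(Y.T)
    return float(np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0])

def cca_reference(f, fs, n_samples, harmonics=2):
    """Reference Fourier series y_f(t) with the given number of harmonics."""
    t = np.arange(n_samples) / fs
    rows = []
    for h in range(1, harmonics + 1):
        rows += [np.sin(2 * np.pi * h * f * t), np.cos(2 * np.pi * h * f * t)]
    return np.array(rows)

# Synthetic 3-channel EEG containing a 12 Hz SSVEP plus noise
rng = np.random.default_rng(1)
fs, T = 256, 512
t = np.arange(T) / fs
ssvep = np.sin(2 * np.pi * 12 * t)
eeg = np.vstack([ssvep + 0.5 * rng.standard_normal(T) for _ in range(3)])

candidates = [10, 12, 15]
rhos = {f: max_canonical_corr(eeg, cca_reference(f, fs, T)) for f in candidates}
print(max(rhos, key=rhos.get))   # prints the recognized frequency
```

Each candidate frequency gets its own reference matrix; the winner is simply the candidate whose reference correlates best with some linear combination of the EEG channels.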
Recent studies have pointed out that the nonuniform spectrum of the spontaneous or background EEG can deteriorate the performance of frequency recognition algorithms and have proposed efficient methods to achieve frequency recognition that account for the effects of the background EEG [159-161].
In the CCA method for identifying the target frequency, the reference signal, y(t), can be replaced by calibration signals, which are EEG signals for each target collected over multiple trials. There are several studies on data-driven methods for the CCA [80,162]. Recently, it has been reported [163-165] that, by utilizing the correlation between trials with respect to the same target, recognition accuracy can be improved. This method is called task-related component analysis [166], and it can be applied to detecting the target frequency as well as the target phase.
Readers may wonder whether this type of interface can be replaced by eye-tracking devices, as the latter are much simpler to implement. Indeed, the answer to such a question is mixed. For example, the comparative studies in [167,168] have suggested that, for small targets on the screen, SSVEP-BCIs achieve higher accuracy in command recognition than eye-tracking-based interfaces.
SSVEP-based BCIs suffer from a few limitations. The power of the SSVEP response is very weak when the flicker frequency is larger than 25 Hz [169]. Moreover, every SSVEP elicited by a stimulus with a certain frequency includes secondary harmonics. As mentioned earlier, a computer monitor can generate visual stimuli only with frequencies limited by the refresh rate; hence, the possible frequencies that could be used to form visual targets are strictly limited [146,170]. In order to overcome the limitation of using targets flickering at different frequencies (called "frequency modulation" [171]), several alternative stimuli have been proposed. Time-modulation uses flash sequences of different targets that are mutually independent [172]. Code-modulation is another approach that uses pseudo-random sequences, typically "m-sequences" [81]. A kind of shift-keying technique (typically used in digital communications) is also used to form flickering targets [170]. Another paradigm is waveform-modulation [173], where various types of periodic waveforms, such as rectangular, sinusoidal, and triangular waveforms, are used as stimuli. In addition, it has been experimentally verified that allocating different frequencies as well as phase shifts to visual targets greatly increases the number of targets [161,174]. Surveys that illustrate several methods and paradigms for stimulating visually evoked potentials can be found in [148,175,176].
Additionally, a type of SSVEP obtained by turning the user's attention to the repetition of a short-term stimulus, such as a visual flicker, can be utilized. The repetition of tactile [177] and auditory [178,179] stimuli can likewise evoke such potentials.

Event-Related Desynchronisation/Synchronization
Brain-computer interfaces that use the imagination of muscle movements have been extensively studied. This type of interface recognizes brain activities around the motor cortices associated with imagining movements of body parts such as the hands, feet, and tongue [132]. This kind of interface is called a "motor-imagery-based" BCI (MI-BCI). The MI-BCI is based on the assumption that different motor imagery tasks, such as right-hand movement and left-hand movement, activate different brain areas. The imagined tasks are classified through an algorithmic analysis of the recorded EEG readouts and are chosen by the user to communicate different messages.
Neurophysiological studies suggest [180] that motor imagery tasks decrease the energy in certain frequency bands, called mu (8-15 Hz) and beta (10-30 Hz), in the EEG signals observed through electrodes located over the (sensory) motor cortex [181]. The decrease/increase in energy is called event-related desynchronisation/synchronization [182,183]. There exist several methods to quantify event-related desynchronisation (ERD)/synchronization (ERS) [184]. A typical quantification is the relative power decrease or increase, defined as

ERD/ERS (%) = (P_event / P_ref) × 100,

where the quantity P_event denotes the power within the frequency band of interest in the period after the event and the quantity P_ref denotes the power within the frequency band of interest in the preceding baseline (or reference) period.
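The quantification above can be sketched in a few lines of code; the sampling rate, band limits, and Welch parameters below are illustrative assumptions:

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, band):
    """Mean power of x within a frequency band, via Welch's method."""
    f, psd = welch(x, fs=fs, nperseg=int(fs))
    mask = (f >= band[0]) & (f <= band[1])
    return psd[mask].mean()

def erd_percent(x_ref, x_event, fs=250, band=(8, 15)):
    """Band power of the event period relative to the baseline, in percent.

    Values below 100% indicate desynchronisation (ERD); values above 100%
    indicate synchronization (ERS).
    """
    return 100.0 * band_power(x_event, fs, band) / band_power(x_ref, fs, band)

# Synthetic example: mu-band activity attenuated during the event period
rng = np.random.default_rng(1)
t = np.arange(0, 2, 1 / 250)
baseline = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
event = 0.3 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
erd = erd_percent(baseline, event)  # well below 100% for this signal
```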
The area of the brain cortex where event-related desynchronisation/synchronization is observed is associated with the body part whose movement a subject imagines [183,185,186]. This phenomenon implies that it is possible to infer from the EEG which body part the subject imagines moving by detecting the location where event-related desynchronisation occurs. ERD is induced by motor imagery tasks performed by healthy subjects as well as by paralyzed patients [187]. An example of event-related desynchronisation observed in the EEG is shown in Figure 11. The EEG signals were recorded while a subject performed tasks of motor imagery of the left and right hands [188]. After the recording, the EEG signals were bandpass-filtered with a pass-band of 8-30 Hz. Figure 11 illustrates the EEG power averaged over 100 trials of the task of motor imagery of the left or right hand, with normalization by a base energy corresponding to signals measured before performing the motor imagery tasks. (Note that the ERD is defined as the power relative to the baseline power; therefore, if the current power is larger than the base power, the ERD can be over 100%.) The symbol "0" on the horizontal axis marks the time when the subject started performing the tasks. The decrease in energy while the subject performs the tasks can be clearly observed in Figure 11.
One of the merits of the MI-BCI over BCIs based on the perception of flickering stimuli is that no device to display the stimuli is needed. Moreover, it has been reported recently that the detection of motor imagery tasks combined with feedback is useful for the rehabilitation of patients who suffer from motor disorders caused by brain injuries [189-192]. The MI-BCI can be utilized in rehabilitation to recover motor functions as follows. One rehabilitation procedure for the recovery of motor functions is to make a subject perform movements of a disabled body part in response to a cue and to give visual or physical feedback [189]. The coincidence of the subject's intention to move and the corresponding feedback is supposed to promote plasticity of the brain.
In rehabilitation, generating the feedback coincidentally with the intention that elicited it is considered essential. In the general procedure of the rehabilitation illustrated above, the cue generates the intention, which is detected from the EEG in order to give the feedback to the patient. When using the motor-imagery BCI for rehabilitation, the feedback generation is controlled by the interface. In fact, the MI-BCI enables the rehabilitation system to detect the intention of a movement and to generate the feedback at the appropriate time. Some studies have suggested that rehabilitation based on MI-BCIs can promote the plasticity of the brain more efficiently than conventional cue-based systems [189-192].
A further promising paradigm related to the ERD is passive movement (PM). Brain-computer interfaces exploiting passive movements are instances of passive BCIs. The passive movement is typically performed with a mechatronic finger rehabilitation device [193]. An early report on the brain cortical activity observed during PM suggested that PMs consisting of brisk wrist extensions performed with the help of a pulley system resulted in significant ERD after the beginning of the movement, followed by ERS in the beta band [194]. A recent study reported that the classification accuracy calculated from EEG signals during passive and active hand movements did not differ significantly from the classification accuracy for detecting MI. It has also been reported that ERD is induced not only by MI but also by passive action observation (AO) or by a combination of MI with AO. PM and AO cause less fatigue to users; therefore, they are very promising for rehabilitation purposes.
A well-known method for extracting brain activity that is used in MI-BCIs is based on the notion of the common spatial pattern (CSP) [132,198,199]. The CSP consists of a set of spatial weight coefficients corresponding to the electrodes recording a multichannel EEG. These coefficients are determined from a measured EEG in such a way that the variances of the signal extracted by the spatial weights maximally differ between two tasks (e.g., left- and right-hand movement imagery). Specifically, the CSP is given as follows: let C_1 and C_2 be the spatial covariance matrices of a multichannel EEG recorded during two tasks (task 1 and task 2, respectively). The CSP for task 1 is given as the generalized eigenvector w corresponding to the maximum generalized eigenvalue λ of the matrix pencil (C_1, C_2):

C_1 w = λ C_2 w.

The generalized eigenvector associated with the minimum generalized eigenvalue is the CSP for task 2. It should be noted that the CSP can also be regarded as a spatial filter that projects the observed EEG signals onto a subspace to extract features, which are assigned to a class corresponding to the subject's cerebral status. In addition, the CSP is strongly related to a frequency band. Even though the brain activity caused by a motor-related task is observed in the mu and beta bands, the bandwidth and the center frequency basically depend on the individual. To specify the most effective frequency band, several variants of the CSP have been proposed [200-206]. The underlying idea behind these methods is to incorporate finite impulse response filters that adapt to the measured EEG into the process of finding the CSP. In the CSP methods, the covariance matrices C_1 and C_2 must be estimated from the observed EEG. The quality of such an estimation directly affects the accuracy of the classifier. A method for actively selecting the data from the observed dataset has also been proposed [207].
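Under the definition above, the CSP filters can be obtained with a single generalized eigendecomposition; the toy covariance matrices below are synthetic:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(C1, C2):
    """CSP filters from the matrix pencil (C1, C2).

    Solves C1 w = lambda * C2 w; the eigenvector of the largest eigenvalue
    maximizes the variance ratio for task 1, and the eigenvector of the
    smallest eigenvalue does so for task 2.
    """
    eigvals, eigvecs = eigh(C1, C2)  # eigenvalues in ascending order
    return eigvecs[:, -1], eigvecs[:, 0]

# Synthetic two-task EEG: channel 0 is strong in task 1, channel 1 in task 2
rng = np.random.default_rng(0)
X1 = rng.standard_normal((4, 1000)); X1[0] *= 3.0
X2 = rng.standard_normal((4, 1000)); X2[1] *= 3.0
C1 = X1 @ X1.T / X1.shape[1]
C2 = X2 @ X2.T / X2.shape[1]
w1, w2 = csp_filters(C1, C2)
# The variance of the spatially filtered signal separates the two tasks
```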
While the CSP method is suitable for two-class classification, multiclass extensions of the CSP have been proposed, such as the one-versus-the-rest CSP and simultaneous diagonalization [185]. Another novel approach [208] to the classification of motor-imagery EEG is to employ Riemannian geometry, where empirical covariances are directly classified on the Riemannian manifold [209] instead of finding CSPs. The basic idea is that, on the Riemannian manifold of symmetric positive-definite matrices, each point (covariance matrix) is projected onto a Euclidean tangent space, where standard classifiers such as linear discriminant analysis [208] and the support vector machine [210] may be employed. The use of filter banks can enhance the classification performance [211]. A review paper about this approach has been published [212]. In this approach, a major issue is how to choose the reference tangent space, which is defined as the one at the "central point" of the collection of spatial covariance matrices, computed with the help of specific numerical algorithms [213,214]. Some studies have investigated classification accuracies with respect to various types of central points [211,215]. To make it easier for the reader to identify the references in this section, Table 1 summarizes the works presented in each subsection.
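The tangent-space projection can be sketched as follows; using the arithmetic mean as the "central point" is a simplifying assumption (the cited works compute Riemannian means with iterative algorithms):

```python
import numpy as np
from scipy.linalg import logm, sqrtm, inv

def tangent_space(covs, C_ref):
    """Project SPD covariance matrices onto the tangent space at C_ref.

    Each matrix C is whitened by C_ref^(-1/2) and mapped with the matrix
    logarithm; the upper triangle of the result is a Euclidean feature
    vector that standard classifiers (LDA, SVM) can consume.
    """
    P = inv(sqrtm(C_ref))
    feats = []
    for C in covs:
        S = np.real(logm(P @ C @ P))  # symmetric tangent-space matrix
        feats.append(S[np.triu_indices_from(S)])
    return np.array(feats)

# Two toy trial covariances; the arithmetic mean stands in for the
# Riemannian "central point" of the cited algorithms
covs = [np.array([[2.0, 0.3], [0.3, 1.0]]),
        np.array([[1.0, -0.2], [-0.2, 1.5]])]
C_ref = sum(covs) / len(covs)
features = tangent_space(covs, C_ref)  # one feature vector per trial
```

By construction, the reference point itself maps to the zero vector, so feature magnitudes measure how far each trial lies from the central point.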

Progress on Applications of Brain-Computer Interfaces to Control and Automation
Brain-computer interfacing is a real-time communication system that connects the brain and external devices. A BCI system can directly convert the information sent by the brain into commands that drive external devices and can replace human limbs or phonation organs to achieve communication with the outside world and control of the external environment [115]. In other words, a BCI system can replace the normal peripheral nerve and muscle tissue to achieve communication between a human and a computer or between a human and the external environment [115].
The frequent use of EEG for BCI applications is due to several factors: it can work in most environments; it is simple and convenient to use in practice, because scalp-recording EEG equipment is lightweight, inexpensive, and easy to apply (it affords the most practical noninvasive access to brain activity); and it is characterized by a very high temporal resolution (of some milliseconds) that makes it attractive for real-time use [216,217]. The main disadvantages of EEG, on the other hand, are its poor spatial resolution (a few centimeters) and the damping of the signal by bone and skin tissue, which yields a very weak scalp-recorded EEG [218,219]. The reduced amplitude of the signal makes it susceptible to so-called artifacts caused by other electrical activity (i.e., muscular electromyographic activity, electro-oculographic activity caused by eye movements, external electromagnetic sources such as power lines and electrical equipment, or movements of the cables). To reduce artifact effects and to improve the signal-to-noise ratio, most EEG electrodes require a conductive solution to be applied before usage.
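A common first step against the artifacts mentioned above is frequency-domain filtering; a minimal sketch, assuming a 250 Hz sampling rate and 50 Hz mains interference (both illustrative values):

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

def clean_eeg(x, fs=250.0, band=(1.0, 40.0), mains=50.0):
    """Band-pass the raw EEG and notch out mains hum, zero-phase.

    fs, band, and the mains frequency are illustrative; they must match
    the actual recording setup and the local power-line frequency.
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    x = filtfilt(b, a, x)                    # zero-phase band-pass
    bn, an = iirnotch(mains, Q=30.0, fs=fs)
    return filtfilt(bn, an, x)               # zero-phase notch

# Example: a 10 Hz rhythm survives, while 50 Hz mains hum is suppressed
```

This only removes narrow-band and out-of-band noise; muscular and ocular artifacts overlapping the EEG bands call for additional techniques such as regression or independent component analysis.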
While BCI applications share the same goal (rapid and accurate communication and control), they differ widely in their inputs, feature extraction methods, translation algorithms, outputs, and operation protocols. Despite their limitations, BCI systems are quickly moving out of the laboratories and becoming practical systems useful for communication, control, and automation purposes. In recent years, BCIs have been validated in various noisy structured environments, such as homes, hospitals, and expositions, and the direct application of BCIs is gaining popularity with regular consumers [220]. In the last years, research efforts have addressed their use in smart environments, smart control systems, fast and smooth movement of robotic arm prototypes, motion planning of autonomous or semi-autonomous wheelchairs, and the control of orthoses and prostheses. A number of research endeavors have confirmed that devices such as a wheelchair or a robot arm can already be controlled by a BCI device [221].
The domain of brain-computer interfacing is broad and includes various applications, including well-established ones such as controlling a cursor on the screen [222], selecting letters from a virtual keyboard [223,224], browsing the internet [225,226], and playing games [227]. BCIs are also already being used in more sophisticated applications in which the brain controls robotic devices, including wheelchairs, orthoses, and prostheses [78]. BCI technologies are being applied to smart homes and smart living: in the work of [34], BCI users controlled TV channels, a digital door-lock system, and an electric light system in an unshielded environment. Over the last years, research efforts have been devoted to BCI applications in smart environmental control systems, fast and smooth robotic arm prototypes, and motion planning of autonomous or semi-autonomous vehicles. The world of BCI applications is expanding, and new fields are opening in communications, control, and automation, such as the control of unmanned vehicles [228], virtual reality applications in games [227], environmental control, and improvements in the brain control of robotic devices. To better illustrate the new capabilities of EEG-based BCIs, applications related to control and automation systems are summarized in the next subsections.

Application to Unmanned Vehicles and Robotics
Recently, the authors of [228] gained extensive media attention for their demonstration of the potential of noninvasive EEG-based BCI systems in the three-dimensional control of a quadcopter. Five subjects were trained to modulate their sensorimotor rhythms to control an AR drone navigating a physical space. Visual feedback was provided via a forward-facing camera placed on the hull of the drone. Brain activity was used to move the quadcopter through an obstacle course. The subjects were able to quickly pursue a series of foam ring targets by passing through them in a real-world environment. They obtained up to 90.5% of all valid targets through the course, and the movement was performed in an accurate and continuous way. The performance of the system was quantified by using metrics suitable for asynchronous BCI. The results provide an affordable framework for the development of multidimensional BCI control in telepresence robotics. The study showed that a BCI can be effectively used to accomplish complex control in a three-dimensional space. Such an application can be beneficial both to people with severe disabilities and in industrial environments. In fact, the authors of [228] faced problems typical of control applications where the BCI acts as a controller that moves a simple object in a structured environment. Such a study follows previous research endeavors: the works in [229,230], which showed the ability of users to control the flight of a virtual helicopter with 2D control, and the work of [231], which demonstrated 3D control by leveraging a motor imagery paradigm with intelligent control strategies.
Applications to different autonomous robots are under investigation. For example, the study of [232] proposed a new humanoid navigation system directly controlled through an asynchronous sensorimotor rhythm-based BCI system. Their approach allows for flexible robotic motion control in unknown environments using camera vision. The proposed navigation system includes a posture-dependent control architecture and is comparable with the previous mobile robot navigation system that depends on an agent-based model.

Application to "Smart Home" and Virtual Reality
Applications of EEG-based brain-computer interfaces are emerging in "smart homes". BCI technology can be used by disabled people to improve their independence and to maximize their residual capabilities at home. In the last years, novel BCI systems have been developed to control home appliances. A prototypical three-wheel, small-sized robot for smart-home applications used to perform experiments is shown in Figure 12.
The aim of the study [233] was to improve the quality of life of disabled people through BCI control systems during daily life activities such as opening/closing doors, switching lights on and off, controlling televisions, using mobile phones, sending messages to people in their community, and operating a video camera. To accomplish such goals, the authors of the study [233] proposed a real-time wireless EEG-based BCI system based on a commercial EMOTIV EPOC headset. EEG signals were acquired by the EMOTIV EPOC headset and transmitted through a Bluetooth module to a personal computer. The received EEG data were processed by the software provided by EMOTIV, and the results were transmitted to the embedded system to control the appliances through a Wi-Fi module. A dedicated graphical user interface (GUI) was developed to detect a key stroke and to convert it into a predefined command.
In the studies of [234,235], the authors proposed the integration of the BCI technique with universal plug and play (UPnP) home networking for smart house applications. The proposed system can process EEG signals without transmitting them to back-end personal computers. Such flexibility, together with the advantages of low power consumption and of using small-volume wireless physiological signal acquisition modules and embedded signal processing modules, makes this technology suitable for various kinds of smart applications in daily life. The study of [236] evaluated the performance of an EEG-based BCI system to control smart home applications with high accuracy and high reliability. In said study, a P300-based BCI system was connected to a virtual reality system that can be easily reconfigured and therefore constitutes a favorable testing environment for real smart homes for disabled people. The authors of [237] proposed an implementation of a BCI system for controlling wheelchairs and electric appliances in a smart house to assist the daily-life activities of its users. Tests were performed by a subject, achieving satisfactory results.
Virtual reality concerns human-computer interaction, where the signals extracted from the brain are used to interact with a computer. With advances in the interaction with computers, new applications have appeared: video games [227] and virtual reality developed with noninvasive techniques [238,239].

Application to Mobile Robotics and Interaction with Robotic Arms
The EEG signals of a subject can be recorded and processed appropriately in order to differentiate between several cognitive processes or "mental tasks". BCI-based control systems use such mental activity to generate control commands for a device, a robot arm, or a wheelchair [132,240]. As previously stated, BCIs are systems that can bypass conventional channels of communication (i.e., muscles and speech) to provide direct communication and control between the human brain and physical devices by translating different patterns of brain activity into commands in real time. This kind of control can be successfully applied to support people with motor disabilities to improve their quality of life, to enhance their residual abilities, or to replace lost functionality [78]. For example, with regard to individuals affected by neurological disabilities, the operation of an external robotic arm to facilitate handling activities could take advantage of these new communication modalities between humans and physical devices [22]. Some functions, such as selecting items on a screen by moving a cursor in a three-dimensional scene, are straightforward using BCI-based control [77,241]. However, a more sophisticated control strategy is required to accomplish control tasks at more complex levels because most external effectors (mechanical prosthetics, motor robots, and wheelchairs) possess more degrees of freedom. Moreover, a major feature of brain-controlled mobile robotic systems is that these mobile robots require higher safety since they are used to transport disabled people [78]. In BCI-based control, EEG signals are translated into user intentions.
In synchronous protocols, P300- and SSVEP-based BCIs relying on external stimulation are usually adopted. For asynchronous protocols, ERD/ERS-based interfaces, which are independent of external stimuli, are used. In fact, since asynchronous BCIs do not require any external stimulus, they appear more suitable and natural for brain-controlled mobile robots, where users need to focus their attention on driving the robot rather than on external stimuli.
Another aspect is related to the two different operational modes that can be adopted in brain-controlled mobile robots [78]. One category is called "direct control by the BCI", which means that the BCI translates EEG signals into motion commands to control the robots directly. This method is computationally less complex and does not require additional intelligence. However, the overall performance of these brain-controlled mobile robots mainly depends on the performance of noninvasive BCIs, which are currently slow and uncertain [78]. In other words, the performance of the BCI system limits that of the robots. In the second category of brain-controlled robots, a shared control is adopted, where a user (using a BCI) and an intelligent controller (such as an autonomous navigation system) share the control over the robot. In this case, the performance of the robots depends on their intelligence. Thus, the safety of driving these robots can be better ensured, and even the accuracy of inferring the users' intentions can be improved. This kind of approach is less demanding for the users, but their reduced effort translates into a higher computational cost. The use of sensors (such as laser sensors) is often required.
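The shared-control idea can be caricatured as a confidence-weighted blend of the two command sources; the weighting rule and threshold below are purely illustrative, not taken from the cited systems:

```python
def shared_control(bci_cmd, auto_cmd, bci_conf, threshold=0.7):
    """Arbitrate between a noisy BCI command and an autonomous planner.

    bci_cmd / auto_cmd: proposed steering commands in [-1, 1]
    bci_conf: decoder confidence for the BCI command in [0, 1]
    threshold: illustrative cut-off; when the decoder is confident the
    user dominates, otherwise the autonomous command does.
    """
    w = bci_conf if bci_conf >= threshold else 0.2 * bci_conf
    return w * bci_cmd + (1.0 - w) * auto_cmd
```

A real system would add safety overrides (e.g., the planner always wins near obstacles) on top of any such blending rule.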

Application to Robotic Arms, Robotic Tele-Presence, and Electrical Prosthesis
Manipulator control requires more accuracy in reaching targets in space compared with the control of wheelchairs and other devices. Control of the movement of a cursor in a three-dimensional scene is the most significant pattern in BCI-based control studies [123,242]. EEG changes, normally associated with left-hand, right-hand, or foot movement imagery, can be used to control cursor movement [242].
Several research studies [240,243-247] presented applications aimed at the control of a robot or a robotic arm to assist people with severe disabilities in a variety of tasks in their daily life. In most cases, the focus of these papers is on the different methods adopted to classify the action that the robot arm has to perform with respect to the mental activity recorded by the BCI. In the contributions [240,243], a brain-computer interface is used to control a robot's end-effector to achieve a desired trajectory or to perform pick/place tasks. The authors use an asynchronous protocol and a new LDA-based classifier to differentiate between three mental tasks. In [243], in particular, the system uses radio-frequency identification (RFID) to automatically detect objects that are close to the robot. A simple visual interface with two choices, "move" and "pick/place", allows the user to pick/place the objects or to move the robot. The same approach is adopted in the research framework described in [245], where the user has to concentrate his/her attention on the required option in order to execute the action visualized on the menu screen. In the work of [244], an interactive, noninvasive, synchronous BCI system is developed to control a manipulator with several degrees of freedom in a whole three-dimensional workspace.
Using a robot-assisted upper-limb rehabilitation system, in the work of [248], the patient's intention is translated into direct control of the rehabilitation robot. The acquired signal is processed (through the wavelet transform and LDA) to classify the patterns of left- and right-upper-limb motor imagery. Finally, a personal computer triggers the upper-limb rehabilitation robot to perform motor therapy and provides virtual feedback.
In the study of [249], the authors showed how BCI-based control of a robot moving in a user's home can be successfully achieved after a training period. P300 is used in [247] to discern which object the robot should pick up and to which location the robot should take the object. The robot is equipped with a camera to frame objects. The user is instructed to attend to the image of the object while the border around each image is flashed in a random order. A similar procedure is used to select a destination location. From a communication viewpoint, the approach provides cues in a synchronous way. The research study [232] deals with a similar approach but with an asynchronous BCI-based direct-control system for humanoid robot navigation. The experimental procedures consist of offline training, online feedback testing, and real-time control sessions. Five healthy subjects controlled the navigation of a humanoid robot to reach a target in an indoor maze by using their EEGs and real-time images obtained from a camera on the head of the robot.
Brain-computer interface-based control has also been adopted to manage hand or arm prostheses [1,250]. In such cases, patients were subjected to a training period, during which they learned to use their motor imagery. In particular, in the work described in [1], tetraplegic patients were trained to control the opening and closing of their paralyzed hand by means of an orthosis driven by an EEG recorded over the sensorimotor cortex.

Application to Wheelchair Control and Autonomous Vehicles
Power wheelchairs are traditionally operated by a joystick. One or more switches change the function that is controlled by the joystick. Not all persons who could experience increased mobility by using a powered wheelchair possess the cognitive and neuromuscular capacity needed to navigate a dynamic environment with a joystick. For these users, a "shared" control approach coupled with an alternative interface is indicated. In a traditional shared control system, the assistive technology assists the user in path navigation. Shared control systems can typically work in several modes that vary the assistance provided (i.e., user autonomy) and rely on several movement algorithms. The authors of [251] suggest that shared control approaches can be classified in two ways: (1) mode changes triggered by the user via a button and (2) mode changes hard-coded to occur when specific conditions are detected.
Most of the current research related to BCI-based control of wheelchairs shows applications of synchronous protocols [252-257]. Although synchronous protocols have shown high accuracy and safety [253], low response efficiency and inflexible path options can represent a limit for wheelchair control in real environments.
Minimization of user involvement is addressed by the work in [251] through a novel semi-autonomous navigation strategy. Instead of requiring user control commands at each step, the robot proposes actions (e.g., turning left or going forward) based on environmental information. The subject may reject the action proposed by the robot if he/she disagrees with it. Given the rejection by the human subject, the robot takes a different decision based on the user's intention. The system relies on the automatic detection of interesting navigational points and on a human-robot dialog aimed at inferring the user's intended action.
The authors of the research work [252] used a discrete approach to the navigation problem, in which the environment is discretized and composed of two regions (rectangles of 1 m², one on the left and the other on the right of the start position), and the user decides where to move next by imagining left or right limb movements. In [253,254], a P300-based (slow) BCI is used to select the destination from a list of predefined locations. While the wheelchair moves on virtual guiding paths ensuring smooth, safe, and predictable trajectories, the user can stop the wheelchair by means of a faster BCI. In fact, the system switches between the fast and the slow BCIs depending on the state of the wheelchair. The paper [255] describes a brain-actuated wheelchair based on a synchronous P300 neurophysiological protocol integrated in a real-time graphical scenario builder, which incorporates advanced autonomous navigation capabilities (shared control). In the experiments, the task of the autonomous navigation system was to drive the vehicle to a given destination while avoiding obstacles (both static and dynamic) detected by the laser sensor. The goal/location was provided by the user by means of a brain-computer interface.
The contributions of [256,257] describe a BCI based on SSVEPs to control the movement of an autonomous robotic wheelchair. The signals used in this work come from individuals who are visually stimulated. The stimuli are black-and-white checkerboards flickering at different frequencies.
Asynchronous protocols have been suggested for BCI-based wheelchair control in [258-260]. The authors of [258] used beta oscillations in the EEG elicited by imagination of movements of a paralyzed subject for self-paced asynchronous BCI control. The subject, immersed in a virtual street populated with avatars, was asked to move among the avatars toward the end of the street, to stop by each avatar, and to talk to them. In the experiments described in [259], a human user performs path planning and fully controls the wheelchair, except for automatic obstacle avoidance based on a laser range finder. In the experiments reported in [260], two human subjects were asked to mentally drive both a real and a simulated wheelchair from a starting point to a goal along a prespecified path.
Several recent papers describe BCI applications where wheelchair control is multidimensional. In fact, it appears that control commands from a single modality are not enough to meet the criteria of multi-dimensional control. The combination of different EEG signals can be adopted to give multiple (simultaneous or sequential) control commands. The authors of [261,262] showed that hybrid EEG signals, such as SSVEP and motor imagery, could improve the classification accuracy of brain-computer interfaces. The authors of [263,264] adopted the combination of the P300 potential and MI or SSVEP to control a brain-actuated wheelchair. In this case, multi-dimensional control (direction and speed) is provided by multiple commands. In the paper of [265], the authors proposed a hybrid BCI system that combines MI and SSVEP to control the speed and direction of a wheelchair synchronously. In this system, the direction of the wheelchair was given by left- and right-hand imagery. The idle state, without mental activities, was decoded to keep the wheelchair moving along the straight direction. Synchronously, SSVEP signals induced by gazing at specific flashing buttons were used to accelerate or decelerate the wheelchair. To make it easier for the reader to identify the references in this section, Table 2 summarizes the papers about BCI applications presented in each subsection.
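A hybrid scheme of this kind can be caricatured as a two-channel command map, with MI setting the direction and SSVEP setting the speed; the class labels and the return convention below are illustrative assumptions, not the actual interface of [265]:

```python
def hybrid_command(mi_class, ssvep_class):
    """Combine MI (direction) and SSVEP (speed) decoder outputs.

    mi_class: 'left', 'right', or 'idle' (idle keeps a straight course)
    ssvep_class: 'accelerate', 'decelerate', or None (no flashing button gazed)
    Returns a (direction, speed_change) pair for the wheelchair controller.
    """
    direction = {'left': -1, 'right': +1, 'idle': 0}[mi_class]
    speed = {'accelerate': +1, 'decelerate': -1, None: 0}[ssvep_class]
    return direction, speed
```

Because the two decoders operate on distinct EEG phenomena, both commands can be issued in the same time window, which is what makes the control simultaneous rather than sequential.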

Current Limitations and Challenges of the BCI Technologies
The present review paper is a collection of specific papers related to BCI technology for data capture, methods for signal processing and information extraction, and BCI applications to control and automation. Some considerations about the limitations and challenges related to BCI usage and applications can then be inferred from an analysis of the surveyed papers. BCI development depends on a close interdisciplinary cooperation between neuroscientists, engineers, psychologists, computer scientists, and rehabilitation specialists. It would benefit from the general acceptance and application of methods for evaluating translation algorithms, user training protocols, and other key aspects of BCI technology. General limitations of BCI technology may be recognized to be the following:
• inaccuracy in classifying neural activity;
• limited ability to read brain signals for BCIs placed outside of the skull;
• in limited cases, the requirement for rather drastic surgery;
• ethical issues arising from reading people's inner thoughts;
• the bulky nature of the system, leading to a possibly uncomfortable user experience; and
• the security of personal data not being guaranteed against attackers or intruders.
Other limitations are related to the methods used to record brain activity. Critical issues for EEG signal acquisition are related, for instance, to artifacts and outliers that can limit its usability and to the interpretability of the extracted features, which can be noise-affected due to the low signal-to-noise ratio characterizing EEG signals [266].
Based on the different methods used to record brain activity (ECoG, fMRI, and PET), further kinds of limitations can be recognized. Concerning fMRI, the major limitations are the lack of image contrast (poor image contrast and weak image boundaries) and unknown noise levels [267], while the main problems related to PET are its maintenance cost and setup burden. The main limitation related to ECoG usage is its invasive nature, based on the application of a surgically implanted electrode grid. Motion artifacts, environmental noise, or eye movements can reduce the reliability of the acquired data and can limit the ability to extract relevant patterns. Moreover, the rapid variation in time and among sessions of the EEG signals makes the parameters extracted from the EEG nonstationary. For instance, a change in mental state or a different level of attention can affect the EEG signal characteristics and can increase its variability across experimental sessions. Due to the chaotic behaviour of the neural system, the intrinsic nonlinear nature of the brain may be better analysed by nonlinear dynamic methods than by linear ones. In BCI technology, some challenges related to the acquisition of EEG signals must be taken into account. These challenges concern the identification of the optimal location for reference electrodes and the control of impedance when testing with high-density sponge electrode nets. A relevant aspect related to the use of BCIs concerns the trade-off between the difficulty of interpreting brain signals and the quantity of training needed for efficient operation of the interface [261]. Moreover, in BCI systems, several channels record the signals to maintain a high spatial accuracy. Generally, this produces a large amount of data that increases with the number of channels, which can lead to the need for some kind of machine learning approach to extract relevant features [266].

Conclusions
Because of its nature, BCI is conceived for continuous interaction between the brain and controlled devices, affording external activity and control of apparatus. The interface enables a direct communication pathway between the brain and the object to be controlled. By reading neuronal oscillations from an array of neurons and by using computer chips and programs to translate the signals into actions, a BCI can enable a person suffering from paralysis to write a book or to control a motorized wheelchair. Current BCIs require deliberate conscious thought, while future applications, such as prosthetic control, are likely to work effortlessly. One of the major challenges in developing BCI technologies has been the design of electrodes and surgical methods that are minimally invasive. In the traditional BCI model, the brain accepts an implanted mechanical device and controls it as a natural part of its representation of the body. Much current research is focused on the potential of noninvasive BCIs. Cognitive-computation-based systems use adaptive algorithms and pattern-matching techniques to facilitate communication. Both the user and the software are expected to adapt and learn, making the process more efficient with practice.
Near-term applications of BCIs are primarily task-oriented and are designed to avoid the most difficult obstacles in development. In the longer term, brain-computer interfaces will enable a broad range of task-oriented and opportunistic applications by leveraging pervasive technologies to sense and merge critical brain, behavioral, task, and environmental information [268].
The theoretical groundwork of the 1930s and 1940s and the technical advances in computing over the following decades provided the basis for dramatic increases in human efficiency. While computers continue to evolve, the interface between humans and computers has begun to present a serious impediment to the full realization of this potential payoff [241]. While machine learning approaches have led to tremendous advances in BCIs in recent years, a large variation in performance across subjects still exists [269]. Understanding the reasons for this variation constitutes perhaps one of the most fundamental open questions in BCI research. Future research on the integration of cognitive computation and brain-computer interfacing is expected to address how direct communication between the brain and the computer can overcome this impediment by improving or augmenting conventional forms of human communication.

Figure 1 .
Figure 1. The most widely used method for recording brain activity in brain-computer interfacing is EEG, as it is simple, noninvasive, portable, and cost-effective. Typical electroencephalogram recording setup: a cap carrying contact electrodes and wires (from the Department of Electrical and Electronic Engineering at the Tokyo University of Agriculture and Technology). The wires are connected to amplifiers, which are not shown in the figure. Amplifiers also improve the quality of the acquired signals through filtering. Some amplifiers include analog-to-digital converters so that brain signals can be acquired by (and stored on) a computer.

Figure 2 .
Figure 2. Basic brain-computer interface (BCI) schematic: How targeted brain oscillation signals (or brainwaves) originate from a visual stimulus or a cognitive process and how they get acquired, processed, and translated into commands.
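The acquisition-processing-translation chain of Figure 2 can be reduced to a skeletal sketch. Every stage below is a stand-in: a real system would stream data from an amplifier, apply proper filtering and feature extraction, and use a trained classifier rather than a fixed threshold.

```python
# Skeletal sketch of the Figure 2 pipeline: acquire -> process ->
# translate. All stages are illustrative stand-ins, not real BCI code.

def acquire():
    """Stand-in for signal acquisition: returns one epoch of samples."""
    return [0.2, 0.9, 0.4, 0.8]

def process(epoch):
    """Stand-in feature extraction: here, just the mean amplitude."""
    return sum(epoch) / len(epoch)

def translate(feature, threshold=0.5):
    """Stand-in translation stage: maps the feature to a device command."""
    return "MOVE" if feature > threshold else "IDLE"

command = translate(process(acquire()))
```

The point of the sketch is the separation of concerns: each stage can be replaced independently (a different amplifier, a different feature set, a different classifier) without touching the others, which mirrors how practical BCI software is organized.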

Figure 3 .
Figure 3. Experimental setup of a steady-state visually evoked potential (SSVEP) recording: A patient gazes at a screen that presents eight different patterns corresponding to eight different intended commands to the interface.Each pattern oscillates with a different frequency: whenever the patient gazes at a particular pattern, neuronal oscillations take place in the visual cortex, which locks up with the frequency of the flickering pattern (from the Department of Electrical and Electronic Engineering at the Tokyo University of Agriculture and Technology).

Figure 4 .
Figure 4. An experimental setup for controlling a vehicle through a steady-state visually-evoked-potentials-based BCI (from the Department of Electrical and Electronic Engineering at the Tokyo University of Agriculture and Technology).

Figure 5 .
Figure 5. The electrode arrangement of the International 10-20 method and the International 10-10 method. The circles show the electrodes defined by the International 10-20 method. The circles and the crosses show the electrodes defined by the International 10-10 method.

Figure 6 .
Figure 6. Examples of event-related potentials: P300 waveforms. The illustrated signals were averaged over 85 trials. "Target" is the signal observed when the subject attends to the displayed stimulus. "Non Target" is the signal observed when the subject ignores the stimulus.
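The trial averaging behind such waveforms can be sketched in a few lines: epochs time-locked to the stimulus are averaged sample by sample, so the phase-locked P300 component survives while uncorrelated noise shrinks roughly as one over the square root of the number of trials. The epochs below are toy data, not real recordings.

```python
# Illustrative sketch of event-related potential averaging: time-locked
# epochs averaged sample by sample boost the stimulus-locked component
# relative to uncorrelated noise.

def average_epochs(epochs):
    """epochs: list of equal-length trials (lists of samples).
    Returns the sample-wise average across trials."""
    n = len(epochs)
    return [sum(trial[i] for trial in epochs) / n
            for i in range(len(epochs[0]))]

# Two noisy toy trials sharing a positive deflection at sample 2:
trials = [
    [0.0, 0.1, 1.2, 0.0],
    [0.0, -0.1, 0.8, 0.0],
]
erp = average_epochs(trials)   # the shared deflection remains at sample 2
```

With only two trials the noise reduction is modest; the 85-trial average in Figure 6 illustrates why ERP studies accumulate many repetitions.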

Figure 7 .
Figure 7. Stimuli in the telephone-call brain-computer interface. Each row and column flashes in the order indicated by the arrows.

Figure 8 .
Figure 8. Example of an SSVEP: Power spectrum of the signal observed while a subject gazes at a visual pattern flickering at 12 Hz, compared to the power spectrum observed when the subject receives no stimulus (idle).
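The frequency detection underlying such a comparison can be sketched with the Goertzel algorithm, which evaluates the power at a single DFT bin without computing a full FFT, a natural fit when only a handful of candidate flicker frequencies must be checked. The sampling rate and frequencies below are assumed values, and the synthetic sine stands in for a real SSVEP.

```python
# Illustrative sketch: per-frequency power estimation with the Goertzel
# algorithm, applied to a synthetic 12 Hz "SSVEP" (assumed fs = 256 Hz).
import math

def goertzel_power(samples, target_hz, fs):
    """Squared magnitude of the DFT bin nearest target_hz."""
    n = len(samples)
    k = round(n * target_hz / fs)        # nearest DFT bin index
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:                    # second-order recursion
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

fs = 256                                 # sampling rate in Hz (assumed)
t = [i / fs for i in range(fs)]          # one second of samples
signal = [math.sin(2 * math.pi * 12 * x) for x in t]   # 12 Hz flicker

p12 = goertzel_power(signal, 12.0, fs)
p15 = goertzel_power(signal, 15.0, fs)
# Power at the gazed-at 12 Hz frequency dominates: p12 >> p15
```

An SSVEP interface repeats this comparison across all candidate stimulation frequencies and selects the one with the largest power as the intended command.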

Figure 9 .
Figure 9. An instance of steady-state visually evoked potentials-based brain-computer interfacing: Each checkerboard performs monochrome inversion at a different frequency.

Figure 11 .
Figure 11. Event-related desynchronisation elicited by the motor imagery tasks of moving the left hand (top) and the right hand (bottom) at electrodes CP3 and CP4.

Figure 12 .
Figure 12. A prototypical three-wheel, small-sized robot for smart-home applications used to perform experiments (from the Department of Information Engineering at the Università Politecnica delle Marche).

Table 2 .
BCI applications to automation.