Review

Neural–Computer Interfaces: Theory, Practice, Perspectives

1 Department of Neuroscience, Center for Genetics and Life Sciences, Sirius University of Science and Technology, 354340 Sirius Federal Territory, Krasnodar Region, Russia
2 Center for Neurocognitive Research (MEG Center), Moscow State University of Psychology and Education (MSUPE), 123290 Moscow, Russia
3 Institute of Translational Biomedicine, Saint Petersburg State University, 199034 Saint Petersburg, Russia
4 Pavlov Institute of Physiology, 199034 Saint Petersburg, Russia
5 Center for Life Improvement by Future Technologies “LIFT”, 121205 Moscow, Russia
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(16), 8900; https://doi.org/10.3390/app15168900
Submission received: 10 July 2025 / Revised: 31 July 2025 / Accepted: 6 August 2025 / Published: 12 August 2025
(This article belongs to the Special Issue Brain-Computer Interfaces: Development, Applications, and Challenges)

Abstract

This review outlines the technological principles of neural–computer interface (NCI) construction, classifying them according to: (1) the degree of intervention (invasive, semi-invasive, and non-invasive); (2) the direction of signal communication, including BCI (brain–computer interface) for converting neural activity into commands for external devices, CBI (computer–brain interface) for translating artificial signals into stimuli for the CNS, and BBI (brain–brain interface) for direct brain-to-brain interaction systems that account for agency; and (3) the mode of user interaction with technology (active, reactive, passive). For each NCI type, we detail the fundamental data processing principles, covering signal registration, digitization, preprocessing, classification, encoding, command execution, and stimulation, alongside engineering implementations ranging from EEG/MEG to intracortical implants and from transcranial magnetic stimulation (TMS) to intracortical microstimulation (ICMS). We also review mathematical modeling methods for NCIs, focusing on optimizing the extraction of informative features from neural signals—decoding for BCI and encoding for CBI—followed by a discussion of quasi-real-time operation and the use of DSP and neuromorphic chips. Quantitative metrics and rehabilitation measures for evaluating NCI system effectiveness are considered. Finally, we highlight promising future research directions, such as the development of electrochemical interfaces, biomimetic hierarchical systems, and energy-efficient technologies capable of expanding brain functionality.

1. Introduction

Today, as artificial intelligence approaches the limits of human capabilities and medicine struggles with incurable neurological diseases, NCIs occupy a special place among interdisciplinary research fields. They promise to restore mobility to the paralyzed; to restore vision, hearing, tactile sensation, proprioception, and even memory; and, in the long term, to transform the very nature of the human mind by expanding the boundaries of perception, thinking, and interaction with reality.
NCIs establish bidirectional communication between biological neural systems and digital devices, combining three key directions: brain-to-computer command transmission (BCI), computer-to-brain stimulation (CBI), and brain-to-brain interaction (BBI). Current developments in this area utilize a suite of physicochemical methods, from electrical and magnetic to optical and pharmacological approaches, while facing a number of fundamental challenges. The main difficulties are the need to accurately interpret complex neural patterns, ensure long-term biocompatibility of implantable systems, and preserve naturalness in the interaction between digital technologies and neurobiological dynamics, which requires careful consideration of both technical parameters and ethical aspects of such developments.
The relevance of NCIs cannot be overestimated. Just imagine: already today, paralyzed patients can type with their minds and blind people can distinguish the contours of objects with the help of visual implants. However, behind these breakthroughs lie unsolved problems. Why do the same algorithms work with different accuracy in different individuals? How can we strike a balance between invasive and non-invasive methods to maintain both precision and safety? And most importantly, how do we create interfaces that do not just execute commands, but become an extension of the nervous system, adapting to the user rather than the other way around?
Our review does not answer all these questions. It is an invitation to dialogue. We will consider the main types and principles of NCI construction, focusing on individual functional components, specific examples of implementations with detailed engineering aspects, algorithms and modeling approaches, criteria for evaluating NCI efficiency, and outline near and distant perspectives for the development of this field. We appeal to those who are ready to think boldly: psychophysiologists, neurobiologists with an engineering background, looking for inspiration in experimentation; doctors, passionate about creating new methods of rehabilitation; and futurologists, designing the future of human-machine symbiosis. After all, NCIs are not only the technology of tomorrow, but also a mirror into which we can look to try to see what humans might become in the age of brain-silicon fusion.

1.1. Justification for the Neural–Computer Interface Category

Currently, the term “neural–computer interface” (NCI) is being displaced by “brain–computer interface” (BCI), which is commonly used to define systems that focus exclusively on the brain and convert its activity into useful output signals for interacting with the external environment or correcting body functions. Such systems are also considered capable of modulating brain activity through targeted stimulation, generating useful input signals for the nervous system (https://bcisociety.org/bci-definition/, accessed on 6 August 2025). In our opinion, this approach is unjustifiably narrow: in particular, it excludes brain–spine interfaces (BSIs), which possess all the attributes that fall under the definition of a BCI—measurement of brain activity, computerized quasi-real-time processing, and functionally useful outputs that reconstruct natural brain outputs [1,2,3,4]—and which, in addition, use targeted stimulus delivery to the CNS to create functionally useful inputs [5]. In our opinion, introducing the more general category of “neural–computer interfaces” (NCIs) resolves this problem by covering both existing BCIs and BSIs as special cases of technologies that record and stimulate the brain and spinal cord to restore lost neurological functions. This classification may prove useful in the future for building full-fledged bidirectional neural–computer interfaces aimed also at human empowerment. NCIs should not be confused with neurocomputing, which refers to neuromorphic data processing technology inspired by biological processes in neural networks [6].

1.2. Main Types and Design Principles of Neural–Computer Interfaces (NCIs)

Some authors prefer the term brain–machine interface (BMI), emphasizing the control of artificial devices [7,8,9]. However, we argue that the established term BCI [10] is preferable for describing information transmission from the brain, as it implies processing neural signals (e.g., correlates of user intent) that may be converted into commands not only for actuators but also for computer programs in downstream signal chains, irrespective of physical implementation.
For systems involving computer-mediated conversion of sensor/program signals into neural stimuli, the separate term CBI (computer–brain interface) is useful, denoting centripetal signal flow toward the CNS. The most critical application of this type is restoring lost sensory functions. Finally, we must consider BCBI—systems combining both aforementioned interface types to create bidirectional interaction between neural structures of one or multiple users via computational processing, analogous to “brain-machine-brain interfaces” (BMBI) [11]. Thus, NCIs may be unidirectional (BCI or CBI) or bidirectional (BCBI/BMBI). For brain-to-brain communication involving information extraction, processing, and transfer, the more compact term “brain–brain interfaces” (BBI) is sometimes used [12,13].
Agency in BBIs. Analysis of BBI systems cannot proceed without considering the factor of agency—the ability of organisms to distinguish between self-generated and external signals when initiating behavior [14]. BBI can connect different parts of the nervous system while remaining within intra-agent interaction. This may include a cognitive hippocampal neuroprosthesis for restoring memory function or a brain–spine interface (BSI)—a system that converts signals from the cerebral cortex into spinal cord stimuli to restore motor and visceral functions, while providing sensory feedback by bypassing damage that disrupts natural signal transmission at the spinal cord level [1]. These NCIs serve as a “bridge” between neural structures within a single organism.
The structural-functional organization of BBI between organisms with nervous systems possessing agency allows classifying such NCI types as inter-agent. Creating a full-fledged inter-agent BBI requires two sets of BCI and CBI for each participant. When cross-connected—where the BCI output of one agent is linked to the CBI input of the second, and the BCI output of the second is fed to the CBI input of the first—a bidirectional brain-to-brain information transfer system is formed, enabling exchange of neural correlates of states, intentions and sensations through a computational processor (see Figure 1). Theoretically, interaction could occur between multiple agents, forming a Brainet [15].
Neural–computer interfaces (NCIs) can be classified by the degree of physical intervention into three main categories. Non-invasive NCIs detect or modulate biological signals without breaking the skin, as in the case of EEG or tDCS. Semi-invasive NCIs involve surgical access to neural structures without direct penetration of neural tissue, such as epidural or subdural spinal cord stimulation, ECoG, and stentrode. Invasive NCIs require implantation of electrodes into neural tissue or rely on cellular-level interventions, such as optogenetic modification of neuronal populations.
Currently, the most developed are unidirectional command NCIs. Based on the nature of interaction between users and actuators, some authors [16] distinguish three categories. Active BCIs: generate output signals based on brain activity that the user consciously controls, independent of external events, to operate an application. An example of this BCI type is direct command interfaces based on motor imagery (MI-BCI). Reactive BCIs: use brain activity that arises in response to external stimuli. The user indirectly modulates this activity to control the application. Typical neural–computer interfaces of this type are P300 Speller and SSVEP (Steady-State Visual Evoked Potential) BCI. Passive BCIs: rely on brain activity not directly related to volitional intent [17]. They are designed to enhance the adaptability and safety of human-machine systems by monitoring hidden cognitive states such as mental workload and engagement, without interfering with the user’s primary task. These systems enable adaptive automation by adjusting task complexity based on the user’s condition, which helps reduce errors and improve interaction efficiency [18,19]. Some authors [20] propose dividing BCIs into those dependent on auxiliary muscle activity, such as eye–brain–computer interfaces (EBCIs) that use residual muscle activity, and those independent of it, which operate based on decoding brain activity alone. An NCI system includes a sequence of signal processing stages that can be visualized as functional blocks shown in Figure 1.
For proper NCI operation, iterative preliminary configuration must be performed at all signal processing pipeline steps: filtering, artifact removal, feature extraction, decoder-classifier for BCI, or encoder for CBI. This requires the processing framework to support deferred analysis mode. Some frameworks—BCI2000 [21], OpenViBE, NeuroPype, BCILAB [22]—implement real-time emulation mode, enabling system testing with new, pre-recorded data. The signal processing pipeline’s speed depends on computation complexity, optimization level, and the presence of hardware accelerators. Algorithm development focuses on finding the optimal balance between accuracy, processing speed, and hardware implementation ergonomics. In real-world conditions, neural–computer interfaces operate in quasi-real-time mode, striving to provide sufficient response speed within given computational constraints. The formation of agency—the sense of control and action authorship—also depends on processing accuracy, speed, and feedback presence [23].

2. Basic Principles of Signal Conversion and Processing for NCIs

2.1. Signal Processing Pipeline in BCIs

(a)
Signal conversion
Obtaining information about activity in the target CNS area requires converting various manifestations of its biological activity into proportional electrical signals suitable for analog-to-digital conversion and subsequent digital processing. At the first stage, this can be achieved either through direct recording of biopotentials using surface, subcutaneous, epidural, subdural and penetrating electrodes (electroencephalography—EEG, epidural electrocorticography—eECoG, subdural electrocorticography—sECoG, endovascular stentrode, intracortical arrays), or through conversion of magnetic field energy (magnetoencephalography—MEG), optical radiation (functional near-infrared spectroscopy—fNIRS), acoustic waves (functional ultrasound—fUS), etc. Each of these converters can be quite non-trivial, and a significant part of the subsequent processing chain depends on the quality of its design and implementation.
(b)
Signal transmission
Signal digitization and transmission from the recording site to the processing site is performed either by wired or wireless means. Since analog signal transmission suffers from noise and attenuation, and processing is performed digitally, the signal first undergoes analog-to-digital conversion (ADC). High-speed precision sigma-delta ADCs with 24-bit resolution, such as the TI ADS1299, have become widespread. Some chips combine amplification, digitization, and signal transmission over a differential SPI interface, for example, the Intan RHD2000 series.
In modern NCIs, efficient wireless transmission of neural signals requires data compression/decompression methods that can reduce the amount of transmitted information without losing data critical for decoding brain activity. A recent study proposed a neural sensor architecture using Address-Event Representation (AER) that provides high data compression while preserving key information about neural spikes. Experiments showed this method achieves compression ratios from 50 to 100, 5–18 times better than previous developments, while maintaining approximately 0.9 correlation between original and reconstructed signals and spike detection accuracy over 90% [24].
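To illustrate the general idea of event-based compression (not the specific architecture of [24]), the following minimal Python sketch detects threshold crossings per channel and transmits only (channel, sample-index) address-events instead of the full waveform; the threshold, channel count, and bit widths are illustrative assumptions.

```python
import numpy as np

def to_address_events(x, threshold):
    """Convert a multichannel recording into address-events.

    x: array (n_channels, n_samples) of raw samples.
    Returns a list of (channel, sample_index) pairs marking negative
    threshold crossings (a common, simplified spike criterion).
    """
    events = []
    for ch in range(x.shape[0]):
        below = x[ch] < -threshold
        # Keep only the first sample of each crossing.
        onsets = np.flatnonzero(below & ~np.roll(below, 1))
        events.extend((ch, int(s)) for s in onsets)
    return events

# Toy example: 64 channels, 1 s at 20 kHz of noise with sparse injected "spikes".
rng = np.random.default_rng(0)
raw = rng.normal(0, 1, size=(64, 20_000))
spike_idx = rng.integers(0, 20_000, size=(64, 10))
for ch in range(64):
    raw[ch, spike_idx[ch]] -= 8.0            # inject large negative deflections

events = to_address_events(raw, threshold=5.0)
raw_bits = raw.size * 16                      # e.g., 16-bit raw samples
event_bits = len(events) * (6 + 15)           # assumed 6-bit address + 15-bit timestamp
print(f"events: {len(events)}, compression ratio ~ {raw_bits / event_bits:.0f}")
```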
(c)
Data preprocessing
All necessary signal preprocessing on the end device is performed in real time under the control of “low-level” software and specialized chips. Basic preprocessing typically includes artifact removal (e.g., from muscle EMG activity), low/high pass filters (LPF/HPF) or bandpass filtering, power line noise rejection, baseline correction, and spatial filtering. Digital filtering is usually implemented using finite/infinite impulse response filters (FIR/IIR), Kalman filter, or similar linear converters [25]. Spatial filtering in its simplest form is performed using the Laplacian operator [26].
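As a minimal illustration of this preprocessing stage (the band limits, notch frequency, and filter orders are assumptions, not the parameters of any cited system), the sketch below applies a zero-phase band-pass IIR filter, a power-line notch, and baseline correction to a multichannel EEG array with SciPy.

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

def preprocess(eeg, fs, band=(1.0, 40.0), line_freq=50.0):
    """Band-pass and notch-filter EEG of shape (n_channels, n_samples)."""
    # Zero-phase band-pass (4th-order Butterworth) to remove drift and high-frequency noise.
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    eeg = filtfilt(b, a, eeg, axis=-1)
    # Narrow notch at the power-line frequency.
    bn, an = iirnotch(line_freq, Q=30.0, fs=fs)
    eeg = filtfilt(bn, an, eeg, axis=-1)
    # Baseline correction: subtract the per-channel mean.
    return eeg - eeg.mean(axis=-1, keepdims=True)

fs = 250.0
eeg = np.random.randn(8, 10 * int(fs))        # 8 channels, 10 s of synthetic data
clean = preprocess(eeg, fs)
```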
Hardware solutions in the form of digital signal processors (DSP), such as 16-channel TMS320C5517, can perform (de)multiplexing, fast Fourier transform (FFT), power spectral density calculation, principal component analysis (PCA), linear discriminant analysis (LDA), Bayes rule, and finite state machine operations [27]. A more modern solution based on a programmable system-on-chip (PSoC) with 22 mm2 area contains a 68-channel neural signal processing system including spike detectors, action potential and local field potential codecs, as well as an energy-efficient processor with hardware accelerators for feature extraction and data compression [28].
When prior information is available about the types of control commands that can be specified, segmentation of neurophysiological data is performed by marking event start and end points, such as mental movement imagery or rest phases. This creates training datasets that, in the simplest case, consist of a class of signals related to intentional commands and a class of all other activities that the classifier should not respond to.
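A minimal sketch of such segmentation: fixed-length, labeled epochs are cut around event markers to build a training set (window lengths and label conventions here are illustrative assumptions).

```python
import numpy as np

def make_epochs(eeg, fs, events, tmin=0.0, tmax=2.0):
    """Cut labeled windows from continuous data.

    eeg: (n_channels, n_samples); events: list of (onset_sample, label),
    e.g., label 1 for imagined movement, 0 for rest.
    Returns X of shape (n_epochs, n_channels, n_window) and a label vector y.
    """
    start, stop = int(tmin * fs), int(tmax * fs)
    X, y = [], []
    for onset, label in events:
        seg = eeg[:, onset + start:onset + stop]
        if seg.shape[1] == stop - start:      # skip truncated epochs at the recording edges
            X.append(seg)
            y.append(label)
    return np.stack(X), np.array(y)

fs = 250
eeg = np.random.randn(8, 60 * fs)
events = [(fs * t, i % 2) for i, t in enumerate(range(2, 58, 4))]   # alternating rest/imagery markers
X, y = make_epochs(eeg, fs, events)
```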
(d)
Data extraction
Feature extraction methods can be divided into several families, including statistical characteristics extraction, spatial pattern analysis, and spectral/time-frequency signal analysis. In one EEG-based BCI system, a combination of PCA and FFT was used for feature extraction [29]. Another study used two features of neural activity detected within a specific interval: the number of action potentials exceeding a certain threshold and average high-frequency spectral power [25]; a linear transformation was applied to map the selected features to motor commands.
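For illustration only (combining the kinds of features mentioned above rather than reproducing any cited pipeline), the sketch below computes band-power features with Welch’s method and simple threshold-crossing spike counts; the band limits and threshold are assumptions.

```python
import numpy as np
from scipy.signal import welch

def band_power_features(epochs, fs, bands=((8, 13), (13, 30))):
    """Mean spectral power per channel in each band; epochs: (n_epochs, n_ch, n_samples)."""
    freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[..., mask].mean(axis=-1))
    return np.concatenate(feats, axis=-1)           # (n_epochs, n_ch * n_bands)

def spike_count_features(windows, threshold):
    """Number of negative threshold crossings per channel in each window."""
    return (windows < -threshold).sum(axis=-1)

fs = 250
epochs = np.random.randn(14, 8, 2 * fs)             # 14 epochs, 8 channels, 2 s each
X = band_power_features(epochs, fs)                 # shape (14, 16)
```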
(e)
Classification
Decoding neural activity is a key aspect of BCI operation as it affects command recognition accuracy and system speed, which are critical for real-time performance. Signal classes are typically predefined according to BCI tasks. Classification and regression methods (predicting signal values) can be divided into supervised learning, where the model trains on labeled data with class labels for each example, and unsupervised learning, where the model discovers hidden patterns and learns from unlabeled data. In the first step, the algorithm adjusts internal parameters based on training set examples. After training, the classifier processes new data and outputs class membership values.
Simple classifiers like LDA and support vector machines (SVM) are often used in NCI systems. For example, one EEG-based BCI system used an SVM with radial basis function to control a 2-degree-of-freedom robot with 85.45% accuracy [29]. LDA variations were used in asynchronous SSVEP-BCI for peripheral device control [30], and in MI-BCI for decoding imagined kinesthetic syllable pairs [31]. The Bagged Trees Classifier ensemble algorithm showed best performance for decoding upper limb MI from ECoG sensors over hand representation sensorimotor areas [32,33].
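A minimal example of this supervised step using scikit-learn; the data are synthetic and the hyperparameters are assumptions, so it illustrates the approach rather than reproducing the cited systems.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic band-power features: 100 epochs x 16 features, two classes.
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 16))
y = rng.integers(0, 2, size=100)
X[y == 1, :4] += 1.0                       # make class 1 weakly separable

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM (RBF)", make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.2f} cross-validated accuracy")
```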
Artificial neural networks are more flexible data processing tools and are widely used as BCI classifiers since they can approximate nonlinear class boundaries. For example, a multilayer perceptron was used in a simple BCI system recognizing commands based on EEG alpha rhythm changes during eye opening/closing. Neural network models trained in TensorFlow using the Keras API achieved 92.1% accuracy in controlling a small tracked robot [34]. A pretrained recurrent neural network (RNN) decoder was applied in an intracortical speech BCI [35]. Deep learning (DL) model-based systems have also been used for EEG MI-BCI [36], including four-legged robot control in chronic experiments [37].
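As a sketch of the neural-network variant, the following generic Keras multilayer perceptron is trained on the same kind of synthetic feature vectors; the architecture and hyperparameters are assumptions and do not correspond to the model of [34].

```python
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16)).astype("float32")
y = (X[:, :4].sum(axis=1) > 0).astype("int32")     # synthetic two-class labels

model = keras.Sequential([
    keras.layers.Input(shape=(16,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=30, batch_size=16, validation_split=0.2, verbose=0)
```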
(f)
Command execution
This stage involves converting classifier-decoded data into application commands for device control (robots, exoskeletons, prostheses, orthoses, wheelchairs, etc.) or software.
(g)
Feedback
This aspect involves incorporating visual, tactile, and auditory signals to improve user-BCI interaction accuracy through neuroplasticity and sense of agency—authorship of events. For this aspect, application response speed is most important, while feedback type (additional auditory notifications, color schemes, spatial organization) is less critical.

2.2. Basic Principles of Data Processing for CBIs

The signal processing chain begins with modules functionally similar to BCI systems, responsible for data collection and preprocessing. Initially, signals must be cleaned from noise, amplified, and converted to digital form. The preprocessing is simplified because CBI typically receives standardized signals from sensors (artificial afferent sensors), but may also receive data from BCI output applications (see Figure 1). However, CBI systems have specific signal processing characteristics. While BCIs rely on machine-learning methods such as LDA, SVM, CNN, etc., for classification and decoding of brain signals, CBIs use encoders (algorithms that convert external data into stimulation patterns) and biophysical models of neural networks for precise modulation.
(a)
Data transformation and encoding modules
Visual. To transmit external data to the brain, the system must first extract relevant signal characteristics, which are then encoded into stimulation patterns. This step varies depending on the type of sensory information being processed. For visual signals from photosensitive sensors, this involves extracting contours, objects, and motion; encoding intensity, shape, and depth as frequency, amplitude, or spatial patterns; and converting the data into stimuli for retinal neurons or the visual cortex. Photosensitive elements of implantable subretinal chips generate light intensity maps of a certain resolution and transmit them through electrodes to retinal bipolar cells; light intensity is converted to current strength via a sigmoidal dependence [38] (a minimal sketch of such an intensity-to-current mapping is given below, after the modality examples).
Tactile. Increasing amplitude or frequency of electrical stimuli enhances perceived tactile signal intensity, allowing linear encoding; subtle features of evoked sensations depend on the type of mechanoreceptor fibers being innervated and complex neural connections, hence the natural tendency toward biomimetic strategies and creation of, for example, desynchronized stimulation patterns [39].
Auditory. Audio signals are processed by time-frequency analysis algorithms (e.g., Cochlear Filter Banks and Temporal Envelope Coding). Modern cochlear implants typically use a Continuous Interleaved Sampling (CIS) encoding strategy, transmitting envelope information to electrodes via unsynchronized alternating biphasic pulses [40].
Spatial. Spatial orientation signals involve integrating data from accelerometers, gyroscopes, encoding information, and stimulating the vestibular system (e.g., transmitting signals to the brain from semicircular canals to restore balance sense in vestibulopathy) [41].
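Returning to the visual example above: as a minimal illustration of the sigmoidal intensity-to-current mapping mentioned for subretinal chips [38], the Python sketch below maps a normalized light-intensity map to per-electrode current amplitudes. The sigmoid parameters and current range are assumptions chosen for illustration, not the calibration of any particular implant.

```python
import numpy as np

def intensity_to_current(pixel, i_min=0.0, i_max=60.0, midpoint=0.5, slope=10.0):
    """Map normalized light intensity (0..1) to stimulation current (µA) via a sigmoid."""
    return i_min + (i_max - i_min) / (1.0 + np.exp(-slope * (pixel - midpoint)))

# A toy 4x4 "light intensity map" from a photosensitive array.
frame = np.random.default_rng(1).random((4, 4))
currents = intensity_to_current(frame)    # per-electrode current amplitudes, µA
```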
(b)
Stimulation module
The stimulation module is the executive component that converts encoded data from sensors or BCI systems into neural activity using artificial means. Key parameters for stimulation include: stimulation type (electrical, magnetic, ultrasonic, optical, chemical, or multimodal); location (visual, auditory, somatosensory cortex, hippocampus, or other deep brain structures, afferent/efferent pathways); frequency and amplitude characteristics; spatiotemporal dynamics (patterned stimulus sequences for precise neural encoding).
Neural tissue stimulation employs diverse techniques depending on the target applications. Electromagnetic methods include transcutaneous electrical nerve stimulation (TENS) [42], peripheral nerve stimulation (PNS) [39], spinal cord stimulation (SCS), including transcutaneous spinal cord stimulation (tSCS) [43,44]; transcranial direct and alternating current stimulation (tDCS and tACS, respectively) [45], transcranial random noise stimulation (tRNS) [46], noisy galvanic vestibular stimulation (nGVS) [47], trans-spinal magnetic stimulation (ts-MS) [2], and transcranial magnetic stimulation (TMS) for non-invasive impact [48]. Deep brain stimulation (DBS) and intracortical microstimulation (ICMS/ISMS) [49,50] are invasive methods. The non-invasive temporal interference stimulation (TIS) method has shown promising potential and, along with focused ultrasound stimulation (FUS), is gaining popularity as a research tool [51,52,53]. Optogenetic stimulation, although already applied successfully for partial vision restoration [54], remains at an active research stage.

2.3. Data Processing Specifics for BBIs

While CBIs should be based on biologically realistic algorithms for transforming external signals into neural stimulation patterns, considering neuroplasticity, BBIs must additionally account for the natural dynamics and neural “communication language” of the interacting structures. To implement a “digital bridge”, e.g., between brain and spinal cord or within the brain (a hippocampal neuroprosthesis), recorded spatiotemporal patterns of neural activity from one neural structure must be converted, via stimulation, into the “terms” of another structure. Studies show such conversion can be performed in real time using a “Multiple Input Multiple Output” (MIMO) model [55], representing a set of multiple-input single-output (MISO) models. Each model follows a generalized Laguerre–Volterra form, which can be viewed as a combination of the Volterra model and a probit generalized linear model. Laguerre basis functions and regularized estimation were used to optimize model complexity and avoid overfitting [56]. A more advanced memory decoding model (MDM) correlated MIMO-based predictions of neural activation in the hippocampal CA1 region with trial image categories, creating sparse static stimulation templates for each category per patient [57]. Piecewise polynomial functions—B-splines—were used to extract memory features from spatiotemporal spike patterns [56]. Essentially, the hippocampal neural activity transformation module comprises scripts running on a general-purpose processor to emulate hippocampal neural networks.
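Schematically, a first-order MISO model of this family can be written as follows (the notation here is assumed for illustration and follows the general structure described in [55,56]; the full generalized Laguerre–Volterra model also contains second-order cross-kernels):

w(t) = k_0 + \sum_{n=1}^{N}\sum_{j=1}^{J} c_{n,j}\, v_{n,j}(t), \qquad v_{n,j}(t) = \sum_{\tau=0}^{M} b_j(\tau)\, x_n(t-\tau),

P\{\text{output spike at } t\} = \Phi\!\left(\frac{w(t) + a(t) - \theta}{\sigma}\right),

where x_n are the input spike trains, b_j the Laguerre basis functions, c_{n,j} the regularized expansion coefficients, a(t) an autoregressive after-spike term, \Phi the standard normal cumulative distribution function (the probit link), and \theta, \sigma the threshold and noise scale.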

3. Brain–Computer Interfaces (BCIs) by Degree of Invasiveness with Examples

This section examines BCIs categorized by their level of physical intervention—non-invasive, semi-invasive, and invasive—detailing their underlying principles and presenting specific implementations.

3.1. Non-Invasive BCIs

(a)
BCIs based on electroencephalography (EEG) and magnetoencephalography (MEG)
EEG and MEG-based BCIs utilize various evoked and induced brain potentials/fields that can be associated with commands, as well as oscillatory electrical/magnetic activity. BCIs decoding oscillatory brain activity analyze power changes in specific frequency bands (alpha, beta, theta, gamma) linked to different cognitive or motor processes. For example, the first EEG-based BCI using alpha rhythm suppression during eye opening was employed to start/stop operations [58]. Depending on the brain phenomena used for control, non-invasive BCIs are categorized into several types, some of which we will examine.
(b)
Motor Imagery (MI) BCI
This approach analyzes neural activity patterns associated with MI without requiring actual movement execution. It is actively researched for post-stroke rehabilitation and prosthesis control. Such BCIs track the sensorimotor rhythm (SMR) during movement attempts or mental imagery. During these events, synchronous activity in sensorimotor cortex neurons decreases, reducing SMR amplitude—an oscillatory process (12–15 Hz) generated during rest that attenuates during movement, even imagined [59]. Because multiple neurons fire at different phases, this is called event-related desynchronization (ERD). Once the movement (or its imagery) is completed, the reverse process occurs—event-related synchronization (ERS) [60]. These EEG rhythm changes are used to detect user intent and generate control commands [60].
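A minimal sketch of how ERD/ERS is typically quantified: the relative change of band power in an active window with respect to a pre-event baseline (the band limits and window boundaries here are illustrative assumptions).

```python
import numpy as np
from scipy.signal import butter, filtfilt

def erd_percent(epochs, fs, band=(8, 13), baseline=(0.0, 1.0), active=(1.5, 3.5)):
    """Event-related (de)synchronization in percent.

    epochs: (n_epochs, n_samples) for one channel; negative values indicate ERD,
    positive values ERS, relative to the baseline window.
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    power = filtfilt(b, a, epochs, axis=-1) ** 2
    def mean_power(win):
        sl = slice(int(win[0] * fs), int(win[1] * fs))
        return power[:, sl].mean()
    p_ref, p_act = mean_power(baseline), mean_power(active)
    return 100.0 * (p_act - p_ref) / p_ref

fs = 250
epochs = np.random.randn(30, 4 * fs)      # 30 trials, 4 s each, single channel
print(f"ERD/ERS: {erd_percent(epochs, fs):.1f} %")
```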
Numerous experimental EEG-BCIs use this approach: left/right hand MI discrimination [61], Berlin Brain–Computer Interface project [62,63], lower-limb exoskeleton control [64], three-class MI classification using traditional methods [65], imagined kinesthetic syllable decoding [31], deep learning models [36], including quadruped robot control in chronic experiments [37]. Studies explore MI-BCI rehabilitation programs for upper limb function recovery and cortical activation in stroke patients [66,67]. Combined BCI and MI decoding therapy shows promise for multiple sclerosis patients [68].
(c)
P300 Speller
The operating principle is based on the occurrence of the P300 evoked brain potential in response to a rare, significant stimulus that needs to be distinguished among more frequent, insignificant ones [69]. For example, a character matrix is displayed on the screen, and characters, rows, or areas covering the entire matrix flash randomly. When the desired character or area flashes, a P300 signal appears in the brain recordings approximately 300 ms later. Repeated detections of this signal allow the corresponding character to be selected, which is then added to the resulting string on the screen. The process continues until the desired word is formed. Unfortunately, the operational speed of such a BCI is relatively low, ranging from 5 to 10 characters per minute. A review of similar systems is presented in [20,70].
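A minimal sketch of the selection logic: scores from a trained P300 detector are accumulated over repeated row/column flashes, and the target character is taken at the intersection of the best-scoring row and column. The scoring function here is a stand-in for a real classifier (e.g., an LDA trained on post-stimulus epochs), and the matrix layout is an assumption.

```python
import numpy as np

def select_character(matrix, flash_sequence, epoch_scores):
    """matrix: 2D array of characters; flash_sequence: list of ('row'|'col', index);
    epoch_scores: one P300-likelihood score per flash, e.g., from an LDA classifier."""
    n_rows, n_cols = matrix.shape
    row_scores, col_scores = np.zeros(n_rows), np.zeros(n_cols)
    for (kind, idx), score in zip(flash_sequence, epoch_scores):
        if kind == "row":
            row_scores[idx] += score
        else:
            col_scores[idx] += score
    return matrix[row_scores.argmax(), col_scores.argmax()]

matrix = np.array([list("ABCDEF"), list("GHIJKL"), list("MNOPQR"),
                   list("STUVWX"), list("YZ1234"), list("56789_")])
# Toy data: every row and column flashed twice, with noisy per-flash scores.
flashes = [(k, i) for _ in range(2) for k in ("row", "col") for i in range(6)]
scores = np.random.default_rng(3).normal(size=len(flashes))
print(select_character(matrix, flashes, scores))
```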
(d)
BCIs Based on SSVEP(F)s (Steady-State Visual Evoked Potentials (Fields))
SSVEPs are steady-state visually evoked brain potentials recorded using EEG. The underlying principle of such BCIs is based on the well-known and highly reproducible phenomenon of enhanced oscillatory activity in the visual cortex (V1) in response to periodic visual stimulation, with the response locked to the stimulation frequency [71,72]. A similar phenomenon is observed in MEG recordings; in this case, brain fields rather than potentials are discussed [73]. Most BCI systems based on SSVEP(F)s use predefined frequencies for different commands. More complex systems take into account the functioning of attention and voluntary control systems [74].
Modern BCIs based on this experimental paradigm aim to improve system performance. The Multiple Frequencies Sequential Coding protocol allows increasing the number of encoded targets with a limited number of available frequencies [75]. A hybrid BCI combining two neurophysiological phenomena, ERD and SSVEP, is presented in [76]. In another study, two types of visual stimuli were compared as sources of visual stimulation: frequency-modulated (FM) and traditional sinusoidal (SIN) stimuli. The results showed that the average classification accuracy for SIN stimuli was 95.3%, while for FM stimuli it was 91.7%. Thus, the use of FM stimuli led to a slight decrease in classification accuracy by 3.6 percentage points compared to SIN stimuli. However, subjective user comfort was significantly higher when using FM stimuli [77]. In [78], a BCI system based on SSVEP is presented, allowing control of a robotic arm with 7 degrees of freedom with an accuracy of 92.78% and a command transmission speed of 15 commands per minute. As a classifier, a canonical correlation analysis filter bank method was used, which does not require prior calibration.
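For illustration, a minimal canonical correlation analysis (CCA) scorer for SSVEP frequency detection: each candidate frequency is represented by sine/cosine reference signals (with one harmonic), and the frequency with the highest canonical correlation is selected. This is the plain, calibration-free CCA variant rather than the filter-bank extension used in [78]; the frequencies, data length, and harmonic count are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def ssvep_cca_score(eeg, fs, freq, n_harmonics=2):
    """Canonical correlation between EEG (n_samples, n_channels) and reference signals."""
    t = np.arange(eeg.shape[0]) / fs
    refs = np.column_stack([f(2 * np.pi * (h + 1) * freq * t)
                            for h in range(n_harmonics) for f in (np.sin, np.cos)])
    cca = CCA(n_components=1)
    x_c, y_c = cca.fit_transform(eeg, refs)
    return abs(np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1])

fs, dur = 250, 2.0
t = np.arange(int(fs * dur)) / fs
eeg = 0.5 * np.sin(2 * np.pi * 12.0 * t)[:, None] + np.random.randn(len(t), 8)  # 12 Hz target
candidates = [8.0, 10.0, 12.0, 15.0]
scores = {f: ssvep_cca_score(eeg, fs, f) for f in candidates}
print("detected:", max(scores, key=scores.get), scores)
```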
In the asynchronous version of SSVEP-BCI, the EEG signal is analyzed continuously, and the system must detect moments when the user begins focusing on a stimulus associated with a specific stimulation frequency. This provides greater flexibility in control, but at the cost of an increased number of false alarms. To minimize these, additional checks for signal temporal stability, integration of BCI with other modalities, and post-processing through filters and probabilistic models are used. Such a system, described in [30], enabled joint control of a wearable manipulator and a robotic arm. The average recognition accuracy of user intentions was 93.0%.
(e)
Passive and Hybrid BCIs
Passive and hybrid brain–computer interfaces (BCIs) expand the boundaries of traditional human-machine interaction by taking into account not only the user’s intentional commands but also their cognitive and emotional states. These systems are especially relevant under high workload conditions, where adapting the interface to the operator’s state without additional distraction is critical.
A passive EEG-based BCI system was proposed to enhance interpersonal coordination during joint classification of bistable visual stimuli (Necker cube) with varying degrees of ambiguity. Although the system did not involve direct brain stimulation (and therefore cannot be considered a full BBI), it enabled participants to adapt to each other. Modulating task complexity at a frequency close to the “natural” rhythm of cognitive resource recovery helped maintain high brain performance over extended periods [79].
In [80], a dual BCI concept was introduced and experimentally demonstrated for the first time, integrating both reactive (rBCI) and passive (pBCI) modules into a single EEG-based system. The reactive BCI enabled pilots to perform control actions (e.g., switching checklists) via a VEP interface with high accuracy (~98–100%). The passive BCI monitored missed radar warnings (i.e., lapses in attention) and, in cases of inattentiveness, triggered an automatic collision-avoidance response. The system achieved an F1-score of 0.94. This approach opens up new opportunities for developing neuroadaptive human-machine interfaces, particularly in cognitively demanding scenarios such as piloting, air traffic control, and real-time operation of complex systems.
(f)
MEG as a Method of Non-Invasive Brain Activity Recording for BCI
The MEG method has several advantages that make it promising for use in BCI: millisecond temporal resolution, high spatial resolution due to multichannel systems, and less distortion of magnetic fields when passing through biological tissues, as well as a wide bandwidth. For example, the 306-sensor Elekta Neuromag MEG based on SQUID sensors (Superconducting Quantum Interference Device) provides a resolution of 2–5 mm depending on the depth of the source. The potential of MEG for BCI has been demonstrated in several studies: [81,82,83].
In recent years, compact optically pumped magnetometers (OPMs) have been actively developed; these sensors prepare spin states of alkali-metal vapors (usually rubidium or cesium) and measure changes in their magnetic state using laser light. In the experiment [84], three participants used OPM sensors to control a BCI system enabling “mental writing” of words. The system analyzed data in real time and determined the selected letter based on the frequency characteristics of SSVEPs. The average recognition accuracy was 97.7%, demonstrating the high efficiency of the approach. In [85], the potential of OPM technology as an alternative to traditional SQUID sensors was investigated. It was shown [86] that a limited number of OPM sensors is sufficient for recording sensorimotor rhythms, reducing the cost and complexity of the system. In [73], a nine-command BCI based on high-frequency stimulation of steady-state visual evoked fields (SSVEFs) in the range of 58–62 Hz with a step of 0.5 Hz is presented. An advanced component analysis algorithm tailored for ensemble data processing tasks was applied for precise SSVEF identification and system performance evaluation. This study was the first in its field to demonstrate the capabilities of a high-frequency SSVEF-BCI implemented using OPM-MEG. The developed technique achieved a theoretical average information transfer rate (ITR) of 58.36 bits/min with a data analysis length of 0.7 s, while the maximum individual ITR value reached 63.75 bits/min.
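Information transfer rates such as those quoted above are conventionally computed with the Wolpaw formula; the sketch below is a generic implementation and is not tied to the specific trial structure of [73] (the example numbers are purely illustrative).

```python
import numpy as np

def itr_bits_per_min(n_classes, accuracy, trial_seconds):
    """Wolpaw information transfer rate.

    n_classes: number of selectable commands; accuracy: classification
    probability P; trial_seconds: time per selection, including any gaze-shift pauses.
    """
    p, n = accuracy, n_classes
    bits = np.log2(n)
    if 0 < p < 1:
        bits += p * np.log2(p) + (1 - p) * np.log2((1 - p) / (n - 1))
    return bits * 60.0 / trial_seconds

# Example: a 9-command speller at 90% accuracy and 2 s per selection (illustrative numbers).
print(f"{itr_bits_per_min(9, 0.90, 2.0):.1f} bits/min")
```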
(g)
BCI Based on Near-Infrared Spectroscopy (NIRS)
NIRS is based on measuring changes in the concentration of oxyhemoglobin and deoxyhemoglobin in the cerebral cortex. These changes correlate with the activation of various brain regions and reflect neural activity through changes in blood flow. According to [87], a BCI based on NIRS demonstrated higher control accuracy compared to an EEG-based interface during post-stroke rehabilitation. The average control accuracy using NIRS was 46.4%, while for EEG it was 40.0%. Additionally, patients undergoing rehabilitation with NIRS showed significant improvement in motor functions, as demonstrated by higher scores in the Action Research Arm Test (ARAT). A recent review of non-invasive BCIs is presented in [88].
(h)
Eye–Brain–Computer Interfaces (EBCIs)
These interfaces use eye movement signals and neural signals to control external devices. Such systems are particularly useful for users capable of controlling eye movements but limited in using other BCI methods. EBCIs combine eye tracking and EEG, creating a hybrid non-invasive BCI for contactless interaction. This integration improves control accuracy and reduces the risk of errors caused by unintentional eye movements. EBCIs effectively address the “Midas touch” problem arising from the system’s reaction to unconscious eye movements. Using accompanying brain signals, such as the event-related potentials of user interface response anticipation, EBCIs enhance the differentiation between intentional and scanning eye movements, improving the accuracy and speed of the interface. This approach makes EBCIs intuitive and user-friendly in complex scenarios [89].

3.2. Semi-Invasive BCIs

(a)
Epidural Electrocorticography (eECoG)
The wireless 64-channel epidural ECoG implant WIMAGINE [90] is one of the most advanced interfaces for clinical BCIs. It features an adjustable gain and sampling rate, with a noise level < 1 µV in the range of 0.5–300 Hz. Data transmission occurs via a wireless channel in the 402–405 MHz band, while power is supplied through an inductive link at 13.56 MHz, consuming 100 mW. Digitized signals are transmitted to a base station connected to a PC for application control. The implant is enclosed in a hermetic casing with an optimized antenna system, facilitating surgical manipulation and implying the use of an auxiliary headset. In [91], data on its biocompatibility and chronic use (6 months) were obtained using a primate model. A preclinical study was conducted on two sheep, in which the device was implanted for 10 months [92]. In [3], the restoration of voluntary control over lower-limb movements after spinal cord injury (SCI) was demonstrated in a human subject. In this study, an epidural 16-channel stimulator was used to establish communication between brain and spinal cord signals.
The integration of a passive BCI based on the WIMAGINE implant with depth sensors for space assessment was also investigated to create assistive robotic devices (wheelchairs or robotic manipulators) designed for long-term home use by individuals with motor impairments [93]. The system helped a quadriplegic patient avoid unintended actions, such as wheelchair collisions with surrounding objects or accidental opening of the robot’s grip, which could lead to dropping items. Results showed that the proposed solutions improved performance in both tasks, reducing execution time and minimizing the number of required mental commands. Neural signal acquisition (local field potentials—LFP) was performed with the following characteristics: bandpass filtering at 0.5–300 Hz and 12-bit digitization. The total system latency was 400 ms for the wheelchair and 200 ms for the robot.
A semi-invasive BCI named NEO (Neural Electronics Opportunity), equipped with 8 chronic eECoG electrodes placed over the primary sensorimotor cortex and a wearable hand exoskeleton, assisted a patient with tetraplegia following C4 SCI. The patient demonstrated a 5-point improvement in upper limb motor scores and a 27-point increase in the ARAT test. Using the neurointerface for 9 months significantly improved hand function. The BCI was equipped with wireless power and a wireless data transmission system [94].
(b)
Subdural Electrocorticography (sECoG)
The ECoG signal can be recorded to convert motor imagery (MI) into a signal for external devices, such as exoskeletons or wheelchairs [32]. An ECoG recording device was implanted in the sensorimotor cortex of a patient with cervical spinal cord injury. Subdural implantation of two four-contact electrode strips (Resume II Leads, Medtronic, Minneapolis, MN, USA) was performed in the somatosensory area responsible for arm movements. Decoding accuracy was defined as the percentage of correctly classified motor imagery, with the most accurate being an online bagging tree classifier (84.15% on average over 9 weeks). In the study by Davis et al. [33], data confirming the stable operation of the system over five years for brain activity recording and stimulation of hand motor functions using an orthosis in a single patient were presented. The average monthly AUC of the decoder was 0.959.
In another clinical study [95], two 64-channel ECoG grids (PMT Corporation, Chanhassen, Minnesota) were subdurally implanted on the pial surface of the brain, covering areas responsible for speech and upper limb movement control. A convolutional neural network was developed to decode neural activity efficiently from electrode matrices. The trial involved a patient with severe dysarthria caused by amyotrophic lateral sclerosis (ALS). Using a chronic ECoG implant placed over the ventral sensorimotor cortex, the patient successfully controlled computer applications using six intuitive speech commands. These commands were accurately recognized and decoded with a median accuracy of 90.59% over three months of testing. Notably, such high accuracy was achieved without the need for retraining or recalibration of the model.
Another notable example is the development of a BCI that decoded speech in a paralyzed bilingual man [96]. A machine-learning model trained on sentences in Spanish and English predicted phrases that the patient attempted to articulate in both languages. The decoder’s vocabulary consisted of 111 Spanish and 70 English words, along with several dozen phrases. With the decoder, the patient communicated with researchers in both languages.
In studies [97,98], the world’s first 1024-electrode microelectrode array for high-resolution mapping of the human cerebral cortex was demonstrated. The device, mounted on a planar flexible polyimide substrate with platinum electrodes, provided dense coverage with an inter-electrode distance of 400 µm and a thickness of 22 µm. This configuration allowed precise recording of neural activity both under anesthesia (sensorimotor mapping) and during wakefulness (language tasks), capturing boundaries of functional brain areas and activity related to speech intentions. The system demonstrated excellent results in decoding neural signals, including phase inversion boundaries in the somatosensory and motor cortex, as well as activation of Broca’s area during speech preparation. The high electrode density achieved a spatial resolution of 400 µm and decoding accuracy up to 90% during speech activity analysis. Layer 7 Cortical Interface is a commercial version of this technology by Precision Neuroscience, positioning itself as an optimized device for long-term use. The company was co-founded by former Neuralink members.
(c)
Stentrodes
The Stentrode™ technology employs a semi-invasive endovascular method for recording and stimulating brain activity. A self-expanding mesh with a set of electrodes is introduced percutaneously, under angiographic guidance, through a catheter into the jugular vein and advanced into the superior sagittal sinus, positioned near the sensorimotor cortex. After the mesh expands, the electrodes are securely pressed against the vessel walls, enabling them to record or stimulate neural activity.
The first human implantation of a stentrode was performed on two ALS patients. In addition to relying on BCI signals for controlling a computer mouse (clicks, zooming), patients were also provided with an eye tracker for cursor navigation in instrumental activities of daily living (IADL). Participant 1 achieved an average click selection accuracy of 92.63% while typing at a speed of 13.81 correct characters per minute (CCPM). Participant 2 achieved an average click selection accuracy of 93.18%, with a speed of 20.10 CCPM. Both patients demonstrated independent execution of IADL tasks, including sending text messages, online shopping, and financial management [99].
Technologically, the stentrode consists of a monolithic thin-film cylindrical mesh array with 16 electrodes, a data collection, decoding, and wireless transmission unit (ITU, Synchron, Palo Alto, CA, USA), implanted subcutaneously on the patient’s body and connected to the stent via a flexible transvascular cable 50 cm in length, exiting from the base of the skull. Power is supplied inductively. The electrocorticographic signal from the stentrode is wirelessly transmitted to an external telemetry unit (ETU) using infrared light at a sampling rate of 2 kHz with a resolution of 0.125 µV/bit. The ETU communicates with a Windows Surface Book 2 tablet (Microsoft, Redmond, WA, USA), equipped with an eye tracker (Tobii Dynavox, Pittsburgh, PA, USA) integrated into the lower part of the device for interaction with the Windows 10 user interface. It is claimed that the recording quality matches subdural and epidural ECoG matrix-based methods for neural activity registration [100].

3.3. Invasive BCIs

“That’s one small step for [a] man, one giant leap for mankind”—these words, spoken by Neil Armstrong on 20 July 1969 as he first stepped onto the lunar surface, became a symbol of the greatest achievement in science and technology. Decades later, in 2012, another story inspired the world: “One small nibble for a woman, one giant bite for BCI”—this is how Jan Scheuermann, a patient of Dr Jennifer Collinger, expressed her emotions. At the age of 52, she had two “Utah Array” electrode arrays implanted in her motor cortex and, with the help of a BCI, was able to bite off a piece of chocolate for the first time in many years. The patient suffered from hereditary spinocerebellar neurodegeneration, which left her tetraplegic. After 13 weeks of training with the BCI, she could control an anthropomorphic prosthetic arm with 7 degrees of freedom: 3-axis movement, 3D orientation, and grip. The average success rate for achieving goals was evaluated at 91.6% (SD 4.4) [101]. Although the original paper does not report adverse events, the demonstration video shows difficulties in using the BCI to grasp an object while simultaneously focusing the gaze on it.
(a)
BrainGate
Jan Scheuermann became one of fourteen adults who participated in BrainGate clinical trials between 2004 and 2021 [102]. This research aimed to evaluate the feasibility of applying invasive interfaces to assist individuals affected by spinal cord injury, neurodegenerative disorders, or brainstem stroke. All patients had a “Utah” microelectrode array (NeuroPort, Blackrock Neurotech; Salt Lake City) implanted in the motor cortex of their brain, connected via gold wires to a percutaneous transmitter capable of transmitting recorded cortical activity. Researchers note that none of the implantations or explantations of the interface led to patient death or significant health deterioration, considering that 12 patients lived with this neurointerface for more than a year.
(b)
BrainGate2: Speech recognition
A team led by Francis R. Willett [35] presented a highly efficient speech neuroprosthesis for restoring communication in ALS patients. Using the advanced intracortical implant BrainGate2, researchers recorded and decoded neural activity from the ventral premotor cortex (area 6v) and area 44, associated with speech function (part of Broca’s area). The system demonstrated high accuracy: with a limited vocabulary of 50 words, the average error rate was 9.1%, and when expanding the vocabulary to 125,000 words, the accuracy remained impressive, with an error rate of 23.8%.
(c)
Neuroport arrays: BCI-based Virtual Environment/Object Control
The study [103] demonstrated the effectiveness of an intracortical BCI based on two 96-channel Neuroport arrays (Blackrock Microsystems) with 1.5 mm-long electrodes implanted in the “finger” area of the left precentral gyrus in a patient with tetraplegia (C4 AIS C). The interface decoded neural signals related to motor imagery of fingers, allowing the user to control a virtual hand with four degrees of freedom (4 DOF). Three independent groups of “fingers” were controlled separately, while the “thumb” could move in two dimensions, ensuring high manipulation accuracy. During testing, the patient successfully reached and held targets, demonstrating an average speed of 76 targets per minute and a task-completion time of 1.58 ± 0.06 s. High speed and control accuracy confirm the interface’s potential for practical motor tasks. The system was further tested in controlling a virtual quadcopter in a Unity environment, showcasing its versatility and potential for implementing complex BCI applications beyond traditional motor control.
(d)
BCI (Neuroport arrays) + Functional Electrical Stimulation (FES)
Decoding cortical activity using the same 96-channel multi-electrode Blackrock Microsystems arrays allowed a patient to control not only a virtual but also his own hand through functional electrical stimulation (FES) [25]. FES was performed using 36 transcutaneous electrodes, triggering hand movements according to decoded signals from the cerebral cortex. Using his paralyzed hand, the patient managed to drink coffee from a mug (with 11 successful attempts out of 12). Flexion and extension of the arm at the elbow joint and abduction and adduction of the wrist through FES were performed by the patient with the same success as movements of the virtual hand; other movements (hand opening/closing and wrist flexion/extension) were slower and with uneven acceleration but remained sufficiently successful.
(e)
Neuralink
An example of the logical development of microelectrode array applications is the work conducted by Neuralink [104]. Using gold conductive traces embedded in biocompatible thin-film polyimide plates, researchers created a neurointerface with more than 3000 electrodes in a single array, with up to 10 such arrays possible in the device. Notably, to minimize damage and accelerate the implantation process into brain tissue, a robotic setup was used. In 2023, Neuralink received approval from the U.S. Food and Drug Administration (FDA) and began human trials [105].
(f)
Paradromics
The Texas-based company Paradromics is developing a high-bandwidth and high-accuracy BCI (https://www.paradromics.com). Their Connexus DDI interface consists of three main components: a cortical module, an internal transceiver, and a connecting wire. The cortical module is surgically implanted in the motor cortex and contains a 421-channel microwire electrode array, where each electrode is approximately 1.55 mm long and less than 40 µm thick, penetrating just below the brain surface to record neural activity. The system supports up to four such modules, enabling simultaneous use of up to 1684 intracortical electrodes. Collected data are transmitted via a thin, flexible wire to an internal transceiver, from where they are wirelessly sent to an external wearable transceiver. Communication occurs through a secure infrared channel at speeds up to 100 Mbps. The system is powered via inductive coupling. The collected data are processed in a computational unit using artificial intelligence and machine-learning algorithms. The models allow real-time interpretation of the user’s motor imagery (MI), including intended speech and commands for interacting with a computer. Currently, the technology is in the preclinical research and clinical trial phase.
In this regard, it is worth mentioning that in their review [106], accumulated knowledge and achievements in the field of NCI with cortical implants are systematically analyzed. The author examines the results of over 20 years of clinical research on invasive NCIs, emphasizing the evolution of technologies and their effectiveness in rehabilitating patients with tetraplegia and anarthria.
With the development of invasive BCIs, capable of bringing about a qualitative leap in neurotechnology, ethical and social impact issues are becoming increasingly important. The implementation of such systems requires careful analysis and consideration of data privacy, informed consent from patients, potential personality changes, and impacts on user autonomy [107].

4. Computer–Brain Interfaces (CBIs) by Degree of Invasiveness with Examples

This section analyzes CBI systems designed to deliver information to the nervous system, organized by invasiveness: non-invasive (e.g., TMS, TES, tFUS), semi-invasive (e.g., PNS, epidural spinal stimulation), and invasive methods (e.g., cochlear implants, retinal prostheses, intracortical microstimulation). It reviews their mechanisms, signal encoding strategies, stimulation techniques, and practical implementations for restoring sensory functions and modulating neural activity.

4.1. Non-Invasive CBIs

(a)
Transcranial Magnetic Stimulation (TMS)
TMS is a technology that uses pulsed magnetic fields to induce electrical currents in the brain via a coil system placed on the head, allowing modulation of cortical neural activity. The method relies on relatively bulky and inefficient coils and lacks the spatial resolution required for routine use as the sensory component of a CBI (e.g., it cannot selectively stimulate the cortical representations of individual fingers [108]), but it can be used in patients with neurological disorders, for example, to improve cognitive and motor functions [48,109].
(b)
Transcranial Electrical Stimulation (TES)
TES is a non-invasive method of brain stimulation using weak electrical currents delivered through electrodes placed on the scalp. In particular, the effects of transcranial direct current stimulation (tDCS) on motor learning are being investigated [110].
(c)
Transcranial Focused Ultrasound Stimulation (tFUS)
tFUS is a non-invasive acoustic method of modulating neural activity that is gaining popularity for advanced CBIs. It allows selective targeting of deep brain structures through the skull without surgical intervention by controlling the acoustic focus and intensity. Modulation of the human primary somatosensory cortex (S1) has been demonstrated [111], including with explicit tactile sensations [112]. There are also reports of successful stimulation of the visual cortex (V1) in humans, accompanied by phosphene perception [52]. The possibility of transmitting motor commands for the right or left hand in an MI paradigm by stimulating corresponding representations in the recipient’s somatosensory cortex has been shown [53]. Additionally, tFUS targeted at area V5 (Middle Temporal area) significantly improved BCI Speller performance by enhancing attention to visual motion [113].

4.2. Semi-Invasive CBIs

(a)
Peripheral Nerve Stimulation (PNS)
PNS falls under semi-invasive neurostimulation methods as it can be implemented using subcutaneous electrodes. PNS has been shown to evoke sensations in patients described as natural tactile sensations in the absent limb [39] and suppress phantom limb pain in forearm amputees (transradial amputees) [114]. Biomimetic PNS has been used to reproduce natural tactile sensations in patients with leg amputations [115].
(b)
Epidural Spinal Cord Stimulation (SCS)
In this approach, epidural spinal cord stimulation using the commercially available Boston Scientific SCS system has been employed to create a perceptual channel—reproducible sensations in various dermatome areas associated with different tasks. For instance, a spinal CBI was developed to distinguish rhythm, Morse code, and tilt in model balance tasks [116]. The idea is based on the brain’s ability to associate artificial stimulation patterns with specific sensory or cognitive tasks, forming an adaptive mapping between external signals and internal information representation (sensory substitution) [117,118].

4.3. Invasive CBIs

(a)
Cochlear Implants
A striking example of practical CBI technology application is cochlear implants for restoring hearing. When properly implanted, individually calibrated, and followed by rehabilitation, such systems provide good speech understanding in 70–80% of post-lingual deaf patients and are highly effective in speech development in children [40]. Advantages of bilateral cochlear implantation as a therapy for severe hearing loss have been demonstrated [119]. Vestibular implants aim to restore balance function by encoding head movement information and providing electrical stimulation to the vestibular nerve of the inner ear [120].
(b)
Retinal implants
Another example is CBIs for vision restoration. Depending on the integrity of the neural pathways of the analyzer, these can include retinal implants (“artificial retina”) [38], electrode arrays on the visual cortex V1 [121], or neural modification of the retina. For instance, Sahel et al. [54] were the first to successfully apply optogenetics in humans, modifying the patient’s retinal ganglion cells to express a red-shifted variant of channelrhodopsin. Over six months after the procedure, the subject wore special glasses that projected visual information onto the retina in the corresponding color range, stimulating the photosensitive proteins. After a series of training sessions, the patient began distinguishing large objects such as a notepad, small items like a staple box, and even glass cups and the contrasting stripes of pedestrian crossings.
In the study by Muqit et al. [122], the effectiveness of the subretinal microchip PRIMA was evaluated in patients with atrophic age-related macular degeneration (AMD) over four years of observation. PRIMA is a wireless photovoltaic implant designed to restore central vision in patients with atrophic AMD. Patients receiving the PRIMA implant demonstrated significant improvement in visual acuity in the implantation area. The average improvement was 0.3 logarithmic units of the minimum angle of resolution, equivalent to an improvement of 15 letters on a standard ophthalmological chart. Over four years of observation, the implant remained functional and did not cause serious complications. No cases of rejection or significant inflammatory reactions were noted.
(c)
Intracortical Microstimulation (ICMS)
ICMS is a method of direct brain cortex stimulation using microelectrodes, enabling more precise and localized neuromodulation. Unlike other invasive and non-invasive stimulation methods, ICMS has a key feature: high spatial selectivity (electrode sizes on the order of 10–50 µm). This allows transmission of detailed information, such as encoding tactile sensations. ICMS is a promising approach for restoring the sense of touch in people with prostheses [123]. The possibility of transmitting complex tactile sensations via ICMS in paralyzed individuals is being studied [49]. The authors developed techniques to create sensations similar to those evoked by natural touch, including sensations of object edges, convex and concave curves, and their movement. This is achieved through spatiotemporal patterning of stimulation across multiple electrodes, allowing control of perceived movement direction and speed.
Blackrock Neurotech electrode arrays have been used to create proprioceptive sensations in the brain [124]. ICMS of the somatosensory cortex was applied in three clinical cases involving individuals with cervical spinal cord injury. This stimulation evoked characteristic tactile sensations in participants, which remained stable over several years of experimentation. The size of electrode projection fields increased with both amplitude and frequency of stimulation. Microstimulation was used to create proprioceptive sensations from a bionic hand (Ability Hand, Psyonic), and it was found that participants localized touch points more accurately during multi-electrode stimulation compared to single-electrode stimulation; biomimetic stimulation protocols further enhanced this sensory feedback. In experiments where participants had to correctly identify a stiffer object using a bionic hand with feedback, the error rate for single-electrode linear feedback was 25%, while for multi-electrode biomimetic feedback it was only 7.5%.
The methods described above illustrate the diversity of technological approaches used in both Brain–Computer Interfaces (BCIs) and Computer–Brain Interfaces (CBIs), depending on the degree of invasiveness and the targeted structures of the nervous system. Figure 2 provides a summarized classification of these techniques, covering non-invasive, semi-invasive, and invasive methods applied across central and peripheral neural pathways within Neural–Computer Interface (NCI) systems.
To complement the classification presented in Figure 2, Table 1 provides a comparative overview of various Neural–Computer Interface (NCI) methods based on spatial and temporal resolution, depth of action, invasiveness, stimulation frequency, neural activation type, cost, current use in NCIs, and overall NCI compatibility.

5. Brain-to-Brain Interfaces (BBIs)

In the experiment [125], the concept of BrainNet was demonstrated in humans. Three participants collaboratively solved a task similar to “Tetris.” Two “senders” transmitted their decisions about block rotation using EEG, while the “receiver” received the information via TMS, inducing phosphenes. The receiver, guided by light flashes, decided whether to rotate the block using EEG. The inclusion of feedback allowed senders to assess the correctness of decisions. The experiment showed that the receiver could identify the more reliable sender based solely on brain signals. In the study [126], a “brain-to-brain” interface was developed, enabling a human to control the direction of movement of an in vivo mouse by decoding intentions via EEG.
An important direction in BBI development is the restoration of lost neurophysiological functions. For example, a non-invasive BSI for lower-limb neurorehabilitation based on the MI paradigm was proposed, with motor cortex activity recorded via EEG and ts-MS stimulation in a closed-loop system [2]. A more advanced wireless “digital bridge” between the brain and spinal cord restored natural control over lower-limb movements in a patient, allowing standing and walking on complex terrains after paralysis due to incomplete cervical spinal cord injury (C5/C6). Moreover, neurorehabilitation led to neurological improvements that persisted even when the BSI was turned off [3]. Thus, the subdural semi-invasive WIMAGINE technology described earlier demonstrates its potential not only as a research tool but also as an active module for restoring motor functions after severe neurological damage.
The most well-known type of cognitive “neurobridges” is the hippocampal prosthesis. Unlike sensory and motor neuroprostheses, a cognitive neuroprosthesis does not rely on brain plasticity and adaptation but instead works with the brain’s own “signals,” acting as an artificial bridge between preserved areas of the brain. Based on the MIMO model of intrahippocampal interaction, a neuroprosthesis was implemented, resulting in a 37% improvement in short-term/working memory and a 35% improvement in short-term/long-term visual memory in patients with pharmacoresistant epilepsy. Stimulation and recording were performed in the CA3 and CA1 subfields, perpendicular to the long axis of the hippocampus, using stereotactically placed depth electrodes (Ad-Tech Medical Instrumentation Corporation, Oak Creek, WI, USA) under MRI guidance [127].
An enhanced template-based MDM was tested on other patients, also with medication-refractory epilepsy, and showed memory improvement in nearly a quarter of cases, with a ratio of nearly 2:1 in all patients when Match Stim was used and a ratio of 9:2 in patients with impaired memory who received bilateral stimulation. The authors note the viability of the fixed-pattern model but also highlight the need for further refinement [57].
The review presented in [13] analyzes brain–computer interface (BCI) and computer–brain interface (CBI) technologies as components of brain-to-brain systems (BBI). The authors examine the history, current state, and prospects of these technologies, focusing on the possibility of direct interaction between brains without using traditional neuromuscular pathways. The review covers the features and applications of BBI and discusses potential directions for future research in this field. Ethical aspects related to the development of multi-user BBIs are considered in [128].

6. Modeling Neural–Computer Interfaces

General principles of neural–computer interfaces (NCIs) modeling suggest that their development is a multi-stage process integrating mathematical modeling, engineering solutions, and biological research. At the initial stage, in vitro and in vivo neural signals are analyzed as the basis for mathematical models. These models are implemented in silico—on general-purpose processors, FPGAs, or neuromorphic chips—allowing researchers to emulate the operation of NCI systems and optimize decoding and encoding algorithms. Following successful emulation, in vivo experiments on animal models are conducted to test system functionality and safety. The development then progresses to human studies and clinical trials.
While the development of NCIs may be described as a sequence of stages—from in vitro models through in vivo experiments to clinical application—it is important, when considering signal processing methods, to take into account the specific role of each phase. In particular, in vitro experiments play a special role at the interface between the biological and artificial phases of development: they enable investigation of parameters related to biocompatibility and biomimetics (such as elasticity modulus, charge injection, and conductivity), as well as testing of the physicochemical properties of materials. These models are also used to explore basic neural mechanisms, such as spike detection and sorting, population coding and pattern recognition, cross-correlation analysis, and other fundamental processes.
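To make the first of these mechanisms concrete, the sketch below illustrates amplitude-threshold spike detection with the widely used robust noise estimate σ ≈ median(|x|)/0.6745. It is a minimal example only: the sampling rate, threshold multiplier, refractory period, and synthetic trace are illustrative assumptions rather than parameters from any particular study.

```python
import numpy as np

def detect_spikes(signal, fs, k=4.5, refractory_ms=1.0):
    """Amplitude-threshold spike detection on a band-pass-filtered extracellular
    trace, using the robust noise estimate sigma ~ median(|x|)/0.6745."""
    sigma = np.median(np.abs(signal)) / 0.6745
    threshold = k * sigma
    above = np.flatnonzero(np.abs(signal) > threshold)
    # Keep only the first crossing of each event (enforce a refractory period)
    refractory = int(refractory_ms * 1e-3 * fs)
    spikes, last = [], -refractory
    for idx in above:
        if idx - last >= refractory:
            spikes.append(idx)
            last = idx
    return np.array(spikes), threshold

# Toy trace: Gaussian noise with a few injected spike-like bumps
fs = 30_000
rng = np.random.default_rng(5)
x = rng.normal(0, 10e-6, fs)                      # 1 s of ~10-uV noise
for t in (3000, 12000, 25000):
    x[t:t + 30] += 80e-6 * np.hanning(30)         # ~80-uV transient events
spike_idx, thr = detect_spikes(x, fs)
print(len(spike_idx), "events above", f"{thr * 1e6:.1f} uV")
```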
At the same time, it should be noted that the main goal of NCI is to extract and classify neural correlates of meaningful psychophysiological properties, processes, and states—such as intentions, emotions, levels of attention, sensory experiences, memory, and others—and to modulate these phenomena through stimulation. In this context, in vitro neuronal cultures represent systems that do not exhibit behaviorally defined states and, in a philosophical sense, remain “things-in-themselves” in the Kantian tradition: their internal side (“what it is like to be a neural network”) remains unknown. We can only record and weakly influence their electrochemical activity, which significantly limits the applicability of such models for human-machine interaction tasks. Therefore, in the present review, the comparative analysis is organized according to the types of analytical approaches, which allows for a clearer understanding of the applicability of various methods regardless of the specific experimental phase.

6.1. BCIs Modeling

BCI modeling largely focuses on selecting the optimal algorithm for decoding and classifying neural activity associated with user intentions, where optimality is judged by the accuracy and speed of intention recognition. BCI research requires complex online experiments, so open repositories of high-quality neurobiological data simplify access to information. This facilitates interdisciplinary collaboration, improvement of signal processing algorithms, and comparative analysis of neurophysiological signals. In recent years, new open databases for BCI research have been developed. One such database is MindBigData 2022, which provides an extensive set of brain signal recordings related to various human actions, facilitating the application of machine-learning algorithms for decoding brain activity [129]. Another significant database is BETA (Benchmark Database Toward SSVEP-BCI Application), containing 64-channel EEG data from 70 participants performing visual BCI tasks [130]. MEG datasets are also available for BCI system development, as provided in [131]. A comparative analysis of electrophysiological databases such as DABI, DANDI, OpenNeuro, and Brain-CODE provides researchers with information on available archives, standards, and analysis tools for efficient data sharing and reuse [132].
(a)
Feature analysis and optimization methods
Feature extraction and selection are essential stages in neural signal processing for BCI applications. These methods allow the extraction of informative signal characteristics, reduce data redundancy, and improve classification robustness. For temporal analysis, statistical characteristics (mean, standard deviation, skewness, kurtosis, autocorrelation, etc.) and entropy measures (Shannon, Rényi, Sample Entropy, etc.) are used. Additionally, Hjorth parameters, mean-absolute value, zero crossings, slope sign changes, waveform length, maximum fractal length, Willison amplitude, root mean square (RMS), and autoregressive coefficients are applied. For time-frequency analysis, Fourier transforms (fast and short-time Fourier transform), synchrosqueezing transform, Hilbert–Huang transform, and wavelet transforms are utilized. These methods are effective for analyzing non-stationary signals like EEG/MEG and improve the accuracy of command decoding for BCI. Spatial methods such as Common Spatial Pattern (CSP) and its modifications (Filter Bank/Regularized/Adaptive CSP—FBCSP, RCSP, ACSP, respectively) highlight informative spatial patterns, enhancing differences between classes (e.g., MI of the left and right hand). PCA and ICA are widely used for dimensionality reduction and separation of mixed signals. Information-theoretic methods (mutual information-based best individual feature, mutual information-based rough set reduction, integral square descriptor) focus on extracting the most relevant features, improving model quality. Combining features from different sources (Multi-View, FBCSP, multi-stream feature fusion network) and covariance-based methods (contrastive multiple correspondence analysis, tensor-to-vector projection, tensor-based frequency feature combination) enhances classification accuracy in BCI tasks.
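As a minimal illustration of a few of the temporal and spectral features listed above (Hjorth parameters, RMS, and band power), the following sketch computes a feature vector for a single multichannel EEG epoch. The sampling rate, frequency bands, and synthetic data are illustrative assumptions, not values tied to any cited study.

```python
import numpy as np
from scipy.signal import welch

def hjorth_parameters(x):
    """Hjorth activity, mobility, and complexity of a 1-D signal."""
    dx, ddx = np.diff(x), np.diff(np.diff(x))
    var_x, var_dx, var_ddx = np.var(x), np.var(dx), np.var(ddx)
    mobility = np.sqrt(var_dx / var_x)
    complexity = np.sqrt(var_ddx / var_dx) / mobility
    return var_x, mobility, complexity

def band_power(x, fs, band):
    """Average spectral power of x in a frequency band (Welch estimate)."""
    freqs, psd = welch(x, fs=fs, nperseg=min(len(x), fs * 2))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])

def extract_features(epoch, fs=250):
    """Feature vector for one EEG epoch of shape (n_channels, n_samples)."""
    feats = []
    for ch in epoch:
        feats.extend(hjorth_parameters(ch))
        feats.append(np.sqrt(np.mean(ch ** 2)))      # RMS
        feats.append(band_power(ch, fs, (8, 13)))    # mu/alpha band
        feats.append(band_power(ch, fs, (13, 30)))   # beta band
    return np.array(feats)

# Example: 8-channel, 2-s epoch of synthetic data at 250 Hz
rng = np.random.default_rng(0)
epoch = rng.standard_normal((8, 500))
print(extract_features(epoch).shape)                 # 8 channels x 6 features -> (48,)
```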
(b)
Classification
Classification is one of the core objectives in machine learning for BCI systems, focusing on assigning neural signal patterns to predefined categories. This process follows feature extraction and plays a crucial role in transforming complex brain data into interpretable commands.
Traditional machine-learning methods include linear methods: LDA and its variations—Fisher and Bayesian linear discriminant analysis; probabilistic methods: Bayesian Network (BN), Naive Bayes (NB), Hidden Markov Modeling (HMM); distance- and kernel-based methods (k-nearest neighbors, SVM); and ensemble approaches (random forest, weighted random forests). Variations of LDA can be applied for EEG model classification in MI-based BCI [133,134,135,136,137,138,139,140,141], active voluntary movement [142], passive movement using an exoskeleton [143], and in auditory model BCI in the P300 paradigm [144]. Probabilistic methods are used for recognizing movement phases in healthy individuals and stroke patients [145], recognizing MI [137], left- and right-sided tasks, HMM [146], and kinesthetic motor imagery—methods NB, BN [147].
Distance- and kernel-based methods are used in model analysis of MI-based BCI data [133,135,137,148], somatosensory evoked potential analysis for tactile BCI [149]. Adaptive versions of SVM allow real-time retraining [150]. Ensemble methods are applied for classifying data for reactive tactile BCI [149]; MI-based BCI [133,134,135], including using publicly available datasets such as competition III and IVa [136], recognizing real movements compared to rest [142], and distinguishing real and imagined movements [151]. Additionally, WRF-SVM was used to control lower-limb exoskeletons [152]; assess gait stability [153]; and in the P300 paradigm for controlling peripheral devices [154].
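A hedged sketch of how such a classical pipeline might be assembled in practice is shown below, combining CSP spatial filtering with LDA using the MNE-Python and scikit-learn libraries. The epoch dimensions and random data stand in for real band-pass-filtered MI recordings; on genuine data, accuracies well above chance would be expected, whereas here chance level (~0.5) is the point of reference.

```python
import numpy as np
from mne.decoding import CSP                          # spatial filtering for MI
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for band-pass-filtered MI epochs:
# X has shape (n_epochs, n_channels, n_samples); y holds class labels (0/1).
rng = np.random.default_rng(42)
X = rng.standard_normal((80, 16, 500))
y = np.repeat([0, 1], 40)

clf = Pipeline([
    ("csp", CSP(n_components=4, log=True)),           # extract 4 CSP components
    ("lda", LinearDiscriminantAnalysis()),             # linear classification
])

# 5-fold cross-validated accuracy
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean accuracy: {scores.mean():.2f}")
```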
Deep learning (DL) methods such as convolutional neural network (CNN), recurrent neural network (RNN), long short-term memory (LSTM), gated recurrent unit (GRU), and specialized architectures (EEGNet, temporal convolutional network) automatically extract features and are suitable for analyzing multidimensional data. Hybrid methods (CNN + SVM, Boosted Harris Hawks Shuffled Shepherd Optimization Augmented Deep Learning) and meta-learning (Model-Agnostic Meta-Learning and Multi-Domain Model-Agnostic Meta-Learning) provide adaptation to variable data and improve decoding accuracy. CNNs are applied for EEG signal classification, including SSVEP [155,156,157,158,159], MI-BCI [160,161,162,163,164], and P300 in audiovisual paradigms [165]. They are also used for fNIRS data analysis, e.g., in MI-BCI [166,167] and motor task analysis [168]. The specialized architecture EEGNet [169] is adapted for analyzing evoked and induced EEG phenomena and is applied for MI-BCI data classification [170,171] and symbol identification based on the P300 paradigm [172]. Modifications of EEGNet, such as FB-EEGNet [173], 3D-EEGNet, MI-EEGNet, AME-EEGNet [174,175,176], and GRU-EEGNet [177], expand its capabilities for EEG and MEG analysis [178].
Recurrent Neural Networks (RNNs) are applied for EEG time-series analysis, e.g., in MI-BCI [179,180,181,182,183]. RNNs find applications in EEG signal classification, including predicting limb movement by analyzing slow cortical potentials and sensorimotor rhythms (SMR) in alpha and beta bands [184], gait stability [153], imagined speech recognition [185], and MI [186]. Additionally, RNNs can analyze fNIRS data in BCI, e.g., for recognizing two classes (activity and rest) during arm movement [187], walking, and resting [168]. In [188], a hybrid BCI EEG + fNIRS model based on a classifier combining CNN and Bi-LSTM (bidirectional LSTM) was proposed, achieving 98.3% and 99% accuracy in distinguishing four movement classes. Advantages of DL include the ability to identify complex nonlinear dependencies, while disadvantages include high computational resource requirements and susceptibility to overfitting.
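For orientation only, the following sketch shows a strongly reduced convolutional architecture in the spirit of EEGNet (temporal convolution, per-channel spatial convolution, pooling, linear readout) implemented in PyTorch. It is not the published EEGNet model, and all layer sizes, channel counts, and epoch dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinyEEGCNN(nn.Module):
    """Reduced CNN for (n_channels x n_samples) EEG epochs, loosely inspired by
    EEGNet: temporal conv, grouped spatial conv, pooling, linear classifier."""
    def __init__(self, n_channels=22, n_samples=500, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(1, 64), padding=(0, 32), bias=False),
            nn.BatchNorm2d(8),
            nn.Conv2d(8, 16, kernel_size=(n_channels, 1), groups=8, bias=False),
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 8)),
            nn.Dropout(0.5),
        )
        with torch.no_grad():                          # infer flattened feature size
            n_flat = self.features(torch.zeros(1, 1, n_channels, n_samples)).numel()
        self.classifier = nn.Linear(n_flat, n_classes)

    def forward(self, x):                              # x: (batch, 1, channels, samples)
        return self.classifier(self.features(x).flatten(1))

model = TinyEEGCNN()
logits = model(torch.randn(4, 1, 22, 500))             # batch of 4 synthetic epochs
print(logits.shape)                                     # torch.Size([4, 4])
```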
Reservoir computing (RC) is originally a framework based on recurrent neural networks (RNNs), making it well-suited for temporal and sequential information processing. In RC, input data are transformed into spatiotemporal patterns within a high-dimensional space using an RNN-based reservoir. The output layer then analyzes these patterns based on their spatiotemporal structure. A key feature of RC is that the input weights and recurrent connections within the reservoir remain fixed and are not trained, while only the output weights are learned, typically via a simple algorithm such as linear regression. This simple and fast training process significantly reduces the computational cost compared to standard RNNs, which is one of the main advantages of RC [189].
One of the most promising implementations of RC is the Echo State Network (ESN). In a recent study [190], a BCI software architecture was proposed for prosthetic control using electrocorticographic (ECoG) signals. The system includes a three-dimensional spiking neural network (3D-SNN) for feature extraction and an ESN for online decoding of motor commands. The implementation was carried out in Python version 3.8.9 using the SNN NEST Simulator library and tested on experimental datasets collected from a person with tetraplegia. The initial results were encouraging. The study also discusses the potential future implementation of this system on a neuromorphic hardware platform with reduced size and power consumption.
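A minimal NumPy implementation of the echo state idea, with a fixed random reservoir and a ridge-regression readout, might look as follows. The reservoir size, spectral radius, leak rate, and toy signal are illustrative choices rather than values from the cited study.

```python
import numpy as np

class EchoStateNetwork:
    """Minimal echo state network: fixed random reservoir, ridge-regression readout."""
    def __init__(self, n_in, n_res=300, spectral_radius=0.9, leak=0.3, ridge=1e-4, seed=0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        w = rng.uniform(-0.5, 0.5, (n_res, n_res))
        w *= spectral_radius / np.max(np.abs(np.linalg.eigvals(w)))   # scale dynamics
        self.w_res, self.leak, self.ridge = w, leak, ridge

    def _states(self, u):
        x, states = np.zeros(self.w_res.shape[0]), []
        for u_t in u:                                   # leaky-integrator update
            x = (1 - self.leak) * x + self.leak * np.tanh(self.w_in @ u_t + self.w_res @ x)
            states.append(x.copy())
        return np.array(states)

    def fit(self, u, y):
        s = self._states(u)
        # Only the output weights are trained, via ridge regression
        self.w_out = np.linalg.solve(s.T @ s + self.ridge * np.eye(s.shape[1]), s.T @ y)

    def predict(self, u):
        return self._states(u) @ self.w_out

# Toy usage: learn to map a noisy sine input to its clean target
t = np.linspace(0, 20, 2000)
u = (np.sin(t) + 0.1 * np.random.default_rng(1).standard_normal(t.size))[:, None]
y = np.sin(t)[:, None]
esn = EchoStateNetwork(n_in=1)
esn.fit(u[:1500], y[:1500])
print(np.mean((esn.predict(u[1500:]) - y[1500:]) ** 2))   # test MSE
```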
A good example of BCI system modeling is the work of Braun and colleagues, where various classifiers were developed and tested, including on an animal model. As a result, a framework with an adaptive control architecture was implemented in an energy-efficient, compact FPGA decoder of motor and premotor cortex activity in freely moving rhesus macaques, recorded using six 32-channel floating microelectrode arrays [191].

6.2. Modeling CBIs

Realistic modeling of neural responses allows for the prediction and optimization of stimulation parameters to achieve desired activity patterns. For instance, the authors of [192] used NEURON 7.8.1 [193] to implement computational models of human and rat cortical neurons, adapting them from the Blue Brain library (https://portal.bluebrain.epfl.ch/resources/models/, accessed on 6 August 2025) and connecting model neurons with simulated electric fields. The developed and optimized neural models were successfully used to study the effects of electrical stimulation using pulse-width modulated TIS technology [194]. Epidural electrical spinal cord stimulation can be based on spatiotemporal modulation methods of muscle synergy during walking, as demonstrated in rats [195] and non-human primates [196].
For real-time CBI applications, simplified but computationally efficient models are preferred over detailed biophysical models. Linear and nonlinear neuron response models, such as Volterra models or adaptive filters, are used to predict responses to stimulation (e.g., TMS or tDCS) [197,198]. Simplified versions of the Hodgkin-Huxley model, excluding redundant parameters, are applied to assess field penetration depth during stimulation [199]. The Izhikevich model, approximating spike dynamics with low computational costs, is popular for neurointerface modeling [200]. The integrate-and-fire system is widely used in neuromorphic computing and DBS applications [201]. Spiking neural networks implemented on neuromorphic chips (e.g., Intel’s Loihi) allow real-time prediction of neural activity, which is useful for BSI development [202]. Deep neural network models (DNN, RNN, LSTM, CNN) predict optimal stimulation parameters but require pre-training and are often combined with lighter models.
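As an example of such a lightweight model, the sketch below integrates the Izhikevich equations with a simple Euler scheme using the standard regular-spiking parameter set; the time step and input drive are illustrative, and the current is in the model’s dimensionless units.

```python
import numpy as np

def izhikevich(i_ext, dt=0.25, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Euler integration of the Izhikevich model; default parameters correspond
    to a regular-spiking cortical neuron. i_ext holds one input value per step."""
    v, u = -65.0, b * -65.0
    v_trace, spike_steps = [], []
    for step, i in enumerate(i_ext):
        v += dt * (0.04 * v ** 2 + 5 * v + 140 - u + i)
        u += dt * a * (b * v - u)
        if v >= 30.0:                     # spike detected: record and reset
            spike_steps.append(step)
            v, u = c, u + d
        v_trace.append(v)
    return np.array(v_trace), spike_steps

# 1 s of simulated time (dt = 0.25 ms); step input switched on at 200 ms
i_ext = np.zeros(4000)
i_ext[800:] = 10.0                         # constant drive (model units)
v, spikes = izhikevich(i_ext)
print(f"{len(spikes)} spikes in 1 s of simulated activity")
```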
In [203], a computational model of epiretinal stimulation was presented, accounting for multiscale aspects (electrode level, retinal tissue, and individual ganglion cells). The authors demonstrated how fine-tuning pulse parameters and electrode placement allows selective activation of targeted retinal neural layers. Grossman and colleagues were the first to investigate TIS in mice [204]. The possibility of using transcranial theta-rhythm temporal interference stimulation (tTIS) to modulate motor cortex excitability in rats was studied [205]. A finite element method (FEM) computational study of TIS using anisotropic human head models allowed evaluation of stimulation parameters, such as electrode configurations and currents [206]. In an ex vivo study on mice [207], cell-specific effects of TIS on cortical functions were examined.

6.3. Modeling BBIs

BBIs are modeled to create systems that enable information transfer between neural structures within one or multiple organisms. Optimization of CBI parameters to improve the efficiency of non-invasive BBIs was explored in [12], where a BBI system was simulated and its performance evaluated in terms of ITR. The influence of several parameters on system efficiency was studied: classifier accuracy, window update rate, system delay, stimulation failure rate (SFR), and timeout threshold. Simulation results showed that the optimal system delay is ≤100 ms and that the timeout threshold should not exceed double this delay. Under these parameters, the system maintains maximum efficiency even with an SFR of up to 25%.
Advanced digital signal processors are used to generate multichannel trigger signals based on real-time neural spike analysis. These algorithms are essential for converting cortical brain signals into precise stimuli for the spinal cord, enabling effective motor control and providing sensory feedback. In [208], a DSP was used to implement a closed-loop microstimulation PSoC controlled by the cortex in anesthetized rats. Hybrid models combining FEM of the target spinal cord segment for calculating electric fields and anatomically and biophysically realistic neural structures, such as motoneuron axons, interneurons, and myelinated afferent fibers, are used for spinal cord stimulation [209]. Modeling helps determine the optimal electrode placement for activating interneurons and motoneurons located in “hotspots”—activity centers linked to flexor and extensor muscles that define locomotion patterns.
Modeling of an artificial hippocampus [210] is a striking example of the practical utility of a theoretical approach aimed at implementing BBIs within a single agent’s brain to restore memory function. To address this challenge, a method for quantitative analysis of hippocampal neural activity was developed and applied based on principles of nonlinear systems theory. In 2016, Dong Song and colleagues published a study [55] on developing a hippocampal prosthesis that models interactions between CA3 and CA1 regions using a nonlinear MIMO model. This approach relies on real-time dynamic prediction of CA1 activity based on CA3 signals. This model was successfully tested [127]. Another step toward creating cognitive prostheses was made with the development of an improved MDM model, which considers the informational content of stimuli rather than just temporal connections between CA3 and CA1. The MDM model performs categorical binding, creating a library of static stimulation patterns. This approach was tested on patients with memory impairments and demonstrated its effectiveness [57].
To illustrate the range of modeling strategies used in the development of neural–computer interfaces (NCIs), Table 2 presents a comparative overview of representative approaches. The table highlights various modeling types across stages of system development, including typical data sources, implementation platforms, algorithmic methods, and key features such as computational cost and real-time performance.

7. Evaluation of Neural–Computer Interface Effectiveness

One of the most common approaches to assessing the performance of BCIs is the use of signal detection theory [211,212], which accounts for the probabilistic nature of recorded neural signals. Within this framework, the classifier is considered a component of the decision-making process under uncertainty.
(a)
Accuracy measures the proportion of correct predictions, that is, correctly detected commands and correctly identified absences of commands, among all classified cases.
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$
  • where:
TP (True Positives)—correctly classified control commands; TN (True Negatives)—correctly rejected background signals; FP (False Positives)—erroneous activations of the interface; FN (False Negatives)—missed commands.
(b)
Precision indicates the proportion of correctly recognized target commands among all cases where the system detected a command. This metric is useful when commands are rare, but in the case of balanced classes, accuracy may be a better measure.
$$\mathrm{Precision} = \frac{TP}{TP + FP}$$
  • where:
TP (True Positives)—correctly recognized control commands; FP (False Positives)—erroneous detections, where the system interprets spontaneous brain activity or artifacts as control signals.
(c)
Sensitivity/Recall determines the proportion of correctly recognized neural events corresponding to intended commands, accounting for missed command events. This metric is crucial in tasks where missing user intentions is unacceptable, such as prosthetic control.
$$\mathrm{Sensitivity} = \frac{TP}{TP + FN}$$
  • where:
TP (True Positives)—correctly classified commands; FN (False Negatives)—missed commands.
(d)
Specificity/Selectivity characterizes the proportion of correctly rejected neural events that are not related to commands, considering the proportion of events erroneously identified as commands. This metric is important in tasks where false activations must be minimized.
$$\mathrm{Specificity} = \frac{TN}{TN + FP}$$
  • where:
TN (True Negatives)—correctly identified absences of intent; FP (False Positives)—erroneous activations of the NCI.
While the accuracy metric seems intuitively clear, its use is limited in the presence of class imbalance. For example, if 95% of examples do not contain control commands, a model that always predicts the absence of commands (TN) would achieve an accuracy of 95% but would be useless. Increasing sensitivity leads to an increase in false positives (FP), where the system mistakenly classifies data patterns as commands, reducing specificity. Conversely, increasing specificity results in a stricter classifier that selects only the most characteristic control patterns but may occasionally miss slightly deviating patterns (FN errors), reducing sensitivity. A good NCI must strike a balance: being sufficiently sensitive, generalizing to all possible variations of command patterns, and effectively rejecting noisy, irrelevant data. Only in such a configuration can user trust and a sense of agency be established [14].
The F1-score is calculated as the harmonic mean between Precision and Sensitivity/Recall, strictly penalizes extremes, and is useful when both types of errors (FP and FN) are unacceptable:
$$F_1 = 2 \times \frac{\mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$
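The four confusion-matrix metrics and the F1-score can be computed directly from decoder outputs, as in the minimal sketch below; the imbalanced toy labels are synthetic and serve only to illustrate the class-imbalance caveat discussed above.

```python
import numpy as np

def bci_metrics(y_true, y_pred):
    """Confusion-matrix metrics for a binary 'command vs. no-command' decoder."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))   # correctly detected commands
    tn = np.sum((y_true == 0) & (y_pred == 0))   # correctly rejected background
    fp = np.sum((y_true == 0) & (y_pred == 1))   # false activations
    fn = np.sum((y_true == 1) & (y_pred == 0))   # missed commands
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                      # sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return dict(accuracy=accuracy, precision=precision, recall=recall,
                specificity=specificity, f1=f1)

# Imbalanced toy example: ~90% of windows contain no command
rng = np.random.default_rng(3)
y_true = (rng.random(1000) < 0.1).astype(int)
y_pred = np.where(rng.random(1000) < 0.9, y_true, 1 - y_true)   # ~90% correct
print(bci_metrics(y_true, y_pred))
```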
As practice shows, chronic use of an NCI can improve its performance if the system includes a mechanism for retraining and fine-tuning the classifier based on biological feedback (BFB) and supervised learning principles. The BFB mechanism is based on a feedback loop between the observed interface response and neural signals. If the user is motivated for the NCI to work well, the accuracy of command recognition may increase during use. Additional corrections can be made by explicitly pointing out errors in its operation. For example, if a command is missed or a false activation occurs, the user can indicate the type of error with an additional action, and the system retrains. Brain signals generated when users detect errors in BCI operation can serve as information for automatic correction without explicit behavioral responses [213,214].
  • AUC (Area Under Curve)
This metric is used to compare different decoding/classification algorithms. It represents the area under the ROC curve (Receiver Operating Characteristic), which plots the relationship between TPR (True Positive Rate) and FPR (False Positive Rate) at various thresholds. For example, an AUC value close to 1 indicates high discriminative ability of the BCI (Figure 3).
For evaluating the performance of BCIs in more complex tasks, such as control with multiple degrees of freedom or decoding continuous movements, a generalization of AUC [215] is used, along with additional metrics such as speed, accuracy, correlation coefficient, and temporal delay. These metrics allow for assessing not only the system’s ability to distinguish between classes but also its performance under real-world conditions. Additionally, behavioral tests and clinical scales are used to evaluate the effectiveness of motor NCIs.
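A brief sketch of how the ROC curve and AUC described above are obtained from continuous decoder scores is given below using scikit-learn; the Gaussian scores are synthetic, and Youden's J statistic is shown only as one possible criterion for choosing an operating threshold.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Synthetic decoder outputs: command windows score higher on average
rng = np.random.default_rng(7)
y_true = np.concatenate([np.zeros(500), np.ones(500)])
scores = np.concatenate([rng.normal(0.0, 1.0, 500),    # background windows
                         rng.normal(1.5, 1.0, 500)])   # command windows

fpr, tpr, thresholds = roc_curve(y_true, scores)
print(f"AUC = {roc_auc_score(y_true, scores):.2f}")     # ~0.85 for this separation

# Choosing a threshold trades sensitivity (TPR) against specificity (1 - FPR)
best = np.argmax(tpr - fpr)                             # Youden's J
print(f"threshold {thresholds[best]:.2f}: TPR {tpr[best]:.2f}, FPR {fpr[best]:.2f}")
```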
(e)
The speed of a neural–computer interface encompasses two aspects: latency and operational throughput. Latency refers to the time between signal registration and command issuance. Operational throughput measures the amount of information transmitted per unit of time and is determined by every stage in the chain, from the analog-to-digital converter of neural signals, through the transmission segment, the processing module, and the software-hardware translator into actuator commands, to the actuator itself. In its general form (Equation (6)), the ITR [216,217] is expressed in bits per selection and converted to bits per second or minute by multiplying by the selection rate; it does not account for semantic load, which is entirely defined by the developer.
$$\mathrm{ITR} = \log_2(N) + P\,\log_2(P) + (1-P)\,\log_2\!\left(\frac{1-P}{N-1}\right) \qquad (6)$$
  • where:
N—the number of possible commands (classes), P—classification accuracy (probability of correct command selection) as a fraction of one.
The final operational speed of an NCI can be expressed as the number of meaningful entities per unit of time, such as the number of characters typed on a screen per minute (CCPM—Characters Correct Per Minute) or the frequency of movements of the controlled device. In the review [218], BCI transmission speed was estimated at 10–25 bits/min. Twenty-one years later, in a study conducted at Stanford University [35], an intracortical BCI achieved a decoding speed of brain signals equivalent to 62 words per minute. With an average entropy of the English language being approximately 11.5 bits per word, this corresponds to about 713 bits/min, comparable to natural speech.
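The sketch below implements Equation (6) and converts the result to bits per minute. The class count, accuracy, and selection rate in the example are illustrative and are not taken from the cited studies.

```python
import math

def itr_bits_per_selection(n_classes, accuracy):
    """Information transfer rate per selection, as in Equation (6)."""
    n, p = n_classes, accuracy
    if p <= 1.0 / n:
        return 0.0                      # at or below chance: no information
    if p == 1.0:
        return math.log2(n)
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

def itr_bits_per_minute(n_classes, accuracy, selections_per_minute):
    return itr_bits_per_selection(n_classes, accuracy) * selections_per_minute

# Example: a 4-class MI decoder at 90% accuracy issuing 12 commands per minute
print(itr_bits_per_minute(4, 0.90, 12))   # ~16.5 bits/min
```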
(f)
Criteria for Evaluating the Effectiveness of CBIs and BBIs
The effectiveness of CBI and BBI systems is evaluated comprehensively, taking into account clinical outcomes (functional recovery, safety), technical characteristics (temporal resolution, SFR), and specific parameters for BBIs (synchronization, information capacity). The evaluation criteria for the BCI component of a BBI are similar to those for unidirectional BCIs. These criteria can help determine how well a device meets requirements and can be successfully used in NCI systems. Behavioral tests and clinical metrics may also be employed.
(g)
Clinical Criteria for Evaluating Motor NCIs
Various metrics are used to assess neurological and functional status during the testing of motor neural–computer interfaces. The International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI), developed by the American Spinal Injury Association (ASIA), provides a detailed guide for evaluating the neurological status of patients with spinal cord injuries [219]. The ASIA scale is designed for standardized assessment of patients with spinal cord injuries and includes evaluations of motor function, sensitivity, and the extent of damage. Motor scores are assigned to 10 key muscle groups (five in the upper and five in the lower limb) on each side of the body, covering the most functionally important flexors and extensors. Each muscle is graded from 0 (complete paralysis) to 5 (normal function). The maximum total score is 100 (25 points per limb).
The Gross Motor Function Classification System (GMFCS) is a five-level scale for assessing motor function in individuals with cerebral palsy. Levels are divided based on the degree of motor activity limitation. The assessment is based on everyday movements (e.g., walking, climbing stairs, wheelchair use). The scale considers the need for assistive devices (walkers, orthoses, wheelchairs). GMFCS emphasizes the patient’s functional abilities in mobility and self-care. It is better suited for determining long-term rehabilitation goals and selecting strategies to improve quality of life [220].
Action Research Arm Test (ARAT) is a tool developed to evaluate the functional capabilities of the upper limbs in patients with neurological disorders, such as stroke. The ARAT assesses gross and fine motor functions and includes four subscales: grasp, grip, pinch, and gross movement. Tasks within each subscale are arranged by difficulty, from the most complex to simpler tasks. Since its creation, ARAT has undergone several changes and adaptations. The method was standardized [221], and an abbreviated version for virtual reality (ARAT-VR) was introduced. The authors demonstrated that ARAT-VR is a valid, convenient, and reliable tool for assessing upper limb activity in post-stroke patients [222].
Analysis of gait, balance, and coordination of lower-limb movements can be performed using the Gait Assessment and Intervention Tool (GAIT)—a 31-item measure of coordinated motor components of gait and associated deficits, with an ideal score of zero and a maximum gait deficit score of 64. It includes clinical assessments: the six-minute walk test, weight-bearing ability, timed walking, the Berg Balance Scale, and gait quality assessed using observational gait analysis [223].
The IADL (Instrumental Activities of Daily Living) criterion assesses basic skills for daily living. To objectively measure performance in IADL tasks, timed instrumental activities of daily living (timed IADL or TIADL) are used, which determine the time required for a person to complete a task. Since the development of the Instrumental Activities of Daily Living (IADL) scale [224], it has undergone several modifications and adaptations to improve accuracy and applicability in various clinical contexts. A short version of the Amsterdam IADL Questionnaire (A-IADL-Q), consisting of 30 items, was developed [225]. This version retains the high psychometric properties of the original and is designed for quicker identification of functional changes ranging from normal aging to dementia. A study confirmed that the A-IADL-Q does not exhibit significant bias based on age, gender, education level, or cultural differences [226], making it a reliable tool for assessing functional impairments in international studies.
The Walking Index for Spinal Cord Injury (WISCI) [227] and the Fugl-Meyer Assessment for Upper Extremities (FMA-UE) [228] are important tools for evaluating the effectiveness of NCIs in the context of specific pathologies (spinal cord injury and stroke, respectively). They complement previously mentioned metrics such as ASIA, GMFCS, ARAT, and GAIT, providing a more comprehensive assessment of motor function recovery. This allows NCI developers to select the most appropriate evaluation methods depending on the target patient group and the nature of the injuries. Other experimental methodologies also exist to supplement existing assessments of locomotor function [229].

8. Perspectives for the Development of Neural–Computer Interfaces

(a)
Minimally invasive and targeted stimulation technologies
Promising methods for recording and stimulating neural activity include various technologies. For example, subdermal electrodes offer a minimally invasive alternative to scalp EEG [230]. Despite their demonstrated effectiveness for clinical EEG monitoring [231], there are relatively few studies on the use of subdermal electrodes, making it worthwhile to explore the potential of this technology as one of the simplest to implement.
Temporal Interference Electrical Stimulation (TIS) is an enhancement of TES, widely used in neuroscience and clinical practice to activate specific brain regions. The main issue with traditional TES is significant off-target stimulation, especially when attempting to target deep brain structures. TIS addresses this problem by leveraging a unique property of neurons: pure high-frequency sinusoidal signals do not trigger neural excitation. However, if such a signal interferes with another similar signal that is slightly shifted in frequency and spatially offset, low-frequency beats emerge at the intersection of the fields at the difference frequency [232]. This allows precise targeting of specific brain areas. In a pilot study [233], the potential of theta-rhythmic tTIS for modulating the motor cortex in humans was demonstrated. The results showed that tTIS could enhance the strength of functional connectivity between M1 and the secondary motor cortex, confirming its potential for non-invasive stimulation of cortical brain regions. Modulation of hippocampal activity has been achieved [234], and stimulation in the theta range of the striatum has been shown to correlate with improvements in motor learning tasks [235]. It has also been demonstrated that the TIS method is a safe and effective way to stimulate the human brain [51].
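The interference principle described above can be illustrated numerically: two high-frequency carriers offset by a small amount produce a superposition whose envelope oscillates at the difference frequency. The carrier frequencies and amplitudes below are arbitrary illustrative values, not stimulation parameters from the cited work.

```python
import numpy as np

fs = 100_000                                  # sampling rate, Hz
t = np.arange(0, 0.5, 1 / fs)                 # 0.5 s of simulated time

f1, f2 = 2000.0, 2010.0                       # two carriers, 10 Hz apart
e1 = np.sin(2 * np.pi * f1 * t)               # field of electrode pair 1
e2 = np.sin(2 * np.pi * f2 * t)               # field of electrode pair 2

superposition = e1 + e2                       # what tissue in the overlap region "sees"
envelope = np.abs(2 * np.cos(np.pi * (f2 - f1) * t))   # beats at the 10 Hz difference frequency

# Each carrier alone is too fast to excite neurons, but the low-frequency
# envelope of their superposition can modulate activity at the focus.
print(f"carriers {f1:.0f} and {f2:.0f} Hz -> envelope beating at {f2 - f1:.0f} Hz")
print(bool(np.all(np.abs(superposition) <= envelope + 1e-9)))
```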
(b)
Acoustic and optical imaging for decoding and stimulation
Functional Ultrasound Imaging (fUS) is an acoustic method based on Doppler measurement of blood flow changes using composite plane wave fields. High measurement frequencies (up to 1 kHz) enable the registration of hemodynamics associated with neural activity with high temporal precision. According to evaluations of the method [236], fUS achieves excellent spatial (~100 µm) and temporal (~100 ms) resolution in small animals in vivo and is compatible with behavioral tests, allowing the use of accessible and portable equipment. It has been found that fUS can decode motor intentions in non-human primates, predicting the timing, direction, and type of individual movements [237]. It has been shown that fUS technology, combined with ultrasound neuromodulation, could serve as the basis for developing a non-invasive NCI [238]. There is information about collaborative research by Forest Neurotech and Butterfly Network on the development of an epidural ultrasound-on-a-chip NCI capable of recording and stimulation [239].
Transcranial Focused Ultrasound Stimulation (tFUS) is a non-invasive method that uses acoustic waves for selective modulation of neural activity, making it promising for NCIs. For example, tFUS can induce tactile sensations by stimulating the somatosensory cortex, which is useful for restoring sensory feedback or future BBIs [53]. Additionally, tFUS enhances cognitive functions, such as attention, as already demonstrated in BCI control tasks [113]. This opens possibilities for creating more effective NCIs that combine non-invasiveness, precise focusing, and modulation of both sensory and cognitive processes.
The optical method of digital holographic imaging (DHI) allows visualization of microscopic changes in nervous tissue structure, including deformations and vibrations on a millisecond scale, by analyzing interference patterns formed by mixing reference and object laser beams. In a study [240], a high-resolution transcranial optical system for in vivo neural activity recording in rats was presented, using lasers with wavelengths of 780 nm and 1310 nm. This method enabled non-invasive, label-free recording of tissue shape changes caused by neural population activity.
(c)
Autonomous, wireless, and energy-efficient NCI systems
The shift toward compact, autonomous, and energy-efficient NCI systems is closely tied to advancements in engineering approaches for transmitting and processing neurophysiological signals. For instance, a device based on a flexible neural probe with a dissolvable biocompatible shuttle made of sucrose has been developed, facilitating implantation into deep brain regions while minimizing tissue trauma in non-human primates [241]. The probe is equipped with 32 microelectrodes (Pt/IrOx) for recording local field potentials (LFP) and an integrated accelerometer for tracking movements. Its porous structure, with cell sizes of ~220 nm, minimized electrochemical impedance (~37 kΩ). Real-time wireless data transmission and a subcutaneous induction coil power system (litz wire with a central frequency of 13.56 MHz) ensured the device’s autonomy without the need for wires or batteries. Data transmission speed exceeded 60 kbps at distances up to 7 m from a smartphone. The device was successfully used for long-term neurobiological data recording, lasting over a month in freely moving monkeys, enabling research without interfering with natural behavior.
(d)
Accelerated and neuromorphic computation
A critical factor for the successful implementation of NCIs is accelerating computations, which are becoming too complex for general-purpose processors to handle in near real-time. A promising near-term solution is the use of FPGAs: their reconfigurability, massive parallelism, and low, deterministic latency make them well suited to embedded neural-signal decoding. However, for tasks requiring neuromorphic processing, such as simulating synaptic plasticity and achieving high efficiency in parallel computations, the long-term perspective points toward neuromorphic chips like TrueNorth (IBM), Loihi (Intel), Tianjic (Tsinghua University), SpiNNaker (University of Manchester), BrainScaleS (EU/Human Brain Project), DYNAP (Dynamic Neuromorphic Asynchronous Processor), and Akida (BrainChip) [242].
(e)
Multimodal electrochemical interfaces and personalized implants
Emerging approaches to NCI development outline paths for integrating electrical and chemical aspects of neural communication. Multimodal electrochemical neurointerfaces can account for complex interactions within neural networks, providing both precise electrical stimulation and localized delivery of ligands. Studies in animals show [243] that combining electrical stimulation with chemical monoaminergic modulation of the spinal cord creates a synergistic effect surpassing the results of each method individually. Subsequently, a robotic NCI was developed capable of independently assessing and rehabilitating locomotor patterns and balance in rats with CNS injuries [229]. By combining mechanical support, electrical stimulation, and chemical modulation using monoamine agonists, significant locomotor recovery was achieved, even after complete spinal cord transection. This confirms the potential of the technology for clinical application in humans.
When implanting materials into the body, it is important to consider not only their functional properties but also their mechanical compatibility with surrounding tissues. Excessively soft materials may lose structural integrity over time, while overly rigid ones can cause scar tissue formation or damage to surrounding structures. In [244], a flexible electrochemical implant based on silicone was developed, featuring microfluidic channels for delivering chemical agents, soft platinum/silicone electrodes, and elastic gold connections for transmitting electrical signals. In experiments with subdural implantation into the rat spinal cord, this technology demonstrated successful bio-integration and stable functionality in the CNS [5], as confirmed in [245]. In experiments with decerebrated cats [246], the integration of a soft silicone-based implant containing ferrocene, combined with multi-walled carbon nanotubes, was shown; such implants with increased charge injection significantly reduce neural tissue damage. The ability to rapidly design and create personalized soft implants that match the biophysical properties of the integration site is a significant advantage in translational medicine [247]. Modern approaches combine soft materials with tissue engineering methods, creating hybrid “tissue-device” interfaces with integrated living cells [248].
(f)
Toward bidirectional chemical-electrical NCIs
Neurons use both electrical (action potentials) and chemical (synaptic transmission) processes for signal perception and transmission. The chemical aspect plays a key role in generating action potentials and maintaining communication within neural networks. Notably, a neuron’s electrical activity does not serve as its energy source but instead leads to energy expenditure. Neurons “communicate” with neighboring cells using chemical messengers. However, existing NCIs rely on electrophysiological signals for interpreting and transmitting information between neurons. During artificial electrical stimulation, these chemical signals may not be generated or may be insufficient, as artificial depolarization does not necessarily trigger intracellular processes associated with natural neuronal activity.
It can be predicted that a fully functional NCI will require registering spatiotemporal maps not only of electrical fields, using electrode arrays, but also of ligands (neurotransmitters, their precursors, metabolites, and trophic factors), using arrays of chemosensors, and acting on them locally through high-density electrical stimulation and chemical modulation with chemoeffectors, such as those developed by one of the authors of this review [244], combined with microfluidic or thermoregulated gel delivery [249]. Chemosensors can be created using carbon nanotubes coated with molecularly imprinted polymers (MIPs) or through electrochemical functionalization of electrodes. For example, a dopamine sensor based on MIP-coated carbon nanotubes has been developed [250], as well as a norepinephrine sensor using palladium nanoparticles with an MIP coating [251], an enzymatic biosensor for detecting glutamate [252], and an electrochemically functionalized sensor for serotonin detection [253]. Simultaneous registration of LFP and neural spikes with an intracranial silicon-based probe, together with amperometric measurement of dopamine level changes, has been performed in the brains of anesthetized rats [254].
(g)
Cognitive-adaptive NCI systems and intention decoding
As understanding of fundamental psycho-neuro-biological mechanisms advances, next-generation NCIs will emerge, based not on the direct interpretation of conscious intentions from brain activity but on reading an “intention vector,” tracking a “discrepancy vector” between desired outcomes and actual execution, supported by real-time correction mechanisms for motor control. Such an NCI could represent a hierarchical self-learning system that forms basic motor templates adjustable by the user via feedback channels. As the system learns, the level of conscious control will gradually shift from detailed movement adjustments to monitoring increasingly general behavioral parameters, while preserving the ability to correct errors at all levels of the hierarchy. In this approach, in a properly tuned system, the user’s consciousness would focus on goals within the context of activity, while the NCI would take responsibility for executing detailed movements.
(h)
Energy autonomy and high-resolution interfaces
One of the key challenges is ensuring autonomous power supply through the body’s energy, significantly enhancing the mobility and convenience of wearable devices. Some progress has already been made in this direction [255,256]. Additionally, the drive to increase signal informativeness requires the development of recording and stimulation modules with cellular resolution and minimal invasiveness, enabling high accuracy and reliability. Innovative approaches to building NCIs envision full integration of natural biological and artificial phases within a unified energy-information network. This creates the foundation for expanding brain functionality, including multitasking and potentially the emergence of new types of subjective experience (qualia).

9. Conclusions

(a)
NCIs as bridges between psychophysiological worlds
Neural–computer interfaces (NCIs) are the general term for technologies that bridge realms where flows of electrical impulses and cascades of molecular events intertwine, carry subjective states and meanings, and manifest as information. They link intention with physical action, external environmental events with personal sensory experience, and networked “neural voices” with one another, forming crossings between worlds of consciousness connected through what we interpret as brain matter. Presently, retinal, cochlear, and vestibular implants, sensorized prosthetics, cognitive prostheses, and internal speech decoding systems demonstrate the effectiveness of this approach. NCIs are more than just tools: they are a portal to a new reality in which the boundaries between biology and technology are blurring. Although we are only gliding along the surface of these possibilities today, every breakthrough in decoding the neural code, every new stimulation protocol, and every new way of biointegrating materials brings us closer to a world where the impossible becomes commonplace.
(b)
Current limitations and future directions in decoding and stimulation
The problem of modern NCIs is that, in trying to “connect” to the brain, we are still talking to it in a foreign language. Yes, we have learned to decode some motor commands and even simple phrases, partially restore locomotion, and induce artificial sensations, but this is only the surface layer of the neural code. First, a simple translation of neural activity into commands does not correspond to the systemic hierarchy of the brain. Second, electrical stimulation that ignores the spatiotemporal structure of natural neurotransmission fails to reproduce sensations in full. Imagine a BCI that does not simply follow commands but learns to anticipate intentions, much as a musical instrument becomes an “extension” of the musician. Or imagine a sensorized neuroprosthesis that does not require detailed conscious control, because the brain and the artificial algorithm jointly optimize movements in real time using sensory information, just as natural limbs do during manipulative interaction with objects.
(c)
Inter-agent BBI and collective intelligence
The real breakthrough will come when we move on to building full-fledged inter-agent BBIs. For now, these are just laboratory experiments with simple signal transmission, but in the future they may lead to the emergence of collective neural-network intelligence. What if multiple people could pool their cognitive resources to solve complex problems? Or if a doctor could “feel” a patient’s symptoms through a direct neural interface? It sounds like science fiction, but the first steps in this direction have already been taken; what remains is to develop protocols that provide not only data transfer but also account for the individual semantics and informational meaning that each interacting agent attaches to its point of view.
(d)
Neurofantasia: The Mind Expanded—Towards Hypersenses and Neuroethics
However, the boldest possibilities of NCIs lie beyond medicine. What if we learned to download skills as in The Matrix? Or create interfaces that allow us to acquire new qualia? We are not talking about technical synesthesia like “seeing” radio waves, but about the birth of genuinely new forms of perception for which we do not even have analogs in our language (hypersenses). So far this seems utopian, but the neuroplasticity of the brain demonstrates that it can adapt to the most unexpected forms of information input. Perhaps in 20 years we will not be discussing “controlling prosthetics with the power of thought” but “neuro-adaptations”, the voluntary expansion of sensory and cognitive functions. The fundamental barrier here is not even technology but ethics. How do we protect the privacy of thoughts? Where is the boundary between therapy and enhancement? How do we prevent neurohacking and malicious use? These questions require open discussion now, before NCIs become mainstream. Any NCI should be based not just on the principle of functionality but on a philosophy of meaningful human empowerment, where every new “neuroenhancement” brings not only knowledge and skills but also the happiness of self-actualization in harmony with other users and the environment.

Author Contributions

Inspired by P.M. I.D. authored all sections of the review except Invasive Brain–Computer Interfaces and Invasive Computer–Brain Interfaces (M.Z.) and Modeling Neural–Computer Interfaces (I.S.); editing, P.M., I.D., V.G. and O.G.; visualization, M.Z.; discussion and approval of the final version, P.M. and O.G.; supervision and funding acquisition, P.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the grant of the state program of the Sirius Federal Territory (Scientific and technological development of the Sirius Federal Territory) (Agreement No. 18-03, date 10 September 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ACSP: Adaptive Common Spatial Pattern
ADC: Analog-to-Digital Converter
ALS: Amyotrophic Lateral Sclerosis
AMD: Age-Related Macular Degeneration
ARAT: Action Research Arm Test
ASIA: American Spinal Injury Association
AUC: Area Under Curve
BBI: Brain–Brain Interface
BCBI: Brain–Computer–Brain Interface
BCI: Brain–Computer Interface
BSI: Brain–Spine Interface
CBI: Computer–Brain Interface
CCPM: Correct Characters Per Minute
CNN: Convolutional Neural Network
CNS: Central Nervous System
CSP: Common Spatial Pattern
DBS: Deep Brain Stimulation
DL: Deep Learning
DSP: Digital Signal Processor
EBCI: Eye–Brain–Computer Interface
ECoG: Electrocorticography
EEG: Electroencephalography
EMG: Electromyography
ERD: Event-Related Desynchronization
ERS: Event-Related Synchronization
ESN: Echo State Network
FEM: Finite-Element Modeling
FES: Functional Electrical Stimulation
FFT: Fast Fourier Transform
FM-stimulus: Frequency-Modulated Stimulus
(f)NIRS: (functional) Near-Infrared Spectroscopy
FPGA: Field-Programmable Gate Array
fUS: functional Ultrasound
GAIT: Gait Assessment and Intervention Tool
GMFCS: Gross Motor Function Classification System
GRU: Gated Recurrent Unit
IADL: Instrumental Activities of Daily Living
ICA: Independent Component Analysis
ICMS (ISMS): Intracortical Microstimulation (Intraspinal Microstimulation)
ITR: Information Transfer Rate
LDA: Linear Discriminant Analysis
LFP: Local Field Potential
LSTM: Long Short-Term Memory
MDM: Memory Decoding Model
MEG: Magnetoencephalography
MI: Motor Imagery
MIMO: Multi-Input, Multi-Output
MIP: Molecularly Imprinted Polymer
MISO: Multi-Input, Single-Output
MRI: Magnetic Resonance Imaging
NCI: Neural–Computer Interface
OPM: Optically Pumped Magnetometer
PCA: Principal Component Analysis
PNS: Peripheral Nerve Stimulation
PSoC: Programmable System on a Chip
RC: Reservoir Computing
RNN: Recurrent Neural Network
SCI: Spinal Cord Injury
SFR: Stimulation Failure Rate
SIN-stimulus: Sinusoidal Stimulus
SMR: Sensorimotor Rhythm
SQUID: Superconducting Quantum Interference Device
SSVEP(F): Steady-State Visual Evoked Potential (Field)
SVM: Support Vector Machine
tACS (tDCS): transcranial Alternating (Direct) Current Stimulation
TENS: Transcutaneous Electrical Nerve Stimulation
TES: Transcranial Electrical Stimulation
(t)FUS: (transcranial) Focused Ultrasound
TIS: Temporal Interference Stimulation
TMS: Transcranial Magnetic Stimulation
tRNS: transcranial Random Noise Stimulation
(t)SCS: (transcutaneous) Spinal Cord Stimulation
ts-MS: trans-spinal Magnetic Stimulation
WRF: Weighted Random Forests

References

  1. Capogrosso, M.; Milekovic, T.; Borton, D.; Wagner, F.; Moraud, E.M.; Mignardot, J.-B.; Buse, N.; Gandar, J.; Barraud, Q.; Xing, D.; et al. A Brain–Spine Interface Alleviating Gait Deficits after Spinal Cord Injury in Primates. Nature 2016, 539, 284–288. [Google Scholar] [CrossRef]
  2. Insausti-Delgado, A.; López-Larraz, E.; Nishimura, Y.; Ziemann, U.; Ramos-Murguialday, A. Non-Invasive Brain-Spine Interface: Continuous Control of Trans-Spinal Magnetic Stimulation Using EEG. Front. Bioeng. Biotechnol. 2022, 10, 975037. [Google Scholar] [CrossRef]
  3. Lorach, H.; Galvez, A.; Spagnolo, V.; Martel, F.; Karakas, S.; Intering, N.; Vat, M.; Faivre, O.; Harte, C.; Komi, S.; et al. Walking Naturally after Spinal Cord Injury Using a Brain–Spine Interface. Nature 2023, 618, 126–133. [Google Scholar] [CrossRef]
  4. Lakshmipriya, T.; Gopinath, S.C.B. Brain-Spine Interface for Movement Restoration after Spinal Cord Injury. Brain Spine 2024, 4, 102926. [Google Scholar] [CrossRef]
  5. Capogrosso, M.; Gandar, J.; Greiner, N.; Moraud, E.M.; Wenger, N.; Shkorbatova, P.; Musienko, P.; Minev, I.; Lacour, S.; Courtine, G. Advantages of Soft Subdural Implants for the Delivery of Electrochemical Neuromodulation Therapies to the Spinal Cord. J. Neural Eng. 2018, 15, 026024. [Google Scholar] [CrossRef] [PubMed]
  6. Mead, C. Neuromorphic Electronic Systems. Proc. IEEE 1990, 78, 1629–1636. [Google Scholar] [CrossRef]
  7. Nicolelis, M.A.L. Brain–Machine Interfaces to Restore Motor Function and Probe Neural Circuits. Nat. Rev. Neurosci. 2003, 4, 417–422. [Google Scholar] [CrossRef] [PubMed]
  8. Nicolelis, M.A.L.; Lebedev, M.A. Principles of Neural Ensemble Physiology Underlying the Operation of Brain–Machine Interfaces. Nat. Rev. Neurosci. 2009, 10, 530–540. [Google Scholar] [CrossRef] [PubMed]
  9. Lebedev, M.A.; Nicolelis, M.A.L. Brain-Machine Interfaces: From Basic Science to Neuroprostheses and Neurorehabilitation. Physiol. Rev. 2017, 97, 767–837. [Google Scholar] [CrossRef]
  10. Vidal, J.J. Toward Direct Brain-Computer Communication. Annu. Rev. Biophys. Bioeng. 1973, 2, 157–180. [Google Scholar] [CrossRef]
  11. O’Doherty, J.E.; Lebedev, M.A.; Ifft, P.J.; Zhuang, K.Z.; Shokur, S.; Bleuler, H.; Nicolelis, M.A.L. Active Tactile Exploration Using a Brain–Machine–Brain Interface. Nature 2011, 479, 228–231. [Google Scholar] [CrossRef]
  12. LaRocco, J.; Paeng, D.-G. Optimizing Computer–Brain Interface Parameters for Non-Invasive Brain-to-Brain Interface. Front. Neuroinform. 2020, 14, 1. [Google Scholar] [CrossRef]
  13. Vakilipour, P.; Fekrvand, S. Brain-to-Brain Interface Technology: A Brief History, Current State, and Future Goals. Int. J. Dev. Neurosci. 2024, 84, 351–367. [Google Scholar] [CrossRef]
  14. Haggard, P.; Chambon, V. Sense of Agency. Curr. Biol. 2012, 22, R390–R392. [Google Scholar] [CrossRef] [PubMed]
  15. Ramakrishnan, A.; Ifft, P.J.; Pais-Vieira, M.; Byun, Y.W.; Zhuang, K.Z.; Lebedev, M.A.; Nicolelis, M.A.L. Computing Arm Movements with a Monkey Brainet. Sci. Rep. 2015, 5, 10767. [Google Scholar] [CrossRef] [PubMed]
  16. Zander, T.O.; Jatzev, S. Detecting Affective Covert User States with Passive Brain-Computer Interfaces. In Proceedings of the 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, Amsterdam, The Netherlands, 10–12 September 2009. [Google Scholar] [CrossRef]
  17. Aricò, P.; Borghini, G.; Di Flumeri, G.; Sciaraffa, N.; Babiloni, F. Passive BCI beyond the Lab: Current Trends and Future Directions. Physiol. Meas. 2018, 39, 08TR02. [Google Scholar] [CrossRef] [PubMed]
  18. Alimardani, M.; Hiraki, K. Passive Brain-Computer Interfaces for Enhanced Human-Robot Interaction. Front. Robot. AI 2020, 7, 125. [Google Scholar] [CrossRef]
  19. Pisarchik, A.N.; Kurkin, S.A.; Shusharina, N.N.; Hramov, A.E. Passive Brain–Computer Interfaces for Cognitive and Pathological Brain Physiological States Monitoring and Control. In Brain-Computer Interfaces; Elsevier: Amsterdam, The Netherlands, 2025; pp. 345–388. ISBN 978-0-323-95439-6. [Google Scholar]
  20. Alrumiah, S.S.; Alhajjaj, L.A.; Alshobaili, J.F.; Ibrahim, D.M. A Review on Brain-Computer Interface Spellers: P300 Speller. Biosci. Biotechnol. Res. Commun. 2020, 13, 1191–1199. [Google Scholar] [CrossRef]
  21. Schalk, G.; Mellinger, J. A Practical Guide to Brain–Computer Interfacing with BCI2000; Springer: London, UK, 2010; ISBN 978-1-84996-091-5. [Google Scholar]
  22. Lindgren, J.; Lecuyer, A. OpenViBE and Other BCI Software Platforms. In Brain–Computer Interfaces 2; Clerc, M., Bougrain, L., Lotte, F., Eds.; Wiley: Hoboken, NJ, USA, 2016; pp. 179–198. ISBN 978-1-84821-963-2. [Google Scholar]
  23. Limerick, H.; Coyle, D.; Moore, J.W. The Experience of Agency in Human-Computer Interactions: A Review. Front. Hum. Neurosci. 2014, 8, 643. [Google Scholar] [CrossRef]
  24. Mohan, V.; Tay, W.P.; Basu, A. Towards Neuromorphic Compression Based Neural Sensing for Next-Generation Wireless Implantable Brain Machine Interface. Neuromorphic Comput. Eng. 2025, 5, 014004. [Google Scholar] [CrossRef]
  25. Ajiboye, A.B.; Willett, F.R.; Young, D.R.; Memberg, W.D.; Murphy, B.A.; Miller, J.P.; Walter, B.L.; Sweet, J.A.; Hoyen, H.A.; Keith, M.W.; et al. Restoration of Reaching and Grasping in a Person with Tetraplegia through Brain-Controlled Muscle Stimulation: A Proof-of-Concept Demonstration. Lancet Lond. Engl. 2017, 389, 1821–1830. [Google Scholar] [CrossRef] [PubMed]
  26. McFarland, D.J.; McCane, L.M.; David, S.V.; Wolpaw, J.R. Spatial Filter Selection for EEG-Based Communication. Electroencephalogr. Clin. Neurophysiol. 1997, 103, 386–394. [Google Scholar] [CrossRef] [PubMed]
  27. Wang, P.T.; Gandasetiawan, K.; McCrimmon, C.M.; Karimi-Bidhendi, A.; Liu, C.Y.; Heydari, P.; Nenadic, Z.; Do, A.H. Feasibility of an Ultra-Low Power Digital Signal Processor Platform as a Basis for a Fully Implantable Brain-Computer Interface System. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 4491–4494. [Google Scholar]
  28. Guo, L.; Weiße, A.; Zeinolabedin, S.M.A.; Schüffny, F.M.; Stolba, M.; Ma, Q.; Wang, Z.; Scholze, S.; Dixius, A.; Berthel, M.; et al. 68-Channel Highly-Integrated Neural Signal Processing PSoC with On-Chip Feature Extraction, Compression, and Hardware Accelerators for Neuroprosthetics in 22nm FDSOI. arXiv 2024, arXiv:2407.09166. [Google Scholar]
  29. Bousseta, R.; El Ouakouak, I.; Gharbi, M.; Regragui, F. EEG Based Brain Computer Interface for Controlling a Robot Arm Movement Through Thought. IRBM 2018, 39, 129–135. [Google Scholar] [CrossRef]
  30. Xie, P.; Men, Y.; Zhen, J.; Shao, X.; Zhao, J.; Chen, X. The Supernumerary Robotic Limbs of Brain-Computer Interface Based on Asynchronous Steady-State Visual Evoked Potential. Available online: https://english.biomedeng.cn/article/10.7507/1001-5515.202312056 (accessed on 10 February 2025).
  31. Wu, S.; Bhadra, K.; Giraud, A.-L.; Marchesotti, S. Adaptive LDA Classifier Enhances Real-Time Control of an EEG Brain–Computer Interface for Decoding Imagined Syllables. Brain Sci. 2024, 14, 196. [Google Scholar] [CrossRef] [PubMed]
  32. Cajigas, I.; Davis, K.C.; Prins, N.W.; Gallo, S.; Naeem, J.A.; Fisher, L.; Ivan, M.E.; Prasad, A.; Jagid, J.R. Brain-Computer Interface Control of Stepping from Invasive Electrocorticography Upper-Limb Motor Imagery in a Patient with Quadriplegia. Front. Hum. Neurosci. 2023, 16, 1077416. [Google Scholar] [CrossRef]
  33. Davis, K.C.; Wyse-Sookoo, K.R.; Raza, F.; Meschede-Krasa, B.; Prins, N.W.; Fisher, L.; Brown, E.N.; Cajigas, I.; Ivan, M.E.; Jagid, J.R.; et al. 5-Year Follow-up of a Fully Implanted Brain–Computer Interface in a Spinal Cord Injury Patient. J. Neural Eng. 2025, 22, 026050. [Google Scholar] [CrossRef]
  34. Korovesis, N.; Kandris, D.; Koulouras, G.; Alexandridis, A. Robot Motion Control via an EEG-Based Brain–Computer Interface by Using Neural Networks and Alpha Brainwaves. Electronics 2019, 8, 1387. [Google Scholar] [CrossRef]
  35. Willett, F.R.; Kunz, E.M.; Fan, C.; Avansino, D.T.; Wilson, G.H.; Choi, E.Y.; Kamdar, F.; Glasser, M.F.; Hochberg, L.R.; Druckmann, S.; et al. A High-Performance Speech Neuroprosthesis. Nature 2023, 620, 1031–1036. [Google Scholar] [CrossRef]
  36. Gomez-Rivera, Y.A.; Cardona-Álvarez, Y.; Gomez-Morales, O.W.; Alvarez-Meza, A.M.; Castellanos-Domínguez, G. BCI-Based Real-Time Processing for Implementing Deep Learning Frameworks Using Motor Imagery Paradigms. J. Appl. Res. Technol. 2024, 22, 646–653. [Google Scholar] [CrossRef]
  37. An, Y.; Mitchell, D.; Lathrop, J.; Flynn, D.; Chung, S.-J. Motor Imagery Teleoperation of a Mobile Robot Using a Low-Cost Brain-Computer Interface for Multi-Day Validation. arXiv 2024, arXiv:2412.08971. [Google Scholar]
  38. Zrenner, E.; Bartz-Schmidt, K.U.; Benav, H.; Besch, D.; Bruckmann, A.; Gabel, V.-P.; Gekeler, F.; Greppmaier, U.; Harscher, A.; Kibbel, S.; et al. Subretinal Electronic Chips Allow Blind Patients to Read Letters and Combine Them to Words. Proc. R. Soc. B Biol. Sci. 2011, 278, 1489–1497. [Google Scholar] [CrossRef]
  39. Valle, G. Peripheral Neurostimulation for Encoding Artificial Somatosensations. Eur. J. Neurosci. 2022, 56, 5888–5901. [Google Scholar] [CrossRef]
  40. Dendys, K.; Bieniasz, J.; Bigos, P.; Kuźnicki, W.; Matkowski, I.; Potyrała, P. Cochlear Implants—An Overview. Are CIs World’s Most Successful Sensory Prostheses? J. Educ. Health Sport 2023, 24, 126–142. [Google Scholar] [CrossRef]
  41. Van Boxel, S.C.J.; Vermorken, B.L.; Volpe, B.; Guinand, N.; Perez-Fornos, A.; Devocht, E.M.J.; Van De Berg, R. The Vestibular Implant: Effects of Stimulation Parameters on the Electrically-Evoked Vestibulo-Ocular Reflex. Front. Neurol. 2024, 15, 1483067. [Google Scholar] [CrossRef] [PubMed]
  42. Hao, M.; Chou, C.-H.; Zhang, J.; Yang, F.; Cao, C.; Yin, P.; Liang, W.; Niu, C.M.; Lan, N. Restoring Finger-Specific Sensory Feedback for Transradial Amputees via Non-Invasive Evoked Tactile Sensation. IEEE Open J. Eng. Med. Biol. 2020, 1, 98–107. [Google Scholar] [CrossRef]
  43. Barss, T.S.; Parhizi, B.; Mushahwar, V.K. Transcutaneous Spinal Cord Stimulation of the Cervical Cord Modulates Lumbar Networks. J. Neurophysiol. 2020, 123, 158–166. [Google Scholar] [CrossRef]
  44. Atkinson, C.; Lombardi, L.; Lang, M.; Keesey, R.; Hawthorn, R.; Seitz, Z.; Leuthardt, E.C.; Brunner, P.; Seáñez, I. Development and Evaluation of a Non-Invasive Brain-Spine Interface Using Transcutaneous Spinal Cord Stimulation. bioRxiv 2024. [Google Scholar] [CrossRef]
  45. Lim, R.Y.; Jiang, M.; Ang, K.K.; Lin, X.; Guan, C. Brain-Computer-Brain System for Individualized Transcranial Alternating Current Stimulation with Concurrent EEG Recording: A Healthy Subject Pilot Study. In Proceedings of the 2024 46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 15 July 2024; IEEE: Piscataway, NJ, USA; pp. 1–4. [Google Scholar]
  46. Terney, D.; Chaieb, L.; Moliadze, V.; Antal, A.; Paulus, W. Increasing Human Brain Excitability by Transcranial High-Frequency Random Noise Stimulation. J. Neurosci. 2008, 28, 14147–14155. [Google Scholar] [CrossRef]
  47. McLaren, R.; Smith, P.F.; Taylor, R.L.; Ravindran, S.; Rashid, U.; Taylor, D. Efficacy of nGVS to Improve Postural Stability in People with Bilateral Vestibulopathy: A Systematic Review and Meta-Analysis. Front. Neurosci. 2022, 16, 1010239. [Google Scholar] [CrossRef]
  48. Jovellar, D.B.; Roy, O.; Belardinelli, P.; Ziemann, U. Real-Time Brain State-Coupled Network-Targeted Transcranial Magnetic Stimulation to Enhance Working Memory. In Brain-Computer Interface Research; Guger, C., Azorin, J., Korostenskaja, M., Allison, B., Eds.; SpringerBriefs in Electrical and Computer Engineering; Springer Nature: Cham, Switzerland, 2025; pp. 67–79. ISBN 978-3-031-80496-0. [Google Scholar]
  49. Valle, G.; Alamri, A.H.; Downey, J.E.; Lienkämper, R.; Jordan, P.M.; Sobinov, A.R.; Endsley, L.J.; Prasad, D.; Boninger, M.L.; Collinger, J.L.; et al. Tactile Edges and Motion via Patterned Microstimulation of the Human Somatosensory Cortex. Science 2025, 387, 315–322. [Google Scholar] [CrossRef]
  50. Tawakol, O.; Herman, M.D.; Foxley, S.; Mushahwar, V.K.; Towle, V.L.; Troyk, P.R. In-vivo Testing of a Novel Wireless Intraspinal Microstimulation Interface for Restoration of Motor Function Following Spinal Cord Injury. Artif. Organs 2024, 48, 263–273. [Google Scholar] [CrossRef]
  51. Wang, Y.; Zeng, G.Q.; Wang, M.; Zhang, M.; Chang, C.; Liu, Q.; Wang, K.; Ma, R.; Wang, Y.; Zhang, X. The Safety and Efficacy of Applying a High-Current Temporal Interference Electrical Stimulation in Humans. Front. Hum. Neurosci. 2024, 18, 1484593. [Google Scholar] [CrossRef] [PubMed]
  52. Lee, W.; Kim, H.-C.; Jung, Y.; Chung, Y.A.; Song, I.-U.; Lee, J.-H.; Yoo, S.-S. Transcranial Focused Ultrasound Stimulation of Human Primary Visual Cortex. Sci. Rep. 2016, 6, 34026. [Google Scholar] [CrossRef] [PubMed]
  53. Lee, W.; Kim, S.; Kim, B.; Lee, C.; Chung, Y.A.; Kim, L.; Yoo, S.-S. Non-Invasive Transmission of Sensorimotor Information in Humans Using an EEG/Focused Ultrasound Brain-to-Brain Interface. PLoS ONE 2017, 12, e0178476. [Google Scholar] [CrossRef]
  54. Sahel, J.-A.; Boulanger-Scemama, E.; Pagot, C.; Arleo, A.; Galluppi, F.; Martel, J.N.; Esposti, S.D.; Delaux, A.; de Saint Aubert, J.-B.; de Montleau, C.; et al. Partial Recovery of Visual Function in a Blind Patient after Optogenetic Therapy. Nat. Med. 2021, 27, 1223–1229. [Google Scholar] [CrossRef]
  55. Song, D.; Hampson, R.E.; Robinson, B.S.; Marmarelis, V.Z.; Deadwyler, S.A.; Berger, T.W. Decoding Memory Features from Hippocampal Spiking Activities Using Sparse Classification Models. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 1620–1623. [Google Scholar]
  56. Song, D.; Wang, H.; Tu, C.Y.; Marmarelis, V.Z.; Hampson, R.E.; Deadwyler, S.A.; Berger, T.W. Identification of Sparse Neural Functional Connectivity Using Penalized Likelihood Estimation and Basis Functions. J. Comput. Neurosci. 2013, 35, 335–357. [Google Scholar] [CrossRef]
  57. Roeder, B.M.; She, X.; Dakos, A.S.; Moore, B.; Wicks, R.T.; Witcher, M.R.; Couture, D.E.; Laxton, A.W.; Clary, H.M.; Popli, G.; et al. Developing a Hippocampal Neural Prosthetic to Facilitate Human Memory Encoding and Recall of Stimulus Features and Categories. Front. Comput. Neurosci. 2024, 18, 1263311. [Google Scholar] [CrossRef] [PubMed]
  58. Bozinovski, S.; Sestakov, M.; Bozinovska, L. Using EEG Alpha Rhythm to Control a Mobile Robot. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, New Orleans, LA, USA, 4–7 November 1988; IEEE: Piscataway, NJ, USA; Volume 3, pp. 1515–1516. [Google Scholar]
  59. Arroyo, S.; Lesser, R.P.; Gordon, B.; Uematsu, S.; Jackson, D.; Webber, R. Functional Significance of the Mu Rhythm of Human Cortex: An Electrophysiologic Study with Subdural Electrodes. Electroencephalogr. Clin. Neurophysiol. 1993, 87, 76–87. [Google Scholar] [CrossRef]
  60. Pfurtscheller, G.; Neuper, C. Motor Imagery Activates Primary Sensorimotor Area in Humans. Neurosci. Lett. 1997, 239, 65–68. [Google Scholar] [CrossRef]
  61. Guger, C.; Ramoser, H.; Pfurtscheller, G. Real-Time EEG Analysis with Subject-Specific Spatial Patterns for a Brain-Computer Interface (BCI). IEEE Trans. Rehabil. Eng. 2000, 8, 447–456. [Google Scholar] [CrossRef]
  62. Blankertz, B.; Dornhege, G.; Krauledat, M.; Muller, K.-R.; Kunzmann, V.; Losch, F.; Curio, G. The Berlin Brain-Computer Interface: EEG-Based Communication without Subject Training. IEEE Trans. Neural Syst. Rehabil. Eng. 2006, 14, 147–152. [Google Scholar] [CrossRef]
  63. Blankertz, B.; Tangermann, M.; Vidaurre, C.; Fazli, S.; Sannelli, C.; Haufe, S.; Maeder, C.; Ramsey, L.; Sturm, I.; Curio, G.; et al. The Berlin Brain–Computer Interface: Non-Medical Uses of BCI Technology. Front. Neurosci. 2010, 4, 198. [Google Scholar] [CrossRef]
  64. Choi, J.; Kim, K.T.; Jeong, J.H.; Kim, L.; Lee, S.J.; Kim, H. Developing a Motor Imagery-Based Real-Time Asynchronous Hybrid BCI Controller for a Lower-Limb Exoskeleton. Sensors 2020, 20, 7309. [Google Scholar] [CrossRef] [PubMed]
  65. Zhang, J.; Xu, B.; Lou, X.; Wu, Y.; Shen, X. MI-Based BCI with Accurate Real-Time Three-Class Classification Processing and Light Control Application. Proc. Inst. Mech. Eng. Part H 2023, 237, 1017–1028. [Google Scholar] [CrossRef] [PubMed]
  66. Ma, Z.-Z.; Wu, J.-J.; Cao, Z.; Hua, X.-Y.; Zheng, M.-X.; Xing, X.-X.; Ma, J.; Xu, J.-G. Motor Imagery-Based Brain–Computer Interface Rehabilitation Programs Enhance Upper Extremity Performance and Cortical Activation in Stroke Patients. J. Neuroeng. Rehabil. 2024, 21, 91. [Google Scholar] [CrossRef]
  67. Kim, M.S.; Park, H.; Kwon, I.; An, K.-O.; Kim, H.; Park, G.; Hyung, W.; Im, C.-H.; Shin, J.-H. Efficacy of Brain-Computer Interface Training with Motor Imagery-Contingent Feedback in Improving Upper Limb Function and Neuroplasticity among Persons with Chronic Stroke: A Double-Blinded, Parallel-Group, Randomized Controlled Trial. J. Neuroeng. Rehabil. 2025, 22, 1. [Google Scholar] [CrossRef] [PubMed]
  68. Russo, J.S.; Shiels, T.A.; Lin, C.-H.S.; John, S.E.; Grayden, D.B. Decoding Imagined Movement in People with Multiple Sclerosis for Brain–Computer Interface Translation. J. Neural Eng. 2025, 22, 016012. [Google Scholar] [CrossRef] [PubMed]
  69. Sutton, S.; Braren, M.; Zubin, J.; John, E.R. Evoked-Potential Correlates of Stimulus Uncertainty. Science 1965, 150, 1187–1188. [Google Scholar] [CrossRef]
  70. Bhandari, V.; Londhe, N.D.; Kshirsagar, G.B. A Systematic Review of Computational Intelligence Techniques for Channel Selection in P300-Based Brain Computer Interface Speller. Artif. Intell. Appl. 2024, 2, 155–164. [Google Scholar] [CrossRef]
  71. Van Der Tweel, L.H.; Lunel, H.F. Human Visual Responses To Sinusoidally Modulated Light. Electroencephalogr. Clin. Neurophysiol. 1965, 18, 587–598. [Google Scholar] [CrossRef]
  72. Regan, D. Some Characteristics of Average Steady-State and Transient Responses Evoked by Modulated Light. Electroencephalogr. Clin. Neurophysiol. 1966, 20, 238–248. [Google Scholar] [CrossRef]
  73. Ji, D.; Xiao, X.; Wu, J.; He, X.; Zhang, G.; Guo, R.; Liu, M.; Xu, M.; Lin, Q.; Jung, T.-P.; et al. A User-Friendly Visual Brain-Computer Interface Based on High-Frequency Steady-State Visual Evoked Fields Recorded by OPM-MEG. J. Neural Eng. 2024, 21, 036024. [Google Scholar] [CrossRef]
  74. Wang, Y.; Gao, X.; Hong, B.; Jia, C.; Gao, S. Brain-Computer Interfaces Based on Visual Evoked Potentials. IEEE Eng. Med. Biol. Mag. 2008, 27, 64–71. [Google Scholar] [CrossRef]
  75. Zhang, Y.; Xu, P.; Liu, T.; Hu, J.; Zhang, R.; Yao, D. Multiple Frequencies Sequential Coding for SSVEP-Based Brain-Computer Interface. PLoS ONE 2012, 7, e29519. [Google Scholar] [CrossRef] [PubMed]
  76. Allison, B.Z.; Brunner, C.; Altstätter, C.; Wagner, I.C.; Grissmann, S.; Neuper, C. A Hybrid ERD/SSVEP BCI for Continuous Simultaneous Two Dimensional Cursor Control. J. Neurosci. Methods 2012, 209, 299–307. [Google Scholar] [CrossRef] [PubMed]
  77. Dreyer, A.M.; Herrmann, C.S.; Rieger, J.W. Tradeoff between User Experience and BCI Classification Accuracy with Frequency Modulated Steady-State Visual Evoked Potentials. Front. Hum. Neurosci. 2017, 11, 391. [Google Scholar] [CrossRef] [PubMed]
  78. Chen, X.; Zhao, B.; Wang, Y.; Xu, S.; Gao, X. Control of a 7-DOF Robotic Arm System With an SSVEP-Based BCI. Int. J. Neural Syst. 2018, 28, 1850018. [Google Scholar] [CrossRef]
  79. Maksimenko, V.A.; Hramov, A.E.; Frolov, N.S.; Lüttjohann, A.; Nedaivozov, V.O.; Grubov, V.V.; Runnova, A.E.; Makarov, V.V.; Kurths, J.; Pisarchik, A.N. Increasing Human Performance by Sharing Cognitive Load Using Brain-to-Brain Interface. Front. Neurosci. 2018, 12, 949. [Google Scholar] [CrossRef]
  80. Dehais, F.; Ladouce, S.; Darmet, L.; Nong, T.-V.; Ferraro, G.; Torre Tresols, J.; Velut, S.; Labedan, P. Dual Passive Reactive Brain-Computer Interface: A Novel Approach to Human-Machine Symbiosis. Front. Neuroergon. 2022, 3, 824780. [Google Scholar] [CrossRef]
  81. Mellinger, J.; Schalk, G.; Braun, C.; Preissl, H.; Rosenstiel, W.; Birbaumer, N.; Kübler, A. An MEG-Based Brain–Computer Interface (BCI). NeuroImage 2007, 36, 581–593. [Google Scholar] [CrossRef]
  82. Hramov, A.; Pitsik, E.; Chholak, P.; Maksimenko, V.; Frolov, N.; Kurkin, S.; Pisarchik, A. A MEG Study of Different Motor Imagery Modes in Untrained Subjects for BCI Applications. In Proceedings of the 16th International Conference on Informatics in Control, Automation and Robotics, Prague, Czech Republic, 29–31 July 2019; SCITEPRESS—Science and Technology Publications: Setúbal, Portugal, 2019; pp. 188–195. [Google Scholar]
  83. Xu, H.; Gong, A.; Ding, P.; Luo, J.; Chen, C.; Fu, Y. Key Technologies for Intelligent Brain-Computer Interaction Based on Magnetoencephalography. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi J. Biomed. Eng. Shengwu Yixue Gongchengxue Zazhi 2022, 39, 198–206. [Google Scholar] [CrossRef]
  84. Wittevrongel, B.; Holmes, N.; Boto, E.; Hill, R.; Rea, M.; Libert, A.; Khachatryan, E.; Van Hulle, M.M.; Bowtell, R.; Brookes, M.J. Practical Real-Time MEG-Based Neural Interfacing with Optically Pumped Magnetometers. BMC Biol. 2021, 19, 158. [Google Scholar] [CrossRef] [PubMed]
  85. Brickwedde, M.; Anders, P.; Kühn, A.A.; Lofredi, R.; Holtkamp, M.; Kaindl, A.M.; Grent-‘t-Jong, T.; Krüger, P.; Sander, T.; Uhlhaas, P.J. Applications of OPM-MEG for Translational Neuroscience: A Perspective. Transl. Psychiatry 2024, 14, 341. [Google Scholar] [CrossRef] [PubMed]
  86. Fedosov, N.; Medvedeva, D.; Shevtsov, O.; Ossadtchi, A. Low Count of Optically Pumped Magnetometers Furnishes a Reliable Real-Time Access to Sensorimotor Rhythm. arXiv 2024, arXiv:2412.18353. [Google Scholar] [CrossRef]
  87. Mokienko, O.A.; Lyukmanov, R.K.; Bobrov, P.D.; Isaev, M.R.; Ikonnikova, E.S.; Cherkasova, A.N.; Suponeva, N.A.; Piradov, M.A. Brain-computer interfaces based on near-infrared spectroscopy and electroencephalography registration in post-stroke rehabilitation: A comparative study. Neurol. Neuropsychiatry Psychosom. 2024, 16, 17–23. [Google Scholar] [CrossRef]
  88. Edelman, B.J.; Zhang, S.; Schalk, G.; Brunner, P.; Müller-Putz, G.; Guan, C.; He, B. Non-Invasive Brain-Computer Interfaces: State of the Art and Trends. IEEE Rev. Biomed. Eng. 2025, 18, 26–49. [Google Scholar] [CrossRef] [PubMed]
  89. Reddy, G.S.R.; Proulx, M.J.; Hirshfield, L.; Ries, A.J. Towards an Eye-Brain-Computer Interface: Combining Gaze with the Stimulus-Preceding Negativity for Target Selections in XR. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 11–16 May 2024; Association for Computing Machinery: New York, NY, USA, 2024. [Google Scholar] [CrossRef]
  90. Charvet, G.; Foerster, M.; Chatalic, G.; Michea, A.; Porcherot, J.; Bonnet, S.; Filipe, S.; Audebert, P.; Robinet, S.; Josselin, V.; et al. A Wireless 64-Channel ECoG Recording Electronic for Implantable Monitoring and BCI Applications: WIMAGINE. In Proceedings of the 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, San Diego, CA, USA, 28 August–1 September 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 783–786. [Google Scholar]
  91. Mestais, C.S.; Charvet, G.; Sauter-Starace, F.; Foerster, M.; Ratel, D.; Benabid, A.L. WIMAGINE: Wireless 64-Channel ECoG Recording Implant for Long Term Clinical Applications. IEEE Trans. Neural Syst. Rehabil. Eng. 2015, 23, 10–21. [Google Scholar] [CrossRef]
  92. Sauter-Starace, F.; Ratel, D.; Cretallaz, C.; Foerster, M.; Lambert, A.; Gaude, C.; Costecalde, T.; Bonnet, S.; Charvet, G.; Aksenova, T.; et al. Long-Term Sheep Implantation of WIMAGINE®, a Wireless 64-Channel Electrocorticogram Recorder. Front. Neurosci. 2019, 13, 847. [Google Scholar] [CrossRef]
  93. Bellicha, A.; Struber, L.; Pasteau, F.; Juillard, V.; Devigne, L.; Karakas, S.; Chabardès, S.; Babel, M.; Charvet, G. Depth-Sensor-Based Shared Control Assistance for Mobility and Object Manipulation: Toward Long-Term Home-Use of BCI-Controlled Assistive Robotic Devices. J. Neural Eng. 2025, 22, 016045. [Google Scholar] [CrossRef]
  94. Liu, D.; Shan, Y.; Wei, P.; Li, W.; Xu, H.; Liang, F.; Liu, T.; Zhao, G.; Hong, B. Reclaiming Hand Functions after Complete Spinal Cord Injury with Epidural Brain-Computer Interface. medRxiv 2024. [Google Scholar] [CrossRef]
  95. Luo, S.; Angrick, M.; Coogan, C.; Candrea, D.N.; Wyse-Sookoo, K.; Shah, S.; Rabbani, Q.; Milsap, G.W.; Weiss, A.R.; Anderson, W.S.; et al. Stable Decoding from a Speech BCI Enables Control for an Individual with ALS without Recalibration for 3 Months. Adv. Sci. 2023, 10, 2304853. [Google Scholar] [CrossRef]
  96. Silva, A.B.; Liu, J.R.; Metzger, S.L.; Bhaya-Grossman, I.; Dougherty, M.E.; Seaton, M.P.; Littlejohn, K.T.; Tu-Chan, A.; Ganguly, K.; Moses, D.A.; et al. A Bilingual Speech Neuroprosthesis Driven by Cortical Articulatory Representations Shared between Languages. Nat. Biomed. Eng. 2024, 8, 977–991. [Google Scholar] [CrossRef]
  97. Rapoport, B.; Hettick, M.; Ho, E.; Poole, A.; Mongue, M.; Papageorgiou, D.; LaMarca, M.; Trietsch, D.; Reed, K.; Murphy, M.; et al. High-Resolution Cortical Mapping with Conformable Microelectrodes on a Thousand-Electrode Scale: A First-in-Human Study (S23.008). Neurology 2024, 102, 6531. [Google Scholar] [CrossRef]
  98. Konrad, P.E.; Gelman, K.R.; Lawrence, J.; Bhatia, S.; Dister, J.; Sharma, R.; Ho, E.; Byun, Y.W.; Mermel, C.H.; Rapoport, B.I. First-in-Human Experience Performing High-Resolution Cortical Mapping Using a Novel Microelectrode Array Containing 1,024 Electrodes. J. Neural Eng. 2025, 22, 026009. [Google Scholar] [CrossRef] [PubMed]
  99. Oxley, T.J.; Yoo, P.E.; Rind, G.S.; Ronayne, S.M.; Lee, C.M.S.; Bird, C.; Hampshire, V.; Sharma, R.P.; Morokoff, A.; Williams, D.L.; et al. Motor Neuroprosthesis Implanted with Neurointerventional Surgery Improves Capacity for Activities of Daily Living Tasks in Severe Paralysis: First in-Human Experience. J. Neurointerv. Surg. 2021, 13, 102–108. [Google Scholar] [CrossRef] [PubMed]
  100. Oxley, T. A 10-Year Journey towards Clinical Translation of an Implantable Endovascular BCI: A Keynote Lecture Given at the BCI Society Meeting in Brussels. J. Neural Eng. 2025, 22, 013001. [Google Scholar] [CrossRef]
  101. Collinger, J.L.; Wodlinger, B.; Downey, J.E.; Wang, W.; Tyler-Kabara, E.C.; Weber, D.J.; McMorland, A.J.C.; Velliste, M.; Boninger, M.L.; Schwartz, A.B. High-Performance Neuroprosthetic Control by an Individual with Tetraplegia. The Lancet 2013, 381, 557–564. [Google Scholar] [CrossRef]
  102. Rubin, D.B.; Ajiboye, A.B.; Barefoot, L.; Bowker, M.; Cash, S.S.; Chen, D.; Donoghue, J.P.; Eskandar, E.N.; Friehs, G.; Grant, C.; et al. Interim Safety Profile From the Feasibility Study of the BrainGate Neural Interface System. Neurology 2023, 100, e1177–e1192. [Google Scholar] [CrossRef]
  103. Willsey, M.S.; Shah, N.P.; Avansino, D.T.; Hahn, N.V.; Jamiolkowski, R.M.; Kamdar, F.B.; Hochberg, L.R.; Willett, F.R.; Henderson, J.M. A High-Performance Brain–Computer Interface for Finger Decoding and Quadcopter Game Control in an Individual with Paralysis. Nat. Med. 2025, 31, 96–104. [Google Scholar] [CrossRef] [PubMed]
  104. Musk, E.; Neuralink. An Integrated Brain-Machine Interface Platform With Thousands of Channels. J. Med. Internet Res. 2019, 21, e16194. [Google Scholar] [CrossRef]
  105. Parikh, P.M.; Venniyoor, A. Neuralink and Brain–Computer Interface—Exciting Times for Artificial Intelligence. S. Asian J. Cancer 2024, 13, 063–065. [Google Scholar] [CrossRef]
  106. Mokienko, O.A. Brain–Computer Interfaces with Intracortical Implants for Motor and Communication Functions Compensation: Review of Recent Developments. Sovrem. Tehnol. V Med. 2024, 16, 78. [Google Scholar] [CrossRef]
  107. Waisberg, E.; Ong, J.; Lee, A.G. Ethical Considerations of Neuralink and Brain-Computer Interfaces. Ann. Biomed. Eng. 2024, 52, 1937–1939. [Google Scholar] [CrossRef]
  108. Lee, L. Imaging the Effects of 1 Hz Repetitive Transcranial Magnetic Stimulation During Motor Behaviour. Ph.D. Thesis, University College London, London, UK, 2004. [Google Scholar]
  109. Lefaucheur, J.-P.; Aleman, A.; Baeken, C.; Benninger, D.H.; Brunelin, J.; Di Lazzaro, V.; Filipović, S.R.; Grefkes, C.; Hasan, A.; Hummel, F.C.; et al. Evidence-Based Guidelines on the Therapeutic Use of Repetitive Transcranial Magnetic Stimulation (rTMS): An Update (2014–2018). Clin. Neurophysiol. 2020, 131, 474–528. [Google Scholar] [CrossRef] [PubMed]
  110. Hsu, G.; Shereen, A.D.; Cohen, L.G.; Parra, L.C. Robust Enhancement of Motor Sequence Learning with 4 mA Transcranial Electric Stimulation. Brain Stimulat. 2023, 16, 56–67. [Google Scholar] [CrossRef] [PubMed]
  111. Legon, W.; Sato, T.F.; Opitz, A.; Mueller, J.; Barbour, A.; Williams, A.; Tyler, W.J. Transcranial Focused Ultrasound Modulates the Activity of Primary Somatosensory Cortex in Humans. Nat. Neurosci. 2014, 17, 322–329. [Google Scholar] [CrossRef] [PubMed]
  112. Lee, W.; Kim, H.; Jung, Y.; Song, I.-U.; Chung, Y.A.; Yoo, S.-S. Image-Guided Transcranial Focused Ultrasound Stimulates Human Primary Somatosensory Cortex. Sci. Rep. 2015, 5, 8743. [Google Scholar] [CrossRef]
  113. Kosnoff, J.; Yu, K.; Liu, C.; He, B. Transcranial Focused Ultrasound to V5 Enhances Human Visual Motion Brain-Computer Interface by Modulating Feature-Based Attention. Nat. Commun. 2024, 15, 4382. [Google Scholar] [CrossRef]
  114. Soghoyan, G.; Biktimirov, A.; Matvienko, Y.; Chekh, I.; Sintsov, M.; Lebedev, M.A. Peripheral Nerve Stimulation Enables Somatosensory Feedback While Suppressing Phantom Limb Pain in Transradial Amputees. Brain Stimulat. 2023, 16, 756–758. [Google Scholar] [CrossRef]
  115. Valle, G.; Katic Secerovic, N.; Eggemann, D.; Gorskii, O.; Pavlova, N.; Petrini, F.M.; Cvancara, P.; Stieglitz, T.; Musienko, P.; Bumbasirevic, M.; et al. Biomimetic Computer-to-Brain Communication Enhancing Naturalistic Touch Sensations via Peripheral Nerve Stimulation. Nat. Commun. 2024, 15, 1151. [Google Scholar] [CrossRef]
  116. Várkuti, B.; Halász, L.; Hagh Gooie, S.; Miklós, G.; Smits Serena, R.; Van Elswijk, G.; McIntyre, C.C.; Lempka, S.F.; Lozano, A.M.; Erōss, L. Conversion of a Medical Implant into a Versatile Computer-Brain Interface. Brain Stimulat. 2024, 17, 39–48. [Google Scholar] [CrossRef]
  117. Bach-y-Rita, P.; Kercel, S.W. Sensory Substitution and the Human–Machine Interface. Trends Cogn. Sci. 2003, 7, 541–546. [Google Scholar] [CrossRef]
  118. Novich, S.D.; Eagleman, D.M. Using Space and Time to Encode Vibrotactile Information: Toward an Estimate of the Skin’s Achievable Throughput. Exp. Brain Res. 2015, 233, 2777–2788. [Google Scholar] [CrossRef]
  119. Zou, X.; Chen, B.; Li, Y. Research Status and Progress of Bilateral Cochlear Implantation. Lin Chuang Er Bi Yan Hou Tou Jing Wai Ke Za Zhi J. Clin. Otorhinolaryngol. Head Neck Surg. 2024, 38, 666–670. [Google Scholar]
  120. Vesper, E.O.; Sun, R.; Della Santina, C.C.; Schoo, D.P. Vestibular Implantation. Curr. Otorhinolaryngol. Rep. 2024, 12, 50–60. [Google Scholar] [CrossRef]
  121. Fernández, E.; Alfaro, A.; Soto-Sánchez, C.; Gonzalez-Lopez, P.; Lozano, A.M.; Peña, S.; Grima, M.D.; Rodil, A.; Gómez, B.; Chen, X.; et al. Visual Percepts Evoked with an Intracortical 96-Channel Microelectrode Array Inserted in Human Occipital Cortex. J. Clin. Investig. 2021, 131, e151331. [Google Scholar] [CrossRef]
  122. Muqit, M.M.K.; Le Mer, Y.; Olmos de Koo, L.; Holz, F.G.; Sahel, J.A.; Palanker, D. Prosthetic Visual Acuity with the PRIMA Subretinal Microchip in Patients with Atrophic Age-Related Macular Degeneration at 4 Years Follow-Up. Ophthalmol. Sci. 2024, 4, 100510. [Google Scholar] [CrossRef]
  123. Shelchkova, N.D.; Valle, G.; Hobbs, T.G.; Verbaarschot, C.; Downey, J.E.; Gaunt, R.A.; Bensmaia, S.J.; Greenspon, C.M. Multi-Electrode ICMS Enables Dexterous Use of Bionic Hands. In Brain-Computer Interface Research: A State-of-the-Art Summary 12; Guger, C., Azorin, J., Korostenskaja, M., Allison, B., Eds.; Springer Nature: Cham, Switzerland, 2025; pp. 29–37. ISBN 978-3-031-80497-7. [Google Scholar]
  124. Greenspon, C.M.; Valle, G.; Shelchkova, N.D.; Hobbs, T.G.; Verbaarschot, C.; Callier, T.; Berger-Wolf, E.I.; Okorokova, E.V.; Hutchison, B.C.; Dogruoz, E.; et al. Evoking Stable and Precise Tactile Sensations via Multi-Electrode Intracortical Microstimulation of the Somatosensory Cortex. Nat. Biomed. Eng. 2024, 9, 935–951. [Google Scholar] [CrossRef] [PubMed]
  125. Jiang, L.; Stocco, A.; Losey, D.M.; Abernethy, J.A.; Prat, C.S.; Rao, R.P.N. BrainNet: A Multi-Person Brain-to-Brain Interface for Direct Collaboration Between Brains. Sci. Rep. 2019, 9, 6115. [Google Scholar] [CrossRef] [PubMed]
  126. Ye, Y.; Wang, Z.; Tian, Y.; Zhou, T.; Zhou, Z.; Wei, X.; Tao, T.H.; Sun, L. A Brain-to-Brain Interface with a Flexible Neural Probe for Mouse Turning Control by Human Mind. IEEJ Trans. Electr. Electron. Eng. 2024, 19, 819–823. [Google Scholar] [CrossRef]
  127. Hampson, R.E.; Song, D.; Robinson, B.S.; Fetterhoff, D.; Dakos, A.S.; Roeder, B.M.; She, X.; Wicks, R.T.; Witcher, M.R.; Couture, D.E.; et al. Developing a Hippocampal Neural Prosthetic to Facilitate Human Memory Encoding and Recall. J. Neural Eng. 2018, 15, 036014. [Google Scholar] [CrossRef] [PubMed]
  128. Hildt, E. Multi-Person Brain-To-Brain Interfaces: Ethical Issues. Front. Neurosci. 2019, 13, 1177. [Google Scholar] [CrossRef] [PubMed]
  129. Vivancos, D.; Cuesta, F. MindBigData 2022: A Large Dataset of Brain Signals. arXiv 2022, arXiv:2212.14746. [Google Scholar]
  130. Liu, B.; Huang, X.; Wang, Y.; Chen, X.; Gao, X. BETA: A Large Benchmark Database Toward SSVEP-BCI Application. Front. Neurosci. 2020, 14, 627. [Google Scholar] [CrossRef]
  131. Rathee, D.; Raza, H.; Roy, S.; Prasad, G. A Magnetoencephalography Dataset for Motor and Cognitive Imagery-Based Brain-Computer Interface. Sci. Data 2021, 8, 120. [Google Scholar] [CrossRef] [PubMed]
  132. Subash, P.; Gray, A.; Boswell, M.; Cohen, S.L.; Garner, R.; Salehi, S.; Fisher, C.; Hobel, S.; Ghosh, S.; Halchenko, Y.; et al. A Comparison of Neuroelectrophysiology Databases. Sci. Data 2023, 10, 719. [Google Scholar] [CrossRef]
  133. Wahid, M.F.; Tafreshi, R. Improved Motor Imagery Classification Using Regularized Common Spatial Pattern with Majority Voting Strategy. IFAC-PapersOnLine 2021, 54, 226–231. [Google Scholar] [CrossRef]
  134. Al-Qazzaz, N.K.; Aldoori, A.A.; Ali, S.H.B.M.; Ahmad, S.A.; Mohammed, A.K.; Mohyee, M.I. EEG Signal Complexity Measurements to Enhance BCI-Based Stroke Patients’ Rehabilitation. Sensors 2023, 23, 3889. [Google Scholar] [CrossRef]
  135. Batistić, L.; Sušanj, D.; Pinčić, D.; Ljubic, S. Motor Imagery Classification Based on EEG Sensing with Visual and Vibrotactile Guidance. Sensors 2023, 23, 5064. [Google Scholar] [CrossRef]
  136. Hashem, H.A.; Abdulazeem, Y.; Labib, L.M.; Elhosseini, M.A.; Shehata, M. An Integrated Machine Learning-Based Brain Computer Interface to Classify Diverse Limb Motor Tasks: Explainable Model. Sensors 2023, 23, 3171. [Google Scholar] [CrossRef] [PubMed]
  137. Degirmenci, M.; Yuce, Y.K.; Perc, M.; Isler, Y. EEG Channel and Feature Investigation in Binary and Multiple Motor Imagery Task Predictions. Front. Hum. Neurosci. 2024, 18, 1525139. [Google Scholar] [CrossRef]
  138. Kabir, M.H.; Akhtar, N.I.; Tasnim, N.; Miah, A.S.M.; Lee, H.-S.; Jang, S.-W.; Shin, J. Exploring Feature Selection and Classification Techniques to Improve the Performance of an Electroencephalography-Based Motor Imagery Brain–Computer Interface System. Sensors 2024, 24, 4989. [Google Scholar] [CrossRef]
  139. Lakshminarayanan, K.; Ramu, V.; Shah, R.; Haque Sunny, M.S.; Madathil, D.; Brahmi, B.; Wang, I.; Fareh, R.; Rahman, M.H. Developing a Tablet-Based Brain-Computer Interface and Robotic Prototype for Upper Limb Rehabilitation. PeerJ Comput. Sci. 2024, 10, e2174. [Google Scholar] [CrossRef]
  140. Miladinović, A.; Accardo, A.; Jarmolowska, J.; Marusic, U.; Ajčević, M. Optimizing Real-Time MI-BCI Performance in Post-Stroke Patients: Impact of Time Window Duration on Classification Accuracy and Responsiveness. Sensors 2024, 24, 6125. [Google Scholar] [CrossRef]
  141. Zhong, Y.; Yao, L.; Pan, G.; Wang, Y. Cross-Subject Motor Imagery Decoding by Transfer Learning of Tactile ERD. IEEE Trans. Neural Syst. Rehabil. Eng. 2024, 32, 662–671. [Google Scholar] [CrossRef]
  142. Gulyás, D.; Jochumsen, M. Detection of Movement-Related Brain Activity Associated with Hand and Tongue Movements from Single-Trial Around-Ear EEG. Sensors 2024, 24, 6004. [Google Scholar] [CrossRef]
  143. Guerrero-Mendez, C.D.; Blanco-Diaz, C.F.; Rivera-Flor, H.; Fabriz-Ulhoa, P.H.; Fragoso-Dias, E.A.; de Andrade, R.M.; Delisle-Rodriguez, D.; Bastos-Filho, T.F. Influence of Temporal and Frequency Selective Patterns Combined with CSP Layers on Performance in Exoskeleton-Assisted Motor Imagery Tasks. NeuroSci 2024, 5, 169–183. [Google Scholar] [CrossRef]
  144. Kojima, S.; Kanoh, S. Four-Class ASME BCI: Investigation of the Feasibility and Comparison of Two Strategies for Multiclassing. Front. Hum. Neurosci. 2024, 18, 1461960. [Google Scholar] [CrossRef] [PubMed]
  145. Suresh, R.E.; Zobaer, M.S.; Triano, M.J.; Saway, B.F.; Grewal, P.; Rowland, N.C. Exploring Machine Learning Classification of Movement Phases in Hemiparetic Stroke Patients: A Controlled EEG-tDCS Study. Brain Sci. 2024, 15, 28. [Google Scholar] [CrossRef] [PubMed]
  146. Liu, Y.; Yu, S.; Li, J.; Ma, J.; Wang, F.; Sun, S.; Yao, D.; Xu, P.; Zhang, T. Brain State and Dynamic Transition Patterns of Motor Imagery Revealed by the Bayes Hidden Markov Model. Cogn. Neurodyn. 2024, 18, 2455–2470. [Google Scholar] [CrossRef] [PubMed]
  147. Martinez-Peon, D.; Garcia-Hernandez, N.V.; Benavides-Bravo, F.G.; Parra-Vega, V. Characterization and Classification of Kinesthetic Motor Imagery Levels. J. Neural Eng. 2024, 21, 046024. [Google Scholar] [CrossRef]
  148. Mebarkia, K.; Reffad, A. Multi Optimized SVM Classifiers for Motor Imagery Left and Right Hand Movement Identification. Australas. Phys. Eng. Sci. Med. 2019, 42, 949–958. [Google Scholar] [CrossRef]
  149. Novičić, M.; Djordjević, O.; Miler-Jerković, V.; Konstantinović, L.; Savić, A.M. Improving the Performance of Electrotactile Brain–Computer Interface Using Machine Learning Methods on Multi-Channel Features of Somatosensory Event-Related Potentials. Sensors 2024, 24, 8048. [Google Scholar] [CrossRef]
  150. Antony, M.J.; Sankaralingam, B.P.; Mahendran, R.K.; Gardezi, A.A.; Shafiq, M.; Choi, J.-G.; Hamam, H. Classification of EEG Using Adaptive SVM Classifier with CSP and Online Recursive Independent Component Analysis. Sensors 2022, 22, 7596. [Google Scholar] [CrossRef]
  151. De Brito Guerra, T.C.; Nóbrega, T.; Morya, E.; de Medeiros Martins, A.; de Sousa, V.A. Electroencephalography Signal Analysis for Human Activities Classification: A Solution Based on Machine Learning and Motor Imagery. Sensors 2023, 23, 4277. [Google Scholar] [CrossRef] [PubMed]
  152. Zhang, J.; Yang, X.; Liang, Z.; Lou, H.; Cui, T.; Shen, C.; Gao, Z. A Brain–Computer Interface System for Lower-Limb Exoskeletons Based on Motor Imagery and Stacked Ensemble Approach. Rev. Sci. Instrum. 2025, 96, 015114. [Google Scholar] [CrossRef] [PubMed]
  153. Soangra, R.; Smith, J.A.; Rajagopal, S.; Yedavalli, S.V.R.; Anirudh, E.R. Classifying Unstable and Stable Walking Patterns Using Electroencephalography Signals and Machine Learning Algorithms. Sensors 2023, 23, 6005. [Google Scholar] [CrossRef] [PubMed]
  154. Akram, F.; Alwakeel, A.; Alwakeel, M.; Hijji, M.; Masud, U. A Symbols Based BCI Paradigm for Intelligent Home Control Using P300 Event-Related Potentials. Sensors 2022, 22, 10000. [Google Scholar] [CrossRef]
  155. Ravi, A.; Beni, N.H.; Manuel, J.; Jiang, N. Comparing User-Dependent and User-Independent Training of CNN for SSVEP BCI. J. Neural Eng. 2020, 17, 026028. [Google Scholar] [CrossRef]
  156. Zhao, X.; Du, Y.; Zhang, R. A CNN-Based Multi-Target Fast Classification Method for AR-SSVEP. Comput. Biol. Med. 2022, 141, 105042. [Google Scholar] [CrossRef]
  157. Bhuvaneshwari, M.; Grace Mary Kanaga, E.; George, S.T. Classification of SSVEP-EEG Signals Using CNN and Red Fox Optimization for BCI Applications. Proc. Inst. Mech. Eng. Part H 2023, 237, 134–143. [Google Scholar] [CrossRef]
  158. Xu, D.; Tang, F.; Li, Y.; Zhang, Q.; Feng, X. FB-CCNN: A Filter Bank Complex Spectrum Convolutional Neural Network with Artificial Gradient Descent Optimization. Brain Sci. 2023, 13, 780. [Google Scholar] [CrossRef]
  159. Li, X.; Yang, S.; Fei, N.; Wang, J.; Huang, W.; Hu, Y. A Convolutional Neural Network for SSVEP Identification by Using a Few-Channel EEG. Bioengineering 2024, 11, 613. [Google Scholar] [CrossRef]
  160. Li, M.; Han, J.; Yang, J. Automatic Feature Extraction and Fusion Recognition of Motor Imagery EEG Using Multilevel Multiscale CNN. Med. Biol. Eng. Comput. 2021, 59, 2037–2050. [Google Scholar] [CrossRef] [PubMed]
  161. Salimpour, S.; Kalbkhani, H.; Seyyedi, S.; Solouk, V. Stockwell Transform and Semi-Supervised Feature Selection from Deep Features for Classification of BCI Signals. Sci. Rep. 2022, 12, 11773. [Google Scholar] [CrossRef]
  162. Yang, J.; Gao, S.; Shen, T. A Two-Branch CNN Fusing Temporal and Frequency Features for Motor Imagery EEG Decoding. Entropy 2022, 24, 376. [Google Scholar] [CrossRef]
  163. Fan, Z.; Xi, X.; Gao, Y.; Wang, T.; Fang, F.; Houston, M.; Zhang, Y.; Li, L.; Lü, Z. Joint Filter-Band-Combination and Multi-View CNN for Electroencephalogram Decoding. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 2101–2110. [Google Scholar] [CrossRef] [PubMed]
  164. Lun, X.; Zhang, Y.; Zhu, M.; Lian, Y.; Hou, Y. A Combined Virtual Electrode-Based ESA and CNN Method for MI-EEG Signal Feature Extraction and Classification. Sensors 2023, 23, 8893. [Google Scholar] [CrossRef]
  165. Chen, G.; Zhang, X.; Zhang, J.; Li, F.; Duan, S. A Novel Brain-Computer Interface Based on Audio-Assisted Visual Evoked EEG and Spatial-Temporal Attention CNN. Front. Neurorobot. 2022, 16, 995552. [Google Scholar] [CrossRef] [PubMed]
  166. Ma, T.; Wang, S.; Xia, Y.; Zhu, X.; Evans, J.; Sun, Y.; He, S. CNN-Based Classification of fNIRS Signals in Motor Imagery BCI System. J. Neural Eng. 2021, 18, 056019. [Google Scholar] [CrossRef]
  167. Dale, R.; O’sullivan, T.D.; Howard, S.; Orihuela-Espina, F.; Dehghani, H. System Derived Spatial-Temporal CNN for High-Density fNIRS BCI. IEEE Open J. Eng. Med. Biol. 2023, 4, 85–95. [Google Scholar] [CrossRef] [PubMed]
  168. Hamid, H.; Naseer, N.; Nazeer, H.; Khan, M.J.; Khan, R.A.; Shahbaz Khan, U. Analyzing Classification Performance of fNIRS-BCI for Gait Rehabilitation Using Deep Neural Networks. Sensors 2022, 22, 1932. [Google Scholar] [CrossRef]
  169. Lawhern, V.J.; Solon, A.J.; Waytowich, N.R.; Gordon, S.M.; Hung, C.P.; Lance, B.J. EEGNet: A Compact Convolutional Neural Network for EEG-Based Brain–Computer Interfaces. J. Neural Eng. 2018, 15, 056013. [Google Scholar] [CrossRef] [PubMed]
  170. Mwata-Velu, T.; Niyonsaba-Sebigunda, E.; Avina-Cervantes, J.G.; Ruiz-Pinales, J.; Velu-A-Gulenga, N.; Alonso-Ramírez, A.A. Motor Imagery Multi-Tasks Classification for BCIs Using the NVIDIA Jetson TX2 Board and the EEGNet Network. Sensors 2023, 23, 4164. [Google Scholar] [CrossRef] [PubMed]
  171. Rao, Y.; Zhang, L.; Jing, R.; Huo, J.; Yan, K.; He, J.; Hou, X.; Mu, J.; Geng, W.; Cui, H.; et al. An Optimized EEGNet Decoder for Decoding Motor Image of Four Class Fingers Flexion. Brain Res. 2024, 1841, 149085. [Google Scholar] [CrossRef]
  172. Wang, H.; Wang, Z.; Sun, Y.; Yuan, Z.; Xu, T.; Li, J. A Cascade xDAWN EEGNet Structure for Unified Visual-Evoked Related Potential Detection. IEEE Trans. Neural Syst. Rehabil. Eng. 2024, 32, 2270–2280. [Google Scholar] [CrossRef]
  173. Yao, H.; Liu, K.; Deng, X.; Tang, X.; Yu, H. FB-EEGNet: A Fusion Neural Network across Multi-Stimulus for SSVEP Target Detection. J. Neurosci. Methods 2022, 379, 109674. [Google Scholar] [CrossRef]
  174. Riyad, M.; Khalil, M.; Adib, A. MI-EEGNET: A Novel Convolutional Neural Network for Motor Imagery Classification. J. Neurosci. Methods 2021, 353, 109037. [Google Scholar] [CrossRef]
  175. Park, D.; Park, H.; Kim, S.; Choo, S.; Lee, S.; Nam, C.S.; Jung, J.-Y. Spatio-Temporal Explanation of 3D-EEGNet for Motor Imagery EEG Classification Using Permutation and Saliency. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 4504–4513. [Google Scholar] [CrossRef]
  176. Wu, X.; Chu, Y.; Li, Q.; Luo, Y.; Zhao, Y.; Zhao, X. AMEEGNet: Attention-Based Multiscale EEGNet for Effective Motor Imagery EEG Decoding. Front. Neurorobot. 2025, 19, 1540033. [Google Scholar] [CrossRef]
  177. Wu, X.; Shi, C.; Yan, L. Driving Attention State Detection Based on GRU-EEGNet. Sensors 2024, 24, 5086. [Google Scholar] [CrossRef]
  178. Shi, R.; Zhao, Y.; Cao, Z.; Liu, C.; Kang, Y.; Zhang, J. Categorizing Objects from MEG Signals Using EEGNet. Cogn. Neurodyn. 2022, 16, 365–377. [Google Scholar] [CrossRef]
  179. Li, H.; Yin, F.; Zhang, R.; Ma, X.; Chen, H. Motor imagery electroencephalogram classification based on sparse spatiotemporal decomposition and channel attention. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi J. Biomed. Eng. Shengwu Yixue Gongchengxue Zazhi 2022, 39, 488–497. [Google Scholar] [CrossRef]
  180. Chunduri, V.; Aoudni, Y.; Khan, S.; Aziz, A.; Rizwan, A.; Deb, N.; Keshta, I.; Soni, M. Multi-Scale Spatiotemporal Attention Network for Neuron Based Motor Imagery EEG Classification. J. Neurosci. Methods 2024, 406, 110128. [Google Scholar] [CrossRef] [PubMed]
  181. Liang, G.; Cao, D.; Wang, J.; Zhang, Z.; Wu, Y. EISATC-Fusion: Inception Self-Attention Temporal Convolutional Network Fusion for Motor Imagery EEG Decoding. IEEE Trans. Neural Syst. Rehabil. Eng. Publ. IEEE Eng. Med. Biol. Soc. 2024, 32, 1535–1545. [Google Scholar] [CrossRef]
  182. Qin, Y.; Li, B.; Wang, W.; Shi, X.; Wang, H.; Wang, X. ETCNet: An EEG-Based Motor Imagery Classification Model Combining Efficient Channel Attention and Temporal Convolutional Network. Brain Res. 2024, 1823, 148673. [Google Scholar] [CrossRef] [PubMed]
  183. Xie, X.; Chen, L.; Qin, S.; Zha, F.; Fan, X. Bidirectional Feature Pyramid Attention-Based Temporal Convolutional Network Model for Motor Imagery Electroencephalogram Classification. Front. Neurorobot. 2024, 18, 1343249. [Google Scholar] [CrossRef]
  184. AL-Quraishi, M.S.; Tan, W.H.; Elamvazuthi, I.; Ooi, C.P.; Saad, N.M.; Al-Hiyali, M.I.; Karim, H.A.; Azhar Ali, S.S. Cortical Signals Analysis to Recognize Intralimb Mobility Using Modified RNN and Various EEG Quantities. Heliyon 2024, 10, e30406. [Google Scholar] [CrossRef] [PubMed]
  185. Abdulghani, M.M.; Walters, W.L.; Abed, K.H. Imagined Speech Classification Using EEG and Deep Learning. Bioengineering 2023, 10, 649. [Google Scholar] [CrossRef]
  186. Ma, X.; Qiu, S.; Du, C.; Xing, J.; He, H. Improving EEG-Based Motor Imagery Classification via Spatial and Temporal Recurrent Neural Networks. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1903–1906. [Google Scholar] [CrossRef]
  187. Akhter, J.; Naseer, N.; Nazeer, H.; Khan, H.; Mirtaheri, P. Enhancing Classification Accuracy with Integrated Contextual Gate Network: Deep Learning Approach for Functional Near-Infrared Spectroscopy Brain-Computer Interface Application. Sensors 2024, 24, 3040. [Google Scholar] [CrossRef] [PubMed]
  188. Shelishiyah, R.; Thiyam, D.B.; Margaret, M.J.; Banu, N.M.M. A Hybrid CNN Model for Classification of Motor Tasks Obtained from Hybrid BCI System. Sci. Rep. 2025, 15, 1360. [Google Scholar] [CrossRef]
  189. Tanaka, G.; Yamane, T.; Héroux, J.B.; Nakane, R.; Kanazawa, N.; Takeda, S.; Numata, H.; Nakano, D.; Hirose, A. Recent Advances in Physical Reservoir Computing: A Review. Neural Netw. 2019, 115, 100–123. [Google Scholar] [CrossRef] [PubMed]
  190. Rusev, G.; Yordanov, S.; Nedelcheva, S.; Banderov, A.; Sauter-Starace, F.; Koprinkova-Hristova, P.; Kasabov, N. Decoding Brain Signals in a Neuromorphic Framework for a Personalized Adaptive Control of Human Prosthetics. Biomimetics 2025, 10, 183. [Google Scholar] [CrossRef]
  191. Braun, J.-M.; Fauth, M.; Berger, M.; Huang, N.-S.; Simeoni, E.; Gaeta, E.; Rodrigues Do Carmo, R.; García-Betances, R.I.; Arredondo Waldmeyer, M.T.; Gail, A.; et al. A Brain Machine Interface Framework for Exploring Proactive Control of Smart Environments. Sci. Rep. 2024, 14, 11054. [Google Scholar] [CrossRef]
  192. Aberra, A.S.; Peterchev, A.V.; Grill, W.M. Biophysically Realistic Neuron Models for Simulation of Cortical Stimulation. J. Neural Eng. 2018, 15, 66023. [Google Scholar] [CrossRef]
  193. Hines, M.L.; Carnevale, N.T. The NEURON Simulation Environment. Neural Comput. 1997, 9, 1179–1209. [Google Scholar] [CrossRef]
  194. Luff, C.E.; Dzialecka, P.; Acerbo, E.; Williamson, A.; Grossman, N. Pulse-Width Modulated Temporal Interference (PWM-TI) Brain Stimulation. Brain Stimulat. 2024, 17, 92–103. [Google Scholar] [CrossRef]
  195. Wenger, N.; Moraud, E.M.; Gandar, J.; Musienko, P.; Capogrosso, M.; Baud, L.; Le Goff, C.G.; Barraud, Q.; Pavlova, N.; Dominici, N.; et al. Spatiotemporal Neuromodulation Therapies Engaging Muscle Synergies Improve Motor Control after Spinal Cord Injury. Nat. Med. 2016, 22, 138–145. [Google Scholar] [CrossRef] [PubMed]
  196. Capogrosso, M.; Wagner, F.B.; Gandar, J.; Moraud, E.M.; Wenger, N.; Milekovic, T.; Shkorbatova, P.; Pavlova, N.; Musienko, P.; Bezard, E.; et al. Configuration of Electrical Spinal Cord Stimulation through Real-Time Processing of Gait Kinematics. Nat. Protoc. 2018, 13, 2031–2061. [Google Scholar] [CrossRef]
  197. Westwick, D.T.; Kearney, R.E. Identification of Nonlinear Physiological Systems; John Wiley & Sons: Hoboken, NJ, USA, 2003; ISBN 978-0-471-27456-8. [Google Scholar]
  198. Wagner, T.; Valero-Cabre, A.; Pascual-Leone, A. Noninvasive Human Brain Stimulation. Annu. Rev. Biomed. Eng. 2007, 9, 527–565. [Google Scholar] [CrossRef] [PubMed]
  199. Traub, R.D.; Wong, R.K.; Miles, R.; Michelson, H. A Model of a CA3 Hippocampal Pyramidal Neuron Incorporating Voltage-Clamp Data on Intrinsic Conductances. J. Neurophysiol. 1991, 66, 635–650. [Google Scholar] [CrossRef]
  200. Izhikevich, E.M. Simple Model of Spiking Neurons. IEEE Trans. Neural Netw. 2003, 14, 1569–1572. [Google Scholar] [CrossRef]
  201. Gerstner, W.; Kistler, W.M.; Naud, R.; Paninski, L. Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition, 1st ed.; Cambridge University Press: Cambridge, UK, 2014; ISBN 978-1-107-06083-8. [Google Scholar]
  202. Davies, M.; Srinivasa, N.; Lin, T.-H.; Chinya, G.; Cao, Y.; Choday, S.H.; Dimou, G.; Joshi, P.; Imam, N.; Jain, S.; et al. Loihi: A Neuromorphic Manycore Processor with On-Chip Learning. IEEE Micro 2018, 38, 82–99. [Google Scholar] [CrossRef]
  203. Paknahad, J.; Loizos, K.; Humayun, M.; Lazzi, G. Targeted Stimulation of Retinal Ganglion Cells in Epiretinal Prostheses: A Multiscale Computational Study. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 2548–2556. [Google Scholar] [CrossRef]
  204. Grossman, N.; Bono, D.; Dedic, N.; Kodandaramaiah, S.B.; Rudenko, A.; Suk, H.-J.; Cassara, A.M.; Neufeld, E.; Kuster, N.; Tsai, L.-H.; et al. Noninvasive Deep Brain Stimulation via Temporally Interfering Electric Fields. Cell 2017, 169, 1029–1041.e16. [Google Scholar] [CrossRef]
  205. Wu, C.-W.; Lin, B.-S.; Zhang, Z.; Hsieh, T.-H.; Liou, J.-C.; Lo, W.-L.; Li, Y.-T.; Chiu, S.-C.; Peng, C.-W. Pilot Study of Using Transcranial Temporal Interfering Theta-Burst Stimulation for Modulating Motor Excitability in Rat. J. NeuroEng. Rehabil. 2024, 21, 147. [Google Scholar] [CrossRef]
  206. Rampersad, S.; Roig-Solvas, B.; Yarossi, M.; Kulkarni, P.P.; Santarnecchi, E.; Dorval, A.D.; Brooks, D.H. Prospects for Transcranial Temporal Interference Stimulation in Humans: A Computational Study. NeuroImage 2019, 202, 116124. [Google Scholar] [CrossRef] [PubMed]
  207. Caldas-Martinez, S.; Goswami, C.; Forssell, M.; Cao, J.; Barth, A.L.; Grover, P. Cell-Specific Effects of Temporal Interference Stimulation on Cortical Function. Commun. Biol. 2024, 7, 1076. [Google Scholar] [CrossRef]
  208. Shahdoost, S.; Frost, S.B.; Guggenmos, D.J.; Borrell, J.A.; Dunham, C.; Barbay, S.; Nudo, R.J.; Mohseni, P. A Brain-Spinal Interface (BSI) System-on-Chip (SoC) for Closed-Loop Cortically-Controlled Intraspinal Microstimulation. Analog Integr. Circuits Signal Process. 2018, 95, 1–16. [Google Scholar] [CrossRef] [PubMed]
  209. Capogrosso, M.; Wenger, N.; Raspopovic, S.; Musienko, P.; Beauparlant, J.; Bassi Luciani, L.; Courtine, G.; Micera, S. A Computational Model for Epidural Electrical Stimulation of Spinal Sensorimotor Circuits. J. Neurosci. 2013, 33, 19326–19340. [Google Scholar] [CrossRef] [PubMed]
  210. Song, D.; Chan, R.H.M.; Marmarelis, V.Z.; Hampson, R.E.; Deadwyler, S.A.; Berger, T.W. Nonlinear Dynamic Modeling of Spike Train Transformations for Hippocampal-Cortical Prostheses. IEEE Trans. Biomed. Eng. 2007, 54, 1053–1066. [Google Scholar] [CrossRef] [PubMed]
  211. Birdsall, T.; Fox, W. The Theory of Signal Detectability. IEEE Trans. Inf. Theory 1954, 4, 171–212. [Google Scholar]
  212. Green, D.M.; Swets, J.A. Signal Detection Theory and Psychophysics; Wiley: New York, NY, USA, 1966; Volume 1. [Google Scholar]
  213. Lopes-Dias, C.; Sburlea, A.I.; Müller-Putz, G.R. Online Asynchronous Decoding of Error-Related Potentials during the Continuous Control of a Robot. Sci. Rep. 2019, 9, 17596. [Google Scholar] [CrossRef]
  214. Soriano-Segura, P.; Ortiz, M.; Iáñez, E.; Azorín, J.M. Design of a Brain-Machine Interface for Reducing False Activations of a Lower-Limb Exoskeleton Based on Error Related Potential. Comput. Methods Programs Biomed. 2024, 255, 108332. [Google Scholar] [CrossRef]
  215. Hand, D.J.; Till, R.J. A Simple Generalisation of the Area Under the ROC Curve for Multiple Class Classification Problems. Mach. Learn. 2001, 45, 171–186. [Google Scholar] [CrossRef]
  216. McFarland, D.J.; Sarnacki, W.A.; Wolpaw, J.R. Brain–Computer Interface (BCI) Operation: Optimizing Information Transfer Rates. Biol. Psychol. 2003, 63, 237–251. [Google Scholar] [CrossRef]
  217. Schlögl, A.; Keinrath, C.; Scherer, R.; Pfurtscheller, G. Information Transfer of an EEG-Based Brain Computer Interface. In Proceedings of the First International IEEE EMBS Conference on Neural Engineering, Capri Island, Italy, 20–22 March 2003; IEEE: Piscataway, NJ, USA, 2003; pp. 641–644. [Google Scholar]
  218. Wolpaw, J.R.; Birbaumer, N.; Mcfarland, D.J.; Pfurtscheller, G.; Vaughan, T.M. Brain-Computer Interfaces for Communication and Control. Clin. Neurophysiol. 2002, 113, 767–791. [Google Scholar] [CrossRef] [PubMed]
  219. Vissarionov, S.V.; Baindurashvili, A.G.; Kryukova, I.A. International Standards for Neurological Classification of Spinal Cord Injuries (ASIA/ISNCSCI Scale, Revised 2015). Pediatr. Traumatol. Orthop. Reconstr. Surg. 2016, 4, 67–72. [Google Scholar] [CrossRef]
  220. Morris, C.; Bartlett, D. Gross Motor Function Classification System: Impact and Utility. Dev. Med. Child Neurol. 2004, 46, 60–65. [Google Scholar] [CrossRef]
  221. Yozbatiran, N.; Der-Yeghiaian, L.; Cramer, S.C. A Standardized Approach to Performing the Action Research Arm Test. Neurorehabil. Neural Repair 2008, 22, 78–90. [Google Scholar] [CrossRef] [PubMed]
  222. Burton, Q.; Lejeune, T.; Dehem, S.; Lebrun, N.; Ajana, K.; Edwards, M.G.; Everard, G. Performing a Shortened Version of the Action Research Arm Test in Immersive Virtual Reality to Assess Post-Stroke Upper Limb Activity. J. NeuroEng. Rehabil. 2022, 19, 133. [Google Scholar] [CrossRef]
  223. Daly, J.J.; Nethery, J.; McCabe, J.P.; Brenner, I.; Rogers, J.; Gansen, J.; Butler, K.; Burdsall, R.; Roenigk, K.; Holcomb, J. Development and Testing of the Gait Assessment and Intervention Tool (G.A.I.T.): A Measure of Coordinated Gait Components. J. Neurosci. Methods 2009, 178, 334–339. [Google Scholar] [CrossRef]
  224. Lawton, M.P.; Brody, E.M. Assessment of Older People: Self-Maintaining and Instrumental Activities of Daily Living. Gerontologist 1969, 9, 179–186. [Google Scholar] [CrossRef]
  225. Jutten, R.J.; Peeters, C.F.W.; Leijdesdorff, S.M.J.; Visser, P.J.; Maier, A.B.; Terwee, C.B.; Scheltens, P.; Sikkes, S.A.M. Detecting Functional Decline from Normal Aging to Dementia: Development and Validation of a Short Version of the Amsterdam IADL Questionnaire. Alzheimer’s Dement. Diagn. Assess. Dis. Monit. 2017, 8, 26–35. [Google Scholar] [CrossRef] [PubMed]
  226. Dubbelman, M.A.; Verrijp, M.; Facal, D.; Sánchez-Benavides, G.; Brown, L.J.E.; van der Flier, W.M.; Jokinen, H.; Lee, A.; Leroi, I.; Lojo-Seoane, C.; et al. The Influence of Diversity on the Measurement of Functional Impairment: An International Validation of the Amsterdam IADL Questionnaire in Eight Countries. Alzheimer’s Dement. Diagn. Assess. Dis. Monit. 2020, 12, e12021. [Google Scholar] [CrossRef]
  227. Ditunno, J.; Ditunno, P.; Graziani, V.; Scivoletto, G.; Bernardi, M.; Castellano, V.; Marchetti, M.; Barbeau, H.; Frankel, H.; D’Andrea Greve, J.; et al. Walking Index for Spinal Cord Injury (WISCI): An International Multicenter Validity and Reliability Study. Spinal Cord 2000, 38, 234–243. [Google Scholar] [CrossRef]
  228. Fugl-Meyer, A.R.; Jääskö, L.; Leyman, I.; Olsson, S.; Steglind, S. The Post-Stroke Hemiplegic Patient. 1. a Method for Evaluation of Physical Performance. Scand. J. Rehabil. Med. 1975, 7, 13–31. [Google Scholar] [CrossRef]
  229. Dominici, N.; Keller, U.; Vallery, H.; Friedli, L.; van den Brand, R.; Starkey, M.L.; Musienko, P.; Riener, R.; Courtine, G. Versatile Robotic Interface to Evaluate, Enable and Train Locomotion and Balance after Neuromotor Disorders. Nat. Med. 2012, 18, 1142–1147. [Google Scholar] [CrossRef]
  230. Smith, M.M.; Rao, R.P.N.; Olson, J.D.; Darvas, F. Utilizing Subdermal Electrodes as a Noninvasive Alternative for Motor-Based BCIs. In Brain–Computer Interfaces Handbook; CRC Press: Boca Raton, FL, USA, 2018; pp. 269–277. [Google Scholar]
  231. Maren, E.V.; Alnes, S.L.; Cruz, J.R.D.; Sobolewski, A.; Friedrichs-Maeder, C.; Wohler, K.; Barlatey, S.L.; Feruglio, S.; Fuchs, M.; Vlachos, I.; et al. Feasibility, Safety, and Performance of Full-Head Subscalp EEG Using Minimally Invasive Electrode Implantation. Neurology 2024, 102, e209428. [Google Scholar] [CrossRef]
  232. Mirzakhalili, E.; Barra, B.; Capogrosso, M.; Lempka, S.F. Biophysics of Temporal Interference Stimulation. Cell Syst. 2020, 11, 557–572.e5. [Google Scholar] [CrossRef]
  233. Zhu, Z.; Xiong, Y.; Chen, Y.; Jiang, Y.; Qian, Z.; Lu, J.; Liu, Y.; Zhuang, J. Temporal Interference (TI) Stimulation Boosts Functional Connectivity in Human Motor Cortex: A Comparison Study with Transcranial Direct Current Stimulation (tDCS). Neural Plast. 2022, 2022, 7605046. [Google Scholar] [CrossRef]
  234. Violante, I.R.; Alania, K.; Cassarà, A.M.; Neufeld, E.; Acerbo, E.; Carron, R.; Williamson, A.; Kurtin, D.L.; Rhodes, E.; Hampshire, A.; et al. Non-Invasive Temporal Interference Electrical Stimulation of the Human Hippocampus. Nat. Neurosci. 2023, 26, 1994–2004. [Google Scholar] [CrossRef]
  235. Wessel, M.J.; Beanato, E.; Popa, T.; Windel, F.; Vassiliadis, P.; Menoud, P.; Beliaeva, V.; Violante, I.R.; Abderrahmane, H.; Dzialecka, P.; et al. Noninvasive Theta-Burst Stimulation of the Human Striatum Enhances Striatal Activity and Motor Skill Learning. Nat. Neurosci. 2023, 26, 2005–2016. [Google Scholar] [CrossRef]
  236. De Paz, J.M.M.; Macé, E. Functional Ultrasound Imaging: A Useful Tool for Functional Connectomics? NeuroImage 2021, 245, 118722. [Google Scholar] [CrossRef]
  237. Norman, S.L.; Maresca, D.; Christopoulos, V.N.; Griggs, W.S.; Demene, C.; Tanter, M.; Shapiro, M.G.; Andersen, R.A. Single-Trial Decoding of Movement Intentions Using Functional Ultrasound Neuroimaging. Neuron 2021, 109, 1554–1566.e4. [Google Scholar] [CrossRef]
  238. Zheng, H.; Niu, L.; Qiu, W.; Liang, D.; Long, X.; Li, G.; Liu, Z.; Meng, L. The Emergence of Functional Ultrasound for Noninvasive Brain–Computer Interface. Research 2023, 6, 0200. [Google Scholar] [CrossRef] [PubMed]
  239. Nolan, M. New Neurotech Eschews Electricity for Ultrasound—IEEE Spectrum. Available online: https://spectrum.ieee.org/bci-ultrasound (accessed on 3 March 2025).
  240. Lefebvre, A.T.; Rodriguez, C.L.; Bar-Kochba, E.; Steiner, N.E.; Mirski, M.; Blodgett, D.W. High-Resolution Transcranial Optical Imaging of in Vivo Neural Activity. Sci. Rep. 2024, 14, 24756. [Google Scholar] [CrossRef] [PubMed]
  241. Oh, S.; Jekal, J.; Won, J.; Lim, K.S.; Jeon, C.-Y.; Park, J.; Yeo, H.-G.; Kim, Y.G.; Lee, Y.H.; Ha, L.J.; et al. A Stealthy Neural Recorder for the Study of Behaviour in Primates. Nat. Biomed. Eng. 2024, 9, 882–895. [Google Scholar] [CrossRef] [PubMed]
  242. Ivanov, D.; Chezhegov, A.; Kiselev, M.; Grunin, A.; Larionov, D. Neuromorphic Artificial Intelligence Systems. Front. Neurosci. 2022, 16, 959626. [Google Scholar] [CrossRef]
  243. Musienko, P.; Van Den Brand, R.; Maerzendorfer, O.; Larmagnac, A.; Courtine, G. Combinatory Electrical and Pharmacological Neuroprosthetic Interfaces to Regain Motor Function After Spinal Cord Injury. IEEE Trans. Biomed. Eng. 2009, 56, 2707–2711. [Google Scholar] [CrossRef]
  244. Minev, I.R.; Musienko, P.; Hirsch, A.; Barraud, Q.; Wenger, N.; Moraud, E.M.; Gandar, J.; Capogrosso, M.; Milekovic, T.; Asboth, L.; et al. Electronic Dura Mater for Long-Term Multimodal Neural Interfaces. Science 2015, 347, 159–163. [Google Scholar] [CrossRef]
  245. Bloch, J.; Lacour, S.P.; Courtine, G. Electronic Dura Mater Meddling in the Central Nervous System. JAMA Neurol. 2017, 74, 470–475. [Google Scholar] [CrossRef] [PubMed]
  246. Deriabin, K.V.; Kirichenko, S.O.; Lopachev, A.V.; Sysoev, Y.; Musienko, P.E.; Islamova, R.M. Ferrocenyl-Containing Silicone Nanocomposites as Materials for Neuronal Interfaces. Compos. Part B Eng. 2022, 236, 109838. [Google Scholar] [CrossRef]
  247. Afanasenkau, D.; Kalinina, D.; Lyakhovetskii, V.; Tondera, C.; Gorsky, O.; Moosavi, S.; Pavlova, N.; Merkulyeva, N.; Kalueff, A.V.; Minev, I.R.; et al. Rapid Prototyping of Soft Bioelectronic Implants for Use as Neuromuscular Interfaces. Nat. Biomed. Eng. 2020, 4, 1010–1022. [Google Scholar] [CrossRef]
  248. Boufidis, D.; Garg, R.; Angelopoulos, E.; Cullen, D.K.; Vitale, F. Bio-Inspired Electronics: Soft, Biohybrid, and “Living” Neural Interfaces. Nat. Commun. 2025, 16, 1861. [Google Scholar] [CrossRef]
  249. Tian, M.; Ma, Z.; Yang, G.-Z. Micro/Nanosystems for Controllable Drug Delivery to the Brain. The Innovation 2024, 5, 100548. [Google Scholar] [CrossRef]
  250. Qian, T.; Yu, C.; Zhou, X.; Ma, P.; Wu, S.; Xu, L.; Shen, J. Ultrasensitive Dopamine Sensor Based on Novel Molecularly Imprinted Polypyrrole Coated Carbon Nanotubes. Biosens. Bioelectron. 2014, 58, 237–241. [Google Scholar] [CrossRef] [PubMed]
  251. Chen, J.; Huang, H.; Zeng, Y.; Tang, H.; Li, L. A Novel Composite of Molecularly Imprinted Polymer-Coated PdNPs for Electrochemical Sensing Norepinephrine. Biosens. Bioelectron. 2015, 65, 366–374. [Google Scholar] [CrossRef]
  252. Alsharabi, R.M.; Pandey, S.K.; Singh, J.; Kayastha, A.M.; Saxena, P.S.; Srivastava, A. Ultra-Sensitive Electrochemical Detection of Glutamate Based on Reduced Graphene Oxide/Ni Foam Nanocomposite Film Fabricated via Electrochemical Exfoliation Technique Using Waste Batteries Graphite Rods. Microchem. J. 2024, 199, 110055. [Google Scholar] [CrossRef]
  253. Han, J.; Stine, J.M.; Chapin, A.A.; Ghodssi, R. A Portable Electrochemical Sensing Platform for Serotonin Detection Based on Surface-Modified Carbon Fiber Microelectrodes. Anal. Methods 2023, 15, 1096–1104. [Google Scholar] [CrossRef] [PubMed]
  254. Johnson, M.D.; Franklin, R.K.; Gibson, M.D.; Brown, R.B.; Kipke, D.R. Implantable Microelectrode Arrays for Simultaneous Electrophysiological and Neurochemical Recordings. J. Neurosci. Methods 2008, 174, 62–70. [Google Scholar] [CrossRef] [PubMed]
  255. Zebda, A.; Cosnier, S.; Alcaraz, J.-P.; Holzinger, M.; Le Goff, A.; Gondran, C.; Boucher, F.; Giroud, F.; Gorgy, K.; Lamraoui, H.; et al. Single Glucose Biofuel Cells Implanted in Rats Power Electronic Devices. Sci. Rep. 2013, 3, 1516. [Google Scholar] [CrossRef] [PubMed]
  256. Xu, C.; Song, Y.; Han, M.; Zhang, H. Portable and Wearable Self-Powered Systems Based on Emerging Energy Harvesting Technology. Microsyst. Nanoeng. 2021, 7, 25. [Google Scholar] [CrossRef]
Figure 1. Functional architecture of BCI, CBI, BBI.
Figure 2. Spectrum of recording and stimulation methods for NCIs. Non-invasive (green): 1—transcutaneous electrical nerve stimulation, 2—temporal interference stimulation, 3—electroencephalography, 4—functional near-infrared spectroscopy, 5—transcranial direct current stimulation, 6—transcranial alternating current stimulation, 7—magnetoencephalography, 8—transcranial magnetic stimulation; semi-invasive (yellow): 9—subdural electrostimulation, 10—epidural electrostimulation, 11—endovascular electrodes, 12—electrocorticography, 13—peripheral nerve stimulation, 14—functional electrical stimulation; invasive (red): 15—intraspinal microstimulation, 16—deep brain stimulation, 17—intracortical microstimulation, 18—visual prosthesis, 19—cochlear implant.
Figure 3. Example of an ROC curve with AUC = 0.91. The blue dashed line represents the baseline performance (AUC = 0.5), corresponding to random classification. BCI systems performing below this level are considered ineffective, as their output is no better than chance.
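As a companion to Figure 3, the short sketch below illustrates how an ROC curve and its AUC can be obtained for a binary BCI classifier. It is a minimal, NumPy-only illustration on synthetic scores and labels (all values are placeholders, not data from the studies cited above); in practice a library routine such as scikit-learn's roc_curve would typically be used.

```python
import numpy as np

def roc_points(scores: np.ndarray, labels: np.ndarray):
    """Return (FPR, TPR) pairs for all thresholds, from strict to lax.

    labels: 1 = target class (e.g., intended command), 0 = non-target.
    scores: classifier confidence that a trial belongs to the target class.
    """
    order = np.argsort(-scores)                 # descending score = progressively looser threshold
    labels = labels[order]
    tp = np.cumsum(labels)                      # true positives accepted so far
    fp = np.cumsum(1 - labels)                  # false positives accepted so far
    tpr = tp / labels.sum()
    fpr = fp / (len(labels) - labels.sum())
    return np.concatenate(([0.0], fpr)), np.concatenate(([0.0], tpr))

def auc_trapezoid(fpr: np.ndarray, tpr: np.ndarray) -> float:
    """Area under the ROC curve by the trapezoidal rule (0.5 = chance level)."""
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, size=200)             # synthetic trial labels
    scores = rng.normal(loc=1.5 * labels, scale=1.0)  # target trials score higher on average
    fpr, tpr = roc_points(scores, labels)
    print(f"AUC = {auc_trapezoid(fpr, tpr):.2f}")     # roughly 0.85 for this toy separation
```

Plotting TPR against FPR yields a curve of the kind shown in Figure 3; the diagonal corresponds to AUC = 0.5, the chance-level baseline against which a BCI classifier is judged.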
Table 1. Comparative Table of Neural–Computer Interface Methods. NCI Use = typical NCI applications (P300, SSVEP, CBI, BBI, etc.); SRes = spatial resolution; TRes = temporal resolution; Depth = penetration depth; Inv = invasiveness level: N = Non-Invasive, S = Semi-Invasive, I = Invasive; Freq = frequency; TypeNeuroAct = type of neural activity the method targets; Cost: L = Low, M = Medium, H = High, VH = Very High; NCI Compatibility: Low = experimental or limited integration (short-term or animal-only use, requiring substantial adaptation); Med = partially established integration (human-compatible but with technical or regulatory limitations); High = well-documented integration with NCIs (chronic or semi-chronic human use, supported by existing protocols and clinical experience).
Method | SRes (mm) | TRes (ms) | Depth (mm) | Inv | Freq (Hz) | TypeNeuroAct | Cost | NCI Use | NCI Compatibility
TENS | ~20 | ~5 | Surface, 5–20 | N | 1–150 | Gate Control Hypothesis | L | Pain, sensory mod CBI | Med
TIS | ~10 | ~5–10 | Deep, 30–50 | N | 2000–5000 (carrier) | Selective neuromodulation | L | Experimental CBI | Low
EEG | 10–20 | ~1 | Cortex, subcortical | N | 0.05–35 (practical) | PSP (distorted LFP) | M | P300, SSVEP, MI BCIs | High
fNIRS | 10–30 | ~1000 | ~10–20 | N | 0.01–2 | Hemodynamics | M | Hybrid BCIs | Med
tDCS | ~10 | ~1000 | Superficial | N | DC | Membrane polarization | L | Hybrid BCIs | Med
tACS | ~10 | ~1000 | Superficial | N | 0.1–5000 | Oscillatory entrainment | L | Rhythmic neuromodulation | Med
MEG | ~5–10 | ~1 | Superficial | N | 0.1–100 | PSP | VH | Research BCIs | Med
TMS | ~5–10 | ~10 | Cortex | N | Single/series pulses | Depolarization of cortical pyramidal neurons | M | Stimulation, phosphenes (BBI) | Med
Subdural SCS | ~0.5–2 | ~1 | Spinal/Subdural | S | 40–10,000 | Dorsal column afferents, interneurons | H | BSI, motor recovery | Med
Epidural SCS | ~5–10 | ~1 | Spinal/Epidural | S | 20–10,000 | Dorsal column afferents, interneurons | H | BSI, motor recovery | Med
Stentrode | ~1–2 | ~1 | Venous, 2–3 | S | 0.5–200 | LFP | H | Endovascular BCIs | High
ECoG | ~1 | ~1 | Cortex, 1–2 | S | 0.5–5000 | PSP, LFP | H | High-res BCI, CBI | High
PNS | ~1–5 | ~1 | Peripheral | S | 1–1000 | Stimulation of afferent fibers | M | Sensorimotor feedback | Med
FES | ~10 | ~1 | Muscle/Nerve | S | 10–100 | Stimulation of efferent fibers | M | Neurorehab, BSI | Med
ISMS | ~0.1 | ~0.1 | Spinal, 1–3 | I | 10–100 | Stimulation of motoneurons | M | Motor recovery, BSI | High
DBS | ~1–4 | ~1 | Deep nuclei, 60–80 | I | 1–200 | Stimulation of neural ensembles | H | Therapeutic CBI | High
ICMS | ~0.05–0.1 | ~0.1 | Cortex, 0.5–2.5 | I | 10–300 | Neuronal cell bodies, apical dendrites | VH | High-res motor/sensory NCI | High
Retinal implant | Low | ~10 | Retina, 0.2–0.5 | I | 10–50 | Ganglion/bipolar cell stimulation | H | Vision, sensory CBI | High
Cochlear implant | Medium | ~10 | Cochlea, 25–30 | I | 100–10,000 | Afferents of the auditory nerve | H | Hearing, speech CBI | High
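The frequency ranges in Table 1 feed directly into the digitization stage of an NCI front end: by the Nyquist criterion, the sampling rate must exceed twice the highest frequency of interest. The sketch below copies a few upper band edges of recording modalities from the table and computes the resulting lower bounds; the 2.5x oversampling margin is an illustrative assumption, not a value taken from this review.

```python
# Upper band edges (Hz) from the Freq column of Table 1 (illustrative subset of recording methods).
UPPER_BAND_EDGE_HZ = {
    "EEG (practical)": 35,
    "MEG": 100,
    "Stentrode": 200,
    "ECoG": 5000,
}

OVERSAMPLING_MARGIN = 2.5  # assumed allowance for anti-aliasing filter roll-off

def sampling_bounds(f_max_hz: float) -> tuple[float, float]:
    """Return (Nyquist minimum, suggested rate with margin) in Hz."""
    nyquist = 2.0 * f_max_hz
    return nyquist, OVERSAMPLING_MARGIN * nyquist

if __name__ == "__main__":
    for method, f_max in UPPER_BAND_EDGE_HZ.items():
        nyq, practical = sampling_bounds(f_max)
        print(f"{method:18s} f_max={f_max:>5} Hz  Nyquist >= {nyq:>7.0f} Hz  with margin ~ {practical:>8.0f} Hz")
```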
Table 2. Comparative Overview of Modeling Stages and Approaches in NCIs. CC = Computational Cost (L = Low, M = Medium, H = High); RT = Real-Time Potential (Y = Yes, P = Probable, N = No); CCP = Cloud Computing Platforms; ASIC = Application-Specific Integrated Circuit; Acc. = accuracy; lat. = latency.
Type of Methods | Data Source | Platform | Algorithms | Key Features | CC | RT

Feature analysis and optimization methods
Temporal analysis | EEG, ECoG, MEG (time series) | PCs, servers (general-purpose CPU) | Statistical characteristics: mean, standard deviation, skewness, etc. | Variable acc. depending on task; generally low lat. | L | Y
Temporal analysis | EEG, ECoG, MEG (time series) | PCs, servers (general-purpose CPU) | Entropy measures: Shannon, Rényi, Sample Entropy, etc. | Medium to high acc.; medium lat., higher for Sample Entropy | L (H for Sample Entropy) | Y
Temporal analysis | EEG, ECoG, MEG (time series) | PCs (general-purpose CPU), FPGA | Hjorth parameters, mean absolute value, zero crossings, slope sign changes, waveform length, maximum fractal length, Willison amplitude, root mean square (RMS), autoregressive and adaptive autoregressive coefficients (AAR) | Low to high acc. and lat. depending on application; higher for AAR | L (M–H for AAR) | Y
Time-frequency analysis | EEG, ECoG, MEG (time series) | PCs (general-purpose CPU), FPGA | Fast Fourier Transform (FFT), Short-Time Fourier Transform (STFT) | Lower acc. for non-phase-locked responses (ERD/ERS) than for phase-locked responses (SSVEP) | L | Y
Time-frequency analysis | EEG, ECoG, MEG (time series) | PCs (general-purpose CPU), FPGA | Power Spectral Density (PSD) | Analysis of dominant rhythms in the resting state, detection of SSVEP, and assessment of neurometabolic activity; acc. high for EEG, very low for ERD/ERS/P300; low lat. | L | Y
Time-frequency analysis | EEG, ECoG, MEG (time series) | PCs (general-purpose CPU), FPGA | Synchrosqueezing transform, Hilbert–Huang transform, wavelet transforms | Unavoidable lat. due to data segmentation requirements | M/H | Y
Spatial methods | EEG, ECoG, MEG, fNIRS (multichannel) | PCs (general-purpose CPU), FPGA, ASIC | Common Spatial Pattern (CSP) and its modifications, EEG source localization, inverse model-based feature extraction | Extraction of informative spatial patterns that enhance between-class differences; dimensionality reduction; high acc.; lat. unavoidable due to data segmentation requirements | L/M | Y
Component analysis | EEG, ECoG, fMRI, MEG, fNIRS | PCs (general-purpose CPU) | PCA, ICA | Dimensionality reduction, artifact removal, separation of mixed signals; acc. high for ICA, moderate for PCA; low lat. | H for ICA, L/M for PCA | Y
Information-theoretic methods | EEG, ECoG, MEA, MEG, fNIRS | PCs, servers (general-purpose CPU), CCP | Mutual information-based best individual feature, mutual information-based rough set reduction, integral square descriptor | Extraction of the most relevant features; approaches theoretical limits of BCI performance; high acc., lat., robustness to noise, and versatility | H | P
Combining features from different sources | EEG, MEG, fNIRS, ECoG, MEA, EMG, eye tracking | PCs, servers (general-purpose CPU), CCP | Multi-view learning, Filter Bank CSP (FBCSP), multi-stream feature fusion networks, canonical correlation analysis (CCA), joint independent component analysis (jICA), feature concatenation | Compensates for limitations of single sources, enhances informativeness, increases reliability, provides contextual understanding; increases lat. | Increasing comp. cost | P
Covariance-based methods | EEG, fNIRS, MEG | PCs, servers (general-purpose CPU), CCP | Contrastive multiple correspondence analysis, tensor-to-vector projection, tensor-based frequency feature combination | High acc.; sensitive to noise; high lat.; sensitive to user independence; requires sufficiently long epochs | H for calibration, L for inference | P
Graph statistics | PET, MEG, EEG, DTI | PCs, servers (general-purpose CPU) | Centrality, modularity, clustering | High acc.; very high lat. | H | P

Classification
Traditional machine-learning methods | EEG, ECoG, fNIRS, PET, fMRI | PCs, servers (general-purpose CPU), CCP | LDA and its variations: Fisher and Bayesian linear discriminant analysis | Fine-tunable; effective on small datasets; ineffective at modeling nonlinear dependencies; less robust to noise and sensitive to outliers | L | Y
Traditional machine-learning methods | EEG, ECoG, fNIRS, fMRI, eye tracking | PCs, servers (general-purpose CPU), CCP | Probabilistic methods: Bayesian networks (BN), naive Bayes (NB), hidden Markov models (HMM) | Robust to noise; assumes feature independence; effective with small datasets | L/M | P
Traditional machine-learning methods | EEG, ECoG, fNIRS, fMRI, MEG | CPU, PCs, servers (general-purpose CPU), CCP | Nearest neighbor and k-nearest neighbors | Simple to implement; effective on small datasets; handles nonlinear dependencies; highly sensitive to noise and outliers; poor scalability to large datasets | M/H | P
Traditional machine-learning methods | EEG, ECoG, fNIRS, fMRI, MEG | CPU, PCs, servers (general-purpose CPU), CCP | Support vector machine (SVM) | Fine-tunable; works well with high-dimensional and sparse features; choice of kernel and hyperparameters strongly affects performance | M | Y
Traditional machine-learning methods | EEG, ECoG, fNIRS, fMRI, DTI | CPU, PCs, servers (general-purpose CPU), CCP | Ensemble approaches: random forest, weighted random forests, boosting | Suitable for noisy data with complex nonlinear dependencies; effective for high-dimensional and sparse data; scales to large datasets thanks to parallelization; handles imbalanced classes well; may overfit without proper tuning | M/H | P
Deep learning (DL) | EEG, ECoG, fNIRS, fMRI, MEG, PET, eye tracking | GPU, PCs, servers (general-purpose CPU), CCP, neuromorphic chips | Convolutional Neural Network (CNN) | Learns features from raw data without prior feature selection; dynamic time-series data must be transformed before input; often used in hybrid schemes; ready-made specialized solutions exist for EEG analysis; alone, difficult to apply to long time sequences and dynamic data without hybridization | H | N
Deep learning (DL) | EEG, ECoG, eye tracking | GPU, PCs, servers (general-purpose CPU), CCP | Feed-forward (FF) neural networks: multilayer perceptrons (MLPs) | Models complex nonlinear dependencies, improving classification over linear methods; learns features from raw data; relatively fast training with proper tuning; prone to overfitting with insufficient data or overly large networks; requires careful architecture and hyperparameter selection; acts as a “black box”, complicating result interpretation | M/H | Y
Deep learning (DL) | EEG, ECoG, MEG, fNIRS, fMRI, PET | GPU/TPU, PCs, servers, CCP, neuromorphic chips | Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU) | Well suited to temporal dependencies and sequences; works with raw time-series data, minimizing manual feature extraction; classifies long- and short-term brain activity patterns; many hyperparameters, complicating interpretation | H | Y
Deep learning (DL) | EEG, ECoG, fNIRS, fMRI, PET, DTI, MEG, eye tracking | GPU, PCs, servers (general-purpose CPU), CCP | Hybrid methods: CNN + SVM, CNN + LSTM | Fine-tunable; combining model types improves acc., noise robustness, and adaptability to heterogeneous data; many hyperparameters, complicating interpretation | H | P
Deep learning (DL) | EEG, ECoG, fNIRS, fMRI, eye tracking | GPU/TPU, PCs, servers, CCP | Meta-learning: Model-Agnostic Meta-Learning and Multi-Domain Model-Agnostic Meta-Learning | Fine-tunable; effective with small datasets; many hyperparameters, complicating result interpretation | H | Y
Reservoir computing (RC) | EEG, ECoG | GPU, CPU, PCs, servers, CCP, neuromorphic chips | Echo State Network (ESN) | Well suited to temporal dependencies and sequences; simple to train; adaptable; robust to noise; requires careful hyperparameter tuning; the random, fixed reservoir structure may cause result instability | L/M | Y
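To make the feature-extraction and classification stages in Table 2 concrete, the sketch below chains one representative method from each block: a minimal two-class Common Spatial Pattern (CSP) implementation followed by Fisher LDA, applied to synthetic multichannel epochs. It is a didactic illustration under simplifying assumptions (two balanced classes, clean synthetic data, no band-pass filtering), not the pipeline of any specific system reviewed here.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def csp_filters(epochs_a, epochs_b, n_pairs=2):
    """Two-class CSP. epochs_* have shape (n_epochs, n_channels, n_samples).

    Returns spatial filters (2 * n_pairs, n_channels) maximizing the variance
    ratio between the two classes.
    """
    def mean_cov(epochs):
        return np.mean([np.cov(e) for e in epochs], axis=0)  # channel covariance per epoch

    ca, cb = mean_cov(epochs_a), mean_cov(epochs_b)
    # Generalized eigendecomposition: ca @ w = lambda * (ca + cb) @ w
    eigvals, eigvecs = eigh(ca, ca + cb)
    order = np.argsort(eigvals)                               # ascending eigenvalues
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return eigvecs[:, picks].T

def log_var_features(epochs, filters):
    """Log-variance of spatially filtered epochs -> (n_epochs, n_filters)."""
    projected = np.einsum("fc,ecs->efs", filters, epochs)
    var = projected.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_ep, n_ch, n_s = 60, 8, 250
    # Synthetic classes: class B carries extra power on the first two channels.
    epochs_a = rng.normal(size=(n_ep, n_ch, n_s))
    epochs_b = rng.normal(size=(n_ep, n_ch, n_s))
    epochs_b[:, :2, :] *= 2.0

    filters = csp_filters(epochs_a[:40], epochs_b[:40])
    X_train = np.vstack([log_var_features(epochs_a[:40], filters),
                         log_var_features(epochs_b[:40], filters)])
    y_train = np.array([0] * 40 + [1] * 40)
    X_test = np.vstack([log_var_features(epochs_a[40:], filters),
                        log_var_features(epochs_b[40:], filters)])
    y_test = np.array([0] * 20 + [1] * 20)

    clf = LinearDiscriminantAnalysis().fit(X_train, y_train)
    print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

Under these assumptions the log-variance CSP features are nearly linearly separable, so LDA reaches high held-out accuracy; real EEG/ECoG data would additionally require band-pass filtering, artifact handling, and cross-validated hyperparameter selection, consistent with the limitations noted in the table.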