Article

NEuroMOrphic Neural-Response Decoding System for Adaptive and Personalized Neuro-Prosthetics’ Control

by Georgi Rusev 1, Svetlozar Yordanov 1, Simona Nedelcheva 1, Alexander Banderov 1, Hugo Lafaye de Micheaux 2, Fabien Sauter-Starace 2, Tetiana Aksenova 2, Petia Koprinkova-Hristova 1,* and Nikola Kasabov 1,*

1 Institute of Information and Communication Technologies, Bulgarian Academy of Sciences, 1113 Sofia, Bulgaria
2 Univ. Grenoble Alpes, CEA, Leti, Clinatec, F-38000 Grenoble, France
* Authors to whom correspondence should be addressed.
Biomimetics 2025, 10(8), 518; https://doi.org/10.3390/biomimetics10080518
Submission received: 30 June 2025 / Revised: 25 July 2025 / Accepted: 3 August 2025 / Published: 7 August 2025
(This article belongs to the Special Issue Advances in Brain–Computer Interfaces 2025)

Abstract

In our previous work, we developed a neuromorphic decoder of the intended movements of tetraplegic patients, called Motor Control Decoder (MCD), using ECoG recordings from the brain motor cortex. Even though the training data are labeled based on the desired movement, there is no guarantee that the patient is satisfied with the action of the effectors. Hence, the need to classify brain signals as satisfactory/unsatisfactory is obvious. Building on that work, we upgrade our neuromorphic MCD with a Neural Response Decoder (NRD) intended to predict whether ECoG data are satisfactory or not, in order to improve MCD accuracy. The main aim is to design an actor–critic structure able to adapt the MCD (actor) via reinforcement learning based on NRD (critic) predictions. For this aim, the NRD was trained using not only the ECoG signal but also the MCD prediction or the prescribed intended movement of the patient. The achieved accuracy of the trained NRD is satisfactory and contributes to improved MCD performance. However, further work has to be carried out to fully utilize the NRD for MCD performance optimization in an on-line manner. The possibility of including feedback from the patient would allow for further improvement of the MCD-NRD accuracy.

1. Introduction

Recent advancements in brain–computer interface (BCI) technology have significantly improved its reliability [1,2], providing revolutionary benefits to patients with neurological conditions such as stroke, spinal cord injuries, and neuro-degenerative disorders [3,4]. In particular, BCI applications to the control of human prostheses have demonstrated remarkable progress in the recent decade [5,6,7,8,9]. Pioneering work using ElectroCorticoGram (ECoG) measurements from the motor cortex to decode the movement intentions of tetraplegic patients has been reported in [10,11,12]. The performance of the developed Motor Control Decoder (MCD) allows patients to use the neuro-prosthesis for several weeks without recalibration. However, unsupervised recalibration/adaptation would provide great value for the home use of such BCI-controlled devices.
For the aim of MCD auto-adaptation, feedback on how it performs on a given task is beneficial. Different methods for assessing a person’s performance and for delivering feedback are used in different scenarios [6,8,9,13,14]. For MCD training, the patient was asked to imagine the desired movement, and the recorded ECoG signals were labeled accordingly. However, since there is no guarantee that the patient is satisfied with the effector actions, the need to classify brain signals as satisfactory/unsatisfactory was raised.
Very recent research [15] proposed a reinforcement learning (RL) framework for improving BCI performance. Since the subject cannot remain fully immersed in the task performed by a prosthetic device during all periods of data collection, a brain signal decoder trained on data from less engaged trials could have lower accuracy. The authors proposed an RL-based decoder that performs continuous parameter updates in an on-line mode. In this framework, the neural activity from the medial prefrontal cortex was considered a reward-related signal representing task engagement. The reported results demonstrate the potential of an RL framework for on-line BCI accuracy improvement.
Since the ECoG implants used in our study are positioned over the motor cortex, the question arises of whether useful information about motor decoder performance can be extracted from this particular brain area. Ref. [16] reported that prediction errors are signaled in the ventromedial prefrontal and lateral orbitofrontal cortex, the anterior insula, and the dorsolateral prefrontal cortex. The authors suggest that these regions might therefore belong to brain systems that differentially contribute to the repetition of rewarded choices and the avoidance of punished choices. The investigation in [17] of whether cortical responses measured by ECoG implants and elicited by user error could serve as feedback to BCI decoders concluded that significant responses occur in the primary somatosensory, motor, premotor, and parietal areas of the brain. In search of principles for designing BMIs with learning abilities, Ref. [18] investigated neural information pathways among cortical areas during task learning, particularly the relationship between the medial prefrontal cortex and the primary motor cortex, which are actively involved in motor control and task learning. The authors found that motor cortex spikes can be well predicted from spikes in the medial prefrontal cortex. Since the co-activation between these areas evolves during task learning and becomes stronger as subjects become well trained, predicting motor cortex activity from prefrontal cortex activity could help in the design of adaptive BMI decoders for task learning.
In [13], a framework for detecting neural correlates of task performance from ECoG implants positioned over the motor cortex area was investigated, and a database labeled with satisfactory and unsatisfactory brain signals was created. Each movement of the prosthetic device provoked by the patient’s brain signals and decoded by a well-trained MCD was labeled as satisfactory or non-satisfactory, depending on the resulting error of the controller (MCD). Only satisfactory brain signals were used to train the MCD. Then, another model for predicting the neural response quality, called Neural Response Decoder (NRD), was trained. A further aim of the NRD was to allow for the auto-adaptation of the MCD in an on-line mode. However, since the initial MCD, which is supposed to be well trained, might not be perfect, such labeling of ECoG data as satisfactory/unsatisfactory might not always be exact. Since our work is based on the database described in [13], we propose that the satisfaction labels be treated as a punish/reward signal to be exploited to train an actor–critic RL architecture.
Since the task of brain signal decoding cannot be solved by traditional linear models, intensive research into various nonlinear approaches is underway. A recent work [19] compared the power of nonlinear and linear approaches on a finger-movement prediction task. The authors addressed concerns that neural networks may yield inconsistent solutions. The reported results, however, demonstrated that various artificial neural network architectures show promising results and could be used in the future for BMI decoding. However, powerful ANNs usually need substantial computational and energy resources for training and adaptation, which prevents their application in real time. That is why numerous neuromorphic devices have been developed [20,21,22,23,24,25,26,27,28] to implement ANNs in fast, less energy-consuming embedded devices.
In our previous work [29], a novel neuromorphic framework of a BMI system for prosthetics’ control via decoding of ECoG brain signals was described. It includes a three-dimensional spiking neural network (3D-SNN) for spatio-temporal brain signal feature extraction and an on-line trainable recurrent reservoir structure (Echo state network (ESN)) for Motor Control Decoding (MCD). The decoder auto-adapts to the incoming brain signals via Spike Timing Dependent Plasticity (STDP) of the connections (synapses) in the 3D-SNN structure. Building on that work, we upgrade our neuromorphic MCD with an NRD that should be able to predict whether the decoder actions and/or the ECoG data from the patient are correct (satisfactory) or not. The main aim is to design an actor–critic structure able to adapt the MCD (actor) via reinforcement learning based on NRD (critic) predictions. This would allow for the continuous adaptation of the decoder to changes in brain signals in real time.
The contributions of this work are the following:
  • The task for neural response decoding (NRD) for the aim of MCD improvement is presented as a reinforcement learning framework.
  • The MCD (actor) and NRD (critic) are designed as neuromorphic structures combining a 3D-SNN structure for spatio-temporal feature extraction from ECoG implants and an on-line trainable ESN for the final decoding stage, which makes their implementation suitable for a neuromorphic chip, offering low power consumption, fast processing and small size.
  • Several approaches to training both actor (MCD) and critic (NRD) in their interaction over time are investigated.
  • Potential for on-line MCD improvement using NRD predictions is demonstrated via simulations.
The rest of this manuscript continues with a section describing the experimental database, ECoG feature extraction, RL framework, neuromorphic NRD and MCD structures, and their software implementation. Next, training and testing algorithms are described, followed by the simulation results, their discussion and concluding remarks with directions for future work.

2. Materials and Methods

2.1. Experimental Data

The developed MCD-NRD structure was trained on a database (DB) called RUNNER. Figure 1 shows the experimental set-up.
During the data collection experiment, the patient was seated in front of a computer screen on which a human avatar was shown from a third-person perspective. The avatar could either stand still or walk forward at a fixed speed. The RUNNER DB was collected from a patient with traumatic sensorimotor tetraplegia caused by a complete C4–C5 spinal cord injury, who had two chronic wireless ECoG implants over the motor cortex area. ECoG signals were recorded with the WIMAGINE implants [30], composed of 64 planar electrodes, at a sampling rate of 586 Hz. The amplitude of the ECoG signals was of the order of 200–300 µV, and they were band-pass filtered in the range from 0.5 Hz to 300 Hz. The patient was involved in the “BCI and Tetraplegia” clinical trial at CEA/Clinatec (NCT02550522), which focused on the recording and decoding of motor intentions with different effectors. Here, the patient controlled the avatar using leg motor imagery decoded by a trained MCD. In total, 9 sessions of approximately 11 min of recording each were acquired over 99 days. For blind testing of the algorithms, the database was split into two parts: the first 7 sessions were used to train the decoder, while the last 2 sessions were left for testing the trained model’s accuracy.
The State_desired, i.e., the prescribed state of the screen avatar that should be predicted by the MCD, has two possible values: idle or walk. The satisfaction with the decoded brain signals and the achieved new state of the avatar (denoted briefly as SATIS) was marked according to the procedure described in [13,14], as follows: after a time lag, when the instruction on the screen was changed, if the avatar is in the State_desired as a result of proper decoding of the patient’s ECoG signals by the MCD, this is considered satisfactory (SATIS = 1); if it is not, the ECoG signal is marked as non-satisfactory (SATIS = 0).
However, this labeling does not depend on the patient’s own judgment. Since the avatar states were decoded by a pre-trained MCD module, a decoding error could result either from incorrect brain signals due to the patient’s fatigue or distraction, or from MCD inaccuracy. Hence, such labeling can serve to predict an error in desired-movement decoding whose origin is unknown.
Nevertheless, the experimental database is a good starting point to train an initial NRD, which could be further refined using feedback from the patient.
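The labeling rule described above can be written as a small helper function (a minimal sketch; the state names follow the DB convention used in this paper):

```python
def satis_label(avatar_state, state_desired):
    """RUNNER-DB labeling rule: after the time lag following an instruction
    change, a block is marked satisfactory (1) if the decoded avatar state
    matches the prescribed one, and non-satisfactory (0) otherwise."""
    return 1 if avatar_state == state_desired else 0
```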

2.2. Neuromorphic Framework for MCD and NRD

In our previous work [29], we described a neuromorphic structure called MCD. It was trained to predict the patient’s desired avatar actions from ECoG signals. Here, we upgrade it with an NRD structure whose aim is to predict the correctness (satisfaction) of the actions decoded by the MCD. The basic idea comes from reinforcement learning theory [31], which is considered a biologically plausible account of how our brain learns from experience by interacting with the environment and receiving feedback about the resulting outcomes. The feedback signal is called reinforcement and can be very simple, e.g., a binary label good = 1 or bad = 0.
Figure 2 presents the proposed reinforcement learning framework. In order to transfer the terminology from the experimental set-up to the terms of control theory and reinforcement learning, we refer to the patient as an object under control whose state (object state) is assessed by the features extracted from the signals measured by the ECoG implants; the MCD is further referred to as the actor (controller) that has to be optimized to generate an Action resulting in satisfaction (SATIS = 1) rather than non-satisfaction (SATIS = 0); the satisfaction itself is considered a binary reinforcement signal to be predicted by the critic element (in our scheme, the NRD). Table 1 summarizes the correspondence between the experimental and RL terms.
Thus, the MCD’s role is to generate an Action based on the perceived object’s state (ECoG features), while the NRD should predict the outcome (SATIS) from the Action and the perceived object’s state. In situations when the movement desired by the patient is not known, a perfectly trained NRD should be able to support the MCD in generating proper motion of the end effectors of the exoskeleton supporting the patient. However, since the labels in the RUNNER DB described above were obtained without feedback from the patient, we cannot be sure whether an incorrect avatar movement was due to the fatigue or distraction of the patient, or because the MCD was not perfectly trained. If feedback from the patient (SATIS label from patient) is included in the experimental set-up, the RL framework will allow for the simultaneous training of the MCD and NRD.
The overall neuromorphic structure upgrades the MCD, as described in detail in [29] and as shown in Figure 3. It consists of the following basic modules:
  • A filtering module that transforms the raw ECoG signals to input signals for 3D-SNN using Morlet wavelet transformation for multiple central frequencies and their combination into a feature matrix of the same size as the original one.
  • A 3D recurrent SNN architecture called a 3D SNN cube, which is spatially structured and adaptable to an individual 3D brain template, is used for feature extraction from processed ECoG signals. It adapts continuously to the incoming input in unsupervised mode via the STDP rule.
  • Two recurrent Echo state network (ESN) structures for decoding the desired movement (MCD) and the satisfaction (NRD) from the extracted features (spiking frequencies of the selected neurons in the 3D-SNN module). They can be trained on-line in supervised mode via recursive least squares (RLS) or in an unsupervised regime via reinforcement learning (RL) rules.
In our approach, data from the 64 ECoG electrodes’ signals are treated in blocks (portions) of 59 points at each time step, corresponding to approximately 100 milliseconds of data recording, as in [13,14]. Since wavelet transformation of electro-physiological signals is a commonly used approach for feature extraction, the features extracted from each block in our work combine 15 Morlet wavelet transformations with 15 different central frequencies (from 10 to 150 Hz with a step size of 10 Hz). The choice of central frequencies is based on previous works [13,14]. The extracted ECoG features in Figure 3 form a matrix of the same size as the current ECoG data block, i.e., 59 × 64, as follows:

ECoG_feature_{e,i}(k) = AUC(Morlet_{f = [10, 20, …, 150]; i}(k)), i = 1, …, 59, e = 1, …, 64

Here, e denotes the ECoG electrode number, i is the index of the point within block k, and f is the central frequency of the Morlet wavelet. AUC is an abbreviation for the area under the curve: for each point i, the curve contains the 15 values of the 15 Morlet wavelets with central frequencies f = [10, 20, …, 150] Hz.
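As an illustration, the feature extraction step above can be sketched in Python. This is a hedged sketch, not the authors' implementation: the wavelet width (`n_cycles`) and the exact AUC integration rule (trapezoidal, over the 15 central frequencies) are assumptions not stated in the text.

```python
import numpy as np
from scipy.signal import fftconvolve

CENTRAL_FREQS = np.arange(10, 151, 10)  # 15 central frequencies, Hz

def morlet_power(block, fs=586.0, freqs=CENTRAL_FREQS, n_cycles=7):
    """Per-sample power of each electrode signal at each central frequency,
    via convolution with complex Morlet wavelets (n_cycles is an assumed
    parameter controlling the wavelet width)."""
    n_samples, n_elec = block.shape
    power = np.empty((len(freqs), n_samples, n_elec))
    for fi, f in enumerate(freqs):
        sigma_t = n_cycles / (2.0 * np.pi * f)           # Gaussian width in s
        t = np.arange(-3 * sigma_t, 3 * sigma_t, 1.0 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.linalg.norm(wavelet)
        for e in range(n_elec):
            power[fi, :, e] = np.abs(fftconvolve(block[:, e], wavelet, mode="same"))
    return power

def ecog_features(block, fs=586.0, freqs=CENTRAL_FREQS):
    """AUC over the 15 central frequencies for every point/electrode,
    yielding a feature matrix of the same size as the input block (59 x 64)."""
    p = morlet_power(block, fs, freqs)
    df = np.diff(freqs).astype(float)
    # trapezoidal area under the 15-point power curve, per point/electrode
    return ((p[:-1] + p[1:]) * df[:, None, None] / 2.0).sum(axis=0)
```

Applied to one 59 × 64 ECoG block, `ecog_features` returns a 59 × 64 matrix of non-negative AUC values, matching the shape stated in the equation above.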
This approach to ECoG feature extraction constitutes a novelty, making the extracted features suitable for the on-line adaptation of the 3D SNN cube connections, reflecting changes in the ECoG signals over time. Its detailed investigation is the subject of another work, which demonstrated a significant increase in MCD accuracy. Here, the reported results on MCD accuracy are also significantly better than those reported in our previous work [29], a fact in favor of this novel feature extraction approach.
The ECoG features are fed into the 3D SNN cube as input currents to each of the 64 neurons in the structure for a time period corresponding to the time of the block recording (approximately 100 ms). The neurons’ firing rates over this stimulation period constitute the extracted spatio-temporal features of the ECoG signals that are further fed into the MCD and NRD modules. Both the actor and critic modules are fast-trainable recurrent neural network structures called Echo state networks (ESNs) [32]. They consist of a randomly connected pool of neurons with a hyperbolic tangent nonlinear activation function and a linear readout with weights trainable via the least squares method.
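A minimal ESN of the kind described above might look as follows. The sizes, input scaling and spectral radius are illustrative assumptions, not the paper's values.

```python
import numpy as np

class ESN:
    """Minimal Echo state network sketch: a fixed, randomly connected
    reservoir of tanh units plus a trainable linear readout."""
    def __init__(self, n_in, n_res, n_out, spectral_radius=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        # rescale so the reservoir has the echo-state (fading memory) property
        W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
        self.W = W
        self.W_out = np.zeros((n_out, n_res))  # trained via (recursive) least squares
        self.x = np.zeros(n_res)

    def step(self, u):
        # one reservoir update followed by the linear readout
        self.x = np.tanh(self.W_in @ u + self.W @ self.x)
        return self.W_out @ self.x
```

Only `W_out` is trained; the reservoir weights stay fixed, which is what makes the readout fast to train on-line.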
The top part of Figure 4 shows the 3D SNN cube structure. It consists of 64 spiking neurons (simulated by the Leaky Integrate and Fire (LIF) model) positioned at the ECoG electrode positions in 3D space. The synaptic connections between each pair of neurons were randomly generated with weights proportional to their distance in 3D space. Positive connections are marked in red, negative ones in blue. All synapses are plastic, i.e., their strength (weight) adapts continuously to the input signals via an associative learning rule called Spike Timing Dependent Plasticity (STDP). The auto-adaptation in the 3D SNN structure reflects the spatio-temporal dependence of the ECoG signals, thus accounting for the positions of the ECoG electrodes and the changes of the recorded signals over time. The bottom part of Figure 4 shows the initial connection weights and two snapshots (after training and after testing) of the connection weights in the 3D-SNN structure. The color of each dot corresponds to the magnitude of the connection weight between a pair of neurons, numbered from 1 to 64 on the x and y axes. We observe that the initial weights (leftmost plot) converge very quickly to a connectivity pattern corresponding to the model’s input signals, and continue with small adaptive changes during the training (middle plot) and testing (right plot) phases.
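The distance-dependent initialization of the cube connections can be sketched as follows. Note the hedges: the text only states that weights are proportional to the 3D distance; the exponential decay used here, the inhibitory fraction `p_inhibitory` and the `scale` parameter are swapped-in illustrative assumptions.

```python
import numpy as np

def cube_weights(positions, scale=1.0, p_inhibitory=0.3, seed=0):
    """Random synaptic weights for the 64 LIF neurons placed at the ECoG
    electrode positions. Magnitude depends on pairwise 3D distance
    (exponential decay assumed here); sign splits excitatory ('red')
    from inhibitory ('blue') connections, as in Figure 4."""
    rng = np.random.default_rng(seed)
    n = len(positions)
    # pairwise Euclidean distances in 3D
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    w = np.exp(-d / scale) * rng.uniform(0.0, 1.0, (n, n))
    # an assumed fraction of connections is inhibitory (negative)
    sign = np.where(rng.random((n, n)) < p_inhibitory, -1.0, 1.0)
    np.fill_diagonal(w, 0.0)  # no self-connections
    return w * sign
```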
The MCD is trained to predict the desired movement (Action = idle/walk) of the prosthetic device (in the current experiment, the avatar on the screen), while the NRD is trained to predict whether the brain signal from the ECoG generates the desired (SATIS = 1) or an incorrect (SATIS = 0) movement. The output of the NRD is considered a prediction of the reward/punish signal in the form of satisfaction/non-satisfaction. This signal should allow for an adjustment of the actor’s behavior (the MCD predictions) so as to decrease its error.

2.3. Software Implementation

All modules are written in Python, version 3.8.9. The 3D-SNN is based on the NEST simulator library, version 3.3 [33], while the rest of the code uses numpy, SciPy and other Python libraries for mathematical calculations. The software works in pseudo-on-line mode: at each step, it reads the next portion of 59 records from the ECoG electrodes from the DB .edf files and the desired decoder outputs (Action_desired_from_DB and SATIS_label_from_DB) from the DB .beh file, and generates Action_predicted and SATIS_predicted as outputs for the current time period. During training, the model parameters are adjusted in supervised mode; during testing, the model’s outputs are kept in csv files.
The parameters that are adjustable in the supervised mode are the output connection weights of the ESN modules. They are tuned incrementally with every new input/output training data pair via the recursive least squares (RLS) method. The 3D-SNN connection weights auto-adapt via the STDP rule continuously. The 3D SNN cube state represents the membrane potentials of all the neurons in the structure. All model parameters are kept in csv files.
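The incremental readout update can be sketched with a standard RLS recursion (a sketch of the textbook algorithm, not the authors' code; the forgetting factor `lam` and the initialization constant `delta` are assumed hyper-parameters):

```python
import numpy as np

class RLSReadout:
    """Recursive least squares for an ESN linear readout: one incremental
    update of the output weights per input/target pair."""
    def __init__(self, n_features, n_out, lam=0.999, delta=1e-4):
        self.P = np.eye(n_features) / delta   # inverse-correlation estimate
        self.W = np.zeros((n_out, n_features))
        self.lam = lam                        # forgetting factor

    def update(self, x, target):
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)          # gain vector
        err = target - self.W @ x             # a-priori output error
        self.W += np.outer(err, k)            # weight correction
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return self.W @ x                     # a-posteriori output
```

On noiseless data generated by a fixed linear map, the recursion recovers that map after a modest number of updates, which is what makes it suitable for on-line readout training.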

3. Methodology

We investigated four training approaches and performed three testing experiments, which are described further below.

3.1. Training Approaches

The training algorithm is outlined in Algorithm 1. TA1, TA2, TA3 and TA4 denote the four training approaches.
In the case of non-satisfaction, since it is supposed that the MCD should have generated the opposite state, we first perform the following two training approaches:
  • First training approach, denoted henceforth as TA1 (Figure 5): Use the desired state of the MCD from the DB, denoted as Action_desired_from_DB, as a target for the MCD and as input to the NRD, no matter whether the training example is labeled as satisfactory or non-satisfactory.
  • Second training approach, denoted henceforth as TA2 (Figure 6): Use the swapped desired state of the MCD, denoted as reverted Action_desired_from_DB (if idle, revert to walk, and vice versa), as a target for the MCD and input for the NRD if the training example is labeled as non-satisfactory in the DB (SATIS_label_from_DB = 0).
Since the decoder will not be aware of the exact target movement of the test subject in real time, it is important to be able to rely on predictions from the MCD to train and test the NRD. In order to start training the NRD with a better-trained MCD, we also performed the following two training approaches:
  • Step 1: Use the First training approach (TA1) to train the initial models of both the MCD and NRD using only the first training session from the DB.
  • Step 2: For the rest of the training sessions, use the Third training approach, denoted as TA3 (Figure 7), or the Fourth training approach, denoted as TA4 (Figure 8).
The Third (TA3) and Fourth (TA4) training approaches exploit the idea from the previous ones: to use the Action_desired_from_DB or to invert it (reverted Action_desired_from_DB), based on the value of SATIS_label_from_DB, for training the MCD.
All training approaches rely on model target output data from the DB; therefore, they can be implemented in off-line mode. In real time, if the target output is available, the RLS algorithm allows the model parameters to be adjusted in on-line mode as well.
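The target-selection logic shared by the four training approaches can be condensed into a small helper (the function name is hypothetical; the argument names mirror the DB fields used in the text):

```python
def mcd_target(action_desired_from_db, satis_label_from_db, revert=True):
    """MCD training target under TA2/TA4: when the DB marks the example as
    non-satisfactory (SATIS_label_from_DB = 0), train toward the opposite
    action ('idle' <-> 'walk'). With revert=False this reduces to TA1/TA3,
    which always use the DB label unchanged."""
    if revert and satis_label_from_db == 0:
        return "walk" if action_desired_from_db == "idle" else "idle"
    return action_desired_from_db
```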
Algorithm 1 Pseudo-code of training algorithm
Initialization:
  Initialize ESN_MCD and ESN_NRD module parameters
  Compose 3D-SNN module using ECoG positions
  Initialize the cube connection weights based on the neurons’ distances
while new data do
  ECoG signals for time period t_ECoG ← read from DB .edf file
  Action_desired_from_DB and SATIS_label_from_DB ← read from DB .beh file
  filtered ECoG signals ← filter(ECoG signals)
  3D-SNN input ← filtered ECoG signals
  for t_ECoG do
    simulate 3D-SNN
  end for
  ECoG features ← spiking frequencies of 3D-SNN
  if MCD training then
    ESN_MCD ← ECoG features
    Action_predicted ← ESN_MCD output
    if TA1 or TA3 then
      MCD error = Action_predicted − Action_desired_from_DB
    else if TA2 or TA4 then
      if SATIS_label_from_DB = 1 then
        MCD error = Action_predicted − Action_desired_from_DB
      else if SATIS_label_from_DB = 0 then
        MCD error = Action_predicted − reverted Action_desired_from_DB
      end if
    end if
    new ESN_MCD parameters ← RLS(MCD error)
  else if NRD training then
    if TA1 then
      ESN_NRD ← [ECoG features, Action_desired_from_DB]
    else if TA2 then
      if SATIS_label_from_DB = 1 then
        ESN_NRD ← [ECoG features, Action_desired_from_DB]
      else if SATIS_label_from_DB = 0 then
        ESN_NRD ← [ECoG features, reverted Action_desired_from_DB]
      end if
    else if TA3 then
      ESN_NRD ← [ECoG features, Action_predicted]
    else if TA4 then
      if SATIS_label_from_DB = 1 then
        ESN_NRD ← [ECoG features, Action_predicted]
      else if SATIS_label_from_DB = 0 then
        ESN_NRD ← [ECoG features, reverted Action_predicted]
      end if
    end if
    SATIS_predicted ← ESN_NRD output
    NRD error = SATIS_predicted − SATIS_label_from_DB
    new ESN_NRD parameters ← RLS(NRD error)
  end if
end while
Keep model parameters

3.2. Testing Experiments

The testing algorithm is represented by Algorithm 2. TE1 and TE2 denote the First and Second test experiments, which are described further below.
Algorithm 2 Pseudo-code of testing algorithm
Initialization:
  Set ESN_MCD and ESN_NRD module parameters to the trained ones
  Compose 3D-SNN module using ECoG positions
  Set 3D-SNN state to the one achieved after training
  Set cube connection weights to the values achieved after training
while new data do
  ECoG signals for time period t_ECoG ← read from DB .edf file
  Action_desired_from_DB ← read from DB .beh file
  filtered ECoG signals ← filter(ECoG signals)
  3D-SNN input ← filtered ECoG signals
  for t_ECoG do
    simulate 3D-SNN
  end for
  ECoG features ← spiking frequencies of 3D-SNN
  ESN_MCD ← ECoG features
  Action_predicted ← ESN_MCD output
  if TE1 then
    ESN_NRD ← [ECoG features, Action_desired_from_DB]
  else if TE2 then
    ESN_NRD ← [ECoG features, Action_predicted]
  end if
  SATIS_predicted ← ESN_NRD output
end while
Keep 3D-SNN state and connection weights
In total, three testing experiments were carried out, as follows:
  • First experiment, denoted further as TE1 (Figure 9): Feed the trained NRD with the desired action from the DB (Action_desired_from_DB) rather than with the trained MCD’s prediction. In this way, we bypass the MCD, imitating knowledge of the instructions on the screen. However, in on-line mode, the NRD would have to know the target action, which is not always possible.
  • Second experiment, denoted further as TE2 (Figure 10): Feed the trained MCD’s prediction (Action_predicted), which is not always correct but will be available in a real situation, to the NRD. In this way, the decoder works fully in on-line mode.
  • Third experiment, denoted further as TE3: Testing of both models trained via the third (TA3) and fourth (TA4) training approaches was carried out as in the second experiment TE2, i.e., in on-line mode.

4. Results

For the training and testing of both the MCD and NRD modules, we use the fully labeled data from training sessions of RUNNER DB, i.e., sessions from 1 to 7 for training and the rest of the sessions (8 and 9) for testing.
Table 2 and Table 3 show the NRD testing accuracy from the First experiment and the Second experiment for the models trained using the First and Second training approaches, respectively. For the First experiment, we observe that when the NRD was trained using information about the desired avatar movement from the instructions on the screen, it could be better trained to predict the SATIS label. In the case where the desired actions were replaced by MCD predictions, as in the Second training approach, the NRD’s accuracy dropped significantly; this was expected, since the MCD was not perfectly trained.
The results of the Second experiment show that even if the model was trained using exact information about the target movement, as in the First training approach, testing with the trained MCD’s predictions yields decreased accuracy in comparison with the First experiment. Again, the Second training approach results in lower model accuracy.
Testing of the models trained via the Third (TA3) and Fourth (TA4) training approaches was carried out as in the Second experiment TE2 (Figure 10). Table 4 shows the testing accuracy of the NRD from the third experiment. It is better than the accuracy achieved in the Second experiment and lower than the accuracy achieved in the First experiment. The Fourth training approach yielded a model with better accuracy than expected.
Finally, we tested whether predictions from the trained NRD could be applied to improve the MCD’s accuracy, as shown by the dashed arrow from the NRD to the MCD in Figure 10. The results from the First and Second experiments are the same, since the MCD was trained in the same way in both cases. They are shown in Table 5. Table 6 shows the MCD’s accuracy from the Third experiment.
We did not observe any significant differences in MCD accuracy between the First/Second and Third experiments. However, all experiments using the training approaches TA2 (Second) and TA4 (Fourth), with reverting of Action_desired_from_DB in the case of SATIS_label_from_DB = 0, yielded better MCD accuracy in comparison with the First (TA1) and Third (TA3) training approaches (without reverting of Action_desired_from_DB). This supports the hypothesis that even if the NRD is not perfectly trained, its predictions can be applied to improve the MCD’s accuracy when the decoder operates in real time.

5. Discussion

The results on the NRD's accuracy reported here demonstrate that if the NRD is given the target action to be performed by the MCD, it distinguishes correct from incorrect decoding of brain signals with much higher accuracy than when it is given the actual MCD output. Unfortunately, this approach can be applied only in off-line mode. However, when both the NRD and MCD are first trained off-line using knowledge of the target actions and then continue in on-line mode, with the NRD fed the output of the pre-trained MCD, the achieved NRD accuracy is significantly higher than when the NRD is trained on the MCD's output from the start.
The testing results also demonstrated that even an imperfectly trained NRD could improve the MCD's predictions in a pseudo on-line mode, provided the MCD was trained with reverting of the desired action from the DB in the case of SATIS = 0, i.e., using only correct labels from the DB for the MCD's desired output.
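The reverting of the desired DB action under SATIS = 0, used in training approaches TA2 and TA4, can be sketched for the two-class task as follows. This is an assumed reconstruction, not the authors' code; the function name is illustrative:

```python
# Hedged sketch of training-time label reverting (TA2/TA4): when the stored
# satisfaction label is 0, the desired action from the DB is reverted to
# the opposite class before being used as an MCD training target.

def training_targets(actions_from_db, satis_from_db):
    """Build MCD training targets from DB actions and satisfaction labels."""
    targets = []
    for action, satis in zip(actions_from_db, satis_from_db):
        # SATIS = 0 means the recorded action did not satisfy the patient,
        # so for a two-class task the opposite action is the better target.
        targets.append(action if satis == 1 else 1 - action)
    return targets

print(training_targets([1, 0, 1, 1], [1, 1, 0, 1]))  # [1, 0, 0, 1]
```

In effect, only labels consistent with the satisfaction signal reach the MCD during training, which is what the improved TA2/TA4 accuracy in Tables 5 and 6 reflects.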
In order to exploit the NRD predictions for auto-adaptation of the MCD, the proposed RL framework needs to be applied in real time with the patient in the loop, as in Figure 2. This will allow us to communicate with the patient and observe changes in their brain signals provoked by adjustments to both the MCD and NRD. The satisfaction labels would then be much more reliable, since we would rely on the patient's impression of how well the MCD predicts their desired action; at the same time, the NRD's accuracy could be improved based on patient feedback rather than off-line labeling in the DB. In real time, temporal difference (TD) learning [31] can be applied to make this model a true on-line algorithm, allowing continuous adaptation of both the actor (MCD) and critic (NRD) networks. Further work in this direction using a true reinforcement learning algorithm will allow us to fully utilize the NRD for on-line optimization of the MCD's performance.
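A minimal sketch of such a TD actor–critic update [31] is given below, under the assumptions of linear function approximation, a softmax policy, and illustrative hyper-parameters; none of these choices are specified in the paper, and the reward here stands in for the patient's satisfaction signal:

```python
# Hedged sketch of a temporal-difference (TD) actor-critic step: the critic
# (NRD role) maintains a value estimate of the decoded state, and its TD
# error drives both the critic and the actor (MCD role) updates.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_actions = 8, 2               # e.g. walk / idle
w = np.zeros(n_features)                   # critic weights (state value)
theta = np.zeros((n_actions, n_features))  # actor weights (preferences)
alpha_w, alpha_theta, gamma = 0.1, 0.05, 0.95  # illustrative step sizes

def policy(x):
    """Softmax over linear action preferences."""
    prefs = theta @ x
    p = np.exp(prefs - prefs.max())
    return p / p.sum()

def td_step(x, a, reward, x_next):
    """One actor-critic update from a transition (x, a, reward, x_next)."""
    global w, theta
    td_error = reward + gamma * (w @ x_next) - (w @ x)  # critic's TD error
    w += alpha_w * td_error * x                         # critic update
    grad = -policy(x)[:, None] * x[None, :]             # softmax log-gradient
    grad[a] += x
    theta += alpha_theta * td_error * grad              # actor update
    return td_error

x = rng.normal(size=n_features)        # toy stand-in for ECoG features
a = int(np.argmax(policy(x)))          # decoded action
td_step(x, a, reward=1.0, x_next=rng.normal(size=n_features))
```

In the envisioned set-up, the reward would come from the NRD's satisfaction prediction (or direct patient feedback), closing the loop of Figure 2.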

6. Conclusions

The proposed neuromorphic NRD system, inspired by reinforcement learning theory, proves to be a promising concept worth further exploration. Even though we did not have the opportunity to perform real-time experiments with patients, the collected experimental database allowed us to test several training and testing approaches. We demonstrated that starting the NRD's training with information about the correct target behavior of the MCD provides a good starting point for further reinforcement training of both the MCD and NRD parts of the decoder.
Further real-time experiments with patients in the loop would allow us to obtain even better results. We believe that the patient must be asked about their satisfaction in order to achieve better auto-adaptation of their brain-signal decoder during its exploitation in real-life situations. Reinforcement learning would then allow us to continuously re-adjust the decoder in real time.
Overall, improving the performance of neuro-prosthetics for the benefit of the patient remains an open problem, and further study is needed. While this paper covers a two-class prosthetic control problem, the general task comprises multiclass control of both legs and hands, as well as prediction of the desired movement trajectory. Since the team has collected databases from various experiments, our further work will aim at designing a common on-line adaptable decoder of brain signals.

Author Contributions

Conceptualization and methodology: N.K. and P.K.-H.; discovery and exploration of the novel approach used for ECoG feature extraction in this work: G.R.; software and simulation experiments: G.R., S.Y., A.B. and S.N.; experimental data collection: H.L.d.M.; data curation: F.S.-S. and T.A.; writing—original draft preparation: N.K., P.K.-H. and H.L.d.M.; visualization: S.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the European Commission under the HORIZON-EIC action “Auto-adaptive Neuromorphic Brain Machine Interface: toward fully embedded neuroprosthetics (NEMO-BMI)”, Grant No. 101070891, 01.10.2022 (https://nemo-bmi.net, accessed on 9 March 2025).

Informed Consent Statement

Not applicable. The experimental data for model training and testing were provided by the NEMO-BMI project coordinator, CEA.

Data Availability Statement

No new data were created or analyzed in this study. This study presents only the MCD model’s structure and accuracy assessment obtained using simulations.

Acknowledgments

CEA Clinatec and CHUGA provided anonymized and processed data from the ongoing clinical trial BCI and tetraplegia registered on ClinicalTrials.gov as NCT02550522. The CEA Clinatec team also computed the prediction performances. The IICT-BAS team designed the MCD’s structure and software and performed training and testing experiments on DBs provided by the CEA. All calculations were performed on the supercomputer HEMUS at the Centre of Excellence in Informatics and ICT, financed by the OP SESG and co-financed by the EU through the ESIF.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Rudroff, T. Decoding thoughts, encoding ethics: A narrative review of the BCI-AI revolution. Brain Res. 2025, 1850, 149423. [Google Scholar] [CrossRef] [PubMed]
  2. Sumithra, M.G.; Dhanaraj, R.K.; Milanova, M.; Balusamy, B.; Venkatesan, C. (Eds.) Brain-Computer Interface: Using Deep Learning Applications; Wiley: Hoboken, NJ, USA, 2023. [Google Scholar]
  3. Awuah, W.A.; Ahluwalia, A.; Darko, K.; Sanker, V.; Tan, J.K.; Pearl, T.O.; Ben-Jaafar, A.; Ranganathan, S.; Aderinto, N.; Mehta, A.; et al. Bridging Minds and Machines: The Recent Advances of Brain-Computer Interfaces in Neurological and Neurosurgical Applications. World Neurosurg. 2024, 189, 138–153. [Google Scholar] [CrossRef] [PubMed]
  4. Paul, D.; Mukherjee, M.; Bakshi, A. A Review of Brain-Computer Interface. In Advances in Medical Physics and Healthcare Engineering; Mukherjee, M., Mandal, J., Bhattacharyya, S., Huck, C., Biswas, S., Eds.; Lecture Notes in Bioengineering; Springer: Singapore, 2021; pp. 507–531. [Google Scholar]
  5. Ajiboye, A.B.; Willett, F.R.; Young, D.R.; Memberg, W.D.; Murphy, B.A.; Miller, J.P.; Walter, B.L.; Sweet, J.A.; Hoyen, H.A.; Keith, M.W.; et al. Restoration of reaching and grasping movements through brain-controlled muscle stimulation in a person with tetraplegia: A proof-of-concept demonstration. Lancet 2017, 389, 1821–1830. [Google Scholar] [CrossRef] [PubMed]
  6. Buttfield, A.; Ferrez, P.W.; Millan, J.R. Towards a robust BCI: Error potentials and online learning. IEEE Trans. Neural Syst. Rehabil. Eng. 2006, 14, 164–168. [Google Scholar] [CrossRef] [PubMed]
  7. Eliseyev, A.; Auboiroux, V.; Costecalde, T.; Langar, L.; Charvet, G.; Mestais, C.; Aksenova, T.; Benabid, A.L. Recursive Exponentially Weighted N-way Partial Least Squares Regression with Recursive-Validation of Hyper-Parameters in Brain-Computer Interface Applications. Sci. Rep. 2017, 7, 16281. [Google Scholar] [CrossRef]
  8. Orsborn, A.L.; Moorman, H.G.; Overduin, S.A.; Shanechi, M.M.; Dimitrov, D.F.; Carmena, J.M. Closed-Loop Decoder Adaptation Shapes Neural Plasticity for Skillful Neuroprosthetic Control. Neuron 2014, 82, 1380–1393. [Google Scholar] [CrossRef]
  9. Wodlinger, B.; Downey, J.E.; Tyler-Kabara, E.C.; Schwartz, A.B.; Boninger, M.L.; Collinger, J.L. Ten-dimensional anthropomorphic arm control in a human brain-machine interface: Difficulties, solutions, and limitations. J. Neural Eng. 2014, 12, 016011. [Google Scholar] [CrossRef]
  10. Benabid, A.L.; Costecalde, T.; Eliseyev, A.; Charvet, G.; Verney, A.; Karakas, S.; Foerster, M.; Lambert, A.; Morinière, B.; Abroug, N.; et al. An exoskeleton controlled by an epidural wireless brain–machine interface in a tetraplegic patient: A proof-of-concept demonstration. Lancet Neurol. 2019, 18, 1112–1122. [Google Scholar] [CrossRef] [PubMed]
  11. Lorach, H.; Galvez, A.; Spagnolo, V.; Martel, F.; Karakas, S.; Intering, N.; Vat, M.; Faivre, O.; Harte, C.; Komi, S.; et al. Walking naturally after spinal cord injury using a brain–spine interface. Nature 2023, 618, 126–133. [Google Scholar] [CrossRef] [PubMed]
  12. Moly, A.; Costecalde, T.; Martel, F.; Martin, M.; Larzabal, C.; Karakas, S.; Verney, A.; Charvet, G.; Chabardes, S.; Benabid, A.L.; et al. An adaptive closed-loop ECoG decoder for long-term and stable bimanual control of an exoskeleton by a tetraplegic. J. Neural Eng. 2022, 19, 026021. [Google Scholar] [CrossRef] [PubMed]
  13. Rouanne, V. Adaptation of Discrete and Continuous Intracranial Brain-Computer Interfaces Using Neural Correlates of Task Performance Decoded Continuously From the Sensorimotor Cortex of a Tetraplegic. Ph.D. Thesis, Université Grenoble Alpes, Saint-Martin-d’Hères, France, 2022. [Google Scholar]
  14. Rouanne, V.; Costecalde, T.; Benabid, A.L.; Aksenova, T. Unsupervised adaptation of an ECoG based brain-computer interface using neural correlates of task performance. Sci. Rep. 2022, 12, 21316. [Google Scholar] [CrossRef]
  15. Zhang, X.; Shen, X.; Wang, Y. Modeling Task Engagement to Regulate Reinforcement Learning-Based Decoding for Online Brain Control. IEEE Trans. Cogn. Dev. Syst. 2025, 17, 606–614. [Google Scholar] [CrossRef]
  16. Gueguen, M.C.M.; Lopez-Persem, A.; Billeke, P.; Lachaux, J.-P.; Rheims, S.; Kahane, P.; Minotti, L.; David, O.; Pessiglione, M.; Bastin, J. Anatomical dissociation of intracerebral signals for reward and punishment prediction errors in humans. Nat. Commun. 2021, 12, 3344. [Google Scholar] [CrossRef]
  17. Wilson, N.R.; Sarma, D.; Wander, J.D.; Weaver, K.E.; Ojemann, J.G.; Rao, R.P.N. Cortical Topography of Error-Related High-Frequency Potentials During Erroneous Control in a Continuous Control Brain–Computer Interface. Front. Neurosci. 2019, 13, 502. [Google Scholar] [CrossRef]
  18. Wu, S.; Qian, C.; Shen, X.; Zhang, X.; Huang, Y.; Chen, S.; Wang, Y. Spike prediction on primary motor cortex from medial prefrontal cortex during task learning. J. Neural Eng. 2022, 19, 046025. [Google Scholar] [CrossRef] [PubMed]
  19. Temmar, H.; Willsey, M.S.; Costello, J.T.; Mender, M.J.; Cubillos, L.H.; DeMatteo, J.C.; Lam, J.L.; Wallace, D.M.; Kelberman, M.M.; Patil, P.G.; et al. Investigating the benefits of artificial neural networks over linear approaches to BMI decoding. J. Neural Eng. 2025, 22, 036050. [Google Scholar] [CrossRef] [PubMed]
  20. Bruno, U.; Mariano, A.; Rana, D.; Gemmeke, T.; Musall, S.; Santoro, F. From neuromorphic to neurohybrid: Transition from the emulation to the integration of neuronal networks. Neuromorphic Comput. Eng. 2023, 3, 023002. [Google Scholar] [CrossRef]
  21. Burelo, K.; Sharifshazileh, M.; Krayenbühl, N.; Ramantani, G.; Indiveri, G.; Sarnthein, J. A spiking neural network (SNN) for detecting high frequency oscillations (HFOs) in the intraoperative ECoG. Sci. Rep. 2021, 11, 6719. [Google Scholar] [CrossRef]
  22. Chen, K.; Wu, Z.; Xu, Y.; Wang, R.; Jiang, S.; Chen, L.; Mao, Y.; Li, M. A neuromorphic system based on spiking-timing-dependent plasticity for evaluating wakefulness and anesthesia states using intracranial EEG signals. View 2025. [Google Scholar] [CrossRef]
  23. Christensen, D.V.; Dittmann, R.; Linares-Barranco, B.; Sebastian, A.; Le Gallo, M.; Redaelli, A.; Slesazeck, S.; Mikolajick, T.; Spiga, S.; Menzel, S. 2022 roadmap on neuromorphic computing and engineering. Neuromorphic Comput. Eng. 2022, 2, 022501. [Google Scholar] [CrossRef]
  24. Donati, E.; Indiveri, G. Neuromorphic bioelectronic medicine for nervous system interfaces: From neural computational primitives to medical applications. Prog. Biomed. Eng. 2023, 5, 013002. [Google Scholar] [CrossRef]
  25. Forró, C.; Musall, S.; Montes, V.R.; Linkhorst, J.; Walter, P.; Wessling, M.; Offenhäusser, A.; Ingebrandt, S.; Weber, Y.; Lampert, A.; et al. Toward the Next Generation of Neural Iontronic Interfaces. Adv. Healthc. Mater. 2023, 12, 2301055. [Google Scholar] [CrossRef]
  26. Rathi, N.; Chakraborty, I.; Kosta, A.; Sengupta, A.; Ankit, A.; Panda, P.; Roy, K. Exploring Neuromorphic Computing Based on Spiking Neural Networks: Algorithms to Hardware. ACM Comput. Surv. 2023, 55, 3571155. [Google Scholar] [CrossRef]
  27. Yoo, J.; Shoaran, M. Neural interface systems with on-device computing: Machine learning and neuromorphic architectures. Curr. Opin. Biotechnol. 2021, 72, 95–101. [Google Scholar] [CrossRef] [PubMed]
  28. Yoon, S.J.; Park, J.T.; Lee, Y.K. The neuromorphic computing for biointegrated electronics. Soft Sci. 2024, 4, 30. [Google Scholar] [CrossRef]
  29. Rusev, G.; Yordanov, S.; Nedelcheva, S.; Banderov, A.; Sauter-Starace, F.; Koprinkova-Hristova, P.; Kasabov, N. Decoding brain signals in a neuromorphic framework for a personalized adaptive control of human prosthetics. Biomimetics 2025, 10, 183. [Google Scholar] [CrossRef]
  30. Mestais, C.S.; Charvet, G.; Sauter-Starace, F.; Foerster, M.; Ratel, D.; Benabid, A.L. WIMAGINE: Wireless 64-channel ECoG recording implant for long term clinical applications. IEEE Trans. Neural Syst. Rehabil. Eng. 2015, 23, 10–21. [Google Scholar] [CrossRef]
  31. Sutton, R.S.; Barto, A.G. Reinforcement Learning: An Introduction, 2nd ed.; The MIT Press: Cambridge, MA, USA; London, UK, 2018. [Google Scholar]
  32. Jaeger, H. Tutorial on Training Recurrent Neural Networks, Covering BPPT, RTRL, EKF and the “Echo State Network” Approach; GMD Report 159, German National Research Center for Information Technology; GMD-Forschungszentrum Informationstechnik: Bonn, Germany, 2002. [Google Scholar]
  33. Spreizer, S.; Mitchell, J.; Gutzen, R.; Lober, M.; Linssen, C.; Trensch, G.; Jordan, J.; Plesser, H.E.; Kurth, A.; Vennemo, S.B.; et al. NEST 3.3 (3.3). Zenodo 2022. [Google Scholar] [CrossRef]
Figure 1. Experimental set-up for Runner DB acquisition.
Figure 2. Actor–critic framework of the proposed reinforcement learning scheme.
Figure 3. Decoder structure including both the MCD as actor and the NRD as critic network structures.
Figure 4. Connections in a 3D SNN.
Figure 5. Block diagram of the First training approach TA1. Solid arrows represent the input and output data to/from both the NRD and MCD structures. Dashed arrows represent propagation of the training error for both the actor and critic networks.
Figure 6. Block diagram of the Second training approach TA2. Solid arrows represent the input and output data to/from both the NRD and MCD structures. Dashed arrows represent propagation of the training error for both the actor and critic networks.
Figure 7. Block diagram of the Third training approach TA3. Dashed arrows represent training data and error, while the solid ones represent the input and output data to/from the MCD and NRD modules.
Figure 8. Block diagram of the Fourth training approach TA4. Dashed arrows represent training data and error, while the solid ones represent the input and output data to/from the MCD and NRD modules.
Figure 9. Testing in the First experiment TE1. The NRD uses the known target movement of the avatar. The dashed arrow from the NRD's output denotes possible corrective feedback from the NRD to the MCD's output.
Figure 10. Testing in the Second experiment TE2. The NRD uses predictions from the trained MCD. The dashed arrow from the NRD's output denotes possible corrective feedback from the NRD to the MCD's output.
Table 1. Terminology counterparts.

| Experiment    | Reinforcement Learning |
|---------------|------------------------|
| Patient       | Object                 |
| ECoG features | Object state           |
| MCD           | Actor                  |
| State         | Action                 |
| NRD           | Critic                 |
| Satisfaction  | Reinforcement signal   |
Table 2. The testing accuracy of the NRD from the First experiment TE1. The best results are underlined.

| Training Approach | Metrics              | Session 8 | Session 9 |
|-------------------|----------------------|-----------|-----------|
| TA1               | Balanced Accuracy    | 0.7491    | 0.7616    |
| TA2               | Balanced Accuracy    | 0.6474    | 0.5456    |
| TA1               | F-score on SATIS = 0 | 0.4145    | 0.4924    |
| TA2               | F-score on SATIS = 0 | 0.3643    | 0.1546    |
| TA1               | F-score on SATIS = 1 | 0.9219    | 0.9610    |
| TA2               | F-score on SATIS = 1 | 0.9491    | 0.9586    |
Table 3. The testing accuracy of the NRD from the Second experiment TE2. The best results are underlined.

| Training Approach | Metrics              | Session 8 | Session 9 |
|-------------------|----------------------|-----------|-----------|
| TA1               | Balanced Accuracy    | 0.6279    | 0.5578    |
| TA2               | Balanced Accuracy    | 0.5291    | 0.4967    |
| TA1               | F-score on SATIS = 0 | 0.2796    | 0.1549    |
| TA2               | F-score on SATIS = 0 | 0.1386    | 0.0627    |
| TA1               | F-score on SATIS = 1 | 0.9177    | 0.9162    |
| TA2               | F-score on SATIS = 1 | 0.9028    | 0.9304    |
Table 4. The testing accuracy of the NRD from the Third experiment TE3. The best results are underlined.

| Training Approach | Metrics              | Session 8 | Session 9 |
|-------------------|----------------------|-----------|-----------|
| TA3               | Balanced Accuracy    | 0.5637    | 0.5501    |
| TA4               | Balanced Accuracy    | 0.6787    | 0.6424    |
| TA3               | F-score on SATIS = 0 | 0.1800    | 0.1391    |
| TA4               | F-score on SATIS = 0 | 0.2761    | 0.2350    |
| TA3               | F-score on SATIS = 1 | 0.8657    | 0.8793    |
| TA4               | F-score on SATIS = 1 | 0.8532    | 0.9028    |
Table 5. The testing accuracy of the MCD from the First (TE1) and Second (TE2) experiments. The best results are underlined.

| Training Approach | NRD Feedback | Metrics           | Session 8 | Session 9 |
|-------------------|--------------|-------------------|-----------|-----------|
| TA1               | YES          | Balanced Accuracy | 0.8069    | 0.7593    |
| TA1               | NO           | Balanced Accuracy | 0.8370    | 0.7699    |
| TA2               | YES          | Balanced Accuracy | 0.8723    | 0.8715    |
| TA2               | NO           | Balanced Accuracy | 0.8304    | 0.8251    |
| TA1               | YES          | F-score on walk   | 0.7861    | 0.7272    |
| TA1               | NO           | F-score on walk   | 0.8221    | 0.7393    |
| TA2               | YES          | F-score on walk   | 0.8389    | 0.8589    |
| TA2               | NO           | F-score on walk   | 0.8154    | 0.8086    |
| TA1               | YES          | F-score on idle   | 0.8285    | 0.7879    |
| TA1               | NO           | F-score on idle   | 0.8490    | 0.7973    |
| TA2               | YES          | F-score on idle   | 0.8767    | 0.8886    |
| TA2               | NO           | F-score on idle   | 0.8416    | 0.8415    |
Table 6. The testing accuracy of the MCD from the Third experiment TE3. The best results are underlined.

| Training Approach | NRD Feedback | Metrics           | Session 8 | Session 9 |
|-------------------|--------------|-------------------|-----------|-----------|
| TA3               | YES          | Balanced Accuracy | 0.7942    | 0.7691    |
| TA3               | NO           | Balanced Accuracy | 0.8370    | 0.7699    |
| TA4               | YES          | Balanced Accuracy | 0.8724    | 0.8800    |
| TA4               | NO           | Balanced Accuracy | 0.8366    | 0.8403    |
| TA3               | YES          | F-score on walk   | 0.7528    | 0.7363    |
| TA3               | NO           | F-score on walk   | 0.8221    | 0.7393    |
| TA4               | YES          | F-score on walk   | 0.8400    | 0.8693    |
| TA4               | NO           | F-score on walk   | 0.8209    | 0.8273    |
| TA3               | YES          | F-score on idle   | 0.8130    | 0.8047    |
| TA3               | NO           | F-score on idle   | 0.8490    | 0.7973    |
| TA4               | YES          | F-score on idle   | 0.8812    | 0.8945    |
| TA4               | NO           | F-score on idle   | 0.8508    | 0.8536    |
