Article

Motor Imagery Acquisition Paradigms: In the Search to Improve Classification Accuracy

by David Reyes 1,*, Sebastian Sieghartsleitner 2,3, Humberto Loaiza 1 and Christoph Guger 2

1 School of Electrical and Electronics Engineering, University of Valle, Cali 760032, Colombia
2 g.tec Medical Engineering, 4521 Schiedlberg, Austria
3 Institute of Computational Perception, Johannes Kepler University, 4040 Linz, Austria
* Author to whom correspondence should be addressed.
Sensors 2025, 25(19), 6204; https://doi.org/10.3390/s25196204
Submission received: 19 July 2025 / Revised: 22 August 2025 / Accepted: 11 September 2025 / Published: 7 October 2025
(This article belongs to the Section Biomedical Sensors)

Abstract

In recent years, advances in medicine have become evident thanks to technological growth and interdisciplinary research, which has allowed knowledge from fields such as engineering to be integrated into medicine. This integration has produced developments and new methods that can be applied in alternative situations, for example, in post-stroke therapies and in the treatment of Multiple Sclerosis (MS) or Spinal Cord Injury (SCI). One method that has stood out and is gaining increasing acceptance is the Brain–Computer Interface (BCI): through the acquisition and processing of brain electrical activity, researchers, doctors, and scientists transform this activity into control signals. There are several methods for operating a BCI; this work focuses on motor imagery (MI)-based BCIs and three types of acquisition paradigms (traditional arrow, picture, and video), seeking to improve the classification accuracy of motor imagery tasks, for both the left and the right hand, in naive subjects. A pipeline and methodology were implemented using the CAR+CSP algorithm to extract features, together with simple, standard, and widely used classifiers such as LDA and SVM. The methodology was tested with data from post-stroke (PS) subjects with BCI experience, obtaining 96.25% accuracy for the best performance, and with the novel paradigm proposed for the naive subjects, 97.5% was obtained. Several statistical tests were carried out to find differences between paradigms within the collected data. In conclusion, it was found that classification accuracy can be improved by using different strategies in the acquisition stage.

1. Introduction

Recent research has shown that, worldwide, around 65 million people have suffered some kind of Traumatic Brain Injury (TBI), which has placed them in difficult situations where motor injuries can occur and, in the worst case, can lead to quadriplegia [1]. Among the most complex conditions that people can survive with some kind of motor injury are stroke, Spinal Cord Injury (SCI), Amyotrophic Lateral Sclerosis (ALS), Multiple Sclerosis (MS), and cerebral palsy, among others [2]. In Colombia, around 770,000 people are in this kind of situation, according to the National Administrative Department of Statistics (DANE) [3]. To mitigate these effects, interdisciplinary research has been carried out in which doctors, engineers, scientists, and researchers from different fields of expertise work together to develop tools that allow these people to communicate, interact [2], restore some degree of mobility, and so on. To this end, Brain–Computer Interfaces (BCIs) have played an important role: although they are an emerging technology, they have proven to be a powerful tool when applied to rehabilitation and functional restoration [4]. There are several methods that work with BCIs; however, this work focuses on those involving motor imagery (MI) from EEG biosignals, that is, motor tasks performed internally and mentally without being executed. MI-based BCIs have been used globally in rehabilitation therapies where, after a series of repetitions and training, patients can improve their condition [5]. This type of BCI is commonly used to control, for example, a neuroprosthesis [6,7]. There are also works that use this method in conjunction with Functional Electrical Stimulation (FES) for gait treatment [8], for post-stroke therapies [9], and even in combination with robotics [10].
Although different works have reported the effectiveness of MI-based BCIs in rehabilitation and functional restoration, there is also a need to review aspects that demand more attention from BCI users, since the user's attention, concentration, and motivation play a fundamental role in MI repetition and feedback and, therefore, in obtaining better rehabilitation outcomes [5]. In BCI training stages, subjects can usually choose among several strategies to carry out motor imagination; however, the system could provide better instructions on how to carry it out. Some works recommend designing EEG acquisition paradigms in which the MI tasks relate to familiar actions that are or were executed daily [11]. In this way, the user could achieve better performance with a BCI.
Different strategies have been considered to mitigate the low accuracy in the recognition of MI tasks, for example, advances in signal processing and machine learning methods [12]. Others have focused on the feature extraction stage, using, e.g., the CSP algorithm for MI recognition [13], or even deep learning methods [14]. In pursuit of better classification accuracy, this work proposes three kinds of paradigms designed for the acquisition of motor imagery tasks through EEG biosignals: the first is the traditional method, where an arrow cue is used as the stimulus; the second uses a picture of a hand as the cue, so that users concentrate on their own hands; and the third shows a video of how to perform the motor imagery action. In the context of motor rehabilitation, BCIs establish a bridge between users' motor tasks and external devices, e.g., computers, electric wheelchairs [15], exoskeletons [16], and prosthetic hands [17], among others. Such motor tasks encompass motor execution, motor attempt, or motor imagery (MI), all of which lead to event-related desynchronization (ERD) and synchronization (ERS) over sensorimotor areas [18]. These decreases (ERD) and increases (ERS) in oscillatory power are therefore often used as control signals for MI-based BCIs. Additionally, ERD is associated with greater cortical activation [19]. The main objective is to identify ERD in EEG recordings by using the strategies mentioned above at the data acquisition stage. The rest of the article is organized as follows: Section 2 explains the materials, methods, and experimental setup; Section 3 presents the results obtained for the different experiments; Section 4 provides a discussion; and Section 5 offers conclusions about the work performed.

2. Materials and Methods

This section explains the materials and methods used for the acquisition, pre-processing, and processing of the biosignals obtained for motor imagery task recognition. The proposed methodology follows the framework shown in Figure 1.
The primary objective of this work is to improve the accuracy rate in the recognition of MI tasks from EEG signals by using different paradigms during the acquisition stage of the data. The next section will explain in detail the implementation made.

2.1. EEG Biosignals Acquisition

Data was obtained with the approval of the Ethikkommission des Landes Oberösterreich in Austria (#D-42-17). Post-stroke EEG signals were obtained from three subjects with left- and right-hand involvement, along with data from ten healthy subjects (aged 22–44 years) with no diagnosed brain disease or mobility impairment. All healthy participants signed informed consent and declared no experience with MI-BCIs or BCIs in general. A g.Nautilus PRO device, manufactured by the Austrian company g.tec medical engineering GmbH, with 16 acquisition channels, was used. EEG biosignals were sampled at 250 Hz, conductive gel was applied to improve signal quality on each channel, and raw EEG was recorded for further processing in MATLAB v.2023a. The EEG electrode positions were FC3, FCz, FC4, C5, C3, C1, Cz, C2, C4, C6, CP3, CP1, CPz, CP2, CP4, and Pz, according to the international 10/20 system; a reference electrode was placed on the right earlobe and a ground electrode at AFz. This electrode distribution allowed recording brain electrical activity focused on the motor cortex, which is where cortical activations occur when imagining or performing a motor movement. Figure 2 shows the electrode distribution of the montage used for EEG acquisition.

Experimental Paradigms

The participants were prepared for the EEG recording in a shielded space containing a 20-inch LED monitor with a resolution of 1920 × 1080 pixels and a refresh rate of 60 Hz. The monitor was connected to the computer that controls the start and end of each run. Initially, a black screen with a fixation cross is displayed, indicating to the user that data recording is about to begin. In this work, three types of acquisition paradigms were used: (1) the traditional arrow paradigm, (2) a hand picture paradigm, and (3) a hand video paradigm.
To acquire the EEG data, each user was asked to avoid strong and unnecessary movements, to remain relaxed, and to stay concentrated on the motor imagery task; they were first told what type of action they should imagine. Each user performed two types of motor imagery tasks: imagining moving the left hand and the right hand. These actions were shown randomly during the acquisition process, yielding 40 trials per class for a total of 80 trials per subject for each paradigm, and thus 240 trials per subject. The order of the paradigms was randomized for each subject in each session. Figure 3 shows the acquired biosignals (A) and the experimental setup (B).
For the naive subjects, the traditional arrow paradigm consisted of showing an arrow pointing left or right, indicating which side should be imagined; this paradigm is commonly used [20,21]. As shown in Figure 4, on a black screen, a fixation cross followed by a “Relax” message is shown at the beginning of each trial, indicating to the subject that the motor imagery recording is going to start. Then, after 2 s, an arrow cue is shown indicating the side for which MI should be performed (left or right). Subjects had 5 s to perform the MI task. Finally, a notice is shown for each subject to relax and prepare for the next trial.
The other, novel paradigms, the hand picture paradigm and the hand video paradigm, consisted of showing the user which hand they should imagine (in the case of the picture) or the action they should or could imagine (in the case of the video). Similarly, a fixation cross is shown on a black screen, indicating that the experiment is going to start, followed by a picture or video cue indicating the hand that must be imagined. The MI task is performed for 5 s; after this, a “Relax” message is displayed, and the subject prepares for the next trial. Figure 5 shows the timing of the paradigms proposed in this work.
The implemented methodology was also tested with data from experienced BCI users after a stroke. Data was captured using the recoveriX [22] rehabilitation system from g.tec medical engineering, in 3 real-time runs for each user. During the acquisition and classifier calibration process, feedback was generated using Functional Electrical Stimulation (FES) and a 3D avatar. Thus, it was possible to validate that the proposed methodology was capable of identifying brain activity related to motor imagery of the left and/or right hand. The recording and capturing protocol was the same as that used in the experiment with healthy individuals, and followed the same protocol as the BR41N.IO Hackathons [23]. Testing and post-processing of the obtained data were applied in the same way as with the inexperienced healthy subjects. Table 1 describes the data for each post-stroke participant and their condition.
Figure 6 shows the paradigm timing used for data capturing, where it is observed that after obtaining the indication cue to perform the motor imagination task, 1.5 s later, feedback is provided with the aspects mentioned previously.

2.2. EEG Biosignals Processing

After obtaining the raw EEG data, it was organized in MATLAB for further processing. Initially, the signals are filtered between 8 and 30 Hz, which, according to several works [11,24,25], is where electrical activation occurs during the execution and imagination of motor tasks. For this, a 4th-order Butterworth bandpass filter was designed, as well as a 3rd-order Notch filter to remove the 50 and 60 Hz powerline interference. These filters have shown good results in several works [19,26], and their implementation does not require high computational resources. A Common Average Referencing (CAR) filter, Equation (1), was also applied; this filter removes activity common to all EEG channels while preserving the activity specific to each individual electrode. This referencing method was used to improve the signal-to-noise ratio (SNR) degraded by artifacts in EEG signals: subtracting the average across all electrodes can produce cleaner signals [2]. Through averaging, CAR minimizes the contribution of uncorrelated noise sources while eliminating sources of noise common to all sites. A common average reference therefore more closely approximates the theoretical ideal of differential recording [27].
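The filtering stage described above can be sketched in Python with SciPy (the study's processing was performed in MATLAB, so this is an illustrative translation; note that SciPy's `iirnotch` is a second-order section, whereas the paper reports a 3rd-order notch, so the notch here is only an approximation):

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 250  # sampling rate used in the study (Hz)

def preprocess(eeg, fs=FS):
    """Band-pass 8-30 Hz (4th-order Butterworth) plus 50/60 Hz notches.

    eeg: array of shape (n_channels, n_samples), filtered along time.
    """
    # Zero-phase 4th-order Butterworth band-pass over the mu/beta range
    b, a = butter(4, [8.0, 30.0], btype="bandpass", fs=fs)
    out = filtfilt(b, a, eeg, axis=-1)
    # Powerline notches (second-order iirnotch sections, an
    # approximation of the paper's 3rd-order notch)
    for f0 in (50.0, 60.0):
        bn, an = iirnotch(f0, Q=30.0, fs=fs)
        out = filtfilt(bn, an, out, axis=-1)
    return out
```

Zero-phase filtering (`filtfilt`) avoids introducing group delay into the epochs, which matters when trials are later aligned to cue triggers.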
$$\tilde{x}_i(t) = x_i(t) - \frac{1}{N}\sum_{j=1}^{N} x_j(t) \qquad (1)$$
where
$x_i(t)$: raw EEG signal of channel $i$ at time $t$
$\frac{1}{N}\sum_{j=1}^{N} x_j(t)$: mean value over all $N$ channels at time $t$
$\tilde{x}_i(t)$: EEG signal of channel $i$ after CAR
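Equation (1) reduces to a one-line NumPy operation (an illustrative sketch; the study's implementation was in MATLAB):

```python
import numpy as np

def car(eeg):
    """Common Average Reference (Equation (1)): subtract the mean
    across all N channels from each channel at every time point.

    eeg: array of shape (n_channels, n_samples).
    """
    return eeg - eeg.mean(axis=0, keepdims=True)
```

After CAR, the across-channel mean is exactly zero at every sample, which is what removes activity common to all electrodes.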
Subsequently, the trials were organized according to class, subject, and paradigm, considering the starting trigger of each MI task. The starting point was set 2 s before the MI cue trigger, extending to 2 s after the end of the MI task, so that each trial is 9 s long. Figure 6 shows the organization of the trial structure.
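The trial extraction described above (from 2 s before the cue trigger to 2 s after the MI task, 9 s in total) can be sketched as follows; the trigger sample indices and array shapes are assumptions for illustration:

```python
import numpy as np

FS = 250  # Hz

def epoch_trials(eeg, cue_samples, pre=2.0, total=9.0, fs=FS):
    """Cut fixed-length trials starting `pre` seconds before each
    MI cue trigger, `total` seconds long (9 s in this work).

    eeg: (n_channels, n_samples); cue_samples: trigger sample indices.
    Returns an array of shape (n_trials, n_channels, total*fs).
    """
    n = int(total * fs)
    offset = int(pre * fs)
    return np.stack([eeg[:, c - offset:c - offset + n] for c in cue_samples])
```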
Once the trials were organized, the signals for each class were passed through a windowing process with T = 2 s and T = 3 s windows in steps of 0.2 s (see Figure 7), in order to evaluate the classifier's accuracy throughout the entire trial. The Common Spatial Pattern (CSP) algorithm was then applied, transforming the data into a new representation with minimal variance for one class and maximal variance for the other [13,28]. This method is based on the simultaneous diagonalization of two covariance matrices; the decomposition of the EEG data yields new time series that allow the two classes to be discriminated.
Given the 16-channel EEG data X of each trial (left or right), the CSP method returns a projection matrix W of size 16 × 16, and each trial is decomposed as Z = WX. According to several works [28,29,30], selecting the first 2 and last 2 rows of W yields a reduced CSP that lowers the dimensionality of the new EEG data while keeping the most discriminative components. Finally, the feature vector is built by computing the variance over the prior windowing T and log-transforming the data:
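A compact CSP sketch based on the generalized eigenvalue formulation of the simultaneous diagonalization described above (an illustrative Python version; the variable names and the choice of eigensolver are our own, not from the paper):

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """CSP via joint diagonalization of the two class covariances.

    trials_*: (n_trials, n_channels, n_samples). Returns the first and
    last `n_pairs` rows of W (4 filters in total, as in this work).
    """
    cov_a = np.mean([np.cov(t) for t in trials_a], axis=0)
    cov_b = np.mean([np.cov(t) for t in trials_b], axis=0)
    # Solve cov_a w = lambda (cov_a + cov_b) w; sorting by eigenvalue
    # places the maximal variance ratio of each class at the extremes.
    evals, evecs = eigh(cov_a, cov_a + cov_b)
    W = evecs[:, np.argsort(evals)[::-1]].T  # rows are spatial filters
    return np.vstack([W[:n_pairs], W[-n_pairs:]])
```

The first rows maximize variance for one class, the last rows for the other, which is exactly why keeping the first 2 and last 2 filters preserves the discriminative information.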
$$f_r = \log\!\left(\frac{\mathrm{VAR}_r}{\sum_{r=1}^{4} \mathrm{VAR}_r}\right)$$
For analysis and classification, two types of classifiers were implemented: Linear Discriminant Analysis (LDA) and a Support Vector Machine (SVM) with two kernels, linear and 3rd-order polynomial. LDA classifiers are commonly used in several works [31,32,33,34], as is SVM, for the classification of motor imagery data [11,35,36]. Accuracy was estimated via 10-fold cross-validation: the data was partitioned into 10 subsets, with 9 used for training and the remaining one for testing in each fold. Accuracy was calculated for each paradigm and subject in 0.2 s steps over all trials, and the session accuracy was the maximum value after averaging over the 10 cross-validation repetitions.
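The normalized log-variance features defined above can be computed as below; the cross-validation step with LDA (scikit-learn names shown in the comments) is indicated as assumed usage, since the original pipeline ran in MATLAB:

```python
import numpy as np

def log_var_features(trials, W):
    """Project each trial with the 4 CSP filters W and build the
    feature vector f_r = log(VAR_r / sum_r VAR_r).

    trials: (n_trials, n_channels, n_samples); W: (4, n_channels).
    """
    feats = []
    for X in trials:
        Z = W @ X                 # (4, n_samples) CSP components
        v = np.var(Z, axis=1)     # variance of each component
        feats.append(np.log(v / v.sum()))
    return np.array(feats)

# Assumed usage with scikit-learn (10-fold cross-validation):
# from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
# from sklearn.model_selection import cross_val_score
# acc = cross_val_score(LinearDiscriminantAnalysis(), F, y, cv=10).mean()
```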

3. Results

To illustrate the results obtained, different figures show the accuracy for each subject, evaluating two window sizes, T = 2 s and T = 3 s. The size T = 1 s was discarded due to poor accuracy. Likewise, images are presented for the Event-Related Desynchronization and Synchronization (ERD/ERS) of the best and worst performers during the tests, along with their CSP filter distributions. In this work, the healthy subjects were new to motor imagery tasks and had never used a BCI, which is why the accuracy threshold for good performance was set above 60%; some works have reported good performance from this threshold [11]. The results for the healthy participants are shown first, followed by those for the post-stroke subjects.
Figure 8 presents the accuracy boxplot for each paradigm for the selected window sizes corresponding to the LDA Classifier. It can be seen that the majority of subjects obtained an accuracy percentage greater than 60% and also that the best classification rates were for the proposed paradigms. The maximum accuracy was for the picture paradigm classification, reaching 96.62% with a window size of T = 2 s, and the lowest was for the video paradigm, reaching 58% with a window size of T = 3 s.
Figure 9 shows the results boxplot for the SVM-Lin classifier, where improvements in accuracy can be seen compared to the traditional arrow paradigm. It is important to mention that all subjects had an accuracy above 60% for the hand picture paradigm, indicating good performance for all naive subjects with this paradigm. The best performances were for window size T = 2 s. Figure 10 shows the results for the SVM-Poly classifier with the hand video paradigm, where an increase in accuracy compared with the arrow paradigm is also observed for some subjects; however, this classifier also produced the lowest classification accuracy, 54.8%, for window size T = 3 s.
With these results, it can be concluded that the best performance was reached with a processing window of T = 2 s, and only two subjects, S2 and S5, did not reach the accuracy threshold with this window size. Likewise, Figure 11 shows the best accuracies, corresponding to subjects S7 and S10 for the MI_Video and MI_Picture paradigms, for each implemented classifier using a window size of T = 2 s. Notably, subject S10 obtained an accuracy above 90% for both paradigms.
Figure 12 shows the ERD/ERS obtained for channels C3 and C4, respectively. These channels show electrical activation when a motor imagery task occurs: when the action corresponds to a right-hand MI task, the energy is lower at C3, and vice versa at C4, where the energy is lower for the left-hand MI task [11]. In motor imagery paradigms, the ERD is observed contralateral to the imagined limb, whereas the ERS can appear ipsilaterally or during the post-stimulus resting phase [37]. These patterns allow us not only to understand the neurophysiological mechanisms underlying motor imagery, but also to construct discriminative features useful for EEG MI-based BCI systems [38]. The ERD/ERS of the best and worst performers in relation to classifier accuracy are shown (see Figure 12), where a change in the energy levels (A) is observed in comparison to the worst performer (B).
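ERD/ERS curves of this kind follow the classic relative band-power definition; a minimal sketch for a single, already band-pass-filtered channel, with the baseline window chosen here as an assumption:

```python
import numpy as np

def erd_ers(trials, fs=250, baseline=(0.0, 2.0)):
    """Relative band-power time course: negative values indicate ERD,
    positive values ERS. `trials` holds band-pass-filtered data of one
    channel, shape (n_trials, n_samples); `baseline` is in seconds.
    """
    power = np.mean(trials ** 2, axis=0)       # average power over trials
    b0, b1 = int(baseline[0] * fs), int(baseline[1] * fs)
    ref = power[b0:b1].mean()                  # reference-interval power
    return (power - ref) / ref * 100.0         # percent change vs. baseline
```

A contralateral power drop during the MI interval shows up as a sustained negative deflection in this curve.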
Similarly, using the g.BSanalyze software (g.tec, Upper Austria, Austria), it was possible to calculate and plot the CSP filters; the first two and last two CSP filters are shown. These patterns or filters reflect the spatial distribution of MI-based neuronal activity and are essential for feature extraction, allowing the distinction between different motor imagery classes, such as left- and right-hand MI. In the intraindividual analysis, the distribution of patterns differs, suggesting interindividual variability in cortical activations during MI tasks [12]. CSP filters maximize the signal variance for one class and minimize it for the other, resulting in characteristic spatial patterns. When these filters are projected onto the scalp as topographic maps, the areas with the greatest weight reflect the cortical regions that exhibit significant differences in EEG oscillation power. In the case of motor imagery, these maps typically show differentiated activations over the contralateral sensorimotor regions (mainly around areas C3 and C4 of the 10–20 system). For example, during right-hand imagery, it is common to observe greater modulation in electrodes close to C3 (left hemisphere), while, for the left hand, the opposite occurs around C4 (right hemisphere). It is important to note that CSP maps do not directly represent neuronal activity, but rather the spatial distribution of the filter weights that best separate the classes. Therefore, they should be interpreted as indicators of the scalp regions that contribute most to the discrimination of mental states associated with the task [28].
It can be observed in Figure 13 that there is variability in the spatial distribution of the CSP components when comparing the best performance of S10 for the traditional paradigm (A) with the proposed picture paradigm (B). In Figure 13B, a contralateral activation between the first and last filter can be observed, which can lead to a good success rate in motor imagery task recognition.
For participant S7, the same analysis was performed, and it was found, according to Figure 14, that the filters are better distributed for the hand video paradigm, with which the subject obtained the best performance, compared to the distribution for the traditional arrow paradigm.
Two distributions are shown for two randomly selected participants who obtained low success rates in motor imagery recognition tasks; see Figure 15. It can be observed that the distributions are not as discriminative as in the other cases; this likely affects classifier performance and creates greater confusion between the predicted and actual outputs.
Table 2 shows the results obtained for each classifier, window size, subject, and paradigm. It is observed that the best accuracy percentages are achieved for the windows of size T = 2 s and that, in comparison with the results obtained with the traditional arrow method, the proposed MI_Picture and MI_Video paradigms present better accuracy percentages for subject S10, who obtained the greatest improvement, an increase of 20.5%.
Finally, another test was carried out using CAR filtering to check the classification accuracy performance. Figure 16 shows the boxplots obtained after CAR. Table 3, which summarizes the results obtained after applying the CAR filter, shows an interesting improvement compared to the previous results: where some classifiers previously could not reach the accuracy threshold, in this case all of them exceeded it. The best performance was an accuracy of 97.5%, obtained both by participant S10 for the proposed hand picture paradigm and by participant S7 for the video paradigm, whose improvements over the traditional paradigm were 23.13% and 22.38%, respectively. An improvement in accuracy for the hand video paradigm was also found for all users in general.
A Wilcoxon signed-rank test, ANOVA, and Bonferroni-corrected t-tests were applied to assess classification accuracy differences, with the significance threshold set to α = 0.05. Table 4 shows the p-values (values p < 0.05 in bold) obtained by comparing the traditional arrow paradigm accuracy with the proposed paradigms. Table 5 shows the p-values obtained after applying the CAR filter for a window size of T = 2 s, since this setting had the best classification accuracy in the previous tests.
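A paired comparison along these lines (Wilcoxon signed-rank plus a Bonferroni-corrected paired t-test at α = 0.05) can be sketched with SciPy; the per-subject accuracy vectors in the usage below are hypothetical, not the study's data:

```python
import numpy as np
from scipy.stats import ttest_rel, wilcoxon

ALPHA = 0.05

def compare_paradigms(acc_arrow, acc_proposed, n_comparisons=2):
    """Compare per-subject accuracies of the arrow paradigm against a
    proposed one. Bonferroni correction: multiply p by the number of
    comparisons (arrow vs. picture and arrow vs. video -> 2)."""
    _, p_wilcoxon = wilcoxon(acc_arrow, acc_proposed)
    _, p_ttest = ttest_rel(acc_arrow, acc_proposed)
    p_bonf = min(p_ttest * n_comparisons, 1.0)
    return {"wilcoxon": p_wilcoxon,
            "t_bonferroni": p_bonf,
            "significant": bool(p_bonf < ALPHA)}
```

With only ten subjects, the non-parametric Wilcoxon test is a sensible companion to the t-test, since normality of the accuracy differences cannot be assumed.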
Observing the results, some important aspects emerge from the comparison between paradigms within the processed data: for example, after Bonferroni correction, there is a significant difference (p < 0.05) between arrow vs. picture and arrow vs. video for SVM classification and for both window sizes T. Better results are obtained after CAR filtering, where all classifiers showed significant differences after Bonferroni-corrected tests for both comparisons.

Post-Stroke Data

For the post-stroke data, a total of 40 imagery tasks per hand were performed during each of the 3 runs, for a total of 120 MI tasks per upper limb. The same steps of the proposed methodology were implemented and tested, with the results shown in the following graphs. The data were processed with a window of T = 2 s, as it achieved the best performance in the previous tests and validations. In the first run, the classifier was trained and the CSP filters were calculated; Runs 2 and 3 were used to test the classifier obtained in the previous run. Figure 17 shows the classification performance for the last run for each subject, along with their ERD/ERS.
Figure 18 shows the CSP filter distribution for each subject for the classifier calibration run. This calibration stage is one of the most important aspects, since the test runs perform according to the results obtained during calibration. It can be observed, for example, in Figure 18A, that there is a discriminative distribution that could lead to a good classification rate, with high values over the left hemisphere for filters 3 and 4, which could indicate that it was easier for the subject to perform MI with the non-paretic hand. Similar observations hold for the other subjects.
In Figure 19, it can be observed that all accuracy percentages exceeded the defined threshold of 60%. The best-performing user was S1, with a maximum accuracy of 95.42% with the LDA classifier in Run 3. It is important to note that all three classifiers performed well in identifying motor imagery tasks. It can also be observed that the three subjects show ERD/ERS activity related to the motor imagery tasks, and that accuracy improves after the feedback phase begins, in which FES may play a strong role in contributing to ERD generation [39].

4. Discussion

When analyzing the results, there is a significant difference in the data gathered in relation to the accuracy obtained with the implemented classifiers for the proposed paradigms. The hand picture paradigm stands out, obtaining a significant difference, p < 0.05, for all classifiers and both processing window sizes. Classification accuracy was better for T = 2 s (see Figure 20), which also allows a faster response in the processing pipeline. Healthy subjects stated that they felt more comfortable with the paradigms that show how to perform the motor imagery action, or which hand they have to imagine. Previous work [40] has shown that cortical activation can be induced by several factors, such as whether the action is intended, observed, and/or imagined; this is congruent with the results obtained. It is important to mention that BCIs are, on the one hand, algorithm-dependent and, on the other, subject-dependent, which is probably why some subjects performed better than others.
The best classification accuracy was 97.5% for the hand picture paradigm, obtained by subject S10; in fact, S10 performed best, or among the best, for all paradigms and classifiers. Subjects S1, S7, and S10 stated that they are physically active and healthy, which could explain better MI performance compared to the others, who were mostly workers and students. For example, in the analysis of the CSP filters, the distribution was better and more discriminative for these subjects than for those with lower performance (see Figure 12, Figure 13 and Figure 14). This could also favor the presence of ERD/ERS during the motor imagery tasks in the subjects with better performance. It is important to mention that action observation, repetitive actions, and performing familiar actions generate power changes around the mu rhythm [41], which is the main frequency band used in this work.
Some subjects reported that the video paradigm was confusing for them, but others found it easier for imagining the motor action. This could be explained by the activation of the mirror neuron system introduced in [42], which may play a fundamental role both in action understanding and in the capacity to learn by imitation. Within the arrow paradigm, subjects performed the MI task using several strategies, and all of them felt more comfortable with the picture paradigm. Some works have reported that observing an action, or an indication of it, evokes brain activity in the premotor cortex, the supplementary motor area, and the primary somatosensory cortex [11]; this may be associated with the better performance obtained with the proposed paradigms compared to the traditional one.
For the post-stroke results, it was noted that the use of FES and VR as feedback could generate better motor imagery performance in each subject. The feedback phase plays an important role in rehabilitation processes that seek to restore functionality [19,26,43]. As for BCI performance, using feedback during the calibration stage improved the results when training the classifiers tested in the subsequent runs. However, further studies with larger datasets are necessary to demonstrate, for example, mirror neuron activation, and to obtain a more generalized classification.
With the results obtained from the best performances, and from the post-stroke data in general, something interesting was observed in the classification curve: after ~1.5 s, the classification accuracy improves. This is because, after that period, feedback through the FES and 3D avatar was applied, which could help the user perform the motor imagery tasks better. Similarly, Event-Related Desynchronizations and Synchronizations (ERDs/ERSs) can be observed when the MI tasks occur, which may indicate that it is indeed the motor imagery that is recognized by the proposed methodology. The best classification performances were obtained with the LDA classifier, which achieved the highest success rate in Run 3, where data from Run 2 was also processed and classified; however, the other classifiers reached similar success rates. Figure 21 summarizes the results obtained.

5. Conclusions

In this work, three paradigms were tested for motor imagery-based BCIs, using a hand picture and a hand video in comparison to the traditional arrow. The experiment demonstrated that the subjects who participated in the study improved their classification accuracy significantly, p < 0.05. It was also demonstrated that, by using familiar actions, the MI task could be performed better. The proposed paradigms could induce motor imagery (MI)-related ERD as well as action observation (AO) ERD, because both actions generate cortical activity; in fact, experiments that combine AO+MI, such as those in this work, could enhance that cortical activity [44]. Subjects used strategies similar to those in [11] to perform the MI tasks. It is very important to take into account that BCI performance is largely subject-dependent, and this study showed that good performance can be obtained with naive BCI users by using different approaches and strategies in the acquisition paradigm; adding feedback at this stage could further improve results in future studies [19,45], as demonstrated in this work for the post-stroke data.
The results obtained in this work compare favorably with those reported in the literature. The best performances here (97.5% for S10 and 93.5% for S7) are in the range of [11], where a Chinese character is presented in the paradigm stage. They also exceed the best performance reported in [13], where visual robotic feedback was used in the paradigm and the best accuracy was around 92%, and surpass recent results such as [12], where 95.24% was obtained in the recognition of the MI task.
For future work, an adaptive BCI framework could be implemented; closed-loop paradigms show promising results for improving BCI performance in future implementations. The window size of T = 2 s, together with the low computational cost of the methods and techniques used in this work, makes them well suited for real-time implementations. Exploring other BCI approaches is another opportunity for future work: invasive Electrocorticography (ECoG)-based BCIs are demonstrating outstanding results, for example, in quadriplegic patients [46].

Author Contributions

Conceptualization, D.R.; methodology, D.R.; software, D.R. and S.S.; validation, D.R. and S.S.; formal analysis, D.R. and S.S.; data curation, D.R.; writing—original draft preparation, D.R. and S.S.; writing—review and editing, D.R.; supervision, H.L. and C.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by “COLFUTURO—Fundación para el Futuro de Colombia”, grant number “Bicentenario 1st Call” managed by Universidad del Valle, Vicerrectoría de Investigaciones.

Institutional Review Board Statement

Data gathering was approved by Ethikkommission des Landes Oberösterreich, Austria (#D-42-17). The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study. This research followed the principles outlined in the Declaration of Helsinki.

Informed Consent Statement

Written informed consent has been obtained from the patient(s) to publish this paper.

Data Availability Statement

The datasets are available upon request from the authors because participants’ data must be treated according to current data protection laws and ethical guidelines. Requests to access the datasets should be directed to D.R.: david.reyes@correounivalle.edu.co.

Acknowledgments

Special thanks to the Austrian-based company g.tec medical engineering for its support and cooperation during the development of this work and to the University of Valle, School of Electrical and Electronics Engineering, for the administrative support, use of resources, and accompaniment.

Conflicts of Interest

S.S. was employed by g.tec medical engineering GmbH. C.G. was the CEO of g.tec medical engineering GmbH, which developed and commercialized the BCI system used in this study. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as potential conflicts of interest.

References

  1. Dewan, M.C.; Rattani, A.; Gupta, S.; Baticulon, R.E.; Hung, Y.C.; Punchak, M.; Agrawal, A.; Adeleye, A.O.; Shrime, M.G.; Rubiano, A.M.; et al. Estimating the global incidence of traumatic brain injury. J. Neurosurg. 2019, 130, 1080–1097. [Google Scholar] [CrossRef]
  2. Rashid, M.; Sulaiman, N.; Abdul Majeed, A.P.P.; Musa, R.M.; Ahmad, A.F.; Bari, B.S.; Khatun, S. Current Status, Challenges, and Possible Solutions of EEG-Based Brain-Computer Interface: A Comprehensive Review. Front. Neurorobot. 2020, 14, 25. [Google Scholar] [CrossRef] [PubMed]
  3. Departamento Administrativo Nacional de Estadística (DANE). Estadísticas por Tema: Salud, Discapacidad. Available online: https://www.dane.gov.co/index.php/estadisticas-por-tema/demografia-y-poblacion/discapacidad (accessed on 21 March 2023).
  4. Mane, R.; Chouhan, T.; Guan, C. BCI for stroke rehabilitation: Motor and beyond. J. Neural Eng. 2020, 17, 041001. [Google Scholar] [CrossRef] [PubMed]
  5. Liu, X.; Zhang, W.; Li, W.; Zhang, S.; Lv, P.; Yin, Y. Effects of motor imagery based brain-computer interface on upper limb function and attention in stroke patients with hemiplegia: A randomized controlled trial. BMC Neurol. 2023, 23, 136. [Google Scholar] [CrossRef]
  6. Poboroniuc, M.; Irimia, D.; Ionascu, R.; Roman, A.I.; Mitocaru, A.; Baciu, A. Design and Experimental Results of New Devices for Upper Limb Rehabilitation in Stroke. In Proceedings of the 2021 International Conference on e-Health and Bioengineering (EHB), Iasi, Romania, 18–19 November 2021; pp. 22–25. [Google Scholar] [CrossRef]
  7. Cho, J.-H.; Jeong, J.-H.; Lee, S.-W. NeuroGrasp: Real-Time EEG Classification of High-Level Motor Imagery Tasks Using a Dual-Stage Deep Learning Framework. IEEE Trans. Cybern. 2021, 52, 13279–13292. [Google Scholar] [CrossRef]
  8. King, C.E.; Wang, P.T.; McCrimmon, C.M.; Chou, C.C.Y.; Do, A.H.; Nenadic, Z. Brain-Computer Interface Driven Functional Electrical Stimulation System for Overground Walking in Spinal Cord Injury Participant. Neurology 2014, 82 (Suppl. 1). Available online: http://ovidsp.ovid.com/ovidweb.cgi?T=JS&PAGE=reference&D=emed12&NEWS=N&AN=71466813 (accessed on 8 July 2025).
  9. da Cunha, M.; Rech, K.D.; Salazar, A.P.; Pagnussat, A.S. Functional electrical stimulation of the peroneal nerve improves post-stroke gait speed when combined with physiotherapy. A systematic review and meta-analysis. Ann. Phys. Rehabil. Med. 2021, 64, 101388. [Google Scholar] [CrossRef]
  10. Ambrosini, E.; Gasperini, G.; Zajc, J.; Immick, N.; Augsten, A.; Rossini, M.; Ballarati, R.; Russold, M.; Ferrante, S.; Ferrigno, G.; et al. A Robotic System with EMG-Triggered Functional Electrical Stimulation for Restoring Arm Functions in Stroke Survivors. Neurorehabil. Neural Repair 2021, 35, 334–345. [Google Scholar] [CrossRef]
  11. Qiu, Z.; Allison, B.Z.; Jin, J.; Zhang, Y.; Wang, X.; Li, W.; Cichocki, A. Optimized motor imagery paradigm based on imagining Chinese characters writing movement. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 1009–1017. [Google Scholar] [CrossRef]
  12. Gómez-Morales, Ó.W.; Collazos-Huertas, D.F.; Álvarez-Meza, A.M.; Castellanos-Dominguez, C.G. EEG Signal Prediction for Motor Imagery Classification in Brain–Computer Interfaces. Sensors 2025, 25, 2259. [Google Scholar] [CrossRef]
  13. Hayta, Ü.; Irimia, D.C.; Guger, C.; Erkutlu, İ.; Güzelbey, İ.H. Optimizing Motor Imagery Parameters for Robotic Arm Control by Brain-Computer Interface. Brain Sci. 2022, 12, 833. [Google Scholar] [CrossRef]
  14. Thanjavur, K.; Babul, A.; Foran, B.; Bielecki, M.; Gilchrist, A.; Hristopulos, D.T.; Brucar, L.R.; Virji-Babul, N. Recurrent neural network-based acute concussion classifier using raw resting state EEG data. Sci. Rep. 2021, 11, 12353. [Google Scholar] [CrossRef]
  15. Prashant, P.; Joshi, A.; Gandhi, V. Brain Computer Interface: A review. In Proceedings of the 2015 5th Nirma University International Conference on Engineering (NUiCONE), Ahmedabad, India, 26–28 November 2015; Volume 74, pp. 3–30. [Google Scholar]
  16. Proietti, T.; Crocher, V.; Roby-Brami, A.; Jarrasse, N. Upper-limb robotic exoskeletons for neurorehabilitation: A review on control strategies. IEEE Rev. Biomed. Eng. 2016, 9, 4–14. [Google Scholar] [CrossRef] [PubMed]
  17. Saragih, A.S.; Basyiri, H.N.; Raihan, M.Y. Analysis of motor imagery data from EEG device to move prosthetic hands by using deep learning classification. AIP Conf. Proc. 2022, 2537, 50009. [Google Scholar] [CrossRef]
  18. Sebastián-Romagosa, M.; Cho, W.; Ortner, R.; Murovec, N.; Von Oertzen, T.; Kamada, K.; Allison, B.Z.; Guger, C. Brain Computer Interface Treatment for Motor Rehabilitation of Upper Extremity of Stroke Patients—A Feasibility Study. Front. Neurosci. 2020, 14, 591435. [Google Scholar] [CrossRef] [PubMed]
  19. Sieghartsleitner, S.; Sebastian-Romagosa, M.; Schreiner, L.; Grunwald, J.; Cho, W.; Ortner, R.; Tanackovic, S.; Scharinger, J.; Guger, C. Analysis of Cortical Excitability During Brain-Computer Interface Stroke Rehabilitation of Upper and Lower Extremity. In Proceedings of the 2024 IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE), St Albans, UK, 21–23 October 2024; pp. 1106–1111. [Google Scholar] [CrossRef]
  20. Yu, T.; Xiao, J.; Wang, F.; Zhang, R.; Gu, Z.; Cichocki, A.; Li, Y. Enhanced Motor Imagery Training Using a Hybrid BCI with Feedback. IEEE Trans. Biomed. Eng. 2015, 62, 1706–1717. [Google Scholar] [CrossRef]
  21. Scherer, R.; Vidaurre, C. Chapter 8—Motor Imagery Based Brain-Computer Interfaces. In Smart Wheelchairs and Brain-Computer Interfaces: Mobile Assistive Technologies, 2nd ed.; Elsevier, B.V.: Amsterdam, The Netherlands, 2018; p. 171. ISBN 9780128128923. [Google Scholar]
  22. g.tec Medical Engineering. RecoveriX. Available online: https://recoverix.com/ (accessed on 2 February 2025).
  23. g.tec Medical Engineering. BR41N.IO Hackathon. Available online: https://www.br41n.io/ (accessed on 8 July 2025).
  24. Song, M.; Kim, J. Motor Imagery Enhancement Paradigm Using Moving Rubber Hand Illusion System. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS), Jeju, Republic of Korea, 11–15 July 2017; pp. 1146–1149. [Google Scholar]
  25. Syrov, N.; Vasilyev, A.; Kaplan, A. Sensorimotor EEG Rhythms During Action Observation and Passive Mirror-Box Illusion. Commun. Comput. Inf. Sci. 2021, 1499, 101–106. [Google Scholar] [CrossRef]
  26. Brunner, I.; Lundquist, C.B.; Pedersen, A.R.; Spaich, E.G.; Dosen, S.; Savic, A. Brain computer interface training with motor imagery and functional electrical stimulation for patients with severe upper limb paresis after stroke: A randomized controlled pilot trial. J. Neuroeng. Rehabil. 2024, 21, 10. [Google Scholar] [CrossRef]
  27. Ludwig, K.A.; Miriani, R.M.; Langhals, N.B.; Joseph, M.D.; Anderson, D.J.; Kipke, D.R. Using a common average reference to improve cortical neuron recordings from microelectrode arrays. J. Neurophysiol. 2009, 101, 1679–1689. [Google Scholar] [CrossRef]
  28. Guger, C.; Ramoser, H.; Pfurtscheller, G. Real-time EEG analysis with subject-specific spatial patterns for a brain-computer interface (BCI). IEEE Trans. Rehabil. Eng. 2000, 8, 447–456. [Google Scholar] [CrossRef]
  29. Geng, X.; Li, D.; Chen, H.; Yu, P.; Yan, H.; Yue, M. An improved feature extraction algorithms of EEG signals based on motor imagery brain-computer interface. Alexandria Eng. J. 2022, 61, 4807–4820. [Google Scholar] [CrossRef]
  30. Kundu, S.; Tomar, A.S.; Chowdhury, A.; Thakur, G.; Tomar, A. Advancements in Temporal Fusion: A New Horizon for EEG-Based Motor Imagery Classification. IEEE Trans. Med. Robot. Bionics 2024, 6, 567–576. [Google Scholar] [CrossRef]
  31. Bhatti, M.H.; Khan, J.; Khan, M.U.G.; Iqbal, R.; Aloqaily, M.; Jararweh, Y.; Gupta, B. Soft Computing-Based EEG Classification by Optimal Feature Selection and Neural Networks. IEEE Trans. Ind. Informatics 2019, 15, 5747–5754. [Google Scholar] [CrossRef]
  32. Selim, S.; Tantawi, M.; Shedeed, H.; Badr, A. A Comparative Analysis of Different Feature Extraction Techniques for Motor Imagery Based BCI System. In Proceedings of the Advances in Intelligent Systems and Computing, Cairo, Egypt, 8–10 April 2020; Volume 1153, pp. 740–749. [Google Scholar]
  33. dos Santos, E.M.; San-Martin, R.; Fraga, F.J. Comparison of subject-independent and subject-specific EEG-based BCI using LDA and SVM classifiers. Med. Biol. Eng. Comput. 2023, 61, 835–845. [Google Scholar] [CrossRef]
  34. Malass, M.; Tabbal, J.; El Falou, W. EEG Features Extraction and Classification Methods in Motor Imagery Based Brain Computer Interface. In Proceedings of the 2019 Fifth International Conference on Advances in Biomedical Engineering (ICABME), Tripoli, Lebanon, 17–19 October 2019; pp. 1–4. [Google Scholar] [CrossRef]
  35. Hou, Y.; Chen, T.; Lun, X.; Wang, F. A novel method for classification of multi-class motor imagery tasks based on feature fusion. Neurosci. Res. 2022, 176, 40–48. [Google Scholar] [CrossRef] [PubMed]
  36. Mebarkia, K.; Reffad, A. Multi optimized SVM classifiers for motor imagery left and right hand movement identification. Australas. Phys. Eng. Sci. Med. 2019, 42, 949–958. [Google Scholar] [CrossRef] [PubMed]
  37. Pfurtscheller, G.; Lopes da Silva, F.H. Event-related EEG/MEG synchronization and desynchronization: Basic principles. Clin. Neurophysiol. 1999, 110, 1842–1857. [Google Scholar] [CrossRef] [PubMed]
  38. Bian, Y.; Zhao, L.; Li, J.; Guo, T.; Fu, X.; Qi, H. Improvements in Classification of Left and Right Foot Motor Intention Using Modulated Steady-State Somatosensory Evoked Potential Induced by Electrical Stimulation and Motor Imagery. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 150–159. [Google Scholar] [CrossRef]
  39. Takahashi, M.; Gouko, M.; Ito, K. Functional Electrical Stimulation (FES) effects for Event Related Desynchronization (ERD) on foot motor area. In Proceedings of the 2009 ICME International Conference on Complex Medical Engineering, Tempe, AZ, USA, 9–11 April 2009; pp. 1–6. [Google Scholar]
  40. Jeannerod, M. Neural simulation of action: A unifying mechanism for motor cognition. Neuroimage 2001, 14, 103–109. [Google Scholar] [CrossRef]
  41. Pineda, J.A. The functional significance of mu rhythms: Translating “seeing” and “hearing” into “doing”. Brain Res. Rev. 2005, 50, 57–68. [Google Scholar] [CrossRef]
  42. Rizzolatti, G.; Craighero, L. The mirror-neuron system. Annu. Rev. Neurosci. 2004, 27, 169–192. [Google Scholar] [CrossRef]
  43. Kim, M.S.; Park, H.; Kwon, I.; An, K.O.; Kim, H.; Park, G.; Hyung, W.; Im, C.H.; Shin, J.H. Efficacy of brain-computer interface training with motor imagery-contingent feedback in improving upper limb function and neuroplasticity among persons with chronic stroke: A double-blinded, parallel-group, randomized controlled trial. J. Neuroeng. Rehabil. 2025, 22, 1. [Google Scholar] [CrossRef]
  44. Vogt, S.; Rienzo, F.D.; Collet, C.; Collins, A.; Guillot, A. Multiple roles of motor imagery during action observation. Front. Hum. Neurosci. 2013, 7, 807. [Google Scholar] [CrossRef]
  45. Lin, C.L.; Chen, L.T. Improvement of brain–computer interface in motor imagery training through the designing of a dynamic experiment and FBCSP. Heliyon 2023, 9, e13745. [Google Scholar] [CrossRef]
  46. Cajigas, I.; Davis, K.C.; Prins, N.W.; Gallo, S.; Naeem, J.A.; Fisher, L.; Ivan, M.E.; Prasad, A.; Jagid, J.R. Brain-Computer interface control of stepping from invasive electrocorticography upper-limb motor imagery in a patient with quadriplegia. Front. Hum. Neurosci. 2023, 16, 1077416. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of the framework followed by the proposed methodology.
Figure 2. Distribution of the 16 EEG biosignal acquisition channels around the motor cortex.
Figure 3. (A) First 8 of 16 EEG biosignals channels captured using the g.Nautilus PRO distributed around the motor cortex. (B) Experimental setup for data recording.
Figure 4. Traditional arrow paradigm timing for a single trial in seconds used for EEG signal recordings.
Figure 5. Hand picture paradigm and grasping hand video paradigm timing for a single trial in seconds used for EEG signal recordings.
Figure 6. Paradigm protocol used to capture EEG biosignals for MI tasks. Adapted from [18].
Figure 7. Trial structure, organization, and windowing for signal processing for each class, paradigm, and subject.
Figure 8. Classification accuracy for the LDA classifier using window size T = 2 s, T = 3 s for each paradigm presented to gather the EEG biosignals from MI task.
Figure 9. Classification accuracy for the SVM-Lin classifier using window size T = 2 s, T = 3 s for each paradigm presented to gather the EEG biosignals from MI task.
Figure 10. Classification accuracy for the SVMPoly classifier using window size T = 2 s, T = 3 s for each paradigm presented to gather the EEG biosignals from MI task.
Figure 11. Classification accuracy best performance subjects (A) S10 for hand picture paradigm and (B) S7 for hand video paradigm, window size T = 2 s.
Figure 12. ERD/ERS maps for best (A) and worst (B) performance for channels C3 and C4 of the EEG biosignals related to left- and right-hand MI tasks.
Figure 13. CSP filter distribution for S10, (A) Traditional arrow paradigm. (B) Proposed, hand picture paradigm.
Figure 14. CSP filter distribution for S7, (A) Traditional arrow paradigm. (B) Proposed, hand video paradigm.
Figure 15. CSP filter distribution. (A) Traditional arrow paradigm for S4. (B) Traditional arrow paradigm for S8.
Figure 16. Subject accuracy for each classifier and each acquisition paradigm implemented, with CAR filtering. (A) Arrow paradigm. (B) Hand picture paradigm. (C) Hand video paradigm.
Figure 17. Classification performance for each post-stroke user and for each implemented classifier. (A) S1—Run 3. (B) S2—Run 3. (C) S3—Run 3.
Figure 18. CSP filter distribution for post-stroke subjects: (A) Subject 1—Left Hand affected. (B) Subject 2—Left Hand affected. (C) Subject 3—Right Hand affected.
Figure 19. Subject classification accuracy for each classifier implemented with a T = 2 s window. (A) Run 1—Calibration. (B) Run 2—Testing. (C) Run 3—Testing.
Figure 20. Motor imagery performance for each healthy subject for the best classification accuracy reached for processing time window T = 2 s.
Figure 21. Post-stroke motor imagery classification accuracy for each run and classifier implemented.
Table 1. Description of the condition of post-stroke participants.

| Participant | Condition |
|---|---|
| S1 | Hand: left side affected |
| S2 | Hand: left side affected |
| S3 | Hand: right side affected |
Table 2. Motor imagery accuracy (%) for each subject, classifier, and paradigm, for window sizes T = 2 s and T = 3 s.

T = 2 s:

| Subject | Arrow LDA | Arrow SVML | Arrow SVMP | Picture LDA | Picture SVML | Picture SVMP | Video LDA | Video SVML | Video SVMP |
|---|---|---|---|---|---|---|---|---|---|
| S1 | 73.62 | 75.25 | 72.25 | 81.12 | 80.50 | 81.75 | 81.63 | 83.25 | 82.75 |
| S2 | 75.87 | 62.75 | 64.25 | 62.88 | 63.87 | 66.87 | 65.50 | 63.50 | 59.50 |
| S3 | 65.87 | 65.63 | 63.00 | 69.37 | 69.63 | 69.38 | 70.63 | 71.88 | 70.50 |
| S4 | 66.75 | 64.62 | 63.00 | 67.87 | 66.75 | 63.50 | 63.88 | 62.62 | 60.25 |
| S5 | 67.87 | 67.88 | 67.88 | 73.13 | 74.25 | 71.88 | 60.75 | 58.50 | 60.13 |
| S6 | 64.62 | 63.00 | 61.75 | 73.75 | 71.37 | 71.37 | 63.88 | 60.50 | 66.63 |
| S7 | 68.37 | 68.87 | 69.00 | 83.13 | 84.38 | 86.50 | 90.50 | 90.88 | 92.50 |
| S8 | 61.38 | 62.13 | 62.13 | 71.88 | 72.13 | 71.75 | 60.75 | 61.12 | 62.00 |
| S9 | 66.13 | 62.25 | 63.25 | 73.37 | 71.37 | 73.38 | 81.25 | 71.62 | 68.63 |
| S10 | 77.00 | 76.37 | 75.50 | 96.62 | 95.87 | 96.38 | 96.00 | 96.38 | 95.00 |
| AVG ± STD | 68.75 ± 5.09 | 66.88 ± 5.24 | 66.20 ± 4.75 | 75.31 ± 9.52 | 75.01 ± 9.48 | 75.28 ± 9.91 | 73.48 ± 12.93 | 72.02 ± 13.62 | 71.79 ± 13.51 |

T = 3 s:

| Subject | Arrow LDA | Arrow SVML | Arrow SVMP | Picture LDA | Picture SVML | Picture SVMP | Video LDA | Video SVML | Video SVMP |
|---|---|---|---|---|---|---|---|---|---|
| S1 | 75.87 | 74.50 | 74.62 | 76.75 | 79.87 | 79.25 | 81.13 | 81.63 | 83.13 |
| S2 | 63.12 | 65.12 | 63.50 | 65.38 | 64.50 | 64.50 | 62.62 | 63.50 | 62.25 |
| S3 | 62.12 | 61.50 | 63.50 | 70.87 | 69.87 | 71.63 | 72.25 | 72.50 | 69.50 |
| S4 | 64.75 | 64.00 | 63.75 | 65.63 | 66.00 | 66.37 | 59.63 | 59.88 | 54.87 |
| S5 | 68.00 | 69.12 | 69.12 | 67.88 | 70.63 | 67.13 | 58.13 | 58.50 | 62.38 |
| S6 | 65.37 | 64.12 | 60.50 | 71.63 | 71.88 | 70.38 | 63.88 | 60.50 | 62.62 |
| S7 | 67.37 | 69.12 | 67.12 | 86.12 | 84.88 | 84.38 | 87.00 | 87.63 | 87.38 |
| S8 | 65.37 | 58.00 | 59.62 | 71.62 | 69.25 | 70.12 | 72.12 | 61.12 | 66.38 |
| S9 | 70.63 | 72.00 | 65.38 | 71.75 | 69.87 | 69.50 | 77.13 | 79.38 | 76.63 |
| S10 | 69.50 | 70.75 | 71.12 | 88.38 | 89.75 | 88.00 | 93.62 | 93.75 | 93.75 |
| AVG ± STD | 67.21 ± 4.05 | 66.83 ± 5.12 | 65.83 ± 4.71 | 73.60 ± 7.94 | 73.65 ± 8.34 | 73.13 ± 7.98 | 72.75 ± 12.00 | 71.84 ± 12.98 | 71.89 ± 12.74 |
Table 3. Motor imagery accuracy (%) for each subject, classifier, and paradigm, with window size T = 2 s after applying the CAR filter.

| Subject | Arrow LDA | Arrow SVML | Arrow SVMP | Picture LDA | Picture SVML | Picture SVMP | Video LDA | Video SVML | Video SVMP |
|---|---|---|---|---|---|---|---|---|---|
| S1 | 73.00 | 75.37 | 70.75 | 82.25 | 81.38 | 82.38 | 81.75 | 82.00 | 84.25 |
| S2 | 62.87 | 63.25 | 62.38 | 68.50 | 66.75 | 70.13 | 68.00 | 69.38 | 60.13 |
| S3 | 67.00 | 66.75 | 66.13 | 69.38 | 65.25 | 64.75 | 70.12 | 70.50 | 68.38 |
| S4 | 63.88 | 64.25 | 60.38 | 66.13 | 66.62 | 66.50 | 64.25 | 65.25 | 61.63 |
| S5 | 73.00 | 71.38 | 73.13 | 74.88 | 74.88 | 74.37 | 63.13 | 61.00 | 63.38 |
| S6 | 65.00 | 65.63 | 61.88 | 73.50 | 71.00 | 71.25 | 64.50 | 66.00 | 70.00 |
| S7 | 68.63 | 69.75 | 70.00 | 84.00 | 84.25 | 89.13 | 91.38 | 92.13 | 93.50 |
| S8 | 62.00 | 61.25 | 62.25 | 72.63 | 71.63 | 73.75 | 64.87 | 65.25 | 62.87 |
| S9 | 69.50 | 69.75 | 69.38 | 74.88 | 73.75 | 77.50 | 80.25 | 78.75 | 77.75 |
| S10 | 74.37 | 74.13 | 72.50 | 97.50 | 96.25 | 96.38 | 96.62 | 96.25 | 95.00 |
| AVG ± STD | 67.93 ± 4.50 | 68.15 ± 4.69 | 66.88 ± 4.84 | 76.37 ± 9.31 | 75.18 ± 9.65 | 76.61 ± 10.04 | 74.48 ± 12.24 | 74.65 ± 12.13 | 73.69 ± 13.20 |
Table 4. Summary of p-values obtained with the statistical tests on classification accuracy for the paradigm comparisons arrow vs. picture and arrow vs. video, for window sizes T = 2 s and T = 3 s.

| Classifier | Paradigms | p_ANOVA (T = 2 s) | Wilcoxon (T = 2 s) | p_Bonferroni-corrected (T = 2 s) | p_ANOVA (T = 3 s) | Wilcoxon (T = 3 s) | p_Bonferroni-corrected (T = 3 s) |
|---|---|---|---|---|---|---|---|
| LDA | Arrow vs. Picture | 0.0412 | 0.0488 | 0.1235 | 0.0201 | 0.0039 | 0.0602 |
| LDA | Arrow vs. Video | 0.2110 | 0.3750 | 0.6331 | 0.1315 | 0.1602 | 0.3946 |
| SVM Linear | Arrow vs. Picture | 0.0016 | 0.0020 | 0.0048 | 0.0130 | 0.0195 | 0.0390 |
| SVM Linear | Arrow vs. Video | 0.1398 | 0.3223 | 0.4195 | 0.1669 | 0.2324 | 0.5008 |
| SVM Polynomial | Arrow vs. Picture | 0.0013 | 0.0020 | 0.0040 | 0.0059 | 0.0059 | 0.0177 |
| SVM Polynomial | Arrow vs. Video | 0.1161 | 0.1602 | 0.3483 | 0.0971 | 0.1309 | 0.2912 |
Table 5. Summary of p-values obtained with the statistical tests on classification accuracy for the paradigm comparisons arrow vs. picture and arrow vs. video after applying the CAR filter, for window size T = 2 s.

| Classifier | Paradigms | p_ANOVA | Wilcoxon | p_Bonferroni-corrected |
|---|---|---|---|---|
| LDA | Arrow vs. Picture | 0.0032 | 0.0020 | 0.0096 |
| LDA | Arrow vs. Video | 0.0699 | 0.0600 | 0.2096 |
| SVM Linear | Arrow vs. Picture | 0.0104 | 0.0039 | 0.0312 |
| SVM Linear | Arrow vs. Video | 0.0660 | 0.0400 | 0.1980 |
| SVM Polynomial | Arrow vs. Picture | 0.0026 | 0.0059 | 0.0077 |
| SVM Polynomial | Arrow vs. Video | 0.0737 | 0.0800 | 0.2212 |
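The Bonferroni-corrected values in Table 5 are consistent with multiplying each ANOVA p-value by three, i.e., correcting for three pairwise paradigm comparisons; the comparison count is inferred from the reported numbers, not stated explicitly in this excerpt. A minimal sketch of the correction:

```python
def bonferroni(p_values, n_comparisons):
    """Bonferroni correction: scale each p-value by the number of
    comparisons and cap the result at 1.0."""
    return [min(p * n_comparisons, 1.0) for p in p_values]

# ANOVA p-values from Table 5 (arrow vs. picture, LDA and SVM Linear):
print([round(p, 4) for p in bonferroni([0.0032, 0.0104], 3)])  # -> [0.0096, 0.0312]
```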