Article

A Brain–Computer Interface for Control of a Virtual Prosthetic Hand

by Ángel del Rosario Zárate-Ruiz 1, Manuel Arias-Montiel 1,* and Christian Eduardo Millán-Hernández 2

1 Institute of Electronics and Mechatronics, Technological University of the Mixteca, Huajuapan de León 69000, Oaxaca, Mexico
2 Institute of Computing, Technological University of the Mixteca, Huajuapan de León 69000, Oaxaca, Mexico
* Author to whom correspondence should be addressed.
Computation 2025, 13(12), 287; https://doi.org/10.3390/computation13120287
Submission received: 7 November 2025 / Revised: 30 November 2025 / Accepted: 1 December 2025 / Published: 6 December 2025

Abstract

Brain–computer interfaces (BCIs) have emerged as an option for improving communication between humans and technological devices. This article presents a BCI based on the steady-state visual evoked potentials (SSVEP) paradigm and low-cost hardware to control a virtual prototype of a robotic hand. An LED-based device is proposed as a visual stimulator, and the Open BCI Ultracortex Biosensing Headset is used to acquire the electroencephalographic (EEG) signals for the BCI. The processing and classification of the obtained signals are described. Classifiers based on artificial neural networks (ANNs) and support vector machines (SVMs) are compared, showing that the SVM-based classifiers outperform the ANN-based ones. The classified EEG signals are used to implement different movements in a virtual prosthetic hand using a co-simulation approach, demonstrating the feasibility of implementing BCIs in the control of robotic hands.

1. Introduction

In recent years, more and better ways of communication between humans and technological devices have been sought. In this context, brain–computer interfaces (BCIs) have emerged as a multidisciplinary technology that uses the electrical activity of the cerebral cortex to produce signals that replace, enhance, complement, or improve the physical and cognitive abilities of human beings. According to Maiseli et al. [1], BCIs can be classified according to their application: gaming and entertainment, security and authentication, healthcare, education, advertisement, neuromarketing (commercial marketing using principles of neuroscience and cognitive science), and neuroergonomics (the application of neuroscience to ergonomics).
A BCI system acquires biosignals generated in the brain and extracts their characteristics using an appropriate algorithm. These characteristics are interpreted to generate control signals for a device that allows the user to interact with the environment. There are various techniques to detect brain activity. These techniques can be invasive, such as electrocorticography (ECoG), or non-invasive, such as magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI), functional near-infrared spectroscopy (fNIR), or electroencephalography (EEG). Each of these techniques allows the observation of neuronal activity during a task, providing useful data for medical applications [2]. Non-invasive EEG presents several advantages over other techniques for detecting brain activity: a reduced risk of complications such as infections and scarring (associated with invasive methods), greater comfort for users, and lower cost. These characteristics make non-invasive EEG a valuable tool in the development of BCIs [3,4].

2. Review of Related Work

Among the most common applications of BCIs are the rehabilitation of limbs in people with mobility problems, the recovery of motor skills in disabled people, and robot operation. Elashmawi et al. [3] present an overview of BCI-based machine learning and deep learning algorithms coupled with EEG and motor imagery for stroke rehabilitation. The authors identify six types of rehabilitation in which BCIs have been used: motor, cognitive, neuropsychiatric, walking, visual and hearing, and multimodal rehabilitation. They conclude that, despite the potential of BCIs in medical applications, clinically validated procedures are still required to establish the effectiveness of BCIs in rehabilitation tasks. In addition, this study reveals that only 18% of the more than 60 works consulted acquired their own datasets for the development and validation of BCIs. The most widely used dataset is BCI Competition IV, whose data were acquired from nine subjects, with two sessions recorded per subject. The authors summarize the results on the classification of EEG signals using motor imagery and deep learning methodologies from 28 articles published between 2016 and 2024. These results report classification performance from 68% to 94.5%, and all of them use convolutional neural networks (CNNs) as the deep learning model.
Ahmed et al. [5] examine the impact of gamified BCIs and brain–machine interfaces (BMIs) on people with disabilities, focusing on the potential of these technologies for neurorehabilitation and skill development. The authors mention that significant improvements can be achieved in the speech, motor abilities, and cognitive function of disabled people. However, they recognize some areas of opportunity, such as the need to develop portable, affordable, and user-friendly systems.
In [6], a review of robots controlled by motor imagery (MI) BCIs is presented. Three important aspects are examined: the evocation paradigms, the signal processing algorithms, and the application. Regarding applications, in addition to medical rehabilitation, others stand out, such as virtual helicopters, unmanned aerial vehicles (UAVs), mobile robots, and robotic arms. The study is centered on MI and hybrid paradigms, for which the most used classifiers are support vector machines (SVMs) and linear discriminant analysis (LDA). The authors report the common spatial pattern (CSP) as the main feature extraction method used in MI-based BCIs. The classification accuracy of the reviewed BCIs is not reported.
More recently, Zhang et al. [7] examine 87 studies published between 2018 and 2023 that explore EEG-based brain–robot interaction (BRI) systems. The authors classify robots into seven categories, focusing on their functionality, objectives, application domains, and implementation scenarios: industrial robots, service robots, medical robots, social robots, educational robots, exploratory robots, and autonomous vehicles. They find that service robots are the most used, followed by industrial and medical robots. Some challenges in EEG-based BRI research are identified, mainly in improving signal quality and acquisition, and issues related to safety and ethical concerns. The authors propose to explore the use of dry electrodes with flexible and wearable arrays to improve contact and reduce artifacts in the acquired signals, as well as the use of machine learning algorithms to improve the flexibility and adaptability of EEG–robot interaction systems. They mention the need to establish protocols to inform users about data collection, storage, and the possible use of their EEG data through informed consent to respect user privacy and autonomy. In their research, the authors find that the Emotiv EPOC is the preferred device due to its portability and consistent signal quality, while the Open BCI headset does not appear in the references consulted. In addition, they identify six types of feedback enhancement in EEG signal decoding, with feedback through visual stimuli being the most frequently used in the development of BCIs. Despite the recent increase in the application of deep learning techniques in EEG signal decoding tasks such as feature extraction and classification, the authors find that traditional machine learning methods such as LDA and SVM remain popular in EEG-based BCIs for their reliability and robustness.
The control of prosthetic devices is an emerging field of application for EEG-based BCIs [8]. In [9], a brain–machine interface is presented for the control of an anthropomorphic robotic arm using invasive sensors. Neural signals are obtained from two 96-channel intracortical electrode arrays implanted in the subject’s left motor cortex. The results show that object interaction is an important factor in extracting these signals and that high-dimensional operation of prosthetic devices can be achieved with simple decoding algorithms. An extension of this work is given in [10], where tactile sensations are incorporated to improve the control of the robotic arm. Cantillo-Negrete et al. [11] report the use of an EEG-based BCI coupled with a robotic hand orthosis in stroke rehabilitation of the upper limb. The authors conclude that their approach could promote neuroplasticity and could be as effective as conventional therapy for upper limb recovery, but that this needs to be evaluated in clinical trials. Recently, Zhang et al. [12] developed a BCI based on steady-state visual evoked potentials (SSVEP) to enhance the capabilities of a prosthetic hand carrying out eight different movements: grasp, put down, pinch, point, fist, palm push, hold pen, and initial position. The experimental platform consists of a 32-channel EEG cap, an EEG amplifier (Neuracle, NeuSen W32), a portable computer (Intel i5-1135G7, 2.4 GHz), and a commercial 8-DOF prosthetic hand (Inspire Robot Technology, RH56DFX-2R), which represents an approximate cost of USD 80,000.
According to Abiri et al. [13], the most used EEG-based BCI paradigms can be classified into motor imagery, imagined body kinematics, external stimulation, error-related potential, and hybrid paradigms. The SSVEP paradigm belongs to the external stimulation paradigms and uses visual stimulation at a specific frequency in the range of 3.5–75 Hz to generate an electrical signal of the same frequency in the visual cortex of the brain [14]. The visual stimulus can be generated by a light-emitting diode (LED), an image on a liquid crystal display (LCD), an animation, or a pattern image. Because it relies on external stimuli, this paradigm requires less training time than other BCI techniques. Other advantages, such as high accuracy and a high information transfer rate (ITR), have also been demonstrated. In addition, the stimuli can flash at many different frequencies, thus providing many commands and more degrees of freedom to control prosthetic devices [13,15].
In the context specified in the paragraphs above, the main contributions of this article are listed below:
  • The development of a signal processing and classification system for the Open BCI Ultracortex Biosensing Headset low-cost EEG hardware;
  • The generation of a database for the SSVEP paradigm implementation by the Open BCI Ultracortex Biosensing Headset;
  • The use of the co-simulation approach in the control of a virtual prosthetic hand to avoid the use of complex and expensive hardware to test BCI performance.

3. Materials and Methods

Due to the aforementioned advantages, in this work, the SSVEP paradigm is used to acquire the EEG signals for the BCI development.

3.1. Visual Stimulator

LED-based visual stimulators are the simplest, most versatile, and cheapest [15]. The frequency and intensity of the stimulus can be easily controlled to contrast with the environment and ensure the generation of steady-state potentials.
The implemented design consists of an Atmega328p microcontroller (Microchip Technology, Chandler, AZ, USA), a 5 V power supply, the LED stimulator, and an adjustable power supply, as shown in Figure 1.
The microcontroller was used to generate pulse width modulation (PWM) signals that set the frequency of the visual stimulator. The PWM signal was obtained directly from the timers to ensure a stable signal with the correct frequency and without delays. Six frequencies were implemented in pairs: 8, 12, 13, 15, 31, and 33 Hz. The first pair was chosen because stimuli close to 10 Hz (one of the resonant frequencies of the brain in SSVEP) generate a better SSVEP response than more distant frequencies [16,17]. The frequencies of 13 and 15 Hz were considered to obtain response signals with adequate power without the influence of the fundamental frequency. The highest frequencies (31 and 33 Hz) were chosen to minimize the visual fatigue caused by the stimuli.
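To illustrate the timer arithmetic behind the stimulation frequencies, the following MATLAB sketch computes compare-match values for a 16-bit timer in CTC (toggle-on-compare) mode. The 16 MHz clock and the prescaler of 256 are assumptions for illustration and do not necessarily match the firmware actually used.

```matlab
% Compare-match values for an Atmega328p 16-bit timer in CTC mode with
% "toggle output on compare match": f_out = f_clk / (2 * N * (1 + OCR)).
% Assumed (illustrative) settings: 16 MHz clock, prescaler N = 256.
fclk = 16e6;                     % CPU clock in Hz (assumed)
N    = 256;                      % timer prescaler (assumed)
f    = [8 12 13 15 31 33];       % target stimulation frequencies in Hz

OCR    = round(fclk ./ (2 * N * f) - 1);    % integer compare values
f_real = fclk ./ (2 * N * (OCR + 1));       % frequencies actually produced

table(f(:), OCR(:), f_real(:), ...
    'VariableNames', {'f_target_Hz', 'OCR', 'f_actual_Hz'})
```

The small difference between the target and the produced frequencies illustrates why the compare values are derived from the timer equation rather than approximated with software delays.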
For the visual stimulator, five red LEDs were used. According to [18], this color generates a better SSVEP response than other colors. The electrical diagram of the LED-based visual stimulator is presented in Figure 2.

3.2. EEG Signals Acquisition

The materials used in the acquisition of EEG signals are listed below:
  • Cyton data acquisition card by OpenBCI;
  • Ultracortex Mark 4 headset (OpenBCI, Brooklyn, NY, USA);
  • Four dry (non-spiky) electrodes;
  • User’s GUI for card-PC connection;
  • Visual stimulator.
Five participants were recruited to acquire EEG signals under the following eligibility requirements:
  • Age between 18 and 25 years;
  • Completion of a survey to determine good overall health;
  • No history of epileptic episodes;
  • Students of the Technological University of the Mixteca.
Some exclusion criteria were defined to ensure the safety of the participants and the proper performance of the acquisition system. These exclusion criteria are as follows:
  • Vision problems, including total or partial blindness;
  • Eye diseases such as diabetic retinopathy or macular degeneration;
  • Nervous system disorders such as epilepsy;
  • Multiple sclerosis or Parkinson’s disease;
  • People who have suffered a cerebrovascular accident or stroke;
  • Cardiovascular diseases such as high blood pressure or use of pacemakers;
  • Treatments with medications that affect the nervous system, such as tranquilizers or anxiolytics, etc.;
  • Attention or concentration problems such as autism, attention deficit disorder, or hyperactivity.
It is important to mention that the participants signed an informed consent form. This form and the experimental protocol for the acquisition of EEG signals, described below, were approved by the Research Ethics Committee of the University of the Sierra Sur in Mexico. This approval can be consulted in [19].
Experiments for the acquisition of EEG signals were carried out in a controlled environment with natural illumination (without direct exposure to sunlight), low noise, and a room temperature between 20 and 25 °C. The technical conditions included a distance of 50 to 100 cm to the visual stimulator, a chair without armrests, and the stimulator located at the center of the visual field, taking the central LED of the stimulator as reference.
The procedure used for the acquisition of EEG signals is described below.
  • Preparation
    • The length between the user’s nasion and inion is measured to determine the Cz point.
    • The EEG headset is placed, ensuring that the Cz point coincides with that of the user.
    • The electrodes are placed in the POz, O1, Oz, and O2 positions according to the international 10–20 standard [20] to cover the visual cortex in the occipital lobe, where SSVEPs are generated.
    • The contact of the electrodes with the scalp is verified by visual inspection and by the GUI indicator.
    • The quality of the EEG signals is verified through the impedance indicator and the detection of alpha waves.
  • Task
    • Visualization of the stimulus located 70 cm from the user. The user is instructed to sit in a comfortable and relaxed position.
    • Four sessions are performed with 5 repetitions per session.
    • Each repetition consists of two phases: relaxation and attention. Each phase lasts 2 min. Each repetition is performed 2 min after the previous one has been completed.
    • During the EEG signal acquisition process, the user is monitored to respond to any perceived discomfort or inconvenience. Involuntary movements and external factors that may affect the process are also recorded.
Data acquisition was performed using the Cyton card and the Ultracortex headset with four electrodes located at the points O1, Oz, O2, and POz, according to the 10–20 standard. The BIAS electrode was placed on the left earlobe and the reference electrode on the right earlobe to attenuate myographic artifacts. In Figure 3, the final configuration of the Ultracortex headset is shown.
For communication between the data acquisition card and the PC, a Bluetooth receiver provided by the manufacturer was used, and the reception and saving of the data were performed using the user interface (GUI).

3.3. EEG Signals Preprocessing

This stage consists of two phases. In the first one, the files obtained in the acquisition stage are modified and formatted so that they can be used in the second phase, which consists of digitally processing the signals to eliminate artifacts and noise and segmenting them to obtain the features used in the classification stage.
The obtained data are in .txt format and are converted to .csv files to facilitate the handling of data in MATLAB® software version 2020b. The signals were segmented into two phases, relaxation and attention, by a MATLAB® script, which can be consulted in [19].
The sequence for the digital processing of the signals is illustrated in Figure 4. This sequence is applied to signals of both relaxation and attention phases.
A band-pass filter was implemented for each frequency pair, with its center frequency exactly halfway between the two frequencies of interest. This filtering removes external artifacts such as power line noise and most ocular and myoelectric artifacts.
To calculate the features, the signals were segmented using 3 s windows with an overlap of 80%. Taking into account signals of 15 s in duration, 20 windows of 3 s were obtained.
For the FFT calculation, the MATLAB® function “fft()” was used. Applying the function to each channel of the EEG signals, four features were obtained (one per channel). In addition, the signal power was recorded at each frequency to obtain four more characteristics.
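As an illustration of the processing chain described above (filtering, 3 s windows with 80% overlap, and FFT-based features), a MATLAB sketch is given below. The 250 Hz sampling rate, the fourth-order Butterworth filter, and the pass band around the 31/33 Hz pair are assumptions for illustration, not the exact parameters of the implemented filters.

```matlab
fs  = 250;                        % Cyton sampling rate in Hz (assumed)
flo = 29; fhi = 35;               % assumed pass band around the 31/33 Hz pair
[b, a] = butter(4, [flo fhi] / (fs/2), 'bandpass');   % 4th-order Butterworth (assumed)
eeg = filtfilt(b, a, raw_eeg);    % raw_eeg: samples x 4 channels, zero-phase filtering

win  = 3 * fs;                    % 3 s window
step = round(0.2 * win);          % 80% overlap -> 20% step
starts = 1:step:(size(eeg,1) - win + 1);

feat = zeros(numel(starts), 8);   % per window: 4 dominant frequencies + 4 powers
fax  = (0:win-1) * fs / win;      % frequency axis of the FFT
for k = 1:numel(starts)
    seg  = eeg(starts(k):starts(k)+win-1, :);
    S    = abs(fft(seg)).^2 / win;            % power spectrum per channel
    half = 2:floor(win/2);                    % skip DC, keep positive frequencies
    [pk, idx]  = max(S(half, :));             % dominant bin per channel
    feat(k, :) = [fax(half(idx)), pk];        % [f1..f4, P1..P4]
end
```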
The Hjorth parameters—namely Activity (A), Mobility (M) and Complexity (C)—are statistical characteristics of a time-variant signal which allow the determination of characteristics in the frequency domain. Recently, they have been used as an advanced feature extraction methodology for EEG signals [21]. Mathematically, the Hjorth parameters of a signal x ( t ) are calculated as [2]
A = \mathrm{var}(x(t)),
M = \sqrt{\mathrm{var}(\dot{x}(t)) / \mathrm{var}(x(t))},
C = M(\dot{x}(t)) / M(x(t)),
where \mathrm{var}(\cdot) denotes the variance, and \dot{x}(t) and \ddot{x}(t) are the first and second time derivatives of the signal x(t), respectively.
For the calculation of the Hjorth parameters, the pseudocode described in Algorithm 1 was implemented.
Algorithm 1 Pseudocode for the calculation of the Hjorth parameters.
  • Input: data signal vector
  • Initialize variables                ▷ f holds the signal window; h is the sampling step
  • f ← signal_vector
  • A ← 0, M ← 0, C ← 0
  • variance ← 0, dsignal ← 0, d2signal ← 0
  • h ← 1
  • Calculate derivatives and variance
  • variance ← var(f)
  • dsignal ← diff(f)/h
  • d2signal ← diff(dsignal)/h
  • Calculate Hjorth parameters
  • A ← variance
  • M ← sqrt(var(dsignal)/variance)
  • C ← sqrt(var(d2signal)/var(dsignal))/M
  • Return: A, M, C
Following this procedure, three characteristics per channel and twelve per window were obtained.
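A runnable MATLAB counterpart of Algorithm 1 for a single channel might look as follows; the function name and the unit sampling step are illustrative choices.

```matlab
function [A, M, C] = hjorth_params(x, h)
% HJORTH_PARAMS  Activity, Mobility, and Complexity of one EEG channel.
%   x : column vector with the samples of one window
%   h : sampling step (h = 1 reproduces the sample-wise differences of Algorithm 1)
dx  = diff(x)  / h;                 % first derivative (finite differences)
ddx = diff(dx) / h;                 % second derivative

A = var(x);                         % Activity
M = sqrt(var(dx) / var(x));         % Mobility
C = sqrt(var(ddx) / var(dx)) / M;   % Complexity = Mobility(dx) / Mobility(x)
end
```

Applying the function to each of the four channels of a window yields the twelve Hjorth features per window mentioned above.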
Statistical characterization of the EEG signals was performed by calculating the mean absolute value (MAV), the simple square integral (SSI), and the waveform length (WL) with Equations (4)–(6):
\mathrm{MAV} = \frac{1}{N} \sum_{i=1}^{N} |x_i|,
\mathrm{SSI} = \sum_{i=1}^{N} |x_i|^2,
\mathrm{WL} = \sum_{i=1}^{N-1} |x_{i+1} - x_i|,
where N is the number of samples of the segment to be analyzed, and x_i is the i-th sample of the segment. Twelve characteristics per window were obtained.
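In MATLAB, these three statistics reduce to one line each; the sketch below assumes that x holds the samples of one window, with one column per channel.

```matlab
% Time-domain statistics for one window (x: samples x channels).
MAV = mean(abs(x));          % mean absolute value, Eq. (4)
SSI = sum(abs(x).^2);        % simple square integral, Eq. (5)
WL  = sum(abs(diff(x)));     % waveform length, Eq. (6)
```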
For the entropy calculation, the relative probability of the signal is first calculated by the MATLAB® function “histcounts()”, and the 0 probabilities are changed to 1. After that, the Shannon entropy of the signal is calculated by
H(X) = -\sum_{i=1}^{M} P(x_i) \log_2 P(x_i),
where X is a random variable, M is the number of possible states, P(x_i) is the probability of the state x_i, and H(X) is the entropy associated with X. One characteristic per channel and four per window were obtained.
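A MATLAB sketch of the entropy computation for one channel is shown below; the number of histogram bins is an assumption, since the text does not specify it.

```matlab
% Shannon entropy of one channel following the procedure described above.
nbins = 64;                                         % number of bins (assumed)
p = histcounts(x, nbins, 'Normalization', 'probability');
p(p == 0) = 1;                                      % zero probabilities -> 1, so log2(1) = 0
H = -sum(p .* log2(p));                             % Shannon entropy
```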
Finally, the obtained features were organized into an N × 36 matrix, where each column corresponds to one feature and each row to a window.

3.4. EEG Signal Classification

The classification system consists of two stages: training and real-time classification. According to the proposed application and the selected paradigm, two classification approaches were considered: support vector machines (SVMs) and artificial neural networks (ANNs). These techniques do not require excessive computational effort and present a performance comparable to that of more sophisticated approaches that require greater resources [8]. Classification systems based on SVM and ANN were evaluated and compared. For the training of the classification systems, four phases were developed: training with all the features of the signals, training with important characteristics for the SSVEP, training using principal component analysis (PCA), and training using linear discriminant analysis (LDA). This procedure is illustrated in Figure 5.
The first phase consists of training the models and adjusting their hyperparameters using all 36 features. In the second phase, only the features that are theoretically relevant for the SSVEP paradigm were used: the dominant frequency in the FFT, the power at this frequency, and the Activity Hjorth parameter. These features are strongly related to the stimulation frequency that is expected to appear in the brain response. For the third and fourth phases, dimensionality reduction techniques were used to condense the 36 features into a smaller set that still describes the dataset; with PCA, we sought to retain at least 70% of the relevant information, which led to choosing four or more principal components. Finally, linear discriminant analysis was applied to the training data to improve the separability between classes. Each of these phases was applied to the three frequency pairs, and the models with the best training and validation accuracy were selected for subsequent comparison in online classification.
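A sketch of how the third and fourth training phases can be realized with standard MATLAB toolbox calls is given below; the variable names (X, a windows-by-36 feature matrix, and y, the class labels) are illustrative.

```matlab
% Phase 3: PCA keeping enough components to explain at least 70% of the variance.
[coeff, score, ~, ~, explained] = pca(X);      % X: windows x 36 features
k    = find(cumsum(explained) >= 70, 1);       % smallest number of components
Xpca = score(:, 1:k);                          % reduced feature set

% Phase 4: LDA projection to improve separability between the two classes.
lda  = fitcdiscr(X, y);                        % y: one class label per window
w    = lda.Coeffs(1, 2).Linear;                % linear discriminant direction
Xlda = X * w;                                  % one-dimensional projected feature
```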
For the evaluation of the classification models, confusion matrices and the metrics associated with them were used. These metrics are accuracy, precision, sensitivity, specificity, and negative predictive value (NPV); see Figure 6.
For each frequency pair, ANNs with different characteristics were trained, implementing each of the four phases described above. In total, approximately 200 ANNs were trained, but only the results of those that showed the best classification performance are reported in Figure 7 and Table 1 (for frequencies of 31 and 33 Hz).
For SVM training, the dataset was divided into training and testing sets (80% and 20%, respectively). For all frequencies, eight SVMs were trained, one for each of the four phases previously described combined with each of the two available kernels, linear and RBF. The results are summarized in Table 1.
From Table 1, one can observe that, in general, SVMs present better performance than ANNs. Note that the models generated for the frequencies of 31 and 33 Hz offer the best performance. Cross-validation was performed for both models to verify that they generalize correctly and remain stable. Table 2 shows the evaluation results obtained during K-fold cross-validation with k = 10 for each of the models trained for the frequencies of 31 and 33 Hz, which demonstrated the best performance in terms of accuracy. The mean accuracy in cross-validation for the ANN was 95.58% with a variance of 0.17%, indicating that the results are not sensitive to the way the training and test data are partitioned. For the SVM, the cross-validation accuracy had a mean of 96.21% and a standard deviation of 0.19%. These results indicate that both classification systems are insensitive to how the data are partitioned and, given the low dispersion across folds, that the models generalize correctly and are robust to unseen inputs.
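The training, hold-out evaluation, and 10-fold cross-validation of an SVM can be sketched in MATLAB as follows; the RBF kernel and standardization options shown here are illustrative and correspond to only one of the configurations explored.

```matlab
% 80/20 split, SVM training, and K-fold cross-validation (illustrative sketch).
cv  = cvpartition(y, 'HoldOut', 0.2);          % y: numeric labels (0/1) per window
Xtr = X(training(cv), :);  ytr = y(training(cv));
Xte = X(test(cv), :);      yte = y(test(cv));

svm = fitcsvm(Xtr, ytr, 'KernelFunction', 'rbf', 'Standardize', true);
acc = mean(predict(svm, Xte) == yte);          % validation accuracy

cvsvm = crossval(svm, 'KFold', 10);            % 10-fold cross-validation
cvacc = 1 - kfoldLoss(cvsvm);                  % mean cross-validation accuracy
```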
For the final selection of the classification system, other aspects were taken into account with the aim of simplifying model fitting to new data and, therefore, to new users. These aspects are as follows:
  • Ease of training;
  • Ease of adjustment to new data;
  • Computational cost.
Due to the aforementioned aspects, the classification system based on SVMs was selected. This system requires less heuristic tuning for training, which implies a shorter training time and a lower computational cost, and reduces the risk of overfitting.

3.5. Virtual Prosthetic Hand

As a case of application of the developed BCI, the control of a virtual prototype of a prosthetic hand was proposed. A co-simulation approach was considered to avoid the use of sophisticated and expensive hardware for the BCI evaluation. Collaborative simulation, or co-simulation, allows the use of realistic models or virtual prototypes in the design, analysis, and control of complex engineering systems by using specialized computer software. This approach has been used in robotic systems because it reduces the time and resources needed to develop such systems [22].
The robotic hand considered in this work is based on the open-source design presented in [23]. The original design was modified to be used in a MATLAB®-ADAMS/View co-simulation, mainly because the ADAMS student license restricts simulations to a maximum of 40 parts. The modifications consisted of removing components related to motors, gears, belts, and screws, and grouping other elements into subassemblies without altering the movement of the robotic hand. The virtual prototype in the ADAMS/View environment is shown in Figure 8. This prototype considers two DOFs per finger to actuate the proximal and distal phalanges in flexion/extension. The proximal phalanx of the thumb has 1 DOF, while the distal phalanx, which performs the opposition movement, is able to rotate around the three coordinate axes. Sensors were placed to measure the angular displacement of the proximal and distal phalanges of each finger.

3.6. Systems Integration

For the communication between MATLAB® and the GUI where the BCI signals are acquired, the Lab Streaming Layer (LSL) protocol was selected. This method allows for sending and receiving time-series data streams, reducing data loss and simplifying the transmission of electrophysiological data between different devices. For data reception between the GUI and MATLAB® by LSL, the LSL library for MATLAB® was used [24].
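A minimal sketch of the data reception in MATLAB through liblsl-Matlab is given below; the stream type, window length, and sampling rate are assumptions for illustration.

```matlab
% Receiving EEG samples from the acquisition GUI through LSL (sketch).
lib     = lsl_loadlib();                          % load the LSL library
streams = lsl_resolve_byprop(lib, 'type', 'EEG'); % find the EEG stream on the network
inlet   = lsl_inlet(streams{1});                  % open an inlet on the first match

fs  = 250;                                        % sampling rate in Hz (assumed)
buf = [];
while size(buf, 1) < 3 * fs                       % collect one 3 s window
    [sample, ~] = inlet.pull_sample();            % one sample across all channels
    buf = [buf; sample];                          %#ok<AGROW>
end
```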
The MATLAB®-ADAMS/View co-simulation is carried out by exporting the robotic hand model to the MATLAB®-Simulink environment, creating a plant block with inputs to receive information from the classifier system, and outputs to visualize the measured motion of the virtual robotic hand. A more detailed procedure to implement a MATLAB®-ADAMS/View co-simulation can be consulted in [25].
Once the interfaces between the GUI, MATLAB® and ADAMS/View were obtained, a control script was developed to carry out the complete process from the acquisition of the EEG signals to the movement of the virtual robotic hand. Figure 9 shows the activities to implement this process in real time.

4. Results

This section presents the results of different tests performed on the different components of the BCI and on the integrated system.

4.1. Performance of the Classification Model Online

For this test, two datasets different from those used for the training phase were considered. These EEG signals were obtained under the same conditions as those used in the training of the classification model. Once obtained and filtered, these signals were divided into 3 s windows with an 80% overlap, resulting in 39 windows: 19 for the 31 Hz stimulus and 20 for the 33 Hz stimulus. Using these data for the validation of the classification model, the results summarized in the confusion matrix in Figure 10 were obtained. A similar behavior to that shown in the training phase was observed. The SVM performance varied by 2.7% compared to the training phase, confirming the correct generalization of the classification model.

4.2. Performance of the BCI Classification

For this test, the BCI is used to validate the performance of the classification system and to demonstrate the correct integration of the acquisition, filtering, and characterization of the EEG signals. In the BCI, three 3 s windows with an 80% overlap are used. Each window is independently filtered, characterized, and classified. After the classification of each window, the statistical mode of the three classifications is used to select the output, avoiding false positives for the frequency being analyzed. Taking into account the duration of the signal for each stimulus frequency, 4 samples of 4.2 s each were obtained, resulting in 8 samples for the BCI validation.
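The decision rule described above can be written compactly in MATLAB; the variable featWins, holding one feature row per window, is a hypothetical name used only for illustration.

```matlab
% Majority vote over the three classified windows (illustrative sketch).
labels  = predict(svm, featWins);   % featWins: 3 x 36, one feature row per window
command = mode(labels);             % statistical mode selects the BCI output (0 or 1)
```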
The classification performance was 100%, as shown in Figure 11. In addition, the correct integration of the different modules comprising the BCI was verified, as well as the correct acquisition of data from the GUI.

4.3. Virtual Robotic Hand Motion

Once the EEG signals were classified, they were used to control the movement of the virtual prosthetic hand. Two different gestures were proposed: greeting and fine pincer. Each movement was previously characterized. The initial position of the virtual robotic hand was with the palm open and the fingers fully extended as shown in Figure 8. As a reference for the movements described below, the flexion/extension movement of the index, middle, ring, and little fingers was carried out around the X-axis, while the proximal phalanx of the thumb rotated around the Y-axis. In this way, the distal phalanx of the thumb was able to rotate around the three coordinate axes.
To carry out these tests, the MATLAB®-ADAMS/View co-simulation was executed in such a way that the input data for the ADAMS model were sent from the MATLAB® environment, as mentioned in Section 3.6.
The greeting gesture was generated by the motion of the thumb and the ring and little fingers, as illustrated in Figure 12. Linear trajectories from the initial position to the final position were implemented for each finger phalanx. These trajectories are shown in Figure 13.
The fine pincer gesture is shown in Figure 14. As can be observed, this gesture involves the movement of five fingers.
The trajectories implemented for the movement of each thumb’s phalanx are presented in Figure 15. It is important to note that the distal phalanx has rotational movement around the three axes.
In Figure 16, the trajectories for the phalanges of the index and middle fingers are shown. Since the movements of the little and ring fingers for this gesture are the same as for the greeting gesture, they are omitted.

4.4. Overall Evaluation

To carry out the integration tests of the developed subsystems, the following steps were followed:
  • Open the GUI for data reading.
  • Load the validation data into the GUI.
  • Start LSL communication from the GUI.
  • Start the EEG signals acquisition from the GUI.
  • Execute the MATLAB® script for signal classification and ADAMS/View co-simulation.
In each of the tests carried out, the correct classification of the EEG signals was verified, as well as the movement of the virtual robotic hand. For each test, the gesture to be performed was defined according to the frequency identified by the classifier. The frequency of 31 Hz (output 0 of the classifier) was assigned to the fine pincer gesture, while the 33 Hz frequency (output 1 of the classifier) was assigned to the greeting one.
In Figure 17, the GUI and the virtual prototype in the ADAMS/View environment during the integration tests are shown. In reference [26], a short video of the EEG signal acquisition and the MATLAB®-ADAMS/View co-simulation can be viewed. Each complete test takes an average of 1 to 2 min, due to the high computational cost of data acquisition, classification, and co-simulation. This time may vary depending on the characteristics of the computer used to perform the tests. The characteristics of the processing unit used are the following: CPU—Ryzen 7 4800; GPU—NVIDIA GeForce GTX 1650 Ti; RAM—8 GB.

5. Discussion

In this work, an EEG-based BCI for the control of a virtual prosthetic hand was presented. The BCI uses the Open BCI open-source commercial device for the acquisition of EEG signals and its integration with MATLAB® for signal processing, feature extraction, and online classification. A conditioning system for the EEG signals based on band-pass filters was developed to reduce the noise inherent to acquisition through superficial EEG electrodes and to mitigate artifacts in the signals. The paradigm used for the acquisition of EEG signals was SSVEP because it requires less training time with the user than MI and P300. The visual stimulator was developed using an easy-to-implement, low-cost LED matrix. It is important to remark that the quality of the acquired EEG signals tended to be unstable due to the rigidity of the 3D-printed headset; it lacks any margin for adaptation to the shape of the test subjects’ skulls, except for the adjustment nodes provided by the manufacturer. Therefore, manual measurements were required to determine the points of interest and thus achieve accurate signal acquisition. Another important aspect related to quality is the repeatability of the measurements. It was observed that when putting on the headset, acquiring signals, and then removing and replacing it, following the procedure described in Section 3.2, signals with different statistical characteristics were obtained, as well as variations in the frequency activation of the signal acquired by each node. These variations made it difficult to generalize the classification models. Therefore, it was decided to conduct sessions in which the headset was not removed and only minimal positioning corrections were made.
Two classification systems were implemented, one based on ANNs and the other on SVMs. The latter showed better overall performance, with a validation accuracy of 95% and correct classification in 100% of the eight tests performed with the BCI. The proposed training procedure for the classifiers, based on the extraction and use of different features, improved their performance.
In Table 3, a comparison of classification accuracy between our results and other SSVEP-based BCIs is presented.
The co-simulation approach represents an alternative for testing and implementing BCIs without the need for complex and expensive hardware. Using these tools, it is possible to generate virtual prototypes that reflect the realistic behavior of complex robotic systems in which BCIs can be applied, such as robotic prostheses, for example.

6. Conclusions and Future Work

We have developed a BCI based on a device scarcely reported in the literature (the Open BCI Ultracortex headset, OpenBCI, Brooklyn, NY, USA) and on the SSVEP paradigm, for which a database was created to train and test a classifier system based on SVMs. The database is available in [33]. The classifier performance obtained was 95% in training, 92.7% in online tests, and 100% in the 8 tests carried out for BCI validation. These results are superior to most of those reported in the reviews mentioned above, even those that use more complex classification systems based on deep learning techniques such as convolutional neural networks (CNNs) or deep neural networks (DNNs). On the other hand, co-simulation enables EEG-based BCIs to be tested and validated without excessive computational effort, an approach that has been scarcely reported in the BCI literature.
In future work, we propose to extend the database of EEG signals acquired with Open BCI to other stimulus frequencies, expanding the number of control commands and thus generating more movements in the virtual prototype of the prosthetic hand, and even to use the data for other applications. Another activity necessary to implement the BCI in the proposed application is to perform signal acquisition tests on people with upper limb amputations, since this condition can alter EEG signals and affect the performance of the proposed classification systems. Finally, integration of the developed BCI with an experimental prototype of a robotic hand is proposed to verify the communication requirements.

Author Contributions

Conceptualization, Á.d.R.Z.-R. and M.A.-M.; methodology, Á.d.R.Z.-R., M.A.-M. and C.E.M.-H.; software, Á.d.R.Z.-R.; validation, Á.d.R.Z.-R. and M.A.-M.; formal analysis, Á.d.R.Z.-R., M.A.-M. and C.E.M.-H.; investigation, Á.d.R.Z.-R.; resources, M.A.-M.; data curation, Á.d.R.Z.-R.; writing—original draft preparation, M.A.-M.; writing—review and editing, Á.d.R.Z.-R., M.A.-M. and C.E.M.-H.; visualization, Á.d.R.Z.-R. and M.A.-M.; supervision, M.A.-M.; project administration, M.A.-M.; funding acquisition, M.A.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The procedure for acquiring biological signals was approved by the Bioethics Committee of UNIVERSIDAD DE LA SIERRA SUR (folio number CEI-01/2023 and date of approval 7 February 2023).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original data presented in the study are openly available in Mendeley Data at https://data.mendeley.com/datasets/f8v96skxj3/1 (accessed on 30 November 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Maiseli, B.; Abdalla, A.T.; Massawe, L.V.; Mbise, M.; Mkocha, K.; Nassor, N.A.; Ismail, M.; Michael, J.; Kimambo, S. Brain–computer interface: Trend, challenges, and threats. Brain Inform. 2023, 10, 20. [Google Scholar] [CrossRef]
  2. Rao, R.P.N. Brain-Computer Interfacing: An Introduction, 1st ed.; Cambridge University Press: New York, NY, USA, 2013; pp. 18–32. [Google Scholar]
  3. Elashmawi, W.H.; Ayman, A.; Antoun, M.; Mohamed, H.; Mohamed, S.E.; Amr, H.; Talaat, Y.; Ali, A. A comprehensive review on brain–computer interface (BCI)-based machine and deep learning algorithms for stroke rehabilitation. Appl. Sci. 2024, 14, 6347. [Google Scholar] [CrossRef]
  4. Prapas, G.; Angelidis, P.; Sarigiannidis, P.; Bibi, S.; Tsipouras, M.G. Connecting the brain with augmented reality: A systematic review of BCI-AR systems. Appl. Sci. 2024, 14, 9855. [Google Scholar] [CrossRef]
  5. Ahmed, B.; Khan, S.; Lim, H.; Ku, J.; Amr, H.; Talaat, Y.; Ali, A. Challenges and opportunities of gamified BCI and BMI on disabled people learning: A systematic review. Electronics 2025, 14, 491. [Google Scholar] [CrossRef]
  6. Zhang, J.; Wang, M. A survey on robots controlled by motor imagery brain-computer interfaces. Cogn. Robot. 2021, 1, 12–24. [Google Scholar] [CrossRef]
  7. Zhang, Y.; Rajabi, N.; Taleb, F.; Matviienko, A.; Ma, Y.; Björkman, M.; Kragic, D. Mind meets robots: A review of EEG-based brain-robot interaction systems. Int. J. Hum.-Comput. Interact. 2025, 41, 12784–12815. [Google Scholar] [CrossRef]
  8. Baniqued, P.D.E.; Stanyer, E.C.; Awais, M.; Alazmani, A.; Jackson, A.E.; Mon-Williams, M.A.; Mushtaq, F.; Holt, R.J. Brain–computer interface robotics for hand rehabilitation after stroke: A systematic review. J. NeuroEng. Rehabil. 2021, 18, 15. [Google Scholar] [CrossRef] [PubMed]
  9. Wodlinger, B.; Downey, J.E.; Tyler-Kabara, E.C.; Schwartz, A.B.; Boninger, M.L.; Collinger, J.L. Ten-dimensional anthropomorphic arm control in a human brain-machine interface: Difficulties, solutions, and limitations. J. Neural. Eng. 2015, 12, 016011. [Google Scholar] [CrossRef]
  10. Flesher, S.N.; Downey, J.E.; Weiss, J.M.; Hughes, C.L.; Herrera, A.J.; Tyler-Kabara, E.C.; Boninger, M.L.; Collinger, J.L.; Gaunt, R.A. A brain-computer interface that evokes tactile sensations improves robotic arm control. Science 2021, 372, 831–836. [Google Scholar] [CrossRef]
  11. Cantillo-Negrete, J.; Carino-Escobar, R.I.; Carrillo-Mora, P.; Rodriguez-Barragan, M.A.; Hernandez-Arenas, C.; Quinzaños-Fresnedo, J.; Hernandez-Sanchez, I.R.; Galicia-Alvarado, M.A.; Miguel-Puga, A.; Arias-Carrion, O. Brain-computer interface coupled to a robotic hand orthosis for stroke patients’ neurorehabilitation: A crossover feasibility study. Front. Hum. Neurosci. 2021, 15, 656975. [Google Scholar] [CrossRef]
  12. Zhang, X.; Zhang, T.; Jiang, Y.; Zhang, W.; Lu, Z.; Wang, Y.; Tao, Q. A novel brain-controlled prosthetic hand method integrating AR-SSVEP augmentation, asynchronous control, and machine vision assistance. Heliyon 2024, 10, e26521. [Google Scholar] [CrossRef]
  13. Abiri, R.; Borhani, S.; Sellers, E.W.; Jiang, Y.; Zhao, X. A comprehensive review of EEG-based brain-computer interface paradigms. J. Neural. Eng. 2019, 16, 011001. [Google Scholar] [CrossRef] [PubMed]
  14. Siribunyaphat, N.; Punsawad, Y. Steady-state visual evoked potential-based brain-computer interface using a novel visual stimulus with quick response (QR) code pattern. Sensors 2022, 22, 1436. [Google Scholar] [CrossRef]
  15. Zhu, D.; Bieger, J.; Molina, G.G.; Aarts, R.M. A survey of stimulation methods used in SSVEP-based BCIs. Comput. Intell. Neurosci. 2010, 2010, 702357. [Google Scholar] [CrossRef]
  16. Herrmann, C.S. Human EEG responses to 1–100 Hz flicker: Resonance phenomena in visual cortex and their potential correlation to cognitive phenomena. Exp. Brain Res. 2001, 137, 346–353. [Google Scholar] [CrossRef] [PubMed]
  17. Horki, P.; Neuper, C.; Müller-Putz, G. Identifying “resonance” frequencies for SSVEP-BCI. Int. J. Bioelectromagn. 2011, 13, 76–77. [Google Scholar]
  18. Regan, D. An effect of stimulus colour on average steady-state potentials evoked in man. Nature 1966, 210, 1056–1057. [Google Scholar] [CrossRef] [PubMed]
  19. Zárate-Ruiz, A.R. Development of an EEG-Based Brain-Computer Interface to Control a Robotic Hand. Bachelor’s Thesis, Technological University of the Mixteca, Oaxaca, México, August 2024. Available online: http://jupiter.utm.mx/~tesis_dig/14641.pdf (accessed on 6 August 2025). (In Spanish).
  20. Nuwer, M.R.; Comi, G.; Emerson, R.; Fuglsang-Frederiksen, A.; Guérit, J.M.; Hinrichs, H.; Ikeda, A.; Luccas, F.J.C.; Rappelsburger, P. IFCN standards for digital recording of clinical EEG. Electroencephalogr. Clin. Neurophysiol. 1998, 106, 259–261. [Google Scholar] [CrossRef]
  21. Alawee, W.H.; Basem, A.; Al-Haddad, L.A. Advancing biomedical engineering: Leveraging Hjorth features for electroencephalography signal analysis. J. Electr. Bioimpedance 2023, 14, 66–72. [Google Scholar] [CrossRef]
  22. Herrera-Cordero, M.E.; Arias-Montiel, M.; Ceccarelli, M.; Lugo-González, E. Cosimulation and control of a single-wheel pendulum mobile robot. ASME J. Mech. Robot. 2021, 13, 050909. [Google Scholar] [CrossRef]
  23. Krausz, N.E.; Rorrer, R.A.L.; Weir, R.F. Design and fabrication of a six degree-of-freedom open source hand. IEEE Trans. Neural Syst. Rehabil. Eng. 2016, 24, 562–572. [Google Scholar] [CrossRef]
  24. Delorme, A. liblsl-Matlab. 2020. Available online: https://github.com/labstreaminglayer/liblsl-Matlab (accessed on 21 August 2025).
  25. Sosa-Méndez, D.; Lugo-González, E.; Arias-Montiel, M.; García-García, R.A. ADAMS-MATLAB co-simulation for kinematics, dynamics, and control of the Stewart–Gough platform. Int. J. Adv. Robot. Syst. 2017, 14, 1–10. [Google Scholar] [CrossRef]
  26. Zárate-Ruiz, A.d.R. Co-Simulation Between OpenBCI-Matlab-AdamsView in a BCI System. 2025. Available online: https://www.youtube.com/watch?v=5TVkmBhGyWw (accessed on 28 September 2025).
  27. Keihani, A.; Shirzhiyan, Z.; Farahi, M.; Shamsi, E.; Mahnam, A.; Makkiabadi, B.; Haidari, M.R.; Jafari, A.H. Use of sine shaped high-frequency rhythmic visual stimuli patterns for SSVEP response analysis and fatigue rate evaluation in normal subjects. Front. Hum. Neurosci. 2018, 12, 210. [Google Scholar] [CrossRef] [PubMed]
  28. Mu, J.; Grayden, D.B.; Tan, Y.; Oetomo, D. Frequency superposition—A multi-frequency stimulation method in SSVEP-based BCIs. In Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Guadalajara, Mexico, 1–5 November 2021; pp. 5924–5927. [Google Scholar]
  29. Rekrut, M.; Jungbluth, T.; Alexandersson, J.; Krüger, A. Spinning icons: Introducing a novel SSVEP-BCI paradigm based on rotation. In Proceedings of the 26th Annual Conference on Intelligent User Interface, College Station, TX, USA, 13–17 April 2021; pp. 234–243. [Google Scholar]
  30. Li, X.; Yang, S.; Fei, N.; Wang, J.; Huang, W.; Hu, Y. Convolutional neural network for SSVEP identification by using a few-channel EEG. Bioengineering 2024, 11, 613. [Google Scholar] [CrossRef] [PubMed]
  31. Israsena, P.; Pan-Ngum, S. CNN-based deep learning approach for SSVEP detection targeting binaural ear-EEG. Front. Comput. Neurosci. 2022, 16, 868642. [Google Scholar] [CrossRef]
  32. Guney, O.B.; Oblokulov, M.; Ozkan, H. A deep neural network for SSVEP-based brain-computer interfaces. IEEE Trans. Biomed. Eng. 2022, 69, 932–944. [Google Scholar] [CrossRef]
  33. Zárate-Ruiz, A.d.R. RAW Signal of EEG Using SSVEP Paradigm. 2025. Available online: https://data.mendeley.com/datasets/f8v96skxj3/1 (accessed on 30 November 2025).
Figure 1. Implemented design for the visual stimulator.
Figure 2. Electrical diagram of the LED-based visual stimulator.
Figure 3. Location of the electrodes on the Ultracortex headset.
Figure 4. Block diagram for the digital processing of EEG signals.
Figure 5. EEG signals classification procedure.
Figure 6. Confusion matrix for the two classes.
Figure 7. Confusion matrix for ANN trained with 5 characteristics and LDA.
Figure 8. Virtual prototype of the prosthetic hand: (a) Isometric view. (b) Front view.
Figure 9. Real-time control script flowchart.
Figure 10. Confusion matrix for SVM validation.
Figure 11. Confusion matrix of the BCI.
Figure 12. Final position of the virtual prosthetic hand in the greeting gesture: (a) Front view. (b) Isometric view.
Figure 13. Trajectories for the greeting gesture: (a) Thumb’s distal phalanx (around Z-axis). (b) Thumb’s proximal phalanx. (c) Ring finger’s distal phalanx. (d) Ring finger’s proximal phalanx. (e) Little finger’s distal phalanx. (f) Little finger’s proximal phalanx.
Figure 14. Final position of the virtual prosthetic hand in the fine pincer gesture: (a) Front view. (b) Isometric view.
Figure 15. Trajectories for the fine pincer gesture (thumb’s phalanges): (a) Proximal phalanx. (b) Distal phalanx (around X). (c) Distal phalanx (around Y). (d) Distal phalanx (around Z).
Figure 16. Trajectories for the fine pincer gesture (index and middle fingers): (a) Index finger’s distal phalanx. (b) Index finger’s proximal phalanx. (c) Middle finger’s distal phalanx. (d) Middle finger’s proximal phalanx.
Figure 17. Subsystems integration test.
Table 1. Summary of the trained classification models.

| Stimuli      | Classifier | Training Accuracy | Validation Accuracy | Used Features |
|--------------|------------|-------------------|---------------------|---------------|
| 8 and 12 Hz  | ANN        | 90.56%            | 87.5%               | 12 SSVEP      |
|              | SVM        | 94.96%            | 92.5%               | 36 overall    |
| 13 and 15 Hz | ANN        | 96.85%            | 72.5%               | 5 PCA         |
|              | SVM        | 85.53%            | 80.0%               | 36 overall    |
| 31 and 33 Hz | ANN        | 95.0%             | 95.9%               | 1 LDA         |
|              | SVM        | 96.22%            | 95.0%               | 1 LDA         |
Table 2. Cross-validation of the classification models.

| Fold | ANN Accuracy | SVM Accuracy |
|------|--------------|--------------|
| 1    | 93.75%       | 93.75%       |
| 2    | 100%         | 100%         |
| 3    | 93.75%       | 93.75%       |
| 4    | 100%         | 100%         |
| 5    | 87.50%       | 87.50%       |
| 6    | 93.75%       | 93.75%       |
| 7    | 100%         | 100%         |
| 8    | 100%         | 93.75%       |
| 9    | 100%         | 100%         |
| 10   | 93.33%       | 93.33%       |
Table 3. Comparison of classification accuracy of different SSVEP-based BCIs.

| Authors | Visual Stimuli | Classifier | Classification Accuracy |
|---|---|---|---|
| Zhang et al. [12] | Augmented reality (AR)-based visual stimulus | Canonical correlation analysis (CCA)-SVM | 94.66% |
| Keihani et al. [27] | LED and three-fiber-optic sensor with high frequencies (25, 30, and 35 Hz) | CCA, power spectral density (PSD), and least absolute shrinkage and selection operator (LASSO) | 88.35% for PSD and more than 90% for CCA and LASSO |
| Mu et al. [28] | Red LED with two 50% duty-cycle square waves combined with the OR and ADD operators at frequencies of 7 and 9 Hz, 7 and 11 Hz, 7 and 13 Hz, 9 and 11 Hz, 9 and 13 Hz, and 11 and 13 Hz | CCA | Average accuracy of 70.83% on frequency superposition stimulation |
| Rekrut et al. [29] | Spinning icons including check, arrow, box, cross, gear, icon check, icon email, icon PDF, icon spread, and icon text at frequencies of 7.5, 10, and 13 Hz | CCA | 86% for the cross SSMVEP and 75% for the PDF icon |
| Siribunyaphat and Punsawad [14] | A novel visual stimulus pattern inspired by the QR code style, with three fundamental frequencies of 7, 13, and 17 Hz | PSD and CCA | 89.4% for PSD and 91.4% for CCA |
| Li et al. [30] | 4 × 10 flicker matrix displayed on a 24.5-inch LCD monitor | Convolutional neural network (CNN) | Under 80% |
| Israsena and Pan-Ngum [31] | Not reported | CNN | Under 90% |
| Guney et al. [32] | Speller | Deep neural network (DNN) | Under 84% |
| This work | LED-based visual stimulator | SVM | 95% in training and validation, 92.3% in online testing, and 100% in BCI integration |
