Brain–Computer Interface Based on PLV-Spatial Filter and LSTM Classification for Intuitive Control of Avatars

Abstract: This study investigates the combination of a brain–computer interface (BCI) and virtual reality (VR) to improve the user experience and facilitate control learning in a safe environment. In addition, it assesses the applicability of the phase-locking value spatial filtering (PLV-SF) method and the Long Short-Term Memory (LSTM) network in a real-time EEG-based BCI. The PLV-SF has been shown to improve signal quality, and the LSTM exhibits more stable and accurate behavior. Ten healthy volunteers, six men and four women aged 22 to 37 years, performed tasks inside a virtual house, using their EEG states to direct their movements and actions through a commercial, low-cost wireless EEG device together with a virtual reality system. A BCI and VR can be used effectively to enable the intuitive control of virtual environments by immersing users in real-life situations, making the experience engaging, fun, and safe. Control test times decreased significantly from 3.65 min and 7.79 min in the first and second quartiles, respectively, to 2.56 min and 4.28 min. In addition, a free route was performed by the three best volunteers, who finished in an average time of 6.30 min.


Introduction
Technology has advanced greatly over the years, with many new innovations and developments improving our daily lives. Two fields stand out: brain–computer interfaces (BCIs) and virtual reality (VR).
A BCI is a system that allows a user to control a computer or other device using their brain activity. This technology has the potential to revolutionize the way people interact with their environment by providing a direct link between the human brain and machines. These systems can be especially attractive to people with mobility problems because they can help increase independence and autonomy in daily activities. There is a body of literature on different proposals. For example, Huyang et al. [1] proposed a novel hybrid BCI system based on electroencephalography (EEG) and electrooculogram (EOG) signals to control a combined wheelchair and robotic arm system through hand motor imagery, eye blinks, and eyebrow-raising movements. Yu et al. [2] presented a BCI system that integrates motor imagery (MI) potential and P300, while Wang et al. [3] proposed a new system that aggregates EOG information. Both systems were designed to implement wheelchair control. Chen et al. [4] explored the use of an EEG-based BCI and a steady-state visual evoked potential (SSVEP) to control an electric wheelchair for people with motor disabilities, including multiple sclerosis and amyotrophic lateral sclerosis. Pawus et al. [5] proposed an expert system in which two neural networks analyze EEG signals from selected electrodes and detect nervous tics, as well as interference from external sources, for application in BCI technology. Wang et al. [6] proposed a BCI system based on SSVEP and EOG to estimate a person's vigilance state. This model applied a spatiotemporal convolution module with an attention mechanism to explore the spatiotemporal information of EEG features, and a short-term memory module was used to learn the temporal dependencies of EOG features. The review by Naser et al. [7] traced the evolution of EEG-driven wheelchair control, identifying both the state of the art and the different models adopted in the literature during recent decades, as well as the limitations these systems present.
VR, which is a computer-generated simulation of an environment, can create an immersive experience that allows the user to feel as if they are physically present in a fictional world. VR is often used for gaming, education, training, and other interactive experiences [8][9][10]. There are different applications, such as in therapy: Emmelkamp et al. [11] described its effectiveness in anxiety disorders and post-traumatic stress disorder; Juan et al. [12] studied an application with three serious games for motor rehabilitation of hand movements; and the review developed by Ehioghae et al. [13] highlighted VR as a promising way to optimize postoperative recovery in orthopedic surgery patients. Example applications for improving the quality of education include the meta-analysis on nursing education developed by Chen et al. [14], the virtual environment of the hydrogen atom that allows exploring atomic orbitals in 3D space developed by Suno et al. [15], and the various cases of surgical education presented in the review by Ntakakis et al. [16].
If both technologies are combined, a system can be obtained that controls an avatar in a virtual reality environment, using brain signals to interpret the user's intentions and movements. Such a system can allow users to learn and improve their control of a BCI by becoming more familiar with it and developing a stronger connection with it in a safe and controlled environment for practicing movements and activities. Some examples follow. Deng et al. [17] developed a modular multi-quadcopter system in a 3D virtual reality scene where an SSVEP-BCI system was applied for swarm control; Vourvopoulos et al. [18] researched embodied feedback with VR to help older adults in the stroke age-range demographic improve their BCI performance; and the integration of a BCI based on the SSVEP paradigm into a VR flight simulator was proposed by Zhengdong et al. [19]. In a VR environment, it is possible to provide visual or auditory feedback when the user successfully performs a task, as Juliano et al. [20] demonstrated in their study of embodiment and as in the REINVENT platform developed by Vourvopoulos et al. [21], in which EEG, electromyography (EMG), and VR were combined to generate feedback and promote the recovery of chronic stroke survivors.
This combination of systems can help the user learn to control their brain activity more effectively. Additionally, using a variety of tasks and challenges during training can help the user develop a wider range of control over the BCI system. Its application in therapy and rehabilitation can be highly effective in helping patients overcome cognitive and physical challenges, such as brain injuries, stroke, and neurological disorders. The technology allows for personalized and tailored therapy sessions, which can be adjusted in real time to suit the needs of each patient. This can lead to faster and more effective rehabilitation outcomes. Examples are the VR rehabilitation set for stroke patients defined by Karácsony et al. [22], which uses a real-time EEG-based MI BCI for different activations, and the system proposed by Gao et al. [23] for the same patient population, which combined a BCI, a soft hand rehabilitation glove, and VR to mobilize more cerebral cortex, muscle strength, and muscle tension to address hand motor dysfunction.
The current manuscript presents a practical application of a VR environment that incorporates a BCI system using phase-locking value spatial filtering (PLV-SF) [24] and Long Short-Term Memory (LSTM) neural networks for signal classification [25]. In particular, the work of Martín-Chinea et al. [24] detailed a spatial filtering method based on a graph Laplacian quadratic form, while [25] compared the performance of LSTM neural networks with other classification algorithms commonly used in the literature (support vector machine, discriminant analysis, k-nearest neighbor, and decision tree learner). The LSTM networks showed an improvement of around 30%. This combination of advanced methodologies offers a unique solution for capturing and analyzing brain activity and applying it in VR environments. This study demonstrates the applicability of these complementary methodologies, which aim for high accuracy and reliability, in a real use case. The PLV spatial filtering method enhances the performance of BCIs by improving the signal-to-noise ratio of EEG signals, particularly in noisy or complex environments. This method effectively separates relevant signals from background noise and unwanted artifacts, thereby improving the accuracy and reliability of EEG control signals. By integrating this approach, previously confined to academic environments, with EEG signals in real time, its practical viability in real-world scenarios is showcased. Moreover, the LSTM neural network is a machine learning algorithm that models the temporal dynamics of brain signals and user behavior, which is crucial for accurate and responsive control of BCI and VR systems. LSTM networks can process variable-length input sequences and recognize patterns to make predictions, making them ideal for modeling complex temporal relationships. Other authors have demonstrated their applicability: Gong et al. [26] used a model that combines spiking neurons with adaptive LSTM and graph convolution to classify EEG signals; Wang et al. [27] proposed a hybrid 2D CNN-LSTM model for MI EEG classification; and Guerrero-Méndez et al. [28] applied various models, such as the Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM)/Bidirectional Long Short-Term Memory (BiLSTM), and a combination of CNN and LSTM, in the same context (MI EEG).
Several aspects for improvement can be identified when applying the methodologies proposed in the cited publications. On the one hand, many of them are based on specific systems that do not take into account the evolution of users and how they interact with the system over time. Another crucial aspect is the choice of equipment; in this case, the feasibility of a commercial, low-cost wireless system for EEG signal acquisition is highlighted, even though it may present a lower signal-to-noise ratio compared to clinical systems. In addition, the integration of VR technology as a tool to test various methodologies and evaluate user training in BCI systems in an engaging, fun, and safe way is evaluated. Therefore, this manuscript details all aspects of this study and presents the results obtained, demonstrating not only the practical applicability of both methods but also the volunteers' experiences in the developed environment.
The background is presented throughout this introduction. The methods applied are presented below, including the PLV-SF method, the LSTM for cognitive state classification, and the VR task environment. Finally, the findings are presented in the results, interpreted in the discussion, and summarized in the conclusion, which also suggests future directions.

Materials and Methods
To assess the effectiveness of both the PLV-SF method and the LSTM as decision-making systems within a VR environment designed for user training and system control enhancement, the materials and methods described in this section were applied. The purpose of this section is to provide a detailed description of the methods used in this research to obtain information about the user experience, including the signal filtering process and the operation of the system in the virtual environment. This information will help readers understand and evaluate the methodology used to conduct this study.

Equipment and Software
An OpenBCI device (OpenBCI Inc., New York, NY, USA), a non-invasive and low-cost EEG device, was utilized in the study, as shown in Figure 1a. It featured a combination of Cyton and Daisy biosensing boards and was used in conjunction with a Python program to record signals at a sampling rate of 125 Hz. The cap employed in the study featured 18 sensors (FP1, FP2, F7, F3, F4, F8, T7, C3, C4, T8, P7, P3, P4, P8, O1, O2, and two reference electrodes at Ref and GND) placed according to the 10-20 system, as shown in Figure 1b. This system was combined with the HTC Vive virtual reality goggles (HTC Corporation, Taipei, Taiwan), which displayed the virtual reality environment. Figure 1c shows an example of a user wearing both devices. All processing and analyses were carried out using Matlab® (The MathWorks, Inc., Natick, MA, USA) and FieldTrip (https://www.fieldtriptoolbox.org/, accessed on 21 April 2024) [29], a toolbox developed by the Donders Institute for Brain, Cognition and Behaviour in Nijmegen, The Netherlands, in collaboration with other institutes. This toolbox offers advanced preprocessing and analysis methods for magnetoencephalography (MEG), EEG, intracranial electroencephalography (iEEG), and near-infrared spectroscopy (NIRS) recordings. The LSTM was defined and trained using the Matlab Deep Learning Toolbox™, and the virtual environment was created with the Unity game engine (Unity Technologies, San Francisco, CA, USA).

Participants
Ten healthy volunteers participated in the tests: 6 men and 4 women ranging in age from 22 to 37 years. Only 4 of them had prior experience using BCI systems.
Written consent was obtained from each volunteer prior to their participation in the study, ensuring their informed agreement to take part in the research procedures and protocols. Additionally, all the ethical and experimental procedures and protocols applied, which involved human subjects, were approved by the Ethics Committee of the University of La Laguna under Approval No. CEIBA2020-0405. This approval underscored our commitment to upholding the welfare and rights of all individuals involved in the study in compliance with ethical principles and regulatory requirements.

Experimental Training Protocol
The experimental design was conducted in a noise-free room, where the volunteer had to remain comfortably seated in a chair, wearing the EEG device and VR goggles.
The brain signal used to train the classifier was recorded following the protocol in Figure 2. This protocol has two key components, each lasting 20 s, with a 10 s break in between. During the initial phase of the study, each participant was asked to simply observe the virtual environment without any active participation. This baseline state was used to record the participant's EEG at rest. The second part of the experiment corresponded to the EEG signal recorded while the user's eyes were closed. This period was indicated to the volunteer by an auditory signal (a beep) that told them when to open and close their eyes.
After conducting the training and generating the classifier, the user interaction was studied through a menu with different action buttons (each corresponding to a specific action: forward movement, left turn, right turn, and backward movement).Three consecutive sessions were conducted where the user was presented with a set of four actions to perform (the first and last sessions had identical action sequences to assess participant learning).
In addition, three of the participants who obtained the best results in the previous experiments and felt most comfortable using the system undertook an extra experiment, in which they had to move the avatar freely around the room in the VR environment to go from one point to another.

Data Preprocessing
The signals were processed in 5 s segments. During the training process, a window of this size was moved along the sequence to build the training dataset. In real-time classification, the signal was acquired every 5 s for continuous processing and analysis. Both training and real-time processing used the same preprocessing. First, a band-pass filter between 1 and 40 Hz was applied.
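As a concrete sketch, the segmentation and band-pass step described above could look as follows in Python; the 1 s window stride is an illustrative assumption, since the paper does not state how far the training window was moved at each step:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 125          # sampling rate of the OpenBCI device (Hz)
WIN_S = 5         # segment length used for training and real-time classification

def bandpass_1_40(eeg, fs=FS, order=4):
    """Zero-phase band-pass filter between 1 and 40 Hz (channels x samples)."""
    sos = butter(order, [1, 40], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, eeg, axis=-1)

def sliding_windows(eeg, fs=FS, win_s=WIN_S, step_s=1):
    """Slide a 5 s window along the recording to build the training set."""
    win, step = win_s * fs, step_s * fs
    return np.stack([eeg[:, i:i + win]
                     for i in range(0, eeg.shape[1] - win + 1, step)])

# Example: 18 channels, 30 s of synthetic data
eeg = np.random.randn(18, 30 * FS)
filtered = bandpass_1_40(eeg)
windows = sliding_windows(filtered)
print(windows.shape)  # (26, 18, 625)
```

Each resulting window is one 5 s segment (625 samples at 125 Hz), matching the segment length the classifier operates on.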
Secondly, the REBLINCA procedure [30] was used to eliminate the impact of eye blinks on the signal. This process used a central sensor (FPZ) as a reference signal to detect blink effects and eliminate their influence on the other sensors. From this signal, a blink component was created with a fifth-order Butterworth filter in the frequency range of 1 to 7 Hz. In addition, a derived threshold signal was calculated to highlight the rising and falling phases generated by a blink. This signal was normalized to have a mean of zero and a variance of one, and a moving average was applied over the square of the signal. When this threshold signal identified the areas affected by a blink, each channel was corrected by subtracting the weighted component signal, where the weighting was defined by the ratio between these two signals. For more information on the procedure, see [25].
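A heavily simplified, illustrative sketch of this blink-correction idea is shown below. It follows the description above (a 1-7 Hz Butterworth component from FPZ, a normalized derivative-based threshold signal, and weighted subtraction from the flagged samples) but is not the authors' exact REBLINCA implementation; the threshold value and the least-squares weighting are assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def remove_blinks(eeg, fpz, fs=125, thresh=3.0):
    """Simplified REBLINCA-style blink correction (illustrative sketch only)."""
    # 1. Blink component: 5th-order Butterworth band-pass, 1-7 Hz, on FPZ
    sos = butter(5, [1, 7], btype="bandpass", fs=fs, output="sos")
    comp = sosfiltfilt(sos, fpz)
    # 2. Threshold signal: z-scored derivative of the component, then a 1 s
    #    moving average of its square to highlight rising/falling blink phases
    d = np.gradient(comp)
    d = (d - d.mean()) / d.std()
    power = np.convolve(d ** 2, np.ones(fs) / fs, mode="same")
    mask = power > thresh
    # 3. In the flagged samples, subtract the component from each channel,
    #    weighted by its least-squares projection onto that channel
    clean = eeg.astype(float).copy()
    denom = float(np.dot(comp[mask], comp[mask]))
    for ch in range(clean.shape[0]):
        w = np.dot(clean[ch, mask], comp[mask]) / denom if denom > 0 else 0.0
        clean[ch, mask] -= w * comp[mask]
    return clean

# Toy example: three ~0.3 s "blinks" on FPZ leaking into one EEG channel
fs = 125
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
blink = sum(np.exp(-((t - c) ** 2) / (2 * 0.1 ** 2)) for c in (2.0, 5.0, 8.0))
fpz = 80 * blink + rng.standard_normal(t.size)
eeg = np.vstack([0.6 * fpz, rng.standard_normal(t.size)])
clean = remove_blinks(eeg, fpz)
print(clean.shape)  # (2, 1250)
```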
Third, automatic artifact rejection was applied. Each channel was normalized with respect to its standard deviation and mean, a new generic signal was created as the mean of all normalized channels, and a threshold of 3σ around the mean of this averaged signal was applied to automatically remove artifacts, as described in the FieldTrip tutorial [31].
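This rejection step can be sketched as follows; it is a simplified, per-window reading of the 3σ rule above, not the exact FieldTrip routine:

```python
import numpy as np

def reject_artifact_windows(windows, sigma=3.0):
    """Simplified sketch of the automatic rejection step: z-score each
    channel, average the normalized channels into one generic signal, and
    drop any window where that signal leaves mean +/- sigma * std."""
    keep = []
    for w in windows:                                  # w: channels x samples
        z = (w - w.mean(axis=1, keepdims=True)) / w.std(axis=1, keepdims=True)
        g = z.mean(axis=0)                             # generic averaged signal
        if np.all(np.abs(g - g.mean()) <= sigma * g.std()):
            keep.append(w)
    return np.array(keep).reshape((-1,) + windows.shape[1:])

# Example: inject a large artifact into one of five windows
rng = np.random.default_rng(0)
windows = rng.standard_normal((5, 4, 625))
windows[2, 0, 100] += 200.0
kept = reject_artifact_windows(windows)
print(windows.shape[0], "->", kept.shape[0])  # the spiked window is dropped
```

Note that a fixed 3σ threshold on the window's own statistics is strict for long windows; FieldTrip's tutorial pipeline lets the threshold be tuned per dataset.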

Features Extraction
The Morlet wavelet transform [32] was applied to obtain the power evolution in the time-frequency domain. This function was chosen because of its suitability for the analysis of non-stationary signals such as EEG [33,34]. This choice was based on its ability to adapt effectively to changes in the signal, as well as its excellent localization in both the time and frequency domains, which allowed the various temporal and spectral characteristics of the EEG signal to be captured effectively.
The analysis focused on distinguishing an eyes-open state from an eyes-closed state. To achieve this differentiation, the alpha band (8-12 Hz) of the EEG signals recorded by the O1 and O2 sensors over the visual area (occipital lobe) [25,35] was used to train the classifier. This band was selected because it is associated with states of visual attention, relaxation, and low cognitive activity [36][37][38], which makes it suitable for identifying the response to visual stimuli or any activity related to visual perception and attention.
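A plain-numpy sketch of this feature-extraction step is given below, computing alpha-band Morlet power for the two occipital channels; the 7-cycle wavelet width is an illustrative assumption, as the paper does not state the wavelet parameters:

```python
import numpy as np

FS = 125

def morlet_power(signal, freq, fs=FS, n_cycles=7):
    """Power of `signal` at `freq` Hz via convolution with a complex Morlet
    wavelet (a plain-numpy sketch of the time-frequency step)."""
    sigma_t = n_cycles / (2 * np.pi * freq)           # temporal std of the envelope
    t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))  # unit energy
    return np.abs(np.convolve(signal, wavelet, mode="same")) ** 2

def alpha_band_power(o1, o2, fs=FS, band=(8, 12)):
    """Average alpha-band (8-12 Hz) power over time for the O1/O2 sensors."""
    freqs = np.arange(band[0], band[1] + 1)
    p1 = np.mean([morlet_power(o1, f, fs) for f in freqs], axis=0)
    p2 = np.mean([morlet_power(o2, f, fs) for f in freqs], axis=0)
    return np.vstack([p1, p2])                        # 2 x samples feature sequence

# Example: a 5 s, 10 Hz signal yields a 2 x 625 alpha-power feature sequence
t = np.arange(0, 5, 1 / FS)
features = alpha_band_power(np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 10 * t))
print(features.shape)  # (2, 625)
```

The two-row output matches the two-channel feature sequence that is later fed to the classifier.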

Phase-Locking Value Spatial Filtering
This process, described in [24], was applied to the alpha band power spectrum. PLV-SF is a method that uses the spatial dependencies between EEG sensors to reconstruct the signals.
Starting from the set of trials, where X is a matrix with dimension n × m (n sensors and m temporal samples), the temporal power associated with each sensor is calculated from the Morlet wavelet transform. The graph Laplacian matrix L of each trial is then generated, computed from the synchronization metric used, in this case, the phase-locking value (PLV). This metric quantifies the stability of the phase difference between two sensors through the following formula [39]:

PLV = |(1/m) Σ_{k=1}^{m} exp(i Δφ(t_k))|,

where Δφ(t_k) is the instantaneous phase difference between the two signals at sample t_k, obtained using the Hilbert transform. The result is a single, time-independent value that characterizes the entire interval of interest.
The PLV is a metric commonly used to measure synchronization between EEG sensors [40,41] in BCIs because synchronization metrics are less affected by noise than amplitude (synchronization is indicated by a non-random distribution of the phase or phase difference [42]). Noise in an EEG signal can directly affect its amplitude, while the relative phase of two signals can remain largely unchanged, since this type of noise generally affects the magnitude of the signal more than its relative phase. In addition, by working with temporal synchronization between different brain regions, temporal synchronization patterns are evaluated, yielding more robust information even in the presence of external noise.
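For illustration, the PLV between two signals can be computed directly from Hilbert-transform instantaneous phases:

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value: magnitude of the mean complex phase difference,
    with instantaneous phases obtained from the Hilbert transform."""
    phi_x = np.angle(hilbert(x))
    phi_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phi_x - phi_y))))

# A constant phase lag gives PLV ~ 1; unrelated noise gives a low PLV
fs = 125
t = np.arange(0, 5, 1 / fs)
a = np.sin(2 * np.pi * 10 * t)
b = np.sin(2 * np.pi * 10 * t + np.pi / 4)     # same frequency, fixed lag
noise = np.random.default_rng(0).standard_normal(t.size)
print(round(plv(a, b), 2))    # close to 1.0
print(plv(a, noise) < 0.5)    # True (no stable phase relation)
```

This behavior is exactly what makes the metric attractive here: the amplitude of `b` could be scaled or perturbed without changing the PLV, as long as the phase relation holds.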
PLV-SF is a filtering method that uses this synchronization metric to solve a convex, Laplacian-regularized least-squares problem at each sensor, generating a new matrix of equal dimensions with the filtered signals. In this formulation, Y corresponds to the original matrix without the specific channel to be filtered, B corresponds to a binary matrix in which the channel to be filtered is removed, ∘ denotes the Hadamard product, and ||·||_F denotes the Frobenius (or L2) norm for matrices; the explicit objective function is given in [24].

LSTM Classification
An LSTM, a type of artificial neural network (ANN), was selected as the classification algorithm [43]. This selection was based on [25], where an LSTM network was compared with other classification algorithms. The comparison focused on the classification of the filtered power spectrum signal from the sensors corresponding to the occipital lobe (O1 and O2 on the device used).
The same architecture used in that publication [25] was applied, as shown in Figure 3. It includes a sequence input layer with two neurons, one for each channel. The LSTM layer, consisting of eight cells, identifies long-term dependencies between time steps in the sequence data. The fully connected layer connects every neuron in one layer with those in the next, while the SoftMax layer uses the SoftMax activation function to calculate the probability of each class. The final classification layer determines the outcome [44,45].
The results presented in [25] highlight the outstanding performance of the LSTM for two main reasons. First, the accuracy (acc), sensitivity (sen), specificity (spe), and Matthews correlation coefficient (mcc) metrics showed that the LSTM significantly outperformed the other algorithms evaluated. Second, its performance was especially remarkable when processing continuous sequences: while the classical algorithms showed instability, alternating between different states, the LSTM provided a more consistent and stable classification throughout the entire sequence.
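To make the data flow of this architecture concrete, the sketch below runs a forward pass (sequence input → 8-cell LSTM → fully connected → softmax) in plain numpy. The trained model in the study was a Matlab Deep Learning Toolbox network; the randomly initialized weights here are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
N_IN, N_HIDDEN, N_CLASSES = 2, 8, 2   # O1/O2 power inputs, 8 LSTM cells, 2 states

# Randomly initialized weights stand in for the trained model
W = rng.standard_normal((4 * N_HIDDEN, N_IN)) * 0.1      # input weights (i, f, g, o)
U = rng.standard_normal((4 * N_HIDDEN, N_HIDDEN)) * 0.1  # recurrent weights
b = np.zeros(4 * N_HIDDEN)
Wd = rng.standard_normal((N_CLASSES, N_HIDDEN)) * 0.1    # fully connected layer
bd = np.zeros(N_CLASSES)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_classify(seq):
    """Forward pass: sequence input -> 8-cell LSTM -> dense -> softmax."""
    h = np.zeros(N_HIDDEN)
    c = np.zeros(N_HIDDEN)
    for x in seq:                       # seq: time steps x 2 features
        z = W @ x + U @ h + b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)      # cell state carries long-term memory
        h = o * np.tanh(c)              # hidden state
    logits = Wd @ h + bd                # last hidden state feeds the classifier
    p = np.exp(logits - logits.max())
    return p / p.sum()                  # softmax class probabilities

probs = lstm_classify(rng.standard_normal((625, 2)))  # one 5 s feature segment
print(probs.shape)  # (2,)
```

The gating structure (forget, input, and output gates acting on the cell state) is what lets the layer retain information across the full 5 s segment instead of reacting only to the most recent samples.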

Virtual Environment
The virtual environment was a simulated space with various rooms and furniture. During training, the user was placed in this environment to learn how to respond to the stimulus protocol; see Figure 4a. As described in Section 2.3 (Experimental Training Protocol), the volunteer was present in the virtual environment but did not interact with it in any way; the volunteer's task was to open and close their eyes in response to the sound stimuli received. Once training was complete and the LSTM classification model was generated, the user was placed in the same VR scenario but could interact with the environment through a menu with scrolling buttons; see Figure 4b. The menu highlighted each button for 10 s (the speed of change); after this time, the next button was automatically highlighted. To select a button, the user had to close their eyes (the brain state defined as the action) while the button was highlighted, and the LSTM model had to correctly classify this state (the model works with 5 s segments). The speed of change was explored in an earlier paper [25], which showed how the different time windows began to lose significant differences in the classification of the population studied when they reached 5 s. This means that, in the current system, users had a reaction window that allowed them two opportunities to select a particular button. To enhance the user experience, the menu included a "beep" sound when buttons changed, and a "click" sound and green color change when a button was selected.
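The timing relationship above (a 10 s highlight window divided into two 5 s classification segments) can be illustrated with a minimal sketch; this is a simplification, since the real menu cycles continuously and adds the audio and visual feedback described:

```python
HIGHLIGHT_S = 10   # each button stays highlighted for 10 s
SEGMENT_S = 5      # the LSTM classifies one 5 s EEG segment at a time

BUTTONS = ["forward", "left", "right", "backward"]

def select_button(decisions, buttons=BUTTONS):
    """Return the button selected by a stream of 5 s classifier decisions
    ('open' or 'closed'), or None if the stream ends with no selection."""
    chances = HIGHLIGHT_S // SEGMENT_S          # two opportunities per button
    for k, state in enumerate(decisions):
        if state == "closed":
            # the k-th segment falls inside this button's highlight window
            return buttons[(k // chances) % len(buttons)]
    return None

# Eyes stay open for three segments, then close during the fourth: by then
# the menu has scrolled to the second button
print(select_button(["open", "open", "open", "closed"]))  # left
```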
This menu interface was verified by means of three different tests, in which the user had to execute a random sequence of four buttons.As can be seen in Figure 4c, within the VR environment, these target buttons were presented to the user above the menu.Both the first and third tests consisted of the same sequence of buttons to compare improvement and execution times between users.

Results
The research findings highlight positive evaluations at both the system control and user experience levels. Additionally, the results show the applicability of both innovative, state-of-the-art methods (the PLV-SF method and the LSTM classifier) in real-world scenarios.
Table 1 presents the results obtained from the volunteers using the system:
• Previous experience (Previous Exp) was recorded because it is essential to consider the volunteers' previous experience with a BCI system, given its ability to reduce adaptation time and improve the handling of the technology.
• Training accuracy (Training Acc) is the benchmark metric of model training performance.
• The percentage of hits in the test (Test Acc) represents the correct classification at each moment for the classifier while the user is interacting with the menu (to achieve 100% accuracy in the test, the user must select all correct buttons on the first attempt, without selecting any incorrect buttons or omitting the selection of a correct button).
• Time refers to the minutes the user takes to complete the test.
These last two parameters, time and test accuracy, were recorded for each test (first fixed sequence, random sequence, and last fixed sequence). Although some users had previous experience from other experiments with various protocols, it was investigated whether their results in this protocol differed from those of users with no previous experience. A Student's t-test for independent samples was performed on the accuracy and time data obtained in both the first fixed test and the second fixed test. The results showed p-values of 0.185 and 0.685 for accuracy and 0.357 and 0.665 for time, respectively. These results indicate no significant relationship between the users' experience and their performance in controlling the system, so they can be considered part of a general population.
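In outline, this group comparison can be reproduced with scipy; the scores below are synthetic stand-ins, since the measured accuracies and times are those reported in Table 1:

```python
import numpy as np
from scipy.stats import ttest_ind

# Synthetic stand-ins for the recorded scores of the two groups
rng = np.random.default_rng(42)
experienced = rng.normal(0.85, 0.05, size=4)   # the 4 users with prior BCI experience
novice = rng.normal(0.83, 0.05, size=6)        # the 6 users without

t_stat, p_value = ttest_ind(experienced, novice)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# If p > 0.05, the groups are not significantly different at the 5% level and
# the volunteers can be pooled as one population, as reported above
```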
The distribution of system accuracy (Figure 5-Accuracies) does not vary much. The accuracies remain in a similar range because the proportion of errors is lower than that of successes: when a user attempts to select a button and fails, they must return to the same button, which generates a higher number of hits to compensate for the failure. The times (Figure 5-Times) show an improvement in most users, who present lower values in the second test (2.56 min in quartile 1 and 4.28 min in quartile 2) compared to the first (3.65 min in quartile 1 and 7.79 min in quartile 2), meaning that they managed to execute the required commands in less time.
The time results of the second test with fixed commands highlight two outliers. Table 1 shows that these correspond to user 6, who became worse in both hits and times, and to user 10, who, despite having high times, improved on their time from the first fixed test. Once the tests were completed, the users answered a series of questions to evaluate their feelings regarding the use of the system, as shown in Table 2. According to the results presented in Figure 6, eight out of ten users highlighted the importance of control as the most crucial part of the system, while only two users emphasized the relevance of the user interface for interaction, moving away from the scope of the virtual reality (VR) environment, which, although attractive to them, was not a priority. This evaluation of the different parts of the system highlights the importance of control and the intermediate menu, regardless of the environment used, whether virtual reality or a physical wheelchair. However, it is essential to highlight the importance of virtual reality in ensuring safe interaction and facilitating user learning. As for the difficulty of performing the tests, only two users reported difficulties in use, and three stated that the time allowed to change the selection from one button to another was not enough to select the desired button (they needed more time to select it correctly). As for the overall ratings of the system, at the level of the equipment used, three users scored 4 and seven scored 5; at the level of the interface, four users scored 4 and six scored 5; and at the level of the overall system, four scored 4 and six scored 5.

During the execution of the experiments, the system interacted with users to obtain information about their experience and comfort level in performing the tasks. With this information, along with the results in both accuracy and execution time, a new challenge was posed on another date to users 1, 5, and 7. This test consisted of moving from one corner of the room to the other, where the refrigerator was located (the interaction with the menu remained the same as in the previous case). The route was freely selected by the user from two possible options: move forward, turn left, move forward, and move toward the refrigerator; or turn right, move forward, turn left, and move toward the refrigerator. Figure 7 shows the routes taken by each user. User 1 successfully completed the route using two different paths, while the other two users, because of a loss of control of the system (due to fatigue and frustration), were unable to complete the second circuit.
During the first path, user 1 encountered some false positives and backtracked after reaching the upper right corner but completed it in 6' 07". In the second path, there were a few challenges, including difficulty rotating at one of the corners and in the middle of the room. Nonetheless, user 1 eventually reached the target in 9' 58". User 7 successfully reached the target by following the first path, despite initially making an error when approaching the fridge; this user quickly corrected the mistake and completed the task in a total time of 4' 07". Finally, user 8 completed the path with a time of 5' 01". User 8 encountered only one issue at the start of the path, when they accidentally walked backward in the wrong direction. Despite this setback, user 8 was able to quickly recover and successfully complete the path.

Discussion
This manuscript presents a practical application of a virtual reality environment that incorporates a BCI system working with the PLV-SF filtering method and an LSTM classifier.
Nowadays, learning takes place in both real and virtual environments. Physical environments carry risks associated with unforeseen situations, which has led to an increase in the adoption of virtual environments as a more secure, low-cost alternative. Within these environments, various factors can be identified that make the interaction with the scene similar to real-world interaction. Moreover, VR offers a three-dimensional perception that enhances the feeling of presence and realism compared to 2D screen viewing. Much research has documented comparisons that support the advantages inherent in the use of this technology [46][47][48][49].
In essence, the implementation of BCIs in practical settings still faces limitations. The size and diversity of the sample are key aspects, as they may limit the generalizability of the findings. In this study, despite working with only 10 people, the sample was sought to be diverse and representative of the population. Likewise, this study was based on visual activation, but future research can explore other valid tasks or scenarios to better understand the performance of BCIs in practical environments in terms of usability and user experience. Without a doubt, however, two of the main limitations of BCI systems are reliability and adaptability. As such, it is crucial to have applications like the one presented, which allow people to hone their skills and test the system. Moreover, various specialists have underlined the significance of users acquiring proficiency in the utilization of BCIs to enhance their dependability. For instance, Tortora et al. [50] demonstrated that a BCI operator was able to enhance their output through repeated use between tests. Eidel et al. [51] replicated previous research on the use of tactile stimulation for the control of a BCI. The system used was a P300-based BCI, and users had to navigate a virtual wheelchair through a 3D apartment. The study found significant training effects on both online and offline accuracy, which increased significantly with training from 65% to 86% and from 70% to 95%, respectively. In addition, subjective questionnaire data showed a high workload and moderate-to-high satisfaction. Other research, developed by Juan et al. [52], analyzed and explored the efficacy of a low-cost neurofeedback training (NFT) system based on a real-time EEG BCI to regulate subjects' working memory (WM) levels. One NFT group received several sessions with a game feedback interface designed to regulate the alpha band, and EEG data were collected and analyzed. The study found that NFT significantly increased alpha band power in the prefrontal and occipital cortexes and improved WM performance, with lower error rates than the control group.
Despite the promising results obtained in the present manuscript, the primary limitation of the proposed system lies in the performance time required by the BCI system to classify each user action.Consequently, future research endeavors will focus on exploring alternative paradigms, such as motor imagery, event-related potential, and/or steady-state visually evoked potential, aiming to reduce BCI response times while maintaining an immersive and natural user experience.

Conclusions
BCI and VR technology has potential applications in fields such as gaming, education, training, and rehabilitation, where intuitive and immersive experiences are desired. This research demonstrates that the combination of a BCI and VR can be used effectively to enable intuitive control of virtual environments by immersing users in real-life situations, making the experience of learning to control the system not only engaging and fun but also completely safe. Furthermore, the applicability of the PLV-SF method and the LSTM was demonstrated in a real case. Users showed improvements by reducing the time to complete the tests (comparing the first to the last), with times that went from 3.65 min and 7.79 min in the first and second quartiles to 2.56 min and 4.28 min, respectively. In addition, it was demonstrated how the proposed system works in real cases by allowing different users to reach a specific point in the house in an average time of 6 min and 18.25 s. Overall, this study highlights the potential of BCI-VR technology to enhance the user experience and enable more natural interaction with real-world cases.

Institutional Review Board Statement:
The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of the University of La Laguna (Approval No. CEIBA2020-0405).
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement:
The raw data supporting the conclusions of this article will be made available by the authors on request.

Figure 1. (a) OpenBCI device. (b) Sensor distribution of the cap according to the 10-20 system. (c) Example of a user using the BCI system with the VR glasses.

Figure 4. Different interaction scenes. (a) VR environment. (b) VR environment with scroll button menu. (c) VR with the menu and instructions to follow.

Figure 5. Comparison between the two tests with the same static sequence (dots are outliers).

Figure 6. Results of the post-experience user questionnaires.

Figure 7. Routes taken by users in the room. The red triangles define the first path, the blue ones define the second, and the green triangles mark the start.

Table 1. Results obtained by the user when interacting with the virtual reality system.

Table 2. Test questions after experience in the virtual environment.