Identification of Brain Electrical Activity Related to Head Yaw Rotations

Automating the identification of human brain stimuli during head movements could lead to a significant step forward for human-computer interaction (HCI), with important applications for severely impaired people and for robotics. In this paper, a neural network-based identification technique is presented to recognize, from EEG signals, a participant's head yaw rotations when the participant is subjected to a visual stimulus. The goal is to identify an input-output function between the brain electrical activity and the head movement triggered by switching on/off a light on the participant's left/right hand side. This identification process is based on the Levenberg–Marquardt backpropagation algorithm. The results obtained on ten participants, spanning more than two hours of experiments, show the ability of the proposed approach to identify the brain electrical stimulus associated with head turning. A first analysis is performed on the EEG signals associated with each experiment for each participant. The accuracy of prediction is demonstrated by a significant correlation between training and test trials of the same file, which, in the best case, reaches r = 0.98 with MSE = 0.02. In a second analysis, the input-output function trained on the EEG signals of one participant is tested on the EEG signals of the other participants. In this case, the low correlation coefficient values demonstrate that the classifier performance decreases when it is trained and tested on different subjects.


Introduction
In human-computer interaction (HCI), the design and application of brain-computer interfaces (BCIs) are among the main challenging research activities. BCI technologies aim at converting human mental activities into electrical brain signals, producing control command feedback to external devices such as robot systems [1]. Recently, the scientific literature has shown specific interest in identifying human cognitive reactions elicited by the perception of a specific environment or by an adaptive HCI [2]. Reviews on BCI and HCI can be found in Mühl et al. [3] and Tan and Nijholt [4].
The essential stages of a BCI application consist of the acquisition of brain activity signals, preprocessing and feature extraction, classification, and feedback. The brain signal acquisition may be realized by different devices such as electroencephalography (EEG), magnetoencephalography (MEG), electrocorticography (ECoG), or functional near-infrared spectroscopy (fNIRS) [5]. The preprocessing consists of cleaning the input data from noise (the so-called artifacts), while the feature extraction phase deals with selecting, from the input signals, the most relevant features required to discriminate the data according to the specific classification task [6]. The classification is the central element of the BCI and refers to the identification of the correct translation algorithm, which converts the extracted signal features into control commands for the devices according to the user's intention.
This paper focuses on an original objective in the context of EEG signal classifiers with respect to the literature related to body movements. Although this work adopts a traditional ANN classifier, the scope of the application represents the main novelty: we explore the recognition of yaw head rotations directed toward a light target from EEG brain activities, in order to support the driving of tasks in different applications, such as controlling an autonomous vehicle, a wheelchair, or a robot in general.
In detail, this work is about "using brain electrical activities to recognize head movements in human subjects." The input data are EEG signals collected from a set of 10 participants. Left or right head positions in response to an external visual stimulus represent the output data of the experiments. The main purpose of the proposed approach is defining and verifying the effectiveness of the BCI system in identifying an input-output function between the EEG and the different head positions. Section 2 introduces the BCI architecture used for the experiments, while Section 3 shows the results coming from different training and testing scenarios. Section 4 briefly reports the conclusions.

System Architecture
The architecture of the system used for the experiments consists of two interacting subsystems: (1) a basic lamp system in charge of generating visual stimuli, and (2) an Enobio® EEG cap by Neuroelectrics (Cambridge, MA, USA) for EEG signal acquisition. The two subsystems communicate with a PC server through a serial port and a Bluetooth connection, respectively.

Lamp System
The lamp system's main components are a Raspberry Pi 3 control unit (Cambridge, UK) and two LED lamps. The PC server hosts a Python application, which randomly sends an input to the Raspberry unit via the serial cable. The Raspberry unit hosts another Python application, which receives the commands to switch the lamps on/off. Figure 1 shows the system architecture. The two lamps are positioned at the extreme sides of a table (size: 1.3 × 0.6 m), allowing a typical head rotation (yaw angle) over a −45°/45° range. Figure 2 shows a top view of the experimental environment.

EEG Enobio Cap
The sensors connected to this cap can monitor EEG signals at a 500 Hz sampling frequency. The Enobio cap works on eight different channels. In order to decrease the artifacts due to muscular activity, the EEG system is equipped with two additional electrodes that apply a differential filtering to the EEG signals. These two electrodes are positioned in a hairless area of the head (usually behind the ears, near the neck). In the proposed experiments, we focus on three channels labeled O1, O2, and CZ, according to the International 10/20 System. The first two are positioned over the occipital lobe; the other over the parietal one (Figure 3). The reason for this choice is that the signals coming from the occipital lobe are commonly associated with visual processing [32], while the signals coming from the parietal lobe are related to body movement activities. In addition, a good correlation between occipital and centroparietal areas improves visual-motor performance identification [33]. Positioning the electrodes on the head plays a fundamental role in the quality of the data acquisition. For example, using gel may improve the quality of the EEG signal. However, the main target of this work is verifying the feasibility of EEG monitoring in working conditions, in order to avoid every possible, although limited, action on the workers. For this reason, no gel was used in the sensor positioning phase.

Simulation Description
During data acquisition, the participant sits in front of the table and wears the EEG Enobio cap, assisted by the operator, who checks the electrode positions. Each participant is expected to move his/her head left or right towards the lamp, which is randomly switched on by the Raspberry unit. The lamp stays on for a variable period of time (between six and nine seconds). After turning off, the lamp stays inactive for five seconds. The participant is expected to move his or her head back to the starting position after the lamp turns off.
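The stimulus timing described above can be sketched as a schedule generator; the function `lamp_schedule` and its structure are hypothetical illustrations, not the authors' actual Python application:

```python
import random

def lamp_schedule(n_trials, seed=None):
    """Generate a list of (side, on_duration_s, off_duration_s) trials.

    The side is chosen at random; the lamp stays on for a uniformly
    random 6-9 s and then stays off for 5 s, as in the protocol.
    """
    rng = random.Random(seed)
    schedule = []
    for _ in range(n_trials):
        side = rng.choice(["left", "right"])  # random lamp selection
        on_time = rng.uniform(6.0, 9.0)       # variable on period
        schedule.append((side, on_time, 5.0)) # fixed 5 s inactive period
    return schedule
```

On the Raspberry unit, each tuple would then drive the corresponding lamp's GPIO pin for the listed durations.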

Pre-Processing Data
During EEG monitoring, the presence of artifacts and noise in the acquired data was one of the main problems we had to face. Exogenous and endogenous noise can significantly affect the reliability of the acquired data. Concerning artifacts, several types have been described in the literature [34], among them ocular, muscle, cardiac, and extrinsic artifacts.
In order to limit artifacts, we worked as follows:
• Muscle artifacts were intrinsically limited in the EEG signal acquisition system thanks to the two differential electrodes embodied in the Enobio cap.
• Extrinsic artifacts were limited by properly filtering and normalizing the EEG signals. Specifically, we applied a band-stop (notch) filter between 49 and 51 Hz in order to eliminate the noise introduced by the power-line frequency [35].
• In addition, in order to remove linear trends, a high-pass filter cutting frequencies lower than 1 Hz was applied to the overall signal.
The resulting signals, whose unit of measure is µV, were amplified by a factor of 10^5 and limited between −1 and 1. The reason is to enhance the precision of the following signal analysis. The head positions were coded as follows: −1 for the left position, 1 for the right, and 0 for forward. The participants were asked to move the head at a normal speed, avoiding sudden movements. Thus, the transition from one position to the other (e.g., left to forward) was linearly smoothed using a moving average computed on a window of 300 samples (i.e., a duration of 0.6 s).
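The pre-processing chain above (notch around 50 Hz, high-pass at 1 Hz, scaling with clipping to [−1, 1], and the 300-sample smoothing of the position labels) can be sketched as follows. The SciPy filter designs and their orders are assumptions, since the paper does not specify the implementation:

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 500  # EEG sampling frequency (Hz)

def preprocess_channel(raw):
    """Notch out power-line noise, remove slow drift, scale and clip.

    `raw` is one EEG channel; the 10**5 gain and the [-1, 1] clipping
    follow the paper, while the filter orders are assumptions.
    """
    b, a = iirnotch(w0=50.0, Q=25.0, fs=FS)          # suppress the 49-51 Hz band
    x = filtfilt(b, a, raw)
    b, a = butter(4, 1.0, btype="highpass", fs=FS)   # remove trends below 1 Hz
    x = filtfilt(b, a, x)
    return np.clip(x * 1e5, -1.0, 1.0)

def smooth_positions(positions, window=300):
    """Moving-average smoothing of the -1/0/1 head-position signal."""
    kernel = np.ones(window) / window
    return np.convolve(positions, kernel, mode="same")
```

Zero-phase filtering (`filtfilt`) is used here so the cleaned signal stays aligned with the head-position labels.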

Input Output Data Analysis
The testing goal is to find a direct input-output function that is able to relate a certain number of EEG samples to the corresponding value of the head position. This is challenging since, as stated in the literature, time variance [36] and sensitivity to different participants' reactions [37] are well-known obstacles.
Specifically, the goal is to identify a non-linear input-output function that takes 10 consecutive EEG samples, extracted from O1, O2, and CZ (hereinafter defined as x(t), a 3-component vector sampled at instant t), and returns the value of the head position in the sample just following the EEG samples (hereinafter defined as y(t)).
A non-linear function f between the input x(t) and the output y(t) must be identified so that the predicted values ŷ(t) resulting from Equation (1):

ŷ(t) = f(x(t), x(t−1), ..., x(t−9))    (1)

minimize the mean squared error (MSE) between y(t) and ŷ(t), where the MSE computed on one prediction of N samples is given by:

MSE = (1/N) · Σ_t (y(t) − ŷ(t))²

To keep the predictions less sensitive to the input noise, the predicted values ŷ(t) are averaged on a moving mean of the 300 preceding samples, that is:

ȳ(t) = (1/300) · Σ_{k=0..299} ŷ(t − k)

The identification reliability of the function f is evaluated against two key performance indexes, the MSE and the Pearson correlation coefficient r, reported below:

r(y, ŷ) = cov(y, ŷ) / (σ_y · σ_ŷ)

where:
• σ_y and σ_ŷ are the standard deviations of y and ŷ;
• cov(y, ŷ) is the covariance of y and ŷ.
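The two performance indexes, together with the 300-sample moving mean, translate directly into a few NumPy helpers (a minimal sketch; the function names are ours):

```python
import numpy as np

def mse(y, y_hat):
    """Mean squared error between recorded and predicted positions."""
    return float(np.mean((np.asarray(y) - np.asarray(y_hat)) ** 2))

def moving_mean(y_hat, window=300):
    """Average each prediction over itself and the preceding samples."""
    y_hat = np.asarray(y_hat, dtype=float)
    out = np.empty_like(y_hat)
    for t in range(len(y_hat)):
        out[t] = y_hat[max(0, t - window + 1):t + 1].mean()
    return out

def pearson_r(y, y_hat):
    """Pearson r = cov(y, y_hat) / (sigma_y * sigma_y_hat)."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    cov = np.mean((y - y.mean()) * (y_hat - y_hat.mean()))
    return float(cov / (y.std() * y_hat.std()))
```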
An ANN with 10 neurons in the hidden layer was used to identify the non-linear input-output function. The identification process is based on the Levenberg–Marquardt backpropagation algorithm [38,39].
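A sketch of this identification step, under a stated substitution: scikit-learn does not provide Levenberg–Marquardt training (the paper's algorithm, typical of MATLAB's `trainlm`), so the `lbfgs` solver is used here as a stand-in; `make_windows` and `fit_model` are illustrative names:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_windows(eeg, positions, window=10):
    """Each input stacks `window` consecutive 3-channel samples (O1, O2, CZ)
    into a 30-component vector; the target is the head position in the
    sample that follows the window."""
    X = np.stack([eeg[t - window + 1:t + 1].ravel()
                  for t in range(window - 1, len(eeg) - 1)])
    y = np.asarray(positions)[window:]
    return X, y

def fit_model(X, y):
    """10 hidden neurons as in the paper; 'lbfgs' replaces
    Levenberg-Marquardt, which scikit-learn does not implement."""
    model = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                         max_iter=1000, random_state=0)
    return model.fit(X, y)
```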

Data Set
The trials involved 10 participants: one woman (P1) and nine men (P2-P10), aged 25 to 60, with no known history of neurological abnormalities.
All participants but P5 are right-handed. P2 and P4 are hairless. For two participants, namely P1 and P2, 10 different experiments were recorded; for P10, 2 experiments were recorded, while for the others, namely P3-P9, only one experiment was recorded. All tests were 5 min long. Table 1 shows the main file characteristics. From left to right, the columns show: participant ID; file ID; the number of samples in each file; the time elapsed from the participant's first trial; and the percentage of occurrences of the three coded head positions (1 R (right), 0 F (forward), and −1 L (left)).
As an example, Figure 4 shows the P4F1 trend in the three EEG channels versus the head movement output signal, filtered and normalized as described in Section 2.3.1.

First Analysis. Identification of the Function f on the First Half File and Verification on the Second Half
Each file was divided into two equal parts; we named the first the "training set" and the second the "test set." The training sets always include samples related to the three possible positions (R, F, L). The results on the test set can be further classified according to the r value ranges reported in Table 2 [40]. Table 3 shows the performance indexes on the test set. Out of 29 files, only two (P1F8 and P1F10) show a moderate correlation; the others show a strong one. Tables 4 and 5 report the r and MSE values produced by extracting the functions from the 10 different tests on P1 (rows) and applying them to each test for the same subject (columns). Tables 6 and 7 report the same data produced from the P2 tests. The cells in the tables are grayed according to the classification given in Table 2 (white = strong correlation; gray = moderate correlation; dark gray = weak correlation).
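The half-file evaluation can be sketched as follows, assuming the windowed inputs `X` and smoothed positions `y` for one file have already been built as in Section 2.4 (the helper name is hypothetical, and `lbfgs` again replaces Levenberg–Marquardt, which scikit-learn does not offer):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def half_split_evaluation(X, y):
    """Train on the first half of a file, test on the second half,
    and report (r, MSE) on the test set, as in the first analysis."""
    mid = len(X) // 2
    model = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                         max_iter=1000, random_state=0)
    model.fit(X[:mid], y[:mid])
    pred = model.predict(X[mid:])
    test_mse = float(np.mean((y[mid:] - pred) ** 2))
    r = float(np.corrcoef(y[mid:], pred)[0, 1])
    return r, test_mse
```

The returned r can then be binned into the strong/moderate/weak ranges of Table 2.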

Second Analysis. Identification of the Function f on One Participant's Overall Data and Verification on All Participants' Overall Data
Following this approach, the files related to the overall experiments of each participant were used to train the ANN, in order to test the classifier using each function on each test file. Although the function is identified and verified on the same data, the values on the diagonal (see Tables 8 and 9) showed a strong correlation in this analysis too. On the other hand, as expected, testing one subject's function f on another subject's data returns very low correlation coefficient values, almost close to zero. There is just one case that contradicts this statement: we observed that the function coming from P1 returns results with a good performance (r = 0.52, MSE = 0.38) in the P3 case. This exception is likely fortuitous, although it is quite curious to note that P1 is P2's mother. Figure 6 shows a study case with a medium performance (r = 0.82 and MSE = 0.31); see also Figure 7.

Conclusions
The main contribution of this paper is to address an issue to which the literature concerning BCI has paid little attention: the identification of human head movements (yaw rotations) from EEG signals. This kind of system is effectively starting to appear in commercial systems at the prototype level. For example, it will be used more and more in the automotive context, with proprietary systems that will, however, mostly be based on ANN applications. Thus, for the scientific community, it is hard to be completely aware of the current state-of-the-art prototypes. In our opinion, it is important to share experimental results on these subjects.
Concerning the head yaw rotation studied in this work, from the trials performed on ten different participants, spanning more than two hours of experiments, it seems clear that-under some specific limitations-this goal is achievable.
Specifically, after identifying a proper function over a short period of time (a couple of minutes for each participant), the function can predict the head positions with considerable accuracy for the remaining minutes. Such accuracy is quite relevant (MSE < 0.35 and r > 0.5, p < 0.01), since it was obtained in 26 out of 28 tested files. Once the function is identified for a single file, it generally shows good results on files involving the same participant on the same day.
However, the results obtained in the different analyses proved that the EEG signals are time variant, and only files recorded in a short time interval may be useful to generate a classifier for human head movements following visual stimuli. As a matter of fact, such correlation appears to be time dependent or, more likely, quite susceptible to the sensors' positioning. Besides, a further result of the study, which may represent a drawback but also an important finding of the approach, is that the correlation is strongly dependent on the specific participant: it is not possible to predict one subject's movements with a classifier trained on another subject. This may be a disadvantage in the implementation of the EEG classifier, because the classifier appears to be significantly different for each subject, and this precludes the ability to achieve an acceptable level of generalization. However, further studies should verify this when the classifier is identified on a group of several different subjects.
Other important remarks concern the reliability of the EEG data acquisition, which seems to be extremely dependent on the adherence of the electrodes to the scalp. In the proposed study cases, the two hairless participants achieved a better performance in the tests, suggesting that the quality of the data collection is closely related to the quality of the predictions.
Future developments will address several topics. Since, in the trials reported in this paper, the EEG is affected by both electrical and illumination stimuli, further efforts should be devoted to separating these two aspects. Secondly, further EEG signal analysis should be performed to outline input-output relations for specific frequency bands.

Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to restrictions present in the informed consent.