Advanced Bioelectrical Signal Processing Methods: Past, Present and Future Approach—Part II: Brain Signals

As mentioned in Part I of this work, advanced signal processing methods form one of the fastest and most dynamically developing areas of biomedical engineering, with increasing usage in current clinical practice. This paper, Part II, presents and compares various innovative methods for the analysis of brain bioelectrical signals. It describes both classical and advanced approaches for the removal of noise contamination, such as, among others, digital adaptive and non-adaptive filtering, signal decomposition methods based on blind source separation, and the wavelet transform.


Introduction
The most complex "computer" in the world is the human brain. Despite numerous attempts, no one has ever managed to completely model its overall operation [1][2][3]. It consists of as many as 100 billion neurons, and each neuron can create as many as 10,000 synaptic connections with other nerve cells. The brain is jelly-like in consistency because it is mostly water [4]. Its mass accounts for about 2% of the total human body mass, yet it consumes about 20% of the energy produced by the body. The very process of thinking is based on electricity and chemistry: in an active state, the brain generates about 25 W of power, which is enough to light up a light bulb. Despite so much research and such extensive knowledge about the brain, there are still many unknowns [2].
The recent growing interest in the analysis of biomedical data has led to numerous innovations and advances. These have created opportunities to develop and apply novel algorithms for more sophisticated and effective denoising of biological signals, in order to obtain clinically useful information, also in real time. These techniques include, for example, improved classical non-adaptive or adaptive filters, signal decomposition methods, and hybrid algorithms.
Biomedical data (in particular brain signals) are very difficult from an analytical point of view, mainly due to their non-stationary nature and low amplitude and low-frequency range. Moreover, these signals are often noisy and contaminated with various artifacts, which negatively affects their potential processing utility [5,6].
This paper provides an extensive review of the latest methods applied for the processing of brain signals. Herein, the most popular methods were summarized and those most efficient were presented in detail.

Electroencephalography
Electroencephalography (EEG) is a diagnostic method enabling the measurement and recording of the electrical activity of the brain [7][8][9]. EEG measurement can be classified as either invasive or non-invasive, depending on the necessity of surgical intervention. The non-invasive measurement method is based on electrodes placed on the scalp in accordance with the "10-20" system (or its modifications, such as "10-10" or "10-5"), as illustrated in Figure 1 [10,11]. The invasive recordings require, e.g., needle electrodes [11]. Figure 1. The "10-20" EEG electrode placement system.
Besides typical "10-20" EEG montages and their variations, other electrode placement systems can be distinguished [12,13]. The montages can be bipolar, referential, common average, or Laplacian [12]. These montages have various potential clinical implementations and are used mostly in clinical applications. The main reason for seeking alternatives to the typical 10-20 system is the need to localize particular abnormalities; such montages are also frequently applied in epilepsy- and sleep-related studies [12,14]. Figure 2 illustrates the above-mentioned alternative montages (panel (c): referential ear montage). The EEG is usually applied for the examination of neurological and psychological disorders or epileptic seizures. It is also used for monitoring the various stages of sleep. Another implementation of electroencephalography is brain interaction with external environments in the form of Brain-Computer Interface (BCI) systems [5,15-19].
The history of BCI started in the 1970s, and the rapid development of these systems has led to truly efficient communication between the human brain and computers [20]. A BCI refers to a system which measures and applies signals obtained from the central nervous system. Thus, according to this definition, other human-machine systems based on biomedical data, such as, inter alia, voice- or muscle-activated systems, cannot be considered BCIs [20,21]. In BCI systems, the signal acquired from the brain is analyzed and translated into appropriate commands, which may allow full or partial replacement of external devices, such as a computer keyboard, mouse, or joystick, to perform an action [21]. A scheme illustrating the operation of a typical BCI system is presented in Figure 3 [20]. Figure 3. Scheme of a BCI system [20].
As mentioned above, clinical EEG applies the international "10-20" system of surface electrode placement, which was illustrated in Figure 1 [10,11]. The system's name derives from the distances between the electrodes: the "10" and "20" correspond to 10% and 20% of the distance between the extreme points, i.e., between the mastoids and between the nasion and inion. The "10-20" system consists of 21 electrodes in total; however, only 19 are placed on the surface of the scalp, while the remaining 2 are usually applied on the earlobes as reference channels. There are also the above-mentioned variations of this system, in which the distance is decreased to 10% or 5% to accommodate more electrodes [10,11,22].
For the clinical measurement of sleep, usually 8 to 21 channels are applied, while scientific studies may require up to 256 channels placed on the scalp surface [22,23]. The usual length of a recording is around 20 min, but sleep stage tracking or the diagnosis of epilepsy may take several hours or even days [24,25].
The amplitude of the signal is very low, reaching values of around 100 µV. The most commonly applied sampling frequency for the EEG signal is 512 Hz, and for EPs (Evoked Potentials) up to 6 kHz. The frequency range of the EEG signals extends up to 80 Hz, but for routine diagnostic testing a range of 0.5-50 Hz is typically used [5,16,26-28].
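As an illustration of the routine 0.5-50 Hz diagnostic band mentioned above, a zero-phase Butterworth band-pass filter can be sketched as follows. This is a minimal example on synthetic data; the function name and parameter choices are illustrative, not taken from any cited study:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_eeg(x, fs, low=0.5, high=50.0, order=4):
    """Zero-phase Butterworth band-pass for the routine EEG diagnostic range."""
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, x)          # forward-backward pass -> zero phase

fs = 512.0                            # a commonly used EEG sampling rate
t = np.arange(0, 2.0, 1 / fs)
# synthetic 10 Hz alpha-like rhythm plus 80 Hz out-of-band interference
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 80 * t)
y = bandpass_eeg(x, fs)               # the 80 Hz component is attenuated
```

Zero-phase filtering (`filtfilt`) is preferred here because phase distortion would shift the morphology of clinically relevant waveforms.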

EEG Recordings
The non-invasive EEG recordings are acquired within frequency ranges corresponding to the individual EEG rhythms (see Figure 4) [5,29,30]. Figure 5 presents sample EEG data: Figure 5a shows a 1 s long sample, Figure 5b the same time frame after normalization, and Figure 5c a 10 s long spectrogram. The data is raw and unfiltered. The F3-C3 channel is based on the "banana" montage, similar to the one illustrated in Figure 2a. Reduction or absence of particular rhythms in the EEG recordings can be considered an abnormality. For example, local brain injury or a tumor leads to an abnormal slowing of waves in the related areas, while spikes and sharp waves may indicate the presence of epileptic foci [26-28,32-34].
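The individual rhythms can be quantified by integrating a power spectral density estimate over the conventional band limits. Below is a minimal sketch using Welch's method on a synthetic signal; the band boundaries vary slightly between sources, and the values used here are only common conventions:

```python
import numpy as np
from scipy.signal import welch

# conventional EEG rhythm boundaries in Hz (exact limits vary between sources)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 80)}

def band_powers(x, fs):
    """Approximate absolute power per EEG band from a Welch periodogram."""
    freqs, psd = welch(x, fs=fs, nperseg=min(len(x), 2 * int(fs)))
    df = freqs[1] - freqs[0]
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum() * df
            for name, (lo, hi) in BANDS.items()}

fs = 256.0
t = np.arange(0, 10.0, 1 / fs)
alpha_like = np.sin(2 * np.pi * 10 * t)   # synthetic 10 Hz "alpha" activity
powers = band_powers(alpha_like, fs)      # the alpha band should dominate
```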
It is also important to mention that the EEG can be used as a first-line method for the diagnosis of tumors, stroke, and other brain disorders, but its applicability has decreased with the invention and development of high-resolution anatomical imaging techniques such as magnetic resonance imaging (MRI) and computed tomography (CT) [45-47]. Despite its limited spatial resolution, the EEG continues to be a valuable tool for research and diagnostic purposes. It is one of the few mobile techniques available and offers millisecond-range temporal resolution, which cannot be obtained with CT, PET, or MRI; however, some modern systems combine all these methods together, creating so-called hybrid methods [20,46,47].
Unfortunately, routine EEG is often an insufficient source of information to establish an appropriate medical diagnosis and to determine the most appropriate treatment. However, some improvements in this area have been made, such as recordings carried out during seizures (ictal recordings), as opposed to inter-ictal recordings performed between seizures. In some cases, polysomnography can be applied for such purposes [48-50]. Epilepsy recording methods can also be improved with the implementation of various machine learning-based methods, which may help to obtain good-quality data [51-53].
Epilepsy recordings can be carried out either during hospital admission or under ambulatory conditions (outpatient). The hospital stay is usually performed at the Epilepsy Monitoring Unit (EMU) with appropriate medical personnel trained in taking the appropriate care of patients with seizures. The ambulatory monitoring is usually video-based EEG recording and lasts typically one to three days. An admission to EMU is usually longer and can last a week or longer. While in the hospital, seizure medications are usually withdrawn to increase the odds that seizures occur during the stay at EMU [54]. For reasons of safety, medications are not withdrawn during the EEG recordings performed outside of the hospital [54,55]. The ambulatory video EEG (VEEG) recordings are more convenient (less stressful) for the patients and are less expensive than a hospital admission, but their main disadvantage is that the unsupervised patients are prone to the occurrence of various harmful events and the obtained results depend on the patient's self-observation ability and can, therefore, be biased [56][57][58][59].
The EEG is not primarily indicated for headache diagnosis; however, some visual evoked potential changes observed in the data may indicate such a problem. Since headaches are a common, recurring pain problem, this procedure is sometimes used in the search for a diagnosis, although it has no advantage over routine clinical evaluation [60-64].
Besides the expensive devices dedicated to clinical measurements, there are also devices which are smaller, portable, and cost-effective [20,65,66]. Numerous papers compare various inexpensive EEG headsets, which are mostly applied for the purpose of the above-mentioned BCI systems [5,20,67,68].
The most popular inexpensive devices include those designed and developed by the following companies: Emotiv, OpenBCI, Neurosky, and Interaxon [20,67,68]. Their products are becoming more and more popular not only due to their price and recording quality, but also due to their ease of use [67,73,74]. The aspect of ergonomics during EEG measurements plays an increasingly important role [73-75].
When it comes to ergonomics and ease of use, some systems require recordings from a smaller number of electrodes to minimize the data analysis effort and to ease the mounting of the device. Single-channel recordings are becoming more and more popular, and some studies have proven that such systems can still provide reliable information [76-79].

Artifacts Present in the EEG Recordings
The EEG signals collected from the surface of the scalp are highly contaminated with various types of internal and external artifacts and background noise [5,80]. This is because the EEG data is prone to disturbances from, inter alia, the recording unit, amplifiers and cables, the motion of other persons around the patient, or the patient themselves (e.g., the ECG, breathing, or eye blinking) [5,15,81,82]. Compared with the ECG, the EEG recording is more strongly influenced by this noise because of its very small amplitude [82,83]. In the case of insufficient noise removal, the EEG can be totally suppressed and illegible for further analysis and interpretation [16]. Power line interference also frequently occurs in the signal and is one of the main issues, besides EMG- and ECG-related artifacts, in the analysis of brain signals (in particular the EEG) [84,85]. For such purposes, adaptive filters based on least mean squares (LMS), or those based on normalized LMS [86], can be applied [84,87], as well as notch filters [88].
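A common remedy for the power line interference mentioned above is a narrow notch filter centered at the mains frequency. A minimal sketch on synthetic data follows; 50 Hz mains is assumed here, while 60 Hz applies in some regions:

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 512.0
f0, Q = 50.0, 30.0                    # mains frequency and notch quality factor
b, a = iirnotch(f0, Q, fs=fs)         # narrow band-stop centred at f0

t = np.arange(0, 2.0, 1 / fs)
eeg_like = np.sin(2 * np.pi * 10 * t)            # synthetic brain rhythm
mains = 0.8 * np.sin(2 * np.pi * f0 * t)         # power line interference
cleaned = filtfilt(b, a, eeg_like + mains)       # interference suppressed
```

A higher `Q` narrows the stop-band, removing less of the neighbouring EEG spectrum at the cost of slower transient response.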
The myopotentials caused by movements of the muscles in the area of the temples and forehead are the most common physiological interference in the EEG signals [89]. However, these potentials last a short time compared to the potentials generated by the brain cells. Moreover, their shape and frequency can easily be identified in the EEG recordings. Some unique patterns of these artifacts can be observed in certain movement-related disorders (e.g., rhythmic sinusoidal artifacts of 4-6 Hz in Parkinson's disease) [90,91].
The low-frequency motion artifacts are usually caused by eye and tongue movements [83,92]. Tongue movement results in a wide band of potentials (often within the range of the delta frequency band), which decreases from the frontal to the occipital area, but it does not reach amplitudes as pronounced as the eye artifacts [83,92-94]. The eye movements are visible in any EEG recording, and they are useful for the identification of sleep stages [95,96]. The motion of the eyeball generates an alternating current with a high amplitude, which is visible on the electrodes around the eyes [83,92]. The potentials of the muscles of the eye socket are another source of artifacts; usually, this artifact causes a significant decrease in the EEG amplitude. Slow or steep waves in the EEG signal can also originate from breathing, appearing synchronously with inhalation or exhalation in the channels of the electrodes on which the patient lies.
The EEG signal can also be interfered with by the ECG signal, which is easily detectable due to its rhythm and regularity. The voltage level of this artifact differs between measurements. Hence, the ECG is ordinarily measured together with the EEG, so that the effect of the ECG on the EEG can be clearly identified [97-99].

EEG Signal Processing Methods
The perturbations induced by various artifacts and random noise are particularly difficult to correct because of their high amplitude, wide spectral distribution, and variable topographical distribution [5,15,16]. Therefore, denoising of the EEG data is a very challenging pre-processing step prior to qualitative or quantitative EEG signal analysis. Among the processing methods, both multi-channel and single-channel techniques have become widely used, such as adaptive filters, the Wavelet Transform (WT), or Independent Component Analysis (ICA) [100-102]. All these methods are summarized in Table 1.

Filtering Methods
For diagnostic purposes, appropriate filtering of the EEG signals plays a crucial role, as it can make the data more legible. Therefore, various filtering methods have been tested for many years [103]. Implementation of traditional, conventional statistical signal processing methods (such as, inter alia, Fourier or Laplace analysis) can give weak, unsatisfactory results [5,33]. Among the most commonly applied filters, spatial filters significantly improve the signal-to-noise ratio (SNR) [104]. It is hard to choose the most appropriate filtering method due to the non-stationary character of these data and their low amplitude. Most of the typical, classical filters can remove important information from the signal and affect it negatively [5,16]. Various smoothing filters, such as the Savitzky-Golay filter, can also significantly improve overall data quality [16]. In many cases, the implementation of Kalman filters has given very positive results [105,106].
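As an example of the smoothing filters mentioned above, a Savitzky-Golay filter fits a low-order polynomial over a sliding window, preserving slow waveforms while attenuating broadband noise. A minimal sketch on synthetic data; the window length and polynomial order are illustrative choices:

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
fs = 256.0
t = np.arange(0, 2.0, 1 / fs)
clean = np.sin(2 * np.pi * 5 * t)                # slow 5 Hz waveform
noisy = clean + 0.3 * rng.standard_normal(t.size)

# window_length must be odd and greater than polyorder
smoothed = savgol_filter(noisy, window_length=31, polyorder=3)
```

A wider window suppresses more noise but starts to flatten fast transients, so the window should stay short relative to the slowest waveform of interest.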
As mentioned above, more and more channels are utilized in the analysis of EEG signals, which can impede their potential practical implementation. This is also one of the reasons for applying appropriate filtering methods [103,107]. In this case (with a large number of electrodes applied), spatio-temporal filtering has provided satisfactory results [107].
A very interesting area of filtering is non-integer-order (fractional) filters. Fractional calculus is a mathematical field which, despite being developed in the 19th century, has become popular only recently [108,109]. Fractional-order filters have been successfully applied in numerous fields such as bioimpedance, image processing, control, encryption, and the filtering of biomedical signals [108,110-112]. They are useful due to the non-integer order of the filter, which allows higher flexibility than traditional integer-order filters [109].

Wavelet Transform
The WT-based processing methods became popular in biomedical signal processing because of their ability to suppress high-frequency components and separate the low-frequency components of a signal [113-115]. They play a very important role in the analysis and processing of the EEG, mostly due to the nature of these signals. The implementation of wavelet filters can be a good alternative to traditional signal processing methods; such a solution was presented in detail in [33]. There are several types of WT, such as the continuous wavelet transform (CWT), discrete wavelet transform (DWT), stationary wavelet transform (SWT), and pitch-synchronous wavelet transform (PSWT). The choice of WT type depends on the input signal and the application. The input signal also substantially affects the choice of the mother wavelet and the decomposition level (for all types of WT). There are many mother wavelets, such as Haar, Daubechies, Biorthogonal, Coiflet, Symlet, Morlet, Mexican Hat, and Meyer.
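The core idea of DWT-based denoising, thresholding the detail coefficients and reconstructing, can be sketched with a single-level Haar transform implemented directly. This is a deliberately simplified, illustrative version; practical pipelines use deeper decompositions and other mother wavelets:

```python
import numpy as np

def haar_denoise(x, threshold):
    """One-level Haar DWT: soft-threshold the detail coefficients, reconstruct."""
    n = len(x) - len(x) % 2                      # even length for pairing
    a = (x[:n:2] + x[1:n:2]) / np.sqrt(2)        # approximation coefficients
    d = (x[:n:2] - x[1:n:2]) / np.sqrt(2)        # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)  # soft threshold
    y = np.empty(n)
    y[0::2] = (a + d) / np.sqrt(2)               # inverse Haar transform
    y[1::2] = (a - d) / np.sqrt(2)
    return y

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 512)
clean = np.sin(2 * np.pi * 4 * t)
noisy = clean + 0.2 * rng.standard_normal(t.size)
# universal threshold sigma * sqrt(2 ln n), with the noise sigma assumed known
den = haar_denoise(noisy, 0.2 * np.sqrt(2 * np.log(t.size)))
```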
In [116,117], the authors compared several types of mother wavelets used for noise removal from the EEG data. The Daubechies wavelets (mainly db8) and Mexican Hat wavelets were found to be more accurate for signals recorded from healthy subjects (control group), whereas for the epileptic subjects, the orthogonal Meyer wavelets were more efficient.

Independent Component Analysis
The ICA algorithm is based on the assumption that the recorded signals are a linear combination of independent sources: the observations are represented as a vector multiplied by a mixing matrix, whose inverse recovers the original source signals. Computing this inverse (unmixing) matrix is the task of the ICA algorithm. In EEG processing, the ICA is frequently applied for the purpose of EEG signal denoising [118]. The ICA is often used in combination with other filtering techniques [100], such as, inter alia, Kalman filters [119]. The ICA is also useful in the analysis of multichannel EEG data [120,121].
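The mixing model described above can be demonstrated on synthetic data: two known sources are mixed by a hypothetical matrix, and FastICA estimates the unmixing transform. This is a minimal sketch; scikit-learn's implementation is used purely for illustration:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * np.pi * 7 * t)                   # brain-like oscillation
s2 = np.sign(np.sin(2 * np.pi * 1 * t))          # blink-like square artifact
S = np.c_[s1, s2]                                # true independent sources
A = np.array([[1.0, 0.6],
              [0.5, 1.0]])                       # hypothetical mixing matrix
X = S @ A.T                                      # observed channel mixtures

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)                     # estimated (unmixed) sources
```

The recovered components match the true sources only up to permutation, sign, and scale, which is an inherent ambiguity of ICA.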
In [122], the potential of neural-signal (EEG)-based methods for enhancing human-building interaction under various indoor temperatures was explored. For pre-processing, a Matlab toolbox was used, applying a high-pass filter (HPF) with a 3 Hz cut-off frequency to remove the DC offset and the low-frequency skin potential artifacts, followed by a low-pass filter (LPF) at 45 Hz to remove high-frequency noise. Non-stereotyped artifacts, such as large movement noise, were rejected by visually scrolling through the data, and the remaining artifacts, such as eye blinks and muscle activity, were removed with a built-in ICA algorithm.
Corradino et al. [123] used ICA-based methods for the detection of artifacts in the EEG, which were then removed with another algorithm, ordinary least squares. The matrix of independent components was further filtered with a notch filter, because a certain frequency of the detected artifact was presumed. The procedure proved to be effective, but not suitable for online signal processing due to the implementation of the time-consuming ICA algorithm.
Li et al. [124] proposed a method that includes the following three steps: independent component decomposition, common interference identification, and its removal. For the ICA implementation, FastICA was adopted with the maximum-negentropy principle. The putative common interference component is assumed to be a distal signal, which has approximately the same effect on all channels. To obtain local brain activities more accurately, these distal common interference components should be removed.
Arnin et al. [125] presented the signal processing method for the purpose of BCI development which was introduced for severe neurological diseases especially for the locked-in patients, who are severely or totally neuro-muscularly impaired, which means, they are unable to move or speak, so their only way for communication is the use of brain signals [126,127]. The concept is to detect one's movement intention and use it to control external devices such as wheelchair or rehabilitation equipment. After exploring the EEG data-set, the present artifacts were reduced using the ICA and the result was an artifact-free signal, which was later processed using different feature extraction methods [128][129][130].
As it has been stated above-the ICA is an efficient and popular method in the analysis of various biomedical data, in particular EEG [120,121]. It is a type of BSS (Blind Source Separation) method usually applied in the analysis of the EEG signals to separate various artifacts, such as eye blinks or muscle artifacts [120].
Most of the ICA-based algorithms require large amounts of training data and are suitable only for offline solutions, since they assume spatio-temporal stationarity of the analyzed data; popular examples, such as Infomax ICA and FastICA, are also computationally expensive and time-inefficient [120,121].
In [120], the authors presented an ICA-based algorithm operating on large 61-channel data for online decomposition of brain activity and artifacts, and obtained interesting results proving its efficiency in an online setting. They applied an algorithm called ORICA (Online Recursive Independent Component Analysis), which was compared with the offline Infomax. The results were very promising and proved that ORICA is a very powerful tool for online EEG data analysis [120,131].

Empirical Mode Decomposition
The Empirical Mode Decomposition (EMD) method is often used for biomedical signal processing due to its adaptability and signal dependency. In each iteration, the algorithm extracts an individual Intrinsic Mode Function (IMF) based on the detected local maxima and minima. Such decomposition is advantageous in eliminating low-frequency noise and narrow-band information [132]. In contrast to the ICA, the method allows the processing of single-channel signals.
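The sifting procedure behind EMD, estimating upper and lower envelopes from local extrema and subtracting their mean, can be sketched as follows. This is a heavily simplified, illustrative version extracting only the first IMF, with crude endpoint handling; production implementations use proper boundary conditions and stopping criteria:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift_first_imf(x, t, n_sift=8):
    """Extract an approximate first IMF by repeated sifting."""
    h, n = x.copy(), len(x)
    for _ in range(n_sift):
        # interior local extrema found by neighbour comparison
        imax = np.flatnonzero((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:])) + 1
        imin = np.flatnonzero((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:])) + 1
        if len(imax) < 2 or len(imin) < 2:
            break
        # crude boundary handling: treat both endpoints as extrema
        imax, imin = np.r_[0, imax, n - 1], np.r_[0, imin, n - 1]
        upper = CubicSpline(t[imax], h[imax])(t)   # upper envelope
        lower = CubicSpline(t[imin], h[imin])(t)   # lower envelope
        h = h - (upper + lower) / 2.0              # subtract the local mean
    return h

t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 20 * t) + np.sin(2 * np.pi * 2 * t)  # fast + slow tones
imf1 = sift_first_imf(x, t)        # should resemble the 20 Hz component
```

Subtracting `imf1` from `x` and repeating the procedure on the residue would yield the subsequent, lower-frequency IMFs.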
Amo et al. [133] determined whether detection of gamma-band activity can be improved when a filter, based on the EMD, was added to the pre-processing block of the single-channel EEG signals. In their study-the EEGs from 25 control subjects were registered in basal and motor activity (hand movements) using only one EEG channel. Over the basic signal, the IMF signals were computed. The gamma-band activity was computed using power spectrum density in the 30-60 Hz range.
Chen et al. [134] used the EEG signals obtained from patients in two different states: an aware state and an anesthetized state. The data was decomposed into a set of Intrinsic Mode Functions (IMFs) with the implementation of the EMD algorithm. Fast Fourier transform (FFT) and Hilbert transform (HT) analyses were then performed on each IMF to determine the frequency spectra. The probability distributions of the expected frequency values were generated for the same IMF in the two above-mentioned groups of patients. The corresponding statistics, including analysis of variance tests, were also calculated. A receiver operating characteristic curve was used to identify the optimal frequency value to discriminate between the two states of consciousness.
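The Hilbert transform step used in such analyses yields an instantaneous frequency for each narrow-band IMF. A minimal sketch on a synthetic tone:

```python
import numpy as np
from scipy.signal import hilbert

fs = 500.0
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * 12.0 * t)                 # narrow-band "IMF" at 12 Hz

analytic = hilbert(x)                            # analytic signal x + i*HT(x)
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)    # instantaneous frequency, Hz
```

The estimate is meaningful only for narrow-band (IMF-like) signals, which is why the Hilbert transform is applied per IMF rather than to the raw EEG.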
Gaur et al. [135] used a BCI system to translate human motion intentions, using signals generated by the electrical activity of the brain (such as the EEG), into control signals. One of the major challenges in BCI research is the classification of human motion intentions from the non-stationary EEG signals [136]. In [135], a novel subject-specific multivariate EMD-based filtering method was proposed, namely the SS-MEMDBF (subject-specific multivariate empirical mode decomposition-based filtering), which classifies motor imagery (MI)-based EEG signals into multiple classes. The MEMD method simultaneously decomposes the multi-channel EEG signals into a group of multivariate IMFs (MIMFs). This decomposition enables the extraction of cross-channel information and also the localization of specific frequency information. The MIMFs can be considered narrow-band, amplitude- and frequency-modulated signals. EMD-based methods can also be applied for the removal of muscular interference and to clean the signal from EOG artifacts [137].

Time-Frequency Image Dimensionality Reduction
Most of the EEG processing methods, such as the ICA, are only applicable to multi-electrode EEG signals. However, results obtained from single channels are becoming more and more popular [77,89,138]. For single-channel EEG, methods such as the WT or adaptive filters are very popular, but they only suppress the noise outside the filtering frequency band, while the co-channel interference is left unprocessed [138]. For co-channel interference removal, a method based on time-frequency (T-F) image dimensionality reduction was proposed in [97]. The innovative idea of the proposed method is that it is applicable to single-electrode EEG signal enhancement, and the background noise can be suppressed in the entire time-frequency space. The proposed method was experimentally validated on a set of real EEG data. The experimental results indicated that the method was effective in suppressing co-channel interference in single-electrode EEG.

Neural Networks
Denoising of the EEG based on neural networks started to be investigated mainly due to its real-time usability and, of course, high accuracy. For example, Leite et al. [139] proposed a deep convolutional auto-encoder for the elimination of eye-blinking and jaw-clenching artifacts, which proved superior to traditional filtering methods. The elimination of ocular artifacts (OAs) using a deep learning network (DLN) was investigated in [140]. The filtration was performed in two stages: in the offline stage, training samples without artifacts were used to train the DLN to reconstruct the EEG signals; afterwards, the trained DLN was used as a filter to automatically remove OAs from the contaminated EEG signals in the online stage. The performance of the proposed method was compared with the classic ICA, kurtosis-ICA, second-order blind identification, and a shallow network method. Besides the higher success rate in eliminating artifacts, the method brought other advantages, such as no need for additional EOG reference signals, the possibility of analyzing a smaller number of channels, and time savings.
In another study [141], ocular, muscular, and cardiac artifacts were removed from the EEG using an adaptive filter based on a radial basis function network and a functional link neural network (the so-called FLN-RBFN-based filter). This method was tested in both offline and online modes. While the offline learning algorithm has simpler computations and assumes the EEG signal to be stationary, the more demanding online algorithm is updated based on the newly incoming patterns in each cycle and thus copes well with the non-stationarity of the EEG signal. However, both approaches perform well in artifact removal, with considerable improvement in results. Real-time EEG denoising was also proposed by Cowan et al. [142], using a smart electrode employing a DNN that learns continuously to remove both eye-blink and muscle artifacts and works based on adaptive filtering.

Adaptive Neuro-Fuzzy Inference System
The Adaptive Neuro-Fuzzy Inference System (ANFIS) is an adaptive soft-computing method, in contrast to hard computing, which relies on analytical methods, Boolean logic, sharp classification, and deterministic searching. The soft computing methods include, among others: artificial neural networks (ANN), genetic algorithms (GAs), fuzzy logic (FL), adaptive neuro-fuzzy inference systems (ANFIS), support vector machines (SVM), and data mining (DM) [143]. The ANFIS combines a fuzzy inference function and an artificial neural network, taking advantage of both powerful techniques. The basic idea of the ANFIS is an architecture utilizing a fuzzy system to represent interpretable knowledge and the learning capability of neural networks to optimize its parameters [144].
Adaptive filtering of OAs using the ANFIS was proposed by Chen et al. [145] on both simulated and real signals. In the first case, the ANFIS outperformed adaptive filtering and ADALINE (Adaptive Linear Element) methods. The authors also investigated the influence of the time delay of OAs, which occurs in real EEG measurements. It was found that the number of ANFIS inputs plays a role in the efficiency of artifact removal: using a higher number resulted in a slight improvement at the cost of a longer convergence time. This sensitivity to the chosen number of inputs was also confirmed on real data sets. Here too, the ANFIS was considered a suitable tool for the removal of OAs and other noise.
The elimination of OAs by ANFIS was also presented in [146], where the method was improved using a genetic algorithm to optimize the parameters of the ANFIS structure. The proposed method reached better performance in SNR compared to the traditional ANFIS algorithm, especially when a higher number of iterations was used.

Hybrid Methods
A combination of at least two individual techniques is considered a hybrid method in this paper. These methods are beneficial due to their performance and accuracy. However, their implementation has higher requirements on computational cost and complexity in contrast with other above-discussed methods and thus, their application has to be considered depending on its purposes.
To improve the pre-processing of EEG data, a combination of WT and ICA was proposed in [147,148]. The principle of this method lies in separating the artifact-related independent components, filtering these components by WT to remove any residual brain activity, and finally projecting the artifacts back to be subtracted from the EEG signals, yielding clean EEG data. The method proved its efficiency in artifact removal and computational time with respect to particular ICA approaches.
For the removal of OAs, a combination of WT-based and adaptive noise cancellation (ANC)-based methods has proven useful [149-151]. The reference input was derived from the contaminated EEG; it was uncorrelated with the EEG but strongly correlated with the OAs, thus meeting the conditions for an ANC reference input. After wavelet decomposition (using an appropriate mother wavelet and decomposition level), soft thresholding was applied to the three lowest levels. The reference signal was obtained by applying wavelet reconstruction to these new wavelet coefficients. The OAs can then be eliminated by the ANC filter using the reconstructed reference signal [152,153].
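The ANC stage of such hybrids is typically an LMS adaptive filter that subtracts the best linear estimate of the artifact (derived from the reference) from the primary channel. A minimal sketch with a synthetic broadband reference follows; the setup and parameters are illustrative, not those of the cited studies:

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=8, mu=0.005):
    """LMS adaptive noise canceller: the error output is the cleaned signal."""
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for k in range(n_taps - 1, len(primary)):
        u = reference[k - n_taps + 1:k + 1][::-1]   # most recent taps first
        e = primary[k] - w @ u                      # error = cleaned sample
        w += 2 * mu * e * u                         # LMS weight update
        out[k] = e
    return out

rng = np.random.default_rng(2)
fs = 250.0
t = np.arange(0, 8.0, 1 / fs)
eeg_like = np.sin(2 * np.pi * 6 * t)                     # signal of interest
noise = rng.standard_normal(t.size)                      # reference channel
artifact = np.convolve(noise, [0.8, 0.4, 0.2])[:t.size]  # leaked, filtered noise
cleaned = lms_cancel(eeg_like + artifact, noise)
```

The filter converges because the artifact is a linear (FIR) function of the reference; the step size `mu` trades convergence speed against residual misadjustment.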
Jafarifarmand et al. [154] investigated real-time processing and OAs removal using a combination of ICA and ANC. In this approach, ICA is applied on only a few EEG channels close to the artifact origin. The resulting independent component, most relevant to the artifact, is then used as a reference signal for ANC, employing fully automated neural networks. Due to real-time operation and a small number of channels required, the method is promising for BCI applications.
Torse et al. [155] compared several EEG pre-processing methods, such as FastICA, runICA, Principal Component Analysis (PCA), and adaptive filtering (both with and without a reference signal), examining their effect on EOG artifacts and epileptic recordings. The methods were compared with respect to two aspects: mean squared errors and computational times. Adaptive filtering differs significantly from the other popular methods by its short computation time, but at the expense of accuracy; the ICA was the most accurate, while the PCA was faster but less accurate than the ICA. The application of ICA provided very good results in removing eye-blinking artifacts [101,156]. Table 1 summarizes the recently used EEG signal processing methods. The following criteria were chosen for the evaluation of their performance and comparison (the same criteria are used in Tables 2 and 3): • Real-time is a parameter defining whether the method can be used in online mode, which is very desirable for usability in clinical practice.

Summary of the EEG Signals' Processing Methods
-Yes: these methods are suitable for real-time applications.

-
No: these methods are not suitable for real-time applications or applications where a small delay is critical.
• Implementation complexity classifies the overall complexity in terms of the deployment in clinical practice to evaluate the economic availability of hardware and software to all patients.
-Simple: these methods are composed of well-known functions and basic mathematical operations, so it is simple to implement them -Medium: these methods contain advanced signal processing algorithms that are not commonly available and thus harder to implement.

-
Complex: these methods contain advanced signal processing methods and complex algorithms making it very challenging to design and implement them.
The presented comparison is necessarily subjective, being based on the prior experience of the authors' team. Our results were verified by a set of basic experiments on a predefined signal sample.

Other Methods-Brief Summary
As mentioned above, many methods have been proposed and applied for feature extraction in BCI systems, such as, among others, analysis of raw time series and signal power estimation [20,157]. Most of these are classical techniques, such as various Fourier transforms or wavelet decomposition, which belong to the classic methods applied for signal processing purposes [5,157]. One of the most interesting, though not yet extensively explored, is the cross-frequency coupling (CFC) method, which has wide applicability potential for BCI systems [157,158]. CFC describes various phenomena related to interactions between oscillations of different frequencies in the human brain [159].
One very interesting study involving CFC was presented in [157], where the authors focused on decoding brain signals evoked by flashing images using the CFC estimator PAC (phase-to-amplitude coupling) to see how the phase of the lower frequency ranges affects the higher oscillations. Their study proved the CFC to be an efficient and valuable estimator. Similar results regarding CFC-PAC were also shown in [160,161].
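To make the idea concrete, the sketch below computes a simple mean-vector-length PAC estimate (one of several PAC estimators described in the literature) on a synthetic signal; the band edges, coupling strength, and frequencies are arbitrary illustrative values, not those used in the cited studies.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def pac_mvl(sig, fs, phase_band, amp_band):
    """Mean-vector-length estimate of phase-to-amplitude coupling."""
    def bandpass(x, lo, hi):
        b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, x)
    phase = np.angle(hilbert(bandpass(sig, *phase_band)))  # low-freq phase
    amp = np.abs(hilbert(bandpass(sig, *amp_band)))        # high-freq envelope
    return np.abs(np.mean(amp * np.exp(1j * phase)))

fs = 500
t = np.arange(0, 4, 1 / fs)
slow = np.sin(2 * np.pi * 6 * t)                       # theta-like oscillation
coupled = slow + (1 + 0.9 * slow) * np.sin(2 * np.pi * 60 * t)
uncoupled = slow + np.sin(2 * np.pi * 60 * t)
mi_c = pac_mvl(coupled, fs, (4, 8), (50, 70))
mi_u = pac_mvl(uncoupled, fs, (4, 8), (50, 70))
```

A signal whose gamma amplitude is modulated by the theta phase yields a markedly larger index than one without such coupling.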
CFC is usually applied for the study of cognitive and perceptual processes. Despite some promising results [157,160,161], it has some limitations, as it is flexible only either over time or over frequency [162]. To address this, Michael Cohen presented a method for transient cross-frequency assessment in [158,162], which enables capturing multiple, dynamic CFC structures over both time and frequency.
In [163], various CFC measures were tested on both real and simulated EEG signals. The obtained results were as follows: the CFC detection was correct under noisy conditions; the CFC detection was correct in simulated data; prominent delta-alpha CFC was identified in the real, resting-state EEG.
It is also important to mention functional brain connectivity, which can be defined as the temporal correlation among the activity of various neural assemblies. With the use of functional connectivity techniques the following brain signals (data) can be derived: local field potential (LFP) recordings, Electroencephalography (EEG), Magnetoencephalography (MEG), Positron Emission Tomography (PET), and Functional Magnetic Resonance Imaging (fMRI) [164].
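At its simplest, functional connectivity between recorded channels can be estimated as the temporal correlation of their activity. The sketch below builds a correlation-based connectivity matrix for three synthetic "channels"; real pipelines use more robust measures (coherence, phase locking, Granger causality), so this is only an illustration of the underlying idea.

```python
import numpy as np

rng = np.random.default_rng(42)
fs = 200
t = np.arange(0, 5, 1 / fs)

# channels 0 and 1 share a common oscillation; channel 2 is independent noise
ch0 = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)
ch1 = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)
ch2 = rng.standard_normal(t.size)

channels = np.vstack([ch0, ch1, ch2])
connectivity = np.corrcoef(channels)   # 3x3 matrix of pairwise correlations
```

Channels driven by a common neural assembly show a high off-diagonal entry, whereas unrelated channels stay near zero.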
An interesting approach to the assessment of various methods based on simulated EEG signals was presented in [165]. The authors decided to focus on highlighting potential pitfalls of various numerical methods rather than providing information on their efficiency. In [164], on the other hand, various mathematical methods, both linear and non-linear, for calculating functional and effective connectivity in EEG and MEG signals were presented.
For the diagnostics of various diseases, such as epilepsy [166,167] or Parkinson's disease [168], deep learning-based methods are frequently applied [169]. To improve the diagnostic processes, automatic systems based on neural networks (e.g., CNNs) or various expert systems are being developed and applied [166,167,170].
In [166], the accuracy obtained for epilepsy detection was 99.1 ± 0.9%, which is an excellent result; the authors applied Convolutional Neural Networks. In [168], the authors presented an interesting automated system for Parkinson's disease detection, based on a CNN architecture, with 88.25% accuracy, 84.71% sensitivity, and 91.77% specificity. The obtained results are very good and promising.
Besides medical purposes, deep learning techniques for the analysis of EEG data can also be applied for biometrics [171], real-time IoT (Internet of Things) applications [172], or emotion recognition [173].

Evoked Potentials
As a derivative of the EEG recording, evoked potentials (EPs) or event-related potentials (ERPs) can be observed as the brain's reaction to external stimuli (visual, somatosensory, or auditory) [174,175]. The EPs refer to averaged EEG responses, which are time-locked to more complex processing of stimuli; this technique is used in cognitive science, cognitive psychology, and psycho-physiological research, and EPs can also be applied as biometric indicators [175]. The EPs are weak signals, characterized by a very low amplitude in comparison to the full EEG data, and are often buried in the activity of associated systems, because the response to an external stimulus is generated by only a small percentage of the brain's neurons. Therefore, their SNR can be improved by synchronized averaging or filtering, with the stimuli repeated tens to thousands of times. Repetition of the same stimulus evokes responses with similar characteristics. The objects of interest are the signal's amplitude, length, and the latency of the response to the stimulus [174,176].
The common, classic surface EEG electrodes are usually used for the recording of the EPs, except for intra-operative sensing from the cortex, where the cortical strips or grids are placed. The placement of the electrodes is the same as in the EEG measurements ("10-20" system, see Figure 1). The leads are different from the classical EEG system and they are not internationally standardized. It is possible to use both unipolar and bipolar leads, usually a maximum of four channels [26,27,176].

EP Recordings
The characteristics of the EP recording depend on the type of the applied stimulus. For auditory EPs, the response can be ipsilateral, when the hemisphere corresponding to the half of the body where the receptors were stimulated reacts. On the contrary, a contralateral response, in which the opposite hemisphere is activated, can be evoked with visual and somatosensory stimuli [26,27]. The nomenclature of the EP waveform can be derived in the two following ways: • Somatosensory evoked potentials (SEP) are generated by the nervous system following a somatosensory stimulus; here the components are labeled according to their polarity and mean latency in normal subjects, e.g., P100, N20 (see Figure 6) [176]. • Auditory evoked potentials (AEP) follow audio stimulation; here the components are numbered according to their polarity in sequence, e.g., N1, N2, N3 (see Figure 7).

Somatosensory EP
The somatosensory EP (SEP) represents the response to stimulation of the receptor fibers of the peripheral nerves, most often of the limbs. Stimuli are generated by electrical excitation using a neuro-stimulator and surface electrodes placed close to the nerve (e.g., nervus medianus in the area of the palm, nervus peroneus in the area of the knee joint) [178,179]. The stimulating impulse has a length from 50 µs to 1 ms and is repeated 200 times with a frequency of 3-10 Hz. The stimulation current has an amplitude of about 25-50 mA [180,181].
The length of the recorded response is about 40 ms for the upper limb and 60-80 ms for the lower limb. The SEP signal has a frequency range of 30-3000 Hz. The voltage of the signal is very low, typically only 5-10 µV. After the stimulation, a certain time interval elapses before the signal recording, reflecting the time necessary for transmission of the excitation from the peripheral nerve to the somatosensory cortex (25 ms for the upper limb, 40 ms for the lower limb). This time delay is often the main indicator of pathological characteristics [26,27,182].

Auditory EP
The auditory EP (AEP) is investigated with short sounds (100 µs) of different frequencies, so-called clicks, delivered through headphones to one ear or to both ears at the same time. The AEP stimuli are repeated 2000 times with a frequency of 1-50 Hz [183][184][185].
The signal is measured with the electrodes placed on the processus mastoideus on the temporal bone just behind the auricles. The amplitude of the recorded signal reaches about 0.5 µV [27]. Waveforms of the AEP can be represented in three levels of latency, i.e., early, middle and late response, which are distinguished by the different denominations, see Figure 6 [181,186,187]. The correlation between the click and the obtained EEG signal determines the sensitivity of the central nervous system to the played sound, which can be diagnosed as pathological when the recorded signal does not occur on the specific frequency [182]. The AEP can be applied for the purpose of brain death estimation or to check the level of anesthetics [186,188].

Visual EP
For the examination of the visual EP (VEP) (see Figure 8), chessboard-reversal stimuli are usually used [189,190]. The subject observes a computer monitor (or special glasses) displaying a chessboard of black and white boxes [189][190][191]. The boxes swap their colors at periodic intervals (white boxes change to black and vice versa) with a frequency of 1 Hz. The VEP stimuli can be repeated only 100 times, because a large number of neurons is involved in the reaction and the activity of the visual analyzer is well localizable [189,192].
The sensing takes place on the occipital cerebral lobe, where the visual analyzer is located. The frequency range of the signal is narrower than the other modalities, up to 100 Hz. The signal amplitude gains a voltage of 5-10 µV. A reaction to the stimulus comes after 100 ms [27,182].

Event-Related Potentials (ERPs)
The ERPs represent a brain examination very closely related to the EPs examination. They investigate the response of the central nervous system to psycho-physiological events when detecting the EEG resulting from a combination of various stimuli [193][194][195].
The stimuli emulate a certain environment, for example smell, electric currents, or muscle stimuli [194]. The most common stimuli for the ERPs are visual patterns and light, similar to the VEPs; the difference lies in combining the basic stimuli usually used in the EPs, so that the brain's response to complicated tasks is tested by arranging the stimuli according to specific archetypes [195].
The ERP signal is very small in amplitude, so repetitive measurements are commonly performed and the signals are averaged, which increases the SNR [182].
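The effect of such averaging can be sketched in a few lines: with N repetitions of the same time-locked response buried in uncorrelated noise, the residual noise amplitude falls roughly as 1/sqrt(N). The component amplitude and noise level below are illustrative values, not measured ones.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000
t = np.arange(0, 0.5, 1 / fs)
erp = 5e-6 * np.exp(-((t - 0.3) / 0.05) ** 2)  # idealized 5 µV component at 300 ms

def averaged_response(n_trials, noise_sd=20e-6):
    """Average n time-locked trials buried in background 'EEG' noise."""
    trials = erp + rng.normal(0.0, noise_sd, (n_trials, t.size))
    return trials.mean(axis=0)

resid_1 = np.std(averaged_response(1) - erp)      # single trial: ~20 µV noise left
resid_100 = np.std(averaged_response(100) - erp)  # 100 trials: ~2 µV noise left
```

Going from 1 to 100 trials reduces the residual noise by roughly a factor of ten, which is why EP protocols repeat the stimulus so many times.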

Clinical Applications
The SEP is used in diagnostics of neuropathy, multiple sclerosis, and other pathological conditions caused by nerve demyelination. This examination can also be applied for evaluation of the depth of coma or anesthesia and to prognosis assessment [177,186,188].
The AEP enables the detection of disorders of, for example, the cochlea or the acoustic nerve, which are expressed as a lower amplitude of the response. The examination enables detection of both mechanical and neurological hearing damage. Delayed responses may point to lesions of the brain stem. Certain pathological auditory conditions can be treated with various surgical methods [196].
The VEP examination is applied in the diagnostics of multiple sclerosis when the demyelination of the optic nerve occurs and can be used also for the determination of the difference between right and left vision [197][198][199].
The measurements of the ERP are often used to investigate the neurophysiologic correlation between factual knowledge, awareness, and attention, and they can also be used to identify specific components or patterns in the myopotentials [26,27,182].

EP Processing Methods
Because the EP is a signal measured in the same way as the EEG signal, similar artifacts and interference can affect it, such as background noise, movement of the patient or electrodes, and other electrical activity of the patient's body (cardiac and ocular functions, breathing, or myopotentials) [200,201], as described in more detail in Section 2. In addition, the EP signal is usually contaminated with the spontaneous EEG activity of the brain, whose amplitude is several times higher. When measuring the SEPs, a stimulus artifact can be observed in the signal as a reaction to the electrical impulse stimulating the peripheral nerves. A similar problem arises when stimulating a cochlear implant in the AEPs [94,202,203].
Many publications dealing with the measurement of EP signals focus on stimulus artifact removal [204][205][206]. Eliminating this artifact is more challenging than simple filtering of the analyzed signal, because the frequency spectra of the EP signal and the applied stimulus often overlap, so filtering out the stimulus frequencies can distort the desired EPs. The so-called sample-and-hold method (see [207][208][209][210][211][212][213]) was the most widespread method for over 20 years: the circuit switches to the "hold" mode immediately prior to stimulation, so that the recording amplifier is prevented from detecting the stimulus artifact. After that, the circuit moves back to the "sample" mode and passes the signal to the amplifier. This technique is useful when the artifact is clearly separated from the signal in time. However, the estimation of the "hold" mode duration plays a significant role: if it is too long, the desired signal can be completely removed from the recording; if too short, a residual artifact remains in the signal and affects it negatively. Thus, the sample-and-hold method was suitable only for studies using low-rate electrical stimulation and investigating long-latency responses, and for other types of examinations more sophisticated methods should be used. With the conversion of the signal from analog to digital form and the associated development of modern software techniques, many signal processing methods have been deeply investigated.
For high stimulation rates (200-5000 pulses/s), Heffer et al. [214] used a sample-and-interpolation technique to remove stimulus artifact events with minimal distortion of the desired action potentials [214,215]. As a pre-processing step, an HP filter was applied to remove the DC offset and baseline drift below 5 Hz. The method relied on prior knowledge, or a previously conducted measurement, of the maximum stimulus duration in the recorded data, followed by identification of stimulus artifact events either via computer-controlled stimulation or via amplitude threshold crossing, where the threshold is the minimal value of the stimulus current needed to induce a neural response. The artifacts were then removed by determining a sample point prior to the stimulus artifact and a sample point following its end, drawing an interpolation straight line between these two points, and replacing the original samples with the interpolated values. The method successfully eliminated high-rate stimuli whose amplitude was several times higher than the EP signal, even when the signal was practically lost in the interference.
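A simplified numpy version of such a sample-and-interpolate step is sketched below; the threshold-based detection and the straight-line replacement follow the idea described above, while the function name and the numbers in the demo are arbitrary illustrative choices.

```python
import numpy as np

def blank_and_interpolate(sig, threshold, max_len=50):
    """Replace supra-threshold stimulus-artifact segments with a straight
    line drawn between the last clean sample before and the first clean
    sample after each artifact."""
    out = sig.astype(float).copy()
    i, n = 1, len(sig)
    while i < n - 1:
        if abs(out[i]) > threshold:
            start = i - 1                       # last clean sample before artifact
            j = i
            while j < n - 1 and j - i < max_len and abs(out[j]) > threshold:
                j += 1                          # j ends on the first clean sample
            out[start:j + 1] = np.linspace(out[start], out[j], j - start + 1)
            i = j
        i += 1
    return out

# demo: small "neural" signal with a large stimulus artifact at samples 100-104
fs = 1000
t = np.arange(0, 0.2, 1 / fs)
clean = 2.0 * np.sin(2 * np.pi * 25 * t)
contaminated = clean.copy()
contaminated[100:105] += 50.0                   # stimulus artifact
restored = blank_and_interpolate(contaminated, threshold=10.0)
```

The samples outside the detected artifact window are left untouched, which is the main advantage over blanket filtering when the spectra overlap.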
Another area of interest in the processing of the EPs is the extraction of the desired signal from the background brain activity (the spontaneous EEG), which enables obtaining a single-trial response instead of averaging a large number of responses [216,217]. Because the frequency ranges of the EPs and the background EEG overlap, more sophisticated methods than classic filtering are advisable; examples include the Wiener filter or adaptive algorithms (see [218][219][220][221][222][223]). However, with the development of computing technology, other methods could overcome these techniques, such as the WT, the ICA, or the Principal Component Analysis (PCA) [224]. A summary of these techniques, their properties, and their efficiency is given in Table 2.

Wavelet Transform
Quiroga et al. [225,226] dealt with denoising a single trial of the EP recording to increase the diagnostic information, which can lie in the variations of responses to stimuli. In this study, the extraction of single-trial VEPs and AEPs from the background EEG was performed with the WT method using quadratic biorthogonal B-splines as the mother functions. The filtering procedure was based on the wavelet decomposition of the single trial at five levels, followed by identification of the components uncorrelated with the average EP signal and setting them to zero. The inverse transform then yielded the denoised single-trial response. In another study, Quiroga and Garcia [227] proposed WT denoising based on the decomposition of the average ERP signal, in which the coefficients correlated with the ERP signal were identified and the remaining ones were simply set to zero. Each single trial was denoised using the obtained coefficients and then reconstructed. This procedure achieved lower RMSE values in comparison with the previously mentioned Wiener filter, which is also frequently used for EP signal denoising.
Ahmadi et al. [228] proposed an automatic denoising implementation for visualization of single-trial ERPs, based on the WT with automatic selection of wavelet coefficients using the inter- and intra-scale correlation of neighboring wavelet coefficients and their deviation from the signal's baseline. For the denoising of the single-trial ERPs, the coefficients were obtained as follows: (1) wavelet decomposition of the average ERPs, and (2) hard thresholding and the so-called zero-trees procedure, in which wavelet coefficients insignificant with respect to the given threshold were deleted (see [229]). The method shows significant improvement in the identification of the amplitude and latency of the evoked responses, and a lower RMSE in comparison with the original signal and other threshold-based WT procedures.
Wang et al. [230] proposed WT denoising of the EPs using the Daubechies family of wavelets and soft thresholding. The results were compared with filtering using the Wiener filter and adaptive algorithms (RLS, LMS), with the reference signal calculated as the ensemble average of all trials except the one being considered. The WT-based approach reached significantly higher SNR values and significantly lower RMSE values than the other filtering methods, and the WT was found to be more effective for EP extraction.
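The general recipe shared by these WT denoising studies (decompose, threshold the detail coefficients, reconstruct) can be sketched with a plain numpy Haar transform. The cited works used Daubechies or biorthogonal wavelets and more refined coefficient selection, so the following is only a minimal illustration using the universal threshold.

```python
import numpy as np

def haar_step(x):
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def haar_inv(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft_threshold(c, thr):
    return np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)

def wt_denoise(sig, levels=4):
    """Multi-level Haar decomposition, soft thresholding of the detail
    coefficients with the universal threshold, then reconstruction."""
    a, details = sig, []
    for _ in range(levels):
        a, d = haar_step(a)
        details.append(d)
    sigma = np.median(np.abs(details[0])) / 0.6745   # noise level estimate
    thr = sigma * np.sqrt(2 * np.log(sig.size))      # universal threshold
    details = [soft_threshold(d, thr) for d in details]
    for d in reversed(details):
        a = haar_inv(a, d)
    return a

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 512, endpoint=False)
clean = np.sin(2 * np.pi * 2 * t)
noisy = clean + 0.5 * rng.standard_normal(t.size)
denoised = wt_denoise(noisy)
```

For a smooth signal in white noise, most noise energy lands in the detail coefficients and is suppressed, while the approximation carries the signal through.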

Independent Component Analysis
Iyer et al. [231] dealt with denoising of the AEPs using iterative ICA. The ICA approach is based on the idea that the activity resulting from an experimental stimulus is independent of the neuro-physiological artifacts and the background EEG. The advantages of the ICA in EP denoising are the extraction of individual components and the estimation of these components in single-trial signals. The denoising of a single trial includes computing the correlation between the average EP and the independent components obtained by applying ICA on blocks of 10 single trials. The independent components correlated less than a defined threshold are set to zero. According to this study, the iterative ICA technique provided better results than the WT in the estimation of single-trial responses and the elimination of background brain activity, thus distinguishing the important components of the EP.
Zouridakis et al. [232] also proposed ICA-based denoising of the EPs in comparison with the WT. The iterative process allowed the components of interest to be distinguished better: peaks became sharper, while the data outside the region of interest became flatter. Also, peaks occurred at a nearly consistent latency with some jitter, whereas filtering with the WT yields a wide variation in the latency of the components of interest.
Lee et al. [233] presented the ICA method for the processing of the VEP signals. The filtered EEG data (in the range of 0.1-50 Hz) was decomposed into independent components with the FastICA algorithm, and the correlation between spatial maps of independent components and predetermined spatial template was calculated to select the components for data reconstruction. The method was efficient for the elimination of the ongoing EEG signal and related artifacts.
In [234], a solution for EP analysis was presented in which the Independent Component Analysis (ICA) was combined with fuzzy clustering; this can be an interesting direction for further investigations.

Principal Component Analysis
Principal Component Analysis (PCA) is a very popular signal processing method based on a similar principle as the ICA, but here the signal is decomposed into linearly uncorrelated components (not statistically independent ones, as in the case of the ICA).
Palaniappan et al. [235] proposed denoising the VEP signal with the PCA technique, which is useful for retaining the most important components of the signal while rejecting both noise and the background EEG. After computing the principal components, those with eigenvalues greater than 1.0 were considered part of the VEP subspace, while the rest were marked as noise. The reconstructed signal was then filtered with a Butterworth band-pass filter of 30-50 Hz, and the resulting data were normalized. The authors investigated the effect of this processing step on the outcome of the VEP feature extraction and reported that the PCA together with the normalization improved the classification performance up to 96.50% (the lowest improvement was 66.12%). Also, the PCA processing helped to reduce the computational complexity and classification time of the algorithms.
A two-level PCA method was proposed in [236]. The first level consisted of decomposition of the matrix of the VEP channels. This step removed noise from the signal, which was random, in contrast to the highly correlated brain signal in individual channels. The principal components with eigenvalues higher than 1.0 were again used for the reconstruction. As the second PCA level, the resulting denoised signal was decomposed across single trials (not across the channels of a trial, as previously) to remove the background EEG signal.
Mowla et al. [237] proposed an iterative PCA algorithm for EP estimation. The PCA was applied to each group of 10 randomly assigned single trials. The correlation between the principal components and the average EP was computed, and if its value lay below the defined threshold, the component was considered to contain mainly the background EEG and was replaced with zeros to eliminate the noise. The percentage of correct latency for the iterative PCA reached 97%, while for other methods (the iterative ICA and canonical correlation analysis) it was around 70%.
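The core of such a correlation-gated PCA step can be sketched as follows; the correlation threshold, trial count, and synthetic data are illustrative assumptions rather than the exact settings of the cited study.

```python
import numpy as np

def pca_denoise(trials, avg_ep, corr_thr=0.3):
    """Zero the principal components that correlate weakly with the
    average EP (treated as background EEG), then reconstruct the trials."""
    mean = trials.mean(axis=0)
    X = trials - mean
    U, S, Vt = np.linalg.svd(X, full_matrices=False)  # rows of Vt = components
    keep = np.array([abs(np.corrcoef(v, avg_ep)[0, 1]) >= corr_thr for v in Vt])
    return U @ np.diag(np.where(keep, S, 0.0)) @ Vt + mean

# demo: 10 trials = EP with variable gain + background "EEG" noise
rng = np.random.default_rng(3)
n_samples = 200
t = np.linspace(0, 0.4, n_samples)
ep = np.exp(-((t - 0.2) / 0.03) ** 2)                # idealized EP shape
gains = 1.0 + 0.3 * rng.standard_normal(10)
noisy = gains[:, None] * ep + 0.5 * rng.standard_normal((10, n_samples))
denoised = pca_denoise(noisy, noisy.mean(axis=0))
```

Components uncorrelated with the average EP carry mostly noise, so zeroing them lowers the per-trial error while keeping trial-to-trial EP variability that the average alone would discard.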

Hybrid Methods
Hu et al. [238] proposed a method for processing laser-evoked EP signals using a combination of the ICA and the WT, with the main aim of reaching a higher SNR. As a first step, the recorded signal was filtered with a band-pass (BP) filter between 1 and 30 Hz, and the artifacts caused by eye movements and blinking were removed using the ICA. After that, the low-SNR components were detected, again using ICA, and their SNR was improved using the WT.
Zou et al. [239] proposed a new approach for reducing the number of trials required for efficient extraction of the ERPs by combining the WT and the PCA methods. First, the SNR of the ERPs was improved using the WT, so that the principal components could be extracted. Then, the selected principal components were used to reconstruct a denoised signal. The obtained results showed higher SNR and lower RMSE values than either method used separately.

Electrocorticography
If a patient with epilepsy is being considered for resection surgery, it is often necessary to localize the focus (source) of the epileptic brain activity with a resolution greater than that provided by classic scalp-based, surface EEG [240]. This is because the cerebrospinal fluid, skull, and scalp confound the electrical potentials recorded with the background EEG. In such cases, neurosurgeons typically implant strips and grids of electrodes (or depth electrodes, which penetrate the study area) under the dura mater, through either a craniotomy or a burr hole. The recording of these signals is referred to as electrocorticography (ECoG), subdural EEG (sdEEG), or intracranial EEG (icEEG) [5]. The signal recorded with the ECoG has a higher voltage than data recorded from the surface EEG [240]. Moreover, low-voltage, high-frequency components, which cannot be seen in signals recorded with the classic scalp EEG, are clearly visible in ECoG recordings. Furthermore, smaller electrodes, which cover a smaller parcel of the brain surface, enable recording of even lower-voltage, faster components of the brain activity. Some clinical sites record from penetrating micro-electrodes, which are used in case the tissue around an implanted electrode is damaged: such a penetrating electrode can be "moved", allowing recording from areas with healthy tissue [5]. The ECoG is performed mainly on patients who will certainly undergo a follow-up resection procedure [241,242]. Figure 9 shows an example of the ECoG signal.

ECoG Recordings
The ECoG signals consist of synchronized postsynaptic potentials (local field potentials) recorded directly from the exposed cortical surface, generated mainly by the cortical pyramidal cells. These potentials must overcome several layers of the cortex, cerebrospinal fluid (CSF), pia mater, and arachnoid mater before reaching the subdural recording electrodes located just below the dura mater. To reach electrodes placed directly on the scalp (conventional electroencephalography, EEG), the electrical signals must additionally be conducted through the skull, where the potentials decrease rapidly due to the low conductivity of the bone. The lack of such "obstacles" makes the spatial resolution of the ECoG much higher than that obtained from the surface EEG, which is a critical imaging advantage. The ECoG offers a time resolution of approximately 5 ms and a spatial resolution of 1 cm [5,20,240,243,244].
The local field potential obtained from a depth electrode provides a measure of the neural population in a sphere with a radius of 0.5-3 mm around the tip of the electrode. At a sufficiently high sampling frequency (more than 10 kHz), it is possible to measure action potentials. In this case, the spatial resolution of the individual neurons and the field of view of an individual electrode oscillate around 0.05-0.35 mm [5,240,245,246].

Clinical Applications
The ECoG is used mainly to locate the epileptogenic zones during preliminary planning, mapping cortical functions, and prediction of the success of epileptic surgical resection. The ECoG, despite its invasiveness, has several advantages over alternative non-invasive diagnostic methods [242]:
• Flexible placement of recording and stimulation electrodes;
• It can be performed at any stage before, during, and after surgery;
• It allows direct electrical stimulation of the brain and identification of critical areas of the cortex, which must be avoided during surgery;
• It provides greater accuracy and sensitivity than the scalp EEG recordings, as the spatial resolution is higher and the signal-to-noise ratio is better due to closer proximity to the neural activity.
Limitations of the ECoG [247]:
• Limited sampling time: recording may be impossible in some cases;
• Electrode placement is limited by the area of the exposed cortex and the time of surgery, which results in a limited field of view and the occurrence of sampling errors;
• The recording is influenced by anesthetics, analgesics, and the surgery itself.

ECoG Processing Methods
Because the ECoG is a signal measured from the electrical activity of the brain (similar to the EEG), although it requires surgical intervention and is therefore considered invasive, it is also prone to internal and external artifacts similar to those affecting classic EEG recordings, including background noise, patient movement (the ECoG is performed under general anesthesia, so this artifact is of minor consequence) or electrode movement, and other electrical activity of the patient's body (cardiac and ocular functions, respiration, or myopotentials) [248,249].
The electrical activity recorded from the cortical surface of the brain represents the average of many individual synaptic processes. By reducing the micro-electrode contact size, the spatial resolution of the electrocorticography (ECoG) can be increased. However, as the electrode impedance increases, more significant noise is recorded from the electrode-tissue interface and the environment [247]. Signal interpretation is often improved by post-processing through filtering or the application of various pattern recognition methods [248,250].
Spatial spectral analysis is necessary to derive spatial patterns from the current ECoG recordings to determine the optimal interval between electrodes in arrays, and to design the appropriate spatial filters, in particular for the purpose of information extraction regarding dynamics of the human brain's gamma activity [251].
Deeb [252] analyzed the ECoG and local field potential signals of patients diagnosed with Parkinson's disease (PD) to monitor the effects of the disease on their brain activity. It was assumed that low-beta and high-beta sub-bands were present, ranging between 13-20 Hz and 20-30 Hz, respectively. The authors tested whether this hypothesis applied to all acquired signals in the analyzed data set. The study began with the Fourier transform along with the scale-space detection method, to test whether the signal exhibits the strongest behavior in the beta range. The instantaneous frequencies and amplitudes of the signals in the beta range were then extracted using the empirical wavelet transform (EWT).
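A minimal example of the first step (checking beta sub-band content with the Fourier transform) is given below using Welch's method on a synthetic signal; the EWT stage is beyond a short sketch, and the frequencies and function names here are illustrative assumptions, not the cited study's pipeline.

```python
import numpy as np
from scipy.signal import welch

def band_power(sig, fs, lo, hi):
    """Integrated power spectral density in [lo, hi) Hz (Welch estimate)."""
    f, pxx = welch(sig, fs=fs, nperseg=fs)   # ~1 Hz frequency resolution
    mask = (f >= lo) & (f < hi)
    return np.sum(pxx[mask]) * (f[1] - f[0])

fs = 250
t = np.arange(0, 10, 1 / fs)
# synthetic LFP: strong low-beta (16 Hz) and weaker high-beta (25 Hz) rhythms
sig = 2.0 * np.sin(2 * np.pi * 16 * t) + 1.0 * np.sin(2 * np.pi * 25 * t)
low_beta = band_power(sig, fs, 13, 20)
high_beta = band_power(sig, fs, 20, 30)
```

Comparing the integrated power of the 13-20 Hz and 20-30 Hz sub-bands is a straightforward way to test which beta sub-band dominates a given recording.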
Chen et al. [253] proposed a method for extraction and classification of the preictal and ictal ECoG, based on mutual information together with a support vector machine, which offers not only high accuracy but also high speed. At the end of the testing, two methods based on the EMD and wavelets were applied as controls.
Hossain et al. [254] investigated phase relationships between the ECoG signals via the Hilbert-Huang transform (HHT) in combination with the EMD. They performed the spatial and temporal filtering of the initial signals, followed by the tuning of the EMD parameters.
Seo et al. [255] proposed a new method based on the dynamic mode decomposition (DMD) to find a significant contrast between the ictal and the interictal patterns in the epileptic EEG data. The DMD-extracted features clearly capture the phase transition of a specific frequency between the channels corresponding with the ictal state and the channel corresponding with the interictal state, such as direct current shift and high-frequency oscillation (HFO). Their study was of a case-study character as it was performed on one patient only, where the ECoG classification tests were carried out at different time intervals, but it showed that the captured phenomenon is a unique pattern, which occurs in the patient's ictal onset zone.
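The DMD step itself can be written compactly. The sketch below is the standard exact-DMD algorithm applied to a synthetic two-rhythm, multi-channel recording; it is not the pipeline of [255], and the rank, channel count, and frequencies are illustrative.

```python
import numpy as np

def dmd(X, r):
    """Exact DMD: fit X[:, k+1] ~ A @ X[:, k] and return the eigenvalues
    of the rank-r reduced operator (spatial modes omitted for brevity)."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, S, Vt = np.linalg.svd(X1, full_matrices=False)
    U, S, Vt = U[:, :r], S[:r], Vt[:r]
    A_tilde = U.conj().T @ X2 @ Vt.conj().T / S   # r x r reduced operator
    return np.linalg.eigvals(A_tilde)

fs = 100
t = np.arange(0, 2, 1 / fs)
# two hidden rhythms (5 Hz and 12 Hz) mixed into 6 "channels"
sources = np.vstack([np.sin(2 * np.pi * 5 * t), np.cos(2 * np.pi * 5 * t),
                     np.sin(2 * np.pi * 12 * t), np.cos(2 * np.pi * 12 * t)])
rng = np.random.default_rng(7)
X = rng.standard_normal((6, 4)) @ sources
eigs = dmd(X, r=4)
freqs = np.abs(np.angle(eigs)) * fs / (2 * np.pi)   # oscillation frequencies, Hz
```

The angles of the DMD eigenvalues recover the oscillation frequencies of the hidden rhythms, which is why DMD features can separate ictal from interictal dynamics.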
Ince et al. [256] described in their study an adaptive approach for the classification of multi-channel ECoG recordings applied for the purpose of the BCI (the invasive one, see [5]). In particular, they extracted time-frequency plane features from the multi-channel ECoG signals using a dual-tree undecimated wavelet packet transform. The dual-tree wavelet packet transform generated a redundant dictionary of functions with different time-frequency resolutions. Rather than evaluating the discriminant performance of each individual electrode or candidate feature, the proposed approach implemented a wrapper strategy that selects a subset of features from the redundant structured dictionary by evaluating their combined classification performance. This allowed the algorithm to optimally select informative features originating from different cortical regions and/or time-frequency locations. Table 3 presents a detailed summary of the most common ECoG signal processing methods.
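The wrapper principle described above can be illustrated with a greedy forward-selection sketch: candidate features are judged by the cross-validated accuracy of the growing subset, not by individual scores. This generic example (on random toy features, not a wavelet packet dictionary) is our own illustration, not the algorithm of [256].

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def greedy_wrapper(X, y, n_select, cv=5):
    """Forward wrapper selection: repeatedly add the candidate feature that
    maximizes cross-validated accuracy of the current combination
    (in contrast to filter methods, which score features in isolation)."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(n_select):
        scores = [(cross_val_score(SVC(), X[:, selected + [j]], y, cv=cv).mean(), j)
                  for j in remaining]
        best_score, best_j = max(scores)
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 120)
X = rng.normal(size=(120, 12))
X[:, 3] += y * 2.0      # feature 3 is informative
X[:, 7] += y * 2.0      # feature 7 is informative
sel = greedy_wrapper(X, y, n_select=2)
print(sel)  # typically picks features 3 and 7
```

The combinatorial evaluation is what lets a wrapper exploit complementary features from different channels or time-frequency locations, at a higher computational cost than per-feature filters.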
Finally, the last method worth mentioning in this section is stereotactic electroencephalography (sEEG), which is an invasive method applied for acquiring brain signals. It uses penetrating depth electrodes for electrophysiological brain activity measurement and is most commonly used for the identification of epileptogenic zones. The sEEG-implanted electrodes provide a sparse sampling from deeper brain structures, which cannot be captured by, e.g., the ECoG. It is also important to mention that the cortical sampling of the sEEG is much sparser than that of the ECoG; however, it leads to fewer surgical complications than the electrocortical recordings [257][258][259][260]. The sEEG method involves stereotactic orthogonal implantation of depth electrodes (11 on average) [258]. Unfortunately, despite its advantages, it has received very little interest in BCI-related studies [257].

Discussion
The methods for brain signal processing described in this paper are a subjective choice of the authors. This field of science is currently extremely popular and is the subject of interest of a multitude of scientific groups from around the world; it was therefore impossible to describe absolutely all the methods currently used for the analysis of these signals, and the authors decided to focus on those they consider the most effective and/or popular. These methods, introduced and summarized in the previous sections, still have some drawbacks and limitations. Furthermore, many challenges remain in the field of brain-related signal processing. This section lists and discusses these challenges and provides some ideas for future directions and research possibilities.

Current Challenges
The analysis of brain signals is an extremely difficult task, mainly due to the characteristics of these signals, including their non-stationarity, susceptibility to various types of artifacts and external and internal disturbances, and their low amplitude and frequency range [5,15,16,20]. Moreover, their analysis is challenging due to their nondeterministic nature and the absence of specific features such as those present in ECG signals [5,6]. Although some of the presented methods can suppress considerable amounts and types of noise, they are mostly designed for a given purpose and are thus not able to adapt to various environments and conditions. To develop such a robust denoising system, one has to have access to a suitable data set on which to train and test the algorithms.
As mentioned in [261], there is an emerging need to create a standardized international database, generated using a uniform methodology and data analysis. This would enable the examination of individual differences, multi-modal interrelationships, and the specificity, or generalizability, of the findings. For practical purposes, a very large collection of links to open-source EEG data can be found in a GitHub repository [262].
Due to the Open Science movement, a number of journals have now begun to mandate that authors provide access to the raw data used in the experiments described in their papers (e.g., see [263]). Moreover, many EEG databases are publicly available, for example, on Physionet.org [264,265]. When choosing a suitable database for their tests, researchers should keep in mind that the recordings differ significantly depending on the purpose for which the data set was created.
To test filtering methods, there are databases contaminated with motion artifacts [266]. There are also plenty of publicly available databases for the assessment of neurological status during sleep [267] or in a specific condition, such as anesthesia [268]. Special attention is also paid to epilepsy research; therefore, there are also specific databases containing EEG signals recorded during epileptic seizures in both adult [269,270] and pediatric patients [271].
Furthermore, some databases include recordings obtained while the subject was exposed to certain conditions designed to test his/her cognitive functions or reactions. For example, the subjects could be recorded while performing different motor/imagery tasks [272], before and during the performance of mental arithmetic tasks [273], upon rapid presentation of images through the Rapid Serial Visual Presentation (RSVP) protocol [274], or under the stimulation of flickering lights [275].

Future of Brain Signals' Analysis
The future of research and innovation development in the area of brain signal processing methods is driven by the following aspects [276]: the increasing expectations and new requirements coming from clinical practice and other users, and the rapid progress in data analysis techniques, such as, among others, machine learning, data fusion, and complex network analysis, which enable the development of more and more potential applications for neuroscientific purposes.
The below-mentioned categories appear to be the most promising:

1.
Big Data: Another future direction is related to the big data area, as big data provide much of the knowledge and data necessary for advanced methods, such as neural networks and deep learning, to extract features representing brain functions, mechanisms, or even various disorders or diseases [5,20,277]. Data integration in this field is a very challenging task, as it is necessary for neuroscientists to measure, share, and integrate data [278]. Firstly, it is necessary to have a unified data set with the same category of subjects, measuring techniques, and protocols. It is also important to mention the development of other measurement techniques that provide brain data of better quality; however, techniques such as functional near-infrared spectroscopy (fNIRS) and functional magnetic resonance imaging (fMRI) are expensive and more difficult to operate and analyze. On the other hand, EEG monitoring provides advantages such as non-invasiveness, ease of operation, and cost-efficiency [20,279], which makes the EEG particularly suitable for this task [276,280].

2.
Machine learning (ML): ML- and pattern-recognition-based methods have been widely applied in neurological signal analysis. They provide new approaches to decoding and enable the characterization of task-related brain states and their extraction from non-informative high-dimensional EEG data. There has been growing interest in the use of ML techniques to analyze the EEG [281][282][283]. Multiple studies have provided evidence that ML enables the efficient extraction of meaningful information even from noisy or contaminated data. The emerging methods of ML, such as transfer learning, reinforcement learning, and ensemble learning, have gradually been adopted in neuroscience. For example, some new deep neural networks, such as generative adversarial networks and spiking neural networks, have already been applied as powerful tools for EEG decoding, and transfer learning is often adopted by researchers in the area of BCI to increase the accuracy of cross-individual prediction. Also, BCIs have been widely used to predict behavioral variables and psycho-physiological states from neurological data (particularly the EEG) [20,280].

3.
Multi-modality: Multi-modal neuroimaging can provide a more complementary picture of the brain and its interaction with other organs. There are many ways to create such a multi-modal system [284]. One of the most commonly applied methods is EEG monitoring, which can be combined with other measurement methods [28,[285][286][287][288][289][290][291]:
• brain imaging techniques, such as MRI and fNIRS;
• biological signals, such as ECG and EMG;
• brain stimulation techniques, such as trans-cranial magnetic stimulation (TMS) and trans-cranial direct current stimulation (tDCS).
Nevertheless, multi-modal neurological imaging and/or monitoring is associated with specific signal processing and data analysis challenges, such as, inter alia [20,[292][293][294][295][296][297]:
• the EEG may pick up artifacts from other biological signals (such as EMG) or be distorted by the noise produced by accompanying devices for imaging (such as MRI) or stimulation (such as TMS); therefore, signal processing and noise removal techniques play a particularly important role in this field;
• in terms of data analysis, fusing different neurological modalities to provide complementary information poses a great challenge; data-driven multivariate methods and machine learning methods can play a role in the analysis of multi-modal brain imaging data.

4.
Real-time implementation: real-time processing is highly desired in practical applications such as brain-computer interfaces and neurofeedback [298,299]. Real-time applications are very challenging, mostly due to the nature of these signals; however, some promising studies can be found in [291,298,300].
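A basic building block of real-time processing is causal, block-wise filtering that carries its internal state between incoming blocks, so no future samples are needed. The following generic sketch (not taken from the cited studies) shows that streaming a signal through a stateful IIR filter reproduces the offline result exactly.

```python
import numpy as np
from scipy.signal import butter, lfilter

# causal IIR band-pass (8-30 Hz); the filter state zi is carried across
# blocks, so samples can be processed as they arrive
fs = 250
b, a = butter(4, [8 / (fs / 2), 30 / (fs / 2)], btype="band")

def process_block(block, zi):
    """Filter one incoming block, returning the output and updated state."""
    y, zi = lfilter(b, a, block, zi=zi)
    return y, zi

t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 12 * t) + np.sin(2 * np.pi * 50 * t)

# streaming the signal in 50-sample (200 ms) blocks
state = np.zeros(max(len(a), len(b)) - 1)   # zero initial conditions
stream_out = []
for start in range(0, x.size, 50):
    y, state = process_block(x[start:start + 50], state)
    stream_out.append(y)
stream_out = np.concatenate(stream_out)

offline = lfilter(b, a, x)                  # one-shot offline pass
print(np.allclose(stream_out, offline))     # True: block-wise == offline
```

Note that zero-phase filtering (filtfilt), common in offline EEG analysis, is non-causal and cannot be used this way; real-time systems must accept the group delay of a causal filter.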

Conclusions
This paper presented a summary and classification of the signals generated by human brain activity, i.e., EEG, EPs, and ECoG. This work is a loose continuation of the Part I article dealing with cardiac signals [301]. We summarized the latest articles published within the last 5 years and investigated key information about individual brain signals and the associated advanced processing methods, minimizing the number of conference papers and focusing on articles published in prestigious journals.
Thanks to the knowledge gained over the years and the combined progress in both technological and medical solutions, it is possible to monitor the activity of selected parts of the human body using appropriate equipment, which allows relevant information to be obtained from biomedical signals [302,303]. Future research lies in discovering additional information in those signals that has so far remained hidden. This will be possible thanks to the rapid development of acquisition technology, signal processing methods, and analysis.