## 1. Introduction

Rapidly changing neural oscillations are fundamental features of a working central nervous system. In a single neuron, these oscillations appear as rhythmic changes in either spiking behavior or subthreshold membrane potential. Large ensembles of such neurons can generate synchronous activity that produces rhythmic oscillations in the local field potential (LFP). These oscillations reflect the excitability of neurons, and their key role is to support brain communication across a huge network of neurons via synchronous excitation [1]. At certain frequencies, oscillations are initiated by specific tasks, the outcome of which determines their amplitude or power [2]. Alpha oscillations have generally been considered a ubiquitous characteristic of neural activity; however, recent observations have demonstrated their roles in active inhibition [3,4] and attention [5]. Although alpha frequencies are slower and tend to distribute frontally in older subjects [6], the largest alpha amplitude is observed at the scalp over the occipital and parietal areas of the brain [7]. Alpha oscillations are also evident in the form of mu oscillations over motor cortices [8]. These studies suggest that alpha oscillations play specific roles in brain information processing associated with motor, perceptual, and cognitive functions. However, the precise role of these oscillations is yet to be revealed.

To understand the role of these oscillations in motor, perceptual, and cognitive processes, accurate estimation of the instantaneous phase and amplitude is required. In most studies, instantaneous phase relationships are categorized post hoc because the analysis must be performed in the time-frequency domain. The focus on oscillatory phase does not imply that the amplitude of ongoing oscillations has no impact; in fact, the phase of an oscillatory signal can only be reliably computed when the signal has a significant amplitude, in both the mathematical and biophysical sense. Oscillatory amplitude in various frequency bands bears significant relations to sensory perception and attention [5,7,9,10]. The instantaneous frequency of electroencephalography (EEG) oscillations has also been investigated [11,12,13,14]. However, the ongoing oscillatory phase was largely overlooked until recent studies demonstrated the effects of the instantaneous phase on perceptual performance [15,16,17].

To reveal the functional role of the phase of neural oscillations, real-time prediction of the instantaneous phase is important. Pavlides in 1988 [18] and Holscher in 1997 [19] built analog circuits that triggered stimulation at the peak, trough, and zero crossing of the hippocampal LFP, based on the assumption of a sufficiently narrow bandwidth. Hyman [20] employed a dual-window discrimination method for peak detection in theta oscillations. These approaches required manual calibration for each specific setting; thus, real-time operation was not possible. More recently, Chen [21] used autoregressive (AR) modelling to accurately estimate the instantaneous phase and frequency of an intracranial EEG theta oscillation and to deliver phase-locked stimulation in real time. That study used a genetic algorithm to optimize several parameters before deploying the algorithm. Although the optimization procedure was a major limitation, the system provided a benchmark for comparing oscillations in other frequency bands. Alternatively, wavelet ridge extraction can be used to estimate the instantaneous phase [22]. This method is robust in the presence of multiple simultaneous oscillations; however, implementing it in real time may be computationally expensive. Therefore, an adaptive method is needed.

Closed-loop neuroscience has gained significant attention over the past few years with the latest technological advances. Phase information in real-time systems could have applications in brain-computer interfaces [23], closed-loop stimulator devices [24,25,26], and phase-specific electrical stimulation of animal models [18,19]. There is substantial experimental and therapeutic potential in state-dependent brain stimulation delivered while the participant or patient performs a task. We are mainly interested in estimating the current instantaneous phase of the visual alpha oscillation so that, depending on the brain state, we can decide whether oscillatory phase-dependent stimulation should be given. Such a closed-loop, brain-state-dependent stimulation system can be seen as a new brain-computer interface (BCI) approach. Herein, we propose a novel approach for time-series forward prediction, developed from the conventional AR model [21,27] and extended into an adaptive least mean square (LMS)-based AR model. The coefficients of an AR model can be calculated with various algorithms, each serving a different purpose; these methods minimize either the forward prediction error alone or both the forward and backward prediction errors. This study focuses on the forward prediction error and demonstrates that the LMS-based AR model minimizes it because its coefficients are adjusted dynamically, allowing it to outperform the conventional AR model in real time for long prediction lengths. Since adaptive methods can track the coefficients dynamically and allow more accurate prediction, we chose the LMS-based AR model.
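The adaptive scheme described above can be illustrated with a short sketch of an LMS-updated AR forward predictor. The parameter values (model order, step size, prediction horizon) and the normalized-LMS step rule are illustrative assumptions, not the settings used in this study:

```python
import numpy as np

def lms_ar_forecast(signal, order=30, mu=0.5, horizon=32):
    """Adapt AR(order) coefficients sample-by-sample with a
    normalized LMS update, then iterate the model on its own
    outputs to forecast `horizon` future samples."""
    w = np.zeros(order)
    for n in range(order, len(signal)):
        x = signal[n - order:n][::-1]        # newest sample first
        e = signal[n] - w @ x                # forward prediction error
        w += mu * e * x / (1e-12 + x @ x)    # normalized LMS coefficient step
    # forecast by feeding predictions back into the lag buffer
    buf = signal[-order:][::-1].copy()
    forecast = np.empty(horizon)
    for k in range(horizon):
        forecast[k] = w @ buf
        buf = np.concatenate(([forecast[k]], buf[:-1]))
    return forecast
```

Because the coefficients are refreshed on every incoming sample, the predictor tracks slow drifts in the oscillation without refitting the whole model, which is what makes the approach attractive for real-time use.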

## 4. Discussion

Estimation of the phase of EEG rhythms is challenging due to their low signal-to-noise ratio and dynamic nature. In this study, we presented an adaptive method to estimate the phase of alpha oscillations and compared the results with those of the AR model. Our aim was to improve real-time EEG applications that depend on phase estimates. This approach estimates the instantaneous frequency and phase of an EEG segment (three channels: O1, O2, and Oz) and then forecasts the signal forward from those estimates. Two prediction lengths (128 ms and 256 ms) were investigated, and performance was evaluated in terms of the phase-locking value (PLV).
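The PLV used as the performance measure here is the magnitude of the mean unit phasor of the phase difference between two phase time-series. A minimal sketch (the function name is ours):

```python
import numpy as np

def phase_locking_value(phase_a, phase_b):
    """PLV between two phase time-series in radians: 1 means a
    perfectly constant phase relation, values near 0 mean none."""
    dphi = np.asarray(phase_a) - np.asarray(phase_b)
    return float(np.abs(np.mean(np.exp(1j * dphi))))
```

Note that a constant phase offset still yields a PLV of 1; the metric rewards a *consistent* phase relation between prediction and reference, not identical phase values.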

Previously, Zrenner [27] used the AR model [21]. For comparison and consistency, we applied the AR model with the same filtering method, AR model order, EEG data segment length, and prediction length. Additionally, we assessed how the future prediction window affected the performance of both methods. Earlier studies used the Hilbert–Huang transform [45,46] and the complex wavelet transform [47] to extract frequency and phase information from EEG signals. These methods are limited because predicting a future signal is difficult when the analysis must contend with non-stationary data. Methods such as AR modelling assume stationarity over short time periods and are therefore not well suited to closed-loop, real-time applications. Our adaptive method relies on frequent coefficient updates to accommodate the non-stationarity of EEG signals, making it possible to predict the future signal while adjusting to dynamic changes over time.
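For reference, the benchmark fit-and-forecast procedure can be sketched as follows. The AR coefficients are estimated once per segment, here by ordinary least squares (the cited studies used dedicated AR estimators such as Yule–Walker or Burg; least squares stands in for them in this sketch), and the model is then iterated forward:

```python
import numpy as np

def ar_forecast(segment, order=30, horizon=32):
    """Fit AR(order) coefficients to one segment by least squares
    (stationarity assumed within the segment), then iterate the
    model on its own outputs to forecast `horizon` samples."""
    x = np.asarray(segment, dtype=float)
    # each row holds the `order` lagged samples preceding one target
    rows = np.array([x[n - order:n][::-1] for n in range(order, len(x))])
    w, *_ = np.linalg.lstsq(rows, x[order:], rcond=None)
    buf = x[-order:][::-1].copy()
    out = np.empty(horizon)
    for k in range(horizon):
        out[k] = w @ buf
        buf = np.concatenate(([out[k]], buf[:-1]))
    return out
```

The key contrast with the adaptive method is that the coefficients here are frozen once fitted, so any drift in the oscillation during the prediction window goes unmodelled.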

Considering the precise phase dynamics of ongoing oscillations, the use of closed-loop neurostimulation has escalated significantly in the last few years. A prior study [48] used fast Fourier transform- and Hilbert transform-based algorithms for phase-locked stimulation of different EEG bands. The authors proposed a short-window prediction algorithm based on an intermittent protocol with distinct windows for phase extraction and prediction. As in our study, they used the PLV metric for performance evaluation, exceeding 0.6 for alpha band detection. They similarly reported that performance declined as the prediction length increased in both methods. The major limitations of that study were a relatively small sample size and the absence of a demonstration of a complete closed-loop system. A study published in 2018 reported three different phase prediction methods (AR model, Hilbert-based, and zero crossing) [44]. Different performance metrics were applied, including PLV, entropy-based phase synchrony, and degree deviation. That study confirmed that PLV decreases with an increasing time window, consistent with greater alpha fluctuations at longer intervals. Drawbacks of the study include a small sample size (eight participants), the use of only one channel (Oz), and the failure to determine whether the AR or the Hilbert-based method was optimal.

Most studies estimate the phase using standard signal processing methods. Recently, machine learning techniques, specifically deep learning methods, have been implemented in many BCI systems. An eleven-layer convolutional neural network (CNN) model was used to detect schizophrenia [49], achieving high classification accuracies of 81% for subject-based and 98% for non-subject-based testing. Despite these accuracies, the major limitations are the small data pool and the high computational cost of CNNs compared with traditional machine learning algorithms. A CNN was also implemented in an automated detection system for Parkinson's disease, with an accuracy of 88.25% [50]. Another study employed a principal component analysis (PCA)-based CNN for P300 EEG signals [51]; PCA was used to reduce the dimensionality of the signal, and thereby the computational cost, while retaining the features of the original signal. A combination of the continuous wavelet transform and a simplified CNN, named SCNN, was implemented to enhance the recognition rate of motor imagery EEG signals [52]. Compared with a standard CNN, the SCNN shortens the training time and reduces the number of parameters; however, its classification accuracy needs improvement. An interesting study applied deep learning to neuromarketing, using EEG signals for product-based preference detection; although the deep neural network produced good results, a random forest algorithm yielded similar results on the same dataset [53]. A two-level cascaded CNN was proposed to detect stego texts generated by synonym substitution [54]; the proposed steganalysis algorithm showed enhanced prediction performance, with an accuracy of 82.2%. A further study investigated machine-learning methods to extract the instantaneous phase of an EEG signal centered at POz [55]. Similar to our study, the authors performed frequency band optimization based on the individual alpha frequency. To compose the analytic signal, the algorithm was split into two branches: on the first path, the data were only epoched to generate the input signal, and on the second path, an FIR band-pass filter was applied to generate the output signal, followed directly by the Hilbert transform and then epoching as the last step before model evaluation. The filter was optimized on training data, and the instantaneous phase was recovered by applying the Hilbert transform with non-causal filtering to minimize the mean squared error. The main disadvantage of this procedure is the need for preliminary data, collected before the principal experiment, for training. Since real-time phase estimation provides no future information, the trained filters depend on the quality of the signal and its underlying properties; as a result, this method cannot guarantee unbiased phase estimation. Although deep learning techniques are highly efficient, they require copious amounts of training data as well as substantial processing power, which makes them costly to deploy.
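The non-causal reference pipeline discussed above (band-pass filtering followed by the Hilbert transform) can be sketched with standard SciPy calls. A Butterworth filter stands in here for the optimized FIR filter of the cited study, and the filter order and alpha band edges are illustrative choices:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def offline_alpha_phase(eeg, fs=256.0, band=(8.0, 13.0)):
    """Ground-truth phase for offline evaluation: zero-phase
    Butterworth band-pass (filtfilt is non-causal), then the angle
    of the analytic signal from the Hilbert transform."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    narrow = filtfilt(b, a, np.asarray(eeg, dtype=float))  # zero-phase filtering
    return np.angle(hilbert(narrow))                       # phase in radians
```

Because both `filtfilt` and `hilbert` use future samples, this pipeline is suitable only as an offline reference against which causal, real-time predictors are scored.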

Recently, databases comprising abundant, similar time-series have become available for many applications. For such data, traditional univariate time-series forecasting methods leave much of the potential for precise forecasting unexploited. A comprehensive review article sheds light on the use of CNNs, long short-term memory (LSTM) networks, and deep belief networks for financial time-series forecasting [56]. Recurrent neural networks, particularly LSTM networks, have recently proven able to surpass traditional state-of-the-art time-series forecasting methods. LSTM has attracted much attention and has been widely used in various data science domains, such as forecasting petroleum oil production rates [57], wind speed forecasting [58], weather forecasting [59], and speech recognition [60]. Although the accuracy of these methods may decline in the presence of heterogeneous time-series data [61], the possibility of applying such techniques, already proven on the data types mentioned above, warrants further investigation.

The goal of this study was to estimate the EEG signal phase using an adaptive method. In agreement with earlier studies, our results clearly show that for longer prediction intervals, the LMS-based AR model outperforms the AR model. Even for the single channel O1, the LMS-based AR model yields a higher number of samples crossing the significance line. Our proposed method may thus be deployed as an alternative to the AR model when a longer prediction length is required. The LMS-based AR model offers a feasible approach for reproducing the results of previous studies while adapting faster to the underlying EEG signal at lower computational cost. Limitations of the current study include (a) the application of the proposed method to only one EEG rhythm, (b) performance evaluation only in the resting state rather than in behavioral tasks, and (c) the use of a single performance metric (PLV). Further work will involve overcoming these limitations and implementation in a real-time scenario, such as closed-loop stimulation, or exploring new vistas for alpha oscillations.