Decoding Electroencephalography Signal Response by Stacking Ensemble Learning and Adaptive Differential Evolution
Abstract
1. Introduction
2. Description of the Benchmark
3. Methods
3.1. Stacked Generalization
3.2. Models Used in STACK Methodology
- A GP is a collection of random variables, any finite number of which follows a joint Gaussian distribution; it is completely characterized by its mean and covariance (kernel) functions [39]. The method in this paper adopts a GP with a radial basis function kernel, whose sigma value serves as the model's sole hyperparameter.
- SVR involves identifying support vectors (points) near a hyperplane that maximizes the margin, which is determined by the difference between the target value and a threshold [42]. SVR incorporates a kernel, a function used to assess the similarity between two observations, enabling it to handle nonlinear problems. The radial basis function kernel is specifically employed in the model described in this paper. The hyperparameters of this model are the kernel sigma and the cost value.
- LASSO uses a regularization term to improve the model's accuracy, accepting some additional bias in exchange for reduced variance. Additionally, when some predictors are correlated, LASSO selects the best subset of them [40]. The hyperparameter of this model is the regularization value.
- The MLP is a feedforward neural network composed of one input layer (the system's inputs), one or more hidden layers, and one output layer (the system's outputs). In contrast to artificial neural networks without hidden layers, the MLP can solve problems that are not linearly separable [41]. In this paper, the MLP has a single hidden layer, whose activation function is the sigmoid, while the output activation is the identity. With this configuration, the only hyperparameter for this model is the number of hidden units.
- XGBoost is based on a gradient boosting ensemble learning model and uses an additive learning strategy, adding the best individual model to the current forecast model at each boosting iteration. A complexity term in the objective function controls the models' complexity and helps smooth the final learned weights to prevent overfitting. Regularization parameters are used to reduce the estimation variance related to the base learners by shrinking their weights toward zero [38]. The hyperparameters of this model are the number of boosting iterations, the L1 and L2 regularization terms, and the learning rate.
- Cubist is a rule-based model that operates on the regression tree principle [43]. Each path through a regression tree defines a rule, with a linear model associated with the information contained in each leaf. Given all rules, the final prediction is obtained as a linear combination of the rules' outputs. The concepts of committees (several Cubist models aggregated by averaging) and neighborhoods are used to build an accurate model. The hyperparameters of this model are the numbers of committees and neighbors (an illustrative stacking sketch follows this list).
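To make the layered structure concrete, the sketch below wires these base learners to a meta-learner with scikit-learn's `StackingRegressor` (plus the `xgboost` package). It is an illustrative Python sketch rather than the authors' implementation (the study's tooling is R-based; cf. [46,58]): the hyperparameter values are placeholders rather than the JADE-tuned ones, and since Cubist has no scikit-learn counterpart, a ridge meta-learner stands in for it here.

```python
import numpy as np
from sklearn.ensemble import StackingRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.linear_model import Lasso, Ridge
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from xgboost import XGBRegressor

# Layer-0 (base) learners mirroring Section 3.2; the hyperparameter values
# below are placeholders, not the JADE-tuned values reported in the paper.
base_learners = [
    ("lasso", Lasso(alpha=0.9)),
    ("mlp", MLPRegressor(hidden_layer_sizes=(5,), activation="logistic", max_iter=2000)),
    ("gp", GaussianProcessRegressor(kernel=RBF(length_scale=1.0))),
    ("svr", SVR(kernel="rbf", C=1.0)),
    ("xgb", XGBRegressor(n_estimators=150, reg_lambda=0.1, reg_alpha=0.1, learning_rate=0.3)),
]

# Layer-1 meta-learner: the paper uses Cubist (R-only); Ridge stands in here.
stack = StackingRegressor(estimators=base_learners, final_estimator=Ridge(), cv=6)

# Toy data standing in for the EEG features (lags, rolling statistics, ...).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
stack.fit(X, y)
print(stack.predict(X[:5]))
```

The `cv=6` argument makes the meta-learner train on out-of-fold predictions from a six-fold split, mirroring the six-fold cross-validation described in Section 4.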
3.3. Adaptive Differential Evolution with Optional External Archive
- Initialization: The first population is randomly initialized according to $x_{i,d}^{0} = x_d^{\min} + \mathrm{rand}_{i,d}(0,1)\,(x_d^{\max} - x_d^{\min})$, where $x_d^{\min}$ and $x_d^{\max}$ are the lower and upper bounds of dimension $d$ and $\mathrm{rand}_{i,d}(0,1)$ is a uniform random number.
- Mutation: Considering that the mutation strategies of classical DE converge quickly but not always reliably, JADE adopts the DE/current-to-pbest/1 strategy. With $\mathbf{P}$ as the set of current solutions and $\mathbf{A}$ as the archive of inferior (replaced parent) solutions, the mutation vector can be written as $v_i^g = x_i^g + F_i\,(x_{\mathrm{pbest}}^g - x_i^g) + F_i\,(x_{r1}^g - \tilde{x}_{r2}^g)$, where $x_{\mathrm{pbest}}^g$ is randomly chosen among the $100p\%$ best individuals of $\mathbf{P}$, $x_{r1}^g$ is drawn from $\mathbf{P}$, $\tilde{x}_{r2}^g$ is drawn from $\mathbf{P} \cup \mathbf{A}$, and $F_i$ is the mutation factor of individual $i$.
- Crossover: In the crossover step, a binomial operation forms the trial/offspring vector $u_{i,d}^{g}$ for individual $i$ in generation $g$ and dimension $d$, where $u_{i,d}^{g} = v_{i,d}^{g}$ if $\mathrm{rand}(0,1) \le CR_i$ or $d = d_{\mathrm{rand}}$, and $u_{i,d}^{g} = x_{i,d}^{g}$ otherwise; the randomly chosen dimension $d_{\mathrm{rand}}$ guarantees that at least one component is inherited from the mutation vector, and $CR_i$ is the crossover probability of individual $i$.
- Selection: Finally, the new population is obtained by evaluating the trial vectors $u_i^g$ with the cost function $f$. If the offspring's cost $f(u_i^g)$ is better than the parent's cost $f(x_i^g)$, the new element $x_i^{g+1}$ will be the trial vector; otherwise, it remains equal to the parent: $x_i^{g+1} = u_i^g$ if $f(u_i^g) \le f(x_i^g)$, and $x_i^{g+1} = x_i^g$ otherwise (a minimal implementation sketch follows this list).
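To make the four steps concrete, the sketch below implements them in Python: random initialization, DE/current-to-pbest/1 mutation with the external archive, binomial crossover, greedy selection, and the adaptation of the location parameters used to draw $F_i$ (Cauchy) and $CR_i$ (Gaussian). This is a minimal sketch under assumed conventions, not the paper's code: the function name `jade`, the bound repair by clipping, and defaults such as `p=0.1` and `c=0.1` are illustrative choices consistent with [31,32].

```python
import numpy as np

def jade(cost, bounds, pop_size=30, max_gen=100, p=0.1, c=0.1, seed=0):
    """Minimal JADE sketch: DE/current-to-pbest/1 with an external archive."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T      # bounds: [(min, max), ...] per dimension
    dim = lo.size
    # Initialization: uniform random population inside the box constraints.
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([cost(x) for x in pop])
    archive = []                                    # stores replaced (inferior) parents
    mu_f, mu_cr = 0.5, 0.5                          # adaptive location parameters
    for _ in range(max_gen):
        s_f, s_cr = [], []                          # successful F and CR values
        order = np.argsort(fit)                     # indices sorted best-first
        n_pbest = max(1, int(p * pop_size))
        new_pop, new_fit = pop.copy(), fit.copy()
        for i in range(pop_size):
            # Draw F_i from a Cauchy distribution (truncated to (0, 1]) and
            # CR_i from a Gaussian (clipped to [0, 1]).
            f_i = 0.0
            while f_i <= 0.0:
                f_i = mu_f + 0.1 * np.tan(np.pi * (rng.random() - 0.5))
            f_i = min(f_i, 1.0)
            cr_i = float(np.clip(rng.normal(mu_cr, 0.1), 0.0, 1.0))
            # Mutation: current-to-pbest/1, with x_r2 drawn from population U archive.
            x_pbest = pop[order[rng.integers(n_pbest)]]
            r1 = i
            while r1 == i:
                r1 = rng.integers(pop_size)
            union = np.vstack([pop] + archive) if archive else pop
            r2 = rng.integers(len(union))           # (degenerate-index checks omitted)
            v = pop[i] + f_i * (x_pbest - pop[i]) + f_i * (pop[r1] - union[r2])
            v = np.clip(v, lo, hi)                  # simple bound repair by clipping
            # Crossover: binomial, with one dimension forced from the mutant.
            mask = rng.random(dim) <= cr_i
            mask[rng.integers(dim)] = True
            u = np.where(mask, v, pop[i])
            # Selection: greedy; archive the parent on strict improvement.
            fu = cost(u)
            if fu <= fit[i]:
                if fu < fit[i]:
                    archive.append(pop[i][None, :])
                    s_f.append(f_i)
                    s_cr.append(cr_i)
                new_pop[i], new_fit[i] = u, fu
        pop, fit = new_pop, new_fit
        if archive:                                 # trim archive to the population size
            arch = np.vstack(archive)
            if len(arch) > pop_size:
                arch = arch[rng.choice(len(arch), pop_size, replace=False)]
            archive = [arch]
        if s_f:                                     # adapt means: arithmetic (CR), Lehmer (F)
            s_f, s_cr = np.array(s_f), np.array(s_cr)
            mu_cr = (1 - c) * mu_cr + c * s_cr.mean()
            mu_f = (1 - c) * mu_f + c * (s_f ** 2).sum() / s_f.sum()
    best = int(np.argmin(fit))
    return pop[best], fit[best]

# Usage sketch: minimize the 3-D sphere function.
best_x, best_f = jade(lambda x: float(np.sum(x ** 2)), bounds=[(-5.0, 5.0)] * 3)
```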
4. JADE-STACK Algorithmic Description
1. Initially, six of the seven perturbation signals available for each participant are used as the training set, and each participant's seventh (last) signal is used as the test set. This split follows the initial evaluations of the dataset reported in [10,11], so the results can be compared directly. Within the first six perturbation signals, a six-fold CV is used to train and validate the proposed model. System identification is then performed for each participant individually, one step ahead according to the proposal of [10] and three steps ahead as suggested in [11]. Additional features are derived from the input lagged by one instant: the mean, standard deviation, and skewness over a window of three observations are computed, and the difference between the inputs lagged by one and two instants is obtained. Finally, the lagged input and its logarithm are also calculated to be used as input features (see the sketch after this list).
2. In sequence, the models described in Section 3.2 are trained according to the stacking structure described in Section 3.1.
3. In the meta-learner training, the JADE approach is used to find the Cubist hyperparameters by maximizing the variance accounted for (VAF), $\mathrm{VAF} = \left(1 - \frac{\operatorname{var}(y - \hat{y})}{\operatorname{var}(y)}\right) \times 100\%$, where $y$ is the observed output and $\hat{y}$ the prediction. Moreover, the root mean squared error, $\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}$, is also computed.
4. Finally, the results obtained in steps 2 and 3 are used to assess the statistical differences between the compared models under the VAF criterion. The Friedman test [55,56] determines whether a set of $k$ ($k > 2$) models exhibits statistically significant differences in its results. Its statistic, $\chi_F^2 = \frac{12n}{k(k+1)}\left[\sum_{j=1}^{k} R_j^2 - \frac{k(k+1)^2}{4}\right]$, where $R_j$ is the average rank of the $j$-th model over the $n$ observations, follows a chi-squared distribution with $k-1$ degrees of freedom. The null hypothesis assumes that no difference exists between the results of the different groups. If the null hypothesis is rejected, a post hoc test is necessary to identify which groups have distinct results; in this case, the Nemenyi multiple-comparison test can be employed. This test determines the critical difference $CD = q_{\alpha}\sqrt{\frac{k(k+1)}{6n}}$, where $q_{\alpha}$ is the critical value of the studentized range statistic; two models whose average ranks differ by at least $CD$ are considered significantly different. Illustrative code for these computations is given after this list.
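For illustration, the sketch below gathers the computations referenced in steps 1, 3, and 4: the lag-based features, the VAF and RMSE criteria, the Friedman test (via `scipy.stats.friedmanchisquare`), and the Nemenyi critical difference. It is a hypothetical sketch, not the authors' code: the helper names (`build_features`, `vaf`, `nemenyi_cd`), the small constant guarding the logarithm, and the example data are assumptions, and the value of $q_{\alpha}$ must be taken from published studentized-range tables (2.343 corresponds to $k = 3$ at $\alpha = 0.05$).

```python
import numpy as np
import pandas as pd
from scipy import stats

def build_features(u, window=3):
    """Lag-based features sketched from step 1 (column names are illustrative)."""
    df = pd.DataFrame({"u_lag1": u.shift(1)})            # input lagged by one instant
    roll = df["u_lag1"].rolling(window)
    df["mean3"], df["std3"], df["skew3"] = roll.mean(), roll.std(), roll.skew()
    df["diff12"] = u.shift(1) - u.shift(2)               # difference of one- and two-instant lags
    df["log_lag1"] = np.log(df["u_lag1"].abs() + 1e-12)  # log of the lagged input (guarded)
    return df.dropna()

def vaf(y, y_hat):
    """Variance accounted for, in percent (maximized by JADE in step 3)."""
    return (1.0 - np.var(y - y_hat) / np.var(y)) * 100.0

def rmse(y, y_hat):
    """Root mean squared error."""
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

def nemenyi_cd(k, n, q_alpha):
    """Nemenyi critical difference; q_alpha comes from studentized-range tables."""
    return q_alpha * np.sqrt(k * (k + 1) / (6.0 * n))

# Hypothetical data: VAF of k = 3 models across n = 10 participants.
scores = np.random.default_rng(1).uniform(85, 96, size=(10, 3))
chi2_f, p_value = stats.friedmanchisquare(*scores.T)     # Friedman test across models
print(f"chi2_F = {chi2_f:.3f}, p = {p_value:.4f}, CD = {nemenyi_cd(3, 10, q_alpha=2.343):.3f}")
```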
5. Results and Discussion
6. Conclusions and Future Research
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Guo, Y.; Wang, L.; Li, Y.; Luo, J.; Wang, K.; Billings, S.; Guo, L. Neural activity inspired asymmetric basis function TV-NARX model for the identification of time-varying dynamic systems. Neurocomputing 2019, 357, 188–202.
- Ayala, H.V.H.; Gritti, M.C.; Coelho, L.S. An R library for nonlinear black-box system identification. SoftwareX 2020, 11, 100495.
- Villani, L.G.; da Silva, S.; Cunha, A. Damage detection in uncertain nonlinear systems based on stochastic Volterra series. Mech. Syst. Signal Process. 2019, 125, 288–310.
- Zhou, H.Y.; Huang, L.K.; Gao, Y.M.; Lučev Vasić, Ž.; Cifrek, M.; Du, M. Estimating the ankle angle induced by FES via the neural network-based Hammerstein model. IEEE Access 2019, 7, 141277–141286.
- Aljamaan, I.A. Nonlinear system identification of cortical response in EEG elicits by wrist manipulation. In Proceedings of the 17th International Multi-Conference on Systems, Signals Devices (SSD), Monastir, Tunisia, 20–23 July 2020; pp. 91–96.
- Jain, S.; Deshpande, G. Parametric modeling of brain signals. In Proceedings of the 8th International Conference on Information Visualisation, Research Triangle Park, NC, USA, 15 October 2004; pp. 85–91.
- Liu, X.; Yang, X. Robust identification approach for nonlinear state-space models. Neurocomputing 2019, 333, 329–338.
- Billings, S.A. Nonlinear System Identification: NARMAX Methods in the Time, Frequency, and Spatio-Temporal Domains; John Wiley & Sons: Hoboken, NJ, USA, 2013; pp. 1–574.
- Worden, K.; Green, P. A machine learning approach to nonlinear modal analysis. Mech. Syst. Signal Process. 2017, 84, 34–53.
- Vlaar, M.P.; Birpoutsoukis, G.; Lataire, J.; Schoukens, M.; Schouten, A.C.; Schoukens, J.; Van Der Helm, F.C.T. Modeling the nonlinear cortical response in EEG evoked by wrist joint manipulation. IEEE Trans. Neural Syst. Rehabil. Eng. 2018, 26, 205–215.
- Tian, R.; Yang, Y.; van der Helm, F.C.T.; Dewald, J.P.A. A novel approach for modeling neural responses to joint perturbations using the NARMAX method and a hierarchical neural network. Front. Comput. Neurosci. 2018, 12, 96.
- Gu, Y.; Yang, Y.; Dewald, J.P.A.; van der Helm, F.C.; Schouten, A.C.; Wei, H.L. Nonlinear modeling of cortical responses to mechanical wrist perturbations using the NARMAX method. IEEE Trans. Biomed. Eng. 2021, 68, 948–958.
- van de Ruit, M.; Cavallo, G.; Lataire, J.; van der Helm, F.C.T.; Mugge, W.; van Wingerden, J.W.; Schouten, A.C. Revealing time-varying joint impedance with kernel-based regression and nonparametric decomposition. IEEE Trans. Control Syst. Technol. 2020, 28, 224–237.
- Westwick, D.T.; Kearney, R.E. Separable least squares identification of nonlinear Hammerstein models: Application to stretch reflex dynamics. Ann. Biomed. Eng. 2001, 29, 707–718.
- Nicolas-Alonso, L.F.; Corralejo, R.; Gomez-Pilar, J.; Álvarez, D.; Hornero, R. Adaptive stacked generalization for multiclass motor imagery-based brain computer interfaces. IEEE Trans. Neural Syst. Rehabil. Eng. 2015, 23, 702–712.
- Dalhoumi, S.; Dray, G.; Montmain, J. Knowledge transfer for reducing calibration time in brain-computer interfacing. In Proceedings of the 26th IEEE International Conference on Tools with Artificial Intelligence, Limassol, Cyprus, 10–12 November 2014; pp. 634–639.
- Li, Y.; Zhang, W.; Zhang, Q.; Zheng, N. Transfer learning-based muscle activity decoding scheme by low-frequency sEMG for wearable low-cost application. IEEE Access 2021, 9, 22804–22815.
- Lee, B.H.; Jeong, J.H.; Lee, S.W. SessionNet: Feature similarity-based weighted ensemble learning for motor imagery classification. IEEE Access 2020, 8, 134524–134535.
- Fagg, A.H.; Ojakangas, G.W.; Miller, L.E.; Hatsopoulos, N.G. Kinetic trajectory decoding using motor cortical ensembles. IEEE Trans. Neural Syst. Rehabil. Eng. 2009, 17, 487–496.
- Li, Z.; Wu, D.; Hu, C.; Terpenny, J. An ensemble learning-based prognostic approach with degradation-dependent weights for remaining useful life prediction. Reliab. Eng. Syst. Saf. 2019, 184, 110–122.
- Ribeiro, M.H.D.M.; Stefenon, S.F.; de Lima, J.D.; Nied, A.; Mariani, V.C.; Coelho, L.S. Electricity price forecasting based on self-adaptive decomposition and heterogeneous ensemble learning. Energies 2020, 13, 5190.
- Mendes-Moreira, J.; Soares, C.; Jorge, A.M.; Sousa, J.F.d. Ensemble approaches for regression: A survey. ACM Comput. Surv. 2012, 45, 1–40.
- Ribeiro, M.H.D.M.; da Silva, R.G.; Fraccanabbia, N.; Mariani, V.C.; Coelho, L.S. Electricity energy price forecasting based on hybrid multi-stage heterogeneous ensemble: Brazilian commercial and residential cases. In Proceedings of the IEEE World Congress on Computational Intelligence (IEEE WCCI), International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020; pp. 1–8.
- Ribeiro, M.H.D.M.; da Silva, R.G.; Ribeiro, G.T.; Mariani, V.C.; Coelho, L.S. Cooperative ensemble learning model improves electric short-term load forecasting. Chaos Solitons Fractals 2023, 166, 112982.
- Ribeiro, M.H.D.M.; Coelho, L.S. Ensemble approach based on bagging, boosting and stacking for short-term prediction in agribusiness time series. Appl. Soft Comput. 2020, 86, 105837.
- da Silva, R.G.; Moreno, S.R.; Ribeiro, M.H.D.M.; Larcher, J.H.K.; Mariani, V.C.; Coelho, L.d.S. Multi-step short-term wind speed forecasting based on multi-stage decomposition coupled with stacking-ensemble learning approach. Int. J. Electr. Power Energy Syst. 2022, 143, 108504.
- Wolpert, D.H. Stacked generalization. Neural Netw. 1992, 5, 241–259.
- Mariani, V.C.; Och, S.H.; Coelho, L.S.; Domingues, E. Pressure prediction of a spark ignition single cylinder engine using optimized extreme learning machine models. Appl. Energy 2019, 249, 204–221.
- Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
- Opara, K.R.; Arabas, J. Differential evolution: A survey of theoretical analyses. Swarm Evol. Comput. 2019, 44, 546–558.
- Zhang, J.; Sanderson, A.C. JADE: Adaptive differential evolution with optional external archive. IEEE Trans. Evol. Comput. 2009, 13, 945–958.
- Zhang, J.; Sanderson, A.C. JADE: Self-adaptive differential evolution with fast and reliable convergence performance. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 2251–2258.
- Jiang, H.; Zhang, Y.; Muljadi, E.; Zhang, J.J.; Gao, D.W. A short-term and high-resolution distribution system load forecasting approach using support vector regression with hybrid parameters optimization. IEEE Trans. Smart Grid 2018, 9, 3341–3350.
- Silva, P.C.L.; de Oliveira e Lucas, P.; Sadaei, H.J.; Guimarães, F.G. Distributed evolutionary hyperparameter optimization for fuzzy time series. IEEE Trans. Netw. Serv. Manag. 2020, 17, 1309–1321.
- Mendes, A.; Togelius, J.; Nealen, A. Hyper-heuristic general video game playing. In Proceedings of the 2016 IEEE Conference on Computational Intelligence and Games (CIG), Santorini, Greece, 20–23 September 2016; pp. 1–8.
- Gagné, C.; Sebag, M.; Schoenauer, M.; Tomassini, M. Ensemble learning for free with evolutionary algorithms? In Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation (GECCO’07), London, UK, 7–11 July 2007; ACM: New York, NY, USA, 2007; pp. 1782–1789.
- Vlaar, M.P.; Solis-Escalante, T.; Vardy, A.N.; Van Der Helm, F.C.T.; Schouten, A.C. Quantifying nonlinear contributions to cortical responses evoked by continuous wrist manipulation. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 481–491.
- Chen, T.; Guestrin, C. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; ACM: New York, NY, USA, 2016; pp. 785–794.
- Rasmussen, C.E. Gaussian processes in machine learning. In Advanced Lectures on Machine Learning, Proceedings of the ML Summer Schools 2003, Canberra, Australia, 2–14 February 2003, Tübingen, Germany, 4–16 August 2003; Revised Lectures; Springer: Berlin/Heidelberg, Germany, 2004; pp. 63–71.
- Tibshirani, R. Regression shrinkage and selection via the LASSO. J. R. Stat. Soc. Ser. B (Methodol.) 1996, 58, 267–288.
- Rumelhart, D.; McClelland, J.L.; PDP Research Group. Parallel Distributed Processing: Foundations; MIT Press: Cambridge, MA, USA, 1986; Volume 1.
- Drucker, H.; Burges, C.J.C.; Kaufman, L.; Smola, A.J.; Vapnik, V. Support vector regression machines. In Advances in Neural Information Processing Systems 9, Proceedings of the 1996 Conference, Denver, CO, USA, 2–5 December 1996; Mozer, M.C., Jordan, M.I., Petsche, T., Eds.; MIT Press: Cambridge, MA, USA, 1997; pp. 155–161.
- Quinlan, J.R. Combining instance-based and model-based learning. In Proceedings of the Tenth International Conference on Machine Learning (ICML’93), Amherst, MA, USA, 27–29 July 1993; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 1993; pp. 236–243.
- Oostenveld, R.; Praamstra, P. The five percent electrode system for high-resolution EEG and ERP measurements. Clin. Neurophysiol. 2001, 112, 713–719.
- Ribeiro, M.H.D.M.; Mariani, V.C.; Coelho, L.S. Multi-step ahead meningitis case forecasting based on decomposition and multi-objective optimization methods. J. Biomed. Inform. 2020, 111, 103575.
- Kuhn, M. Building predictive models in R using the Caret package. J. Stat. Softw. 2008, 28, 1–26.
- da Silva, R.G.; Ribeiro, M.H.D.M.; Fraccanabbia, N.; Mariani, V.C.; Coelho, L.S. Multi-step ahead bitcoin price forecasting based on VMD and ensemble learning methods. In Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020; pp. 1–8.
- Ribeiro, M.H.D.M.; da Silva, R.G.; Mariani, V.C.; Coelho, L.S. Short-term forecasting COVID-19 cumulative confirmed cases: Perspectives for Brazil. Chaos Solitons Fractals 2020, 135, 109853.
- Cui, S.; Yin, Y.; Wang, D.; Li, Z.; Wang, Y. A stacking-based ensemble learning method for earthquake casualty prediction. Appl. Soft Comput. 2021, 101, 107038.
- Stefenon, S.F.; Ribeiro, M.H.D.M.; Nied, A.; Mariani, V.C.; Coelho, L.S.; Leithardt, V.R.Q.; Silva, L.A.; Seman, L.O. Hybrid wavelet stacking ensemble model for insulators contamination forecasting. IEEE Access 2021, 9, 66387–66397.
- da Silva, R.G.; Ribeiro, M.H.D.M.; Moreno, S.R.; Mariani, V.C.; Coelho, L.S. A novel decomposition-ensemble learning framework for multi-step ahead wind energy forecasting. Energy 2021, 216, 119174.
- Guo, X.; Gao, Y.; Zheng, D.; Ning, Y.; Zhao, Q. Study on short-term photovoltaic power prediction model based on the Stacking ensemble learning. Energy Rep. 2020, 6, 1424–1431.
- Mariani, V.C.; Justi Luvizotto, L.G.; Guerra, F.A.; Coelho, L.S. A hybrid shuffled complex evolution approach based on differential evolution for unconstrained optimization. Appl. Math. Comput. 2011, 217, 5822–5829.
- Worden, K.; Barthorpe, R.; Cross, E.; Dervilis, N.; Holmes, G.; Manson, G.; Rogers, T. On evolutionary system identification with applications to nonlinear benchmarks. Mech. Syst. Signal Process. 2018, 112, 194–232.
- Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18.
- Carrasco, J.; García, S.; Rueda, M.; Das, S.; Herrera, F. Recent trends in the use of statistical tests for comparing swarm and evolutionary computing algorithms: Practical guidelines and a critical review. Swarm Evol. Comput. 2020, 54, 100665.
- Nemenyi, P.B. Distribution-Free Multiple Comparisons; Princeton University: Princeton, NJ, USA, 1963.
- R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2018.
- Ayala, H.V.H.; Coelho, L.S. Cascaded evolutionary algorithm for nonlinear system identification based on correlation functions and radial basis functions neural networks. Mech. Syst. Signal Process. 2016, 68–69, 378–393.
- Wolpert, D.; Macready, W. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82.
- Adam, S.P.; Alexandropoulos, S.A.N.; Pardalos, P.M.; Vrahatis, M.N. No free lunch theorem: A review. In Approximation and Optimization; Springer Optimization and Its Applications; Springer: Cham, Switzerland, 2019; Volume 145, pp. 57–82.
- Altman, D.G.; Bland, J.M. Statistics notes: Absence of evidence is not evidence of absence. BMJ 1995, 311, 485.
- Tanabe, R.; Fukunaga, A.S. Improving the search performance of SHADE using linear population size reduction. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; pp. 1658–1665.
- Mallipeddi, R.; Suganthan, P.N. Differential evolution algorithm with ensemble of parameters and mutation and crossover strategies. In Swarm, Evolutionary, and Memetic Computing, Proceedings of the First International Conference on Swarm, Evolutionary, and Memetic Computing, SEMCCO 2010, Chennai, India, 16–18 December 2010; Springer: Berlin/Heidelberg, Germany, 2010; pp. 71–78.
- Coelho, L.S.; Mariani, V.C. Particle swarm optimization with quasi-Newton local search for solving economic dispatch problem. In Proceedings of the 2006 IEEE International Conference on Systems, Man and Cybernetics, Taipei, Taiwan, 8–11 October 2006; pp. 3109–3113.
- Coelho, L.S.; Mariani, V.C. Economic dispatch optimization using hybrid chaotic particle swarm optimizer. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Montreal, QC, Canada, 7–10 October 2007; pp. 1963–1968.
- de Vasconcelos Segundo, E.H.; Mariani, V.C.; Coelho, L.S. Metaheuristic inspired on owls behavior applied to heat exchangers design. Therm. Sci. Eng. Prog. 2019, 14, 100431.
- de Vasconcelos Segundo, E.H.; Mariani, V.C.; Coelho, L.d.S. Design of heat exchangers using falcon optimization algorithm. Appl. Therm. Eng. 2019, 156, 119–144.
- da Silva, L.S.A.; Lúcio, Y.L.S.; Coelho, L.S.; Mariani, V.C.; Rao, R.V. A comprehensive review on Jaya optimization algorithm. Artif. Intell. Rev. 2022, 56, 4329–4361.
- Klein, C.E.; Mariani, V.C.; Coelho, L.S. Cheetah based optimization algorithm: A novel swarm intelligence paradigm. In Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, Bruges, Belgium, 25–27 April 2018; pp. 685–690.
Participant | LASSO: Regularization | MLP: #Hidden Units | GP: Sigma | SVR: Sigma | SVR: Cost | XGBoost: #Boosting Iterations | XGBoost: L2 Regularization | XGBoost: L1 Regularization | XGBoost: Learning Rate | Cubist (Meta-Learner): Committees | Cubist (Meta-Learner): Neighbors
---|---|---|---|---|---|---|---|---|---|---|---
1 | 0.9 | 3 | 0.089 | 0.071 | 1 | 150 | 0 | 0.1 | 0.3 | 51 | 0 |
2 | 0.9 | 5 | 0.075 | 0.066 | 1 | 150 | 0.1 | 0.1 | 0.3 | 29 | 1 |
3 | 0.9 | 5 | 0.074 | 0.07 | 1 | 150 | 0.1 | 0.1 | 0.3 | 16 | 1 |
4 | 0.9 | 5 | 0.073 | 0.068 | 1 | 150 | 0.1 | 0.0001 | 0.3 | 30 | 1 |
5 | 0.9 | 5 | 0.085 | 0.078 | 1 | 150 | 0.1 | 0.1 | 0.3 | 16 | 1 |
6 | 0.9 | 5 | 0.077 | 0.077 | 1 | 150 | 0.1 | 0.1 | 0.3 | 2 | 1 |
7 | 0.9 | 5 | 0.076 | 0.076 | 1 | 150 | 0.1 | 0.1 | 0.3 | 16 | 1 |
8 | 0.9 | 5 | 0.07 | 0.071 | 1 | 150 | 0.1 | 0.0001 | 0.3 | 0 | 1 |
9 | 0.9 | 5 | 0.075 | 0.074 | 1 | 150 | 0.1 | 0.1 | 0.3 | 4 | 0 |
10 | 0.9 | 5 | 0.08 | 0.07 | 1 | 50 | 0.0001 | 0 | 0.3 | 44 | 9 |
Participant | LASSO | MLP | GP | SVR | XGBoost | JADE-STACK |
---|---|---|---|---|---|---|
1 | 0.0122 | 0.1397 | 0.6262 | 0.1592 | 2.3284 | 15.0342 |
2 | 0.0243 | 0.1408 | 0.5754 | 0.1576 | 2.2248 | 33.4170 |
3 | 0.0377 | 0.2476 | 0.0180 | 0.2561 | 3.6448 | 23.4528 |
4 | 0.0381 | 0.1817 | 0.9091 | 0.2936 | 3.0103 | 25.6545 |
5 | 0.0330 | 0.1907 | 0.9047 | 0.2678 | 2.9858 | 18.4714 |
6 | 0.0329 | 0.1717 | 0.9105 | 0.2506 | 3.0247 | 17.7122 |
7 | 0.0339 | 0.1730 | 0.9433 | 0.2372 | 2.9763 | 28.6318 |
8 | 0.0332 | 0.1770 | 0.9360 | 0.2489 | 3.0037 | 16.4489 |
9 | 0.0378 | 0.1826 | 0.8871 | 0.2470 | 3.0370 | 19.6592 |
10 | 0.0342 | 0.1758 | 0.9986 | 0.2641 | 2.9894 | 26.5665 |
Average | 0.0317 | 0.1781 | 0.7709 | 0.2382 | 2.9225 | 22.5049 |
Std | 0.0075 | 0.0282 | 0.2837 | 0.0424 | 0.3756 | 5.6899 |
Participant | LASSO RMSE | LASSO VAF | MLP RMSE | MLP VAF | GP RMSE | GP VAF | SVR RMSE | SVR VAF | XGBoost RMSE | XGBoost VAF | JADE-STACK RMSE | JADE-STACK VAF | DE-STACK RMSE | DE-STACK VAF | GA-STACK RMSE | GA-STACK VAF | PSO-STACK RMSE | PSO-STACK VAF | NARMAX-HNN [11] VAF | NARMAX-P [11] VAF | Volterra1 [10] VAF | Volterra2 [10] VAF
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1 | 0.2225 | 94.75 | 0.2392 | 94.12 | 0.3298 | 88.52 | 0.2919 | 90.96 | 0.2791 | 91.77 | 0.2076 | 96.12 | 0.2122 | 95.01 | 0.2063 | 95.28 | 0.2051 | 95.34 | 94.37 | 95.52 | 38.37 | 45.00 |
2 | 0.2667 | 94.06 | 0.2792 | 93.58 | 0.4136 | 85.74 | 0.3781 | 88.10 | 0.3909 | 87.26 | 0.2616 | 94.48 | 0.2631 | 94.21 | 0.2629 | 94.22 | 0.2625 | 94.25 | 92.83 | 94.74 | 29.12 | 34.00 |
3 | 0.2650 | 93.21 | 0.2975 | 91.48 | 0.3466 | 88.40 | 0.3073 | 90.86 | 0.3251 | 89.81 | 0.2545 | 95.04 | 0.2616 | 93.47 | 0.2505 | 94.01 | 0.2525 | 93.90 | 90.95 | 92.95 | 32.18 | 40.00 |
4 | 0.3299 | 86.70 | 0.3565 | 84.59 | 0.4073 | 79.72 | 0.3724 | 83.04 | 0.4103 | 79.45 | 0.3336 | 91.59 | 0.3433 | 85.56 | 0.3309 | 86.61 | 0.3480 | 85.18 | 91.02 | 91.94 | 28.10 | 50.00 |
5 | 0.2529 | 93.12 | 0.2767 | 91.86 | 0.3144 | 89.39 | 0.2752 | 91.86 | 0.3183 | 89.11 | 0.2348 | 94.96 | 0.2504 | 93.24 | 0.2347 | 94.06 | 0.2340 | 94.09 | 92.58 | 94.04 | 53.74 | 56.00 |
6 | 0.2068 | 94.59 | 0.2223 | 93.91 | 0.2963 | 88.89 | 0.2620 | 91.31 | 0.2739 | 90.60 | 0.2031 | 95.99 | 0.2047 | 94.54 | 0.2043 | 94.55 | 0.2035 | 94.58 | 93.76 | 93.72 | 61.07 | 46.00 |
7 | 0.2226 | 95.25 | 0.2672 | 94.43 | 0.3000 | 91.38 | 0.2713 | 92.97 | 0.2933 | 91.75 | 0.2167 | 95.39 | 0.2215 | 95.13 | 0.2169 | 95.34 | 0.2247 | 95.04 | 93.08 | 95.73 | 54.30 | 60.00 |
8 | 0.2924 | 91.01 | 0.3454 | 88.51 | 0.3836 | 84.57 | 0.3514 | 87.02 | 0.4083 | 82.55 | 0.2950 | 93.10 | 0.3061 | 90.19 | 0.2939 | 90.95 | 0.3293 | 88.64 | 90.23 | 91.90 | 39.95 | 51.00 |
9 | 0.2831 | 92.48 | 0.3071 | 91.33 | 0.4144 | 83.97 | 0.3792 | 86.67 | 0.3953 | 85.37 | 0.2834 | 92.37 | 0.2949 | 91.90 | 0.2864 | 92.33 | 0.2841 | 92.44 | 90.36 | 92.24 | 26.35 | 36.00 |
10 | 0.2137 | 95.83 | 0.2485 | 95.09 | 0.3216 | 90.57 | 0.2787 | 92.94 | 0.2627 | 93.69 | 0.2186 | 95.99 | 0.2128 | 95.94 | 0.2129 | 95.95 | 0.2126 | 95.93 | 94.15 | 96.28 | 65.19 | 44.00 |
Average | 0.2555 | 93.10 | 0.2840 | 91.89 | 0.3528 | 87.12 | 0.3167 | 89.57 | 0.3357 | 88.14 | 0.2509 | 94.50 | 0.2571 | 92.92 | 0.2500 | 93.33 | 0.2556 | 92.94 | 92.33 | 93.91 | 42.84 | 46.20 |
Std | 0.0377 | 2.5237 | 0.0414 | 3.0615 | 0.0452 | 3.3915 | 0.0457 | 3.0731 | 0.0566 | 4.2675 | 0.0407 | 1.5273 | 0.0440 | 2.9279 | 0.0408 | 2.6421 | 0.0483 | 3.2360 | 1.4935 | 1.5458 | 13.7891 | 7.8842 |
Participant | LASSO RMSE | LASSO VAF | MLP RMSE | MLP VAF | GP RMSE | GP VAF | SVR RMSE | SVR VAF | XGBoost RMSE | XGBoost VAF | JADE-STACK RMSE | JADE-STACK VAF | DE-STACK RMSE | DE-STACK VAF | GA-STACK RMSE | GA-STACK VAF | PSO-STACK RMSE | PSO-STACK VAF | NARMAX-HNN [11] VAF | NARMAX-P [11] VAF
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1 | 0.4605 | 77.62 | 0.5460 | 70.97 | 0.5626 | 66.39 | 0.5514 | 67.95 | 0.5654 | 66.12 | 0.4655 | 77.59 | 0.3618 | 73.81 | 0.4610 | 77.81 | 0.4577 | 78.28 | 63.44 | 57.08 |
2 | 0.6822 | 61.16 | 0.6467 | 65.57 | 0.7530 | 52.82 | 0.7531 | 52.85 | 0.7760 | 50.13 | 0.6589 | 63.77 | 0.6584 | 63.82 | 0.6662 | 62.98 | 0.6638 | 63.35 | 56.85 | 39.53 |
3 | 0.6319 | 61.39 | 0.7430 | 47.20 | 0.5953 | 66.02 | 0.5908 | 66.43 | 0.5578 | 70.08 | 0.6023 | 64.92 | 0.6342 | 61.14 | 0.5949 | 65.78 | 0.5961 | 65.63 | 67.16 | 31.17 |
4 | 0.6173 | 53.62 | 0.6698 | 46.84 | 0.6528 | 48.00 | 0.6415 | 49.96 | 0.7169 | 37.54 | 0.6252 | 52.37 | 0.6571 | 47.47 | 0.6194 | 53.68 | 0.6479 | 48.71 | 74.89 | 32.26 |
5 | 0.5485 | 67.66 | 0.6118 | 60.96 | 0.5699 | 65.26 | 0.5538 | 67.05 | 0.6426 | 55.64 | 0.5327 | 69.51 | 0.5564 | 66.80 | 0.5340 | 69.34 | 0.5333 | 69.48 | 82.31 | 61.57 |
6 | 0.4700 | 72.10 | 0.5190 | 68.02 | 0.5411 | 63.12 | 0.5223 | 65.50 | 0.5706 | 59.68 | 0.4656 | 72.55 | 0.4633 | 72.85 | 0.4620 | 72.98 | 0.4555 | 73.81 | 75.55 | 49.18 |
7 | 0.4985 | 76.20 | 0.6660 | 67.53 | 0.5515 | 70.85 | 0.5402 | 72.10 | 0.5454 | 71.49 | 0.4930 | 76.74 | 0.5106 | 75.07 | 0.4843 | 67.53 | 0.5274 | 73.54 | 74.32 | 65.35 |
8 | 0.6207 | 59.55 | 0.7953 | 43.11 | 0.6779 | 52.03 | 0.6685 | 53.03 | 0.7168 | 46.70 | 0.6085 | 61.08 | 0.6486 | 55.87 | 0.6283 | 58.56 | 0.6510 | 55.45 | 43.40 | 32.57 |
9 | 0.6232 | 63.69 | 0.6775 | 57.40 | 0.7086 | 53.12 | 0.7034 | 54.30 | 0.6932 | 55.09 | 0.6290 | 63.47 | 0.6108 | 65.61 | 0.6334 | 62.90 | 0.6289 | 63.33 | 77.16 | 37.98 |
10 | 0.5348 | 73.92 | 0.6664 | 66.34 | 0.6008 | 67.07 | 0.5955 | 67.69 | 0.5304 | 74.32 | 0.5445 | 73.03 | 0.5353 | 73.84 | 0.5379 | 73.65 | 0.5363 | 73.71 | 78.44 | 64.21 |
Average | 0.5688 | 66.69 | 0.6541 | 59.39 | 0.6214 | 60.47 | 0.6120 | 61.69 | 0.6315 | 58.68 | 0.5625 | 67.50 | 0.5637 | 65.63 | 0.5621 | 66.52 | 0.5698 | 66.53 | 69.3520 | 47.09 |
Std | 0.0727 | 7.6288 | 0.0779 | 9.6891 | 0.0690 | 7.6575 | 0.0730 | 7.7109 | 0.0840 | 11.3155 | 0.0680 | 7.4431 | 0.0931 | 8.5171 | 0.0723 | 6.9528 | 0.0746 | 8.7289 | 11.2907 | 13.2842 |