Proceeding Paper

Dynamic Tikhonov State Forecasting Based on Large-Scale Deep Neural Network Constraints †

1 Department of Systems, Instituto Tecnológico Metropolitano, Medellín 050012, Colombia
2 School of Applied Sciences and Engineering, Universidad EAFIT, Medellín 050021, Colombia
3 Department of Electrical Engineering, Research Group in Automatic Control, Universidad Tecnológica de Pereira, Pereira 660003, Colombia
* Author to whom correspondence should be addressed.
Presented at the 9th International Conference on Time Series and Forecasting, Gran Canaria, Spain, 12–14 July 2023.
Eng. Proc. 2023, 39(1), 28; https://doi.org/10.3390/engproc2023039028
Published: 29 June 2023
(This article belongs to the Proceedings of The 9th International Conference on Time Series and Forecasting)

Abstract

This work presents a dynamic Tikhonov state forecasting method that uses a large-scale deep neural network as the dynamic constraint in the solution of a dynamic inverse problem of electroencephalographic (EEG) brain mapping. The dynamic constraint is obtained by using a large-scale deep neural network to approximate the state evolution of a discrete large-scale state-space model. Neural networks with several hidden-layer configurations are evaluated to obtain an adequate structure for large-scale dynamic tracking. The proposed approach is evaluated on two discrete-time models, with 2004 and 10,016 states, both related to EEG generation. A comparative analysis is performed against static and dynamic Tikhonov approaches with simplified dynamic constraints. The results show that deep neural networks adequately approximate large-scale state dynamics and thereby improve the dynamic inverse problem solutions.

1. Introduction

Deep neural networks (DNNs) have emerged as promising tools for state estimation. DNNs can learn complex non-linear relationships between inputs and outputs, making them well suited for estimating dynamic systems; they also handle high-dimensional data and can be adapted to a variety of state estimation problems. For example, Zhang et al. [1] and Li et al. [2] discussed the impact of deep learning on the field of inverse problems, with the former proposing a residual-learning-based deep convolutional neural network (CNN) approach for image denoising; they reviewed existing work in this area and covered the basics of deep learning and its application to inverse problems. In contrast, Zhang and Brand [3] and Chien and Lu [4] focused on the use of Tikhonov regularization for training DNNs. Zhang and Brand [3] trained DNNs with Tikhonov regularization using the convergent block coordinate descent (CBCD) algorithm, showing its effectiveness through experimental results on various datasets, while Chien and Lu [4] explored Tikhonov regularization in acoustic modelling, showing that it can improve the generalization performance of DNNs.
Regarding the solution of inverse problems using DNNs and regularization, Afkham et al. [5] developed a DNN-based method for automatically learning the regularization parameters of inverse problems, resulting in improved accuracy and robustness. Alternatively, Nguyen and Bui-Thanh [6] incorporated prior knowledge about the inverse problem into the network architecture, improving accuracy and robustness compared to traditional DNN-based methods. Furthermore, Romano and Elad [7] developed a method called regularization by denoising (RED), which uses a deep denoising neural network to regularize the solution of an inverse problem. In contrast, Mao et al. [8] introduced a deep-learning-based approach for image restoration using a deep convolutional encoder–decoder network with symmetric skip connections to handle the inverse problem, and demonstrated its effectiveness experimentally on several benchmark datasets.
According to Kolowrocki [9], large-scale complex systems need to be modelled in order to identify the cross-correlation of variables through their inherent complex dynamics. In many cases, these dynamics are hard to describe with non-linear equations because of their inherent couplings and complexity. As Sockeel et al. [10] reported, electroencephalographic signals are a clear example of a large-scale system whose discrete state-space model consists of a large-scale non-linear state evolution equation and a measurement equation. State estimation in EEG is an ill-conditioned, ill-posed inverse problem that requires additional constraints to be solved adequately, as Sanchez-Bornot et al. [11] noted. In many cases, as Wang et al. [12] reported, the number of states to be estimated is large and requires high-performance computing.
This paper proposes a dynamic Tikhonov state forecasting method that uses large-scale DNNs as dynamic constraints, evaluated on an EEG inverse problem for neural activity estimation and compared with the static version of the method. The non-linear dynamics of the state evolution are approximated by the non-linear structure of the DNNs for two brain models, with 2004 and 10,016 sources, in which the EEG dynamics are simulated and approximated using the state measurements. Qualitative and quantitative analyses of the state estimation for dynamic tracking are performed, with the quantitative analysis measured in terms of the least-squared estimation error. As a result, the proposed DNN-based dynamic Tikhonov approach improves the dynamic tracking of the EEG compared with the static and dynamic Tikhonov approaches.
The paper is organized as follows: in Section 2, the dynamic Tikhonov structure based on DNN constraints is presented; in Section 3, the state forecasting results for the proposed approach and for the static and dynamic Tikhonov approaches are shown; finally, in Section 4, the conclusions and final remarks are presented.

2. Materials and Methods

2.1. Forward Dynamic Problem

The measurement equation for the state-space representation of the EEG dynamics can be described as follows:

$$y_k = A x_k + \epsilon_k \quad (1)$$

where $y_k$ is the vector of time series measurements at time $k$, $x_k$ is the state vector, and $A$ is the lead-field matrix. In addition, the intrinsic dynamic evolution of the states $x_k$ is defined through a non-linear difference equation, as follows:

$$x_k = f(x_{k-1}, x_{k-2}, \ldots) + \eta_k \quad (2)$$

where $f$ is a non-linear dynamic difference equation that describes the state evolution. The structure of $f(\cdot)$ can be defined as a non-linear, physically motivated model, as used in [13].
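As a small illustration of the forward model in (1) and (2), the following NumPy sketch simulates a toy trajectory. The problem sizes, the random lead-field matrix, and the quadratic term inside `f` are placeholder assumptions; the paper's experiments use the New York head lead-field model and the physically motivated dynamics of [13] (Section 3.1).

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_meas, n_steps = 20, 8, 100        # toy sizes; the paper uses 2004 and 10,016 states
A = rng.standard_normal((n_meas, n_states))   # stand-in for the lead-field matrix

def f(x_prev):
    # Placeholder non-linear state evolution; the paper uses the
    # physically motivated structure of [13] instead.
    return 0.8 * x_prev + 0.05 * x_prev**2

x = np.zeros((n_steps, n_states))
y = np.zeros((n_steps, n_meas))
x[0] = 0.1 * rng.standard_normal(n_states)
for k in range(1, n_steps):
    x[k] = f(x[k - 1]) + 0.01 * rng.standard_normal(n_states)  # state equation (2)
    y[k] = A @ x[k] + 0.01 * rng.standard_normal(n_meas)       # measurement equation (1)
```

The resulting arrays `x` and `y` play the role of the true states and the EEG measurements in the experiments below.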

2.2. Dynamic Tikhonov Based on DNN

Consider a cost function defined by

$$J_k = \| y_k - A x_k \|_2^2 + \lambda^2 \| x_k - \bar{x}_k \|_2^2 \quad (3)$$

where $\bar{x}_k$ is the a priori state estimate, and whose minimizer can be computed as

$$\hat{x}_k = (A^T A + \lambda^2 I)^{-1} (A^T y_k + \lambda^2 \bar{x}_k) \quad (4)$$

In this work, the a priori estimation is performed by a DNN in order to account for the dynamic evolution of the states, approximating the function described in (2) as follows:

$$\bar{x}_k = \Phi(\hat{x}_{k-1}) \quad (5)$$

where $\Phi$ is the DNN. Therefore,

$$J_k = \| y_k - A x_k \|_2^2 + \lambda^2 \| x_k - \Phi(\hat{x}_{k-1}) \|_2^2 \quad (6)$$

whose solution for state forecasting can be computed as

$$\hat{x}_k = (A^T A + \lambda^2 I)^{-1} (A^T y_k + \lambda^2 \Phi(\hat{x}_{k-1})) \quad (7)$$

Figure 1 shows the structure of the DNN $\Phi$ of (5), used to approximate $f$ in (2). It has an input layer $\hat{x}_{k-1}$, three fully connected hidden layers, and an output layer $x_k$ with two outputs.
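One forecasting step of the update in (7) can be sketched as follows. Here `phi` stands in for the trained DNN $\Phi$ and is passed as a plain function, so the sketch stays independent of any particular deep learning framework:

```python
import numpy as np

def dynamic_tikhonov_step(A, y_k, x_prev_hat, phi, lam):
    """One step of Eq. (7):
    x_hat_k = (A^T A + lam^2 I)^(-1) (A^T y_k + lam^2 Phi(x_hat_{k-1}))."""
    n = A.shape[1]
    x_prior = phi(x_prev_hat)                # DNN dynamic constraint, Eq. (5)
    lhs = A.T @ A + lam**2 * np.eye(n)
    rhs = A.T @ y_k + lam**2 * x_prior
    return np.linalg.solve(lhs, rhs)         # solve the linear system instead of forming the inverse
```

As a sanity check, with $A = I$ and an identity `phi` the update reduces to a weighted average of the measurement and the prior, $(y_k + \lambda^2 \bar{x}_k)/(1 + \lambda^2)$.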

3. Results

3.1. Experimental Setup

In order to evaluate the performance of the proposed dynamic Tikhonov approach based on DNN dynamic constraints, a simulation of the time series corresponding to the EEG was performed using Equations (1) and (2). To this end, the lead-field matrix corresponding to the New York head model was used [14].
The model considered two different source configurations, a detailed representation of which can be seen in Figure 2.
The non-linear function for the state evolution $f(\cdot)$ was simulated considering the structures proposed in [13], as follows:

$$x_k = A_1 x_{k-1} + A_2 x_{k-2} + A_3 x_{k-1}^{\circ 2} + A_4 x_{k-2}^{\circ 3} + A_5 x_{k-\eta} \quad (8)$$

where $\circ$ denotes the Hadamard (element-wise) product, so that $x^{\circ 2} = x \circ x$, and $\eta$ is the state delay. The simulation of the EEG was developed considering $A_1 = 0.8 I$, with $A_2$, $A_3$, $A_4$, and $A_5$ equal to zero. The function $f$ was approximated by the DNN $\Phi$ using a structure with three hidden layers, ReLU activation functions, and an L2 regularization term to avoid overfitting, reduce model complexity, and improve performance on the test data. A comparative analysis in terms of the least-squared error was performed against the static and dynamic Tikhonov approaches. The proposed DNN approach was implemented in Python using TensorFlow.
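A minimal NumPy sketch of the forward pass of $\Phi$ with three fully connected ReLU hidden layers follows. The hidden width, the randomly initialized weights, and the linear output layer are illustrative assumptions only; the paper's implementation uses TensorFlow and does not report layer widths:

```python
import numpy as np

rng = np.random.default_rng(1)
n, h = 64, 128                      # toy state and hidden sizes (2004/10,016 states in the paper)

# Randomly initialized parameters stand in for trained ones.
shapes = [(n, h), (h, h), (h, h), (h, n)]
Ws = [0.05 * rng.standard_normal(s) for s in shapes]
bs = [np.zeros(s[1]) for s in shapes]

def phi(x_prev):
    """Forward pass of Phi (Eq. (5)): three fully connected ReLU hidden
    layers followed by a linear output layer mapping x_{k-1} to x_k."""
    a = x_prev
    for W, b in zip(Ws[:-1], bs[:-1]):
        a = np.maximum(a @ W + b, 0.0)   # fully connected layer + ReLU
    return a @ Ws[-1] + bs[-1]
```

In practice the weights would be fitted on pairs $(\hat{x}_{k-1}, x_k)$ from the simulated trajectory, with an L2 penalty on the weights as described above.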
Figure 3 and Figure 4 show four channels of the simulated EEG time series for the brain models with 2004 and 10,016 states, respectively.

3.2. State Forecasting Results

The state forecasting results show that the proposed DNN-based dynamic Tikhonov approach achieves the closest fit to the trustworthy sources, as evidenced by a lower estimation error than the static and dynamic Tikhonov approaches; this is shown in the following graphs for the two models and for estimation with different numbers of hidden layers. The results were obtained by testing the New York model with 2004 and 10,016 states and comparing the estimates across architectures ranging from zero hidden layers, i.e., a simple linear regression model, to three hidden layers, with the best performance obtained by the DNN-based dynamic Tikhonov approach.
Real and estimated states using the Tikhonov and the DNN-based dynamic Tikhonov approaches are shown in Figure 5 and Figure 6 for one and three hidden layers, respectively. Four states (10, 600, 1200, and 2000) were used so that the behaviour of the source could be adequately estimated at different states. In all four cases, the DNN-based dynamic Tikhonov approach outperforms the Tikhonov approach, and the three-layer architecture achieves better performance than the one-layer architecture. This test used the 2004-state model, which is considered small, in order to observe the behaviour of the implemented methodology.
Once several architectures had been tested on the model with 2004 states, the same comparison was extrapolated to the model with 10,016 states to observe the dynamic behaviour of the system under the Tikhonov and DNN-based dynamic Tikhonov approaches. Figure 7 and Figure 8 show the results obtained using one and three hidden layers, respectively. Four states (100, 2500, 5000, and 10,000) were used to observe the behaviour of the approaches with respect to the source at different states of this bigger model.
Overall, the models with 2004 and 10,016 sources and three hidden layers demonstrated the efficacy of the proposed dynamic Tikhonov based on the DNN approach for state estimation in the context of the EEG inverse problem. The approach achieved lower estimation errors, was closer to sources, and had faster convergence rates than the static and dynamic Tikhonov approaches.
Figure 9 shows the estimation results from the forecasting, demonstrating that the DNN approach continues to achieve the closest fit to the actual sources and exhibits the lowest prediction error compared to the static and dynamic Tikhonov approaches. This suggests that the DNN approach is more effective in capturing the complex non-linear dynamics of the EEG sources.
Table 1 shows the estimation results for the static and dynamic Tikhonov approaches in terms of the least-squared error for the models with 2004 and 10,016 states.
The least-squared error for the dynamic Tikhonov approach is significantly lower than for the static Tikhonov approach in both models. This suggests that incorporating temporal dynamics into the state estimation problem can significantly improve the accuracy of the estimation. In addition, Table 2 shows the estimation errors of the proposed dynamic Tikhonov approach with DNN constraints for several hidden-layer configurations, in terms of the least-squared error.
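The least-squared error reported in Tables 1 and 2 can be computed over a trajectory of true and estimated states as sketched below; the exact normalization used in the paper is not stated, so this sums squared differences over all states and time steps:

```python
import numpy as np

def least_squared_error(x_true, x_est):
    """Sum of squared differences accumulated over states and time steps."""
    diff = np.asarray(x_true) - np.asarray(x_est)
    return float(np.sum(diff**2))
```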
In terms of the least-squared estimation error, the dynamic Tikhonov approach with DNN constraints outperformed both the static and dynamic Tikhonov approaches, and achieved better results as the number of hidden layers increased.

4. Discussion and Conclusions

The proposed dynamic Tikhonov approach based on DNN constraints was evaluated on an EEG inverse problem for neural activity estimation. By approximating the non-linear dynamics of the state evolution using DNNs for two brain models with 2004 and 10,016 sources, the method improves the dynamic tracking of the EEG. Qualitative and quantitative analyses of the state forecasting for dynamic tracking were conducted, and the proposed method showed better results than the static and dynamic Tikhonov approaches. The findings indicate that a dynamic approach that accounts for dynamics changing over time can improve the estimation accuracy, especially when combined with DNN constraints and multiple hidden layers. The proposed method also has potential for future behaviour prediction and could be valuable for solving inverse problems in other applications with dynamic, non-linear behaviour.

Author Contributions

Conceptualization, E.G., J.M.; methodology, E.G., C.M., J.M.; software, E.G., C.M., J.M.; validation, E.G., C.M., J.M.; formal analysis, E.G.; investigation, C.M.; resources, E.G., C.M., J.M.; data curation, E.G., C.M., J.M.; writing—original draft preparation, E.G., C.M., J.M.; writing—review and editing, E.G., C.M., J.M.; visualization, E.G., C.M., J.M.; supervision, E.G., J.M.; project administration, E.G.; funding acquisition, E.G., C.M., J.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work is funded by Project No. 6-22-8 entitled “Identificación y control de sistemas multivariables interconectados a gran escala” by Universidad Tecnológica de Pereira, Pereira, Colombia.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; Zhang, L. Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 2017, 26, 3142–3155.
  2. Li, H.; Schwab, J.; Antholzer, S.; Haltmeier, M. NETT: Solving inverse problems with deep neural networks. Inverse Probl. 2020, 36, 065005.
  3. Zhang, Z.; Brand, M. Convergent block coordinate descent for training Tikhonov regularized deep neural networks. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; Volume 30.
  4. Chien, J.T.; Lu, T.W. Tikhonov regularization for deep neural network acoustic modeling. In Proceedings of the 2014 IEEE Spoken Language Technology Workshop (SLT), South Lake Tahoe, NV, USA, 7–10 December 2014; pp. 147–152.
  5. Afkham, B.M.; Chung, J.; Chung, M. Learning regularization parameters of inverse problems via deep neural networks. Inverse Probl. 2021, 37, 105017.
  6. Nguyen, H.V.; Bui-Thanh, T. TNet: A model-constrained Tikhonov network approach for inverse problems. arXiv 2021, arXiv:2105.12033.
  7. Romano, Y.; Elad, M. The little engine that could: Regularization by denoising (RED). In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017.
  8. Mao, X.J.; Shen, C.; Yang, Y.B. Image restoration using very deep convolutional encoder–decoder networks with symmetric skip connections. In Proceedings of the European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 11–14 October 2016.
  9. Kolowrocki, K. 10—Large complex systems. In Reliability of Large and Complex Systems; Elsevier: Amsterdam, The Netherlands, 2014; Volume 2.
  10. Sockeel, S.; Schwartz, D.; Pélégrini-Issac, M.; Benali, H. Large-scale functional networks identified from resting-state EEG using spatial ICA. PLoS ONE 2016, 11, e0146845.
  11. Sanchez-Bornot, J.M.; Sotero, R.C.; Kelso, S.; Coyle, D. Solving large-scale MEG/EEG source localization and functional connectivity problems simultaneously using state-space models. arXiv 2022, arXiv:2208.12854.
  12. Wang, Q.; Loh, J.M.; He, X.; Wang, Y. A latent state space model for estimating brain dynamics from electroencephalogram (EEG) data. Biometrics 2022, Early View.
  13. Giraldo-Suarez, E.; Martinez-Vargas, J.D.; Castellanos-Dominguez, G. Reconstruction of neural activity from EEG data using dynamic spatiotemporal constraints. Int. J. Neural Syst. 2016, 26, 1650026.
  14. Huang, Y.; Parra, L.C.; Haufe, S. The New York Head—A precise standardized volume conductor model for EEG source localization and TES targeting. NeuroImage 2016, 140, 150–162.
Figure 1. DNN used to consider the dynamic evolution of the states in the Tikhonov method.
Figure 2. New York head model was selected for two source configurations: a model with n = 2004 sources, and a model with n = 10,016 sources.
Figure 3. Simulated EEG for a head model with 2004 states. (a) Two channels of simulated EEG: channels 1 and 2. (b) Two channels of simulated EEG: channels 3 and 4.
Figure 4. Simulated EEG for a head model with 10,016 states. (a) Two channels of simulated EEG: channels 1 and 2. (b) Two channels of simulated EEG: channels 3 and 4.
Figure 5. Output estimation for the one-layer architecture of the Tikhonov and dynamic Tikhonov based on the DNN models for 2004 states.
Figure 6. Output estimation for the three-layer architecture of the Tikhonov and dynamic Tikhonov based on the DNN models for 2004 states.
Figure 7. Output estimation for the one-layer architecture of the Tikhonov and dynamic Tikhonov based on the DNN models for 10,016 states.
Figure 8. Output estimation for the three-layer architecture of the Tikhonov and dynamic Tikhonov based on the DNN models for 10,016 states.
Figure 9. Output estimation of the Tikhonov and dynamic Tikhonov based on the DNN models for 2004 and 10,016 states.
Table 1. Mean-squared estimation error.

Model        Regularized LS Model 2 K    Regularized LS Model 10 K
E_static     639.016                     602.9669
E_dynamic    127.4491                    95.4469
Table 2. Mean-squared estimation error.

Hidden Layers    Regularized LS-DNN Model 2 K    Regularized LS-DNN Model 10 K
0                439.4857                        897.7086
1                348.5712                        60.4926
2                177.608                         42.6174
3                104.813                         34.2116

Citation: Molina, C.; Martinez, J.; Giraldo, E. Dynamic Tikhonov State Forecasting Based on Large-Scale Deep Neural Network Constraints. Eng. Proc. 2023, 39, 28. https://doi.org/10.3390/engproc2023039028