FPGA-Accelerated ESN with Chaos Training for Financial Time Series Prediction
Abstract
1. Introduction
2. Overview of the Employed Datasets
2.1. S&P 500
2.2. ISEQ Overall Historical Data
2.3. XAU/USD Historical Data
2.4. USD/JPY Historical Data
2.5. Pci-Suntek Tech Stock Prices (600728.SS)
2.6. FTSE 100 Historical Data
3. Overview of the Employed Chaotic Sequences and Standard Initialization in Training Neural Network Models
3.1. The Employed Chaotic Systems
3.1.1. Standard Chen System
3.1.2. Perturbed Chen Oscillator
3.1.3. The Lorenz System
3.1.4. Mackey–Glass Time Series
3.2. He and Xavier Standard Initialization
3.3. Chaotic System Distribution
4. Recurrent Neural Network and Reservoir Computing Model: General Architectures
4.1. Gated Recurrent Unit Model
4.2. Echo State Network Model
5. Prediction Models Validation
5.1. Chaotic Initialization Methodology
Algorithm 1. Chaotic Sequence-Initiated ESN.
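The chaotic initialization of Section 5.1 can be sketched in code. The following is a minimal illustration, not the paper's exact procedure: it assumes the weight entries are drawn from a Lorenz trajectory (one of the systems listed in Section 3.1), min-max mapped to [-1, 1], sparsified, and rescaled to the target spectral radius; all function and parameter names are illustrative.

```python
import numpy as np

def lorenz_sequence(n, x0=1.0, y0=1.0, z0=1.0, dt=0.01,
                    sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Generate n samples of the Lorenz x-coordinate by Euler integration."""
    x, y, z = x0, y0, z0
    out = np.empty(n)
    for i in range(n):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        out[i] = x
    return out

def chaotic_esn_weights(n_inputs=1, n_reservoir=100,
                        spectral_radius=0.9, sparsity=0.9, seed=0):
    """Fill W_in and W from a normalized chaotic sequence, then rescale
    the reservoir matrix to the target spectral radius."""
    rng = np.random.default_rng(seed)
    n = n_reservoir * (n_inputs + n_reservoir)
    seq = lorenz_sequence(n + 1000)[1000:]                     # drop transient
    seq = 2 * (seq - seq.min()) / (seq.max() - seq.min()) - 1  # map to [-1, 1]
    w_in = seq[: n_reservoir * n_inputs].reshape(n_reservoir, n_inputs)
    w = seq[n_reservoir * n_inputs:].reshape(n_reservoir, n_reservoir)
    w[rng.random(w.shape) < sparsity] = 0.0                    # enforce sparsity
    radius = max(abs(np.linalg.eigvals(w)))
    w *= spectral_radius / radius                              # echo state property
    return w_in, w
```

Rescaling to a spectral radius below 1 is the standard way to keep the echo state property; the chaotic sequence merely replaces the usual random draw for the weight values.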
5.2. Prediction Result Comparison
6. Hardware Implementation of the Proposed ESN Model
6.1. Precision Analysis
6.2. Hyperbolic Tangent Function Implementation
6.3. Matrix Multiplication Implementation
6.4. Reservoir State Implementation
6.5. Utilization Results and Performance Evaluation Comparison
7. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Wang, H.; Mo, Y. Sparse compressed deep echo state network with improved arithmetic optimization algorithm for chaotic time series prediction. Expert Syst. Appl. 2025, 259, 125249.
- Chen, J.; Wen, Y.; Nanehkaran, Y.; Suzauddola, M.; Chen, W.; Zhang, D. Machine learning techniques for stock price prediction and graphic signal recognition. Eng. Appl. Artif. Intell. 2023, 121, 106038.
- Wen, S.C.; Yang, C.H. Time series analysis and prediction of nonlinear systems with ensemble learning framework applied to deep learning neural networks. Inf. Sci. 2021, 572, 167–181.
- Yu, H.; Dong, J.; Li, B. Economic insight through an optimized deep learning model for stock market prediction regarding Dow Jones Industrial Average index. Expert Syst. Appl. 2025, 291, 128473.
- Huang, Y.; Pei, Z.; Yan, J.; Zhou, C.; Lu, X. A combined Adaptive Gaussian Short-Term Fourier Transform and Mamba framework for stock price prediction. Eng. Appl. Artif. Intell. 2025, 162, 112588.
- Jia, B.; Wu, H.; Guo, K. Chaos theory meets deep learning: A new approach to time series forecasting. Expert Syst. Appl. 2024, 255, 124533.
- Viehweg, J.; Walther, D.; Mäder, P. Temporal convolution derived multi-layered reservoir computing. Neurocomputing 2025, 617, 128938.
- Pham, P.; Pedrycz, W.; Vo, B. Dual attention-based sequential auto-encoder for COVID-19 outbreak forecasting: A case study in Vietnam. Expert Syst. Appl. 2022, 203, 117514.
- Yan, M.; Huang, C.; Bienstman, P.; Tino, P.; Lin, W.; Sun, J. Emerging opportunities and challenges for the future of reservoir computing. Nat. Commun. 2024, 15, 2056.
- Bukhari, A.H.; Raja, M.A.Z.; Alquhayz, H.; Almazah, M.M.; Abdalla, M.Z.; Hassan, M.; Shoaib, M. Predictive analysis of stochastic stock pattern utilizing fractional order dynamics and heteroscedastic with a radial neural network framework. Eng. Appl. Artif. Intell. 2024, 135, 108687.
- Zhou, X.; Hao, Y.; Liu, Y.; Dang, L.; Qiao, B.; Zuo, X. Short-term prediction of dissolved oxygen and water temperature using deep learning with dual proportional-integral-derivative error corrector in pond culture. Eng. Appl. Artif. Intell. 2025, 142, 109964.
- Chattopadhyay, A.; Hassanzadeh, P.; Subramanian, D. Data-driven predictions of a multiscale Lorenz 96 chaotic system using machine-learning methods: Reservoir computing, artificial neural network, and long short-term memory network. Nonlinear Process. Geophys. 2020, 27, 373–389.
- Gonbadi, L.; Rostami, H.; Sahafizadeh, E.; Rostami, S.; Nejad, M.M.; Shirzadi, A. Input driven optimization of echo state network parameters for prediction on chaotic time series. Sci. Rep. 2025, 15, 33005.
- Chen, X.; Chen, L.; Li, S.; Jin, L. A mirrored echo state network with application to time series prediction. Inf. Sci. 2025, 716, 122260.
- Wang, H.; Wang, Z.; Yu, M.; Liang, J.; Peng, J.; Wang, Y. Grouped Vector Autoregression Reservoir Computing Based on Randomly Distributed Embedding for Multistep-Ahead Prediction. IEEE Trans. Neural Netw. Learn. Syst. 2025, 36, 17265–17279.
- Na, X.; Ren, W.; Liu, M.; Han, M. Hierarchical Echo State Network With Sparse Learning: A Method for Multidimensional Chaotic Time Series Prediction. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 9302–9313.
- Tang, Y.; Song, Z.; Zhu, Y.; Hou, M.; Tang, C.; Ji, J. Adopting a dendritic neural model for predicting stock price index movement. Expert Syst. Appl. 2022, 205, 117637.
- Kleyko, D.; Frady, E.P.; Kheffache, M.; Osipov, E. Integer Echo State Networks: Efficient Reservoir Computing for Digital Hardware. IEEE Trans. Neural Netw. Learn. Syst. 2022, 33, 1688–1701.
- Hassaan, Z.A.; Yacoub, M.H.; Said, L.A. Gated recurrent unit accelerators for financial time series prediction on field-programmable gate array. Eng. Appl. Artif. Intell. 2025, 162, 112534.
- Sharobim, B.K.; Yacoub, M.H.; Sayed, W.S.; Radwan, A.G.; Said, L.A. Artificial Neural Network Chaotic PRNG and simple encryption on FPGA. Eng. Appl. Artif. Intell. 2023, 126, 106888.
- Mao, G.; Rahman, T.; Maheshwari, S.; Pattison, B.; Shao, Z.; Shafik, R.; Yakovlev, A. Dynamic Tsetlin Machine Accelerators for On-Chip Training Using FPGAs. IEEE Trans. Circuits Syst. I Regul. Pap. 2025, 72, 6962–6975.
- Wasef, M.; Rafla, N. SoC Reconfigurable Architecture for Implementing Software Trained Recurrent Neural Networks on FPGA. IEEE Trans. Circuits Syst. I Regul. Pap. 2023, 70, 2497–2510.
- Liu, F.; Li, H.; Hu, W.; He, Y. Review of neural network model acceleration techniques based on FPGA platforms. Neurocomputing 2024, 610, 128511.
- Foreign Currency Financial Reporting from Euro to Yen to Yuan; Wiley: Hoboken, NJ, USA, 2012; pp. 251–266.
- Zhang, X.; Zhu, Y.; Lou, X. Reconfigurable and Energy-Efficient Architecture for Deploying Multi-Layer RNNs on FPGA. IEEE Trans. Circuits Syst. I Regul. Pap. 2024, 71, 5969–5982.
- AbdElbaky, M.H.; Yacoub, M.H.; Sayed, W.S.; Said, L.A. High-performance FPGA-accelerated LSTM neural network for chaotic time series prediction. AEU Int. J. Electron. Commun. 2025, 199, 155845.
- Xu, H.; Chai, L.; Luo, Z.; Li, S. Stock movement prediction via gated recurrent unit network based on reinforcement learning with incorporated attention mechanisms. Neurocomputing 2022, 467, 214–228.
- Farhadi, A.; Zamanifar, A.; Alipour, A.; Taheri, A.; Asadolahi, M. A Hybrid LSTM-GRU Model for Stock Price Prediction. IEEE Access 2025, 13, 117594–117618.
- Li, T.; Guo, Z.; Li, Q. Decomposition-based deep projection-encoding echo state network for multi-scale and multi-step wind speed prediction. Expert Syst. Appl. 2025, 266, 126074.
- Gong, Y.; Lun, S.; Li, M. Broad-ESN Based on Radical Activation Function for Predicting Time Series With Multiple Variables. IEEE Trans. Neural Netw. Learn. Syst. 2025, 36, 17310–17321.
- Delshad, A.; Cherry, E.M. Predicting complex time series with deep echo state networks. Chaos 2025, 35, 093126.
- Sharifi Ghazijahani, M.; Cierpka, C. On the spatial prediction of the turbulent flow behind an array of cylinders via echo state networks. Eng. Appl. Artif. Intell. 2025, 144, 110079.
| Model | Key Parameters | Trainable Parameters | Optimization |
|---|---|---|---|
| ESN | Reservoir size = 100; spectral radius = ; leakage rate = ; sparsity = ; regularization = | Output weights only: 101 parameters | No backpropagation; linear regression only |
| GRU | Hidden size = 64; epochs = 150; batch size = 64; learning rate = ; dropout = | Weights and biases (two hidden layers) + FC-layer weights: ∼50,000 parameters | Gradient-based with adaptive learning rate |
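The ESN's 101 trainable parameters in the table above follow from training only the linear readout: 100 reservoir-state weights plus one bias, fitted in closed form rather than by backpropagation. A sketch of that ridge-regression step, with illustrative names and toy data:

```python
import numpy as np

def train_readout(states, targets, ridge=1e-6):
    """Closed-form ridge regression for the ESN output weights.
    With a 100-unit reservoir plus a bias column this trains exactly
    101 parameters -- the only trainable part of the ESN."""
    x = np.hstack([states, np.ones((states.shape[0], 1))])  # append bias column
    # w_out = (X^T X + lambda I)^{-1} X^T y
    a = x.T @ x + ridge * np.eye(x.shape[1])
    return np.linalg.solve(a, x.T @ targets)

# toy check: 200 time steps of 100-dimensional reservoir states
rng = np.random.default_rng(1)
states = rng.standard_normal((200, 100))
targets = rng.standard_normal((200, 1))
w_out = train_readout(states, targets)
```

The closed-form solve is what makes ESN training orders of magnitude cheaper than the GRU's gradient-based optimization of ∼50,000 parameters.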
| Dataset | Model | MSE (Chaotic) | MSE (Non-Chaotic) | MAE (Chaotic) | MAE (Non-Chaotic) | MAPE (Chaotic) | MAPE (Non-Chaotic) |
|---|---|---|---|---|---|---|---|
| S&P 500 | GRU (proposed) | 0.000183 | 0.000129 | 0.043727 | 0.009958 | 5.0875% | 1.2055% |
| S&P 500 | ESN (proposed) | 0.000006 | 0.00659 | 0.0017 | 0.0627 | 0.209923% | 2.357% |
| S&P 500 | N-BEATS in [6] | 4.031 | 5.562 | — | — | 0.389% | 0.468% |
| ISEQ Historical Data | GRU (proposed) | 0.00211053 | 0.000080316 | 0.0342342 | 0.0068244 | 6.07399% | 2.1758% |
| ISEQ Historical Data | ESN (proposed) | 0.000034 | 0.015005 | 0.00413 | 0.089523 | 0.842156% | — |
| ISEQ Historical Data | LC ARFIMA GARCH in [10] | — | 603 | — | 603 | — | 0.072 |
| XAU/USD Historical Data | GRU (proposed) | 0.000568 | 0.0006516 | 0.018667 | 0.02112 | 3.26004% | 3.6862% |
| XAU/USD Historical Data | ESN (proposed) | 0.0004659 | 0.015696 | 0.017021 | 0.0959 | 3.0093% | — |
| XAU/USD Historical Data | VAR in [2] | — | 2.625 | — | — | — | — |
| USD/JPY Historical Data | GRU (proposed) | 0.004955 | 0.002373 | 0.057375 | 0.037068 | 4.62643% | 3.0454% |
| USD/JPY Historical Data | ESN (proposed) | 0.002201 | 1.8405 | 0.03501 | 1.1495 | 2.8868% | — |
| USD/JPY Historical Data | VAR in [2] | — | 0.0625 | — | — | — | — |
| Pci-Suntek Tech Stock Price | GRU (proposed) | 0.00078 | 0.000855 | 0.0199 | 0.02194 | 7.67069% | 8.1976% |
| Pci-Suntek Tech Stock Price | ESN (proposed) | 0.000683 | 0.017508 | 0.018295 | 0.093627 | 6.93355% | — |
| Pci-Suntek Tech Stock Price | VAR in [2] | — | 0.0612 | — | — | — | — |
| FTSE 100 Historical Data | GRU (proposed) | 0.000775 | 0.0005818 | 0.02130 | 0.018116 | 2.58066% | 2.2101% |
| FTSE 100 Historical Data | ESN (proposed) | 0.0004787 | 0.008423 | 0.015755 | 0.068239 | 1.8993% | 13.6599% |
| FTSE 100 Historical Data | VAR in [2] | — | 1097 | — | — | — | — |
| Resource | Used | Available | Utilization % |
|---|---|---|---|
| LUTs | 15,833 | 2,424,000 | 0.65% |
| FFs | 4306 | 484,800 | 0.89% |
| BRAMs | 600 | | |
| DSPs | 104 | 1920 | 5.42% |
| Metric | ESN in [18] | intESN in [18] | ESN (Proposed) |
|---|---|---|---|
| Neurons | 100 / 200 / 300 | 100 / 200 / 300 | 100 |
| Device | Xilinx Zynq-7000 FPGA | Xilinx Zynq-7000 FPGA | Kintex-UltraScale KCU105 |
| Max Frequency (MHz) | 100 | 100 | 83.5 |
| Power (W) | 1.73 / 1.78 / 1.95 | 1.59 / 1.6 / 1.6 | 0.677 (dynamic: 0.197) |
| Precision | 32-bit float | 3-bit integer | 19-bit fixed point (3-bit integer, 16-bit fractional) |
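The proposed design's 19-bit precision (3 integer bits including sign, 16 fractional bits, per the table above) corresponds to a signed fixed-point format with range [-4, 4) and resolution 2^-16. A small software model of that quantization, with saturation on overflow (illustrative, not the paper's RTL):

```python
def to_fixed(x, frac_bits=16, total_bits=19):
    """Quantize x to a signed fixed-point grid: total_bits wide with
    frac_bits fractional bits, saturating at the representable range.
    For 19 bits with 16 fractional, that range is [-4, 4 - 2**-16]."""
    scale = 1 << frac_bits
    lo = -(1 << (total_bits - 1))
    hi = (1 << (total_bits - 1)) - 1
    q = int(round(x * scale))
    q = max(lo, min(hi, q))   # saturate instead of wrapping on overflow
    return q / scale          # back to a real value for error analysis
```

Rounding keeps the quantization error within half an LSB (2^-17), which is one way to check, in software, whether a given precision preserves prediction accuracy before committing it to hardware.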
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Hassaan, Z.A.; Yacoub, M.H.; Said, L.A. FPGA-Accelerated ESN with Chaos Training for Financial Time Series Prediction. Mach. Learn. Knowl. Extr. 2025, 7, 160. https://doi.org/10.3390/make7040160

