Towards Fault Tolerance of Reservoir Computing in Time Series Prediction
Abstract
1. Introduction
1.1. Related Work
1.2. Contributions
2. Fault Tolerant Echo State Networks
2.1. Model Structure
2.2. Random Fault Tolerance Mechanism
- If the state output value of a neuron inside the reservoir lies in the range (−1, 1), the neuron is not at fault;
- If the state output value of a neuron inside the reservoir is always stuck at 0, the neuron has experienced a computational crash;
- If the state output value of a neuron inside the reservoir is stuck at an arbitrary value or deviates arbitrarily from the expected value, a Byzantine fault has occurred [15]; a minimal sketch of this classification is given below.
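These three conditions amount to a range check and a stuck-at-zero check on each neuron's state output. The following is only an illustrative sketch of such a classification: the function name, the history-window input, and the tolerance `eps` are assumptions rather than the paper's definitions, and "arbitrary deviation from the expected value" is approximated here by a simple out-of-range test.

```python
import numpy as np

def classify_neuron(state_history, eps=1e-8):
    """Classify one reservoir neuron from a window of its recent state outputs.

    Returns 'crash' (always stuck at 0), 'healthy' (state stays in (-1, 1)),
    or 'byzantine' (state leaves the expected range), mirroring the three
    conditions listed above.
    """
    s = np.asarray(state_history, dtype=float)
    if np.all(np.abs(s) < eps):
        return "crash"        # always stuck at 0: computational crash
    if np.all((s > -1.0) & (s < 1.0)):
        return "healthy"      # within the open interval (-1, 1): no fault
    return "byzantine"        # out-of-range proxy for a Byzantine fault

# Example: classify every neuron from a window of reservoir states (T x N)
states = np.tanh(np.random.randn(50, 100))   # healthy tanh states lie in (-1, 1)
states[:, 3] = 0.0                           # simulate a crashed neuron
states[:, 7] = 5.0                           # simulate a Byzantine neuron
labels = [classify_neuron(states[:, i]) for i in range(states.shape[1])]
```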
2.3. Fault Handling Process
Algorithm 1: Random fault processing on neurons.
Algorithm 2: Fault detection method.
Algorithm 3: Fault tolerance method.
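The three algorithms above (random fault processing, fault detection, fault tolerance) imply a pipeline in which neurons flagged as faulty are withdrawn from the current computational task so that the remaining healthy neurons carry the prediction. The sketch below illustrates that idea for the readout stage only, under the assumption that faulty states are masked and the output weights are recomputed with the generalized inverse; the function and variable names are illustrative, not the paper's pseudocode.

```python
import numpy as np

def withdraw_faulty_neurons(X, Y, faulty_idx):
    """Fault-tolerance step: withdraw faulty reservoir neurons from the readout.

    X          : reservoir state collection matrix, shape (T, N)
    Y          : target output matrix, shape (T, L)
    faulty_idx : indices of neurons flagged by the fault detection step
    Returns the masked states and readout weights recomputed with the
    pseudo-inverse, so only the healthy neurons contribute to the output.
    """
    X_masked = X.copy()
    X_masked[:, faulty_idx] = 0.0            # faulty neurons no longer contribute
    W_out = np.linalg.pinv(X_masked) @ Y     # least-squares readout on healthy states
    return X_masked, W_out

# Usage on synthetic data: 200 time steps, 100 reservoir neurons, 1 output
X = np.tanh(np.random.randn(200, 100))
Y = np.random.randn(200, 1)
X_ft, W_out = withdraw_faulty_neurons(X, Y, faulty_idx=[3, 7, 42])
prediction = X_ft @ W_out
```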
3. Simulation Experiments
3.1. Datasets and Experimental Settings
3.2. Evaluation Metrics
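The experiments are scored by the normalized root mean square error (NRMSE), the mean absolute error (MAE), and the correlation coefficient R. Assuming the standard definitions of these metrics (the paper's exact normalization constant is not reproduced here), for a target $y(t)$ and prediction $\hat{y}(t)$ over $T$ test steps:

```latex
\mathrm{NRMSE} = \sqrt{\frac{\sum_{t=1}^{T}\bigl(y(t)-\hat{y}(t)\bigr)^{2}}{T\,\sigma_{y}^{2}}},\qquad
\mathrm{MAE} = \frac{1}{T}\sum_{t=1}^{T}\bigl|y(t)-\hat{y}(t)\bigr|,\qquad
R = \frac{\operatorname{cov}\bigl(y,\hat{y}\bigr)}{\sigma_{y}\,\sigma_{\hat{y}}}
```

Lower NRMSE and MAE, and an R closer to 1, indicate a better reconstruction of the target series.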
3.3. Recoverable Performance Analysis
3.4. Statistical Validation
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Vlachas, P.R.; Pathak, J.; Hunt, B.R.; Sapsis, T.P.; Girvan, M.; Ott, E.; Koumoutsakos, P. Backpropagation algorithms and reservoir computing in recurrent neural networks for the forecasting of complex spatiotemporal dynamics. Neural Netw. 2020, 126, 191–217.
2. Mansoor, M.; Grimaccia, F.; Leva, S.; Mussetta, M. Comparison of echo state network and feed-forward neural networks in electrical load forecasting for demand response programs. Math. Comput. Simul. 2021, 184, 282–293.
3. Sun, X.C.; Gui, G.; Li, Y.Q.; Liu, R.P.; An, Y.L. ResInNet: A Novel Deep Neural Network with Feature Reuse for Internet of Things. IEEE Internet Things J. 2019, 6, 679–691.
4. Sun, X.C.; Li, T.; Li, Q.; Huang, Y.; Li, Y.Q. Deep Belief Echo-State Network and Its Application to Time Series Prediction. Knowl.-Based Syst. 2017, 130, 17–29.
5. Scardapane, S.; Uncini, A. Semi-supervised echo state networks for audio classification. Cogn. Comput. 2017, 9, 125–135.
6. Zhang, S.H.; Sun, Z.Z.; Wang, M.; Long, J.Y.; Bai, Y.; Li, C. Deep Fuzzy Echo State Networks for Machinery Fault Diagnosis. IEEE Trans. Fuzzy Syst. 2019, 28, 1205–1218.
7. Deng, H.L.; Zhang, L.; Shu, X. Feature Memory-Based Deep Recurrent Neural Network for Language Modeling. Appl. Soft Comput. 2018, 68, 432–446.
8. Chen, M.Z.; Saad, W.; Yin, C.C.; Debbah, M. Data Correlation-Aware Resource Management in Wireless Virtual Reality (VR): An Echo State Transfer Learning Approach. IEEE Trans. Commun. 2019, 67, 4267–4280.
9. Yang, X.F.; Zhao, F. Echo State Network and Echo State Gaussian Process for Non-Line-of-Sight Target Tracking. IEEE Syst. J. 2020, 14, 3885–3892.
10. Hu, R.; Tang, Z.R.; Song, X.; Luo, J.; Wu, E.Q.; Chang, S. Ensemble echo network with deep architecture for time-series modeling. Neural Comput. Appl. 2021, 33, 4997–5010.
11. Liu, J.X.; Sun, T.N.; Luo, Y.L.; Yang, S.; Cao, Y.; Zhai, J. Echo State Network Optimization Using Binary Grey Wolf Algorithm. Neurocomputing 2020, 385, 310–318.
12. Li, Y.; Li, F.J. PSO-based growing echo state network. Appl. Soft Comput. 2019, 85, 105774.
13. Han, X.Y.; Zhao, Y. Reservoir computing dissection and visualization based on directed network embedding. Neurocomputing 2021, 445, 134–148.
14. Barredo Arrieta, A.; Gil-Lopez, S.; Laña, I.; Bilbao, M.N.; Del Ser, J. On the post-hoc explainability of deep echo state networks for time series forecasting, image and video classification. Neural Comput. Appl. 2022, 34, 10257–10277.
15. Torres-Huitzil, C.; Girau, B. Fault and Error Tolerance in Neural Networks: A Review. IEEE Access 2017, 5, 17322–17341.
16. Li, W.S.; Ning, X.F.; Ge, G.J.; Chen, X.M.; Wang, Y.; Yang, H.Z. FTT-NAS: Discovering fault-tolerant neural architecture. In Proceedings of the 2020 25th Asia and South Pacific Design Automation Conference (ASP-DAC), Beijing, China, 13–16 January 2020; pp. 211–216.
17. Zhao, K.; Di, S.; Li, S.H.; Liang, X.; Zhai, Y.J.; Chen, J.Y.; Ouyang, K.M.; Cappello, F.; Chen, Z.Z. FT-CNN: Algorithm-based fault tolerance for convolutional neural networks. IEEE Trans. Parallel Distrib. Syst. 2020, 32, 1677–1689.
18. Wang, J.; Chang, Q.Q.; Chang, Q.; Liu, Y.S.; Pal, N.R. Weight noise injection-based MLPs with group lasso penalty: Asymptotic convergence and application to node pruning. IEEE Trans. Cybern. 2018, 49, 4346–4364.
19. Dey, P.; Nag, K.; Pal, T.; Pal, N.R. Regularizing multilayer perceptron for robustness. IEEE Trans. Syst. Man Cybern. Syst. 2017, 48, 1255–1266.
20. Wang, H.; Feng, R.B.; Han, Z.F.; Leung, C.S. ADMM-based algorithm for training fault tolerant RBF networks and selecting centers. IEEE Trans. Neural Netw. Learn. Syst. 2017, 29, 3870–3878.
21. Duddu, V.; Rajesh Pillai, N.; Rao, D.V.; Balas, V.E. Fault tolerance of neural networks in adversarial settings. J. Intell. Fuzzy Syst. 2020, 38, 5897–5907.
22. Kosaian, J.; Rashmi, K.V. Arithmetic-intensity-guided fault tolerance for neural network inference on GPUs. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, St. Louis, MO, USA, 14–19 November 2021; pp. 1–15.
23. Gong, J.; Yang, M.F. Evolutionary fault tolerance method based on virtual reconfigurable circuit with neural network architecture. IEEE Trans. Evol. Comput. 2017, 22, 949–960.
24. Naeem, M.; McDaid, L.J.; Harkin, J.; Wade, J.J.; Marsland, J. On the role of astroglial syncytia in self-repairing spiking neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 2370–2380.
25. Liu, J.X.; McDaid, L.J.; Harkin, J.; Karim, S.; Johnson, A.P.; Millard, A.G.; Hilder, J.; Halliday, D.M.; Tyrrell, A.M.; Timmis, J. Exploring self-repair in a coupled spiking astrocyte neural network. IEEE Trans. Neural Netw. Learn. Syst. 2018, 30, 865–875.
26. Liu, S.S.; Reviriego, P.; Lombardi, F. Selective neuron re-computation (SNRC) for error-tolerant neural networks. IEEE Trans. Comput. 2021, 71, 684–695.
27. Hoang, L.H.; Hanif, M.A.; Shafique, M. FT-ClipAct: Resilience analysis of deep neural networks and improving their fault tolerance using clipped activation. In Proceedings of the 2020 Design, Automation and Test in Europe Conference and Exhibition (DATE), Grenoble, France, 9–13 March 2020; pp. 1241–1246.
28. Peng, Y.X.; Sun, K.H.; He, S.B. A discrete memristor model and its application in Hénon map. Chaos Solitons Fractals 2020, 137, 109873.
29. Li, Y.; Li, F.J. Growing deep echo state network with supervised learning for time series prediction. Appl. Soft Comput. 2022, 128, 109454.
30. Raca, D.; Quinlan, J.J.; Zahran, A.H.; Sreenan, C.J. Beyond throughput: A 4G LTE dataset with channel and context metrics. In Proceedings of the 9th ACM Multimedia Systems Conference, Amsterdam, The Netherlands, 12–15 June 2018; pp. 460–465.
31. Jaeger, H. Short Term Memory in Echo State Networks; Fraunhofer-Gesellschaft: Sankt Augustin, Germany, 2002.
Fault Type | Fault Tolerance Method | Main Idea | Reference
---|---|---|---
Neuron faults | Redundancy removal | Remove redundant nodes in the hidden layer based on a Lasso penalty term | [18]
Weight perturbation | Improved algorithm | Introduce three regularization terms to penalize the systematic error | [19]
Weight perturbation | Improved algorithm | Fault tolerance objective function with -parametric term | [20]
External interference | Improved training | Train a deep neural network (DNN) with input noise and gradient noise | [21]
Convolutional layer fault | Improved algorithm | Based on four algorithm-based fault tolerance (ABFT) schemes | [17]
Neural network layers | Improved algorithm | Arithmetic-intensity-guided ABFT reduces the time of redundant fault-tolerant execution of neural networks | [22]
Neuron removal | Genetic algorithm | Genetic algorithm-based fault tolerance using programmable structures | [23]
Fault Type | Fault Tolerance Method | Main Idea | Reference
---|---|---|---
Neuron failure | Recovery technique | Re-establish neuronal firing activity by enhancing the weight of healthy synapses | [24]
Weight failure | Recovery technique | Mapping between input and output is established by Bienenstock–Cooper–Munro learning rules | [25]
Neuron faults | Recovery technique | Recompute the neurons with unreliable classification results | [26]
Weight perturbation | Low-latency fault detection | Use error mitigation techniques to improve DNN failure recovery capability | [27]
No. | Contribution
---|---
1 | Based on the original ESN, we constructed ESN models with random neuron faults, considering and analyzing the behavior of randomly failing reservoir neurons during network training.
2 | For random neuron faults, we proposed a fault detection mechanism that detects whether faults occur during network training.
3 | We designed an active fault tolerance strategy that adaptively withdraws the randomly faulty neurons from the current computational task and maintains the normal operation of RC.
4 | We presented the main algorithm flows of the proposed model, including the neuron random fault handling algorithm, the fault detection algorithm, and the random fault tolerance algorithm.
5 | We evaluated the proposed model on widely used time series benchmarks with different sources and characteristics, experimentally verifying the effectiveness of the fault tolerance scheme and the model's ability to recover from faults.
Variable | Meaning
---|---
Random variables |
K | Size of the ESN input layer
N | Number of neurons in the reservoir
L | Size of the ESN output layer
E | Output error of the reservoir
Matrix variables |
u(t) | Network input at time t
x(t) | State output of the reservoir neurons at time t
y(t) | L-dimensional readout at time t
W^in | Input weight matrix from the K inputs to the N reservoir neurons
W | Internal weight matrix of the N reservoir neurons
W^back | Feedback weight matrix from the L readout neurons to the N reservoir neurons
W^out | Output weight matrix of the ESN network
X | Reservoir state collection matrix
X^+ | Generalized inverse matrix of X
Y | Target output matrix of the ESN network
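The variables above follow the usual echo state network formulation (cf. Jaeger [31]). Assuming that standard convention, with the state collection matrix X holding reservoir states row-wise, the state update, readout, and readout training can be written as:

```latex
\mathbf{x}(t+1) = f\bigl(\mathbf{W}^{\mathrm{in}}\mathbf{u}(t+1) + \mathbf{W}\,\mathbf{x}(t) + \mathbf{W}^{\mathrm{back}}\mathbf{y}(t)\bigr),\qquad
\mathbf{y}(t) = \mathbf{W}^{\mathrm{out}}\mathbf{x}(t),\qquad
\mathbf{W}^{\mathrm{out}} = \bigl(\mathbf{X}^{+}\mathbf{Y}\bigr)^{\mathsf{T}}
```

where f is the reservoir activation function (typically tanh, which keeps healthy states inside (−1, 1)).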
Parameter | Value
---|---
Spectral radius | 0.8
Reservoir sparsity | 0.1
Dataset division (train:test) | 1:1
Maximum ratio of faulty neurons | 10%
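As an illustration of these settings only (not the authors' implementation), the sketch below generates a random reservoir with the listed sparsity and spectral radius; the reservoir size and random seed are arbitrary choices for the example.

```python
import numpy as np

def build_reservoir(n_neurons=100, sparsity=0.1, spectral_radius=0.8, seed=0):
    """Random reservoir weight matrix with the sparsity and spectral radius listed above."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, size=(n_neurons, n_neurons))
    mask = rng.random((n_neurons, n_neurons)) < sparsity   # keep ~10% of connections
    W = W * mask
    # Rescale so the largest absolute eigenvalue equals the target spectral radius
    rho = np.max(np.abs(np.linalg.eigvals(W)))
    if rho > 0:
        W = W * (spectral_radius / rho)
    return W

W = build_reservoir()
```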
Data | Model | NRMSE | MAE | R
---|---|---|---|---
Henon Map | RAFT-ESN | 0.0704 | 0.0050 | 0.9949
Henon Map | WESN | 0.0486 | 0.0024 | 0.9976
NARMA | RAFT-ESN | 0.2023 | 0.0409 | 0.9590
NARMA | WESN | 0.2073 | 0.0430 | 0.9570
MSO | RAFT-ESN | 0.0077 | 5.9671 × 10 | 0.9999
MSO | WESN | 0.0127 | 1.6164 × 10 | 0.9998
Car | RAFT-ESN | 0.0921 | 0.0085 | 0.9915
Car | WESN | 0.0954 | 0.0091 | 0.9909
Pedestrian | RAFT-ESN | 0.2197 | 0.0483 | 0.9516
Pedestrian | WESN | 0.1471 | 0.0216 | 0.9783
Train | RAFT-ESN | 0.0988 | 0.0098 | 0.9902
Train | WESN | 0.1003 | 0.0101 | 0.9899
Model | Henon Map | NARMA | MSO | Car | Pedestrian | Train
---|---|---|---|---|---|---
RAFT-ESN | 19.8279 | 29.0554 | 25.5781 | 29.0554 | 33.0639 | 33.8662
WESN | 17.9732 | 28.8861 | 24.3062 | 28.8861 | 33.1130 | 33.8940
Data | Parameter | RAFT-ESN | WESN
---|---|---|---
Henon Map | H | 0 | 0
Henon Map | p | 0.9868 | 0.9734
NARMA | H | 0 | 0
NARMA | p | 0.8024 | 0.7401
MSO | H | 0 | 0
MSO | p | 0.9974 | 0.9885
Car | H | 0 | 0
Car | p | 0.6221 | 0.7520
Pedestrian | H | 0 | 0
Pedestrian | p | 0.6471 | 0.7974
Train | H | 0 | 0
Train | p | 0.9413 | 0.9488
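The H and p values above read as the outcome of a hypothesis test in which H = 0 means the null hypothesis of equal predictive performance cannot be rejected at the 5% significance level. The specific test is not named here, so the sketch below only illustrates how such values could be obtained, assuming a two-sample t-test on hypothetical per-run errors; the error arrays are invented for the example.

```python
import numpy as np
from scipy import stats

# Hypothetical per-run prediction errors of the two models on one dataset
err_raft = np.random.normal(0.070, 0.005, size=30)
err_wesn = np.random.normal(0.071, 0.005, size=30)

t_stat, p_value = stats.ttest_ind(err_raft, err_wesn)
H = int(p_value < 0.05)   # H = 1 would reject the null hypothesis of equal means
print(H, p_value)
```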
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Sun, X.; Gao, J.; Wang, Y. Towards Fault Tolerance of Reservoir Computing in Time Series Prediction. Information 2023, 14, 266. https://doi.org/10.3390/info14050266