Abstract Reservoir Computing
Abstract
1. Introduction
- An abstract regulariser leading to robust weights for reservoir computing systems
- A closed-form solution to the regression problem using the abstract regulariser
- A numerical study of the robustness of physical reservoir computing systems against different types of errors
2. Materials and Methods
2.1. Reservoir Computing
2.1.1. Training
2.1.2. Exploitation
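The training and exploitation phases can be sketched with a minimal echo state network. This is only an illustrative sketch: the reservoir size, scaling factors, ridge strength, and the toy delayed-input task are assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: 50 reservoir units, scalar input, 200 time steps.
n_res, n_in, T = 50, 1, 200

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()  # scale spectral radius below 1

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, u_t in enumerate(u):
        x = np.tanh(W @ x + W_in @ u_t)  # nonlinear state update
        states[t] = x
    return states

# Training: fit a linear readout on collected states with ridge regression.
u = rng.uniform(0.0, 0.5, (T, n_in))
y = np.roll(u[:, 0], 1)  # toy target: the input delayed by one step
S = run_reservoir(u)
lam = 1e-6
W_out = np.linalg.solve(S.T @ S + lam * np.eye(n_res), S.T @ y)

# Exploitation: the reservoir stays fixed; only the readout is applied.
y_hat = run_reservoir(u) @ W_out
```

The reservoir itself is never trained; only the linear readout `W_out` is fitted, which is what makes physical substrates usable as reservoirs.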
2.2. Mass-Spring Networks
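A mass-spring network can be simulated with a small semi-implicit Euler integrator. The triangle geometry, stiffness, damping, mass, and time step below are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

# Three masses in 2D connected by a triangle of springs (illustrative geometry).
pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
vel = np.zeros_like(pos)
springs = [(0, 1), (1, 2), (2, 0)]
rest = {s: np.linalg.norm(pos[s[0]] - pos[s[1]]) for s in springs}
k, c, m, dt = 50.0, 0.5, 1.0, 0.01  # stiffness, damping, mass, time step (assumed)

def step(pos, vel):
    forces = np.zeros_like(pos)
    for i, j in springs:
        d = pos[j] - pos[i]
        L = np.linalg.norm(d)
        f = k * (L - rest[(i, j)]) * d / L  # Hooke's law along the spring axis
        forces[i] += f
        forces[j] -= f
    forces -= c * vel                        # viscous damping
    vel = vel + dt * forces / m              # semi-implicit Euler update
    return pos + dt * vel, vel

pos[2] += 0.1  # perturb one mass so the network oscillates
for _ in range(100):
    pos, vel = step(pos, vel)
```

In the reservoir computing setting, the mass positions (or spring lengths) sampled at each step play the role of the reservoir state.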
2.3. Abstract Reservoir Computing
2.4. Experimental Setup
2.4.1. Mass-Spring Network
2.4.2. Hénon Time-Series
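The Hénon time-series comes from the Hénon map with its standard chaotic parameters a = 1.4, b = 0.3; the initial condition and series length below are assumptions.

```python
def henon(n, a=1.4, b=0.3, x0=0.0, y0=0.0):
    """Generate n points of the Hénon map x-coordinate with standard parameters."""
    xs = []
    x, y = x0, y0
    for _ in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        xs.append(x)
    return xs

series = henon(1000)
```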
2.4.3. NARMA10 Time-Series
2.4.4. NARMA20 Time-Series
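The NARMA10 benchmark is commonly defined by the recurrence y(t+1) = 0.3 y(t) + 0.05 y(t) Σ_{i=0}^{9} y(t−i) + 1.5 u(t−9) u(t) + 0.1 with inputs u drawn from U[0, 0.5]; NARMA20 extends the window to 20 lags (and is often wrapped in tanh to keep it bounded; the exact variant used in the paper is not reproduced here). A sketch under these common conventions:

```python
import numpy as np

def narma(u, order=10):
    """NARMA-N target series for an input sequence u (common convention)."""
    y = np.zeros(len(u))
    for t in range(order, len(u) - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * y[t - order + 1 : t + 1].sum()
                    + 1.5 * u[t - order + 1] * u[t]
                    + 0.1)
    return y

rng = np.random.default_rng(1)
u = rng.uniform(0.0, 0.5, 1000)
y10 = narma(u, order=10)
```

The long multiplicative memory in the sum term is what makes NARMA a standard test of a reservoir's fading memory.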
2.4.5. Baselines
- Training with ridge regression (classical model)
- Training with linear regression and added noise (noise model)
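The two baselines can be sketched as follows. The state matrix, targets, ridge strength, and noise level are illustrative assumptions; only the general recipes (ridge regression, and least squares on noise-perturbed states) follow the description above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical collected reservoir states S (T x N) and targets y (T,).
T, N = 200, 30
S = rng.normal(size=(T, N))
y = S @ rng.normal(size=N) + 0.1 * rng.normal(size=T)

# Classical baseline: ridge regression with regularisation strength lam.
lam = 1e-2
W_ridge = np.linalg.solve(S.T @ S + lam * np.eye(N), S.T @ y)

# Noise baseline: ordinary least squares on states perturbed by Gaussian noise,
# an implicit regulariser that encourages robustness to sensor perturbations.
S_noisy = S + rng.normal(0.0, 0.1, S.shape)
W_noise, *_ = np.linalg.lstsq(S_noisy, y, rcond=None)
```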
2.4.6. Sensor Augmentations
- Sensor Failure: before testing, masses were randomly selected with a given probability p, and their readings were forced to 0 during testing.
- Sensor Noise: Gaussian noise with zero mean and varying standard deviation was added to the readings during testing.
- Fixed Sensor Displacement: sensor readings were displaced by a fixed value z.
- Mass Position Displacement: the mass positions were randomly displaced by a random vector.
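The four augmentations above can be sketched as simple transformations of a sensor-reading matrix. The matrix shape and the exact sampling of the random displacement vector (here: magnitude at most k) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.normal(size=(200, 30))        # hypothetical readings: 200 steps x 30 masses
pos = rng.normal(size=(30, 2))        # hypothetical 2D mass positions

def sensor_failure(S, p, rng):
    """Force the readings of sensors selected with probability p to 0."""
    mask = rng.random(S.shape[1]) >= p
    return S * mask

def sensor_noise(S, sigma, rng):
    """Add zero-mean Gaussian noise with standard deviation sigma."""
    return S + rng.normal(0.0, sigma, S.shape)

def sensor_shift(S, z):
    """Displace every reading by a fixed value z."""
    return S + z

def mass_displacement(pos, k, rng):
    """Displace each mass by a random vector of magnitude at most k (assumed)."""
    d = rng.normal(size=pos.shape)
    d *= k * rng.random((len(pos), 1)) / np.linalg.norm(d, axis=1, keepdims=True)
    return pos + d
```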
3. Results and Discussion
4. Conclusions and Outlook
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
Appendix A. Results
[Table: Sensor Failure MSE ± standard deviation for the Hénon, NARMA10, and NARMA20 datasets under the abstract, classical, and noise models, at parameter values 0.00–0.10. Numeric entries not recoverable.]
[Table: Sensor Noise MSE ± standard deviation for the Hénon, NARMA10, and NARMA20 datasets under the abstract, classical, and noise models, at parameter values 0.00–0.10. Numeric entries not recoverable.]
[Table: Sensor Shift MSE ± standard deviation for the Hénon, NARMA10, and NARMA20 datasets under the abstract, classical, and noise models, at parameter values 0.00–0.10. Numeric entries not recoverable.]
[Table: Mass Displacement MSE ± standard deviation for the Hénon, NARMA10, and NARMA20 datasets under the abstract, classical, and noise models, at parameter values 0.00–0.10. Numeric entries not recoverable.]
Augmentation | Parameter | Range
---|---|---
Sensor Failure | p |
Sensor Noise | |
Fixed Sensor Displacement | z |
Mass Position Displacement | k |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Senn, C.W.; Kumazawa, I. Abstract Reservoir Computing. AI 2022, 3, 194–210. https://doi.org/10.3390/ai3010012