# Photonic Reservoir Computer with Output Expansion for Unsupervized Parameter Drift Compensation


## Abstract


## 1. Introduction

## 2. Materials and Methods

### 2.1. Reservoir Computing with Output Layer Expansion

### 2.2. Output Expansion with First and Second Degree Polynomials

### 2.3. Slow Noise and Feature Dependent Weights

### 2.4. Setup

## 3. Results

### 3.1. Ability of Random Features to Capture Parameter Variations

### 3.2. Memory Capacity

### 3.3. Nonlinear Channel Equalization Task

## 4. Conclusions

## Author Contributions

## Funding

## Data Availability Statement

## Conflicts of Interest

## Appendix A. Analysis of Phase-Drift in Simple Delay-Based Reservoir Computer

## Appendix B. Construction of Slow Features

## References


**Figure 1.** Illustration of the output expansion scheme. In the example shown, 3 neurons ${X}_{m}$ are measured, 1 random feature ${Y}_{R}$ is constructed with random weights ${W}_{R}$, and this auxiliary feature is mixed with ${X}_{m}$ to obtain a total of 6 output features ${X}_{out}$. These output features are combined with trained readout weights ${W}_{out}$ to form a task-solving reservoir output ${Y}_{out}$. This output expansion contains polynomial functions of ${X}_{m}$ of first and second degree; the corresponding subsets of readout weights are labeled ${W}_{out}^{\left(1\right)}$ and ${W}_{out}^{\left(2\right)}$. The proposed scheme supports larger numbers of neurons N and auxiliary features P.
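As a minimal sketch, the expansion in this example can be written out in NumPy. All array names, sizes and the random data here are illustrative only, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

N, P = 3, 1                        # measured neurons, random auxiliary features
X_m = rng.standard_normal(N)       # measured neural responses at one time step
W_R = rng.standard_normal((P, N))  # fixed random weights

# Auxiliary random feature(s): a linear combination of the measured neurons.
Y_R = W_R @ X_m                    # shape (P,)

# Output expansion: the original responses plus their products with Y_R,
# i.e. polynomial functions of X_m of first and second degree.
X_out = np.concatenate([X_m, np.outer(Y_R, X_m).ravel()])  # N*(P+1) = 6 features

# Trained readout weights (random here, for illustration only).
W_out = rng.standard_normal(N * (P + 1))
Y_out = W_out @ X_out              # scalar task-solving output
```

In a real run, `W_out` would be obtained by (ridge) regression against the task target rather than drawn at random.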

**Figure 2.** Illustration of the weight-tuning scheme. The example shows 3 measured neural responses ${X}_{m}$, which are combined with 3 time-dependent readout weights ${\tilde{W}}_{out}$ following Equation (16) to form 1 task-solving output ${Y}_{out}$ following Equation (17). The example has 1 random auxiliary feature ${Y}_{R}$, which is obtained with random weights ${W}_{R}$ and used to tune ${\tilde{W}}_{out}$. The proposed scheme supports larger numbers of neurons N and auxiliary features P. This example is equivalent to the scheme shown in Figure 1.
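The equivalence between the expansion view and the weight-tuning view follows from regrouping terms in the readout sum. A short NumPy check, with illustrative sizes and random data (not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)

N, P = 3, 1                        # measured neurons, random auxiliary features
X_m = rng.standard_normal(N)       # measured neural responses at one time step
W_R = rng.standard_normal((P, N))
Y_R = W_R @ X_m                    # random auxiliary feature(s)

# Readout weights split by polynomial degree, as in Figure 1.
W1 = rng.standard_normal(N)        # degree-1 block, W_out^(1)
W2 = rng.standard_normal((P, N))   # degree-2 block, W_out^(2)

# View 1 (output expansion): static weights applied to the expanded features.
Y_expansion = W1 @ X_m + Y_R @ (W2 @ X_m)

# View 2 (weight tuning): fold Y_R into time-dependent readout weights first.
W_tilde = W1 + Y_R @ W2            # tuned weights for this time step
Y_tuning = W_tilde @ X_m

assert np.isclose(Y_expansion, Y_tuning)  # the two views coincide
```

Because ${Y}_{R}$ varies slowly with the drifting parameter, folding it into ${\tilde{W}}_{out}$ yields readout weights that track the drift without supervision.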

**Figure 3.** Schematic of the fiber-ring cavity of length L used to implement an optical reservoir. In the input layer, a polarization controller maps the input polarization onto a polarization eigenmode of the cavity. Data are injected by means of a Mach–Zehnder modulator (MZM). A coupler with power transmission coefficient $T=50\%$ couples the input field ${E}_{in}^{\left(n\right)}\left(\tau \right)$ to the cavity field ${E}^{\left(n\right)}(z,\tau )$ and couples to the output field ${E}_{out}^{\left(n\right)}\left(\tau \right)$, where n is the roundtrip index, $\tau $ is time (with $0<\tau <{t}_{R}$) and z is the longitudinal position in the ring cavity. A photodetector (PD) records the neural responses to be processed by a digital computer where the output expansion is realized.

**Figure 4.** Example of experimental phase variations over different iterations of the experiment. The solid line is the measured phase, based on pulse interference, and the dots represent the phase estimated using a linear combination of all 20 random features. The iterations take place approximately every second. The experiment is carefully shielded so that $\theta $ varies slowly.

**Figure 5.** Example of simulated phase variations within 1 iteration of the simulated experiment. The solid line represents the true phase variations, covering the full $2\pi $ range with kHz bandwidth, and the dots represent the phase estimated using a linear combination of all 20 random features.

**Figure 6.** Correlation coefficients obtained when mapping increasing sets of random features to $\mathrm{cos}\theta $ using linear regression. For the experimental comparison, an estimate of $\mathrm{cos}\theta $ is used, whereas in simulation, the known value of $\mathrm{cos}\theta $ is used. Error bars are obtained by running experiments/simulations for several iterations and using different sets of random weights for the construction of the random features.
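The mapping behind this figure is an ordinary least-squares fit of the random features onto $\mathrm{cos}\theta$, scored by the correlation coefficient. A minimal sketch on synthetic data (the drifting phase, the feature model and all parameters here are hypothetical stand-ins, not the experimental setup):

```python
import numpy as np

rng = np.random.default_rng(2)

T, P = 1000, 20                    # time steps and number of random features
theta = np.cumsum(0.05 * rng.standard_normal(T))  # synthetic slowly drifting phase
target = np.cos(theta)

# Hypothetical random features: each depends linearly on cos(theta), plus noise.
Y_R = rng.standard_normal((P, 1)) * target + 0.1 * rng.standard_normal((P, T))

# Map the P features onto cos(theta) with linear least squares,
# then score the fit with the correlation coefficient.
w, *_ = np.linalg.lstsq(Y_R.T, target, rcond=None)
estimate = Y_R.T @ w
corr = np.corrcoef(estimate, target)[0, 1]
```

With more features, the fit averages out the per-feature noise, which is the trend the figure reports as the feature set grows.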

**Figure 7.** Experimental and simulated memory capacity of the reservoir computer as the number of random output features is increased from 0 to 20. The stacked vertical bars are color-coded to represent (from the bottom up) the memory capacities of degrees 1 (dark blue), 2 (red), 3 (orange), 4 (purple) and 5 (green), and all higher degrees combined (light blue). The total height thus represents the total memory capacity of the system.

**Figure 8.** Experimental and simulated results on the 4-level channel equalization benchmark task. The symbol error rate is reported as a function of the number of random features that are used to tune the task-related readout weights. Error bars are obtained by running experiments/simulations for several iterations and using different sets of random weights for the construction of the random features.

**Figure 9.** Additional simulated results on the 4-level channel equalization benchmark task. The symbol error rate is reported as a function of the total number of readout weights ${N}^{\prime}$ that must be optimized. Different curves represent reservoirs with different numbers of neurons N, ranging from 10 to 40. For each system, the number P of random features used in the output expansion is varied as 0, 1, 2, 5 and 10 from left to right, so that ${N}^{\prime}=N(P+1)$. Error bars are obtained by running experiments/simulations for several iterations and using different sets of random weights for the construction of the random features.
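The weight count in this sweep follows directly from the size of the expanded feature vector. A small helper makes the relation explicit (the grid of reservoir sizes is an assumption, since the caption only states the range 10 to 40):

```python
def n_readout_weights(N: int, P: int) -> int:
    """Readout weights to optimize when N measured neurons are expanded
    with P random auxiliary features: N' = N * (P + 1)."""
    return N * (P + 1)

# Sweep loosely corresponding to Figure 9 (assumed grid of reservoir sizes).
table = {(N, P): n_readout_weights(N, P)
         for N in (10, 20, 30, 40)
         for P in (0, 1, 2, 5, 10)}
```

For example, the largest system in the assumed grid (N = 40, P = 10) requires 440 readout weights, while a bare readout with no expansion (P = 0) uses only N weights.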

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Pauwels, J.; Van der Sande, G.; Verschaffelt, G.; Massar, S.
Photonic Reservoir Computer with Output Expansion for Unsupervized Parameter Drift Compensation. *Entropy* **2021**, *23*, 955.
https://doi.org/10.3390/e23080955
