
Quantum Reservoir Computing for Speckle Disorder Potentials

IFISC, Institut de Física Interdisciplinària i Sistemes Complexos (UIB-CSIC), UIB Campus, E-07122 Palma de Mallorca, Spain
Condens. Matter 2022, 7(1), 17; https://doi.org/10.3390/condmat7010017
Submission received: 14 December 2021 / Revised: 19 January 2022 / Accepted: 26 January 2022 / Published: 28 January 2022
(This article belongs to the Special Issue Computational Methods for Quantum Matter)

Abstract
Quantum reservoir computing is a machine learning approach designed to exploit the dynamics of quantum systems with memory to process information. Its advantage lies in combining the quantum resources provided by the reservoir with a simple and fast training strategy. In this work, this technique is introduced with a quantum reservoir of spins and applied to find the ground state energy of an additional quantum system. The quantum reservoir computer is trained with a linear model to predict the lowest energy of a particle in the presence of different speckle disorder potentials. The performance of the task is analyzed with a focus on the observable quantities extracted from the reservoir, and it is shown to be enhanced when two-qubit correlations are employed.

1. Introduction

In the last few years, the study of quantum systems has benefited from the growing interest in, and development of, machine learning techniques to face both theoretical and experimental challenges, leading to the emergence of the broad field of quantum machine learning [1,2,3,4,5]. Successful examples of the use of machine learning include, among others, the detection and classification of quantum phases [6,7,8,9,10,11,12,13], the prediction of the ground state energy and other characteristic quantities of quantum systems [14,15,16,17], and enhanced control and readout in experimental setups [18,19,20,21]. Additionally, many efforts are devoted to developing machine learning algorithms that exploit quantum resources, aiming to find a quantum advantage in performing tasks. Quantum reservoir computing (QRC) and related approaches belong to this last category [22,23].
The concept of QRC was introduced in [24] as an extension of classical reservoir computing (RC) [25,26,27,28] to the quantum realm. The main idea behind RC, as an unconventional computing method [29,30], is to use the natural dynamics of physical systems to process information, together with a simplified training strategy [31]. In supervised learning techniques such as deep neural networks, one of the major drawbacks is the training process, in which typically thousands of free parameters must be optimally adjusted, requiring considerable computational resources and/or time. In RC, instead, the connections between the constituents of the reservoir are kept fixed and only the output quantities from the reservoir are involved in the training process, so a trained reservoir can easily be retrained for a different purpose. This scheme has been shown to be sufficient to achieve very good performance in diverse tasks [32,33,34,35].
Quantum reservoirs are good candidates for computational purposes for several reasons. First, the number of degrees of freedom in quantum systems increases exponentially with the number of constituents. Therefore, a large state space is available with relatively small systems, which has been shown to be beneficial, i.e., it increases the memory capacity [24,36,37,38,39]. Second, the presence of entanglement can also contribute to a quantum advantage when quantum correlations are exploited [39]. Finally, there exist several proposals suitable for implementation in a wide variety of experimental platforms to realize not only classical tasks but also quantum ones [22], for instance, entanglement detection [40], quantum state tomography [41], and quantum state preparation [42,43].
In this article, the possibility of using a quantum reservoir to study another quantum system is explored as an alternative to classical machine learning models. A first goal of this work is to show that a quantum reservoir can be used to make predictions of the ground state energy of a quantum particle in a speckle disorder potential [44] by providing only this external potential as an input and by using only a linear model to train the output observables. This problem is relevant for understanding the Anderson localization phenomenon that disorder induces in quantum systems, which determines their transport properties [45]. Additionally, we aim to analyze the effect on the performance when two-body quantum correlations in the reservoir are used, compared with one-body observables, for the mentioned task.
This work is organized as follows. In Section 2, the details of the quantum system under study are provided together with the database used. The description of how the input is encoded into the quantum reservoir is found in Section 3. In Section 4, the characteristics of the quantum reservoir system are explained. In Section 5, the quantum reservoir computing procedure is presented with the mathematical description of the state of the reservoir and its observables. The expressions of the trained models and an analysis of their performance are given in Section 6. Finally, the discussion of the results and the conclusions are in Section 7.

2. Database of Speckle Disorder Potentials and Ground State Energies

The problem to be addressed with the model proposed in this work consists of finding the lowest energy of the following Hamiltonian, which describes a particle of mass m in one dimension with position x in the presence of an external potential V ( x ) :
$$H = -\frac{\hbar^2}{2m}\,\frac{\partial^2}{\partial x^2} + V(x),$$
where $\hbar$ is the reduced Planck constant. $V(x)$ is a speckle potential that in cold-atom experiments is created by means of optical fields passing through a diffusion plate [46,47,48] and can be numerically produced with Gaussian random numbers [49,50]. This potential introduces disorder into the system and gives rise to the Anderson localization phenomenon [44,45].
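As a minimal illustration of the standard recipe (a smoothed complex Gaussian random field whose squared modulus plays the role of the speckle intensity), such a potential can be sketched numerically as follows. This is not the exact generator used for the dataset; the correlation length `sigma` and the normalization are illustrative choices.

```python
import numpy as np

def speckle_potential(K=1024, sigma=10.0, v0=1.0, seed=0):
    """Generate a 1D speckle-like disorder potential on K grid points.

    A complex Gaussian random field is low-pass filtered in Fourier space
    (correlation length sigma, in grid units); the potential is its squared
    modulus, mimicking the intensity of a laser field after a diffuser.
    """
    rng = np.random.default_rng(seed)
    field = rng.normal(size=K) + 1j * rng.normal(size=K)
    freqs = np.fft.fftfreq(K)
    filt = np.exp(-0.5 * (2 * np.pi * freqs * sigma) ** 2)
    smooth = np.fft.ifft(np.fft.fft(field) * filt)
    V = np.abs(smooth) ** 2
    return v0 * V / V.mean()  # normalize the mean intensity to v0
```

The resulting potential is non-negative by construction, as a physical speckle intensity must be.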
In previous studies, classical machine learning models with convolutional neural networks have been shown to be able to make very good predictions on the first energies of this system and for different system sizes by applying transfer learning protocols [16]. Additionally, the extension to the system with few repulsively interacting bosons [51] has also been explored including the particle number as an additional feature to the trained model [17].
The database used in this article is part of the database used in [17], which is publicly available at [52]. In this work, the first 10,000 speckle potential instances of the single-particle dataset and their corresponding ground state energies are used. The energies in the database were computed numerically by means of exact diagonalization as explained in [51] and in more detail in [53].

3. Input Encoding into the Quantum Reservoir

The values of each speckle potential instance in the dataset are provided on a discrete spatial grid of $K = 1024$ points, $V(x_k) = V_k$, with $k = 1, \dots, K$. Therefore, our input is a set of vectors of size 1024 in which the spatial structure of each potential is given by the order of the elements. For this reason, the values of the potential are introduced into the dynamics of the quantum reservoir in that same order. From the point of view of the quantum reservoir, a given speckle potential corresponds to an external time-dependent signal fed in at discrete times, $t = k\Delta t$.
The input at a given time, V k , is encoded in one of the qubits of the system (see Figure 1), qubit 1, by setting its state as [24,39,54,55]:
$$|\psi_k\rangle = \sqrt{1 - s_k}\,|0\rangle + \sqrt{s_k}\,|1\rangle,$$
where $s_k = V_k / V_{\mathrm{max}}$, and $V_{\mathrm{max}}$ is fixed once as the maximum value reached among all 10,000 speckle instances in the dataset. In this way, $s_k \in [0, 1]$, the state of the qubit is properly normalized, and all inputs are rescaled by the same quantity. The basis states are the eigenstates of the Pauli matrix $\hat{\sigma}^z$, namely $\hat{\sigma}^z |0\rangle = |0\rangle$ and $\hat{\sigma}^z |1\rangle = -|1\rangle$.
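As a minimal illustration, the encoding of Equation (2) maps each potential value to a normalized two-component state vector (the function name is illustrative):

```python
import numpy as np

def encode_input(V_k, V_max):
    """Input encoding of Eq. (2): |psi_k> = sqrt(1 - s_k)|0> + sqrt(s_k)|1>,
    with s_k = V_k / V_max, in the {|0>, |1>} basis."""
    s = V_k / V_max
    return np.array([np.sqrt(1.0 - s), np.sqrt(s)], dtype=complex)
```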

4. Hamiltonian of the Reservoir of Spins

The quantum reservoir employed in this work is a system consisting of N = 6 spins (or qubits). The unitary dynamics of this system are governed by the following transverse-field Ising Hamiltonian:
$$\hat{H}_R = \frac{1}{2}\sum_{i=1}^{N} h\,\hat{\sigma}_i^z + \sum_{i<j}^{N} J_{ij}\,\hat{\sigma}_i^x \hat{\sigma}_j^x,$$
where $\hat{\sigma}_i^z$ and $\hat{\sigma}_j^x$ are Pauli matrices acting on qubits $i$ and $j$, respectively. The spin–spin couplings $J_{ij}$, represented as lines of different thickness in Figure 1, are randomly generated once from a uniform distribution in the interval $[-J_s/2, J_s/2]$ and then kept constant. We work in a system of units with $\hbar = 1$ and $J_s = 1$. The time intervals $\Delta t$ are expressed in units of $1/J_s$, and $h$, which corresponds to an external magnetic field in the $z$ direction, is fixed at $h = 10 J_s$, such that the system is in the appropriate dynamical regime [54]. This kind of system was used in the original proposal of QRC in [24] and has been extensively studied for information-processing purposes in several further works [39,54,55,56,57,58,59].
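A sketch of how such a random transverse-field Ising Hamiltonian can be assembled as a dense matrix (helper names are illustrative; for $N = 6$ the matrix is $64 \times 64$):

```python
import numpy as np
from functools import reduce

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op_on(site_ops, N):
    """Tensor product placing the given single-qubit operators on their
    sites and the identity everywhere else."""
    return reduce(np.kron, [site_ops.get(i, I2) for i in range(N)])

def reservoir_hamiltonian(N=6, h=10.0, Js=1.0, seed=0):
    """Eq. (3): H_R = (1/2) sum_i h sigma_i^z + sum_{i<j} J_ij sigma_i^x sigma_j^x,
    with J_ij drawn once, uniformly from [-Js/2, Js/2]."""
    rng = np.random.default_rng(seed)
    H = sum(0.5 * h * op_on({i: SZ}, N) for i in range(N))
    for i in range(N):
        for j in range(i + 1, N):
            H = H + rng.uniform(-Js / 2, Js / 2) * op_on({i: SX, j: SX}, N)
    return H
```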

5. Quantum Reservoir Computer Operation

The role of the quantum reservoir is to provide a map from the input speckle potential to the output observables, which carry the information about the input processed during the time evolution of the reservoir system. The memory of the system is, for the present purposes, exploited within each instance. However, the system is reset before the introduction of each speckle potential at time $t = 0$, so that there are no dependencies between consecutive instances. The general scheme of the procedure is depicted in Figure 1.
The density matrix that describes the quantum state of the reservoir of spins before the injection of each potential reads:
$$\rho_0 = |0, \dots, 0\rangle\langle 0, \dots, 0|.$$
Afterwards, for a given speckle potential instance, the state of the reservoir at each time step k is given by:
$$\rho_k = e^{-i \hat{H}_R \Delta t} \left( |\psi_k\rangle\langle\psi_k| \otimes \mathrm{Tr}_1(\rho_{k-1}) \right) e^{i \hat{H}_R \Delta t},$$
where $\mathrm{Tr}_1(\cdot)$ indicates the partial trace with respect to the first qubit. The dependence on the speckle points, $V_k$, enters through fixing the state of the first qubit, $|\psi_k\rangle\langle\psi_k|$, in the form of Equation (2). Between input injections, there is a unitary evolution of the state of the reservoir of duration $\Delta t$ governed by the Hamiltonian in (3).
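The update of Equation (5) can be sketched as follows: trace out the first qubit, re-inject the encoded input state, and evolve unitarily. The eigendecomposition-based matrix exponential assumes a Hermitian Hamiltonian; function names are illustrative.

```python
import numpy as np

def unitary(H, dt):
    """U = exp(-i H dt), via the eigendecomposition of the Hermitian H."""
    E, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * E * dt)) @ V.conj().T

def qrc_step(rho, psi_k, U):
    """One QRC update, Eq. (5): replace qubit 1 with |psi_k><psi_k|,
    keep the partial trace over qubit 1 of the rest, then evolve."""
    d = rho.shape[0] // 2
    # Partial trace over the first (dimension-2) tensor factor.
    rho_rest = rho.reshape(2, d, 2, d).trace(axis1=0, axis2=2)
    injected = np.kron(np.outer(psi_k, psi_k.conj()), rho_rest)
    return U @ injected @ U.conj().T
```

The map is trace-preserving, so the state stays normalized between injections.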
From the state of the reservoir, the expectation values of the following single-qubit observables at each time step k are computed as:
$$\langle \hat{\sigma}_i^\alpha \rangle_k = \mathrm{Tr}\left[ \rho_k\, \hat{\sigma}_i^\alpha \right],$$
for all spins, i = 1 , , N , and in the three directions α = x , y , z . Afterwards, the average over all time steps is taken to obtain:
$$\overline{\langle \hat{\sigma}_i^\alpha \rangle} \equiv \frac{1}{K} \sum_{k=1}^{K} \langle \hat{\sigma}_i^\alpha \rangle_k.$$
Similarly, the expectation values of the two-qubit observables are calculated,
$$\langle \hat{\sigma}_i^\alpha \hat{\sigma}_j^\beta \rangle_k = \mathrm{Tr}\left[ \rho_k\, \hat{\sigma}_i^\alpha \hat{\sigma}_j^\beta \right],$$
and
$$\overline{\langle \hat{\sigma}_i^\alpha \hat{\sigma}_j^\beta \rangle} \equiv \frac{1}{K} \sum_{k=1}^{K} \langle \hat{\sigma}_i^\alpha \hat{\sigma}_j^\beta \rangle_k,$$
with i < j and α , β = x , y , z .
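The time-averaged expectation values of Equations (6)–(9) reduce to simple traces over the recorded states; a generic sketch (any list of Pauli-string operators can be passed as `ops`, so the same code serves the single-qubit and two-qubit features):

```python
import numpy as np

def expval(rho, A):
    """Expectation value <A> = Tr(rho A) for a Hermitian observable A."""
    return np.real(np.trace(rho @ A))

def averaged_features(rhos, ops):
    """Time-averaged expectation values, Eqs. (7) and (9): one feature per
    operator, averaged over the K recorded reservoir states."""
    return np.array([np.mean([expval(r, A) for r in rhos]) for A in ops])
```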

6. Training and Predictions of the Models

From the output observables of the quantum reservoir, two models are proposed to make predictions, $\tilde{E}$, of the ground state energies. Both are constructed by fitting a least-squares linear model with the training dataset, which corresponds to the first 7500 speckle instances and target energies $E$. The quality of the models is tested with the remaining 2500 potentials, and it is quantified by the mean absolute error (MAE),
$$\mathrm{MAE} = \frac{1}{M} \sum_{l=1}^{M} |E_l - \tilde{E}_l|,$$
and the coefficient of determination
$$R^2 = 1 - \frac{\sum_{l=1}^{M} (E_l - \tilde{E}_l)^2}{\sum_{l=1}^{M} (\bar{E} - E_l)^2},$$
where $\bar{E} \equiv (1/M)\sum_{l=1}^{M} E_l$ is the mean energy and $M$ is the number of instances: $M = 7500$ for the training data and $M = 2500$ for the test data. If $R^2 = 0$, the predicted and the target energies are not linearly related, whereas for $R^2 = 1$ the predictions are perfect.
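Both figures of merit can be computed directly from the target and predicted energies; a minimal sketch:

```python
import numpy as np

def mae(E, E_pred):
    """Mean absolute error, Eq. (10)."""
    return np.mean(np.abs(E - E_pred))

def r2(E, E_pred):
    """Coefficient of determination, Eq. (11)."""
    ss_res = np.sum((E - E_pred) ** 2)
    ss_tot = np.sum((E - E.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

Perfect predictions give MAE = 0 and $R^2 = 1$; predicting the mean energy for every instance gives $R^2 = 0$.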
In the first case, the single-qubit observables in Equation (7) are employed and the final output, E ˜ , for each instance is written as:
$$\tilde{E} = v_0 + \sum_{i=1}^{N} \Big( v_{1,i}\,\overline{\langle\hat{\sigma}_i^x\rangle} + v_{2,i}\,\overline{\langle\hat{\sigma}_i^y\rangle} + v_{3,i}\,\overline{\langle\hat{\sigma}_i^z\rangle} + v_{4,i}\,\overline{\langle\hat{\sigma}_i^x\rangle}^2 + v_{5,i}\,\overline{\langle\hat{\sigma}_i^y\rangle}^2 + v_{6,i}\,\overline{\langle\hat{\sigma}_i^z\rangle}^2 + v_{7,i}\,\overline{\langle\hat{\sigma}_i^x\rangle}\,\overline{\langle\hat{\sigma}_i^y\rangle} + v_{8,i}\,\overline{\langle\hat{\sigma}_i^y\rangle}\,\overline{\langle\hat{\sigma}_i^z\rangle} + v_{9,i}\,\overline{\langle\hat{\sigma}_i^z\rangle}\,\overline{\langle\hat{\sigma}_i^x\rangle} + v_{10,i}\,\overline{\langle\hat{\sigma}_i^x\rangle}\,\overline{\langle\hat{\sigma}_i^y\rangle}\,\overline{\langle\hat{\sigma}_i^z\rangle} \Big).$$
For a system with $N = 6$, this model has 61 free parameters, $v$, with the corresponding labels, which are optimized during the training.
In the second case, there are 451 different weights, $w$, to be adjusted when the two-qubit quantities in Equation (9) are used in a similar way:
$$\begin{aligned}
\tilde{E} = w_0 &+ \sum_{i<j}^{N} \Big( w_{1,i,j}\,\overline{\langle\hat{\sigma}_i^x\hat{\sigma}_j^x\rangle} + w_{2,i,j}\,\overline{\langle\hat{\sigma}_i^y\hat{\sigma}_j^y\rangle} + w_{3,i,j}\,\overline{\langle\hat{\sigma}_i^z\hat{\sigma}_j^z\rangle} \\
&\quad + w_{4,i,j}\,\overline{\langle\hat{\sigma}_i^x\hat{\sigma}_j^x\rangle}^2 + w_{5,i,j}\,\overline{\langle\hat{\sigma}_i^y\hat{\sigma}_j^y\rangle}^2 + w_{6,i,j}\,\overline{\langle\hat{\sigma}_i^z\hat{\sigma}_j^z\rangle}^2 \\
&\quad + w_{7,i,j}\,\overline{\langle\hat{\sigma}_i^x\hat{\sigma}_j^x\rangle}\,\overline{\langle\hat{\sigma}_i^y\hat{\sigma}_j^y\rangle} + w_{8,i,j}\,\overline{\langle\hat{\sigma}_i^y\hat{\sigma}_j^y\rangle}\,\overline{\langle\hat{\sigma}_i^z\hat{\sigma}_j^z\rangle} + w_{9,i,j}\,\overline{\langle\hat{\sigma}_i^z\hat{\sigma}_j^z\rangle}\,\overline{\langle\hat{\sigma}_i^x\hat{\sigma}_j^x\rangle} \\
&\quad + w_{10,i,j}\,\overline{\langle\hat{\sigma}_i^x\hat{\sigma}_j^x\rangle}\,\overline{\langle\hat{\sigma}_i^y\hat{\sigma}_j^y\rangle}\,\overline{\langle\hat{\sigma}_i^z\hat{\sigma}_j^z\rangle} \Big) \\
&+ \sum_{i \neq j}^{N} \Big( w_{11,i,j}\,\overline{\langle\hat{\sigma}_i^x\hat{\sigma}_j^y\rangle} + w_{12,i,j}\,\overline{\langle\hat{\sigma}_i^y\hat{\sigma}_j^z\rangle} + w_{13,i,j}\,\overline{\langle\hat{\sigma}_i^z\hat{\sigma}_j^x\rangle} \\
&\quad + w_{14,i,j}\,\overline{\langle\hat{\sigma}_i^x\hat{\sigma}_j^y\rangle}^2 + w_{15,i,j}\,\overline{\langle\hat{\sigma}_i^y\hat{\sigma}_j^z\rangle}^2 + w_{16,i,j}\,\overline{\langle\hat{\sigma}_i^z\hat{\sigma}_j^x\rangle}^2 \\
&\quad + w_{17,i,j}\,\overline{\langle\hat{\sigma}_i^x\hat{\sigma}_j^y\rangle}\,\overline{\langle\hat{\sigma}_i^y\hat{\sigma}_j^z\rangle} + w_{18,i,j}\,\overline{\langle\hat{\sigma}_i^y\hat{\sigma}_j^z\rangle}\,\overline{\langle\hat{\sigma}_i^z\hat{\sigma}_j^x\rangle} + w_{19,i,j}\,\overline{\langle\hat{\sigma}_i^z\hat{\sigma}_j^x\rangle}\,\overline{\langle\hat{\sigma}_i^x\hat{\sigma}_j^y\rangle} \\
&\quad + w_{20,i,j}\,\overline{\langle\hat{\sigma}_i^x\hat{\sigma}_j^y\rangle}\,\overline{\langle\hat{\sigma}_i^y\hat{\sigma}_j^z\rangle}\,\overline{\langle\hat{\sigma}_i^z\hat{\sigma}_j^x\rangle} \Big).
\end{aligned}$$
As in the neural network models in [16,17], the predictions $\tilde{E}$ are functions of the speckle points in space. In the present case, the required nonlinear dependence of the outputs on the input values of the potential is, in general, guaranteed both by the form of the input encoding in Equation (2) and by an appropriate choice of the Hamiltonian parameters [58]. Beyond that, further nonlinear dependencies on the input have been introduced by combining the observables in (12) and (13), which is beneficial for the performance without losing the linearity of the models.
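The training itself is an ordinary least-squares fit of the (possibly nonlinearly combined) reservoir features; a minimal sketch of such a linear readout (the feature construction from Equations (12) and (13) is left out, and names are illustrative):

```python
import numpy as np

def fit_linear_model(X, E):
    """Least-squares fit of E ~ v_0 + X @ v, as in Eqs. (12)-(13).
    X: (M, F) matrix of reservoir features; returns weights incl. bias."""
    A = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend bias column
    v, *_ = np.linalg.lstsq(A, E, rcond=None)
    return v

def predict(X, v):
    """Predicted energies from fitted weights."""
    return v[0] + X @ v[1:]
```

Because only this readout is trained, retraining the reservoir computer for a different target amounts to a single linear solve.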
The quality of the predictions of the two models is shown in Figure 2. Remarkably, the model that produces the predictions from the single-qubit observables of the reservoir, with only 61 optimized parameters, is able to learn from the training data without overfitting. The MAE and $R^2$ for the training data are 0.1126 and 0.8808, respectively, practically equal to the values for the test data provided in Figure 2. In panel (a), the distribution of the absolute error shows that a considerable number of speckle instances have an error below the MAE. Moreover, in panel (b), there is a clear correlation between the target energies and the predicted ones for the test data, reflected in the large value of $R^2 = 0.880$. If the model with two-qubit observables is employed instead, the accuracy of the predictions is, in general, improved. The peak of the absolute error distribution in panel (c) is sharper and closer to 0, as is the value of the MAE. Accordingly, in the scatter plot in panel (d), the value of $R^2$ surpasses 0.9, closer to the ideal situation. Furthermore, even with more free parameters, 451, the comparison between MAE = 0.0833 and $R^2 = 0.936$ for the training data and the corresponding test values in panels (c) and (d) indicates that there is no significant overfitting.

7. Discussion and Conclusions

The results obtained with the models proposed in this work show that a quantum reservoir can be used to address the problem of making predictions of the ground state energy for different speckle disorder potentials. By following this approach, the computational capabilities of the quantum dynamics of the system are exploited, and linear models built from the observables of the quantum reservoir are sufficient to achieve a noticeable accuracy. In this way, we have taken advantage of both a simple and fast training strategy and the presence of quantum correlations. This paves the way to further develop models that follow a similar strategy, for instance, for systems of interacting particles in the presence of a speckle potential.
For practical purposes, however, the quality of the predictions should be increased in order to compete with state-of-the-art deep convolutional neural networks. To reach this aim, several strategies could be followed. It would be interesting to explore the effect of changing the form of the input encoding into the quantum reservoir and to study its impact on the quality of the predictions of the models. In addition, in the present work we have only explored the use of single-qubit and two-qubit observables. The extension to three-body quantities and beyond could contribute to improving the performance, as it would lead to more flexible models with more observables and, correspondingly, more free parameters. Additionally, increasing the number of spins would enlarge the Hilbert space and increase the capabilities of the quantum reservoir. Regarding the Hamiltonian of the reservoir system, the values of the couplings and the external magnetic field can be seen as hyperparameters. As the performance on the task depends on their values, they could be optimized by defining an additional validation dataset to improve the results presented in this work.

Funding

This research was funded by: the Spanish State Research Agency, through the Severo Ochoa and María de Maeztu Program for Centers and Units of Excellence in R&D, grant number MDM-2017-0711; Comunitat Autònoma de les Illes Balears, Govern de les Illes Balears, through the QUAREC project, grant number PRD2018/47; and the Spanish State Research Agency through the QUARESC project, grant numbers PID2019-109094GB-C21 and -C22/AEI/10.13039/501100011033.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used in this study are openly available in Zenodo at [52].

Acknowledgments

The author acknowledges Rodrigo Martínez-Peña, Gian Luca Giorgi, Miguel C. Soriano, and Roberta Zambrini for useful discussions and valuable comments and a careful reading of this manuscript.

Conflicts of Interest

The author declares no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
QRC: Quantum Reservoir Computing
RC: Reservoir Computing
MAE: Mean Absolute Error

References

  1. Biamonte, J.; Wittek, P.; Pancotti, N.; Rebentrost, P.; Wiebe, N.; Lloyd, S. Quantum machine learning. Nature 2017, 549, 195–202. [Google Scholar] [CrossRef]
  2. Dunjko, V.; Briegel, H.J. Machine learning & artificial intelligence in the quantum domain: A review of recent progress. Rep. Prog. Phys. 2018, 81, 074001. [Google Scholar] [CrossRef] [Green Version]
  3. Carrasquilla, J. Machine learning for quantum matter. Adv. Phys. X 2020, 5, 1797528. [Google Scholar] [CrossRef]
  4. Schuld, M.; Sinayskiy, I.; Petruccione, F. An introduction to quantum machine learning. Contemp. Phys. 2015, 56, 172–185. [Google Scholar] [CrossRef] [Green Version]
  5. Carleo, G.; Cirac, I.; Cranmer, K.; Daudet, L.; Schuld, M.; Tishby, N.; Vogt-Maranto, L.; Zdeborová, L. Machine learning and the physical sciences. Rev. Mod. Phys. 2019, 91, 045002. [Google Scholar] [CrossRef] [Green Version]
  6. Kottmann, K.; Huembeli, P.; Lewenstein, M.; Acín, A. Unsupervised Phase Discovery with Deep Anomaly Detection. Phys. Rev. Lett. 2020, 125, 170603. [Google Scholar] [CrossRef]
  7. Dong, X.Y.; Pollmann, F.; Zhang, X.F. Machine learning of quantum phase transitions. Phys. Rev. B 2019, 99, 121104. [Google Scholar] [CrossRef] [Green Version]
  8. Carrasquilla, J.; Melko, R.G. Machine learning phases of matter. Nat. Phys. 2017, 13, 431–434. [Google Scholar] [CrossRef] [Green Version]
  9. Wang, L. Discovering phase transitions with unsupervised learning. Phys. Rev. B 2016, 94, 195105. [Google Scholar] [CrossRef] [Green Version]
  10. Rem, B.S.; Käming, N.; Tarnowski, M.; Asteria, L.; Fläschner, N.; Becker, C.; Sengstock, K.; Weitenberg, C. Identifying quantum phase transitions using artificial neural networks on experimental data. Nat. Phys. 2019, 15, 917–920. [Google Scholar] [CrossRef]
  11. Huembeli, P.; Dauphin, A.; Wittek, P. Identifying quantum phase transitions with adversarial neural networks. Phys. Rev. B 2018, 97, 134109. [Google Scholar] [CrossRef] [Green Version]
  12. Canabarro, A.; Fanchini, F.F.; Malvezzi, A.L.; Pereira, R.; Chaves, R. Unveiling phase transitions with machine learning. Phys. Rev. B 2019, 100, 045129. [Google Scholar] [CrossRef] [Green Version]
  13. Schindler, F.; Regnault, N.; Neupert, T. Probing many-body localization with neural networks. Phys. Rev. B 2017, 95, 245134. [Google Scholar] [CrossRef] [Green Version]
  14. Mills, K.; Spanner, M.; Tamblyn, I. Deep learning and the Schrödinger equation. Phys. Rev. A 2017, 96, 042113. [Google Scholar] [CrossRef] [Green Version]
  15. Pilati, S.; Pieri, P. Simulating disordered quantum Ising chains via dense and sparse restricted Boltzmann machines. Phys. Rev. E 2020, 101, 063308. [Google Scholar] [CrossRef]
  16. Pilati, S.; Pieri, P. Supervised machine learning of ultracold atoms with speckle disorder. Sci. Rep. 2019, 9, 5613. [Google Scholar] [CrossRef] [Green Version]
  17. Mujal, P.; Martínez Miguel, A.; Polls, A.; Juliá-Díaz, B.; Pilati, S. Supervised learning of few dirty bosons with variable particle number. SciPost Phys. 2021, 10, 73. [Google Scholar] [CrossRef]
  18. Wigley, P.B.; Everitt, P.J.; van den Hengel, A.; Bastian, J.W.; Sooriyabandara, M.A.; McDonald, G.D.; Hardman, K.S.; Quinlivan, C.D.; Manju, P.; Kuhn, C.C.N.; et al. Fast machine-learning online optimization of ultra-cold-atom experiments. Sci. Rep. 2016, 6, 25890. [Google Scholar] [CrossRef] [Green Version]
  19. Tranter, A.D.; Slatyer, H.J.; Hush, M.R.; Leung, A.C.; Everett, J.L.; Paul, K.V.; Vernaz-Gris, P.; Lam, P.K.; Buchler, B.C.; Campbell, G.T. Multiparameter optimisation of a magneto-optical trap using deep learning. Nat. Commun. 2018, 9, 4360. [Google Scholar] [CrossRef]
  20. Barker, A.J.; Style, H.; Luksch, K.; Sunami, S.; Garrick, D.; Hill, F.; Foot, C.J.; Bentine, E. Applying machine learning optimization methods to the production of a quantum gas. Mach. Learn. Sci. Technol. 2020, 1, 015007. [Google Scholar] [CrossRef]
  21. Flurin, E.; Martin, L.S.; Hacohen-Gourgy, S.; Siddiqi, I. Using a Recurrent Neural Network to Reconstruct Quantum Dynamics of a Superconducting Qubit from Physical Observations. Phys. Rev. X 2020, 10, 011006. [Google Scholar] [CrossRef] [Green Version]
  22. Mujal, P.; Martínez-Peña, R.; Nokkala, J.; García-Beni, J.; Giorgi, G.L.; Soriano, M.C.; Zambrini, R. Opportunities in Quantum Reservoir Computing and Extreme Learning Machines. Adv. Quantum Technol. 2021, 2100027. [Google Scholar] [CrossRef]
  23. Ghosh, S.; Nakajima, K.; Krisnanda, T.; Fujii, K.; Liew, T.C.H. Quantum Neuromorphic Computing with Reservoir Computing Networks. Adv. Quantum Technol. 2021, 4, 2100053. [Google Scholar] [CrossRef]
  24. Fujii, K.; Nakajima, K. Harnessing Disordered-Ensemble Quantum Dynamics for Machine Learning. Phys. Rev. Appl. 2017, 8, 024030. [Google Scholar] [CrossRef] [Green Version]
  25. Lukoševičius, M.; Jaeger, H. Reservoir computing approaches to recurrent neural network training. Comput. Sci. Rev. 2009, 3, 127–149. [Google Scholar] [CrossRef]
  26. Jaeger, H.; Haas, H. Harnessing Nonlinearity: Predicting Chaotic Systems and Saving Energy in Wireless Communication. Science 2004, 304, 78–80. [Google Scholar] [CrossRef] [Green Version]
  27. Maass, W.; Markram, H. On the computational power of circuits of spiking neurons. J. Comput. Syst. Sci. 2004, 69, 593–616. [Google Scholar] [CrossRef] [Green Version]
  28. Brunner, D.; Soriano, M.C.; Van der Sande, G. Photonic Reservoir Computing: Optical Recurrent Neural Networks; Walter de Gruyter GmbH & Co KG: Berlin, Germany, 2019. [Google Scholar]
  29. Konkoli, Z. On reservoir computing: From mathematical foundations to unconventional applications. In Advances in Unconventional Computing; Springer: Heidelberg, Germany, 2017; pp. 573–607. [Google Scholar]
  30. Adamatzky, A.; Bull, L.; De Lacy Costello, B.; Stepney, S.; Teuscher, C. Unconventional Computing 2007; Luniver Press: London, UK, 2007. [Google Scholar]
  31. Butcher, J.B.; Verstraeten, D.; Schrauwen, B.; Day, C.R.; Haycock, P.W. Reservoir computing and extreme learning machines for non-linear time-series data analysis. Neural Netw. 2013, 38, 76–89. [Google Scholar] [CrossRef]
  32. Lukoševičius, M. A practical guide to applying echo state networks. In Neural Networks: Tricks of the Trade; Springer: Heidelberg, Germany, 2012; pp. 659–686. [Google Scholar]
  33. Antonik, P.; Marsal, N.; Brunner, D.; Rontani, D. Human action recognition with a large-scale brain-inspired photonic computer. Nat. Mach. Intell. 2019, 1, 530–537. [Google Scholar] [CrossRef] [Green Version]
  34. Alfaras, M.; Soriano, M.C.; Ortín, S. A Fast Machine Learning Model for ECG-Based Heartbeat Classification and Arrhythmia Detection. Front. Phys. 2019, 7, 103. [Google Scholar] [CrossRef] [Green Version]
  35. Pathak, J.; Hunt, B.; Girvan, M.; Lu, Z.; Ott, E. Model-Free Prediction of Large Spatiotemporally Chaotic Systems from Data: A Reservoir Computing Approach. Phys. Rev. Lett. 2018, 120, 024102. [Google Scholar] [CrossRef] [Green Version]
  36. Schuld, M.; Killoran, N. Quantum machine learning in feature Hilbert spaces. Phys. Rev. Lett. 2019, 122, 040504. [Google Scholar] [CrossRef] [Green Version]
  37. Abbas, A.; Sutter, D.; Zoufal, C.; Lucchi, A.; Figalli, A.; Woerner, S. The power of quantum neural networks. Nat. Comput. Sci. 2021, 1, 403–409. [Google Scholar] [CrossRef]
  38. Nokkala, J.; Martínez-Peña, R.; Zambrini, R.; Soriano, M.C. High-Performance Reservoir Computing with Fluctuations in Linear Networks. IEEE Trans. Neural Netw. Learn. Syst. 2021. [Google Scholar] [CrossRef]
  39. Martínez-Peña, R.; Nokkala, J.; Giorgi, G.L.; Zambrini, R.; Soriano, M.C. Information Processing Capacity of Spin-Based Quantum Reservoir Computing Systems. Cognit. Comput. 2020, 1–12. [Google Scholar] [CrossRef]
  40. Ghosh, S.; Opala, A.; Matuszewski, M.; Paterek, T.; Liew, T.C.H. Quantum reservoir processing. NPJ Quantum Inf. 2019, 5, 35. [Google Scholar] [CrossRef]
  41. Ghosh, S.; Opala, A.; Matuszewski, M.; Paterek, T.; Liew, T.C.H. Reconstructing Quantum States with Quantum Reservoir Networks. IEEE Trans. Neural Netw. Learn. Syst. 2020, 1–8. [Google Scholar] [CrossRef]
  42. Ghosh, S.; Paterek, T.; Liew, T.C.H. Quantum Neuromorphic Platform for Quantum State Preparation. Phys. Rev. Lett. 2019, 123, 260404. [Google Scholar] [CrossRef] [Green Version]
  43. Krisnanda, T.; Ghosh, S.; Paterek, T.; Liew, T.C. Creating and concentrating quantum resource states in noisy environments using a quantum neural network. Neural Netw. 2021, 136, 141–151. [Google Scholar] [CrossRef]
  44. Aspect, A.; Inguscio, M. Anderson localization of ultracold atoms. Phys. Today 2009, 62, 30–35. [Google Scholar] [CrossRef]
  45. Anderson, P.W. Absence of Diffusion in Certain Random Lattices. Phys. Rev. 1958, 109, 1492–1505. [Google Scholar] [CrossRef]
  46. Clément, D.; Varón, A.F.; Retter, J.A.; Sanchez-Palencia, L.; Aspect, A.; Bouyer, P. Experimental study of the transport of coherent interacting matter-waves in a 1D random potential induced by laser speckle. New J. Phys. 2006, 8, 165. [Google Scholar] [CrossRef]
  47. Billy, J.; Josse, V.; Zuo, Z.; Bernard, A.; Hambrecht, B.; Lugan, P.; Clément, D.; Sanchez-Palencia, L.; Bouyer, P.; Aspect, A. Direct observation of Anderson localization of matter waves in a controlled disorder. Nature 2008, 453, 891–894. [Google Scholar] [CrossRef]
  48. Roati, G.; D’Errico, C.; Fallani, L.; Fattori, M.; Fort, C.; Zaccanti, M.; Modugno, G.; Modugno, M.; Inguscio, M. Anderson localization of a non-interacting Bose–Einstein condensate. Nature 2008, 453, 895–898. [Google Scholar] [CrossRef] [Green Version]
  49. Modugno, M. Collective dynamics and expansion of a Bose–Einstein condensate in a random potential. Phys. Rev. A 2006, 73, 013606. [Google Scholar] [CrossRef] [Green Version]
  50. Huntley, J. Speckle photography fringe analysis: Assessment of current algorithms. Appl. Opt. 1989, 28, 4316–4322. [Google Scholar] [CrossRef]
  51. Mujal, P.; Polls, A.; Pilati, S.; Juliá-Díaz, B. Few-boson localization in a continuum with speckle disorder. Phys. Rev. A 2019, 100, 013603. [Google Scholar] [CrossRef] [Green Version]
  52. Mujal, P.; Martínez Miguel, A.; Polls, A.; Juliá-Díaz, B.; Pilati, S. Database used for the supervised learning of few dirty bosons with variable particle number. Zenodo 2020. [Google Scholar] [CrossRef]
  53. Mujal, P. Interacting Ultracold Few-Boson Systems. Ph.D. Thesis, Universitat de Barcelona, Barcelona, Spain, 2019. Available online: http://brunojulia.fqa.ub.edu/works/PMT_phD_Thesis_book.pdf (accessed on 30 January 2022).
  54. Martínez-Peña, R.; Giorgi, G.L.; Nokkala, J.; Soriano, M.C.; Zambrini, R. Dynamical Phase Transitions in Quantum Reservoir Computing. Phys. Rev. Lett. 2021, 127, 100502. [Google Scholar] [CrossRef]
  55. Kutvonen, A.; Fujii, K.; Sagawa, T. Optimizing a quantum reservoir computer for time series prediction. Sci. Rep. 2020, 10, 14687. [Google Scholar] [CrossRef]
  56. Chen, J.; Nurdin, H.I. Learning nonlinear input–output maps with dissipative quantum systems. Quantum Inf. Process. 2019, 18, 198. [Google Scholar] [CrossRef] [Green Version]
  57. Nakajima, K.; Fujii, K.; Negoro, M.; Mitarai, K.; Kitagawa, M. Boosting Computational Power through Spatial Multiplexing in Quantum Reservoir Computing. Phys. Rev. Appl. 2019, 11, 034021. [Google Scholar] [CrossRef] [Green Version]
  58. Mujal, P.; Nokkala, J.; Martínez-Peña, R.; Giorgi, G.L.; Soriano, M.C.; Zambrini, R. Analytical evidence of nonlinearity in qubits and continuous-variable quantum reservoir computing. J. Phys. Complex. 2021, 2, 045008. [Google Scholar] [CrossRef]
  59. Tran, Q.H.; Nakajima, K. Higher-Order Quantum Reservoir Computing. arXiv 2020, arXiv:2006.08999. [Google Scholar]
Figure 1. Schematic representation of the use of a quantum reservoir to predict the ground state energy of a quantum particle in a speckle potential. The spatial-dependent values of the speckle potential are transformed into a time-dependent signal fed into the state of one spin of the reservoir at discrete time steps. The quantum reservoir system evolves between input injections. Different observables are extracted from the reservoir system and used in the training process to produce ground state energy predictions.
Figure 2. (Left) Absolute error distributions corresponding to the test data of the model with single-qubit observables in (a) and two-qubit observables in (c). A dashed line indicates the mean absolute error (MAE). (Right) Predicted energies E ˜ as a function of the target energies E for the test data. In (b), the predictions are made with the model in Equation (12) and, in (d), with the one in (13). The ideal situation, with perfect predictions, is depicted with a dashed line and would correspond to R 2 = 1 .

Cite as: Mujal, P. Quantum Reservoir Computing for Speckle Disorder Potentials. Condens. Matter 2022, 7, 17. https://doi.org/10.3390/condmat7010017