Proceeding Paper

A QFT Approach to Data Streaming in Natural and Artificial Neural Networks †

Gianfranco Basti 1 and Giuseppe Vitiello 2
1 Faculty of Philosophy, Pontifical Lateran University, Vatican City, 00120 Rome, Italy
2 Department of Physics “E.R. Caianiello”, University of Salerno, Fisciano, 84084 Salerno, Italy
* Author to whom correspondence should be addressed.
Presented at the Conference on Theoretical and Foundational Problems in Information Studies, IS4SI Summit 2021, Online, 12–19 September 2021.
Proceedings 2022, 81(1), 106; https://doi.org/10.3390/proceedings2022081106
Published: 19 September 2021

Abstract

In the current landscape of machine learning (ML) algorithms, the real-time extraction, classification, manipulation, and analysis of information from data streams (DS) is acquiring ever-growing relevance. Data streams generally arrive at high speed and always require an unsupervised real-time analysis for identifying long-range and higher-order correlations among data that are continuously changing over time (phase transitions). This emphasizes the infinitary character of the issue, i.e., the continuous change of the significant number of degrees of freedom characterizing the statistical representation function, which challenges standard ML algorithms in both their classical and quantum versions, since all of them are based on the (stochastic) search for the global minimum of some cost/energy function. The physical analogue must be studied in the realm of quantum field theory (QFT) for dissipative systems, such as biological and neural systems, which are able to map between different phases of quantum fields using the formalism of the Bogoliubov transform (BT). By applying the BT in a reversed way, on the system-thermal bath energetically balanced states, it is possible to define the powerful computational tool of the “doubling of the degrees of freedom” (DDF), which makes the choice of the significant finite number of degrees of freedom dynamic, and hence automatic, thus suggesting a different class of unsupervised ML algorithms for solving the DS issue.

1. Introduction: The Infinitely Many Degrees of Freedom in Data Streaming

During the last twenty years, considerable research has been conducted on the development of probabilistic machine learning algorithms, especially in the field of artificial neural networks (ANNs), for dealing with the problem of data stream classification and, more generally, with the real-time information extraction/manipulation/analysis of (infinite) data streams (see [1,2,3,4] for updated reviews on this topic). For instance, sensor networks, healthcare monitoring, social networks, financial markets, etc., are among the main sources of data streams, which often arrive at high speed and always require a real-time analysis, above all for identifying long-range and higher-order correlations among data that are continuously changing over time.
Indeed, the standard statistical machine learning algorithms in ANN models, starting from their progenitor, the so-called back-propagation (BP) algorithm [5], were developed for static and huge bases of data (“big data”), but they are systematically inadequate and unadaptable for the analysis of DS, that is, for the analysis and processing of dynamic bases of data characterized by sudden changes in the correlation length among the variables (phase transitions) and by unpredictable variations in the number of significant degrees of freedom of the probability distributions. We will refer to the characterization of the system deriving from this dynamically evolving set of infinitely many degrees of freedom as the infinitary character of the system.
From the computational standpoint, the solution to the infinitary character of the DS problem is in principle unreachable by a Turing machine (TM), either classical or quantum (QTM). Indeed, for dealing with the DS infinitary challenge, the increase in computational speed afforded by quantum machine learning algorithms is not very helpful, whether based on “quantum gates” (QTM) or on “quantum annealing” (the quantum Boltzmann machine (QBM)), both of which have been objects of intensive research during the last few years (see [6,7,8] for updated reviews).
Generally, in the case of ANNs, the improvement that the Boltzmann machine (BM) learning algorithm provides over the stochastic gradient descent (GD) algorithm used for the weight updates of the BP learning algorithm is that the BM uses “thermal fluctuations” for jumping out of the local minima of the cost function (simulated annealing), thus avoiding the main limitation of the GD algorithm in machine learning [9].
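As a purely illustrative aid (not part of the original formulation of [5,9]), the following minimal Python sketch contrasts plain gradient descent with a Metropolis-style simulated annealing step on a one-dimensional toy cost landscape; the cost function, the cooling schedule, and all parameter values are arbitrary choices made for the example.

```python
import math
import random

def cost(w):
    # Toy cost landscape with several local minima (arbitrary choice).
    return 0.1 * w**2 + math.sin(3.0 * w)

def gradient_descent(w, lr=0.01, steps=2000):
    # Plain GD: follows the local slope and stays trapped in the nearest minimum.
    eps = 1e-5
    for _ in range(steps):
        grad = (cost(w + eps) - cost(w - eps)) / (2 * eps)  # numerical gradient
        w -= lr * grad
    return w

def simulated_annealing(w, t0=2.0, t_min=1e-3, cooling=0.999):
    # Thermal fluctuations (Metropolis rule) allow uphill moves while T is high,
    # so the walker can escape shallow local minima before the system "freezes".
    t = t0
    while t > t_min:
        w_new = w + random.gauss(0.0, 0.5)
        delta = cost(w_new) - cost(w)
        if delta < 0 or random.random() < math.exp(-delta / t):
            w = w_new
        t *= cooling
    return w

print("GD from w=4.0 ->", round(gradient_descent(4.0), 3))
print("SA from w=4.0 ->", round(simulated_annealing(4.0), 3))
```

Starting both searches from the same point, GD typically remains trapped in the nearest local minimum, whereas the thermally fluctuating walker can cross the barriers while the temperature is still high.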
In this framework, the advantage of quantum annealing in a QBM is that it uses “quantum vacuum fluctuations” instead of the thermal fluctuations of classical annealing to bring the system out of shallow (local) minima, by resorting to the “quantum tunnelling” effect [10]. This outperforms thermal annealing, especially where the potential energy (cost) landscape consists of high but thin barriers surrounding shallow local minima [11,12]. However, despite the improvement that, at least in some specific cases, the QBM can provide to the procedure for finding the absolute minimum size/length/cost/distance among a very large, even though finite, set of possible solutions, the problem of DS remains, because in this case this “finitary” supposition does not hold.
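For reference, quantum annealing in the transverse-field Ising model of [10] can be summarized by the standard time-dependent Hamiltonian (the notation below is a schematic summary, not a quotation from the paper):
\[
H(t) = -\sum_{i<j} J_{ij}\,\sigma_i^{z}\sigma_j^{z} - \sum_i h_i\,\sigma_i^{z} - \Gamma(t)\sum_i \sigma_i^{x}, \qquad \Gamma(0) \gg |J_{ij}|, \quad \Gamma(t) \to 0 ,
\]
where the transverse field $\Gamma(t)$ plays the role that temperature plays in classical annealing: while $\Gamma$ is large, quantum tunnelling lets the system traverse thin barriers; as $\Gamma \to 0$, the state is expected to settle into the ground state of the classical cost term.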

2. The Analogy with the Infinitary Character of QFT Dynamics in Brains

As the analogy with the coarse-graining problem in statistical physics emphasizes very well, the search for the global minimum of the energy function makes sense only after the system has performed a phase transition. That is, only after a sudden change in the correlation length among the variables of the system, generally under the action of an external field, has determined a new way in which the variables are aggregated, thereby defining the significant number of degrees of freedom N that characterizes the system statistics after the transition.
In other terms, the infinitary challenge implicit in DS is related to phase transitions, so that, from the QFT standpoint, it is the same phenomenon as the infinite number of degrees of freedom of the Haag theorem [13], characterizing the quantum superposition in QFT systems in conditions far from equilibrium. This requires the extension of the QFT formalism to dissipative systems, inaugurated by the pioneering works of N. Bogoliubov [14,15] and H. Umezawa [16,17,18]. The Bogoliubov transform allows mapping between the different phases of the boson and fermion quantum fields, making the dissipative QFT, differently from QM and from QFT for closed systems, able to describe systems continuously undergoing phase transitions.
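In schematic form (a standard textbook summary, with notation chosen here for illustration), a Bogoliubov transformation mixes the annihilation and creation operators of a pair of modes,
\[
\alpha_k = u_k\, a_k + v_k\, a_{-k}^{\dagger}, \qquad |u_k|^2 \mp |v_k|^2 = 1 \quad (\text{bosons: } -, \ \text{fermions: } +),
\]
where the constraint preserves the canonical (anti)commutation relations; different choices of the coefficients $(u_k, v_k)$ correspond to unitarily inequivalent vacua, i.e., to physically distinct phases of the field.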
Indeed, inspired by the modeling of natural brains as many-body systems, the QFT dissipative formalism has been used to model ANNs [19,20,21]. The mathematical formalism of QFT (details in [22]) requires that for open (dissipative) systems, such as the brain, which is in a permanent “trade” or “dialogue” with its environment, the degrees of freedom of the system (the brain), say $A$, need to be “doubled” by introducing the degrees of freedom $\tilde{A}$ describing the environment, according to the coalgebraic scheme $A \to A \times \tilde{A}$. Indeed, Hopf coproducts (sums) are generally used in quantum physics to calculate the total energy of a superposition quantum state. In the case of a dissipative system, the coproducts represent the total energy of a state balanced between the system and its thermal bath. In this case, because the two terms of the coproduct are not mutually interchangeable, as they are in the case of QM closed systems, we are led to consider the non-commutative q-deformed Hopf (co)algebras, out of which the Bogoliubov transformations involving the $A, \tilde{A}$ modes are derived, and where the q-deformation parameter is a thermal parameter, strictly related to the Bogoliubov transform.
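As an illustrative summary in standard thermo-field-dynamics notation (not a verbatim quotation of [22]), the basic bosonic coproduct and the Bogoliubov transformation acting on the doubled modes read
\[
\Delta A = A \otimes \mathbf{1} + \mathbf{1} \otimes A \equiv A + \tilde{A},
\]
\[
A(\theta) = A\cosh\theta - \tilde{A}^{\dagger}\sinh\theta, \qquad \tilde{A}(\theta) = \tilde{A}\cosh\theta - A^{\dagger}\sinh\theta ,
\]
where the q-deformation of the coproduct is what makes the two copies $A$ and $\tilde{A}$ non-interchangeable, and $\theta$ is the thermal transformation parameter discussed below.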
These transformations induce phase transitions, i.e., transitions through physically distinct spaces of states describing the different dynamical regimes in which the system can sit. The brain is thus continuously undergoing phase transitions (criticalities) under the action of the inputs from the environment (the $\tilde{A}$ modes). Brain activity is, therefore, the result of a continual balancing of fluxes of energy (in all its forms) exchanged with the environment. The balancing is controlled by the minimization of the free energy at each step of time evolution. Since fluxes “in” for the brain ($A$ modes) are fluxes “out” for the environment ($\tilde{A}$ modes), and vice versa, the $\tilde{A}$ modes are the time-reversed images of the $A$ modes; they represent the Double of the system [23]. In such a way, by the doubling of the algebras (of the state spaces and of the Hilbert spaces), and thus by inserting the degrees of freedom of the environment (thermal bath), the Hamiltonian canonical representation of a (closed) dynamic system can be recovered also in the case of a dissipative system.
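To make the free-energy balancing explicit, the standard thermo-field-dynamics relations can be recalled (again as an illustrative summary, with mode index k and inverse temperature $\beta = 1/T$):
\[
|0(\theta)\rangle = \exp\!\Big[\sum_k \theta_k \big(A_k^{\dagger}\tilde{A}_k^{\dagger} - A_k \tilde{A}_k\big)\Big] |0, \tilde{0}\rangle , \qquad \frac{\partial \mathcal{F}}{\partial \theta_k} = 0 \;\Rightarrow\; \sinh^2\theta_k = \frac{1}{e^{\beta E_k} - 1} ,
\]
i.e., minimizing the free energy at each step fixes the condensate of the doubled modes so that the balanced (system-bath) vacuum reproduces the Bose-Einstein distribution for the modes of energy $E_k$; each such balancing condition selects one admissible vacuum among the infinitely many unitarily inequivalent ones.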

3. From Natural to Artificial Quantum Neural Net Dynamics

From the theoretical computer science (TCS) standpoint, this means that the system satisfies the notion of a particular type of automaton, the Labelled State Transition Machine (LTM), i.e., the so-called infinite-state LTM, coalgebraically interpreted and used in TCS for modelling infinite streams of data [21,24]. However, the doubling of the degrees of freedom (DDF) $\{A, \tilde{A}\}$ just illustrated, which characterizes a dissipative QFT system, acts as a dynamic, unsupervised selection criterion of admissible (because balanced) states (minima of the free energy). Effectively, it acts as a mechanism of “phase locking” between the data flow (environment) and the system dynamics.
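Purely as an illustration of the infinite-state LTM viewpoint, and not of the QFT algorithm itself, a stream can be treated coalgebraically as an object that is unfolded, step by step, into an observed label and a next state; the following Python sketch (all names and the toy transition rule are choices made for the example) processes an unbounded stream one element at a time, without ever holding the whole stream in memory.

```python
from dataclasses import dataclass
from itertools import count, islice
from typing import Callable, Iterator, Tuple

@dataclass
class LTM:
    # A labelled state transition machine, coalgebraically: state -> (label, next state).
    state: float
    step: Callable[[float, float], Tuple[str, float]]

    def run(self, stream: Iterator[float]) -> Iterator[str]:
        # Unfold the machine along the (possibly infinite) input stream.
        for x in stream:
            label, self.state = self.step(self.state, x)
            yield label

def step(mean: float, x: float) -> Tuple[str, float]:
    # Toy transition: track a running mean and label each datum by its side of the mean.
    label = "above" if x > mean else "below"
    return label, 0.99 * mean + 0.01 * x  # exponentially weighted update

# Unbounded synthetic data stream (a deterministic generator, just for the example).
data = (float((7 * n) % 13) for n in count())

machine = LTM(state=6.0, step=step)
print(list(islice(machine.run(data), 10)))  # inspect the first ten labels only
```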
Moreover, each system-environment entangled (doubled) state is univocally characterized by a dynamically generated code, or dynamic labelling (memory addresses). In our model, an input triggers the spontaneous breakdown of the symmetry (SBS) of the system dynamical equations. As a result of SBS, massless modes, called Nambu–Goldstone (NG) modes, are dynamically generated [25,26]. They are boson quanta of long-range correlations among the system’s elementary components, and their coherent condensation value N in the system ground state (the least-energy state or vacuum state $|0\rangle$, which in our dissipative case is a balanced, or zero-sum, energy state with T > 0) describes the recording of the information carried by that input, univocally indexed (labeled) by N [21].
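In the notation commonly used for this dissipative model (summarized here for illustration), the code is the set of condensation densities of the NG modes in the balanced vacuum,
\[
\mathcal{N}_{A_k}(\theta, t) = {}_N\langle 0(\theta)|\, A_k^{\dagger} A_k \,|0(\theta)\rangle_N = \sinh^2\theta_k(t) ,
\]
so that the collection $N \equiv \{\mathcal{N}_{A_k}, \forall k\}$ works as the dynamically generated label (memory address) of the state recorded under that input.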
Coherence denotes that the long-range correlations do not interfere destructively in the system ground state. The memory state turns out to be, therefore, a squeezed coherent state, $|0(\theta)\rangle_N = \sum_j w_j(\theta)\,|w_j\rangle_N$, to which Glauber’s information entropy measure Q directly applies [27], with $|w_j\rangle$ denoting (the statistical weights of) the states of the $A$ and $\tilde{A}$ pairs, and $\theta$ the time- and temperature-dependent Bogoliubov transformation parameter. $|0(\theta)\rangle_N$ is, therefore, a time-dependent state at finite temperature T > 0. It is effectively an entangled (system-environment) state of the modes $A$ and $\tilde{A}$, which provides the mathematical description of the unavoidable interdependence between the brain and its environment. Coherence and entanglement imply that quantities relative to the $A$ modes depend on the corresponding ones of the $\tilde{A}$ modes.

4. Conclusion: A Possible Quantum Optics Implementation of the DDF Algorithm

To conclude, the most natural implementation of such a quantum computational architecture for unsupervised data streaming machine learning, based on the DDF principle, is provided by an optical ANN using the tools of optical interferometry, just as in the applications discussed in [28,29]. The fully programmable architecture of this optical chip allows us “to depict” over coherent light waves as many interference figures as we like and, above all, to maintain their phase coherence stable in time, so as to allow the implementation of quantum computing architectures (either quantum gates or squeezed coherent states) working at room temperature. In our application for data streaming analysis, the DDF principle can be applied in a recursive way, by using the mutual information as a measure of phase distance, i.e., as an optimization tool for minimizing the error of the input-output phase mismatch. In this architecture, indeed, the input to the net does not act on the initial conditions of the net dynamics, as in the ANN architectures based on statistical mechanics, but on the boundary conditions (thermal bath) of the system, so as to implement the architecture of a net in continuous learning, as required by the data streaming challenge.
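Purely as a schematic illustration of this recursive use of mutual information, and not as the implementation proposed here (the toy readout model, the histogram-based estimator, and all parameters below are assumptions made for the example), one can nudge the boundary (bath) phase, block by block of the stream, so that the interferometric readout remains maximally informative about the incoming data phases:

```python
import numpy as np

def mutual_information(x, y, bins=24):
    # Histogram-based estimate of the mutual information I(X;Y) in nats.
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def readout(phases, boundary, rng):
    # Toy stand-in for the interferometric chip: the detected signal depends on the
    # mismatch between the data phases and the adjustable boundary phase, plus noise.
    return np.cos(phases - boundary) + rng.normal(0.0, 0.05, size=phases.shape)

rng = np.random.default_rng(0)
boundary = 0.0

# Streaming loop: for each new block of data, nudge the boundary phase so that the
# readout carries maximal mutual information about the incoming phase pattern
# (equivalently, the input-output phase mismatch is recursively reduced).
for block in range(200):
    center = 1.0 + 0.3 * np.sin(0.05 * block)           # slowly drifting data statistics
    phases = center + rng.normal(0.0, 0.6, size=3000)   # new chunk of the data stream
    candidate = boundary + rng.normal(0.0, 0.1)
    if mutual_information(phases, readout(phases, candidate, rng)) > \
       mutual_information(phases, readout(phases, boundary, rng)):
        boundary = candidate

print("tuned boundary phase:", round(boundary, 2))
```

The point of the sketch is only the control loop: the tunable quantity lives in the boundary conditions, and each new chunk of the stream updates it, so that the net keeps learning continuously instead of being trained once on a static dataset.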

Author Contributions

G.B. and G.V. contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Rutkowski, L.; Jaworski, M.; Duda, P. Probabilistic Neural Networks for the Streaming Data Classification. In Stream Data Mining: Algorithms and Their Probabilistic Properties. Studies in Big Data; Springer: Cham, Switzerland, 2020; Volume 56, pp. 245–277. [Google Scholar]
  2. Bifet, A.; Gavaldà, R.; Holmes, G.; Pfahringer, B. Machine Learning for Data Streams with Practical Examples in MOA; MIT Press: Cambridge, MA, USA, 2018. [Google Scholar]
  3. Gama, J. A survey on learning from data streams: Current and future trends. Prog. Artif. Intell. 2012, 1, 45–55. [Google Scholar] [CrossRef] [Green Version]
  4. Aggarwal, C. Data Streams: Models and Algorithms; Springer: New York, NY, USA, 2007. [Google Scholar]
  5. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536. [Google Scholar] [CrossRef]
  6. Biamonte, J.; Wittek, P.; Pancotti, N.; Rebentrost, P.; Wiebe, N.; Lloyd, S. Quantum machine learning. Nature 2017, 549, 195–202. [Google Scholar] [CrossRef] [PubMed]
  7. Amin, M.A.; Andriyash, E.; Rolfe, J.; Kulchytskyy, B.; Melko, R. Quantum Boltzmann Machine. Phys. Rev. X 2018, 8, 021050. [Google Scholar] [CrossRef] [Green Version]
  8. Beer, K.; Bondarenko, D.; Farrelly, T.; Osborne, T.J.; Salzmann, R.; Scheiermann, D.; Wolf, R. Training deep quantum neural networks. Nat. Commun. 2020, 11, 808. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  9. Kirkpatrick, S.; Gelatt, C.D., Jr.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef] [PubMed]
  10. Kadowaki, T.; Nishimori, H. Quantum annealing in the transverse Ising model. Phys. Rev. E 1998, 58, 5355. [Google Scholar] [CrossRef] [Green Version]
  11. Morita, S.; Nishimori, H. Mathematical foundation of quantum annealing. J. Math. Phys. 2008, 49, 125210. [Google Scholar] [CrossRef]
  12. Heim, B.; Rønnow, T.F.; Isakov, S.V.; Troyer, M. Quantum versus classical annealing of Ising spin glasses. Science 2015, 348, 215–217. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Haag, R. On quantum field theories. Mat. Fys. Medd. 1955, 29, 1–37. [Google Scholar]
  14. Bogoliubov, N.N. On a new method in the theory of superconductivity. Nuovo Cim. 1958, 7, 794–805. [Google Scholar] [CrossRef]
  15. Bogoliubov, N.N.; Tolmachev, V.V.; Shirkov, D.V. A New Method in the Theory of Superconductivity; Consultants Bureau Inc.: New York, NY, USA; Chapman & Hall: London, UK, 1959. [Google Scholar]
  16. Takahashi, Y.; Umezawa, H. Thermo-Field Dynamics. Collect. Phenom. 1975, 2, 55–80. [Google Scholar] [CrossRef]
  17. Umezawa, H. Advanced Field Theory: Micro, Macro and Thermal; American Institute of Physics: New York, NY, USA, 1993. [Google Scholar]
  18. Umezawa, H. Development in concepts in quantum field theory in half century. Math. Jpn. 1995, 41, 109–124. [Google Scholar]
  19. Pessa, E.; Vitiello, G. Quantum dissipation and neural net dynamics. Bioelectrochem. Bioenerg. 1999, 48, 339–342. [Google Scholar] [CrossRef] [Green Version]
  20. Vitiello, G. Neural networks and many-body systems. Essays in honor of Eliano Pessa. In Multiplicity and Interdisciplinarity; Minati, G., Ed.; Springer: Cham, Switzerland, 2021; in press. [Google Scholar]
  21. Basti, G.; Capolupo, A.; Vitiello, G. Quantum Field Theory and Coalgebraic Logic in Theoretical Computer Science. Prog. Biophys. Mol. Biol. 2017, 130 Pt A, 39–52. [Google Scholar] [CrossRef] [Green Version]
  22. Blasone, M.; Jizba, P.; Vitiello, G. Quantum Field Theory and Its Macroscopic Manifestations. Boson Condensations, Ordered Patterns and Topological Defects; Imperial College Press: London, UK, 2011. [Google Scholar]
  23. Vitiello, G. My Double Unveiled; John Benjamins Publishing Co.: Amsterdam, The Netherlands, 2001. [Google Scholar]
  24. Rutten, J.J.M. Universal coalgebra: A theory of systems. Theor. Comput. Sci. 2000, 249, 3–80. [Google Scholar] [CrossRef] [Green Version]
  25. Nambu, Y. Quasiparticles and Gauge Invariance in the Theory of Superconductivity. Phys. Rev. 1960, 117, 648–663. [Google Scholar] [CrossRef]
  26. Goldstone, J. Field Theories with Superconductor Solutions. Nuovo Cim. 1961, 19, 154–164. [Google Scholar] [CrossRef]
  27. Keitel, C.H.; Wodkiewicz, K. Measuring information via Glauber’s Q-representation. In Proceedings of the Second International Workshop on Squeezed States and Uncertainty Relations; Hahn, D., Kim, J.S., Manko, V.I., Eds.; Goddard Space Flight Center Publications: Greenbelt, MD, USA, 1993. [Google Scholar]
  28. Basti, G.; Bentini, G.; Chiarini, M.; Parini, A.; Artoni, A.; Braglia, F.; Braglia, S.; Farabegoli, S. Sensor for security and safety applications based on a fully integrated monolithic electro-optical programmable microdiffractive device. In Electro-Optical and Infrared Systems: Technology and Applications XVI; SPIE Publications: Strasbourg, France, 2019; Volume 11159, pp. 1–11. [Google Scholar]
  29. Parini, A.; Chiarini, M.; Basti, G.; Bentini, G. Lithium niobate-based programmable micro-diffraction device for wavelength-selective switching applications. In Emerging Imaging and Sensing Technologies for Security and Defence IV; SPIE Publications: Strasbourg, France, 2019; Volume 11630, pp. 1–10. [Google Scholar]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
