Article

Towards a Theory of Quantum Gravity from Neural Networks

by Vitaly Vanchurin 1,2
1 National Center for Biotechnology Information, NIH, Bethesda, MD 20894, USA
2 Duluth Institute for Advanced Study, Duluth, MN 55804, USA
Entropy 2022, 24(1), 7; https://doi.org/10.3390/e24010007
Submission received: 16 November 2021 / Revised: 13 December 2021 / Accepted: 19 December 2021 / Published: 21 December 2021
(This article belongs to the Section Quantum Information)

Abstract
A neural network is a dynamical system described by two different types of degrees of freedom: fast-changing non-trainable variables (e.g., state of neurons) and slow-changing trainable variables (e.g., weights and biases). We show that the non-equilibrium dynamics of trainable variables can be described by the Madelung equations, if the number of neurons is fixed, and by the Schrodinger equation, if the learning system is capable of adjusting its own parameters such as the number of neurons, step size and mini-batch size. We argue that the Lorentz symmetries and curved space-time can emerge from the interplay between stochastic entropy production and entropy destruction due to learning. We show that the non-equilibrium dynamics of non-trainable variables can be described by the geodesic equation (in the emergent space-time) for localized states of neurons, and by the Einstein equations (with cosmological constant) for the entire network. We conclude that the quantum description of trainable variables and the gravitational description of non-trainable variables are dual in the sense that they provide alternative macroscopic descriptions of the same learning system, defined microscopically as a neural network.

1. Introduction

Quantum mechanics is a well-defined mathematical framework that proved to be very successful for modeling a wide range of complex phenomena in high energy and condensed matter physics, but it fails to give any reasonable explanation for a phenomenon as simple as a measurement, i.e., the measurement problem. It is completely unclear what is actually happening with the wave-function during the measurement and what role (if any) observers play in the process. Unfortunately, none of the current interpretations of quantum mechanics provide a satisfactory answer to the above questions. In the Copenhagen interpretation it is simply postulated that during measurement the wave-function undergoes a sudden collapse. That is fine, but then one should view quantum mechanics as a phenomenological theory with its limits of validity. In the many-worlds interpretation the wave-function describes the state of the entire universe which evolves unitarily and nothing ever collapses [1]. That is the opposite view, where quantum mechanics is a fundamental theory, but it is not a very useful theory as it makes no probabilistic predictions that could be checked experimentally. In recent years, the so-called emergent quantum mechanics has become more popular [2,3,4,5,6,7,8,9], but what is usually missing is a microscopic description of the dynamics from which the complex wave-function and the Schrodinger equation could emerge. Moreover, if quantum mechanics does emerge from a statistical theory, for example, due to averaging over some hidden variables [10], then the hidden variables must be non-local [11]. In this paper we describe a microscopic theory of neural networks from which the quantum behavior does emerge (for the trainable variables) and the hidden variables (or the non-trainable variables) are non-local [12,13]. In fact, as we shall see, the very notion of locality is also an emergent phenomenon that arises from the learning dynamics of neural networks.
General relativity is another well-defined mathematical framework that was developed for modeling a wide range of astrophysical and cosmological phenomena, but it is also incomplete since it does not describe what happens in space-time singularities and it does not directly explain the indirect observations of dark matter, dark energy and cosmic inflation. Of course, we can also treat general relativity as a (highly successful, but still) phenomenological theory with its own limits of validity and model all of these phenomena with phenomenological fields, but then certain important questions cannot be answered. And that includes not only very general questions about the nature of dark matter, dark energy and cosmic inflation, but also more specific questions about assigning probabilities to cosmological observations, i.e., the measure problem [14]. Perhaps, in a more complete theory of quantum gravity all of these questions would have answers and, in fact, some progress in developing such a theory has been made in the context of AdS/CFT [15,16,17] and loop quantum gravity [18,19,20]. Another possibility is that gravity is an emergent phenomenon [21,22,23,24] similar to thermodynamics, and then it does not make sense to quantize the metric tensor like all other fields, but, instead, we should try to figure out from which microscopic theory the general theory of relativity could emerge. In this paper we describe how not only general relativity, but also quantum mechanics, Lorentz invariance and space-time can all emerge from the learning dynamics of neural networks [12]. Note that the idea of using neural networks to describe gravity was also explored in ref. [25] in the context of quantum neural networks and black holes, and in ref. [26] in the context of matrix models and cosmology.
The paper is organized as follows. In the following section we define the microscopic theory of neural networks and develop a statistical description of the learning phenomenon. In Section 3 we derive the Madelung equations which can be used for modeling the dynamics of trainable variables both in and out of equilibrium. In Section 4 we show that if the learning system is capable of adjusting its own parameters such as the step size, mini-batch size and/or the number of neurons, then the trainable variables must evolve according to the Schrodinger equation. In Section 5 we consider the dynamics of non-trainable variables of individual neurons to show how the null, time-like and space-like vectors emerge. In Section 6 we exploit the freedom of local transformations to define an emergent space-time and the metric tensor. In Section 7 we consider minimally interacting states of neurons to show that the neurons must move along geodesics in the emergent space-time. In Section 8 we argue that the dynamics of non-trainable variables in the entire network must be described by an action which is equivalent to the Einstein-Hilbert action up to a boundary term and by the cosmological constant which imposes a constraint on the number of neurons. The main results of the paper are discussed in Section 9.

2. Neural Networks

In general, a neural network can be defined as a septuple (x, P̂, p, ŵ, b, f, H), where:
  • x, column state vector of boundary (i.e., input/output) and bulk (i.e., hidden) neurons,
  • P̂, boundary projection operator to the subspace spanned by boundary neurons,
  • p(P̂x), probability distribution which describes the training dataset,
  • ŵ, weight matrix which describes connections and interactions between neurons,
  • b, column bias vector which describes biases in the inputs of individual neurons,
  • f(y), activation map which describes the non-linear part of the dynamics,
  • H(x, b, ŵ) = H(x, q), loss function, where q denotes collectively both b and ŵ.
(See ref. [27] for details.) There are two types of degrees of freedom: trainable variables q (or the bias vector b and weight matrix ŵ) and non-trainable variables x (or the state of the boundary P̂x and bulk (Î − P̂)x neurons). The state of the boundary neurons is updated either periodically or randomly from a training dataset which is described by some probability distribution p(x), and between the updates both bulk and boundary neurons evolve according to
x(t) = f\left( \hat{w}\, x(t-1) + b \right)
where the activation map acts separately on each component, i.e., f_i(y) = f_i(y_i) (e.g., the hyperbolic tangent tanh(y), the rectified linear unit max(0, y), etc.).
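To make the update rule above concrete, here is a minimal numerical sketch (not part of the paper; the network size, the random weights and the tanh activation are illustrative assumptions) of how the full state vector evolves between boundary updates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                   # total number of neurons (boundary + bulk); illustrative
w = rng.normal(scale=0.3, size=(n, n))  # weight matrix w
b = rng.normal(scale=0.1, size=n)       # bias vector b
f = np.tanh                             # activation map, acting component-wise

x = rng.normal(size=n)                  # initial state of all neurons
for t in range(10):                     # evolution between updates of the boundary neurons
    x = f(w @ x + b)                    # x(t) = f(w x(t-1) + b)
```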
The main objective of learning is to find the trainable variables q (or the bias vector b and weight matrix w ^ ) which minimize the time-average of some loss function. For example, the boundary loss function is
H(x, q) = \frac{1}{2} \left( x - f(\hat{w} x + b) \right)^T \hat{P} \left( x - f(\hat{w} x + b) \right)
and the bulk loss function can be defined as
H(x, q) = \frac{1}{2} \left( x - f(\hat{w} x + b) \right)^T \left( x - f(\hat{w} x + b) \right) + \frac{1}{2} V(x, q)
where in addition to the first term, which represents a sum of local errors over all neurons, there may be a second term which represents either local objectives or constraints imposed by a neural architecture [27]. Note that the boundary loss is usually used in supervised learning, but the bulk loss may be used for both supervised and unsupervised learning tasks.
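As an illustration of the two loss functions (again a hedged sketch; the choice of boundary indices and the omission of the optional V(x, q) term are simplifying assumptions), the boundary loss sums the squared local errors over boundary neurons only, while the bulk loss sums them over all neurons:

```python
import numpy as np

def boundary_loss(x, w, b, f, boundary_idx):
    # H = 1/2 (x - f(wx + b))^T P (x - f(wx + b)), with P projecting onto boundary neurons
    e = x - f(w @ x + b)
    return 0.5 * np.sum(e[boundary_idx] ** 2)

def bulk_loss(x, w, b, f):
    # H = 1/2 (x - f(wx + b))^T (x - f(wx + b)), dropping the optional V(x, q) term
    e = x - f(w @ x + b)
    return 0.5 * np.sum(e ** 2)
```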
To develop a statistical description of learning [27], consider a joint probability distribution over both trainable q and non-trainable x variables,
p(x, q) = p(q)\, p(x|q)
where p ( q ) and p ( x | q ) denote, respectively, the marginal and conditional distributions. If the non-trainable variables quickly equilibrate, then their distribution must be given by the maximum entropy distribution [28,29],
p(x|q) \propto p(x)\, \exp\left( -\beta H(x, q) \right)
where β is a Lagrange multiplier which imposes a constraint on the loss function,
U(q) = \int d^N x\; p(x|q)\, H(x, q).
The corresponding free energy is
F(q, t) = -\frac{1}{\beta} \log \int d^N x\; p(x, t)\, \exp\left( -\beta H(x, q) \right)
where the explicit and implicit dependencies of the free energy F ( q , t ) on time t are due to, respectively, stochastic dynamics of p ( x , t ) and learning dynamics of q ( t ) . The total change of the free energy is given by
\frac{d}{dt} F(q, t) = \sum_k \frac{d q_k}{dt} \frac{\partial F(q, t)}{\partial q_k} + \frac{\partial F(q, t)}{\partial t} = -\gamma \sum_k \left( \frac{\partial F(q, t)}{\partial q_k} \right)^2 + \frac{\partial F(q, t)}{\partial t}
where we assume that the trainable variables experience a classical drift in the direction of the gradient of the free energy,
\frac{d q_k}{dt} = -\gamma\, \frac{\partial F(q, t)}{\partial q_k}.
Note that the parameter γ can be either positive or negative depending on whether the free energy is minimized or maximized with respect to a given trainable variable. If evolution is dominated by stochastic dynamics, then according to the second law of thermodynamics the entropy must increase and then the free energy is minimized, but if evolution is dominated by learning, then according to the second law of learning the entropy must decrease and then the free energy can be maximized [27]. We will come back to the issue of the sign of γ in the following sections.

3. Madelung Equations

On the shortest time scales (or when the free energy F ( q , t ) does not change significantly) the dynamics of the probability density p ( q , t ) can be modeled by the Fokker-Planck equation,
\frac{\partial p(q, t)}{\partial t} = \sum_k \frac{\partial}{\partial q_k} \left( D \frac{\partial p(q, t)}{\partial q_k} - \frac{d q_k}{dt}\, p(q, t) \right) = \sum_k \frac{\partial}{\partial q_k} \left( D \frac{\partial p(q, t)}{\partial q_k} + \gamma \frac{\partial F(q, t)}{\partial q_k}\, p(q, t) \right)
where we used (9) and D is the diffusion coefficient. On longer time scales the dynamics of the free energy is given by (8) and an additional assumption must be made. By following the analysis of refs. [12,13,27] we assume that the long time scale dynamics is governed by the principle of stationary entropy production [9]. The principle states that the path taken by a system is the one for which the entropy production is stationary, subject to whatever constraints are imposed on the system.
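Before turning to the entropy production, it may help to see the same dynamics at the level of individual trajectories: the Fokker-Planck Equation (10) is the density description of a drift-diffusion (Langevin) process for the trainable variables. A minimal sketch (the quadratic free energy and all parameter values are purely illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
gamma, D, dt = 0.1, 0.01, 1e-2        # step size, diffusion coefficient, time step (illustrative)

def grad_F(q):
    return q                          # gradient of a toy free energy F(q) = |q|^2 / 2

q = rng.normal(size=4)                # a few trainable variables
for step in range(10_000):
    # dq = -gamma * dF/dq * dt + sqrt(2 D dt) * noise; the density of q then obeys Eq. (10)
    q += -gamma * grad_F(q) * dt + np.sqrt(2 * D * dt) * rng.normal(size=q.shape)
```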
The entropy production of trainable variables q can be estimated by calculating the total change in the Shannon entropy,
S_q \equiv -\int d^K q\; p(q, t) \log p(q, t),
which can be expressed as
(12) \Delta S_q = \int_0^T dt\, \frac{d S_q(t)}{dt} = -\int_0^T dt \int d^K q\, \frac{\partial p}{\partial t} \log p - \int_0^T dt \int d^K q\, \frac{\partial p}{\partial t}
(13) \;\; = \int_0^T dt \int d^K q\; p \left( D \sum_k \left( \frac{\partial \log p}{\partial q_k} \right)^2 + \gamma \sum_k \frac{\partial \log p}{\partial q_k} \frac{\partial F}{\partial q_k} \right).
where in the first line we used conservation of probabilities, i.e., ∫ d^K q p(q, t) = 1, and in the second line we used (10), integrated by parts and neglected the boundary terms by imposing either periodic or vanishing boundary conditions. Equation (13) describes the system on short time scales, but on longer time scales an additional constraint must be imposed to satisfy (8). The overall optimization problem is solved by constraining deviations of the free energy production (8) from its time-averaged value using the method of Lagrange multipliers. The corresponding “action” functional is given by,
S[p, F, \alpha] = \Delta S_q - \alpha \int d^K q\; p \int_0^T dt \left( \frac{d F}{dt} - \left\langle \frac{d F}{dt} \right\rangle_t - f \right)
= \int_0^T dt \int d^K q\; p \left( D \sum_k \left( \frac{\partial \log p}{\partial q_k} \right)^2 + \gamma \sum_k \frac{\partial \log p}{\partial q_k} \frac{\partial F}{\partial q_k} + \alpha \gamma \sum_k \left( \frac{\partial F}{\partial q_k} \right)^2 - \alpha \frac{\partial F}{\partial t} + \alpha V \right)
= \int_0^T dt \int d^K q\; p \left( \left( D - \frac{\gamma}{4 \alpha} \right) \sum_k \left( \frac{\partial \log p}{\partial q_k} \right)^2 + \alpha \gamma \sum_k \left( \frac{\partial \tilde{F}}{\partial q_k} \right)^2 - \alpha \frac{\partial \tilde{F}}{\partial t} + \alpha V \right).
In the second line we defined the “potential”
V(q) \equiv f + \left\langle \frac{d F(q, t)}{dt} \right\rangle_t
where ⟨ · ⟩_t denotes the time average, and in the third line we completed the square to define
\tilde{F}(q, t) \equiv F(q, t) + \frac{\log p(q, t)}{2 \alpha}
and also used conservation of probabilities, i.e., ∫ d^K q p(q, t) = 1.
Note that in refs. [12,13,27] a functional similar to (14) was obtained, but only in a near equilibrium limit or when the entropy production due to learning is negligible, i.e., γ Σ_k ∂²F/∂q_k² ≪ D Σ_k (∂ log p/∂q_k)². In contrast, in (14) we completed the square and redefined the free energy F → F̃, which allowed us to keep all the terms. By varying S[p, F̃, α] with respect to p and F̃ (i.e., the original probability distribution, but shifted free energy (16)) we also obtain the Madelung hydrodynamic equations [30],
\frac{\partial p}{\partial t} = -\sum_k \frac{\partial}{\partial q_k} \left( u_k\, p \right)
\frac{\partial u_j}{\partial t} = -\sum_k u_k \frac{\partial u_j}{\partial q_k} - \frac{1}{M} \frac{\partial}{\partial q_j} \left( V - \frac{\hbar^2}{2 M} \frac{1}{\sqrt{p}} \sum_k \frac{\partial^2 \sqrt{p}}{\partial q_k^2} \right)
with “velocity” of the fluid
u_k \equiv \frac{1}{M} \frac{\partial \tilde{F}}{\partial q_k}
and “mass”
M \equiv \frac{1}{2 \gamma}
but the “Planck’s constant” is now
\hbar \equiv \frac{\sqrt{4 D \gamma \alpha - 1}}{\alpha}.
Therefore, we conclude that the Madelung description of trainable variables must remain valid arbitrarily far away from the learning equilibrium, suggesting that the effect is more general than previously thought.

4. Schrodinger Equation

All of the solutions of the Schrodinger equation are also solutions of the Madelung Equations (17) and (18), but the opposite is not true [31] and so the system is not exactly quantum. To study a limit when a fully quantum behavior emerges we have to assume that the learning system is described by a grand canonical ensemble of neurons and that the exact number of neurons N is unobservable [13]. Then a constant shift in the free energy is unobservable, i.e.,
\tilde{F} \rightarrow \tilde{F} + \mu N, \qquad N \in \mathbb{Z},
where the “chemical potential” of neurons (or another Lagrange multiplier which imposes a constraint on the number of neurons) is defined as
\mu = \pm 2 \pi \hbar.
Using (22) the functional (14) can be rewritten as,
S[\Psi] = \alpha \int_0^T dt \int d^K q \left( \frac{\hbar^2}{2 M} \sum_k \frac{\partial \Psi^*}{\partial q_k} \frac{\partial \Psi}{\partial q_k} - i \hbar\, \Psi^* \frac{\partial \Psi}{\partial t} + V\, \Psi^* \Psi \right)
where the “wave-function” is
\Psi \equiv \sqrt{p}\, \exp\left( \frac{i \tilde{F}}{\hbar} \right).
(See ref. [13] for details.)
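For readers who want to verify the correspondence, the standard textbook check (a sketch, not a quotation from the paper) is to substitute this polar form into the Schrodinger equation and separate imaginary and real parts, which reproduces the continuity and Euler Equations (17) and (18):

```latex
% Substituting \Psi = \sqrt{p}\, e^{i\tilde F/\hbar} into
%   i\hbar\, \partial_t \Psi = -\tfrac{\hbar^2}{2M} \sum_k \partial_{q_k}^2 \Psi + V\, \Psi
% and separating imaginary and real parts gives
\partial_t p = -\sum_k \partial_{q_k}\!\Big( p\, \tfrac{1}{M}\partial_{q_k}\tilde F \Big)
  \quad\text{(continuity equation, i.e., Eq. (17) with } u_k = \tfrac{1}{M}\partial_{q_k}\tilde F\text{)}
\partial_t \tilde F = -\tfrac{1}{2M}\sum_k \big(\partial_{q_k}\tilde F\big)^2 - V
  + \tfrac{\hbar^2}{2M}\,\tfrac{1}{\sqrt{p}}\sum_k \partial_{q_k}^2\sqrt{p}
  \quad\text{(quantum Hamilton--Jacobi equation; taking its gradient gives Eq. (18))}
```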
It is assumed that ℏ, D and α are all positive, but γ and μ can be either positive or negative. By combining (21) and (23) we obtain a quadratic equation,
\left( \frac{\mu}{2 \pi} \right)^2 \alpha^2 - 4 D \gamma\, \alpha + 1 = 0
whose solutions are
\alpha_\pm = \frac{2 D \gamma \pm \sqrt{\left( 2 D \gamma \right)^2 - \left( \frac{\mu}{2 \pi} \right)^2}}{\left( \frac{\mu}{2 \pi} \right)^2}.
For the real solutions to exist, the following inequality must be satisfied
2 D \gamma \geq \frac{\mu}{2 \pi} = \hbar.
Evidently, for μ/(γD) > 4π the inequality (28) cannot be satisfied and thus the quantum (or Schrodinger, but not Madelung) description breaks down. To restore the quantumness the learning system must readjust γ, μ and/or D such that the inequality (28) is saturated. In other words, the learning system must adjust the step size, the mini-batch size and/or the chemical potential (described, respectively, by γ, D and μ) until
\frac{\mu}{\gamma D} = 4 \pi.
However, if we want the “Planck constant” ℏ to remain constant, then the chemical potential (23) must be constant, and the only parameters that should vary are the number of neurons, the step size and the mini-batch size, or N, γ and D. Evidently, the learning efficiency which is achievable only through quantumness (e.g., quantum annealing) is tightly connected to the ability of the learning system to dynamically adjust its own parameters (e.g., step size, mini-batch size, number of neurons). On the other hand, the Madelung description is always appropriate both in and out of equilibrium.
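A small numerical check of these conditions (the parameter values are arbitrary illustrations, not taken from the paper): compute α± from (27) and test the reality condition (28).

```python
import numpy as np

def alpha_pm(D, gamma, mu):
    """Roots of (mu/2pi)^2 a^2 - 4 D gamma a + 1 = 0; real only if 2 D gamma >= mu / (2 pi)."""
    hbar = mu / (2 * np.pi)
    disc = (2 * D * gamma) ** 2 - hbar ** 2
    if disc < 0:
        return None                       # quantum (Schrodinger) description breaks down
    return ((2 * D * gamma + np.sqrt(disc)) / hbar ** 2,
            (2 * D * gamma - np.sqrt(disc)) / hbar ** 2)

print(alpha_pm(D=0.5, gamma=1.0, mu=1.0))   # 2 D gamma = 1.0  > mu/(2 pi) ~ 0.16 -> two real roots
print(alpha_pm(D=0.01, gamma=1.0, mu=1.0))  # 2 D gamma = 0.02 < 0.16            -> None
```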

5. Lorentz Symmetry

In the previous sections we discussed the entropy production Δ S q of trainable variables q , but the dynamics of non-trainable variables was described only at the level of its free energy. In this section we are interested instead in the entropy production Δ S x of non-trainable variables x which we approximate as a sum of entropy productions of individual neurons,
\Delta S_x(t) \approx \sum_i \Delta S_{x,i}(t).
It is assumed that the state of neurons changes quasi-periodically [12], i.e.,
x_{i_1}(t) \rightarrow x_{i_2}(t+1) \rightarrow \cdots \rightarrow x_{i_d}(t+d-1) \rightarrow x_{i_1}(t+d).
For concreteness, we assume that d = 3 which corresponds to the three spatial dimensions. Then, the entropy production can be modeled as a function of “displacements”,
\dot{x}_i^a(t) \equiv \frac{1}{3} \left( x_i^a(t+3) - x_i^a(t) \right)
and computational time t, i.e.
\Delta S_{x,i}(t) = \Delta S_{x,i}\!\left( \dot{x}_i^a(t), t \right).
In general, there are two contributions to the entropy production: positive due to the second law of thermodynamics and negative due to the second law of learning [27]. The main idea is to model the positive entropy production as some non-negative function σ_{i,+}(t) ≥ 0 of computational time t and the negative entropy production (or entropy destruction) as some non-positive function σ_{i,−}(ẋ_i^a(t)) ≤ 0 of displacements ẋ_i^a(t). Then the total entropy production is given by
\Delta S_{x,i}\!\left( \dot{x}_i^a(t), t \right) = \sigma_{i,+}(t) + \sigma_{i,-}\!\left( \dot{x}_i^a(t) \right) = \left( \dot{x}_i^0(t) \right)^2 + \sigma_{i,-}\!\left( \dot{x}_i^a(t) \right)
where we defined a monotonic function
x_i^0(t) \equiv \int_0^t d\tau\, \sqrt{\sigma_{i,+}(\tau)}.
In addition, the entropy destruction σ_{i,−}(ẋ_i^a(t)) ≤ 0 must vanish if there are no displacements, ẋ_i^a = (0, 0, 0), which implies that there are no zeroth and first order terms in a perturbative expansion around the origin, i.e.
\sigma_{i,-}\!\left( \dot{x}_i^a(t) \right) \approx -\sum_{a,b} g_{i,ab}\, \dot{x}_i^a(t)\, \dot{x}_i^b(t).
Here g i , a b ( t ) is some positive definite matrix and the displacements x ˙ i a ( t ) are assumed to be small so that the third order terms can be neglected. By substituting (36) in (34) we obtain
\Delta S_{x,i}\!\left( \dot{x}_i^a(t), t \right) \approx \left( \dot{x}_i^0(t) \right)^2 - \sum_{a,b} g_{i,ab}\, \dot{x}_i^a(t)\, \dot{x}_i^b(t)
and if we define the temporal components of the matrix as g_{i,00} = −1 and g_{i,0a} = g_{i,a0} = 0, then (37) takes a more covariant form (For brevity of notation, summation over repeated raised and lowered indices is implied everywhere unless explicitly stated otherwise.)
\Delta S_{x,i}\!\left( \dot{x}_i^\mu(t) \right) \approx -g_{i,\mu\nu}\, \dot{x}_i^\mu(t)\, \dot{x}_i^\nu(t)
for metric signature (−, +, +, +). Note that for very large displacements the third order terms may become important and then the approximation in (36) would break down and the Lorentz symmetry in (38) would be broken.
It is convenient to think of ẋ_i^μ as a four-vector in the tangent space at the “position” of the i’th neuron. Indeed, if, macroscopically, one can only observe the entropy (or entropy production), then we have Lorentz invariance in the sense that different representations of the four-vectors ẋ_i^μ that are connected to each other through Lorentz transformations are indistinguishable. In a local equilibrium, the stochastic entropy production (ẋ_i^0(t))² is balanced by the entropy destruction due to learning g_{i,ab} ẋ_i^a(t) ẋ_i^b(t) and the entropy remains constant. Therefore, the null displacement vectors, g_{i,μν} ẋ_i^μ(t) ẋ_i^ν(t) = 0, describe neurons in equilibrium, ΔS_{x,i} = 0. Moreover, the time-like displacement vectors, g_{i,μν} ẋ_i^μ(t) ẋ_i^ν(t) < 0, describe neurons for which stochastic dynamics dominates, ΔS_{x,i} > 0, and the space-like displacement vectors, g_{i,μν} ẋ_i^μ(t) ẋ_i^ν(t) > 0, (if such displacement vectors can be stable) describe neurons for which learning dynamics dominates, ΔS_{x,i} < 0.
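The classification above can be phrased operationally: given a local matrix g_{i,μν} with g_{i,00} = −1 and a displacement four-vector, the sign of g_{i,μν} ẋ^μ ẋ^ν (equal to −ΔS_{x,i} in this approximation) decides whether the neuron is equilibrium-, stochastic- or learning-dominated. A minimal sketch with illustrative numbers:

```python
import numpy as np

def classify(g_spatial, xdot):
    """xdot = (xdot^0, ..., xdot^3); g_spatial is the positive definite 3x3 block g_{i,ab}."""
    g = np.zeros((4, 4))
    g[0, 0] = -1.0                        # temporal components: g_00 = -1, g_0a = g_a0 = 0
    g[1:, 1:] = g_spatial
    s = xdot @ g @ xdot                   # g_{mu nu} xdot^mu xdot^nu = -Delta S_{x,i}
    if np.isclose(s, 0.0):
        return "null (equilibrium, Delta S = 0)"
    return ("time-like (stochastic dynamics dominates, Delta S > 0)" if s < 0
            else "space-like (learning dominates, Delta S < 0)")

g3 = np.eye(3)
print(classify(g3, np.array([1.0, 1.0, 0.0, 0.0])))   # null
print(classify(g3, np.array([1.0, 0.2, 0.0, 0.0])))   # time-like
print(classify(g3, np.array([0.2, 1.0, 0.0, 0.0])))   # space-like
```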

6. Emergent Space-Time

The local space-time coordinates of individual neurons, x i μ , can be transformed using shifts, rotations and boosts, i.e.
x_i^{\mu'}(t) = \Lambda^{\mu'}_{i\,\mu}\, x_i^\mu(t) + a_i^{\mu'}
where Λ^{μ'}_{i μ} is a Lorentz matrix. If the matrix g_{i,μν} is transformed using the inverse Lorentz matrix Λ^{μ}_{i μ'},
g_{i,\mu'\nu'} = \Lambda^{\mu}_{i\,\mu'}\, \Lambda^{\nu}_{i\,\nu'}\, g_{i,\mu\nu}
then the entropy production does not change,
(41) \Delta S_{x,i} \approx -g_{i,\mu'\nu'}\, \dot{x}_i^{\mu'}(t)\, \dot{x}_i^{\nu'}(t) = -\left( \Lambda^{\mu}_{i\,\mu'}\, \Lambda^{\nu}_{i\,\nu'}\, g_{i,\mu\nu} \right) \left( \Lambda^{\mu'}_{i\,\alpha}\, \dot{x}_i^\alpha(t) \right) \left( \Lambda^{\nu'}_{i\,\beta}\, \dot{x}_i^\beta(t) \right)
(42) \;\; = -g_{i,\mu\nu}\, \dot{x}_i^\mu(t)\, \dot{x}_i^\nu(t).
(Note that we adopted a standard notation of primed-unprimed indices often used for coordinate transformations [32]). The main idea is to exploit the freedom of transformations to make an appropriately weighted average of g i , μ ν matrices as close to the flat metric η μ ν as possible,
g_{\mu\nu}(t, x^1, x^2, x^3) \equiv \frac{\sum_i g_{i,\mu\nu}\, \sqrt{g_i}\; e^{-\frac{1}{2} \left( x^a - x_i^a(t) \right) g_{i,ab} \left( x^b - x_i^b(t) \right)}}{\sum_i \sqrt{g_i}\; e^{-\frac{1}{2} \left( x^a - x_i^a(t) \right) g_{i,ab} \left( x^b - x_i^b(t) \right)}} \approx \eta_{\mu\nu}
where g_i ≡ det(g_{i,ab}) and the summation in the exponent is taken over only spatial components, a, b = 1, 2, 3. For simplicity, we assume that all of the local space-times are transformed into the “synchronous gauge” with global time coordinate
t = x^0 = x_i^0
and
g_{00}(x) = -1, \qquad g_{a0}(x) = 0, \qquad g_{0a}(x) = 0.
Note that from now on the coordinate time is denoted by t = x 0 which need not be the same as computational time.
It is convenient to introduce the curly brackets notation,
\left\{ \dots_i \right\} \equiv \sum_i \left( \dots_i \right) (2\pi)^{-3/2}\, e^{-\frac{1}{2} \left( x^a - x_i^a(t) \right) g_{i,ab} \left( x^b - x_i^b(t) \right)}
and then the (weighted average) metric tensor (43) can be expressed as
g_{\mu\nu}(x) = \frac{\left\{ g_{i,\mu\nu} \sqrt{g_i} \right\}}{\left\{ \sqrt{g_i} \right\}}
and the (weighted average) inverse metric tensor is defined as
g^{\mu\nu}(x) \equiv \frac{\left\{ g_i^{\mu\nu} \sqrt{g_i} \right\}}{\left\{ \sqrt{g_i} \right\}}.
It is not immediately clear what the relation between g_{μν}(x) and g^{μν}(x) is, but if the emergent space-time is nearly flat (43), then we can expand both (47) and (48) around the flat metric to obtain
g_{i,\mu\nu} = \eta_{\mu\nu} + \epsilon\, h_{i,\mu\nu} + O(\epsilon^2)
g_i^{\mu\nu} = \eta^{\mu\nu} - \epsilon\, \eta^{\mu\alpha} \eta^{\nu\beta} h_{i,\alpha\beta} + O(\epsilon^2)
and verify that the product of the (weighted average) metric tensor and of the (weighted average) inverse metric tensor is indeed the identity,
g^{\mu\nu} g_{\nu\lambda} = \frac{\left\{ g_i^{\mu\nu} \sqrt{g_i} \right\}}{\left\{ \sqrt{g_i} \right\}} \frac{\left\{ g_{i,\nu\lambda} \sqrt{g_i} \right\}}{\left\{ \sqrt{g_i} \right\}} \approx \left( \eta^{\mu\nu} - \epsilon\, \frac{\left\{ \eta^{\mu\alpha} \eta^{\nu\beta} h_{i,\alpha\beta} \sqrt{g_i} \right\}}{\left\{ \sqrt{g_i} \right\}} \right) \left( \eta_{\nu\lambda} + \epsilon\, \frac{\left\{ h_{i,\nu\lambda} \sqrt{g_i} \right\}}{\left\{ \sqrt{g_i} \right\}} \right) + O(\epsilon^2) = \delta^\mu_\lambda + O(\epsilon^2).
In general, the curly brackets (46) can be used for mapping discrete indices i to continuous spatial coordinates ( x 1 , x 2 , x 3 ) . For example, the total number of neurons can be expressed as
\int d^3x \left\{ \sqrt{g_i} \right\} = \sum_i \int d^3x\, \sqrt{g_i}\, (2\pi)^{-3/2}\, e^{-\frac{1}{2} \left( x^a - x_i^a(t) \right) g_{i,ab} \left( x^b - x_i^b(t) \right)} = \sum_i 1 = N
which suggests that
\sqrt{g(x)} \equiv \left\{ \sqrt{g_i} \right\}
should be interpreted as the number density of neurons in the emergent space. Moreover, using the perturbative expansions (49) and (50) we can check that the square root of the determinant of the metric tensor g_{ab} is the same as the weighted sum of the square roots of the determinants of the g_{i,ab}, i.e.,
\sqrt{-\det g_{\mu\nu}(x)} = \sqrt{\det g_{ab}} = 1 + \frac{\epsilon}{2} \operatorname{Tr} h_{ab} + O(\epsilon^2) = 1 + \frac{\epsilon}{2} \frac{\left\{ \operatorname{Tr}(h_{i,ab}) \sqrt{g_i} \right\}}{\left\{ \sqrt{g_i} \right\}} + O(\epsilon^2) \approx \left\{ 1 \right\} + \frac{\epsilon}{2} \left\{ \operatorname{Tr}(h_{i,ab}) \right\} + O(\epsilon^2) = \left\{ \sqrt{g_i} \right\} + O(\epsilon^2) = \sqrt{g(x)} + O(\epsilon^2).
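To make Equations (43), (46) and (47) concrete, the following sketch (with randomly placed neurons and near-flat local matrices, all purely illustrative) evaluates the Gaussian-weighted spatial metric and the neuron number density {√g_i} at a point x:

```python
import numpy as np

rng = np.random.default_rng(2)
N, eps = 50, 0.05
pos = rng.uniform(-1, 1, size=(N, 3))             # neuron "positions" x_i^a
g_i = []
for _ in range(N):
    A = rng.normal(size=(3, 3))
    g_i.append(np.eye(3) + eps * (A + A.T) / 2)   # local spatial matrices g_{i,ab}, near flat
g_i = np.array(g_i)

def kernel(x, i):
    """(2 pi)^{-3/2} exp(-(x - x_i)^T g_i (x - x_i) / 2), the weight entering Eq. (46)."""
    d = x - pos[i]
    return (2 * np.pi) ** -1.5 * np.exp(-0.5 * d @ g_i[i] @ d)

def metric_and_density(x):
    w = np.array([np.sqrt(np.linalg.det(g_i[i])) * kernel(x, i) for i in range(N)])
    g_ab = np.einsum('i,iab->ab', w, g_i) / w.sum()   # Eq. (47), spatial block
    return g_ab, w.sum()                              # weighted metric and density {sqrt(g_i)}

g_ab, density = metric_and_density(np.zeros(3))
print(np.round(g_ab, 3), density)
```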

7. Geodesic Equation

The proper time of a given neuron can be identified with the square root of the entropy production (38), i.e.,
\Delta \tau(t) = \sqrt{\Delta S_{x,i}} \approx \sqrt{-g_{i,\mu\nu}\, \dot{x}_i^\mu(t)\, \dot{x}_i^\nu(t)}.
If we are interested in a more macroscopic and localized distribution of neurons, then their average entropy production can be approximated using the metric tensor (47),
\Delta \tau(t) = \sqrt{\Delta S_{x,i}} \approx \sqrt{-g_{\mu\nu}(x_i)\, \dot{x}_i^\mu(t)\, \dot{x}_i^\nu(t)}.
In a continuum limit (56) becomes,
\frac{d\tau}{dt} \approx \Delta \tau = \sqrt{-g_{\mu\nu}(x_i)\, \dot{x}_i^\mu(t)\, \dot{x}_i^\nu(t)}
which is usually expressed as a square of infinitesimal line element,
d\tau^2 = -g_{\mu\nu}(x_i)\, dx_i^\mu\, dx_i^\nu.
By integrating the proper time from initial position x i μ ( 0 ) at time t = 0 to final position x i μ ( T ) at time t = T we obtain,
\tau\!\left[ x_i^\mu(T) \,|\, x_i^\mu(0) \right] \equiv \int_0^T dt\, \frac{d\tau}{dt} \approx \int_0^T dt\, \sqrt{-g_{\mu\nu}(x_i)\, \dot{x}_i^\mu(t)\, \dot{x}_i^\nu(t)}.
According to the principle of stationary entropy production, it is expected that the neuron would “travel” along a path (from initial x i μ ( 0 ) to final x i μ ( T ) position) which extremizes (in this case maximizes) the entropy production or, equivalently, the proper time τ [ x i μ ( T ) | x i μ ( 0 ) ] .
By setting variations of the proper time (59) with respect to the trajectory x_i^μ(t) to zero,
\frac{\delta\, \tau\!\left[ x_i^\mu(T) \,|\, x_i^\mu(0) \right]}{\delta x_i^\mu(t)} = 0
we obtain the geodesic equation
\frac{d^2 x_i^\mu}{dt^2} = \Gamma^0_{\alpha\beta}\, \frac{d x_i^\alpha}{dt} \frac{d x_i^\beta}{dt} \frac{d x_i^\mu}{dt} - \Gamma^\mu_{\alpha\beta}\, \frac{d x_i^\alpha}{dt} \frac{d x_i^\beta}{dt}
or, equivalently, in terms of proper time
\frac{d^2 x_i^\mu}{d\tau^2} = -\Gamma^\mu_{\alpha\beta}\, \frac{d x_i^\alpha}{d\tau} \frac{d x_i^\beta}{d\tau}
where the Christoffel symbol is defined as
\Gamma^\mu_{\alpha\beta} \equiv \frac{1}{2} g^{\mu\nu} \left( \frac{\partial g_{\nu\alpha}}{\partial x^\beta} + \frac{\partial g_{\nu\beta}}{\partial x^\alpha} - \frac{\partial g_{\alpha\beta}}{\partial x^\nu} \right).
(See ref. [32] for a detailed derivation of the geodesic equation.) This result suggests that in the limit of minimal interactions, described by the metric tensor g μ ν ( x i ) , the localized states of neurons are expected to move along geodesics in the emergent space-time.
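As a simple numerical illustration of the geodesic Equation (62) (a sketch under the assumption that the metric is supplied as a function of position; the weak perturbation of the flat metric below is an arbitrary toy, not one derived from a neural network):

```python
import numpy as np

def christoffel(x, metric, h=1e-5):
    """Gamma^m_{ab} = g^{mn} (d_b g_{na} + d_a g_{nb} - d_n g_{ab}) / 2, via finite differences."""
    g_inv = np.linalg.inv(metric(x))
    dg = np.zeros((4, 4, 4))                  # dg[c, a, b] = d g_{ab} / d x^c
    for c in range(4):
        e = np.zeros(4); e[c] = h
        dg[c] = (metric(x + e) - metric(x - e)) / (2 * h)
    t1 = np.transpose(dg, (1, 2, 0))          # t1[n, a, b] = d_b g_{na}
    t2 = np.transpose(dg, (1, 0, 2))          # t2[n, a, b] = d_a g_{nb}
    return 0.5 * np.einsum('mn,nab->mab', g_inv, t1 + t2 - dg)

def geodesic_step(x, v, metric, dt=1e-3):
    """One Euler step of d^2 x^m / dtau^2 = -Gamma^m_{ab} v^a v^b."""
    a = -np.einsum('mab,a,b->m', christoffel(x, metric), v, v)
    return x + v * dt, v + a * dt

def metric(x):                                # toy metric: weak static perturbation of flat space
    g = np.diag([-1.0, 1.0, 1.0, 1.0])
    g[1, 1] += 0.01 * np.sin(x[2])
    return g

x, v = np.zeros(4), np.array([1.0, 0.1, 0.0, 0.0])
for _ in range(1000):
    x, v = geodesic_step(x, v, metric)
print(x, v)
```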

8. Einstein Equations

In this section, we are interested in the total entropy production of the non-trainable variables in the entire neural network during global time interval T,
\Delta S_x[g] \equiv \int_0^T dt \sum_i \Delta S_{x,i}\!\left( \dot{x}_i(t), t \right).
The entropy production of individual neurons (38) in the synchronous gauge is
\Delta S_{x,i}\!\left( \dot{x}_i^\mu(t) \right) \approx 1 - g_{i,ab}\, \dot{x}_i^a(t)\, \dot{x}_i^b(t)
and after integrating by parts we obtain
\Delta S_x[g] \approx \int_0^T dt \sum_i \left( 1 - g_{i,ab}\, \frac{d x_i^a(t)}{dt} \frac{d x_i^b(t)}{dt} \right) = \int_0^T dt \sum_i g_{i,ab}\, \frac{d^2 x_i^a(t)}{dt^2}\, x_i^b(t) + N T
where we have neglected the boundary term. We can also drop the constant term N T (which is irrelevant for variational problems) and rewrite the entropy production using Gaussian integration formula,
\Delta S_x[g] = \int_0^T dt \int d^3x \sum_i g_{i,ab}\, \frac{d^2 x_i^a(t)}{dt^2}\, x_i^b(t)\, \sqrt{g_i}\, (2\pi)^{-3/2}\, e^{-\frac{1}{2} \left( x^a - x_i^a(t) \right) g_{i,ab} \left( x^b - x_i^b(t) \right)}
= \int_0^T dt \int d^3x \sum_i g_{i,ab}\, \frac{d^2 x_i^a(t)}{dt^2} \left( x^b - x_i^b(t) \right) \sqrt{g_i}\, (2\pi)^{-3/2}\, e^{-\frac{1}{2} \left( x^a - x_i^a(t) \right) g_{i,ab} \left( x^b - x_i^b(t) \right)}
= \int_0^T dt \int d^3x \left\{ g_{i,ab}\, \frac{d^2 x_i^a(t)}{dt^2} \left( x^b - x_i^b(t) \right) \sqrt{g_i} \right\}
where in the last line the definition of curly brackets (46) was used. Using the geodesic Equation (61) the total entropy production (67) can be recast into the following form,
\Delta S_x[g] \approx \int_0^T dt \int d^3x \left( \left\{ \Gamma^0_{\alpha\beta}(x)\, \frac{d x_i^a}{dt}\, g_{i,ab} \left( x^b - x_i^b(t) \right) \frac{d x_i^\alpha(t)}{dt} \frac{d x_i^\beta(t)}{dt}\, \sqrt{g_i} \right\} - \left\{ \Gamma^a_{\alpha\beta}(x)\, g_{i,ab} \left( x^b - x_i^b(t) \right) \frac{d x_i^\alpha(t)}{dt} \frac{d x_i^\beta(t)}{dt}\, \sqrt{g_i} \right\} \right).
To proceed further, we make a crucial assumption that on average,
\frac{d x_i^\alpha(t)}{dt} \frac{d x_i^\beta(t)}{dt} \approx \frac{1}{2} \left( g_i^{\alpha\beta} + g^{\alpha\beta}(x_i) \right).
In other words, we assume that the displacement of the i’th neuron depends equally on its own covariance matrix g_i^{αβ} and on the weighted average covariance matrix g^{αβ}(x_i). Then by plugging (69) into (67) and using (48) and (53) we get,
\Delta S_x[g] \approx \frac{1}{2} \int_0^T dt \int d^3x \left( g^{\alpha\beta}\, \Gamma^0_{\alpha\beta} \left\{ \frac{d x_i^a}{dt}\, g_{i,ab} \left( x^b - x_i^b(t) \right) \sqrt{g_i} \right\} - g^{\alpha\beta}\, \Gamma^a_{\alpha\beta} \left\{ g_{i,ab} \left( x^b - x_i^b(t) \right) \sqrt{g_i} \right\} + \Gamma^0_{\alpha\beta} \left\{ \frac{d x_i^a}{dt}\, g_{i,ab} \left( x^b - x_i^b(t) \right) g_i^{\alpha\beta} \sqrt{g_i} \right\} - \Gamma^a_{\alpha\beta} \left\{ g_{i,ab} \left( x^b - x_i^b(t) \right) g_i^{\alpha\beta} \sqrt{g_i} \right\} \right)
= \frac{1}{2} \int_0^T dt \int d^3x \left( g^{\alpha\beta}\, \Gamma^\mu_{\alpha\beta}\, \frac{\partial}{\partial x^\mu} \left\{ \sqrt{g_i} \right\} - \Gamma^\mu_{\alpha\beta}\, \frac{\partial}{\partial x^\mu} \left\{ g_i^{\alpha\beta} \sqrt{g_i} \right\} \right)
= \frac{1}{2} \int_0^T dt \int d^3x \left( \sqrt{g}\, g^{\alpha\beta}\, \Gamma^\mu_{\alpha\beta}\, \Gamma^\gamma_{\gamma\mu} - \Gamma^\mu_{\alpha\beta}\, \frac{\partial}{\partial x^\mu} \left( \sqrt{g}\, g^{\alpha\beta} \right) \right).
It is now easy to show that (70) is equivalent to the Einstein-Hilbert action up to a boundary term, i.e.,
\Delta S_x[g] \approx \frac{1}{2} \int dt\, d^3x \left( \sqrt{g}\, g^{\alpha\beta}\, \Gamma^\mu_{\alpha\beta}\, \Gamma^\gamma_{\gamma\mu} - \Gamma^\mu_{\alpha\beta}\, \frac{\partial}{\partial x^\mu} \left( \sqrt{g}\, g^{\alpha\beta} \right) \right)
= \int dt\, d^3x\, \sqrt{g}\, g^{\alpha\beta} \left( \Gamma^\mu_{\alpha\beta}\, \Gamma^\gamma_{\gamma\mu} - \Gamma^\mu_{\alpha\gamma}\, \Gamma^\gamma_{\beta\mu} + \frac{\partial \Gamma^\mu_{\alpha\beta}}{\partial x^\mu} - \frac{\partial \Gamma^\mu_{\alpha\mu}}{\partial x^\beta} \right) + \text{boundary term}
= \int dt\, d^3x\, \sqrt{g}\, g^{\alpha\beta} R_{\alpha\beta} + \text{boundary term}
where the Ricci tensor is
R_{\mu\nu} \equiv \Gamma^\lambda_{\mu\nu}\, \Gamma^\gamma_{\gamma\lambda} - \Gamma^\lambda_{\mu\gamma}\, \Gamma^\gamma_{\nu\lambda} + \frac{\partial \Gamma^\lambda_{\mu\nu}}{\partial x^\lambda} - \frac{\partial \Gamma^\lambda_{\mu\lambda}}{\partial x^\nu}.
By setting variations of the entropy production Δ S x [ g ] with respect to the inverse metric tensor g μ ν to zero we obtain the vacuum Einstein equations,
\frac{\delta\, \Delta S_x[g]}{\delta g^{\mu\nu}} = 0 \;\;\Longrightarrow\;\; R_{\mu\nu} - \frac{1}{2} g^{\alpha\beta} R_{\alpha\beta}\, g_{\mu\nu} = 0.
(See ref. [32] for a detailed derivation of the Einstein equations from the Einstein-Hilbert action.)
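For readers who want to check the Ricci tensor (72) symbolically, here is a small sketch using sympy (assumed to be available; the diagonal test metric is an arbitrary choice, not one derived from the neural network construction):

```python
import sympy as sp

t, x, y, z = coords = sp.symbols('t x y z')
a = sp.Function('a')(t)                      # toy scale factor
g = sp.diag(-1, a**2, a**2, a**2)            # simple diagonal test metric
g_inv = g.inv()

def Gamma(m, al, be):
    # Gamma^m_{al be} = g^{mn} (d_be g_{n al} + d_al g_{n be} - d_n g_{al be}) / 2, Eq. (63)
    return sum(g_inv[m, n] * (sp.diff(g[n, al], coords[be]) + sp.diff(g[n, be], coords[al])
                              - sp.diff(g[al, be], coords[n])) for n in range(4)) / 2

def Ricci(mu, nu):
    # R_{mu nu} per Eq. (72): Gamma-Gamma terms plus derivatives of the Christoffel symbols
    return sp.simplify(sum(Gamma(l, mu, nu) * Gamma(s, s, l) - Gamma(l, mu, s) * Gamma(s, nu, l)
                           for l in range(4) for s in range(4))
                       + sum(sp.diff(Gamma(l, mu, nu), coords[l]) - sp.diff(Gamma(l, mu, l), coords[nu])
                             for l in range(4)))

print(Ricci(0, 0))                           # gives -3 a''(t) / a(t) for this metric
```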
So far the total number of neurons N was fixed, but, as was argued in ref. [13] and in Section 4, for the quantumness to emerge the number of neurons N must vary. Such variations can be introduced into the variational problem by defining a functional,
S[g, \Lambda] \equiv \Delta S_x[g] + 2 \Lambda \left( \bar{N} - N \right) = \int dt\, d^3x\, \sqrt{g} \left( g^{\alpha\beta} R_{\alpha\beta} - 2 \Lambda \right) + 2 \Lambda \bar{N}
where we used (52), (53) and (71). By varying S[g, Λ] with respect to the inverse metric g^{μν}, we obtain the Einstein equations with cosmological constant, i.e.
R_{\mu\nu} - \frac{1}{2} g^{\alpha\beta} R_{\alpha\beta}\, g_{\mu\nu} + \Lambda\, g_{\mu\nu} = 0.
Evidently, the Lagrange multiplier 2Λ constrains the average number of neurons and plays the role of the cosmological constant Λ in the gravitational description of non-trainable variables. We recall that in the quantum description of trainable variables (see Section 4) the Lagrange multiplier μ = ±2πℏ also constrains the average number of neurons, but instead it plays the role of the Planck constant ℏ. Evidently, the roles of the Lagrange multipliers 2Λ and μ = ±2πℏ in the gravitational description of non-trainable variables and in the quantum description of trainable variables are very different.
In the statistical description, the parameter Λ would play the role of a “chemical potential” which would be responsible for both “neurogenesis” and “neurodegeneration”. If the parameter can vary in time, then for a system with a small number of neurons (e.g., the early Universe) Λ would be larger, but for a system with a large number of neurons (e.g., the late Universe) Λ would be smaller. This can potentially explain both the early-time accelerated expansion (i.e., cosmic inflation) and the late-time accelerated expansion (i.e., dark energy), but for the former case a more thorough modeling of the spatial variations of the number density of neurons is required. In addition, the dynamics of trainable variables q(t) must be described by either the Madelung or the Schrodinger equations (see Section 3 and Section 4) and thus additional equations of motion must be satisfied and additional constraints must be imposed. However, from the point of view of the metric dynamics, there should exist an appropriately defined energy-momentum tensor T_{μν} that would act as a source in the Einstein equations,
R_{\mu\nu} - \frac{1}{2} g^{\alpha\beta} R_{\alpha\beta}\, g_{\mu\nu} + \Lambda\, g_{\mu\nu} = \kappa\, T_{\mu\nu}.
In addition, it is important to model possible deviations from the assumption (69) in the context of astrophysical observations of, for example, dark matter. Of course, all such generalizations require a more careful modeling of the dynamics of the trainable variables which is beyond the scope of this paper.

9. Discussion

All successful physical models are built on top of mathematical frameworks or theories. These theories are never proven in a rigorous mathematical sense, but instead they are validated through either repeated experiments or observations of the Universe around us. In the twentieth century two such theories were first proposed—quantum mechanics and general relativity—and then successfully applied to modeling physical phenomena on a wide range of scales from 10⁻¹⁹ m (i.e., high-energy experiments) to 10⁺²⁶ m (i.e., cosmological observations). However, all of the attempts to treat one of these theories as fundamental, and the other one as emergent, have so far failed (i.e., the problem of quantum gravity). In addition, both theories seem to fall apart with the introduction of macroscopic observers like ourselves. In some sense, the situation with observers was even worse than with physical phenomena, since we did not even have a mathematical framework for modeling observers. Indeed, there is not a single self-consistent and paradox-free definition of macroscopic observers that could describe what is actually happening with the quantum state during a measurement (i.e., the measurement problem) or how to assign probabilities to cosmological observations (i.e., the measure problem). Fortunately, the situation is changing and now we do have a mathematical framework of neural networks which can describe many (if not all) biological phenomena [33]. The main question, however, remains: can the theory of neural networks be the fundamental theory [12] from which (not only macroscopic observers [34] or some complex phenomena [35], but) all biological and physical phenomena emerge? If so, then the theories of quantum mechanics and general relativity must not be fundamental, but emergent.
The idea that quantum mechanics can emerge from anything classical, including neural networks, is very counterintuitive. And the main problem is not that in quantum mechanics we are dealing with probabilities while in classical physics everything is deterministic. Even in quantum mechanics the wave-function Ψ(q) evolves deterministically and it is only because of the measurements that the probabilities p(q) = |Ψ(q)|² arise. In fact, this is not very different from statistical mechanics, but what is different is that in quantum mechanics not only the probabilities, or the square root of the probabilities |Ψ(q)|, but also the complex phase of the wave-function, Im(log(Ψ(q))), evolves according to the Schrodinger equation. To show that this might be possible in a given dynamical system requires two non-trivial steps. The first step is to provide a microscopic interpretation of the complex phase which, in the case of neural networks, is the free energy of non-trainable variables, Im(log(Ψ(q))) = F̃(q)/ℏ. Note that the microscopic interpretation of the phase was also given in ref. [9] for constrained systems and in refs. [12,27] for equilibrium systems, but as was shown in Section 3 similar results also hold for non-equilibrium systems. The second step is to show that the complex phase, or the free energy F̃(q) in the case of neural networks, is multivalued. The multivaluedness condition is essential for the fully quantum behavior to emerge [31] and in the case of neural networks it is satisfied for a grand-canonical ensemble of neurons [13]. In Section 4 we extended this result to non-equilibrium systems that are capable of adjusting their own parameters (e.g., number of neurons, step size, mini-batch size). More precisely, we have shown that the quantum description of neural networks is appropriate for modeling the non-equilibrium dynamics of trainable variables with non-trainable (or hidden) variables modeled through their free energy and the number of neurons constrained by a Lagrange multiplier which plays the role of the Planck constant.
The problem of emergent gravity [21,22,23,24] is even more complicated, simply because it is impossible to study the emergence of general relativity until space, time and the space-time symmetries have themselves emerged. In the context of neural networks, the problem was first studied in ref. [27] and more specifically in ref. [12], but in both cases the description was too phenomenological or architecture-dependent for anything substantial to be said about the nature of dark energy, dark matter or cosmic inflation. In this paper we improved our understanding of emergent gravity on several fronts. First of all, we showed that the Lorentz symmetries emerge from the equilibrium dynamics for null vectors, from the stochastic entropy production for time-like vectors and from the entropy destruction due to learning for space-like vectors. This is in agreement with a common view that “time” has a thermodynamic origin, but it also suggests that “space” must emerge from learning. Secondly, we used the freedom of Lorentz transformations to define the emergent space-time and the metric tensor which is, by construction, as close as possible to being flat. In fact, it was essential for the space-time to be nearly flat and we expect the relativistic description to break down in regions of high curvature. Thirdly, we considered localized states of neurons, with minimal interactions described by the metric tensor, to show that they must move along geodesics in the emergent space-time. And finally, we showed that the general relativistic description is appropriate for modeling the dynamics of non-trainable variables with trainable variables modeled through their energy-momentum tensor and with the number of neurons constrained by a Lagrange multiplier which plays the role of the cosmological constant.
In conclusion, we would like to emphasize that the quantum and gravitational descriptions presented in this paper are dual in the sense that they provide alternative macroscopic descriptions of the same learning system, defined microscopically as a neural network. This duality does not have an obvious connection to the holographic duality [15,16,17], although such a possibility was discussed in ref. [12]. On the other hand, a fully quantum description can only emerge from a neural network if the number of neurons is not fixed, in which case a constraint on the number of neurons must be imposed in both sectors, i.e., gravitational and quantum. The Lagrange multiplier which imposes the constraint in the quantum description is the Planck constant (see Section 4), but the Lagrange multiplier which imposes the constraint in the gravitational description is the cosmological constant (see Section 8). This implies that a quantum system can only be dual to a gravitational system with a cosmological constant, as in AdS/CFT [15,16,17], but the sign of the cosmological constant can be arbitrary.

Funding

This research received no external funding.

Acknowledgments

The author is grateful to Mikhail Katsnelson for very useful discussions. This work was supported in part by the Foundational Questions Institute (FQXi) and the Oak Ridge Institute for Science and Education (ORISE).

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Everett, H. Relative State Formulation of Quantum Mechanics. Rev. Mod. Phys. 1957, 29, 454–462.
  2. Adler, S. Quantum Theory as an Emergent Phenomenon; Cambridge UP: Cambridge, UK, 2004.
  3. Hooft, G.’t. Emergent Quantum Mechanics and Emergent Symmetries. AIP Conf. Proc. 2007, 957, 154–163.
  4. Blasone, M.; Jizba, P.; Scardigli, F. Can quantum mechanics be an emergent phenomenon? J. Phys. Conf. Ser. 2009, 174, 012034.
  5. Grossing, G.; Fussy, S.; Mesa Pascasio, J.; Schwabl, H. The Quantum as an Emergent System. J. Phys. Conf. Ser. 2012, 361, 012008.
  6. Acosta, D.; de Cordoba, P.F.; Isidro, J.M.; Santander, J.L.G. Emergent quantum mechanics as a classical, irreversible thermodynamics. Int. J. Geom. Methods Mod. Phys. 2013, 10, 1350007.
  7. Fernandez De Cordoba, P.; Isidro, J.M.; Perea, M.H. Emergent quantum mechanics as a thermal ensemble. Int. J. Geom. Methods Mod. Phys. 2014, 11, 1450068.
  8. Caticha, A. Entropic Dynamics: Quantum Mechanics from Entropy and Information Geometry. Ann. Phys. 2019, 531, 1700408.
  9. Vanchurin, V. Entropic mechanics: Towards a stochastic description of quantum mechanics. Found. Phys. 2019, 50, 40–53.
  10. Bohm, D. A Suggested Interpretation of the Quantum Theory in Terms of ’Hidden Variables’ I. Phys. Rev. 1952, 85, 166–179.
  11. Bell, J. On the Einstein Podolsky Rosen Paradox. Physics Physique Fizika 1964, 1, 195–200.
  12. Vanchurin, V. The World as a Neural Network. Entropy 2020, 22, 1210.
  13. Katsnelson, M.I.; Vanchurin, V. Emergent Quantumness in Neural Networks. Found. Phys. 2021, 51, 94.
  14. Vanchurin, V.; Vilenkin, A.; Winitzki, S. Predictability crisis in inflationary cosmology and its resolution. Phys. Rev. D 2000, 61, 083507.
  15. Witten, E. Anti-de Sitter space and holography. Adv. Theor. Math. Phys. 1998, 2, 253.
  16. Susskind, L. The World as a hologram. J. Math. Phys. 1995, 36, 6377.
  17. Maldacena, J.M. The Large N limit of superconformal field theories and supergravity. Int. J. Theor. Phys. 1999, 38, 1113.
  18. Ashtekar, A. New Variables for Classical and Quantum Gravity. Phys. Rev. Lett. 1986, 57, 2244–2247.
  19. Rovelli, C.; Smolin, L. Loop Space Representation of Quantum General Relativity. Nucl. Phys. B 1990, 331, 80.
  20. Ashtekar, A.; Bojowald, M.; Lewandowski, J. Mathematical structure of loop quantum cosmology. Adv. Theor. Math. Phys. 2003, 7, 233–268.
  21. Jacobson, T. Thermodynamics of space-time: The Einstein equation of state. Phys. Rev. Lett. 1995, 75, 1260.
  22. Padmanabhan, T. Thermodynamical Aspects of Gravity: New insights. Rep. Prog. Phys. 2010, 73, 046901.
  23. Verlinde, E.P. On the Origin of Gravity and the Laws of Newton. J. High Energy Phys. 2011, 1104, 029.
  24. Vanchurin, V. Covariant Information Theory and Emergent Gravity. Int. J. Mod. Phys. A 2018, 33, 1845019.
  25. Dvali, G. Black Holes as Brains: Neural Networks with Area Law Entropy. Fortschritte der Physik 2018, 66, 1800007.
  26. Alexander, S.; Cunningham, W.J.; Lanier, J.; Smolin, L.; Stanojevic, S.; Toomey, M.W.; Wecker, D. The Autodidactic Universe. arXiv 2021, arXiv:2104.03902.
  27. Vanchurin, V. Toward a theory of machine learning. Mach. Learn. Sci. Technol. 2021, 2, 035012.
  28. Jaynes, E.T. Information Theory and Statistical Mechanics. Phys. Rev. Ser. II 1957, 106, 620–630.
  29. Jaynes, E.T. Information Theory and Statistical Mechanics II. Phys. Rev. Ser. II 1957, 108, 171–190.
  30. Madelung, E. Quantentheorie in hydrodynamischer Form. Z. Phys. 1927, 40, 322–326. (In German)
  31. Wallstrom, T.C. Inequivalence between the Schrödinger equation and the Madelung hydrodynamic equations. Phys. Rev. A 1994, 49, 1613–1617.
  32. Carroll, S.M. Spacetime and Geometry: An Introduction to General Relativity; Addison-Wesley: San Francisco, CA, USA, 2004.
  33. Vanchurin, V.; Wolf, Y.I.; Katsnelson, M.I.; Koonin, E.V. Towards a Theory of Evolution as Multilevel Learning. arXiv 2021, arXiv:2110.14602.
  34. Vanchurin, V.; Wolf, Y.I.; Katsnelson, M.I.; Koonin, E.V. Thermodynamics of Evolution and the Origin of Life. arXiv 2021, arXiv:2110.15066.
  35. Katsnelson, M.I.; Vanchurin, V.; Westerhout, T. Self-organized criticality in Neural Networks. arXiv 2021, arXiv:2107.03402.