Proceeding Paper

Entropic Dynamics for Learning in Neural Networks and the Renormalization Group †

Instituto de Fisica, Universidade de Sao Paulo, 05508-090 Sao Paulo, SP, Brazil
Presented at the 39th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering, Garching, Germany, 30 June–5 July 2019.
Proceedings 2019, 33(1), 10; https://doi.org/10.3390/proceedings2019033010
Published: 25 November 2019

Abstract

We study the dynamics of information processing in the continuous depth limit of deep feed-forward Neural Networks (NN) and find that it can be described in language similar to the Renormalization Group (RG). The association of concepts to patterns by a NN is analogous to the identification of the few variables that characterize the thermodynamic state obtained by the RG from microstates. We encode the information about the weights of a NN in a Maxent family of distributions. The location hyper-parameters represent the weight estimates. Bayesian learning of new examples determines new constraints on the generators of the family, yielding a new pdf, and in the ensuing entropic dynamics of learning, hyper-parameters change along the gradient of the evidence. For a feed-forward architecture the evidence can be written recursively from the evidence up to the previous layer, convoluted with an aggregation kernel. The continuum limit leads to a diffusion-like PDE analogous to Wilson’s RG, but with an aggregation kernel that depends on the weights of the NN, different from those that integrate out ultraviolet degrees of freedom. Approximations to the evidence can be obtained from solutions of the RG equation. Its derivatives with respect to the hyper-parameters generate examples of Entropic Dynamics in Neural Networks Architectures (EDNNA) learning algorithms. For simple architectures, these algorithms can be shown to yield optimal generalization in student-teacher scenarios.

1. Introduction

Neural networks are information processing systems that learn from examples. Loosely inspired by biological neural systems, they have been used for several types of problems, such as classification, regression, dimensional reduction and clustering [1]. Selection in biological systems is based on a measure of performance that combines not only accuracy but also ease of computation and implementation. Predictions based on expectations over posterior Bayesian distributions may saturate bounds for optimal accuracy, but will typically lack ease of computation and speed in reaching a result. Neural networks are parametric models, and if we don’t address the determination of the architecture, which we don’t in this paper, the problem of learning from examples is reduced to obtaining fast estimates of the weights or parameters, avoiding the integration over large dimensional spaces. The spectacular explosion of applications in several areas is witness to the fact that several training methods and large data sets are available. Despite these victories, the mechanisms of the dynamics of information processing remain obscure, and despite several decades of theoretical analysis using methods of Statistical Mechanics, much remains to be understood.

Here we study on-line learning in feed-forward architectures, where (input, output) examples are presented one at a time. Theoretical analysis is easier than for batch or off-line learning, where the cost function depends on a large number of example pairs; nevertheless, on-line accuracy performance remains high. This is in part because the cost function changes from example to example, so the local minima of the cost function that plague off-line learning are not so important. Local stationary points of the learning dynamics are still a problem, but good performances are possible. An important problem to be addressed is which cost function is the most appropriate. If an algorithm is going to be successful it has to approach Bayesian estimates for the available information. But any Bayes algorithm leads to integrals over spaces of high dimension, possibly in the millions. Monte Carlo strategies cannot be used if simplicity is a requirement. The strategy of determining optimized algorithms for on-line learning has been studied in the past for restricted scenarios and architectures.

We present a more general approach, with the following strategy. We are in a situation of incomplete information, thus a probability distribution represents, at a given point in the dynamics, what is known about the parameters. We have to commit to a family of distributions and we choose a Maxent family. Location hyperparameters give the current estimate of the weights. A new (input, output) example pair arrives and Bayes rule permits an update. The choice of the likelihood is a reflection of what we know about the architecture of the NN. In general it is not conjugate to the chosen family.
Still, the Bayes posterior, while not in the family, points to a unique member of the family, since it imposes new constraints on the expected values of the generators.
The resulting learning algorithm is the entropic dynamics imposed by the arrival of information in the examples, which induces a change of the hyperparameters of the family. It turns out that changes in the weights are in the direction of increasing the model Bayesian evidence, and it is a stochastic gradient descent algorithm in which the cost function is the negative log evidence of the model.
The denominator of the Bayes update can be interpreted either as the evidence of the model or, alternatively, as the predictive probability distribution of the output conditioned on the input and the hyperparameters. Once it is written as the marginalization over the internal representations, i.e., the activation values of the internal units, of the joint distribution of activities of the whole network, and under the supposition that information flows only from one layer to the next, a Markov chain structure follows. Recursion relations for the partial evidence up to a given internal layer are obtained, and in the continuous depth limit (CDL) a Fokker-Planck parabolic partial differential equation results. It generalizes Wilson’s Renormalization Group [2] diffusion equation to general kernels. The usual rules, e.g., the majority rule that eliminates high-frequency degrees of freedom, are replaced by transformations mediated by the weights of the NN. The RG dynamics can be seen as a classifier of Statistical Mechanics microstates into thermodynamic states. A NN extracts the relevant degrees of freedom that describe the macroscopic concept to which an input pattern is to be assigned. The first authors to relate the RG and NN were those of [3,4], generating a large flow of ideas about the possible connections between these two areas [5,6,7].

2. Maxent Distributions and Bayesian Learning

Let $f_a(w)$, for $a = 1, \ldots, K$, $w \in \mathbb{R}^N$, be the generators of a family $\mathcal{Q}$ of distributions $Q(w|\lambda)$. If information about $w$ is given in the form of constraints $\mathbb{E}_Q(f_a) = F_a$, for the set of numbers $\{F_a\}_{a=1,\ldots,K}$, the Maxent distribution is
$$Q(w|\lambda) = \frac{1}{z}\exp\left(-\sum_{a=1}^{K}\lambda_a f_a(w)\right), \qquad (1)$$
where z ensures normalization. Then
$$-\frac{\partial \ln z}{\partial \lambda_a} = F_a \qquad \text{and} \qquad \frac{\partial Q(w|\lambda)}{\partial \lambda_a} = \left(-f_a + F_a\right)Q(w|\lambda). \qquad (2)$$
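As a quick numerical illustration of Equation (2) (an addition, not part of the original derivation), the sketch below checks the identity $-\partial \ln z/\partial\lambda_a = F_a$ for a one-dimensional family with a single generator $f(w) = w^2$; the grid discretization and this particular generator are assumptions made only for the example.

```python
import numpy as np

# 1-d Maxent family with a single generator f(w) = w^2 and multiplier lam:
#   Q(w|lam) = exp(-lam * f(w)) / z(lam)
# Numerical check of Equation (2): -d ln z / d lam = F = E_Q[f].
w = np.linspace(-10.0, 10.0, 200001)
dw = w[1] - w[0]
f = w ** 2

def log_z(lam):
    return np.log(np.sum(np.exp(-lam * f)) * dw)

lam = 0.7
Q = np.exp(-lam * f - log_z(lam))
F = np.sum(f * Q) * dw                            # E_Q[f]
eps = 1e-5
dlnz = (log_z(lam + eps) - log_z(lam - eps)) / (2.0 * eps)
print(F, -dlnz)                                   # both ~ 1/(2*lam) = 0.714...
```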
Now consider a system learning a map from inputs $x$ to outputs $y$, and the model is a known function which depends on a parameter array $w$: $y = T(x; w)$. The aim of learning is to obtain the parameters from the information in the learning set $D_n = \{(x_i, y_i)\}_{i=1,\ldots,n}$. We want to obtain a distribution for the parameters and consider that up to $n-1$ examples the information is coded in a member of the $\mathcal{Q}$ family: $Q(w|\lambda_{n-1}) = Q_{n-1}$. Calling the likelihood of the problem $L_n = P(y_n|x_n, w)$, the product rule permits the Bayesian updating
$$P_n = P(w|D_n) = \frac{Q_{n-1} L_n}{Z_n}, \qquad (3)$$
where the partition function or the evidence is $Z(y_n|x_n, \lambda_{n-1}) = \int Q_{n-1} L_n \, dw = P(y_n|x_n, \lambda_{n-1})$. The Bayes posterior given by Equation (3) in general doesn’t belong to the $\mathcal{Q}$ family. We have to choose the member of the family that is closest to the Bayes posterior. This is the Maxent posterior. The way to proceed is based on the fact that a member of the $\mathcal{Q}$ family is determined solely by the values of the constraints $\{F_a\}$. The Bayes posterior defines a set of values for the constraints $\langle f_a \rangle$. It points in a unique way to the Maxent posterior $Q_n$ within the family $\mathcal{Q}$, obtained at the extreme of
$$S[Q_n||Q_{n-1}] = -\int Q_n \log\frac{Q_n}{Q_{n-1}}\, dw - \sum_a \Delta\lambda_a\left(\mathbb{E}_n(f_a) - \langle f_a\rangle\right), \qquad (4)$$
subject to the only possible constraints on its expected values $\mathbb{E}_n(f_a)$, which are taken to be the Bayes posterior expected values $\langle f_a \rangle$. Then for every generator
$$\mathbb{E}_{Q_n}(f_a) = \int \frac{Q_{n-1} L_n}{Z_n}\, f_a(w)\, dw = \mathbb{E}_{P_n}(f_a) = F_a^n. \qquad (5)$$
Subtract $F_a^{n-1}$ from both sides and use Equation (2); then
$$F_a^n - F_a^{n-1} = -\left.\frac{\partial \ln Z_n}{\partial \lambda_a}\right|_{n-1}, \qquad (6)$$
since the likelihood is independent of the Lagrange multipliers. This learning dynamics is deduced from entropy maximization and thus will be called Entropic Dynamics. Learning occurs along the gradient of the log evidence. It will turn out that the sign is such that typically the evidence for the new model is higher than before learning. These equations hold for any family, but it is interesting to consider the case that will be most likely to be useful in practice, where the family is determined by the functions $f_0 = 1$, $f_i = w_i$ and $f_{ij} = w_i w_j$, for $i, j = 1, \ldots, N$. The constraints after $n$ examples are the normalization, $\mathbb{E}(w_i) = \hat{w}_{ni}$ and $\mathbb{E}(w_i w_j) = (C_n)_{ij} + \hat{w}_{ni}\hat{w}_{nj}$. The result is the Gaussian family $Q \propto \exp\left(-\lambda_0 - \sum_i \lambda_i w_i - \sum_{ij}\lambda_{ij} w_i w_j\right)$. The entropic dynamics update equations, driven by the arrival of the $n$th example, are
$$\hat{w}_n = \hat{w}_{n-1} + C_{n-1} \cdot \nabla_{\hat{w}_{n-1}} \log Z_n, \qquad (7)$$
$$C_n = C_{n-1} + C_{n-1} \cdot \nabla^2_{\hat{w}_{n-1}} \log Z_n \cdot C_{n-1}. \qquad (8)$$
For a layered network, these are the equations associated with the update of the weights afferent to a particular unit in layer $d$ from unit $i$ in layer $d-1$, and of the component of the covariance matrix describing the correlation between weights coming from units $i$ and $j$. The update equations, induced by a maximum entropy approximation to Bayesian learning, constitute the learning algorithm of the neural network which implements the map $y = T(x; \hat{w})$.
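As an illustration, here is a minimal sketch of the updates in Equations (7) and (8) for the simplest architecture, a single linear unit with Gaussian output noise, where $\log Z_n$ is available in closed form. The teacher network, the noise level, and the synthetic data stream are assumptions of this sketch, not part of the paper.

```python
import numpy as np

# EDNNA updates, Equations (7) and (8), for a single linear unit with Gaussian
# output noise. Here the evidence is known in closed form:
#   Z_n = N(y_n ; w_hat . x_n , sigma^2 + x_n^T C x_n)
# Teacher weights, sigma and the data stream are assumptions of the sketch.
rng = np.random.default_rng(0)
N, sigma = 20, 0.1
w_teacher = rng.standard_normal(N)

w_hat = np.zeros(N)          # location hyperparameters (weight estimates)
C = np.eye(N)                # covariance hyperparameters

for n in range(500):
    x = rng.standard_normal(N)
    y = w_teacher @ x + sigma * rng.standard_normal()

    s = sigma**2 + x @ C @ x                 # predictive variance
    grad = x * (y - w_hat @ x) / s           # d log Z_n / d w_hat
    hess = -np.outer(x, x) / s               # d^2 log Z_n / d w_hat^2

    w_hat = w_hat + C @ grad                 # Equation (7)
    C = C + C @ hess @ C                     # Equation (8)

print(np.linalg.norm(w_hat - w_teacher))     # should be small after learning
```

In this conjugate case the entropic update reproduces the exact Bayesian (Kalman-filter-like) posterior update; for nonlinear transfer functions only the expression for $\log Z_n$ changes, which is where the recursive evidence of Section 3 enters.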
An approximation to this scheme was found for simple networks with no hidden units using a variational procedure ([8]) and applied to several architectures [9,10,11,12,13]. Then Opper [14] showed the Bayesian connection, explored elsewhere [15]. Recently it has been applied to societies of interacting neural networks [16,17,18,19]. While [12] attacked the neural network with a hidden layer, the challenge remains to study networks with deep architectures.

3. Deep Multilayer Perceptron

In this section we show that the evidence for a multilayer feedforward neural network can be written recursively as a map. Actually we will get two maps that are essentially the same. This type of map is typical of Renormalization Group transformations and in a continuous limit representation of the neural network as a field theory, we will show that the map leads to a partial differential equation analogous to Wilson’s diffusion-like RG equation.
We fix our attention on the $n$th example, and hence don’t write temporal (lower) indices anymore. A layer (upper) index now appears and $x^d$ is the internal representation at the units of layer $d$. Layers start with $d = 0$ and the depth of the network is $D$. Layer $d$ weights are collectively denoted $w^d$ and individually $w^d_{ij}$ is the weight connecting unit $i$ at layer $d-1$ to unit $j$ at layer $d$. The data pair used for the learning step is $X^0$ and $y$. The distribution of the representation at the input is $\delta(x^0 - X^0)$ and at the output $\delta(x^D - y)$. The partition function $Z(y_n|x_n, \lambda_{n-1})$ in Equation (3) is $Z(x^D|x^0, \lambda) = \int Q(w|\lambda)\, L\, dw$, where $Q(w|\lambda)$ is the prior joint distribution of the weights over all the layers. We will eventually take this to be a product over layers, $Q(w|\lambda) = \prod_{d=1}^{D} Q(w^d|\lambda^d)$, which will permit a simpler analytical treatment, but it is not a necessity at this moment. To obtain the likelihood we marginalize the joint distribution of the internal representations $P(x^D, x^{D-1}, \ldots, x^1|x^0, w^1, \ldots, w^D)$ over all internal representations at the hidden units, using the same trick that leads to the Chapman–Kolmogorov equation
$$L = P(x^D|x^0 = X^0, w^1, \ldots, w^D) = \int P(x^D, x^{D-1}, \ldots, x^1|x^0 = X^0, w^1, \ldots, w^D) \prod_{d=1}^{D-1} dx^d. \qquad (9)$$
The evidence can be written as
$$Z_D(x^D|X^0, \lambda) = \int Q_T(x^D, x^{D-1}, \ldots, x^1|x^0 = X^0, \lambda) \prod_{d=1}^{D-1} dx^d, \qquad (10)$$
where
$$Q_T(x^D, x^{D-1}, \ldots, x^1|x^0 = X^0, \lambda) = \int P(x^D, x^{D-1}, \ldots, x^1|x^0 = X^0, w^1, \ldots, w^D) \prod_{d=1}^{D} Q(w^d|\lambda^d)\, dw^d \qquad (11)$$
is the joint transition distribution. Define the partially integrated $Z_d$ for any $d = 1, \ldots, D$,
$$Z_d(x^D, x^{D-1}, \ldots, x^d|x^0, \lambda) = \int Q_T(x^D, x^{D-1}, \ldots, x^1|x^0 = X^0, \lambda) \prod_{d'=1}^{d-1} dx^{d'}. \qquad (12)$$
It satisfies the recursion
$$Z_d = \int Z_{d-1}\, dx^{d-1}, \qquad (13)$$
and the evidence is
$$Z_D = \int Z_d \prod_{d'=d}^{D-1} dx^{d'}. \qquad (14)$$
At this point this is analogous to a Statistical Mechanics (SM) or Euclidean field theory (EFT) partition function in which all field configurations with momentum components above a cutoff have been integrated out. The equivalent of the effective action of the EFT, or of the renormalized Hamiltonian in SM, is $-\log Z_d$.
Now we get a similar map, where the renormalization group transformation of the internal representations can be seen. Recall the likelihood in Equation (9) and use the product rule
$$L = P(x^D|x^0, w^1, \ldots, w^D) = \int P(x^D|x^{D-1}, w^D)\, P(x^{D-1}, \ldots, x^1|x^0, w^1, \ldots, w^D) \prod_{d=1}^{D-1} dx^d,$$
and finally
$$L = P(x^D|x^0, w^1, \ldots, w^D) = \int \prod_{d=0}^{D-1} P(x^{d+1}|x^d, w^{d+1}) \prod_{d=1}^{D-1} dx^d.$$
Since the prior is also a product, the partition function $Z_D = Z_D(x^D = y|x^0 = X^0, \{\lambda^d\})$ is given by
$$Z_D = \int \prod_{d=1}^{D} Q^d(w^d|\lambda^d)\, P(x^d|x^{d-1}, w^d) \prod_{d=1}^{D} dx^{d-1}\, dw^d.$$
We integrate over $x^0$ and $x^D$ with the constraints that their distributions are deltas at the input $X^0$ and output $y$:
$$Z_D = \int \prod_{d=1}^{D} dw^d\, dx^{d-1}\; Q^d(w^d|\lambda^d)\, P(x^d|x^{d-1}, w^d).$$
Define the evidence up to a given layer, $\rho(x^d)$, with initial condition $\rho(x^0) = \delta(x^0 - X^0)$, and the map
$$\rho(x^{d+1}) = \int \rho(x^d)\, P(x^{d+1}|x^d, w^{d+1})\, Q^{d+1}(w^{d+1}|\lambda^{d+1})\, dx^d\, dw^{d+1}.$$
The last step of the map, for a network of depth $D$, is for $x^D = y$, leading to the evidence of the model defined by the architecture of the network with weight hyperparameters given by the set $\{\lambda^d\}$:
$$Z_D(y) = \rho(x^D) = \int \rho(x^{D-1})\, P(x^D|x^{D-1}, w^D)\, Q^D(w^D|\lambda^D)\, dx^{D-1}\, dw^D.$$
Define a layer to layer transition distribution
$$Q_T^{d-1}(x^d|x^{d-1}, \lambda^d) = \int P(x^d|x^{d-1}, w^d)\, Q^d(w^d|\lambda^d)\, dw^d;$$
 
then we have a map that gives the evidence after $d$ layers as an integral over internal representations at layer $d-1$ of the evidence at layer $d-1$, with a kernel $Q_T$ that implements an aggregation RG-like step:
$$\rho(x^d) = \int dx^{d-1}\, \rho(x^{d-1})\, Q_T^{d-1}(x^d|x^{d-1}, \lambda^d). \qquad (20)$$
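As a worked illustration (an addition, not in the original text), consider a single deterministic linear output unit, for which $P(x^d|x^{d-1}, w^d) = \delta(x^d - w^d\cdot x^{d-1})$, together with a Gaussian $Q^d(w^d|\lambda^d)$ of mean $\hat{w}^d$ and covariance $C^d$. The layer-to-layer transition distribution can then be integrated in closed form:

$$Q_T^{d-1}(x^d|x^{d-1}, \lambda^d) = \int \delta\left(x^d - w^d\cdot x^{d-1}\right)\, \mathcal{N}\left(w^d;\, \hat{w}^d,\, C^d\right) dw^d = \mathcal{N}\left(x^d;\; \hat{w}^d\cdot x^{d-1},\; x^{d-1\,\top} C^d\, x^{d-1}\right),$$

so in this simple case the RG-like aggregation step of Equation (20) is a Gaussian smoothing of the previous layer's evidence whose width is set by the weight uncertainty $C^d$.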
We have obtained two RG-like maps, Equations (13) and (20). $Z_d$ depends on all internal representations from layer $d$ to $D$ and on all the hyperparameters $\lambda$. The simpler $\rho_d$ only depends on the internal representation at layer $d$ and on the hyperparameters of the previous layers. The map for $Z_d$ is simpler, while the map for $\rho_d$ requires, at each step, the transition distribution $Q_T(x^d|x^{d-1}, \lambda^d)$ as input. The transition distribution describes the renormalization-group-like transformation implemented by the neural network that takes the internal representation at one layer to the next. It is simple to see that
$$Z_d = \rho(x^d) \prod_{d'=d}^{D-1} Q_T(x^{d'+1}|x^{d'}, \lambda^{d'+1}).$$
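To make the map of Equation (20) concrete, the following is a minimal Monte Carlo sketch, not the paper's algorithm: the evidence density $\rho(x^d)$ is represented by particles, and the kernel $Q_T$ is sampled by drawing the weights of each layer from their Gaussian family. The layer widths, the tanh transfer function and the output observation kernel are assumptions made only for this sketch.

```python
import numpy as np

# Monte Carlo sketch of the aggregation map, Equation (20):
#   rho(x^d) = int dx^{d-1} rho(x^{d-1}) Q_T(x^d | x^{d-1}, lambda^d)
# rho is represented by particles; Q_T is sampled by drawing weights from the
# Gaussian family Q(w^d|lambda^d) and applying an assumed tanh transfer function.
rng = np.random.default_rng(1)
widths = [10, 10, 10, 1]                 # layer widths (assumption)
n_particles = 5000
X0 = rng.standard_normal(widths[0])      # input pattern

# initial condition rho(x^0) = delta(x^0 - X0): all particles start at X0
particles = np.tile(X0, (n_particles, 1))

for d in range(1, len(widths)):
    n_in, n_out = widths[d - 1], widths[d]
    w_hat = rng.standard_normal((n_out, n_in)) / np.sqrt(n_in)  # location hyperparameters
    C_std = 0.1                                                  # spread of Q(w|lambda)
    # one independent weight sample per particle marginalizes over Q(w^d|lambda^d)
    w = w_hat + C_std * rng.standard_normal((n_particles, n_out, n_in))
    particles = np.tanh(np.einsum('poi,pi->po', w, particles))

# rho(x^D) evaluated at an output y via a Gaussian observation kernel (assumption)
y, noise = 0.3, 0.1
Z_D = np.mean(np.exp(-(particles[:, 0] - y)**2 / (2 * noise**2))
              / np.sqrt(2 * np.pi * noise**2))
print("Monte Carlo estimate of the evidence Z_D(y):", Z_D)
```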

3.1. Generalized RG Differential Equation of a Neural Network in the Continuous Depth Limit

The layer index is obviously discrete, but we can take the continuous limit, where layers are now represented by a time-like variable $\tau$. A discrete variable $i$ still labels the units. The evidence at depth $\tau$ is related to the evidence at depth $\tau_0$ by a generalization of Equation (20):
$$\rho(x, \tau) = \int Q_T(x(\tau)|x'(\tau_0), \lambda)\, \rho(x', \tau_0)\, \mathcal{D}x', \qquad (22)$$
where the integration measure is $\mathcal{D}x' = \prod_i dx'_i$. The distribution $Q_T(x(\tau)|x'(\tau_0), \lambda)$ is the probability that a network with parameters $\lambda$, conditional on being in state $x'$ at $\tau_0$, has internal representation $x$ at depth $\tau$. It must satisfy the composition law
$$Q_T(x(\tau+\Delta\tau)|x'(\tau_0), \lambda) = \int Q_T(x(\tau+\Delta\tau)|z(\tau), \lambda)\, Q_T(z(\tau)|x'(\tau_0), \lambda)\, \mathcal{D}z.$$
For a deterministic neural network, conditional on the weights $w$, the evolution of the internal representation is given by the transfer function. To obtain a well behaved limit, it is supposed to vary slowly:
$$x_i(\tau+\Delta\tau) = T_i(x(\tau), w) = x_i(\tau) + \Delta\tau\, \tilde{b}_i(x(\tau), w),$$
so that $\tilde{b}$ can be interpreted as the gradient of the transfer function. The transition distribution is
$$Q_T(x, \tau|x', \tau_0, \lambda) = \int \prod_{\tau' \in [\tau_0, \tau]} \delta\left(x(\tau'+\Delta\tau) - T(x(\tau'), w)\right) Q(w|\lambda, \tau')\, dw_{\tau'},$$
obtained by integrating over all configurations of the weights in each slice. We have chosen a Gaussian family to represent the informational state of the network, which now takes the form of a product of Gaussians over all $\tau$ slices:
$$Q(w|\lambda, \tau) \propto \prod_{\tau} \exp\left\{-\frac{1}{2}\,\Delta w \cdot C_\tau^{-1} \cdot \Delta w\right\},$$
where $\Delta w = w - \hat{w}_\tau$ and $\lambda = \{\hat{w}_\tau, C_\tau\}$ for all values of $\tau$, but only the hyperparameters of the particular slice under consideration matter. To define the continuous limit we impose that the limits below exist:
$$\lim_{\Delta\tau \to 0} \frac{1}{\Delta\tau} \int Q_T(x, \tau+\Delta\tau|x', \tau, \lambda)\, (x - x')\, \mathcal{D}x = \mathbb{E}_w\!\left[\tilde{b}(x'(\tau), w)\right] = b(x', \tau, \lambda),$$
$$\lim_{\Delta\tau \to 0} \frac{1}{\Delta\tau} \int Q_T(x, \tau+\Delta\tau|x', \tau, \lambda)\, (x_i - x'_i)(x_j - x'_j)\, \mathcal{D}x = \mathbb{E}_w\!\left[\tilde{b}_i(x'(\tau), w)\, \tilde{b}_j(x'(\tau), w)\right] = B_{ij}(x', \tau, \lambda).$$
At each layer, the drift vector $b(x, \tau, \lambda)$ is the expected value of the change in internal representation and the diffusion matrix $B_{ij}(x, \tau, \lambda)$ is the expectation of the quadratic change; these are related to the expected values of the gradient and Hessian of the transfer function, respectively. As usual, take the time derivative of the expected value, with respect to $Q_T(x|x', \lambda)$, of a well behaved test function $g(x)$. Taylor expand $g(x)$ around $x'$, integrate by parts, use the fact that $g(x)$ is arbitrary, and obtain that $Q_T$ satisfies a parabolic PDE, and so does the evidence (see Equation (22)):
$$\frac{\partial \rho(x, \tau)}{\partial \tau} = -\sum_i \frac{\partial}{\partial x_i}\left(b_i(x, \tau, \lambda)\, \rho(x, \tau)\right) + \frac{1}{2}\sum_{i,j}\frac{\partial^2}{\partial x_i \partial x_j}\left(B_{ij}(x, \tau, \lambda)\, \rho(x, \tau)\right). \qquad (26)$$
The long-time limit of Equation (26) is the predictive distribution $\rho(y, \tau = D) = P(y|x^0, \lambda)$. Equation (26) is a generalization of an analogous diffusion equation which appears in Wilson’s incomplete integration formulation of the renormalization group (e.g., [2]). It extends the type of transformation by permitting the transformations that lead from $\tau$ to $\tau + d\tau$ to be other than a simple spatial average, which would eliminate high spatial frequency components. Instead, the transformations are mediated by the weights $\hat{w}$. It differs from the usual statistical mechanics or field theories also in the following sense. In those approaches, the transformation $\hat{w}$ is known and uniform, and the aim is to obtain the final $\rho_D$, which describes the infrared limit or the thermodynamics of the theory. In supervised learning in neural networks, the starting point, defined by the input $X^0$ and the output $y$, is given. The problem is to find the correct set of weights $\hat{w}$ that implements the correct input-output association. There are two regimes for the neural network. In the learning phase, the set of examples is a set of microscopic-macroscopic variables that describe a task. The aim of learning is to determine the appropriate generalized RG transformation that maps from the microscopic description to the macroscopic one. After learning, the network is used to find, for the current RG transformation, the unknown macroscopic generalized thermodynamics or infrared properties associated with the microstate. The next step is to derive optimized learning algorithms from the solutions of Equation (26) and the EDNNA learning described by Equations (7) and (8).
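For completeness, the sketch below integrates a one-dimensional version of Equation (26) with an explicit finite-difference scheme; the constant drift $b$ and diffusion $B$ are placeholders chosen only for illustration, whereas in EDNNA they would be the expectations of the transfer-function increments under $Q(w|\lambda, \tau)$.

```python
import numpy as np

# Explicit finite-difference integration of the 1-d version of Equation (26):
#   d rho / d tau = -d/dx (b * rho) + 0.5 * d^2/dx^2 (B * rho)
# Constant b and B are placeholders for this illustration only.
x = np.linspace(-5.0, 5.0, 401)
dx = x[1] - x[0]
dtau = 0.2 * dx**2           # small step for stability of the explicit scheme
b, B = 0.5, 0.2

rho = np.exp(-(x + 2.0)**2 / (2 * 0.05))   # narrow initial rho near x = -2
rho /= rho.sum() * dx

for _ in range(2000):
    drift = -np.gradient(b * rho, dx)
    diff = 0.5 * np.gradient(np.gradient(B * rho, dx), dx)
    rho = rho + dtau * (drift + diff)

print("mass:", rho.sum() * dx, "mean:", (x * rho).sum() * dx)
# the packet drifts toward larger x while spreading, a Wilson-like RG flow
# of the evidence density toward its long-time (predictive) limit
```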

References

  1. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016; Available online: http://www.deeplearningbook.org (accessed on 19 November 2019).
  2. Wilson, K.G.; Kogut, J. The renormalization group and the ϵ expansion. Phys. Rep. 1974, 12, 75–199. [Google Scholar] [CrossRef]
  3. Bény, C. Deep Learning and the Renormalization Group. Available online: https://arxiv.org/abs/1301.3124 (accessed on 19 November 2019).
  4. Mehta, P.; Schwab, D.J. An exact mapping between the Variational Renormalization Group and Deep Learning. arXiv 2014, arXiv:1410.3831. [Google Scholar]
  5. Koch-Janusz, M.; Ringel, Z. Mutual information, neural networks and the renormalization group. Nat. Phys. 2018, 14, 578–582. [Google Scholar] [CrossRef]
  6. Li, S.H.; Wang, L. Neural Network Renormalization Group. Phys. Rev. Lett. 2018, 121, 260601. [Google Scholar] [CrossRef] [PubMed]
  7. Lin, H.W.; Tegmark, M.; Rolnick, D. Why Does Deep and Cheap Learning Work So Well? J. Stat. Phys. 2017, 168, 1223–1247. [Google Scholar] [CrossRef]
  8. Kinouchi, O.; Caticha, N. Optimal generalization in perceptrons. J. Phys. A 1992, 25, 6243. [Google Scholar] [CrossRef]
  9. Biehl, M.; Riegler, P. On-Line Learning with a Perceptron. Europhys. Lett. 1994, 28, 525. [Google Scholar] [CrossRef]
  10. Kinouchi, O.; Caticha, N. Lower Bounds for Generalization with Drifting Rules. J. Phys. A 1993, 26, 6161. [Google Scholar] [CrossRef]
  11. Copelli, M.; Caticha, N. On-line learning in the Committee Machine. J. Phys. A 1995, 28, 1615. [Google Scholar] [CrossRef]
  12. Vicente, R.; Caticha, N. Functional optimization of online algorithms in multilayer neural networks. J. Phys. A Gen. Phys. 1997, 30. [Google Scholar] [CrossRef]
  13. Caticha, N.; de Oliveira, E. Gradient descent learning in and out of equilibrium. Phys. Rev. E 2001, 63, 061905. [Google Scholar] [CrossRef]
  14. Opper, M. A Bayesian Approach to Online Learning. In On-line Learning in Neural Networks; Saad, D., Ed.; Cambridge University Press: Cambridge, UK, 1998. [Google Scholar]
  15. Solla, S.A.; Winther, O. Optimal online learning: A Bayesian approach. Comput. Phys. Commun. 1999, 121–122, 94–97. [Google Scholar] [CrossRef]
  16. Caticha, N.; Vicente, R. Agent-based Social Psychology: From Neurocognitive Processes to Social Data. Adv. Complex Syst. 2011, 14, 711–731. [Google Scholar] [CrossRef]
  17. Vicente, R.; Susemihl, A.; Jerico, J.; Caticha, N. Moral foundations in an interacting neural networks society: A statistical mechanics analysis. Phys. A Stat. Mech. Appl. 2014, 400, 124–138. [Google Scholar] [CrossRef]
  18. Caticha, N.; Cesar, J.; Vicente, R. For whom will the Bayesian agents vote? Front. Phys. 2015, 3. [Google Scholar] [CrossRef]
  19. Caticha, N.; Alves, F. Trust, law and ideology in a NN agent model of the US Appellate Courts. In ESANN 2019 Proceedings, Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, Bruges, Belgium, 24–26 April 2019; pp. 511–516. ISBN 978-287-587-065-0. Available online: https://www.elen.ucl.ac.be/Proceedings/esann/esannpdf/es2019-72.pdf (accessed on 19 November 2019).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
