Open Access Article

Entropic Dynamics in Neural Networks, the Renormalization Group and the Hamilton-Jacobi-Bellman Equation

Instituto de Física, Universidade de São Paulo, São Paulo, SP, CEP 05315-970, Brazil
Entropy 2020, 22(5), 587; https://doi.org/10.3390/e22050587
Received: 13 March 2020 / Revised: 18 May 2020 / Accepted: 18 May 2020 / Published: 23 May 2020
We study the dynamics of information processing in the continuum depth limit of deep feed-forward Neural Networks (NN) and find that it can be described in a language similar to that of the Renormalization Group (RG). The association of concepts to patterns by a NN is analogous to the identification by the RG of the few variables that characterize the thermodynamic state obtained from microstates. To see this, we encode the information about the weights of a NN in a Maxent family of distributions, whose location hyper-parameters represent the estimates of the weights. Bayesian learning of a new example determines new constraints on the generators of the family, yielding a new probability distribution; this can be seen as an entropic dynamics of learning, in which the hyper-parameters change along the gradient of the evidence. For a feed-forward architecture, the evidence can be written recursively as the evidence up to the previous layer convolved with an aggregation kernel. The continuum limit leads to a diffusion-like PDE analogous to Wilson’s RG, but with an aggregation kernel that depends on the weights of the NN, unlike the kernels that integrate out ultraviolet degrees of freedom. This can be recast in the language of dynamic programming, with an associated Hamilton–Jacobi–Bellman equation for the evidence, where the control is the set of weights of the neural network.
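As a purely schematic sketch of the chain of results described in the abstract (the kernel K_w, the generator \mathcal{L}_w, and the Hamiltonian H below are generic placeholders, not the paper's notation):

    % Layer recursion (schematic): the evidence at depth \ell + 1 is the
    % evidence at depth \ell convolved with a weight-dependent aggregation
    % kernel K_w.
    Z_{\ell+1}(x) = \int K_w(x, x') \, Z_\ell(x') \, \mathrm{d}x'

    % Continuum-depth limit (schematic): a diffusion-like flow in the depth
    % variable \ell, analogous to Wilson's RG flow, generated by a
    % weight-dependent operator \mathcal{L}_w.
    \frac{\partial Z}{\partial \ell} = \mathcal{L}_w \, Z

    % Dynamic-programming form (schematic): optimizing over the weights w,
    % which play the role of the control, turns the flow of S = \log Z into
    % a Hamilton--Jacobi--Bellman equation.
    \frac{\partial S}{\partial \ell} + \max_{w} H\big(x, w, \nabla S, \nabla^{2} S\big) = 0

In this reading, depth plays the role of time in standard optimal control, and the weights of the network are the control variable along the flow.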
Keywords: neural networks; renormalization group; entropic dynamics; learning algorithms
MDPI and ACS Style

Caticha, N. Entropic Dynamics in Neural Networks, the Renormalization Group and the Hamilton-Jacobi-Bellman Equation. Entropy 2020, 22, 587.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
