Special Issue "Statistical Mechanics of Neural Networks"

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Statistical Physics".

Deadline for manuscript submissions: closed (30 June 2019).

Special Issue Editor

Prof. Dr. Hanoch Gutfreund
Guest Editor
Former President, The Hebrew University of Jerusalem, Edmond Safra Campus, 91904 Jerusalem, Israel
Interests: history of physics; condensed matter physics; statistical mechanics; computational neuroscience

Special Issue Information

Dear Colleagues,

Neural networks are highly interconnected systems of identical elements called neurons, which capture the basic physiological features of biological nerve cells. They serve as models to explore the collective behavior of the nervous systems of living organisms. They are also applied in the study and design of artificial intelligence systems performing such functions as learning, information storage and retrieval, pattern recognition and decision making.

The history of neural network modeling begins with the work of McCulloch and Pitts, who introduced in 1943 the notion of the formal neuron as a two-state threshold element and showed that a network of such elements can perform any logical calculation. A landmark in this history was the work of Hopfield, who suggested a fruitful analogy between the asymptotic dynamics of such networks and the equilibrium properties of random magnetic systems. Hopfield’s work paved the road to the introduction of concepts and methods from the statistical mechanics of random physical systems into the study of neural networks.

Neural networks are oversimplified representations of the biological networks they are meant to emulate, but they can be systematically analyzed, numerically and sometimes even analytically. Thus, they can shed light on the principles of information processing in biological systems. The statistical analysis of neural networks became the cornerstone of the emerging discipline of computational neuroscience.
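The Hopfield analogy described above can be made concrete in a few lines of code: binary patterns are stored in a symmetric coupling matrix via a Hebbian rule, and a noisy cue relaxes to the nearest stored attractor under threshold dynamics. The following is a minimal illustrative sketch (the network size, load, and noise level are arbitrary choices, not taken from any paper in this issue):

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 5  # neurons and stored patterns; P/N is well below the capacity limit

# Random binary (+/-1) patterns, stored with the Hebbian rule
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)  # no self-connections in the standard Hopfield model

def retrieve(state, sweeps=10):
    """Asynchronous threshold dynamics: update neurons in random order."""
    state = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Cue the network with a stored pattern corrupted by 10% bit flips
cue = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
cue[flip] *= -1
recovered = retrieve(cue)
print("overlap with stored pattern:", recovered @ patterns[0] / N)
```

Starting within the basin of attraction, the dynamics restore the stored pattern (overlap close to 1), which is the associative-memory behavior underlying the spin-glass analogy.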

This is a call for papers on different aspects of the field of statistical mechanics of neural networks—from historical perspectives, to contemporary applications in brain research and artificial intelligence.

Prof. Dr. Hanoch Gutfreund
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Associative memory
  • Attractors
  • Hebbian learning rules
  • Hopfield models
  • Ising spin models
  • Network dynamics
  • Neural computation
  • Synapses, inhibition and excitation
  • Temporal patterns
  • Information storage capacity
  • Information retrieval
  • Pattern recognition

Published Papers (8 papers)


Research

Open Access Article
Critical Dynamics Mediate Learning of New Distributed Memory Representations in Neuronal Networks
Entropy 2019, 21(11), 1043; https://doi.org/10.3390/e21111043 - 26 Oct 2019
Cited by 2 | Viewed by 863
Abstract
We explore the possible role of network dynamics near a critical point in the storage of new information in silico and in vivo, and show that learning and memory may rely on neuronal network features mediated by the vicinity of criticality. Using a mean-field, attractor-based model, we show that new information can be consolidated into attractors through state-based learning in a dynamical regime associated with maximal susceptibility at the critical point. Then, we predict that the subsequent consolidation process results in a shift from critical to sub-critical dynamics to fully encapsulate the new information. We go on to corroborate these findings using analysis of rodent hippocampal CA1 activity during contextual fear memory (CFM) consolidation. We show that the dynamical state of the CA1 network is inherently poised near criticality, but the network also undergoes a shift towards sub-critical dynamics due to successful consolidation of the CFM. Based on these findings, we propose that dynamical features associated with criticality may be universally necessary for storing new memories. Full article
(This article belongs to the Special Issue Statistical Mechanics of Neural Networks)

Open Access Article
Beyond the Maximum Storage Capacity Limit in Hopfield Recurrent Neural Networks
Entropy 2019, 21(8), 726; https://doi.org/10.3390/e21080726 - 25 Jul 2019
Cited by 2 | Viewed by 1862
Abstract
In a neural network, an autapse is a particular kind of synapse that links a neuron onto itself. Autapses are almost always disallowed, in both artificial and biological neural networks. Moreover, redundant or similar stored states tend to interact destructively. This paper shows how autapses, together with stable-state redundancy, can improve the storage capacity of a recurrent neural network. Recent research shows how, in an N-node Hopfield neural network with autapses, the number of stored patterns (P) is not limited to the well-known bound of 0.14N, as it is for networks without autapses. More precisely, it describes how, as the number of stored patterns increases well over the 0.14N threshold, for P much greater than N, the retrieval error asymptotically approaches a value below unity. Consequently, the reduction of retrieval errors allows a number of stored memories that largely exceeds what was previously considered possible. Unfortunately, soon after, new results showed that, in the thermodynamic limit, for a network with autapses in this high-storage regime, the basin of attraction of the stored memories shrinks to a single state. This means that, for each stable state associated with a stored memory, even a single bit error in the initial pattern would lead the system to a stationary state associated with a different memory. This limits the potential use of this kind of Hopfield network as an associative memory. This paper presents a strategy to overcome the limitation by improving the error-correcting characteristics of the Hopfield neural network. The proposed strategy allows us to form what we call an absorbing neighborhood of states surrounding each stored memory: a set defined by a Hamming distance around a network state, which is absorbing because, in the long-time limit, states inside it are absorbed by stable states in the set. We show that this strategy allows the network to store an exponential number of memory patterns, each surrounded by an absorbing neighborhood of exponentially growing size. Full article
(This article belongs to the Special Issue Statistical Mechanics of Neural Networks)
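As a rough illustration of the role of autapses (not the authors' exact construction), one can compare the one-step stability of Hebbian-stored patterns above the 0.14N load limit, with and without the diagonal self-coupling terms; the autapse term adds a field component that stabilizes each neuron's current state:

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 200, 60  # load P/N = 0.3, above the classical 0.14 N limit

X = rng.choice([-1, 1], size=(P, N))  # stored +/-1 patterns
W_aut = X.T @ X / N                   # Hebbian couplings WITH autapses (diagonal = P/N)
W_std = W_aut.copy()
np.fill_diagonal(W_std, 0)            # standard model: self-connections removed

def unstable_fraction(W):
    # fraction of stored-pattern bits flipped by one synchronous update
    fields = X @ W.T
    return np.mean(np.where(fields >= 0, 1, -1) != X)

print("without autapses:", unstable_fraction(W_std))
print("with autapses:   ", unstable_fraction(W_aut))
```

At this load the self-coupling markedly reduces the fraction of unstable bits, consistent with the high-storage regime discussed in the abstract; it says nothing, of course, about the shrinking basins of attraction, which is the limitation the paper addresses.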

Open Access Article
Stationary-State Statistics of a Binary Neural Network Model with Quenched Disorder
Entropy 2019, 21(7), 630; https://doi.org/10.3390/e21070630 - 26 Jun 2019
Cited by 1 | Viewed by 1192
Abstract
In this paper, we study the statistical properties of the stationary firing-rate states of a neural network model with quenched disorder. The model has arbitrary size, discrete-time evolution equations and binary firing rates, while the topology and the strength of the synaptic connections are randomly generated from known, generally arbitrary, probability distributions. We derive semi-analytical expressions for the occurrence probability of the stationary states and the mean multistability diagram of the model, in terms of the distribution of the synaptic connections and of the external stimuli to the network. Our calculations rely on the probability distribution of the bifurcation points of the stationary states with respect to the external stimuli, calculated in terms of the permanent of special matrices using extreme value theory. While our semi-analytical expressions are exact for any size of the network and for any distribution of the synaptic connections, we focus our study on networks made of several populations, which we term “statistically homogeneous” to indicate that the probability distribution of their connections depends only on the pre- and post-synaptic population indexes, and not on the individual synaptic pair indexes. In this specific case, we calculate the permanent analytically, obtaining a compact formula that outperforms the Balasubramanian-Bax-Franklin-Glynn algorithm by several orders of magnitude. To conclude, by applying the Fisher-Tippett-Gnedenko theorem, we derive asymptotic expressions for the stationary-state statistics of multi-population networks in the large-network-size limit, in terms of the Gumbel (double exponential) distribution. We also provide a Python implementation of our formulas and some examples of the results generated by the code. Full article
(This article belongs to the Special Issue Statistical Mechanics of Neural Networks)

Open Access Article
Stability Analysis for Memristor-Based Complex-Valued Neural Networks with Time Delays
Entropy 2019, 21(2), 120; https://doi.org/10.3390/e21020120 - 28 Jan 2019
Viewed by 1068
Abstract
In this paper, the problem of stability analysis for memristor-based complex-valued neural networks (MCVNNs) with time-varying delays is investigated extensively, with a focus on exponential stability. By means of Brouwer’s fixed-point theorem and the M-matrix method, the existence, uniqueness, and exponential stability of the equilibrium point for MCVNNs are studied, and several sufficient conditions are obtained. In particular, these results apply to general MCVNNs, whether or not the activation functions can be explicitly separated into their real and imaginary parts. Two numerical simulation examples are provided to illustrate the effectiveness of the theoretical results. Full article
(This article belongs to the Special Issue Statistical Mechanics of Neural Networks)

Open Access Article
Modeling of a Neural System Based on Statistical Mechanics
Entropy 2018, 20(11), 848; https://doi.org/10.3390/e20110848 - 05 Nov 2018
Cited by 1 | Viewed by 1113
Abstract
The minimization of a free energy is often regarded as the key principle in understanding how the brain works and how its structure forms. In particular, a statistical-mechanics-based neural network model is expected to allow one to interpret many aspects of neural firing and learning processes in terms of general concepts and mechanisms in statistical physics. Nevertheless, the definition of the free energy in a neural system is usually an intricate problem without an evident solution. After the pioneering work by Hopfield, several statistical-mechanics-based models have suggested a variety of definitions of the free energy or the entropy of a neural system. Among those, the recently proposed Feynman machine defines the free energy of a neural system via the Feynman path-integral formulation with an explicit time variable. In this study, we first give a brief review of the previous relevant models, paying attention to their troublesome problems, and examine how the Feynman machine overcomes several vulnerable points of previous models and derives the firing or learning rule of a (biological) neural system as the extremum state of the free energy. Specifically, the model reveals that the biological learning mechanism called spike-timing-dependent plasticity relates to the free-energy minimization principle. Computing and learning mechanisms in the Feynman machine are based on the exact spike timings of neurons, as in a biological neural system. We discuss the consequences of adopting an explicit time variable in modeling a neural system and of applying the free-energy minimization principle to understanding phenomena in the brain. Full article
(This article belongs to the Special Issue Statistical Mechanics of Neural Networks)

Open Access Feature Paper Article
The Capacity for Correlated Semantic Memories in the Cortex
Entropy 2018, 20(11), 824; https://doi.org/10.3390/e20110824 - 26 Oct 2018
Cited by 2 | Viewed by 1817
Abstract
A statistical analysis of semantic memory should reflect the complex, multifactorial structure of the relations among its items. Still, a dominant paradigm in the study of semantic memory has been the idea that the mental representation of concepts is structured along a simple branching tree spanned by superordinate and subordinate categories. We propose a generative model of item representation with correlations that overcomes the limitations of a tree structure. The items are generated through “factors” that represent semantic features or real-world attributes. The correlation between items has its source in the extent to which items share such factors and in the strength of those factors: if many factors are balanced, correlations are overall low; if a few factors dominate, correlations become strong. Our model allows for correlations that are neither trivial nor hierarchical, and it can reproduce the general spectrum of correlations present in a dataset of nouns. We find that such correlations reduce the storage capacity of a Potts network only to a limited extent, so that the number of concepts that can be stored and retrieved in a large, human-scale cortical network may still be of order 10^7, as originally estimated without correlations. When this storage capacity is exceeded, however, retrieval fails completely only for balanced factors; above a critical degree of imbalance, a phase transition leads to a regime where the network still extracts considerable information about the cued item, even if it does not recover its detailed representation: partial categorization seems to emerge spontaneously as a consequence of the dominance of particular factors, rather than being imposed ad hoc. We argue this to be a relevant model of semantic memory resilience in Tulving’s remember/know paradigms. Full article
(This article belongs to the Special Issue Statistical Mechanics of Neural Networks)

Open Access Article
Statistical Mechanics of On-Line Learning Under Concept Drift
Entropy 2018, 20(10), 775; https://doi.org/10.3390/e20100775 - 10 Oct 2018
Cited by 5 | Viewed by 1957
Abstract
We introduce a modeling framework for the investigation of on-line machine learning processes in non-stationary environments. We exemplify the approach in terms of two specific model situations: In the first, we consider the learning of a classification scheme from clustered data by means of prototype-based Learning Vector Quantization (LVQ). In the second, we study the training of layered neural networks with sigmoidal activations for the purpose of regression. In both cases, the target, i.e., the classification or regression scheme, is considered to change continuously while the system is trained from a stream of labeled data. We extend and apply methods borrowed from statistical physics which have been used frequently for the exact description of training dynamics in stationary environments. Extensions of the approach allow for the computation of typical learning curves in the presence of concept drift in a variety of model situations. First results are presented and discussed for stochastic drift processes in classification and regression problems. They indicate that LVQ is capable of tracking a classification scheme under drift to a non-trivial extent. Furthermore, we show that concept drift can cause the persistence of sub-optimal plateau states in gradient-based training of layered neural networks for regression. Full article
(This article belongs to the Special Issue Statistical Mechanics of Neural Networks)
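The LVQ component of the study can be illustrated with a minimal prototype-based learner trained from a stream of labeled data; the cluster geometry, learning rate, and prototype initialization below are illustrative assumptions, not the paper's model (which additionally lets the clusters drift while training proceeds):

```python
import numpy as np

rng = np.random.default_rng(2)

# Labeled stream from two Gaussian clusters (the "concept" to be learned);
# in the concept-drift setting, the cluster means would move over time
def sample(n):
    y = rng.integers(0, 2, n)
    x = rng.normal(0, 0.5, (n, 2)) + np.where(y[:, None] == 0, [-1.0, 0.0], [1.0, 0.0])
    return x, y

x, y = sample(2000)
w = np.array([x[y == 0][:10].mean(0), x[y == 1][:10].mean(0)])  # one prototype per class
eta = 0.05

# LVQ1: the winning prototype is attracted to correctly labeled samples, repelled otherwise
for xi, yi in zip(x, y):
    k = np.argmin(np.linalg.norm(w - xi, axis=1))
    w[k] += eta * (1 if k == yi else -1) * (xi - w[k])

xt, yt = sample(500)
pred = np.argmin(np.linalg.norm(xt[:, None] - w[None], axis=2), axis=1)
print("test accuracy:", np.mean(pred == yt))
```

Because each update uses a single example from the stream, the same loop tracks a slowly moving target when the generator drifts, which is the setting the statistical-physics analysis in the paper describes exactly in the large-dimension limit.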

Open Access Article
Modelling of Behavior for Inhibition Corrosion of Bronze Using Artificial Neural Network (ANN)
Entropy 2018, 20(6), 409; https://doi.org/10.3390/e20060409 - 26 May 2018
Cited by 5 | Viewed by 1725
Abstract
In this work, three models based on Artificial Neural Networks (ANNs) were developed to describe the inhibition-corrosion behavior of bronze in 3.5% NaCl + 0.1 M Na2SO4, using experimental data from Electrochemical Impedance Spectroscopy (EIS). The database was randomly divided into training, validation, and test sets. The process parameters used as inputs of the ANN models were frequency, temperature, and inhibitor concentration; each model predicted one component of the EIS spectrum (Zre, Zim, or Zmod). The transfer functions used for the learning process were the hyperbolic tangent sigmoid in the hidden layer and linear in the output layer, while the Levenberg–Marquardt algorithm was applied to determine the optimum values of the weights and biases. The statistical analysis of the results revealed that the ANN models for Zre, Zim, and Zmod can successfully predict the inhibition-corrosion behavior of bronze under varying temperature, frequency, and inhibitor concentration. In addition, a sensitivity analysis showed that all three input parameters were key to describing the behavior. Full article
(This article belongs to the Special Issue Statistical Mechanics of Neural Networks)
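The architecture described above (tanh hidden layer, linear output, Levenberg–Marquardt training) can be reproduced in miniature with SciPy's MINPACK-backed `least_squares`; the toy target curve, network size, and restart scheme below are illustrative assumptions, not the paper's EIS data or model:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
x = np.linspace(-2, 2, 80)
y = np.sin(2 * x) + 0.05 * rng.normal(size=x.size)  # stand-in for a measured response curve

H = 6  # hidden units: tanh activations, linear output layer

def net(p, x):
    # parameter vector p holds input weights, hidden biases, output weights, output bias
    W1, b1, W2, b2 = p[:H], p[H:2*H], p[2*H:3*H], p[3*H]
    return np.tanh(np.outer(x, W1) + b1) @ W2 + b2

def residuals(p):
    return net(p, x) - y

# Levenberg-Marquardt fit of all weights and biases, best of a few random restarts
best = min(
    (least_squares(residuals, rng.normal(0, 1, 3 * H + 1), method="lm") for _ in range(5)),
    key=lambda r: r.cost,
)
rmse = np.sqrt(np.mean(best.fun ** 2))
print("RMSE:", rmse)
```

Levenberg–Marquardt is well suited to this setting because the network output is a smooth nonlinear least-squares model with few parameters; the random restarts guard against the local minima that single-start fits can land in.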
