
Entropy Production and Irreversibility in the Linearized Stochastic Amari Neural Model

1. Department of Mathematics & Physics, University of Campania “Luigi Vanvitelli”, Viale Lincoln 5, 81100 Caserta, Italy
2. Gran Sasso Science Institute (GSSI), Viale F. Crispi 7, 67100 L’Aquila, Italy
3. INFN-LNGS, Via Giovanni Acitelli 22, 67100 L’Aquila, Italy
4. Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova, Italy
5. INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova, Italy
* Author to whom correspondence should be addressed.
Entropy 2025, 27(11), 1104; https://doi.org/10.3390/e27111104
Submission received: 31 July 2025 / Revised: 13 October 2025 / Accepted: 22 October 2025 / Published: 25 October 2025
(This article belongs to the Section Non-equilibrium Phenomena)

Abstract

One of the most intriguing results coming from the application of statistical mechanics to the study of the brain is the understanding that the brain, as a dynamical system, is inherently out of equilibrium. In the realm of non-equilibrium statistical mechanics and stochastic processes, the standard observable computed to determine whether a system is at equilibrium or not is the entropy produced along the dynamics. For this reason, we present here a detailed calculation of the entropy production in the Amari model, a coarse-grained model of the brain neural network consisting of an integro-differential equation for the neural activity field, when stochasticity is added to the original dynamics. Since the way to add stochasticity is always to some extent arbitrary, particularly for coarse-grained models, there is no general prescription to do so. We investigate precisely the interplay between the noise properties and the original model features, discussing in which cases the stationary state is in thermal equilibrium and in which cases it is out of equilibrium, providing explicit and simple formulae. Following the derivation for the particular case considered, we also show how the entropy production rate is related to the variation in time of the Shannon entropy of the system.

1. Neural Dynamics Irreversibility and Statistical Mechanics

One possible description of large-scale brain activity is the one given by coarse-grained models based on the dynamics of neural fields. In such models, the average activity of a large neuronal population at a given position is encoded in the specific form of the integro-differential equations which govern the field dynamics [1]. Such a description is envisaged for understanding the behavior of a large number of interacting units without including all possible microscopic details of the real dynamics. Since the pioneering contributions of Wilson and Cowan [2] and Amari [3,4], tools from both equilibrium and non-equilibrium statistical mechanics, with particular reference to the emergence of critical behaviors [1,5,6,7,8,9], have been extensively used for characterizing neural activity. In particular, the necessity to consider non-equilibrium statistical mechanics has been motivated by the evidence that brain activity exhibits features of an intrinsically irreversible process [10]. An issue which therefore stays at the forefront of the interplay between theoretical physics and neuroscience is how to use the tools of non-equilibrium statistical mechanics to ascertain the irreversible nature of brain dynamics from data. To this end, among other areas, the study of fluctuation–dissipation relationships, namely how the internal correlations of a given system are connected to its response to an external perturbation, has been the most widely considered. The crucial feature of fluctuation–dissipation relationships is that they take a different form depending on whether the nature of the system dynamics is irreversible or not. These relationships have therefore been considered to study the functional relationship between the evoked response to external stimuli and steady-state correlations in brain activity [11,12], revealing its irreversible nature [13]. The main limitation of the fluctuation–dissipation approach to the study of irreversibility in brain activity is the necessity to perform response experiments: the possibility to ascertain the nature of the dynamics by simply monitoring specific indicators, without the intervention of external stimuli, is therefore appealing. For this purpose, it is quite useful to study how fluctuations in relevant observables related to brain activity are correlated in time, in particular through functions of the kind C(t, t'), for which we do not need to specify either the specific observable or the structure of the function; we just need to know that t' is the time of the first detection of the signal and t is the time of the second detection of the signal, with t > t'. The role of C(t, t') is then to tell us how the fluctuations at t are correlated to those at t'. By definition, the system has equilibrium/reversible dynamics when the time-ordered pattern of fluctuations is statistically equivalent whether the sequence is considered from t' to t or from t to t'. In this situation, the dynamics of the system are said to satisfy time reversal symmetry, in which case the correlation function also has the same symmetry, i.e., C(t, t') = C(t', t). It is then sufficient to spot a single observable for which the above symmetry is broken, i.e., C(t, t') ≠ C(t', t), to say that time reversal symmetry is broken, namely that the system is irreversible. Unfortunately, the behavior of time-correlation functions strongly depends on the choice of the observables and on their specific functional form, which must often be chosen case by case.
While the use of a time-correlation function presents a clear advantage, namely that one does not need a specific model for the system’s microscopic dynamics, it also has a major limitation: it is not clear in all cases a priori what the correct observable to choose is for the correlation functions [14]. On the contrary, an objective and universal indicator of the lack of thermal equilibrium is the so-called entropy production rate, which is zero for systems at thermal equilibrium and positive for systems with irreversible dynamics, and it can be measured directly from “unperturbed” dynamics, namely from the spontaneous dynamical evolution of the system, without applying external stimuli. The only limitation of entropy production is that in order to obtain a reliable estimate of it, one needs to introduce an appropriate theoretical model of the system: it cannot be computed solely from data without any assumptions on the underlying model [14]. It is for this reason that, if one wishes to study irreversibility, and hence entropy production, from brain data in connection to the Amari model, the first task is to derive the explicit formula for entropy production which can be obtained for the case at hand. In particular, since the original equations of the model are deterministic, a necessary technical step in order to use the standard formalism to compute entropy production, without resorting to the formalism proper for deterministic systems [15], is to randomize the dynamics. The task of the present discussion is precisely to show how different methods of dynamic randomization are connected to the intrinsic properties of the model, yielding different results for entropy production.
This paper is structured as follows: In Section 2, we recall from the realm of non-equilibrium statistical mechanics and stochastic thermodynamics the modern definition of entropy production in stochastic systems, first provided in a famous paper by Lebowitz and Spohn [16], also mentioning that it is related to the Shannon entropy of the system of interest. The purpose of this section is also to motivate the choice of entropy production rather than observables to characterize the degree of irreversibility in the system. Since the tools introduced in Section 2 apply specifically to stochastic systems, in Section 3, we introduce a linearized version of the Amari equation, proposing a standard protocol to randomize it and commenting on which version is the simplest choice in terms of model properties, which leads to an equilibrium state with reversible dynamics, where the entropy production is zero. Then in Section 4, we present our first original result: the complete calculation of entropy production as an indicator of irreversibility in the Lebowitz–Spohn approach, where the probability of trajectories forward in time is compared with the probability of trajectories backward in time. The resulting expression clarifies the peculiar interplay between the Amari model’s properties and the properties of the superimposed stochasticity in determining the reversible or irreversible nature of the dynamics. Section 5 is then dedicated to showing how the same expression obtained in Section 4 from the Lebowitz–Spohn formalism can be obtained by different means by studying the rate of variation in time of the Shannon entropy, another concept typical of stochastic thermodynamics. The peculiarity of this section is the discussion in terms of a functional formalism for both the Shannon entropy and the Fokker–Planck equation governing the stochastic dynamics of the neural activity field, which is not so common in the literature. Furthermore, by comparing the Lebowitz–Spohn entropy production with the variation in Shannon entropy, this section is useful for gaining better physical insights into the object of study. Section 6 is then dedicated to showing how a simple explicit expression of entropy production can be computed when assuming translational invariance in the space of coordinates for the original equation, making transparent the role played by the symmetry properties of the linearized Amari equation force. Finally in Section 7, we present our conclusions.

2. Entropy Production in Non-Equilibrium Statistical Mechanics

As noted in the previous section, an important concept that neuroscience, in particular the study of brain activity, has borrowed from non-equilibrium statistical mechanics is entropy production [17,18], a measurable quantity which characterizes the degree of irreversibility of brain dynamics. For instance, entropy production has been related to the hierarchical and asymmetrical structure of brain connections [19,20,21], with techniques also being successfully applied to the analysis of experimental signals [10,11]. It has also been shown that different estimates of entropy production from empirical datasets reveal a correlation to the consciousness level and the activity recorded in subjects performing different tasks [10,13,19,22]. Moreover, an interesting decomposition of entropy production in oscillatory modes has been applied to study how the brain’s rhythms contribute to dissipation and information processing [23]. Let us therefore define and elucidate the meaning of entropy production. The fact that entropy, which is a state function of a macroscopic system, grows along an irreversible transformation is a well-known fact from the 19th century. An account of how to quantitatively relate the entropy constantly produced along the irreversible dynamics of a stationary non-equilibrium state to macroscopic transport coefficients can be found in a historical book by De Groot and Mazur [24], which presents an overview of the development of this subject in the second half of the 20th century. Determining how the entropy produced macroscopically along stationary irreversible dynamics can be related to the probability of microscopic trajectories is a much more recent achievement. The standard way to compute entropy production in all stochastic processes was settled in the seminal paper by Lebowitz and Spohn in 1999 [16]. This work was proposed to extend the definition of Gallavotti–Cohen symmetry for the probability of stationary non-equilibrium fluctuations, initially conceived for deterministic dissipative systems, to systems which can be characterized in terms of a Markov chain. While in deterministic systems, the entropy produced macroscopically can be related, within the framework of a refined mathematical analysis, to the rate of contraction of phase space [15], the work of Lebowitz and Spohn was the first to provide an explicit formula to relate the same quantity to the probability of system trajectories. And, let us add, it is precisely the overwhelming simplicity of computing entropy production in the presence of stochasticity that motivated our present work, where we introduce the stochastic version of the Amari equation and explicitly compute the related entropy production rate. According to the Lebowitz–Spohn formula, if one considers the time interval [ 0 , t ] and denotes the sequence of configurations visited by the system in this interval as Ω 0 t , which is usually denoted as forward trajectory, and the reversed sequence of configurations as Ω ¯ 0 t , which is usually denoted as backward trajectory, the entropy produced along the time span [ 0 , t ] is [14,25]
$$\Sigma(t) = \log \frac{P[\Omega_0^t]}{P[\bar{\Omega}_0^t]} \tag{1}$$
One of the most interesting properties of entropy production is that in general, i.e., for systems with finite-term memory, it can be shown that Σ ( t ) , which depends on the stochasticity of the dynamics, has a self-averaging property [26]; namely, in the case of a large system size, its probability distribution naturally concentrates around the average, as shown in the following:
$$\lim_{t\to\infty}\frac{\Sigma(t)}{t} = \lim_{t\to\infty}\frac{\langle\Sigma(t)\rangle}{t}\,, \tag{2}$$
where the average ⟨·⟩ is taken with respect to the probability of forward trajectories, and we divide by t in order to obtain a finite quantity, since Σ(t) is by definition extensive in t. Therefore, since after a long time we have Σ(t) ≈ ⟨Σ(t)⟩, the properties of the typical value of entropy production for t ≫ 1 are those of the mean, which reads as
$$\langle\Sigma(t)\rangle = \int \mathcal{D}\Omega_0^t\; P(\Omega_0^t)\,\log\frac{P[\Omega_0^t]}{P[\bar{\Omega}_0^t]}\,. \tag{3}$$
What is nice about the expression in Equation (3) is that it explicitly takes the form of Kullback–Leibler divergence. This is particularly interesting for motivating the choice of entropy production rather than correlation functions in order to assess the degree of irreversibility of the dynamics. As the distance between two probability distributions, Kullback–Leibler divergence does not depend on the choice of variables, as long as we have the probabilities P ( Ω 0 t ) and P ( Ω ¯ 0 t ) . This gives entropy production a great degree of universality: it does not depend on the variables we choose to represent the system trajectories, as long as we are able to determine with precision their probability. In addition, if some degrees of freedom are integrated out, the Kullback–Leibler divergence cannot increase, meaning that the entropy production rate decreases under coarse-graining [14]. On the other hand, the “limit” of this approach is that we need precise mathematical modeling for the dynamics of our system; otherwise it is quite unlikely that we will obtain correct probability distributions. The general manipulations conducted in the literature, starting from the definition of entropy production in Equation (3), are usually aimed at writing the stationary entropy production rate in terms of the appropriate combination of correlation functions, a task which is accomplished by computing, in any specific case, the quantity
$$\sigma = \lim_{t\to\infty}\frac{\langle\Sigma(t)\rangle}{t} = \left\{\text{stationary correlators}\right\}. \tag{4}$$
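To make the Lebowitz–Spohn definition concrete before applying it to the Amari model, the sketch below (an illustrative example, not part of the model studied here) evaluates the stationary entropy production rate of a small continuous-time Markov chain, for which the rate reduces to the well-known Schnakenberg sum over pairs of states; all transition rates are arbitrary values chosen only for the example.

```python
import numpy as np

# Illustrative example (not the Amari model): stationary entropy production
# rate of a 3-state continuous-time Markov chain. For Markov jump processes
# the Lebowitz-Spohn rate reduces to the Schnakenberg sum over pairs of states.
W = np.array([[0.0, 2.0, 1.0],
              [0.5, 0.0, 3.0],
              [2.0, 0.5, 0.0]])        # W[i, j] = jump rate i -> j (arbitrary values)
Q = W.copy()
np.fill_diagonal(Q, -W.sum(axis=1))    # generator matrix: rows sum to zero

# stationary distribution p solves p Q = 0 with sum(p) = 1
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
p, *_ = np.linalg.lstsq(A, b, rcond=None)

sigma = 0.0
for i in range(3):
    for j in range(3):
        if i != j:
            flux_ij, flux_ji = p[i] * W[i, j], p[j] * W[j, i]
            sigma += 0.5 * (flux_ij - flux_ji) * np.log(flux_ij / flux_ji)
print("entropy production rate:", sigma)  # zero if and only if detailed balance holds
```

The rate is strictly positive whenever some pair of states carries a net probability flux, which is the discrete analogue of the broken time reversal symmetry discussed above.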
The purpose of the work presented here is precisely to illustrate how by adding stochasticity to the originally deterministic Amari model, it is possible to exploit the entropy production formulae that are widespread in the contemporary literature [15,16,27,28]. In addition to this, we will show how the same expression for the entropy production rate can be obtained, partly by using known formulae [25,27,29] and also starting from the Shannon entropy formula from stochastic thermodynamics. This alternative derivation of the same formulae will clarify the whole picture, connecting entropy production with the time variation in the stationary entropy of the system. Let us assume that all variables needed to specify the state of the system are indicated by X and that their time-dependent probability distribution P ( X , t ) is known. In this case, the Shannon entropy of the system is
$$S_{\rm sys}(t) = -\int \mathcal{D}X\; P(X,t)\,\log\!\left[P(X,t)\right], \tag{5}$$
where we use the symbol $\mathcal{D}X$ to denote functional integration over a possibly multidimensional space. By using simple formal manipulations, together with the Fokker–Planck equation of which P(X,t) is a solution, it is a standard procedure to show that the time derivative of the system entropy, defined as the Shannon entropy in Equation (5), splits into the difference of two contributions, which are usually interpreted, respectively, as the entropy of the universe, S_tot, and the entropy of the reservoir coupled to the system, S_res:
$$\dot{S}_{\rm sys}(t) = \dot{S}_{\rm tot}(t) - \dot{S}_{\rm res}(t)\,. \tag{6}$$
In a stationary state, the probability distribution P ( X , t ) is stationary, so the entropy of the system is also constant, and S ˙ sys ( t ) = 0 . In this case, the rate of increase in the entropy of the universe and the entropy of the reservoir are identical, S ˙ tot ( t ) = S ˙ res ( t ) , and we will show that they also correspond to the system’s entropy production rate as computed from a Lebowitz–Spohn-like functional. So far, we have kept the discussion general in order to present all the aspects of entropy production that are non-specific to a particular model. The next section will be devoted to the presentation of the Amari model, which we chose as a benchmark to study entropy production in a model for brain dynamics.

3. Linearized Stochastic Amari Model: Equilibrium Properties

In the previous section, we introduced the concept of the entropy production rate as an indicator for the degree of irreversibility in system dynamics. We also mentioned how, due to the Lebowitz–Spohn generalization of the Gallavotti–Cohen results, the calculation of entropy production is much easier in a system with stochasticity. It is for this reason that, elaborating on its original deterministic version, we added stochasticity to the Amari model for neural dynamics. Before presenting the calculation of entropy production in this model, let us introduce the model and explain the motivation behind this choice.
The stochastic Amari model is an integro-differential stochastic equation which models the neural activity of the brain by means of a local activation field u(x,t) defined in a spatio-temporal domain, $\mathbf{x}\in\Omega\subseteq\mathbb{R}^d$ and $t\in\mathbb{R}$ (we also assume Ω to be a boundary-free domain):
$$\partial_t u(\mathbf{x},t) = -\frac{1}{\tau}\,u(\mathbf{x},t) + \frac{1}{\tau}\int_\Omega w(\mathbf{x},\mathbf{y})\,f[u(\mathbf{y},t)]\,d\mathbf{y} + \xi(\mathbf{x},t)\,, \tag{7}$$
where τ is the relaxation time, and the two-point function w(x,y), which models the transmission of impulses from one region of the brain to another, is usually known as the synaptic weight. Equation (7) also contains the activation function f[u(x,t)], which is expected to saturate to a constant value when the local field exceeds a certain threshold u*. Usually the activation function is modeled as a sigmoid:
$$f[u] = \frac{1}{e^{\beta(u^*-u)}+1} \tag{8}$$
with gain β > 0 and threshold u * > 0 . For the present discussion, we will consider ξ ( x , t ) to be a Gaussian field delta-correlated in time, i.e., white noise. Besides the correlation in the time domain, the noise field is also characterized by its spatial covariance:
$$\langle\xi(\mathbf{x},t)\,\xi(\mathbf{y},t')\rangle = \gamma(\mathbf{x},\mathbf{y})\,\delta(t-t')\,, \tag{9}$$
where, in order to obtain a consistent definition of noise, its covariance γ ( x , y ) must be a symmetric function of its coordinates:
$$\gamma(\mathbf{x},\mathbf{y}) = \gamma(\mathbf{y},\mathbf{x})\,. \tag{10}$$
For simplicity, here, we only consider the case of additive noise. Thus, Equation (7) does not suffer from a different physical interpretation resulting from different discretization schemes [30]. This choice allows us to focus on the role of synaptic couplings and noise correlations in sustaining irreversible stationary states without facing all the difficulties encountered when multiplicative noise, with possibly time-varying power-law behavior, is taken into account [31]. We chose a stochastic version of the Amari neural field equation because it provides a minimal, spatially continuous mean-field description that directly links macroscopic cortical activity to physiologically interpretable ingredients: the synaptic coupling kernel w ( x , y ) encodes anatomical connectivity, the non-linear gain function f [ u ( x , t ) ] represents neuronal input–output properties, and the explicit temporal decay modeled by the term τ 1 u ( x , t ) represents membrane/leak dynamics. The deterministic Amari field u ( x , t ) was originally introduced as a canonical model for pattern formation and spatially structured activity in the cortex (bumps, waves, spatial patterns), as discussed in Chap. 1 of Ref. [32]. The addition of stochastic forcing to this framework is not only a technical trick to allow for a much easier calculation of entropy production but also a natural and well-studied extension because real cortical tissue is subject to numerous sources of variability (synaptic noise, finite-size effects, background inputs) whose principal effects are captured at the continuum level by noise terms (see Chap. 9 of Ref. [32]). In addition to this, stochastic neural field equations have been used to analyze phenomena such as the noise-induced wandering of stationary bumps, diffusion of wave positions, variability in stimulus tuning, and noise-driven bifurcations of spatial patterns—phenomena directly relevant to working memory, perceptual switching, and cortical variability observed in experiments [4,32,33]. From a mathematical perspective, the stochastic Amari model requires a rigorous probabilistic treatment (well-posedness, invariant measures, ergodicity) under reasonable assumptions on the synaptic kernel w ( x , y ) and noise correlations γ ( x , y ) , which both justify the analytical approach taken here. This combination of physiological interpretability, a direct connection to experimentally observed noisy dynamics, and the mature mathematical literature motivated our choice of this model for randomization.
Since the Amari model is originally defined as a deterministic model, there is a priori some degree of arbitrariness in the choice of noise properties. In fact, whether the system attains a stationary state, which can be deemed as equilibrium, depends, as we are going to show, on the interplay between the properties of noise and those of the local force acting on the neural activity field. Here we start by showing which choice of noise yields equilibrium properties, leaving a more detailed study of the general case to the next section. As the first step in this direction, it is convenient to linearize the Amari equation around a stationary solution. One can therefore denote η ( x , t ) as the fluctuations around the homogeneous solution u ( x , t ) = u 0 :
$$\eta(\mathbf{x},t) = u(\mathbf{x},t) - u_0\,,\qquad \frac{|u(\mathbf{x},t) - u_0|}{u_0}\ll 1\,. \tag{11}$$
We consider then the expansion to linear order in η ( x , t ) of the non-linear activation function f [ u ( x , t ) ] :
$$f[u(\mathbf{x},t)] = f[u_0] + f'[u_0]\,\eta(\mathbf{x},t) + \dots \tag{12}$$
The homogeneous solution u_0 is the one obtained by plugging u(x,t) = u_0 + η(x,t) into Equation (7), then expanding and solving for u_0 after having set η̇(x,t) = 0 and ξ(x,t) = 0:
$$0 = -u_0 + \tilde{w}\,f[u_0]\,, \tag{13}$$
where
$$\tilde{w} = \int_\Omega w(\mathbf{x},\mathbf{y})\,d\mathbf{y}\,. \tag{14}$$
The claim that the integral on the right-hand side of Equation (14) yields a constant value w ˜ independent from the coordinate x can be simply justified under the hypothesis that all points in the brain are equivalent from the perspective of the connectivity with the rest of the system, which is quite reasonable if one considers that all models for neural networks in the brain [34,35] are usually dense networks. For a given choice of w ˜ , the stationary homogeneous solution u 0 can be simply obtained from the following algebraic non-linear equation
$$f[u_0] = \frac{u_0}{\tilde{w}}\,. \tag{15}$$
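As a concrete illustration, the following minimal sketch solves the self-consistency condition of Equation (15) numerically for the sigmoid activation of Equation (8); the values of β, u*, and w̃ are arbitrary choices made only for this example, and the root bracket is an assumption.

```python
import numpy as np
from scipy.optimize import brentq

# Minimal sketch (arbitrary illustrative parameters, not taken from the paper):
# solve the self-consistency condition f[u0] = u0 / w_tilde of Eq. (15) for the
# homogeneous stationary state, with the sigmoid activation function of Eq. (8).
beta, u_star, w_tilde = 4.0, 1.0, 2.0

def f(u):
    return 1.0 / (np.exp(beta * (u_star - u)) + 1.0)

u0 = brentq(lambda u: f(u) - u / w_tilde, 1e-6, w_tilde)  # one root bracketed in (0, w_tilde)
fprime_u0 = beta * f(u0) * (1.0 - f(u0))                  # f'(u0), the gain entering Eq. (18)
print("homogeneous fixed point u0 =", u0)
print("f'(u0) =", fprime_u0)
```

Depending on β and w̃ the non-linear condition may admit more than one solution; the sketch simply returns the one contained in the chosen bracket.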
By plugging the expansion of Equation (11) into the stochastic Amari equation and truncating to the linear order, we get
$$\partial_t \eta(\mathbf{x},t) = -\frac{1}{\tau}\,\eta(\mathbf{x},t) + \frac{f'[u_0]}{\tau}\int_\Omega w(\mathbf{x},\mathbf{y})\,\eta(\mathbf{y},t)\,d\mathbf{y} + \xi(\mathbf{x},t)\,, \tag{16}$$
which is the stochastic linear Amari equation for the (small) fluctuations η ( x , t ) around the homogeneous solution u 0 . The last equation can be rewritten in a more compact form as follows:
$$\dot\eta(\mathbf{x},t) = \Lambda[\eta](\mathbf{x},t) + \xi(\mathbf{x},t)\,, \tag{17}$$
where Λ [ η ] ( x , t ) is an integral operator, acting on the field η ( x , t ) , defined by its kernel λ ( x , y )
$$\Lambda[\eta](\mathbf{x},t) = \int d\mathbf{y}\;\lambda(\mathbf{x},\mathbf{y})\,\eta(\mathbf{y},t)\,,\qquad
\lambda(\mathbf{x},\mathbf{y}) = -\frac{1}{\tau}\,\delta^{(d)}(\mathbf{x}-\mathbf{y}) + \frac{f'[u_0]}{\tau}\,w(\mathbf{x},\mathbf{y})\,. \tag{18}$$
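For readers who prefer a concrete discretization, the following sketch integrates the linearized dynamics of Equations (16)–(18) on a one-dimensional periodic grid with an Euler–Maruyama scheme; the Gaussian synaptic kernel, the value of f'(u_0), and the noise temperature are illustrative assumptions, chosen so that the discretized operator Λ is negative definite.

```python
import numpy as np

# Minimal sketch (not from the paper): Euler-Maruyama integration of the
# linearized stochastic Amari Equations (16)-(18) on a 1-d periodic grid.
# The Gaussian kernel, f'(u0), and the noise temperature are illustrative
# assumptions, chosen so that the discretized operator Lambda is stable.
rng = np.random.default_rng(0)
N, L, tau, fp, T_noise = 64, 10.0, 1.0, 0.2, 0.1
dx = L / N
x = np.arange(N) * dx

# periodic distance and a symmetric Gaussian synaptic kernel w(x - y)
d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, L - d)
w = np.exp(-d**2 / 2.0)

# discretized kernel lambda(x, y) of Eq. (18): -delta(x - y)/tau + f'(u0) w(x, y)/tau,
# with the Dirac delta represented on the grid by identity / dx
lam = -np.eye(N) / (tau * dx) + fp * w / tau

dt, steps = 1e-3, 20000
eta = np.zeros(N)
for _ in range(steps):
    drift = dx * (lam @ eta)                              # integral over y -> sum * dx
    xi = rng.normal(0.0, np.sqrt(T_noise * dt / dx), N)   # spatially white noise, cf. Eq. (21)
    eta = eta + dt * drift + xi
print("stationary variance of the fluctuation field:", eta.var())
```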
One of the goals of the following discussion is to study how the presence of a thermodynamic equilibrium depends on the properties of λ(x,y). Clearly, the operator Λ must be bounded and negative definite to guarantee that Equation (17) is well defined. For instance, a sufficient condition is that λ ∈ L²(Ω), with L²(Ω) being the space of square-integrable functions on Ω. At the same time, the noise correlation kernel γ must be positive definite, and for simplicity, we also assume it is invertible. If the synaptic weight w(x,y) is a symmetric function of the coordinates, then Λ is a self-adjoint operator. In this case, one can introduce the following potential:
$$U[\eta] = -\frac{1}{2}\int d\mathbf{x}\,d\mathbf{y}\;\eta(\mathbf{x})\,\lambda(\mathbf{x},\mathbf{y})\,\eta(\mathbf{y}) \tag{19}$$
and write the Langevin equation as
$$\dot\eta(\mathbf{x},t) = -\frac{\delta U[\eta]}{\delta\eta(\mathbf{x},t)} + \xi(\mathbf{x},t)\,. \tag{20}$$
In this case it is sufficient to have a noise which is delta-correlated in both space and time, i.e., a delta-correlated spatial kernel
$$\gamma(\mathbf{x},\mathbf{y}) = T\,\delta(\mathbf{x}-\mathbf{y}) \tag{21}$$
in order to have a Boltzmann-like equilibrium distribution (here the Boltzmann constant k B is set equal to 1)
$$P_{\rm eq}[\eta] \propto \exp\!\left(-U[\eta]/T\right). \tag{22}$$
We will demonstrate in the following sections that there is a more general condition, also involving the properties of the noise kernel, which guarantees the presence of thermodynamic equilibrium.

4. Entropy Production from the Lebowitz–Spohn Formula

After the historical introduction on entropy production presented in Section 2, here we show how to compute it exactly in the stochastic Amari model. Let us just notice that, at variance with the original entropy production formula given in the Lebowitz–Spohn paper for Markov chains [16], where trajectories are written in terms of transition rates, the stochastic Amari model discussed here consists of continuous time stochastic processes, so we will need the functional Onsager–Machlup formalism to determine path probabilities [25,36,37]. First of all, we have to provide an explicit definition for trajectories, which were referred to only in an abstract way in Section 2. In the present discussion, the forward trajectory is represented by the collection of the field η ( x , t ) configurations, ordered chronologically
$$\Omega_0^t = \left\{\eta(\mathbf{x},s)\,\big|\,s\in[0,t]\right\}, \tag{23}$$
while the symbol Ω ¯ 0 t denotes the backward trajectory
$$\bar{\Omega}_0^t = \left\{\overline{\eta(\mathbf{x},s)}\,\big|\,s\in[0,t]\right\}. \tag{24}$$
Here, the overline denotes the sequence of configurations defined as follows
$$\overline{\eta(\mathbf{x},s)} = \eta(\mathbf{x},t-s)\,,\qquad s\in[0,t]\,, \tag{25}$$
$$\frac{d}{ds}\overline{\eta(\mathbf{x},s)} = -\frac{d}{dt}\eta(\mathbf{x},t-s)\,,\qquad s\in[0,t]\,. \tag{26}$$
The apparently ambiguous notation for the derivative in the right-hand term of Equation (26) must be interpreted as follows:
$$\frac{d}{dt}\eta(\mathbf{x},t-s) = \left.\frac{d}{du}\eta(\mathbf{x},u)\right|_{u=t-s}\,. \tag{27}$$
The task is then to compute
$$\Sigma(t) = \log\frac{P[\Omega_0^t]}{P[\bar{\Omega}_0^t]}\,, \tag{28}$$
for which we need the expressions of P [ Ω 0 t ] and P [ Ω ¯ 0 t ] . According to the Onsager–Machlup formula, for which the derivation is standard in the case of additive noise, see, for instance, [25,29], the probability of forward and backward trajectories can be, respectively, written as follows:
$$P[\Omega_0^t] \propto \exp\left\{-\frac{1}{2}\int_0^t ds\int d\mathbf{x}\,d\mathbf{y}\,\Big[\dot\eta(\mathbf{x},s) - \Lambda[\eta](\mathbf{x},s)\Big]\,\gamma^{-1}(\mathbf{x},\mathbf{y})\,\Big[\dot\eta(\mathbf{y},s) - \Lambda[\eta](\mathbf{y},s)\Big]\right\}, \tag{29}$$
$$P[\bar{\Omega}_0^t] \propto \exp\left\{-\frac{1}{2}\int_0^t ds\int d\mathbf{x}\,d\mathbf{y}\,\Big[\dot\eta(\mathbf{x},t-s) + \Lambda[\eta](\mathbf{x},t-s)\Big]\,\gamma^{-1}(\mathbf{x},\mathbf{y})\,\Big[\dot\eta(\mathbf{y},t-s) + \Lambda[\eta](\mathbf{y},t-s)\Big]\right\}. \tag{30}$$
From the above formulae, a straightforward algebraic manipulation yields the total entropy production, which takes the form of an integral over the elapsed time:
$$\Sigma(t) = \int_0^t ds\int d\mathbf{x}\,d\mathbf{y}\left[\dot\eta(\mathbf{x},s)\,\gamma^{-1}(\mathbf{x},\mathbf{y})\,\Lambda[\eta](\mathbf{y},s) + \Lambda[\eta](\mathbf{x},t-s)\,\gamma^{-1}(\mathbf{x},\mathbf{y})\,\dot\eta(\mathbf{y},t-s)\right] = 2\int_0^t ds\int d\mathbf{x}\,d\mathbf{y}\;\dot\eta(\mathbf{x},s)\,\gamma^{-1}(\mathbf{x},\mathbf{y})\,\Lambda[\eta](\mathbf{y},s)\,, \tag{31}$$
where the last integral has to be performed with the Stratonovich prescription. This is not related to the discretization scheme adopted in the definition of the stochastic differential equation and can be derived by discretizing the dynamics in time intervals Δt, writing the path probability P[Ω_0^t], and formally taking the limit Δt → 0, as originally conceived by Onsager [36,37]. This is the most general expression of the entropy production in the Amari model, which relies on no other assumption than the linearization of the equations of motion. No translational invariance in space or symmetry under the exchange of coordinates has been assumed for the synaptic weight function w(x,y). It is then customary to consider the entropy production per unit time in the large-time limit, i.e., σ(t) = Σ(t)/t, which reads as a time average; if the dynamics relax to a stationary state, it is possible to replace this average with the average over the stationary distribution, thus obtaining an expression in terms of stationary correlation functions:
$$\sigma = \lim_{t\to\infty}\frac{2}{t}\int_0^t ds\int d\mathbf{x}\,d\mathbf{y}\,d\mathbf{z}\;\gamma^{-1}(\mathbf{x},\mathbf{y})\,\lambda(\mathbf{y},\mathbf{z})\,\dot\eta(\mathbf{x},s)\,\eta(\mathbf{z},s) = 2\int d\mathbf{x}\,d\mathbf{y}\,d\mathbf{z}\;\gamma^{-1}(\mathbf{x},\mathbf{y})\,\lambda(\mathbf{y},\mathbf{z})\,\langle\dot\eta(\mathbf{x})\,\eta(\mathbf{z})\rangle\,. \tag{32}$$
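Before proceeding with the stationary average, let us note that the Stratonovich time integral in Equation (31) can also be estimated directly along a simulated (or recorded) trajectory by means of a midpoint rule. The sketch below illustrates this for a two-variable linear Langevin system, a finite-dimensional analogue of Equation (17) with arbitrary illustrative matrices; for these parameters the stationary rate predicted by the trace formula derived below (Equation (38)) is about 1.28.

```python
import numpy as np

# Minimal sketch (a two-variable analogue of Eq. (17), with arbitrary illustrative
# parameters): estimate Sigma(t) along a simulated trajectory by accumulating the
# Stratonovich (midpoint) discretization of 2 * int ds eta_dot^T gamma^{-1} Lambda eta,
# i.e., the last integral of Eq. (31).
rng = np.random.default_rng(1)
Lam = np.array([[-1.0, 0.8],
                [-0.8, -1.0]])          # non-symmetric drift -> irreversible dynamics
Gam = 0.5 * np.eye(2)                   # noise covariance gamma
Gi = np.linalg.inv(Gam)
sqrtG = np.linalg.cholesky(Gam)

dt, steps = 1e-3, 1_000_000
eta = np.zeros(2)
Sigma = 0.0
for _ in range(steps):
    xi = sqrtG @ rng.normal(0.0, np.sqrt(dt), 2)
    eta_new = eta + dt * (Lam @ eta) + xi
    mid = 0.5 * (eta + eta_new)                       # Stratonovich midpoint value
    Sigma += 2.0 * (eta_new - eta) @ Gi @ (Lam @ mid)
    eta = eta_new

# for these matrices the stationary rate predicted by the trace formula is ~1.28
print("empirical entropy production rate:", Sigma / (steps * dt))
```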
What is now crucial in the manipulation of Equation (32) is to assume the Stratonovich convention for the definition of stochastic integrals. By replacing η ˙ ( x ) with its expression according to the Langevin equation, Equation (17), one gets the following:
$$\sigma = 2\int d\mathbf{x}\,d\mathbf{y}\,d\mathbf{z}\,d\mathbf{z}'\;\gamma^{-1}(\mathbf{x},\mathbf{y})\,\lambda(\mathbf{y},\mathbf{z})\left[\lambda(\mathbf{x},\mathbf{z}')\,\langle\eta(\mathbf{z}')\,\eta(\mathbf{z})\rangle + \delta(\mathbf{x}-\mathbf{z}')\,\langle\xi(\mathbf{z}')\,\eta(\mathbf{z})\rangle\right]. \tag{33}$$
The above formula for the entropy production rate σ consists of two terms: the first one involves only c(z,z') = ⟨η(z) η(z')⟩, the equal-time two-point correlation function of the field. The other term depends on the equal-time correlation between the noise and the field, which is also nontrivial due to the Stratonovich prescription. Therefore, Equation (33) can be rewritten as
$$\begin{aligned}
\sigma &= 2\int d\mathbf{x}\,d\mathbf{y}\,d\mathbf{z}\,d\mathbf{z}'\;\gamma^{-1}(\mathbf{x},\mathbf{y})\,\lambda(\mathbf{y},\mathbf{z})\left[\lambda(\mathbf{x},\mathbf{z}')\,c(\mathbf{z}',\mathbf{z}) + \delta(\mathbf{x}-\mathbf{z}')\,\tfrac{1}{2}\,\gamma(\mathbf{z}',\mathbf{z})\right]\\
&= 2\int d\mathbf{x}\,d\mathbf{y}\,d\mathbf{z}\,d\mathbf{z}'\;\gamma^{-1}(\mathbf{x},\mathbf{y})\,\lambda(\mathbf{y},\mathbf{z})\,\lambda(\mathbf{x},\mathbf{z}')\,c(\mathbf{z}',\mathbf{z}) + \int d\mathbf{x}\,d\mathbf{y}\,d\mathbf{z}\;\gamma^{-1}(\mathbf{x},\mathbf{y})\,\lambda(\mathbf{y},\mathbf{z})\,\gamma(\mathbf{x},\mathbf{z})\\
&= 2\int d\mathbf{x}\,d\mathbf{y}\,d\mathbf{z}\,d\mathbf{z}'\;\lambda^{T}(\mathbf{z}',\mathbf{x})\,\gamma^{-1}(\mathbf{x},\mathbf{y})\,\lambda(\mathbf{y},\mathbf{z})\,c(\mathbf{z},\mathbf{z}') + \int d\mathbf{z}\;\lambda(\mathbf{z},\mathbf{z})\,.
\end{aligned}\tag{34}$$
Let us now rewrite the expression in Equation (34) in a more transparent form. In order to do so, let us first define the trace of a generic linear operator
$$K[\eta](\mathbf{x}) = \int d\mathbf{y}\;k(\mathbf{x},\mathbf{y})\,\eta(\mathbf{y}) \tag{35}$$
as
$$\mathrm{Tr}[K] = \int d\mathbf{x}\;k(\mathbf{x},\mathbf{x})\,. \tag{36}$$
By also considering the Lyapunov equation, which is a fundamental equation that must be fulfilled in order to guarantee the existence of a stationary state in a linear system [38]
$$\int d\mathbf{z}\left[\lambda(\mathbf{x},\mathbf{z})\,c(\mathbf{z},\mathbf{y}) + c(\mathbf{x},\mathbf{z})\,\lambda^{T}(\mathbf{z},\mathbf{y})\right] = -\gamma(\mathbf{x},\mathbf{y}) \;\;\Longleftrightarrow\;\; \Lambda C + (\Lambda C)^{T} = -\Gamma\,, \tag{37}$$
it is possible to plug it into the expression of the entropy production rate in Equation (34), thus obtaining the following:
$$\begin{aligned}
\sigma &= 2\int d\mathbf{x}\,d\mathbf{y}\,d\mathbf{z}\,d\mathbf{z}'\;\lambda^{T}(\mathbf{z}',\mathbf{x})\,\gamma^{-1}(\mathbf{x},\mathbf{y})\,\lambda(\mathbf{y},\mathbf{z})\,c(\mathbf{z},\mathbf{z}') + \int d\mathbf{z}\;\lambda(\mathbf{z},\mathbf{z})
= \mathrm{Tr}\!\left[2\,\Lambda^{T}\Gamma^{-1}\Lambda C + \Lambda\right]\\
&= \mathrm{Tr}\!\left[\Lambda^{T}\Gamma^{-1}\Lambda C\right] + \mathrm{Tr}\!\left[\Lambda^{T}\Gamma^{-1}\Lambda C + \Lambda\right]\\
&= \mathrm{Tr}\!\left[\Lambda^{T}\Gamma^{-1}\Lambda C\right] + \mathrm{Tr}\!\left[\Lambda^{T}\Gamma^{-1}\left(-C\Lambda^{T} - \Gamma\right) + \Lambda\right]\\
&= \mathrm{Tr}\!\left[\Lambda^{T}\Gamma^{-1}\Lambda C\right] - \mathrm{Tr}\!\left[\Lambda^{T}\Gamma^{-1}C\Lambda^{T}\right] + \mathrm{Tr}\!\left[\Lambda - \Lambda^{T}\right]\\
&= \mathrm{Tr}\!\left[\left(\Lambda^{T}\Gamma^{-1} - \Gamma^{-1}\Lambda\right)\Lambda C\right].
\end{aligned}\tag{38}$$
The above expression of entropy production in Equation (38) is the central result of our paper. We will show how it can be written in a more explicit and insightful form when translational invariance is also considered in Section 6. In order to clarify its derivation, let us remark that from the first to the last line of Equation (38), we took advantage of the properties of the trace and of several identities which we are going to list right now. First of all, from the second to the third line of Equation (38), we used the Lyapunov stability condition in the following form:
$$\Lambda C + (\Lambda C)^{T} = -\Gamma \;\;\Longrightarrow\;\; \Lambda C = -C\Lambda^{T} - \Gamma\,. \tag{39}$$
In the passages of Equation (38), we also made use of the cyclic permutation of the trace; of the fact that for any matrix A, we have Tr [ A ] = Tr [ A T ] ; and of the fact that Γ and C, as well as their inverse, are symmetric. The above properties allowed us to write
$$\mathrm{Tr}\!\left[\Lambda^{T}\Gamma^{-1}C\Lambda^{T}\right] = \mathrm{Tr}\!\left[\left(\Lambda C\,\Gamma^{-1}\Lambda\right)^{T}\right] = \mathrm{Tr}\!\left[\Lambda C\,\Gamma^{-1}\Lambda\right] = \mathrm{Tr}\!\left[\Gamma^{-1}\Lambda\,\Lambda C\right], \tag{40}$$
which, considering that the trace of the antisymmetric matrix Λ − Λ^T is zero, Tr[Λ − Λ^T] = 0, led us to the last line of Equation (38). The expression in the last line of Equation (38) makes it clear that the condition
$$\Lambda^{T}\Gamma^{-1} = \Gamma^{-1}\Lambda \tag{41}$$
is the one needed to have a zero entropy production rate and, as a consequence, thermodynamic equilibrium. In order to rewrite the identity in Equation (41) in a more transparent way, let us multiply both sides from the left and from the right by the operator Γ, e.g., Λ^T Γ^{-1} → Γ Λ^T Γ^{-1} Γ, which, by also recalling that Γ = Γ^T, yields
$$\Gamma^{T}\Lambda^{T} = \Lambda\Gamma\,, \tag{42}$$
which is the general “equilibrium” condition we were seeking. A more transparent expression can be obtained by explicitly determining the action of the operators in Equation (41) on the activity fields, for instance, by writing Λ Γ [ η ] ( x ) as
$$\Lambda\Gamma[\eta](\mathbf{x}) = \int d\mathbf{z}\,d\mathbf{y}\;\lambda(\mathbf{x},\mathbf{z})\,\gamma(\mathbf{z},\mathbf{y})\,\eta(\mathbf{y})\,. \tag{43}$$
The identity in Equation (42) can be then more explicitly written as
$$(\Lambda\Gamma)[\eta](\mathbf{x}) - (\Gamma^{T}\Lambda^{T})[\eta](\mathbf{x}) = \int d\mathbf{z}\,d\mathbf{y}\left[\lambda(\mathbf{x},\mathbf{z})\,\gamma(\mathbf{z},\mathbf{y}) - \gamma(\mathbf{x},\mathbf{z})\,\lambda^{T}(\mathbf{z},\mathbf{y})\right]\eta(\mathbf{y}) = \int d\mathbf{y}\left[h(\mathbf{x},\mathbf{y}) - h(\mathbf{y},\mathbf{x})\right]\eta(\mathbf{y}) = 0\,, \tag{44}$$
where we introduced the function
$$h(\mathbf{x},\mathbf{y}) = \int d\mathbf{z}\;\lambda(\mathbf{x},\mathbf{z})\,\gamma(\mathbf{z},\mathbf{y})\,. \tag{45}$$
If one wishes to have thermodynamic equilibrium, the expression in the second line of Equation (44) must be zero independently of the choice of η(y), which in turn implies symmetry under the exchange of arguments for the function h(x,y). The need for this symmetry of h(x,y) tells us that the symmetry under the exchange of arguments of the synaptic kernel w(x,y) is not in itself a sufficient condition to guarantee thermal equilibrium; what is needed is the symmetry of h(x,y), which implies a nontrivial relationship between γ(x,y) and λ(x,y). It is only in the case of a white noise that is also delta-correlated across space, i.e., γ(x,y) = T δ(x − y), that the symmetry of the synaptic kernel becomes sufficient to have equilibrium, since we have
$$h(\mathbf{x},\mathbf{y}) = T\,\lambda(\mathbf{x},\mathbf{y})\,. \tag{46}$$
As shown in [14,39], in the finite-dimensional case, these relations are equivalent to the Onsager reciprocal relations [40]. As we approach the end of this section, a comment is in order: Since most people are more familiar with the concept of entropy rather than entropy production, it might be useful to show how the same expression of the entropy production obtained from the Lebowitz–Spohn approach connects to the variation in time of one of the possible definitions of entropy in our system, namely the Shannon entropy. This task is accomplished in the next subsection, showing how an expression identical to the one in Equation (38) can be obtained. We deem this sort of derivation quite useful to grasp a more profound intuition of what entropy production is, before moving to Section 6, where a more explicit expression is provided for the case of translational invariance.
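As a numerical consistency check of the operator identities derived in this section, the following sketch (a finite-dimensional discretization with randomly generated matrices, not taken from the paper) solves the Lyapunov equation (37), evaluates the trace formula (38), and verifies that σ vanishes when ΛΓ is symmetric, i.e., when the equilibrium condition (42) holds.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Minimal sketch (finite-dimensional discretization with random matrices, not from
# the paper): solve the Lyapunov equation (37), evaluate the trace formula (38),
# and check that sigma vanishes when Lambda Gamma is symmetric (condition (42)).
rng = np.random.default_rng(2)
N = 30
A = rng.normal(size=(N, N)) / np.sqrt(N)
B = rng.normal(size=(N, N)) / np.sqrt(N)

Lam = -2.0 * np.eye(N) + 0.5 * (A - A.T)      # stable, non-self-adjoint "Lambda"
S = B + B.T
Gam = np.eye(N) + 0.1 * S @ S.T               # symmetric, positive-definite "Gamma"
Gi = np.linalg.inv(Gam)

# stationary covariance C from  Lambda C + C Lambda^T = -Gamma
C = solve_continuous_lyapunov(Lam, -Gam)
sigma = np.trace((Lam.T @ Gi - Gi @ Lam) @ Lam @ C)
print("sigma =", sigma)
print("asymmetry of Lambda Gamma:", np.linalg.norm(Lam @ Gam - (Lam @ Gam).T))

# equilibrium case: Lambda self-adjoint and Gamma = identity => sigma = 0
Lam_eq = -2.0 * np.eye(N) + 0.25 * (A + A.T)
C_eq = solve_continuous_lyapunov(Lam_eq, -np.eye(N))
sigma_eq = np.trace((Lam_eq.T - Lam_eq) @ Lam_eq @ C_eq)
print("sigma at equilibrium (should be ~0):", sigma_eq)
```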

5. Entropy Production from Shannon Entropy

As noted in the discussion above, it is quite instructive to explicitly derive the relationship between the entropy production defined within the Lebowitz–Spohn approach, specifically conceived for stochastic systems, and the usual entropy considered in the canonical ensemble, namely the Shannon entropy, which, according to the prescriptions of stochastic thermodynamics [27,28,41], can be directly computed from the time-dependent functional probability distribution P_t[η] of the field η(x,t) as
$$S_{\rm sys}(t) = -\int \mathcal{D}\eta(t)\; P_t[\eta]\,\log P_t[\eta]\,, \tag{47}$$
where the symbol $\mathcal{D}\eta(t) = \prod_{\mathbf{x}} d\eta(\mathbf{x},t)$ denotes functional integration over space at a given time t. In the case of thermodynamic equilibrium, when the probability distribution of the field η(x,t) does not depend on time and has the form of a Boltzmann weight, for instance, the one of Equation (22) in Section 3, the Shannon entropy turns out to be precisely the canonical entropy of the system, i.e., the one computed as S = βE − βF, where E and F are, respectively, the internal energy and the free energy at the inverse temperature β. Also, interpreting P_t as the single-particle marginal distribution of an N-dimensional system, S_sys coincides with minus the H-function defined by Boltzmann [24]. But here we deal with a more general case. We will show how, by taking the time derivative of the entropy S_sys(t) defined above and making use of the Fokker–Planck equation, one ends up with terms among which it is possible to recognize precisely the same entropy production expression derived from the Lebowitz–Spohn approach, which we obtained in the previous section. This sort of derivation is quite general; see, for instance, [25,29]. Nevertheless, in most cases, the procedure is discussed in terms of the probability distribution of a random variable, not of a fluctuating field, so we deemed it helpful and not redundant to go through all steps of the derivation using a functional formalism, which is what is needed here, and considering the specific form taken by the Fokker–Planck equation in the functional formulation, which is not so common in the stochastic thermodynamics literature. As we have said, we are interested in the time derivative of S_sys(t), which is
$$\dot{S}_{\rm sys}(t) = -\int \mathcal{D}\eta(t)\;\dot{P}_t[\eta]\left(\log P_t[\eta] + 1\right) = -\int \mathcal{D}\eta(t)\;\dot{P}_t[\eta]\,\log P_t[\eta]\,. \tag{48}$$
The second term in the round brackets in the first line of Equation (48) was eliminated by exploiting the conservation of the probability normalization:
$$\int \mathcal{D}\eta(t)\;\frac{d}{dt}P_t[\eta] = \frac{d}{dt}\int \mathcal{D}\eta(t)\;P_t[\eta] = 0\,. \tag{49}$$
At this stage, we need to use the Fokker–Planck equation, which governs the dynamics of P_t[η] and which we write here in the form of a functional continuity equation relating the rate of change of the probability density to the negative divergence of a probability current (or flux), thus ensuring total probability conservation [42]:
$$\frac{d}{dt}P_t[\eta] = -\int d\mathbf{x}\;\frac{\delta J[\eta](\mathbf{x})}{\delta\eta(\mathbf{x},t)}\,, \tag{50}$$
where the current is defined as
$$J[\eta](\mathbf{x}) = P_t[\eta]\int d\mathbf{y}\left[\lambda(\mathbf{x},\mathbf{y})\,\eta(\mathbf{y},t) - \frac{1}{2}\,\gamma(\mathbf{x},\mathbf{y})\,\frac{\delta}{\delta\eta(\mathbf{y},t)}\log P_t[\eta]\right]. \tag{51}$$
Then, by inserting the time derivative of the probability distribution as written in Equation (50) into the expression of the Shannon entropy time derivative, Equation (48), one gets
$$\dot{S}_{\rm sys}(t) = \int \mathcal{D}\eta(t)\int d\mathbf{x}\;\frac{\delta J[\eta](\mathbf{x})}{\delta\eta(\mathbf{x},t)}\,\log P_t[\eta]\,. \tag{52}$$
By integrating the functional integral by parts, one obtains
$$\dot{S}_{\rm sys}(t) = -\int \mathcal{D}\eta\int d\mathbf{x}\; J[\eta](\mathbf{x})\,\frac{\delta}{\delta\eta(\mathbf{x},t)}\log P_t[\eta]\,, \tag{53}$$
where we assumed that the probability current and probability distribution vanish sufficiently rapidly at the boundaries so that the boundary contribution can be neglected. Let us notice that the above formulation is completely general; it does not depend on either the linearity of the model or any stationarity assumption. If one then takes into account the explicit expression of the current J [ η ] ( x ) given in Equation (51) and explicitly substitutes δ δ η ( x , t ) log P t [ η ] with
$$\frac{\delta}{\delta\eta(\mathbf{x},t)}\log P_t[\eta] = 2\int d\mathbf{z}\;\gamma^{-1}(\mathbf{x},\mathbf{z})\left[\int d\mathbf{z}'\;\lambda(\mathbf{z},\mathbf{z}')\,\eta(\mathbf{z}',t) - \frac{J[\eta](\mathbf{z})}{P_t[\eta]}\right], \tag{54}$$
it is possible to show that the time derivative of the Shannon entropy written in Equation (53) splits into the sum of only two nontrivial contributions. These contributions, according to the standard conventions in stochastic thermodynamics [28,41], can be identified, respectively, with the total entropy S_tot(t) of the universe (system + reservoir) and the entropy of the reservoir S_res(t):
$$\dot{S}_{\rm sys}(t) = \dot{S}_{\rm tot}(t) - \dot{S}_{\rm res}(t)\,, \tag{55}$$
where we have
$$\dot{S}_{\rm tot}(t) = 2\int \mathcal{D}\eta\int d\mathbf{x}\,d\mathbf{z}\;\frac{J[\eta](\mathbf{x})\,\gamma^{-1}(\mathbf{x},\mathbf{z})\,J[\eta](\mathbf{z})}{P_t[\eta]}\,, \tag{56}$$
$$\dot{S}_{\rm res}(t) = 2\int \mathcal{D}\eta\int d\mathbf{x}\,d\mathbf{z}\,d\mathbf{z}'\; J[\eta](\mathbf{x})\,\gamma^{-1}(\mathbf{x},\mathbf{z})\,\lambda(\mathbf{z},\mathbf{z}')\,\eta(\mathbf{z}')\,. \tag{57}$$
The relationship between the rate of change in the Shannon entropy and the Lebowitz–Spohn approach emerges in the stationary state, where
$$\frac{d}{dt}P[\eta] = 0 \;\;\Longrightarrow\;\; \dot{S}_{\rm sys} = 0 \;\;\Longrightarrow\;\; \dot{S}_{\rm tot} = \dot{S}_{\rm res}\,. \tag{58}$$
Physically, the last identity of Equation (58) means that in the stationary state, all of the entropy produced during the dynamics is dissipated into the environment/reservoir. The last step of the calculation shows that, in the stationary state, this rate of change in the entropy dissipated by the environment corresponds precisely to the entropy production rate according to the Lebowitz–Spohn formula, i.e., we precisely have
$$\sigma = \dot{S}_{\rm tot} = \dot{S}_{\rm res}\,. \tag{59}$$
In order to show this, we need to explicitly write the expression of P [ η ] , which in the stationary state is Gaussian:
$$P[\eta] \propto \exp\left\{-\frac{1}{2}\int d\mathbf{x}\,d\mathbf{y}\;\eta(\mathbf{x})\,c^{-1}(\mathbf{x},\mathbf{y})\,\eta(\mathbf{y})\right\}, \tag{60}$$
where the kernel is the inverse of the stationary field correlator c(x,y) = ⟨η(x) η(y)⟩. Then, by plugging this stationary probability into the expression of the current, one gets
$$J[\eta](\mathbf{x}) = P[\eta]\int d\mathbf{y}\left[\lambda(\mathbf{x},\mathbf{y})\,\eta(\mathbf{y}) + \frac{1}{2}\int d\mathbf{z}\;\gamma(\mathbf{x},\mathbf{y})\,c^{-1}(\mathbf{y},\mathbf{z})\,\eta(\mathbf{z})\right]. \tag{61}$$
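For completeness, we spell out the intermediate step leading to Equation (61): the functional derivative of the logarithm of the Gaussian stationary distribution (60) is
$$\frac{\delta}{\delta\eta(\mathbf{x})}\log P[\eta] = -\int d\mathbf{y}\; c^{-1}(\mathbf{x},\mathbf{y})\,\eta(\mathbf{y})\,,$$
which, once inserted into the general expression of the current, Equation (51), produces precisely the two terms appearing in Equation (61).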
Finally, by inserting the expression of the current Equation (61) into the expression of the reservoir entropy derivative S ˙ res ( t ) , and by exploiting the same operator identities in Section 4, one gets
$$\begin{aligned}
\dot{S}_{\rm res} &= 2\int d\mathbf{x}\,d\mathbf{z}\,d\mathbf{z}'\,d\mathbf{y}\left[\lambda(\mathbf{x},\mathbf{y})\,\gamma^{-1}(\mathbf{x},\mathbf{z})\,\lambda(\mathbf{z},\mathbf{z}')\,\langle\eta(\mathbf{z}')\,\eta(\mathbf{y})\rangle \right.\\
&\qquad\qquad\left.+\,\frac{1}{2}\int d\mathbf{q}\;\gamma(\mathbf{x},\mathbf{y})\,c^{-1}(\mathbf{y},\mathbf{q})\,\gamma^{-1}(\mathbf{x},\mathbf{z})\,\lambda(\mathbf{z},\mathbf{z}')\,\langle\eta(\mathbf{z}')\,\eta(\mathbf{q})\rangle\right]\\
&= \mathrm{Tr}\!\left[2\,\Lambda^{T}\Gamma^{-1}\Lambda C + \Lambda\right]
= \mathrm{Tr}\!\left[\Lambda^{T}\Gamma^{-1}\Lambda C\right] + \mathrm{Tr}\!\left[\Lambda^{T}\Gamma^{-1}\left(-C\Lambda^{T} - \Gamma\right) + \Lambda\right]\\
&= \mathrm{Tr}\!\left[\left(\Lambda^{T}\Gamma^{-1} - \Gamma^{-1}\Lambda\right)\Lambda C\right].
\end{aligned}\tag{62}$$
The above formula coincides exactly with Equation (38). We have therefore shown an alternative way to derive the same expression obtained from the Lebowitz–Spohn approach.

6. Entropy Production in Bulk: An Explicit Formula

In order to provide more explicit expressions of the entropy production rate, a convenient and still quite general assumption, which greatly simplifies the calculation and allows for a final explicit expression, is translational invariance in space of the synaptic weight function. The assumption of translational invariance clearly does not fit the brain as a whole, since the brain does not have infinite extension and is not a periodic system: it has well-defined boundaries and a specific geometric shape. Nevertheless, if we imagine focusing on the properties of a small piece in the bulk of the brain, then, as is customary in the statistical mechanics approach to condensed matter, it makes sense to assume periodic boundary conditions and, as a consequence, translational invariance. In practice, this corresponds to the assumption that the synaptic weight kernel depends only on the difference between the two coordinates:
$$w(\mathbf{x},\mathbf{y}) = w(\mathbf{x}-\mathbf{y})\,. \tag{63}$$
By exploiting the translational invariance of the synaptic weight function, it is possible to take the Fourier transform of the Langevin equation for the field fluctuations, Equation (17), thus obtaining M uncoupled equations, one for each of the Fourier modes k ∈ {k_0, …, k_M}, of the following kind:
$$\partial_t\eta_{\mathbf{k}}(t) = \lambda_{\mathbf{k}}\,\eta_{\mathbf{k}}(t) + \xi_{\mathbf{k}}(t)\,, \tag{64}$$
where
$$\lambda_{\mathbf{k}} = -\frac{1}{\tau} + \frac{f'[u_0]}{\tau}\,w_{\mathbf{k}}\,, \tag{65}$$
where we introduced the Fourier transform of the synaptic weight
$$w_{\mathbf{k}} = \frac{1}{(2\pi)^{d/2}}\int d\mathbf{x}\; e^{-i\mathbf{k}\cdot\mathbf{x}}\,w(\mathbf{x})\,. \tag{66}$$
At variance with the synaptic kernel w ( x , y ) , the assumption of translational invariance for the noise kernel is quite natural; namely, we have γ ( x , y ) = γ ( x y ) . Since the noise correlator is then, by definition, symmetric, it follows that its Fourier transform γ k is a real function, i.e., γ k = Re [ γ k ] . Let us now specify, given the forward trajectory η k ( s ) with s [ 0 , t ] , the definition of its corresponding backward trajectory η k ( s ) ¯ , which, consistent with its definition in real space, is defined as follows:
$$\overline{\eta_{\mathbf{k}}(s)} = \eta_{\mathbf{k}}(t-s)\,, \tag{67}$$
and also
$$\frac{d}{ds}\overline{\eta_{\mathbf{k}}(s)} = -\left.\frac{d}{du}\eta_{\mathbf{k}}(u)\right|_{u=t-s}\,. \tag{68}$$
From the above definition, we therefore have that if the forward trajectory is a solution of the Langevin equation in Equation (64), the backward trajectory is then a solution of
$$-\left.\frac{d}{du}\eta_{\mathbf{k}}(u)\right|_{u=t-s} = -\lambda_{\mathbf{k}}\,\eta_{\mathbf{k}}(t-s) - \xi_{\mathbf{k}}(t-s)\,. \tag{69}$$
In terms of the Fourier modes of the field, the probabilities of forward and backward trajectories can be rewritten, using a compact notation for time derivatives, η ˙ k = d d u η k , as follows:
$$P[\Omega_0^t] \propto \exp\left\{-\frac{1}{2}\int_0^t ds\sum_{\mathbf{k}}\left[\dot\eta^{*}_{\mathbf{k}}(s) - \lambda^{*}_{\mathbf{k}}\,\eta^{*}_{\mathbf{k}}(s)\right]\gamma_{\mathbf{k}}^{-1}\left[\dot\eta_{\mathbf{k}}(s) - \lambda_{\mathbf{k}}\,\eta_{\mathbf{k}}(s)\right]\right\}, \tag{70}$$
$$P[\bar{\Omega}_0^t] \propto \exp\left\{-\frac{1}{2}\int_0^t ds\sum_{\mathbf{k}}\left[\dot\eta^{*}_{\mathbf{k}}(t-s) + \lambda^{*}_{\mathbf{k}}\,\eta^{*}_{\mathbf{k}}(t-s)\right]\gamma_{\mathbf{k}}^{-1}\left[\dot\eta_{\mathbf{k}}(t-s) + \lambda_{\mathbf{k}}\,\eta_{\mathbf{k}}(t-s)\right]\right\}, \tag{71}$$
which, exploiting the mathematical identity
$$\int_0^t ds\; f(t-s) = \int_0^t ds\; f(s)\,, \tag{72}$$
can be rewritten as
$$P[\Omega_0^t] \propto \exp\left\{-\frac{1}{2}\int_0^t ds\sum_{\mathbf{k}}\frac{|\dot\eta_{\mathbf{k}}(s)|^2 + |\lambda_{\mathbf{k}}\eta_{\mathbf{k}}(s)|^2 - \dot\eta^{*}_{\mathbf{k}}(s)\,\lambda_{\mathbf{k}}\,\eta_{\mathbf{k}}(s) - \dot\eta_{\mathbf{k}}(s)\,\lambda^{*}_{\mathbf{k}}\,\eta^{*}_{\mathbf{k}}(s)}{\gamma_{\mathbf{k}}}\right\}, \tag{73}$$
$$P[\bar{\Omega}_0^t] \propto \exp\left\{-\frac{1}{2}\int_0^t ds\sum_{\mathbf{k}}\frac{|\dot\eta_{\mathbf{k}}(s)|^2 + |\lambda_{\mathbf{k}}\eta_{\mathbf{k}}(s)|^2 + \dot\eta^{*}_{\mathbf{k}}(s)\,\lambda_{\mathbf{k}}\,\eta_{\mathbf{k}}(s) + \dot\eta_{\mathbf{k}}(s)\,\lambda^{*}_{\mathbf{k}}\,\eta^{*}_{\mathbf{k}}(s)}{\gamma_{\mathbf{k}}}\right\}. \tag{74}$$
Let us notice that in the above Equation (74), the field η k ( s ) and λ k take values in C , since there is no general reason to assume that w ( x ) has definite parity symmetry. Then, according to its definition, the entropy production rate for the stochastic Amari model with translational invariant synaptic weight is
$$\sigma = \lim_{t\to\infty}\frac{2}{t}\int_0^t ds\sum_{\mathbf{k}}\frac{\mathrm{Re}\!\left[\lambda_{\mathbf{k}}\,\dot\eta^{*}_{\mathbf{k}}(s)\,\eta_{\mathbf{k}}(s)\right]}{\gamma_{\mathbf{k}}} = \sum_{\mathbf{k}}\left[\frac{\lambda_{\mathbf{k}}}{\gamma_{\mathbf{k}}}\,\langle\dot\eta^{*}_{\mathbf{k}}\,\eta_{\mathbf{k}}\rangle + \frac{\lambda^{*}_{\mathbf{k}}}{\gamma_{\mathbf{k}}}\,\langle\dot\eta_{\mathbf{k}}\,\eta^{*}_{\mathbf{k}}\rangle\right]. \tag{75}$$
Further progress in the explicit calculation of the entropy production rate per Fourier mode σ k can be made by explicitly considering the formal solution of the Langevin Equation (64):
$$\eta_{\mathbf{k}}(t) = e^{\lambda_{\mathbf{k}}t}\,\eta_{\mathbf{k}}(0) + \int_0^t ds\; e^{\lambda_{\mathbf{k}}(t-s)}\,\xi_{\mathbf{k}}(s)\,, \tag{76}$$
from which, under the not too restrictive assumption η k ( 0 ) = 0 , one immediately gets
$$\eta_{\mathbf{k}}(t) = \int_0^t ds\; e^{\lambda_{\mathbf{k}}(t-s)}\,\xi_{\mathbf{k}}(s)\,, \tag{77}$$
$$\dot\eta_{\mathbf{k}}(t) = \xi_{\mathbf{k}}(t) + \lambda_{\mathbf{k}}\int_0^t ds\; e^{\lambda_{\mathbf{k}}(t-s)}\,\xi_{\mathbf{k}}(s)\,. \tag{78}$$
From the above expressions, we can compute, for instance, ⟨η̇_k(t) η*_k(t)⟩:
$$\begin{aligned}
\langle\dot\eta_{\mathbf{k}}(t)\,\eta^{*}_{\mathbf{k}}(t)\rangle &= \int_0^t d\tilde{s}\; e^{\lambda^{*}_{\mathbf{k}}(t-\tilde{s})}\,\langle\xi_{\mathbf{k}}(t)\,\xi^{*}_{\mathbf{k}}(\tilde{s})\rangle + \lambda_{\mathbf{k}}\int_0^t d\tilde{s}\,ds\; e^{\lambda^{*}_{\mathbf{k}}(t-\tilde{s}) + \lambda_{\mathbf{k}}(t-s)}\,\langle\xi_{\mathbf{k}}(s)\,\xi^{*}_{\mathbf{k}}(\tilde{s})\rangle\\
&= \gamma_{\mathbf{k}}\left[\frac{1}{2} + \lambda_{\mathbf{k}}\int_0^t ds\; e^{(\lambda^{*}_{\mathbf{k}} + \lambda_{\mathbf{k}})(t-s)}\right]
= \gamma_{\mathbf{k}}\left[\frac{1}{2} + \frac{\lambda_{\mathbf{k}}}{\lambda^{*}_{\mathbf{k}} + \lambda_{\mathbf{k}}}\,e^{2\mathrm{Re}[\lambda_{\mathbf{k}}]\,t} - \frac{\lambda_{\mathbf{k}}}{\lambda^{*}_{\mathbf{k}} + \lambda_{\mathbf{k}}}\right].
\end{aligned}\tag{79}$$
Since we are interested in the value of the above correlation function at stationarity, we send t → ∞, and, assuming Re[λ_k] < 0, we have
$$\langle\dot\eta_{\mathbf{k}}\,\eta^{*}_{\mathbf{k}}\rangle = \lim_{t\to\infty}\langle\dot\eta_{\mathbf{k}}(t)\,\eta^{*}_{\mathbf{k}}(t)\rangle = \frac{\gamma_{\mathbf{k}}}{2\,\mathrm{Re}[\lambda_{\mathbf{k}}]}\;\frac{\lambda^{*}_{\mathbf{k}} - \lambda_{\mathbf{k}}}{2}\,. \tag{80}$$
Correspondingly, it is straightforward to see that the complex conjugate of the same quantity is as follows:
$$\langle\dot\eta^{*}_{\mathbf{k}}\,\eta_{\mathbf{k}}\rangle = \lim_{t\to\infty}\langle\dot\eta^{*}_{\mathbf{k}}(t)\,\eta_{\mathbf{k}}(t)\rangle = \frac{\gamma_{\mathbf{k}}}{2\,\mathrm{Re}[\lambda_{\mathbf{k}}]}\;\frac{\lambda_{\mathbf{k}} - \lambda^{*}_{\mathbf{k}}}{2}\,. \tag{81}$$
By plugging the above results on stationary correlators into the expression of entropy production in Equation (75), one gets the following:
$$\sigma_{\mathbf{k}} = \frac{\lambda_{\mathbf{k}}}{\gamma_{\mathbf{k}}}\,\langle\dot\eta^{*}_{\mathbf{k}}\,\eta_{\mathbf{k}}\rangle + \frac{\lambda^{*}_{\mathbf{k}}}{\gamma_{\mathbf{k}}}\,\langle\dot\eta_{\mathbf{k}}\,\eta^{*}_{\mathbf{k}}\rangle = \frac{i\,\mathrm{Im}[\lambda_{\mathbf{k}}]\,(\lambda_{\mathbf{k}} - \lambda^{*}_{\mathbf{k}})}{2\,\mathrm{Re}[\lambda_{\mathbf{k}}]}\,, \tag{82}$$
and consequently
$$\sigma_{\mathbf{k}} = -\frac{\mathrm{Im}^2[\lambda_{\mathbf{k}}]}{\mathrm{Re}[\lambda_{\mathbf{k}}]}\,. \tag{83}$$
This leads us to the final result for entropy production
$$\sigma = -\sum_{\mathbf{k}}\frac{\mathrm{Im}^2[\lambda_{\mathbf{k}}]}{\mathrm{Re}[\lambda_{\mathbf{k}}]}\,, \tag{84}$$
which is a remarkably simple formula in terms of the eigenvalues λ k of the linearized Amari equation in Fourier space.
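As an illustration of how Equation (84) can be used in practice, the sketch below evaluates σ for a translationally invariant kernel on a one-dimensional periodic grid via the FFT; the kernel, made of a symmetric Gaussian plus a small odd component, and all parameters are illustrative assumptions (the (2π)^{-d/2} prefactor of Equation (66) only rescales w_k and is omitted here). With the odd component removed, w_k is real, Im[λ_k] = 0, and σ vanishes, in line with the discussion above.

```python
import numpy as np

# Minimal sketch (illustrative parameters, not from the paper): entropy production
# rate from Eq. (84) for a translationally invariant kernel w(x) on a periodic
# 1-d grid. The (2 pi)^(-d/2) prefactor of Eq. (66) only rescales w_k and is omitted.
N, L, tau, fp = 256, 20.0, 1.0, 0.3
dx = L / N
x = (np.arange(N) - N // 2) * dx

# asymmetric kernel: symmetric Gaussian plus a small odd (parity-breaking) part
w = np.exp(-x**2 / 2.0) + 0.3 * x * np.exp(-x**2 / 2.0)

w_k = dx * np.fft.fft(np.fft.ifftshift(w))   # grid approximation of the transform in Eq. (66)
lam_k = -1.0 / tau + fp * w_k / tau          # mode eigenvalues, Eq. (65)

assert np.all(lam_k.real < 0), "parameters must keep every Fourier mode stable"
sigma = -np.sum(lam_k.imag**2 / lam_k.real)  # Eq. (84)
print("entropy production rate sigma =", sigma)
# with the odd part removed, w_k is real, Im[lam_k] = 0, and sigma vanishes
```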

7. Conclusions

In this work, we derived an explicit and compact expression for the entropy production rate σ in the stochastic, linearized Amari neural field model, thereby providing a rigorous bridge between non-equilibrium statistical mechanics and large-scale neural dynamics. By computing σ, we showed that the emergence of irreversibility in the stochastic Amari model is entirely determined by the interplay between the symmetry of the synaptic kernel w(x,y) and the structure of the noise covariance γ(x,y). Moreover, we discussed the relationship between the entropy production rate σ, as defined by Lebowitz and Spohn, and the temporal variation in the system entropy Ṡ_sys, showing that, in a non-equilibrium stationary state, σ equals the rate at which entropy is dissipated into the reservoir, Ṡ_res. While this equivalence is well known in stochastic thermodynamics [27,29,41], we derived it by employing the functional formalism typical of neural field models. Thus, our derivation systematically connects the functional formulations of stochastic neural field theory [2,33] with the language of modern stochastic thermodynamics [29]. Most previous literature on the subject has aimed at developing methods for estimating the entropy production rate or other measures of irreversibility from experimental signals and, therefore, usually deals with finite-dimensional models [10,13,19,22]. In contrast, we carefully examined the mechanism of time reversal symmetry breaking in the idealized framework of the stochastic Amari model. It is worth emphasizing that this model lacks several biological ingredients, such as population heterogeneity, delays in signal propagation, and synaptic plasticity, since it describes a single scalar field η(x,t) and employs a time-independent coupling kernel w(x,y). Moreover, it is a phenomenological model, meaning that it is not derived from a microscopic description through a large-scale expansion. As explained in Section 3, fluctuations are externally imposed on the deterministic dynamics. To discuss, in the simplest way, the interplay between synaptic coupling and noise correlations, while avoiding certain interpretational subtleties, we focused on the case of additive Gaussian noise. Within this approximation, we are neglecting many possible sources of irreversibility, such as those produced by non-Gaussian, non-Markovian, or multiplicative fluctuations. Despite these limitations, our analysis clarifies the conditions that w(x,y) and γ(x,y) must satisfy for linear, spatially extended neural fields to produce no entropy during their evolution. For instance, the analysis in Fourier space, when translational invariance was assumed, revealed that the spatial correlations of the fluctuations do not contribute to the breaking of time reversal symmetry, which originates solely from asymmetries in the coupling kernel. In this case, we also showed that the entropy production rate σ can be decomposed into positive contributions σ_k, one for each mode k. This decomposition, similar to that proposed by [23] but applied in wave-vector space k rather than in the frequency domain, provides a simple theoretical tool to probe dissipation in neuronal systems across the different spatial scales associated with the Fourier modes. In conclusion, the simplified setting adopted here should be understood as a testbed for uncovering the mechanisms by which asymmetries in synaptic couplings and noise spatial correlations generate irreversible steady dynamics.
Isolating these contributions to entropy production may prove useful when more realistic situations are considered. Moreover, it allowed us to establish the theoretical foundation for linking stochastic thermodynamics to spatially extended neural field models under minimal assumptions and can serve as a starting point for future explorations of energy dissipation and information flow in neuronal systems.

Author Contributions

Conceptualization, D.L., G.G. and L.S.; methodology, D.L., G.G. and L.S.; software, D.L., G.G. and L.S.; formal analysis, D.L., G.G. and L.S.; investigation, D.L., G.G. and L.S.; writing—original draft preparation, D.L., G.G. and L.S.; writing—review and editing, D.L., G.G. and L.S.; visualization, D.L., G.G. and L.S.; supervision, D.L., G.G. and L.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work is partially supported by project MIUR-PRIN2022, “Emergent Dynamical Patterns of Disordered Systems with Applications to Natural Communities”, code 2022WPHMXK; by project NEXTGENERATIONEU (NGEU) funded by the Ministry of University and Research (MUR), National Recovery and Resilience Plan (NRRP), and Project MNESYS (PE0000006)-A multiscale integrated approach to the study of the nervous system in health and disease (DN. 1553 11.10.2022); by ‘Iniziativa Specifica Quantum’ of INFN; by the European Union-NextGenerationEU within the National Center for HPC, Big Data and Quantum Computing (Project No. CN00000013, CN1 Spoke 10: ‘Quantum Computing’); by EU Project PASQuanS 2 ‘Programmable Atomic Large-Scale Quantum Simulation’; and by the National Grant of the Italian Ministry of University and Research for the PRIN 2022 project ‘Quantum Atomic Mixtures: Droplets, Topological Structures, and Vortices’.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Buice, M.A.; Cowan, J.D. Field-theoretic approach to fluctuation effects in neural networks. Phys. Rev. E Stat. Nonlinear Soft Matter Phys. 2007, 75, 051919.
  2. Wilson, H.R.; Cowan, J.D. Excitatory and inhibitory interactions in localized populations of model neurons. Biophys. J. 1972, 12, 1–24.
  3. Amari, S. Dynamics of pattern formation in lateral-inhibition type neural fields. Biol. Cybern. 1977, 27, 77–87.
  4. Amari, S.; Kishimoto, K. Dynamics of excitation patterns in lateral inhibitory neural fields. Electron. Commun. Japan 1978, 61, 1–8.
  5. Beggs, J.M.; Plenz, D. Neuronal avalanches in neocortical circuits. J. Neurosci. 2003, 23, 11167–11177.
  6. Buice, M.A.; Cowan, J.D. Statistical mechanics of the neocortex. Prog. Biophys. Mol. Biol. 2009, 99, 53–86.
  7. de Arcangelis, L.; Lombardi, F.; Herrmann, H. Criticality in the brain. J. Stat. Mech. Theory Exp. 2014, 2014, P03026.
  8. di Santo, S.; Burioni, R.; Vezzani, A.; Muñoz, M.A. Self-Organized Bistability Associated with First-Order Phase Transitions. Phys. Rev. Lett. 2016, 116, 240601.
  9. Buendía, V.; di Santo, S.; Villegas, P.; Burioni, R.; Muñoz, M.A. Self-organized bistability and its possible relevance for brain dynamics. Phys. Rev. Res. 2020, 2, 013318.
  10. Lynn, C.W.; Cornblath, E.J.; Papadopoulos, L.; Bertolero, M.A.; Bassett, D.S. Broken detailed balance and entropy production in the human brain. Proc. Natl. Acad. Sci. USA 2021, 118, e2109889118.
  11. Sarracino, A.; Arviv, O.; Shriki, O.; de Arcangelis, L. Predicting brain evoked response to external stimuli from temporal correlations of spontaneous activity. Phys. Rev. Res. 2020, 2, 033355.
  12. Nandi, M.K.; de Candia, A.; Sarracino, A.; Herrmann, H.J.; de Arcangelis, L. Fluctuation-dissipation relations in the imbalanced Wilson-Cowan model. Phys. Rev. E 2023, 107, 064307.
  13. Monti, J.M.; Perl, Y.S.; Tagliazucchi, E.; Kringelbach, M.L.; Deco, G. Fluctuation-dissipation theorem and the discovery of distinctive off-equilibrium signatures of brain states. Phys. Rev. Res. 2025, 7, 013301.
  14. Lucente, D.; Baldovin, M.; Cecconi, F.; Cencini, M.; Cocciaglia, N.; Puglisi, A.; Viale, M.; Vulpiani, A. Conceptual and practical approaches for investigating irreversible processes. New J. Phys. 2025, 27, 041201.
  15. Ruelle, D. Positivity of entropy production in nonequilibrium statistical mechanics. J. Stat. Phys. 1996, 85, 1–23.
  16. Lebowitz, J.L.; Spohn, H. A Gallavotti–Cohen-type symmetry in the large deviation functional for stochastic dynamics. J. Stat. Phys. 1999, 95, 333–365.
  17. Kringelbach, M.L.; Perl, Y.S.; Deco, G. The thermodynamics of mind. Trends Cogn. Sci. 2024, 28, 568–581.
  18. Nartallo-Kaluarachchi, R.; Kringelbach, M.L.; Deco, G.; Lambiotte, R.; Goriely, A. Nonequilibrium physics of brain dynamics. arXiv 2025, arXiv:2504.12188.
  19. Nartallo-Kaluarachchi, R.; Asllani, M.; Deco, G.; Kringelbach, M.L.; Goriely, A.; Lambiotte, R. Broken detailed balance and entropy production in directed networks. Phys. Rev. E 2024, 110, 034313.
  20. Nartallo-Kaluarachchi, R.; Bonetti, L.; Fernández-Rubio, G.; Vuust, P.; Deco, G.; Kringelbach, M.L.; Lambiotte, R.; Goriely, A. Multilevel irreversibility reveals higher-order organization of nonequilibrium interactions in human brain dynamics. Proc. Natl. Acad. Sci. USA 2025, 122, e2408791122.
  21. Geli, S.M.; Lynn, C.W.; Kringelbach, M.L.; Deco, G.; Perl, Y.S. Non-equilibrium whole-brain dynamics arise from pairwise interactions. Cell Rep. Phys. Sci. 2025, 6, 102464.
  22. Gilson, M.; Tagliazucchi, E.; Cofré, R. Entropy production of multivariate Ornstein-Uhlenbeck processes correlates with consciousness levels in the human brain. Phys. Rev. E 2023, 107, 024121.
  23. Sekizawa, D.; Ito, S.; Oizumi, M. Decomposing thermodynamic dissipation of linear Langevin systems via oscillatory modes and its application to neural dynamics. Phys. Rev. X 2024, 14, 041003.
  24. De Groot, S.R.; Mazur, P. Non-Equilibrium Thermodynamics; Courier Corporation: North Chelmsford, MA, USA, 2013.
  25. Sarracino, A.; Puglisi, A.; Vulpiani, A. Nonequilibrium Statistical Mechanics: Basic Concepts, Models and Applications; IOP Publishing: Bristol, UK, 2025.
  26. Nardini, C.; Fodor, É.; Tjhung, E.; Van Wijland, F.; Tailleur, J.; Cates, M.E. Entropy production in field theories without time-reversal symmetry: Quantifying the non-equilibrium character of active matter. Phys. Rev. X 2017, 7, 021007.
  27. Sekimoto, K. Stochastic Energetics; Springer: Berlin/Heidelberg, Germany, 2010.
  28. Seifert, U. Stochastic thermodynamics, fluctuation theorems and molecular machines. Rep. Prog. Phys. 2012, 75, 126001.
  29. Peliti, L.; Pigolotti, S. Stochastic Thermodynamics: An Introduction; Princeton University Press: Princeton, NJ, USA, 2021.
  30. Sokolov, I.M. Ito, Stratonovich, Hänggi and all the rest: The thermodynamics of interpretation. Chem. Phys. 2010, 375, 359–363.
  31. Cherstvy, A.G.; Metzler, R. Ergodicity breaking and particle spreading in noisy heterogeneous diffusion processes. arXiv 2015, arXiv:1502.04035.
  32. Coombes, S.; Graben, P.B.; Potthast, R.; Wright, J. Neural Fields: Theory and Applications; Springer: Berlin/Heidelberg, Germany, 2014.
  33. Bressloff, P.C. Spatiotemporal dynamics of continuum neural fields. J. Phys. A Math. Theor. 2011, 45, 033001.
  34. Sompolinsky, H.; Crisanti, A.; Sommers, H.J. Chaos in Random Neural Networks. Phys. Rev. Lett. 1988, 61, 259–262.
  35. Crisanti, A.; Sompolinsky, H. Path integral approach to random neural networks. Phys. Rev. E 2018, 98, 062120.
  36. Onsager, L.; Machlup, S. Fluctuations and irreversible processes. Phys. Rev. 1953, 91, 1505.
  37. Machlup, S.; Onsager, L. Fluctuations and irreversible process. II. Systems with kinetic energy. Phys. Rev. 1953, 91, 1512.
  38. Gardiner, C.W. Handbook of Stochastic Methods; Springer: Berlin/Heidelberg, Germany, 2004; Volume 3.
  39. Lucente, D.; Baldassarri, A.; Puglisi, A.; Vulpiani, A.; Viale, M. Inference of time irreversibility from incomplete information: Linear systems and its pitfalls. Phys. Rev. Res. 2022, 4, 043103.
  40. Onsager, L. Reciprocal Relations in Irreversible Processes. I. Phys. Rev. 1931, 37, 405–426.
  41. Seifert, U. Entropy production along a stochastic trajectory and an integral fluctuation theorem. Phys. Rev. Lett. 2005, 95, 040602.
  42. O’Byrne, J.; Cates, M. Geometric theory of (extended) time-reversal symmetries in stochastic processes: II. Field theory. J. Stat. Mech. Theory Exp. 2025, 2025, 053204.
