Nonlinear Fokker–Planck Equations, H-Theorem and Generalized Entropy of a Composed System

We investigate the dynamics of a system composed of two different subsystems, each governed by a nonlinear Fokker–Planck equation, by considering the H-theorem. We use the H-theorem to obtain the conditions on the subsystems' interaction that are required for agreement with the laws of thermodynamics when the nonlinearity in these equations is the same. In this framework, we also consider different dynamical aspects of each subsystem and investigate a possible expression for the entropy of the composite system.


Introduction
Thermodynamics and statistical mechanics have entropy as a fundamental tool connecting the properties of a system, from the microscopic dynamics of its particles to macroscopic and, consequently, thermodynamic quantities. The concept of entropy started with Clausius's studies of thermal machines [1]. Subsequently, the works of Boltzmann and Gibbs incorporated the concept of probability, building up the foundations of statistical mechanics [2][3][4]. This framework has been successfully applied in many contexts, where the fundamental basis is the molecular chaos hypothesis, which assumes short-range interactions between molecules and the absence of memory in the collisions of particles [5,6]. However, for many physical systems (e.g., fractal and self-organizing structures), the conditions required by the molecular chaos hypothesis are not fulfilled, and the interactions are long-ranged [7][8][9]. These points have motivated the analysis of extensions of thermodynamics and statistical physics to cover these scenarios. As an example, Tsallis proposed an extension of the entropy [10], which has been systematically applied in many contexts, such as black holes [11], the electrocaloric effect in quantum dots [12], chemotaxis of biological populations [13], and Bose-Einstein condensation [14,15], and has stimulated the analysis of other entropies [16][17][18][19][20]. More applications can be found in Refs. [21][22][23][24][25][26]. These entropies verify the H-theorem [27][28][29][30][31], which represents an important result of nonequilibrium statistical mechanics by ensuring that a system reaches equilibrium after a long time evolution. The H-theorem establishes a connection between the dynamics and the entropy, which may be used to investigate the dynamics behind the additivity law for the different entropies. In this framework, by considering a nonlinear Fokker-Planck equation, the H-theorem can show how the entropy additivity laws can be obtained when a system composed of many subsystems is taken into account. In addition, it also allows us to obtain the equilibrium distributions.
Here, we investigate through the H-theorem the conditions on the dynamical equations, i.e., nonlinear Fokker-Planck equations [32][33][34][35], under which each subsystem of a composed system reaches the equilibrium condition. The results show that generalized entropies imply a coupling between the nonlinear equations. The distributions that emerge from these dynamical equations have a power-law behavior, where each subsystem modifies the other. We also investigate the entropy production for this system. These developments are presented in Section 2. In Section 3, we present our discussions and conclusions.

The Problem
Let us start our analysis by establishing the nonlinear Fokker-Planck equations connected to the dynamics of each subsystem of a composed system. They are

$$\frac{\partial}{\partial t}\rho_1(x_1,t) = -\frac{\partial}{\partial x_1}\big[F_1(x_1)\,\rho_1(x_1,t)\big] + \Gamma\frac{\partial}{\partial x_1}\left[P_1(\rho_1,t)\,\frac{\partial}{\partial x_1}\rho_1(x_1,t)\right] \tag{1}$$

and

$$\frac{\partial}{\partial t}\rho_2(x_2,t) = -\frac{\partial}{\partial x_2}\big[F_2(x_2)\,\rho_2(x_2,t)\big] + \Gamma\frac{\partial}{\partial x_2}\left[P_2(\rho_2,t)\,\frac{\partial}{\partial x_2}\rho_2(x_2,t)\right], \tag{2}$$

where F_i(x_i), with i = 1 or 2, represents the external force, i.e., F_i = −∂_{x_i} φ_i(x_i), with φ_i(x_i) a potential energy, while Γ stands for a generic diffusion coefficient. Notice that P_1(ρ_1, t) and P_2(ρ_2, t), present in the diffusive terms, may have the same or different functional forms. Particular choices of P_i(ρ_i, t) have been successfully analyzed in several problems, such as porous media [36], anomalous diffusion [37], overdamped systems [38], and the Boltzmann equation endowed with a correlation term [39]. In Equations (1) and (2), P_i(ρ_i, t) will be determined by the H-theorem in connection with the entropic form used to describe the combination of subsystems 1 and 2. It is worth pointing out that different possibilities may be considered, allowing us to obtain different results for the system composed of subsystems 1 and 2, as discussed in Refs. [28,29]. However, the combination of these equations, which represent subsystems 1 and 2, in connection with a thermostatistics (e.g., the nonextensive statistics [40]) requires careful analysis, with direct consequences for the entropic additivity and the zeroth law [41][42][43]. To accomplish this task, we consider general scenarios with different dynamics and investigate the conditions on Equations (1) and (2) that allow a thermostatistical description.

H-Theorem
We start our analysis in terms of the H-theorem, first by considering P_1(ρ_1) and P_2(ρ_2) with the same functional form. Afterwards, we consider P_1(ρ_1) and P_2(ρ_2) with different functional forms. Each of these cases has different implications for the entropy related to the composed system formed by subsystems 1 and 2, with the dynamics given in terms of Equations (1) and (2). Following Refs. [28,29,31], we analyze the behavior of the time derivative of the Helmholtz free energy. This free energy is defined by F = U − TS, with the internal energy, U, given by

$$U = \int_{-\infty}^{\infty}\!dx_1\!\int_{-\infty}^{\infty}\!dx_2\,\big[\varphi_1(x_1)+\varphi_2(x_2)\big]\,\rho_1(x_1,t)\,\rho_2(x_2,t), \tag{3}$$

and the entropy, S, expressed in terms of an arbitrary function s:

$$S = k\int_{-\infty}^{\infty}\!dx_1\!\int_{-\infty}^{\infty}\!dx_2\, s(\rho_1,\rho_2). \tag{4}$$

Note that Equations (3) and (4) represent the total internal energy and the entropy of the system composed of the two subsystems governed by Equations (1) and (2), respectively.
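The role of the free energy F = U − TS as a Lyapunov functional can be illustrated numerically. The sketch below is a minimal, single-subsystem illustration under assumptions of our own choosing (linear case P(ρ) = 1, Boltzmann-Gibbs entropy s(ρ) = −ρ ln ρ, harmonic potential φ(x) = x²/2, Γ = kT with k = 1, and hypothetical function names); it integrates the Fokker-Planck equation by finite differences and records F(t), which should be nonincreasing, as the H-theorem requires:

```python
import numpy as np

def free_energy_decay(T=1.0, L=12.0, N=481, dt=1e-4, steps=3000, every=300):
    """Integrate d rho/dt = d/dx[x rho] + T d^2 rho/dx^2 (harmonic force F = -x,
    P(rho) = 1) and record F = U - T*S at regular intervals."""
    x = np.linspace(-L / 2, L / 2, N)
    dx = x[1] - x[0]
    rho = np.exp(-(x - 2.0) ** 2)          # packet displaced from the minimum
    rho /= np.trapz(rho, x)

    def free_energy(rho):
        U = np.trapz(0.5 * x**2 * rho, x)               # phi(x) = x^2 / 2
        S = -np.trapz(rho * np.log(rho + 1e-300), x)    # Boltzmann-Gibbs entropy
        return U - T * S

    F = [free_energy(rho)]
    for n in range(1, steps + 1):
        drift = np.gradient(x * rho, dx)                # d/dx [x rho]
        lap = np.zeros_like(rho)
        lap[1:-1] = (rho[2:] - 2.0 * rho[1:-1] + rho[:-2]) / dx**2
        rho = np.maximum(rho + dt * (drift + T * lap), 0.0)
        rho /= np.trapz(rho, x)
        if n % every == 0:
            F.append(free_energy(rho))
    return F

F = free_energy_decay()
```

In this run F(t) decreases monotonically toward the equilibrium value −T ln Z of the harmonic oscillator, mirroring the dF/dt ≤ 0 statement used throughout this section.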
By using the previous equations, the total free energy of the system is given by

$$\mathcal{F} = \int_{-\infty}^{\infty}\!dx_1\!\int_{-\infty}^{\infty}\!dx_2\,\big[\Psi(x_1,x_2)\,\rho_1\rho_2 - kT\,s(\rho_1,\rho_2)\big], \tag{5}$$

with Ψ(x_1, x_2) = φ_1(x_1) + φ_2(x_2). Before determining the time derivative of Equation (5), we assume that P_1(ρ_1, t) and P_2(ρ_2, t) have essentially the same functional form and that the entropy is a function of the product of the probability densities related to each subsystem, i.e., s(ρ_1, ρ_2) = s(ρ_1 ρ_2). It is then possible to show that where ρ_12 = ρ_1 ρ_2, and after an integration by parts and applying the conditions. Now, let us focus on the term where i = 1, 2, which is directly connected with the properties of the entropy of the composite system. To proceed, we consider that with j ≠ i, j = 1, 2, and, to be able to cover different scenarios, where α_γ and α_ν are constants. Note that the choice of D_{j,γ}(t) and D_{j,ν}(t) implies that each subsystem influences the other. This aspect of the problem can be associated with the feature that the nonlinearity present in Equations (1) and (2) introduces additional interactions between the subsystems during the thermalization process, where each subsystem works as an additional thermal bath for the other. By using the previous equations, we have We verify that which implies Consequently, by solving Equation (13) with Γ = kT under the conditions defined in Refs.
[28-31], we obtain The entropy for the composite system is given by which can also be rewritten as and, consequently, as Equation (18) has several particular cases, such as the Tsallis and Kaniadakis entropies, depending on the values of the parameters α_γ, α_ν, γ, and ν. It is noteworthy that this result preserves the additivity in the Penrose sense [3], i.e., S(ρ_12) = S(ρ_1 ρ_2), as required for a system composed of independent subsystems when the standard entropy is employed.
In the previous context, Equations (1) and (2) can be written as follows: and , evidencing the influence of one subsystem on the other. In particular, the terms forming the diffusive part can also be connected with anomalous diffusion processes with different diffusion regimes. The stationary solutions obtained from Equations (19) and (20) are given by and where lim_{t→∞} D_{i,γ}(t) = D_{i,γ} = constant, φ_i(x) are potentials with a minimum, and C_i are constants. For the Tsallis entropy, by taking, for simplicity, D_{i,ν} = 0, we have and where In the preceding equations, exp_q[x] is the q-exponential function, defined as follows [40]:

$$\exp_q[x] = \big[1+(1-q)\,x\big]_+^{\frac{1}{1-q}}, \qquad \exp_1[x] = e^{x},$$

where [y]_+ = y for y > 0 and [y]_+ = 0 otherwise. The presence of this function in the previous equations enables the identification of either a short- or a long-tailed behavior of the solution, depending on the values of the parameters γ and ν. Indeed, the solutions may have compact support for γ > 1 (or ν > 1), due to the cut-off required by the q-exponential to retain the probabilistic interpretation of the distribution. On the other hand, for γ < 1 (or ν < 1), the solutions may have an asymptotic limit governed by a power-law behavior, which may also be related to a Lévy distribution [44] and, consequently, asymptotically to the solutions of fractional Fokker-Planck equations [45], which are asymptotically governed by power laws.
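The q-exponential admits a direct numerical implementation. The sketch below follows the definition in [40]; the function name and the explicit handling of the cut-off [·]_+ are our own choices:

```python
import numpy as np

def exp_q(x, q):
    """q-exponential exp_q[x] = [1 + (1 - q) x]_+^{1/(1 - q)}, with exp_1[x] = e^x.
    The cut-off [.]_+ sets the result to zero wherever 1 + (1 - q) x <= 0."""
    x = np.asarray(x, dtype=float)
    if np.isclose(q, 1.0):
        return np.exp(x)                 # q -> 1 recovers the ordinary exponential
    base = 1.0 + (1.0 - q) * x
    out = np.zeros_like(base)
    mask = base > 0.0
    out[mask] = base[mask] ** (1.0 / (1.0 - q))
    return out
```

For q < 1 the cut-off produces the compact-support (short-tailed) behavior mentioned above, while for q > 1 the function exp_q[−|x|] decays as a power law, consistent with the long-tailed regime.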
From the stochastic point of view, Equations (19) and (20) are connected with the following Langevin equations: and where ξ_1(t) and ξ_2(t) are connected to the stochastic forces and In particular, we have and The walkers related to this problem can be described, for simplicity, in the absence of external forces, in terms of the following equations [46,47]: and where These equations, in the limits τ → 0 and x_i → 0, yield Equations (1) and (2) in the absence of external forces, respectively.
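Since the Langevin equations themselves are not reproduced here, the correspondence can only be sketched under explicit assumptions of our own: no external force, a single subsystem, and a multiplicative noise amplitude of the form 2Γρ^{γ−1}, with the density ρ estimated from a histogram of the walkers themselves (a McKean-Vlasov-type scheme; all names below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def langevin_step(x, dt, gamma_exp, Gamma=1.0, bins=60):
    """One Euler-Maruyama step with noise amplitude sqrt(2*Gamma*rho^(gamma-1)),
    where rho is a histogram estimate of the walkers' own density."""
    hist, edges = np.histogram(x, bins=bins, density=True)
    idx = np.clip(np.digitize(x, edges) - 1, 0, bins - 1)
    rho = np.maximum(hist[idx], 1e-12)      # floor avoids rho = 0 in the amplitude
    amp = np.sqrt(2.0 * Gamma * rho ** (gamma_exp - 1.0))
    return x + amp * np.sqrt(dt) * rng.normal(size=x.size)

x = rng.normal(scale=0.1, size=20000)       # narrow initial packet of walkers
for _ in range(300):
    x = langevin_step(x, dt=1e-3, gamma_exp=1.5)
```

For gamma_exp > 1 the noise is strongest where the walkers are densest, so the packet spreads in the nonlinear-diffusion fashion discussed above; this is an illustrative scheme, not the paper's exact stochastic representation.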
Let us now consider a general case, i.e., the one in which the diffusion terms have a different nonlinear dependence on the distributions. This means that the systems have different dynamical aspects governed by the nonlinear dependence on the distribution present in the diffusive term. By using the preceding equations and having in mind Equation (5), we may write After some calculations, it is possible to show that Let us analyze the previous equation, for example, for the case with which implies different dynamics for each subsystem. We notice that it is possible to take into account different aspects of the dynamics of each subsystem, and every choice has different implications for the total entropy of the composite system. Similar nonlinear Fokker-Planck equations were considered in Ref. [48] from the point of view of analyzing the interaction between the two subsystems. From Equation (37), we deduce that the entropy needs to satisfy the following equations: in order to verify and, consequently, to satisfy the H-theorem. A solution for the previous system of equations is This result allows us to write the total entropy of this system as follows: It is remarkable that this result for the entropy differs from the preceding one, given by Equation (18), obtained from a different choice of nonlinear Fokker-Planck equations. Equation (41) results from a combination of different subsystems with different dynamics, which individually have different entropies associated with them. One of the consequences is that the entropy of the composite system, for this specific case, cannot be written as S(ρ_1 ρ_2) unless γ = ν. Another remarkable point is the connection of Equation (41) with the composition of Tsallis entropies of different q-indices [49,50]. The solution can be found in this framework using the q-exponential functions. In particular, it is possible to show that the solution for each nonlinear Fokker-Planck equation, in the absence of external force, is and with β_1(t), β_2(t), Z_1(t), and Z_2(t) obtained from the following set of equations: with where κ = γ or ν.
Figure 1 shows the behavior of the mean square displacement for two different sets of γ and ν in the absence of external forces. The values chosen for the parameters γ and ν are responsible for the different behaviors of the mean square displacement in each case, as pointed out in the inset of Figure 1. In particular, the diffusion present in this scenario is anomalous [51,52]. Figure 2 shows the behavior of Equation (41) for two different sets of γ and ν. Note that the different values of β_1(0) and β_2(0) used to obtain Figures 1 and 2 correspond to different initial conditions for each subsystem. This is the reason why we initially observe different behaviors for each set of the parameters γ and ν, while, after some time, the mean square displacement has the same time dependence for both subsystems. The entropy production is shown in the inset of Figure 2, which corresponds to the behavior of Equation (61) for the entropy given by Equation (41). We underline that the system composed of these two subsystems reaches equilibrium in the limit t → ∞, since in this limit Ṡ(t) → 0.
For general nonlinear Fokker-Planck equations, the entropy should simultaneously satisfy the following equations, in order to verify dF/dt ≤ 0 and, consequently, satisfy the H-theorem. It is also significant to mention that, depending on the form of the nonlinear dependence in Equations (1) and (2), which may not recover the standard form of the Fokker-Planck equation, the entropy associated with these equations will not recover the usual form.

Figure 1. Behavior of σ²_x/σ²_{0,γ(ν)} versus t for two different sets of γ and ν, where σ_{γ(ν)} is chosen in order to collapse the curves for each set of values. We consider, for simplicity, β_1(0) = 2 and β_2(0) = 1. The red dashed-dotted and black dashed lines represent the case γ = 0.4 with ν = 0.7. The blue dashed-dotted and black dashed-dotted-dotted lines represent the case γ = 0.35 with ν = 0.55. Notice that the cases worked out in this figure have different time dependences for the mean square displacement, as pointed out in the inset.

Entropy Production
Let us analyze the entropy production related to Equation (17), with the dynamics of ρ_1(x_1, t) and ρ_2(x_2, t) given by Equations (19) and (20). By performing a time derivative of Equation (17), we obtain and, consequently, performing an integration by parts with the conditions J_1(x_1 → ±∞, t) → 0 and J_2(x_2 → ±∞, t) → 0, also It is possible to simplify Equation (48) by using, from the H-theorem, the equations and in order to obtain where and with P_1(ρ_1, t) and P_2(ρ_2, t) given by Equations (10) and (11). Equation (48) can be written as follows: where one identifies the entropy flux, representing the exchanges of entropy between the subsystems represented by ρ_1 and ρ_2 and their neighborhood, as well as the entropy-production contribution: We underline that T and ρ_i(x_i, t) are positive quantities, yielding the desirable result Π ≥ 0.
For the general case represented by Equation (4), we have Performing an integration by parts in Equation (58) and taking into account the conditions J_1(x_1 → ±∞, t) → 0 and J_2(x_2 → ±∞, t) → 0, we obtain By using the equations, it is possible to simplify Equation (59) in order to obtain where and as before, with P_1(ρ_1, t) and P_2(ρ_2, t) arbitrary. Note that Equation (61) is formally equal to Equation (52), which evidences that the result obtained for the entropy production is invariant in form when the entropies are obtained from the H-theorem.
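That a nonnegative entropy production forces the entropy of a force-free system to grow can be verified numerically. The following is a hedged, single-subsystem sketch under our own assumptions: free nonlinear diffusion ∂ρ/∂t = ∂²(ρ^ν)/∂x², paired with the Tsallis entropy S_q = (1 − ∫ρ^q dx)/(q − 1) under the commonly used correspondence q = 2 − ν (function names are ours):

```python
import numpy as np

def tsallis_entropy(rho, x, q):
    """S_q = (1 - integral of rho^q) / (q - 1)."""
    return (1.0 - np.trapz(rho**q, x)) / (q - 1.0)

def entropy_history(nu=1.5, L=20.0, N=401, dt=2e-4, steps=5000, every=500):
    """Free nonlinear diffusion d rho/dt = d^2(rho^nu)/dx^2, sampling the
    matched Tsallis entropy (q = 2 - nu) along the evolution."""
    x = np.linspace(-L / 2, L / 2, N)
    dx = x[1] - x[0]
    rho = np.exp(-x**2 / 0.2)               # narrow initial packet
    rho /= np.trapz(rho, x)
    q = 2.0 - nu
    S = [tsallis_entropy(rho, x, q)]
    for n in range(1, steps + 1):
        u = rho**nu
        rho[1:-1] += dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
        rho = np.maximum(rho, 0.0)
        if n % every == 0:
            S.append(tsallis_entropy(rho, x, q))
    return S

S = entropy_history()
```

Here there is no external force and hence no entropy flux, so Ṡ(t) = Π ≥ 0: the sampled S_q increases monotonically as the packet spreads, consistent with the sign of the entropy production obtained above.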

Discussion and Conclusions
We have investigated the entropy of a system composed of two subsystems governed by nonlinear Fokker-Planck equations. In this context, we have essentially analyzed two scenarios: in one of them, the subsystems have the same dynamics, and in the other, they have different dynamics, i.e., the nonlinear Fokker-Planck equations are different. The first case allows the definition of an entropy which can be connected to different cases and preserves the formal structure S(ρ_1, ρ_2) = S(ρ_1 ρ_2) also verified by the standard Boltzmann-Gibbs entropy. For the other case, we consider different dynamics for each subsystem, which leads to an entropic form for which S(ρ_1, ρ_2) ≠ S(ρ_1 ρ_2). In both cases, we have analyzed the entropy production and shown the effect of each subsystem on the composite system. In addition, we have shown that the time variation of the entropy (entropy production) for the total system is invariant in form for all the cases considered here.

Figure 2 .
Figure 2. Behavior of Equation (41) versus t for two different sets of γ and ν. We consider, for simplicity, β_1(0) = 2 and β_2(0) = 1. The red dashed-dotted line represents the case γ = 0.4 with ν = 0.7. The blue dashed-dotted line represents the case γ = 0.35 with ν = 0.55. Notice that the cases worked out in this figure have different time dependences for Ṡ(t), as pointed out in the inset.