Entropy 2013, 15(5), 1821-1846; doi:10.3390/e15051821

Article
A Unification between Dynamical System Theory and Thermodynamics Involving an Energy, Mass, and Entropy State Space Formalism
The School of Aerospace Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
Received: 29 March 2013; in revised form: 10 May 2013 / Accepted: 10 May 2013 / Published: 16 May 2013

Abstract:
In this paper, we combine the two universalisms of thermodynamics and dynamical systems theory to develop a dynamical system formalism for classical thermodynamics. Specifically, using a compartmental dynamical system energy flow model involving heat flow, work energy, and chemical reactions, we develop a state-space dynamical system model that captures the key aspects of thermodynamics, including its fundamental laws. In addition, we show that our thermodynamically consistent dynamical system model is globally semistable with system states converging to a state of temperature equipartition. Furthermore, in the presence of chemical reactions, we use the law of mass-action and the notion of chemical potential to show that the dynamic system states converge to a state of temperature equipartition and zero affinity corresponding to a state of chemical equilibrium.
Keywords:
system thermodynamics; energy flow; interconnected systems; entropy; Helmholtz free energy; Gibbs free energy; chemical thermodynamics; mass action kinetics; chemical potential; neuroscience and thermodynamics

1. Introduction

Thermodynamics is a physical branch of science that governs the thermal behavior of dynamical systems from those as simple as refrigerators to those as complex as our expanding universe. The laws of thermodynamics involving conservation of energy and nonconservation of entropy are, without a doubt, two of the most useful and general laws in all sciences. The first law of thermodynamics, according to which energy cannot be created or destroyed but is merely transformed from one form to another, and the second law of thermodynamics, according to which the usable energy in an adiabatically isolated dynamical system is always diminishing in spite of the fact that energy is conserved, have had an impact far beyond science and engineering. The second law of thermodynamics is intimately connected to the irreversibility of dynamical processes. In particular, the second law asserts that a dynamical system undergoing a transformation from one state to another cannot be restored to its original state and at the same time restore its environment to its original condition. That is, the status quo cannot be restored everywhere. This gives rise to a monotonically increasing quantity known as entropy. Entropy permeates the whole of nature, and unlike energy, which describes the state of a dynamical system, entropy is a measure of change in the status quo of a dynamical system.
There is no doubt that thermodynamics is a theory of universal proportions whose laws reign supreme among the laws of nature and are capable of addressing some of science’s most intriguing questions about the origins and fabric of our universe. The laws of thermodynamics are among the most firmly established laws of nature and play a critical role in the understanding of our expanding universe. In addition, thermodynamics forms the underpinning of several fundamental life science and engineering disciplines, including biological systems, physiological systems, chemical reaction systems, ecological systems, information systems, and network systems, to cite but a few examples. While from its inception its speculations about the universe have been grandiose, its mathematical foundation has been amazingly obscure and imprecise [1,2,3,4]. This is largely due to the fact that classical thermodynamics is a physical theory concerned mainly with equilibrium states and does not possess equations of motion. The absence of a state space formalism in classical thermodynamics, and physics in general, is quite disturbing and in our view largely responsible for the monomeric state of classical thermodynamics.
In recent research [4,5,6], we combined the two universalisms of thermodynamics and dynamical systems theory under a single umbrella to develop a dynamical system formalism for classical thermodynamics so as to harmonize it with classical mechanics. While it seems impossible to reduce thermodynamics to a mechanistic world picture due to microscopic reversibility and Poincaré recurrence, the system thermodynamic formulation of [4] provides a harmonization of classical thermodynamics with classical mechanics. In particular, our dynamical system formalism captures all of the key aspects of thermodynamics, including its fundamental laws, while providing a mathematically rigorous formulation for thermodynamical systems out of equilibrium by unifying the theory of heat transfer with that of classical thermodynamics. In addition, the concept of entropy for a nonequilibrium state of a dynamical process is defined, and its global existence and uniqueness is established. This state space formalism of thermodynamics shows that the behavior of heat, as described by the conservation equations of thermal transport and as described by classical thermodynamics, can be derived from the same basic principles and is part of the same scientific discipline.
Connections between irreversibility, the second law of thermodynamics, and the entropic arrow of time are also established in [4,6]. Specifically, we show a state irrecoverability and, hence, a state irreversibility nature of thermodynamics. State irreversibility reflects time-reversal non-invariance, wherein time-reversal is not meant literally; that is, we consider dynamical systems whose trajectory reversal is or is not allowed and not a reversal of time itself. In addition, we show that for every nonequilibrium system state and corresponding system trajectory of our thermodynamically consistent dynamical system, there does not exist a state such that the corresponding system trajectory completely recovers the initial system state of the dynamical system and at the same time restores the energy supplied by the environment back to its original condition. This, along with the existence of a global strictly increasing entropy function on every nontrivial system trajectory, establishes the existence of a completely ordered time set having a topological structure involving a closed set homeomorphic to the real line, thus giving a clear time-reversal asymmetry characterization of thermodynamics and establishing an emergence of the direction of time flow.
In this paper, we reformulate and extend some of the results of [4]. In particular, unlike the framework in [4] wherein we establish the existence and uniqueness of a global entropy function of a specific form for our thermodynamically consistent system model, in this paper we assume the existence of a continuously differentiable, strictly concave function that leads to an entropy inequality that can be identified with the second law of thermodynamics as a statement about entropy increase. We then turn our attention to stability and convergence. Specifically, using Lyapunov stability theory and the Krasovskii–LaSalle invariance principle [7], we show that for an adiabatically isolated system, the proposed interconnected dynamical system model is Lyapunov stable with convergent trajectories to equilibrium states where the temperatures of all subsystems are equal. Finally, we present a state-space dynamical system model for chemical thermodynamics. In particular, we use the law of mass-action to obtain the dynamics of chemical reaction networks. Furthermore, using the notion of the chemical potential [8,9], we unify our state space mass-action kinetics model with our thermodynamic dynamical system model involving energy exchange. In addition, we show that entropy production during chemical reactions is nonnegative and the dynamical system states of our chemical thermodynamic state space model converge to a state of temperature equipartition and zero affinity (i.e., the difference between the chemical potential of the reactants and the chemical potential of the products in a chemical reaction).
The central thesis of this paper is to present a state space formulation for equilibrium and nonequilibrium thermodynamics based on a dynamical system theory combined with interconnected nonlinear compartmental systems that ensures a consistent thermodynamic model for heat, energy, and mass flow. In particular, the proposed approach extends the framework developed in [4] addressing closed thermodynamic systems that exchange energy but not matter with the environment to open thermodynamic systems that exchange matter and energy with their environment. In addition, our results go beyond the results of [4] by developing rigorous notions of enthalpy, Gibbs free energy, Helmholtz free energy, and Gibbs’ chemical potential using a state space formulation of dynamics, energy and mass conservation principles, as well as the law of mass-action kinetics and the law of superposition of elementary reactions without invoking statistical mechanics arguments.

2. Notation, Definitions, and Mathematical Preliminaries

In this section, we establish notation and definitions and provide some key results necessary for developing the main results of this paper. Specifically, $\mathbb{R}$ denotes the set of real numbers, $\overline{\mathbb{Z}}_+$ (respectively, $\mathbb{Z}_+$) denotes the set of nonnegative (respectively, positive) integers, $\mathbb{R}^q$ denotes the set of $q \times 1$ column vectors, $\mathbb{R}^{n \times m}$ denotes the set of $n \times m$ real matrices, $\mathbb{P}^n$ (respectively, $\mathbb{N}^n$) denotes the set of positive (respectively, nonnegative) definite matrices, $(\cdot)^{\mathrm{T}}$ denotes transpose, $I_q$ or $I$ denotes the $q \times q$ identity matrix, $\mathbf{e}$ denotes the ones vector of order $q$, that is, $\mathbf{e} \triangleq [1, \ldots, 1]^{\mathrm{T}} \in \mathbb{R}^q$, and $\mathbf{e}_i \in \mathbb{R}^q$ denotes a vector with unity in the $i$th component and zeros elsewhere. For $x \in \mathbb{R}^q$ we write $x \geq\geq 0$ (respectively, $x >> 0$) to indicate that every component of $x$ is nonnegative (respectively, positive). In this case, we say that $x$ is nonnegative or positive, respectively. Furthermore, $\overline{\mathbb{R}}_+^q$ and $\mathbb{R}_+^q$ denote the nonnegative and positive orthants of $\mathbb{R}^q$, that is, if $x \in \mathbb{R}^q$, then $x \in \overline{\mathbb{R}}_+^q$ and $x \in \mathbb{R}_+^q$ are equivalent, respectively, to $x \geq\geq 0$ and $x >> 0$. Analogously, $\overline{\mathbb{R}}_+^{n \times m}$ (respectively, $\mathbb{R}_+^{n \times m}$) denotes the set of $n \times m$ real matrices whose entries are nonnegative (respectively, positive). For vectors $x, y \in \mathbb{R}^q$, with components $x_i$ and $y_i$, $i = 1, \ldots, q$, we use $x \circ y$ to denote component-by-component multiplication, that is, $x \circ y \triangleq [x_1 y_1, \ldots, x_q y_q]^{\mathrm{T}}$. Finally, we write $\partial\mathcal{S}$, $\mathcal{S}^\circ$, and $\overline{\mathcal{S}}$ to denote the boundary, the interior, and the closure of the set $\mathcal{S}$, respectively.
We write $\|\cdot\|$ for the Euclidean vector norm, $V'(x) \triangleq \frac{\partial V(x)}{\partial x}$ for the Fréchet derivative of $V$ at $x$, $\mathcal{B}_\varepsilon(\alpha)$, $\alpha \in \mathbb{R}^q$, $\varepsilon > 0$, for the open ball centered at $\alpha$ with radius $\varepsilon$, and $x(t) \to \mathcal{M}$ as $t \to \infty$ to denote that $x(t)$ approaches the set $\mathcal{M}$ (that is, for every $\varepsilon > 0$ there exists $T > 0$ such that $\mathrm{dist}(x(t), \mathcal{M}) < \varepsilon$ for all $t > T$, where $\mathrm{dist}(p, \mathcal{M}) \triangleq \inf_{x \in \mathcal{M}} \|p - x\|$). The notions of openness, convergence, continuity, and compactness that we use throughout the paper refer to the topology generated on $\mathcal{D} \subseteq \mathbb{R}^q$ by the norm $\|\cdot\|$. A subset $\mathcal{N}$ of $\mathcal{D}$ is relatively open in $\mathcal{D}$ if $\mathcal{N}$ is open in the subspace topology induced on $\mathcal{D}$ by the norm $\|\cdot\|$. A point $x \in \mathbb{R}^q$ is a subsequential limit of the sequence $\{x_i\}_{i=0}^\infty$ in $\mathbb{R}^q$ if there exists a subsequence of $\{x_i\}_{i=0}^\infty$ that converges to $x$ in the norm $\|\cdot\|$. Recall that every bounded sequence has at least one subsequential limit. A divergent sequence is a sequence having no convergent subsequence.
Consider the nonlinear autonomous dynamical system
$$\dot{x}(t) = f(x(t)), \qquad x(0) = x_0, \qquad t \in \mathcal{I}_{x_0} \tag{1}$$
where $x(t) \in \mathcal{D} \subseteq \mathbb{R}^n$, $t \in \mathcal{I}_{x_0}$, is the system state vector, $\mathcal{D}$ is a relatively open set, $f : \mathcal{D} \to \mathbb{R}^n$ is continuous on $\mathcal{D}$, and $\mathcal{I}_{x_0} = [0, \tau_{x_0})$, $0 < \tau_{x_0} \leq \infty$, is the maximal interval of existence for the solution $x(\cdot)$ of Equation (1). We assume that, for every initial condition $x(0) \in \mathcal{D}$, the differential Equation (1) possesses a unique right-maximally defined continuously differentiable solution which is defined on $[0, \infty)$. Letting $s(\cdot, x)$ denote the right-maximally defined solution of Equation (1) that satisfies the initial condition $x(0) = x$, the above assumptions imply that the map $s : [0, \infty) \times \mathcal{D} \to \mathcal{D}$ is continuous ([Theorem V.2.1] [10]), satisfies the consistency property $s(0, x) = x$, and possesses the semigroup property $s(t, s(\tau, x)) = s(t + \tau, x)$ for all $t, \tau \geq 0$ and $x \in \mathcal{D}$. Given $t \geq 0$ and $x \in \mathcal{D}$, we denote the map $s(t, \cdot) : \mathcal{D} \to \mathcal{D}$ by $s_t$ and the map $s(\cdot, x) : [0, \infty) \to \mathcal{D}$ by $s^x$. For every $t \in \mathbb{R}$, the map $s_t$ is a homeomorphism and has the inverse $s_{-t}$.
The orbit $\mathcal{O}_x$ of a point $x \in \mathcal{D}$ is the set $s^x([0, \infty))$. A set $\mathcal{D}_{\mathrm{c}} \subseteq \mathcal{D}$ is positively invariant relative to Equation (1) if $s_t(\mathcal{D}_{\mathrm{c}}) \subseteq \mathcal{D}_{\mathrm{c}}$ for all $t \geq 0$ or, equivalently, $\mathcal{D}_{\mathrm{c}}$ contains the orbits of all its points. The set $\mathcal{D}_{\mathrm{c}}$ is invariant relative to Equation (1) if $s_t(\mathcal{D}_{\mathrm{c}}) = \mathcal{D}_{\mathrm{c}}$ for all $t \geq 0$. The positive limit set of $x \in \mathbb{R}^q$ is the set $\omega(x)$ of all subsequential limits of sequences of the form $\{s(t_i, x)\}_{i=0}^\infty$, where $\{t_i\}_{i=0}^\infty$ is an increasing divergent sequence in $[0, \infty)$. $\omega(x)$ is closed and invariant, and $\overline{\mathcal{O}}_x = \mathcal{O}_x \cup \omega(x)$ [7]. In addition, for every $x \in \mathbb{R}^q$ that has bounded positive orbits, $\omega(x)$ is nonempty and compact, and, for every neighborhood $\mathcal{N}$ of $\omega(x)$, there exists $T > 0$ such that $s_t(x) \in \mathcal{N}$ for every $t > T$ [7]. Furthermore, $x_{\mathrm{e}} \in \mathcal{D}$ is an equilibrium point of Equation (1) if and only if $f(x_{\mathrm{e}}) = 0$ or, equivalently, $s(t, x_{\mathrm{e}}) = x_{\mathrm{e}}$ for all $t \geq 0$. Finally, recall that if all solutions to Equation (1) are bounded, then it follows from the Peano–Cauchy theorem ([7] [p. 76]) that $\mathcal{I}_{x_0} = [0, \infty)$.
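As a quick numerical illustration (our own construction, not from the paper), the semigroup property $s(t, s(\tau, x)) = s(t + \tau, x)$ can be checked for a simple scalar autonomous ODE integrated with a fixed-step classical Runge–Kutta scheme. The test equation and step size below are assumptions made for this sketch.

```python
import math

def f(x):
    # An arbitrary smooth autonomous vector field (our illustrative choice)
    return -x + math.sin(x)

def rk4_flow(x, t, h=1e-3):
    """Approximate the flow s(t, x) of x' = f(x) with classical RK4 steps."""
    n = int(round(t / h))
    for _ in range(n):
        k1 = f(x)
        k2 = f(x + 0.5 * h * k1)
        k3 = f(x + 0.5 * h * k2)
        k4 = f(x + h * k3)
        x += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

x0 = 2.0
t, tau = 0.7, 0.5
direct = rk4_flow(x0, t + tau)              # s(t + tau, x0)
composed = rk4_flow(rk4_flow(x0, tau), t)   # s(t, s(tau, x0))
assert abs(direct - composed) < 1e-8        # semigroup property holds
```

Because the fixed-step integrator takes the identical sequence of steps in both computations, the two results agree to floating-point precision, mirroring the exact semigroup identity of the continuous flow.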
Definition 2.1 ([11] [pp. 9, 10]) Let $f = [f_1, \ldots, f_n]^{\mathrm{T}} : \mathcal{D} \supseteq \overline{\mathbb{R}}_+^n \to \mathbb{R}^n$. Then $f$ is essentially nonnegative if $f_i(x) \geq 0$, for all $i = 1, \ldots, n$, and $x \in \overline{\mathbb{R}}_+^n$ such that $x_i = 0$, where $x_i$ denotes the $i$th component of $x$.
Proposition 2.1 ([11] [p. 12]) Suppose $\overline{\mathbb{R}}_+^n \subseteq \mathcal{D}$. Then $\overline{\mathbb{R}}_+^n$ is an invariant set with respect to Equation (1) if and only if $f : \mathcal{D} \to \mathbb{R}^n$ is essentially nonnegative.
Definition 2.2 ([11] [pp. 13, 23]) An equilibrium solution $x(t) \equiv x_{\mathrm{e}} \in \overline{\mathbb{R}}_+^n$ to Equation (1) is Lyapunov stable with respect to $\overline{\mathbb{R}}_+^n$ if, for all $\varepsilon > 0$, there exists $\delta = \delta(\varepsilon) > 0$ such that if $x_0 \in \mathcal{B}_\delta(x_{\mathrm{e}}) \cap \overline{\mathbb{R}}_+^n$, then $x(t) \in \mathcal{B}_\varepsilon(x_{\mathrm{e}}) \cap \overline{\mathbb{R}}_+^n$, $t \geq 0$. An equilibrium solution $x(t) \equiv x_{\mathrm{e}} \in \overline{\mathbb{R}}_+^n$ to Equation (1) is semistable with respect to $\overline{\mathbb{R}}_+^n$ if it is Lyapunov stable with respect to $\overline{\mathbb{R}}_+^n$ and there exists $\delta > 0$ such that if $x_0 \in \mathcal{B}_\delta(x_{\mathrm{e}}) \cap \overline{\mathbb{R}}_+^n$, then $\lim_{t \to \infty} x(t)$ exists and corresponds to a Lyapunov stable equilibrium point with respect to $\overline{\mathbb{R}}_+^n$. The system given by Equation (1) is said to be semistable with respect to $\overline{\mathbb{R}}_+^n$ if every equilibrium point of Equation (1) is semistable with respect to $\overline{\mathbb{R}}_+^n$. The system given by Equation (1) is said to be globally semistable with respect to $\overline{\mathbb{R}}_+^n$ if Equation (1) is semistable with respect to $\overline{\mathbb{R}}_+^n$ and, for every $x_0 \in \overline{\mathbb{R}}_+^n$, $\lim_{t \to \infty} x(t)$ exists.
Proposition 2.2 ([11] [p. 22]) Consider the nonlinear dynamical system given by Equation (1) where $f$ is essentially nonnegative and let $x \in \overline{\mathbb{R}}_+^n$. If the positive limit set of Equation (1) contains a Lyapunov stable (with respect to $\overline{\mathbb{R}}_+^n$) equilibrium point $y$, then $y = \lim_{t \to \infty} s(t, x)$.

3. Interconnected Thermodynamic Systems: A State Space Energy Flow Perspective

The fundamental and unifying concept in the analysis of thermodynamic systems is the concept of energy. The energy of a state of a dynamical system is the measure of its ability to produce changes (motion) in its own system state as well as changes in the system states of its surroundings. These changes occur as a direct consequence of the energy flow between different subsystems within the dynamical system. Heat (energy) is a fundamental concept of thermodynamics involving the capacity of hot bodies (more energetic subsystems with higher energy gradients) to produce work. As in thermodynamic systems, dynamical systems can exhibit energy (due to friction) that becomes unavailable to do useful work. This in turn contributes to an increase in system entropy, a measure of the tendency of a system to lose the ability of performing useful work. In this section, we use the state space formalism to construct a mathematical model of a thermodynamic system that is consistent with basic thermodynamic principles.
Specifically, we consider a large-scale system model with a combination of subsystems (compartments or parts) that is perceived as a single entity. For each subsystem (compartment) making up the system, we postulate the existence of an energy state variable such that the knowledge of these subsystem state variables at any given time t = t 0 , together with the knowledge of any inputs (heat fluxes) to each of the subsystems for time t t 0 , completely determines the behavior of the system for any given time t t 0 . Hence, the (energy) state of our dynamical system at time t is uniquely determined by the state at time t 0 and any external inputs for time t t 0 and is independent of the state and inputs before time t 0 .
More precisely, we consider a large-scale interconnected dynamical system composed of a large number of units with aggregated (or lumped) energy variables representing homogeneous groups of these units. If all the units comprising the system are identical (that is, the system is perfectly homogeneous), then the behavior of the dynamical system can be captured by that of a single plenipotentiary unit. Alternatively, if every interacting system unit is distinct, then the resulting model constitutes a microscopic system. To develop a middle-ground thermodynamic model placed between complete aggregation (classical thermodynamics) and complete disaggregation (statistical thermodynamics), we subdivide the large-scale dynamical system into a finite number of compartments, each formed by a large number of homogeneous units. Each compartment represents the energy content of the different parts of the dynamical system, and different compartments interact by exchanging heat. Thus, our compartmental thermodynamic model utilizes subsystems or compartments that describe the energy distribution among distinct regions in space, with intercompartmental flows representing the heat transfer between these regions. Decreasing the number of compartments results in a more aggregated or homogeneous model, whereas increasing the number of compartments leads to a higher degree of disaggregation resulting in a heterogeneous model.
To formulate our state space thermodynamic model, consider the interconnected dynamical system $\mathcal{G}$ shown in Figure 1 involving energy exchange between $q$ interconnected subsystems. Let $E_i : [0, \infty) \to \overline{\mathbb{R}}_+$ denote the energy (and hence a nonnegative quantity) of the $i$th subsystem, let $S_i : [0, \infty) \to \mathbb{R}$ denote the external power (heat flux) supplied to (or extracted from) the $i$th subsystem, let $\phi_{ij} : \overline{\mathbb{R}}_+^q \to \mathbb{R}$, $i \neq j$, $i, j = 1, \ldots, q$, denote the net instantaneous rate of energy (heat) flow from the $j$th subsystem to the $i$th subsystem, and let $\sigma_{ii} : \overline{\mathbb{R}}_+^q \to \overline{\mathbb{R}}_+$, $i = 1, \ldots, q$, denote the instantaneous rate of energy (heat) dissipation from the $i$th subsystem to the environment. Here, we assume that $\phi_{ij} : \overline{\mathbb{R}}_+^q \to \mathbb{R}$, $i \neq j$, $i, j = 1, \ldots, q$, and $\sigma_{ii} : \overline{\mathbb{R}}_+^q \to \overline{\mathbb{R}}_+$, $i = 1, \ldots, q$, are locally Lipschitz continuous on $\overline{\mathbb{R}}_+^q$ and $S_i : [0, \infty) \to \mathbb{R}$, $i = 1, \ldots, q$, are bounded piecewise continuous functions of time.
Figure 1. Interconnected dynamical system G .
An energy balance for the ith subsystem yields
$$E_i(T) = E_i(t_0) + \sum_{j=1,\, j \neq i}^{q} \int_{t_0}^{T} \phi_{ij}(E(t))\, \mathrm{d}t - \int_{t_0}^{T} \sigma_{ii}(E(t))\, \mathrm{d}t + \int_{t_0}^{T} S_i(t)\, \mathrm{d}t, \qquad T \geq t_0 \tag{2}$$
or, equivalently, in vector form,
$$E(T) = E(t_0) + \int_{t_0}^{T} w(E(t))\, \mathrm{d}t - \int_{t_0}^{T} d(E(t))\, \mathrm{d}t + \int_{t_0}^{T} S(t)\, \mathrm{d}t, \qquad T \geq t_0 \tag{3}$$
where $E(t) \triangleq [E_1(t), \ldots, E_q(t)]^{\mathrm{T}}$, $t \geq t_0$, is the system energy state, $d(E(t)) \triangleq [\sigma_{11}(E(t)), \ldots, \sigma_{qq}(E(t))]^{\mathrm{T}}$, $t \geq t_0$, is the system dissipation, $S(t) \triangleq [S_1(t), \ldots, S_q(t)]^{\mathrm{T}}$, $t \geq t_0$, is the system heat flux, and $w = [w_1, \ldots, w_q]^{\mathrm{T}} : \overline{\mathbb{R}}_+^q \to \mathbb{R}^q$ is such that
$$w_i(E) = \sum_{j=1,\, j \neq i}^{q} \phi_{ij}(E), \qquad E \in \overline{\mathbb{R}}_+^q \tag{4}$$
Since $\phi_{ij} : \overline{\mathbb{R}}_+^q \to \mathbb{R}$, $i \neq j$, $i, j = 1, \ldots, q$, denotes the net instantaneous rate of energy flow from the $j$th subsystem to the $i$th subsystem, it is clear that $\phi_{ij}(E) = -\phi_{ji}(E)$, $E \in \overline{\mathbb{R}}_+^q$, $i \neq j$, $i, j = 1, \ldots, q$, which further implies that $\mathbf{e}^{\mathrm{T}} w(E) = 0$, $E \in \overline{\mathbb{R}}_+^q$.
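To make the skew-symmetry argument concrete, here is a minimal sketch assuming the simple linear exchange law $\phi_{ij}(E) = E_j - E_i$ (our illustrative choice, not mandated by the paper), showing that $\phi_{ij}(E) = -\phi_{ji}(E)$ forces $\mathbf{e}^{\mathrm{T}} w(E) = 0$:

```python
q = 4
E = [3.0, 1.5, 0.25, 5.0]   # an arbitrary nonnegative energy state

def phi(i, j, E):
    # Assumed linear exchange law: net flow from subsystem j into subsystem i
    return E[j] - E[i]

# w_i(E) = sum over j != i of phi_ij(E)
w = [sum(phi(i, j, E) for j in range(q) if j != i) for i in range(q)]

# skew symmetry: phi_ij(E) = -phi_ji(E)
assert all(abs(phi(i, j, E) + phi(j, i, E)) < 1e-12
           for i in range(q) for j in range(q) if i != j)

# hence the total exchanged power vanishes: e^T w(E) = 0
assert abs(sum(w)) < 1e-12
```

Any other locally Lipschitz choice of $\phi_{ij}$ satisfying the skew symmetry would pass the same total-flow check, since the pairwise flows cancel in the sum.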
Note that Equation (2) yields a conservation of energy equation and implies that the energy stored in the ith subsystem is equal to the external energy supplied to (or extracted from) the ith subsystem plus the energy gained by the ith subsystem from all other subsystems due to subsystem coupling minus the energy dissipated from the ith subsystem to the environment. Equivalently, Equation (2) can be rewritten as
$$\dot{E}_i(t) = \sum_{j=1,\, j \neq i}^{q} \phi_{ij}(E(t)) - \sigma_{ii}(E(t)) + S_i(t), \qquad E_i(t_0) = E_{i0}, \quad t \geq t_0 \tag{5}$$
or, in vector form,
$$\dot{E}(t) = w(E(t)) - d(E(t)) + S(t), \qquad E(t_0) = E_0, \quad t \geq t_0 \tag{6}$$
where $E_0 \triangleq [E_{10}, \ldots, E_{q0}]^{\mathrm{T}}$, yielding a power balance equation that characterizes energy flow between subsystems of the interconnected dynamical system $\mathcal{G}$. We assume that $\phi_{ij}(E) \geq 0$, $E \in \overline{\mathbb{R}}_+^q$, whenever $E_i = 0$, $i \neq j$, $i, j = 1, \ldots, q$, and $\sigma_{ii}(E) = 0$ whenever $E_i = 0$, $i = 1, \ldots, q$. The above constraint implies that if the energy of the $i$th subsystem of $\mathcal{G}$ is zero, then this subsystem cannot supply any energy to its surroundings or dissipate energy to the environment. In this case, $w(E) - d(E)$, $E \in \overline{\mathbb{R}}_+^q$, is essentially nonnegative [12]. Thus, if $S(t) \equiv 0$, then, by Proposition 2.1, the solutions to Equation (6) are nonnegative for all nonnegative initial conditions. See [4,11,12] for further details.
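The conservation and nonnegativity properties above can be observed numerically. The sketch below uses forward Euler with our own assumed linear flow law $\phi_{ij}(E) = E_j - E_i$ and with $d(E) \equiv 0$, $S(t) \equiv 0$ (the isolated case); it checks that the total energy $\mathbf{e}^{\mathrm{T}} E(t)$ stays constant and that the state remains in the nonnegative orthant:

```python
# Forward-Euler integration of the isolated power balance E'(t) = w(E(t))
# under an assumed linear exchange law; step size and horizon are our choices.
q, h, steps = 3, 1e-3, 20000
E = [4.0, 0.0, 2.0]          # nonnegative initial energies
total0 = sum(E)              # e^T E(t0)

for _ in range(steps):
    w = [sum(E[j] - E[i] for j in range(q) if j != i) for i in range(q)]
    E = [Ei + h * wi for Ei, wi in zip(E, w)]

# solutions stay nonnegative for nonnegative initial conditions
assert all(Ei >= 0 for Ei in E)
# e^T w(E) = 0 implies conservation of total energy
assert abs(sum(E) - total0) < 1e-9
```

The essential nonnegativity of $w$ shows up directly: the compartment starting at zero energy can only gain energy, never lose it.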
Since our thermodynamic compartmental model involves intercompartmental flows representing energy transfer between compartments, we can use graph-theoretic notions with undirected graph topologies (i.e., bidirectional energy flows) to capture the compartmental system interconnections. Graph theory [13,14] can be useful in the analysis of the connectivity properties of compartmental systems. In particular, an undirected graph can be constructed to capture a compartmental model in which the compartments are represented by nodes and the flows are represented by edges or arcs. In this case, the environment must also be considered as an additional node.
For the interconnected dynamical system $\mathcal{G}$ with the power balance Equation (6), we define a connectivity matrix $\mathcal{C} \in \mathbb{R}^{q \times q}$ such that for $i \neq j$, $i, j = 1, \ldots, q$, $\mathcal{C}_{(i,j)} \triangleq 1$ if $\phi_{ij}(E) \not\equiv 0$ and $\mathcal{C}_{(i,j)} \triangleq 0$ otherwise, and $\mathcal{C}_{(i,i)} \triangleq -\sum_{k=1,\, k \neq i}^{q} \mathcal{C}_{(k,i)}$, $i = 1, \ldots, q$. (The negative of the connectivity matrix, that is, $-\mathcal{C}$, is known as the graph Laplacian in the literature.) Recall that if rank $\mathcal{C} = q - 1$, then $\mathcal{G}$ is strongly connected [4] and energy exchange is possible between any two subsystems of $\mathcal{G}$.
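A small sketch of the connectivity matrix for an assumed line (path) interconnection of $q = 4$ compartments; the topology is our own illustrative choice, and the strong-connectivity test rank $\mathcal{C} = q - 1$ is checked with a hand-rolled Gaussian elimination so no libraries are needed:

```python
q = 4
edges = [(0, 1), (1, 2), (2, 3)]   # bidirectional energy flows (path graph)

# Build C: off-diagonal 1 where a flow exists, diagonal = -column sum
C = [[0.0] * q for _ in range(q)]
for i, j in edges:
    C[i][j] = C[j][i] = 1.0
for i in range(q):
    C[i][i] = -sum(C[k][i] for k in range(q) if k != i)

def rank(M, tol=1e-10):
    """Matrix rank via row reduction with partial pivoting."""
    M = [row[:] for row in M]
    r = 0
    for col in range(len(M[0])):
        pivot = max(range(r, len(M)), key=lambda k: abs(M[k][col]), default=None)
        if pivot is None or abs(M[pivot][col]) < tol:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for k in range(len(M)):
            if k != r:
                f = M[k][col] / M[r][col]
                M[k] = [a - f * b for a, b in zip(M[k], M[r])]
        r += 1
    return r

assert rank(C) == q - 1   # the interconnection is strongly connected
assert all(abs(sum(C[k][i] for k in range(q))) < 1e-12 for i in range(q))
```

Removing an edge from `edges` disconnects the path and drops the rank below $q - 1$, so the same check doubles as a quick connectivity diagnostic for other topologies.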
The next definition introduces a notion of entropy for the interconnected dynamical system G .
Definition 3.1 Consider the interconnected dynamical system $\mathcal{G}$ with the power balance Equation (6). A continuously differentiable, strictly concave function $\mathcal{S} : \overline{\mathbb{R}}_+^q \to \mathbb{R}$ is called the entropy function of $\mathcal{G}$ if
$$\left( \frac{\partial \mathcal{S}(E)}{\partial E_i} - \frac{\partial \mathcal{S}(E)}{\partial E_j} \right) \phi_{ij}(E) \geq 0, \qquad E \in \overline{\mathbb{R}}_+^q, \quad i \neq j, \quad i, j = 1, \ldots, q \tag{7}$$
and $\frac{\partial \mathcal{S}(E)}{\partial E_i} = \frac{\partial \mathcal{S}(E)}{\partial E_j}$ if and only if $\phi_{ij}(E) = 0$ with $\mathcal{C}_{(i,j)} = 1$, $i \neq j$, $i, j = 1, \ldots, q$.
It follows from Definition 3.1 that for an isolated system $\mathcal{G}$, that is, $S(t) \equiv 0$ and $d(E) \equiv 0$, the entropy function of $\mathcal{G}$ is a nondecreasing function of time. To see this, note that
$$\dot{\mathcal{S}}(E) = \frac{\partial \mathcal{S}(E)}{\partial E} \dot{E} = \sum_{i=1}^{q} \frac{\partial \mathcal{S}(E)}{\partial E_i} \sum_{j=1,\, j \neq i}^{q} \phi_{ij}(E) = \sum_{i=1}^{q-1} \sum_{j=i+1}^{q} \left( \frac{\partial \mathcal{S}(E)}{\partial E_i} - \frac{\partial \mathcal{S}(E)}{\partial E_j} \right) \phi_{ij}(E) \geq 0, \qquad E \in \overline{\mathbb{R}}_+^q \tag{8}$$
where $\frac{\partial \mathcal{S}(E)}{\partial E} \triangleq \left[ \frac{\partial \mathcal{S}(E)}{\partial E_1}, \ldots, \frac{\partial \mathcal{S}(E)}{\partial E_q} \right]$ and where we used the fact that $\phi_{ij}(E) = -\phi_{ji}(E)$, $E \in \overline{\mathbb{R}}_+^q$, $i \neq j$, $i, j = 1, \ldots, q$.
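The monotonicity of the entropy along trajectories can also be checked numerically. The sketch below assumes the entropy $\mathcal{S}(E) = \sum_i \log(c + E_i)$ and the flow law $\phi_{ij}(E) = E_j - E_i$; both are illustrative choices consistent with Definition 3.1, not prescriptions from the paper:

```python
import math

q, c, h = 3, 1.0, 1e-3
E = [5.0, 1.0, 0.0]

def entropy(E):
    # Assumed entropy S(E) = sum_i log(c + E_i)
    return sum(math.log(c + Ei) for Ei in E)

history = [entropy(E)]
for _ in range(5000):
    # phi_ij(E) = E_j - E_i satisfies the entropy inequality for this S,
    # since d/dE_i log(c + E_i) is decreasing in E_i
    w = [sum(E[j] - E[i] for j in range(q) if j != i) for i in range(q)]
    E = [Ei + h * wi for Ei, wi in zip(E, w)]
    history.append(entropy(E))

# the entropy never decreases along the isolated trajectory
assert all(b >= a - 1e-12 for a, b in zip(history, history[1:]))
assert history[-1] > history[0]
```

With the chosen step size the discretization error is dominated by the strictly positive entropy production away from equilibrium, so the discrete sequence inherits the monotonicity of the continuous-time result.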
Proposition 3.1 Consider the isolated (i.e., $S(t) \equiv 0$ and $d(E) \equiv 0$) interconnected dynamical system $\mathcal{G}$ with the power balance Equation (6). Assume that rank $\mathcal{C} = q - 1$ and there exists an entropy function $\mathcal{S} : \overline{\mathbb{R}}_+^q \to \mathbb{R}$ of $\mathcal{G}$. Then, $\sum_{j=1,\, j \neq i}^{q} \phi_{ij}(E) = 0$ for all $i = 1, \ldots, q$ if and only if $\frac{\partial \mathcal{S}(E)}{\partial E_1} = \cdots = \frac{\partial \mathcal{S}(E)}{\partial E_q}$. Furthermore, the set of nonnegative equilibrium states of Equation (6) is given by $\mathcal{E}_0 \triangleq \left\{ E \in \overline{\mathbb{R}}_+^q : \frac{\partial \mathcal{S}(E)}{\partial E_1} = \cdots = \frac{\partial \mathcal{S}(E)}{\partial E_q} \right\}$.
Proof. If $\frac{\partial \mathcal{S}(E)}{\partial E_i} = \frac{\partial \mathcal{S}(E)}{\partial E_j}$ for all $i, j = 1, \ldots, q$, then $\phi_{ij}(E) = 0$ for all $i, j = 1, \ldots, q$, which implies that $\sum_{j=1,\, j \neq i}^{q} \phi_{ij}(E) = 0$ for all $i = 1, \ldots, q$. Conversely, assume that $\sum_{j=1,\, j \neq i}^{q} \phi_{ij}(E) = 0$ for all $i = 1, \ldots, q$, and, since $\mathcal{S}$ is an entropy function of $\mathcal{G}$, it follows that
$$0 = \sum_{i=1}^{q} \sum_{j=1,\, j \neq i}^{q} \frac{\partial \mathcal{S}(E)}{\partial E_i} \phi_{ij}(E) = \sum_{i=1}^{q-1} \sum_{j=i+1}^{q} \left( \frac{\partial \mathcal{S}(E)}{\partial E_i} - \frac{\partial \mathcal{S}(E)}{\partial E_j} \right) \phi_{ij}(E) \geq 0$$
where we have used the fact that ϕ i j ( E ) = - ϕ j i ( E ) for all i , j = 1 , , q . Hence,
$$\left( \frac{\partial \mathcal{S}(E)}{\partial E_i} - \frac{\partial \mathcal{S}(E)}{\partial E_j} \right) \phi_{ij}(E) = 0$$
for all i , j = 1 , , q . Now, the result follows from the fact that rank C = q - 1 . □
Theorem 3.1 Consider the isolated (i.e., $S(t) \equiv 0$ and $d(E) \equiv 0$) interconnected dynamical system $\mathcal{G}$ with the power balance Equation (6). Assume that rank $\mathcal{C} = q - 1$ and there exists an entropy function $\mathcal{S} : \overline{\mathbb{R}}_+^q \to \mathbb{R}$ of $\mathcal{G}$. Then the isolated system $\mathcal{G}$ is globally semistable with respect to $\overline{\mathbb{R}}_+^q$.
Proof. Since $w(\cdot)$ is essentially nonnegative, it follows from Proposition 2.1 that $E(t) \in \overline{\mathbb{R}}_+^q$, $t \geq t_0$, for all $E_0 \in \overline{\mathbb{R}}_+^q$. Furthermore, note that since $\mathbf{e}^{\mathrm{T}} w(E) = 0$, $E \in \overline{\mathbb{R}}_+^q$, it follows that $\mathbf{e}^{\mathrm{T}} \dot{E}(t) = 0$, $t \geq t_0$. In this case, $\mathbf{e}^{\mathrm{T}} E(t) = \mathbf{e}^{\mathrm{T}} E_0$, $t \geq t_0$, which implies that $E(t)$, $t \geq t_0$, is bounded for all $E_0 \in \overline{\mathbb{R}}_+^q$. Now, it follows from Equation (8) that $\mathcal{S}(E(t))$, $t \geq t_0$, is a nondecreasing function of time, and hence, by the Krasovskii–LaSalle theorem [7], $E(t) \to \mathcal{R} \triangleq \{E \in \overline{\mathbb{R}}_+^q : \dot{\mathcal{S}}(E) = 0\}$ as $t \to \infty$. Next, it follows from Equation (8), Definition 3.1, and the fact that rank $\mathcal{C} = q - 1$, that $\mathcal{R} = \left\{ E \in \overline{\mathbb{R}}_+^q : \frac{\partial \mathcal{S}(E)}{\partial E_1} = \cdots = \frac{\partial \mathcal{S}(E)}{\partial E_q} \right\} = \mathcal{E}_0$.
Now, let $E_{\mathrm{e}} \in \mathcal{E}_0$ and consider the continuously differentiable function $V : \mathbb{R}^q \to \mathbb{R}$ defined by
$$V(E) \triangleq \mathcal{S}(E_{\mathrm{e}}) - \mathcal{S}(E) - \lambda_{\mathrm{e}} (\mathbf{e}^{\mathrm{T}} E_{\mathrm{e}} - \mathbf{e}^{\mathrm{T}} E)$$
where $\lambda_{\mathrm{e}} \triangleq \frac{\partial \mathcal{S}}{\partial E_1}(E_{\mathrm{e}})$. Next, note that $V(E_{\mathrm{e}}) = 0$, $\frac{\partial V}{\partial E}(E_{\mathrm{e}}) = -\frac{\partial \mathcal{S}}{\partial E}(E_{\mathrm{e}}) + \lambda_{\mathrm{e}} \mathbf{e}^{\mathrm{T}} = 0$, and, since $\mathcal{S}(\cdot)$ is a strictly concave function, $\frac{\partial^2 V}{\partial E^2}(E_{\mathrm{e}}) = -\frac{\partial^2 \mathcal{S}}{\partial E^2}(E_{\mathrm{e}}) > 0$, which implies that $V(\cdot)$ admits a local minimum at $E_{\mathrm{e}}$. Thus, $V(E_{\mathrm{e}}) = 0$, there exists $\delta > 0$ such that $V(E) > 0$, $E \in \mathcal{B}_\delta(E_{\mathrm{e}}) \setminus \{E_{\mathrm{e}}\}$, and $\dot{V}(E) = -\dot{\mathcal{S}}(E) \leq 0$ for all $E \in \mathcal{B}_\delta(E_{\mathrm{e}}) \setminus \{E_{\mathrm{e}}\}$, which shows that $V(\cdot)$ is a Lyapunov function for $\mathcal{G}$ and $E_{\mathrm{e}}$ is a Lyapunov stable equilibrium of $\mathcal{G}$. Finally, since, for every $E_0 \in \overline{\mathbb{R}}_+^q$, $E(t) \to \mathcal{E}_0$ as $t \to \infty$ and every equilibrium point of $\mathcal{G}$ is Lyapunov stable, it follows from Proposition 2.2 that $\mathcal{G}$ is globally semistable with respect to $\overline{\mathbb{R}}_+^q$. □
In classical thermodynamics, the partial derivative of the system entropy with respect to the system energy defines the reciprocal of the system temperature. Thus, for the interconnected dynamical system G ,
$$T_i \triangleq \left( \frac{\partial \mathcal{S}(E)}{\partial E_i} \right)^{-1}, \qquad i = 1, \ldots, q \tag{9}$$
represents the temperature of the $i$th subsystem. Equation (7) is a manifestation of the second law of thermodynamics and implies that if the temperature of the $j$th subsystem is greater than the temperature of the $i$th subsystem, then energy (heat) flows from the $j$th subsystem to the $i$th subsystem. Furthermore, since $\frac{\partial \mathcal{S}(E)}{\partial E_i} = \frac{\partial \mathcal{S}(E)}{\partial E_j}$ if and only if $\phi_{ij}(E) = 0$ with $\mathcal{C}_{(i,j)} = 1$, $i \neq j$, $i, j = 1, \ldots, q$, temperature equality is a necessary and sufficient condition for thermal equilibrium. This is a statement of the zeroth law of thermodynamics. As a result, Theorem 3.1 shows that, for a strongly connected system $\mathcal{G}$, the subsystem energies converge to the set of equilibrium states where the temperatures of all subsystems are equal. This phenomenon is known as equipartition of temperature [4] and is an emergent behavior in thermodynamic systems. In particular, all the system energy is eventually transferred into heat at a uniform temperature, and hence, all dynamical processes in $\mathcal{G}$ (system motions) would cease.
The following result presents a sufficient condition for energy equipartition of the system, that is, the energies of all subsystems are equal. This state of energy equipartition is uniquely determined by the initial energy in the system.
Theorem 3.2 Consider the isolated (i.e., $S(t) \equiv 0$ and $d(E) \equiv 0$) interconnected dynamical system $\mathcal{G}$ with the power balance Equation (6). Assume that rank $\mathcal{C} = q - 1$ and there exists a continuously differentiable, strictly concave function $f : \overline{\mathbb{R}}_+ \to \mathbb{R}$ such that the entropy function $\mathcal{S} : \overline{\mathbb{R}}_+^q \to \mathbb{R}$ of $\mathcal{G}$ is given by $\mathcal{S}(E) = \sum_{i=1}^{q} f(E_i)$. Then, the set of nonnegative equilibrium states of Equation (6) is given by $\mathcal{E}_0 = \{\alpha \mathbf{e} : \alpha \geq 0\}$ and $\mathcal{G}$ is semistable with respect to $\overline{\mathbb{R}}_+^q$. Furthermore, $E(t) \to \frac{1}{q} \mathbf{e} \mathbf{e}^{\mathrm{T}} E(t_0)$ as $t \to \infty$ and $\frac{1}{q} \mathbf{e} \mathbf{e}^{\mathrm{T}} E(t_0)$ is a semistable equilibrium state of $\mathcal{G}$.
Proof. First, note that since f ( · ) is a continuously differentiable, strictly concave function, it follows that
$$\left( \frac{\mathrm{d}f}{\mathrm{d}E_i} - \frac{\mathrm{d}f}{\mathrm{d}E_j} \right)(E_i - E_j) \leq 0, \qquad E \in \overline{\mathbb{R}}_+^q, \quad i, j = 1, \ldots, q$$
which implies that Equation (7) is equivalent to
$$(E_i - E_j)\, \phi_{ij}(E) \leq 0, \qquad E \in \overline{\mathbb{R}}_+^q, \quad i \neq j, \quad i, j = 1, \ldots, q$$
and $E_i = E_j$ if and only if $\phi_{ij}(E) = 0$ with $\mathcal{C}_{(i,j)} = 1$, $i \neq j$, $i, j = 1, \ldots, q$. Hence, $-\frac{1}{2} E^{\mathrm{T}} E$ is an entropy function of $\mathcal{G}$. Next, with $\mathcal{S}(E) = -\frac{1}{2} E^{\mathrm{T}} E$, it follows from Proposition 3.1 that $\mathcal{E}_0 = \{\alpha \mathbf{e} \in \overline{\mathbb{R}}_+^q : \alpha \geq 0\}$. Now, it follows from Theorem 3.1 that $\mathcal{G}$ is globally semistable with respect to $\overline{\mathbb{R}}_+^q$. Finally, since $\mathbf{e}^{\mathrm{T}} E(t) = \mathbf{e}^{\mathrm{T}} E(t_0)$ and $E(t) \to \mathcal{E}_0$ as $t \to \infty$, it follows that $E(t) \to \frac{1}{q} \mathbf{e} \mathbf{e}^{\mathrm{T}} E(t_0)$ as $t \to \infty$. Hence, with $\alpha = \frac{1}{q} \mathbf{e}^{\mathrm{T}} E(t_0)$, $\alpha \mathbf{e} = \frac{1}{q} \mathbf{e} \mathbf{e}^{\mathrm{T}} E(t_0)$ is a semistable equilibrium state of Equation (6). □
If $f(E_i) = \log_e(c + E_i)$, where $c > 0$, so that $\mathcal{S}(E) = \sum_{i=1}^{q} \log_e(c + E_i)$, then it follows from Theorem 3.2 that $\mathcal{E}_0 = \{\alpha \mathbf{e} : \alpha \geq 0\}$ and the isolated (i.e., $S(t) \equiv 0$ and $d(E) \equiv 0$) interconnected dynamical system $\mathcal{G}$ with the power balance Equation (6) is semistable. In this case, the absolute temperature of the $i$th compartment is given by $c + E_i$. Similarly, if $\mathcal{S}(E) = -\frac{1}{2} E^{\mathrm{T}} E$, then it follows from Theorem 3.2 that $\mathcal{E}_0 = \{\alpha \mathbf{e} : \alpha \geq 0\}$ and the isolated interconnected dynamical system $\mathcal{G}$ with the power balance Equation (6) is semistable. In both cases, $E(t) \to \frac{1}{q} \mathbf{e} \mathbf{e}^{\mathrm{T}} E(t_0)$ as $t \to \infty$. This shows that the steady-state energy of the isolated interconnected dynamical system $\mathcal{G}$ is given by $\frac{1}{q} \mathbf{e} \mathbf{e}^{\mathrm{T}} E(t_0) = \frac{1}{q} \left( \sum_{i=1}^{q} E_i(t_0) \right) \mathbf{e}$, and hence is uniformly distributed over all subsystems of $\mathcal{G}$. This phenomenon is known as energy equipartition [4]. The aforementioned forms of $\mathcal{S}(E)$ were extensively discussed in the recent book [4], where $\mathcal{S}(E) = \sum_{i=1}^{q} \log_e(c + E_i)$ and $-\mathcal{S}(E) = \frac{1}{2} E^{\mathrm{T}} E$ are referred to, respectively, as the entropy and the ectropy functions of the interconnected dynamical system $\mathcal{G}$.
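A numerical sketch of energy equipartition, again under our assumed linear flow law $\phi_{ij}(E) = E_j - E_i$: the state converges to $\frac{1}{q} \mathbf{e} \mathbf{e}^{\mathrm{T}} E(t_0)$, that is, every subsystem ends up at the average of the initial subsystem energies.

```python
# Forward-Euler integration of the isolated system under an assumed linear
# exchange law; the step size and horizon are illustrative choices.
q, h, steps = 4, 1e-3, 30000
E = [8.0, 0.0, 3.0, 1.0]
avg = sum(E) / q               # (1/q) e^T E(t0)

for _ in range(steps):
    w = [sum(E[j] - E[i] for j in range(q) if j != i) for i in range(q)]
    E = [Ei + h * wi for Ei, wi in zip(E, w)]

# each subsystem energy converges to the common average value
assert max(abs(Ei - avg) for Ei in E) < 1e-6
```

The convergence is exponential here because the assumed flow law corresponds to a complete interconnection graph; sparser strongly connected topologies converge to the same equipartitioned state, only more slowly.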

4. Work Energy, Gibbs Free Energy, Helmholtz Free Energy, Enthalpy, and Entropy

In this section, we augment our thermodynamic energy flow model $\mathcal{G}$ with an additional (deformation) state representing subsystem volumes in order to introduce the notion of work into our thermodynamically consistent state space energy flow model. Specifically, we assume that each subsystem can perform (positive) work on the environment and the environment can perform (negative) work on the subsystems. The rate of work done by the $i$th subsystem on the environment is denoted by $d_{\mathrm{w}i} : \overline{\mathbb{R}}_+^q \times \mathbb{R}_+^q \to \overline{\mathbb{R}}_+$, $i = 1, \ldots, q$, the rate of work done by the environment on the $i$th subsystem is denoted by $S_{\mathrm{w}i} : [0, \infty) \to \overline{\mathbb{R}}_+$, $i = 1, \ldots, q$, and the volume of the $i$th subsystem is denoted by $V_i : [0, \infty) \to \mathbb{R}_+$, $i = 1, \ldots, q$. The net work done by each subsystem on the environment satisfies
p i ( E , V ) d V i = ( d w i ( E , V ) - S w i ( t ) ) d t
where p i ( E , V ) , i = 1 , , q , denotes the pressure in the ith subsystem and V [ V 1 , , V q ] T .
Furthermore, in the presence of work, the energy balance Equation (5) for each subsystem can be rewritten as
$$\mathrm{d}E_i = w_i(E, V)\, \mathrm{d}t - (d_{wi}(E, V) - S_{wi}(t))\, \mathrm{d}t - \sigma_{ii}(E, V)\, \mathrm{d}t + S_i(t)\, \mathrm{d}t \qquad (11)$$
where $w_i(E, V) \triangleq \sum_{j=1,\, j \neq i}^{q} \phi_{ij}(E, V)$, with $\phi_{ij} : \overline{\mathbb{R}}^q_+ \times \mathbb{R}^q_+ \to \mathbb{R}$, $i \neq j$, $i, j = 1, \ldots, q$, denoting the net instantaneous rate of energy (heat) flow from the $j$th subsystem to the $i$th subsystem, $\sigma_{ii} : \overline{\mathbb{R}}^q_+ \times \mathbb{R}^q_+ \to \overline{\mathbb{R}}_+$, $i = 1, \ldots, q$, denotes the instantaneous rate of energy dissipation from the $i$th subsystem to the environment, and, as in Section 3, $S_i : [0, \infty) \to \mathbb{R}$, $i = 1, \ldots, q$, denotes the external power supplied to (or extracted from) the $i$th subsystem. It follows from Equations (10) and (11) that positive work done by a subsystem on the environment leads to a decrease in the internal energy of the subsystem and an increase in the subsystem volume, which is consistent with the first law of thermodynamics.
The definition of entropy for $\mathcal{G}$ in the presence of work remains the same as in Definition 3.1 with $\mathcal{S}(E)$ replaced by $\mathcal{S}(E, V)$ and with all other conditions in the definition holding for every $V \gg 0$. Next, consider the $i$th subsystem of $\mathcal{G}$ and assume that $E_j$ and $V_j$, $j \neq i$, $i = 1, \ldots, q$, are constant. In this case, note that
$$\frac{\mathrm{d}\mathcal{S}}{\mathrm{d}t} = \frac{\partial \mathcal{S}}{\partial E_i} \frac{\mathrm{d}E_i}{\mathrm{d}t} + \frac{\partial \mathcal{S}}{\partial V_i} \frac{\mathrm{d}V_i}{\mathrm{d}t} \qquad (12)$$
and
$$p_i(E, V) = \left( \frac{\partial \mathcal{S}}{\partial E_i} \right)^{-1} \frac{\partial \mathcal{S}}{\partial V_i}, \qquad i = 1, \ldots, q \qquad (13)$$
It follows from Equations (10) and (11) that, in the presence of work energy, the power balance Equation (6) takes the new form involving energy and deformation states
$$\dot{E}(t) = w(E(t), V(t)) - d_w(E(t), V(t)) + S_w(t) - d(E(t), V(t)) + S(t), \qquad E(t_0) = E_0, \qquad t \geq t_0, \qquad (14)$$
$$\dot{V}(t) = D(E(t), V(t)) (d_w(E(t), V(t)) - S_w(t)), \qquad V(t_0) = V_0 \qquad (15)$$
where $w(E, V) \triangleq [w_1(E, V), \ldots, w_q(E, V)]^{\mathrm{T}}$, $d_w(E, V) \triangleq [d_{w1}(E, V), \ldots, d_{wq}(E, V)]^{\mathrm{T}}$, $S_w(t) \triangleq [S_{w1}(t), \ldots, S_{wq}(t)]^{\mathrm{T}}$, $d(E, V) \triangleq [\sigma_{11}(E, V), \ldots, \sigma_{qq}(E, V)]^{\mathrm{T}}$, $S(t) \triangleq [S_1(t), \ldots, S_q(t)]^{\mathrm{T}}$, and
$$D(E, V) \triangleq \operatorname{diag}\left[ \frac{\partial \mathcal{S}}{\partial E_1} \left( \frac{\partial \mathcal{S}}{\partial V_1} \right)^{-1}, \ \ldots, \ \frac{\partial \mathcal{S}}{\partial E_q} \left( \frac{\partial \mathcal{S}}{\partial V_q} \right)^{-1} \right] \qquad (16)$$
Note that
$$\frac{\partial \mathcal{S}(E, V)}{\partial V} D(E, V) = \frac{\partial \mathcal{S}(E, V)}{\partial E} \qquad (17)$$
The power balance and deformation Equations (14) and (15) represent a statement of the first law of thermodynamics. To see this, define the work $L$ done by the interconnected dynamical system $\mathcal{G}$ over the time interval $[t_1, t_2]$ by
$$L \triangleq \int_{t_1}^{t_2} \mathbf{e}^{\mathrm{T}} [d_w(E(t), V(t)) - S_w(t)]\, \mathrm{d}t \qquad (18)$$
where $[E^{\mathrm{T}}(t), V^{\mathrm{T}}(t)]^{\mathrm{T}}$, $t \geq t_0$, is the solution to Equations (14) and (15). Now, premultiplying Equation (14) by $\mathbf{e}^{\mathrm{T}}$ and using the fact that $\mathbf{e}^{\mathrm{T}} w(E, V) = 0$, it follows that
$$\Delta U = -L + Q \qquad (19)$$
where $\Delta U = U(t_2) - U(t_1) \triangleq \mathbf{e}^{\mathrm{T}} E(t_2) - \mathbf{e}^{\mathrm{T}} E(t_1)$ denotes the variation in the total energy of the interconnected system $\mathcal{G}$ over the time interval $[t_1, t_2]$ and
$$Q \triangleq \int_{t_1}^{t_2} \mathbf{e}^{\mathrm{T}} [S(t) - d(E(t), V(t))]\, \mathrm{d}t \qquad (20)$$
denotes the net energy received by $\mathcal{G}$ in forms other than work.
This is a statement of the first law of thermodynamics for the interconnected dynamical system G and gives a precise formulation of the equivalence between work and heat. This establishes that heat and mechanical work are two different aspects of energy. Finally, note that Equation (15) is consistent with the classical thermodynamic equation for the rate of work done by the system G on the environment. To see this, note that Equation (15) can be equivalently written as
$$\mathrm{d}L = \mathbf{e}^{\mathrm{T}} D^{-1}(E, V)\, \mathrm{d}V \qquad (21)$$
which, for a single subsystem with volume V and pressure p, has the classical form
$$\mathrm{d}L = p\, \mathrm{d}V \qquad (22)$$
It follows from Definition 3.1 and Equations (14)–(17) that the time derivative of the entropy function satisfies
$$\begin{aligned}
\dot{\mathcal{S}}(E, V) &= \frac{\partial \mathcal{S}(E, V)}{\partial E} \dot{E} + \frac{\partial \mathcal{S}(E, V)}{\partial V} \dot{V} \\
&= \frac{\partial \mathcal{S}(E, V)}{\partial E} w(E, V) - \frac{\partial \mathcal{S}(E, V)}{\partial E} (d_w(E, V) - S_w(t)) - \frac{\partial \mathcal{S}(E, V)}{\partial E} (d(E, V) - S(t)) \\
&\qquad + \frac{\partial \mathcal{S}(E, V)}{\partial V} D(E, V) (d_w(E, V) - S_w(t)) \\
&= \sum_{i=1}^{q} \frac{\partial \mathcal{S}(E, V)}{\partial E_i} \sum_{j=1,\, j \neq i}^{q} \phi_{ij}(E, V) + \sum_{i=1}^{q} \frac{\partial \mathcal{S}(E, V)}{\partial E_i} (S_i(t) - d_i(E, V)) \\
&= \sum_{i=1}^{q} \sum_{j=i+1}^{q} \left( \frac{\partial \mathcal{S}(E, V)}{\partial E_i} - \frac{\partial \mathcal{S}(E, V)}{\partial E_j} \right) \phi_{ij}(E, V) + \sum_{i=1}^{q} \frac{\partial \mathcal{S}(E, V)}{\partial E_i} (S_i(t) - d_i(E, V)) \\
&\geq \sum_{i=1}^{q} \frac{\partial \mathcal{S}(E, V)}{\partial E_i} (S_i(t) - d_i(E, V)), \qquad (E, V) \in \overline{\mathbb{R}}^q_+ \times \mathbb{R}^q_+ \qquad (23)
\end{aligned}$$
Noting that $\mathrm{d}Q_i \triangleq [S_i(t) - \sigma_{ii}(E, V)]\, \mathrm{d}t$, $i = 1, \ldots, q$, is the infinitesimal amount of the net heat received or dissipated by the $i$th subsystem of $\mathcal{G}$ over the infinitesimal time interval $\mathrm{d}t$, it follows from Equation (23) that
$$\mathrm{d}\mathcal{S}(E, V) \geq \sum_{i=1}^{q} \frac{\mathrm{d}Q_i}{T_i} \qquad (24)$$
Inequality (24) is the classical Clausius inequality for the variation of entropy during an infinitesimal irreversible transformation.
Note that for an adiabatically isolated interconnected dynamical system (i.e., no heat exchange with the environment), Equation (23) yields the universal inequality
$$\mathcal{S}(E(t_2), V(t_2)) \geq \mathcal{S}(E(t_1), V(t_1)), \qquad t_2 \geq t_1 \qquad (25)$$
which implies that, for any dynamical change in an adiabatically isolated interconnected system $\mathcal{G}$, the entropy of the final system state can never be less than the entropy of the initial system state. In addition, in the case where $(E(t), V(t)) \notin \mathcal{M}_e$, $t \geq t_0$, where $\mathcal{M}_e \triangleq \{ (E, V) \in \overline{\mathbb{R}}^q_+ \times \overline{\mathbb{R}}^q_+ : E = \alpha \mathbf{e},\ \alpha \geq 0,\ V \in \mathbb{R}^q_+ \}$, it follows from Definition 3.1 and Equation (23) that Inequality (25) is satisfied as a strict inequality for all $(E, V) \in (\overline{\mathbb{R}}^q_+ \times \overline{\mathbb{R}}^q_+) \setminus \mathcal{M}_e$. Hence, it follows from Theorem 2.15 of [4] that the adiabatically isolated interconnected system $\mathcal{G}$ does not exhibit Poincaré recurrence in $(\overline{\mathbb{R}}^q_+ \times \overline{\mathbb{R}}^q_+) \setminus \mathcal{M}_e$.
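The entropy monotonicity expressed by Inequality (25) can also be checked numerically. The sketch below reuses the hypothetical linear flow law $\phi_{ij}(E) = \sigma(E_j - E_i)$ from the earlier example, which is one choice under which heat flows from hot to cold for the entropy $\mathcal{S}(E) = \sum_{i=1}^{q} \log_e(c + E_i)$ with temperatures $c + E_i$; it records $\mathcal{S}(E(t))$ along an isolated trajectory and confirms that it never decreases:

```python
import numpy as np

def entropy(E, c=1.0):
    # S(E) = sum_i log_e(c + E_i), the entropy function from Section 3
    return np.sum(np.log(c + E))

def entropy_history(E0, sigma=1.0, c=1.0, dt=1e-3, steps=5000):
    """Euler-integrate the isolated power balance with the assumed
    linear flow phi_ij = sigma*(E_j - E_i) and record S(E(t))."""
    E = np.array(E0, dtype=float)
    hist = [entropy(E, c)]
    for _ in range(steps):
        E = E + dt * sigma * (E.sum() - E.size * E)
        hist.append(entropy(E, c))
    return np.array(hist)

S_hist = entropy_history([3.0, 0.2, 1.0])
print(S_hist[0] <= S_hist[-1])            # entropy has increased
print(np.all(np.diff(S_hist) >= -1e-12))  # and never decreased
```

The increments stay nonnegative because $(T_j - T_i)$ and $(1/T_i - 1/T_j)$ always share the same sign, which is exactly the structure exploited in Equation (23).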
Next, we define the Gibbs free energy, the Helmholtz free energy, and the enthalpy functions for the interconnected dynamical system $\mathcal{G}$. For this exposition, we assume that the entropy of $\mathcal{G}$ is a sum of individual entropies of subsystems of $\mathcal{G}$, that is, $\mathcal{S}(E, V) = \sum_{i=1}^{q} \mathcal{S}_i(E_i, V_i)$, $(E, V) \in \overline{\mathbb{R}}^q_+ \times \mathbb{R}^q_+$. In this case, the Gibbs free energy of $\mathcal{G}$ is defined by
$$G(E, V) \triangleq \mathbf{e}^{\mathrm{T}} E - \sum_{i=1}^{q} \left( \frac{\partial \mathcal{S}(E, V)}{\partial E_i} \right)^{-1} \mathcal{S}_i(E_i, V_i) + \sum_{i=1}^{q} \left( \frac{\partial \mathcal{S}(E, V)}{\partial E_i} \right)^{-1} \frac{\partial \mathcal{S}(E, V)}{\partial V_i} V_i, \qquad (E, V) \in \overline{\mathbb{R}}^q_+ \times \mathbb{R}^q_+ \qquad (26)$$
the Helmholtz free energy of G is defined by
$$F(E, V) \triangleq \mathbf{e}^{\mathrm{T}} E - \sum_{i=1}^{q} \left( \frac{\partial \mathcal{S}(E, V)}{\partial E_i} \right)^{-1} \mathcal{S}_i(E_i, V_i), \qquad (E, V) \in \overline{\mathbb{R}}^q_+ \times \mathbb{R}^q_+ \qquad (27)$$
and the enthalpy of G is defined by
$$H(E, V) \triangleq \mathbf{e}^{\mathrm{T}} E + \sum_{i=1}^{q} \left( \frac{\partial \mathcal{S}(E, V)}{\partial E_i} \right)^{-1} \frac{\partial \mathcal{S}(E, V)}{\partial V_i} V_i, \qquad (E, V) \in \overline{\mathbb{R}}^q_+ \times \mathbb{R}^q_+ \qquad (28)$$
Note that the above definitions for the Gibbs free energy, Helmholtz free energy, and enthalpy are consistent with the classical thermodynamic definitions given by $G = U + pV - TS$, $F = U - TS$, and $H = U + pV$, respectively. Furthermore, note that if the interconnected system $\mathcal{G}$ is isothermal and isobaric, that is, the temperatures of the subsystems of $\mathcal{G}$ are equal and remain constant with
$$\left( \frac{\partial \mathcal{S}(E, V)}{\partial E_1} \right)^{-1} = \cdots = \left( \frac{\partial \mathcal{S}(E, V)}{\partial E_q} \right)^{-1} = T > 0 \qquad (29)$$
and the pressure $p_i(E, V)$ in each subsystem of $\mathcal{G}$ remains constant, respectively, then any transformation in $\mathcal{G}$ is reversible.
The time derivative of G ( E , V ) along the trajectories of Equations (14) and (15) is given by
$$\dot{G}(E, V) = \mathbf{e}^{\mathrm{T}} \dot{E} - \sum_{i=1}^{q} \left( \frac{\partial \mathcal{S}(E, V)}{\partial E_i} \right)^{-1} \left( \frac{\partial \mathcal{S}(E, V)}{\partial E_i} \dot{E}_i + \frac{\partial \mathcal{S}(E, V)}{\partial V_i} \dot{V}_i \right) + \sum_{i=1}^{q} \left( \frac{\partial \mathcal{S}(E, V)}{\partial E_i} \right)^{-1} \frac{\partial \mathcal{S}(E, V)}{\partial V_i} \dot{V}_i = 0 \qquad (30)$$
which is consistent with classical thermodynamics in the absence of chemical reactions.
For an isothermal interconnected dynamical system G , the time derivative of F ( E , V ) along the trajectories of Equations (14) and (15) is given by
$$\begin{aligned}
\dot{F}(E, V) &= \mathbf{e}^{\mathrm{T}} \dot{E} - \sum_{i=1}^{q} \left( \frac{\partial \mathcal{S}(E, V)}{\partial E_i} \right)^{-1} \left( \frac{\partial \mathcal{S}(E, V)}{\partial E_i} \dot{E}_i + \frac{\partial \mathcal{S}(E, V)}{\partial V_i} \dot{V}_i \right) \\
&= -\sum_{i=1}^{q} \left( \frac{\partial \mathcal{S}(E, V)}{\partial E_i} \right)^{-1} \frac{\partial \mathcal{S}(E, V)}{\partial V_i} \dot{V}_i = -\sum_{i=1}^{q} (d_{wi}(E, V) - S_{wi}(t)) = -L \qquad (31)
\end{aligned}$$
where $L$ is the net amount of work done by the subsystems of $\mathcal{G}$ on the environment. Furthermore, note that if, in addition, the interconnected system $\mathcal{G}$ is isochoric, that is, the volumes of each of the subsystems of $\mathcal{G}$ remain constant, then $\dot{F}(E, V) = 0$. As we see in the next section, in the presence of chemical reactions the interconnected system $\mathcal{G}$ evolves such that the Helmholtz free energy is minimized.
Finally, for the isolated (i.e., $S(t) \equiv 0$ and $d(E, V) \equiv 0$) interconnected dynamical system $\mathcal{G}$, the time derivative of $H(E, V)$ along the trajectories of Equations (14) and (15) is given by
$$\dot{H}(E, V) = \mathbf{e}^{\mathrm{T}} \dot{E} + \sum_{i=1}^{q} \left( \frac{\partial \mathcal{S}(E, V)}{\partial E_i} \right)^{-1} \frac{\partial \mathcal{S}(E, V)}{\partial V_i} \dot{V}_i = \mathbf{e}^{\mathrm{T}} \dot{E} + \sum_{i=1}^{q} (d_{wi}(E, V) - S_{wi}(t)) = \mathbf{e}^{\mathrm{T}} w(E, V) = 0 \qquad (32)$$

5. Chemical Equilibria, Entropy Production, Chemical Potential, and Chemical Thermodynamics

In its most general form, thermodynamics can also involve reacting mixtures and combustion. When a chemical reaction occurs, the bonds within molecules of the reactants are broken, and atoms and electrons rearrange to form products. The thermodynamic analysis of reactive systems can be addressed as an extension of the compartmental thermodynamic model described in Section 3 and Section 4. Specifically, in this case the compartments would qualitatively represent different quantities in the same space, and the intercompartmental flows would represent transformation rates in addition to transfer rates. In particular, the compartments would additionally represent quantities of different chemical substances contained within the compartment, and the compartmental flows would additionally characterize transformation rates of reactants into products. In this case, an additional mass balance is included to address conservation of mass as well as conservation of energy. This additional mass conservation equation involves the law of mass-action, which enforces proportionality between a particular reaction rate and the concentrations of the reactants, and the law of superposition of elementary reactions, which ensures that the resultant rate for a particular species is the sum of the elementary reaction rates for that species.
In this section, we consider the interconnected dynamical system G where each subsystem represents a substance or species that can exchange energy with other substances as well as undergo chemical reactions with other substances forming products. Thus, the reactants and products of chemical reactions represent subsystems of G with the mechanisms of heat exchange between subsystems remaining the same as delineated in Section 3. Here, for simplicity of exposition, we do not consider work done by the subsystem on the environment or work done by the environment on the system. This extension can be easily addressed using the formulation in Section 4.
To develop a dynamical systems framework for thermodynamics with chemical reaction networks, let q be the total number of species (i.e., reactants and products), that is, the number of subsystems in G , and let X j , j = 1 , , q , denote the jth species. Consider a single chemical reaction described by
$$\sum_{j=1}^{q} A_j X_j \overset{k}{\longrightarrow} \sum_{j=1}^{q} B_j X_j \qquad (33)$$
where A j , B j , j = 1 , , q , are the stoichiometric coefficients and k denotes the reaction rate. Note that the values of A j corresponding to the products and the values of B j corresponding to the reactants are zero. For example, for the familiar reaction
$$2\mathrm{H}_2 + \mathrm{O}_2 \overset{k}{\longrightarrow} 2\mathrm{H}_2\mathrm{O} \qquad (34)$$
$X_1$, $X_2$, and $X_3$ denote the species $\mathrm{H}_2$, $\mathrm{O}_2$, and $\mathrm{H}_2\mathrm{O}$, respectively, and $A_1 = 2$, $A_2 = 1$, $A_3 = 0$, $B_1 = 0$, $B_2 = 0$, and $B_3 = 2$.
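This stoichiometric bookkeeping is easy to mechanize. The sketch below encodes the reaction above as single-row matrices $A$ and $B$ and evaluates the mass-action rate $(B - A)^{\mathrm{T}} K n^A$ that appears later in Equation (37); the numerical rate constant is hypothetical:

```python
import numpy as np

# Stoichiometry of the single reaction 2 H2 + O2 -> 2 H2O,
# with species ordered X1 = H2, X2 = O2, X3 = H2O as in the text:
A = np.array([[2.0, 1.0, 0.0]])   # reactant coefficients A_ij
B = np.array([[0.0, 0.0, 2.0]])   # product coefficients  B_ij
k = np.array([0.5])               # hypothetical reaction rate

def mass_action_rate(n):
    """Law of mass-action: n^A collects the products n_j^{A_ij}
    per reaction, and n_dot = (B - A)^T K n^A (Equation (37))."""
    nA = np.prod(n ** A, axis=1)  # here: n1^2 * n2
    return (B - A).T @ (k * nA)

n = np.array([2.0, 1.0, 0.0])     # moles of H2, O2, H2O
print(mass_action_rate(n))        # net molar rates: [-4., -2., 4.]
```

The signs reflect the stoichiometry: hydrogen and oxygen are consumed in a 2:1 molar ratio while water is produced at twice the elementary reaction rate.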
In general, for a reaction network consisting of $r \geq 1$ reactions, the $i$th reaction is written as
$$\sum_{j=1}^{q} A_{ij} X_j \overset{k_i}{\longrightarrow} \sum_{j=1}^{q} B_{ij} X_j, \qquad i = 1, \ldots, r \qquad (35)$$
where, for i = 1 , , r , k i > 0 is the reaction rate of the ith reaction, j = 1 q A i j X j is the reactant of the ith reaction, and j = 1 q B i j X j is the product of the ith reaction. Each stoichiometric coefficient A i j and B i j is a nonnegative integer. Note that each reaction in the reaction network given by Equation (35) is represented as being irreversible. Irreversibility here refers to the fact that part of the chemical reaction involves generation of products from the original reactants. Reversible chemical reactions that involve generation of products from the reactants and vice versa can be modeled as two irreversible reactions, one involving generation of products from the reactants and the other involving generation of the original reactants from the products. Hence, reversible reactions can be modeled by including the reverse reaction as a separate reaction. The reaction network given by Equation (35) can be written compactly in matrix-vector form as
$$A X \overset{k}{\longrightarrow} B X \qquad (36)$$
where $X = [X_1, \ldots, X_q]^{\mathrm{T}}$ is a column vector of species, $k = [k_1, \ldots, k_r]^{\mathrm{T}} \in \mathbb{R}^r_+$ is a positive vector of reaction rates, and $A \in \mathbb{R}^{r \times q}$ and $B \in \mathbb{R}^{r \times q}$ are nonnegative matrices such that $A_{(i,j)} = A_{ij}$ and $B_{(i,j)} = B_{ij}$, $i = 1, \ldots, r$, $j = 1, \ldots, q$.
Let $n_j : [0, \infty) \to \overline{\mathbb{R}}_+$, $j = 1, \ldots, q$, denote the mole number of the $j$th species and define $n \triangleq [n_1, \ldots, n_q]^{\mathrm{T}}$. Invoking the law of mass-action [15], which states that, for an elementary reaction, that is, a reaction in which all of the stoichiometric coefficients of the reactants are one, the rate of reaction is proportional to the product of the concentrations of the reactants, the species quantities change according to the dynamics [11,16]
$$\dot{n}(t) = (B - A)^{\mathrm{T}} K n^A(t), \qquad n(0) = n_0, \qquad t \geq t_0 \qquad (37)$$
where $K \triangleq \operatorname{diag}[k_1, \ldots, k_r] \in \mathbb{P}^r$ and
$$n^A \triangleq \begin{bmatrix} \prod_{j=1}^{q} n_j^{A_{1j}} \\ \vdots \\ \prod_{j=1}^{q} n_j^{A_{rj}} \end{bmatrix} = \begin{bmatrix} n_1^{A_{11}} \cdots n_q^{A_{1q}} \\ \vdots \\ n_1^{A_{r1}} \cdots n_q^{A_{rq}} \end{bmatrix} \in \overline{\mathbb{R}}^r_+ \qquad (38)$$
For details regarding the law of mass-action and Equation (37), see [11,15,16,17]. Furthermore, let $M_j > 0$, $j = 1, \ldots, q$, denote the molar mass (i.e., the mass of one mole of a substance) of the $j$th species, let $m_j : [0, \infty) \to \overline{\mathbb{R}}_+$, $j = 1, \ldots, q$, denote the mass of the $j$th species so that $m_j(t) = M_j n_j(t)$, $t \geq t_0$, $j = 1, \ldots, q$, and let $m \triangleq [m_1, \ldots, m_q]^{\mathrm{T}}$. Then, using the transformation $m(t) = M n(t)$, where $M \triangleq \operatorname{diag}[M_1, \ldots, M_q] \in \mathbb{P}^q$, Equation (37) can be rewritten as the mass balance
$$\dot{m}(t) = M (B - A)^{\mathrm{T}} \tilde{K} m^A(t), \qquad m(0) = m_0, \qquad t \geq t_0 \qquad (39)$$
where $\tilde{K} \triangleq \operatorname{diag}\left[ \frac{k_1}{\prod_{j=1}^{q} M_j^{A_{1j}}}, \ \ldots, \ \frac{k_r}{\prod_{j=1}^{q} M_j^{A_{rj}}} \right] \in \mathbb{P}^r$.
In the absence of nuclear reactions, the total mass of the species during each reaction in Equation (36) is conserved. Specifically, consider the ith reaction in Equation (36) given by Equation (35) where the mass of the reactants is j = 1 q A i j M j and the mass of the products is j = 1 q B i j M j . Hence, conservation of mass in the ith reaction is characterized as
$$\sum_{j=1}^{q} (B_{ij} - A_{ij}) M_j = 0, \qquad i = 1, \ldots, r \qquad (40)$$
or, in general for Equation (36), as
$$\mathbf{e}^{\mathrm{T}} M (B - A)^{\mathrm{T}} = 0 \qquad (41)$$
Note that it follows from Equations (39) and (41) that $\mathbf{e}^{\mathrm{T}} \dot{m}(t) \equiv 0$.
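As a numerical sanity check of Equations (39)–(41), the sketch below instantiates the hydrogen-oxygen reaction with the familiar molar masses ($M_{\mathrm{H}_2} = 2$, $M_{\mathrm{O}_2} = 32$, $M_{\mathrm{H}_2\mathrm{O}} = 18$ g/mol) and a hypothetical rate constant, and verifies both the identity $\mathbf{e}^{\mathrm{T}} M (B - A)^{\mathrm{T}} = 0$ and the resulting conservation $\mathbf{e}^{\mathrm{T}} \dot{m}(t) \equiv 0$:

```python
import numpy as np

# Reaction 2 H2 + O2 -> 2 H2O; species order (H2, O2, H2O)
A = np.array([[2.0, 1.0, 0.0]])       # reactant stoichiometry
B = np.array([[0.0, 0.0, 2.0]])       # product stoichiometry
M = np.diag([2.0, 32.0, 18.0])        # molar masses (g/mol)
k = np.array([0.5])                   # hypothetical rate constant

# Equation (41): conservation of mass in the reaction
print(np.ones(3) @ M @ (B - A).T)     # -> [0.]

def m_dot(m):
    """Mass balance (39): m_dot = M (B - A)^T K~ m^A."""
    Ktil = k / np.prod(np.diag(M) ** A, axis=1)  # K~ from Equation (39)
    mA = np.prod(m ** A, axis=1)
    return M @ (B - A).T @ (Ktil * mA)

m = np.array([4.0, 32.0, 0.0])        # grams of each species
print(m_dot(m).sum())                 # -> 0.0, i.e., e^T m_dot = 0
```

The individual masses change (hydrogen and oxygen are consumed, water is produced), but their sum is invariant, which is exactly what Equation (41) encodes.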
Equation (39) characterizes the change in masses of substances in the interconnected dynamical system G due to chemical reactions. In addition to the change of mass due to chemical reactions, each substance can exchange energy with other substances according to the energy flow mechanism described in Section 3; that is, energy flows from substances at a higher temperature to substances at a lower temperature. Furthermore, in the presence of chemical reactions, the exchange of matter affects the change of energy of each substance through the quantity known as the chemical potential.
The notion of the chemical potential was introduced by Gibbs in 1875–1878 [8,9] and goes far beyond the scope of chemistry, affecting virtually every process in nature [18,19,20]. The chemical potential has a strong connection with the second law of thermodynamics in that every process in nature evolves from a state of higher chemical potential towards a state of lower chemical potential. It was postulated by Gibbs [8,9] that the change in energy of a homogeneous substance is proportional to the change in mass of this substance with the coefficient of proportionality given by the chemical potential of the substance.
To elucidate this, assume the jth substance corresponds to the jth compartment and consider the rate of energy change of the jth substance of G in the presence of matter exchange. In this case, it follows from Equation (5) and Gibbs’ postulate that the rate of energy change of the jth substance is given by
$$\dot{E}_j(t) = \sum_{k=1,\, k \neq j}^{q} \phi_{jk}(E(t)) - \sigma_{jj}(E(t)) + S_j(t) + \mu_j(E(t), m(t)) \dot{m}_j(t), \qquad E_j(t_0) = E_{j0}, \qquad t \geq t_0 \qquad (42)$$
where $\mu_j : \overline{\mathbb{R}}^q_+ \times \overline{\mathbb{R}}^q_+ \to \mathbb{R}$, $j = 1, \ldots, q$, is the chemical potential of the $j$th substance. It follows from Equation (42) that $\mu_j(\cdot, \cdot)$ is the chemical potential of a unit mass of the $j$th substance. We assume that if $E_j = 0$, then $\mu_j(E, m) = 0$, $j = 1, \ldots, q$, which implies that if the energy of the $j$th substance is zero, then its chemical potential is also zero.
Next, using Equations (39) and (42), the energy and mass balances for the interconnected dynamical system G can be written as
$$\dot{E}(t) = w(E(t)) + P(E(t), m(t)) M (B - A)^{\mathrm{T}} \tilde{K} m^A(t) - d(E(t)) + S(t), \qquad E(t_0) = E_0, \qquad t \geq t_0, \qquad (43)$$
$$\dot{m}(t) = M (B - A)^{\mathrm{T}} \tilde{K} m^A(t), \qquad m(0) = m_0 \qquad (44)$$
where $P(E, m) \triangleq \operatorname{diag}[\mu_1(E, m), \ldots, \mu_q(E, m)] \in \mathbb{R}^{q \times q}$ and where $w(\cdot)$, $d(\cdot)$, and $S(\cdot)$ are defined as in Section 3. It follows from Proposition 1 of [16] that the dynamics of Equation (44) are essentially nonnegative and, since $\mu_j(E, m) = 0$ if $E_j = 0$, $j = 1, \ldots, q$, it also follows that, for the isolated dynamical system $\mathcal{G}$ (i.e., $S(t) \equiv 0$ and $d(E) \equiv 0$), the dynamics of Equations (43) and (44) are essentially nonnegative.
Note that, for the $i$th reaction in the reaction network given by Equation (36), the chemical potentials of the reactants and the products are $\sum_{j=1}^{q} A_{ij} M_j \mu_j(E, m)$ and $\sum_{j=1}^{q} B_{ij} M_j \mu_j(E, m)$, respectively. Thus,
$$\sum_{j=1}^{q} B_{ij} M_j \mu_j(E, m) - \sum_{j=1}^{q} A_{ij} M_j \mu_j(E, m) \leq 0, \qquad (E, m) \in \overline{\mathbb{R}}^q_+ \times \overline{\mathbb{R}}^q_+ \qquad (45)$$
is a restatement of the principle that a chemical reaction evolves from a state of a greater chemical potential to that of a lower chemical potential, which is consistent with the second law of thermodynamics. The difference between the chemical potential of the reactants and the chemical potential of the products is called affinity [21,22] and is given by
$$\nu_i(E, m) = \sum_{j=1}^{q} A_{ij} M_j \mu_j(E, m) - \sum_{j=1}^{q} B_{ij} M_j \mu_j(E, m) \geq 0, \qquad i = 1, \ldots, r \qquad (46)$$
Affinity is a driving force for chemical reactions and is equal to zero at the state of chemical equilibrium. A nonzero affinity implies that the system is not in equilibrium and that chemical reactions will continue to occur until the system reaches an equilibrium characterized by zero affinity. The next assumption provides a general form for the inequalities (45) and (46).
Assumption 5.1 For the chemical reaction network (36) with the mass balance Equation (44), assume that $\mu(E, m) \gg 0$ for all $E \neq 0$ and
$$(B - A) M \mu(E, m) \leq\leq 0, \qquad (E, m) \in \overline{\mathbb{R}}^q_+ \times \overline{\mathbb{R}}^q_+ \qquad (47)$$
or, equivalently,
$$\nu(E, m) = (A - B) M \mu(E, m) \geq\geq 0, \qquad (E, m) \in \overline{\mathbb{R}}^q_+ \times \overline{\mathbb{R}}^q_+ \qquad (48)$$
where $\mu(E, m) \triangleq [\mu_1(E, m), \ldots, \mu_q(E, m)]^{\mathrm{T}}$ is the vector of chemical potentials of the substances of $\mathcal{G}$ and $\nu(E, m) \triangleq [\nu_1(E, m), \ldots, \nu_r(E, m)]^{\mathrm{T}}$ is the affinity vector for the reaction network given by Equation (36).
Note that equality in Equation (47) or, equivalently, in Equation (48) characterizes the state of chemical equilibrium when the chemical potentials of the products and reactants are equal or, equivalently, when the affinity of each reaction is equal to zero. In this case, no reaction occurs and $\dot{m}(t) \equiv 0$, $t \geq t_0$.
Next, we characterize the entropy function for the interconnected dynamical system $\mathcal{G}$ with the energy and mass balances given by Equations (43) and (44). The definition of entropy for $\mathcal{G}$ in the presence of chemical reactions remains the same as in Definition 3.1 with $\mathcal{S}(E)$ replaced by $\mathcal{S}(E, m)$ and with all other conditions in the definition holding for every $m \gg 0$. Consider the $j$th subsystem of $\mathcal{G}$ and assume that $E_k$ and $m_k$, $k \neq j$, $k = 1, \ldots, q$, are constant. In this case, note that
$$\frac{\mathrm{d}\mathcal{S}}{\mathrm{d}t} = \frac{\partial \mathcal{S}}{\partial E_j} \frac{\mathrm{d}E_j}{\mathrm{d}t} + \frac{\partial \mathcal{S}}{\partial m_j} \frac{\mathrm{d}m_j}{\mathrm{d}t} \qquad (49)$$
and recall that
$$\frac{\partial \mathcal{S}}{\partial E} P(E, m) + \frac{\partial \mathcal{S}}{\partial m} = 0 \qquad (50)$$
Next, it follows from Equation (50) that the time derivative of the entropy function S ( E , m ) along the trajectories of Equations (43) and (44) is given by
$$\begin{aligned}
\dot{\mathcal{S}}(E, m) &= \frac{\partial \mathcal{S}(E, m)}{\partial E} \dot{E} + \frac{\partial \mathcal{S}(E, m)}{\partial m} \dot{m} \\
&= \frac{\partial \mathcal{S}(E, m)}{\partial E} w(E) + \left( \frac{\partial \mathcal{S}(E, m)}{\partial E} P(E, m) + \frac{\partial \mathcal{S}(E, m)}{\partial m} \right) M (B - A)^{\mathrm{T}} \tilde{K} m^A + \frac{\partial \mathcal{S}(E, m)}{\partial E} S(t) - \frac{\partial \mathcal{S}(E, m)}{\partial E} d(E) \\
&= \frac{\partial \mathcal{S}(E, m)}{\partial E} w(E) + \frac{\partial \mathcal{S}(E, m)}{\partial E} S(t) - \frac{\partial \mathcal{S}(E, m)}{\partial E} d(E) \\
&= \sum_{i=1}^{q} \sum_{j=i+1}^{q} \left( \frac{\partial \mathcal{S}(E, m)}{\partial E_i} - \frac{\partial \mathcal{S}(E, m)}{\partial E_j} \right) \phi_{ij}(E) + \frac{\partial \mathcal{S}(E, m)}{\partial E} S(t) - \frac{\partial \mathcal{S}(E, m)}{\partial E} d(E), \qquad (E, m) \in \overline{\mathbb{R}}^q_+ \times \overline{\mathbb{R}}^q_+ \qquad (51)
\end{aligned}$$
For the isolated system $\mathcal{G}$ (i.e., $S(t) \equiv 0$ and $d(E) \equiv 0$), the entropy function of $\mathcal{G}$ is a nondecreasing function of time and, using identical arguments as in the proof of Theorem 3.1, it can be shown that $(E(t), m(t)) \to \mathcal{R} \triangleq \left\{ (E, m) \in \overline{\mathbb{R}}^q_+ \times \overline{\mathbb{R}}^q_+ : \frac{\partial \mathcal{S}(E, m)}{\partial E_1} = \cdots = \frac{\partial \mathcal{S}(E, m)}{\partial E_q} \right\}$ as $t \to \infty$ for all $(E_0, m_0) \in \overline{\mathbb{R}}^q_+ \times \overline{\mathbb{R}}^q_+$.
The entropy production in the interconnected system G due to chemical reactions is given by
$$\mathrm{d}\mathcal{S}_i(E, m) = \frac{\partial \mathcal{S}(E, m)}{\partial m} \mathrm{d}m = -\frac{\partial \mathcal{S}(E, m)}{\partial E} P(E, m) M (B - A)^{\mathrm{T}} \tilde{K} m^A \, \mathrm{d}t, \qquad (E, m) \in \overline{\mathbb{R}}^q_+ \times \overline{\mathbb{R}}^q_+ \qquad (52)$$
If the interconnected dynamical system G is isothermal, that is, all subsystems of G are at the same temperature
$$\left( \frac{\partial \mathcal{S}(E, m)}{\partial E_1} \right)^{-1} = \cdots = \left( \frac{\partial \mathcal{S}(E, m)}{\partial E_q} \right)^{-1} = T \qquad (53)$$
where T > 0 is the system temperature, then it follows from Assumption 5.1 that
$$\mathrm{d}\mathcal{S}_i(E, m) = -\frac{1}{T} \mathbf{e}^{\mathrm{T}} P(E, m) M (B - A)^{\mathrm{T}} \tilde{K} m^A \, \mathrm{d}t = -\frac{1}{T} \mu^{\mathrm{T}}(E, m) M (B - A)^{\mathrm{T}} \tilde{K} m^A \, \mathrm{d}t = \frac{1}{T} \nu^{\mathrm{T}}(E, m) \tilde{K} m^A \, \mathrm{d}t \geq 0, \qquad (E, m) \in \overline{\mathbb{R}}^q_+ \times \overline{\mathbb{R}}^q_+ \qquad (54)$$
Note that since the affinity of a reaction is equal to zero at the state of chemical equilibrium, it follows that equality in Equation (54) holds if and only if $\nu(E, m) = 0$ for some $E \in \overline{\mathbb{R}}^q_+$ and $m \in \overline{\mathbb{R}}^q_+$.
Theorem 5.1 Consider the isolated (i.e., $S(t) \equiv 0$ and $d(E) \equiv 0$) interconnected dynamical system $\mathcal{G}$ with the power and mass balances given by Equations (43) and (44). Assume that $\operatorname{rank} \mathcal{C} = q - 1$, Assumption 5.1 holds, and there exists an entropy function $\mathcal{S} : \overline{\mathbb{R}}^q_+ \times \overline{\mathbb{R}}^q_+ \to \mathbb{R}$ of $\mathcal{G}$. Then $(E(t), m(t)) \to \mathcal{R}$ as $t \to \infty$, where $(E(t), m(t))$, $t \geq t_0$, is the solution to Equations (43) and (44) with the initial condition $(E_0, m_0) \in \overline{\mathbb{R}}^q_+ \times \overline{\mathbb{R}}^q_+$ and
$$\mathcal{R} = \left\{ (E, m) \in \overline{\mathbb{R}}^q_+ \times \overline{\mathbb{R}}^q_+ : \frac{\partial \mathcal{S}(E, m)}{\partial E_1} = \cdots = \frac{\partial \mathcal{S}(E, m)}{\partial E_q} \ \text{and} \ \nu(E, m) = 0 \right\} \qquad (55)$$
where $\nu(\cdot, \cdot)$ is the affinity vector of $\mathcal{G}$.
Proof. Since the dynamics of the isolated system $\mathcal{G}$ are essentially nonnegative, it follows from Proposition 2.1 that $(E(t), m(t)) \in \overline{\mathbb{R}}^q_+ \times \overline{\mathbb{R}}^q_+$, $t \geq t_0$, for all $(E_0, m_0) \in \overline{\mathbb{R}}^q_+ \times \overline{\mathbb{R}}^q_+$. Consider the scalar function $v(E, m) = \mathbf{e}^{\mathrm{T}} E + \mathbf{e}^{\mathrm{T}} m$, $(E, m) \in \overline{\mathbb{R}}^q_+ \times \overline{\mathbb{R}}^q_+$, and note that $v(0, 0) = 0$ and $v(E, m) > 0$ for $(E, m) \in \overline{\mathbb{R}}^q_+ \times \overline{\mathbb{R}}^q_+$, $(E, m) \neq (0, 0)$. It follows from Equation (41), Assumption 5.1, and the fact that $\mathbf{e}^{\mathrm{T}} w(E) \equiv 0$ that the time derivative of $v(\cdot, \cdot)$ along the trajectories of Equations (43) and (44) satisfies
$$\dot{v}(E, m) = \mathbf{e}^{\mathrm{T}} \dot{E} + \mathbf{e}^{\mathrm{T}} \dot{m} = \mathbf{e}^{\mathrm{T}} P(E, m) M (B - A)^{\mathrm{T}} \tilde{K} m^A = \mu^{\mathrm{T}}(E, m) M (B - A)^{\mathrm{T}} \tilde{K} m^A = -\nu^{\mathrm{T}}(E, m) \tilde{K} m^A \leq 0, \qquad (E, m) \in \overline{\mathbb{R}}^q_+ \times \overline{\mathbb{R}}^q_+ \qquad (56)$$
which implies that the solution $(E(t), m(t))$, $t \geq t_0$, to Equations (43) and (44) is bounded for all initial conditions $(E_0, m_0) \in \overline{\mathbb{R}}^q_+ \times \overline{\mathbb{R}}^q_+$.
Next, consider the function $\tilde{v}(E, m) = \mathbf{e}^{\mathrm{T}} E + \mathbf{e}^{\mathrm{T}} m - \mathcal{S}(E, m)$, $(E, m) \in \overline{\mathbb{R}}^q_+ \times \overline{\mathbb{R}}^q_+$. Then it follows from Equations (51) and (56) that the time derivative of $\tilde{v}(\cdot, \cdot)$ along the trajectories of Equations (43) and (44) satisfies
$$\dot{\tilde{v}}(E, m) = \mathbf{e}^{\mathrm{T}} \dot{E} + \mathbf{e}^{\mathrm{T}} \dot{m} - \dot{\mathcal{S}}(E, m) = -\nu^{\mathrm{T}}(E, m) \tilde{K} m^A - \sum_{i=1}^{q} \sum_{j=i+1}^{q} \left( \frac{\partial \mathcal{S}(E, m)}{\partial E_i} - \frac{\partial \mathcal{S}(E, m)}{\partial E_j} \right) \phi_{ij}(E) \leq 0, \qquad (E, m) \in \overline{\mathbb{R}}^q_+ \times \overline{\mathbb{R}}^q_+ \qquad (57)$$
which implies that $\tilde{v}(\cdot, \cdot)$ is a nonincreasing function of time, and hence, by the Krasovskii–LaSalle theorem [7], $(E(t), m(t)) \to \mathcal{R} \triangleq \{ (E, m) \in \overline{\mathbb{R}}^q_+ \times \overline{\mathbb{R}}^q_+ : \dot{\tilde{v}}(E, m) = 0 \}$ as $t \to \infty$. Now, it follows from Definition 3.1, Assumption 5.1, and the fact that $\operatorname{rank} \mathcal{C} = q - 1$ that
$$\mathcal{R} = \left\{ (E, m) \in \overline{\mathbb{R}}^q_+ \times \overline{\mathbb{R}}^q_+ : \frac{\partial \mathcal{S}(E, m)}{\partial E_1} = \cdots = \frac{\partial \mathcal{S}(E, m)}{\partial E_q} \right\} \cap \left\{ (E, m) \in \overline{\mathbb{R}}^q_+ \times \overline{\mathbb{R}}^q_+ : \nu(E, m) = 0 \right\} \qquad (58)$$
which proves the result. □
Theorem 5.1 implies that the state of the interconnected dynamical system G converges to the state of thermal and chemical equilibrium when the temperatures of all substances of G are equal and the masses of all substances reach a state where all reaction affinities are zero corresponding to a halting of all chemical reactions.
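The convergence to zero affinity can be illustrated with a small mass-action simulation. The sketch below tracks only the mole-number dynamics of Equation (37) and leaves the coupled energy dynamics aside; it models the reversible isomerization $X_1 \rightleftharpoons X_2$ as two irreversible reactions with hypothetical rates $k_1$ and $k_2$, and the state settles where the forward and reverse rates balance, the mass-action analogue of the zero-affinity equilibrium in Theorem 5.1:

```python
import numpy as np

# Reversible isomerization X1 <-> X2 written, as in the text, as two
# irreversible reactions; the rate constants are hypothetical.
A = np.array([[1.0, 0.0],
              [0.0, 1.0]])   # reactants of the two reactions
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # products of the two reactions
k = np.array([2.0, 1.0])     # k1: X1 -> X2, k2: X2 -> X1

def simulate(n0, dt=1e-3, steps=20000):
    """Forward-Euler integration of n_dot = (B - A)^T K n^A."""
    n = np.array(n0, dtype=float)
    for _ in range(steps):
        nA = np.prod(n ** A, axis=1)      # elementary rates need [n1, n2]
        n = n + dt * (B - A).T @ (k * nA)
    return n

n_eq = simulate([3.0, 0.0])
print(n_eq)   # approaches [1., 2.], where k1*n1 = k2*n2 (zero net rate)
```

At the fixed point the two elementary reaction rates cancel, so $\dot{n} = 0$ and no further reaction occurs; under Assumption 5.1 this state corresponds to the zero-affinity set $\mathcal{R}$ of the theorem.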
Next, we assume that the entropy of the interconnected dynamical system $\mathcal{G}$ is a sum of individual entropies of subsystems of $\mathcal{G}$, that is, $\mathcal{S}(E, m) = \sum_{j=1}^{q} \mathcal{S}_j(E_j, m_j)$, $(E, m) \in \overline{\mathbb{R}}^q_+ \times \overline{\mathbb{R}}^q_+$. In this case, the Helmholtz free energy of $\mathcal{G}$ is given by
$$F(E, m) = \mathbf{e}^{\mathrm{T}} E - \sum_{j=1}^{q} \left( \frac{\partial \mathcal{S}(E, m)}{\partial E_j} \right)^{-1} \mathcal{S}_j(E_j, m_j), \qquad (E, m) \in \overline{\mathbb{R}}^q_+ \times \overline{\mathbb{R}}^q_+ \qquad (59)$$
If the interconnected dynamical system $\mathcal{G}$ is isothermal, then the derivative of $F(\cdot, \cdot)$ along the trajectories of Equations (43) and (44) is given by
$$\begin{aligned}
\dot{F}(E, m) &= \mathbf{e}^{\mathrm{T}} \dot{E} - \sum_{j=1}^{q} \left( \frac{\partial \mathcal{S}(E, m)}{\partial E_j} \right)^{-1} \dot{\mathcal{S}}_j(E_j, m_j) \\
&= \mathbf{e}^{\mathrm{T}} \dot{E} - \sum_{j=1}^{q} \left( \frac{\partial \mathcal{S}(E, m)}{\partial E_j} \right)^{-1} \left( \frac{\partial \mathcal{S}_j(E_j, m_j)}{\partial E_j} \dot{E}_j + \frac{\partial \mathcal{S}_j(E_j, m_j)}{\partial m_j} \dot{m}_j \right) \\
&= \mu^{\mathrm{T}}(E, m) M (B - A)^{\mathrm{T}} \tilde{K} m^A = -\nu^{\mathrm{T}}(E, m) \tilde{K} m^A \leq 0, \qquad (E, m) \in \overline{\mathbb{R}}^q_+ \times \overline{\mathbb{R}}^q_+ \qquad (60)
\end{aligned}$$
with equality in Equation (60) holding if and only if $\nu(E, m) = 0$ for some $E \in \overline{\mathbb{R}}^q_+$ and $m \in \overline{\mathbb{R}}^q_+$, which determines the state of chemical equilibrium. Hence, the Helmholtz free energy of $\mathcal{G}$ evolves to a minimum when the pressure and temperature of each subsystem of $\mathcal{G}$ are maintained constant, which is consistent with classical thermodynamics. A similar conclusion can be arrived at for the Gibbs free energy if the work done by and on the system is also accounted for. Thus, the Gibbs and Helmholtz free energies are a measure of the tendency for a reaction to take place in the interconnected system $\mathcal{G}$, and hence, provide a measure of the work done by the interconnected system $\mathcal{G}$.

6. Conclusion and Opportunities for Future Research

In this paper, we developed a system-theoretic perspective for classical thermodynamics and chemical reaction processes. In particular, we developed a nonlinear compartmental model involving heat flow, work energy, and chemical reactions that captures all of the key aspects of thermodynamics, including its fundamental laws. In addition, we showed that the interconnected compartmental model gives rise to globally semistable equilibria involving states of temperature equipartition. Finally, using the notion of the chemical potential, we combined our heat flow compartmental model with a state space mass-action kinetics model to capture energy and mass exchange in interconnected large-scale systems in the presence of chemical reactions. In this case, it was shown that the system states converge to a state of temperature equipartition and zero affinity.
The underlying intention of this paper as well as [4,5,6] has been to present one of the most useful and general physical branches of science in the language of dynamical systems theory. In particular, our goal has been to develop a dynamical system formalism of thermodynamics using a large-scale interconnected systems theory that bridges the gap between classical and statistical thermodynamics. The laws of thermodynamics are among the most firmly established laws of nature, and it is hoped that this work will help to stimulate increased interaction between physicists and dynamical systems and control theorists. Besides the fact that irreversible thermodynamics plays a critical role in the understanding of our physical universe, it forms the underpinning of several fundamental life science and engineering disciplines, including biological systems, physiological systems, neuroscience, chemical reaction systems, ecological systems, demographic systems, transportation systems, network systems, and power systems, to cite but a few examples.
An important area of science where the dynamical system framework of thermodynamics can prove invaluable is in neuroscience. Advances in neuroscience have been closely linked to mathematical modeling beginning with the integrate-and-fire model of Lapicque [23] and proceeding through the modeling of the action potential by Hodgkin and Huxley [24] to the current era of mathematical neuroscience; see [25,26] and the numerous references therein. Neuroscience has always had models to interpret experimental results from a high-level complex systems perspective; however, expressing these models with dynamic equations rather than words fosters precision, completeness, and self-consistency. Nonlinear dynamical system theory, and in particular system thermodynamics, is ideally suited for rigorously describing the behavior of large-scale networks of neurons.
Merging the two universalisms of thermodynamics and dynamical systems theory with neuroscience can provide the theoretical foundation for understanding the network properties of the brain by rigorously addressing large-scale interconnected biological neuronal network models that govern the neuroelectronic behavior of biological excitatory and inhibitory neuronal networks [27]. As in thermodynamics, neuroscience is a theory of large-scale systems wherein graph theory can be used in capturing the connectivity properties of system interconnections, with neurons represented by nodes, synapses represented by edges or arcs, and synaptic efficacy captured by edge weighting giving rise to a weighted adjacency matrix governing the underlying directed graph network topology. However, unlike thermodynamics, wherein energy spontaneously flows from a state of higher temperature to a state of lower temperature, neuron membrane potential variations occur due to ion species exchanges which evolve from regions of higher concentrations to regions of lower concentrations. And this evolution does not occur spontaneously but rather requires the opening and closing of specific gates within specific ion channels.
A particularly interesting application of nonlinear dynamical systems theory to the neurosciences is to study phenomena of the central nervous system that exhibit nearly discontinuous transitions between macroscopic states. A very challenging and clinically important problem exhibiting this phenomenon is the induction of general anesthesia [28,29,30,31,32]. In any specific patient, the transition from consciousness to unconsciousness as the concentration of anesthetic drugs increases is very sharp, resembling a thermodynamic phase transition. In current clinical practice of general anesthesia, potent drugs are administered which profoundly influence levels of consciousness and vital respiratory (ventilation and oxygenation) and cardiovascular (heart rate, blood pressure, and cardiac output) functions. These variation patterns of the physiologic parameters (i.e., ventilation, oxygenation, heart rate, blood pressure, and cardiac output) and their alteration with levels of consciousness can provide scale-invariant fractal temporal structures to characterize the degree of consciousness in sedated patients.
In particular, the degree of consciousness reflects the adaptability of the central nervous system and is proportional to the maximum work output under a fully conscious state divided by the work output of a given anesthetized state. A reduction in maximum work output (and oxygen consumption) or elevation in the anesthetized work output (or oxygen consumption) will thus reduce the degree of consciousness. Hence, the fractal nature (i.e., complexity) of conscious variability is a self-organizing emergent property of the large-scale interconnected biological neuronal network since it enables the central nervous system to maximize entropy production and optimally dissipate energy gradients. In physiologic terms, a fully conscious healthy patient would exhibit rich fractal patterns in space (e.g., fractal vasculature) and time (e.g., cardiopulmonary variability) that optimize the ability for oxygenation and ventilation. Within the context of aging and complexity in acute illnesses, variation of physiologic parameters and their relationship to system complexity, fractal variability, and system thermodynamics have been explored in [33,34,35,36,37,38].
Merging system thermodynamics with neuroscience can provide the theoretical foundation for understanding the mechanisms of action of general anesthesia using the network properties of the brain. Even though simplified mean field models have been extensively used in the mathematical neuroscience literature to describe large neural populations [26], complex large-scale interconnected systems are essential in identifying the mechanisms of action for general anesthesia [27]. Unconsciousness is associated with reduced physiologic parameter variability, which reflects the inability of the central nervous system to adapt and, hence, a decomplexification of physiologic work cycles and a decrease in energy consumption (ischemia, hypoxia), leading to a decrease in entropy production. The degree of consciousness is a function of the numerous couplings among the network properties of the brain, which form a complex, large-scale interconnected system. Complexity here refers to the quality of a system wherein interacting subsystems self-organize to form hierarchical evolving structures exhibiting emergent system properties; hence, a complex dynamical system is a system that is greater than the sum of its subsystems or parts. This complex system—involving numerous nonlinear dynamical subsystem interactions making up the system—has inherent emergent properties that depend on the integrity of the entire dynamical system and not merely on a mean field simplified reduced-order model.
Developing a dynamical system framework for neuroscience [27] and merging it with system thermodynamics [4,5,6] by embedding thermodynamic state notions (i.e., entropy, energy, free energy, chemical potential, etc.) will allow us to directly address the otherwise mathematically complex and computationally prohibitive large-scale dynamical models that have been developed in the literature. In particular, a thermodynamically consistent neuroscience model would emulate the clinically observed self-organizing spatio-temporal fractal structures that optimally dissipate energy and optimize entropy production in thalamocortical circuits of fully conscious patients. This thermodynamically consistent neuroscience framework can provide the necessary tools involving semistability, synaptic drive equipartitioning (i.e., synchronization across time scales), energy dispersal, and entropy production for connecting biophysical findings to psychophysical phenomena for general anesthesia.
In particular, we conjecture that as the model dynamics transition to an anesthetic state, the system will involve a reduction in system complexity—defined as a reduction in the degree of irregularity across time scales—exhibiting semistability and synchronization of neural oscillators (i.e., thermodynamic energy equipartitioning). In other words, unconsciousness will be characterized by system decomplexification. In addition, connections between thermodynamics, neuroscience, and the arrow of time [4,5,6] can be explored by developing an understanding of how the arrow of time is built into the very fabric of our conscious brain. Connections between thermodynamics and neuroscience are not limited to the study of consciousness in general anesthesia; they can also be seen in biochemical systems, ecosystems, gene regulation and cell replication, as well as in numerous medical conditions (e.g., seizures, schizophrenia, and hallucinations), which are obviously of great clinical importance but have lacked rigorous theoretical frameworks. This is a subject of current research.

Acknowledgements

This research was supported in part by the Air Force Office of Scientific Research under Grant FA9550-12-1-0192.

References

  1. Truesdell, C. Rational Thermodynamics; McGraw-Hill: New York, NY, USA, 1969.
  2. Truesdell, C. The Tragicomical History of Thermodynamics 1822–1854; Springer-Verlag: New York, NY, USA, 1980.
  3. Arnold, V. Contact Geometry: The Geometrical Method of Gibbs’ Thermodynamics. In Proceedings of the Gibbs Symposium, New Haven, CT, USA, 15–17 May 1989; Caldi, D., Mostow, G., Eds.; American Mathematical Society: Providence, RI, USA, 1990; pp. 163–179.
  4. Haddad, W.M.; Chellaboina, V.; Nersesov, S.G. Thermodynamics: A Dynamical Systems Approach; Princeton University Press: Princeton, NJ, USA, 2005.
  5. Haddad, W.M.; Chellaboina, V.; Nersesov, S.G. Time-reversal symmetry, Poincaré recurrence, irreversibility, and the entropic arrow of time: From mechanics to system thermodynamics. Nonlinear Anal. Real World Appl. 2008, 9, 250–271.
  6. Haddad, W.M. Temporal asymmetry, entropic irreversibility, and finite-time thermodynamics: From Parmenides–Einstein time-reversal symmetry to the Heraclitan entropic arrow of time. Entropy 2012, 14, 407–455.
  7. Haddad, W.M.; Chellaboina, V. Nonlinear Dynamical Systems and Control: A Lyapunov-Based Approach; Princeton University Press: Princeton, NJ, USA, 2008.
  8. Gibbs, J.W. On the equilibrium of heterogeneous substances. Trans. Conn. Acad. Sci. 1875, III, 108–248.
  9. Gibbs, J.W. On the equilibrium of heterogeneous substances. Trans. Conn. Acad. Sci. 1878, III, 343–524.
  10. Hartman, P. Ordinary Differential Equations; Birkhäuser: Boston, MA, USA, 1982.
  11. Haddad, W.M.; Chellaboina, V.; Hui, Q. Nonnegative and Compartmental Dynamical Systems; Princeton University Press: Princeton, NJ, USA, 2010.
  12. Haddad, W.M.; Chellaboina, V. Stability and dissipativity theory for nonnegative dynamical systems: A unified analysis framework for biological and physiological systems. Nonlinear Anal. Real World Appl. 2005, 6, 35–65.
  13. Diestel, R. Graph Theory; Springer-Verlag: New York, NY, USA, 1997.
  14. Godsil, C.; Royle, G. Algebraic Graph Theory; Springer-Verlag: New York, NY, USA, 2001.
  15. Steinfeld, J.I.; Francisco, J.S.; Hase, W.L. Chemical Kinetics and Dynamics; Prentice-Hall: Upper Saddle River, NJ, USA, 1989.
  16. Chellaboina, V.; Bhat, S.P.; Haddad, W.M.; Bernstein, D.S. Modeling and analysis of mass action kinetics: Nonnegativity, realizability, reducibility, and semistability. IEEE Contr. Syst. Mag. 2009, 29, 60–78.
  17. Erdi, P.; Toth, J. Mathematical Models of Chemical Reactions: Theory and Applications of Deterministic and Stochastic Models; Princeton University Press: Princeton, NJ, USA, 1988.
  18. Baierlein, R. The elusive chemical potential. Am. J. Phys. 2001, 69, 423–434.
  19. Fuchs, H.U. The Dynamics of Heat; Springer-Verlag: New York, NY, USA, 1996.
  20. Job, G.; Herrmann, F. Chemical potential—A quantity in search of recognition. Eur. J. Phys. 2006, 27, 353–371.
  21. DeDonder, T. L’Affinité; Gauthier-Villars: Paris, France, 1927.
  22. DeDonder, T.; Rysselberghe, P.V. Affinity; Stanford University Press: Menlo Park, CA, USA, 1936.
  23. Lapicque, L. Recherches quantitatives sur l’excitation électrique des nerfs traitée comme une polarisation. J. Physiol. Gen. 1907, 9, 620–635.
  24. Hodgkin, A.L.; Huxley, A.F. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 1952, 117, 500–544.
  25. Dayan, P.; Abbott, L.F. Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems; MIT Press: Cambridge, MA, USA, 2005.
  26. Ermentrout, B.; Terman, D.H. Mathematical Foundations of Neuroscience; Springer-Verlag: New York, NY, USA, 2010.
  27. Hui, Q.; Haddad, W.M.; Bailey, J.M. Multistability, bifurcations, and biological neural networks: A synaptic drive firing model for cerebral cortex transition in the induction of general anesthesia. Nonlinear Anal. Hybrid Syst. 2011, 5, 554–572.
  28. Mashour, G.A. Consciousness unbound: Toward a paradigm of general anesthesia. Anesthesiology 2004, 100, 428–433.
  29. Zecharia, A.Y.; Franks, N.P. General anesthesia and ascending arousal pathways. Anesthesiology 2009, 111, 695–696.
  30. Sonner, J.M.; Antognini, J.F.; Dutton, R.C.; Flood, P.; Gray, A.T.; Harris, R.A.; Homanics, G.E.; Kendig, J.; Orser, B.; Raines, D.E.; et al. Inhaled anesthetics and immobility: Mechanisms, mysteries, and minimum alveolar anesthetic concentration. Anesth. Analg. 2003, 97, 718–740.
  31. Campagna, J.A.; Miller, K.W.; Forman, S.A. Mechanisms of actions of inhaled anesthetics. N. Engl. J. Med. 2003, 348, 2110–2124.
  32. John, E.R.; Prichep, L.S. The anesthetic cascade: A theory of how anesthesia suppresses consciousness. Anesthesiology 2005, 102, 447–471.
  33. Macklem, P.T.; Seely, A.J.E. Towards a definition of life. Perspect. Biol. Med. 2010, 53, 330–340.
  34. Seely, A.J.E.; Macklem, P. Fractal variability: An emergent property of complex dissipative systems. Chaos 2012, 22, 1–7.
  35. Bircher, J. Towards a dynamic definition of health and disease. Med. Health Care Philos. 2005, 8, 335–341.
  36. Goldberger, A.L.; Rigney, D.R.; West, B.J. Science in pictures: Chaos and fractals in human physiology. Sci. Am. 1990, 262, 42–49.
  37. Goldberger, A.L.; Peng, C.K.; Lipsitz, L.A. What is physiologic complexity and how does it change with aging and disease? Neurobiol. Aging 2002, 23, 23–27.
  38. Godin, P.J.; Buchman, T.G. Uncoupling of biological oscillators: A complementary hypothesis concerning the pathogenesis of multiple organ dysfunction syndrome. Crit. Care Med. 1996, 24, 1107–1116.