Entropy
  • Article
  • Open Access

5 November 2025

Thermodynamic Theory of Macrosystems: Entropy Production as a Metric

Institute of System and Software Engineering and Information Technology, National Research University of Electronic Technology, 124498 Moscow, Russia
This article belongs to the Special Issue The First Half Century of Finite-Time Thermodynamics

Abstract

The article considers the description of a macrosystem in terms that do not depend on the nature of the macrosystem. The results obtained can be used to describe macrosystem models of thermodynamic processes and to create interdisciplinary models that take into account interactions of various natures. The macrosystem model is based on its representation as a self-similar oriented weighted graph in which each node satisfies an equation of state connecting the extensive variables. One of the extensive variables is entropy, whose maximum corresponds to the state of equilibrium. For processes in which fluxes depend linearly on driving forces, Onsager's relations are shown to hold, which makes it possible to prove that, in the space of stationary processes, entropy production in a closed macrosystem is a metric similar to the Mahalanobis metric, determining the distance between processes. Zero in such a space corresponds to reversible processes; thus, entropy production measures the degree of irreversibility as the distance from the process under study to a reversible one.

1. Introduction

Historically, thermodynamics was the first and principal research focus in the study of complex systems []. Thermodynamic analogies are used in the description of other systems, such as economic, social, informational, and algorithmic systems, when control is possible only in an averaged sense. We say averaged because it is impossible to observe or control the state and behaviour of the individual elementary particles whose large aggregate forms a complex system.
In a thermodynamic system, the elementary particles are molecules, even though only statistical mechanics needs discrete particles as a concept. The reason is that the phenomenological (i.e., experimentally observable) patterns related to the state of a thermodynamic system arise through dynamic or statistical averaging of the interactions between individual molecules. However, such averaging is only possible under a priori assumptions about the nature of these interactions, as they cannot be verified due to the incredibly large number of molecules in the system and their extremely small size, which makes them neither observable nor controllable [].
The validity of such a framework is supported by the existence of phenomenological laws. However, in this case, the assumptions about the properties of individual molecular interactions must be derived from the observations of thermodynamic systems as they cannot serve as proof of the truth of the phenomenological laws per se.
The approach suggested in this work is the reverse one, based on a structural induction whose base case is any scale level at which the properties of the system can be experimentally verified. This presupposes the fulfilment of the self-similarity condition, which is generally defined as follows: a set $X$ is self-similar if there exists a finite set $K$ indexing a collection of injective, non-surjective mappings $\{\phi_\kappa\}_{\kappa \in K}$ such that $X = \bigcup_{\kappa \in K} \phi_\kappa(X)$ []. The self-similarity condition is actively used in thermodynamics [], even though it is strictly fulfilled only when the set of subsystems whose union forms the given thermodynamic system is a continuum. This assumption is too strong for real thermodynamic systems but can be considered a good approximation.
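The definition above can be checked on a one-line toy case; the maps and the interval below are our own illustrative choice, not from the article:

```python
# Toy illustration of self-similarity: X = [0, 1] is the union of the
# images of two injective, non-surjective maps phi_1 and phi_2.

def phi_1(x):            # maps [0, 1] onto [0, 0.5]
    return 0.5 * x

def phi_2(x):            # maps [0, 1] onto [0.5, 1]
    return 0.5 * x + 0.5

def covered(x, tol=1e-12):
    """True if x lies in phi_1([0, 1]) or phi_2([0, 1])."""
    in_left = 0.0 <= x <= 0.5 and abs(phi_1(2 * x) - x) < tol
    in_right = 0.5 <= x <= 1.0 and abs(phi_2(2 * x - 1) - x) < tol
    return in_left or in_right

# Every sampled point of X is in the union of the two images.
assert all(covered(k / 100) for k in range(101))
```

Each map is injective and covers only half of $X$, yet their images together reproduce $X$, which is exactly the covering condition in the definition.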
The approach considered below is not limited to describing thermodynamic systems alone, but is intentionally extended to a more general representation. This allows for the consideration of a class of macrosystem models that includes thermodynamic systems.

2. Definition of Macrosystems

When analyzing patterns that arise in the exchange processes in systems of various natures (e.g., physical, chemical, economic, informational, and social), it is often advisable to use a macrosystem approach.
Macrosystems are systems in which the following conditions are satisfied:
  • Macrosystems consist of a large number of elementary objects, and this number is so large that any macrosystem can be considered as a continuum and can be divided into any finite number of subsystems, including a number sufficient to determine statistical characteristics of any given accuracy; each of the subsystems can itself be considered a macrosystem. With any macrosystem $Y$ we associate a nonempty collection $S$ of subsets of $Y$ closed under complement, countable unions, and countable intersections; $S$ is a σ-algebra, and the ordered pair $(Y, S)$ is a measurable space. The macrosystem $Y$ can be considered as a union of a finite number of pairwise disjoint lower-level macrosystems $Y_i$: $Y = \bigcup_i Y_i$. This condition is called the self-similarity condition [].
  • The macrosystem state is determined by the vector $Q$ of state variables. It is assumed that the state variables satisfy the conservation law; therefore, we can consider $Q$ as a vector of extensive variables. Because of non-negativity and countable additivity, $Q$ is a vector measure and $(Y, S, Q)$ is a measure space. In thermodynamic macrosystems, the extensive variables are internal energy, mole number, and volume [], which the macrosystem can exchange with its external environment []; we will also consider the external environment as a set of macrosystems (and a higher-level macrosystem is the union of the macrosystem and its external environment). In the process of interaction between the subsystems $X$ and $Y$, the values of the vectors $Q_X$ and $Q_Y$ change over time. In this way, exchange processes and their corresponding fluxes are formed, which are understood as the rates of change of the extensive quantities. We will denote the vector of fluxes between subsystems $X$ and $Y$ as $q_{XY}$.
  • It is impossible to control each elementary object due to their extremely large number []. The macrosystem control can be organized only by impact on the parameters averaged over a set of elementary objects, namely
    Changes in the parameters of the macrosystem’s external environment, the interaction with which determines the change in the values of extensive variables in the macrosystem;
    Change in the values of extensive variables (for example, their extraction) in the macrosystem due to external interventions;
    Changes in the characteristics of the exchange infrastructure to accelerate or, conversely, slow down the exchange processes.

3. Representation of the Macrosystem as a Graph

The macrosystem can be represented as a self-similar oriented weighted graph, the nodes of which correspond to the subsystems and the macrosystem's external environment, and the edges correspond to the fluxes between the subsystems and between the subsystems and the macrosystem's external environment. Each graph node (each subsystem) is characterized by a vector $Q = (Q_0, \dots, Q_N)$, and fluxes between subsystems can be functionally unrelated to each other.
Self-similar—each graph node can be represented as a graph that describes the interaction of the subsystems forming this node with each other and with their environment. Let us consider two interacting (with flux vector $q_{XY}$ of extensive variables) disjoint macrosystems $X$ and $Y$. Due to the self-similarity condition, we can introduce sets $X = \bigcup_i X_i$, $Y = \bigcup_j Y_j$ of pairwise disjoint subsystems corresponding to these macrosystems. Each subsystem is characterized by its own values of the extensive variables: $Q_i$ for $X_i$ and $Q_j$ for $Y_j$. A flux $q_{ij}$ is formed between subsystems from the different sets $X$ and $Y$. In this notation, the following equations are valid:
$$\sum_{i \in X} Q_i = Q_X; \qquad \sum_{j \in Y} Q_j = Q_Y; \qquad \sum_{i \in X} \sum_{j \in Y} q_{ij} = q_{XY}. \qquad (1)$$
Oriented—the direction of each edge determines the positive sign of each flux; thus, if the edge is directed from node $X$ to node $Y$, then
$$q_{XY} = \dot{Q}_Y = -\dot{Q}_X. \qquad (2)$$
Weighted—the weight of an edge shows the flux intensity: the value of the $\nu$-th flux $q_{XY,\nu} > 0$ if the real flux of the extensive variable is directed along the edge, and $q_{XY,\nu} < 0$ if it is opposite to the edge's direction. In addition, a matrix $A$ of infrastructure coefficients is determined for all fluxes between nodes $X$ and $Y$ (that is, for all fluxes $q_{XY}$). This matrix is the metadata for the edges connecting nodes $X$ and $Y$, and determines both the ease of the exchange process and the complementary or substitutive features of the extensive variables [].
Figure 1 shows an example of the self-similar oriented weighted graph of a part of a macrosystem.
Figure 1. An example of the self-similar oriented weighted graph corresponding to a macrosystem. The nodes (the circles) correspond to the subsystems, and the edges (the arrows) correspond to the fluxes between the subsystems.
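A minimal sketch of the node and flux aggregation used in this section; all node names, $Q$ values, and flux numbers below are our own illustrative assumptions:

```python
# Macrosystem as an oriented weighted graph (toy data). Each node holds a
# vector Q of extensive variables; each directed edge (i, j) holds a flux
# vector q_ij oriented from a subsystem of X to a subsystem of Y.

nodes = {                                  # Q = (Q_1, Q_2) per subsystem
    "X1": [3.0, 1.0], "X2": [2.0, 2.0],    # subsystems of macrosystem X
    "Y1": [1.0, 4.0], "Y2": [4.0, 1.0],    # subsystems of macrosystem Y
}
edges = {                                  # subsystem-level fluxes q_ij
    ("X1", "Y1"): [0.2, -0.1],
    ("X1", "Y2"): [0.1, 0.3],
    ("X2", "Y1"): [-0.1, 0.2],
}

def aggregate_Q(members):
    """Q of a higher-level node is the sum over its subsystems (self-similarity)."""
    return [sum(nodes[m][k] for m in members) for k in range(2)]

def aggregate_flux():
    """q_XY is the sum of all subsystem-level fluxes q_ij."""
    return [sum(q[k] for q in edges.values()) for k in range(2)]

Q_X = aggregate_Q(["X1", "X2"])
Q_Y = aggregate_Q(["Y1", "Y2"])
q_XY = aggregate_flux()
```

The higher-level node $X$ thus inherits its state vector and boundary flux entirely from its subsystems, which is the graph form of the self-similarity condition.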

4. Equilibrium State of the Macrosystem

Let us assume that two macrosystems $X$ and $Y$ exchange the vector of extensive variables $Q$. At each moment of time $t$, the reserves of the extensive variables describing the state of the macrosystems are equal to $Q_X(t)$, $Q_Y(t)$. We represent each macrosystem as a union of a finite set of subsystems, $X$ and $Y$, respectively. An equilibrium state shall be a state $(Q_X, Q_Y)$ in which, at each moment of time $t$ and for each $\nu = 0, \dots, N$, the sum of fluxes vanishes:
$$q_{XY,\nu}(t) = \sum_{i \in X} \sum_{j \in Y} q_{ij,\nu}(t) = 0. \qquad (3)$$
Thus, the equilibrium in the macrosystem is considered dynamic. The equilibrium condition can be defined as follows: at any level of the sets of subsystems $X = \bigcup_i X_i$, $Y = \bigcup_j Y_j$, the vector of fluxes $\{q_{ij}\}_{i \in X, j \in Y}$ is described by a time-independent distribution $f(\tilde{q})$ with expectation $E[\tilde{q}] = 0$.
Note that the fluxes $q_{ij}(t)$, $i \in X$, $j \in Y$, and the random variable $\tilde{q}$, which describes the distribution of the subsystem fluxes, are vectors. At this level of subsystems, we can assume that a large number of factors affects the distribution of fluxes, which means that the distribution $f(\tilde{q})$ can be described by a multivariate normal distribution. The parameters of this distribution are the expectation $E[\tilde{q}]$, which determines the fluxes at the level of the subsystems $X$ and $Y$ in accordance with (3), and the covariance matrix $\mathrm{Cov}[\tilde{q}]$.
Assume that the fluxes $q_{XY}$ arise due to the activity of some driving forces, which we can also consider at the subsystem level as a random vector, such that:
The fluxes $q_{XY}$ depend linearly on the exchange driving forces $\varphi_{XY}$ []: $q_{XY} = A \varphi_{XY}$. Under this assumption, and supposing that the matrix of infrastructure coefficients $A$ is constant, the distribution of the driving forces is also normal;
The covariance matrix $\mathrm{Cov}[\tilde{q}]$ of the subsystems of the $X \cup Y$ macrosystem depends on the driving force intensity $\varphi_{XY} = E[\tilde{\varphi}]$ causing these fluxes, so that the matrices $\mathrm{Cov}[\tilde{q}]$ and $\mathrm{Cov}[\tilde{\varphi}]$ are jointly diagonalizable (their eigenvectors coincide), due to the linear relationship between fluxes and driving forces;
The limits of the correlation coefficients corresponding to the covariance matrix $\mathrm{Cov}[\tilde{q}]$ for any $\nu, \kappa = 0, \dots, N$ are $\lim_{\varphi_{XY} \to 0} \rho_{\nu\kappa} = 0$ and $\lim_{\varphi_{XY} \to \infty} \rho_{\nu\kappa} = 1$, due to the redistribution of the extensive variables over a variety of subsystems, depending on the number of intermediate nodes in the graph chain to the contact point.
If the equilibrium condition is satisfied for a macrosystem $X$ when interacting with each macrosystem from its environment, then such a macrosystem is called closed. If, for a macrosystem $X$, all its subsystems are in equilibrium when interacting with each other (but not necessarily with the environment of the macrosystem $X$),
$$\forall t, \; \forall i \in X: \quad \sum_{j \in X} q_{ij}(t) = 0, \qquad (4)$$
then we can say that this macrosystem is in a state of internal equilibrium. For a macrosystem in internal equilibrium, fluxes can be observed only at the boundaries of the macrosystem and its environment.
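The dynamic character of this equilibrium can be sketched with a toy simulation (the distribution and sample size are our own assumptions): individual subsystem fluxes fluctuate freely, while the aggregate flux, as in Eq. (3), vanishes.

```python
import random

# Dynamic equilibrium sketch: subsystem fluxes q_ij are i.i.d. zero-mean
# normal draws, so each individual flux fluctuates while the aggregate
# flux between X and Y is approximately zero.

random.seed(42)
n_pairs = 10_000                                  # subsystem pairs (i, j)
q_ij = [random.gauss(0.0, 1.0) for _ in range(n_pairs)]

mean_flux = sum(q_ij) / n_pairs                   # aggregate flux per pair
peak_flux = max(abs(q) for q in q_ij)             # individual fluctuation

assert abs(mean_flux) < 0.05     # equilibrium only in the aggregate sense
assert peak_flux > 1.0           # subsystem-level fluxes are far from zero
```

The contrast between `mean_flux` and `peak_flux` is the point: equilibrium is a statement about the expectation $E[\tilde{q}] = 0$, not about the individual exchange events.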

5. Extensive and Intensive Variables

Let us assume that two macrosystems X and Y exchange the vector of the extensive variables Q . At each moment of time t , the values of the extensive variables describe the state of macrosystems.
The macrosystem shall be described by a set of extensive and intensive variables:
Extensive variables are such that for any two disjoint macrosystems $X$ and $Y$ (not necessarily in equilibrium):
$$Q_{X \cup Y} = Q_X + Q_Y; \qquad (5)$$
Due to the conservation law, all components of the state vector $Q$ are extensive variables;
Intensive variables $v$ are such that for any two systems $X$ and $Y$ that are in equilibrium:
$$v_{X \cup Y} = v_X = v_Y. \qquad (6)$$
Extensive variables satisfy the neutral scale effect condition: if all extensive variables in all subsystems of the macrosystem are increased by n times, then the flux intensities in the macrosystem will not change. In particular, such a proportional increase in the extensive variables will not bring the macrosystem out of the internal equilibrium state if before that, the system was in internal equilibrium. The neutral scale effect condition provides the self-similarity of the macrosystem.
As a consequence of (6), intensive variable dependencies on the extensive variable values that determine the macrosystem state should be homogeneous functions of the zero-order.
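This zero-order homogeneity can be verified numerically on a toy state equation; the Cobb-Douglas-style form $S(Q) = Q_1^a Q_2^{1-a}$ below is our own illustrative assumption, chosen only because it is first-order homogeneous:

```python
# Numeric check that the intensive variables v = grad S are zero-order
# homogeneous, v(nQ) = v(Q), when S is first-order homogeneous.

a = 0.3

def S(Q1, Q2):
    return Q1 ** a * Q2 ** (1 - a)       # first-order homogeneous toy S

def v(Q1, Q2, h=1e-6):
    """Intensive variables as central-difference partials of S."""
    return ((S(Q1 + h, Q2) - S(Q1 - h, Q2)) / (2 * h),
            (S(Q1, Q2 + h) - S(Q1, Q2 - h)) / (2 * h))

v_base = v(2.0, 5.0)
v_scaled = v(2.0 * 7, 5.0 * 7)           # scale all extensive variables, n = 7

assert all(abs(x - y) < 1e-6 for x, y in zip(v_base, v_scaled))
```

Scaling every extensive variable by $n = 7$ leaves all intensive variables unchanged, which is the neutral scale effect condition stated above.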

6. Entropy of the Macrosystem

The set of extensive variables describes the macrosystem state. Among the extensive variables, we single out the value $S = Q_0$, which characterizes the objective function of the system. $S$ and the other extensive variables are functionally related: $S = S(Q)$, $Q = (Q_1, \dots, Q_N)$. We call this equation the system state equation.
The choice of S as the objective function is determined by the Levitin–Popkov Axioms [], which impose the following conditions on this variable:
  • For a controlled system, $S = S(Q)$ given a fixed deterministic control $v$; the stochastic state of the system, characterized by a vector of fluxes, is transformed into a deterministic vector $Q(v)$, called the steady or stationary state, which belongs to a permissible set $D(v)$.
  • For any fixed control $v$, there exists a vector $p(v)$ of a priori probabilities for the distribution of fluxes in the system, such that the stationary state $Q(v)$ of the macrosystem under that fixed control $v$ is the optimal solution to the entropy-based optimization problem $Q(v) = z[v](p(v))$, where $z[v](p)$ is the entropy operator defined as
$$z[v](p) = \arg\max_{Q} \left\{ S(p, Q) : Q \in D(v) \right\}. \qquad (7)$$
Thus, the pair ( p ( v ) ,   Q ( v ) ) simultaneously provides both the required vector of prior probabilities and the corresponding stationary state vector.
  • There exists an inverse mapping $p = \zeta[v](Q)$ such that the desired pair $(p(v), Q(v))$ is the unique solution to the system of relations:
$$Q(v) = z[v](p(v)), \quad p = \zeta[v](Q). \qquad (8)$$
Within the framework of the considered model, these statements can be interpreted as follows:
  • For any fixed deterministic control $v = \mathrm{const}$, there exists a stable steady state $Q^*(v)$.
  • The stable steady state $Q^*(v)$ is a state of internal equilibrium corresponding to the maximum of $S(Q)$.
  • The stable steady state $Q^*(v)$ is unique.
For a closed system, the condition $S(Q) \to \max$ determines the spontaneous direction of exchange of the extensive variables. However, a question arises about the homogeneity of the function $S(Q)$: entropy is a homogeneous function of degree one only in systems that are in a state of internal equilibrium. In cases where the distribution $p(v)$ is scale-invariant but does not correspond to internal equilibrium, a fractal structure of the macrosystem is observed, in which $S(Q)$ is a homogeneous function with a degree of homogeneity less than one. With this caveat, entropy can still be considered an extensive quantity. Whether the degree of homogeneity of $S(Q)$ can be used as a measure of the equilibrium of the macrosystem requires further consideration.
Since $S$ is an extensive variable under the condition of internal equilibrium, $S(Q)$ is a homogeneous function of the first order: when scaling the system by $n$ times or combining $n$ identical macrosystems in the equilibrium state,
$$n S(Q) = S(n Q). \qquad (9)$$
In accordance with the Euler relation for homogeneous functions,
$$S(Q) = Q \cdot \nabla S = \sum_{\nu=1}^{N} Q_\nu \frac{\partial S}{\partial Q_\nu}. \qquad (10)$$
Let us denote $v_\nu = \partial S / \partial Q_\nu$. Since $S(Q)$ is a homogeneous function of the first order, the $v_\nu(Q)$, $\nu = 1, \dots, N$, are homogeneous of order zero, i.e., they are intensive variables: for any $n$, $v_\nu(nQ) = v_\nu(Q)$. This means that a proportional increase in the extensive variables will not bring the macrosystem out of the internal equilibrium state if the system was in internal equilibrium before.
Under the assumption that the function $S(Q)$ is differentiable and its partial derivatives are continuous, there exists a total differential
$$dS = \sum_{\nu=1}^{N} \frac{\partial S}{\partial Q_\nu} \, dQ_\nu = \sum_{\nu=1}^{N} v_\nu \, dQ_\nu. \qquad (11)$$
Two consequences of Equation (11) can be formulated.
Consequence 1. By differentiating the Euler relation (10), we obtain
$$dS = \sum_{\nu=1}^{N} \left( v_\nu \, dQ_\nu + Q_\nu \, dv_\nu \right). \qquad (12)$$
By comparing (11) and (12), we see that the second term in (12) must equal zero:
$$\sum_{\nu=1}^{N} Q_\nu \, dv_\nu = 0. \qquad (13)$$
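Consequence 1 can be checked numerically on the same toy Cobb-Douglas state equation used earlier (our own assumption, not from the article): along a small displacement $dQ$, the term $\sum_\nu Q_\nu \, dv_\nu$ vanishes to first order.

```python
# Numeric check of Consequence 1 for the toy first-order homogeneous
# state equation S(Q) = Q1^a * Q2^(1-a): sum_nu Q_nu dv_nu ~ 0.

a = 0.3

def v(Q1, Q2, h=1e-6):
    """Intensive variables v = grad S via central differences."""
    S = lambda x, y: x ** a * y ** (1 - a)
    return ((S(Q1 + h, Q2) - S(Q1 - h, Q2)) / (2 * h),
            (S(Q1, Q2 + h) - S(Q1, Q2 - h)) / (2 * h))

Q = (2.0, 5.0)
dQ = (1e-4, -3e-4)                       # arbitrary small displacement

v0 = v(*Q)
v1 = v(Q[0] + dQ[0], Q[1] + dQ[1])
residual = sum(q * (b - c) for q, b, c in zip(Q, v1, v0))

assert abs(residual) < 1e-6              # sum Q_nu dv_nu vanishes
```

The residual is orders of magnitude smaller than the individual terms $Q_\nu \, dv_\nu$, as the first-order part cancels exactly for a homogeneous $S$.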
Consequence 2. The exchange process between subsystems $X$ and $Y$ (with the positive direction of fluxes from $X$ to $Y$) can be described using Equations (2) and (11) as follows:
$$\frac{dS}{dt} = \frac{dS_Y}{dt} + \frac{dS_X}{dt} = \sum_{\nu=1}^{N} \left( v_{Y\nu} - v_{X\nu} \right) q_{XY,\nu}. \qquad (14)$$
The parameter $S$ is an extensive variable, and its dependence on the other state variables, $S(Q)$, is an objective function for spontaneous processes in the macrosystem.
From the established relations, the following conclusion can be drawn:
Let $Q$ be a vector measure on the measurable space $(Y, S)$, so that $(Y, S, Q)$ is a measure space. Then, the entropy of a subsystem $X \subseteq Y$ is defined as a function $S_X = S(Q_X)$ such that:
$S_X \le S(Q^*)$, where $S(Q^*)$ is the entropy of $X$ under internal equilibrium of all subsystems of $X$;
$S(Q_X) + S_0 = S(Q_X)$, where $S_0$ is the entropy of a subsystem with a zero vector measure;
For any two subsystems $X_1, X_2$ such that $Y = X_1 \cup X_2$ and $Q_{X_1 \cap X_2} = 0$, it holds that $S_Y = S_{X_1} + S_{X_2}$.
These properties correspond to Shannon–Khinchin Axioms for entropy [].
The entropy function meets the conditions of Shannon entropy if the number of subsystems is large enough. $S$ statistically corresponds to the entropy of the distribution $p(v(Q))$ of the state parameters of the subsystems of the macrosystem. Let us explain this statement. Since there are no restrictions on the possible values of a random vector with finite variance, the normal distribution corresponds to the maximum entropy value (this can be considered a justification for the distribution law of the parameters of the subsystems that form a macrosystem).
The entropy of the normal distribution consists of two terms: $S = S_0 + \frac{1}{2} \ln \det R$, where $R = [\rho_{\nu\kappa}]$, $\nu, \kappa = 1, \dots, N$, is the correlation matrix. The first term corresponds to complete independence of the system elements and characterizes the structure of the macrosystem; the second describes the relationships in the macrosystem. The correlation $\rho_{\nu\kappa}$ ($\nu, \kappa = 1, \dots, N$) between fluxes increases with the growing magnitude of the exchange driving forces $\varphi_{XY}$. Figure 2 illustrates the kind of dependency $\rho_{\nu\kappa}(\varphi_{XY})$ when the components $\varphi_\nu$ and $\varphi_\kappa$ of the vector $\varphi_{XY}$ are changed. The second term of the entropy expression, which is always negative for $R \neq E$ ($E$ is the identity matrix), then decreases. This explains the statement about $S$ as an objective function that reaches its maximum when the macrosystem reaches an equilibrium state, and as a state function that sets the direction of processes in a closed system, increasing the dispersion of low-level subsystem parameters while simplifying the macrosystem structure at a high level.
Figure 2. Dependency of the correlation between fluxes $\rho_{\nu\kappa}(\varphi_{XY})$ in a macrosystem: $\varphi_\nu$ and $\varphi_\kappa$ are components of the vector $\varphi_{XY}$.
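The sign of the correlation term can be verified directly on a toy $2 \times 2$ correlation matrix (our own numbers): it is zero for independent fluxes and strictly negative otherwise, decreasing as the correlation grows.

```python
import math

# The correlation term 0.5 * ln det R of the normal-distribution entropy
# for a 2x2 correlation matrix R with off-diagonal element rho.

def corr_term(rho):
    """0.5 * ln det R, with det R = 1 - rho^2 for the 2x2 case."""
    det_R = 1.0 - rho * rho
    return 0.5 * math.log(det_R)

assert corr_term(0.0) == 0.0                     # independent fluxes: R = E
assert corr_term(0.5) < corr_term(0.1) < 0.0     # more correlation, lower term
```

As $\rho \to 1$ (strong driving forces), $\det R \to 0$ and the term diverges to $-\infty$, matching the statement that growing flux correlations lower the second term of the entropy.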
In the process of spontaneous, unforced exchange, the $S$ value of a closed macrosystem cannot decrease, since $S(Q)$ is the objective function for the system behaviour. In order for the value $\sigma = dS/dt$, called the entropy production, to be positive during any process of given non-zero intensity, it is sufficient to fulfil the condition
$$\mathrm{sign}\left( v_{Y\nu} - v_{X\nu} \right) = \mathrm{sign}\left( q_{XY,\nu} \right), \quad \nu = 1, \dots, N. \qquad (15)$$
If the process duration is infinitely long, then the macrosystem’s natural evolution leads it to a state of internal equilibrium, which corresponds to the achievement of the maximum S ( Q ) value. This is equivalent to the statement that in the internal equilibrium state, S ( Q ) is maximum.
Intensive variables $v = \nabla S$ can be considered as specific potentials. Their difference $\varphi_{XY} = v_Y - v_X$, according to (15), is the driving force of the exchange process. The fluxes are directed from subsystems with lower values of the intensive parameters to subsystems with higher values. Thus, Equation (14) can be rewritten as follows:
$$\sigma(v_X, v_Y) = \sum_{\nu=1}^{N} \varphi_\nu(v_X, v_Y) \, q_{XY,\nu}\!\left( A, \varphi(v_X, v_Y) \right), \qquad (16)$$
where $A$ is the matrix of infrastructure coefficients. Note that the infrastructure of the exchange process involves a wide variety of features of the medium at the subsystems' boundaries. For example, in thermodynamic macrosystems these features are surface area, roughness, etc.; all these parameters can serve as controls in the corresponding optimization problems, but here $A$ is assumed to be constant. If the driving force is constant during the process, then such a process is called stationary. Since $\varphi(v_X, v_Y)$ enters linearly, and therefore the superposition principle applies, we single out the classes of reversible processes ($\varphi(v_X, v_Y) \equiv 0$) and processes of minimal dissipation: $\sigma(\varphi) \to \min_\varphi$ subject to $q_{XY}(\varphi)$ fixed.
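Under the linear flux-force coupling, the entropy production reduces to the quadratic form $\sigma = \varphi^T A \varphi$. A minimal sketch, assuming a symmetric positive definite $A$ of our own choosing:

```python
# Entropy production as a quadratic form sigma = phi^T A phi under the
# linear dependence q = A * phi (toy 2x2 infrastructure matrix).

A = [[2.0, 0.5],
     [0.5, 1.0]]                       # symmetric, positive definite

def sigma(phi):
    """sigma = phi^T A phi for the driving-force vector phi."""
    q = [sum(A[i][j] * phi[j] for j in range(2)) for i in range(2)]  # q = A phi
    return sum(p * f for p, f in zip(phi, q))

assert sigma([0.0, 0.0]) == 0.0        # reversible process: phi = 0
assert sigma([0.3, -0.4]) > 0.0        # irreversible: strictly positive
```

The reversible process is exactly the zero of this form; any non-zero driving force yields strictly positive dissipation, which is the content of condition (15) in the linear case.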

7. Differential Form of the State Equation

The state function can be given in differential form, as is typical for thermodynamic systems:
$$\delta \Phi = \sum_{\nu=1}^{N} F_\nu(Q) \, dQ_\nu. \qquad (17)$$
If we cannot write a state function explicitly, then the problem of integrability of $\Phi(Q)$ should be solved. Equation (17) is a Pfaffian form. A Pfaffian form is said to be holonomic if there exists an integrating multiplier $w(Q)$ such that
$$w(Q) \, \delta \Phi = \sum_{\nu=1}^{N} \frac{\partial S}{\partial Q_\nu} \, dQ_\nu = dS, \quad \text{where } \frac{\partial S}{\partial Q_\nu} = w(Q) F_\nu(Q), \; \nu = 1, \dots, N. \qquad (18)$$
A Pfaffian form in two independent variables is always holonomic; that is, an integrating multiplier $w(Q)$ always exists. However, for $N > 2$, the integrating multiplier exists only if the holonomy conditions are satisfied: for any three different indices $\kappa, \mu, \nu$,
$$F_\kappa(Q) \left( \frac{\partial F_\mu(Q)}{\partial Q_\nu} - \frac{\partial F_\nu(Q)}{\partial Q_\mu} \right) + F_\mu(Q) \left( \frac{\partial F_\nu(Q)}{\partial Q_\kappa} - \frac{\partial F_\kappa(Q)}{\partial Q_\nu} \right) + F_\nu(Q) \left( \frac{\partial F_\kappa(Q)}{\partial Q_\mu} - \frac{\partial F_\mu(Q)}{\partial Q_\kappa} \right) = 0. \qquad (19)$$
These conditions are obtained from the equality of the second mixed derivatives with respect to any pair of variables (Maxwell's relations),
$$\frac{\partial^2 S}{\partial Q_\nu \, \partial Q_\kappa} = \frac{\partial \left( w(Q) F_\nu(Q) \right)}{\partial Q_\kappa}, \qquad (20)$$
and by elimination of the integrating factor $w(Q)$ from these equalities. In addition to conditions (19), it is necessary that all products $w(Q) F_\nu(Q)$ be homogeneous functions of order zero.
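The holonomy conditions can be checked by finite differences for a deliberately integrable toy form (our own choice of $S$ and $w$, not from the article): take $S = Q_1 + Q_2 + Q_3$ and $w(Q) = Q_1$, so that $F_\nu = (1/w)\,\partial S/\partial Q_\nu = 1/Q_1$ for every $\nu$.

```python
# Finite-difference check of the holonomy condition (19) for a Pfaffian
# form known to admit an integrating multiplier: F_nu(Q) = 1 / Q1.

def F(nu, Q):
    return 1.0 / Q[0]          # same component for nu = 0, 1, 2

def dF(nu, mu, Q, h=1e-6):
    """Numerical partial derivative dF_nu / dQ_mu."""
    Qp, Qm = list(Q), list(Q)
    Qp[mu] += h
    Qm[mu] -= h
    return (F(nu, Qp) - F(nu, Qm)) / (2 * h)

def holonomy(Q, k, m, n):
    """Left-hand side of the holonomy condition for indices (kappa, mu, nu)."""
    return (F(k, Q) * (dF(m, n, Q) - dF(n, m, Q))
          + F(m, Q) * (dF(n, k, Q) - dF(k, n, Q))
          + F(n, Q) * (dF(k, m, Q) - dF(m, k, Q)))

assert abs(holonomy([2.0, 3.0, 5.0], 0, 1, 2)) < 1e-9
```

The three cyclic terms cancel exactly, confirming that this form is holonomic; a form violating (19) would leave a non-zero residual here.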

8. Concavity of Entropy Function

The gradient of the $S(Q)$ function determines the vector of intensive variables of the macrosystem. As the extensive variables increase, the $S$ value also increases, but at a slower rate (the law of diminishing returns), so that for all $\nu = 1, \dots, N$: $\partial S / \partial Q_\nu = v_\nu > 0$ and $\partial^2 S / \partial Q_\nu^2 = \partial v_\nu / \partial Q_\nu < 0$. Moreover, the Hessian matrix $H_S = \left[ \partial^2 S / \partial Q_\nu \partial Q_\kappa \right]$ of a homogeneous function of the first order is negative semi-definite. Indeed, differentiating both sides of the Euler relation (10) with respect to $Q_\kappa$ gives
$$\forall \kappa = 1, \dots, N: \quad \sum_{\nu=1}^{N} Q_\nu \frac{\partial^2 S}{\partial Q_\nu \, \partial Q_\kappa} = 0. \qquad (21)$$
For an arbitrary vector $x$, consider the quadratic form
$$x^T H_S x = \sum_{\nu=1}^{N} x_\nu^2 \frac{\partial^2 S}{\partial Q_\nu^2} + \sum_{\nu=1}^{N-1} \sum_{\kappa=\nu+1}^{N} 2 x_\nu x_\kappa \frac{\partial^2 S}{\partial Q_\nu \, \partial Q_\kappa} \to \max_x. \qquad (22)$$
The necessary condition for its extremum is
$$\frac{\partial \left( x^T H_S x \right)}{\partial x_\nu} = \sum_{\kappa=1}^{N} x_\kappa \frac{\partial^2 S}{\partial Q_\nu \, \partial Q_\kappa} = 0, \qquad (23)$$
which, by (21), is satisfied at $x_\nu = Q_\nu$, $\nu = 1, \dots, N$, and this point is a maximum (according to the law of diminishing returns). Since the product $Q^T H_S$ in (21) is equal to zero, $x^T H_S x$ at the maximum point is also equal to zero. Thus, for any value of $x$: $x^T H_S x \le 0$, which was to be proved. Note that since the Hessian matrix is symmetric, all its eigenvalues are real numbers.
The negative semi-definiteness of the Hessian matrix $H_S$ corresponds to the upward convexity (concavity) of the $S(Q)$ function and to the unimodality of $S$ as the objective parameter. As the reserves of the extensive variables in the macrosystem increase, $S$ increases, and the intensive parameters decrease, reducing the magnitude of the driving force of resource exchange. In accordance with (14), this behaviour of the intensive parameters means that when the macrosystem is perturbed away from its internal equilibrium conditions, the resource exchange processes are directed towards counteracting the changes, and thus the Le Chatelier principle is fulfilled.
The partial derivatives $\partial^2 S / \partial Q_\nu^2$ describe the saturation of the system with the extensive variables, and $\partial^2 S / \partial Q_\nu \partial Q_\kappa$ determines the substitution and complementation of the extensive variables in the macrosystem. If the extensive variables are substitutes, then an increase in one of them reduces the intensive variable associated with the other extensive variable. If the extensive variables are complements, then an increase in one of them, on the contrary, increases the intensive variable associated with the other extensive variable.
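The negative semi-definiteness argument can be checked numerically on the toy first-order homogeneous state equation used earlier ($S(Q) = Q_1^a Q_2^{1-a}$, our own assumption): $x^T H_S x < 0$ off the ray $x = Q$ and vanishes on it, in line with (21).

```python
# Finite-difference Hessian of a first-order homogeneous toy S, with a
# check that the quadratic form x^T H x is negative off x = Q and ~0 at x = Q.

a = 0.3
S = lambda Q1, Q2: Q1 ** a * Q2 ** (1 - a)

def hessian(Q1, Q2, h=1e-4):
    """Finite-difference Hessian of S at (Q1, Q2)."""
    def d2(i, j):
        def shifted(si, sj):
            dq = [0.0, 0.0]
            dq[i] += si * h
            dq[j] += sj * h
            return S(Q1 + dq[0], Q2 + dq[1])
        return (shifted(1, 1) - shifted(1, -1)
              - shifted(-1, 1) + shifted(-1, -1)) / (4 * h * h)
    return [[d2(0, 0), d2(0, 1)], [d2(1, 0), d2(1, 1)]]

Q = (2.0, 5.0)
H = hessian(*Q)

def quad(x):
    return sum(x[i] * H[i][j] * x[j] for i in range(2) for j in range(2))

assert quad([1.0, 0.0]) < 0 and quad([0.0, 1.0]) < 0   # diagonal curvature
assert abs(quad(list(Q))) < 1e-4                       # Q^T H Q = 0, as in (21)
```

The zero of the form along $x = Q$ is exactly the direction of proportional scaling, which does not disturb internal equilibrium.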

9. Metric Features of Entropy

Let us consider a particular case of exchange processes in a macrosystem consisting of two subsystems $X$ and $Y$, where the fluxes depend linearly on the differences between the intensive variables of the subsystems:
$$q_{XY,\nu}(v_X, v_Y) = \frac{dQ_{Y\nu}}{dt} = \sum_{\kappa=1}^{N} \alpha_{\nu\kappa} \, \varphi_\kappa(v_{X\kappa}, v_{Y\kappa}), \qquad (24)$$
where $\varphi_\kappa(v_{X\kappa}, v_{Y\kappa}) = v_{Y\kappa} - v_{X\kappa}$.
Proposition 1. 
The matrix $A = [\alpha_{\nu\kappa}]$ of infrastructure coefficients, describing the exchange possibilities at the boundary between subsystems, is a symmetric matrix.
Proof. 
Given the self-similarity property, the system can be divided into a statistically significant disjoint set of subsystems such that the characteristics of these subsystems form a representative sample of the random variables $\tilde{q}$, $\tilde{\varphi}$, etc. The eigenvectors of the matrices $\mathrm{Cov}[\tilde{\varphi}]$ and $\mathrm{Cov}[\tilde{q}, \tilde{\varphi}]$ coincide and form a system of orthonormal vectors. Since the eigenvectors of any matrix and of its inverse are the same, $\mathrm{Cov}^{-1}[\tilde{\varphi}]$ and $\mathrm{Cov}[\tilde{q}, \tilde{\varphi}]$ are symmetric and commute, so their product is a symmetric matrix. Since $E[\tilde{q}] = \mathrm{Cov}^T[\tilde{q}, \tilde{\varphi}] \, \mathrm{Cov}^{-1}[\tilde{\varphi}] \, E[\tilde{\varphi}]$, the matrix of phenomenological coefficients $A = \mathrm{Cov}^T[\tilde{q}, \tilde{\varphi}] \, \mathrm{Cov}^{-1}[\tilde{\varphi}]$ is symmetric (the Onsager conditions). □
Proposition 2. 
The matrix $A = [\alpha_{\nu\kappa}]$ is a positive definite matrix.
Proof. 
The right-hand side of Equation (16) under condition (24) takes a quadratic form:
$$\sigma(v_X, v_Y) = \varphi^T(v_X, v_Y) \, A \, \varphi(v_X, v_Y). \qquad (25)$$
For any non-zero values of the exchange driving forces $\varphi(v_X, v_Y)$, the entropy production is positive, as shown in (15). Therefore, $A$ is a positive definite matrix. It follows that the dependencies $\sigma(\varphi)$ and $\sigma(q)$ are convex under the linear dependence of the fluxes on the driving forces. □
The consequence of this proposition is that entropy production is a metric in the space of stationary processes and can be used to determine the distance between processes:
$$\delta(a, b) = \sqrt{\left( \varphi_a - \varphi_b \right)^T A \left( \varphi_a - \varphi_b \right)}. \qquad (26)$$
Let us present some properties of this metric:
Zero in the space of stationary processes represents the reversible processes, for which $\sigma = 0$. According to the third Levitin–Popkov axiom, there is only one reversible process in the macrosystem;
The distance between two processes $a$ and $b$ is determined as $\delta^2(a, b) = (\varphi_a - \varphi_b)^T A (\varphi_a - \varphi_b)$. It is evident that $\delta(a, a) = 0$ and $\delta(a, b) = \delta(b, a)$;
The distance $\delta(a, b)$ satisfies the triangle inequality because $A$ is a positive definite symmetric matrix; all its eigenvalues $\lambda$ are positive real numbers.
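These metric properties can be verified directly on toy numbers (the matrix $A$ and the driving-force vectors are our own illustrative assumptions):

```python
import math

# Entropy-production distance between stationary processes, a
# Mahalanobis-like metric induced by a positive definite A.

A = [[2.0, 0.5],
     [0.5, 1.0]]

def delta(phi_a, phi_b):
    """sqrt((phi_a - phi_b)^T A (phi_a - phi_b))."""
    d = [x - y for x, y in zip(phi_a, phi_b)]
    return math.sqrt(sum(d[i] * A[i][j] * d[j]
                         for i in range(2) for j in range(2)))

rev = [0.0, 0.0]                 # the reversible process, sigma = 0
p1 = [0.3, -0.4]
p2 = [-0.2, 0.5]

assert delta(p1, p1) == 0.0                               # identity
assert delta(p1, p2) == delta(p2, p1)                     # symmetry
assert delta(p1, p2) <= delta(p1, rev) + delta(rev, p2)   # triangle inequality
```

The distance from any process to `rev` is exactly $\sqrt{\sigma}$, so entropy production plays the role of the squared distance to the reversible process.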
Thus, the entropy production in a macrosystem characterizes the distance of the stationary exchange processes occurring in the system from the corresponding reversible process. For linear systems, it is possible to consider generalized stationary processes, averaged over the probability measure: for $s \in \mathbb{R}$, $\mu(s) = P\left( \sigma(v_X, v_Y) = s \right)$ and $\bar{\sigma} = \int_{-\infty}^{+\infty} s \, \mu(s) \, ds$. Such averaging for cyclic and stochastic processes is illustrated in Figure 3.
Figure 3. Averaging of entropy production in cyclic (a), and stochastic (b) stationary processes.
The distance between irreversible processes is an important concept for analyzing complex systems. The class of minimal dissipation processes indicates the performance limit of macrosystems when the average intensities of the processes in them are restricted. To determine the effectiveness of an arbitrary process, it is necessary to find the distance between this process and the minimum dissipation process. This distance is determined by the entropy production.

10. Trajectories of the Exchange Process

For non-stationary exchange processes, it is necessary to determine the trajectory of the process, i.e., the change in time of all subsystem parameters when approaching the equilibrium state. For a linear dependence between fluxes and driving forces (24), we derive a differential equation that determines the change in the driving forces of the exchange process in a macrosystem consisting of two subsystems $X$ and $Y$. Under the given initial conditions $\varphi_{XY}(0) = \varphi^0$, this equation describes all the parameters of the subsystems.
The full differentials of the functions $v_{i\kappa}(Q_i)$, $i \in \{X, Y\}$, $\kappa = 1, \dots, N$, shall be written as
$$dv_{i\kappa} = \sum_{\nu=1}^{N} \frac{\partial^2 S_i}{\partial Q_{i\nu} \, \partial Q_{i\kappa}} \, dQ_{i\nu}. \qquad (27)$$
Subtracting the equations for subsystem $X$ from the equations for subsystem $Y$, and taking into account the linear dependence of the fluxes on the driving forces (24) and the fact that the driving forces are the differences between the corresponding intensive variables of the subsystems ($d\varphi_{XY}/dt = dv_Y/dt - dv_X/dt$), we obtain
$$\frac{d\varphi_{XY,\kappa}}{dt} = \sum_{\nu=1}^{N} \left( \frac{\partial^2 S_Y}{\partial Q_{Y\nu} \, \partial Q_{Y\kappa}} + \frac{\partial^2 S_X}{\partial Q_{X\nu} \, \partial Q_{X\kappa}} \right) \sum_{\mu=1}^{N} \alpha_{\nu\mu} \, \varphi_{XY,\mu}. \qquad (28)$$
Equation (28), together with the initial conditions, determines the trajectory of the exchange process.
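A trajectory of this kind can be sketched by explicit Euler integration, assuming (our own toy numbers) that the combined Hessian term is a constant negative definite matrix $H$ and that the trajectory equation takes the linear form $d\varphi/dt = H A \varphi$: since $H$ is negative definite and $A$ positive definite, the driving force relaxes to the equilibrium $\varphi = 0$.

```python
# Explicit-Euler sketch of the linearized trajectory equation
# d(phi)/dt = H A phi with constant toy matrices H (negative definite,
# standing in for the sum of the Hessians) and A (positive definite).

H = [[-1.0, 0.0],
     [0.0, -2.0]]
A = [[2.0, 0.5],
     [0.5, 1.0]]

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(2)) for i in range(2)]

phi = [1.0, -0.5]          # initial driving force phi_XY(0)
norm0 = sum(p * p for p in phi) ** 0.5

dt = 0.01
for _ in range(1000):      # integrate to t = 10
    dphi = matvec(H, matvec(A, phi))
    phi = [p + dt * d for p, d in zip(phi, dphi)]

norm_T = sum(p * p for p in phi) ** 0.5
assert norm_T < 0.01 * norm0    # the exchange process relaxes to equilibrium
```

In the full non-linear setting the Hessians vary along the trajectory, but the same stability mechanism (negative definite curvature times positive definite infrastructure) drives the decay.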

11. Conclusions

The metric properties of entropy production allow for the determination of both the class of minimally irreversible processes [] and the quantitative distance between processes. The stationarity restriction can be removed either by averaging the process parameters over time or by virtue of the superposition principle for processes in linear systems. A generalized macrosystem model, independent of the nature of the processes, is useful for the formalization and investigation of the extreme performance of complex, hierarchically related systems. In such macrosystems, there is a vector of entropy functions, which gives the macrosystem an additional degree of freedom at the level of subsystems.
The given proofs of the phenomenological properties of macrosystems are valid for thermodynamic systems but can also be applied to systems of a different nature, particularly economic systems [,], information exchange systems in communication networks [], and high-performance computers [], where the number of computational cores is already comparable to the number of molecules in 1 μm³ of gas. The main difference in the description of macrosystems of different natures lies in the definition of the extensive quantities that describe the state of a system and in the relationships between the fluxes that arise from interactions between subsystems. In thermodynamic macrosystems, the extensive variables are internal energy, mole number, and volume []; in economic macrosystems, the extensive variables are resources, goods, and welfare []; in information macrosystems such as recommendation systems, the extensive variables are the database size and the number of exposed marks []; and so on. In information macrosystems, we restrict ourselves to describing syntactic information exchange []. Social macrosystems with semantic and pragmatic information exchange processes require a special conceptual apparatus, which is markedly different from the one usually used in formal logic []. For signal transmission systems, the extensive variables are the number of processors of a given type, the amount of memory, and the entropy properties of the computing power; the intensive variables are determined by the contribution each type of hardware makes to the increase in computing power. When modelling algorithms as complex systems, the extensive variables can be used as control values, the entropy corresponds to the objective function, and the intensive variables are the values of the Lagrange multipliers. Economic analogies of complex systems are often considered.
At the micro level, the extensive quantities include the stocks of goods, while the intensive quantities include the prices and values of the goods. The welfare function has entropy properties in microeconomic systems. At the macro level, it is advisable to choose the gross regional product as the entropy; it is a function of the vector of production factors. The intensive variables in this model are prices in real terms.
Entropy as an extensive quantity has been introduced into various models of self-similar systems. Integrability problems of the welfare function have been examined in []. The first-order homogeneity property of the gross regional product has been proven; the Cobb–Douglas production function is most commonly used for its approximation. For communication networks, the entropy properties of the congestion indicator have been proven []. These examples show the universality of the macrosystemic approach to modelling complex systems of different natures.
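The first-order homogeneity property mentioned above can be verified numerically for the Cobb–Douglas form. The parameter values below are illustrative assumptions, not taken from the article; the check confirms F(tK, tL) = t·F(K, L), the scaling behaviour required of an entropy-like (extensive) function.

```python
# Numerical check (illustrative parameters) of first-order homogeneity of
# the Cobb-Douglas production function F(K, L) = A * K**alpha * L**(1 - alpha).
# With exponents summing to 1, scaling both factors by t scales output by t.

def cobb_douglas(K, L, A=1.5, alpha=0.3):
    """Cobb-Douglas production function with constant returns to scale."""
    return A * K**alpha * L**(1 - alpha)

K, L, t = 4.0, 9.0, 2.5
lhs = cobb_douglas(t * K, t * L)   # output after scaling both inputs by t
rhs = t * cobb_douglas(K, L)       # t times the original output
assert abs(lhs - rhs) < 1e-9 * rhs
```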
The graph of interactions between subsystems can differ for each type of extensive variable. Such a complex system can be represented as a multigraph in which two nodes can be connected both by edges of different colours and by groups of such edges, which correspond to multiple contact points between subsystems. The absence of edges between two of the multigraph's nodes means that those nodes are isolated from each other. Edges of different colours, corresponding to different extensive variables, can be either co-directed or oppositely directed.
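A minimal data-structure sketch of such a coloured multigraph follows. The representation and the subsystem names are assumptions chosen for illustration: each edge carries a colour naming the extensive variable exchanged, and parallel edges between the same pair of nodes model multiple contact points.

```python
from collections import defaultdict

# Minimal sketch (assumed representation, not from the article): a directed
# multigraph of subsystems in which each edge carries a "colour" naming the
# extensive variable exchanged (e.g. heat, signals, goods); parallel edges
# between the same pair of nodes model multiple contact points.

class Multigraph:
    def __init__(self):
        # (source, target) -> list of edge colours; parallel edges allowed
        self.edges = defaultdict(list)

    def connect(self, u, v, colour):
        """Add a directed edge of the given colour from u to v."""
        self.edges[(u, v)].append(colour)

    def is_isolated_pair(self, u, v):
        """True if no edge of any colour joins u and v in either direction."""
        return not self.edges[(u, v)] and not self.edges[(v, u)]

g = Multigraph()
g.connect("hardware", "engineering", "heat")
g.connect("hardware", "engineering", "heat")   # second contact point, same colour
g.connect("software", "hardware", "signals")
```

An absent entry for a pair of nodes corresponds to subsystems isolated from each other, as described above.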
The specific behaviour of individual elementary entities is especially relevant for social systems, where free and often unmotivated decision-making is possible and can complicate the application of the model. Nevertheless, the model can still be used to describe processes of various natures occurring simultaneously within a system. For example, a high-performance computer can be represented as a macrosystem in which hardware, software, and engineering subsystems exchange energy, signals, and information, and the average intensities of the computational and heat-transfer processes are considered given. A computer performs no mechanical work; therefore, all consumed electrical energy is converted into heat and must be dissipated into the surrounding environment.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Andresen, B.; Salamon, P. Future Perspectives of Finite-Time Thermodynamics. Entropy 2022, 24, 690.
  2. Fitzpatrick, R. Thermodynamics and Statistical Mechanics; World Scientific Publishing Company: Chennai, India, 2020; 360p, ISBN 978-981-12-2335-8.
  3. Prunescu, M. Self-Similar Carpets over Finite Fields. Eur. J. Comb. 2009, 30, 866–878.
  4. Tropea, C.; Yarin, A.L.; Foss, J.F. Springer Handbook of Experimental Fluid Mechanics; Springer: Berlin/Heidelberg, Germany, 2007; 557p, ISBN 978-3-54-030299-5.
  5. Popkov, Y.S. Macrosystem Models of Fluxes in Communication-Computing Networks (GRID Technology). IFAC Proc. Vol. 2004, 37, 47–57.
  6. Rozonoér, L.I. Resource Exchange and Allocation (a Generalized Thermodynamic Approach). Autom. Remote Control 1973, 34, 915–927.
  7. Popkov, Y.S. Macrosystems Theory and Its Applications. Equilibrium Models; Lecture Notes in Control and Information Sciences; Springer: Berlin, Germany, 1995; Volume 203, 323p, ISBN 3-540-19955-1.
  8. Amelkin, S.A.; Tsirlin, A.M. Optimal Choice of Prices and Fluxes in a Complex Open Industrial System. Open Syst. Inf. Dyn. 2001, 8, 169–181.
  9. Levitin, E.S.; Popkov, Y.S. Axiomatic Approach to Mathematical Macrosystems Theory with Simultaneous Searching for Aprioristic Probabilities and Stochastic Flows Stationary Values. Proc. Inst. Syst. Anal. RAS 2014, 64, 35–40.
  10. Khinchin, A.I. Mathematical Foundations of Information Theory; Dover: New York, NY, USA, 1957; 120p. Original publication in Russian: Khinchin, A.I. The Concept of Entropy in Probability Theory. Uspekhi Matematicheskikh Nauk 1953, 8, 3–20.
  11. Tsirlin, A.M. Minimum Dissipation Processes in Irreversible Thermodynamics. J. Eng. Phys. Thermophys. 2016, 89, 1067–1078.
  12. Tsirlin, A.M. External Principles and the Limiting Capacity of Open Thermodynamic and Economic Macrosystems. Autom. Remote Control 2005, 66, 449–464.
  13. Tsirlin, A.M.; Gagarina, L.G. Finite-Time Thermodynamics in Economics. Entropy 2020, 22, 891.
  14. Dodds, P.; Watts, D.; Sabel, C. Information Exchange and the Robustness of Organizational Networks. Proc. Natl. Acad. Sci. USA 2003, 100, 12516–12521.
  15. De Vos, A. Endoreversible Models for the Thermodynamics of Computing. Entropy 2022, 22, 660.
  16. Pebesma, E.; Bivand, R. Spatial Data Science: With Applications in R; Chapman and Hall/CRC: Boca Raton, FL, USA; New York, NY, USA, 2023; 314p.
  17. Martinás, K. Entropy and Information. World Futures 1997, 50, 483–493.
  18. Jøsang, A. Subjective Logic. A Formalism for Reasoning Under Uncertainty; Springer: Cham, Switzerland, 2016; 337p, ISBN 978-3-319-42335-7.
