Abstract
The article considers the description of a macrosystem in terms that do not depend on the nature of the macrosystem. The results obtained can be used to describe macrosystem models of thermodynamic processes, and to create interdisciplinary models that take into account interactions of various natures. The macrosystem model is based on its representation in the form of a self-similar oriented weighted graph where the equation of state is fulfilled for each node, which connects extensive variables. One of the extensive variables is entropy, the maximum of which corresponds to the state of equilibrium. For processes in which fluxes are linearly dependent on driving forces, Onsager’s relations are shown to be true, which makes it possible to prove that in the space of stationary processes, entropy production in a closed macrosystem is a metric similar to the Mahalanobis metric, which determines the distance between processes. Zero in such a space indicates reversible processes, and thus the production of entropy shows the degree of irreversibility as the distance from a researched process to a reversible one.
1. Introduction
Historically, thermodynamics was the first and main research focus in the study of complex systems []. Thermodynamic analogies are used to describe other systems, such as economic, social, informational, and algorithmic systems, when control is possible only in an averaged sense. Averaged, because it is impossible to observe or control the state and behaviour of the individual elementary particles whose large aggregate forms a complex system.
In a thermodynamic system, the elementary particles are molecules, although only statistical mechanics needs discrete particles as a concept. The reason is that the phenomenological (i.e., experimentally observable) patterns related to the state of a thermodynamic system arise through dynamic or statistical averaging of the interactions between individual molecules. However, such averaging is possible only under a priori assumptions about the nature of those interactions, since they cannot be verified: the number of molecules in the system is incredibly large and their size is extremely small, which makes them neither observable nor controllable [].
The validity of such a framework is supported by the existence of phenomenological laws. However, in this case, the assumptions about the properties of individual molecular interactions must be derived from the observations of thermodynamic systems as they cannot serve as proof of the truth of the phenomenological laws per se.
The approach suggested in this work is the reverse one, which is based on a structural induction where the basic point is any scale level at which the properties of the system can be experimentally verified. This presupposes the fulfilment of the self-similarity condition, which is generally defined as follows: a set is self-similar if there exists a finite set K indexing a collection of injective and non-surjective mappings such that []. The self-similarity condition is actively used in thermodynamics [], even though it is strictly fulfilled only when the set of subsystems whose union forms the given thermodynamic system is a continuum. This assumption is too strong for real thermodynamic systems but can be considered a good approximation.
The approach considered below is not limited to describing thermodynamic systems alone, but is intentionally extended to a more general representation. This allows for the consideration of a class of macrosystem models that includes thermodynamic systems.
2. Definition of Macrosystems
When analyzing patterns that arise in the exchange processes in systems of various natures (e.g., physical, chemical, economic, informational, and social), it is often advisable to use a macrosystem approach.
Macrosystems are systems in which the following conditions are satisfied:
- Macrosystems consist of a large number of elementary objects, and this number is so large that any macrosystem can be considered a continuum and can be divided into any finite number of subsystems, including enough of them to determine statistical characteristics to any given accuracy; each subsystem can itself be considered a macrosystem. A macrosystem is associated with a nonempty collection of pairwise disjoint subsets that is closed under complement, countable unions, and countable intersections; such a collection is a σ-algebra, and the ordered pair of the underlying set and the σ-algebra is a measurable space. The macrosystem can be considered a union of a finite number of lower-level macrosystems. This condition is called the self-similarity condition [].
- The macrosystem state is determined by a vector of state variables. The state variables are assumed to satisfy a conservation law; therefore, this vector can be considered a vector of extensive variables. Because of non-negativity and countable additivity, it is a vector measure, and the measurable space equipped with it is a measure space. In thermodynamic macrosystems, the extensive variables are internal energy, mole number, and volume [], which the macrosystem can exchange with its external environment []; we also consider the external environment as a set of macrosystems (a higher-level macrosystem is then the union of the macrosystem and its external environment). In the process of interaction between subsystems X and Y, the values of their extensive-variable vectors change over time. In this way, exchange processes and their corresponding fluxes are formed, understood as the rates of change of the extensive quantities. We will denote fluxes between subsystems and as .
- It is impossible to control each elementary object because of their extremely large number []. Macrosystem control can be organized only by acting on parameters averaged over a set of elementary objects, namely:
- Changes in the parameters of the macrosystem’s external environment, the interaction with which determines the change in the values of extensive variables in the macrosystem;
- Changes in the values of extensive variables in the macrosystem (for example, their extraction) due to external interventions;
- Changes in the characteristics of the exchange infrastructure to accelerate or, conversely, slow down the exchange processes.
3. Representation of the Macrosystem as a Graph
The macrosystem can be represented as a self-similar oriented weighted graph, the nodes of which correspond to the subsystems and the macrosystem’s external environment, and the edges correspond to the fluxes between the subsystems and between the subsystems and the macrosystem’s external environment. Each graph node (each subsystem) is characterized by a vector , and fluxes between subsystems can be functionally unrelated to each other.
Self-similar—each graph node can be represented as a graph that describes the interaction of subsystems that form this node with each other and with their environment. Let us consider two interacting (the vector of the fluxes of extensive variables is ) disjoint macrosystems and . Due to the self-similarity condition, we can introduce sets of pairwise disjoint subsystems corresponding to these macrosystems. Each subsystem is characterized by its own values of extensive variables: for and for . Flux is formed between subsystems from different sets and . In this notation, the following equations are valid:
Oriented—the direction of each edge determines the positive sign of each flux; thus, if the edges are directed from node to node , then
Weighted—the weight of an edge shows the flux intensity: the value of the -th flux is positive if the real flux of the extensive variable is directed along the edge, and negative if it is opposite to the edge’s direction. In addition, a matrix of infrastructure coefficients is determined for all fluxes between nodes and (that is, for all fluxes ). This matrix is metadata for the edges connecting nodes and , and determines both the ease of the exchange process and the complementary or substitutive features of the extensive variables [].
Figure 1 shows an example of the self-similar oriented weighted graph of a part of a macrosystem.
Figure 1.
An example of the self-similar oriented weighted graph corresponding to a macrosystem. The nodes (the circles) correspond to the subsystems, and the edges (the arrows) correspond to the fluxes between the subsystems.
4. Equilibrium State of the Macrosystem
Let us assume that two macrosystems and exchange the vector of the extensive variables . At each moment of time t, the reserves of the extensive variables describe the state of the macrosystems. We represent the macrosystems as unions of finite sets of subsystems: , respectively. An equilibrium state is a state in which, at each moment of time and for each , the sum of the fluxes
Thus, the equilibrium in the macrosystem is considered dynamic. The equilibrium condition can be defined as follows: at any level of the sets of subsystems , the vector of fluxes is described by a time-independent distribution with expectation .
Note that the fluxes , , and the random variable , which describes the subsystem fluxes distribution, are vectors. At this level of subsystems, we can assume that a large number of factors affects the distribution of fluxes, which means that the distribution can be described by a multivariate normal distribution. The parameters of this distribution are the expectation , which determines fluxes at the level of subsystems and in accordance with (3), and the covariance matrix .
Assume that fluxes arise due to the activity of some driving forces, which we can also consider at the subsystem level as a random vector, such that
- Fluxes are linearly dependent on the exchange driving forces []; under this assumption, and given that the matrix of infrastructure coefficients is constant, the driving-force distribution is also normal;
- The covariance matrix of the subsystems of the macrosystem depends on the intensity of the driving forces causing these fluxes, so that the matrices and are simultaneously diagonalizable (their eigenvectors coincide), due to the linear relationship between fluxes and driving forces;
- The correlation coefficients corresponding to the covariance matrix satisfy limiting conditions for any , due to the redistribution of the extensive variables over a variety of subsystems, depending on the number of intermediate nodes in the graph chain to the contact point.
If the equilibrium condition is satisfied for a macrosystem when interacting with each macrosystem from its environment, then such a macrosystem is called closed. If, for a macrosystem , all its subsystems are in equilibrium when interacting with each other (but not necessarily with the environment of the macrosystem )
then we can say that this macrosystem is in a state of internal equilibrium. For a macrosystem in internal equilibrium, fluxes can be observed only at the boundaries between the macrosystem and its environment.
5. Extensive and Intensive Variables
Let us assume that two macrosystems and exchange the vector of the extensive variables . At each moment of time , the values of the extensive variables describe the state of macrosystems.
The macrosystem shall be described by a set of extensive and intensive variables:
- Extensive variables are such that for any two disjoint macrosystems and (not necessarily in equilibrium) their values are additive. Because they satisfy the conservation law, all such quantities are extensive variables;
- Intensive variables are such that for any two systems and that are in equilibrium their values coincide.
Extensive variables satisfy the neutral scale effect condition: if all extensive variables in all subsystems of the macrosystem are increased by the same factor, the flux intensities in the macrosystem do not change. In particular, such a proportional increase in the extensive variables will not bring the macrosystem out of the internal equilibrium state if the system was in internal equilibrium before it. The neutral scale effect condition provides the self-similarity of the macrosystem.
As a consequence of (6), the dependencies of the intensive variables on the values of the extensive variables that determine the macrosystem state should be zero-order homogeneous functions.
6. Entropy of the Macrosystem
The set of extensive variables describes the macrosystem state. Among the extensive variables, we single out the one that characterizes the objective function of the system. This variable and the other extensive variables are functionally related: , . We call this equation the system state equation.
The choice of as the objective function is determined by the Levitin–Popkov Axioms [], which impose the following conditions on this variable:
- For a controlled system with a fixed deterministic control , its stochastic state, which is characterized by a flux vector, is transformed into a deterministic vector , called the steady or stationary state, which belongs to a permissible set .
- For any fixed vector , there exists a vector of a priori probabilities for the distribution of fluxes in the system such that the stationary state of the macrosystem under that given fixed control is the optimal solution to the entropy-based optimization problem: , where is the entropy operator defined as
- There exists an inverse mapping such that the desired pair is the unique solution to the system of relations:
Within the framework of the considered model, these statements can be interpreted as follows:
- For any fixed deterministic control , there exists a stable steady state .
- The stable steady state is a state of an internal equilibrium corresponding to the maximum of .
- The stable steady state is unique.
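The second statement, that the stationary state maximizes an entropy functional, can be illustrated numerically. The sketch below is a minimal, hypothetical example, not the article's entropy operator: it maximizes Shannon entropy of a discrete flux distribution subject to one linear resource constraint (illustrative names `c`, `m`); the maximizer is an exponential-family distribution, found by bisection on the Lagrange multiplier.

```python
import numpy as np

# Hypothetical sketch: the stationary state as the maximizer of Shannon
# entropy H(p) = -sum p_i ln p_i subject to sum_i c_i p_i = m and
# normalization. The maximizer has the form p_i ∝ exp(-beta * c_i);
# the multiplier beta is found by bisection on the monotone constraint.

def max_entropy_state(c, m, lo=-50.0, hi=50.0):
    c = np.asarray(c, dtype=float)
    def mean(beta):
        w = np.exp(-beta * (c - c.min()))   # shift exponent for stability
        p = w / w.sum()
        return p @ c, p
    for _ in range(200):                    # bisection: mean(beta) decreases in beta
        mid = 0.5 * (lo + hi)
        mu, p = mean(mid)
        if mu > m:
            lo = mid                        # need larger beta to lower the mean
        else:
            hi = mid
    return p

p = max_entropy_state(c=[1.0, 2.0, 3.0], m=2.4)
print(p, p @ np.array([1.0, 2.0, 3.0]))    # constraint is met at the optimum
```

The uniqueness of the solution (the third statement) corresponds to the strict concavity of the entropy on the constraint set.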
For a closed system, the condition determines the spontaneous direction of exchange of extensive variables. However, a question arises about the homogeneity of the function : entropy is a homogeneous function of degree one only in systems that are in a state of internal equilibrium. In cases where the distribution is scale-invariant but does not correspond to internal equilibrium, a fractal structure of the macrosystem is observed, in which is a homogeneous function with a degree of homogeneity less than one. With this remark in mind, entropy can still be considered an extensive quantity. Whether the degree of homogeneity can serve as a measure of the equilibrium of the macrosystem requires further consideration.
Since is an extensive variable under the condition of internal equilibrium, is a first-order homogeneous function when the system is scaled by times or when identical macrosystems in the equilibrium state are combined:
In accordance with the Euler relations for homogeneous functions
Let us denote . Since is a homogeneous function of the first-order, then are homogeneous of the zero-order, i.e., they are intensive variables: for any value . This means that a proportional increase in the extensive variables will not bring the macrosystem out of the internal equilibrium state if before that, the system was in internal equilibrium.
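Both facts, the Euler relation and the zero-order homogeneity of the intensive variables, can be checked numerically. The sketch below uses an illustrative Cobb–Douglas form as a stand-in for the first-order homogeneous entropy (the function and exponent are assumptions, not the article's state equation):

```python
import numpy as np

# Numerical check of the Euler relation sum_i x_i * dS/dx_i = S(x) for a
# first-order homogeneous function, and of the scale invariance of its
# gradient (the intensive variables). Cobb-Douglas is illustrative only.

def S(x, a=0.3):
    return x[0] ** a * x[1] ** (1 - a)

def grad(f, x, h=1e-6):
    x = np.asarray(x, float)
    return np.array([(f(x + h * e) - f(x - h * e)) / (2 * h)
                     for e in np.eye(len(x))])

x = np.array([2.0, 5.0])
g = grad(S, x)
print(g @ x, S(x))          # Euler relation: the two values coincide
print(grad(S, 10 * x) - g)  # gradient is unchanged under scaling by 10
```

The second print confirms that a proportional increase of all extensive variables leaves the intensive variables, and hence the internal equilibrium, unchanged.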
Under the assumption that the function is differentiable and its partial derivatives are continuous, in accordance with the necessary condition for the function to be differentiable, there exists a total differential
Two consequences of Equation (11) can be formulated.
Consequence 1. By differentiating the Euler relation (10), we obtain
By comparing (11) and (12), we see that the second term in (12) should be equal to zero:
Consequence 2. The exchange process between subsystems and (with a positive direction of fluxes from to ) can be described using Equations (2) and (11) as follows:
The parameter is an extensive variable and its dependence on other state variables is an objective function for spontaneous processes in the macrosystem.
From the established relations, the following conclusion can be drawn:
Let be a vector measure on a measurable space . Then, the entropy of a subsystem is defined as a function , such that
- , where is the entropy of X under internal equilibrium of all subsystems X;
- , where is the entropy of a subsystem with a zero-vector measure;
- For any two subsystems such that , , it holds that .
These properties correspond to Shannon–Khinchin Axioms for entropy [].
The entropy function meets the conditions of Shannon entropy if the number of subsystems is large enough, and statistically corresponds to the entropy of the distribution of the state parameters of the subsystems of the macrosystem. Let us explain this statement. Among random vectors with unrestricted support and finite variance, the normal distribution has the maximum entropy (this can be considered a justification for the distribution law of the parameters of the subsystems that form a macrosystem).
The entropy of the normal distribution consists of two terms, the sum of the marginal entropies and a correlation term equal to one half of the logarithm of det R, where R is the correlation matrix. The first term corresponds to the complete independence of the system elements and characterizes the structure of the macrosystem; the second describes the relationships in the macrosystem. The correlation between fluxes grows with the magnitude of the exchange driving forces . Figure 2 illustrates this kind of dependency when the components and of the vector are changed. The second term, which is always negative when R differs from the identity matrix E, then decreases. This explains the statement about as an objective function that reaches its maximum when the macrosystem reaches an equilibrium state, and as a state function that sets the direction of processes in a closed system, increasing the dispersion of low-level subsystem parameters while simplifying the macrosystem structure at a high level.
Figure 2.
Dependency of correlation between fluxes in a macrosystem: and are the components of the vector .
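The decomposition of the Gaussian entropy into an "independence" term and a non-positive correlation term can be verified directly. The covariance matrix below is illustrative:

```python
import numpy as np

# Sketch: H = sum_i (1/2) ln(2*pi*e*sigma_i^2) + (1/2) ln det R for a
# multivariate normal. The first term is the entropy of independent
# components; the second (<= 0) accounts for correlations.

def gaussian_entropy(cov):
    n = cov.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** n * np.linalg.det(cov))

cov = np.array([[2.0, 0.8],
                [0.8, 1.0]])                    # illustrative covariance
sig = np.sqrt(np.diag(cov))
R = cov / np.outer(sig, sig)                    # correlation matrix

independent = 0.5 * np.sum(np.log(2 * np.pi * np.e * sig ** 2))
coupling = 0.5 * np.log(np.linalg.det(R))       # <= 0, with equality at R = E

print(gaussian_entropy(cov), independent + coupling)  # the two agree
print(coupling)                                       # negative for R != E
```

Since det R ≤ 1 with equality only for the identity matrix, growing correlations always reduce the total entropy, in line with the discussion above.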
In the process of spontaneous, unforced exchange, the value of a closed macrosystem cannot decrease, since is the objective function for the system behaviour. In order for the value , called the entropy production, to be positive during any process of a given non-zero intensity, it is sufficient to fulfil the condition
If the process duration is infinitely long, then the macrosystem’s natural evolution leads it to a state of internal equilibrium, which corresponds to the achievement of the maximum value. This is equivalent to the statement that in the internal equilibrium state, is maximum.
Intensive variables can be considered as specific potentials. Their difference according to (15) is the driving force of the exchange process. The fluxes are directed from subsystems with lower values of intensive parameters to subsystems with higher values of intensive parameters. Thus, Equation (14) can be rewritten as follows:
where is the matrix of infrastructure coefficients. Note that the infrastructure of the exchange process involves a wide variety of features of the medium at the subsystems’ boundaries; in thermodynamic macrosystems, for example, these features are surface area, roughness, etc. All these parameters can serve as controls in the corresponding optimization problems, but here the matrix is assumed to be constant. If the driving force is constant during the process, the process is called stationary. Since the relation is linear, the superposition principle applies, and we single out the classes of reversible processes and minimal-dissipation processes.
7. Differential Form of the State Equation
The state function can be given in differential form, as is typical for thermodynamic systems:
If we cannot write a state function explicitly, then the problem of integrability of should be solved. Equation (17) is the Pfaffian form. A Pfaffian form is said to be holonomic if there exists an integrating multiplier such that
The Pfaffian form of two independent variables is always holonomic, that is, there is always an integrating multiplier . However, for , the integrating multiplier exists if the holonomy conditions are satisfied: for any three different ,
These conditions are obtained from the equality of the second mixed derivatives with respect to any variable pairs (Maxwell’s relations)
and by eliminating the integrating factor from these equalities. In addition to conditions (19), all products must be homogeneous of zero order.
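For three variables, the holonomy condition is equivalent to the Frobenius condition X · curl X = 0 for the coefficient vector of the Pfaffian form. The sketch below checks it by finite differences for a form that is holonomic by construction, being a gradient divided by an integrating factor; the potential S and factor lam are illustrative assumptions:

```python
import numpy as np

# Finite-difference check of the holonomy (Frobenius) condition
# X . curl(X) = 0 for a three-variable Pfaffian form sum_i X_i dx_i.
# Here X = grad(S) / lam with illustrative S and integrating factor lam,
# so the form admits the integrating multiplier lam by construction.

def S(x):   return x[0] * x[1] + x[2] ** 2
def lam(x): return 1.0 + x[0] ** 2

def X(x, h=1e-6):
    g = np.array([(S(x + h * e) - S(x - h * e)) / (2 * h)
                  for e in np.eye(3)])
    return g / lam(x)

def holonomy_defect(x, h=1e-5):
    J = np.array([(X(x + h * e) - X(x - h * e)) / (2 * h)
                  for e in np.eye(3)])        # J[k, i] = dX_i / dx_k
    curl = np.array([J[1, 2] - J[2, 1],
                     J[2, 0] - J[0, 2],
                     J[0, 1] - J[1, 0]])
    return X(x) @ curl                        # vanishes for holonomic forms

print(holonomy_defect(np.array([0.7, 1.3, 0.4])))   # ~ 0 up to round-off
```

A non-holonomic form would give a clearly non-zero defect, signalling that no state function exists for that differential relation.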
8. Concavity of Entropy Function
The gradient of the function determines the vector of intensive variables of the macrosystem. As the extensive variables increase, the value also increases, but at a slower rate (the law of diminishing returns), so that for all . Moreover, the Hessian matrix of a first-order homogeneous function is negative semi-definite. Indeed, it follows from the Euler relations (10), by differentiating both sides of this equation with respect to ,
For an arbitrary vector , the necessary conditions for the extremum of a quadratic form in are
which corresponds to the maximum point (by the law of diminishing returns) of the quadratic form . Since the product (21) is equal to zero, the quadratic form at the maximum point is also equal to zero. Thus, for any value of , which was to be proved. Note that since the Hessian matrix is symmetric, all its eigenvalues are real numbers.
The negative semi-definiteness of the Hessian matrix corresponds to the upward convexity (concavity) of the function and its unimodality as an objective function. As the reserves of the extensive variables in the macrosystem increase, increases and the intensive parameters decrease, reducing the magnitude of the resource exchange driving force. In accordance with (14), this behaviour of the intensive parameters means that when the macrosystem is perturbed, changing its internal equilibrium conditions, the resource exchange processes are directed towards counteracting the change, and thus the Le Chatelier principle is fulfilled.
Partial derivatives describe the saturation of the system with the extensive variables and determine the substitution and complementation of the extensive variables in the macrosystem. If the extensive variables are substitutes, an increase in one of them reduces the intensive variable conjugate to the other; if they are complements, an increase in one of them, on the contrary, increases the intensive variable conjugate to the other.
9. Metric Features of Entropy
Let us consider a particular case of exchange processes in a macrosystem consisting of two subsystems and , where the fluxes linearly depend on the differences between the intensive variables of the subsystems:
where
Proposition 1.
Matrix
is the matrix of infrastructure coefficients describing the exchange possibilities at the boundary between the subsystems, and it is symmetric.
Proof.
Given the self-similarity property, the system can be divided into a statistically significant disjoint set of subsystems such that the characteristics of these subsystems form a representative sample of random variables , etc. The eigenvectors of the matrices and coincide and form a system of orthonormal vectors. Since the eigenvectors of any matrix and its inverse are the same, and are symmetric and commute, so their product is a symmetric matrix. Since , the matrix of phenomenological coefficients is symmetric (the Onsager conditions). □
Proposition 2.
Matrix is a positive definite matrix.
Proof.
The right-hand side of Equation (16) under condition (24) is a quadratic form:
For any non-zero values of the exchange driving forces , the entropy production is positive, as shown in (15). Therefore, is a positive definite matrix. This means that, under a linear dependence of the fluxes on the driving forces, the entropy production is a convex function of the driving forces. □
The consequence of this proposition is that entropy production is a metric in the space of stationary processes and can be used to determine the distance between processes
Let us present some properties of this metric:
- Zero in the space of stationary processes represents reversible processes, for which . According to the third Levitin–Popkov axiom, there is only one reversible process in the macrosystem;
- The distance between two processes and is determined as . It is evident that ;
- The distance satisfies the triangle inequality because is a positive definite symmetric matrix: all its eigenvalues are positive real numbers.
Thus, the entropy production in a macrosystem characterizes the distance of the stationary exchange processes occurring in it from the corresponding reversible process. For linear systems, a generalized entropy production, averaged over the probability measure of the stationary processes, can be considered: . Such averaging for cyclic and stochastic processes is illustrated in Figure 3.
Figure 3.
Averaging of entropy production in cyclic (a), and stochastic (b) stationary processes.
The distance between irreversible processes is an important concept for analyzing complex systems. The class of minimal dissipation processes indicates the limit of performance of macrosystems when the average intensities of the processes in it are restricted. To determine the effectiveness of an arbitrary process, it is necessary to find a distance between this process and the minimum dissipation process. This distance is determined by the production of entropy.
10. Trajectories of the Exchange Process
For non-stationary exchange processes, it is necessary to determine the trajectory of the process, i.e., the change in time of all subsystem parameters when approaching the equilibrium state. For a linear dependence between fluxes and driving forces (24), we derive a differential equation that determines the change in the driving forces of the exchange process in a macrosystem consisting of two subsystems and . Under the given initial conditions , this equation describes all the parameters of the subsystems.
The total differentials of the functions are written as
Subtracting the equations for subsystem from the equations for subsystem and taking into account the linear dependencies of fluxes on driving forces (24) and the fact that the driving forces are the differences between the corresponding intensive variables of the subsystems (), we obtain
Equation (28), together with the initial conditions, determines the trajectory of the exchange process.
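Under the linear flux-force law, the driving forces obey a linear relaxation equation, so a simple numerical integration shows the approach to equilibrium. The matrix M below, combining the kinetic and capacity matrices, is an illustrative assumption rather than the article's Equation (28):

```python
import numpy as np

# Hedged sketch of the exchange trajectory: with a linear flux-force law
# the driving-force vector dF relaxes as d(dF)/dt = -M @ dF, where M
# (illustrative) has positive eigenvalues. Forward Euler integration
# shows the monotone approach to the equilibrium dF = 0.

M = np.array([[1.0, 0.2],
              [0.2, 0.5]])                 # illustrative relaxation matrix

def trajectory(dF0, dt=0.01, steps=1000):
    dF = np.asarray(dF0, float).copy()
    for _ in range(steps):
        dF -= dt * (M @ dF)                # explicit Euler step
    return dF

dF_final = trajectory([1.0, -0.5])
print(np.linalg.norm(dF_final))            # far smaller than the initial norm
```

Each eigenmode of M decays exponentially at its own rate, so the trajectory approaches the internal equilibrium state without oscillation, consistent with the entropy increasing monotonically along the way.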
11. Conclusions
The metric properties of entropy production allow for the determination of both the class of minimally irreversible processes [] and the quantitative distance between processes. The stationarity restriction can be removed either by averaging the process parameters over time or by using the superposition principle for processes in linear systems. A generalized macrosystem model, independent of the nature of the processes, is useful for the formalization and investigation of the extreme performance of complex, hierarchically related systems. In such macrosystems, there is a vector of entropy functions, which gives the macrosystem additional degrees of freedom at the level of subsystems.
The given proofs of the phenomenological properties of macrosystems are valid for thermodynamic systems but can also be applied to systems of a different nature, particularly economic systems [,], information exchange systems in communication networks [], and high-performance computers [], where the number of computational cores is already comparable to the number of molecules in 1 μm3 of gas. The main difference in the description of macrosystems of different natures lies in the definition of the extensive quantities that describe the state of a system and in the relationships between the fluxes that arise from interactions between subsystems. In thermodynamic macrosystems, the extensive variables are internal energy, mole number, and volume []; in economic macrosystems, they are resources, goods, and welfare []; in information macrosystems such as recommendation systems, they are the database size and the number of assigned ratings []; and so on. In information macrosystems, we restrict ourselves to describing syntactic information exchange []. Social macrosystems with semantic and pragmatic information exchange processes require a special conceptual apparatus, which is markedly different from the one usually used in formal logic []. For computing systems, the extensive variables are the number of processors of a given type, the amount of memory, and the entropy properties of the computing power; the intensive variables are determined by the contribution that each type of hardware makes to the increase in computing power. When modelling algorithms as complex systems, the extensive variables can be used as control values, the entropy corresponds to the objective function, and the intensive variables are the values of the Lagrange multipliers. Economic analogies of complex systems are often considered.
At the micro level, extensive quantities include the stock of goods, while intensive quantities include the prices and values of the goods. The welfare function has entropy properties in microeconomic systems. At the macro level, it is advisable to choose the gross regional product as the entropy, which is a function of the vector of production factors. The intensive variables in this model are prices in real terms.
Entropy as an extensive quantity is introduced into different models of self-similar systems. Integration problems of the welfare function have been considered in []. The first-order homogeneity properties of the gross regional product have been proven; the Cobb–Douglas production function is most commonly used as an approximation. For communication networks, the entropy properties of the congestion indicator have been proven []. These examples show the universality of the macrosystemic approach to modelling complex systems of different natures.
The graph of interaction between subsystems can be different for each nature of extensive variable, so such a complex system can be represented as a multigraph, in which two nodes can be connected both by edges of different colours and by groups of edges of different colours, corresponding to multiple contact points between subsystems. The absence of edges between the multigraph’s nodes means that the nodes are isolated from each other. Edges of different colours, corresponding to different extensive variables, can be either co-directed or oppositely directed.
The specific behaviour of individual elementary entities is especially relevant for social systems, where free and often unmotivated decision-making is possible and can complicate the application of the model. Nevertheless, the model can still be used to describe processes of various natures occurring simultaneously within a system. For example, a high-performance computer can be represented as a macrosystem in which hardware, software, and engineering subsystems exchange energy, signals, and information, and the average intensities of the computational processes and heat transfer processes are considered to be given. A computer does not perform mechanical work; therefore, all consumed electrical energy is converted into heat and must be dissipated into the surrounding environment.
Funding
This research received no external funding.
Data Availability Statement
No new data were created or analyzed in this study. Data sharing is not applicable to this article.
Conflicts of Interest
The author declares no conflict of interest.
References
- Andresen, B.; Salamon, P. Future Perspectives of Finite-Time Thermodynamics. Entropy 2022, 24, 690. [Google Scholar] [CrossRef] [PubMed]
- Fitzpatrick, R. Thermodynamics and Statistical Mechanics; World Scientific Publishing Company: Chennai, India, 2020; 360p, ISBN 978-981-12-2335-8. [Google Scholar]
- Prunescu, M. Self-Similar Carpets over Finite Fields. Eur. J. Comb. 2009, 30, 866–878. [Google Scholar] [CrossRef]
- Tropea, C.; Yarin, A.L.; Foss, J.F. Springer Handbook of Experimental Fluid Mechanics; Springer: Berlin/Heidelberg, Germany, 2007; 557p, ISBN 978-3-54-030299-5. [Google Scholar]
- Popkov, Y.S. Macrosystem Models of Fluxes in Communication-Computing Networks (GRID Technology). IFAC Proc. Vol. 2004, 37, 47–57. [Google Scholar] [CrossRef]
- Rozonoér, L.I. Resource Exchange and Allocation (a Generalized Thermodynamic Approach). Autom. Remote Control 1973, 34, 915–927. [Google Scholar]
- Popkov, Y.S. Macrosystems Theory and Its Applications. Equilibrium Models; Ser.: Lecture Notes in Control and Information Sciences; Springer: Berlin, Germany, 1995; Volume 203, 323p, ISBN 3-540-19955-1. [Google Scholar]
- Amelkin, S.A.; Tsirlin, A.M. Optimal Choice of Prices and Fluxes in a Complex Open Industrial System. Open Syst. Inf. Dyn. 2001, 8, 169–181. [Google Scholar] [CrossRef]
- Levitin, E.S.; Popkov, Y.S. Axiomatic Approach to Mathematical Macrosystems Theory with Simultaneous Searching for Aprioristic Probabilities and Stochastic Flows Stationary Values. Proc. Inst. Syst. Anal. RAS 2014, 64, 35–40. [Google Scholar]
- Khinchin, A.I. Mathematical Foundations of Information Theory; Dover: New York, NY, USA, 1957; 120p, Original publication in Russian is: Khinchin, A.I. The Concept of Entropy in Probability Theory. Uspekhi Matematicheskikh Nauk 1953, 8, 3–20. [Google Scholar]
- Tsirlin, A.M. Minimum Dissipation Processes in Irreversible Thermodynamics. J. Eng. Phys. Thermophys. 2016, 89, 1067–1078. [Google Scholar] [CrossRef]
- Tsirlin, A.M. External Principles and the Limiting Capacity of Open Thermodynamic and Economic Macrosystems. Autom. Remote Control 2005, 66, 449–464. [Google Scholar] [CrossRef]
- Tsirlin, A.M.; Gagarina, L.G. Finite-Time Thermodynamics in Economics. Entropy 2020, 22, 891. [Google Scholar] [CrossRef] [PubMed]
- Dodds, P.; Watts, D.; Sabel, C. Information Exchange and the Robustness of Organizational Networks. Proc. Natl. Acad. Sci. USA 2003, 100, 12516–12521. [Google Scholar] [CrossRef] [PubMed]
- De Vos, A. Endoreversible Models for the Thermodynamics of Computing. Entropy 2022, 22, 660. [Google Scholar] [CrossRef] [PubMed]
- Pebesma, E.; Bivand, R. Spatial Data Science: With Applications in R; Chapman and Hall/CRC: Boca Raton, FL, USA; New York, NY, USA, 2023; 314p. [Google Scholar] [CrossRef]
- Martinás, K. Entropy and Information. World Futures 1997, 50, 483–493. [Google Scholar] [CrossRef]
- Jøsang, A. Subjective Logic. A Formalism for Reasoning Under Uncertainty; Springer: Cham, Switzerland, 2016; 337p, ISBN 978-3-319-42335-7. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).