Article

A Criterion for Distinguishing Temporally Different Dynamical Systems

Department of Industrial Engineering, Faculty of Engineering, Ariel University, Kiryat Ha-Mada, Ariel 4070000, Israel
Entropy 2025, 27(9), 916; https://doi.org/10.3390/e27090916
Submission received: 22 June 2025 / Revised: 24 August 2025 / Accepted: 28 August 2025 / Published: 29 August 2025

Abstract

The paper presents a method for distinguishing dynamical systems with respect to their behavior. The suggested criterion is interpreted as the internal time of an ergodic dynamical system: a time generated by the system itself that differs from the external global or reference time. The paper gives a formal definition of the internal time of a dynamical system in the form of an entropy ratio, considers its basic properties, and provides examples of the analysis of dynamical systems.

1. Introduction

Analysis of multiagent systems starts with distinguishing the system's elements, that is, the subsystems whose activities differ from the activity of the system as a whole and from those of the other subsystems [1].
Formal criteria for such distinguishing are based on different structural and functional characteristics of the system and depend on the aim of the analysis.
One such criterion, representing the dynamics of the system in general, is the entropy of the system [2,3,4]. As an invariant of the system, entropy is used to distinguish the system from other systems. Another criterion, recently applied to distinguishing the leading agents in a group [5], is based on the structure of the agents' connections.
The third type of such criteria is the local or internal time of the dynamical system; an overview and informal description of internal time is presented in [6].
Formal definitions of internal time follow three different approaches (detailed consideration of these approaches is given in Appendix A).
In the approach traced back to Lévy [7,8], local time is defined as the period during which the states of the system stay in a certain set. The period is measured using the external time (also known as global, universal, or reference time) in which the process evolves [9].
The second approach, developed by Prigogine and his group [10,11], considers internal time (or age) as an operator acting on the states of the system in parallel to the operator of the system’s evolution. This approach was developed in several directions; see, e.g., [12] and the references therein.
Finally, in the third approach, developed by Valleé [13,14,15], internal time of a dynamical system is defined as a measure of divergence of the system’s trajectories. This definition is closely related to the Lyapunov criterion of stability of the system’s dynamics [16]. For later development of this approach, see, e.g., [17].
Despite the differences, the indicated formulations have one common disadvantage, which is the need for an external reference time or external indices. In the Lévy approach, external time is used as a measure of the periods; in the Prigogine approach, it is a part of the system’s evolution; and in the Valleé approach, it is hidden in the term “trajectory”.
In this paper, we suggest a new definition of the internal time of an ergodic dynamical system that does not require an external reference time, and we use this internal time to distinguish between temporally different systems.

2. Materials and Methods

Let $S = (X, \varphi, \mu)$ be an ergodic dynamical system, where $X$ is a differentiable compact manifold called the state space; $\varphi: X \to X$ is an automorphism of $X$; $\mu: X \to [0,1]$ is a probability measure; and for any $A \subseteq X$, $\mu(A) = \mu(\varphi^{-1}A)$ holds.
Let $\alpha = \{A_1, A_2, \ldots, A_{n_\alpha}\}$, with $A_i \cap A_j = \emptyset$ for $i \neq j$, $i, j = 1, 2, \ldots, n_\alpha$, and $\bigcup_{i=1}^{n_\alpha} A_i = X$, be a finite partition of the space $X$.
The entropy of the partition $\alpha$ with respect to the measure $\mu$ is the value [18,19]
$$H(\alpha) = -\sum_{i=1}^{n_\alpha} \mu(A_i) \log \mu(A_i),$$
where $\log$ is taken to base $2$ and it is assumed that $0 \log 0 = 0$.
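This entropy can be computed directly from the measures of the partition sets. A minimal sketch in Python (the helper name `partition_entropy` is ours, not from the paper):

```python
from math import log2

def partition_entropy(measures):
    """H(alpha) = -sum_i mu(A_i) * log2(mu(A_i)), with 0*log 0 = 0."""
    return -sum(m * log2(m) for m in measures if m > 0)

# Uniform partition into four sets: H = log2(4) = 2 bits
print(partition_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
```

The filter `if m > 0` implements the convention $0 \log 0 = 0$.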
Let $\zeta = \{Y_1, Y_2, \ldots, Y_{n_\zeta}\}$ and $\xi = \{Z_1, Z_2, \ldots, Z_{n_\xi}\}$ be two partitions of $X$. The product of the partitions $\zeta$ and $\xi$ is the partition
$$\zeta \vee \xi = \{Y_i \cap Z_j \mid Y_i \in \zeta,\ Z_j \in \xi,\ i = 1, 2, \ldots, n_\zeta,\ j = 1, 2, \ldots, n_\xi\}.$$
Denote by
$$\varphi^\nu \alpha = \{\varphi^\nu A_1, \varphi^\nu A_2, \ldots, \varphi^\nu A_{n_\alpha}\}, \quad \nu = 0, 1, 2, \ldots,$$
the $\nu$th application of the automorphism $\varphi$ to the partition $\alpha$, where we assume that $\varphi^0 \alpha = \alpha$, and by
$$\alpha_N^\varphi = \bigvee_{\nu=0}^{N} \varphi^\nu \alpha$$
the partition obtained by multiplying the $N+1$ partitions $\varphi^0\alpha, \varphi^1\alpha, \ldots, \varphi^N\alpha$ obtained by iterative applications of the automorphism $\varphi$ to the partition $\alpha$.
Denote by
$$H(\alpha_N^\varphi) = H\!\left(\bigvee_{\nu=0}^{N} \varphi^\nu \alpha\right) = H\!\left(\varphi^N\alpha \vee \bigvee_{\nu=0}^{N-1} \varphi^\nu \alpha\right)$$
the entropy of the partition $\alpha_N^\varphi$.
Definition 1
([18,19,20]). The limiting value
$$h(\varphi, \alpha) = \lim_{N \to \infty} \frac{1}{N} H(\alpha_N^\varphi)$$
is called the entropy of the system $S$ with respect to time, and its supremum
$$h(\varphi) = \sup_\alpha h(\varphi, \alpha),$$
taken over all finite measurable partitions of $X$, is called the entropy of the dynamical system $S$.
Originally, the concept of the entropy of a dynamical system was suggested by Kolmogorov [2,3] and formulated in terms of the phase flow $g^t$, where $g^t: X \to X$ is the trajectory of the system in $X$ at time $t$. Later, Sinai [4] suggested the formulation used above. For a detailed treatment of entropy theory in the framework of dynamical systems, see the books [18,19,20]; a brief overview of these entropies is given in Appendix B.
Also, below we will need the following facts.
Let $\zeta = \{Y_1, Y_2, \ldots, Y_{n_\zeta}\}$ and $\xi = \{Z_1, Z_2, \ldots, Z_{n_\xi}\}$ be two partitions of $X$. If each set $Z \in \xi$ is a subset of some set $Y \in \zeta$, then the partition $\xi$ is said to be a refinement of the partition $\zeta$, written $\xi \succeq \zeta$.
Lemma 1
([19]). If $\xi \succeq \zeta$, then $H(\zeta) \leq H(\xi)$.
Let $\zeta \vee \xi$ be the product of the partitions $\zeta$ and $\xi$. Since each set $W \in \zeta \vee \xi$ is a subset of some set $Y \in \zeta$ and of some set $Z \in \xi$, the partition $\zeta \vee \xi$ is a refinement of both $\zeta$ and $\xi$; that is, $\zeta \vee \xi \succeq \zeta$ and $\zeta \vee \xi \succeq \xi$.
Then, by Lemma 1, $H(\zeta) \leq H(\zeta \vee \xi)$ and $H(\xi) \leq H(\zeta \vee \xi)$.

3. Results

The internal time of a dynamical system is defined as follows. Let $\alpha$ be a partition that provides the supremum of the entropy $h(\varphi, \alpha)$; that is,
$$h(\varphi, \alpha) = h(\varphi).$$
Definition 2.
The value
$$\tau_N(\varphi) = \frac{1}{H(\varphi^0\alpha)} \left[ H\!\left(\varphi^N\alpha \vee \varphi^{N-1}\alpha\right) - H\!\left(\varphi^{N-1}\alpha\right) \right]$$
is called the internal time of the dynamical system $S$ at iteration $N$, and the limit
$$\tau(\varphi) = \lim_{N \to \infty} \tau_N(\varphi)$$
is called the internal time of the dynamical system $S$.
The suggested definition follows a widely accepted understanding of time as some form of change in entropy. The idea of such definitions is to provide a quantitative parameter that represents the system's stability and periodicity. For example, the already mentioned Lyapunov criterion [16] represents the divergence of the system's trajectories but does not relate it to the entropy of the system, which is one of the main criteria for distinguishing systems [19,20]. The suggested definition attempts to bridge this gap.
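In computational terms, Definition 2 requires only three partition entropies per step. A short sketch in the same notation (the function names are ours, not from the paper):

```python
from math import log2

def H(measures):
    """Entropy of a partition given the measures of its sets."""
    return -sum(m * log2(m) for m in measures if m > 0)

def internal_time_step(mu_alpha0, mu_prev, mu_joint):
    """tau_N = [H(phi^N a v phi^{N-1} a) - H(phi^{N-1} a)] / H(phi^0 a)."""
    return (H(mu_joint) - H(mu_prev)) / H(mu_alpha0)

# Two semicircles whose product with their rotation gives four quarter arcs
print(internal_time_step([0.5, 0.5], [0.5, 0.5], [0.25] * 4))  # 1.0
```

The usage line anticipates the circle-rotation example below: halves have entropy 1, their product with the rotated halves has entropy 2, giving $\tau_1 = (2-1)/1 = 1$.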
To clarify the introduced definition, let us consider several examples.
Example 1
(circle rotations). Assume that in the system $S$, the set $X$ is a circle of unit radius and the subsets $A \subseteq X$ are arcs of the circle $X$. The measure $\mu(A)$ of an arc $A$ is defined as the length of $A$ divided by $2\pi$, and the automorphism $\varphi$ defines rotations
$$\varphi(x, \theta) = (x + \theta) \bmod 2\pi, \quad x \in X,$$
of the circle $X$ by the angle $\theta$.
The entropy of such a system is [18,20]
$$h(\varphi) = 0,$$
and we can choose any partition $\alpha = \varphi^0(\alpha, \theta)$.
Let $\varphi^0(\alpha, \theta) = \{A_1, A_2\}$ be a partition of $X$ such that $A_1$ is the left semicircle and $A_2$ is the right semicircle, and let $\varphi$ be the rotation of $X$ by the angle $\theta = \frac{\pi}{2}$. Then,
$$\varphi^1(\alpha, \theta) = \{\varphi^1(A_1, \theta), \varphi^1(A_2, \theta)\}$$
is a partition such that $\varphi^1(A_1, \theta)$ is the bottom semicircle and $\varphi^1(A_2, \theta)$ is the top semicircle.
Hence, $\varphi^1(\alpha, \theta) \vee \varphi^0(\alpha, \theta)$ is a partition of $X$ into four equal disjoint arcs.
The entropy of the partitions $\varphi^0(\alpha, \theta)$ and $\varphi^1(\alpha, \theta)$ is
$$H(\varphi^0(\alpha, \theta)) = H(\varphi^1(\alpha, \theta)) = -\tfrac{1}{2}\log\tfrac{1}{2} - \tfrac{1}{2}\log\tfrac{1}{2} = 1,$$
and the entropy of the partition $\varphi^1(\alpha, \theta) \vee \varphi^0(\alpha, \theta)$ is
$$H(\varphi^1(\alpha, \theta) \vee \varphi^0(\alpha, \theta)) = -4 \cdot \tfrac{1}{4}\log\tfrac{1}{4} = 2.$$
Consequently, the internal time of this system at the first iteration $N = 1$ is
$$\tau_1(\varphi) = \frac{2 - 1}{1} = 1.$$
At the next step $N = 2$, in the partition
$$\varphi^2(\alpha, \theta) = \{A_1, A_2\},$$
$A_1$ is the right semicircle and $A_2$ is the left semicircle, and, similarly to the above, the partition $\varphi^2(\alpha, \theta) \vee \varphi^1(\alpha, \theta)$ consists of four equal disjoint arcs.
Thus, the entropies are
$$H(\varphi^1(\alpha, \theta)) = 1 \quad \text{and} \quad H(\varphi^2(\alpha, \theta) \vee \varphi^1(\alpha, \theta)) = 2,$$
and the internal time at step $N = 2$ is
$$\tau_2(\varphi) = 1.$$
The same partitions and values of the entropies are obtained for any $N = 1, 2, \ldots$ Thus, the internal time of the system is
$$\tau(\varphi) = 1.$$
Note that the internal time of this system depends on the angle $\theta$.
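The arc measures in this example can also be estimated numerically by labeling grid points of the circle with the cell of the product partition they fall into. A sketch under our own choice of grid size and function names:

```python
from math import pi, log2

def H(ms):
    return -sum(m * log2(m) for m in ms if m > 0)

def joint_arc_measures(theta, n_grid=100_000):
    """Measures of the product of the semicircle partition with its
    rotation by theta, estimated on a uniform grid of the circle."""
    counts = {}
    for i in range(n_grid):
        x = 2 * pi * i / n_grid
        # Cell label: (in left semicircle?, in rotated left semicircle?)
        cell = (x < pi, ((x + theta) % (2 * pi)) < pi)
        counts[cell] = counts.get(cell, 0) + 1
    return [c / n_grid for c in counts.values()]

mu_joint = joint_arc_measures(pi / 2)  # four arcs of measure ~1/4 each
tau_1 = (H(mu_joint) - H([0.5, 0.5])) / H([0.5, 0.5])
print(round(tau_1, 3))  # 1.0
```

For $\theta = \pi/2$ the four cells are the quarter arcs, reproducing $\tau_1 = 1$; other angles produce unequal arcs and hence other values.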
Example 2.
In the system $S$, let the set $X = (0, 1]$ be the unit interval without zero, and let the measure $\mu$ be the length of subintervals of $X$. The automorphism $\varphi$ is defined as
$$\varphi(x, m) = \frac{1}{m} x, \quad x \in X.$$
Let
$$\alpha = \left\{ \left(\tfrac{j-1}{m}, \tfrac{j}{m}\right] \ \Big|\ j = 1, \ldots, m \right\}$$
be a partition of $X$. Then,
$$\alpha_N^\varphi = \left\{ \left(\tfrac{j-1}{m^N}, \tfrac{j}{m^N}\right] \ \Big|\ j = 1, 2, \ldots, m^N \right\},$$
the entropy of $\alpha_N^\varphi$ is
$$H(\alpha_N^\varphi) = -\sum_{j=1}^{m^N} \frac{1}{m^N} \log \frac{1}{m^N} = N \log m,$$
and the entropy of the system is
$$h(\varphi) = \sup_\alpha \lim_{N \to \infty} \frac{1}{N} H(\alpha_N^\varphi) = \log m.$$
Let $m = 2$. Then,
$$\varphi^0(\alpha, 2) = \alpha = \left\{\left(0, \tfrac{1}{2}\right], \left(\tfrac{1}{2}, 1\right]\right\},$$
$$\varphi^1(\alpha, 2) = \left\{\left(0, \tfrac{1}{4}\right], \left(\tfrac{1}{4}, \tfrac{1}{2}\right], \left(\tfrac{1}{2}, \tfrac{3}{4}\right], \left(\tfrac{3}{4}, 1\right]\right\},$$
$$\varphi^2(\alpha, 2) = \left\{\left(0, \tfrac{1}{8}\right], \left(\tfrac{1}{8}, \tfrac{1}{4}\right], \left(\tfrac{1}{4}, \tfrac{3}{8}\right], \left(\tfrac{3}{8}, \tfrac{1}{2}\right], \left(\tfrac{1}{2}, \tfrac{5}{8}\right], \left(\tfrac{5}{8}, \tfrac{3}{4}\right], \left(\tfrac{3}{4}, \tfrac{7}{8}\right], \left(\tfrac{7}{8}, 1\right]\right\}.$$
The entropies of these partitions are
$$H(\varphi^0(\alpha, 2)) = -2 \cdot \tfrac{1}{2}\log\tfrac{1}{2} = 1, \quad H(\varphi^1(\alpha, 2)) = -4 \cdot \tfrac{1}{4}\log\tfrac{1}{4} = 2, \quad H(\varphi^2(\alpha, 2)) = -8 \cdot \tfrac{1}{8}\log\tfrac{1}{8} = 3.$$
Thus, the internal times at the first iterations are
$$\tau_1(\varphi) = \frac{2 - 1}{1} = 1 \quad \text{and} \quad \tau_2(\varphi) = \frac{3 - 2}{1} = 1.$$
The same calculations for the next iterations show that
$$\tau_N(\varphi) = 1,$$
and the internal time of the system is
$$\tau(\varphi) = 1.$$
For detailed calculations, see Appendix C.
Note that despite the infinite increase of the entropy of the system, its internal time is finite and is equivalent to the internal time of the circle rotations.
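These entropies and internal times can be checked mechanically. A sketch for $m = 2$ (the layout of the computation is ours):

```python
from math import log2

def H(ms):
    return -sum(m * log2(m) for m in ms if m > 0)

m = 2
# Level-N partition of Example 2: equal intervals of length m^-(N+1)
entropies = [H([1 / m ** (N + 1)] * m ** (N + 1)) for N in range(3)]
print(entropies)  # [1.0, 2.0, 3.0]

# Each partition refines the previous one, so the product equals the
# finer partition, and tau_N = (H_N - H_{N-1}) / H_0
taus = [(entropies[N] - entropies[N - 1]) / entropies[0] for N in range(1, 3)]
print(taus)  # [1.0, 1.0]
```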
Example 3
(Bernoulli shift). Let $X = [0, 1]$ be the unit interval, considered as the union of the two intervals $\left[0, \tfrac{1}{2}\right)$ and $\left[\tfrac{1}{2}, 1\right]$. The Bernoulli shift $\varphi$ on $X$ is defined by the iterative formula
$$x_{N+1} = 2 x_N \bmod 1 = \begin{cases} 2 x_N, & x_N \in \left[0, \tfrac{1}{2}\right), \\ 2 x_N - 1, & x_N \in \left[\tfrac{1}{2}, 1\right]. \end{cases}$$
Let
$$\alpha = \{A_j \mid j = 1, \ldots, m\}$$
be a partition of $X$. The measure $\mu$ is defined as the probability that the state $x_N \in A_j$.
Let $m = 2$ and assume that
$$\varphi^0\alpha = \alpha = \left\{\left(0, \tfrac{1}{2}\right], \left(\tfrac{1}{2}, 1\right]\right\}.$$
Then,
$$\varphi^1\alpha = \left\{\left(0, \tfrac{1}{2}\right], \left(\tfrac{1}{2}, 1\right]\right\}, \quad \varphi^2\alpha = \left\{\left(0, \tfrac{1}{2}\right], \left(\tfrac{1}{2}, 1\right]\right\},$$
and so on.
The entropies of these partitions are
$$H(\varphi^N\alpha) = -2 \cdot \tfrac{1}{2}\log\tfrac{1}{2} = 1.$$
Thus, the internal time at the $N$th iteration is
$$\tau_N(\varphi) = \frac{1 - 1}{1} = 0,$$
and the internal time of the system is also
$$\tau(\varphi) = 0.$$
For detailed calculations, see Appendix C.
The same holds true for any partition $\alpha$ into $m$ subsets such that $\mu(A_j) = \mu(A_k)$, $j, k = 1, 2, \ldots, m$.
Now assume that
$$\varphi^0\alpha = \alpha = \left\{\left(0, \tfrac{1}{3}\right], \left(\tfrac{1}{3}, 1\right]\right\}.$$
Then,
$$\varphi^1\alpha = \left\{\left(0, \tfrac{2}{3}\right], \left(\tfrac{2}{3}, 1\right]\right\}, \quad \varphi^2\alpha = \left\{\left(0, \tfrac{1}{3}\right], \left(\tfrac{1}{3}, 1\right]\right\},$$
and so on.
The entropies of these partitions are
$$H(\varphi^N\alpha) = \log 3 - \tfrac{2}{3}, \quad N = 0, 1, 2, \ldots$$
The products of these partitions are
$$\varphi^N\alpha \vee \varphi^{N-1}\alpha = \left\{\left(0, \tfrac{1}{3}\right], \left(\tfrac{1}{3}, \tfrac{2}{3}\right], \left(\tfrac{2}{3}, 1\right]\right\},$$
and the entropies of the products are
$$H(\varphi^N\alpha \vee \varphi^{N-1}\alpha) = \log 3, \quad N = 0, 1, 2, \ldots$$
Thus, the internal time at the $N$th iteration is
$$\tau_N(\varphi) = \frac{1}{\log 3 - \tfrac{2}{3}}\left[\log 3 - \left(\log 3 - \tfrac{2}{3}\right)\right] = \frac{2}{3\log 3 - 2} \approx 0.726,$$
and the internal time of the system is
$$\tau(\varphi) = \frac{2}{3\log 3 - 2} \approx 0.726.$$
For the other partitions, the internal times are calculated similarly.
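The value 0.726 can be reproduced directly from the two entropies. A brief check (the variable names are ours):

```python
from math import log2

def H(ms):
    return -sum(m * log2(m) for m in ms if m > 0)

h_alpha = H([1 / 3, 2 / 3])         # log2(3) - 2/3, entropy of each phi^N alpha
h_joint = H([1 / 3, 1 / 3, 1 / 3])  # log2(3), entropy of the product partition
tau = (h_joint - h_alpha) / h_alpha
print(round(tau, 3))  # 0.726
```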
It is easy to demonstrate that the internal time has the following properties.
Theorem 1.
$\tau_N(\varphi) \geq 0$ for any $N = 1, 2, \ldots$
Proof.
By the properties of entropy,
$$H(\varphi^0\alpha) \geq 0,$$
and since $\alpha$ includes at least two non-empty sets with positive measures,
$$H(\varphi^0\alpha) > 0.$$
Since
$$\varphi^N\alpha \vee \varphi^{N-1}\alpha \succeq \varphi^{N-1}\alpha,$$
by Lemma 1, the following holds:
$$H(\varphi^N\alpha \vee \varphi^{N-1}\alpha) \geq H(\varphi^{N-1}\alpha).$$
Hence,
$$H(\varphi^N\alpha \vee \varphi^{N-1}\alpha) - H(\varphi^{N-1}\alpha) \geq 0,$$
and the internal time is non-negative. □
The next theorem defines a bound of applicability of internal time in the analysis of the systems.
Theorem 2.
If the system $S = (X, \varphi, \mu)$ is periodic, then $\tau_N(\varphi) < \infty$.
Proof.
In a periodic system, for some $k > 0$, the following holds:
$$\varphi^{N+k}\alpha = \varphi^N\alpha, \quad N = 0, 1, 2, \ldots$$
Hence, $H(\varphi^N\alpha \vee \varphi^{N-1}\alpha) - H(\varphi^{N-1}\alpha)$ is periodic with period $k$. □
Along with this, note that the applicability of the internal time is not limited to periodic systems; it can be used for any system whose trajectories return to an $\varepsilon$-neighborhood of some point in $X$.
Let $S_1 = (X_1, \varphi_1, \mu_1)$ and $S_2 = (X_2, \varphi_2, \mu_2)$ be two dynamical systems.
Definition 3.
We say that the systems $S_1$ and $S_2$ are temporally equivalent if, for each $N = 0, 1, 2, \ldots$, either
$$\tau_N(\varphi_1) = \tau_N(\varphi_2) = 0$$
or the ratios
$$\gamma_N(\varphi_1, \varphi_2) = \frac{\tau_N(\varphi_1)}{\tau_N(\varphi_2)}, \quad N = 0, 1, 2, \ldots,$$
are rational numbers.
Informally speaking, the suggested criterion means that clocks based on temporally equivalent systems can be used together and their readings can be compared with any precision. In contrast, readings of clocks based on temporally different systems can be compared with limited precision only.
Note that in practice, recognition of rational or irrational ratios is possible only in simple cases, while in general, the comparison of internal times can be conducted only with certain finite precision. Then, an additional check of the convergence of the sequences $\tau_N(\varphi_1)$ and $\tau_N(\varphi_2)$, $N = 0, 1, 2, \ldots$, is required.
The sequences $\tau_N(\varphi_1)$ and $\tau_N(\varphi_2)$, $N = 0, 1, 2, \ldots$, of internal times can also be compared by appropriate statistical methods. However, since the elements of each sequence are not independent, such a comparison is strongly limited.
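At finite precision, the rationality test of Definition 3 can only be approximated, for instance by asking whether each ratio is close to a fraction with a small denominator. A sketch of one such heuristic (the function, the denominator bound, and the tolerance are our assumptions, not prescribed by the paper):

```python
from fractions import Fraction

def temporally_equivalent(taus1, taus2, max_den=1000, tol=1e-9):
    """Heuristic finite-precision test of Definition 3: each ratio must be
    (nearly) a rational with denominator <= max_den, or both times zero."""
    for t1, t2 in zip(taus1, taus2):
        if t1 == 0 and t2 == 0:
            continue  # both internal times vanish at this iteration
        if t2 == 0:
            return False
        ratio = t1 / t2
        approx = Fraction(ratio).limit_denominator(max_den)
        if abs(float(approx) - ratio) > tol:
            return False
    return True

print(temporally_equivalent([1.0, 1.0], [0.5, 0.5]))  # True  (ratio 2)
print(temporally_equivalent([2 ** 0.5], [1.0]))       # False (ratio ~ sqrt(2))
```

Since every float is rational, such a test is necessarily a heuristic; this is exactly the limitation discussed above.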
Theorem 3.
If the systems $S_1$ and $S_2$ are isomorphic, then they are temporally equivalent.
In other words, temporal equivalence is weaker than isomorphism of the systems.
Proof.
Let $u: X_1 \to X_2$ be an isomorphism of the systems $S_1$ and $S_2$, which means that $\varphi_2 = u \varphi_1 u^{-1}$. Thus, for each $N = 0, 1, 2, \ldots$,
$$\varphi_2^N(u\alpha) = u \varphi_1^N u^{-1}(u\alpha) = u \varphi_1^N \alpha.$$
Since $u$ is an isomorphism, it gives ($N = 1, 2, \ldots$)
$$H(u\varphi_1^{N-1}\alpha) = H(\varphi_1^{N-1}\alpha), \quad H(u\varphi_1^N\alpha \vee u\varphi_1^{N-1}\alpha) = H(\varphi_1^N\alpha \vee \varphi_1^{N-1}\alpha).$$
Hence, for each $N = 1, 2, \ldots$, the following holds:
$$\tau_N(\varphi_2) = \frac{H(\varphi_2^N(u\alpha) \vee \varphi_2^{N-1}(u\alpha)) - H(\varphi_2^{N-1}(u\alpha))}{H(\varphi_2^0(u\alpha))} = \frac{H(u\varphi_1^N\alpha \vee u\varphi_1^{N-1}\alpha) - H(u\varphi_1^{N-1}\alpha)}{H(u\varphi_1^0\alpha)} = \frac{H(\varphi_1^N\alpha \vee \varphi_1^{N-1}\alpha) - H(\varphi_1^{N-1}\alpha)}{H(\varphi_1^0\alpha)} = \tau_N(\varphi_1),$$
which is required. □
Let us illustrate the use of the suggested criterion.
Example 4.
Consider a pair of Tsetlin automata $A_1$ and $A_2$ acting in the stochastic environments
$$E_1 = (p_{11}, p_{12}, \ldots, p_{1m}) \quad \text{and} \quad E_2 = (p_{21}, p_{22}, \ldots, p_{2m}),$$
where $p_{k1} + p_{k2} + \cdots + p_{km} = 1$, $k = 1, 2$.
The activity of each automaton $A_k$ is defined as follows [21]. Assume that the state space of the automaton $A_k$ is
$$S_k = \{s_{k1}, s_{k2}, \ldots, s_{kn}\},$$
and assume that in each state $s_{ki}$, the automaton can conduct one of the actions $a_{kj}$ from the set
$$A_k = \{a_{k1}, a_{k2}, \ldots, a_{km}\}.$$
If the automaton $A_k$ conducts an action $a_{kj}$, then with probability $p_{kj}$ its outcome is $o_{kj} = 1$ (payoff), and with probability $q_{kj} = 1 - p_{kj}$ its outcome is $o_{kj} = 0$ (reward), where the probabilities $p_{kj}$, $j = 1, 2, \ldots, m$, are defined by the environment $E_k$, $k = 1, 2$.
Transitions of the automaton $A_k$ are defined by two matrices
$$B_k^1 = \left(b_{ij}^{k1}\right) \quad \text{and} \quad B_k^0 = \left(b_{ij}^{k0}\right)$$
such that each row of the matrix $B_k^1$ includes a single element $b_{ij}^{k1} = 1$ and each row of the matrix $B_k^0$ includes a single element $b_{ij}^{k0} = 1$; all other elements of the matrices are zeros.
Thus, being in the state $s_{ki}$ and receiving the outcome $o_{kj} = 1$, the automaton transitions to the state $s_{kj}$ prescribed by the element $b_{ij}^{k1} = 1$, and upon receiving the outcome $o_{kj} = 0$, the automaton transitions to the state $s_{kj}$ prescribed by the element $b_{ij}^{k0} = 1$.
Assume that the automaton $A_k$ is in the state $s_{ki}$. Then, the probability $\rho_{ij}^k$ of transition from the state $s_{ki}$ to the state $s_{kj}$ is
$$\rho_{ij}^k = p_{kl} b_{ij}^{k1} + q_{kl} b_{ij}^{k0}, \quad k = 1, 2, \quad l = 1, 2, \ldots, m, \quad i, j = 1, 2, \ldots, n.$$
Let each automaton $A_k$ be a binary automaton with $S_k = \{0, 1\}$ acting in an environment with two states, $E_k = (p_{k1}, p_{k2})$, $k = 1, 2$.
Transitions of the automata are defined by the matrices
$$B_k^1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \quad \text{and} \quad B_k^0 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},$$
which specify that if the automaton $A_k$ receives the outcome $o_k = 1$, then it changes its state to the opposite one, and if it receives the outcome $o_k = 0$, then it stays in its current state.
In terms of dynamical systems, such activity is defined by the $\mathrm{xor}$ function:
$$s_k(N) = \mathrm{xor}(s_k(N-1), o_k(N-1)), \quad N = 1, 2, \ldots$$
The matrices of transition probabilities are
$$\rho^k = \begin{pmatrix} \rho_{11}^k & \rho_{12}^k \\ \rho_{21}^k & \rho_{22}^k \end{pmatrix} = \begin{pmatrix} q_{k1} & p_{k1} \\ p_{k2} & q_{k2} \end{pmatrix}.$$
The steady-state probabilities of the automaton $A_k$ are
$$r_{k1} = \frac{p_{k2}}{p_{k1} + p_{k2}} \quad \text{and} \quad r_{k2} = \frac{p_{k1}}{p_{k1} + p_{k2}},$$
and the expected outcome of the automaton $A_k$ is
$$E(o_k) = \frac{2 p_{k1} p_{k2}}{p_{k1} + p_{k2}}, \quad k = 1, 2.$$
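The closed-form steady-state probabilities can be cross-checked against the transition matrix by power iteration. A sketch for an assumed environment $E = (0.2, 0.6)$ (the numbers and function name are ours):

```python
def tsetlin_steady_state(p1, p2):
    """Closed-form steady-state probabilities and expected outcome of the
    binary Tsetlin automaton in environment E = (p1, p2)."""
    r1 = p2 / (p1 + p2)
    r2 = p1 / (p1 + p2)
    return r1, r2, 2 * p1 * p2 / (p1 + p2)

p1, p2 = 0.2, 0.6
r1, r2, eo = tsetlin_steady_state(p1, p2)

# Cross-check: iterate the chain rho = [[q1, p1], [p2, q2]] to stationarity
q1, q2 = 1 - p1, 1 - p2
dist = [0.5, 0.5]
for _ in range(200):
    dist = [dist[0] * q1 + dist[1] * p2, dist[0] * p1 + dist[1] * q2]

print(round(r1, 6), round(r2, 6), round(eo, 6))  # 0.75 0.25 0.3
print([round(d, 6) for d in dist])               # [0.75, 0.25]
```

The second eigenvalue of the chain is $1 - p_1 - p_2$, so the iteration converges geometrically to the stated formulas.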
Each automaton $A_k$, $k = 1, 2$, is associated with the dynamical system $S_k = (X_k, \varphi_k, \mu_k)$, where $X_k = [0, 1]$, the automorphism $\varphi_k: X_k \to X_k$ is defined by the matrices $B_k^0$ and $B_k^1$, and the measure $\mu_k$ coincides with the steady-state probabilities $r_k$.
Let $\alpha_k = \{A_{k1}, A_{k2}\}$ be a partition of the space $X_k$ such that $\mu_k(A_{k1}) = r_{k1}$ and $\mu_k(A_{k2}) = r_{k2}$.
Then, the internal time $\tau_N^k = \tau_N(\varphi_k)$ per iteration $N = 0, 1, 2, \ldots$ and the internal time $\tau^k = \tau(\varphi_k)$ of the system $S_k$ define the corresponding internal times of the Tsetlin automaton $A_k$.
If the automata $A_1$ and $A_2$ act in equivalent environments $E_1 = E_2$, then from Definition 3 it follows that they are temporally equivalent.
In fact, if $E_1 = E_2$, then
$$r_{11} = r_{21} \quad \text{and} \quad r_{12} = r_{22}.$$
Hence, in the partitions $\alpha_1 = \{A_{11}, A_{12}\}$ and $\alpha_2 = \{A_{21}, A_{22}\}$,
$$\mu_1(A_{11}) = \mu_2(A_{21}) \quad \text{and} \quad \mu_1(A_{12}) = \mu_2(A_{22}).$$
Since, by definition, $\varphi_1 \equiv \varphi_2$, for each $N = 1, 2, \ldots$, the following holds:
$$H(\varphi_1^N\alpha_1) = H(\varphi_2^N\alpha_2) \quad \text{and} \quad H(\varphi_1^N\alpha_1 \vee \varphi_1^{N-1}\alpha_1) = H(\varphi_2^N\alpha_2 \vee \varphi_2^{N-1}\alpha_2),$$
which results in the equivalence of the internal times:
$$\tau_N(\varphi_1) = \tau_N(\varphi_2), \quad N = 0, 1, 2, \ldots$$
On the other hand, if the automata act in different environments $E_1 \neq E_2$, then the temporal equivalence of the automata $A_1$ and $A_2$ is determined by the ratio $\gamma_N(\varphi_1, \varphi_2)$ of their internal times per iteration. For example, if the environments $E_1$ and $E_2$ are such that
$$\alpha_1 = \left\{\left(0, \tfrac{1}{3}\right], \left(\tfrac{1}{3}, 1\right]\right\} \quad \text{and} \quad \alpha_2 = \left\{\left(0, \tfrac{2}{3}\right], \left(\tfrac{2}{3}, 1\right]\right\},$$
then the automata $A_1$ and $A_2$ are temporally equivalent, but if they are such that
$$\alpha_1 = \left\{\left(0, \tfrac{1}{3}\right], \left(\tfrac{1}{3}, 1\right]\right\} \quad \text{and} \quad \alpha_2 = \left\{\left(0, \tfrac{1}{4}\right], \left(\tfrac{1}{4}, 1\right]\right\},$$
then the automata $A_1$ and $A_2$ are temporally different.
Let us demonstrate the behavioral equivalence of a binary Tsetlin automaton and Bernoulli shifts.
Lemma 2.
Let $A$ be a Tsetlin automaton acting in the environment $E = (p_1, p_2)$. Then there exist Bernoulli shifts $S_1$ and $S_2$ such that, with certain probability, the behavior of $A$ is equivalent to the behavior of one of the shifts $S_1$ and $S_2$.
Proof.
To prove the lemma, we construct the required Bernoulli shifts $S_1$ and $S_2$ given the automaton $A$ and its environment $E$.
Given the environment $E = (p_1, p_2)$, the steady-state probabilities of $A$ are
$$r_1 = \frac{p_2}{p_1 + p_2} \quad \text{and} \quad r_2 = \frac{p_1}{p_1 + p_2}.$$
Define the state space $X = [0, 1]$ and two partitions
$$\alpha_1 = \{(0, r_1], (r_1, 1]\} \quad \text{and} \quad \alpha_2 = \{(0, r_2], (r_2, 1]\}.$$
Then, the shift $S_1$ acting on the partition $\alpha_1$ corresponds to the activity of the automaton $A$ while it receives the outcome $o = 1$ (payoff), and the shift $S_2$ acting on the partition $\alpha_2$ corresponds to the activity of the automaton $A$ while it receives the outcome $o = 0$ (reward).
Finally, recalling that the probability of receiving the outcome $o = 1$ is $r_1$ and the probability of receiving the outcome $o = 0$ is $r_2$, we obtain the statement of the lemma. □

4. Discussion

In the paper, we consider internal time of an ergodic dynamical system as an operational criterion for distinguishing subsystems that demonstrate autonomous behavior. A rich collection of the most influential ideas and approaches to understanding time was published by Jaroszkiewicz in the book [22], which continues the earlier seminal philosophical work by Whitrow [23], who, among other ideas, contraposed the concepts “arrow of time” and “circle of time”.
In the natural sciences, the concept of time is usually considered together with the concept of space. The best-known popular sources about time written by physicists are probably the book by Hawking [24] and the books by Carroll [25] and Rovelli [26]. An influential source about the arrow of time is the book by Zeh [27].
The ideas considered in this paper were inspired by the concept of time derived from the steps of logical implications suggested by Reichenbach [28]. Following classification by Jaroszkiewicz [22], such an approach can be considered in the framework of “contextual truth” without referencing “external reality”.
After publication of the books by Haken [29] and Prigogine [30] and further studies of non-linear dynamical systems, it became clear that each dynamical system generates a certain internal time that characterizes and is characterized by the system’s behavior and its interactions with the environment.
From these studies, it also follows that internal time of linear systems is, in a certain sense, proportional to the external global time, and the internal time of non-linear systems can strongly differ from the global time and even can have varying scales.
Appendix A briefly presents the three most influential definitions of internal time that inspired this paper.
As indicated in the Introduction, the need for a criterion for distinguishing systems with respect to their behavior arose in the analysis of multiagent systems. For example, in the consideration of swarm dynamics, it is required to distinguish the agents that behave differently from the others. Such a task is similar to the task of distinguishing pacemakers as it appears in the analysis of active media or in the analysis of networks of spiking neurons. In addition, the suggested criterion will probably also be useful in studies of symbolic dynamics; however, this issue requires additional study.

5. Conclusions

In the paper, we suggest a criterion for distinguishing temporally different systems that is required for analysis of groups of autonomous agents.
The suggested criterion is based on the entropy of an ergodic dynamical system and has the meaning of the internal time of the system, which characterizes its dynamics. If two systems have comparable internal times, meaning that the ratio of the times is a rational number, then the systems are temporally equivalent; if this ratio is irrational, then the systems are temporally different.
Calculations of the internal time and uses of the criterion are illustrated by numerical examples.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

Appendix A.1. Local Time of Brownian Motion

Let $w(\tau)$, $\tau \geq 0$, be one-dimensional Brownian motion, and let $\mathcal{B}$ be the $\sigma$-algebra of Borel sets of $\mathbb{R}$.
Denote by $\mu_t(A)$ the Lebesgue measure of the time up to $t$ during which the trajectory of the process $w(\tau)$ is in the set $A \in \mathcal{B}$. Then,
$$\mu_t(A) = \int_0^t I_A(w(\tau))\, d\tau,$$
where $I_A(w(\tau))$ is the indicator function of the set $A$.
Lévy demonstrated [7] that there exists a density function $l_t$ such that, with probability one, for all trajectories of the process $w(\tau)$, any time $t > 0$, and any set $A \in \mathcal{B}$, it holds that
$$\mu_t(A) = \int_A l_t(x)\, dx.$$
The density $l_t(x)$ is called the local time of the Brownian motion $w(\tau)$ at the point $x$ up to time $t$.
Finally, Trotter [31] proved that, with probability one, there exists a process $\ell: [0, \infty) \times \mathbb{R} \to \mathbb{R}$ such that for any $x \in \mathbb{R}$,
$$\ell(t, x) = l_t(x).$$
The process $\ell$ is called the local time of the Brownian motion $w(\tau)$. For fixed $t$, the process $\ell$ is a Markov process in the points $x$.
A detailed overview of the local time $\ell$ was published by Borodin [9].

Appendix A.2. Operator of Internal Time

Let $(p, q)$ be canonical coordinates of an $n$-dimensional dynamical system, and assume that its evolution is defined by the system of Hamilton equations
$$\dot{q}_i = \frac{\partial H}{\partial p_i}, \quad \dot{p}_i = -\frac{\partial H}{\partial q_i}, \quad i = 1, 2, \ldots, n,$$
where $\dot{q}_i = dq_i/dt$, $\dot{p}_i = dp_i/dt$, and for the initial time $t = t_0$, the values $q_i(t_0) = q_i^0$ and $p_i(t_0) = p_i^0$ are specified.
Denote by
$$L = i\{H, \cdot\} = i \sum_{i=1}^n \left( \frac{\partial H}{\partial p_i} \frac{\partial}{\partial q_i} - \frac{\partial H}{\partial q_i} \frac{\partial}{\partial p_i} \right)$$
the Liouville operator, where $\{A, B\} = \sum_{i=1}^n \left( \frac{\partial A}{\partial p_i} \frac{\partial B}{\partial q_i} - \frac{\partial A}{\partial q_i} \frac{\partial B}{\partial p_i} \right)$ is the Poisson bracket.
Then, the dynamics of the system in terms of the density $\rho_t$ of its states in the phase space is defined by the Liouville equation
$$i \frac{\partial \rho_t}{\partial t} = \{H, \rho_t\} = L \rho_t.$$
Prigogine and his group [10,11,29] suggested considering the operator $L$ as a "generator of dynamical evolution" ([10], p. 4769) of the system and defined the internal time of a dynamical system as an operator $T$ such that
$$i[L, T] = I,$$
where $I$ is the identity operator.
In the quantum mechanical formalism [30], the operator $T$ of internal time is also considered a "square root" of the entropy operator $M$; that is,
$$M = T^H T,$$
where $T^H$ stands for the Hermitian conjugate of the operator $T$.
The external or reference time $t$ is derived from the internal time $T$ and is defined as an average [30]:
$$t = \langle T \rangle = \mathrm{tr}\left(\rho_t^H T \rho_t\right).$$
Note that in this formulation, the behavior of internal time also demonstrates Markovian properties [11].

Appendix A.3. Internal Time of Dynamical System

Let $S$ be a dynamical system with phase space $X$, and assume that $S$ is linear and is defined by the first-order $m$-dimensional differential equation [14]
$$\frac{dx}{dt} = A(t)\, x(t),$$
where $x(t) \in X \subseteq \mathbb{R}^m$ are the coordinates and $A(t)$ is a real $m \times m$ matrix.
An initial state $x(t_0)$ of the system and the values of the matrix $A(t)$ completely define the evolution of the system. If the state $x(t_0)$ of the system $S$ at the initial time $t_0$ is strictly specified, then the dynamics of the system is completely predictable, and its state $x(t)$ for any $t > t_0$ can be calculated.
However, if $\mathrm{tr}\, A(t) > 0$ for all $t$, that is, the system $S$ is globally exploding, and the initial state $x(t_0)$ is unknown and is defined by a probability distribution over the state space of $S$, then the further states $x(t)$, $t > t_0$, are also defined by certain probability distributions.
Assume that the system $S$ is globally exploding, and denote by $H(t)$ the entropy of the probability distribution of the system states at time $t$.
Then, given the entropy $H(t_0)$, for any $t > t_0$, the following holds [13]:
$$H(t) = H(t_0) + \int_{t_0}^t \mathrm{tr}\, A(\tau)\, d\tau.$$
Since $\mathrm{tr}\, A(t) > 0$ for all $t$, the entropy $H(t)$ of the system increases with $t$.
Using this equation, Valleé defines the value [14]
$$t^*(t) = H(t) - H(t_0) = \int_{t_0}^t \mathrm{tr}\, A(\tau)\, d\tau,$$
which is called the internal (or intrinsic) time adapted to the dynamics of the globally exploding system, or the internal time of the dynamical system, in brief.
Later [15], Valleé demonstrated that if the state $x(t)$ of the system $S$ belongs to a finite-dimensional linear space, in particular, if $x(t) \in \mathbb{R}^m$, then the intensity $\mathrm{tr}\, A(\tau)$ of the changing system at time $\tau$ can be represented by an increasing function $V: \mathbb{R}_+ \to \mathbb{R}_+$ such that $V(0) = 0$.
The simplest example of such a function is the square of the Euclidean norm, $V\!\left(\left\|\tfrac{dx}{dt}\right\|\right) = \left\|\tfrac{dx}{dt}\right\|^2$. Then, the internal time of the dynamical system is defined as the internal duration
$$\tilde{t}(t) = d(t_0, t) = \int_{t_0}^t \left\| \frac{dx}{d\tau} \right\|^2 d\tau,$$
and the duration in the interval $(t_1, t_2)$ is defined as the difference $d(t_1, t_2) = \tilde{t}(t_2) - \tilde{t}(t_1)$ between the internal times $\tilde{t}(t_2)$ and $\tilde{t}(t_1)$ at the bounds $t_2$ and $t_1$ of the interval.
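For a concrete feel, the internal duration can be computed numerically. A sketch for the assumed one-dimensional exploding system $dx/dt = a x$ with $a = 0.5$ (the example system and step count are our assumptions, not from Vallée's papers):

```python
from math import exp

a, t0, t1, n = 0.5, 0.0, 2.0, 100_000

def deriv_sq(t):
    # For x(t) = exp(a t): (dx/dt)^2 = a^2 * exp(2 a t)
    return (a * exp(a * t)) ** 2

# Trapezoidal approximation of the internal duration integral
h = (t1 - t0) / n
acc = 0.5 * (deriv_sq(t0) + deriv_sq(t1))
for i in range(1, n):
    acc += deriv_sq(t0 + i * h)
duration = h * acc

exact = (a / 2) * (exp(2 * a * t1) - exp(2 * a * t0))  # closed form
print(round(duration, 4), round(exact, 4))  # 1.5973 1.5973
```

Here $\mathrm{tr}\, A = a > 0$, so the system is globally exploding and the duration grows monotonically with $t$.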

Appendix B

In the paper, the Sinai formulation [4] of the Kolmogorov entropy [2,3] is used. For completeness, additional definitions of the related concepts are presented below.
Let $X$ be a compact set and $\varepsilon > 0$ a real number.
A set $\alpha = \{A : A \subseteq X\}$ is called an $\varepsilon$-covering of the set $X$ if $X \subseteq \bigcup_{A \in \alpha} A$ and the diameter of any $A \in \alpha$ is not greater than $2\varepsilon$.
A set $X$ is said to be $\varepsilon$-distinguishable if any two of its distinct points are located at a distance greater than $\varepsilon$.
Denote by $N_\varepsilon(X)$ the minimal number of sets in an $\varepsilon$-covering $\alpha$ of the set $X$, and by $M_\varepsilon(X)$ the maximal number of points in an $\varepsilon$-distinguishable subset of the set $X$.
The value [32]
$$H_\varepsilon(X) = \log_2 N_\varepsilon(X)$$
is called the $\varepsilon$-entropy of the set $X$, and the value
$$E_\varepsilon(X) = \log_2 M_\varepsilon(X)$$
is called the $\varepsilon$-capacity of the set $X$.
These values are interpreted as follows: the $\varepsilon$-entropy $H_\varepsilon(X)$ is the minimal number of bits required to transmit the set $X$ with precision $\varepsilon$, and the $\varepsilon$-capacity $E_\varepsilon(X)$ is the maximal number of bits that can be memorized by $X$ with precision $\varepsilon$.
Note that the concepts of ε -entropy and ε -capacity differ from the concept of Kolmogorov complexity, which is defined as follows.
Let $\sigma$ be a sequence of symbols. Denote by $p_\sigma$ a program that describes the string $\sigma$ and by $l(p_\sigma)$ the length of the program $p_\sigma$.
The value [33]
$$K(\sigma) = \begin{cases} \min_{p_\sigma} l(p_\sigma), & \text{if } p_\sigma \text{ exists}, \\ \infty, & \text{if } p_\sigma \text{ does not exist}, \end{cases}$$
where the minimum is taken over all programs describing $\sigma$, is called the Kolmogorov complexity of the string $\sigma$.
It can be demonstrated that the Kolmogorov complexity of the trajectories of a dynamical system is almost certainly equivalent to the entropy $h(\varphi)$ of the dynamical system given in Definition 1.
Now let us return to the dynamical system $S = (X, \varphi, \mu)$. The concept that generalizes the Sinai metric entropy of a dynamical system is the topological entropy defined by Adler, Konheim, and McAndrew [34]. In contrast to Section 2, the definition of topological entropy deals with a topological space $X$ and its coverings. Along with this, the definition follows similar steps.
Let $\beta = \{B_1, B_2, \ldots, B_{n_\beta}\}$, $\bigcup_{i=1}^{n_\beta} B_i = X$, be a finite open covering of the space $X$. Then, the set
$$\varphi^\nu \beta = \{\varphi^\nu B_1, \varphi^\nu B_2, \ldots, \varphi^\nu B_{n_\beta}\}, \quad \nu = 0, 1, 2, \ldots,$$
resulting from the $\nu$th application of the automorphism $\varphi$ to the covering $\beta$, is also an open covering of $X$.
Let $N(\varphi, N)$ be the cardinality of the minimal subcovering of the covering
$$\beta_N^\varphi = \bigvee_{\nu=0}^{N} \varphi^\nu \beta.$$
The value [34]
$$h(\varphi, \beta) = \lim_{N \to \infty} \frac{1}{N} \log N(\varphi, N)$$
is called the topological entropy of the system $S$ with respect to time, and its supremum
$$h(\varphi) = \sup_\beta h(\varphi, \beta),$$
taken over all open coverings of $X$, is called the topological entropy of the dynamical system $S$.
Dinaburg demonstrated [35] the following relations between the topological entropy $h(\varphi)$, the metric entropy $h_\mu(\varphi)$, and the $\varepsilon$-entropy $H_\varepsilon(X)$; namely, the following equalities hold:
$$h(\varphi) = \lim_{\varepsilon \to 0} \lim_{\nu \to \infty} H_\varepsilon(X) \quad \text{and} \quad h(\varphi) = \sup_\mu h_\mu(\varphi),$$
where the supremum is taken over all measures on $X$ invariant with respect to $\varphi$.
Finally, since at each iteration the automorphism $\varphi$ acts on a finite set of subsets of $X$, the internal time of the system can be considered in terms of symbolic dynamics and the algebraic entropy suggested by Goppa [36] in the framework of coding theory.
Let $U$ be a finite set of symbols (an alphabet) of size $n_U$, and let $\Omega = U^n$ be the set of all words of length $n$ in the alphabet $U$. Denote by $G$ the symmetric group that permutes the letters of the words such that if $\omega = (u_1, u_2, \ldots, u_n) \in \Omega$, then
$$g\omega = (u_{g(1)}, u_{g(2)}, \ldots, u_{g(n)}) \in \Omega, \quad g \in G.$$
The value [36]
$$I_0(\omega) = \log |G\omega| = \log \frac{n!}{m_1! \, m_2! \cdots m_{n_U}!},$$
where $m_i$, $i = 1, 2, \ldots, n_U$, is the number of times the letter $u_i \in U$ appears in the word $\omega$, is called the null-information, or algebraic entropy, of the word $\omega$.
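The algebraic entropy of a word is simply the logarithm of a multinomial coefficient, the size of the word's orbit under letter permutations. A brief sketch (the function name is ours):

```python
from collections import Counter
from math import factorial, log2

def algebraic_entropy(word):
    """I_0(omega) = log2(n! / (m_1! m_2! ...)): log of the orbit size of
    the word under permutations of its letter positions."""
    orbit = factorial(len(word))
    for m in Counter(word).values():
        orbit //= factorial(m)
    return log2(orbit)

print(round(algebraic_entropy("aabb"), 3))  # log2(4!/(2!*2!)) = log2(6) ~ 2.585
```

A word with all letters equal has an orbit of size one and therefore zero algebraic entropy.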

Appendix C

For illustration, detailed calculations for Examples 2 and 3 are presented below.

Appendix C.1. Calculations for Example 2

Recall that throughout, $\log$ is taken to base $2$. We have
$$\varphi^{0}(\alpha,2)=\left\{\left[0,\tfrac{1}{2}\right],\left[\tfrac{1}{2},1\right]\right\},\qquad \mu_{1}=\mu\left(\left[0,\tfrac{1}{2}\right]\right)=\tfrac{1}{2},\quad \mu_{2}=\mu\left(\left[\tfrac{1}{2},1\right]\right)=\tfrac{1}{2},$$
$$H(\varphi^{0}\alpha,2)=-\mu_{1}\log\mu_{1}-\mu_{2}\log\mu_{2}=-2\cdot\tfrac{1}{2}\log\tfrac{1}{2}=-\log\tfrac{1}{2}=1;$$
$$\varphi^{1}(\alpha,2)=\left\{\left[0,\tfrac{1}{4}\right],\left[\tfrac{1}{4},\tfrac{1}{2}\right],\left[\tfrac{1}{2},\tfrac{3}{4}\right],\left[\tfrac{3}{4},1\right]\right\},\qquad \mu_{1}=\mu_{2}=\mu_{3}=\mu_{4}=\tfrac{1}{4},$$
$$H(\varphi^{1}\alpha,2)=-4\cdot\tfrac{1}{4}\log\tfrac{1}{4}=-\log\tfrac{1}{4}=2;$$
$$\varphi^{2}(\alpha,2)=\left\{\left[0,\tfrac{1}{8}\right],\left[\tfrac{1}{8},\tfrac{1}{4}\right],\ldots,\left[\tfrac{7}{8},1\right]\right\},\qquad \mu_{1}=\mu_{2}=\cdots=\mu_{8}=\tfrac{1}{8},$$
$$H(\varphi^{2}\alpha,2)=-8\cdot\tfrac{1}{8}\log\tfrac{1}{8}=-\log\tfrac{1}{8}=3.$$
Then,
$$\tau_1(\varphi)=\frac{1}{H(\varphi^{0}\alpha)}\left(H(\varphi^{1}\alpha,2)-H(\varphi^{0}\alpha,2)\right)=\frac{1}{1}(2-1)=1,$$
$$\tau_2(\varphi)=\frac{1}{H(\varphi^{0}\alpha)}\left(H(\varphi^{2}\alpha,2)-H(\varphi^{1}\alpha,2)\right)=\frac{1}{1}(3-2)=1,$$
and
$$\tau(\varphi)=1.$$
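The arithmetic above is easy to check mechanically. A small Python sketch (assuming Lebesgue measure on $[0,1]$; the helper `partition_entropy` is ours) reproduces the entropies of the dyadic partitions and the increments $\tau_\nu(\varphi)$:

```python
import math

def partition_entropy(breakpoints) -> float:
    """Base-2 entropy of the interval partition of [0, 1] whose cells
    are delimited by the given interior breakpoints (Lebesgue measure)."""
    pts = [0.0] + sorted(breakpoints) + [1.0]
    widths = [b - a for a, b in zip(pts, pts[1:])]
    return -sum(w * math.log2(w) for w in widths if w > 0)

H0 = partition_entropy([1/2])                         # halves   -> 1
H1 = partition_entropy([1/4, 1/2, 3/4])               # quarters -> 2
H2 = partition_entropy([k / 8 for k in range(1, 8)])  # eighths  -> 3

# Internal-time increments tau_nu = (H_nu - H_{nu-1}) / H_0
tau1 = (H1 - H0) / H0   # (2 - 1) / 1 = 1
tau2 = (H2 - H1) / H0   # (3 - 2) / 1 = 1
```

The dyadic widths are exactly representable in binary floating point, so the entropies come out as exactly $1$, $2$, and $3$ bits.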

Appendix C.2. Calculations for Example 3

Let
$$\varphi^{0}\alpha=\varphi^{1}\alpha=\varphi^{2}\alpha=\left\{\left[0,\tfrac{1}{2}\right],\left[\tfrac{1}{2},1\right]\right\}.$$
The entropies of these partitions are equal:
$$H(\varphi^{0}\alpha)=H(\varphi^{1}\alpha)=H(\varphi^{2}\alpha).$$
Similar to the above, for these partitions we have
$$\mu_{1}=\mu\left(\left[0,\tfrac{1}{2}\right]\right)=\tfrac{1}{2},\qquad \mu_{2}=\mu\left(\left[\tfrac{1}{2},1\right]\right)=\tfrac{1}{2},$$
and
$$H(\varphi^{0}\alpha,2)=H(\varphi^{1}\alpha,2)=H(\varphi^{2}\alpha,2)=-\mu_{1}\log\mu_{1}-\mu_{2}\log\mu_{2}=-2\cdot\tfrac{1}{2}\log\tfrac{1}{2}=1.$$
Then,
$$\tau_1(\varphi)=\frac{1}{H(\varphi^{0}\alpha)}\left(H(\varphi^{1}\alpha,2)-H(\varphi^{0}\alpha,2)\right)=\frac{1}{1}(1-1)=0,$$
$$\tau_2(\varphi)=\frac{1}{H(\varphi^{0}\alpha)}\left(H(\varphi^{2}\alpha,2)-H(\varphi^{1}\alpha,2)\right)=\frac{1}{1}(1-1)=0,$$
and
$$\tau(\varphi)=0.$$
Now, let
$$\varphi^{0}\alpha=\left\{\left[0,\tfrac{1}{3}\right],\left[\tfrac{1}{3},1\right]\right\}.$$
Then,
$$\mu_{1}=\mu\left(\left[0,\tfrac{1}{3}\right]\right)=\tfrac{1}{3},\qquad \mu_{2}=\mu\left(\left[\tfrac{1}{3},1\right]\right)=\tfrac{2}{3},$$
$$H(\varphi^{0}\alpha)=-\mu_{1}\log\mu_{1}-\mu_{2}\log\mu_{2}=-\tfrac{1}{3}\log\tfrac{1}{3}-\tfrac{2}{3}\log\tfrac{2}{3}=\log 3-\tfrac{2}{3}.$$
Similarly, for
$$\varphi^{1}\alpha=\left\{\left[0,\tfrac{2}{3}\right],\left[\tfrac{2}{3},1\right]\right\}$$
we have
$$\mu_{1}=\tfrac{2}{3},\quad \mu_{2}=\tfrac{1}{3},\qquad H(\varphi^{1}\alpha)=-\tfrac{2}{3}\log\tfrac{2}{3}-\tfrac{1}{3}\log\tfrac{1}{3}=\log 3-\tfrac{2}{3},$$
and for
$$\varphi^{2}\alpha=\left\{\left[0,\tfrac{1}{3}\right],\left[\tfrac{1}{3},1\right]\right\}$$
we have
$$\mu_{1}=\tfrac{1}{3},\quad \mu_{2}=\tfrac{2}{3},\qquad H(\varphi^{2}\alpha)=-\tfrac{1}{3}\log\tfrac{1}{3}-\tfrac{2}{3}\log\tfrac{2}{3}=\log 3-\tfrac{2}{3}.$$
Finally, for
$$\varphi^{1}\alpha\vee\varphi^{0}\alpha=\varphi^{2}\alpha\vee\varphi^{1}\alpha=\left\{\left[0,\tfrac{1}{3}\right],\left[\tfrac{1}{3},\tfrac{2}{3}\right],\left[\tfrac{2}{3},1\right]\right\}$$
we have
$$\mu_{1}=\mu_{2}=\mu_{3}=\tfrac{1}{3},\qquad H(\varphi^{1}\alpha\vee\varphi^{0}\alpha)=H(\varphi^{2}\alpha\vee\varphi^{1}\alpha)=-3\cdot\tfrac{1}{3}\log\tfrac{1}{3}=\log 3.$$
Then,
$$\tau_1(\varphi)=\frac{1}{H(\varphi^{0}\alpha)}\left(H(\varphi^{1}\alpha\vee\varphi^{0}\alpha)-H(\varphi^{0}\alpha)\right)=\frac{\log 3-\left(\log 3-\tfrac{2}{3}\right)}{\log 3-\tfrac{2}{3}}=\frac{2}{3\log 3-2}\approx 0.726,$$
$$\tau_2(\varphi)=\frac{1}{H(\varphi^{0}\alpha)}\left(H(\varphi^{2}\alpha\vee\varphi^{1}\alpha)-H(\varphi^{1}\alpha)\right)=\frac{2}{3\log 3-2}\approx 0.726,$$
and
$$\tau(\varphi)=\frac{2}{3\log 3-2}\approx 0.726.$$
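As with Example 2, these numbers can be verified mechanically. A Python sketch under the same assumptions (Lebesgue measure on $[0,1]$; the helper `entropy` is ours):

```python
import math

def entropy(widths) -> float:
    """Base-2 entropy of an interval partition given by its cell measures."""
    return -sum(w * math.log2(w) for w in widths if w > 0)

H_alpha = entropy([1/3, 2/3])        # H(phi^0 alpha) = log2(3) - 2/3
H_join  = entropy([1/3, 1/3, 1/3])   # H(phi^1 alpha v phi^0 alpha) = log2(3)

# tau = (H_join - H_alpha) / H_alpha = 2 / (3*log2(3) - 2) ~ 0.726
tau = (H_join - H_alpha) / H_alpha
```

Both increments coincide here because the two joins have the same entropy, so the averaged internal time equals the common value $\tau(\varphi)\approx 0.726$.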

References

  1. Mesarovic, M.D.; Takahara, Y. General Systems Theory: Mathematical Foundations; Academic Press: New York, NY, USA, 1975.
  2. Kolmogorov, A. New metric invariant of transitive dynamical systems and automorphisms of the Lebesgue space. Dokl. Sov. Acad. Sci. 1958, 119, 861–864.
  3. Kolmogorov, A. On the entropy on time unit as on the metrical invariant of automorphisms. Dokl. Sov. Acad. Sci. 1959, 124, 754–755.
  4. Sinai, Y. On the notion of entropy of dynamical system. Dokl. Sov. Acad. Sci. 1959, 124, 768–771.
  5. Kagan, E.; Ben-Gal, I. Distinguishing the leading agents in classification problems using the entropy-based metric. Entropy 2024, 26, 318.
  6. Glattfelder, J.B.; Olsen, R.B. The theory of internal time: A primer. arXiv 2024, arXiv:2406.07354.
  7. Lévy, P. Sur certains processus stochastiques homogènes. Compos. Math. 1939, 7, 283–339.
  8. Lévy, P. Construction du processus de W. Feller et H.P. McKean en partant du mouvement brownien. In Probability and Statistics (The Harald Cramér Volume); Almqvist & Wiksell: Stockholm, Sweden, 1959; pp. 162–174.
  9. Borodin, A.N. Brownian local time. Russ. Math. Surv. 1989, 44, 1–51.
  10. Misra, B.; Prigogine, I.; Courbage, M. Lyapounov variable: Entropy and measurement in quantum mechanics. Proc. Natl. Acad. Sci. USA 1979, 76, 4768–4772.
  11. Misra, B.; Prigogine, I.; Courbage, M. From deterministic dynamics to probabilistic descriptions. Proc. Natl. Acad. Sci. USA 1979, 76, 3607–3611.
  12. Gialampoukidis, I.; Antoniou, I. Entropy, age and time operator. Entropy 2015, 17, 407–424.
  13. Vallée, R. Perception, memorization and multidimensional time. Kybernetes 1991, 20, 15–28.
  14. Vallée, R. About internal time of a dynamical system. In Proceedings of the 12th European Meeting on Cybernetics and Systems, Vienna, Austria, 5–8 April 1994; pp. 33–39.
  15. Vallée, R. Internal time of a dynamical system. Adv. Syst. Sci. Appl. 2010, 10, 1–5.
  16. Astashkina, E.; Mikhailov, A. Stochastic self-oscillations in parametric excitation of spin waves. Sov. Phys. JETP 1980, 51, 821–826.
  17. Buis, R. Biomathématiques de la Croissance, le Cas des Végétaux; Collection Grenoble Sciences; EDP Sciences: Paris, France, 2016.
  18. Martin, N.; England, J. Mathematical Theory of Entropy; Cambridge University Press: Cambridge, UK, 1984.
  19. Sinai, Y.G. Introduction to Ergodic Theory; Princeton University Press: Princeton, NJ, USA, 1976.
  20. Sinai, Y. Topics in Ergodic Theory; Princeton University Press: Princeton, NJ, USA, 1993.
  21. Tsetlin, M. Automaton Theory and Modeling of Biological Systems; Academic Press: New York, NY, USA, 1973.
  22. Jaroszkiewicz, G. Images of Time: Mind, Science, Reality; Oxford University Press: Oxford, UK, 2016.
  23. Whitrow, G.J. The Natural Philosophy of Time; Oxford University Press: Oxford, UK, 1981.
  24. Hawking, S. A Brief History of Time: From the Big Bang to Black Holes; Bantam Books: New York, NY, USA, 1988.
  25. Carroll, S. From Eternity to Here: The Quest for the Ultimate Theory of Time; Dutton: New York, NY, USA, 2010.
  26. Rovelli, C. The Order of Time; Allen Lane: London, UK, 2018.
  27. Zeh, H.D. The Physical Basis of the Direction of Time; Springer: Berlin/Heidelberg, Germany, 1989.
  28. Reichenbach, H. The Direction of Time; Dover: New York, NY, USA, 1999.
  29. Haken, H. Synergetics: An Introduction. Nonequilibrium Phase Transitions and Self-Organization in Physics, Chemistry and Biology; Springer: Berlin/Heidelberg, Germany, 1977.
  30. Prigogine, I. From Being to Becoming: Time and Complexity in the Physical Sciences; W.H. Freeman: New York, NY, USA, 1980.
  31. Trotter, H. A property of Brownian motion paths. Ill. J. Math. 1958, 2, 425–433.
  32. Kolmogorov, A.N.; Tikhomirov, V.M. ε-entropy and ε-capacity of sets in functional spaces. Am. Math. Soc. Transl. Ser. 2 1961, 17, 277–364.
  33. Kolmogorov, A.N. Three approaches to the definition of the concept "quantity of information". Probl. Inf. Transm. 1965, 1, 3–11. (In Russian)
  34. Adler, R.L.; Konheim, A.G.; McAndrew, M.H. Topological entropy. Trans. Am. Math. Soc. 1965, 114, 309–319.
  35. Dinaburg, E.I. On the relations among various entropy characteristics of dynamical systems. Math. USSR Izv. 1971, 5, 337–378.
  36. Goppa, V.D. Group representations and algebraic information theory. Math. USSR Izv. 1995, 59, 1123–1147.