Article

Kinetic Theory and Markov Chains with Stochastically Varying Transition Matrices

1 Dipartimento di Matematica e Fisica, Università degli Studi della Campania “L. Vanvitelli”, Viale Lincoln 5, 81100 Caserta, Italy
2 Dipartimento di Matematica e Applicazioni “R. Caccioppoli”, Università degli Studi di Napoli “Federico II”, Via Cintia, Monte S. Angelo, 80126 Naples, Italy
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(12), 1994; https://doi.org/10.3390/math13121994
Submission received: 6 October 2024 / Revised: 31 October 2024 / Accepted: 2 March 2025 / Published: 17 June 2025
(This article belongs to the Section C2: Dynamical Systems)

Abstract

As is well known, the Kinetic Theory for Active Particles is a scheme of mathematical models based on a generalization of the Boltzmann equation. It is nowadays acknowledged as one of the most versatile and effective tools to describe in mathematical terms the behavior of any system consisting of a large number of mutually interacting objects, whether or not they also interact with the external world. In both cases, the description is stochastic, i.e., it aims to provide at each instant the probability distribution (or density) function on the set of possible states of the particles of the system. In other words, it describes the evolution of the system as a stochastic process. In a previous paper, we pointed out that such a process can be described in turn in terms of a special kind of vector time-continuous Markov Chain. These stochastic processes share important properties with many natural processes. The present paper aims to develop the discussion presented in that paper, in particular by considering and analyzing the case in which the transition matrices of the chain are neither constant (stationary Markov Chains) nor assigned functions of time (nonstationary Markov Chains). It is shown that this case expresses interactions of the system with the external world, with particular reference to random external events.

1. Introduction

The evolution of many-particle systems (where the word “particle” must be read in the widest sense, as any individual belonging to a gas, or to a herd of predators or prey, or to any human collectivity, like a nation, social class, or condominium), in view of its wide range of interpretations in different scientific fields, is one of the most interesting and widely studied topics of that branch of applied mathematics dealing with models of natural, political, social, and economic phenomena. The mathematical formalization of such an evolution has been tackled from many different viewpoints and with many different languages and techniques, e.g., that based on Bayesian Networks, see [1,2,3], or that based on the use of the Fokker–Planck equation [4,5,6]. Among these, kinetic-theoretic models (see Section 6 for more details) seem to be the most versatile, expressive, and effective tools to describe many-particle systems and their evolution by means of suitable equations (we can quote [7,8,9,10,11,12,13,14,15,16,17,18] as just a small sample of the research addressed to this field). The so-called transition matrices (see Section 2) characterizing Markov Chains play a fundamental rôle in kinetic-theoretic models, as they describe in stochastic terms the results of mutual interactions between the particles of any system. It should be clearly understood that, as pointed out in formal terms in [19,20], the equations of the Kinetic Theory for Active Particles are firmly based on the use of Markov Chains; rather surprisingly, however, in the literature about kinetic-theoretic models, Markov Chains, at least to the best of our knowledge, had never been explicitly related to the evolution equations before [19,20].
In [20], we tried to show that the explicit reference to Markov Chains actually represents a good perspective to describe the behavior of a many-particle system when it interacts with the external world (in this connection, see [21], where suitable «forcing» terms are introduced in the equations). More precisely, when we consider the evolution of a many-particle system as an n-tuple of joined stochastic processes, one of which is a vector Markov Chain [19], then the interactions with the external world can be described by assuming that the interaction rates and the transition matrices undergo stochastic variations at each step [19].
The main aim of [20] was, however, the interpretation rather than the formal and technical analysis of the evolution equation in the presence of randomly varying interaction rates and transition matrices. As a consequence, the accurate description of the equations obtained in such a case was simply outlined in its final sections. The aim of the present paper is to develop that discussion and analyze the technical differences it introduces in the description of the evolution of the system. We also try to give some interpretations of the meaning of such differences.
The rest of the paper is organized as follows. Section 6 is devoted to recalling some basic features of kinetic-theoretic models. Definitions and notions about Markov Chains as stochastic processes, with particular regard to discrete time-discrete Markov Chains, are given in Section 2, while in Section 3 we introduce (two-dimensional) vector Markov Chains and describe some of their basic properties in the discrete time-discrete case. Section 4 is devoted to introducing what we have called “continuous semi-Markov coupled random processes”, with particular attention to time-continuous processes, and to concluding that the equations of the Kinetic Theory, in their most general form, can be seen as a special form of the basic vector equation connecting the absolute probability distributions at different steps of a Markov Chain with each other and with the transition probabilities. Section 5 is devoted to introducing Markov Chains with stochastically varying transition matrices in the time-discrete case, in order to give a heuristic basis to their properties, while in Section 7 we show how this kind of Markov Chain modifies the evolution equations for a many-particle system interacting with the external world. Finally, in Section 8, we discuss some problems left to be tackled in the modified scheme and outline some possible research perspectives.

2. Some Background on Markov Chains

We now turn to giving the basic definitions of Markov Chains and a short account of their basic properties.
To start with, we recall that a random process [22] is a sequence $\{X_\eta\}_{\eta\in H}$ of random variables (readers not acquainted with the notion of “random variable” can usefully consult [23], p. 37) with a common range $S$ of possible values (the states), called the state space of the process. Both the set $H$ of indexes and the state space $S$ can be either a discrete or a continuous set. If $H$ is discrete ($H=\mathbb{N}=\{0,1,2,\dots,n,\dots\}$), then the process is said to be time-discrete; if $H$ is continuous ($H=[0,+\infty)$), then the process is said to be time-continuous. Analogously, if $S$ is discrete ($S=\{z_h\}_{h\in I}\subset\mathbb{R}$, with $I\subseteq\mathbb{Z}$), then the process is said to be discrete, while, if $S$ is continuous (i.e., any real interval), then the process is said to be continuous. From now to the end of this section, we shall first consider discrete (either time-discrete or time-continuous) processes. In both cases, for the sake of simplicity and without loss of generality (as regards Markov Chains, this simply amounts to avoiding matrices with infinitely many rows and columns), the state space $S$ will be assumed to be finite.
Now, consider a discrete and time-discrete random process $\{X_h\}_{h\in\mathbb{N}}$ with $S=\{z_1,z_2,\dots,z_n\}$. As usual, we denote by a vector $\mathbf{p}_h$ (the state vector of the process at time $h$) the probability distribution on $S$ according to the random variable $X_h$. In other words, $\mathbf{p}_h\equiv(p_{h,1},p_{h,2},\dots,p_{h,n})$ with $p_{h,k}=P(X_h=z_k)$. A discrete time-discrete Markov Chain [22,24,25] $\{X_h\}_{h\in\mathbb{N}}$, with $R(X_h)=S\subset\mathbb{Z}$, is a random process such that the Markov condition
$$P(X_h=z_{i_h}\mid X_1=z_{i_1},X_2=z_{i_2},\dots,X_{h-1}=z_{i_{h-1}})=P(X_h=z_{i_h}\mid X_{h-1}=z_{i_{h-1}}),\qquad\forall h\in\mathbb{N}\setminus\{0\}$$
holds (where, as customary, we have used the symbol $P(A\mid B)$ to denote the conditional probability of an event $A$ under the assumption that an event $B$ has taken place [23]). This means that, for any $h\in\mathbb{N}$, the values of the $h$-th random variable of the process depend only on the values of the $(h-1)$-th, not on the values of any previous variable. Now, setting $P_{ij}(h)=P(X_h=z_j\mid X_{h-1}=z_i)$ (with $(i,j)\in\{1,2,\dots,n\}^2$), $P_{ij}(h)$ is the transition probability from state $z_i$ to state $z_j$ at the $h$-th step, and the matrix
$$\mathcal{P}_h\equiv\big(P_{ij}(h)\big)_{1\le i,j\le n}\equiv\begin{pmatrix}P_{11}(h)&P_{12}(h)&\cdots&P_{1n}(h)\\P_{21}(h)&P_{22}(h)&\cdots&P_{2n}(h)\\\vdots&\vdots&\ddots&\vdots\\P_{n1}(h)&P_{n2}(h)&\cdots&P_{nn}(h)\end{pmatrix},\qquad\forall h\in\mathbb{N}\setminus\{0\},$$
is called the transition matrix at the $h$-th step. It is obvious and well known that
$$\sum_{j=1}^{n}P_{ij}(h)=1,\qquad\forall h\in\mathbb{N}\setminus\{0\},$$
and that, in virtue of the law of alternatives [23],
$$\mathbf{p}_h=\mathbf{p}_{h-1}\,\mathcal{P}_h,\qquad\forall h\in\mathbb{N}\setminus\{0\}.$$
For the sake of completeness, we also recall that, for any couple $(r,s)$ of non-negative integers,
$$\mathbf{p}_{r+s}=\mathbf{p}_r\,\mathcal{P}_{r+1}\,\mathcal{P}_{r+2}\cdots\mathcal{P}_{r+s},$$
and in particular, when the Markov Chain is stationary, i.e., a transition matrix $\mathcal{P}$ exists such that $\mathcal{P}_h=\mathcal{P}$ for any $h\in\mathbb{N}\setminus\{0\}$,
$$\mathbf{p}_{r+s}=\mathbf{p}_r\,\mathcal{P}^{s},$$
where the power at the right-hand side must be interpreted in the sense of the row-by-column product of matrices (for further details, see, e.g., [22,24,25,26]).
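To make the recursion concrete, here is a minimal numerical sketch (the two-state matrix and the initial vector are our own arbitrary choices, not taken from the paper) of the relation $\mathbf{p}_h=\mathbf{p}_{h-1}\,\mathcal{P}$ for a stationary chain:

```python
# Minimal sketch: evolution of the state vector of a stationary
# two-state Markov Chain, illustrating p_h = p_{h-1} P.
# The transition matrix below is an arbitrary example, not from the paper.

def mat_vec(p, P):
    """Row vector times matrix: (p P)_j = sum_i p_i P_ij."""
    n = len(P)
    return [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]

P = [[0.9, 0.1],
     [0.4, 0.6]]          # stationary transition matrix (rows sum to 1)
p = [1.0, 0.0]            # initial state vector p_0

for _ in range(50):       # iterate p_h = p_{h-1} P
    p = mat_vec(p, P)

# For this matrix the chain approaches its stationary distribution (0.8, 0.2).
print(p)
```

Iterating the same matrix is exactly the content of $\mathbf{p}_{r+s}=\mathbf{p}_r\,\mathcal{P}^s$: fifty applications of $\mathcal{P}$ to $\mathbf{p}_0$ coincide with multiplying $\mathbf{p}_0$ by $\mathcal{P}^{50}$.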

3. Joint and Marginal Transition Probabilities

As already carried out in [20], and almost tracing the steps of the treatment presented there, our next step will now be to consider a special kind of Markov Chain, never explicitly considered before in the literature about random processes (as far as we are aware, a state space endowed with a structure is considered only in [27]), namely, vector Markov Chains (see [19]). Like in [20], for the sake of simplicity, we examine the case in which $S=\Omega^2$ (where $\Omega=\{x_1,x_2,\dots,x_m\}$, so that $|\Omega^2|=m^2$; as usual in the literature about discrete Markov Chains, the states in $S$ are assumed to be arbitrarily ordered, so that we can set $S=\{z_1,z_2,\dots,z_{m^2}\}$, and the order chosen is quite immaterial). Since each $z_i$ is a couple of elements of $\Omega$, we write $z_i\equiv(x_{i_1},x_{i_2})$ for any $i\in\{1,2,\dots,m^2\}$, with $(i_1,i_2)\in\{1,2,\dots,m\}^2$, and the process we want to consider is a sequence $\{(X_{h,1},X_{h,2})\}_{h\in\mathbb{N}}$, which will be written in vector notation as $\{\mathbf{X}_h\}_{h\in\mathbb{N}}$. So, the transition probabilities are
$$P(\mathbf{X}_h=z_j\mid\mathbf{X}_{h-1}=z_i)=P(X_{h,1}=x_{j_1},X_{h,2}=x_{j_2}\mid X_{h-1,1}=x_{i_1},X_{h-1,2}=x_{i_2})$$
(where we have set $z_j\equiv(x_{j_1},x_{j_2})$ and $z_i\equiv(x_{i_1},x_{i_2})$). Accordingly, the transition probabilities are labeled by 4 indexes, and we can set
$$P(X_{h,1}=x_{j_1},X_{h,2}=x_{j_2}\mid X_{h-1,1}=x_{i_1},X_{h-1,2}=x_{i_2})\equiv P_{i_1,i_2;\,j_1,j_2}(h).$$
For any $h\in\mathbb{N}$, the four-dimensional transition matrix $\mathcal{P}_h\equiv(P_{i_1,i_2;\,j_1,j_2}(h))$ will be called the joint transition matrix. Each of its elements expresses the probability that the chain passes from a state $(x_{i_1},x_{i_2})$ to a state $(x_{j_1},x_{j_2})$ at the $h$-th time. These are the joint transition probabilities. Obviously, together with these joint probabilities, we can also consider a number (in this case, $2m$) of different marginal probabilities. More precisely, for any $h\in\mathbb{N}$, we can consider the probabilities $P(X_{h,1}=x_{j_1}\mid X_{h-1,1}=x_{i_1},X_{h-1,2}=x_{i_2})\equiv P_{i_1,i_2;\,j_1}(h)$ and $P(X_{h,2}=x_{j_2}\mid X_{h-1,1}=x_{i_1},X_{h-1,2}=x_{i_2})\equiv P_{i_1,i_2;\,j_2}(h)$. The former is the probability that, if $\mathbf{X}_{h-1}=z_i$, then $\mathbf{X}_h$ takes one of the values $(x_{j_1},y)$ with $y\in\Omega$; the latter is the probability that, if $\mathbf{X}_{h-1}=z_i$, then $\mathbf{X}_h$ takes one of the values $(y,x_{j_2})$ with $y\in\Omega$. Thus, we have two three-dimensional matrices, each with $m^3$ entries. We shall denote by $P_{i_1,i_2;\,j_k}(h)$ ($k=1,2$) the entries of each matrix. By definition, we have
$$P_{i_1,i_2;\,j_1}(h)=\sum_{j_2=1}^{m}P_{i_1,i_2;\,j_1,j_2}(h),\qquad P_{i_1,i_2;\,j_2}(h)=\sum_{j_1=1}^{m}P_{i_1,i_2;\,j_1,j_2}(h).$$
Recall now that $S=\Omega^2\equiv\{(x_i,y_j)\}_{1\le i,j\le m}$, so we can agree to set $z_{ij}\equiv(x_i,y_j)\in S$ and write the first of relations (9) in the form
$$P_{z_{kl},x_r}(h)=\sum_{s=1}^{m}P_{z_{kl},z_{rs}}(h)=\sum_{s=1}^{m}P_{z_{kl},(x_r,y_s)}(h)$$
(where we have replaced the indexes $i_1$ and $i_2$ by $k$ and $l$, respectively, and the indexes $j_1$ and $j_2$ by $r$ and $s$, respectively). Next, we introduce the joint state vector $\mathbf{p}(h)\equiv(p_{ij}(h))$, where $p_{ij}(h)\equiv p_h(z_{ij})\equiv p_h(x_i,y_j)$ is the probability of the state $z_{ij}$ at time $h$, and the marginal state vector $\mathbf{p}_x(h)\equiv(p_{x,h}(x_i))_{1\le i\le m}$, where $p_{x,h}(x_i)$ is the probability of the set of states $\{(x_i,y_j)\}_{y_j\in\Omega}$, so that
$$p_{x,h}(x_i)=\sum_{j=1}^{m}p_h(z_{ij})=\sum_{j=1}^{m}p_h(x_i,y_j)$$
and, in virtue of the law of alternatives,
$$p_h(z_{ij})=\sum_{k=1}^{m}\sum_{l=1}^{m}p_{h-1}(z_{kl})\,P_{z_{kl},z_{ij}}(h).$$
Hence, replacing this last relation in relation (11), and taking into account relation (10), we obtain
$$p_{x,h}(x_i)=\sum_{j=1}^{m}\sum_{k=1}^{m}\sum_{l=1}^{m}p_{h-1}(x_k,y_l)\,P_{z_{kl},(x_i,y_j)}(h)=\sum_{k=1}^{m}\sum_{l=1}^{m}p_{h-1}(x_k,y_l)\sum_{j=1}^{m}P_{z_{kl},(x_i,y_j)}(h)=\sum_{k=1}^{m}\sum_{l=1}^{m}p_{h-1}(x_k,y_l)\,P_{z_{kl},x_i}(h).$$
This last equation, by subtracting the probability $p_{x,h-1}(x_i)$ from both sides, yields
$$p_{x,h}(x_i)-p_{x,h-1}(x_i)=\sum_{k=1}^{m}\sum_{l=1}^{m}p_{h-1}(z_{kl})\,P_{z_{kl},x_i}(h)-p_{x,h-1}(x_i).$$
Moreover, for any $(i,l)\in\{1,2,\dots,m\}^2$,
$$\sum_{i'=1}^{m}\sum_{l'=1}^{m}P_{(x_i,y_l),(x_{i'},y_{l'})}(h)=1,$$
so that
$$p_{x,h-1}(x_i)=p_{x,h-1}(x_i)\cdot 1=\sum_{l=1}^{m}p_{h-1}(z_{il})\sum_{i'=1}^{m}\sum_{l'=1}^{m}P_{(x_i,y_l),(x_{i'},y_{l'})}(h)=\sum_{l=1}^{m}p_{h-1}(z_{il})\sum_{i'=1}^{m}P_{(x_i,y_l),x_{i'}}(h)$$
and we can rewrite Equation (12) in the final form
$$p_{x,h}(x_i)-p_{x,h-1}(x_i)=\sum_{l=1}^{m}\Bigg[\sum_{\substack{k=1\\k\ne i}}^{m}p_{h-1}(z_{kl})\,P_{z_{kl},x_i}(h)-p_{h-1}(z_{il})\sum_{\substack{i'=1\\i'\ne i}}^{m}P_{(x_i,y_l),x_{i'}}(h)\Bigg].$$
Finally, if the random variables $X_{h,1}$ and $X_{h,2}$ are independent for any $h\in\mathbb{N}$, and the chain is stationary, then we find
$$p_{x,h}(x_i)-p_{x,h-1}(x_i)=\sum_{l=1}^{m}\Bigg[\sum_{\substack{k=1\\k\ne i}}^{m}p_{x,h-1}(x_k)\,p_{y,h-1}(y_l)\,P_{z_{kl},x_i}-p_{x,h-1}(x_i)\sum_{\substack{k=1\\k\ne i}}^{m}p_{y,h-1}(y_l)\,P_{z_{il},x_k}\Bigg].$$
The first term at the right-hand side will be called the gain term of the subset of states $S_i=\{z_{il}\}_{1\le l\le m}$, since it accounts for all the possible transitions from other states to a state of $S_i$, while the last term is called the loss term of $S_i$, since it accounts for all the possible transitions from a state of $S_i$ to any state outside $S_i$.
Therefore, we are on the way to showing that the statistical equations governing the evolution of many-particle systems (see Section 6) are simply equations describing the evolution of suitably defined Markov Chains. In the next Section, this conclusion will be completely achieved.
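The gain/loss balance above can be checked numerically. The following sketch (with $m=2$ and a randomly generated joint transition tensor of our own, purely for illustration) builds one step of a two-dimensional vector Markov Chain and verifies that the increment of the marginal state vector equals the gain term minus the loss term:

```python
import random

# Sketch (our own toy instance): a two-dimensional vector Markov Chain on
# Omega = {x1, x2}, i.e. m = 2 and four joint states z_kl = (x_k, y_l).
# We check numerically that the marginal state vector obeys the gain/loss
# balance derived in the text.
random.seed(0)
m = 2

# Joint transition tensor P[k][l][i][j] = P_{z_kl, z_ij}(h); each "row"
# (fixed source state z_kl) must sum to 1 over all target states z_ij.
P = [[[[random.random() for j in range(m)] for i in range(m)]
      for l in range(m)] for k in range(m)]
for k in range(m):
    for l in range(m):
        s = sum(P[k][l][i][j] for i in range(m) for j in range(m))
        for i in range(m):
            for j in range(m):
                P[k][l][i][j] /= s

# Joint state vector p[k][l] = p_{h-1}(z_kl).
p = [[0.1, 0.2], [0.3, 0.4]]

# One step of the chain: p_h(z_ij) = sum_kl p_{h-1}(z_kl) P_{z_kl, z_ij}(h).
p_next = [[sum(p[k][l] * P[k][l][i][j] for k in range(m) for l in range(m))
           for j in range(m)] for i in range(m)]

# Marginal transition probabilities P_{z_kl, x_i}(h) = sum_j P_{z_kl, z_ij}(h).
Pm = [[[sum(P[k][l][i][j] for j in range(m)) for i in range(m)]
      for l in range(m)] for k in range(m)]

# Marginal state vectors p_x before and after the step.
px_old = [sum(p[i][l] for l in range(m)) for i in range(m)]
px_new = [sum(p_next[i][j] for j in range(m)) for i in range(m)]

# Gain/loss form: the increment of p_x(x_i) equals gains from states z_kl
# with k != i minus losses from states z_il toward any x_k with k != i.
for i in range(m):
    gain = sum(p[k][l] * Pm[k][l][i]
               for l in range(m) for k in range(m) if k != i)
    loss = sum(p[i][l] * Pm[i][l][k]
               for l in range(m) for k in range(m) if k != i)
    assert abs((px_new[i] - px_old[i]) - (gain - loss)) < 1e-12
print("marginal balance verified")
```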

4. Continuous Semi-Markov Coupled Random Processes

We now introduce what we shall call a continuous semi-Markov coupled random process, that is, a random process $\{(\mathbf{X}_t,\boldsymbol{\chi}_t)\}$, where $t\in[0,T]\subset\mathbb{R}$ is a continuous parameter interpreted as the time variable and $T>0$ can be either finite or infinite. Here, $\{\mathbf{X}_t\}$ is a two-dimensional Markov Chain like the one described in Section 3, and $\{\boldsymbol{\chi}_t\}$ is an $m^2$-dimensional vector Bernoullian process. More precisely,
  • $\boldsymbol{\chi}_t\equiv(\chi_{ij,t})$, with $(i,j)\in\{1,2,\dots,m\}^2$;
  • For any $(i,j)\in\{1,2,\dots,m\}^2$ and any $t\in[0,T]$, $\chi_{ij,t}$ is the classical Bernoulli variable, with range $R(\chi_{ij,t})=\{0,1\}$;
  • For any pair of triples $(i_1,j_1,t_1)$ and $(i_2,j_2,t_2)\ne(i_1,j_1,t_1)$ in $\{1,2,\dots,m\}^2\times[0,T]$, the random variables $\chi_{i_1j_1,t_1}$ and $\chi_{i_2j_2,t_2}$ are independent.
The meaning of the last condition requires at least a short discussion. When interpreted in the framework of the description of the behavior of many-particle systems, it means that the probability that two particles interact at a given time $t_1$, when they are in two given states $x_{i_1}$ and $y_{j_1}$, is influenced neither by the assumption that they interacted at a different time $t_2$ nor by the assumption that an interaction between them has occurred when they were in a different couple of states. Of course, at first glance such a condition seems to introduce an undue restriction, but it is classically acknowledged to be quite plausible, since the probability of interaction is linked to states, not to particles, and the condition simply expresses the almost obvious fact that the way in which the states $x_{i_1}$ and $y_{j_1}$ have been achieved should not be taken into account.
As already observed in [20], the state vectors obviously depend on time, but the transition probabilities and the probabilities associated with the Bernoulli variables $\chi_{ij,t}$ may or may not depend on time. Obviously, on a strictly empirical ground, a positive probability distribution $u_{ij}(t)$ on $[0,T]$ such that $P(\chi_{ij,t}=1)=u_{ij}(t)$ for any $t$ cannot be assigned; all that can be done is to assign a continuous “probability density” $\tau_{ij}(t)$ such that, for any $s\in[0,T]$ and any sufficiently small $\Delta t$, $\tau_{ij}(s)\,\Delta t$ is (approximately) the probability to find in the interval $[s,s+\Delta t]$ some points such that $\chi_{ij,t}=1$. Roughly speaking, and referring to the interpretation of probability as relative frequency, we can state that $\tau_{ij}(s)\,\Delta t$ is the length of the set $\{t\in[s,s+\Delta t]\mid\chi_{ij,t}=1\}$, so that $\tau_{ij}(s)$ is the ratio of that length to the length $\Delta t$. So, in terms of relative frequency, each $\tau_{ij}(s)$ must be interpreted as the instantaneous interaction rate in the states $x_i$ and $y_j$ at time $s$. We preliminarily consider the case in which the interaction rate $\tau_{ij}(s)$ is constant with respect to time for any $(i,j)\in\{1,2,\dots,m\}^2$, and write $\tau_{ij}(s)=\tau_{ij}$. So, for any assigned couple $(i,j)$, the Bernoulli variables $\chi_{ij,t_1}$ and $\chi_{ij,t_2}$ are identically distributed for any $(t_1,t_2)\in[0,T]^2$ and, for simplicity, we may write $\chi_{ij,t}\equiv\chi_{ij}$ for any $t$.
The continuous Markov Chain $\{\mathbf{X}_t\}_{t\in[0,T)}$ and the Bernoulli random process $\{\boldsymbol{\chi}_t\}_{t\in[0,T)}$ are coupled in the sense that
  • A function
$$\chi:z_{ij}\in\Omega^2\mapsto\chi_{ij}=\chi(z_{ij})$$
    is given;
  • For any $t\in[0,T]$, the transition matrix $\mathcal{P}_t$ is assumed to depend on $\boldsymbol{\chi}_t$ (and only on $\boldsymbol{\chi}_t$), in such a way that
$$P_{z_{kl},z_{ij}}=P_{z_{kl},z_{ij}\mid\chi_{kl}=1}\,\tau_{kl}\,\Delta t+P_{z_{kl},z_{ij}\mid\chi_{kl}=0}\,(1-\tau_{kl}\,\Delta t),$$
    where $P_{z_{kl},z_{ij}\mid\chi_{kl}=1}$ (independent of time) is the transition probability from the state $z_{kl}$ to the state $z_{ij}$ under the assumption $\chi_{kl}=1$, and $P_{z_{kl},z_{ij}\mid\chi_{kl}=0}$ is at any time the transition probability from the state $z_{kl}$ to the state $z_{ij}$ under the assumption $\chi_{kl}=0$;
  • We assume that
$$P_{z_{kl},z_{ij}\mid\chi_{kl}=0}(t)=\delta_{ki}\,\delta_{lj},\qquad\forall t\in[0,T],$$
    where $\delta_{ki}$ and $\delta_{lj}$ are the well-known Kronecker symbols, that is, $\delta_{ki}=0$ when $i\ne k$ and $\delta_{ki}=1$ when $i=k$, and $\delta_{lj}=0$ when $j\ne l$ and $\delta_{lj}=1$ when $j=l$. This means that the Chain remains in each state $z_{ij}$ when $\chi_{ij}=0$ (notice that the Markov Chain is stationary if and only if the matrix $P_{z_{ij},z_{kl}\mid\chi_{ij}=1}$ is constant with respect to $t$. This remark will prove meaningful in the final sections.)
Accordingly, it is readily seen that, if for any $t\in[0,T]$ the random variables $X_{t,1}$ and $X_{t,2}$ are assumed to be independent, then system (14) may be written as follows:
$$p_{x,t+\Delta t}(x_i)-p_{x,t}(x_i)=\Delta t\sum_{l=1}^{m}\Bigg[\sum_{\substack{k=1\\k\ne i}}^{m}\tau_{kl}\,p_{x,t}(x_k)\,p_{y,t}(y_l)\,P_{z_{kl},x_i\mid\chi_{kl}=1}-\tau_{il}\,p_{x,t}(x_i)\sum_{\substack{k=1\\k\ne i}}^{m}p_{y,t}(y_l)\,P_{z_{il},x_k\mid\chi_{il}=1}\Bigg]\qquad(i=1,2,\dots,m).$$
Next, by dividing both sides by $\Delta t$ and letting $\Delta t\to 0$, we obtain
$$\frac{dp_{x,t}}{dt}(x_i)=\sum_{l=1}^{m}\Bigg[\sum_{\substack{k=1\\k\ne i}}^{m}\tau_{kl}\,p_{x,t}(x_k)\,p_{y,t}(y_l)\,P_{z_{kl},x_i\mid\chi_{kl}=1}-\tau_{il}\,p_{x,t}(x_i)\sum_{\substack{k=1\\k\ne i}}^{m}p_{y,t}(y_l)\,P_{z_{il},x_k\mid\chi_{il}=1}\Bigg]\qquad(i=1,2,\dots,m),$$
if the transition matrix $\mathcal{P}_{\chi=1}\equiv(P_{z_{kl},z_{ij}\mid\chi_{kl}=1})$ is stationary.
A comparison between system (19) and system (23), which will be made in Section 6, now shows at once that the latter can be viewed simply as a particular interpretation of the basic Chapman–Kolmogorov vector equation for the marginal state vectors of a two-dimensional vector Markov Chain, whose random variables are pairs of states of the particles of the system. It could also be easily proved that the dimension of the vector Markov Chain describing the evolution of the system is strictly connected to the choice to allow and consider only pairwise interactions between the particles of the system. If one decided to allow and to consider multiple interactions (e.g., simultaneous interactions between $p>2$ particles), then the Markov Chain describing the evolution should be $p$-dimensional.
This will also allow us to state that all the conditions leading to the deduction of system (23) in the mathematical-physical framework can be reduced to the sole following axiom:
The random process of $p$-tuples of states of the particles of the system is a $p$-dimensional vector Markov Chain conditioned by simultaneous interactions of $p$ particles.
This conclusion allows us to acknowledge that all assumptions about the influence of the external world on the evolution of the system can and must involve only the form of the transition matrices and their possible dependence on time or external parameters. In the remaining part of the paper, we shall treat the case in which the transition matrices are in turn random variables depending on other random variables describing relevant features of the external world.
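As a numerical illustration of the gain/loss system just derived, the following sketch (all rates, conditional transition probabilities, and distributions are toy values of ours, with $m=2$) performs one forward-Euler step of the equations for the marginal distribution of the first component and checks that normalization is preserved, since the gain and loss terms balance in total:

```python
# Sketch (toy values of ours): one forward-Euler step of the marginal
# evolution equations for the x-component of the coupled process, with
# m = 2, constant interaction rates tau[k][l] and a stationary conditional
# transition matrix P1[k][l][i] = P_{z_kl, x_i | chi_kl = 1}.
m = 2
tau = [[0.5, 1.0], [1.5, 2.0]]          # interaction rate in state (x_k, y_l)
P1 = [[[0.3, 0.7], [0.6, 0.4]],         # P1[k][l][i]; sums over i equal 1
      [[0.2, 0.8], [0.9, 0.1]]]
px = [0.6, 0.4]                          # marginal distribution of X_{t,1}
py = [0.5, 0.5]                          # marginal distribution of X_{t,2}
dt = 0.01

def rhs(px, py):
    """Right-hand side of the gain/loss ODE for each state x_i."""
    out = []
    for i in range(m):
        gain = sum(tau[k][l] * px[k] * py[l] * P1[k][l][i]
                   for l in range(m) for k in range(m) if k != i)
        loss = sum(tau[i][l] * px[i] * py[l] * P1[i][l][k]
                   for l in range(m) for k in range(m) if k != i)
        out.append(gain - loss)
    return out

d = rhs(px, py)
px = [px[i] + dt * d[i] for i in range(m)]
# The total gain equals the total loss, so normalization is preserved.
print(sum(px))
```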

5. Markov Chains with Stochastic Transition Matrices: A Heuristic Description

In this Section, we leave aside for the moment any reference to Kinetic Theory and to the mathematical description of the evolution of a many-particle system, in order to discuss in quite general terms the Markov Chains for which the transition matrices are never assigned but simply treated as random variables, depending on the outcomes of some additional experiments. In particular, we want to offer an example of a sequence of extractions from an urn for which this is the case.
Confining ourselves to the case of stationary Markov Chains, consider a sequence of extractions from an urn $U$ containing $b$ black balls and $w$ white balls, and set $n=b+w$. The extractions are performed according to the following rule: for $h\ge 3$, before extracting the $h$-th ball from the urn, we replace in the urn the ball we have drawn at the $(h-2)$-th extraction. So, it is readily seen that the transition matrix at the $h$-th step is
$$\mathcal{P}_h\equiv\begin{pmatrix}\dfrac{b-1}{n-1}&\dfrac{w}{n-1}\\[6pt]\dfrac{b}{n-1}&\dfrac{w-1}{n-1}\end{pmatrix}$$
for any $h\ge 2$, at least if nothing modifies the state space (i.e., the urn) at any step. However, now, at each step, we actually intervene on the urn $U$, and our intervention will be decided by an extraction from another urn $U'$ containing only three balls, labeled in any way we want (for instance, one can be white, one can be black, and the third red). If we draw from $U'$ the white ball, then we add only a white ball in $U$; if we draw the black ball, then we add only a black ball; if we draw the red ball, then we leave $U$ unchanged; in any case, we replace the ball in the urn $U'$. Accordingly, the transition matrix from the outcomes of the first extraction from $U$ to those of the second one will be:
$$\mathcal{P}_1^1=\begin{pmatrix}\dfrac{b-1}{n}&\dfrac{w+1}{n}\\[6pt]\dfrac{b}{n}&\dfrac{w}{n}\end{pmatrix}$$
if we have drawn from $U'$ the white ball;
$$\mathcal{P}_1^2=\begin{pmatrix}\dfrac{b}{n}&\dfrac{w}{n}\\[6pt]\dfrac{b+1}{n}&\dfrac{w-1}{n}\end{pmatrix}$$
if the ball drawn from $U'$ is black; finally, $\mathcal{P}_1^3$ will simply be the matrix (20) if we have drawn from $U'$ the red ball. In the considered case, the matrices $\mathcal{P}_1^1$, $\mathcal{P}_1^2$ and $\mathcal{P}_1^3$ have the same probability $1/3$, but if $U'$ contained, for instance, three white balls, four black balls, and only two red balls, then the probability of $\mathcal{P}_1^1$ would be $3/9$, that of $\mathcal{P}_1^2$ would be $4/9$, and that of $\mathcal{P}_1^3$ would be $2/9$.
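The three matrices of the urn example can be assembled and checked mechanically. In the following sketch (written by us for illustration, with $b=4$ and $w=6$ as arbitrary values), exact rational arithmetic confirms that each row of the three candidate transition matrices is a probability distribution, and that the same holds for the matrix averaged over the three equally likely outcomes of the extraction from the ruling urn:

```python
from fractions import Fraction as F

# Sketch of the urn example: b black and w white balls in U, n = b + w.
# After the first extraction, an extraction from the ruling urn decides
# whether a white ball, a black ball, or nothing is added to U.  Rows are
# indexed by the color of the first draw (black first, then white).
b, w = F(4), F(6)
n = b + w

P_white = [[(b - 1) / n, (w + 1) / n],          # a white ball is added to U
           [b / n,       w / n      ]]
P_black = [[b / n,       w / n      ],          # a black ball is added to U
           [(b + 1) / n, (w - 1) / n]]
P_red   = [[(b - 1) / (n - 1), w / (n - 1)],    # U is left unchanged
           [b / (n - 1),       (w - 1) / (n - 1)]]

# Every row of every matrix is a probability distribution.
for P in (P_white, P_black, P_red):
    assert all(sum(row) == 1 for row in P)

# Expected transition matrix when the three ruling outcomes are equally
# likely (probability 1/3 each).
P_avg = [[(P_white[i][j] + P_black[i][j] + P_red[i][j]) / 3
          for j in range(2)] for i in range(2)]
print(P_avg)
```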
This example aims to introduce in as general terms as possible the notion of Markov Chains coupled with another random process. As a matter of fact, we considered a random process $\{X_i\}_{i\in\mathbb{N}}$, which is a time-discrete Markov Chain with state space $S\equiv\{0,1\}$ (where “0” stands for “black” and “1” stands for “white”), and another time-discrete independent random process $\{E_j\}_{j\in\mathbb{N}}$ with state space $S'\equiv\{0,1,2\}$ (where “0” stands for “black”, “1” stands for “white”, and “2” stands for “red”). The form of the transition matrix of process $\{X_i\}_{i\in\mathbb{N}}$ at each step $i$ is ruled by the outcomes of process $\{E_i\}_{i\in\mathbb{N}}$ at all the steps from the first one to the $i$-th one, so that we shall give the following two definitions:
  • The processes $\{X_i\}_{i\in\mathbb{N}}$ and $\{E_i\}_{i\in\mathbb{N}}$ are said to be coupled;
  • $\{E_i\}_{i\in\mathbb{N}}$ is called the ruling process of the Markov Chain $\{X_i\}_{i\in\mathbb{N}}$.
Notice, however, that the variables of the ruling process are assumed to be not only independent but also identically distributed. While independence is assumed only for the sake of simplicity, and does not affect the generality of our description, the identical distribution of the random variables $E_i$ ($i\in\mathbb{N}$) assures the independence from $h$ of all conditional transition probabilities, at least when each transition matrix $\mathcal{P}_i$ is ruled only by $E_i$ and is independent of $E_h$ for any $h<i$. Of course, the above elementary example does not satisfy such a condition, but it is possible to find different examples, though less simple, for which it is actually met.
In order to go a little deeper into the behavior of a Markov Chain in the presence of a ruling process, let us denote by $\mathcal{X}\equiv\{X_i\}_{i\in\mathbb{N}}$ the given Markov Chain and by $\mathcal{E}\equiv\{E_i\}_{i\in\mathbb{N}}$ its ruling process, by $D\equiv\{u_1,\dots,u_m\}$ the state space of $\mathcal{X}$ and by $S\equiv\{\eta_1,\dots,\eta_n\}$ that of $\mathcal{E}$, by $\mathbf{p}_i\equiv(p_i(u_1),p_i(u_2),\dots,p_i(u_m))$ the absolute probability distribution on $D$ at the $i$-th step (the $i$-th state vector of the chain) and by $\boldsymbol{\pi}_i\equiv(\pi_i(\eta_1),\pi_i(\eta_2),\dots,\pi_i(\eta_n))$ the $i$-th state vector of the ruling process $\mathcal{E}$. Moreover, assuming once and for all that the transition matrix of $\mathcal{X}$ at each step is ruled only by the value of $\mathcal{E}$ at the same step, we shall use the symbols $\mathbf{p}_{k|\eta_h}\equiv(p_k(u_1|\eta_h),p_k(u_2|\eta_h),\dots,p_k(u_m|\eta_h))$ and $\mathcal{P}_k(\cdot|\eta_h)\equiv(P_{k,ij}(\eta_h))_{1\le i,j\le m}$ to denote the state vector at the $k$-th step and the transition matrix of $\mathcal{X}$ at the same step conditional to the assumption that $E_k=\eta_h$.
Now, given the initial state vector $\mathbf{p}_0$ of $\mathcal{X}$, we obviously have
$$\mathbf{p}_{1|\eta_h}=\mathbf{p}_0\,\mathcal{P}_0(\cdot|\eta_h),\qquad \mathbf{p}_{2|\eta_h}=\mathbf{p}_{1|\eta_h}\,\mathcal{P}_1(\cdot|\eta_h),\qquad\dots,\qquad \mathbf{p}_{k|\eta_h}=\mathbf{p}_{k-1|\eta_h}\,\mathcal{P}_{k-1}(\cdot|\eta_h),\qquad\forall\eta_h\in S,$$
so that
$$\mathbf{p}_{k|\eta_h}=\mathbf{p}_0\,\mathcal{P}_0(\cdot|\eta_h)\,\mathcal{P}_1(\cdot|\eta_h)\cdots\mathcal{P}_{k-1}(\cdot|\eta_h),\qquad\forall k\in\mathbb{N},\ \forall\eta_h\in S.$$
This shows that, under the assumption that the random variables $E_i$ are independent and identically distributed, a chain conditioned to the repetition of the same “external” event is stationary. More generally, for any fixed $k$, consider any element $\eta^{(k)}\equiv(\eta_{i_1},\eta_{i_2},\dots,\eta_{i_k})$ of $S^k$. One has
$$\mathbf{p}_{1|\eta_{i_1}}=\mathbf{p}_0\,\mathcal{P}_0(\cdot|\eta_{i_1}),\qquad \mathbf{p}_{2|(\eta_{i_1},\eta_{i_2})}=\mathbf{p}_{1|\eta_{i_1}}\,\mathcal{P}_1(\cdot|\eta_{i_2}),\qquad\dots,\qquad \mathbf{p}_{k|\eta^{(k)}}=\mathbf{p}_{k-1|(\eta_{i_1},\dots,\eta_{i_{k-1}})}\,\mathcal{P}_{k-1}(\cdot|\eta_{i_k}),\qquad\forall\eta^{(k)}\in S^k,$$
so that
$$\mathbf{p}_{k|\eta^{(k)}}=\mathbf{p}_0\,\mathcal{P}_0(\cdot|\eta_{i_1})\,\mathcal{P}_1(\cdot|\eta_{i_2})\cdots\mathcal{P}_{k-1}(\cdot|\eta_{i_k}),\qquad\forall k\in\mathbb{N},\ \forall\eta^{(k)}\in S^k$$
(note that the symbol $\mathbf{p}_{k|\eta^{(k)}}$ denotes the state vector at the $k$-th step depending on all the external events from the first step to “just before” the $k$-th step).
These relations are of the greatest importance for applications to the behavior of many-particle systems subjected to random external influences, not only to understand its dependence on the sequence of all past and present external events but also to see that a complete description of such behavior cannot ignore a comparison between different predictions corresponding to different sequences of such events.
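These relations are easy to explore numerically. The following sketch (a two-state chain with two possible external events; all numbers are ours, and step-independent conditional matrices are assumed purely for simplicity) computes the conditioned state vector for two different sequences of external events and shows that the corresponding predictions differ:

```python
# Sketch (toy numbers of ours): the state vector of a chain conditioned on
# a sequence of external events eta^(k), computed as the product
# p_0 P_0(.|eta_1) P_1(.|eta_2) ... P_{k-1}(.|eta_k), here with
# step-independent conditional matrices P(.|eta).

def mat_vec(p, P):
    """Row vector times matrix."""
    n = len(P)
    return [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]

# One conditional transition matrix per external event eta in {0, 1}.
P_cond = {0: [[0.9, 0.1], [0.2, 0.8]],
          1: [[0.5, 0.5], [0.5, 0.5]]}
p0 = [1.0, 0.0]

def conditioned_state(events):
    """Apply the conditional matrices along the given event sequence."""
    p = p0
    for eta in events:
        p = mat_vec(p, P_cond[eta])
    return p

# Different sequences of external events yield different predictions.
print(conditioned_state([0, 0, 0, 0]))
print(conditioned_state([0, 1, 0, 1]))
```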

6. Kinetic-Theoretic Models: A Recall

As laid out in Section 4, the equations expressing the link between the state vectors of a coupled semi-Markov continuous process via the transition matrices, when written in the form of differential equations, simply become the equations describing in statistical terms the evolution of a many-particle system according to the Kinetic Theory for Active Particles. In order to clarify this point, we devote this Section to recalling some basic features of the kinetic-theoretic scheme, so that, in the subsequent Section, once the rôle of Markov Chains in the equations of the model is pointed out, we shall be able to study the way in which the dependence on random events modifies the equations themselves.
A complex system is a set $S$ of a very large number of objects, called individuals or active particles, that, as already laid out in the Introduction and shown in the papers cited therein, can represent not only particles of a mechanical system but also cells, living individuals, or human beings. The state of each individual in $S$ is defined, according to the context in which we aim to develop our description, as a scalar variable $u$ or a vector variable $\mathbf{u}\equiv(u_1,u_2,\dots,u_m)$ which, much more for historical reasons than for reasons suggested by the context, is called activity or, more simply, a state variable. This variable may describe the kinetic state of a material particle, or the activity of a cell in its interactions with other cells of a different tissue, or the health state of a living individual, or the level of wealth and the social condition of a human being in a collectivity. According to the context, the activity may be assumed to take its values in a discrete domain (typically, a finite or countable subset of $\mathbb{Z}$ or $\mathbb{Z}^m$) or in a continuous domain (typically, a bounded or unbounded real interval or a bounded or unbounded domain of $\mathbb{R}^m$). In any case, the domain $D_u$ (or $D_{\mathbf{u}}$) in which the state variable is allowed to take its values is called the state space of the system [21].
As in Boltzmann’s Kinetic Theory of Gases [7], the viewpoint from which the mathematical framework is developed is statistical, i.e., we are interested in describing the evolution in time of the system as a whole rather than the way in which the states of single particles vary with time. Since any precise description of the state of each particle of $S$ must be given up, for both theoretical and technical reasons, we decide to study the relative frequency (or probability) distribution over the state space at each instant. In other words, the state variable is conceived as a random variable, and the goal of the study becomes the forecast of the evolution of its probability density function as well as of some of its parameters (expected value, moment of order two, standard deviation, and so on) [21].
In many cases of interest, $S$ is considered to be split into a family $\{S_1,S_2,\dots,S_k\}$ of subsystems, called its functional subsystems. The introduction of such subsystems aims to describe the interactions between different subclasses of individuals of $S$, characterized by different reactions to interactions, and offers several applications: (1) in biology, for instance, to describe the fight between tumor cells and the immune system, or the competition between different species to model Darwinian selection; (2) in social sciences, to model interactions between social classes; and (3) in economics, to model the fluxes of wealth, e.g., between the class of financial managers, that of entrepreneurs, and that of salaried workers.
In some cases, the same variable can be used to identify the state of the members of all the functional subsystems, while in other cases, different state variables must be used for different subsystems.
In view of the aims of the present paper, we shall, however, disregard any possible decomposition of the system into different subsystems. Such a decomposition would simply result in a quite useless complication of notation and a multiplication of similar equations. Accordingly, in this Section, we shall describe in some detail only the case of a system S consisting of one subsystem equal to the whole of S. The state space associated with the system will be denoted by the symbol
$$D = \{u_1, u_2, \ldots, u_m\}.$$
For any t in a time interval $I \subseteq \mathbb{R}$, the state of S at time t will be identified by a probability distribution (also called a state vector) $f_t \equiv (f_t(u_1), f_t(u_2), \ldots, f_t(u_m))$ on D, and, for any $k \in \{1, 2, \ldots, m\}$, we can define the function $f_k : t \in I \mapsto f_k(t) \equiv f_t(u_k) \in [0, 1]$. According to this definition, we have
$$\sum_{k=1}^{m} f_k(t) = \sum_{k=1}^{m} f_t(u_k) = 1.$$
Now, the evolution of the system S is viewed as a time-continuous stochastic process, and the time derivative of each probability function $f_k$ is expressed, according to the law of alternatives, in terms of transition probabilities. More precisely, for any $(r, s, j) \in \{1, 2, \ldots, m\}^3$, the symbol $F_r^{sj} \equiv F(u_s, u_j; u_r)$ will denote the probability that a particle of S falls from the state $u_s$ to the state $u_r$ after an interaction with another particle of S which is in the state $u_j$; accordingly,
$$\sum_{r=1}^{m} F_r^{sj} = 1, \qquad \forall (s, j) \in \{1, 2, \ldots, m\}^2.$$
Thus, using the law of alternatives, we write the following system of differential equations
$$\frac{df_h}{dt}(t) = \sum_{i,j=1}^{m} \tau_{ij}\, F_h^{ij}\, f_i(t)\, f_j(t) \;-\; f_h(t) \sum_{i,j=1}^{m} \tau_{hj}\, F_i^{hj}\, f_j(t), \qquad (23)$$
where, for any $(r, s) \in \{1, 2, \ldots, m\}^2$, $\tau_{rs} \equiv \tau(u_r, u_s)$ is the so-called encounter rate of particles of S in the states $u_r$ and $u_s$, that is, the number of pairwise interactions per time unit between particles of S that are in the states $u_r$ and $u_s$. Thus, for any sufficiently small $\Delta t$, the product $\tau_{rs} \Delta t$ is the probability that one such interaction occurs in a time interval of length $\Delta t$, provided the individuals involved are in the states $u_r$ and $u_s$ (accordingly, $\tau_{rs} \Delta t$ is in turn a conditional probability, and $\tau_{rs}$ is the ratio of a conditional probability to time).
The two terms on the right-hand side of Equation (23) express, respectively, the increase in the probability of state $u_h$, as the probability that some «candidate» particles of S in a state $u_i$ interact with some «field» particles of S in a state $u_j$ and, as a consequence, fall into the state $u_h$ with a positive probability, and the decrease in the probability of state $u_h$, as the probability that some «test» particles of S in the state $u_h$ interact with some «field» particles of S in a state $u_j$ and, as a consequence, leave the state $u_h$ with a positive probability.
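Although the paper develops the theory analytically, the gain–loss structure of Equation (23) is easy to explore numerically. The sketch below is an illustration of ours, not part of the original presentation: the encounter rates τ and the transition table F are randomly generated placeholders. It advances a distribution by forward Euler steps and shows that the total probability is conserved, as the normalization of F guarantees.

```python
import numpy as np

def ktap_step(f, tau, F, dt):
    """One forward-Euler step of system (23).

    f   : current distribution over the m states, shape (m,)
    tau : encounter rates tau[i, j], shape (m, m)
    F   : F[h, i, j] is the probability that a particle in state u_i
          falls into u_h after meeting a particle in state u_j;
          F sums to 1 over h for every pair (i, j)
    """
    gain = np.einsum('ij,hij,i,j->h', tau, F, f, f)
    loss = f * (tau @ f)           # uses sum_h F[h, i, j] = 1
    return f + dt * (gain - loss)

rng = np.random.default_rng(0)
m = 4
tau = rng.uniform(0.5, 1.5, (m, m))     # placeholder encounter rates
F = rng.uniform(size=(m, m, m))
F /= F.sum(axis=0)                      # normalize over the arrival state h
f = np.full(m, 1.0 / m)                 # uniform initial distribution
for _ in range(1000):
    f = ktap_step(f, tau, F, dt=1e-3)
print(f.sum())                          # total probability stays at 1
```

Since the gain and loss terms have equal total mass at every step, the Euler iteration preserves the normalization exactly up to floating-point roundoff.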

7. The Continuous Case and KTAP Systems with Stochastic Transition Matrices

Now, in order to deepen the reasoning presented in [20] (Section 7), we consider again a stochastic process $(\mathbf{X}, \boldsymbol{\chi}, \mathbf{E})$, where $\mathbf{X} \equiv \{X_t\}_{t \in [0,T)}$, $\boldsymbol{\chi} \equiv \{\chi_t\}_{t \in [0,T)}$, and $\mathbf{E} \equiv \{E_t\}_{t \in [0,T)}$; just for the sake of simplicity, $X_t \equiv (X_{1,t}, X_{2,t})$ is assumed to be two-dimensional and $E_t$ is assumed to be scalar for any $t \in [0,T)$; $\chi_t \equiv (\chi_{ij,t})_{1 \le i,j \le m}$ is instead an $m \times m$ matrix-valued random variable for any $t \in [0,T)$. For the sake of completeness and clarity, we collect again our assumptions on these processes:
  • We denote by $D^2 \equiv \{u_1, u_2, \ldots, u_m\}^2$ and $S \equiv \{e_1, e_2, \ldots, e_n\}$, respectively, the ranges of $X_t$ and $E_t$ for any $t \in [0, T)$. The range of $\chi_{ij,t}$ is simply the set $\{0, 1\}$ for any $(i, j) \in \{1, 2, \ldots, m\}^2$ and any $t \in [0, T)$;
  • $\mathbf{X}$ is a Markov Chain, and for any $t \in [0, T)$ the variable $X_t$ depends on all the variables $\chi_{ij,t}$ but is independent of any variable $\chi_{ij,t'}$ for $t' \neq t$;
  • For any $t \in [0, T)$, each variable $X_{h,t}$ ($h = 1, 2$) depends on the variables $E_s$ ($s \le t$) only via the dependence of its transition matrices $P_s$ ($s < t$) on the corresponding variables $E_s$;
  • For any $t \in [0, T)$, the variables $X_{1,t}$ and $X_{2,t}$ are independent;
  • For any pair of triples $(i, j, t)$ and $(i', j', t')$ in $\{1, 2, \ldots, m\}^2 \times [0, T)$, with $t \neq t'$, the random variables $\chi_{ij,t}$ and $\chi_{i'j',t'}$ are independent;
  • For any pair $(t, t')$ of times in $[0, T)$, the random variables $\chi_{ij,t}$ and $\chi_{ij,t'}$ are identically distributed;
  • Each variable $\chi_{ij,t}$ is independent of all the variables $E_s$, with $s \in [0, T)$;
  • The variables $E_t$ are independent and identically distributed.
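A time-discrete caricature of these assumptions may help fix ideas. In the sketch below (all matrices and probabilities are invented for illustration), each step draws an external event independently of the past, in the spirit of the last assumption; the event selects one of n stochastic matrices, and the chain X moves according to the selected matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 2                        # m states u_k, n external events e_j

# One hypothetical m x m stochastic matrix per external event e_j
P = rng.uniform(size=(n, m, m))
P /= P.sum(axis=2, keepdims=True)  # each row sums to 1

q = np.array([0.9, 0.1])           # i.i.d. law of the events E_t

def sample_path(steps, x0=0):
    """Sample a trajectory of X: at each step an event e_j is drawn
    independently of the past, and X moves with the matrix P[j]."""
    x, path = x0, [x0]
    for _ in range(steps):
        j = rng.choice(n, p=q)         # external event, independent of history
        x = rng.choice(m, p=P[j, x])   # Markov transition under that event
        path.append(int(x))
    return path

path = sample_path(50)
print(path[:10])
```

Conditionally on the sequence of events, the path is an ordinary nonstationary Markov chain; unconditionally, the transition matrix itself is random, which is exactly the situation studied in this Section.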
This stated, before going on to the technical development of the consequences of our definitions and assumptions, we need to discuss and explain their meaning in some detail. To this end, we start by outlining an example of a stochastic process of the type under examination, and try to show that it makes the above conditions appear quite reasonable.
It is to be noted that a complete description of any example of a time-continuous process of this kind would require a long and rather complicated treatment, which would go far beyond the limits of this paper and calls for a separate one. So, we shall confine ourselves to a concise outline. Furthermore, as in the case of time-discrete processes, we shall not refer to the case of interacting subjects, because this would force us to consider at least two-dimensional vector variables, and we want to avoid formal complications.
This stated, consider the evolution in time of the bank account of any subject we may choose. Such an evolution is readily seen to be well described as a continuous “random walk”, once the probability distribution on the set of possible amounts of payments and incomes, together with their rates, has been assigned; the probability of any possible amount of money in the account at each instant t does not depend on its whole history but, roughly speaking, only on the amount at instant $t - dt$. Now, as the “ruling” urn, we choose a set of possible abrupt external events producing unpredictable new expenses or incomes (climate changes requiring unusual heating or cooling expenses, unexpected taxes, thefts, fires, earthquakes, unexpected inheritances, lottery winnings, unexpected salary advancements), all labeled with the amounts of money they could produce, so that the range D and the range S are the same set (with different probability distribution functions, of course). These events are random, with rates (probabilities) that are usually very low and must be estimated by means of a statistical analysis of the history of similar events; they are very likely to be independent of each other, as it is difficult to assume that any one of the above events takes the preceding ones into account in order to decide whether it will take place or not. This gives a full explanation of assumption 8. In the same way, the assumptions on the variables $\chi_{ij,t}$ seem very plausible, as these variables simply express accounting information, which is essentially linked to the life and habits of the subject under examination; we could say that they define the system.
We are well aware that, while the condition on $\mathbf{E}$, expressing the abruptness and randomness of external events, can be assumed in almost any application of the model, the conditions on the interaction rates would need to be modified according to the meaning of the word “interaction” in each context, and could turn out to be quite unrealistic for more complicated examples; this point cannot be examined here, and must be postponed to future analysis.
Now, turning again to the formal development proposed above, in order to study the form of System (19) when the transition matrices of the Markov Chain $\mathbf{X}$ depend on the values of the random variables $E_t$, we shall first agree to set $\mathbf{u}_{hk} \equiv (u_h, u_k)$ and to denote by the symbol $P(\mathbf{u}_{il}, u_k \mid \chi_{il} = 1, E_t = e_j)$ the probability that the couple of states $(u_i, u_l)$ falls into a state whose first component is $u_k$ when not only a particle in the state $u_i$ interacts with a particle in the state $u_l$ but also the random variable $E_t$ takes the value $e_j$. Furthermore, we agree to denote by $\rho_{ilj} \Delta t$ the joint probability that an interaction between a particle in the state $u_i$ and a particle in the state $u_l$ occurs and $E_t = e_j$ in any time interval of length $\Delta t$, so that $\rho_{ilj}$ will be the instantaneous probability density of these joint events (which we have implicitly assumed to be constant).
In addition, we must also introduce, for any $r \in \mathbb{N}$, the symbol $\mathbf{e}^{(r)} \equiv (e_{h_1}, e_{h_2}, \ldots, e_{h_r})$ to denote any unspecified element of $S^r$. For reasons that will become clear later, we also agree to denote by $\sigma_{h_1 h_2 \cdots h_r}$ the joint probability of the events $e_{h_1}, e_{h_2}, \ldots, e_{h_r}$. Note that, in view of the independence of $\chi_{ij,t}$ from the variables of the process $\mathbf{E}$, the product $\tau_{ij} \sigma_{h_1 h_2 \cdots h_r}$ is the joint probability density of the events $e_{h_1}, e_{h_2}, \ldots, e_{h_r}$ and of interactions between a particle in the state $u_i$ and a particle in the state $u_j$. According to the classical treatment of the kinetic model, we will assume that $\tau_{ij}$ and $\sigma_{h_1 h_2 \cdots h_r}$ are independent of time for any $i$, $j$, $h_1$, $h_2$, …, $h_r$.
This stated, in the time-continuous model with transition matrices that vary randomly over time depending on the values of the variables E t , System (19) can be rewritten in many ways, according to the information we want to take into account and the information we want to obtain.
First of all, if we want to link the instantaneous variation of the distribution of the states over the particles of the system to assumptions about possible past r-tuples of external events (where r is assigned arbitrarily in advance), we may simply consider the system
$$\frac{dp_t}{dt}(u_i \mid \mathbf{e}^{(r)}, E_t = e_h) = \sum_{l=1}^{m} \left[ \sum_{\substack{k=1 \\ k \neq i}}^{m} \tau_{kl}\, p_t(u_k \mid \mathbf{e}^{(r)})\, p_t(u_l \mid \mathbf{e}^{(r)})\, P(\mathbf{u}_{kl}, u_i \mid \chi_{kl} = 1, E_t = e_h) - \tau_{il}\, p_t(u_i \mid \mathbf{e}^{(r)}) \sum_{\substack{k=1 \\ k \neq i}}^{m} p_t(u_l \mid \mathbf{e}^{(r)})\, P(\mathbf{u}_{il}, u_k \mid \chi_{il} = 1, E_t = e_h) \right], \quad (i = 1, 2, \ldots, m), \qquad (24)$$
where (a) we have dropped the index of the component of the state vectors because the variable is always the same; (b) $\mathbf{u}_{kl} \equiv (u_k, u_l)$ and $\mathbf{u}_{il} \equiv (u_i, u_l)$; (c) the conditional symbol “$\mid \mathbf{e}^{(r)}$” should be read “under the assumption that $r$ variables of the process $\mathbf{E}$ in the time interval $[0, t)$ have taken the values $e_{h_1}, e_{h_2}, \ldots, e_{h_r}$”; (d) we have set
$$P_{\mathbf{u}_{il}, u_k \mid \chi_{il} = 1, E_t = e_h} \equiv P_{\mid \chi_{il} = 1}(\mathbf{u}_{il}, u_k \mid E_t = e_h) \equiv P(\mathbf{u}_{il}, u_k \mid \chi_{il} = 1, E_t = e_h).$$
Note that, as a matter of fact, System (24) is not just one system, but a whole set of $n^r$ systems of equations, one for each possible choice of the assumption $\mathbf{e}^{(r)}$ in $S^r$. As we shall discuss in detail in the final Section of this paper, this would allow us to compare the possible behaviors of the many-particle system under examination corresponding to different possible histories of external influences, if we could treat the distributions on the right-hand side as known functions of $\mathbf{e}^{(r)}$. Since this is not the case, bearing in mind the independence from time of $\sigma_{h_1 h_2 \cdots h_r}$ for any choice of $(h_1, h_2, \ldots, h_r) \in \{1, 2, \ldots, n\}^r$, we may multiply each system of the set by the corresponding $\sigma_{h_1 h_2 \cdots h_r}$, and then add side by side all the $n^r$ systems of the set, to obtain the unique system
$$\frac{dp_t}{dt}(u_i \mid E_t = e_h) = \sum_{l=1}^{m} \left[ \sum_{\substack{k=1 \\ k \neq i}}^{m} \tau_{kl}\, p_t(u_k)\, p_t(u_l)\, P(\mathbf{u}_{kl}, u_i \mid \chi_{kl} = 1, E_t = e_h) - \tau_{il}\, p_t(u_i) \sum_{\substack{k=1 \\ k \neq i}}^{m} p_t(u_l)\, P(\mathbf{u}_{il}, u_k \mid \chi_{il} = 1, E_t = e_h) \right], \quad (i = 1, 2, \ldots, m), \qquad (25)$$
where we have taken into account the law of total probability
$$\sum_{h_1, h_2, \ldots, h_r = 1}^{n} p_t(u_i \mid \mathbf{e}^{(r)}, E_t = e_h)\, \sigma_{h_1 h_2 \cdots h_r} = p_t(u_i \mid E_t = e_h),$$
$$\sum_{h_1, h_2, \ldots, h_r = 1}^{n} p_t(u_k \mid \mathbf{e}^{(r)})\, p_t(u_l \mid \mathbf{e}^{(r)})\, \sigma_{h_1 h_2 \cdots h_r} = p_t(u_k)\, p_t(u_l),$$
$$\sum_{h_1, h_2, \ldots, h_r = 1}^{n} p_t(u_i \mid \mathbf{e}^{(r)})\, p_t(u_l \mid \mathbf{e}^{(r)})\, \sigma_{h_1 h_2 \cdots h_r} = p_t(u_i)\, p_t(u_l).$$
It is obvious that, while the last system (25) is much simpler to manage, the set of systems expressed by (24) gives much more information in connection with the sequence of external events preceding the one identified by the value of E t . This point requires a careful discussion, which will be proposed in the concluding Section of the paper.

8. Conclusions and Perspectives

As the reader will have undoubtedly noticed, the main aim of the present paper was to give a detailed and precise account, in the light of the basic language of Markov Chains, of the case in which the transition matrices appearing in the equations governing the evolution of a many-particle system in turn depend on stochastic events taking place in the external world and influencing the behavior of the system. This case was only hinted at in [20], which focused on possible applications of the Kinetic Theory for Active Particles to social and economic problems, and has been discussed in more general terms here.
In order to synthesize the main significant aspects of the discussion presented here, we want to direct the attention of the reader to the fact that the dependence of the form of the transition matrices of a Markov Chain on random events at each step produces a “tree” of possible ordinary, deterministic chains, whose number of branches, in the time-discrete case, grows exponentially with the number of steps. More precisely, at the $r$-th step, if the number of possible events influencing the form of the transition matrix is $n$, the number of chains is $n^r$, even when the form of the $r$-th transition matrix is modified only by the $r$-th random external event.
In the time-continuous case (which is the most interesting one in view of applications to physical or biological or economic or social problems), the situation is far more complicated. In principle, at any time $t > 0$, we could have any number of different sequences of transition matrices, with different probabilities, and we should formulate suitable assumptions about the sequences to be taken into account. However, since the form of the system hides the dependence of the distributions on $\mathbf{e}^{(r)}$, we are forced to use the law of alternatives, and to ignore any assumption on the sequence of external events preceding the one influencing the transition matrix at time $t$.
Conversely, as a consequence of this remark, the solution of System (25) on an arbitrary time interval $[0, T)$ is somewhat misleading, as it suggests that one particular external event took place at time $t = 0$ and influenced the behavior of the system in the whole interval $[0, T)$; in this way, we lose any reference to a sequence of random external events. The only correct way to apply the notion of a “randomly variable transition matrix” is the following: we assume that a sequence of times $\{t_1, t_2, \ldots, t_r\} \subset [0, T)$ exists such that some events $e_{h_1}, e_{h_2}, \ldots, e_{h_r}$ happened, influencing the transition matrix in such a way that, in the time interval $[t_j, t_{j+1})$, the system of equations governing the evolution is
$$\frac{dp_t}{dt}(u_i) = \sum_{l=1}^{m} \left[ \sum_{\substack{k=1 \\ k \neq i}}^{m} \tau_{kl}\, p_t(u_k)\, p_t(u_l)\, P(\mathbf{u}_{kl}, u_i \mid \chi_{kl} = 1) - \tau_{il}\, p_t(u_i) \sum_{\substack{k=1 \\ k \neq i}}^{m} p_t(u_l)\, P(\mathbf{u}_{il}, u_k \mid \chi_{il} = 1) \right], \quad (i = 1, 2, \ldots, m),$$
for $t \in [0, t_1)$,
$$\frac{dp_t}{dt}(u_i \mid E_t = e_{h_1}) = \sum_{l=1}^{m} \left[ \sum_{\substack{k=1 \\ k \neq i}}^{m} \tau_{kl}\, p_t(u_k)\, p_t(u_l)\, P(\mathbf{u}_{kl}, u_i \mid \chi_{kl} = 1, E_t = e_{h_1}) - \tau_{il}\, p_t(u_i) \sum_{\substack{k=1 \\ k \neq i}}^{m} p_t(u_l)\, P(\mathbf{u}_{il}, u_k \mid \chi_{il} = 1, E_t = e_{h_1}) \right], \quad (i = 1, 2, \ldots, m),$$
for $t \in [t_1, t_2)$ and
$$\frac{dp_t}{dt}(u_i \mid \mathbf{e}^{(j-1)}, E_t = e_{h_j}) = \sum_{l=1}^{m} \left[ \sum_{\substack{k=1 \\ k \neq i}}^{m} \tau_{kl}\, p_t(u_k \mid \mathbf{e}^{(j-1)})\, p_t(u_l \mid \mathbf{e}^{(j-1)})\, P(\mathbf{u}_{kl}, u_i \mid \chi_{kl} = 1, E_t = e_{h_j}) - \tau_{il}\, p_t(u_i \mid \mathbf{e}^{(j-1)}) \sum_{\substack{k=1 \\ k \neq i}}^{m} p_t(u_l \mid \mathbf{e}^{(j-1)})\, P(\mathbf{u}_{il}, u_k \mid \chi_{il} = 1, E_t = e_{h_j}) \right], \quad (i = 1, 2, \ldots, m),$$
for $t \in [t_j, t_{j+1})$, where $2 \le j \le r$ and $\mathbf{e}^{(j)} = (e_{h_1}, e_{h_2}, \ldots, e_{h_j})$. Since the distributions on the right-hand side of each system on the interval $[t_j, t_{j+1})$ are known, being obtained by solving the analogous system on the interval $[t_{j-1}, t_j)$, we obtain a possible evolution of the system on the whole interval $[0, T)$. More precisely, we obtain a solution for any assigned $r$-tuple $\mathbf{e}^{(r)} \equiv (e_{h_1}, e_{h_2}, \ldots, e_{h_r})$, and each solution has the same probability as the sequence $\{e_{h_1}, e_{h_2}, \ldots, e_{h_r}\}$, that is, the probability $\sigma_{h_1 h_2 \cdots h_r}$. In view of the identical distribution and independence of the random variables $E_t$, one has
$$\sigma_{h_1 h_2 \cdots h_r} = \prod_{s=1}^{r} q_{h_s},$$
where $q_{h_s} = P(E_t = e_{h_s})$ for any $t$.
Now, besides the complete irrelevance of any precise identification of the times $t_1, t_2, \ldots, t_r$, we must remark that $r \in \mathbb{N}$ is arbitrary, so that the number of possible solutions on the given time interval $[0, T)$ is potentially infinite. More precisely, we may repeat the same procedure for any $r \in \mathbb{N}$, to obtain $n^r$ solutions
$$\mathbf{p}_r(\mathbf{e}^{(r)}, t) \equiv \big(p_r(\mathbf{e}^{(r)}; u_1, t),\, p_r(\mathbf{e}^{(r)}; u_2, t),\, \ldots,\, p_r(\mathbf{e}^{(r)}; u_m, t)\big), \qquad t \in [0, T),$$
defined by the conditions
$$p(\mathbf{e}^{(r)}; u_i, t) = \begin{cases} p(u_i, t) & t \in [0, t_1) \\ p(u_i, t \mid E_t = e_{h_1}) \equiv p(u_i, t \mid \mathbf{e}^{(1)}) & t \in [t_1, t_2) \\ p(u_i, t \mid \mathbf{e}^{(1)}, E_t = e_{h_2}) \equiv p(u_i, t \mid \mathbf{e}^{(2)}) & t \in [t_2, t_3) \\ \quad\vdots & \quad\vdots \\ p(u_i, t \mid \mathbf{e}^{(r-2)}, E_t = e_{h_{r-1}}) \equiv p(u_i, t \mid \mathbf{e}^{(r-1)}) & t \in [t_{r-1}, t_r) \\ p(u_i, t \mid \mathbf{e}^{(r-1)}, E_t = e_{h_r}) \equiv p(u_i, t \mid \mathbf{e}^{(r)}) & t \in [t_r, T), \end{cases}$$
where $i = 1, 2, \ldots, m$.
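The piecewise construction just described can be sketched numerically. In the following illustration of ours (the event times, rates, and kernels are all invented placeholders), the distribution is advanced on each interval with the transition kernel selected by the corresponding event, the unconditioned kernel playing this role on the first interval.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 4, 3                        # m states, n possible external events

tau = rng.uniform(0.5, 1.5, (m, m))          # encounter rates (placeholder)
F_base = rng.uniform(size=(m, m, m))          # unconditioned kernel on [0, t_1)
F_base /= F_base.sum(axis=0)                  # sum over arrival states h is 1
F_e = rng.uniform(size=(n, m, m, m))          # one kernel per external event
F_e /= F_e.sum(axis=1, keepdims=True)

def evolve(f, F, t0, t1, dt=1e-3):
    """Advance the distribution on [t0, t1) with kernel F (forward Euler)."""
    for _ in range(int(round((t1 - t0) / dt))):
        gain = np.einsum('ij,hij,i,j->h', tau, F, f, f)
        loss = f * (tau @ f)
        f = f + dt * (gain - loss)
    return f

times = [0.0, 0.3, 0.6, 0.8, 1.0]   # 0 < t_1 < t_2 < t_3 < T (invented)
history = [0, 2, 1]                  # events e_{h_1}, e_{h_2}, e_{h_3}
kernels = [F_base] + [F_e[h] for h in history]

f = np.full(m, 1.0 / m)              # uniform initial distribution
for F, t0, t1 in zip(kernels, times, times[1:]):
    f = evolve(f, F, t0, t1)
print(f.sum())                        # total probability is conserved
```

Each interval starts from the distribution reached at the end of the previous one, exactly as in the chain of systems above; the final vector is one partial solution, associated with the chosen history of events.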
We cannot but acknowledge that this is a purely abstract treatment of the problem of describing the behavior of a many-particle system influenced by random external events. Furthermore, we cannot hide the fact that its abstract character, joined with the formal complication and the increased number of variables and parameters, can make the application of the model to real phenomena remarkably difficult. However, we are encouraged by the remark that the difficulty is in nature the same as in the classical kinetic model, in which the transition matrices are assumed to be constant; the difference is only technical. The subsequent steps, besides the classical existence, uniqueness, and stability problems for each of the partial solutions listed above, and the explicit computation of solutions (at least by means of numerical simulations in suitable particular cases), must be essentially of a statistical type, and mainly oriented by applications (this is, however, true also of the classical model). For each application, we need to establish whether an $r^*$ exists such that the rate of $r$-tuples of external events is zero for any $r > r^*$. At the same time, one should also study the expected behavior of the system, described by the relation
$$\bar{\mathbf{p}}(t) = \sum_{r=1}^{\infty} \rho_r\, \sigma_{h_1 h_2 \cdots h_r}\, \mathbf{p}_r(\mathbf{e}^{(r)}, t),$$
where $\rho_r$ is the probability density (rate) of a sequence of $r$ external events. It is quite clear that the terms corresponding to large values of $r$ become smaller as $r$ grows. However, the application to real systems on the one hand suggests, and on the other hand requires, that only a finite number of external events actually occur in any given time interval of assigned length. The analysis of these points is the natural development of the present proposal, and should be the object of future papers.
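For a fixed number r of external events (thus setting aside the factor ρ_r), the expected behavior can be approximated by enumerating all n^r histories, solving the piecewise system for each, and weighting every partial solution by σ = ∏ q. The toy sketch below, with invented rates and kernels, does exactly this and checks that the σ-weighted mixture is again a probability distribution.

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
m, n, r = 3, 2, 2                   # states, events, length of the history

tau = rng.uniform(0.5, 1.5, (m, m))          # encounter rates (placeholder)
F_e = rng.uniform(size=(n, m, m, m))          # one kernel per external event
F_e /= F_e.sum(axis=1, keepdims=True)         # normalize over arrival states
q = np.array([0.7, 0.3])                      # i.i.d. law of the events E_t

def evolve(f, F, T=0.5, dt=1e-3):
    """Advance the distribution for a time T with kernel F (forward Euler)."""
    for _ in range(int(round(T / dt))):
        gain = np.einsum('ij,hij,i,j->h', tau, F, f, f)
        loss = f * (tau @ f)
        f = f + dt * (gain - loss)
    return f

f0 = np.full(m, 1.0 / m)
p_bar = np.zeros(m)
for seq in itertools.product(range(n), repeat=r):   # all n^r histories e^(r)
    sigma = np.prod(q[list(seq)])                   # sigma = prod_s q_{h_s}
    f = f0
    for e in seq:                                    # one interval per event
        f = evolve(f, F_e[e])
    p_bar += sigma * f                               # sigma-weighted mixture
print(p_bar.sum())                                   # mixture sums to 1
```

Since the weights σ sum to one and each partial solution is itself a probability vector, the mixture is automatically normalized; this is the finite-r counterpart of the expectation written above.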

Author Contributions

Conceptualization, B.C. and M.M.; Formal analysis, B.C. and M.M.; Writing—original draft, B.C. and M.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

The research by M.M. has been carried out under the auspices of GNFM (National Group of Mathematical-Physics) and of INdAM (National Institute of Advanced Mathematics).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Waldmann, M.R.; Martignon, L. A Bayesian network model of causal learning. In Proceedings of the Twentieth Annual Conference of the Cognitive Science Society; Routledge: London, UK, 2022; pp. 1102–1107. [Google Scholar]
  2. Sajid, Z.; Khan, F.; Zhang, Y. Integration of interpretive structural modelling with Bayesian network for biodiesel performance analysis. Renew. Energy 2017, 107, 194–203. [Google Scholar] [CrossRef]
  3. Hosseini, S.; Barker, K. A Bayesian network model for resilience-based supplier selection. Int. J. Prod. Econ. 2016, 180, 68–87. [Google Scholar] [CrossRef]
  4. Toscani, G.; Sen, P.; Biswas, S. Kinetic exchange models of societies and economies. Phil. Trans. Royal Soc. A Math. Phys. Eng. Sci. 2022, 380, 2224. [Google Scholar] [CrossRef] [PubMed]
  5. Dimarco, G.; Pareschi, L.; Toscani, G.; Zanella, M. Wealth distribution under the spread of infectious diseases. Phys. Rev. E 2020, 102, 022303. [Google Scholar] [CrossRef] [PubMed]
  6. Bernardi, E.; Pareschi, L.; Toscani, G.; Zanella, M. Effects of vaccination efficacy on wealth distribution in kinetic epidemic models. Entropy 2022, 24, 216. [Google Scholar] [CrossRef] [PubMed]
  7. Boltzmann, L. Lectures on Gas Theory; Dover Publications: New York, NY, USA, 2012. [Google Scholar]
  8. Aylaj, B.; Bellomo, N.; Gibelli, L. Crowd Dynamics by Kinetic Theory Modeling: Complexity, Modeling, Simulations, and Safety; Springer: New York, NY, USA, 2020. [Google Scholar]
  9. Bellomo, N.; Bellouquid, A.; Gibelli, L.; Outada, N. A Quest Towards a Mathematical Theory of Living Systems; Birkhäuser: Basel, Switzerland, 2017. [Google Scholar]
  10. Bellomo, N.; Bertotti, M.L.; Delitala, M. From the kinetic theory of active particles to the modeling of social behaviors and politics. Qual. Quant. 2007, 41, 545–555. [Google Scholar] [CrossRef]
  11. Bellomo, N.; Brezzi, F. Traffic, crowds and swarms. Math. Model. Methods Appl. Sci. 2008, 18, 1145–1148. [Google Scholar] [CrossRef]
  12. Bellomo, N.; Degond, P.; Tadmor, E. (Eds.) Active Particles, Volume 1: Advances in Theory, Models, and Applications; Birkhäuser: Basel, Switzerland, 2017. [Google Scholar]
  13. Bertotti, M.L.; Modanese, G. Economic inequality and mobility in kinetic models for social sciences. Eur. Phys. J. Spec. Top. 2019, 225, 1945–1958. [Google Scholar] [CrossRef]
  14. Bianca, C.; Menale, M. Mathematical analysis of a thermostatted equation with a discrete real activity variable. Mathematics 2020, 8, 57. [Google Scholar] [CrossRef]
  15. Carbonaro, B.; Menale, M. Dependence on the Initial Data for the Continuous Thermostatted Framework. Mathematics 2019, 7, 602. [Google Scholar] [CrossRef]
  16. Chinesta, F.; Abisset-Chavanne, E. A Journey Around the Different Scales Involved in the Description of Matter and Complex Systems: A Brief Overview with Special Emphasis on Kinetic Theory Approaches; Springer: New York, NY, USA, 2017. [Google Scholar]
  17. Marsan, G.A.; Bellomo, N.; Gibelli, L. Stochastic Evolving Differential Games Toward a Systems Theory of Behavioral Social Dynamics. arXiv 2015, arXiv:1506.05699v2. [Google Scholar]
  18. Bellomo, N.; Marsan, G.A.; Tosin, A. Complex Systems and Society: Modeling and Simulation; Springer: New York, NY, USA, 2006. [Google Scholar]
  19. Carbonaro, B.; Vitale, F. Some remarks on vector Markov Chains and their applications to the description of many-particle systems. In Stochastic Processes—Theoretical Advances and Applications in Complex Systems; Kulasiri, D., Ed.; IntechOpen: London, UK, 2024. [Google Scholar]
  20. Carbonaro, B.; Menale, M. Markov Chains and Kinetic Theory: A Possible Application to Socio-Economic Problems. Mathematics 2024, 12, 1571. [Google Scholar] [CrossRef]
  21. Carbonaro, B. Modeling epidemics by means of the stochastic description of complex systems. Comput. Math. Methods 2021, 3, 1208–1220. [Google Scholar] [CrossRef]
  22. Lawler, G.F. Introduction to Stochastic Processes; Chapman and Hall/CRC: Boca Raton, FL, USA, 2006. [Google Scholar]
  23. Rozanov, Y.A. Probability Theory: A Concise Course; Dover Publications: New York, NY, USA, 2019. [Google Scholar]
  24. Gilch, L. Markov Chains: An Introduction: Lecture Notes; Independently Published: Chicago, IL, USA, 2022; ISBN 979-8358906143. [Google Scholar]
  25. Norris, J.R. Markov Chains; Cambridge University Press: Cambridge, UK, 1998. [Google Scholar]
  26. Crisanti, A.; Paladin, G.; Vulpiani, A. Products of Random Matrices in Statistical Mechanics; Springer: Berlin, Germany, 1993. [Google Scholar]
  27. Benaim, M.; Hurth, T. Markov Chains on Metric Spaces: A Short Course; Springer Nature: New York, NY, USA, 2022. [Google Scholar]