Article

Markov Chains and Kinetic Theory: A Possible Application to Socio-Economic Problems

by Bruno Carbonaro 1,† and Marco Menale 2,*,†

1 Dipartimento di Matematica e Fisica, Università degli Studi della Campania “L. Vanvitelli”, Viale Lincoln 5, 81100 Caserta, Italy
2 Dipartimento di Matematica e Applicazioni “R. Caccioppoli”, Università degli Studi di Napoli “Federico II”, Via Cintia, Monte S. Angelo, 80126 Naples, Italy
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2024, 12(10), 1571; https://doi.org/10.3390/math12101571
Submission received: 8 April 2024 / Revised: 7 May 2024 / Accepted: 12 May 2024 / Published: 17 May 2024
(This article belongs to the Special Issue Kinetic Models of Collective Phenomena and Data Science)

Abstract

A very important class of models widely used nowadays to describe and predict, at least in stochastic terms, the behavior of many-particle systems (where the word “particle” is not meant in the purely mechanical sense: particles can be cells of a living tissue, or cars in a traffic flow, or even members of an animal or human population) is the Kinetic Theory for Active Particles, i.e., a scheme of possible generalizations and re-interpretations of the Boltzmann equation. Although this point is systematically disregarded in the literature on the subject, this scheme is based on Markov Chains, which are special stochastic processes with important properties that they share with many natural processes. This circumstance is carefully discussed here, not only to suggest the different ways in which Markov Chains can intervene in equations describing the stochastic behavior of any many-particle system, but also, as a preliminary methodological step, to point out the way in which the notion of a Markov Chain can be suitably generalized to this aim. As a final result of the discussion, we show how to develop new and quite plausible ways to take into account possible effects of the external world on a non-isolated many-particle system, with particular attention paid to socio-economic problems.

1. Introduction

In the past forty years, kinetic–theoretic models [1] (see Section 5 for more details) seem to be increasingly studied and applied in research on the evolution of many-particle systems. They seem to have provided a particularly versatile, expressive, and effective tool for formulating suitable equations describing such evolution in stochastic terms (see [2,3,4,5,6,7,8,9,10,11], but these are only a few examples of the papers devoted to this kind of study: in each of them, the reader can find much more complete bibliographic references that are nevertheless far from being exhaustive), though other interesting and effective models are available, e.g., the one based on Bayesian networks (see [12,13,14]). As pointed out in formal terms in [15], these equations are firmly based on the use of Markov Chains: in fact, transition matrices, which characterize these important stochastic processes, play a fundamental role in them, as they describe in stochastic terms the results of mutual interactions between the particles in the system. Nevertheless, in the literature about kinetic–theoretic models, Markov Chains are never explicitly mentioned (at least to our knowledge). So, the question naturally arises whether exploiting their role could improve our understanding of the terms of the equations and suggest new ways to achieve a better and more effective description of the behavior of systems interacting with the external world (in this connection, see [16,17], where any implicit use of the Principle of Inertia is renounced, and [18,19], where suitable «forcing» terms are introduced in the equations). In this connection, we want to show that the interpretation of the evolution of a many-particle system as an n-tuple of joined stochastic processes, one of which is a vector Markov Chain [15], allows us to describe interactions with the external world by assuming that the interaction rates and the transition matrices undergo stochastic variations at each step [15].
One of the most interesting applications of this possible extension of Kinetic Theory is in addressing socio-economic problems, which are already among the most widely studied topics in the framework of the stochastic description of the behavior of the widest possible class of many-particle systems. Before the birth of kinetic–theoretic models addressing human collectivities, the intervention of mathematics in social and economic sciences was reduced to statistics, used to assess the relative frequencies of different conditions of life, economic levels, and psychological attitudes connected to belonging to different social classes. However, purely statistical estimates are not sufficient for prediction; an accurate prediction, in purely stochastic terms, requires evolution equations, and the management of any human community requires accurate predictions. This has led to flourishing research devoted to treating socio-economic problems in the framework of the Kinetic Theory (see, e.g., [20,21,22,23,24]). However, as far as we are aware, with the only exception of [18,19], where socio-economic problems are proposed as possible interpretations of a more general scheme, the influence of environmental conditions has not been considered, and even in the above quoted papers such influence is described by introducing an “external force” acting directly on the distribution of states: till now, no studies have been conducted about external influences only indirectly modifying the distribution of the states, by producing direct variations only in transition probabilities and in interaction rates (see the references above and Section 7). The aim of the present paper is precisely to fill this gap, by showing how the evolution of a many-particle system can always be described as a vector time-continuous random process $\{(\mathbf{X}_t, \chi_t, \mathbf{E}_t)\}_{t \in [0,T)}$, where $\{\mathbf{X}_t\}$ is a vector Markov Chain, $\{\chi_t\}$ is a suitably defined vector Bernoulli process, and $\{\mathbf{E}_t\}$ is a vector (or scalar) random process expressing the variation in time of the environment. For the sake of simplicity, $\mathbf{X}_t$ is assumed to be a two-dimensional vector (for any $t \in [0,T]$), but in Section 7, albeit very briefly, we will consider the case of an n-dimensional vector with $n > 2$, with $\{\mathbf{E}_t\}$ a constant random process (notice that a constant random process on $[0,T]$ is a particular random process such that $\mathbf{E}_t = c$ for any $t \in [0,T]$, so that the associated probability distribution on $\mathbb{R}$ is defined by $\mathbf{P}(\mathbf{E}_t = c) = 1$ and $\mathbf{P}(\mathbf{E}_t = x) = 0$ for any $x \neq c$; here and in the following, any probability measure, when explicitly applied to events, will be denoted by a bold-face capital letter), to recover classical kinetic–theoretical equations.
The contents of this paper are distributed as follows. Section 2 is devoted to recalling some basic definitions and notions about Markov Chains as stochastic processes, with particular regard to discrete time-discrete Markov Chains. In Section 3, we introduce (two-dimensional) vector Markov Chains and describe some of their basic properties in the discrete time-discrete case; Section 4 is devoted to introducing what we have called the “continuous semi-Markov coupled random processes”, with particular concern for time-continuous processes; in Section 5, thanks to the result obtained in Section 4, we show how the equations in the Kinetic Theory, as applied to general many-particle systems, can be seen as a special form of the basic vector equation connecting the absolute probability distributions at different steps of a Markov Chain with each other and with the transition probabilities. Section 6 and Section 7 are devoted to showing how this circumstance opens at once the door to the interpretation of the interaction of a system with the external world as the non-stationarity of transition matrices (Section 6) or as the stochastic variation in the transition matrix at each step (Section 7). Finally, in Section 8, we discuss some possible research perspectives.

2. Some Background about Markov Chains

In this section, we begin the discussion we aim to develop in this paper by recalling some basic notions about stochastic processes and in particular about Markov Chains.
As is well known, a random process [25] is a sequence $\{X_\eta\}_{\eta \in H}$ of random variables (readers not acquainted with the notion of a “random variable” can usefully consult [26], p. 37) with a common range $S$ of possible values (the states), called the state space of the process. Both the set $H$ of indexes and the state space $S$ can be either a discrete or a continuous set. If $H$ is discrete ($H = \mathbb{N} = \{0, 1, 2, \dots, n, \dots\}$), then the process is said to be time-discrete; if $H$ is continuous ($H = [0, +\infty)$), then the process is said to be time-continuous. Analogously, if $S$ is discrete ($S = \{z_h\}_{h \in I} \subset \mathbb{R}$, with $I \subseteq \mathbb{Z}$), then the process is said to be discrete, while, if $S$ is continuous (i.e., any real interval), then the process is said to be continuous. From now to the end of this section, we shall first consider discrete (either time-discrete or time-continuous) processes. In both cases, for the sake of simplicity and without loss of generality (as regards Markov Chains, but also in connection with any other stochastic process, we simply avoid writing matrices with infinitely many rows and columns), the state space $S$ is assumed to be finite.
To start with, consider a discrete and time-discrete random process $\{X_h\}_{h \in \mathbb{N}}$ with $S = \{z_1, z_2, \dots, z_n\}$. As usual, we denote by a vector $\mathbf{p}_h$ (the state vector of the process at time $h$) the (absolute) probability distribution on $S$ according to the random variable $X_h$. In other words, $\mathbf{p}_h \equiv (p_{h,1}, p_{h,2}, \dots, p_{h,n})$ with $p_{h,k} = \mathbf{P}(X_h = z_k)$.
Now, a discrete time-discrete Markov Chain [25,26,27,28] $\{X_h\}_{h \in \mathbb{N}}$, with $R(X_h) = S$, is a random process such that the Markov condition, $\forall\, h \in \mathbb{N} \setminus \{0\}$,
$$\mathbf{P}(X_h = z_h \mid X_1 = z_1, X_2 = z_2, \dots, X_{h-1} = z_{h-1}) = \mathbf{P}(X_h = z_h \mid X_{h-1} = z_{h-1})$$
holds. (As usual, we use the symbol $\mathbf{P}(A \mid B)$ to denote the conditional probability of an event $A$ under the assumption that an event $B$ has taken place.) This means that, for any $h \in \mathbb{N}$, the value of the $h$-th random variable of the process depends only on the value of the $(h-1)$-th, not on the value of any previous variable. Now, setting $P_{ij}(h) = \mathbf{P}(X_h = z_j \mid X_{h-1} = z_i)$ (with $(i,j) \in \{1, 2, \dots, n\}^2$), $P_{ij}(h)$ is the transition probability from state $z_i$ to state $z_j$ at the $h$-th step, and the matrix
$$\mathsf{P}_h \equiv (P_{ij}(h))_{1 \le i,j \le n} \equiv \begin{pmatrix} P_{11}(h) & P_{12}(h) & \cdots & P_{1n}(h) \\ P_{21}(h) & P_{22}(h) & \cdots & P_{2n}(h) \\ \vdots & \vdots & \ddots & \vdots \\ P_{n1}(h) & P_{n2}(h) & \cdots & P_{nn}(h) \end{pmatrix}, \qquad h \in \mathbb{N} \setminus \{0\},$$
is called the transition matrix at the h-th step. It is obvious and well known that
$$\sum_{j=1}^{n} P_{ij}(h) = 1, \qquad \forall\, h \in \mathbb{N} \setminus \{0\},$$
and that, by virtue of the law of alternatives,
$$\mathbf{p}_h = \mathbf{p}_{h-1}\, \mathsf{P}_h, \qquad \forall\, h \in \mathbb{N} \setminus \{0\}.$$
For the sake of completeness, we also recall that, for any couple $(r, s)$ of non-negative integers,
$$\mathbf{p}_{r+s} = \mathbf{p}_r\, \mathsf{P}_{r+1}\, \mathsf{P}_{r+2} \cdots \mathsf{P}_{r+s},$$
and, in particular, when the Markov Chain is stationary, i.e., a transition matrix $\mathsf{P}$ exists such that $\mathsf{P}_h = \mathsf{P}$ for any $h \in \mathbb{N} \setminus \{0\}$,
$$\mathbf{p}_{r+s} = \mathbf{p}_r\, \mathsf{P}^s,$$
where the power on the right-hand side must be interpreted in the sense of the row-by-column product of matrices (for further details, see, e.g., [25,28]).
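These relations translate directly into a few lines of code. The following is a minimal numerical sketch (ours, not taken from the paper), assuming a hypothetical three-state chain and an arbitrary row-stochastic transition matrix; it checks that iterating $\mathbf{p}_h = \mathbf{p}_{h-1}\mathsf{P}$ agrees with the closed form $\mathbf{p}_{r+s} = \mathbf{p}_r\,\mathsf{P}^s$ in the stationary case.

```python
import numpy as np

# Minimal sketch of a stationary, discrete, time-discrete Markov Chain:
# the state vector evolves as p_h = p_{h-1} P, and p_{r+s} = p_r P^s.
# The three states and the matrix below are hypothetical placeholders.
P = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.2, 0.3, 0.5]])
assert np.allclose(P.sum(axis=1), 1.0)        # each row sums to 1

p = np.array([1.0, 0.0, 0.0])                 # initial state vector p_0
s = 10

p_iter = p.copy()
for _ in range(s):
    p_iter = p_iter @ P                       # p_h = p_{h-1} P

p_power = p @ np.linalg.matrix_power(P, s)    # p_s = p_0 P^s

assert np.allclose(p_iter, p_power)
print(p_iter)
```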

3. Joint and Marginal Transition Probabilities

Our next step will now be to consider a special kind of Markov Chain that has never been explicitly considered before in the literature about random processes (at least to our knowledge; as far as we are aware, only in [29] is a state space endowed with a structure considered), namely, vector Markov Chains (see [15]). More precisely, we want to examine the case in which $S = \Omega^2$ (where $\Omega = \{x_1, x_2, \dots, x_m\}$, so that $|\Omega^2| = m^2$; as usual, in the literature about discrete Markov Chains, the states in $S$ are assumed to be ordered, so that we can set $S = \{z_1, z_2, \dots, z_{m^2}\}$ (notice that the actual order is quite immaterial) and, since $z_i$ is a couple of elements of $\Omega$, we write $z_i \equiv (x_{i_1}, x_{i_2})$ for any $i \in \{1, 2, \dots, m^2\}$, with $(i_1, i_2) \in \{1, 2, \dots, m\}^2$) and the process we want to consider is a sequence $\{(X_{h,1}, X_{h,2})\}_{h \in \mathbb{N}}$, which will be written in vector notation as $\{\mathbf{X}_h\}_{h \in \mathbb{N}}$. So, the transition probabilities are
$$\mathbf{P}(\mathbf{X}_h = z_h \mid \mathbf{X}_{h-1} = z_{h-1}) = \mathbf{P}(X_{h,1} = x_{h,1},\, X_{h,2} = x_{h,2} \mid X_{h-1,1} = x_{h-1,1},\, X_{h-1,2} = x_{h-1,2})$$
and we have four indexes, i.e., we can set
$$\mathbf{P}(X_{h,1} = x_{j_1},\, X_{h,2} = x_{j_2} \mid X_{h-1,1} = x_{i_1},\, X_{h-1,2} = x_{i_2}) \equiv P_{i_1, i_2;\, j_1, j_2}(h).$$
For any $h \in \mathbb{N}$, the four-dimensional transition matrix $\mathsf{P}_h \equiv (P_{i_1, i_2;\, j_1, j_2}(h))$ will be called the joint transition matrix. Each of its elements expresses the probability that the chain passes from a state $(x_{i_1}, x_{i_2})$ to a state $(x_{j_1}, x_{j_2})$ at the $h$-th step. These are the joint transition probabilities. Together with these joint probabilities, we now have to also consider a number of different marginal probabilities. More precisely, for any $h \in \mathbb{N}$, we can choose one of the random variables $X_{h,1}$, $X_{h,2}$ (say $X_{h,k}$, to fix ideas), and consider the probabilities $\mathbf{P}(X_{h,k} = x_{j_k} \mid X_{h-1,1} = x_{i_1}, X_{h-1,2} = x_{i_2})$. For any choice, we have a three-dimensional matrix, with $m^3$ entries. We shall denote by $P_{i_1, i_2;\, j_k}(h)$ ($k = 1, 2$) the entries of each matrix. Now, we have
$$P_{i_1, i_2;\, j_1}(h) = \sum_{j_2=1}^{m} P_{i_1, i_2;\, j_1, j_2}(h), \qquad P_{i_1, i_2;\, j_2}(h) = \sum_{j_1=1}^{m} P_{i_1, i_2;\, j_1, j_2}(h).$$
Now, for the sake of simplicity, we agree to set $S = \{z^{ij}\}_{1 \le i,j \le m} \equiv \Omega^2 \equiv \{(x_i, y_j)\}_{1 \le i,j \le m}$. (From now on, the symbols $z^{ij}$ and $(x_i, y_j)$ will be treated as interchangeable. Moreover, the indexes of the components of any state vector will be written as apices for the sake of readability.) Then, we can write relation (9) in the form
$$P_{z^{kl},\, x_i}(h) = \sum_{j=1}^{m} P_{z^{kl},\, z^{ij}}(h) = \sum_{j=1}^{m} P_{z^{kl},\, (x_i, y_j)}(h),$$
and introduce the joint state vector $\mathbf{p}_h \equiv (p_h^{ij})$, with $p_h^{ij} \equiv p_h(z^{ij}) \equiv p_h(x_i, y_j)$, and the marginal state vector $\mathbf{p}_{x,h} \equiv (p_{x,h}(x_i))_{1 \le i \le m}$, where
$$p_{x,h}(x_i) = \sum_{j=1}^{m} p_h(z^{ij}) = \sum_{j=1}^{m} p_h(x_i, y_j)$$
and, by virtue of the law of alternatives,
$$p_h(z^{ij}) = \sum_{k=1}^{m} \sum_{l=1}^{m} p_{h-1}(z^{kl})\, P_{z^{kl},\, z^{ij}}(h).$$
Hence, replacing this last relation in relation (5), and taking into account relation (4), we obtain
$$p_{x,h}(x_i) = \sum_{j=1}^{m} \sum_{k=1}^{m} \sum_{l=1}^{m} p_{h-1}(x_k, y_l)\, P_{z^{kl},\, (x_i, y_j)}(h) = \sum_{k=1}^{m} \sum_{l=1}^{m} p_{h-1}(x_k, y_l) \sum_{j=1}^{m} P_{z^{kl},\, (x_i, y_j)}(h) = \sum_{k=1}^{m} \sum_{l=1}^{m} p_{h-1}(x_k, y_l)\, P_{z^{kl},\, x_i}(h).$$
Now, for reasons that will be clear in the following, we write this last equation in the form
$$p_{x,h}(x_i) - p_{x,h-1}(x_i) = \sum_{k=1}^{m} \sum_{l=1}^{m} p_{h-1}(z^{kl})\, P_{z^{kl},\, x_i}(h) - p_{x,h-1}(x_i).$$
Moreover, for any $(i, l) \in \{1, 2, \dots, m\}^2$,
$$\sum_{i'=1}^{m} \sum_{l'=1}^{m} P_{(x_i, y_l),\, (x_{i'}, y_{l'})}(h) = 1,$$
so that we can rewrite Equation (12) in the final form
$$p_{x,h}(x_i) - p_{x,h-1}(x_i) = \sum_{l=1}^{m} \left[ \sum_{\substack{k=1 \\ k \ne i}}^{m} p_{h-1}(z^{kl})\, P_{z^{kl},\, x_i}(h) - p_{h-1}(z^{il}) \sum_{\substack{i'=1 \\ i' \ne i}}^{m} P_{(x_i, y_l),\, x_{i'}}(h) \right].$$
Finally, if, for any $h \in \mathbb{N}$, the random variables $X_{h,1}$ and $X_{h,2}$ are independent, and the chain is stationary, then we find
$$p_{x,h}(x_i) - p_{x,h-1}(x_i) = \sum_{l=1}^{m} \left[ \sum_{\substack{k=1 \\ k \ne i}}^{m} p_{x,h-1}(x_k)\, p_{y,h-1}(y_l)\, P_{z^{kl},\, x_i}(h) - p_{x,h-1}(x_i) \sum_{\substack{k=1 \\ k \ne i}}^{m} p_{y,h-1}(y_l)\, P_{z^{il},\, x_k}(h) \right].$$
The first term on the right-hand side is called the gain term of the subset of states $S_i = \{z^{il}\}_{1 \le l \le m}$, since it accounts for all the possible transitions from other states to a state of $S_i$, while the last term is called the loss term of $S_i$, since it accounts for all the possible transitions from a state of $S_i$ to any state outside $S_i$.
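As a concrete check of the computations above, the following sketch (ours, not from the paper) builds a hypothetical joint transition matrix on a small $\Omega$, assumes independent marginals at step $h-1$, and verifies numerically that the direct marginal update and the gain/loss form give the same increment.

```python
import numpy as np

# A minimal numerical check of the gain/loss decomposition, assuming a small
# hypothetical state space Omega = {x_1, ..., x_m} and a randomly generated
# joint transition matrix P[k, l, i, j] = P_{z^{kl}, z^{ij}} (row-stochastic
# over the arrival pairs).
rng = np.random.default_rng(0)
m = 3

P = rng.random((m, m, m, m))
P /= P.sum(axis=(2, 3), keepdims=True)           # normalize over all arrival pairs

px = rng.random(m); px /= px.sum()               # marginal distribution of X_{h-1,1}
py = rng.random(m); py /= py.sum()               # marginal distribution of X_{h-1,2}
p_joint = np.outer(px, py)                       # independence: p_{h-1}(x_k, y_l)

P_marg = P.sum(axis=3)                           # P_{z^{kl}, x_i} = sum_j P_{z^{kl}, (x_i, y_j)}

# Direct marginal update: p_{x,h}(x_i) = sum_{k,l} p_{h-1}(x_k, y_l) P_{z^{kl}, x_i}
px_new = np.einsum("kl,kli->i", p_joint, P_marg)

# Gain/loss form of the same increment
gain = np.array([sum(p_joint[k, l] * P_marg[k, l, i]
                     for l in range(m) for k in range(m) if k != i)
                 for i in range(m)])
loss = np.array([sum(p_joint[i, l] * P_marg[i, l, k]
                     for l in range(m) for k in range(m) if k != i)
                 for i in range(m)])

assert np.allclose(px_new - px, gain - loss)     # both expressions of the increment agree
print(px_new - px)
```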

4. Continuous Semi-Markov Coupled Random Processes

We now introduce what we shall call a continuous semi-Markov coupled random process, that is, a random process $\{(\mathbf{X}_t, \chi_t)\}$, where the continuous parameter $t \in [0,T] \subset \mathbb{R}$, with $T > 0$ either finite or infinite, is time. Here, $\{\mathbf{X}_t\}$ is a two-dimensional Markov Chain like the one described above, and $\{\chi_t\}$ is an $m^2$-dimensional vector Bernoullian process (more precisely, $\chi_t \equiv (\chi_{ij,t})$, with $(i,j) \in \{1, 2, \dots, m\}^2$, and each $\chi_{ij,t}$ is the classical Bernoulli variable, with range $R(\chi_{ij,t}) = \{0, 1\}$, and for any pair of triples $(i_1, j_1, t_1)$ and $(i_2, j_2, t_2) \ne (i_1, j_1, t_1)$ in $\{1, 2, \dots, m\}^2 \times [0,T]$, the random variables $\chi_{i_1 j_1, t_1}$ and $\chi_{i_2 j_2, t_2}$ are independent). Now, the state vectors obviously depend on time, but the transition probabilities and the probabilities associated with the Bernoulli variables $\chi_{ij,t}$ may depend on time or not. It is well known that a positive probability distribution $u_{ij}(t)$ on $[0,T]$ such that $\mathbf{P}(\chi_{ij,t} = 1) = u_{ij}(t)$ for any $t$ cannot be assigned. All that can be done is to assign a continuous “probability density” $\tau_{ij}(t)$ such that, for any $s \in [0,T]$ and any sufficiently small $\Delta t$, $\tau_{ij}(s)\,\Delta t$ is (approximately) the probability of finding in any interval $[s, s + \Delta t]$ some points such that $\chi_{ij,t} = 1$. Roughly speaking, and referring to the interpretation of probability as relative frequency, we can state that $\tau_{ij}(s)\,\Delta t$ is the length of the set $\{t \in [s, s + \Delta t] \mid \chi_{ij,t} = 1\}$, so that $\tau_{ij}(s)$ is the ratio of that length to the length $\Delta t$. We preliminarily consider the case in which the density $\tau_{ij}(s)$ is constant with respect to time for any $(i,j) \in \{1, 2, \dots, m\}^2$, and write $\tau_{ij}(s) = \tau_{ij}$. So, for any assigned couple $(i,j)$, the Bernoulli variables $\chi_{ij,t_1}$ and $\chi_{ij,t_2}$ are identically distributed for any $(t_1, t_2) \in [0,T]^2$ and, for simplicity, we may write $\chi_{ij,t} \equiv \chi_{ij}$ for any $t$.
The continuous Markov Chain $\{\mathbf{X}_t\}_{t \in [0,T)}$ and the Bernoulli random process $\{\chi_t\}_{t \in [0,T)}$ are coupled in the following sense:
  • A function
    $$\chi : z^{ij} \in \Omega^2 \longmapsto \chi_{ij} = \chi(z^{ij})$$
    is given;
  • For any $t \in [0,T]$, the transition matrix $\mathsf{P}_t$ is assumed to depend on $\chi_t$ (and only on $\chi_t$), in such a way that
    $$P_{z^{kl},\, z^{ij}} = P_{z^{kl},\, z^{ij} \mid \chi_{kl} = 1}\, \tau_{kl}\, \Delta t + P_{z^{kl},\, z^{ij} \mid \chi_{kl} = 0}\, (1 - \tau_{kl}\, \Delta t),$$
    where $P_{z^{kl},\, z^{ij} \mid \chi_{kl} = 1}$ (independent of time) is the transition probability from the state $z^{kl}$ to the state $z^{ij}$ under the assumption $\chi_{kl} = 1$, and $P_{z^{kl},\, z^{ij} \mid \chi_{kl} = 0}$ is at any time the transition probability from the state $z^{kl}$ to the state $z^{ij}$ under the assumption $\chi_{kl} = 0$. In particular, we assume that
    $$P_{z^{kl},\, z^{ij} \mid \chi_{kl} = 0}(t) = \delta_{ki}\, \delta_{lj}, \qquad \forall\, t \in [0,T],$$
     where $\delta_{ki}$ and $\delta_{lj}$ are the well-known Kronecker symbols, that is, $\delta_{ki} = 0$ when $i \ne k$ and $\delta_{ki} = 1$ when $i = k$, and $\delta_{lj} = 0$ when $j \ne l$ and $\delta_{lj} = 1$ when $j = l$.
    This means that the chain remains in each state $z^{ij}$ when $\chi_{ij} = 0$ (notice that the Markov Chain is stationary if and only if the matrix $P_{z^{ij},\, z^{kl} \mid \chi_{ij} = 1}$ is constant with respect to $t$. This remark will prove meaningful in the last sections).
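To make the coupling concrete, here is a minimal numerical sketch (ours, with purely hypothetical choices of $\Omega$, of the conditional matrix $P_{\cdot,\cdot \mid \chi=1}$, and of the rates $\tau_{kl}$) of the effective one-step transition matrix defined by the relations above: with probability $\tau_{kl}\,\Delta t$ an interaction occurs and the conditional matrix applies; otherwise the chain stays where it is.

```python
import numpy as np

# Effective one-step transition matrix of the coupled process over a short
# interval dt:  P_eff = tau*dt * P_chi1 + (1 - tau*dt) * Identity.
# Omega has 2 hypothetical elements, so the vector chain has 4 pair states z^{kl}.
rng = np.random.default_rng(1)
m, dt = 2, 0.01
n_pairs = m * m

P_chi1 = rng.random((n_pairs, n_pairs))
P_chi1 /= P_chi1.sum(axis=1, keepdims=True)       # row-stochastic: transitions given chi_{kl} = 1

tau = rng.uniform(0.5, 2.0, size=n_pairs)         # constant encounter rate for each pair state

P_eff = (tau * dt)[:, None] * P_chi1 + (1 - tau * dt)[:, None] * np.eye(n_pairs)

assert np.allclose(P_eff.sum(axis=1), 1.0)        # P_eff is again a stochastic matrix
print(P_eff)
```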
This stated, if, for any $t \in [0,T]$, the random variables $X_{t,1}$ and $X_{t,2}$ are assumed to be independent, we shall write system (14) as follows:
$$p_{x,t+\Delta t}(x_i) - p_{x,t}(x_i) = \Delta t \sum_{l=1}^{m} \left[ \sum_{\substack{k=1 \\ k \ne i}}^{m} \tau_{kl}\, p_{x,t}(x_k)\, p_{y,t}(y_l)\, P_{z^{kl},\, x_i \mid \chi_{kl} = 1} - \tau_{il}\, p_{x,t}(x_i) \sum_{\substack{k=1 \\ k \ne i}}^{m} p_{y,t}(y_l)\, P_{z^{il},\, x_k \mid \chi_{il} = 1} \right] \qquad (i = 1, 2, \dots, m).$$
Next, by dividing both sides by $\Delta t$ and letting $\Delta t \to 0$, we obtain
$$\frac{d p_{x,t}}{dt}(x_i) = \sum_{l=1}^{m} \left[ \sum_{\substack{k=1 \\ k \ne i}}^{m} \tau_{kl}\, p_{x,t}(x_k)\, p_{y,t}(y_l)\, P_{z^{kl},\, x_i \mid \chi_{kl} = 1}(t) - \tau_{il}\, p_{x,t}(x_i) \sum_{\substack{k=1 \\ k \ne i}}^{m} p_{y,t}(y_l)\, P_{z^{il},\, x_k \mid \chi_{il} = 1}(t) \right] \qquad (i = 1, 2, \dots, m),$$
if the transition matrix $\mathsf{P}_{\chi=1} \equiv (P_{z^{kl},\, z^{ij} \mid \chi_{kl} = 1})$ is stationary.
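As an illustration of system (19), the following sketch (ours, with a hypothetical three-state space, random constant rates $\tau_{kl}$, and random conditional jump probabilities) integrates the equations by a forward Euler step; for simplicity it identifies $p_y$ with $p_x$, which is the usual kinetic-theoretic reading of binary interactions within a single population. The sum of the right-hand sides vanishes identically, so the distribution stays normalized.

```python
import numpy as np

# Forward-Euler integration of a system with the structure of (19), under
# purely illustrative assumptions (random tau_{kl} and P1[k, l, i], p_y = p_x).
rng = np.random.default_rng(2)
m, dt, n_steps = 3, 0.01, 2000

tau = rng.uniform(0.5, 2.0, size=(m, m))          # encounter rates tau_{kl}
P1 = rng.random((m, m, m))
P1 /= P1.sum(axis=2, keepdims=True)               # sum_i P_{z^{kl}, x_i | chi = 1} = 1

p = np.full(m, 1.0 / m)                           # initial marginal distribution p_x

def rhs(p):
    """Gain minus loss terms, with p_y = p_x = p."""
    dp = np.zeros(m)
    for i in range(m):
        gain = sum(tau[k, l] * p[k] * p[l] * P1[k, l, i]
                   for l in range(m) for k in range(m) if k != i)
        loss = sum(tau[i, l] * p[i] * p[l] * P1[i, l, k]
                   for l in range(m) for k in range(m) if k != i)
        dp[i] = gain - loss
    return dp

for _ in range(n_steps):
    p = p + dt * rhs(p)

print(p, p.sum())                                 # the distribution remains normalized
```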
It is important to stress the following:
  • As the above deduction procedure shows, system (19) is actually obtained by coupling a stationary vector Markov Chain with an additional vector random process;
  • This latter process modifies the final form of the transition matrix, but preserves its stationarity (that is, its independence of time);
  • System (19) is just the system postulated in modern Kinetic Theory for Active Particles, as a generalization of the Boltzmann equation, to describe the behavior of isolated conservative many-particle systems.
The last point will be explained, illustrated, and briefly discussed in Section 5. However, in Section 6 and Section 7, we present something more: we show that different choices of the vector random process to be coupled with the starting Markov Chain can lead us to describe the behavior of non-isolated conservative many-particle systems.

5. Markov Chains and the Stochastic Description of Many-Particle Systems

In this section, we outline the stochastic description of the evolution of isolated many-particle systems, which has become increasingly important in the last forty years, as it can be applied to a wide range of different phenomena: from the mechanics of gases (which is its true historical origin, since Boltzmann proposed his celebrated Kinetic Theory [1]) to the behavior of biological systems, with particular concern for the interaction between healthy and tumor tissues (see [3] also for more complete bibliographic references); from social sciences, with particular concern for the diffusion of competing opinions, to economics [8], and to the behavior of swarms and crowds [2,5]. This interpretation has in fact given rise to a general model for the description of the behavior of a large number of many-particle systems, we would dare say of almost all possible types and in almost all possible contexts.
As is well known, a many-particle system is a set $S$ of a very large number $N$ of objects, usually called «particles» or «individuals» according to the context. In general, $N = N(t)$, i.e., the number of individuals of $S$ changes with time: for instance, when $S$ is a set of living beings, in any unit time the number of its members is increased or decreased by the difference between the people born and the people who die in that unit time. The system is said to be conservative if $N$ is independent of time (no births nor deaths per unit time), and nonconservative in the opposite case. The particles can interact pairwise (but the theory is currently developing to cover the case of multiple interactions) with each other and are identified (besides some common «physical» properties they can possess to different degrees; these properties do not appear explicitly in the mathematical description of the behavior of $S$, but influence the values of parameters) by their «states». A state is any measure of a property which turns out to be relevant in the considered context: for example, in a purely economic framework, the state is the amount of money (or of goods of some specified type). According to the context, the set of possible states of all particles can be finite, countably infinite, or continuous. For instance, the set of possible economic states can be naturally split into income classes, identified by their middle value, so that the set of states is naturally defined as discrete and finite. Accordingly, for simplicity, we shall now refer only to the case in which the set of states is finite; it will be taken to be the set
$$D \equiv \{x_1, x_2, \dots, x_m\}.$$
We cannot (and do not) claim to know exactly the state of each particle of the system at any time, so that even the prescription of $N$ precise initial conditions on the states of particles would be quite unrealistic. What we assume to be allowed to state is that all the particles of $S$ can be thought of at each time $t$ as divided into $m$ different classes $C_1$, $C_2$, ⋯, $C_m$, each containing all and only the particles sharing the same state; if $n_i(t)$ (with $i \in \{1, 2, \dots, m\}$) is the number of particles sharing the $i$-th state at time $t$, then $p_i(t) = n_i(t)/N$ is the probability (at time $t$) of picking at random in $S$ a particle in the $i$-th state, and we can construct a state vector $\mathbf{p}(t) \equiv (p_1(t), p_2(t), \dots, p_m(t))$. In addition, each individual belonging to a class $C_k$ (that is, occupying the state $x_k$) can jump into another class $C_i$ (that is, into another state $x_i$) if and only if it «interacts» with some other particle (for instance, by buying or selling some goods, and consequently paying or collecting money): the state $y_l$ (that is, the class $C_l$) of this latter particle matters only insofar as it influences the assessment of the different probabilities of jump $P_{ki} = P(x_k, x_i)$, which accordingly are denoted $P_{kl;i} = P(x_k, x_i; y_l)$. These probabilities (called transition probabilities) could also in general depend on time, but in this section we shall assume them to be stationary, i.e., independent of $t$. Moreover, if a particle has no interactions with other particles, then it will not change its state (to be complete and precise, we must point out that this condition, though apparently natural and intuitive, turns out to be rather restrictive in some cases. In this connection, see [16,17]). This means that condition (17) holds.
In conclusion, any many-particle system is in fact a random process $\{(\mathbf{X}_t, \chi_t)\}_{t \in [0,T)}$ of the above described type, where $\{\mathbf{X}_t\}$ is a Markov Chain, characterized by the state space $D^2$ and by a state (probability) vector $\mathbf{p}_x$ expressing the distribution of particles over all possible different states in $D$ as a percentage; $\{\chi_t \equiv (\chi_{ij,t})\}_{t \in [0,T)}$ is an $m^2$-dimensional vector Bernoulli process such that, for any $(i,j) \in \{1, 2, \dots, m\}^2$, $\chi_{ij,t} = 1$ expresses an interaction between a particle in the state $x_i$ and a particle in the state $y_j$ at time $t$. For any $(i,j) \in \{1, 2, \dots, m\}^2$, the variables $\chi_{ij,t}$ ($t \in [0,T)$) are assumed to be identically distributed, and their common interaction rate or encounter rate $\tau_{ij}$ expresses the average number of interactions between a particle of $C_i$ and a particle of $C_j$ occurring in any unit time interval, i.e., the (time-independent) measure of the set $I_1(i,j) \equiv \{t \in I \mid \chi_{ij,t} = 1\}$ in any interval $I$ of unit length. Each probability of jump $P_{kl;i} = P(C_k, C_i; C_l)$, conditional on the occurrence of an interaction between a particle in the state $x_k$ and a particle in the state $y_l$, is given by $P_{(x_k, y_l),\, x_i \mid \chi_{kl,t} = 1} = P_{z^{kl},\, x_i \mid \chi_{kl,t} = 1}$. So, system (19) turns out to be exactly the system of equations governing, in stochastic terms, the evolution of isolated many-particle systems.
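This identification also suggests a direct particle-level simulation. The following is a minimal agent-based sketch (ours, with hypothetical choices: a three-state space, a single constant encounter rate, and random conditional jump probabilities) whose empirical class frequencies $n_i(t)/N$ approximate, for large $N$, the solution of system (19).

```python
import numpy as np

# Agent-based sketch: N hypothetical "particles" with states in {0, ..., m-1}.
# Over each short interval dt, a particle in class C_k meeting a partner in
# class C_l interacts with probability tau*dt and then jumps to class C_i with
# probability P1[k, l, i]; otherwise it keeps its state.
rng = np.random.default_rng(3)
m, N, tau, dt, n_steps = 3, 10_000, 1.0, 0.1, 100

P1 = rng.random((m, m, m))
P1 /= P1.sum(axis=2, keepdims=True)

states = rng.integers(0, m, size=N)               # initial states of the N particles

for _ in range(n_steps):
    partners = states[rng.permutation(N)]         # crude random binary pairings at this step
    interact = rng.random(N) < tau * dt           # which particles actually interact
    new_states = states.copy()
    for a in np.where(interact)[0]:               # sample the post-interaction state
        new_states[a] = rng.choice(m, p=P1[states[a], partners[a]])
    states = new_states

p_empirical = np.bincount(states, minlength=m) / N
print(p_empirical)                                # to be compared with the solution of (19)
```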
As already pointed out, the literature about the Kinetic Theory for Active Particles and other kinetic models offers a large number of papers treating particular examples and interesting applications of system (19). In particular, in the socio-economic framework, the reader can find interesting applications in [21,22,23,24].
The next section is devoted to presenting a first generalization of the above scheme, as well as of the definition of semi-Markov coupled random processes, to describe the case of systems undergoing a sequence of deterministic external influences.

6. Equations for Many-Particle Systems with Time-Dependent Transition Matrices

The above reconstruction of the equations of Kinetic Theory in terms of a random process { ( X t , χ t ) } t [ 0 , T ) coupling a stationary Markov Chain and a Bernoullian process is very useful to extend our model for many-particle systems to cover the case of systems interacting with the external world. In fact, as we stressed in Section 4 and Section 5, Equation (19) holds for isolated many-particle systems. These equations are based on the assumption that the instantaneous variation in the distribution of particles over the state space is uniquely due to mutual interactions between particles, and that these interactions always modify the states of the involved particles in the same way, independently of the time at which they occur. (This assumption can be seen as equivalent to assuming a form of the Principle of Inertia for many-particle systems [17].) However, when the system perceives the influence of the external world, we are allowed to assume that both the interaction rate and the results of any interaction are more or less strongly modified from time to time by events that happen outside the system.
To be precise, our stochastic process $\{(\mathbf{X}_t, \chi_t)\}_{t \in [0,T)}$ must be replaced by another process $\{(\mathbf{X}_t, \chi_t, \mathbf{E}_t)\}_{t \in [0,T)}$, where, for any $t \in [0,T)$, $\mathbf{E}_t$ is another vector random variable whose dimension will be assigned by the kind of external phenomena we want to take into account as influencing the behavior of system $S$. More precisely, the external events are described by a set of $d$ measures and/or $h$ numerical labels. The measures are assumed to vary continuously in $\mathbb{R}$, while the numerical labels typically express in quantitative terms some qualitative features (e.g., different degrees of a psychological disease, of the effectiveness of an educational system or a public service, etc.). Accordingly, in the most general case, the range of $\mathbf{E}_t$ will be contained in $W \subseteq \mathbb{R}^d \times K^h$ (where $K$ is a finite subset of $\mathbb{Z}$).
To start with, in this section we treat the case in which, for any $t \in [0,T)$, there exists an $\mathbf{e}(t) = (\mathbf{u}(t), \mathbf{v}(t)) \equiv (u_1(t), u_2(t), \dots, u_d(t), v_1(t), v_2(t), \dots, v_h(t)) \in W$ [with $\mathbf{u}(t) \equiv (u_1(t), u_2(t), \dots, u_d(t))$ and $\mathbf{v}(t) \equiv (v_1(t), v_2(t), \dots, v_h(t))$] for which the probability density function of $\mathbf{E}_t$ is expressed, for any $\varepsilon \equiv (\xi, \eta) \equiv (\xi_1, \dots, \xi_d, \eta_1, \dots, \eta_h)$, with $\xi \equiv (\xi_1, \dots, \xi_d)$ and $\eta \equiv (\eta_1, \dots, \eta_h)$, by the relation
$$\rho_{\mathbf{E}_t}(\varepsilon) = \delta(\xi_1 - u_1(t))\, \delta(\xi_2 - u_2(t)) \cdots \delta(\xi_d - u_d(t)) \times \delta(\eta_1 - v_1(t))\, \delta(\eta_2 - v_2(t)) \cdots \delta(\eta_h - v_h(t)) \equiv \delta(\xi - \mathbf{u}(t))\, \delta(\eta - \mathbf{v}(t)), \qquad t \in [0,T),$$
where $\delta$ is the well-known Dirac distribution, expressing the circumstance that $\{\mathbf{E}_t\}$ is a deterministic process (we recall that, for any random variable $X$, the probability density function associated with $X$ is a function $\rho_X : x \in \mathbb{R} \mapsto \rho_X(x) \in [0, +\infty)$ such that, for any interval $(a, b) \subset \mathbb{R}$ ($a \le b$),
$$\mathbf{P}(a < X < b) = \int_a^b \rho_X(x)\, dx\,).$$
In the socio-economic framework, a typical example of a practical situation described by the addition of the process $\{\mathbf{E}_t\}$ to the internal process based on mutual interactions can be found in the dependence of buying and selling interactions between the individuals of $S$ on the seasonal variation in the availability of some particular food F. A precise and detailed description of this example requires a further generalization of the scheme outlined in the previous sections, since the state of each “particle” in $S$ can no longer be identified by only a number, but needs at least a triple of numbers, say “(amount of money available, amount of food F to eat, amount of food F to sell)”. This, however, does not change anything substantial in the previous scheme: we only need to consider a state space $D \subset \mathbb{R}^3$ and to denote its elements by using vector notation. We then set
$$D = \{\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_m\},$$
assuming that the three amounts, of money, of food F to eat, and of food F to sell, can be identified by integers (that is, integer multiples of suitable measurement units). Now, considering only interactions consisting of the purchase and sale of some amounts of F, we can expect that, in the seasons when F is largely available, these interactions will be more frequent, so that the interaction rates $\tau_{ij}$ will have rather large values, while in the seasons when F can be produced only in small amounts these interaction rates will be small. Analogously, when F is easily produced in large amounts, we can reasonably expect that each transition probability $P_{(\mathbf{x}_k, \mathbf{y}_l),\, \mathbf{x}_i \mid \chi_{kl} = 1}(t)$ (with $\mathbf{x}_k \equiv (x_k^1, x_k^2, x_k^3)$ and $\mathbf{x}_i \equiv (x_i^1, x_i^2, x_i^3)$) will be decreasing when $|x_i^1 - x_k^1|$ increases and increasing when either $|x_i^2 - x_k^2|$ or $|x_i^3 - x_k^3|$ increases; vice versa, when F is hardly available, we expect that $P_{(\mathbf{x}_k, \mathbf{y}_l),\, \mathbf{x}_i \mid \chi_{kl} = 1}(t)$ will be increasing when $|x_i^1 - x_k^1|$ increases and decreasing when either $|x_i^2 - x_k^2|$ or $|x_i^3 - x_k^3|$ increases. So, both the interaction rates and the transition probabilities are functions of time, and system (17) must be simply written in the form
$$\frac{d p_{x,t}}{dt}(x_i) = \sum_{l=1}^{m} \left[ \sum_{\substack{k=1 \\ k \ne i}}^{m} \tau_{kl}(t)\, p_{x,t}(x_k)\, p_{y,t}(y_l)\, P_{z^{kl},\, x_i \mid \chi_{kl,t} = 1}(t) - p_{x,t}(x_i) \sum_{\substack{k=1 \\ k \ne i}}^{m} \tau_{il}(t)\, p_{y,t}(y_l)\, P_{z^{il},\, x_k \mid \chi_{il,t} = 1}(t) \right] \qquad (i = 1, 2, \dots, m).$$
Notice that the above described conditions on interaction rates and transition probabilities are a stochastic version of the well-known economic principle known as the law of supply and demand, stated in purely deterministic terms in the framework of economics. In fact, assuming that, when the good F is scarcely available, exchanges of large amounts of money for small quantities of F are more likely than exchanges of small amounts of money for large quantities of F amounts exactly to assuming the law of supply and demand.
In conclusion, the introduction of a dependence on time of both interaction rates and transition probabilities seems to be the most appropriate way to link the behavior of a many-particle system to deterministic external influences, in particular ones that can modify economic or social equilibria according to well-defined temporal rhythms: we have considered here a case in which the external phenomena influencing the behavior of the system are periodic, and in general, in a purely economic framework, the most meaningful and frequent phenomena influencing economic transactions are such, with the sole exception of the average increase in prices identified as “inflation”. In the social framework, the changes in the external world can no longer be considered strictly periodic, although they tend to reproduce over time, since the time intervals between subsequent “reproductions” are usually very irregular. This is a hint towards the introduction of further generalizations and the consideration of transition matrices and interaction rates subjected to stochastic influences.
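As an illustration of this time-dependent case, the sketch below (ours, with purely hypothetical ingredients: a one-year period, a smooth seasonal factor, and two placeholder regimes for the conditional jump probabilities) builds $\tau_{kl}(t)$ and $P_{z^{kl},\, x_i \mid \chi_{kl,t}=1}(t)$ as functions of time; they can replace the constant `tau` and `P1` in the Euler integration sketch of Section 4.

```python
import numpy as np

# Seasonal, time-dependent ingredients with the structure of system (22):
# encounter rates tau_{kl}(t) oscillate with the availability of the good F,
# and the conditional jump probabilities interpolate between two hypothetical
# regimes (F abundant / F scarce). All matrices are random placeholders.
rng = np.random.default_rng(4)
m = 3

def normalize(P):
    return P / P.sum(axis=2, keepdims=True)

P_abundant = normalize(rng.random((m, m, m)))   # jump probabilities when F is plentiful
P_scarce = normalize(rng.random((m, m, m)))     # jump probabilities when F is scarce
tau0 = rng.uniform(0.5, 2.0, size=(m, m))       # baseline encounter rates

def season(t, period=1.0):
    """1 in the abundant season, 0 in the scarce one (smooth and periodic)."""
    return 0.5 * (1.0 + np.sin(2.0 * np.pi * t / period))

def tau(t):
    """tau_{kl}(t): exchanges of F are more frequent when F abounds."""
    return tau0 * (0.2 + 0.8 * season(t))

def P1(t):
    """Conditional jump probabilities at time t (a convex combination stays stochastic)."""
    s = season(t)
    return s * P_abundant + (1.0 - s) * P_scarce

assert np.allclose(P1(0.37).sum(axis=2), 1.0)   # still normalized at an arbitrary time
print(tau(0.25), tau(0.75))                     # abundant-season vs scarce-season rates
```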

7. Equations for Many-Particle Systems with Stochastic Transition Matrices

The object of this section is the case in which the changes in the external world influencing the behavior of the system are not assigned at each time, but are in turn stochastic events. This situation will be described by considering again a stochastic process $\{(\mathbf{X}_t, \chi_t, \mathbf{E}_t)\}_{t \in [0,T)}$, where now, for any $t \in [0,T)$, $\mathbf{E}_t$ is a random variable in the “strict” sense of the word. More precisely, for any $t \in [0,T)$, we assign the following:
  • The range $W \subseteq \mathbb{R}^d \times \mathbb{Z}^h$ of all possible values of $\mathbf{E}_t$ for any $t \in [0,T)$ (with the same meaning as in the previous section); we denote again by $\varepsilon \equiv (\xi, \eta) \equiv (\xi_1, \dots, \xi_d, \eta_1, \dots, \eta_h)$ any unspecified element of $W$;
  • For any $t \in [0,T)$, a couple $\mathbf{E}_t \equiv (E_{t,1}, E_{t,2})$ of independent random variables, the former continuous and the latter discrete, with ranges $\mathbb{R}^d$ and $\mathbb{Z}^h$, respectively;
  • Setting $\mathbf{a} \equiv (a_1, a_2, \dots, a_d)$, $\mathbf{b} \equiv (b_1, b_2, \dots, b_d)$, and $I(\mathbf{a}, \mathbf{b}) \equiv [a_1, b_1] \times [a_2, b_2] \times \cdots \times [a_d, b_d]$, a probability density function $\rho_{t,1}(\xi) \equiv \rho_1(\xi, t)$ for $E_{t,1}$ such that
    $$\mathbf{P}(E_{t,1} \in I(\mathbf{a}, \mathbf{b})) = \int_{I(\mathbf{a}, \mathbf{b})} \rho_1(\xi, t)\, d\xi < 1$$
    for any $(\mathbf{a}, \mathbf{b}) \in \mathbb{R}^d \times \mathbb{R}^d$ such that $\mathbb{R}^d \not\subseteq I(\mathbf{a}, \mathbf{b})$;
  • A probability distribution function $P_{t,2}$ for $E_{t,2}$ such that
    $$\mathbf{P}(E_{t,2} = \eta) = P_{t,2}(\eta) \equiv P_2(\eta, t),$$
    for any $\eta \in \mathbb{Z}^h$, with $P_2(\eta, t) < 1$;
  • The density $\tau_{ij}(\varepsilon, t)$ (on the interval $[0,T)$) of the probability that $\chi_{ij,t} = 1$ conditional on the event $\mathbf{E}_t = \varepsilon$.
Obviously, for any $\varepsilon \in W$, the associated probability density of $\mathbf{E}_t$ at $\varepsilon$ is
$$\rho_{\mathbf{E}_t}(\varepsilon) = \rho_1(\xi, t)\, P_2(\eta, t),$$
so that, according to the law of alternatives, and under the conditions imposed in Section 4, we see at once that each transition probability depends on time, and its expression is
$$P_{z^{il},\, x_k \mid \chi_{il,t} = 1}(t) = \sum_{\eta \in \mathbb{Z}^h} P_2(\eta, t) \int_{\mathbb{R}^d} P_{z^{il},\, x_k \mid \chi_{il,t} = 1,\, E_{t,1} = \xi,\, E_{t,2} = \eta}\; \rho_1(\xi, t)\, \tau_{il}(\varepsilon, t)\, d\xi.$$
As a consequence, the system of equations governing the evolution of the many-particle system S takes the form
$$\frac{d p_{x,t}}{dt}(x_i) = \sum_{l=1}^{m} \sum_{\substack{k=1 \\ k \ne i}}^{m} \sum_{\eta \in \mathbb{Z}^h} P_2(\eta, t) \left[ p_{x,t}(x_k)\, p_{y,t}(y_l) \int_{\mathbb{R}^d} P_{z^{kl},\, x_i \mid \chi_{kl,t} = 1,\, \mathbf{E}_t = \varepsilon}\; \rho_1(\xi, t)\, \tau_{kl}(\varepsilon, t)\, d\xi - p_{x,t}(x_i)\, p_{y,t}(y_l) \int_{\mathbb{R}^d} P_{z^{il},\, x_k \mid \chi_{il,t} = 1,\, \mathbf{E}_t = \varepsilon}\; \rho_1(\xi, t)\, \tau_{il}(\varepsilon, t)\, d\xi \right] \qquad (i = 1, 2, \dots, m),$$
where $P_{z^{kl},\, x_i \mid \chi_{kl,t} = 1,\, \mathbf{E}_t = \varepsilon} \equiv P(z^{kl}, x_i \mid \chi_{kl,t} = 1, E_{t,1} = \xi, E_{t,2} = \eta)$ and $P_{z^{il},\, x_k \mid \chi_{il,t} = 1,\, \mathbf{E}_t = \varepsilon} \equiv P(z^{il}, x_k \mid \chi_{il,t} = 1, E_{t,1} = \xi, E_{t,2} = \eta)$, when $D$ is discrete, as assumed in the previous sections. When $D$ is instead a continuous set (typically, a real interval), then the sums in relation (24) must be replaced by integrals and—with the obvious correspondences $x_i \to x$, $x_k \to x^*$, $y_l \to y$, $z^{kl} \to (x^*, y)$, $z^{il} \to (x, y)$, $\tau_{kl}(\varepsilon, t) \to \tau(x^*, y; \varepsilon, t)$, $\tau_{il}(\varepsilon, t) \to \tau(x, y; \varepsilon, t)$, $\chi_{kl,t} \to \chi(x^*, y, t)$ and $\chi_{il,t} \to \chi(x, y, t)$—we obtain the equation
$$\frac{\partial p_x}{\partial t}(x, t) = \sum_{\eta \in \mathbb{Z}^h} P_2(\eta, t) \int_\Omega \int_\Omega \left[ p_x(x^*, t)\, p_y(y, t) \int_{\mathbb{R}^d} P_{(x^*, y),\, x \mid \chi(x^*, y, t) = 1,\, \mathbf{E}_t = \varepsilon}\; \rho_1(\xi, t)\, \tau(x^*, y; \varepsilon, t)\, d\xi - p_x(x, t)\, p_y(y, t) \int_{\mathbb{R}^d} P_{(x, y),\, x^* \mid \chi(x, y, t) = 1,\, \mathbf{E}_t = \varepsilon}\; \rho_1(\xi, t)\, \tau(x, y; \varepsilon, t)\, d\xi \right] dx^*\, dy,$$
where we have also agreed to write
$$p_{x,t}(x) = p_x(x, t), \qquad p_{x,t}(x^*) = p_x(x^*, t), \qquad p_{y,t}(y) = p_y(y, t).$$
System (25) is clearly the most general description of the behavior of any many-particle system, be it isolated or interacting with the external world, provided only binary interactions between the particles of the system are taken into account. On the one hand, we can see at once that the equations governing the evolution of the state vectors for an isolated system can be obtained from system (25) by taking $\mathbf{P}(\mathbf{E}_t = c) = 1$ for any $t$ (however the constant $c$ is chosen), and those for a system undergoing a deterministic external influence correspond to condition (21); on the other hand, we are able to depict the quite general case of external actions modifying the rates and the effects of internal interactions. These important influences can be considered together with other types of recently considered external influences [18], acting directly on the state vectors and not on the interactions between particles.
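To make the averaging over the external variable concrete, here is a minimal Monte Carlo sketch (ours; every concrete ingredient—a standard normal density standing in for $\rho_1$, a two-valued label $E_{t,2}$, and the placeholder dependence of the conditional jump probabilities and rates on $\varepsilon$—is a hypothetical choice made only to illustrate the structure of the expectation defining the effective, rate-weighted transition probabilities).

```python
import numpy as np

# Monte Carlo estimate of the rate-weighted average of the conditional jump
# probabilities over the law of the external variable E_t = (E_1, E_2):
# sum_eta P2(eta) * integral of P(. | chi = 1, eps) * rho_1(xi) * tau(eps) dxi.
rng = np.random.default_rng(5)
m, n_samples = 3, 20_000

eta_values = np.array([0, 1])                      # range of the discrete label E_2
P2 = np.array([0.7, 0.3])                          # P(E_2 = eta)

def P_cond(xi, eta):
    """Jump probabilities P1[k, l, i] given E_1 = xi, E_2 = eta (placeholder)."""
    w = np.exp(-0.5 * (np.arange(m) - (xi + eta)) ** 2)   # favor states near xi + eta
    w = np.broadcast_to(w, (m, m, m))
    return w / w.sum(axis=2, keepdims=True)

def tau_cond(xi, eta):
    """Encounter rate given the external state (placeholder dependence)."""
    return 1.0 / (1.0 + np.exp(-xi)) * (1.0 + 0.5 * eta)

xi_samples = rng.standard_normal(n_samples)        # E_1 samples; their density plays rho_1
eta_samples = rng.choice(eta_values, size=n_samples, p=P2)

P_eff = np.zeros((m, m, m))
for xi, eta in zip(xi_samples, eta_samples):
    P_eff += tau_cond(xi, eta) * P_cond(xi, eta)
P_eff /= n_samples                                 # rate-weighted average over E_t
print(P_eff[0, 0])                                 # weighted jump profile from the first pair state
```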
We will, however, leave aside direct external influences on state vectors. In connection with the external actions considered and described in systems (22), (24), and (25), and with particular concern for random events, we want to consider and discuss at least some possible applications of the above model to economic and social problems.

8. Conclusions and Perspectives

The doubly stochastic model presented in Section 7 seems to fit the description of the complex world of social and economic interactions in the whole human society, whose very rapid and often sudden and disordered changes we witness every day. Of course, there are also other very interesting applications, for instance in biochemistry, if we want to study the effects of accidentally breathing carbon monoxide or carbon dioxide: in these cases, the interactions between the hemoglobin contained in red blood cells and the cells of muscle tissue are either prevented (zero interaction rates, for carbon monoxide) or see their ability to deliver a sufficient quantity of oxygen to these latter cells seriously diminished (transition probabilities strongly re-distributed toward low values, for carbon dioxide). However, these, at this moment, are only proposals for future applications and models: to date, there is no research about an equation of the Kinetic Theory for Active Particles in which interaction rates and transition probabilities are stochastic, as influenced by random external events. For now, however, it is to economic and social scenarios that we want to turn our attention.
A first, rather evident example is economic, and does not require that the system $S$ be split into a number $N$ of different functional subsystems $S_j$ (not to be confused with the classes $C_i$ of individuals sharing the same state; see, e.g., [3,7]). (In this paper, we have not explicitly recalled the notion of “functional subsystems” just because this would simply amount to introducing one more index in each term appearing in the equations and, in the case of continuous state variables, to replacing a single equation with $N$ different equations, one for each subsystem. Readers not acquainted with the Kinetic Theory for Active Particles can find this topic in the cited books, and also in [15], where subsystems are recalled and briefly discussed.) If we consider any national population, and confine ourselves to considering only internal commercial interactions (without taking into account import and export), then we can correlate the frequency of such interactions, and also the probability distribution of their effects, to the rather stochastic fluctuations in inflation, depending on abrupt external events like sudden wars, interruptions in the import of raw materials for the internal production of goods, and the political choices of central banks about interest rates. In this case, the values of the random variables $\mathbf{E}_t$ should be vectors, whose components will be a dichotomous variable expressing the possible occurrence of a war, the prices of raw materials, and the official interest rates. It should be a task of economists to study the dependence of interaction rates and transition probabilities on these variables by means of suitable statistical analyses.
Much more interesting are the social problems that can be managed by means of the above model. The one we want to describe concerns the role of school in a society. The interaction rates between students and teachers would not be affected by the diffusion of telematic and IT tools, nor—what is more—by the utilitarian outlook now widespread in the government institutions of almost all Western nations, but the transition probabilities certainly would. The results of teaching, described as increases in critical skills in addition to technical skills, will depend on prescriptions by governments. Although the common views about the most desirable outcomes of education are rather stable, they nevertheless change with time in a rather unexpected way. Accordingly, the interactions between students and teachers can be described by the above outlined model, where the states of both students and teachers are vectors whose components express the possible different skills of students as well as the skills of teachers, and the random variables $\mathbf{E}_t$ are vectors of scores assigned by common opinion to different skills.
Coupling the direct effects of the environment on the state distribution of one or more interacting populations, considered as subsystems of a unique many-particle system, with its effects on interaction rates and transition probabilities seems now to be the most promising and interesting way to enable the scheme of the Kinetic Theory to describe and predict the evolution of both animal and human collectivities in the presence of random environmental changes (for instance, systems of prey and predators when random and abrupt climate changes take place, or systems of nations involved in an unexpected war).
The examples briefly outlined above are the real challenges for the development of a theory based on one of systems (24) and (25). They require the intervention of statisticians and the help of scholars of many different disciplines, like economics, the science of education, politics, and ethology, just to mention a few problems and themes possibly involved in the applications of the theory. Possible applications will be the main objects of our future research, at least by trying to outline possible case studies and to provide some numerical simulations, and we expect and hope that other researchers will follow this path to apply the scheme to an ever-wider class of concrete serious problems, with particular concern for the economy, which strongly needs detailed descriptions of situations that still nowadays systematically escape the schematic and deterministic descriptions of classical economics.

Author Contributions

Conceptualization, B.C. and M.M.; Formal analysis, B.C. and M.M.; Writing—original draft, B.C. and M.M.; Writing—review & editing, B.C. and M.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The research of M.M. has been carried out under the auspices of GNFM (National Group of Mathematical-Physics) of INdAM (National Institute of Advanced Mathematics).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Boltzmann, L. Lectures on Gas Theory; Courier Corporation: Washington, DC, USA, 2012. [Google Scholar]
  2. Aylaj, B.; Bellomo, N.; Gibelli, L. Crowd Dynamics by Kinetic Theory Modeling: Complexity, Modeling, Simulations, and Safety; Springer: New York, NY, USA, 2020. [Google Scholar]
  3. Bellomo, N.; Bellouquid, A.; Gibelli, L.; Outada, N. A Quest towards a Mathematical Theory of Living Systems; Birkhäuser: Basel, Switzerland, 2017. [Google Scholar]
  4. Bellomo, N.; Bertotti, M.L.; Delitala, M. From the kinetic theory of active particles to the modeling of social behaviors and politics. Qual. Quant. 2007, 41, 545–555. [Google Scholar] [CrossRef]
  5. Bellomo, N.; Brezzi, F. Traffic, crowds and swarms. Math. Model. Methods Appl. Sci. 2008, 18, 1145–1148. [Google Scholar] [CrossRef]
  6. Bellomo, N.; Carbonaro, B. On the modeling of complex socio-psychological systems with some reasonings about Kate, Jules and Jim. Diff. Equ. Nonlinear Mech. 2006, 1, 086816. [Google Scholar]
  7. Bellomo, N.; Degond, P.; Tadmor, E. (Eds.) Active Particles, Volume 1: Advances in Theory, Models, and Applications; Birkhäuser: Basel, Switzerland, 2017. [Google Scholar]
  8. Bertotti, M.L.; Modanese, G. Economic inequality and mobility in kinetic models for social sciences. Eur. Phys. J. Spec. Top. 2019, 225, 1945–1958. [Google Scholar] [CrossRef]
  9. Carbonaro, B.; Menale, M. Dependence on the Initial Data for the Continuous Thermostatted Framework. Mathematics 2019, 7, 602. [Google Scholar] [CrossRef]
  10. Chinesta, F.; Abisset-Chavanne, E. A Journey around the Different Scales Involved in the Description of Matter and Complex Systems: A Brief Overview with Special Emphasis on Kinetic Theory Approaches; Springer: New York, NY, USA, 2017. [Google Scholar]
  11. Menale, M.; Carbonaro, B. The mathematical analysis towards the dependence on the initial data for a discrete thermostatted kinetic framework for biological systems composed of interacting entities. AIMS Biophys. 2020, 7, 204–218. [Google Scholar] [CrossRef]
  12. Waldmann, M.R.; Martignon, L. A Bayesian Network Model of Causal Learning. In Proceedings of the Twentieth Annual Conference of the Cognitive Science Society, Madison, WI, USA, 25–28 July 2018; pp. 1102–1107. [Google Scholar]
  13. Sajid, Z.; Khan, F.; Zhang, Y. Integration of interpretive structural modelling with Bayesian network for biodiesel performance analysis. Renew. Energy 2017, 107, 194–203. [Google Scholar] [CrossRef]
  14. Hosseini, S.; Barker, K. A Bayesian network model for resilience-based supplier selection. Int. J. Prod. Econ. 2016, 180, 68–87. [Google Scholar] [CrossRef]
  15. Carbonaro, B.; Vitale, F. Some Remarks on Vector Markov Chains and Their Applications to the Description of Many-Particle Systems. In Stochastic Processes—Theoretical Advances and Applications in Complex Systems; Kulasiri, D., Ed.; IntechOpen: Rijeka, Croatia, 2024. [Google Scholar]
  16. Carbonaro, B. Modeling epidemics by means of the stochastic description of complex systems. Comput. Math. Methods 2021, 3, 1208–1220. [Google Scholar] [CrossRef]
  17. Carbonaro, B. The role of the principle of inertia in KTAP models. J. Math. Phys. 2022, 63, 013302. [Google Scholar] [CrossRef]
  18. Carbonaro, B.; Menale, M. A nonconservative kinetic framework under the action of an external force field: Theoretical results with application inspired to ecology. Eur. J. Appl. Math. 2023, 34, 1170–1186. [Google Scholar] [CrossRef]
  19. Menale, M.; Munafò, C.F. A kinetic framework under the action of an external force field: Analysis and application in epidemiology. Chaos Solitons Fractals 2023, 1174, 13801. [Google Scholar] [CrossRef]
  20. Marsan, G.A.; Bellomo, N.; Gibelli, L. Stochastic Evolving Differential Games toward a Systems Theory of Behavioral Social Dynamics. arXiv 2015, arXiv:1506.05699v2. [Google Scholar]
  21. Bellomo, N.; Marsan, G.A.; Tosin, A. Complex Systems and Society: Modeling and Simulation; Springer: New York, NY, USA, 2006. [Google Scholar]
  22. Toscani, G.; Sen, P.; Biswas, S. Kinetic exchange models of societies and economies. Phil. Trans. R. Soc. A Math. Phys. Eng. Sci. 2022, 380, 2224. [Google Scholar] [CrossRef] [PubMed]
  23. Dimarco, G.; Pareschi, L.; Toscani, G.; Zanella, M. Wealth distribution under the spread of infectious diseases. Phys. Rev. E 2020, 102, 022303. [Google Scholar] [CrossRef] [PubMed]
  24. Bernardi, E.; Pareschi, L.; Toscani, G.; Zanella, M. Effects of vaccination efficacy on wealth distribution in kinetic epidemic models. Entropy 2022, 24, 216. [Google Scholar] [CrossRef] [PubMed]
  25. Lawler, G.F. Introduction to Stochastic Processes; Chapman and Hall/CRC: Boca Raton, FL, USA, 2006. [Google Scholar]
  26. Rozanov, Y.A. Probability Theory: A Concise Course; Dover Publications: New York, NY, USA, 2019. [Google Scholar]
  27. Gilch, L. Markov Chains: An Introduction: Lecture Notes; Independently Published: Traverse City, MI, USA, 2022; ISBN 9798358906143. [Google Scholar]
  28. Norris, J.R. Markov Chains; Cambridge University Press: Cambridge, UK, 1998. [Google Scholar]
  29. Benaim, M.; Hurth, T. Markov Chains on Metric Spaces: A Short Course; Springer Nature: New York, NY, USA, 2022. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
