1. Introduction
Kinetic equations are measure-valued equations describing the dynamic law of large numbers (LLN) limit of Markovian interacting particle systems when the number of particles tends to infinity. The resulting nonlinear measure-valued evolutions can be interpreted probabilistically as nonlinear Markov processes, see [1]. In the case of a discrete state space, the set of probability measures coincides with the simplex
$$\Sigma_n=\Big\{x=(x_1,\ldots,x_n):\ x_j\ge 0,\ \sum_{j=1}^n x_j=1\Big\}$$
of sequences of nonnegative numbers $x_j$ such that $\sum_j x_j=1$, where n can be any natural number, or even $n=\infty$. The corresponding kinetic equations (in the case of stationary transitions) are ordinary differential equations (ODEs) of the form
$$\dot x_k=\sum_j x_j\,Q_{jk}(x),\qquad k=1,\ldots,n, \qquad\qquad (1)$$
where Q(x) is a Kolmogorov Q-matrix (that is, it has non-negative non-diagonal elements and the elements of each row sum up to zero) depending Lipschitz continuously on x. Let $X_t(x)$ denote the solution of Equation (1) with the initial condition x.
It is seen that evolution (1) is a direct nonlinear extension of the evolution of the probability distributions of a usual Markov chain, which is given by the equation
$$\dot x = xQ$$
with a constant Q-matrix Q.
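To make the discrete setting concrete, here is a minimal numerical sketch, not taken from the paper: it integrates a hypothetical two-state nonlinear kinetic ODE of form (1), with a made-up matrix Q(x) whose off-diagonal rates depend on the current distribution.

```python
import numpy as np

def Q_nonlinear(x):
    # Hypothetical Kolmogorov Q-matrix depending on the state x = (x1, x2):
    # off-diagonal rates grow with the mass of the target state, rows sum to zero.
    a, b = 1.0 + 2.0 * x[1], 0.5 + 2.0 * x[0]
    return np.array([[-a, a],
                     [b, -b]])

def solve_kinetic(x0, T=5.0, dt=1e-3):
    # Forward Euler for the kinetic ODE  dx/dt = x Q(x)  on the simplex.
    x = np.array(x0, dtype=float)
    for _ in range(int(T / dt)):
        x = x + dt * x @ Q_nonlinear(x)
    return x

print(solve_kinetic([0.9, 0.1]))  # stays on the simplex: entries >= 0, sum stays 1 (up to rounding)
```

The same routine with a constant matrix Q reproduces the linear Markov chain evolution above.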
Remark 1. One can show (see [1]) that any ordinary differential equation (with a Lipschitz r.h.s.) preserving the simplex has form (1) with some function Q(x).
Instead of looking at the evolution of the distributions, one can look alternatively at the evolution of functions $F_t(x)=F(X_t(x))$ of these distributions, which clearly satisfy the equation
$$\frac{\partial F_t}{\partial t}(x)=\sum_{j,k} x_j\,Q_{jk}(x)\,\frac{\partial F_t}{\partial x_k}(x). \qquad\qquad (2)$$
In the language of the theory of differential equations, one says that the ODEs (1) are the characteristics of the linear first-order partial differential Equation (2).
In the case of a usual Markov chain (with a constant Q) it is seen that the set of linear functions F is preserved by Equation (2). In fact, if $F(x)=(f,x)=\sum_j f_j x_j$ with some vector f, then $F_t(x)=(f_t,x)$ with $f_t$ satisfying the equation $\dot f=Qf$, which is dual to the equation $\dot x=xQ$. In the nonlinear case, this conservation of linearity does not hold.
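A short worked verification of this duality in the constant-Q case (an elementary computation added here for convenience):

```latex
% For a constant Q the solution of (1) is X_t(x) = x e^{tQ} (x a row vector), hence
F_t(x) = \bigl(f, X_t(x)\bigr) = \bigl(f,\; x\,e^{tQ}\bigr) = \bigl(e^{tQ}f,\; x\bigr) = (f_t, x),
\qquad f_t = e^{tQ}f, \qquad \dot f_t = Q f_t ,
% which is exactly the equation dual to \dot x = xQ.
```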
More generally (see [1]), for a system of mean-field interacting particles given by a family of operators $L_\mu$, which are generators of Markov processes in some Euclidean space $\mathbf{R}^d$ depending on probability measures $\mu$ on $\mathbf{R}^d$ as parameters and having a common core, the natural scaling limit of such a system, as the number of particles tends to infinity (dynamic LLN), is described by the kinetic equations, which are most conveniently written in the weak form
$$\frac{d}{dt}(f,\mu_t)=(L_{\mu_t}f,\mu_t), \qquad\qquad (3)$$
where f is an arbitrary function from the common core of the operators $L_\mu$.
Remark 2. In Equation (3) we stress explicitly in the notation that the measures $\mu_t$ depend on time t (to distinguish them from the time-independent test function f), while for the standard ODE (1) we have omitted this dependence for brevity.
The corresponding generalization of Equation (2) is the following differential equation in variational derivatives (see [1]):
$$\frac{\partial F_t}{\partial t}(\mu)=\Big(L_{\mu}\,\frac{\delta F_t(\mu)}{\delta\mu(\cdot)},\ \mu\Big). \qquad\qquad (4)$$
The most studied situation is the case with the $L_\mu$ being diffusion operators, in which case the nonlinear evolution given by (3) is referred to as a nonlinear diffusion or the McKean-Vlasov diffusion. Other important particular cases include nonlinear Lévy processes (when the $L_\mu$ generate Lévy processes), nonlinear stable processes (when the $L_\mu$ generate stable or stable-like processes) and various cases with jump-type processes generated by the $L_\mu$, which include the famous Boltzmann and Smoluchovski equations.
All these equations are derived as the natural scaling limits of some random walks on the space of configurations of many-particle systems.
Standard diffusions are known to be obtained as the natural scaling limits of simple random walks. For instance, in this way one can obtain the simplest diffusion processes with variable drift, governed by the diffusion equations
$$\frac{\partial f}{\partial t}(t,x)=\frac12\Delta f(t,x)+(b(x),\nabla f(t,x)), \qquad\qquad (5)$$
where $\Delta$ and $\nabla$ are operators acting on the x variable.
When the standard random walks are extended to more general CTRWs (continuous time random walks), characterised by the property that the random times between jumps are not exponential but have tail probabilities decreasing by a power law, their limits turn into non-Markovian processes described by fractional evolutions. The fractional equations were introduced into physics precisely via such limits, see [2]. For instance, instead of Equation (5), the simplest equation that one can obtain via a natural scaling is the equation
$$D^{\beta}_{*}f(t,x)=\frac12\Delta f(t,x)+(b(x),\nabla f(t,x)), \qquad\qquad (6)$$
where $D^{\beta}_{*}$ is the Caputo-Dzherbashyan fractional derivative of some order $\beta\in(0,1)$ acting on the time variable (see [3,4]).
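For intuition on where such fractional time derivatives come from, here is a small illustrative simulation, not from the paper: a one-dimensional CTRW whose waiting times are Pareto-distributed with tail index beta in (0,1), so that the mean waiting time is infinite; the parameters are toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def ctrw_position(t_final, beta=0.7, n_paths=10_000):
    # CTRW on the integer lattice: +/-1 jumps separated by Pareto waiting times
    # with P(tau > t) ~ t^{-beta}; such walks rescale to time-fractional diffusions.
    pos = np.zeros(n_paths)
    time = np.zeros(n_paths)
    active = np.ones(n_paths, dtype=bool)
    while active.any():
        n = active.sum()
        tau = rng.pareto(beta, size=n) + 1.0      # heavy-tailed waiting times
        jump = rng.choice([-1.0, 1.0], size=n)
        time_new = time[active] + tau
        still = time_new <= t_final               # only jumps completed before t_final count
        idx = np.where(active)[0]
        pos[idx[still]] += jump[still]
        time[idx] = time_new
        active[idx[~still]] = False
    return pos

x = ctrw_position(t_final=1000.0)
print("empirical MSD:", np.mean(x**2))  # grows like t^beta (subdiffusion), not like t
```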
Now the question arises: what will be the natural fractional version of the kinetic evolutions (3) and (4)? One way of extension would be to write the Caputo-Dzherbashyan fractional derivative of some fixed order instead of the usual derivative in (3), as was done, e.g., in [5]. However, if one follows systematically the idea of scaling from the CTRW approximation, and takes into account the natural possibility of different waiting times for jumps from different states (as for the usual Markovian approximation), one obtains an equation of a more complicated type, with fractional derivatives of position-dependent order. In [6] this derivation was performed for the case of a discrete state space, that is, for a nonlinear Markov chain described by Equation (1), leading to the fractional generalization (7) of Equation (2), where the vector $\alpha=(\alpha_1,\ldots,\alpha_n)$ describes the power laws of the waiting times in the various states and the right Caputo-Dzherbashyan fractional derivative entering (7) has an order depending on the position x and acts on the time variable t.
The aim of the present paper is to derive the corresponding limiting equation in the general case which, instead of (4) (and generalising (7)), takes the form of Equation (8), where the order of the fractional derivative is a function on the state space. We also establish the well-posedness of this equation and a probabilistic formula for its solutions. We will perform the derivation only in the case of integral operators $L_\mu$, that is, probabilistically, for underlying Markov processes of pure jump type.
The content of the paper is as follows. In the next section, we recall some basic notations and facts from the theory of measure-valued limits of interacting particle systems. In
Section 3 we obtain the dynamic LLN for interacting multi-agent systems for the case of non-exponential waiting times with the power tail distributions. In
Section 4 we formulate our main results for the new class of fractional kinetic measure-valued evolutions, with the mixed fractional-order derivatives and with variational derivatives with respect to the measure variable. In
Section 5,
Section 6 and
Section 7, we present proofs of our main results, formulated in the previous section. Namely, we justify rigorously the limiting procedure, prove the well-posedness of the new equations and present a probabilistic formula for their solutions.
In
Section 8 we extend the CTRW modeling of interacting particles to the case of binary or even more general
k-ary interactions.
In the two subsequent sections, Section 9 and Section 10, we present examples of the kinetic equations for binary interaction: the fractional versions of the Smoluchovski coagulation and Boltzmann collision models.
In
Appendix A,
Appendix B and
Appendix C we present auxiliary results we need, namely, the standard functional limit theorem for the random-walk-approximation; some results on time-nonhomogeneous stable-like subordinators; and a standard piece of theory about Dynkin’s martingales in a way tailored to our purposes.
The bold letters $\mathbf{E}$ and $\mathbf{P}$ will be used to denote expectation and probability.
All general information on fractional calculus that we are using can be found in the books [
7,
8,
9,
10].
2. Preliminaries: General Kinetic Equations for Birth-and-Death Processes with Migration
Let us recall some basic notations and facts from the standard measure-valued limits of interacting processes.
Let X be a locally compact metric space. For simplicity and definiteness, one can take $X=\mathbf{R}^d$ or $X=\mathbf{R}_+$ (the set of positive numbers), but this is not important. Denoting by $X^0$ a one-point space and by $X^j$ the powers $X\times\cdots\times X$ (j times), we denote by $\mathcal{X}$ their disjoint union $\mathcal{X}=\cup_{j=0}^{\infty}X^j$. In applications, X specifies the state space of one particle and $\mathcal{X}$ stands for the state space of a random number of similar particles. We denote by $C_{sym}(\mathcal{X})$ the Banach space of symmetric (invariant under all permutations of arguments) bounded continuous functions on $\mathcal{X}$ and by $C_{sym}(X^k)$ the corresponding spaces of functions on the finite powers $X^k$. The space of symmetric (positive finite Borel) measures on $\mathcal{X}$ is denoted by $\mathcal{M}_{sym}(\mathcal{X})$. The elements of $\mathcal{M}_{sym}(\mathcal{X})$ and $C_{sym}(\mathcal{X})$ are interpreted as the (mixed) states and observables for a Markov process on $\mathcal{X}$. We denote the elements of $\mathcal{X}$ by bold letters, say $\mathbf{x}=(x_1,\ldots,x_n)$, $\mathbf{y}=(y_1,\ldots,y_m)$.
Reducing the set of observables to $C_{sym}(\mathcal{X})$ means effectively that our state space is not $\mathcal{X}$ (or $X^k$) but rather the quotient space $S\mathcal{X}$ (or $SX^k$, respectively) obtained by factorization with respect to all permutations, which allows the identifications $C_{sym}(\mathcal{X})=C(S\mathcal{X})$ and $C_{sym}(X^k)=C(SX^k)$.
For a function f on X we shall denote by $f^{\oplus}$ the function on $\mathcal{X}$ defined as $f^{\oplus}(x_1,\ldots,x_n)=f(x_1)+\cdots+f(x_n)$ for any $(x_1,\ldots,x_n)\in\mathcal{X}$.
A key role in the theory of measure-valued limits of interacting particle systems is played by the scaled inclusion of $S\mathcal{X}$ to $\mathcal{M}(X)$ given by
$$\mathbf{x}=(x_1,\ldots,x_n)\ \mapsto\ h\delta_{\mathbf{x}}=h(\delta_{x_1}+\cdots+\delta_{x_n}), \qquad\qquad (9)$$
which defines a bijection between $S\mathcal{X}$ and the set $\mathcal{M}^h_{\delta}(X)$ of finite sums of h-scaled Dirac's δ-measures, where h is a small positive parameter. If a process under consideration preserves the number of particles N, then one usually chooses $h=1/N$.
Remark 3. Let us stress that we are using here a non-conventional notation $\delta_{\mathbf{x}}=\delta_{x_1}+\cdots+\delta_{x_n}$, which is convenient for our purposes.
Clearly each $F\in C(S\mathcal{X})$ is defined by its components (restrictions) $F^k$ on the sets $SX^k$, so that for $\mathbf{x}=(x_1,\ldots,x_k)\in X^k$, say, we can write $F(\mathbf{x})=F^k(x_1,\ldots,x_k)$. Similar notations for the components of measures from $\mathcal{M}(S\mathcal{X})$ will be used. In particular, the pairing between $C(S\mathcal{X})$ and $\mathcal{M}(S\mathcal{X})$ can be written as
$$(F,\rho)=\sum_{k=0}^{\infty}\int_{X^k}F^k(x_1,\ldots,x_k)\,\rho^k(dx_1\cdots dx_k).$$
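As a quick illustration of the scaled empirical-measure representation (9) and of this pairing, here is a minimal sketch; all names and data below are invented for illustration only.

```python
import numpy as np

def empirical_measure(points, h):
    # Represent h * (delta_{x_1} + ... + delta_{x_n}) simply by its atoms and weight.
    return {"atoms": np.asarray(points, dtype=float), "weight": h}

def pair(f, mu):
    # Pairing (f, mu) = h * sum_i f(x_i) of a test function with the scaled measure.
    return mu["weight"] * np.sum(f(mu["atoms"]))

x = np.random.default_rng(1).exponential(size=50)   # a configuration of 50 particles in X = R_+
mu = empirical_measure(x, h=1.0 / len(x))
print(pair(np.sin, mu))   # approximates the integral of sin against the normalized measure
```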
A mean-field dependent jump-type process of particle transformations (with a possible change in the number of particles), or a mean-field dependent birth-and-death process with migration, can be specified by a continuous transition kernel
$$P(x,\mu;d\mathbf{y})=\{P(x,\mu;dy_1\cdots dy_m)\} \qquad\qquad (10)$$
from X to $S\mathcal{X}$, depending on a measure $\mu\in\mathcal{M}(X)$ as a parameter.
Remark 4. For brevity we write (10) in a unified way including the case of no offspring. More precisely, the transitions into the empty configuration (the one-point space $X^0$) describe the death of particles and are specified by the corresponding rates. We restrict attention to kernels with a bounded number of resulting particles in order not to bother with some irrelevant technicalities arising otherwise.
To exclude fictitious jumps one usually assumes that $P(x,\mu;\{x\})=0$ for all x, which we shall do as well.
By the intensity of the interaction at x we mean the total mass
$$|P(x,\mu)|=P(x,\mu;S\mathcal{X})=\sum_{m=0}^{\infty}\int_{X^m}P(x,\mu;dy_1\cdots dy_m). \qquad\qquad (11)$$
Supposing that any particle, randomly chosen from a given set of n particles, can be transformed according to P, leads to the following generator of the process on $\mathcal{X}$:
$$Gf(x_1,\ldots,x_n)=\sum_{i=1}^{n}\int_{S\mathcal{X}}\big[f(\mathbf{x}\setminus x_i,\mathbf{y})-f(\mathbf{x})\big]\,P(x_i,h\delta_{\mathbf{x}};d\mathbf{y}),$$
where $(\mathbf{x}\setminus x_i,\mathbf{y})$ denotes the configuration obtained from $\mathbf{x}$ by removing the particle $x_i$ and adding the particles of $\mathbf{y}$.
By the standard probabilistic interpretation of jump processes (see, e.g., [1,11]), the probabilistic description of the evolution of a pure jump Markov process on $\mathcal{X}$ specified by the generator G (if this process is well defined) is as follows. Starting from a state $\mathbf{x}=(x_1,\ldots,x_n)$, one attaches to each $x_i$ a random $|P(x_i,h\delta_{\mathbf{x}})|$-exponential waiting time $\sigma_i$ (exponential clock). That is, $\mathbf{P}(\sigma_i>t)=\exp\{-t\,|P(x_i,h\delta_{\mathbf{x}})|\}$ for all i. Then the minimum $\sigma=\min_i\sigma_i$ of all these times is again an exponential random time, namely the $|P(\mathbf{x},h\delta_{\mathbf{x}})|$-exponential waiting time with
$$|P(\mathbf{x},\mu)|=\sum_{i=1}^{n}|P(x_i,\mu)|.$$
When $\sigma$ rings, a particle $x_i$ at $\mathbf{x}$ that makes a transition is chosen according to the probability law $|P(x_i,h\delta_{\mathbf{x}})|/|P(\mathbf{x},h\delta_{\mathbf{x}})|$, and then it makes an instantaneous transition to a configuration $\mathbf{y}$ according to the distribution $P(x_i,h\delta_{\mathbf{x}};d\mathbf{y})/|P(x_i,h\delta_{\mathbf{x}})|$. Then this procedure repeats, starting from the new state $(\mathbf{x}\setminus x_i,\mathbf{y})$.
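The description above is exactly a Gillespie-type construction of a pure-jump process. A minimal sketch of one simulation step, assuming hypothetical user-supplied routines intensity(x, mu, h) for |P(x, mu)| and transition(x, mu, h, rng) sampling from the normalized law P(x, mu; dy)/|P(x, mu)| (both placeholders, not from the paper):

```python
import numpy as np

def jump_step(config, h, intensity, transition, rng):
    # One step of the pure-jump dynamics: exponential clocks at every particle,
    # the particle with the minimal clock is transformed.
    mu_atoms = np.asarray(config)                  # atoms of the empirical measure h * sum_i delta_{x_i}
    rates = np.array([intensity(x, mu_atoms, h) for x in config])
    total = rates.sum()
    tau = rng.exponential(1.0 / total)             # min of exponential clocks ~ Exp(sum of rates)
    i = rng.choice(len(config), p=rates / total)   # which particle jumps
    offspring = transition(config[i], mu_atoms, h, rng)   # list of new particles (possibly empty)
    new_config = [x for j, x in enumerate(config) if j != i] + list(offspring)
    return new_config, tau
```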
By the transformation (9), we transfer the process generated by G on $\mathcal{X}$ to a process on the measures $\mathcal{M}^h_{\delta}(X)$ with the generator
$$G_hF(h\delta_{\mathbf{x}})=\sum_{i=1}^{n}\int_{S\mathcal{X}}\big[F(h\delta_{\mathbf{x}}-h\delta_{x_i}+h\delta_{\mathbf{y}})-F(h\delta_{\mathbf{x}})\big]\,P(x_i,h\delta_{\mathbf{x}};d\mathbf{y}). \qquad\qquad (12)$$
Then it is seen that, as $h\to 0$ (and for smooth F), these generators converge to the operator
$$\Lambda F(\mu)=\int_X\int_{S\mathcal{X}}\Big[\Big(\frac{\delta F(\mu)}{\delta\mu(\cdot)}\Big)^{\oplus}(\mathbf{y})-\frac{\delta F(\mu)}{\delta\mu(x)}\Big]\,P(x,\mu;d\mathbf{y})\,\mu(dx). \qquad\qquad (13)$$
This makes it plausible to conclude (for detail see [1]) that, as $h\to 0$ (and under mild technical assumptions), the process generated by (12) converges weakly to a deterministic process on measures generated by (13), so that this process is given by the solution of a kinetic equation of type (3), that is,
$$\frac{d}{dt}(f,\mu_t)=\int_X\int_{S\mathcal{X}}\big[f^{\oplus}(\mathbf{y})-f(x)\big]\,P(x,\mu_t;d\mathbf{y})\,\mu_t(dx). \qquad\qquad (14)$$
Denoting by $\mu_t(\mu)$ the solution to Equation (14) with the initial condition $\mu$, we can rewrite the evolution (14) equivalently in terms of the functions $F_t(\mu)=F(\mu_t(\mu))$ as an equation of type (4):
$$\frac{\partial F_t}{\partial t}(\mu)=\int_X\int_{S\mathcal{X}}\Big[\Big(\frac{\delta F_t(\mu)}{\delta\mu(\cdot)}\Big)^{\oplus}(\mathbf{y})-\frac{\delta F_t(\mu)}{\delta\mu(x)}\Big]\,P(x,\mu;d\mathbf{y})\,\mu(dx). \qquad\qquad (15)$$
Notice that (14) is obtained from (15) by choosing F to be a linear function $F(\mu)=(f,\mu)$.
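To spell out the last remark, here is the direct computation with the variational derivative of a linear functional:

```latex
% For the linear functional F(\mu) = (f,\mu) the variational derivative is the test function itself:
\frac{\delta F(\mu)}{\delta \mu(x)} = f(x),
\qquad
\Bigl(\frac{\delta F(\mu)}{\delta \mu(\cdot)}\Bigr)^{\oplus}(\mathbf{y}) = f^{\oplus}(\mathbf{y}) = f(y_1)+\cdots+f(y_m),
% so the right-hand side of (15) becomes that of (14), and the left-hand side becomes d(f,\mu_t)/dt.
```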
Alternatively, and more relevantly for the extension to CTRW, we can obtain the same limit from a discrete Markov chain. Namely, let us define a Markov chain on $\mathcal{X}$ with a transition operator $U^h$ and a time step $\tau$ such that, in a state $\mathbf{x}=(x_1,\ldots,x_n)$, a jump from $x_i$ occurs with the probability $\tau|P(x_i,h\delta_{\mathbf{x}})|$ and is distributed according to the distribution $P(x_i,h\delta_{\mathbf{x}};\cdot)/|P(x_i,h\delta_{\mathbf{x}})|$, and with the remaining probability the process stays in $\mathbf{x}$. In terms of the measures, the jumps are described by the transitions from $h\delta_{\mathbf{x}}$ to $h\delta_{\mathbf{x}}-h\delta_{x_i}+h\delta_{\mathbf{y}}$.
We set $\tau=h$ to link the scaling in time with the scaling in the number of particles. Let us see what happens in the limit $h\to 0$. Namely, we are interested in the weak limit of the chains with transitions $(U^h)^{[t/\tau]}$, where $[t/\tau]$ denotes the integer part of the number $t/\tau$, as $h\to 0$. It is well known (see, e.g., Theorem 19.28 of [11] or Theorem 8.1.1 of [12]) that if such a chain converges to a Feller process, then the generator of this limiting process can be obtained as the limit
$$\Lambda F=\lim_{\tau\to 0}\frac{1}{\tau}\big(U^hF-F\big).$$
One sees directly that this limit coincides with (13).
In the simplest case when the number of particles is preserved by all transformations, that is when only one resulting particle is allowed in (10) (that is, only migration can occur), Equations (14) and (15) simplify to the equations
$$\frac{d}{dt}(f,\mu_t)=\int_X\int_X\big[f(y)-f(x)\big]\,P(x,\mu_t;dy)\,\mu_t(dx) \qquad\qquad (16)$$
and, respectively,
$$\frac{\partial F_t}{\partial t}(\mu)=\int_X\int_X\Big[\frac{\delta F_t(\mu)}{\delta\mu(y)}-\frac{\delta F_t(\mu)}{\delta\mu(x)}\Big]\,P(x,\mu;dy)\,\mu(dx). \qquad\qquad (17)$$
3. CTRW Modeling of Interacting Particle Systems
Our objective is to obtain the dynamic LLN for interacting multi-agent systems for the case of non-exponential waiting times with the power tail distributions. As one can expect, this LLN will not be deterministic anymore.
We shall assume that the waiting times between jumps are not exponential, but have a power-law decay. Recall that a positive random variable $\sigma$ with a probability law on $\mathbf{R}_+$ is said to have a power tail of index $\alpha$ if $\mathbf{P}(\sigma>t)\sim\beta\,t^{-\alpha}$ for large t, that is, the ratio of the l.h.s. and the r.h.s. tends to 1, as $t\to\infty$. Here $\beta$ is a positive constant.
Like the exponential tails, the power tails are invariant under taking minima. Namely, if $\sigma_i$, $i=1,\ldots,n$, are independent variables with power tails of indices $\alpha_i$ and normalising constants $\beta_i$, then $\min_i\sigma_i$ is clearly a variable with a power tail of index $\alpha_1+\cdots+\alpha_n$ and normalising constant $\beta_1\cdots\beta_n$.
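The one-line computation behind this statement (independence turns the tail of the minimum into a product of tails):

```latex
\mathbf{P}\Bigl(\min_{i}\sigma_i > t\Bigr) = \prod_{i=1}^{n}\mathbf{P}(\sigma_i > t)
  \sim \prod_{i=1}^{n}\frac{\beta_i}{t^{\alpha_i}}
  = \frac{\beta_1\cdots\beta_n}{t^{\alpha_1+\cdots+\alpha_n}}, \qquad t \to \infty .
```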
In full analogy with the case of exponential times discussed above, let us assume that the waiting time of the agent at $x_i$ to decay has a power tail with a state-dependent index built from some fixed parameter. For simplicity, assume that the normalising constant equals 1. Consequently, the minimal waiting time of all n points in a collection $\mathbf{x}=\{x_1,\ldots,x_n\}$ will have a probability law with a power tail whose index is the corresponding sum over the collection, expressed through the intensity $|P|$ given by (11); see Formula (18).
Our process with power tail waiting times can thus be described probabilistically as follows. Starting from any time and current state $\mathbf{x}$, we wait a random waiting time $\sigma$, which has a power tail with the index described above. Then everything goes as in the case of exponential waiting times. Namely, when $\sigma$ rings, a particle $x_i$ at $\mathbf{x}$ that makes a transition is chosen according to the probability law $|P(x_i,h\delta_{\mathbf{x}})|/|P(\mathbf{x},h\delta_{\mathbf{x}})|$, and then it makes an instantaneous transition to a configuration $\mathbf{y}$ according to the distribution $P(x_i,h\delta_{\mathbf{x}};d\mathbf{y})/|P(x_i,h\delta_{\mathbf{x}})|$. Then this procedure repeats, starting from the new state $(\mathbf{x}\setminus x_i,\mathbf{y})$.
In order to derive the LLN in this case, let us lift this non-Markovian evolution on the space of subsets (configurations) $\mathcal{X}$, or on the corresponding measures from $\mathcal{M}^h_{\delta}(X)$, to a discrete-time Markov chain by considering the total waiting time s as an additional space variable, and by additionally making the usual scaling of the waiting times for the jumps of a CTRW (see Proposition A1 from Appendix A). Thus we consider the Markov chain with jumps occurring at the discrete times $k\tau$, $k\in\mathbf{N}$, such that the process at a state $(\mathbf{x},s)$ at time $k\tau$ jumps to a state whose first coordinate is obtained by one transition of the original process, or equivalently a state $(h\delta_{\mathbf{x}},s)$ jumps to the state $(h\delta_{\mathbf{x}}-h\delta_{x_i}+h\delta_{\mathbf{y}},\,s+r)$ with the scaled waiting time r added to the second coordinate, where $x_i$ and $\mathbf{y}$ are chosen as above (that is, $x_i$ according to the normalized intensities and $\mathbf{y}$ according to the normalized kernel) and r is distributed according to the scaled power-tail waiting-time law of the current state.
As above, we link the scaling of measures with the scaling of time by choosing $\tau=h$, which we set from now on. Then the transition operator of the chain, acting on functions F of the pair (measure, total waiting time), is given by Formula (19).
What we are interested in is the value of the first coordinate evaluated at the random time at which the total waiting time reaches t, that is, at the time inverse to the second coordinate of the chain. Thus the scaled mean-field interacting system of particles with a power tail waiting time between jumps is the (non-Markovian) process (20). This process can also be called the scaled CTRW of mean-field interacting particles (with birth-and-death and migration).
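A minimal Monte Carlo sketch of this construction, not from the paper: it reuses the hypothetical intensity/transition routines from the earlier sketch, draws Pareto waiting times instead of exponential ones, and reads off the configuration at the last jump completed before the physical time t. The state dependence of the tail index is a placeholder assumption of the sketch.

```python
import numpy as np

def ctrw_interacting(config, t_final, h, intensity, transition, alpha_index, rng):
    # CTRW of mean-field interacting particles: at each step a particle jumps as in the
    # Markovian model, but the elapsed time is a heavy-tailed (Pareto) waiting time whose
    # index alpha_index(config, h) may depend on the current configuration (assumption of
    # this sketch; the paper derives the precise state dependence).
    s = 0.0
    while True:
        a = alpha_index(config, h)
        tau = rng.pareto(a) + 1.0                 # P(tau > t) ~ t^{-a}
        if s + tau > t_final:                     # the clock overshoots: freeze the state
            return config
        s += tau
        mu_atoms = np.asarray(config)
        rates = np.array([intensity(x, mu_atoms, h) for x in config])
        i = rng.choice(len(config), p=rates / rates.sum())
        offspring = transition(config[i], mu_atoms, h, rng)
        config = [x for j, x in enumerate(config) if j != i] + list(offspring)
```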
4. Main Results
Let us see first of all what happens with this chain in the limit $h\to 0$. Namely, we are interested in the weak limit of the chains with transitions $(U^h)^{[t/\tau]}$, where $[t/\tau]$ denotes the integer part of the number $t/\tau$, as $h\to 0$.
As above, if such a chain converges to a Feller process, then the generator of this limiting process can be obtained as the limit
$$\Lambda F=\lim_{\tau\to 0}\frac{1}{\tau}\big(U^hF-F\big). \qquad\qquad (21)$$
Lemma 1. Assume that F has a bounded derivative in t and a bounded variational derivative in μ. If $h\delta_{\mathbf{x}}$ converges weakly to some measure, which we shall also denote by μ with some abuse of notation, as $h\to 0$, then the limit (21) exists and is given by Formula (22), in which the function P is paired with the measure μ in the usual way (but taking into account the additional dependence of P on μ), as written out in (23).

Proposition 1. Suppose that $P(x,\mu;\cdot)$ is a weakly continuous transition kernel from X to $S\mathcal{X}$, that is, the measures $P(x,\mu;\cdot)$ depend weakly continuously on x and μ. Moreover, let the intensity $|P(x,\mu)|$ be everywhere strictly positive and uniformly bounded. Finally, the transition kernel P is assumed to be subcritical. Then the Markov process whose first coordinate does not depend on s (it could well be denoted $\mu_t(\mu)$ for short), is deterministic and solves the kinetic equation of type (14), namely Equation (24), and whose second coordinate is a time-nonhomogeneous stable-like subordinator generated by the time-dependent family of generators (25) (see Appendix B for the proper definition of this process), is well defined and is generated by operator (22). Moreover, the discrete-time Markov chains given by (19) converge weakly to this Markov process, so that, in particular, the convergence (26) holds for any continuous bounded function F.

Remark 5. The assumption of boundedness of the intensity is made for simplicity and can be weakened essentially. Effectively one needs here the well-posedness of the kinetic Equation (24), which is established under rather general assumptions, see [1].

Finally we can formulate our main result.
Theorem 1. Under the assumptions of Proposition 1, the marginal distributions of the scaled CTRW of mean-field interacting particles (20) converge to the marginal distributions of the process (27), where the random time entering (27) is the time at which the stable-like process generated by (25) and started at s reaches the time t; that is, for a bounded continuous function F, the convergence (28) holds. Moreover, for any smooth function F (with a continuous bounded variational derivative), the evolution of averages satisfies the mixed fractional differential Equation (29) with the corresponding terminal condition, where the right fractional derivative acting on the time variable is defined by Formula (30).

It is not difficult to extend this statement to the functional level, namely by deriving the convergence in distribution of the scaled CTRW (20) to the process (27), but we shall not plunge into the related technical details here.
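The probabilistic formula behind Theorem 1 is a time change by the inverse of a stable(-like) subordinator. As a purely illustrative sketch (assuming, for simplicity, a constant order beta rather than the state-dependent order derived in the paper), one can approximate the averaged observable by Monte Carlo over the hitting time of level t by a beta-stable subordinator:

```python
import numpy as np

rng = np.random.default_rng(2)

def stable_increment(beta, dt, size, rng):
    # Kanter's representation of a one-sided beta-stable variable (Laplace transform e^{-l^beta}),
    # scaled to an increment of the subordinator over a time step dt.
    u = rng.uniform(0.0, np.pi, size)
    w = rng.exponential(1.0, size)
    s = (np.sin(beta * u) / np.sin(u) ** (1.0 / beta)
         * (np.sin((1.0 - beta) * u) / w) ** ((1.0 - beta) / beta))
    return dt ** (1.0 / beta) * s

def inverse_subordinator(t, beta, ds=1e-2, rng=rng):
    # E_t = inf { s : S_s > t }, the random time at which the subordinator crosses level t.
    s, level = 0.0, 0.0
    while level <= t:
        level += stable_increment(beta, ds, 1, rng)[0]
        s += ds
    return s

def averaged_observable(F_of_mu_at, t, beta, n_samples=500):
    # Monte Carlo version of the probabilistic formula: average the deterministic
    # kinetic flow, evaluated at the random internal time E_t.
    samples = [F_of_mu_at(inverse_subordinator(t, beta)) for _ in range(n_samples)]
    return np.mean(samples)

# F_of_mu_at(s) should return F(mu_s) for the deterministic kinetic flow mu_s;
# here a toy scalar flow is plugged in just to make the sketch runnable.
print(averaged_observable(lambda s: np.exp(-s), t=1.0, beta=0.7))
```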
5. Proof of Lemma 1
We have the decomposition (31), in which the error term R is written out explicitly below it. Assuming that $h\delta_{\mathbf{x}}$ converges to some measure, which we also denote by μ, as $h\to 0$, we can conclude by (A3) that the first term in (31) converges, as $h\to 0$, to the first term of (22), whenever F is continuously differentiable in s. By the definition of the variational derivative, the second term in (31) converges, as $h\to 0$, to the second term of (22). To estimate the term R we note that if F has a bounded derivative in t and a bounded variational derivative in μ, then the expressions entering R are uniformly bounded, as is the relevant derivative. Hence by (A3) it follows that $R\to 0$, as $h\to 0$, implying (22).
6. Proof of Proposition 1
By Theorem 6.1 of [1], under the assumptions of the Proposition, the kinetic Equation (24) is well posed. Consequently, in view of the discussion of stable-like subordinators from Appendix B, the process described in Proposition 1 is well defined.

Let us show that, for smooth functions F subject to the appropriate vanishing conditions, the generator of this process is indeed given by Formula (22). To simplify formulas we shall sometimes consider the relevant functions to be defined for all s. By (A10), the transition operators of this process are given by Formula (32), where, by (A11) and (25), the corresponding transition densities are given by Formula (33); for brevity a shorthand notation is used there for the solution of the kinetic equation.

We need to show that the derivative of (32) at the initial time coincides with $\Lambda F$, with $\Lambda$ given by (22). Differentiating (32), we obtain a sum of two terms. We see that the second term turns into the second term of (22) at the initial time. Using (A13), we get for the first term an expression involving the generators (25), which by a change of variables rewrites in an equivalent form and which, at the initial time, becomes equal to the first term of (22). Thus smooth functions with the indicated vanishing properties do belong to the domain of the generator of the process.

Differentiating Formula (32) with respect to μ and s, we can conclude that smooth functions are invariant under the semigroup of this process. In fact, differentiability with respect to s follows from the explicit Formula (33) (and the bounds (A12)), and differentiability with respect to μ follows, on the one hand, from the explicit formula for the transition densities and, on the other hand, from the smooth dependence of the solutions to the kinetic equations on the initial data. This smooth dependence is a known fact from the theory of kinetic equations (Theorem 8.2 of [1]), which has in fact a rather straightforward proof under the assumption of bounded intensities. Consequently, smooth functions with the indicated vanishing properties form an invariant core for the semigroup of the process.

Consequently, from Lemma 1 and the general result on the convergence of semigroups (see, e.g., Theorem 19.28 of [11]) we can conclude that the Markov chains converge weakly to the Markov process, thus completing the proof of the Proposition.
7. Proof of Theorem 1
By density arguments, to prove (28), it is sufficient to establish the required convergence, as $h\to 0$, for smooth functions F (that have bounded continuous variational derivatives). The difference to be estimated splits naturally into two parts, I and II.

To estimate I we write it as an integral against the distribution of the corresponding random time (this distribution depends on h), split at a large level K. Choosing K large enough we can make the second integral arbitrarily small uniformly in h. And then, by Proposition 1, we can make the first integral arbitrarily small by choosing h small enough (and uniformly in t from compact sets).

It remains to estimate II. Integrating by parts we obtain the corresponding representation, and therefore the required bound. By Proposition 1 (and because the distribution of the relevant random variable is absolutely continuous), the integrand converges, as $h\to 0$. Therefore, by dominated convergence, $II\to 0$, as $h\to 0$.

It remains to show that F satisfies Equation (29). However, this follows from general arguments, see Formulas (A14) and (A15) from Appendix C, because Equation (29) is a particular case of the equation appearing there, with the generator taken from (A14).
8. Extension to Binary and k-ary Interaction
Here we extend the CTRW modeling of interacting particles to the case of binary or even more general k-ary interactions, stressing the main new points and omitting details. Firstly we recall the basic scheme of general Markovian binary interaction from [1].
A mean-field dependent jump-type process of binary interaction of particles (with a possible decrease in the number of particles) can be specified by a continuous transition kernel $P(x_1,x_2,\mu;d\mathbf{y})$ from $SX^2$ to $S\mathcal{X}$, depending on a measure μ as a parameter, with the intensity defined, analogously to (11), as the total mass
$$|P(x_1,x_2,\mu)|=P(x_1,x_2,\mu;S\mathcal{X}).$$
We again assume that fictitious jumps are excluded.
Remark 6. The possibility that more than two particles can result after the decay of two particles creates some technical difficulties for the analysis of the kinetic equation that we choose to avoid here.
For a finite subset $I=\{i_1,\ldots,i_k\}$ of a finite set $J=\{1,\ldots,n\}$, we denote by $|I|$ the number of elements in I, by $\bar I$ its complement $J\setminus I$ and by $x_I$ the collection of variables $\{x_i: i\in I\}$.
The corresponding scaled Markov process on $\mathcal{X}$ of binary interaction is defined via its generator, in which any pair of particles can be transformed according to P (note the additional multiplier h, as compared with (12), needed for the proper scaling of binary interactions). In terms of the measures from $\mathcal{M}^h_{\delta}(X)$ the process can be equivalently described by a generator acting on the space of continuous functions F on $\mathcal{M}^h_{\delta}(X)$. Applying the obvious identity expressing a sum over pairs of particles as a double integral against the empirical measure (valid for any configuration $\mathbf{x}$ and any symmetric function of two variables), one observes that this operator can be written in an equivalent integral form.
On the linear functions $F(\mu)=(f,\mu)$ this operator acts in an explicit way, and it follows that if $h\to 0$ and $h\delta_{\mathbf{x}}$ tends to some finite measure μ (in other words, the number of particles tends to infinity, but the "whole mass" remains finite due to the scaling of each atom), the corresponding evolution equation on linear functionals tends to Equation (38), which is the general kinetic equation for binary interactions of pure jump type in weak form.
For a nonlinear smooth function F, the time-evolving function $F_t(\mu)=F(\mu_t(\mu))$ satisfies the equation in variational derivatives (39).
As in the case of mean-field interaction, the evolution (39) can be obtained as the limit of discrete Markov chains with waiting times depending on the current position of the particle system. Let us see what comes out of the CTRW modelling.
To this end we attach a random waiting time $\sigma_{ij}$ to each pair $(x_i,x_j)$ of particles, assuming that $\sigma_{ij}$ has a power tail whose index depends on the pair and is built from some fixed parameter. Consequently, the minimal waiting time of all pairs among a collection $\mathbf{x}=\{x_1,\ldots,x_n\}$ will have a probability law with a power tail whose index is the corresponding sum over all pairs, computed with the help of (36).
In full analogy with the case of a mean-field interaction, let us define the discrete-time Markov chain by considering the total waiting time s as an additional space variable and additionally making the usual scaling of the waiting times. Namely, we consider the Markov chain with jumps occurring at the discrete times $k\tau$, $k\in\mathbf{N}$, such that the process at a state $(\mathbf{x},s)$ at time $k\tau$ jumps to a state whose first coordinate is obtained by one binary transition (a pair chosen according to the normalized pair intensities and its offspring according to the normalized kernel) and whose second coordinate is increased by the scaled waiting time r, distributed according to the power-tail law described above. Again choosing $\tau=h$, the transition operator of the chain is given by the analogue of Formula (19).
We are again interested in the value of the first coordinate evaluated at the random time at which the total waiting time reaches t, that is, at the time inverse to the second coordinate of the chain. Thus the scaled mean-field and binary interacting system of particles with a power tail waiting time between jumps is the (non-Markovian) process (42). This process can also be called the scaled CTRW of mean-field and binary interacting particles.
Analogously to Lemma 1, one shows that the limit of the scaled transition operators is given by operator (43). Then, analogously to Proposition 1, one establishes the following result.

Proposition 2. Let the kernel P from $SX^2$ to $S\mathcal{X}$ be strictly positive, uniformly bounded and weakly continuous. Then operator (43) generates a Markov process whose first coordinate does not depend on s, is deterministic, and solves the kinetic Equation (38). The second coordinate is a stable-like subordinator generated by the time-dependent family of operators (44). Moreover, the Markov chain (in discrete time) given by (19) converges weakly to this Markov process.

Finally, one proves the analogue of Theorem 1, that is, that the process (42) converges weakly to the corresponding time-changed process, whose averages satisfy the fractional Equation (45).
Similarly, the kth order (or k-ary) interactions of jump type are specified by transition kernels from $SX^k$ to $S\mathcal{X}$. The corresponding limit of the scaled CTRW is governed by the equation that extends (45) from $k=2$ to an arbitrary k.
9. Example: Fractional Smoluchovski Coagulation Evolution
One of the famous examples of the kinetic equations for binary interaction (38) is the Smoluchovski equation for the process of mass-preserving binary coagulation. In its slightly generalized standard weak form it writes down (see, e.g., [13]) as
$$\frac{d}{dt}(f,\mu_t)=\frac12\int_X\int_X\int_X\big[f(y)-f(z_1)-f(z_2)\big]\,P(z_1,z_2;dy)\,\mu_t(dz_1)\,\mu_t(dz_2).$$
Here X is a locally compact set, $E:X\to\mathbf{R}_+$ is a continuous function (a generalized mass) and the (coagulation) transition kernel $P(z_1,z_2;dy)$ is such that the measures $P(z_1,z_2;\cdot)$ are supported on the set $\{y: E(y)=E(z_1)+E(z_2)\}$ (preservation of mass).
The corresponding fractional evolution (45) takes the analogous form, with the intensity and the order of the fractional derivative expressed in terms of the coagulation kernel.
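For the classical (non-fractional) counterpart, here is a tiny Marcus-Lushnikov-type simulation sketch of mass-preserving binary coagulation with a constant kernel; the kernel, rates and scaling below are toy choices for illustration only.

```python
import numpy as np

def coagulate(masses, t_final, h, rate=1.0, rng=np.random.default_rng(3)):
    # Marcus-Lushnikov dynamics with constant coagulation kernel: each unordered pair
    # merges at rate h * rate, so the empirical measure h * sum_i delta_{m_i}
    # approximates a solution of the Smoluchovski equation as h -> 0.
    masses = list(masses)
    t = 0.0
    while len(masses) > 1:
        n = len(masses)
        total_rate = h * rate * n * (n - 1) / 2.0
        t += rng.exponential(1.0 / total_rate)
        if t > t_final:
            break
        i, j = rng.choice(n, size=2, replace=False)
        masses[i] += masses[j]          # the pair coagulates: masses add up
        masses.pop(j)                   # total mass is preserved
    return masses

print(sorted(coagulate(np.ones(200), t_final=2.0, h=1.0 / 200))[-5:])
```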
10. Example: Fractional Boltzmann Collisions Evolution
The classical spatially trivial Boltzmann equation in $\mathbf{R}^d$ writes down in weak form as
$$\frac{d}{dt}(f,\mu_t)=\frac12\int_{S^{d-1}}\int_{\mathbf{R}^d}\int_{\mathbf{R}^d}\big[f(v')+f(w')-f(v)-f(w)\big]\,B(|v-w|,\theta)\,dn\,\mu_t(dv)\,\mu_t(dw),$$
where $v'=v-n(v-w,n)$ and $w'=w+n(v-w,n)$, $S^{d-1}$ is the unit sphere in $\mathbf{R}^d$ with dn the Lebesgue measure on it, $\theta$ is the angle between $v-w$ and n, and B is a collision kernel, which is a continuous function on $\mathbf{R}_+\times[0,\pi]$ satisfying the symmetry condition $B(|v|,\theta)=B(|v|,\pi-\theta)$.
The corresponding fractional evolution (45) takes the analogous form, with the intensity and the order of the fractional derivative expressed in terms of the collision kernel B.
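As a classical-counterpart illustration (not the fractional dynamics of this paper), here is a DSMC-style sketch of Boltzmann-type pair collisions for Maxwell molecules (collision rate independent of the relative speed), using the post-collision rule v' = v - n(v - w, n), w' = w + n(v - w, n); the rate and sample sizes are arbitrary.

```python
import numpy as np

def boltzmann_maxwell(vel, t_final, rate=1.0, rng=np.random.default_rng(4)):
    # Nanbu/DSMC-type particle scheme: random pairs collide at a constant rate
    # (Maxwell molecules); momentum and kinetic energy are conserved at every collision.
    vel = np.array(vel, dtype=float)        # shape (N, d): velocities of N particles
    N, d = vel.shape
    t = 0.0
    while True:
        t += rng.exponential(1.0 / (rate * N))
        if t > t_final:
            return vel
        i, j = rng.choice(N, size=2, replace=False)
        n = rng.normal(size=d)
        n /= np.linalg.norm(n)              # random unit vector on the sphere
        proj = np.dot(n, vel[i] - vel[j])
        vel[i] -= proj * n                  # v' = v - n (v - w, n)
        vel[j] += proj * n                  # w' = w + n (v - w, n)

v = boltzmann_maxwell(np.random.default_rng(5).normal(size=(500, 3)), t_final=5.0)
print("total kinetic energy after relaxation:", round(float((v**2).sum()), 6))
```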