Abstract
Kinetic equations describe the limiting deterministic evolution of properly scaled systems of interacting particles. A rather universal extension of the classical evolutions, aimed at taking the effects of memory into account, is the generalization obtained by replacing the standard time derivative with a fractional one. In the present paper, extending some previous notes of the authors related to models with a finite state space, we systematically develop the idea of CTRW (continuous time random walk) modelling of the Markovian evolution of interacting particle systems. This leads to a nontrivial class of fractional kinetic measure-valued evolutions, with mixed fractional-order derivatives varying with the state of the particle system, and with variational derivatives with respect to the measure variable. We rigorously justify the limiting procedure, prove the well-posedness of the new equations, and present a probabilistic formula for their solutions. As the most basic examples we present the fractional versions of the Smoluchovski coagulation and Boltzmann collision models.
Keywords: fractional kinetic equations; interacting particles; Caputo-Dzherbashyan fractional derivative; continuous time random walks (CTRW)
MSC: 34A08; 35S15; 45G15
1. Introduction
Kinetic equations are measure-valued equations describing the dynamic law of large numbers (LLN) limit of Markovian interacting particle systems when the number of particles tends to infinity. The resulting nonlinear measure-valued evolutions can be interpreted probabilistically as nonlinear Markov processes, see [1]. In the case of discrete state spaces, the set of probability measures coincides with the simplex of sequences of nonnegative numbers $x = (x_1, \dots, x_n)$ such that $\sum_j x_j = 1$, where n can be any natural number, or even $\infty$. The corresponding kinetic equations (in the case of stationary transitions) are ordinary differential equations (ODEs) of the form
$$\dot{x}_t = x_t \, Q(x_t), \qquad (1)$$
where Q is an infinitesimally stochastic or Kolmogorov Q-matrix (that is, it has non-negative non-diagonal elements and the elements on each row sum up to zero), depending Lipschitz continuously on x. Let $x_t(x)$ denote the solution of Equation (1) with the initial condition x.
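As a toy illustration of Equation (1), one can integrate the kinetic ODE numerically for a hypothetical two-state family $Q(x)$ (chosen purely for the example) and check that the simplex is preserved, since every row of $Q(x)$ sums to zero:

```python
import numpy as np

# Hypothetical Lipschitz Q-matrix function (not from the paper): off-diagonal
# entries are non-negative and every row sums to zero, which is exactly what
# keeps the evolution on the simplex.
def Q(x):
    q01 = 1.0 + x[1]   # rate of the transition 0 -> 1, depending on x
    q10 = 0.5 + x[0]   # rate of the transition 1 -> 0
    return np.array([[-q01, q01],
                     [q10, -q10]])

def solve_kinetic(x0, t_end, dt=1e-3):
    """Explicit Euler integration of the kinetic ODE dx/dt = x Q(x)."""
    x = np.array(x0, dtype=float)
    for _ in range(int(t_end / dt)):
        x = x + dt * (x @ Q(x))
    return x

x = solve_kinetic([0.9, 0.1], t_end=5.0)
print(x, x.sum())   # components remain non-negative and sum to 1
```

For this particular $Q(x)$ the dynamics reduce to $\dot{x}_0 = 0.5 - 1.5\,x_0$, so the trajectory relaxes to the fixed point $(1/3, 2/3)$.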
It is seen that evolution (1) is a direct nonlinear extension of the evolution of the probability distributions of a usual Markov chain, which is given by the equation $\dot{x}_t = x_t Q$ with a constant Q-matrix Q.
Remark 1.
One can show (see [1]) that any ordinary differential equation (with a Lipschitz r.h.s.) preserving the simplex has the form (1) with some function $Q(x)$.
Instead of looking at the evolution of the distributions, one can look alternatively at the evolution of functions of these distributions, $F_t(x) = F(x_t(x))$, which clearly satisfy the equation
$$\frac{\partial F_t}{\partial t}(x) = \sum_{i,j} x_i Q_{ij}(x) \frac{\partial F_t}{\partial x_j}(x). \qquad (2)$$
In the language of the theory of differential equations, one says that the ODEs (1) are the characteristics of the linear first-order partial differential Equation (2).
In the case of a usual Markov chain (with a constant Q) it is seen that the set of linear functions F is preserved by Equation (2). In fact, if $F(x) = (g, x)$ with some vector $g$, then $F(x_t) = (g_t, x)$ with $g_t$ satisfying the equation $\dot{g}_t = Q g_t$, which is dual to the equation $\dot{x}_t = x_t Q$. In the nonlinear case, this conservation of linearity does not hold.
More generally (see [1]), for a system of mean-field interacting particles given by a family of operators $A[\mu]$, which are generators of Markov processes in some Euclidean space, depending on probability measures $\mu$ on this space as parameters and having a common core, the natural scaling limit of such a system, as the number of particles tends to infinity (dynamic LLN), is described by the kinetic equations, which are most conveniently written in the weak form
$$\frac{d}{dt}(f, \mu_t) = (A[\mu_t] f, \mu_t), \qquad (3)$$
where f is an arbitrary function from the common core of the operators $A[\mu]$.
Remark 2.
The corresponding generalization of Equation (2) is the following differential equation in variational derivatives (see [1]):
$$\frac{\partial F_t}{\partial t}(\mu) = \left( A[\mu] \frac{\delta F_t}{\delta \mu(\cdot)}, \mu \right). \qquad (4)$$
The most studied situation is the case of $A[\mu]$ being diffusion operators, in which case the nonlinear evolution given by (3) is referred to as a nonlinear diffusion or McKean-Vlasov diffusion. Other important particular cases include nonlinear Lévy processes (when the $A[\mu]$ generate Lévy processes), nonlinear stable processes (when the $A[\mu]$ generate stable or stable-like processes) and various cases with jump-type processes generated by the $A[\mu]$, which include the famous Boltzmann and Smoluchovski equations.
All these equations are derived as the natural scaling limits of some random walks on the space of the configurations of many-particle systems.
Standard diffusions are known to be obtained as the natural scaling limits of simple random walks. For instance, in this way one can obtain the simplest diffusion processes with variable drift, governed by the diffusion equations
$$\frac{\partial f}{\partial t}(t,x) = (b(x), \nabla f(t,x)) + \frac{1}{2} \Delta f(t,x), \qquad (5)$$
where $\Delta$ and $\nabla$ are operators acting on the x variable.
When the standard random walks are extended to more general CTRWs (continuous time random walks), characterised by the property that the random times between jumps are not exponential, but have tail probabilities decreasing by a power law, their limits turn into non-Markovian processes described by fractional evolutions. The fractional equations were introduced into physics precisely via such limits, see [2]. For instance, instead of Equation (5), the simplest equation that one can obtain via a natural scaling is the equation
$$D^{\beta}_t f(t,x) = (b(x), \nabla f(t,x)) + \frac{1}{2} \Delta f(t,x), \qquad (6)$$
where $D^{\beta}_t$ is the Caputo-Dzherbashyan fractional derivative of some order $\beta \in (0,1)$ (see [3,4]).
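The scaling mechanism behind the fractional derivative can be glimpsed numerically: with power-tail waiting times of index $\beta$, the number of jumps completed by time t grows like $t^{\beta}$ rather than linearly in t. A minimal sketch, with Pareto waiting times chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
beta = 0.5   # tail index of the waiting times

# CTRW waiting times with P(W > t) = t^{-beta} for t >= 1, sampled by the
# inverse transform W = U^{-1/beta}.  The number of jumps completed by time t
# then grows like t^beta, which is the source of the fractional derivative.
n_paths, n_jumps = 1000, 3000
waits = rng.random((n_paths, n_jumps)) ** (-1.0 / beta)
arrivals = np.cumsum(waits, axis=1)

def mean_jumps_by(t):
    """Average number of jumps completed by time t over all paths."""
    return (arrivals <= t).sum(axis=1).mean()

# for beta = 1/2, multiplying t by 100 multiplies the jump count by about 10
print(mean_jumps_by(10.0), mean_jumps_by(1000.0))
```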
Now the question arises: what is the natural fractional version of the kinetic evolutions (3) and (4)? One way of extension would be to write the Caputo-Dzherbashyan fractional derivative of some fixed order instead of the usual derivative in (3), as was done, e.g., in [5]. However, if one follows systematically the idea of scaling from the CTRW approximation, and takes into account the natural possibility of different waiting times for jumps from different states (as for the usual Markovian approximation), one obtains an equation of a more complicated type, with fractional derivatives of position-dependent order. In [6] this derivation was performed for the case of a discrete state space, that is, for a nonlinear Markov chain described by Equation (1), leading to the fractional generalization of Equation (2) of the form
where $\beta$ is the vector describing the power laws of the waiting times in various states and $D^{\beta(x)}_{t-}$ is the right Caputo-Dzherbashyan fractional derivative of order $\beta(x)$, depending on the position x and acting on the time variable t.
The aim of the present paper is to derive the corresponding limiting equation in the general case, which, instead of (4) (and generalising (7)), is written as the equation
where $\beta$ is a function on the state space and
We also establish the well-posedness of this equation and provide a probabilistic formula for the solutions. We perform the derivation only in the case of integral operators $A[\mu]$, that is, probabilistically, for underlying Markov processes of pure jump type.
The content of the paper is as follows. In the next section, we recall some basic notations and facts from the theory of measure-valued limits of interacting particle systems. In Section 3 we obtain the dynamic LLN for interacting multi-agent systems for the case of non-exponential waiting times with the power tail distributions. In Section 4 we formulate our main results for the new class of fractional kinetic measure-valued evolutions, with the mixed fractional-order derivatives and with variational derivatives with respect to the measure variable. In Section 5, Section 6 and Section 7, we present proofs of our main results, formulated in the previous section. Namely, we justify rigorously the limiting procedure, prove the well-posedness of the new equations and present a probabilistic formula for their solutions.
In Section 8 we extend the CTRW modeling of interacting particles to the case of binary or even more general k-ary interactions.
In the following two sections, Section 9 and Section 10, we present examples of the kinetic equations for binary interactions: the fractional versions of the Smoluchovski coagulation and Boltzmann collision models.
In Appendix A, Appendix B and Appendix C we present auxiliary results we need, namely, the standard functional limit theorem for the random-walk-approximation; some results on time-nonhomogeneous stable-like subordinators; and a standard piece of theory about Dynkin’s martingales in a way tailored to our purposes.
The bold letters $\mathbf{E}$ and $\mathbf{P}$ will be used to denote expectation and probability.
All general information on fractional calculus that we are using can be found in the books [7,8,9,10].
2. Preliminaries: General Kinetic Equations for Birth-and-Death Processes with Migration
Let us recall some basic notations and facts from the standard measure-valued-limits of interacting processes.
Let X be a locally compact metric space. For simplicity and definiteness, one can take $X = \mathbf{R}^d$ or $X = \mathbf{R}_+$ (the set of positive numbers), but this is not important. Denoting by $X^0$ a one-point space and by $X^j$ the powers $X \times \cdots \times X$ (j times), we denote by $\mathcal{X} = \cup_{j=0}^{\infty} X^j$ their disjoint union. In applications, X specifies the state space of one particle and $\mathcal{X}$ stands for the state space of a random number of similar particles. We denote by $C_{sym}(\mathcal{X})$ the Banach space of symmetric (invariant under all permutations of arguments) bounded continuous functions on $\mathcal{X}$ and by $C_{sym}(X^k)$ the corresponding spaces of functions on the finite powers $X^k$. The space of symmetric (positive finite Borel) measures on $\mathcal{X}$ is denoted by $\mathcal{M}_{sym}(\mathcal{X})$. The elements of $\mathcal{M}_{sym}(\mathcal{X})$ and $C_{sym}(\mathcal{X})$ are interpreted as the (mixed) states and observables for a Markov process on $\mathcal{X}$. We denote the elements of $\mathcal{X}$ by bold letters, say $\mathbf{x}$, $\mathbf{y}$.
Reducing the set of observables to $C_{sym}(\mathcal{X})$ effectively means that our state space is not $\mathcal{X}$ (or $X^k$) but rather the quotient space $S\mathcal{X}$ (or $SX^k$, respectively) obtained by factorization with respect to all permutations, which allows the identifications $C_{sym}(\mathcal{X}) = C(S\mathcal{X})$ and $C_{sym}(X^k) = C(SX^k)$.
For a function f on X we shall denote by $f^{\oplus}$ the function on $\mathcal{X}$ defined as $f^{\oplus}(x_1, \dots, x_n) = f(x_1) + \cdots + f(x_n)$ for any $(x_1, \dots, x_n) \in \mathcal{X}$.
A key role in the theory of measure-valued limits of interacting particle systems is played by the scaled inclusion of $S\mathcal{X}$ to the set of measures $\mathcal{M}(X)$ given by
$$\mathbf{x} = (x_1, \dots, x_N) \mapsto h\delta_{\mathbf{x}} = h(\delta_{x_1} + \cdots + \delta_{x_N}), \qquad (9)$$
which defines a bijection between $S\mathcal{X}$ and the set of finite sums of h-scaled Dirac δ-measures, where h is a small positive parameter. If a process under consideration preserves the number of particles N, then one usually chooses $h = 1/N$.
Remark 3.
Let us stress that we are using here the non-conventional notation $\delta_{\mathbf{x}} = \delta_{x_1} + \cdots + \delta_{x_N}$, which is convenient for our purposes.
Clearly each $F \in C(S\mathcal{X})$ is defined by its components (restrictions) $F^k$ on the spaces $SX^k$, so that for $\mathbf{x} = (x_1, \dots, x_k)$, say, we can write $F(\mathbf{x}) = F^k(x_1, \dots, x_k)$. Similar notations for the components of measures from $\mathcal{M}_{sym}(\mathcal{X})$ will be used. In particular, the pairing between $C_{sym}(\mathcal{X})$ and $\mathcal{M}_{sym}(\mathcal{X})$ can be written as
A mean-field dependent jump-type process of particle transformations (with a possible change in the number of particles) or a mean-field dependent birth-and-death process with migration, can be specified by a continuous transition kernel
from X to depending on a measure as a parameter.
Remark 4.
For brevity we write (10) in a unified way including . More precisely, the transitions with describe the death of particles and are specified by some rates .
We restrict attention to finite in order not to bother with some irrelevant technicalities arising otherwise.
To exclude fictitious jumps one usually assumes that for all x, which we shall do as well.
By the intensity of the interaction at x we mean the total mass
Supposing that any particle, randomly chosen from a given set of n particles, can be transformed according to P leads to the following generator of the process on
By the standard probabilistic interpretation of jump processes (see, e.g., [1,11]), the probabilistic description of the evolution of a pure jump Markov process on specified by the generator G (if this process is well defined) is as follows. Starting from a state , one attaches to each particle a random -exponential waiting time (an exponential clock). That is, for all . Then the minimum of all these times is again an exponential random time, namely a -exponential waiting time with
When the minimal clock rings, the particle at that makes a transition is chosen according to the probability law , and then it makes an instantaneous transition to according to the distribution . Then this procedure repeats, starting from the new state .
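This jump-process description translates directly into a Gillespie-type simulation: attach exponential clocks, take their minimum, choose the particle proportionally to its rate, and perform the jump. The sketch below replaces the abstract kernel P by a hypothetical two-state mean-field rate function, chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical mean-field jump rates on the two-state space {0, 1}: the rate
# at which a particle in state i flips depends on the empirical measure mu.
# This stands in for the abstract intensity of the kernel P in the text.
def rate(i, mu):
    return 1.0 + mu[1 - i]

def simulate(n=200, t_end=3.0):
    state = np.zeros(n, dtype=int)   # all particles start in state 0
    t = 0.0
    while True:
        mu = np.bincount(state, minlength=2) / n       # empirical measure
        rates = np.array([rate(s, mu) for s in state])
        total = rates.sum()
        t += rng.exponential(1.0 / total)   # minimum of the exponential clocks
        if t > t_end:
            return mu
        k = rng.choice(n, p=rates / total)  # which particle makes the transition
        state[k] = 1 - state[k]             # the instantaneous jump

mu_final = simulate()
print(mu_final)
```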
By the transformation (9), we transfer the process generated by G on to the process on with the generator
Then it is seen that, as $h \to 0$ (and for smooth F), these generators converge to the operator
This makes it plausible to conclude (for details, see [1]) that, as $h \to 0$ (and under mild technical assumptions), the process generated by (12) converges weakly to a deterministic process on measures generated by (13), so that this process is given by the solution of a kinetic equation of type (3), that is,
Denoting by the solution to Equation (14) with the initial condition we can rewrite evolution (14) equivalently in terms of the functions as the equation of type (4):
Alternatively, and more relevantly for the extension to CTRW, we can obtain the same limit from a discrete-time Markov chain. Namely, let us define a Markov chain on with the transition operator such that, in a state , a jump from occurs with the probability and is distributed according to the distribution , while with the complementary probability the process remains in . In terms of the measures, the jumps are described by the transitions from to .
We choose the time step so as to link the scaling in time with the scaling in the number of particles. Let us see what happens in the limit . Namely, we are interested in the weak limit of the chains with transitions , where denotes the integer part of the number , as . It is well known (see, e.g., Theorem 19.28 of [11] or Theorem 8.1.1 of [12]) that if such a chain converges to a Feller process, then the generator of this limiting process can be obtained as the limit
One sees directly that this limit coincides with (13).
3. CTRW Modeling of Interacting Particle Systems
Our objective is to obtain the dynamic LLN for interacting multi-agent systems for the case of non-exponential waiting times with the power tail distributions. As one can expect, this LLN will not be deterministic anymore.
We shall assume that the waiting times between jumps are not exponential, but have a power-law decay. Recall that a positive random variable T with a probability law on $\mathbf{R}_+$ is said to have a power tail of index $\beta$ if
$$\mathbf{P}(T > t) \sim \frac{c}{t^{\beta}}$$
for large t, that is, the ratio of the l.h.s. and the r.h.s. tends to 1, as $t \to \infty$. Here c is a positive constant.
As with exponential tails, power tails are invariant under taking minima. Namely, if $T_i$, $i = 1, \dots, n$, are independent variables with power tails of indices $\beta_i$ and normalising constants $c_i$, then $\min(T_1, \dots, T_n)$ is clearly a variable with a power tail of index $\beta_1 + \cdots + \beta_n$ and normalising constant $c_1 \cdots c_n$.
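This stability under minima is easy to confirm numerically. For Pareto variables with $\mathbf{P}(T_i > t) = t^{-\beta_i}$ for $t \ge 1$ (normalising constants equal to 1), the tail of the minimum is exactly $t^{-(\beta_1 + \beta_2)}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
beta1, beta2 = 0.6, 0.9

# Pareto samples with tail P(T > t) = t^{-beta} for t >= 1,
# via inverse-transform sampling T = U^{-1/beta}.
T1 = rng.random(n) ** (-1.0 / beta1)
T2 = rng.random(n) ** (-1.0 / beta2)
M = np.minimum(T1, T2)

t = 10.0
empirical = (M > t).mean()            # empirical tail of the minimum
predicted = t ** (-(beta1 + beta2))   # power tail with the summed index
print(empirical, predicted)
```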
In full analogy with the case of exponential times of the discussion above, let us assume that the waiting time of the agent at to decay has the power tail with the index with some fixed . For simplicity, assume that the normalising constant equals 1. Consequently, the minimal waiting time of all n points in a collection will have the probability law with a tail of the index , with given by (11):
Our process with power tail waiting times can thus be described probabilistically as follows. Starting from any time and current state , we wait a random waiting time , which has a power tail with the index . Then everything goes as in the above case of exponential waiting times. Namely, when the clock rings, the particle at that makes a transition is chosen according to the probability law , and then it makes an instantaneous transition to according to the distribution . Then this procedure repeats, starting from the new state .
In order to derive the LLN in this case, let us lift this non-Markovian evolution on the space of subsets or the corresponding measures to the discrete time Markov chain on by considering the total waiting time s as an additional space variable and additionally making the usual scaling (by ) of the waiting time for the jumps of CTRW (see Proposition A1 from Appendix A). Thus we consider the Markov chain on with the jumps occurring at discrete times , , such that the process at a state at time jumps to , or equivalently a state jumps to the state , where and are chosen as above (that is, according to the law and according to the law ) and r is distributed by .
As above, we link the scaling of measures with the scaling of time by choosing , which we set from now on. Then the transition operator of the chain is given by
for .
What we are interested in is the value of the first coordinate evaluated at the random time at which the total waiting time reaches t, that is, at the time
so that this time is the inverse process to the total waiting time. Thus the scaled mean-field interacting system of particles with a power tail waiting time between jumps is the (non-Markovian) process
This process can also be called the scaled CTRW of mean-field interacting particles (with birth-and-death and migration).
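The inverse-process construction can be sketched numerically: one accumulates the scaled waiting times and, for a given physical time t, finds the number of jumps completed by t. The tail index and time-scaling parameter below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
beta, tau = 0.7, 0.01   # illustrative tail index and time-scaling parameter

# Scaled total waiting times S_k = tau^{1/beta} * (W_1 + ... + W_k), with
# Pareto waiting times W = U^{-1/beta}, i.e. P(W > t) = t^{-beta} for t >= 1.
W = rng.random(10**5) ** (-1.0 / beta)
S = np.cumsum(tau ** (1.0 / beta) * W)

def inverse_process(t):
    """Number of jumps completed by physical time t (the inverse of S)."""
    return int(np.searchsorted(S, t))

print(inverse_process(1.0), inverse_process(10.0))
```

The CTRW evaluated at physical time t is then the Markov-chain state after `inverse_process(t)` steps.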
4. Main Results
Let us first see what happens with the process in the limit . Namely, we are interested in the weak limit of the chains with transitions , where denotes the integer part of the number , as .
As above, if such a chain converges to a Feller process, then the generator of this limiting process can be obtained as the limit
Lemma 1.
Assume that has a bounded derivative in t and a bounded variational derivative in μ. If converges weakly to some measure, which we shall also denote by μ with some abuse of notation, as , then
where denotes the usual pairing of the function P with the measure μ (but taking into account the additional dependence of P on μ):
Proposition 1.
Suppose is a weakly continuous transition kernel from to , that is, the measures depend weakly continuously on . Moreover, let the intensity be everywhere strictly positive and uniformly bounded. Finally, the transition kernel is assumed to be subcritical, meaning that
Then the Markov process in such that the first coordinate does not depend on s (it could well be denoted by the shorthand ), is deterministic, and solves the kinetic equation of type (14), namely the equation
and the second coordinate is a time nonhomogeneous stable-like subordinator generated by the time dependent family of generators
(see Appendix B for the proper definition of this process), is well defined and is generated by operator (22).
Moreover, the discrete time Markov chains given by (19) converge weakly to the Markov process , so that, in particular, for any continuous bounded function F,
Remark 5.
The assumption of boundedness of the intensity is made for simplicity and can be substantially weakened. Effectively, one needs here the well-posedness of the kinetic Equation (24), which is established under rather general assumptions, see [1].
Finally we can formulate our main result.
Theorem 1.
Under the assumptions of Proposition 1, the marginal distributions of the scaled CTRW of mean-field interacting particles (20) converge to the marginal distributions of the process
where is the random time when the stable-like process generated by (25) and started at s reaches the time t,
that is, for a bounded continuous function , it holds that
Moreover, for any smooth function (with continuous bounded variational derivative), the evolution of averages satisfies the mixed fractional differential equation
with the terminal condition , where the right fractional derivative acting on the variable of is defined as
5. Proof of Lemma 1
We have
where the error term is
with
6. Proof of Proposition 1
By Theorem 6.1 of [1], under the assumptions of the Proposition, the kinetic Equation (24) is well posed in . Consequently, in view of the discussion of stable-like subordinators from Appendix B, the process in , as described in Proposition 1, is well defined.
Let us show that, for smooth functions such that , the generator of this process is indeed given by formula (22). To simplify formulas we shall sometimes consider to be defined for all s so that for all . By (A10), the transition operators of this process are given by the formula
where, by (A11) and (25),
where for brevity we wrote for .
We see that the second term turns to the second term of (22) at . Noting that for and using (A13), we get for the first term that
with given by (25), which by changing variables rewrites as
and which in turn for becomes equal to , that is, to the first term of (22). Thus smooth functions vanishing at and do belong to the domain of the generator of the process .
Differentiating Formula (32) with respect to and s we can conclude that smooth functions are invariant under the semigroup of this process. In fact, differentiability with respect to s follows from the explicit Formula (33) for (and bounds (A12)), and differentiability with respect to follows, on the one hand, from the explicit formula for , and, on the other hand, from the smooth dependence of the solutions to kinetic equations on initial data . This smooth dependence is a known fact from the theory of kinetic equations (Theorem 8.2 of [1]), which in fact has a rather straightforward proof under the assumption of bounded intensities. Consequently, smooth functions , vanishing at and , form an invariant core for the semigroup of the process .
Consequently, from Lemma 1 and the general result on the convergence of semigroups (see e.g., Theorem 19.28 of [11]) we can conclude that the Markov chains converge weakly to the Markov process thus completing the proof of the Proposition.
7. Proof of Theorem 1
By the density arguments, to prove (28), it is sufficient to show that
as , for smooth functions F (that have bounded continuous variational derivatives).
We have
with
To estimate I we write
where (depending on ) is the distribution of . Choosing K large enough we can make the second integral arbitrarily small uniformly in . And then, by Proposition 1, we can make the first integral arbitrarily small by choosing small enough (and uniformly in t from compact sets).
It remains to estimate II. Integrating by parts we get the following:
and therefore
By Proposition 1 (and because the distribution of the random variable is absolutely continuous), as . Therefore by the dominated convergence, as .
8. Extension to Binary and -ary Interaction
Here we extend the CTRW modeling of interacting particles to the case of binary or even more general k-ary interactions, stressing the main new points and omitting details. First we recall the basic scheme of general Markovian binary interactions from [1].
A mean-field dependent jump-type process of binary interaction of particles (with a possible decrease in the number of particles) can be specified by a continuous transition kernel
from to depending on a measure as a parameter, with the intensity
We again assume that always.
Remark 6.
The possibility that more than two particles can result after the decay of two particles creates some technical difficulties for the analysis of the kinetic equation that we choose to avoid here.
For a finite subset I of a finite set J, we denote by |I| the number of elements in I, by $\bar{I}$ its complement and by $x_I$ the collection of variables $x_i$ with $i \in I$.
The corresponding scaled Markov process on of binary interaction is defined via its generator
(note the additional multiplier h, as compared with (12), needed for the proper scaling of binary interactions). In terms of the measures from the process can be equivalently described by the generator
which acts on the space of continuous functions F on .
Applying the obvious equation
which holds for any and , one observes that the operator can be written in the form
On the linear functions
this operator acts as
It follows that if $h \to 0$ and the scaled empirical measure tends to some finite measure (in other words, the number of particles tends to infinity, but the “whole mass” remains finite due to the scaling of each atom), the corresponding evolution equation on linear functionals tends to the equation
which is the general kinetic equation for binary interactions of pure jump type in weak form.
For a nonlinear smooth function the time evolving function satisfies the equation in variational derivatives
As in the case of mean-field interaction, the evolution (39) can be obtained as the limit of discrete Markov chains with waiting times depending on the current position of the particle system.
Let us see what comes out of the CTRW modelling.
To this end we attach a random waiting time to each pair of particles, assuming that it has a power tail with the index with some fixed . Consequently, the minimal waiting time over all pairs of a collection will have a probability law with a tail of the index , with
where (36) was used.
In full analogy with the case of mean-field interaction, let us define the discrete-time Markov chain on by considering the total waiting time s as an additional space variable and additionally making the usual scaling of the waiting times. Namely, we consider the Markov chain on with the jumps occurring at discrete times , , such that the process at a state at time jumps to , or equivalently a state jumps to the state , where are chosen according to the law and according to the law , and r is distributed by .
Again choosing , the transition operator of the chain is given by
for .
We are again interested in the value of the first coordinate evaluated at the random time such that the total waiting time reaches t, that is, at the time
so that is the inverse process to . Thus the scaled mean-field and binary interacting system of particles with a power tail waiting time between jumps is the (non-Markovian) process
This process can also be called the scaled CTRW of mean-field and binary interacting particles.
Analogously to Lemma 1 one shows that
where
Then, analogously to Proposition 1 one establishes the following result.
Proposition 2.
Let the kernel P from to be strictly positive, uniformly bounded and weakly continuous. Then operator (43) generates a Markov process in such that the first coordinate does not depend on s, is deterministic, and solves the kinetic Equation (38). The second coordinate is a stable-like subordinator generated by the time dependent family of operators
Moreover, the Markov chain (in discrete time) given by (19) converges weakly to the Markov process .
Finally one proves the analog of Theorem 1, that is, that process (42) converges weakly to the process , whose averages satisfy the fractional equation
for , where
Similarly, the kth order (or k-ary) interactions of jump type are given by transition kernels
from to . The corresponding limit of the scaled CTRW is governed by the following equation that extends (45) from to an arbitrary k:
9. Example: Fractional Smoluchovski Coagulation Evolution
One of the famous examples of the kinetic equations for binary interactions (38) is the Smoluchovski equation for the process of mass-preserving binary coagulation. In its slightly generalized standard weak form it is written (see, e.g., [13]) as
Here X is a locally compact set, is a continuous function (a generalized mass) and the (coagulation) transition kernel is such that the measures are supported on the set of configurations with the same total mass (preservation of mass).
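A minimal stochastic counterpart of this evolution is the Marcus-Lushnikov particle system. The sketch below uses the constant coagulation kernel K = 1 (an illustrative special case of the general kernel above) and checks conservation of mass:

```python
import numpy as np

rng = np.random.default_rng(3)

def coagulate(n=500, t_end=1.0):
    """Marcus-Lushnikov Monte Carlo with the constant kernel K = 1."""
    masses = [1.0] * n   # n particles of unit mass
    t = 0.0
    while len(masses) > 1:
        m = len(masses)
        # with K = 1 the total coagulation rate is the number of pairs,
        # scaled by 1/n as required for the kinetic (LLN) limit
        total_rate = m * (m - 1) / (2 * n)
        t += rng.exponential(1.0 / total_rate)
        if t > t_end:
            break
        i, j = rng.choice(m, size=2, replace=False)
        masses[min(i, j)] = masses[i] + masses[j]   # the pair coagulates
        masses.pop(max(i, j))
    return masses

masses = coagulate()
print(len(masses), sum(masses))   # the total mass is conserved
```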
10. Example: Fractional Boltzmann Collisions Evolution
The classical spatially trivial Boltzmann equation in is written as
where , is a unit sphere in with the Lebesgue measure on it,
is the angle between and n, and B is a collision kernel, which is a continuous function on satisfying the symmetry condition .
Author Contributions
Conceptualization, V.N.K.; methodology, V.N.K.; formal analysis, V.N.K. and M.T.; investigation, V.N.K. and M.T.; writing—original draft preparation, V.N.K. and M.T.; writing—review and editing, V.N.K. and M.T. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported by the Ministry of Science and Higher Education of the Russian Federation: agreement No. 075-02-2021-1396, 31 May 2021.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
Appendix A. CTRW Approximation of Stable Processes
As an auxiliary result, we need the standard functional limit theorem for the random-walk-approximation of stable laws, see, e.g., [12,14,15] and references therein for various proofs.
Let a positive random variable T with distribution belong to the domain of attraction of a -stable law, , in the sense that
(the sign ∼ means here that the ratio tends to 1, as ). Let be a sequence of i.i.d. random variables distributed like T and
be a scaled random walk based on , , so that can be presented as a scaled Markov chain , where is the integer part of and is a Markov transition operator:
Proposition A1.
Let be a β-stable Lévy subordinator, that is a Lévy process in generated by the stable generator
(which up to a multiplier represents the fractional derivative ). Then
for all smooth functions and in distribution, as .
Appendix B. Time-Nonhomogeneous Stable-like Processes
Stable Lévy subordinators are spatially and temporally homogeneous Markov processes generated by operators (A2). In our story, the key role belongs to the time-nonhomogeneous extension of these processes, which we shall refer to as time-nonhomogeneous stable-like subordinators. These are the spatially homogeneous Markov processes on generated by a family of operators of the type
with some continuous functions , on such that
Remark A1.
More studied in the literature are the time-homogeneous, but spatially nonhomogeneous, processes generated by operators similar to (A4), but in which the functions and of the time variable t are replaced by functions of the spatial variable x. These processes are usually referred to as stable-like processes.
Since time-dependent generators lack many standard properties of the usual generators of linear semigroups, let us stress for clarity what we mean by saying that the process is generated by the family (A4): its Markov transition operators
are well defined as operators in the space (of continuous functions on vanishing at zero and at infinity) for ; they form a backward propagator, that is, they satisfy the chain rule for ; the subspace of consisting of continuously differentiable functions with bounded derivatives is invariant under all ; and finally, for any , the function satisfies the pseudo-differential equation in backward time
For any continuous functions satisfying (A5), there exists a unique Markov process satisfying all the required conditions; this follows from a general result on the time-nonhomogeneous extension of Lévy processes, see Proposition 7.1 of [1]. For our purposes, it is handy to also have a concrete representation of the propagator in terms of transition probabilities. Namely, Equation (A6) for from (A4) can be written in the pseudo-differential form as
where the symbol equals
Standard calculations show (see, e.g., Proposition 9.3.2 of [16]) that this is in fact a power function, that is
where denotes the sign of p and is the Euler Gamma function. Solving Equation (A7) via the Fourier transform method we find that its solution with the terminal function is given by the formula
where the Green function G that represents the transition probability density for the process is given by the following integral:
From this representation it follows (see, e.g., Proposition 9.3.6 from [16]) that is infinitely differentiable in t and x for and and that for large x, G has the following asymptotic behavior:
with some constants , and , .
It is also seen from (A11) that tends to the Dirac function , as , and that
Appendix C. Stationary Problems and Dynkin’s Martingales
Let us recall here a very standard piece of the theory of Dynkin's martingales in a form tailored to our purposes. Let (where denotes the initial position) be a (time-homogeneous) Markov process in some metric space generated by an operator L, so that the operators of the semigroup satisfy the equation for some space of continuous functions f. Then for any function f from this class the process
is a martingale, called Dynkin’s martingale.
If is a stopping time with a uniformly bounded expectation, one can apply Doob’s optional sampling theorem to conclude that
In particular, if , it follows that
Suppose that L can be written in the form
with being a Lévy–Khinchin-type operator acting on the variable x (with coefficients that may depend on x) and some Lévy kernel (that is, ) such that also . For any T let us define a modification of the process that is stopped once reaches T. Clearly , so the modified process has the generator
Due to the assumptions on , the stopping time at which reaches T has a uniformly bounded expectation. Hence we can apply Dynkin’s martingale to conclude that if and with a given function , then
Thus Equation (A15) gives the probabilistic representation of the solution to the boundary-value problem: and . As a consequence of this formula one also obtains the uniqueness of the solution to this boundary-value problem.
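To illustrate this probabilistic representation in the simplest classical setting (a hedged sketch, not the operator of the paper): for Brownian motion with generator $L=\frac12\frac{d^2}{dx^2}$ on $(0,1)$, the function $u(x)=x(1-x)$ solves $Lu=-1$ with zero boundary values, so Dynkin’s formula with the exit time $\tau$ gives $E_x\tau=x(1-x)$. A random-walk simulation confirms this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random walk approximation of Brownian motion with generator (1/2) d^2/dx^2:
# spatial steps of size h with time increments h^2 match the diffusion scaling.
h, x0, n_paths = 0.02, 0.3, 20000
x = np.full(n_paths, x0)
steps = np.zeros(n_paths)
active = np.ones(n_paths, dtype=bool)
while active.any():
    moves = rng.integers(0, 2, active.sum()) * 2 - 1   # +-1 with prob 1/2
    x[active] += h * moves
    steps[active] += 1
    active &= (x > 0) & (x < 1)                        # stop on exiting (0,1)
mean_exit_time = (h ** 2) * steps.mean()   # should approximate x0*(1-x0) = 0.21
```

The simulated mean exit time matches $x_0(1-x_0)=0.21$ up to Monte Carlo error, in accordance with the uniqueness statement above.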
References
- Kolokoltsov, V.N. Nonlinear Markov Processes and Kinetic Equations; Cambridge Tracts in Mathematics; Cambridge University Press: Cambridge, UK; New York, NY, USA, 2010; Volume 182. [Google Scholar]
- Zaslavsky, G.M. Fractional kinetic equation for Hamiltonian chaos. Phys. D Nonlinear Phenom. 1994, 76, 110–122. [Google Scholar] [CrossRef]
- Caputo, M. Linear models of dissipation whose Q is almost frequency independent—II. Geophys. J. Intern. 1967, 13, 529–539. [Google Scholar] [CrossRef]
- Dzherbashian, M.M.; Nersesian, A.B. Fractional derivatives and the Cauchy problem for differential equations of fractional order. Fract. Calc. Appl. Anal. 2020, 23, 1810–1836. [Google Scholar] [CrossRef]
- Kochubei, A.N.; Kondratiev, Y. Fractional kinetic hierarchies and intermittency. Kinet. Relat. Models 2017, 10, 725–740. [Google Scholar] [CrossRef]
- Kolokoltsov, V.N.; Malafeyev, O.A. Many Agent Games in Socio-Economic Systems: Corruption, Inspection, Coalition Building, Network Growth, Security; Springer Series in Operations Research and Financial Engineering; Springer Nature: Cham, Switzerland, 2019. [Google Scholar]
- Baleanu, D.; Diethelm, K.; Scalas, E.; Trujillo, J.J. Fractional Calculus: Models and Numerical Methods, 2nd ed.; Series on Complexity, Nonlinearity and Chaos; World Scientific Publishing: Singapore, 2017; Volume 5. [Google Scholar]
- Kiryakova, V. Generalized Fractional Calculus and Applications; Pitman Research Notes in Mathematics Series; Longman Scientific: Harlow, UK; John Wiley and Sons: New York, NY, USA, 1994; Volume 301. [Google Scholar]
- Podlubny, I. Fractional Differential Equations, An Introduction to Fractional Derivatives, Fractional Differential Equations, to Methods of Their Solution and Some of Their Applications; Mathematics in Science and Engineering; Academic Press, Inc.: San Diego, CA, USA, 1999; Volume 198. [Google Scholar]
- Diethelm, K. The Analysis of Fractional Differential Equations; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
- Kallenberg, O. Foundations of Modern Probability, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2002. [Google Scholar]
- Kolokoltsov, V.N. Markov Processes, Semigroups and Generators; DeGruyter Studies in Mathematics; DeGruyter: Berlin, Germany, 2011; Volume 38. [Google Scholar]
- Norris, J. Cluster Coagulation. Comm. Math. Phys. 2000, 209, 407–435. [Google Scholar] [CrossRef]
- Gnedenko, B.V.; Korolev, V.Y. Random Summation: Limit Theorems and Applications; CRC Press: Boca Raton, FL, USA, 1996. [Google Scholar]
- Meerschaert, M.M.; Scheffler, H.-P. Limit Distributions for Sums of Independent Random Vectors; Wiley Series in Probability and Statistics; John Wiley and Sons: Hoboken, NJ, USA, 2001. [Google Scholar]
- Kolokoltsov, V.N. Differential Equations on Measures and Functional Spaces; Birkhäuser Advanced Texts Basler Lehrbücher; Birkhäuser: Cham, Switzerland, 2019. [Google Scholar]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).