Article

Skellam Type Processes of Order k and Beyond

Neha Gupta 1, Arun Kumar 1 and Nikolai Leonenko 2
1 Department of Mathematics, Indian Institute of Technology Ropar, Rupnagar, Punjab 140001, India
2 Cardiff School of Mathematics, Cardiff University, Senghennydd Road, Cardiff CF24 4AG, UK
* Author to whom correspondence should be addressed.
Entropy 2020, 22(11), 1193; https://doi.org/10.3390/e22111193
Submission received: 31 August 2020 / Revised: 17 October 2020 / Accepted: 19 October 2020 / Published: 22 October 2020
(This article belongs to the Special Issue Fractional Calculus and the Future of Science)

Abstract

In this article, we introduce the Skellam process of order k and its running average. We also discuss the time-changed Skellam process of order k. In particular, we discuss the space-fractional Skellam process and the tempered space-fractional Skellam process, obtained via time changes of the Skellam process by an independent stable subordinator and tempered stable subordinator, respectively. We derive the marginal probabilities, Lévy measures, and governing difference-differential equations of the introduced processes. Our results generalize the Skellam process and the running average of the Poisson process in several directions.

1. Introduction

The Skellam distribution is obtained as the difference of two independent Poisson distributed random variables; it was introduced for the case of different intensities $\lambda_1, \lambda_2$ in [1] and for equal means in [2]. For large values of $\lambda_1 + \lambda_2$, the distribution can be approximated by the normal distribution. If $\lambda_2$ is very close to 0, the distribution tends to a Poisson distribution with intensity $\lambda_1$, and, similarly, if $\lambda_1$ tends to 0, the distribution tends to the negative of a Poisson random variable, supported on the non-positive integers. The Skellam random variable is infinitely divisible, since it is the difference of two infinitely divisible random variables (see Proposition 2.1 in [3]). Therefore, one can define a continuous-time Lévy process for the Skellam distribution, which is called the Skellam process.
The Skellam process is an integer-valued Lévy process, and it can also be obtained by taking the difference of two independent Poisson processes. Its marginal probability mass function (pmf) involves the modified Bessel function of the first kind. The Skellam process has applications in several areas, such as modeling the intensity difference of pixels in cameras (see [4]) and modeling the difference of the number of goals of two competing teams in a football game [5]. Models based on the difference of two point processes are proposed in [6,7,8,9].
Recently, the time-fractional Skellam process was studied in [10]; it is obtained by time-changing the Skellam process with an inverse stable subordinator. The authors also applied the time-fractional Skellam process to model the arrivals of jumps in high-frequency trading data, where it is shown that the inter-arrival times between the positive and negative jumps follow a Mittag–Leffler distribution rather than the exponential distribution. Similar behavior is observed for Danish fire insurance data (see [11]). Buchak and Sakhno, in [12], also proposed governing equations for time-fractional Skellam processes. Recently, [13] introduced the time-changed Poisson process of order k, which is obtained by time-changing the Poisson process of order k (see [14]) by general subordinators.
In this paper, we introduce the Skellam process of order k and its running average. We also discuss the time-changed Skellam process of order k. In particular, we discuss the space-fractional Skellam process and the tempered space-fractional Skellam process, obtained via time changes of the Skellam process by an independent stable subordinator and tempered stable subordinator, respectively. We obtain closed-form expressions for the marginal distributions of the considered processes and other important properties. The Skellam process is used to model the difference between the numbers of goals of two teams in a football match. At the beginning, both teams have score 0, and at time $t$ team 1 has score $N_1(t)$, which is the cumulative number of goals (arrivals of size 1) until time $t$, with exponential inter-arrival times; similarly, team 2 has score $N_2(t)$ at time $t$. The difference between the numbers of goals at time $t$ can then be modeled by $N_1(t) - N_2(t)$. Similarly, the Skellam process of order k with $k = 3$ can be used to model the difference between the numbers of points scored by two competing teams in a basketball match. Note that, in a basketball game, a free throw counts as one point, a basket from a shot taken inside the three-point line counts for two points, and a basket from a shot taken outside the three-point line counts for three points. Thus, a jump in the score of either team may be of size one, two, or three, and hence a Skellam process of order 3 can be used to model the difference between the points scored.
In [10], it is shown that the fractional Skellam process is a better model than the Skellam process for the arrivals of the up and down jumps in tick-by-tick financial data. Equivalently, the Mittag–Leffler distribution is a better model than the exponential distribution for the inter-arrival times between the up and down jumps. However, it is evident from Figure 3 of [10] that the fractional Skellam process also does not perfectly fit the arrivals of positive and negative jumps. We expect that more flexible classes of processes, such as the time-changed Skellam process of order k (see Section 6) and the tempered space-fractional Skellam process introduced here (see Section 7), would be better models for the arrivals of jumps. Additionally, see [8] for applications of integer-valued Lévy processes in financial econometrics. Moreover, distributions of order k are of interest in reliability theory [15]. The Fisher dispersion index is a widely used measure for quantifying the departure of any univariate count distribution from the equi-dispersed Poisson model [16,17,18]. The processes introduced in this article can be useful in modeling over-dispersed and under-dispersed data. Further, in (49), we present probabilistic solutions of some fractional equations.
The remainder of this paper proceeds as follows: in Section 2, we collect the relevant definitions and results and also derive the Lévy densities for the space-fractional and tempered space-fractional Poisson processes. In Section 3, we introduce and study the running average of the Poisson process of order k. Section 4 is dedicated to the Skellam process of order k. Section 5 deals with the running average of the Skellam process of order k. In Section 6, we discuss the time-changed Skellam process of order k. In Section 7, we determine the marginal pmf, the governing equations for the marginal pmf, the Lévy densities, and the moment generating functions of the space-fractional Skellam process and the tempered space-fractional Skellam process.

2. Preliminaries

In this section, we collect relevant definitions and some results on Skellam process, subordinators, space-fractional Poisson process, and tempered space-fractional Poisson process. These results will be used to define the space-fractional Skellam processes and tempered space-fractional Skellam processes.

2.1. Skellam Process

In this section, we revisit the Skellam process and also provide a characterization of it. Let S ( t ) be a Skellam process, such that
$$S(t) = N_1(t) - N_2(t), \quad t \geq 0,$$
where $N_1(t)$ and $N_2(t)$ are two independent homogeneous Poisson processes with intensities $\lambda_1 > 0$ and $\lambda_2 > 0$, respectively. The Skellam process is defined in [8], and the distribution was introduced and studied in [1]; see also [2]. This process is symmetric only when $\lambda_1 = \lambda_2$. The pmf $s_k(t) = P(S(t) = k)$ of $S(t)$ is given by (see, e.g., [1,10])
$$s_k(t) = e^{-t(\lambda_1 + \lambda_2)} \left(\frac{\lambda_1}{\lambda_2}\right)^{k/2} I_{|k|}\big(2t\sqrt{\lambda_1\lambda_2}\big), \quad k \in \mathbb{Z},$$
where $I_k$ is the modified Bessel function of the first kind (see [19], p. 375),
$$I_k(z) = \sum_{n=0}^{\infty} \frac{(z/2)^{2n+k}}{n!(n+k)!}.$$
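The Skellam pmf above is easy to evaluate numerically. The following minimal sketch (not part of the paper; it assumes only NumPy, one of the libraries used for the simulations in Section 7.3) truncates the series (2) for $I_k$ and compares the result with a Monte Carlo estimate based on the difference of two independent Poisson samples:

```python
import numpy as np
from math import factorial

def bessel_I(k, z, terms=60):
    """Modified Bessel function of the first kind I_k(z) via its series (2)."""
    k = abs(k)
    return sum((z / 2.0) ** (2 * n + k) / (factorial(n) * factorial(n + k))
               for n in range(terms))

def skellam_pmf(k, t, lam1, lam2):
    """Marginal pmf s_k(t) of the Skellam process."""
    return (np.exp(-t * (lam1 + lam2)) * (lam1 / lam2) ** (k / 2.0)
            * bessel_I(abs(k), 2.0 * t * np.sqrt(lam1 * lam2)))

if __name__ == "__main__":
    t, lam1, lam2 = 1.5, 2.0, 3.0          # illustrative parameter values
    rng = np.random.default_rng(0)
    diff = rng.poisson(lam1 * t, 200_000) - rng.poisson(lam2 * t, 200_000)
    for k in (-2, 0, 3):
        print(k, skellam_pmf(k, t, lam1, lam2), np.mean(diff == k))
```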
The pmf $s_k(t)$ satisfies the following differential-difference equation (see [10])
$$\frac{d}{dt} s_k(t) = \lambda_1 \big(s_{k-1}(t) - s_k(t)\big) - \lambda_2 \big(s_k(t) - s_{k+1}(t)\big), \quad k \in \mathbb{Z},$$
with initial conditions $s_0(0) = 1$ and $s_k(0) = 0$, $k \neq 0$. For a real-valued Lévy process $Z(t)$, the characteristic function admits the form
$$E\big(e^{iuZ(t)}\big) = e^{t\psi_Z(u)},$$
where the function $\psi_Z$ is called the characteristic exponent and admits the following Lévy–Khintchine representation (see [20])
$$\psi_Z(u) = iau - bu^2 + \int_{\mathbb{R}\setminus\{0\}} \left(e^{iux} - 1 - iux\,\mathbf{1}_{\{|x| \leq 1\}}\right) \pi_Z(dx).$$
Here, $a \in \mathbb{R}$, $b \geq 0$, and $\pi_Z$ is the Lévy measure. If $\pi_Z(dx) = \nu_Z(x)\,dx$ for some function $\nu_Z$, then $\nu_Z$ is called the Lévy density of the process $Z$. The Skellam process is a Lévy process; its Lévy density $\nu_S$ is a linear combination of two Dirac delta functions, $\nu_S(y) = \lambda_1 \delta_1(y) + \lambda_2 \delta_{-1}(y)$, and the corresponding characteristic exponent is given by
$$\psi_{S(1)}(u) = -\int_{\mathbb{R}\setminus\{0\}} \big(1 - e^{iuy}\big)\, \nu_S(y)\, dy.$$
The moment generating function (mgf) of the Skellam process is
$$E\big[e^{\theta S(t)}\big] = e^{-t(\lambda_1 + \lambda_2 - \lambda_1 e^{\theta} - \lambda_2 e^{-\theta})}, \quad \theta \in \mathbb{R}.$$
With the help of the mgf, one can easily find the moments of the Skellam process. In the next result, we give a characterization of the Skellam process which, to the best of our knowledge, is not available in the literature. For a function $h$, we write $h(\delta) = o(\delta)$ if $\lim_{\delta \to 0} h(\delta)/\delta = 0$.
Theorem 1.
Suppose that an arrival process $S(t)$ has independent and stationary increments and satisfies the following incremental condition; then the process is a Skellam process:
$$P\big(S(t+\delta) = m \mid S(t) = n\big) = \begin{cases} \lambda_1 \delta + o(\delta), & m = n+1; \\ \lambda_2 \delta + o(\delta), & m = n-1; \\ 1 - \lambda_1 \delta - \lambda_2 \delta + o(\delta), & m = n; \\ o(\delta), & \text{otherwise}. \end{cases}$$
Proof. 
Consider the interval $[0, t]$, discretized into $n$ sub-intervals of size $\delta$ each, such that $n\delta = t$. For $k \geq 0$, we have
$$\begin{aligned} P(S(t) = k) &= \sum_{m=0}^{[\frac{n-k}{2}]} \frac{n!}{m!(m+k)!(n-2m-k)!} (\lambda_1 \delta)^{m+k} (\lambda_2 \delta)^m (1 - \lambda_1 \delta - \lambda_2 \delta)^{n-2m-k} + o(\delta) \\ &= \sum_{m=0}^{[\frac{n-k}{2}]} \frac{n!}{m!(m+k)!(n-2m-k)!} \left(\frac{\lambda_1 t}{n}\right)^{m+k} \left(\frac{\lambda_2 t}{n}\right)^{m} \left(1 - \frac{\lambda_1 t}{n} - \frac{\lambda_2 t}{n}\right)^{n-2m-k} + o(\delta) \\ &= \sum_{m=0}^{[\frac{n-k}{2}]} \frac{(\lambda_1 t)^{m+k} (\lambda_2 t)^{m}}{m!(m+k)!} \frac{n!}{(n-2m-k)!\, n^{2m+k}} \left(1 - \frac{\lambda_1 t}{n} - \frac{\lambda_2 t}{n}\right)^{n-2m-k} + o(\delta) \\ &\longrightarrow e^{-(\lambda_1+\lambda_2)t} \sum_{m=0}^{\infty} \frac{(\lambda_1 t)^{m+k} (\lambda_2 t)^{m}}{m!(m+k)!}, \end{aligned}$$
by taking $n \to \infty$. The result now follows by using the definition of the modified Bessel function of the first kind $I_k$. Similarly, it can be proved for $k < 0$. □

2.2. Poisson Process of Order k

In this section, we recall the definition and some important properties of the Poisson process of order k (PPoK), introduced and studied by Kostadinova and Minkova (see [14]). Let $x_1, x_2, \ldots, x_k$ be non-negative integers and set $\zeta_k = x_1 + x_2 + \cdots + x_k$, $\Pi_k! = x_1!\, x_2! \cdots x_k!$ and
$$\Omega(k, n) = \{X = (x_1, x_2, \ldots, x_k) \mid x_1 + 2x_2 + \cdots + kx_k = n\}.$$
Additionally, let $\{N^k(t)\}_{t \geq 0}$ represent the PPoK with rate parameter $\lambda t$; then its probability mass function (pmf) is given by
$$p_n^{N^k}(t) = P(N^k(t) = n) = \sum_{X \in \Omega(k,n)} e^{-k\lambda t} \frac{(\lambda t)^{\zeta_k}}{\Pi_k!}.$$
The pmf of $N^k(t)$ satisfies the following differential-difference equations (see [14])
$$\frac{d}{dt} p_n^{N^k}(t) = -k\lambda\, p_n^{N^k}(t) + \lambda \sum_{j=1}^{n \wedge k} p_{n-j}^{N^k}(t), \quad n = 1, 2, \ldots, \qquad \frac{d}{dt} p_0^{N^k}(t) = -k\lambda\, p_0^{N^k}(t),$$
with initial conditions $p_0^{N^k}(0) = 1$ and $p_n^{N^k}(0) = 0$ for $n \geq 1$, where $n \wedge k = \min\{k, n\}$. The characteristic function of the PPoK $N^k(t)$ is
$$\phi_{N^k(t)}(u) = E\big[e^{iuN^k(t)}\big] = e^{-\lambda t (k - \sum_{j=1}^{k} e^{iuj})},$$
where $i = \sqrt{-1}$. The PPoK is a Lévy process, so it is infinitely divisible, i.e., $\phi_{N^k(t)}(u) = (\phi_{N^k(1)}(u))^t$. The Lévy density of the PPoK is easy to derive and is given by
$$\nu_{N^k}(x) = \lambda \sum_{j=1}^{k} \delta_j(x),$$
where $\delta_j$ is the Dirac delta function concentrated at $j$. The transition probabilities of the PPoK $\{N^k(t)\}_{t \geq 0}$ are also given by Kostadinova and Minkova [14]:
$$P\big(N^k(t+\delta) = m \mid N^k(t) = n\big) = \begin{cases} 1 - k\lambda\delta + o(\delta), & m = n; \\ \lambda\delta + o(\delta), & m = n+i, \ i = 1, 2, \ldots, k; \\ o(\delta), & \text{otherwise}. \end{cases}$$
The probability generating function (pgf) $G_{N^k}(s,t)$ is given by (see [14])
$$G_{N^k}(s, t) = e^{-\lambda t (k - \sum_{j=1}^{k} s^j)}.$$
The mean, variance, and covariance function of the PPoK are given by
$$E[N^k(t)] = \frac{k(k+1)}{2}\lambda t; \quad \operatorname{Var}[N^k(t)] = \frac{k(k+1)(2k+1)}{6}\lambda t; \quad \operatorname{Cov}[N^k(t), N^k(s)] = \frac{k(k+1)(2k+1)}{6}\lambda (t \wedge s).$$
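Since the pgf (12) factorizes as $\prod_{j=1}^{k} e^{-\lambda t(1 - s^j)}$, the PPoK admits the representation $N^k(t) = \sum_{j=1}^{k} j\, N_j(t)$ with $N_1, \ldots, N_k$ independent Poisson processes of rate $\lambda$. A minimal NumPy sketch (not from the paper) that uses this representation to check the mean and variance given above:

```python
import numpy as np

def ppok_sample(t, lam, k, size, rng):
    """Sample N^k(t): by the factorization of the pgf (12), N^k(t) = sum_j j*N_j(t)
    with N_1,...,N_k independent Poisson(lambda*t) counts."""
    counts = rng.poisson(lam * t, size=(size, k))        # N_j(t), j = 1,...,k
    return counts @ np.arange(1, k + 1)                  # sum_j j * N_j(t)

if __name__ == "__main__":
    t, lam, k = 2.0, 1.3, 3                              # illustrative values
    rng = np.random.default_rng(1)
    x = ppok_sample(t, lam, k, 500_000, rng)
    print("mean:", x.mean(), "theory:", k * (k + 1) / 2 * lam * t)
    print("var :", x.var(),  "theory:", k * (k + 1) * (2 * k + 1) / 6 * lam * t)
```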

2.3. Subordinators

Let $D_f(t)$ be a real-valued Lévy process with non-decreasing sample paths; its Laplace transform has the form
$$E\big[e^{-sD_f(t)}\big] = e^{-tf(s)},$$
where
$$f(s) = bs + \int_0^{\infty} \big(1 - e^{-xs}\big)\, \pi(dx), \quad s > 0, \ b \geq 0,$$
is the integral representation of a Bernstein function (see [21]). Bernstein functions are $C^{\infty}$, non-negative, and such that $(-1)^m \frac{d^m}{dx^m} f(x) \leq 0$ for $m \geq 1$ [21]. Here, $\pi$ denotes the non-negative Lévy measure on the positive half-line, such that
$$\int_0^{\infty} (x \wedge 1)\, \pi(dx) < \infty, \quad \pi([0, \infty)) = \infty,$$
and $b$ is the drift coefficient. The right-continuous inverse $E_f(t) = \inf\{u \geq 0 : D_f(u) > t\}$ is the inverse and first-exit time of $D_f(t)$; it is non-Markovian with non-stationary and non-independent increments. Next, we list some special cases of Lévy subordinators with drift coefficient $b = 0$:
$$f(s) = \begin{cases} p\log\!\big(1 + \tfrac{s}{\alpha}\big), & p > 0, \ \alpha > 0 & (\text{gamma subordinator}); \\ (s + \mu)^{\alpha} - \mu^{\alpha}, & \mu > 0, \ 0 < \alpha < 1 & (\text{tempered } \alpha\text{-stable subordinator}); \\ \delta\big(\sqrt{2s + \gamma^2} - \gamma\big), & \gamma > 0, \ \delta > 0 & (\text{inverse Gaussian subordinator}); \\ s^{\alpha}, & 0 < \alpha < 1 & (\alpha\text{-stable subordinator}). \end{cases}$$
It is worth noting that, among the subordinators given in (14), all integer-order moments of the $\alpha$-stable subordinator are infinite, while the other subordinators have all moments finite.
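As a quick illustration of how moments follow from the Laplace exponent, differentiating $E[e^{-sD_f(t)}] = e^{-tf(s)}$ at $s = 0$ gives $E[D_f(t)] = t f'(0)$ whenever this is finite. The sketch below (an illustration with hypothetical parameter values, not taken from the paper) checks this for the gamma subordinator in (14), whose marginal at time $t$ is Gamma distributed with shape $pt$ and scale $1/\alpha$:

```python
import numpy as np

def f_gamma(s, p, a):
    """Laplace exponent of the gamma subordinator from (14): f(s) = p*log(1 + s/a)."""
    return p * np.log1p(s / a)

if __name__ == "__main__":
    p, a, t = 2.0, 3.0, 1.5                      # hypothetical parameter values
    # E[exp(-s D_f(t))] = exp(-t f(s)) implies E[D_f(t)] = t * f'(0) = t*p/a.
    s = 1e-6
    mean_from_exponent = t * (f_gamma(s, p, a) - f_gamma(0.0, p, a)) / s
    # The gamma subordinator satisfies D_f(t) ~ Gamma(shape = p*t, scale = 1/a).
    rng = np.random.default_rng(2)
    samples = rng.gamma(shape=p * t, scale=1.0 / a, size=200_000)
    print(mean_from_exponent, samples.mean(), t * p / a)
```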

2.4. The Space-Fractional Poisson Process

In this section, we discuss the main properties of the space-fractional Poisson process (SFPP). We also provide the Lévy density of the SFPP, which has not been discussed in the literature. The SFPP $N_{\alpha}(t)$ was introduced in [22] as follows:
$$N_{\alpha}(t) = \begin{cases} N(D_{\alpha}(t)), & t \geq 0, \ 0 < \alpha < 1, \\ N(t), & t \geq 0, \ \alpha = 1, \end{cases}$$
where D α ( t ) is an α -stable subordinator, which is independent of the homogeneous Poisson process N ( t ) .
The probability generating function (pgf) of this process is
$$G_{N_{\alpha}}(s, t) = E\big[s^{N_{\alpha}(t)}\big] = e^{-\lambda^{\alpha}(1-s)^{\alpha} t}, \quad |s| \leq 1, \ \alpha \in (0, 1).$$
The pmf of the SFPP is
$$P_{\alpha}(k, t) = P\{N_{\alpha}(t) = k\} = \frac{(-1)^k}{k!} \sum_{r=0}^{\infty} \frac{(-\lambda^{\alpha})^r\, t^r}{r!} \frac{\Gamma(r\alpha + 1)}{\Gamma(r\alpha - k + 1)} = \frac{(-1)^k}{k!}\, {}_1\psi_1\big[(1, \alpha); (1-k, \alpha); -\lambda^{\alpha} t\big],$$
where ${}_h\psi_i(z)$ is the Fox–Wright function (see formula (1.11.14) in [23]). It was shown in [22] that the pmf of the SFPP satisfies the following fractional differential-difference equations:
$$\frac{d}{dt} P_{\alpha}(k, t) = -\lambda^{\alpha} (1 - B)^{\alpha} P_{\alpha}(k, t), \quad \alpha \in (0, 1], \ k = 1, 2, \ldots,$$
$$\frac{d}{dt} P_{\alpha}(0, t) = -\lambda^{\alpha} P_{\alpha}(0, t),$$
with initial conditions
$$P_{\alpha}(k, 0) = \delta_{k,0},$$
where $\delta_{k,0}$ is the Kronecker delta,
$$\delta_{k,0} = \begin{cases} 1, & k = 0, \\ 0, & k \geq 1. \end{cases}$$
The fractional difference operator
$$(1 - B)^{\alpha} = \sum_{j=0}^{\infty} \binom{\alpha}{j} (-1)^j B^j$$
is defined in [24], where $B$ is the backward shift operator. The characteristic function of the SFPP is
$$E\big[e^{iuN_{\alpha}(t)}\big] = e^{-\lambda^{\alpha}(1 - e^{iu})^{\alpha} t}.$$
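The series in (17) can be evaluated directly by truncation; the reciprocal gamma factor vanishes at the poles of $\Gamma$, which the helper below handles explicitly. This is a minimal numerical sketch (not from the paper), assuming only the Python standard library:

```python
from math import gamma, factorial, exp

def rgamma(x):
    """Reciprocal gamma 1/Gamma(x); returns 0 at the poles (non-positive integers)."""
    if x <= 0 and float(x).is_integer():
        return 0.0
    return 1.0 / gamma(x)

def sfpp_pmf(k, t, lam, alpha, terms=80):
    """Evaluate the SFPP pmf (17) by truncating the series over r (small k only)."""
    total = 0.0
    for r in range(terms):
        total += ((-(lam ** alpha)) ** r * t ** r / factorial(r)
                  * gamma(r * alpha + 1) * rgamma(r * alpha - k + 1))
    return (-1) ** k / factorial(k) * total

if __name__ == "__main__":
    lam, alpha, t = 1.0, 0.7, 0.5                       # illustrative values
    print([sfpp_pmf(k, t, lam, alpha) for k in range(5)])
    # consistency check from the pgf (16): P_alpha(0, t) = exp(-lam**alpha * t)
    print(sfpp_pmf(0, t, lam, alpha), exp(-lam ** alpha * t))
```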
Proposition 1.
The Lévy density $\nu_{N_{\alpha}}(x)$ of the SFPP is given by
$$\nu_{N_{\alpha}}(x) = \lambda^{\alpha} \sum_{n=1}^{\infty} (-1)^{n+1} \binom{\alpha}{n} \delta_n(x).$$
Proof. 
We use the Lévy–Khintchine formula (see [20]):
$$\begin{aligned} \int_{\mathbb{R}\setminus\{0\}} \big(e^{iux} - 1\big)\, \lambda^{\alpha} \sum_{n=1}^{\infty} (-1)^{n+1} \binom{\alpha}{n} \delta_n(x)\, dx &= \lambda^{\alpha}\left[\sum_{n=1}^{\infty} (-1)^{n+1} \binom{\alpha}{n} e^{iun} + \sum_{n=0}^{\infty} (-1)^{n} \binom{\alpha}{n} - 1\right] \\ &= \lambda^{\alpha} \sum_{n=0}^{\infty} (-1)^{n+1} \binom{\alpha}{n} e^{iun} = -\lambda^{\alpha}\big(1 - e^{iu}\big)^{\alpha}, \end{aligned}$$
which is the characteristic exponent of SFPP from Equation (23). □

2.5. Tempered Space-Fractional Poisson Process

The tempered space-fractional Poisson process (TSFPP) can be obtained by subordinating the homogeneous Poisson process $N(t)$ with an independent tempered stable subordinator $D_{\alpha,\mu}(t)$ (see [25]):
$$N_{\alpha,\mu}(t) = N(D_{\alpha,\mu}(t)), \quad \alpha \in (0, 1), \ \mu > 0.$$
Due to the tempering, this process has finite integer-order moments. The pmf of the TSFPP is given by (see [25])
$$\begin{aligned} P_{\alpha,\mu}(k, t) &= (-1)^k e^{t\mu^{\alpha}} \sum_{m=0}^{\infty} \mu^m \sum_{r=0}^{\infty} \frac{(-t)^r}{r!}\, \lambda^{\alpha r - m} \binom{\alpha r}{m} \binom{\alpha r - m}{k} \\ &= e^{t\mu^{\alpha}} \frac{(-1)^k}{k!} \sum_{m=0}^{\infty} \frac{\mu^m \lambda^{-m}}{m!}\, {}_1\psi_1\big[(1, \alpha); (1-k-m, \alpha); -\lambda^{\alpha} t\big], \quad k = 0, 1, \ldots. \end{aligned}$$
The governing difference-differential equation is given by
$$\frac{d}{dt} P_{\alpha,\mu}(k, t) = -\big((\mu + \lambda(1 - B))^{\alpha} - \mu^{\alpha}\big)\, P_{\alpha,\mu}(k, t), \quad k \geq 0.$$
The characteristic function of the TSFPP is
$$E\big[e^{iuN_{\alpha,\mu}(t)}\big] = e^{-t\left((\mu + \lambda(1 - e^{iu}))^{\alpha} - \mu^{\alpha}\right)}.$$
Using a standard conditioning argument, the mean and variance of the TSFPP are given by
$$E[N_{\alpha,\mu}(t)] = \lambda\alpha\mu^{\alpha-1} t, \qquad \operatorname{Var}[N_{\alpha,\mu}(t)] = \lambda\alpha\mu^{\alpha-1} t + \lambda^2 \alpha(1-\alpha)\mu^{\alpha-2} t.$$
Proposition 2.
The Lévy density $\nu_{N_{\alpha,\mu}}(x)$ of the TSFPP is
$$\nu_{N_{\alpha,\mu}}(x) = \sum_{n=1}^{\infty} \mu^{\alpha - n} \binom{\alpha}{n} \lambda^n \sum_{l=1}^{n} \binom{n}{l} (-1)^{l+1} \delta_l(x), \quad \mu > 0.$$
Proof. 
Using (28), the characteristic exponent of the TSFPP is $\psi_{N_{\alpha,\mu}}(u) = -\big((\mu + \lambda(1 - e^{iu}))^{\alpha} - \mu^{\alpha}\big)$. We find the Lévy density with the help of the Lévy–Khintchine formula (see [20]):
$$\begin{aligned} \int_{\mathbb{R}\setminus\{0\}} \big(e^{iux} - 1\big) \sum_{n=1}^{\infty} \mu^{\alpha-n} \binom{\alpha}{n} \lambda^n \sum_{l=1}^{n} \binom{n}{l} (-1)^{l+1} \delta_l(x)\, dx &= \sum_{n=1}^{\infty} \mu^{\alpha-n} \binom{\alpha}{n} \lambda^n \left[\sum_{l=1}^{n} \binom{n}{l} (-1)^{l+1} e^{iul} - \sum_{l=1}^{n} \binom{n}{l} (-1)^{l+1}\right] \\ &= -\sum_{n=1}^{\infty} \mu^{\alpha-n} \binom{\alpha}{n} \lambda^n \big(1 - e^{iu}\big)^n = -\big((\mu + \lambda(1 - e^{iu}))^{\alpha} - \mu^{\alpha}\big), \end{aligned}$$
hence proved. □
Definition 1.
A stochastic process $X(t)$ is over-dispersed, equi-dispersed, or under-dispersed [18] if the Fisher index of dispersion, given by (see, e.g., [17])
$$\operatorname{FI}[X(t)] = \frac{\operatorname{Var}[X(t)]}{E[X(t)]},$$
is greater than 1, equal to 1, or smaller than 1, respectively, for all $t > 0$.
Remark 1.
Using (29), we have $\operatorname{FI}[N_{\alpha,\mu}(t)] = 1 + \frac{\lambda(1-\alpha)}{\mu} > 1$, i.e., the TSFPP $N_{\alpha,\mu}(t)$ is over-dispersed.

3. Running Average of PPoK

In this section, we first introduce the running average of the PPoK and its main properties. These results will be used later to discuss the running average of the SPoK.
Definition 2
(Running average of PPoK). We define the running average process $N_A^k(t)$, $t \geq 0$, by taking the time-scaled integral of the path of the PPoK (see [26]):
$$N_A^k(t) = \frac{1}{t} \int_0^t N^k(s)\, ds.$$
We can write the differential equation, with initial condition $N_A^k(0) = 0$,
$$\frac{d}{dt} N_A^k(t) = \frac{1}{t} N^k(t) - \frac{1}{t^2} \int_0^t N^k(s)\, ds,$$
which shows that the process has continuous sample paths of bounded total variation. We explore the compound Poisson representation and distributional properties of the running average of the PPoK. The characteristic function of $N_A^k(t)$ is obtained using Lemma 1 of [26], which we recall for ease of reference.
Lemma 1.
If $X_t$ is a Lévy process and $Y_t$ is its Riemann integral, defined by
$$Y_t = \int_0^t X_s\, ds,$$
then the characteristic function of $Y$ satisfies
$$\phi_{Y(t)}(u) = E\big[e^{iuY(t)}\big] = \exp\left(t \int_0^1 \log \phi_{X(1)}(tuz)\, dz\right), \quad u \in \mathbb{R}.$$
Proposition 3.
The characteristic function of $N_A^k(t)$ is given by
$$\phi_{N_A^k(t)}(u) = \exp\left(-t\lambda\left[k - \sum_{j=1}^{k} \frac{e^{iuj} - 1}{iuj}\right]\right).$$
Proof. 
Using Equation (10), we have
$$\int_0^1 \log \phi_{N^k(1)}(tuz)\, dz = -\lambda\left(k - \sum_{j=1}^{k} \frac{e^{ituj} - 1}{ituj}\right).$$
Using (32) and (31), we have
$$\phi_{N_A^k(t)}(u) = \exp\left(t \int_0^1 \log \phi_{N^k(1)}(uz)\, dz\right) = \exp\left(-t\lambda\left[k - \sum_{j=1}^{k} \frac{e^{iuj} - 1}{iuj}\right]\right). \qquad \square$$
Proposition 4.
The running average process has a compound Poisson representation: let
$$Y(t) = \sum_{i=1}^{N(t)} X_i,$$
where $X_i$, $i = 1, 2, \ldots$, are independent, identically distributed (iid) copies of a random variable $X$, independent of $N(t)$, and $N(t)$ is a Poisson process with intensity $k\lambda$. Subsequently,
$$Y(t) \stackrel{law}{=} N_A^k(t).$$
Further, the random variable $X$ has the following pdf:
$$f_X(x) = \sum_{i=1}^{k} p_V(i)\, f_{U_i}(x) = \frac{1}{k} \sum_{i=1}^{k} f_{U_i}(x),$$
where $V$ follows the discrete uniform distribution over $\{1, 2, \ldots, k\}$ and $U_i$ follows the continuous uniform distribution over $(0, i)$, $i = 1, 2, \ldots, k$.
Proof. 
The pdf of $U_i$ is $f_{U_i}(x) = \frac{1}{i}$, $0 \leq x \leq i$. Using (35), the characteristic function of $X$ is given by
$$\phi_X(u) = \frac{1}{k} \sum_{j=1}^{k} \frac{e^{iuj} - 1}{iuj}.$$
For fixed $t$, the characteristic function of $Y(t)$ is
$$\phi_{Y(t)}(u) = e^{-k\lambda t (1 - \phi_X(u))} = \exp\left(-t\lambda\left[k - \sum_{j=1}^{k} \frac{e^{iuj} - 1}{iuj}\right]\right),$$
which is equal to the characteristic function of the running average of the PPoK given in (33). Hence, by the uniqueness of the characteristic function, the result follows. □
Using the definition
$$m_r = E[X^r] = (-i)^r \frac{d^r \phi_X(u)}{du^r}\bigg|_{u=0},$$
the first two moments of the random variable $X$ given in Proposition 4 are $m_1 = \frac{(k+1)}{4}$ and $m_2 = \frac{(k+1)(2k+1)}{18}$. Further, using the mean, variance, and covariance of the compound Poisson process, we have
$$\begin{aligned} E[N_A^k(t)] &= E[N(t)]E[X] = \frac{k(k+1)}{4}\lambda t; \\ \operatorname{Var}[N_A^k(t)] &= E[N(t)]E[X^2] = \frac{k(k+1)(2k+1)}{18}\lambda t; \\ \operatorname{Cov}[N_A^k(t), N_A^k(s)] &= E[N_A^k(t) N_A^k(s)] - E[N_A^k(t)]E[N_A^k(s)] = \frac{k(k+1)(2k+1)}{18}\lambda s - \frac{k^2(k+1)^2}{16}\lambda^2 s^2, \quad s < t. \end{aligned}$$
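A small Monte Carlo sketch (not from the paper, assuming NumPy) of the compound Poisson representation in Proposition 4: $N_A^k(t)$ is sampled as a Poisson($k\lambda t$) number of jumps, each uniform on $(0, J)$ with $J$ uniform on $\{1, \ldots, k\}$, and the empirical mean and variance are compared with the formulas above.

```python
import numpy as np

def running_avg_ppok(t, lam, k, size, rng):
    """Sample N_A^k(t) via the compound Poisson representation of Proposition 4:
    a Poisson(k*lam*t) number of jumps, each Uniform(0, J) with J uniform on {1,...,k}."""
    out = np.empty(size)
    n = rng.poisson(k * lam * t, size)
    for i, ni in enumerate(n):
        j = rng.integers(1, k + 1, ni)                     # jump "ceilings" J
        out[i] = rng.uniform(0.0, j).sum() if ni else 0.0  # Uniform(0, J) jumps
    return out

if __name__ == "__main__":
    t, lam, k = 2.0, 1.0, 3                                # illustrative values
    rng = np.random.default_rng(3)
    x = running_avg_ppok(t, lam, k, 50_000, rng)
    print("mean:", x.mean(), "theory:", k * (k + 1) / 4 * lam * t)
    print("var :", x.var(),  "theory:", k * (k + 1) * (2 * k + 1) / 18 * lam * t)
```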
Corollary 1.
Putting k = 1 , the running average of PPoK N A k ( t ) reduces to the running average of standard Poisson process N A ( t ) (see Appendix in [26]).
Corollary 2.
The mean and variance of the PPoK and of its running average satisfy $E[N_A^k(t)]/E[N^k(t)] = \frac{1}{2}$ and $\operatorname{Var}[N_A^k(t)]/\operatorname{Var}[N^k(t)] = \frac{1}{3}$.
Remark 2.
The Fisher index of dispersion of the running average of the PPoK $N_A^k(t)$ is given by $\operatorname{FI}[N_A^k(t)] = \frac{2(2k+1)}{9}$. If $k = 1$, the process is under-dispersed, and for $k > 1$ it is over-dispersed.
Next we discuss the long-range dependence (LRD) property of running average of PPoK. We recall the definition of LRD for a non-stationary process.
Definition 3 
(Long-range dependence (LRD)). Let $X(t)$ be a stochastic process whose correlation function, for fixed $s$ and large $t$, satisfies
$$c_1(s)\, t^{-d} \leq \operatorname{Cor}(X(t), X(s)) \leq c_2(s)\, t^{-d},$$
for $d > 0$, $c_1(s) > 0$ and $c_2(s) > 0$. In the particular case $c_1(s) = c_2(s) = c(s)$, the above condition reduces to
$$\lim_{t \to \infty} \frac{\operatorname{Cor}(X(t), X(s))}{t^{-d}} = c(s).$$
We say that $X(t)$ has the LRD property if $d \in (0, 1)$ and the short-range dependence (SRD) property if $d \in (1, 2)$ [27].
Proposition 5.
The running average of the PPoK has the LRD property.
Proof. 
Let $0 \leq s < t < \infty$. The correlation function of the running average of the PPoK $N_A^k(t)$ is
$$\operatorname{Cor}[N_A^k(t), N_A^k(s)] = \frac{8(2k+1) - 9(k+1)k\lambda s}{8(2k+1)}\, s^{1/2}\, t^{-1/2}.$$
Subsequently, for $d = 1/2$, it follows that
$$\lim_{t \to \infty} \frac{\operatorname{Cor}[N_A^k(t), N_A^k(s)]}{t^{-1/2}} = \frac{8(2k+1) - 9(k+1)k\lambda s}{8(2k+1)}\, s^{1/2} = c(s). \qquad \square$$

4. Skellam Process of Order k (SPoK)

In this section, we introduce and study the Skellam process of order k (SPoK).
Definition 4
(SPoK). Let $N_1^k(t)$ and $N_2^k(t)$ be two independent PPoK with intensities $\lambda_1 > 0$ and $\lambda_2 > 0$, respectively. The stochastic process
$$S^k(t) = N_1^k(t) - N_2^k(t)$$
is called a Skellam process of order k (SPoK).
Proposition 6.
The marginal distribution $R_m(t) = P(S^k(t) = m)$ of the SPoK $S^k(t)$ is given by
$$R_m(t) = e^{-kt(\lambda_1 + \lambda_2)} \left(\frac{\lambda_1}{\lambda_2}\right)^{m/2} I_{|m|}\big(2tk\sqrt{\lambda_1\lambda_2}\big), \quad m \in \mathbb{Z}.$$
Proof. 
For $m \geq 0$, using the pmf of the PPoK given in (8), it follows that
$$R_m(t) = \sum_{n=0}^{\infty} P(N_1^k(t) = n+m)\, P(N_2^k(t) = n) = \sum_{n=0}^{\infty} \left(\sum_{X \in \Omega(k,n+m)} e^{-k\lambda_1 t} \frac{(\lambda_1 t)^{\zeta_k}}{\Pi_k!}\right) \left(\sum_{X \in \Omega(k,n)} e^{-k\lambda_2 t} \frac{(\lambda_2 t)^{\zeta_k}}{\Pi_k!}\right).$$
Setting $x_i = n_i$ and $n = x + \sum_{i=1}^{k}(i-1)n_i$, we have
$$\begin{aligned} R_m(t) &= e^{-kt(\lambda_1+\lambda_2)} \sum_{x=0}^{\infty} \frac{(\lambda_2 t)^x}{x!} \frac{(\lambda_1 t)^{m+x}}{(m+x)!} \sum_{n_1 + \cdots + n_k = m+x} \binom{m+x}{n_1, \ldots, n_k} \sum_{n_1 + \cdots + n_k = x} \binom{x}{n_1, \ldots, n_k} \\ &= e^{-kt(\lambda_1+\lambda_2)} \sum_{x=0}^{\infty} \frac{(\lambda_2 t)^x}{x!} \frac{(\lambda_1 t)^{m+x}}{(m+x)!}\, k^{m+x} k^x, \end{aligned}$$
using the multinomial theorem and the modified Bessel function given in (2). Similarly, it follows for $m < 0$. □
Proposition 7.
The Lévy density of the SPoK is
$$\nu_{S^k}(x) = \lambda_1 \sum_{j=1}^{k} \delta_j(x) + \lambda_2 \sum_{j=1}^{k} \delta_{-j}(x).$$
Proof. 
The proof follows by using the independence of two PPoK used in the definition of SPoK. □
Remark 3.
Using (12), the pgf of the SPoK is given by
$$G_{S^k}(s, t) = \sum_{m=-\infty}^{\infty} s^m R_m(t) = \exp\left(-t\left[k(\lambda_1 + \lambda_2) - \lambda_1 \sum_{j=1}^{k} s^j - \lambda_2 \sum_{j=1}^{k} s^{-j}\right]\right).$$
Further, the characteristic function of the SPoK is given by
$$\phi_{S^k(t)}(u) = \exp\left(-t\left[k(\lambda_1 + \lambda_2) - \lambda_1 \sum_{j=1}^{k} e^{iju} - \lambda_2 \sum_{j=1}^{k} e^{-iju}\right]\right).$$

SPoK as a Pure Birth and Death Process

In this section, we provide the transition probabilities of the SPoK at time $t + \delta$, given that we start at time $t$. Over a short interval of length $\delta \to 0$, it is nearly impossible to observe more than $k$ events; in fact, the probability of seeing more than $k$ events is $o(\delta)$.
Proposition 8.
The transition probabilities of the SPoK are given by
$$P\big(S^k(t+\delta) = m \mid S^k(t) = n\big) = \begin{cases} \lambda_1 \delta + o(\delta), & m = n+i, \ i = 1, 2, \ldots, k; \\ \lambda_2 \delta + o(\delta), & m = n-i, \ i = 1, 2, \ldots, k; \\ 1 - k\lambda_1 \delta - k\lambda_2 \delta + o(\delta), & m = n; \\ o(\delta), & \text{otherwise}. \end{cases}$$
Basically, at most $k$ events can occur in a very small interval of time $\delta$; even though the probability of more than $k$ events is non-zero, it is negligible.
Proof. 
Note that $S^k(t) = N_1^k(t) - N_2^k(t)$. We call $N_1^k(t)$ the first process and $N_2^k(t)$ the second process. For $i = 1, 2, \ldots, k$, we have
$$\begin{aligned} P\big(S^k(t+\delta) = n+i \mid S^k(t) = n\big) &= \sum_{j=1}^{k-i} P(\text{the first process has } i+j \text{ arrivals and the second process has } j \text{ arrivals}) \\ &\quad + P(\text{the first process has } i \text{ arrivals and the second process has } 0 \text{ arrivals}) + o(\delta) \\ &= \sum_{j=1}^{k-i} (\lambda_1\delta + o(\delta))(\lambda_2\delta + o(\delta)) + (\lambda_1\delta + o(\delta))(1 - k\lambda_2\delta + o(\delta)) + o(\delta) = \lambda_1\delta + o(\delta). \end{aligned}$$
Similarly, for $i = 1, 2, \ldots, k$, we have
$$\begin{aligned} P\big(S^k(t+\delta) = n-i \mid S^k(t) = n\big) &= \sum_{j=1}^{k-i} P(\text{the first process has } j \text{ arrivals and the second process has } i+j \text{ arrivals}) \\ &\quad + P(\text{the first process has } 0 \text{ arrivals and the second process has } i \text{ arrivals}) + o(\delta) \\ &= \sum_{j=1}^{k-i} (\lambda_1\delta + o(\delta))(\lambda_2\delta + o(\delta)) + (1 - k\lambda_1\delta + o(\delta))(\lambda_2\delta + o(\delta)) + o(\delta) = \lambda_2\delta + o(\delta). \end{aligned}$$
Further,
$$\begin{aligned} P\big(S^k(t+\delta) = n \mid S^k(t) = n\big) &= \sum_{j=1}^{k} P(\text{the first process has } j \text{ arrivals and the second process has } j \text{ arrivals}) \\ &\quad + P(\text{the first process has } 0 \text{ arrivals and the second process has } 0 \text{ arrivals}) + o(\delta) \\ &= \sum_{j=1}^{k} (\lambda_1\delta + o(\delta))(\lambda_2\delta + o(\delta)) + (1 - k\lambda_1\delta + o(\delta))(1 - k\lambda_2\delta + o(\delta)) + o(\delta) \\ &= 1 - k\lambda_1\delta - k\lambda_2\delta + o(\delta). \qquad \square \end{aligned}$$
Remark 4.
The pmf $R_m(t)$ of the SPoK satisfies the following difference-differential equation:
$$\frac{d}{dt} R_m(t) = -k(\lambda_1 + \lambda_2) R_m(t) + \lambda_1 \sum_{j=1}^{k} R_{m-j}(t) + \lambda_2 \sum_{j=1}^{k} R_{m+j}(t) = -\lambda_1 \sum_{j=1}^{k} (1 - B^j) R_m(t) - \lambda_2 \sum_{j=1}^{k} (1 - F^j) R_m(t), \quad m \in \mathbb{Z},$$
with initial conditions $R_0(0) = 1$ and $R_m(0) = 0$ for $m \neq 0$. Here, $B$ is the backward shift operator defined in (22) and $F$ is the forward shift operator, defined by $F^j X(t) = X(t+j)$, such that $(1 - F)^{\alpha} = \sum_{j=0}^{\infty} \binom{\alpha}{j} (-1)^j F^j$. Multiplying (42) by $s^m$ and summing over all $m$, we obtain the following differential equation for the pgf:
$$\frac{d}{dt} G_{S^k}(s, t) = \left[-k(\lambda_1 + \lambda_2) + \lambda_1 \sum_{j=1}^{k} s^j + \lambda_2 \sum_{j=1}^{k} s^{-j}\right] G_{S^k}(s, t).$$
The mean, variance, and covariance of the SPoK can be easily calculated by using the pgf:
$$E[S^k(t)] = \frac{k(k+1)}{2}(\lambda_1 - \lambda_2) t; \quad \operatorname{Var}[S^k(t)] = \frac{k(k+1)(2k+1)}{6}(\lambda_1 + \lambda_2) t; \quad \operatorname{Cov}[S^k(t), S^k(s)] = \frac{k(k+1)(2k+1)}{6}(\lambda_1 + \lambda_2) s, \quad s < t.$$
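The moment formulas above are easy to check by simulation. The sketch below (not from the paper, assuming NumPy) samples $S^k(t)$ as the difference of two PPoK, each generated through the factorization of the pgf (12) used earlier:

```python
import numpy as np

def spok_sample(t, lam1, lam2, k, size, rng):
    """Sample S^k(t) = N_1^k(t) - N_2^k(t), each PPoK built from k independent
    Poisson counts with jump sizes 1,...,k (factorization of the pgf (12))."""
    j = np.arange(1, k + 1)
    n1 = rng.poisson(lam1 * t, size=(size, k)) @ j
    n2 = rng.poisson(lam2 * t, size=(size, k)) @ j
    return n1 - n2

if __name__ == "__main__":
    t, lam1, lam2, k = 1.0, 2.0, 1.0, 3                  # illustrative values
    rng = np.random.default_rng(4)
    s = spok_sample(t, lam1, lam2, k, 500_000, rng)
    print("mean:", s.mean(), "theory:", k * (k + 1) / 2 * (lam1 - lam2) * t)
    print("var :", s.var(),  "theory:", k * (k + 1) * (2 * k + 1) / 6 * (lam1 + lam2) * t)
```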
Remark 5.
For the SPoK, when $\lambda_1 > \lambda_2$,
$$\operatorname{Var}[S^k(t)] - E[S^k(t)] = \frac{k(k+1)}{3}\big[(k-1)\lambda_1 + (k+2)\lambda_2\big] t > 0,$$
which implies that $\operatorname{FI}[S^k(t)] > 1$ and hence $S^k(t)$ exhibits over-dispersion. For $\lambda_1 < \lambda_2$, the process is under-dispersed.
Next, we show the LRD property for SPoK.
Proposition 9.
The SPoK has the LRD property given in Definition 3.
Proof. 
The correlation function of the SPoK is $\operatorname{Cor}(S^k(t), S^k(s)) = s^{1/2} t^{-1/2}$ for $s < t$; hence, for $d = 1/2$,
$$\lim_{t \to \infty} \frac{\operatorname{Cor}(S^k(t), S^k(s))}{t^{-1/2}} = \lim_{t \to \infty} \frac{s^{1/2} t^{-1/2}}{t^{-1/2}} = s^{1/2} = c(s).$$
Hence, SPoK exhibits the LRD property. □

5. Running Average of SPoK

In this section, we introduce and study a new stochastic process, namely the running average of the SPoK.
Definition 5.
The stochastic process defined by taking the time-scaled integral of the path of the SPoK,
$$S_A^k(t) = \frac{1}{t} \int_0^t S^k(s)\, ds,$$
is called the running average of SPoK.
Next, we provide the compound Poisson representation of running average of SPoK.
Proposition 10.
The characteristic function $\phi_{S_A^k(t)}(u) = E[e^{iuS_A^k(t)}]$ of $S_A^k(t)$ is given by
$$\phi_{S_A^k(t)}(u) = \exp\left(-kt\left[\lambda_1\left(1 - \frac{1}{k}\sum_{j=1}^{k} \frac{e^{iuj} - 1}{iuj}\right) + \lambda_2\left(1 - \frac{1}{k}\sum_{j=1}^{k} \frac{1 - e^{-iuj}}{iuj}\right)\right]\right), \quad u \in \mathbb{R}.$$
Proof. 
The result follows by applying Lemma 1 to Equation (40), after scaling by $1/t$. □
Remark 6.
It is easy to see that Equation (43) has a removable singularity at $u = 0$; to remove it, we define $\phi_{S_A^k(t)}(0) = 1$.
Proposition 11.
Let $Y(t)$ be a compound Poisson process
$$Y(t) = \sum_{n=1}^{N(t)} J_n,$$
where $N(t)$ is a Poisson process with rate parameter $k(\lambda_1 + \lambda_2) > 0$ and $\{J_n\}_{n \geq 1}$ are iid random variables, independent of $N(t)$, with the mixed double uniform density $p_{J_1}$ given in (45). Subsequently,
$$Y(t) \stackrel{law}{=} S_A^k(t).$$
Proof. 
Rearranging $\phi_{S_A^k(t)}(u)$,
$$\phi_{S_A^k(t)}(u) = \exp\left((\lambda_1+\lambda_2)kt\left[\frac{\lambda_1}{\lambda_1+\lambda_2}\frac{1}{k}\sum_{j=1}^{k}\frac{e^{iuj}-1}{iuj} + \frac{\lambda_2}{\lambda_1+\lambda_2}\frac{1}{k}\sum_{j=1}^{k}\frac{1-e^{-iuj}}{iuj} - 1\right]\right).$$
The random variable $J_1$, being mixed double uniformly distributed, has density
$$p_{J_1}(x) = \sum_{i=1}^{k} p_V(i)\, f_{U_i}(x) = \frac{1}{k}\sum_{i=1}^{k} f_{U_i}(x),$$
where $V$ follows the discrete uniform distribution over $\{1, 2, \ldots, k\}$ with pmf $p_V(j) = P(V = j) = \frac{1}{k}$, $j = 1, 2, \ldots, k$, and the $U_i$ are double uniformly distributed random variables with density
$$f_{U_i}(x) = \frac{(1-w)}{i}\mathbf{1}_{[-i,0]}(x) + \frac{w}{i}\mathbf{1}_{[0,i]}(x), \quad -i \leq x \leq i.$$
Here, $0 < w < 1$ is a weight parameter and $\mathbf{1}(\cdot)$ is the indicator function. Taking $w = \frac{\lambda_1}{\lambda_1+\lambda_2}$ and computing the characteristic function of $J_1$ as the Fourier transform of (45), we obtain
$$\phi_{J_1}(u) = \frac{\lambda_1}{\lambda_1+\lambda_2}\frac{1}{k}\sum_{j=1}^{k}\frac{e^{iuj}-1}{iuj} + \frac{\lambda_2}{\lambda_1+\lambda_2}\frac{1}{k}\sum_{j=1}^{k}\frac{1-e^{-iuj}}{iuj}.$$
The characteristic function of $Y(t)$ is
$$\phi_{Y(t)}(u) = e^{-k(\lambda_1+\lambda_2)t(1 - \phi_{J_1}(u))};$$
substituting the characteristic function $\phi_{J_1}(u)$ into this expression yields the characteristic function of $S_A^k(t)$, which completes the proof. □
Remark 7.
The $q$-th order moments of $J_1$ can be calculated using (37), or by using the Taylor series expansion of the characteristic function $\phi_{J_1}(u)$ around $0$, noting that
$$\frac{e^{iuj}-1}{iuj} = 1 + \sum_{r=1}^{\infty}\frac{(iuj)^r}{(r+1)!} \qquad \text{and} \qquad \frac{1-e^{-iuj}}{iuj} = 1 + \sum_{r=1}^{\infty}\frac{(-iuj)^r}{(r+1)!}.$$
We have $m_1 = \frac{(k+1)(\lambda_1-\lambda_2)}{4(\lambda_1+\lambda_2)}$ and $m_2 = \frac{(k+1)(2k+1)}{18}$. Further, the mean, variance, and covariance of the running average of the SPoK are
$$\begin{aligned} E[S_A^k(t)] &= E[N(t)]E[J_1] = \frac{k(k+1)}{4}(\lambda_1-\lambda_2)t, \\ \operatorname{Var}[S_A^k(t)] &= E[N(t)]E[J_1^2] = \frac{k(k+1)(2k+1)}{18}(\lambda_1+\lambda_2)t, \\ \operatorname{Cov}[S_A^k(t), S_A^k(s)] &= \frac{k(k+1)(2k+1)}{18}(\lambda_1+\lambda_2)s - \frac{k^2(k+1)^2}{16}(\lambda_1-\lambda_2)^2 s^2, \quad s < t. \end{aligned}$$
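A Monte Carlo sketch (not from the paper, assuming NumPy) of Proposition 11, taking the weight $w = \lambda_1/(\lambda_1+\lambda_2)$ as in the proof: jumps are uniform on $(0, J)$ with probability $\lambda_1/(\lambda_1+\lambda_2)$ and uniform on $(-J, 0)$ otherwise, with $J$ uniform on $\{1, \ldots, k\}$, and the empirical moments are compared with the formulas above.

```python
import numpy as np

def running_avg_spok(t, lam1, lam2, k, size, rng):
    """Sample S_A^k(t) via Proposition 11: compound Poisson with rate k*(lam1+lam2),
    jumps Uniform(0,J) w.p. lam1/(lam1+lam2) and Uniform(-J,0) otherwise, J uniform on {1,...,k}."""
    p_up = lam1 / (lam1 + lam2)
    out = np.empty(size)
    n = rng.poisson(k * (lam1 + lam2) * t, size)
    for i, ni in enumerate(n):
        if ni == 0:
            out[i] = 0.0
            continue
        j = rng.integers(1, k + 1, ni)
        sign = np.where(rng.random(ni) < p_up, 1.0, -1.0)
        out[i] = (sign * rng.uniform(0.0, j)).sum()
    return out

if __name__ == "__main__":
    t, lam1, lam2, k = 1.0, 2.0, 1.0, 3                  # illustrative values
    rng = np.random.default_rng(5)
    x = running_avg_spok(t, lam1, lam2, k, 50_000, rng)
    print("mean:", x.mean(), "theory:", k * (k + 1) / 4 * (lam1 - lam2) * t)
    print("var :", x.var(),  "theory:", k * (k + 1) * (2 * k + 1) / 18 * (lam1 + lam2) * t)
```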
Corollary 3.
For $\lambda_2 = 0$, the running average of the SPoK coincides with the running average of the PPoK, i.e.,
$$\phi_{S_A^k(t)}(u) = \phi_{N_A^k(t)}(u).$$
Corollary 4.
For $k = 1$, this process behaves like the running average of the Skellam process.
Corollary 5.
The ratios of the mean and of the variance of the running average of the SPoK to those of the SPoK are $1/2$ and $1/3$, respectively.
Remark 8.
For the running average of the SPoK, when $\lambda_1 > \lambda_2$ and $k > 1$, the process is over-dispersed; otherwise, it exhibits under-dispersion.

6. Time-Changed Skellam Process of Order k

We consider the time-changed SPoK, obtained by subordinating the SPoK $S^k(t)$ with an independent Lévy subordinator $D_f(t)$ satisfying $E[D_f^c(t)] < \infty$ for all $c > 0$. The time-changed SPoK is defined by
$$Z_f(t) = S^k(D_f(t)), \quad t \geq 0.$$
Note that the stable subordinator does not satisfy the condition $E[D_f^c(t)] < \infty$. The mgf of the time-changed SPoK $Z_f(t)$ is given by
$$E\big[e^{\theta Z_f(t)}\big] = \exp\left(-t f\Big(k(\lambda_1+\lambda_2) - \lambda_1\sum_{j=1}^{k} e^{\theta j} - \lambda_2\sum_{j=1}^{k} e^{-\theta j}\Big)\right).$$
Theorem 2.
The pmf $H_f(m, t) = P(Z_f(t) = m)$ of the time-changed SPoK is given by
$$H_f(m, t) = \sum_{x=\max(0,-m)}^{\infty} \frac{(k\lambda_1)^{m+x}(k\lambda_2)^x}{(m+x)!\, x!}\, E\left[e^{-k(\lambda_1+\lambda_2)D_f(t)} D_f^{2x+m}(t)\right], \quad m \in \mathbb{Z}.$$
Proof. 
Let $h_f(x, t)$ be the probability density function of the Lévy subordinator $D_f(t)$. Using a conditioning argument,
$$\begin{aligned} H_f(m, t) &= \int_0^{\infty} R_m(y)\, h_f(y, t)\, dy = \int_0^{\infty} e^{-ky(\lambda_1+\lambda_2)} \left(\frac{\lambda_1}{\lambda_2}\right)^{m/2} I_{|m|}\big(2yk\sqrt{\lambda_1\lambda_2}\big)\, h_f(y, t)\, dy \\ &= \sum_{x=\max(0,-m)}^{\infty} \frac{(k\lambda_1)^{m+x}(k\lambda_2)^x}{(m+x)!\, x!} \int_0^{\infty} e^{-k(\lambda_1+\lambda_2)y}\, y^{2x+m}\, h_f(y, t)\, dy \\ &= \sum_{x=\max(0,-m)}^{\infty} \frac{(k\lambda_1)^{m+x}(k\lambda_2)^x}{(m+x)!\, x!}\, E\left[e^{-k(\lambda_1+\lambda_2)D_f(t)} D_f^{2x+m}(t)\right]. \qquad \square \end{aligned}$$
The mean and covariance of the time-changed SPoK are given by
$$E[Z_f(t)] = \frac{k(k+1)}{2}(\lambda_1-\lambda_2)\, E[D_f(t)], \qquad \operatorname{Cov}[Z_f(t), Z_f(s)] = \frac{k(k+1)(2k+1)}{6}(\lambda_1+\lambda_2)\, E[D_f(s)] + \frac{k^2(k+1)^2}{4}(\lambda_1-\lambda_2)^2 \operatorname{Var}[D_f(s)], \quad s \leq t.$$
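As an illustration, the sketch below (not from the paper, assuming NumPy) simulates $Z_f(t)$ for the gamma subordinator from (14), for which $D_f(t)$ is Gamma distributed with shape $pt$ and scale $1/\alpha$, so that $E[D_f(t)] = pt/\alpha$; the SPoK at the random time is sampled through the pgf factorization used earlier, and the empirical mean is compared with the formula above.

```python
import numpy as np

def time_changed_spok(t, lam1, lam2, k, p, a, size, rng):
    """Sample Z_f(t) = S^k(D_f(t)) for the gamma subordinator f(s) = p*log(1 + s/a),
    for which D_f(t) ~ Gamma(shape = p*t, scale = 1/a)."""
    j = np.arange(1, k + 1)
    T = rng.gamma(shape=p * t, scale=1.0 / a, size=size)       # random times D_f(t)
    n1 = rng.poisson(lam1 * T[:, None], size=(size, k)) @ j    # N_1^k(D_f(t))
    n2 = rng.poisson(lam2 * T[:, None], size=(size, k)) @ j    # N_2^k(D_f(t))
    return n1 - n2

if __name__ == "__main__":
    t, lam1, lam2, k, p, a = 1.0, 2.0, 1.0, 3, 2.0, 1.5        # illustrative values
    rng = np.random.default_rng(6)
    z = time_changed_spok(t, lam1, lam2, k, p, a, 300_000, rng)
    print("mean:", z.mean(), "theory:", k * (k + 1) / 2 * (lam1 - lam2) * (p * t / a))
```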

7. Space Fractional Skellam Process and Tempered Space Fractional Skellam Process

In this section, we introduce time-changed Skellam processes where the time changes are a stable subordinator and a tempered stable subordinator, respectively. These processes give space-fractional versions of the Skellam process, analogous to the time-fractional version of the Skellam process introduced in [10].

7.1. The Space-Fractional Skellam Process

In this section, we introduce the space-fractional Skellam process (SFSP). Further, for the introduced process, we derive the main results, such as the state probabilities and the governing difference-differential equations of the marginal pmf.
Definition 6
(SFSP). Let $N_1(t)$ and $N_2(t)$ be two independent homogeneous Poisson processes with intensities $\lambda_1 > 0$ and $\lambda_2 > 0$, respectively. Let $D_{\alpha_1}(t)$ and $D_{\alpha_2}(t)$ be two independent stable subordinators with indices $\alpha_1 \in (0,1)$ and $\alpha_2 \in (0,1)$, respectively, independent of the Poisson processes $N_1(t)$ and $N_2(t)$. The subordinated stochastic process
$$S_{\alpha_1,\alpha_2}(t) = N_1(D_{\alpha_1}(t)) - N_2(D_{\alpha_2}(t))$$
is called an SFSP.
Next, we derive the mgf of the SFSP; we then use the expression (17) for the marginal pmf of the SFPP to obtain the marginal pmf of the SFSP. The mgf is
$$M_{\theta}(t) = E\big[e^{\theta S_{\alpha_1,\alpha_2}(t)}\big] = E\big[e^{\theta (N_1(D_{\alpha_1}(t)) - N_2(D_{\alpha_2}(t)))}\big] = e^{-t\left[\lambda_1^{\alpha_1}(1-e^{\theta})^{\alpha_1} + \lambda_2^{\alpha_2}(1-e^{-\theta})^{\alpha_2}\right]}, \quad \theta \in \mathbb{R}.$$
In the next result, we obtain the state probabilities of the SFSP.
Theorem 3.
The pmf $H_k(t) = P(S_{\alpha_1,\alpha_2}(t) = k)$ of the SFSP is given by
$$\begin{aligned} H_k(t) &= \sum_{n=0}^{\infty} \frac{(-1)^{k}}{n!(n+k)!}\, {}_1\psi_1\big[(1,\alpha_1); (1-n-k,\alpha_1); -\lambda_1^{\alpha_1} t\big]\; {}_1\psi_1\big[(1,\alpha_2); (1-n,\alpha_2); -\lambda_2^{\alpha_2} t\big]\, I_{\{k \geq 0\}} \\ &\quad + \sum_{n=0}^{\infty} \frac{(-1)^{|k|}}{n!(n+|k|)!}\, {}_1\psi_1\big[(1,\alpha_1); (1-n,\alpha_1); -\lambda_1^{\alpha_1} t\big]\; {}_1\psi_1\big[(1,\alpha_2); (1-n-|k|,\alpha_2); -\lambda_2^{\alpha_2} t\big]\, I_{\{k < 0\}} \end{aligned}$$
for $k \in \mathbb{Z}$.
Proof. 
Note that $N_1(D_{\alpha_1}(t))$ and $N_2(D_{\alpha_2}(t))$ are independent; hence,
$$P(S_{\alpha_1,\alpha_2}(t) = k) = \sum_{n=0}^{\infty} P(N_1(D_{\alpha_1}(t)) = n+k)\, P(N_2(D_{\alpha_2}(t)) = n)\, I_{\{k \geq 0\}} + \sum_{n=0}^{\infty} P(N_1(D_{\alpha_1}(t)) = n)\, P(N_2(D_{\alpha_2}(t)) = n+|k|)\, I_{\{k < 0\}}.$$
Using (17), the result follows. □
In the next theorem, we discuss the governing differential-difference equation of the marginal pmf of SFSP.
Theorem 4.
The marginal distribution $H_k(t) = P(S_{\alpha_1,\alpha_2}(t) = k)$ of the SFSP satisfies the following differential-difference equations:
$$\frac{d}{dt} H_k(t) = -\lambda_1^{\alpha_1}(1-B)^{\alpha_1} H_k(t) - \lambda_2^{\alpha_2}(1-F)^{\alpha_2} H_k(t), \quad k \in \mathbb{Z},$$
$$\frac{d}{dt} H_0(t) = -\lambda_1^{\alpha_1} H_0(t) - \lambda_2^{\alpha_2} H_1(t),$$
with initial conditions $H_0(0) = 1$ and $H_k(0) = 0$ for $k \neq 0$.
Proof. 
The proof follows by using pgf. □
Remark 9.
The mgf of the SFSP solves the differential equation
$$\frac{d M_{\theta}(t)}{dt} = -M_{\theta}(t)\left(\lambda_1^{\alpha_1}(1-e^{\theta})^{\alpha_1} + \lambda_2^{\alpha_2}(1-e^{-\theta})^{\alpha_2}\right).$$
Proposition 12.
The Lévy density $\nu_{S_{\alpha_1,\alpha_2}}(x)$ of the SFSP is given by
$$\nu_{S_{\alpha_1,\alpha_2}}(x) = \lambda_1^{\alpha_1} \sum_{n_1=1}^{\infty} (-1)^{n_1+1}\binom{\alpha_1}{n_1}\delta_{n_1}(x) + \lambda_2^{\alpha_2} \sum_{n_2=1}^{\infty} (-1)^{n_2+1}\binom{\alpha_2}{n_2}\delta_{-n_2}(x).$$
Proof. 
The Lévy densities $\nu_{N_{\alpha_1}}(x)$ and $\nu_{N_{\alpha_2}}(x)$ of $N_1(D_{\alpha_1}(t))$ and $N_2(D_{\alpha_2}(t))$ are given by Equation (24); since the jumps of the second component enter with a negative sign, we obtain
$$\nu_{S_{\alpha_1,\alpha_2}}(x) = \nu_{N_{\alpha_1}}(x) + \nu_{N_{\alpha_2}}(-x),$$
which gives the desired result. □

7.2. Tempered Space-Fractional Skellam Process (TSFSP)

In this section, we present the tempered space-fractional Skellam process (TSFSP). We discuss the corresponding fractional difference-differential equations, marginal pmfs, and moments of this process.
Definition 7
(TSFSP). The TSFSP is obtained by taking the difference of two independent tempered space-fractional Poisson processes. Let $D_{\alpha_1,\mu_1}(t)$ and $D_{\alpha_2,\mu_2}(t)$ be two independent tempered stable subordinators (TSS) (see [28]), and let $N_1(t)$, $N_2(t)$ be two independent Poisson processes, independent of the TSS. Subsequently, the stochastic process
$$S_{\alpha_1,\alpha_2}^{\mu_1,\mu_2}(t) = N_1(D_{\alpha_1,\mu_1}(t)) - N_2(D_{\alpha_2,\mu_2}(t))$$
is called the TSFSP.
Theorem 5.
The pmf $H_k^{\mu_1,\mu_2}(t) = P(S_{\alpha_1,\alpha_2}^{\mu_1,\mu_2}(t) = k)$ is given, for $k \geq 0$, by
$$H_k^{\mu_1,\mu_2}(t) = e^{t(\mu_1^{\alpha_1}+\mu_2^{\alpha_2})} \sum_{n=0}^{\infty} \frac{(-1)^{k}}{n!(n+k)!}\left(\sum_{m=0}^{\infty}\frac{\mu_1^{m}\lambda_1^{-m}}{m!}\, {}_1\psi_1\big[(1,\alpha_1); (1-n-k-m,\alpha_1); -\lambda_1^{\alpha_1}t\big]\right) \times \left(\sum_{l=0}^{\infty}\frac{\mu_2^{l}\lambda_2^{-l}}{l!}\, {}_1\psi_1\big[(1,\alpha_2); (1-n-l,\alpha_2); -\lambda_2^{\alpha_2}t\big]\right),$$
and similarly, for $k < 0$,
$$H_k^{\mu_1,\mu_2}(t) = e^{t(\mu_1^{\alpha_1}+\mu_2^{\alpha_2})} \sum_{n=0}^{\infty} \frac{(-1)^{|k|}}{n!(n+|k|)!}\left(\sum_{m=0}^{\infty}\frac{\mu_1^{m}\lambda_1^{-m}}{m!}\, {}_1\psi_1\big[(1,\alpha_1); (1-n-m,\alpha_1); -\lambda_1^{\alpha_1}t\big]\right) \times \left(\sum_{l=0}^{\infty}\frac{\mu_2^{l}\lambda_2^{-l}}{l!}\, {}_1\psi_1\big[(1,\alpha_2); (1-n-|k|-l,\alpha_2); -\lambda_2^{\alpha_2}t\big]\right).$$
Proof. 
Because $N_1(D_{\alpha_1,\mu_1}(t))$ and $N_2(D_{\alpha_2,\mu_2}(t))$ are independent,
$$P\big(S_{\alpha_1,\alpha_2}^{\mu_1,\mu_2}(t) = k\big) = \sum_{n=0}^{\infty} P(N_1(D_{\alpha_1,\mu_1}(t)) = n+k)\, P(N_2(D_{\alpha_2,\mu_2}(t)) = n)\, I_{\{k \geq 0\}} + \sum_{n=0}^{\infty} P(N_1(D_{\alpha_1,\mu_1}(t)) = n)\, P(N_2(D_{\alpha_2,\mu_2}(t)) = n+|k|)\, I_{\{k < 0\}},$$
which gives the marginal pmf of the TSFSP using (26). □
Remark 10.
We use this expression to calculate the marginal distribution of the TSFSP. The mgf is obtained using a conditioning argument. Let $f_{\alpha,\mu}(x,t)$ be the density function of $D_{\alpha,\mu}(t)$. Subsequently,
$$E\big[e^{\theta N(D_{\alpha,\mu}(t))}\big] = \int_0^{\infty} E\big[e^{\theta N(u)}\big]\, f_{\alpha,\mu}(u,t)\, du = e^{-t\{(\lambda(1-e^{\theta})+\mu)^{\alpha} - \mu^{\alpha}\}}.$$
Using (54), the mgf of the TSFSP is
$$E\big[e^{\theta S_{\alpha_1,\alpha_2}^{\mu_1,\mu_2}(t)}\big] = E\big[e^{\theta N_1(D_{\alpha_1,\mu_1}(t))}\big]\, E\big[e^{-\theta N_2(D_{\alpha_2,\mu_2}(t))}\big] = e^{-t\left[\{(\lambda_1(1-e^{\theta})+\mu_1)^{\alpha_1} - \mu_1^{\alpha_1}\} + \{(\lambda_2(1-e^{-\theta})+\mu_2)^{\alpha_2} - \mu_2^{\alpha_2}\}\right]}.$$
Remark 11.
We have $E\big[S_{\alpha_1,\alpha_2}^{\mu_1,\mu_2}(t)\big] = t\big(\lambda_1\alpha_1\mu_1^{\alpha_1-1} - \lambda_2\alpha_2\mu_2^{\alpha_2-1}\big)$. Further, the covariance of the TSFSP can be obtained by using (29) and
$$\operatorname{Cov}\big[S_{\alpha_1,\alpha_2}^{\mu_1,\mu_2}(t), S_{\alpha_1,\alpha_2}^{\mu_1,\mu_2}(s)\big] = \operatorname{Cov}[N_1(D_{\alpha_1,\mu_1}(t)), N_1(D_{\alpha_1,\mu_1}(s))] + \operatorname{Cov}[N_2(D_{\alpha_2,\mu_2}(t)), N_2(D_{\alpha_2,\mu_2}(s))] = \operatorname{Var}\big(N_1(D_{\alpha_1,\mu_1}(\min(t,s)))\big) + \operatorname{Var}\big(N_2(D_{\alpha_2,\mu_2}(\min(t,s)))\big).$$
Proposition 13.
The Lévy density $\nu_{S_{\alpha_1,\alpha_2}^{\mu_1,\mu_2}}(x)$ of the TSFSP is given by
$$\nu_{S_{\alpha_1,\alpha_2}^{\mu_1,\mu_2}}(x) = \sum_{n_1=1}^{\infty}\mu_1^{\alpha_1-n_1}\binom{\alpha_1}{n_1}\lambda_1^{n_1}\sum_{l_1=1}^{n_1}\binom{n_1}{l_1}(-1)^{l_1+1}\delta_{l_1}(x) + \sum_{n_2=1}^{\infty}\mu_2^{\alpha_2-n_2}\binom{\alpha_2}{n_2}\lambda_2^{n_2}\sum_{l_2=1}^{n_2}\binom{n_2}{l_2}(-1)^{l_2+1}\delta_{-l_2}(x), \quad \mu_1, \mu_2 > 0.$$
Proof. 
The Lévy densities $\nu_{N_{\alpha_1,\mu_1}}(x)$ and $\nu_{N_{\alpha_2,\mu_2}}(x)$ of $N_1(D_{\alpha_1,\mu_1}(t))$ and $N_2(D_{\alpha_2,\mu_2}(t))$ are given by Equation (30); adding them, with the second one reflected since those jumps are negative, leads to
$$\nu_{S_{\alpha_1,\alpha_2}^{\mu_1,\mu_2}}(x) = \nu_{N_{\alpha_1,\mu_1}}(x) + \nu_{N_{\alpha_2,\mu_2}}(-x). \qquad \square$$

7.3. Simulation of SFSP and TSFSP

We present algorithms to simulate sample trajectories of the SFSP and the TSFSP. We use Python 3.7 and its libraries NumPy and Matplotlib for the simulations. It is worth mentioning that Python is open-source and freely available.
Simulation of SFSP: fix the values of the parameters α 1 , α 2 , λ 1 and λ 2 ;
  • Step-1: generate independent and uniformly distributed random vectors U, V of size 1000 each in the interval [ 0 , 1 ] ;
  • Step-2: generate the increments of the $\alpha_1$-stable subordinator $D_{\alpha_1}(t)$ (see [29]) with pdf $f_{\alpha_1}(x,t)$, using the relationship $D_{\alpha_1}(t+dt) - D_{\alpha_1}(t) \stackrel{d}{=} D_{\alpha_1}(dt) \stackrel{d}{=} (dt)^{1/\alpha_1} D_{\alpha_1}(1)$, where
    $$D_{\alpha_1}(1) = \frac{\sin(\alpha_1 \pi U)\,[\sin((1-\alpha_1)\pi U)]^{1/\alpha_1 - 1}}{[\sin(\pi U)]^{1/\alpha_1}\, |\log V|^{1/\alpha_1 - 1}};$$
  • Step-3: generate the increments of Poisson distributed rvs N 1 ( D α 1 ( d t ) ) with parameter λ 1 ( d t ) 1 / α 1 D α 1 ( 1 ) ;
  • Step-4: cumulative sum of increments gives the space fractional Poisson process N 1 ( D α 1 ( t ) ) sample trajectories; and,
  • Step-5: similarly generate $N_2(D_{\alpha_2}(t))$ and subtract the two paths to obtain the SFSP $S_{\alpha_1,\alpha_2}(t)$; a sketch implementing these steps is given below.
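A minimal NumPy implementation of these steps (a sketch, not the authors' original code; the parameter values follow Figure 1):

```python
import numpy as np

def stable_increments(alpha, dt, n, rng):
    """Increments of the alpha-stable subordinator over n steps of length dt,
    using D(dt) = (dt)**(1/alpha) * D(1) and the representation of D(1) in Step-2."""
    U = rng.uniform(0.0, 1.0, n)
    V = rng.uniform(0.0, 1.0, n)
    D1 = (np.sin(alpha * np.pi * U)
          * np.sin((1.0 - alpha) * np.pi * U) ** (1.0 / alpha - 1.0)
          / (np.sin(np.pi * U) ** (1.0 / alpha) * np.abs(np.log(V)) ** (1.0 / alpha - 1.0)))
    return dt ** (1.0 / alpha) * D1

def sfsp_path(alpha1, alpha2, lam1, lam2, T=1.0, n=1000, rng=None):
    """Sample path of the SFSP on [0, T] with n time steps (Steps 1-5)."""
    rng = rng or np.random.default_rng()
    dt = T / n
    inc1 = rng.poisson(lam1 * stable_increments(alpha1, dt, n, rng))  # jumps of N_1(D_a1(t))
    inc2 = rng.poisson(lam2 * stable_increments(alpha2, dt, n, rng))  # jumps of N_2(D_a2(t))
    return np.linspace(dt, T, n), np.cumsum(inc1) - np.cumsum(inc2)

if __name__ == "__main__":
    t, s = sfsp_path(0.6, 0.9, 6.0, 10.0, rng=np.random.default_rng(7))
    print(s[:10], s[-1])
```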
We next present the algorithm for generating the sample trajectories of TSFSP.
Simulation of TSFSP: fix the values of the parameters α 1 , α 2 , λ 1 , λ 2 , μ 1 and μ 2 .
Use the first two steps of the previous algorithm to generate the increments of the $\alpha_1$-stable subordinator $D_{\alpha_1}(t)$.
  • Step-3: to generate the increments of the TSS $D_{\alpha_1,\mu_1}(t)$ with pdf $f_{\alpha_1,\mu_1}(x,t)$, we use the following acceptance-rejection method:
    (a)
    generate the stable random variable D α 1 ( d t ) ;
    (b)
    generate uniform ( 0 , 1 ) rv W (independent from D α 1 );
    (c)
    if $W \leq e^{-\mu_1 D_{\alpha_1}(dt)}$, then set $D_{\alpha_1,\mu_1}(dt) = D_{\alpha_1}(dt)$ ("accept"); otherwise, go back to (a) ("reject"). Note that here we used $f_{\alpha_1,\mu_1}(x,t) = e^{-\mu_1 x + \mu_1^{\alpha_1} t} f_{\alpha_1}(x,t)$, which implies $\frac{f_{\alpha_1,\mu_1}(x,dt)}{c\, f_{\alpha_1}(x,dt)} = e^{-\mu_1 x}$ for $c = e^{\mu_1^{\alpha_1} dt}$, and this ratio is bounded between 0 and 1;
  • Step-4: generate a Poisson distributed rv $N_1(D_{\alpha_1,\mu_1}(dt))$ with parameter $\lambda_1 D_{\alpha_1,\mu_1}(dt)$;
  • Step-5: cumulative sum of increments gives the tempered space fractional Poisson process N 1 ( D α 1 , μ 1 ( t ) ) sample trajectories; and,
  • Step-6: similarly generate $N_2(D_{\alpha_2,\mu_2}(t))$, then take the difference of the two paths to get the sample paths of the TSFSP $S_{\alpha_1,\alpha_2}^{\mu_1,\mu_2}(t)$; a sketch of this algorithm follows.
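A minimal NumPy sketch of the TSFSP algorithm (again not the authors' original code), with the acceptance-rejection step for the tempered stable increments:

```python
import numpy as np

def stable_rv(alpha, dt, rng):
    """One increment D_alpha(dt) of the alpha-stable subordinator (Step-2)."""
    U, V = rng.uniform(0.0, 1.0, 2)
    D1 = (np.sin(alpha * np.pi * U)
          * np.sin((1.0 - alpha) * np.pi * U) ** (1.0 / alpha - 1.0)
          / (np.sin(np.pi * U) ** (1.0 / alpha) * np.abs(np.log(V)) ** (1.0 / alpha - 1.0)))
    return dt ** (1.0 / alpha) * D1

def tempered_stable_rv(alpha, mu, dt, rng):
    """One increment D_{alpha,mu}(dt) by the acceptance-rejection method of Step-3."""
    while True:
        d = stable_rv(alpha, dt, rng)            # (a) stable proposal
        if rng.uniform() <= np.exp(-mu * d):     # (c) accept with probability exp(-mu*d)
            return d

def tsfsp_path(a1, a2, lam1, lam2, mu1, mu2, T=1.0, n=1000, rng=None):
    """Sample path of the TSFSP on [0, T] with n time steps (Steps 1-6)."""
    rng = rng or np.random.default_rng()
    dt = T / n
    inc1 = [rng.poisson(lam1 * tempered_stable_rv(a1, mu1, dt, rng)) for _ in range(n)]
    inc2 = [rng.poisson(lam2 * tempered_stable_rv(a2, mu2, dt, rng)) for _ in range(n)]
    return np.linspace(dt, T, n), np.cumsum(inc1) - np.cumsum(inc2)

if __name__ == "__main__":
    t, s = tsfsp_path(0.6, 0.9, 6.0, 10.0, 0.2, 0.5, rng=np.random.default_rng(8))
    print(s[-1])
```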
The tail probability of the $\alpha$-stable subordinator behaves asymptotically as (see, e.g., [30])
$$P(D_{\alpha}(t) > x) \sim \frac{t}{\Gamma(1-\alpha)}\, x^{-\alpha}, \quad \text{as } x \to \infty.$$
For $\alpha_1 = 0.6$ and $\alpha_2 = 0.9$ and fixed $t$, the random variable $D_{\alpha_1}(t)$ is more likely to take large values than $D_{\alpha_2}(t)$. Thus, for the same intensity parameter $\lambda$ of the Poisson process, the process $N(D_{\alpha_1}(t))$ will generally have more arrivals than $N(D_{\alpha_2}(t))$ up to time $t$. This is evident from the trajectories of the SFSP in Figure 1, since the trajectories are biased towards the positive side. The TSFPP is a finite-mean process, whereas the SFPP has infinite mean; hence, SFSP paths are expected to exhibit large jumps, since there can be a large number of arrivals in any interval.

Author Contributions

Conceptualization, N.G., A.K. and N.L.; Methodology, A.K. and N.L.; Simulation, N.G.; Writing-Original Draft Preparation, N.G., A.K. and N.L.; Writing-Review & Editing, N.G., A.K. and N.L. All authors have read and agreed to the published version of the manuscript.

Funding

N.G. would like to thank Council of Scientific and Industrial Research (CSIR), India for supporting her research under the fellowship award number 09/1005(0021)2018-EMR-I. Further, A.K. would like to express his gratitude to Science and Engineering Research Board (SERB), India for the financial support under the MATRICS research grant MTR/2019/000286.

Acknowledgments

N.G. would like to thank the Council of Scientific and Industrial Research (CSIR), India, for the award of a research fellowship.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Skellam, J.G. The frequency distribution of the difference between two Poisson variables belonging to different populations. J. Roy. Statist. Soc. Ser. A 1946, 109, 296.
  2. Irwin, J.O. The frequency distribution of the difference between two independent variates following the same Poisson distribution. J. Roy. Statist. Soc. Ser. A 1937, 100, 415–416.
  3. Steutel, F.W.; Van Harn, K. Infinite Divisibility of Probability Distributions on the Real Line; Marcel Dekker: New York, NY, USA, 2004.
  4. Hwang, Y.; Kim, J.; Kweon, I. Sensor noise modeling using the Skellam distribution: Application to the color edge detection. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; pp. 1–8.
  5. Karlis, D.; Ntzoufras, I. Bayesian modeling of football outcomes: Using the Skellam's distribution for the goal difference. IMA J. Manag. Math. 2008, 20, 133–145.
  6. Bacry, E.; Delattre, M.; Hoffman, M.; Muzy, J. Modeling microstructure noise with mutually exciting point processes. Quant. Finance 2013, 13, 65–77.
  7. Bacry, E.; Delattre, M.; Hoffman, M.; Muzy, J. Some limit theorems for Hawkes processes and applications to financial statistics. Stoch. Proc. Appl. 2013, 123, 2475–2499.
  8. Barndorff-Nielsen, O.E.; Pollard, D.; Shephard, N. Integer-valued Lévy processes and low latency financial econometrics. Quant. Financ. 2011, 12, 587–605.
  9. Carr, P. Semi-static hedging of barrier options under Poisson jumps. Int. J. Theor. Appl. Financ. 2011, 14, 1091–1111.
  10. Kerss, A.; Leonenko, N.N.; Sikorskii, A. Fractional Skellam processes with applications to finance. Fract. Calc. Appl. Anal. 2014, 17, 532–551.
  11. Kumar, A.; Leonenko, N.N.; Pichler, A. Fractional risk process in insurance. Math. Financ. Econ. 2019, 529, 121–539.
  12. Buchak, K.V.; Sakhno, L.M. On the governing equations for Poisson and Skellam processes time-changed by inverse subordinators. Theory Probab. Math. Statist. 2018, 98, 87–99.
  13. Sengar, A.S.; Maheshwari, A.; Upadhye, N.S. Time-changed Poisson processes of order k. Stoch. Anal. Appl. 2020, 38, 124–148.
  14. Kostadinova, K.Y.; Minkova, L.D. On the Poisson process of order k. Pliska Stud. Math. Bulgar. 2012, 22, 117–128.
  15. Philippou, A.N. Distributions and Fibonacci polynomials of order k, longest runs, and reliability of consecutive k-out-of-n:F systems. In Fibonacci Numbers and Their Applications; Philippou, A.N., Ed.; D. Reidel: Dordrecht, The Netherlands, 1986; pp. 203–227.
  16. Fisher, R.A. The effects of methods of ascertainment upon the estimation of frequencies. Ann. Eugen. 1934, 6, 13–15.
  17. Cox, D.R.; Lewis, P.A.W. The Statistical Analysis of Series of Events; Wiley: New York, NY, USA, 1966.
  18. Beghin, L.; Macci, C. Fractional discrete processes: Compound and mixed Poisson representations. J. Appl. Probab. 2014, 51, 9–36.
  19. Abramowitz, M.; Stegun, I.A. Handbook of Mathematical Functions; Dover: New York, NY, USA, 1974.
  20. Sato, K.I. Lévy Processes and Infinitely Divisible Distributions; Cambridge University Press: Cambridge, UK, 1999.
  21. Schilling, R.L.; Song, R.; Vondracek, Z. Bernstein Functions: Theory and Applications; De Gruyter: Berlin, Germany, 2012.
  22. Orsingher, E.; Polito, F. The space-fractional Poisson process. Statist. Probab. Lett. 2012, 82, 852–858.
  23. Kilbas, A.; Srivastava, H.; Trujillo, J. Theory and Applications of Fractional Differential Equations; Elsevier: Amsterdam, The Netherlands, 2006.
  24. Beran, J. Statistics for Long-Memory Processes; Chapman & Hall: New York, NY, USA, 1994.
  25. Gupta, N.; Kumar, A.; Leonenko, N. Tempered Fractional Poisson Processes and Fractional Equations with Z-Transform. Stoch. Anal. Appl. 2020, 38, 939–957.
  26. Xia, W. On the distribution of running average of Skellam process. Int. J. Pure Appl. Math. 2018, 119, 461–473.
  27. Maheshwari, A.; Vellaisamy, P. On the long-range dependence of fractional Poisson and negative binomial processes. J. Appl. Probab. 2016, 53, 989–1000.
  28. Rosiński, J. Tempering stable processes. Stochastic Process. Appl. 2007, 117, 677–707.
  29. Cahoy, D.; Uchaikin, V.; Woyczynski, A. Parameter estimation from fractional Poisson process. J. Statist. Plann. Inference 2013, 140, 3106–3120.
  30. Samorodnitsky, G.; Taqqu, M.S. Stable Non-Gaussian Random Processes; Chapman and Hall: Boca Raton, FL, USA, 1994.
Figure 1. The left-hand figure shows the sample trajectories of the SFSP with parameters $\alpha_1 = 0.6$, $\alpha_2 = 0.9$, $\lambda_1 = 6$ and $\lambda_2 = 10$. The sample trajectories of the TSFSP are shown in the right figure, with parameters $\alpha_1 = 0.6$, $\alpha_2 = 0.9$, $\lambda_1 = 6$, $\lambda_2 = 10$, $\mu_1 = 0.2$ and $\mu_2 = 0.5$.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
