Article

On Local Time for Telegraph Processes

Nikita Ratanov and Mikhail Turov
Department of Mathematical Analysis, Chelyabinsk State University, 454001 Chelyabinsk, Russia
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(4), 934; https://doi.org/10.3390/math11040934
Submission received: 31 December 2022 / Revised: 1 February 2023 / Accepted: 10 February 2023 / Published: 12 February 2023
(This article belongs to the Special Issue Mathematics: 10th Anniversary)

Abstract

The article serves as an introduction to the theory of passage times associated with telegraph processes. The local time for the telegraph process is defined and analysed, and some limit results for telegraphic local times are provided.

1. Introduction

The main subject of research in this paper is the Goldstein–Kac asymmetric telegraph process, which describes the motion of a particle moving along the real line with a finite constant speed and switching between the two possible directions of motion (positive and negative) at random epochs driven by an (inhomogeneous) Poisson process.
The mathematical study of such a model began with the seminal paper by M. Kac [1], where the symmetric case was introduced; see also [2]. This approach presumes finiteness of both the speed of motion and the intensity of direction changes per unit of time, which makes the model an alternative to the classical Wiener process. This is all the more so because, under the appropriate scaling, the telegraph process converges weakly to a Brownian motion. At present, the theory of telegraph processes is a deep and well-developed area. A presentation of the current state of research in this area can be found in the monographs [3,4], especially in the recently published second edition [5] of the latter book.
In addition to their considerable theoretical value, these random processes have numerous and varied applications. The process can be used to model motion undergoing shocks that divert it from its current direction. Similar models based on a persistent random walk arise in physics, chemical kinetics, and various biological settings, for instance in models of gene development, population dynamics, or the propagation of nerve impulses (see, for example, [6,7,8,9,10] and the references therein).
Mathematical models of financial markets based on such persistent random motions have been intensively studied since [11] (for a complete review, we refer to the monographs [4,5]).
Many phenomena in the physical and biological sciences can be mathematically explained by considering the properties of level crossings by random processes. One of the main objectives of our article is to study the distributions of crossing times associated with telegraph processes.
It is worth noting that the distribution of the random crossing time and of the number of crossings is important, for example, in neural modelling; see the classical textbook [12], where neural firing is treated as a first passage time. This approach to neural models continues to develop; see, e.g., [13]. Level crossings of correlated processes are intensively studied by the Tchumatchenko group; see [14]. Nonlinear settings, which are very useful and interesting for understanding neural firing, are also beginning to be studied [15]; see also [16]. In [17], these ideas are applied in combination with the natural advantages of persistent random motions/telegraph processes.
Based on the well-studied results associated with these processes, we begin to explore a relatively new area related to telegraphic bridges, meanders, and excursions. It is worth noting that these new objects of interest require a preliminary detailed study of the following topics:
  • distribution of crossing time;
  • distribution of the number of crossings;
  • distributions of return time to the starting point.
The article is organised as follows. First, we recall the well-known formulae for the distribution of the first passage time (Theorem 1). These results are then used to analyse the distribution of the return time $\tau_0$ (Theorem 2). It turns out that the distribution of $\tau_0$ is defective: with positive probability the telegraph process does not return to $0$ (Theorem 3). In the case of a proper distribution, the mean can be obtained explicitly. We also derive formulae for the distribution of the last crossing time within a given time interval (Theorem 4).
Section 3 is devoted to the definition and analysis of local time for the telegraph process.

2. Telegraph Processes and Passage Time Distributions

Let us first recall the definition and main properties of asymmetric telegraph processes. A detailed presentation can be found in [5] (Chapter 3: Asymmetric Jump-Telegraph Processes).
Consider a particle moving along the real line with two alternating velocities, $c_0>0$ and $-c_1<0$, starting from the origin. The change of velocity is driven by an inhomogeneous Poisson process $N(t)$, $t\ge0$, with alternating rates $\lambda_0,\lambda_1>0$. The current position $T(t)$ of the moving particle is given by
$$
T(t)=\int_0^t(-1)^{\varepsilon(s)}c_{\varepsilon(s)}\,\mathrm{d}s,\qquad t\ge0. \tag{1}
$$
Here, $\varepsilon=\varepsilon(t)\in\{0,1\}$ is a two-state Markov process with the infinitesimal generator matrix
$$
\begin{pmatrix}-\lambda_0 & \lambda_0\\ \lambda_1 & -\lambda_1\end{pmatrix}.
$$
The starting velocity is determined by the initial state $\varepsilon(0)$ of the Markov process $\varepsilon=\varepsilon(t)$.
The distribution of $T(t)$ is determined by two pairs of parameters, $(v_0,\lambda_0)$ and $(v_1,\lambda_1)$, which correspond to the two alternating states of the process $t\mapsto T(t)$, $t\ge0$.
Let the telegraph process $T=T(t)$, $t\ge0$, (1), be determined by the two alternating states $(c_0,\lambda_0)$ and $(-c_1,\lambda_1)$, $c_0,c_1>0$. That is, the velocities $v_0=c_0$ and $v_1=-c_1$ have opposite signs, and the process starts from the origin.
The probability triple $(\Omega,\mathcal{F},P)$, on which the process $T(t)$ is defined, can be divided into two parts, $(\Omega_0,\mathcal{F}_0,P_0)$ and $(\Omega_1,\mathcal{F}_1,P_1)$, according to the initial state $\varepsilon(0)$ of the underlying Markov process $\varepsilon=\varepsilon(t)$. Here, $\Omega_0=\{A\cap\{\varepsilon(0)=0\}\mid A\subset\Omega\}$, $\Omega_1=\{A\cap\{\varepsilon(0)=1\}\mid A\subset\Omega\}$, and $\mathcal{F}_i=\mathcal{F}\cap\{\varepsilon(0)=i\}$, $P_i=P\{\,\cdot\mid\varepsilon(0)=i\}$, $i\in\{0,1\}$.
The conditional distribution of $T(t)$ for a given initial state $\varepsilon(0)$ can be expressed as
$$
\begin{aligned}
\bigl[T(t)\mid\varepsilon(0)=0\bigr]&\stackrel{d}{=}c_0t\,\mathbb{1}_{\{\tau>t\}}+\bigl(c_0\tau+\bigl[T(t-\tau)\mid\varepsilon(0)=1\bigr]\bigr)\mathbb{1}_{\{\tau<t\}},\\
\bigl[T(t)\mid\varepsilon(0)=1\bigr]&\stackrel{d}{=}-c_1t\,\mathbb{1}_{\{\tau>t\}}+\bigl(-c_1\tau+\bigl[T(t-\tau)\mid\varepsilon(0)=0\bigr]\bigr)\mathbb{1}_{\{\tau<t\}}.
\end{aligned} \tag{2}
$$
Here, the time $\tau=\tau^{(i)}$ of the first switching has an exponential distribution, $\mathrm{Exp}(\lambda_i)$, which depends on the initial state $i$ of the underlying process $\varepsilon$, $\varepsilon(0)=i\in\{0,1\}$.
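The dynamics just described are easy to simulate. The following minimal sketch (not part of the original article; it assumes NumPy, and the helper name and the parameter values are purely illustrative) draws the position $T(t)$ by alternating exponential holding times and empirically reproduces the no-switching probability $P_0\{T(t)=c_0t\}=e^{-\lambda_0t}$ stated just below.

```python
import numpy as np

def telegraph_position(t, c0, c1, lam0, lam1, state0=0, rng=None):
    """Sample T(t) for the asymmetric telegraph process started at the origin:
    velocity +c0 (rate lam0) in state 0, velocity -c1 (rate lam1) in state 1."""
    rng = np.random.default_rng(rng)
    pos, s, state = 0.0, 0.0, state0
    while True:
        rate, vel = (lam0, c0) if state == 0 else (lam1, -c1)
        hold = rng.exponential(1.0 / rate)   # exponential holding time in the current state
        if s + hold >= t:                    # the horizon t is reached before the next switch
            return pos + vel * (t - s)
        pos, s, state = pos + vel * hold, s + hold, 1 - state

# empirical check of P0{T(t) = c0*t} = exp(-lam0*t)  (illustrative parameters)
c0, c1, lam0, lam1, t = 2.0, 1.0, 1.5, 1.0, 1.0
rng = np.random.default_rng(1)
draws = np.array([telegraph_position(t, c0, c1, lam0, lam1, rng=rng) for _ in range(20000)])
print(np.mean(np.isclose(draws, c0 * t)), np.exp(-lam0 * t))   # both are close to 0.22
```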
By definition, $T(t)\in[-c_1t,\,c_0t]$ a.s., and
$$
P_0\{T(t)=c_0t\}=e^{-\lambda_0t},\qquad P_1\{T(t)=-c_1t\}=e^{-\lambda_1t},\qquad t>0,
$$
which corresponds to straight-line motion without switching. The explicit expressions for the transition probability density functions
$$
p_i(t,x;n)=P_i\{T(t)\in\mathrm{d}x,\ N(t)=n\}/\mathrm{d}x,\qquad i\in\{0,1\},\ t>0,\ n\in\mathbb{N},
$$
can be written, using (2), separately for even and odd $n$ in the form
$$
\begin{aligned}
p_0(t,x;2n)&=n\,\kappa_n\,\xi_0(t,x)^{n}\,\xi_1(t,x)^{n-1}\,\theta(t,x),\\
p_1(t,x;2n)&=n\,\kappa_n\,\xi_0(t,x)^{n-1}\,\xi_1(t,x)^{n}\,\theta(t,x),\qquad n\ge1,
\end{aligned} \tag{3}
$$
and
$$
\begin{aligned}
p_0(t,x;2n+1)&=\lambda_0\,\kappa_n\,\xi_0(t,x)^{n}\,\xi_1(t,x)^{n}\,\theta(t,x),\\
p_1(t,x;2n+1)&=\lambda_1\,\kappa_n\,\xi_0(t,x)^{n}\,\xi_1(t,x)^{n}\,\theta(t,x),\qquad n\ge0,
\end{aligned} \tag{4}
$$
where $-c_1t<x<c_0t$. Here, we use the following notations:
$$
\xi_0(t,x)=\frac{x+c_1t}{c_0+c_1},\qquad
\xi_1(t,x)=t-\xi_0(t,x)=\frac{c_0t-x}{c_0+c_1},\qquad
\kappa_n=\frac{\lambda_0^{n}\lambda_1^{n}}{(n!)^{2}},
$$
and
$$
\theta(t,x)=\frac{1}{c_0+c_1}\exp\bigl(-\lambda_0\xi_0(t,x)-\lambda_1\xi_1(t,x)\bigr).
$$
See, e.g., Section 3.1.1 of [5].
Summing up in (3) and (4), one can obtain the transition probabilities corresponding to an even or odd number of switchings: for $-c_1t<x<c_0t$,
$$
\begin{aligned}
p_0^{\mathrm{even}}(t,\mathrm{d}x)&=P_0\{T(t)\in\mathrm{d}x,\ N(t)\ \text{is even}\}\\
&=e^{-\lambda_0t}\delta_{c_0t}(\mathrm{d}x)
+\sqrt{\lambda_0\lambda_1\,\frac{x+c_1t}{c_0t-x}}\;
I_1\!\left(\frac{2\sqrt{\lambda_0\lambda_1(c_0t-x)(x+c_1t)}}{c_0+c_1}\right)\theta(t,x)\,\mathrm{d}x,\\
p_1^{\mathrm{even}}(t,\mathrm{d}x)&=P_1\{T(t)\in\mathrm{d}x,\ N(t)\ \text{is even}\}\\
&=e^{-\lambda_1t}\delta_{-c_1t}(\mathrm{d}x)
+\sqrt{\lambda_0\lambda_1\,\frac{c_0t-x}{x+c_1t}}\;
I_1\!\left(\frac{2\sqrt{\lambda_0\lambda_1(c_0t-x)(x+c_1t)}}{c_0+c_1}\right)\theta(t,x)\,\mathrm{d}x,
\end{aligned} \tag{5}
$$
and
$$
\begin{aligned}
p_0^{\mathrm{odd}}(t,\mathrm{d}x)&=P_0\{T(t)\in\mathrm{d}x,\ N(t)\ \text{is odd}\}
=\lambda_0\,I_0\!\left(\frac{2\sqrt{\lambda_0\lambda_1(c_0t-x)(x+c_1t)}}{c_0+c_1}\right)\theta(t,x)\,\mathrm{d}x,\\
p_1^{\mathrm{odd}}(t,\mathrm{d}x)&=P_1\{T(t)\in\mathrm{d}x,\ N(t)\ \text{is odd}\}
=\lambda_1\,I_0\!\left(\frac{2\sqrt{\lambda_0\lambda_1(c_0t-x)(x+c_1t)}}{c_0+c_1}\right)\theta(t,x)\,\mathrm{d}x.
\end{aligned} \tag{6}
$$
Here, $I_0$ and $I_1$ are the modified Bessel functions of the first kind,
$$
I_0(x)=1+\sum_{n=1}^{\infty}\frac{(x/2)^{2n}}{(n!)^{2}},\qquad
I_1(x)=\sum_{n=1}^{\infty}\frac{(x/2)^{2n-1}}{(n-1)!\,n!}. \tag{7}
$$
Let $\mathcal{T}(x)$ be the first passage time through the level $x$, that is, $\mathcal{T}(x)=\min\{t>0:\ T(t)=x\}$.
The distribution of $\mathcal{T}(x)$ is known. For the sake of completeness, we present the exact result. The formulae differ according to whether the process starts moving towards the threshold $x$ or in the opposite direction.
Theorem 1. 
Let $x\neq0$. Then the distribution of $\mathcal{T}(x)$ is given by
$$
P_0\{\mathcal{T}(x)\in\mathrm{d}t\}=
\begin{cases}
e^{-\lambda_0x/c_0}\,\delta_{x/c_0}(\mathrm{d}t)+\lambda_0\lambda_1\,x\,I_1(t,x)\,\theta_0(t,x)\,\mathrm{d}t, & x>0,\\[4pt]
\dfrac{\lambda_0}{\xi_1(t,x)}\bigl[-x\,I_0(t,x)+c_0\,\xi_0(t,x)\,I_1(t,x)\bigr]\theta_1(t,x)\,\mathrm{d}t, & x<0,
\end{cases} \tag{8}
$$
and
$$
P_1\{\mathcal{T}(x)\in\mathrm{d}t\}=
\begin{cases}
\dfrac{\lambda_1}{\xi_0(t,x)}\bigl[x\,I_0(t,x)+c_1\,\xi_1(t,x)\,I_1(t,x)\bigr]\theta_0(t,x)\,\mathrm{d}t, & x>0,\\[4pt]
e^{\lambda_1x/c_1}\,\delta_{-x/c_1}(\mathrm{d}t)-\lambda_0\lambda_1\,x\,I_1(t,x)\,\theta_1(t,x)\,\mathrm{d}t, & x<0.
\end{cases} \tag{9}
$$
Here,
$$
I_0(t,x)=I_0\Bigl(2\bigl[\lambda_0\lambda_1\xi_0(t,x)\xi_1(t,x)\bigr]^{1/2}\Bigr),\qquad
I_1(t,x)=\frac{I_1\Bigl(2\bigl[\lambda_0\lambda_1\xi_0(t,x)\xi_1(t,x)\bigr]^{1/2}\Bigr)}{\bigl[\lambda_0\lambda_1\xi_0(t,x)\xi_1(t,x)\bigr]^{1/2}},
$$
$\theta_0(t,x)=\theta(t,x)\,\mathbb{1}_{\{t>x/c_0\}}$ for $x>0$, and $\theta_1(t,x)=\theta(t,x)\,\mathbb{1}_{\{t>-x/c_1\}}$ for $x<0$.
Proof. 
See (2.7)–(2.8) in [18] and [5]. See also [19], where slightly erroneous formulae are presented for the symmetric case $\lambda_0=\lambda_1$, $c_0=c_1$. □
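A quick Monte Carlo corroboration of (8) is possible (this sketch is not from the paper; the helper name and the parameter values are purely illustrative, NumPy/SciPy assumed). It compares the simulated distribution function of $\mathcal{T}(x)$ for $x>0$ under $P_0$ with the atom plus the integrated continuous part of (8).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i1

def first_passage(x, c0, c1, lam0, lam1, rng, t_cap=200.0):
    """First passage time of T through a level x > 0 under P0 (capped at t_cap)."""
    pos, s, state = 0.0, 0.0, 0
    while s < t_cap:
        rate, vel = (lam0, c0) if state == 0 else (lam1, -c1)
        hold = rng.exponential(1.0 / rate)
        if state == 0 and pos + c0 * hold >= x:     # the level is crossed during an up move
            return s + (x - pos) / c0
        pos, s, state = pos + vel * hold, s + hold, 1 - state
    return np.inf

c0, c1, lam0, lam1, x, t = 1.0, 1.5, 1.0, 2.0, 0.7, 2.0
rng = np.random.default_rng(5)
fp = np.array([first_passage(x, c0, c1, lam0, lam1, rng) for _ in range(20000)])

def dens(s):        # continuous part of (8) for x > 0
    xi0 = (x + c1 * s) / (c0 + c1)
    xi1 = s - xi0
    w = np.sqrt(lam0 * lam1 * xi0 * xi1)
    theta = np.exp(-lam0 * xi0 - lam1 * xi1) / (c0 + c1)
    return lam0 * lam1 * x * i1(2 * w) / w * theta

cdf = np.exp(-lam0 * x / c0) + quad(dens, x / c0, t)[0]
print(np.mean(fp <= t), cdf)        # both approximate P0{T(x) <= t}
```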
We are especially interested in explicit formulae for the distribution of the return times to the origin. Although Formulae (8) and (9) are well known, the distributions of the first and of the last (within the time interval $[0,T]$) return to the origin are much less studied.
Let
$$
\tau_0=\min\{t>0:\ T(t)=0\}\qquad\text{and}\qquad \tau_1=\max\{t\in[0,T]:\ T(t)=0\}, \tag{10}
$$
which are the times of the first and of the last (within the time interval $[0,T]$) visit to the origin by the telegraph process $T=T(t)$. We set $\tau_0=\infty$ if the process never returns to the origin, that is, if the set $\{t>0:\ T(t)=0\}$ is empty.
The explicit form of the distribution of $\tau_0$ can be obtained from Theorem 1 using the following auxiliary result.
Lemma 1. 
As $x\to0$, we have
$$
\bigl[\mathcal{T}(x)\mid\varepsilon(0)=0\bigr]\to0\quad(x\downarrow0),\qquad
\bigl[\mathcal{T}(x)\mid\varepsilon(0)=1\bigr]\to0\quad(x\uparrow0),
$$
and
$$
\bigl[\mathcal{T}(x)\mid\varepsilon(0)=0\bigr]\to\bigl[\tau_0\mid\varepsilon(0)=0\bigr]\quad(x\uparrow0),\qquad
\bigl[\mathcal{T}(x)\mid\varepsilon(0)=1\bigr]\to\bigl[\tau_0\mid\varepsilon(0)=1\bigr]\quad(x\downarrow0),
\qquad\text{a.s.} \tag{11}
$$
Proof. 
The limit as $x\to0$ of $\mathcal{T}(x)=\mathcal{T}(x,\omega)$ exists path-by-path, since $\mathcal{T}(x)$, $x>0$, is path-wise monotone, that is, $\mathcal{T}(x_1,\omega)<\mathcal{T}(x_2,\omega)$ for $0<x_1<x_2$, $\omega\in\Omega$ (and similarly for $x<0$). Further,
$$
P_0\bigl\{\mathcal{T}(x)=x/c_0\bigr\}=e^{-\lambda_0x/c_0}\to1,\qquad\text{as } x\downarrow0.
$$
Therefore, $[\mathcal{T}(x)\mid\varepsilon(0)=0]\to0$ a.s. as $x\downarrow0$. The proof for $[\mathcal{T}(x)\mid\varepsilon(0)=1]$, $x\uparrow0$, is symmetric.
Properties (11) are proved similarly. □
Theorem 2. 
The probability density functions of $\tau_0$ are given by
$$
P_0\{\tau_0\in\mathrm{d}t\}=k\,\Phi(t)\,\mathrm{d}t,\qquad
P_1\{\tau_0\in\mathrm{d}t\}=k^{-1}\,\Phi(t)\,\mathrm{d}t, \tag{12}
$$
where
$$
k=\Bigl(\frac{\lambda_0c_1}{\lambda_1c_0}\Bigr)^{1/2}=\Bigl(\frac{\nu_0}{\nu_1}\Bigr)^{1/2} \tag{13}
$$
and
$$
\Phi(t)=t^{-1}I_1(2\alpha t)\exp(-\beta t),\qquad t>0. \tag{14}
$$
Here,
$$
\alpha=\frac{\sqrt{\lambda_0\lambda_1c_0c_1}}{c_0+c_1},\qquad
\beta=\frac{\lambda_1c_0+\lambda_0c_1}{c_0+c_1}, \tag{15}
$$
and $\nu_0=\lambda_0/c_0$ and $\nu_1=\lambda_1/c_1$ are the switching intensities per unit path.
Proof. 
By virtue of (11),
$$
P_0\{\tau_0\in\mathrm{d}t\}=\lim_{x\uparrow0}P_0\{\mathcal{T}(x)\in\mathrm{d}t\}
\qquad\text{and}\qquad
P_1\{\tau_0\in\mathrm{d}t\}=\lim_{x\downarrow0}P_1\{\mathcal{T}(x)\in\mathrm{d}t\}.
$$
Notice that
$$
\frac{\xi_0(t,0)}{\xi_1(t,0)}=\frac{c_1}{c_0},\qquad
\xi_0(t,0)\cdot\xi_1(t,0)=\frac{c_0c_1}{(c_0+c_1)^{2}}\,t^{2},
$$
and
$$
\theta(t,0)=\frac{\exp(-\beta t)}{c_0+c_1}.
$$
Therefore, passing to the limit in (8), we obtain
$$
P_0\{\tau_0\in\mathrm{d}t\}
=\lim_{x\uparrow0}\frac{\lambda_0}{\xi_1(t,x)}\bigl[-x\,I_0(t,x)+c_0\,\xi_0(t,x)\,I_1(t,x)\bigr]\theta_1(t,x)\,\mathrm{d}t
=\lambda_0c_1\,\frac{I_1(2\alpha t)}{\alpha t}\cdot\frac{\exp(-\beta t)}{c_0+c_1}\,\mathrm{d}t
=k\,\Phi(t)\,\mathrm{d}t,
$$
where $k$ and $\Phi(t)$ are defined by (13)–(15).
The proof for $P_1\{\tau_0\in\mathrm{d}t\}$ is symmetric. □
The survival functions $S_0(t)=P_0\{\tau_0>t\}=P_0\{m_t=0\}$ and $S_1(t)=P_1\{\tau_0>t\}=P_1\{M_t=0\}$ corresponding to $\tau_0$ can be expressed by using (12)–(14). Indeed, by virtue of (12),
$$
S_0(t)=k\,\Psi(t)+D_0,\qquad S_1(t)=k^{-1}\,\Psi(t)+D_1. \tag{16}
$$
Here, $m_t$ and $M_t$ are the running minimum and maximum, respectively, $m_t=\min_{u\in[0,t]}T(u)$ and $M_t=\max_{u\in[0,t]}T(u)$,
$$
\Psi(t)=\int_t^{\infty}\Phi(s)\,\mathrm{d}s=\int_t^{\infty}s^{-1}I_1(2\alpha s)\exp(-\beta s)\,\mathrm{d}s, \tag{17}
$$
and $D_0=P_0\{\tau_0=\infty\}$, $D_1=P_1\{\tau_0=\infty\}$ are the probabilities of not returning to $0$.
By the definition (7) of the Bessel function $I_1$, we have
$$
\Psi(t)=\sum_{n=0}^{\infty}\frac{\alpha^{2n+1}}{n!\,(n+1)!}\int_t^{\infty}s^{2n}\exp(-\beta s)\,\mathrm{d}s
=\sum_{n=0}^{\infty}\frac{(\alpha/\beta)^{2n+1}}{n!\,(n+1)!}\,\Gamma(2n+1,\beta t), \tag{18}
$$
where $\Gamma(s,x)=\int_x^{\infty}t^{s-1}e^{-t}\,\mathrm{d}t$ is the upper incomplete gamma function.
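As a purely numerical aside (not from the paper), $\Psi(t)$ can be evaluated either from the series (18) or, as in the hypothetical helpers below, by direct quadrature of (17); together with (19), this also gives the survival functions (16). The scaled Bessel function `i1e(x) = exp(-x) * I_1(x)` keeps the integrand stable for large arguments.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i1e

def psi(t, c0, c1, lam0, lam1):
    """Psi(t) = int_t^inf s^{-1} I_1(2*alpha*s) exp(-beta*s) ds, cf. (14) and (17)."""
    alpha = np.sqrt(lam0 * lam1 * c0 * c1) / (c0 + c1)
    beta = (lam1 * c0 + lam0 * c1) / (c0 + c1)
    integrand = lambda s: i1e(2 * alpha * s) * np.exp(-(beta - 2 * alpha) * s) / s
    return quad(integrand, t, np.inf)[0]

def survival(t, c0, c1, lam0, lam1):
    """Survival functions S0(t), S1(t) from (16), with D0, D1 taken from (19)."""
    k = np.sqrt(lam0 * c1 / (lam1 * c0))
    p = psi(t, c0, c1, lam0, lam1)
    return k * p + max(0.0, 1.0 - k**2), p / k + max(0.0, 1.0 - k**(-2))

print(psi(0.0, 1.0, 1.0, 1.0, 1.0))         # symmetric case: Psi(0) = 1 (tau_0 is proper)
print(survival(2.0, 1.0, 2.0, 2.0, 1.0))    # an asymmetric example with D0 = 0, D1 = 3/4
```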
The properties of the distribution of τ 0 can be detailed as follows.
Theorem 3. 
1. In the asymmetric case, i.e., if $k\neq1$, the distribution of $\tau_0$ is defective. More precisely, the probabilities of not returning to the origin are given by
$$
D_0=P_0\{\tau_0=\infty\}=\max(0,\,1-k^{2}),\qquad
D_1=P_1\{\tau_0=\infty\}=\max(0,\,1-k^{-2}). \tag{19}
$$
2. The expectation of $\tau_0$ is given by
$$
\mathrm{E}[\tau_0\mid\varepsilon(0)=0]=
\begin{cases}
\dfrac{c_0^{-1}+c_1^{-1}}{\nu_0-\nu_1}, & \text{if } \nu_0>\nu_1,\\[6pt]
\infty, & \text{otherwise},
\end{cases} \tag{20}
$$
and
$$
\mathrm{E}[\tau_0\mid\varepsilon(0)=1]=
\begin{cases}
\dfrac{c_0^{-1}+c_1^{-1}}{\nu_1-\nu_0}, & \text{if } \nu_1>\nu_0,\\[6pt]
\infty, & \text{otherwise},
\end{cases} \tag{21}
$$
where $\nu_0=\lambda_0/c_0$, $\nu_1=\lambda_1/c_1$.
3. The survival functions $S_0(t)$ and $S_1(t)$ satisfy the condition
$$
S_0(t),\,S_1(t)\sim t^{-\gamma}L(t),\qquad t\to\infty, \tag{22}
$$
where $L$ is slowly varying, if and only if $\nu_0=\nu_1$ and $\gamma=1/2$.
Notice that in the symmetric case, $\nu_0=\nu_1$, the distribution of $\tau_0$ is proper, but the mean value $\mathrm{E}\,\tau_0$ does not exist.
Proof. 
Due to (5.2.13.9 of [20]), from (18) we obtain
$$
\Psi(0)=\sum_{n=0}^{\infty}\Bigl(\frac{\alpha}{\beta}\Bigr)^{2n+1}\frac{(2n)!}{n!\,(n+1)!}
=z^{-1/2}\sum_{n=0}^{\infty}\frac{(2n)!}{n!\,(n+1)!}\,z^{n+1}
=\frac{1}{2}\,z^{-1/2}\Bigl(1-(1-4z)^{1/2}\Bigr), \tag{23}
$$
where $z=\alpha^{2}/\beta^{2}$.
Note that
$$
1-4z=\frac{\beta^{2}-4\alpha^{2}}{\beta^{2}}
=\Bigl(\frac{\lambda_1c_0-\lambda_0c_1}{\lambda_1c_0+\lambda_0c_1}\Bigr)^{2}
=\Bigl(\frac{\nu_0-\nu_1}{\nu_0+\nu_1}\Bigr)^{2},
$$
which, by virtue of (23), gives
$$
\Psi(0)=\frac{\beta}{2\alpha}\Bigl(1-\frac{|\nu_0-\nu_1|}{\nu_0+\nu_1}\Bigr)
=\frac{\nu_0+\nu_1}{2\sqrt{\nu_0\nu_1}}\Bigl(1-\frac{|\nu_0-\nu_1|}{\nu_0+\nu_1}\Bigr)
=\min\Bigl(\Bigl(\frac{\nu_1}{\nu_0}\Bigr)^{1/2},\Bigl(\frac{\nu_0}{\nu_1}\Bigr)^{1/2}\Bigr).
$$
Since
$$
D_0=1-\int_0^{\infty}P_0\{\tau_0\in\mathrm{d}t\},\qquad
D_1=1-\int_0^{\infty}P_1\{\tau_0\in\mathrm{d}t\},
$$
by virtue of (12), we obtain
$$
D_0=1-\Bigl(\frac{\nu_0}{\nu_1}\Bigr)^{1/2}\Psi(0)=\max(0,\,1-k^{2}),
$$
and, similarly, $D_1=\max(0,\,1-k^{-2})$, which is equivalent to (19).
The expectation is obtained by a direct computation. Consider first $\mathrm{E}[\tau_0\mid\varepsilon(0)=0]$.
By (19), $\mathrm{E}[\tau_0\mid\varepsilon(0)=0]=\infty$ if $\nu_0<\nu_1$.
Let $\nu_0\ge\nu_1$. By virtue of (12) and (14),
$$
\mathrm{E}[\tau_0\mid\varepsilon(0)=0]
=\Bigl(\frac{\nu_0}{\nu_1}\Bigr)^{1/2}\int_0^{\infty}t\,\Phi(t)\,\mathrm{d}t
=\Bigl(\frac{\nu_0}{\nu_1}\Bigr)^{1/2}\int_0^{\infty}I_1(2\alpha t)\exp(-\beta t)\,\mathrm{d}t.
$$
Since
$$
I_1(z)\sim\frac{e^{z}}{\sqrt{2\pi z}} \tag{24}
$$
as $z\to\infty$, the latter integral converges if and only if $\beta>2\alpha$.
Notice that, by definition, $\beta-2\alpha=(c_0+c_1)^{-1}\bigl(\sqrt{\lambda_1c_0}-\sqrt{\lambda_0c_1}\bigr)^{2}$. Hence, the expectation $\mathrm{E}[\tau_0\mid\varepsilon(0)=0]$ can be finite only if $\nu_0\neq\nu_1$. Thus, in the case $\nu_0>\nu_1$, using (2.15.3.2 of [21]), we obtain
$$
\mathrm{E}[\tau_0\mid\varepsilon(0)=0]
=\frac{2\alpha\,(\nu_0/\nu_1)^{1/2}}{(\beta^{2}-4\alpha^{2})^{1/2}\bigl(\beta+(\beta^{2}-4\alpha^{2})^{1/2}\bigr)}
=\frac{2\alpha\,(\nu_0/\nu_1)^{1/2}\,(c_0+c_1)^{2}}{|\lambda_1c_0-\lambda_0c_1|\,\bigl(\lambda_1c_0+\lambda_0c_1+|\lambda_1c_0-\lambda_0c_1|\bigr)}
=\frac{c_0^{-1}+c_1^{-1}}{\nu_0-\nu_1}.
$$
Formulae for $\mathrm{E}[\tau_0\mid\varepsilon(0)=1]$ are obtained in the same way.
To prove part 3, consider the mapping $t\mapsto t^{\gamma}\Psi(t)$, $t\ge0$. Due to (17) and (24) (the first equality below follows by l'Hôpital's rule), for $r>0$,
$$
\lim_{t\to\infty}\frac{(rt)^{\gamma}\Psi(rt)}{t^{\gamma}\Psi(t)}
=\lim_{t\to\infty}r^{\gamma}\,\frac{I_1(2\alpha rt)\exp(-\beta rt)}{I_1(2\alpha t)\exp(-\beta t)}
=\lim_{t\to\infty}r^{\gamma-1/2}\exp\bigl(-(\beta-2\alpha)(r-1)t\bigr).
$$
Therefore, the function $t^{\gamma}\Psi(t)$ is slowly varying, that is,
$$
\lim_{t\to\infty}\frac{(rt)^{\gamma}\Psi(rt)}{t^{\gamma}\Psi(t)}=1\qquad\text{for every } r>0,
$$
only if $2\alpha=\beta$, that is, only in the symmetric case, $\nu_0=\nu_1$, and with $\gamma=1/2$. This gives part 3 of the theorem. □
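Theorem 3 is easy to probe by simulation. The sketch below (not from the paper; names and parameter values are hypothetical, NumPy assumed) estimates the non-return probability $D_0$ by truncating paths at a large horizon and compares it with $\max(0,1-k^2)$ from (19). With the roles of the states reversed, so that $\nu_0>\nu_1$, the empirical mean of the finite return times matches (20) in the same way.

```python
import numpy as np

def first_return(c0, c1, lam0, lam1, t_max, rng):
    """First return time tau_0 under P0 (start at 0 in state 0), truncated at t_max;
    returns np.inf if the path has not come back to the origin by time t_max."""
    pos, s, state = 0.0, 0.0, 0
    while s < t_max:
        rate, vel = (lam0, c0) if state == 0 else (lam1, -c1)
        hold = rng.exponential(1.0 / rate)
        if state == 1 and pos - c1 * hold <= 0.0:     # the origin is hit during a down move
            return s + pos / c1
        pos, s, state = pos + vel * hold, s + hold, 1 - state
    return np.inf

rng = np.random.default_rng(2)
c0, c1, lam0, lam1 = 2.0, 1.0, 1.0, 2.0               # nu0 = 1/2 < nu1 = 2, so k^2 = 1/4
k2 = lam0 * c1 / (lam1 * c0)
taus = np.array([first_return(c0, c1, lam0, lam1, 200.0, rng) for _ in range(10000)])
print(np.mean(np.isinf(taus)), max(0.0, 1.0 - k2))    # both are close to 0.75, cf. (19)
```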
Remark 1. 
Due to (16) and (19), the survival functions $S_0(t)$ and $S_1(t)$ can be expressed as follows:
$$
S_0(t)=k\,\Psi(t)+\max(0,\,1-k^{2}),\qquad
S_1(t)=k^{-1}\,\Psi(t)+\max(0,\,1-k^{-2}). \tag{25}
$$
In the symmetric case, $\nu_0=\nu_1$, that is, if $k=1$, the survival functions $S_0$ and $S_1$ coincide with $\Psi$:
$$
S_0(t)\equiv S_1(t)\equiv\Psi(t),\qquad t>0. \tag{26}
$$
The distribution of the time $\tau_1=\tau_1(T)$ of the last visit to the origin, see (10), has an atom at zero. It corresponds to the case when the telegraph particle does not return to $0$ during the time interval $[0,T]$; in this case, $\tau_1=0$, which corresponds to $\tau_0>T$. Therefore, by virtue of (16) and (25),
$$
P_0\{\tau_1=0\}=S_0(T)=k\,\Psi(T)+\max(0,\,1-k^{2}),\qquad
P_1\{\tau_1=0\}=S_1(T)=k^{-1}\,\Psi(T)+\max(0,\,1-k^{-2}),
$$
where $\Psi(T)$ is determined by (17) and (18). The absolutely continuous part of the distribution of $\tau_1$ is described by the following theorem.
Theorem 4. 
The distribution of $\tau_1$ on $(0,T]$ is determined by the probability density functions
$$
\begin{aligned}
P_0(\tau_1\in\mathrm{d}t)/\mathrm{d}t
&=\Bigl[\sqrt{\lambda_0\lambda_1c_0c_1}\,I_0(2\alpha t)+\lambda_0c_1\,I_1(2\alpha t)\Bigr]\Psi(T-t)\,\theta_t
+\Bigl[\lambda_0c_1D_1\,I_0(2\alpha t)+\sqrt{\lambda_0\lambda_1c_0c_1}\,D_0\,I_1(2\alpha t)\Bigr]\theta_t,\\
P_1(\tau_1\in\mathrm{d}t)/\mathrm{d}t
&=\Bigl[\sqrt{\lambda_0\lambda_1c_0c_1}\,I_0(2\alpha t)+\lambda_1c_0\,I_1(2\alpha t)\Bigr]\Psi(T-t)\,\theta_t
+\Bigl[\lambda_1c_0D_0\,I_0(2\alpha t)+\sqrt{\lambda_0\lambda_1c_0c_1}\,D_1\,I_1(2\alpha t)\Bigr]\theta_t,
\end{aligned}
\qquad 0<t\le T. \tag{27}
$$
Here, $\theta_t=\theta(t,x)|_{x=0}=\exp(-\beta t)/(c_0+c_1)$; the constants $\alpha,\beta$ and $D_0,D_1$ are defined by (15) and (19), respectively, and $\Psi(\cdot)$ is defined by (17)–(18).
Proof. 
Since the telegraph process regenerates at each visit to the origin, each path evolves after such a visit independently of the past. Namely, for $0<t<u$ and any Borel set $A$, $i\in\{0,1\}$,
$$
P_i\{T(u)\in A\mid T(t)=0,\ N(t)\ \text{is even}\}=P_i\{T(u-t)\in A\} \tag{28}
$$
and
$$
P_i\{T(u)\in A\mid T(t)=0,\ N(t)\ \text{is odd}\}=P_{1-i}\{T(u-t)\in A\}, \tag{29}
$$
see, e.g., Theorem 3.1 of [22].
Let $\tau_1\in[0,T]$ be the time of the last visit of the process $T$ to the origin. For $t>0$, a visit to the origin at time $t$ occurs when the path crosses the level $0$. Under $P_0$, an up-crossing (the state is $0$, $N(t)$ is even) occurs at rate $c_0\,p_0^{\mathrm{even}}(t)$ per unit time, and a down-crossing (the state is $1$, $N(t)$ is odd) at rate $c_1\,p_0^{\mathrm{odd}}(t)$, where $p_i^{\mathrm{even}}(t)$ and $p_i^{\mathrm{odd}}(t)$ denote the densities (5) and (6) evaluated at $x=0$; the speed converts the spatial density at the origin into a rate of visits per unit time. Hence the distribution of $\tau_1$ can be described by
$$
\begin{aligned}
P_0\{\tau_1\in\mathrm{d}t\}/\mathrm{d}t
&=c_0\,p_0^{\mathrm{even}}(t)\,P\bigl\{\min_{t\le u\le T}T(u)=0\mid T(t)=0,\ \varepsilon(t)=0\bigr\}
+c_1\,p_0^{\mathrm{odd}}(t)\,P\bigl\{\max_{t\le u\le T}T(u)=0\mid T(t)=0,\ \varepsilon(t)=1\bigr\},\\
P_1\{\tau_1\in\mathrm{d}t\}/\mathrm{d}t
&=c_1\,p_1^{\mathrm{even}}(t)\,P\bigl\{\max_{t\le u\le T}T(u)=0\mid T(t)=0,\ \varepsilon(t)=1\bigr\}
+c_0\,p_1^{\mathrm{odd}}(t)\,P\bigl\{\min_{t\le u\le T}T(u)=0\mid T(t)=0,\ \varepsilon(t)=0\bigr\}.
\end{aligned}
$$
Furthermore, by (28) and (29),
$$
\begin{aligned}
P_0\{\tau_1\in\mathrm{d}t\}/\mathrm{d}t
&=c_0\,p_0^{\mathrm{even}}(t)\,P_0\{m_{T-t}=0\}+c_1\,p_0^{\mathrm{odd}}(t)\,P_1\{M_{T-t}=0\},\\
P_1\{\tau_1\in\mathrm{d}t\}/\mathrm{d}t
&=c_1\,p_1^{\mathrm{even}}(t)\,P_1\{M_{T-t}=0\}+c_0\,p_1^{\mathrm{odd}}(t)\,P_0\{m_{T-t}=0\},
\end{aligned} \tag{30}
$$
where $m_t$ and $M_t$ are the running minimum and maximum.
Therefore, by virtue of (30),
$$
\begin{aligned}
P_0(\tau_1\in\mathrm{d}t)/\mathrm{d}t&=c_0\,p_0^{\mathrm{even}}(t)\,S_0(T-t)+c_1\,p_0^{\mathrm{odd}}(t)\,S_1(T-t),\\
P_1(\tau_1\in\mathrm{d}t)/\mathrm{d}t&=c_1\,p_1^{\mathrm{even}}(t)\,S_1(T-t)+c_0\,p_1^{\mathrm{odd}}(t)\,S_0(T-t),
\end{aligned}\qquad 0<t\le T, \tag{31}
$$
where
$$
\begin{aligned}
p_0^{\mathrm{even}}(t)&=p_0^{\mathrm{even}}(t,\mathrm{d}x)|_{x=0}/\mathrm{d}x
=\sqrt{\lambda_0\lambda_1\,\frac{c_1}{c_0}}\;I_1(2\alpha t)\,\theta_t,
&\quad
p_0^{\mathrm{odd}}(t)&=p_0^{\mathrm{odd}}(t,\mathrm{d}x)|_{x=0}/\mathrm{d}x
=\lambda_0\,I_0(2\alpha t)\,\theta_t,\\
p_1^{\mathrm{even}}(t)&=p_1^{\mathrm{even}}(t,\mathrm{d}x)|_{x=0}/\mathrm{d}x
=\sqrt{\lambda_0\lambda_1\,\frac{c_0}{c_1}}\;I_1(2\alpha t)\,\theta_t,
&\quad
p_1^{\mathrm{odd}}(t)&=p_1^{\mathrm{odd}}(t,\mathrm{d}x)|_{x=0}/\mathrm{d}x
=\lambda_1\,I_0(2\alpha t)\,\theta_t.
\end{aligned} \tag{32}
$$
See (5) and (6) with $x=0$; note that $2t\sqrt{\lambda_0\lambda_1c_0c_1}/(c_0+c_1)=2\alpha t$.
Note that, by (16), the probability that the telegraphic path stays above the origin on $(0,t]$ is
$$
S_0(t)=P_0\{m_t=0\}=P_0\{\tau_0>t\}=\Bigl(\frac{\nu_0}{\nu_1}\Bigr)^{1/2}\Psi(t)+D_0.
$$
The probability of an always negative path is, similarly,
$$
S_1(t)=P_1\{M_t=0\}=P_1\{\tau_0>t\}=\Bigl(\frac{\nu_1}{\nu_0}\Bigr)^{1/2}\Psi(t)+D_1,\qquad t>0.
$$
Formula (27) follows from (31), (32), and (16). □
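Theorem 4 admits a simple numerical sanity check (not part of the paper; helper names and the parameters are hypothetical): the atom $P_0\{\tau_1=0\}=S_0(T)$ plus the integral of the density (27) over $(0,T]$ must equal one. The sketch below verifies this with SciPy quadrature for one illustrative parameter set.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0, i1, i1e

c0, c1, lam0, lam1, T = 1.0, 2.0, 2.0, 1.0, 3.0
alpha = np.sqrt(lam0 * lam1 * c0 * c1) / (c0 + c1)
beta = (lam1 * c0 + lam0 * c1) / (c0 + c1)
k = np.sqrt(lam0 * c1 / (lam1 * c0))
D0, D1 = max(0.0, 1.0 - k**2), max(0.0, 1.0 - k**(-2))

def Psi(t):   # formula (17), evaluated by quadrature with the scaled Bessel function i1e
    return quad(lambda s: i1e(2 * alpha * s) * np.exp(-(beta - 2 * alpha) * s) / s, t, np.inf)[0]

def density_P0(t):   # the first line of (27)
    theta_t = np.exp(-beta * t) / (c0 + c1)
    root = np.sqrt(lam0 * lam1 * c0 * c1)
    part_psi = (root * i0(2 * alpha * t) + lam0 * c1 * i1(2 * alpha * t)) * Psi(T - t)
    part_d = lam0 * c1 * D1 * i0(2 * alpha * t) + root * D0 * i1(2 * alpha * t)
    return (part_psi + part_d) * theta_t

atom = k * Psi(T) + D0                                  # P0{tau_1 = 0} = S0(T)
integral = quad(density_P0, 0.0, T, limit=200)[0]
print(atom + integral)                                  # close to 1.0
```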

3. Telegraphic Local Time

Let $T=T(t)$ be the (asymmetric) telegraph process. The local time $\ell(t,x)$ at the point $x$ is defined as the weighted number of visits to $x$ by the process $T$:
$$
\ell(t,x)=\sum_{\substack{0\le s\le t\\ T(s)=x}}\frac{1}{|T'(s)|}\;\mathbb{1}_{\{m_t\le x\le M_t\}}, \tag{33}
$$
see Proposition 2.1 of [23].
First, let $x=0$. Since the process $T$ starts from the origin, the sum in (33) contains only one term precisely when the process does not return to the origin before time $t$; the corresponding probabilities are
$$
\begin{aligned}
P_0\{\ell(t,0)=c_0^{-1}\}&=P_0\{\tau_0>t\}=\Bigl(\frac{\nu_0}{\nu_1}\Bigr)^{1/2}\Psi(t)+D_0=k\,\Psi(t)+\max(0,\,1-k^{2}),\\
P_1\{\ell(t,0)=c_1^{-1}\}&=P_1\{\tau_0>t\}=\Bigl(\frac{\nu_1}{\nu_0}\Bigr)^{1/2}\Psi(t)+D_1=k^{-1}\,\Psi(t)+\max(0,\,1-k^{-2}).
\end{aligned} \tag{34}
$$
Further, we have the following explicit expressions.
Theorem 5. 
For $n\ge1$,
$$
\begin{aligned}
N_{2n}^{(0)}&:=P_0\{\ell(t,0)=n\,c_0^{-1}+n\,c_1^{-1}\}\\
&=k\int\limits_{0\le s_1\le s_2\le\cdots\le s_{2n-1}\le t}\Phi(s_1)\,\Phi(s_2-s_1)\cdots\Phi(s_{2n-1}-s_{2n-2})\,S_1(t-s_{2n-1})\,\mathrm{d}\mathbf{s},\\
N_{2n}^{(1)}&:=P_1\{\ell(t,0)=n\,c_0^{-1}+n\,c_1^{-1}\}\\
&=k^{-1}\int\limits_{0\le s_1\le s_2\le\cdots\le s_{2n-1}\le t}\Phi(s_1)\,\Phi(s_2-s_1)\cdots\Phi(s_{2n-1}-s_{2n-2})\,S_0(t-s_{2n-1})\,\mathrm{d}\mathbf{s},\\
N_{2n+1}^{(0)}&:=P_0\{\ell(t,0)=(n+1)\,c_0^{-1}+n\,c_1^{-1}\}\\
&=\int\limits_{0\le s_1\le s_2\le\cdots\le s_{2n}\le t}\Phi(s_1)\,\Phi(s_2-s_1)\cdots\Phi(s_{2n}-s_{2n-1})\,S_0(t-s_{2n})\,\mathrm{d}\mathbf{s},\\
N_{2n+1}^{(1)}&:=P_1\{\ell(t,0)=n\,c_0^{-1}+(n+1)\,c_1^{-1}\}\\
&=\int\limits_{0\le s_1\le s_2\le\cdots\le s_{2n}\le t}\Phi(s_1)\,\Phi(s_2-s_1)\cdots\Phi(s_{2n}-s_{2n-1})\,S_1(t-s_{2n})\,\mathrm{d}\mathbf{s},
\end{aligned} \tag{35}
$$
where $\mathbf{s}=(s_1,\ldots,s_{2n-1})$ (respectively, $\mathbf{s}=(s_1,\ldots,s_{2n})$), and $\Phi(\cdot)$ and $S_0(\cdot),S_1(\cdot)$ are defined by (14) and (16), respectively:
$$
\Phi(t)=t^{-1}I_1(2\alpha t)\exp(-\beta t),
$$
$$
S_0(t)=k\,\Psi(t)+\max(0,\,1-k^{2}),\qquad S_1(t)=k^{-1}\,\Psi(t)+\max(0,\,1-k^{-2}),
$$
$$
\Psi(t)=\int_t^{\infty}\Phi(s)\,\mathrm{d}s,\qquad k=\Bigl(\frac{\lambda_0c_1}{\lambda_1c_0}\Bigr)^{1/2}.
$$
Proof. 
Since the telegraph process regenerates at each return to the origin, the excursions of the process $T$ (the pieces of the path between successive returns to the origin) are independent of each other; the densities of the successive inter-return intervals alternate between $k\,\Phi$ and $k^{-1}\,\Phi$ by Theorem 2, (12).
Therefore, (35) is obtained by successively applying Theorem 2, (12), and (16). □
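The case probabilities in (34)–(35) can be checked by brute force. The sketch below (not from the paper; helper names and parameter values are hypothetical, NumPy/SciPy assumed) counts the visits of a simulated path to the origin and compares the empirical value of $P_0\{\ell(t,0)=c_0^{-1}\}$, i.e. the probability of exactly one visit, with $S_0(t)=k\Psi(t)+\max(0,1-k^2)$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i1e

def visits_to_origin(t, c0, c1, lam0, lam1, rng):
    """Number of visits of T to the origin on [0, t], started at 0 in state 0
    (the visit at time 0 is counted); ell(t, 0) weights each visit by 1/speed."""
    pos, s, state, visits = 0.0, 0.0, 0, 1
    while s < t:
        rate, vel = (lam0, c0) if state == 0 else (lam1, -c1)
        hold = min(rng.exponential(1.0 / rate), t - s)
        new_pos = pos + vel * hold
        visits += pos * new_pos < 0.0          # the linear piece crosses the origin
        pos, s, state = new_pos, s + hold, 1 - state
    return visits

c0, c1, lam0, lam1, t = 1.0, 2.0, 2.0, 1.0, 3.0
alpha = np.sqrt(lam0 * lam1 * c0 * c1) / (c0 + c1)
beta = (lam1 * c0 + lam0 * c1) / (c0 + c1)
k = np.sqrt(lam0 * c1 / (lam1 * c0))
Psi_t = quad(lambda s: i1e(2 * alpha * s) * np.exp(-(beta - 2 * alpha) * s) / s, t, np.inf)[0]
rng = np.random.default_rng(3)
emp = np.mean([visits_to_origin(t, c0, c1, lam0, lam1, rng) == 1 for _ in range(20000)])
print(emp, k * Psi_t + max(0.0, 1.0 - k**2))   # empirical value vs. S0(t) from (34)
```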
The local time at any level $x$, $x\neq0$, can be treated similarly. The formulae for the probabilities $P_i\{\ell(t,x)=0\}=P_i\{\mathcal{T}(x)>t\}$, $i\in\{0,1\}$, follow from (8) and (9). Similarly to (35), one can obtain the following result.
Theorem 6. 
For $n\ge1$, $x\neq0$ and $i\in\{0,1\}$,
$$
\begin{aligned}
P_i\{\ell(t,x)=n\,c_0^{-1}+n\,c_1^{-1}\}&=\int_0^t N_{2n}(t-s)\,P_i\{\mathcal{T}(x)\in\mathrm{d}s\},\\
P_i\{\ell(t,x)=(n+1)\,c_0^{-1}+n\,c_1^{-1}\}&=\int_0^t N_{2n+1}^{(0)}(t-s)\,P_i\{\mathcal{T}(x)\in\mathrm{d}s\},\qquad x>0,\\
P_i\{\ell(t,x)=n\,c_0^{-1}+(n+1)\,c_1^{-1}\}&=\int_0^t N_{2n+1}^{(1)}(t-s)\,P_i\{\mathcal{T}(x)\in\mathrm{d}s\},\qquad x<0.
\end{aligned} \tag{36}
$$
Here, $N_{2n}(\cdot)$ stands for $N_{2n}^{(0)}(\cdot)$ if $x>0$ and for $N_{2n}^{(1)}(\cdot)$ if $x<0$; $N_{2n+1}^{(0)}(\cdot)$ and $N_{2n+1}^{(1)}(\cdot)$ are defined by (35), and $P_i\{\mathcal{T}(x)\in\mathrm{d}s\}$, $i\in\{0,1\}$, are given in Theorem 1.
In the symmetric case, the local time ( t , 0 ) satisfies the following limit theorem.
Theorem 7. 
Let $\lambda_0=\lambda_1$, $c_0=c_1=c$. Then, for $y>0$,
$$
P\bigl\{\Psi(t)\cdot\ell(t,0)>3y^{-1/2}c^{-1}\bigr\}\to
2\bigl[1-F\bigl(3(\pi/(2y))^{1/2}\bigr)\bigr],\qquad\text{as } t\to\infty. \tag{37}
$$
Here, $F=F(z)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{z}\exp(-x^{2}/2)\,\mathrm{d}x$ is the cumulative distribution function of the standard normal distribution.
Proof. 
In the symmetric case, $\lambda_0=\lambda_1$, $c_0=c_1=c$, the distribution of $\ell(t,0)$ does not depend on the initial state, $[\ell(t,0)\mid\varepsilon(0)=0]\stackrel{d}{=}[\ell(t,0)\mid\varepsilon(0)=1]$, and $\ell(t,0)=c^{-1}n_t$, where $n_t$ is the number of visits to the origin within $[0,t]$.
It is known that $n_t$ has a proper limit distribution (after a suitable normalisation) if and only if
$$
P\{\tau_0>t\}\sim t^{-\gamma}L(t),\qquad t\to\infty, \tag{38}
$$
where $L$ is slowly varying and $0<\gamma<2$; see Ch. IX.8, XI.5, and XVII.5 of [24]. Note that, by Theorem 3, (22), property (38) holds only in the symmetric case, with $\gamma=1/2$. Notice that in this case $P\{\tau_0>t\}=\Psi(t)$, see (26).
Therefore, due to Ch. XI.5, p. 373 of [24],
$$
P\bigl\{\Psi(t)\,n_t>3y^{-1/2}\bigr\}\to G(y),\qquad\text{as } t\to\infty,
$$
where $G=G(y)$ is the cumulative distribution function of a one-sided $1/2$-stable (inverse-gamma) distribution satisfying the limiting condition
$$
y^{1/2}\bigl[1-G(y)\bigr]\to3\qquad\text{as } y\to+\infty. \tag{39}
$$
The probability density function of the one-sided $1/2$-stable distribution has the form
$$
g(x)=\Bigl(\frac{A}{2\pi x^{3}}\Bigr)^{1/2}\exp\Bigl(-\frac{A}{2x}\Bigr),\qquad x>0,
$$
see Ch. 2.4, (4.8), p. 52 of [24]. Therefore, for $y>0$,
$$
G(y)=\int_0^{y}\Bigl(\frac{A}{2\pi x^{3}}\Bigr)^{1/2}\exp\Bigl(-\frac{A}{2x}\Bigr)\mathrm{d}x
=2\int_{\sqrt{A/y}}^{\infty}\frac{1}{\sqrt{2\pi}}\exp(-z^{2}/2)\,\mathrm{d}z
=2\bigl[1-F\bigl(\sqrt{A/y}\bigr)\bigr],
$$
and the scale parameter $A$ is determined by the limiting condition (39), that is,
$$
3=\lim_{y\to+\infty}y^{1/2}\bigl[1-G(y)\bigr]
=\lim_{y\to+\infty}y^{1/2}\bigl[2F\bigl(\sqrt{A/y}\bigr)-1\bigr]
=2\sqrt{A}\,f(0)=\sqrt{2A/\pi},
$$
where $f(x)=\frac{1}{\sqrt{2\pi}}\exp(-x^{2}/2)$ is the probability density function of the standard Gaussian law.
Hence, $A=9\pi/2$, which proves (37). □
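As a rough illustration of Theorem 7 (not from the paper; names and parameter values are hypothetical), one can simulate the symmetric process, count visits to the origin, and compare the empirical tail of $\Psi(t)\,\ell(t,0)$ with the right-hand side of (37). Convergence in (37) is slow, so at a moderate horizon the agreement is only approximate.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i1e
from scipy.stats import norm

lam = c = 1.0
t = 500.0
Psi_t = quad(lambda s: i1e(lam * s) / s, t, np.inf)[0]   # symmetric case: alpha = lam/2, beta = lam

def n_visits(t, rng):
    """Visits to the origin on [0, t] for the symmetric telegraph process (start included)."""
    pos, s, sign, visits = 0.0, 0.0, 1.0, 1
    while s < t:
        hold = min(rng.exponential(1.0 / lam), t - s)
        new_pos = pos + sign * c * hold
        visits += pos * new_pos < 0.0
        pos, s, sign = new_pos, s + hold, -sign
    return visits

rng = np.random.default_rng(4)
ell = np.array([n_visits(t, rng) for _ in range(4000)]) / c        # ell(t, 0) = n_t / c
for y in (4.0, 9.0, 25.0):
    lhs = np.mean(Psi_t * ell > 3.0 / (np.sqrt(y) * c))
    rhs = 2.0 * (1.0 - norm.cdf(3.0 * np.sqrt(np.pi / (2.0 * y))))
    print(y, round(lhs, 3), round(rhs, 3))                         # roughly comparable
```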

4. Conclusions

In this paper, we analyse the properties of the crossing time distributions associated with the asymmetric telegraph process. In view of potential applications in financial engineering and in neural and biological modelling, the results on the last crossing times, on the time of return to the starting point, and on the local times of telegraph processes are of particular interest.
It turns out that, in the asymmetric case, the return time $\tau_0$ is infinite with positive probability. Further, the survival probability of $\tau_0$ satisfies the Feller condition (22) only in the symmetric case. To the best of our knowledge, these results are new and have never been presented before. They can help in constructing a theory of telegraph bridges, excursions, and meanders.
Further research in this field can be aimed at extending this model to telegraph processes with jumps and to the analysis of other dynamics governed by
  • various distributions of inter-arrival times, other than exponential;
  • alternating nonlinear patterns.

Author Contributions

N.R. and M.T. contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Russian Science Foundation (RSF), project number 22-21-00148, https://rscf.ru/project/22-21-00148/ (accessed on 1 December 2022).

Data Availability Statement

All data generated and analysed during this study are included in the published article.

Acknowledgments

We are deeply grateful to the referees for their careful reading of the manuscript and very helpful suggestions for improving the text.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kac, M. A stochastic model related to the telegrapher’s equation. Rocky Mt. J. Math. 1974, 4, 497–509. [Google Scholar] [CrossRef]
  2. Goldstein, S. On diffusion by discontinuous movements and on the telegraph equation. Quart. J. Mech. Appl. Math. 1951, 4, 129–156. [Google Scholar] [CrossRef]
  3. Kolesnik, A.D. Markov Random Flights; Chapman and Hall/CRC: Boca Raton, FL, USA, 2021. [Google Scholar]
  4. Kolesnik, A.D.; Ratanov, N. Telegraph Processes and Option Pricing, 1st ed.; Springer: Heidelberg, Germany; New York, NY, USA; Dordrecht, The Netherlands; London, UK, 2013. [Google Scholar]
  5. Ratanov, N.; Kolesnik, A.D. Telegraph Processes and Option Pricing, 2nd ed.; Springer: Heidelberg, Germany; New York, NY, USA; Dordrecht, The Netherlands; London, UK, 2022. [Google Scholar]
  6. Hadeler, K.P. Reaction transport systems in biological modelling. In Mathematics Inspired by Biology, Lecture Notes in Mathematics; Capasso, V., Diekmann, O., Eds.; Springer: Berlin, Germany, 1999; Volume 1714, pp. 95–150. [Google Scholar]
  7. Othmer, H.G.; Dunbar, S.R.; Alt, W. Models of dispersal in biological systems. J. Math. Biol. 1988, 26, 263–298. [Google Scholar] [CrossRef] [PubMed]
  8. Weiss, G.H. Aspects and Applications of the Random Walk; North-Holland: Amsterdam, The Netherlands, 1994. [Google Scholar]
  9. Weiss, G.H. Some applications of persistent random walks and the telegrapher’s equation. Physica A 2002, 311, 381–410. [Google Scholar] [CrossRef]
  10. Alharbi, W.; Petrovskii, S. Critical domain problem for the reaction-telegraph equation model of population dynamics. Mathematics 2018, 6, 59. [Google Scholar] [CrossRef]
  11. Ratanov, N. A jump telegraph model for option pricing. Quant. Financ. 2007, 7, 575–583. [Google Scholar] [CrossRef]
  12. Tuckwell, H.C. Stochastic Processes in the Neurosciences; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1989. [Google Scholar]
  13. Badel, L. Firing statistics and correlations in spiking neurons: A level-crossing approach. Phys. Rev. E 2011, 84, 041919. [Google Scholar] [CrossRef] [PubMed]
  14. Di Bernardino, E.; León, J.R.; Tchumatchenko, T. Cross-correlations and joint Gaussianity in multivariate level crossing models. J. Math. Neurosci. 2014, 4, 22. [Google Scholar] [CrossRef] [PubMed]
  15. Luboeinski, J.; Tchumatchenko, T. Nonlinear response characteristics of neural networks and single neurons undergoing optogenetic excitation. Netw. Neurosci. 2020, 4, 852–870. [Google Scholar] [CrossRef] [PubMed]
  16. La Rosa, M.; Rabinovich, M.I.; Huerta, R.; Abarbanel, H.D.I.; Fortuna, L. Slow regularization through chaotic oscillation transfer in an unidirectional chain of Hindmarsh-Rose models. Phys. Lett. A 2000, 266, 88–93. [Google Scholar] [CrossRef]
  17. Ratanov, N. Mean-reverting neuronal model based on two alternating patterns. BioSystems 2020, 196, 104190. [Google Scholar] [CrossRef] [PubMed]
  18. Ratanov, N. On telegraph processes, their first passage times and running extrema. Stat. Probab. Lett. 2021, 174, 109101. [Google Scholar] [CrossRef]
  19. Pogorui, A.A.; Rodríguez-Dagnino, R.M.; Kolomiets, T. The first passage time and estimation of the number of level-crossings for a telegraph process. Ukr. Math. J. 2015, 67, 998–1007. [Google Scholar] [CrossRef]
  20. Prudnikov, A.P.; Brychkov, Y.A.; Marichev, O.I. Integrals and Series, Vol. 1. Elementary Functions; Gordon and Breach Science Publishers: Philadelphia, PA, USA, 1992. [Google Scholar]
  21. Prudnikov, A.P.; Brychkov, Y.A.; Marichev, O.I. Integrals and Series, Vol. 2. Special Functions; Gordon and Breach Science Publishers: Philadelphia, PA, USA, 1992. [Google Scholar]
  22. Cinque, F. A note on the conditional probabilities of the telegraph process. Stat. Probab. Lett. 2022, 185, 109431. [Google Scholar] [CrossRef]
  23. Björk, T. The pedestrian's guide to local time. In Risk and Stochastics: Ragnar Norberg; World Scientific: Singapore, 2019; Chapter 3. [Google Scholar]
  24. Feller, W. An Introduction to Probability Theory and Its Applications, 2nd ed.; Wiley: Hoboken, NJ, USA, 1971; Volume II. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
