Article

On the Self-Similarity of Remainder Processes and the Relationship Between Stable and Dickman Distributions

Department of Mathematics and Statistics, University of North Carolina Charlotte, Charlotte, NC 28223, USA
Mathematics 2025, 13(6), 907; https://doi.org/10.3390/math13060907
Submission received: 16 February 2025 / Revised: 4 March 2025 / Accepted: 6 March 2025 / Published: 8 March 2025
(This article belongs to the Section D1: Probability and Statistics)

Abstract

A common approach to simulating a Lévy process is to truncate its shot-noise representation. We focus on subordinators and introduce the remainder process, which represents the jumps that are removed by the truncation. We characterize when these processes are self-similar and show that, in the self-similar case, they can be indexed by a parameter $\alpha \in (-\infty, 1)$. When $\alpha \in (0,1)$, they correspond to $\alpha$-stable distributions, and when $\alpha = 0$, they correspond to certain generalizations of the Dickman distribution. Thus, the Dickman distribution plays the role of a $0$-stable distribution in this context.

1. Introduction

Lévy processes are used in many applications, including applications in finance, insurance, physics, and reliability theory; see, e.g., [1,2] and the references therein. A common approach to the simulation of a Lévy process is to use its shot-noise representation, which represents the Lévy process as an infinite sum. This idea goes back to the pioneering work of Khintchine in 1931 [3]; see also [4] and the references therein for recent developments. To use a shot-noise representation for simulation, one must truncate the infinite series at some finite point. It has been argued (see, e.g., [1] or [2]) that this truncation point should not be deterministic but random, and should be chosen so as to control the magnitudes of the jumps that are excluded. In this paper, we introduce the remainder process, which represents the jumps that are removed due to the truncation. This process has two parameters, one representing time in the original Lévy process and the other representing the cut-off beyond which jumps are removed. Throughout, we focus on subordinators, which are Lévy processes with only positive jumps. They are the building blocks from which all finite variation Lévy processes are built, even in the multivariate case; see [5]. We characterize when the remainder process of a subordinator is self-similar and show that the resulting class can be indexed by a parameter $\alpha \in (-\infty, 1)$. We further show that the remainder process can be written as a continuous function of $\alpha$ and that, under mild assumptions, this function is monotonically increasing. When $\alpha \in (0,1)$, it corresponds to the classes of $\alpha$-stable and truncated $\alpha$-stable subordinators. From this perspective, the distinction between these two is not relevant, as they have the same remainder processes on bounded intervals. When $\alpha = 0$, the process corresponds to the class of generalized $h$-Dickman distributions, and when $\alpha \in (-\infty, 0)$, it corresponds to certain compound Poisson distributions.
In this context, generalized h-Dickman distributions play the role of 0-stable distributions.
There has been some interest in finding a natural analogue of $\alpha$-stable distributions for $\alpha = 0$. In [6,7], it is argued that gamma distributions are a natural analogue. The idea is as follows. First, apply an Esscher transform to an $\alpha$-stable distribution to obtain a certain type of tempered $\alpha$-stable distribution. Next, take the limit as $\alpha \to 0^+$, which leads to a gamma distribution. We note that a similar approach can be taken with other classes of tempered stable distributions, which lead to a variety of other distributions in the limit; see [8] or [9]. From a different perspective, ref. [10] showed that, as $\alpha \to 0^+$, the distribution of $|X_\alpha - b|^{\alpha}$, where $X_\alpha$ is an $\alpha$-stable random variable and $b \in \mathbb{R}$ is an appropriate shift, approaches a reciprocal exponential distribution. In [11,12], the Dickman distribution is called a "truncated $0$-stable subordinator" due to the form of its Lévy measure. All of these approaches to thinking about $0$-stable distributions are valid from different perspectives. From the perspective of the remainder process, generalized $h$-Dickman distributions are the natural analogue.
The rest of this paper is organized as follows. In Section 2, we review basic properties of infinitely divisible distributions and their associated Lévy processes. In Section 3, we introduce the remainder process for subordinators, and in Section 4, we give some examples. In Section 5, we give our main results about continuity and monotonicity, and we fully characterize when remainder processes are self-similar. In Section 6, we give two useful lemmas. In the interest of generality, we present these in the multivariate setting as they may be of independent interest.
Before proceeding, we introduce our notation. We write "w.p. 1" to mean "with probability 1", "increasing" to mean "non-decreasing", and "decreasing" to mean "non-increasing". We write $\mathcal{B}(\mathbb{R})$ to denote the Borel sets on $\mathbb{R}$, $1_A$ to denote the indicator function of the set $A$, and $\emptyset$ to denote the empty set. We write $\vee$ and $\wedge$ to denote the maximum and minimum, respectively. For a distribution $\mu$, we write $\hat\mu$ to denote its characteristic function, $X \sim \mu$ to denote that $X$ is a random variable with distribution $\mu$, and $X_1, X_2, \dots \stackrel{iid}{\sim} \mu$ to denote that $X_1, X_2, \dots$ are independent and identically distributed (iid) random variables with distribution $\mu$. We write $:=$, $\stackrel{d}{=}$, and $\stackrel{fdd}{=}$ to denote, respectively, a defining equality, an equality in distribution, and an equality in finite dimensional distributions (fdd). We write $U(a,b)$ to denote the uniform distribution on $(a,b)$ and $\mathrm{Exp}(\lambda)$ to denote an exponential distribution with rate $\lambda$. Unless otherwise specified, throughout this paper, we let $E_1, E_2, \dots \stackrel{iid}{\sim} \mathrm{Exp}(1)$ and $U_1, U_2, \dots \stackrel{iid}{\sim} U(0,1)$ be independent sequences of random variables, and we set $\Gamma_i = \sum_{k=1}^i E_k$ for $i = 1, 2, \dots$.
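As a small numerical illustration of this setup (the sequence length and seed below are arbitrary choices, not part of the paper), the sequences $E_i$, $U_i$, and $\Gamma_i$ can be generated as follows:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
E = rng.exponential(scale=1.0, size=n)   # E_1, E_2, ... iid Exp(1)
U = rng.uniform(0.0, 1.0, size=n)        # U_1, U_2, ... iid U(0,1), independent of the E_i's
Gamma = np.cumsum(E)                     # Gamma_i = E_1 + ... + E_i, the arrival times
```

Since the $E_i$ are positive, $\Gamma_1 < \Gamma_2 < \cdots$ w.p. 1, which is what makes the jumps in the shot-noise representations below appear in decreasing order of magnitude.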

2. Infinitely Divisible Distributions and Lévy Processes

The characteristic function of an infinitely divisible (ID) distribution $\mu$ can be written in the form
$$\hat\mu(z) = \exp\left( -\frac{1}{2}\sigma^2 z^2 + i b z + \int_{\mathbb{R}} \left( e^{izx} - 1 - izx 1_{[|x| \le 1]} \right) M(dx) \right), \quad z \in \mathbb{R},$$
where $\sigma^2 \ge 0$ is the Gaussian part, $b \in \mathbb{R}$ is the shift, and $M$ is the Lévy measure, which is a Borel measure satisfying
$$M(\{0\}) = 0 \quad \text{and} \quad \int_{\mathbb{R}} \left( |x|^2 \wedge 1 \right) M(dx) < \infty.$$
The parameters $\sigma^2$, $M$, and $b$ uniquely determine this distribution, and we write $\mu = \mathrm{ID}(\sigma^2, M, b)$. Associated with every ID distribution $\mu$ is a Lévy process $\{X_t : t \ge 0\}$, where $X_1 \sim \mu$. Lévy processes are characterized by independent and stationary increments, càdlàg paths, stochastic continuity, and the initial condition $X_0 = 0$ w.p. 1. Of these, only stochastic continuity and càdlàg paths cannot be determined by the finite dimensional distributions. This leads to the following.
Lemma 1. 
If two stochastic processes are equal in fdd and one of them is a Lévy process, then the other is also a Lévy process if and only if it is stochastically continuous and has càdlàg paths.
The jumps of a Lévy process are governed by its Lévy measure. Specifically, for a Lévy process $\{X_t : t \ge 0\}$ with $X_1 \sim \mathrm{ID}(\sigma^2, M, b)$ and any $B \in \mathcal{B}(\mathbb{R})$ with $B \cap \{0\} = \emptyset$, $M(B)$ is the expected number of jumps that the process has in the time interval $[0,1]$ whose magnitudes fall inside $B$. A Lévy process has finite variation if and only if $\sigma^2 = 0$ and $M$ satisfies the additional condition
$$\int_{\mathbb{R}} \left( |x| \wedge 1 \right) M(dx) < \infty.$$
Through a slight abuse of terminology, we also say that the associated distribution $\mu$ and the associated Lévy measure $M$ have finite variation. In this case, the characteristic function can be written in the form
$$\hat\mu(z) = \exp\left( i\gamma z + \int_{\mathbb{R}} \left( e^{izx} - 1 \right) M(dx) \right), \quad z \in \mathbb{R},$$
where $\gamma = b - \int_{|x| \le 1} x\, M(dx) \in \mathbb{R}$ is the drift, and we write $\mu = \mathrm{ID}_0(M, \gamma)$. If, in addition, $M((-\infty, 0]) = 0$ and $\gamma \ge 0$, then the associated Lévy process is increasing and is called a subordinator. Again, by a slight abuse of terminology, we call the associated distribution $\mu$ a subordinator as well. For more on infinitely divisible distributions and Lévy processes, see [2] or [13].

3. Shot-Noise and the Remainder Process

Let $\mu = \mathrm{ID}_0(L, \gamma)$ be a subordinator. For simplicity and without loss of generality, we take $\gamma = 0$ throughout. For $s > 0$, define the truncated Lévy measure $L_s(dx) = 1_{[0 < x < s]} L(dx)$. We refer to a subordinator with Lévy measure $L_s$ as a truncated subordinator. For $z > 0$, let
$$V(z) = \int_z^\infty L(dx) \quad \text{and} \quad V_s(z) = \int_z^\infty L_s(dx).$$
Note that these functions uniquely determine the corresponding Lévy measures, that they are decreasing, and that
$$V_s(z) = \begin{cases} V(z) - V(s) & 0 < z < s \\ 0 & z \ge s. \end{cases}$$
For notational convenience, we write $L_\infty = L$, $V_\infty = V$, and $V(\infty) = V_s(\infty) = 0$. Let
$$V^\leftarrow(y) = \inf\{x > 0 : V(x) < y\}, \quad y \ge 0$$
be the generalized inverse of $V$. We use a similar definition for $V_s^\leftarrow$. Note that $V^\leftarrow$ and $V_s^\leftarrow$ are decreasing.
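When $V$ has no closed-form inverse, the generalized inverse can be computed numerically. The following is a hedged sketch (the helper name `gen_inverse`, the bracketing interval, and the geometric bisection scheme are our own choices, not from the paper); it only uses the fact that $V$ is decreasing:

```python
import numpy as np

def gen_inverse(V, y, lo=1e-12, hi=1e12, iters=200):
    """Generalized inverse V^{<-}(y) = inf{x > 0 : V(x) < y} of a decreasing
    function V, located by bisection on [lo, hi]."""
    if V(lo) < y:               # then V(x) < y for all x >= lo, so the infimum is (near) 0
        return 0.0
    for _ in range(iters):
        mid = np.sqrt(lo * hi)  # geometric midpoint suits power-law tails
        if V(mid) < y:
            hi = mid
        else:
            lo = mid
    return hi

# sanity check against the 0.5-stable tail V(z) = (c/alpha) z^{-alpha}
alpha, c = 0.5, 1.0
V = lambda z: (c / alpha) * z ** (-alpha)
```

For this $V$, the exact inverse is $V^\leftarrow(y) = (\alpha y / c)^{-1/\alpha}$, so `gen_inverse(V, 2.0)` should return (approximately) $1.0$.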
Fix a finite time horizon T > 0 and consider the process
Fix a finite time horizon $T > 0$ and consider the process
$$X_{\infty, t} = \sum_{i=1}^\infty V^\leftarrow(\Gamma_i / T)\, 1_{[U_i \le t/T]}, \quad t \in [0, T]. \tag{4}$$
This is a Lévy process with $X_{\infty,t} \sim \mathrm{ID}_0(tL, 0)$, and the infinite sum converges almost surely and uniformly for $t \in [0,T]$; see, e.g., Proposition 6.3 in [2]. We call $V^\leftarrow(\Gamma_i/T)$, $i = 1, 2, \dots$, the jumps of the process. Note that they are ordered from largest to smallest. This series representation is sometimes called a shot-noise representation. It can be used to approximately simulate the Lévy process. To do so, we must first truncate the infinite series at some finite value. It has been argued, see, e.g., [1] or [2], that the proper way to truncate is not at a deterministic point. Instead, we should choose some $\tau > 0$ and truncate at the random index $N(\tau)$, where $N(\tau) = \max\{i : \Gamma_i \le \tau\}$. This way, we control the magnitudes of the jumps that we are removing. We take $\tau = \tau_s = TV(s)$ for some $s > 0$. We will see that this is equivalent to removing jumps of magnitudes greater than $s$. The remainder in this approximation can be written as
$$X_{s,t} := \sum_{i=1}^\infty V^\leftarrow(\Gamma_i/T)\, 1_{[U_i \le t/T]} - \sum_{i=1}^{N(\tau_s)} V^\leftarrow(\Gamma_i/T)\, 1_{[U_i \le t/T]} = \sum_{i=1}^\infty V^\leftarrow(\Gamma_i/T)\, 1_{[\Gamma_i > TV(s)]}\, 1_{[U_i \le t/T]}, \quad s \in [0, \infty],\ t \in [0, T].$$
We call this the remainder process with Lévy measure $L$. We have not previously seen processes of this type in the literature. We now give a lemma that is useful when working with remainder processes. First, for $s \ge 0$, we define the set $H_s = \{x > 0 : V(x) = s\}$.
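To illustrate the truncation, the sketch below splits the first jumps of the shot-noise series into the part kept by truncating at $\tau_s = TV(s)$ and the remainder; the helper name `split_jumps` and the $0.5$-stable test case (where $V$ is invertible) are our own illustrative choices, not from the paper:

```python
import numpy as np

def split_jumps(V, V_inv, s, T, rng, n_terms=5000):
    """First n_terms shot-noise jumps V_inv(Gamma_i / T), split into those kept
    by truncating at tau_s = T V(s) and those forming the remainder."""
    Gamma = np.cumsum(rng.exponential(1.0, size=n_terms))
    jumps = V_inv(Gamma / T)
    keep = Gamma <= T * V(s)       # the large jumps, retained by the truncated sum
    return jumps[keep], jumps[~keep]

# 0.5-stable subordinator: V(z) = (c/alpha) z^{-alpha}, V_inv(y) = (alpha y / c)^{-1/alpha}
alpha, c, T, s = 0.5, 1.0, 1.0, 0.3
V = lambda z: (c / alpha) * z ** (-alpha)
V_inv = lambda y: (alpha * y / c) ** (-1.0 / alpha)

kept, remainder = split_jumps(V, V_inv, s, T, np.random.default_rng(1))
```

Because $V^\leftarrow$ is decreasing and invertible here, $\Gamma_i \le TV(s)$ is exactly the event that the jump $V^\leftarrow(\Gamma_i/T)$ has magnitude at least $s$, so every kept jump is $\ge s$ and every remainder jump is $< s$.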
Lemma 2. 
1. For any $h, y > 0$, we have $V_h^\leftarrow(y) = V^\leftarrow(y + V(h))$.
2. If $h > 0$ and $Y$ is a positive random variable with $P(V^\leftarrow(Y) = h) = 0$, then $1_{[V^\leftarrow(Y) < h]} = 1_{[Y > V(h)]}$ w.p. 1.
3. There is at most a countable number of $s \ge 0$ for which $H_s$ has more than one element.
4. If $b \in \mathbb{R}$, then
$$\left\{ \sum_{i=1}^\infty V^\leftarrow\!\left( (\Gamma_i + b)/T \right) 1_{[\Gamma_i + b > TV(s)]}\, 1_{[U_i \le t/T]}\, 1_{[\Gamma_i > (-b) \vee 0]} : s \in [0,\infty],\ t \in [0,T] \right\} \stackrel{fdd}{=} \left\{ \sum_{i=1}^\infty V^\leftarrow(\Gamma_i/T)\, 1_{[\Gamma_i > TV(s)]}\, 1_{[U_i \le t/T]}\, 1_{[\Gamma_i > b \vee 0]} : s \in [0,\infty],\ t \in [0,T] \right\}.$$
5. If $b > 0$, then
$$\left\{ \sum_{i=1}^\infty V^\leftarrow(\Gamma_i b/T)\, 1_{[\Gamma_i b > TV(s)]}\, 1_{[U_i \le bt/T]} : s \in [0,\infty],\ t \in [0, T(1 \wedge b^{-1})] \right\} \stackrel{fdd}{=} \left\{ \sum_{i=1}^\infty V^\leftarrow(\Gamma_i/T)\, 1_{[\Gamma_i > TV(s)]}\, 1_{[U_i \le t/T]} : s \in [0,\infty],\ t \in [0, T(1 \wedge b^{-1})] \right\}.$$
Proof. 
The first part follows from the fact that
$$V_h^\leftarrow(y) = \inf\{x > 0 : V_h(x) < y\} = \inf\{h \ge x > 0 : V_h(x) < y\} = \inf\{h \ge x > 0 : V(x) < y + V(h)\} = \inf\{x > 0 : V(x) < y + V(h)\} = V^\leftarrow(y + V(h)),$$
where the second equality follows from the fact that $V_h(x) = 0$ for $x \ge h$ and the fourth from the fact that $V(x) \le V(h)$ for $x \ge h$. For the second part, if
$$h < V^\leftarrow(Y) = \inf\{x > 0 : V(x) < Y\},$$
then, since $V$ is decreasing, we have $V(h) \ge Y$. Similarly, if
$$h > V^\leftarrow(Y) = \inf\{x > 0 : V(x) < Y\},$$
then $V(h) < Y$. From here, the results follow from the fact that $P(V^\leftarrow(Y) = h) = 0$.
Next, we turn to the third part. Without loss of generality, take $s > 0$. Assume that there are $y_1, y_2 \in H_s$ with $y_1 \ne y_2$. Let $s_n^+ > s$ be a sequence with $s_n^+ \downarrow s$ and $s_n^- < s$ be a sequence with $s_n^- \uparrow s$. We have
$$\limsup_{n \to \infty} V^\leftarrow(s_n^+) \le \min\{y_1, y_2\} < \max\{y_1, y_2\} \le \liminf_{n \to \infty} V^\leftarrow(s_n^-).$$
It follows that $V^\leftarrow$ has a discontinuity at $s$. From here, the result follows from the fact that a decreasing function can have at most a countable number of discontinuities. The remaining parts are a special case of Lemma 5 given in Section 6. □
For any $s \in [0, \infty]$ and $t \in [0,T]$, if the set $H_s$ has Lebesgue measure zero, then Lemma 2 implies that
$$X_{s,t} \stackrel{a.s.}{=} \sum_{i=1}^\infty V^\leftarrow(\Gamma_i/T)\, 1_{[V^\leftarrow(\Gamma_i/T) < s]}\, 1_{[U_i \le t/T]}, \tag{5}$$
where we use the fact that the number of terms in the sum is countable. Further, Lemma 2 tells us that there are at most a countable number of $s$ for which $H_s$ has positive Lebesgue measure. This number reduces to $0$ if $V$ is invertible. Note that (5) justifies saying that $X_{s,t}$ is obtained from $X_{\infty,t}$ by removing the large jumps. Next, applying Lemma 2 gives
$$X_{s,t} \stackrel{fdd}{=} \sum_{i=1}^\infty V^\leftarrow\!\left( \Gamma_i/T + V(s) \right) 1_{[U_i \le t/T]} = \sum_{i=1}^\infty V_s^\leftarrow(\Gamma_i/T)\, 1_{[U_i \le t/T]}, \quad s \in [0,\infty],\ t \in [0,T]. \tag{6}$$
Lemma 3. 
For any fixed $s \in [0,\infty]$, $\{X_{s,t} : t \in [0,T]\}$ is a Lévy process with $X_{s,t} \sim \mathrm{ID}_0(tL_s, 0)$.
Proof. 
Applying (4) to $V_s$ instead of $V$ shows that, for fixed $s \in [0,\infty]$, $\left\{ \sum_{i=1}^\infty V_s^\leftarrow(\Gamma_i/T)\, 1_{[U_i \le t/T]} : t \in [0,T] \right\}$ is a Lévy process with $X_{s,t} \sim \mathrm{ID}_0(tL_s, 0)$. It is readily checked that, for fixed $s$, $\{X_{s,t} : t \in [0,T]\}$ is stochastically continuous with càdlàg paths. Hence, the result follows by Lemma 1 and (6). □

4. Classes of Subordinators

One of the best known classes of subordinators is the class of stable subordinators. Recall that a distribution $\mu$ is said to be stable if, for any $n$, there exist $b_n > 0$ and $d_n \in \mathbb{R}$ such that if $X_1, X_2, \dots, X_n \stackrel{iid}{\sim} \mu$, then
$$\sum_{i=1}^n X_i \stackrel{d}{=} b_n X_1 + d_n.$$
In this sense, these distributions are stable under addition. It can be shown that $b_n = n^{1/\alpha}$ for some $\alpha \in (0,2]$, and the corresponding distribution is said to be $\alpha$-stable. The class of $2$-stable distributions corresponds to the class of normal distributions. It is well known that the Lévy processes associated with stable distributions are the only ones that are self-similar. Specifically, $\{X_t : t \ge 0\}$ is a Lévy process such that for any $a > 0$, there exist a $b > 0$ and a function $d(t)$ with
$$\{X_{at} : t \ge 0\} \stackrel{fdd}{=} \{bX_t + d(t) : t \ge 0\}$$
if and only if $X_1$ has a stable distribution. Further, in this case, $b = a^{1/\alpha}$ and $d(t) = td$ for some constant $d \in \mathbb{R}$; see Section 13 in [13]. For more on stable distributions, see [14] or [15].
Only stable distributions with $\alpha \in (0,1)$ can be subordinators. The Lévy measure of an $\alpha$-stable subordinator is of the form
$$L^{(\alpha)}(dx) = c x^{-1-\alpha}\, 1_{[x > 0]}\, dx,$$
where $c > 0$. In this case, for $z > 0$, we have
$$V(z) = \frac{c}{\alpha} z^{-\alpha} \quad \text{and} \quad V^\leftarrow(z) = \left( \alpha z / c \right)^{-1/\alpha}.$$
The remainder process is given by
$$X_{s,t}^{(\alpha)} = \sum_{i=1}^\infty \left( \alpha c^{-1} \Gamma_i / T \right)^{-1/\alpha} 1_{[\Gamma_i > cT s^{-\alpha}/\alpha]}\, 1_{[U_i \le t/T]} = \sum_{i=1}^\infty \left( \alpha c^{-1} \Gamma_i / T \right)^{-1/\alpha} 1_{\left[ (\alpha c^{-1} \Gamma_i / T)^{-1/\alpha} < s \right]}\, 1_{[U_i \le t/T]}, \quad s \in [0,\infty],\ t \in [0,T].$$
It is sometimes convenient to restrict $s$ to the set $[0,h]$ for some $h \in (0,\infty]$. In this case, Lemma 2 gives
$$X_{s,t}^{(\alpha)} \stackrel{fdd}{=} \sum_{i=1}^\infty \left( h^{-\alpha} + \alpha c^{-1} \Gamma_i / T \right)^{-1/\alpha} 1_{\left[ (h^{-\alpha} + \alpha c^{-1} \Gamma_i / T)^{-1/\alpha} < s \right]}\, 1_{[U_i \le t/T]}, \quad s \in [0,h],\ t \in [0,T], \tag{7}$$
where in the case $h = \infty$ we take $h^{-\alpha} = 0$.
When $h < \infty$, (7) gives the remainder process with Lévy measure
$$L_h^{(\alpha)}(dx) = c x^{-1-\alpha}\, 1_{[0 < x < h]}\, dx. \tag{8}$$
Such distributions are called truncated stable subordinators; see [16]. They can be seen as variants of tempered stable distributions, and in [17] they were called tempered stable distributions with hard truncation. See [8,9] and the references therein for more on tempered stable distributions. Note that the remainder process for a truncated stable subordinator is just the remainder process for a stable subordinator with the parameter $s$ restricted to a bounded interval.
Lévy measures of the form (8) can be extended to Lévy measures for any $\alpha \in (-\infty, 2)$ and correspond to subordinators so long as $\alpha \in (-\infty, 1)$. When $\alpha < 0$ and $z > 0$, we have
$$V(z) = \frac{c}{|\alpha|} \left( h^{|\alpha|} - z^{|\alpha|} \right)_+ \quad \text{and} \quad V^\leftarrow(z) = \left( h^{|\alpha|} - |\alpha| z / c \right)_+^{1/|\alpha|},$$
where $(\cdot)_+$ denotes the positive part, i.e., $x_+ = x \vee 0$ for $x \in \mathbb{R}$. The remainder process is given by
$$X_{s,t}^{(\alpha)} = \sum_{i=1}^\infty \left( h^{|\alpha|} - |\alpha| \Gamma_i / (cT) \right)_+^{1/|\alpha|} 1_{[U_i \le t/T]}\, 1_{\left[ h^{|\alpha|} - |\alpha| \Gamma_i / (cT) < s^{|\alpha|} \right]}, \quad s \in [0,h],\ t \in [0,T].$$
When $\alpha = 0$, for $0 < z < h$,
$$V(z) = c \log(h/z) \quad \text{and} \quad V^\leftarrow(z) = h e^{-z/c},$$
and the remainder process is given by
$$X_{s,t}^{(0)} = h \sum_{i=1}^\infty e^{-\Gamma_i/(cT)}\, 1_{[\Gamma_i > cT \log(h/s)]}\, 1_{[U_i \le t/T]}, \quad s \in [0,h],\ t \in [0,T],$$
where for $s = 0$, we take $1_{[\Gamma_i > cT \log(h/0)]} = 0$.
Truncated stable subordinators are no longer stable under addition. However, they satisfy the following property. For any $\alpha \in (-\infty,1) \setminus \{0\}$, if $X_1, X_2, \dots, X_n \stackrel{iid}{\sim} \mathrm{ID}_0(L_h^{(\alpha)}, 0)$, then
$$\sum_{i=1}^n X_i \stackrel{d}{=} n^{1/\alpha} Y,$$
where $Y \sim \mathrm{ID}_0\left( L_{n^{-1/\alpha} h}^{(\alpha)}, 0 \right)$. This is readily checked by comparing the characteristic functions of the random variables on the two sides.
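The comparison of characteristic functions can also be checked numerically. The sketch below evaluates the Lévy exponent $\psi(z) = \int_0^h \left( e^{izx} - 1 \right) c x^{-1-\alpha}\, dx$ for the case $\alpha = 1/2$ by quadrature; the substitution $x = u^2$ (which removes the integrable singularity at $0$) and all parameter values are our own illustrative choices:

```python
import numpy as np

def levy_exponent(z, c, h, n_grid=200_001):
    """psi(z) = int_0^h (e^{izx} - 1) c x^{-3/2} dx for the alpha = 1/2 case,
    computed with the substitution x = u^2 and the trapezoid rule."""
    u = np.linspace(0.0, np.sqrt(h), n_grid)
    f = np.empty(n_grid, dtype=complex)
    f[1:] = 2.0 * c * (np.exp(1j * z * u[1:] ** 2) - 1.0) / u[1:] ** 2
    f[0] = 2.0 * c * 1j * z                        # limit of the integrand as u -> 0
    return np.sum((f[1:] + f[:-1]) * np.diff(u)) / 2.0

alpha, c, h, n, z = 0.5, 1.0, 1.0, 3, 2.0
lhs = n * levy_exponent(z, c, h)                                     # exponent of X_1 + ... + X_n
rhs = levy_exponent(n ** (1 / alpha) * z, c, n ** (-1 / alpha) * h)  # exponent of n^{1/alpha} Y
```

Here `lhs` is the log characteristic function of $X_1 + \cdots + X_n$ at $z$ and `rhs` that of $n^{1/\alpha} Y$; the two agree, as the change of variables $u = n^{1/\alpha} x$ shows analytically.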
When $\alpha = 0$, the class of truncated stable subordinators coincides with the class of generalized $h$-Dickman distributions. Recall that a random variable $X$ is said to have a generalized $h$-Dickman distribution if
$$X \stackrel{d}{=} U^{1/c} (X + h),$$
where $h, c > 0$ and $U \sim U(0,1)$ is independent of $X$ on the right side. We denote this distribution by $\mathrm{GD}_h(c)$ and note that it has Lévy measure $L_h^{(0)}$. When $h = 1$, this is called a generalized Dickman distribution, and when $h = c = 1$, it is just called the Dickman distribution. Many properties of these distributions are discussed in [18,19,20,21,22,23] and the references therein. It is easily shown that, for any $h, \delta > 0$, if $X \sim \mathrm{GD}_h(c)$, then $\delta^{-1} X \sim \mathrm{GD}_{h/\delta}(c)$. The idea of the Dickman distribution being a truncated $0$-stable subordinator was first published in [11]; see also [12].

5. Main Results

In this section, we consider a process indexed by three parameters, $s$, $t$, and $\alpha$, which combines all of the processes described in Section 4 into one. We will derive various properties of this process, which will show how the processes in Section 4 relate to each other. Recall that we need $\alpha \in (-\infty,1)$ to ensure that the measure in (8) is a Lévy measure with finite variation. For $\alpha \in (-\infty,1)$, $x \in [0,\infty)$, and $h \in (0,\infty)$, let
$$g_h(\alpha, x) = \begin{cases} \left( h^{|\alpha|} - |\alpha| x \right)_+^{1/|\alpha|} & \alpha < 0 \\ h e^{-x} & \alpha = 0 \\ \left( h^{-\alpha} + \alpha x \right)^{-1/\alpha} & \alpha \in (0,1). \end{cases}$$
With this notation, all of the remainder processes discussed in Section 4 can be written as
$$X_{s,t}^{(\alpha)} = \sum_{i=1}^\infty g_h\!\left( \alpha, c^{-1} \Gamma_i / T \right) 1_{\left[ g_h(\alpha,\, c^{-1}\Gamma_i/T) < s \right]}\, 1_{[U_i \le t/T]}, \quad s \in [0,h],\ t \in [0,T], \tag{9}$$
where $c, T \in (0,\infty)$. When $\alpha \in (0,1)$, we can allow for $h = \infty$ so long as we take $h^{-\alpha} = 0$ in this case. We now collect some basic properties of the function $g_h$.
Lemma 4. 
1. For fixed $x \in [0,\infty)$ and $h \in (0,\infty)$, the function $g_h(\cdot, x)$ is continuous.
2. We have $g_h(\alpha, x) = h\, g_1(\alpha, h^\alpha x)$ for each choice of the parameters.
3. For any fixed $\alpha \in (-\infty,1)$ and $h \in (0,\infty)$, the function $g_h(\alpha, \cdot)$ is decreasing.
4. For any $\alpha \in (-\infty,1)$, $h \in (0,\infty)$, and $x \in [0,\infty)$, we have $g_h(\alpha, x) \le g_h(\alpha, 0) = h$.
5. For $h \in (0,1]$ and $x \in [0,\infty)$ fixed, the function $g_h(\cdot, x)$ is increasing.
It can be shown that for $h > 1$, $g_h(\cdot, x)$ is neither increasing for every $x$ nor decreasing for every $x$.
Proof. 
The first part is immediate for $\alpha \ne 0$ and can be easily verified for $\alpha = 0$ using L'Hôpital's rule. The second and third parts are immediate. The fourth part follows directly from the third. We now turn to the fifth part. Assume either $\alpha > 0$, or $\alpha < 0$ and $x < h^{|\alpha|}/|\alpha|$. By Part 2, it suffices to show that $\log g_1(\alpha, h^\alpha x)$ is increasing in $\alpha$. We have
$$\frac{\partial}{\partial \alpha} \log g_1(\alpha, h^\alpha x) = \frac{1}{\alpha^2 \left( \alpha h^\alpha x + 1 \right)} \left[ \left( \alpha h^\alpha x + 1 \right) \log\left( \alpha h^\alpha x + 1 \right) - \alpha h^\alpha x - \alpha^2 h^\alpha x \log h \right],$$
which is non-negative by 4.1.33 in [24] and the fact that $\log h \le 0$. Next, by L'Hôpital's rule,
$$\lim_{\alpha \to 0} \frac{\partial}{\partial \alpha} \log g_1(\alpha, h^\alpha x) = \frac{1}{2} x^2 - x \log h \ge 0,$$
where we note that for any $x$ and $\alpha < 0$ close enough to $0$, we have $x < h^{|\alpha|}/|\alpha|$. □
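The kernel $g_h$ is straightforward to implement, and the continuity at $\alpha = 0$ (Part 1) and the monotonicity in $\alpha$ for $h \le 1$ (Part 5) can be spot-checked numerically. A minimal sketch (the function name and evaluation points are our own choices):

```python
import numpy as np

def g(alpha, x, h=1.0):
    """The kernel g_h(alpha, x) from (9): alpha < 1, x >= 0, h > 0."""
    if alpha < 0:
        a = -alpha
        return np.maximum(h ** a - a * x, 0.0) ** (1.0 / a)
    if alpha == 0:
        return h * np.exp(-x)
    return (h ** (-alpha) + alpha * x) ** (-1.0 / alpha)
```

For $h = 1$ and small $|\alpha|$, both the $\alpha < 0$ and $\alpha \in (0,1)$ branches approach $e^{-x}$, in line with Part 1 of Lemma 4.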
Lemma 4 shows that $g_h(\cdot, x)$ is continuous in $\alpha$. However, $X_{s,t}^{(\alpha)}$ cannot be continuous in $\alpha$ for all values of $s$. This is because the indicator function $1_{[g_h(\alpha,\, c^{-1}\Gamma_i/T) < s]}$ depends on both $\alpha$ and $s$. Nevertheless, we will show that there is continuity in $\alpha$ so long as $s$ is bounded away from $g_h(\alpha, c^{-1}\Gamma_i/T)$ for each $i = 1, 2, \dots$. Toward this end, let $S_\alpha \subseteq [0,h]$ be such that $\bar{S}_\alpha \cap \left\{ g_h(\alpha, c^{-1}\Gamma_i/T) : i = 1, 2, \dots \right\} = \emptyset$, where $\bar{S}_\alpha$ is the closure of $S_\alpha$. Note that $S_\alpha$ is a random set. Clearly, we can construct such a set whose complement has an arbitrarily small (but positive) Lebesgue measure.
Theorem 1. 
Fix $\alpha \in (-\infty,1)$ and $h, c \in (0,\infty)$. We have
$$\lim_{\alpha' \to \alpha} \sup_{(s,t) \in S_\alpha \times [0,T]} \left| X_{s,t}^{(\alpha')} - X_{s,t}^{(\alpha)} \right| = 0 \quad w.p.\ 1,$$
and for any finite set $F \subset [0,h]$, we have
$$\lim_{\alpha' \to \alpha} \sup_{(s,t) \in F \times [0,T]} \left| X_{s,t}^{(\alpha')} - X_{s,t}^{(\alpha)} \right| = 0 \quad w.p.\ 1.$$
Proof. 
Fix $i \in \{1, 2, \dots\}$ and, for simplicity, let $b_i = g_h(\alpha, c^{-1}\Gamma_i/T)$. Note that $b_i \notin \bar{S}_\alpha$ and, thus, that there exists an $\epsilon_i > 0$ with $(b_i - \epsilon_i, b_i + \epsilon_i) \cap \bar{S}_\alpha = \emptyset$. For $\alpha'$ close enough to $\alpha$ so that $\left| g_h(\alpha', c^{-1}\Gamma_i/T) - b_i \right| < \epsilon_i$ and any $s \in S_\alpha$, we have $1_{[g_h(\alpha',\, c^{-1}\Gamma_i/T) < s]} = 1_{[g_h(\alpha,\, c^{-1}\Gamma_i/T) < s]}$. Combining this with Lemma 4 gives
$$\lim_{\alpha' \to \alpha} \sup_{s \in S_\alpha} \left| g_h(\alpha', c^{-1}\Gamma_i/T)\, 1_{[g_h(\alpha',\, c^{-1}\Gamma_i/T) < s]} - g_h(\alpha, c^{-1}\Gamma_i/T)\, 1_{[g_h(\alpha,\, c^{-1}\Gamma_i/T) < s]} \right| = 0.$$
Hence,
$$\lim_{\alpha' \to \alpha} \sup_{(s,t) \in S_\alpha \times [0,T]} \left| X_{s,t}^{(\alpha')} - X_{s,t}^{(\alpha)} \right| \le \lim_{\alpha' \to \alpha} \sum_{i=1}^\infty \sup_{s \in S_\alpha} \left| g_h(\alpha', c^{-1}\Gamma_i/T)\, 1_{[g_h(\alpha',\, c^{-1}\Gamma_i/T) < s]} - g_h(\alpha, c^{-1}\Gamma_i/T)\, 1_{[g_h(\alpha,\, c^{-1}\Gamma_i/T) < s]} \right|.$$
From here, the result follows by dominated convergence. To obtain a dominating function, we note that for any $\delta \in (0, 1-\alpha)$ and $\alpha'$ close enough to $\alpha$,
$$g_h(\alpha', c^{-1}\Gamma_i/T) = h\, g_1(\alpha', h^{\alpha'} c^{-1}\Gamma_i/T) \le h\, g_1(\alpha + \delta, h^{\alpha'} c^{-1}\Gamma_i/T) \le h\, g_1\!\left( \alpha + \delta, \left( h^{\alpha+\delta} \wedge 1 \right) c^{-1}\Gamma_i/T \right),$$
where we use Lemma 4. This is summable w.p. 1 by the shot-noise representation of the appropriate distribution. The second part follows by combining the fact that
$$P\left( \bigcup_{i=1}^\infty \left\{ g_h(\alpha, c^{-1}\Gamma_i/T) \in F \right\} \right) = 0$$
with the first part. □
Next, we verify the self-similarity of the remainder process.
Theorem 2. 
If $\alpha \in (-\infty,1)$ and $h \in (0,\infty)$, then for any $a \in (0,\infty)$, we have
$$\left\{ X_{as,\, a^\alpha t}^{(\alpha)}(c,h,T) : s \in [0, h(1 \wedge a^{-1})],\ t \in [0, T(1 \wedge a^{-\alpha})] \right\} \stackrel{fdd}{=} \left\{ a X_{s,t}^{(\alpha)}(c,h,T) : s \in [0, h(1 \wedge a^{-1})],\ t \in [0, T(1 \wedge a^{-\alpha})] \right\}. \tag{10}$$
For $\alpha \in (0,1)$, the result also holds when $h = \infty$.
Proof. 
For $\alpha \in (0,1)$, the fifth part of Lemma 2 implies that
$$X_{as,\, a^\alpha t}^{(\alpha)} = a \sum_{i=1}^\infty \left( \alpha c^{-1} \Gamma_i a^\alpha / T \right)^{-1/\alpha} 1_{[\alpha c^{-1} a^\alpha \Gamma_i / T > s^{-\alpha}]}\, 1_{[U_i \le a^\alpha t/T]} \stackrel{fdd}{=} a \sum_{i=1}^\infty \left( \alpha c^{-1} \Gamma_i / T \right)^{-1/\alpha} 1_{[\alpha c^{-1} \Gamma_i / T > s^{-\alpha}]}\, 1_{[U_i \le t/T]} = a X_{s,t}^{(\alpha)}.$$
For $\alpha = 0$, we have
$$X_{as,\, t}^{(0)} = a h \sum_{i=1}^\infty e^{-(\Gamma_i + cT\log a)/(cT)}\, 1_{[\Gamma_i + cT\log a > cT\log(h/s)]}\, 1_{[U_i \le t/T]} \stackrel{fdd}{=} a h \sum_{i=1}^\infty e^{-\Gamma_i/(cT)}\, 1_{[\Gamma_i > cT\log(h/s)]}\, 1_{[U_i \le t/T]} = a X_{s,t}^{(0)},$$
where the equality in fdd follows from the fourth part of Lemma 2 with $b = cT \log a$. Here, we use the facts that for $a \in (0,1)$, we have $[\Gamma_i + cT\log a > cT\log(h/s)] \subseteq [\Gamma_i > -cT\log a]$, and for $a > 1$, we have $[\Gamma_i > cT\log(h/s)] \subseteq [\Gamma_i > cT\log a]$ for $s \in [0, h/a]$.
For $\alpha < 0$, Lemma 2 gives
$$X_{as,\, a^\alpha t}^{(\alpha)} = a \sum_{i=1}^\infty \left( (h/a)^{|\alpha|} - |\alpha| \Gamma_i / (cT a^{|\alpha|}) \right)_+^{1/|\alpha|} 1_{[U_i \le t/(T a^{|\alpha|})]}\, 1_{\left[ (h/a)^{|\alpha|} - |\alpha| \Gamma_i / (cT a^{|\alpha|}) < s^{|\alpha|} \right]} \stackrel{fdd}{=} a \sum_{i=1}^\infty \left( (h/a)^{|\alpha|} - |\alpha| \Gamma_i / (cT) \right)_+^{1/|\alpha|} 1_{[U_i \le t/T]}\, 1_{\left[ (h/a)^{|\alpha|} - |\alpha| \Gamma_i / (cT) < s^{|\alpha|} \right]} = a \sum_{i=1}^\infty \left( h^{|\alpha|} - |\alpha| (\Gamma_i + b) / (cT) \right)_+^{1/|\alpha|} 1_{[U_i \le t/T]}\, 1_{\left[ h^{|\alpha|} - |\alpha| (\Gamma_i + b) / (cT) < s^{|\alpha|} \right]} \stackrel{fdd}{=} a \sum_{i=1}^\infty \left( h^{|\alpha|} - |\alpha| \Gamma_i / (cT) \right)_+^{1/|\alpha|} 1_{[U_i \le t/T]}\, 1_{\left[ h^{|\alpha|} - |\alpha| \Gamma_i / (cT) < s^{|\alpha|} \right]} = a X_{s,t}^{(\alpha)},$$
where $b = cT \left( h^{|\alpha|} - (h/a)^{|\alpha|} \right) / |\alpha|$. We note that for $a \in (0,1)$, we have $\left[ (h/a)^{|\alpha|} - |\alpha|\Gamma_i/(cT) < s^{|\alpha|} \right] \subseteq [\Gamma_i > -b]$, and for $a > 1$, we have $\left[ h^{|\alpha|} - |\alpha|\Gamma_i/(cT) < s^{|\alpha|} \right] \subseteq [\Gamma_i > b]$. □
Theorem 3. 
Let $L \ne 0$ be a Borel measure on $[0,\infty)$ such that $L_h(dx) = 1_{[x \le h]} L(dx)$ is a Lévy measure with finite variation for each $h > 0$. Let $X = \{X_{s,t} : s \in [0,h],\ t \in [0,T]\}$ be a remainder process with Lévy measure $L_h$. If there are functions $\phi, \psi : (0,\infty) \to (0,\infty)$ such that for each $a > 0$ and each $h > 0$,
$$\left\{ X_{\phi(a)s,\, \psi(a)t} : s \in [0, h(1 \wedge 1/\phi(a))],\ t \in [0, T(1 \wedge 1/\psi(a))] \right\} \stackrel{fdd}{=} \left\{ a X_{s,t} : s \in [0, h(1 \wedge 1/\phi(a))],\ t \in [0, T(1 \wedge 1/\psi(a))] \right\},$$
then there exists an $\alpha \in (-\infty,1)$ such that $X$ is of the form (9).
Proof. 
Fix $a, s > 0$ and note that $s \in [0, h(1 \wedge 1/\phi(a))]$ for large enough $h$. For any $t \in [0, T(1 \wedge 1/\psi(a))]$, $tL_s$ is the Lévy measure of $X_{s,t}$. Since $X_{\phi(a)s,\, \psi(a)t} \stackrel{d}{=} a X_{s,t}$, we have
$$L_s\!\left( a^{-1} B \right) = \psi(a) L_{s\phi(a)}(B), \quad B \in \mathcal{B}(\mathbb{R}).$$
Now, let $B \in \mathcal{B}(\mathbb{R})$ be bounded away from infinity. There exists an $s > 0$ such that $|x| < \min\{s, s\phi(a)/a\}$ for every $x \in B$. Thus, $L(B) = L_s(B)$ and
$$L(a^{-1}B) = L_{s/a}(a^{-1}B) = \psi(a) L_{s\phi(a)/a}(B) = \psi(a) L(B).$$
Hence, $L$ satisfies (11) and the result follows by Lemma 6. □
We now give some implications of our results for Lévy processes. First, we recall a definition. Let $X = \{X_r : r \in I\}$ and $Y = \{Y_r : r \in I\}$ be $\mathbb{R}$-valued stochastic processes, each with index set $I$. We say that $X$ is smaller than $Y$ in the usual stochastic order, and write $X \le_{st} Y$, if for any positive integer $n$ and any $r_1, r_2, \dots, r_n \in I$, we have
$$E\left[ \phi\left( X_{r_1}, X_{r_2}, \dots, X_{r_n} \right) \right] \le E\left[ \phi\left( Y_{r_1}, Y_{r_2}, \dots, Y_{r_n} \right) \right]$$
for every increasing Borel function $\phi : \mathbb{R}^n \to \mathbb{R}$ for which both expectations exist. Here, $\phi$ being increasing means that $\phi(x_1, x_2, \dots, x_n) \le \phi(y_1, y_2, \dots, y_n)$ whenever $x_i \le y_i$ for each $i = 1, 2, \dots, n$. For a monographic treatment of stochastic orders, see [25].
Theorem 4. 
For any $c, h, T \in (0,\infty)$ and $\alpha \in (-\infty,1)$, set
$$X_t^{(\alpha)}(c,h,T) = \sum_{i=1}^\infty g_h\!\left( \alpha, c^{-1}\Gamma_i/T \right) 1_{[U_i \le t/T]}, \quad t \in [0,T].$$
1. 
If $h \in (0,1]$, then for any fixed $\alpha$, $\{X_t^{(\alpha)}(c,h,T) : t \in [0,T]\}$ is an increasing Lévy process and, for any fixed $t \in [0,T]$, $\{X_t^{(\alpha)}(c,h,T) : \alpha \in (-\infty,1)\}$ is a continuous and increasing process.
2. 
For $-\infty < \alpha \le \beta < 1$, we have
$$\left\{ X_t^{(\alpha)}(c,h,T) : t \in [0,T] \right\} \le_{st} \left\{ X_t^{(\beta)}\!\left( h^{\beta-\alpha} c, h, T \right) : t \in [0,T] \right\}$$
and
$$\left\{ X_t^{(\alpha)}(c,h,T) : t \in [0, T(1 \wedge h^{\beta-\alpha})] \right\} \le_{st} \left\{ X_{t h^{\beta-\alpha}}^{(\beta)}(c,h,T) : t \in [0, T(1 \wedge h^{\beta-\alpha})] \right\}.$$
If, in addition, $h \in (0,1]$, then
$$\left\{ X_t^{(\alpha)}(c,h,T) : t \in [0,T] \right\} \le_{st} \left\{ X_t^{(\beta)}(c,h,T) : t \in [0,T] \right\}.$$
Proof. 
The first part follows from Theorem 1 with F = { h } and Lemma 4. The second part follows by combining Lemma 4 with (10) and Lemma 2. □

6. Lemmas

In this section, we give two lemmas that are used in our proofs. We present these in the multivariate setting. For the first lemma, this is needed because it will be applied in the context of equality in fdd. For the second lemma, it is done in the interest of generality, as the proof is no more difficult in the multivariate case and the result may be of independent interest. We now introduce our notation for working in the multivariate setting. Let $\mathbb{R}^d$ be the space of $d$-dimensional column vectors equipped with the usual norm $|\cdot|$. Let $S^{d-1} = \{x \in \mathbb{R}^d : |x| = 1\}$ be the unit sphere in $\mathbb{R}^d$. We write $\mathcal{B}(\mathbb{R}^d)$ and $\mathcal{B}(S^{d-1})$ to denote the Borel sets in $\mathbb{R}^d$ and $S^{d-1}$, respectively. For $A \subseteq (0,\infty)$ and $B \subseteq S^{d-1}$, we write $AB = \{x \in \mathbb{R}^d : |x| \in A,\ x/|x| \in B\}$ and $BA = AB$. For $x, y \in \mathbb{R}^d$, we write $x \circ y$ to denote component-wise multiplication.
Lemma 5. 
Let $E_1, E_2, \dots \stackrel{iid}{\sim} \mathrm{Exp}(\lambda)$, let $Y_1, Y_2, \dots$ be iid $\mathbb{R}^k$-valued random vectors independent of the $E_i$'s, and set $\Gamma_i = \sum_{k=1}^i E_k$. Let $I$ be a non-empty index set and, for each $r \in I$, let $f_r : \mathbb{R}^{k+1} \to \mathbb{R}^m$ be a Borel function. Assume that for each $r \in I$,
$$\sum_{i=1}^\infty f_r\left( \Gamma_i, Y_i \right) \quad \text{converges w.p.\ 1.}$$
1. 
If $b \ge 0$, then
$$\left\{ \sum_{i=1}^\infty f_r\left( \Gamma_i + b, Y_i \right) : r \in I \right\} \stackrel{fdd}{=} \left\{ \sum_{i=1}^\infty f_r\left( \Gamma_i, Y_i \right) 1_{[\Gamma_i > b]} : r \in I \right\}.$$
2. 
If $U_1, U_2, \dots \stackrel{iid}{\sim} U(0,1)$ are independent of the $Y_i$'s and the $E_i$'s, then for any $b \in (0,1)$ and any deterministic sequence $\{t_{i,r}\} \subseteq [0,1]$, we have
$$\left\{ \sum_{i=1}^\infty f_r\left( \Gamma_i b, Y_i \right) 1_{[U_i \le b t_{i,r}]} : r \in I \right\} \stackrel{fdd}{=} \left\{ \sum_{i=1}^\infty f_r\left( \Gamma_i, Y_i \right) 1_{[U_i \le t_{i,r}]} : r \in I \right\}.$$
Proof. 
Let $r_1, r_2, \dots, r_n \in I$ be an arbitrary finite collection of distinct elements of $I$. Consider the function $f : \mathbb{R}^{k+1} \to \mathbb{R}^{nm}$ whose output is the outputs of $f_{r_1}, f_{r_2}, \dots, f_{r_n}$ stacked on top of each other. Similarly, we write $1_{[U_i \le t_{i,r}]}$ to denote the random vector in $\mathbb{R}^{nm}$ whose first $m$ elements are $1_{[U_i \le t_{i,r_1}]}$, whose next $m$ elements are $1_{[U_i \le t_{i,r_2}]}$, and so forth.
We begin with the first part. Let $N(b) = \max\{i : \Gamma_i \le b\}$. For any integer $i \ge 1$, we have $\Gamma_{N(b)+i} = b + \Gamma_i'$, where $\Gamma_i' = \sum_{k=1}^i E_k'$, $E_k' = E_{N(b)+k}$ for $k \ge 2$, and $E_1' = \Gamma_{N(b)+1} - b$. By the memoryless property of the exponential distribution, $E_1' \sim \mathrm{Exp}(\lambda)$. Thus, $\{\Gamma_i' : i = 1, 2, \dots\} \stackrel{fdd}{=} \{\Gamma_i : i = 1, 2, \dots\}$ and
$$\sum_{i=1}^\infty f\left( \Gamma_i + b, Y_i \right) \stackrel{d}{=} \sum_{i=1}^\infty f\left( \Gamma_i' + b, Y_i \right) = \sum_{i=1}^\infty f\left( \Gamma_{N(b)+i}, Y_i \right) \stackrel{d}{=} \sum_{i=1}^\infty f\left( \Gamma_i, Y_i \right) 1_{[\Gamma_i > b]},$$
where the last equality follows by relabeling the $Y_i$'s.
We now turn to the second part. Let $A_0 = 0$ and, for integer $i \ge 1$, recursively define $A_i = \min\{k > A_{i-1} : U_k \le b\}$. For $i = 1, 2, \dots$, let $M_i = A_i - A_{i-1}$ and note that these are iid random variables, each having a geometric distribution with parameter $b$. Let
$$\Gamma_i^* = \sum_{k=1}^{A_i} E_k = \sum_{k=1}^i E_k^*, \quad \text{where} \quad E_k^* = \sum_{\ell = A_{k-1}+1}^{A_k} E_\ell \stackrel{d}{=} \sum_{\ell=1}^{M_k} E_\ell \sim \mathrm{Exp}(b\lambda).$$
Here, we use the well-known and easily verified fact that the distribution of a geometric sum of iid exponential random variables is exponential; see, e.g., Proposition 2.3 in [26]. Since $E_1^*, E_2^*, \dots$ are iid random variables with $b E_1^* \sim \mathrm{Exp}(\lambda)$, it follows that $\{\Gamma_i : i = 1, 2, \dots\} \stackrel{d}{=} \{b\Gamma_i^* : i = 1, 2, \dots\}$. Next, note that the conditional distribution of $U_i$ given $U_i \le b$ is the same as the distribution of $bU_i$. We have
$$\sum_{i=1}^\infty f\left( \Gamma_i b, Y_i \right) 1_{[U_i \le b t_{i,r}]} \stackrel{d}{=} \sum_{i=1}^\infty f\left( \Gamma_i^* b, Y_i \right) 1_{[b U_i \le b t_{i,r}]} \stackrel{d}{=} \sum_{i=1}^\infty f\left( \Gamma_i, Y_i \right) 1_{[U_i \le t_{i,r}]},$$
where the first equality in distribution is obtained by removing all terms in the sum on the left that have $U_i > b$, relabeling the $Y_i$'s and $U_i$'s, and replacing the $U_i$'s with random variables that have the appropriate conditional distribution. □
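The geometric-sum-of-exponentials fact used above, namely that a sum of a $\mathrm{Geometric}(b)$ number of iid $\mathrm{Exp}(\lambda)$ random variables is $\mathrm{Exp}(b\lambda)$, is easy to check by Monte Carlo (the parameter values, replication count, and seed below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
b, lam, n = 0.25, 2.0, 50_000

# M ~ Geometric(b): number of trials up to and including the first success
M = rng.geometric(b, size=n)
# for each replication, sum M iid Exp(lam) variables (scale = 1/rate)
S = np.array([rng.exponential(1.0 / lam, size=m).sum() for m in M])
```

If the fact holds, `S` behaves like an $\mathrm{Exp}(b\lambda)$ sample: its mean is near $1/(b\lambda)$ and its empirical tail matches $P(S > t) = e^{-b\lambda t}$.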
Lemma 6. 
Let $D \ne 0$ be a Borel measure on $\mathbb{R}^d$ with $D(\{0\}) = 0$ and assume that, for every $0 < m < M < \infty$, we have $\int_{m < |x| < M} D(dx) < \infty$. If there exists a function $\psi$ such that for every $a > 0$ and every $B \in \mathcal{B}(\mathbb{R}^d)$ that is bounded away from $0$ and infinity, we have
$$D(aB) = \psi(a) D(B), \tag{11}$$
then there exist an $\alpha \in \mathbb{R}$ and a finite Borel measure $\sigma \ne 0$ on $S^{d-1}$ such that $\psi(a) = a^{-\alpha}$ and
$$D(B) = \int_{S^{d-1}} \int_0^\infty 1_B(rs)\, r^{-1-\alpha}\, dr\, \sigma(ds), \quad B \in \mathcal{B}(\mathbb{R}^d). \tag{12}$$
In this case, $D$ is a Lévy measure if and only if $\alpha \in (0,2)$. Further, if for some $0 < h < \infty$, $D_h(dx) = 1_{[|x| \le h]} D(dx)$, then $D_h$ is a Lévy measure if and only if $\alpha \in (-\infty,2)$, and it has finite variation if and only if $\alpha \in (-\infty,1)$.
Proof. 
First, we show that (11) holds for every $B \in \mathcal{B}(\mathbb{R}^d)$. Fix $B \in \mathcal{B}(\mathbb{R}^d)$ and note that we can write $B = \bigcup_{i=1}^\infty B_i$, where the $B_i$'s are mutually disjoint and each $B_i$ is bounded away from $0$ and infinity. By countable additivity, we have
$$D(aB) = \sum_{i=1}^\infty D(aB_i) = \psi(a) \sum_{i=1}^\infty D(B_i) = \psi(a) D(B).$$
Next, fix $B \in \mathcal{B}(\mathbb{R}^d)$ so that $B$ is bounded away from $0$ and $\infty$ and satisfies $0 < D(B) < \infty$. Let $a > 0$, let $a_n > 0$ with $a_n \to a$, and note that $D\left( \bigcup_n a_n B \right) < \infty$ and $a_n B \to aB$. By (11) and the continuity of measures (see, e.g., Problem 10.4 in [27]), we have
$$\lim_{n \to \infty} \psi(a_n) D(B) = \lim_{n \to \infty} D(a_n B) = D(aB) = \psi(a) D(B),$$
which shows that $\psi$ is continuous. Next, for any $a, b > 0$, we have $D(abB) = \psi(a)\psi(b) D(B)$ and $D(abB) = \psi(ab) D(B)$, which implies that $\psi(a)\psi(b) = \psi(ab)$. Now, let $\psi^*(x) = \psi(e^x)$ for $x \in \mathbb{R}$ and note that $\psi^*(x+y) = \psi^*(x)\psi^*(y)$ for $x, y \in \mathbb{R}$. It follows (see, e.g., Appendix A20 in [27]) that $\psi^*(x) = e^{-\alpha x}$ for some $\alpha \in \mathbb{R}$. Hence, $\psi(x) = x^{-\alpha}$, $x > 0$, for some $\alpha \in \mathbb{R}$.
For α > 0 , countable additivity implies
$$ D(\{x : |x| > 1\}) = \sum_{m=0}^{\infty} D(\{x : e^m \le |x| < e^{m+1}\}) = D(\{x : 1 \le |x| < e\}) \sum_{m=0}^{\infty} e^{-\alpha m} < \infty. $$
Let $\sigma(C) = \alpha D(C(1,\infty))$ for $C \in \mathcal{B}(\mathbb{S}^{d-1})$ and note that for any $a > 0$,
$$ D(C(a,\infty)) = a^{-\alpha} D(C(1,\infty)) = a^{-\alpha} \alpha^{-1} \sigma(C) = \int_{\mathbb{S}^{d-1}} \int_0^{\infty} 1_{C(a,\infty)}(rs)\, r^{-1-\alpha}\, dr\, \sigma(ds). $$
Since the collection of sets of the form $C(a,\infty)$ for $a > 0$ and $C \in \mathcal{B}(\mathbb{S}^{d-1})$ is a $\pi$-system that generates the Borel sets, (12) follows by Theorem 10.3 in [27].
Next, for α < 0 , note that
$$ D(\{x : |x| \le 1\}) = \sum_{m=0}^{\infty} D(\{x : e^{-m-1} < |x| \le e^{-m}\}) = D(\{x : e^{-1} < |x| \le 1\}) \sum_{m=0}^{\infty} e^{-|\alpha| m} < \infty. $$
Let $\sigma(C) = |\alpha| D(C(0,1))$ for $C \in \mathcal{B}(\mathbb{S}^{d-1})$. For any $a > 0$,
$$ D(C(0,a)) = a^{-\alpha} D(C(0,1)) = a^{-\alpha} |\alpha|^{-1} \sigma(C) = \int_{\mathbb{S}^{d-1}} \int_0^{\infty} 1_{C(0,a)}(rs)\, r^{-1-\alpha}\, dr\, \sigma(ds). $$
From here, (12) follows as in the previous case.
Finally, we turn to the case $\alpha = 0$. Let $\sigma(C) = D((1,e]C)$ for $C \in \mathcal{B}(\mathbb{S}^{d-1})$. Fix $C \in \mathcal{B}(\mathbb{S}^{d-1})$. For any positive integers $m$ and $n$, (11) implies that
$$ \frac{n}{m}\, \sigma(C) = \frac{n}{m} \sum_{i=1}^{m} D\big((e^{(i-1)/m}, e^{i/m}]C\big) = \frac{n}{m} \sum_{i=1}^{m} D\big(e^{(i-1)/m}(1, e^{1/m}]C\big) = n\, D\big((1, e^{1/m}]C\big) = \sum_{i=1}^{n} D\big((e^{(i-1)/m}, e^{i/m}]C\big) = D\big((1, e^{n/m}]C\big). $$
Applying (11) again gives
$$ D\big((e^{-n/m}, 1]C\big) = D\big((1, e^{n/m}]C\big) = \frac{n}{m}\, \sigma(C). $$
By the continuity of measures (see Theorem 10.2 in [27]) and the fact that rational numbers are dense in $\mathbb{R}$, it follows that for $y > 0$, we have
$$ D\big((1, e^y]C\big) = D\big((e^{-y}, 1]C\big) = \sigma(C)\, y. $$
Equivalently, for $x > 1$, we have
$$ D\big((1, x]C\big) = D\big((1/x, 1]C\big) = \sigma(C) \log x. $$
Noting that for $x > 1$,
$$ \log x = \int_1^x r^{-1}\, dr = \int_{1/x}^1 r^{-1}\, dr $$
allows us to conclude that, for any $0 < a < b < \infty$,
$$ D\big((a,b]C\big) = \int_{\mathbb{S}^{d-1}} \int_0^{\infty} 1_{(a,b]C}(rs)\, r^{-1}\, dr\, \sigma(ds). $$
From here, the results follow as in the previous cases.
The results for when $D$ and $D_h$ are Lévy measures or have finite variation are easily verified and, hence, the proof is omitted. □
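Both the scaling property (11) with $\psi(a) = a^{-\alpha}$ and the finite-variation criterion can be sanity-checked numerically through the radial part $r^{-1-\alpha}\,dr$ of (12). The following is a minimal sketch; the helper `radial_mass` and all parameter values are illustrative assumptions, not anything from the paper:

```python
from math import log

def radial_mass(u, v, alpha):
    """Mass that the radial density r^(-1-alpha) dr assigns to (u, v], 0 < u < v."""
    if alpha == 0.0:
        return log(v) - log(u)                  # alpha = 0: the Dickman-type log case
    return (u ** -alpha - v ** -alpha) / alpha  # closed form for alpha != 0

# Scaling: D(aB) = a^(-alpha) D(B) for B = (u, v], so that aB = (a*u, a*v].
a, u, v = 3.0, 0.2, 1.5
for alpha in (-0.5, 0.0, 0.7):
    lhs = radial_mass(a * u, a * v, alpha)
    rhs = a ** -alpha * radial_mass(u, v, alpha)
    assert abs(lhs - rhs) < 1e-10

# Finite variation of D_h: int_0^h r * r^(-1-alpha) dr = int_0^h r^(-alpha) dr,
# i.e. radial_mass(eps, h, alpha - 1) as the lower cutoff eps shrinks to 0.
print(radial_mass(1e-12, 1.0, 0.5 - 1))  # alpha = 0.5 < 1: stays bounded
print(radial_mass(1e-12, 1.0, 1.5 - 1))  # alpha = 1.5 > 1: blows up as eps -> 0
```

The `alpha == 0.0` branch is exactly the logarithmic mass computed in the last displays of the proof, which is the sense in which the Dickman case sits at $\alpha = 0$ between the compound Poisson ($\alpha < 0$) and stable ($\alpha > 0$) regimes.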

Funding

This research received no external funding.

Data Availability Statement

This is purely theoretical research.

Conflicts of Interest

The author declares no conflicts of interest.

References

1. Yuan, S.; Kawai, R. Numerical aspects of shot noise representation of infinitely divisible laws and related processes. Probab. Surv. 2021, 18, 201–271.
2. Cont, R.; Tankov, P. Financial Modeling With Jump Processes; Chapman & Hall: Boca Raton, FL, USA, 2004.
3. Khintchine, A.Y. Zur Theorie der unbeschränkt teilbaren Verteilungsgesetze. Matematiceskij Sbornik 1931, 44, 79–119.
4. Rosiński, J. Series representations of Lévy processes from the perspective of point processes. In Lévy Processes—Theory and Applications; Barndorff-Nielsen, O.E., Mikosch, T., Resnick, S.I., Eds.; Birkhäuser: Boston, MA, USA, 2001; pp. 401–415.
5. Xia, Y.; Grabchak, M. Estimation and simulation for multivariate tempered stable distributions. J. Stat. Comput. Simul. 2022, 92, 451–475.
6. Tsilevich, N.; Vershik, A.; Yor, M. An infinite-dimensional analogue of the Lebesgue measure and distinguished properties of the gamma process. J. Funct. Anal. 2001, 185, 274–296.
7. Vershik, A.M.; Smorodina, N.V. Nonsingular transformations of symmetric Lévy processes. J. Math. Sci. 2014, 199, 123–129.
8. Grabchak, M. Tempered Stable Distributions: Stochastic Models for Multiscale Processes; Springer: Cham, Switzerland, 2016.
9. Rosiński, J. Tempering stable processes. Stoch. Processes Their Appl. 2007, 117, 677–707.
10. Cressie, N. A note on the behaviour of the stable distributions for small index α. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 1975, 33, 61–64.
11. Caravenna, F.; Sun, R.; Zygouras, N. The Dickman subordinator, renewal theorems, and disordered systems. Electron. J. Probab. 2019, 24, 1–40.
12. Gupta, N.; Kumar, A.; Leonenko, N.; Vaz, J. Generalized fractional derivatives generated by Dickman subordinator and related stochastic processes. Fract. Calc. Appl. Anal. 2024, 27, 1527–1563.
13. Sato, K. Lévy Processes and Infinitely Divisible Distributions; Cambridge University Press: Cambridge, UK, 1999.
14. Nolan, J.P. Univariate Stable Distributions: Models for Heavy Tailed Data; Springer Nature: Cham, Switzerland, 2020.
15. Samorodnitsky, G.; Taqqu, M.S. Stable Non-Gaussian Random Processes: Stochastic Models with Infinite Variance; Chapman & Hall: New York, NY, USA, 1994.
16. Dassios, A.; Lim, J.W.; Qu, Y. Exact simulation of a truncated Lévy subordinator. ACM Trans. Model. Comput. Simul. 2020, 30, 17.
17. Grabchak, M. On the simulation of general tempered stable Ornstein-Uhlenbeck processes. J. Stat. Comput. Simul. 2020, 90, 1057–1081.
18. Covo, S. On approximations of small jumps of subordinators with particular emphasis on a Dickman-type limit. J. Appl. Probab. 2009, 46, 732–755.
19. Grabchak, M.; Molchanov, S.A.; Panov, V.A. Around the infinite divisibility of the Dickman distribution and related topics. Zap. Nauchnykh Semin. POMI 2022, 115, 91–120.
20. Grabchak, M.; Zhang, X. Representation and simulation of multivariate Dickman distributions and Vervaat perpetuities. Stat. Comput. 2024, 34, 28.
21. Grahovac, D.; Kovtun, A.; Leonenko, N.N.; Pepelyshev, A. Dickman type stochastic processes with short- and long-range dependence. arXiv 2024, arXiv:2408.11521.
22. Molchanov, S.A.; Panov, V.A. The Dickman–Goncharov distribution. Russ. Math. Surv. 2020, 75, 1089.
23. Penrose, M.; Wade, A. Random minimal directed spanning trees and Dickman-type distributions. Adv. Appl. Probab. 2004, 36, 691–714.
24. Abramowitz, M.; Stegun, I.A. Handbook of Mathematical Functions, 10th ed.; Dover Publications: New York, NY, USA, 1972.
25. Shaked, M.; Shanthikumar, J.G. Stochastic Orders; Springer Science & Business Media: New York, NY, USA, 2007.
26. Kalashnikov, V.V. Geometric Sums: Bounds for Rare Events with Applications: Risk Analysis, Reliability, Queueing; Springer Science & Business Media: Dordrecht, The Netherlands, 2013.
27. Billingsley, P. Probability and Measure, 3rd ed.; John Wiley & Sons: New York, NY, USA, 1995.

Grabchak, M. On the Self-Similarity of Remainder Processes and the Relationship Between Stable and Dickman Distributions. Mathematics 2025, 13, 907. https://doi.org/10.3390/math13060907