Article

Donsker-Type Construction for the Self-Stabilizing and Self-Scaling Process

by
Xiequan Fan
1,* and
Jacques Lévy Véhel
2
1
School of Mathematics and Statistics, Northeastern University at Qinhuangdao, Qinhuangdao 066004, China
2
Case Law Analytics, 5 rue Olympe de Gouges, 44200 Nantes, France
*
Author to whom correspondence should be addressed.
Fractal Fract. 2025, 9(10), 677; https://doi.org/10.3390/fractalfract9100677
Submission received: 6 September 2025 / Revised: 12 October 2025 / Accepted: 16 October 2025 / Published: 21 October 2025
(This article belongs to the Special Issue Fractional Processes and Systems in Computer Science and Engineering)

Abstract

Using a Donsker-type construction, we prove the existence of a new class of processes, which we call self-stabilizing processes. These processes have a particular property: the “local intensity of jumps” varies with the values of the process. Moreover, we show that self-stabilizing processes have many other good properties, such as stochastic Hölder continuity and strong localizability. Such a self-stabilizing process is simultaneously a Markov process, a martingale (when the local index of stability is greater than 1), a self-scaling process and a self-regulating process.

1. Introduction

Stable Lévy motions owe their importance in both theory and practice to, among other factors, the modeling of price fluctuations. The seeming departure from normality, along with the demand for a self-similar model for financial data (i.e., one in which the shape of the distribution for yearly asset price changes resembles that of the constituent daily or monthly price changes), led Benoît Mandelbrot to propose that cotton prices follow an $\alpha$–stable Lévy motion with $\alpha$ equal to 1.7. The high variability of stable Lévy motions means that they are much more likely to take values far away from the median, and this is one of the reasons why they play an important role in modeling. Stable Lévy motions have been frequently used to model such diverse phenomena as gravitational fields of stars, temperature distributions in nuclear reactors and stresses in crystalline lattices, as well as stock market prices, gold prices and other financial data.
Recall that a stochastic process $\{L(t),\ t \geq 0\}$ is called a (standard) $\alpha$–stable Lévy motion if the following three conditions hold:
(C1)
$L(0) = 0$ almost surely;
(C2)
$L$ has independent increments;
(C3)
$L(t) - L(s) \sim S_\alpha((t-s)^{1/\alpha}, \beta, 0)$ for any $0 \leq s < t$ and for some $0 < \alpha \leq 2$, $-1 \leq \beta \leq 1$.
Here, $S_\alpha(\sigma, \beta, 0)$ stands for a stable random variable with index of stability $\alpha$, scale parameter $\sigma$, skewness parameter $\beta$ and a shift parameter equal to $0$. When $\beta = 0$, we denote $L$ as $L_\alpha$ for clarity. Such processes have stationary increments, and they are $1/\alpha$–self-similar (unless $\alpha = 1$, $\beta \neq 0$); that is, for all $c > 0$, the processes $\{L_\alpha(ct),\ t \geq 0\}$ and $\{c^{1/\alpha} L_\alpha(t),\ t \geq 0\}$ have the same finite-dimensional distributions. An $\alpha$–stable Lévy motion is symmetric when $\beta = 0$. Recall that $\alpha$ governs the intensity of jumps: when $\alpha$ is small, the intensity of jumps is large; conversely, when $\alpha$ is large, the intensity of jumps is small; see Figure 1.
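For intuition, here is a minimal Python sketch (our own illustration; all names are ours) that simulates symmetric $\alpha$–stable Lévy motions satisfying (C1)–(C3). It samples $S_\alpha(1, 0, 0)$ with the classical Chambers–Mallows–Stuck method for the symmetric case $\beta = 0$ and builds the path from independent scaled increments, so that a smaller $\alpha$ visibly produces larger jumps.

import numpy as np

def symmetric_stable(alpha, size, rng):
    # Chambers-Mallows-Stuck sampler for S_alpha(1, 0, 0), beta = 0:
    # U ~ Uniform(-pi/2, pi/2) and W ~ Exp(1), independent.
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
            * (np.cos(u - alpha * u) / w) ** ((1.0 - alpha) / alpha))

def stable_levy_path(alpha, n, rng):
    # By (C2)-(C3), increments over steps of length 1/n are independent
    # S_alpha(n**(-1/alpha), 0, 0) variables; (C1) fixes L(0) = 0.
    increments = n ** (-1.0 / alpha) * symmetric_stable(alpha, n, rng)
    return np.concatenate(([0.0], np.cumsum(increments)))

rng = np.random.default_rng(0)
for alpha in (0.8, 1.7, 2.0):
    path = stable_levy_path(alpha, 1000, rng)
    print(alpha, np.max(np.abs(np.diff(path))))  # largest jump shrinks as alpha grows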
However, the stationarity of their increments restricts the use of stable Lévy motions in some situations, and generalizations are needed, for instance, to model real-world phenomena such as annual temperature (rainfall, wind speed), epileptic episodes in EEG, stock market prices over long periods and daily internet traffic. A significant feature of these cases is that the “local intensity of jumps” varies with time $t$; that is, $\alpha = \alpha(t)$ is a function of time.
One way to deal with such a variation is to set up a class of processes whose stability index $\alpha$ is a function of $t$. More precisely, one aims at defining processes with non-stationary increments that are, at each time $t$, “tangent” (in a certain sense explained below) to a stable process with stability index $\alpha(t)$. Such processes are called multistable Lévy motions (MsLM); these are extensions of stable Lévy motions with non-stationary increments.
Formally, one says that a stochastic process $\{X(t),\ t \in [0,1]\}$ is multistable if, for almost all $t \in [0,1)$, $X$ is localizable at $t$ with tangent process $X_t'$ an $\alpha(t)$–stable process, $\alpha(t) \in (0,2)$. Recall that $\{X(t),\ t \in [0,1]\}$ is said to be $h$–localizable at $t$ (cf. Falconer [1,2]), with $h > 0$, if there exists a non-trivial process $X_t'$, called the tangent process of $X$ at $t$, such that
$$\lim_{r \to 0} \frac{X(t + ru) - X(t)}{r^h} = X_t'(u),  \qquad (1)$$
where convergence is in finite-dimensional distributions. By (1), a multistable process is also a multifractional process (cf. [3,4] for such processes). Two such extensions exist [5]:
1.
The field-based MsLM admit the following series representation:
$$L_F(t) = C_{\alpha(t)}^{1/\alpha(t)} \sum_{(X,Y) \in \Pi} \mathbf{1}_{[0,t]}(X)\, Y^{\langle -1/\alpha(t) \rangle} \quad (t \in [0,T]),  \qquad (2)$$
where $\Pi$ is a Poisson point process on $[0,T] \times \mathbb{R}$ with the Lebesgue measure as mean measure $\mathcal{L}$, $a^{\langle b \rangle} := \mathrm{sign}(a)\,|a|^b$ and
$$C_u = \Big( \Gamma(1-u) \cos\big(\tfrac{u\pi}{2}\big) \Big)^{-1}, \quad u \in (0,2).  \qquad (3)$$
Their joint characteristic function is as follows:
$$\mathbb{E}\exp\Big\{ i \sum_{j=1}^{d} \theta_j L_F(t_j) \Big\} = \exp\Big\{ -2 \int_{[0,T]} \int_0^{+\infty} \sin^2\Big( \sum_{j=1}^{d} \theta_j\, \frac{C_{\alpha(t_j)}^{1/\alpha(t_j)}}{2}\, y^{-1/\alpha(t_j)}\, \mathbf{1}_{[0,t_j]}(x) \Big)\, dy\, dx \Big\}  \qquad (4)$$
for $d \in \mathbb{N}$, $(\theta_1, \ldots, \theta_d) \in \mathbb{R}^d$ and $(t_1, \ldots, t_d) \in [0,T]^d$. These processes have correlated increments, and they are localizable as soon as the function $\alpha(t)$ is Hölder-continuous.
2.
The independent-increments MsLM admit the following series representation:
$$L_I(t) = \sum_{(X,Y) \in \Pi} C_{\alpha(X)}^{1/\alpha(X)}\, \mathbf{1}_{[0,t]}(X)\, Y^{\langle -1/\alpha(X) \rangle} \quad (t \in [0,T]).  \qquad (5)$$
As their name indicates, they have independent increments, and their joint characteristic function is as follows:
$$\mathbb{E}\exp\Big\{ i \sum_{j=1}^{d} \theta_j L_I(t_j) \Big\} = \exp\Big\{ -\int \Big| \sum_{j=1}^{d} \theta_j\, \mathbf{1}_{[0,t_j]}(s) \Big|^{\alpha(s)} ds \Big\},  \qquad (6)$$
for $d \in \mathbb{N}$, $(\theta_1, \ldots, \theta_d) \in \mathbb{R}^d$ and $(t_1, \ldots, t_d) \in [0,T]^d$. These processes are localizable as soon as the function $\alpha$ satisfies the following condition as $t \to 0$, uniformly for all $x$ in any finite interval (see [5]):
$$\big( \alpha(x) - \alpha(x+t) \big) \ln |t| \to 0.  \qquad (7)$$
In particular, independent-increments MsLM are, at each time $t$, “tangent” to a stable Lévy process with stability index $\alpha(t)$.
Of course, when $\alpha(t)$ is a constant $\alpha$ for all $t$, both $L_F$ and $L_I$ reduce to the Poisson representation of the $\alpha$–stable Lévy motion $L_\alpha$. In general, $L_F$ and $L_I$ are semi-martingales. For more properties of $L_F$ and $L_I$, such as Ferguson–Klass–LePage series representations, Hölder exponents, stochastic Hölder continuity, strong localizability, functional central limit theorems and the Hausdorff dimension of the range, we refer to [6,7,8]. See also [9,10] for wavelet series representations of multifractional multistable processes.
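Since $L_I$ has independent increments and, by (6), the increment over $[k/n, (k+1)/n]$ has characteristic function close to $\exp\{-|\theta|^{\alpha(k/n)}/n\}$ when $\alpha$ is frozen on each step, a simple Euler-type sketch replaces each increment by $n^{-1/\alpha(k/n)}\, S_{\alpha(k/n)}(1, 0, 0)$. This is our own illustrative discretization, not the series representation (5) itself; it reuses symmetric_stable from the sketch above.

def multistable_levy_path(alpha_of_t, n, rng):
    # Euler-type sketch of the independent-increments MsLM on [0, 1]:
    # the stability index is frozen at alpha(k/n) on each step.
    path = np.zeros(n + 1)
    for k in range(n):
        a = alpha_of_t(k / n)
        path[k + 1] = path[k] + n ** (-1.0 / a) * symmetric_stable(a, 1, rng)[0]
    return path

# Jumps intensify as alpha(t) decreases along [0, 1].
path = multistable_levy_path(lambda t: 1.9 - 0.9 * t, 1000, np.random.default_rng(1))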
Similar to the MsLM, we find that some stochastic processes have the property that the “local intensity of jumps” varies with the values of the process. For instance, when one analyzes certain records such as stock market prices (see Figure 2), exchange rates (see Figure 3) or annual temperature (see Figure 4), there seems to exist a relation between the local value of the record, denoted by $S(t)$, and the local intensity of jumps, measured by the index of stability $\alpha(t)$.
This calls for the development of self-stabilizing models, i.e., a class of stochastic processes $S$ satisfying a functional equation of the form $\alpha(t) = g(S(t))$ almost surely for all $t$, where $g$ is a smooth deterministic function. All the information concerning the future evolution of $\alpha$ is then incorporated in $g$, which may be estimated from historical data under the assumption that the relation between $S$ and $\alpha$ does not vary in time. This class of models is in a sense analogous to local volatility models: instead of having the local volatility depend on $S$, it is the local intensity of jumps that does so. It is also analogous to MsLM: instead of depending on time $t$, the “local intensity of jumps” depends on the values of $S$.
The main aim of this paper is to establish these self-stabilizing models, called self-stabilizing processes (cf. Falconer and Lévy Véhel [11]), via a Donsker-type construction. Formally, one says that a stochastic process $\{S(t),\ t \in [0,1]\}$ is a self-stabilizing process if, for almost all $t \in [0,1)$, $S$ is localizable at $t$ with tangent process $S_t'$ a $g(S(t))$–stable process, with respect to the conditional probability measure $\mathbb{P}^{S(t)}$. In formula, it holds that
$$\lim_{r \to 0} \frac{S(t + ru) - S(t)}{r^{1/g(S(t))}} = S_t'(u),  \qquad (8)$$
where convergence is in finite-dimensional distributions with respect to $\mathbb{P}^{S(t)}$. Formula (8) states that the “local intensity of jumps” varies with the value of $S(t)$, instead of with time $t$. In particular, if $S_t'(u) = L_{g(S(t))}(u)$, equality (8) implies that
$$S(t + ru) - S(t) \approx r^{1/g(S(t))}\, L_{g(S(t))}(u) \overset{d}{=} (ru)^{1/g(S(t))}\, L_{g(S(t))}(1),$$
provided that $r$ is small. Thus, it is natural to define $S(t) = \lim_{n \to \infty} S_n(\lfloor nt \rfloor / n)$, where
$$S_n\Big(\frac{k+1}{n}\Big) - S_n\Big(\frac{k}{n}\Big) = n^{-1/g(S_n(k/n))}\, L_{g(S_n(k/n))}(1),  \qquad (9)$$
which illustrates the use of our method to prove the existence of these self-stabilizing processes. The main difficulty associated with using this method involves proving the weak convergence of $S_n(\lfloor nt \rfloor / n)$. To this end, we make use of the Arzelà–Ascoli theorem and its generalization.
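The following Python sketch implements this Donsker-type recursion (compare (9); the scale function $\sigma$ of Section 2 is included as well). The particular choices of $g$ and $\sigma$ are our own illustrative assumptions, with $g$ taking values in a range $[a,b] \subset (0,2]$ as required later; symmetric_stable is the sampler from the first sketch.

def self_stabilizing_path(g, sigma, n, rng):
    # Discrete construction of S_n: the stability index and the scale of
    # each increment are read off the current value S_n(k/n).
    s = np.zeros(n + 1)
    for k in range(n):
        a = g(s[k])        # local stability index g(S_n(k/n))
        sc = sigma(s[k])   # local scale sigma(S_n(k/n))
        s[k + 1] = s[k] + sc * n ** (-1.0 / a) * symmetric_stable(a, 1, rng)[0]
    return s

# Illustrative Hoelder maps: high values of S lower g and so intensify jumps.
g = lambda x: float(np.clip(1.9 - 0.4 * np.tanh(x), 0.5, 2.0))
sigma = lambda x: 1.0
s = self_stabilizing_path(g, sigma, 2000, np.random.default_rng(2))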
The paper is organized as follows. In Section 2, we establish a self-stabilizing and self-scaling process. In Section 3, we show that it has many good properties, such as stochastic Hölder continuity and strong localizability. In particular, it is simultaneously a Markov process, a martingale (when g ( · ) > 1 ), and a self-regulating process. Conclusions are presented in Section 4. In Appendix A, we give the Arzelà–Ascoli theorem and its generalization.

2. Existence of Self-Stabilizing and Self-Scaling Process

In this section, we make use of the general version of the Arzelà–Ascoli theorem to prove the existence of this self-stabilizing process. Moreover, our self-stabilizing process is also a self-scaling process. We call a random process self-scaling if the scale parameter σ is also a function of the value of the process.
Definition 1. 
We call the sequence $(f_n(\theta))_{n \geq 1}$ subequicontinuous on $I \subset \mathbb{R}^d$ if, for any $\varepsilon > 0$, there exist $\delta > 0$ and a sequence of nonnegative numbers $(\varepsilon_n)_{n \geq 1}$, $\varepsilon_n \to 0$ as $n \to \infty$, such that, for all functions $f_n$ in the sequence,
$$|f_n(\theta_1) - f_n(\theta_2)| \leq \varepsilon + \varepsilon_n, \quad \theta_1, \theta_2 \in I,$$
whenever $\|\theta_1 - \theta_2\| < \delta$. In particular, if $\varepsilon_n = 0$ for all $n$, then $(f_n(\theta))_{n \geq 1}$ is equicontinuous.
The following lemma gives a general version of the Arzelà–Ascoli theorem, whose proof is given in Appendix A.
Lemma 1. 
Assume that $(f_n)_{n \geq 1}$ is a sequence of real-valued continuous functions defined on a closed and bounded set $\prod_{i=1}^{d} [a_i, b_i] \subset \mathbb{R}^d$. If this sequence is uniformly bounded and subequicontinuous, then there exists a subsequence $(f_{n_k})_{k \geq 1}$ that converges uniformly.
We give an approximation for a self-stabilizing and self-scaling process via Markov processes. The main idea behind this method is that the unknown stability index and scale parameter of the process at point $t + \frac{1}{n}$ are replaced by the predictable values $g(S(t))$ and $\sigma(S(t))$, respectively. When $n \to \infty$, it is obvious that $g(S(t + \frac{1}{n})) \to g(S(t))$ in distribution, provided that $g$ is a Hölder function and that $S(t + \frac{1}{n}) - S(t) \to 0$ almost surely. The same argument holds for $\sigma(S(t))$.
Theorem 1. 
Let $g(\cdot)$ be a Hölder function defined on $\mathbb{R}$ with values in the range $[a,b] \subset (0,2]$.
Let $\sigma(\cdot)$ be a positive Hölder function defined on $\mathbb{R}$, and assume that $\sigma(x)$ lies within the range $[c,d]$, $c > 0$. There exists a self-stabilizing and self-scaling process $S(t)$, $t \in [0,1]$, such that it is tangent at $u$ to the stable Lévy process $\sigma(S(u))\, L_{g(S(u))}$ under the conditional expectation given $S(u)$.
Proof. 
The theorem can be proved in four steps.
Step 1. Donsker’s construction: For all $n \in \mathbb{N}$ and all $k = 0, 1, \ldots, n$, set
$$\xi_1 = \sigma(0) \Big(\frac{1}{n}\Big)^{1/g(0)} L_{g(0)}, \quad S_1 = \xi_1; \qquad \xi_2 = \sigma(S_1) \Big(\frac{1}{n}\Big)^{1/g(S_1)} L_{g(S_1)}, \quad S_2 = S_1 + \xi_2; \qquad \xi_3 = \sigma(S_2) \Big(\frac{1}{n}\Big)^{1/g(S_2)} L_{g(S_2)}, \quad S_3 = S_2 + \xi_3; \ \ldots$$
where $L_{g(S_k)}$ is a symmetric $g(S_k)$–stable ($S_{g(S_k)}S$) random variable with unit scale parameter and, given $S_k$, is independent of the random variables $L_{g(S_j)}$, $j < k$. Then, we define a sequence of partial sums $(S_k)_{1 \leq k \leq n}$. In particular, when $g(x) \equiv 2$ and $\sigma(x) \equiv 1$, this method is known as Donsker’s construction (cf. Theorem 16.1 of Billingsley [12] for instance). Define the processes $(S_n)_{n \in \mathbb{N}}$, where
$$S_n(t) = S_{\lfloor nt \rfloor}, \quad t \in [0,1].$$
Then, for given $n$, $S_n(t)$, $t \in [0,1]$, is a Markov process. For simplicity of notation, denote
$$\mathbb{E}^{S_n(t_1), \ldots, S_n(t_d)}[\,\cdot\,] = \mathbb{E}\big[\,\cdot\,\big|\, S_n(t_1), \ldots, S_n(t_d)\big].$$
According to the construction of $S_n(t)$, we have $S_n(k/n) = S_k$ and, for all $\theta \in \mathbb{R}$,
$$\mathbb{E}^{S_n(k/n)} \exp\Big\{ i\theta \Big( S_n\Big(\frac{k+1}{n}\Big) - S_n\Big(\frac{k}{n}\Big) \Big) \Big\} = \mathbb{E}^{S_n(k/n)} \exp\{ i\theta\, \xi_{k+1} \} = \exp\Big\{ -\big|\sigma(S_n(k/n))\,\theta\big|^{g(S_n(k/n))}\, \frac{1}{n} \Big\}.  \qquad (10)$$
It is easy to see that, for all $t_1, t_2 \in [0,1]$ satisfying $t_1 < t_2$,
$$\begin{aligned}
&\mathbb{E}^{S_n(t_1)} \exp\Big\{ i\theta \big( S_n(t_2) - S_n(t_1) \big) + \sum_{k=\lfloor nt_1 \rfloor}^{\lfloor nt_2 \rfloor - 1} \big|\sigma(S_n(k/n))\,\theta\big|^{g(S_n(k/n))}\, \frac{1}{n} \Big\} \\
&\quad = \mathbb{E}^{S_n(t_1)} \mathbb{E}^{S_n\left(\frac{\lfloor nt_2 \rfloor - 1}{n}\right), \ldots, S_n(t_1)} \exp\Big\{ i\theta \big( S_n(t_2) - S_n(t_1) \big) + \sum_{k=\lfloor nt_1 \rfloor}^{\lfloor nt_2 \rfloor - 1} \big|\sigma(S_n(k/n))\,\theta\big|^{g(S_n(k/n))}\, \frac{1}{n} \Big\} \\
&\quad = \mathbb{E}^{S_n(t_1)} \exp\Big\{ i\theta \Big( S_n\Big(\frac{\lfloor nt_2 \rfloor - 1}{n}\Big) - S_n(t_1) \Big) + \sum_{k=\lfloor nt_1 \rfloor}^{\lfloor nt_2 \rfloor - 2} \big|\sigma(S_n(k/n))\,\theta\big|^{g(S_n(k/n))}\, \frac{1}{n} \Big\} \\
&\quad = \cdots = \mathbb{E}^{S_n(t_1)} \exp\Big\{ i\theta \Big( S_n\Big(\frac{\lfloor nt_1 \rfloor + 1}{n}\Big) - S_n(t_1) \Big) + \big|\sigma(S_n(\lfloor nt_1 \rfloor / n))\,\theta\big|^{g(S_n(\lfloor nt_1 \rfloor / n))}\, \frac{1}{n} \Big\} = 1,
\end{aligned}  \qquad (11)$$
where the last line follows from (10). In particular, it can be rewritten in the following form:
$$\mathbb{E}^{S_n(t_1)} \exp\Big\{ i\theta \big( S_n(t_2) - S_n(t_1) \big) + \int_{t_1^{(n)}}^{t_2^{(n)}} \big|\sigma(S_n(z^{(n)}))\,\theta\big|^{g(S_n(z))}\, dz \Big\} = 1,  \qquad (12)$$
where $z^{(n)} = \lfloor nz \rfloor / n$ as before. More generally, we have, for all $\theta_j \in \mathbb{R}$, $t_j \in [0,1]$ and $t_j \geq t_1$, $j = 1, 2, \ldots, d$,
$$\begin{aligned}
&\mathbb{E}^{S_n(t_1)} \exp\Big\{ i\sum_{j=2}^{d} \theta_j \big( S_n(t_j) - S_n(t_1) \big) + \int \Big| \sum_{j=2}^{d} \theta_j\, \sigma(S_n(z^{(n)}))\, \mathbf{1}_{[t_1^{(n)}, t_j^{(n)}]}(z) \Big|^{g(S_n(z))} dz \Big\} \\
&\quad = \mathbb{E}^{S_n(t_1)} \Big[ \exp\Big\{ i\sum_{j=2}^{d} \Big( \sum_{k=j}^{d} \theta_k \Big) \big( S_n(t_j) - S_n(t_{j-1}) \big) + \sum_{j=2}^{d} \int \Big| \sum_{k=j}^{d} \theta_k\, \sigma(S_n(z^{(n)}))\, \mathbf{1}_{[t_{j-1}^{(n)}, t_j^{(n)}]}(z) \Big|^{g(S_n(z))} dz \Big\} \Big] \\
&\quad = \mathbb{E}^{S_n(t_1)} \mathbb{E}^{S_n(t_1), \ldots, S_n(t_{d-1})} \Big[ \exp\Big\{ i\sum_{j=2}^{d} \Big( \sum_{k=j}^{d} \theta_k \Big) \big( S_n(t_j) - S_n(t_{j-1}) \big) + \sum_{j=2}^{d} \int \Big| \sum_{k=j}^{d} \theta_k\, \sigma(S_n(z^{(n)}))\, \mathbf{1}_{[t_{j-1}^{(n)}, t_j^{(n)}]}(z) \Big|^{g(S_n(z))} dz \Big\} \Big] \\
&\quad = \mathbb{E}^{S_n(t_1)} \Big[ \exp\Big\{ i\sum_{j=2}^{d-1} \Big( \sum_{k=j}^{d} \theta_k \Big) \big( S_n(t_j) - S_n(t_{j-1}) \big) + \sum_{j=2}^{d-1} \int \Big| \sum_{k=j}^{d} \theta_k\, \sigma(S_n(z^{(n)}))\, \mathbf{1}_{[t_{j-1}^{(n)}, t_j^{(n)}]}(z) \Big|^{g(S_n(z))} dz \Big\} \Big] \\
&\quad = \cdots = \mathbb{E}^{S_n(t_1)} \exp\Big\{ i\Big( \sum_{k=2}^{d} \theta_k \Big) \big( S_n(t_2) - S_n(t_1) \big) + \int \Big| \sum_{k=2}^{d} \theta_k\, \sigma(S_n(z^{(n)}))\, \mathbf{1}_{[t_1^{(n)}, t_2^{(n)}]}(z) \Big|^{g(S_n(z))} dz \Big\} = 1,
\end{aligned}  \qquad (13)$$
where the last line follows from (12). Since $S_n(t)$, $t \in [0,1]$, is a Markov process, equality (13) also holds if $\mathbb{E}^{S_n(t_1)}$ is replaced by $\mathbb{E}^{S_n(t_{-l}), \ldots, S_n(t_0), S_n(t_1)}$, where $t_{-l}, \ldots, t_0 \in [0,1]$ and $t_{-l}, \ldots, t_0 \leq t_1$.
Set
$$f_n(\theta, t_1, t_2) = \mathbb{E}\, e^{i\theta (S_n(t_2) - S_n(t_1))}$$
and set
$$\Delta_n(t_1, t_2) = \big| t_2^{(n)} - t_1^{(n)} \big|$$
for all $t_1, t_2 \in [0,1]$ and $\theta \in \mathbb{R}$. It is worth noting that the following estimation holds:
$$\Delta_n(t_1, t_2) \leq |t_1 - t_2| + \frac{1}{n}.$$
Recall that $\sigma(\cdot) \in [c,d]$, $c > 0$, and $g(\cdot) \in [a,b] \subset (0,2]$. Then, we may assume that $\sigma(x)^{g(x)}$ lies in the range $[m, M]$, $m > 0$. By (11), it is easy to see that the following inequalities hold for all $|\theta| \leq 1$:
$$\begin{aligned}
\exp\Big\{ -m \sum_{k=\lfloor nt_1 \rfloor}^{\lfloor nt_2 \rfloor - 1} |\theta|^b\, \frac{1}{n} \Big\}
&= \exp\Big\{ -m \sum_{k=\lfloor nt_1 \rfloor}^{\lfloor nt_2 \rfloor - 1} |\theta|^b\, \frac{1}{n} \Big\}\, \mathbb{E} \exp\Big\{ i\theta \big( S_n(t_2) - S_n(t_1) \big) + \sum_{k=\lfloor nt_1 \rfloor}^{\lfloor nt_2 \rfloor - 1} \big|\sigma(S_n(k/n))\,\theta\big|^{g(S_n(k/n))}\, \frac{1}{n} \Big\} \\
&\geq \mathbb{E} \exp\Big\{ i\theta \big( S_n(t_2) - S_n(t_1) \big) + \sum_{k=\lfloor nt_1 \rfloor}^{\lfloor nt_2 \rfloor - 1} \big|\sigma(S_n(k/n))\,\theta\big|^{g(S_n(k/n))}\, \frac{1}{n} - \sum_{k=\lfloor nt_1 \rfloor}^{\lfloor nt_2 \rfloor - 1} \big|\sigma(S_n(k/n))\,\theta\big|^{g(S_n(k/n))}\, \frac{1}{n} \Big\} = f_n(\theta, t_1, t_2)
\end{aligned}$$
and
$$\begin{aligned}
\exp\Big\{ -M \sum_{k=\lfloor nt_1 \rfloor}^{\lfloor nt_2 \rfloor - 1} |\theta|^a\, \frac{1}{n} \Big\}
&= \exp\Big\{ -M \sum_{k=\lfloor nt_1 \rfloor}^{\lfloor nt_2 \rfloor - 1} |\theta|^a\, \frac{1}{n} \Big\}\, \mathbb{E} \exp\Big\{ i\theta \big( S_n(t_2) - S_n(t_1) \big) + \sum_{k=\lfloor nt_1 \rfloor}^{\lfloor nt_2 \rfloor - 1} \big|\sigma(S_n(k/n))\,\theta\big|^{g(S_n(k/n))}\, \frac{1}{n} \Big\} \\
&\leq \mathbb{E} \exp\Big\{ i\theta \big( S_n(t_2) - S_n(t_1) \big) + \sum_{k=\lfloor nt_1 \rfloor}^{\lfloor nt_2 \rfloor - 1} \big|\sigma(S_n(k/n))\,\theta\big|^{g(S_n(k/n))}\, \frac{1}{n} - \sum_{k=\lfloor nt_1 \rfloor}^{\lfloor nt_2 \rfloor - 1} \big|\sigma(S_n(k/n))\,\theta\big|^{g(S_n(k/n))}\, \frac{1}{n} \Big\} = f_n(\theta, t_1, t_2).
\end{aligned}$$
Thus, for all $|\theta| \leq 1$,
$$\exp\big\{ -M \Delta_n(t_1, t_2)\, |\theta|^a \big\} \leq f_n(\theta, t_1, t_2) \leq \exp\big\{ -m \Delta_n(t_1, t_2)\, |\theta|^b \big\}.  \qquad (14)$$
By a similar argument, we have, for all $|\theta| > 1$,
$$\exp\big\{ -M \Delta_n(t_1, t_2)\, |\theta|^b \big\} \leq f_n(\theta, t_1, t_2) \leq \exp\big\{ -m \Delta_n(t_1, t_2)\, |\theta|^a \big\}.  \qquad (15)$$
The inequalities (14) and (15) can be rewritten in the following form: for all $\theta \in \mathbb{R}$,
$$\exp\big\{ -M \Delta_n(t_1, t_2) \max\big( |\theta|^a, |\theta|^b \big) \big\} \leq f_n(\theta, t_1, t_2) \leq \exp\big\{ -m \Delta_n(t_1, t_2) \min\big( |\theta|^a, |\theta|^b \big) \big\}.  \qquad (16)$$
Step 2. Subequicontinuity of $f_n(\theta, 0, t)$: Denote $f_n(\theta, 0, t)$ by $f_n(\theta, t)$. Next, we prove that, for given $N$, the sequence $(f_n(\theta, t))_{n \geq 1}$ is subequicontinuous on $(\theta, t) \in [-N, N] \times [0,1]$. By Theorem A2 and (14), it is easy to see that $(f_n(\theta, t))_{n \geq 1}$ is equicontinuous with respect to $\theta \in [-N, N]$. However, we can even prove a better result, namely that $(f_n(\theta, t))_{n \geq 1}$ is Hölder equicontinuous of order $a/(1+a)$ with respect to $\theta \in [-N, N]$. By (14) and the Billingsley inequality (cf. p. 47 of [12]), it is easy to see that, for all $t, v \in [0,1]$ and all $x > 2$,
$$\begin{aligned}
\mathbb{P}\big( |S_n(t) - S_n(v)| \geq x \big)
&\leq \frac{x}{2} \int_{-2/x}^{2/x} \Big( 1 - \mathbb{E}\, e^{i\theta (S_n(t) - S_n(v))} \Big)\, d\theta
\leq \frac{x}{2} \int_{-2/x}^{2/x} \Big( 1 - \exp\big\{ -M |\theta|^a \Delta_n(t, v) \big\} \Big)\, d\theta \\
&\leq \frac{x}{2} \int_{-2/x}^{2/x} M |\theta|^a\, \Delta_n(t, v)\, d\theta
= \frac{2^{a+1}}{a+1}\, M\, x^{-a}\, \Delta_n(t, v).
\end{aligned}  \qquad (17)$$
Similarly, by (15), for all $t, v \in [0,1]$ and all $0 < x \leq 2$,
$$\begin{aligned}
\mathbb{P}\big( |S_n(t) - S_n(v)| \geq x \big)
&\leq \frac{x}{2} \int_{-2/x}^{2/x} \Big( 1 - \mathbb{E}\, e^{i\theta (S_n(t) - S_n(v))} \Big)\, d\theta
\leq \frac{x}{2} \Big( \int_{|\theta| < 1} M |\theta|^a\, d\theta + \int_{1 \leq |\theta| \leq 2/x} M |\theta|^b\, d\theta \Big) \Delta_n(t, v) \\
&\leq M \Big( \frac{x}{a+1} + \frac{2^{b+1}}{b+1}\, \frac{1}{x^b} \Big) \Delta_n(t, v).
\end{aligned}  \qquad (18)$$
For all $(\theta_i, t_i) \in \mathbb{R} \times [0,1]$, $i = 1, 2$, we have
$$\big| f_n(\theta_1, t_1) - f_n(\theta_2, t_1) \big| = \Big| \mathbb{E}\, e^{i\theta_1 S_n(t_1)} \Big( 1 - e^{i(\theta_2 - \theta_1) S_n(t_1)} \Big) \Big| \leq \mathbb{E}\, \Big| 1 - e^{i(\theta_2 - \theta_1) S_n(t_1)} \Big|  \qquad (19)$$
and
$$\big| f_n(\theta_2, t_1) - f_n(\theta_2, t_2) \big| = \Big| \mathbb{E}\, e^{i\theta_2 S_n(t_1)} \Big( 1 - e^{i\theta_2 (S_n(t_2) - S_n(t_1))} \Big) \Big| \leq \mathbb{E}\, \Big| 1 - e^{i\theta_2 (S_n(t_2) - S_n(t_1))} \Big|.  \qquad (20)$$
The random variable $| 1 - e^{i(\theta_2 - \theta_1) S_n(t_1)} |$ is dominated by the constant 2. Therefore, (19) and (17) imply that
$$\begin{aligned}
\big| f_n(\theta_1, t_1) - f_n(\theta_2, t_1) \big|
&\leq \mathbb{E}\, \Big| 1 - e^{i(\theta_2 - \theta_1) S_n(t_1)} \Big|\, \mathbf{1}_{\{ |S_n(t_1)| \leq |\theta_2 - \theta_1|^{-1/(1+a)} \}} + \mathbb{E}\, \Big| 1 - e^{i(\theta_2 - \theta_1) S_n(t_1)} \Big|\, \mathbf{1}_{\{ |S_n(t_1)| > |\theta_2 - \theta_1|^{-1/(1+a)} \}} \\
&\leq \mathbb{E}\, \big| (\theta_2 - \theta_1) S_n(t_1) \big|\, \mathbf{1}_{\{ |S_n(t_1)| \leq |\theta_2 - \theta_1|^{-1/(1+a)} \}} + 2\, \mathbb{E}\, \mathbf{1}_{\{ |S_n(t_1)| > |\theta_2 - \theta_1|^{-1/(1+a)} \}} \\
&\leq C\, |\theta_2 - \theta_1|^{a/(1+a)},
\end{aligned}$$
where $C$ is a constant only depending on $M$ and $a$. Thus, $(f_n(\theta, t))_{n \geq 1}$ is Hölder equicontinuous of order $a/(1+a)$ with respect to $\theta$. Similarly, the inequalities (20) and (18) imply that
$$\begin{aligned}
\big| f_n(\theta_2, t_1) - f_n(\theta_2, t_2) \big|
&\leq \mathbb{E}\, \Big| 1 - e^{i\theta_2 (S_n(t_2) - S_n(t_1))} \Big|\, \mathbf{1}_{\{ |S_n(t_2) - S_n(t_1)| \leq \Delta_n(t_1, t_2)^{1/(1+b)} \}} + \mathbb{E}\, \Big| 1 - e^{i\theta_2 (S_n(t_2) - S_n(t_1))} \Big|\, \mathbf{1}_{\{ |S_n(t_2) - S_n(t_1)| \geq \Delta_n(t_1, t_2)^{1/(1+b)} \}} \\
&\leq \mathbb{E}\, \big| \theta_2 (S_n(t_2) - S_n(t_1)) \big|\, \mathbf{1}_{\{ |S_n(t_2) - S_n(t_1)| \leq \Delta_n(t_1, t_2)^{1/(1+b)} \}} + 2\, \mathbb{E}\, \mathbf{1}_{\{ |S_n(t_2) - S_n(t_1)| \geq \Delta_n(t_1, t_2)^{1/(1+b)} \}} \\
&\leq |\theta_2|\, \Delta_n(t_1, t_2)^{1/(1+b)} + C\, \Delta_n(t_1, t_2)^{1/(1+b)}
\leq \big( |\theta_2| + C \big) \Big( |t_2 - t_1|^{1/(1+b)} + n^{-1/(1+b)} \Big),
\end{aligned}$$
where $C$ is a constant depending on $M$, $a$ and $b$. The last inequality implies that, for all $\theta \in [-N, N]$, $(f_n(\theta, t))_{n \geq 1}$ is Hölder subequicontinuous of order $1/(1+b)$ with respect to $t$. Notice that
$$\big| f_n(\theta_1, t_1) - f_n(\theta_2, t_2) \big| \leq \big| f_n(\theta_1, t_1) - f_n(\theta_2, t_1) \big| + \big| f_n(\theta_2, t_1) - f_n(\theta_2, t_2) \big|.$$
Thus, the sequence $(f_n(\theta, t))_{n \geq 1}$ is subequicontinuous on $(\theta, t) \in [-N, N] \times [0,1]$.
Step 3. Convergence of a subsequence: Set $f_{\sigma_0(n)}(\theta, t) = f_n(\theta, t)$. For every given $N \in \mathbb{N}$, $N \geq 1$, by Lemma 1, there exist a subsequence $(f_{\sigma_N(n)}(\theta, t))_{n \geq 1}$ of $(f_{\sigma_{N-1}(n)}(\theta, t))_{n \geq 1}$ and a function $f(\theta, t)$ defined on $(\theta, t) \in \mathbb{R} \times [0,1]$ such that $\lim_{n \to \infty} f_{\sigma_N(n)}(\theta, t) = f(\theta, t)$ uniformly on $(\theta, t) \in [-N, N] \times [0,1]$. By induction, the following relation holds:
$$(f_n(\theta, t))_{n \geq 1} \supset (f_{\sigma_1(n)}(\theta, t))_{n \geq 1} \supset \cdots \supset (f_{\sigma_{N-1}(n)}(\theta, t))_{n \geq 1} \supset (f_{\sigma_N(n)}(\theta, t))_{n \geq 1} \supset \cdots$$
Hence, the diagonal subsequence $(f_{\sigma_N(N)}(\theta, t))_{N \geq 1}$ converges to $f(\theta, t)$ on $(\theta, t) \in \mathbb{R} \times [0,1]$. Moreover, by (14) and (15), the following inequalities hold for all $|\theta| \leq 1$:
$$e^{-tM|\theta|^a} \leq f(\theta, t) \leq e^{-tm|\theta|^b}  \qquad (21)$$
and, for all $|\theta| > 1$,
$$e^{-tM|\theta|^b} \leq f(\theta, t) \leq e^{-tm|\theta|^a}.  \qquad (22)$$
By the Lévy continuity theorem, there exists a random process $S(t)$ such that $S_{\sigma_N(N)}(t)$ converges to $S(t)$ in distribution for any $t \in [0,1]$ as $N \to \infty$.
Similarly, using (13) instead of (11), we can prove that, for all $t_j \in [0,1]$, $j = 1, \ldots, d$, there exists a random process $S(t)$ such that $S_{n_k}(t)$ converges to $S(t)$ in finite-dimensional distribution, where $(S_{n_k}(t))$ is a subsequence of $(S_{\sigma_N(N)}(t))$. Letting $k \to \infty$, by (13) and the dominated convergence theorem, we have, for all $(\theta_j, t_j) \in \mathbb{R} \times [0,1]$, $j = 1, 2, \ldots, d$,
$$\mathbb{E}^{S(t_1)} \exp\Big\{ i\sum_{j=2}^{d} \theta_j \big( S(t_j) - S(t_1) \big) + \int \Big| \sum_{j=2}^{d} \theta_j\, \sigma(S(z))\, \mathbf{1}_{[t_1, t_j]}(z) \Big|^{g(S(z))} dz \Big\} = 1.  \qquad (23)$$
Equality (23) also holds if $\mathbb{E}^{S(t_1)}$ is replaced by $\mathbb{E}^{S(t_{-l}), \ldots, S(t_0), S(t_1)}$, where $t_{-l}, \ldots, t_0 \in [0,1]$ and $t_{-l}, \ldots, t_0 \leq t_1$.
Step 4. Self-stabilizing property of the limiting process: Next, we prove that $S$ is a self-stabilizing process; that is, $S$ is localizable at $u$ to a $g(S(u))$–stable Lévy motion $L_{g(S(u))}(t)$ under the conditional expectation given $S(u)$. For any $(t_1, \ldots, t_d) \in [0,1]^d$ and $r > 0$, from equality (23), it is easy to see that
$$\mathbb{E}^{S(u)} \exp\Big\{ i\sum_{j=1}^{d} \theta_j\, \frac{S(u + rt_j) - S(u)}{r^{1/g(S(u))}} + \int \Big| \sum_{j=1}^{d} \theta_j\, r^{-1/g(S(u))}\, \sigma(S(z))\, \mathbf{1}_{[u, u + rt_j]}(z) \Big|^{g(S(z))} dz \Big\} = 1.  \qquad (24)$$
Setting $z = u + rt$, we find that
$$\mathbb{E}^{S(u)} \exp\Big\{ i\sum_{j=1}^{d} \theta_j\, \frac{S(u + rt_j) - S(u)}{r^{1/g(S(u))}} + \int \Big| \sum_{j=1}^{d} \theta_j\, \sigma(S(u + rt))\, \mathbf{1}_{[0, t_j]}(t) \Big|^{g(S(u + rt))}\, r^{(g(S(u)) - g(S(u + rt)))/g(S(u))}\, dt \Big\} = 1.  \qquad (25)$$
From equality (23), by an argument similar to (14), we obtain, for all $\theta \in \mathbb{R}$ and all $t, v \in [0,1]$ with $t, v \geq u$,
$$\exp\big\{ -M |t - v| \max\big( |\theta|^a, |\theta|^b \big) \big\} \leq \mathbb{E}^{S(u)}\, e^{i\theta (S(t) - S(v))} \leq \exp\big\{ -m |t - v| \min\big( |\theta|^a, |\theta|^b \big) \big\}.$$
By an argument similar to that of (18), it follows that, for $t, v \in [0,1]$ with $t, v \geq u$,
$$\mathbb{P}^{S(u)}\Big( |S(t) - S(v)| \geq |t - v|^\beta \Big) \leq C\, |t - v|^{1 - b\beta},$$
where $C$ is a constant depending on $M$, $a$ and $b$. Since $g$ and $\sigma$ are both Hölder functions, by (25), we get
$$\lim_{r \to 0} r^{(g(S(u)) - g(S(u + rt)))/g(S(u))} = 1, \qquad \lim_{r \to 0} g(S(u + rt)) = g(S(u))$$
and
$$\lim_{r \to 0} \sigma(S(u + rt)) = \sigma(S(u))$$
in probability with respect to $\mathbb{P}^{S(u)}$. Hence, using the dominated convergence theorem, we have
$$\begin{aligned}
\lim_{r \to 0} \mathbb{E}^{S(u)} \exp\Big\{ i\sum_{j=1}^{d} \theta_j\, \frac{S(u + rt_j) - S(u)}{r^{1/g(S(u))}} \Big\}
&= \exp\Big\{ -\int \Big| \sum_{j=1}^{d} \theta_j\, \sigma(S(u))\, \mathbf{1}_{[0, t_j]}(t) \Big|^{g(S(u))} dt \Big\} \\
&= \mathbb{E}^{S(u)} \exp\Big\{ i\sum_{j=1}^{d} \theta_j\, \sigma(S(u))\, L_{g(S(u))}(t_j) \Big\},
\end{aligned}$$
which means that $S$ is localizable at $u$ to the $g(S(u))$–stable Lévy motion $\sigma(S(u))\, L_{g(S(u))}(t)$ under the conditional expectation given $S(u)$, where $L_\alpha(t)$ is the standard symmetric $\alpha$–stable Lévy motion. This completes the proof of the theorem. □
Remark 1. 
Let us comment on Theorem 1.
1. 
A slightly different method by which to approximate the self-stabilizing and self-scaling process $S$ can be described as follows. Define
$$S_n(0) \equiv 0, \quad S_n\Big(\frac{1}{n}\Big) = \sigma(0) \Big(\frac{1}{n}\Big)^{1/g(0)} L_{g(0)}, \quad S_n\Big(\frac{k+1}{n}\Big) = S_n\Big(\frac{k}{n}\Big) + \sigma(S_n(k/n)) \Big(\frac{1}{n}\Big)^{1/g(S_n(k/n))} L_{g(S_n(k/n))}, \ \ldots$$
Now, let
$$S_n(t) = S_n\big( \lfloor nt \rfloor / n \big).$$
By an argument similar to the proof of Theorem 1, there exists a subsequence $(S_{n_k})$ of $(S_n)$ such that $S_{n_k}$ converges to the process $S$ in finite-dimensional distribution. Notice that, with this method, all of the random variables $(L_{g(S_n(k/n))})_{k \in [1, n]}$ are changed from $S_n$ to $S_{n+1}$, even if $g(x)$ is a constant. However, in the proof of Theorem 1, we only add one random variable from $S_n$ to $S_{n+1}$ when $g(\cdot) \equiv c$, a constant.
2. 
Notice that if we define $S_n(k/n)$ for all $k = 0, 1, 2, \ldots$, then, by an argument similar to that of Theorem 1, we can define the self-stabilizing and self-scaling process $\{S(x) : x \in \mathbb{R}_+\}$ on the positive half-line via the limit of
$$S_n(x) = S_n\big( \lfloor nx \rfloor / n \big), \quad x \in [0, N].$$
3. 
An interesting question is whether $S_n(t)$ itself converges to $S(t)$ in finite-dimensional distribution. To answer this question, we rely on the following fact: if every subsequence of $(x_n)$ has a further subsequence that converges to $x$, then $(x_n)$ converges to $x$. According to the proof of the theorem, every subsequence of $S_n(t)$ has a further subsequence that converges in finite-dimensional distribution to a process $S$ defined by (23). Thus, the question reduces to proving that (23) defines a unique process $S$.

3. Properties of the Self-Stabilizing and Self-Scaling Process

In this section, we consider the properties of the self-stabilizing and self-scaling process $S$ established in the proof of Theorem 1. It seems that the process $S$ shares many properties with the multistable Lévy motions (cf. Falconer and Liu [5]). Assume that $\sigma(x)^{g(x)}$ lies in the range $[m, M]$, $m > 0$.

3.1. Tail Probabilities

Notice that the estimation for the characteristic function of $S(t_1) - S(t_2)$ under the probability measure $\mathbb{P}^{S(u)}$ is given by (24). By an argument similar to that of (17) and (18), for all $t_1, t_2, u \in [0,1]$ such that $t_1, t_2 \geq u$, the following two inequalities hold: for all $x > 2$,
$$\mathbb{P}^{S(u)}\big( |S(t_1) - S(t_2)| \geq x \big) \leq \frac{2^{a+1}}{a+1}\, M\, x^{-a}\, |t_1 - t_2|  \qquad (29)$$
and, for all $0 < x \leq 2$,
$$\mathbb{P}^{S(u)}\big( |S(t_1) - S(t_2)| \geq x \big) \leq M \Big( \frac{x}{a+1} + \frac{2^{b+1}}{b+1}\, \frac{1}{x^b} \Big) |t_1 - t_2|.  \qquad (30)$$
Thus, the following inequalities are obvious:
Property 1. 
For $t_1, t_2, u \in [0,1]$ with $t_1, t_2 \geq u$ and all $x > 2$, it holds that
$$\mathbb{P}^{S(u)}\big( |S(t_1) - S(t_2)| \geq x \big) \leq C\, \frac{1}{x^a}\, |t_1 - t_2|,$$
where $C$ is a constant depending on $M$ and $a$. In particular, it implies that, for all $x > 2$,
$$\mathbb{P}\big( |S(t_1) - S(t_2)| \geq x \big) \leq C\, \frac{1}{x^a}\, |t_1 - t_2|.$$
For more precise bounds on tail probabilities, we have the following inequalities:
Property 2. 
For $t_1, t_2, u \in [0,1]$ with $t_2 \geq t_1 \geq u$ and all $x > 0$, it holds that
$$\mathbb{P}^{S(u)}\big( |S(t_2) - S(t_1)| \geq x \big) \leq \mathbb{E}^{S(u)} \int_{t_1}^{t_2} \frac{2^{g(S(t))+1}}{g(S(t))+1} \Big( \frac{\sigma(S(t))}{x} \Big)^{g(S(t))} dt \leq C\, \mathbb{E}^{S(u)} \int_{t_1}^{t_2} \Big( \frac{\sigma(S(t))}{x} \Big)^{g(S(t))} dt,$$
where
$$C = \max\Big\{ \frac{2^{a+1}}{a+1},\ \frac{2^{b+1}}{b+1} \Big\}.$$
In particular, it implies that, for all $x > 0$,
$$\mathbb{P}\big( |S(t_2) - S(t_1)| \geq x \big) \leq C\, \mathbb{E} \int_{t_1}^{t_2} \Big( \frac{\sigma(S(t))}{x} \Big)^{g(S(t))} dt.$$
Proof. 
Equality (10) and Jensen’s inequality imply that, for $u \leq k/n$ and all $\theta \in \mathbb{R}$,
$$\mathbb{E}^{S_n(u)} \exp\Big\{ i\theta \Big( S_n\Big(\frac{k+1}{n}\Big) - S_n\Big(\frac{k}{n}\Big) \Big) \Big\} = \mathbb{E}^{S_n(u)} \exp\Big\{ -\frac{1}{n} \big|\sigma(S_n(k/n))\,\theta\big|^{g(S_n(k/n))} \Big\} \geq \exp\Big\{ -\frac{1}{n}\, \mathbb{E}^{S_n(u)} \big|\sigma(S_n(k/n))\,\theta\big|^{g(S_n(k/n))} \Big\}.$$
By the last inequality and an argument similar to that of (11), we have, for all $t_1, t_2, u \in [0,1]$ with $t_2 \geq t_1 \geq u$,
$$\mathbb{E}^{S_n(u)} \exp\big\{ i\theta \big( S_n(t_2) - S_n(t_1) \big) \big\} \geq \exp\Big\{ -\mathbb{E}^{S_n(u)} \int_{t_1^{(n)}}^{t_2^{(n)}} \big|\sigma(S_n(t))\,\theta\big|^{g(S_n(t))} dt \Big\}.$$
Thus, we have
$$\mathbb{E}^{S(u)}\, e^{i\theta (S(t_2) - S(t_1))} \geq \exp\Big\{ -\mathbb{E}^{S(u)} \int_{t_1}^{t_2} \big|\sigma(S(t))\,\theta\big|^{g(S(t))} dt \Big\}.  \qquad (31)$$
By the Billingsley inequality and (31), it is easy to see that, for all $x > 0$,
$$\begin{aligned}
\mathbb{P}^{S(u)}\big( |S(t_2) - S(t_1)| \geq x \big)
&\leq \frac{x}{2} \int_{-2/x}^{2/x} \Big( 1 - \mathbb{E}^{S(u)}\, e^{i\theta (S(t_2) - S(t_1))} \Big)\, d\theta
\leq \frac{x}{2} \int_{-2/x}^{2/x} \Big( 1 - \exp\Big\{ -\mathbb{E}^{S(u)} \int_{t_1}^{t_2} \big|\sigma(S(t))\,\theta\big|^{g(S(t))} dt \Big\} \Big)\, d\theta \\
&\leq \frac{x}{2} \int_{-2/x}^{2/x} \mathbb{E}^{S(u)} \int_{t_1}^{t_2} \big|\sigma(S(t))\,\theta\big|^{g(S(t))} dt\, d\theta
= \mathbb{E}^{S(u)} \int_{t_1}^{t_2} \frac{x}{2} \int_{-2/x}^{2/x} \big|\sigma(S(t))\,\theta\big|^{g(S(t))} d\theta\, dt \\
&= \mathbb{E}^{S(u)} \int_{t_1}^{t_2} \frac{2^{g(S(t))+1}}{g(S(t))+1} \Big( \frac{\sigma(S(t))}{x} \Big)^{g(S(t))} dt
\leq \max\Big\{ \frac{2^{a+1}}{a+1},\ \frac{2^{b+1}}{b+1} \Big\}\, \mathbb{E}^{S(u)} \int_{t_1}^{t_2} \Big( \frac{\sigma(S(t))}{x} \Big)^{g(S(t))} dt.
\end{aligned}$$
The last two inequalities represent the desired outcome. □
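The bound of Property 2 can be sanity-checked at the discrete level by Monte Carlo, replacing $S$ with the approximation $S_n$ from the sketch in the Introduction; this is only an illustration under our assumed $g$ and $\sigma$, not a proof.

def check_property_2(g, sigma, n, t1, t2, x, n_paths, rng, a=0.5, b=2.0):
    # Compare the empirical tail P(|S(t2) - S(t1)| >= x) with the bound
    # C * E int_{t1}^{t2} (sigma(S(t)) / x)**g(S(t)) dt (Riemann sum);
    # a and b are the assumed bounds of the range of g.
    k1, k2 = int(n * t1), int(n * t2)
    hits, integral = 0, 0.0
    for _ in range(n_paths):
        s = self_stabilizing_path(g, sigma, n, rng)
        hits += abs(s[k2] - s[k1]) >= x
        integral += sum((sigma(s[k]) / x) ** g(s[k]) for k in range(k1, k2)) / n
    C = max(2 ** (a + 1) / (a + 1), 2 ** (b + 1) / (b + 1))
    return hits / n_paths, C * integral / n_paths

emp, bound = check_property_2(g, sigma, 400, 0.2, 0.8, 3.0, 1000, np.random.default_rng(3))
print(emp <= bound)  # the empirical tail should sit below the bound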
Remark 2. 
Let us comment on Property 2.
1. 
The order $\mathbb{E} \int_{t_1}^{t_2} x^{-g(S(t))}\, dt$, $x \to \infty$, in Property 2 is the best possible. Indeed, assume $[a,b] \subset (0,2)$ and set
$$T(x) = \mathbb{E} \int_{t_1}^{t_2} \Big( \frac{\sigma(S(t))}{x} \Big)^{g(S(t))} C_{g(S(t))}\, dt, \quad t_2 > t_1,$$
where $C_u$ is defined by (3). By an argument similar to that of Ayache [13], one can prove that
$$\lim_{x \to \infty} \mathbb{P}\big( |S(t_2) - S(t_1)| \geq x \big) \big/ T(x) = 1.$$
Since the two functions $\sigma(x)^{g(x)}$, $x \in \mathbb{R}$, and $C_u$, $u \in [a,b]$, are bounded, $T(x)$ is of the order $\mathbb{E} \int_{t_1}^{t_2} x^{-g(S(t))}\, dt$ as $x \to \infty$, which is therefore the best possible order.
2. 
Property 2 demonstrates that if the stock-market prices are self-stabilizing processes, then investments originating from points with larger stability indices are safer than those originating from points with smaller stability indices.
In mathematical terms, if $g(S(t_1)) \geq g(S(t_2))$, then we have
$$\mathbb{E} \int_{t_1}^{t_1 + z} \frac{1}{x^{g(S(t))}}\, dt \leq \mathbb{E} \int_{t_2}^{t_2 + z} \frac{1}{x^{g(S(t))}}\, dt$$
for any $z \geq 0$ and any $x \geq 1$. Indeed, if $g(S(t_1 + s)) > g(S(t_2 + s))$ for all $s \in [0, z]$, then it is obvious that
$$\mathbb{E} \int_{t_1}^{t_1 + z} \frac{1}{x^{g(S(t))}}\, dt < \mathbb{E} \int_{t_2}^{t_2 + z} \frac{1}{x^{g(S(t))}}\, dt$$
for any $x \geq 1$. Otherwise, since $S$ is stochastically continuous (cf. Property 5) and $g(\cdot)$ is a Hölder function, there almost surely exists a point $s_0 \in [0, z]$ such that $g(S(t_1 + s_0)) = g(S(t_2 + s_0))$ and $g(S(t_1 + s)) > g(S(t_2 + s))$ for any $s \in [0, s_0)$. Hence, it holds for all $x \geq 1$ that
$$\mathbb{E} \int_{t_1}^{t_1 + s_0} \frac{1}{x^{g(S(t))}}\, dt \leq \mathbb{E} \int_{t_2}^{t_2 + s_0} \frac{1}{x^{g(S(t))}}\, dt.  \qquad (33)$$
By the construction of $S$, it is obvious that
$$\mathbb{P}\big( S(t) \leq x \big) = \mathbb{P}\big( S(t_0 + t) - S(t_0) \leq x \,\big|\, S(t_0) = g(0) \big), \quad x \in \mathbb{R}.$$
Hence, for $x \geq 1$,
$$\mathbb{E} \int_{t_1 + s_0}^{t_1 + z} \frac{1}{x^{g(S(t))}}\, dt = \mathbb{E} \int_{t_2 + s_0}^{t_2 + z} \frac{1}{x^{g(S(t))}}\, dt.  \qquad (34)$$
Thus, (33) and (34) imply that
$$\mathbb{E} \int_{t_1}^{t_1 + z} \frac{1}{x^{g(S(t))}}\, dt = \mathbb{E} \int_{t_1}^{t_1 + s_0} \frac{1}{x^{g(S(t))}}\, dt + \mathbb{E} \int_{t_1 + s_0}^{t_1 + z} \frac{1}{x^{g(S(t))}}\, dt \leq \mathbb{E} \int_{t_2}^{t_2 + s_0} \frac{1}{x^{g(S(t))}}\, dt + \mathbb{E} \int_{t_2 + s_0}^{t_2 + z} \frac{1}{x^{g(S(t))}}\, dt = \mathbb{E} \int_{t_2}^{t_2 + z} \frac{1}{x^{g(S(t))}}\, dt.$$

3.2. Absolute Moments

The following property gives some sharp estimations for the absolute moments of the self-stabilizing and self-scaling process.
Property 3. 
Let $\lambda_0$ be the positive value such that
$$\mathbb{E}^{S(u)} \int_{t_1}^{t_2} \Big( \frac{\sigma(S(t))}{\lambda_0} \Big)^{g(S(t))} dt = 1.$$
For all $\gamma \in (0, a)$ and all $t_1, t_2, u \in [0,1]$ such that $t_2 \geq t_1 \geq u$, it holds that
$$\mathbb{E}^{S(u)} |S(t_2) - S(t_1)|^\gamma \leq C\, \frac{1}{a - \gamma}\, \lambda_0^\gamma,  \qquad (35)$$
where $C$ is a constant depending on $a$ and $b$. In particular, it implies that
$$\mathbb{E}\, |S(t_2) - S(t_1)|^\gamma \leq C\, \frac{1}{a - \gamma}\, \lambda_0^\gamma$$
and
$$\limsup_{\gamma \to a} (a - \gamma)\, \mathbb{E}\, |S(t_2) - S(t_1)|^\gamma \leq C\, \lambda_0^a < \infty.$$
Proof. 
By Property 2, it is easy to see that, for all $\gamma \in (0, a)$ and all $t_1, t_2, u \in [0,1]$ such that $t_2 \geq t_1 \geq u$,
$$\begin{aligned}
\mathbb{E}^{S(u)} |S(t_2) - S(t_1)|^\gamma
&= \gamma \int_0^\infty x^{\gamma - 1}\, \mathbb{P}^{S(u)}\big( |S(t_2) - S(t_1)| \geq x \big)\, dx \\
&\leq \gamma \int_0^{\lambda_0} x^{\gamma - 1}\, dx + \gamma \int_{\lambda_0}^\infty x^{\gamma - 1}\, C\, \mathbb{E}^{S(u)} \int_{t_1}^{t_2} \Big( \frac{\sigma(S(t))}{x} \Big)^{g(S(t))} dt\, dx \\
&= \lambda_0^\gamma + C\gamma\, \mathbb{E}^{S(u)} \int_{t_1}^{t_2} \int_{\lambda_0}^\infty x^{\gamma - 1 - g(S(t))}\, \sigma(S(t))^{g(S(t))}\, dx\, dt \\
&= \lambda_0^\gamma + C\gamma\, \mathbb{E}^{S(u)} \int_{t_1}^{t_2} \frac{\lambda_0^{\gamma - g(S(t))}}{g(S(t)) - \gamma}\, \sigma(S(t))^{g(S(t))}\, dt \\
&\leq \lambda_0^\gamma + \frac{C\gamma\, \lambda_0^\gamma}{a - \gamma}\, \mathbb{E}^{S(u)} \int_{t_1}^{t_2} \Big( \frac{\sigma(S(t))}{\lambda_0} \Big)^{g(S(t))} dt
\leq C_1\, \frac{1}{a - \gamma}\, \lambda_0^\gamma,
\end{aligned}$$
where $C$ and $C_1$ are constants depending on $a$ and $b$.
When $g \equiv a$ and $\sigma$ is a constant, without loss of generality, let $\sigma \equiv 1$. Then, $S(t)$ is a standard symmetric $a$–stable Lévy motion. Thus, we have
$$S(t_1) - S(t_2) \sim S_a\big( |t_1 - t_2|^{1/a}, 0, 0 \big)$$
and
$$\lim_{\gamma \to a} (a - \gamma)\, \mathbb{E}\, |X|^\gamma = a\, C_a\, |t_1 - t_2|,$$
where $X = S(t_1) - S(t_2)$ and $C_a$ is defined by (3), a constant depending only on $a$ (cf. Property 1.2.18 of Samorodnitsky and Taqqu [14]). Thus, the $\gamma$ in inequality (35) cannot be equal to or larger than $a$.
Property 4. 
For all $\gamma \in (0, a)$ and all $u \in [0, 1)$, it holds that
$$\mathbb{E}^{S(u)} |S(u + r) - S(u)|^\gamma \sim \frac{2^{\gamma - 1}\, \Gamma\big( 1 - \frac{\gamma}{g(S(u))} \big)}{\gamma \int_0^\infty v^{-\gamma - 1} \sin^2(v)\, dv}\, |\sigma(S(u))|^\gamma\, r^{\gamma / g(S(u))}, \quad r \to 0^+,$$
where $\Gamma(t) = \int_0^\infty x^{t-1} e^{-x}\, dx$ is the gamma function. In particular, it implies that
$$\mathbb{E}\, |S(u + r) - S(u)|^\gamma \sim \frac{2^{\gamma - 1}}{\gamma \int_0^\infty v^{-\gamma - 1} \sin^2(v)\, dv}\, \mathbb{E}\Big[ \Gamma\Big( 1 - \frac{\gamma}{g(S(u))} \Big)\, |\sigma(S(u))|^\gamma\, r^{\gamma / g(S(u))} \Big], \quad r \to 0^+.$$
Proof. 
Notice that, for all $\gamma \in (0, a)$ and all $u \in [0, 1)$,
$$\mathbb{E}^{S(u)} \Big| \frac{S(u + r) - S(u)}{r^{1/g(S(u))}} \Big|^\gamma = \gamma \int_0^\infty x^{\gamma - 1}\, \mathbb{P}^{S(u)}\Big( \Big| \frac{S(u + r) - S(u)}{r^{1/g(S(u))}} \Big| \geq x \Big)\, dx.$$
Recall that $S$ is localizable at $u$ to the $g(S(u))$–stable Lévy motion $\sigma(S(u))\, L_{g(S(u))}(t)$ under the conditional expectation given $S(u)$. Thus,
$$\mathbb{P}^{S(u)}\Big( \Big| \frac{S(u + r) - S(u)}{r^{1/g(S(u))}} \Big| \geq x \Big) \to \mathbb{P}^{S(u)}\Big( \big| \sigma(S(u))\, L_{g(S(u))}(1) \big| \geq x \Big).$$
By Property 1, for $x$ large enough,
$$\mathbb{P}^{S(u)}\Big( \Big| \frac{S(u + r) - S(u)}{r^{1/g(S(u))}} \Big| \geq x \Big) \leq C\, \frac{1}{x^a}\, r^{1 - a/g(S(u))}.$$
Hence, by the Lebesgue dominated convergence theorem, we have
$$\begin{aligned}
\lim_{r \to 0^+} \mathbb{E}^{S(u)} \Big| \frac{S(u + r) - S(u)}{r^{1/g(S(u))}} \Big|^\gamma
&= \gamma \int_0^\infty x^{\gamma - 1}\, \mathbb{P}^{S(u)}\Big( \big| \sigma(S(u))\, L_{g(S(u))}(1) \big| \geq x \Big)\, dx
= \mathbb{E}^{S(u)} \big| \sigma(S(u))\, L_{g(S(u))}(1) \big|^\gamma \\
&= |\sigma(S(u))|^\gamma\, \frac{2^{\gamma - 1}\, \Gamma\big( 1 - \frac{\gamma}{g(S(u))} \big)}{\gamma \int_0^\infty v^{-\gamma - 1} \sin^2(v)\, dv}.
\end{aligned}$$
We refer to page 18 of Samorodnitsky and Taqqu [14] for the last line. □

3.3. Stochastic Hölder Continuity

We say that a random process $S(u)$, $u$ in an interval $I \subset \mathbb{R}$, is stochastically Hölder continuous of exponent $\beta \in (0,1]$ if it holds that
$$\limsup_{|u - r| \to 0,\ u \neq r} \mathbb{P}\big( |S(u) - S(r)| \geq C |u - r|^\beta \big) = 0$$
for a positive constant $C$. It is obvious that if $S(u)$ is stochastically Hölder continuous of exponent $\beta_1 \in (0,1]$, then $S(u)$ is stochastically Hölder continuous of any exponent $\beta_2 \in (0, \beta_1]$. Stochastic Hölder continuity implies continuity in probability; that is, for any $\epsilon > 0$ and $r \in I$, it holds that $\lim_{h \to 0} \mathbb{P}\big( |S(r + h) - S(r)| > \epsilon \big) = 0$.
Example 1. 
Assume that a random process $S(u)$, $u \in I$, satisfies the following condition: there exist three strictly positive constants $\gamma, c, \rho$ such that
$$\mathbb{E}\, |S(u) - S(r)|^\gamma \leq c\, |u - r|^\rho, \quad u, r \in I.$$
Then, $S(u)$, $u \in I$, is stochastically Hölder continuous of any exponent $\beta \in (0, \min\{1, \rho/\gamma\})$. Indeed, by the Markov inequality, it is easy to see that, for $u, r \in I$,
$$\mathbb{P}\big( |S(u) - S(r)| \geq C |u - r|^\beta \big) \leq \frac{\mathbb{E}\, |S(u) - S(r)|^\gamma}{C^\gamma\, |u - r|^{\beta\gamma}} \leq \frac{c}{C^\gamma}\, |u - r|^{\rho - \beta\gamma},$$
which implies our claim.
The following property shows that the process $S$ is stochastically Hölder continuous.
Property 5. 
For all $t_1, t_2, u \in [0,1]$ with $t_1, t_2 \geq u$, it holds that
$$\mathbb{P}^{S(u)}\big( |S(t_1) - S(t_2)| \geq |t_1 - t_2|^\beta \big) \leq C\, |t_1 - t_2|^{1 - b\beta},  \qquad (40)$$
where $C$ is a constant depending on $M$, $a$ and $b$. In particular, it implies that
$$\mathbb{P}\big( |S(t_1) - S(t_2)| \geq |t_1 - t_2|^\beta \big) \leq C\, |t_1 - t_2|^{1 - b\beta},$$
which means $S$ is stochastically Hölder continuous of any exponent $\beta \in (0, \min\{1, 1/b\})$.
Proof. 
Taking $x = |t_1 - t_2|^\beta$ in (30), we obtain the required inequality (40). □

3.4. Markov Process

It is easy to see that S n in the proof of Theorem 1 is a Markov process. Thus, it is natural to guess that the self-stabilizing and self-scaling process is also a Markov process. To determine whether this is the case, we need the following theorem:
Theorem 2. 
Assume that $(S_n(t))_{n \geq 1}$, $t \in [0,1]$, is a sequence of Markov processes. If $S_n(t)$ converges to a process $S(t)$, $t \in [0,1]$, in finite-dimensional distributions, then $S$ is a Markov process.
Proof. 
We only need to show that, for any $d \in \mathbb{N}$ and $t_1, t_2, \ldots, t_d \in [0,1]$ with $t_1 < t_2 < \cdots < t_d$, it holds for any Borel set $B \subset \mathbb{R}$ that
$$\mathbb{P}\big( S(t_d) \in B \,\big|\, S(t_1), S(t_2), \ldots, S(t_{d-1}) \big) = \mathbb{P}\big( S(t_d) \in B \,\big|\, S(t_{d-1}) \big),$$
or $\mathbb{E}[\mathbf{1}_{\{S(t_d) \in B\}} \,|\, S(t_{d-1})] = \mathbb{E}[\mathbf{1}_{\{S(t_d) \in B\}} \,|\, S(t_1), S(t_2), \ldots, S(t_{d-1})]$. Notice that $S_n(t)$, $t \in [0,1]$, is a Markov process. Thus,
$$\mathbb{P}\big( S_n(t_d) \in B \,\big|\, S_n(t_1), S_n(t_2), \ldots, S_n(t_{d-1}) \big) = \mathbb{P}\big( S_n(t_d) \in B \,\big|\, S_n(t_{d-1}) \big).  \qquad (43)$$
For any two Borel sets $A_1 \subset \mathbb{R}^{d-2}$ and $A_2 \subset \mathbb{R}$ such that
$$\mathbb{P}\big( (S(t_1), S(t_2), \ldots, S(t_{d-1}), S(t_d)) \in A_1 \times A_2 \times B \big) > 0,$$
by (43), it is easy to see that
$$\begin{aligned}
\mathbb{E}\big[ \mathbf{1}_{\{S(t_d) \in B\}}\, \mathbf{1}_{\{S(t_{d-1}) \in A_2\}} \big]
&= \lim_{n \to \infty} \mathbb{E}\big[ \mathbf{1}_{\{S_n(t_d) \in B\}}\, \mathbf{1}_{\{S_n(t_{d-1}) \in A_2\}} \big]
= \lim_{n \to \infty} \mathbb{E}\big[ \mathbf{1}_{\{S_n(t_d) \in B\}}\, \mathbf{1}_{\{(S_n(t_1), S_n(t_2), \ldots, S_n(t_{d-1})) \in A_1 \times A_2\}} \big] \\
&= \mathbb{E}\big[ \mathbf{1}_{\{S(t_d) \in B\}}\, \mathbf{1}_{\{(S(t_1), S(t_2), \ldots, S(t_{d-1})) \in A_1 \times A_2\}} \big],
\end{aligned}$$
which means $\mathbb{E}[\mathbf{1}_{\{S(t_d) \in B\}} \,|\, S(t_{d-1})] = \mathbb{E}[\mathbf{1}_{\{S(t_d) \in B\}} \,|\, S(t_1), S(t_2), \ldots, S(t_{d-1})]$. □
Note that there exists a subsequence $(n_k)_{k \geq 1}$ of $\mathbb{N}$ such that
$$\big( S_{n_k}(t_1), S_{n_k}(t_2), \ldots, S_{n_k}(t_d) \big) \to \big( S(t_1), S(t_2), \ldots, S(t_d) \big)$$
in distribution. Thus, by Theorem 2, we have the following property.
Property 6. 
The process $S(t)$, $t \in [0,1]$, is a Markov process.

3.5. Strong Localizability

We have proven that $S$ is localizable under certain conditional expectations in the proof of Theorem 1. In the following, we prove that $S$ is strongly localizable under the conditional expectation given $S(u)$.
Let $D[0,1]$ be the set of càdlàg functions on $[0,1]$, that is, functions which are continuous on the right and have left limits at all $t \in [0,1]$, endowed with the Skorohod metric $d_S$ [12]. If $X$ and $X_t'$ have versions in $D[0,1]$ and convergence in (1) holds in distribution with respect to $d_S$, then $X$ is said to be $h$–strongly localizable at $t$ with strong local form $X_t'$.
Property 7. 
The process $S(t)$, $t \in [0,1]$, is strongly localizable at $u$ to a $g(S(u))$–stable Lévy motion $L_{g(S(u))}(\cdot)$ under the conditional expectation given $S(u)$.
Proof. 
For any $u \in [0,1)$, define
$$S_r(x) = \frac{S(u + rx) - S(u)}{r^{1/g(S(u))}}, \quad r \in (0,1].$$
We only need to prove tightness. By Theorem 15.6 of Billingsley [12], it suffices to show that, for some $\beta > 1$ and $\tau \geq 0$,
$$\mathbb{P}^{S(u)}\big( |S_r(x) - S_r(x_1)| \geq \lambda,\ |S_r(x_2) - S_r(x)| \geq \lambda \big) \leq C\, \lambda^{-\tau}\, (x_2 - x_1)^\beta  \qquad (44)$$
for $x_1 \leq x \leq x_2$, $\lambda > 0$ and $r \in (0,1]$, where $C$ is a positive constant. Since $S(x)$ is a Markov process with respect to $x$, so is $S_r(x)$. Hence, it follows that
$$\mathbb{P}^{S(u)}\big( |S_r(x) - S_r(x_1)| \geq \lambda,\ |S_r(x_2) - S_r(x)| \geq \lambda \big) = \mathbb{P}^{S(u)}\big( |S_r(x) - S_r(x_1)| \geq \lambda \big)\, \mathbb{P}^{S(u)}\big( |S_r(x_2) - S_r(x)| \geq \lambda \,\big|\, |S_r(x) - S_r(x_1)| \geq \lambda \big).$$
By the Billingsley inequality (cf. p. 47 of [12]), (29) and (30), we have
$$\mathbb{P}^{S(u)}\big( |S_r(x) - S_r(x_1)| \geq \lambda \big) \leq C_1\, \lambda^{-\gamma}\, (x - x_1),$$
where $\gamma = a\, \mathbf{1}_{[1, \infty)}(\lambda) + b\, \mathbf{1}_{(0,1)}(\lambda)$ and $C_1$ is a positive constant depending only on $M$, $a$ and $b$. By a similar argument, we have
$$\mathbb{P}^{S(u)}\big( |S_r(x_2) - S_r(x)| \geq \lambda \,\big|\, S_r(x), S_r(x_1) \big) \leq C_2\, \lambda^{-\gamma}\, (x_2 - x).$$
Thus,
$$\mathbb{P}^{S(u)}\big( |S_r(x_2) - S_r(x)| \geq \lambda \,\big|\, |S_r(x) - S_r(x_1)| \geq \lambda \big) \leq C_2\, \lambda^{-\gamma}\, (x_2 - x).$$
Using the inequality $xy \leq (x + y)^2 / 4$, $x, y \geq 0$, we have
$$\mathbb{P}^{S(u)}\big( |S_r(x) - S_r(x_1)| \geq \lambda,\ |S_r(x_2) - S_r(x)| \geq \lambda \big) \leq \frac{C_1 C_2}{4}\, \lambda^{-2\gamma}\, (x_2 - x_1)^2,$$
which gives inequality (44). □

3.6. Self-Regulating Process

The pointwise Hölder exponent at $t$ of a stochastic process (or continuous function) $S : \mathbb{R} \to \mathbb{R}$ is the number $\alpha_S(t)$ such that
$$\alpha_S(t) = \sup\Big\{ \beta : \limsup_{h \to 0} \frac{|S(t + h) - S(t)|}{|h|^\beta} = 0 \Big\} \quad \text{almost surely}.$$
A random process $S = S(t)$ is called a self-regulating process if there exists a function $g(x) : \mathbb{R} \to [\alpha, \beta] \subset \mathbb{R}$ such that, at each point $t$, almost surely,
$$\alpha_S(t) = g(S(t)).$$
The self-regulating processes were introduced by Barrière, Echelard and Lévy Véhel [15], in work in which the authors established two self-regulating multi-fractional Brownian motions (srmBm) via methods of fixed-point theory and random midpoint displacement, respectively.
From Property 7, we see that, for $u \in (0,1]$,
$$\frac{S(u + rx) - S(u)}{r^{1/g(S(u))}} \to \sigma(S(u))\, L_{g(S(u))}(x), \quad r \to 0,$$
in distribution $\mathbb{P}^{S(u)}$ with respect to $d_S$; thus, the following property is obvious.
Property 8. 
The process $S(t)$, $t \in [0,1]$, is a self-regulating process.
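In practice, the self-regulating relation can be probed numerically: by (8), the local scaling exponent of $S$ near $u$ is $1/g(S(u))$, so a crude estimator of $g(S(u))$ is the reciprocal of the slope of a local log-log regression of increments against lags. The sketch below is our own heuristic, applied to the simulated path s from the sketch in the Introduction; it is an illustration, not a consistent estimator.

def estimate_g(path, k, half_window, lags):
    # Regress log median |S(t + h) - S(t)| on log h inside a window around
    # index k; by (8) the slope is close to 1/g(S(t_k)), so invert it.
    n = len(path) - 1
    log_h, log_inc = [], []
    for lag in lags:
        i0, i1 = max(k - half_window, 0), min(k + half_window, n - lag)
        inc = np.abs(path[i0 + lag:i1 + lag] - path[i0:i1])
        log_h.append(np.log(lag / n))
        log_inc.append(np.log(np.median(inc) + 1e-300))  # median resists jumps
    slope = np.polyfit(log_h, log_inc, 1)[0]
    return 1.0 / max(slope, 1e-6)

k = 1000
print(estimate_g(s, k, 200, [1, 2, 4, 8, 16]), g(s[k]))  # rough agreement at best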

3.7. Martingale

Since $S_n$ is a martingale when $a > 1$, it is natural to conjecture that the process $S$ is also a martingale when $a > 1$. We prove this claim in the following property.
Property 9. 
If $a > 1$, then the process $S(t)$, $t \in [0,1]$, is a martingale.
Proof. 
By Property 3, it follows that, for any $t \in [0,1]$,
$$\mathbb{E}\, |S(t)| \leq C < \infty,$$
where $C$ is a constant depending on $M$, $a$ and $b$. We only need to verify that, for any $d \in \mathbb{N}$ and $t_1, t_2, \ldots, t_d \in [0,1]$ with $t_1 < t_2 < \cdots < t_d$, it holds that
$$\mathbb{E}\big[ S(t_d) \,\big|\, S(t_1), S(t_2), \ldots, S(t_{d-1}) \big] = S(t_{d-1}).  \qquad (51)$$
By an argument similar to that of (23), we have, for all $\theta \in \mathbb{R}$,
$$\mathbb{E}^{S(t_1), S(t_2), \ldots, S(t_{d-1})} \exp\Big\{ i\theta \big( S(t_d) - S(t_{d-1}) \big) + \int \big| \theta\, \sigma(S(z))\, \mathbf{1}_{[t_{d-1}, t_d]}(z) \big|^{g(S(z))} dz \Big\} = 1.
$$
Taking the derivative with respect to $\theta$ on both sides of the last equality, it follows that
$$\mathbb{E}^{S(t_1), S(t_2), \ldots, S(t_{d-1})} \Big[ \exp\Big\{ i\theta \big( S(t_d) - S(t_{d-1}) \big) + \int \big| \theta\, \sigma(S(z))\, \mathbf{1}_{[t_{d-1}, t_d]}(z) \big|^{g(S(z))} dz \Big\} \times \Big( i \big( S(t_d) - S(t_{d-1}) \big) + \int g(S(z))\, \sigma(S(z))^{g(S(z))}\, |\theta|^{g(S(z)) - 1}\, \mathrm{sign}(\theta)\, \mathbf{1}_{[t_{d-1}, t_d]}(z)\, dz \Big) \Big] = 0.  \qquad (52)$$
Notice that $g(S(z)) - 1 \geq a - 1 > 0$. Taking $\theta = 0$ in equality (52), we obtain (51). □

4. Conclusions

Using a general version of the Arzelà–Ascoli theorem and a Donsker-type construction, we have proven the existence of the self-stabilizing and self-scaling process. This type of process is localizable at $t$, with a $g(S(t))$–stable tangent process; thus, the “local intensity of jumps” varies with the values of the process. Moreover, we have shown that self-stabilizing processes have many other good properties, such as stochastic Hölder continuity and strong localizability. In particular, this type of self-stabilizing process is simultaneously a Markov process, a martingale (when the local index of stability is greater than 1), a self-scaling process and a self-regulating process.

Author Contributions

Conceptualization, J.L.V.; methodology, X.F.; writing—original draft preparation, X.F.; writing—review and editing, X.F. and J.L.V.; project administration, J.L.V.; funding acquisition, X.F. and J.L.V. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been partially supported by the Natural Science Foundation of Hebei Province (Grant No. A2025501005).

Data Availability Statement

Data available on request from the authors.

Conflicts of Interest

Author Jacques Lévy Véhel was employed by Case Law Analytics. The remaining author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Appendix A

In this appendix, we give two sufficient conditions for a sequence of random vectors (random processes) to have a subsequence that converges in distribution (in finite-dimensional distribution). The proofs of our theorems are based on the Arzelà–Ascoli theorem. The Arzelà–Ascoli theorem is a fundamental result of mathematical analysis; it gives necessary and sufficient conditions to decide whether every sequence of a given family of real-valued continuous functions defined on a closed and bounded interval has a uniformly convergent subsequence. The main condition is the equicontinuity of the family of functions. A sequence of continuous functions $(f_n)_{n \geq 1}$ defined on $I \subset \mathbb{R}^d$ is equicontinuous if, for every $\varepsilon > 0$, there exists $\delta > 0$ such that, for all functions $f_n$,
$$|f_n(x) - f_n(y)| \leq \varepsilon, \quad x, y \in I,$$
whenever $\|x - y\| < \delta$. Succinctly, a sequence is equicontinuous if and only if all of its elements admit the same modulus of continuity. It is worth noting that $(f_n)_{n \geq 1}$ is equicontinuous if and only if $(f_n)_{n \geq 1}$ is equicontinuous with respect to every coordinate.
Recall that a sequence of functions $(f_n)_{n \geq 1}$ defined on $I$ is uniformly bounded if there is a number $M$ such that
$$|f_n(x)| \leq M$$
for all $f_n$ and all $x \in I$.
The Arzelà–Ascoli theorem states the following:
Lemma A1 
(The Arzelà–Ascoli theorem). Assume that $(f_n)_{n \geq 1}$ is a sequence of real-valued continuous functions defined on a closed and bounded set $\prod_{i=1}^{d} [a_i, b_i] \subset \mathbb{R}^d$. If this sequence is uniformly bounded and equicontinuous, then there exists a subsequence $(f_{n_k})_{k \geq 1}$ that converges uniformly.
Next, we give two examples in which the conditions of the Arzelà–Ascoli theorem are satisfied.
Example A1. 
A sequence of uniformly bounded functions $(f_n(x))_{n \geq 1}$, $x \in [a, b]$, on a closed and bounded interval, with uniformly bounded derivatives.
Example A2. 
A sequence of uniformly bounded functions $(f_n(x))_{n \geq 1}$, $x \in [a, b]$, on a closed and bounded interval, satisfying a Hölder condition of order $\alpha$, $0 < \alpha \leq 1$, with a fixed constant $M$:
$$|f_n(x) - f_n(y)| \leq M\, |x - y|^\alpha$$
for all $n$.

Appendix A.1. Weak Convergence Theorems for Random Vectors

Using the Arzelà–Ascoli theorem and the Lévy continuity theorem, we obtain the following two sufficient conditions for a sequence of random vectors to have a subsequence that converges in distribution.
Denote by $\langle x, y \rangle$ the scalar product of two vectors $x = (x^{(i)}), y = (y^{(i)}) \in \mathbb{R}^d$, i.e.,
$$\langle x, y \rangle = \sum_{i=1}^{d} x^{(i)} y^{(i)},$$
and by $\|x\|$ the magnitude of a vector $x$, i.e., $\|x\|^2 = \langle x, x \rangle$.
Theorem A1. 
Let $(X_n)_{n \geq 1}$ be a sequence of random vectors with values in $\mathbb{R}^d$. Assume that
$$\lim_{\|\theta\| \to 0} \mathbb{E}\, \big| e^{i\langle \theta, X_n \rangle} - 1 \big| = 0, \quad \theta \in \mathbb{R}^d,  \qquad (A1)$$
uniformly for all $n \geq 1$. Denote by
$$f_n(\theta) = \mathbb{E}\, e^{i\langle \theta, X_n \rangle}, \quad \theta \in \mathbb{R}^d,$$
the characteristic function of $X_n$. Then, $(f_n)_{n \geq 1}$ is equicontinuous. Therefore, there exist a subsequence $(X_{n_k})_{k \geq 1}$ and a random vector $X$ such that $X_{n_k} \to X$ in distribution.
Proof. 
We first give a proof for the case of $d = 1$. For all $\theta_1, \theta_2 \in \mathbb{R}$, it is easy to see that
$$\big| \mathbb{E}\, e^{i\theta_1 X_n} - \mathbb{E}\, e^{i\theta_2 X_n} \big| = \big| \mathbb{E}\, e^{i\theta_1 X_n} \big( 1 - e^{i(\theta_2 - \theta_1) X_n} \big) \big| \leq \mathbb{E}\, \big| 1 - e^{i(\theta_2 - \theta_1) X_n} \big|.$$
Since
$$\big| 1 - e^{i(\theta_2 - \theta_1) X_n} \big| \leq 2 \quad \text{and} \quad \lim_{\theta \to 0} \mathbb{E}\, \big| e^{i\theta X_n} - 1 \big| = 0,$$
by Lebesgue’s dominated convergence theorem and (A1), we find that $(f_n)_{n \geq 1}$ is equicontinuous. Let $f_{\sigma_0(n)}(\theta) = f_n(\theta)$ for all $n \geq 1$. For all $N \geq 1$, by the Arzelà–Ascoli theorem, there exist a subsequence of $(f_{\sigma_{N-1}(n)}(\theta))_{n \geq 1}$, denoted as $(f_{\sigma_N(n)}(\theta))_{n \geq 1}$, and a function $f(\theta)$ defined on $\theta \in \mathbb{R}$ such that $\lim_{n \to \infty} f_{\sigma_N(n)}(\theta) = f(\theta)$ uniformly on $\theta \in [-N, N]$. By induction, the following relation holds:
$$(f_n(\theta))_{n \geq 1} \supset (f_{\sigma_1(n)}(\theta))_{n \geq 1} \supset \cdots \supset (f_{\sigma_{N-1}(n)}(\theta))_{n \geq 1} \supset (f_{\sigma_N(n)}(\theta))_{n \geq 1} \supset \cdots$$
Define the diagonal subsequence $(f_{\sigma_N(N)}(\theta))_{N \geq 1}$, whose $N$th term is the $N$th term of the $N$th subsequence $(f_{\sigma_N(n)}(\theta))_{n \geq 1}$. Then, the diagonal subsequence $(f_{\sigma_N(N)}(\theta))_{N \geq 1}$ converges to $f(\theta)$ on $\theta \in \mathbb{R}$. Condition (A1) implies that, for any $\varepsilon > 0$, there exists a $\delta > 0$ such that
$$\mathbb{E}\, \big| e^{i\theta X_n} - 1 \big| \leq \varepsilon  \qquad (A3)$$
uniformly for all $n$ and all $|\theta| \leq \delta$. In particular, since $\varepsilon$ can be arbitrarily small, inequality (A3) implies that
$$\lim_{\theta \to 0} \limsup_{n \to \infty} \mathbb{E}\, \big| e^{i\theta X_n} - 1 \big| = 0.  \qquad (A4)$$
Moreover, by (A4),
$$\lim_{\theta \to 0} |f(\theta) - 1| = \lim_{\theta \to 0} \Big| \lim_{N \to \infty} f_{\sigma_N(N)}(\theta) - 1 \Big| \leq \lim_{\theta \to 0} \limsup_{N \to \infty} \mathbb{E}\, \big| e^{i\theta X_{\sigma_N(N)}} - 1 \big| = 0,$$
which means $f(\theta)$ is continuous at $0$. By the Lévy continuity theorem, there exists a random variable $X$ such that $X_{\sigma_N(N)}$ converges to $X$ in distribution as $N \to \infty$.
The proof for the case of $d > 1$ is similar. Note the fact that, for all $\theta_1, \theta_2 \in \mathbb{R}^d$,
$$\big| \mathbb{E}\, e^{i\langle \theta_1, X_n \rangle} - \mathbb{E}\, e^{i\langle \theta_2, X_n \rangle} \big| = \Big| \sum_{j=1}^{d} \mathbb{E}\, e^{i \left( \sum_{k=1}^{j} \theta_1^{(k)} X_n^{(k)} + \sum_{k=j+1}^{d} \theta_2^{(k)} X_n^{(k)} \right)} \Big( 1 - e^{i(\theta_2^{(j)} - \theta_1^{(j)}) X_n^{(j)}} \Big) \Big| \leq \sum_{j=1}^{d} \mathbb{E}\, \big| 1 - e^{i(\theta_2^{(j)} - \theta_1^{(j)}) X_n^{(j)}} \big|.$$
Recall that $(f_n)_{n \geq 1}$ is equicontinuous if and only if $(f_n)_{n \geq 1}$ is equicontinuous with respect to every coordinate. Hence, $(f_n(\theta))_{n \geq 1}$ is equicontinuous on $\theta \in \mathbb{R}^d$. By an argument similar to the case of $d = 1$, there exists a subsequence $(f_{\sigma_N(N)}(\theta))_{N \geq 1}$ that converges to $f(\theta)$ on $\theta \in \mathbb{R}^d$. By condition (A1),
$$\lim_{\|\theta\| \to 0} |f(\theta) - 1| = \lim_{\|\theta\| \to 0} \Big| \lim_{N \to \infty} \mathbb{E}\, e^{i\langle \theta, X_{\sigma_N(N)} \rangle} - 1 \Big| \leq \lim_{\|\theta\| \to 0} \limsup_{N \to \infty} \mathbb{E}\, \big| e^{i\langle \theta, X_{\sigma_N(N)} \rangle} - 1 \big| = 0,$$
which means $f(\theta)$ is continuous at $0$. By the Lévy continuity theorem again, there exists a random vector $X$ such that $X_{\sigma_N(N)}$ converges to $X$ in distribution as $N \to \infty$. This completes the proof of the theorem. □
As an application of Theorem A1, consider the following example, which shows that a sequence of uniformly bounded random vectors has a subsequence that converges in distribution.
Example A3. 
Let $(X_n)_{n \geq 1}$ be a sequence of random vectors with values in $\mathbb{R}^d$. If $\|X_n\| \leq C$ for a constant $C$ and uniformly for all $n$, then there exist a subsequence $(X_{n_k})_{k \geq 1}$ and a random vector $X$ such that $X_{n_k} \to X$ in distribution. Indeed, it is easy to see that
$$\lim_{\|\theta\| \to 0} \mathbb{E}\, \big| e^{i\langle \theta, X_n \rangle} - 1 \big| \leq \lim_{\|\theta\| \to 0} \mathbb{E}\, \big| \langle \theta, X_n \rangle \big| \leq \lim_{\|\theta\| \to 0} \|\theta\|\, C = 0, \quad \theta \in \mathbb{R}^d,$$
uniformly for all $n \geq 1$. Our conclusion follows from Theorem A1.
Since condition (A1) may not be easy to verify, we introduce a second sufficient condition, which can be easier to check.
Theorem A2. 
Let $(X_n)_{n \geq 1}$ be a sequence of random vectors with values in $\mathbb{R}^d$. Denote the characteristic function of $X_n$ by $f_n(\theta)$, $\theta \in \mathbb{R}^d$. Assume that
$$\lim_{\|\theta\| \to 0} f_n(\theta) = 1  \qquad (A7)$$
uniformly for all $n \geq 1$. Then, $(f_n(\theta))_{n \geq 1}$ is equicontinuous. Therefore, there exist a subsequence $(X_{n_k})_{k \geq 1}$ and a random vector $X$ such that $X_{n_k} \to X$ in distribution.
Proof. 
We first give a proof for the case of $d = 1$. By the Billingsley inequality (cf. page 47 of [12]), it is easy to see that, for all $x > 2$,
$$\mathbb{P}\big( |X_n| \geq x \big) \leq \frac{x}{2} \int_{-2/x}^{2/x} \big( 1 - f_n(\theta) \big)\, d\theta.  \qquad (A8)$$
Given any $\varepsilon > 0$, by (A7), there is a constant $M = M(\varepsilon) \geq 2$ such that, for all $|\theta| \leq 2/M$,
$$\big| 1 - f_n(\theta) \big| \leq \frac{\varepsilon}{8}.  \qquad (A9)$$
Thus, for all $x \geq M$,
$$\mathbb{P}\big( |X_n| \geq x \big) \leq \frac{x}{2} \int_{-2/x}^{2/x} \frac{\varepsilon}{8}\, d\theta \leq \frac{\varepsilon}{4}.$$
For all $\theta_1, \theta_2 \in \mathbb{R}$, we have
$$\big| f_n(\theta_1) - f_n(\theta_2) \big| = \big| \mathbb{E}\, e^{i\theta_1 X_n} \big( 1 - e^{i(\theta_2 - \theta_1) X_n} \big) \big| \leq \mathbb{E}\, \big| 1 - e^{i(\theta_2 - \theta_1) X_n} \big|.  \qquad (A10)$$
The random variable $| 1 - e^{i(\theta_2 - \theta_1) X_n} |$ is dominated by the constant 2. Therefore, by the inequalities (A9) and (A10), it follows that, for all $x \geq 2$,
$$\begin{aligned}
\big| f_n(\theta_1) - f_n(\theta_2) \big|
&\leq \mathbb{E}\, \big| 1 - e^{i(\theta_2 - \theta_1) X_n} \big|\, \mathbf{1}_{\{|X_n| \leq x\}} + \mathbb{E}\, \big| 1 - e^{i(\theta_2 - \theta_1) X_n} \big|\, \mathbf{1}_{\{|X_n| > x\}} \\
&\leq \mathbb{E}\, \big| (\theta_2 - \theta_1) X_n \big|\, \mathbf{1}_{\{|X_n| \leq x\}} + 2\, \mathbb{E}\, \mathbf{1}_{\{|X_n| > x\}}
\leq x\, |\theta_2 - \theta_1| + 2\, \mathbb{P}\big( |X_n| \geq x \big).
\end{aligned}  \qquad (A11)$$
For this fixed $\varepsilon$, from (A11), it is easy to see that
$$\big| f_n(\theta_1) - f_n(\theta_2) \big| \leq M\, |\theta_2 - \theta_1| + 2\, \mathbb{P}\big( |X_n| \geq M \big) \leq \frac{\varepsilon}{2} + 2\, \frac{\varepsilon}{4} = \varepsilon  \qquad (A12)$$
uniformly for all $n$, whenever $|\theta_2 - \theta_1| \leq \varepsilon / (2M)$. Hence, the family $(f_n(\theta))_{n \geq 1}$ is equicontinuous. By the Arzelà–Ascoli theorem, with an argument similar to the proof of Theorem A1, there exist a subsequence $(f_{\sigma_N(N)}(\theta))_{N \geq 1}$ and a continuous function $f(\theta)$ such that $f_{\sigma_N(N)}(\theta) \to f(\theta)$. Since $f_n(0) = 1$ for all $n$, inequality (A12) implies that $|f(\theta) - 1| \leq \varepsilon$ whenever $|\theta| \leq \varepsilon / (2M)$. Thus, $f(\theta)$ is continuous at $0$. By the Lévy continuity theorem, there exists a random variable $X$ such that $X_{\sigma_N(N)}$ converges to $X$ in distribution as $N \to \infty$.
For the case where d > 1 , the proof is similar to that of Theorem A1. This completes the proof of the theorem. □
To illustrate Theorem A2, consider the following two examples.
Example A4. 
Let $(X_n)_{n \geq 1}$ be a sequence of random vectors with values in $\mathbb{R}^d$. Assume that
$$\lim_{x \to \infty} \sup_n \mathbb{P}\big( \|X_n\| \geq x \big) = 0.$$
Then, there exist a subsequence $(X_{n_k})_{k \geq 1}$ and a random vector $X$ such that $X_{n_k} \to X$ in distribution. Indeed, it is easy to see that
$$\begin{aligned}
\lim_{\|\theta\| \to 0} \big| \mathbb{E}\, e^{i\langle \theta, X_n \rangle} - 1 \big|
&\leq \lim_{\|\theta\| \to 0} \Big( \mathbb{E}\, \big| e^{i\langle \theta, X_n \rangle} - 1 \big|\, \mathbf{1}_{\{\|X_n\| \leq \|\theta\|^{-1/2}\}} + \mathbb{E}\, \big| e^{i\langle \theta, X_n \rangle} - 1 \big|\, \mathbf{1}_{\{\|X_n\| > \|\theta\|^{-1/2}\}} \Big) \\
&\leq \lim_{\|\theta\| \to 0} \Big( \mathbb{E}\, \|\theta\|\, \|X_n\|\, \mathbf{1}_{\{\|X_n\| \leq \|\theta\|^{-1/2}\}} + 2\, \mathbb{P}\big( \|X_n\| \geq \|\theta\|^{-1/2} \big) \Big) \\
&\leq \lim_{\|\theta\| \to 0} \Big( \|\theta\|^{1/2} + 2\, \mathbb{P}\big( \|X_n\| \geq \|\theta\|^{-1/2} \big) \Big) = 0
\end{aligned}$$
uniformly for all $n \geq 1$. Our conclusion follows from Theorem A2.
Example A5. 
Assume that $(X_n)_{n \geq 1}$ is a sequence of $\alpha_n$–stable symmetric random variables with unit scale parameter, i.e., $X_n \sim S_{\alpha_n}(1, 0, 0)$. If $\alpha_n \in [a, b] \subset (0, 2]$, then there exist a subsequence $(X_{n_k})_{k \geq 1}$ and a random variable $X$ such that $X_{n_k} \to X$ in distribution. Indeed, it is obvious that
$$\lim_{\theta \to 0} \mathbb{E}\, e^{i\theta X_n} = \lim_{\theta \to 0} e^{-|\theta|^{\alpha_n}} = 1$$
uniformly for all $n$. Thus, the conclusion follows immediately from Theorem A2.
Another route to this conclusion can be obtained from the Bolzano–Weierstrass theorem. In fact, by the Bolzano–Weierstrass theorem, there exists a subsequence $(\alpha_{n_k})$ such that $\alpha_{n_k} \to \alpha \in [a, b]$. It follows that
$$\lim_{k \to \infty} \mathbb{E}\, e^{i\theta X_{n_k}} = \lim_{k \to \infty} e^{-|\theta|^{\alpha_{n_k}}} = e^{-|\theta|^\alpha},$$
which means $X_{n_k} \to S_\alpha(1, 0, 0)$, $k \to \infty$, in distribution.
Remark A1. 
It is easy to see that
$$\big| f_n(\theta) - 1 \big| \leq \mathbb{E}\, \big| e^{i\langle \theta, X_n \rangle} - 1 \big|.  \qquad (A13)$$
Thus, condition (A1) implies condition (A7). Hence, Theorem A1 can be regarded as a corollary of Theorem A2.

Appendix A.2. Weak Convergence Theorems for Random Processes

In the previous subsection, we considered the case of random vectors. Now, we consider the case of random processes.
Theorem A3. 
Let $(X_n(x))_{n \geq 1}$, $x \in [0,1]$, be a sequence of random processes with values in $\mathbb{R}$. Assume that
$$\lim_{\theta \to 0} \mathbb{E}\, \big| e^{i\theta (X_n(x_2) - X_n(x_1))} - 1 \big| = 0  \qquad (A15)$$
uniformly for all $n \geq 1$ and all $x_1, x_2 \in [0,1]$, and that, for any $M > 0$,
$$\lim_{|x_2 - x_1| \to 0} \mathbb{E}\, \big| e^{i\theta (X_n(x_2) - X_n(x_1))} - 1 \big| = 0  \qquad (A16)$$
uniformly for all $n \geq 1$ and all $|\theta| \leq M$. Denote by
$$f_n(\theta, x_1, x_2) = \mathbb{E}\, e^{i\theta (X_n(x_2) - X_n(x_1))}, \quad x_1, x_2 \in [0,1] \ \text{and}\ \theta \in \mathbb{R}.$$
Then, $f_n(\theta, x, y)$ is equicontinuous with respect to $(\theta, x, y) \in [-M, M] \times [0,1]^2$ and, therefore, there exist a subsequence $(X_{n_k}(x))_{k \geq 1}$ and a random process $X(x)$ such that, for any $x_1, x_2 \in [0,1]$,
$$X_{n_k}(x_2) - X_{n_k}(x_1) \to X(x_2) - X(x_1)  \qquad (A18)$$
in distribution.
Moreover, if $X_n(0) = 0$ and all the processes $X(x)$ and $X_n(x)$, $n \geq 1$, have independent increments, then $X_{n_k}(x) \to X(x)$ in finite-dimensional distribution.
By Theorem A1, condition (A15) is sufficient to prove (A18). However, with that method, the subsequence $(n_k)$ may vary with $x_1$ and $x_2$. The use of condition (A16) is to make sure that the sequence $(n_k)$ can be chosen common to all $x_1$ and $x_2$ in the range $[0,1]$.
Proof. 
By Theorem A1, condition (A15) implies that $f_n(\theta, x_1, x_2)$ is equicontinuous with respect to $\theta \in \mathbb{R}$. Hence, we only need to prove that $f_n(\theta, x, y)$ is equicontinuous with respect to $(x, y) \in [0,1]^2$. For any $x_2, x_3 \in [0,1]$, one has
$$\big| f_n(\theta, x_2, y) - f_n(\theta, x_3, y) \big| = \big| \mathbb{E}\, e^{i\theta (X_n(y) - X_n(x_2))} \big( 1 - e^{i\theta (X_n(x_2) - X_n(x_3))} \big) \big| \leq \mathbb{E}\, \big| 1 - e^{i\theta (X_n(x_2) - X_n(x_3))} \big|.$$
By (A16), we find that, for any $M > 0$, $f_n(\theta, x, y)$ is equicontinuous with respect to $(\theta, x) \in [-M, M] \times [0,1]$. Similarly, $f_n(\theta, x, y)$ is equicontinuous with respect to $(\theta, y) \in [-M, M] \times [0,1]$. Thus, $f_n(\theta, x, y)$ is equicontinuous with respect to $(\theta, x, y) \in [-M, M] \times [0,1]^2$. With an argument similar to the proof of Theorem A1, by the Arzelà–Ascoli theorem and the Lévy continuity theorem, there exist a subsequence $(X_{n_k}(x))_{k \geq 1}$ and a random process $X(x)$ such that, for any given $x_1, x_2 \in [0,1]$,
$$X_{n_k}(x_2) - X_{n_k}(x_1) \to X(x_2) - X(x_1)$$
in distribution.
Assume that $X_n(x)$, $x \in [0,1]$, has independent increments. For any $(x_1, x_2, \ldots, x_d) \in [0,1]^d$ satisfying $0 = x_0 \leq x_1 \leq x_2 \leq \cdots \leq x_d$, it is easy to see that, for any $(\theta_1, \theta_2, \ldots, \theta_d) \in \mathbb{R}^d$,
$$\mathbb{E} \exp\Big\{ i\sum_{j=1}^{d} \theta_j X_{n_k}(x_j) \Big\} = \mathbb{E} \exp\Big\{ i\sum_{j=1}^{d} \Big( \sum_{k'=j}^{d} \theta_{k'} \Big) \big( X_{n_k}(x_j) - X_{n_k}(x_{j-1}) \big) \Big\} = \prod_{j=1}^{d} \mathbb{E} \exp\Big\{ i\Big( \sum_{k'=j}^{d} \theta_{k'} \Big) \big( X_{n_k}(x_j) - X_{n_k}(x_{j-1}) \big) \Big\}.$$
Letting $k \to \infty$, by the hypothesis that $X(x)$ has independent increments, we find that
$$\lim_{k \to \infty} \mathbb{E} \exp\Big\{ i\sum_{j=1}^{d} \theta_j X_{n_k}(x_j) \Big\} = \prod_{j=1}^{d} \mathbb{E} \exp\Big\{ i\Big( \sum_{k'=j}^{d} \theta_{k'} \Big) \big( X(x_j) - X(x_{j-1}) \big) \Big\} = \mathbb{E} \exp\Big\{ i\sum_{j=1}^{d} \Big( \sum_{k'=j}^{d} \theta_{k'} \Big) \big( X(x_j) - X(x_{j-1}) \big) \Big\} = \mathbb{E} \exp\Big\{ i\sum_{j=1}^{d} \theta_j X(x_j) \Big\},$$
which means $X_{n_k}(x) \to X(x)$, $x \in [0,1]$, in finite-dimensional distribution. □
Example A6. 
Let ( X n ( x ) ) n 1 , x [ 0 , 1 ] , be a sequence of random processes with values in R . Assume that | X n ( x ) | C for a constant C and uniformly for all x and all n . If
lim | x 1 x 2 | 0 X n ( x 1 ) X n ( x 2 ) = 0
in distribution uniformly for all x 1 , x 2 [ 0 , 1 ] and all n , Then, the conclusion of Theorem A3 holds.
Indeed, by (A13), condition (A15) holds. Since lim | x 1 x 2 | 0 X n ( x 1 ) X n ( x 2 ) = 0 in distribution is equal to lim | x 1 x 2 | 0 X n ( x 1 ) X n ( x 2 ) = 0 almost surely. Notice that | X n ( x 2 ) X n ( x 1 ) | 2 C . By Lebesgue’s dominated convergence theorem, it follows that for any M > 0 ,
lim | x 2 x 1 | 0 E | e i θ ( X n ( x 2 ) X n ( x 1 ) ) 1 | lim | x 2 x 1 | 0 E | θ ( X n ( x 2 ) X n ( x 1 ) ) | M lim | x 2 x 1 | 0 E | X n ( x 2 ) X n ( x 1 ) | = 0
uniformly for all n 1 and all | θ | M . Thus, condition (A16) also holds.
The following sufficient condition could be easier to check than that of Theorem A3.
Theorem A4. 
Let ( X n ( x ) ) n 1 , x [ 0 , 1 ] , be a sequence of processes with values in R . Denote by
f n ( θ , x 1 , x 2 ) = E e i θ ( X n ( x 2 ) X n ( x 1 ) ) , x 1 , x 2 [ 0 , 1 ] and θ R .
Assume that
lim θ 0 f n ( θ , x 1 , x 2 ) = 1
uniformly for all n 1 and all x 1 , x 2 [ 0 , 1 ] , and that for any M > 0 ,
lim | x 2 x 1 | 0 f n ( θ , x 1 , x 2 ) = 1
uniformly for all n 1 and all | θ | M . Then, the conclusion of Theorem A3 holds.
Proof. 
By Theorem A2, condition (A21) implies that f n ( θ , x , y ) is equicontinuous with respect to θ R . Next, we prove that f n ( θ , x , y ) is equicontinuous with respect to ( x , y ) [ 0 , 1 ] 2 . For any x 2 , x 3 [ 0 , 1 ] , one has
| f n ( θ , x 2 , x 1 ) − f n ( θ , x 3 , x 1 ) | = | E e i θ ( X n ( x 1 ) − X n ( x 2 ) ) ( 1 − e i θ ( X n ( x 2 ) − X n ( x 3 ) ) ) | ≤ E | 1 − e i θ ( X n ( x 2 ) − X n ( x 3 ) ) | .
The random variable | 1 − e i θ ( X n ( x 2 ) − X n ( x 3 ) ) | is dominated by the constant 2 . Hence, for all u > 0 ,
| f n ( θ , x 2 , x 1 ) − f n ( θ , x 3 , x 1 ) | ≤ E | 1 − e i θ ( X n ( x 2 ) − X n ( x 3 ) ) | 1 { | X n ( x 2 ) − X n ( x 3 ) | ≤ u } + E | 1 − e i θ ( X n ( x 2 ) − X n ( x 3 ) ) | 1 { | X n ( x 2 ) − X n ( x 3 ) | > u } ≤ E | θ ( X n ( x 2 ) − X n ( x 3 ) ) | 1 { | X n ( x 2 ) − X n ( x 3 ) | ≤ u } + 2 E 1 { | X n ( x 2 ) − X n ( x 3 ) | > u } ≤ u | θ | + 2 P ( | X n ( x 2 ) − X n ( x 3 ) | ≥ u ) .
Given any ε > 0 and any M > 0 , it holds, for all | θ | ≤ M and all 0 < u ≤ ε / ( 2 M ) ,
u | θ | ≤ u M ≤ ε / 2 .
By (A22), for the given 4 M / ε > 0 , there exists a δ > 0 such that if | x 1 x 2 | δ , then
| f n ( θ , x 1 , x 2 ) 1 | ε 8
uniformly for all n and all | θ | 4 M / ε . By the Billingsley inequality, it is easy to see that if | x 2 x 3 | δ , then
P ( | X n ( x 3 ) − X n ( x 2 ) | ≥ ε / ( 2 M ) ) ≤ ( ε / ( 4 M ) ) ∫_{ − 4 M / ε } ^ { 4 M / ε } ( 1 − f n ( θ , x 2 , x 3 ) ) d θ ≤ ( ε / ( 4 M ) ) ∫_{ − 4 M / ε } ^ { 4 M / ε } ( ε / 8 ) d θ = ε / 4 .
From (A24), for all | θ | M and all x 2 , x 3 [ 0 , 1 ] such that | x 2 x 3 | δ ,
| f n ( θ , x 2 , x 1 ) − f n ( θ , x 3 , x 1 ) | ≤ ( ε / ( 2 M ) ) M + 2 P ( | X n ( x 2 ) − X n ( x 3 ) | ≥ ε / ( 2 M ) ) ≤ ε / 2 + 2 ( ε / 4 ) = ε .
Thus, for any M > 0 ,   f n ( θ , x , x 1 ) is equicontinuous with respect to ( θ , x ) [ M , M ] × [ 0 , 1 ] . Similarly, f n ( θ , x , y ) is equicontinuous with respect to ( θ , y ) [ M , M ] × [ 0 , 1 ] . Thus, f n ( θ , x , y ) is equicontinuous with respect to ( θ , x , y ) [ M , M ] × [ 0 , 1 ] 2 . The rest of the proof is similar to the proof of Theorem A3. □
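The Billingsley inequality used above states that P ( | X | ≥ 2 / u ) ≤ u ^ { − 1 } ∫_{ − u } ^ { u } ( 1 − φ ( θ ) ) d θ for a random variable X with characteristic function φ ; see [12]. The following numerical sketch is ours: it checks the inequality for a standard Cauchy variable, for which both sides are explicit.

```python
import numpy as np

# Numerical check of Billingsley's inequality,
#   P(|X| >= 2/u) <= (1/u) * int_{-u}^{u} (1 - phi(t)) dt,
# for a standard Cauchy variable:
#   phi(t) = exp(-|t|)  and  P(|X| >= a) = 1 - (2/pi)*arctan(a).

def billingsley_bound(phi, u, grid=100_001):
    t = np.linspace(-u, u, grid)
    return np.sum(1.0 - np.real(phi(t))) * (t[1] - t[0]) / u

phi_cauchy = lambda t: np.exp(-np.abs(t))

for u in [0.5, 1.0, 2.0, 4.0]:
    tail = 1.0 - (2.0 / np.pi) * np.arctan(2.0 / u)   # P(|X| >= 2/u)
    bound = billingsley_bound(phi_cauchy, u)
    print(f"u = {u:3.1f}:  tail = {tail:.4f}  <=  bound = {bound:.4f}")
```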
Example A7. 
Assume that the functions α n ( x ) ∈ [ a , b ] ⊂ ( 0 , 2 ] , and that ( X n ( x ) ) n ≥ 1 is a sequence of (localizable) independent-increments α n ( x ) –multistable Lévy motions (cf. Falconer and Liu [5]). Then, there exist a subsequence ( X n k ( x ) ) k ≥ 1 and a random process X ( x ) such that X n k ( x ) → X ( x ) in distribution for any x ∈ [ 0 , 1 ] . In particular, if X ( x ) has independent increments, then X n k ( x ) converges to X ( x ) in finite dimensional distribution.
Indeed, in work by Falconer and Liu [5], the joint characteristic functions of ( X n ( x ) ) n 1 read as follows:
E exp ( i ∑_{ j = 1 } ^ { d } θ j X n ( x j ) ) = exp ( − ∫ | ∑_{ j = 1 } ^ { d } θ j 1 [ 0 , x j ] ( s ) | ^ { α n ( s ) } d s ) ,
for d N , ( θ 1 , , θ d ) R d and ( x 1 , , x d ) [ 0 , 1 ] d . It is obvious that
lim θ 0 f n ( θ , x 1 , x 2 ) = lim θ 0 exp | θ 1 [ x 1 , x 2 ] ( s ) | α n ( s ) d s = 1
uniformly for all n 1 and all x 1 , x 2 [ 0 , 1 ] , and that for any M > 0 ,
lim | x 2 − x 1 | → 0 f n ( θ , x 1 , x 2 ) = lim | x 2 − x 1 | → 0 exp ( − ∫ | θ 1 [ x 1 , x 2 ] ( s ) | ^ { α n ( s ) } d s ) = 1
uniformly for all n 1 and all | θ | M . Hence, the conclusion follows immediately from Theorem A4.
This conclusion can also be obtained from the Arzelà–Ascoli theorem. Since the processes ( X n ( x ) ) n ≥ 1 are localizable, by (7), the functions ( α n ( x ) ) n ≥ 1 are equicontinuous. Thus, there exist a subsequence ( α n k ) k ≥ 1 and a continuous function α ( x ) satisfying condition (7) such that ∥ α n k − α ∥ ∞ → 0 as k → ∞ . It is obvious that
lim k E exp i j = 1 d θ j X n k ( x j ) = lim k exp | j = 1 d θ j 1 [ 0 , x j ] ( s ) | α n k ( s ) d s = exp | j = 1 d θ j 1 [ 0 , x j ] ( s ) | α ( s ) d s ,
for d N , ( θ 1 , , θ d ) R d and ( x 1 , , x d ) [ 0 , 1 ] d , which means X n k ( x ) converges to X ( x ) , an independent-increments α ( x ) –multistable Lévy motion, in finite dimensional distribution.
The condition that the sequence ( f n ) n ≥ 1 is equicontinuous cannot be satisfied in some particular cases, for instance, in the construction of the self-stabilizing process. Fortunately, the equicontinuity condition in the Arzelà–Ascoli theorem can be relaxed to a more general one, which we introduce now.
Definition A1. 
We call the sequence ( f n ( θ ) ) n 1 subequicontinuous on I R d , if for any ε > 0 , there exist δ > 0 and a sequence of nonnegative numbers ( ε n ) n 1 , ε n 0 as n , such that, for all functions f n in the sequence,
| f n ( θ 1 ) f n ( θ 2 ) | ε + ε n , θ 1 , θ 2 I ,
whenever | | θ 1 θ 2 | | < δ . In particular, if ε n = 0 for all n , then ( f n ( θ ) ) n 1 is equicontinuous.
Notice that subequicontinuity of ( f n ( θ ) ) n ≥ 1 on I ⊂ R d implies that, for any ε > 0 , there exist δ > 0 and N such that, for all θ 1 , θ 2 ∈ I and all n > N ,
| f n ( θ 1 ) f n ( θ 2 ) | ε ,
whenever | | θ 1 θ 2 | | < δ .
Example A8. 
Assume that ( f n ( θ ) ) n ≥ 1 is a sequence of equicontinuous functions. Set g n ( θ ) = f n ( θ ) + ( 1 / n ) h n ( θ ) , where ( h n ( θ ) ) n ≥ 1 is a sequence of functions satisfying sup θ ∈ I | h n ( θ ) | ≤ C for a constant C . Then, it is easy to see that ( g n ( θ ) ) n ≥ 1 is a sequence of subequicontinuous functions, with ε n = 2 C / n . In particular, if h n ( θ ) is the Dirichlet function, then ( g n ( θ ) ) n ≥ 1 is a sequence of subequicontinuous, but not equicontinuous, functions.
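Since the Dirichlet function cannot be evaluated meaningfully in floating-point arithmetic, the following numerical sketch (ours) replaces it by the bounded, rapidly oscillating h n ( θ ) = sign ( sin ( n ^ 2 θ ) ) with C = 1 : the measured oscillation of g n over a grid of spacing δ then obeys the bound δ + 2 / n , exhibiting subequicontinuity with ε n = 2 / n but not equicontinuity.

```python
import numpy as np

# g_n = f_n + h_n/n with f_n(theta) = cos(theta) (equicontinuous, Lipschitz-1)
# and h_n(theta) = sign(sin(n^2 * theta)), bounded by C = 1.  Over grid
# spacing delta, the oscillation of g_n is at most delta + 2/n: an
# "eps + eps_n" bound as in Definition A1, with eps_n = 2/n -> 0.

theta = np.linspace(0.0, 2.0 * np.pi, 100_001)   # spacing delta ~ 6.3e-5
delta = theta[1] - theta[0]

for n in [1, 10, 100, 1000]:
    g = np.cos(theta) + np.sign(np.sin(n**2 * theta)) / n
    osc = np.max(np.abs(np.diff(g)))             # max |g(t + delta) - g(t)|
    print(f"n = {n:4d}: oscillation = {osc:.5f} <= delta + 2/n = {delta + 2.0 / n:.5f}")
```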
Similarly, we can define Hölder subequicontinuity. A sequence ( f n ( θ ) ) n ≥ 1 is called Hölder subequicontinuous of order α ∈ ( 0 , 1 ] on I ⊂ R d if there exist a constant C > 0 and a sequence of nonnegative numbers ( ε n ) n ≥ 1 , ε n → 0 as n → ∞ , such that, for all functions f n in the sequence,
| f n ( θ 1 ) f n ( θ 2 ) | C | | θ 1 θ 2 | | α + ε n , θ 1 , θ 2 I ,
whenever | | θ 1 θ 2 | | 1 . In particular, if ε n = 0 for all n , then ( f n ( θ ) ) n 1 is Hölder equicontinuous of order α ( 0 , 1 ] . It is obvious that if ( f n ( θ ) ) n 1 is Hölder subequicontinuous, then ( f n ( θ ) ) n 1 is subequicontinuous.

Appendix A.3. Generalization of the Arzelà–Ascoli Theorem

Next, we give a general version of the Arzelà–Ascoli theorem.
Lemma A2. 
Assume that ( f n ) n 1 is a sequence of real-valued continuous functions defined on a closed and bounded set Π i = 1 d [ a i , b i ] R d . If this sequence is uniformly bounded and subequicontinuous, then there exists a subsequence ( f n k ) k 1 that converges uniformly.
Proof. 
The proof is essentially based on a diagonalization argument and is similar to the proof of the Arzelà–Ascoli theorem. We give a proof for the case of d = 1 . For d > 1 , the argument follows by applying the 1 –dimensional version of our theorem d times.
Let [ a , b ] ⊂ R be a closed and bounded interval. If the sequence ( f n ) n ≥ 1 contains only finitely many distinct functions, then some function appears infinitely often, and the corresponding constant subsequence converges uniformly. Thus, without loss of generality, we may assume that ( f n ) n ≥ 1 contains infinitely many distinct functions. Fix an enumeration ( x i ) i ≥ 1 of the rational numbers in [ a , b ] . Since ( f n ) n ≥ 1 is uniformly bounded, the set of points ( f n ( x 1 ) ) n ≥ 1 is bounded. By the Bolzano–Weierstrass theorem, there is a subsequence ( f σ 1 ( n ) ) n ≥ 1 of ( f n ) n ≥ 1 such that ( f σ 1 ( n ) ( x 1 ) ) n ≥ 1 converges. Repeating the same argument for the sequence of points ( f σ 1 ( n ) ( x 2 ) ) n ≥ 1 , there is a subsequence ( f σ 2 ( n ) ) n ≥ 1 of ( f σ 1 ( n ) ) n ≥ 1 such that ( f σ 2 ( n ) ( x 2 ) ) n ≥ 1 converges. Iterating this procedure, we obtain a chain of subsequences such that, for each k ≥ 1 , the subsequence ( f σ k ( n ) ) n ≥ 1 converges at x 1 , … , x k . Moreover, the following relation holds:
( f n ) n 1 ( f σ 1 ( n ) ) n 1 ( f σ N 1 ( n ) ) n 1 ( f σ N ( n ) ) n 1
Define the diagonal subsequence ( f σ m ( m ) ) m 1 whose mth term is the mth term in the mth subsequence ( f σ m ( n ) ) n 1 . By construction, ( f σ m ( m ) ) m 1 converges at every rational point of [ a , b ] .
Therefore, given any ε > 0 and any rational x i in [ a , b ] , there is an integer N = N ( ε , x i ) such that, for all m , n ≥ N ( ε , x i ) ,
| f σ m ( m ) ( x i ) − f σ n ( n ) ( x i ) | ≤ ε / 3 .
Since the family ( f n ) n ≥ 1 is subequicontinuous, for this fixed ε and for every x in [ a , b ] , there are an open interval U x containing x and an integer N x = N x ( ε ) such that
| f n ( s ) f n ( t ) | ε 3
for all n ≥ N x and all s , t ∈ U x . The collection of intervals U x , x ∈ [ a , b ] , forms an open cover of [ a , b ] . Since [ a , b ] is compact, this cover admits a finite subcover U x ( 1 ) , … , U x ( l ) . There exists an integer K such that each interval U x ( j ) , 1 ≤ j ≤ l , contains a rational x ˜ k with 1 ≤ k ≤ K . Finally, for any t ∈ [ a , b ] , there are j and k such that t and x ˜ k belong to the same interval U x ( j ) . Set N = max { N ( ε , x ˜ 1 ) , … , N ( ε , x ˜ K ) , N x ( 1 ) , … , N x ( l ) } ; then, for all m , n ≥ N and t ∈ [ a , b ] ,
| f σ m ( m ) ( t ) f σ n ( n ) ( t ) | | f σ m ( m ) ( t ) f σ m ( m ) ( x ˜ k ) | + | f σ m ( m ) ( x ˜ k ) f σ n ( n ) ( x ˜ k ) | + | f σ n ( n ) ( x ˜ k ) f σ n ( n ) ( t ) | ε 3 + ε 3 + ε 3 = ε .
Hence, the sequence ( f σ m ( m ) ) m ≥ 1 is a uniformly Cauchy sequence, and it therefore converges uniformly to a continuous function, as claimed. This completes the proof. □

Appendix A.4. Some Applications

With this general version of the Arzelà–Ascoli theorem, we can easily obtain the generalizations of Theorems A1–A4. For instance, we have the following generalization of Theorem A4.
Theorem A5. 
Let ( X n ( x ) ) n 1 , x [ 0 , 1 ] , be a sequence of random processes with values in R . Denote by
f n ( θ , x 1 , x 2 ) = E e i θ ( X n ( x 2 ) X n ( x 1 ) ) , x 1 , x 2 [ 0 , 1 ] and θ R .
Assume that for any M > 0 , there exists a sequence of nonnegative numbers ( ε n ( M ) ) n 1 , depending only on M, such that
lim n ε n ( M ) = 0 , lim θ 0 | f n ( θ , x 1 , x 2 ) 1 | ε n ( M )
uniformly for all x 1 , x 2 [ 0 , 1 ] , and that
lim | x 2 x 1 | 0 | f n ( θ , x 1 , x 2 ) 1 | ε n ( M )
uniformly for all | θ | ≤ M .
Then, f n ( θ , x , y ) is subequicontinuous with respect to ( θ , x , y ) [ M , M ] × [ 0 , 1 ] 2 and therefore, there exist a subsequence ( X n k ( x ) ) k 1 and a random process X ( x ) such that, for any x 1 , x 2 [ 0 , 1 ] ,
X n k ( x 2 ) X n k ( x 1 ) X ( x 2 ) X ( x 1 )
in distribution.
Moreover, if X n ( 0 ) = 0 and all the processes X ( x ) and X n ( x ) , n 1 , have independent increments, then X n k ( x ) X ( x ) in finite dimensional distribution.
The proof of Theorem A5 is omitted, since it is similar to that of Theorem A4. Using Theorem A5, we obtain the following corollary.
Denote by x the greatest integer less than x .
Corollary A1. 
Let { S n ( k / n ) : 0 ≤ k ≤ n } n ≥ 1 be a sequence of Markov processes with values in R . Assume that there exists a sequence of Lebesgue measurable functions ( F n ( θ , y ) ) n ≥ 1 , ( θ , y ) ∈ R × [ 0 , 1 ] , such that, for all k and all θ ∈ R ,
E [ exp ( i θ ( S n ( ( k + 1 ) / n ) − S n ( k / n ) ) ) | S n ( k / n ) ] = exp ( − ( 1 / n ) F n ( θ , k / n ) ) .
Assume that ( F n ) n 1 satisfies the following two conditions:
(A) 
The limit lim θ 0 F n ( θ , · ) = 0 holds uniformly for all n .
(B) 
There exists a function F ( z ) , z R , such that for any M ,   0 F n ( θ , · ) F ( M ) holds uniformly for all | θ | M and all n .
  • Define S n ( x ) = S n ( ⌊ n x ⌋ / n ) , x ∈ [ 0 , 1 ] . Then, there exist a subsequence ( S n k ( x ) ) k ≥ 1 and a random process S ( x ) such that, for any x 1 , x 2 ∈ [ 0 , 1 ] ,
S n k ( x 2 ) S n k ( x 1 ) S ( x 2 ) S ( x 1 )
in distribution.
Proof. 
To simplify the notation, write
E_{ S n ( t 1 ) , … , S n ( t d ) } [ · ] = E [ · | S n ( t 1 ) , … , S n ( t d ) ] .
It is easy to see that, for all x 1 , x 2 [ 0 , 1 ] satisfying x 1 < x 2 ,
E [ exp ( i θ ( S n ( x 2 ) − S n ( x 1 ) ) + ∑_{ k = ⌊ n x 1 ⌋ } ^ { ⌊ n x 2 ⌋ − 1 } F n ( θ , k / n ) ( 1 / n ) ) ] = E [ E_{ S n ( ( ⌊ n x 2 ⌋ − 1 ) / n ) , … , S n ( x 1 ) } exp ( i θ ( S n ( x 2 ) − S n ( x 1 ) ) + ∑_{ k = ⌊ n x 1 ⌋ } ^ { ⌊ n x 2 ⌋ − 1 } F n ( θ , k / n ) ( 1 / n ) ) ] = E [ exp ( i θ ( S n ( ( ⌊ n x 2 ⌋ − 1 ) / n ) − S n ( x 1 ) ) + ∑_{ k = ⌊ n x 1 ⌋ } ^ { ⌊ n x 2 ⌋ − 2 } F n ( θ , k / n ) ( 1 / n ) ) ] = ⋯ = E [ exp ( i θ ( S n ( ( ⌊ n x 1 ⌋ + 1 ) / n ) − S n ( x 1 ) ) + F n ( θ , ⌊ n x 1 ⌋ / n ) ( 1 / n ) ) ] = 1 .
Consequently, this identity can be rewritten in the following form:
E exp ( i θ ( S n ( x 2 ) − S n ( x 1 ) ) ) = exp ( − ∫_{ x 1 ( n ) } ^ { x 2 ( n ) } F n ( θ , t ) d t ) ,
where, here and in what follows, we write
x ( n ) = ⌊ n x ⌋ / n ,
which should not be confused with a coordinate of x . Set
f n ( θ , x 1 , x 2 ) = E e i θ S n ( x 2 ) S n ( x 1 ) .
Notice that | e ^ { − x } − 1 | ≤ x for all x ≥ 0 . By conditions (A) and (B), it is easy to verify that
lim θ 0 | f n ( θ , x 1 , x 2 ) 1 | lim θ 0 x 1 ( n ) x 2 ( n ) F n θ , t d t = 0
uniformly for all n 1 and all x 1 , x 2 [ 0 , 1 ] , and that
lim | x 2 − x 1 | → 0 | f n ( θ , x 1 , x 2 ) − 1 | ≤ lim | x 2 − x 1 | → 0 ∫_{ x 1 ( n ) } ^ { x 2 ( n ) } F n ( θ , t ) d t ≤ F ( M ) lim | x 2 − x 1 | → 0 | x 2 ( n ) − x 1 ( n ) | ≤ F ( M ) ( 1 / n )
uniformly for all | θ | ≤ M . Notice that lim n → ∞ F ( M ) ( 1 / n ) = 0 . Thus, by Theorem A5, we obtain the conclusion of Corollary A1. □
We now give an example showing how to apply Corollary A1.
Example A9. 
Assume that the function α ( x ) ∈ [ a , b ] ⊂ ( 0 , 2 ] satisfies condition (7). Let g ( x ) be a nonnegative integrable function such that g ( x ) ≤ C for a constant C . Let { S n ( k / n ) : 0 ≤ k ≤ n } n ≥ 1 be a sequence of real-valued processes with independent increments. Assume that, for all k and θ ∈ R ,
E exp ( i θ ( S n ( ( k + 1 ) / n ) − S n ( k / n ) ) ) = exp ( − ( 1 / n ) | θ | ^ { α ( k / n ) } g ( k / n ) ) ,
i.e., S n ( ( k + 1 ) / n ) − S n ( k / n ) ∼ S α ( k / n ) ( ( g ( k / n ) / n ) ^ { 1 / α ( k / n ) } , 0 , 0 ) . Then, conditions (A) and (B) of Corollary A1 are satisfied with F n ( θ , y ) = | θ | ^ { α ( y ) } g ( y ) and F ( z ) = C ( | z | ^ a + | z | ^ b ) . Define S n ( x ) = S n ( ⌊ n x ⌋ / n ) , x ∈ [ 0 , 1 ] . Then, there exist a subsequence ( S n k ( x ) ) k ≥ 1 and a random process S ( x ) such that, for any x 1 , x 2 ∈ [ 0 , 1 ] ,
S n k ( x 2 ) S n k ( x 1 ) S ( x 2 ) S ( x 1 )
in distribution.
In fact, when S n ( 0 ) = 0 , this example gives the integral of g ( x ) ^ { 1 / α ( x ) } with respect to a multistable Lévy measure. In particular, when g ≡ 1 , it gives a functional central limit theorem for the independent-increments MsLM; that is, S ( x ) tends in distribution to L I ( x ) in ( D [ 0 , 1 ] , d S ) , where d S is the Skorohod metric.
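The construction in this example is straightforward to simulate. The sketch below is our illustration (the index function α and all parameters are arbitrary choices): for g ≡ 1 , it draws the increments with scipy.stats.levy_stable, whose symmetric ( β = 0 ) parameterization satisfies E exp ( i t X ) = exp ( − | σ t | ^ α ) and thus matches exp ( − ( 1 / n ) | t | ^ α ) when σ = n ^ { − 1 / α } .

```python
import numpy as np
from scipy.stats import levy_stable

# Simulation sketch of Example A9 with g == 1: S_n has independent increments
#   S_n((k+1)/n) - S_n(k/n) ~ S_{alpha(k/n)}( n**(-1/alpha(k/n)), 0, 0 ),
# i.e. symmetric stable with index alpha(k/n) and scale n**(-1/alpha(k/n)).

rng = np.random.default_rng(0)

def alpha(x):
    """A continuous index function with values in [1.2, 1.8] subset (0, 2]."""
    return 1.5 + 0.3 * np.cos(2.0 * np.pi * x)

def simulate_multistable(n):
    a = alpha(np.arange(n) / n)
    increments = np.array([
        levy_stable.rvs(a_k, 0.0, scale=n ** (-1.0 / a_k), random_state=rng)
        for a_k in a
    ])
    return np.concatenate([[0.0], np.cumsum(increments)])   # S_n(0) = 0

path = simulate_multistable(1000)   # the values S_n(k/n), k = 0, ..., n
print(path[:5])
```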
Using Theorem A5 again, we obtain the following corollary. An application of this corollary is given in the next section.
Corollary A2. 
Let { S n ( k / n ) : 0 ≤ k ≤ n } n ≥ 1 be a sequence of Markov processes with values in R . Assume that there exists a sequence of Lebesgue measurable functions ( F n ( θ , y ) ) n ≥ 1 , ( θ , y ) ∈ R × R , such that, for all k and all θ ∈ R ,
E [ exp ( i θ ( S n ( ( k + 1 ) / n ) − S n ( k / n ) ) ) | S n ( k / n ) ] = exp ( − ( 1 / n ) F n ( θ , S n ( k / n ) ) ) .
Assume that ( F n ) n ≥ 1 satisfies conditions (A) and (B) of Corollary A1. Define S n ( x ) = S n ( ⌊ n x ⌋ / n ) , x ∈ [ 0 , 1 ] . Then, there exist a subsequence ( S n k ( x ) ) k ≥ 1 and a random process S ( x ) such that, for any x 1 , x 2 ∈ [ 0 , 1 ] ,
S n k ( x 2 ) S n k ( x 1 ) S ( x 2 ) S ( x 1 )
in distribution.
Proof. 
By an argument similar to that of Corollary A1, it holds, for all x 1 , x 2 [ 0 , 1 ] satisfying x 1 < x 2 ,
E exp ( i θ ( S n ( x 2 ) − S n ( x 1 ) ) + ∑_{ k = ⌊ n x 1 ⌋ } ^ { ⌊ n x 2 ⌋ − 1 } F n ( θ , S n ( k / n ) ) ( 1 / n ) ) = 1 .
Set
f n ( θ , x 1 , x 2 ) = E e i θ S n ( x 2 ) S n ( x 1 ) .
Then, (A35) implies that
exp ( − F ( M ) ( x 2 ( n ) − x 1 ( n ) ) ) ≤ f n ( θ , x 1 , x 2 ) ≤ 1
uniformly for all | θ | M . Thus, it is easy to verify that
lim θ 0 f n ( θ , x 1 , x 2 ) = lim θ 0 E exp i θ S n ( x 2 ) S n ( x 1 ) + k = n x 1 n x 2 1 F n θ , S n k n 1 n = 1
uniformly for all n 1 and all x 1 , x 2 [ 0 , 1 ] , and that, from (A36),
lim | x 2 − x 1 | → 0 | f n ( θ , x 1 , x 2 ) − 1 | ≤ lim | x 2 − x 1 | → 0 F ( M ) | x 2 ( n ) − x 1 ( n ) | ≤ F ( M ) ( 1 / n )
uniformly for all | θ | ≤ M . Notice that lim n → ∞ F ( M ) ( 1 / n ) = 0 . Thus, by Theorem A5, we obtain the conclusion of Corollary A2. □
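To illustrate Corollary A2, here is a minimal simulation sketch (ours; the state-dependent index α is an arbitrary choice). Taking F n ( θ , y ) = | θ | ^ { α ( y ) } , the increment from state y is symmetric α ( y ) –stable with scale n ^ { − 1 / α ( y ) } , so the local intensity of jumps varies with the current value of the path; this is precisely the self-stabilizing behavior discussed in the Introduction.

```python
import numpy as np
from scipy.stats import levy_stable

# Markov scheme of Corollary A2 with F_n(theta, y) = |theta|^{alpha(y)}:
# given S_n(k/n) = y, the next increment satisfies
#   E[ exp(i*theta*(S_n((k+1)/n) - S_n(k/n))) | S_n(k/n) = y ]
#     = exp( -(1/n) * |theta|^{alpha(y)} ),
# i.e. it is symmetric alpha(y)-stable with scale n**(-1/alpha(y)).

rng = np.random.default_rng(0)

def alpha(y):
    """State-dependent stability index, mapped continuously into [1.2, 1.8]."""
    return 1.5 + 0.3 * np.tanh(y)

def simulate_self_stabilizing(n, s0=0.0):
    path = [s0]
    for _ in range(n):
        a = alpha(path[-1])                 # index read off the current state
        step = levy_stable.rvs(a, 0.0, scale=n ** (-1.0 / a), random_state=rng)
        path.append(path[-1] + step)
    return np.array(path)

path = simulate_self_stabilizing(1000)
print(path[:5])
```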

References

  1. Falconer, K.J. Tangent fields and the local structure of random fields. J. Theoret. Probab. 2002, 15, 731–750.
  2. Falconer, K.J. The local structure of random processes. J. London Math. Soc. 2003, 67, 657–672.
  3. Li, M. Multi-fractional generalized Cauchy process and its application to teletraffic. Phys. A Stat. Mech. Appl. 2020, 550, 123982.
  4. Li, M. Modified multifractional Gaussian noise and its application. Phys. Scr. 2021, 96, 125002.
  5. Falconer, K.J.; Liu, L. Multistable processes and localisability. Stoch. Models 2012, 28, 503–526.
  6. Le Guével, R.; Lévy Véhel, J. A Ferguson–Klass–LePage series representation of multistable multifractional processes and related processes. Bernoulli 2012, 18, 1099–1127.
  7. Le Guével, R.; Lévy Véhel, J. Incremental moments and Hölder exponents of multifractional multistable processes. ESAIM Probab. Stat. 2013, 17, 135–178.
  8. Le Guével, R. The Hausdorff dimension of the range of the Lévy multistable processes. J. Theoret. Probab. 2019, 32, 765–780.
  9. Ayache, A.; Hamonier, J. Wavelet series representation for multifractional multistable Riemann–Liouville process. arXiv 2020, arXiv:2004.05874.
  10. Ayache, A.; Jaffard, S.; Taqqu, M.S. Wavelet construction of generalized multifractional processes. Rev. Mat. Iberoam. 2007, 23, 327–370.
  11. Falconer, K.J.; Lévy Véhel, J. Self-stabilizing processes based on random signs. J. Theor. Probab. 2020, 33, 134–152.
  12. Billingsley, P. Convergence of Probability Measures; John Wiley & Sons: New York, NY, USA, 1968.
  13. Ayache, A. Sharp estimates on the tail behavior of a multistable distribution. Statist. Probab. Lett. 2013, 83, 680–688.
  14. Samorodnitsky, G.; Taqqu, M.S. Stable Non-Gaussian Random Processes; Chapman and Hall: London, UK, 1994.
  15. Barrière, O.; Echelard, A.; Lévy Véhel, J. Self-regulating processes. Electron. J. Probab. 2012, 17, 1–30.
Figure 1. The figure exhibits how α governs the intensity of jumps. When α is small, the intensity of jumps is large. Conversely, when α is large, the intensity of jumps is small.
Figure 2. The figure exhibits the fluctuations in the stock market price S ( t ) of Apple (blue curve). The “local intensities of jumps” (that is, the sizes of the increments and decrements) vary with the price level: when prices are low, the corresponding local intensities of jumps are small; when prices are high, they are large.
Figure 3. The figure exhibits the fluctuations in the exchange rate S ( t ) of the Euro against the US dollar. The local intensities of jumps vary with the values: when the exchange rate is at the lower level S 1 , the corresponding local intensity of jumps is small; when it is at the higher level S 2 , the local intensity of jumps is large.
Figure 4. A graph of daytime temperatures S ( t ) in Nice, France. The local intensities of jumps vary with the values: when the temperature is low, the corresponding local intensity of jumps is large; when the temperature is high, it is small. When the temperatures are at the same level, the local intensities of jumps coincide.