Article

Improved Bound of Four Moment Theorem and Its Application to Orthogonal Polynomials Associated with Laws

Division of Data Science, Data Science Convergence Research Center, Hallym University, Chuncheon 24252, Republic of Korea
*
Author to whom correspondence should be addressed.
Axioms 2023, 12(12), 1092; https://doi.org/10.3390/axioms12121092
Submission received: 29 September 2023 / Revised: 20 November 2023 / Accepted: 26 November 2023 / Published: 29 November 2023
(This article belongs to the Special Issue Applied Mathematics, Intelligence and Operations Research)

Abstract
In the case where the square of an eigenfunction $F$, with respect to an eigenvalue of a Markov generator $L$, can be expressed as a sum of eigenfunctions, we find the largest number, excluding zero, among the eigenvalues appearing in the terms of this sum. Using this number, we obtain an improved bound in the fourth moment theorem for Markov diffusion generators. To see how the improved bound depends on this number, we give some examples of eigenfunctions of diffusion generators $L$, namely the Ornstein–Uhlenbeck, Jacobi, and Romanovski–Routh generators.

1. Introduction

The aim of this paper is to find an improved bound for the fourth moment theorem for random variables belonging to the Markov chaos studied by Bourguin et al. in [1]. The first study in this field is the central limit theorem, called the fourth moment theorem, proved by Nualart and Peccati in [2]. These authors found a necessary and sufficient condition for a sequence of random variables, belonging to a fixed Wiener chaos, to converge in distribution to a Gaussian random variable. More precisely, let $X = \{X(h),\, h \in H\}$, where $H$ is a real separable Hilbert space, be an isonormal Gaussian process defined on a probability space $(\Omega, \mathcal{F}, P)$.
Theorem 1
(Fourth moment theorem). Fix an integer $q \ge 2$, and let $\{F_n,\, n \ge 1\}$ be a sequence of random variables belonging to the $q$th Wiener chaos with $E[F_n^2] = 1$ for all $n \ge 1$. Then $F_n \xrightarrow{\mathcal{L}} Z$ if and only if $E[F_n^4] \to 3$, where $Z$ is a standard Gaussian random variable and the notation $\xrightarrow{\mathcal{L}}$ denotes convergence in distribution.
Such a result gives a dramatic simplification of the method of moments from the point of view of convergence in distribution. The above fourth moment theorem is expressed in terms of the Malliavin derivative in [3]. However, the results given in [2,3] do not provide any information about the rate of convergence, whereas, in the paper [4], the authors prove that Theorem 1 can be recovered from an upper bound on the Kolmogorov distance (or total variation, Wasserstein) obtained by combining Malliavin calculus (see, e.g., [5,6,7]) and Stein’s method for normal approximation (see, e.g., [8,9,10]). For further explanation of these techniques, we refer to the papers [4,5,11,12,13,14,15].
One of the remarkable achievements of the Nourdin–Peccati approach (see Theorem 3.1 in [4]) is the quantification of the fourth moment theorem for functionals of Gaussian fields. In the particular case where $F$ is an element of the $q$th Wiener chaos of $X$ with $E[F^2] = 1$, the upper bound on the Kolmogorov distance is given by
$$d_{Kol}(F, Z) \le \sqrt{\frac{q-1}{3q}}\,\sqrt{E[F^4] - 3}. \qquad (1)$$
Here, $E[F^4] - 3$ is just the fourth cumulant $\kappa_4(F)$ of $F$.
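As a small illustration of the quantities in (1) (our own hedged sketch, not code from the paper; the helper names are ours), the fourth cumulant of a normalized second-chaos variable can be computed exactly by probabilists' Gauss–Hermite quadrature, and cumulant additivity then gives the decay of the right-hand side of (1) for normalized sums:

```python
import numpy as np

# Probabilists' Gauss-Hermite rule: integrates polynomials exactly against
# the standard normal density (hermegauss weights sum to sqrt(2*pi)).
nodes, weights = np.polynomial.hermite_e.hermegauss(20)
weights = weights / np.sqrt(2.0 * np.pi)

def expect(g):
    # E[g(Z)] for Z ~ N(0,1); exact for polynomials of degree < 40
    return float(np.sum(weights * g(nodes)))

# F = H_2(Z)/sqrt(2) = (Z^2 - 1)/sqrt(2): a normalized element of the
# second Wiener chaos (q = 2) with E[F^2] = 1.
F = lambda z: (z**2 - 1.0) / np.sqrt(2.0)

second = expect(lambda z: F(z)**2)                    # = 1
kappa4 = expect(lambda z: F(z)**4) - 3.0 * second**2  # kappa_4(F) = 12

# For F_n = n^{-1/2}(X_1 + ... + X_n) with iid copies X_i of F, cumulants
# add, so kappa_4(F_n) = kappa_4(F)/n, and the right side of (1) becomes
n, q = 100, 2
bound = np.sqrt((q - 1) / (3.0 * q)) * np.sqrt(kappa4 / n)
print(second, kappa4, bound)
```

The printed bound decays like $n^{-1/2}$, which is the familiar Berry–Esseen-type rate recovered by the Nourdin–Peccati approach.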
Recently, the author in [16] proved that the fourth moment theorem also holds in the general framework of Markov diffusion generators. More precisely, under a certain spectral condition on the Markov diffusion generator, a sequence of eigenfunctions of such a generator satisfies the bound given in (1). In particular, this new method may avoid the use of a complicated product formula for multiple integrals. Moreover, the authors in [17] introduced a Markov chaos of eigenfunctions that is less restrictive than the Markov chaos defined in [16]. Using this Markov chaos, they derived a quantitative fourth moment theorem for the convergence of the eigenfunctions towards Gaussian, Gamma, and Beta distributions. Furthermore, the authors in [1] state that the convergence of the elements of a Markov chaos to a Pearson distribution can still be bounded with just four moments by using the new concept of chaos grade.
For the purpose of this paper, we start from the bound given in Theorem 3.9 of Bourguin et al. [1]. Pearson diffusions are Itô diffusions given by the following stochastic differential equation (SDE):
$$dX_t = a(X_t)\,dt + \sqrt{2\theta\, b(X_t)}\,dB_t, \qquad (2)$$
where $a(x) = -\theta(x - m)$ and $b(x) = b_2x^2 + b_1x + b_0$. Given the generator $L$ defined on $L^2(E, \mu)$ by
$$Lf(x) = -\theta(x - m)f'(x) + \theta\, b(x)\, f''(x), \qquad (3)$$
its invariant measure $\mu$ is a Pearson distribution, and the set of eigenvalues of $-L$ is given by
$$\Lambda = \left\{ n\big(1 - (n-1)b_2\big)\theta \;:\; n \in \mathbb{N}_0,\; b_2 < \frac{1}{2n-1} \right\}. \qquad (4)$$
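For concreteness (an illustrative sketch of ours, not from the paper; `pearson_eigenvalues` is a hypothetical helper name), the discrete part of the spectrum in (4) can be enumerated directly:

```python
def pearson_eigenvalues(theta, b2, n_max=10):
    """Eigenvalues n*(1 - (n-1)*b2)*theta of -L, kept only while the
    spectral condition b2 < 1/(2n - 1) from (4) holds."""
    lams = []
    for n in range(n_max + 1):
        if n >= 1 and b2 >= 1.0 / (2 * n - 1):
            break  # heavy-tailed case: only finitely many eigenvalues remain
        lams.append(n * (1 - (n - 1) * b2) * theta)
    return lams

# Gaussian case (b2 = 0): the familiar spectrum 0, theta, 2*theta, ...
print(pearson_eigenvalues(1.0, 0.0, 5))
```

With $b_2 > 0$ (e.g. a skew $t$ target) the loop stops early, reflecting the finitely many eigenvalues described in Section 2.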
In only a few cases is the domain of the infinitesimal generator $L$ of the diffusion process completely known. For the purposes of this paper, it is sufficient to know only suitable subspaces (for example, a space of $C^2$-functions with compact support) of the domain of $L$. The exact domain of particular generators will be discussed in Remark 2 below.
Theorem 2
(Fourth moment theorem). Let $\nu$ be a Pearson distribution associated with the diffusion given by SDE (2). Let $F$ be a chaotic eigenfunction of the generator $L$ with eigenvalue $\lambda$, chaos grade $u$, and finite moments up to order 4. Set $G = F + m$ and $\xi = u - 2(1 - b_2)$. Then, it holds that
$$E\left[\left(\Gamma(G, -L^{-1}G) - b(G)\right)^2\right] \le 2\left(1 - b_2 - \frac{u}{4}\right)E[U(G)] + (\xi)_+\,\frac{1 - b_2}{2}\,E[Q^2(G)], \qquad (5)$$
where $(\xi)_+ = \xi$ for $\xi > 0$ and $0$ for $\xi \le 0$, and the polynomials $Q$ and $U$ are given by
$$Q(x) = x^2 + \frac{2(b_1 + m)}{2b_2 - 1}\,x + \frac{1}{b_2 - 1}\left(b_0 + \frac{m(b_1 + m)}{2b_2 - 1}\right), \qquad (6)$$
$$U(x) = (1 - b_2)\,Q^2(x) - \frac{1}{12}\,(Q'(x))^3\,(x - m). \qquad (7)$$
The notations $\Gamma$ and $L^{-1}$ in the above theorem, related to the Markov generator, will be explained in Section 2.
In this paper, we improve the bound given in Theorem 2 by introducing the notion of the lower chaos grade in the set of eigenvalues of the generator $L$. For example, if the target distribution $\nu$ in Theorem 2 is a standard Gaussian measure, then the diffusion coefficients are given by $b_2 = b_1 = 0$ and $b_0 = 1$. Since a chaotic random variable $F = I_q(f)$, $f \in H^{\odot q}$, with $E[F^2] = 1$ has chaos grade $u = 2$, the second term in the bound (5) vanishes and the bound reads as follows:
$$E\left[\left(\Gamma(F, -L^{-1}F) - 1\right)^2\right] \le \frac{1}{3}\left(E[F^4] - 3\right). \qquad (8)$$
Note that $d_{Kol}(F, Z) \le \sqrt{E\left[\left(\Gamma(F, -L^{-1}F) - 1\right)^2\right]}$. While the bound (1) depends on the order of the Wiener chaos in which the random variable lives, the bound (8) obtained from Theorem 2 does not. Obviously, the bound in (1) improves on the bound (8) in the case $F = I_q(f)$. In this paper, we develop a new technique that provides improved and more informative bounds like (1). When a random variable $F$ comes from eigenfunctions of a Jacobi generator (see Section 4.2), we provide two fourth moment bounds for the normal approximation in order to show how Theorems 1 and 2 can be compared. One of the bounds comes from our main result, Theorem 3 below, and the other bound, obtained by using the result in [18], shows that Theorem 1, called the fourth moment theorem, holds even if the upper chaos grade (see Definition 1) is greater than two. The rest of the paper is organized as follows: Section 2 reviews some basic notations and results on Markov diffusion generators. Our main result, in particular the bound in Theorem 3, is presented in Section 3. Finally, as an application of our main result, in Section 4 we consider the case where the random variable $G$ in Theorem 2 comes from an eigenfunction of a generator associated with a Pearson distribution.

2. Preliminaries

In this section, we recall some basic facts about Markov diffusion generators. The reader is referred to [19] for a more detailed explanation. We begin with the definition of a Markov triple $(E, \mathcal{F}, \mu)$ in the sense of [19]. For the infinitesimal generator $L$ of a Markov semigroup $(P_t)_{t \ge 0}$ with $L^2(\mu)$-domain $\mathcal{D}(L)$, we associate a bilinear form $\Gamma$. Assume that we are given a vector space $\mathcal{A}_0 \subset \mathcal{D}(L)$ such that for every pair $(F, G)$ of random variables in $\mathcal{A}_0$, the product $FG$ is in $\mathcal{D}(L)$ (that is, $\mathcal{A}_0$ is an algebra). On this algebra $\mathcal{A}_0$, the bilinear map (carré du champ operator) $\Gamma$ is defined by
$$\Gamma(F, G) = \frac{1}{2}\left(L(FG) - F\,LG - G\,LF\right) \qquad (9)$$
for every $(F, G) \in \mathcal{A}_0 \times \mathcal{A}_0$. As the carré du champ operator $\Gamma$ and the measure $\mu$ completely determine the symmetric Markov generator $L$, we will work throughout this paper with the Markov triple $(E, \mathcal{F}, \mu)$, equipped with a probability measure $\mu$ on a state space $(E, \mathcal{F})$ and a symmetric bilinear map $\Gamma$ defined on $\mathcal{A}_0 \times \mathcal{A}_0$ with $\Gamma(F, F) \ge 0$.
Next, we construct the domain $\mathcal{D}(\mathcal{E})$ of the Dirichlet form $\mathcal{E}$ by completion of $\mathcal{A}_0$, and then obtain from this Dirichlet domain the domain $\mathcal{D}(L)$ of $L$. Recall that the Dirichlet form $\mathcal{E}$ is given by
$$\mathcal{E}(F, G) = E[\Gamma(F, G)] \quad \text{for } (F, G) \in \mathcal{A}_0 \times \mathcal{A}_0.$$
If $\mathcal{A}_0$ is endowed with the norm
$$\|F\|_{\mathcal{E}} = \left(\|F\|_2^2 + \mathcal{E}(F, F)\right)^{1/2},$$
the completion of $\mathcal{A}_0$ with respect to this norm turns it into a Hilbert space embedded in $L^2(\mu)$. Once the Dirichlet domain $\mathcal{D}(\mathcal{E})$ is constructed, the domain $\mathcal{D}(L) \subset \mathcal{D}(\mathcal{E})$ is defined as the set of all elements $F \in \mathcal{D}(\mathcal{E})$ such that
$$|\mathcal{E}(F, G)| \le c_F\,\sqrt{E[G^2]}$$
for all $G \in \mathcal{D}(\mathcal{E})$, where $c_F$ is a finite constant depending only on $F$. On these domains, a relation between $L$ and $\Gamma$ holds, namely the integration by parts formula
$$E[\Gamma(F, G)] = -E[F\,LG] = -E[G\,LF]. \qquad (11)$$
By the integration by parts formula (11) and $\Gamma(F, F) \ge 0$, the operator $-L$ is nonnegative and symmetric, and therefore the spectrum $S$ of $-L$ is contained in $[0, \infty)$. We assume that $L(\mathbf{1}) = 0$. The structure of the spectrum $S$ of the Pearson generator given in (3), related to this study, is described as follows (see [1] for a more detailed explanation).
(i)
If the invariant measure μ is a Gaussian, Gamma, or Beta distribution, then S is purely discrete and consists of infinitely many eigenvalues, each with multiplicity one.
(ii)
If the invariant measure μ is a skew t, inverse Gamma, or scaled F distribution, then S contains a discrete and a continuous part. The discrete part consists of finitely many eigenvalues.
A Full Markov triple is a Standard Markov triple for which there is an extended algebra $\mathcal{A} \supset \mathcal{A}_0$, with no requirement of integrability for elements of $\mathcal{A}$, satisfying the requirements given in Section 3.4.3 of [19]. In particular, the diffusion property holds: for any $C^\infty$ function $\Psi : \mathbb{R}^k \to \mathbb{R}$ and $F_1, \dots, F_k, G \in \mathcal{A}$,
$$\Gamma(\Psi(F_1, \dots, F_k), G) = \sum_{i=1}^{k} \partial_i \Psi(F_1, \dots, F_k)\,\Gamma(F_i, G), \qquad (12)$$
and
$$L(\Psi(F_1, \dots, F_k)) = \sum_{i=1}^{k} \partial_i \Psi(F_1, \dots, F_k)\,LF_i + \sum_{i,j=1}^{k} \partial_i \partial_j \Psi(F_1, \dots, F_k)\,\Gamma(F_i, F_j). \qquad (13)$$
We also define the operator $L^{-1}$, called the pseudo-inverse of $L$, satisfying, for any $F \in \mathcal{D}(L)$,
$$L\,L^{-1}F = L^{-1}L\,F = F - E[F]. \qquad (14)$$
Obviously, this pseudo-inverse $L^{-1}$ is naturally constructed and defined on $\mathcal{D}(L)$ by the self-adjointness of the operator $L$.

3. Main Results

We denote by $\Lambda \subset [0, \infty)$ the set of eigenvalues of $-L$. Then, chaotic random variables are defined as follows:
Definition 1.
An eigenfunction $F$ with respect to an eigenvalue $\lambda$ of the generator $L$ is called chaotic if there exist $u > 1$ and $e \le 1$ such that $u\lambda$ and $e\lambda$ are eigenvalues of $-L$, and
$$F^2 \in \bigoplus_{\substack{\kappa \in \Lambda \setminus \{0\} \\ e\lambda \le \kappa \le u\lambda}} \mathrm{Ker}(L + \kappa\,\mathrm{Id}) \;\oplus\; \mathrm{Ker}(L). \qquad (15)$$
The smallest number u satisfying (15) is called the upper chaos grade of F, and the largest number e satisfying (15) is called the lower chaos grade of F.
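As a small sketch of Definition 1 (our own helper, under the convention that the eigenvalues of $-L$ are listed as nonnegative numbers), the two grades are just extreme ratios over the nonzero eigenvalues occurring in the expansion of $F^2$:

```python
def chaos_grades(lam, kappas):
    """Upper and lower chaos grades of an eigenfunction with eigenvalue
    lam > 0 of -L, given the eigenvalues kappas that occur in the
    expansion (15) of F^2."""
    nonzero = [k for k in kappas if k > 0]
    u = max(nonzero) / lam  # smallest u with kappa <= u*lam for every term
    e = min(nonzero) / lam  # largest e with e*lam <= kappa for every term
    return u, e

# Hermite case H_q with q = 3: H_3^2 expands over eigenvalues 2, 4, 6 (plus 0)
u, e = chaos_grades(3, [0, 2, 4, 6])
print(u, e)
```

This reproduces the grades $u = 2$ and $e = 2q^{-1}$ computed for the Ornstein–Uhlenbeck case in Section 4.1.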
Now, we improve the bound given in Theorem 2 described in the introduction.
Theorem 3.
Let $\nu$ be a Pearson distribution associated with the diffusion given by SDE (2). Let $F$ be a chaotic eigenfunction of the generator $L$ with eigenvalue $\lambda$, upper chaos grade $u$, lower chaos grade $e$, and finite moments up to order 4. Set $G = F + m$. Then, we have
$$E\left[\left(\Gamma(G, -L^{-1}G) - b(G)\right)^2\right] \le 2\left(1 - b_2 - \frac{u + e}{4}\right)\frac{1 - 3b_2}{3}\,E[\tilde{U}(G)] + \left(\frac{1 - b_2}{2} - \frac{e}{4}\right)\left(u - 2(1 - b_2)\right)E[Q^2(G)], \qquad (16)$$
where $Q$ is given by (6), and
$$\tilde{U}(x) = x^4 + \frac{3(1 - b_2)}{1 - 3b_2}\left(Q^2(x) - x^4\right) - \frac{1}{4(1 - 3b_2)}\left((Q'(x))^3(x - m) - 8x^4\right). \qquad (17)$$
Proof. 
From the proof of Theorem 3.9 in [1], we can write
$$\Gamma(G, -L^{-1}G) - b(G) = \frac{1}{2\lambda}\left(L + 2(1 - b_2)\lambda\,\mathrm{Id}\right)\left(Q(G)\right), \qquad (18)$$
where $Q(x)$ is the quadratic polynomial given by (6). By the assumption,
$$Q(G) = \sum_{\substack{\kappa \in \Lambda \\ \kappa \le u\lambda}} J_\kappa Q(G).$$
Direct computations yield, together with (18), that
$$\begin{aligned}
E\left[\left(\Gamma(G, -L^{-1}G) - b(G)\right)^2\right]
&= \frac{1}{4\lambda^2}\,E\left[\left(L + 2(1 - b_2)\lambda\right)Q(G)\cdot\left(L + 2(1 - b_2)\lambda\right)Q(G)\right] \\
&= \frac{1}{4\lambda^2}\left\{E\left[LQ(G)\left(L + 2(1 - b_2)\lambda\right)Q(G)\right] + 2(1 - b_2)\lambda\,E\left[Q(G)\left(L + 2(1 - b_2)\lambda\right)Q(G)\right]\right\} \\
&= \frac{1}{4\lambda^2}\Bigg\{\sum_{\kappa \le u\lambda}(-\kappa)\left(2(1 - b_2)\lambda - \kappa\right)E\left[J_\kappa(Q(G))^2\right] + 2(1 - b_2)\lambda\sum_{\kappa \le u\lambda}\left(2(1 - b_2)\lambda - \kappa\right)E\left[J_\kappa(Q(G))^2\right]\Bigg\} \\
&= \frac{1}{4\lambda^2}\Bigg\{\sum_{\kappa \le u\lambda}(-\kappa)(u\lambda - \kappa)E\left[J_\kappa(Q(G))^2\right] + \left(2(1 - b_2) - u\right)\lambda\sum_{\kappa \le u\lambda}(-\kappa)E\left[J_\kappa(Q(G))^2\right] \\
&\qquad + 2(1 - b_2)\lambda\sum_{\kappa \le u\lambda}(u\lambda - \kappa)E\left[J_\kappa(Q(G))^2\right] + 2(1 - b_2)\left(2(1 - b_2) - u\right)\lambda^2\sum_{\kappa \le u\lambda}E\left[J_\kappa(Q(G))^2\right]\Bigg\} \\
&= \frac{1}{4\lambda^2}\Bigg\{\sum_{\kappa \le u\lambda}(-\kappa)(u\lambda - \kappa)E\left[J_\kappa(Q(G))^2\right] + \left(4(1 - b_2) - u\right)\lambda\sum_{\kappa \le u\lambda}(u\lambda - \kappa)E\left[J_\kappa(Q(G))^2\right] \\
&\qquad - \left(2(1 - b_2) - u\right)u\lambda^2\sum_{\kappa \le u\lambda}E\left[J_\kappa(Q(G))^2\right] + 2(1 - b_2)\left(2(1 - b_2) - u\right)\lambda^2\sum_{\kappa \le u\lambda}E\left[J_\kappa(Q(G))^2\right]\Bigg\}.
\end{aligned}$$
Since κ e λ for all κ Λ { 0 } , we have that
$$\sum_{\substack{\kappa \in \Lambda \\ \kappa \le u\lambda}}(-\kappa)(u\lambda - \kappa)E\left[J_\kappa(Q(G))^2\right] = \sum_{\substack{\kappa \in \Lambda \setminus \{0\} \\ \kappa \le u\lambda}}(-\kappa)(u\lambda - \kappa)E\left[J_\kappa(Q(G))^2\right] \le -e\lambda\sum_{\substack{\kappa \in \Lambda \\ \kappa \le u\lambda}}(u\lambda - \kappa)E\left[J_\kappa(Q(G))^2\right]. \qquad (19)$$
Using (19) yields that
$$\begin{aligned}
E\left[\left(\Gamma(G, -L^{-1}G) - b(G)\right)^2\right]
&\le \frac{1}{4\lambda^2}\Bigg\{4\lambda\left(1 - b_2 - \frac{u + e}{4}\right)\sum_{\kappa \le u\lambda}(u\lambda - \kappa)E\left[J_\kappa(Q(G))^2\right] \\
&\qquad - \left(2(1 - b_2) - u\right)u\lambda^2\sum_{\kappa \le u\lambda}E\left[J_\kappa(Q(G))^2\right] + 2(1 - b_2)\left(2(1 - b_2) - u\right)\lambda^2\sum_{\kappa \le u\lambda}E\left[J_\kappa(Q(G))^2\right]\Bigg\} \\
&= \frac{1}{4\lambda^2}\left\{4\lambda\left(1 - b_2 - \frac{u + e}{4}\right)E\left[Q(G)\left(L + u\lambda\,\mathrm{Id}\right)Q(G)\right] + \left(u - 2(1 - b_2)\right)u\lambda^2\,E[Q^2(G)] - 2(1 - b_2)\left(u - 2(1 - b_2)\right)\lambda^2\,E[Q^2(G)]\right\} \\
&= \frac{1}{4\lambda^2}\left\{4\lambda\left(1 - b_2 - \frac{u + e}{4}\right)E\left[Q(G)\left(L + 2(1 - b_2)\lambda\,\mathrm{Id}\right)Q(G)\right] + 4\left(\frac{1 - b_2}{2} - \frac{e}{4}\right)\left(u - 2(1 - b_2)\right)\lambda^2\,E[Q^2(G)]\right\} \\
&= 2\left(1 - b_2 - \frac{u + e}{4}\right)\frac{1 - 3b_2}{3}\,E[\tilde{U}(G)] + \left(\frac{1 - b_2}{2} - \frac{e}{4}\right)\left(u - 2(1 - b_2)\right)E[Q^2(G)]. \qquad (20)
\end{aligned}$$
Here, for the last equality in (20), we use the relation $E[U(G)] = \frac{1 - 3b_2}{3}\,E[\tilde{U}(G)]$ and the following equality, obtained from the proof of Theorem 3.9 in [1]:
$$E\left[Q(G)\left(L + u\lambda\,\mathrm{Id}\right)Q(G)\right] = E\left[Q(G)\left(L + 2(1 - b_2)\lambda\,\mathrm{Id}\right)Q(G)\right] + \left(u - 2(1 - b_2)\right)\lambda\,E[Q^2(G)] = 2\lambda\,E[U(G)] + \left(u - 2(1 - b_2)\right)\lambda\,E[Q^2(G)]. \qquad \square$$

4. Application to Three Polynomials

In this section, three examples are given in order to illustrate the estimate (16) with explicit expressions. For this, we consider the case where the random variable $F$ in Theorem 3 comes from eigenfunctions of a generator associated with a Pearson distribution. For simplicity, we only consider the one-dimensional case; analogous results in the finite or infinite dimensional case can be obtained in a similar way.

4.1. Ornstein–Uhlenbeck Generator

We consider the one-dimensional Ornstein–Uhlenbeck generator $L$, defined for any test function $f$ by
$$Lf(x) = f''(x) - x f'(x) \quad \text{for } x \in \mathbb{R},$$
acting on $L^2(\mathbb{R}, \mu)$, where
$$\mu(dx) = \frac{1}{\sqrt{2\pi}}\,e^{-x^2/2}\,dx.$$
Let us set $F = H_q(x)$, where $H_q$ denotes the Hermite polynomial of order $q$. Then, we have $F \in \mathrm{Ker}(L + q\,\mathrm{Id})$.
Corollary 1.
Let $\nu$ be a Gaussian distribution associated with the diffusion given by (2) with mean $m$ and $b_0 = \sigma^2$. If $F = H_q(x)$, $q \ge 2$, and $G = F + m$, then we have
$$E\left[\left(\Gamma(G, -L^{-1}G) - \sigma^2\right)^2\right] \le \frac{q - 1}{3q}\left(E[F^4] - 6\sigma^2 E[F^2] + 3\sigma^4\right). \qquad (21)$$
Proof. 
By the well-known product formula, the square of $F$ can be expressed as a linear combination of Hermite polynomials up to order $2q$:
$$H_q^2(x) = \sum_{r=0}^{q} r!\,\binom{q}{r}^2\, H_{2(q-r)}(x). \qquad (22)$$
This product formula (22) shows that the upper and lower chaos grades of $H_q$ are $u = 2$ and $e = 2q^{-1}$ for $q \ge 2$. Hence, Theorem 3 yields that
$$E\left[\left(\Gamma(G, -L^{-1}G) - \sigma^2\right)^2\right] \le \frac{2}{3}\left(1 - \frac{2 + 2q^{-1}}{4}\right)E[\tilde{U}(G)] = \frac{q - 1}{3q}\,E[\tilde{U}(G)]. \qquad (23)$$
When $b_2 = b_1 = 0$ and $b_0 = \sigma^2$, a direct computation yields that
$$U(x) = \frac{1}{3}\left((x - m)^4 - 6\sigma^2(x - m)^2 + 3\sigma^4\right),$$
so that, using $\tilde{U} = 3U$ in this case,
$$E[\tilde{U}(G)] = 3\,E[U(G)] = E[F^4] - 6\sigma^2 E[F^2] + 3\sigma^4. \qquad (24)$$
From (23) and (24), the proof of the result (21) is completed. □
Remark 1.
When $L$ is the infinite dimensional Ornstein–Uhlenbeck generator, we have $L I_q(f) = -q\,I_q(f)$, $q = 0, 1, \dots$. Hence, the spectrum of $L$ consists of zero and the negative integers, with the eigenfunctions represented by multiple stochastic integrals. The product formula for multiple stochastic integrals gives that
$$I_q(f)^2 = \sum_{r=0}^{q} r!\,\binom{q}{r}^2\, I_{2q-2r}(f \otimes_r f). \qquad (25)$$
This formula shows that the upper and lower chaos grades of $I_q(f)$ are still given by $u = 2$ and $e = 2q^{-1}$, as in the one-dimensional case. The upper bound in (1) can then be obtained from Theorem 3.
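The one-dimensional product formula (22) behind these grades can be verified numerically; the following is our own sketch using NumPy's probabilists' Hermite module:

```python
import numpy as np
from math import comb, factorial

def He(n, x):
    # probabilists' Hermite polynomial He_n evaluated at x
    c = np.zeros(n + 1)
    c[n] = 1.0
    return np.polynomial.hermite_e.hermeval(x, c)

q = 4
x = np.linspace(-2.0, 2.0, 7)
lhs = He(q, x) ** 2
rhs = sum(factorial(r) * comb(q, r) ** 2 * He(2 * (q - r), x)
          for r in range(q + 1))
assert np.allclose(lhs, rhs)  # H_q^2 = sum_r r! C(q,r)^2 H_{2(q-r)}
```

Since both sides are polynomials of degree $2q$, agreement at $2q + 1$ points already forces the identity.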
Remark 2.
The authors in [20] show that the domain of the Ornstein–Uhlenbeck operator on the $L^p(\mathbb{R}^d, \mu)$-space equals the weighted Sobolev space $W^{2,p}(\mathbb{R}^d, \mu)$, where $\mu$ is the corresponding invariant measure. Further references related to this domain can be found therein; see also [21,22].

4.2. Jacobi Generator

We consider the one-dimensional Jacobi generator $L_{\alpha,\beta}$, defined on $L^2([0,1], \mu_{\alpha,\beta})$ by
$$L_{\alpha,\beta}f(x) = \left(\alpha - (\alpha + \beta)x\right)f'(x) + x(1 - x)f''(x),$$
where
$$\mu_{\alpha,\beta}(dx) = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)}\,x^{\alpha-1}(1 - x)^{\beta-1}\,\mathbf{1}_{[0,1]}(x)\,dx.$$
Its spectrum $\Lambda$ is of the form
$$\Lambda = \left\{n(n + \alpha + \beta - 1) : n \in \mathbb{N}_0\right\}.$$
Set $\lambda_n = n(n + \alpha + \beta - 1)$, $n = 0, 1, \dots$. Then, we have that
$$L^2([0,1], \mu_{\alpha,\beta}) = \bigoplus_{n=0}^{\infty} \mathrm{Ker}(L_{\alpha,\beta} + \lambda_n\,\mathrm{Id}),$$
and the kernels are given by
$$\mathrm{Ker}(L_{\alpha,\beta} + \lambda_n\,\mathrm{Id}) = \left\{a\,P_n^{(\alpha-1,\beta-1)}(1 - 2x)\;;\; a \in \mathbb{R}\right\},$$
where $P_n^{(\alpha,\beta)}(x)$ denotes the $n$-th Jacobi polynomial
$$P_n^{(\alpha,\beta)}(x) = \frac{(-1)^n}{2^n\,n!}\,(1 - x)^{-\alpha}(1 + x)^{-\beta}\,\frac{d^n}{dx^n}\left((1 - x)^{\alpha+n}(1 + x)^{\beta+n}\right).$$
Recall that $_pF_q$ denotes the generalized hypergeometric function with $p$ numerator and $q$ denominator parameters, given by
$${}_pF_q\!\left(\begin{matrix} (a_p) \\ (b_q) \end{matrix}\;\middle|\; x\right) = \sum_{k=0}^{\infty} \frac{(a_1)_k (a_2)_k \cdots (a_p)_k}{(b_1)_k (b_2)_k \cdots (b_q)_k}\,\frac{x^k}{k!},$$
where the notation $(a_p)$ denotes the array of $p$ parameters $a_1, \dots, a_p$ and
$$(\alpha)_n = \frac{\Gamma(\alpha + n)}{\Gamma(\alpha)}.$$
Then, the Jacobi polynomials are given by
$$P_n^{(\alpha,\beta)}(x) = \frac{(\alpha + 1)_n}{n!}\;{}_2F_1\!\left(\begin{matrix} -n,\; \alpha + \beta + n + 1 \\ \alpha + 1 \end{matrix}\;\middle|\;\frac{1 - x}{2}\right). \qquad (28)$$
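Since the $_2F_1$ series in (28) terminates at $k = n$, it gives a direct way to evaluate $P_n^{(\alpha,\beta)}$; the following is our own minimal sketch of that evaluation:

```python
from math import prod

def poch(a, k):
    # Pochhammer symbol (a)_k = a(a+1)...(a+k-1), with (a)_0 = 1
    return prod(a + j for j in range(k))

def jacobi(n, a, b, x):
    """P_n^{(a,b)}(x) via the terminating 2F1 sum in (28)."""
    s = sum(poch(-n, k) * poch(a + b + n + 1, k)
            / (poch(a + 1, k) * poch(1, k)) * ((1 - x) / 2) ** k
            for k in range(n + 1))
    return poch(a + 1, n) / poch(1, n) * s  # prefactor (a+1)_n / n!

# degree-one check against P_1^{(a,b)}(x) = (a+1) + (a+b+2)(x-1)/2
print(jacobi(1, 2.0, 3.0, 0.5))
```

Note that `poch(1, k)` equals $k!$, so no separate factorial is needed.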

4.2.1. Beta Approximation

In this section, we consider the case where the target distribution ν is a Beta distribution.
Corollary 2.
Let $\nu$ be the Beta distribution associated with the diffusion given by (2) with
$$m = \frac{\alpha}{\alpha + \beta}, \qquad b_2 = -\frac{1}{\alpha + \beta}, \qquad b_1 = \frac{1}{\alpha + \beta} \qquad \text{and} \qquad b_0 = 0. \qquad (29)$$
Let $F = P_n^{(\alpha-1,\beta-1)}(1 - 2x)$, $n \ge 2$, and set
$$G = F + \frac{\alpha}{\alpha + \beta} \qquad \text{for } \alpha, \beta > 0.$$
Then, we have
$$E\left[\left(\Gamma(G, -L^{-1}G) - \frac{1}{\alpha + \beta}\,G(1 - G)\right)^2\right] \le \frac{2}{3}\left(1 + \frac{1}{\alpha + \beta} - \frac{2n(2n + \alpha + \beta - 1) + \alpha + \beta}{4n(n + \alpha + \beta - 1)}\right)\frac{\alpha + \beta + 3}{\alpha + \beta}\,E[\tilde{U}(G)] + \left(\frac{\alpha + \beta + 1}{2(\alpha + \beta)} - \frac{\alpha + \beta}{4n(n + \alpha + \beta - 1)}\right)\left(\frac{2(2n + \alpha + \beta - 1)}{n + \alpha + \beta - 1} - \frac{2(\alpha + \beta + 1)}{\alpha + \beta}\right)E[Q^2(G)], \qquad (30)$$
where the constants b 2 , b 1 , b 0 , and m in U ˜ ( x ) and Q ( x ) are given by (29).
Proof. 
The square of a Jacobi polynomial $P_n^{(\alpha-1,\beta-1)}(1-2x)$ can be expressed as a linear combination of Jacobi polynomials up to order $2n$ as follows:
$$\left(P_n^{(\alpha-1,\beta-1)}(1 - 2x)\right)^2 = \sum_{k=0}^{2n} c_{n,k}\,P_k^{(\alpha-1,\beta-1)}(1 - 2x), \qquad (31)$$
where the linearization coefficients $c_{n,k}$ are explicitly given in the paper [23]. This product formula (31) shows that the upper chaos grade $u$ and the lower chaos grade $e$ of $P_n^{(\alpha-1,\beta-1)}$ are given by
$$u = \frac{\lambda_{2n}}{\lambda_n} = \frac{2(2n + \alpha + \beta - 1)}{n + \alpha + \beta - 1}, \qquad (32)$$
$$e = \frac{\lambda_1}{\lambda_n} = \frac{\alpha + \beta}{n(n + \alpha + \beta - 1)}. \qquad (33)$$
Hence, from (32) and (33), together with $b_2 = -\frac{1}{\alpha + \beta}$, the upper bound (30) follows. □
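A quick numerical sanity check of (32) and (33) (our own sketch):

```python
def lam(n, a, b):
    # eigenvalue lambda_n = n(n + a + b - 1) of the Jacobi generator
    return n * (n + a + b - 1)

def jacobi_grades(n, a, b):
    u = lam(2 * n, a, b) / lam(n, a, b)  # upper chaos grade, as in (32)
    e = lam(1, a, b) / lam(n, a, b)      # lower chaos grade, as in (33)
    return u, e

u, e = jacobi_grades(2, 1.0, 1.0)   # n = 2, alpha = beta = 1
print(u, e)
```

For these parameters $u = 10/3 > 2$ and $e = 1/3 < 1$, illustrating that the Jacobi grades genuinely leave the Wiener-chaos regime $u = 2$.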

4.2.2. Normal Approximation

In this section, we consider the case where the target distribution $\nu$ is a standard Gaussian measure. Then, the diffusion coefficients are given by $b_2 = b_1 = 0$ and $b_0 = 1$. For simplicity, we deal with the second Jacobi polynomial $P_2^{(\alpha-1,\alpha-1)}(1 - 2x)$ for $\alpha > 0$, defined on $L^2([0,1], \mu_{\alpha,\alpha})$, i.e., the case $n = 2$ and $\alpha = \beta$ in (28). Let us set
$$F = \frac{P_2^{(\alpha-1,\alpha-1)}(1 - 2x)}{\left\|P_2^{(\alpha-1,\alpha-1)}(1 - 2\,\cdot)\right\|_{L^2([0,1],\mu_{\alpha,\alpha})}}. \qquad (34)$$
Then, it is obvious that $E[F] = 0$ and $E[F^2] = 1$. From (32) and (33), it follows that
$$u = \frac{\lambda_4}{\lambda_2} = \frac{2(2\alpha + 3)}{2\alpha + 1}, \qquad (35)$$
$$e = \frac{\lambda_1}{\lambda_2} = \frac{\alpha}{2\alpha + 1}. \qquad (36)$$
This implies that the upper chaos grade satisfies $u > 2$ and the lower chaos grade satisfies $e < 1$. By Theorem 3, the bound is given as follows:
$$E\left[\left(\Gamma(F, -L^{-1}F) - 1\right)^2\right] \le \frac{2}{3}\left(1 - \frac{u + e}{4}\right)\left(E[F^4] - 3\right) + \frac{(2 - e)(u - 2)}{4}\,\mathrm{Var}(F^2). \qquad (37)$$
Even when the fourth cumulant of F in the first term of (37) is zero, we may not be able to guarantee that F has a standard Gaussian distribution because of the second term in (37). This shows that the fourth moment theorem of Theorem 1 may not hold.
To overcome this problem, a new technique has been developed in [18] to show that the fourth moment theorem (Theorem 4 below) holds even though the upper chaos grade is greater than two. Let $F$ be a chaotic eigenfunction of $L$ with respect to the eigenvalue $\lambda$ with $E[F] = 0$ and $E[F^2] = 1$. We define a linear function $\phi(x) = mx + b$, where
$$m = \frac{1}{4\lambda}\sum_{e\lambda \le \kappa \le u\lambda}(2\lambda - \kappa)\,E\left[J_\kappa(F^2)^2\right], \qquad b = -\frac{1}{4\lambda^2}\sum_{e\lambda \le \kappa \le u\lambda}\kappa(2\lambda - \kappa)\,E\left[J_\kappa(F^2)^2\right]. \qquad (38)$$
Here, $J_\kappa(F^2)$ denotes the projection of $F^2$ onto $\mathrm{Ker}(L + \kappa\,\mathrm{Id})$.
Theorem 4.
If $m \ne 0$, then we have
$$E\left[\left(\Gamma(F, -L^{-1}F) - 1\right)^2\right] = \frac{c_{m,b} - 2}{6}\left(E[F^4] - 3\right),$$
where $c_{m,b}$ is the constant such that $\phi(c_{m,b}) = 0$.
Proof. 
Using the argument in the proof of Theorem 3 in [18] shows that, for any $x \in \mathbb{R}$,
$$E\left[\left(\Gamma(F, -L^{-1}F) - 1\right)^2\right] = 2m - xm + \phi(x). \qquad (39)$$
Since m 0 , there exists a constant c m , b , depending on m and b, such that ϕ ( c m , b ) = 0 . Also, the proof of Theorem 3 in [18] shows that
$$m = -\frac{1}{6}\left(E[F^4] - 3\,E[F^2]^2\right). \qquad (40)$$
Plugging $c_{m,b}$ into $x$ in (39) yields, together with (40), the identity in Theorem 4. □
When $F$ is given by (34), we use Theorem 4 to see under what conditions the fourth moment theorem holds. Define the linear function $\phi(x) = mx + b$, where the slope $m$ and the intercept $b$ are
$$m = 16\alpha^2(\alpha+1)^4(\alpha+2)^2\left\{(\alpha+1)(2\alpha-1)^2 - 6\times(24)^2(\alpha+2)^5(\alpha+3)^7(2\alpha+1)(2\alpha+3)(2\alpha+5)\right\}, \qquad (41)$$
$$b = 16\alpha^2(\alpha+1)^4(\alpha+2)^2\left\{12\times(24)^2(\alpha+2)^5(\alpha+3)^7(2\alpha+3)^2(2\alpha+5) - (\alpha+1)(2\alpha-1)^2(2\alpha+1)\right\}. \qquad (42)$$
Theorem 5.
Let $F$ be the chaotic random variable given by (34). If $\alpha > 0$, one has
$$E\left[\left(\Gamma(F, -L^{-1}F) - 1\right)^2\right] = \frac{c_\alpha}{6}\left(E[F^4] - 3\right). \qquad (43)$$
Here, $c_\alpha$ is a positive constant given by
$$c_\alpha = \frac{4\vartheta(\alpha) - (2\alpha-1)^3(\alpha+1)}{\vartheta(\alpha)(2\alpha+1) - (2\alpha-1)^2(\alpha+1)}, \qquad (44)$$
where
$$\vartheta(\alpha) = 6\times(24)^2(\alpha+2)^5(\alpha+3)^7(2\alpha+3)(2\alpha+5).$$
Proof. 
When $n = 2$ in (31), the linearization coefficients $c_{n,k}$ are given by
$$c_{2,0} = \frac{2\,(2\alpha+1)_2}{\left[(\alpha)_2\right]^2}, \qquad (45)$$
$$c_{2,2} = \frac{8\left[(\alpha+1)_1\right]^3}{(\alpha)_3\,(2\alpha-1)_2\,(2\alpha+1)_1}, \qquad (46)$$
$$c_{2,4} = \frac{24\,(\alpha+2)_2\,(\alpha)_2}{(\alpha)_4\left[(2\alpha+1)_2\right]^2}, \qquad (47)$$
and $c_{n,k} = 0$ if $k$ is odd, where $(x)_n = x(x+1)\cdots(x+n-1)$. Note that the general form $c_{n,0}$ of (45) is also given by
$$c_{n,0} = \frac{n!\,(n + 2\alpha - 1)_n}{\left[(\alpha)_n\right]^2}. \qquad (48)$$
Since $P_0^{(\alpha-1,\alpha-1)}(1 - 2x) = 1$, we have, from (31), that
$$\int_0^1 P_2^{(\alpha-1,\alpha-1)}(1 - 2x)^2\,\nu(dx) = c_{2,0}. \qquad (49)$$
By orthogonality, we have that
$$\int_0^1 P_4^{(\alpha-1,\alpha-1)}(1 - 2x)^2\,\nu(dx) = \sum_{k=0}^{8} c_{4,k}\int_0^1 P_k^{(\alpha-1,\alpha-1)}(1 - 2x)\,\nu(dx) = c_{4,0}. \qquad (50)$$
Since $c_{n,k} = 0$ for $k = 1, 3$ and
$$F = \frac{P_2^{(\alpha-1,\alpha-1)}(1 - 2x)}{\sqrt{c_{2,0}}},$$
the intercept $b$ of the linear function $\phi$ can be written, using (49) and (50), as
$$\begin{aligned}
4\lambda_2^2\, b &= -\lambda_2(2\lambda_2 - \lambda_2)\,E\left[J_2(F^2)^2\right] - \lambda_4(2\lambda_2 - \lambda_4)\,E\left[J_4(F^2)^2\right] \\
&= -\lambda_2(2\lambda_2 - \lambda_2)\left(\frac{c_{2,2}}{c_{2,0}}\right)^2\int_0^1 P_2^{(\alpha-1,\alpha-1)}(1 - 2x)^2\,\nu(dx) - \lambda_4(2\lambda_2 - \lambda_4)\left(\frac{c_{2,4}}{c_{2,0}}\right)^2\int_0^1 P_4^{(\alpha-1,\alpha-1)}(1 - 2x)^2\,\nu(dx) \\
&= -\lambda_2(2\lambda_2 - \lambda_2)\,\frac{c_{2,2}^2}{c_{2,0}} - \lambda_4(2\lambda_2 - \lambda_4)\left(\frac{c_{2,4}}{c_{2,0}}\right)^2 c_{4,0}.
\end{aligned} \qquad (51)$$
Using (45)–(48), the right-hand side of (51) can be computed as
$$4\lambda_2^2\, b = -64\lambda_2^2\,\alpha^2(\alpha+1)^5(\alpha+2)^2(2\alpha-1)^2(2\alpha+1) + 128\times(24)^3\,\alpha^2(\alpha+1)^4(\alpha+2)^7(\alpha+3)^7(2\alpha+1)^2(2\alpha+3)^2(2\alpha+5). \qquad (52)$$
Hence, we have
$$b = -16\alpha^2(\alpha+1)^5(\alpha+2)^2(2\alpha-1)^2(2\alpha+1) + 8\times(24)^3\alpha^2(\alpha+1)^4(\alpha+2)^7(\alpha+3)^7(2\alpha+3)^2(2\alpha+5) = 16\alpha^2(\alpha+1)^4(\alpha+2)^2\left\{12\times(24)^2(\alpha+2)^5(\alpha+3)^7(2\alpha+3)^2(2\alpha+5) - (\alpha+1)(2\alpha-1)^2(2\alpha+1)\right\}. \qquad (53)$$
From (53), we see that $b > 0$ for $\alpha > 0$. Using the same arguments as for $b$ shows that
$$4\lambda_2\, m = \lambda_2\,\frac{c_{2,2}^2}{c_{2,0}} + (2\lambda_2 - \lambda_4)\left(\frac{c_{2,4}}{c_{2,0}}\right)^2 c_{4,0} = 64\lambda_2\,\alpha^2(\alpha+1)^5(\alpha+2)^2(2\alpha-1)^2 - 32\times(24)^3\,\alpha^2(\alpha+1)^4(\alpha+2)^7(\alpha+3)^7(2\alpha+1)^2(2\alpha+3)(2\alpha+5). \qquad (54)$$
So,
$$m = 16\alpha^2(\alpha+1)^5(\alpha+2)^2(2\alpha-1)^2 - 4\times(24)^3\alpha^2(\alpha+1)^4(\alpha+2)^7(\alpha+3)^7(2\alpha+1)(2\alpha+3)(2\alpha+5) = 16\alpha^2(\alpha+1)^4(\alpha+2)^2\left\{(\alpha+1)(2\alpha-1)^2 - 6\times(24)^2(\alpha+2)^5(\alpha+3)^7(2\alpha+1)(2\alpha+3)(2\alpha+5)\right\}. \qquad (55)$$
Obviously, the right-hand side of (55) shows that $m < 0$ for $\alpha > 0$. Now, we find the point $x$ such that $\phi(x) = 0$. From (53) and (55), the solution of $\phi(x) = 0$ is given by
$$x = -\frac{b}{m} = \frac{2\vartheta(\alpha)(2\alpha+3) - (2\alpha-1)^2(\alpha+1)(2\alpha+1)}{\vartheta(\alpha)(2\alpha+1) - (2\alpha-1)^2(\alpha+1)}. \qquad (56)$$
Hence, it follows from (56) that
$$c_\alpha = -\frac{b}{m} - 2 = \frac{4\vartheta(\alpha) - (2\alpha-1)^3(\alpha+1)}{\vartheta(\alpha)(2\alpha+1) - (2\alpha-1)^2(\alpha+1)}. \qquad \square$$
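Using $\vartheta(\alpha)$ and $c_\alpha$ as reconstructed above (the exact powers in (44) are transcribed from this version of the text, so treat them as an assumption of ours), positivity of the constant in Theorem 5 is easy to probe numerically:

```python
def theta_fn(a):
    # the factor from (44): 6*(24)^2*(a+2)^5*(a+3)^7*(2a+3)*(2a+5)
    # (powers transcribed from the text; an assumption of this sketch)
    return 6 * 24**2 * (a + 2)**5 * (a + 3)**7 * (2*a + 3) * (2*a + 5)

def c_alpha(a):
    num = 4 * theta_fn(a) - (2*a - 1)**3 * (a + 1)
    den = theta_fn(a) * (2*a + 1) - (2*a - 1)**2 * (a + 1)
    return num / den

for a in (0.1, 0.5, 1.0, 5.0, 50.0):
    assert c_alpha(a) > 0   # the constant in Theorem 5 stays positive
```

At $\alpha = 1/2$ the factor $(2\alpha - 1)$ vanishes and $c_\alpha = 2$ exactly, recovering the Wiener-chaos-like coefficient.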
Remark 3.
In Theorem 5, we assume that $\alpha = \beta$ (the ultraspherical case). Under this assumption, the factor $_9F_8$, the generalized hypergeometric function with 9 numerator and 8 denominator parameters given in the paper [24], vanishes, so that Rahman’s formula is considerably simplified. This assumption allows us to find a point $x$ satisfying $\phi(x) = 0$ quickly and explicitly.

4.3. Romanovski–Routh Generator

We consider the one-dimensional generator $L_{p,q}$, acting on $L^2(\mathbb{R}, \mu_{p,q})$, where
$$\mu_{p,q}(dx) = \frac{\Gamma\left(p + \frac{iq}{2}\right)\Gamma\left(p - \frac{iq}{2}\right)}{2^{2(1-p)}\,\pi\,\Gamma(2p - 1)}\,(x^2 + 1)^{-p}\exp\left(-q\arctan(x)\right)dx,$$
defined by
$$L_{p,q}f(x) = \frac{x^2 + 1}{2(p - 1)}\,f''(x) - \left(x + \frac{q}{2(p - 1)}\right)f'(x).$$
Its spectrum $\Lambda$ is of the form
$$\Lambda = \left\{n\left(1 + \frac{n - 1}{2(1 - p)}\right) : n \in \mathbb{N}_0,\; n < p - \frac{1}{2}\right\}. \qquad (59)$$
First, note that the Romanovski–Routh polynomials $R_n^{(p,q)}(x)$ can be represented by complexified Jacobi polynomials:
$$R_n^{(p,q)}(x) = \frac{n!}{(2i)^n}\,P_n^{\left(-p + \frac{qi}{2},\; -p - \frac{qi}{2}\right)}(ix), \qquad (60)$$
where $P_n^{(\alpha,\beta)}(x)$, $n = 1, 2, \dots$, are the well-known Jacobi polynomials and $i = \sqrt{-1}$.
Corollary 3.
Let $\nu$ be the skew $t$-distribution with mean $-\frac{q}{2(p-1)}$ and diffusion coefficients given by
$$b_2 = \frac{1}{2(p - 1)} \qquad \text{and} \qquad b_0 = \frac{1}{2(p - 1)}.$$
Let $F = R_n^{(p,q)}(x)$, $p, q \in \mathbb{R}$, $n \ge 2$, and set
$$G = F - \frac{q}{2(p - 1)}.$$
If $p > \frac{2n + 1}{2}$, then we have
$$E\left[\left(\Gamma(G, -L^{-1}G) - \frac{G^2 + 1}{2(p - 1)}\right)^2\right] \le 2\left(\frac{2p - 3}{2(p - 1)} - \frac{u + e}{4}\right)\frac{2p - 5}{6(p - 1)}\,E[\tilde{U}(G)] + \left(\frac{2p - 3}{4(p - 1)} - \frac{e}{4}\right)\left(u - \frac{2p - 3}{p - 1}\right)E[Q^2(G)], \qquad (61)$$
where $u$ and $e$ are the upper and lower chaos grades of $F$, given in (63) and (64) below, and the constants $b_2, b_1, b_0$, and $m$ in $\tilde{U}(x)$ and $Q(x)$ are given by
$$b_2 = \frac{1}{2(p - 1)}, \qquad b_1 = 0, \qquad b_0 = \frac{1}{2(p - 1)} \qquad \text{and} \qquad m = -\frac{q}{2(p - 1)}.$$
Proof. 
By using (31) and (60), the square of a Romanovski–Routh polynomial $R_n^{(p,q)}(x)$ can be expressed as a linear combination of Romanovski–Routh polynomials up to order $2n$ as follows:
$$R_n^{(p,q)}(x)^2 = \frac{(n!)^2}{(2i)^{2n}}\,P_n^{\left(-p + \frac{qi}{2},\; -p - \frac{qi}{2}\right)}(ix)^2 = \frac{(n!)^2}{(-4)^n}\sum_{k=0}^{2n} c_{n,k}\,P_k^{\left(-p + \frac{qi}{2},\; -p - \frac{qi}{2}\right)}(ix) = \frac{(n!)^2}{(-4)^n}\sum_{k=0}^{2n} c_{n,k}\,\frac{(2i)^k}{k!}\,R_k^{(p,q)}(x), \qquad (62)$$
where the linearization coefficients $c_{n,k}$ are explicitly given in the paper [23]. By Proposition 4.2 in [1], the random variable $F$ is chaotic. This product formula (62) shows that the upper chaos grade $u$ and the lower chaos grade $e$ of $R_n^{(p,q)}$ are
$$u = \frac{\lambda_{2n}}{\lambda_n} = \frac{2n\left(1 + \frac{2n - 1}{2(1 - p)}\right)}{n\left(1 + \frac{n - 1}{2(1 - p)}\right)} = \frac{2(2n - 2p + 1)}{n - 2p + 1}, \qquad (63)$$
$$e = \frac{\lambda_1}{\lambda_n} = \frac{1}{n\left(1 + \frac{n - 1}{2(1 - p)}\right)} = \frac{2(1 - p)}{n(n - 2p + 1)}. \qquad (64)$$
Hence, it follows from (63) and (64), together with $b_2 = \frac{1}{2(p - 1)}$, that
$$E\left[\left(\Gamma(G, -L^{-1}G) - \frac{G^2 + 1}{2(p - 1)}\right)^2\right] \le 2\left(\frac{2p - 3}{2(p - 1)} - \frac{u + e}{4}\right)\frac{2p - 5}{6(p - 1)}\,E[\tilde{U}(G)] + \left(\frac{2p - 3}{4(p - 1)} - \frac{e}{4}\right)\left(u - \frac{2p - 3}{p - 1}\right)E[Q^2(G)],$$
where the constants $b_2, b_1, b_0$, and $m$ in $\tilde{U}(x)$ and $Q(x)$ are given by
$$b_2 = \frac{1}{2(p - 1)}, \qquad b_1 = 0, \qquad b_0 = \frac{1}{2(p - 1)} \qquad \text{and} \qquad m = -\frac{q}{2(p - 1)}. \qquad \square$$
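As with the Jacobi case, the grade formulas can be sanity-checked numerically from the spectrum (59) (our own sketch; the lower grade is computed directly as the ratio $\lambda_1/\lambda_n$):

```python
def lam_rr(n, p):
    # eigenvalue lambda_n = n*(1 + (n-1)/(2*(1-p))) from (59)
    return n * (1 + (n - 1) / (2 * (1 - p)))

n, p = 2, 10.0          # p > (2n+1)/2, so lambda_1, ..., lambda_2n > 0
u = lam_rr(2 * n, p) / lam_rr(n, p)  # matches 2(2n-2p+1)/(n-2p+1) in (63)
e = lam_rr(1, p) / lam_rr(n, p)      # lower chaos grade lambda_1/lambda_n
print(u, e)
```

For these parameters $u = 30/17$ and $e = 9/17$, agreeing with the closed forms in (63) and (64).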

5. Conclusions and Future Works

The fact that bound (8), obtained from the fourth moment theorem in the case $F = I_q(f)$, can be improved to bound (1) is the motivation for the research in this paper. We needed to develop a new method for obtaining a sharper bound than the one given in [1]. For this, in the case where the square of a random variable $F$, coming from a Markov triple structure, can be expressed as a sum of eigenfunctions, we find the largest number, excluding zero, bounding from below the eigenvalues corresponding to the eigenfunctions in this sum.
Future works will be carried out in two directions: (1) we will develop a new technique showing that a fourth moment theorem like Theorem 4 holds even when the target distribution is not Gaussian; (2) we will study how the second term of the bound (16) in Theorem 3 can be removed even though the chaos grade is greater than two.

Author Contributions

Conceptualization, Y.-T.K.; methodology, H.-S.P.; writing—original draft, H.-S.P.; writing—review and editing, Y.-T.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Hallym University Research Fund (HRF-202302-007).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors would like to express their deepest gratitude to two anonymous referees and Editors for their valuable suggestions and comments, which greatly improved the previous version of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bourguin, S.; Campese, S.; Leonenko, N.; Taqqu, M.S. Four moment theorems on Markov chaos. Ann. Probab. 2019, 47, 1417–1446. [Google Scholar] [CrossRef]
  2. Nualart, D.; Peccati, G. Central limit theorems for sequences of multiple stochastic integrals. Ann. Probab. 2005, 33, 177–193. [Google Scholar] [CrossRef]
  3. Nualart, D.; Ortiz-Latorre, S. Central limit theorems for multiple stochastic integrals and Malliavin calculus. Stoch. Process. Appl. 2008, 118, 614–628. [Google Scholar] [CrossRef]
  4. Nourdin, I.; Peccati, G. Stein’s method on Wiener Chaos. Probab. Theory Related Fields 2009, 145, 75–118. [Google Scholar] [CrossRef]
  5. Nourdin, I.; Peccati, G. Normal Approximations with Malliavin Calculus: From Stein’s Method to Universality; Cambridge Tracts in Mathematica; Cambridge University Press: Cambridge, UK, 2012; Volume 192. [Google Scholar]
  6. Nualart, D. Malliavin Calculus and Related Topics, 2nd ed.; Probability and its Applications; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  7. Nualart, D. Malliavin Calculus and Its Applications; Regional Conference Series in Mathematics, Number 110; American Mathematical Society: Providence, RI, USA, 2008. [Google Scholar]
  8. Chen, L.H.Y.; Goldstein, L.; Shao, Q.-M. Normal Approximation by Stein’s Method; Probability and its Applications (New York); Springer: Heidelberg, Germany, 2011. [Google Scholar]
  9. Stein, C. A bound for the error in the normal approximation to the distribution of a sum of dependent random variables. In Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability, Volume 2: Probability Theory; University of California Press: Berkeley, CA, USA, 1972; pp. 583–602. [Google Scholar]
  10. Stein, C. Approximate Computation of Expectations; IMS: Hayward, CA, USA, 1986; MR882007. [Google Scholar]
  11. Kim, Y.T.; Park, H.S. An Edgeworth expansion for functionals of Gaussian fields and its applications. Stoch. Process. Appl. 2018, 44, 312–320. [Google Scholar]
  12. Nourdin, I. Lectures on Gaussian approximations with Malliavin calculus. In Séminaire de Probabilités XLV; Lecture Notes in Mathematics 2078; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar] [CrossRef]
  13. Nourdin, I.; Peccati, G. Stein’s method and exact Berry-Esseen asymptotics for functionals of Gaussian fields. Ann. Probab. 2009, 37, 2231–2261. [Google Scholar] [CrossRef]
  14. Nourdin, I.; Peccati, G. Stein’s method meets Malliavin calculus: A short survey with new estimates. In Recent Development in Stochastic Dynamics and Stochastic Analysis; Interdisciplinary Mathematical Sciences; World Scientific Publishing: Hackensack, NJ, USA, 2010; Volume 8, pp. 207–236. [Google Scholar]
  15. Nourdin, I.; Peccati, G. The optimal fourth moment theorem. Proc. Am. Math. Soc. 2015, 143, 3123–3133. [Google Scholar] [CrossRef]
  16. Ledoux, M. Chaos of a Markov operator and the fourth moment condition. Ann. Probab. 2012, 40, 2439–2459. [Google Scholar] [CrossRef]
  17. Azmoodeh, E.; Campese, S.; Poly, G. Fourth moment theorems for Markov diffusion generators. J. Funct. Anal. 2014, 266, 2341–2359. [Google Scholar] [CrossRef]
  18. Kim, Y.T.; Park, H.S. Normal approximation when a chaos grade is greater than two. Stat. Probab. Lett. 2022, 44, 312–320. [Google Scholar] [CrossRef]
  19. Bakry, D.; Gentil, I.; Ledoux, M. Analysis and Geometry of Markov Diffusion Operators; Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]; Springer: Cham, Switzerland, 2014; Volume 348, MR3155209. [Google Scholar]
  20. Metafune, G.; Prüss, J.; Rhandi, A.; Schnaubelt, R. The domain of the Ornstein–Uhlenbeck operator on an Lp-space with invariant measure. Ann. Sc. Norm. Super. Pisa Cl. Sci. 2002, 1, 471–485. [Google Scholar]
  21. Fornaro, S.; Metafune, G.; Pallara, D.; Schnaubelt, R. Lp-spectrum of degenerate hypoelliptic Ornstein–Uhlenbeck operators. J. Funct. Anal. 2021, 280, 108807. [Google Scholar] [CrossRef]
  22. Lunardi, A.; Metafune, G.; Pallara, D. The Ornstein–Uhlenbeck semigroup in finite dimension. Philos. Trans. R. Soc. A 2020, 378, 20190620. [Google Scholar] [CrossRef] [PubMed]
  23. Chaggara, H.; Koepf, W. On linearization coefficients of Jacobi polynomials. Appl. Math. Lett. 2010, 23, 609–614. [Google Scholar] [CrossRef]
  24. Rahman, M. A non-negative representation of the linearization coefficients of the product of Jacobi polynomials. Canad. J. Math. 1981, 33, 915–928. [Google Scholar] [CrossRef]

Share and Cite

MDPI and ACS Style

Kim, Y.-T.; Park, H.-S. Improved Bound of Four Moment Theorem and Its Application to Orthogonal Polynomials Associated with Laws. Axioms 2023, 12, 1092. https://doi.org/10.3390/axioms12121092
