Article

Partial Derivatives Estimation of Multivariate Variance Function in Heteroscedastic Model via Wavelet Method

School of Mathematics and Computational Science, Guilin University of Electronic Technology, Guilin 541004, China
* Author to whom correspondence should be addressed.
Axioms 2024, 13(1), 69; https://doi.org/10.3390/axioms13010069
Submission received: 28 November 2023 / Revised: 14 January 2024 / Accepted: 18 January 2024 / Published: 20 January 2024
(This article belongs to the Special Issue Mathematical and Statistical Methods and Their Applications)

Abstract

For derivative function estimation, conventional research focuses only on the derivatives of one-dimensional functions. This paper considers the estimation of the partial derivatives of a multivariate variance function in a heteroscedastic model. A wavelet estimator of the partial derivatives of the variance function is proposed, and its convergence rates under different error measures are discussed. It turns out that the strong convergence rate of the wavelet estimator matches the optimal uniform almost sure convergence rate for nonparametric function problems.

1. Introduction

This paper considers the following heteroscedastic model:
$$Y_i = g(X_i) + f(X_i)\,U_i, \quad i \in \{1,\ldots,n\}. \tag{1}$$
In this model, $(X_1,Y_1),\ldots,(X_n,Y_n)$ are independent and identically distributed random vectors. The functions $g(x)$ and $f(x)$ are both defined on $[a,b]^d$. $U_1,\ldots,U_n$ are identically distributed random variables satisfying $E[U_i]=0$ and $\mathrm{Var}[U_i]=1$. Furthermore, the random vector $X_i$ and the random variable $U_i$ are uncorrelated for any $i\in\{1,\ldots,n\}$. This paper is devoted to estimating the partial derivatives $(\partial^m r)(x)$ of the variance function $r(x) := f^2(x)$ from the observed data $(X_1,Y_1),\ldots,(X_n,Y_n)$. The partial derivative $(\partial^m r)(x)$ is defined by
$$(\partial^m r)(x) = \frac{\partial^{|m|} r}{\partial x_1^{m_1}\cdots\partial x_d^{m_d}}(x_1,\ldots,x_d),$$
with $m = (m_1,\ldots,m_d) \in \mathbb{N}^d$ and $|m| = \sum_{i=1}^{d} m_i$.
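To make the setting concrete, the following sketch simulates observations from model (1) in dimension $d = 2$; the particular mean $g$, volatility $f$, design density, and noise law are illustrative choices of ours, not specifications from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5000, 2

# Illustrative choices (not from the paper): a uniform design on
# [a, b]^d = [0, 1]^2, a known mean g, and a smooth variance r = f^2.
X = rng.uniform(0.0, 1.0, size=(n, d))
g = lambda x: np.sin(2 * np.pi * x[:, 0]) + x[:, 1]
f = lambda x: 0.5 + 0.25 * np.cos(np.pi * x[:, 0]) * x[:, 1]
U = rng.standard_normal(n)          # E[U_i] = 0, Var[U_i] = 1

Y = g(X) + f(X) * U                 # observations from model (1)
```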
In practical applications, the heteroscedastic model is widely used for monitoring signal-to-noise ratios in quality control [1,2], measuring the reliability of time series prediction [3], evaluating volatility or risk in financial markets [4], and so on. Hence, many significant results have been obtained by [5,6,7,8] and others. For this heteroscedastic model, Fan and Yao [9] propose a residual-based estimator of the variance function and establish its asymptotic normality. A class of difference-based kernel estimators of the variance function is constructed by [10]; moreover, the asymptotic convergence rates and the optimal bandwidth of these kernel estimators are discussed. Wang et al. [11] consider the minimax convergence rates of a kernel estimator under pointwise squared error and globally integrated mean squared error. The optimal estimation of a variance function with random design is discussed by [12]. Zaoui [13] studies variance function estimation with model selection aggregation and convex aggregation.
Derivative estimation plays a crucial role in nonparametric statistics, big data processing, and other practical applications. For example, some companies predict the profit growth rate when research and development investment increases, and many financial and fund institutions evaluate the volatility of stock market prices. Many important and interesting results on derivative estimation have been obtained using different methods. Zhou and Wolfe [14] propose a spline derivative estimator and discuss its asymptotic normality and variance properties. A spatially adaptive derivative estimator is constructed by [15]. Chaubey et al. [16] consider the upper bound over $L^p$-loss for wavelet estimators of density derivative functions. A convergence rate over the mean integrated error of a derivative estimator for mixing sequences is proved by [17]. For the estimation problem (1), the estimation of the derivatives of the variance function via the wavelet method is studied by [18]. However, it should be pointed out that all of the above results focus on the derivatives of one-dimensional functions; the estimation of the partial derivatives of a multivariate variance function is still lacking. Hence, in this paper, we construct a partial derivatives estimator using the wavelet method and discuss its convergence rates under different mild conditions.
The structure of this paper is given as follows. Section 2 specifies some mild hypotheses for the estimation model (1), and constructs a wavelet estimator of the partial derivative function. Two important auxiliary results of the wavelet estimator are proved in Section 3. The estimation errors of the wavelet estimator under different assumptions are discussed in Section 4.

2. Wavelet Estimator

In this section, we will give some hypotheses of the estimation problem (1), and construct a partial derivatives estimator using the wavelet method. For the estimation model (1), the following mild assumptions are proposed, which are used in the later discussion.
A1: For the partial derivative $(\partial^m r)(x)$ with $m = (m_1,\ldots,m_d) \in \mathbb{N}^d$: for any $\theta = (\theta_1,\ldots,\theta_d) \in \mathbb{N}^d$ with $\theta_i \le m_i$, the partial derivative function satisfies $(\partial^\theta r)(x) = 0$ when $x_i \to a$ or $x_i \to b$.
A2: The function $g(x)$ is known and bounded, i.e., there exists a positive constant $c_1$ such that $|g(x)| \le c_1$.
A3: The density function $h(x)$ of the random vector $X$ satisfies $c_2 \le h(x) \le c_3$ for all $x \in [a,b]^d$, where $c_2$ and $c_3$ are two positive constants.
A4: The random variables $Y_1, Y_2, \ldots, Y_n$ take values in $[c_4, c_5]$, where $c_4$ and $c_5$ are two constants.
In order to construct a wavelet estimator, some basic wavelet theory is given in the following [19,20]. Let $\Phi$ be a scaling function and $\Psi$ a wavelet function such that
$$\big\{\Phi_{\tau,k},\ \Psi_{j,k,u};\ j \ge \tau,\ u \in \{1,\ldots,2^d-1\},\ k \in \mathbb{Z}^d\big\}$$
constitutes an orthonormal basis of $L^2(\mathbb{R}^d)$, where $\tau$ is a positive integer and
$$\Phi_{j,k}(x) = 2^{\frac{jd}{2}}\,\Phi(2^jx_1-k_1,\ldots,2^jx_d-k_d), \qquad \Psi_{j,k,u}(x) = 2^{\frac{jd}{2}}\,\Psi_u(2^jx_1-k_1,\ldots,2^jx_d-k_d).$$
Then, for any integer $j_0 \ge \tau$, a function $f(x) \in L^2([a,b]^d)$ can be expanded into a wavelet series as
$$f(x) = \sum_{k\in\Lambda_{j_0}}\alpha_{j_0,k}\,\Phi_{j_0,k}(x) + \sum_{j=j_0}^{\infty}\sum_{u=1}^{2^d-1}\sum_{k\in\Lambda_j}\beta_{j,k,u}\,\Psi_{j,k,u}(x), \tag{2}$$
where $\alpha_{j_0,k} = \int_{[a,b]^d} f(x)\,\Phi_{j_0,k}(x)\,dx$, $\beta_{j,k,u} = \int_{[a,b]^d} f(x)\,\Psi_{j,k,u}(x)\,dx$, and the cardinality of $\Lambda_j$ satisfies $|\Lambda_j| \sim 2^{jd}$. In this paper, we choose compactly supported wavelets, such as the Daubechies wavelets [21]. In addition, this paper adopts the following notation: $A \lesssim B$ denotes $A \le cB$ for some constant $c > 0$; $A \gtrsim B$ means $B \lesssim A$; and $A \sim B$ stands for both $A \lesssim B$ and $B \lesssim A$. For any $x \in \mathbb{R}^d$, $\|x\| := \sum_{i=1}^{d}|x_i|$.
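As a concrete instance of such a basis, the sketch below (assuming the PyWavelets package; "db6" is our illustrative choice) tabulates a one-dimensional Daubechies scaling function and evaluates a dilated translate $\Phi_{j,k}$; in dimension $d$, the basis elements above are built from tensor products of such univariate functions.

```python
import numpy as np
import pywt

# Approximate the Daubechies-6 scaling function phi and wavelet psi on
# a dyadic grid via the cascade algorithm.
wavelet = pywt.Wavelet("db6")
phi, psi, t = wavelet.wavefun(level=10)

# Phi_{j,k}(x) = 2^{j/2} * phi(2^j x - k), evaluated by interpolation
# on the tabulated grid (one-dimensional case).
def Phi_jk(x, j, k):
    return 2 ** (j / 2) * np.interp(2 ** j * x - k, t, phi, left=0.0, right=0.0)
```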
Now we define a wavelet estimator of the partial derivative function $(\partial^m r)(x)$ by
$$\widehat{(\partial^m r)}(x) := \sum_{k\in\Lambda_j}\hat{\alpha}_{j,k}\,\Phi_{j,k}(x). \tag{3}$$
In this definition,
$$\hat{\alpha}_{j,k} := \frac{(-1)^{|m|}}{n}\sum_{i=1}^{n}\frac{Y_i^2}{h(X_i)}(\partial^m\Phi_{j,k})(X_i) - (-1)^{|m|}\int_{[a,b]^d}g^2(x)\,(\partial^m\Phi_{j,k})(x)\,dx \tag{4}$$
and
$$(\partial^m\Phi_{j,k})(x) = \frac{\partial^{|m|}\Phi_{j,k}}{\partial x_1^{m_1}\cdots\partial x_d^{m_d}}(x_1,\ldots,x_d) = 2^{\frac{jd}{2}}\cdot 2^{j|m|}\,(\partial^m\Phi)(2^jx_1-k_1,\ldots,2^jx_d-k_d).$$
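A minimal one-dimensional sketch ($d = 1$, $|m| = 1$) of the coefficient estimator $\hat{\alpha}_{j,k}$ in (4) follows; the finite-difference derivative of the scaling function, the helper names, and the Riemann-sum quadrature are our illustrative choices, not the paper's.

```python
import numpy as np
import pywt

# Tabulate phi and a finite-difference approximation of its derivative.
phi, _, t = pywt.Wavelet("db6").wavefun(level=10)
dphi = np.gradient(phi, t)

def dPhi_jk(x, j, k):
    """(d/dx) Phi_{j,k}(x) = 2^{j/2} * 2^j * phi'(2^j x - k) for d = 1, |m| = 1."""
    return 2 ** (j / 2) * 2 ** j * np.interp(2 ** j * x - k, t, dphi, left=0.0, right=0.0)

def alpha_hat(j, k, X, Y, h, g, a=0.0, b=1.0, n_grid=4000):
    """Coefficient estimator alpha_hat_{j,k} of (4) for d = 1, |m| = 1."""
    sign = -1.0                                            # (-1)^{|m|} with |m| = 1
    empirical = sign * np.mean(Y ** 2 / h(X) * dPhi_jk(X, j, k))
    xg = np.linspace(a, b, n_grid)                         # Riemann sum for the correction
    correction = sign * np.sum(g(xg) ** 2 * dPhi_jk(xg, j, k)) * (b - a) / n_grid
    return empirical - correction
```

Summing $\hat{\alpha}_{j,k}\,\Phi_{j,k}(x)$ over $k \in \Lambda_j$ then yields the estimator (3).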

3. Two Lemmas

This section provides two important lemmas, which are used to prove the main theorem in a later section. According to the following lemma, it is easy to see that our wavelet estimator $\widehat{(\partial^m r)}(x)$ is unbiased.
Lemma 1.
For the model (1) with A1,
$$E[\hat{\alpha}_{j,k}] = \alpha_{j,k}.$$
Proof. 
By the definition of $\hat{\alpha}_{j,k}$ and the fact that the random vectors $(X_i, Y_i)$ are identically distributed,
$$E[\hat{\alpha}_{j,k}] = E\Big[\frac{(-1)^{|m|}}{n}\sum_{i=1}^{n}\frac{Y_i^2}{h(X_i)}(\partial^m\Phi_{j,k})(X_i)\Big] - (-1)^{|m|}\int_{[a,b]^d}g^2(x)(\partial^m\Phi_{j,k})(x)\,dx = E\Big[(-1)^{|m|}\frac{Y_1^2}{h(X_1)}(\partial^m\Phi_{j,k})(X_1)\Big] - (-1)^{|m|}\int_{[a,b]^d}g^2(x)(\partial^m\Phi_{j,k})(x)\,dx.$$
Then, it follows from (1) that
$$E[\hat{\alpha}_{j,k}] = E\Big[(-1)^{|m|}\frac{r(X_1)}{h(X_1)}U_1^2(\partial^m\Phi_{j,k})(X_1)\Big] + 2E\Big[(-1)^{|m|}\frac{f(X_1)g(X_1)}{h(X_1)}U_1(\partial^m\Phi_{j,k})(X_1)\Big] + E\Big[(-1)^{|m|}\frac{g^2(X_1)}{h(X_1)}(\partial^m\Phi_{j,k})(X_1)\Big] - (-1)^{|m|}\int_{[a,b]^d}g^2(x)(\partial^m\Phi_{j,k})(x)\,dx.$$
Note that the conditions $E[U_1]=0$ and $\mathrm{Var}[U_1]=1$ imply $E[U_1^2]=1$. Furthermore, using the assumption of no correlation between $U_i$ and $X_i$, one gets
$$E\Big[(-1)^{|m|}\frac{r(X_1)}{h(X_1)}U_1^2(\partial^m\Phi_{j,k})(X_1)\Big] = E\Big[(-1)^{|m|}\frac{r(X_1)}{h(X_1)}(\partial^m\Phi_{j,k})(X_1)\Big]$$
and
$$E\Big[(-1)^{|m|}\frac{f(X_1)g(X_1)}{h(X_1)}U_1(\partial^m\Phi_{j,k})(X_1)\Big] = 0.$$
In addition, because the density function of the random vector $X$ is $h(x)$, the following equation can be obtained easily:
$$E\Big[(-1)^{|m|}\frac{g^2(X_1)}{h(X_1)}(\partial^m\Phi_{j,k})(X_1)\Big] = (-1)^{|m|}\int_{[a,b]^d}g^2(x)(\partial^m\Phi_{j,k})(x)\,dx.$$
According to the above results, one has
$$E[\hat{\alpha}_{j,k}] = E\Big[(-1)^{|m|}\frac{r(X_1)}{h(X_1)}(\partial^m\Phi_{j,k})(X_1)\Big].$$
Then, integrating by parts $|m|$ times, where the boundary terms vanish by A1, it is easy to see that
$$E[\hat{\alpha}_{j,k}] = (-1)^{|m|}\int_{[a,b]^d}r(x)(\partial^m\Phi_{j,k})(x)\,dx = \int_{[a,b]^d}(\partial^m r)(x)\,\Phi_{j,k}(x)\,dx = \alpha_{j,k}. \qquad\Box$$
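As an informal numerical check of Lemma 1 in the one-dimensional case with $|m| = 1$, one can average $\hat{\alpha}_{j,k}$ over many simulated samples and compare it with $\alpha_{j,k} = \int (\partial^m r)(x)\,\Phi_{j,k}(x)\,dx$. All concrete choices below ($g \equiv 0$, a uniform design, Gaussian noise, the db6 wavelet) are ours; Gaussian noise is used only for convenience (A4 is not needed for Lemma 1), and the support of $\Phi_{j,k}$ is kept inside $(0,1)$ so that the boundary terms in the integration by parts vanish trivially.

```python
import numpy as np
import pywt

rng = np.random.default_rng(1)
phi, _, t = pywt.Wavelet("db6").wavefun(level=10)
dphi = np.gradient(phi, t)                 # finite-difference phi'

j, k, n, reps = 4, 2, 5000, 400            # supp Phi_{4,2} = [2/16, 13/16] lies inside (0, 1)
f = lambda x: 0.5 + 0.5 * x                # r(x) = f(x)^2, so r'(x) = 2 * 0.5 * f(x) = f(x)
h = lambda x: np.ones_like(x)              # uniform design density on [0, 1]

def Phi_jk(x):
    return 2 ** (j / 2) * np.interp(2 ** j * x - k, t, phi, left=0.0, right=0.0)

def dPhi_jk(x):
    return 2 ** (j / 2) * 2 ** j * np.interp(2 ** j * x - k, t, dphi, left=0.0, right=0.0)

est = []
for _ in range(reps):
    X = rng.uniform(0.0, 1.0, n)
    Y = f(X) * rng.standard_normal(n)                   # model (1) with g = 0
    est.append(-np.mean(Y ** 2 / h(X) * dPhi_jk(X)))    # alpha_hat, (-1)^{|m|} = -1

xg = np.linspace(0.0, 1.0, 8000)
alpha = np.mean(f(xg) * Phi_jk(xg))        # alpha_{j,k} = int r'(x) Phi_{j,k}(x) dx on [0, 1]
print(np.mean(est), alpha)                 # the two averages should roughly agree
```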
For nonparametric estimation, wavelet estimators can be viewed as generalized kernel estimators. For any $u, v \in \mathbb{R}^d$, we introduce the kernel $K(u,v) := \sum_{k\in\mathbb{Z}^d}\Phi(u-k)\Phi(v-k)$. Now, we give some important properties of this kernel function, which will be used in the later discussion. Furthermore, we define
$$K^{(m)}(u,v) := \sum_{k\in\mathbb{Z}^d}\Phi(u-k)\,(\partial_v^m\Phi)(v-k),$$
where $K^{(m)}(u,v) := (\partial_v^m K)(u,v)$ denotes the $m$-th order partial derivative of $K(u,v)$ with respect to $v$.
Let the scaling function $\Phi$ be $\lambda$-regular [20,22,23], i.e., $\Phi \in C^\lambda$ and $|D^\alpha\Phi(x)| \le c_l(1+\|x\|)^{-l}$ for any integer $l \ge 1$, any $\alpha=(\alpha_1,\ldots,\alpha_d)\in\mathbb{N}^d$ with $|\alpha| = \sum_{i=1}^{d}\alpha_i \le \lambda$, and any $x \in \mathbb{R}^d$. Then, there exists a positive constant $C_d$ such that
$$|K^{(m)}(u,v)| \le \frac{C_d}{(1+\|u-v\|)^{d+1}}. \tag{5}$$
Meanwhile, one can obtain that
$$|K^{(m)}(u,y) - K^{(m)}(v,y)| \lesssim \|u-v\|. \tag{6}$$
For more properties and details of kernel functions, one can see [24,25].
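A small one-dimensional sketch of this kernel construction follows (the helper names are ours); because the Daubechies scaling function is compactly supported, only finitely many translates $k$ contribute, and the computed values decay as $|u - v|$ grows, consistent with (5).

```python
import numpy as np
import pywt

phi, _, t = pywt.Wavelet("db6").wavefun(level=10)   # supp phi = [0, 11] for db6
dphi = np.gradient(phi, t)                          # finite-difference phi'

def K_m(u, v, support=(0.0, 11.0)):
    """Approximate K^{(1)}(u, v) = sum_k phi(u - k) * phi'(v - k) for d = 1."""
    lo = int(np.floor(min(u, v) - support[1]))
    hi = int(np.ceil(max(u, v) - support[0]))
    return sum(np.interp(u - k, t, phi, left=0.0, right=0.0)
               * np.interp(v - k, t, dphi, left=0.0, right=0.0)
               for k in range(lo, hi + 1))
```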
Lemma 2.
For the model (1) under conditions A3 and A4, let the wavelet estimator $\widehat{(\partial^m r)}(x)$ be defined by (3) with $2^j \lesssim (n/\ln n)^{1/d}$. Then, there exists a constant $\kappa > 0$ such that
$$P\Big(\big|\widehat{(\partial^m r)}(x) - E\big[\widehat{(\partial^m r)}(x)\big]\big| \ge \kappa\,\eta_n\Big) \lesssim n^{-z(\kappa)},$$
where $z(\kappa) = \frac{\kappa^2}{2(1+\kappa/3)}$ and $\eta_n \sim 2^{j(\frac{d}{2}+|m|)}\sqrt{\frac{\ln n}{n}}$.
Proof. 
According to the definition (3) of $\widehat{(\partial^m r)}(x)$,
$$\begin{aligned}
\widehat{(\partial^m r)}(x) - E\big[\widehat{(\partial^m r)}(x)\big] &= \sum_{k\in\Lambda_j}\big(\hat{\alpha}_{j,k} - E[\hat{\alpha}_{j,k}]\big)\,\Phi_{j,k}(x)\\
&= \frac{(-1)^{|m|}}{n}\sum_{k\in\Lambda_j}\Big(\sum_{i=1}^{n}\frac{Y_i^2}{h(X_i)}(\partial^m\Phi_{j,k})(X_i) - E\Big[\sum_{i=1}^{n}\frac{Y_i^2}{h(X_i)}(\partial^m\Phi_{j,k})(X_i)\Big]\Big)\Phi_{j,k}(x)\\
&= \frac{(-1)^{|m|}}{n}\sum_{i=1}^{n}\Big(\frac{Y_i^2}{h(X_i)}K_j^{(m)}(x,X_i) - E\Big[\frac{Y_i^2}{h(X_i)}K_j^{(m)}(x,X_i)\Big]\Big) = \frac{(-1)^{|m|}}{n}\sum_{i=1}^{n}B_i,
\end{aligned}$$
where $B_i := \frac{Y_i^2}{h(X_i)}K_j^{(m)}(x,X_i) - E\big[\frac{Y_i^2}{h(X_i)}K_j^{(m)}(x,X_i)\big]$ and
$$K_j^{(m)}(x,X_i) := \sum_{k\in\Lambda_j}\Phi_{j,k}(x)\,(\partial^m\Phi_{j,k})(X_i) = \sum_{k\in\Lambda_j}2^{\frac{jd}{2}}\Phi(2^jx_1-k_1,\ldots,2^jx_d-k_d)\cdot 2^{j(\frac{d}{2}+|m|)}(\partial^m\Phi)(2^jX_{i1}-k_1,\ldots,2^jX_{id}-k_d) = 2^{j(d+|m|)}K^{(m)}(2^jx, 2^jX_i).$$
Then we can obtain that
$$P\Big(\big|\widehat{(\partial^m r)}(x) - E\big[\widehat{(\partial^m r)}(x)\big]\big| \ge \kappa\eta_n\Big) = P\Big(\Big|\frac{1}{n}\sum_{i=1}^{n}B_i\Big| \ge \kappa\eta_n\Big). \tag{7}$$
By the definition of $B_i$, $E[B_i] = 0$. Meanwhile, note that $|K_j^{(m)}(x,X_i)| \lesssim 2^{j(d+|m|)}$ by (5). Now, it follows from A3 and A4 that
$$|B_i| \le \Big|\frac{Y_i^2}{h(X_i)}K_j^{(m)}(x,X_i)\Big| + \Big|E\Big[\frac{Y_i^2}{h(X_i)}K_j^{(m)}(x,X_i)\Big]\Big| \lesssim 2^{j(d+|m|)}.$$
Using the property (5) of the kernel function,
$$\int_{[a,b]^d}\big|K_j^{(m)}(u,v)\big|^2\,dv = 2^{2j(d+|m|)}\int_{[a,b]^d}\big|K^{(m)}(2^ju,2^jv)\big|^2\,dv = 2^{2j(d+|m|)}\,2^{-jd}\int_{[a,b]^d}\big|K^{(m)}(2^ju,2^jv)\big|^2\,d(2^jv) \lesssim 2^{j(d+2|m|)}.$$
Then, by A3 and A4, one gets
$$E[B_i^2] = \mathrm{Var}\Big[\frac{Y_i^2}{h(X_i)}K_j^{(m)}(x,X_i)\Big] \le E\Big[\frac{Y_i^4}{h^2(X_i)}\big|K_j^{(m)}(x,X_i)\big|^2\Big] \lesssim E\Big[\big|K_j^{(m)}(x,X_i)\big|^2\Big] \lesssim \int_{[a,b]^d}\big|K_j^{(m)}(x,v)\big|^2\,dv \lesssim 2^{j(d+2|m|)}.$$
According to Bernstein's inequality [26] and the above results, one can obtain that
$$P\Big(\Big|\frac{1}{n}\sum_{i=1}^{n}B_i\Big| \ge \kappa\eta_n\Big) \lesssim \exp\Big\{-\frac{n\kappa^2\eta_n^2}{2\big(2^{j(d+2|m|)} + \kappa\eta_n 2^{j(d+|m|)}/3\big)}\Big\}. \tag{8}$$
The conditions $\eta_n \sim 2^{j(\frac{d}{2}+|m|)}\big(\frac{\ln n}{n}\big)^{\frac{1}{2}}$ and $2^j \lesssim (n/\ln n)^{1/d}$ imply that $\eta_n \lesssim 2^{j|m|}$, i.e., $\eta_n 2^{-j|m|} \lesssim 1$. Meanwhile, one can easily obtain that
$$\frac{n\kappa^2\eta_n^2}{2\big(2^{j(d+2|m|)} + \kappa\eta_n 2^{j(d+|m|)}/3\big)} = \frac{\kappa^2}{2\big(1 + \kappa\eta_n 2^{-j|m|}/3\big)}\cdot\frac{n\eta_n^2}{2^{j(d+2|m|)}} \gtrsim z(\kappa)\,\ln n,$$
with $z(\kappa) = \frac{\kappa^2}{2(1+\kappa/3)}$. Then, this estimate together with (7) and (8) implies that
$$P\Big(\big|\widehat{(\partial^m r)}(x) - E\big[\widehat{(\partial^m r)}(x)\big]\big| \ge \kappa\eta_n\Big) \lesssim n^{-z(\kappa)}. \qquad\Box$$

4. Main Theorem

In this section, we state the convergence rates of the wavelet estimator under different error measures and mild conditions.
Theorem 1.
For the problem (1), let the wavelet estimator $\widehat{(\partial^m r)}(x)$ be defined by (3). The following results hold under different conditions.
(i)
Let the model (1) satisfy assumptions A1–A4, and let $2^j \lesssim (n/\ln n)^{1/d}$. Then
$$\sup_{x\in[a,b]^d}\big|\widehat{(\partial^m r)}(x) - E\big[\widehat{(\partial^m r)}(x)\big]\big| = O_{a.s.}\left(\Big(\frac{\ln n}{n}\Big)^{\frac{1}{2}}\,2^{j(\frac{d}{2}+|m|)}\right). \tag{9}$$
(ii)
Assume that the model (1) satisfies assumptions A1–A4 and that the partial derivative function $(\partial^m r)(x)$ belongs to the Hölder space $H^s(\mathbb{R}^d)$ ($s > 0$). Then one gets
$$\sup_{x\in[a,b]^d}\big|\widehat{(\partial^m r)}(x) - (\partial^m r)(x)\big| = O_{a.s.}\left(\Big(\frac{\ln n}{n}\Big)^{\frac{1}{2}}\,2^{j(\frac{d}{2}+|m|)} + 2^{-js}\right). \tag{10}$$
(iii)
Suppose that the model (1) satisfies conditions A3 and A4 and that the random vectors $X_1,\ldots,X_n$ are independent. Then one has
$$\mathrm{Var}\big[\widehat{(\partial^m r)}(x)\big] \lesssim \frac{2^{j(d+2|m|)}}{n}. \tag{11}$$
Remark 1.
If one takes $2^j \sim \big(\frac{n}{\ln n}\big)^{\frac{1}{d+2|m|+2s}}$, then the result of (ii) reduces to
$$\sup_{x\in[a,b]^d}\big|\widehat{(\partial^m r)}(x) - (\partial^m r)(x)\big| = O_{a.s.}\left(\Big(\frac{\ln n}{n}\Big)^{\frac{s}{d+2|m|+2s}}\right).$$
When $m = 0$, this strong convergence rate of the wavelet estimator coincides with the optimal uniform almost sure convergence rate for nonparametric function problems [27].
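As a quick numerical illustration of this choice (the function name and sample values below are ours), the resolution level realizing $2^j \sim (n/\ln n)^{1/(d+2|m|+2s)}$ can be computed directly:

```python
import math

def resolution_level(n, d, m_abs, s):
    """Level j with 2^j ~ (n / ln n)^{1 / (d + 2|m| + 2s)} as in Remark 1."""
    return max(0, round(math.log2(n / math.log(n)) / (d + 2 * m_abs + 2 * s)))

# e.g. n = 10**4 observations, dimension d = 2, first-order partial
# derivatives (|m| = 1), and smoothness s = 2 give a coarse level:
print(resolution_level(10**4, d=2, m_abs=1, s=2))   # -> 1
```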
Remark 2.
According to Lemma 1, the wavelet estimator $\widehat{(\partial^m r)}(x)$ is unbiased. Hence, result (i) gives the estimation error of the wavelet estimator in the deviation sense, while result (iii) considers the estimation error in the variance sense.
Proof. 
Proof of (i). Note that $[a,b]^d$ is a compact set, so it can be covered by a finite number $L_n$ of cubes $I_\ell$. Denote the centre of $I_\ell$ by $x_\ell := (x_{\ell 1}, x_{\ell 2}, \ldots, x_{\ell d})$ and its radius by $l_n := c\,L_n^{-1/d}$ with a positive constant $c$; the parameter $L_n$ will be chosen below. Using the triangle inequality, one gets
$$\sup_{x\in[a,b]^d}\big|\widehat{(\partial^m r)}(x) - E\big[\widehat{(\partial^m r)}(x)\big]\big| \lesssim Q_1 + Q_2 + Q_3, \tag{12}$$
where
$$Q_1 := \max_{1\le\ell\le L_n}\sup_{x\in I_\ell}\big|\widehat{(\partial^m r)}(x) - \widehat{(\partial^m r)}(x_\ell)\big|, \quad Q_2 := \max_{1\le\ell\le L_n}\sup_{x\in I_\ell}\big|E\big[\widehat{(\partial^m r)}(x)\big] - E\big[\widehat{(\partial^m r)}(x_\ell)\big]\big|, \quad Q_3 := \max_{1\le\ell\le L_n}\big|\widehat{(\partial^m r)}(x_\ell) - E\big[\widehat{(\partial^m r)}(x_\ell)\big]\big|.$$
For $Q_1$: by the definitions of $\widehat{(\partial^m r)}(x)$ and $\hat{\alpha}_{j,k}$ in (3) and (4), for any $x \in I_\ell$ one can easily obtain
$$\big|\widehat{(\partial^m r)}(x) - \widehat{(\partial^m r)}(x_\ell)\big| = \Big|\sum_{k\in\Lambda_j}\hat{\alpha}_{j,k}\big(\Phi_{j,k}(x) - \Phi_{j,k}(x_\ell)\big)\Big| \le \frac{1}{n}\sum_{i=1}^{n}\frac{Y_i^2}{h(X_i)}\Big|K_j^{(m)}(x,X_i) - K_j^{(m)}(x_\ell,X_i)\Big| + \Big|\sum_{k\in\Lambda_j}\int_{[a,b]^d}g^2(t)(\partial^m\Phi_{j,k})(t)\,dt\cdot\big(\Phi_{j,k}(x) - \Phi_{j,k}(x_\ell)\big)\Big| =: Q_{11} + Q_{12}. \tag{13}$$
Using A3, A4 and (6),
$$Q_{11} \lesssim \frac{1}{n}\sum_{i=1}^{n}\big|K_j^{(m)}(x,X_i) - K_j^{(m)}(x_\ell,X_i)\big| \lesssim 2^{j(d+1+|m|)}\,\|x - x_\ell\|. \tag{14}$$
In addition, by the boundedness of the function $g(x)$ in A2, the following inequalities hold:
$$\Big|\int_{[a,b]^d}g^2(x)(\partial^m\Phi_{j,k})(x)\,dx\Big| \lesssim \int_{[a,b]^d}\big|(\partial^m\Phi_{j,k})(x)\big|\,dx = 2^{j(\frac{d}{2}+|m|)}\int_{[a,b]^d}\big|(\partial^m\Phi)(2^jx_1-k_1,\ldots,2^jx_d-k_d)\big|\,dx \lesssim 2^{j(\frac{d}{2}+|m|)}\,2^{-jd}\int\big|(\partial^m\Phi)(t)\big|\,dt \lesssim 2^{j(-\frac{d}{2}+|m|)}.$$
Furthermore, it follows from the regularity of the wavelet scaling function $\Phi$ and the mean value theorem that
$$Q_{12} \lesssim 2^{j(-\frac{d}{2}+|m|)}\sum_{k\in\Lambda_j}\big|\Phi_{j,k}(x) - \Phi_{j,k}(x_\ell)\big| = 2^{j(-\frac{d}{2}+|m|)}\cdot 2^{\frac{jd}{2}}\sum_{k\in\Lambda_j}\big|\Phi(2^jx-k) - \Phi(2^jx_\ell-k)\big| = 2^{j|m|}\sum_{k\in\Lambda_j}\big|\nabla\Phi(\xi_k)\cdot\big(2^j(x_1-x_{\ell 1}),\ldots,2^j(x_d-x_{\ell d})\big)^{T}\big| \lesssim 2^{j(1+|m|)}\sum_{k\in\Lambda_j}\|x - x_\ell\| \lesssim 2^{j(d+1+|m|)}\,\|x - x_\ell\|, \tag{15}$$
where $\xi_k$ lies on the segment between $2^jx-k$ and $2^jx_\ell-k$.
Combining (13), (14) and (15), one can get
$$\big|\widehat{(\partial^m r)}(x) - \widehat{(\partial^m r)}(x_\ell)\big| \lesssim 2^{j(d+1+|m|)}\,\|x - x_\ell\|.$$
Because $x_\ell$ is the centre of $I_\ell$, one has $\|x - x_\ell\| \lesssim l_n$ for $x \in I_\ell$. Then, by the definition of $l_n$,
$$Q_1 \lesssim 2^{j(d+1+|m|)}\,l_n \sim 2^{j(d+1+|m|)}\,L_n^{-1/d}.$$
Now, one takes
$$L_n \sim \Big(2^{j(d+2)}\,\frac{n}{\ln n}\Big)^{\frac{d}{2}}.$$
Then, the following conclusion is true:
$$Q_1 \lesssim \Big(\frac{\ln n}{n}\Big)^{\frac{1}{2}}\,2^{j(\frac{d}{2}+|m|)}. \tag{16}$$
For $Q_2$: using the above discussion of $Q_1$, one knows
$$Q_2 \le \max_{1\le\ell\le L_n}\sup_{x\in I_\ell}E\big[\big|\widehat{(\partial^m r)}(x) - \widehat{(\partial^m r)}(x_\ell)\big|\big] \lesssim \Big(\frac{\ln n}{n}\Big)^{\frac{1}{2}}\,2^{j(\frac{d}{2}+|m|)}. \tag{17}$$
For $Q_3$: note that
$$P\big(Q_3 \ge \kappa\eta_n\big) = P\Big(\max_{1\le\ell\le L_n}\big|\widehat{(\partial^m r)}(x_\ell) - E\big[\widehat{(\partial^m r)}(x_\ell)\big]\big| \ge \kappa\eta_n\Big) \le \sum_{\ell=1}^{L_n}P\Big(\big|\widehat{(\partial^m r)}(x_\ell) - E\big[\widehat{(\partial^m r)}(x_\ell)\big]\big| \ge \kappa\eta_n\Big) \le L_n\,\sup_{x\in[a,b]^d}P\Big(\big|\widehat{(\partial^m r)}(x) - E\big[\widehat{(\partial^m r)}(x)\big]\big| \ge \kappa\eta_n\Big).$$
By Lemma 2 and $2^j \lesssim (n/\ln n)^{1/d}$, one can choose a constant $\kappa$ large enough that
$$\sum_{n=1}^{\infty}P\big(Q_3 \ge \kappa\eta_n\big) \lesssim \sum_{n=1}^{\infty}L_n\,n^{-z(\kappa)} < \infty.$$
Furthermore, this result with the Borel–Cantelli lemma implies
$$Q_3 = O_{a.s.}\left(\Big(\frac{\ln n}{n}\Big)^{\frac{1}{2}}\,2^{j(\frac{d}{2}+|m|)}\right). \tag{18}$$
Finally, together with (12), (16), (17), and (18), one gets
$$\sup_{x\in[a,b]^d}\big|\widehat{(\partial^m r)}(x) - E\big[\widehat{(\partial^m r)}(x)\big]\big| = O_{a.s.}\left(\Big(\frac{\ln n}{n}\Big)^{\frac{1}{2}}\,2^{j(\frac{d}{2}+|m|)}\right).$$
Proof of (ii). Using Lemma 1 and the wavelet expansion (2),
$$\big|\widehat{(\partial^m r)}(x) - (\partial^m r)(x)\big| \le \big|\widehat{(\partial^m r)}(x) - E\big[\widehat{(\partial^m r)}(x)\big]\big| + \Big|\sum_{j'=j}^{\infty}\sum_{u=1}^{2^d-1}\sum_{k\in\Lambda_{j'}}\beta_{j',k,u}\,\Psi_{j',k,u}(x)\Big|. \tag{19}$$
Hence,
$$\sup_{x\in[a,b]^d}\big|\widehat{(\partial^m r)}(x) - (\partial^m r)(x)\big| \le I_1 + I_2, \tag{20}$$
where
$$I_1 := \sup_{x\in[a,b]^d}\big|\widehat{(\partial^m r)}(x) - E\big[\widehat{(\partial^m r)}(x)\big]\big|, \qquad I_2 := \sup_{x\in[a,b]^d}\Big|\sum_{j'=j}^{\infty}\sum_{u=1}^{2^d-1}\sum_{k\in\Lambda_{j'}}\beta_{j',k,u}\,\Psi_{j',k,u}(x)\Big|.$$
For $I_1$: according to the conclusion of (i),
$$I_1 = O_{a.s.}\left(\Big(\frac{\ln n}{n}\Big)^{\frac{1}{2}}\,2^{j(\frac{d}{2}+|m|)}\right). \tag{21}$$
For $I_2$: if a function belongs to the Hölder space $H^s(\mathbb{R}^d)$ and $\Psi_{j,k,u}$ is a wavelet function, then one can prove that $\big|\sum_{j'=j}^{\infty}\sum_{u=1}^{2^d-1}\sum_{k\in\Lambda_{j'}}\beta_{j',k,u}\Psi_{j',k,u}(x)\big| \lesssim 2^{-js}$; more details and proofs of this conclusion can be found in [28,29,30]. Furthermore, because the partial derivative function $(\partial^m r)(x)$ belongs to the Hölder space $H^s(\mathbb{R}^d)$, one can easily obtain that
$$I_2 \lesssim 2^{-js}.$$
Now, this conclusion, together with (20) and (21), shows that
$$\sup_{x\in[a,b]^d}\big|\widehat{(\partial^m r)}(x) - (\partial^m r)(x)\big| = O_{a.s.}\left(\Big(\frac{\ln n}{n}\Big)^{\frac{1}{2}}\,2^{j(\frac{d}{2}+|m|)} + 2^{-js}\right).$$
Proof of (iii). By the definition (3) of $\widehat{(\partial^m r)}(x)$ and the properties of the variance (the deterministic correction term in (4) does not affect it), one has
$$\mathrm{Var}\big[\widehat{(\partial^m r)}(x)\big] = \mathrm{Var}\Big[\sum_{k\in\Lambda_j}\frac{(-1)^{|m|}}{n}\sum_{i=1}^{n}\frac{Y_i^2}{h(X_i)}(\partial^m\Phi_{j,k})(X_i)\,\Phi_{j,k}(x)\Big] = \mathrm{Var}\Big[\frac{(-1)^{|m|}}{n}\sum_{i=1}^{n}\frac{Y_i^2}{h(X_i)}K_j^{(m)}(x,X_i)\Big].$$
Moreover, the independence of $X_1,\ldots,X_n$ together with A3 and A4 implies that
$$\mathrm{Var}\big[\widehat{(\partial^m r)}(x)\big] \lesssim \frac{1}{n}\,\mathrm{Var}\big[K_j^{(m)}(x,X_1)\big].$$
Using the property (5) of the kernel function and condition A3,
$$\mathrm{Var}\big[K_j^{(m)}(x,X_1)\big] \le E\big[(K_j^{(m)}(x,X_1))^2\big] \lesssim \int_{[a,b]^d}\big(K_j^{(m)}(x,v)\big)^2\,dv \lesssim 2^{j(d+2|m|)}.$$
Hence,
$$\mathrm{Var}\big[\widehat{(\partial^m r)}(x)\big] \lesssim \frac{2^{j(d+2|m|)}}{n}. \qquad\Box$$

5. Conclusions

For nonparametric derivative estimation, classical research pays more attention to the derivatives of one-dimensional functions, whereas this paper studies the nonparametric estimation of the partial derivatives of a multivariate function. Firstly, a wavelet estimator of the partial derivatives of the multivariate variance function in a heteroscedastic model is given; importantly, this wavelet estimator is unbiased. Secondly, two lemmas establishing the key properties of the wavelet estimator are proved. Finally, the convergence rates of the wavelet estimator under different error measures are considered. According to the main theorem, the strong convergence rate of the wavelet estimator is the same as the optimal uniform almost sure convergence rate of nonparametric function estimation.
Because of the local analysis characteristics of wavelets in the time and frequency domains, the wavelet estimator can attain the optimal convergence rate by choosing an appropriate wavelet scaling parameter. Hence, this paper considers partial derivatives estimation based on the wavelet method, and the theoretical asymptotic properties of the wavelet estimator are discussed. Presenting a corresponding practical illustration of the wavelet estimator remains difficult and requires further investigation and some new techniques; we will consider this in future work.

Author Contributions

Conceptualization, J.K. and H.Z.; writing—original draft preparation, J.K. and H.Z.; writing—review and editing, J.K. All authors have read and agreed to the published version of the manuscript.

Funding

This paper is supported by the Guangxi Natural Science Foundation (No. 2023GXNSFAA026042), the National Natural Science Foundation of China (No. 12361016), the Center for Applied Mathematics of Guangxi (GUET), and the Guangxi Colleges and Universities Key Laboratory of Data Analysis and Computation.

Data Availability Statement

Data are contained within the article.

Acknowledgments

All authors would like to thank the reviewers for their important and constructive comments.

Conflicts of Interest

All authors state that there are no conflicts of interest.

References

  1. Box, G. Signal-to-noise ratios, performance criteria, and transformations. Technometrics 1988, 30, 1–17. [Google Scholar] [CrossRef]
  2. Smeds, K.; Wolters, F.; Rung, M. Estimation of signal-to-noise ratios in realistic sound scenarios. J. Am. Acad. Audiol. 2015, 26, 183–196. [Google Scholar] [CrossRef] [PubMed]
  3. Yao, Q.W.; Tong, H. Quantifying the influence of initial values on nonlinear prediction. J. R. Stat. Soc. Ser. B 1994, 56, 701–725. [Google Scholar]
  4. Härdle, W.; Tsybakov, A. Local polynomial estimators of the volatility function in nonparametric autoregression. J. Econom. 1997, 81, 223–242. [Google Scholar] [CrossRef]
  5. Mak, T.K. Estimation of parameters in heteroscedastic linear models. J. R. Stat. Soc. Ser. B 1992, 54, 649–655. [Google Scholar] [CrossRef]
  6. Shen, S.L.; Mei, C.L. Estimation of the variance function in heteroscedastic linear regression models. Commun. Stat. Theory Methods 2009, 38, 1098–1112. [Google Scholar] [CrossRef]
  7. Su, Z.H.; Cook, R.D. Estimation of multivariate means with heteroscedastic errors using envelope models. Stat. Sin. 2013, 23, 213–230. [Google Scholar] [CrossRef]
  8. Zhang, J.; Huang, Z.S. Efficient simultaneous partial envelope model in multivariate linear regression. J. Stat. Comput. Simul. 2022, 92, 1373–1400. [Google Scholar] [CrossRef]
  9. Fan, J.Q.; Yao, Q.W. Efficient estimation of conditional variance functions in stochastic regression. Biometrika 1998, 85, 645–660. [Google Scholar] [CrossRef]
  10. Brown, D.L.; Levine, M. Variance estimation in nonparametric regression via the difference sequence method. Ann. Stat. 2007, 35, 2219–2232. [Google Scholar] [CrossRef]
  11. Wang, L.; Brown, D.L.; Cai, T.T. Effect of mean on variance function estimation in nonparametric regression. Ann. Stat. 2008, 36, 646–664. [Google Scholar] [CrossRef]
  12. Shen, Y.D.; Gao, C.; Witten, D.; Han, F. Optimal estimation of variance in nonparametric regression with random design. Ann. Stat. 2020, 48, 3589–3618. [Google Scholar] [CrossRef]
  13. Zaoui, A. Variance function estimation in regression model via aggregation procedures. J. Nonparametric Stat. 2023, 35, 397–436. [Google Scholar] [CrossRef]
  14. Zhou, S.G.; Wolfe, D.A. On derivatives estimation in spline regression. Stat. Sin. 2000, 10, 93–108. [Google Scholar]
  15. Cai, T.T. On adaptive wavelet estimation of a derivative and other related linear inverse problems. J. Stat. Plan. Inference 2002, 108, 329–349. [Google Scholar] [CrossRef]
  16. Chaubey, Y.P.; Doosti, H.D.; Rao, B.L.S.P. Wavelet based estimation of the derivatives of a density for a negatively associated process. J. Stat. Theory Pract. 2008, 2, 453–463. [Google Scholar] [CrossRef]
  17. Hosseinioun, N.; Doosti, H.; Nirum, H.A. Nonparametric estimation of the derivative of a density by the method of wavelet for mixing sequences. Stat. Pap. 2012, 53, 195–203. [Google Scholar] [CrossRef]
  18. Kou, J.K.; Zhang, H. Wavelet estimation of the derivatives of variance function in heteroscedastic model. AIMS Math. 2023, 8, 14340–14361. [Google Scholar] [CrossRef]
  19. Daubechies, I. Ten Lectures on Wavelets; SIAM: Philadelphia, PA, USA, 1992. [Google Scholar]
  20. Härdle, W.; Kerkyacharian, G.; Picard, D.; Tsybakov, A. Wavelets, Approximation and Statistical Application; Springer: New York, NY, USA, 1998. [Google Scholar]
  21. Meyer, Y. Wavelets and Operators; Hermann: Paris, France, 1990. [Google Scholar]
  22. Walnut, D.F. An Introduction to Wavelet Analysis; Birkhäuser: Basel, Switzerland, 2001. [Google Scholar]
  23. Triebel, H. Theory of Function Spaces; Birkhäuser: Basel, Switzerland, 2001. [Google Scholar]
  24. Masry, E. Multivariate probability density estimation by wavelet methods: Strong consistency and rates for stationary time series. Stoch. Process. Their Appl. 1997, 67, 177–193. [Google Scholar] [CrossRef]
  25. Allaoui, S.; Bouzebda, S.; Chesneau, C.; Liu, J.C. Uniform almost sure convergence and asymptotic distribution of the wavelet-based estimators of partial derivatives of multivariate density function under weak dependence. J. Nonparametric Stat. 2021, 33, 170–196. [Google Scholar] [CrossRef]
  26. Kou, J.K.; Huang, Q.M.; Guo, H.J. Pointwise wavelet estimations for a regression model in local Hölder space. Axioms 2022, 11, 466. [Google Scholar] [CrossRef]
  27. Giné, E.; Nickl, R. Uniform limit theorems for wavelet density estimators. Ann. Probab. 2009, 37, 1605–1646. [Google Scholar] [CrossRef]
  28. Devore, R.; Kerkyacharian, G.; Picard, D.; Temlyakov, V. Approximation methods for supervised learning. Found. Comput. Math. 2006, 6, 3–58. [Google Scholar] [CrossRef]
  29. Liu, Y.M.; Wu, C. Point-wise estimation for anisotropic densities. J. Multivar. Anal. 2019, 171, 112–125. [Google Scholar] [CrossRef]
  30. Chen, L.; Chesneau, C.; Kou, J.K.; Xu, J.L. Wavelet optimal estimation for a multivariate probability density function under weighted distribution. Results Math. 2023, 78, 66. [Google Scholar] [CrossRef]