1. Introduction
This paper considers the following heteroscedastic model:
In this model,
are independent and identically distributed random vectors. The functions
and
all are defined on
.
are identically distributed random variables, which satisfy
and
. Furthermore, the random vector
and random variable
are uncorrelated for any
. This paper is devoted to estimating the partial derivatives
of the variance function
from the observed data
. The partial derivative
is defined by
with
, and
.
In practical applications, the heteroscedastic model is widely used for monitoring the signal-to-noise ratios in quality control [
1,
2], measuring the reliability of time series prediction [
3], evaluating the volatility or risk in financial markets [
4] and so on. Hence, many significant results have been obtained by [
5,
6,
7,
8] and others. For this heteroscedastic model, Fan and Yao [
9] propose a residual-based estimator of the variance function, and study the asymptotic normality properties of an estimator. A class of difference-based kernel estimators of a variance function are constructed by [
10]. Moreover, the asymptotic rates of convergence and the optimal bandwidth of kernel estimators are discussed. Wang et al. [
11] consider the minimax convergence rates of a kernel estimator over pointwise squared error and global integrated mean squared error, respectively. The optimal estimation of a variance function with random design is discussed by [
12]. Zaoui [
13] studies the variance function estimation with model selection aggregation and convex aggregation.
Derivative estimation plays a crucial role in nonparametric statistics estimation, big data processing, and other practical applications. For example, some companies predict the profit growth rate when the research and development investment increases. Many financial and fund institutions evaluate the volatility of stock market prices and so on. Many important and interesting results of derivative estimation are obtained using different methods. Zhou and Wolfe [
14] propose a spline derivative estimator, and discuss the asymptotic normality and variance property. A spatially adaptive derivative estimator is constructed by [
15]. Chaubey et al. [
16] consider the upper bound over
-loss for wavelet estimators of density derivative functions. A convergence rate over the mean integrated error of a derivative estimator for mixing sequences is proved by [
17]. For the estimation problem (
1), the derivatives estimation of the variance function via the wavelet method is proved by [
18]. However, it should be pointed out that those above results all focus on the estimation of the derivatives of a one-dimensional function. There is a lack of partial derivatives estimation of a multivariate variance function. Hence, in this paper, we construct a partial derivatives estimator using the wavelet method, and discuss the convergence rates of the wavelet estimator under different mild conditions.
The structure of this paper is given as follows.
Section 2 specifies some mild hypotheses for the estimation model (
1), and constructs a wavelet estimator of the partial derivative function. Two important auxiliary results of the wavelet estimator are proved in
Section 3. The estimation errors of the wavelet estimator under different assumptions are discussed in
Section 4.
2. Wavelet Estimator
In this section, we will give some hypotheses of the estimation problem (
1), and construct a partial derivatives estimator using the wavelet method. For the estimation model (
1), the following mild assumptions are proposed, which are used in the later discussion.
A1: For the partial derivative with , if any and , the partial derivative function satisfies when or b.
A2: The function is known and bounded, i.e., there exists a positive constant such that .
A3: The density function of the random vector satisfies that , where , and are two positive constants.
A4: The random variables are defined on , where and are two constants.
In order to construct a wavelet estimator, some basic theories of wavelets are given in the following [
19,
20]. Let
be a scaling function and
be a wavelet function such that
constitutes an orthonormal basis of
, where
is a positive integer,
Then, for any integer
such that
, a function
can be expanded into a wavelet series as:
where
,
and the cardinality of
satisfies
. In this paper, we choose some compactly supported wavelets, such as the Daubechies wavelet [
21]. In addition, this paper adopts the following symbol:
denotes
for some constant
;
means
; and
stand for both
and
. For any
,
.
Now we define a wavelet estimator of partial derivatives function
by
In this definition,
and
3. Two Lemmas
This section will provide two important lemmas, which are used to prove the main theorem in a later section. According to the following lemma, it is easy to see that our wavelet estimator is unbiased.
Lemma 1. For the model (1) with A1,
Proof. By the definition of
and the properties of random vectors
,
Then, it follows from (
1) that
Note that the conditions
and
imply
. Furthermore, using the assumption of no correlation between
and
, one gets
and
In addition, because the density function of the random vector
is
, the following equation can be obtained easily:
According to the above results, one has
Then, it is easy to see from
A1 that
□
For nonparametric estimation, wavelet estimators can be viewed as generalized kernel estimators. For any
, we introduce the kernel
by
Now, we give some important properties of this kernel function, which will be used in the later discussion. Furthermore, we define
where
denotes the
th partial derivative of
with respect to
.
Let the scaling function
be
-regular [
20,
22,
23], i.e.,
and
for any integer
,
with
and
. Then, there exists a positive constant
such that
Meanwhile, one can obtain that
For more properties and details of kernel functions, one can see [
24,
25].
Lemma 2. For the model (1) with conditions A3 and A4, the wavelet estimator is defined by (3) and , there exists a constant such thatwhere and . Proof. According to the definition of
by (
3),
where
and
Then we can obtain that
By the definition of
,
. Meanwhile, note that
by (
5). Now, it follows from
A3 and
A4 that
Using the property of the kernel function in (
5),
Then, by
A3 and
A4, one gets
According to the Bernstein’s inequality [
26] and the above results, one can obtain that
The conditions
and
imply that
. Meanwhile, one can easily obtain that
with
. Then, this result with (
7) and (
8) implies that
□
4. Main Theorem
In this section, we will state the convergence rates of the wavelet estimator under different estimation error and mild conditions.
Theorem 1. For the problem (1), the wavelet estimator is defined by (3), and the following results under different conditions are obtained. - (i)
Let the model (1) satisfy the assumptions A1–A4,
- (ii)
Assume that the model (1) satisfies the assumptions A1–A4, and the partial derivatives functions belong to Hölder space , one gets - (iii)
Suppose that in the model (1) with conditions A3 and A4, the random vector is independent. Then, one has
Remark 1. If one takes , then the result of (ii) reduces toThen, when , this strong convergence rate of the wavelet estimator is the same as the optimal uniform almost sure convergence rate of nonparametric function problems [27]. Remark 2. According to Lemma 1, it is easy to know that the wavelet estimator is an unbiased estimator. Hence, the estimation error of this wavelet estimator in the deviation sense is given by (i). In addition, the result (iii) considers the estimation error of the wavelet estimator in the variance sense.
Proof. Proof of (i). Note that
is a compact set, then it can be covered by a finite number
of cubes
. Meanwhile, one defines that the centre of
is
, and the radius length is
with a positive constant
c. The parametric
will be taken in the following discussions. Using the triangle inequality, one gets
where
For
. By the definitions of
and
in (
3) and (
4), for any
, one can easily obtain
Using
A3,
A4 and (
6),
In addition, by the boundedness assumption of function
in
A2, the following inequalities are true:
Furthermore, it follows from the property of wavelet scaling function
that
Combining (
13), (
14) and (
15), one can get
Because the centre of
is
,
. Then, by the definition of
,
Now, one takes
Then, the following conclusion is true,
For
. Using the above discussions of
, one knows
For
. Note that
By Lemma 2 and
, one can choose a large enough constant
such that
Furthermore, this result with the Borel–Cantelli lemma implies
Finally, together with (
12), (
16), (
17), and (
18), one gets
Proof of (ii). Using Lemma 1 and the property (
2) of wavelets,
Hence,
where
For
. According to the conclusion of (i),
For
. Let a function
belong to Hölder space
, and let
be a wavelet function, then one can prove that
More details and proofs of this above conclusion can be found in [
28,
29,
30]. Furthermore, because the partial derivatives functions
belong to Hölder space
, one can easily obtain that
Now, this conclusion with (
20) and (
21) shows that
Proof of (iii). By the definition of
and the properties of the variance function, one has
Moreover, the assumptions of random vector
,
A3 and
A4 imply that
Using the property of the kernel function in (
5) and condition
A3,
Hence,
□
5. Conclusions
For nonparametric derivative estimation, classical research results pay more attention to the derivative estimation of one-dimensional functions. However, this paper studies the nonparametric estimation of partial derivatives of a multivariate function. Firstly, a wavelet estimator of the partial derivatives of the multivariate variance function in a heteroscedastic model is given. More importantly, this wavelet estimator is an unbiased estimator. Secondly, two important lemma are proved, which discuss the key properties of the wavelet estimator. Finally, the convergence rates over different estimation errors of the wavelet estimator are considered. According to the main theorem, it is easy to see that the strong convergence rate of the wavelet estimator is the same as the optimal uniform almost sure convergence rate of nonparametric function estimations.
Because the local analysis characteristics in the time and frequency domains of the wavelet, the wavelet estimator can choose an appropriate wavelet scaling parameter to get the optimal convergence rate. Hence, this paper considers partial derivatives estimation based on the wavelet method. The theoretical results of asymptotic property of the wavelet estimator are discussed in this paper. In addition, it is difficult to present the corresponding practical illustration of the wavelet estimator, which needs more investigations and some new skills. We will consider this in future work.
Author Contributions
Conceptualization, J.K. and H.Z.; writing—original draft preparation, J.K. and H.Z.; writing—review and editing, J.K. All authors have read and agreed to the published version of the manuscript.
Funding
This paper is supported by the Guangxi Natural Science Foundation (No. 2023GXNSFAA026042), the National Natural Science Foundation of China (No. 12361016), the Center for Applied Mathematics of Guangxi (GUET), and the Guangxi Colleges and Universities Key Laboratory of Data Analysis and Computation.
Data Availability Statement
Data are contained within the article.
Acknowledgments
All authors would like to thank the reviewers for their important and constructive comments.
Conflicts of Interest
All authors state that there are no conflicts of interest.
References
- Box, G. Signal-to-noise ratios, performance criteria, and transformations. Technometrics 1988, 30, 1–17. [Google Scholar] [CrossRef]
- Smeds, K.; Wolters, F.; Rung, M. Estimation of signal-to-noise ratios in realistic sound scenarios. J. Am. Acad. Audiol. 2015, 26, 183–196. [Google Scholar] [CrossRef] [PubMed]
- Yao, Q.W.; Tong, H. Quantifying the influence of initial values on nonlinear prediction. J. R. Stat. Soc. Ser. B 1994, 56, 701–725. [Google Scholar]
- Härdle, W.; Tsybakov, A. Local polynomial estimators of the volatility function in nonparametric autoregression. J. Econom. 1997, 8, 223–242. [Google Scholar] [CrossRef]
- Mak, T.K. Estimation of parameters in heteroscedastic linear models. J. R. Stat. Soc. Ser. B 1992, 54, 649–655. [Google Scholar] [CrossRef]
- Shen, S.L.; Mei, C.L. Estimation of the variance function in heteroscedastic linear regression models. Commun. Stat. Theory Methods 2009, 38, 1098–1112. [Google Scholar] [CrossRef]
- Su, Z.H.; Cook, R.D. Estimation of multivariate means with heteroscedastic errors using envelope models. Stat. Sin. 2013, 23, 213–230. [Google Scholar] [CrossRef]
- Zhang, J.; Huang, Z.S. Efficient simultaneous partial envelope model in multivariate linear regression. J. Stat. Comput. Simul. 2022, 92, 1373–1400. [Google Scholar] [CrossRef]
- Fan, J.Q.; Yao, Q.W. Efficient estimation of conditional variance functions in stochastic regression. Biometrika 1998, 85, 645–660. [Google Scholar] [CrossRef]
- Brown, D.L.; Levine, M. Variance estimation in nonparametric regression via the difference sequence method. Ann. Stat. 2007, 35, 2219–2232. [Google Scholar] [CrossRef]
- Wang, L.; Brown, D.L.; Cai, T.T. Effect of mean on variance function estimation in nonparametric regression. Ann. Stat. 2008, 36, 646–664. [Google Scholar] [CrossRef]
- Shen, Y.D.; Gao, C.; Witten, D.; Han, F. Optimal estimation of variance in nonparametric regression with random design. Ann. Stat. 2020, 48, 3589–3618. [Google Scholar] [CrossRef]
- Zaoui, A. Variance function estimation in regression model via aggregation procedures. J. Nonparametric Stat. 2023, 35, 397–436. [Google Scholar] [CrossRef]
- Zhou, S.G.; Wolfe, D.A. On derivatives estimation in spline regression. Stat. Sin. 2000, 10, 93–108. [Google Scholar]
- Cai, T.T. On adaptive wavelet estimation of a derivative and other related linear inverse problems. J. Stat. Plan. Inference 2002, 108, 329–349. [Google Scholar] [CrossRef]
- Chaubey, Y.P.; Doosti, H.D.; Rao, B.L.S.P. Wavelet based estimation of the derivatives of a density for a negatively associated process. J. Stat. Theory Pract. 2008, 2, 453–463. [Google Scholar] [CrossRef]
- Hosseinioun, N.; Doosti, H.; Nirum, H.A. Nonparametric estimation of the derivative of a density by the method of wavelet for mixing sequences. Stat. Pap. 2012, 53, 195–203. [Google Scholar] [CrossRef]
- Kou, J.K.; Zhang, H. Wavelet estimation of the derivatives of variance function in heteroscedastic model. AIMS Math. 2023, 8, 14340–14361. [Google Scholar] [CrossRef]
- Daubechies, I. Ten Lectures on Wavelets; SIAM: Philadelphia, PA, USA, 1992. [Google Scholar]
- Härdle, W.; Kerkyacharian, G.; Picard, D.; Tsybakov, A. Wavelets, Approximation and Statistical Application; Springer: New York, NY, USA, 1998. [Google Scholar]
- Meyer, Y. Wavelet and Operators; Hermann: Paris, France, 1990. [Google Scholar]
- Walnut, D.F. An Introduction to Wavelet Analysis; Birkhäuser: Basel, Switzerland, 2001. [Google Scholar]
- Triebel, H. Theory of Function Spaces; Birkhäuser: Basel, Switzerland, 2001. [Google Scholar]
- Masry, E. Multivariate probability density estimation by wavelet methods: Strong consistency and rates for stationary time series. Stoch. Process. Their Appl. 1997, 67, 177–193. [Google Scholar] [CrossRef]
- Allaoui, S.; Bouzebda, S.; Chesneau, C.; Liu, J.C. Uniform almost sure convergence and asymptotic distribution of the wavelet-based estimators of partial derivatives of multivariate density function under weak dependence. J. Nonparametric Stat. 2021, 33, 170–196. [Google Scholar] [CrossRef]
- Kou, J.K.; Huang, Q.M.; Guo, H.J. Pointwise wavelet estimations for a regression model in local Hölder space. Axioms 2022, 11, 466. [Google Scholar] [CrossRef]
- Giné, E.; Nickl, R. Uniform limit theorems for wavelet density estimators. Ann. Probab. 2009, 37, 1605–1646. [Google Scholar] [CrossRef]
- Devore, R.; Kerkyacharian, G.; Picard, D.; Temlyakov, V. Approximation methods for supervised learning. Found. Comput. Math. 2006, 6, 3–58. [Google Scholar] [CrossRef]
- Liu, Y.M.; Wu, C. Point-wise estimation for anisotropic densities. J. Multivar. Anal. 2019, 171, 112–125. [Google Scholar] [CrossRef]
- Chen, L.; Chesneau, C.; Kou, J.K.; Xu, J.L. Wavelet optimal estimation for a multivariate probability density function under weighted distribution. Results Math. 2023, 78, 66. [Google Scholar] [CrossRef]
| Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).