Article

Nonparametric Density Estimation in a Mixed Model Using Wavelets

School of Mathematics and Computational Science, Guilin University of Electronic Technology, Guilin 541004, China
*
Author to whom correspondence should be addressed.
Axioms 2025, 14(10), 741; https://doi.org/10.3390/axioms14100741
Submission received: 5 August 2025 / Revised: 22 September 2025 / Accepted: 24 September 2025 / Published: 30 September 2025

Abstract

This paper investigates nonparametric estimation of a density function within a mixed density model. A linear wavelet density estimator and an adaptive nonlinear wavelet estimator are proposed using the wavelet method and a hard thresholding algorithm. Under some mild conditions, the convergence rates of the two wavelet density estimators over the mean integrated squared error are established. Compared with the optimal convergence rates of nonparametric wavelet estimation, both wavelet estimators are optimal in some cases. Finally, the performance of the two wavelet estimators is verified by numerical experiments.
MSC:
62G07; 62G20; 42C40

1. Introduction

In this paper, we investigate the following mixed density model. The random vectors $X_1, X_2, \ldots, X_n$ are independent and identically distributed and share a common density $g(x)$ satisfying
$$g(x) = \theta h(x) + (1-\theta) f(x), \qquad x \in \Omega. \tag{1}$$
In the above equation, $\Omega$ is a compact subset of $\mathbb{R}^d$ on which the densities are supported, and $\theta \in (0, 1)$ is a known mixture parameter. Both $h(x)$ and $f(x)$ are bounded densities. The model aims to recover the unknown density $f(x)$ from the observed sample $\{X_i\}_{i=1}^n$.
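As a quick illustration of model (1), sampling from $g$ amounts to drawing from $h$ with probability $\theta$ and from $f$ otherwise. The sketch below (in Python, with illustrative densities that are our own assumptions, not examples from the paper) generates such a mixed sample:

```python
import random

def sample_mixture(n, theta, sample_h, sample_f, seed=0):
    """Draw n i.i.d. observations from g = theta*h + (1-theta)*f:
    with probability theta draw from the contaminating density h,
    otherwise draw from the density of interest f."""
    rng = random.Random(seed)
    return [sample_h(rng) if rng.random() < theta else sample_f(rng)
            for _ in range(n)]

# Illustrative choices (assumptions, not the paper's examples):
# h uniform on [0, 1] and f a triangular density peaked at 0.5.
xs = sample_mixture(4096, 0.025,
                    sample_h=lambda r: r.random(),
                    sample_f=lambda r: r.triangular(0.0, 1.0, 0.5))
```

With $\theta = 0.025$, roughly 2.5% of the observations come from the contaminating density, mirroring the contamination interpretation discussed below.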
This mixed density model has many practical applications. In the contamination problem [1,2], the density function f(x), representing a reasonable assumed distribution, is contaminated by an arbitrary density h(x). In addition, this model is widely used in microarray analysis [3,4,5], neuroimaging [6] and other testing problems [7,8]. In multiple testing, Efron et al. [9] used the above mixed model to estimate the local false discovery rate.
For the density estimation problem (1), significant results have been established through various methods, including the kernel method, maximum likelihood estimation, and polynomial techniques. Olkin and Spiegelman [10] estimated the mixture parameter θ by the maximum likelihood method and proposed a kernel density estimator of the true density function f(x). Priebe and Marchette [11] proposed using parameter estimates as weights for kernel density estimation. James et al. [12] constructed a semiparametric density estimator and studied the almost sure convergence rate of this estimator under some mild conditions. Robin et al. [13] estimated the unknown density function employing a weighted kernel function with adaptively selected weights. The pointwise quadratic risk for this randomly weighted kernel estimator was derived by [14], assuming the unknown density function belongs to a Hölder class. The convergence rates and asymptotic behavior of the density estimator were derived by [15], considering the cases of a known and an unknown mixture parameter, respectively. Deb et al. [16] proposed likelihood-based methods for estimating the distribution function associated with the density f(x), and discussed the statistical and computational properties of these methods. To the best of our knowledge, no previous work has focused on wavelet methods for estimating the unknown density function f(x).
This paper focuses on developing wavelet-based methods to address the density estimation problem (1). Wavelets have the important and useful property of local time–frequency analysis. Due to this property, a wavelet estimator can choose suitable scale parameters for functions that behave differently on different intervals. Wavelet methods are now a staple tool in nonparametric statistics; see [17,18,19,20,21,22]. In this paper, a linear wavelet density estimator is first introduced via the wavelet projection approach; notably, this linear estimator is unbiased. A convergence rate of the linear estimator over the $L^2$-risk is proved in Besov spaces. Although it achieves the optimal convergence rate typical of nonparametric wavelet estimation, this linear estimator lacks adaptability. Secondly, a nonlinear wavelet estimator is derived using the hard thresholding method; it is an adaptive density estimator. Compared to the linear estimator, the nonlinear estimator achieves a superior convergence rate when $1 \le p < 2$. Finally, a series of numerical experiments studies the performance of these two wavelet estimators.
The rest of the paper proceeds as follows. Section 2 provides the definitions of the two wavelet density estimators and their convergence rates under the $L^2$-risk. The performance of the two wavelet estimators is studied through numerical experiments in Section 3. The proofs of the main results and some auxiliary results are presented in Section 4.

2. Wavelet Estimators and Main Results

This work investigates wavelet-based density estimation for the mixed model within Besov spaces. Let us first review several fundamental concepts of wavelet theory. An orthogonal multiresolution analysis (MRA) [23] is a sequence of nested, closed linear subspaces $V_j \subseteq V_{j+1}$ of the space of square-integrable functions $L^2(\mathbb{R}^d)$, $j \in \mathbb{Z}$, such that
(i)
$\bigcap_{j \in \mathbb{Z}} V_j = \{0\}$, $\overline{\bigcup_{j \in \mathbb{Z}} V_j} = L^2(\mathbb{R}^d)$;
(ii)
$f(x) \in V_0$ if and only if $f(2^j x) \in V_j$;
(iii)
there exists a function $\Phi(x) \in V_0$ such that $\{\Phi(x - \kappa),\ \kappa \in \mathbb{Z}^d\}$ forms an orthonormal basis of $V_0$.
Given an orthonormal scaling function $\Phi$, the corresponding wavelets are denoted by $\Psi_\mu$, $\mu \in \{1, \ldots, 2^d - 1\}$. Then $S = \{\Phi_{\varrho,\kappa} := 2^{\varrho d/2}\Phi(2^{\varrho} x - \kappa),\ \Psi_{j,\kappa,\mu} := 2^{jd/2}\Psi_\mu(2^{j} x - \kappa),\ j \ge \varrho,\ \kappa \in \Lambda_j\}$ forms an orthonormal basis of $L^2(\Omega)$. For every integer $j_0 \ge \varrho$, any function $f(x) \in L^2(\Omega)$ can be represented in $S$ by the wavelet series
$$f(x) = \sum_{\kappa \in \Lambda_{j_0}} a_{j_0,\kappa}\,\Phi_{j_0,\kappa}(x) + \sum_{j=j_0}^{\infty} \sum_{\mu=1}^{2^d-1} \sum_{\kappa \in \Lambda_j} b_{j,\kappa,\mu}\,\Psi_{j,\kappa,\mu}(x). \tag{2}$$
In this equation, $\Lambda_j = \{0, 1, \ldots, 2^{jd} - 1\}$, $a_{j,\kappa} = \langle f, \Phi_{j,\kappa} \rangle_{L^2(\Omega)}$ and $b_{j,\kappa,\mu} = \langle f, \Psi_{j,\kappa,\mu} \rangle_{L^2(\Omega)}$.
Let $P_{V_j}$ denote the orthogonal projection operator from $L^2(\Omega)$ onto the subspace $V_j$. Then, for any $f(x) \in L^2(\Omega)$,
$$P_{V_j} f(x) = \sum_{\kappa \in \Lambda_j} a_{j,\kappa}\,\Phi_{j,\kappa}(x). \tag{3}$$
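To make the projector (3) concrete, the following sketch evaluates $P_{V_j} f$ in the one-dimensional Haar case ($d = 1$, $\Omega = [0,1)$), where the projection is simply a piecewise-constant local average of $f$ on dyadic cells. The Riemann-sum discretization and the function names are our own illustration, not part of the paper:

```python
def phi_jk(j, k, x):
    # 1-D Haar scaling function: Phi_{j,k}(x) = 2^{j/2} on [k/2^j, (k+1)/2^j), else 0
    y = (2 ** j) * x - k
    return 2 ** (j / 2) if 0.0 <= y < 1.0 else 0.0

def haar_projection(f, j, x, grid=2000):
    """Evaluate P_{V_j} f(x) = sum_k a_{j,k} Phi_{j,k}(x) on [0, 1),
    approximating a_{j,k} = <f, Phi_{j,k}> by a midpoint Riemann sum."""
    pts = [(i + 0.5) / grid for i in range(grid)]
    val = 0.0
    for k in range(2 ** j):
        a_jk = sum(f(t) * phi_jk(j, k, t) for t in pts) / grid
        val += a_jk * phi_jk(j, k, x)
    return val

# For f(t) = t and j = 2, the cell [0.25, 0.5) has average 0.375,
# so the projection is constant there.
p = haar_projection(lambda t: t, 2, 0.3)
```

Increasing $j$ refines the dyadic cells, which is how the scale parameter trades bias against variance in the estimators below.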
For nonparametric density estimation based on wavelet methods, it is very common to assume that the unknown density function belongs to a Besov space. It is generally acknowledged that Besov spaces are very general function spaces and can be characterized simply in terms of wavelet coefficients. Following [23], we next define Besov spaces via their wavelet coefficient characterization.
Lemma 1. 
Suppose the scaling function $\Phi$ is regular of order $\omega$ and $0 < s < \omega$. Let $f \in L^2(\Omega)$ and $1 \le p, q < \infty$. Then the following statements are equivalent:
(i) 
$f \in B^{s}_{p,q}(\Omega)$;
(ii) 
$\{2^{js}\,\| f - P_{V_j} f \|_p\} \in l^q$;
(iii) 
$\{2^{j(s + \frac{d}{2} - \frac{d}{p})}\,\| b_{j,\kappa,\mu} \|_p\} \in l^q$.
One can characterize the Besov norm of $f$ as follows:
$$\| f \|_{B^{s}_{p,q}} := \| a_{\varrho,\kappa} \|_p + \Big\| \big( 2^{j(s + \frac{d}{2} - \frac{d}{p})} \| b_{j,\kappa,\mu} \|_p \big)_{j \ge \varrho} \Big\|_q$$
with $\| b_{j,\kappa,\mu} \|_p^p = \sum_{\mu=1}^{2^d-1} \sum_{\kappa \in \Lambda_j} | b_{j,\kappa,\mu} |^p$.
We now introduce the linear wavelet estimator as follows
$$\hat{f}_n^{\mathrm{lin}}(x) := \sum_{\kappa \in \Lambda_{j_0}} \hat{a}_{j_0,\kappa}\,\Phi_{j_0,\kappa}(x). \tag{4}$$
In the above definition,
$$\hat{a}_{j,\kappa} := \frac{1}{n} \sum_{i=1}^{n} \frac{1}{1-\theta}\,\Phi_{j,\kappa}(X_i) - \gamma_{j,\kappa} \tag{5}$$
and $\gamma_{j,\kappa} = \int_\Omega \frac{\theta}{1-\theta}\, h(x)\,\Phi_{j,\kappa}(x)\,dx$. In these definitions, $\theta$ stands for the known mixture parameter of model (1). For the scaling function $\Phi(x)$, this paper uses the Daubechies wavelets [24]. It is well known that the simplest of these is the Haar wavelet; the scaling functions of the other Daubechies wavelets can be obtained by an iterative scheme starting from the Haar scaling function [24,25]. Next, we derive the convergence rate of the linear wavelet estimator, using the following notation: $x_+ := \max\{x, 0\}$; for a constant $c > 0$, $u \lesssim v$ denotes $u \le c v$, $u \gtrsim v$ denotes $v \lesssim u$, and $u \sim v$ denotes both $u \lesssim v$ and $v \lesssim u$.
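To illustrate how (4) and (5) work in practice, the sketch below implements the linear estimator in the one-dimensional Haar case on $[0,1)$. Taking $h$ to be the uniform density (so that $\gamma_{j,\kappa} = \frac{\theta}{1-\theta}\,2^{-j/2}$) is our own illustrative assumption, not an example from the paper:

```python
def haar_phi(j, k, x):
    # Haar scaling function Phi_{j,k}(x) = 2^{j/2} * 1{ k <= 2^j x < k + 1 }
    y = (2 ** j) * x - k
    return 2 ** (j / 2) if 0.0 <= y < 1.0 else 0.0

def a_hat(data, j, k, theta, gamma_jk):
    """Empirical coefficient (5): (1/n) sum_i Phi_{j,k}(X_i)/(1-theta) - gamma_{j,k}."""
    return sum(haar_phi(j, k, x) for x in data) / ((1 - theta) * len(data)) - gamma_jk

def linear_estimator(data, j0, theta, gamma):
    """Build x -> f_hat_lin(x) = sum_k a_hat_{j0,k} Phi_{j0,k}(x), as in (4)."""
    coeffs = {k: a_hat(data, j0, k, theta, gamma(j0, k)) for k in range(2 ** j0)}
    return lambda x: sum(a * haar_phi(j0, k, x) for k, a in coeffs.items())

# Illustrative setup: h uniform on [0, 1), so
# gamma_{j,k} = theta/(1-theta) * integral of h * Phi_{j,k} = theta/(1-theta) * 2^{-j/2}.
theta = 0.025
gamma = lambda j, k: theta / (1 - theta) * 2 ** (-j / 2)
```

If the observed data are themselves uniform on $[0,1)$ (so $f$ is also the uniform density), the fitted estimator returns values close to 1, consistent with the unbiasedness established in Lemma 2 below.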
Theorem 1. 
Consider model (1) and let $f(x) \in B^{s}_{p,q}(\Omega)$ with $p, q \in [1, \infty)$ and $s > \frac{d}{p}$. Define the linear wavelet estimator $\hat{f}_n^{\mathrm{lin}}(x)$ by (4), taking $2^{j_0} \sim n^{\frac{1}{2s'+d}}$, where $s' = s - d\big(\frac{1}{p} - \frac{1}{2}\big)_+$. Then
$$E\big[\|\hat{f}_n^{\mathrm{lin}}(x) - f(x)\|_2^2\big] \lesssim n^{-\frac{2s'}{2s'+d}}.$$
Remark 1. 
When $p \ge 2$, the convergence rate $n^{-\frac{2s}{2s+d}}$ of the linear wavelet estimator $\hat{f}_n^{\mathrm{lin}}(x)$ matches the optimal convergence rate [26] for standard nonparametric wavelet estimation problems.
Compared with the optimal convergence rate $n^{-\frac{2s}{2s+d}}$ [26], the linear wavelet estimator attains a slower rate in the case $1 \le p < 2$. Moreover, the definition of the linear wavelet estimator $\hat{f}_n^{\mathrm{lin}}(x)$ requires knowledge of the smoothness parameter $s$ of the unknown density $f(x)$, which is usually unavailable in practical applications. As a result, this linear wavelet estimator is not adaptive. To overcome these shortcomings, this paper employs a hard thresholding method to construct a nonlinear wavelet estimator.
We establish the following nonlinear wavelet estimator
$$\hat{f}_n^{\mathrm{non}}(x) := \sum_{\kappa \in \Lambda_{j_0}} \hat{a}_{j_0,\kappa}\,\Phi_{j_0,\kappa}(x) + \sum_{j=j_0}^{j_2} \sum_{\mu=1}^{2^d-1} \sum_{\kappa \in \Lambda_j} \hat{b}_{j,\kappa,\mu}\, I_{\{|\hat{b}_{j,\kappa,\mu}| \ge \tau r_n\}}\,\Psi_{j,\kappa,\mu}(x). \tag{6}$$
In this equation,
$$\hat{b}_{j,\kappa,\mu} := \frac{1}{n} \sum_{i=1}^{n} \frac{1}{1-\theta}\,\Psi_{j,\kappa,\mu}(X_i) - \vartheta_{j,\kappa,\mu} \tag{7}$$
and $\vartheta_{j,\kappa,\mu} = \int_\Omega \frac{\theta}{1-\theta}\, h(x)\,\Psi_{j,\kappa,\mu}(x)\,dx$. The wavelet functions $\Psi_\mu(x)$ can be constructed from the scaling relation and the scaling function $\Phi(x)$; for more details we refer to [24,25]. Here, $I_G$ denotes the indicator function of a set $G$ and $r_n = \sqrt{\frac{\ln n}{n}}$. The convergence rate of this nonlinear wavelet estimator is presented as follows.
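As a concrete sketch of the hard-thresholding step in (6) and (7), the code below computes an empirical detail coefficient with the one-dimensional Haar wavelet and zeroes it unless its magnitude reaches the threshold. Taking $h$ uniform (so that $\vartheta_{j,\kappa} = 0$, since the Haar wavelet integrates to zero against a constant) is our own illustrative assumption:

```python
def haar_psi(j, k, x):
    # 1-D Haar wavelet Psi_{j,k}: +2^{j/2} on the left half of [k/2^j, (k+1)/2^j),
    # -2^{j/2} on the right half, 0 elsewhere.
    y = (2 ** j) * x - k
    if 0.0 <= y < 0.5:
        return 2 ** (j / 2)
    if 0.5 <= y < 1.0:
        return -(2 ** (j / 2))
    return 0.0

def b_hat(data, j, k, theta, vartheta_jk):
    """Empirical detail coefficient (7):
    (1/n) sum_i Psi_{j,k}(X_i)/(1-theta) - vartheta_{j,k}."""
    return sum(haar_psi(j, k, x) for x in data) / ((1 - theta) * len(data)) - vartheta_jk

def hard_threshold(b, lam):
    """Keep the coefficient only if its magnitude reaches the threshold lam = tau * r_n."""
    return b if abs(b) >= lam else 0.0
```

On data whose density has no fine-scale detail (e.g., uniform data), the empirical coefficients concentrate near zero and the threshold removes them, which is exactly what keeps the variance term of the estimator small.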
Theorem 2. 
Consider model (1) and let $f(x) \in B^{s}_{p,q}(\Omega)$ with $p, q \in [1, \infty)$ and $s > \frac{d}{p}$. The nonlinear estimator $\hat{f}_n^{\mathrm{non}}(x)$ is constructed by (6) together with $2^{j_0} \sim n^{\frac{1}{2\omega+d}}$ ($\omega > s$) and $2^{j_2} \sim \big(\frac{n}{\ln n}\big)^{\frac{1}{d}}$; then
$$E\big[\|\hat{f}_n^{\mathrm{non}}(x) - f(x)\|_2^2\big] \lesssim (\ln n)\, n^{-\frac{2s'}{2s'+d}}.$$
Remark 2. 
Note that the convergence rate of this nonlinear wavelet estimator matches the optimal convergence rate $n^{-\frac{2s'}{2s'+d}}$ up to a $\ln n$ factor.
Remark 3. 
Like the linear estimator, the nonlinear estimator attains the optimal rate for $p \ge 2$; however, it achieves a better convergence rate than the linear estimator when $1 \le p < 2$. More importantly, the definition of the nonlinear estimator does not depend on the smoothness parameter of the unknown density function, which makes this wavelet estimator adaptive.
Remark 4. 
According to model (1), we can easily see that the density function $f(x)$ has compact support $\Omega \subseteq \mathbb{R}^d$. It should be pointed out that this compact support property plays an important role in the definitions of the two wavelet estimators and in the proofs of the two theorems. Specifically, in the definitions of the two wavelet estimators, the index sets $\kappa \in \Lambda_{j_0}$ and $\kappa \in \Lambda_j$ rely on the compact support property, and the cardinalities of $\Lambda_{j_0}$ and $\Lambda_j$ satisfy $|\Lambda_{j_0}| \sim 2^{j_0 d}$ and $|\Lambda_j| \sim 2^{jd}$, respectively. These facts are used in several steps of the proofs of the two theorems, such as (12) and (13). Hence, similar to the classical and important work of Donoho et al. [26], this paper considers the nonparametric estimation of a density with compact support. On the other hand, for nonparametric density estimation under a non-compact support assumption, some significant studies have been conducted in [27,28].

3. Numerical Experiments

This section presents numerical experiments, carried out in the R2024 software, to examine how well the linear and nonlinear wavelet estimators approximate the target density. In the experiments, the density function $f(x)$ is estimated from the observed data set $\{X_i\}_{i=1}^n$. Because the scale parameters $j_0$ and $j_2$ in the definitions of the two wavelet estimators depend on the sample size $n$, we choose $n = 4096$ in the following simulation studies. In model (1), the mixture parameter is set to $\theta = 0.025$. To assess the performance of both estimators, we adopt the mean squared error (MSE) as the evaluation criterion, i.e., $\mathrm{MSE}(f, \hat{f}) = \frac{1}{n} \sum_{i=1}^{n} \big(f(x_i) - \hat{f}(x_i)\big)^2$, a classical and effective criterion in nonparametric estimation.
Based on the definition of the linear wavelet estimator, we construct a set of linear estimators $\hat{f}_n^{\mathrm{lin}}(x)$ with different scales $j_0 = 0, 1, \ldots, \log_2(n) - 1$. In the following simulation studies, we write $j$ for the scale parameter $j_0$, i.e., $j = j_0$. By minimizing the mean squared error, we obtain the best linear wavelet estimator and its optimal scale parameter $j_0$. For example, in Example 1, the MSE results of the linear wavelet estimator for different scale parameters $j_0$ are shown in Figure 1c; the MSE of the linear wavelet estimator is smallest when the scale parameter $j_0$ is 5, 6, 7, 8, 9 or 10. For simplicity and efficiency, we choose the smallest such value, i.e., $j_0 = 5$.
The nonlinear estimator, according to definition (6), has two scale parameters, $j_0$ and $j_2$. For the first scale parameter, we use the same optimal $j_0$ as for the linear estimator. The other scale parameter $j_2$ is set to the maximum level permitted in the wavelet decomposition (i.e., $j_2 = \log_2(n) - 1$ with $n = 4096$). In addition, the optimal thresholding parameter $\lambda$ ($\lambda = \tau r_n$) is selected by minimizing the mean squared error of the nonlinear wavelet estimator. For example, in Example 1, the two scale parameters of the nonlinear wavelet estimator are $j_0 = 5$ and $j_2 = 11$; the MSE of the nonlinear wavelet estimator for different threshold values is shown in Figure 1d, from which the optimal thresholding parameter $\lambda = 0.0321070234$ is selected. According to model (1), six different functions are used as the density $f(x)$ in the following simulation study.
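The parameter-selection step described above can be sketched as a simple grid search: fit an estimator for each candidate value and keep the one minimizing the empirical MSE against the known simulation target. The helper names below are our own illustration; in the paper this role is played by the R simulation code:

```python
def mse(f_true, f_est, xs):
    """Empirical mean squared error over the evaluation points xs."""
    return sum((f_true(x) - f_est(x)) ** 2 for x in xs) / len(xs)

def select_by_mse(candidates, build, f_true, xs):
    """Return the candidate parameter (e.g., a scale j0 or a threshold lambda)
    whose fitted estimator build(param) minimizes the MSE on xs."""
    return min(candidates, key=lambda p: mse(f_true, build(p), xs))

# Toy check: approximating f(x) = x by a constant c on {0, 0.5, 1};
# the best constant among the candidates is the mean value 0.5.
best = select_by_mse([0.0, 0.25, 0.5, 0.75, 1.0],
                     build=lambda c: (lambda x: c),
                     f_true=lambda x: x,
                     xs=[0.0, 0.5, 1.0])
```

Note that this selection requires knowing the true density, which is only possible in simulation studies; in applications, a data-driven threshold such as $\tau r_n$ from Theorem 2 would be used instead.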
Example 1. 
In model (1), we take the density to be $f_1(x) = 8(4x-2)e^{-(4x-2)^2}$, with $h(x) = 0.57 + 0.5x^2 + 0.3\cos(2x)$ and $x \in [-0.5, 1.5]$. The performance of the linear and nonlinear wavelet estimators is illustrated in Figure 1a,b, respectively. As can be observed, each estimator provides an effective approximation of the density function $f(x)$. As clearly shown in Figure 1c, the optimal scale parameter is $j_0 = 5$. The optimal threshold parameter $\lambda = 0.0321070234$ is depicted in Figure 1d.
Example 2. 
In model (1), we set the unknown density to $f_2(x) = 10x\big(1 - 4x\, I_{\{x \le 0.35\}}\big)^2 I_{\{x \le 0.35\}} + \big(x^2 + 0.5x + 0.5\big) I_{\{x > 0.35\}} + 0.71$, with $h(x) = 0.7 + 0.5x^2 + 0.3\cos(2x)$ and $x \in [0, 1]$. The performance of the linear and nonlinear wavelet estimators is illustrated in Figure 2a,b, respectively. Based on Figure 2c,d, the optimal scale parameter $j_0 = 6$ and the optimal threshold parameter $\lambda = 0.0051170569$ are obtained. From these results, it is evident that the nonlinear wavelet estimator outperforms the linear estimator, particularly in capturing sharp features.
Example 3. 
In model (1), we take $f_3(x) = 0.81\cos^2 x$ on $x \in [-0.5, 1.5]$ and set $h(x) = 0.36 + 0.2x^2 + 0.1\cos(2x)$. The efficacy of the two wavelet estimators in approximating the density function $f(x)$ is evidenced in Figure 3a,b. The corresponding best parameters of the two wavelet estimators are presented in Figure 3c,d.
Example 4. 
In model (1), the density function is specified as $f_4(x) = 2\big(0.32 + 0.6x + 0.3e^{-100(x-0.3)^2}\big) I_{\{x \le 0.5\}} - 2\big(0.28 - 0.6x - 0.3e^{-100(x-1.3)^2}\big) I_{\{x > 0.5\}}$, with $h(x) = 0.43 + 0.1x^2 + 0.1\cos(2x)$ and $x \in [-1, 1]$. The performance of the linear and nonlinear wavelet estimators is illustrated in Figure 4a,b, respectively. Both estimators effectively approximate the density function, but the nonlinear estimator demonstrates superior performance near discontinuities.
Example 5. 
For this experiment, we set the target density to $f_5(x) = \big(4\sin(4\pi x) - \mathrm{sign}(x - 0.3) - \mathrm{sign}(0.72 - x)\big)/10 + 1.1$, with $h(x) = 0.7 + 0.5x^2 + 0.3\cos(2x)$ and $x \in [0, 1]$. According to Figure 5c,d, the optimal scale parameter $j_0 = 5$ and the optimal thresholding parameter $\lambda = 0.0929765886$ are obtained. Under these settings, the density function $f(x)$ estimated by the two wavelet estimators is shown in Figure 5a,b.
Example 6. 
For this experiment, we select the "Time Shift Sine" function [29] as the density $f_6(x)$. In addition, $h(x) = 0.7 + 0.5x^2 + 0.3\cos(2x)$ and $x \in [0, 1]$. Figure 6c,d show that the optimal scale parameter is $j_0 = 6$ and the optimal thresholding parameter is $\lambda = 0.006204013$. From these results, both wavelet estimators accurately estimate the unknown density.
For both the linear and nonlinear wavelet estimators, the best scale parameter, the optimal thresholding parameter and the MSE values are shown in Table 1. Based on Table 1 and the preceding results, both wavelet estimators effectively approximate the density function, with the nonlinear estimator exhibiting superior performance. More importantly, according to the results of Examples 2, 4, 5 and 6, both wavelet estimators perform well in cases with discontinuities and spikes.

4. Proof of Main Theorem and Auxiliary Results

In this section, we give some auxiliary results and the proofs of Theorems 1 and 2. It should be pointed out that this paper does not use symbolic computation software (such as Mathematica or Maple) for the following theoretical derivations.

4.1. Auxiliary Results

Lemma 2. 
For model (1), let $\hat{a}_{j,\kappa}$ be defined by (5) and $\hat{b}_{j,\kappa,\mu}$ by (7). Then we have
$$E[\hat{a}_{j,\kappa}] = a_{j,\kappa}, \qquad E[\hat{b}_{j,\kappa,\mu}] = b_{j,\kappa,\mu}.$$
Proof. 
According to $\hat{a}_{j,\kappa} = \frac{1}{n}\sum_{i=1}^{n} \frac{1}{1-\theta}\Phi_{j,\kappa}(X_i) - \gamma_{j,\kappa}$ and $\gamma_{j,\kappa} = \int_\Omega \frac{\theta}{1-\theta} h(x)\Phi_{j,\kappa}(x)\,dx$, we observe that
$$E[\hat{a}_{j,\kappa}] = E\Big[\frac{1}{n}\sum_{i=1}^{n}\frac{1}{1-\theta}\Phi_{j,\kappa}(X_i)\Big] - \int_\Omega \frac{\theta}{1-\theta}h(x)\Phi_{j,\kappa}(x)\,dx = E\Big[\frac{1}{1-\theta}\Phi_{j,\kappa}(X_1)\Big] - \int_\Omega \frac{\theta}{1-\theta}h(x)\Phi_{j,\kappa}(x)\,dx = \int_\Omega \frac{1}{1-\theta}g(x)\Phi_{j,\kappa}(x)\,dx - \int_\Omega \frac{\theta}{1-\theta}h(x)\Phi_{j,\kappa}(x)\,dx.$$
From Equation (1), $g(x) = \theta h(x) + (1-\theta) f(x)$, and hence
$$E[\hat{a}_{j,\kappa}] = \int_\Omega \frac{1}{1-\theta}\big[\theta h(x) + (1-\theta)f(x)\big]\Phi_{j,\kappa}(x)\,dx - \int_\Omega \frac{\theta}{1-\theta}h(x)\Phi_{j,\kappa}(x)\,dx = \int_\Omega f(x)\Phi_{j,\kappa}(x)\,dx = \langle f, \Phi_{j,\kappa}\rangle_{L^2(\Omega)} = a_{j,\kappa}.$$
The proof of the second equality is similar. This concludes the proof of Lemma 2.  □
Lemma 3. 
Consider model (1) with $\theta \in (0, \delta)$ and $0 < \delta < 1$. The two unbiased estimators $\hat{a}_{j,\kappa}$ and $\hat{b}_{j,\kappa,\mu}$ of the wavelet coefficients are given by (5) and (7), respectively. Then we have
$$E\big[(\hat{a}_{j,\kappa} - a_{j,\kappa})^2\big] \lesssim \frac{1}{n}, \qquad E\big[(\hat{b}_{j,\kappa,\mu} - b_{j,\kappa,\mu})^2\big] \lesssim \frac{1}{n}.$$
Proof. 
According to Lemma 2 and the definition of $\gamma_{j,\kappa}$, with $\mathrm{var}[\hat{a}_{j,\kappa}] = E[(\hat{a}_{j,\kappa} - E[\hat{a}_{j,\kappa}])^2]$, one has
$$E\big[(\hat{a}_{j,\kappa} - a_{j,\kappa})^2\big] = \mathrm{var}\Big[\frac{1}{n}\sum_{i=1}^{n}\frac{1}{1-\theta}\Phi_{j,\kappa}(X_i) - \gamma_{j,\kappa}\Big] = \frac{1}{n^2}\,\mathrm{var}\Big[\sum_{i=1}^{n}\frac{1}{1-\theta}\Phi_{j,\kappa}(X_i)\Big] \le \frac{1}{n}\,E\Big[\frac{1}{(1-\theta)^2}\Phi_{j,\kappa}^2(X_1)\Big] \lesssim \frac{1}{n}\,E\big[\Phi_{j,\kappa}^2(X_1)\big].$$
Due to the boundedness of $h(x)$, $f(x)$ and the mixture parameter $\theta$, we have
$$E\big[\Phi_{j,\kappa}^2(X_1)\big] = \int_\Omega g(x)\Phi_{j,\kappa}^2(x)\,dx = \int_\Omega \big[\theta h(x) + (1-\theta)f(x)\big]\Phi_{j,\kappa}^2(x)\,dx \lesssim \int_\Omega \Phi_{j,\kappa}^2(x)\,dx = 1.$$
Hence, we obtain the first inequality,
$$E\big[(\hat{a}_{j,\kappa} - a_{j,\kappa})^2\big] \lesssim \frac{1}{n}.$$
By similar arguments, the second inequality follows easily. Hence, Lemma 3 is proved.  □
Lemma 4. 
Consider model (1) with $\theta \in (0, \delta)$ and $0 < \delta < 1$. The wavelet coefficient estimator $\hat{b}_{j,\kappa,\mu}$ is given by (7). For $2^{jd} \le \frac{n}{\ln n}$, there exists a constant $\tau > 1$ such that
$$\Pr\big(|\hat{b}_{j,\kappa,\mu} - b_{j,\kappa,\mu}| \ge \tau r_n\big) \lesssim n^{-4}.$$
Proof. 
To simplify the proof, set
$$B_i := \frac{1}{1-\theta}\big(\Psi_{j,\kappa,\mu}(X_i) - E[\Psi_{j,\kappa,\mu}(X_i)]\big).$$
Using Lemma 2,
$$|\hat{b}_{j,\kappa,\mu} - b_{j,\kappa,\mu}| = \Big|\frac{1}{n}\sum_{i=1}^{n}\frac{1}{1-\theta}\Psi_{j,\kappa,\mu}(X_i) - \vartheta_{j,\kappa,\mu} - E[\hat{b}_{j,\kappa,\mu}]\Big| = \Big|\frac{1}{n}\sum_{i=1}^{n}\frac{1}{1-\theta}\Psi_{j,\kappa,\mu}(X_i) - E\Big[\frac{1}{1-\theta}\Psi_{j,\kappa,\mu}(X_i)\Big]\Big| = \frac{1}{n}\Big|\sum_{i=1}^{n} B_i\Big|.$$
Thus, the following can be concluded:
$$\Pr\big\{|\hat{b}_{j,\kappa,\mu} - b_{j,\kappa,\mu}| \ge \tau r_n\big\} = \Pr\Big\{\frac{1}{n}\Big|\sum_{i=1}^{n} B_i\Big| \ge \tau r_n\Big\}.$$
Note that $E[B_i] = 0$. The boundedness of $h(x)$, $f(x)$ and $\theta$ implies that
$$|B_i| = \Big|\frac{1}{1-\theta}\big(\Psi_{j,\kappa,\mu}(X_i) - E[\Psi_{j,\kappa,\mu}(X_i)]\big)\Big| \lesssim |\Psi_{j,\kappa,\mu}(X_i)| + \big|E[\Psi_{j,\kappa,\mu}(X_i)]\big| = \big|2^{jd/2}\Psi_\mu(2^j X_i - \kappa)\big| + \Big|\int_\Omega g(x)\,2^{jd/2}\Psi_\mu(2^j x - \kappa)\,dx\Big| \lesssim 2^{jd/2}.$$
This together with $2^{jd} \le \frac{n}{\ln n}$ shows that
$$|B_i| \lesssim \sqrt{\frac{n}{\ln n}}. \tag{9}$$
On the other hand, using the properties of the variance and of the wavelet function, we have
$$E[B_i^2] = E\Big[\frac{1}{(1-\theta)^2}\big(\Psi_{j,\kappa,\mu}(X_i) - E[\Psi_{j,\kappa,\mu}(X_i)]\big)^2\Big] \lesssim \mathrm{var}\big[\Psi_{j,\kappa,\mu}(X_i)\big] \le E\big[\Psi_{j,\kappa,\mu}^2(X_i)\big] \lesssim 1. \tag{10}$$
Finally, it follows from (9), (10) and Bernstein's inequality [23] that
$$\Pr\Big\{\frac{1}{n}\Big|\sum_{i=1}^{n} B_i\Big| \ge \tau r_n\Big\} \lesssim \exp\Big\{-\frac{n\tau^2 r_n^2}{2\big(1 + \tau r_n\sqrt{n/\ln n}\,/3\big)}\Big\} \lesssim \exp\Big\{-(\ln n)\,\frac{\tau^2}{2(1+\tau/3)}\Big\} = n^{-\frac{\tau^2}{2(1+\tau/3)}}.$$
Then a sufficiently large $\tau$ can be chosen such that
$$\Pr\big(|\hat{b}_{j,\kappa,\mu} - b_{j,\kappa,\mu}| \ge \tau r_n\big) \lesssim n^{-\frac{\tau^2}{2(1+\tau/3)}} \le n^{-4}.$$
The proof of Lemma 4 is completed.  □

4.2. Proof of Main Theorem

Proof of Theorem 1.
According to (2) and (3),
$$E\big[\|\hat{f}_n^{\mathrm{lin}}(x) - f(x)\|_2^2\big] = E\big[\|\hat{f}_n^{\mathrm{lin}}(x) - P_{V_{j_0}}f(x)\|_2^2\big] + \|P_{V_{j_0}}f(x) - f(x)\|_2^2. \tag{11}$$
For the first part, it can be inferred from the orthonormality of $\{\Phi_{j_0,\kappa}\}$ that
$$E\big[\|\hat{f}_n^{\mathrm{lin}}(x) - P_{V_{j_0}}f(x)\|_2^2\big] = E\Big[\Big\|\sum_{\kappa\in\Lambda_{j_0}}\big(\hat{a}_{j_0,\kappa} - a_{j_0,\kappa}\big)\Phi_{j_0,\kappa}(x)\Big\|_2^2\Big] = \sum_{\kappa\in\Lambda_{j_0}} E\big[(\hat{a}_{j_0,\kappa} - a_{j_0,\kappa})^2\big].$$
By Lemma 3, since $|\Lambda_{j_0}| \sim 2^{j_0 d}$ and $2^{j_0} \sim n^{\frac{1}{2s'+d}}$, we have
$$E\big[\|\hat{f}_n^{\mathrm{lin}}(x) - P_{V_{j_0}}f(x)\|_2^2\big] = \sum_{\kappa\in\Lambda_{j_0}} E\big[(\hat{a}_{j_0,\kappa} - a_{j_0,\kappa})^2\big] \lesssim \frac{2^{j_0 d}}{n} \sim n^{-\frac{2s'}{2s'+d}}. \tag{12}$$
For the second part, when $p \ge 2$ we have $d\big(\frac{1}{p} - \frac{1}{2}\big)_+ = 0$ and $s' = s$. Moreover, Hölder's inequality implies that
$$\|P_{V_{j_0}}f(x) - f(x)\|_2^2 = \int_\Omega |P_{V_{j_0}}f(x) - f(x)|^2 \cdot 1\,dx \le \Big(\int_\Omega |P_{V_{j_0}}f(x) - f(x)|^{p}\,dx\Big)^{\frac{2}{p}}\Big(\int_\Omega 1\,dx\Big)^{1-\frac{2}{p}} \lesssim \|P_{V_{j_0}}f(x) - f(x)\|_p^2.$$
Furthermore, using Lemma 1 and $f(x) \in B^{s}_{p,q}(\Omega)$, we get
$$\|P_{V_{j_0}}f(x) - f(x)\|_2^2 \lesssim \|P_{V_{j_0}}f(x) - f(x)\|_p^2 \lesssim 2^{-2j_0 s} \sim n^{-\frac{2s}{2s+d}}. \tag{13}$$
When $1 \le p < 2$ and $s > \frac{d}{p}$, we know that $s' = s - d\big(\frac{1}{p} - \frac{1}{2}\big)$ and $B^{s}_{p,q}(\Omega) \subseteq B^{s'}_{2,\infty}(\Omega)$. In addition, the following holds:
$$\|P_{V_{j_0}}f(x) - f(x)\|_2^2 \lesssim \sum_{j=j_0}^{\infty} 2^{-2js'} \lesssim 2^{-2j_0 s'} \sim n^{-\frac{2s'}{2s'+d}}. \tag{14}$$
Hence, for $1 \le p < \infty$, the results (13) and (14) show that
$$\|P_{V_{j_0}}f(x) - f(x)\|_2^2 \lesssim n^{-\frac{2s'}{2s'+d}}. \tag{15}$$
Finally, combining (11), (12) and (15), we prove that
$$E\big[\|\hat{f}_n^{\mathrm{lin}}(x) - f(x)\|_2^2\big] \lesssim n^{-\frac{2s'}{2s'+d}}. \qquad \square$$
Proof of Theorem 2.
According to the definitions of the linear estimator, the nonlinear estimator and the projection operator, we have
$$\hat{f}_n^{\mathrm{non}}(x) - f(x) = \big(\hat{f}_n^{\mathrm{lin}}(x) - P_{V_{j_0}}f(x)\big) - \big(f(x) - P_{V_{j_2+1}}f(x)\big) + \sum_{j=j_0}^{j_2}\sum_{\mu=1}^{2^d-1}\sum_{\kappa\in\Lambda_j}\big(\hat{b}_{j,\kappa,\mu}\, I_{\{|\hat{b}_{j,\kappa,\mu}| \ge \tau r_n\}} - b_{j,\kappa,\mu}\big)\Psi_{j,\kappa,\mu}(x).$$
Hence,
$$E\big[\|\hat{f}_n^{\mathrm{non}}(x) - f(x)\|_2^2\big] \lesssim T_1 + T_2 + D. \tag{16}$$
In this inequality,
$$T_1 := E\big[\|\hat{f}_n^{\mathrm{lin}}(x) - P_{V_{j_0}}f(x)\|_2^2\big], \quad T_2 := \|f(x) - P_{V_{j_2+1}}f(x)\|_2^2, \quad D := E\Big[\Big\|\sum_{j=j_0}^{j_2}\sum_{\mu=1}^{2^d-1}\sum_{\kappa\in\Lambda_j}\big(\hat{b}_{j,\kappa,\mu}\, I_{\{|\hat{b}_{j,\kappa,\mu}| \ge \tau r_n\}} - b_{j,\kappa,\mu}\big)\Psi_{j,\kappa,\mu}(x)\Big\|_2^2\Big].$$
For $T_1$: according to Lemma 3, (12) and $2^{j_0} \sim n^{\frac{1}{2\omega+d}}$ ($\omega > s$),
$$T_1 = \sum_{\kappa\in\Lambda_{j_0}} E\big[(\hat{a}_{j_0,\kappa} - a_{j_0,\kappa})^2\big] \lesssim \frac{2^{j_0 d}}{n} \sim n^{-\frac{2\omega}{2\omega+d}} \le n^{-\frac{2s'}{2s'+d}}. \tag{17}$$
For $T_2$: consistent with the reasoning in (15), together with the condition $2^{j_2} \sim \big(\frac{n}{\ln n}\big)^{\frac{1}{d}}$, it follows that
$$T_2 \lesssim (\ln n)\, n^{-\frac{2s'}{2s'+d}}. \tag{18}$$
It remains to prove that
$$D = E\Big[\Big\|\sum_{j=j_0}^{j_2}\sum_{\mu=1}^{2^d-1}\sum_{\kappa\in\Lambda_j}\big(\hat{b}_{j,\kappa,\mu}\, I_{\{|\hat{b}_{j,\kappa,\mu}| \ge \tau r_n\}} - b_{j,\kappa,\mu}\big)\Psi_{j,\kappa,\mu}(x)\Big\|_2^2\Big] \lesssim (\ln n)\, n^{-\frac{2s'}{2s'+d}}.$$
According to the orthonormality of the wavelet basis, we have
$$D = \sum_{j=j_0}^{j_2}\sum_{\mu=1}^{2^d-1}\sum_{\kappa\in\Lambda_j} E\big[\big(\hat{b}_{j,\kappa,\mu}\, I_{\{|\hat{b}_{j,\kappa,\mu}| \ge \tau r_n\}} - b_{j,\kappa,\mu}\big)^2\big].$$
Note that the following pointwise bound holds:
$$\big(\hat{b}_{j,\kappa,\mu}\, I_{\{|\hat{b}_{j,\kappa,\mu}| \ge \tau r_n\}} - b_{j,\kappa,\mu}\big)^2 \lesssim \big(\hat{b}_{j,\kappa,\mu} - b_{j,\kappa,\mu}\big)^2 I_{\{|\hat{b}_{j,\kappa,\mu} - b_{j,\kappa,\mu}| > \frac{\tau r_n}{2}\}} + \big(\hat{b}_{j,\kappa,\mu} - b_{j,\kappa,\mu}\big)^2 I_{\{|b_{j,\kappa,\mu}| \ge \frac{\tau r_n}{2}\}} + b_{j,\kappa,\mu}^2\, I_{\{|b_{j,\kappa,\mu}| \le 2\tau r_n\}}.$$
Then, we have $D \lesssim D_1 + D_2 + D_3$, where
$$D_1 := \sum_{j=j_0}^{j_2}\sum_{\mu=1}^{2^d-1}\sum_{\kappa\in\Lambda_j} E\big[(\hat{b}_{j,\kappa,\mu} - b_{j,\kappa,\mu})^2 I_{\{|\hat{b}_{j,\kappa,\mu} - b_{j,\kappa,\mu}| > \frac{\tau r_n}{2}\}}\big], \quad D_2 := \sum_{j=j_0}^{j_2}\sum_{\mu=1}^{2^d-1}\sum_{\kappa\in\Lambda_j} E\big[(\hat{b}_{j,\kappa,\mu} - b_{j,\kappa,\mu})^2 I_{\{|b_{j,\kappa,\mu}| \ge \frac{\tau r_n}{2}\}}\big], \quad D_3 := \sum_{j=j_0}^{j_2}\sum_{\mu=1}^{2^d-1}\sum_{\kappa\in\Lambda_j} b_{j,\kappa,\mu}^2\, I_{\{|b_{j,\kappa,\mu}| \le 2\tau r_n\}}.$$
For $D_1$: applying Hölder's inequality,
$$E\big[(\hat{b}_{j,\kappa,\mu} - b_{j,\kappa,\mu})^2 I_{\{|\hat{b}_{j,\kappa,\mu} - b_{j,\kappa,\mu}| > \frac{\tau r_n}{2}\}}\big] \le \big(E[|\hat{b}_{j,\kappa,\mu} - b_{j,\kappa,\mu}|^4]\big)^{\frac12}\Big(\Pr\big(|\hat{b}_{j,\kappa,\mu} - b_{j,\kappa,\mu}| \ge \tfrac{\tau r_n}{2}\big)\Big)^{\frac12}.$$
According to Lemmas 3 and 4, together with the almost sure bound $|\hat{b}_{j,\kappa,\mu} - b_{j,\kappa,\mu}| \lesssim \sqrt{n/\ln n}$,
$$E\big[(\hat{b}_{j,\kappa,\mu} - b_{j,\kappa,\mu})^2 I_{\{|\hat{b}_{j,\kappa,\mu} - b_{j,\kappa,\mu}| > \frac{\tau r_n}{2}\}}\big] \lesssim \Big(\frac{n}{\ln n}\, E\big[(\hat{b}_{j,\kappa,\mu} - b_{j,\kappa,\mu})^2\big]\Big)^{\frac12}\cdot\frac{1}{n^2} \lesssim \frac{1}{n^2\sqrt{\ln n}}.$$
Then
$$D_1 \lesssim \sum_{j=j_0}^{j_2}\frac{2^{jd}}{n^2\sqrt{\ln n}} \lesssim \frac{2^{j_2 d}}{n^2\sqrt{\ln n}} \lesssim \frac{1}{n\,(\ln n)^{3/2}} \lesssim n^{-\frac{2s'}{2s'+d}}.$$
Hence,
$$D_1 \lesssim n^{-\frac{2s'}{2s'+d}}.$$
For $D_2$: we define $2^{j_1} \sim n^{\frac{1}{2s'+d}}$. Then, for large $n$, $2^{j_0} \sim n^{\frac{1}{2\omega+d}} \le 2^{j_1} \sim n^{\frac{1}{2s'+d}} \le 2^{j_2} \sim \big(\frac{n}{\ln n}\big)^{\frac{1}{d}}$. Moreover, we can split $D_2$ as
$$D_2 = \Big(\sum_{j=j_0}^{j_1} + \sum_{j=j_1+1}^{j_2}\Big)\sum_{\mu=1}^{2^d-1}\sum_{\kappa\in\Lambda_j} E\big[(\hat{b}_{j,\kappa,\mu} - b_{j,\kappa,\mu})^2 I_{\{|b_{j,\kappa,\mu}| \ge \frac{\tau r_n}{2}\}}\big] =: D_{21} + D_{22}.$$
Using Lemma 3,
$$D_{21} \le \sum_{j=j_0}^{j_1}\sum_{\mu=1}^{2^d-1}\sum_{\kappa\in\Lambda_j}\frac{1}{n} \lesssim \sum_{j=j_0}^{j_1}\frac{2^{jd}}{n} \lesssim \frac{2^{j_1 d}}{n} \sim n^{-\frac{2s'}{2s'+d}}.$$
For $D_{22}$, note that
$$D_{22} \le \sum_{j=j_1+1}^{j_2}\sum_{\mu=1}^{2^d-1}\sum_{\kappa\in\Lambda_j}\frac{1}{n}\, I_{\{|b_{j,\kappa,\mu}| \ge \frac{\tau r_n}{2}\}}.$$
For $p \ge 2$, according to $f \in B^{s}_{p,q}(\Omega)$, $r_n = \sqrt{\frac{\ln n}{n}}$ and Hölder's inequality,
$$D_{22} \le \sum_{j=j_1+1}^{j_2}\sum_{\mu=1}^{2^d-1}\sum_{\kappa\in\Lambda_j}\frac{1}{n}\Big(\frac{|b_{j,\kappa,\mu}|}{\tau r_n/2}\Big)^2 \lesssim \sum_{j=j_1+1}^{j_2}\sum_{\mu=1}^{2^d-1}\sum_{\kappa\in\Lambda_j}|b_{j,\kappa,\mu}|^2 = \sum_{j=j_1+1}^{j_2}\|b_{j,\kappa,\mu}\|_2^2 \lesssim \sum_{j=j_1+1}^{j_2} 2^{jd(1-\frac{2}{p})}\|b_{j,\kappa,\mu}\|_p^2 \lesssim \sum_{j=j_1+1}^{j_2} 2^{-2js} \lesssim 2^{-2j_1 s} \lesssim n^{-\frac{2s}{2s+d}}.$$
On the other hand, in the case $1 \le p < 2$, by Lemma 1 and the definition of $2^{j_1}$,
$$D_{22} \le \sum_{j=j_1+1}^{j_2}\sum_{\mu=1}^{2^d-1}\sum_{\kappa\in\Lambda_j}\frac{1}{n}\Big(\frac{|b_{j,\kappa,\mu}|}{\tau r_n/2}\Big)^p \lesssim (\ln n)\, n^{\frac{p}{2}-1}\sum_{j=j_1+1}^{j_2}\|b_{j,\kappa,\mu}\|_p^p \lesssim (\ln n)\, n^{\frac{p}{2}-1}\sum_{j=j_1+1}^{j_2} 2^{-j(s+\frac{d}{2}-\frac{d}{p})p} \lesssim (\ln n)\, n^{\frac{p}{2}-1}\, 2^{-j_1(s+\frac{d}{2}-\frac{d}{p})p} \lesssim (\ln n)\, n^{-\frac{2s'}{2s'+d}}.$$
Combining the bounds on $D_{21}$ and $D_{22}$ shows that
$$D_2 \lesssim (\ln n)\, n^{-\frac{2s'}{2s'+d}}.$$
For $D_3$, it can be written as
$$D_3 = \Big(\sum_{j=j_0}^{j_1} + \sum_{j=j_1+1}^{j_2}\Big)\sum_{\mu=1}^{2^d-1}\sum_{\kappa\in\Lambda_j} b_{j,\kappa,\mu}^2\, I_{\{|b_{j,\kappa,\mu}| \le 2\tau r_n\}} =: D_{31} + D_{32}.$$
For the upper bound of $D_{31}$, it is easy to see that
$$D_{31} \le \sum_{j=j_0}^{j_1}\sum_{\mu=1}^{2^d-1}\sum_{\kappa\in\Lambda_j}(2\tau r_n)^2 \lesssim \sum_{j=j_0}^{j_1}\frac{\ln n}{n}\,2^{jd} \lesssim \frac{\ln n}{n}\,2^{j_1 d} \lesssim (\ln n)\, n^{-\frac{2s'}{2s'+d}}.$$
For the upper bound of $D_{32}$: when $p \ge 2$, using Lemma 1 and Hölder's inequality,
$$D_{32} \le \sum_{j=j_1+1}^{j_2}\sum_{\mu=1}^{2^d-1}\sum_{\kappa\in\Lambda_j} b_{j,\kappa,\mu}^2 \lesssim \sum_{j=j_1+1}^{j_2} 2^{jd(1-\frac{2}{p})}\|b_{j,\kappa,\mu}\|_p^2 \lesssim \sum_{j=j_1+1}^{j_2} 2^{-2js} \lesssim 2^{-2j_1 s} \lesssim n^{-\frac{2s}{2s+d}}.$$
For $1 \le p < 2$, we have $b_{j,\kappa,\mu}^2\, I_{\{|b_{j,\kappa,\mu}| \le 2\tau r_n\}} \le |b_{j,\kappa,\mu}|^p (2\tau r_n)^{2-p}$. Furthermore,
$$D_{32} \le \sum_{j=j_1+1}^{j_2}\sum_{\mu=1}^{2^d-1}\sum_{\kappa\in\Lambda_j}|b_{j,\kappa,\mu}|^p (2\tau r_n)^{2-p} \lesssim \Big(\frac{\ln n}{n}\Big)^{\frac{2-p}{2}}\sum_{j=j_1+1}^{j_2}\|b_{j,\kappa,\mu}\|_p^p \lesssim \Big(\frac{\ln n}{n}\Big)^{\frac{2-p}{2}}\, 2^{-j_1(s+\frac{d}{2}-\frac{d}{p})p} \lesssim (\ln n)\, n^{-\frac{2s'}{2s'+d}}.$$
Combining the bounds on $D_{31}$ and $D_{32}$,
$$D_3 \lesssim (\ln n)\, n^{-\frac{2s'}{2s'+d}}.$$
Due to the decomposition of $D$ and the bounds on $D_1$, $D_2$ and $D_3$, we can prove that
$$D \lesssim (\ln n)\, n^{-\frac{2s'}{2s'+d}}.$$
Together with (16)–(18), this yields
$$E\big[\|\hat{f}_n^{\mathrm{non}}(x) - f(x)\|_2^2\big] \lesssim (\ln n)\, n^{-\frac{2s'}{2s'+d}}. \qquad \square$$

5. Conclusions

This paper systematically investigates the application of wavelet methods to density estimation under a nonparametric mixture model. Under mild regularity conditions, two estimators are constructed: a linear estimator and a nonlinear adaptive estimator. We conduct theoretical analysis to derive the convergence rates of these estimators under the $L^2$-risk criterion. The results demonstrate that both estimators achieve the optimal convergence rate of nonparametric density estimation in some cases, up to a logarithmic factor for the nonlinear estimator, confirming their statistical efficiency. Furthermore, numerical experiments validate that the practical performance of the proposed methods aligns with the theoretical conclusions.

Author Contributions

Methodology, J.K.; Writing—original draft, D.L. All authors have read and agreed to the published version of the manuscript.

Funding

This paper is supported by Guangxi Natural Science Foundation (Nos. 2024GXNSFBA010379, 2023GXNSFAA026042), the National Natural Science Foundation of China (No. 12361016), Center for Applied Mathematics of Guangxi (GUET), Guangxi Colleges and Universities Key Laboratory of Data Analysis and Computation.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

All authors would like to thank the reviewers for their important comments.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Huber, P.J. A robust version of the probability ratio test. Ann. Math. Stat. 1965, 36, 1753–1758. [Google Scholar] [CrossRef]
  2. McLachlan, G.; Peel, D. Finite Mixture Models; Wiley: New York, NY, USA, 2000. [Google Scholar]
  3. Allison, D.B.; Gadbury, G.L.; Heo, M.; Fernández, J.R.; Lee, C.K.; Prolla, T.A.; Weindruch, R. A mixture model approach for the analysis of microarray gene expression data. Comput. Stat. Data Anal. 2002, 39, 1–20. [Google Scholar] [CrossRef]
  4. Aubert, J.; Bar-Hen, A.; Daudin, J.J.; Robin, S. Determination of the differentially expressed genes in microarray experiments using local FDR. BMC Bioinform. 2004, 5, 125. [Google Scholar] [CrossRef]
  5. Liao, J.G.; Lin, Y.; Selvanayagam, Z.E.; Shih, W.J. A mixture model for estimating the local false discovery rate in DNA microarray analysis. Bioinformatics 2004, 20, 2694–2701. [Google Scholar] [CrossRef] [PubMed]
  6. Shu, H.; Nan, B.; Koeppe, R. Multiple testing for neuroimaging via hidden Markov random field. Biometrics 2015, 71, 741–750. [Google Scholar] [CrossRef] [PubMed]
  7. Efron, B. Large-scale simultaneous hypothesis testing. J. Am. Stat. Assoc. 2004, 99, 96–104. [Google Scholar] [CrossRef]
  8. McLachlan, G.J.; Bean, R.W.; Jones, L.B.T. A simple implementation of a normal mixture approach to differential gene expression in multiclass microarrays. Bioinformatics 2006, 22, 1608–1615. [Google Scholar] [CrossRef]
  9. Efron, B.; Tibshirani, R.; Storey, J.D.; Tusher, V. Empirical Bayes analysis of a microarray experiment. J. Am. Stat. Assoc. 2001, 96, 1151–1160. [Google Scholar] [CrossRef]
  10. Olkin, I.; Spiegelman, C.H. A semiparametric approach to density estimation. J. Am. Stat. Assoc. 1987, 82, 858–865. [Google Scholar] [CrossRef]
  11. Priebe, C.E.; Marchette, D.J. Alternating kernel and mixture density estimates. Comput. Stat. Data Anal. 2000, 35, 43–65. [Google Scholar] [CrossRef]
  12. James, L.F.; Priebe, C.E.; Marchette, D.J. Consistent estimation of mixture complexity. Ann. Stat. 2001, 29, 1281–1296. [Google Scholar] [CrossRef]
  13. Robin, S.; Bar-Hen, A.; Daudin, J.J.; Pierre, L. A semi-parametric approach for mixture models: Application to local false discovery rate estimation. Comput. Stat. Data Anal. 2007, 51, 5483–5493. [Google Scholar] [CrossRef]
  14. Nguyen, V.H.; Matias, C. Nonparametric estimation of the density of the alternative hypothesis in a multiple testing setup. Application to local false discovery rate estimation. ESAIM Probab. Stat. 2014, 18, 584–612. [Google Scholar] [CrossRef]
  15. Patra, R.K.; Sen, B. Estimation of a two-component mixture model with applications to multiple testing. J. R. Stat. Soc. Ser. B Stat. Methodol. 2016, 78, 869–893. [Google Scholar] [CrossRef]
  16. Deb, N.; Saha, S.; Guntuboyina, A.; Sen, B. Two-component mixture model in the presence of covariates. J. Am. Stat. Assoc. 2022, 117, 1820–1834. [Google Scholar] [CrossRef]
  17. Chesneau, C.; Doosti, H.; Stone, L. Adaptive wavelet estimation of a function from an m-dependent process with possibly unbounded m. Commun. Stat.-Theory Methods 2018, 48, 1123–1135. [Google Scholar] [CrossRef]
  18. Amato, U.; Antoniadis, A.; Feis, I.D.; Gijbels, I. Wavelet-based robust estimation and variable selection in nonparametric additive models. Stat. Comput. 2022, 32, 11. [Google Scholar] [CrossRef]
  19. Niles-Weed, J.; Berthet, Q. Minimax estimation of smooth densities in Wasserstein distance. Ann. Stat. 2022, 50, 1519–1540. [Google Scholar] [CrossRef]
  20. Shirazi, E.; Doosti, H. Evaluation of threshold selection methods for adaptive wavelet quantile density estimation in the presence of bias. Commun. Stat.-Simul. Comput. 2024, 53, 6633–6646. [Google Scholar] [CrossRef]
  21. Benhaddou, R.; Liu, Q. Wavelet estimation for the nonparametric additive model in random design and long-memory dependent errors. J. Nonparametric Stat. 2024, 36, 1088–1113. [Google Scholar] [CrossRef]
  22. Rademacher, D.; Krebs, J.; Sachs, R.V. Statistical inference for wavelet curve estimators of symmetric positive definite matrices. J. Stat. Plan. Inference 2024, 231, 106140. [Google Scholar] [CrossRef]
  23. Härdle, W.; Kerkyacharian, G.; Picard, D.; Tsybakov, A. Wavelets, Approximation and Statistical Applications; Springer: New York, NY, USA, 1997. [Google Scholar]
  24. Daubechies, I. Ten Lectures on Wavelets; SIAM: Philadelphia, PA, USA, 1992. [Google Scholar]
  25. Boggess, A.; Narcowich, F.J. A First Course in Wavelets with Fourier Analysis; Wiley and Sons: Toronto, ON, Canada, 2009. [Google Scholar]
  26. Donoho, D.L.; Johnstone, I.M.; Kerkyacharian, G.; Picard, D. Density estimation by wavelet thresholding. Ann. Stat. 1996, 24, 508–539. [Google Scholar] [CrossRef]
  27. Juditsky, A.; Lambert-Lacroix, S. On minimax density estimation on R. Bernoulli 2004, 10, 187–220. [Google Scholar] [CrossRef]
  28. Reynaud-Bouret, P.; Rivoirard, V.; Tuleau-Malot, C. Adaptive density estimation: A curse of support. J. Stat. Plan. Inference 2011, 141, 115–139. [Google Scholar] [CrossRef]
  29. Chesneau, C.; Kolei, S.E.; Kou, J.K.; Navarro, F. Nonparametric estimation in a regression model with additive and multiplicative noise. J. Comput. Appl. Math. 2020, 380, 112971. [Google Scholar] [CrossRef]
Figure 1. Estimation results of the two wavelet estimators with density function f(x) = f_1(x). (a) Linear wavelet estimator result f̂_n^lin(x); (b) nonlinear wavelet estimator result f̂_n^non(x); (c) MSE(f̂_n^lin, f) with different scale parameters j_0; (d) MSE(f̂_n^non, f) with different thresholding parameters λ.
Figure 2. Estimation results of the two wavelet estimators with density function f(x) = f_2(x). (a) Linear wavelet estimator result f̂_n^lin(x); (b) nonlinear wavelet estimator result f̂_n^non(x); (c) MSE(f̂_n^lin, f) with different scale parameters j_0; (d) MSE(f̂_n^non, f) with different thresholding parameters λ.
Figure 3. Estimation results of the two wavelet estimators with density function f(x) = f_3(x). (a) Linear wavelet estimator result f̂_n^lin(x); (b) nonlinear wavelet estimator result f̂_n^non(x); (c) MSE(f̂_n^lin, f) with different scale parameters j_0; (d) MSE(f̂_n^non, f) with different thresholding parameters λ.
Figure 4. Estimation results of the two wavelet estimators with density function f(x) = f_4(x). (a) Linear wavelet estimator result f̂_n^lin(x); (b) nonlinear wavelet estimator result f̂_n^non(x); (c) MSE(f̂_n^lin, f) with different scale parameters j_0; (d) MSE(f̂_n^non, f) with different thresholding parameters λ.
Figure 5. Estimation results of the two wavelet estimators with density function f(x) = f_5(x). (a) Linear wavelet estimator result f̂_n^lin(x); (b) nonlinear wavelet estimator result f̂_n^non(x); (c) MSE(f̂_n^lin, f) with different scale parameters j_0; (d) MSE(f̂_n^non, f) with different thresholding parameters λ.
Figure 6. Estimation results of the two wavelet estimators with density function f(x) = f_6(x). (a) Linear wavelet estimator result f̂_n^lin(x); (b) nonlinear wavelet estimator result f̂_n^non(x); (c) MSE(f̂_n^lin, f) with different scale parameters j_0; (d) MSE(f̂_n^non, f) with different thresholding parameters λ.
Table 1. Results of the wavelet estimators.

                        f_1             f_2             f_3
  j_0                   5               6               9
  λ                     0.0321070234    0.0051170569    0.0255852843
  MSE(f̂_n^lin, f)       0.0082318890    0.0004092525    0.0004416361
  MSE(f̂_n^non, f)       0.0046535810    0.0002191909    0.0002000215

                        f_4             f_5             f_6
  j_0                   8               5               6
  λ                     0.1056856187    0.0929765886    0.006204013
  MSE(f̂_n^lin, f)       0.0024805800    0.0008602093    0.0005513363
  MSE(f̂_n^non, f)       0.0012282120    0.0002903662    0.0002366971
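To illustrate how figures of merit such as MSE(f̂_n^lin, f) arise in this setting, the sketch below builds a linear wavelet density estimator for the mixed model g(x) = θh(x) + (1 − θ)f(x) and unmixes it via f̂ = (ĝ − θh)/(1 − θ). This is a hypothetical minimal example, not the authors' code: it uses the Haar scaling function (for which the linear estimator reduces to a histogram on dyadic bins), and the choices h = U(0, 1), f = Beta(2, 2), θ = 0.3, n = 5000, and j_0 = 4 are illustrative assumptions, not the paper's simulation settings.

```python
import numpy as np

def linear_haar_density(sample, j0, grid):
    """Linear wavelet density estimate on [0, 1) with the Haar scaling function.

    f_hat(x) = sum_k alpha_hat[j0, k] * phi_{j0,k}(x), where
    alpha_hat[j0, k] = (1/n) * sum_i phi_{j0,k}(X_i) and
    phi_{j,k}(x) = 2^{j/2} * 1[k <= 2^j x < k + 1].
    """
    scale = 2 ** j0
    bins = np.clip(np.floor(sample * scale).astype(int), 0, scale - 1)
    # empirical scaling coefficients: sqrt(scale) * (bin count / n)
    alpha = np.sqrt(scale) * np.bincount(bins, minlength=scale) / len(sample)
    k = np.clip(np.floor(grid * scale).astype(int), 0, scale - 1)
    return np.sqrt(scale) * alpha[k]

def mse(estimate, truth):
    return float(np.mean((estimate - truth) ** 2))

rng = np.random.default_rng(0)
theta, n = 0.3, 5000
# mixed sample: with probability theta draw from h = U(0, 1), else from f = Beta(2, 2)
from_h = rng.random(n) < theta
x = np.where(from_h, rng.random(n), rng.beta(2.0, 2.0, n))

grid = np.linspace(0.0, 1.0, 256, endpoint=False)
g_hat = linear_haar_density(x, j0=4, grid=grid)
# unmix: f_hat = (g_hat - theta * h) / (1 - theta), with h = 1 on [0, 1)
f_hat = (g_hat - theta * 1.0) / (1.0 - theta)
f_true = 6.0 * grid * (1.0 - grid)  # Beta(2, 2) density
print("MSE of unmixed linear estimate:", round(mse(f_hat, f_true), 4))
```

A nonlinear estimator would additionally compute empirical wavelet coefficients at scales above j_0 and keep only those exceeding a threshold λ (hard thresholding), which is what panel (d) of each figure varies.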

Share and Cite

MDPI and ACS Style

Liang, D.; Kou, J. Nonparametric Density Estimation in a Mixed Model Using Wavelets. Axioms 2025, 14, 741. https://doi.org/10.3390/axioms14100741
