Article

Asymptotic Results of Some Conditional Nonparametric Functional Parameters in High-Dimensional Associated Data

by Hamza Daoudi 1,*, Zouaoui Chikr Elmezouar 2 and Fatimah Alshahrani 3
1 Department of Electrical Engineering, College of Technology, Tahri Mohamed University, Bechar 08000, Algeria
2 Department of Mathematics, College of Science, King Khalid University, Abha 61413, Saudi Arabia
3 Department of Mathematical Sciences, College of Science, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(20), 4290; https://doi.org/10.3390/math11204290
Submission received: 21 September 2023 / Revised: 9 October 2023 / Accepted: 10 October 2023 / Published: 14 October 2023
(This article belongs to the Section Probability and Statistics)

Abstract

In this paper, we study the asymptotic properties of some conditional functional parameters, such as the conditional distribution function, the conditional density, and the conditional hazard function, for an explanatory variable taking values in an infinite-dimensional Hilbert space and a real-valued response variable, within a quasi-associated dependence framework. We consider nonparametric kernel estimation of the conditional distribution function under quasi-associated dependence, and we establish, under general hypotheses, the almost complete convergence, with rate, of the estimator built in the associated case. The conditional hazard function is then estimated by combining the estimators of the conditional distribution function and the conditional density. We establish the asymptotic normality of the suitably normalized kernel estimator of the conditional hazard function and give the asymptotic variance explicitly. Simulation studies were conducted to investigate the behavior of these asymptotic properties on finite-sample data. All statistical analyses were performed using R software.

1. Introduction

Mathematics and statistical analysis techniques have become highly relevant to a variety of scientific sectors in recent years, including engineering, economics, clinical medicine, and healthcare. In these fields, it has been demonstrated how such methods can assist in vital tasks such as comprehension, prediction, correlation, diagnosis, therapy, and data processing.
It is important to note that the study of conditional models, which falls within nonparametric functional data analysis, is one of the most significant approaches to statistical analysis. Its primary purpose is to investigate and model the relationship between a scalar response variable and a functional regressor. Two essential asymptotic features of the resulting estimators are consistency and asymptotic normality.
Functional data are the subject of this research. As described in Ref. [1], functional data analysis (FDA) deals with infinite-dimensional variables such as curves, sets, and images. The “Big Data” revolution has spurred its rapid expansion over the past 20 years.
This may be seen by examining the topic’s history (see, for example, [2]). In [3], density and mode estimation for normed-vector-space data and the problem of excessive dimensionality in functional data are discussed, and potential remedies are offered. Nonparametric models for regression estimation were investigated in [4].
The treatment of functional data today typically relies on modern theory. For example, reference [5] established consistency rates, uniform over a subset of the explanatory variable, for a variety of conditional functionals, including the regression function, the conditional cumulative distribution, and the conditional density.
Uniform-in-bandwidth (UIB) consistency was extended to the ergodic setting in [6], where consistency rates were investigated for several functional nonparametric models, including the regression function, the conditional hazard function, the conditional distribution, and the conditional density.
In recent years, there has been a surge of interest in the statistical analysis of functional data, which arise in econometrics, medicine, environmental science, and many other fields. In the functional setting, ref. [4] made the first attempt to estimate the conditional density function and its derivatives, obtaining almost complete convergence rates in the i.i.d. case. Since that work was published, much more research has been done on estimating the conditional density and its derivatives, especially for computing the conditional mode. In particular, ref. [7] proved the almost complete convergence of a kernel estimator of the conditional mode for α-mixing data.
The conditional mode was estimated in [8,9] as the point at which the derivative of the kernel conditional density estimator vanishes. The results were comparable, but the emphasis was placed on the asymptotic normality of the estimator, established in the i.i.d. and mixing settings, respectively. Ref. [10] identified the precise terms that dominate the quadratic error of the kernel conditional density estimator.
We refer the reader to [11] for further information on the choice of the smoothing parameter in conditional density estimation with a functional explanatory variable.
The concept of quasi-association describes variables that exhibit some degree of association with one another; examples of work dealing with positively and negatively dependent random variables are [12,13,14]. Quasi-association, a striking instance of weak dependence, was first introduced in Ref. [15] for the analysis of real-valued stochastic processes. It was used by [16] for real-valued random fields, and it provides a unified framework for studying families of positively and negatively dependent random variables.
To our knowledge, the nonparametric estimation of quasi-associated random variables has been addressed in only a handful of published papers. The study in [17] establishes limit theorems for quasi-associated Hilbertian random variables; [18] explores asymptotic results for an M-estimator of the regression function under weak dependence; and in Ref. [19], the authors investigated quasi-associated processes and related asymptotic results. Ref. [20] studied the asymptotic normality of a conditional hazard function estimator within the single-index structure.
Ref. [21] explored nonparametric relative regression for associated random variables; related significant results were obtained independently in [22,23]. The authors in [24] establish strong uniform consistency of estimators of partial derivatives of multivariate density functions under weak dependence, on compact subsets of $\mathbb{R}^d$, and determine the corresponding convergence rates together with the asymptotic normality of these estimators.
In [25], the authors examine the application of the kernel nearest neighbors (k-NN) technique in a single-index regression model, focusing on the case where the explanatory variable takes values in a functional space, under an association dependence condition. The primary outcome of that study is the asymptotic distribution of the k-NN single-index estimator.
The study in [26] examines the k-NN approach within the single-index regression model, for a functional predictor and a scalar response. Its primary outcome is the almost complete rates of convergence under a weak dependence assumption.
The primary outcome of the study referenced in [27] is the establishment of the asymptotic properties, specifically the almost complete convergence rates and the asymptotic normality, of nonparametric estimation techniques for the regression function in the context of the single functional index model (SFIM). These features are derived under the assumption of quasi-association dependence.
It is worth noting that, as with all previous asymptotic results in nonparametric functional statistics, our results are closely linked to the functional space of the model.
In this research, we establish, under general hypotheses, the almost complete convergence with rate of the estimator built in the quasi-associated case, and we apply the two results on the conditional distribution function and the conditional density in [28] to estimate the conditional hazard function. We establish the asymptotic normality of the suitably normalized kernel estimator of the conditional hazard function and give the asymptotic variance explicitly.
The remainder of this work is organized as follows. Our model is introduced in Section 2. The main results are presented in Section 3. Confidence bands are discussed in Section 4. In Section 5, the behavior of our asymptotic normality result is analyzed and evaluated on finite-sample data. The conclusion is given in Section 6, and the proofs of the intermediate results are provided in Appendix A.

2. Model and Estimator

To begin, we give a precise definition of quasi-association for random variables with values in a separable Hilbert space.
Consider a separable Hilbert space $(\mathcal{H}, \langle\cdot,\cdot\rangle)$ equipped with an orthonormal basis $(e_k)_{k \geq 1}$.
Let $(R_n)_{n \in \mathbb{N}}$ be a sequence of random variables with values in $\mathcal{H}$. The sequence is said to be quasi-associated with respect to the basis $(e_k)_{k \geq 1}$ if, for any positive integer $d$ and any indices $j_1, \dots, j_d$, the $d$-dimensional sequence $\{(\langle R_i, e_{j_1}\rangle, \dots, \langle R_i, e_{j_d}\rangle),\ i \in \mathbb{N}\}$ is quasi-associated.
In this analysis, we consider $n$ quasi-associated random variables $W_i = (R_i, S_i)$, $1 \leq i \leq n$, identically distributed as the pair $W = (R, S)$ with values in $\mathcal{H} \times \mathbb{R}$, where $\mathcal{H}$ is a separable real Hilbert space whose inner product $\langle\cdot,\cdot\rangle$ generates the norm $\|\cdot\|$. We consider the semi-metric $d$ given by $d(r, r') = \|r - r'\|$ for all $r, r' \in \mathcal{H}$.
For a fixed $r \in \mathcal{H}$, let $\mathcal{N}_r$ denote a fixed neighborhood of $r$ and let $\mathcal{S}$ be a compact subset of $\mathbb{R}$. For $r \in \mathcal{N}_r$, we denote by $F^r(s)$ the conditional distribution function of $S$ given $R = r$. Using a sample of $n$ dependent observations of $W := (R, S)$, we estimate $F^r(s)$ by the kernel-type estimator $\widehat{F}^r(s)$ defined as:
$$\widehat{F}^r(s) = \frac{\sum_{i=1}^{n} K\big(h_K^{-1}\, d(r, R_i)\big)\, H\big(h_H^{-1}(s - S_i)\big)}{\sum_{i=1}^{n} K\big(h_K^{-1}\, d(r, R_i)\big)}, \quad s \in \mathbb{R},$$
where $K$ denotes a kernel, $H$ is a given distribution function, and the sequences of positive real numbers $h_K = h_{K,n}$ (resp. $h_H = h_{H,n}$) converge to zero as $n$ tends to infinity. We define the estimator $\widehat{f}^r(s)$ of the conditional density $f^r(s)$ by:
$$\widehat{f}^r(s) = \frac{h_H^{-1}\sum_{i=1}^{n} K\big(h_K^{-1}\, d(r, R_i)\big)\, H'\big(h_H^{-1}(s - S_i)\big)}{\sum_{i=1}^{n} K\big(h_K^{-1}\, d(r, R_i)\big)}, \quad s \in \mathbb{R},$$
where $H'$ denotes the derivative of $H$.
Finally, we obtain the conditional hazard function estimator $\widehat{Z}^r(s)$, defined as:
$$\widehat{Z}^r(s) = \frac{\widehat{f}^r(s)}{1 - \widehat{F}^r(s)}, \quad s \in \mathbb{R}.$$
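To make the construction concrete, the following R sketch implements the three estimators above for curves discretized on a common grid. It is only an illustration under stated assumptions: the semi-metric d is taken to be the discretized L2 distance, K is the quadratic kernel used later in Section 5, and the cumulative kernel H and its derivative H' are taken to be the standard Gaussian distribution and density; none of these specific choices, nor the function names, are prescribed by the paper.

# Illustrative R sketch of the estimators F^r(s), f^r(s), Z^r(s).
# Assumptions (not prescribed by the paper): d = discretized L2 distance,
# K = quadratic kernel on [0,1], H = pnorm and H' = dnorm.
d_L2   <- function(r1, r2) sqrt(mean((r1 - r2)^2))              # semi-metric d
K_quad <- function(u) ifelse(u >= 0 & u <= 1, 1.5 * (1 - u^2), 0)
H_cdf  <- pnorm                                                 # cumulative kernel H
H_der  <- dnorm                                                 # its derivative H'

cond_estimators <- function(s, r, R_curves, S_resp, hK, hH) {
  # R_curves: n x p matrix of discretized curves R_i; S_resp: responses S_i
  Ki   <- apply(R_curves, 1, function(Ri) K_quad(d_L2(r, Ri) / hK))
  Fhat <- sum(Ki * H_cdf((s - S_resp) / hH)) / sum(Ki)          # conditional CDF
  fhat <- sum(Ki * H_der((s - S_resp) / hH)) / (hH * sum(Ki))   # conditional density
  Zhat <- fhat / (1 - Fhat)                                     # conditional hazard
  c(F = Fhat, f = fhat, Z = Zhat)
}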

3. The Consistency and Asymptotic Normality of the Kernel Estimators

3.1. Assumptions and Necessary Background Knowledge

When there is no risk of confusion, we denote by $l$ and/or $l'$ strictly positive generic constants. Throughout, $r$ denotes a fixed point in $\mathcal{H}$ and $\mathcal{N}_r$ a fixed neighborhood of $r$, and we assume that the process $\{(R_i, S_i),\ i \in \mathbb{N}\}$ is stationary.
Let $\lambda_k$ denote the covariance coefficient defined by:
$$\lambda_k = \sup_{s \geq k} \sum_{|i-j| \geq s} \lambda_{i,j},$$
where
$$\lambda_{i,j} = \sum_{k=1}^{\infty}\sum_{l=1}^{\infty}\big|\mathrm{Cov}(R_{ik}, R_{jl})\big| + \sum_{k=1}^{\infty}\big|\mathrm{Cov}(R_{ik}, S_j)\big| + \sum_{l=1}^{\infty}\big|\mathrm{Cov}(S_i, R_{jl})\big| + \big|\mathrm{Cov}(S_i, S_j)\big|,$$
and $R_{ik} := \langle R_i, e_k \rangle$ denotes the $k$-th component of $R_i$. For $h > 0$, let $B(r, h) := \{ r' \in \mathcal{H} : d(r, r') < h \}$ denote the ball of center $r$ and radius $h$.
To establish the almost complete convergence of the estimator $\widehat{F}^r(s)$ and the other results of this paper, we rely on the following assumptions.
(P1)
$P(R \in B(r,h)) = \phi_r(h) > 0$, and the function $\phi_r(h)$ is differentiable at $0$.
Moreover, there exists a function $\beta(r, \cdot)$ such that:
$$\forall s \in [0,1], \quad \lim_{h_K \to 0} \frac{\phi_r(s\, h_K)}{\phi_r(h_K)} = \beta(r, s).$$
(P2)
The conditional density $f^r(s)$ and the conditional distribution function $F^r(s)$ satisfy Hölder-type conditions; that is, for all $(r_1, r_2) \in \mathcal{N}_r \times \mathcal{N}_r$ and $(s_1, s_2) \in \mathcal{S}^2$:
$$\big|f^{r_1}(s_1) - f^{r_2}(s_2)\big| \leq l\,\big(d^{z_1}(r_1, r_2) + |s_1 - s_2|^{z_2}\big), \quad z_1 > 0,\ z_2 > 0,$$
$$\big|F^{r_1}(s_1) - F^{r_2}(s_2)\big| \leq l\,\big(d^{z_3}(r_1, r_2) + |s_1 - s_2|^{z_4}\big), \quad z_3 > 0,\ z_4 > 0.$$
Here, $\mathcal{S}$ is a fixed compact subset of $\mathbb{R}$.
(P3)
The cumulative kernel $H$ is a differentiable function, and its derivative $H'$ is a positive, bounded, Lipschitz continuous function satisfying:
$$\int |t|^{z_2}\, H'(t)\, dt < \infty \quad \text{and} \quad \int H'^2(t)\, dt < \infty.$$
(P4)
The kernel $K$ is a bounded, continuous, Lipschitz function supported on $[0,1]$ and satisfying:
$$l\,\mathbb{1}_{[0,1]}(\cdot) < K(\cdot) < l'\,\mathbb{1}_{[0,1]}(\cdot),$$
where $\mathbb{1}_{[0,1]}$ denotes the indicator function of $[0,1]$.
(P5)
The sequence of random pairs $(R_i, S_i)_{i \in \mathbb{N}}$ is quasi-associated, with covariance coefficients $\lambda_k$, $k \in \mathbb{N}$, satisfying:
$$\exists\, a > 0,\ \exists\, l > 0: \quad \lambda_k \leq l\, e^{-a k}.$$
(P6)
The joint distributions, defined for every pair $(i, j)$ by:
$$\Psi_{i,j}(h) = P\big[(R_i, R_j) \in B(r,h) \times B(r,h)\big],$$
satisfy:
$$\sup_{i \neq j} \Psi_{i,j}(h) = O\big(\phi_r^2(h_K)\big).$$
(P7)
The bandwidths $(h_K, h_H)$ satisfy:
(i) $\lim_{n \to \infty} h_K = 0$ and $\lim_{n \to \infty} h_H = 0$;
(ii) $\lim_{n \to \infty} \big(h_H^{b_2} + h_K^{b_1}\big)\, n\, h_H\, \phi_r(h_K) = 0$;
(iii) $\lim_{n \to \infty} \dfrac{\log^5(n)}{n\, h_H^{j}\, \phi_r(h_K)} = 0$, for $j = 0$ and $j = 1$.

3.2. Brief Comment on the Conditions

Assumption (P1) describes the concentration property of the explanatory variable on small balls. The function $\beta(r, \cdot)$ plays a key role in any asymptotic study, especially in the variance term. Condition (P2) regulates the smoothness of the functional space and is needed to evaluate the bias component of the convergence rates. Assumptions (P3) and (P4) concern the cumulative kernel $H$ and the kernel $K$; they allow the bias term to be removed from the asymptotic normality result. Assumption (P5) is a standard restriction on the quasi-associated dependence. Assumption (P6) controls the joint distribution of the pair $(R_i, R_j)$ and enables us to establish the asymptotic normality of our model. Assumption (P7) is required to rule out the bias in the asymptotic normality result and is classical in functional estimation in finite- or infinite-dimensional spaces.

3.3. Almost Complete Convergence of F ^ r ( s )

Theorem 1.
Based on assumptions (P1)–(P7), we have:
$$\big|\widehat{F}^r(s) - F^r(s)\big| = O\big(h_K^{b_1} + h_H^{b_2}\big) + O_{a.co.}\left(\left(\frac{\log n}{n\,\phi_r(h_K)}\right)^{1/2}\right).$$
Proof of Theorem 1.
The proof is based on the following decomposition, together with the lemmas listed below:
$$\widehat{F}^r(s) - F^r(s) = \frac{1}{\widehat{F}_D^r}\Big\{\big(\widehat{F}_N^r(s) - \mathbb{E}\widehat{F}_N^r(s)\big) - \big(F^r(s) - \mathbb{E}\widehat{F}_N^r(s)\big)\Big\} - \frac{F^r(s)}{\widehat{F}_D^r}\Big(\widehat{F}_D^r - \mathbb{E}\widehat{F}_D^r\Big),$$
where
$$\widehat{F}_N^r(s) = \frac{1}{n\,\mathbb{E}[K_1(r)]}\sum_{i=1}^{n} K_i(r)\, H_i(s)$$
and
$$\widehat{F}_D^r = \frac{1}{n\,\mathbb{E}[K_1(r)]}\sum_{i=1}^{n} K_i(r),$$
with
$$K_i(r) = K\big(h_K^{-1}\, d(r, R_i)\big) \quad \text{and} \quad H_i(s) = H\big(h_H^{-1}(s - S_i)\big).$$
Lemma 1.
Based on assumptions (P1)–(P4) and (P6):
$$\frac{1}{\widehat{F}_D^r}\Big(\widehat{F}_N^r(s) - \mathbb{E}\widehat{F}_N^r(s)\Big) = O_{a.co.}\left(\left(\frac{\log n}{n\,\phi_r(h_K)}\right)^{1/2}\right).$$
Corollary 1.
Based on assumptions (P1)–(P4) and (P6), we have:
$$\sum_{n=1}^{\infty} P\Big(\big|\widehat{F}_D^r\big| < \tfrac{1}{2}\Big) < \infty.$$
Lemma 2.
Based on assumptions (P1)–(P6), we have:
$$\frac{1}{\widehat{F}_D^r}\Big(F^r(s) - \mathbb{E}\widehat{F}_N^r(s)\Big) = O\big(h_K^{b_1} + h_H^{b_2}\big).$$
Lemma 3.
Based on assumptions (P1)–(P7), we have:
$$\widehat{F}_D^r - \mathbb{E}\widehat{F}_D^r = O_{a.co.}\left(\left(\frac{\log n}{n\,\phi_r(h_K)}\right)^{1/2}\right).$$
Theorem 1 follows from these lemmas, whose proofs appear in Appendix A.

3.4. Almost Complete Convergence of Z ^ r ( s )

Theorem 2.
Based on assumptions (P1)–(P7), we have:
$$\big|\widehat{Z}^r(s) - Z^r(s)\big| = O\big(h_K^{b_1} + h_H^{b_2}\big) + O_{a.co.}\left(\left(\frac{\log n}{n\, h_H\,\phi_r(h_K)}\right)^{1/2}\right).$$
Proof of Theorem 2.
The proof is based on the following decomposition:
$$\widehat{Z}^r(s) - Z^r(s) = \frac{1}{1 - \widehat{F}^r(s)}\Big(\widehat{f}^r(s) - f^r(s)\Big) + \frac{Z^r(s)}{1 - \widehat{F}^r(s)}\Big(\widehat{F}^r(s) - F^r(s)\Big),$$
so that asymptotic results for the estimator $\widehat{Z}^r(s)$ can be readily deduced from those for $\widehat{F}^r(s)$ and $\widehat{f}^r(s)$.
Given this decomposition, it only remains to establish the following results.
Lemma 4
(See proof of Theorem 1). Based on assumptions (P1)–(P7), we obtain:
$$\widehat{F}^r(s) - F^r(s) = O\big(h_K^{b_1} + h_H^{b_2}\big) + O_{a.co.}\left(\left(\frac{\log n}{n\,\phi_r(h_K)}\right)^{1/2}\right).$$
Lemma 5
(See Bouaker et al. (2021) [28]). Based on assumptions (P1)–(P7), we obtain:
$$\widehat{f}^r(s) - f^r(s) = O\big(h_K^{b_1} + h_H^{b_2}\big) + O_{a.co.}\left(\left(\frac{\log n}{n\, h_H\,\phi_r(h_K)}\right)^{1/2}\right).$$
Corollary 2.
Based on assumptions (P1)–(P7), we obtain:
$$\exists\, \delta > 0, \quad \sum_{n=1}^{\infty} P\Big(1 - \widehat{F}^r(s) < \delta\Big) < \infty.$$

3.5. Asymptotic Normality of the Conditional Hazard Function Estimate

Theorem 3.
Under assumptions (P1)–(P7), we have, for any $r \in \mathcal{A}$:
$$\sqrt{n\, h_H\,\phi_r(h_K)}\,\Big(\widehat{Z}^r(s) - Z^r(s)\Big) \xrightarrow{\mathcal{D}} \mathcal{N}\big(0, \sigma_h^2(r)\big) \quad \text{as } n \to \infty,$$
where
$$\mathcal{A} = \big\{ r \in \mathcal{H} :\ f^r(s)\,\big(1 - F^r(s)\big) \neq 0 \big\}$$
and
$$\sigma_h^2(r) = \frac{C_2\, Z^r(s)}{C_1^2\,\big(1 - F^r(s)\big)} \int H'^2(t)\, dt,$$
where
$$C_j = K^j(1) - \int_0^1 (K^j)'(s)\, \beta(r, s)\, ds, \quad \text{for } j = 1, 2.$$
The symbol $\xrightarrow{\mathcal{D}}$ denotes convergence in distribution.
Proof of Theorem 3.
The proof is based on the following decomposition:
$$\widehat{Z}^r(s) - Z^r(s) = \frac{1}{\widehat{F}_D^r - \widehat{F}_N^r(s)}\Big(\widehat{f}_N^r(s) - f^r(s)\Big) - \frac{Z^r(s)}{\widehat{F}_D^r - \widehat{F}_N^r(s)}\Big(\widehat{F}_D^r - \widehat{F}_N^r(s) + F^r(s) - 1\Big),$$
where
$$\widehat{f}_N^r(s) = \frac{1}{n\, h_H\,\mathbb{E}[K_1(r)]}\sum_{i=1}^{n} K_i(r)\, H'_i(s), \quad \text{with } H'_i(s) = H'\big(h_H^{-1}(s - S_i)\big).$$
Lemma 6.
Based on assumptions (P1)–(P7), we have:
$$\sqrt{n\, h_H\,\phi_r(h_K)}\,\Big(\widehat{f}_N^r(s) - \mathbb{E}\widehat{f}_N^r(s)\Big) \xrightarrow{\mathcal{D}} \mathcal{N}\big(0, \sigma_f^2(r)\big), \quad \text{as } n \to \infty,$$
where
$$\sigma_f^2(r) = \frac{C_2\, f^r(s)}{C_1^2}\int H'^2(t)\, dt.$$
Lemma 7.
(See Daoudi, H. and Mechab, B. (2019) [22]). Under assumptions (P1)–(P5), we have:
$$\mathbb{E}\widehat{f}_N^r(s) - f^r(s) = O\big(h_H^{b_2}\big) + O\big(h_K^{b_1}\big), \quad \text{as } n \to \infty.$$
Lemma 8.
Based on assumptions (P1)–(P7), we have:
$$\sqrt{n\, h_H\,\phi_r(h_K)}\,\Big(\widehat{F}_D^r - \widehat{F}_N^r(s) + F^r(s) - 1\Big) \to 0 \quad \text{in probability, as } n \to \infty.$$
Corollary 3.
Based on assumptions (P1)–(P7), we have:
$$\widehat{F}_D^r - \widehat{F}_N^r(s) \to 1 - F^r(s) \quad \text{in probability}.$$

4. Confidence Bands

An important aspect of statistical analysis is the construction of confidence bands for estimates. Such bands provide a range of values within which we can be confident that the true value lies, and computing and interpreting them gives a better understanding of the uncertainty attached to the estimation. The objective of this section is to build confidence intervals for the true value of $Z^r(s)$ for a given curve $r$. The nonparametric estimation relies on an asymptotic variance that depends on several unknown functions; in our setting, it is given by:
$$\sigma_Z^2(r) = \frac{C_2\, Z^r(s)}{C_1^2\,\big(1 - F^r(s)\big)}.$$
The quantities $Z^r(s)$, $F^r(s)$, $C_1$, and $C_2$ are unknown in practice and must be estimated. Confidence bands can nevertheless be derived, since $\sigma_Z^2(r)$ is given in closed form: a plug-in estimate $\widehat{\sigma}_Z^2(r)$ is obtained by substituting $\widehat{Z}^r(s)$, $\widehat{F}^r(s)$, $\widehat{C}_1$, and $\widehat{C}_2$ for $Z^r(s)$, $F^r(s)$, $C_1$, and $C_2$, respectively:
$$\widehat{\sigma}_Z^2(r) = \frac{\widehat{C}_2\, \widehat{Z}^r(s)}{\widehat{C}_1^2\,\big(1 - \widehat{F}^r(s)\big)}.$$
The constants $C_1$ and $C_2$ are estimated empirically by:
$$\widehat{C}_1 = \frac{1}{n\,\phi_r(h_K)}\sum_{i=1}^{n} K\big(h_K^{-1}\, d(r, R_i)\big), \qquad \widehat{C}_2 = \frac{1}{n\,\phi_r(h_K)}\sum_{i=1}^{n} K^2\big(h_K^{-1}\, d(r, R_i)\big),$$
where
$$\phi_r(h_K) = \frac{1}{n}\sum_{i=1}^{n}\mathbb{1}_{\{\|r - R_i\| < h_K\}}.$$
The asymptotic confidence band of level $1 - \zeta$ for $Z^r(s)$ is then given by:
$$\left[\ \widehat{Z}^r(s) - u_{1-\zeta/2}\left(\frac{\widehat{\sigma}_Z^2(r)}{n\,\phi_r(h_K)}\right)^{1/2},\ \ \widehat{Z}^r(s) + u_{1-\zeta/2}\left(\frac{\widehat{\sigma}_Z^2(r)}{n\,\phi_r(h_K)}\right)^{1/2}\ \right],$$
where $u_{1-\zeta/2}$ denotes the quantile of order $1-\zeta/2$ of the standard normal distribution.
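As an illustration only, the plug-in band above can be computed as in the following R sketch. It reuses the hypothetical helpers d_L2, K_quad, and cond_estimators sketched in Section 2, which are assumptions of this sketch and not code supplied by the paper.

# Plug-in confidence band for Z^r(s); d_L2, K_quad, cond_estimators are the
# illustrative helpers sketched in Section 2 (assumptions, not the paper's code).
conf_band <- function(s, r, R_curves, S_resp, hK, hH, level = 0.95) {
  n      <- nrow(R_curves)
  di     <- apply(R_curves, 1, function(Ri) d_L2(r, Ri))
  phi    <- mean(di < hK)                                    # empirical phi_r(h_K)
  C1     <- sum(K_quad(di / hK))   / (n * phi)               # hat{C}_1
  C2     <- sum(K_quad(di / hK)^2) / (n * phi)               # hat{C}_2
  est    <- cond_estimators(s, r, R_curves, S_resp, hK, hH)
  sigma2 <- C2 * est[["Z"]] / (C1^2 * (1 - est[["F"]]))      # hat{sigma}_Z^2(r)
  half   <- qnorm(1 - (1 - level) / 2) * sqrt(sigma2 / (n * phi))
  c(lower = est[["Z"]] - half, upper = est[["Z"]] + half)
}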

5. A Simulation Study

In this section, we examine how our asymptotic normality result behaves on finite-sample data. Our primary purpose is to demonstrate how easily the conditional hazard function estimator can be implemented and to study the effect of the dependence on this asymptotic property.
For this purpose, we generate functional observations from the following functional nonparametric model:
$$Z_i = r(W_i) + \epsilon_i, \quad \text{for } i = 1, \dots, n,$$
where $\epsilon_i \sim \mathcal{N}(0, 0.5)$.
Linear processes of quasi-associated variables are well known to satisfy requirement (P6). Accordingly, the quasi-associated functional regressors are constructed as:
$$W_i(t) = \sum_{j=i+1}^{i+m} \Gamma_j(t),$$
where
$$\Gamma_j(t) = s_j\, t^2 + h_j\, t + g_j, \quad t \in [0,1],$$
with $(s_j)_j \sim \mathcal{N}(0, \tfrac{1}{2})$ (resp. $(h_j)_j \sim \mathcal{N}(1, \tfrac{1}{2})$ and $(g_j)_j \sim \mathcal{N}(1, \tfrac{1}{2})$).
Figure 1 shows the curves $W_i$, discretized on the same 100-point grid of $[0,1]$, for $m = 1$, $4$, and $10$. The scalar response $Z_i$ is then computed through the regression operator:
$$r(w) = 5\int_0^1 \exp\big(w(t)\big)\, dt.$$
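A minimal R sketch of this data-generating mechanism is given below. It is an illustration under stated assumptions: curves are discretized on a 100-point grid of [0,1], the integral in r(w) is approximated by the grid mean, and the N(., 1/2) notation is read as specifying the variance; the function names are hypothetical.

# Illustrative generator for the simulation design of this section.
gen_sample <- function(n, m, grid) {
  s_coef <- rnorm(n + m, 0, sqrt(0.5))           # s_j ~ N(0, 1/2)
  h_coef <- rnorm(n + m, 1, sqrt(0.5))           # h_j ~ N(1, 1/2)
  g_coef <- rnorm(n + m, 1, sqrt(0.5))           # g_j ~ N(1, 1/2)
  Gamma  <- sapply(seq_len(n + m), function(j)   # column j holds Gamma_j(t)
    s_coef[j] * grid^2 + h_coef[j] * grid + g_coef[j])
  W <- t(sapply(seq_len(n), function(i)          # W_i(t) = sum_{j=i+1}^{i+m} Gamma_j(t)
    rowSums(Gamma[, (i + 1):(i + m), drop = FALSE])))
  r_op <- function(w) 5 * mean(exp(w))           # assumed reading of r(w)
  rW   <- apply(W, 1, r_op)
  Z    <- rW + rnorm(n, 0, sqrt(0.5))            # scalar responses
  list(W = W, Z = Z, r_W = rW)
}

grid_t <- seq(0, 1, length.out = 100)
dat    <- gen_sample(n = 200, m = 4, grid = grid_t)
matplot(grid_t, t(dat$W), type = "l", lty = 1)   # curves as in Figure 1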
By examining the error distribution of $\epsilon_i$, we can derive the theoretical conditional distribution function of $Z$ given $W = w$: it is obtained explicitly by shifting the distribution of $\epsilon_i$ by $r(w)$. The theoretical conditional hazard function is therefore easily determined.
To illustrate the asymptotic normality of this function, we fix one curve $w = W_0$ and one point $z = Z_0$ from the generated data, then draw independent $n$-samples from the same data-generating process and compute, for each, the quantity:
$$\sqrt{n\, h_H\,\phi_r(h_K)}\;\widehat{\sigma}_{h_K}^{-1}(r)\,\Big(\widehat{Z}^r(s) - Z^r(s)\Big),$$
where $\widehat{\sigma}_{h_K}(r)$ is the standard deviation estimate of the previous section.
The collected sample is then tested for normality. The bandwidths were chosen by cross-validation, and we used the quadratic kernel:
$$K(t) = \frac{3}{2}\,\big(1 - t^2\big)\,\mathbb{1}_{[0,1]}(t).$$
Figure 2 compares the collected sample with the standard normal distribution for $m = 1$, $4$, and $10$.
The estimator displays favorable and reliable behavior in practical applications. Furthermore, the correlation of the data has a significant impact on the rate at which the asymptotic normality is attained: the quality of the normal approximation diminishes as $m$ increases.
Table 1 summarizes the p-value of the Kolmogorov–Smirnov test for each value of $m$, which confirms the inverse relationship between the correlation of the data and the rate at which the asymptotic normality is attained.
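The Monte Carlo check behind Figure 2 and Table 1 can be sketched in R as follows. This is an illustration under assumptions: it reuses the hypothetical helpers sketched above, replaces the paper's cross-validated bandwidths by simple quantile-based choices, and takes 0.5 as the error variance so that Z given W = w follows N(r(w), 0.5), from which the true hazard at the fixed point is computed.

# Monte Carlo normality check for the standardized hazard estimator.
# Assumptions: quantile bandwidths instead of cross-validation; gen_sample,
# cond_estimators, d_L2, K_quad are the illustrative helpers sketched above.
set.seed(1)
n <- 200; m <- 4; grid_t <- seq(0, 1, length.out = 100)
fixed_dat <- gen_sample(n, m, grid_t)
w0 <- fixed_dat$W[1, ];  s0 <- fixed_dat$Z[1]              # fixed curve and point
Ztrue <- dnorm(s0, fixed_dat$r_W[1], sqrt(0.5)) /
         (1 - pnorm(s0, fixed_dat$r_W[1], sqrt(0.5)))      # true hazard at (w0, s0)

std <- replicate(300, {
  dat <- gen_sample(n, m, grid_t)
  di  <- apply(dat$W, 1, function(Wi) d_L2(w0, Wi))
  hK  <- unname(quantile(di, 0.3))                         # hypothetical bandwidths
  hH  <- unname(quantile(abs(s0 - dat$Z), 0.3))
  est <- cond_estimators(s0, w0, dat$W, dat$Z, hK, hH)
  phi <- mean(di < hK)
  C1  <- sum(K_quad(di / hK)) / (n * phi)
  C2  <- sum(K_quad(di / hK)^2) / (n * phi)
  sig <- sqrt(C2 * est[["Z"]] / (C1^2 * (1 - est[["F"]]))) # plug-in std deviation
  sqrt(n * hH * phi) * (est[["Z"]] - Ztrue) / sig
})
std <- std[is.finite(std)]            # drop degenerate replicates, if any
qqnorm(std); qqline(std)              # compare with Figure 2
ks.test(std, "pnorm")                 # p-value as in Table 1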

6. Conclusions and Perspectives

The present study examines the asymptotic properties of certain conditional functional parameters, specifically the conditional distribution function, density, and hazard function, for an explanatory variable taking values in an infinite-dimensional Hilbert space and a real-valued response variable within a quasi-associated dependence framework. The asymptotic properties of the estimators, namely almost complete convergence and asymptotic normality, are derived under standard conditions that encompass the key components of the study, such as the functional dependence assumption and the nonparametric nature of the model. The computational aspect highlights the practical value of this estimator, owing to its efficiency in updating results as new information arrives. The present contribution also opens interesting avenues for further work. It would be of interest to extend the asymptotic properties of the proposed estimators to incomplete data, such as missing, censored, or truncated data. Another potential direction for future research is the exploration of more intricate dependence structures, such as ergodic spatial dependence.

Author Contributions

Conceptualization, Z.C.E. and H.D.; methodology, Z.C.E. and H.D.; software, H.D.; validation, Z.C.E. and H.D.; formal analysis, Z.C.E., F.A. and H.D.; investigation, Z.C.E., F.A. and H.D.; resources, Z.C.E., F.A. and H.D.; data curation, H.D.; writing—original draft preparation, H.D.; writing—review and editing, Z.C.E., F.A. and H.D.; visualization, H.D.; supervision, Z.C.E. and F.A.; project administration, Z.C.E.; funding acquisition, Z.C.E. and F.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research project was funded by (1) Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2023R358), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia, and (2) The Deanship of Scientific Research at King Khalid University through Small group Research Project under grant number R.G.P. 1/366/44.

Data Availability Statement

Not applicable.

Acknowledgments

The authors express their gratitude to the Editors, the Associate Editor, and the anonymous reviewers for their insightful comments and suggestions, which significantly enhanced the overall quality of a previous iteration of this manuscript. They thank and extend their appreciation to the Deanship of Scientific Research at Princess Nourah bint Abdulrahman University and King Khalid University for funding this work.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Corollary A1
(See [17]). Let $(R_n)_{n \in \mathbb{N}}$ be a quasi-associated sequence of random variables with values in $\mathcal{H}$. Let $h \in BL(\mathcal{H}^{I}) \cap L$ and $M \in BL(\mathcal{H}^{J}) \cap L$, for some finite disjoint subsets $I, J \subset \mathbb{N}$. Then
$$\big|\mathrm{Cov}\big(h(R_i,\, i \in I),\, M(R_j,\, j \in J)\big)\big| \leq \mathrm{Lip}(h)\,\mathrm{Lip}(M)\sum_{i \in I}\sum_{j \in J}\sum_{k=1}^{\infty}\sum_{l=1}^{\infty}\big|\mathrm{Cov}(R_{ik}, R_{jl})\big|,$$
where $BL(\mathcal{H}^{u})$, $u > 0$, is the set of bounded Lipschitz functions $h : \mathcal{H}^{u} \to \mathbb{R}$, and $L$ is the set of bounded functions.
Corollary A2
(See [29]). Let $R_1, \dots, R_n$ be real random variables such that $\mathbb{E}(R_j) = 0$ and $P(|R_j| \leq M) = 1$ for all $j = 1, \dots, n$ and some $M < \infty$; let $\sigma_n^2 = \mathrm{Var}\big(\sum_{j=1}^{n} R_j\big)$.
Assume, furthermore, that there exist $K < \infty$ and $\beta > 0$ such that, for all $u$-tuples $(s_1, \dots, s_u) \in \mathbb{N}^u$ and $v$-tuples $(t_1, \dots, t_v) \in \mathbb{N}^v$ with $1 \leq s_1 \leq \dots \leq s_u \leq t_1 \leq \dots \leq t_v \leq n$,
the following inequality is fulfilled:
$$\big|\mathrm{Cov}\big(R_{s_1}\cdots R_{s_u},\, R_{t_1}\cdots R_{t_v}\big)\big| \leq K^2\, M^{u+v-2}\, v\, e^{-\beta (t_1 - s_u)}.$$
Then,
$$P\Big(\sum_{j=1}^{n} R_j > t\Big) \leq \exp\left\{-\frac{t^2/2}{A_n + B_n^{1/3}\, t^{5/3}}\right\}$$
for some
$$A_n \geq \sigma_n^2$$
and
$$B_n = \left(\frac{16\, n\, K^2}{9\, A_n\,\big(1 - e^{-\beta}\big)} \vee 1\right)\frac{2\,(K \vee M)}{1 - e^{-\beta}}.$$
Proof of Lemma 1.
We set:
$$\Delta_i = \frac{1}{n\,\mathbb{E}[K_1]}\,\psi(R_i, S_i), \quad 1 \leq i \leq n,$$
where
$$\psi(R_i, S_i) = K\big(h_K^{-1}\, d(r, R_i)\big)\, H\big(h_H^{-1}(s - S_i)\big) - \mathbb{E}[K_1 H_1], \quad 1 \leq i \leq n,$$
with $R_i \in \mathcal{H}$ and $S_i \in \mathbb{R}$. Then $\mathbb{E}(\Delta_i) = 0$ and
$$\big|\widehat{F}_N^r(s) - \mathbb{E}\widehat{F}_N^r(s)\big| = \Big|\sum_{i=1}^{n} \Delta_i\Big|.$$
Moreover, we can write:
$$\|\psi\|_\infty \leq 2\, C\, \|K\|_\infty\, \|H\|_\infty$$
and
$$\mathrm{Lip}(\psi) \leq C\,\big(h_K^{-1}\,\|H\|_\infty\,\mathrm{Lip}(K) + h_H^{-1}\,\|K\|_\infty\,\mathrm{Lip}(H)\big).$$
In order to apply the inequality of [29] (Corollary A2), we need to evaluate the variance term $\mathrm{Var}\big(\sum_{i=1}^{n}\Delta_i\big)$ as well as the covariance term $\mathrm{Cov}\big(\prod_{i=1}^{u}\Delta_{s_i}, \prod_{j=1}^{v}\Delta_{t_j}\big)$, for all $(s_1, \dots, s_u) \in \mathbb{N}^u$ and $(t_1, \dots, t_v) \in \mathbb{N}^v$ with $1 \leq s_1 \leq \dots \leq s_u \leq t_1 \leq \dots \leq t_v \leq n$.
For the covariance term, we distinguish the following cases. If $t_1 = s_u$, using the facts that
$$\mathbb{E}\big[|K_1 H_1|\big] = O\big(\phi_r(h_K)\big)$$
and
$$\mathbb{E}\big[|K_1|\big] = O\big(\phi_r(h_K)\big),$$
we have:
| C o v i = 1 u Δ s i , j = 1 v Δ t j | C n E [ K 1 ] u + v E ψ | R 1 , S 1 | u + v C | | K | | | | H | | n E [ K 1 ] u + v E | K 1 H 1 | ϕ r ( h K ) C n ϕ r ( h K ) u + v
If $t_1 > s_u$, then under (P5) the quasi-association yields:
| C o v i = 1 u Δ s i , j = 1 v Δ t j | h K 1 L i p ( K ) + h H 1 L i p ( H ) n E [ K 1 ] 2 × C n E [ K 1 ] u + v 2 i = 1 u j = 1 v λ s i , t j ( h K 1 L i p ( K ) + h H 1 L i p ( H ) ) 2 C n E [ K 1 ] u + v v λ t 1 s u ( h K 1 L i p ( K ) + h H 1 L i p ( H ) ) 2 C ϕ r ( h K ) u + v v e α ( t 1 s u )
On the other hand, if we take into account ( P 6 ) , we have:
| C o v i = 1 u Δ s i , j = 1 v Δ t j | C | | K | | | | H | | n E [ K 1 ] u + v 2 E | Δ s u , Δ t 1 | + E | Δ s u | E | Δ t 1 | C | | K | | | | H | | n E [ K 1 ] u + v 2 C n E [ K 1 ] × h H ( s u p i j P ( ( R i , R j ) B ( r , h K ) × B θ ( r , h K ) + P ( R 1 B ( r , h K ) ) 2 ) ) C h H ϕ r ( h K ) u + v ( ϕ r ( h K ) ) 2
In addition, by taking a $\gamma$-power of (A7) and a $(1-\gamma)$-power of (A8), we derive an upper bound for the three terms as follows: for $1 \leq s_1 \leq \dots \leq s_u \leq t_1 \leq \dots \leq t_v \leq n$,
| C o v i = 1 u Δ s i , j = 1 v Δ t j | h H ϕ r ( h K ) C n h H ϕ r ( h K ) u + v
Second, we evaluate the variance term $\mathrm{Var}\big(\sum_{i=1}^{n}\Delta_i\big)$:
| V a r i = 1 u Δ s i , j = 1 v Δ t j | = 1 n E [ K 1 ] 2 i = 1 n j = 1 n C o v ( K i H i , K j H j ) = 1 n E [ K 1 ] 2 V a r ( K 1 H 1 ) T 1 + 1 n E [ K 1 ] 2 i = 1 n j = 1 , i j n C o v ( K i H i , K j H j ) T 2
For the first term T 1 , we have:
V a r ( K 1 H 1 ) = E ( K 1 2 H 1 2 ) ( E ( K 1 H 1 ) ) 2
Then,
E [ K 1 2 H 1 2 ] = E [ K 1 2 E [ H 1 2 / X 1 ] ]
As a result, considering (P2) and (P3) and integrating over the real component y gives us the following:
E H 1 2 R 1 = O h H
As for all j 1 :
I E [ K 1 j ] = O ϕ x ( h K ) .
Then,
I E [ K 1 2 H 1 2 ] = O ( ϕ x ( h K ) ) .
It follows that:
1 n ( I E [ K 1 ] ) 2 2 V a r ( K 1 H 1 ) = O ( n ϕ x ( h K ) ) .
Regarding the covariance term in (A10), the following decomposition will be utilized.
i = 1 n j = 1 , i j n C o v ( K i H i , K j H j ) = i = 1 n j = 1 , 0 < i j m n n C o v ( K i H i , K j H j ) I + i = 1 n j = 1 , i j > m n n C o v ( K i H i , K j H j ) I I
where $(m_n)$ is a sequence of positive integers tending to infinity with $n$. Under assumptions (P1)–(P3) and (P6), we obtain, for $i \neq j$:
I n m n max i j E K i H i K j H j + E K 1 H 1 2 C n m n ϕ r 2 h K + ϕ r h K 2 C n m n ϕ r 2 h K
Since $H$ and $K$ are both bounded and Lipschitz kernels, we obtain:
I I h K 1 L i p ( K ) + h H 1 L i p ( H ) 2 i = 1 u j = 1 i j > m n v λ i , j C h K 1 L i p ( K ) + h H 1 L i p ( H ) 2 i = 1 u j = 1 i j > m n v λ i , j C n h K 1 L i p ( K ) + h H 1 L i p ( H ) 2 λ m n C n h K 1 L i p ( K ) + h H 1 L i p ( H ) 2 e α m n .
Then, by (A15) and (A16), we get
j = 1 , i j n C o v ( K i H i , K j H j ) C n m n ( h H 2 ϕ r 2 ( h K ) ) + n ( h K 1 L i p ( K ) + h H 1 L i p ( H ) ) 2 e α m n
By choosing:
m n = log h K 1 Lip ( K ) + h H 1 Lip ( H ) 2 α ϕ r 2 h K
We get:
1 ϕ r ( h K ) j = 1 , i j n C o v ( K i H i , K j H j ) n + 0
Finally, combining the three previous results (A10), (A14), and (A17), we obtain:
$$\mathrm{Var}\Big(\sum_{i=1}^{n}\Delta_i\Big) = O\left(\frac{1}{n\,\phi_r(h_K)}\right).$$
Therefore, the variables $\Delta_i$, $i = 1, \dots, n$, satisfy the conditions of the lemma with:
$$K_n = \frac{C}{n\,\phi_r(h_K)}, \qquad M_n = \frac{C}{n\,\phi_r(h_K)}, \qquad A_n = \mathrm{Var}\Big(\sum_{i=1}^{n}\Delta_i\Big).$$
Thus,
P F ^ N r ( s ) E F ^ N r ( s ) > η log n n ϕ r h K = P i = 1 n Δ i > η log n n ϕ r h K exp η 2 log n n ϕ r h K Var i = 1 n Δ i + log 5 / 6 n n ϕ r h K ( 7 / 6 ) exp η 2 log n C + ( log n ) 5 / 6 n ϕ r h K ( 7 / 6 ) C exp C η 2 log n
by (P7). Finally, with an appropriate choice of $\eta$, the Borel–Cantelli lemma allows us to conclude the proof of Lemma 1. □
Proof of Corollary 1.
We have:
$$\big|\widehat{F}_D^r\big| \leq \tfrac{1}{2} \implies \big|\widehat{F}_D^r - 1\big| > \tfrac{1}{2}.$$
Therefore,
$$P\Big(\big|\widehat{F}_D^r\big| \leq \tfrac{1}{2}\Big) \leq P\Big(\big|\widehat{F}_D^r - 1\big| > \tfrac{1}{2}\Big) \leq P\Big(\big|\widehat{F}_D^r - \mathbb{E}\widehat{F}_D^r\big| > \tfrac{1}{2}\Big).$$
Since $\mathbb{E}\big(\widehat{F}_D^r\big) = 1$, applying the result of Lemma 1 shows that:
$$\sum_{n=1}^{\infty} P\Big(\big|\widehat{F}_D^r\big| \leq \tfrac{1}{2}\Big) < \infty.$$
Proof of Lemma 2.
We can write:
$$\mathbb{E}\widehat{F}_N^r(s) - F^r(s) = \frac{1}{n\,\mathbb{E}[K_1]}\sum_{i=1}^{n}\mathbb{E}\big[K_i\, H_i(s)\big] - F^r(s) = \frac{1}{\mathbb{E}[K_1]}\,\mathbb{E}\Big[K_1\, H\Big(\frac{s - S_1}{h_H}\Big)\Big] - F^r(s) = \frac{1}{\mathbb{E}[K_1]}\,\mathbb{E}\Big[K_1\,\mathbb{E}\big[H\big(h_H^{-1}(s - S_1)\big)\,\big|\, R_1\big]\Big] - F^r(s).$$
Using the stationarity of the data, conditioning on the explanatory variable, and the usual change of variable $t = \frac{s-u}{h_H}$, we derive:
$$\mathbb{E}\big[H\big(h_H^{-1}(s - S_1)\big)\,\big|\, R_1\big] = \int_{\mathbb{R}} H\Big(\frac{s-u}{h_H}\Big)\, f^{R_1}(u)\, du = \int_{\mathbb{R}} H^{(1)}(t)\, F^{R_1}(s - h_H t)\, dt,$$
and we deduce that
$$\big|\mathbb{E}\big[H\big(h_H^{-1}(s - S_1)\big)\,\big|\, R_1\big] - F^r(s)\big| \leq \int_{\mathbb{R}} H^{(1)}(t)\,\big|F^{R_1}(s - h_H t) - F^r(s)\big|\, dt.$$
Therefore, by (P2), on the support of $K_1$ (where $d(r, R_1) \leq h_K$) we get:
$$\big|\mathbb{E}\big[H\big(h_H^{-1}(s - S_1)\big)\,\big|\, R_1\big] - F^r(s)\big| \leq A_r \int_{\mathbb{R}} H^{(1)}(t)\,\big(h_K^{b_1} + |t|^{b_2}\, h_H^{b_2}\big)\, dt.$$
This inequality holds uniformly in $s \in \mathcal{S}$ and, after substituting into (A22) and simplifying by $\mathbb{E}(K_1)$, we obtain:
$$\big|\mathbb{E}\widehat{F}_N^r(s) - F^r(s)\big| \leq A_r\left(h_K^{b_1}\int_{\mathbb{R}} H^{(1)}(t)\, dt + h_H^{b_2}\int_{\mathbb{R}} |t|^{b_2}\, H^{(1)}(t)\, dt\right).$$
In conclusion, the proof of Lemma 2 follows from Hypothesis (P4) and Corollary 1. □
Proof of Lemma 3.
The proof of Lemma 3 is a straightforward adaptation of the proof of Lemma 1, replacing $\psi(\cdot,\cdot)$ by:
$$\psi(R_i) = K\big(h_K^{-1}\, d(r, R_i)\big) - \mathbb{E}[K_1], \quad R_i \in \mathcal{H}.$$
Proof of Lemma 6.
We denote
$$Z_{ni}(s,r) = \sqrt{\frac{\phi_r(h_K)}{n\, h_H}}\;\frac{1}{\mathbb{E}(K_1)}\,\big(\Gamma_i(s,r) - \mathbb{E}\,\Gamma_i(s,r)\big),$$
where
$$\Gamma_i(s,r) = K\big(h_K^{-1}\, d(r, R_i)\big)\, H'_i(s) - \mathbb{E}\big[K_1 H'_1(s)\big], \quad 1 \leq i \leq n,$$
and
$$S_n := \sum_{i=1}^{n} Z_{ni}(s,r).$$
Therefore,
$$S_n = \sqrt{n\, h_H\,\phi_r(h_K)}\,\Big(\widehat{f}_N^r(s) - \mathbb{E}\widehat{f}_N^r(s)\Big),$$
and the desired result is
$$S_n \xrightarrow{\mathcal{D}} \mathcal{N}\big(0, \sigma_f^2(r)\big).$$
We employ the classical blocking technique of Doob [30]. We select two sequences of natural numbers tending to infinity,
$$p = O\big(n\,\phi_r(h_K)\big), \qquad q = o(p),$$
and we split $S_n$ into
$$S_n = T_n + T'_n + \zeta_k, \quad \text{with} \quad T_n = \sum_{j=1}^{k}\eta_j \quad \text{and} \quad T'_n = \sum_{j=1}^{k}\xi_j,$$
where
$$\eta_j = \sum_{i \in I_j} Z_{ni}(s,r), \qquad \xi_j = \sum_{i \in J_j} Z_{ni}(s,r), \qquad \zeta_k = \sum_{i = k(p+q)+1}^{n} Z_{ni}(s,r),$$
with
$$I_j = \big\{(j-1)(p+q)+1, \dots, (j-1)(p+q)+p\big\}, \qquad J_j = \big\{(j-1)(p+q)+p+1, \dots, j(p+q)\big\}.$$
Observe that, for $k = \left[\frac{n}{p+q}\right]$ (where $[\cdot]$ stands for the integer part), we have $\frac{kq}{n} \to 0$, $\frac{kp}{n} \to 1$, $\frac{q}{n} \to 0$ and $\frac{p}{n} \to 0$ as $n \to \infty$. Our asymptotic result is now founded on:
$$\mathbb{E}\big(T'_n\big)^2 + \mathbb{E}\big(\zeta_k\big)^2 \to 0$$
and
T n N ( 0 , 1 )
Proof of Equation (A26).
Stationarity gives us:
E ( T n ) 2 = k V a r ( ζ 1 ) + 2 1 i < j k | C o v ( ζ i , ζ j ) |
and
k V a r ( ζ 1 ) q k V a r ( Z n 1 ( s , r ) ) + 2 k 1 i < j k C o v ( Z n i ( s , r ) , Z n j ( s , r ) )
The fact that k q n 0 gives us
q k V a r ( Z n 1 ( s , r ) ) = ϕ r ( h K ) q k 1 n ( E ( K 1 ) ) 2 V a r ( Γ 1 ( s , r ) ) = O ( k q n ) 0 , a s n .
In another way,
k 1 i < j q C o v ( Z n i ( s , r ) , Z n j ( s , r ) ) = k ϕ r ( h K ) n h H ( E ( K 1 ) ) 2 1 i < j q C o v ( Γ i ( s , r ) , Γ j ( s , r ) )
We will now write down this last covariance.
i = 1 q j = 1 j i q Cov Γ i ( s , r ) , Γ j ( s , r ) = i = 1 q j = 1 0 < | i j | m n q C o v Γ i ( s , r ) , Γ j ( s , r ) + i = 1 q j = 1 | i j | > m n q C o v Γ i ( s , r ) , Γ j ( s , r ) = I + I I
where $(m_n)$ is a positive integer sequence tending to infinity as $n \to \infty$.
For the term $I$, we use (P1), (P3), and (P7) to show that, for $i \neq j$:
I q m n max i j E Γ i ( s , r ) Γ j ( s , r ) + E Γ 1 ( s , r ) 2 q m n max i j E H i K i ( r ) H j K j ( r ) + E H 1 K 1 ( r ) 2 C q m n h H 2 ϕ r 2 h K + h H ϕ r h K 2 C q m n h H 2 ϕ r 2 h K
For the term $II$, we use the Lipschitz property and the boundedness of $H$ and $K$ to show that:
I I h K 1 Lip ( K ) + h H 1 Lip ( H ) 2 i = 1 q j = 1 | i j | > m n q λ i , j C q h K 1 lip ( K ) + h H 1 Lip ( H ) 2 λ m n C q h K 1 lip ( K ) + h H 1 Lip ( H ) 2 e α m n
Combining these inequalities, we obtain
i = 1 q j = 1 j i q C o v Γ i ( s , r ) , Γ j ( s , r ) C q m n h H 2 ϕ r 2 h K + q n h K 1 L i p ( K ) + h H 1 L i p ( H ) 2 e α m n
By choosing m n = log h K 1 L i p ( K ) + h H 1 L i p ( H ) 2 α h H 2 ϕ r 2 h K , we get
1 q h H ϕ r h K i = 1 n j = 1 j i n C o v Γ i ( s , r ) , Γ j ( s , r ) 0 , as n
Thus, we obtain,
1 i < j k C o v ( Γ i ( s , r ) , Γ j ( s , r ) ) = o ( q h H ϕ r ( h K ) ) .
Then
k 1 i < j k C o v ( Z n i ( s , r ) , Z n j ( s , r ) ) = O ( k q n ) 0 , as n .
From (A28)–(A31), we obtain
k V a r ( ξ i ) 0 , as n .
Using stationarity, the right-hand side of (A27) can be rewritten as:
1 i < j k C o v ( ξ n i , ξ n j ) = 1 i < j k ( k l ) C o v ( ξ n i , ξ n j ) k 1 i < j k C o v ( ξ n i , ξ n j ) 1 = 1 ( i , j ) J * J l + 1 C o v ( Z n i ( y , x ) , Z n j ( y , x ) )
For all ( i , j ) J i * J j and | i j | p + 1 > p , then
1 i < j k C o v ( ξ n i , ξ n j ) k C ϕ r ( h K ) ( h K 1 L i p ( K ) + h H 1 L i p ( H ) ) 2 n h H ( E [ K 1 ) 2 i = 1 p j = 2 p + q + 1 , i j > p v λ i , j C k p ϕ r ( h K ) ( h K 1 L i p ( K ) + h H 1 L i p ( H ) ) 2 n h H ( E [ K 1 ) 2 λ p C k p ( h K 1 L i p ( K ) + h H 1 L i p ( H ) ) 2 n h H ϕ r ( h K ) e α p C k p n h H 3 ϕ r 3 ( h K ) e α p 0 .
By this and (A31), we can write
E ( T 1 ) 2 0 as n .
Now, with regard to ζ k , we obtain:
E ( ζ k ) 2 ) ( n k ( p + q ) ) V a r ( Z n 1 ( s , r ) ) + 2 1 i < j k C o v ( Z n i ( s , r ) , Z n j ( s , r ) ) p V a r ( Z n 1 ( s , r ) ) + 2 1 i < j k C o v ( Z n i ( s , r ) , Z n j ( s , r ) ) p ϕ r ( h K ) n h H E ( K 1 ) 2 V a r ( Z n 1 ( s , r ) ) + C ϕ r ( h K ) n h H E ( K 1 ) 2 1 i < j k C o v ( Z n i ( s , r ) , Z n j ( s , r ) ) o ( 1 ) C p n + o ( 1 ) .
Then,
E ( ζ k ) 2 0 as n
Via (A26), the proof of (A27) is complete. □
Proof of Equation (A29).
Based on:
E ( e i t j = 1 k η j ) j = 1 k E ( e i t η j ) 0
and
k V a r ( η 1 ) σ f 2 , k E ( η 1 2 I η 1 > ϵ σ f ( r ) )
Proof of Equation (A34).
E ( e i t j = 1 k η j ) j = 1 k E ( e i t η j ) E ( e i t j = 1 k η j ) E ( e i t j = 1 k 1 η j ) E ( e i t η j ) + E ( e i t j = 1 k 1 η j ) j = 1 k 1 E ( e i t η j ) = C o v ( e i t j = 1 k 1 η j , e i t η k ) + E ( e i t j = 1 k 1 η j ) j = 1 k 1 E ( e i t η j )
Successively, we have
E ( e i t j = 1 k η j ) j = 1 k E ( e i t η j ) C o v ( e i t j = 1 k 1 η j , e i t η k ) + C o v ( e i t j = 1 k 2 η k 1 , e i t η j )
+ . + C o v ( e i t η 2 , e i t η 1 ) .
Once again, we apply the lemma that was developed by [29] to write:
| C o v ( e i t η 2 , e i t η 1 ) C ( h K 1 L i p ( K ) + h H 1 L i p ( H ) ) 2 ϕ r ( h K ) n h H ( E [ K 1 ) 2 i I 1 j I 2 λ i , j .
through the last result of each expression on the right-hand side of (A37):
E ( e i t j = 1 k η j ) j = 1 k E ( e i t η j ) C ( h K 1 L i p ( K ) + h H 1 L i p ( H ) ) 2 ϕ r ( h K ) n h H ( E [ K 1 ) 2
× [ i I 1 j I 2 λ i , j + i I 1 I 2 j I 3 λ i , j + . . + i I 1 I k 1 j I k λ i , j .
For every $2 \leq l \leq k-1$ and $(i,j) \in I_l \times I_{l+1}$, we have $|i - j| \geq q + 1 > q$; then
i I 1 I k 1 j I k p λ q .
Hence, (A36) becomes:
E ( e i t j = 1 k η j ) j = 1 k E ( e i t η j ) = C t 2 ( h K 1 L i p ( K ) + h H 1 L i p ( H ) ) 2 ϕ r ( h K ) n h H ( E [ K 1 ] ) 2 k p λ q = C t 2 ( h K 1 L i p ( K ) + h H 1 L i p ( H ) ) 2 ϕ r ( h K ) h H ( E [ K 1 ] ) 2 k p e α q = C t 2 ( h K 1 L i p ( K ) + h H 1 L i p ( H ) ) 2 1 n h H ϕ r ( h K ) k p λ q = C t 2 k p n h H 3 ϕ r 3 ( h K ) λ q 0 .
Proof of Equation (A35).
By the same reasoning used in (A28), we have:
lim n k V a r ( η 1 ) = lim n k p V a r ( Z n 1 ( s , r ) ) = lim n ϕ r ( h K ) n h H ( E K 1 ) 2 V a r ( Γ 1 ( s , r ) )
Through a straightforward computation, we establish that:
1 h H ϕ r ( h K ) E ( K 1 2 ) K 1 2 0 1 ( K 2 ) ( s ) β ( r , s ) d s + o ( 1 ) E ( K 1 2 H 1 2 ) E ( K 1 2 ) f ( s , r ) H 2 ( t ) d t E ( K 1 2 H 1 2 ) E ( K 1 2 ) f ( s , r )
which imply that
ϕ r ( h K ) n h H ( E K 1 ) 2 V a r ( Γ 1 ( s , r ) ) σ f 2 ( r ) .
Hence
k V a r ( η 1 ) σ f 2 ( r ) .
For (A35), we use the bound $\eta_1 \leq C\, p\, Z_{n1}(s,r) \leq \dfrac{C\, p}{\sqrt{n\,\phi_r(h_K)}}$ together with Chebyshev's inequality to get:
k E ( η 1 2 I η 1 > ϵ σ f ( r ) ) C p 2 k n h H ϕ r ( h K ) P ( η 1 > ϵ σ f ( r ) ) C p 2 k n h H ϕ r ( h K ) V a r ( η 1 ) ϵ 2 σ f 2 ( r ) = O p 2 n h H ϕ r ( h K ) .
Proof of Lemma 8.
Recall that we showed previously that:
$$\mathbb{E}\widehat{F}_N^r(s) - F^r(s) = O\big(h_H^{b_2}\big) + O\big(h_K^{b_1}\big).$$
Therefore, by (P5), we obtain
$$\sqrt{n\, h_H\,\phi_r(h_K)}\,\Big(\mathbb{E}\widehat{F}_N^r(s) - F^r(s)\Big) \to 0, \quad \text{as } n \to \infty.$$
Since $\mathbb{E}\widehat{F}_D^r = 1$, it only remains to prove that
$$n\, h_H\,\phi_r(h_K)\,\mathrm{Var}\Big(\widehat{F}_D^r - \widehat{F}_N^r(s)\Big) \to 0, \quad \text{as } n \to \infty.$$
This is a direct consequence of:
$$\mathrm{Var}\big(\widehat{F}_D^r\big) = O\left(\frac{1}{n\,\phi_r(h_K)}\right), \qquad \mathrm{Var}\big(\widehat{F}_N^r(s)\big) = O\left(\frac{1}{n\,\phi_r(h_K)}\right),$$
and
$$\mathrm{Cov}\big(\widehat{F}_D^r,\, \widehat{F}_N^r(s)\big) = O\left(\frac{1}{n\,\phi_r(h_K)}\right).$$
The three proofs are quite similar and close to that of (A30); for brevity, we give only the first one. Indeed,
V a r F ^ D ( r ) = 1 n E K 1 ( r ) 2 i = 1 n j = 1 n C o v K i ( r ) , K j ( r ) = 1 n E K 1 ( r ) 2 V a r K 1 ( r ) + 1 n E K 1 ( r ) 2 i = 1 n j = 1 , i j n C o v K i ( r ) , K j ( r )
For the first term, we have:
$$\mathrm{Var}\big(K_1(r)\big) = \mathbb{E}\big[K_1^2(r)\big] - \big(\mathbb{E}[K_1(r)]\big)^2.$$
Then,
$$\mathbb{E}\big[K_1^2(r)\big] = O\big(\phi_r(h_K)\big).$$
It follows that:
$$\frac{1}{n\,\big(\mathbb{E}[K_1(r)]\big)^2}\,\mathrm{Var}\big(K_1(r)\big) = O\left(\frac{1}{n\,\phi_r(h_K)}\right).$$
Let us now consider the asymptotic behavior of the sum in the second term of (A38). To this end, we split it as:
i = 1 n j = 1 , i j n C o v K i ( r ) , K j ( r ) = i = 1 n j = 1 , 0 < | i j | m n n C o v K i ( r ) , K j ( r ) I + i = 1 n j = 1 , | i j | > m n C o v K i ( r ) , K j ( r ) I I
where $(m_n)$ is a sequence of positive integers tending to infinity as $n \to \infty$.
From Assumptions ( P 1 ) , ( P 4 ) , and ( P 7 ) , we have, for i j
I n m n max i j E K i ( r ) K j ( r ) + E K 1 ( r ) 2 C n m n ϕ r 2 h K + ϕ r 2 h K C n m n ϕ r 2 h K
Because K is bounded and Lipschitzian, we obtain:
I I h K 1 L i p ( K ) 2 i = 1 u j = 1 | i j | > m n v λ i , j C h K 1 l i p ( K ) 2 i = 1 u j = 1 | i j | > m n v λ i , j C n h K 1 L i p ( K ) 2 λ m n C n h K 1 l i p ( K ) 2 e α m n
Then, by (A40) and (A41), we get
j = 1 , i j n C o v K i ( r ) , K j ( r ) C n m n ϕ r 2 h K + n h K 1 L i p ( K ) 2 e α m n
By choosing
m n = log h K 1 L i p ( K ) 2 α ϕ r 2 h K
we get
1 ϕ r h K j = 1 , i j n C o v K i ( r ) , K j ( r ) 0 , as n
By (A38), (A39), and (A42) we conclude:
$$\mathrm{Var}\big(\widehat{F}_D^r\big) = O\left(\frac{1}{n\,\phi_r(h_K)}\right).$$

References

1. Aneiros, G.; Cao, R.; Fraiman, R.; Genest, C.; Vieu, P. Recent advances in functional data analysis and high-dimensional statistics. J. Multivar. Anal. 2019, 170, 3–9.
2. Araujo, A.; Giné, E. The Central Limit Theorem for Real and Banach Valued Random Variables; Wiley Series in Probability and Mathematical Statistics; John Wiley and Sons: New York, NY, USA; Chichester, UK; Brisbane, Australia, 1980; p. xiv+233.
3. Gasser, T.; Hall, P.; Presnell, B. Nonparametric estimation of the mode of a distribution of random curves. J. R. Stat. Soc. Ser. B 1998, 60, 681–691.
4. Ferraty, F.; Vieu, P. Nonparametric Functional Data Analysis: Theory and Practice; Springer Series in Statistics; Springer: New York, NY, USA, 2006.
5. Ferraty, F.; Laksaci, A.; Tadj, A.; Vieu, P. Rate of uniform consistency for nonparametric estimates with functional variables. J. Stat. Plan. Inference 2010, 140, 335–352.
6. Kara-Zaitri, L.; Laksaci, A.; Rachdi, M.; Vieu, P. Uniform in bandwidth consistency for various kernel estimators involving functional data. J. Nonparametr. Stat. 2017, 29, 85–107.
7. Ferraty, F.; Peuch, A.; Vieu, P. Modèle à indice fonctionnel simple. Comptes Rendus Math. 2003, 336, 1025–1028.
8. Ezzahrioui, M.; Ould-Saïd, E. Asymptotic Results of a Nonparametric Conditional Quantile Estimator for Functional Time Series. Commun. Stat. Theory Methods 2008, 37, 2735–2759.
9. Ezzahrioui, M.; Ould-Saïd, E. On the asymptotic properties of a nonparametric estimator of the conditional mode for functional dependent data. J. Nonparametr. Stat. 2008, 20, 3–18.
10. Laksaci, A. Convergence en moyenne quadratique de l’estimateur à noyau de la densité conditionnelle avec variable explicative fonctionnelle. Ann. L’Institut Stat. L’Université Paris 2007, 51, 69–80.
11. Laksaci, A.; Maref, F. Estimation non paramétrique de quantiles conditionnels pour des variables fonctionnelles spatialement dépendantes. Comptes Rendus Math. 2009, 347, 1075–1080.
12. Matula, P. A note on the almost sure convergence of sums of negatively dependent random variables. Stat. Probab. Lett. 1992, 15, 209–213.
13. Newman, C.M. Asymptotic independence and limit theorems for positively and negatively dependent random variables. In Inequalities in Statistics and Probability; IMS Lect. Notes Monogr. Ser. 1984, 5, 127–140.
14. Roussas, G.G. Positive and negative dependence with some statistical applications. In Asymptotics, Nonparametrics and Time Series; Ghosh, S., Ed.; Marcel Dekker, Inc.: New York, NY, USA, 1999; pp. 757–788.
15. Doukhan, P.; Louhichi, S. A new weak dependence condition and applications to moment inequalities. Stoch. Process. Their Appl. 1999, 84, 313–342.
16. Bulinski, A.; Suquet, C. Normal approximation for quasi-associated random fields. Stat. Probab. Lett. 2001, 54, 215–226.
17. Douge, L. Théorèmes limites pour des variables quasi-associées hilbertiennes. Ann. L’Institut Stat. L’Université Paris 2010, 54, 51–60.
18. Attaoui, S.; Laksaci, A.; Ould-Saïd, E. Asymptotic Results for an M-estimator of the Regression Function for Quasi-Associated Processes. In Functional Statistics and Applications; Contributions to Statistics; Springer International Publishing: Cham, Switzerland, 2015; pp. 3–28.
19. Tabti, H.; Ait Saïdi, A. Estimation and simulation of conditional hazard function in the quasi-associated framework when the observations are linked via a functional single-index structure. Commun. Stat. Theory Methods 2017, 47, 816–838.
20. Hamza, D.; Mechab, B.; Chikr Elmezouar, Z. Asymptotic normality of a conditional hazard function estimate in the single index for quasi-associated data. Commun. Stat. Theory Methods 2020, 49, 513–530.
21. Laksaci, A.; Mechab, W. Nonparametric relative regression for associated random variables. Metron 2016, 74, 75–97.
22. Daoudi, H.; Mechab, B. Asymptotic Normality of the Kernel Estimate of Conditional Distribution Function for the quasi-associated data. Pak. J. Stat. Oper. Res. 2019, 15, 999–1015.
23. Daoudi, H.; Mechab, B.; Benaissa, S.; Rabhi, A. Asymptotic normality of the nonparametric conditional density function estimate with functional variables for the quasi-associated data. Int. J. Stat. Econ. 2019, 20, 94–106.
24. Allaoui, S.; Bouzebda, S.; Chesneau, C.; Liu, J. Uniform almost sure convergence and asymptotic distribution of the wavelet-based estimators of partial derivatives of multivariate density function under weak dependence. J. Nonparametr. Stat. 2021, 33, 170–196.
25. Mohammedi, M.; Bouzebda, S.; Laksaci, A.; Bouanani, O. Asymptotic normality of the k-NN single index regression estimator for functional weak dependence data. Commun. Stat. Theory Methods 2022, 33, 1–26.
26. Bouzebda, S.; Laksaci, A.; Mohammedi, M. The k-nearest neighbors method in single index regression model for functional quasi-associated time series data. Rev. Mat. Complut. 2023, 36, 361–391.
27. Bouzebda, S.; Laksaci, A.; Mohammedi, M. Single Index Regression Model for Functional Quasi-Associated Times Series Data. REVSTAT Stat. J. 2023, 20, 605–631.
28. Bouaker, I.; Belguerna, A.; Daoudi, H. The consistency of the kernel estimation of the function conditional density for associated data in high-dimensional statistics. J. Sci. Arts 2022, 22, 247–256.
29. Kallabis, R.S.; Neumann, M.H. An exponential inequality under weak dependence. Bernoulli 2006, 12, 333–350.
30. Doob, J.L. Stochastic Processes; John Wiley and Sons: New York, NY, USA, 1953.
Figure 1. A sample of 200 curves.
Figure 2. The QQ-plot of the obtained sample.
Table 1. The p-values of the Kolmogorov–Smirnov test.

m          1       4       10
p-value    0.88    0.67    0.43