Article

Limit Theorem for Kernel Estimate of the Conditional Hazard Function with Weakly Dependent Functional Data

by Abderrahmane Belguerna 1,2, Abdelkader Rassoul 1,2, Hamza Daoudi 2,3,*, Zouaoui Chikr Elmezouar 4 and Fatimah Alshahrani 5

1 Department of Mathematics, Science Institute, S.A University Center, P.O. Box 66, Naama 45000, Algeria
2 Laboratory of Mathematics, Statistics and Computer Science (W1550900), S.A University Center of Naama, Naama 45000, Algeria
3 Department of Electrical Engineering, College of Technology, Tahri Mohamed University, Al-Qanadisa Road, P.O. Box 417, Bechar 08000, Algeria
4 Department of Mathematics, College of Science, King Khalid University, P.O. Box 9004, Abha 61413, Saudi Arabia
5 Department of Mathematical Sciences, College of Science, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(10), 1777; https://doi.org/10.3390/sym17101777
Submission received: 25 August 2025 / Revised: 24 September 2025 / Accepted: 1 October 2025 / Published: 21 October 2025
(This article belongs to the Section Mathematics)

Abstract

This paper examines the asymptotic behavior of the conditional hazard function using kernel-based methods, with particular emphasis on functional weakly dependent data. In particular, we establish the asymptotic normality of the proposed estimator when the covariate follows a functional quasi-associated process. This contribution extends the scope of nonparametric inference under weak dependence within the framework of functional data analysis. The estimator is constructed through kernel smoothing techniques inspired by the classical Nadaraya–Watson approach, and its theoretical properties are rigorously derived under appropriate regularity conditions. To evaluate its practical performance, we carried out an extensive simulation study, where finite-sample outcomes were compared with their asymptotic counterparts. The results showed the robustness and reliability of the estimator across a range of scenarios, thereby confirming the validity of the proposed limit theorem in empirical settings.

1. Introduction

Recent advances in computational technology and data acquisition systems have made it possible to store and process massive datasets that vary over time, including curves and images. These types of observations are commonly referred to as functional data. Effectively analyzing and modeling such data presents both a challenge and an opportunity for statisticians, leading to the development of powerful statistical tools—chief among them, nonparametric estimation methods.
Pioneering contributions by Bosq and Lecoutre [1], Ferraty and Vieu [2], Ferraty, Mas, and Vieu [3], and Laksaci and Mechab [4] laid the foundation for nonparametric estimation in the functional data context. Their works significantly influenced both the theoretical development and the practical implementation of kernel methods, making them key references in the field of nonparametric functional statistics.
Numerous researchers have addressed nonparametric models from both theoretical and practical perspectives. For instance, in the context of kernel estimation, Ferraty and Vieu [5], and Ferraty, Goia, and Vieu [6] investigated regression operators for functional data. Laksaci and Mechab [7] explored the asymptotic behavior of regression functions under weak dependence. Azzi et al. [8] presented functional modal regression for functional data, while Hyndman and Yao [9] proposed estimation techniques and symmetry tests for conditional density functions. Other notable contributions include those of Attaoui, Laksaci, and Ould Said [10], as well as Xu [11] and Abdelhak et al. [12], who investigated single-index models. Daoudi and Mechab [13] focused on estimating the conditional distribution function under quasi-association assumptions.
These contributions, centered on kernel-based methods for conditional models, provided significant insights into the asymptotic properties of estimators related to prediction, conditional distribution functions, and their derivatives, particularly conditional density. Furthermore, Bulinski and Suquet [14] examined random fields with both positive and negative dependence structures, Bouaker et al. [15] studied the consistency of the kernel estimator for conditional density in high-dimensional statistics, and Newman [16] investigated asymptotic independence and limit theorems in related settings.
With regard to the hazard function, several studies addressed its estimation in dependent contexts. Ferraty, Rabhi, and Vieu [17], Laksaci and Mechab [18], and Gagui and Chouaf [19] established asymptotic normality results under α -mixing conditions.
The concept of α -mixing (or strong mixing) is formalized through mixing coefficients, which quantify the degree of dependence between σ -algebras generated by collections of random variables that are increasingly separated in their index set (e.g., time, spatial locations, or other ordering dimensions). A stochastic process is said to be α -mixing if these coefficients converge to zero as the separation increases. In this sense, α -mixing imposes a strong form of asymptotic independence, ensuring that sufficiently distant observations behave nearly as independent random variables.
In contrast, quasi-association introduces a weaker and more flexible dependence structure, and was first introduced in [20] as a particular form of weak dependence. Rather than relying on the decay of mixing coefficients, quasi-association is defined through covariance inequalities involving monotone functions applied to disjoint subsets of variables (see Bulinski and Suquet, 2001 [21]). This framework controls dependence by bounding covariances, without requiring them to vanish completely, thereby accommodating a broader class of dependent random processes.
This distinction makes quasi-association particularly well-suited for the analysis of weakly dependent data, as it retains enough structure to establish central limit theorems while accommodating a broader class of stochastic processes than those encompassed by α -mixing. Building on this framework, Kallabis and Neumann [22] derived exponential inequalities under weak dependence assumptions, further extending its theoretical foundations.
More recently, several studies have investigated nonparametric models involving quasi-associated random variables. Attaoui [23], Tabti and Ait Saidi [24], and Douge [25] also contributed to this line of research.
In addition, recent work has increasingly focused on the asymptotic analysis of conditional functional models under weak dependence structures, particularly quasi-association. Daoudi, Mechab, and Chikr Elmezouar [26], as well as Daoudi et al. [27], investigated the asymptotic properties of estimators of conditional hazard functions in single-index models for quasi-associated data. Similarly, Bouzebda, Laksaci, and Mohammedi [28] studied the single-index regression model, while Rassoul et al. [29] examined the MSE of the conditional hazard rate, highlighting its asymptotic behavior under weak dependence. Daoudi et al. [30] demonstrated asymptotic results of a conditional risk function estimator for associated data in high-dimensional statistics.
In this context, further contributions strengthened the theoretical foundation of kernel-based nonparametric estimators. High-dimensional statistics and complex dependence frameworks have also been explored in the literature. For instance, some works considered the asymptotic behavior of regression estimators under quasi-associated functional time series [31]. These works confirm the growing interest in developing robust asymptotic results for conditional models with dependent functional data, providing theoretical tools that support practical applications.
While the asymptotic behavior of hazard function estimators has been studied under independence or classical mixing conditions, much less attention has been given to functional data subject to weak dependence. In particular, the quasi-association structure—common in many real-world settings such as spatial or temporal processes—remains underexplored in the context of nonparametric hazard estimation.
This study addresses this gap by examining the asymptotic characteristics of the conditional hazard function estimator for functional data under quasi-association, with the aim of establishing its asymptotic properties. We begin by introducing the functional model together with the necessary notation and mathematical tools. As a first result, we establish almost complete consistency. Then, we derive asymptotic normality by employing analytical techniques and decomposition strategies. All theoretical developments are supported with rigorous proof.
To validate the theoretical findings, we conduct a numerical study demonstrating the asymptotic normal approximation of the proposed estimator. Specifically, we generate three datasets of different sizes to examine the impact of key parameters such as sample size and bandwidth on estimator performance. Graphical comparisons between theoretical and empirical results illustrate the estimator’s effectiveness and assess the quality of the estimation.
Beyond its theoretical contribution, the proposed estimator also has practical significance. In medical survival analysis, it can be used to model patient lifetimes where spatial or functional dependencies naturally arise. In reliability engineering, it offers a tool for evaluating the failure times of complex systems with dependent components. Moreover, in risk assessment involving spatially correlated data, such as environmental or epidemiological studies, our approach provides a more realistic framework than classical independence-based methods. These applications highlight the relevance of our theoretical results in addressing real-world problems.
The remainder of this paper is structured as follows. Section 2 presents the quasi-associated sequence, outlines the model construction, and introduces the estimator. Section 3 states the necessary assumptions and develops the results concerning almost complete convergence and asymptotic normality of the estimator. Section 4 provides a comprehensive numerical study supporting the theoretical findings and offering asymptotic confidence bounds. The conclusion synthesizes the key findings and highlights potential directions for future research. Finally, detailed proofs of the main results are given in Appendix A.

2. The Model

Let $Z_i = (S_i, T_i)_{1 \le i \le n}$ be an $\mathcal{H} \times \mathbb{R}$-valued, measurable, strictly stationary stochastic process defined on a probability space $(\Omega, \mathcal{A}, \mathbb{P})$, where $\mathcal{H}$ is a Hilbert space equipped with an inner product $\langle \cdot, \cdot \rangle$ and the induced norm $\|\cdot\|$, and $(\mathcal{H}, d)$ is the associated semi-metric space with $d(s, s') = \|s - s'\|$ for all $s, s' \in \mathcal{H}$.
In the following, we consider a fixed point $s \in \mathcal{H}$ and denote by $N_s$ a fixed neighborhood of s. In addition, let $S$ be a fixed compact subset of $\mathbb{R}$, corresponding to the support of the real variable T; the notation s always refers to an element of the functional space $\mathcal{H}$, whereas $S$ denotes the compact subset of $\mathbb{R}$.
We assume the existence of a regular version of the conditional probability distribution of T given S. Furthermore, on $N_s$, we suppose that the conditional distribution function of T given $S = s$, denoted $G^s(\cdot)$, is three times continuously differentiable; its corresponding conditional density function is denoted $g^s(\cdot)$.
In this paper, we investigate the kernel estimation of the conditional hazard function of T given S = s, denoted $\lambda^s(t)$ and defined, for all $t \in \mathbb{R}$ such that $G^s(t) < 1$, by
$$\lambda^s(t) = \frac{g^s(t)}{1 - G^s(t)}.$$
In our functional context, the kernel estimate of this function is given by
$$\hat{\lambda}^s(t) = \frac{\hat{g}^s(t)}{1 - \hat{G}^s(t)}, \quad t \in \mathbb{R}, \qquad (1)$$
where $\hat{G}^s(\cdot)$ is the conditional distribution functional estimator, given by
$$\hat{G}^s(t) = \frac{\sum_{i=1}^{n} K\big( \theta_K^{-1} d(s, S_i) \big)\, H\big( \theta_H^{-1}(t - T_i) \big)}{\sum_{i=1}^{n} K\big( \theta_K^{-1} d(s, S_i) \big)}, \quad t \in \mathbb{R}, \qquad (2)$$
and $\hat{g}^s(\cdot)$ is the conditional density functional estimator, given by
$$\hat{g}^s(t) = \frac{\sum_{i=1}^{n} K\big( \theta_K^{-1} d(s, S_i) \big)\, H'\big( \theta_H^{-1}(t - T_i) \big)}{\theta_H \sum_{i=1}^{n} K\big( \theta_K^{-1} d(s, S_i) \big)}, \quad t \in \mathbb{R}. \qquad (3)$$
Here K denotes a kernel function and H a given differentiable distribution function with derivative $H'$. The quantities $\theta_K = \theta_{K,n}$ and $\theta_H = \theta_{H,n}$ represent sequences of positive bandwidth parameters. Under this framework, the estimator $\hat{\lambda}^s(t)$ can be expressed as
$$\hat{\lambda}^s(t) = \frac{\hat{g}_N^s(t)}{\hat{G}_D^s - \hat{G}_N^s(t)}, \qquad (4)$$
where
$$\hat{G}_D^s = \frac{1}{n\, \mathbb{E}[K_1(s)]} \sum_{i=1}^{n} K_i(s), \qquad \hat{G}_N^s(t) = \frac{1}{n\, \mathbb{E}[K_1(s)]} \sum_{i=1}^{n} K_i(s)\, H_i(t), \qquad \hat{g}_N^s(t) = \frac{1}{n\, \theta_H\, \mathbb{E}[K_1(s)]} \sum_{i=1}^{n} K_i(s)\, H'_i(t),$$
with the notational conveniences
$$K_i(s) = K\big( \theta_K^{-1} d(s, S_i) \big), \qquad H_i(t) = H\big( \theta_H^{-1}(t - T_i) \big), \qquad H'_i(t) = H'\big( \theta_H^{-1}(t - T_i) \big).$$
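To fix ideas, the estimator above can be sketched numerically. The following Python snippet is a minimal illustration rather than the authors' implementation (their study uses R): it assumes curves discretized on a regular grid, takes a quartic kernel for K, and, purely for convenience, a Gaussian distribution function for H (any smooth distribution function satisfying the assumptions would serve).

```python
import numpy as np
from math import erf

def K(z):
    # quartic kernel on [0, 1]; z is a nonnegative scaled distance
    z = np.asarray(z, dtype=float)
    return np.where((z >= 0.0) & (z <= 1.0), 15.0 / 16.0 * (1.0 - z**2) ** 2, 0.0)

def H(z):
    # a smooth distribution function (Gaussian CDF, an assumed convenience choice)
    return 0.5 * (1.0 + np.vectorize(erf)(np.asarray(z, dtype=float) / np.sqrt(2.0)))

def H_prime(z):
    # derivative of H
    z = np.asarray(z, dtype=float)
    return np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)

def semi_metric(s, curves, du):
    # d(s, S_i): L2 distance between discretized curves on a grid with step du
    return np.sqrt(np.sum((curves - s) ** 2, axis=1) * du)

def hazard_estimate(s, t, curves, T, theta_K, theta_H, du):
    """lambda_hat^s(t) = g_hat^s(t) / (1 - G_hat^s(t))."""
    w = K(semi_metric(s, curves, du) / theta_K)   # K(theta_K^-1 d(s, S_i))
    denom = w.sum()
    if denom == 0.0:
        raise ValueError("theta_K too small: no curve falls in B(s, theta_K)")
    G_hat = np.sum(w * H((t - T) / theta_H)) / denom
    g_hat = np.sum(w * H_prime((t - T) / theta_H)) / (theta_H * denom)
    return g_hat / (1.0 - G_hat)
```

Since $\hat{\lambda}^s(t)$ is a ratio of weighted sums, the normalizing constant of K cancels, so only the shape of the kernel matters in practice.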
Our primary objective is to establish both the consistency and the asymptotic normality of the estimator (4) under suitable hypotheses, where the sequence of variables $(S_n)_{n \in \mathbb{N}}$ satisfies the quasi-association condition as defined by Bulinski and Suquet [14].
Definition 1.
Let $I_1$ and $I_2$ be disjoint subsets of $\mathbb{N}$, i.e., $I_1 \cap I_2 = \emptyset$. The sequence $(S_n)_{n \in \mathbb{N}}$, with $S_n \in \mathbb{R}^d$, is said to be quasi-associated if, for all Lipschitz functions
$$f_1 : \mathbb{R}^{|I_1| d} \to \mathbb{R}, \qquad f_2 : \mathbb{R}^{|I_2| d} \to \mathbb{R},$$
we have
$$\Big| \mathrm{Cov}\big( f_1(S_\tau, \tau \in I_1),\, f_2(S_\kappa, \kappa \in I_2) \big) \Big| \le \mathrm{Lip}(f_1)\, \mathrm{Lip}(f_2) \sum_{\tau \in I_1} \sum_{\kappa \in I_2} \sum_{u=1}^{d} \sum_{v=1}^{d} \big| \mathrm{Cov}( S_{\tau u}, S_{\kappa v} ) \big|,$$
where
$$\mathrm{Lip}(f_1) = \sup_{s \ne t} \frac{| f_1(s) - f_1(t) |}{\| s - t \|_1}, \qquad \| (s_1, \ldots, s_n) \|_1 = |s_1| + \cdots + |s_n|.$$
Here, $S_{\tau u}$ denotes the u-th component of $S_\tau$, i.e., $S_{\tau u} := \langle S_\tau, e_u \rangle$, with $(e_u)_{u \ge 1}$ an orthonormal basis.
Finally, the pair $\Upsilon = \{ (S_\tau, T_\tau),\ \tau \in \mathbb{N} \}$ is referred to as a stationary quasi-associated process.

3. Main Results

3.1. Assumptions

In the sequel, when no confusion is likely to arise, we denote by $\alpha$ and $\alpha'$ strictly positive constants, and by $\chi_u$ the covariance coefficient defined as
$$\chi_u = \sup_{v \ge u} \sum_{|i - j| \ge v} \chi_{i,j},$$
where
$$\chi_{i,j} = \sum_{u \ge 1} \sum_{v \ge 1} \big| \mathrm{Cov}( S_{iu}, S_{jv} ) \big| + \sum_{u \ge 1} \big| \mathrm{Cov}( S_{iu}, T_j ) \big| + \sum_{v \ge 1} \big| \mathrm{Cov}( T_i, S_{jv} ) \big| + \big| \mathrm{Cov}( T_i, T_j ) \big|,$$
and $S_{iu}$ denotes the u-th component of $S_i$, i.e., $S_{iu} := \langle S_i, e_u \rangle$.
For $\rho > 0$, let $B(s, \rho) := \{ s' \in \mathcal{H} : d(s, s') < \rho \}$ denote the ball with center s and radius $\rho$.
To achieve the desired goal, we begin by stating the following required assumptions.
( H 1 )
$\mathbb{P}( S \in B(s, \theta_K) ) = \phi_s(\theta_K) > 0$, and $\phi_s(\cdot)$ is differentiable at 0.
Moreover, there exists a function $\beta_s(\cdot)$ such that, for all $u \in [0, 1]$:
$$\lim_{\theta_K \to 0} \frac{\phi_s( u\, \theta_K )}{\phi_s( \theta_K )} = \beta_s(u).$$
( H 2 )
The Hölder continuity condition holds for both functions $G^s(t)$ and $g^s(t)$:
$$\big| G^{s_1}(t_1) - G^{s_2}(t_2) \big| \le \gamma \big( d^{\gamma_1}(s_1, s_2) + |t_1 - t_2|^{\gamma_2} \big), \qquad \big| g^{s_1}(t_1) - g^{s_2}(t_2) \big| \le \gamma' \big( d^{\gamma_1}(s_1, s_2) + |t_1 - t_2|^{\gamma_2} \big),$$
for all $(s_1, s_2) \in N_s^2$ and $(t_1, t_2) \in S^2$, with constants $\gamma > 0$, $\gamma' > 0$, $\gamma_1 > 0$, $\gamma_2 > 0$, where $S$ is a compact subset of $\mathbb{R}$.
( H 3 )
H is a differentiable distribution function whose derivative $H'$ is even, bounded, and Lipschitz continuous, and it satisfies
$$\int_{\mathbb{R}} H'(z)\, dz = 1, \qquad \int_{\mathbb{R}} |z|^{\gamma_2}\, H'(z)\, dz < \infty, \qquad \int_{\mathbb{R}} \big( H'(z) \big)^2\, dz < \infty.$$
( H 4 )
The kernel K is differentiable, Lipschitz continuous, and bounded, and there exist constants $\eta > 0$ and $\eta' > 0$ such that
$$\eta\, \mathbb{1}_{[0,1]}(\cdot) < K(\cdot) < \eta'\, \mathbb{1}_{[0,1]}(\cdot),$$
with its derivative satisfying, for constants $\eta_1$ and $\eta_2$,
$$-\infty < \eta_1 < K'(z) < \eta_2 < 0 \quad \text{for } 0 \le z \le 1.$$
( H 5 )
The random pairs $(S_\kappa, T_\kappa)$, $\kappa \in \mathbb{N}$, form a quasi-associated sequence whose covariance coefficients $\chi_u$, $u \in \mathbb{N}$, satisfy the following: there exist $a > 0$ and $c > 0$ such that $\chi_u \le c\, e^{-a u}$.
( H 6 )
$$0 < \sup_{i \ne j} \mathbb{P}\big( (S_i, S_j) \in B(s, \theta_K) \times B(s, \theta_K) \big) = \sup_{i \ne j} \mathbb{P}\big( d(s, S_i) < \theta_K,\ d(s, S_j) < \theta_K \big) = O\big( \phi_s^2( \theta_K ) \big).$$
( H 7 )
The bandwidths $\theta_K$ and $\theta_H$ satisfy
i- $\displaystyle \lim_{n \to \infty} \frac{1}{n\, \theta_H\, \phi_s(\theta_K)} = 0$ and $\displaystyle \lim_{n \to \infty} \frac{\log^5 n}{n\, \theta_H\, \phi_s(\theta_K)} = 0$;
ii- $\displaystyle \lim_{n \to \infty} n\, \theta_H^5\, \phi_s(\theta_K) = 0$ and $\displaystyle \lim_{n \to \infty} n\, \theta_H^2\, \theta_K\, \phi_s(\theta_K) = 0$;
iii- $\displaystyle \lim_{n \to \infty} \big( \theta_H^2 + \theta_K \big) \sqrt{ n\, \phi_s(\theta_K) } = 0$.
( H 8 )
For $m \in \{0, 2\}$, the functions
$$\Phi_m(\cdot) = \mathbb{E}\left[ \frac{\partial^m g^S(t)}{\partial t^m} - \frac{\partial^m g^s(t)}{\partial t^m} \,\middle|\, d(s, S) = \cdot \right]$$
and
$$\Psi_m(\cdot) = \mathbb{E}\left[ \frac{\partial^m G^S(t)}{\partial t^m} - \frac{\partial^m G^s(t)}{\partial t^m} \,\middle|\, d(s, S) = \cdot \right]$$
are differentiable at 0.

3.2. Comments on the Assumptions

We classify the assumptions into standard ones commonly used in the nonparametric functional literature and those that are specific or new in the context of quasi-associated functional data:
( H 1 )
Specific to this paper. This assumption specifies conditions governing the probability that S lies within a neighborhood of s, along with the limiting behavior of the corresponding probability ratio as the neighborhood size approaches zero. These conditions are essential for the asymptotic analysis under quasi-association.
( H 2 )
Standard. The Hölder continuity imposed on the conditional distribution G s ( t ) and its density g s ( t ) is a classical regularity condition ensuring smoothness and enabling uniform convergence arguments.
( H 3 )
Standard. Conditions on the kernel H (even, bounded, with bounded Lipschitz derivative) are standard in kernel estimation to ensure proper convergence of the estimator.
( H 4 )
Standard. Properties of the kernel K (bounded, differentiable, indicator bounds) are classical technical assumptions required for Taylor expansions and bias control.
( H 5 )
Specific to this paper. The quasi-association assumption on the sequence { ( S κ , T κ ) } generalizes independence or classical mixing conditions, allowing us to treat weak spatial dependence in functional data.
( H 6 )
Specific to this paper. This condition characterizes the asymptotic behavior of the joint probability of two functional covariates in neighborhoods of s, controlling covariance terms in the asymptotic expansions under quasi-association.
( H 7 )
Standard. Bandwidth conditions for θ K and θ H are classical in nonparametric functional estimation to balance bias and variance.
( H 8 )
Specific to this paper. This assumption concerns the differentiability at 0 of certain conditional expectation functions. It is required for precise control of higher-order terms in the asymptotic analysis under quasi-association.

3.3. Almost Complete Consistency

Our goal is to derive the almost complete convergence (a.co.) of λ ^ s ( t ) to λ s ( t ) , and this result is formalized in the following theorem.
Theorem 1.
Under the conditions ( H 1 )–( H 8 ), we have
$$\hat{\lambda}^s(t) - \lambda^s(t) = O\big( \theta_H^2 + \theta_K \big) + O_{a.co.}\left( \sqrt{ \frac{\log n}{n\, \theta_H\, \phi_s( \theta_K )} } \right) \quad \text{as } n \to \infty.$$

3.4. Asymptotic Normality

Theorem 2.
Under ( H 1 )–( H 7 ), we infer
$$\sqrt{ n\, \theta_H\, \phi_s( \theta_K ) }\; \big( \hat{\lambda}^s(t) - \lambda^s(t) \big) \xrightarrow{\mathcal{D}} \mathcal{N}\big( 0,\, \sigma_{\hat{\lambda}}^2 \big), \quad s \in A, \ \text{as } n \to \infty,$$
where
$$A = \big\{ s \in \mathcal{H} :\ g^s(t)\, \big( 1 - G^s(t) \big) \ne 0 \big\},$$
with
$$\sigma_{\hat{\lambda}}^2 = \frac{\omega_2\, \lambda^s(t)}{\omega_1^2\, \big( 1 - G^s(t) \big)} \int_{\mathbb{R}} \big( H'(u) \big)^2\, du, \qquad \omega_\tau = K^\tau(1) - \int_0^1 \big( K^\tau(u) \big)'\, \beta_s(u)\, du, \quad \tau = 1, 2.$$
The complete proofs of Theorems 1 and 2 are given in Appendix A.

4. Application and Numerical Study

4.1. Confidence Bounds

Constructing reliable confidence bounds is a key aspect of statistical analysis, as they characterize the variability and reliability of model estimators. Proper interpretation of these bounds enables more informed and robust conclusions about the underlying estimator. In the context of survival and hazard function estimation, confidence bounds are particularly valuable since they provide a quantitative assessment of the uncertainty surrounding the estimated conditional hazard function, thereby guiding both theoretical analysis and practical decision-making. Moreover, confidence bounds serve as a diagnostic tool: narrow bounds suggest stable and precise estimators, while wider bounds highlight regions where the estimator is less reliable due to limited data or high variability.
As an application of the result established in Theorem 2, we construct confidence bounds for λ s ( t ) at the confidence level 1 − α. To this end, we must first estimate the unknown components of the asymptotic variance. These include the conditional density, the conditional survival function, and kernel-based quantities that appear in the variance expression. Consistent estimation of these components is essential, since any bias or misspecification would directly affect the coverage probability of the resulting bounds. Once these quantities are estimated, the asymptotic normality result in Theorem 2 allows us to approximate the distribution of the estimator and derive pointwise confidence intervals for λ s ( t ) across the range of t. This methodology not only validates the theoretical properties of the proposed estimator but also ensures its applicability in empirical studies where inference on the conditional hazard function is required.
Specifically, we set
$$\hat{\omega}_q := \frac{1}{n\, \hat{\phi}_s( \theta_K )} \sum_{\tau = 1}^{n} K_\tau^q(s), \quad q = 1, 2; \qquad \hat{\phi}_s( \theta_K ) = \frac{ \#\{ \tau : d( S_\tau, s ) \le \theta_K \} }{ n },$$
where $\#(A)$ denotes the cardinality of the set A. The asymptotic variance $\sigma_{\hat{\lambda}}^2$ is then estimated by
$$\hat{\sigma}_{\hat{\lambda}}^2 := \frac{ \hat{\omega}_2\, \hat{\lambda}^s(t) }{ \hat{\omega}_1^2\, \big( 1 - \hat{G}^s(t) \big) } \int \big( H'(u) \big)^2\, du. \qquad (6)$$
Corollary 1.
When the assumptions of Theorem 2 hold, we have
$$\sqrt{ n\, \theta_H\, \hat{\phi}_s( \theta_K ) }\; \big( \hat{\lambda}^s(t) - \lambda^s(t) \big) \xrightarrow{d} \mathcal{N}\big( 0,\, \hat{\sigma}_{\hat{\lambda}}^2 \big) \quad \text{as } n \to \infty. \qquad (7)$$
Moreover, the confidence bounds at level $1 - \alpha$ are
$$\left[ \hat{\lambda}^s(t) - Z_{1 - \frac{\alpha}{2}} \left( \frac{ \hat{\sigma}_{\hat{\lambda}}^2 }{ n\, \theta_H\, \hat{\phi}_s( \theta_K ) } \right)^{1/2},\ \ \hat{\lambda}^s(t) + Z_{1 - \frac{\alpha}{2}} \left( \frac{ \hat{\sigma}_{\hat{\lambda}}^2 }{ n\, \theta_H\, \hat{\phi}_s( \theta_K ) } \right)^{1/2} \right],$$
where $Z_{1 - \frac{\alpha}{2}}$ is the quantile of order $1 - \frac{\alpha}{2}$ of $\mathcal{N}(0, 1)$.
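To make this construction concrete, the following Python sketch assembles the plug-in quantities $\hat{\omega}_q$, $\hat{\phi}_s(\theta_K)$, and $\hat{\sigma}^2_{\hat{\lambda}}$, and returns the resulting pointwise bounds. It is a schematic illustration under stated assumptions: `dists` holds the observed distances $d(S_\tau, s)$, `lam_hat` and `G_hat` are estimates computed beforehand, a quartic kernel stands in for K, and $\int (H'(u))^2\,du$ is evaluated for a Gaussian $H'$ (where it equals $1/(2\sqrt{\pi})$).

```python
import numpy as np
from statistics import NormalDist

def quartic(z):
    # quartic kernel on [0, 1]
    z = np.asarray(z, dtype=float)
    return np.where((z >= 0.0) & (z <= 1.0), 15.0 / 16.0 * (1.0 - z**2) ** 2, 0.0)

def confidence_bounds(lam_hat, G_hat, dists, theta_K, theta_H, alpha=0.05):
    """Pointwise asymptotic confidence bounds for lambda^s(t) (Corollary 1)."""
    n = len(dists)
    phi_hat = np.mean(dists <= theta_K)            # empirical small-ball probability
    w = quartic(dists / theta_K)
    omega1 = w.sum() / (n * phi_hat)               # omega_hat_1
    omega2 = (w**2).sum() / (n * phi_hat)          # omega_hat_2
    int_H_prime_sq = 1.0 / (2.0 * np.sqrt(np.pi))  # ∫ (H'(u))^2 du for Gaussian H'
    sigma2 = omega2 * lam_hat * int_H_prime_sq / (omega1**2 * (1.0 - G_hat))
    z = NormalDist().inv_cdf(1.0 - alpha / 2.0)    # quantile Z_{1 - alpha/2}
    half = z * np.sqrt(sigma2 / (n * theta_H * phi_hat))
    return lam_hat - half, lam_hat + half
```

For a 95% interval, z is about 1.96; the band narrows as the neighborhood of s becomes denser, i.e., as $\hat{\phi}_s(\theta_K)$ grows.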

4.2. Numerical Study

In this part, we conduct a numerical study using R software to illustrate and validate the theoretical results through graphical representations. The aim of this simulation is to assess the finite-sample performance of the proposed estimator and to highlight the extent to which the asymptotic properties established in the theoretical framework are reflected in practice. By generating controlled data under specific dependence structures and censoring mechanisms, we are able to visualize the behavior of the conditional density and hazard estimators, compare them with their theoretical counterparts, and evaluate their accuracy across different sample sizes. This numerical experiment also provides insights into the rate of convergence, the influence of the smoothing parameters, and the robustness of the estimator under varying conditions.
This simulation is based on the following points: We describe the data-generating process and the dependence structure imposed, specify the choice of kernel functions and bandwidths, outline the implementation of the random censoring mechanism, and finally present the graphical outputs and error metrics that allow for a systematic comparison between theoretical and empirical curves. The numerical results obtained will not only complement the theoretical findings but also serve as practical evidence of the efficiency and reliability of the proposed estimation methodology.
  • Define our model by choosing the functional covariate as
$$S_\kappa(u) = \cos( W_\kappa u ) + \sin( W_\kappa + u ) + 0.7\, W_\kappa u, \quad u \in [0, \pi], \ \kappa = 1, \ldots, n.$$
    The choice of the model S κ ( u ) is motivated by both theoretical and practical considerations. First, it adequately reflects the main characteristics of real functional data, in particular smoothness and variability, which are essential features in many applied contexts. Second, its structure makes it sufficiently flexible to mimic realistic scenarios while remaining simple enough to allow rigorous analysis within the framework of our simulation study.
    The process W κ satisfies a specific dependence structure, namely a quasi-associated sequence, which is generated as a non-strong mixing auto-regressive process of order 1.
    The process is constructed by setting the auto-regressive coefficient ρ = 0.1 and modeling the innovation term as a binomial distribution Binom(10, 0.25) [17]. We use 100 discretization points of u to obtain the curves S κ shown below in Figure 1, Figure 2 and Figure 3, corresponding to different sample sizes.
    The real variable is defined as $T = m(S) + \epsilon$, where m is the nonlinear regression operator
$$m(S) = \frac{1}{5} \exp\left( \frac{2}{ 1 - \big( \int_0^{\pi} S(u)\, du \big)^2 } \right)$$
    and $\epsilon$ follows a standard normal distribution. The explicit form of the conditional density is then
$$g^s(t) = \frac{1}{\sqrt{2\pi}}\, e^{ -\frac{1}{2} \left( t - m(s) \right)^2 }.$$
    Next, we select the distance on $\mathcal{H}$ as
$$d( s_1, s_2 ) = \left( \int_0^{\pi} \big( s_1(u) - s_2(u) \big)^2\, du \right)^{1/2}, \quad s_1, s_2 \in \mathcal{H}.$$
    Also,
$$K(s) = \frac{15}{16} \big( 1 - s^2 \big)^2, \quad s \in [0, 1]; \qquad H(s) = \int_{-\infty}^{s} K(u)\, du.$$
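The data-generating steps above can be sketched as follows, in Python rather than the authors' R. The AR(1) recursion with ρ = 0.1 and Binom(10, 0.25) innovations follows the description in the text; the burn-in length and the exact reading of the reconstructed formula for m(S) are assumptions.

```python
import numpy as np

def generate_sample(n, n_grid=100, rho=0.1, seed=42, burn_in=50):
    """Simulate (S_kappa, T_kappa) pairs with weak (quasi-associated) dependence."""
    rng = np.random.default_rng(seed)
    # Non-strong-mixing AR(1): W_k = rho * W_{k-1} + e_k, e_k ~ Binom(10, 0.25)
    eps = rng.binomial(10, 0.25, size=n + burn_in).astype(float)
    W = np.empty(n + burn_in)
    W[0] = eps[0]
    for k in range(1, n + burn_in):
        W[k] = rho * W[k - 1] + eps[k]
    W = W[burn_in:]                      # drop burn-in (assumed length)
    # Functional covariates on n_grid discretization points of [0, pi]
    u = np.linspace(0.0, np.pi, n_grid)
    S = np.cos(np.outer(W, u)) + np.sin(W[:, None] + u) + 0.7 * W[:, None] * u
    # Nonlinear regression operator m(S) (reconstructed reading of the text)
    du = u[1] - u[0]
    integral = S.sum(axis=1) * du        # Riemann sum for ∫_0^pi S(u) du
    m = 0.2 * np.exp(2.0 / (1.0 - integral**2))
    # Response: T = m(S) + eps with standard normal noise
    T = m + rng.standard_normal(n)
    return u, S, T, m
```

With binomial innovations the trajectories W stay nonnegative, so the integral of S stays well away from ±1 and the exponent in m(S) remains bounded.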
  • A bandwidth selection algorithm: The smoothness of the estimators (2) and (3) is controlled by the smoothing parameter θ K and the regularity of the cumulative distribution function. Therefore, choosing these parameters plays a critical role in the computational process.
    An optimal selection leads to effective estimation with a small mean squared error, which, for the conditional hazard function, is given by
$$\mathrm{MSE}( \hat{\lambda} ) = \frac{1}{n} \sum_{i=1}^{n} \big( \hat{\lambda}^s( t_i ) - \lambda^s( t_i ) \big)^2.$$
    Let H(·) be a distribution function on $\mathbb{R}$ and set $H_\theta(s) = H(s/\theta)$. Then, as $\theta \to 0$,
$$\mathbb{E}\big[ H_\theta( t - T_i ) \mid S_i = s \big] = G^s(t) + O( \theta^2 ).$$
    This result shows that $G^s(t)$ can be interpreted as the regression of $H_\theta( t - T_i )$ on $S_i$. Consequently, we adopt this regression framework for our estimation problem. By combining this approach with the normal reference rule [7], we obtain a practical algorithm for selecting the bandwidth parameters:
    i- Compute the bandwidth $\theta_H$ using the normal reference rule.
    ii- Given $\theta_H$, apply cross-validation (as proposed by [1]) to determine the optimal value of $\theta_K$, using the function fregre.np.cv in the fda.usc R package (R version 4.4.1).
    Cross-validation for bandwidth selection:
    From a theoretical perspective, the cross-validation selector θ K * is asymptotically optimal, in the sense that it converges to the bandwidth minimizing the (MSE). This ensures that the method adapts automatically to the underlying smoothness of the conditional hazard function, while remaining consistent with the dependence structure of the data.
    The choice of the bandwidth parameter θ K governs the trade-off between bias and variance. To determine an optimal value of θ K , we employ the cross-validation method, which provides a data-driven selection procedure.
    In [32], the authors compared the cross-validation procedures suggested in [33,34] and concluded that the cross-validation criterion of [2] yields the optimal bandwidth; this criterion is the one adopted in this application.
    The idea is to minimize a prediction error criterion based on leave-one-out estimation. More precisely, if $\hat{g}_{-i}^{\theta_K}(t)$ denotes the conditional density functional estimator computed without the i-th observation, the cross-validation criterion is defined by
$$CV( \theta_K ) = \frac{1}{n} \sum_{i=1}^{n} \big( T_i - \hat{g}_{-i}^{\theta_K}( t_i ) \big)^2,$$
    where $T_i$ denotes the observed response. The bandwidth $\theta_K^{*}$ is then obtained as
$$\theta_K^{*} = \arg\min_{\theta_K > 0} CV( \theta_K ).$$
    In practice, cross-validation provides a robust and reliable alternative to ad hoc choices, and its integration into our estimation procedure guarantees a principled balance between accuracy and stability.
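As a concrete illustration of the leave-one-out idea, the Python sketch below selects θ_K over a finite grid of candidates. It uses the regression-type prediction error, in the spirit of the fregre.np.cv criterion cited above, rather than reimplementing the package; the quartic kernel and the choice of grid are assumptions.

```python
import numpy as np

def quartic(z):
    # quartic kernel on [0, 1]
    z = np.asarray(z, dtype=float)
    return np.where((z >= 0.0) & (z <= 1.0), 15.0 / 16.0 * (1.0 - z**2) ** 2, 0.0)

def cv_bandwidth(D, T, grid):
    """Leave-one-out cross-validation for theta_K.

    D    : (n, n) matrix of semi-metric distances d(S_i, S_j),
    T    : (n,) observed responses,
    grid : candidate bandwidths; returns (best theta_K, its CV error).
    """
    T = np.asarray(T, dtype=float)
    best_theta, best_err = None, np.inf
    for theta in grid:
        W = quartic(D / theta)
        np.fill_diagonal(W, 0.0)          # remove the i-th observation
        denom = W.sum(axis=1)
        if np.any(denom == 0.0):          # theta too small: empty neighborhoods
            continue
        pred = (W @ T) / denom            # leave-one-out kernel prediction of T_i
        err = np.mean((T - pred) ** 2)    # CV(theta_K)
        if err < best_err:
            best_theta, best_err = theta, err
    return best_theta, best_err
```

In practice the grid is often taken over quantiles of the observed distances, so that every candidate bandwidth leaves non-empty neighborhoods.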
    Now, we calculate the estimates of both the conditional distribution and the conditional density functions, and compare them with their theoretical counterparts on the same graphs (Figure 4 and Figure 5).
    It is apparent that our estimations exhibit high accuracy when optimal bandwidths are selected. To assess the performance of each model more rigorously, we compute the mean squared error, as shown in Table 1.
    For the next step in achieving the desired objective and firmly establishing the normal approximation of λ ^ s ( t ) with high effectiveness, we selected the sample that produced an estimate with the smallest MSE (sample size n = 1000 ), and followed the subsequent steps:
    • We compute the conditional hazard function estimator using (1), the asymptotic variance $\hat{\sigma}_{\hat{\lambda}}^2$ defined in (6), and the empirical estimate $\hat{\phi}_s( \theta_K )$.
    • Under the condition
$$\lim_{n \to \infty} \sqrt{ n\, \theta_H\, \phi_s( \theta_K ) }\; NB_\lambda( s, t ) = 0,$$
      we can ignore the bias term $NB_\lambda( s, t )$ and compute the quantity referred to in (7), namely the Quantile Normalized Hazard (QNH),
$$\mathrm{QNH} = \sqrt{ \frac{ n\, \theta_H\, \hat{\phi}_s( \theta_K ) }{ \hat{\sigma}_{\hat{\lambda}}^2 } }\; \big( \hat{\lambda}^s(t) - \lambda^s(t) \big).$$
    • Plot a histogram of QNH and compare it with the standard normal density (Figure 6).
    • Finally, build the confidence bounds (see Figure 7).
To complement the graphical evidence and provide a quantitative validation of the asymptotic normality of the QNH statistic, we applied the Kolmogorov–Smirnov (KS) goodness-of-fit test. Specifically, we tested the standardized QNH values against the standard normal distribution for different sample sizes. The results are reported in Table 2.
The p-values are all larger than the usual threshold of 0.05 , which means that the null hypothesis of normality cannot be rejected. These results quantitatively confirm that the empirical distribution of the QNH statistic is consistent with the standard normal law, thereby supporting the asymptotic normality established in our main theorem.
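The KS check can be reproduced with a short routine. The Python sketch below computes the QNH statistic directly from its definition and the one-sample KS distance to N(0, 1) by hand, without relying on an external statistics package; comparing the distance with the classical large-sample cutoff (about 1.36/√m at the 5% level) is the usual decision rule.

```python
import numpy as np
from statistics import NormalDist

def qnh(lam_hat, lam_true, n, theta_H, phi_hat, sigma2_hat):
    """QNH = sqrt(n * theta_H * phi_hat / sigma2_hat) * (lam_hat - lam_true)."""
    scale = np.sqrt(n * theta_H * phi_hat / sigma2_hat)
    return scale * (np.asarray(lam_hat) - np.asarray(lam_true))

def ks_distance_to_normal(x):
    """One-sample Kolmogorov-Smirnov distance between the empirical CDF of x
    and the standard normal CDF."""
    x = np.sort(np.asarray(x, dtype=float))
    m = len(x)
    cdf = np.array([NormalDist().cdf(v) for v in x])
    d_plus = np.max(np.arange(1, m + 1) / m - cdf)    # ECDF above Phi
    d_minus = np.max(cdf - np.arange(0, m) / m)       # ECDF below Phi
    return max(d_plus, d_minus)
```

If Theorem 2 holds, QNH values collected over Monte Carlo replications should give a small KS distance, matching the non-rejection of normality reported in Table 2.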

5. Conclusions and Some Perspectives

Due to the complexity of the conditional hazard function estimator $\hat{\lambda}^s(t)$ obtained via the kernel approach, we first decomposed it into three parts, as shown in (A1). The first part corresponds to the numerator of the density estimator, $\hat{g}_N^s(t)$, which is the dominant component governing the asymptotic properties. We showed that the denominator converges in probability to $1 - G^s(t)$, while the remaining two parts capture the bias arising from $\hat{G}_N^s(t)$ and $\hat{g}_N^s(t)$.
From a practical perspective, we conducted a simulation study to validate the theoretical findings. Despite the inherent challenges of bandwidth selection, the proposed estimator exhibited strong performance with low mean squared error. More importantly, the QNH statistic empirically confirmed the asymptotic normality established in the main theorem, thereby demonstrating that the finite-sample results are consistent with the theoretical limit law. These findings highlight both the reliability of the method and its contribution to the broader literature on nonparametric functional estimation.
A potential direction for future work is to examine the sensitivity of the estimator to kernel choice and bandwidth selection procedures. Exploring data-driven or adaptive bandwidth strategies may enhance finite-sample performance. Additionally, investigating the behavior of the estimator in high-dimensional or complex data settings could broaden its practical utility.
This study enhances both the theoretical and empirical comprehension of the asymptotic characteristics of the proposed estimator, laying the foundation for future developments and applications in areas where estimating conditional functional parameters is essential.

Author Contributions

Conceptualization, A.R., Z.C.E., H.D. and A.B.; methodology, Z.C.E. and H.D.; software, A.R.; validation, Z.C.E., A.B., and H.D.; formal analysis, A.R., Z.C.E., F.A. and H.D.; investigation, A.R., Z.C.E., F.A., A.B., and H.D.; resources, A.R., Z.C.E., F.A. and H.D.; data curation, A.R.; writing original draft preparation, H.D. and A.R.; writing review and editing, A.R., Z.C.E., A.B., and H.D.; visualization, A.R. and H.D.; supervision, Z.C.E. and F.A.; project administration, Z.C.E.; funding acquisition, Z.C.E. and F.A. All authors have read and approved the final version of the manuscript for publication.

Funding

This research project was funded by (1) the Princess Nourah bint Abdulrahman University Researchers Supporting Project (Project Number PNURSP2025R358), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia, and (2) the Deanship of Research and Graduate Studies at King Khalid University, through the Small Research Groups Program under grant number R.G.P.1/118/46.

Data Availability Statement

The data used to support the findings of this study are available on request from the corresponding author.

Acknowledgments

The authors thank and express their sincere appreciation to the funders of this work: (1) Princess Nourah bint Abdulrahman University Researchers Supporting Project (Project Number: PNURSP2025R358), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; (2) The Deanship of Scientific Research at King Khalid University, through the Research Groups Program under grant number R.G.P. 1/118/46.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Proof of Theorem 1.
We need the following decomposition:
λ s ^ ( t ) λ s ( t ) = g ^ N s ( t ) G ^ D s G ^ N s ( t ) g s ( t ) 1 G s ( t ) = 1 G ^ D s G ^ N s ( t ) g ^ N s ( t ) E g ^ N s ( t ) + 1 G ^ D s G ^ N s ( t ) λ s ( t ) E G ^ N s ( t ) G u ( v ) + E g ^ N s ( t ) g s ( t ) + λ s ( t ) G ^ D s G ^ N s ( t ) 1 E G ^ N s ( t ) G ^ D s G ^ N s ( t )
together with the subsequent results. □
Lemma A1.
Under the assumptions ( H 1 ), ( H 4 )–( H 7 ), and for any fixed t, we have
g ^ N s ( t ) − E g ^ N s ( t ) = O a . co . log n n θ H ϕ s θ K as n → ∞ .
Proof of Lemma A1.
The strategy of the proof is to control the deviation of g ^ N s ( t ) from its expectation by expressing it as a sum of suitably normalized variables Z n ^ τ . We first bound the variance and covariance structure of these variables, using assumptions ( H 1 )–( H 6 ) and results from Ferraty et al. [2]. We then apply the exponential inequality of Kallabis and Neumann [22] to establish the desired probabilistic bound.
We apply the exponential inequality of Kallabis and Neumann [22] to the variables:
Z n ^ τ ( s , t ) = 1 n θ H E K 1 ( s ) Γ τ ( s , t ) E Γ τ ( s , t ) , 1 τ n
where Γ τ ( s , t ) = K τ ( s ) H τ ( t ) , s H , t R . Moreover, we have:
E Z n ^ τ = 0 Z n ^ τ 2 K H n θ H ϕ s ( θ K ) Lip Z n ^ τ 2 α θ K 1 H Lip ( K ) + θ H 1 K Lip ( H ) n θ H ϕ s ( θ K )
and
g ^ N s ( t ) E g ^ N s ( t ) = τ = 1 n Z n ^ τ .
In order to apply this inequality, we have to choose the quantity A n , which is governed by the variance of Z n ^ . We start by bounding Var τ = 1 n Z n ^ τ from above as follows:
Var τ = 1 n Z n ^ τ = 1 n θ H E K 1 ( s ) 2 τ = 1 n j = 1 n Cov Γ τ ( s , t ) , Γ j ( s , t ) = 1 n θ H E K 1 ( s ) 2 Var Γ 1 ( s , t ) + 1 n θ H E K 1 ( s ) 2 τ = 1 n j = 1 τ j n Cov Γ τ ( s , t ) , Γ j ( s , t ) = 1 n θ H E K 1 ( s ) 2 n Var Γ 1 ( s , t ) + τ τ j Cov Γ τ ( s , t ) , Γ j ( s , t ) = 1 n θ H E K 1 ( s ) 2 n A 1 + A τ j
where,
A 1 = Var Γ 1 ( s , t ) = E K 1 2 ( s ) H 1 2 ( t ) E K 1 ( s ) H 1 ( t ) 2 .
Thus, under ( H 2 ) and ( H 3 ), and by integrating over the real component z, it follows that
E K 1 2 ( s ) H 1 2 ( t ) = E K 1 2 ( s ) E H 1 2 ( t ) | S 1 = E K 1 2 ( s ) H 2 t z θ H g S ( z ) d z ; taking v = t z θ H = θ H E K 1 2 ( s ) H 2 ( v ) g S t θ H v d v .
For n large enough, a first-order Taylor expansion gives
g S t θ H v = g S ( t ) + O θ H = g S ( t ) + o ( 1 ) .
Hence
E K 1 2 ( s ) H 1 2 ( t ) = θ H H 2 ( v ) d v E K 1 2 ( s ) g S ( t ) + o θ H .
We denote by φ m ( S , t ) : = m g S ( t ) t m for m { 0 } , then
E K 1 2 ( s ) φ m S , t = φ m ( s , t ) E K 1 2 ( s ) + E K 1 2 ( s ) φ m ( S , t ) φ m ( s , t ) = φ m ( s , t ) E K 1 2 ( s ) + E [ K 1 2 ( s ) ( Φ m ( d ( s , S ) ) ]
In accordance with Ferraty et al. [2] (refer to Lemma 1, page 26), we establish
E K 1 2 ( s ) Φ m ( d ( s , S ) ) = K 1 2 d ( s , S ) θ K Φ m d ( s , S ) d μ d ( s , S ) ( v ) = K 1 2 v θ K Φ m ( v ) d μ d ( s , S ) ( v ) = K 1 2 ( v ) Φ m ( θ K v ) d μ d ( s , S ) θ K ( v ) = θ K Φ m ( 0 ) v K 1 2 ( v ) d μ d ( s , S ) θ K ( v ) + o ( θ K ) 0 for m = 0 .
The last line is justified by a first-order Taylor expansion of Φ around 0, together with Φ 0 ( 0 ) = 0 . Additionally, we employ the results of Lemma 2 on page 27 in Ferraty et al. [2].
E K 1 2 ( s ) = ϕ s θ K K 2 ( 1 ) 0 1 ( K ( u ) ) 2 β s ( u ) d u + o ( 1 ) .
Then, (A5) becomes
E K 1 2 ( s ) φ 0 S , t = E K 1 2 ( s ) g S ( t ) = φ 0 ( s , t ) E K 1 2 ( s ) . = ϕ s θ K g s ( t ) K 2 ( 1 ) 0 1 ( K ( u ) ) 2 β s ( u ) d u + o ϕ s ( θ K ) .
This enables us to deduce
E K 1 2 ( s ) H 1 2 ( t ) = θ H H 2 ( v ) d v ϕ s θ K g s ( t ) K 2 ( 1 ) 0 1 ( K ( u ) ) 2 β s ( u ) d u + o θ H ϕ s θ K .
For the second term in (A4), following the same steps as above under ( H 3 ), we obtain
E K 1 ( s ) H 1 ( t ) = θ H ϕ s θ K g s ( t ) K ( 1 ) 0 1 ( K ( u ) ) β s ( u ) d u + o θ H ϕ s θ K .
Consequently, we obtain:
Var Γ 1 ( s , t ) = θ H ϕ s θ K g s ( t ) H 2 ( v ) d v K 2 ( 1 ) 0 1 K 2 ( u ) β s ( u ) d u + o θ H ϕ s θ K .
and
1 n θ H E K 1 ( s ) 2 Var Γ 1 ( s , t ) = O 1 n θ H ϕ s θ K .
For the second term A τ j , we split the sum into two parts according to a sequence m n with m n → ∞ as n → ∞ .
A τ v = τ = 1 n j = 1 τ j n Cov Γ τ ( s , t ) , Γ j ( s , t ) = τ = 1 n j = 1 0 < | τ j | m n n Cov Γ τ ( s , t ) , Γ j ( s , t ) + τ = 1 n j = 1 | τ j | > m n n Cov Γ τ ( s , t ) , Γ j ( s , t ) = : I n + ⨿ n .
Under assumptions ( H 1 ), ( H 3 ), and ( H 5 ), we infer, for τ ≠ j ,
| I n | = τ = 1 n j = 1 0 < | τ j | m n n | Cov Γ τ ( s , t ) , Γ j ( s , t ) | n m n sup τ j | Cov Γ τ ( s , t ) , Γ j ( s , t ) | κ n m n sup τ j | E [ K τ H τ K j H j ] | + E [ K 1 H 1 ] 2 κ n m n θ H 2 ϕ s 2 ( θ K ) + ( θ H ϕ s ( θ K ) ) 2 κ n m n θ H 2 ϕ s 2 ( θ K ) .
Now, under the assumptions ( H 3 )–( H 5 ), we set
| ⨿ n | = | τ = 1 n j = 1 | τ j | > m n n Cov Γ τ ( s , t ) , Γ j ( s , t ) | τ = 1 n j = 1 | τ j | > m n n | Cov Γ τ ( s , t ) , Γ j ( s , t ) | Lip ( K ) θ K + Lip ( H ) θ H 2 τ = 1 n j = 1 | τ j | > m n n χ τ , j κ n Lip ( K ) θ K + Lip ( H ) θ H 2 χ m n κ n Lip ( K ) θ K + Lip ( H ) θ H 2 e a m n .
Then, using (A13) and (A14), we obtain
A τ j = τ = 1 n j = 1 τ j n Cov Γ τ ( s , t ) , Γ j ( s , t ) κ n m n θ H 2 ϕ s 2 ( θ K ) + Lip ( K ) θ K + Lip ( H ) θ H 2 e a m n .
By taking
m n = 1 γ log γ θ K 1 Lip ( K ) + θ H 1 Lip ( H ) 2 θ H 2 ϕ s 2 ( θ K ) ,
we get
1 n θ H E [ K 1 ( s ) ] 2 A τ j 0 , as n .
Finally, using (A11) and (A15), we obtain
Var τ = 1 n Z n ^ τ = O 1 n θ H ϕ s ( θ K ) .
Now, we need to evaluate the covariance term Cov Z n ^ l 1 Z n ^ l u , Z n ^ v 1 Z n ^ v r , for all l 1 , , l u N u and v 1 , , v r N r with 1 l 1 l u v 1 v r n . For that, we distinguish the following cases:
  • If v 1 = l u . Using the result (A8), we obtain
    Cov Z n ^ l 1 Z n ^ l u , Z n ^ v 1 Z n ^ v r 1 n θ H E K 1 u + r E Z n ^ l 1 Z n ^ v 1 2 Z n ^ v r κ K H n θ H ϕ s ( θ K ) u + r E K v 1 2 H v 1 2 κ n θ H ϕ s θ K u + r θ H ϕ s θ K .
  • If v 1 > l u , then, by quasi-association under ( H 5 ), we obtain
    Cov Z n ^ l 1 Z n ^ l u , Z n ^ v 1 Z n ^ v r 4 Lip ( K ) θ K + Lip ( H ) θ H n θ H ϕ s θ K 2 × 2 κ K H n θ H ϕ s θ K u + r 2 τ = 1 u j = 1 r χ l τ , v j Lip ( K ) θ K + Lip ( H ) θ H 2 κ n θ H ϕ s θ K u + r ( u r ) χ v 1 l u Lip ( K ) θ K + Lip ( H ) θ H 2 κ n θ H ϕ s θ K u + r r e a v 1 l u .
Moreover, by ( H 6 ), we have
Cov Z n ^ l 1 Z n ^ l u , Z n ^ v 1 Z n ^ v r κ K H n θ H ϕ s θ K u + r 2 Cov Z n ^ v 1 , Z n ^ l u κ K H n θ H ϕ s θ K u + r 2 E Z n ^ l u Z n ^ v 1 + E Z n ^ l u E Z n ^ v 1 α K H n θ H ϕ s θ K u + r 2 κ n θ H ϕ s θ K 2 × θ H 2 sup l m P S l , S m B s , θ K × B s , θ K + P S 1 B s , θ K 2 κ n θ H ϕ s θ K u + r θ H 2 ϕ s 2 θ K .
Furthermore, taking the ( 1 − η )-power of (A17) and the η -power of (A18), with 1 / 4 < η < 1 / 2 , we derive an upper bound for the three terms as follows: for 1 ≤ l 1 ≤ l u ≤ v 1 ≤ v r ≤ n ,
Cov Z n ^ l 1 Z n ^ l u , Z n ^ v 1 Z n ^ v r Lip ( K ) θ K + Lip ( H ) θ H 2 η κ n θ H ϕ s θ K u + r × θ H ϕ s θ K 2 ( 1 η ) r η e a η v 1 l u Lip ( K ) θ K + Lip ( H ) θ H η κ n θ H ϕ s θ K θ H ϕ s θ K ( 1 η ) 2 × κ n θ H ϕ s θ K u + r 2 r e a η v 1 l u
The variables Z n ^ τ , τ = 1 , … , n , fulfill the requirements of the exponential inequality of Kallabis and Neumann [22] for
Υ = Lip ( K ) θ K + Lip ( H ) θ H η κ n θ H ϕ s θ K θ H ϕ s θ K ( 1 η ) M = κ n θ H ϕ s θ K ; A ρ = 1 n θ H ϕ s θ K B ρ = 16 ρ Υ 2 9 A ρ ( 1 e η ) 1 2 ( Υ M ) 1 e η = 1 n θ H ϕ s θ H
Thus,
P g ^ N s ( t ) E g ^ N s ( t ) > η log n n θ H ϕ s θ K = P τ = 1 n Z n ^ τ > η log n n θ H ϕ s θ K exp η 2 log n 2 n θ H ϕ s θ K D ( n )
where
D ( n ) = 1 n θ H ϕ s θ K + 1 n θ H ϕ s θ H 1 3 η 2 log n n θ H ϕ s θ K 5 6 .
Hence
P g ^ N s ( t ) E g ^ N s ( t ) > η log n n θ H ϕ s θ K exp η 2 log n 2 + ( η 2 log ( n ) ) 5 / 6 n θ H ϕ s θ K 1 / 6 α exp η 2 log ( n )
Finally, choosing η large enough that the bound α exp ( − η 2 log ( n ) ) = α n − η 2 is summable in n, the proof follows from the Borel–Cantelli lemma. □
Lemma A2.
Assuming that the conditions ( H 1 )–( H 8 ) hold, then for n , we infer
E G ^ N s ( t ) G s ( t ) = N B G ( s , t ) + o θ H 2 + o θ K
E g ^ N s ( t ) g s ( t ) = N B g ( s , t ) + o θ H 2 + o θ K
where
N B G ( s , t ) = θ H 2 2 v 2 H ( v ) d v 2 G s ( t ) t 2 + θ K Ψ 2 ( 0 ) ω 0 ω 1 N B g ( s , t ) = θ H 2 2 v 2 H ( v ) d v 2 g s ( t ) t 2 + θ K Φ 2 ( 0 ) ω 0 ω 1 ω 0 = K ( 1 ) 0 1 ( u K ( u ) ) β s ( u ) d u ω τ = K τ ( 1 ) 0 1 K τ ( u ) β s ( u ) d u for τ = 1 , 2 .
Proof of Lemma A2.
To establish the bias expansion of the estimators G ^ N s ( t ) and g ^ N s ( t ) , we first express their expectations in a convenient form and then apply Taylor expansions under the regularity conditions imposed in Assumption ( H 3 ). The strategy consists of isolating the leading terms and carefully controlling the remainders, which will ultimately yield the stated asymptotic orders.
Taking v = ( t − z ) / θ H and using the stationarity property, we can write the following:
  • For the bias term of G ^ N s ( t )
    E G ^ N s ( t ) = E 1 n E K 1 ( s ) i = 1 n K i ( s ) H i ( t ) = 1 E K 1 ( s ) E K 1 ( s ) E H 1 ( t ) | S
    with
    E H 1 ( t ) | S = R H 1 t z θ H g S z d z = 1 θ H R H 1 t z θ H G S z d z = R H 1 ( v ) G S t θ H v d v
    Using a Taylor expansion of the function G S t θ H v
    G S t θ H v = G S ( t ) θ H v G S ( t ) t + θ H 2 v 2 2 2 G S ( t ) t 2 + o θ H 2 .
    Under (A22) and hypothesis ( H 3 ), we deduce
    E H 1 ( t ) | S = G S ( t ) + θ H 2 2 2 G S ( t ) 2 t v 2 H ( v ) d v + o θ H 2 .
    Insert (A23) in (A21)
    E G ^ N s ( t ) = 1 E K 1 ( s ) E K 1 ( s ) G S ( t ) + θ H 2 2 v 2 H ( v ) d v E K 1 ( s ) 2 G S ( t ) 2 t + o θ H 2 .
    Denote by ψ m ( S , t ) : = m G S ( t ) t m for m { 0 , 2 } , then
    E G ^ N s ( t ) = E K 1 ( s ) ψ 0 ( S , t ) E K 1 ( s ) + E K 1 ( s ) ψ 2 ( S , t ) E K 1 ( s ) θ H 2 2 v 2 H ( v ) d v + o λ H 2
    where
    E K 1 ( s ) ψ m S , t = ψ m ( s , t ) E K 1 ( s ) + E K 1 ( s ) ψ m ( S , t ) ψ m ( s , t ) = ψ m ( s , t ) E K 1 ( s ) + E K 1 ( s ) Ψ m ( d ( s , S ) )
    with the same steps following to evaluate (A6), we set
    E K 1 ( s ) Ψ m ( d ( s , S ) ) = K 1 d ( s , S ) θ K Ψ m d ( s , S ) d μ d ( s , S ) ( v ) = K 1 v θ K Ψ m ( v ) d μ d ( s , S ) ( v ) = K 1 ( v ) Ψ m ( θ K v ) d μ d ( s , S ) θ K ( v ) = θ K Ψ m ( 0 ) v K 1 ( v ) d μ d ( s , S ) θ K ( v ) + o ( θ K )
    The last line is justified by a first-order Taylor expansion of Ψ around 0. Additionally, we employ the results of Ferraty et al. [2].
    E K 1 ( s ) = ϕ s θ K K ( 1 ) 0 1 K ( u ) β s ( u ) d u + o ( 1 ) . v K 1 ( v ) d μ d ( s , S ) θ K ( v ) = K ( 1 ) 0 1 ( u K ( u ) ) β s ( u ) d u
    which allows us, under (A26), to set:
    E K 1 ( s ) Ψ m ( d ( s , S ) ) = θ K ϕ s ( θ K ) Ψ m ( 0 ) K ( 1 ) 0 1 ( u K ( u ) ) β s ( u ) d u + o ( 1 )
    Using (A25), (A27) and the fact that Ψ 0 ( 0 ) = 0 ,
    E K 1 ( s ) ψ 0 ( S , t ) E K 1 ( s ) = ψ 0 ( s , t ) + o ( θ K ) E K 1 ( s ) ψ 2 ( S , t ) E K 1 ( s ) = ψ 2 ( s , t ) + θ K Ψ 2 ( 0 ) K ( 1 ) 0 1 ( u K ( u ) ) β s ( u ) d u + o ( 1 ) K ( 1 ) 0 1 K ( u ) β s ( u ) d u + o ( 1 ) + o ( θ K )
    Hence,
    E G ^ N s ( t ) = G s ( t ) + θ H 2 2 v 2 H ( v ) d v 2 G s ( t ) t 2 + θ K Ψ 2 ( 0 ) K ( 1 ) 0 1 ( u K ( u ) ) β s ( u ) d u K ( 1 ) 0 1 K ( u ) β s ( u ) d u + o θ H 2 + o θ K .
  • For the bias term of g ^ N s ( t ) , we start by writing:
    E g ^ N s ( t ) = 1 E K 1 ( s ) E K 1 ( s ) E θ H 1 H 1 ( t ) | S with θ H 1 E H 1 ( t ) | S = R H ( v ) g S t θ H v d v .
    Using a Taylor expansion under ( H 3 ), we infer:
    θ H 1 E H 1 ( t ) | S = g S ( t ) + θ H 2 2 2 g S ( t ) 2 t v 2 H ( v ) d v + o θ H 2 .
    The same steps used to study E G ^ N s ( t ) (see Rassoul et al. [29]) allow us to infer that
    E g ^ N s ( t ) g s ( t ) = N B g ( s , t ) + o θ H 2 + o θ K . □
Corollary A1.
Under assumptions ( H 1 ) ( H 8 ) , we obtain
Var G ^ D s = 1 n K 2 ( 1 ) 0 1 ( K ( u ) ) 2 β s ( u ) d u ϕ s ( θ K ) K ( 1 ) 0 1 K ( u ) β s ( u ) d u 2 1 + o 1 n ϕ s ( θ K ) Var G ^ N s ( t ) = G s ( t ) n ϕ s θ K H 2 ( v ) d v K 2 ( 1 ) 0 1 K 2 ( u ) β s ( u ) d u K ( 1 ) 0 1 K ( u ) β s ( u ) d u 2 + o 1 n ϕ s θ K Cov G ^ D s , G ^ N s ( t ) = G s ( t ) K 2 ( 1 ) 0 1 K 2 ( u ) β s ( u ) d u n ϕ s θ K K ( 1 ) 0 1 K ( u ) β s ( u ) d u 2 G s ( t ) n + o 1 n ϕ s ( θ K )
Proof of Corollary A1.
The aim of this corollary is to establish the asymptotic order of the variance of the estimators g ^ N s ( t ) , G ^ N s ( t ) , and G ^ D s , as well as to evaluate their covariance. The proof builds directly on the decomposition introduced in Equation (A2), together with the variance and covariance controls derived in Lemma A1. We proceed by carefully adapting those arguments, first for the variance of g ^ N s ( t ) , then for G ^ N s ( t ) , and finally for G ^ D s , before concluding with the covariance term.
Using (A2), we can write
Var g ^ N s ( t ) = Var g ^ N s ( t ) E g ^ N s ( t ) = Var τ = 1 n Z n ^ τ
Then, to calculate Var G ^ N s ( t ) , we follow the same steps used in the evaluation of (A3); in conclusion, we get
Var g ^ N s ( t ) = g s ( t ) n θ H ϕ s θ K K 2 ( 1 ) 0 1 K 2 ( u ) β s ( u ) d u K ( 1 ) 0 1 K ( u ) β s ( u ) d u 2 H 2 ( v ) d v + o 1 n θ H ϕ s θ K Var G ^ N s ( t ) = G s ( t ) n ϕ s θ K K 2 ( 1 ) 0 1 K 2 ( u ) β s ( u ) d u K ( 1 ) 0 1 K ( u ) β s ( u ) d u 2 H 2 ( v ) d v + o 1 n ϕ s θ K
For the second result, concerning Var ( G ^ D s ) , and keeping the same notation with respect to the definition of G ^ D s in (5), we set
Var G ^ D s = 1 n E K 1 ( s ) 2 Var i = 1 n K i ( u ) = 1 n E K 1 ( s ) 2 Var K 1 ( s ) + 1 n E K 1 ( s ) 2 i = 1 n j = 1 i j n Cov K i ( s ) , K j ( s ) = V K 1 + V K 2
Moreover,
V K 1 = E ( K 1 2 ( s ) ) E K 1 ( s ) 2 n E K 1 ( s ) 2 = 1 n E ( K 1 2 ( s ) ) E K 1 ( s ) 2 1 = 1 n K 2 ( 1 ) 0 1 ( K ( u ) ) 2 β s ( u ) d u ϕ s ( θ K ) K ( 1 ) 0 1 K ( u ) β s ( u ) d u 2 1 + o 1 n ϕ s ( θ K ) .
Furthermore, we decompose the second term V K 2 as follows:
i = 1 n j = 1 i j n Cov K i ( s ) , K j ( s ) = i = 1 n j = 1 0 < | i j | m n n Cov K i ( s ) , K j ( s ) + i = 1 n j = 1 | i j | > m n n Cov K i ( s ) , K j ( s ) = : J 1 + J 2 .
Now, under assumption ( H 6 ), we have
| J 1 | = i 0 < | i j | m n Cov K i ( s ) , K j ( s ) n m n max i j E K i ( s ) K j ( s ) + E K 1 ( s ) 2 α n m n ϕ s 2 θ K
From condition ( H 5 ), we infer that
| J 2 | = i | i j | > m n Cov K i ( s ) , K j ( s ) α Lip ( K ) θ K 2 i | i j | > m n χ i , j α n θ K 2 e a m n
This implies that
i = 1 n j = 1 i j n Cov K i ( s ) , K j ( s ) i = 1 n i j Cov K i ( s ) , K j ( s ) α n m n ϕ s 2 θ K + θ K 2 e a m n
Next, taking
m n = 1 γ log γ θ K 2 ϕ s 2 θ K ,
we obtain
V K 2 = 1 n ϕ s ( θ K ) 2 i = 1 n j = 1 i j n Cov K i ( s ) , K j ( s ) 0 , as n .
Finally, we get
Var G ^ D s = 1 n K 2 ( 1 ) 0 1 ( K ( u ) ) 2 β s ( u ) d u ϕ s θ K K ( 1 ) 0 1 K ( u ) β s ( u ) d u 2 1 + o 1 n ϕ s ( θ K ) .
Now we evaluate Cov G ^ D s , G ^ N s ( t ) as follows:
Cov G ^ D s , G ^ N s ( t ) = 1 n E K 1 ( s ) 2 Cov K 1 ( s ) , Γ 1 ( t ) + 1 n E K 1 ( s ) 2 i j Cov K i ( s ) , Γ j ( t ) = C V 1 + C V i j with Γ i ( s , t ) = K i ( s ) H i ( t )
where
C V 1 = E K 1 2 ( s ) H 1 ( t ) n E K 1 ( s ) 2 E K 1 ( s ) H 1 ( t ) n E K 1 ( s ) .
For the first term on the right-hand side of (A32), we have
E K 1 2 ( s ) H 1 ( t ) = E K 1 2 ( s ) E H 1 ( t ) | S = E K 1 2 ( s ) R H 1 t z θ H g S ( z ) d z = 1 θ H E K 1 2 ( s ) R H 1 t z θ H G S z d z = E K 1 2 ( s ) R H 1 v G S t θ H v d v = E K 1 2 ( s ) G S ( t ) + o 1 .
The last line is justified by a first-order Taylor expansion of G around t for n large enough. Furthermore, replacing K by K 2 in (A25) and following the same steps and techniques, the fact that Ψ 0 ( 0 ) = 0 makes the second term E K 1 2 ( s ) Φ m ( d ( s , S ) ) vanish, which allows us to obtain the following:
E K 1 2 ( s ) H 1 ( t ) = G s ( t ) E K 1 2 ( s ) + o 1 = G s ( t ) ϕ s θ K K 2 ( 1 ) 0 1 K 2 ( u ) β s ( u ) d u + o ϕ s θ K
Then,
E K 1 2 ( s ) H 1 ( t ) n E K 1 ( s ) 2 = G s ( t ) n ϕ s θ K K 2 ( 1 ) 0 1 K 2 ( u ) β s ( u ) d u K ( 1 ) 0 1 K ( u ) β s ( u ) d u 2 + o 1 n ϕ s θ K
By the same technique used above in (A33) and (A34), we can write
E K 1 ( s ) H 1 ( t ) n E K 1 ( s ) = G s ( t ) n + o 1 n = O 1 n
Finally, by (A35) and (A36), we infer
C V 1 = 1 n E K 1 ( s ) 2 Cov K 1 ( s ) , Γ 1 ( t ) = G s ( t ) K 2 ( 1 ) 0 1 K 2 ( u ) β s ( u ) d u n ϕ s θ K K ( 1 ) 0 1 K ( u ) β s ( u ) d u 2 G s ( t ) n + o 1 n ϕ s θ K
Furthermore, for the second term C V i j , we follow the steps used to analyze Var τ = 1 n Z n ^ τ in Lemma A1 and split the sum as follows:
i = 1 n j = 1 i j n Cov K i ( s ) , Γ j ( s , t ) = i = 1 n j = 1 0 < | i j | m n n Cov K i ( s ) , Γ j ( s , t ) + i = 1 n j = 1 | i j | > m n n Cov K i ( s ) , Γ j ( s , t ) = : P 1 + P 2 .
We keep the same notation and, under assumptions ( H 1 ), ( H 3 ), and ( H 6 ), we infer, for i ≠ j :
| P 1 | = i = 1 n j = 1 0 < | i j | m n n Cov K i ( s ) , Γ j ( s , t ) i = 1 n j = 1 0 < | i j | m n n E K i ( s ) Γ j ( s , t ) α i = 1 n j = 1 0 < | i j | m n n E K j H j α i = 1 n j = 1 0 < | i j | m n n E K j E ( H j | S ) θ n m n ϕ s 2 θ K
Since K and H are bounded, we also obtain
| P 2 | = i = 1 n j = 1 | i j | > m n n Cov K i ( s ) , Γ j ( s , t ) Lip ( K ) θ K 2 + Lip ( H ) θ H i = 1 n j = 1 | i j | > m n n χ i , j α n Lip ( K ) θ K 2 + Lip ( H ) θ H χ m n α n Lip ( K ) θ K 2 + Lip ( H ) θ H e a m n .
Then, combining (A38) and (A39), we obtain:
i j Cov K i ( s ) , Γ j ( s , t ) α n m n ϕ s 2 θ K + Lip ( K ) θ K 2 + Lip ( H ) θ H e a m n
Taking
m n = 1 γ log θ K 1 Lip ( K ) 2 + θ H 1 Lip ( H ) γ ϕ s 2 θ K ,
we finally obtain the following:
C V i j = 1 n E ( K 1 ( s ) ) 2 i j Cov K i ( s ) , Γ j ( s , t ) 0 , as n .
Combining the results (A37) and (A40) we obtain
Cov G ^ D s , G ^ N s ( t ) = G s ( t ) K 2 ( 1 ) 0 1 K 2 ( u ) β s ( u ) d u n ϕ s θ K K ( 1 ) 0 1 K ( u ) β s ( u ) d u 2 G s ( t ) n + o 1 n ϕ s θ K
Lemma A3.
Under the assumptions of Theorem 1
G ^ D s − G ^ N s ( t ) ⟶ 1 − G s ( t ) , in probability .
and
n θ H ϕ s θ K σ λ ^ 2 1 E G ^ N s ( t ) G ^ D s G ^ N s ( t ) = O p ( 1 )
with
σ λ ^ 2 = ω 2 λ s ( t ) ω 1 2 1 G s ( t ) ( H ( u ) ) 2 d u .
Proof of Lemma A3.
To begin, we note that Lemma A2 and Corollary A1 allow us to control the behavior of the difference G ^ D s − G ^ N s ( t ) − 1 + G s ( t ) ; in particular, they permit us to write the following:
E G ^ D s G ^ N s ( t ) 1 + G s ( t ) 0
and
Var G ^ D s G ^ N s ( t ) 1 + G s ( t ) 0 .
Then, by Markov’s inequality:
G ^ D s G ^ N s ( t ) 1 + G s ( t ) 0 in probability .
Finally, by combining this result with the fact that
E G ^ D s G ^ N s ( t ) 1 + E G ^ N s ( t ) = 0 ,
we get the required result. □
Proof of Theorem 2.
We need the decomposition (A1), together with the results established in Lemmas A2 and A3, and the subsequent result given in Lemma A4:
Lemma A4.
Assuming that ( H 1 )–( H 7 ) hold, then
n θ H ϕ s θ K g ^ N s ( t ) E g ^ N s ( t ) D N 0 , σ g 2
with
σ g 2 = ω 2 g s ( t ) ω 1 2 ( H ( u ) ) 2 d u
Proof of Lemma A4.
Before presenting the detailed computations, we outline the idea: the centered and scaled estimator g ^ N s ( t ) is written as a sum of dependent variables Z n τ ( s , t ) , which is partitioned into large blocks, small blocks, and a remainder. Using stationarity and covariance bounds, we show that the contributions of small blocks and the remainder vanish, while the sum over large blocks converges in distribution to a normal variable via a characteristic function argument.
By the definition of g ^ N s ( t ) in (5), it follows that
n θ H ϕ s θ K g ^ N s ( t ) E g ^ N s ( t ) = τ = 1 n Z n τ ( s , t ) = V n ,
where
Z n τ ( s , t ) = ϕ s θ K n θ H E K 1 ( s ) ( Γ τ ( s , t ) E Γ τ ( s , t ) ) ,
and
Γ τ ( s , t ) = K τ ( s ) H τ ( t ) , s H , t R , 1 τ n .
The result is
V n → D N ( 0 , σ g 2 ) .
To establish this, we employ Doob’s blocking technique [31]. Specifically, we choose two sequences of natural numbers that tend to infinity:
p = O n ϕ s ( θ K ) , q = o ( p ) ,
and divide V n as follows:
V n = R n ′ + R n ″ + ζ k , with R n ′ = τ = 1 k η τ , R n ″ = τ = 1 k ξ τ .
Here,
η τ = τ I τ Z n τ ( s , t ) , ξ τ = τ J τ Z n τ ( s , t ) , ζ k = τ = k ( p + q ) + 1 n Z n τ ( s , t ) ,
with
I τ = { ( τ 1 ) ( p + q ) + 1 , , ( τ 1 ) ( p + q ) + p } , J τ = { ( τ 1 ) ( p + q ) + p + 1 , , τ ( p + q ) } .
Remark that, for k = [ n / ( p + q ) ] (where [ · ] denotes the integer part), we have
k q n 0 , k p n 1 , q n 0 ,
which implies p / n → 0 as n → ∞ .
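The partition into large blocks I τ , small blocks J τ , and a remainder ζ k can be sketched as follows (an illustrative Python helper; the function name is ours):

```python
def doob_blocks(n, p, q):
    """Partition {1, ..., n} into k large blocks I_tau of length p,
    k small blocks J_tau of length q, and a remainder,
    with k = n // (p + q), as in Doob's blocking technique."""
    k = n // (p + q)
    # I_tau = {(tau-1)(p+q)+1, ..., (tau-1)(p+q)+p}
    I = [list(range((t - 1) * (p + q) + 1, (t - 1) * (p + q) + p + 1))
         for t in range(1, k + 1)]
    # J_tau = {(tau-1)(p+q)+p+1, ..., tau(p+q)}
    J = [list(range((t - 1) * (p + q) + p + 1, t * (p + q) + 1))
         for t in range(1, k + 1)]
    # Remainder indices k(p+q)+1, ..., n (summed in zeta_k).
    rest = list(range(k * (p + q) + 1, n + 1))
    return I, J, rest
```

For example, `doob_blocks(23, 3, 2)` yields k = 4 large blocks of length 3, four small blocks of length 2, and the remainder {21, 22, 23}; together they partition {1, …, 23}, which is exactly the decomposition V n = R n ′ + R n ″ + ζ k above.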
Our asymptotic result is now based on
E ( R n ″ ) 2 + E ζ k 2 → 0 ,
and
R n ′ → D N ( 0 , σ g 2 ) .
Proof of (A42).
By stationarity, we have:
E ( R n ″ ) 2 = k Var ( ξ 1 ) + 2 1 ≤ τ < s ≤ k Cov ( ξ τ , ξ s ) .
and
k Var ( ξ 1 ) q k Var Z n 1 ( s , t ) + 2 k 1 τ < s q Cov Z n τ ( s , t ) , Z n s ( s , t ) .
Using (A11) and the fact that k q / n → 0 , we get
q k Var Z n 1 ( s , t ) = q k ϕ s θ K 1 n θ H E 2 K 1 ( s ) Var Γ 1 ( s , t ) = O k q n 0 , as n .
On the other hand, for the covariance term, we have:
k 1 τ < j q Cov Z n τ ( s , t ) , Z n j ( s , t ) = k ϕ s ( θ K ) n θ H E 2 K 1 ( s ) 1 τ < j q Cov Γ τ ( s , t ) , Γ j ( s , t ) .
Similarly to (A12), we handle this last covariance term using the previously established bounds.
τ = 1 q j = 1 τ j q Cov Γ τ ( s , t ) , Γ j ( s , t ) = τ = 1 n j = 1 0 < | τ j | m n q Cov Γ τ ( s , t ) , Γ j ( s , t ) + τ = 1 q j = 1 | τ j | > m n q Cov Γ τ ( s , t ) , Γ j ( s , t ) = I q + ⨿ q
Thus, by the same steps used to evaluate (A12), we obtain:
1 τ < j q | Cov ( Γ τ ( s , t ) , Γ j ( s , t ) ) | = o ( q θ H ϕ s θ K ) .
Then
k 1 τ < j k | Cov ( Z n τ ( s , t ) , Z n j ( s , t ) ) | = O ( k q n ) 0 , as n .
From (A45)–(A49) we get
k Var ( ξ 1 ) 0 , as n .
For the second term of (A44), we use stationarity to evaluate the right-hand side.
1 τ j k Cov ( ξ τ , ξ j ) = m = 1 k 1 ( k m ) Cov ( ξ 1 , ξ m + 1 ) k m = 1 k 1 Cov ( ξ 1 , ξ m + 1 ) k m = 1 k 1 ( τ , j ) J 1 × J m + 1 Cov Z n τ ( s , t ) , Z n j ( s , t ) .
For all ( τ , j ) ∈ J 1 × J m + 1 , we have | τ − j | ≥ p + 1 > p ; then
1 τ < j k | Cov ( ξ τ , ξ j ) | k α ϕ s θ K ( θ K 1 L i p ( K ) + θ H 1 L i p ( H ) ) 2 n θ H E 2 K 1 ( s ) τ = 1 p j = 2 p + q + 1 | τ j | > p k ( p + q ) χ τ , j θ k p ϕ s θ K ( θ K 1 L i p ( K ) + θ H 1 L i p ( H ) ) 2 n θ H E 2 K 1 ( s ) χ p α k p ( θ K 1 L i p ( K ) + θ H 1 L i p ( H ) ) 2 n θ H ϕ s θ K e a p α k p n θ H 3 ϕ s 3 θ K e a p 0 .
From this last result and (A50), we deduce
E ( R n ″ ) 2 → 0 as n → ∞ .
Since n − k ( p + q ) = O ( p ) , we get, for the sequence ζ k :
E ( ζ k ) 2 ) ( n k ( p + q ) ) Var ( Z n 1 ( s , t ) ) + 2 1 τ < j k | Cov ( Z n τ ( s , t ) , Z n j ( s , t ) ) | p Var ( Z n 1 ( s , t ) ) + 2 1 τ < j k | Cov ( Z n τ ( s , t ) , Z n j ( s , t ) ) | p ϕ s θ K n θ H E 2 K 1 ( s ) Var ( Γ 1 ( s , t ) ) + α ϕ s θ K n θ H E 2 K 1 ( s ) 1 τ < j k | Cov ( Γ τ ( s , t ) , Γ j ( s , t ) ) | o ( 1 ) α p n + o ( 1 ) .
Hence,
E ( ζ k ) 2 0 as n .
Combining this with (A50), the proof of (A42) is complete. □
Proof of (A43).
This proof is based on the following two results:
| E ( e i t j = 1 k η j ) j = 1 k E ( e i t η j ) | 0
and
k Var ( η 1 ) σ g 2 ; k E ( η 1 2 I { η 1 > ϵ σ g 2 } ) 0 .
Proof of (A53).
E ( e i t j = 1 k η j ) j = 1 k E ( e i t η j ) E ( e i t j = 1 k η j ) E ( e i t j = 1 k 1 η j ) E ( e i t η k ) + E ( e i t j = 1 k 1 η j ) j = 1 k 1 E ( e i t η j ) = Cov ( e i t j = 1 k 1 η j , e i t η k ) + E ( e i t j = 1 k 1 η j ) j = 1 k 1 E ( e i t η j )
Proceeding successively, we obtain
E ( e i t j = 1 k η j ) j = 1 k E ( e i t η j ) Cov ( e i t j = 1 k 1 η j , e i t η k ) + Cov ( e i t j = 1 k 2 η k 1 , e i t η j ) + + Cov ( e i t η 2 , e i t η 1 ) .
Once again, we apply Lemma 1 to write
Cov ( e i t η 2 , e i t η 1 ) α ( θ K 1 L i p ( K ) + θ H 1 L i p ( H ) ) 2 ϕ s θ K n θ H E 2 K 1 ( s ) τ I 1 j I 2 χ τ , j .
Applying (A57) to each term on the right-hand side of (A56), we obtain
| E ( e i t j = 1 k η j ) j = 1 k E ( e i t η j ) | α t 2 ( θ K 1 L i p ( K ) + θ H 1 L i p ( H ) ) 2 ϕ s θ K n θ H E 2 K 1 ( s ) × τ I 1 j I 2 χ τ , j + τ I 1 I 2 j I 3 χ τ , j + + τ I 1 . . . I k 1 j I k χ τ , j .
For all 2 r k 1 , ( τ , j ) I r * I r + 1 , we have | τ j | q + 1 > q , then
τ I 1 I k 1 j I k χ τ , j p χ q .
Therefore, inequality (A55) becomes
| E ( e i t j = 1 k η j ) j = 1 k E ( e i t η j ) | = α t 2 ( θ K 1 L i p ( K ) + θ H 1 L i p ( H ) ) 2 ϕ s θ K n θ H E 2 K 1 ( s ) k p χ q = α t 2 ( θ K 1 L i p ( K ) + θ H 1 L i p ( H ) ) 2 ϕ s θ K n θ H E 2 K 1 ( s ) k p e a q = α t 2 ( θ K 1 L i p ( K ) + θ H 1 L i p ( H ) ) 2 1 n θ H ϕ s θ K k p e a q = α t 2 k p n θ H 3 ϕ s 3 θ K e a q 0 .
Proof of (A54).
By the definition of η 1 and Z n 1 ( s , t ) , we have
k Var ( η 1 ) = k p Var ( Z n 1 ( s , t ) ) = k p ϕ s θ K n θ H E 2 K 1 ( s ) Var ( Γ 1 ( s , t ) )
Using the result of (A10) and the fact that k p / n → 1 , we deduce that
k Var ( η 1 ) σ g 2 .
For the second term of (A54), we use | η 1 | α p | Z n 1 ( s , t ) | α p n θ H ϕ s θ K together with Chebyshev’s inequality to get
k E ( η 1 2 I { η 1 > ϵ σ g 2 } ) α p 2 k n θ H ϕ s θ K P η 1 > ϵ σ g 2 α p 2 k n θ H ϕ s θ K Var ( η 1 ) ϵ 2 σ g 2 = O p 2 n θ H ϕ s θ K .
This concludes the proof of Lemma A4. □

References

  1. Bosq, D.; Lecoutre, J.B. Théorie de L’estimation Fonctionnelle; Economica: Paris, France, 1987. [Google Scholar]
  2. Ferraty, F.; Vieu, P. Nonparametric Functional Data Analysis: Theory and Practice; Springer Series in Statistics; Springer: Berlin/Heidelberg, Germany, 2006; Available online: https://ideas.repec.org/a/eee/csdana/v51y2007i9p4751-4752.html (accessed on 5 July 2025).
  3. Ferraty, F.; Mas, A.; Vieu, P. Nonparametric regression on functional data: Inference and practical aspects. Aust. N. Z. J. Stat. 2007, 49, 267–286. [Google Scholar] [CrossRef]
  4. Laksaci, A.; Mechab, B. Conditional hazard estimate for functional random fields. J. Stat. Theory Pract. 2014, 8, 192–220. [Google Scholar] [CrossRef]
  5. Ferraty, F.; Vieu, P. Nonparametric models for functional data, with application in regression, time series prediction and curve discrimination. J. Nonparametric Stat. 2004, 16, 111–125. [Google Scholar] [CrossRef]
  6. Ferraty, F.; Goia, A.; Vieu, P. Régression non-paramétrique pour des variables aléatoires fonctionnelles mélangeantes. Comptes Rendus Math. 2002, 334, 217–220. [Google Scholar] [CrossRef]
  7. Mechab, W.; Laksaci, A. Nonparametric relative regression for associated random variables. Metron 2016, 74, 75–97. [Google Scholar] [CrossRef]
  8. Azzi, A.; Belguerna, A.; Laksaci, A.; Rachdi, M. The scalar-on-function modal regression for functional time series data. J. Nonparametric Stat. 2023, 36, 503–526. [Google Scholar] [CrossRef]
  9. Hyndman, R.J.; Yao, Q. Nonparametric estimation and symmetry tests for conditional density functions. J. Nonparametric Stat. 2002, 14, 259–278. [Google Scholar] [CrossRef]
  10. Attaoui, S.; Laksaci, A.; Ould Said, E. A note on the conditional density estimate in the single functional index model. Stat. Probab. Lett. 2011, 81, 45–53. [Google Scholar] [CrossRef]
  11. Ling, N.; Xu, Q. Asymptotic normality of conditional density estimation in the single index model for functional time series data. Stat. Probab. Lett. 2012, 82, 2235–2243. [Google Scholar] [CrossRef]
  12. Abdelhak, K.; Belguerna, A.; Laala, Z. Local Linear Estimator of the Conditional Hazard Function for Index Model in Case of Missing at Random Data. Appl. Appl. Math. Int. J. (AAM) 2022, 17, 33–53. [Google Scholar]
  13. Daoudi, H.; Mechab, B. Asymptotic Normality of the Kernel Estimate of Conditional Distribution Function for the quasi-associated data. Pak. J. Stat. Oper. Res. 2019, 15, 999–1015. [Google Scholar] [CrossRef]
  14. Bulinski, A.; Suquet, C. Asymptotical behaviour of some functionals of positively and negatively dependent random fields. Fundam. I Prikl. Mat. 1998, 4, 479–492. [Google Scholar]
  15. Bouaker, I.; Belguerna, A.; Daoudi, H. The Consistency of the Kernel Estimation of the Function Conditional Density for Quasi-Associated Data in High-Dimensional Statistics. J. Sci. Arts 2022, 22, 247–256. [Google Scholar] [CrossRef]
  16. Newman, C.M. Asymptotic independence and limit theorems for positively and negatively dependent random variables. Inequalities Stat. Probab. 1984, 5, 127–140. [Google Scholar]
  17. Ferraty, F.; Rabhi, A.; Vieu, P. Estimation non-paramétrique de la fonction de hasard avec variable explicative fonctionnelle. Rev. Roum. Math. Pures Appl. 2008, 53, 1–18. [Google Scholar]
  18. Laksaci, A.; Mechab, B. Estimation non-paramétrique de la fonction de hasard avec variable explicative fonctionnelle: Cas des données spatiales. Rev. Roum. Math. Pures Appl. 2010, 55, 35–51. [Google Scholar]
  19. Gagui, A.; Chouaf, A. On the nonparametric estimation of the conditional hazard estimator in a single functional index. Stat. Transit. New Ser. 2022, 23, 89–105. [Google Scholar] [CrossRef]
  20. Doukhan, P.; Louhichi, S. A new weak dependence condition and applications to moment inequalities. Stoch. Process. Their Appl. 1999, 84, 313–342. [Google Scholar] [CrossRef]
  21. Bulinski, A.; Suquet, C. Approximation for quasi-associated random fields. Stat. Probab. Lett. 2001, 54, 215–226. [Google Scholar] [CrossRef]
  22. Kallabis, R.S.; Neumann, M.H. An exponential inequality under weak dependence. Bernoulli 2006, 12, 333–350. [Google Scholar] [CrossRef]
  23. Attaoui, S. Sur L’estimation Semi Paramètrique Robuste Pour Statistique Fonctionnelle. Ph.D. Thesis, Université du Littoral Côte d’Opale Lille, France & Université Djillali Liabès, Sidi Bel-Abbès, Algeria, 2012. Available online: https://theses.hal.science/tel-00871026/ (accessed on 2 July 2025).
  24. Hadjila, T.; Ahmed, A.S. Estimation and simulation of conditional hazard function in the quasi-associated framework when the observations are linked via a functional single-index structure. Commun. Stat.-Theory Methods 2017, 47, 816–838. [Google Scholar] [CrossRef]
  25. Douge, L. Thèorèmes limites pour des variables quasi-associées hilbertiennes. Ann. L’ISUP 2010, 4, 51–60. [Google Scholar]
  26. Hamza, D.; Mechab, B.; Zouaoui, C.E. Asymptotic normality of a conditional hazard function estimate in the single index for quasi-associated data. Commun. Stat.-Theory Methods 2020, 49, 513–530. [Google Scholar] [CrossRef]
  27. Daoudi, H.; Elmezouar, Z.C.; Alshahrani, F. Asymptotic Results of Some Conditional Nonparametric Functional Parameters in High Dimensional Associated Data. Mathematics 2023, 11, 4290. [Google Scholar] [CrossRef]
  28. Bouzebda, S.; Laksaci, A.; Mohammedi, M. Single Index Regression Model for Functional Quasi-Associated Times Series Data. Revstat-Stat. J. 2023, 20, 605–631. [Google Scholar] [CrossRef]
  29. Rassoul, A.; Belguerna, A.; Daoudi, H.; Elmezouar, Z.C.; Alshahrani, F. On the Exact Asymptotic Error of the Kernel Estimator of the Conditional Hazard Function for Quasi-Associated Functional Variables. Mathematics 2025, 13, 2172. [Google Scholar] [CrossRef]
  30. Attaoui, S.; Benouda, O.; Bouzebda, S.; Laksaci, A. Limit theorems for kernel regression estimator for quasi-associated functional censored time series within single index structure. Mathematics 2025, 13, 886. [Google Scholar] [CrossRef]
  31. Doob, J.L. Stochastic Processes; Wiley: New York, NY, USA, 1953; pp. 228–231. [Google Scholar]
  32. Chikr-Elmezouar, Z.; Laksaci, A.; Almanjahie, I.M.; Alshahrani, F. Nonparametric Estimation of Dynamic Value-at-Risk: Multifunctional GARCH Model Case. Mathematics 2025, 13, 1961. [Google Scholar] [CrossRef]
  33. Iglesias-Pérez, M.C. Strong representation of a conditional quantile function estimator with truncated and censored data. Stat. Probab. Lett. 2003, 65, 79–91. [Google Scholar] [CrossRef]
  34. Yuan, M. GACV for quantile smoothing splines. Comput. Stat. Data Anal. 2006, 50, 813–829. [Google Scholar] [CrossRef]
Figure 1. Simulated sample paths S_κ, κ = 1, …, n, of the quasi-associated AR(1) process with n = 50.
Figure 2. Simulated sample paths S_κ, κ = 1, …, n, of the quasi-associated AR(1) process with n = 200.
Figure 3. Simulated sample paths S_κ, κ = 1, …, n, of the quasi-associated AR(1) process with n = 1000.
Figure 4. Estimated conditional distribution functions under the quasi-associated AR(1) process for different sample sizes.
Figure 5. Estimated conditional density functions under the quasi-associated AR(1) process for different sample sizes.
Figure 6. Normal approximation of the conditional hazard function estimator.
Figure 7. Empirical and theoretical conditional hazard function estimation with confidence bounds.
Table 1. MSE of kernel estimators ĝ and Ĝ under a quasi-associated AR(1) process (ρ = 0.1, Binomial(10, 0.25) innovations) for sample sizes n = 50, 200, 1000.

Mean Square Error    n = 50            n = 200            n = 1000
MSE(ĝ)               1.11645 × 10⁻⁴    8.028238 × 10⁻⁵    7.623378 × 10⁻⁵
MSE(Ĝ)               1.017427 × 10⁻⁴   3.718499 × 10⁻⁵    3.076726 × 10⁻⁵
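The simulation design behind Table 1 and Figures 1–3, an AR(1) process with ρ = 0.1 driven by Binomial(10, 0.25) innovations, can be sketched as follows. This is a minimal illustration rather than the authors' code: the centering of the innovations, the burn-in length, the seed, and the name `simulate_ar1` are assumptions made here.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_ar1(n, rho=0.1, m=10, p=0.25, burn_in=100):
    """Simulate S_t = rho * S_{t-1} + eps_t with centered
    Binomial(m, p) innovations (mean m * p subtracted)."""
    eps = rng.binomial(m, p, size=n + burn_in) - m * p
    s = np.empty(n + burn_in)
    s[0] = eps[0]
    for t in range(1, n + burn_in):
        s[t] = rho * s[t - 1] + eps[t]
    return s[burn_in:]  # drop the burn-in so the retained path is near-stationary

path = simulate_ar1(1000)
print(len(path), round(float(path.mean()), 3), round(float(path.std()), 3))
```

With ρ = 0.1 the serial dependence is weak, which matches the quasi-association setting; under this centering the stationary variance is m·p·(1 − p)/(1 − ρ²) ≈ 1.894.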
Table 2. Kolmogorov–Smirnov test results for the QNH statistic under different sample sizes.

Sample Size (n)    KS Statistic    p-Value
50                 0.134           0.306
200                0.053           0.604
1000               0.028           0.421
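The Kolmogorov–Smirnov check summarized in Table 2 measures the distance between the empirical distribution of the standardized statistic and the standard normal law. A self-contained sketch of the one-sample KS distance is given below; the replicates here are standard normal stand-ins, since reproducing the QNH statistic itself would require the full estimator.

```python
from math import erf, sqrt

import numpy as np

def ks_statistic(sample):
    """One-sample Kolmogorov-Smirnov distance between the empirical
    CDF of `sample` and the standard normal CDF."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    # Standard normal CDF evaluated at the order statistics.
    cdf = np.array([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in x])
    d_plus = np.max(np.arange(1, n + 1) / n - cdf)
    d_minus = np.max(cdf - np.arange(0, n) / n)
    return max(d_plus, d_minus)

rng = np.random.default_rng(0)
# Stand-in replicates; in the paper these would be Monte Carlo values
# of the standardized (QNH) statistic.
replicates = rng.standard_normal(200)
d = ks_statistic(replicates)
print(round(float(d), 3))
```

Values of the distance below the asymptotic 5% critical level ≈ 1.36/√n indicate compatibility with normality, which is the pattern Table 2 reports across sample sizes.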
Belguerna, A.; Rassoul, A.; Daoudi, H.; Elmezouar, Z.C.; Alshahrani, F. Limit Theorem for Kernel Estimate of the Conditional Hazard Function with Weakly Dependent Functional Data. Symmetry 2025, 17, 1777. https://doi.org/10.3390/sym17101777