Article

Asymptotic Analysis of a Kernel-Type Estimator for Parabolic Stochastic Partial Differential Equations Driven by Cylindrical Sub-Fractional Brownian Motion

1
Département de Mathématiques et Informatique, Faculté des Sciences et de la Technologie, Université de Tamanghasset, Tamanghasset 11000, Algeria
2
LMAC—Laboratory of Applied Mathematics of Compiègne, Université de Technologie de Compiègne, CS 60 319-60 203 Compiègne, France
3
Laboratoire de Modèles Stochastiques, Statistique et Applications, Université Dr. Moulay Tahar de Saida, B. P. 138, En-Nasr, Saida 20000, Algeria
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2025, 13(16), 2627; https://doi.org/10.3390/math13162627
Submission received: 23 July 2025 / Revised: 7 August 2025 / Accepted: 12 August 2025 / Published: 15 August 2025
(This article belongs to the Special Issue Partial Differential Equations in Applied Mathematics)

Abstract

The main purpose of the present paper is to investigate the problem of estimating the time-varying coefficient in a stochastic parabolic equation driven by a sub-fractional Brownian motion. More precisely, we introduce a kernel-type estimator for the time-varying coefficient θ ( t ) in the following evolution equation:
du(t,x) = (A_0 + θ(t)A_1) u(t,x) dt + dξ^H(t,x),   x ∈ [0,1],  t ∈ (0,T],   u(0,x) = u_0(x),
where ξ^H(t,x) is a cylindrical sub-fractional Brownian motion in L^2([0,T] × [0,1]), and A_0 + θ(t)A_1 is a strongly elliptic differential operator. We obtain the asymptotic mean square error and the limiting distribution of the proposed estimator. These results are proved under some standard conditions on the kernel and some mild conditions on the model. Finally, we give an application for the confidence interval construction.

1. Introduction

Over the last few decades, the field of stochastic partial differential equations (SPDEs) has undergone a remarkable evolution, both in scope and depth. Originally viewed as a challenging intersection of probability theory and functional analysis, SPDEs have now matured into a central and indispensable framework for modeling complex systems subject to uncertainty across space and time. This theoretical development has been paralleled by an increasing range of sophisticated applications in diverse disciplines such as fluid dynamics, climatology, financial mathematics, population biology, and ecological modeling. These systems often exhibit inherently random and dynamic behavior, and SPDEs offer a powerful mechanism to encode and analyze such phenomena mathematically.
From a modeling standpoint, SPDEs allow one to describe the evolution of spatially distributed systems influenced by random perturbations, thus generalizing both classical partial differential equations and stochastic ordinary differential equations. Their formulation accommodates persistent and correlated noise structures, making them particularly suited to capture long-range dependencies and memory effects present in natural and engineered systems. We refer the reader to foundational works such as [1,2,3,4], as well as to the recent overview in [5], for a comprehensive account of the theoretical and applied advances in this field. For detailed theoretical developments and rigorous treatment of the analytical framework, the classical monographs [3,6], together with pedagogical resources such as [7,8,9], provide extensive discussion and exposition.
In the context of inverse problems for SPDEs, recent attention has turned toward recovering unknown parameters or functions from partial or noisy observations. For instance, ref. [10] tackled an inverse spectral problem under Dirichlet boundary conditions, laying the foundation for functional parameter identification in stochastic systems. Inspired by this, our work introduces a novel data-driven methodology for estimating unknown components using a deep learning architecture composed of convolutional and residual layers. The key advantage of this architecture lies in its ability to extract hierarchical features from observed data while preserving the core structural dynamics of the underlying SPDE and mitigating information loss due to network degradation.
Moreover, Support Vector Machines (SVMs) have long been recognized for their robustness in classification and regression problems, especially in high-dimensional and nonlinear contexts. Their excellent generalization properties make them well-suited for tasks requiring precision under limited data regimes. Recent advances have explored the integration of SVMs into deep learning frameworks to benefit from both paradigms. However, most such hybrid methods treat deep neural networks and SVMs as distinct components. To overcome this separation, ref. [11] proposed the Deep Siamese Residual Support Vector Machine (DSRSVM)—an end-to-end architecture that unifies the feature extraction capabilities of deep learning with the discriminative power of SVMs. This type of integrated modeling is particularly promising for high-stakes inference in SPDEs, where interpretability, precision, and stability are paramount.
Given these developments, parameter estimation in SPDEs has emerged as an area of increasing importance. In the parametric framework, where the coefficient is assumed to be a scalar parameter, early contributions such as [12] investigated maximum likelihood estimation using spectral data (Fourier coefficients) and established both consistency and asymptotic normality of the estimator. Ref. [13] considered parametric inference for SPDEs driven by infinite-dimensional mixed fractional Brownian motions. Similarly, ref. [14] examined the asymptotic properties of maximum likelihood estimators in discretely sampled settings, while [15] explored parametric estimation for one-dimensional parabolic SPDEs. The work of [16] addressed more intricate scenarios where the involved differential operators do not commute, introducing significant analytical complexity.
In parallel, the non-parametric estimation paradigm has gained prominence, particularly in contexts where the functional form of the coefficient is unknown or too complex to be modeled parametrically. Ref. [17] studied the asymptotic behavior of kernel-based estimators under small-noise assumptions, while [18] demonstrated mean-square consistency of a kernel-type estimator for the drift function in parabolic SPDEs driven by cylindrical Brownian motion. Extending these ideas, ref. [19] proposed and analyzed a kernel-type estimator for time-varying coefficients in SPDEs governed by strongly elliptic operators and driven by fractional Brownian noise.
More recently, as an extension of fractional Brownian motion, ref. [20] introduced a class of self-similar Gaussian processes known as sub-fractional Brownian motion, which preserves many properties of the fractional Brownian motion but exhibits weaker long-range dependence and non-stationary increments. This process arises in the study of occupation time fluctuations of branching particle systems and provides a rich probabilistic structure for modeling systems with memory, but without the full strength of long-range dependence. The sub-fractional Brownian motion, indexed by a Hurst parameter H ( 0 , 1 ) , is a zero-mean Gaussian process whose covariance structure leads to fundamentally different analytical challenges.
Objective of the Present Work. The present paper addresses a non-parametric estimation problem for a time-varying coefficient in the following stochastic parabolic partial differential equation:
du(t,x) = (A_0 + θ(t)A_1) u(t,x) dt + dξ^H(t,x),  x ∈ G,  t ∈ (0,T];   u(0,x) = u_0(x),  x ∈ G;   u(t,0) = u(t,1) = 0,  t ∈ [0,T],
supplemented with homogeneous Dirichlet boundary conditions. Here, ξ^H(t,x) denotes a cylindrical sub-fractional Brownian motion in L^2([0,T] × (0,1)), with Hurst index H ∈ (1/2, 1), and the operator A_0 + θ(t)A_1 is a strongly elliptic differential operator, where θ(t) is an unknown deterministic function of time. The symbol d is understood in the sense of time differentiation.
Our aim is to construct a kernel-based estimator for the trend function θ ( t ) and to rigorously study its asymptotic properties. In particular, we derive the mean square error of the estimator and establish its rate of convergence. Furthermore, we prove a central limit theorem for the estimator, thereby offering a comprehensive theoretical framework that supports its use in practice. These results are particularly valuable in applications where no prior information about the form of θ ( t ) is available, and thus parametric approaches are infeasible.
Organization of the Paper. The remainder of the paper is structured as follows: In Section 2, we introduce the mathematical setting and present the necessary preliminaries and notation. Section 3 states the main results of the paper. Section 3.1 is devoted to the analysis of the mean square error and the derivation of the convergence rate. Section 3.2 presents the central limit theorem. In Section 3.3, we provide an application related to the construction of confidence intervals. Section 4 discusses a data-driven bandwidth selection criterion. In Section 5, we summarize our findings and outline potential directions for future research. For the sake of clarity and continuity, all technical proofs are collected in Section 6, and Section 7 presents two illustrative examples.

2. Preliminaries and Model Assumptions

In this section, let us recall some basic definitions and introduce some notations and assumptions. Assume that (Ω, F, {F_t}_{t≥0}, P) is a stochastic basis satisfying the usual conditions; i.e., it is a filtered probability space where the filtration {F_t}_{t≥0} is right-continuous and F_0 contains every P-null set. Let {B^H(t) : t ∈ ℝ} be the two-sided fractional Brownian motion with Hurst parameter H ∈ (0,1), with the covariance function
Cov( B^H(t), B^H(s) ) = (1/2) ( |s|^{2H} + |t|^{2H} − |t − s|^{2H} );
see [21,22] for basic information. Let {ξ^H(t) : t ≥ 0} be a one-dimensional sub-fractional Brownian motion with Hurst parameter H ∈ (0,1), that is, a Gaussian process with continuous sample paths satisfying ξ^H(0) = 0 and with covariance
R_H(t,u) = E[ ξ^H(t) ξ^H(u) ] = t^{2H} + u^{2H} − (1/2) [ (u + t)^{2H} + |u − t|^{2H} ],   t ≥ 0,  u ≥ 0.
According to [20], the sub-fractional Brownian motion can be represented in terms of the fractional Brownian motion as
ξ^H(t) = (1/√2) ( B^H(t) + B^H(−t) ),   t ≥ 0,
and has the following properties:
(1)
E[ξ^H(t)] = 0 and Var( ξ^H(t) ) = (2 − 2^{2H−1}) t^{2H};
(2)
ξ^H is self-similar:
{ξ^H(at) : t ≥ 0} and {a^H ξ^H(t) : t ≥ 0} have the same finite-dimensional distributions, for each a > 0;
(3)
The process ξ H ( t ) is not Markov and it is not a semi-martingale;
(4)
The process ξ H has continuous sample paths almost surely and, for each 0 < ε < H and T > 0 , there exists a random variable K ε , T such that
|ξ^H(t) − ξ^H(u)| ≤ K_{ε,T} |t − u|^{H−ε},   0 ≤ u, t ≤ T;
(5)
Second moment of the increments: for any t ≥ u ≥ 0,
E|ξ^H(t) − ξ^H(u)|^2 = −2^{2H−1} ( t^{2H} + u^{2H} ) + (t + u)^{2H} + (t − u)^{2H},
and
A_1 (t − u)^{2H} ≤ E|ξ^H(t) − ξ^H(u)|^2 ≤ A_2 (t − u)^{2H},
where
A_1 = min( 1, 2 − 2^{2H−1} ),
and
A_2 = max( 1, 2 − 2^{2H−1} ).
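To make the representation of ξ^H in terms of a two-sided fractional Brownian motion concrete, the following sketch simulates a sub-fBm path via ξ^H(t) = (B^H(t) + B^H(−t))/√2, with the fBm sampled by Cholesky factorization of its covariance, and then checks property (1) empirically. It is a minimal illustration assuming Python with NumPy; the function names are ours and not part of any reference implementation.

```python
import numpy as np

def fbm_two_sided(H, t_grid, rng):
    """Sample a two-sided fBm B^H on t_grid via Cholesky factorisation of
    Cov(B^H(t), B^H(s)) = (|t|^{2H} + |s|^{2H} - |t - s|^{2H}) / 2."""
    t = np.asarray(t_grid, dtype=float)
    cov = 0.5 * (np.abs(t[:, None]) ** (2 * H)
                 + np.abs(t[None, :]) ** (2 * H)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(len(t)))  # small jitter for numerical stability
    return L @ rng.standard_normal(len(t))

def sub_fbm(H, n, T=1.0, seed=0):
    """Sample xi^H(t_i) = (B^H(t_i) + B^H(-t_i)) / sqrt(2) on t_i = i T / n."""
    rng = np.random.default_rng(seed)
    t = T * np.arange(1, n + 1) / n
    grid = np.concatenate([-t[::-1], t])          # two-sided grid, 0 excluded
    B = fbm_two_sided(H, grid, rng)
    return t, (B[n:] + B[:n][::-1]) / np.sqrt(2.0)

if __name__ == "__main__":
    H, n = 0.7, 200
    # Empirical check of property (1): Var xi^H(T) = (2 - 2^{2H-1}) T^{2H} with T = 1.
    end_values = np.array([sub_fbm(H, n, seed=s)[1][-1] for s in range(200)])
    print("empirical variance:", round(float(end_values.var()), 3),
          "theoretical:", round(2 - 2 ** (2 * H - 1), 3))
```

The Cholesky construction is exact up to floating-point error but scales cubically in the number of grid points; for long paths, circulant-embedding methods are commonly used instead.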
Consider the kernel n H ( t , s ) by
n H ( t , s ) = π 2 H 1 2 I T , 2 , 3 2 H 4 H 1 2 u H 1 2 1 [ 0 , t ) ( s ) = 2 1 H π Γ H 1 2 s 3 2 H 0 t x 2 s 2 H 3 2 d x 1 ( 0 , t ) ( s ) ,
where I T , 2 , 3 2 H 4 H 1 2 denotes the Erdélyi–Kober-type fractional integral operator defined by the following:
I T , σ , η α f ( s ) = σ s α η Γ ( α ) s T t σ ( 1 α η ) 1 f ( t ) ( t σ s σ ) 1 α d t , s [ 0 , T ] , α > 0 ,
for all measurable functions f : [ 0 , T ] R and σ , η R . By [23,24], we have the following representation for a sfBm ξ H :
ξ^H(t) = c_H ∫_0^t n_H(t,s) dW(s)  (equality in distribution),   0 ≤ t ≤ T,
where
c_H^2 = Γ(2H + 1) sin(πH) / π,
and W ( t ) : t 0 is the standard Brownian motion. Ref. [25] obtained the prediction formula for a sub-fractional Brownian motion. For any 0 < H < 1 , and 0 < a < t ,
E[ ξ^H(t) | ξ^H(s) : 0 ≤ s ≤ a ] = ξ^H(a) + ∫_0^a ψ_{a,t}(u) dξ^H(u),
where
ψ a , t ( u ) = 2 sin π ( H 1 2 ) π u a 2 u 2 1 2 H a t z 2 a 2 H 1 2 z 2 u 2 z H 1 2 d z .
Let
M_t^H = d_H ∫_0^t s^{1/2−H} dW(s) = ∫_0^t k_H(t,s) dξ^H(s),
where
d H = 2 H 1 2 c H Γ 3 2 H π , k H ( t , s ) = d H s 1 2 H ψ H ( t , s ) ,
and
ψ H ( t , s ) = s H 1 2 Γ 3 2 H t H 3 2 t 2 s 2 1 2 H H 3 2 s t x 2 s 2 1 2 H x H 3 2 d x 1 ( 0 , t ) ( s ) .
Refs. [25,26] showed that the process M^H = {M_t^H : t ≥ 0} is a Gaussian martingale, called the sub-fractional fundamental martingale. The filtration generated by this martingale is the same as the filtration {F_t : t ≥ 0} generated by the sub-fractional Brownian motion ξ^H. The quadratic variation ⟨M^H⟩_t of the martingale M^H is given by
⟨M^H⟩_t = w_t^H = ( d_H^2 / (2 − 2H) ) t^{2−2H} = λ_H t^{2−2H}.
Given a fixed time interval [0,T], we denote by E the set of step functions on [0,T]. Let H_{ξ^H} be the Hilbert space defined as the closure of E with respect to the scalar product
⟨1_{[0,t]}, 1_{[0,u]}⟩ = R_H(t,u),
where R_H(t,u) is the covariance of ξ_t^H and ξ_u^H. The mapping 1_{[0,t]} ↦ ξ_t^H can be extended to an isometry between H_{ξ^H} and the Gaussian space H_1 associated with ξ^H. We denote this isometry by φ ↦ ξ(φ). Here, ξ(φ) is an isonormal Gaussian process associated with the Hilbert space H_{ξ^H}, which was introduced by [21]. Recall that if φ, ψ ∈ H_{ξ^H} satisfy
∫_0^T ∫_0^T |φ(u)| |ψ(r)| ( |u − r|^{2H−2} − (u + r)^{2H−2} ) du dr < ∞,
then their scalar product in H_{ξ^H} is given by
⟨φ, ψ⟩_{H_{ξ^H}} = α_H ∫_0^T ∫_0^T φ(u) ψ(r) ( |u − r|^{2H−2} − (u + r)^{2H−2} ) du dr,
where α_H = H(2H − 1); we refer to [21,27].
Remark 1.
As shown in [28], for any Hurst parameter H ∈ (0, 1/2) ∪ (1/2, 1), the domain of the Wiener integral I is given by the space
{ f ∈ D′((0,T)) : there exists f̃ ∈ W^{1/2−H, 2}(ℝ) such that supp(f̃) ⊂ [0,T] and f = f̃|_{[0,T]} }.
When H < 1 / 2 , this space consists of functions, and the notation f | [ 0 , T ] denotes the usual restriction of the function f to the interval [ 0 , T ] . In contrast, when H > 1 / 2 , the Sobolev space W 1 / 2 H , 2 ( R ) contains distributions, and the restriction f | [ 0 , T ] must be understood in the distributional sense, that is, as the restriction of f to the space of test functions D ( ( 0 , T ) ) . It is worth noting that when H = 1 / 2 , the fractional Brownian motion reduces to the standard Wiener process, and the domain of the integral simplifies to the classical space L 2 ( [ 0 , T ] ) . In this case, expression (1) provides a nonstandard, albeit equivalent, characterization of this domain. For a detailed discussion, see [29]. Recall also that the Sobolev space W s , 2 ( R ) , for any real number s, admits the following characterization:
W^{s,2}(ℝ) = { f ∈ S′(ℝ) : (1 + |x|^2)^{s/2} F f(x) ∈ L^2(ℝ) },
which is a Hilbert space endowed with the scalar product
⟨f, g⟩ = ∫_ℝ F f(x) \overline{F g(x)} (1 + |x|^2)^s dx,
and
F f(x) = ∫_ℝ f(t) e^{−ixt} dt.
For H ∈ (1/2, 1), the double integral
∫_0^T ∫_0^T |φ(u)| |ψ(r)| ( |u − r|^{2H−2} − (u + r)^{2H−2} ) du dr
is finite under the regularity condition φ, ψ ∈ L^{1/H}([0,T]). This integrability result plays a crucial role in ensuring the well-posedness of certain stochastic integrals with respect to fractional noise. It relies on the specific behavior of the kernel |u − r|^{2H−2} − (u + r)^{2H−2}, which reflects the singularity structure and decay properties inherent in fractional Brownian motion with Hurst parameter H > 1/2. A detailed justification of this condition can be found on page 3 of [30].
For φ, ψ ∈ H_{ξ^H}, we have
E[ ∫_0^T φ(u) dξ^H(u) ] = 0,  and  E[ ∫_0^T φ(u) dξ^H(u) ∫_0^T ψ(r) dξ^H(r) ] = ⟨φ, ψ⟩_{H_{ξ^H}}.
Equations (5) and (6) formalize the inner product and the isometry properties of the stochastic integral with respect to the centered Gaussian process ξ H , which generalizes fractional Brownian motion (fBm). These results are central in stochastic analysis, particularly in the development of integration theory; see Remark 9. We recall the following lemma:
Lemma 1
([31]). The canonical Hilbert space H ξ H associated with ξ H satisfies the following:
L^2([0,T]) ⊂ L^{1/H}([0,T]) ⊂ H_{ξ^H},
where H ∈ (1/2, 1).
Let G be a smooth bounded domain in ℝ. Denote by C_0^∞(G) the collection of infinitely differentiable functions compactly supported in G. Let A_0 and A_1 be differential or pseudo-differential operators on C_0^∞(G). If G is a bounded domain, then, to simplify the presentation, all operators will be considered with zero boundary conditions. We assume that
A_i u(x) = Σ_{|α| ≤ m_i} a_{iα}(x) u^{(α)}(x),   a_{iα} ∈ C_b^∞(G),   i = 0, 1,
where
α = (α_1, …, α_d),  α_i = 0, 1, …,  |α| = Σ_{i=1}^d α_i,  and  u^{(α)}(x) = ∂^{|α|} u(x) / ( ∂x_1^{α_1} ⋯ ∂x_d^{α_d} ).
On a stochastic basis (Ω, F, {F_t}_{t≥0}, P), consider a cylindrical sub-fractional Brownian motion ξ = ξ(t,x). In other words, ξ is a random process with values in the set D′(G) of distributions on G such that, for every ϕ ∈ C_0^∞(G) with ‖ϕ‖_{L^2(G)} = 1, the process (ξ, ϕ)(t) is a one-dimensional sub-fractional Brownian motion on (Ω, F, {F_t}_{t≥0}, P), and for all ϕ_1, ϕ_2 ∈ C_0^∞(G),
E ( ξ , ϕ 1 ) ( t ) ( ξ , ϕ 2 ) ( u ) = R H ( t , u ) · ( ϕ 1 , ϕ 2 ) L 2 ( G ) .
Consider the following stochastic evolution equation:
du(t,x) = (A_0 + θ(t)A_1) u(t,x) dt + dξ^H(t,x),  x ∈ G = [0,1],  t ∈ (0,T];   u(0,x) = u_0(x),  x ∈ G;   u(t,0) = u(t,1) = 0,  t ∈ [0,T],
where ξ^H(t,x) is a cylindrical sub-fractional Brownian motion in L^2([0,T] × [0,1]), and A_0 + θ(t)A_1 is a strongly elliptic differential operator with the unknown time-varying coefficient θ(t). We assume that θ(t) is a bounded measurable function on [0,T]. A predictable process u(·) is called a weak solution to (8) if for every ϕ ∈ C_0^∞(G) and t ∈ [0,T], the equality
( u(t), ϕ ) = ( u(0), ϕ ) + ∫_0^t [ ( A_0^* ϕ, u(s) ) + θ(s) ( A_1^* ϕ, u(s) ) ] ds + ∫_0^t ( ϕ, dξ(s) )
holds with probability 1 for all t ∈ [0,T] at once, where A_i^* is the formal adjoint of A_i; that is, for all ϕ_1, ϕ_2, one gets
( A_i^* ϕ_1, ϕ_2 )_{L^2(G)} = ( A_i ϕ_2, ϕ_1 )_{L^2(G)}.
Equation (9) is obtained by applying the definition of a weak (or variational) solution to the stochastic PDE (8): we test both sides of the SPDE against a smooth function ϕ C 0 ( G ) , integrate over the spatial domain G, and use the adjoint relation to transfer differential operators from the solution u onto the test function ϕ . The stochastic integral 0 t ( ϕ , d ξ ( s ) ) is understood in the sense of a generalized Wiener integral.
We focus on the estimation of the drift coefficient in a model driven by sub-fractional white noise, an extension of the framework introduced in [18]. Accordingly, the assumptions established therein remain applicable to our study. In particular, we assume that the underlying equation is diagonalizable, meaning that the operators A 0 and A 1 share a common system of eigenfunctions.
(H1) 
There exists a complete orthonormal system {e_k}_{k≥1} in L^2(G) = L^2([0,1]) such that
A_0 e_k = κ_k e_k,   A_1 e_k = ν_k e_k;
(H2) 
The eigenvalues ν_k and κ_k satisfy ν_k ≍ k^{m_1} and κ_k ≍ k^{m_0}, and, uniformly in t ∈ [0,T],
μ_k(t) = κ_k + θ(t) ν_k ≍ k^{2m},
i.e.,
α_k ≤ κ_k + θ(t) ν_k ≤ β_k,   t ∈ [0,T],
for some
α_k ≍ β_k ≍ k^{2m},
where 2m = max( m_0, m_1 ), with m_0 and m_1 denoting the orders of the operators A_0 and A_1, respectively.
Remark 2.
If the process ξ H in Equation (8) possesses an invertible covariance operator Q (referred to as Q-cylindrical sub-fractional Brownian motion) with eigenfunctions e k , the equation can be transformed into standard form involving cylindrical sub-fractional Brownian motion through the variable change u ˜ = Q 1 / 2 u . Naturally, in the transformed equation, the operators A i , i = 0 , 1 must be replaced by A ˜ i : = Q 1 / 2 A i Q 1 / 2 .
The conditions (H1)−(H2) hold in many physical models (see, for example, [14,32]). A typical situation is when the operators A 0 and A 1 commute and A 0 or A 1 is uniformly elliptical and formally self-adjoint; more details can be found in [33]. According to the results presented in [32] and in ([33], Remark 1.2.2), the asymptotic behavior of the eigenvalues associated with the boundary value problem
A u(x) = λ u(x),  x ∈ G;   D^α u |_{∂G} = 0  for all α such that |α| ≤ n − 1,
is given by the Weyl-type formula
λ_k = ζ_A k^{2n/d} + o( k^{2n/d} ),  as k → ∞,
where G denotes either a smooth bounded domain in R d or a smooth compact d-dimensional manifold without a boundary, and A is a formally self-adjoint, uniformly elliptic differential operator of order 2 n on G. The constant ζ A > 0 depends on the geometry of the domain and the principal symbol of the operator A.
Remark 3.
The diagonalizability of the operator in (H1) is classical and facilitates the use of spectral techniques and functional calculus in the analysis. However, we acknowledge that such conditions may limit the applicability of our results to systems where the underlying operators are non-commuting or non-normal, a situation frequently encountered in applied stochastic partial differential equations (SPDEs). In such cases, the spectral decomposition may not exist or may not be tractable, making the current framework inapplicable. Relaxing the diagonalizability assumption would require alternative analytical tools. While this represents a nontrivial technical extension, it is a meaningful direction for future research. We believe that adapting the present methodology to accommodate more general operator structures would broaden the applicability of our results, and we intend to explore this in forthcoming work.
To state the result about the existence and uniqueness of the solution of (8), we need some additional constructions. For f ∈ C_0^∞([0,1]) and s ∈ ℝ, define
‖f‖_s = ( Σ_{k≥1} k^{2s} (f, e_k)_{L^2[0,1]}^2 )^{1/2},
and then define the space H^s[0,1] as the completion of C_0^∞ with respect to the norm ‖·‖_s. To obtain a consistent estimate of θ(t) at a fixed t, we impose the following assumptions on θ(·). For a positive real number β represented as β = ρ + α, where ρ > 0 is an integer and α ∈ (0,1], denote by Θ_L^β the set of ρ times continuously differentiable functions on (0,T) satisfying the following properties:
(P1) 
For all θ ∈ Θ_L^β,  |θ^{(ρ)}(t) − θ^{(ρ)}(s)| ≤ L |t − s|^α,   t, s ∈ (0,T);
(P2) 
There exist C_1, C_2, N_0 > 0 such that, for all k > N_0, t ∈ (0,T), and θ ∈ Θ_L^β,
C_1 k^{2m} ≤ κ_k + θ(t) ν_k ≤ C_2 k^{2m}.
Theorem 1.
Assume that conditions (H1) and (H2) hold. Suppose further that u_0 ∈ L^2([0,T]; H^s([0,1])) for some s > 1/2, and that the Hurst parameter satisfies
H ∈ ( max{ 1/(4m), 1/2 }, 1 ).
Then, Equation (8) admits a unique weak solution
u ∈ L^2( [0,T]; H^s([0,1]) ).
The proof of this theorem builds upon the results established in [34,35,36].

3. Main Results

3.1. Asymptotic Mean Square Error

Let u = u(t,x) be a solution of (8) with the operators A_0 and A_1 satisfying the condition (H1). The functions u_k = u_k(t) formally represent the Fourier coefficients of the solution of Equation (8) with respect to the basis {e_k}_{k≥1}; then u_k satisfies the following equation:
du_k(t) = −μ_k(t) u_k(t) dt + dξ_k^H(t),   u_k(0) = u_{0k}.
The main goal of this section is to build an estimator of the time-varying coefficient θ ( t ) by a kernel-type estimator based on the observations of the underlying process u k ( t ) : 0 t T k = 1 , , N and the study of the optimal rate of convergence of the estimator as N . Recall that solution to Equation (10) can be expressed by the exact analytical formula:
u_k(t) = u_{0k} exp( −∫_0^t μ_k(s) ds ) + ∫_0^t exp( −∫_s^t μ_k(τ) dτ ) dξ_k^H(s),
with mean and variance, respectively, given by
M_k(t) = u_{0k} exp( −∫_0^t μ_k(s) ds ),
D_k^2(t) = E[ ∫_0^t exp( −∫_s^t μ_k(τ) dτ ) dξ_k^H(s) ]^2.
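As a small numerical companion to the mean formula above, the following sketch evaluates M_k(t) = u_{0k} exp(−∫_0^t μ_k(s) ds) by a midpoint rule for a user-supplied θ; it follows the sign convention of the displayed solution formula, and all names and the example coefficient are illustrative assumptions.

```python
import numpy as np

def mode_mean(t, u0k, kappa_k, nu_k, theta, n=2000):
    """M_k(t) = u_{0k} exp(-int_0^t mu_k(s) ds), with mu_k(s) = kappa_k + theta(s) nu_k
    and the time integral approximated by a midpoint rule."""
    s = (np.arange(n) + 0.5) * (t / n)            # midpoints of [0, t]
    mu = kappa_k + theta(s) * nu_k
    return u0k * np.exp(-np.sum(mu) * (t / n))

# example with a hypothetical theta and eigenvalues kappa_k = 1, nu_k = (pi k)^2, k = 3
print(mode_mean(0.5, u0k=1.0, kappa_k=1.0, nu_k=(np.pi * 3) ** 2,
                theta=lambda s: 1 + 0.5 * np.sin(2 * np.pi * s)))
```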
Formally, the estimate of θ at a point t_0 is constructed as a weighted sum of the integrals
(1/h_N) ∫_0^T K( (t − t_0)/h_N ) [ du_k(t) − κ_k u_k(t) dt ] / u_k(t).
However, this expression must be modified, since this integral may not be defined due to the vanishing Fourier coefficients u_k(t). Let {v_N}_{N≥1} be a sequence of positive real numbers such that v_N → 0 as N → ∞. The random process U_{k,N} = U_{k,N}(t), k = 1, …, N, t ∈ (0,T), is an inverse cut-off function of u_k(t) defined by
U_{k,N}(t) = 1/u_k(t)  if |u_k(t)| > v_N,   and   U_{k,N}(t) = 1/v_N  if |u_k(t)| ≤ v_N.
Let F_{ν,N} be a weight sequence defined by
F_{ν,N} = Σ_{k=1}^N ν_k,
where ν_k are the eigenvalues of A_1. Similar to the procedure in [12], we define the kernel-type estimator θ̂_N(t) of θ(t) for every t ∈ (0,T) by
θ̂_N(t) = ( 1 / (h_N F_{ν,N}) ) Σ_{k=1}^N ∫_0^T K( (s − t)/h_N ) U_{k,N}(s) [ du_k(s) − κ_k u_k(s) ds ],
where K ( · ) is a compactly supported kernel of order ρ 1 . The stochastic integral in (15) is understood in the sense of the Skorohod integral in view of the potential non-adaptedness of the integrand; see [37]. This estimator is motivated by the desire to recover θ ( t ) from high-dimensional observations of the form { u k ( · ) } k = 1 N and extends classical kernel smoothing methods to the stochastic integral framework. For the properties of this estimator, as well as consistency and asymptotic normality, see the results below.
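The estimator (15) is stated in continuous time; in practice, the Fourier modes are observed on a grid and the stochastic integral must be discretised. The following sketch replaces du_k by first differences and the time integral by a Riemann sum, with the bracket written exactly as in the display above; it is an illustrative approximation (function and variable names are ours), not a prescribed implementation.

```python
import numpy as np

def kernel_estimate_theta(t0, u, times, kappa, nu, K, h, v):
    """Discretised sketch of the kernel-type estimator (15) at time t0.

    u     : array (N, n+1), observed Fourier modes u_k(t_j) on the grid `times`
    kappa : array (N,), eigenvalues kappa_k of A_0
    nu    : array (N,), eigenvalues nu_k of A_1 (so F_{nu,N} = nu.sum())
    K     : kernel function of order rho, supported in [-1, 1]
    h, v  : bandwidth h_N and truncation level v_N
    """
    du = np.diff(u, axis=1)                         # du_k(s_j) ~ u_k(s_{j+1}) - u_k(s_j)
    ds = np.diff(times)
    s = times[:-1]
    u_s = u[:, :-1]
    U = 1.0 / np.where(np.abs(u_s) > v, u_s, v)     # inverse cut-off U_{k,N}(s_j)
    w = K((s - t0) / h)                             # kernel weights on the grid
    integrand = U * (du - kappa[:, None] * u_s * ds)
    return float((w * integrand).sum()) / (h * nu.sum())
```

The quality of this approximation depends on the observation step being small relative to the stiffness μ_k of the highest retained mode.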
Recall that the function K(t), t ∈ ℝ, is called a compactly supported kernel of order ρ ≥ 1 if it satisfies the following:
(H3) 
K(t) = 0 when |t| is sufficiently large;
(H4) 
∫_ℝ K(t) dt = 1;
(H5) 
∫_ℝ t^j K(t) dt = 0, for j = 1, …, ρ − 1, and ∫_ℝ |t|^ρ |K(t)| dt < ∞.
Remark 4.
Our results remain valid when replacing the condition that the kernel function K ( · ) has compact support with another condition (H.3)’ whose content is as follows:
(H.3)’ 
There exists a sequence of positive real numbers a n such that a n h n tends to zero when n tends to infinity, and
n ∫_{|v| > a_n} |K(v)| dv → 0.
By imposing the condition (H5), the kernel function exploits the smoothness of θ(t); this condition is classical in non-parametric kernel estimation. For example, K(t) = 3(3 − 5t^2)/8 · 1_{|t| ≤ 1} is a compactly supported kernel of order 3. More examples and a general procedure for constructing such kernels are presented in [38,39].
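As a quick sanity check of the order conditions, the following snippet verifies numerically (midpoint quadrature) that the kernel quoted above integrates to one and has vanishing first and second moments, i.e., behaves as a compactly supported kernel of order 3.

```python
import numpy as np

# Midpoint-rule check of (H4)-(H5) for K(t) = 3(3 - 5 t^2)/8 on [-1, 1].
m = 200000
t = -1 + (np.arange(m) + 0.5) * (2 / m)
K = 3 * (3 - 5 * t ** 2) / 8
dt = 2 / m
for j in range(4):
    print(f"moment j={j}:", round(float(np.sum(t ** j * K) * dt), 6))
# expected: 1.0, 0.0, 0.0, 0.0 (the third moment vanishes by symmetry)
```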
By the properties of the kernel function, we can assume without loss of generality that the kernel K(·) is supported in [−1, 1]. Let us choose t_1, t_2 ∈ (0,T) such that t_1/h_N ≥ 2 and (T − t_2)/h_N ≥ 1, which is possible since h_N goes to zero as N becomes large.
Theorem 2.
Under the assumptions (H1)–(H5) and assuming K ( · ) to be bounded, let the function θ belong to Θ L β . In addition, the following assumptions are satisfied:
(A1) 
The orders of the operators A 0 and A 1 satisfy q = 2 m 1 m > 1 ;
(A2) 
The eigenvalues ν_k of A_1 are such that F_{ν,N} ≍ N^{m_1 + 1};
(A3) 
h_N ≍ N^{−(q+1)/(4β+1)} and v_N ≍ N^{−m} h_N^{2β};
(A4) 
The initial condition u_0 is deterministic and belongs to H^s[0,1] for some s > 1/2. Then, for 0 < t_1 < t_2 < T, we have
lim sup_{N→∞} sup_{θ ∈ Θ_L^β} sup_{t_1 ≤ t ≤ t_2} N^{2β(q+1)/(4β+1)} E( |θ̂_N(t) − θ(t)|^2 ) < ∞.
The proof of Theorem 2 is postponed until Section 6.
Concerning Assumption (A1), in [12], the authors showed that the consistency of the estimate of θ, as N → ∞, depends on the orders of the operators A_0 and A_1, and it holds true if and only if
o r d ( A 1 ) 1 2 o r d ( A 0 + θ A 1 ) d ,
i.e.,
q = 2 m 1 m > 1 .
Furthermore, in our model, a consistent kernel-type estimate (15) is not possible in the case q ≤ 1, as can be seen in the proof of Theorem 2. Assumption (A2) specifies the rate of growth of the weight sequence F_{ν,N}, which is an assumption about the asymptotic behavior of the eigenvalues of the operator A_1. To minimize the mean square error, we need Assumption (A3).
Remark 5.
The authors of [19] investigated a parabolic stochastic partial differential equation driven by fractional Brownian motion and proposed a kernel-type estimator for the time-varying coefficients within a diagonalizable spectral framework. While our study builds upon several analytical techniques from that framework, it significantly differs in scope and generality. Specifically, we consider a more intricate stochastic perturbation, namely, a cylindrical sub-fractional Brownian motion which lacks both self-similarity and stationary increments, thus requiring a distinct mathematical treatment to define the stochastic integral and establish well-posedness. In addition to deriving convergence rates, our paper further develops the theory by establishing the asymptotic normality of the estimator, which enables the construction of asymptotic confidence intervals for the time-varying coefficient. We also propose a data-driven bandwidth selection strategy based on cross-validation, and we provide practical plug-in procedures for estimating the asymptotic variance. Finally, by relaxing the assumptions on the noise process and extending the estimator’s theoretical guarantees, our framework is suitable for a broader class of parabolic SPDEs, particularly in contexts where the noise may exhibit weaker regularity or long-range dependence structures distinct from classical fBm.

3.2. Asymptotic Normality

To establish the asymptotic normality of estimator θ ^ N ( t ) , we need some assumptions.
(A5) 
F_{ν,N} ≍ N^{β(q+1)/(4β+1)}.
Let us now state the following theorem, which gives the weak convergence rate of the estimator θ̂_N(t). Below, we write Z ∼ N(μ, σ^2) whenever the random variable Z follows a normal law with expectation μ and variance σ^2; →_D denotes convergence in distribution and →_P convergence in probability.
The following result is the main result of this section.
Theorem 3.
Suppose that the function θ ∈ Θ_L^{β+1}. Then, under hypotheses (A1)–(A5) and (H1)–(H5), as N → ∞, we have
N^{β(q+1)/(4β+1)} ( θ̂_N(t) − θ(t) ) →_D N( m_{θ,ρ}(t), σ_H^2(t) ),
where
m_{θ,ρ}(t) = ( θ^{(ρ+1)}(t) / (ρ+1)! ) ∫_{−1}^1 s^{ρ+1} K(s) ds,
and
σ_H^2(t) = V_{M,D}(t) ∫_{−1}^1 ∫_{−1}^1 K(s) K(r) ϕ_H(s,r) ds dr,
with
ϕ_H(s,r) = H(2H − 1) ( |s − r|^{2H−2} − (s + r)^{2H−2} ),
and
V_{M,D}(s) = lim_{N→∞} Σ_{k=1}^N [ (1/D_k(s)) φ( M_k(s)/D_k(s) ) + E( 1/u_k(s) ) ]^2.
The proof of Theorem 3 is postponed until Section 6.

3.3. Confidence Interval

A usual application of asymptotic normality is to establish confidence intervals for the estimates. Our goal in this section is the application of our asymptotic normality result (Theorem 3) to build the confidence intervals for the true value of θ ( t ) . In non-parametric estimation, the asymptotic variance depends on certain unknown functions. In our case, we have
σ_H^2(t) = V_{M,D}(t) ∫_{−1}^1 ∫_{−1}^1 K(s) K(r) ϕ_H(s,r) ds dr,
where
ϕ_H(s,r) = H(2H − 1) ( |s − r|^{2H−2} − (s + r)^{2H−2} ),
and
V_{M,D}(s) = Σ_{k=1}^N [ (1/D_k(s)) φ( M_k(s)/D_k(s) ) + E( 1/u_k(s) ) ]^2,
with
M_k(s) = E[u_k(s)],   M′_k(s) = E[1/u_k(s)],   and   D_k^2(s) = Var( u_k(s) ),
where M_k(s), M′_k(s), and D_k(s) are unknown a priori and have to be estimated in practice. Then one can obtain a confidence interval even if σ_H^2(t) is functionally specified. Now a plug-in estimate of the asymptotic variance σ_H^2(t) can easily be obtained using the estimators M̂_k(s), M̂′_k(s), and D̂_k(s) for M_k(s), M′_k(s), and D_k(s), respectively, computed from the observations u_k(iδ), 0 ≤ i ≤ n, that is
σ̂_H^2(t) = V̂_{M,D}(t) ∫_{−1}^1 ∫_{−1}^1 K(s) K(r) ϕ_H(s,r) ds dr,
where
V̂_{M,D}(s) = Σ_{k=1}^N [ (1/D̂_k(s)) φ( M̂_k(s)/D̂_k(s) ) + M̂′_k(s) ]^2,
with
M̂_k(s) = (1/n) Σ_{i=1}^n u_k(s_i),   M̂′_k(s) = (1/n) Σ_{i=1}^n 1/u_k(s_i),   D̂_k^2(s) = (1/n) Σ_{i=1}^n ( u_k(s_i) − M̂_k(s) )^2.
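In code, the sample moments displayed above are straightforward; the sketch below computes M̂_k, M̂′_k, and D̂_k for every mode from an array of sampled values (the layout and names are our own illustration).

```python
import numpy as np

def plug_in_moments(u_samples):
    """Row k of u_samples holds n sampled values u_k(s_1), ..., u_k(s_n).
    Returns the plug-in estimates (M_hat, Mprime_hat, D_hat) per mode."""
    M_hat = u_samples.mean(axis=1)                                      # estimates E[u_k(s)]
    Mprime_hat = (1.0 / u_samples).mean(axis=1)                         # estimates E[1/u_k(s)]
    D_hat = np.sqrt(((u_samples - M_hat[:, None]) ** 2).mean(axis=1))   # estimates sd of u_k(s)
    return M_hat, Mprime_hat, D_hat
```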
Remark 6.
If the Hurst parameter H ( 1 / 2 , 1 ) is unknown in the formula (19), it is possible to construct an empirical estimator for H that is defined as follows:
H ^ n = H ^ n ( 2 , a ) : = log A n ( 2 , a ) 2 log n ,
where
A n ( 2 , a ) = 1 n i = 1 n ξ i / n H ξ ( i 1 ) / n H 2 n 2 H ϕ H ( i ) ,
with a = { 1 , 1 } , and
ϕ H ( i ) = ( 2 i + 1 ) 2 H 2 2 H 1 ( i + 1 ) 2 H 2 2 H 1 i 2 H .
More information on these various approaches can be found in [40].
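Remark 6 estimates the Hurst parameter from discrete quadratic variations of the noise. As a rough illustration of the same idea (and not the exact formula of the remark), one can regress the logarithm of averaged squared increments on the logarithm of the lag, since E|ξ^H(t+u) − ξ^H(t)|^2 is of order u^{2H}; the function below is a hedged sketch with hypothetical names.

```python
import numpy as np

def hurst_from_increments(x, dt):
    """Crude Hurst estimate from a regularly sampled path x (time step dt):
    regress log of the mean squared increment on log of the lag and halve the slope."""
    lags = np.array([1, 2, 4, 8, 16])
    msq = np.array([np.mean((x[l:] - x[:-l]) ** 2) for l in lags])
    slope, _ = np.polyfit(np.log(lags * dt), np.log(msq), 1)
    return slope / 2.0
```

Because sub-fBm increments are non-stationary, this only recovers H approximately; the estimator of Remark 6 accounts for the exact second-moment structure.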
Making use of Theorem 3 in connection with Slutsky’s theorem gives the following corollary:
Corollary 1.
Under the Assumption of Theorem 3, we have
( N^{2β(q+1)/(4β+1)} / σ̂^2_{Ĥ_n}(t) )^{1/2} ( θ̂_N(t) − θ(t) − m̂_{θ,ρ}(t) ) →_D N( 0, 1 ),
with
m̂_{θ,ρ}(t) = ( θ̂^{(ρ+1)}(t) / (ρ+1)! ) ∫_{−1}^1 s^{ρ+1} K(s) ds.
Finally, the asymptotic (1 − ζ)-confidence interval for θ(t), where t_{1−ζ/2} denotes the (1 − ζ/2)-quantile of the standard normal distribution, is given by the following:
θ̂_N(t) − m̂_{θ,ρ}(t) ± t_{1−ζ/2} ( σ̂^2_{Ĥ_n}(t) / N^{2β(q+1)/(4β+1)} )^{1/2}.
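Putting the pieces together, the following sketch assembles the asymptotic confidence interval from the plug-in quantities of this section. The double kernel integral of ϕ_H is computed on a slightly shifted midpoint grid, with absolute values inserted in the fractional powers so that they remain real on the symmetric integration domain (an assumption on our part); all function names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def phi_H(s, r, H):
    """phi_H(s, r) = H(2H - 1)(|s - r|^{2H-2} - |s + r|^{2H-2})."""
    return H * (2 * H - 1) * (np.abs(s - r) ** (2 * H - 2) - np.abs(s + r) ** (2 * H - 2))

def kernel_double_integral(K, H, m=400):
    """int_{-1}^{1} int_{-1}^{1} K(s) K(r) phi_H(s, r) ds dr; the r-grid is shifted by a
    quarter cell so the lines s = r and s = -r (integrable singularities) are never hit."""
    gs = -1 + (np.arange(m) + 0.5) * (2 / m)
    gr = -1 + (np.arange(m) + 0.25) * (2 / m)
    S, R = np.meshgrid(gs, gr, indexing="ij")
    return float((K(S) * K(R) * phi_H(S, R, H)).sum()) * (2 / m) ** 2

def confidence_interval(theta_hat, m_hat, V_hat, H, K, N, beta, q, zeta=0.05):
    """Asymptotic (1 - zeta) interval for theta(t), given the plug-in quantities
    theta_hat = hat theta_N(t), m_hat = hat m_{theta,rho}(t), V_hat = hat V_{M,D}(t)."""
    sigma2_hat = V_hat * kernel_double_integral(K, H)
    rate = N ** (2 * beta * (q + 1) / (4 * beta + 1))
    half_width = norm.ppf(1 - zeta / 2) * np.sqrt(sigma2_hat / rate)
    centre = theta_hat - m_hat
    return centre - half_width, centre + half_width
```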
Remark 7.
The choice of the Hurst index H ( 1 / 2 , 1 ) is motivated by both theoretical and practical considerations. In this range, the noise term ξ H ( t , x ) , typically modeled via fractional Brownian motion or its spatial extensions, exhibits long-range dependence and smoother sample paths. These properties are particularly relevant in modeling phenomena with memory and persistence effects, which are common in fields such as hydrology, telecommunications, and finance (see, e.g., ref. [22,41]). From an analytical perspective, the case H > 1 / 2 allows for greater regularity of the sample paths, which is crucial for ensuring the well-posedness of the model. In particular, stochastic integration with respect to fractional noise in this regime can often be handled using techniques based on the Young integral or fractional calculus (cf. Refs. [21,42]), which are more tractable than those required for the rougher case H < 1 / 2 . In contrast, for H ( 0 , 1 / 2 ) , the fractional noise becomes significantly more irregular, and the corresponding stochastic integrals require advanced machinery such as Malliavin calculus or rough path theory (see [43,44]). These cases are technically more demanding and thus lie beyond the scope of the current work. However, they remain an important and challenging direction for future research.
Remark 8.
In [45], the authors introduced an innovative framework: the Newton–Côtes integral corrected by Lévy areas, designed to analyze the stochastic differential equation
X_t = x_0 + ∫_0^t σ( X_s ) dB_s + ∫_0^t b( X_s ) ds,   t ∈ [0,1],
for all values of the Hurst parameter H ( 0 , 1 ) . Here, x 0 R is a deterministic initial condition, and B = ( B t ) t [ 0 , 1 ] denotes a fractional Brownian motion (fBm) with Hurst index H. The coefficient functions σ and b are assumed to satisfy standard regularity conditions, and the integral Equation (24) is interpreted in the Russo–Vallois sense. This methodology facilitates the use of a fixed-point theorem to prove existence and uniqueness of solutions in the space of processes with Hölder continuous paths of order α ( 0 , 1 ) . Importantly, this framework transcends the limitations of earlier approaches, such as those in [46,47], which are restricted to the more specific and somewhat artificial class of processes of the form X t = f ( B t , A t ) , where f : R × [ 0 , 1 ] R is sufficiently smooth and A is a process with C 1 trajectories.
Remark 9.
The expression in Equation (5) defines the inner product within the Hilbert space H ξ H associated with the centered Gaussian process ξ H , known as sub-fractional Brownian motion (sub-fBm). The corresponding kernel is given by the following:
K(u,r) = α_H ( |u − r|^{2H−2} − (u + r)^{2H−2} ),
which is derived from the second-order (distributional) derivative of the covariance function of ξ H . This kernel encapsulates the correlation structure induced by the sub-fBm and distinguishes it from standard fractional Brownian motion (fBm) through the additional term ( u + r ) 2 H 2 . This extra term explicitly reflects the non-stationarity of increments, a defining characteristic of sub-fBm. The resulting Hilbert space H ξ H consists of all deterministic functions φ for which the right-hand side of Equation (5) is finite. This space thereby serves as the domain of admissible integrands for stochastic integration with respect to ξ H . Equation (6) captures a fundamental property of this framework. Specifically, it establishes that the mapping
φ 0 T φ ( u ) d ξ H ( u )
is an isometry from the Hilbert space H ξ H into the space L 2 ( Ω ) . This isometric property enables the rigorous construction of stochastic integrals of deterministic functions with respect to sub-fBm using the theory of isonormal Gaussian processes. The resulting integral is a centered Gaussian random variable whose variance—and covariance with other integrals—is determined precisely by the inner product defined in Equation (5). Thus, Equation (6) extends the classical Itô isometry associated with standard Brownian motion and fBm to the more intricate setting of sub-fBm, which neither exhibits stationary increments nor possesses the semi-martingale property. Together, these equations constitute the mathematical foundation for defining and analyzing stochastic integrals with respect to sub-fractional Brownian motion, especially when this underlying process lacks the semi-martingale structure. In the context of our paper, Equations (5) and (6) play a crucial role in the asymptotic analysis of estimators driven by non-semi-martingale noise. They provide the essential functional framework for the rigorous definition of the stochastic integrals employed, yield explicit variance expressions via the inner product structure of the Hilbert space H ξ H , and clarify the intrinsic connection between the nature of the noise and the statistical behavior of the estimators. These theoretical foundations underpin several key computations and results presented in our work, notably in Equations (16) and (17), and explicitly in the variance computations of Equations (63) and (70).

4. The Bandwidth Selection Criterion

There are essentially no stringent restrictions on the choice of the kernel functions K_l(·), for l = 1, …, p, in our framework, apart from the requirement that they satisfy the regularity conditions denoted by assumption (A1). As noted in [48], for sufficiently large sample sizes, the shape of the optimal kernel becomes effectively unique. For instance, in the univariate case ℝ^1, classical L^2-theory [49] establishes that, among all positive kernels, the Epanechnikov kernel [50], given by K(x) = max{ (3/4)(1 − x^2), 0 }, is optimal in the sense of minimizing the integrated mean squared error. In the multivariate case ℝ^d, ref. [51] demonstrated the L^2-optimality of the kernel max{ 1 − ‖x‖^2, 0 }^d, further supporting the use of compactly supported kernels in higher dimensions.
The selection of the bandwidth parameter, however, poses a considerably more challenging problem. Although any sequence h N satisfying h N 0 and N 1 / 2 h N β F ν , N suffices for obtaining the asymptotic distribution stated in Theorem 3, practical implementation requires more concrete and adaptive strategies. Numerous data-driven methods have been developed to guide the selection of bandwidth in non-parametric kernel estimation, aiming to achieve asymptotically optimal performance; see, for instance, refs. [52,53,54,55,56].
To that end, we employ a cross-validation-based criterion grounded in the concept of a leave-one-out estimator. Specifically, for each observation i, we define the following kernel-type estimator, excluding the i-th data point:
θ̂^{(−i)}_{N−1}(t; h) = ( 1 / (h_{N−1} F_{ν,N−1}) ) Σ_{k=1}^{N−1} ∫_0^T K( (s − t)/h_{N−1} ) U_{k,N−1}(s) [ du_k(s) − κ_k u_k(s) ds ].
This estimator serves as a smoothed approximation of an underlying functional signal at time t, constructed from all data points except the i-th. The kernel function K ( · ) governs the local weighting, while the bandwidth h N 1 controls the degree of smoothing. The process U k , N 1 ( s ) typically represents a transformed or projected version of the observed data, and the term d u k ( s ) κ k u k ( s ) d s may be interpreted as a stochastic increment, reflecting deviations from a modeled dynamic.
In order to evaluate and compare different choices of the bandwidth parameter h, we define the following cross-validation criterion:
CV(h) := (1/N) Σ_{j=1}^N [ θ̂_N(t) − θ̂^{(−j)}_{N−1}( u_j(t); h ) ]^2 W( u_j(t) ),
where θ ^ N ( t ) denotes the estimator based on the full dataset, and W ( · ) is a non-negative weight function allowing selective emphasis across the domain. The criterion C V ( h ) effectively measures the average prediction error obtained by leaving out each observation and comparing the resulting estimator to the one based on all data. A natural and commonly adopted approach is to select the bandwidth h that minimizes this criterion:
h^* := arg min_h CV(h).
While this strategy provides a practical and theoretically grounded approach for bandwidth selection, it must be noted that deriving an optimal procedure tailored to our specific framework remains an open and nontrivial problem. Ideally, one would like to choose a bandwidth h N that minimizes the mean squared error (MSE) for a given finite sample size. However, the analytical derivation of such an optimal choice is intricate and sufficiently complex to warrant a separate theoretical investigation.
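In practice the minimisation of CV(h) is typically carried out over a finite grid of candidate bandwidths; the following sketch shows that outer loop, with the criterion itself supplied by the user as a callable (names and the example grid are illustrative).

```python
import numpy as np

def select_bandwidth(cv_score, h_grid):
    """h* = argmin_h CV(h) over a finite grid; cv_score(h) must implement the
    leave-one-out criterion CV(h) of this section."""
    scores = np.array([cv_score(h) for h in h_grid])
    best = int(np.argmin(scores))
    return h_grid[best], scores

# example grid centred on the theoretical order N^{-(q+1)/(4*beta+1)} (hypothetical values)
# N, beta, q = 50, 2.0, 2.0
# h_grid = N ** (-(q + 1) / (4 * beta + 1)) * np.linspace(0.5, 2.0, 16)
```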

5. Concluding Remarks

This article investigates the non-parametric estimation of a time-varying coefficient in a stochastic parabolic equation driven by cylindrical sub-fractional Brownian motion with Hurst parameter H ( 1 / 2 , 1 ) . The underlying model is assumed to be diagonalizable, implying the existence of a common eigenfunction basis shared by all involved operators. To construct a kernel-based estimator for the time-varying coefficient, a spectral method is proposed, relying on finite-dimensional approximations of the system’s solutions.
The study rigorously establishes the asymptotic behavior of the proposed estimator, providing precise conditions under which the mean squared error converges and asymptotic normality is attained as the approximation dimension tends to infinity. These results lay the theoretical foundation for constructing confidence intervals for the time-varying coefficient, thereby offering practical inferential tools in stochastic partial differential equations influenced by long-range dependent noise.
Beyond its immediate contributions, the methodological framework developed herein opens several promising directions for future research. In particular, extending the estimation procedure to incorporate k-nearest neighbors (k-NN) techniques represents a compelling avenue. The k-NN method is a versatile and computationally efficient non-parametric tool, widely recognized for its adaptability to continuous distributions and minimal reliance on parametric assumptions. Its proven consistency in tasks such as density estimation, classification, and regression further underscores its potential utility in this context.
However, integrating k-NN methods within the present stochastic framework would necessitate a substantially more intricate mathematical treatment, which falls beyond the scope of the current study. Nonetheless, such an extension remains an important and challenging direction for future theoretical development.

6. Proofs

This section is devoted to the proof of our results. The previously defined notation continues to be used in the following.
Proof of Theorem 2.
Let C denote a positive constant, whose value may vary from line to line. In view of relation (14), we deduce that
U_{k,N}(s) u_k(s) = χ( |u_k(s)| ≤ v_N ) u_k(s)/v_N + χ( |u_k(s)| > v_N ) = χ( |u_k(s)| ≤ v_N ) ( u_k(s)/v_N − 1 ) + 1.
Making use of Equations (10) and (15), the difference θ ^ N ( t ) θ ( t ) can be divided into three parts in the following way:
θ ^ N ( t ) θ ( t ) = 1 h N F ν , N k = 1 N 0 T K s t h N U k , N ( s ) θ ( s ) ν k u k ( s ) d s + d ξ k H ( t ) 1 h N F ν , N k = 1 N 0 T K s t h N U k , N ( s ) θ ( s ) ν k d s = 1 h N F ν , N k = 1 N 0 T K s t h N χ u k ( s ) v N u k ( s ) v N 1 θ ( s ) ν k d s + 1 h N F ν , N k = 1 N 0 T K s t h N θ ( s ) θ ( t ) ν k d s + 1 h N F ν , N k = 1 N 0 T K s t h N U k , N ( s ) d ξ k H ( s ) = J 1 + J 2 + J 3 .
We aim to evaluate the three terms under consideration. It has already been established that J 1 and J 2 do not involve stochastic integrals; consequently, their analysis follows a methodology similar to that employed in [18]. At this stage, we emphasize that analyzing the stochastic process u k ( t ) in the presence of sub-fractional Brownian motion adds considerable complexity to the calculations. We begin by estimating the term J 1 . Observe that θ ( s ) is a bounded, measurable function defined on the interval [ 0 , T ] . We first note that
E J 1 2 E C h N F ν , N k = 1 N 0 T K s t h N χ u k ( s ) v N ν k d s 2 C h N F ν , N k = 1 N 0 T ν k K s t h N P u k ( s ) v N d s 2 + C F ν , N 2 k = 1 N E 1 h N 0 T ν k K s t h N χ u k ( s ) v N d s 2 .
On the other hand, we recall that the Gaussian process u k ( t ) is given by
u k ( t ) = u 0 k exp 0 t μ k ( s ) d s + 0 t exp s t μ k ( τ ) d τ d ξ k H ( s ) ,
with mean and variance, respectively, given by
M k ( t ) = u 0 k exp 0 t μ k ( s ) d s ,
D k 2 ( t ) = E 0 t exp s t μ k ( τ ) d τ d ξ k H ( s ) 2 .
Notice that
E 0 t exp s t μ k ( τ ) d τ d ξ k H ( s ) 2 = α H 0 t 0 t r u 2 H 2 + r + u 2 H 2 exp r t μ k ( τ ) d τ × exp u t μ k ( τ ) d τ d u d r α H 0 t 0 t r u 2 H 2 + r + u 2 H 2 exp C k 2 m ( t r ) × exp C k 2 m ( t u ) d u d r α H exp 2 C k 2 m t 0 t 0 t r u 2 H 2 + r + u 2 H 2 exp C k 2 m r × exp C k 2 m u d u d r = α H exp 2 C k 2 m t 0 t exp C k 2 m r 0 t r u 2 H 2 + r + u 2 H 2 × exp C k 2 m u d u d r .
By a direct calculation, we obtain
0 t r u 2 H 2 exp C k 2 m u d u = 0 r r u 2 H 2 exp C k 2 m u d u + r t r u 2 H 2 exp C k 2 m u d u = 0 r u 2 H 2 exp C k 2 m ( r u ) d u + 0 t r u 2 H 2 exp C k 2 m ( u + r ) d u = exp C k 2 m r 0 r u 2 H 2 exp C k 2 m u d u + 0 t r u 2 H 2 exp C k 2 m u d u 1 2 H 1 r 2 H 1 + 1 2 H 1 ( t r ) 2 H 1 1 2 H 1 t 2 H 1 ,
and we obtain likewise that
0 t r + u 2 H 2 exp C k 2 m u d u = 0 t ( r u ) + 2 u 2 H 2 exp C k 2 m u d u 0 t r u 2 H 2 exp C k 2 m u d u 0 t 2 u 2 H 2 exp C k 2 m u d u 1 2 H 1 t 2 H 1 2 2 H 2 0 t u 2 H 2 exp C k 2 m u d u 1 2 H 1 t 2 H 1 2 2 H 2 0 t u 2 H 2 d u 1 2 H 1 T 2 H 1 2 2 H 2 2 H 1 t 2 H 1 1 2 2 H 2 2 H 1 t 2 H 1 .
This readily implies that
0 t r u 2 H 2 + r + u 2 H 2 exp C k 2 m u d u 2 2 2 H 2 2 H 1 t 2 H 1 .
Hence the variance can be lower bounded in the following way:
D k 2 ( t ) α H exp 2 C k 2 m t 0 t exp C k 2 m r 2 2 2 H 2 2 H 1 t 2 H 1 d r = α H exp 2 C k 2 m t 2 2 2 H 2 2 H 1 t 2 H 1 0 t exp C k 2 m r d r C t 2 H 1 exp 2 C k 2 m t k 2 m exp C k 2 m t 1 .
To derive useful bounds for the integrals appearing in our analysis, we begin by estimating the tail probabilities of the process u k ( s ) . Under the assumption that u k ( s ) follows a sub-Gaussian distribution with mean M k ( t ) and variance D k 2 ( t ) , and that D k ( t ) is uniformly bounded below away from zero, we obtain
P u k ( s ) v N = 1 2 π D k ( t ) v N v N exp x M k ( t ) 2 2 D k 2 ( t ) d x 2 v N 2 π D k ( t ) v N D k ( t ) ,
where the last inequality follows from the fact that D k ( t ) c > 0 . This estimate holds uniformly in k and is valid for sufficiently small v N . It allows us to bound the second moment of the random variable U k , N ( t ) , defined as a piecewise estimator with truncation near zero. Specifically, the expectation
E U k , N 2 ( t ) = E 1 v N 2 χ ( | u k ( s ) | v N ) + E 1 u k 2 ( t ) χ ( | u k ( s ) | > v N )
can be bounded as follows:
E U k , N 2 ( t ) = E 1 v N 2 χ u k ( s ) v N + E 1 u k 2 ( t ) χ u k ( s ) > v N P u k ( s ) v N v N 2 + 1 2 π D k ( t ) x > v N exp x M k ( t ) 2 2 D k 2 ( t ) d x x 2 P u k ( s ) v N v N 2 + 2 2 π D k ( t ) v N + d x x 2 2 v N D k ( t ) .
where we used symmetry of the Gaussian distribution and the bound
v N d x x 2 = 1 v N .
Again, this inequality holds under the assumption that D k ( t ) c > 0 .
As a consequence, the first term on the right-hand side of J 1 (defined earlier) can be controlled by combining kernel smoothing and probabilistic bounds. Recall that the kernel function K ( · ) satisfies the conditions (H3) and (H4) and the conditions (H2) and (A2) on ν k . Then, we obtain the following:
C h N F ν , N k = 1 N 0 T ν k K s t h N P u k ( s ) v N d s 2 C v N 2 N 2 m .
In fact, making use of (36), we infer that
C h N F ν , N k = 1 N 0 T ν k K s t h N P u k ( s ) v N d s C v N h N F ν , N k = 1 N 0 T ν k K s t h N d s D k ( s ) C v N h N F ν , N k = 1 N t / h N ( T t ) / h N ν k K s k m t + s h N ( 2 H 1 ) / 2 exp C ( t + s h N ) exp C ( t + s h N ) 1 d s C v N F ν , N k = 1 N ν k k m 1 1 K s t 1 2 ( 2 H 1 ) / 2 exp C ( t 2 + t 1 / 2 ) exp C ( t + s h N ) 1 d s C v N N m .
Given (37), the second term of (29) can be rewritten as
C F ν , N 2 k = 1 N 1 h N 2 0 T 0 T ν k 2 K s 1 t h N K s 2 t h N E χ u k ( s 1 ) v N χ u k ( s 2 ) v N d s 1 d s 2 C F ν , N 2 k = 1 N 1 h N 2 0 T 0 T ν k 2 K s 1 t h N K s 2 t h N E χ u k ( s 1 ) v N d s 1 d s 2 = C F ν , N 2 k = 1 N t / h N ( T t ) / h N t / h N ( T t ) / h N ν k 2 K s 1 K s 2 P u k ( s 1 ) v N d s 1 d s 2 C F ν , N 2 k = 1 N t / h N ( T t ) / h N t / h N ( T t ) / h N ν k 2 K s 1 K s 2 v N D k ( s 1 ) d s 1 d s 2 C F ν , N 2 k = 1 N ν k 2 k m C v N N m 1 .
Thus, by combining (39) with (41), it holds that
E J 1 2 C v N 2 N 2 m + v N N m 1 .
We next evaluate the second term J 2 on the right side of (28). We have
J 2 = 1 h N F ν , N k = 1 N 0 T K s t h N θ ( s ) θ ( t ) ν k d s = 1 F ν , N k = 1 N ν k t / h N ( T t ) / h N K s θ ( t + h N s ) θ ( t ) d s = t / h N ( T t ) / h N K s θ ( t + h N s ) θ ( t ) d s .
An application of Taylor’s formula shows that
θ ( t + τ ) = θ ( t ) + m = 1 ρ τ m m ! θ m ( t ) + τ ρ ρ ! θ ( ρ ) ( t + γ τ ) θ ( ρ ) ( t ) , γ = γ ( τ ) ( 0 , 1 ) .
This implies
J 2 = t / h N ( T t ) / h N K s m = 1 ρ ( h N s ) m m ! θ ( m ) ( t ) + ( h N s ) ρ ρ ! θ ( ρ ) ( t + γ h N s ) θ ( ρ ) ( t ) d s = t / h N ( T t ) / h N K s h N ρ s ρ ρ ! θ ( ρ ) ( t + γ h N s ) θ ( ρ ) ( t ) d s .
The property (P1) of the class of functions ensures that
θ ( ρ ) ( t + γ h N s ) θ ( ρ ) ( t ) L α τ α ,
from which we infer that
J 2 C h N β 1 1 s β K ( s ) d s C h N β .
Therefore, we have
J 2 2 C h N 2 β .
We finally evaluate the second term J 3 in the right side of (28). By the independence of ξ i H and ξ j H ( i j ) , we have
E J 3 2 = 1 h N 2 F ν , N 2 k = 1 N E 0 T K s t h N U k , N ( s ) d ξ k H ( s ) 2 C h N 2 F ν , N 2 k = 1 N 0 T E K s t h N U k , N ( s ) 1 / H d s 2 H .
It is known that for any random variable X and a real number p > 1 , we have the following basic Jensen’s inequality:
( E|X| )^p ≤ E( |X|^p ).
Let us denote
X = K s t h N U k , N ( s ) 1 / H , p = 2 H .
Then, we have
E K s t h N U k , N ( s ) 1 / H 2 H E K s t h N U k , N ( s ) 2 = K 2 s t h N E U k , N ( s ) 2 K 2 s t h N 2 v N D k ( t ) ,
where the last inequality results from (38). Thus, we readily obtain
E K s t h N U k , N ( s ) 1 / H K s t h N 1 / H 2 v N D k ( t ) 1 / 2 H .
We can therefore write the following chain of inequalities:
E J 3 2 C h N 2 F ν , N 2 k = 1 N 0 T K s t h N 1 / H 2 v N D k ( t ) 1 / 2 H d s 2 H C v N 1 h 2 H h N 2 F ν , N 2 k = 1 N k m t / h N ( T t ) / h N K s 1 / H × t + s h N ( 2 H 1 ) / 2 e C ( t + s h N ) e C ( t + s h N ) 1 1 / 2 H d s 2 H C v N 1 h 2 H 2 F ν , N 2 k = 1 N k m 1 1 K s 1 H t 1 2 2 H 1 2 e C ( t 2 + t 1 / 2 ) e C t 1 / 2 1 1 2 H d s 2 H .
Hence, we have
E J 3 2 C v N 1 h 2 H 2 N m 2 m 1 1 .
As a result, by (28), (42), (45), and (49), we have
E | θ ^ N ( t ) θ ( t ) | 2 v N 2 N 2 m + v N N m 1 + h N 2 β + v N 1 h 2 H 2 N m 2 m 1 1 .
Making use of the hypothesis (A1), (A2) and (A3), we have
v N 1 h N 2 H 2 N m 2 m 1 1 h N 2 β + 2 H 1 ,
whenever
h N N ( q + 1 ) / ( 4 β + 1 ) and v N N m h N 2 β .
Finally, we have
E | θ ^ N ( t ) θ ( t ) | 2 C N 2 β ( q + 1 ) / ( 4 β + 1 ) .
This completes the proof of the theorem. □
Proof of Theorem 3. 
We begin by decomposing the expression N β ( q + 1 ) 4 β + 1 ( θ ^ N ( t ) θ ( t ) ) into a sum of three distinct components, expressed explicitly as follows:
N β ( q + 1 ) / ( 4 β + 1 ) ( θ ^ N ( t ) θ ( t ) ) = N β ( q + 1 ) / ( 4 β + 1 ) h N F ν , N k = 1 N 0 T K s t h N χ u k ( s ) v N u k ( s ) v N 1 θ ( s ) ν k d s    + N β ( q + 1 ) / ( 4 β + 1 ) h N F ν , N k = 1 N 0 T K s t h N θ ( s ) θ ( t ) ν k d s    + N β ( q + 1 ) / ( 4 β + 1 ) h N F ν , N k = 1 N 0 T K s t h N U k , N ( s ) d ξ k H ( s ) = R 1 + R 2 + R 3 .
Consequently, in light of Slutsky’s theorem, it is sufficient to establish the validity of the following assertions:
R_1 →_P 0;
R_2 →_P m_{θ,ρ}(t);
R_3 →_D N( 0, σ_H^2(t) ).
Obviously, using the Markov inequality and (42), as N → ∞, we obtain, for all λ > 0,
P R 1 > λ λ 1 E R 1 2 = λ 1 N 2 β ( q + 1 ) / ( 4 β + 1 ) E J 1 2 C λ 1 N 2 β ( q + 1 ) / ( 4 β + 1 ) v N 2 N 2 m + v N N m 1 = C λ 1 N 2 β ( q + 1 ) / ( 4 β + 1 ) v N N m v N N m + N 1 .
Making use of the hypotheses (A1), (A2) and (A3), we have
h N N ( q + 1 ) / ( 4 β + 1 ) and v N N m h N 2 β .
Therefore, we get
P R 1 > λ C λ 1 N 2 β ( q + 1 ) / ( 4 β + 1 ) N 2 β ( q + 1 ) / ( 4 β + 1 ) N 2 β ( q + 1 ) / ( 4 β + 1 ) + N 1 = C λ 1 N 2 β ( q + 1 ) / ( 4 β + 1 ) + N 1 ,
which implies that
R_1 →_P 0  as N → ∞.
Moreover, we note that
R 2 = N β ( q + 1 ) / ( 4 β + 1 ) h N F ν , N k = 1 N 0 T K s t h N θ ( s ) θ ( t ) ν k d s = N β ( q + 1 ) / ( 4 β + 1 ) F ν , N k = 1 N ν k t / h N ( T t ) / h N K s θ ( t + h N s ) θ ( t ) d s = N β ( q + 1 ) / ( 4 β + 1 ) t / h N ( T t ) / h N K s θ ( t + h N s ) θ ( t ) d s .
An application of Taylor’s formula shows that
θ ( t + τ ) θ ( t ) = m = 1 ρ + 1 τ m m ! θ m ( t ) + τ ρ + 1 ( ρ + 1 ) ! θ ρ + 1 ( t + γ τ ) θ ρ + 1 ( t ) ,
where
γ = γ ( τ ) ( 0 , 1 ) .
This implies that
R 2 = N β ( q + 1 ) / ( 4 β + 1 ) t / h N ( T t ) / h N K s m = 1 ρ + 1 ( h N s ) m m ! θ m ( t ) + ( h N s ) ρ + 1 ( ρ + 1 ) ! θ ρ + 1 ( t + γ h N s ) θ ρ + 1 ( t ) d s
= θ ρ + 1 ( t ) ( ρ + 1 ) ! t / h N ( T t ) / h N s ρ + 1 K s d s + 1 ( ρ + 1 ) ! t / h N ( T t ) / h N s ρ + 1 K s θ ρ + 1 ( t + γ h N s ) θ ρ + 1 ( t ) d s .
Notice that
R 2 m 2 = 1 ( ρ + 1 ) ! t / h N ( T t ) / h N s ρ + 1 K s θ ρ + 1 ( t + γ h N s ) θ ρ + 1 ( t ) d s 2 1 ( ρ + 1 ) ! t / h N ( T t ) / h N s ρ + 1 K s h N s α d s 2 h N 2 α ( ρ + 1 ) ! 2 1 1 s 2 ( β + 1 ) K 2 s d s ,
which tends to zero as N . It immediately follows that
R_2 →_P m_{θ,ρ}(t)  as N → ∞.
For the term R 3 , we recall that the asymptotic variance σ H 2 ( t ) is given by
σ H 2 ( t ) = V M , D ( t ) 1 1 1 1 K s K r ϕ H ( s , r ) d s d r
where the function V M , D ( t ) is defined as
V M , D ( t ) = lim N k = 1 N 1 D k ( t ) φ M k ( t ) D k ( t ) + E 1 u k ( t ) 2 ,
with
M k ( t ) = E [ u k ( t ) ] , M k ( t ) = E 1 u k ( t ) , D k ( t ) = Var ( u k ( t ) ) .
From [57], the term R_3 is a centered Gaussian process. Consequently, by the independence of ξ_i and ξ_j for i ≠ j, it is clear that the integrals
0 T K s t h N U i , N ( s ) d ξ i H ( s )
and
0 T K s t h N U j , N ( s ) d ξ j H ( s )
are independent for i ≠ j, and then
E R 3 2 = N 2 β ( q + 1 ) / ( 4 β + 1 ) h N 2 F ν , N 2 E k = 1 N 0 T K s t h N U k , N ( s ) d ξ k H ( s ) 2 = N 2 β ( q + 1 ) / ( 4 β + 1 ) h N 2 F ν , N 2 k = 1 N E 0 T K s t h N U k , N ( s ) d ξ k H ( s ) 2 = N 2 β ( q + 1 ) / ( 4 β + 1 ) h N 2 F ν , N 2 j = 1 N k = 1 N E 0 T 0 T K s t h N K r t h N U j , N ( s ) U k , N ( r ) d ξ j H ( s ) d ξ k H ( r ) .
By combining (5) with (6), we readily infer that
E 0 T K s t h N U k , N ( s ) d ξ k H ( s ) 2 = 0 T 0 T K s t h N E U k , N ( s ) K r t h N E U k , N ( r ) ϕ H ( s , r ) d s d r , = h N 2 t / h N ( T t ) / h N t / h N ( T t ) / h N K s K r E U k , N ( h N s + t ) E U k , N ( h N r + t ) × ϕ H ( h N s + t , h N r + t ) d s d r ,
where
ϕ H ( s , r ) = H ( 2 H 1 ) s r 2 H 2 s + r 2 H 2 .
Notice that
E 0 T K s t h N U k , N ( s ) d ξ k H ( s ) 2 = h N 2 0 T 0 T K s t h N E U k , N ( s ) K r t h N E U k , N ( r ) ϕ H ( s , r ) d s d r , = h N 2 H 4 K s K r E U k , N ( h N s + t ) E U k , N ( h N r + t ) ϕ H ( s , r ) d s d r ,
Therefore, we infer
E U k , N ( s ) = E 1 v N χ u k ( s ) v N + E 1 u k ( s ) χ u k ( s ) > v N .
Concerning the first term, we get
E 1 v N χ u k ( s ) v N = 1 v N P u k ( s ) v N = 1 v N P v N M k ( s ) D k ( s ) u k ( s ) M k ( s ) D k ( s ) v N M k ( s ) D k ( s ) = 1 v N P v N M k ( s ) D k ( s ) Z k ( s ) v N M k ( s ) D k ( s ) = 2 D k ( s ) . D k ( s ) 2 v N Φ v N M k ( s ) D k ( s ) Φ v N M k ( s ) D k ( s ) ,
where $\Phi(\cdot)$ denotes the distribution function of the standard normal law and $Z_k(s) = \big(u_k(s) - M_k(s)\big)/\sqrt{D_k(s)}$. Then, since $v_N \to 0$ as $N \to \infty$, we have
$$\mathbb{E}\left[\frac{1}{v_N}\,\chi\big(|u_k(h_N s + t)| \le v_N\big)\right] \longrightarrow \frac{2}{\sqrt{D_k(t)}}\,\varphi\!\left(\frac{M_k(t)}{\sqrt{D_k(t)}}\right) \quad\text{as } N \to \infty,$$
where $\varphi(\cdot)$ denotes the density function of the standard normal law. Let us now examine the second term in (67); we get
$$\mathbb{E}\left[\frac{1}{u_k(h_N s + t)}\,\chi\big(|u_k(h_N s + t)| > v_N\big)\right] \longrightarrow \mathbb{E}\left[\frac{1}{u_k(t)}\right] \quad\text{as } N \to \infty.$$
Therefore, under Assumption (A5), we have
$$\mathbb{E}|R_3|^2 = \int_{-1}^{1}\!\int_{-1}^{1} K(s)\,K(r)\,\sum_{k=1}^{N}\mathbb{E}\big[U_{k,N}(h_N s + t)\big]\,\mathbb{E}\big[U_{k,N}(h_N r + t)\big]\,\phi_H(s,r)\, ds\, dr.$$
From (66) and (68)–(70), we readily infer that, as $N \to \infty$,
$$\operatorname{Var}(R_3) \longrightarrow \sigma_H^2(t) = V_{M,D}(t) \int_{-1}^{1}\!\int_{-1}^{1} K(s)\,K(r)\,\phi_H(s,r)\, ds\, dr,$$
where
$$V_{M,D}(s) = \lim_{N\to\infty}\sum_{k=1}^{N}\left[\frac{1}{\sqrt{D_k(s)}}\,\varphi\!\left(\frac{M_k(s)}{\sqrt{D_k(s)}}\right) + \mathbb{E}\!\left(\frac{1}{u_k(s)}\right)\right]^2,$$
and
$$\phi_H(s,r) = H(2H-1)\Big(|s-r|^{2H-2} - (s+r)^{2H-2}\Big).$$
Then, we obtain
$$R_3 \xrightarrow{\ \mathcal{D}\ } \mathcal{N}\big(0,\, \sigma_H^2(t)\big).$$
This completes the proof of Theorem 3. □

7. Illustrative Examples

To elucidate the theoretical results established in this work, we present two explicit examples that fall within the general framework of model (8); the first of them was introduced in [19]. These models allow for fully tractable computations and provide concrete insight into the performance of the proposed estimation procedure.

7.1. Example 1: Second-Order SPDE

Consider the stochastic parabolic equation:
$$\frac{\partial u(t,x)}{\partial t} = \theta(t)\,\Delta u(t,x) - u(t,x) + \dot{\xi}^H(t,x), \qquad u(0,x) = 0, \qquad u(t,0) = u(t,1) = 0,$$
where $u = u(t,x)$ is defined on $(0,T)\times(0,1)$, $\theta(\cdot)$ is an unknown smooth function, and $\dot{\xi}^H(t,x)$ denotes a spatially cylindrical sub-fractional Brownian noise with Hurst parameter $H \in (1/2, 1)$. The operator $\Delta$ represents the Laplacian with homogeneous Dirichlet boundary conditions.
This equation corresponds to the general setting with the following:
$$G = (0,1), \qquad A_0 = -I, \qquad A_1 = \Delta,$$
yielding $m_0 = 0$, $m_1 = 2m = 2$, and hence $q = 2$.

7.1.1. Spectral Decomposition

The eigenfunctions of Δ on ( 0 , 1 ) are as follows:
$$e_k(x) = \sqrt{2}\,\sin(\pi k x), \qquad k \ge 1,$$
forming an orthonormal basis of $L^2(0,1)$. The corresponding eigenvalues satisfy the following:
$$A_0 e_k = -e_k, \quad \kappa_k = 1; \qquad A_1 e_k = -\pi^2 k^2\, e_k, \quad \nu_k = \pi^2 k^2.$$
Expanding u ( t , x ) on this basis:
$$u(t,x) = \sum_{k \ge 1} u_k(t)\, e_k(x),$$
each coefficient u k ( t ) satisfies the SDE:
$$d u_k(t) = -\big(1 + \theta(t)\,\pi^2 k^2\big)\, u_k(t)\, dt + d\xi_k^H(t), \qquad u_k(0) = 0,$$
with $\{\xi_k^H(t)\}_{k \ge 1}$ denoting independent one-dimensional sub-fractional Brownian motions.
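For reference, write $a_k(s) := 1 + \theta(s)\,\pi^2 k^2$ for the mode-$k$ drift, a shorthand introduced only for the next display. The variation-of-constants formula then yields the explicit representation
$$u_k(t) = \int_0^t \exp\!\left(-\int_s^t a_k(r)\, dr\right) d\xi_k^H(s), \qquad t \in (0,T],$$
where the integral is a Wiener-type integral with respect to the sub-fractional Brownian motion $\xi_k^H$, which is well defined here because the integrand is deterministic and bounded (cf. [24]).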

7.1.2. Kernel-Based Estimation

The time-dependent coefficient θ ( t ) is estimated using the kernel-type estimator introduced in Section 3, based on the first N Fourier modes:
$$\widehat{\theta}_N(t) = -\frac{1}{h_N F_{\nu,N}} \sum_{k=1}^{N} \int_0^T K\!\left(\frac{s-t}{h_N}\right) U_{k,N}(s)\,\big(d u_k(s) + u_k(s)\, ds\big),$$
with parameters
$$F_{\nu,N} = \pi^2 \sum_{k=1}^{N} k^2, \qquad h_N = N^{-3/25}, \qquad v_N = N^{-71/25},$$
and a compactly supported kernel of order 5, e.g.,
$$K(t) = \frac{15}{128}\,\big(63\,t^4 - 70\,t^2 + 15\big), \qquad |t| \le 1.$$
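The stated order of this kernel can be verified symbolically; the following short snippet is only a sanity check and plays no role in the estimation procedure. It confirms that the kernel integrates to one, that its moments of orders one through five vanish, and that the sixth moment does not.

```python
import sympy as sp

t = sp.symbols('t')
K = sp.Rational(15, 128) * (63*t**4 - 70*t**2 + 15)   # quartic kernel on [-1, 1]

# zeroth moment should equal 1; moments of orders 1..5 should vanish
for j in range(7):
    print(f"moment of order {j}:", sp.integrate(t**j * K, (t, -1, 1)))
# expected output: 1, 0, 0, 0, 0, 0, 5/231
```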
The regularized inverse is defined by the following:
$$U_{k,N}(s) = \begin{cases} \dfrac{1}{u_k(s)}, & \text{if } |u_k(s)| > v_N, \\[2mm] \dfrac{1}{v_N}, & \text{otherwise}. \end{cases}$$
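To indicate how these ingredients fit together in practice, the following Python sketch implements the estimator above as a function of observed Fourier-mode paths; it is a minimal illustration and not part of the paper's procedure. The function names (theta_hat, order5_kernel), the grid sizes, the test coefficient theta, and the random seed are arbitrary choices. In the smoke test the driving noise is generated with H = 1/2, the boundary case in which the sub-fractional Brownian motion coincides with a standard Brownian motion; simulating genuine sub-fractional paths with H > 1/2 would require, for instance, a Cholesky factorization of the covariance of $\xi^H$ and is omitted here.

```python
import numpy as np

def order5_kernel(x):
    """Compactly supported kernel of order 5 used in Example 1."""
    return np.where(np.abs(x) <= 1.0, (15.0 / 128.0) * (63 * x**4 - 70 * x**2 + 15), 0.0)

def theta_hat(tgrid, modes, t0, h, v, kernel=order5_kernel):
    """Kernel-type estimate of theta(t0) from Fourier-mode paths.

    tgrid : increasing time grid of length n + 1
    modes : array of shape (N, n + 1); row k - 1 holds u_k on tgrid
    """
    N = modes.shape[0]
    dt = np.diff(tgrid)
    F_nu = np.pi**2 * np.sum(np.arange(1, N + 1) ** 2)   # F_{nu,N} = pi^2 * sum k^2
    w = kernel((tgrid[:-1] - t0) / h)                    # kernel weights on the grid
    total = 0.0
    for k in range(N):
        uk, duk = modes[k, :-1], np.diff(modes[k])
        Uk = np.full_like(uk, 1.0 / v)                   # regularized inverse U_{k,N}
        big = np.abs(uk) > v
        Uk[big] = 1.0 / uk[big]
        total += np.sum(w * Uk * (duk + uk * dt))        # du_k(s) + u_k(s) ds
    return -total / (h * F_nu)

# ---- smoke test on synthetic data (illustrative choices only) ----
rng = np.random.default_rng(1)
T, n, N = 2.0, 20_000, 16
tgrid = np.linspace(0.0, T, n + 1)
dt = T / n
theta = lambda s: 2.0 + 0.5 * np.sin(np.pi * s / T)      # coefficient to be recovered
th = theta(tgrid[:-1])

modes = np.zeros((N, n + 1))
for k in range(1, N + 1):
    dxi = np.sqrt(dt) * rng.standard_normal(n)           # H = 1/2 increments (see text)
    for i in range(n):
        drift = -(1.0 + th[i] * (np.pi * k) ** 2) * modes[k - 1, i]
        modes[k - 1, i + 1] = modes[k - 1, i] + drift * dt + dxi[i]

h_N, v_N = N ** (-3.0 / 25.0), N ** (-71.0 / 25.0)       # tunings prescribed in Example 1
print("theta(1.0) =", theta(1.0), " estimate:", theta_hat(tgrid, modes, 1.0, h_N, v_N))
```

With these modest sizes the printed estimate is noisy and only roughly tracks $\theta(1)$; as Remark 10 notes, a careful finite-sample study would require more extensive simulations.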

7.1.3. Asymptotic Behavior

All structural and regularity conditions ((H1)–(H2), (P1)–(P2)) are satisfied. Hence, Theorems 2 and 3 apply, and we obtain the following:
$$\sup_{t_0 \le t \le t_1} \mathbb{E}\big|\widehat{\theta}_N(t) - \theta(t)\big|^2 \le C(t_0, t_1)\, N^{-48/25},$$
where $C(t_0, t_1)$ is a constant depending on the interval $[t_0, t_1] \subset (0,T)$.

7.2. Example 2: Fourth-Order SPDE

We now consider a higher-order stochastic evolution equation:
$$\frac{\partial u(t,x)}{\partial t} = -\theta(t)\,\Delta^2 u(t,x) + \Delta u(t,x) + \dot{\xi}^H(t,x), \qquad u(0,x) = 0, \qquad u(t,0) = u(t,1) = 0,$$
where $\Delta$ and $\Delta^2$ denote the Laplacian and the biharmonic operator, respectively; see [58] for similar examples.
This model fits the general formulation with the following:
$$G = (0,1), \qquad A_0 = \Delta, \qquad A_1 = -\Delta^2,$$
implying $m_0 = 2$, $m_1 = 2m = 4$, and thus $q = 4$.

7.2.1. Spectral Decomposition

The eigenfunctions $\{e_k(x) = \sqrt{2}\,\sin(k\pi x)\}_{k \ge 1}$ satisfy the following:
$$A_0 e_k = -\kappa_k\, e_k, \quad \kappa_k = (k\pi)^2; \qquad A_1 e_k = -\nu_k\, e_k, \quad \nu_k = (k\pi)^4.$$
The solution admits the decomposition:
$$u(t,x) = \sum_{k \ge 1} u_k(t)\, e_k(x),$$
with each u k ( t ) satisfying the following:
$$d u_k(t) = -\big(\pi^2 k^2 + \theta(t)\,\pi^4 k^4\big)\, u_k(t)\, dt + d\xi_k^H(t), \qquad u_k(0) = 0,$$
where again $\{\xi_k^H(t)\}_{k \ge 1}$ are independent sub-fractional Brownian motions.

7.2.2. Kernel-Based Estimation

The estimator for θ ( t ) in this setting is given by the following:
$$\widehat{\theta}_N(t) = -\frac{1}{h_N F_{\nu,N}} \sum_{k=1}^{N} \int_0^T K\!\left(\frac{s-t}{h_N}\right) U_{k,N}(s)\,\big(d u_k(s) + \kappa_k\, u_k(s)\, ds\big),$$
with
$$F_{\nu,N} = \pi^4 \sum_{k=1}^{N} k^4, \qquad h_N = N^{-5/13}, \qquad v_N = N^{-43/13}, \qquad K(t) = \frac{3}{4}\,\big(1 - t^2\big)\,\mathbf{1}_{\{|t| \le 1\}} \quad (\text{second-order kernel}).$$
The regularization sequence $U_{k,N}(t)$ is defined as before.
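As in Example 1, the kernel order can be checked symbolically. The snippet below is only an illustration; it confirms that the Epanechnikov kernel above integrates to one, has a vanishing first moment, and a nonzero second moment, which is what the label "second-order kernel" refers to.

```python
import sympy as sp

t = sp.symbols('t')
K = sp.Rational(3, 4) * (1 - t**2)          # Epanechnikov kernel on [-1, 1]

for j in range(3):
    print(f"moment of order {j}:", sp.integrate(t**j * K, (t, -1, 1)))
# expected output: 1, 0, 1/5
```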

7.2.3. Asymptotic Behavior

Under the same assumptions as in the previous case, the convergence rate becomes the following:
$$\sup_{t_0 \le t \le t_1} \mathbb{E}\big|\widehat{\theta}_N(t) - \theta(t)\big|^2 \le C(t_0, t_1)\, N^{-30/13},$$
highlighting the estimator’s effectiveness in this higher-order context as well.
Remark 10.
The above examples highlight the practical applicability of the proposed estimation methodology. They illustrate not only the theoretical underpinnings but also the concrete choices needed for implementation. Further numerical simulations may be carried out to examine the finite-sample behavior of the estimator and validate the predicted convergence rates.

Author Contributions

Conceptualization, A.K. and S.B.; methodology, A.K. and S.B.; validation, A.K., S.B. and F.M.; formal analysis, A.K. and S.B.; investigation, A.K. and S.B.; resources, A.K. and S.B.; writing—original draft preparation, A.K. and S.B.; writing—review and editing, A.K., S.B. and F.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors express their sincere gratitude to the Editor-in-Chief, the Associate Editor, and the three anonymous reviewers for their invaluable feedback and for highlighting several oversights in the original submission. Their thoughtful and constructive comments have significantly enhanced the clarity, precision, and overall quality of the manuscript.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Appendix A

Definition A1
([59,60]). Let $\delta > 0$, $\gamma \in \mathbb{R}$, and $\beta > 0$ be arbitrary parameters. The Erdélyi–Kober [61,62] fractional integral of a function $f(t)$ is defined by
$$I_{\beta}^{\gamma,\delta} f(t) = \frac{t^{-\beta(\gamma+\delta)}}{\Gamma(\delta)} \int_0^t \big(t^{\beta} - \tau^{\beta}\big)^{\delta-1}\, \tau^{\beta\gamma}\, f(\tau)\, d\big(\tau^{\beta}\big),$$
where
$$d\big(\tau^{\beta}\big) = \beta\,\tau^{\beta-1}\, d\tau$$
denotes integration with respect to the variable transformation $\tau \mapsto \tau^{\beta}$. This operator was first introduced in the seminal work of [63].
Similarly, for $\delta \ge 0$, $\gamma \in \mathbb{R}$, and $\beta > 0$, the Erdélyi–Kober fractional derivative of $f(t)$ is defined as
$$D_{\beta}^{\gamma,\delta} f(t) = \prod_{j=1}^{n}\left(\gamma + j + \frac{1}{\beta}\, t\, \frac{d}{dt}\right)\big(I_{\beta}^{\gamma+\delta,\, n-\delta} f\big)(t),$$
where $n \in \mathbb{N}$ is the smallest integer satisfying $n - 1 \le \delta < n$. This operator was first introduced in [59].
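As an elementary illustration of these definitions, we recall the classical mapping property of the Erdélyi–Kober operators on power functions (see, e.g., [59]): for $\lambda + \gamma > -1$,
$$I_{\beta}^{\gamma,\delta}\big(t^{\beta\lambda}\big) = \frac{\Gamma(\lambda+\gamma+1)}{\Gamma(\lambda+\gamma+\delta+1)}\, t^{\beta\lambda}, \qquad D_{\beta}^{\gamma,\delta}\big(t^{\beta\lambda}\big) = \frac{\Gamma(\lambda+\gamma+\delta+1)}{\Gamma(\lambda+\gamma+1)}\, t^{\beta\lambda},$$
the first identity following from the substitution $u = (\tau/t)^{\beta}$ and the Beta integral.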
The Erdélyi–Kober fractional integral operator constitutes a powerful generalization of the classical Riemann–Liouville and Weyl fractional integrals. For parameters $\delta > 0$, $\gamma \in \mathbb{R}$, and $\beta > 0$, the left-sided Erdélyi–Kober fractional integral of a function $f$ is defined by
$$\big(I_{\beta}^{\gamma,\delta} f\big)(t) = \frac{t^{-\beta(\gamma+\delta)}}{\Gamma(\delta)} \int_0^t \big(t^{\beta} - \tau^{\beta}\big)^{\delta-1}\, \tau^{\beta\gamma}\, f(\tau)\, d\big(\tau^{\beta}\big), \qquad t > 0,$$
where $d(\tau^{\beta}) = \beta\,\tau^{\beta-1}\, d\tau$ denotes integration with respect to the function $\tau \mapsto \tau^{\beta}$, and $\Gamma(\cdot)$ denotes the Euler gamma function.
It is essential to observe that the operator (A1) is not defined for arbitrary functions. The presence of endpoint singularities in the kernel necessitates specific conditions on the regularity and integrability of the function f, particularly near τ = 0 and τ = t .

Admissibility Conditions on the Function f

  • Local integrability near $\tau = 0$. The integrand contains the factor $\tau^{\beta\gamma}$, and hence it is necessary that
    $f(\tau)\,\tau^{\beta\gamma} \in L^1_{\mathrm{loc}}(0,t)$,
    that is,
    $\int_0^{\eta} |f(\tau)|\,\tau^{\beta\gamma}\, d\tau < \infty$ for some $\eta > 0$,
    to ensure convergence near the origin, especially when $\gamma < 0$.
  • Control of the singularity as $\tau \to t$. The kernel behaves like $(t^{\beta} - \tau^{\beta})^{\delta-1}$, which becomes singular near $\tau = t$ for $\delta \in (0,1)$. One must therefore ensure
    $\int_{t-\varepsilon}^{t} \big(t^{\beta} - \tau^{\beta}\big)^{\delta-1}\, \tau^{\beta\gamma}\, |f(\tau)|\, d\big(\tau^{\beta}\big) < \infty$ for some $\varepsilon > 0$.
  • Sufficient global condition. A practical sufficient condition ensuring the well-posedness of the integral is
    $f \in L^1\big((0,t);\, \tau^{\beta\gamma}\, d\tau\big)$,
    which is typically satisfied in applications involving power-weighted function spaces.
While the Erdélyi–Kober fractional integral offers considerable generality and analytical richness, its application requires careful attention to the behavior of the integrand near the boundary points. The admissibility of a function f under this operator is intimately tied to the parameters δ , γ , and β , which dictate the singular structure of the kernel. Verifying these conditions is essential for the rigorous use of I β γ , δ in both theoretical investigations and applied contexts. For further theoretical developments and applications, the reader is referred to standard monographs such as [64].
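To make the definition concrete, the short Python snippet below (an illustration under assumed parameter values, not part of the theory above) evaluates $I_{\beta}^{\gamma,\delta} f$ numerically through the substitution $u = (\tau/t)^{\beta}$ and compares the result with the power-function identity recalled after Definition A1.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# assumed illustrative parameters; delta in (0,1) gives a weak, integrable endpoint singularity
beta, gam, delta, lam, t = 2.0, 0.3, 0.6, 1.0, 1.5

def ek_integral(f, t):
    """Erdelyi-Kober integral I_beta^{gam,delta} f(t), via the substitution u = (tau/t)**beta."""
    integrand = lambda u: (1.0 - u) ** (delta - 1.0) * u**gam * f(t * u ** (1.0 / beta))
    val, _ = quad(integrand, 0.0, 1.0)
    return val / gamma(delta)

f = lambda tau: tau ** (beta * lam)                     # a power-weighted test function
numeric = ek_integral(f, t)
closed = gamma(gam + lam + 1.0) / gamma(gam + delta + lam + 1.0) * t ** (beta * lam)
print(numeric, closed)   # the two values agree to quadrature accuracy
```

In line with the first admissibility condition above, the integral ceases to exist when $\gamma < -\lambda - 1$, since the factor $u^{\gamma+\lambda}$ is then no longer integrable at the origin.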

References

  1. Dawson, D.A. Qualitative behavior of geostochastic systems. Stoch. Process. Appl. 1980, 10, 1–31. [Google Scholar] [CrossRef]
  2. De, S.S. Stochastic model of population growth and spread. Bull. Math. Biol. 1987, 49, 1–11. [Google Scholar] [CrossRef] [PubMed]
  3. Da Prato, G.; Zabczyk, J. Stochastic Equations in Infinite Dimensions; Encyclopedia of Mathematics and Its Applications; Cambridge University Press: Cambridge, UK, 1992; Volume 44, p. xviii+454. [Google Scholar] [CrossRef]
  4. Mann, J.A.j.; Woyczynski, W.A. Growing fractal interfaces in the presence of self-similar hopping surface diffusion. Phys. A 2001, 291, 159–183. [Google Scholar] [CrossRef]
  5. Cialenco, I. Statistical inference for SPDEs: An overview. Stat. Inference Stoch. Process. 2018, 21, 309–329. [Google Scholar] [CrossRef]
  6. Rozovskiĭ, B.L. Stochastic Evolution Systems; Linear theory and applications to nonlinear filtering, Translated from the Russian by A. Yarkho; Mathematics and Its Applications (Soviet Series); Kluwer Academic Publishers Group: Dordrecht, The Netherlands, 1990; Volume 35, p. xviii+315. [Google Scholar] [CrossRef]
  7. Chow, P.L. Stochastic Partial Differential Equations, 2nd ed.; Advances in Applied Mathematics; CRC Press: Boca Raton, FL, USA, 2015; p. xvi+317. [Google Scholar] [CrossRef]
  8. Hairer, M. An Introduction to Stochastic PDEs. arXiv 2023, arXiv:0907.4178. [Google Scholar] [CrossRef]
  9. Lototsky, S.V.; Rozovsky, B.L. Stochastic Partial Differential Equations; Universitext; Springer: Cham, Switzerland, 2017; p. xiv+508. [Google Scholar] [CrossRef]
  10. Meng, P.; Xu, Z.; Wang, X.; Yin, W.; Liu, H. A novel method for solving the inverse spectral problem with incomplete data. J. Comput. Appl. Math. 2025, 463, 116525. [Google Scholar] [CrossRef]
  11. Yang, X.; Meng, P.; Jiang, Z.; Zhou, L. Deep siamese residual support vector machine with applications to disease prediction. Comput. Biol. Med. 2025, 196, 110693. [Google Scholar] [CrossRef]
  12. Huebner, M.; Rozovskiĭ, B.L. On asymptotic properties of maximum likelihood estimators for parabolic stochastic PDE’s. Probab. Theory Relat. Fields 1995, 103, 143–163. [Google Scholar] [CrossRef]
  13. Rao, B.L.S.P. Parametric Estimation for Processes Driven by Infinite Dimensional Mixed Fractional Brownian Motion. arXiv 2021, arXiv:2103.05264. [Google Scholar] [CrossRef]
  14. Piterbarg, L.; Rozovskii, B. On asymptotic problems of parameter estimation in stochastic PDE’s: Discrete time sampling. Math. Methods Statist. 1997, 6, 200–223. [Google Scholar]
  15. Hildebrandt, F.; Trabs, M. Parameter estimation for SPDEs based on discrete observations in time and space. Electron. J. Stat. 2021, 15, 2716–2776. [Google Scholar] [CrossRef]
  16. Lototsky, S.V.; Rosovskii, B.L. Spectral asymptotics of some functionals arising in statistical inference for SPDEs. Stoch. Process. Appl. 1999, 79, 69–94. [Google Scholar] [CrossRef]
  17. Ibragimov, I.; Khasminskii, R. Some Nonparametric Estimation Problems for Parabolic Spde; Technical Report 31; Wayne State University, Department of Mathematics: Detroit, MI, USA, 1997. [Google Scholar]
  18. Huebner, M.; Lototsky, S. Asymptotic analysis of a kernel estimator for parabolic SPDE’s with time-dependent coefficients. Ann. Appl. Probab. 2000, 10, 1246–1258. [Google Scholar] [CrossRef]
  19. Wang, S.; Jiang, Y. Asymptotic analysis of a kernel estimator for parabolic stochastic partial differential equations driven by fractional noises. Front. Math. China 2018, 13, 187–201. [Google Scholar] [CrossRef]
  20. Bojdecki, T.; Gorostiza, L.G.; Talarczyk, A. Sub-fractional Brownian motion and its relation to occupation times. Statist. Probab. Lett. 2004, 69, 405–419. [Google Scholar] [CrossRef]
  21. Nualart, D. The Malliavin Calculus and Related Topics, 2nd ed.; Probability and its Applications (New York); Springer: Berlin, Germany, 2006; p. xiv+382. [Google Scholar] [CrossRef]
  22. Samorodnitsky, G.; Taqqu, M.S. Stable Non-Gaussian Random Processes; Stochastic Models with Infinite Variance; Stochastic Modeling; Chapman & Hall: New York, NY, USA, 1994; p. xxii+632. [Google Scholar]
  23. Dzhaparidze, K.; van Zanten, H. A series expansion of fractional Brownian motion. Probab. Theory Relat. Fields 2004, 130, 39–55. [Google Scholar] [CrossRef]
  24. Tudor, C. On the Wiener integral with respect to a sub-fractional Brownian motion on an interval. J. Math. Anal. Appl. 2009, 351, 456–468. [Google Scholar] [CrossRef]
  25. Tudor, C. Prediction and linear filtering with sub-fractional Brownian motion. preprint 2007. [Google Scholar]
  26. Diedhiou, A.; Manga, C.; Mendy, I. Parametric estimation for SDEs with additive sub-fractional Brownian motion. J. Numer. Math. Stoch. 2011, 3, 37–45. [Google Scholar]
  27. Pipiras, V.; Taqqu, M.S. Integration questions related to fractional Brownian motion. Probab. Theory Relat. Fields 2000, 118, 251–291. [Google Scholar] [CrossRef]
  28. Jolis, M. On the Wiener integral with respect to the fractional Brownian motion on an interval. J. Math. Anal. Appl. 2007, 330, 1115–1127. [Google Scholar] [CrossRef]
  29. Lei, P.; Nualart, D. A decomposition of the bifractional Brownian motion and some applications. Statist. Probab. Lett. 2009, 79, 619–624. [Google Scholar] [CrossRef]
  30. Xiao, W.; Zhang, X.; Zuo, Y. Least squares estimation for the drift parameters in the sub-fractional Vasicek processes. J. Statist. Plann. Inference 2018, 197, 141–155. [Google Scholar] [CrossRef]
  31. Mendy, I. Parametric estimation for sub-fractional Ornstein-Uhlenbeck process. J. Statist. Plann. Inference 2013, 143, 663–674. [Google Scholar] [CrossRef]
  32. Piterbarg, L.; Rozovskii, B. Maximum likelihood estimators in the equations of physical oceanography. In Stochastic Modelling in Physical Oceanography; Birkhäuser: Boston, MA, USA, 1996; Volume 39, pp. 397–421. [Google Scholar] [CrossRef]
  33. Safarov, Y.; Vassiliev, D. The Asymptotic Distribution of Eigenvalues of Partial Differential Operators; Translations of Mathematical Monographs; American Mathematical Society: Providence, RI, USA, 1997; Volume 155, p. xiv+354. [Google Scholar] [CrossRef]
  34. Rozovskiĭ, B.L. Stochastic Evolution Systems: Linear Theory and Applications to Nonlinear Filtering; Mathematics and Its Applications (Soviet Series); Kluwer Academic Publishers Group: Dordrecht, The Netherlands, 1990; Volume 35. [Google Scholar]
  35. Tindel, S.; Tudor, C.; Viens, F. Stochastic evolution equations with fractional Brownian motion. Probab. Theory Relat. Fields 2003, 127, 186–204. [Google Scholar] [CrossRef]
  36. Li, Z.; Zhou, G.; Luo, J. Stochastic delay evolution equations driven by sub-fractional Brownian motion. Adv. Differ. Equ. 2015, 2015, 48. [Google Scholar] [CrossRef]
  37. León, J.A.; Tindel, S. Malliavin calculus for fractional delay equations. J. Theoret. Probab. 2012, 25, 854–889. [Google Scholar] [CrossRef]
  38. Devroye, L. A Course in Density Estimation; Progress in Probability and Statistics; Birkhäuser Boston, Inc.: Boston, MA, USA, 1987; Volume 14, p. xx+183. [Google Scholar]
  39. Müller, H.G. Smooth optimum kernel estimators of densities, regression curves and modes. Ann. Statist. 1984, 12, 766–774. [Google Scholar] [CrossRef]
  40. Liu, J.; Tang, D.; Cang, Y. Variations and estimators for self-similarity parameter of sub-fractional Brownian motion via Malliavin calculus. Comm. Statist. Theory Methods 2017, 46, 3276–3289. [Google Scholar] [CrossRef]
  41. Mandelbrot, B.B.; Van Ness, J.W. Fractional Brownian motions, fractional noises and applications. SIAM Rev. 1968, 10, 422–437. [Google Scholar] [CrossRef]
  42. Mishura, Y.S. Stochastic Calculus for Fractional Brownian Motion and Related Processes; Lecture Notes in Mathematics; Springer: Berlin, Germany, 2008; Volume 1929, p. xviii+393. [Google Scholar] [CrossRef]
  43. Gubinelli, M. Controlling rough paths. J. Funct. Anal. 2004, 216, 86–140. [Google Scholar] [CrossRef]
  44. Hairer, M. A theory of regularity structures. Invent. Math. 2014, 198, 269–504. [Google Scholar] [CrossRef]
  45. Nourdin, I.; Simon, T. On the absolute continuity of one-dimensional SDEs driven by a fractional Brownian motion. Statist. Probab. Lett. 2006, 76, 907–912. [Google Scholar] [CrossRef]
  46. Nourdin, I. A simple theory for the study of SDEs driven by a fractional Brownian motion, in dimension one. In Séminaire de Probabilités XLI; Springer: Berlin, Germany, 2008; Volume 1934, pp. 181–197. [Google Scholar] [CrossRef]
  47. Neuenkirch, A.; Nourdin, I. Exact rate of convergence of some approximation schemes associated to SDEs driven by a fractional Brownian motion. J. Theoret. Probab. 2007, 20, 871–899. [Google Scholar] [CrossRef]
  48. Devroye, L.; Lugosi, G. Combinatorial Methods in Density Estimation; Springer Series in Statistics; Springer: New York, NY, USA, 2001; p. xii+208. [Google Scholar]
  49. Watson, G.S.; Leadbetter, M.R. On the estimation of the probability density. I. Ann. Math. Statist. 1963, 34, 480–491. [Google Scholar] [CrossRef]
  50. Epanečnikov, V.A. Nonparametric estimation of a multidimensional probability density. Teor. Verojatnost. I Primenen. 1969, 14, 156–162. [Google Scholar] [CrossRef]
  51. Deheuvels, P. Estimation non paramétrique de la densité par histogrammes généralisés. II. In Annales de l’ISUP; Publications of the Institute of Statistics of the University of Paris: Paris, France, 1977; Volume 22, pp. 1–23. [Google Scholar]
  52. Hall, P. Asymptotic properties of integrated square error and cross-validation for kernel estimation of a regression function. Z. Wahrsch. Verw. Geb. 1984, 67, 175–196. [Google Scholar] [CrossRef]
  53. Bouzebda, S.; Taachouche, N. Multivariate spatial conditional U-quantiles: A Bahadur–Kiefer representation. Results Appl. Math. 2025, 26, 100593. [Google Scholar] [CrossRef]
  54. Bouzebda, S.; Taachouche, N. Nonparametric conditional U-statistics on Lie groups with measurement errors. J. Complex. 2025, 89, 101944. [Google Scholar] [CrossRef]
  55. Bouzebda, S.; Taachouche, N. Oracle inequalities and upper bounds for kernel conditional U-statistics estimators on manifolds and more general metric spaces associated with operators. Stochastics 2024, 96, 2135–2198. [Google Scholar] [CrossRef]
  56. Bouzebda, S.; Taachouche, N. On the variable bandwidth kernel estimation of conditional U-statistics at optimal rates in sup-norm. Phys. A 2023, 625, 129000. [Google Scholar] [CrossRef]
  57. Mishura, Y.; Zili, M. Stochastic Analysis of Mixed Fractional Gaussian Processes; ISTE Press: London, UK; Elsevier Ltd.: Oxford, UK, 2018; p. xvi+194. [Google Scholar]
  58. Cialenco, I.; Lototsky, S.V. Parameter estimation in diagonalizable bilinear stochastic parabolic equations. Stat. Inference Stoch. Process. 2009, 12, 203–219. [Google Scholar] [CrossRef]
  59. Kiryakova, V. Generalized Fractional Calculus and Applications; Longman Scientific & Technical: Harlow, UK; John Wiley & Sons: New York, NY, USA, 1994; Volume 301. [Google Scholar]
  60. Samko, S.G.; Kilbas, A.A.; Marichev, O.I. Fractional Integrals and Derivatives: Theory and Applications; Translation from the Russian; Gordon and Breach: New York, NY, USA, 1993. [Google Scholar]
  61. Kober, H. On fractional integrals and derivatives. Quart. J. Math. Oxf. Ser. 1940, 11, 193–211. [Google Scholar] [CrossRef]
  62. Erdélyi, A. On fractional integration and its application to the theory of Hankel transforms. Quart. J. Math. Oxf. Ser. 1940, 11, 293–303. [Google Scholar] [CrossRef]
  63. Sneddon, I.N. The use in mathematical physics of Erdelyi-Kober operators and of some of their generalizations. In Fractional Calculus and Its Applications: Proceedings of the International Conference Held at the University of New Haven, June 1974; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 1975; Volume 457, pp. 37–79. [Google Scholar]
  64. Kilbas, A.A.; Srivastava, H.M.; Trujillo, J.J. Theory and Applications of Fractional Differential Equations; Elsevier Science B.V.: Amsterdam, The Netherlands, 2006; Volume 204, North-Holland Mathematics Studies; p. xvi+523. [Google Scholar]
