
Multivariate Universal Local Linear Kernel Estimators in Nonparametric Regression: Uniform Consistency

1 Sobolev Institute of Mathematics, 630090 Novosibirsk, Russia
2 Department of Probability and Mathematical Statistics, Novosibirsk State University, 630090 Novosibirsk, Russia
3 Department of Probability Theory, Lomonosov Moscow State University, 119234 Moscow, Russia
4 Department of Epidemiology of Noncommunicable Diseases, National Medical Research Center for Therapy and Preventive Medicine, 101990 Moscow, Russia
* Authors to whom correspondence should be addressed.
Mathematics 2024, 12(12), 1890; https://doi.org/10.3390/math12121890
Submission received: 15 May 2024 / Revised: 7 June 2024 / Accepted: 13 June 2024 / Published: 18 June 2024
(This article belongs to the Section D1: Probability and Statistics)

Abstract: In this paper, for a wide class of nonparametric regression models, new local linear kernel estimators are proposed that are uniformly consistent under close-to-minimal and transparent conditions on the design points. These estimators are universal in the sense that the design can be either fixed, without necessarily satisfying the traditional regularity conditions, or random, without necessarily consisting of independent or weakly dependent random variables. Regarding the design elements, we assume only that they densely fill the domain of the regression function, without any specification of their dependence structure. This study extends the dense-data methodology and the main results of the authors' previous work to regression functions of several variables.

1. Introduction

We consider the following regression model:
$$X_i = f(\mathbf{z}_i) + \varepsilon_i, \qquad i = 1, \ldots, n, \tag{1}$$
where $f(\mathbf{t})$, $\mathbf{t} = (t_1, \ldots, t_k) \in P \subset \mathbb{R}^k_+$, $k \ge 1$, is an unknown real-valued random function (random field) that is continuous on a compact set $P$ with probability 1. The design $\{\mathbf{z}_i;\ i = 1, \ldots, n\}$ is a set of observable random $k$-dimensional vectors taking values in $P$, with possibly unknown distributions, which are not necessarily independent or identically distributed. We treat the design as an array of random vectors $\{\mathbf{z}_i;\ i = 1, \ldots, n\}$ that may depend on $n$. In particular, this scheme includes models with a fixed design. It is not assumed that the random function $f(\mathbf{t})$ is independent of the design. Conditions on the random errors $\{\varepsilon_i;\ i = 1, \ldots, n\}$ are given below.
This paper is devoted to the construction of uniformly consistent (in the sense of convergence in probability) kernel-type estimators for the regression function $f(\mathbf{t})$ under minimal assumptions on the dependence of the design points.
Let us review the publications related to kernel estimation in the problem under consideration. We do not aspire to present a comprehensive overview of this actively developing area of nonparametric estimation; we only indicate publications representing the main trends in this direction. In the classical case of a nonrandom regression function, the most popular kernel estimation procedures are apparently the Nadaraya–Watson, local polynomial (in particular, local linear), Priestley–Chao, and Gasser–Müller estimators, as well as their modifications (see, for example, the books [1,2,3,4,5]).
We are primarily interested in the conditions on the design elements, and in this regard, we note that, in regression problems, deterministic and random designs have traditionally been considered separately. Part of this division seems to be due to the different approaches used to study the estimators in each case. Moreover, there was initially some specialization by design type among kernel estimators: the Nadaraya–Watson estimators were studied, for example, only in the case of a random design, while the Priestley–Chao and Gasser–Müller estimators were studied for the one-dimensional nonrandom case. Subsequently, many generalizations were obtained in this direction, and the boundaries mentioned above became blurred (see, for example, [6,7,8,9]). The Nadaraya–Watson estimators in the case of a nonrandom regular multidimensional design were studied, for example, in [10].
In the case of a deterministic design, the vast majority of works assume one or another design regularity condition (see, for example, [10,11,12,13,14,15,16,17,18,19]). Papers dealing with random designs often consider independent identically distributed design elements [1,11,20,21,22,23,24,25,26,27,28,29,30]. Over the past decades, however, many forms of dependence of random variables have been proposed, and the corresponding limit theorems (as well as probabilistic inequalities) for sequences with such properties were proved. Development in this direction of probability theory has also fully affected the problems of nonparametric regression, so that design elements are often taken to be samples from a stationary sequence of random variables satisfying one or another known form of dependence. In particular, to construct the design elements, the authors used various mixing conditions, moving average schemes, associated random variables, Markov or martingale properties, etc. In this regard, we note, for example, the papers [1,18,21,31,32,33,34,35,36,37,38,39,40,41]. The recent papers [42,43,44,45,46,47,48] consider nonstationary sequences of design elements with certain special forms of dependence (Markov chains, autoregression, partial sums of moving averages, etc.). The uniform consistency of kernel-type estimators of the regression function, both in the case of deterministic and random designs, has been studied by many authors (see, for example, [11,16,18,23,24,25,26,36,38,39,42,43,44,49,50] and the references therein).
It is worth noting that the nature of the dependence of real sample data in statistics is difficult to determine when the observations are dependent by the very nature of the stochastic experiment. In this regard, the creation and development of new approaches to the statistical analysis of large samples of dependent observations that do not satisfy the classical mixing conditions or other known forms of dependence, as well as the study of new, statistically more transparent and justified forms of dependence, is of interest not only from a theoretical point of view but is also relevant and especially important for applications.
In the present paper, we continue to develop the concept of dense data proposed in [51,52,53]. In those papers, it is established that, to reconstruct the regression function, it is enough to know the noisy values of this function on some dense (in one sense or another) set of points from the regression function domain. We succeeded in constructing new kernel-type estimators by using special sums of weighted observations with the structure of Riemann integral sums, which are close to the corresponding integrals in the case of a dense design. In this case, the stochastic nature of the design points does not play any role. In the papers of our predecessors dealing with random designs, the dense filling of the regression function domain with design points was guaranteed by various concrete forms of weak dependence of the design points, and the asymptotic properties of the estimators were studied using the corresponding probabilistic limit theorems. It is important to emphasize that the new estimators are uniformly consistent not only in the cases of weak dependence mentioned above but also for significantly different dependence structures of the observations, when the conditions of ergodicity or stationarity fail, as do the classical mixing conditions and other known dependence conditions (see Example 2 in Section 2 below). In addition, the new estimators are universal regarding the nature of the design: it can be either fixed, and not necessarily regular, or random, and not necessarily satisfying the traditional dependence conditions.
Note that the proposed estimators belong to the class of local linear kernel estimators, but with weights other than those used in the classical version. These weights are given by the Lebesgue measures of the elements of a finite random partition of the design sample space, where each element of the partition corresponds to one design point. In this paper, explicit upper bounds on the rate of uniform convergence in probability of the new estimators to the regression function are obtained simultaneously for both fixed and random designs, while, in contrast to previously known results, the rate of convergence of our estimators is insensitive to the dependence structure of the design points. The only design characteristic explicitly entering the resulting upper bounds is the minimal radius of the $\varepsilon$-net formed by the design elements in the regression function domain. This minimal radius is a qualitatively different characteristic compared with previously known ones in terms of which sufficient conditions for the uniform consistency of kernel-type estimators can be described. Its advantage over classical weak dependence characteristics is that it is insensitive to the dependence of the design observations. The main requirement is that, as the number of observations increases indefinitely, this radius tends to zero in probability. Such a requirement, as we have already noted above, is essentially necessary, since only when the regression function domain is densely filled with design points can the function be reconstructed with one or another accuracy.
Previously, similar ideas were implemented in [51,52] for local constant estimators and in [53] for local linear estimators with a univariate design. The estimators from [53] are a particular case (for $k = 1$) of the estimators proposed here. Note that the construction of estimators from [53], in which the weights of the weighted least squares method are the spacing statistics built from the variational series of the design points, has no direct generalization to the case of functions of several variables. Similar conditions on design elements were also used in [54,55] for nonparametric regression and in [56,57,58] for nonlinear regression. In particular, in [54,55], similar conditions were proposed for the Nadaraya–Watson estimators, but those conditions guarantee only pointwise consistency. In [54,55,59], conditions for the uniform consistency of the Nadaraya–Watson estimators and the classical local linear estimators are obtained in terms of dense data, but the conditions are not as simple as in the present paper and require a more uniform dense filling of the regression function domain with design points than is required here.
Note also that model (1) allows the unknown function $f(\mathbf{t})$ to be a random process (random field) with almost surely continuous trajectories. This more general statement (in comparison with the classical one) is considered partly in order to apply the obtained results to estimating the mean and covariance functions of a continuous random field. In connection with random regression functions, we note, for example, the recent works [60,61,62,63,64,65,66,67], in which the mean and covariance functions of the random regression function $f$ are estimated from $N$ independent realizations $f_1, \ldots, f_N$ of the function $f$, where noisy values of each of these random curves are available at some set of design points (the design can be either common to all trajectories or vary from series to series). We consider one variant of this problem as an application of the main result. Previously, we considered some statements of this problem in the case of a univariate design (see [51,53,68,69,70]).
To conclude the Introduction, it is worth noting that all kernel methods suffer from the so-called "curse of dimensionality": as the dimension of the design points increases, the convergence rates of kernel methods decrease. Therefore, in higher dimensions, kernel methods require relatively large samples to achieve the required accuracy. In this regard, reducing the dimension of the design by selecting relevant factors plays an important role. As a guide in this direction, we point out the recent paper [71], where one can find a detailed bibliography.
This paper has the following structure. Section 2 contains the main results. In Section 3, as a corollary, the problem of estimating the mean function in the dense design case is considered. The proofs of the results are found in Section 4.
Currently, the authors are preparing a continuation of this work for publication that will provide both a computer simulation of the statistical procedures proposed in this work and examples of processing real data.

2. Main Results

2.1. Notation and Main Assumptions

We agree that all vectors are column vectors. Vectors are denoted by bold letters, and matrices by straight capital letters. Denote by $\mathrm{diag}\{d_1, \ldots, d_m\}$ the diagonal matrix of size $m \times m$ with the indicated elements on the main diagonal. The symbol $\top$ denotes transposition of a vector or matrix, and the determinant of a matrix $\mathrm{A}$ is denoted by $\det \mathrm{A}$. Denote by $\Lambda_k(\cdot)$ the Lebesgue measure in $\mathbb{R}^k$. For any vector $\mathbf{x} = (x_1, \ldots, x_k)^\top$, the symbol $\|\mathbf{x}\|$ means the sup-norm in $\mathbb{R}^k$, i.e., $\|\mathbf{x}\| = \max_{j=1,\ldots,k} |x_j|$. For an arbitrary matrix $\mathrm{X}$, we consider the matrix norm subordinate to the vector sup-norm, i.e., $\|\mathrm{X}\| = \max_{l \le k} \sum_{m \le k} |\mathrm{X}_{lm}|$, where the symbol $\mathrm{X}_{lm}$ here and below denotes the matrix entry at the intersection of the $l$-th row and $m$-th column. Notice that we may consider any column vector $\mathbf{x}$ as a matrix of dimension $k \times 1$, and its sup-norm then coincides with the matrix norm above. Therefore, we use the single symbol $\|\cdot\|$ for both norms.
By $d(A)$, we denote the diameter of a set $A$, i.e., $d(A) = \sup_{\mathbf{x},\mathbf{y} \in A} \|\mathbf{x}-\mathbf{y}\|$. We also need notation for the modulus of continuity of a function $f$ defined on the unit $k$-dimensional cube $[0,1]^k$:
$$\omega_f(h) = \sup_{\mathbf{x},\mathbf{y}:\ \|\mathbf{x}-\mathbf{y}\| \le h} |f(\mathbf{x}) - f(\mathbf{y})|.$$
Next, by $O_p(\eta_n)$, we denote a random variable $\zeta_n$ such that, for all $M > 0$,
$$\limsup_{n\to\infty} \mathrm{P}\big(|\zeta_n|/\eta_n > M\big) \le \beta(M),$$
where $\lim_{M\to\infty} \beta(M) = 0$, $\{\eta_n\}$ are positive (random or not) variables, and the function $\beta(M)$ does not depend on the parameters of the model under consideration. We agree that, throughout what follows, all limits, unless otherwise stated, are taken as $n \to \infty$. In what follows, without loss of generality, we assume that $P = [0,1]^k$.
We now formulate the following four main assumptions.
$(\mathbf{A1})$ The observations $\{(\mathbf{z}_i, X_i),\ i = 1, \ldots, n\}$ have the structure (1), where $f(\mathbf{t})$, $\mathbf{t} \in [0,1]^k$, is an unknown real-valued random function on $[0,1]^k$ that is continuous with probability 1. The design points $\{\mathbf{z}_i;\ i = 1, \ldots, n\}$ are observable $k$-dimensional random vectors with, generally speaking, unknown distributions and values in $[0,1]^k$, not necessarily independent or identically distributed; moreover, the random vectors $\{\mathbf{z}_i\}$ may depend on $n$.
$(\mathbf{A2})$ For all $n \ge 1$, the unobservable random errors $\{\varepsilon_i;\ i = 1, \ldots, n\}$ form a sequence of martingale differences satisfying
$$M_p = \sup_{i \le n} \mathrm{E}|\varepsilon_i|^p < \infty \quad \text{for some } p > k,\ p \ge 2,$$
where $M_p$ does not depend on $n$. It is also assumed that $\{\varepsilon_i\}$ is independent of $\{\mathbf{z}_i\}$ and of the random function $f(\cdot)$; moreover, the random variables $\{\varepsilon_i\}$ may depend on $n$.
$(\mathbf{A3})$ The kernel function $K(\mathbf{t})$, $\mathbf{t} \in \mathbb{R}^k$, vanishes outside the cube $[-1,1]^k$ and can be represented as $K(\mathbf{t}) = \prod_{j=1}^k K_o(t_j)$ for $\mathbf{t} = (t_1, \ldots, t_k)^\top$, where $K_o(\cdot)$ is a symmetric probability density with support $[-1,1]$. We assume that the function $K_o(t)$ satisfies the Lipschitz condition with a constant $1 \le L < \infty$ and that $K_o(\pm 1) = 0$.
In what follows, we need the notation $K_h(\mathbf{t}) = h^{-k} K(h^{-1}\mathbf{t})$. It is clear that $K_h(\mathbf{t})$ is a probability density on $[-h,h]^k$.
The following condition is the only limitation on the design.
$(\mathbf{A4})$ For each $n$, there exists a random partition of the set $[0,1]^k$ into $n$ Jordan-measurable subsets $\{P_i;\ i = 1, \ldots, n\}$ such that every element of this partition contains exactly one point of the set $\{\mathbf{z}_i;\ i = 1, \ldots, n\}$ (the elements of the partition are numbered so that $\mathbf{z}_i \in P_i$), and $\delta_n = \max_{i \le n} d(P_i) \stackrel{p}{\to} 0$. Here, it is assumed that the diameters are random variables, i.e., measurable mappings of the underlying probability space.
Remark 1.
Traditionally, a family of nonempty sets $P_1, \ldots, P_n$ forms a partition of the set $P$ if the elements of the family $\{P_i\}$ are pairwise disjoint and $\cup_{i=1}^n P_i = P$. Let us agree that, in $(\mathbf{A4})$, the elements of the family $\{P_i\}$ are allowed to intersect in sets of zero Lebesgue measure (for example, along their boundaries). This reservation allows us not to exclude the situation of repeated points in the design. In the case of pairwise distinct design points, the reservation is not required. Note also that, without the above convention, condition $(\mathbf{A4})$ can be formulated as follows: for each $n$, there exists a random partition of the set $[0,1]^k$ into $n$ Jordan-measurable subsets $\{P_i;\ i = 1, \ldots, n\}$ such that $\delta_n := \max_{i \le n} d(P_i \cup \{\mathbf{z}_i\}) \stackrel{p}{\to} 0$.
Remark 2.
Condition $(\mathbf{A4})$ means that, for every $n$, the set of design points $\{\mathbf{z}_i;\ i = 1, \ldots, n\}$ forms an $\varepsilon$-net of $[0,1]^k$ with $\varepsilon = \delta_n$, provided that $\delta_n \stackrel{p}{\to} 0$.
Remark 3.
Note that condition $(\mathbf{A4})$ is satisfied for any nonrandom regular design. If $\{\mathbf{z}_i\}$ are independent and identically distributed and the set $[0,1]^k$ is the support of the distribution of $\mathbf{z}_1$, then condition $(\mathbf{A4})$ is also fulfilled. If, additionally, the distribution density of $\mathbf{z}_1$ is separated from zero on $[0,1]^k$, then $\delta_n = O\big((\log n / n)^{1/k}\big)$ with probability 1. If $\{\mathbf{z}_i;\ i \ge 1\}$ is a stationary sequence satisfying an $\alpha$-mixing condition and the support of the marginal distribution is $[0,1]^k$, then condition $(\mathbf{A4})$ is also satisfied. Note that all kinds of dependence of the design points $\{\mathbf{z}_i\}$ known to the authors satisfy condition $(\mathbf{A4})$, but this condition may well hold for other types of dependence not yet described in the modern literature on nonparametric regression (see Example 2 below).
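The following small simulation (our illustration, not part of the paper; Python with NumPy and SciPy is assumed) estimates the sup-norm covering radius of an i.i.d. uniform design on $[0,1]^2$ over a fine grid — a proxy for $\delta_n$ — and compares it with the $(\log n / n)^{1/k}$ rate just mentioned.

```python
import numpy as np
from scipy.spatial import cKDTree

# Covering radius of an i.i.d. uniform design on [0,1]^2: the smallest eps for
# which the design forms an eps-net (a proxy for delta_n in condition (A4)),
# approximated over a fine grid in the sup-norm.
rng = np.random.default_rng(0)
k = 2
grid_1d = np.linspace(0.0, 1.0, 201)
grid = np.stack(np.meshgrid(grid_1d, grid_1d), axis=-1).reshape(-1, k)

for n in (100, 1_000, 10_000):
    z = rng.random((n, k))                      # i.i.d. uniform design points
    dist, _ = cKDTree(z).query(grid, p=np.inf)  # sup-norm nearest-design distances
    print(f"n={n:6d}  covering radius ~ {dist.max():.4f}  "
          f"(log n / n)^(1/k) = {(np.log(n) / n) ** (1 / k):.4f}")
```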
Let
$$\mathbf{x} = (X_1, \ldots, X_n)^\top, \qquad \mathrm{W}_t = \mathrm{diag}\big\{K_h(\mathbf{t}-\mathbf{z}_1)\Lambda_k(P_1), \ldots, K_h(\mathbf{t}-\mathbf{z}_n)\Lambda_k(P_n)\big\}, \tag{2}$$
$$\mathrm{Z}_t = \begin{pmatrix} 1 & (\mathbf{t}-\mathbf{z}_1)^\top \\ \vdots & \vdots \\ 1 & (\mathbf{t}-\mathbf{z}_n)^\top \end{pmatrix}. \tag{3}$$
We introduce the following class of estimators for the regression function $f$:
$$\hat f_{n,h}(\mathbf{t}) := \mathbf{e}_1^\top \big(\mathrm{Z}_t^\top \mathrm{W}_t \mathrm{Z}_t\big)^{-1} \mathrm{Z}_t^\top \mathrm{W}_t \mathbf{x}, \tag{4}$$
where $\mathbf{e}_1 = (1, 0, \ldots, 0)^\top$ is the $(k+1)$-dimensional vector whose first coordinate equals 1 and whose other coordinates vanish.
Remark 4.
It is easy to check that the kernel estimator (4) is the first coordinate of the $(k+1)$-dimensional solution of the following variant of the weighted least squares problem:
$$\min_{a,\mathbf{b}} \sum_{i=1}^n \big(X_i - a - \mathbf{b}^\top(\mathbf{t}-\mathbf{z}_i)\big)^2 K_h(\mathbf{t}-\mathbf{z}_i)\Lambda_k(P_i). \tag{5}$$
Thus, the proposed class of estimators is, in a certain sense (in fact, by construction), close to the classical multidimensional local linear kernel estimators, but in the weighted least squares problem (5), we use slightly different weights.
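As a computational illustration of the estimator (4) and problem (5), the following sketch (ours, not from the paper; Python with NumPy/SciPy assumed) uses a product Epanechnikov kernel, which satisfies condition $(\mathbf{A3})$, and takes the Voronoi cells of the design points as one admissible partition in $(\mathbf{A4})$, approximating the cell measures $\Lambda_k(P_i)$ by Monte Carlo.

```python
import numpy as np
from scipy.spatial import cKDTree

def K_o(u):
    # Epanechnikov density: symmetric, supported on [-1,1], Lipschitz,
    # K_o(+-1) = 0, so condition (A3) holds.
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

def voronoi_measures(z, n_mc=100_000, rng=None):
    # Monte Carlo approximation of Lambda_k(P_i) for the Voronoi partition of
    # [0,1]^k generated by the design points (one admissible choice in (A4)).
    rng = np.random.default_rng(rng)
    n, k = z.shape
    u = rng.random((n_mc, k))
    _, idx = cKDTree(z).query(u)
    return np.bincount(idx, minlength=n) / n_mc

def f_hat(t, z, x, h, measures):
    # Estimator (4): first coordinate of the solution of the weighted least
    # squares problem (5) with weights K_h(t - z_i) * Lambda_k(P_i).
    n, k = z.shape
    d = t[None, :] - z                                   # rows (t - z_i)^T
    w = np.prod(K_o(d / h), axis=1) / h**k * measures    # K_h(t - z_i) Lambda_k(P_i)
    Z = np.hstack([np.ones((n, 1)), d])                  # matrix Z_t from (3)
    A = Z.T @ (w[:, None] * Z)                           # Z_t^T W_t Z_t
    return np.linalg.solve(A, Z.T @ (w * x))[0]

# Toy usage: k = 2, f(t) = sin(2 pi t1) cos(pi t2), i.i.d. uniform design.
rng = np.random.default_rng(1)
n, h = 2_000, 0.15
z = rng.random((n, 2))
x = np.sin(2*np.pi*z[:, 0]) * np.cos(np.pi*z[:, 1]) + 0.1*rng.standard_normal(n)
m = voronoi_measures(z, rng=1)
print(f_hat(np.array([0.3, 0.7]), z, x, h, m))
```

Any other Jordan-measurable partition satisfying $(\mathbf{A4})$ could be used in place of the Voronoi cells; only the measures $\Lambda_k(P_i)$ enter the estimator.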

2.2. The Main Theorem, Corollaries, and Examples

The following theorem is the main result of this paper. It allows us to construct a confidence region (tube) for an unknown regression function.
Theorem 1.
Let conditions $(\mathbf{A1})$–$(\mathbf{A4})$ be satisfied. Then, for any fixed $h \in (0, 1/2]$, with probability 1, the following relation is valid:
$$\sup_{\mathbf{t} \in [0,1]^k} \big|\hat f_{n,h}(\mathbf{t}) - f(\mathbf{t})\big| \le C_1^* \omega_f(h) + \zeta_n(h), \tag{6}$$
with a nonnegative random variable $\zeta_n(h)$ such that
$$\mathrm{P}\big(\zeta_n(h) > y\big) \le C_2^* y^{-p} h^{-k(p/2+1)} \mathrm{E}\big(\delta_n^{kp/2}\big) + \mathrm{P}(\delta_n > c^* h), \tag{7}$$
where the constants $c^* < 1$, $C_1^*$, and $C_2^*$, defined in (36) and in Lemmas 6 and 8, depend on $k$ and the kernel $K$; furthermore, the constant $C_2^*$ additionally depends on $p$ and $M_p$.
Remark 5.
Let $f$ be a nonrandom regression function. Substitute into (7) the value
$$y = \big(h^{-k(p/2+1)} \mathrm{E}(\delta_n^{kp/2})\big)^{1/p}.$$
Applying Markov's inequality with exponent $kp/2$ to the second term in (7), it is easy to see that, under the conditions of the above theorem,
$$\zeta_n(h) = O_p\Big(C^* \big(h^{-k(p/2+1)} \mathrm{E}(\delta_n^{kp/2})\big)^{1/p}\Big)$$
(here, $C^*$ is a positive constant depending on $k$, $p$, $M_p$, and $K$), and there exists a solution $h \equiv h_n$ of the equation
$$\mathrm{E}\big(\delta_n^{kp/2}\big) = h^{k(p/2+1)} \omega_f^p(h). \tag{8}$$
It is clear that this solution tends to zero as $n$ grows. In fact, the quantity $h_n$ minimizes in $h$ the order of smallness of the right-hand side of relation (6). Notice that, in virtue of (8), the limit relations $h_n^{-k(p/2+1)} \mathrm{E}\big(\delta_n^{kp/2}\big) \to 0$ and $\delta_n/h_n \stackrel{p}{\to} 0$ are valid.
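For a numerical illustration of Equation (8) (ours; the specification $\mathrm{E}(\delta_n^{kp/2}) = C n^{-p/2}$ and the Hölder modulus $\omega_f(h) = h^\alpha$ below are assumptions in the spirit of Example 1, not results of the paper), the bandwidth can be found by bisection, since the right-hand side of (8) is increasing in $h$.

```python
import numpy as np

def solve_bandwidth(n, k=2, p=4.0, alpha=1.0, C=1.0):
    # Bisection for Equation (8): E(delta_n^{kp/2}) = h^{k(p/2+1)} omega_f(h)^p,
    # under the assumed specifications E(delta_n^{kp/2}) = C n^{-p/2} and
    # omega_f(h) = h^alpha. The left side is constant in h, the right side is
    # increasing in h, so the root is unique.
    lhs = C * n ** (-p / 2)
    rhs = lambda h: h ** (k * (p / 2 + 1)) * h ** (alpha * p)
    lo, hi = 1e-12, 0.5
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if rhs(mid) < lhs else (lo, mid)
    return 0.5 * (lo + hi)

for n in (10**3, 10**4, 10**5):
    print(n, solve_bandwidth(n))
```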
Theorem 1 implies the following two assertions.
Corollary 1.
Let conditions $(\mathbf{A1})$–$(\mathbf{A4})$ be fulfilled and let $\mathcal{C}$ be a set of equicontinuous nonrandom functions from the space $C[0,1]^k$. Then,
$$\gamma_n(\mathcal{C}) = \sup_{f \in \mathcal{C}}\ \sup_{\mathbf{t} \in [0,1]^k} \big|\hat f_{n,\tilde h_n}(\mathbf{t}) - f(\mathbf{t})\big| \stackrel{p}{\to} 0,$$
where $\tilde h_n$ is a solution of Equation (8) in which the modulus of continuity $\omega_f(h)$ is replaced with the universal modulus $\omega_f^{\mathcal{C}}(h) = \sup_{f \in \mathcal{C}} \omega_f(h)$. Moreover, the following relation is valid:
$$\gamma_n(\mathcal{C}) = O_p\big(\omega_f^{\mathcal{C}}(\tilde h_n)\big).$$
Corollary 2.
If conditions $(\mathbf{A1})$–$(\mathbf{A4})$ are fulfilled and the modulus of continuity of the random regression function $f(\mathbf{t})$, $\mathbf{t} \in [0,1]^k$, satisfies with probability 1 the condition $\omega_f(h) \le \zeta \varphi(h)$ for some proper random variable $\zeta > 0$ and a positive nonrandom function $\varphi(h)$ such that $\varphi(h) \to 0$ as $h \to 0$, then
$$\sup_{\mathbf{t} \in [0,1]^k} \big|\hat f_{n,\hat h_n}(\mathbf{t}) - f(\mathbf{t})\big| \stackrel{p}{\to} 0,$$
where $\hat h_n$ is a solution of Equation (8) in which the modulus of continuity $\omega_f(h)$ is replaced with $\varphi(h)$.
Example 1.
Let $\delta_n \le \nu n^{-1/k}$ with $\mathrm{E}\,\nu^{kp/2} < \infty$, and let $\omega_f(h) \le \zeta h^\alpha$ for some $\alpha \in (0,1]$ and a proper random variable $\zeta$. Then,
$$h_n = O\Big(n^{-\frac{1}{2k(1/p+1/2)+\alpha}}\Big) \quad \text{and} \quad \sup_{\mathbf{t} \in [0,1]^k} \big|\hat f_{n,h}(\mathbf{t}) - f(\mathbf{t})\big| = O_p\Big(n^{-\frac{\alpha}{2k(1/p+1/2)+2\alpha}}\Big).$$
In particular, if $f(\cdot) = W(\cdot)$ is a Wiener process on $[0,1]$ and the independent identically distributed random variables $\varepsilon_i$ have a normal distribution with zero mean, then, for all sufficiently small $\nu > 0$,
$$\sup_{t \in [0,1]} \big|\hat f_{n,h}(t) - f(t)\big| = O_p\big(n^{-1/3+\nu}\big).$$
Here, $k = 1$, $\alpha = 1/2 - \nu_1$, and $1/p < \nu_2$ for arbitrarily small positive $\nu_1$ and $\nu_2$.
Example 2.
Let a sequence of bivariate random vectors $\{\mathbf{z}_i;\ i \ge 1\}$ be defined by the relation
$$\mathbf{z}_i = \nu_i \mathbf{u}_i^l + (1-\nu_i)\mathbf{u}_i^r, \tag{9}$$
where $\{\mathbf{u}_i^l\}$ and $\{\mathbf{u}_i^r\}$ are independent and uniformly distributed on the respective rectangles $[0,1/2]\times[0,1]$ and $[1/2,1]\times[0,1]$, and the sequence $\{\nu_i\}$ is independent of $\{\mathbf{u}_i^l\}$ and $\{\mathbf{u}_i^r\}$ and consists of Bernoulli random variables with success probability $1/2$; i.e., the distribution of each $\mathbf{z}_i$ is an equal-weight mixture of the two uniform distributions on the corresponding rectangles. The dependence between the random variables $\nu_i$ is defined, for every natural $i$, by the equalities $\nu_{2i-1} = \nu_1$ and $\nu_{2i} = 1 - \nu_1$. In this case, the random vectors $\{\mathbf{z}_i;\ i \ge 1\}$ in (9) form a stationary sequence uniformly distributed on the square $[0,1]\times[0,1]$, but, say, all known mixing conditions fail here since, for all natural $m$ and $n$,
$$\mathrm{P}\big(\mathbf{z}_{2m} \in [0,1/2]\times[0,1],\ \mathbf{z}_{2n-1} \in [0,1/2]\times[0,1]\big) = 0.$$
On the other hand, it is easy to check that the stationary sequence $\{\mathbf{z}_i\}$ satisfies the Glivenko–Cantelli theorem. This means that, for any fixed $h > 0$, almost surely,
$$c_1 h^k \le \lim_{n\to\infty} \frac{\#\{i:\ \|\mathbf{t}-\mathbf{z}_i\| \le h,\ 1 \le i \le n\}}{n} \le c_2 h^k$$
uniformly in $\mathbf{t}$; here, $\#$ is the counting measure, and the constants $c_1$ and $c_2$ do not depend on $\mathbf{t}$ and $h$. In other words, the sequence $\{\mathbf{z}_i\}$ satisfies condition $(\mathbf{A4})$.
In the general case, according to the scheme of this example, one can construct various sequences of dependent random vectors uniformly distributed on $[0,1]\times[0,1]$ by choosing various sequences of Bernoulli switches with $\nu_{j_k} = 1$ and $\nu_{l_k} = 0$ for unboundedly many indices $\{j_k\}$ and $\{l_k\}$, respectively. In this case, condition $(\mathbf{A4})$ is also satisfied, but the corresponding sequence $\{\mathbf{z}_i\}$ (not necessarily stationary) may fail to satisfy the strong law of large numbers. For example, this situation occurs when $\nu_j = 1-\nu_1$ for $j = 2^{2k-1}, \ldots, 2^{2k}-1$ and $\nu_j = \nu_1$ for $j = 2^{2k}, \ldots, 2^{2k+1}-1$, where $k = 1, 2, \ldots$; i.e., we choose at random the rectangle into which the first point is thrown, and then alternate the number of throws between the two specified rectangles as follows: 2, 4, 8, 16, and so on. Indeed, introduce the notation $n_k = 2^{2k}-1$, $\tilde n_k = 2^{2k+1}-1$, and $S_m = \sum_{i=1}^m z_{1i}$ for $\mathbf{z}_i = (z_{1i}, z_{2i})$, and note that, for all outcomes constituting the event $\{\nu_1 = 1\}$,
$$\frac{S_{n_k}}{n_k} = \frac{1}{n_k}\sum_{i \in N_{1,k}} u^l_{1i} + \frac{1}{n_k}\sum_{i \in N_{2,k}} u^r_{1i},$$
where $\mathbf{u}_i^l = (u^l_{1i}, u^l_{2i})$ and $\mathbf{u}_i^r = (u^r_{1i}, u^r_{2i})$, and $N_{1,k}$ and $N_{2,k}$ are the collections of indices for which the observations $\{\mathbf{z}_i,\ i \le n_k\}$ lie in the rectangles $[0,1/2]\times[0,1]$ and $[1/2,1]\times[0,1]$, respectively. It is easy to see that $\#(N_{1,k}) = n_k/3$ and $\#(N_{2,k}) = 2\#(N_{1,k})$. Hence, $S_{n_k}/n_k \to 7/12$ almost surely as $k \to \infty$, due to the strong law of large numbers for the sequences $\{u^l_{1i}\}$ and $\{u^r_{1i}\}$. On the other hand, for all elementary outcomes belonging to the event $\{\nu_1 = 1\}$, as $k \to \infty$,
$$\frac{S_{\tilde n_k}}{\tilde n_k} = \frac{1}{\tilde n_k}\sum_{i \in \tilde N_{1,k}} u^l_{1i} + \frac{1}{\tilde n_k}\sum_{i \in \tilde N_{2,k}} u^r_{1i} \to \frac{5}{12} \tag{10}$$
with probability 1, where $\tilde N_{1,k}$ and $\tilde N_{2,k}$ are the collections of indices for which the observations $\{\mathbf{z}_i,\ i \le \tilde n_k\}$ lie in the respective rectangles $[0,1/2]\times[0,1]$ and $[1/2,1]\times[0,1]$. When justifying the convergence in (10), we took into account the fact that $\#(\tilde N_{1,k}) = (2^{2k+2}-1)/3$ and $\#(\tilde N_{2,k}) = 2n_k/3$, i.e., $\#(\tilde N_{1,k}) = 2\#(\tilde N_{2,k})+1$.
Similar conclusions are valid for all outcomes constituting the event $\{\nu_1 = 0\}$.
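The following simulation sketch (ours; Python/NumPy assumed) implements the Bernoulli-switch construction just described and shows the averages $S_m/m$ oscillating between the limits $7/12$ and $5/12$ along the subsequences $n_k$ and $\tilde n_k$, even though the design keeps filling the unit square densely.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 2**20
nu1 = 1  # condition on the event {nu_1 = 1}

# Bernoulli switches in blocks of lengths 1, 2, 4, 8, ...: nu_1 on even-numbered
# blocks, 1 - nu_1 on odd-numbered blocks (the construction of Example 2).
nu = np.concatenate([np.full(2**b, nu1 if b % 2 == 0 else 1 - nu1)
                     for b in range(21)])[:N]

u_left = rng.uniform([0.0, 0.0], [0.5, 1.0], size=(N, 2))   # uniform on [0,1/2]x[0,1]
u_right = rng.uniform([0.5, 0.0], [1.0, 1.0], size=(N, 2))  # uniform on [1/2,1]x[0,1]
z = np.where(nu[:, None] == 1, u_left, u_right)             # relation (9)

avg = np.cumsum(z[:, 0]) / np.arange(1, N + 1)              # S_m / m
for k in range(3, 10):
    n_k, nt_k = 2**(2*k) - 1, 2**(2*k + 1) - 1
    print(f"k={k}: S/m at n_k = {avg[n_k - 1]:.4f} (-> 7/12 = {7/12:.4f}), "
          f"at n~_k = {avg[nt_k - 1]:.4f} (-> 5/12 = {5/12:.4f})")
```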

3. Estimating the Mean and Covariance of a Random Regression Function with a Dense Design

In this section, as an application of Theorem 1, we consider one variant of the problem of estimating the mean function of an almost surely continuous random process. The estimation of the mean and covariance functions plays an important role, and many recent works have been devoted to this problem (see, for example, [60,67,72,73] and the references therein). As in the classical formulation of the nonparametric regression problem of the form (1), the design in the problem of estimating the mean and covariance functions is considered to be either random or deterministic. For a random design, as a rule, it is assumed that the design points are independent identically distributed random variables (see, for example, [60,61,62,63,64,65,66,74,75]). In the case of a deterministic design, one or another regularity condition is usually used; for example, a nonrandom equidistant design was discussed in [61]. In addition, in the problem of estimating the mean function, it is customary to subdivide designs into certain types depending on the number of design points per trajectory.
The literature focuses on two opposing types of data: the design is, in one sense or another, "sparse" (for example, the number of design points for each realization of the regression process is uniformly bounded; see [60,61,62,74]), or the design is "dense" (the number of design elements in each series grows together with the number of series; see [60,62,65,74]). In Theorem 2 below, the second of these types of design is considered, provided only that condition $(\mathbf{A4})$ is satisfied in each of the independent series. Note that our formulation of the problem of estimating the mean function also includes the situation of a general deterministic design. The approaches to mean function estimation used for dense and sparse data are often different (see, for example, [73,76]). In the case of a growing number of observations in each series, it is natural to first estimate the random regression function in each series and then average these estimators over all series (see, for example, [61,65]). This is exactly what we do next (see Formula (12) below), following this generally accepted approach. The uniform consistency of mean function estimators was studied, for example, in [60,62,64,66,72].
Therefore, consider the following statement of the problem. We have $N$ independent copies of model (1) from condition $(\mathbf{A1})$:
$$X_{i,j} = f_j(\mathbf{z}_{i,j}) + \varepsilon_{i,j}, \qquad i = 1, \ldots, n,\ j = 1, \ldots, N, \tag{11}$$
where $f_1(\mathbf{t}), \ldots, f_N(\mathbf{t})$, $\mathbf{t} \in [0,1]^k$, are independent identically distributed unobservable random processes with a.s. continuous trajectories, and the collections $\{\varepsilon_{i,j};\ i = 1, \ldots, n\}$ and $\{\mathbf{z}_{i,j};\ i = 1, \ldots, n\}$ satisfy, for every $j$, conditions $(\mathbf{A1})$, $(\mathbf{A2})$, and $(\mathbf{A4})$. Here and below in this section, the index $j$ denotes the number of the copy of model (1). We define an estimator of the mean function by the equality
$$\bar f^{\,*}_{N,n,h}(\mathbf{t}) = \frac{1}{N}\sum_{j=1}^N \hat f_{n,h,j}(\mathbf{t}), \tag{12}$$
where the estimators $\hat f_{n,h,j}(\mathbf{t})$ are defined by Formula (4), in which the quantities from (1) are replaced with the corresponding characteristics from (11).
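A direct implementation of the averaging in (12) might look as follows (our sketch; it reuses the hypothetical helpers f_hat and voronoi_measures from the code after Remark 4 and evaluates the estimators on a common grid of points).

```python
import numpy as np

def mean_function_estimate(designs, observations, h, t_grid, rng=None):
    # Estimator (12): average over the N series of the per-series estimators
    # (4), each built from its own design and its own Voronoi-cell measures.
    est = np.zeros(len(t_grid))
    for z_j, x_j in zip(designs, observations):      # series j = 1, ..., N
        m_j = voronoi_measures(z_j, rng=rng)         # Lambda_k(P_i) for series j
        est += np.array([f_hat(t, z_j, x_j, h, m_j) for t in t_grid])
    return est / len(designs)
```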
A corollary of Theorem 1 is the following assertion.
Theorem 2.
For model (11), let conditions $(\mathbf{A1})$–$(\mathbf{A4})$ be fulfilled and
$$\mathrm{E}\sup_{\mathbf{t} \in [0,1]^k} |f_1(\mathbf{t})| < \infty. \tag{13}$$
Moreover, let a sequence $h \equiv h_n \to 0$ and a sequence of naturals $N \equiv N_n$ satisfy the conditions
$$h^{-k(p/2+1)} \mathrm{E}\big(\delta_n^{kp/2}\big) \to 0 \quad \text{and} \quad N\,\mathrm{P}(\delta_n > c^* h) \to 0. \tag{14}$$
Then,
$$\sup_{\mathbf{t} \in [0,1]^k} \big|\bar f^{\,*}_{N,n,h}(\mathbf{t}) - \mathrm{E} f_1(\mathbf{t})\big| \stackrel{p}{\to} 0.$$
Remark 6.
If condition (13) is replaced with the slightly stronger constraint
$$\mathrm{E}\sup_{\mathbf{t} \in [0,1]^k} f_1^2(\mathbf{t}) < \infty,$$
then, for $h \equiv h_n$ and $N \equiv N_n$ as in (14), one can show the uniform consistency of the estimator
$$M^*_{N,n,h}(\mathbf{t},\mathbf{s}) = \frac{1}{N}\sum_{j=1}^N \hat f_{n,h,j}(\mathbf{t})\, \hat f_{n,h,j}(\mathbf{s}), \qquad \mathbf{t},\mathbf{s} \in [0,1]^k,$$
for the mixed second moment $\mathrm{E} f(\mathbf{t}) f(\mathbf{s})$. The proof of this fact is similar to that of Theorem 2, and thus we omit the detailed arguments. In other words, under the above conditions, the estimator
$$\mathrm{Cov}^*_{N,n,h}(\mathbf{t},\mathbf{s}) = M^*_{N,n,h}(\mathbf{t},\mathbf{s}) - \bar f^{\,*}_{N,n,h}(\mathbf{t})\, \bar f^{\,*}_{N,n,h}(\mathbf{s})$$
is uniformly consistent for the covariance function of the random regression field $f(\mathbf{t})$.
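Accordingly, here is a sketch of the mixed second moment and covariance estimators of Remark 6 (ours; the per-series estimates on a grid are assumed to be precomputed, e.g., with the f_hat helper above).

```python
import numpy as np

def covariance_estimate(F):
    # F has shape (N, G): row j holds the per-series estimate \hat f_{n,h,j}
    # evaluated at G grid points. Returns the G x G matrix Cov*_{N,n,h} on
    # that grid: M*(t, s) - mean(t) * mean(s).
    N = F.shape[0]
    M = F.T @ F / N              # mixed second moment M*_{N,n,h}(t, s)
    mean = F.mean(axis=0)        # mean function estimate (12) on the grid
    return M - np.outer(mean, mean)
```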

4. Proofs

Throughout this section, we assume that conditions $(\mathbf{A1})$–$(\mathbf{A3})$ are satisfied. To prove Theorem 1, we need a number of auxiliary assertions. Let
$$\kappa_j = \int_{-1}^1 |u|^j K_o(u)\,du,\ \ j = 1,2, \qquad \lambda_i(\mathbf{t}) = K_h(\mathbf{t}-\mathbf{z}_i)\Lambda_k(P_i), \qquad w_{n0}(\mathbf{t}) = \sum_{i=1}^n \lambda_i(\mathbf{t}), \tag{15}$$
$$w_{nuv}(\mathbf{t}) = \sum_{i=1}^n (t_u - z_{ui})(t_v - z_{vi})\lambda_i(\mathbf{t}), \qquad w_{nu}(\mathbf{t}) = \sum_{i=1}^n (t_u - z_{ui})\lambda_i(\mathbf{t}), \qquad u,v = 1,\ldots,k, \tag{16}$$
$$w_0(\mathbf{t}) = \int_{[0,1]^k} K_h(\mathbf{t}-\mathbf{z})\,d\mathbf{z}, \qquad w_u(\mathbf{t}) = \int_{[0,1]^k} (t_u - z_u) K_h(\mathbf{t}-\mathbf{z})\,d\mathbf{z}, \qquad u = 1,\ldots,k, \tag{17}$$
$$w_{uv}(\mathbf{t}) = \int_{[0,1]^k} (t_u - z_u)(t_v - z_v) K_h(\mathbf{t}-\mathbf{z})\,d\mathbf{z}, \qquad u,v = 1,\ldots,k, \tag{18}$$
where $\mathbf{t} = (t_1,\ldots,t_k)$, $\mathbf{z}_i = (z_{1i},\ldots,z_{ki})$, and $\mathbf{z} = (z_1,\ldots,z_k)$.
Remark 7.
We emphasize that, due to the density properties of $K_h(\cdot)$, the summation domain in all the sums in (15) and (16), as well as in all the sums in formulas (41), (42), (44), and (51) below, coincides with the set
$$N_h(\mathbf{t}) = \{i:\ \|\mathbf{t}-\mathbf{z}_i\| \le h,\ 1 \le i \le n\}, \tag{19}$$
and the domain of integration in (17) and (18) coincides with the set
$$A_h(\mathbf{t}) = \{\mathbf{z} \in [0,1]^k:\ \|\mathbf{t}-\mathbf{z}\| \le h\}.$$
These facts are fundamental for the further analysis.
Lemma 1.
For $h < 1/2$, the following relations are valid:
$$\inf_{\mathbf{t} \in [0,1]^k} \big(w_0(\mathbf{t})\, w_{uu}(\mathbf{t}) - w_u^2(\mathbf{t})\big) \ge 2^{-k-1}\big(\kappa_2 - \kappa_1^2\big) h^2, \qquad u = 1,\ldots,k, \tag{20}$$
$$w_0(\mathbf{t})\, w_{uv}(\mathbf{t}) - w_u(\mathbf{t})\, w_v(\mathbf{t}) = 0 \quad \text{for all } \mathbf{t} \in [0,1]^k, \qquad u \ne v,\ u,v = 1,\ldots,k, \tag{21}$$
$$\inf_{\mathbf{t} \in [0,1]^k} w_0(\mathbf{t}) = 2^{-k}, \qquad \sup_{\mathbf{t} \in [0,1]^k} w_0(\mathbf{t}) = 1, \tag{22}$$
$$\sup_{\mathbf{t} \in [0,1]^k} |w_u(\mathbf{t})| = 2^{-1}\kappa_1 h, \qquad \sup_{\mathbf{t} \in [0,1]^k} |w_{uv}(\mathbf{t})| = 2^{-2}\kappa_1^2 h^2, \qquad u,v = 1,\ldots,k. \tag{23}$$
Proof. 
The relation (21) obviously follows from the representation $K(\mathbf{t}) = \prod_{j=1}^k K_o(t_j)$ introduced in condition $(\mathbf{A3})$. The statements (20), (22), and (23) follow from Lemma 1 in [53] and the above-mentioned representation of the kernel. □
Lemma 2.
For $h < 1/2$, on the subset of elementary events defined by the relation $\delta_n \le h$, the following inequalities hold:
$$\sup_{\mathbf{t} \in [0,1]^k} w_{n0}(\mathbf{t}) \le 2^{2k} L^k, \qquad \sup_{\mathbf{t} \in [0,1]^k} |w_{n0}(\mathbf{t}) - w_0(\mathbf{t})| \le k\, 2^{2k} L^k h^{-1} \delta_n, \tag{24}$$
$$\sup_{\mathbf{t} \in [0,1]^k} |w_{nu}(\mathbf{t})| \le 2^{2k} L^k h, \qquad \sup_{\mathbf{t} \in [0,1]^k} |w_{nu}(\mathbf{t}) - w_u(\mathbf{t})| \le (k+1)\, 2^{2k} L^k \delta_n, \qquad u = 1,\ldots,k, \tag{25}$$
$$\sup_{\mathbf{t} \in [0,1]^k} |w_{nuv}(\mathbf{t})| \le 2^{2k} L^k h^2, \qquad \sup_{\mathbf{t} \in [0,1]^k} |w_{nuv}(\mathbf{t}) - w_{uv}(\mathbf{t})| \le (k+2)\, 2^{2k} L^k h \delta_n, \qquad u,v = 1,\ldots,k; \tag{26}$$
and on the subset of elementary events $\{\delta_n \le (k 2^{3k+1} L^k)^{-1} h\}$,
$$\inf_{\mathbf{t} \in [0,1]^k} w_{n0}(\mathbf{t}) \ge 2^{-k-1}, \tag{27}$$
$$\sup_{\mathbf{t} \in [0,1]^k} \left| w_{nuv}(\mathbf{t}) - \frac{w_{nu}(\mathbf{t})\, w_{nv}(\mathbf{t})}{w_{n0}(\mathbf{t})} - w_{uv}(\mathbf{t}) + \frac{w_u(\mathbf{t})\, w_v(\mathbf{t})}{w_0(\mathbf{t})} \right| \le (k+2)\, 2^{6k+3} L^{k+k^2} h \delta_n \tag{28}$$
for any $u,v = 1,\ldots,k$.
Proof. 
To derive the first estimates in (24)–(26), one should take into account Remark 7 and the relations
$$\sup_{\mathbf{s} \in [-1,1]^k} K(\mathbf{s}) \le L^k, \qquad \sum_{i=1}^n K_h(\mathbf{t}-\mathbf{z}_i)\Lambda_k(P_i) \le h^{-k} L^k (2h + 2\delta_n)^k \le 2^{2k} L^k. \tag{29}$$
The second estimates in (24)–(26) follow from the well-known bound on the error of approximating the integral of a Lipschitz function by its Riemann sum:
$$\bigg| \sum_{i \in N_h(\mathbf{t})} g^{u,v}_{\mathbf{t},\alpha,\beta}(\mathbf{z}_i)\Lambda_k(P_i) - \int_{A_h(\mathbf{t})} g^{u,v}_{\mathbf{t},\alpha,\beta}(\mathbf{z})\,d\mathbf{z} \bigg| \le \delta_n\, L_{g^{u,v}_{\mathbf{t},\alpha,\beta}} \sum_{i \in N_h(\mathbf{t})} \Lambda_k(P_i),$$
where the functions $g^{u,v}_{\mathbf{t},\alpha,\beta}(\mathbf{z}) = (t_u - z_u)^\alpha (t_v - z_v)^\beta K_h(\mathbf{t}-\mathbf{z})$, $\alpha,\beta = 0,1$, are considered for $z_j \in [0 \vee (t_j - h),\, 1 \wedge (t_j + h)]$, $j = 1,\ldots,k$, and $L_{g^{u,v}_{\mathbf{t},\alpha,\beta}}$ is a Lipschitz constant of the function $g^{u,v}_{\mathbf{t},\alpha,\beta}(\mathbf{z})$. It is easy to verify that, under the conditions of the lemma,
$$\sup_{\mathbf{t} \in [0,1]^k} L_{g^{u,v}_{\mathbf{t},0,0}} \le k L^k h^{-k-1}, \qquad \sup_{\mathbf{t} \in [0,1]^k} L_{g^{u,v}_{\mathbf{t},1,0}} \le (k+1) L^k h^{-k}, \qquad \sup_{\mathbf{t} \in [0,1]^k} L_{g^{u,v}_{\mathbf{t},1,1}} \le (k+2) L^k h^{-k+1}. \tag{30}$$
To complete the derivation of (24)–(26), it remains to take into account the definitions (15)–(18) and the estimates (29) and (30).
Finally, (27) follows from the first relation in (22) and the second relation in (24). When deriving the last assertion (28) of the lemma, we use Lemma 1 and the assertions (24)–(27) just proved. □
In what follows, we also need to establish the Lipschitz property of the introduced functions $w_{n0}(\mathbf{t})$, $w_{nu}(\mathbf{t})$, and $w_{nuv}(\mathbf{t})$.
Lemma 3.
For $h < 1/2$, on the set of elementary outcomes defined by the relation $\delta_n \le h$, for any $\mathbf{t}, \mathbf{t}' \in [0,1]^k$, the following inequalities hold:
$$|w_{n0}(\mathbf{t}) - w_{n0}(\mathbf{t}')| \le k\, 2^{2k+1} L^k h^{-1} \|\mathbf{t}-\mathbf{t}'\|, \tag{31}$$
$$|w_{nu}(\mathbf{t}) - w_{nu}(\mathbf{t}')| \le (k+1)\, 2^{2k+1} L^k \|\mathbf{t}-\mathbf{t}'\|, \qquad u = 1,\ldots,k, \tag{32}$$
$$|w_{nuv}(\mathbf{t}) - w_{nuv}(\mathbf{t}')| \le (k+2)\, 2^{2k+1} L^k h \|\mathbf{t}-\mathbf{t}'\|, \qquad u,v = 1,\ldots,k. \tag{33}$$
On the set of elementary outcomes with the condition $\delta_n \le (k 2^{3k+1} L^k)^{-1} h$,
$$\left| w_{nuv}(\mathbf{t}) - \frac{w_{nu}(\mathbf{t})\, w_{nv}(\mathbf{t})}{w_{n0}(\mathbf{t})} - w_{nuv}(\mathbf{t}') + \frac{w_{nu}(\mathbf{t}')\, w_{nv}(\mathbf{t}')}{w_{n0}(\mathbf{t}')} \right| \le 5(k+1)\, 2^{8k+2} L^{3k} h \|\mathbf{t}-\mathbf{t}'\|. \tag{34}$$
Proof. 
First of all, we note that the normalized kernel $K_h$ satisfies the Lipschitz condition with the constant $k L^k h^{-k-1}$, and, for the sets $N_h(\mathbf{t})$ defined in (19), under the conditions of the lemma, we have the estimate
$$\sum_{i \in N_h(\mathbf{t})} \Lambda_k(P_i) \le 2^k (h+\delta_n)^k \le 2^{2k} h^k.$$
From here, we easily obtain (31):
$$|w_{n0}(\mathbf{t}) - w_{n0}(\mathbf{t}')| \le \sum_{i \in N_h(\mathbf{t}) \cup N_h(\mathbf{t}')} |\lambda_i(\mathbf{t}) - \lambda_i(\mathbf{t}')| \le k L^k h^{-k-1} \|\mathbf{t}-\mathbf{t}'\| \Big( \sum_{i \in N_h(\mathbf{t})} \Lambda_k(P_i) + \sum_{i \in N_h(\mathbf{t}')} \Lambda_k(P_i) \Big) \le k\, 2^{2k+1} L^k h^{-1} \|\mathbf{t}-\mathbf{t}'\|.$$
The proof of estimate (32) differs only slightly from the one just given:
$$|w_{nu}(\mathbf{t}) - w_{nu}(\mathbf{t}')| \le \sum_{i \in N_h(\mathbf{t}) \cup N_h(\mathbf{t}')} \lambda_i(\mathbf{t})\, |t_u - t'_u| + \sum_{i \in N_h(\mathbf{t}) \cup N_h(\mathbf{t}')} |\lambda_i(\mathbf{t}) - \lambda_i(\mathbf{t}')|\, |t'_u - z_{ui}| \le 2^{2k+1} L^k \|\mathbf{t}-\mathbf{t}'\| + k\, 2^{2k+1} L^k \|\mathbf{t}-\mathbf{t}'\|.$$
Estimate (33) is proved similarly, after dividing the original sum into three subsums. Inequality (34) follows directly from the upper bounds (24) and (25) for the quantities under consideration, together with (31) and (32); we deduce only the main estimate:
$$\left| \frac{w_{nu}(\mathbf{t})\, w_{nv}(\mathbf{t})}{w_{n0}(\mathbf{t})} - \frac{w_{nu}(\mathbf{t}')\, w_{nv}(\mathbf{t}')}{w_{n0}(\mathbf{t}')} \right| \le |w_{nu}(\mathbf{t}) - w_{nu}(\mathbf{t}')|\, \frac{|w_{nv}(\mathbf{t})|}{w_{n0}(\mathbf{t})} + |w_{nv}(\mathbf{t}) - w_{nv}(\mathbf{t}')|\, \frac{|w_{nu}(\mathbf{t}')|}{w_{n0}(\mathbf{t}')} + |w_{n0}(\mathbf{t}) - w_{n0}(\mathbf{t}')|\, \frac{|w_{nu}(\mathbf{t}')|\, |w_{nv}(\mathbf{t})|}{w_{n0}(\mathbf{t})\, w_{n0}(\mathbf{t}')} \le (5k+4)\, 2^{6k} L^{3k} h \|\mathbf{t}-\mathbf{t}'\|.$$
To prove (34), it remains to use this estimate together with (33) and the uniform lower bound $\inf_{\mathbf{t} \in [0,1]^k} w_{n0}(\mathbf{t}) \ge 2^{-k-1}$, valid on the set of outcomes defined by the inequality $\delta_n \le (k 2^{3k+1} L^k)^{-1} h$ (see (27)). After this, one only needs to add up the obtained estimates and carry out elementary calculations, which we omit. □
We define the entries of a square matrix $\mathrm{H}$ of size $k$ as follows:
$$\mathrm{H}_{uv} = w_{nuv}(\mathbf{t}) - w_{n0}^{-1}(\mathbf{t})\, w_{nu}(\mathbf{t})\, w_{nv}(\mathbf{t}), \qquad u,v = 1,\ldots,k. \tag{35}$$
Note that, by virtue of the Cauchy–Bunyakovsky inequality, one has $\mathrm{H}_{uu} \ge 0$ for all $u$. We also need the notation
$$c^* = \frac{\big(\kappa_2 - \kappa_1^2\big)^k}{(k+2)!\, 2^{6k^2+4k+2}\, L^{3k^2-k}}. \tag{36}$$
It is easy to see that the difference $\kappa_2 - \kappa_1^2$ is the variance of a nondegenerate distribution, and thus it is strictly positive. Also, $c^* < 1$.
The following assertion holds.
Lemma 4.
For $h < 1/2$, on the set of outcomes defined by the relation $\delta_n \le c^* h$,
$$\sup_{\mathbf{t} \in [0,1]^k} \|\mathrm{H}^{-1}\| \le k!\, 2^{6k^2-2k-1} L^{2k(k-1)} \big(\kappa_2 - \kappa_1^2\big)^{-k} h^{-2}.$$
Proof. 
Denote by $\mathrm{H}^0$ the square matrix of size $k$ with the entries
$$\mathrm{H}^0_{uv} = w_{uv}(\mathbf{t}) - w_0^{-1}(\mathbf{t})\, w_u(\mathbf{t})\, w_v(\mathbf{t}), \qquad u,v = 1,\ldots,k. \tag{37}$$
By Lemma 1,
$$\mathrm{H}^0 = \mathrm{diag}\big\{ w_{11}(\mathbf{t}) - w_0^{-1}(\mathbf{t})\, w_1^2(\mathbf{t}),\ \ldots,\ w_{kk}(\mathbf{t}) - w_0^{-1}(\mathbf{t})\, w_k^2(\mathbf{t}) \big\},$$
and the following estimate holds for the determinant of this diagonal matrix with positive entries:
$$\inf_{\mathbf{t} \in [0,1]^k} \det \mathrm{H}^0 \ge 2^{-k(k+1)} \big(\kappa_2 - \kappa_1^2\big)^k h^{2k}.$$
Next,
$$\sup_{\mathbf{t} \in [0,1]^k} \big|\det \mathrm{H} - \det \mathrm{H}^0\big| \le k!\, k(k+2)\, 2^{6k+3} L^{k+k^2} \big(2^{5k+2} L^{2k}\big)^{k-1} \frac{\delta_n}{h}\, h^{2k} \le (k+2)!\, 2^{5k^2+3k+1} L^{3k^2-k}\, \frac{\delta_n}{h}\, h^{2k}. \tag{38}$$
When deriving (38), we used the definition of the matrix determinant and estimate (28), and took into account the fact that, due to Lemmas 1 and 2, for all $u,v$, one has
$$\sup_{\mathbf{t} \in [0,1]^k} |\mathrm{H}_{uv}| \le 2^{5k+2} L^{2k} h^2, \qquad \sup_{\mathbf{t} \in [0,1]^k} |\mathrm{H}^0_{uv}| \le 2^{k-1} L^2 h^2.$$
Hence, on the set of elementary outcomes defined by the relation $\delta_n \le c^* h$,
$$\inf_{\mathbf{t} \in [0,1]^k} \det \mathrm{H} \ge 2^{-k(k+1)-1} \big(\kappa_2 - \kappa_1^2\big)^k h^{2k}. \tag{39}$$
The assertion of the lemma now follows directly from the definition of the inverse matrix and relation (39). □
We need the following notation for $n$-dimensional vectors:
$$\mathbf{1} = (1, \ldots, 1)^\top, \qquad \boldsymbol{\epsilon} = (\varepsilon_1, \ldots, \varepsilon_n)^\top, \qquad \boldsymbol{\delta}_f = \big( f(\mathbf{z}_1) - f(\mathbf{t}),\ \ldots,\ f(\mathbf{z}_n) - f(\mathbf{t}) \big)^\top.$$
In this notation, $\mathbf{x} = f(\mathbf{t})\mathbf{1} + \boldsymbol{\delta}_f + \boldsymbol{\epsilon}$, and, for the estimator $\hat f_{n,h}(\mathbf{t})$ in (4), the following representation is valid:
$$\hat f_{n,h}(\mathbf{t}) = f(\mathbf{t})\, \mathbf{e}_1^\top \big(\mathrm{Z}_t^\top \mathrm{W}_t \mathrm{Z}_t\big)^{-1} \mathrm{Z}_t^\top \mathrm{W}_t \mathbf{1} + \mathbf{e}_1^\top \big(\mathrm{Z}_t^\top \mathrm{W}_t \mathrm{Z}_t\big)^{-1} \mathrm{Z}_t^\top \mathrm{W}_t \boldsymbol{\delta}_f + \mathbf{e}_1^\top \big(\mathrm{Z}_t^\top \mathrm{W}_t \mathrm{Z}_t\big)^{-1} \mathrm{Z}_t^\top \mathrm{W}_t \boldsymbol{\epsilon}.$$
Using definitions (2) and (3) and the first notation in (15), one can write the $(k+1)\times(k+1)$ matrix $\mathrm{Z}_t^\top \mathrm{W}_t \mathrm{Z}_t$ and the $(k+1)$-dimensional vectors $\mathrm{Z}_t^\top \mathrm{W}_t \mathbf{1}$, $\mathrm{Z}_t^\top \mathrm{W}_t \boldsymbol{\delta}_f$, and $\mathrm{Z}_t^\top \mathrm{W}_t \boldsymbol{\epsilon}$ as follows:
$$\mathrm{Z}_t^\top \mathrm{W}_t \mathrm{Z}_t = \begin{pmatrix} \sum_{i=1}^n \lambda_i(\mathbf{t}) & \sum_{i=1}^n (\mathbf{t}-\mathbf{z}_i)^\top \lambda_i(\mathbf{t}) \\ \sum_{i=1}^n (\mathbf{t}-\mathbf{z}_i)\, \lambda_i(\mathbf{t}) & \sum_{i=1}^n (\mathbf{t}-\mathbf{z}_i)(\mathbf{t}-\mathbf{z}_i)^\top \lambda_i(\mathbf{t}) \end{pmatrix}, \qquad \mathrm{Z}_t^\top \mathrm{W}_t \mathbf{1} = \begin{pmatrix} \sum_{i=1}^n \lambda_i(\mathbf{t}) \\ \sum_{i=1}^n (\mathbf{t}-\mathbf{z}_i)\, \lambda_i(\mathbf{t}) \end{pmatrix}, \tag{41}$$
$$\mathrm{Z}_t^\top \mathrm{W}_t \boldsymbol{\delta}_f = \begin{pmatrix} \sum_{i=1}^n \lambda_i(\mathbf{t}) \big(f(\mathbf{z}_i)-f(\mathbf{t})\big) \\ \sum_{i=1}^n (\mathbf{t}-\mathbf{z}_i)\, \lambda_i(\mathbf{t}) \big(f(\mathbf{z}_i)-f(\mathbf{t})\big) \end{pmatrix}, \qquad \mathrm{Z}_t^\top \mathrm{W}_t \boldsymbol{\epsilon} = \begin{pmatrix} \sum_{i=1}^n \lambda_i(\mathbf{t})\, \varepsilon_i \\ \sum_{i=1}^n (\mathbf{t}-\mathbf{z}_i)\, \lambda_i(\mathbf{t})\, \varepsilon_i \end{pmatrix}. \tag{42}$$
The following assertion holds.
Lemma 5.
The following identity is valid:
$$\mathbf{e}_1^\top \big(\mathrm{Z}_t^\top \mathrm{W}_t \mathrm{Z}_t\big)^{-1} \mathrm{Z}_t^\top \mathrm{W}_t \mathbf{1} = 1. \tag{40}$$
Proof. 
We use the Frobenius formula for inverting a nondegenerate block matrix:
$$\begin{pmatrix} \mathrm{A} & \mathrm{B} \\ \mathrm{C} & \mathrm{D} \end{pmatrix}^{-1} = \begin{pmatrix} \mathrm{A}^{-1} + \mathrm{A}^{-1}\mathrm{B}\mathrm{H}^{-1}\mathrm{C}\mathrm{A}^{-1} & -\mathrm{A}^{-1}\mathrm{B}\mathrm{H}^{-1} \\ -\mathrm{H}^{-1}\mathrm{C}\mathrm{A}^{-1} & \mathrm{H}^{-1} \end{pmatrix}, \qquad \text{for } \mathrm{H} = \mathrm{D} - \mathrm{C}\mathrm{A}^{-1}\mathrm{B},$$
where $\mathrm{A}$ is a square nondegenerate matrix of size $m_1$ and $\mathrm{D}$ is a square matrix of size $m_2$. In this formula, we let $m_1 = 1$, $m_2 = k$,
$$\mathrm{A} = \sum_{i=1}^n \lambda_i(\mathbf{t}) \equiv w_{n0}(\mathbf{t}), \qquad \mathrm{B} = \sum_{i=1}^n (\mathbf{t}-\mathbf{z}_i)^\top \lambda_i(\mathbf{t}), \qquad \mathrm{C} = \sum_{i=1}^n (\mathbf{t}-\mathbf{z}_i)\, \lambda_i(\mathbf{t}), \qquad \mathrm{D} = \sum_{i=1}^n (\mathbf{t}-\mathbf{z}_i)(\mathbf{t}-\mathbf{z}_i)^\top \lambda_i(\mathbf{t}). \tag{43}$$
Then, in view of (41) and the Frobenius formula, the first component of the vector of interest is
$$\big(\mathrm{A}^{-1} + \mathrm{A}^{-1}\mathrm{B}\mathrm{H}^{-1}\mathrm{C}\mathrm{A}^{-1}\big)\mathrm{A} - \mathrm{A}^{-1}\mathrm{B}\mathrm{H}^{-1}\mathrm{C} = 1. \ \square$$
Lemma 6.
For $h < 1/2$, on the set of elementary outcomes defined by the relation $\delta_n \le c^* h$,
$$\sup_{\mathbf{t} \in [0,1]^k} \big| \mathbf{e}_1^\top \big(\mathrm{Z}_t^\top \mathrm{W}_t \mathrm{Z}_t\big)^{-1} \mathrm{Z}_t^\top \mathrm{W}_t \boldsymbol{\delta}_f \big| \le C_1^* \omega_f(h), \qquad \text{with } C_1^* = 1 + k \cdot k!\, 2^{6k^2} L^{2k^2-k} \big(\kappa_2 - \kappa_1^2\big)^{-k}.$$
Proof. 
We use definition (43). Next, using the Frobenius formula (see the proof of Lemma 5), we obtain the identity
$$\mathbf{e}_1^\top \big(\mathrm{Z}_t^\top \mathrm{W}_t \mathrm{Z}_t\big)^{-1} \mathrm{Z}_t^\top \mathrm{W}_t \boldsymbol{\delta}_f = \mathrm{A}^{-1} A_f + \mathrm{A}^{-1}\mathrm{B}\mathrm{H}^{-1}\mathrm{C}\mathrm{A}^{-1} A_f - \mathrm{A}^{-1}\mathrm{B}\mathrm{H}^{-1} C_f, \tag{44}$$
where
$$A_f = \sum_{i=1}^n \lambda_i(\mathbf{t}) \big(f(\mathbf{z}_i) - f(\mathbf{t})\big), \qquad C_f = \sum_{i=1}^n (\mathbf{t}-\mathbf{z}_i)\, \lambda_i(\mathbf{t}) \big(f(\mathbf{z}_i) - f(\mathbf{t})\big).$$
Consider each of the terms on the right-hand side of this relation. Notice that, for the standard inner product $(\mathbf{x},\mathbf{y})$ in $\mathbb{R}^k$ and the sup-norm $\|\cdot\|$ under consideration, we have the inequality $|(\mathbf{x},\mathbf{y})| \le k\|\mathbf{x}\|\,\|\mathbf{y}\|$. Therefore,
$$\big|\mathrm{A}^{-1}\mathrm{B}\mathrm{H}^{-1}\mathrm{C}\mathrm{A}^{-1} A_f\big| \le k\, \mathrm{A}^{-1} \big|\mathrm{A}^{-1} A_f\big|\, \|\mathrm{B}\|\, \|\mathrm{H}^{-1}\|\, \|\mathrm{C}\|, \qquad \big|\mathrm{A}^{-1}\mathrm{B}\mathrm{H}^{-1} C_f\big| \le k\, \mathrm{A}^{-1} \|\mathrm{B}\|\, \|\mathrm{H}^{-1}\|\, \|C_f\|. \tag{45}$$
Further, taking into account Remark 7, we obtain
$$\big|\mathrm{A}^{-1} A_f\big| \le \omega_f(h), \qquad \|C_f\| \le \omega_f(h)\, \mathrm{A} h, \qquad \|\mathrm{C}\| \le \mathrm{A} h, \qquad \sup_{\mathbf{t} \in [0,1]^k} \|\mathrm{B}\| \le 2^{2k} L^k h.$$
The above relations, together with the identity (44) and the estimates (45), complete the proof of the lemma. □
Lemma 7.
For any $h < 1/2$, $\mathbf{t} \in [0,1]^k$, and $m \in N_h(\mathbf{t})$, on the set of outcomes defined by the relation $\delta_n \le c^* h$, the following estimates are valid:
$$|\alpha_{n,m}(\mathbf{t})| \le 1 + (k+1)!\, 2^{6k^2-1} L^{2k^2-k} \big(\kappa_2 - \kappa_1^2\big)^{-k} =: \alpha, \qquad \big|\alpha_{n,m}(\mathbf{t}+2^{-l}\mathbf{e}_r) - \alpha_{n,m}(\mathbf{t})\big| \le \bar\alpha\, 2^{-l} h^{-1} \ \text{ if } 2^{-l} \le h,$$
$$\text{with } \bar\alpha = 30\, (k!)^2 k^3 2^k L^{10k^2} 2^{8k^2+21k+5} \big(\kappa_2 - \kappa_1^2\big)^{-2k},$$
where $\alpha_{n,m}(\mathbf{t}) = 1 + \mathrm{B}\mathrm{H}^{-1}\mathrm{C}\mathrm{A}^{-1} - \mathrm{B}\mathrm{H}^{-1}(\mathbf{t}-\mathbf{z}_m)$, and $\mathbf{e}_r$ is the $k$-dimensional vector with unit $r$-th component and zeros as the other ones.
Proof. 
Since $\|\mathbf{t}-\mathbf{z}_m\| \le h$ for $m \in N_h(\mathbf{t})$, the first assertion of the lemma follows from Lemma 4, relations (45), and the estimate
$$|\alpha_{n,m}(\mathbf{t})| \le 1 + k\, \mathrm{A}^{-1} \|\mathrm{B}\|\, \|\mathrm{H}^{-1}\|\, \|\mathrm{C}\| + k\, \|\mathrm{B}\|\, \|\mathrm{H}^{-1}\|\, \|\mathbf{t}-\mathbf{z}_m\|.$$
To prove the second assertion, first of all, we note that
$$\mathrm{H}^{-1} = \frac{\mathrm{adj}\,\mathrm{H}}{\det \mathrm{H}},$$
where the elements of the adjugate matrix $\mathrm{adj}\,\mathrm{H}$ are, up to the signs $\pm$, the complementary minors of the original matrix $\mathrm{H}$. As is known, the determinant of any square matrix $\mathrm{M}$ of size $k$ with entries $\mathrm{M}_{ij}$, $i,j = 1,\ldots,k$, can be represented as the multilinear form
$$\det \mathrm{M} = \sum_{\alpha_1,\ldots,\alpha_k} (-1)^{I(\bar\alpha)}\, \mathrm{M}_{1\alpha_1} \cdots \mathrm{M}_{k\alpha_k}, \tag{46}$$
where the summation is over all permutations $\bar\alpha = (\alpha_1, \ldots, \alpha_k)$ of the natural numbers $1, \ldots, k$, and $I(\bar\alpha)$ is the number of inversions in the permutation $\bar\alpha$. We need Formula (46) to represent the determinant $\det \mathrm{H}$ and the complementary minors of the matrix $\mathrm{H}$ in the inverse matrix $\mathrm{H}^{-1}$, respectively, as $k$- and $(k-1)$-linear forms of type (46) constructed from the entries (35).
We now show that, in the definition of the function $\alpha_{n,m}(\mathbf{t})$, the term $\mathrm{B}\mathrm{H}^{-1}\mathrm{C}\mathrm{A}^{-1}$ is Lipschitz and the summand $\mathrm{B}\mathrm{H}^{-1}(\mathbf{t}-\mathbf{z}_m)$ is locally Lipschitz; the argument is the same as that in Lemma 3 (see the proof of (34)). First, we note that the following representation holds:
$$\mathrm{B}\mathrm{H}^{-1}\mathrm{C}\mathrm{A}^{-1} = \sum_{i,j=1}^k w_{ni}(\mathbf{t})\, w_{nj}(\mathbf{t})\, (-1)^{i+j}\, \frac{\det \mathrm{H}^{(ij)}}{\mathrm{A}\,\det \mathrm{H}}, \tag{47}$$
where $\det \mathrm{H}^{(ij)}$ is the complementary minor of the entry $\mathrm{H}_{ij} = w_{nij}(\mathbf{t}) - w_{ni}(\mathbf{t})\, w_{nj}(\mathbf{t}) / w_{n0}(\mathbf{t})$ of the matrix $\mathrm{H}$. Note that, in virtue of notation (43), relation (27) can be rewritten as follows:
$$\inf_{\mathbf{t} \in [0,1]^k} \mathrm{A} \ge 2^{-k-1}. \tag{48}$$
Further, to compute the Lipschitz constant of the ratio in (47), we need the lower bounds (39) and (48) for the denominators of the fractions on the right-hand side of (47), which are valid on the set of elementary outcomes $\{\delta_n \le c^* h\}$; here, we take into account the fact that the lower bound in (39) has the order of smallness $h^{2k}$. Moreover, Lemmas 2 and 3, together with the representation (46) (with the entries $\mathrm{H}_{ij}$ in place of $\mathrm{M}_{ij}$), allow us to assert that the upper bound for $\det \mathrm{H}$ has the same order of smallness in $h$, and the Lipschitz constant of $\det \mathrm{H}$ is bounded from above by $C_3^* h^{2k-1}$. At the same time, the complementary minors $\det \mathrm{H}^{(ij)}$ are bounded from above by $C_4^* h^{2k-2}$, with a Lipschitz constant of order $h^{2k-3}$, uniformly over all $i,j = 1,\ldots,k$. In fact, we only need to repeat the calculations used in deriving estimate (34) to compute the Lipschitz constant of the fraction on the right-hand side of (47), from which it follows that this constant has the form $C_5^* h^{-1}$, where as the constant $C_5^*$, we can take the value
$$C_5^* = 15\, (k!)^2 k^3 2^k L^{10k^2} 2^{8k^2+21k+5} \big(\kappa_2 - \kappa_1^2\big)^{-2k}. \tag{49}$$
For the term $\mathrm{B}\mathrm{H}^{-1}(\mathbf{t}-\mathbf{z}_m)$, we establish that, in the $h$-neighborhood of the point $\mathbf{t}$, it is locally Lipschitz with a constant of the form $C_6^* h^{-1}$. By analogy with (47), we represent this summand as
$$\mathrm{B}\mathrm{H}^{-1}(\mathbf{t}-\mathbf{z}_m) = \sum_{i,j=1}^k w_{ni}(\mathbf{t})\, (t_j - z_{jm})\, (-1)^{i+j}\, \frac{\det \mathrm{H}^{(ij)}}{\det \mathrm{H}}.$$
Repeating almost verbatim the previous arguments, we conclude that, in the $h$-neighborhood of any point $\mathbf{t} \in [0,1]^k$ (under the condition $\|\mathbf{t}-\mathbf{z}_m\| \le h$), the function $\mathrm{B}\mathrm{H}^{-1}(\mathbf{t}-\mathbf{z}_m)$ satisfies the Lipschitz condition with a constant $C_6^* h^{-1}$, where $C_6^*$ is substantially smaller than the constant in (49). Therefore, the final constant in the lemma can be set equal to $2C_5^*$. □
Lemma 8.
For any $y > 0$ and $h < 1/2$, on the subset of elementary events defined by the relation $\delta_n \le c^* h$, the following estimate holds:
$$\mathrm{P}_{\mathcal{F}_n}\Big( \sup_{\mathbf{t} \in [0,1]^k} \big| \mathbf{e}_1^\top \big(\mathrm{Z}_t^\top \mathrm{W}_t \mathrm{Z}_t\big)^{-1} \mathrm{Z}_t^\top \mathrm{W}_t \boldsymbol{\epsilon} \big| > y \Big) \le C_2^* y^{-p} \delta_n^{kp/2} h^{-k(1+p/2)},$$
where the symbol $\mathrm{P}_{\mathcal{F}_n}$ denotes the conditional probability given the $\sigma$-algebra $\mathcal{F}_n$ generated by the design $\{\mathbf{z}_i;\ i \le n\}$, and the constant $C_2^*$ is defined in (61).
Proof. 
We use the notation (43). Then, from (41), (42), and the Frobenius formula, we obtain the identity
$$\nu_{n,h}(\mathbf{t}) := \mathbf{e}_1^\top \big(\mathrm{Z}_t^\top \mathrm{W}_t \mathrm{Z}_t\big)^{-1} \mathrm{Z}_t^\top \mathrm{W}_t \boldsymbol{\epsilon} = \mathrm{A}^{-1} A_\varepsilon + \mathrm{A}^{-1}\mathrm{B}\mathrm{H}^{-1}\mathrm{C}\mathrm{A}^{-1} A_\varepsilon - \mathrm{A}^{-1}\mathrm{B}\mathrm{H}^{-1} C_\varepsilon, \tag{50}$$
where
$$A_\varepsilon = \sum_{i=1}^n \lambda_i(\mathbf{t})\, \varepsilon_i, \qquad C_\varepsilon = \sum_{i=1}^n (\mathbf{t}-\mathbf{z}_i)\, \lambda_i(\mathbf{t})\, \varepsilon_i.$$
Let
$$\mu_{n,h}(\mathbf{t}) = \sum_{i:\ \|\mathbf{t}-\mathbf{z}_i\| \le h} \alpha_{n,i}(\mathbf{t})\, \lambda_i(\mathbf{t})\, \varepsilon_i. \tag{51}$$
In virtue of estimate (27), Lemma 2, and Remark 7, one has
$$|\nu_{n,h}(\mathbf{t})| \le 2^{k+1} |\mu_{n,h}(\mathbf{t})|. \tag{52}$$
The tail of the distribution of $\sup_{\mathbf{t} \in [0,1]^k} |\mu_{n,h}(\mathbf{t})|$ is estimated by Kolmogorov's dyadic chaining (see, for example, [77]). First of all, we note that the set $[0,1]^k$ under the supremum sign can be replaced with the set of dyadic rational points $R = \cup_{l \ge 1} R_l$, where
$$R_l = \big\{(j_1/2^l, \ldots, j_k/2^l):\ j_1 = 1, \ldots, 2^l-1;\ \ldots;\ j_k = 1, \ldots, 2^l-1\big\}.$$
Therefore,
$$\sup_{\mathbf{t} \in [0,1]^k} |\mu_{n,h}(\mathbf{t})| = \sup_{\mathbf{t} \in R} |\mu_{n,h}(\mathbf{t})| \le \max_{\mathbf{t} \in R_m} |\mu_{n,h}(\mathbf{t})| + \sum_{l=m+1}^\infty \sum_{r=1}^k \max_{\mathbf{t} \in R_l} \big|\mu_{n,h}(\mathbf{t}+2^{-l}\mathbf{e}_r) - \mu_{n,h}(\mathbf{t})\big|,$$
where $m = \lceil \log_2 h^{-1} \rceil$ (here, $\lceil a \rceil$ is the smallest integer greater than or equal to $a$). Hence,
$$\mathrm{P}_{\mathcal{F}_n}\Big(\sup_{\mathbf{t} \in [0,1]^k} |\mu_{n,h}(\mathbf{t})| > y\Big) \le \mathrm{P}_{\mathcal{F}_n}\Big(\max_{\mathbf{t} \in R_m} |\mu_{n,h}(\mathbf{t})| > a_m y\Big) + \sum_{l=m+1}^\infty \sum_{r=1}^k \mathrm{P}_{\mathcal{F}_n}\Big(\max_{\mathbf{t} \in R_l} \big|\mu_{n,h}(\mathbf{t}+2^{-l}\mathbf{e}_r) - \mu_{n,h}(\mathbf{t})\big| > a_l y/k\Big)$$
$$\le \sum_{\mathbf{t} \in R_m} \mathrm{P}_{\mathcal{F}_n}\big(|\mu_{n,h}(\mathbf{t})| > a_m y\big) + \sum_{l=m+1}^\infty \sum_{r=1}^k \sum_{\mathbf{t} \in R_l} \mathrm{P}_{\mathcal{F}_n}\big(\big|\mu_{n,h}(\mathbf{t}+2^{-l}\mathbf{e}_r) - \mu_{n,h}(\mathbf{t})\big| > a_l y/k\big), \tag{53}$$
where $a_m, a_{m+1}, \ldots$ is a sequence of positive numbers with $a_m + a_{m+1} + \cdots = 1$.
To evaluate the probabilities on the right-hand side of (53), we need the following martingale inequality (see [78], Theorem 2.1):
$$\mathrm{E}\Big| \sum_{i=1}^n \eta_i \Big|^p \le (p-1)^{p/2} \Big( \sum_{i=1}^n \big(\mathrm{E}|\eta_i|^p\big)^{2/p} \Big)^{p/2}, \tag{54}$$
where $\{\eta_i\}$ is a martingale-difference sequence with finite moments of order $p \ge 2$.
To obtain an upper bound for $\mathrm{P}_{\mathcal{F}_n}(|\mu_{n,h}(\mathbf{t})| > a_m y)$, we use (54) with
$$\eta_i = \alpha_{n,i}(\mathbf{t})\, K_h(\mathbf{t}-\mathbf{z}_i)\, \Lambda_k(P_i)\, \varepsilon_i.$$
From (29), Lemma 7, and the elementary estimate $K_h(\mathbf{t}-\mathbf{z}_i)\Lambda_k(P_i) \le L^k (\delta_n/h)^k$, we obtain that, with probability 1,
$$\sum_{i=1}^n \big(\mathrm{E}_{\mathcal{F}_n} |\eta_i|^p\big)^{2/p} \le M_p^{2/p} \alpha^2 2^{2k} L^{2k} (\delta_n/h)^k.$$
Next, taking into account the estimate $\delta_n/h \le 1$, the above inequality and (54) imply the relation
$$\mathrm{P}_{\mathcal{F}_n}\big(|\mu_{n,h}(\mathbf{t})| > a_m y\big) \le \frac{\mathrm{E}_{\mathcal{F}_n}\big(|\mu_{n,h}(\mathbf{t})|^p\big)}{(a_m y)^p} \le G_1\, \frac{(\delta_n/h)^{kp/2}}{(a_m y)^p} \quad \text{a.s.}, \tag{55}$$
where
$$G_1 = (p-1)^{p/2} \alpha^p 2^{kp} L^{kp} M_p. \tag{56}$$
To estimate the probability $\mathrm{P}_{\mathcal{F}_n}\big(|\mu_{n,h}(\mathbf{t}+2^{-l}\mathbf{e}_r) - \mu_{n,h}(\mathbf{t})| > a_l y/k\big)$, we use inequality (54) with
$$\eta_i = \big( \alpha_{n,i}(\mathbf{t}+2^{-l}\mathbf{e}_r)\, K_h(\mathbf{t}-\mathbf{z}_i+2^{-l}\mathbf{e}_r) - \alpha_{n,i}(\mathbf{t})\, K_h(\mathbf{t}-\mathbf{z}_i) \big)\, \Lambda_k(P_i)\, \varepsilon_i.$$
We need the following relations:
$$\big| \alpha_{n,i}(\mathbf{t}+2^{-l}\mathbf{e}_r)\, K_h(\mathbf{t}-\mathbf{z}_i+2^{-l}\mathbf{e}_r) - \alpha_{n,i}(\mathbf{t})\, K_h(\mathbf{t}-\mathbf{z}_i) \big| \Lambda_k(P_i) \le \big|\alpha_{n,i}(\mathbf{t}+2^{-l}\mathbf{e}_r)\big|\, \big| K_h(\mathbf{t}-\mathbf{z}_i+2^{-l}\mathbf{e}_r) - K_h(\mathbf{t}-\mathbf{z}_i)\big| \Lambda_k(P_i)$$
$$+\ K_h(\mathbf{t}-\mathbf{z}_i)\, \big| \alpha_{n,i}(\mathbf{t}+2^{-l}\mathbf{e}_r) - \alpha_{n,i}(\mathbf{t}) \big| \Lambda_k(P_i) \le \alpha L^k h^{-k-1} 2^{-l} \delta_n^k + \bar\alpha h^{-1} 2^{-l} L^k h^{-k} \delta_n^k = (\alpha + \bar\alpha) L^k h^{-k-1} 2^{-l} \delta_n^k, \tag{57}$$
$$\sum_{i=1}^n \big| \alpha_{n,i}(\mathbf{t}+2^{-l}\mathbf{e}_r)\, K_h(\mathbf{t}-\mathbf{z}_i+2^{-l}\mathbf{e}_r) - \alpha_{n,i}(\mathbf{t})\, K_h(\mathbf{t}-\mathbf{z}_i) \big| \Lambda_k(P_i) \le (\alpha+\bar\alpha) L^k h^{-k-1} 2^{-l+1} (2h+2\delta_n)^k \le (\alpha+\bar\alpha) L^k h^{-1} 2^{-l+1} 2^{2k}. \tag{58}$$
Note that $\|(\mathbf{t}+2^{-l}\mathbf{e}_r-\mathbf{z}_i) - (\mathbf{t}-\mathbf{z}_i)\| = 2^{-l} \le h$ for $l > m$. Moreover, the summation domain in (58) coincides with the set
$$N_h(\mathbf{t}+2^{-l}\mathbf{e}_r) \cup N_h(\mathbf{t}) \qquad \text{for } \mathbf{t} \in R_l.$$
These facts, Lemma 7, and the relation $\delta_n/h \le 1$ were taken into account when deriving (58). To prove (57), we used the second assertion of Lemma 7.
Thus,
$$\sum_{i=1}^n \big(\mathrm{E}_{\mathcal{F}_n}|\eta_i|^p\big)^{2/p} \le M_p^{2/p} (\alpha+\bar\alpha)^2 2^{2k} L^{2k} 2^{-2l+1} h^{-2} (\delta_n/h)^k,$$
and, in virtue of inequality (54), we obtain
$$\mathrm{P}_{\mathcal{F}_n}\big(\big|\mu_{n,h}(\mathbf{t}+2^{-l}\mathbf{e}_r) - \mu_{n,h}(\mathbf{t})\big| > a_l y/k\big) \le \frac{G_2}{k}\, \frac{(\delta_n/h)^{kp/2}\, h^{-p}\, 2^{-lp}}{(a_l y)^p}, \tag{59}$$
where
$$G_2 = 2^{p/2} (\alpha+\bar\alpha)^p k^{p+1} (p-1)^{p/2} 2^{kp} L^{kp} M_p = 2^{p/2} k^{p+1} \alpha^{-p} (\alpha+\bar\alpha)^p G_1. \tag{60}$$
Now, using relations (53), (55), and (59), we conclude that
$$\mathrm{P}_{\mathcal{F}_n}\Big(\sup_{\mathbf{t} \in [0,1]^k} |\mu_{n,h}(\mathbf{t})| > y\Big) < y^{-p} (\delta_n/h)^{kp/2} \Big( G_1 \frac{2^{km}}{a_m^p} + G_2 h^{-p} \sum_{l=m+1}^\infty \frac{2^{-(p-k)l}}{a_l^p} \Big).$$
The optimal sequence $\{a_l\}$ that minimizes the right-hand side of this inequality is
$$a_m = c \big(G_1 2^{km}\big)^{1/(p+1)}, \qquad a_l = c \big(G_2 h^{-p} 2^{-(p-k)l}\big)^{1/(p+1)}, \quad l = m+1, m+2, \ldots,$$
where the coefficient $c$ is determined by the equality $a_m + a_{m+1} + \cdots = 1$. For this sequence, we obtain
$$\mathrm{P}_{\mathcal{F}_n}\Big(\sup_{\mathbf{t} \in [0,1]^k} |\mu_{n,h}(\mathbf{t})| > y\Big) \le y^{-p} (\delta_n/h)^{kp/2} \Big( \big(G_1 2^{km}\big)^{1/(p+1)} + \sum_{l=m+1}^\infty \big(G_2 h^{-p} 2^{-(p-k)l}\big)^{1/(p+1)} \Big)^{p+1}$$
$$\le y^{-p} \delta_n^{kp/2} h^{-k(1+p/2)} \Big( (2G_1)^{1/(p+1)} + \big(G_2 2^{-(p-k)}\big)^{1/(p+1)} \sum_{l=0}^\infty 2^{-(p-k)l/(p+1)} \Big)^{p+1}$$
$$\le y^{-p} \delta_n^{kp/2} h^{-k(1+p/2)} \Big( (2G_1)^{1/(p+1)} + \frac{\big(G_2 2^{-(p-k)}\big)^{1/(p+1)}}{1 - 2^{-(p-k)/(p+1)}} \Big)^{p+1}.$$
This relation, together with the estimate (52), completes the derivation of Lemma 8 if we let
$$C_2^* = 2^{p(k+1)} \Big( (2G_1)^{1/(p+1)} + \frac{\big(G_2 2^{-(p-k)}\big)^{1/(p+1)}}{1 - 2^{-(p-k)/(p+1)}} \Big)^{p+1}, \tag{61}$$
where the constants $G_1$ and $G_2$ are defined in (56) and (60), respectively. □
We complete the proof of Theorem 1. Set $\zeta_n(h) = \sup_{\mathbf{t} \in [0,1]^k} |\nu_{n,h}(\mathbf{t})|$, where $\nu_{n,h}(\mathbf{t})$ is defined in (50), and, together with Lemma 8, take into account the relation
$$\mathrm{P}\big(\zeta_n(h) > y,\ \delta_n \le c^* h\big) = \mathrm{E}\, I(\delta_n \le c^* h)\, \mathrm{P}_{\mathcal{F}_n}\big(\zeta_n(h) > y\big).$$
It remains to use the identity (40) and the assertions of Lemmas 5 and 6. Theorem 1 is proved.
To prove Theorem 2, we need the following auxiliary assertion.
Lemma 9.
If condition (13) is satisfied, then $\lim_{\varepsilon \to 0} \mathrm{E}\,\omega_{f_1}(\varepsilon) = 0$, and, for independent copies $f_1(\mathbf{t}), \ldots, f_N(\mathbf{t})$ of an almost surely continuous random process, the following uniform law of large numbers holds:
$$\sup_{\mathbf{t} \in [0,1]^k} |\bar f_N(\mathbf{t}) - \mathrm{E} f_1(\mathbf{t})| \stackrel{p}{\to} 0, \qquad \text{where } \bar f_N(\mathbf{t}) = N^{-1} \sum_{i=1}^N f_i(\mathbf{t}). \tag{62}$$
Proof. 
The first assertion of the lemma follows from (13) and Lebesgue's dominated convergence theorem. Let
$$\omega_{\bar f_N}(\varepsilon) = \sup_{\mathbf{t},\mathbf{s}:\ \|\mathbf{t}-\mathbf{s}\| \le \varepsilon} |\bar f_N(\mathbf{t}) - \bar f_N(\mathbf{s})|, \qquad \omega_{\mathrm{E}f_1}(\varepsilon) = \sup_{\mathbf{t},\mathbf{s}:\ \|\mathbf{t}-\mathbf{s}\| \le \varepsilon} |\mathrm{E}f_1(\mathbf{t}) - \mathrm{E}f_1(\mathbf{s})|.$$
For simplicity, let $k = 2$. For an arbitrary fixed natural $r$ and $u,v = 0,\ldots,r$, the following relations hold:
$$\sup_{\mathbf{t} \in [0,1]^2} |\bar f_N(\mathbf{t}) - \mathrm{E}f_1(\mathbf{t})| \le \max_{0 \le u,v \le r} \big| \bar f_N(u/r, v/r) - \mathrm{E}f_1(u/r, v/r) \big| + \max_{1 \le u,v \le r}\ \sup_{\frac{u-1}{r} \le t_1 \le \frac{u}{r},\ \frac{v-1}{r} \le t_2 \le \frac{v}{r}} \big| \bar f_N(\mathbf{t}) - \bar f_N(u/r, v/r) \big|$$
$$+\ \max_{1 \le u,v \le r}\ \sup_{\frac{u-1}{r} \le t_1 \le \frac{u}{r},\ \frac{v-1}{r} \le t_2 \le \frac{v}{r}} \big| \mathrm{E}f_1(\mathbf{t}) - \mathrm{E}f_1(u/r, v/r) \big| \le \max_{0 \le u,v \le r} \big| \bar f_N(u/r, v/r) - \mathrm{E}f_1(u/r, v/r) \big| + \omega_{\bar f_N}(1/r) + \omega_{\mathrm{E}f_1}(1/r). \tag{63}$$
Now, notice that $\omega_{\mathrm{E}f_1}(\varepsilon) \le \mathrm{E}\,\omega_{f_1}(\varepsilon)$ and
$$\bar f_N(u/r, v/r) \stackrel{p}{\to} \mathrm{E}f_1(u/r, v/r), \qquad \omega_{\bar f_N}(\varepsilon) \le \frac{1}{N}\sum_{i=1}^N \omega_{f_i}(\varepsilon) \stackrel{p}{\to} \mathrm{E}\,\omega_{f_1}(\varepsilon).$$
Therefore, the right-hand side of (63) does not exceed $2\mathrm{E}\,\omega_{f_1}(1/r) + o_p(1)$, and, due to the arbitrariness of $r$ and the first assertion of the lemma, relation (62) for $k = 2$ is proved. For arbitrary $k$, the derivation is similar. □
Proof of Theorem 2.
We let
$$\Delta_{n,h,j} = \sup_{\mathbf{t} \in [0,1]^k} \big|\hat f_{n,h,j}(\mathbf{t}) - f_j(\mathbf{t})\big|$$
and prove the following version of the law of large numbers for the quantities $\{\Delta_{n,h,j}\}$:
$$\frac{1}{N} \sum_{j=1}^N \Delta_{n,h,j} \stackrel{p}{\to} 0 \quad \text{as } N \to \infty, \tag{64}$$
where the sequences $h \equiv h_n$ and $N \equiv N_n$ are defined in (14). We introduce the events
$$A_{n,h,j} = \{\delta_{n,j} \le c^* h\}, \qquad j = 1, \ldots, N,$$
and notice that, for any positive $\nu$, one has
$$\mathrm{P}\Big( \frac{1}{N}\sum_{j=1}^N \Delta_{n,h,j} > \nu \Big) \le \mathrm{P}\Big( \frac{1}{N}\sum_{j=1}^N \Delta_{n,h,j}\, I(A_{n,h,j}) > \nu \Big) + N\,\mathrm{P}\big(\overline{A_{n,h,1}}\big). \tag{65}$$
Next, from Theorem 1, it follows that
$$\mathrm{E}\,\Delta_{n,h,j}\, I(A_{n,h,j}) \le C_1^*\, \mathrm{E}\,\omega_f(h) + \int_0^\infty \mathrm{P}\big(\zeta_n(h) > y,\ \delta_n \le c^* h\big)\,dy \le C_1^*\, \mathrm{E}\,\omega_f(h) + \gamma_n + \int_{\gamma_n}^\infty \mathrm{P}\big(\zeta_n(h) > y,\ \delta_n \le c^* h\big)\,dy \le C_1^*\, \mathrm{E}\,\omega_f(h) + (1 + C_2^*)\gamma_n,$$
where $\gamma_n = \big(h^{-k(p/2+1)} \mathrm{E}\,\delta_n^{kp/2}\big)^{1/p}$. To complete the proof of (64), it remains to estimate the first term on the right-hand side of (65) using Markov's inequality, taking into account the limit relations in (14) and the last estimate.
The assertion of Theorem 2 now follows from Lemma 9, the limit relation (64), and the simple estimate
$$\sup_{\mathbf{t} \in [0,1]^k} \Big| N^{-1}\sum_{j=1}^N \hat f_{n,h,j}(\mathbf{t}) - \mathrm{E}f_1(\mathbf{t}) \Big| \le \sup_{\mathbf{t} \in [0,1]^k} |\bar f_N(\mathbf{t}) - \mathrm{E}f_1(\mathbf{t})| + N^{-1}\sum_{j=1}^N \Delta_{n,h,j}.$$
Thus, Theorem 2 is proved. □

5. Conclusions

This paper considers the classical problem of nonparametric regression: estimating a regression function from observations of its noisy values at some known set of points of its domain (design points). In order to estimate the regression function, one needs to impose certain restrictions on the design. In numerous works by predecessors, it is usually assumed that the design points either are fixed and, in a certain sense, regularly fill the domain of the regression function (possibly a random process), or are random and consist of independent or weakly dependent random variables. Our idea is as follows: in order to reconstruct the function, it is enough to require only that the design points asymptotically densely fill its domain as the sample size increases. In contrast to previously known results, the new condition is insensitive to the nature of the dependence of the design points, is essentially necessary for reconstructing the function with one or another accuracy, and at the same time covers both the situation of a fixed design without regularity requirements and that of a random design that need not satisfy any weak dependence conditions.
We propose to implement this transparent idea by building the structure of Riemann integral sums into the new kernel estimators, which opens up the opportunity to study the asymptotic properties of the estimators through the proximity of integral sums to the corresponding integrals, without using probabilistic limit theorems. We emphasize that this approach allows us to reconstruct a regression function without any specification of the dependence structure of the design points and allows us to significantly weaken the previously known conditions on these characteristics.
In [51,52], we used local constant estimators, and in [53], local linear ones, but only in the case of a one-dimensional design. In the present paper, new universal local linear kernel-type estimators of an unknown regression function are constructed in the case of a multivariate design.

Author Contributions

Conceptualization, I.B.; Methodology, Y.L., I.B. and E.Y.; Software, V.K.; Formal analysis, Y.L.; Investigation, P.R.; Data curation, S.S.; Writing—original draft, Y.L.; Writing—review & editing, E.Y.; Visualization, P.R. and V.K.; Supervision, E.Y.; Project administration, I.B. and S.S. All authors read and agreed to the published version of this manuscript.

Funding

The study of Y. Linke and I. Borisov is supported by the Mathematical Center in Akademgorodok under agreement no. 075-15-2022-281 with the Ministry of Science and Higher Education of the Russian Federation.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable for studies that did not create any new data.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Fan, J.; Gijbels, I. Local Polynomial Modelling and Its Applications; Chapman and Hall: London, UK, 1996.
2. Fan, J.; Yao, Q. Nonlinear Time Series: Nonparametric and Parametric Methods; Springer: Berlin/Heidelberg, Germany, 2003.
3. Györfi, L.; Kohler, M.; Krzyzak, A.; Walk, H. A Distribution-Free Theory of Nonparametric Regression; Springer: Berlin/Heidelberg, Germany, 2002.
4. Härdle, W. Applied Nonparametric Regression; Cambridge University Press: Cambridge, UK, 1990.
5. Müller, H.-G. Nonparametric Regression Analysis of Longitudinal Data; Springer: New York, NY, USA, 1988.
6. Ahmad, I.A.; Lin, P.-E. Fitting a multiple regression function. J. Stat. Plan. Inference 1984, 9, 163–176.
7. Georgiev, A.A. Nonparametric multiple function fitting. Stat. Probab. Lett. 1990, 10, 203–211.
8. Chu, C.-K.; Marron, J.S. Choosing a kernel regression estimator. Stat. Sci. 1991, 6, 404–419.
9. Jones, M.C.; Davies, S.J.; Park, B.U. Versions of kernel-type regression estimators. J. Am. Stat. Assoc. 1994, 89, 825–832.
10. Georgiev, A.A. Asymptotic properties of the multivariate Nadaraya–Watson regression function estimate: The fixed design case. Stat. Probab. Lett. 1989, 7, 35–40.
11. Härdle, W.; Luckhaus, S. Uniform consistency of a class of regression function estimators. Ann. Stat. 1984, 12, 612–623.
12. Beran, J.; Feng, Y. Local polynomial estimation with a FARIMA-GARCH error process. Bernoulli 2001, 7, 733–750.
13. Benelmadani, D.; Benhenni, K.; Louhichi, S. Trapezoidal rule and sampling designs for the nonparametric estimation of the regression function in models with correlated errors. Statistics 2020, 54, 59–96.
14. Tang, X.; Xi, M.; Wu, Y.; Wang, X. Asymptotic normality of a wavelet estimator for asymptotically negatively associated errors. Stat. Probab. Lett. 2018, 140, 191–201.
15. Benhenni, K.; Hedli-Griche, S.; Rachdi, M. Estimation of the regression operator from functional fixed-design with correlated errors. J. Multivar. Anal. 2010, 101, 476–490.
16. Gu, W.; Roussas, G.G.; Tran, L.T. On the convergence rate of fixed design regression estimators for negatively associated random variables. Stat. Probab. Lett. 2007, 77, 1214–1224.
17. Wu, J.S.; Chu, C.K. Nonparametric estimation of a regression function with dependent observations. Stoch. Proc. Their Appl. 1994, 50, 149–160.
18. Hansen, B.E. Uniform convergence rates for kernel estimation with dependent data. Econom. Theory 2008, 24, 726–748.
19. Zhou, X.; Zhu, F. Asymptotics for L1-wavelet method for nonparametric regression. J. Inequal. Appl. 2020, 2020, 216.
20. Gasser, T.; Engel, J. The choice of weights in kernel regression estimation. Biometrika 1990, 77, 377–381.
21. Linton, O.B.; Jacho-Chavez, D.T. On internally corrected and symmetrized kernel estimators for nonparametric regression. Test 2010, 19, 166–186.
22. Chu, C.K.; Deng, W.-S. An interpolation method for adapting to sparse design in multivariate nonparametric regression. J. Stat. Plan. Inference 2003, 116, 91–111.
23. Nadaraya, E.A. Remarks on non-parametric estimates for density functions and regression curves. Theory Probab. Appl. 1970, 15, 134–137.
24. Liero, H. Strong uniform consistency of nonparametric regression function estimates. Probab. Theory Relat. Fields 1989, 82, 587–614.
25. Mack, Y.P.; Silverman, B.W. Weak and strong uniform consistency of kernel regression estimates. Z. Wahrscheinlichkeitstheor. Verw. Geb. 1982, 61, 405–415.
26. Devroye, L.P. The uniform convergence of the Nadaraya–Watson regression function estimate. Can. J. Stat. 1979, 6, 179–191.
27. Müller, H.-G. Density adjusted kernel smoothers for random design nonparametric regression. Stat. Probab. Lett. 1997, 36, 161–172.
28. Gu, J.; Li, Q.; Yang, J.-C. Multivariate local polynomial kernel estimators: Leading bias and asymptotic distribution. Econom. Rev. 2015, 34, 979–1010.
29. Ruppert, D.; Wand, M.P. Multivariate locally weighted least squares regression. Ann. Stat. 1994, 22, 1346–1370.
30. Li, Q.; Lu, X.; Ullah, A. Multivariate local polynomial regression for estimating average derivatives. Nonparametr. Stat. 2003, 15, 607–624.
31. Roussas, G.G. Nonparametric regression estimation under mixing conditions. Stoch. Proc. Their Appl. 1990, 36, 107–116.
32. Masry, E. Nonparametric regression estimation for dependent functional data. Stoch. Proc. Their Appl. 2005, 115, 155–177.
33. Kulik, R.; Lorek, P. Some results on random design regression with long memory errors and predictors. J. Stat. Plan. Inference 2011, 141, 508–523.
34. Kulik, R.; Wichelhaus, C. Nonparametric conditional variance and error density estimation in regression models with dependent errors and predictors. Electron. J. Stat. 2011, 5, 856–898.
35. Jiang, J.; Mack, Y.P. Robust local polynomial regression for dependent data. Stat. Sin. 2001, 11, 705–722.
36. Li, X.; Yang, W.; Hu, S. Uniform convergence of estimator for nonparametric regression with dependent data. J. Inequal. Appl. 2016, 2016, 142.
37. Hong, S.Y.; Linton, O.B. Asymptotic properties of a Nadaraya–Watson type estimator for regression functions of infinite order. arXiv 2016, arXiv:1604.06380.
38. Shen, J.; Xie, Y. Strong consistency of the internal estimator of nonparametric regression with dependent data. Stat. Probab. Lett. 2013, 83, 1915–1925.
39. Masry, E. Multivariate local polynomial regression for time series: Uniform strong consistency and rates. J. Time Ser. Anal. 1996, 17, 571–599.
40. Masry, E. Long-range dependence: Strong consistency and rates. IEEE Trans. Inf. Theory 2001, 47, 2863–2875.
41. Masry, E.; Fan, J. Local polynomial estimation of regression functions for mixing processes. Scand. Stat. Theory Appl. 1997, 24, 165–179.
42. Gao, J.; Kanaya, S.; Li, D.; Tjostheim, D. Uniform consistency for nonparametric estimators in null recurrent time series. Econom. Theory 2015, 31, 911–952.
43. Wang, Q.; Chan, N. Uniform convergence rates for a class of martingales with application in non-linear cointegrating regression. Bernoulli 2014, 20, 207–230.
44. Chan, N.; Wang, Q. Uniform convergence for Nadaraya–Watson estimators with nonstationary data. Econom. Theory 2014, 30, 1110–1133.
45. Linton, O.; Wang, Q. Nonparametric transformation regression with nonstationary data. Econom. Theory 2016, 32, 1–29.
46. Karlsen, H.A.; Myklebust, T.; Tjøstheim, D. Nonparametric estimation in a nonlinear cointegration type model. Ann. Stat. 2007, 35, 252–299.
47. Wang, Q.; Phillips, P.C.B. Structural nonparametric cointegrating regression. Econometrica 2009, 77, 1901–1948.
48. Wang, Q.Y.; Phillips, P.C.B. Asymptotic theory for local time density estimation and nonparametric cointegrating regression. Econom. Theory 2009, 25, 710–738.
49. Einmahl, U.; Mason, D.M. Uniform in bandwidth consistency of kernel-type function estimators. Ann. Stat. 2005, 33, 1380–1403.
50. Liang, H.-Y.; Jing, B.-Y. Asymptotic properties for estimates of nonparametric regression models based on negatively associated sequences. J. Multivar. Anal. 2005, 95, 227–245.
51. Borisov, I.S.; Linke, Y.Y.; Ruzankin, P.S. Universal weighted kernel-type estimators for some class of regression models. Metrika 2021, 84, 141–166.
52. Linke, Y.; Borisov, I.; Ruzankin, P. Universal kernel-type estimation of random fields. Statistics 2023, 57, 785–810.
53. Linke, Y.; Borisov, I.; Ruzankin, P.; Kutsenko, V.; Yarovaya, E.; Shalnova, S. Universal local linear kernel estimators in nonparametric regression. Mathematics 2022, 10, 2693.
54. Linke, Y.Y.; Borisov, I.S. Insensitivity of Nadaraya–Watson estimators to design correlation. Commun. Stat. Theory Methods 2022, 51, 6909–6918.
55. Linke, Y.Y. Towards insensitivity of Nadaraya–Watson estimators to design correlation. Theory Probab. Appl. 2023, 68, 198–210.
56. Linke, Y.Y. Asymptotic properties of one-step M-estimators. Commun. Stat. Theory Methods 2019, 48, 4096–4118.
57. Linke, Y.Y.; Borisov, I.S. Constructing explicit estimators in nonlinear regression problems. Theory Probab. Appl. 2018, 63, 22–44.
58. Linke, Y.Y.; Borisov, I.S. Constructing initial estimators in one-step estimation procedures of nonlinear regression. Stat. Probab. Lett. 2017, 120, 87–94.
59. Linke, Y.Y. On sufficient conditions for the consistency of local linear kernel estimators. Math. Notes 2023, 114, 283–296.
60. Li, Y.; Hsing, T. Uniform convergence rates for nonparametric regression and principal component analysis in functional/longitudinal data. Ann. Stat. 2010, 38, 3321–3351.
61. Hall, P.; Müller, H.-G.; Wang, J.-L. Properties of principal component methods for functional and longitudinal data analysis. Ann. Stat. 2006, 34, 1493–1517.
62. Zhou, L.; Lin, H.; Liang, H. Efficient estimation of the nonparametric mean and covariance functions for longitudinal and sparse functional data. J. Am. Stat. Assoc. 2018, 113, 1550–1564.
63. Yao, F. Asymptotic distributions of nonparametric regression estimators for longitudinal or functional data. J. Multivar. Anal. 2007, 98, 40–56.
64. Yao, F.; Müller, H.-G.; Wang, J.-L. Functional data analysis for sparse longitudinal data. J. Am. Stat. Assoc. 2005, 100, 577–590.
65. Zhang, J.-T.; Chen, J. Statistical inferences for functional data. Ann. Stat. 2007, 35, 1052–1079.
66. Zhang, X.; Wang, J.-L. From sparse to dense functional data and beyond. Ann. Stat. 2016, 44, 2281–2321.
67. Kokoszka, P.; Reimherr, M. Introduction to Functional Data Analysis; Chapman and Hall/CRC: London, UK, 2017.
68. Linke, Y.Y.; Borisov, I.S. Universal nonparametric kernel-type estimators for the mean and covariance functions of a stochastic process. Theory Probab. Appl. 2024, 69, 35–58.
69. Linke, Y. Kernel estimators for the mean function of a stochastic process under sparse design conditions. Siberian Adv. Math. 2022, 32, 269–276.
70. Linke, Y. Mean function estimation for a noisy random process under a sparse data condition. Chebyshevskii Sb. 2023, 24, 112–125. (In Russian)
71. Bulinski, A. Forward selection of relevant factors by means of MDR-EFE method. Mathematics 2024, 12, 831.
72. Hsing, T.; Eubank, R. Theoretical Foundations of Functional Data Analysis, with an Introduction to Linear Operators; Wiley: Hoboken, NJ, USA, 2015.
73. Müller, H.-G. Functional modelling and classification of longitudinal data. Scand. J. Stat. 2005, 32, 223–246.
74. Wu, H.; Zhang, J.-T. Nonparametric Regression Methods for Longitudinal Data Analysis: Mixed-Effects Modeling Approaches; John Wiley and Sons: Hoboken, NJ, USA, 2006.
75. Zheng, S.; Yang, L.; Härdle, W. A smooth simultaneous confidence corridor for the mean of sparse functional data. J. Am. Stat. Assoc. 2014, 109, 661–673.
76. Wang, J.-L.; Chiou, J.-M.; Müller, H.-G. Functional data analysis. Annu. Rev. Stat. Appl. 2016, 3, 257–295.
77. Chentsov, N.N. Weak convergence of stochastic processes whose trajectories have no discontinuities of the second kind and the heuristic approach to the Kolmogorov–Smirnov tests. Theory Probab. Appl. 1956, 1, 140–144.
78. Rio, E. Moment inequalities for sums of dependent random variables under projective conditions. J. Theor. Probab. 2009, 22, 146–163.