Article

Function-Theoretic and Probabilistic Approaches to the Problem of Recovering Functions from Korobov Classes in the Lebesgue Metric

by Aksaule Zh. Zhubanysheva 1, Galiya E. Taugynbayeva 1,*, Nurlan Zh. Nauryzbayev 1, Anar A. Shomanova 1 and Alibek T. Apenov 2
1 Faculty of Mechanics and Mathematics, L.N. Gumilyov Eurasian National University, Satpayev Str., 2, Astana 010008, Kazakhstan
2 “Nazarbayev Intellectual School of Science and Mathematics in Nura District of Astana”, Branch of Autonomous Educational Organization “Nazarbayev Intellectual Schools”, Hussein ben Talal Str., 21, Astana 010000, Kazakhstan
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(21), 3363; https://doi.org/10.3390/math13213363
Submission received: 11 September 2025 / Revised: 17 October 2025 / Accepted: 17 October 2025 / Published: 22 October 2025

Abstract

In this article, function-theoretic and probabilistic approaches to the recovery of functions from Korobov classes in the Lebesgue metric are considered. Exact order estimates are obtained for the errors of recovering functions, in the uniform metric, from both accurate and inaccurate information given by the trigonometric Fourier–Lebesgue coefficients of the recovered function. Within these settings, optimal computational aggregates (optimal recovery methods) are constructed. The boundary of inaccurate information (the limiting error $\tilde\varepsilon_N$) that preserves the order of recovery corresponding to accurate information is identified. Furthermore, a set of computational aggregates is constructed whose limiting errors do not exceed $\tilde\varepsilon_N$. A procedure for constructing a probability measure on functional classes is presented, and upper bounds for the mean-square recovery error with respect to these measures on Korobov classes are established. Numerical experiments were conducted to validate the theoretical results. These experiments showed that, for the function corresponding to the lower bound in Theorem 1 (cases C(N)D-2 and C(N)D-3), the ratio between the norm of the function and the approximation error remains constant under uniform weighting and increases indefinitely under logarithmic weighting as the number of terms $N$ grows.

1. Introduction

Mathematical models are used to describe a wide range of physical phenomena. The main components of such models are functions, derivatives, integrals, and solutions of partial differential equations. However, in practice, the available information about a model is often incomplete and may contain errors. In such cases, the problem arises of recovering functions from accurate and inaccurate information while determining optimal computational methods for this purpose.
There are two principal approaches to recovery problems: the function-theoretic and the probabilistic. The first approach essentially consists of comparing various approximation methods, where the maximum deviation is taken as a measure of the efficiency of computational procedures on a certain functional compact. The distinctive feature of the second approach is that the deviation of computational procedures from the accurate value of the approximated object is treated in a probabilistic sense. In this case, the mean-square deviation of computational results from the accurate value of the approximated object is estimated (see [1]).
This paper considers the problem of recovering functions $f$ from Korobov classes $E_s^r$ using these two approaches, in the framework of the computational (numerical) diameter, abbreviated as C(N)D. Conceptually, the C(N)D study consists of three main parts:
C(N)D-1. Recovery from accurate information, depending on the type of functionals and algorithms used for processing the obtained numerical data. This part encompasses classical approximation theory, numerical analysis, computational mathematics, and function theory (Fourier series, bases, and related topics).
C(N)D-2. In optimal computational methods, the values of functionals can be replaced by values close to them while preserving optimality. The problem of finding the largest such permissible deviations forms an independent optimization problem, known as the problem of limiting errors.
C(N)D-3. Investigation of whether there exist other computational methods with structures similar to those of the optimal ones under consideration, and possibly even more general, that provide higher-order accuracy in terms of the limiting error.

2. Statement of the Problem

In the framework of the general theory of recovery (for the necessary definitions, notation, background, and comparisons with related studies, see, for example, [2,3,4]), the concept of the computational (numerical) diameter involves the sequential formulation of three interconnected problems.
The general recovery problem is to reconstruct an operator
$$T : F \to Y,$$
where $F$ is a class of functions and $Y$ is a normed space of functions defined on the domains $\Omega_F$ and $\Omega_Y$, respectively.
The available numerical information about a function $f \in F$ is given by
$$l^{(N)} = \big(l_N^{(1)}(f), \dots, l_N^{(N)}(f)\big),$$
which consists of $N$ functionals (not necessarily linear in the general case).
An information processing algorithm
$$\varphi_N(z_1, \dots, z_N; x) : \mathbb{C}^N \times \Omega_Y \to \mathbb{C}$$
is a mapping such that, for any fixed $(z_1, \dots, z_N) \in \mathbb{C}^N$, the function of the variable $x$ belongs to $Y$. Throughout the paper, the notation $\varphi_N \in Y$ means that $\varphi_N$ satisfies all of the above conditions. We denote by $\{\varphi_N\}_Y$ the set of all such algorithms belonging to $Y$.
We now define a computational aggregate (or computational method) corresponding to the pair $(l^{(N)}, \varphi_N)$. First, the accurate values $l_N^{(\tau)}(f)$ are replaced, with a given precision $\varepsilon_N^{(\tau)} \ge 0$, by approximate values
$$z_\tau \equiv z_\tau(f), \qquad \big|z_\tau(f) - l_N^{(\tau)}(f)\big| \le \varepsilon_N^{(\tau)}, \qquad \tau = 1, \dots, N.$$
Then the numbers $z_1(f), \dots, z_N(f)$ are processed by the algorithm $\varphi_N$ to produce a function
$$\varphi_N\big(z_1(f), \dots, z_N(f); x\big),$$
which constitutes the computational aggregate $(l^{(N)}, \varphi_N) \equiv \varphi_N(z_1(f), \dots, z_N(f); x)$. Let $D_N \equiv D_N(F)_Y$ denote the set of all such pairs $(l^{(N)}, \varphi_N)$.
The central notion in the C(N)D framework is the following quantity:
$$\delta_N(\varepsilon_N; D_N)_Y \equiv \delta_N(\varepsilon_N; T; F; D_N)_Y \equiv \inf_{(l^{(N)}, \varphi_N) \in D_N} \delta_N\big(\varepsilon_N; (l^{(N)}, \varphi_N)\big)_Y,$$
where
$$\delta_N\big(\varepsilon_N; (l^{(N)}, \varphi_N)\big)_Y \equiv \sup_{\substack{f \in F \\ |\gamma_N^{(\tau)}| \le 1,\ \tau = 1, \dots, N}} \Big\| Tf(\cdot) - \varphi_N\big(l_N^{(1)}(f) + \gamma_N^{(1)} \varepsilon_N^{(1)}, \dots, l_N^{(N)}(f) + \gamma_N^{(N)} \varepsilon_N^{(N)}; \cdot\big) \Big\|_Y.$$
The notation $A_N \ll B_N$ and $A_N \asymp B_N$ will be used to denote the relation $A_N \le c B_N$ and the simultaneous validity of $A_N \ll B_N$ and $B_N \ll A_N$, respectively, where $\{A_N\}$ and $\{B_N\}$ are nonnegative sequences and $c > 0$ is a constant independent of $N$.
Within the framework of the above notation and definitions, the problem of optimal recovery from accurate and inaccurate information, formalized in terms of the computational (numerical) diameter, can be viewed, in a collective sense, as the sequential solution of three interrelated problems. Given fixed objects F , Y , T , D N (as specified below), the following should be performed:
C(N)D-1: Find the order $\delta_N(0; D_N)_Y \equiv \delta_N(0; T; F; D_N)_Y$, which is the information power of a set of computational aggregates $D_N \equiv D_N(F)_Y$.
C(N)D-2: Construct a particular computational aggregate $(\bar l^{(N)}, \bar\varphi_N)$ from $D_N \equiv D_N(F)_Y$ supporting the order $\delta_N(0; D_N)_Y$, for which we study the problem of existence and determination of a sequence $\tilde\varepsilon_N \equiv \tilde\varepsilon_N\big(D_N; (\bar l^{(N)}, \bar\varphi_N)\big)_Y \equiv \big(\tilde\varepsilon_N^{(1)}, \dots, \tilde\varepsilon_N^{(N)}\big)$ with non-negative components—the limiting error (corresponding to the computational aggregate)—such that
$$\delta_N(0; D_N)_Y \asymp \delta_N\big(\tilde\varepsilon_N; (\bar l^{(N)}, \bar\varphi_N)\big)_Y \equiv$$
$$\equiv \sup\Big\{ \big\| Tf(\cdot) - \bar\varphi_N\big(z_1(f), \dots, z_N(f); \cdot\big) \big\|_Y : f \in F,\ \big|\bar l^{(\tau)}(f) - z_\tau(f)\big| \le \tilde\varepsilon_N^{(\tau)},\ \tau = 1, \dots, N \Big\},$$
while simultaneously performing
$$\forall \{\eta_N\}\ \big(0 < \eta_N < \eta_{N+1},\ \eta_N \uparrow +\infty\big): \quad \varlimsup_{N \to +\infty} \frac{\delta_N\big(\eta_N \tilde\varepsilon_N; (\bar l^{(N)}, \bar\varphi_N)\big)_Y}{\delta_N(0; D_N)_Y} = +\infty.$$
C(N)D-3: The massiveness of the limiting error $\tilde\varepsilon_N$ is established: one finds as large as possible a set $D_N\big(\bar l^{(N)}, \bar\varphi_N\big)$ of computational aggregates $(l^{(N)}, \varphi_N)$ (usually related in structure to the original $(\bar l^{(N)}, \bar\varphi_N)$), constructed over all possible (not necessarily linear) functionals $l_1, \dots, l_N$, such that the following relation is satisfied for each of them:
$$\forall \{\eta_N\}\ \big(0 < \eta_N < \eta_{N+1},\ \eta_N \uparrow +\infty\big): \quad \varlimsup_{N \to +\infty} \frac{\delta_N\big(\eta_N \tilde\varepsilon_N; (l^{(N)}, \varphi_N)\big)_Y}{\delta_N(0; D_N)_Y} = +\infty.$$
In the probabilistic approach, the supremum in $\delta_N\big(0; T; F; (l^{(N)}, \varphi_N)\big)_Y$ is replaced by integration with respect to a probability measure $\mu$ defined on the functional class $F$:
$$\delta_N\big(0; T; F; \mu; (l^{(N)}, \varphi_N)\big)_Y \equiv \int_F \big\| Tf(\cdot) - \varphi_N\big(l_1(f), \dots, l_N(f); \cdot\big) \big\|_Y \, d\mu(f).$$
The historical development of problems C(N)D-1, C(N)D-2, and C(N)D-3 within the general theory of function recovery has been extensively studied in the mathematical literature. The quantity δ N ( 0 ; T ; F ; D N ) Y appearing in C(N)D-1 was introduced several decades ago, and numerous studies, each employing its own notation and formulation, have been devoted to its investigation. For example, the works of A.N. Kolmogorov [5]; A. Sard [6]; S.M. Nikol’skii [7]; S.B. Stechkin [8]; N.M. Korobov [9]; A.D. Ioffe and V.M. Tikhomirov [10]; C.A. Micchelli and T.J. Rivlin [11]; N.P. Korneichuk [12]; A. Pietsch [13]; J.F. Traub, G. Wasilkowski, H. Wozniakowski, and E. Novak [14,15]; L. Plaskota [16]; K.Yu. Osipenko, G.G. Magaril-Il’yaev, and A.G. Marchuk [17,18,19]; and S. Heinrich [20], among many others.
The principal objective of these studies is to obtain order estimates for δ N ( 0 ; T ; F ; D N ) Y and to identify optimal computational aggregates (methods) that achieve such bounds. Problem C(N)D-1, in its various specializations, encompasses a wide range of classical approximation problems, including the diameters of Kolmogorov, Korneichuk, Tikhomirov, and Temlyakov, as well as approximation problems involving Fourier series and their averages, orthogonal bases, wavelet systems, and greedy algorithms. Problem C(N)D-1 addresses recovery from accurate information, whereas problems C(N)D-2 and C(N)D-3 concern recovery from inaccurate information.
The study of recovery from inaccurate information has been developed in the works of G.G. Magaril-Il’yaev, K.Yu. Osipenko, A.G. Marchuk, J.F. Traub, G. Wasilkowski, H. Wozniakowski, and L. Plaskota. In the research conducted by Magaril-Il’yaev, Osipenko, and Marchuk, the focus is placed on obtaining exact recovery under a fixed error level ε > 0 . In contrast, the works of Traub, Wasilkowski, Wozniakowski, and Plaskota investigate the minimization of the total computational cost of obtaining approximate values, which differs in formulation from the first part of C(N)D-2. The second part of C(N)D-2 and problem C(N)D-3 constitute new problem formulations within this framework.
The paper is devoted to the following concretization of the general C(N)D problem, in which $Tf \equiv f(x)$ is the identity operator, $F = E_s^r$ denotes the Korobov class, and $Y = L_\infty(0,1)^s$ is considered in the function-theoretic approach, while $Y = L_2(0,1)^s$ is used in the probabilistic approach.
The set of computational aggregates $D_N$ is defined as
$$D_N = \Phi_N(F) \times \{\varphi_N\}_Y =$$
$$= \Big\{ l_1(f) = \hat f(m^{(1)}), \dots, l_N(f) = \hat f(m^{(N)}) : f \in F,\ m^{(1)}, \dots, m^{(N)} \in \mathbb{Z}^s \Big\} \times \{\varphi_N\}_Y,$$
where
$$\hat f(m) = \int_{[0,1]^s} f(x)\, e^{-2\pi i (m, x)}\, dx$$
are the trigonometric Fourier–Lebesgue coefficients of the function $f$.
A brief overview of known results on the recovery of functions from Korobov classes is given below.
S.A. Smolyak [21] constructed grids of the form
$$\xi = \Big( \frac{k_1}{2^{\tau_1}}, \dots, \frac{k_s}{2^{\tau_s}} \Big), \quad n - s \le \tau_1 + \dots + \tau_s \le n, \quad 0 \le k_j < 2^{\tau_j}, \quad \tau_j, k_j \in \mathbb{Z}_+, \quad j = 1, \dots, s,$$
for which the approximation error of the corresponding linear operators $\varphi_N(z_1, \dots, z_N; x)$ admits the upper bound
$$\ll \frac{(\ln N)^{r(s-1)}}{N^{r-1}}.$$
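As an illustration, the Smolyak grid displayed above can be enumerated directly. The following sketch is our own illustrative code (the function name smolyak_grid is not from the cited work); it collects the distinct nodes for small $n$ and $s$, using exact rational arithmetic so that coinciding nodes from nested dyadic levels are deduplicated reliably.

```python
from fractions import Fraction
from itertools import product

def smolyak_grid(n, s):
    """Distinct Smolyak nodes xi = (k_1/2^tau_1, ..., k_s/2^tau_s) with
    n - s <= tau_1 + ... + tau_s <= n and 0 <= k_j < 2^tau_j."""
    nodes = set()
    for tau in product(range(n + 1), repeat=s):  # candidate level vectors
        if not (n - s <= sum(tau) <= n):
            continue
        # all index vectors k admissible for this level vector tau
        for k in product(*(range(2 ** t) for t in tau)):
            nodes.add(tuple(Fraction(kj, 2 ** tj) for kj, tj in zip(k, tau)))
    return sorted(nodes)
```

Because the dyadic grids of the individual levels are nested, the union is far smaller than the sum of the level-wise grid sizes; for example, $s = 2$, $n = 2$ yields only 8 distinct nodes.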
K. Sherniyazov [22] obtained matching upper and lower bounds for the recovery of functions from Korobov classes in the Lebesgue metric $L_q$, $2 \le q \le \infty$, which coincide on the power scale:
$$\frac{(\ln N)^{r(s-1)}}{N^{\,r-1+\frac{1}{q}}} \ll \delta_N\big(0; E_s^r; P_N \times \{\varphi_N\}_{L_\infty(0,1)^s}\big)_{L_q(0,1)^s} \ll \frac{(\ln N)^{r(s-1)}}{N^{\,r-1+\frac{1}{q}}},$$
where
$$P_N = \big\{ l_1(f) = f(\xi^1), \dots, l_N(f) = f(\xi^N) : \xi^1, \dots, \xi^N \in [0,1]^s \big\}.$$
It is further shown that a grid $\{\xi^k\}_{k=1}^N$ and a function $\varphi_N(z_1, \dots, z_N; x)$ realize the optimal upper bound on this power scale.
N.M. Korobov [9,23] proposed recovery operators in which, in addition to the values of the approximated function at the grid nodes, the values of its derivatives at the same nodes are also used. I. Kovaleva [24] extended the corresponding theorem of Korobov by providing an explicit algorithm for constructing such a recovery operator. In a study of Sh. Azhgaliyev [25], order estimates of recovery errors in the Lebesgue metrics L 2 and L were obtained. N. Temirgaliyev, K. Sherniyazov, and M. Berikhanova [26] presented a complete solution to the C(N)D problem for the recovery of functions from Korobov classes in the Hilbert metric L 2 .

3. Necessary Definitions and Auxiliary Statements

The Korobov class $E_s^r$ ($r > 1$, $s = 1, 2, \dots$) [9] is defined as the set of all integrable functions that are 1-periodic in each variable and have the trigonometric Fourier–Lebesgue expansion
$$f(x) = \sum_{m \in \mathbb{Z}^s} \hat f(m)\, e^{2\pi i (m, x)}, \quad x = (x_1, \dots, x_s),$$
where the Fourier–Lebesgue coefficients satisfy the inequality
$$\big| \hat f(m_1, \dots, m_s) \big| \le \frac{1}{(\bar m_1 \cdots \bar m_s)^r}, \quad m = (m_1, \dots, m_s) \in \mathbb{Z}^s,$$
with $\bar m_j = \max\{1, |m_j|\}$ for $j = 1, \dots, s$. The trigonometric Fourier–Lebesgue coefficients of $f$ are defined by
$$\hat f(m_1, \dots, m_s) = \int_{[0,1]^s} f(x_1, \dots, x_s)\, e^{-2\pi i (m_1 x_1 + \dots + m_s x_s)}\, dx_1 \cdots dx_s.$$
For any positive $R$, the set $\Gamma_R \equiv \Gamma_R(s)$ is defined as the hyperbolic cross
$$\Gamma_R = \big\{ m = (m_1, \dots, m_s) \in \mathbb{Z}^s : \overline{\overline{m}} \le R \big\},$$
where, for every $m = (m_1, \dots, m_s) \in \mathbb{Z}^s$, $\overline{\overline{m}} = \prod_{j=1}^s \bar m_j$.
The norm in the space $L_q(0,1)^s$ ($1 \le q < \infty$) is denoted, as usual, by
$$\| f \|_{L_q} = \Big( \int_{[0,1]^s} |f(x)|^q\, dx \Big)^{1/q}.$$
In addition, throughout the paper, $L_\infty(0,1)^s$ is understood as $C([0,1]^s)$, the space of continuous functions on $[0,1]^s$.
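The hyperbolic cross $\Gamma_R$ is a finite set and, for small $s$, can be enumerated by brute force. The sketch below is our own illustrative code (the name hyperbolic_cross is not from the paper); such an enumeration suffices, for instance, for the two-dimensional experiments of Section 5.

```python
from itertools import product

def hyperbolic_cross(R, s):
    """Enumerate Gamma_R = { m in Z^s : prod_j max(1, |m_j|) <= R }."""
    M = int(R)  # every coordinate of a point of Gamma_R satisfies |m_j| <= R
    cross = []
    for m in product(range(-M, M + 1), repeat=s):
        weight = 1
        for mj in m:
            weight *= max(1, abs(mj))  # the product of the bars of m
        if weight <= R:
            cross.append(m)
    return cross
```

By Lemma 2 below, the cardinality of $\Gamma_R$ grows like $R (\ln R)^{s-1}$, so the cross is dramatically smaller than the full cube $[-R, R]^s \cap \mathbb{Z}^s$ already for moderate $s$.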
The following lemmas hold.
Lemma 1 
(see [9]). Let real numbers $\alpha > 1$ and $t \ge 1$ be given. Then the following inequality holds:
$$\sum_{m_1 \cdots m_s \ge t} \frac{1}{(m_1 \cdots m_s)^\alpha} \le C\, \frac{(1 + \ln t)^{s-1}}{t^{\alpha - 1}},$$
where the summation extends over all systems of positive integers $m_1, \dots, m_s$ such that the product $m_1 \cdots m_s$ is greater than or equal to $t$, and $C > 0$ depends only on $\alpha$ and $s$.
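Lemma 1 is easy to probe numerically. The sketch below is our own rough check (function names tail_sum and lemma1_bound, and the truncation level M, are our choices): it compares a truncated version of the tail sum with the bound $(1 + \ln t)^{s-1} / t^{\alpha-1}$ for $s = 1$ and $s = 2$.

```python
import math

def tail_sum(alpha, t, s, M=2000):
    """Approximate the sum over positive integers m_1, ..., m_s with
    m_1 * ... * m_s >= t of (m_1 * ... * m_s)^(-alpha), truncating
    each index at M.  Only s = 1 and s = 2 are implemented, for brevity."""
    if s == 1:
        return sum(m ** -alpha for m in range(math.ceil(t), M + 1))
    total = 0.0
    for m1 in range(1, M + 1):
        start = max(1, math.ceil(t / m1))  # smallest m2 with m1*m2 >= t
        total += m1 ** -alpha * sum(m2 ** -alpha for m2 in range(start, M + 1))
    return total

def lemma1_bound(alpha, t, s):
    return (1 + math.log(t)) ** (s - 1) / t ** (alpha - 1)
```

For $\alpha = 2$ and $t = 10$, the ratio of the (truncated) tail sum to the bound stays within a modest constant in both cases, consistent with the lemma.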
Lemma 2 
(see [9]). Let $s$ ($s = 1, 2, \dots$) and $R > 0$ be given. Then the following asymptotic relation holds:
$$\big| \Gamma_R \big| = \sum_{\substack{m = (m_1, \dots, m_s) \in \mathbb{Z}^s \\ \bar m_1 \cdots \bar m_s \le R}} 1 \asymp R\, (\ln R)^{s-1}.$$
Lemma 3 
(see [25]). Let $s$ ($s = 1, 2, \dots$) and $r > 1$ be given. Then the following asymptotic relation holds:
$$\delta_N\big(0; T; E_s^r; \Phi_N(E_s^r) \times \{\varphi_N\}_{L_\infty(0,1)^s}\big)_{L_\infty(0,1)^s} \asymp \frac{(\ln N)^{r(s-1)}}{N^{r-1}}, \quad N = 1, 2, \dots$$
The transition to the probabilistic framework is achieved by introducing a measure on the considered functional class. This allows the recovery error to be interpreted as a random variable and, consequently, enables the examination of its expected values, in particular, the mean-square error. In this setting, the properties of computational aggregates can be studied not only in the worst case but also in the probabilistic sense, which considerably broadens the scope of analysis and provides a deeper understanding of the efficiency of recovery methods.
The procedure for constructing a measure on functional classes is described below (see, for example, [27,28,29,30,31,32,33]; see also [34,35,36,37] for related studies).
Let $B(\mathbb{C})$ denote the Borel $\sigma$-algebra on the complex plane $\mathbb{C}$. For each $m \in \mathbb{Z}^s$, a measure $\nu_m$ is defined on $B(\mathbb{C})$.
Define
$$A := \big\{ \{a_m\}_{m \in \mathbb{Z}^s} : a_m \in \mathbb{C} \big\}.$$
Let $\Sigma_A$ denote the $\sigma$-algebra generated by all finite-dimensional cylinder sets
$$C\big(J; E^{(m_1)}, \dots, E^{(m_k)}\big) = \big\{ \{a_m\} \in A : a_{m_j} \in E^{(m_j)},\ j = 1, \dots, k \big\},$$
where $J = \{m_1, \dots, m_k\} \subset \mathbb{Z}^s$ is a finite set and each $E^{(m_j)} \in B(\mathbb{C})$.
On the collection of cylinders, a pre-measure is defined by
$$\lambda\big( C(J; E^{(m_1)}, \dots, E^{(m_k)}) \big) = \prod_{j=1}^k \nu_{m_j}\big( E^{(m_j)} \big),$$
which, by Carathéodory's extension theorem, can be uniquely extended to a measure $P$ on $(A, \Sigma_A)$.
Define the mapping
$$S : E_s^r \to A, \qquad S(f) = \{ \hat f(m) \}_{m \in \mathbb{Z}^s},$$
where $\hat f(m)$ denotes the Fourier–Lebesgue coefficients of $f$.
For the $\sigma$-algebra of measurable subsets of $E_s^r$, consider the preimage
$$\Sigma_E := S^{-1}(\Sigma_A) = \big\{ F \subseteq E_s^r : S(F) \in \Sigma_A \big\}.$$
Equivalently, $\Sigma_E$ is generated by the sets of the form
$$\big\{ f \in E_s^r : \hat f(m_j) \in E^{(m_j)},\ j = 1, \dots, k \big\}, \quad k \in \mathbb{N},\ m_j \in \mathbb{Z}^s,\ E^{(m_j)} \in B(\mathbb{C}).$$
Finally, a measure on $(E_s^r, \Sigma_E)$ is defined by
$$\mu(F) := P\big( S(F) \big), \quad F \in \Sigma_E.$$
Lemma 4. 
For any $m \in \mathbb{Z}^s$ and any nonnegative measurable function $\varphi : \mathbb{C} \to [0, \infty)$, the following equality holds:
$$\int_{E_s^r} \varphi\big( \hat f(m) \big)\, \mu(df) = \int_{\mathbb{C}} \varphi(z)\, d\nu_m(z).$$
In particular, if $\int_{\mathbb{C}} |z|^2\, d\nu_m(z) < \infty$, then
$$\int_{E_s^r} \big| \hat f(m) \big|^2\, \mu(df) = \int_{\mathbb{C}} |z|^2\, d\nu_m(z).$$
Proof. 
For the indicator function
$$\chi_G(t) = \begin{cases} 1, & t \in G, \\ 0, & t \notin G, \end{cases}$$
and any Borel set $E \in B(\mathbb{C})$, one has
$$\int_{E_s^r} \chi_{\{f : \hat f(m) \in E\}}(f)\, d\mu = \mu\big( \{ f : \hat f(m) \in E \} \big) = P\big( \{ \{a_m\} : a_m \in E \} \big) = \nu_m(E) = \int_{\mathbb{C}} \chi_E(z)\, d\nu_m(z).$$
This identity extends to simple functions by the linearity of the integral, and to arbitrary $\varphi \ge 0$ by the monotone convergence theorem. □
For a finite set $\Lambda = \{ m^{(1)}, \dots, m^{(N)} \} \subset \mathbb{Z}^s$, define
$$\tilde\varphi_N\big( \hat f(m^{(1)}), \dots, \hat f(m^{(N)}); x \big) := \sum_{m \in \Lambda} \hat f(m)\, e^{2\pi i (m, x)}.$$
Lemma 5. 
Let $s$, $s = 1, 2, \dots$, and $r > 1$. Then the following equality holds:
$$\int_{E_s^r} \big\| f - \tilde\varphi_N\big( \hat f(m^{(1)}), \dots, \hat f(m^{(N)}); x \big) \big\|_{L_2}^2\, d\mu = \sum_{m \notin \Lambda} \int_{\mathbb{C}} |z|^2\, d\nu_m(z).$$
Proof. 
By Parseval's identity,
$$\big\| f - \tilde\varphi_N(f) \big\|_{L_2}^2 = \sum_{m \notin \Lambda} \big| \hat f(m) \big|^2.$$
Integrating with respect to $\mu$ and applying (1) yields the desired equality. □
Two particular specifications of the measure $\nu_m$ are considered below. Let
$$\Lambda = \Gamma_R := \Big\{ m = (m_1, \dots, m_s) \in \mathbb{Z}^s : \prod_{j=1}^s \max\{1, |m_j|\} \le R \Big\}, \qquad \rho(m) := \prod_{j=1}^s \max\{1, |m_j|\}^{-r}.$$
(a) Uniform circular model. Define a measure that is uniform over the disk of radius $\rho(m)$:
$$d\nu_m(z) = \frac{1}{\pi \rho(m)^2}\, dA(z), \quad |z| \le \rho(m),$$
where $dA(z)$ denotes the two-dimensional Lebesgue measure on $\mathbb{C}$. Then
$$\int_{\mathbb{C}} |z|^2\, d\nu_m(z) = \frac{\rho(m)^2}{2}.$$
Indeed,
$$\int_{\mathbb{C}} |z|^2\, d\nu_m(z) = \frac{1}{\pi \rho(m)^2} \int_{|z| \le \rho(m)} |z|^2\, dA(z) = \frac{1}{\pi \rho(m)^2} \int_0^{2\pi} \int_0^{\rho(m)} r^3\, dr\, d\theta = \frac{\rho(m)^2}{2}.$$
Hence,
$$\int_{E_s^r} \Big\| f - \sum_{m \in \Gamma_R} \hat f(m)\, e^{2\pi i (m, x)} \Big\|_{L_2}^2\, d\mu = \frac{1}{2} \sum_{m \notin \Gamma_R} \rho(m)^2.$$
(b) Gaussian model. Let
$$d\nu_m(z) = \frac{1}{\pi \tau(m)^2}\, e^{-|z|^2 / \tau(m)^2}\, dA(z), \quad z \in \mathbb{C},$$
that is, $\nu_m = \mathcal{N}_{\mathbb{C}}\big(0, \tau(m)^2\big)$ with $\tau(m)^2 \asymp \rho(m)^2$.
Then
$$\int_{\mathbb{C}} |z|^2\, d\nu_m(z) = \frac{1}{\pi \tau(m)^2} \int_{\mathbb{C}} |z|^2\, e^{-|z|^2 / \tau(m)^2}\, dA(z) = \tau(m)^2.$$
Indeed, in polar coordinates ($|z| = r$, $dA = r\, dr\, d\theta$),
$$\int_{\mathbb{C}} |z|^2\, d\nu_m(z) = \frac{2}{\tau(m)^2} \int_0^\infty r^3\, e^{-r^2 / \tau(m)^2}\, dr = \tau(m)^2,$$
since $\int_0^\infty u\, e^{-u}\, du = 1$.
Therefore,
$$\int_{E_s^r} \Big\| f - \sum_{m \in \Gamma_R} \hat f(m)\, e^{2\pi i (m, x)} \Big\|_{L_2}^2\, d\mu \asymp \sum_{m \notin \Gamma_R} \rho(m)^2.$$
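The two second-moment identities above are easy to sanity-check by Monte Carlo sampling. The sketch below is our own illustrative code (function names are ours): it draws from the uniform-disk and complex-Gaussian models and compares the empirical mean of $|z|^2$ with $\rho^2/2$ and $\tau^2$, respectively.

```python
import math
import random

random.seed(7)

def second_moment_uniform_disk(rho, n=200_000):
    """Monte Carlo estimate of E|z|^2 for z uniform on the disk |z| <= rho.
    If z is uniform on the disk, then |z|^2 = rho^2 * u with u ~ U(0,1),
    so the exact value is rho^2 / 2."""
    return sum(rho ** 2 * random.random() for _ in range(n)) / n

def second_moment_gaussian(tau, n=200_000):
    """Monte Carlo estimate of E|z|^2 for z ~ N_C(0, tau^2):
    Re z and Im z are independent N(0, tau^2 / 2), so E|z|^2 = tau^2."""
    sd = tau / math.sqrt(2.0)
    total = 0.0
    for _ in range(n):
        x, y = random.gauss(0.0, sd), random.gauss(0.0, sd)
        total += x * x + y * y
    return total / n
```

With 200,000 samples, both estimates agree with the closed-form values to well within one percent, which is the expected Monte Carlo accuracy at this sample size.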

4. Main Results and Their Proofs

The following theorem holds.
Theorem 1. 
For $s$, $s = 1, 2, \dots$, and $r > 1$, the following statements are valid ($R > 1$, $N \asymp R (\ln R)^{s-1}$, where $N$ is a positive integer).
C(N)D-1. For the Korobov class $E_s^r$, one has
$$\delta_N\big( 0; \Phi_N(E_s^r) \times \{\varphi_N\}_{L_\infty(0,1)^s} \big)_{L_\infty(0,1)^s} \asymp$$
$$\asymp \sup_{f \in E_s^r} \Big\| f(x) - \sum_{m \in \Gamma_R} \hat f(m)\, e^{2\pi i (m, x)} \Big\|_{L_\infty(0,1)^s} \asymp \frac{(\ln N)^{r(s-1)}}{N^{r-1}}.$$
C(N)D-2. For the computational aggregate
$$\big( \bar l^{(N)}, \bar\varphi_N \big) = \bar\varphi_N\big( \hat f(m^{(1)}), \dots, \hat f(m^{(N)}); x \big) = \sum_{m \in \Gamma_R} \hat f(m)\, e^{2\pi i (m, x)},$$
the value $\tilde\varepsilon_N = \frac{(\ln N)^{r(s-1)}}{N^r}$ is the limiting error. First,
$$\delta_N\big( 0; \Phi_N(E_s^r) \times \{\varphi_N\}_{L_\infty(0,1)^s} \big)_{L_\infty(0,1)^s} \asymp \delta_N\big( \tilde\varepsilon_N; (\bar l^{(N)}, \bar\varphi_N) \big)_{L_\infty(0,1)^s} =$$
$$= \sup_{\substack{f \in E_s^r \\ |\hat f(m^{(\tau)}) - z_\tau(f)| \le \tilde\varepsilon_N,\ \tau = 1, \dots, N}} \Big\| f(x) - \sum_{m \in \Gamma_R} z_m\, e^{2\pi i (m, x)} \Big\|_{L_\infty(0,1)^s} \asymp \frac{(\ln N)^{r(s-1)}}{N^{r-1}}.$$
Moreover, for any increasing sequence $\{\eta_N\}_{N=1}^\infty$ with $\eta_N \to +\infty$,
$$\lim_{N \to \infty} \frac{\delta_N\big( \eta_N \tilde\varepsilon_N; (\bar l^{(N)}, \bar\varphi_N) \big)_{L_\infty(0,1)^s}}{\delta_N\big( 0; \Phi_N(E_s^r) \times \{\varphi_N\}_{L_\infty(0,1)^s} \big)_{L_\infty(0,1)^s}} = +\infty.$$
C(N)D-3. Any computational aggregate constructed on an arbitrary finite spectrum of Fourier coefficients has a limiting error that does not exceed (in order) the limiting error of the aggregate $\sum_{m \in \Gamma_R} \hat f(m)\, e^{2\pi i (m, x)}$; that is, for any positive sequence $\{\eta_N\}_{N=1}^\infty$ increasing to $+\infty$ and for all computational aggregates $(l^{(N)}, \varphi_N)$ from $\Phi_N(E_s^r) \times \{\varphi_N\}_{L_\infty(0,1)^s}$, the following equality holds:
$$\lim_{N \to \infty} \frac{\delta_N\big( \eta_N \tilde\varepsilon_N; (l^{(N)}, \varphi_N) \big)_{L_\infty(0,1)^s}}{\delta_N\big( 0; \Phi_N(E_s^r) \times \{\varphi_N\}_{L_\infty(0,1)^s} \big)_{L_\infty(0,1)^s}} = +\infty.$$
Proof of Theorem 1. 
The part C(N)D-1 follows from Lemma 3. The proof now turns to recovery from inaccurate information. Throughout, set $\tilde\varepsilon_N = \frac{(\ln N)^{r(s-1)}}{N^r}$.
Fix $f \in E_s^r$. Then $f \in L_\infty(0,1)^s$ for $r > 1$. Let $N \in \mathbb{N}$, and choose $R > 1$ such that $N \asymp R (\ln R)^{s-1}$. Consider $\{\gamma_N^{(m)}\}_{m \in \Gamma_R}$ satisfying $|\gamma_N^{(m)}| \le 1$; then
$$\Big\| f(x) - \sum_{m \in \Gamma_R} \big( \hat f(m) + \tilde\varepsilon_N \gamma_N^{(m)} \big)\, e^{2\pi i (m, x)} \Big\|_{L_\infty} \le I_1 + \tilde\varepsilon_N I_2,$$
where
$$I_1 := \Big\| f(x) - \sum_{m \in \Gamma_R} \hat f(m)\, e^{2\pi i (m, x)} \Big\|_{L_\infty}, \qquad I_2 := \Big\| \sum_{m \in \Gamma_R} \gamma_N^{(m)}\, e^{2\pi i (m, x)} \Big\|_{L_\infty}.$$
Estimate of $I_1$. Using the Fourier representation and the triangle inequality,
$$I_1 = \Big\| \sum_{m \in \mathbb{Z}^s \setminus \Gamma_R} \hat f(m)\, e^{2\pi i (m, x)} \Big\|_{L_\infty} \le \sum_{m \in \mathbb{Z}^s \setminus \Gamma_R} \big| \hat f(m) \big|.$$
By the definition of $E_s^r$, $|\hat f(m)| \le (\bar m_1 \cdots \bar m_s)^{-r}$. Since $m \notin \Gamma_R$ implies $\bar m_1 \cdots \bar m_s > R$, Lemma 1 (with $\alpha = r$, $t = R$) yields
$$I_1 \le \sum_{\bar m_1 \cdots \bar m_s > R} \frac{1}{(\bar m_1 \cdots \bar m_s)^r} \ll \frac{(\ln R)^{s-1}}{R^{r-1}} \asymp \frac{(\ln N)^{r(s-1)}}{N^{r-1}},$$
where the last relation follows from $N \asymp R (\ln R)^{s-1}$.
Estimate of $I_2$. By the triangle inequality,
$$I_2 \le \sum_{m \in \Gamma_R} \big| \gamma_N^{(m)} \big| \le \big| \Gamma_R \big|.$$
By Lemma 2, $|\Gamma_R| \asymp R (\ln R)^{s-1}$; hence,
$$\tilde\varepsilon_N I_2 \ll \frac{(\ln N)^{r(s-1)}}{N^r} \cdot R (\ln R)^{s-1} \asymp \frac{(\ln N)^{r(s-1)}}{N^{r-1}}.$$
Combining the bounds for $I_1$ and $\tilde\varepsilon_N I_2$ gives
$$\Big\| f(x) - \sum_{m \in \Gamma_R} \big( \hat f(m) + \tilde\varepsilon_N \gamma_N^{(m)} \big)\, e^{2\pi i (m, x)} \Big\|_{L_\infty} \ll \frac{(\ln N)^{r(s-1)}}{N^{r-1}},$$
which proves the required upper bound.
Thus,
$$I_1 = \Big\| f(x) - \sum_{m \in \Gamma_R} \hat f(m)\, e^{2\pi i (m, x)} \Big\|_{L_\infty} \ll \frac{(\ln N)^{r(s-1)}}{N^{r-1}}.$$
Furthermore, by the inequality $|\gamma_N^{(m)}| \le 1$ and Lemma 2, one obtains
$$I_2 = \Big\| \sum_{m \in \Gamma_R} \gamma_N^{(m)}\, e^{2\pi i (m, x)} \Big\|_{L_\infty} \le \sum_{m \in \Gamma_R} 1 \asymp R (\ln R)^{s-1} \asymp N.$$
Hence,
$$I_1 + \tilde\varepsilon_N I_2 \ll \frac{(\ln N)^{r(s-1)}}{N^{r-1}} + \tilde\varepsilon_N \cdot N \asymp \frac{(\ln N)^{r(s-1)}}{N^{r-1}}.$$
This yields
$$\delta_N\big( \tilde\varepsilon_N; (\bar l^{(N)}, \bar\varphi_N) \big)_{L_\infty} \ll \frac{(\ln N)^{r(s-1)}}{N^{r-1}}.$$
Since
$$\frac{(\ln N)^{r(s-1)}}{N^{r-1}} \asymp \delta_N\big( 0; \Phi_N(E_s^r) \times \{\varphi_N\}_{L_\infty} \big)_{L_\infty} \le \delta_N\big( \tilde\varepsilon_N; (\bar l^{(N)}, \bar\varphi_N) \big)_{L_\infty},$$
we obtain the two-sided estimate
$$\delta_N\big( \tilde\varepsilon_N; (\bar l^{(N)}, \bar\varphi_N) \big)_{L_\infty} \asymp \frac{(\ln N)^{r(s-1)}}{N^{r-1}}.$$
Next, we establish the massiveness of the limiting error $\tilde\varepsilon_N = \frac{(\ln N)^{r(s-1)}}{N^r}$, which corresponds to the second part of problem C(N)D-2.
Let $N \in \mathbb{N}$ and let
$$B_N = \{ m^{(1)}, \dots, m^{(N)} \} \subset \mathbb{Z}^s.$$
According to the choice of $D_N$ in (1), define the functionals
$$l_1(f) = \hat f(m^{(1)}), \dots, l_N(f) = \hat f(m^{(N)}).$$
Let $\varphi_N(z_1, \dots, z_N; x)$ be an arbitrary information-processing algorithm satisfying $\varphi_N(0, \dots, 0; x) \equiv 0$. This condition is used in deriving lower bounds of the form
$$\sup_{f \in F} \big\| f(x) - \varphi_N\big( l_1(f), \dots, l_N(f); x \big) \big\|_Y \ge$$
$$\ge \big\| g_N(x) - \varphi_N\big( l_1(g_N), \dots, l_N(g_N); x \big) \big\|_Y = \big\| g_N(x) \big\|_Y.$$
In fact, this condition can be omitted (under the additional assumption that the class $F$ is symmetric, i.e., $f \in F \Rightarrow -f \in F$). It is introduced only for the sake of convenience in exposition [27].
Namely, under the assumptions that $g_N \in F$ and $l_\tau(g_N) = 0$ for $\tau = 1, 2, \dots, N$, we obtain the following lower bound:
$$\sup_{f \in F} \big\| f(x) - \varphi_N\big( l_1(f), \dots, l_N(f); x \big) \big\|_Y \ge$$
$$\ge \max_{k = 0, 1} \big\| (-1)^k g_N(x) - \varphi_N\big( (-1)^k l_1(g_N), \dots, (-1)^k l_N(g_N); x \big) \big\|_Y \ge$$
$$\ge \frac{1}{2} \Big( \big\| g_N(x) - \varphi_N\big( l_1(g_N), \dots, l_N(g_N); x \big) \big\|_Y + \big\| g_N(x) + \varphi_N\big( -l_1(g_N), \dots, -l_N(g_N); x \big) \big\|_Y \Big) =$$
$$= \frac{1}{2} \Big( \big\| g_N(x) - \varphi_N(0, 0, \dots, 0; x) \big\|_Y + \big\| g_N(x) + \varphi_N(0, 0, \dots, 0; x) \big\|_Y \Big) \ge$$
$$\ge \frac{1}{2} \big\| \big( g_N(x) - \varphi_N(0, 0, \dots, 0; x) \big) + \big( g_N(x) + \varphi_N(0, 0, \dots, 0; x) \big) \big\|_Y = \big\| g_N(x) \big\|_Y.$$
Let a positive sequence $\{\eta_N\}$, $\eta_N \to +\infty$, be given. Define
$$\eta_N^* = \min\{ \eta_N, \ln N \};$$
then, multiplying by a suitable constant $C = C(r, s) \in (0, 1]$ if necessary, one may assume that for each $N \ge 2$,
$$0 \le \eta_N^* \tilde\varepsilon_N \le 1.$$
We now construct a function $g_N(x)$ satisfying $g_N(x) \in E_s^r$,
$$\hat g_N(m^{(j)}) + \eta_N \tilde\varepsilon_N \hat\gamma_N^{(j)} = 0, \quad j = 1, \dots, N,$$
and
$$\| g_N \|_{L_\infty} \gg \bar\eta_N\, \delta_N(0),$$
where $\{\bar\eta_N\}$ is a sequence diverging to $+\infty$.
To this end, we set
$$g_N(x) = \sum_{m \in \Gamma_R} \tau_N(m)\, e^{2\pi i (m, x)},$$
where for each integer $N \ge 2$, the corresponding $R = R(N) > 1$ is chosen so that
$$N \asymp | \Gamma_R |, \qquad R (\ln R)^{s-1} \asymp N,$$
and the coefficients $\tau_N(m)$ are defined by
$$\tau_N(m) = \min\Big\{ \eta_N^* \tilde\varepsilon_N,\ \frac{1}{(\overline{\overline{m}})^r} \Big\} = \min\Big\{ \frac{\eta_N^* (\ln N)^{r(s-1)}}{N^r},\ \frac{1}{(\overline{\overline{m}})^r} \Big\}.$$
Since
$$\big| \hat g_N(m) \big| \le \frac{1}{(\overline{\overline{m}})^r}, \quad m \in \mathbb{Z}^s,$$
the function $g_N$ indeed belongs to the Korobov class $E_s^r$.
We now estimate the norm of $g_N$ in the uniform metric. From (4), in the case when
$$\frac{1}{(\overline{\overline{m}})^r} \ge \eta_N^* \tilde\varepsilon_N, \quad N \ge 2,$$
we have
$$(\overline{\overline{m}})^r \le \big( \eta_N^* \tilde\varepsilon_N \big)^{-1}, \qquad 1 \le \overline{\overline{m}} \le \big( \eta_N^* \tilde\varepsilon_N \big)^{-1/r},$$
and hence,
$$\big( \eta_N^* \tilde\varepsilon_N \big)^{-1/r} = \Big( \frac{\eta_N^* (\ln N)^{r(s-1)}}{N^r} \Big)^{-1/r} = \frac{N}{(\eta_N^*)^{1/r} (\ln N)^{s-1}}.$$
As a result (see also Lemma 2),
$$\| g_N \|_{L_\infty} = \sup_{x \in (0,1)^s} \Big| \sum_{m \in \Gamma_R} \tau_N(m)\, e^{2\pi i (m, x)} \Big| \ge \sum_{m \in \Gamma_R} \tau_N(m) \ge \sum_{1 \le \overline{\overline{m}} \le (\eta_N^* \tilde\varepsilon_N)^{-1/r}} \tau_N(m) \ge$$
$$\ge \frac{\eta_N^* (\ln N)^{r(s-1)}}{N^r} \sum_{1 \le \overline{\overline{m}} \le (\eta_N^* \tilde\varepsilon_N)^{-1/r}} 1 \gg \eta_N^* \tilde\varepsilon_N \cdot \big( \eta_N^* \tilde\varepsilon_N \big)^{-1/r} \Big( \ln \big( \eta_N^* \tilde\varepsilon_N \big)^{-1/r} \Big)^{s-1} \asymp$$
$$\asymp \big( \eta_N^* \tilde\varepsilon_N \big)^{1 - 1/r} (\ln N)^{s-1} \asymp (\eta_N^*)^{1 - 1/r}\, \frac{(\ln N)^{r(s-1)}}{N^{r-1}} = (\eta_N^*)^{1 - 1/r}\, \delta_N(0).$$
That is, when
$$\bar\eta_N = (\eta_N^*)^{1 - 1/r},$$
we obtain the required relation
$$\| g_N \|_{L_\infty} \gg \bar\eta_N\, \delta_N(0).$$
According to (3) and (4), we have (for $k = 1, \dots, N$)
$$\hat g_N(m^{(k)}) = \tau_N(m^{(k)}) = \begin{cases} \min\Big\{ \dfrac{1}{(\overline{\overline{m^{(k)}}})^r},\ \eta_N^* \tilde\varepsilon_N \Big\}, & m^{(k)} \in \Gamma_R, \\ 0, & m^{(k)} \notin \Gamma_R. \end{cases}$$
Then, for $m^{(k)} \in \Gamma_R$, the inequalities
$$\hat g_N(m^{(k)}) \le \eta_N^* \tilde\varepsilon_N \le \eta_N \tilde\varepsilon_N$$
hold. Hence, by defining
$$\hat\gamma_N^{(k)} = \begin{cases} -\dfrac{\hat g_N(m^{(k)})}{\eta_N \tilde\varepsilon_N}, & m^{(k)} \in \Gamma_R, \\ 0, & m^{(k)} \notin \Gamma_R, \end{cases}$$
we obtain
$$\big| \hat\gamma_N^{(k)} \big| \le 1, \quad \text{and} \quad \hat g_N(m^{(k)}) + \eta_N \tilde\varepsilon_N \hat\gamma_N^{(k)} = 0, \quad k = 1, \dots, N.$$
Consequently,
$$\varphi_N\big( \hat g_N(m^{(1)}) + \eta_N \tilde\varepsilon_N \hat\gamma_N^{(1)}, \dots, \hat g_N(m^{(N)}) + \eta_N \tilde\varepsilon_N \hat\gamma_N^{(N)}; x \big) \equiv 0.$$
Therefore, by virtue of (5), the following inequality holds:
$$\big\| g_N(x) - \varphi_N\big( \hat g_N(m^{(1)}) + \eta_N \tilde\varepsilon_N \hat\gamma_N^{(1)}, \dots, \hat g_N(m^{(N)}) + \eta_N \tilde\varepsilon_N \hat\gamma_N^{(N)}; x \big) \big\|_{L_\infty} =$$
$$= \| g_N \|_{L_\infty} \gg (\eta_N^*)^{1 - \frac{1}{r}}\, \frac{(\ln N)^{r(s-1)}}{N^{r-1}}.$$
Thus, for any computational aggregate $(l^{(N)}, \varphi_N) \in \Phi_N(E_s^r) \times \{\varphi_N\}_{L_\infty}$, the following relation holds:
$$\delta_N\big( \eta_N \tilde\varepsilon_N; \varphi_N( \hat f(m^{(1)}), \dots, \hat f(m^{(N)}); x ) \big)_{L_\infty} =$$
$$= \sup_{\substack{f \in E_s^r \\ |\gamma_N^{(k)}| \le 1,\ k = 1, \dots, N}} \big\| f(x) - \varphi_N\big( \hat f(m^{(1)}) + \eta_N \tilde\varepsilon_N \gamma_N^{(1)}, \dots, \hat f(m^{(N)}) + \eta_N \tilde\varepsilon_N \gamma_N^{(N)}; x \big) \big\|_{L_\infty} \ge$$
$$\ge \big\| g_N(x) - \varphi_N\big( \hat g_N(m^{(1)}) + \eta_N \tilde\varepsilon_N \hat\gamma_N^{(1)}, \dots, \hat g_N(m^{(N)}) + \eta_N \tilde\varepsilon_N \hat\gamma_N^{(N)}; x \big) \big\|_{L_\infty} \gg$$
$$\gg (\eta_N^*)^{1 - \frac{1}{r}}\, \frac{(\ln N)^{r(s-1)}}{N^{r-1}} \asymp (\eta_N^*)^{1 - \frac{1}{r}}\, \delta_N(0).$$
Finally, due to the arbitrariness of $m^{(1)}, \dots, m^{(N)}$ and $\varphi_N$, it follows that
$$\delta_N\big( \eta_N \tilde\varepsilon_N; D_N \big)_{L_\infty} = \inf_{(l^{(N)}, \varphi_N) \in \Phi_N(E_s^r) \times \{\varphi_N\}_{L_\infty}} \delta_N\big( \eta_N \tilde\varepsilon_N; (l^{(N)}, \varphi_N) \big)_{L_\infty} \gg (\eta_N^*)^{1 - \frac{1}{r}}\, \frac{(\ln N)^{r(s-1)}}{N^{r-1}},$$
and, in particular, for every computational aggregate,
$$\delta_N\big( \eta_N \tilde\varepsilon_N; (l^{(N)}, \varphi_N) \big)_{L_\infty} \gg (\eta_N^*)^{1 - \frac{1}{r}}\, \frac{(\ln N)^{r(s-1)}}{N^{r-1}}.$$
As a consequence of Lemma 3 and (6), the following relation holds for every $N$:
$$\frac{\delta_N\big( \eta_N \tilde\varepsilon_N; (l^{(N)}, \varphi_N) \big)_{L_\infty}}{\delta_N\big( 0; \Phi_N(E_s^r) \times \{\varphi_N\}_{L_\infty} \big)_{L_\infty}} \gg \frac{(\eta_N^*)^{1 - \frac{1}{r}}\, (\ln N)^{r(s-1)}\, N^{-(r-1)}}{(\ln N)^{r(s-1)}\, N^{-(r-1)}} = (\eta_N^*)^{1 - \frac{1}{r}}.$$
Taking into account that (see (2)) $\eta_N^* \to +\infty$ and $r > 1$, this inequality proves condition C(N)D-3 and, in particular, the second part of condition C(N)D-2. Theorem 1 is thus completely proven. □
The error estimates obtained above describe the worst-case scenario of recovery, in which the efficiency of a computational aggregate is evaluated by the supremal deviation over a given functional class. Such an analysis corresponds to the function–theoretic approach and makes it possible to determine the extremal characteristics of approximation. At the same time, for further analysis, it is of particular interest to examine the average behavior of the recovery error, in which, along with asymptotic characteristics, integral measures of approximation quality are also taken into account.
Theorem 2. 
Let $s$ be a positive integer and let $r > 1$. Then, for the mean-square recovery error with respect to the measure $\mu$ on $E_s^r$, the following estimate holds (for $R > 1$ and $N \asymp R (\ln R)^{s-1}$, $N$ a positive integer):
$$\int_{E_s^r} \Big\| f(x) - \sum_{m \in \Gamma_R} \hat f(m)\, e^{2\pi i (m, x)} \Big\|_{L_2(0,1)^s}^2\, d\mu(f) \ll \frac{(\ln R)^{s-1}}{R^{2r-1}} \asymp \frac{(\ln N)^{2r(s-1)}}{N^{2r-1}}.$$
Proof of Theorem 2. 
Let $f \in E_s^r$. Since $r > 1$, the trigonometric Fourier–Lebesgue series of $f$ converges absolutely. According to Lemmas 4 and 5, the following relations hold:
$$\int_{E_s^r} \Big\| f(x) - \sum_{m \in \Gamma_R} \hat f(m)\, e^{2\pi i (m, x)} \Big\|_{L_2(0,1)^s}^2\, d\mu(f) = \int_{E_s^r} \Big\| \sum_{m \notin \Gamma_R} \hat f(m)\, e^{2\pi i (m, x)} \Big\|_{L_2(0,1)^s}^2\, d\mu(f) \ll$$
$$\ll \sum_{m \notin \Gamma_R} \prod_{j=1}^s \max\{ |m_j|, 1 \}^{-2r}.$$
According to Lemma 1 with $\alpha = 2r$, the corresponding sum satisfies the estimate
$$\sum_{m \notin \Gamma_R} \prod_{j=1}^s \max\{ |m_j|, 1 \}^{-2r} \ll \frac{(\ln R)^{s-1}}{R^{2r-1}}.$$
Taking into account that $N \asymp R (\ln R)^{s-1}$, we then have
$$\int_{E_s^r} \Big\| f(x) - \sum_{m \in \Gamma_R} \hat f(m)\, e^{2\pi i (m, x)} \Big\|_{L_2(0,1)^s}^2\, d\mu(f) \ll \frac{(\ln R)^{s-1}}{R^{2r-1}} \asymp \frac{(\ln N)^{2r(s-1)}}{N^{2r-1}}.$$
Hence, the asserted estimate follows. □
Hence, the asserted estimate follows. □

5. Numerical Implementation

The numerical experiment was carried out for the class $E_2^4$, that is, for the two-dimensional case with parameters $s = 2$ and $r = 4$. The function under consideration was
$$g_N(x) = \sum_{m \in \Gamma_R} \tau_N(m)\, e^{2\pi i (m, x)}, \qquad \tau_N(m) = \min\big\{ \eta(N)\, \tilde\varepsilon_N,\ (\overline{\overline{m}})^{-4} \big\},$$
where Γ R is the hyperbolic cross defined below.
In accordance with Theorem 1, two cases were considered:
$$\eta(N) \equiv 1 \qquad \text{and} \qquad \eta(N) = \log N.$$
In the numerical experiment, fifty hyperbolic crosses were constructed for the parameters $R = 10, 20, \dots, 500$, according to
$$\Gamma_R = \big\{ m = (m_1, m_2) \in \mathbb{Z}^2 : \overline{\overline{m}} = \max\{1, |m_1|\} \cdot \max\{1, |m_2|\} \le R \big\}.$$
Figure 1a illustrates the hyperbolic cross for $R = 50$. Figure 1b,c display the graphs of the functions $|g_N(x)|$ for the cases $\eta(N) \equiv 1$ and $\eta(N) = \log N$, respectively. These correspond to the coefficients
$$\tau_N(m) = \min\big\{ \tilde\varepsilon_N,\ (\overline{\overline{m}})^{-4} \big\} \qquad \text{and} \qquad \tau_N(m) = \min\big\{ \tilde\varepsilon_N \log N,\ (\overline{\overline{m}})^{-4} \big\}.$$
For each parameter $R$, the number of elements $N = |\Gamma_R|$ of the hyperbolic cross $\Gamma_R$ and the values
$$\| g_N \|_{L_\infty}, \qquad \frac{\| g_N \|_{L_\infty}}{\delta_N(0)}$$
were computed for the function
$$g_N(x) = \sum_{m \in \Gamma_R} \tau_N(m)\, e^{2\pi i (m, x)}, \qquad \tau_N(m) = \min\big\{ \eta(N)\, \tilde\varepsilon_N,\ (\overline{\overline{m}})^{-4} \big\}.$$
Here, in accordance with Theorem 1 for $s = 2$ and $r = 4$,
$$\tilde\varepsilon_N = \frac{(\log N)^4}{N^4}, \qquad \eta(N) \equiv 1 \quad \text{or} \quad \eta(N) = \log N.$$
It should be noted that, since all coefficients $\tau_N(m)$ are nonnegative,
$$\| g_N \|_{L_\infty} = g_N(0).$$
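The computation described in this section can be reproduced with a short script. The sketch below is our own illustrative code (the names gamma_R and experiment are ours); it follows the parameters of the text, $s = 2$, $r = 4$, $\tilde\varepsilon_N = (\log N)^{r(s-1)}/N^r$, and evaluates $\|g_N\|_{L_\infty} = g_N(0)$ together with the normalized ratio $\|g_N\|_{L_\infty} / \delta_N(0)$ for a given $R$.

```python
import math
from itertools import product

def gamma_R(R):
    """Hyperbolic cross in Z^2: max(1,|m1|) * max(1,|m2|) <= R."""
    M = int(R)
    return [(m1, m2) for m1, m2 in product(range(-M, M + 1), repeat=2)
            if max(1, abs(m1)) * max(1, abs(m2)) <= R]

def experiment(R, r=4, s=2, log_weight=False):
    """Return (N, ||g_N||_inf, ||g_N||_inf / delta_N(0)) for the class E_2^4."""
    cross = gamma_R(R)
    N = len(cross)
    eps = math.log(N) ** (r * (s - 1)) / N ** r            # limiting error
    eta = math.log(N) if log_weight else 1.0               # weighting eta(N)
    # ||g_N||_inf = g_N(0), since all coefficients tau_N(m) are nonnegative
    norm = sum(min(eta * eps,
                   (max(1, abs(m1)) * max(1, abs(m2))) ** (-r))
               for m1, m2 in cross)
    delta0 = math.log(N) ** (r * (s - 1)) / N ** (r - 1)   # delta_N(0)
    return N, norm, norm / delta0
```

In the range of $R$ tested here, every coefficient happens to be clipped at $\eta(N)\,\tilde\varepsilon_N$, so the ratio is close to $\eta(N)$: essentially constant for uniform weighting and growing like $\log N$ for logarithmic weighting, consistent with the qualitative behavior reported in the text.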
The corresponding numerical results are presented in Table 1.
The first column of Table 1 contains the parameter $R$ of the hyperbolic cross $\Gamma_R$. The second column presents the number of elements $N = |\Gamma_R|$. Columns 3–4 and 5–6 list the values of $\| g_N \|_{L_\infty}$ and $\| g_N \|_{L_\infty} / \delta_N(0)$, respectively, for the cases $\eta(N) \equiv 1$ and $\eta(N) = \log N$.
Figure 2 presents a detailed visualization of the numerical results. Panels (a,b) display the dependence of the absolute error g N L on the number of Fourier coefficients N = | Γ R | for the cases η N 1 and η N log N , respectively. In both cases, the error decreases monotonically as N increases, illustrating the convergence predicted by Theorem 1. On the logarithmic scale, the rate of decay is close to the theoretical order N ( r 1 ) ( log N ) r ( s 1 ) , confirming the asymptotic estimate g N L δ N ( 0 ) . For η N log N , the error values are larger, which reflects the deterioration of recovery accuracy under amplified noise in the Fourier coefficients.
Panels (c,d) show the normalized error g N L / δ N ( 0 ) . In the case η N 1 , the normalized quantity remains nearly constant for all N, demonstrating the stability of the recovery process when the information error does not exceed the limiting value ε ˜ N . When η N log N , the normalized error grows approximately logarithmically, which is consistent with the theoretical prediction that replacing ε ˜ N by η N ε ˜ N leads to a loss of optimality (conditions C(N)D–2 and C(N)D–3).
Overall, the numerical results confirm that the limiting error $\tilde{\varepsilon}_N \asymp (\log N)^{r(s-1)}/N^r$ indeed serves as the boundary between accurate and degraded recovery. The observed numerical behavior thus provides a clear quantitative validation of the theoretical conclusions of Section 4.
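This boundary behavior can be checked directly by recomputing the two ratio columns of Table 1 for several values of $R$. The sketch below reuses the same assumed setup ($s = 2$, coefficients $\tau_N(m) = \min\{\eta(N)\tilde{\varepsilon}_N, \bar{\bar{m}}^{-4}\}$, natural logarithm), which is an inference from the reported data rather than a statement from the text:

```python
# Reproduce the normalized errors ||g_N||/delta_N(0) of Table 1 for a few R:
# the ratio stays 1 for eta(N) = 1 and grows like log N for eta(N) = log N.
import math

def ratios(R):
    cross = [(m1, m2)
             for m1 in range(-R, R + 1)
             for m2 in range(-(R // max(1, abs(m1))), R // max(1, abs(m1)) + 1)
             if max(1, abs(m1)) * max(1, abs(m2)) <= R]
    N = len(cross)
    eps = math.log(N) ** 4 / N ** 4        # limiting error eps_N
    delta = math.log(N) ** 4 / N ** 3      # accurate-information order delta_N(0)
    s1 = sum(min(eps, (max(1, abs(a)) * max(1, abs(b))) ** -4)
             for a, b in cross)            # eta(N) = 1
    s2 = sum(min(eps * math.log(N), (max(1, abs(a)) * max(1, abs(b))) ** -4)
             for a, b in cross)            # eta(N) = log N
    return N, s1 / delta, s2 / delta

for R in (10, 20, 30):
    N, r1, r2 = ratios(R)
    print(R, N, round(r1, 3), round(r2, 3))
```

Running this reproduces the first rows of Table 1: the first ratio is constant at $1$, while the second equals $\log N$ (e.g. $5.004$ for $N = 149$, $5.844$ for $N = 345$).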

6. Discussion

This article examines the problem of recovering functions from Korobov classes from two complementary perspectives: the function-theoretic and probabilistic approaches.
Within the function-theoretic framework, the problem of optimal recovery in the uniform metric for functions belonging to the Korobov classes $E_s^r$ is investigated based on information obtained from trigonometric Fourier–Lebesgue coefficients and processed by an arbitrary algorithm $\varphi_N$. In Theorem 1, for computational aggregates represented by partial sums of the Fourier series of the recovered function over hyperbolic crosses, a value $\tilde{\varepsilon}_N$ of the error in computing the Fourier–Lebesgue coefficients is established such that the order of optimal recovery
$$\delta_N(\tilde{\varepsilon}_N;\ (\bar{l}_N, \bar{\varphi}_N))_{L_\infty}$$
remains asymptotically equivalent to the order corresponding to accurate information, namely
$$\delta_N(0;\ \Phi_N(E_s^r) \times \{\varphi_N\})_{L_\infty} \asymp \delta_N(\tilde{\varepsilon}_N;\ (\bar{l}_N, \bar{\varphi}_N))_{L_\infty}.$$
However, if $\tilde{\varepsilon}_N$ is replaced by $\eta_N \tilde{\varepsilon}_N$ for an arbitrarily slowly increasing sequence $\eta_N \uparrow +\infty$, then the accuracy of recovery deteriorates, i.e.,
$$\delta_N(0;\ \Phi_N(E_s^r) \times \{\varphi_N\})_{L_\infty} \ll \delta_N(\eta_N \tilde{\varepsilon}_N;\ (\bar{l}_N, \bar{\varphi}_N))_{L_\infty}.$$
Furthermore, the massiveness of the limiting error $\tilde{\varepsilon}_N$ is established: for any algorithm $\varphi_N$ and any spectrum $\{m^{(1)}, \ldots, m^{(N)}\} \subset \mathbb{Z}^s$, the limiting error of the computational aggregate $\varphi_N(\hat{f}(m^{(1)}), \ldots, \hat{f}(m^{(N)});\ x)$ does not exceed $\tilde{\varepsilon}_N$.
The quantities in Theorem 1 have the following asymptotic orders:
$$\delta_N(0;\ \Phi_N(E_s^r) \times \{\varphi_N\})_{L_\infty} \asymp \frac{(\ln N)^{r(s-1)}}{N^{r-1}}, \qquad \tilde{\varepsilon}_N \asymp \frac{(\ln N)^{r(s-1)}}{N^{r}}.$$
The latter quantity, smaller by a factor of $1/N$, is referred to as the limiting error.
While the estimates for the recovery error based on accurate information (problem C(N)D–1) were known previously, the results concerning recovery from inaccurate information and the limiting error of inaccurate information (problems C(N)D–2 and C(N)D–3), as well as the characterization of its massiveness, are new contributions of the authors. These results are of both theoretical and practical importance. Allowing for bounded errors in the initial data removes the need for precise measurements, simplifies device implementation, and reduces computational cost. All theoretical findings are supported by the numerical experiments, according to which the following conclusions can be drawn:
  • The absolute error
$$\left\|g_N - \varphi_N(\hat{g}_N(m^{(1)}), \ldots, \hat{g}_N(m^{(N)});\ x)\right\|_{L_\infty} = \left\|g_N - \varphi_N(0, \ldots, 0;\ x)\right\|_{L_\infty} = \|g_N\|_{L_\infty}$$
    decreases rapidly as $N \to \infty$ in both numerical regimes corresponding to $\eta_N \equiv 1$ and $\eta_N = \log N$ (note the difference in scales in Figure 2a,b).
  • The normalized value $\|g_N\|_{L_\infty}/\delta_N(0)$ remains approximately constant with respect to $N$ when $\eta_N \equiv 1$, and increases when $\eta_N = \log N$. This behavior confirms the theoretical results of Theorem 1 concerning the limiting recovery error under inaccurate information (the second parts of conditions C(N)D-2 and C(N)D-3).
In Theorem 2, an upper bound for the mean-square recovery error is derived with respect to probability measures on the Korobov classes. This establishes a probabilistic characterization of the recovery process, complementing the function-theoretic analysis.
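As a rough numerical illustration of a mean-square (average-case) recovery error, one can estimate the expected squared truncation error of the partial Fourier sum under a simple product measure on the coefficients. The measure below ($c_m = u_m\,\bar{\bar{m}}^{-r}$ with $u_m$ uniform on $[-1,1]$, independent) and the truncation of the spectrum to $|m_j| \le 30$ are assumptions for illustration only; they are not the measure construction used in Theorem 2.

```python
# Monte Carlo estimate of E ||f - S_N f||_2^2 under an assumed product measure
# on Fourier coefficients: c_m = u_m * bar(bar(m))^(-r), u_m ~ Uniform[-1, 1].
# By Parseval, the squared L2 error is the sum of |c_m|^2 over m outside the cross.
import math, random

random.seed(0)
r, R = 4, 10
modes = [(m1, m2) for m1 in range(-30, 31) for m2 in range(-30, 31)]
inside = {m for m in modes
          if max(1, abs(m[0])) * max(1, abs(m[1])) <= R}

def sample_sq_error():
    err = 0.0
    for m in modes:
        if m not in inside:
            w = (max(1, abs(m[0])) * max(1, abs(m[1]))) ** -r
            c = random.uniform(-1, 1) * w
            err += c * c
    return err

mean_sq = sum(sample_sq_error() for _ in range(200)) / 200
print(mean_sq)
```

Since $\mathbb{E}\,u_m^2 = 1/3$, the exact expectation under this toy measure is $\tfrac{1}{3}\sum_{m \notin \Gamma_R} \bar{\bar{m}}^{-2r}$, and the Monte Carlo mean converges to it; the error is dominated by the modes just outside the cross, mirroring the role of the hyperbolic-cross spectrum in the upper bounds of Theorem 2.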
The presented work may be extended in several directions. One possible generalization involves replacing the uniform norm $L_\infty$ with the norm $L_q$, $2 \le q \le +\infty$, thus formulating and solving the recovery problem in the $L_q$ metric for functions from the Korobov classes based on the trigonometric Fourier–Lebesgue coefficients. Further developments are also possible in refining and generalizing the various specifications of the C(N)D problem.

Author Contributions

Conceptualization, A.Z.Z., G.E.T., and N.Z.N.; methodology, A.Z.Z., G.E.T., and N.Z.N.; software, A.Z.Z., G.E.T., N.Z.N., A.A.S., and A.T.A.; validation, A.Z.Z., G.E.T., N.Z.N., A.A.S., and A.T.A.; formal analysis, A.Z.Z., G.E.T., N.Z.N., A.A.S., and A.T.A.; investigation, A.Z.Z., G.E.T., N.Z.N., A.A.S., and A.T.A.; resources, A.Z.Z., G.E.T., N.Z.N., A.A.S., and A.T.A.; data curation, A.Z.Z., G.E.T., N.Z.N., A.A.S., and A.T.A.; writing—original draft preparation, A.Z.Z., G.E.T., N.Z.N., A.A.S., and A.T.A.; writing—review and editing, A.Z.Z., G.E.T., and N.Z.N.; visualization, A.Z.Z., G.E.T., N.Z.N., A.A.S., and A.T.A.; supervision, A.A.S., and A.T.A.; project administration, A.Z.Z.; funding acquisition, A.Z.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Science and Higher Education of the Republic of Kazakhstan (grant number: AP19680525).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Temirgaliyev, N. The concept of S. M. Voronin in the problem of comparisons of deterministic and random computation in the same terms. Bull. L. N. Gumilyov Eurasian Natl. Univ. Math. Comput. Sci. Mech. Ser. 2019, 128, 8–33. (In Russian) [Google Scholar]
  2. Temirgaliyev, N.; Zhubanysheva, A. Approximation Theory, Computational Mathematics and Numerical Analysis in New Conception of Computational (Numerical) Diameter. Bull. L. N. Gumilyov Eurasian Natl. Univ. Math. Comput. Sci. Mech. Ser. 2018, 124, 8–88. (In Russian) [Google Scholar] [CrossRef]
  3. Temirgaliev, N.; Zhubanysheva, A.Z. Computational (Numerical) Diameter in a Context of General Theory of a Recovery. Russ. Math. (Iz. VUZ) 2019, 63, 79–86. [Google Scholar] [CrossRef]
  4. Taugynbayeva, G.; Azhgaliyev, S.; Zhubanysheva, A.; Temirgaliyev, N. Full C(N)D-Study of Computational Capabilities of Lagrange Polynomials. Math. Comput. Simul. 2025, 227, 189–208. [Google Scholar] [CrossRef]
  5. Kolmogorov, A.N. Über die beste Annäherung von Funktionen einer gegebenen Funktionenklasse. Ann. Math. 1936, 37, 107–110. [Google Scholar] [CrossRef]
  6. Sard, A. Best Approximate Integration Formulas; Best Approximation Formulas. Am. J. Math. 1949, 71, 80–91. [Google Scholar] [CrossRef]
  7. Nikol’skii, S.M. Concerning Estimation for Approximate Quadrature Formulas. Russ. Math. Surv. 1950, 5, 165–177. (In Russian) [Google Scholar]
  8. Stechkin, S.B. On the Best Approximation of Given Classes of Functions by Any Polynomials. Usp. Mat. Nauk 1954, 9, 133–134. (In Russian) [Google Scholar]
  9. Korobov, N.M. Number-Theoretical Methods in Approximate Analysis; Fizmatgiz: Moscow, Russia, 1963. (In Russian) [Google Scholar]
  10. Ioffe, A.D.; Tikhomirov, V.M. Duality of Convex Functions and Extremal Problems. Usp. Mat. Nauk 1968, 23, 51–116. (In Russian) [Google Scholar]
  11. Micchelli, C.A.; Rivlin, T.J. A Survey of Optimal Recovery. In Optimal Estimation in Approximation Theory; Micchelli, C.A., Rivlin, T.J., Eds.; Plenum Press: New York, NY, USA, 1977; pp. 1–54. [Google Scholar]
  12. Korneichuk, N.P. Exact Constants in Approximation Theory; Nauka: Moscow, Russia, 1987. (In Russian) [Google Scholar]
  13. Pietsch, A. Eigenvalues and s-Numbers; Geest and Portig: Leipzig, Germany; Cambridge University Press: Cambridge, UK, 1987. [Google Scholar]
  14. Traub, J.F.; Wasilkowski, G.W.; Wozniakowski, H. Information-Based Complexity; Academic Press: New York, NY, USA, 1988. [Google Scholar]
  15. Novak, E.; Wozniakowski, H. Tractability of Multivariate Problems. Vol. 1. Linear Information; EMS Tracts Math. 6; European Mathematical Society Publishing House: Zürich, Switzerland, 2008. [Google Scholar]
  16. Plaskota, L. Noisy Information and Computational Complexity; Cambridge University Press: Cambridge, UK, 1996. [Google Scholar]
  17. Osipenko, K.Y. Best Approximation of Analytic Functions from Information about Their Values at a Finite Number of Points. Math. Notes 1976, 19, 17–23. [Google Scholar] [CrossRef]
  18. Magaril-Il’yaev, G.G.; Osipenko, K.Y. Optimal Recovery of Values of Functions and Their Derivatives from Inaccurate Data on the Fourier Transform. Sb. Math. 2004, 195, 1461–1476. [Google Scholar] [CrossRef]
  19. Marchuk, A.G.; Osipenko, K.Y. Best Approximation of Functions Specified with an Error at a Finite Number of Points. Math. Notes 1975, 17, 207–212. [Google Scholar] [CrossRef]
  20. Heinrich, S. Random Approximation in Numerical Analysis. In Functional Analysis; Bierstedt, K.D., Pietsch, A., Ruess, W.M., Vogt, D., Eds.; Marcel Dekker: New York, NY, USA, 1993; pp. 123–171. [Google Scholar]
  21. Smolyak, S.A. Quadrature and Interpolation Formulas for Tensor Products of Certain Classes of Functions. Dokl. Akad. Nauk SSSR 1963, 148, 1042–1045. [Google Scholar]
  22. Sherniyazov, K. Approximate Reconstruction of Functions and Solutions of Heat Conductivity Equations with Distribution Functions of Initial Temperatures from Classes E, SW and B. Ph.D. Thesis, Al-Farabi Kazakh National University, Almaty, Kazakhstan, 1998. (In Russian). [Google Scholar]
  23. Korobov, N.M. Trigonometric Sums and Their Applications; Nauka: Moscow, Russia, 1989. (In Russian) [Google Scholar]
  24. Kovaleva, I.M. Reconstruction and Integration over Domains of Functions from Anisotropic Korobov Classes. Ph.D. Thesis, Al-Farabi Kazakh National University, Almaty, Kazakhstan, 2002. (In Russian). [Google Scholar]
  25. Azhgaliev, S. Approximate Reconstruction of Functions and Solutions of the Heat Equation with Distribution Functions of Initial Temperatures from Classes W, B, SW and E from Linear Information. Ph.D. Thesis, Al-Farabi Kazakh National University, Almaty, Kazakhstan, 2000. (In Russian). [Google Scholar]
  26. Temirgaliev, N.; Sherniyazov, K.; Berikhanova, M. Exact Orders of Computational (Numerical) Diameters in Problems of Reconstructing Functions and Sampling Solutions of the Klein–Gordon Equation from Fourier Coefficients. Proc. Steklov Inst. Math. 2013, 282 (Suppl. 1), 165–191. (In Russian) [Google Scholar] [CrossRef]
  27. Temirgaliev, N. Computer (Computational) Diameter. Algebraic Number Theory and Harmonic Analysis in Reconstruction Problems (Quasi-Monte Carlo Method). Theory of Embeddings and Approximations. Fourier Series. Bull. L. N. Gumilyov Eurasian Natl. Univ. 2010, 194. [Google Scholar]
  28. Banach, S. The Lebesgue Integral in Abstract Space. In Theory of the Integral; Saks, S., Ed.; IL: Moscow, Russia, 1949; pp. 463–477. (In Russian) [Google Scholar]
  29. Sul’din, A.V. Wiener Measure and Its Applications to Approximation Methods. Izv. Vyss. Uchebnykh Zaved. Mat. 1959, 6, 145–158. (In Russian) [Google Scholar]
  30. Voronin, S.M.; Skalyga, V.I. Quadrature Formulas. Dokl. Akad. Nauk SSSR 1984, 276, 1038–1041. [Google Scholar] [CrossRef]
  31. Temirgaliev, N. On Some Problems of Numerical Integration. Vestn. Akad. Nauk KazSSR 1983, 12, 15–18. [Google Scholar]
  32. Temirgaliev, N. On the construction of probability measures of functional classes. Proc. Steklov Inst. Math. 1997, 218, 396–401. [Google Scholar]
  33. Nauryzbayev, N.; Shomanova, A.; Temirgaliyev, N. Average Square Errors by Banach Measure of Recovery of Functions by Finite Sums of Terms of Their Trigonometric Fourier Series. Bull. L. N. Gumilyov Eurasian Natl. Univ. Math. Comput. Sci. Mech. Ser. 2025, 150, 17–24. [Google Scholar]
  34. Liu, Y.; Li, X.; Li, H. N-Widths of Multivariate Sobolev Spaces with Common Smoothness in Probabilistic and Average Settings in the Sq Norm. Axioms 2023, 12, 698. [Google Scholar] [CrossRef]
  35. Fang, G.; Ye, P. Probabilistic and Average Linear Widths of Sobolev Spaces with Gaussian Measure in L∞-Norm. Constr. Approx. 2004, 20, 159–172. [Google Scholar]
  36. Tan, X.; Wang, Y.; Sun, L.; Shao, X.; Chen, G. Gel’fand-N-Width in Probabilistic Setting. J. Inequal. Appl. 2020, 2020, 143. [Google Scholar] [CrossRef]
  37. Liu, Y.; Li, H.; Li, X. Gel’fand Widths of Sobolev Classes of Functions in the Average Setting. Ann. Funct. Anal. 2023, 14, 14–31. [Google Scholar] [CrossRef]
Figure 1. (a) Hyperbolic cross with parameter $R = 50$. (b) Function $|g_N(x)|$ for $\tau_N(m) = \min\{\tilde{\varepsilon}_N, \bar{\bar{m}}^{-4}\}$. (c) Function $|g_N(x)|$ for $\tau_N(m) = \min\{\tilde{\varepsilon}_N \log N, \bar{\bar{m}}^{-4}\}$.
Figure 2. (a) Absolute error $\|g_N\|_{L_\infty}$ as a function of $N$ for $\eta_N \equiv 1$. (b) Absolute error $\|g_N\|_{L_\infty}$ as a function of $N$ for $\eta_N = \log N$. (c) Normalized error $\|g_N\|_{L_\infty}/\delta_N(0)$ as a function of $N = |\Gamma_R|$ for $\eta_N \equiv 1$. (d) Normalized error $\|g_N\|_{L_\infty}/\delta_N(0)$ as a function of $N = |\Gamma_R|$ for $\eta_N = \log N$.
Table 1. Numerical data for $\|g_N\|_{L_\infty(0,1)^s}$ and $\|g_N\|_{L_\infty(0,1)^s}/\delta_N(0)$ relative to $R$ and $\eta(N)$.
| R | N | ‖g_N‖, η(N) ≡ 1 | ‖g_N‖/δ_N(0), η(N) ≡ 1 | ‖g_N‖, η(N) = log N | ‖g_N‖/δ_N(0), η(N) = log N |
|---|---|---|---|---|---|
| 10 | 149 | 1.89536 × 10^-4 | 1 | 9.48428 × 10^-4 | 5.003946306 |
| 20 | 345 | 2.83954 × 10^-5 | 1 | 1.6593 × 10^-4 | 5.843544417 |
| 30 | 565 | 8.94009 × 10^-6 | 1 | 5.66518 × 10^-5 | 6.336825731 |
| 40 | 793 | 3.9829 × 10^-6 | 1 | 2.65892 × 10^-5 | 6.675823222 |
| 50 | 1029 | 2.12459 × 10^-6 | 1 | 1.47369 × 10^-5 | 6.936342736 |
| 60 | 1285 | 1.23761 × 10^-6 | 1 | 8.85941 × 10^-6 | 7.158513997 |
| 70 | 1529 | 8.08639 × 10^-7 | 1 | 5.92924 × 10^-6 | 7.332369206 |
| 80 | 1793 | 5.46472 × 10^-7 | 1 | 4.09398 × 10^-6 | 7.491645474 |
| 90 | 2061 | 3.87329 × 10^-7 | 1 | 2.95568 × 10^-6 | 7.630946581 |
| 100 | 2329 | 2.86032 × 10^-7 | 1 | 2.21766 × 10^-6 | 7.75319427 |
| 110 | 2593 | 2.18982 × 10^-7 | 1 | 1.72132 × 10^-6 | 7.860570786 |
| 120 | 2889 | 1.67224 × 10^-7 | 1 | 1.33256 × 10^-6 | 7.9686657 |
| 130 | 3149 | 1.34806 × 10^-7 | 1 | 1.08584 × 10^-6 | 8.054840221 |
| 140 | 3437 | 1.08258 × 10^-7 | 1 | 8.81478 × 10^-7 | 8.142354277 |
| 150 | 3721 | 8.86906 × 10^-8 | 1 | 7.29192 × 10^-7 | 8.221747728 |
| 160 | 4009 | 7.3524 × 10^-8 | 1 | 6.09977 × 10^-7 | 8.296297113 |
| 170 | 4301 | 6.15869 × 10^-8 | 1 | 5.15273 × 10^-7 | 8.366602833 |
| 180 | 4605 | 5.18359 × 10^-8 | 1 | 4.3723 × 10^-7 | 8.434897949 |
| 190 | 4885 | 4.46519 × 10^-8 | 1 | 3.7927 × 10^-7 | 8.493924564 |
| 200 | 5193 | 3.82506 × 10^-8 | 1 | 3.27237 × 10^-7 | 8.555066844 |
| 210 | 5505 | 3.29935 × 10^-8 | 1 | 2.84187 × 10^-7 | 8.613412049 |
| 220 | 5785 | 2.90916 × 10^-8 | 1 | 2.52021 × 10^-7 | 8.66302364 |
| 230 | 6077 | 2.56718 × 10^-8 | 1 | 2.2366 × 10^-7 | 8.712266432 |
| 240 | 6413 | 2.23892 × 10^-8 | 1 | 1.96266 × 10^-7 | 8.766082459 |
| 250 | 6685 | 2.01433 × 10^-8 | 1 | 1.77414 × 10^-7 | 8.80762149 |
| 260 | 7009 | 1.78557 × 10^-8 | 1 | 1.58111 × 10^-7 | 8.854950317 |
| 270 | 7321 | 1.59793 × 10^-8 | 1 | 1.42191 × 10^-7 | 8.89850221 |
| 280 | 7641 | 1.43268 × 10^-8 | 1 | 1.281 × 10^-7 | 8.941283764 |
| 290 | 7949 | 1.29516 × 10^-8 | 1 | 1.16316 × 10^-7 | 8.980801414 |
| 300 | 8269 | 1.1709 × 10^-8 | 1 | 1.05618 × 10^-7 | 9.020268862 |
| 310 | 8565 | 1.07018 × 10^-8 | 1 | 9.69093 × 10^-8 | 9.055439411 |
| 320 | 8885 | 9.7429 × 10^-9 | 1 | 8.85836 × 10^-8 | 9.092119741 |
| 330 | 9217 | 8.86925 × 10^-9 | 1 | 8.09657 × 10^-8 | 9.128804884 |
| 340 | 9521 | 8.16154 × 10^-9 | 1 | 7.477 × 10^-8 | 9.161255164 |
| 350 | 9833 | 7.5139 × 10^-9 | 1 | 6.9079 × 10^-8 | 9.193499355 |
| 360 | 10,185 | 6.86552 × 10^-9 | 1 | 6.33597 × 10^-8 | 9.228671329 |
| 370 | 10,477 | 6.38497 × 10^-9 | 1 | 5.91052 × 10^-8 | 9.256937657 |
| 380 | 10,821 | 5.87651 × 10^-9 | 1 | 5.45883 × 10^-8 | 9.28924397 |
| 390 | 11,133 | 5.46251 × 10^-9 | 1 | 5.08979 × 10^-8 | 9.31766895 |
| 400 | 11,473 | 5.05589 × 10^-9 | 1 | 4.72612 × 10^-8 | 9.347751728 |
| 410 | 11,785 | 4.71866 × 10^-9 | 1 | 4.42355 × 10^-8 | 9.374582815 |
| 420 | 12,145 | 4.36699 × 10^-9 | 1 | 4.10701 × 10^-8 | 9.404672841 |
| 430 | 12,425 | 4.11804 × 10^-9 | 1 | 3.88227 × 10^-8 | 9.427465851 |
| 440 | 12,769 | 3.83826 × 10^-9 | 1 | 3.62899 × 10^-8 | 9.454775637 |
| 450 | 13,117 | 3.58125 × 10^-9 | 1 | 3.39562 × 10^-8 | 9.481664378 |
| 460 | 13,429 | 3.37061 × 10^-9 | 1 | 3.20383 × 10^-8 | 9.505171827 |
| 470 | 13,765 | 3.16244 × 10^-9 | 1 | 3.01377 × 10^-8 | 9.529884418 |
| 480 | 14,117 | 2.96292 × 10^-9 | 1 | 2.83111 × 10^-8 | 9.555135024 |
| 490 | 14,425 | 2.80233 × 10^-9 | 1 | 2.68371 × 10^-8 | 9.576718091 |
| 500 | 14,761 | 2.64053 × 10^-9 | 1 | 2.53484 × 10^-8 | 9.599743847 |