Article

Some Upper Bounds for RKHS Approximation by Bessel Functions

1
Department of Economic Statistics, School of International Business, Zhejiang Yuexiu University, Shaoxing 312000, China
2
School of Information Engineering, Jingdezhen Ceramic University, Jingdezhen 333403, China
*
Author to whom correspondence should be addressed.
Axioms 2022, 11(5), 233; https://doi.org/10.3390/axioms11050233
Submission received: 19 April 2022 / Revised: 8 May 2022 / Accepted: 9 May 2022 / Published: 17 May 2022

Abstract

A reproducing kernel Hilbert space (RKHS) approximation problem arising from learning theory is investigated. Some K-functionals and moduli of smoothness with respect to RKHSs are defined with Fourier–Bessel series and Fourier–Bessel transforms, respectively. Their equivalence is shown, and with it an upper bound estimate for the best RKHS approximation is provided. The convergence rate is bounded by the defined modulus of smoothness, which shows that the RKHS approximation can attain the same approximation ability as that of the Fourier–Bessel series and the Fourier–Bessel transform. In particular, it is shown that for an RKHS produced by the Bessel operator, the convergence rate reduces to the bound for a corresponding convolution operator approximation. The investigations show some new applications of Bessel functions. The results obtained can be used to bound the approximation error in learning theory.

1. Introduction

The error analysis in learning theory shows that the learning rate of the kernel regularized regression depends upon the approximation ability of the kernel function spaces (see, for example, [1,2,3]).
Let X be a complete metric space and μ a Borel measure on X. Denote by L_μ^2(X) the Hilbert space of (real) square integrable functions on X with the inner product
f , g L μ 2 ( X ) = X f ( x ) g ( x ) d μ ( x ) , f , g L μ 2 ( X ) .
Suppose that K : X × X → R = (−∞, +∞) is continuous, symmetric and strictly positive definite, i.e., for any integer m ≥ 1 and any finite set {x_1, x_2, …, x_m} ⊂ X, the matrix (K(x_i, x_j))_{i,j=1}^m is positive definite. Assume that K ∈ L_{μ×μ}^2(X × X), i.e.,
X X K ( x , t ) 2 d μ ( x ) d μ ( t ) < + .
Then the linear operator L K : L μ 2 ( X ) L μ 2 ( X ) defined by
L K ( f , x ) = X K ( x , t ) f ( t ) d μ ( t ) , x X
is positive, and its range lies in C(X). Take L_K^{1/2} to be the linear operator on L_μ^2(X) satisfying L_K^{1/2} L_K^{1/2} = L_K, and let L_K^{-1/2} be the inverse of L_K^{1/2}. Additionally, define H_K = L_K^{1/2}(L_μ^2(X)). Then (H_K, ‖·‖_{H_K}) is a reproducing kernel Hilbert space associated with K_x(y) = K(x, y), i.e. (see [1,4,5,6,7]),
f ( x ) = f , K x H K , f H K , x X ,
where the inner product · , · H K is induced by a norm defined as
‖f‖_{H_K} = ‖L_K^{-1/2} f‖_{L_μ^2(X)},  f ∈ H_K,
i.e.,
L K 1 2 f H K = f L μ 2 ( X ) , f L μ 2 ( X ) .
One of the targets of learning theory is to find an unknown function f : X → R from random observations {(x_i, y_i)}_{i=1}^m drawn i.i.d. (independently and identically distributed) according to an unknown probability distribution ρ(x, y) = ρ_X(x) ρ(y|x) defined on X × R (see [1,6]). A usual algorithm to realize this aim is to solve the following kernel regularized optimization problem:
f z , λ = a r g min f H K 1 m i = 1 m f ( x i ) y i 2 + λ f H K 2 ,
where H_K is taken as the hypothesis space and λ > 0 is a parameter which balances the empirical error term (1/m) Σ_{i=1}^m (f(x_i) − y_i)² and the penalty term ‖f‖_{H_K}². Let f_ρ(x) = ∫_R y dρ(y|x) be the regression function. Then f_ρ is the least-squares-best predictor (see Section 9.4 of [8]), i.e.,
E((f_ρ(·) − y)²) = inf_{g ∈ L_{ρ_X}^2(X)} E((g(·) − y)²).
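To make the scheme concrete, here is a minimal numeric sketch of the regularized problem (5) (pure Python; the Gaussian kernel, the sample, and all names are our own illustrative choices, not those of the paper). It uses the standard representer-theorem fact that the minimizer has the form f_{z,λ}(x) = Σ_i c_i K(x, x_i) with coefficients solving (K + mλI)c = y.

```python
import math

def gauss_kernel(x, t, sigma=1.0):
    # a Gaussian (Mercer) kernel, chosen purely for illustration
    return math.exp(-((x - t) ** 2) / sigma ** 2)

def solve(A, b):
    # naive Gaussian elimination with partial pivoting (m is tiny here)
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            fac = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= fac * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def krr_fit(xs, ys, lam, kernel=gauss_kernel):
    # representer theorem: f_{z,lam}(x) = sum_i c_i K(x, x_i),
    # where the coefficients solve (K + m*lam*I) c = y
    m = len(xs)
    K = [[kernel(xi, xj) for xj in xs] for xi in xs]
    for i in range(m):
        K[i][i] += m * lam
    c = solve(K, ys)
    return lambda x: sum(ci * kernel(x, xi) for ci, xi in zip(c, xs))

# toy sample: noiseless observations of sin on five points
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.sin(x) for x in xs]
f = krr_fit(xs, ys, lam=1e-6)
```

With a small λ the fit nearly interpolates the sample; increasing λ trades empirical fit for a smaller RKHS norm.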
It is known that the convergence analysis of model (5) reduces to bounding the convergence rate of the error ‖f_{z,λ} − f_ρ‖_{L_{ρ_X}^2(X)}, which depends upon the decay of the best approximation I(f, γ)_{L_{ρ_X}^2(X)} defined as (see, e.g., [1,2,6])
I ( f , γ ) L ρ X 2 ( X ) = inf g H K , g H K γ f g L ρ X 2 ( X ) , γ > 0
as γ + .
Formula (6) concerns a decay rate which depends upon the approximation property of H_K, and it has been investigated by many mathematicians. For example, D. X. Zhou gives the decay of (6) with the RKHS interpolation theory (see [2,3]). P. X. Ye gives the decay using convolution operators in the Euclidean space R^d (see [9]). H. W. Sun gives a decay for (6) with the help of operator theory in a Hilbert space (see [10]). It is known that the Fourier–Bessel series is a good approximation tool and has been studied by many mathematicians (see, for example, [11,12,13,14,15,16]). Approximation by RBF networks of Delsarte translates has also been studied; in essence, it reduces to the approximation of Fourier–Bessel transforms (see, for example, [17,18,19,20]). So it is of interest for us to investigate the decay of I(f, γ)_{L_{ρ_X}^2(X)} with both the Fourier–Bessel series and the Fourier–Bessel transforms.
Let α > −1/2 and 1 ≤ p ≤ +∞ be given real numbers, and let L^p(R_+, dμ_α) denote the space of all measurable real functions on R_+ = [0, +∞) such that
‖f‖_{p,α} = (∫_{R_+} |f(x)|^p dμ_α)^{1/p} < +∞ for 1 ≤ p < +∞, and ‖f‖_{∞,α} = ess sup_{x ∈ R_+} |f(x)| < +∞ for p = +∞,
where d μ α ( x ) = x 2 α + 1 2 α Γ ( α + 1 ) d x . The normalized Bessel function j α ( z ) of the first kind and order α is
j_α(z) = Γ(α+1) Σ_{n=0}^{+∞} (−1)^n (z/2)^{2n} / (n! Γ(n+α+1)) = 2^α Γ(α+1) J_α(z) / z^α,  z ∈ R_+,
where
J_α(z) = (z/2)^α Σ_{n=0}^{+∞} (−1)^n (z/2)^{2n} / (n! Γ(n+α+1))
is the Bessel function of the first kind and order α, and Γ(α+1) is the Gamma function.
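The series above is easy to evaluate numerically. The following sketch (pure Python; names are ours) computes j_α(z) by truncating the series; a convenient sanity check is the closed form j_{1/2}(z) = sin(z)/z.

```python
import math

def j_alpha(z, alpha, terms=40):
    # normalized Bessel function via the truncated power series:
    # j_alpha(z) = Gamma(alpha+1) * sum_n (-1)^n (z/2)^(2n) / (n! Gamma(n+alpha+1))
    s = 0.0
    for n in range(terms):
        s += (-1) ** n * (z / 2.0) ** (2 * n) / (math.factorial(n) * math.gamma(n + alpha + 1))
    return math.gamma(alpha + 1) * s
```

For moderate z the truncated series is accurate to machine precision; for large z a different evaluation scheme would be needed.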
For f L 1 ( R + , d μ α ) , the usual Fourier–Bessel transform F B ( α ) ( f ) is defined as
F B ( α ) ( f ) ( λ ) = R + f ( x ) j α ( λ x ) d μ α , λ R + .
In the present paper, some investigations on the decay of I(f, γ)_{L_{ρ_X}^2(X)} are provided in the case that H_K is constructed with j_α(z) (z ∈ [0,1]) and with F_B^{(α)}(f). Some K-functionals and moduli of smoothness are defined with the help of semigroups of operators, and their equivalences are shown, with which the error for the decay is bounded. The results obtained are two kinds of upper bound estimates associated with Fourier–Bessel series and Fourier–Bessel transforms, respectively.
The paper is organized as follows. In Section 2, some notions and results on Fourier–Bessel series and Fourier–Bessel transforms are provided, with which two kinds of RKHSs are constructed, and the corresponding best RKHS approximation problem in this setting is restated. Some K-functionals and moduli of smoothness associated with Fourier–Bessel series and Fourier–Bessel transforms are provided and their equivalence is shown, with which some upper bounds for the best approximation are given in Section 3 and Section 4, respectively. All the proofs of the propositions, theorems and lemmas are given in Section 5. Some further analysis of the results is given in Section 6, which shows their value. A general proposition on the strong equivalence of K-functionals and moduli of smoothness is listed in Appendix A.

2. Preliminaries

Let λ_1 < λ_2 < ⋯ be the positive zeros of J_α(u), arranged in increasing order. It is well known that j_α(λ_n x), n = 1, 2, …, form a complete orthogonal system in L_α^2 = {f : ‖f‖_{L_α^2} = (∫_0^1 x^{2α+1} |f(x)|² dx)^{1/2} < +∞} (see, for example, [12,16,21]), i.e.,
∫_0^1 u^{2α+1} j_α(λ_n u) j_α(λ_m u) du = ‖j_α(λ_n ·)‖_{L_α^2}² δ_{m,n}.
Take j α * ( λ i x ) = j α ( λ i x ) j α ( λ i · ) L α 2 . Then
∫_0^1 u^{2α+1} j_α^*(λ_n u) j_α^*(λ_m u) du = δ_{m,n},
{j_α^*(λ_i x)}_{i=1}^∞ forms an orthonormal basis of L_α^2, and for any f ∈ L_α^2 there holds the Fourier–Bessel expansion
f ( x ) = i = 1 + a i ( f ) j α * ( λ i x ) , x [ 0 , 1 ] ,
where a i ( f ) = 0 1 x 2 α + 1 f ( x ) j α * ( λ i x ) d x and
f L α 2 = i = 1 + a i ( f ) 2 1 2 .
Lemma 1.
We have the following results:
(i)
Let Λ N . Then
i Λ c i j α * ( λ i x ) L α 2 = i Λ c i 2 1 2 .
(ii)
The generalized translation operator T x on L α 2 defined as
T x ( f ) ( y ) = Γ ( α + 1 ) π Γ ( α + 1 2 ) 0 π f x 2 + y 2 2 x y cos θ ( sin θ ) 2 α d θ , x , y [ 0 , 1 ]
has the expansion of
T x ( f ) ( y ) = i = 1 + a i * ( f ) j α * ( λ i x ) j α * ( λ i y ) , x , y [ 0 , 1 ] ,
where a i * ( f ) = 0 1 x 2 α + 1 f ( x ) j α ( λ i x ) d x , and
T h ( f ) ( · ) L α 2 f L α 2 , h [ 0 , 1 ] .
(iii)
The zeros { λ 1 , λ 2 , , } satisfy
λ_n = nπ + απ/2 − π/4 + O(1/n).
Proof. 
See Section 5. □
Inequality (13) is a theoretical basis for defining the moduli of smoothness with translation operators T x ( f ) ( y ) .
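The asymptotic formula (14) can be checked numerically. The sketch below (pure Python; names are ours) locates the first positive zeros of J_0 by bisection on the power series and compares them with the asymptotic guesses nπ + απ/2 − π/4 for α = 0; the deviation shrinks like O(1/n).

```python
import math

def J0(x, terms=60):
    # Bessel function of the first kind, order 0, by its power series
    # (term_{n+1} = -term_n * (x/2)^2 / (n+1)^2; accurate for moderate x)
    s, term = 0.0, 1.0
    for n in range(terms):
        s += term
        term *= -((x / 2.0) ** 2) / ((n + 1) ** 2)
    return s

def bisect(f, a, b, it=80):
    # simple bisection; assumes a sign change of f on [a, b]
    fa = f(a)
    for _ in range(it):
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# first three positive zeros of J_0, bracketed near the asymptotic guesses
# n*pi + alpha*pi/2 - pi/4 (here alpha = 0)
zeros = [bisect(J0, n * math.pi - math.pi / 4 - 0.5, n * math.pi - math.pi / 4 + 0.5)
         for n in (1, 2, 3)]
```

The computed zeros are close to 2.4048, 5.5201 and 8.6537, and the gap to the asymptotic guess decreases with n, as (14) predicts.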
Let {h_i}_{i=1}^{+∞} be a given sequence of positive real numbers such that the series
K x ( α ) ( y ) = K ( α ) ( x , y ) = i = 1 + h i j α * ( λ i x ) j α * ( λ i y ) , x , y [ 0 , 1 ] ,
converges uniformly for all x, y ∈ [0,1]; it is therefore a Mercer kernel. Then
L K ( α ) ( f , x ) = i = 1 + h i a i ( f ) j α * ( λ i x ) , x [ 0 , 1 ] .
Take
L_{K^{(α)}}^{1/2}(f, x) = Σ_{i=1}^{+∞} √(h_i) a_i(f) j_α^*(λ_i x),  x ∈ [0,1].
Then it is easy to verify that L K ( α ) = L K ( α ) 1 2 L K ( α ) 1 2 , and
H_{K^{(α)}} = L_{K^{(α)}}^{1/2}(L_α^2) = {g ∈ L_α^2 : ‖g‖_{K^{(α)}} = ‖L_{K^{(α)}}^{-1/2}(g)‖_{L_α^2} = (Σ_{i=1}^{+∞} |a_i(g)|²/h_i)^{1/2} < +∞}
is a RKHS in L α 2 associating with reproducing kernel K ( α ) ( x , y ) and an inner product · , · K ( α ) defined as
f , g K ( α ) = i = 1 + a i ( f ) a i ( g ) h i , f , g H K ( α ) .
Since
a i ( K x ( α ) ( · ) ) = 0 1 y 2 α + 1 K ( α ) ( x , y ) j α * ( λ i y ) d y = 0 1 y 2 α + 1 k = 1 + h k j α * ( λ k x ) j α * ( λ k y ) j α * ( λ i y ) d y = h i j α * ( λ i x ) ,
we have
f , K x ( α ) ( · ) K ( α ) = i = 1 + a i ( f ) a i ( K x ( α ) ( · ) ) h i = i = 1 + a i ( f ) h i j α * ( λ i x ) h i = i = 1 + a i ( f ) j α * ( λ i x ) = f ( x ) .
Equality (6) becomes
I ( f , γ ) L α 2 = inf g H K ( α ) , g K ( α ) γ f g L α 2 , γ > 0
as γ + .
Let C_*(R) be the class of even C^∞-functions on R = (−∞, +∞). Denote by A(R) the space of even C^∞-functions on R which are rapidly decreasing together with all their derivatives, i.e.,
∀ p, k ∈ N:  sup_{x ≥ 0} |x^p f^{(k)}(x)| < +∞,
where N is the set of natural numbers.
Let D_{*,a} denote the space of even C^∞-functions on R with support in [−a, a], a ≥ 0, and
D(R) = ∪_{a ≥ 0} D_{*,a}.
Additionally, define the generalized translation operator T x on L 1 ( R + , d μ α ) as
T x ( f ) ( y ) = Γ ( α + 1 ) π Γ ( α + 1 2 ) 0 π f x 2 + y 2 2 x y cos θ ( sin θ ) 2 α d θ , x , y R + .
and define a convolution on L 1 ( R + , d μ α ) by
( f B g ) ( x ) = R + T x ( f ) ( y ) g ( y ) d μ α ( y ) , f , g L 1 ( R + , d μ α ) , x R + .
For the Bessel operators
l α = d 2 d x 2 + 2 α + 1 x d d x
we have (see p. 12 or p. 177 of [22])
(−l_α)(j_α(λ·))(x) = λ² j_α(λx),  (−l_α)^{−1}(j_α(λ·))(x) = (1/λ²) j_α(λx),  λ, x ∈ R_+,
and therefore
(−l_α)^{1/2}(j_α(λ·))(x) = λ j_α(λx),  x ∈ R_+.
Moreover, we have the following lemma.
Lemma 2.
There hold the following:
(i) 
D ( R ) is dense in A ( R ) ;
(ii) 
Both D ( R ) and A ( R ) are dense in L p ( R + , d μ α ) , 1 p < + , and
D(R) ⊂ A(R) ⊂ L^p(R_+, dμ_α),  1 ≤ p < +∞;
(iii) 
If f A ( R ) , then F B ( α ) ( f ) A ( R ) and T x ( f ) A ( R ) ;
(iv) 
F_B^{(α)} is a topological isomorphism from A(R) to itself, and (F_B^{(α)})^{−1} = F_B^{(α)}.
(v) 
There hold
F B ( α ) ( f B g ) = F B ( α ) ( f ) F B ( α ) ( g ) , f , g L 1 ( R + , d μ α ) ,
( f B g ) ( x ) = R + F B ( α ) ( f ) ( λ ) F B ( α ) ( g ) ( λ ) j α ( λ x ) d μ α ( λ )
and
F B ( α ) ( T x ( f ) ) ( λ ) = j α ( λ x ) F B ( α ) ( f ) ( λ ) , f L 1 ( R + , d μ α ) .
It follows
T x ( f , y ) = R + F B ( α ) ( f ) ( λ ) j α ( λ x ) j α ( λ y ) d μ α ( λ ) , f L 1 ( R + , d μ α ) .
(vi) 
If f , F B ( α ) ( f ) L 1 ( R + , d μ α ) , then
f ( x ) = R + F B ( α ) ( f ) ( λ ) j α ( λ x ) d μ α ( λ ) , a . e . x R + ;
(vii) 
Let f A ( R ) or f L 2 ( R + , d μ α ) . Then
R + f ( x ) 2 d μ α = R + F B ( α ) ( f ) ( λ ) 2 d μ α ( λ ) ;
(viii) 
There hold the following relations
F_B^{(α)}(l_α^p(f))(λ) = (−1)^p λ^{2p} F_B^{(α)}(f)(λ),  f ∈ L¹(R_+, dμ_α),
T x ( f ) p , α f p , α , f L p ( R + , d μ α ) , 1 p < + ,
T_x(j_α(λ·))(y) = j_α(λx) j_α(λy),  x, y, λ ∈ R_+.
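The product formula for the generalized translation can be verified by quadrature. The sketch below (pure Python; names are ours) takes α = 1/2, where j_{1/2}(z) = sin(z)/z and the weight constant Γ(α+1)/(√π Γ(α+1/2)) equals 1/2, and checks T_x(j_α(λ·))(y) = j_α(λx) j_α(λy) with a midpoint rule.

```python
import math

def jhalf(z):
    # j_{1/2}(z) = sin(z)/z, the normalized Bessel function of order 1/2
    return 1.0 if z == 0 else math.sin(z) / z

def T(x, f, y, n=4000):
    # generalized (Delsarte) translation at alpha = 1/2:
    # T_x(f)(y) = (1/2) * int_0^pi f(sqrt(x^2 + y^2 - 2xy cos t)) sin(t) dt,
    # since Gamma(alpha+1) / (sqrt(pi) Gamma(alpha+1/2)) = 1/2 here;
    # the integral is computed with the midpoint rule
    h = math.pi / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        total += f(math.sqrt(x * x + y * y - 2.0 * x * y * math.cos(t))) * math.sin(t)
    return 0.5 * h * total

lam, x, y = 1.7, 0.6, 0.9
lhs = T(x, lambda u: jhalf(lam * u), y)   # T_x(j_alpha(lam .))(y)
rhs = jhalf(lam * x) * jhalf(lam * y)     # j_alpha(lam x) * j_alpha(lam y)
```

The same check can be run for other (x, y, λ); the midpoint rule converges fast because the integrand is smooth on [0, π].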
Proposition 2.1 of [23] shows that if ϕ L 1 ( R + , d μ α ) satisfies F B ( α ) ( ϕ ) 0 and F B ( α ) ( ϕ ) L 1 ( R + , d μ α ) , then
K(ϕ, x, y) = K_x(ϕ, y) = T_x(ϕ, y) = ∫_{R_+} F_B^{(α)}(ϕ)(λ) j_α(λx) j_α(λy) dμ_α(λ),  x, y ∈ R_+,
defines a Mercer kernel on R_+. We make the following assumption.
Assumption 1.
Let ϕ L 1 ( R + , d μ α ) satisfy F B ( α ) ( ϕ ) > 0 , F B ( α ) ( ϕ ) L 1 ( R + , d μ α ) and for any μ > 0 there is a real number a R + such that
{ λ R + : F B ( α ) ( ϕ ) ( λ ) 1 μ } [ 0 , a ] .
We point out that functions ϕ satisfying Assumption 1 do exist, and we give two examples.
Example 1.
For t ( 0 , + ) the function p t : [ 0 , + ) R + defined by
p t ( x ) = 2 α + 1 Γ ( α + 3 2 ) π t ( t 2 + x 2 ) α + 3 2
satisfies ‖p_t‖_{L¹(R_+, dμ_α)} = 1, p_t ⋆_B p_s = p_{t+s} and F_B^{(α)}(p_t)(λ) = e^{−tλ} for λ ∈ R_+ (see Problem 2 in Section 5.VIII of [22]).
Example 2.
For t , s ( 0 , + ) the function k t : R + R + defined by
k t ( x ) = e x 2 4 t ( 2 t ) α + 1
satisfies ‖k_t‖_{L¹(R_+, dμ_α)} = 1, k_t ⋆_B k_s = k_{t+s} and F_B^{(α)}(k_t)(λ) = e^{−tλ²} for λ ∈ R_+ (see Problem 1 in Section 5.VIII of [22]).
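The transform formula of Example 2 and the semigroup identity k_t ⋆_B k_s = k_{t+s} can be checked numerically at α = 0, where j_0 = J_0 and dμ_0(x) = x dx. The sketch below (pure Python; names, truncation limits and quadrature are our own choices) is an illustration only.

```python
import math

def J0(x, terms=60):
    # Bessel function of the first kind, order 0, by its power series
    s, term = 0.0, 1.0
    for n in range(terms):
        s += term
        term *= -((x / 2.0) ** 2) / ((n + 1) ** 2)
    return s

def k(t, x):
    # the kernel of Example 2 at alpha = 0: k_t(x) = exp(-x^2/(4t)) / (2t)
    return math.exp(-x * x / (4.0 * t)) / (2.0 * t)

def FB0(f, lam, upper, n=3000):
    # Fourier-Bessel transform at alpha = 0: int_0^inf f(x) J_0(lam x) x dx,
    # truncated at `upper` (the integrands decay fast), midpoint rule;
    # the inversion formula has exactly the same form
    h = upper / n
    return sum(f((i + 0.5) * h) * J0(lam * (i + 0.5) * h) * ((i + 0.5) * h)
               for i in range(n)) * h

t1 = FB0(lambda x: k(0.5, x), 1.2, upper=12.0)   # expect exp(-0.5 * 1.2^2)
# semigroup: (k_{0.5} *_B k_{0.3})(1.0), computed by inverting e^{-(0.5+0.3) lam^2}
conv = FB0(lambda l: math.exp(-0.8 * l * l), 1.0, upper=8.0)
```

Both quantities agree with the closed forms e^{−tλ²} and k_{t+s}(x) to within the quadrature error.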
Define
H K ( ϕ ) = { g L 2 ( R + , d μ α ) C ( R ) : F B ( α ) ( g ) F B ( α ) ( ϕ ) 1 2 L 2 ( R + , d μ α ) , g ( u ) = R + F B ( α ) ( g ) ( λ ) j α ( λ u ) d μ α ( λ ) }
with the norm ‖g‖_{H_{K(ϕ)}} = (∫_{R_+} |F_B^{(α)}(g)(λ)|² / F_B^{(α)}(ϕ)(λ) dμ_α)^{1/2}.
Define an inner product on H K ( ϕ ) as
g , f K ( ϕ ) = R + F B ( α ) ( f ) ( λ ) F B ( α ) ( g ) ( λ ) F B ( α ) ( ϕ ) ( λ ) d μ α , f , g H K ( ϕ ) .
It is known that K ( ϕ , x , y ) is a reproducing kernel of H K ( ϕ ) (see [24]), i.e.,
g , K ( ϕ , x , · ) K ( ϕ ) = g ( x ) , g H K ( ϕ ) , x R + .
We have
L K ( ϕ ) ( f , x ) = R + K x ( ϕ , u ) f ( u ) d μ α ( u ) = R + F B ( α ) ( ϕ ) ( λ ) F B ( α ) ( f ) ( λ ) j α ( λ x ) d μ α ( λ ) , f L 1 ( R + , d μ α ) .
Define, for a given real number r ∈ R, an operator
L K ( ϕ ) r ( f , x ) = R + F B ( α ) ( ϕ ) ( λ ) r F B ( α ) ( f ) ( λ ) j α ( λ x ) d μ α ( λ ) , f L 1 ( R + , d μ α ) .
Then it is easy to show that L K ( ϕ ) = L K ( ϕ ) 1 2 ( L K ( ϕ ) 1 2 ) = L K ( ϕ ) 1 2 L K ( ϕ ) 1 2 ,
L K ( ϕ ) 1 2 ( L 2 ( R + , d μ α ) ) = { g L 2 ( R + , d μ α ) : R + | F B ( α ) ( g ) ( λ ) | 2 F B ( α ) ( ϕ ) ( λ ) d μ α 1 2 < + } = H K ( ϕ ) ,
and
‖f‖_{K(ϕ)} = ‖L_{K(ϕ)}^{-1/2}(f)‖_{L²(R_+, dμ_α)},  f ∈ H_{K(ϕ)}.
In this case, the decay (6) becomes
I ( f , γ ) L 2 ( R + , d μ α ) = inf g H K ( ϕ ) , g K ( ϕ ) γ f g L 2 ( R + , d μ α ) , f L 2 ( R + , d μ α )
for γ + .
If F B ( α ) ( ϕ ) ( λ ) = 1 λ 2 , then we define the corresponding RKHS
H K ( ϕ ) = L K ( ϕ ) 1 2 ( A ( R ) ) = { g A ( R ) : R + λ 2 F B ( α ) ( g ) ( λ ) 2 d μ α 1 2 < + }
and for g H K ( ϕ ) , there holds
‖g‖_{K(ϕ)} = ‖L_{K(ϕ)}^{-1/2}(g)‖_{L²(R_+, dμ_α)} = (∫_{R_+} λ² F_B^{(α)}(g)(λ)² dμ_α(λ))^{1/2} = ‖(−l_α)^{1/2} g‖_{L²(R_+, dμ_α)}.
We have by (34) that
I(f, γ)_{L²(R_+, dμ_α)} = inf_{‖(−l_α)^{1/2} g‖_{L²(R_+, dμ_α)} ≤ γ} ‖f − g‖_{L²(R_+, dμ_α)}
for γ + .

3. An Upper Bound Estimate with Fourier–Bessel Series

To bound the decay of (18), we define a K-functional
D H K ( α ) ( f , t ) L α 2 = inf g H K ( α ) f g L α 2 + t g K ( α ) , f L α 2 , t > 0
and a modulus of smoothness
ω H K ( α ) ( f , t ) L α 2 = ( T K ( α ) ( t ) I ) f L α 2 , f L α 2 , t > 0 ,
where
T_{K^{(α)}}(t) f(x) = Σ_{i=1}^∞ e^{−t/√(h_i)} a_i(f) j_α^*(λ_i x),  x ∈ [0,1].
Then we have the following proposition, whose proof can be found in Section 5.
Proposition 1.
There holds an equivalent relation
D_{H_{K^{(α)}}}(f, t)_{L_α^2} ≍ ω_{H_{K^{(α)}}}(f, t)_{L_α^2},  f ∈ L_α^2, t > 0.
Proof. 
See Section 5. □
Theorem 1.
There is a constant C > 0 such that
I(f, γ)_{L_α^2} ≤ C ω_{H_{K^{(α)}}}(f, ‖f‖_{L_α^2}/γ)_{L_α^2},  f ∈ L_α^2,
as γ → +∞.
Proof. 
See Section 5. □
Taking h_i = 1/λ_i² in (15), we obtain the kernel
K_x(y) = K(x, y) = Σ_{i=1}^{+∞} (1/λ_i²) j_α^*(λ_i x) j_α^*(λ_i y),  x, y ∈ [0,1].
It follows that
H K = L K 1 2 ( L α 2 ) = { g L α 2 : g K = i = 1 + λ i 2 | a i ( g ) | 2 1 2 < + } ,
which shows that ‖g‖_K = ‖(−l_α)^{1/2}(g)‖_{L_α^2}, and
D_{H_K}(f, t)_{L_α^2} = inf_{g ∈ H_K} {‖f − g‖_{L_α^2} + t ‖(−l_α)^{1/2}(g)‖_{L_α^2}},  f ∈ L_α^2, t > 0,
and
ω H K ( f , t ) L α 2 = ( T K ( t ) I ) f L α 2 , f L α 2 , t > 0 ,
where
T_K(t) f(x) = Σ_{i=1}^∞ e^{−tλ_i} a_i(f) j_α^*(λ_i x),  x ∈ [0,1].
We have two corollaries.
Corollary 1.
For any f L α 2 , there holds
D_{H_K}(f, t)_{L_α^2} ≍ ω_{H_K}(f, t)_{L_α^2},  f ∈ L_α^2, t > 0.
Corollary 2.
For any f L α 2 , there holds
I(f, γ)_{L_α^2} ≤ C ω_{H_K}(f, ‖f‖_{L_α^2}/γ)_{L_α^2},  as γ → +∞.

4. An Upper Bound Estimate with the Fourier–Bessel Transform

To bound I ( f , γ ) L 2 ( R + , d μ α ) , we define a K-functional D K ( ϕ ) ( f , t ) L 2 ( R + , d μ α ) and a modulus ω K ( ϕ ) ( f , t ) L 2 ( R + , d μ α ) respectively corresponding to H K ( ϕ ) as
D_{K(ϕ)}(f, t)_{L²(R_+, dμ_α)} = inf_{g ∈ H_{K(ϕ)}} {‖f − g‖_{L²(R_+, dμ_α)} + t ‖g‖_{K(ϕ)}} = inf_{g ∈ L_{K(ϕ)}^{1/2}(L²(R_+, dμ_α))} {‖f − g‖_{L²(R_+, dμ_α)} + t ‖L_{K(ϕ)}^{-1/2}(g)‖_{L²(R_+, dμ_α)}},  f ∈ L²(R_+, dμ_α),
and
ω K ( ϕ ) ( f , t ) L 2 ( R + , d μ α ) = ( T K ( ϕ ) ( t ) I ) f L 2 ( R + , d μ α ) , f L 2 ( R + , d μ α ) , t > 0 ,
where
T_{K(ϕ)}(t) f(x) = ∫_{R_+} e^{−t/√(F_B^{(α)}(ϕ)(λ))} F_B^{(α)}(f)(λ) j_α(λx) dμ_α(λ).
The K-functional and the modulus are equivalent, i.e., we have the following proposition.
Proposition 2.
Let ϕ L 1 ( R + , d μ α ) satisfy Assumption 1. Then there holds the equivalence
D_{K(ϕ)}(f, t)_{L²(R_+, dμ_α)} ≍ ω_{K(ϕ)}(f, t)_{L²(R_+, dμ_α)},  f ∈ L²(R_+, dμ_α), t > 0.
We now give an upper bound estimate for (34).
Theorem 2.
Under the conditions of Proposition 2, there is a constant C > 0 such that
I(f, γ)_{L²(R_+, dμ_α)} ≤ C ω_{K(ϕ)}(f, ‖f‖_{L²(R_+, dμ_α)}/γ)_{L²(R_+, dμ_α)},  f ∈ L²(R_+, dμ_α),
as γ → +∞.
For F B ( α ) ( ϕ ) ( λ ) = 1 λ 2 we define a K-functional on L 2 ( R + , d μ α ) as
D_{l_α^{1/2}}(f, t)_{L²(R_+, dμ_α)} = inf_{g ∈ H_{K(ϕ)}} {‖f − g‖_{L²(R_+, dμ_α)} + t ‖(−l_α)^{1/2} g‖_{L²(R_+, dμ_α)}},  t > 0.
Define a modulus of smoothness as
ω l α 1 2 ( f , t ) L 2 ( R + , d μ α ) = ( T l α 1 2 ( t ) I ) f L 2 ( R + , d μ α ) , t > 0 ,
where
T l α 1 2 ( t ) f ( x ) = R + e λ t F B ( α ) ( f ) ( λ ) j α ( λ x ) d μ α ( λ ) .
Then we have the following two corollaries.
Corollary 3.
There holds the equivalent relation
D_{l_α^{1/2}}(f, t)_{L²(R_+, dμ_α)} ≍ ω_{l_α^{1/2}}(f, t)_{L²(R_+, dμ_α)},  f ∈ L²(R_+, dμ_α), t > 0.
Corollary 4.
There is a constant C > 0 such that
I(f, γ)_{L²(R_+, dμ_α)} ≤ C ω_{l_α^{1/2}}(f, ‖f‖_{L²(R_+, dμ_α)}/γ)_{L²(R_+, dμ_α)},  f ∈ L²(R_+, dμ_α).
We give further computations for T l α 1 2 ( t ) f ( x ) . By Example 1, we know F B ( α ) ( p t ) ( λ ) = e λ t , which, together with (21), gives
T l α 1 2 ( t ) f ( x ) = R + F B ( α ) ( p t ) ( λ ) F B ( α ) ( f ) ( λ ) j α ( λ x ) d μ α ( λ ) = R + F B ( α ) ( f B p t ) ( λ ) j α ( λ x ) d μ α ( λ ) = ( f B p t ) ( x ) , x R + ,
which with (42) shows that
ω l α 1 2 ( f , t ) L 2 ( R + , d μ α ) = ( f B p t ) f L 2 ( R + , d μ α ) , t > 0 .
Substituting (43) into (42), we obtain
I(f, γ)_{L²(R_+, dμ_α)} ≤ C ‖(f ⋆_B p_t) − f‖_{L²(R_+, dμ_α)} |_{t = ‖f‖_{L²(R_+, dμ_α)}/γ},  f ∈ L²(R_+, dμ_α).
Inequality (44) shows that the decay of I(f, γ)_{L²(R_+, dμ_α)} is controlled by the approximation order of the convolution operator f ⋆_B p_t for t = ‖f‖_{L²(R_+, dμ_α)}/γ.
For F B ( α ) ( ϕ ) ( λ ) = 1 λ 4 we define
H_{K(ϕ)} = L_{K(ϕ)}^{1/2}(A(R)) = {g ∈ A(R) : (∫_{R_+} λ⁴ F_B^{(α)}(g)(λ)² dμ_α(λ))^{1/2} < +∞}.
Then
‖g‖_{K(ϕ)} = (∫_{R_+} λ⁴ F_B^{(α)}(g)(λ)² dμ_α(λ))^{1/2} = ‖l_α g‖_{L²(R_+, dμ_α)}.
Define a K-functional on L 2 ( R + , d μ α ) as
D_{l_α}(f, t)_{L²(R_+, dμ_α)} = inf_{g ∈ H_{K(ϕ)}} {‖f − g‖_{L²(R_+, dμ_α)} + t ‖l_α g‖_{L²(R_+, dμ_α)}},  t > 0.
Define a modulus of smoothness as
ω l α ( f , t ) L 2 ( R + , d μ α ) = ( T l α ( t ) I ) f L 2 ( R + , d μ α ) , t > 0 ,
where
T l α ( t ) f ( x ) = R + e λ 2 t F B ( α ) ( f ) ( λ ) j α ( λ x ) d μ α ( λ ) .
Then we have the following two corollaries.
Corollary 5.
There holds
D_{l_α}(f, t)_{L²(R_+, dμ_α)} ≍ ω_{l_α}(f, t)_{L²(R_+, dμ_α)},  f ∈ L²(R_+, dμ_α), t > 0.
Corollary 6.
There is a constant C > 0 such that
I(f, γ)_{L²(R_+, dμ_α)} ≤ C ω_{l_α}(f, ‖f‖_{L²(R_+, dμ_α)}/γ)_{L²(R_+, dμ_α)},  f ∈ L²(R_+, dμ_α).
Additionally, by Example 2, we know F B ( α ) ( k t ) ( λ ) = e λ 2 t , which, together with (21), gives
T l α ( t ) f ( x ) = R + F B ( α ) ( k t ) ( λ ) F B ( α ) ( f ) ( λ ) j α ( λ x ) d μ α ( λ ) = R + F B ( α ) ( f B k t ) ( λ ) j α ( λ x ) d μ α ( λ ) = ( f B k t ) ( x ) , x R + ,
which, with (47), shows that
ω l α ( f , t ) L 2 ( R + , d μ α ) = ( f B k t ) f L 2 ( R + , d μ α ) , t > 0 .
Substituting (48) into (47), we have
I(f, γ)_{L²(R_+, dμ_α)} ≤ C ‖(f ⋆_B k_t) − f‖_{L²(R_+, dμ_α)} |_{t = ‖f‖_{L²(R_+, dμ_α)}/γ},  f ∈ L²(R_+, dμ_α).
We know by (49) that the decay of I ( f , γ ) L 2 ( R + , d μ α ) is controlled by the approximation order of the convolution operator f B k t for t = f L 2 ( R + , d μ α ) γ .
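By Parseval's identity (26), the modulus (48) can be evaluated on the transform side: ‖f ⋆_B k_t − f‖² = ∫_{R_+} (1 − e^{−tλ²})² F_B^{(α)}(f)(λ)² dμ_α(λ). The sketch below (pure Python; the choices α = 0 and f = k_1, for which F_B^{(0)}(f)(λ) = e^{−λ²}, are ours) confirms numerically that the error decreases as t → 0, in line with the discussion of (49).

```python
import math

def sq_err(t, upper=8.0, n=4000):
    # ||f *_B k_t - f||^2 for f = k_1 at alpha = 0, computed on the transform
    # side via Parseval: int_0^inf (1 - e^{-t lam^2})^2 e^{-2 lam^2} lam dlam
    # (midpoint rule, truncated at `upper`; the integrand decays fast)
    h = upper / n
    s = 0.0
    for i in range(n):
        lam = (i + 0.5) * h
        s += (1.0 - math.exp(-t * lam * lam)) ** 2 * math.exp(-2.0 * lam * lam) * lam
    return s * h

e1, e2 = sq_err(0.2), sq_err(0.05)   # the squared error shrinks as t -> 0
```

In closed form this integral equals 1/4 − 1/(2+t) + 1/(4+4t), which is O(t²) as t → 0, so the convolution operator approximation indeed improves as the regularization parameter sends t to 0.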

5. Proofs

Proof of Lemma 1.
Formula (11) follows from the orthonormality of {j_α^*(λ_i x)}_{i=1}^{+∞}. Formula (13) can be found in [11] or Lemma 1 of [12]. Formula (14) can be found in [16]. □
Proof of Lemma 2.
Proof of (i). See Proposition 2.III.1 on p. 51 of [22].
Proof of (ii). See Corollary 4.III.2 on p. 104 and Corollary 4.III.3 on p. 105 of [22].
Proof of (iii). See Theorem 5.III.1 on p. 127 and Proposition 5.II.4 on p. 129 of [22].
Proof of (iv). See Theorem 5.III.1 on p. 127 and (5.III.3) on p. 128 of [22].
Proof of (v). See Proposition 5.II.2 on p. 120 of [22] and (4.III.10) in Proposition 4.III.4 of [22].
Proof of (vi). See Theorem 5.II.2 on p. 126 of [22].
Proof of (vii). See (5.III.5) and (5.III.6) in Proposition 5.III.2 on p. 128 and (5.V.2) on p. 139 of [22], and Proposition 2.2 of [25].
Proof of (viii). Formula (27) may be found in (5.II.12) of Proposition 5.II.3 on p. 122 of [22]; (28) may be found in (4.II.9) of Proposition 4.II.2 on p. 94 of [22]; (29) may be found in (4.II.8) on p. 93 of [22]. □
Proof of Proposition 1.
We show it with the help of Proposition A1 in Appendix A.
It is easy to see that T K ( α ) ( t ) satisfies (A1) and (A2). Simple computations show
E f(x) = lim_{t→0} (T_{K^{(α)}}(t) f(x) − f(x))/t = Σ_{i=1}^{+∞} a_i(f) lim_{t→0} ((e^{−t/√(h_i)} − 1)/t) j_α^*(λ_i x) = −Σ_{i=1}^{+∞} (1/√(h_i)) a_i(f) j_α^*(λ_i x)
and
t E T_{K^{(α)}}(t) f(x) = −Σ_{i=1}^{+∞} (t/√(h_i)) e^{−t/√(h_i)} a_i(f) j_α^*(λ_i x).
It follows
‖t E T_{K^{(α)}}(t) f‖_{L_α^2} = (Σ_{i=1}^{+∞} ((t/√(h_i)) e^{−t/√(h_i)})² a_i²(f))^{1/2} ≤ (Σ_{i=1}^{+∞} a_i²(f))^{1/2} = ‖f‖_{L_α^2}.
Collecting (50) and (A5), we have (38). □
Proof of Theorem 1.
Because h_i → 0+ (i → +∞), defining
f_μ^{(α)}(x) = Σ_{1/h_i < μ} a_i(f) j_α^*(λ_i x),
we have for any g ∈ H_{K^{(α)}} that
f(x) − f_μ^{(α)}(x) = Σ_{1/h_i ≥ μ} a_i(f) j_α^*(λ_i x) = Σ_{1/h_i ≥ μ} a_i(f − g) j_α^*(λ_i x) + Σ_{1/h_i ≥ μ} a_i(g) j_α^*(λ_i x)
and
‖f − f_μ^{(α)}‖_{L_α^2} ≤ (Σ_{1/h_i ≥ μ} a_i(f − g)²)^{1/2} + (Σ_{1/h_i ≥ μ} a_i(g)²)^{1/2} ≤ ‖f − g‖_{L_α^2} + (Σ_{1/h_i ≥ μ} h_i · (a_i(g)²/h_i))^{1/2} ≤ ‖f − g‖_{L_α^2} + (1/√μ) (Σ_{1/h_i ≥ μ} a_i(g)²/h_i)^{1/2} ≤ ‖f − g‖_{L_α^2} + (1/√μ) ‖g‖_{K^{(α)}}.
By the arbitrariness of g ∈ H_{K^{(α)}}, we have
‖f − f_μ^{(α)}‖_{L_α^2} ≤ inf_{g ∈ H_{K^{(α)}}} {‖f − g‖_{L_α^2} + (1/√μ) ‖g‖_{K^{(α)}}}.
Take h_μ^{(α)}(x) = Σ_{1/h_i < μ} (a_i(f)/√(h_i)) j_α^*(λ_i x). Then f_μ^{(α)}(x) = L_{K^{(α)}}^{1/2}(h_μ^{(α)}, x) ∈ H_{K^{(α)}} and
‖f_μ^{(α)}‖_{K^{(α)}} = ‖h_μ^{(α)}‖_{L_α^2} = (Σ_{1/h_i < μ} |a_i(f)|²/h_i)^{1/2} ≤ √μ (Σ_{1/h_i < μ} a_i(f)²)^{1/2} ≤ √μ ‖f‖_{L_α^2}.
Take √μ ‖f‖_{L_α^2} = γ. Then 1/√μ = ‖f‖_{L_α^2}/γ. By the definition of I(f, γ)_{L_α^2}, we have (39). □
Proof of Proposition 2.
It is easy to see that T K ( ϕ ) ( t ) satisfies (A1) and (A2). Simple computations show
E f(x) = lim_{t→0} (T_{K(ϕ)}(t) f(x) − f(x))/t = lim_{t→0} ∫_{R_+} ((e^{−t/√(F_B^{(α)}(ϕ)(λ))} − 1)/t) F_B^{(α)}(f)(λ) j_α(λx) dμ_α(λ) = −∫_{R_+} (1/√(F_B^{(α)}(ϕ)(λ))) F_B^{(α)}(f)(λ) j_α(λx) dμ_α(λ)
and
(t E T_{K(ϕ)}(t) f)(x) = −∫_{R_+} (t/√(F_B^{(α)}(ϕ)(λ))) e^{−t/√(F_B^{(α)}(ϕ)(λ))} F_B^{(α)}(f)(λ) j_α(λx) dμ_α(λ).
Since f ∈ L²(R_+, dμ_α), we know by (26) that F_B^{(α)}(f)(·) ∈ L²(R_+, dμ_α). Additionally, since
(t/√(F_B^{(α)}(ϕ)(λ))) e^{−t/√(F_B^{(α)}(ϕ)(λ))} ≤ 1,  t ≥ 0,
we know
h_t(·) = −(t/√(F_B^{(α)}(ϕ)(·))) e^{−t/√(F_B^{(α)}(ϕ)(·))} F_B^{(α)}(f)(·) ∈ L²(R_+, dμ_α).
It is easy to see that
( t E T K ( ϕ ) ( t ) f ) ( x ) = F B ( α ) ( h t ) ( x ) .
It follows by (26) again that
‖t E T_{K(ϕ)}(t) f‖_{L²(R_+, dμ_α)} = ‖F_B^{(α)}(h_t)‖_{L²(R_+, dμ_α)} = ‖h_t‖_{L²(R_+, dμ_α)} ≤ ‖F_B^{(α)}(f)‖_{L²(R_+, dμ_α)} = ‖f‖_{L²(R_+, dμ_α)}.
By the same method, we have
‖T_{K(ϕ)}(t) f‖²_{L²(R_+, dμ_α)} = ∫_{R_+} (e^{−t/√(F_B^{(α)}(ϕ)(λ))})² F_B^{(α)}(f)(λ)² dμ_α(λ) ≤ ∫_{R_+} F_B^{(α)}(f)(λ)² dμ_α(λ) = ‖f‖²_{L²(R_+, dμ_α)}.
Collecting (54), (55) and (A6), we have (40). □
Proof of Theorem 2.
Define Δ_μ = {λ ∈ R_+ : 1/F_B^{(α)}(ϕ)(λ) < μ} and
f_μ(x) = ∫_{Δ_μ} F_B^{(α)}(f)(λ) j_α(λx) dμ_α(λ).
Then
f(x) − f_μ(x) = ∫_{R_+∖Δ_μ} F_B^{(α)}(f)(λ) j_α(λx) dμ_α(λ).
It follows that for any g ∈ H_{K(ϕ)}, there holds
f(x) − f_μ(x) = ∫_{R_+∖Δ_μ} F_B^{(α)}(f − g)(λ) j_α(λx) dμ_α(λ) + ∫_{R_+∖Δ_μ} F_B^{(α)}(g)(λ) j_α(λx) dμ_α(λ).
Denote the characteristic function of R_+∖Δ_μ by χ_{R_+∖Δ_μ}(λ). Then
f(x) − f_μ(x) = ∫_{R_+} χ_{R_+∖Δ_μ}(λ) F_B^{(α)}(f − g)(λ) j_α(λx) dμ_α(λ) + ∫_{R_+} χ_{R_+∖Δ_μ}(λ) F_B^{(α)}(g)(λ) j_α(λx) dμ_α(λ) = F_B^{(α)}(g_μ)(x) + F_B^{(α)}(b_μ)(x),
where
g_μ(λ) = χ_{R_+∖Δ_μ}(λ) F_B^{(α)}(f − g)(λ),  b_μ(λ) = χ_{R_+∖Δ_μ}(λ) F_B^{(α)}(g)(λ).
Since ϕ satisfies Assumption 1, by (30) we know g_μ ∈ D(R) ⊂ A(R) ⊂ L²(R_+, dμ_α). By (26), we have
F B ( α ) ( g μ ) L 2 ( R + , d μ α ) = g μ L 2 ( R + , d μ α ) .
By the same method, we have
F B ( α ) ( b μ ) L 2 ( R + , d μ α ) = b μ L 2 ( R + , d μ α ) .
It follows from (56), (57) and (58) that
‖f − f_μ‖_{L²(R_+, dμ_α)} ≤ ‖χ_{R_+∖Δ_μ}(·) F_B^{(α)}(f − g)(·)‖_{L²(R_+, dμ_α)} + ‖χ_{R_+∖Δ_μ}(·) F_B^{(α)}(g)(·)‖_{L²(R_+, dμ_α)} = (∫_{R_+∖Δ_μ} F_B^{(α)}(f − g)(λ)² dμ_α)^{1/2} + (∫_{R_+∖Δ_μ} F_B^{(α)}(g)(λ)² dμ_α)^{1/2} ≤ (∫_{R_+} F_B^{(α)}(f − g)(λ)² dμ_α)^{1/2} + (∫_{R_+∖Δ_μ} F_B^{(α)}(g)(λ)² dμ_α)^{1/2}.
By (26) and the definition of Δ_μ, we have
‖f − f_μ‖_{L²(R_+, dμ_α)} ≤ ‖f − g‖_{L²(R_+, dμ_α)} + (∫_{R_+∖Δ_μ} F_B^{(α)}(ϕ)(λ) · (F_B^{(α)}(g)(λ)² / F_B^{(α)}(ϕ)(λ)) dμ_α)^{1/2} ≤ ‖f − g‖_{L²(R_+, dμ_α)} + max_{λ ∈ R_+∖Δ_μ} F_B^{(α)}(ϕ)(λ)^{1/2} (∫_{R_+} F_B^{(α)}(g)(λ)² / F_B^{(α)}(ϕ)(λ) dμ_α)^{1/2} ≤ ‖f − g‖_{L²(R_+, dμ_α)} + (1/√μ) ‖g‖_{H_{K(ϕ)}}.
Because of the arbitrariness of g ∈ H_{K(ϕ)}, we have
‖f − f_μ‖_{L²(R_+, dμ_α)} ≤ inf_{g ∈ H_{K(ϕ)}} {‖f − g‖_{L²(R_+, dμ_α)} + (1/√μ) ‖g‖_{H_{K(ϕ)}}}.
Let h(x) = ∫_{Δ_μ} (F_B^{(α)}(f)(λ) / √(F_B^{(α)}(ϕ)(λ))) j_α(λx) dμ_α(λ). Then by (20) we have h ∈ L²(R_+, dμ_α) and
f_μ(x) = L_{K(ϕ)}^{1/2}(h, x) = ∫_{Δ_μ} F_B^{(α)}(f)(λ) j_α(λx) dμ_α(λ).
Therefore, f_μ ∈ H_{K(ϕ)}. It follows that
‖f_μ‖_{K(ϕ)} = ‖h‖_{L²(R_+, dμ_α)} = (∫_{Δ_μ} |F_B^{(α)}(f)(λ)|² / F_B^{(α)}(ϕ)(λ) dμ_α)^{1/2} ≤ √μ ‖F_B^{(α)}(f)‖_{L²(R_+, dμ_α)} = √μ ‖f‖_{L²(R_+, dμ_α)}.
Take √μ ‖f‖_{L²(R_+, dμ_α)} = γ. Then √μ = γ/‖f‖_{L²(R_+, dμ_α)}. Collecting (59) and (60), together with the definition of I(f, γ)_{L²(R_+, dμ_α)}, we arrive at
I(f, γ)_{L²(R_+, dμ_α)} ≤ inf_{g ∈ H_{K(ϕ)}} {‖f − g‖_{L²(R_+, dμ_α)} + (‖f‖_{L²(R_+, dμ_α)}/γ) ‖g‖_{H_{K(ϕ)}}} = D_{K(ϕ)}(f, ‖f‖_{L²(R_+, dμ_α)}/γ)_{L²(R_+, dμ_α)} ≤ C ω_{K(ϕ)}(f, ‖f‖_{L²(R_+, dμ_α)}/γ)_{L²(R_+, dμ_α)}. □

6. Further Discussions

We now give some comments on the results obtained in the present paper.
A more general problem arising from learning theory is to bound the decay rate of the function (see [2])
I(a, R) = inf_{b ∈ H, ‖b‖_H ≤ R} ‖a − b‖,  a ∈ B, R > 0,
where (B, ‖·‖) is a Banach space and (H, ‖·‖_H) is a dense subspace with ‖b‖ ≤ ‖b‖_H for b ∈ H.
It is known that the approximation ability of a function class is determined by the smoothness of its functions. So the decay of I ( a , R ) is influenced by the smoothness of the functions in H .
Smale and Zhou (see [2]) give the first estimate for the decay of (61) in the case that a ∈ (B, H)_{θ,∞}, which is a particular Besov-type space (in fact, it is the interpolation space of B and H). This work was improved in [9]. For B = H^s(R^d) (s > 0) (the Sobolev space; see [2] for the definition) and the reproducing kernel Hilbert space H = H_{K^σ}, Zhou gives the estimate (see [3])
inf_{‖g‖_{K^σ} ≤ R} ‖f − g‖_B ≤ c_{d,s} (log R)^{−s}
if R ≥ A ‖f‖_{L²(R^d)}, where K^σ is the Gaussian kernel
K σ ( x , y ) = exp { x y 2 σ 2 } , x , y [ 0 , 1 ] d , σ > 0 .
The tool used there is RKHS function interpolation.
It is known that the most commonly used tool in approximation theory is the K-functional, the most helpful relation is the strong equivalence between a K-functional and a corresponding modulus of smoothness (see, for example, [26]), and the most commonly used quantity for describing the approximation ability of a function class is the Jackson inequality expressed with a K-functional or a modulus of smoothness (see also [26]). As far as we know from the literature, no Jackson inequality had been established for the decay of (6), and there was little description of the smoothness of an RKHS. Recent research shows that any RKHS possesses some smoothness, which can be viewed through fractional derivatives and orthogonal series; it also shows that the well-known K-functional ([27])
D H K ( f , λ ) L ρ X 2 ( X ) = inf g H K ( f g + λ g H K ) , λ > 0 ,
is equivalent to a modulus of smoothness, where X is chosen as some compact set, for example, X = S^{d−1} = {x ∈ R^d : ‖x‖ = 1} or X = B^d = {x ∈ R^d : ‖x‖ ≤ 1}. It is valuable to extend these results to an RKHS defined on a noncompact set. The set used in the present paper is R_+, which is noncompact and has essential properties different from those of a compact set (see, for example, [5]). Moreover, this is the first time that a Jackson inequality is established to describe the decay (6). An advantage of this manuscript is the use of Fourier–Bessel series and Fourier–Bessel transforms, which turn the RKHS approximation problem into a classical Fourier–Bessel approximation problem and yield the decay rate with Fourier–Bessel approximation techniques.
The Jackson inequalities in Theorems 1 and 2 show that the RKHSs constructed with Fourier–Bessel series and Fourier–Bessel transforms have the same approximation ability as the Fourier–Bessel series and transforms themselves.
The moduli of smoothness defined in this manuscript are first-order moduli. It would be valuable to define higher-order moduli of smoothness and to establish the corresponding Jackson inequalities describing the decay of (6).

Author Contributions

Formal analysis, B.S.; Investigation, S.W.; Writing—original draft, M.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the NSF (Project No. 61877039), the NSFC/RGC Joint Research Scheme (Project Nos. 12061160462 and N-CityU102/20) of China, the NSF of Zhejiang Province (Project No. LY19F020013), and the Science and Technology Project of the Jiangxi Province Department of Education (Project No. GJJ211334).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

It is known that moduli of smoothness defined by a semi-group of operators have the same properties as the usual moduli of smoothness defined by differences of the function (see Chapter Two of [28]) and have been used to describe the degree of approximation in approximation theory (see, for example, [27,29,30,31,32]). We restate here a proposition giving a general strong equivalence.
Let $(B, \| \cdot \|_B)$ be a normed linear space and let $\{ T(t) \}_{t > 0}$, $T(t) : (B, \| \cdot \|_B) \to (B, \| \cdot \|_B)$, be a strongly continuous semi-group of operators satisfying
\[
T(s+t) = T(s)\,T(t), \tag{A1}
\]
\[
\lim_{t \to 0^+} T(t)f = f \quad \text{for all } f \in B, \tag{A2}
\]
and
\[
\| T(t) f \|_B \le \| f \|_B, \qquad f \in B, \; t > 0. \tag{A3}
\]
The infinitesimal generator E is given by
\[
E f = \lim_{t \to 0^+} \frac{T(t)f - f}{t} \quad (\text{in } B),
\]
whenever the limit exists. D ( E ) is the domain of E. Then we have the following proposition.
Proposition A1.
(Theorem 5.1 of [33]) Let $T(t)$ satisfy (A1), (A2) and (A3), let
\[
T(t) f \in D(E) \quad \text{for all } f \in B,
\]
and suppose that $E\,T(t) : B \to B$ for $t > 0$ with
\[
t\, \| E\, T(t) \|_{B \to B} \le N,
\]
where $N$ is a constant independent of $t$.
Then for $r \in \mathbb{N}$ and $t > 0$, there holds
\[
\omega_r(f, t)_B = \big\| (T(t) - I)^r f \big\|_B \sim \inf_{g \in D(E^r)} \Big\{ \| f - g \|_B + t^r \| E^r g \|_B \Big\} = K_{E^r}(f, t^r)_B,
\]
with equivalence constants independent of $f$ and $t$, where
\[
(T(s) - I)^r f = \sum_{k=1}^{r} \binom{r}{k} (-1)^{r-k}\, T(ks) f + (-1)^r f.
\]
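As a concrete illustration (not part of the paper), take the translation semi-group $T(t)f(x) = f(x+t)$ on bounded uniformly continuous functions, whose generator is $E = d/dx$. Then $(T(t)-I)^r f$ is exactly the $r$-th forward difference, and Proposition A1 reduces to the classical bound $\omega_r(f,t) \le t^r \| f^{(r)} \|_\infty$ for smooth $f$. The following sketch checks this numerically for $f = \sin$, for which $\| f^{(r)} \|_\infty = 1$:

```python
import numpy as np
from math import comb

def forward_difference(f, x, t, r):
    """r-th forward difference (T(t)-I)^r f(x) = sum_k C(r,k) (-1)^(r-k) f(x + k t),
    i.e. the expansion in Proposition A1 with T(t) the translation semi-group."""
    return sum(comb(r, k) * (-1) ** (r - k) * f(x + k * t) for k in range(r + 1))

# Dense grid over one period; f = sin has ||f^{(r)}||_inf = 1 for every r.
x = np.linspace(0.0, 2 * np.pi, 2000)
t = 0.1
for r in (1, 2, 3):
    omega = np.max(np.abs(forward_difference(np.sin, x, t, r)))
    # Classical Jackson-type bound: omega_r(sin, t) <= t^r * ||sin^{(r)}||_inf = t^r.
    # (Exactly, omega_r(sin, t) = (2 sin(t/2))^r <= t^r.)
    assert omega <= t ** r
```

For $f = \sin$ the difference can be computed in closed form, $(T(t)-I)^r \sin$ has sup-norm $(2\sin(t/2))^r \le t^r$, so the assertion holds with near equality, matching the $t^r$ rate that the K-functional side of Proposition A1 predicts.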

References

  1. Cucker, F.; Smale, S. On the mathematical foundations of learning. Bull. Amer. Math. Soc. 2001, 39, 1–49.
  2. Smale, S.; Zhou, D.X. Estimating the approximation error in learning theory. Anal. Appl. 2003, 1, 17–41.
  3. Zhou, D.X. Density problem and approximation error in learning theory. Abstr. Appl. Anal. 2013, 715683.
  4. Aronszajn, N. Theory of reproducing kernels. Trans. Amer. Math. Soc. 1950, 68, 337–404.
  5. Sun, H.W. Mercer theorem for RKHS on noncompact sets. J. Complex. 2005, 21, 337–349.
  6. Cucker, F.; Zhou, D.X. Learning Theory: An Approximation Theory Viewpoint; Cambridge University Press: Cambridge, UK, 2007.
  7. Ferreira, J.C.; Menegatto, V.A. Reproducing kernel Hilbert spaces associated with kernels on topological spaces. Funct. Anal. Appl. 2012, 46, 89–91.
  8. Williams, D. Probability with Martingales; Cambridge University Press: Cambridge, UK, 1990.
  9. Ye, P.X. Some Approximation Problems in Learning Theory, Post-Doctoral Research Work Report; Chinese Academy of Sciences: Beijing, China, 2003.
  10. Sun, H.W. Behavior of a functional in learning theory. Front. Math. China 2007, 2, 455–465.
  11. Abilov, V.A.; Abilova, F.V. Approximation of functions by Fourier-Bessel sums. Izv. Vyssh. Uchebn. Zaved. Math. 2001, 8, 3–9.
  12. Abilov, V.A.; Abilova, F.V.; Kerimov, M.K. Some issues concerning approximation of functions by Fourier-Bessel sums. Comput. Math. Math. Phys. 2013, 53, 867–873.
  13. Abilov, V.A.; Abilova, F.V.; Kerimov, M.K. Sharp estimates for the convergence rate of Fourier-Bessel series. Comput. Math. Math. Phys. 2015, 55, 907–916.
  14. Abilov, V.A.; Abilova, F.V.; Kerimov, M.K. On sharp estimates of the convergence of double Fourier-Bessel series. Comput. Math. Math. Phys. 2017, 57, 1735–1740.
  15. Abilov, V.A.; Kerimov, M.K. Some estimates for the error in mixed Fourier-Bessel expansions of functions of two variables. Comput. Math. Math. Phys. 2006, 46, 1465–1486.
  16. Hochstadt, H. The mean convergence of Fourier-Bessel series. SIAM Rev. 1967, 9, 211–218.
  17. Arteaga, C.; Marrero, I. Universal approximation by radial basis function networks of Delsarte translates. Neural Netw. 2013, 46, 299–305.
  18. Arteaga, C.; Marrero, I. Approximation in weighted p-mean by RBF networks of Delsarte translates. J. Math. Anal. Appl. 2014, 414, 450–460.
  19. Dai, F.; Wang, H.P. Interpolation by weighted Paley-Wiener spaces associated with the Dunkl transform. J. Math. Anal. Appl. 2012, 390, 556–572.
  20. Marrero, I. Radial basis function neural networks of Hankel translates as universal approximators. Anal. Appl. 2019, 17, 897–930.
  21. Vladimirov, V.S. Equations of Mathematical Physics; Marcel Dekker: New York, NY, USA, 1971.
  22. Trimèche, K. Generalized Harmonic Analysis and Wavelet Packets; Gordon and Breach Science Publishers: Singapore, 2001.
  23. Sheng, B.H. The weighted norm for some Mercer kernel matrices. Acta Math. Sci. 2013, 33A, 6–15. (In Chinese)
  24. Sheng, B.H.; Zuo, L. Error analysis of the kernel regularized regression based on refined convex losses and RKBSs. Int. J. Wavelets Multiresolut. Inf. Process. 2021, 19, 2150012.
  25. Quadih, S.E.; Daher, R. Estimates for the generalized Fourier-Bessel transform in the space $L^2_{\alpha,n}$. Int. J. Math. Model. Comput. 2016, 6, 269–275.
  26. Ditzian, Z.; Totik, V. Moduli of Smoothness; Springer: New York, NY, USA, 1987.
  27. Sheng, B.H.; Wang, J.L. On the K-functional in learning theory. Anal. Appl. 2020, 18, 423–446.
  28. Butzer, P.L.; Berens, H. Semi-Groups of Operators and Approximation; Springer: New York, NY, USA, 1967.
  29. Dai, F.; Ditzian, Z. Strong converse inequality for Poisson sums. Proc. Amer. Math. Soc. 2005, 133, 2609–2611.
  30. Dai, F.; Ditzian, Z. Cesàro summability and Marchaud inequality. Constr. Approx. 2007, 25, 73–88.
  31. Ditzian, Z. New moduli of smoothness on the unit ball and other domains, introduction and main properties. Constr. Approx. 2014, 40, 1–36.
  32. Ditzian, Z. New moduli of smoothness on the unit ball, applications and computability. J. Approx. Theory 2014, 180, 49–76.
  33. Ditzian, Z.; Ivanov, K.G. Strong converse inequalities. J. Anal. Math. 1993, 61, 61–111.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
