Article

On Probabilistic Convergence Rates of Symmetric Stochastic Bernstein Polynomials

1 School of Science, Zhejiang University of Science and Technology, Hangzhou 310023, China
2 School of Statistics and Mathematics, Zhejiang Gongshang University, Hangzhou 310018, China
3 School of Mathematical Sciences, Dalian University of Technology, Dalian 116016, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(20), 3281; https://doi.org/10.3390/math13203281
Submission received: 29 July 2025 / Revised: 12 September 2025 / Accepted: 29 September 2025 / Published: 14 October 2025
(This article belongs to the Special Issue Nonlinear Functional Analysis: Theory, Methods, and Applications)

Abstract

This paper analyzes the exponential convergence properties of Symmetric Stochastic Bernstein Polynomials (SSBPs), a novel approximation framework that combines the deterministic precision of classical Bernstein polynomials (BPs) with the adaptive node flexibility of Stochastic Bernstein Polynomials (SBPs). Through innovative applications of concentration inequalities for order statistics and modulus-of-smoothness analysis, we derive the first probabilistic convergence rates for SSBPs across all $L_p$ ($1 \le p \le \infty$) norms and in pointwise approximation. Numerical experiments demonstrate dual advantages: (1) SSBPs achieve $L_\infty$ errors comparable to those of BPs in approximating fundamental stochastic functions (the uniform distribution function and the normal density), while significantly outperforming SBPs; (2) empirical convergence curves validate the exponential decay of approximation errors. These results position SSBPs as a principled solution for stochastic approximation problems requiring both mathematical rigor and computational adaptability.

1. Introduction

Modern engineering systems increasingly rely on function approximation from irregular measurements, for example, medical imaging reconstruction [1], robotic sensor networks [2], and shape design problems [3], to name a few. Among the various approximation methods, classical Bernstein polynomials exhibit excellent properties and have broad applications [4,5,6,7]. For a function $f \in C[0,1]$, the degree-$n$ Bernstein polynomial $B_n f$ is constructed as follows:
$$B_n f(t) = \sum_{k=0}^{n} f\!\left(\frac{k}{n}\right) B_k^n(t), \quad t \in [0,1], \qquad (1)$$
where $B_k^n(t) = \binom{n}{k} t^k (1-t)^{n-k}$, and $\binom{n}{k}$ denotes the binomial coefficient. This polynomial approximates the function $f$ in the Banach space $C[0,1]$, and the approximation error is bounded by the inequality (see, e.g., [8]):
$$\left\| B_n f - f \right\|_{C[0,1]} \le \omega_2\!\left(f, \frac{1}{\sqrt{n}}\right), \quad f \in C[0,1],$$
where the second-order modulus of smoothness $\omega_2(f, h)$ is defined as
$$\omega_2(f, h) := \sup_{\substack{t \pm \tau \in [a,b] \\ |\tau| \le h}} \left| f(t+\tau) + f(t-\tau) - 2 f(t) \right|. \qquad (2)$$
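As a concrete illustration of the construction above (a minimal sketch for this article, not code from the paper; the helper names are ours), the Bernstein polynomial can be evaluated directly from the definition:

```python
import math

def bernstein_basis(n, k, t):
    """Bernstein basis polynomial B_k^n(t) = C(n,k) t^k (1-t)^(n-k)."""
    return math.comb(n, k) * t**k * (1 - t) ** (n - k)

def bernstein_poly(f, n, t):
    """Degree-n Bernstein polynomial: B_n f(t) = sum_k f(k/n) B_k^n(t)."""
    return sum(f(k / n) * bernstein_basis(n, k, t) for k in range(n + 1))
```

A convenient sanity check: the operator reproduces linear functions exactly, and for $f(x) = x^2$ one has $B_n f(t) = t^2 + t(1-t)/n$, so the error decays like $1/n$, consistent with the $\omega_2$ bound above. For very large $n$, the binomial coefficients become huge and the De Casteljau recursion of Section 5 is preferable.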
However, as Wu et al. [9] pointed out, their performance may be limited when handling scattered or noisy data. They can also face challenges in accurately approximating complex functions in high-dimensional spaces. In many real-world problems, data at equally spaced points may not be available, and even when it appears evenly distributed, it may suffer from random errors caused by signal delays or measurement inaccuracies.
Let $X$ be a random variable uniformly distributed on $[0,1]$. Let $\{X_{n,k}\}_{k=0}^{n}$ be $(n+1)$ independent copies of $X$, which we list in ascending order (here we ignore the event that some of the random variables take the same value, which has probability zero):
$$X_{n,0}^* < X_{n,1}^* < \cdots < X_{n,n}^*,$$
to obtain order statistics [10]. For brevity of notation, we suppress the asterisks in the displayed inequalities above and stick with the notation $\{X_{n,k}\}_{k=0}^{n}$ for order statistics.
To mitigate uncertainties brought on by unreliable sampling sites, Wu et al. [9] introduced a class of SBPs:
$$B_n^X f(t) := \sum_{k=0}^{n} f(X_{n,k}) B_k^n(t), \qquad (3)$$
and proved the following result: let $\epsilon > 0$ and suppose that $\omega\!\left(f, \frac{1}{\sqrt{n}}\right) < \frac{\epsilon}{4.2}$. Then,
$$P\left\{ \left| B_n^X f(t) - f(t) \right| > \epsilon \right\} \le \frac{15}{64} \cdot \frac{n\, \omega^6\!\left(f, \frac{1}{\sqrt{n}}\right)}{\epsilon^6},$$
in which $\omega(f, \cdot)$ denotes the modulus of continuity of $f$ defined by
$$\omega(f, h) := \sup_{\substack{0 \le x, y \le 1 \\ |x - y| \le h}} \left| f(x) - f(y) \right|, \quad 0 \le h \le 1.$$
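A sketch of this sampling scheme (our illustrative code, with a seeded generator for reproducibility): draw $n+1$ uniform samples, sort them to obtain the order statistics, and use them as the sample sites in the Bernstein sum:

```python
import math
import random

def sbp(f, n, t, seed=None):
    """Stochastic Bernstein polynomial B_n^X f(t): the order statistics
    X_{n,0} < ... < X_{n,n} of n+1 Uniform(0,1) draws replace the
    equally spaced nodes k/n.  Suitable for moderate n; very large n
    should use a de Casteljau-type evaluation instead."""
    rng = random.Random(seed)
    xs = sorted(rng.random() for _ in range(n + 1))  # order statistics
    return sum(f(xs[k]) * math.comb(n, k) * t**k * (1 - t) ** (n - k)
               for k in range(n + 1))
```

Constants are reproduced exactly (the basis sums to one), while for non-constant $f$ the value is random: the deviation of $X_{n,k}$ from $k/n$ is of order $1/\sqrt{n}$, which is the source of the $\omega(f, 1/\sqrt{n})$ terms in the estimates above.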
Sun et al. [11,12,13] analyzed the polynomial error of the classical SBP method. Adell and Cárdenas-Morales [14,15,16] recently introduced a type of SBP that enhances probabilistic error estimates for smooth functions. On that basis, Gao et al. [17,18] constructed a series of stochastic quasi-interpolants for scattered-data problems, which are computationally efficient and highly accurate in approximation. Gal and Niculescu [19] examined efficient approximation methods for random functions using stochastic Bernstein polynomials within capacity spaces. Bouzebda [20] combined k-nearest neighbors with single-index modeling to handle high-dimensional functional mixing data through dimension reduction, effectively mitigating the curse of dimensionality. These estimates are similar to those established in the learning-theory paradigm advocated by Cucker and Smale [21] and Cucker and Zhou [22].
Moreover, the exponential convergence of SBPs is crucial for stochastic problems in practice, such as probability density estimation, numerical solution of stochastic differential equations, and denoising problems. Taking these factors into account, Sun et al. [12] analyzed the exponential convergence of the classical SBP method and, in order to analyze the overall error of SBPs, introduced the concept of “$L_p$-probabilistic convergence”: for $1 \le p \le \infty$ and $f \in C[0,1]$, define
$$\|f\|_p := \begin{cases} \left( \int_0^1 |f(t)|^p \, dt \right)^{1/p}, & \text{if } 1 \le p < \infty, \\[4pt] \max_{t \in [0,1]} |f(t)|, & \text{if } p = \infty. \end{cases}$$
On that basis, they gave power and exponential convergence rates in terms of the modulus of continuity, established Gaussian tail bounds for $1 \le p \le 2$ and for the case $p = \infty$, and also gave a pointwise exponential error estimate.
In order to endow SBPs with the same approximation power as the classical Bernstein polynomials given by Equation (1), Gao et al. [23] introduced a family of symmetric stochastic Bernstein polynomials (SSBPs), called optimal in the sense that they possess the same order of error estimates as classical Bernstein polynomials. The SSBP is defined as
$$B_n^T f(t) := \frac{1}{2} \sum_{k=0}^{n} \left[ f(X_{n,k}) + f(Y_{n,k}) \right] B_k^n(t), \qquad (4)$$
in which $Y_{n,k} = \frac{2k}{n} - X_{n,k}$. SSBPs are stochastic variants of classical Bernstein polynomials. Among other flexible features, SSBPs allow the underlying sampling to take place at scattered sites, which is a more practical way of collecting data in many real-world problems. Ref. [23] established the order of convergence in probability in terms of the modulus of smoothness, which epitomizes an optimal pointwise error estimate for classical Bernstein polynomial approximation: let $\epsilon > 0$ be given and assume that $\omega_\varphi^2\!\left(f, \frac{1}{\sqrt{n}}\right) < \frac{\epsilon}{33}$. Then, for any $0 \le t \le 1$, the following probabilistic estimate holds true:
$$P\left\{ \left| B_n^T f(t) - f(t) \right| > \epsilon \right\} \le \frac{152 \max\left\{ 59 \left[ \omega_\varphi^2\!\left(f, \frac{1}{\sqrt{n}}\right) \right]^2, \; 100 \left[ \omega_2\!\left(f, \frac{1}{\sqrt{n}}\right) \right]^2 \right\}}{\epsilon^2},$$
where the second-order Ditzian–Totik $\varphi$-modulus of smoothness (with the usual weight $\varphi(t) = \sqrt{t(1-t)}$) is defined by
$$\omega_\varphi^2(f, h) = \sup_{\substack{t \pm \varphi(t)\tau \in [0,1] \\ |\tau| \le h}} \left| f(t + \varphi(t)\tau) + f(t - \varphi(t)\tau) - 2 f(t) \right|, \quad h \ge 0,$$
which characterizes the approximation order of $B_n f$ to $f$ in the Banach space $C[0,1]$.
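Under the assumption that the target function is defined wherever the mirrored node $Y_{n,k} = 2k/n - X_{n,k}$ lands (it can fall outside $[0,1]$, which is why Section 2 introduces an extension), a minimal sketch of the SSBP is as follows (our illustrative code, not from the paper):

```python
import math
import random

def ssbp(f_ext, n, t, seed=None):
    """Symmetric stochastic Bernstein polynomial
    B_n^T f(t) = (1/2) sum_k [f(X_{n,k}) + f(Y_{n,k})] B_k^n(t),
    Y_{n,k} = 2k/n - X_{n,k}.  f_ext must accept arguments in [-1, 2]."""
    rng = random.Random(seed)
    xs = sorted(rng.random() for _ in range(n + 1))  # order statistics
    total = 0.0
    for k in range(n + 1):
        y = 2 * k / n - xs[k]  # mirror of X_{n,k} about k/n
        w = math.comb(n, k) * t**k * (1 - t) ** (n - k)
        total += 0.5 * (f_ext(xs[k]) + f_ext(y)) * w
    return total
```

The symmetrization is visible on linear functions: $\frac{1}{2}\left[f(X_{n,k}) + f(2k/n - X_{n,k})\right] = f(k/n)$, so the randomness cancels exactly and the SSBP coincides with the classical Bernstein polynomial, whatever the sample.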
Motivated by the method in [12], this paper studies the exponential convergence of SSBPs. By employing concentration inequalities for order statistics and properties of the modulus of smoothness, we establish probabilistic convergence rates for SSBPs both in the $L_p$ norm, for $1 \le p \le \infty$, and pointwise. Numerical experiments approximating two standard functions demonstrate that SSBPs achieve approximation errors comparable to those of classical Bernstein polynomials while outperforming SBPs. SSBPs present a promising tool for function approximation in stochastic settings, particularly when dealing with scattered and noisy data, and open new avenues for research in stochastic approximation theory and numerical solution of stochastic partial differential equations.
The paper is structured as follows: Section 2 covers the necessary preliminaries. Section 3 presents error estimates for the exponential convergence in the L p norm, while Section 4 addresses pointwise exponential convergence estimates. In Section 5, numerical examples are provided to validate the theoretical findings. Finally, Section 6 concludes the paper with a summary of the key results and potential directions for future research.

2. Preliminaries

2.1. Concentration Inequalities for Order Statistics

For a given $n \in \mathbb{N}$, let
$$0 \le X_{n,0} < X_{n,1} < \cdots < X_{n,n} \le 1 \qquad (6)$$
be the order statistics of $(n+1)$ independent copies of the random variable $X$ uniformly distributed in $(0,1)$. The following is a standard result in order statistics (e.g., [10,24]).
Theorem 1.
If $X$ is a random variable with density function $f(x)$ and distribution function $F(x)$, and if
$$X_{(1)} < X_{(2)} < \cdots < X_{(n)}$$
are the order statistics of the $n$ sample values of $X$, then the density function $f_{X_{(j)}}$ of $X_{(j)}$ is given by
$$f_{X_{(j)}}(x) = \frac{n!}{(j-1)!\,(n-j)!}\, [F(x)]^{j-1} [1 - F(x)]^{n-j} f(x).$$
Proposition 1.
The random variables $X_{n,k}$ (as described in (6)) have, respectively, the density functions $p_{n,k}(x) := (n+1) \cdot B_k^n(x)$, $k = 0, 1, \ldots, n$. That is, $X_{n,k}$ obeys the law $B(k+1, n-k+1)$, the beta distribution with parameters $k+1$ and $n-k+1$.
Order statistics are intrinsically dependent random variables, and must be studied as such. Direct calculation gives
$$E(X_{n,k}) = \frac{k+1}{n+2}, \quad \text{and} \quad \operatorname{Var}(X_{n,k}) = \frac{(k+1)(n-k+1)}{(n+2)^2 (n+3)}.$$
For $0 \le j < k \le n$, the covariance $\operatorname{Cov}(X_{n,j}, X_{n,k})$ of the two random variables $X_{n,j}$, $X_{n,k}$ is
$$\operatorname{Cov}(X_{n,j}, X_{n,k}) = \frac{(j+1)(n-k+1)}{(n+2)^2 (n+3)}.$$
The last equation can be found in [10]. Recall that $Y_{n,k} = \frac{2k}{n} - X_{n,k}$. It then follows that
$$E(Y_{n,k}) = \frac{2k}{n} - \frac{k+1}{n+2}, \qquad \operatorname{Cov}(Y_{n,j}, Y_{n,k}) = \operatorname{Cov}(X_{n,j}, X_{n,k}).$$
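These closed-form moments are easy to check by simulation (an illustrative Monte Carlo sketch of ours, not part of the paper):

```python
import random

def order_stat_mean(n, k, trials=100_000, seed=1):
    """Monte Carlo estimate of E(X_{n,k}), the mean of the k-th order
    statistic of n+1 i.i.d. Uniform(0,1) samples; the exact value is
    (k+1)/(n+2) by the Beta(k+1, n-k+1) law."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        acc += sorted(rng.random() for _ in range(n + 1))[k]
    return acc / trials
```

With $n = 4$ and $k = 1$ the estimate should land near $(k+1)/(n+2) = 1/3$, up to Monte Carlo noise of order $1/\sqrt{\text{trials}}$.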

2.2. Extension and Modulus of Continuity

Note that the construction of an SSBP (4) requires the target function to be defined on $[-1, 2]$. If a target function is already defined on $[-1, 2]$, then nothing needs to be done. If a target function $f$ is only defined on $[0,1]$, then we extend it to a continuous function on $[-1, 2]$ in such a way that the graph of $f$ on the interval $[-1, 0]$ (resp. $(1, 2]$) is symmetric to that on the interval $[0,1]$ with respect to the point $(0, f(0))$ (resp. $(1, f(1))$). Precisely, we define the extension $f_E$ as follows:
$$f_E(t) = \begin{cases} 2 f(0) - f(-t), & -1 \le t < 0, \\ f(t), & 0 \le t \le 1, \\ 2 f(1) - f(2 - t), & 1 < t \le 2. \end{cases} \qquad (9)$$
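A direct transcription of this extension (the helper name `extend` is ours):

```python
def extend(f):
    """Extend f from [0,1] to [-1,2] by point symmetry about
    (0, f(0)) and (1, f(1)), as in Equation (9)."""
    def f_E(t):
        if t < 0:
            return 2 * f(0.0) - f(-t)     # -1 <= t < 0
        if t <= 1:
            return f(t)                   # 0 <= t <= 1
        return 2 * f(1.0) - f(2 - t)      # 1 < t <= 2
    return f_E
```

For example, with $f(x) = x^2$ one gets $f_E(-0.5) = 2 \cdot 0 - 0.25 = -0.25$ and $f_E(1.5) = 2 \cdot 1 - f(0.5) = 1.75$, and the extension is continuous at $0$ and $1$ by construction.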
In [23], it is shown that $\omega_2(f_E, h)$ is bounded from above by an (absolute) constant multiple of $\omega_2(f, h)$; see the following lemma.
Lemma 1.
Let $f \in C[0,1]$ and let $f_E$ be the extension of $f$ as defined in Equation (9), $0 \le \tau \le h \le 1$. Then the central second-order finite difference of $f_E$ on $[-1, 2]$, $\Delta_\tau^2 f_E(t) = f_E(t+\tau) + f_E(t-\tau) - 2 f_E(t)$, is bounded above by that of $f$ on $[0,1]$. Moreover, the following inequality holds true:
$$\omega_2(f_E, h) \le 5\, \omega_2(f, h).$$

3. Exponential Decay Rate of the L p -Probabilistic Convergence

For consistency with the contemporary literature, we adopt the functional norm initially introduced by Sun et al. [12]: for $1 \le p \le \infty$ and $f \in C[0,1]$,
$$\|f\|_p := \begin{cases} \left( \int_0^1 |f(t)|^p \, dt \right)^{1/p}, & \text{if } 1 \le p < \infty, \\[4pt] \max_{t \in [0,1]} |f(t)|, & \text{if } p = \infty. \end{cases}$$

3.1. L -Probabilistic Convergence

Lemma 2
([12]). If $X \sim \operatorname{Beta}(\alpha, \beta)$, then
$$P\left\{ |X - E(X)| > r \right\} \le 2 \exp\left( -2(\alpha + \beta + 1) r^2 \right), \quad r > 0.$$
Lemma 3
([4]). If $f \in C[a,b]$ and $0 \le \lambda, h < \infty$, then
$$\omega_2(f, \lambda h) \le (1 + \lambda)^2\, \omega_2(f, h).$$
In what follows, we denote, for each $n \in \mathbb{N}$ and $t \in [0,1]$,
$$Z_n(t) := \sum_{k=0}^{n} p_{n,k}(t) \left( X_{n,k} - E X_{n,k} \right)^2. \qquad (13)$$
Our primary objective in this study is to establish rigorous upper bounds for the probability quantities
$$P\left\{ \left\| B_n^T f - f \right\|_p > \epsilon \right\}, \quad 1 \le p \le \infty,$$
where ϵ > 0 is a fixed error tolerance and n N represents the polynomial degree. To derive these estimates, we first require the following foundational lemma:
Lemma 4.
Let $\epsilon > 0$ and $f \in C[0,1]$ be given. Suppose that $\omega_2\!\left(f, \frac{1}{\sqrt{n}}\right) < \frac{\epsilon}{12 + 20/n}$. Then the following inequality holds true:
$$P\left\{ \left\| B_n^T f - f \right\|_p > \epsilon \right\} \le P\left\{ \|Z_n\|_p > \frac{\epsilon}{20\, n\, \omega_2\!\left(f, \frac{1}{\sqrt{n}}\right)} \right\}, \quad 1 \le p \le \infty.$$
Proof. 
For a fixed $1 \le p \le \infty$, we first use the triangle inequality in $L_p$ to write
$$\left\| B_n^T f - f \right\|_p \le \left\| B_n f - B_n^T f \right\|_p + \left\| B_n f - f \right\|_p.$$
The second term can be estimated deterministically, based on Equation (1):
$$\left\| B_n f - f \right\|_p \le \left\| B_n f - f \right\|_\infty \le \omega_2\!\left(f, \frac{1}{\sqrt{n}}\right).$$
For the first term $\left\| B_n^T f - B_n f \right\|_p$, based on Equation (4) and the definition of the $L_p$ norm, we have
$$\begin{aligned}
\left\| B_n^T f - B_n f \right\|_p
&\le \left( \int_0^1 \left[ \sum_{k=0}^{n} \frac{1}{2} \left| f(X_{n,k}) + f_E(Y_{n,k}) - 2 f\!\left(\tfrac{k}{n}\right) \right| p_{n,k}(t) \right]^p dt \right)^{1/p} \\
&\le \left( \int_0^1 \left[ \sum_{k=0}^{n} \frac{1}{2}\, \omega_2\!\left( f_E, \left| X_{n,k} - \tfrac{k}{n} \right| \right) p_{n,k}(t) \right]^p dt \right)^{1/p} \\
&\le \left( \int_0^1 \left[ \sum_{k=0}^{n} \frac{1}{2}\, \omega_2\!\left( f_E, \tfrac{1}{\sqrt{n}} \right) \left( 1 + \frac{\left| X_{n,k} - k/n \right|}{1/\sqrt{n}} \right)^2 p_{n,k}(t) \right]^p dt \right)^{1/p},
\end{aligned}$$
where the last two inequalities are based on the definition of $\omega_2$ (Equation (2)) and Lemma 3. Then, taking $\omega_2$ out, according to the Minkowski inequality and Lemma 1, we have
$$\begin{aligned}
\left\| B_n^T f - B_n f \right\|_p
&\le \omega_2\!\left( f_E, \tfrac{1}{\sqrt{n}} \right) \left( \int_0^1 \left[ \sum_{k=0}^{n} \left( \tfrac{1}{2} + n \left| X_{n,k} - \tfrac{k}{n} \right|^2 \right) p_{n,k}(t) \right]^p dt \right)^{1/p} \\
&\le \omega_2\!\left( f_E, \tfrac{1}{\sqrt{n}} \right) + n\, \omega_2\!\left( f_E, \tfrac{1}{\sqrt{n}} \right) \left( \int_0^1 \left[ \sum_{k=0}^{n} \left| X_{n,k} - \tfrac{k}{n} \right|^2 p_{n,k}(t) \right]^p dt \right)^{1/p} \\
&\le 5\, \omega_2\!\left( f, \tfrac{1}{\sqrt{n}} \right) + 5 n\, \omega_2\!\left( f, \tfrac{1}{\sqrt{n}} \right) \left( \int_0^1 \left[ \sum_{k=0}^{n} \left| X_{n,k} - \tfrac{k}{n} \right|^2 p_{n,k}(t) \right]^p dt \right)^{1/p} \\
&\le 5\, \omega_2\!\left( f, \tfrac{1}{\sqrt{n}} \right) + 5 n\, \omega_2\!\left( f, \tfrac{1}{\sqrt{n}} \right) \left( \int_0^1 \left[ \sum_{k=0}^{n} \left( 2 \left| X_{n,k} - E X_{n,k} \right|^2 + 2 \left| \tfrac{k}{n} - E X_{n,k} \right|^2 \right) p_{n,k}(t) \right]^p dt \right)^{1/p} \\
&\le 5\, \omega_2\!\left( f, \tfrac{1}{\sqrt{n}} \right) + 5 n\, \omega_2\!\left( f, \tfrac{1}{\sqrt{n}} \right) \left( \int_0^1 \left[ \sum_{k=0}^{n} \left( 2 \left| X_{n,k} - E X_{n,k} \right|^2 + 2 \left( \tfrac{1}{n+1} \right)^2 \right) p_{n,k}(t) \right]^p dt \right)^{1/p} \\
&\le \left( 5 + \tfrac{10}{n} \right) \omega_2\!\left( f, \tfrac{1}{\sqrt{n}} \right) + 10\, n\, \omega_2\!\left( f, \tfrac{1}{\sqrt{n}} \right) \|Z_n\|_p.
\end{aligned}$$
Using the assumption that $\omega_2\!\left(f, \frac{1}{\sqrt{n}}\right) < \frac{\epsilon}{12 + 20/n}$, we have
$$\left( 6 + \frac{10}{n} \right) \omega_2\!\left(f, \frac{1}{\sqrt{n}}\right) < \frac{\epsilon}{2}.$$
Thus, in order for $\left\| B_n^T f - f \right\|_p > \epsilon$ to hold true, it is necessary that
$$\|Z_n\|_p > \frac{\epsilon}{20\, n\, \omega_2\!\left(f, \frac{1}{\sqrt{n}}\right)},$$
which is the desired result. □
We are now in a position to present the first primary result of this work.
Theorem 2.
Let $\epsilon > 0$ and $f \in C[0,1]$ be given. Suppose that $\omega_2\!\left(f, \frac{1}{\sqrt{n}}\right) < \frac{\epsilon}{12 + 20/n}$. Then the following inequality holds true:
$$P\left\{ \left\| B_n^T f - f \right\|_\infty > \epsilon \right\} \le 2(n+1) \exp\left( -\frac{n+3}{10\, n} \cdot \frac{\epsilon}{\omega_2\!\left(f, \frac{1}{\sqrt{n}}\right)} \right).$$
Proof. 
Let $\epsilon > 0$ and $f \in C[0,1]$ be given. Making use of Lemma 4 (for the case $p = \infty$), we have
$$\begin{aligned}
P\left\{ \left\| B_n^T f - f \right\|_\infty > \epsilon \right\}
&\le P\left\{ \|Z_n\|_\infty > \frac{\epsilon}{20\, n\, \omega_2\!\left(f, \frac{1}{\sqrt{n}}\right)} \right\} \\
&\le P\left\{ \max_{0 \le k \le n} \left| X_{n,k} - E X_{n,k} \right|^2 > \frac{\epsilon}{20\, n\, \omega_2\!\left(f, \frac{1}{\sqrt{n}}\right)} \right\} \\
&\le \sum_{k=0}^{n} P\left\{ \left| X_{n,k} - E X_{n,k} \right|^2 > \frac{\epsilon}{20\, n\, \omega_2\!\left(f, \frac{1}{\sqrt{n}}\right)} \right\} \\
&\le 2(n+1) \exp\left( -\frac{n+3}{10\, n} \cdot \frac{\epsilon}{\omega_2\!\left(f, \frac{1}{\sqrt{n}}\right)} \right),
\end{aligned}$$
where the last step uses Lemma 2 with $\alpha + \beta + 1 = n + 3$.
This completes the proof. □

3.2. L p -Probabilistic Convergence

For the case where $1 \le p < \infty$, the result exhibits stronger properties. To establish the main theorem, we require the following preparatory lemmas:
Lemma 5
([12]). Let $0 < p < \infty$. Then
$$E\left| X_{n,k} - E X_{n,k} \right|^p \le \frac{p\, \Gamma\!\left(\frac{p}{2}\right)}{2^{p/2} (n+3)^{p/2}}, \quad k = 0, 1, \ldots, n.$$
Lemma 6.
For each $1 \le p < \infty$, we have
$$E \|Z_n\|_p^p \le \frac{2 p\, \Gamma(p)}{2^p (n+3)^p}.$$
Proof. 
We begin by noting that, for each $k = 0, 1, \ldots, n$,
$$\int_0^1 p_{n,k}(t)\, dt = \frac{1}{n+1}.$$
By applying Fubini's theorem to interchange the order of integration between the Lebesgue integral over $t \in [0,1]$ and the expectation operator $E$, and leveraging the convexity of the function $x \mapsto x^p$, we derive the following:
$$\begin{aligned}
E \|Z_n\|_p^p &= \int_0^1 E\left[ \sum_{k=0}^{n} p_{n,k}(t) \left( X_{n,k} - E X_{n,k} \right)^2 \right]^p dt \\
&\le \int_0^1 \sum_{k=0}^{n} p_{n,k}(t)\, E\left| X_{n,k} - E X_{n,k} \right|^{2p} dt \\
&= \frac{1}{n+1} \sum_{k=0}^{n} E\left| X_{n,k} - E X_{n,k} \right|^{2p}.
\end{aligned}$$
We then apply Lemma 5 to get the desired result. □
Building upon the preceding lemmas, we now present the second principal result of this paper.
Theorem 3.
Let $\epsilon > 0$ and $1 \le p < \infty$ be given. Suppose that $f \in C[0,1]$ satisfies
$$\omega_2\!\left(f, \frac{1}{\sqrt{n}}\right) < \frac{\epsilon}{12 + 20/n}.$$
Then the following inequality holds true:
$$P\left\{ \left\| B_n^T f - f \right\|_p > \epsilon \right\} \le \frac{10^p\, n^p\, \Gamma(p)\, \omega_2^p\!\left(f, \frac{1}{\sqrt{n}}\right)}{\epsilon^p}.$$
Proof. 
For each $\epsilon > 0$, $p$ in the range $1 \le p < \infty$, and $f \in C[0,1]$, we first recall the notation $Z_n$ defined in Equation (13) and then use Lemma 4 to write
$$\begin{aligned}
P\left\{ \left\| B_n^T f - f \right\|_p > \epsilon \right\}
&\le P\left\{ \|Z_n\|_p > \frac{\epsilon}{20\, n\, \omega_2\!\left(f, \frac{1}{\sqrt{n}}\right)} \right\}
= P\left\{ \|Z_n\|_p^p > \frac{\epsilon^p}{20^p\, n^p\, \omega_2^p\!\left(f, \frac{1}{\sqrt{n}}\right)} \right\} \\
&\le \frac{20^p\, n^p\, \omega_2^p\!\left(f, \frac{1}{\sqrt{n}}\right) E \|Z_n\|_p^p}{\epsilon^p}
\le \frac{10^p\, n^p\, \Gamma(p)\, \omega_2^p\!\left(f, \frac{1}{\sqrt{n}}\right)}{\epsilon^p},
\end{aligned}$$
where we have used the Markov inequality and Lemma 6.
This completes the proof. □

4. Exponential Decay of Pointwise Convergence in Probability

Wu et al. [9] studied pointwise convergence in probability of the stochastic Bernstein polynomials defined in Equation (3) and proved the following result:
$$P\left\{ \left| B_n^X f(t) - f(t) \right| > \epsilon \right\} \le \frac{40\, \omega^2\!\left(f, \frac{1}{\sqrt{n}}\right)}{\epsilon^2}, \quad t \in [0,1].$$
In this section, we will prove the following theorem which is the third principal result of the paper.
Theorem 4.
Let $\epsilon > 0$, $f \in C[0,1]$, and $t \in [0,1]$ be given. Suppose that
$$\omega_2\!\left(f, \frac{1}{\sqrt{n}}\right) < \frac{\epsilon}{12 + 20/n}.$$
Then we have the following inequality:
$$P\left\{ \left| B_n^T f(t) - f(t) \right| > \epsilon \right\} \le 2 \exp\left( -\frac{\epsilon}{20\, \omega_2\!\left(f, \frac{1}{\sqrt{n}}\right)} \right).$$
We need the following lemmas to prove Theorem 4.
Lemma 7
([12]).
$$E \exp\left( n \sum_{k=0}^{n} p_{n,k}(t) \left( X_{n,k} - E X_{n,k} \right)^2 \right) \le 2.$$
Lemma 8.
Let $\epsilon > 0$ and $f \in C[0,1]$ be given. Suppose that
$$\omega_2\!\left(f, \frac{1}{\sqrt{n}}\right) < \frac{\epsilon}{12 + 20/n}.$$
Then, for each fixed $t \in [0,1]$, the following inequality holds true:
$$P\left\{ \left| B_n^T f(t) - f(t) \right| > \epsilon \right\} \le P\left\{ Z_n(t) > \frac{\epsilon}{20\, n\, \omega_2\!\left(f, \frac{1}{\sqrt{n}}\right)} \right\}.$$
Proof. 
Based on the definition of $B_n^T f$ (Equation (4)) and of $\omega_2$ (Equation (2)), we have
$$\begin{aligned}
\left| B_n^T f(t) - f(t) \right|
&\le \left| B_n^T f(t) - B_n f(t) \right| + \left| B_n f(t) - f(t) \right| \\
&\le \sum_{k=0}^{n} \frac{1}{2} \left| f(X_{n,k}) + f_E(Y_{n,k}) - 2 f\!\left(\tfrac{k}{n}\right) \right| p_{n,k}(t) + \omega_2\!\left(f, \tfrac{1}{\sqrt{n}}\right) \\
&\le \sum_{k=0}^{n} \frac{1}{2}\, \omega_2\!\left( f_E, \left| X_{n,k} - \tfrac{k}{n} \right| \right) p_{n,k}(t) + \omega_2\!\left(f, \tfrac{1}{\sqrt{n}}\right) \\
&\le \sum_{k=0}^{n} \frac{1}{2}\, \omega_2\!\left( f_E, \tfrac{1}{\sqrt{n}} \right) \left( 1 + \frac{\left| X_{n,k} - k/n \right|}{1/\sqrt{n}} \right)^2 p_{n,k}(t) + \omega_2\!\left(f, \tfrac{1}{\sqrt{n}}\right).
\end{aligned}$$
Then, taking $\omega_2$ out, according to Lemma 3 and Lemma 1, we have
$$\begin{aligned}
\left| B_n^T f(t) - f(t) \right|
&\le \omega_2\!\left( f_E, \tfrac{1}{\sqrt{n}} \right) \sum_{k=0}^{n} \left( \tfrac{1}{2} + n \left| X_{n,k} - \tfrac{k}{n} \right|^2 \right) p_{n,k}(t) + \omega_2\!\left(f, \tfrac{1}{\sqrt{n}}\right) \\
&\le 5\, \omega_2\!\left( f, \tfrac{1}{\sqrt{n}} \right) \sum_{k=0}^{n} \left( \tfrac{1}{2} + n \left| X_{n,k} - \tfrac{k}{n} \right|^2 \right) p_{n,k}(t) + \omega_2\!\left(f, \tfrac{1}{\sqrt{n}}\right) \\
&\le 6\, \omega_2\!\left( f, \tfrac{1}{\sqrt{n}} \right) + 5 n\, \omega_2\!\left( f, \tfrac{1}{\sqrt{n}} \right) \sum_{k=0}^{n} \left( 2 \left| X_{n,k} - E X_{n,k} \right|^2 + 2 \left| \tfrac{k}{n} - E(X_{n,k}) \right|^2 \right) p_{n,k}(t) \\
&\le 6\, \omega_2\!\left( f, \tfrac{1}{\sqrt{n}} \right) + 5 n\, \omega_2\!\left( f, \tfrac{1}{\sqrt{n}} \right) \sum_{k=0}^{n} \left( 2 \left| X_{n,k} - E X_{n,k} \right|^2 + 2 \left( \tfrac{1}{n+1} \right)^2 \right) p_{n,k}(t) \\
&\le \left( 6 + \tfrac{10}{n} \right) \omega_2\!\left( f, \tfrac{1}{\sqrt{n}} \right) + 10\, n\, \omega_2\!\left( f, \tfrac{1}{\sqrt{n}} \right) Z_n(t).
\end{aligned}$$
This completes the proof. □
Now we apply the Markov inequality to give the proof of Theorem 4.
Proof of Theorem 4.
$$\begin{aligned}
P\left\{ \left| B_n^T f(t) - f(t) \right| > \epsilon \right\}
&\le P\left\{ Z_n(t) > \frac{\epsilon}{20\, n\, \omega_2\!\left(f, \frac{1}{\sqrt{n}}\right)} \right\}
= P\left\{ \sum_{k=0}^{n} p_{n,k}(t) \left( X_{n,k} - E X_{n,k} \right)^2 > \frac{\epsilon}{20\, n\, \omega_2\!\left(f, \frac{1}{\sqrt{n}}\right)} \right\} \\
&\le \frac{E \exp\left( n \sum_{k=0}^{n} p_{n,k}(t) \left( X_{n,k} - E X_{n,k} \right)^2 \right)}{\exp\left( \dfrac{n\, \epsilon}{20\, n\, \omega_2\!\left(f, \frac{1}{\sqrt{n}}\right)} \right)}
\le 2 \exp\left( -\frac{\epsilon}{20\, \omega_2\!\left(f, \frac{1}{\sqrt{n}}\right)} \right).
\end{aligned}$$
This completes the proof of Theorem 4. □

5. Numerical Experiments

In this section, we present the results of numerical experiments utilizing SBP and SSBP methods to approximate two test functions: the uniform distribution function and the normal density function. Both functions have widespread applications in stochastic problems, and accurately simulating these functions is crucial for applying SSBPs to various stochastic challenges.
These functions were specifically chosen due to their significantly different smoothness properties. The uniform distribution function is continuous but not differentiable everywhere, while the normal density function is infinitely differentiable but exhibits large variations when the variance is small. This contrast in behavior makes them ideal candidates for evaluating the performance of approximation methods. For these examples, we first show the $L_\infty$ error in a single random trial, and then demonstrate the exponential convergence property of SSBPs.
To approximate high-degree Bernstein polynomials, we employ the De Casteljau algorithm (see, e.g., [25,26]). De Casteljau's algorithm evaluates a polynomial in Bernstein form by recursive linear interpolation of its coefficients, so high-degree curves can be handled without directly manipulating large binomial coefficients and high powers. Its main advantages are numerical stability, efficiency, and flexibility.
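A minimal sketch of the recursion (our illustrative code; the paper's implementation may differ):

```python
def de_casteljau(coeffs, t):
    """Evaluate sum_k coeffs[k] * B_k^n(t), with n = len(coeffs) - 1,
    by repeated linear interpolation:
        b_k^(r) = (1 - t) * b_k^(r-1) + t * b_{k+1}^(r-1).
    Numerically stable and free of explicit binomial coefficients,
    which is what makes high degrees n tractable."""
    b = list(coeffs)
    for r in range(1, len(b)):
        for k in range(len(b) - r):
            b[k] = (1 - t) * b[k] + t * b[k + 1]
    return b[0]
```

For an SBP or SSBP, `coeffs[k]` is simply the (random) value attached to the $k$-th basis function, e.g. $\frac{1}{2}[f(X_{n,k}) + f(Y_{n,k})]$ in the SSBP case. The cost is $O(n^2)$ per evaluation point, but each step is a convex combination, so no overflow or cancellation occurs even for $n$ in the thousands.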

5.1. Simulations of Uniform Distribution Functions

$$f_1(x) = \begin{cases} 0, & x < 0.25, \\ 2x - 0.5, & 0.25 \le x \le 0.75, \\ 1, & x > 0.75. \end{cases}$$
Table 1 summarizes the maximum approximation errors for SBPs, SSBPs, and BPs when approximating the uniform distribution function. The errors are computed at 1000 equally spaced points in the interval $[0,1]$, and the polynomial degrees used for the approximation are $n = 50, 100, 400, 1000, 3000, 5000, 7000$. The data indicate that SSBPs yield approximation errors comparable to those of BPs, while outperforming SBPs. These results are based on a single random trial.
Figure 1 illustrates the approximations of the uniform distribution function f 1 using BPs, SBPs, and SSBPs with different polynomial degrees. The left columns in the figures display the graphs of the approximations, while the right columns show the corresponding approximation errors for n = 50 , 400 , 3000 , 7000 . These plots are generated using 100 equally spaced points from the interval [ 0 , 1 ] . The figures clearly demonstrate that SSBPs provide a much more accurate approximation than SBPs, with minimal additional computational cost. The data presented are based on a single random trial.
In Figure 2, the horizontal axes represent the degrees of the SBPs and SSBPs, while the vertical axes represent the probabilities of certain events, which we elaborate on below. When approximating $f_1$, we set the tolerance $\epsilon = 0.02$ and selected nine equally spaced points, denoted $t_j$ ($1 \le j \le 9$), in the interval $[0,1]$, starting from $t_1 = 0.1$. The total set of these points is denoted by $\Xi$. At each point $t_j$, we ran 1000 trials and computed the probabilities of the events $|B_n^X f_i(t_j) - f_i(t_j)| > \epsilon$ and $|B_n^T f_i(t_j) - f_i(t_j)| > \epsilon$ ($i = 1, 2$).
Furthermore, we conducted 1000 trials to compute the probabilities of the events $\max_{t_j \in \Xi} |B_n^X f_i(t_j) - f_i(t_j)| > \epsilon$ and $\max_{t_j \in \Xi} |B_n^T f_i(t_j) - f_i(t_j)| > \epsilon$ ($i = 1, 2$). Figure 2 demonstrates that the probabilities of the events associated with SSBPs decay exponentially as the polynomial degree $n$ increases. In contrast, the probabilities of the events associated with SBPs do not decay as rapidly as those for SSBPs.
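The experiment just described can be reproduced in outline as follows (an illustrative sketch with helper names of our own; the paper's experimental code is not available, and here the target function is assumed to be already defined on $[-1, 2]$):

```python
import math
import random

def tail_probability(f_ext, n, t, eps, f_true, trials=1000, seed=0):
    """Estimate P{|B_n^T f(t) - f(t)| > eps} over repeated random
    trials: in each trial, draw fresh order statistics, form the SSBP
    value at t, and count how often the error exceeds eps."""
    rng = random.Random(seed)
    binom = [math.comb(n, k) * t**k * (1 - t) ** (n - k)
             for k in range(n + 1)]
    hits = 0
    for _ in range(trials):
        xs = sorted(rng.random() for _ in range(n + 1))
        val = sum(0.5 * (f_ext(xs[k]) + f_ext(2 * k / n - xs[k])) * binom[k]
                  for k in range(n + 1))
        hits += abs(val - f_true) > eps
    return hits / trials
```

Plotting this estimate against $n$ on a log scale is how the exponential decay predicted by Theorem 4 can be visualized; for a linear target the SSBP is exact, so the estimated probability is identically zero.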

5.2. Simulations of Normal Distribution Functions

In this subsection, we approximate another commonly used function, the normal density function
$$f_2(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}},$$
where $\mu = 0.5$ and $\sigma = 0.1$. The normal density function is widely applied, not only in probability theory but also in areas like kernel approximation, neural networks, and deep learning. Therefore, accurately approximating this function is of great importance. Although the function is infinitely smooth, it presents a challenge due to its significant curvature changes, which can lead to substantial errors if the approximation method is inadequate.
As shown in Table 2 and Figure 3, the error obtained from the approximation using SSBPs is acceptable, while the error from SBPs becomes so large that it distorts the function's shape. This demonstrates that SSBPs offer superior approximation performance for this function. Figure 3 shows the approximation results for polynomial degrees $n = 50, 100, 400, 1000, 3000, 5000, 7000$, corresponding to the results summarized in Table 2.
In Figure 4, the horizontal axes represent the polynomial degrees for SBPs and SSBPs, while the vertical axes represent the probabilities of certain events. Here, the specific event is that the approximation error exceeds 0.5. For SSBPs, the probability of the error exceeding 0.5 decreases exponentially as $n$ increases. In contrast, the probability associated with SBPs does not decay as rapidly. This indicates that SBPs are not as effective in approximating functions with large curvature variations, and their performance degrades further as the standard deviation of the normal density function decreases.
Thus, we can observe that SSBPs are well-suited for approximating functions with large curvature changes, as the error remains manageable. From the comparison of these two very different functions—the uniform distribution and the normal density function—it becomes evident that SSBPs can handle noisy data effectively while providing approximation results comparable to classical BPs. Therefore, we conclude that SSBPs are highly reliable for these types of approximations.

6. Conclusions

This paper demonstrates the exponential convergence of SSBPs under the $L_p$ ($1 \le p \le \infty$) norm. Theoretical analysis shows that the $\omega_2$-error bound of SSBPs is comparable to that of classical BPs and significantly outperforms that of SBPs, while maintaining an $O(N)$ computational complexity. It is worth mentioning that we establish probabilistic convergence rates for the general $L_p$ norm (Theorem 3), with explicit dependence on the parameter $p$. However, the numerical experiments in this study exclusively validate the $L_\infty$ error and its exponential convergence. While $L_\infty$ control provides a theoretical upper bound for the $L_p$ errors, the actual convergence behavior for finite $p$ may exhibit distinct characteristics. Although the current study is limited to the univariate case, these findings provide new research directions for the field of stochastic computation, particularly the extension to multivariate cases and the optimization of computational complexity, which are of significant exploratory value.

Author Contributions

Validation, software and writing, S.Z.; Conceptualization and methodology, Q.G. and C.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (Nos. 12471358, 12071057), the Fundamental Research Funds for the Central Universities (No. DUT23LAB302), and the Characteristic & Preponderant Discipline of Key Construction Universities in Zhejiang Province (Zhejiang Gongshang University-Statistics).

Data Availability Statement

The original contributions presented in this study are included in the article. For further inquiries, please contact the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Lin, D.; Johnson, P.; Knoll, F.; Lui, Y. Artificial intelligence for MR image reconstruction: An overview for clinicians. J. Magn. Reson. Imaging 2021, 53, 1015–1028. [Google Scholar] [CrossRef] [PubMed]
  2. Al-Quayed, F.; Ahmad, Z.; Humayun, M. A situation based predictive approach for cybersecurity intrusion detection and prevention using machine learning and deep learning algorithms in wireless sensor networks of industry 4.0. IEEE Access 2024, 12, 34800–34819. [Google Scholar] [CrossRef]
  3. El, Y.; Ellabib, A. A fuzzy particle swarm optimization method with application to shape design problem. RAIRO-Oper. Res. 2023, 57, 2819–2832. [Google Scholar]
  4. Lorentz, G. Bernstein Polynomials; American Mathematical Society: Providence, RI, USA, 2012. [Google Scholar]
  5. Cheney, E.W.; Light, W. A Course in Approximation Theory; American Mathematical Society: Providence, RI, USA, 2009; Volume 101. [Google Scholar]
  6. Ditzian, Z.; Totik, V. Moduli of Smoothness; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012; Volume 9. [Google Scholar]
  7. Chung, K. Elementary Probability Theory with Stochastic Processes; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  8. Paltanea, R. Approximation Theory Using Positive Linear Operators; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  9. Wu, Z.; Sun, X.; Ma, L. Sampling scattered data with Bernstein polynomials: Stochastic and deterministic error estimates. Adv. Comput. Math. 2013, 38, 187–205. [Google Scholar] [CrossRef]
  10. David, H.; Nagaraja, H. Order Statistics; John Wiley & Sons: Hoboken, NJ, USA, 2004. [Google Scholar]
  11. Sun, X.; Wu, Z. Chebyshev type inequality for stochastic Bernstein polynomials. Proc. Am. Math. Soc. 2019, 147, 671–679. [Google Scholar] [CrossRef]
  12. Sun, X.; Wu, Z.; Zhou, X. On probabilistic convergence rates of stochastic Bernstein polynomials. Math. Comput. 2021, 90, 813–830. [Google Scholar] [CrossRef]
  13. Sun, X.; Zhou, X. Stochastic Quasi-Interpolation with Bernstein Polynomials. Mediterr. J. Math. 2022, 19, 240. [Google Scholar] [CrossRef]
  14. Adell, J.; Cárdenas-Morales, D. Stochastic Bernstein polynomials: Uniform convergence in probability with rates. Adv. Comput. Math. 2020, 46, 16. [Google Scholar] [CrossRef]
  15. Adell, J.; Cárdenas-Morales, D. Random linear operators arising from piecewise linear interpolation on the unit interval. Mediterr. J. Math. 2022, 19, 223. [Google Scholar] [CrossRef]
  16. Adell, J.; Cárdenas-Morales, D.; López-Moreno, A. On the rates of pointwise convergence for Bernstein polynomials. Results Math. 2025, 80, 1–10. [Google Scholar] [CrossRef]
  17. Gao, W.; Sun, X.; Wu, Z.; Zhou, X. Multivariate Monte Carlo approximation based on scattered data. SIAM J. Sci. Comput 2020, 42, A2262–A2280. [Google Scholar] [CrossRef]
  18. Gao, W.; Fasshauer, G.; Sun, X.; Zhou, X. Optimality and regularization properties of quasi-interpolation: Deterministic and stochastic approaches. SIAM J. Numer. Anal. 2020, 58, 2059–2078. [Google Scholar] [CrossRef]
  19. Gal, S.; Niculescu, C. Approximation of random functions by stochastic Bernstein polynomials in capacity spaces. Carpathian J. Math. 2021, 37, 185–194. [Google Scholar] [CrossRef]
  20. Bouzebda, S. Uniform in number of neighbor consistency and weak convergence of k-nearest neighbor single index conditional processes and k-nearest neighbor single index conditional u-processes involving functional mixing data. Symmetry 2024, 16, 1576. [Google Scholar] [CrossRef]
  21. Cucker, F.; Smale, S. On the mathematical foundations of learning. Bull. Am. Math. Soc. 2002, 39, 1–49. [Google Scholar] [CrossRef]
  22. Cucker, F.; Zhou, D. Learning Theory: An Approximation Theory Viewpoint; Cambridge University Press: Cambridge, UK, 2007; Volume 24. [Google Scholar]
  23. Gao, Q.; Sun, X.; Zhang, S. Optimal stochastic Bernstein polynomials in Ditzian-Totik type modulus of smoothness. J. Comput. Appl. Math. 2022, 404, 113888. [Google Scholar] [CrossRef]
  24. Casella, G.; Berger, R. Statistical Inference; CRC Press: Boca Raton, FL, USA, 2024. [Google Scholar]
  25. Farin, G. Handbook of Computer Aided Geometric Design; Elsevier: Amsterdam, The Netherlands, 2002; Volume 2, pp. 577–580. [Google Scholar]
  26. Prautzsch, H.; Boehm, W.; Paluszny, M. Bézier and B-Spline Techniques; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2002. [Google Scholar]
Figure 1. Pointwise approximation errors (to $f_1$) given by BPs, SBPs, and SSBPs in one trial.
Figure 2. $f_1$: Simulations of probabilistic convergence of SBPs and SSBPs in 1000 trials.
Figure 3. Pointwise approximation errors (to $f_2$) given by BPs, SBPs, and SSBPs in one trial.
Figure 4. $f_2$: Simulations of probabilistic convergence of SBPs and SSBPs in 1000 trials.
Table 1. Maximal approximation errors given by BPs, SBPs, and SSBPs ($f_1$).

n        50       100      400      1000     3000     5000     7000
BP       0.0468   0.0320   0.0149   0.0086   0.0041   0.0028   0.0021
SBP      0.3880   0.1918   0.0868   0.0604   0.0384   0.0240   0.0239
SSBP     0.0891   0.0556   0.0358   0.0140   0.0043   0.0042   0.0021
Table 2. Maximal approximation errors given by BPs, SBPs, and SSBPs ($f_2$).

n        50       100      400      1000     3000     5000     7000
BP       0.2860   0.1659   0.0469   0.0193   0.0065   0.0039   0.0021
SBP      2.4255   1.8618   1.0705   1.0248   0.2801   0.2673   0.0239
SSBP     0.7121   0.4584   0.1828   0.0561   0.0221   0.0212   0.0187

Share and Cite

Zhang, S.; Gao, Q.; Zhu, C. On Probabilistic Convergence Rates of Symmetric Stochastic Bernstein Polynomials. Mathematics 2025, 13, 3281. https://doi.org/10.3390/math13203281