Article

Interpolation for Neural Network Operators Activated by Smooth Ramp Functions †

Fesal Baxhaku, Artan Berisha and Behar Baxhaku
1 Department of Computer Science, University of Prizren, 20000 Prizren, Kosovo
2 Department of Mathematics, University of Prishtina “Hasan Prishtina”, 10000 Prishtina, Kosovo
* Author to whom correspondence should be addressed.
† Dedicated to our friend Prof. P.N. Agrawal, a renowned Indian approximation theorist, on the occasion of his 70th birthday.
These authors contributed equally to this work.
Computation 2024, 12(7), 136; https://doi.org/10.3390/computation12070136
Submission received: 8 June 2024 / Revised: 18 June 2024 / Accepted: 23 June 2024 / Published: 4 July 2024

Abstract

In the present article, we extend the results on the neural network interpolation operators activated by smooth ramp functions proposed by Yu (Acta Math. Sin. (Chin. Ed.) 59:623–638, 2016). In contrast to that work, we discuss the high-order approximation result using the smoothness of $\varphi$ and a related Voronovskaya-type asymptotic expansion for the error of approximation. In addition, we establish the related fractional estimates and the fractional Voronovskaya-type asymptotic expansion. We investigate the approximation degree for the iterated and complex extensions of the aforementioned operators. Finally, we provide numerical examples and graphs to effectively illustrate and validate our results.

1. Introduction

Neural networks (NNs) have found a growing range of applicability in several domains, such as marketing, cybersecurity for fraud detection, image recognition and classification, geology for seismic facies classification, stock prediction in financial institutions, industrial automation, and healthcare, just to name a few. Feedforward neural networks (FNNs) can be highlighted as one of the most widely recognized and commonly employed methods in neural networks. FNNs, also known as multi-layer perceptrons (MLPs), follow the architectural model of input, hidden, and output layers of interconnected nodes/neurons. Each node connects to the nodes in the successive layer, forming a feedforward pathway in the direction from the input to the output layer. Within the feedforward neural network (FNN) context, the activation function injects non-linearity into the neural network’s output by processing the input data. FNNs with activation function $\varsigma:\mathbb{R}\to\mathbb{R}$, an output node, and a hidden layer consisting of $m$ neurons can be written as:
$$N_m(u):=\sum_{k=1}^{m}a_k\,\varsigma\bigl(\langle y_k,u\rangle+\eta_k\bigr),\qquad u\in\mathbb{R}^{n},\ n\in\mathbb{N},$$
where, for $1\le k\le m$, $y_k\in\mathbb{R}^{n}$ denote the connection weights, $\eta_k\in\mathbb{R}$ represent the threshold values, $\langle y_k,u\rangle$ is the inner product of the vectors $y_k$ and $u$, and $a_k\in\mathbb{R}$ are the coefficients. Feedforward neural networks (FNNs) are powerful tools for making approximations due to their adaptable architecture with multiple hidden layers and non-linear activation functions. They are effective since they can theoretically discover, learn, and approximate to an arbitrary level of exactness any continuous function on a compact set due to their versatile approximation property. Extensive research has been performed using FNNs in distinct topics such as density results [1,2], the complexity of approximation of NNs [3,4] and quantitative estimation for the approximation order [5,6,7].
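To make the single-hidden-layer form above concrete, here is a minimal Python sketch (our own illustration, not taken from the paper); the weight matrix, thresholds, coefficients, and the logistic activation are arbitrary, hypothetical choices.

```python
import numpy as np

def feedforward(u, Y, eta, a, activation):
    """Evaluate N_m(u) = sum_k a_k * activation(<y_k, u> + eta_k).

    Y   : (m, n) array whose rows are the connection weights y_k
    eta : (m,) array of thresholds eta_k
    a   : (m,) array of output coefficients a_k
    """
    return np.dot(a, activation(Y @ u + eta))

# Hypothetical example: m = 3 hidden neurons, n = 2 inputs, logistic activation.
logistic = lambda z: 1.0 / (1.0 + np.exp(-z))
Y = np.array([[0.5, -1.0], [1.2, 0.3], [-0.7, 0.8]])
eta = np.array([0.1, -0.2, 0.05])
a = np.array([1.0, -0.5, 2.0])
print(feedforward(np.array([0.4, 0.6]), Y, eta, a, logistic))
```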
Cardaliaguet and Euvrard [8] initiated the exploration of approximating feedforward neural network operators (FNNOs) to the identity operator. Then, Anastassiou [5] successfully determined the rate of convergence of the Cardaliaguet–Euvrard and “squashing” neural network operators by utilizing the modulus of continuity. Further, Costarelli and Spigler [9] explored the pointwise and uniform convergence of certain NN operators induced by sigmoidal functions. Their research demonstrated an enhanced rate of convergence for the $C^{1}$-class of functions through the consideration of logistic and hyperbolic tangent sigmoidal functions. In Yu’s work [10], the author comprehensively examined pointwise and uniform approximation for distinct neural network operators, activated by sigmoidal functions on a compact set. The study indicated that significant enhancements in the order of convergence can be achieved through suitable combinations of these operators.
In their study, Costarelli and Spigler [4] introduced a set of multivariate NN operators of Kantorovich type employing specific density functions. They examined the convergence results in both the uniform and $L^{p}$ ($1\le p<\infty$) norms. Furthermore, in their paper [4], the same authors advocated for the use of NN Kantorovich-type operators to explore the approximation of functions within $L^{p}$ ($1\le p<\infty$) spaces. In their comprehensive research, Yu and Zhou [11] examined the direct and converse results for specific NN operators induced by a smooth ramp function, obtaining a significant enhancement in the convergence rate for smooth functions via a combination of these operators. Costarelli et al. [12] conducted a careful analysis of the pointwise and uniform convergence of linear and the corresponding non-linear NN operators, providing valuable insights. They also gave a detailed discussion of the approximation by their Kantorovich version in $L^{p}$, where $1\le p<\infty$. Additionally, Costarelli and Vinti [13,14] delivered compelling quantitative results for the Kantorovich variant of NN operators, utilizing the modulus of continuity and Peetre’s $K$-functional. Uyan et al. [15] introduced a family of neural network interpolation operators utilizing the first derivative of generalized logistic-type functions as a density function. Their research demonstrates the interpolation properties of these operators for continuous functions on finite intervals and examines parameter efficiency for function approximation in $L^{p}$ spaces for $1\le p\le\infty$.
In [16], the researchers analyzed the asymptotic expansion and Voronovskaja-type result for the NN operators and obtained a more promising rate of convergence by considering a combination of these operators. Coroianu et al. [17] studied quantitative results for the classical as well as Kantorovich-type NN operators. Kadak [18] delved into the development of fractional neural network interpolation operators activated by a sigmoidal function in the multivariate scenario. Additionally, Qian and Yu [19] established NN interpolation operators induced by specific sigmoidal functions, and subsequently derived both the direct and converse results for these operators.
In their paper [20], Costarelli and Vinti investigated pointwise and uniform approximation and derived results for $L^{p}$ spaces, where $1\le p<\infty$, using max-product NN Kantorovich operators. Additionally, in another paper [21], the authors put forward and examined the approximation properties of the max-product NN and quasi-interpolation operators. Wang et al. [22] demonstrated the direct and converse results for distinctive multivariate NN interpolation operators applied to continuous functions. Their work also encompasses the investigation of the Kantorovich variant of these operators to analyze the approximation of functions within the $L^{p}$ space, where $1\le p<\infty$. Qian and Yu [23] studied the direct and converse theorems for NN interpolation operators induced by smooth ramp functions and achieved an improved order of approximation by means of a combination of these operators. Qian and Yu [19] presented a single-layer NN interpolation operator based on a linear combination of general sigmoidal functions and discussed the direct and converse results. In a subsequent section of the paper, the authors expound upon the approximation properties of a linear combination and the Kantorovich variant of the NN operators. Wang et al. [24] introduced NN interpolation operators by incorporating Lagrange polynomials of degree $r$ and presented direct and converse findings in both univariate and multivariate scenarios. Bajpeyi and Kumar [25] proposed exponential-type NN operators and studied some direct approximation theorems.
Bajpeyi [26] proposed a Voronovskaja-type theorem and the pointwise quantitative estimates for the exponential-type NN operators and obtained a better rate of convergence by evaluating a combination of these operators.
Mahmudov and Kara [27] presented and examined the order of convergence of the Riemann–Liouville fractional integral type Szász–Mirakyan-Kantorovich operators in the univariate and bivariate cases. Recently, Kadak [28] introduced a family of multivariate neural network operators (NNOs) involving a Riemann–Liouville fractional integral operator of order α and examined their pointwise and uniform approximation properties.
A measurable function $\varsigma:\mathbb{R}\to\mathbb{R}$ is said to be a sigmoidal function, provided that
$$\lim_{z\to+\infty}\varsigma(z)=1,\qquad\text{and}\qquad\lim_{z\to-\infty}\varsigma(z)=0.$$
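For instance, the logistic function $\varsigma(z)=1/(1+e^{-z})$ is a sigmoidal function in this sense; a two-line numerical check (illustrative only):

```python
import numpy as np

logistic = lambda z: 1.0 / (1.0 + np.exp(-z))
# logistic(z) -> 1 as z -> +infinity and -> 0 as z -> -infinity, so it is sigmoidal in the above sense
print(logistic(50.0), logistic(-50.0))
```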
Costarelli [1] pioneered the study of interpolation by NN operators. For a bounded and measurable function $\varphi:[c,d]\to\mathbb{R}$, he defined the following NN interpolation operators:
$$F_n(\varphi;z):=\frac{\sum_{\nu=0}^{n}\varphi(z_\nu)\,\Phi_R\bigl(\frac{n(z-z_\nu)}{d-c}\bigr)}{\sum_{\nu=0}^{n}\Phi_R\bigl(\frac{n(z-z_\nu)}{d-c}\bigr)},$$
where $z_\nu=c+\nu h$, $\nu=0,1,\dots,n$, with $h=\frac{d-c}{n}$, and the density function $\Phi_R(z)$ is given by $\Phi_R(z)=\varsigma_R(z+1/2)-\varsigma_R(z-1/2)$, with the ramp function $\varsigma_R(z)$ being
$$\varsigma_R(z):=\begin{cases}0,&\text{if }z\le-1/2,\\1,&\text{if }z\ge1/2,\\z+1/2,&\text{if }-1/2<z<1/2.\end{cases}$$
Li [29] introduced a generalization of the above ramp function by means of a parameter $\mu_0$ as follows:
$$\varsigma(z):=\begin{cases}0,&\text{if }z\le-\mu_0,\\b,&\text{if }z\ge\mu_0,\\\dfrac{z+\mu_0}{2\mu_0}\,b,&\text{if }-\mu_0<z<\mu_0,\end{cases}$$
where $b\in\mathbb{R}^{+}$ and $0<\mu_0\le1/2$. The associated density function is given by
$$\phi(z)=\varsigma(z+\mu_0)-\varsigma(z-\mu_0)=\begin{cases}0,&\text{if }|z|\ge2\mu_0,\\\Bigl(1-\dfrac{|z|}{2\mu_0}\Bigr)b,&\text{if }|z|<2\mu_0.\end{cases}$$
It is clear that, in the case $\mu_0=1/2$ and $b=1$, the functions $\varsigma(z)$ and $\phi(z)$ turn into $\varsigma_R(z)$ and $\Phi_R(z)$, respectively.
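A small Python sketch of Li's generalized ramp and its density (our own illustration; the parameter values are arbitrary), including a numerical check that the choice $\mu_0=1/2$, $b=1$ reproduces the triangular density $\Phi_R$ of the previous construction:

```python
import numpy as np

def ramp(z, mu0=0.25, b=1.0):
    """Li's generalized ramp: 0 for z <= -mu0, b for z >= mu0, linear in between."""
    z = np.asarray(z, dtype=float)
    return b * np.clip((z + mu0) / (2.0 * mu0), 0.0, 1.0)

def density(z, mu0=0.25, b=1.0):
    """Associated density phi(z) = ramp(z + mu0) - ramp(z - mu0)."""
    z = np.asarray(z, dtype=float)
    return ramp(z + mu0, mu0, b) - ramp(z - mu0, mu0, b)

# With mu0 = 1/2 and b = 1 the density reduces to the triangular hat max(1 - |z|, 0) = Phi_R(z).
zz = np.linspace(-1.5, 1.5, 7)
print(density(zz, mu0=0.5, b=1.0))
print(density(zz, mu0=0.5, b=1.0) - np.maximum(1.0 - np.abs(zz), 0.0))  # ~ 0 everywhere
```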
Li [29] proposed the following NN interpolation operators:
$$G_n(\varphi;z):=\frac{\sum_{\nu=0}^{n}\varphi(z_\nu)\,\phi\bigl(\frac{n(z-z_\nu)}{d-c}\bigr)}{\sum_{\nu=0}^{n}\phi\bigl(\frac{n(z-z_\nu)}{d-c}\bigr)},$$
and proved that $G_n(\varphi;z_\nu)=\varphi(z_\nu)$ for all $\nu=0,1,\dots,n$, together with a convergence estimate in terms of a second-order modulus of continuity.
Yu [11] introduced a smooth ramp sigmoidal function $\varsigma_R(z)$, related to the function $\phi(z)$ defined above, in the following manner:
$$\varsigma_R(z):=\begin{cases}0,&\text{if }z\le-1/2,\\10\bigl(z+\tfrac12\bigr)^{3}-15\bigl(z+\tfrac12\bigr)^{4}+6\bigl(z+\tfrac12\bigr)^{5},&\text{if }-1/2<z<1/2,\\1,&\text{if }z\ge1/2.\end{cases}$$
The corresponding density function is given by
$$\phi_R(z)=\varsigma_R(z+1/2)-\varsigma_R(z-1/2)=\begin{cases}0,&\text{if }z\le-1,\\10(z+1)^{3}-15(z+1)^{4}+6(z+1)^{5},&\text{if }-1<z\le0,\\1-\bigl(10z^{3}-15z^{4}+6z^{5}\bigr),&\text{if }0<z<1,\\0,&\text{if }z\ge1,\end{cases}$$
and
$$\phi_R'(z)=\begin{cases}0,&\text{if }z\le-1,\\30(z+1)^{2}z^{2},&\text{if }-1<z\le0,\\-30(z-1)^{2}z^{2},&\text{if }0<z<1,\\0,&\text{if }z\ge1.\end{cases}$$
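For concreteness, the following Python sketch (ours, not code from the paper) evaluates the smooth ramp $\varsigma_R$ and its density $\phi_R$ and spot-checks numerically some of the properties listed in Lemma 1 below (nonnegativity, evenness, $0\le\phi_R\le1$, and support in $(-1,1)$):

```python
import numpy as np

def sigma_R(z):
    """Smooth ramp sigmoid of Yu [11]: the quintic 10t^3 - 15t^4 + 6t^5 with t = z + 1/2,
    clipped so that sigma_R = 0 for z <= -1/2 and sigma_R = 1 for z >= 1/2."""
    t = np.clip(np.asarray(z, dtype=float) + 0.5, 0.0, 1.0)
    return 10*t**3 - 15*t**4 + 6*t**5

def phi_R(z):
    """Density phi_R(z) = sigma_R(z + 1/2) - sigma_R(z - 1/2), supported in (-1, 1)."""
    z = np.asarray(z, dtype=float)
    return sigma_R(z + 0.5) - sigma_R(z - 0.5)

# Numerical spot checks: nonnegativity, evenness, 0 <= phi_R <= 1, and vanishing outside (-1, 1).
zz = np.linspace(-1.5, 1.5, 301)
vals = phi_R(zz)
print(vals.min() >= 0.0, np.allclose(vals, phi_R(-zz)), vals.max() <= 1.0 + 1e-12)
print(phi_R(np.array([-1.0, 0.0, 1.0])))   # expected: [0, 1, 0]
```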
We present a lemma listing the properties of the density function $\phi_R(z)$.
Lemma 1. 
[11] The function $\phi_R(z)$ verifies the following useful properties:
(1) 
$\phi_R(z)$ is nonnegative and smooth;
(2) 
$\phi_R(z)$ is even;
(3) 
$\phi_R(z)$ is non-decreasing for $z<0$, and non-increasing for $z>0$;
(4) 
$0\le\phi_R(z)\le1$, and $\phi_R'\bigl(\pm\tfrac12\bigr)=\tfrac{15}{2}$;
(5) 
$\mathrm{supp}(\phi_R)=\mathrm{supp}(\phi_R')\subseteq(-1,1)$.
Let $\varphi:[c,d]\to\mathbb{R}$ be a bounded and measurable function. Yu [11] introduced a new neural network interpolation operator based on $\phi_R(z)$ as
$$F_n(\varphi)(z)=\sum_{s=0}^{n}\varphi(z_s)\,Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr),\tag{4}$$
where
$$Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)=\frac{\phi_R\bigl(\frac{1}{h}(z-z_s)\bigr)}{\sum_{s=0}^{n}\phi_R\bigl(\frac{1}{h}(z-z_s)\bigr)},$$
and $z_s$ are the uniformly spaced nodes defined by $z_s=c+sh$, $s=0,1,\dots,n$, with $h=\frac{d-c}{n}$. It is clear that
$$\sum_{s=0}^{n}Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)=1,$$
and hence, from (4), it follows that
$$F_n(1;z)=1.$$
Yu [11] showed that the operators $F_n(\varphi;z)$ interpolate $\varphi(z)$ at the nodes $z_s$, $s=0,1,\dots,n$. They also obtained the following estimates for $Q_{s,R}\bigl(\tfrac{1}{h}(z-z_s)\bigr)$ and its derivative.
Lemma 2. 
[11] Let $z\in[z_\nu,z_{\nu+1}]$, $\nu=0,1,\dots,n-1$. Then, for $s\neq\nu,\nu+1$, we have
$$Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)=Q_{s,R}'\Bigl(\frac{1}{h}(z-z_s)\Bigr)=0,$$
and, for $s=\nu,\nu+1$, there follows
$$0\le Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)\le1,\qquad\Bigl|Q_{s,R}'\Bigl(\frac{1}{h}(z-z_s)\Bigr)\Bigr|\le\frac{15}{4h}.$$
Theorem 1. 
[11] For any $\varphi\in C([c,d])$, $F_n(\varphi)(z)$ interpolates $\varphi(z)$ at the nodes $\{z_i\}_{i=0}^{n}$, that is,
$$F_n(\varphi)(z_i)=\varphi(z_i),\qquad i=0,1,\dots,n.$$
Theorem 2. 
[11] Let $\varphi\in C([c,d])$. Then, for the operators $F_n(\varphi)(z)$, it holds that
$$\bigl\|F_n(\varphi)-\varphi\bigr\|_{\infty}\le2\,\omega\Bigl(\varphi,\frac{d-c}{n}\Bigr).$$
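The following self-contained Python sketch (an illustration of ours; the test function $\sin z$ and the grid sizes are arbitrary choices) implements the operators $F_n$ of (4) and checks numerically the interpolation property of Theorem 1, together with the smallness of the uniform error suggested by Theorem 2.

```python
import numpy as np

def sigma_R(z):
    """Smooth ramp sigmoid: quintic 10t^3 - 15t^4 + 6t^5 with t = z + 1/2, clipped to [0, 1]."""
    t = np.clip(np.asarray(z, dtype=float) + 0.5, 0.0, 1.0)
    return 10*t**3 - 15*t**4 + 6*t**5

def phi_R(z):
    """Density phi_R(z) = sigma_R(z + 1/2) - sigma_R(z - 1/2)."""
    z = np.asarray(z, dtype=float)
    return sigma_R(z + 0.5) - sigma_R(z - 0.5)

def F_n(phi, c, d, n, z):
    """Interpolation operator F_n(phi)(z) = sum_s phi(z_s) Q_{s,R}((z - z_s)/h)
    on the uniform nodes z_s = c + s*h, h = (d - c)/n."""
    h = (d - c) / n
    nodes = c + h * np.arange(n + 1)
    z = np.atleast_1d(np.asarray(z, dtype=float))
    W = phi_R((z[:, None] - nodes[None, :]) / h)        # phi_R at scaled distances to the nodes
    Q = W / W.sum(axis=1, keepdims=True)                # normalized weights Q_{s,R}
    return Q @ phi(nodes)

# Illustrative test: phi(z) = sin(z) on [-1, 1].
c, d, n = -1.0, 1.0, 20
phi = np.sin
nodes = np.linspace(c, d, n + 1)
grid = np.linspace(c, d, 1001)
print(np.max(np.abs(F_n(phi, c, d, n, nodes) - phi(nodes))))   # ~ 1e-16: interpolation at the nodes
print(np.max(np.abs(F_n(phi, c, d, n, grid) - phi(grid))))     # small uniform error
```

Re-running the last line for larger $n$ shows the uniform error shrinking, consistent with the bound $2\,\omega(\varphi,(d-c)/n)$ of Theorem 2.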
The paper is organized as follows: some auxiliary definitions and results are given in Section 1. The high order of approximation by using the smoothness of $\varphi$, as well as the related fractional approximation results, are discussed in Section 2. In Section 3, we study the quantitative approximation of complex-valued continuous functions on $[c,d]$ by the NN operators activated by smooth ramp functions. Finally, we discuss a few examples of functions for which our results are applicable by using NN operators activated by smooth ramp functions.

2. Main Results

We present the following high-order approximation result, by using the smoothness of  φ .
Theorem 3. 
Let $\varphi\in C^{m}([c,d])$, $m\in\mathbb{N}$, $z\in[c,d]$. Then, we have
(1)
$$\bigl|F_n(\varphi)(z)-\varphi(z)\bigr|\le2\Biggl[\sum_{\nu=1}^{m}\frac{|\varphi^{(\nu)}(z)|}{\nu!}\,\frac{(d-c)^{\nu}}{n^{\nu}}+\omega_1\Bigl(\varphi^{(m)},\frac{d-c}{n}\Bigr)\frac{(d-c)^{m}}{n^{m}\,m!}\Biggr];\tag{8}$$
(2)
$$\bigl\|F_n(\varphi)-\varphi\bigr\|_{\infty}\le2\Biggl[\sum_{\nu=1}^{m}\frac{\|\varphi^{(\nu)}\|_{\infty}}{\nu!}\,\frac{(d-c)^{\nu}}{n^{\nu}}+\omega_1\Bigl(\varphi^{(m)},\frac{d-c}{n}\Bigr)\frac{(d-c)^{m}}{n^{m}\,m!}\Biggr];\tag{9}$$
(3) Assume further that $\varphi^{(\nu)}(z)=0$, $\nu=1,2,\dots,m$, for some fixed $z\in[c,d]$; then it holds that
$$\bigl|F_n(\varphi)(z)-\varphi(z)\bigr|\le2\,\omega_1\Bigl(\varphi^{(m)},\frac{d-c}{n}\Bigr)\frac{(d-c)^{m}}{n^{m}\,m!},\tag{10}$$
a pointwise convergence at the high speed $\frac{1}{n^{m+1}}$; and
(4)
$$\Bigl|F_n(\varphi)(z)-\varphi(z)-\sum_{\nu=1}^{m}\frac{\varphi^{(\nu)}(z)}{\nu!}\,F_n\bigl((\cdot-z)^{\nu}\bigr)(z)\Bigr|\le2\,\omega_1\Bigl(\varphi^{(m)},\frac{d-c}{n}\Bigr)\frac{(d-c)^{m}}{n^{m}\,m!}.\tag{11}$$
Proof. 
Let $\varphi\in C^{m}([c,d])$, $m\in\mathbb{N}$. Then, by the Taylor formula, we can write
$$\varphi(z_s)=\sum_{\nu=0}^{m}\frac{\varphi^{(\nu)}(z)}{\nu!}(z_s-z)^{\nu}+\int_{z}^{z_s}\bigl(\varphi^{(m)}(y)-\varphi^{(m)}(z)\bigr)\frac{(z_s-y)^{m-1}}{(m-1)!}\,dy.$$
Thus, we can write what follows:
$$F_n(\varphi)(z)-\varphi(z)=\sum_{\nu=1}^{m}\frac{\varphi^{(\nu)}(z)}{\nu!}\sum_{s=0}^{n}Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)(z_s-z)^{\nu}+\sum_{s=0}^{n}Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)\int_{z}^{z_s}\bigl(\varphi^{(m)}(y)-\varphi^{(m)}(z)\bigr)\frac{(z_s-y)^{m-1}}{(m-1)!}\,dy.$$
Let us set
$$R_n(z):=\sum_{s=0}^{n}Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)\int_{z}^{z_s}\bigl(\varphi^{(m)}(y)-\varphi^{(m)}(z)\bigr)\frac{(z_s-y)^{m-1}}{(m-1)!}\,dy,$$
and
$$\Theta(z,z_s):=\int_{z}^{z_s}\bigl(\varphi^{(m)}(y)-\varphi^{(m)}(z)\bigr)\frac{(z_s-y)^{m-1}}{(m-1)!}\,dy.$$
We distinguish the following cases:
(i) We assume that $z\le z_s$; then
$$|\Theta(z,z_s)|=\Bigl|\int_{z}^{z_s}\bigl(\varphi^{(m)}(y)-\varphi^{(m)}(z)\bigr)\frac{(z_s-y)^{m-1}}{(m-1)!}\,dy\Bigr|\le\int_{z}^{z_s}\bigl|\varphi^{(m)}(y)-\varphi^{(m)}(z)\bigr|\frac{(z_s-y)^{m-1}}{(m-1)!}\,dy\le\omega_1\bigl(\varphi^{(m)},z_s-z\bigr)\frac{(z_s-z)^{m}}{m!}.$$
(ii) Moreover, in the case $z\ge z_s$, we have
$$|\Theta(z,z_s)|\le\int_{z_s}^{z}\bigl|\varphi^{(m)}(y)-\varphi^{(m)}(z)\bigr|\frac{(y-z_s)^{m-1}}{(m-1)!}\,dy\le\omega_1\bigl(\varphi^{(m)},z-z_s\bigr)\frac{(z-z_s)^{m}}{m!}.$$
Consequently, we have found that
$$|\Theta(z,z_s)|\le\omega_1\bigl(\varphi^{(m)},|z_s-z|\bigr)\frac{|z_s-z|^{m}}{m!}.$$
Therefore, it holds that
$$|R_n(z)|\le\sum_{s=0}^{n}Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)\,\omega_1\bigl(\varphi^{(m)},|z_s-z|\bigr)\frac{|z_s-z|^{m}}{m!}.$$
Noting that $z_r\le z\le z_{r+1}$ for some $r\in\{0,1,\dots,n-1\}$ and using Lemma 2, we obtain
$$|R_n(z)|\le Q_{r,R}\Bigl(\frac{1}{h}(z-z_r)\Bigr)\omega_1\bigl(\varphi^{(m)},|z_r-z|\bigr)\frac{|z_r-z|^{m}}{m!}+Q_{r+1,R}\Bigl(\frac{1}{h}(z-z_{r+1})\Bigr)\omega_1\bigl(\varphi^{(m)},|z_{r+1}-z|\bigr)\frac{|z_{r+1}-z|^{m}}{m!}\le2\,\omega_1\Bigl(\varphi^{(m)},\frac{d-c}{n}\Bigr)\frac{(d-c)^{m}}{n^{m}\,m!}.\tag{12}$$
Next, we observe
$$\Bigl|\sum_{s=0}^{n}Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)(z_s-z)^{\nu}\Bigr|\le\sum_{s=0}^{n}Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)|z_s-z|^{\nu}\le Q_{r,R}\Bigl(\frac{1}{h}(z-z_r)\Bigr)|z_r-z|^{\nu}+Q_{r+1,R}\Bigl(\frac{1}{h}(z-z_{r+1})\Bigr)|z_{r+1}-z|^{\nu}\le\frac{2(d-c)^{\nu}}{n^{\nu}}.$$
Therefore, we derive
$$\Bigl|\sum_{\nu=1}^{m}\frac{\varphi^{(\nu)}(z)}{\nu!}\sum_{s=0}^{n}Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)(z_s-z)^{\nu}\Bigr|\le2\sum_{\nu=1}^{m}\frac{|\varphi^{(\nu)}(z)|}{\nu!}\,\frac{(d-c)^{\nu}}{n^{\nu}}.\tag{13}$$
Using (12) and (13), we derive (8)–(10). It is clear that taking
$$\sum_{s=0}^{n}Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)(z_s-z)^{\nu}=F_n\bigl((\cdot-z)^{\nu}\bigr)(z),$$
we immediately obtain the relation (11).    □
We present a related Voronovskaya-type asymptotic expansion for the error of approximation.
Theorem 4. 
Let $\varphi\in C^{m}([c,d])$, $m\in\mathbb{N}$. Then, we have
$$F_n(\varphi)(z)-\varphi(z)-\sum_{\nu=1}^{m-1}\frac{\varphi^{(\nu)}(z)}{\nu!}\,F_n\bigl((\cdot-z)^{\nu}\bigr)(z)=o\Bigl(\frac{1}{n^{m-\epsilon}}\Bigr),\tag{14}$$
where $0<\epsilon\le m$, $n\in\mathbb{N}$.
If m = 1 , the sum above disappears.
From the asymptotic expansion given by the relation (14), it follows that
$$n^{m-\epsilon}\Bigl[F_n(\varphi)(z)-\varphi(z)-\sum_{\nu=1}^{m-1}\frac{\varphi^{(\nu)}(z)}{\nu!}\,F_n\bigl((\cdot-z)^{\nu}\bigr)(z)\Bigr]\to0,\quad\text{as }n\to\infty,$$
where $0<\epsilon\le m$, $n\in\mathbb{N}$.
When $m=1$, or $\varphi^{(\nu)}(z)=0$, $\nu=1,\dots,m-1$, then
$$n^{m-\epsilon}\bigl[F_n(\varphi)(z)-\varphi(z)\bigr]\to0,\quad\text{as }n\to\infty,\ 0<\epsilon\le m.$$
Proof. 
Let $z\in[c,d]$; then, by the Taylor formula, we can write
$$\varphi(z_s)=\sum_{j=0}^{m-1}\frac{\varphi^{(j)}(z)}{j!}(z_s-z)^{j}+\int_{z}^{z_s}\varphi^{(m)}(y)\frac{(z_s-y)^{m-1}}{(m-1)!}\,dy.$$
For any fixed $z\in[c,d]$, there exists $r\in\{0,1,\dots,n-1\}$ such that $z_r\le z\le z_{r+1}$.
Now, we can write what follows:
$$F_n(\varphi)(z)-\varphi(z)=\sum_{\nu=1}^{m-1}\frac{\varphi^{(\nu)}(z)}{\nu!}\sum_{s=0}^{n}Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)(z_s-z)^{\nu}+\sum_{s=0}^{n}Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)\int_{z}^{z_s}\varphi^{(m)}(y)\frac{(z_s-y)^{m-1}}{(m-1)!}\,dy.$$
Here, we also set
$$R(z):=\sum_{s=0}^{n}Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)\int_{z}^{z_s}\varphi^{(m)}(y)\frac{(z_s-y)^{m-1}}{(m-1)!}\,dy,$$
so that
$$F_n(\varphi)(z)-\varphi(z)-\sum_{\nu=1}^{m-1}\frac{\varphi^{(\nu)}(z)}{\nu!}\,F_n\bigl((\cdot-z)^{\nu}\bigr)(z)=R(z).$$
Hence, it holds that
$$|R(z)|\le\sum_{s=0}^{n}Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)\Bigl|\int_{z}^{z_s}\varphi^{(m)}(y)\frac{(z_s-y)^{m-1}}{(m-1)!}\,dy\Bigr|.$$
We distinguish the following cases:
(1) Let $z_s\ge z$. Then,
$$\Bigl|\int_{z}^{z_s}\varphi^{(m)}(y)\frac{(z_s-y)^{m-1}}{(m-1)!}\,dy\Bigr|\le\int_{z}^{z_s}\bigl|\varphi^{(m)}(y)\bigr|\frac{(z_s-y)^{m-1}}{(m-1)!}\,dy\le\bigl\|\varphi^{(m)}\bigr\|_{\infty}\frac{(z_s-z)^{m}}{m!}.$$
(2) Moreover, in the case $z_s\le z$, we have
$$\Bigl|\int_{z}^{z_s}\varphi^{(m)}(y)\frac{(z_s-y)^{m-1}}{(m-1)!}\,dy\Bigr|\le\int_{z_s}^{z}\bigl|\varphi^{(m)}(y)\bigr|\frac{(y-z_s)^{m-1}}{(m-1)!}\,dy\le\bigl\|\varphi^{(m)}\bigr\|_{\infty}\frac{(z-z_s)^{m}}{m!}.$$
Thus, in both cases, we have proved that
$$\Bigl|\int_{z}^{z_s}\varphi^{(m)}(y)\frac{(z_s-y)^{m-1}}{(m-1)!}\,dy\Bigr|\le\bigl\|\varphi^{(m)}\bigr\|_{\infty}\frac{|z_s-z|^{m}}{m!}.$$
Clearly, we find
$$|R(z)|\le\sum_{s=0}^{n}Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)\bigl\|\varphi^{(m)}\bigr\|_{\infty}\frac{|z_s-z|^{m}}{m!}\le Q_{r,R}\Bigl(\frac{1}{h}(z-z_r)\Bigr)\bigl\|\varphi^{(m)}\bigr\|_{\infty}\frac{|z_r-z|^{m}}{m!}+Q_{r+1,R}\Bigl(\frac{1}{h}(z-z_{r+1})\Bigr)\bigl\|\varphi^{(m)}\bigr\|_{\infty}\frac{|z_{r+1}-z|^{m}}{m!}\le\frac{2\bigl\|\varphi^{(m)}\bigr\|_{\infty}(d-c)^{m}}{m!\,n^{m}}=:\frac{A}{n^{m}}.$$
From the above estimates, we find
$$|R(z)|=O\bigl(n^{-m}\bigr),\qquad\text{and}\qquad|R(z)|=o(1).$$
So, for 0 < ϵ m , we obtain
$$\frac{|R(z)|}{1/n^{m-\epsilon}}\le\frac{A}{n^{\epsilon}}\to0,$$
as n . That is,
$$|R(z)|=o\Bigl(\frac{1}{n^{m-\epsilon}}\Bigr),\qquad n\in\mathbb{N},$$
which proves the theorem.    □
Let us now give some of the definitions and lemmas we need for the validity of the following theorems. Let $AC^{m}([c,d])$ be the space of functions $\varphi$ with $\varphi^{(m-1)}\in AC([c,d])$, the space of absolutely continuous functions on $[c,d]$.
Definition 1. 
[30] Let $\kappa>0$, $m=\lceil\kappa\rceil$ ($\lceil\cdot\rceil$ is the ceiling of the number), and $\varphi\in AC^{m}([c,d])$. The left Caputo fractional derivative of $\varphi$ of order $\kappa$ is the function
$$D_{*c}^{\kappa}\varphi(z):=\frac{1}{\Gamma(m-\kappa)}\int_{c}^{z}(z-y)^{m-\kappa-1}\varphi^{(m)}(y)\,dy,\qquad z\in[c,d],$$
where $\Gamma$ is the gamma function. We set $D_{*c}^{0}\varphi(z)=\varphi(z)$, $z\in[c,d]$.
Lemma 3. 
[30] Let $\kappa>0$, $\kappa\notin\mathbb{N}$, $m=\lceil\kappa\rceil$, $\varphi\in C^{m-1}([c,d])$, and $\varphi^{(m)}\in L_{\infty}([c,d])$. Then, $D_{*c}^{\kappa}\varphi(c)=0$.
Definition 2. 
[30] Let $\varphi\in AC^{m}([c,d])$, $m=\lceil\kappa\rceil$, $\kappa>0$. The right Caputo fractional derivative of order $\kappa>0$ is given by
$$D_{d-}^{\kappa}\varphi(z):=\frac{(-1)^{m}}{\Gamma(m-\kappa)}\int_{z}^{d}(y-z)^{m-\kappa-1}\varphi^{(m)}(y)\,dy,$$
for $z\in[c,d]$. We set $D_{d-}^{0}\varphi(z)=\varphi(z)$.
Lemma 4. 
[30] Let $\kappa>0$, $m=\lceil\kappa\rceil$, $\varphi\in C^{m-1}([c,d])$, and $\varphi^{(m)}\in L_{\infty}([c,d])$. Then, $D_{d-}^{\kappa}\varphi(d)=0$.
As in [30], for all $z,w\in[c,d]$, we assume that
$$D_{*w}^{\kappa}\varphi(z)=0\ \ \text{for }z<w,\qquad\text{and}\qquad D_{w-}^{\kappa}\varphi(z)=0\ \ \text{for }z>w.\tag{18}$$
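As a numerical illustration of Definition 1 (not taken from the paper), the sketch below approximates the left Caputo derivative for an order $0<\kappa<1$ (so $m=1$), assuming $\varphi'$ is available in closed form; the product-type quadrature rule and the test function $\varphi(z)=z^{2}$ are our own choices.

```python
import numpy as np
from math import gamma

def left_caputo(phi_prime, c, z, kappa, N=2000):
    """Left Caputo derivative of order kappa in (0, 1) (so m = 1):
    D_{*c}^kappa phi(z) = 1/Gamma(1-kappa) * int_c^z (z-y)^(-kappa) phi'(y) dy.
    The weakly singular kernel is integrated exactly on each subinterval,
    with phi' frozen at the midpoint (an L1-type product rule)."""
    y = np.linspace(c, z, N + 1)
    mid = 0.5 * (y[:-1] + y[1:])
    # exact integral of (z - y)^(-kappa) over [y_i, y_{i+1}]
    w = ((z - y[:-1]) ** (1 - kappa) - (z - y[1:]) ** (1 - kappa)) / (1 - kappa)
    return np.sum(phi_prime(mid) * w) / gamma(1 - kappa)

# Check against the closed form D_{*0}^kappa z^2 = 2 z^(2-kappa) / Gamma(3-kappa).
kappa, z = 0.5, 0.8
approx = left_caputo(lambda y: 2.0 * y, 0.0, z, kappa)
exact = 2.0 * z ** (2 - kappa) / gamma(3 - kappa)
print(approx, exact)
```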
In the following, we present the related fractional estimates result.
Theorem 5. 
Let $\eta>0$, $m=\lceil\eta\rceil$, $\eta\notin\mathbb{N}$, $\varphi\in AC^{m}([c,d])$, and $\varphi^{(m)}\in L_{\infty}([c,d])$. Then, we have
(1)
$$\bigl|F_n(\varphi)(z)-\varphi(z)\bigr|\le2\Biggl[\sum_{\nu=1}^{m-1}\frac{|\varphi^{(\nu)}(z)|}{\nu!}\,\frac{(d-c)^{\nu}}{n^{\nu}}+\frac{(d-c)^{\eta}}{\Gamma(\eta+1)\,n^{\eta}}\Bigl(\omega_1\Bigl(D_{z-}^{\eta}\varphi,\frac{d-c}{n}\Bigr)+\omega_1\Bigl(D_{*z}^{\eta}\varphi,\frac{d-c}{n}\Bigr)\Bigr)\Biggr];\tag{19}$$
(2)
$$\bigl\|F_n(\varphi)-\varphi\bigr\|_{\infty}\le2\Biggl[\sum_{\nu=1}^{m-1}\frac{\|\varphi^{(\nu)}\|_{\infty}}{\nu!}\,\frac{(d-c)^{\nu}}{n^{\nu}}+\frac{(d-c)^{\eta}}{\Gamma(\eta+1)\,n^{\eta}}\Bigl(\sup_{z\in[c,d]}\omega_1\Bigl(D_{z-}^{\eta}\varphi,\frac{d-c}{n}\Bigr)+\sup_{z\in[c,d]}\omega_1\Bigl(D_{*z}^{\eta}\varphi,\frac{d-c}{n}\Bigr)\Bigr)\Biggr]<\infty.\tag{20}$$
Proof. 
For any fixed $z\in[c,d]$, there exists $r\in\{0,1,\dots,n-1\}$ such that $z_r\le z\le z_{r+1}$. We have that
$$D_{z-}^{\eta}\varphi(z)=D_{*z}^{\eta}\varphi(z)=0.$$
From (18), for all $z,w\in[c,d]$, we obtain
$$D_{*z}^{\eta}\varphi(w)=0\ \ \text{for }w<z,\qquad\text{and}\qquad D_{z-}^{\eta}\varphi(w)=0\ \ \text{for }w>z.$$
From [31], the right Caputo fractional Taylor formula gives
$$\varphi(z_s)=\sum_{\nu=0}^{m-1}\frac{\varphi^{(\nu)}(z)}{\nu!}(z_s-z)^{\nu}+\frac{1}{\Gamma(\eta)}\int_{z_s}^{z}(J-z_s)^{\eta-1}\bigl(D_{z-}^{\eta}\varphi(J)-D_{z-}^{\eta}\varphi(z)\bigr)\,dJ,$$
for all $c\le z_s\le z$. Also, from ([32], p. 54), the left Caputo fractional Taylor formula yields
$$\varphi(z_s)=\sum_{\nu=0}^{m-1}\frac{\varphi^{(\nu)}(z)}{\nu!}(z_s-z)^{\nu}+\frac{1}{\Gamma(\eta)}\int_{z}^{z_s}(z_s-J)^{\eta-1}\bigl(D_{*z}^{\eta}\varphi(J)-D_{*z}^{\eta}\varphi(z)\bigr)\,dJ,$$
for all $z\le z_s\le d$. Hence, it holds that
$$\sum_{s=0}^{r}\varphi(z_s)\,Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)=\sum_{\nu=0}^{m-1}\frac{\varphi^{(\nu)}(z)}{\nu!}\sum_{s=0}^{r}Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)(z_s-z)^{\nu}+R_1,\tag{21}$$
where
$$R_1:=\sum_{s=0}^{r}Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)\frac{1}{\Gamma(\eta)}\int_{z_s}^{z}(J-z_s)^{\eta-1}\bigl(D_{z-}^{\eta}\varphi(J)-D_{z-}^{\eta}\varphi(z)\bigr)\,dJ,$$
since $c\le z_s\le z$ for $s=0,\dots,r$.
Similarly, for $z_s\in[z,d]$, we have that
$$\sum_{s=r+1}^{n}\varphi(z_s)\,Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)=\sum_{\nu=0}^{m-1}\frac{\varphi^{(\nu)}(z)}{\nu!}\sum_{s=r+1}^{n}Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)(z_s-z)^{\nu}+R_2,\tag{22}$$
where
$$R_2:=\sum_{s=r+1}^{n}Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)\frac{1}{\Gamma(\eta)}\int_{z}^{z_s}(z_s-J)^{\eta-1}\bigl(D_{*z}^{\eta}\varphi(J)-D_{*z}^{\eta}\varphi(z)\bigr)\,dJ.$$
Now, by adding (21) and (22), we can write what follows:
$$\bigl|F_n(\varphi)(z)-\varphi(z)\bigr|\le\sum_{\nu=1}^{m-1}\frac{|\varphi^{(\nu)}(z)|}{\nu!}\sum_{s=0}^{n}Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)|z_s-z|^{\nu}+|R_1|+|R_2|\le2\sum_{\nu=1}^{m-1}\frac{|\varphi^{(\nu)}(z)|}{\nu!}\,\frac{(d-c)^{\nu}}{n^{\nu}}+|R_1|+|R_2|.$$
Now, we can estimate | R 1 | and | R 2 | . We have that
$$|R_1|\le\sum_{s=0}^{r}Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)\frac{1}{\Gamma(\eta)}\Bigl|\int_{z_s}^{z}(J-z_s)^{\eta-1}\bigl(D_{z-}^{\eta}\varphi(J)-D_{z-}^{\eta}\varphi(z)\bigr)\,dJ\Bigr|\le\sum_{s=0}^{r}Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)\frac{1}{\Gamma(\eta)}\,\omega_1\bigl(D_{z-}^{\eta}\varphi,\,z-z_s\bigr)\frac{(z-z_s)^{\eta}}{\eta}$$
$$=Q_{r,R}\Bigl(\frac{1}{h}(z-z_r)\Bigr)\frac{1}{\Gamma(\eta+1)}\,\omega_1\bigl(D_{z-}^{\eta}\varphi,\,z-z_r\bigr)(z-z_r)^{\eta}\le\frac{1}{\Gamma(\eta+1)}\,\omega_1\Bigl(D_{z-}^{\eta}\varphi,\frac{d-c}{n}\Bigr)\frac{(d-c)^{\eta}}{n^{\eta}}.$$
We have proved that
$$|R_1|\le\frac{1}{\Gamma(\eta+1)}\,\omega_1\Bigl(D_{z-}^{\eta}\varphi,\frac{d-c}{n}\Bigr)\frac{(d-c)^{\eta}}{n^{\eta}}.\tag{24}$$
Furthermore, we observe that
$$|R_2|\le\sum_{s=r+1}^{n}Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)\frac{1}{\Gamma(\eta)}\int_{z}^{z_s}(z_s-J)^{\eta-1}\bigl|D_{*z}^{\eta}\varphi(J)-D_{*z}^{\eta}\varphi(z)\bigr|\,dJ\le\sum_{s=r+1}^{n}Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)\frac{1}{\Gamma(\eta)}\,\omega_1\bigl(D_{*z}^{\eta}\varphi,\,z_s-z\bigr)\int_{z}^{z_s}(z_s-J)^{\eta-1}\,dJ$$
$$=\sum_{s=r+1}^{n}Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)\frac{1}{\Gamma(\eta+1)}\,\omega_1\bigl(D_{*z}^{\eta}\varphi,\,z_s-z\bigr)(z_s-z)^{\eta}\le Q_{r+1,R}\Bigl(\frac{1}{h}(z-z_{r+1})\Bigr)\frac{1}{\Gamma(\eta+1)}\,\omega_1\Bigl(D_{*z}^{\eta}\varphi,\frac{d-c}{n}\Bigr)\frac{(d-c)^{\eta}}{n^{\eta}}\le\frac{1}{\Gamma(\eta+1)}\,\omega_1\Bigl(D_{*z}^{\eta}\varphi,\frac{d-c}{n}\Bigr)\frac{(d-c)^{\eta}}{n^{\eta}}.$$
That is, we have proved
$$|R_2|\le\frac{1}{\Gamma(\eta+1)}\,\omega_1\Bigl(D_{*z}^{\eta}\varphi,\frac{d-c}{n}\Bigr)\frac{(d-c)^{\eta}}{n^{\eta}}.\tag{25}$$
Thus, from the above estimates we obtain
$$|R_1|+|R_2|\le\frac{(d-c)^{\eta}}{\Gamma(\eta+1)\,n^{\eta}}\Bigl[\omega_1\Bigl(D_{*z}^{\eta}\varphi,\frac{d-c}{n}\Bigr)+\omega_1\Bigl(D_{z-}^{\eta}\varphi,\frac{d-c}{n}\Bigr)\Bigr].$$
Now, finally, by using (24) and (25), we find (19), which implies (20).
In the following, we prove that the right-hand side of (20) is finite.
We have
$$D_{*z}^{\eta}\varphi(y)=\frac{1}{\Gamma(m-\eta)}\int_{z}^{y}(y-J)^{m-\eta-1}\varphi^{(m)}(J)\,dJ,\qquad z\le y\le d.$$
Hence,
$$\bigl|D_{*z}^{\eta}\varphi(y)\bigr|\le\frac{\|\varphi^{(m)}\|_{\infty}}{\Gamma(m-\eta+1)}\,(d-c)^{m-\eta},\qquad z\le y\le d.$$
Thus, we have
$$\bigl\|D_{*z}^{\eta}\varphi\bigr\|_{\infty}\le\frac{\|\varphi^{(m)}\|_{\infty}}{\Gamma(m-\eta+1)}\,(d-c)^{m-\eta}.\tag{27}$$
Similarly, by simple calculations, it follows that
$$D_{z-}^{\eta}\varphi(y)=\frac{(-1)^{m}}{\Gamma(m-\eta)}\int_{y}^{z}(J-y)^{m-\eta-1}\varphi^{(m)}(J)\,dJ,\qquad c\le y\le z.$$
This implies
$$\bigl|D_{z-}^{\eta}\varphi(y)\bigr|\le\frac{\|\varphi^{(m)}\|_{\infty}}{\Gamma(m-\eta+1)}\,(d-c)^{m-\eta},\qquad c\le y\le z.$$
Thus,
$$\bigl\|D_{z-}^{\eta}\varphi\bigr\|_{\infty}\le\frac{\|\varphi^{(m)}\|_{\infty}}{\Gamma(m-\eta+1)}\,(d-c)^{m-\eta}.\tag{28}$$
Now, for δ > 0 , we can write what follows:
$$\omega_1\bigl(D_{z-}^{\eta}\varphi,\delta\bigr)=\sup_{\substack{z_1,z_2\in[c,d]\\|z_1-z_2|\le\delta}}\bigl|D_{z-}^{\eta}\varphi(z_1)-D_{z-}^{\eta}\varphi(z_2)\bigr|\le2\,\bigl\|D_{z-}^{\eta}\varphi\bigr\|_{\infty}\le\frac{2\,\|\varphi^{(m)}\|_{\infty}}{\Gamma(m-\eta+1)}\,(d-c)^{m-\eta}<+\infty.$$
Hence, it holds that
$$\omega_1\bigl(D_{z-}^{\eta}\varphi,\delta\bigr)\le\frac{2\,\|\varphi^{(m)}\|_{\infty}}{\Gamma(m-\eta+1)}\,(d-c)^{m-\eta}<\infty.$$
Thus, we finally have  
$$\sup_{z\in[c,d]}\omega_1\bigl(D_{z-}^{\eta}\varphi,\delta\bigr)\le\frac{2\,\|\varphi^{(m)}\|_{\infty}}{\Gamma(m-\eta+1)}\,(d-c)^{m-\eta}<\infty,$$
and, similarly, we obtain
$$\sup_{z\in[c,d]}\omega_1\bigl(D_{*z}^{\eta}\varphi,\delta\bigr)\le\frac{2\,\|\varphi^{(m)}\|_{\infty}}{\Gamma(m-\eta+1)}\,(d-c)^{m-\eta}<\infty.$$
This completes the proof.    □
Corollary 1. 
We assume all the conditions of Theorem 5 and, in addition, that $\varphi^{(\nu)}(z)=0$, $\nu=1,\dots,m-1$. Then, we have
$$\bigl|F_n(\varphi)(z)-\varphi(z)\bigr|\le\frac{2\,(d-c)^{\eta}}{\Gamma(\eta+1)\,n^{\eta}}\Bigl[\omega_1\Bigl(D_{z-}^{\eta}\varphi,\frac{d-c}{n}\Bigr)+\omega_1\Bigl(D_{*z}^{\eta}\varphi,\frac{d-c}{n}\Bigr)\Bigr].$$
We notice in the previous expression the high speed of pointwise convergence, $\frac{1}{n^{\eta+1}}$.
Next, we present a result of the fractional Voronovskaya-type asymptotic expansion.
Theorem 6. 
Let $\eta>0$, $m=\lceil\eta\rceil$, $\eta\notin\mathbb{N}$, $\varphi\in AC^{m}([c,d])$, and $\varphi^{(m)}\in L_{\infty}([c,d])$; then, we have
$$F_n(\varphi)(z)-\varphi(z)-\sum_{\nu=1}^{m-1}\frac{\varphi^{(\nu)}(z)}{\nu!}\,F_n\bigl((\cdot-z)^{\nu}\bigr)(z)=o\Bigl(\frac{1}{n^{\eta-\epsilon}}\Bigr),\tag{30}$$
where $0<\epsilon\le\eta$, $n\in\mathbb{N}$.
If m = 1 , the sum above disappears.
From the asymptotic expansion given by the relation (30), it follows that
$$n^{\eta-\epsilon}\Bigl[F_n(\varphi)(z)-\varphi(z)-\sum_{\nu=1}^{m-1}\frac{\varphi^{(\nu)}(z)}{\nu!}\,F_n\bigl((\cdot-z)^{\nu}\bigr)(z)\Bigr]\to0,$$
as $n\to\infty$, $0<\epsilon\le\eta$.
When $m=1$, or $\varphi^{(\nu)}(z)=0$, $\nu=1,2,\dots,m-1$, then
$$n^{\eta-\epsilon}\bigl[F_n(\varphi)(z)-\varphi(z)\bigr]\to0,$$
as $n\to\infty$, $0<\epsilon\le\eta$.
Of great interest is the case $\eta=\tfrac{1}{2}$.
Proof. 
From the left Caputo fractional Taylor formula ([32], p. 54), we obtain
$$\varphi(z_s)=\sum_{\nu=0}^{m-1}\frac{\varphi^{(\nu)}(z)}{\nu!}(z_s-z)^{\nu}+\frac{1}{\Gamma(\eta)}\int_{z}^{z_s}(z_s-J)^{\eta-1}D_{*z}^{\eta}\varphi(J)\,dJ,$$
for all $z\le z_s\le d$.
Furthermore, from [31], using the right Caputo fractional Taylor formula, we obtain
$$\varphi(z_s)=\sum_{\nu=0}^{m-1}\frac{\varphi^{(\nu)}(z)}{\nu!}(z_s-z)^{\nu}+\frac{1}{\Gamma(\eta)}\int_{z_s}^{z}(J-z_s)^{\eta-1}D_{z-}^{\eta}\varphi(J)\,dJ,$$
for all $c\le z_s\le z$.
Let $z\in[c,d]$ be fixed; then $z_r\le z\le z_{r+1}$ for some $r\in\{0,1,\dots,n-1\}$. Hence, it holds that
$$\sum_{s=r+1}^{n}\varphi(z_s)\,Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)=\sum_{\nu=0}^{m-1}\frac{\varphi^{(\nu)}(z)}{\nu!}\sum_{s=r+1}^{n}Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)(z_s-z)^{\nu}+\sum_{s=r+1}^{n}Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)\frac{1}{\Gamma(\eta)}\int_{z}^{z_s}(z_s-J)^{\eta-1}D_{*z}^{\eta}\varphi(J)\,dJ=\sum_{\nu=0}^{m-1}\frac{\varphi^{(\nu)}(z)}{\nu!}\sum_{s=r+1}^{n}Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)(z_s-z)^{\nu}+R_1,$$
since $z\le z_s\le d$ for $s=r+1,\dots,n$. Furthermore, it holds that
$$\sum_{s=0}^{r}\varphi(z_s)\,Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)=\sum_{\nu=0}^{m-1}\frac{\varphi^{(\nu)}(z)}{\nu!}\sum_{s=0}^{r}Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)(z_s-z)^{\nu}+\sum_{s=0}^{r}Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)\frac{1}{\Gamma(\eta)}\int_{z_s}^{z}(J-z_s)^{\eta-1}D_{z-}^{\eta}\varphi(J)\,dJ=\sum_{\nu=0}^{m-1}\frac{\varphi^{(\nu)}(z)}{\nu!}\sum_{s=0}^{r}Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)(z_s-z)^{\nu}+R_2,$$
since $c\le z_s\le z$ for $s=0,\dots,r$. Hence, we obtain
$$F_n(\varphi)(z)-\varphi(z)-\sum_{\nu=1}^{m-1}\frac{\varphi^{(\nu)}(z)}{\nu!}\sum_{s=0}^{n}Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)(z_s-z)^{\nu}=R_1+R_2.$$
We also see that, for any $z\in[c,d]$, by (27) and (28), we have
$$\bigl\|D_{*z}^{\eta}\varphi\bigr\|_{\infty},\ \bigl\|D_{z-}^{\eta}\varphi\bigr\|_{\infty}\le\frac{\|\varphi^{(m)}\|_{\infty}}{\Gamma(m-\eta+1)}\,(d-c)^{m-\eta}=:M,$$
with M > 0 . That is, we find
$$F_n(\varphi)(z)-\varphi(z)-\sum_{\nu=1}^{m-1}\frac{\varphi^{(\nu)}(z)}{\nu!}\,F_n\bigl((\cdot-z)^{\nu}\bigr)(z)=R_1+R_2.$$
We easily see that
$$|R_1|\le M\sum_{s=r+1}^{n}Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)\frac{(z_s-z)^{\eta}}{\Gamma(\eta+1)}\le\frac{M\,(d-c)^{\eta}}{n^{\eta}\,\Gamma(\eta+1)}.$$
Similarly, we reach
$$|R_2|\le M\sum_{s=0}^{r}Q_{s,R}\Bigl(\frac{1}{h}(z-z_s)\Bigr)\frac{(z-z_s)^{\eta}}{\Gamma(\eta+1)}\le\frac{M\,(d-c)^{\eta}}{n^{\eta}\,\Gamma(\eta+1)}.$$
Therefore, it holds that
$$|R_1+R_2|\le\frac{2M\,(d-c)^{\eta}}{n^{\eta}\,\Gamma(\eta+1)}.$$
Thus,
$$|R_1+R_2|=O\Bigl(\frac{1}{n^{\eta}}\Bigr),$$
and
$$|R_1+R_2|=o(1).$$
Thus, finally letting 0 < ϵ η , we derive
$$n^{\eta-\epsilon}\,|R_1+R_2|\le\frac{2M\,(d-c)^{\eta}}{n^{\epsilon}\,\Gamma(\eta+1)}\to0,$$
as n , so that
$$|R_1+R_2|=o\Bigl(\frac{1}{n^{\eta-\epsilon}}\Bigr),\qquad n\in\mathbb{N},$$
proving the claim.    □

3. Complex Multivariate Neural Network Approximation

Let $\varphi:[c,d]\to\mathbb{C}$ with real and imaginary parts $\varphi_1,\varphi_2$, i.e., $\varphi=\varphi_1+i\varphi_2$, $i=\sqrt{-1}$. Clearly, $\varphi$ is continuous if $\varphi_1$ and $\varphi_2$ are continuous. Let $C([c,d],\mathbb{C}):=\{\varphi:[c,d]\to\mathbb{C}\ :\ \varphi\ \text{is continuous}\}$. Then $\varphi_1,\varphi_2\in C([c,d])$, and hence both are bounded, which implies that $\varphi$ is bounded.
Let us define
$$F_n^{\mathbb{C}}(\varphi)(z):=F_n(\varphi_1)(z)+i\,F_n(\varphi_2)(z),\qquad z\in[c,d].\tag{31}$$
We observe that
$$\bigl|F_n^{\mathbb{C}}(\varphi)(z)-\varphi(z)\bigr|\le\bigl|F_n(\varphi_1)(z)-\varphi_1(z)\bigr|+\bigl|F_n(\varphi_2)(z)-\varphi_2(z)\bigr|,\tag{32}$$
and, hence,
$$\bigl\|F_n^{\mathbb{C}}(\varphi)-\varphi\bigr\|_{\infty}\le\bigl\|F_n(\varphi_1)-\varphi_1\bigr\|_{\infty}+\bigl\|F_n(\varphi_2)-\varphi_2\bigr\|_{\infty}.\tag{33}$$
Clearly, if φ is bounded, then φ 1 , φ 2 are also bounded functions.
In order to establish the interpolation property, let us assume that φ is bounded and measurable. Then, φ 1 , φ 2 are bounded and measurable in [ c , d ] .
From Theorem 1 and (31), for $\varphi\in C([c,d],\mathbb{C})$, we have
$$F_n^{\mathbb{C}}(\varphi)(z_\nu)=F_n(\varphi_1)(z_\nu)+i\,F_n(\varphi_2)(z_\nu)=\varphi_1(z_\nu)+i\,\varphi_2(z_\nu)=\varphi(z_\nu),$$
for every $\nu=0,1,\dots,n$. Hence, for every $\varphi\in C([c,d],\mathbb{C})$, the operators $F_n^{\mathbb{C}}(\varphi)$ interpolate the function $\varphi$ at the nodes $z_\nu$, $\nu=0,1,\dots,n$.
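A compact, self-contained Python sketch (our own illustration) of the complex extension (31): the real operator is applied separately to the real and imaginary parts, and the interpolation property at the nodes carries over; the test function $e^{i\pi z}$ is an arbitrary choice.

```python
import numpy as np

def sigma_R(z):
    t = np.clip(np.asarray(z, dtype=float) + 0.5, 0.0, 1.0)
    return 10*t**3 - 15*t**4 + 6*t**5            # smooth ramp sigmoid

def phi_R(z):
    z = np.asarray(z, dtype=float)
    return sigma_R(z + 0.5) - sigma_R(z - 0.5)   # density

def F_n(phi, c, d, n, z):
    h = (d - c) / n
    nodes = c + h * np.arange(n + 1)
    z = np.atleast_1d(np.asarray(z, dtype=float))
    W = phi_R((z[:, None] - nodes[None, :]) / h)
    return (W / W.sum(axis=1, keepdims=True)) @ phi(nodes)

def F_n_complex(phi, c, d, n, z):
    """F_n^C(phi)(z) = F_n(Re phi)(z) + i F_n(Im phi)(z), as in (31)."""
    return (F_n(lambda t: np.real(phi(t)), c, d, n, z)
            + 1j * F_n(lambda t: np.imag(phi(t)), c, d, n, z))

# Illustrative complex-valued test function phi(z) = exp(i*pi*z) on [-1, 1].
c, d, n = -1.0, 1.0, 25
phi = lambda z: np.exp(1j * np.pi * z)
nodes = np.linspace(c, d, n + 1)
print(np.max(np.abs(F_n_complex(phi, c, d, n, nodes) - phi(nodes))))  # ~ 1e-16 at the nodes
```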
In the following result, we present the uniform error estimate for the operators (31) in terms of modulus of smoothness by using (33) and Theorem 2.
Theorem 7. 
We assume that $\varphi\in C([c,d],\mathbb{C})$. Then, there holds the inequality
$$\bigl\|F_n^{\mathbb{C}}(\varphi)-\varphi\bigr\|_{\infty}\le2\Bigl[\omega\Bigl(\varphi_1;\frac{d-c}{n}\Bigr)+\omega\Bigl(\varphi_2;\frac{d-c}{n}\Bigr)\Bigr].$$
In view of (32) and (33), using Theorem 3, we easily derive the following convergence estimates in the pointwise and uniform approximation of smooth complex functions by the operators (31).
Theorem 8. 
Let $\varphi\in C^{m}([c,d],\mathbb{C})$, $m\in\mathbb{N}$, and $z\in[c,d]$. Then,
(i)
$$\bigl|F_n^{\mathbb{C}}(\varphi)(z)-\varphi(z)\bigr|\le2\Biggl[\sum_{\nu=1}^{m}\frac{|\varphi_1^{(\nu)}(z)|+|\varphi_2^{(\nu)}(z)|}{\nu!}\,\frac{(d-c)^{\nu}}{n^{\nu}}+\Bigl(\omega_1\Bigl(\varphi_1^{(m)},\frac{d-c}{n}\Bigr)+\omega_1\Bigl(\varphi_2^{(m)},\frac{d-c}{n}\Bigr)\Bigr)\frac{(d-c)^{m}}{n^{m}\,m!}\Biggr].$$
(ii)
$$\bigl\|F_n^{\mathbb{C}}(\varphi)-\varphi\bigr\|_{\infty}\le2\Biggl[\sum_{\nu=1}^{m}\frac{\|\varphi_1^{(\nu)}\|_{\infty}+\|\varphi_2^{(\nu)}\|_{\infty}}{\nu!}\,\frac{(d-c)^{\nu}}{n^{\nu}}+\Bigl(\omega_1\Bigl(\varphi_1^{(m)},\frac{d-c}{n}\Bigr)+\omega_1\Bigl(\varphi_2^{(m)},\frac{d-c}{n}\Bigr)\Bigr)\frac{(d-c)^{m}}{n^{m}\,m!}\Biggr].$$

4. Applications

Next, we give several illustrative examples, implemented with MATLAB, to verify the convergence behavior, computational efficiency, and consistency of the interpolation neural network operators activated by the ramp function $\varsigma_R(z)$. The effect of the ramp function on the absolute error of approximation of the operators $F_n(\varphi)$ is examined in the following tables and graphs in the one-dimensional case. The implementation steps are given in Algorithm 1.
Algorithm 1: Implementing the interpolation NN operators (4) activated by smooth ramp functions.
[Algorithm 1 is provided as an image in the published article.]
Example 1. 
Let us take the operators defined in relation (4). We now apply the interpolation neural network operators activated by the ramp function $\varsigma_R(z)$ to the function $\psi(z)=6e^{-3z^{2}}\sin(2\pi z/3)$ with $n=5,15,25,45,55,65$, and $100$. Let $E_n(\varphi;z)=|F_n(\varphi)(z)-\varphi(z)|$ be the error of approximation by the operators $F_n(\varphi)$. Figure 1a demonstrates that the operators $F_n(\varphi)$ have a good approximation performance in the one-dimensional case, while the approximation error is shown in Figure 1b. In Table 1, we have computed the error at certain points in $[-1,1]$. We observe that, as the value of $n$ increases, the approximation improves, i.e., for the largest value of $n$ the error is minimal. In Figure 2 we plot the error at the point $z=0.5$ against $n=5,15,25,45,65,100$.
If we take test points uniformly, the results are shown in Table 2.
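For the reader's convenience, here is a Python re-implementation sketch of Example 1 (the paper's experiments were carried out in MATLAB; this is our own code, and the test function below is our reading of the printed formula, with the exponent taken as $-3z^{2}$), so the reported values may differ slightly from Table 1.

```python
import numpy as np

def sigma_R(z):
    t = np.clip(np.asarray(z, dtype=float) + 0.5, 0.0, 1.0)
    return 10*t**3 - 15*t**4 + 6*t**5

def phi_R(z):
    z = np.asarray(z, dtype=float)
    return sigma_R(z + 0.5) - sigma_R(z - 0.5)

def F_n(phi, c, d, n, z):
    h = (d - c) / n
    nodes = c + h * np.arange(n + 1)
    z = np.atleast_1d(np.asarray(z, dtype=float))
    W = phi_R((z[:, None] - nodes[None, :]) / h)
    return (W / W.sum(axis=1, keepdims=True)) @ phi(nodes)

# Test function of Example 1 (our reading of the printed formula) on [-1, 1].
psi = lambda z: 6.0 * np.exp(-3.0 * z**2) * np.sin(2.0 * np.pi * z / 3.0)
c, d = -1.0, 1.0
test_points = np.array([-0.8, -0.4, -0.1, 0.1, 0.4, 0.8])
for n in (5, 15, 25, 45, 55, 65, 100):
    E = np.abs(F_n(psi, c, d, n, test_points) - psi(test_points))
    print(n, np.round(E, 4))   # errors E_n(psi; z) at the chosen points, decreasing in n
```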
Example 2. 
We now apply the interpolation neural network operators activated by the ramp function $\varsigma_R(z)$ to the function $\psi(z)=z^{2}\sin\bigl(1/(z+1)\bigr)$ with $n=5,15,25,45,55,65$, and $100$. Figure 3a demonstrates that the operators $F_n(\varphi)$ have a good approximation performance in the one-dimensional case, while the approximation error is shown in Figure 3b. In Table 3, we have computed the error at certain points in $[-1,1]$. We observe that, as the value of $n$ increases, the approximation improves, i.e., for the largest value of $n$ the error is minimal. The computational efficiency lies in the fact that, as $n$ increases, the error very quickly approaches zero. In Figure 4 we plot the error at the point $z=0.5$ against $n=5,15,25,45,65,100$.
If we take test points uniformly, the results are shown in Table 4.

5. Conclusive Remarks

In this paper, we studied the approximation properties of a novel family of interpolation NN operators based on the ramp function. The construction and approximation results for the family of operators $F_n(\varphi)(z)$, aimed at improving the rate of approximation to smooth functions, have been discussed. We have extended our study to the iterated and complex cases of the operators $F_n(\varphi)(z)$. It is interesting to mention that, in the approximation process by our proposed family of operators, one can consider the known sample values in $F_n(\varphi)(z)$ as training data for the network. The convergence results for the proposed operators demonstrate their ability to reproduce values extending beyond the confines of the training set. Finally, we gave several illustrative examples with the help of MATLAB algorithms to verify the convergence behavior, computational efficiency, and consistency of the neural network operators activated by smooth ramp functions.

Author Contributions

All the authors have equally contributed to the conceptualization, framing and writing of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

We assert that no data sets were generated or analyzed during the preparation of the manuscript.

Acknowledgments

We are extremely thankful to the reviewers for their careful reading of the manuscript and making valuable suggestions, which led to a better presentation of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Costarelli, D.; Vinti, G. Approximation by nonlinear multivariate sampling-Kantorovich type operators and applications to image processing. Numer. Funct. Anal. Optim. 2013, 34, 819–844. [Google Scholar] [CrossRef]
  2. Cybenko, G. Approximation by superpositions of sigmoidal function. Math. Control Signals Syst. 1989, 2, 303–314. [Google Scholar] [CrossRef]
  3. Barron, A.R. Universal approximation bounds for superpositions of a sigmodal function. IEEE Trans. Inform. 1993, 39, 930–945. [Google Scholar] [CrossRef]
  4. Costarelli, D.; Spigler, R. Convergence of a family of neural network operators of the Kantorovich type. J. Approx. Theory 2014, 185, 80–90. [Google Scholar] [CrossRef]
  5. Anastassiou, G.A. Rate of Convergence of Some Neural Network Operators to the Unit-Univariate Case. J. Math. Anal. Appl. 1997, 212, 237–262. [Google Scholar] [CrossRef]
  6. Anastassiou, G.A. Univariate hyperbolic tangent neural network approximation. Math. Comput. Model. 2011, 53, 1111–1132. [Google Scholar] [CrossRef]
  7. Anastassiou, G.A. Multivariate hyperbolic tangent neural network approximation. Comput. Math. Appl. 2011, 61, 809–821. [Google Scholar] [CrossRef]
  8. Cardaliaguet, P.; Euvrard, G. Approximation of a function and its derivative with a neural network. Neural Netw. 1992, 5, 207–220. [Google Scholar] [CrossRef]
  9. Costarelli, D.; Spigler, R. Approximation results for neural network operators activated by sigmoidal functions. Neural Netw. 2013, 44, 101–106. [Google Scholar] [CrossRef]
  10. Yu, D.S. Approximation by neural networks with sigmoidal functions. Acta Math. Sin. 2013, 29, 2013–2026. [Google Scholar] [CrossRef]
  11. Yu, D.S.; Zhou, P. Approximation by neural network operators activated by smooth ramp functions. Acta Math. Sin. (Chin. Ed.) 2016, 59, 623–638. [Google Scholar]
  12. Costarelli, D.; Spigler, R.; Vinti, G. A survey on approximation by means of neural network operators. J. NeuroTechnol. 2016, 1, 29–52. [Google Scholar]
  13. Costarelli, D.; Vinti, G. Rate of approximation for multivariate sampling Kantorovich operators on some function spaces. J. Int. Equ. Appl. 2014, 26, 455–481. [Google Scholar] [CrossRef]
  14. Costarelli, D.; Vinti, G. Quantitative estimates involving K-functionals for neural network-type operators. Appl. Anal. 2019, 98, 2639–2647. [Google Scholar] [CrossRef]
  15. Uyan, H.; Aslan, O.A.; Karateke, S.; Büyükyazıcı, Í. Interpolation for neural network operators activated with a generalized logistic-type function. Preprint 2024. [Google Scholar] [CrossRef]
  16. Costarelli, D.; Vinti, G. Voronovskaja type theorems and high-order convergence neural network operators with sigmoidal functions. Mediterr. J. Math. 2020, 17, 23. [Google Scholar] [CrossRef]
  17. Coroianu, L.; Costarelli, D.; Kadak, U. Quantitative estimates for neural network operators implied by the asymptotic behaviour of the sigmoidal activation functions. Mediterr. J. Math. 2022, 19, 211. [Google Scholar] [CrossRef]
  18. Kadak, U. Multivariate neural network interpolation operators. J. Comput. Appl. Math. 2022, 414, 114426. [Google Scholar] [CrossRef]
  19. Qian, Y.Y.; Yu, D.S. Rates of approximation by neural network interpolation operators. Appl. Math. Comput. 2022, 418, 126781. [Google Scholar] [CrossRef]
  20. Costarelli, D.; Vinti, G. Approximation by max-product neural network operators of Kantorovich type. Results Math. 2016, 69, 505–519. [Google Scholar] [CrossRef]
  21. Costarelli, D.; Vinti, G. Max-product neural network and quasi-interpolation operators activated by sigmoidal functions. J. Approx. Theory 2016, 209, 1–22. [Google Scholar] [CrossRef]
  22. Wang, G.S.; Yu, D.S.; Guan, L.M. Neural network interpolation operators of multivariate function. J. Comput. Appl. Math. 2023, 431, 115266. [Google Scholar] [CrossRef]
  23. Qian, Y.Y.; Yu, D.S. Neural network interpolation operators activated by smooth ramp functions. Anal. Appl. 2022, 20, 791–813. [Google Scholar] [CrossRef]
  24. Wang, G.; Yu, D.; Zhou, P. Neural network interpolation operators optimized by Lagrange polynomial. Neural Netw. 2022, 153, 179–191. [Google Scholar] [CrossRef] [PubMed]
  25. Bajpeyi, S.; Kumar, A.S. Approximation by exponential type neural network operators. Anal. Math. Phys. 2021, 11, 108. [Google Scholar] [CrossRef]
  26. Bajpeyi, S. Order of approximation for exponential type neural network operators. Results Math. 2023, 78, 99. [Google Scholar] [CrossRef]
  27. Mahmudov, N.; Kara, M. Approximation properties of the Riemann–Liouville fractional integral type Szász–Mirakyan–Kantorovich operators. J. Math. Inequal. 2022, 16, 1285–1308. [Google Scholar] [CrossRef]
  28. Kadak, U. Fractional type multivariate neural network operators. Math. Methods Appl. Sci. 2023, 46, 3045–3065. [Google Scholar] [CrossRef]
  29. Li, F.J. Constructive function approximation by neural networks with optimized activation functions and fixed weights. Neural Comput. Appl. 2019, 31, 4613–4628. [Google Scholar] [CrossRef]
  30. Anastassiou, G.A. Fractional Korovkin theory. Chaos Solit. Fract. 2009, 42, 2080–2094. [Google Scholar] [CrossRef]
  31. Anastassiou, G. On right fractional calculus. Chaos Solit. Fract. 2009, 42, 365–376. [Google Scholar] [CrossRef]
  32. Diethelm, K. The Analysis of Fractional Differential Equations, Lecture Notes in Mathematics 2004; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
Figure 1. As we increase the value of n, the approximation improves, i.e., for the largest value of n the error is minimal.
Figure 2. The error at the point z = 0.5 versus n = 5, 15, 25, 45, 65, 100.
Figure 3. As we increase the value of n, the approximation improves, i.e., for the largest value of n the error is minimal.
Figure 4. The error at the point z = 0.5 versus n = 5, 15, 25, 45, 65, 100.
Table 1. Error of approximation E_n for n = 5, 15, 25, 45, 55, 65, and 100.

z      E_5      E_15     E_25     E_45     E_55     E_65     E_100
-1.0   0        0        0        0        0        0        0
-0.8   0.2235   0.0283   0.0103   0.0032   0.0021   0.0015   0
-0.7   0.392    0.1005   0.0669   0.0364   0.0285   0.025    0
-0.5   0.4932   0.0663   0.0597   0.0311   0.0219   0.021    0
-0.4   0.7079   0.0833   0.0301   0.0093   0.0062   0.0045   4.44·10^-16
-0.1   0.5057   0.2479   0.1222   0.0701   0.0617   0.0491   0
 0.1   0.5057   0.2479   0.1222   0.0701   0.0617   0.0491   0
 0.4   0.7079   0.0833   0.0301   0.0093   0.0062   0.0045   0
 0.5   0.4932   0.0663   0.0597   0.0311   0.0219   0.021    0
 0.7   0.392    0.1005   0.0669   0.0364   0.0285   0.025    1.55·10^-15
 0.8   0.2235   0.0283   0.0103   0.0032   0.0021   0.0015   0
 1.0   0        0        0        0        0        0        0
Table 2. Error of approximation E_n for n = 5, 15, 25, 45, 55, 65, and 100.

z      E_5           E_15     E_25     E_45     E_55     E_65     E_100
-0.8   0.2235        0.0283   0.0103   0.0032   0.0021   0.0015   0
-0.4   0.7079        0.0833   0.0301   0.0093   0.0062   0.0045   4.44·10^-16
 0     1.55·10^-15   0        0        0        0        0        0
 0.4   0.7079        0.0833   0.0301   0.0093   0.0062   0.0045   0
 0.8   0.2235        0.0283   0.0103   0.0032   0.0021   0.0015   0
Table 3. Error of approximation E_n for n = 5, 15, 25, 45, 55, 65, and 100.

z      E_5      E_15     E_25     E_45          E_55          E_65          E_100
-0.9   0.1397   0.0319   0.0228   0.0123        0.0095        0.0085        0
-0.7   0.0423   0.0259   0.0128   0.0074        0.0065        0.0052        0
-0.4   0.0348   0.0038   0.0014   4.26·10^-4    2.85·10^-4    2.04·10^-4    5.55·10^-17
-0.1   0.5057   0.2479   0.1222   0.0701        0.0617        0.0491        0
 0.3   0.0082   0.0053   0.0025   0.0015        0.0013        0.001         0
 0.5   0.0256   0.0064   0.0043   0.0024        0.0018        0.0016        0
 0.9   0.0335   0.0098   0.0062   0.0034        0.0027        0.0024        0
Table 4. Error of approximation E_n for n = 5, 15, 25, 45, 55, 65, and 100.

z      E_5      E_15         E_25         E_45         E_55         E_65         E_100
-0.8   0.0649   0.0072       0.0026       8.01·10^-4   5.36·10^-4   3.84·10^-4   0
-0.4   0.0348   0.0038       0.0014       4.26·10^-4   2.85·10^-4   2.04·10^-4   5.55·10^-17
 0     0.0193   0.0021       7.67·10^-4   2.37·10^-4   1.59·10^-4   1.13·10^-4   0
 0.4   0.0116   0.0013       4.60·10^-4   1.42·10^-4   9.50·10^-5   6.80·10^-5   0
 0.8   0.0074   8.18·10^-4   2.94·10^-4   9.08·10^-5   6.08·10^-5   4.35·10^-5   0
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
