Article

Univariate Neural Network Quantitative (NNQ) Approximation by Symmetrized Operators

1 Department of Mathematical Sciences, University of Memphis, Memphis, TN 38152, USA
2 Department of Software Engineering, Faculty of Engineering and Natural Sciences, Istanbul Atlas University, Istanbul 34408, Türkiye
3 Department of Computer Engineering, Faculty of Engineering and Natural Sciences, Sivas University of Science and Technology, Sivas 58000, Türkiye
* Author to whom correspondence should be addressed.
Fractal Fract. 2025, 9(6), 365; https://doi.org/10.3390/fractalfract9060365
Submission received: 6 March 2025 / Revised: 29 May 2025 / Accepted: 30 May 2025 / Published: 3 June 2025
(This article belongs to the Section Numerical and Computational Methods)

Abstract

This paper deals not only with pointwise and uniform convergence but also with Y-valued fractional approximation results for univariate symmetrized neural network (SNN) operators on a Banach space $(Y, \|\cdot\|)$. Moreover, our main motivation in this work is to compare the convergence results obtained by classical neural network (NN) operators with those obtained by symmetrized neural network (SNN) operators, and to turn this comparison into numerical examples and graphs through Python code. As a result of this experimental study, conducted under a certain parameter regime, the convergence speed and accuracy of the SNN operators are superior to those of the classical NN operators.

1. Introduction and Related Work

Pioneering work in the fields of machine learning (ML) theory and computational approximation theory is integrating cutting-edge applications into our lives every day (see [1,2,3,4]). With the advancement of artificial intelligence (AI) technology, humanity needs ever more effective ML models, and the contribution of pure mathematics and physics at this point cannot be ignored.
Throughout the history of mathematics and physics, the phenomena of "symmetry" and "asymmetry" have been of great importance both in theory and in practice (please see also [5]). In examining the literature, we came across promising studies on how these phenomena are applied to real-world problems. It would be too assertive to say that the perspectives of different disciplines on symmetry-asymmetry phenomena, and the tools and practices they use, are exactly the same; however, it would not be wrong to say that they share similar roots in principle and philosophy.
In [6], Mattheakis et al. use ANNs and symplectic NN architectures to solve differential equations in the spirit of Noether's theorem from mathematical physics, where the "physical symmetries" governing the behavior of physical systems play the central role: translational symmetry yields conservation of momentum, and rotational symmetry yields conservation of angular momentum and energy. They also exploited the property of a function being odd or even (symmetry with respect to the origin or the y-axis in Euclidean geometry) when generating real-world data. To obtain better regression estimates than a standard feedforward multilayer perceptron (MLP) that is unaware of symmetry, a hidden layer called a "hub layer" was integrated into the classical MLP architecture; in this way, the desired odd or even regression function could be achieved.
In [7], Faroughi et al. consider fundamental physics-based ANNs in three different categories and some applications to solid and fluid mechanics. They especially emphasize that working with sparse data is disadvantageous in terms of the training process under the regime of classical ANNs. When viewed from the philosophy of inserting physical structures into NN architectures, the symmetry applications of Lagrangian neural networks in fundamental physics attract attention.
Lu et al. recommend an elegant new network, the deep operator network (DeepONet), inspired by the universal approximation theorem (UAT) for operators and its generalized version. In [8], they also establish that certain implicit operators (for example, integrals and fractional Laplacians) as well as explicit operators (such as solution operators of stochastic and deterministic differential equations) are learnable by DeepONet.
In [9], Goswami et al. put under the spotlight a comprehensive literature review, highlighting the usefulness of the graph neural network operators, DeepONet and Fourier neural network operators, and their generalized versions, as well as their applications to the field of computational mechanics.
Again, as an interesting work, in [10], Dong et al. handle the concepts of "symmetry", "anti-symmetry", and "non-symmetry" from the point of view of particle physics. In their data visualization, by employing the Heaviside step function, they use three equations verifying the anti-symmetric, symmetric, and fully anti-symmetric cases for a two-dimensional dataset. Based on their experimental studies, they also believe that the use of continuous symmetries (like gauge and Lorentz symmetries) from field theory will open the door to promising results. Mostly, they study the performance of Equivariant Quantum Neural Networks (EQNNs) with discrete symmetries.
Another work that was the driving force behind the “symmetrization” technique, which was created by Anastassiou (please see [11]) and used in this study, is in [12]. Tahmasebi and Jegelka revealed that it is possible to use “smaller” amounts of data to train artificial neural networks by taking advantage of the “symmetry” phenomenon used in datasets. They have looked at the differential geometric perspective of ways to reduce the complexity of a given dataset in ANNs. More precisely, they have succeeded in extending Weyl’s law, which is used primarily in Spectral Theory, to include symmetry in the evaluation of the complexity of a dataset in machine learning (ML). It is thought that machine learning models that incorporate symmetry will not only make more accurate predictions, but will also fill a serious gap, especially in some disciplines where training data is scarce.
In [13], Na and Park offer architecturally symmetric neural networks (NNs) in which even the weights act in a symmetric manner. Here, the authors particularly draw attention to the “difficulties” of memory space and time spent training the model in the performance process of the ANN models. In order to overcome these difficulties at an optimum level, they highlight the “symmetric” structure of the model with some examples, such as the XOR neural network. The NN structure here is topologically symmetric; in other words, this structure has symmetric weights, connections, and even inputs. Ref. [14] is a comprehensive study of the approximation status of the idea of embedding infinite-dimensional inputs in classical neural network architectures, the approximation potentials of symmetric and antisymmetric networks, and the potentials of learning simple symmetric functions by gradient techniques.
The concept of “permutation symmetries” in multilayer perceptrons (MLPs), which began with the pioneering work of Hecht-Nielsen [15], has spawned enthusiasm among many researchers up to the present day. Albertini et al. [16] generalized this idea and investigated flip sign symmetries in artificial neural networks by considering single functions. Later, Laurent et al. [17] investigated the concept of complexity in Bayesian neural network posteriors. They stated that considering symmetries in NN models yielded promising results and that they gathered considerable insights for future studies.
In [18], Vlačić and Bölcskei solve the open problem posed by Fefferman [19]. Accordingly, they develop a theory of the relationships between NNs realizing a given function $f : \mathbb{R}^m \to \mathbb{R}^n$ by relating tanh and tanh-type nonlinear functions and their symmetries. In fact, they point out that the architecture, weights, and biases of feedforward neural networks (FNNs) can be determined with respect to a given nonlinearity g.
In [20], Perin and Deny state that incorporating "symmetries in the sense of transformations with group actions" into the architecture of deep neural networks (DNNs) is promising in enabling ML models to produce more accurate predictions. Thus, they answer the questions of how and under what conditions DNNs can learn symmetries in a dataset.
In [21], Hutter considers symmetric and anti-symmetric NNs. He also emphasizes the essential features that an ideal approximation architecture should have. The main emphasis of this work is to prove, via the "universality" principle, the existence of an "equivalent MLP" for symmetric MLP structures.
In this work, we benchmark the approximation performance of symmetrized NN operators against that of classical NN operators. Inspired by [12,22], let us consider any function with mirror symmetry in the Euclidean plane (a function symmetric about the y-axis, i.e., an "even" function), and imagine that we are calculating its integral over a symmetric region. Since, as is well known, the whole region of integration consists of two equal subregions, it suffices to perform the calculation over one of them. We anticipate that a similar logic applies to ML models that take symmetry into account: relatively smaller datasets can be used, saving data and energy. As pointed out in [23], the popularity of computer programming languages such as Python, with their "number-crunching" mindset, continues to increase. With this motivation, in the current study we crown our numerical examples with the graphics we obtained thanks to the Python programming language.
Beyond the above-mentioned papers, in [24], Cantarini and Costarelli consider a research problem related to “simultaneous approximation” on well-known neural network (NN) operators activated by classical logistic functions with interesting Voronovskaja-type results. Again, in [25], density results of deep neural network (DNN) operators have been highlighted. Moreover, here, Costarelli also draws attention to the issue that additional layers of NN operators can play a critical role in the degree of accuracy of the obtained approximations, and thus, increasing the number of hidden layers will improve the approximation. Also, in [26], Costarelli and Spigler study the pointwise and uniform convergence of certain neural network operators. They consider these convergence cases by taking into account both the weights and the number of neurons of the network. They even propose solution methods obtained by using sigmoidal functions in the solutions of integro-differential equations and Volterra integral equations. Turkun and Duman [27] have used regular summability methods to improve the convergence results of NN operators and increase the convergence rate. They also have put forth, with numerical examples and graphs, that they obtained better convergence results than those obtained by classical methods. In [28], Hu et al. have parameterized deep neural network weights to be symmetric. Thanks to these symmetry parameterizations, memory requirements are significantly reduced, and computational efficiency is achieved. These improvements are revolutionary, especially for mobile applications. Ref. [29] deals with an experimental study taking advantage of the “symmetry” property for credit risk modeling. The symmetry feature used in this credit risk modeling offers significant advantages. Some of these can be listed as temporal invariance of risk models, consistent representation of risk factors, and symmetry in financial network structures created by market participants. 
Peleshchak et al. have studied the classification of mines made of different materials with a neural network structure in [30]. They have shown that in addition to the asymmetry of the neurons in the first and second hidden layers of this neural network structure with respect to the symmetry plane between the hidden layers, the activation functions used also indicated a significant change in the accuracy of the developed neural network model. Even when the symmetry of the number of neurons in the hidden layers is broken, a change in the value of the loss function is observed.
The current study is designed as follows. In Section 1, we provide a gentle introduction and an in-depth literature survey. In Section 2, we give an elegant theoretical background about the symmetrized density function and activation function, which are the backbone of our symmetrized NN operators. We insert this background into the approximation theorems in the subsequent section. Section 3 is devoted to pointwise and uniform approximation results. In Section 4, which is the core part of the paper, a table of convergence rates obtained by employing classical NN operators and symmetrized NN operators, as well as some special functions, is given. Furthermore, these numerical results are interpreted with nice graphics created using the Python symbolic library SymPy and Python numerical computation libraries SciPy and NumPy (see also [31]). In Section 5, the findings are discussed, and ideas that open the doors to the future are suggested. In Appendix A, the Python programming language codes used throughout the article are shared with curious researchers from all fields, especially mathematicians, and also subject experts as “open source” (see [31,32,33,34,35]).

2. Regarding Symmetrized Density Function and Creation of Symmetrized Neural Network (SNN) Operator

In this section, we first consider our activation function as in (1). Moreover, we create the density function $\Phi_{\hat{t},\xi}$ by taking advantage of $k_{\hat{t},\xi}$. Inspired by [11], and with a different line of vision compared with other papers such as [5,24,26,27,36], our operators shall be derived from a so-called "symmetrized density function", whose construction is given in (3)-(6).
Now, our activation function to be used here is
$$k_{\hat{t},\xi}(\bar{\upsilon}) := \frac{1-\hat{t}\,e^{-2\xi\bar{\upsilon}}}{1+\hat{t}\,e^{-2\xi\bar{\upsilon}}}, \tag{1}$$
for $\xi, \hat{t} > 0$, $\bar{\upsilon} \in \mathbb{R}$. Above, $\xi$ is the parameter, and $\hat{t}$ is the deformation coefficient. For more, read [36,37,38]. We employ the following density function:
$$\Phi_{\hat{t},\xi}(\bar{\upsilon}) := \frac{1}{4}\left[k_{\hat{t},\xi}(\bar{\upsilon}+1) - k_{\hat{t},\xi}(\bar{\upsilon}-1)\right] > 0, \tag{2}$$
$\bar{\upsilon} \in \mathbb{R}$; $\hat{t}, \xi > 0$.
For $\bar{\upsilon} \in \mathbb{R}$; $\hat{t}, \xi > 0$, we have that
$$\Phi_{\hat{t},\xi}(-\bar{\upsilon}) = \Phi_{\frac{1}{\hat{t}},\xi}(\bar{\upsilon}), \tag{3}$$
and
$$\Phi_{\frac{1}{\hat{t}},\xi}(-\bar{\upsilon}) = \Phi_{\hat{t},\xi}(\bar{\upsilon}). \tag{4}$$
Adding (3) and (4), we obtain that
$$\Phi_{\hat{t},\xi}(-\bar{\upsilon}) + \Phi_{\frac{1}{\hat{t}},\xi}(-\bar{\upsilon}) = \Phi_{\hat{t},\xi}(\bar{\upsilon}) + \Phi_{\frac{1}{\hat{t}},\xi}(\bar{\upsilon}), \tag{5}$$
which is the key to this work. So that
$$F(\bar{\upsilon}) := \frac{\Phi_{\hat{t},\xi}(\bar{\upsilon}) + \Phi_{\frac{1}{\hat{t}},\xi}(\bar{\upsilon})}{2}. \tag{6}$$
F is symmetric with respect to the y-axis; in other words, we have obtained an even function. By (18.18) of [37], for $\xi > 0$, we have that
$$\Phi_{\hat{t},\xi}\!\left(\frac{\ln \hat{t}}{2\xi}\right) = \frac{\tanh \xi}{2} = \Phi_{\frac{1}{\hat{t}},\xi}\!\left(-\frac{\ln \hat{t}}{2\xi}\right), \tag{7}$$
sharing the same maximum value at symmetric points. By Theorem 18.1, p. 458 of [37], we have that, for all $\bar{\upsilon} \in \mathbb{R}$; $\hat{t}, \xi > 0$,
$$\sum_{\hat{i}=-\infty}^{\infty} \Phi_{\hat{t},\xi}(\bar{\upsilon} - \hat{i}) = \sum_{\hat{i}=-\infty}^{\infty} \Phi_{\frac{1}{\hat{t}},\xi}(\bar{\upsilon} - \hat{i}) = 1 \tag{8}$$
is true. Consequently, we derive that
$$\sum_{\hat{i}=-\infty}^{\infty} F(\bar{\upsilon} - \hat{i}) = 1, \tag{9}$$
for every $\bar{\upsilon} \in \mathbb{R}$. By Theorem 18.2, p. 459 of [37], for $\xi, \hat{t} > 0$, we have that
$$\int_{-\infty}^{\infty} \Phi_{\hat{t},\xi}(\bar{\upsilon})\, d\bar{\upsilon} = \int_{-\infty}^{\infty} \Phi_{\frac{1}{\hat{t}},\xi}(\bar{\upsilon})\, d\bar{\upsilon} = 1, \tag{10}$$
so that
$$\int_{-\infty}^{\infty} F(\bar{\upsilon})\, d\bar{\upsilon} = 1; \tag{11}$$
therefore, $F(\bar{\upsilon})$ is a density function.
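These structural properties can be checked numerically. The following Python sketch is our own illustration, not the authors' published code; the parameter choices $\hat{t} = \frac{1}{4}$, $\xi = \frac{1}{2}$ are borrowed from the examples in Section 4. It implements the activation, the density, and the symmetrized density, and verifies the mirror symmetry, the evenness of F, the partition of unity, and that F integrates to 1:

```python
import numpy as np

# parameters taken from the examples in Section 4
t_hat, xi = 0.25, 0.5

def k(v, t, xi):
    # activation function (1): deformed hyperbolic tangent
    return (1 - t * np.exp(-2 * xi * v)) / (1 + t * np.exp(-2 * xi * v))

def phi(v, t, xi):
    # density function (2)
    return 0.25 * (k(v + 1, t, xi) - k(v - 1, t, xi))

def F(v, t, xi):
    # symmetrized density (6): arithmetic mean of the t and 1/t densities
    return 0.5 * (phi(v, t, xi) + phi(v, 1.0 / t, xi))

v = np.linspace(-5, 5, 101)
# mirror symmetry: phi_{t,xi}(-v) = phi_{1/t,xi}(v), hence F is even
assert np.allclose(phi(-v, t_hat, xi), phi(v, 1 / t_hat, xi))
assert np.allclose(F(-v, t_hat, xi), F(v, t_hat, xi))

# partition of unity: sum_i F(v - i) = 1 (truncated sum; tails are negligible)
i = np.arange(-60, 61)
for v0 in (0.0, 0.37, -1.9):
    assert abs(np.sum(F(v0 - i, t_hat, xi)) - 1.0) < 1e-8

# normalization: the integral of F over R equals 1 (Riemann sum on [-60, 60])
h = 0.001
grid = np.arange(-60.0, 60.0, h)
assert abs(np.sum(F(grid, t_hat, xi)) * h - 1.0) < 1e-5
```

The truncation radii (60 in both checks) are our choices; they are harmless because the density decays exponentially.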
By Theorem 18.3, p. 459, of [37], we have the following: let $0 < \varsigma < 1$, and $\eta \in \mathbb{N}$ with $\eta^{1-\varsigma} > 2$; $\hat{t}, \xi > 0$. Then,
$$\sum_{\substack{m=-\infty \\ |\eta\bar{\upsilon} - m| \ge \eta^{1-\varsigma}}}^{\infty} \Phi_{\hat{t},\xi}(\eta\bar{\upsilon} - m) < 2\max\left(\hat{t}, \frac{1}{\hat{t}}\right) e^{4\xi}\, e^{-2\xi\eta^{1-\varsigma}} = \widehat{W}\, e^{-2\xi\eta^{1-\varsigma}}, \tag{12}$$
where $\widehat{W} := 2\max\left(\hat{t}, \frac{1}{\hat{t}}\right) e^{4\xi}$.
Similarly, we get that
$$\sum_{\substack{m=-\infty \\ |\eta\bar{\upsilon} - m| \ge \eta^{1-\varsigma}}}^{\infty} \Phi_{\frac{1}{\hat{t}},\xi}(\eta\bar{\upsilon} - m) < \widehat{W}\, e^{-2\xi\eta^{1-\varsigma}}. \tag{13}$$
Consequently, we obtain that
$$\sum_{\substack{m=-\infty \\ |\eta\bar{\upsilon} - m| \ge \eta^{1-\varsigma}}}^{\infty} F(\eta\bar{\upsilon} - m) < \widehat{W}\, e^{-2\xi\eta^{1-\varsigma}}, \tag{14}$$
where $\widehat{W}$ is as in (12).
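The exponential tail estimate above is easy to probe numerically. The sketch below is our illustration (not the authors' code): it takes $\hat{t} = \frac{1}{4}$, $\xi = \frac{1}{2}$ as in Section 4, $\varsigma = \frac{1}{2}$, and $\eta = 50$, so that $\eta^{1-\varsigma} \approx 7.07 > 2$, and compares a (truncated) tail sum of F against the bound $\widehat{W} e^{-2\xi\eta^{1-\varsigma}}$. The index window is restricted so that `np.exp` does not overflow:

```python
import numpy as np

def k(v, t, xi):
    return (1 - t * np.exp(-2 * xi * v)) / (1 + t * np.exp(-2 * xi * v))

def phi(v, t, xi):
    return 0.25 * (k(v + 1, t, xi) - k(v - 1, t, xi))

def F(v, t, xi):
    return 0.5 * (phi(v, t, xi) + phi(v, 1.0 / t, xi))

t_hat, xi, varsigma = 0.25, 0.5, 0.5
eta, v = 50, 1.0
assert eta ** (1 - varsigma) > 2          # hypothesis of the theorem

W_hat = 2 * max(t_hat, 1 / t_hat) * np.exp(4 * xi)   # W-hat as in (12)
# |eta*v - m| kept below 700 to stay inside double-precision exp range
m = np.arange(-650, 751)
x = eta * v - m
tail = np.sum(F(x, t_hat, xi)[np.abs(x) >= eta ** (1 - varsigma)])
bound = W_hat * np.exp(-2 * xi * eta ** (1 - varsigma))
assert tail < bound                        # the bound holds with room to spare
```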
Denote the ceiling and the integer part of a real number by $\lceil\cdot\rceil$ and $\lfloor\cdot\rfloor$, respectively.
Theorem 1 
([11]). Let us take $\xi_{\hat{t}} > z_0 > 0$ with $\Phi_{\hat{t},\xi}(z_0) = \Phi_{\hat{t},\xi}(0)$ and $\xi_{\hat{t}} > 1$, for each $\hat{t}, \xi > 0$. Then,
$$\frac{1}{\sum_{m=\lceil \eta a\rceil}^{\lfloor \eta b\rfloor} \Phi_{\hat{t},\xi}(\eta\bar{\upsilon} - m)} < \max\left\{\frac{1}{\Phi_{\hat{t},\xi}(\xi_{\hat{t}})}, \frac{1}{\Phi_{\frac{1}{\hat{t}},\xi}(\xi_{\frac{1}{\hat{t}}})}\right\} =: \bar{A}_{\hat{t}} \tag{15}$$
exists such that $\lceil \eta a\rceil \le \lfloor \eta b\rfloor$, for $\eta \in \mathbb{N}$; $\bar{\upsilon} \in [a,b] \subset \mathbb{R}$.
Similarly, we consider $\xi_{\frac{1}{\hat{t}}} > z_1 > 0$ such that $\Phi_{\frac{1}{\hat{t}},\xi}(z_1) = \Phi_{\frac{1}{\hat{t}},\xi}(0)$, and $\xi_{\frac{1}{\hat{t}}} > 1$.
Thus,
$$\frac{1}{\sum_{m=\lceil \eta a\rceil}^{\lfloor \eta b\rfloor} \Phi_{\frac{1}{\hat{t}},\xi}(\eta\bar{\upsilon} - m)} < \max\left\{\frac{1}{\Phi_{\frac{1}{\hat{t}},\xi}(\xi_{\frac{1}{\hat{t}}})}, \frac{1}{\Phi_{\hat{t},\xi}(\xi_{\hat{t}})}\right\} = \bar{A}_{\hat{t}}. \tag{16}$$
Hence,
$$\sum_{m=\lceil \eta a\rceil}^{\lfloor \eta b\rfloor} \Phi_{\hat{t},\xi}(\eta\bar{\upsilon} - m) > \frac{1}{\bar{A}_{\hat{t}}}, \tag{17}$$
and
$$\sum_{m=\lceil \eta a\rceil}^{\lfloor \eta b\rfloor} \Phi_{\frac{1}{\hat{t}},\xi}(\eta\bar{\upsilon} - m) > \frac{1}{\bar{A}_{\hat{t}}}. \tag{18}$$
Consequently, it holds that
$$\sum_{m=\lceil \eta a\rceil}^{\lfloor \eta b\rfloor} \frac{\Phi_{\hat{t},\xi}(\eta\bar{\upsilon} - m) + \Phi_{\frac{1}{\hat{t}},\xi}(\eta\bar{\upsilon} - m)}{2} > \frac{2}{2\bar{A}_{\hat{t}}} = \frac{1}{\bar{A}_{\hat{t}}}, \tag{19}$$
so that
$$\frac{1}{\sum_{m=\lceil \eta a\rceil}^{\lfloor \eta b\rfloor} \frac{\Phi_{\hat{t},\xi}(\eta\bar{\upsilon} - m) + \Phi_{\frac{1}{\hat{t}},\xi}(\eta\bar{\upsilon} - m)}{2}} < \bar{A}_{\hat{t}}, \tag{20}$$
that is,
$$\frac{1}{\sum_{m=\lceil \eta a\rceil}^{\lfloor \eta b\rfloor} F(\eta\bar{\upsilon} - m)} < \bar{A}_{\hat{t}}. \tag{21}$$
Theorem 2 
([11]). Let us pick $\xi_{\hat{t}} > z_0 > 0$ such that $\Phi_{\hat{t},\xi}(z_0) = \Phi_{\hat{t},\xi}(0)$; $\xi_{\hat{t}} > 1$, for $\hat{t}, \xi > 0$. In addition, let $\xi_{\frac{1}{\hat{t}}} > z_1 > 0$, so that $\Phi_{\frac{1}{\hat{t}},\xi}(z_1) = \Phi_{\frac{1}{\hat{t}},\xi}(0)$ is valid, with $\xi_{\frac{1}{\hat{t}}} > 1$. Then,
$$\frac{1}{\sum_{m=\lceil \eta a\rceil}^{\lfloor \eta b\rfloor} F(\eta\bar{\upsilon} - m)} < \bar{A}_{\hat{t}} \tag{22}$$
is obtained such that $\lceil \eta a\rceil \le \lfloor \eta b\rfloor$, for $\bar{\upsilon} \in [a,b] \subset \mathbb{R}$, $\eta \in \mathbb{N}$.
Remark 1. 
(i) By Remark 18.5, p. 460 of [37], for at least some $\bar{\upsilon}_1 \in [a,b]$, we have that
$$\lim_{\eta\to+\infty} \sum_{m=\lceil \eta a\rceil}^{\lfloor \eta b\rfloor} \Phi_{\hat{t},\xi}(\eta\bar{\upsilon}_1 - m) \ne 1, \tag{23}$$
and for some $\bar{\upsilon}_2 \in [a,b]$,
$$\lim_{\eta\to+\infty} \sum_{m=\lceil \eta a\rceil}^{\lfloor \eta b\rfloor} \Phi_{\frac{1}{\hat{t}},\xi}(\eta\bar{\upsilon}_2 - m) \ne 1, \tag{24}$$
such that $\xi, \hat{t} > 0$. Therefore, it holds that
$$\lim_{\eta\to+\infty} \sum_{m=\lceil \eta a\rceil}^{\lfloor \eta b\rfloor} \frac{\Phi_{\hat{t},\xi}(\eta\bar{\upsilon}_1 - m) + \Phi_{\frac{1}{\hat{t}},\xi}(\eta\bar{\upsilon}_2 - m)}{2} \ne 1. \tag{25}$$
Hence,
$$\lim_{\eta\to+\infty} \sum_{m=\lceil \eta a\rceil}^{\lfloor \eta b\rfloor} \frac{\Phi_{\hat{t},\xi}(\eta\bar{\upsilon}_1 - m) + \Phi_{\frac{1}{\hat{t}},\xi}(\eta\bar{\upsilon}_1 - m)}{2} \ne 1, \tag{26}$$
even if
$$\lim_{\eta\to+\infty} \sum_{m=\lceil \eta a\rceil}^{\lfloor \eta b\rfloor} \Phi_{\frac{1}{\hat{t}},\xi}(\eta\bar{\upsilon}_1 - m) = 1, \tag{27}$$
because then
$$\lim_{\eta\to+\infty} \left(\sum_{m=\lceil \eta a\rceil}^{\lfloor \eta b\rfloor} \frac{\Phi_{\hat{t},\xi}(\eta\bar{\upsilon}_1 - m)}{2} + \frac{1}{2}\right) \ne 1, \tag{28}$$
equivalently,
$$\lim_{\eta\to+\infty} \sum_{m=\lceil \eta a\rceil}^{\lfloor \eta b\rfloor} \frac{\Phi_{\hat{t},\xi}(\eta\bar{\upsilon}_1 - m)}{2} \ne \frac{1}{2}, \tag{29}$$
true by
$$\lim_{\eta\to+\infty} \sum_{m=\lceil \eta a\rceil}^{\lfloor \eta b\rfloor} \Phi_{\hat{t},\xi}(\eta\bar{\upsilon}_1 - m) \ne 1. \tag{30}$$
(ii) For sufficiently large $\eta \in \mathbb{N}$ and $a, b \in \mathbb{R}$, $\lceil \eta a\rceil \le \lfloor \eta b\rfloor$ is valid. So, in general, it holds that
$$\sum_{m=\lceil \eta a\rceil}^{\lfloor \eta b\rfloor} F(\eta\bar{\upsilon} - m) \ne 1, \tag{31}$$
such that $\lceil \eta a\rceil \le m \le \lfloor \eta b\rfloor$ if and only if $a \le \frac{m}{\eta} \le b$.
Here, $(Y, \|\cdot\|)$ denotes a Banach space.
Definition 1. 
The Y-valued linear "symmetrized neural network (SNN) operator" is defined as below:
$$L_\eta^s(f,\bar{\upsilon}) := \frac{\sum_{m=\lceil \eta a\rceil}^{\lfloor \eta b\rfloor} f\!\left(\frac{m}{\eta}\right) F(\eta\bar{\upsilon} - m)}{\sum_{m=\lceil \eta a\rceil}^{\lfloor \eta b\rfloor} F(\eta\bar{\upsilon} - m)} = \frac{\sum_{m=\lceil \eta a\rceil}^{\lfloor \eta b\rfloor} f\!\left(\frac{m}{\eta}\right)\left[\Phi_{\hat{t},\xi}(\eta\bar{\upsilon} - m) + \Phi_{\frac{1}{\hat{t}},\xi}(\eta\bar{\upsilon} - m)\right]}{\sum_{m=\lceil \eta a\rceil}^{\lfloor \eta b\rfloor}\left[\Phi_{\hat{t},\xi}(\eta\bar{\upsilon} - m) + \Phi_{\frac{1}{\hat{t}},\xi}(\eta\bar{\upsilon} - m)\right]}, \tag{32}$$
for arbitrary $f \in C([a,b], Y)$ and $\eta \in \mathbb{N}$ such that $\lceil \eta a\rceil \le \lfloor \eta b\rfloor$.
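Definition 1 can be turned into code directly. The sketch below is an illustration of ours (with $Y = \mathbb{R}$; the function names, the test function $\sin$, and the evaluation point are our choices), and it shows the error shrinking as $\eta$ grows:

```python
import numpy as np

def k(v, t, xi):
    # activation function (1)
    return (1 - t * np.exp(-2 * xi * v)) / (1 + t * np.exp(-2 * xi * v))

def phi(v, t, xi):
    # density function (2)
    return 0.25 * (k(v + 1, t, xi) - k(v - 1, t, xi))

def F(v, t, xi):
    # symmetrized density (6)
    return 0.5 * (phi(v, t, xi) + phi(v, 1.0 / t, xi))

def L_snn(f, v, eta, a, b, t=0.25, xi=0.5):
    # symmetrized NN operator of Definition 1: a normalized weighted
    # average of the samples f(m/eta), m = ceil(eta*a), ..., floor(eta*b)
    m = np.arange(np.ceil(eta * a), np.floor(eta * b) + 1)
    w = F(eta * v - m, t, xi)
    return np.sum(f(m / eta) * w) / np.sum(w)

a, b = -np.pi, np.pi
errs = [abs(L_snn(np.sin, 1.0, eta, a, b) - np.sin(1.0))
        for eta in (5, 20, 200)]
assert errs[2] < errs[0]     # the error decreases as eta increases
assert errs[2] < 1e-3
```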
Definition 2 
([39]). (i) We employ the universal "modulus of continuity", defined as below for $f \in C([a,b], Y)$:
$$\omega_1(f,\delta) := \sup_{\substack{\bar{\upsilon}_1, \bar{\upsilon}_2 \in [a,b] \\ |\bar{\upsilon}_1 - \bar{\upsilon}_2| \le \delta}} \|f(\bar{\upsilon}_1) - f(\bar{\upsilon}_2)\|, \tag{33}$$
such that $\delta > 0$.
(ii) (33) is also valid for any $f \in C_{uB}(\mathbb{R}, Y)$, $C_B(\mathbb{R}, Y)$, or $C_u(\mathbb{R}, Y)$, where
  • $C_{uB}(\mathbb{R}, Y) := \{f \mid f : \mathbb{R} \to Y,\ \text{uniformly continuous and bounded}\}$,
  • $C_B(\mathbb{R}, Y) := \{f \mid f : \mathbb{R} \to Y,\ \text{continuous and bounded}\}$, and
  • $C_u(\mathbb{R}, Y) := \{f \mid f : \mathbb{R} \to Y,\ \text{uniformly continuous}\}$, respectively.
(iii) For $f \in C_u(\mathbb{R}, Y)$ or $f \in C([a,b], Y)$, we have $\lim_{\delta \to 0} \omega_1(f,\delta) = 0$.
Definition 3. 
The "Y-valued quasi-interpolation SNN operator" is defined as
$$\bar{L}_\eta^s(f,\bar{\upsilon}) := \sum_{m=-\infty}^{\infty} f\!\left(\frac{m}{\eta}\right) F(\eta\bar{\upsilon} - m), \tag{34}$$
for $f \in C_{uB}(\mathbb{R}, Y)$ or $f \in C_B(\mathbb{R}, Y)$, such that $\eta \in \mathbb{N}$, $\bar{\upsilon} \in \mathbb{R}$.
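The quasi-interpolation operator differs from Definition 1 only in that it needs no normalization (the partition of unity takes care of it) and sums over all of $\mathbb{Z}$. A minimal sketch of ours, truncating the infinite sum at a radius `M` of our choosing (harmless, since F decays exponentially):

```python
import numpy as np

def k(v, t, xi):
    return (1 - t * np.exp(-2 * xi * v)) / (1 + t * np.exp(-2 * xi * v))

def phi(v, t, xi):
    return 0.25 * (k(v + 1, t, xi) - k(v - 1, t, xi))

def F(v, t, xi):
    return 0.5 * (phi(v, t, xi) + phi(v, 1.0 / t, xi))

def L_snn_bar(f, v, eta, t=0.25, xi=0.5, M=200):
    # quasi-interpolation SNN operator (Definition 3); the sum over all
    # integers m is truncated to |m - round(eta*v)| <= M
    m0 = int(np.round(eta * v))
    m = np.arange(m0 - M, m0 + M + 1)
    return np.sum(f(m / eta) * F(eta * v - m, t, xi))

# f(x) = cos(x) is continuous and bounded on R, so Definition 3 applies
err10 = abs(L_snn_bar(np.cos, 0.5, 10) - np.cos(0.5))
err100 = abs(L_snn_bar(np.cos, 0.5, 100) - np.cos(0.5))
assert err100 < err10     # finer eta gives a closer approximation
```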

3. Pointwise and Uniform Approximations by SNN Operators

We present a set of quantitative Y-valued symmetrized neural network (SNN) approximations to a given function.
Theorem 3. 
Let $f \in C([a,b], Y)$, $0 < \varsigma < 1$, $\eta \in \mathbb{N} : \eta^{1-\varsigma} > 2$, $\hat{t}, \xi > 0$, $\hat{t} \ne 1$, $\bar{\upsilon} \in [a,b]$. Then,
(i)
$$\left\|L_\eta^s(f,\bar{\upsilon}) - f(\bar{\upsilon})\right\| \le \bar{A}_{\hat{t}}\left[\omega_1\!\left(f, \frac{1}{\eta^{\varsigma}}\right) + 2\|f\|_\infty\, \widehat{W}\, e^{-2\xi\eta^{1-\varsigma}}\right] =: \hat{\varrho}, \tag{35}$$
where $\widehat{W}$ is as in (12),
and
(ii)
$$\left\|L_\eta^s f - f\right\|_\infty \le \hat{\varrho}. \tag{36}$$
We get that $\lim_{\eta\to\infty} L_\eta^s f = f$, pointwise and uniformly.
Proof. 
Please see Appendix B. □
Next, we give
Theorem 4. 
Let $f \in C_B(\mathbb{R}, Y)$, $0 < \varsigma < 1$, $\hat{t} > 0$, $\hat{t} \ne 1$, $\eta \in \mathbb{N} : \eta^{1-\varsigma} > 2$, $\bar{\upsilon} \in \mathbb{R}$. Then,
(i)
$$\left\|\bar{L}_\eta^s(f,\bar{\upsilon}) - f(\bar{\upsilon})\right\| \le \omega_1\!\left(f, \frac{1}{\eta^{\varsigma}}\right) + 2\|f\|_\infty\, \widehat{W}\, e^{-2\xi\eta^{1-\varsigma}} =: \varrho, \tag{37}$$
and
(ii)
$$\left\|\bar{L}_\eta^s f - f\right\|_\infty \le \varrho. \tag{38}$$
For $f \in C_{uB}(\mathbb{R}, Y)$, we get $\lim_{\eta\to\infty} \bar{L}_\eta^s f = f$, pointwise and uniformly.
Proof. 
Please see Appendix C. □
A high-order approximation follows.
Theorem 5. 
Let $f \in C^N([a,b], Y)$, $\eta, N \in \mathbb{N}$, $\hat{t} > 0$, $\hat{t} \ne 1$, $0 < \varsigma < 1$, $\bar{\upsilon} \in [a,b]$, and $\eta^{1-\varsigma} > 2$. Then,
(i)
$$\left\|L_\eta^s(f,\bar{\upsilon}) - f(\bar{\upsilon})\right\| \le \bar{A}_{\hat{t}}\Bigg\{\sum_{\hat{j}=1}^{N} \frac{\|f^{(\hat{j})}(\bar{\upsilon})\|}{\hat{j}!}\left[\frac{1}{\eta^{\varsigma\hat{j}}} + (b-a)^{\hat{j}}\, \widehat{W}\, e^{-2\xi\eta^{1-\varsigma}}\right] + \omega_1\!\left(f^{(N)}, \frac{1}{\eta^{\varsigma}}\right)\frac{1}{\eta^{\varsigma N}\, N!} + \frac{2\|f^{(N)}\|_\infty (b-a)^N}{N!}\, \widehat{W}\, e^{-2\xi\eta^{1-\varsigma}}\Bigg\}, \tag{39}$$
(ii) assume $f^{(\hat{j})}(\bar{\upsilon}_0) = 0$, $\hat{j} = 1, \ldots, N$, for some $\bar{\upsilon}_0 \in [a,b]$; thus,
$$\left\|L_\eta^s(f,\bar{\upsilon}_0) - f(\bar{\upsilon}_0)\right\| \le \bar{A}_{\hat{t}}\left[\omega_1\!\left(f^{(N)}, \frac{1}{\eta^{\varsigma}}\right)\frac{1}{\eta^{\varsigma N}\, N!} + \frac{2\|f^{(N)}\|_\infty (b-a)^N}{N!}\, \widehat{W}\, e^{-2\xi\eta^{1-\varsigma}}\right], \tag{40}$$
and
(iii)
$$\left\|L_\eta^s f - f\right\|_\infty \le \bar{A}_{\hat{t}}\Bigg\{\sum_{\hat{j}=1}^{N} \frac{\|f^{(\hat{j})}\|_\infty}{\hat{j}!}\left[\frac{1}{\eta^{\varsigma\hat{j}}} + (b-a)^{\hat{j}}\, \widehat{W}\, e^{-2\xi\eta^{1-\varsigma}}\right] + \omega_1\!\left(f^{(N)}, \frac{1}{\eta^{\varsigma}}\right)\frac{1}{\eta^{\varsigma N}\, N!} + \frac{2\|f^{(N)}\|_\infty (b-a)^N}{N!}\, \widehat{W}\, e^{-2\xi\eta^{1-\varsigma}}\Bigg\}. \tag{41}$$
Again, we obtain $\lim_{\eta\to\infty} L_\eta^s f = f$, pointwise and uniformly.
Proof. 
It is long-winded, and similar to [37]. We would like to bring it to the attention of interested readers. □
All integrals from now on are of Bochner-type.
Definition 4 
([40]). Let $[a,b] \subset \mathbb{R}$, $Y$ be a Banach space, $\varsigma > 0$; $\varkappa = \lceil\varsigma\rceil$, $\varsigma \notin \mathbb{N}$; $f : [a,b] \to Y$ with $f^{(\varkappa)} \in L_1([a,b], Y)$. We call the Caputo-Bochner left fractional derivative of order ς:
$$\left(D_{*a}^{\varsigma} f\right)(\bar{\upsilon}) := \frac{1}{\Gamma(\varkappa - \varsigma)} \int_a^{\bar{\upsilon}} (\bar{\upsilon} - t)^{\varkappa - \varsigma - 1} f^{(\varkappa)}(t)\, dt, \tag{42}$$
for all $\bar{\upsilon} \in [a,b]$.
Definition 5 
([40]). Let $\varsigma > 0$, $\hat{t} > 0$, $\hat{t} \ne 1$, $[a,b] \subset \mathbb{R}$, $Y$ be a Banach space, $\varkappa := \lceil\varsigma\rceil$. We assume that $f^{(\varkappa)} \in L_1([a,b], Y)$, where $f : [a,b] \to Y$. We call the Caputo-Bochner right fractional derivative of order ς:
$$\left(D_{b-}^{\varsigma} f\right)(\bar{\upsilon}) := \frac{(-1)^{\varkappa}}{\Gamma(\varkappa - \varsigma)} \int_{\bar{\upsilon}}^{b} (z - \bar{\upsilon})^{\varkappa - \varsigma - 1} f^{(\varkappa)}(z)\, dz, \tag{43}$$
for every $\bar{\upsilon} \in [a,b]$.
A Y-valued fractional approximation follows:
Theorem 6. 
Let $\varsigma > 0$, $\hat{t} > 0$, $\hat{t} \ne 1$, $N = \lceil\varsigma\rceil$, $\varsigma \notin \mathbb{N}$, $f \in C^N([a,b], Y)$, $0 < \tau < 1$, $\bar{\upsilon} \in [a,b]$, $\eta \in \mathbb{N} : \eta^{1-\tau} > 2$. Then, for $\bar{\upsilon} \in [a,b]$,
(i)
$$\left\|L_\eta^s(f,\bar{\upsilon}) - \sum_{\hat{j}=1}^{N-1} \frac{f^{(\hat{j})}(\bar{\upsilon})}{\hat{j}!}\, L_\eta^s\!\left((\cdot - \bar{\upsilon})^{\hat{j}}, \bar{\upsilon}\right) - f(\bar{\upsilon})\right\| \le \frac{\bar{A}_{\hat{t}}}{\Gamma(\varsigma+1)}\Bigg\{\frac{\omega_1\!\left(D_{\bar{\upsilon}-}^{\varsigma} f, \frac{1}{\eta^{\tau}}\right)_{[a,\bar{\upsilon}]} + \omega_1\!\left(D_{*\bar{\upsilon}}^{\varsigma} f, \frac{1}{\eta^{\tau}}\right)_{[\bar{\upsilon},b]}}{\eta^{\varsigma\tau}} + \widehat{W}\, e^{-2\xi\eta^{1-\tau}}\left[\left\|D_{\bar{\upsilon}-}^{\varsigma} f\right\|_{\infty,[a,\bar{\upsilon}]} (\bar{\upsilon} - a)^{\varsigma} + \left\|D_{*\bar{\upsilon}}^{\varsigma} f\right\|_{\infty,[\bar{\upsilon},b]} (b - \bar{\upsilon})^{\varsigma}\right]\Bigg\}, \tag{44}$$
(ii) if $f^{(\hat{j})}(\bar{\upsilon}) = 0$, for $\hat{j} = 1, \ldots, N-1$, we have
$$\left\|L_\eta^s(f,\bar{\upsilon}) - f(\bar{\upsilon})\right\| \le \frac{\bar{A}_{\hat{t}}}{\Gamma(\varsigma+1)}\Bigg\{\frac{\omega_1\!\left(D_{\bar{\upsilon}-}^{\varsigma} f, \frac{1}{\eta^{\tau}}\right)_{[a,\bar{\upsilon}]} + \omega_1\!\left(D_{*\bar{\upsilon}}^{\varsigma} f, \frac{1}{\eta^{\tau}}\right)_{[\bar{\upsilon},b]}}{\eta^{\varsigma\tau}} + \widehat{W}\, e^{-2\xi\eta^{1-\tau}}\left[\left\|D_{\bar{\upsilon}-}^{\varsigma} f\right\|_{\infty,[a,\bar{\upsilon}]} (\bar{\upsilon} - a)^{\varsigma} + \left\|D_{*\bar{\upsilon}}^{\varsigma} f\right\|_{\infty,[\bar{\upsilon},b]} (b - \bar{\upsilon})^{\varsigma}\right]\Bigg\}, \tag{45}$$
(iii)
$$\left\|L_\eta^s(f,\bar{\upsilon}) - f(\bar{\upsilon})\right\| \le \bar{A}_{\hat{t}}\Bigg\{\sum_{\hat{j}=1}^{N-1} \frac{\|f^{(\hat{j})}(\bar{\upsilon})\|}{\hat{j}!}\left[\frac{1}{\eta^{\tau\hat{j}}} + (b-a)^{\hat{j}}\, \widehat{W}\, e^{-2\xi\eta^{1-\tau}}\right] + \frac{1}{\Gamma(\varsigma+1)}\Bigg[\frac{\omega_1\!\left(D_{\bar{\upsilon}-}^{\varsigma} f, \frac{1}{\eta^{\tau}}\right)_{[a,\bar{\upsilon}]} + \omega_1\!\left(D_{*\bar{\upsilon}}^{\varsigma} f, \frac{1}{\eta^{\tau}}\right)_{[\bar{\upsilon},b]}}{\eta^{\varsigma\tau}} + \widehat{W}\, e^{-2\xi\eta^{1-\tau}}\left(\left\|D_{\bar{\upsilon}-}^{\varsigma} f\right\|_{\infty,[a,\bar{\upsilon}]} (\bar{\upsilon} - a)^{\varsigma} + \left\|D_{*\bar{\upsilon}}^{\varsigma} f\right\|_{\infty,[\bar{\upsilon},b]} (b - \bar{\upsilon})^{\varsigma}\right)\Bigg]\Bigg\}, \tag{46}$$
and
(iv)
$$\left\|L_\eta^s f - f\right\|_\infty \le \bar{A}_{\hat{t}}\Bigg\{\sum_{\hat{j}=1}^{N-1} \frac{\|f^{(\hat{j})}\|_\infty}{\hat{j}!}\left[\frac{1}{\eta^{\tau\hat{j}}} + (b-a)^{\hat{j}}\, \widehat{W}\, e^{-2\xi\eta^{1-\tau}}\right] + \frac{1}{\Gamma(\varsigma+1)}\Bigg[\frac{\sup_{\bar{\upsilon}\in[a,b]}\omega_1\!\left(D_{\bar{\upsilon}-}^{\varsigma} f, \frac{1}{\eta^{\tau}}\right)_{[a,\bar{\upsilon}]} + \sup_{\bar{\upsilon}\in[a,b]}\omega_1\!\left(D_{*\bar{\upsilon}}^{\varsigma} f, \frac{1}{\eta^{\tau}}\right)_{[\bar{\upsilon},b]}}{\eta^{\varsigma\tau}} + \widehat{W}\, e^{-2\xi\eta^{1-\tau}} (b-a)^{\varsigma}\left(\sup_{\bar{\upsilon}\in[a,b]}\left\|D_{\bar{\upsilon}-}^{\varsigma} f\right\|_{\infty,[a,\bar{\upsilon}]} + \sup_{\bar{\upsilon}\in[a,b]}\left\|D_{*\bar{\upsilon}}^{\varsigma} f\right\|_{\infty,[\bar{\upsilon},b]}\right)\Bigg]\Bigg\}. \tag{47}$$
Above, when $N = 1$, the sum $\sum_{\hat{j}=1}^{N-1}(\cdot) = 0$.
We have established the Y-valued fractional approximation, pointwise and uniformly with rates, of $L_\eta^s \to I$, the unit operator, as $\eta \to \infty$.
Proof. 
The proof is long and similar to [11], so it is saved for the interested reader. □
Next, we apply Theorem 6 for N = 1 .
Corollary 1. 
Let $0 < \varsigma, \tau < 1$, $\hat{t} > 0$, $\hat{t} \ne 1$, $f \in C^1([a,b], Y)$, $\bar{\upsilon} \in [a,b]$, $\eta \in \mathbb{N} : \eta^{1-\tau} > 2$. Then,
(i)
$$\left\|L_\eta^s(f,\bar{\upsilon}) - f(\bar{\upsilon})\right\| \le \frac{\bar{A}_{\hat{t}}}{\Gamma(\varsigma+1)}\Bigg\{\frac{\omega_1\!\left(D_{\bar{\upsilon}-}^{\varsigma} f, \frac{1}{\eta^{\tau}}\right)_{[a,\bar{\upsilon}]} + \omega_1\!\left(D_{*\bar{\upsilon}}^{\varsigma} f, \frac{1}{\eta^{\tau}}\right)_{[\bar{\upsilon},b]}}{\eta^{\varsigma\tau}} + \widehat{W}\, e^{-2\xi\eta^{1-\tau}}\left[\left\|D_{\bar{\upsilon}-}^{\varsigma} f\right\|_{\infty,[a,\bar{\upsilon}]} (\bar{\upsilon} - a)^{\varsigma} + \left\|D_{*\bar{\upsilon}}^{\varsigma} f\right\|_{\infty,[\bar{\upsilon},b]} (b - \bar{\upsilon})^{\varsigma}\right]\Bigg\}, \tag{48}$$
and
(ii)
$$\left\|L_\eta^s f - f\right\|_\infty \le \frac{\bar{A}_{\hat{t}}}{\Gamma(\varsigma+1)}\Bigg\{\frac{\sup_{\bar{\upsilon}\in[a,b]}\omega_1\!\left(D_{\bar{\upsilon}-}^{\varsigma} f, \frac{1}{\eta^{\tau}}\right)_{[a,\bar{\upsilon}]} + \sup_{\bar{\upsilon}\in[a,b]}\omega_1\!\left(D_{*\bar{\upsilon}}^{\varsigma} f, \frac{1}{\eta^{\tau}}\right)_{[\bar{\upsilon},b]}}{\eta^{\varsigma\tau}} + (b-a)^{\varsigma}\, \widehat{W}\, e^{-2\xi\eta^{1-\tau}}\left(\sup_{\bar{\upsilon}\in[a,b]}\left\|D_{\bar{\upsilon}-}^{\varsigma} f\right\|_{\infty,[a,\bar{\upsilon}]} + \sup_{\bar{\upsilon}\in[a,b]}\left\|D_{*\bar{\upsilon}}^{\varsigma} f\right\|_{\infty,[\bar{\upsilon},b]}\right)\Bigg\}. \tag{49}$$
When ς = 1 2 , we derive
Corollary 2. 
Let $0 < \tau < 1$, $\hat{t} > 0$, $\hat{t} \ne 1$, $f \in C^1([a,b], Y)$, $\bar{\upsilon} \in [a,b]$, $\eta \in \mathbb{N} : \eta^{1-\tau} > 2$. Then,
(i)
$$\left\|L_\eta^s(f,\bar{\upsilon}) - f(\bar{\upsilon})\right\| \le \frac{2\bar{A}_{\hat{t}}}{\sqrt{\pi}}\Bigg\{\frac{\omega_1\!\left(D_{\bar{\upsilon}-}^{1/2} f, \frac{1}{\eta^{\tau}}\right)_{[a,\bar{\upsilon}]} + \omega_1\!\left(D_{*\bar{\upsilon}}^{1/2} f, \frac{1}{\eta^{\tau}}\right)_{[\bar{\upsilon},b]}}{\eta^{\tau/2}} + \widehat{W}\, e^{-2\xi\eta^{1-\tau}}\left[\left\|D_{\bar{\upsilon}-}^{1/2} f\right\|_{\infty,[a,\bar{\upsilon}]} \sqrt{\bar{\upsilon} - a} + \left\|D_{*\bar{\upsilon}}^{1/2} f\right\|_{\infty,[\bar{\upsilon},b]} \sqrt{b - \bar{\upsilon}}\right]\Bigg\}, \tag{50}$$
and
(ii)
$$\left\|L_\eta^s f - f\right\|_\infty \le \frac{2\bar{A}_{\hat{t}}}{\sqrt{\pi}}\Bigg\{\frac{\sup_{\bar{\upsilon}\in[a,b]}\omega_1\!\left(D_{\bar{\upsilon}-}^{1/2} f, \frac{1}{\eta^{\tau}}\right)_{[a,\bar{\upsilon}]} + \sup_{\bar{\upsilon}\in[a,b]}\omega_1\!\left(D_{*\bar{\upsilon}}^{1/2} f, \frac{1}{\eta^{\tau}}\right)_{[\bar{\upsilon},b]}}{\eta^{\tau/2}} + \sqrt{b-a}\; \widehat{W}\, e^{-2\xi\eta^{1-\tau}}\left(\sup_{\bar{\upsilon}\in[a,b]}\left\|D_{\bar{\upsilon}-}^{1/2} f\right\|_{\infty,[a,\bar{\upsilon}]} + \sup_{\bar{\upsilon}\in[a,b]}\left\|D_{*\bar{\upsilon}}^{1/2} f\right\|_{\infty,[\bar{\upsilon},b]}\right)\Bigg\} < \infty. \tag{51}$$

4. Numerical Approach: Classical Neural Network (CNN) Operators Versus Symmetrized Neural Network (SNN) Operators

To the best of our knowledge, the approximation results and convergence speeds obtained with operators derived from the symmetrized density function within the theoretical framework of this article differ from those of other studies in the literature (for instance, [24,26,27]).
In other words, one of the gaps we noticed in the approximation theory literature was the need to improve the convergence results of ANN operators. The elegant "symmetrization technique" presented by Anastassiou helped to close this gap. Below, we present the numerical examples, graphical results, and convergence speeds obtained for different parameters using the tools introduced above.
The "Difference" between the measures RHS and LHS, introduced in detail below, almost closes after a certain value of $\eta$ and with appropriate choices of the parameters $\alpha, \hat{t}, \xi > 0$.
The fact that this difference gets very close to zero, which is what is desired in approximation theory, happens faster with SNN operators than with CNN operators; this makes the present work interesting and points to our main contribution to the literature.
Three different test functions are used in the six examples given below. Each test function is approximated first with CNN operators and then with SNN operators; from the "Difference" measurements computed using Python 3.9 (the bolded values at the bottom of the six tables), we see that the SNN operators give smaller "Difference" values than the CNNs. In addition, we support these concrete numerical results with six graphical visuals.
Notation 1 
([37]). Note that classical NN operators are defined, for $\bar{\upsilon} \in [a,b]$, $\hat{t} > 0$, $\hat{t} \ne 1$, as follows:
$$L_\eta(f,\bar{\upsilon}) := \frac{\sum_{m=\lceil \eta a\rceil}^{\lfloor \eta b\rfloor} f\!\left(\frac{m}{\eta}\right) \Phi_{\hat{t},\xi}(\eta\bar{\upsilon} - m)}{\sum_{m=\lceil \eta a\rceil}^{\lfloor \eta b\rfloor} \Phi_{\hat{t},\xi}(\eta\bar{\upsilon} - m)}, \tag{52}$$
such that $f \in C([a,b], Y)$ and $\eta \in \mathbb{N} : \lceil \eta a\rceil \le \lfloor \eta b\rfloor$.
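For the numerical comparison, the classical operator (52) can be sketched in the same style. Again, this is our illustrative code (with $Y = \mathbb{R}$, and $\hat{t} = \frac{1}{4}$, $\xi = \frac{1}{2}$ as in the examples below):

```python
import numpy as np

def k(v, t, xi):
    return (1 - t * np.exp(-2 * xi * v)) / (1 + t * np.exp(-2 * xi * v))

def phi(v, t, xi):
    # non-symmetrized density (2): the weight used by the classical operator
    return 0.25 * (k(v + 1, t, xi) - k(v - 1, t, xi))

def L_cnn(f, v, eta, a, b, t=0.25, xi=0.5):
    # classical NN operator (52): same normalized form as (32), but with
    # the asymmetric density phi in place of the symmetrized F
    m = np.arange(np.ceil(eta * a), np.floor(eta * b) + 1)
    w = phi(eta * v - m, t, xi)
    return np.sum(f(m / eta) * w) / np.sum(w)

err = abs(L_cnn(np.sin, 1.0, 200, -np.pi, np.pi) - np.sin(1.0))
assert err < 0.05
```

Note the design point the comparison hinges on: $\Phi_{\hat{t},\xi}$ is not centered at the origin for $\hat{t} \ne 1$, so the classical operator carries a first-order shift bias that the symmetrized operator cancels.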
Notation 2. 
Remember that CNN operators are defined, for $\bar{\upsilon} \in [a,b]$, $\hat{t} > 0$, $\hat{t} \ne 1$, as in (52). Furthermore, inspired by Theorem 3, let us introduce the concepts of "Right-Hand Side (RHS)" and "Left-Hand Side (LHS)" of the pointwise-convergence estimate in (53), defined as follows. For any $\alpha > 0$,
$$\left|L_\eta(f,\bar{\upsilon}) - f(\bar{\upsilon})\right| \le \frac{1}{\eta^{\alpha}}, \tag{53}$$
  • where
  • Right-Hand Side (RHS) $:= \frac{1}{\eta^{\alpha}}$,
  • Left-Hand Side (LHS) $:= \left|L_\eta(f,\bar{\upsilon}) - f(\bar{\upsilon})\right|$,
  • and,
  • Difference $:= \frac{1}{\eta^{\alpha}} - \left|L_\eta(f,\bar{\upsilon}) - f(\bar{\upsilon})\right| =$ RHS $-$ LHS.
The three estimates, Difference, RHS, and LHS, shall serve as criteria for interpreting the speed of convergence of both classical NN (CNN) operators and symmetrized NN (SNN) operators, and for creating the convergence-speed tables using the Python 3.9 programming language.
Example 1. 
For the activation function defined in (1), $\hat{t} = \frac{1}{4}$, $\xi = \frac{1}{2}$, the density function in (2), $\bar{\upsilon} \in [a,b] = [-\pi, \pi]$, $\eta = 5, 10, 20, 50, 200$, and the operators $L_\eta(f,\bar{\upsilon})$ defined in (52), the convergence of the operators $L_5(f,\bar{\upsilon})$ (blue), $L_{10}(f,\bar{\upsilon})$ (red), $L_{20}(f,\bar{\upsilon})$ (fuchsia), $L_{50}(f,\bar{\upsilon})$ (aqua), and $L_{200}(f,\bar{\upsilon})$ (black) to $f(x) = \sin x$ (gold) is displayed in Figure 1.
The convergence speeds of the CNN operators to the test function f x = sin x under the regime of certain parameters are given below in Table 1.
Here, for $\eta = 5, 10, 20, 50, 200, 300$: RHS $:= \frac{1}{\eta^{\alpha}}$ with $\alpha = \frac{1}{2}$; LHS $:= \left|L_\eta(f,\bar{\upsilon}) - f(\bar{\upsilon})\right|$; Difference := RHS $-$ LHS.
Result: We observe that the Difference value decreases as $\eta$ increases from 5 to 300, which shows us that the approximation is promising beyond a certain value of $\eta$.
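A Table-1-style run can be reproduced with a short, illustrative script of ours. The evaluation point $\bar{\upsilon} = 1.0$ is our own assumption (the paper's tables may use a different point or aggregation), so the printed values are indicative rather than a reproduction of Table 1:

```python
import numpy as np

def k(v, t, xi):
    return (1 - t * np.exp(-2 * xi * v)) / (1 + t * np.exp(-2 * xi * v))

def phi(v, t, xi):
    return 0.25 * (k(v + 1, t, xi) - k(v - 1, t, xi))

def L_cnn(f, v, eta, a, b, t=0.25, xi=0.5):
    # classical NN operator (52)
    m = np.arange(np.ceil(eta * a), np.floor(eta * b) + 1)
    w = phi(eta * v - m, t, xi)
    return np.sum(f(m / eta) * w) / np.sum(w)

alpha, v, a, b = 0.5, 1.0, -np.pi, np.pi
rows = []
for eta in (5, 10, 20, 50, 200, 300):
    rhs = 1.0 / eta ** alpha                       # RHS of (53)
    lhs = abs(L_cnn(np.sin, v, eta, a, b) - np.sin(v))   # LHS of (53)
    rows.append((eta, rhs, lhs, rhs - lhs))
    print(f"eta={eta:4d}  RHS={rhs:.6f}  LHS={lhs:.6f}  Diff={rhs - lhs:.6f}")

# the Difference stays positive and shrinks as eta grows
assert all(r[3] > 0 for r in rows)
assert rows[-1][3] < rows[0][3]
```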
Example 2. 
For the activation function defined in (1); $\hat{t} = \frac{1}{4}$, $\xi = \frac{1}{2}$, $\alpha = 0.5, 0.9, 1$, the density function in (6), $\bar{\upsilon} \in [a,b] = [-\pi, \pi]$, $\eta = 5, 10, 20, 50, 200$, and the "symmetrized" operators $L_\eta^s(f,\bar{\upsilon})$ defined in (32), the convergence of the operators $L_5^s(f,\bar{\upsilon})$ (blue), $L_{10}^s(f,\bar{\upsilon})$ (red), $L_{20}^s(f,\bar{\upsilon})$ (fuchsia), $L_{50}^s(f,\bar{\upsilon})$ (aqua), and $L_{200}^s(f,\bar{\upsilon})$ (black) to $f(x) = \sin x$ (gold) is displayed in Figure 2.
As seen in Figure 2 and Table 2 below, the approximations by the SNN operators clearly give better results than those by the CNN operators. We observe this with both the numerical results and the graphical visuals; in other words, they agree perfectly.
The convergence speeds of the SNN operators to the test function f x = sin x under the regime of certain parameters are given below in Table 2.
Example 3. 
For the activation function defined in (1), $\hat{t} = \frac{1}{4}$, $\xi = \frac{1}{2}$, $\alpha = 0.5$, the density function in (2), $\bar{\upsilon} \in [a,b] = [-\pi, \pi]$, $\eta = 5, 10, 20, 50, 200$, and the operators $L_\eta(f,\bar{\upsilon})$ defined in (52), the convergence of the operators $L_5(f,\bar{\upsilon})$ (blue), $L_{10}(f,\bar{\upsilon})$ (red), $L_{20}(f,\bar{\upsilon})$ (fuchsia), $L_{50}(f,\bar{\upsilon})$ (aqua), and $L_{200}(f,\bar{\upsilon})$ (black) to $f(x) = \sin(2\pi x)$ (gold) is displayed in Figure 3.
The convergence speeds of the CNN operators to the test function f x = sin ( 2 π x ) under the regime of certain parameters are given below in Table 3.
Example 4. 
For the activation function defined in (1), $\hat{t} = \frac{1}{4}$, $\xi = \frac{1}{2}$, $\alpha = 0.5, 0.9, 1$, the density function in (6), $\bar{\upsilon} \in [a,b] = [-\pi, \pi]$; $\eta = 5, 10, 20, 50, 200$, and the "symmetrized" operators $L_\eta^s(f,\bar{\upsilon})$ defined in (32), the convergence of the operators $L_5^s(f,\bar{\upsilon})$ (blue), $L_{10}^s(f,\bar{\upsilon})$ (red), $L_{20}^s(f,\bar{\upsilon})$ (fuchsia), $L_{50}^s(f,\bar{\upsilon})$ (aqua), and $L_{200}^s(f,\bar{\upsilon})$ (black) to $f(x) = \sin(2\pi x)$ (gold) is displayed in Figure 4.
The convergence speeds of the SNN operators to the test function f x = sin ( 2 π x ) under the regime of certain parameters are given below in Table 4.
Example 5. 
For the activation function defined in (1), $\hat{t} = \frac{1}{4}$, $\xi = \frac{1}{2}$, $\alpha = 0.5$, the density function in (2), $\bar{\upsilon} \in [a,b] = [-\pi, \pi]$, $\eta = 5, 10, 20, 50, 200$, and the operators $L_\eta(f,\bar{\upsilon})$ defined in (52), the convergence of the operators $L_5(f,\bar{\upsilon})$ (blue), $L_{10}(f,\bar{\upsilon})$ (red), $L_{20}(f,\bar{\upsilon})$ (fuchsia), $L_{50}(f,\bar{\upsilon})$ (aqua), and $L_{200}(f,\bar{\upsilon})$ (black) to $f(x) = \exp(x)$ (gold) is displayed in Figure 5.
The convergence speeds of the CNN operators to the test function f x = exp ( x ) under the regime of certain parameters are given below in Table 5.
Example 6. 
For the activation function defined in (1); $\hat{t} = \frac{1}{4}$, $\xi = \frac{1}{2}$, $\alpha = 0.5, 0.9, 1$, the density function in (6), $\bar{\upsilon} \in [a,b] = [-\pi, \pi]$, $\eta = 5, 10, 20, 50, 200$, and the "symmetrized" operators $L_\eta^s(f,\bar{\upsilon})$ defined in (32), the convergence of the operators $L_5^s(f,\bar{\upsilon})$ (blue), $L_{10}^s(f,\bar{\upsilon})$ (red), $L_{20}^s(f,\bar{\upsilon})$ (fuchsia), $L_{50}^s(f,\bar{\upsilon})$ (aqua), and $L_{200}^s(f,\bar{\upsilon})$ (black) to $f(x) = \exp(x)$ (gold) is displayed in Figure 6.
The convergence speeds of the SNN operators to the test function f x = exp ( x ) under the regime of certain parameters are given below in Table 6.
As a result of all the numerical examples, we conclude that the Difference value decreases as $\eta$ increases from 5 to 300, which shows us that the approximation is promising beyond a certain value of $\eta$, especially for the symmetrized versions of the NN operators.
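The head-to-head comparison that the six tables summarize can be condensed into a short, illustrative script of ours (single sample point $\bar{\upsilon} = 1.0$ and $f(x) = \sin x$; both choices are our assumptions):

```python
import numpy as np

def k(v, t, xi):
    return (1 - t * np.exp(-2 * xi * v)) / (1 + t * np.exp(-2 * xi * v))

def phi(v, t, xi):
    return 0.25 * (k(v + 1, t, xi) - k(v - 1, t, xi))

def F(v, t, xi):
    return 0.5 * (phi(v, t, xi) + phi(v, 1.0 / t, xi))

def L_op(f, v, eta, a, b, density, t=0.25, xi=0.5):
    # shared normalized form of (32) and (52);
    # `density` is phi (CNN operator) or F (SNN operator)
    m = np.arange(np.ceil(eta * a), np.floor(eta * b) + 1)
    w = density(eta * v - m, t, xi)
    return np.sum(f(m / eta) * w) / np.sum(w)

v, a, b = 1.0, -np.pi, np.pi
for eta in (50, 200):
    lhs_cnn = abs(L_op(np.sin, v, eta, a, b, phi) - np.sin(v))
    lhs_snn = abs(L_op(np.sin, v, eta, a, b, F) - np.sin(v))
    # at the same eta, the symmetrized operator is markedly closer to f
    assert lhs_snn < lhs_cnn
```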

5. Conclusions and Future Remarks

We believe that this study provides a two-pronged approach as a contribution to the literature. In one aspect, the theoretical background of symmetric univariate neural network operators has been emphasized once again. In another aspect, it has been shown with concrete examples and graphs how these symmetrized neural network operators (SNNs) achieve better approximation results under certain parameter choices compared with the classical ones.
Thanks to the major role that artificial neural networks (ANNs) and computational methods play in the advancement of scientific research, it is of great importance to society that any new development in this field be supported by mathematical consistency and robustness.
Furthermore, we have presented mathematical verification and comparison using the open-source and flexible programming language Python 3.9, to provide evidence of the superiority of the symmetrized structure. Examining other studies in the literature, we can say that a consensus has been reached on the effectiveness and game-changing role of symmetric structures in approximation theory. In addition, our findings underline the effectiveness of symmetric structures in neural network architectures and their potential for profound transformations in machine learning outputs.
In a nutshell, we verified the basic theorems of ANN operators in approximation theory with numerical examples, compared classical ANN operators with their symmetrized, accelerated counterparts, and showed that the symmetric structure, in particular, yields superior results.
We anticipate that this theoretical background will guide the latest applications of future artificial intelligence (AI) subfields such as "Geometric Deep Learning", "Graph Learning", "Graph Neural Networks", and "Quantum Neural Networks", as well as the theory of machine learning (ML), applied analysis, and computational mathematics.
Lastly, there is undoubtedly still much to be done, in a symmetrized manner, with other types of activation functions, such as half-hyperbolic tangent functions, sigmoidal functions, and ReLU. In addition, Appendix A shares the Python 3.9 code of the numerical applications: the approximation speeds of the CNN and SNN operators to certain functions and their 2D visualizations, constructed using the Python libraries NumPy 1.21.0, Matplotlib 3.8, and math.

Author Contributions

Conceptualization, G.A.A., S.K. and M.Z.; methodology and validation, G.A.A., S.K. and M.Z.; investigation, G.A.A., S.K. and M.Z.; resources and writing—original draft preparation, G.A.A., S.K. and M.Z.; writing—review and editing and visualization, G.A.A., S.K. and M.Z.; supervision, G.A.A.; project administration, G.A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors would like to thank the reviewers who generously shared their time and opinions.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Python Codes

The code snippets below show only sample structures for the test function f(x) = sin(2πx). Note that by changing the test function in the following Python 3.9 code, different approximation speeds can be calculated and their graphs drawn easily.
Figure A1 illustrates the Python 3.9 code snippet for the classical ANN convergence calculation. Since the convergence is pointwise, the Python math library is used to compute the error between the original function f(x) = sin(2πx) and the approximating operator L_η given in (52). The "calculate_difference" function, defined in Line 28, originates from Theorem 3: the difference between the two functions is calculated based on the t̂ and ξ parameters, as explained in Figure A2. Starting from Line 27, the differences are calculated for various parameter values to form Table 3 in Section 4.
Figure A1. Code snippet of CNN operator convergence for f(x) = sin(2πx).
Figure A2. Code snippet of CNN operator convergence for f(x) = sin(2πx) (continuation of Figure A1).
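The structure just described (the paper's actual snippet appears as images in Figures A1 and A2) can be sketched roughly as follows. The kernel below is a hypothetical stand-in: the paper's operator L_η in (52) is built from the activation (1) and density (6) defined in the body of the paper, so the `density` helper, its normalization, and the sample point x = 0.3 here are illustrative assumptions only.

```python
import math

def density(u, xi=0.5):
    # Hypothetical stand-in kernel: an even, exponentially decaying weight.
    # The paper's actual density (6) is built from the activation in (1).
    return 0.5 * xi * math.exp(-xi * abs(u))

def L(f, x, eta, a=-math.pi, b=math.pi):
    # Classical NN-type operator: weighted average of the samples f(m/eta)
    # with weights centered at eta*x, for m in [ceil(eta*a), floor(eta*b)].
    ms = range(math.ceil(eta * a), math.floor(eta * b) + 1)
    num = sum(f(m / eta) * density(eta * x - m) for m in ms)
    den = sum(density(eta * x - m) for m in ms)
    return num / den

def calculate_difference(f, x, eta):
    # Pointwise error |L_eta(f, x) - f(x)|, playing the role of the
    # "LHS" column of Table 3.
    return abs(L(f, x, eta) - f(x))

f = lambda t: math.sin(2 * math.pi * t)
errors = {eta: calculate_difference(f, 0.3, eta)
          for eta in (5, 10, 20, 50, 200, 300)}
```

As in the tables, the pointwise error shrinks as η grows, since the weight concentrates around x.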
Figure A3 shows the SNN convergence Python 3.9 code snippet; it differs from the code in Figures A1 and A2 in that the SNN operator L^s, corresponding to (32), is used in Line 20 of Figure A3. The convergence speeds of the symmetrized operators, produced by the code in Figures A3 and A4, are given in Table 4.
Figure A3. Code snippet of symmetrized convergence for f(x) = sin(2πx).
Figure A4. Code snippet of symmetrized convergence for f(x) = sin(2πx) (continuation of Figure A3).
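One way to see what distinguishes the symmetrized operator from the classical one is the sketch below. It uses a common symmetrization device, averaging a kernel with its reflection so the weight becomes an even function; the asymmetric base kernel, the shift t̂, and this particular symmetrization are illustrative assumptions, not the paper's exact construction (32).

```python
import math

def base_density(u, xi=0.5, t_hat=0.25):
    # Hypothetical asymmetric kernel (NOT the paper's density (6)):
    # an exponential weight shifted by t_hat.
    return 0.5 * xi * math.exp(-xi * abs(u - t_hat))

def sym_density(u):
    # Symmetrization: average the kernel with its reflection, so the
    # resulting weight is an even function of u.
    return 0.5 * (base_density(u) + base_density(-u))

def L_s(f, x, eta, a=-math.pi, b=math.pi):
    # SNN-type operator built on the symmetrized weight.
    ms = range(math.ceil(eta * a), math.floor(eta * b) + 1)
    num = sum(f(m / eta) * sym_density(eta * x - m) for m in ms)
    den = sum(sym_density(eta * x - m) for m in ms)
    return num / den

f = lambda t: math.sin(2 * math.pi * t)
err_5 = abs(L_s(f, 0.3, 5) - f(0.3))
err_200 = abs(L_s(f, 0.3, 200) - f(0.3))
```

The evenness of the weight removes the systematic bias an asymmetric kernel introduces, which is the qualitative effect seen in Tables 2, 4, and 6.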
Figure A5 presents the Python code for plotting the test function f(x) = sin(2πx) together with the CNN operators related to Figure 3 and Table 3. In Line 2 the NumPy Python library is imported because the plotting is carried out vector-based. In Line 30 the "plot_L" function is defined, in which 1000 y-values are produced from 1000 points between a = −π and b = π. The plotting result obtained by the code in Figures A5 and A6 is shown in Figure 3, with a different color for each function to make the effect of the parameter values clear.
Figure A5. Code snippet of 2D plotting of the CNN operators to the function f(x) = sin(2πx) with various parameter values.
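The vectorized evaluation that "plot_L" performs can be sketched as below, again with a hypothetical even stand-in kernel in place of the paper's density (6); the function name `plot_L_values` and the kernel are assumptions of this sketch. With Matplotlib, each returned curve would then be drawn in its own color, as in Figure 3.

```python
import numpy as np

def density(u, xi=0.5):
    # Hypothetical even stand-in for the paper's density (6).
    return 0.5 * xi * np.exp(-xi * np.abs(u))

def plot_L_values(f, eta, a=-np.pi, b=np.pi, n_points=1000):
    # 1000 y-values from 1000 x-points between a and b, computed
    # vectorized: K[i, j] weights sample j at grid point i.
    x = np.linspace(a, b, n_points)
    m = np.arange(np.ceil(eta * a), np.floor(eta * b) + 1)
    K = density(eta * x[:, None] - m[None, :])
    y = (K * f(m / eta)[None, :]).sum(axis=1) / K.sum(axis=1)
    return x, y

f = lambda t: np.sin(2 * np.pi * t)
x, y_200 = plot_L_values(f, eta=200)
# e.g. with Matplotlib: plt.plot(x, f(x), "gold"); plt.plot(x, y_200, "k")
```

Broadcasting the (1000, 1) grid against the (1, M) sample indices replaces the pointwise double loop, which is why NumPy is preferred for the plotting code.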
Comparing the plots in Figure 3 and Figure 4, the SNN operator L^s gives better results than the classical one. Moreover, the convergence speeds in Table 3 and Table 4 illustrate that the SNN operator L^s outperforms the CNN operator L.
Figure A7 shows the Python 3.9 code for the symmetrized convergence plot; it differs from the code in Figures A5 and A6 in that the symmetrized operator L^s, corresponding to (32) above, is used in Line 23 of Figure A7. The plotting result obtained by the code in Figures A7 and A8 is shown in Figure 4, with a different color for each function.
Figure A6. Code snippet of 2D plotting of the CNN operators to the function f(x) = sin(2πx) with various parameter values (continuation of Figure A5).
Figure A7. Code snippet of 2D plotting of the SNN operators to the test function f(x) = sin(2πx) with various parameter values.
Figure A8. Code snippet of 2D plotting of the SNN operators to the test function f(x) = sin(2πx) with various parameter values (continuation of Figure A7).

Appendix B. Theorem 3

Proof. 
We observe that
$$
\left| \sum_{m=\lceil \eta a \rceil}^{\lfloor \eta b \rfloor} \left( f\!\left(\frac{m}{\eta}\right) - f(\bar{\upsilon}) \right) F(\eta \bar{\upsilon} - m) \right|
\le \sum_{m=\lceil \eta a \rceil}^{\lfloor \eta b \rfloor} \left| f\!\left(\frac{m}{\eta}\right) - f(\bar{\upsilon}) \right| F(\eta \bar{\upsilon} - m)
$$
$$
= \sum_{\substack{m=\lceil \eta a \rceil \\ \left| \frac{m}{\eta} - \bar{\upsilon} \right| \le \frac{1}{\eta^{\varsigma}}}}^{\lfloor \eta b \rfloor} \left| f\!\left(\frac{m}{\eta}\right) - f(\bar{\upsilon}) \right| F(\eta \bar{\upsilon} - m)
+ \sum_{\substack{m=\lceil \eta a \rceil \\ \left| \frac{m}{\eta} - \bar{\upsilon} \right| > \frac{1}{\eta^{\varsigma}}}}^{\lfloor \eta b \rfloor} \left| f\!\left(\frac{m}{\eta}\right) - f(\bar{\upsilon}) \right| F(\eta \bar{\upsilon} - m)
$$
$$
\le \sum_{\substack{m=\lceil \eta a \rceil \\ \left| \frac{m}{\eta} - \bar{\upsilon} \right| \le \frac{1}{\eta^{\varsigma}}}}^{\lfloor \eta b \rfloor} \omega_1\!\left(f, \left| \frac{m}{\eta} - \bar{\upsilon} \right| \right) F(\eta \bar{\upsilon} - m)
+ 2\,\|f\|_{\infty} \sum_{\substack{m=\lceil \eta a \rceil \\ \left| m - \eta \bar{\upsilon} \right| > \eta^{1-\varsigma}}}^{\lfloor \eta b \rfloor} F(\eta \bar{\upsilon} - m)
$$
$$
\le \omega_1\!\left(f, \frac{1}{\eta^{\varsigma}}\right) \sum_{\substack{m=-\infty \\ \left| \frac{m}{\eta} - \bar{\upsilon} \right| \le \frac{1}{\eta^{\varsigma}}}}^{\infty} F(\eta \bar{\upsilon} - m)
+ 2\,\|f\|_{\infty} \sum_{\substack{m=-\infty \\ \left| m - \eta \bar{\upsilon} \right| > \eta^{1-\varsigma}}}^{\infty} F(\eta \bar{\upsilon} - m)
$$
$$
\le \omega_1\!\left(f, \frac{1}{\eta^{\varsigma}}\right) + 2\,\|f\|_{\infty}\, \widehat{W}\, e^{-2 \xi \eta^{1-\varsigma}}.
$$
That is,
$$
\sum_{m=\lceil \eta a \rceil}^{\lfloor \eta b \rfloor} \left| f\!\left(\frac{m}{\eta}\right) - f(\bar{\upsilon}) \right| F(\eta \bar{\upsilon} - m)
\le \omega_1\!\left(f, \frac{1}{\eta^{\varsigma}}\right) + 2\,\|f\|_{\infty}\, \widehat{W}\, e^{-2 \xi \eta^{1-\varsigma}}.
$$
Using the last inequality, we derive (37). □

Appendix C. Theorem 4

Proof. 
One has that, writing $F = F_{\hat{t},\xi}$ for the density determined by the parameters $\hat{t}$ and $\xi$, and using $\sum_{m=-\infty}^{\infty} F(\eta \bar{\upsilon} - m) = 1$,
$$
\left| \bar{L}^{s}_{\eta}(f, \bar{\upsilon}) - f(\bar{\upsilon}) \right|
= \left| \sum_{m=-\infty}^{\infty} f\!\left(\frac{m}{\eta}\right) F(\eta \bar{\upsilon} - m) - f(\bar{\upsilon}) \sum_{m=-\infty}^{\infty} F(\eta \bar{\upsilon} - m) \right|
$$
$$
= \left| \sum_{m=-\infty}^{\infty} \left( f\!\left(\frac{m}{\eta}\right) - f(\bar{\upsilon}) \right) F(\eta \bar{\upsilon} - m) \right|
\le \sum_{m=-\infty}^{\infty} \left| f\!\left(\frac{m}{\eta}\right) - f(\bar{\upsilon}) \right| F(\eta \bar{\upsilon} - m)
$$
$$
= \sum_{\substack{m=-\infty \\ \left| \frac{m}{\eta} - \bar{\upsilon} \right| \le \frac{1}{\eta^{\varsigma}}}}^{\infty} \left| f\!\left(\frac{m}{\eta}\right) - f(\bar{\upsilon}) \right| F(\eta \bar{\upsilon} - m)
+ \sum_{\substack{m=-\infty \\ \left| \frac{m}{\eta} - \bar{\upsilon} \right| > \frac{1}{\eta^{\varsigma}}}}^{\infty} \left| f\!\left(\frac{m}{\eta}\right) - f(\bar{\upsilon}) \right| F(\eta \bar{\upsilon} - m)
$$
$$
\le \sum_{\substack{m=-\infty \\ \left| \frac{m}{\eta} - \bar{\upsilon} \right| \le \frac{1}{\eta^{\varsigma}}}}^{\infty} \omega_1\!\left(f, \left| \frac{m}{\eta} - \bar{\upsilon} \right| \right) F(\eta \bar{\upsilon} - m)
+ 2\,\|f\|_{\infty} \sum_{\substack{m=-\infty \\ \left| \frac{m}{\eta} - \bar{\upsilon} \right| > \frac{1}{\eta^{\varsigma}}}}^{\infty} F(\eta \bar{\upsilon} - m)
$$
$$
\le \omega_1\!\left(f, \frac{1}{\eta^{\varsigma}}\right) \sum_{\substack{m=-\infty \\ \left| \frac{m}{\eta} - \bar{\upsilon} \right| \le \frac{1}{\eta^{\varsigma}}}}^{\infty} F(\eta \bar{\upsilon} - m) + 2\,\|f\|_{\infty}\, \widehat{W}\, e^{-2 \xi \eta^{1-\varsigma}}
\le \omega_1\!\left(f, \frac{1}{\eta^{\varsigma}}\right) + 2\,\|f\|_{\infty}\, \widehat{W}\, e^{-2 \xi \eta^{1-\varsigma}}.
$$
The claim has been proven. □

References

  1. Chen, Z.; Cao, F. The approximation operators with sigmoidal functions. Comput. Math. Appl. 2009, 58, 758–765. [Google Scholar] [CrossRef]
  2. Haykin, S. Neural Networks: A Comprehensive Foundation, 2nd ed.; Prentice Hall: New York, NY, USA, 1998. [Google Scholar]
  3. McCulloch, W.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 1943, 7, 115–133. [Google Scholar] [CrossRef]
  4. Mitchell, T.M. Machine Learning; WCB-McGraw-Hill: New York, NY, USA, 1997. [Google Scholar]
  5. Karateke, S. Some mathematical properties of flexible hyperbolic tangent activation function with application to deep neural networks. Rev. Real Acad. Cienc. Exactas Fis. Nat. Ser. A Mat. 2025, 119, 42. [Google Scholar] [CrossRef]
  6. Mattheakis, M.; Protopapas, P.; Sondak, D.; Di Giovanni, M.; Kaxiras, E. Physical symmetries embedded in neural networks. arXiv 2019, arXiv:1904.08991. [Google Scholar]
  7. Faroughi, S.A.; Pawar, N.M.; Fernandes, C.; Raissi, M.; Das, S.; Kalantari, N.K.; Kourosh Mahjour, S. Physics-guided, physics-informed, and physics-encoded neural networks and operators in scientific computing: Fluid and solid mechanics. J. Comput. Inf. Sci. Eng. 2024, 24, 040802. [Google Scholar] [CrossRef]
  8. Lu, L.; Jin, P.; Pang, G.; Zhang, Z.; Karniadakis, G.E. Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nat. Mach. Intell. 2021, 3, 218–229. [Google Scholar] [CrossRef]
  9. Goswami, S.; Bora, A.; Yu, Y.; Karniadakis, G.E. Physics-Informed Deep Neural Operator Networks. In Machine Learning in Modeling and Simulation; Computational Methods in Engineering & the Sciences; Springer: Cham, Switzerland, 2023. [Google Scholar] [CrossRef]
  10. Dong, Z.; Comajoan Cara, M.; Dahale, G.R.; Forestano, R.T.; Gleyzer, S.; Justice, D.; Kong, K.; Magorsch, T.; Matchev, K.T.; Matcheva, K.; et al. ℤ2 × ℤ2 Equivariant Quantum Neural Networks: Benchmarking against Classical Neural Networks. Axioms 2024, 13, 188. [Google Scholar] [CrossRef]
  11. Anastassiou, G.A. Multivariate Smooth Symmetrized and Perturbed Hyperbolic Tangent Neural Network Approximation over Infinite Domains. Mathematics 2024, 12, 3777. [Google Scholar] [CrossRef]
  12. Tahmasebi, B.; Jegelka, S. The exact sample complexity gain from invariances for kernel regression. Adv. Neural Inf. Process. Syst. 2023, 36, 1–31. [Google Scholar]
  13. Na, H.S.; Park, Y. Symmetric neural networks and its examples. In Proceedings of the 1992 IJCNN International Joint Conference on Neural Networks, Baltimore, MD, USA, 7–11 June 1992; IEEE: Piscataway, NJ, USA, 1992; Volume 1, pp. 413–418. [Google Scholar]
  14. Zweig, A. Theory of Symmetric Neural Networks. Ph.D. Dissertation, Order No. 30694251; ProQuest Dissertations & Theses Global, 2024. Available online: https://www.proquest.com/dissertations-theses/theory-symmetric-neural-networks/docview/2937439642/se-2 (accessed on 29 May 2025).
  15. Hecht-Nielsen, R. On the algebraic structure of feedforward network weight spaces. In Advanced Neural Computers; Elsevier: Amsterdam, The Netherlands, 1990; pp. 129–135. [Google Scholar]
  16. Albertini, F.; Sontag, E.D.; Maillot, V. Uniqueness of weights for neural networks. In Artificial Neural Networks for Speech and Vision; Chapman and Hall: London, UK, 1993; pp. 113–125. [Google Scholar]
  17. Laurent, O.; Aldea, E.; Franchi, G. A symmetry-aware exploration of bayesian neural network posteriors. arXiv 2023, arXiv:2310.08287. [Google Scholar]
  18. Vlačić, V.; Bölcskei, H. Affine symmetries and neural network identifiability. Adv. Math. 2021, 376, 107485. [Google Scholar] [CrossRef]
  19. Fefferman, C. Reconstructing a neural net from its output. Rev. Mat. Iberoam. 1994, 10, 507–556. [Google Scholar] [CrossRef]
  20. Perin, A.; Deny, S. On the Ability of Deep Networks to Learn Symmetries from Data: A Neural Kernel Theory. arXiv 2024, arXiv:2412.11521. [Google Scholar]
  21. Hutter, M. On representing (anti) symmetric functions. arXiv 2020, arXiv:2007.15298. [Google Scholar]
  22. Available online: https://news.mit.edu/2024/how-symmetry-can-aid-machine-learning-0205 (accessed on 11 February 2025).
  23. Tao, T. Machine-assisted proof. Not. Am. Math. Soc. 2025, 72, 6–13. [Google Scholar] [CrossRef]
  24. Cantarini, M.; Costarelli, D. Simultaneous approximation by neural network operators with applications to Voronovskaja formulas. Math. Nachrichten 2025, 298, 871–885. [Google Scholar] [CrossRef]
  25. Costarelli, D. Density results by deep neural network operators with integer weights. Math. Mod. Anal. 2022, 27, 547–560. [Google Scholar] [CrossRef]
  26. Costarelli, D.; Spigler, R. Approximation results for neural network operators activated by sigmoidal functions. Neural Netw. 2013, 44, 101–106. [Google Scholar] [CrossRef]
  27. Turkun, C.; Duman, O. Modified neural network operators and their convergence properties with summability methods. Rev. Real Acad. Cienc. Exactas Fis. Nat. Ser. A Mat. 2020, 114, 132. [Google Scholar] [CrossRef]
  28. Hu, S.X.; Zagoruyko, S.; Komodakis, N. Exploring weight symmetry in deep neural networks. Comput. Vis. Image Underst. 2019, 187, 102786. [Google Scholar] [CrossRef]
  29. Han, X.; Yang, Y.; Chen, J.; Wang, M.; Zhou, M. Symmetry-Aware Credit Risk Modeling: A Deep Learning Framework Exploiting Financial Data Balance and Invariance. Symmetry 2025, 17, 341. [Google Scholar] [CrossRef]
  30. Peleshchak, R.M.; Lytvyn, V.V.; Nazarkevych, M.A.; Peleshchak, I.R.; Nazarkevych, H.Y. Influence of the Symmetry Neural Network Morphology on the Mine Detection Metric. Symmetry 2024, 16, 485. [Google Scholar] [CrossRef]
  31. Karateke, S.; Zontul, M.; Mishra, V.N.; Gairola, A.R. On the Approximation by Stancu-Type Bivariate Jakimovski–Leviatan–Durrmeyer Operators. Matematica 2024, 3, 211–233. [Google Scholar] [CrossRef]
  32. Ravasi, M.; Vasconcelos, I. PyLops—A linear-operator Python library for scalable algebra and optimization. SoftwareX 2020, 11, 100361. [Google Scholar] [CrossRef]
  33. Virtanen, P.; Gommers, R.; Oliphant, T.E.; Haberland, M.; Reddy, T.; Cournapeau, D.; Burovski, E.; Peterson, P.; Weckesser, W.; Bright, J.; et al. SciPy 1.0: Fundamental algorithms for scientific computing in Python. Nat. Methods 2020, 17, 261–272. [Google Scholar] [CrossRef]
  34. Meurer, A.; Smith, C.P.; Paprocki, M.; Certík, O.; Kirpichev, S.B.; Rocklin, M.; Kumar, A.; Ivanov, S.; Moore, J.K.; Singh, S.; et al. SymPy: Symbolic computing in Python. PeerJ Comput. Sci. 2017, 3, e103. [Google Scholar] [CrossRef]
  35. Ince, R.A.A.; Petersen, R.S.; Swan, D.C.; Panzeri, S. Python for information theoretic analysis of neural data. Front. Neuroinform. 2009, 3, 4. [Google Scholar] [CrossRef]
  36. Anastassiou, G.A.; Karateke, S. Parametrized hyperbolic tangent induced Banach space valued ordinary and fractional neural network approximation. Progr. Fract. Differ. Appl. 2023, 9, 597–621. [Google Scholar]
  37. Anastassiou, G.A. Banach Space Valued Ordinary and Fractional Neural Network Approximation Based on q-Deformed and λ-Parametrized Half Hyperbolic Tangent. In Parametrized, Deformed and General Neural Networks; Studies in Computational Intelligence; Springer: Cham, Switzerland, 2023; Volume 1116. [Google Scholar]
  38. El-Shehawy, S.A.; Abdel-Salam, E.A.-B. The q-deformed hyperbolic Secant family. Int. J. Appl. Math. Stat. 2012, 29, 51–62. [Google Scholar]
  39. Anastassiou, G.A. Quantitative Approximations; Chapman & Hall/CRC: Boca Raton, FL, USA; New York, NY, USA, 2001. [Google Scholar]
  40. Mikusinski, J. The Bochner Integral; Academic Press: New York, NY, USA, 1978. [Google Scholar]
Figure 1. Approximation by CNN operators to the test function f(x) = sin(x).
Figure 2. Approximation by SNN operators to the test function f(x) = sin(x).
Figure 3. Approximation by CNN operators to the test function f(x) = sin(2πx).
Figure 4. Approximation by SNN operators to the test function f(x) = sin(2πx).
Figure 5. Approximation by CNN operators to the test function f(x) = exp(x).
Figure 6. Approximation by SNN operators to the test function f(x) = exp(x).
Table 1. Convergence speeds for CNN operators with different parameter values.

α     η     Right Hand Side (RHS)   Left Hand Side (LHS)   Difference
0.5   5     0.4472135955            0.106196835925         0.341016759575
0.5   10    0.316227766017          0.076643050878         0.239584715139
0.5   20    0.22360679775           0.043869327366         0.179737470384
0.5   50    0.141421356237          0.018804729784         0.122616626454
0.5   200   0.070710678119          0.004852019867         0.065858658252
0.5   300   0.057735026919          0.003245667233         0.054489359686
Table 2. Convergence speeds for SNN operators with different parameter values.

α     η     Right Hand Side (RHS)   Left Hand Side (LHS)   Difference
0.5   5     0.4472135955            0.073997483748         0.373216111752
0.5   10    0.316227766017          0.019319978172         0.296907787845
0.5   20    0.22360679775           0.004883204134         0.218723593616
0.5   50    0.141421356237          0.000783723296         0.140637632941
0.5   200   0.070710678119          4.9009701 × 10⁻⁵       0.070661668417
0.5   300   0.057735026919          2.1782474 × 10⁻⁵       0.057713244445
0.9   5     0.234923788618          0.073997483748         0.16092630487
0.9   10    0.125892541179          0.019319978172         0.106572563008
0.9   20    0.067464142384          0.004883204134         0.06258093825
0.9   50    0.029575152733          0.000783723296         0.028791429436
0.9   200   0.008493232323          4.9009701 × 10⁻⁵       0.008444222622
0.9   300   0.005896453402          2.1782474 × 10⁻⁵       0.005874670927
1     5     0.2                     0.073997483748         0.126002516252
1     10    0.1                     0.019319978172         0.080680021828
1     20    0.05                    0.004883204134         0.045116795866
1     50    0.02                    0.000783723296         0.019216276704
1     200   0.005                   4.9009701 × 10⁻⁵       0.004950990299
1     300   0.003333333333          2.1782474 × 10⁻⁵       0.003311550859
Table 3. Convergence speeds for CNN operators with different parameter values for f(x) = sin(2πx).

α     η     Right Hand Side (RHS)   Left Hand Side (LHS)   Difference
0.5   5     0.4472135955            1.019590673876         0.572377078376
0.5   10    0.316227766017          0.735035145942         0.418807379925
0.5   20    0.22360679775           0.310398556399         0.086791758649
0.5   50    0.141421356237          0.078890538643         0.062530817594
0.5   200   0.070710678119          0.012251715923         0.058458962196
0.5   300   0.057735026919          0.007583994866         0.050151032053
Table 4. Convergence speeds for SNN operators with different parameter values.

α     η     Right Hand Side (RHS)   Left Hand Side (LHS)   Difference
0.5   5     0.4472135955            0.99452974412          0.54731614862
0.5   10    0.316227766017          0.646761489603         0.330533723586
0.5   20    0.22360679775           0.232191624821         0.008584827071
0.5   50    0.141421356237          0.041730703041         0.099690653196
0.5   200   0.070710678119          0.002665088735         0.068045589384
0.5   300   0.057735026919          0.001185437546         0.056549589373
0.9   5     0.234923788618          0.99452974412          0.759605955503
0.9   10    0.125892541179          0.646761489603         0.520868948423
0.9   20    0.067464142384          0.232191624821         0.164727482437
0.9   50    0.029575152733          0.041730703041         0.012155550308
0.9   200   0.008493232323          0.002665088735         0.005828143588
0.9   300   0.005896453402          0.001185437546         0.004711015856
1     5     0.2                     0.99452974412          0.79452974412
1     10    0.1                     0.646761489603         0.546761489603
1     20    0.05                    0.232191624821         0.182191624821
1     50    0.02                    0.041730703041         0.021730703041
1     200   0.005                   0.002665088735         0.002334911265
1     300   0.003333333333          0.001185437546         0.002147895788
Table 5. Convergence speeds to exp(x) by CNN operators.

α     η     Right Hand Side (RHS)   Left Hand Side (LHS)   Difference
0.5   5     0.4472135955            0.919943701463         0.472730105963
0.5   10    0.316227766017          0.372333139108         0.056105373091
0.5   20    0.22360679775           0.168097690311         0.055509107439
0.5   50    0.141421356237          0.06329630096          0.078125055277
0.5   200   0.070710678119          0.015355496873         0.055355181246
0.5   300   0.057735026919          0.01020291246          0.047532114459
Table 6. Convergence speeds to exp(x) by SNN operators.

α     η     Right Hand Side (RHS)   Left Hand Side (LHS)   Difference
0.5   5     0.4472135955            0.257627908469         0.189585687031
0.5   10    0.316227766017          0.06171214644          0.254515619577
0.5   20    0.22360679775           0.01525818134          0.20834861641
0.5   50    0.141421356237          0.002433781759         0.138987574479
0.5   200   0.070710678119          0.000152027539         0.070558650579
0.5   300   0.057735026919          6.7566605 × 10⁻⁵       0.057667460314
0.9   5     0.234923788618          0.257627908469         0.022704119852
0.9   10    0.125892541179          0.06171214644          0.064180394739
0.9   20    0.067464142384          0.01525818134          0.052205961044
0.9   50    0.029575152733          0.002433781759         0.027141370974
0.9   200   0.008493232323          0.000152027539         0.008341204784
0.9   300   0.005896453402          6.7566605 × 10⁻⁵       0.005828886796
1     5     0.2                     0.257627908469         0.057627908469
1     10    0.1                     0.06171214644          0.03828785356
1     20    0.05                    0.01525818134          0.03474181866
1     50    0.02                    0.002433781759         0.017566218241
1     200   0.005                   0.000152027539         0.004847972461
1     300   0.003333333333          6.7566605 × 10⁻⁵       0.003265766728
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Anastassiou, G.A.; Karateke, S.; Zontul, M. Univariate Neural Network Quantitative (NNQ) Approximation by Symmetrized Operators. Fractal Fract. 2025, 9, 365. https://doi.org/10.3390/fractalfract9060365