Article

Generalized Logistic Neural Networks in Positive Linear Framework

by
George A. Anastassiou
Department of Mathematical Sciences, University of Memphis, Memphis, TN 38152, USA
Symmetry 2025, 17(5), 746; https://doi.org/10.3390/sym17050746
Submission received: 7 April 2025 / Revised: 1 May 2025 / Accepted: 5 May 2025 / Published: 13 May 2025
(This article belongs to the Special Issue Symmetry and Asymmetry in Nonlinear Partial Differential Equations)

Abstract

Essential neural-network operators are interpreted as positive linear operators, and the related general theory applies to them. These operators are induced by a symmetrized density function derived from the parametrized and deformed A-generalized logistic activation function. We act on the space of continuous functions from a compact interval of the real line to the reals. We quantitatively study the rate of convergence of these neural-network operators to the unit operator. Our inequalities involve the modulus of continuity of the function under approximation or of its derivative. We produce uniform and $L_p$, $p \ge 1$, approximation results via these inequalities. The convexity of functions is also used to derive more refined results.

1. Introduction

The author has extensively studied the quantitative approximation of positive linear operators to the unit since 1985; see, for example, [1,2,3,4], which are used in this work. He started from the quantitative weak convergence of finite positive measures to the Dirac unit measure, with geometric moment theory as a method (see [2]), and he produced the best upper bounds, leading to sharp Jackson-type inequalities; e.g., see [1,2]. These studies have gone in all possible directions, univariate and multivariate, though in this work, we focus only on the univariate approach.
Since 1997, the author has been studying the quantitative convergence of neural network operators to the unit and has written numerous articles and books, e.g., see [3,4,5], which are used here.
The neural-network operators used by the author are, by their very nature, positive linear operators.
Here, the author continues his study of treating neural-network operators as positive linear operators. This is a totally new approach in the literature; see [6].
Different activation functions allow for different non-linearities, which might work better for solving a specific problem. So, the need to use neural networks with various activation functions is pressing. Thus, performing neural-network approximations using different activation functions is not only necessary but fully justified. Furthermore, brain asymmetry has been observed in animals and humans in terms of structure, function, and behavior. This lateralization is thought to reflect evolutionary, hereditary, developmental, experiential, and pathological factors. Consequently, for our study, it is natural to consider deformed neural-network activation functions and operators. Thus, this work is a specific study under this philosophy of approaching reality as closely as possible. The author is currently working on the long project of connecting neural-network operators to positive linear operator theory. Some of the most popular activation functions derive from the generalized logistic function, on which this article is based, and the hyperbolic tangent, on which [6] is based. The two cases require different calculations and produce different results of great interest, with important applications in the author's other papers.
All methods of positive linear operators apply here to our summation-defined neural-network operators, producing new and interesting results of pointwise, uniform, and $L_p$ ($p \ge 1$) kinds. Via the Riesz representation theorem, neural networks are connected to measure theory. The convexity of functions produces optimal and sharper results.
The use of the A-generalized logistic function as an activation function is well established and supported in ([5], Chapter 16). The A-generalized logistic activation function behaves well and is one of the most commonly used activation functions in this subject.
The author’s symmetrization method presented here aims for our operators to converge at high speed to the unit operator using a half-feed of data.
This article establishes a bridge connecting neural-network approximation (part of AI) to positive linear operators (a significant part of functional analysis).
So, it is a theoretical work aimed mainly at related experts.
The authors of [7,8] have, as always, been a great inspiration. For classic studies on neural networks, we also recommend [9,10,11,12,13,14].
For newer work on neural networks, we also refer the reader to [15,16,17,18,19,20,21,22,23,24]. For recent studies in positive linear operators, we refer the reader to [6,25,26,27,28,29,30].
For other general related work, please read [31,32,33,34,35,36,37,38], with their justifications to follow:
Justification of [31]: Provides analogous weighted and pointwise error estimates for Kantorovich operators, enriching comparison with our symmetrized kernel bounds.
Justification of [32]: Develops Bézier–Kantorovich constructions using wavelet techniques, which is parallel to our symmetrization approach in handling deformed kernels.
Justification of [33]: Introduces a two-parameter Stancu deformation framework analogous to our A-generalized logistic deformation.
Justification of [34]: Explores lacunary sequences and invariant means underlying averaging arguments similar to our symmetrization sums.
Justification of [35]: Applies Appell-polynomial enhancement to beta-function kernels, directly relating to our beta-based logistic symmetrization.
Justification of [36]: Combines Stancu deformation with Bézier bases, mirroring our combination of deformation and symmetrization.
Justification of [37]: Presents modified Bernstein polynomials via basis-function techniques, offering comparable convergence comparisons.
Justification of [38]: Integrates q-statistical methods with wavelet-aided kernels, paralleling our use of q-deformation and symmetrized densities.

2. Basics

Here, we follow ([5], pp. 395–417).
The activation function used here is the $q$-deformed and $\lambda$-parametrized function
$$\varphi_{q,\lambda}(x) = \frac{1}{1 + q\,A^{-\lambda x}}, \quad x \in \mathbb{R},\ q, \lambda > 0,\ A > 1.$$
This is the A-generalized logistic function.
For more, read Chapter 16 of [5]: “Banach space valued ordinary and fractional neural-network approximation based on q-deformed and λ-parametrized A-generalized logistic function”.
This chapter motivates our current work.
The proposed “symmetrization technique” aims to use a half-data feed to our neural networks.
We will employ the following density function:
$$G_{q,\lambda}(x) := \frac{1}{2}\left[\varphi_{q,\lambda}(x+1) - \varphi_{q,\lambda}(x-1)\right], \quad x \in \mathbb{R},\ q, \lambda > 0.$$
We have
$$G_{q,\lambda}(-x) = G_{\frac{1}{q},\lambda}(x),$$
and
$$G_{\frac{1}{q},\lambda}(-x) = G_{q,\lambda}(x), \quad \forall\, x \in \mathbb{R}.$$
Adding (3) and (4), we obtain
$$G_{q,\lambda}(-x) + G_{\frac{1}{q},\lambda}(-x) = G_{q,\lambda}(x) + G_{\frac{1}{q},\lambda}(x), \quad \forall\, x \in \mathbb{R},$$
which is the key to this work.
Therefore,
$$W(x) := \frac{G_{q,\lambda}(x) + G_{\frac{1}{q},\lambda}(x)}{2}$$
is an even function, symmetric with respect to the y-axis.
The global maximum of $G_{q,\lambda}$ is given by (16.18), p. 401 of [5] as
$$G_{q,\lambda}\!\left(\frac{\log_A q}{\lambda}\right) = \frac{A^{\lambda} - 1}{2\left(A^{\lambda} + 1\right)}.$$
In addition, the global maximum of $G_{\frac{1}{q},\lambda}$ is
$$G_{\frac{1}{q},\lambda}\!\left(\frac{\log_A \frac{1}{q}}{\lambda}\right) = G_{\frac{1}{q},\lambda}\!\left(-\frac{\log_A q}{\lambda}\right) = \frac{A^{\lambda} - 1}{2\left(A^{\lambda} + 1\right)};$$
both functions share the same maximum value, attained at symmetric points.
By Theorem 16.1, p. 401 of [5], we have
$$\sum_{i=-\infty}^{\infty} G_{q,\lambda}(x - i) = 1, \quad \forall\, x \in \mathbb{R};\ \lambda, q > 0;\ A > 1,$$
and
$$\sum_{i=-\infty}^{\infty} G_{\frac{1}{q},\lambda}(x - i) = 1, \quad \forall\, x \in \mathbb{R};\ \lambda, q > 0;\ A > 1.$$
Consequently, we derive that
$$\sum_{i=-\infty}^{\infty} W(x - i) = 1, \quad \forall\, x \in \mathbb{R}.$$
By Theorem 16.2, p. 402 of [5], we have
$$\int_{-\infty}^{\infty} G_{q,\lambda}(x)\,dx = 1, \quad \lambda, q > 0,\ A > 1;$$
similarly, it holds that
$$\int_{-\infty}^{\infty} G_{\frac{1}{q},\lambda}(x)\,dx = 1,$$
so that
$$\int_{-\infty}^{\infty} W(x)\,dx = 1,$$
and therefore, $W$ is a density function.
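The three facts above are easy to check numerically. The following minimal Python sketch (an illustration added here, not part of the original text; the choices q = 2, λ = 1, A = e are assumed test parameters) verifies that W is even, that the translates of W sum to 1, and that W integrates to 1:

```python
# Minimal numerical sketch (illustrative only): checks the evenness of W,
# the partition of unity sum_i W(x - i) = 1, and the integral of W over R,
# for the assumed test parameters q = 2, lambda = 1, A = e.
import math
import numpy as np

q, lam, A = 2.0, 1.0, math.e

def phi(x, qq):
    # q-deformed, lambda-parametrized A-generalized logistic function
    return 1.0 / (1.0 + qq * A ** (-lam * x))

def G(x, qq):
    # density generated by the activation function
    return 0.5 * (phi(x + 1, qq) - phi(x - 1, qq))

def W(x):
    # symmetrized density W(x) = (G_{q,lam}(x) + G_{1/q,lam}(x)) / 2
    return 0.5 * (G(x, q) + G(x, 1.0 / q))

x = 0.37                                                  # arbitrary test point
print("evenness:       ", abs(W(x) - W(-x)))              # ~ 0
i = np.arange(-200, 201)
print("sum_i W(x - i): ", W(x - i).sum())                 # ~ 1
t = np.linspace(-60.0, 60.0, 1_200_001)
print("integral of W:  ", (W(t) * (t[1] - t[0])).sum())   # ~ 1 (Riemann sum)
```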
By Theorem 16.3, p. 402 of [5], we have the following:
Let $0 < \alpha < 1$ and $n \in \mathbb{N}$ with $n^{1-\alpha} > 2$. Then,
$$\sum_{\substack{k=-\infty \\ |nx - k| \ge n^{1-\alpha}}}^{\infty} G_{q,\lambda}(nx - k) < 2\max\!\left(q, \frac{1}{q}\right)\frac{1}{A^{\lambda\left(n^{1-\alpha} - 2\right)}} = \frac{\gamma}{A^{\lambda\left(n^{1-\alpha} - 2\right)}},$$
where $\lambda, q > 0$, $A > 1$, and $\gamma := 2\max\!\left(q, \frac{1}{q}\right)$.
Similarly, we obtain that
$$\sum_{\substack{k=-\infty \\ |nx - k| \ge n^{1-\alpha}}}^{\infty} G_{\frac{1}{q},\lambda}(nx - k) < \frac{\gamma}{A^{\lambda\left(n^{1-\alpha} - 2\right)}}.$$
Consequently, we obtain that
$$\sum_{\substack{k=-\infty \\ |nx - k| \ge n^{1-\alpha}}}^{\infty} W(nx - k) < \frac{\gamma}{A^{\lambda\left(n^{1-\alpha} - 2\right)}},$$
where $\gamma := 2\max\!\left(q, \frac{1}{q}\right)$.
Here, $\lceil\cdot\rceil$ denotes the ceiling of the number and $\lfloor\cdot\rfloor$ its integral part.
We mention Theorem 16.4, p. 402 of [5]: let $x \in [a,b] \subset \mathbb{R}$ and $n \in \mathbb{N}$ be such that $\lceil na\rceil \le \lfloor nb\rfloor$. For $q > 0$, $\lambda > 0$, $A > 1$, we consider the number $\lambda_q > z_0 > 0$ with $G_{q,\lambda}(z_0) = G_{q,\lambda}(0)$, and $\lambda_q > 1$. Then,
$$\frac{1}{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor} G_{q,\lambda}(nx - k)} < \max\!\left(\frac{1}{G_{q,\lambda}(\lambda_q)}, \frac{1}{G_{\frac{1}{q},\lambda}\big(\lambda_{\frac{1}{q}}\big)}\right) =: K_q.$$
Similarly, we consider $\lambda_{\frac{1}{q}} > z_1 > 0$, such that $G_{\frac{1}{q},\lambda}(z_1) = G_{\frac{1}{q},\lambda}(0)$, and $\lambda_{\frac{1}{q}} > 1$. Thus,
$$\frac{1}{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor} G_{\frac{1}{q},\lambda}(nx - k)} < \max\!\left(\frac{1}{G_{\frac{1}{q},\lambda}\big(\lambda_{\frac{1}{q}}\big)}, \frac{1}{G_{q,\lambda}(\lambda_q)}\right) = K_q.$$
Hence,
$$\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor} G_{q,\lambda}(nx - k) > \frac{1}{K_q},$$
and
$$\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor} G_{\frac{1}{q},\lambda}(nx - k) > \frac{1}{K_q}.$$
Consequently, it holds that
$$\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor} \frac{G_{q,\lambda}(nx - k) + G_{\frac{1}{q},\lambda}(nx - k)}{2} > \frac{2}{2 K_q} = \frac{1}{K_q},$$
so that
$$\frac{1}{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor} \frac{G_{q,\lambda}(nx - k) + G_{\frac{1}{q},\lambda}(nx - k)}{2}} < K_q,$$
that is,
$$\frac{1}{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor} W(nx - k)} < K_q.$$
We have proved
Theorem 1.
Let $x \in [a,b] \subset \mathbb{R}$ and $n \in \mathbb{N}$ be such that $\lceil na\rceil \le \lfloor nb\rfloor$. For $q, \lambda > 0$, $A > 1$, we consider $\lambda_q > z_0 > 0$ with $G_{q,\lambda}(z_0) = G_{q,\lambda}(0)$, and $\lambda_q > 1$. Also consider $\lambda_{\frac{1}{q}} > z_1 > 0$, such that $G_{\frac{1}{q},\lambda}(z_1) = G_{\frac{1}{q},\lambda}(0)$, and $\lambda_{\frac{1}{q}} > 1$. Then,
$$\frac{1}{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor} W(nx - k)} < K_q.$$
We make the following remark:
Remark 1.
(I) By Remark 16.5, p. 402 of [5], we have
$$\lim_{n\to\infty} \sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor} G_{q,\lambda}(n x_1 - k) \ne 1, \quad \text{for some } x_1 \in [a,b],$$
and
$$\lim_{n\to\infty} \sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor} G_{\frac{1}{q},\lambda}(n x_2 - k) \ne 1, \quad \text{for some } x_2 \in [a,b].$$
Therefore, it holds that
$$\lim_{n\to\infty} \sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor} \frac{G_{q,\lambda}(n x_1 - k) + G_{\frac{1}{q},\lambda}(n x_2 - k)}{2} \ne 1.$$
Hence, it is
$$\lim_{n\to\infty} \sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor} \frac{G_{q,\lambda}(n x_1 - k) + G_{\frac{1}{q},\lambda}(n x_1 - k)}{2} \ne 1,$$
even if
$$\lim_{n\to\infty} \sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor} G_{\frac{1}{q},\lambda}(n x_1 - k) = 1,$$
because then
$$\lim_{n\to\infty} \left(\frac{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor} G_{q,\lambda}(n x_1 - k)}{2} + \frac{1}{2}\right) \ne 1,$$
equivalently,
$$\lim_{n\to\infty} \frac{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor} G_{q,\lambda}(n x_1 - k)}{2} \ne \frac{1}{2},$$
which is true by
$$\lim_{n\to\infty} \sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor} G_{q,\lambda}(n x_1 - k) \ne 1.$$
(II) Let $[a,b] \subset \mathbb{R}$. For large $n$, we always have $\lceil na\rceil \le \lfloor nb\rfloor$. Also, $a \le \frac{k}{n} \le b$ iff $\lceil na\rceil \le k \le \lfloor nb\rfloor$. So, in general, it holds that
$$\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor} W(nx - k) \le 1.$$
We need:
Definition 1.
Let $f \in C([a,b])$ and $n \in \mathbb{N}$ with $\lceil na\rceil \le \lfloor nb\rfloor$. We introduce and define the linear symmetrized neural-network operators
$$S_n^{s}(f, x) := \frac{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor} f\!\left(\frac{k}{n}\right) W(nx - k)}{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor} W(nx - k)} = \frac{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor} f\!\left(\frac{k}{n}\right)\left[G_{q,\lambda}(nx - k) + G_{\frac{1}{q},\lambda}(nx - k)\right]}{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor}\left[G_{q,\lambda}(nx - k) + G_{\frac{1}{q},\lambda}(nx - k)\right]}.$$
In fact, $S_n^{s}(f) \in C([a,b])$; moreover, $S_n^{s}(1) = 1$, and the $S_n^{s}$ are positive linear operators.
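For the reader's convenience, here is a minimal Python sketch of Definition 1 (added as an illustration; the parameters q = 2, λ = 1, A = e and the interval [a, b] = [0, 1] are assumed test choices only). It confirms numerically that $S_n^{s}(1, x) = 1$ and that $S_n^{s}(f, x)$ is close to $f(x)$ for a continuous test function:

```python
# Minimal sketch of the symmetrized operator S_n^s of Definition 1
# (illustrative parameters: q = 2, lambda = 1, A = e, [a, b] = [0, 1]).
import math
import numpy as np

q, lam, A = 2.0, 1.0, math.e
a, b = 0.0, 1.0

def phi(x, qq): return 1.0 / (1.0 + qq * A ** (-lam * x))
def G(x, qq):   return 0.5 * (phi(x + 1, qq) - phi(x - 1, qq))
def W(x):       return 0.5 * (G(x, q) + G(x, 1.0 / q))

def S(f, x, n):
    # S_n^s(f, x) = sum_k f(k/n) W(nx - k) / sum_k W(nx - k),
    # with k running from ceil(n a) to floor(n b)
    k = np.arange(math.ceil(n * a), math.floor(n * b) + 1)
    w = W(n * x - k)
    return np.dot(f(k / n), w) / w.sum()

n, x = 50, 0.3
print(S(lambda t: np.ones_like(t), x, n))      # exactly 1: S_n^s(1) = 1
f = lambda t: np.sin(3 * t) + t ** 2
print(abs(S(f, x, n) - f(x)))                  # small: S_n^s(f, x) is close to f(x)
```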
The modulus of continuity is defined by
$$\omega_1(f, \delta) := \sup_{\substack{x, y \in [a,b] \\ |x - y| \le \delta}} |f(x) - f(y)|, \quad \delta > 0.$$
The same is defined for $f \in C_{uB}(\mathbb{R})$ (uniformly continuous and bounded functions), for $f \in C_B(\mathbb{R})$ (bounded and continuous functions), and for $f \in C_u(\mathbb{R})$ (uniformly continuous functions).
In fact, $f \in C([a,b])$, or $f \in C_u(\mathbb{R})$, is equivalent to $\lim_{\delta \to 0} \omega_1(f, \delta) = 0$.
In this work, $0 < \alpha < 1$ and $n \in \mathbb{N}$ with $n^{1-\alpha} > 2$; here, $\|\cdot\|_\infty$ is the supremum norm.
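Since all of the estimates below are stated through $\omega_1$, a tiny grid-based estimator of the modulus of continuity (an illustration added here; the grid size is an arbitrary assumption) may help the reader experiment with them:

```python
# Illustrative helper (not from the paper): a grid approximation of the
# modulus of continuity omega_1(f, delta) over [a, b].
import numpy as np

def omega1(f, delta, a, b, m=2000):
    # sup over |x - y| <= delta of |f(x) - f(y)|, approximated on an m-point grid
    x = np.linspace(a, b, m)
    fx = f(x)
    h = (b - a) / (m - 1)
    best = 0.0
    for s in range(1, int(delta / h) + 1):     # grid shifts with |x - y| <= delta
        best = max(best, float(np.max(np.abs(fx[s:] - fx[:-s]))))
    return best

print(omega1(np.sin, 0.1, 0.0, 2 * np.pi))     # close to 2*sin(0.05) ~ 0.1, since |sin'| <= 1
```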

3. Main Results

We present uniform, pointwise, $L_1$, $L_p$, and monotone-norm approximation results.
Theorem 2.
Let $f \in C([a,b])$. Then,
$$\|S_n^{s}(f) - f\|_\infty \le 2\,\omega_1\!\left(f, \sqrt{K_q\left[\frac{1}{n^{2\alpha}} + (b-a)^2\,\frac{\gamma}{A^{\lambda\left(n^{1-\alpha} - 2\right)}}\right]}\right) \to 0, \quad \text{as } n \to +\infty,$$
so that $\lim_{n\to\infty} S_n^{s}(f) = f$, uniformly.
Proof. 
We estimate that
$$S_n^{s}\big((t - x)^2\big)(x) = \frac{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor}\left(\frac{k}{n} - x\right)^2 W(nx - k)}{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor} W(nx - k)} \le K_q \sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor}\left(\frac{k}{n} - x\right)^2 W(nx - k)$$
$$= K_q\left[\sum_{\substack{k=\lceil na\rceil \\ \left|\frac{k}{n} - x\right| \le \frac{1}{n^{\alpha}}}}^{\lfloor nb\rfloor}\left(\frac{k}{n} - x\right)^2 W(nx - k) + \sum_{\substack{k=\lceil na\rceil \\ \left|\frac{k}{n} - x\right| > \frac{1}{n^{\alpha}}}}^{\lfloor nb\rfloor}\left(\frac{k}{n} - x\right)^2 W(nx - k)\right]$$
(by (11), (17))
$$\le K_q\left[\frac{1}{n^{2\alpha}} + (b-a)^2\,\frac{\gamma}{A^{\lambda\left(n^{1-\alpha} - 2\right)}}\right].$$
That is,
$$S_n^{s}\big((t - x)^2\big)(x) \le K_q\left[\frac{1}{n^{2\alpha}} + (b-a)^2\,\frac{\gamma}{A^{\lambda\left(n^{1-\alpha} - 2\right)}}\right].$$
Consequently, it holds that
$$\delta_n^{*} := \Big(S_n^{s}\big((t - x)^2\big)(x)\Big)^{\frac{1}{2}} \le \sqrt{K_q\left[\frac{1}{n^{2\alpha}} + (b-a)^2\,\frac{\gamma}{A^{\lambda\left(n^{1-\alpha} - 2\right)}}\right]}.$$
By (Theorem 7.1.7, p. 203 of [2]), using the Shisha–Mond inequality [7] for positive linear operators and $S_n^{s}(1) = 1$, we have
$$\|S_n^{s}(f) - f\|_\infty \le 2\,\omega_1(f, \delta_n^{*}).$$
Thus, it holds that
$$\|S_n^{s}(f) - f\|_\infty \le 2\,\omega_1\!\left(f, \sqrt{K_q\left[\frac{1}{n^{2\alpha}} + (b-a)^2\,\frac{\gamma}{A^{\lambda\left(n^{1-\alpha} - 2\right)}}\right]}\right),$$
proving the claim. □
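As a sanity check of Theorem 2 (purely illustrative, not part of the original argument), the following experiment computes the sup-norm error of $S_n^{s}$ on a grid for increasing n; the choices q = 2, λ = 1, A = e, [a, b] = [0, 1], and f(t) = sin 5t are assumed test choices only:

```python
# Illustrative experiment: the sup-norm error of S_n^s on a test function
# shrinks as n grows, as Theorem 2 predicts.
# Assumed test choices: q = 2, lambda = 1, A = e, [a, b] = [0, 1], f(t) = sin(5t).
import math
import numpy as np

q, lam, A = 2.0, 1.0, math.e
a, b = 0.0, 1.0
f = lambda t: np.sin(5 * t)

def phi(x, qq): return 1.0 / (1.0 + qq * A ** (-lam * x))
def G(x, qq):   return 0.5 * (phi(x + 1, qq) - phi(x - 1, qq))
def W(x):       return 0.5 * (G(x, q) + G(x, 1.0 / q))

def S(fun, x, n):
    k = np.arange(math.ceil(n * a), math.floor(n * b) + 1)
    w = W(n * x - k)
    return np.dot(fun(k / n), w) / w.sum()

xs = np.linspace(a, b, 201)
for n in (10, 50, 250, 1250):
    sup_err = max(abs(S(f, x, n) - f(x)) for x in xs)
    print(n, sup_err)          # the grid sup-error decreases toward 0 as n increases
```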
It holds
Theorem 3.
Let $f : \mathbb{R} \to \mathbb{R}$ be a continuous and $2\pi$-periodic function with modulus of continuity $\omega_1$. Here, $\|\cdot\|_\infty$ denotes the sup-norm over $[a,b] \subset \mathbb{R}$, the operators $S_n^{s}$ act on such $f$ over $[a,b]$, and $n \in \mathbb{N}$ with $n^{1-\alpha} > 2$, $0 < \alpha < 1$. Then,
$$\|S_n^{s}(f) - f\|_\infty \le 2\,\omega_1\!\left(f, \pi\sqrt{K_q\left[\omega_1\!\left(\sin^2, \frac{1}{2 n^{\alpha}}\right) + \frac{\gamma}{A^{\lambda\left(n^{1-\alpha} - 2\right)}}\right]}\right).$$
Proof. 
We want to estimate ($x \in [a,b]$)
$$\varepsilon_n^{*} := \pi\Big(S_n^{s}\Big(\sin^2\Big(\frac{t - x}{2}\Big)\Big)(x)\Big)^{\frac{1}{2}} = \pi\left(\frac{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor}\sin^2\!\left(\frac{\frac{k}{n} - x}{2}\right) W(nx - k)}{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor} W(nx - k)}\right)^{\frac{1}{2}}$$
$$\le \pi\left(K_q \sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor}\sin^2\!\left(\frac{\frac{k}{n} - x}{2}\right) W(nx - k)\right)^{\frac{1}{2}}$$
$$= \pi\left(K_q\left[\sum_{\substack{k=\lceil na\rceil \\ \left|\frac{k}{n} - x\right| \le \frac{1}{n^{\alpha}}}}^{\lfloor nb\rfloor}\left|\sin^2\!\left(\frac{\frac{k}{n} - x}{2}\right) - \sin^2 0\right| W(nx - k) + \sum_{\substack{k=\lceil na\rceil \\ \left|\frac{k}{n} - x\right| > \frac{1}{n^{\alpha}}}}^{\lfloor nb\rfloor}\sin^2\!\left(\frac{\frac{k}{n} - x}{2}\right) W(nx - k)\right]\right)^{\frac{1}{2}}$$
$$\le \pi\left(K_q\left[\sum_{\substack{k=\lceil na\rceil \\ \left|\frac{k}{n} - x\right| \le \frac{1}{n^{\alpha}}}}^{\lfloor nb\rfloor}\omega_1\!\left(\sin^2, \frac{\left|\frac{k}{n} - x\right|}{2}\right) W(nx - k) + 1\cdot\!\!\sum_{\substack{k=\lceil na\rceil \\ \left|\frac{k}{n} - x\right| > \frac{1}{n^{\alpha}}}}^{\lfloor nb\rfloor} W(nx - k)\right]\right)^{\frac{1}{2}}$$
$$\le \pi\sqrt{K_q\left[\omega_1\!\left(\sin^2, \frac{1}{2 n^{\alpha}}\right) + \frac{\gamma}{A^{\lambda\left(n^{1-\alpha} - 2\right)}}\right]}.$$
We have proved that
$$\varepsilon_n^{*} \le \pi\sqrt{K_q\left[\omega_1\!\left(\sin^2, \frac{1}{2 n^{\alpha}}\right) + \frac{\gamma}{A^{\lambda\left(n^{1-\alpha} - 2\right)}}\right]}.$$
Therefore, by [8], the following inequalities hold:
$$\|S_n^{s}(f) - f\|_\infty \le 2\,\omega_1(f, \varepsilon_n^{*}) \le 2\,\omega_1\!\left(f, \pi\sqrt{K_q\left[\omega_1\!\left(\sin^2, \frac{1}{2 n^{\alpha}}\right) + \frac{\gamma}{A^{\lambda\left(n^{1-\alpha} - 2\right)}}\right]}\right),$$
hence proving the claim. □
We need
Definition 2.
Let $f \in C([a,b])$. The Peetre $K_1$ functional ([39]) is defined as follows:
$$K_1(f, t) := \inf\Big\{\|f - g\|_\infty + t\,\|g'\|_\infty :\ g \in C^1([a,b])\Big\}, \quad t \ge 0.$$
We give
Theorem 4.
Let $f \in C([a,b])$. Then,
$$\|S_n^{s}(f) - f\|_\infty \le 2\,K_1\!\left(f, \frac{K_q}{2}\left[\frac{1}{n^{\alpha}} + (b-a)\,\frac{\gamma}{A^{\lambda\left(n^{1-\alpha} - 2\right)}}\right]\right) \to 0,$$
as $n \to +\infty$, so that $\lim_{n\to+\infty} S_n^{s}(f) = f$, uniformly.
For the proof of (48), we will use
Corollary 1.
([1]) (pointwise approximation). Let $f \in C([a,b])$ and let $L$ be a positive linear operator acting on $C([a,b])$ satisfying $L(1, x) = 1$, for all $x \in [a,b]$.
Then, we have the attainable (i.e., sharp) inequality
$$|L(f, x) - f(x)| \le 2\,K_1\!\left(f, \frac{1}{2}\,L\big(|y - x|, x\big)\right), \quad \forall\, x \in [a,b].$$
Proof of Theorem 4.
We apply inequality (49). Clearly, we have
$$|S_n^{s}(f, x) - f(x)| \le 2\,K_1\!\left(f, \frac{1}{2}\,S_n^{s}\big(|t - x|\big)(x)\right),$$
which is attainable (i.e., sharp).
We estimate that
$$S_n^{s}\big(|t - x|\big)(x) = \frac{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor}\left|\frac{k}{n} - x\right| W(nx - k)}{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor} W(nx - k)} \le K_q \sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor}\left|\frac{k}{n} - x\right| W(nx - k)$$
$$= K_q\left[\sum_{\substack{k=\lceil na\rceil \\ \left|\frac{k}{n} - x\right| \le \frac{1}{n^{\alpha}}}}^{\lfloor nb\rfloor}\left|\frac{k}{n} - x\right| W(nx - k) + \sum_{\substack{k=\lceil na\rceil \\ \left|\frac{k}{n} - x\right| > \frac{1}{n^{\alpha}}}}^{\lfloor nb\rfloor}\left|\frac{k}{n} - x\right| W(nx - k)\right]$$
$$\le K_q\left[\frac{1}{n^{\alpha}} + (b-a)\,\frac{\gamma}{A^{\lambda\left(n^{1-\alpha} - 2\right)}}\right].$$
That is,
$$S_n^{s}\big(|t - x|\big)(x) \le K_q\left[\frac{1}{n^{\alpha}} + (b-a)\,\frac{\gamma}{A^{\lambda\left(n^{1-\alpha} - 2\right)}}\right].$$
We have proved that
$$|S_n^{s}(f, x) - f(x)| \le 2\,K_1\!\left(f, \frac{K_q}{2}\left[\frac{1}{n^{\alpha}} + (b-a)\,\frac{\gamma}{A^{\lambda\left(n^{1-\alpha} - 2\right)}}\right]\right), \quad \forall\, x \in [a,b].$$
Now, the validity of (48) is clear. □
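To give a feel for the quantity appearing in Theorem 4, the sketch below computes a crude numerical upper bound for the Peetre functional $K_1(f, t)$ by restricting the infimum to moving-average smoothings of $f$ (an illustration added here; the test function and window range are arbitrary assumptions):

```python
# Illustrative sketch: an upper bound for the Peetre functional K_1(f, t),
# obtained by restricting the infimum in Definition 2 to moving-average
# smoothings g of f and minimizing over the window width.
import numpy as np

a, b = 0.0, 1.0
f = lambda x: np.abs(x - 0.5)                  # test function in C([0, 1])

def K1_upper(f, t, m=4001):
    x = np.linspace(a, b, m)
    fx = f(x)
    best = np.inf
    for w in range(3, 401, 2):                 # odd smoothing windows
        g = np.convolve(fx, np.ones(w) / w, mode="same")
        dg = np.gradient(g, x)                 # numerical g'
        best = min(best, float(np.max(np.abs(fx - g)) + t * np.max(np.abs(dg))))
    return best

for t in (0.1, 0.01, 0.001):
    print(t, K1_upper(f, t))                   # a numerical upper bound; shrinks with t
```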
A trigonometric result follows.
Theorem 5.
Here, $f \in C^2([a,b])$, $x_0 \in [a,b]$, $r > 0$, and $\tilde{D}_3(x_0) := \Big(S_n^{s}\big(|\cdot - x_0|^3\big)(x_0)\Big)^{\frac{1}{3}}$. Then:
(i)
$$\big|S_n^{s}(f)(x_0) - f(x_0)\big| \le \big|f'(x_0)\big|\,\big|S_n^{s}\big(\sin(\cdot - x_0)\big)(x_0)\big| + 2\,\big|f''(x_0)\big|\,S_n^{s}\!\left(\sin^2\!\left(\frac{\cdot - x_0}{2}\right)\right)\!(x_0) + \frac{\omega_1\big(f'' + f, r\,\tilde{D}_3(x_0)\big)}{2}\,\tilde{D}_3^{2}(x_0)\left(1 + \frac{1}{3r}\right),$$
(ii)
$$\|S_n^{s}(f) - f\|_\infty \le \|f'\|_\infty\,\big\|S_n^{s}\big(\sin(\cdot - x_0)\big)(x_0)\big\|_\infty + 2\,\|f''\|_\infty\,\Big\|S_n^{s}\!\left(\sin^2\!\left(\frac{\cdot - x_0}{2}\right)\right)\!(x_0)\Big\|_\infty + \frac{\omega_1\big(f'' + f, r\,\|\tilde{D}_3\|_\infty\big)}{2}\,\|\tilde{D}_3\|_\infty^{2}\left(1 + \frac{1}{3r}\right),$$
(iii) if $f'(x_0) = f''(x_0) = 0$, we obtain
$$\big|S_n^{s}(f)(x_0) - f(x_0)\big| \le \frac{\omega_1\big(f'' + f, r\,\tilde{D}_3(x_0)\big)}{2}\,\tilde{D}_3^{2}(x_0)\left(1 + \frac{1}{3r}\right),$$
and
(iv)
$$\left|S_n^{s}(f)(x_0) - f(x_0) - f'(x_0)\,S_n^{s}\big(\sin(\cdot - x_0)\big)(x_0) - 2\,f''(x_0)\,S_n^{s}\!\left(\sin^2\!\left(\frac{\cdot - x_0}{2}\right)\right)\!(x_0)\right| \le \frac{\omega_1\big(f'' + f, r\,\tilde{D}_3(x_0)\big)}{2}\,\tilde{D}_3^{2}(x_0)\left(1 + \frac{1}{3r}\right).$$
Proof. 
Direct application of (Theorem 12.3, p. 384 of [4]). □
Next, we give the hyperbolic version of the last result.
Theorem 6.
All as in Theorem 5. Then:
(i)
$$\big|S_n^{s}(f)(x_0) - f(x_0)\big| \le \big|f'(x_0)\big|\,\big|S_n^{s}\big(\sinh(\cdot - x_0)\big)(x_0)\big| + 2\,\big|f''(x_0)\big|\,S_n^{s}\!\left(\sinh^2\!\left(\frac{\cdot - x_0}{2}\right)\right)\!(x_0) + \cosh(b-a)\,\frac{\omega_1\big(f'' - f, r\,\tilde{D}_3(x_0)\big)}{2}\,\tilde{D}_3^{2}(x_0)\left(1 + \frac{1}{3r}\right),$$
(ii)
$$\|S_n^{s}(f) - f\|_\infty \le \|f'\|_\infty\,\big\|S_n^{s}\big(\sinh(\cdot - x_0)\big)(x_0)\big\|_\infty + 2\,\|f''\|_\infty\,\Big\|S_n^{s}\!\left(\sinh^2\!\left(\frac{\cdot - x_0}{2}\right)\right)\!(x_0)\Big\|_\infty + \cosh(b-a)\,\frac{\omega_1\big(f'' - f, r\,\|\tilde{D}_3\|_\infty\big)}{2}\,\|\tilde{D}_3\|_\infty^{2}\left(1 + \frac{1}{3r}\right),$$
(iii) if $f'(x_0) = f''(x_0) = 0$, $x_0 \in [a,b]$, we obtain
$$\big|S_n^{s}(f)(x_0) - f(x_0)\big| \le \cosh(b-a)\,\frac{\omega_1\big(f'' - f, r\,\tilde{D}_3(x_0)\big)}{2}\,\tilde{D}_3^{2}(x_0)\left(1 + \frac{1}{3r}\right),$$
and
(iv)
$$\left|S_n^{s}(f)(x_0) - f(x_0) - f'(x_0)\,S_n^{s}\big(\sinh(\cdot - x_0)\big)(x_0) - 2\,f''(x_0)\,S_n^{s}\!\left(\sinh^2\!\left(\frac{\cdot - x_0}{2}\right)\right)\!(x_0)\right| \le \cosh(b-a)\,\frac{\omega_1\big(f'' - f, r\,\tilde{D}_3(x_0)\big)}{2}\,\tilde{D}_3^{2}(x_0)\left(1 + \frac{1}{3r}\right).$$
Proof. 
Direct application of (Theorem 12.5, p. 390 of [4]). □
Remark 2.
We similarly have
$$\Big(S_n^{s}\big(|\cdot - x_0|^3\big)(x_0)\Big)^{\frac{1}{3}} \le \sqrt[3]{K_q}\;\sqrt[3]{\frac{1}{n^{3\alpha}} + (b-a)^3\,\frac{\gamma}{A^{\lambda\left(n^{1-\alpha} - 2\right)}}} \to 0,$$
as $n \to +\infty$.
Hence, by each of the above Theorems 5 and 6, we obtain that $S_n^{s}(f) \to f$, uniformly, as $n \to +\infty$, for every $f \in C^2([a,b])$.
This is valid by Theorem 12.4, p. 390, and Theorem 12.6, p. 395 of [4], respectively.
We make the following remark:
Remark 3.
Let $x_0 \in [a,b]$ and $f \in C([a,b])$; here, $S_n^{s}$ is a positive linear operator from $C([a,b])$ into itself.
Also, $S_n^{s}(1) = 1$. By the Riesz representation theorem, there exists a probability measure $\mu_{x_0}$ on $[a,b]$, such that
$$S_n^{s}(f)(x_0) = \int_{[a,b]} f(t)\,d\mu_{x_0}(t), \quad \forall\, f \in C([a,b]).$$
Assume here that $f(t) - f(x_0)$ is convex in $t$. We consider
$$\tilde{d}_2(x_0) := \left(\int_{[a,b]} (t - x_0)^2\,d\mu_{x_0}(t)\right)^{\frac{1}{2}} = \Big(S_n^{s}\big((t - x_0)^2\big)(x_0)\Big)^{\frac{1}{2}} \overset{\text{(as earlier)}}{\le} \sqrt{K_q\left[\frac{1}{n^{2\alpha}} + (b-a)^2\,\frac{\gamma}{A^{\lambda\left(n^{1-\alpha} - 2\right)}}\right]}.$$
Clearly, here it is $\tilde{d}_2(x_0) > 0$ (see (35)).
For large enough $n \in \mathbb{N}$, we obtain
$$\sqrt{K_q\left[\frac{1}{n^{2\alpha}} + (b-a)^2\,\frac{\gamma}{A^{\lambda\left(n^{1-\alpha} - 2\right)}}\right]} \le \min\big(x_0 - a,\ b - x_0\big).$$
That is,
$$\tilde{d}_2(x_0) \le \min\big(x_0 - a,\ b - x_0\big).$$
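Concretely, for the operator $S_n^{s}$ of Definition 1, the measure $\mu_{x_0}$ provided by the Riesz representation theorem is simply the discrete probability measure placing mass $W(n x_0 - k)/\sum_j W(n x_0 - j)$ at the node $k/n$. The short sketch below (an added illustration with assumed parameters q = 2, λ = 1, A = e, [a, b] = [0, 1]) exhibits these point masses, checks that they sum to 1, and evaluates the second moment $\tilde{d}_2^{\,2}(x_0)$:

```python
# Illustrative sketch: the discrete probability measure mu_{x0} behind S_n^s,
# its total mass, the value S_n^s(f)(x0) as an integral against mu_{x0},
# and the second moment d2~(x0)^2 of Remark 3.
# Assumed parameters: q = 2, lambda = 1, A = e, [a, b] = [0, 1].
import math
import numpy as np

q, lam, A = 2.0, 1.0, math.e
a, b = 0.0, 1.0

def phi(x, qq): return 1.0 / (1.0 + qq * A ** (-lam * x))
def G(x, qq):   return 0.5 * (phi(x + 1, qq) - phi(x - 1, qq))
def W(x):       return 0.5 * (G(x, q) + G(x, 1.0 / q))

n, x0 = 100, 0.4
k = np.arange(math.ceil(n * a), math.floor(n * b) + 1)
mass = W(n * x0 - k)
mass = mass / mass.sum()                       # point masses of mu_{x0} at the nodes k/n

f = lambda t: np.exp(t)
print(mass.sum())                              # = 1: mu_{x0} is a probability measure
print(np.dot(f(k / n), mass), f(x0))           # S_n^s(f)(x0) = int f d mu_{x0}, close to f(x0)
print(np.dot((k / n - x0) ** 2, mass))         # d2~(x0)^2, small for large n
```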
By (Corollary 8.1.1, p. 245 of [2]), we obtain:
Theorem 7.
It holds that
$$\big|S_n^{s}(f)(x_0) - f(x_0)\big| \le \omega_1\!\left(f, K_q\left[\frac{1}{n^{2\alpha}} + (b-a)^2\,\frac{\gamma}{A^{\lambda\left(n^{1-\alpha} - 2\right)}}\right]\right),$$
for large enough $n \in \mathbb{N}$.
Remark 4.
We have
$$\tilde{d}_1(x_0) := \int_{[a,b]} |t - x_0|\,d\mu_{x_0}(t) = S_n^{s}\big(|t - x_0|\big)(x_0) \le K_q\left[\frac{1}{n^{\alpha}} + (b-a)\,\frac{\gamma}{A^{\lambda\left(n^{1-\alpha} - 2\right)}}\right] \le \min\big(x_0 - a,\ b - x_0\big),$$
for large enough $n \in \mathbb{N}$.
By (8.1.8, p. 245 of [2]), we obtain
Theorem 8.
Let $r \ge 1$. Then,
$$\big|S_n^{s}(f)(x_0) - f(x_0)\big| \le r\,\omega_1\!\left(f, \frac{1}{r}\,K_q\left[\frac{1}{n^{\alpha}} + (b-a)\,\frac{\gamma}{A^{\lambda\left(n^{1-\alpha} - 2\right)}}\right]\right),$$
for large enough $n \in \mathbb{N}$.
By Theorem 8.1.2, p. 248 of [2], we can derive the following result.
Theorem 9.
Here, we consider $[a,b] \subset \mathbb{R}$ and $x_0 \in [a,b]$. Let $L$ be a positive linear operator from $C([a,b])$ into itself, such that $L(1) = 1$.
Denote
$$\tilde{d}_{m+1}(x_0) := \Big(L\big(|t - x_0|^{m+1}\big)(x_0)\Big)^{\frac{1}{m+1}}, \quad m \in \mathbb{N}.$$
Consider $f \in C^m([a,b])$, such that $f^{(m)}(t) - f^{(m)}(x_0)$ is convex in $t$.
Assume that
$$0 < \tilde{d}_{m+1}^{\,m+1}(x_0) \le \min\big(x_0 - a,\ b - x_0\big)\,(m+1)!.$$
Then,
$$\left|L(f)(x_0) - f(x_0) - \sum_{k=1}^{m} \frac{f^{(k)}(x_0)}{k!}\,L\big((t - x_0)^k\big)(x_0)\right| \le \omega_1\!\left(f^{(m)}, \frac{L\big(|t - x_0|^{m+1}\big)(x_0)}{(m+1)!}\right).$$
We make the following remark:
Remark 5.
We have
$$0 < S_n^{s}\big(|t - x_0|^{m+1}\big)(x_0) = \frac{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor}\left|\frac{k}{n} - x_0\right|^{m+1} W(n x_0 - k)}{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor} W(n x_0 - k)} \le K_q \sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor}\left|\frac{k}{n} - x_0\right|^{m+1} W(n x_0 - k)$$
(as earlier)
$$\le K_q\left[\frac{1}{n^{(m+1)\alpha}} + (b-a)^{m+1}\,\frac{\gamma}{A^{\lambda\left(n^{1-\alpha} - 2\right)}}\right] \le \min\big(x_0 - a,\ b - x_0\big)\,(m+1)!,$$
for large enough $n \in \mathbb{N}$.
By Theorem 9 and Remark 5, we have proved the following:
Theorem 10.
Here, $[a,b] \subset \mathbb{R}$ and $x_0 \in [a,b]$. Consider $f \in C^m([a,b])$ such that $f^{(m)}(t) - f^{(m)}(x_0)$ is convex in $t$, $m \in \mathbb{N}$.
Then, for sufficiently large $n \in \mathbb{N}$, we derive that
$$\left|S_n^{s}(f)(x_0) - f(x_0) - \sum_{k=1}^{m} \frac{f^{(k)}(x_0)}{k!}\,S_n^{s}\big((t - x_0)^k\big)(x_0)\right| \le \omega_1\!\left(f^{(m)}, \frac{K_q}{(m+1)!}\left[\frac{1}{n^{(m+1)\alpha}} + (b-a)^{m+1}\,\frac{\gamma}{A^{\lambda\left(n^{1-\alpha} - 2\right)}}\right]\right).$$
Next, we use (Theorem 18.1, p. 419 of [3]).
Theorem 11.
Denote ($N \in \mathbb{N}$)
$$\tilde{F}_N := \Big\|S_n^{s}\big(|t - \cdot|^N\big)(\cdot)\Big\|_\infty^{\frac{1}{N}} < +\infty,$$
where $\|\cdot\|_\infty$ is the supremum norm.
Let $f \in C^N([a,b])$. Then,
$$\|S_n^{s}(f) - f\|_\infty \le \sum_{\bar{k}=1}^{N} \frac{\|f^{(\bar{k})}\|_\infty}{\bar{k}!}\,\Big\|S_n^{s}\big((t - \cdot)^{\bar{k}}\big)(\cdot)\Big\|_\infty + \omega_1\big(f^{(N)}, \tilde{F}_N\big)\,\tilde{F}_N^{\,N-1}\left[\frac{b-a}{(N+1)!} + \frac{\tilde{F}_N}{2\,N!} + \frac{\tilde{F}_N^{2}}{8\,(b-a)\,(N-1)!}\right], \quad \forall\, n \in \mathbb{N}.$$
Furthermore, it holds that
$$\tilde{F}_N \le \sqrt[N]{K_q}\;\sqrt[N]{\frac{1}{n^{N\alpha}} + (b-a)^N\,\frac{\gamma}{A^{\lambda\left(n^{1-\alpha} - 2\right)}}},$$
and, for $\bar{k} = 1, \dots, N$, we similarly have that
$$\Big\|S_n^{s}\big((t - \cdot)^{\bar{k}}\big)(\cdot)\Big\|_\infty \le K_q\left[\frac{1}{n^{\bar{k}\alpha}} + (b-a)^{\bar{k}}\,\frac{\gamma}{A^{\lambda\left(n^{1-\alpha} - 2\right)}}\right].$$
By (Corollary 18.1, p. 421 of [3]), we obtain
Corollary 2.
Here, $\tilde{F}_1 := \big\|S_n^{s}\big(|t - \cdot|\big)(\cdot)\big\|_\infty < \infty$. Let $f \in C^1([a,b])$. Then,
$$\|S_n^{s}(f) - f\|_\infty \le \|f'\|_\infty\,\big\|S_n^{s}\big(|t - \cdot|\big)(\cdot)\big\|_\infty + \frac{1}{2}\,\omega_1\big(f', \tilde{F}_1\big)\left[(b-a) + \tilde{F}_1 + \frac{\tilde{F}_1^{2}}{4\,(b-a)}\right].$$
Here, it is
$$\tilde{F}_1 \le K_q\left[\frac{1}{n^{\alpha}} + (b-a)\,\frac{\gamma}{A^{\lambda\left(n^{1-\alpha} - 2\right)}}\right].$$
By (Corollary 18.2, p. 421 of [3]), we obtain
Corollary 3.
Here, $\tilde{F}_2 := \Big\|S_n^{s}\big((t - \cdot)^2\big)(\cdot)\Big\|_\infty^{\frac{1}{2}} < +\infty$. Let $f \in C^2([a,b])$. Then,
$$\|S_n^{s}(f) - f\|_\infty \le \|f'\|_\infty\,\big\|S_n^{s}\big(|t - \cdot|\big)(\cdot)\big\|_\infty + \frac{\|f''\|_\infty}{2}\,\Big\|S_n^{s}\big((t - \cdot)^2\big)(\cdot)\Big\|_\infty + \frac{1}{2}\,\omega_1\big(f'', \tilde{F}_2\big)\,\tilde{F}_2\left[\frac{b-a}{3} + \frac{\tilde{F}_2}{2} + \frac{\tilde{F}_2^{2}}{4\,(b-a)}\right].$$
Here, it is
$$\tilde{F}_2 \le \sqrt{K_q\left[\frac{1}{n^{2\alpha}} + (b-a)^2\,\frac{\gamma}{A^{\lambda\left(n^{1-\alpha} - 2\right)}}\right]},$$
and
$$\Big\|S_n^{s}\big((t - \cdot)^2\big)(\cdot)\Big\|_\infty \le K_q\left[\frac{1}{n^{2\alpha}} + (b-a)^2\,\frac{\gamma}{A^{\lambda\left(n^{1-\alpha} - 2\right)}}\right].$$
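The quantities $\tilde{F}_1$ and $\tilde{F}_2$ of Corollaries 2 and 3 can be estimated on a grid; the following illustrative computation (added here, with the assumed parameters q = 2, λ = 1, A = e, [a, b] = [0, 1]) shows them decaying as n grows, which is what drives the right-hand sides above to zero:

```python
# Illustrative computation: grid estimates of F~_1 and F~_2 from Corollaries 2 and 3
# decay as n grows. Assumed parameters: q = 2, lambda = 1, A = e, [a, b] = [0, 1].
import math
import numpy as np

q, lam, A = 2.0, 1.0, math.e
a, b = 0.0, 1.0

def phi(x, qq): return 1.0 / (1.0 + qq * A ** (-lam * x))
def G(x, qq):   return 0.5 * (phi(x + 1, qq) - phi(x - 1, qq))
def W(x):       return 0.5 * (G(x, q) + G(x, 1.0 / q))

def moment(x, n, power):
    # S_n^s(|t - x|^power)(x)
    k = np.arange(math.ceil(n * a), math.floor(n * b) + 1)
    w = W(n * x - k)
    return np.dot(np.abs(k / n - x) ** power, w) / w.sum()

xs = np.linspace(a, b, 101)
for n in (10, 100, 1000):
    F1 = max(moment(x, n, 1) for x in xs)           # estimate of F~_1
    F2 = max(moment(x, n, 2) for x in xs) ** 0.5    # estimate of F~_2
    print(n, F1, F2)                                # both tend to 0
```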
By (Theorem 18.2, p. 422 of [3]), we obtain:
Theorem 12.
Let $\tilde{\varphi} := \Big(S_n^{s}\big((t - x_0)^2\big)(x_0)\Big)^{\frac{1}{2}} < +\infty$ and $r > 0$. Let also $f \in C^1([a,b])$. Then,
$$\|S_n^{s}(f) - f\|_\infty \le \|f'\|_\infty\,\big\|S_n^{s}\big(|t - x_0|\big)(x_0)\big\|_\infty + \begin{cases}\left(\dfrac{1}{8 r^2} + \dfrac{r}{2}\right)\omega_1\big(f', r\tilde{\varphi}\big)\,\tilde{\varphi}, & \text{if } r \le 2;\\[6pt] \omega_1\big(f', r\tilde{\varphi}\big)\,\tilde{\varphi}, & \text{if } r > 2.\end{cases}$$
The following results are with respect to the $\|\cdot\|_1$ and $\|\cdot\|_2$ norms, taken with respect to the Lebesgue measure.
By (Theorem 18.3, p. 424 of [3]), we obtain
Theorem 13.
Let
$$\tilde{\gamma}_N := \Big\|S_n^{s}\big(|t - x|^N\big)(x)\Big\|_2^{\frac{1}{N}} < +\infty,$$
$n, N \in \mathbb{N}$. Also let $f \in C^N([a,b])$. Then,
$$\|S_n^{s}(f) - f\|_1 \le \sum_{\bar{k}=1}^{N} \frac{1}{\bar{k}!}\,\|f^{(\bar{k})}\|_2\,\Big\|S_n^{s}\big((t - \cdot)^{\bar{k}}\big)(\cdot)\Big\|_2 + \frac{1}{2}\,\omega_1\big(f^{(N)}, \tilde{\gamma}_N\big)\,\tilde{\gamma}_N^{\,N-1}\left[\frac{(b-a)}{(N+1)!}\sqrt{\frac{7}{3}(b-a)} + \frac{(b-a)}{N!}\,\tilde{\gamma}_N + \frac{\tilde{\gamma}_N^{2}}{4\,(N-1)!}\sqrt{2\,(b-a)}\right] < +\infty.$$
By (Corollary 18.3, p. 426 of [3]), we obtain:
Corollary 4.
Set
$$\tilde{\gamma}_1 := \big\|S_n^{s}\big(|t - x|\big)(x)\big\|_2 < +\infty.$$
Let $f \in C^1([a,b])$. Then,
$$\|S_n^{s}(f) - f\|_1 \le \|f'\|_2\,\big\|S_n^{s}\big(|t - \cdot|\big)(\cdot)\big\|_2 + \frac{\omega_1\big(f', \tilde{\gamma}_1\big)}{2}\left[\frac{(b-a)}{2}\sqrt{\frac{7}{3}(b-a)} + (b-a)\,\tilde{\gamma}_1 + \frac{\tilde{\gamma}_1^{2}}{4}\sqrt{2\,(b-a)}\right].$$
By (Corollary 18.4, p. 427 of [3]), we obtain
Corollary 5.
Set
$$\tilde{\gamma}_2 := \Big\|S_n^{s}\big((t - x)^2\big)(x)\Big\|_2^{\frac{1}{2}} < +\infty.$$
Let $f \in C^2([a,b])$. Then,
$$\|S_n^{s}(f) - f\|_1 \le \|f'\|_2\,\big\|S_n^{s}\big(|t - \cdot|\big)(\cdot)\big\|_2 + \frac{\|f''\|_2}{2}\,\tilde{\gamma}_2^{2} + \frac{1}{4}\,\omega_1\big(f'', \tilde{\gamma}_2\big)\,\tilde{\gamma}_2\left[\frac{(b-a)}{3}\sqrt{\frac{7}{3}(b-a)} + (b-a)\,\tilde{\gamma}_2 + \frac{\tilde{\gamma}_2^{2}}{2}\sqrt{2\,(b-a)}\right].$$
Setting 1.
In the following Theorems 14 and 15, we use:
Let $([a,b], \mathcal{B}, \mu)$, $a < b$, be a measure space, where $\mathcal{B}$ is the Borel $\sigma$-algebra on $[a,b]$ and $\mu$ is a positive finite measure on $[a,b]$. (Please note that $C([a,b]) \subset L_p([a,b], \mathcal{B}, \mu)$, for any $p > 1$.) Here, $\|\cdot\|_p$ stands for the related $L_p$ norm with respect to $\mu$. Let $p, q > 1$ be such that $\frac{1}{p} + \frac{1}{q} = 1$.
Next, we apply Theorem 18.4, p. 428 of [3].
Theorem 14.
Here, $n, N \in \mathbb{N}$. Set
$$\tilde{\delta}_N := \Big\|S_n^{s}\big(|t - \cdot|^N\big)(\cdot)\Big\|_p^{\frac{1}{N}} < +\infty.$$
Let $f \in C^N([a,b])$. Then,
$$\|S_n^{s}(f) - f\|_p \le \sum_{\bar{k}=1}^{N} \frac{1}{\bar{k}!}\,\|f^{(\bar{k})}\|_{pq}\,\Big\|S_n^{s}\big((t - \cdot)^{\bar{k}}\big)(\cdot)\Big\|_{p^2} + \omega_1\big(f^{(N)}, \tilde{\delta}_N\big)\,\tilde{\delta}_N^{\,N-1}\left[\frac{b-a}{(N+1)!} + \frac{\tilde{\delta}_N}{2\,N!} + \frac{\tilde{\delta}_N^{2}}{4\,(N-1)!\,(b-a)}\right].$$
Remark 6.
We have
$$S_n^{s}\big(|t - x_0|^N\big)(x_0) \le K_q\left[\frac{1}{n^{N\alpha}} + (b-a)^N\,\frac{\gamma}{A^{\lambda\left(n^{1-\alpha} - 2\right)}}\right] =: \tilde{\tau},$$
and
$$\Big\|S_n^{s}\big(|t - \cdot|^N\big)(\cdot)\Big\|_p = \left(\int_{[a,b]}\Big(S_n^{s}\big(|t - x_0|^N\big)(x_0)\Big)^p\,\mu(dx_0)\right)^{\frac{1}{p}} \le \left(\int_{[a,b]}\tilde{\tau}^{\,p}\,d\mu\right)^{\frac{1}{p}} = \big(\mu([a,b])\big)^{\frac{1}{p}}\,\tilde{\tau}.$$
Therefore, it holds that
$$\tilde{\delta}_N \le \big(\mu([a,b])\big)^{\frac{1}{N p}}\,\tilde{\tau}^{\frac{1}{N}}.$$
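As an illustration of Remark 6 (added here, not part of the original text), take μ to be the Lebesgue measure on [a, b] = [0, 1] and N = p = 2; the sketch below estimates $\tilde{\delta}_N$ on a grid and confirms the inequality $\tilde{\delta}_N \le (\mu([a,b]))^{1/(Np)}\,\tilde{\tau}^{1/N}$, with the grid maximum of the pointwise moment used as a stand-in for $\tilde{\tau}$:

```python
# Illustrative check of Remark 6 with mu = Lebesgue measure on [a, b]:
# delta~_N <= (mu([a,b]))^(1/(N p)) * tau~^(1/N).
# Assumed choices: q = 2, lambda = 1, A = e, [a, b] = [0, 1], N = 2, p = 2.
import math
import numpy as np

q, lam, A = 2.0, 1.0, math.e
a, b = 0.0, 1.0
N, p, n = 2, 2, 100

def phi(x, qq): return 1.0 / (1.0 + qq * A ** (-lam * x))
def G(x, qq):   return 0.5 * (phi(x + 1, qq) - phi(x - 1, qq))
def W(x):       return 0.5 * (G(x, q) + G(x, 1.0 / q))

def moment(x):
    # S_n^s(|t - x|^N)(x)
    k = np.arange(math.ceil(n * a), math.floor(n * b) + 1)
    w = W(n * x - k)
    return np.dot(np.abs(k / n - x) ** N, w) / w.sum()

xs = np.linspace(a, b, 2001)
vals = np.array([moment(x) for x in xs])
dx = (b - a) / (len(xs) - 1)
delta_N = (np.sum(vals ** p) * dx) ** (1.0 / (N * p))    # ~ ||S_n^s(|t-.|^N)(.)||_p^(1/N)
tau = vals.max()                                          # grid stand-in for tau~
bound = (b - a) ** (1.0 / (N * p)) * tau ** (1.0 / N)
print(delta_N, bound, delta_N <= bound + 1e-12)           # the inequality of Remark 6 holds
```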
By (Theorem 18.5, p. 431 of [3]), we obtain
Theorem 15.
Here, $n, N \in \mathbb{N}$. We denote
$$\tilde{\delta}_N := \Big\|S_n^{s}\big(|t - x|^N, x\big)\Big\|_p^{\frac{1}{N}} < +\infty.$$
Let $f \in C^N([a,b])$. Then,
$$\|S_n^{s}(f) - f\|_p \le \sum_{\bar{k}=1}^{N} \frac{\|f^{(\bar{k})}\|_\infty}{\bar{k}!}\,\Big\|S_n^{s}\big((t - x)^{\bar{k}}, x\big)\Big\|_p + \omega_1\big(f^{(N)}, \tilde{\delta}_N\big)\,\tilde{\delta}_N^{\,N-1}\left[\frac{b-a}{(N+1)!} + \frac{\tilde{\delta}_N}{2\,N!} + \frac{\tilde{\delta}_N^{2}}{4\,(N-1)!\,(b-a)}\right].$$
We need
Definition 3.
Let $f, g \in C([a,b])$ be such that $|f(x)| \le |g(x)|$, $\forall\, x \in [a,b]$. A norm $\|\cdot\|$ on $C([a,b])$ is called monotone iff $\|f\| \le \|g\|$. We denote a monotone norm by $\|\cdot\|_m$; e.g., the $L_p$ norms in general ($1 \le p \le +\infty$), Orlicz norms, etc.
Finally, we apply (Corollary 18.5, p. 432 of [3]).
Corollary 6.
Let $\|\cdot\|_m$ be a monotone norm on $C([a,b])$. Denote
$$\tilde{\varepsilon}_N := \Big\|S_n^{s}\big(|t - x|^N, x\big)\Big\|_m^{\frac{1}{N}} < +\infty, \quad n, N \in \mathbb{N}.$$
Let $f \in C^N([a,b])$. Then,
$$\|S_n^{s}(f) - f\|_m \le \sum_{\bar{k}=1}^{N} \frac{\|f^{(\bar{k})}\|_\infty}{\bar{k}!}\,\Big\|S_n^{s}\big((t - x)^{\bar{k}}, x\big)\Big\|_m + \omega_1\big(f^{(N)}, \tilde{\varepsilon}_N\big)\,\tilde{\varepsilon}_N^{\,N-1}\left[\frac{b-a}{(N+1)!} + \frac{\tilde{\varepsilon}_N}{2\,N!} + \frac{\tilde{\varepsilon}_N^{2}}{4\,(N-1)!\,(b-a)}\right].$$

4. Conclusions

The author's recent symmetrization technique presented here aims for our operators to converge at high speed to the unit operator using only a half-feed of data. This fact is documented by other authors' forthcoming work, which includes numerical experiments and programming. This article builds a bridge connecting neural-network approximation (part of AI) to positive linear operators (a significant part of functional analysis). The author has been a pioneer in the study of positive linear operators through geometric moment theory since 1985 and the founder of quantitative approximation theory by neural networks in 1997. The author was recently the first to connect neural networks to positive linear operators; see [6]. The list of references is complete, supporting the above.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Anastassiou, G.A. A “K-Attainable” inequality related to the convergence of Positive Linear Operators. J. Approx. Theory 1985, 44, 380–383. [Google Scholar] [CrossRef]
  2. Anastassiou, G.A. Moments in Probability and Approximation Theory; Pitman Research Notes in Mathematics Series; Longman Scientific & Technical: Essex, UK; New York, NY, USA, 1993. [Google Scholar]
  3. Anastassiou, G.A. Quantitative Approximations; Chapmen & Hall/CRC: London, UK; New York, NY, USA, 2001. [Google Scholar]
  4. Anastassiou, G.A. Trigonometric and Hyperbolic Generated Approximation Theory; World Scientific: Singapore; New York, NY, USA, 2025. [Google Scholar]
  5. Anastassiou, G.A. Parametrized, Deformed and General Neural Networks; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 2023. [Google Scholar]
  6. Anastassiou, G.A. Neural Networks as Positive Linear Operators. Mathematics 2025, 13, 1112. [Google Scholar] [CrossRef]
  7. Shisha, O.; Mond, B. The degree of convergence of sequences of linear positive operators. Proc. Natl. Acad. Sci. USA 1968, 60, 1196–1200. [Google Scholar] [CrossRef] [PubMed]
  8. Shisha, O.; Mond, B. The degree of Approximation to Periodic Functions by Linear Positive Operators. J. Approx. Theory 1968, 1, 335–339. [Google Scholar] [CrossRef]
  9. Chen, Z.; Cao, F. The approximation operators with sigmoidal functions. Comput. Math. Appl. 2009, 58, 758–765. [Google Scholar] [CrossRef]
  10. Costarelli, D.; Spigler, R. Approximation results for neural network operators activated by sigmoidal functions. Neural Netw. 2013, 44, 101–106. [Google Scholar] [CrossRef]
  11. Costarelli, D.; Spigler, R. Multivariate neural network operators with sigmoidal activation functions. Neural Netw. 2013, 48, 72–77. [Google Scholar] [CrossRef]
  12. Haykin, S. Neural Networks: A Comprehensive Foundation, 2nd ed.; Prentice Hall: New York, NY, USA, 1998. [Google Scholar]
  13. McCulloch, W.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 1943, 7, 115–133. [Google Scholar] [CrossRef]
  14. Mitchell, T.M. Machine Learning; WCB-McGraw-Hill: New York, NY, USA, 1997. [Google Scholar]
  15. Dansheng, Y.; Feilong, C. Construction and approximation rate for feed-forward neural network operators with sigmoidal functions. J. Comput. Appl. Math. 2025, 453, 116150. [Google Scholar]
  16. Siyu, C.; Bangti, J.; Qimeng, Q.; Zhi, Z. Hybrid neural-network FEM approximation of diffusion coefficient in elliptic and parabolic problems. IMA J. Numer. Anal. 2024, 44, 3059–3093. [Google Scholar]
  17. Lucian, C.; Danillo, C.; Mariarosaria, N.; Pantiş, A. The approximation capabilities of Durrmeyer-type neural network operators. J. Appl. Math. Comput. 2024, 70, 4581–4599. [Google Scholar]
  18. Warin, X. The GroupMax neural network approximation of convex functions. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 11608–11612. [Google Scholar] [CrossRef] [PubMed]
  19. Fabra, A.; Guasch, O.; Baiges, J.; Codina, R. Approximation of acoustic black holes with finite element mixed formulations and artificial neural network correction terms. Finite Elem. Anal. Des. 2024, 241, 104236. [Google Scholar] [CrossRef]
  20. Grohs, P.; Voigtlaender, F. Proof of the theory-to-practice gap in deep learning via sampling complexity bounds for neural network approximation spaces. Found. Comput. Math. 2024, 24, 1085–1143. [Google Scholar] [CrossRef]
  21. Basteri, A.; Trevisan, D. Quantitative Gaussian approximation of randomly initialized deep neural networks. Mach. Learn. 2024, 113, 6373–6393. [Google Scholar] [CrossRef]
  22. De Ryck, T.; Mishra, S. Error analysis for deep neural network approximations of parametric hyperbolic conservation laws. Math. Comp. 2024, 93, 2643–2677. [Google Scholar] [CrossRef]
  23. Liu, J.; Zhang, B.; Lai, Y.; Fang, L. Hull form optimization research based on multi-precision back-propagation neural network approximation model. Int. J. Numer. Methods Fluids 2024, 96, 1445–1460. [Google Scholar] [CrossRef]
  24. Yoo, J.; Kim, J.; Gim, M.; Lee, H. Error estimates of physics-informed neural networks for initial value problems. J. Korean Soc. Ind. Appl. Math. 2024, 28, 33–58. [Google Scholar]
  25. Kaur, J.; Goyal, M. Hyers-Ulam stability of some positive linear operators. Stud. Univ. Babeş-Bolyai Math. 2025, 70, 105–114. [Google Scholar] [CrossRef]
  26. Abel, U.; Acu, A.M.; Heilmann, M.; Raşa, I. On some Cauchy problems and positive linear operators. Mediterr. J. Math. 2025, 22, 20. [Google Scholar] [CrossRef]
  27. Moradi, H.R.; Furuichi, S.; Sababheh, M. Operator quadratic mean and positive linear maps. J. Math. Inequal. 2024, 18, 1263–1279. [Google Scholar] [CrossRef]
  28. Bustamante, J.; Torres-Campos, J. Power series and positive linear operators in weighted spaces. Serdica Math. J. 2024, 50, 225–250. [Google Scholar] [CrossRef]
  29. Acu, A.-M.; Rasa, I.; Sofonea, F. Composition of some positive linear integral operators. Demonstr. Math. 2024, 57, 20240018. [Google Scholar] [CrossRef]
  30. Patel, P.G. On positive linear operators linking gamma, Mittag-Leffler and Wright functions. Int. J. Appl. Comput. Math. 2024, 10, 152. [Google Scholar] [CrossRef]
  31. Ansari, K.J.; Özger, F. Pointwise and weighted estimates for Bernstein-Kantorovich type operators including beta function. Indian J. Pure Appl. Math. 2024. [Google Scholar] [CrossRef]
  32. Savaş, E.; Mursaleen, M. Bézier Type Kantorovich q-Baskakov Operators via Wavelets and Some Approximation Properties. Bull. Iran. Math. Soc. 2023, 49, 68. [Google Scholar] [CrossRef]
  33. Cai, Q.; Aslan, R.; Özger, F.; Srivastava, H.M. Approximation by a new Stancu variant of generalized (λ,μ)-Bernstein operators. Alex. Eng. J. 2024, 107, 205–214. [Google Scholar] [CrossRef]
  34. Ayman-Mursaleen, M.; Nasiruzzaman, M.; Sharma, S.; Cai, Q. Invariant means and lacunary sequence spaces of order (α, β). Demonstr. Math. 2024, 57, 20240003. [Google Scholar] [CrossRef]
  35. Ayman-Mursaleen, M.; Nasiruzzaman, M.; Rao, N. On the Approximation of Szász-Jakimovski-Leviatan Beta Type Integral Operators Enhanced by Appell Polynomials. Iran. J. Sci. 2025. [Google Scholar] [CrossRef]
  36. Alamer, A.; Nasiruzzaman, M. Approximation by Stancu variant of λ-Bernstein shifted knots operators associated by Bézier basis function. J. King Saud Univ. Sci. 2024, 36, 103333. [Google Scholar] [CrossRef]
  37. Ayman-Mursaleen, M.; Nasiruzzaman, M.; Rao, N.; Dilshad, M.; Nisar, K.S. Approximation by the modified λ-Bernstein-polynomial in terms of basis function. AIMS Math. 2024, 9, 4409–4426. [Google Scholar] [CrossRef]
  38. Ayman-Mursaleen, M.; Lamichhane, B.P.; Kilicman, A.; Senu, N. On q-statistical approximation of wavelets aided Kantorovich q-Baskakov operators. Filomat 2024, 38, 3261–3274. [Google Scholar] [CrossRef]
  39. Peetre, J. A Theory of Interpolation of Normed Spaces; Notes Universidade de Brasilia: Brasília, Brazil, 1963. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
