Article

Abstract Univariate Neural Network Approximation Using a q-Deformed and λ-Parametrized Hyperbolic Tangent Activation Function

by
George A. Anastassiou
Department of Mathematical Sciences, University of Memphis, Memphis, TN 38152, USA
Fractal Fract. 2023, 7(3), 208; https://doi.org/10.3390/fractalfract7030208
Submission received: 23 January 2023 / Revised: 8 February 2023 / Accepted: 16 February 2023 / Published: 21 February 2023

Abstract: In this work, we perform univariate approximation, with rates, basic and fractional, of continuous functions that take values in an arbitrary Banach space and are defined on a closed interval or on all of $\mathbb{R}$, by quasi-interpolation neural network operators. These approximations are achieved by deriving Jackson-type inequalities via the first modulus of continuity of the function at hand or of its abstract integer derivative or Caputo fractional derivatives. Our operators are expressed via a density function based on a q-deformed and λ-parametrized hyperbolic tangent activation sigmoid function. The convergences are pointwise and uniform. The associated feed-forward neural networks have one hidden layer.

1. Introduction

The author of [1,2], see Chapters 2–5, was the first to establish quantitative neural network approximation of continuous functions by precisely defined neural network operators of Cardaliaguet–Euvrard and “Squashing” types, using the modulus of continuity of the given function or of its high-order derivative and deriving nearly sharp Jackson-type inequalities. He dealt with both the univariate and multivariate cases. The “bell-shaped” and “squashing” functions defining these operators were taken to have compact support. Furthermore, in [2] he provides the Nth-order asymptotic expansion for the error of weak approximation of these two operators to a particular natural class of smooth functions; for more, see Chapters 4 and 5 therein.
Motivated by [3], the author continued his research on neural network approximation by employing the appropriate quasi-interpolation operators of sigmoidal and hyperbolic tangent type, which resulted in [4,5,6,7,8], treating both the univariate and multivariate cases. He also completed the corresponding fractional cases [9,10,11].
Let $h$ be a general sigmoid function with $h(0) = 0$ and horizontal asymptotes $y = \pm 1$; of course, $h$ is strictly increasing over $\mathbb{R}$. Let the parameter $0 < r < 1$ and $x > 0$. Then clearly $-x < x$ and $-x < -rx < rx < x$; furthermore, it holds that $h(-x) < h(-rx) < h(rx) < h(x)$. Consequently, the sigmoid $y = h(rx)$ has a graph inside the graph of $y = h(x)$, of course with the same asymptotes $y = \pm 1$. Therefore $h(rx)$ has derivatives (gradients) that are nonzero, or not as close to zero, at more points $x$ than $h(x)$ does, thus killing a smaller number of neurons! Furthermore, of course, $h(rx)$ is more distant from $y = \pm 1$ than $h(x)$ is, a highly desirable fact in neural network theory.
Brain asymmetry has been clearly observed in animals and humans in terms of structure, function, and behavior. This observation reflects evolutionary, hereditary, developmental, experiential, and pathological factors. Therefore, it is natural for our study to consider deformed neural network activation functions and operators. So this paper is a specific study under this philosophy of approaching reality as closely as possible.
Consequently, the author here performs q-deformed and λ-parametrized hyperbolic tangent function activated neural network approximations to continuous functions over closed intervals of reals or over the whole $\mathbb{R}$, with values in an arbitrary Banach space $(X, \|\cdot\|)$. Finally, he deals with the related X-valued fractional approximation. All convergences here are quantitative, expressed via the first modulus of continuity of the function at hand or of its X-valued high-order derivative or X-valued fractional derivatives, and given by nearly attained Jackson-type inequalities.
Our closed intervals are not necessarily symmetric with respect to the origin. Some of our upper bounds on the error quantity are very flexible and general. In preparation for deriving our results, we describe important properties of the basic density function defining our operators, which is induced by a q-deformed and λ-parametrized hyperbolic tangent sigmoid function.
Feed-forward X-valued neural networks (FNNs) with one hidden layer, the only type of networks we use in this work, are mathematically expressed by
$$ S_n(x) = \sum_{j=0}^{n} d_j\, k\left( c_j \cdot x + f_j \right), \quad x \in \mathbb{R}^s, \ s \in \mathbb{N}, $$
where for $0 \le j \le n$, $f_j \in \mathbb{R}$ are the thresholds, $c_j \in \mathbb{R}^s$ are the connection weights, $d_j \in X$ are the coefficients, $c_j \cdot x$ is the inner product of $c_j$ and $x$, and $k$ is the activation function of the network. For more on neural networks, see [12,13,14].
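To make this architecture concrete, the following minimal Python sketch evaluates the formula above for real-valued coefficients (a stand-in for the general X-valued case); the function name, the use of tanh as the activation k, and the random toy data are illustrative assumptions, not taken from the paper.

import numpy as np

def S_n(x, c, f, d, k=np.tanh):
    # S_n(x) = sum_{j=0}^{n} d_j * k(c_j . x + f_j)
    # x : input in R^s, c : (n+1, s) connection weights,
    # f : (n+1,) thresholds, d : (n+1,) coefficients, k : activation function.
    return np.sum(d * k(c @ x + f))

rng = np.random.default_rng(0)
s, n = 3, 10                         # toy sizes
x = rng.normal(size=s)
c = rng.normal(size=(n + 1, s))
f = rng.normal(size=n + 1)
d = rng.normal(size=n + 1)
print(S_n(x, c, f, d))               # one real output of the one-hidden-layer network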

2. About the q-Deformed and λ-Parametrized Hyperbolic Tangent Function $g_{q,\lambda}$

All of the background here comes from [15,16].
We use $g_{q,\lambda}$, see (1), show that it is a sigmoid function, and present several of its properties related to approximation by neural network operators. It will act as the activation function.
So, let us consider the function
$$ g_{q,\lambda}(x) := \frac{e^{\lambda x} - q e^{-\lambda x}}{e^{\lambda x} + q e^{-\lambda x}}, \quad \lambda, q > 0, \ x \in \mathbb{R}. $$ (1)
We have that
$$ g_{q,\lambda}(0) = \frac{1 - q}{1 + q}. $$
We notice also that
$$ g_{q,\lambda}(-x) = \frac{e^{-\lambda x} - q e^{\lambda x}}{e^{-\lambda x} + q e^{\lambda x}} = \frac{\frac{1}{q} e^{-\lambda x} - e^{\lambda x}}{\frac{1}{q} e^{-\lambda x} + e^{\lambda x}} = - \frac{e^{\lambda x} - \frac{1}{q} e^{-\lambda x}}{e^{\lambda x} + \frac{1}{q} e^{-\lambda x}} = - g_{\frac{1}{q},\lambda}(x). $$
That is
$$ g_{q,\lambda}(-x) = - g_{\frac{1}{q},\lambda}(x), \quad \forall\, x \in \mathbb{R}, $$
and
$$ g_{\frac{1}{q},\lambda}(-x) = - g_{q,\lambda}(x), $$
hence
$$ g_{\frac{1}{q},\lambda}(x) = - g_{q,\lambda}(-x). $$
It is
$$ g_{q,\lambda}(x) = \frac{e^{2\lambda x} - q}{e^{2\lambda x} + q} = \frac{1 - q e^{-2\lambda x}}{1 + q e^{-2\lambda x}} \xrightarrow{\, x \to +\infty \,} 1, $$
i.e.,
$$ g_{q,\lambda}(+\infty) = 1. $$
Furthermore,
$$ g_{q,\lambda}(x) = \frac{e^{2\lambda x} - q}{e^{2\lambda x} + q} \xrightarrow{\, x \to -\infty \,} \frac{-q}{q} = -1, $$
i.e.,
$$ g_{q,\lambda}(-\infty) = -1. $$
We find that
$$ g_{q,\lambda}'(x) = \frac{4 q \lambda\, e^{2\lambda x}}{\left( e^{2\lambda x} + q \right)^2} > 0, $$
therefore $g_{q,\lambda}$ is strictly increasing.
Next we obtain ($x \in \mathbb{R}$)
$$ g_{q,\lambda}''(x) = \frac{8 q \lambda^2 e^{2\lambda x} \left( q - e^{2\lambda x} \right)}{\left( e^{2\lambda x} + q \right)^3} \in C(\mathbb{R}). $$
We observe that
$$ q - e^{2\lambda x} \ge 0 \iff q \ge e^{2\lambda x} \iff \ln q \ge 2\lambda x \iff x \le \frac{\ln q}{2\lambda}. $$
So, in the case $x < \frac{\ln q}{2\lambda}$, $g_{q,\lambda}$ is strictly concave up, with $g_{q,\lambda}''\!\left(\frac{\ln q}{2\lambda}\right) = 0$.
Furthermore, in the case $x > \frac{\ln q}{2\lambda}$, $g_{q,\lambda}$ is strictly concave down.
Clearly, $g_{q,\lambda}$ is a shifted sigmoid function with $g_{q,\lambda}(0) = \frac{1-q}{1+q}$, and $g_{q,\lambda}(-x) = - g_{q^{-1},\lambda}(x)$ (a semi-odd function); see also [17].
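As a quick numerical sanity check of the properties listed above, here is a minimal Python sketch (all names, parameter values, and tolerances are ad hoc choices, not from the paper):

import numpy as np

def g(x, q, lam):
    # g_{q,lambda}(x) = (e^{lam x} - q e^{-lam x}) / (e^{lam x} + q e^{-lam x}), see (1)
    return (np.exp(lam * x) - q * np.exp(-lam * x)) / (np.exp(lam * x) + q * np.exp(-lam * x))

q, lam = 2.0, 1.5
x = np.linspace(-6.0, 6.0, 1201)
assert abs(g(0.0, q, lam) - (1 - q) / (1 + q)) < 1e-12   # value at the origin
assert np.all(np.diff(g(x, q, lam)) > 0)                 # strictly increasing
assert abs(g(50.0, q, lam) - 1.0) < 1e-12                # horizontal asymptote at +infinity
assert abs(g(-50.0, q, lam) + 1.0) < 1e-12               # horizontal asymptote at -infinity
assert np.allclose(g(-x, q, lam), -g(x, 1.0 / q, lam))   # deformed symmetry g_q(-x) = -g_{1/q}(x)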
Since $1 > -1$, we have $x + 1 > x - 1$, and $g_{q,\lambda}$ is strictly increasing; therefore we may consider the function
$$ M_{q,\lambda}(x) := \frac{1}{4} \left( g_{q,\lambda}(x+1) - g_{q,\lambda}(x-1) \right) > 0, $$
$\forall\, x \in \mathbb{R}$; $q, \lambda > 0$. Notice that $M_{q,\lambda}(\pm\infty) = 0$, so the $x$-axis is a horizontal asymptote.
We have that
$$ M_{q,\lambda}(-x) = \frac{1}{4} \left( g_{q,\lambda}(-x+1) - g_{q,\lambda}(-x-1) \right) = \frac{1}{4} \left( g_{q,\lambda}(-(x-1)) - g_{q,\lambda}(-(x+1)) \right) $$
$$ = \frac{1}{4} \left( - g_{\frac{1}{q},\lambda}(x-1) + g_{\frac{1}{q},\lambda}(x+1) \right) = \frac{1}{4} \left( g_{\frac{1}{q},\lambda}(x+1) - g_{\frac{1}{q},\lambda}(x-1) \right) = M_{\frac{1}{q},\lambda}(x), \quad \forall\, x \in \mathbb{R}. $$
Thus,
$$ M_{q,\lambda}(-x) = M_{\frac{1}{q},\lambda}(x), \quad \forall\, x \in \mathbb{R}; \ q, \lambda > 0, $$ (11)
a deformed symmetry.
Next, we have that
$$ M_{q,\lambda}'(x) = \frac{1}{4} \left( g_{q,\lambda}'(x+1) - g_{q,\lambda}'(x-1) \right), \quad \forall\, x \in \mathbb{R}. $$
Let $x < \frac{\ln q}{2\lambda} - 1$; then $x - 1 < x + 1 < \frac{\ln q}{2\lambda}$ and $g_{q,\lambda}'(x+1) > g_{q,\lambda}'(x-1)$ (by $g_{q,\lambda}$ being strictly concave up for $x < \frac{\ln q}{2\lambda}$), that is, $M_{q,\lambda}'(x) > 0$. Hence, $M_{q,\lambda}$ is strictly increasing over $\left( -\infty, \frac{\ln q}{2\lambda} - 1 \right)$.
Let now $x - 1 > \frac{\ln q}{2\lambda}$; then $x + 1 > x - 1 > \frac{\ln q}{2\lambda}$, and $g_{q,\lambda}'(x+1) < g_{q,\lambda}'(x-1)$, that is, $M_{q,\lambda}'(x) < 0$.
Therefore $M_{q,\lambda}$ is strictly decreasing over $\left( \frac{\ln q}{2\lambda} + 1, +\infty \right)$.
Let us next consider $\frac{\ln q}{2\lambda} - 1 \le x \le \frac{\ln q}{2\lambda} + 1$. We have that
$$ M_{q,\lambda}''(x) = \frac{1}{4} \left( g_{q,\lambda}''(x+1) - g_{q,\lambda}''(x-1) \right) = 2 q \lambda^2 \left[ \frac{e^{2\lambda(x+1)} \left( q - e^{2\lambda(x+1)} \right)}{\left( e^{2\lambda(x+1)} + q \right)^3} - \frac{e^{2\lambda(x-1)} \left( q - e^{2\lambda(x-1)} \right)}{\left( e^{2\lambda(x-1)} + q \right)^3} \right]. $$ (13)
By $\frac{\ln q}{2\lambda} - 1 \le x \iff \frac{\ln q}{2\lambda} \le x + 1 \iff \ln q \le 2\lambda(x+1) \iff q \le e^{2\lambda(x+1)} \iff q - e^{2\lambda(x+1)} \le 0$.
By $x \le \frac{\ln q}{2\lambda} + 1 \iff x - 1 \le \frac{\ln q}{2\lambda} \iff 2\lambda(x-1) \le \ln q \iff e^{2\lambda(x-1)} \le q \iff q - e^{2\lambda(x-1)} \ge 0$.
Clearly, by (13) we obtain that $M_{q,\lambda}''(x) \le 0$ for $x \in \left[ \frac{\ln q}{2\lambda} - 1, \frac{\ln q}{2\lambda} + 1 \right]$.
More precisely, $M_{q,\lambda}$ is concave down over $\left[ \frac{\ln q}{2\lambda} - 1, \frac{\ln q}{2\lambda} + 1 \right]$, and strictly concave down over $\left( \frac{\ln q}{2\lambda} - 1, \frac{\ln q}{2\lambda} + 1 \right)$.
Consequently, $M_{q,\lambda}$ has a bell-type shape over $\mathbb{R}$.
Of course, it holds that $M_{q,\lambda}''\!\left( \frac{\ln q}{2\lambda} \right) < 0$.
At $x = \frac{\ln q}{2\lambda}$, we have
$$ M_{q,\lambda}'(x) = \frac{1}{4} \left( g_{q,\lambda}'(x+1) - g_{q,\lambda}'(x-1) \right) = q \lambda \left[ \frac{e^{2\lambda(x+1)}}{\left( e^{2\lambda(x+1)} + q \right)^2} - \frac{e^{2\lambda(x-1)}}{\left( e^{2\lambda(x-1)} + q \right)^2} \right]. $$
Thus,
$$ M_{q,\lambda}'\!\left( \frac{\ln q}{2\lambda} \right) = q \lambda \left[ \frac{q e^{2\lambda}}{\left( q e^{2\lambda} + q \right)^2} - \frac{q e^{-2\lambda}}{\left( q e^{-2\lambda} + q \right)^2} \right] = \lambda \left[ \frac{e^{2\lambda}}{\left( e^{2\lambda} + 1 \right)^2} - \frac{e^{-2\lambda}}{\left( e^{-2\lambda} + 1 \right)^2} \right] $$
$$ = \lambda \cdot \frac{e^{2\lambda} \left( e^{-2\lambda} + 1 \right)^2 - e^{-2\lambda} \left( e^{2\lambda} + 1 \right)^2}{\left( e^{2\lambda} + 1 \right)^2 \left( e^{-2\lambda} + 1 \right)^2} = 0. $$
That is, $\frac{\ln q}{2\lambda}$ is the only critical number of $M_{q,\lambda}$ over $\mathbb{R}$. Hence, at $x = \frac{\ln q}{2\lambda}$, $M_{q,\lambda}$ attains its global maximum, which is
$$ M_{q,\lambda}\!\left( \frac{\ln q}{2\lambda} \right) = \frac{1}{4} \left[ g_{q,\lambda}\!\left( \frac{\ln q}{2\lambda} + 1 \right) - g_{q,\lambda}\!\left( \frac{\ln q}{2\lambda} - 1 \right) \right] $$
$$ = \frac{1}{4} \left[ \frac{q^{\frac{1}{2}} e^{\lambda} - q\, q^{-\frac{1}{2}} e^{-\lambda}}{q^{\frac{1}{2}} e^{\lambda} + q\, q^{-\frac{1}{2}} e^{-\lambda}} - \frac{q^{\frac{1}{2}} e^{-\lambda} - q\, q^{-\frac{1}{2}} e^{\lambda}}{q^{\frac{1}{2}} e^{-\lambda} + q\, q^{-\frac{1}{2}} e^{\lambda}} \right] = \frac{1}{4} \left[ \frac{e^{\lambda} - e^{-\lambda}}{e^{\lambda} + e^{-\lambda}} - \frac{e^{-\lambda} - e^{\lambda}}{e^{-\lambda} + e^{\lambda}} \right] $$
$$ = \frac{1}{4} \cdot \frac{2 \left( e^{\lambda} - e^{-\lambda} \right)}{e^{\lambda} + e^{-\lambda}} = \frac{1}{2} \cdot \frac{e^{\lambda} - e^{-\lambda}}{e^{\lambda} + e^{-\lambda}} = \frac{\tanh \lambda}{2}. $$
Conclusion: The maximum value of $M_{q,\lambda}$ is
$$ M_{q,\lambda}\!\left( \frac{\ln q}{2\lambda} \right) = \frac{\tanh \lambda}{2}, \quad \forall\, \lambda > 0. $$
We mention
Theorem 1
([16]). We have that
$$ \sum_{i=-\infty}^{\infty} M_{q,\lambda}(x - i) = 1, \quad \forall\, x \in \mathbb{R}, \ \forall\, \lambda, q > 0. $$
Thus,
$$ \sum_{i=-\infty}^{\infty} M_{q,\lambda}(n x - i) = 1, \quad \forall\, n \in \mathbb{N}, \ \forall\, x \in \mathbb{R}. $$ (18)
Similarly, it holds
$$ \sum_{i=-\infty}^{\infty} M_{\frac{1}{q},\lambda}(x - i) = 1, \quad \forall\, x \in \mathbb{R}. $$
However, $M_{\frac{1}{q},\lambda}(x - i) \overset{(11)}{=} M_{q,\lambda}(i - x)$, $\forall\, x \in \mathbb{R}$.
Hence,
$$ \sum_{i=-\infty}^{\infty} M_{q,\lambda}(i - x) = 1, \quad \forall\, x \in \mathbb{R}, $$
and
$$ \sum_{i=-\infty}^{\infty} M_{q,\lambda}(i + x) = 1, \quad \forall\, x \in \mathbb{R}. $$
It follows
Theorem 2
([16]). It holds
$$ \int_{-\infty}^{\infty} M_{q,\lambda}(x)\, dx = 1, \quad \forall\, \lambda, q > 0. $$
So $M_{q,\lambda}$ is a density function on $\mathbb{R}$; $\lambda, q > 0$.
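In the same illustrative spirit, the next sketch checks the bell shape of $M_{q,\lambda}$, its maximum value $\tanh(\lambda)/2$ at $\ln q / (2\lambda)$, the deformed symmetry (11), and Theorems 1 and 2 numerically (grid sizes, parameter values, and tolerances are ad hoc assumptions):

import numpy as np

def g(x, q, lam):
    return (np.exp(lam * x) - q * np.exp(-lam * x)) / (np.exp(lam * x) + q * np.exp(-lam * x))

def M(x, q, lam):
    # M_{q,lambda}(x) = (1/4) * (g_{q,lambda}(x + 1) - g_{q,lambda}(x - 1))
    return 0.25 * (g(x + 1.0, q, lam) - g(x - 1.0, q, lam))

q, lam = 3.0, 0.8
x_star = np.log(q) / (2.0 * lam)                            # claimed global maximizer
assert abs(M(x_star, q, lam) - np.tanh(lam) / 2.0) < 1e-12  # maximum value tanh(lam)/2

xs = np.linspace(-40.0, 40.0, 8001)
assert np.all(M(xs, q, lam) <= M(x_star, q, lam) + 1e-15)   # bell shape: global maximum at x_star
assert np.allclose(M(-xs, q, lam), M(xs, 1.0 / q, lam))     # deformed symmetry (11)

i = np.arange(-60, 61)
assert abs(np.sum(M(0.37 - i, q, lam)) - 1.0) < 1e-10       # Theorem 1: partition of unity (tails negligible)

dx = xs[1] - xs[0]
assert abs(dx * np.sum(M(xs, q, lam)) - 1.0) < 1e-4         # Theorem 2: integral over R equals 1 (Riemann sum)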
We need the following result
Theorem 3
([16]). Let $0 < \alpha < 1$, and $n \in \mathbb{N}$ with $n^{1-\alpha} > 2$; $q, \lambda > 0$. Then,
$$ \sum_{\substack{k=-\infty \\ |n x - k| \ge n^{1-\alpha}}}^{\infty} M_{q,\lambda}(n x - k) < \max\left\{ q, \frac{1}{q} \right\} e^{4\lambda}\, e^{-2\lambda n^{1-\alpha}} = T e^{-2\lambda n^{1-\alpha}}, $$ (24)
where $T := \max\left\{ q, \frac{1}{q} \right\} e^{4\lambda}$.
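The following small sketch compares the tail sum of Theorem 3 with the bound $T e^{-2\lambda n^{1-\alpha}}$ for one arbitrary choice of parameters (the truncation window and all values are illustrative assumptions; g is evaluated in an algebraically equivalent form that avoids overflow for large arguments):

import numpy as np

def g(x, q, lam):
    # stable form of (1): for x >= 0 divide through by e^{lam x}; for x < 0 use (e^{2 lam x} - q)/(e^{2 lam x} + q)
    u = np.exp(-2.0 * lam * np.abs(x))
    return np.where(x >= 0.0, (1.0 - q * u) / (1.0 + q * u), (u - q) / (u + q))

def M(x, q, lam):
    return 0.25 * (g(x + 1.0, q, lam) - g(x - 1.0, q, lam))

q, lam, alpha, n, x = 2.0, 1.0, 0.5, 100, 0.3      # n^(1 - alpha) = 10 > 2
k = np.arange(-2000, 2001)                         # window wide enough: M decays exponentially
far = np.abs(n * x - k) >= n ** (1.0 - alpha)
tail = np.sum(M(n * x - k, q, lam)[far])
T = max(q, 1.0 / q) * np.exp(4.0 * lam)
bound = T * np.exp(-2.0 * lam * n ** (1.0 - alpha))
assert tail < bound                                # Theorem 3 bound holds for this sample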
Let $\lceil \cdot \rceil$ denote the ceiling of a number and $\lfloor \cdot \rfloor$ its integral part.
Theorem 4
([16]). Let $x \in [a, b] \subset \mathbb{R}$ and $n \in \mathbb{N}$ so that $\lceil n a \rceil \le \lfloor n b \rfloor$. For $q > 0$, $\lambda > 0$, we consider the number $\lambda_q > z_0 > 0$ with $M_{q,\lambda}(z_0) = M_{q,\lambda}(0)$ and $\lambda_q > 1$. Then,
$$ \frac{1}{\sum_{k=\lceil n a \rceil}^{\lfloor n b \rfloor} M_{q,\lambda}(n x - k)} < \max\left\{ \frac{1}{M_{q,\lambda}(\lambda_q)}, \frac{1}{M_{\frac{1}{q},\lambda}\!\left(\lambda_{\frac{1}{q}}\right)} \right\} =: \Delta_q. $$ (25)
We also mention
Remark 1
([16]). (i) We have that
$$ \lim_{n \to +\infty} \sum_{k=\lceil n a \rceil}^{\lfloor n b \rfloor} M_{q,\lambda}(n x - k) \ne 1, \quad \text{for at least some } x \in [a, b], $$
where $\lambda, q > 0$.
(ii) Let $[a, b] \subset \mathbb{R}$. For large $n$ we always have $\lceil n a \rceil \le \lfloor n b \rfloor$. Furthermore, $a \le \frac{k}{n} \le b$ iff $\lceil n a \rceil \le k \le \lfloor n b \rfloor$. In general, it holds that
$$ \sum_{k=\lceil n a \rceil}^{\lfloor n b \rfloor} M_{q,\lambda}(n x - k) \le 1. $$
Let $(X, \|\cdot\|)$ be a Banach space.
Definition 1.
Let $f \in C([a, b], X)$ and $n \in \mathbb{N} : \lceil n a \rceil \le \lfloor n b \rfloor$. We introduce and define the X-valued linear neural network operators
$$ H_n(f, x) := \frac{\sum_{k=\lceil n a \rceil}^{\lfloor n b \rfloor} f\!\left( \frac{k}{n} \right) M_{q,\lambda}(n x - k)}{\sum_{k=\lceil n a \rceil}^{\lfloor n b \rfloor} M_{q,\lambda}(n x - k)}, \quad \forall\, x \in [a, b]; \ q > 0, \ q \ne 1. $$
For large enough $n$ we always obtain $\lceil n a \rceil \le \lfloor n b \rfloor$. Furthermore, $a \le \frac{k}{n} \le b$ iff $\lceil n a \rceil \le k \le \lfloor n b \rfloor$. The same $H_n$ is used for real-valued functions. We study here the pointwise and uniform convergence of $H_n(f, x)$ to $f(x)$ with rates.
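Before developing the estimates, here is a minimal numerical sketch of $H_n$ from Definition 1 for a real-valued $f$ (a stand-in for the X-valued case), illustrating the pointwise convergence studied below; the test function, parameters, and the overflow-safe form of g are ad hoc choices:

import numpy as np

def g(x, q, lam):
    u = np.exp(-2.0 * lam * np.abs(x))              # overflow-safe, algebraically equal to (1)
    return np.where(x >= 0.0, (1.0 - q * u) / (1.0 + q * u), (u - q) / (u + q))

def M(x, q, lam):
    return 0.25 * (g(x + 1.0, q, lam) - g(x - 1.0, q, lam))

def H(f, x, n, a, b, q, lam):
    # H_n(f, x) = sum_k f(k/n) M(n x - k) / sum_k M(n x - k), k = ceil(n a), ..., floor(n b)
    k = np.arange(np.ceil(n * a), np.floor(n * b) + 1)
    w = M(n * x - k, q, lam)
    return np.sum(f(k / n) * w) / np.sum(w)

f = lambda t: np.sin(3.0 * t) + t ** 2
a, b, q, lam, x = -1.0, 1.0, 2.0, 1.0, 0.4
for n in (10, 100, 1000):
    print(n, abs(H(f, x, n, a, b, q, lam) - f(x)))  # the error shrinks as n grows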
For convenience, we also call
$$ H_n^{*}(f, x) := \sum_{k=\lceil n a \rceil}^{\lfloor n b \rfloor} f\!\left( \frac{k}{n} \right) M_{q,\lambda}(n x - k) $$
(the same $H_n^{*}$ can be defined for real-valued functions), that is,
$$ H_n(f, x) := \frac{H_n^{*}(f, x)}{\sum_{k=\lceil n a \rceil}^{\lfloor n b \rfloor} M_{q,\lambda}(n x - k)}. $$
So that
$$ H_n(f, x) - f(x) = \frac{H_n^{*}(f, x)}{\sum_{k=\lceil n a \rceil}^{\lfloor n b \rfloor} M_{q,\lambda}(n x - k)} - f(x) = \frac{H_n^{*}(f, x) - f(x) \sum_{k=\lceil n a \rceil}^{\lfloor n b \rfloor} M_{q,\lambda}(n x - k)}{\sum_{k=\lceil n a \rceil}^{\lfloor n b \rfloor} M_{q,\lambda}(n x - k)}. $$
Consequently, we derive that
$$ \left\| H_n(f, x) - f(x) \right\| \le \Delta_q \left\| H_n^{*}(f, x) - f(x) \sum_{k=\lceil n a \rceil}^{\lfloor n b \rfloor} M_{q,\lambda}(n x - k) \right\| = \Delta_q \left\| \sum_{k=\lceil n a \rceil}^{\lfloor n b \rfloor} \left( f\!\left( \tfrac{k}{n} \right) - f(x) \right) M_{q,\lambda}(n x - k) \right\|, $$
where $\Delta_q$ is as in (25).
We will estimate the right hand side of the last quantity.
For that we need, for $f \in C([a, b], X)$, the first modulus of continuity
$$ \omega_1(f, \delta) := \sup_{\substack{x, y \in [a, b] \\ |x - y| \le \delta}} \| f(x) - f(y) \|, \quad \delta > 0. $$
Similarly, $\omega_1$ is defined for $f \in C_{uB}(\mathbb{R}, X)$ (uniformly continuous and bounded functions from $\mathbb{R}$ into $X$), for $f \in C_B(\mathbb{R}, X)$ (continuous and bounded $X$-valued functions), and for $f \in C_u(\mathbb{R}, X)$ (uniformly continuous functions).
The fact that $f \in C([a, b], X)$ or $f \in C_u(\mathbb{R}, X)$ is equivalent to $\lim_{\delta \to 0} \omega_1(f, \delta) = 0$; see [18].
We make
Definition 2.
When $f \in C_{uB}(\mathbb{R}, X)$ or $f \in C_B(\mathbb{R}, X)$, we define
$$ \overline{H}_n(f, x) := \sum_{k=-\infty}^{\infty} f\!\left( \frac{k}{n} \right) M_{q,\lambda}(n x - k), $$
$n \in \mathbb{N}$, $\forall\, x \in \mathbb{R}$, the X-valued quasi-interpolation neural network operator.
We give
Remark 2.
We have that
$$ \left\| f\!\left( \frac{k}{n} \right) \right\| \le \| f \|_{\infty, \mathbb{R}} < +\infty, $$
and
$$ \left\| f\!\left( \frac{k}{n} \right) M_{q,\lambda}(n x - k) \right\| \le \| f \|_{\infty, \mathbb{R}}\, M_{q,\lambda}(n x - k), $$
and, for any $m \in \mathbb{N}$,
$$ \sum_{k=-m}^{m} \left\| f\!\left( \frac{k}{n} \right) \right\| M_{q,\lambda}(n x - k) \le \| f \|_{\infty, \mathbb{R}} \sum_{k=-m}^{m} M_{q,\lambda}(n x - k), $$
and, finally,
$$ \sum_{k=-\infty}^{\infty} \left\| f\!\left( \frac{k}{n} \right) \right\| M_{q,\lambda}(n x - k) \le \| f \|_{\infty, \mathbb{R}}, $$
a convergent series in $\mathbb{R}$.
So, the series $\sum_{k=-\infty}^{\infty} f\!\left( \frac{k}{n} \right) M_{q,\lambda}(n x - k)$ is absolutely convergent in $X$, hence convergent in $X$, and $\overline{H}_n(f, x) \in X$. We denote $\| f \|_{\infty} := \sup_{x \in [a, b]} \| f(x) \|$ for $f \in C([a, b], X)$; it is defined similarly for $f \in C_B(\mathbb{R}, X)$.

3. Main Results

We present a set of X-valued neural network approximations to a given function, with rates.
Theorem 5.
Let $f \in C([a, b], X)$, $0 < \alpha < 1$, $n \in \mathbb{N} : n^{1-\alpha} > 2$, $q > 0$, $q \ne 1$, $x \in [a, b]$. Then,
(i)
$$ \| H_n(f, x) - f(x) \| \le \Delta_q \left[ \omega_1\!\left( f, \frac{1}{n^{\alpha}} \right) + 2 \| f \|_{\infty} T e^{-2\lambda n^{1-\alpha}} \right] =: \tau, $$ (37)
where $T$ is as in (24),
and
(ii)
$$ \| H_n f - f \|_{\infty} \le \tau. $$
We obtain that $\lim_{n \to \infty} H_n f = f$, pointwise and uniformly.
Proof. 
We see that
$$ \left\| \sum_{k=\lceil n a \rceil}^{\lfloor n b \rfloor} \left( f\!\left( \tfrac{k}{n} \right) - f(x) \right) M_{q,\lambda}(n x - k) \right\| \le \sum_{k=\lceil n a \rceil}^{\lfloor n b \rfloor} \left\| f\!\left( \tfrac{k}{n} \right) - f(x) \right\| M_{q,\lambda}(n x - k) $$
$$ = \sum_{\substack{k=\lceil n a \rceil \\ \left|\frac{k}{n} - x\right| \le \frac{1}{n^{\alpha}}}}^{\lfloor n b \rfloor} \left\| f\!\left( \tfrac{k}{n} \right) - f(x) \right\| M_{q,\lambda}(n x - k) + \sum_{\substack{k=\lceil n a \rceil \\ \left|\frac{k}{n} - x\right| > \frac{1}{n^{\alpha}}}}^{\lfloor n b \rfloor} \left\| f\!\left( \tfrac{k}{n} \right) - f(x) \right\| M_{q,\lambda}(n x - k) $$
$$ \le \sum_{\substack{k=\lceil n a \rceil \\ \left|\frac{k}{n} - x\right| \le \frac{1}{n^{\alpha}}}}^{\lfloor n b \rfloor} \omega_1\!\left( f, \left| \tfrac{k}{n} - x \right| \right) M_{q,\lambda}(n x - k) + 2 \| f \|_{\infty} \sum_{\substack{k=\lceil n a \rceil \\ |n x - k| > n^{1-\alpha}}}^{\lfloor n b \rfloor} M_{q,\lambda}(n x - k) $$
$$ \le \omega_1\!\left( f, \frac{1}{n^{\alpha}} \right) \sum_{\substack{k=-\infty \\ \left|\frac{k}{n} - x\right| \le \frac{1}{n^{\alpha}}}}^{\infty} M_{q,\lambda}(n x - k) + 2 \| f \|_{\infty} \sum_{\substack{k=-\infty \\ |n x - k| > n^{1-\alpha}}}^{\infty} M_{q,\lambda}(n x - k) \quad (\text{by Theorem 3}) $$
$$ \le \omega_1\!\left( f, \frac{1}{n^{\alpha}} \right) + 2 \| f \|_{\infty} T e^{-2\lambda n^{1-\alpha}}. $$
That is,
$$ \left\| \sum_{k=\lceil n a \rceil}^{\lfloor n b \rfloor} \left( f\!\left( \tfrac{k}{n} \right) - f(x) \right) M_{q,\lambda}(n x - k) \right\| \le \omega_1\!\left( f, \frac{1}{n^{\alpha}} \right) + 2 \| f \|_{\infty} T e^{-2\lambda n^{1-\alpha}}. $$
Using the last inequality we derive (37). □
Next we give
Theorem 6.
Let $f \in C_B(\mathbb{R}, X)$, $0 < \alpha < 1$, $q > 0$, $q \ne 1$, $n \in \mathbb{N} : n^{1-\alpha} > 2$, $x \in \mathbb{R}$. Then
(i)
$$ \left\| \overline{H}_n(f, x) - f(x) \right\| \le \omega_1\!\left( f, \frac{1}{n^{\alpha}} \right) + 2 \| f \|_{\infty} T e^{-2\lambda n^{1-\alpha}} =: \gamma, $$
and
(ii)
$$ \left\| \overline{H}_n f - f \right\|_{\infty} \le \gamma. $$
For $f \in C_{uB}(\mathbb{R}, X)$ we obtain $\lim_{n \to \infty} \overline{H}_n f = f$, pointwise and uniformly.
Proof. 
We observe that
$$ \overline{H}_n(f, x) - f(x) \overset{(18)}{=} \sum_{k=-\infty}^{\infty} f\!\left( \tfrac{k}{n} \right) M_{q,\lambda}(n x - k) - f(x) \sum_{k=-\infty}^{\infty} M_{q,\lambda}(n x - k) = \sum_{k=-\infty}^{\infty} \left( f\!\left( \tfrac{k}{n} \right) - f(x) \right) M_{q,\lambda}(n x - k). $$
Hence,
$$ \left\| \overline{H}_n(f, x) - f(x) \right\| \le \sum_{k=-\infty}^{\infty} \left\| f\!\left( \tfrac{k}{n} \right) - f(x) \right\| M_{q,\lambda}(n x - k) $$
$$ = \sum_{\substack{k=-\infty \\ \left|\frac{k}{n} - x\right| \le \frac{1}{n^{\alpha}}}}^{\infty} \left\| f\!\left( \tfrac{k}{n} \right) - f(x) \right\| M_{q,\lambda}(n x - k) + \sum_{\substack{k=-\infty \\ \left|\frac{k}{n} - x\right| > \frac{1}{n^{\alpha}}}}^{\infty} \left\| f\!\left( \tfrac{k}{n} \right) - f(x) \right\| M_{q,\lambda}(n x - k) $$
$$ \le \sum_{\substack{k=-\infty \\ \left|\frac{k}{n} - x\right| \le \frac{1}{n^{\alpha}}}}^{\infty} \omega_1\!\left( f, \left| \tfrac{k}{n} - x \right| \right) M_{q,\lambda}(n x - k) + 2 \| f \|_{\infty} \sum_{\substack{k=-\infty \\ \left|\frac{k}{n} - x\right| > \frac{1}{n^{\alpha}}}}^{\infty} M_{q,\lambda}(n x - k) $$
$$ \le \omega_1\!\left( f, \frac{1}{n^{\alpha}} \right) \sum_{\substack{k=-\infty \\ \left|\frac{k}{n} - x\right| \le \frac{1}{n^{\alpha}}}}^{\infty} M_{q,\lambda}(n x - k) + 2 \| f \|_{\infty} T e^{-2\lambda n^{1-\alpha}} \le \omega_1\!\left( f, \frac{1}{n^{\alpha}} \right) + 2 \| f \|_{\infty} T e^{-2\lambda n^{1-\alpha}}, $$
proving the claim. □
We need the X-valued Taylor’s formula in an appropriate form:
Theorem 7
([19,20]). Let $N \in \mathbb{N}$, and $f \in C^N([a, b], X)$, where $[a, b] \subset \mathbb{R}$ and $X$ is a Banach space. Let any $x, y \in [a, b]$. Then,
$$ f(x) = \sum_{i=0}^{N} \frac{(x - y)^i}{i!} f^{(i)}(y) + \frac{1}{(N-1)!} \int_{y}^{x} (x - t)^{N-1} \left( f^{(N)}(t) - f^{(N)}(y) \right) dt. $$ (46)
The derivatives $f^{(i)}$, $i \in \mathbb{N}$, are defined like the numerical ones; see [21], p. 83. The integral $\int_y^x$ in (46) is of Bochner type; see [22].
By [20,23] we have that if $f \in C([a, b], X)$, then $f \in L^{\infty}([a, b], X)$ and $f \in L^{1}([a, b], X)$.
In what follows, we discuss high-order neural network X-valued approximation by using the smoothness of $f$.
Theorem 8.
Let $f \in C^N([a, b], X)$, $n, N \in \mathbb{N}$, $q > 0$, $q \ne 1$, $0 < \alpha < 1$, $x \in [a, b]$ and $n^{1-\alpha} > 2$. Then,
(i)
$$ \| H_n(f, x) - f(x) \| \le \Delta_q \Bigg\{ \sum_{j=1}^{N} \frac{\left\| f^{(j)}(x) \right\|}{j!} \left[ \frac{1}{n^{\alpha j}} + (b - a)^j T e^{-2\lambda n^{1-\alpha}} \right] + \omega_1\!\left( f^{(N)}, \frac{1}{n^{\alpha}} \right) \frac{1}{n^{\alpha N} N!} + \frac{2 \left\| f^{(N)} \right\|_{\infty} (b - a)^N}{N!}\, T e^{-2\lambda n^{1-\alpha}} \Bigg\}, $$
(ii) assume further that $f^{(j)}(x_0) = 0$, $j = 1, \ldots, N$, for some $x_0 \in [a, b]$; it holds that
$$ \| H_n(f, x_0) - f(x_0) \| \le \Delta_q \Bigg\{ \omega_1\!\left( f^{(N)}, \frac{1}{n^{\alpha}} \right) \frac{1}{n^{\alpha N} N!} + \frac{2 \left\| f^{(N)} \right\|_{\infty} (b - a)^N}{N!}\, T e^{-2\lambda n^{1-\alpha}} \Bigg\}, $$
and
(iii)
$$ \| H_n f - f \|_{\infty} \le \Delta_q \Bigg\{ \sum_{j=1}^{N} \frac{\left\| f^{(j)} \right\|_{\infty}}{j!} \left[ \frac{1}{n^{\alpha j}} + (b - a)^j T e^{-2\lambda n^{1-\alpha}} \right] + \omega_1\!\left( f^{(N)}, \frac{1}{n^{\alpha}} \right) \frac{1}{n^{\alpha N} N!} + \frac{2 \left\| f^{(N)} \right\|_{\infty} (b - a)^N}{N!}\, T e^{-2\lambda n^{1-\alpha}} \Bigg\}. $$
Again we obtain $\lim_{n \to \infty} H_n f = f$, pointwise and uniformly.
Proof. 
It is lengthy and, being similar to [24], is omitted. □
All integrals from now on are of Bochner type [22].
We need
Definition 3
([20]). Let $[a, b] \subset \mathbb{R}$, $X$ be a Banach space, $\alpha > 0$; $m = \lceil \alpha \rceil \in \mathbb{N}$ ($\lceil \cdot \rceil$ is the ceiling of the number), $f : [a, b] \to X$. We assume that $f^{(m)} \in L^1([a, b], X)$. We call the Caputo–Bochner left fractional derivative of order α:
$$ \left( D_{*a}^{\alpha} f \right)(x) := \frac{1}{\Gamma(m - \alpha)} \int_{a}^{x} (x - t)^{m - \alpha - 1} f^{(m)}(t)\, dt, \quad \forall\, x \in [a, b]. $$
If $\alpha \in \mathbb{N}$, we set $D_{*a}^{\alpha} f := f^{(m)}$, the ordinary X-valued derivative (defined similarly to the numerical one, see [21], p. 83), and we also set $D_{*a}^{0} f := f$.
By [19], $\left( D_{*a}^{\alpha} f \right)(x)$ exists almost everywhere in $x \in [a, b]$ and $D_{*a}^{\alpha} f \in L^1([a, b], X)$.
If $\left\| f^{(m)} \right\|_{L^{\infty}([a, b], X)} < \infty$, then by [23], $D_{*a}^{\alpha} f \in C([a, b], X)$, hence $\left\| D_{*a}^{\alpha} f \right\| \in C([a, b])$.
We mention
Definition 4
([19]). Let $[a, b] \subset \mathbb{R}$, $X$ be a Banach space, $\alpha > 0$, $m := \lceil \alpha \rceil$. We assume that $f^{(m)} \in L^1([a, b], X)$, where $f : [a, b] \to X$. We call the Caputo–Bochner right fractional derivative of order α:
$$ \left( D_{b-}^{\alpha} f \right)(x) := \frac{(-1)^m}{\Gamma(m - \alpha)} \int_{x}^{b} (z - x)^{m - \alpha - 1} f^{(m)}(z)\, dz, \quad \forall\, x \in [a, b]. $$
We observe that $\left( D_{b-}^{m} f \right)(x) = (-1)^m f^{(m)}(x)$ for $m \in \mathbb{N}$, and $\left( D_{b-}^{0} f \right)(x) = f(x)$.
By [19], $\left( D_{b-}^{\alpha} f \right)(x)$ exists almost everywhere on $[a, b]$ and $D_{b-}^{\alpha} f \in L^1([a, b], X)$.
If $\left\| f^{(m)} \right\|_{L^{\infty}([a, b], X)} < \infty$, and $\alpha \notin \mathbb{N}$, then by [19], $D_{b-}^{\alpha} f \in C([a, b], X)$, hence $\left\| D_{b-}^{\alpha} f \right\| \in C([a, b])$.
We make
Remark 3
([18]). Let $f \in C^{n-1}([a, b], X)$, $f^{(n)} \in L^{\infty}([a, b], X)$, $n = \lceil \nu \rceil$, $\nu > 0$, $\nu \notin \mathbb{N}$. Then,
$$ \left\| \left( D_{*a}^{\nu} f \right)(x) \right\| \le \frac{\left\| f^{(n)} \right\|_{L^{\infty}([a, b], X)}}{\Gamma(n - \nu + 1)} (x - a)^{n - \nu}, \quad \forall\, x \in [a, b]. $$
Thus, we observe
$$ \omega_1\!\left( D_{*a}^{\nu} f, \delta \right) = \sup_{\substack{x, y \in [a, b] \\ |x - y| \le \delta}} \left\| \left( D_{*a}^{\nu} f \right)(x) - \left( D_{*a}^{\nu} f \right)(y) \right\| \le \sup_{\substack{x, y \in [a, b] \\ |x - y| \le \delta}} \left[ \frac{\left\| f^{(n)} \right\|_{L^{\infty}([a, b], X)}}{\Gamma(n - \nu + 1)} (x - a)^{n - \nu} + \frac{\left\| f^{(n)} \right\|_{L^{\infty}([a, b], X)}}{\Gamma(n - \nu + 1)} (y - a)^{n - \nu} \right] \le \frac{2 \left\| f^{(n)} \right\|_{L^{\infty}([a, b], X)}}{\Gamma(n - \nu + 1)} (b - a)^{n - \nu}. $$
Consequently,
$$ \omega_1\!\left( D_{*a}^{\nu} f, \delta \right) \le \frac{2 \left\| f^{(n)} \right\|_{L^{\infty}([a, b], X)}}{\Gamma(n - \nu + 1)} (b - a)^{n - \nu}. $$
Similarly, let $f \in C^{m-1}([a, b], X)$, $f^{(m)} \in L^{\infty}([a, b], X)$, $m = \lceil \alpha \rceil$, $\alpha > 0$, $\alpha \notin \mathbb{N}$; then
$$ \omega_1\!\left( D_{b-}^{\alpha} f, \delta \right) \le \frac{2 \left\| f^{(m)} \right\|_{L^{\infty}([a, b], X)}}{\Gamma(m - \alpha + 1)} (b - a)^{m - \alpha}. $$
So, for $f \in C^{m-1}([a, b], X)$, $f^{(m)} \in L^{\infty}([a, b], X)$, $m = \lceil \alpha \rceil$, $\alpha > 0$, $\alpha \notin \mathbb{N}$, we find
$$ \sup_{x_0 \in [a, b]} \omega_1\!\left( D_{*x_0}^{\alpha} f, \delta \right)_{[x_0, b]} \le \frac{2 \left\| f^{(m)} \right\|_{L^{\infty}([a, b], X)}}{\Gamma(m - \alpha + 1)} (b - a)^{m - \alpha}, $$
and
$$ \sup_{x_0 \in [a, b]} \omega_1\!\left( D_{x_0 -}^{\alpha} f, \delta \right)_{[a, x_0]} \le \frac{2 \left\| f^{(m)} \right\|_{L^{\infty}([a, b], X)}}{\Gamma(m - \alpha + 1)} (b - a)^{m - \alpha}. $$
By [20] we obtain that $D_{*x_0}^{\alpha} f \in C([x_0, b], X)$, and by [19] we obtain that $D_{x_0 -}^{\alpha} f \in C([a, x_0], X)$.
We present the following X-valued fractional approximation result by neural networks.
Theorem 9.
Let $\alpha > 0$, $q > 0$, $q \ne 1$, $N = \lceil \alpha \rceil$, $\alpha \notin \mathbb{N}$, $f \in C^N([a, b], X)$, $0 < \beta < 1$, $x \in [a, b]$, $n \in \mathbb{N} : n^{1-\beta} > 2$. Then,
(i)
$$ \left\| H_n(f, x) - \sum_{j=1}^{N-1} \frac{f^{(j)}(x)}{j!} H_n\!\left( (\cdot - x)^j \right)(x) - f(x) \right\| \le \frac{\Delta_q}{\Gamma(\alpha + 1)} \Bigg\{ \frac{\omega_1\!\left( D_{x-}^{\alpha} f, \frac{1}{n^{\beta}} \right)_{[a, x]} + \omega_1\!\left( D_{*x}^{\alpha} f, \frac{1}{n^{\beta}} \right)_{[x, b]}}{n^{\alpha \beta}} + T e^{-2\lambda n^{1-\beta}} \left[ \left\| D_{x-}^{\alpha} f \right\|_{\infty, [a, x]} (x - a)^{\alpha} + \left\| D_{*x}^{\alpha} f \right\|_{\infty, [x, b]} (b - x)^{\alpha} \right] \Bigg\}, $$
(ii) if $f^{(j)}(x) = 0$ for $j = 1, \ldots, N-1$, we have
$$ \| H_n(f, x) - f(x) \| \le \frac{\Delta_q}{\Gamma(\alpha + 1)} \Bigg\{ \frac{\omega_1\!\left( D_{x-}^{\alpha} f, \frac{1}{n^{\beta}} \right)_{[a, x]} + \omega_1\!\left( D_{*x}^{\alpha} f, \frac{1}{n^{\beta}} \right)_{[x, b]}}{n^{\alpha \beta}} + T e^{-2\lambda n^{1-\beta}} \left[ \left\| D_{x-}^{\alpha} f \right\|_{\infty, [a, x]} (x - a)^{\alpha} + \left\| D_{*x}^{\alpha} f \right\|_{\infty, [x, b]} (b - x)^{\alpha} \right] \Bigg\}, $$
(iii)
$$ \| H_n(f, x) - f(x) \| \le \Delta_q \Bigg\{ \sum_{j=1}^{N-1} \frac{\left\| f^{(j)}(x) \right\|}{j!} \left[ \frac{1}{n^{\beta j}} + (b - a)^j T e^{-2\lambda n^{1-\beta}} \right] + \frac{1}{\Gamma(\alpha + 1)} \Bigg[ \frac{\omega_1\!\left( D_{x-}^{\alpha} f, \frac{1}{n^{\beta}} \right)_{[a, x]} + \omega_1\!\left( D_{*x}^{\alpha} f, \frac{1}{n^{\beta}} \right)_{[x, b]}}{n^{\alpha \beta}} + T e^{-2\lambda n^{1-\beta}} \left( \left\| D_{x-}^{\alpha} f \right\|_{\infty, [a, x]} (x - a)^{\alpha} + \left\| D_{*x}^{\alpha} f \right\|_{\infty, [x, b]} (b - x)^{\alpha} \right) \Bigg] \Bigg\}, \quad \forall\, x \in [a, b], $$
and
(iv)
$$ \| H_n f - f \|_{\infty} \le \Delta_q \Bigg\{ \sum_{j=1}^{N-1} \frac{\left\| f^{(j)} \right\|_{\infty}}{j!} \left[ \frac{1}{n^{\beta j}} + (b - a)^j T e^{-2\lambda n^{1-\beta}} \right] + \frac{1}{\Gamma(\alpha + 1)} \Bigg[ \frac{\sup_{x \in [a, b]} \omega_1\!\left( D_{x-}^{\alpha} f, \frac{1}{n^{\beta}} \right)_{[a, x]} + \sup_{x \in [a, b]} \omega_1\!\left( D_{*x}^{\alpha} f, \frac{1}{n^{\beta}} \right)_{[x, b]}}{n^{\alpha \beta}} + T e^{-2\lambda n^{1-\beta}} (b - a)^{\alpha} \left( \sup_{x \in [a, b]} \left\| D_{x-}^{\alpha} f \right\|_{\infty, [a, x]} + \sup_{x \in [a, b]} \left\| D_{*x}^{\alpha} f \right\|_{\infty, [x, b]} \right) \Bigg] \Bigg\}. $$
Above, when $N = 1$, the sum $\sum_{j=1}^{N-1} \cdot = 0$.
As we see, we obtain here X-valued fractional-type pointwise and uniform convergence with rates of $H_n \to I$ (the unit operator), as $n \to \infty$.
Proof. 
The proof is very lengthy and similar to [24]; therefore, it is omitted. □
Next we apply Theorem 9 for N = 1 .
Theorem 10.
Let $0 < \alpha, \beta < 1$, $q > 0$, $q \ne 1$, $f \in C^1([a, b], X)$, $x \in [a, b]$, $n \in \mathbb{N} : n^{1-\beta} > 2$. Then
(i)
$$ \| H_n(f, x) - f(x) \| \le \frac{\Delta_q}{\Gamma(\alpha + 1)} \Bigg\{ \frac{\omega_1\!\left( D_{x-}^{\alpha} f, \frac{1}{n^{\beta}} \right)_{[a, x]} + \omega_1\!\left( D_{*x}^{\alpha} f, \frac{1}{n^{\beta}} \right)_{[x, b]}}{n^{\alpha \beta}} + T e^{-2\lambda n^{1-\beta}} \left[ \left\| D_{x-}^{\alpha} f \right\|_{\infty, [a, x]} (x - a)^{\alpha} + \left\| D_{*x}^{\alpha} f \right\|_{\infty, [x, b]} (b - x)^{\alpha} \right] \Bigg\}, $$
and
(ii)
$$ \| H_n f - f \|_{\infty} \le \frac{\Delta_q}{\Gamma(\alpha + 1)} \Bigg\{ \frac{\sup_{x \in [a, b]} \omega_1\!\left( D_{x-}^{\alpha} f, \frac{1}{n^{\beta}} \right)_{[a, x]} + \sup_{x \in [a, b]} \omega_1\!\left( D_{*x}^{\alpha} f, \frac{1}{n^{\beta}} \right)_{[x, b]}}{n^{\alpha \beta}} + (b - a)^{\alpha} T e^{-2\lambda n^{1-\beta}} \left( \sup_{x \in [a, b]} \left\| D_{x-}^{\alpha} f \right\|_{\infty, [a, x]} + \sup_{x \in [a, b]} \left\| D_{*x}^{\alpha} f \right\|_{\infty, [x, b]} \right) \Bigg\}. $$
When $\alpha = \frac{1}{2}$, we derive
Corollary 1.
Let $0 < \beta < 1$, $q > 0$, $q \ne 1$, $f \in C^1([a, b], X)$, $x \in [a, b]$, $n \in \mathbb{N} : n^{1-\beta} > 2$. Then
(i)
$$ \| H_n(f, x) - f(x) \| \le \frac{2 \Delta_q}{\sqrt{\pi}} \Bigg\{ \frac{\omega_1\!\left( D_{x-}^{1/2} f, \frac{1}{n^{\beta}} \right)_{[a, x]} + \omega_1\!\left( D_{*x}^{1/2} f, \frac{1}{n^{\beta}} \right)_{[x, b]}}{n^{\beta/2}} + T e^{-2\lambda n^{1-\beta}} \left[ \left\| D_{x-}^{1/2} f \right\|_{\infty, [a, x]} \sqrt{x - a} + \left\| D_{*x}^{1/2} f \right\|_{\infty, [x, b]} \sqrt{b - x} \right] \Bigg\}, $$
and
(ii)
$$ \| H_n f - f \|_{\infty} \le \frac{2 \Delta_q}{\sqrt{\pi}} \Bigg\{ \frac{\sup_{x \in [a, b]} \omega_1\!\left( D_{x-}^{1/2} f, \frac{1}{n^{\beta}} \right)_{[a, x]} + \sup_{x \in [a, b]} \omega_1\!\left( D_{*x}^{1/2} f, \frac{1}{n^{\beta}} \right)_{[x, b]}}{n^{\beta/2}} + \sqrt{b - a}\, T e^{-2\lambda n^{1-\beta}} \left( \sup_{x \in [a, b]} \left\| D_{x-}^{1/2} f \right\|_{\infty, [a, x]} + \sup_{x \in [a, b]} \left\| D_{*x}^{1/2} f \right\|_{\infty, [x, b]} \right) \Bigg\} < \infty. $$ (65)
We make
Remark 4.
Some convergence analysis follows, based on Corollary 1.
Let $0 < \beta < 1$, $\lambda > 0$, $f \in C^1([a, b], X)$, $x \in [a, b]$, $n \in \mathbb{N} : n^{1-\beta} > 2$. We elaborate on (65). Assume that
$$ \omega_1\!\left( D_{x-}^{1/2} f, \frac{1}{n^{\beta}} \right)_{[a, x]} \le \frac{R_1}{n^{\beta}}, $$
and
$$ \omega_1\!\left( D_{*x}^{1/2} f, \frac{1}{n^{\beta}} \right)_{[x, b]} \le \frac{R_2}{n^{\beta}}, $$
$\forall\, x \in [a, b]$, $\forall\, n \in \mathbb{N}$, where $R_1, R_2 > 0$.
Then it holds that
$$ \frac{\sup_{x \in [a, b]} \omega_1\!\left( D_{x-}^{1/2} f, \frac{1}{n^{\beta}} \right)_{[a, x]} + \sup_{x \in [a, b]} \omega_1\!\left( D_{*x}^{1/2} f, \frac{1}{n^{\beta}} \right)_{[x, b]}}{n^{\beta/2}} \le \frac{R_1 + R_2}{n^{\beta}\, n^{\beta/2}} = \frac{R_1 + R_2}{n^{3\beta/2}} = \frac{R}{n^{3\beta/2}}, $$ (68)
where $R := R_1 + R_2 > 0$.
The other summand of the right-hand side of (65), for large enough $n$, converges to zero at the speed $e^{-2\lambda n^{1-\beta}}$, so it is about $A e^{-2\lambda n^{1-\beta}}$, where $A > 0$ is a constant.
Then, for large enough $n \in \mathbb{N}$, by (65), (68) and the above comment, we obtain that
$$ \| H_n f - f \|_{\infty} \le \frac{B}{n^{3\beta/2}}, $$ (69)
where $B > 0$; that is, the error converges to zero at the high speed of $\frac{1}{n^{3\beta/2}}$.
In Theorem 5, for $f \in C([a, b], X)$ and for large enough $n \in \mathbb{N}$, the speed is $\frac{1}{n^{\beta}}$. So, by (69), $\| H_n f - f \|_{\infty}$ converges to zero much faster. This comes from the assumed differentiability of $f$. Notice that in Corollary 1 no initial condition is assumed.
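As a quick numerical illustration of this comparison (β and the values of n below are arbitrary choices), here is the rate $n^{-\beta}$ discussed for Theorem 5 versus the faster rate $n^{-3\beta/2}$:

beta = 0.8
for n in (10, 100, 1000):
    print(n, n ** (-beta), n ** (-1.5 * beta))
# e.g., for n = 1000: n^(-0.8) is about 4.0e-3, while n^(-1.2) is about 2.5e-4.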

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflicts of interest.

References

1. Anastassiou, G.A. Rate of convergence of some neural network operators to the unit-univariate case. J. Math. Anal. Appl. 1997, 212, 237–262.
2. Anastassiou, G.A. Quantitative Approximations; Chapman & Hall: Boca Raton, FL, USA; CRC: New York, NY, USA, 2001.
3. Chen, Z.; Cao, F. The approximation operators with sigmoidal functions. Comput. Math. Appl. 2009, 58, 758–765.
4. Anastassiou, G.A. Univariate hyperbolic tangent neural network approximation. Math. Comput. Model. 2011, 53, 1111–1132.
5. Anastassiou, G.A. Multivariate hyperbolic tangent neural network approximation. Comput. Math. Appl. 2011, 61, 809–821.
6. Anastassiou, G.A. Multivariate sigmoidal neural network approximation. Neural Netw. 2011, 24, 378–386.
7. Anastassiou, G.A. Intelligent Systems: Approximation by Artificial Neural Networks; Intelligent Systems Reference Library; Springer: Berlin/Heidelberg, Germany, 2011; Volume 19.
8. Anastassiou, G.A. Univariate sigmoidal neural network approximation. J. Comput. Anal. Appl. 2012, 14, 659–690.
9. Anastassiou, G.A. Fractional neural network approximation. Comput. Math. Appl. 2012, 64, 1655–1676.
10. Anastassiou, G.A. Intelligent Systems II: Complete Approximation by Neural Network Operators; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 2016.
11. Anastassiou, G.A. Nonlinearity: Ordinary and Fractional Approximations by Sublinear and Max-Product Operators; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 2018.
12. Haykin, S. Neural Networks: A Comprehensive Foundation, 2nd ed.; Prentice Hall: New York, NY, USA, 1998.
13. McCulloch, W.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 1943, 7, 115–133.
14. Mitchell, T.M. Machine Learning; McGraw-Hill: New York, NY, USA, 1997.
15. Anastassiou, G.A. q-Deformed and lambda-parametrized hyperbolic tangent function based Banach space valued multivariate multi layer neural network approximations. Ann. Univ. Sci. Bp. Sect. Comp. 2023, in press.
16. El-Shehawy, S.A.; Abdel-Salam, E.A.-B. The q-deformed hyperbolic secant family. Intern. J. Appl. Math. Stat. 2012, 29, 51–62.
17. Anastassiou, G.A. General sigmoid based Banach space valued neural network approximation. J. Comput. Anal. Appl. 2023, 31, 520–534.
18. Anastassiou, G.A. Vector fractional Korovkin type approximations. Dyn. Syst. Appl. 2017, 26, 81–104.
19. Anastassiou, G.A. Strong right fractional calculus for Banach space valued functions. Rev. Proyecc. 2017, 36, 149–186.
20. Anastassiou, G.A. A strong fractional calculus theory for Banach space valued functions. Nonlinear Funct. Anal. Appl. 2017, 22, 495–524.
21. Shilov, G.E. Elementary Functional Analysis; Dover Publications, Inc.: New York, NY, USA, 1996.
22. Mikusinski, J. The Bochner Integral; Academic Press: New York, NY, USA, 1978.
23. Kreuter, M. Sobolev Spaces of Vector-Valued Functions. Master's Thesis, Ulm University, Ulm, Germany, 2015.
24. Anastassiou, G.A.; Karateke, S. Parametrized hyperbolic tangent induced Banach space valued ordinary and fractional neural network approximation. Progr. Fract. Differ. Appl. 2023, in press.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
