Article

Complex-Valued Multivariate Neural Network (MNN) Approximation by Parameterized Half-Hyperbolic Tangent Function

Department of Software Engineering, Faculty of Engineering and Natural Sciences, Istanbul Atlas University, 34408 Istanbul, Türkiye
Mathematics 2025, 13(3), 453; https://doi.org/10.3390/math13030453
Submission received: 23 December 2024 / Revised: 25 January 2025 / Accepted: 27 January 2025 / Published: 29 January 2025
(This article belongs to the Special Issue Approximation Theory and Applications)

Abstract

This paper deals with a family of normalized multivariate neural network (MNN) operators of complex-valued continuous functions in a multivariate context, on a box of $\mathbb{R}^{\bar N}$, $\bar N \in \mathbb{N}$. Moreover, we consider the case of approximation employing iterated MNN operators. In addition, pointwise and uniform convergence results are obtained in Banach spaces thanks to the multivariate versions of the trigonometric and hyperbolic-type Taylor formulae, on the corresponding feed-forward neural networks (FNNs) based on one or more hidden layers.

1. Introduction

Artificial neural networks (ANNs) continue to lead to many groundbreaking studies in both theoretical and applied research areas, especially after they became popular again in the 1980s. The ability to approximate an arbitrary function is one of the greatest advantages of NNs. When viewed from the perspective of real-life applications, ANNs have been used in many areas such as image classification, robotics, data science, regression, risk management and autonomous systems, Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), Artificial Super Intelligence (ASI), and other cutting-edge systems.
When we look at both ANNs/MNNs and other important approximation operators and their applications from a theoretical and pure mathematics perspective, we see that “Computational Methods in Approximation Theory” has come to the fore. Especially in recent years, the studies that have been carried out in this field have led to the opening of different avenues both in the ANN/DNN world and in the world of machine learning (ML) under the umbrella of artificial intelligence (AI) (please see [1,2,3,4,5,6,7] for the other types of operators in approximation theory).
It would be right to say that the application of ANNs to real-life problems has experienced its golden age, especially in the last decade. Some examples from the literature are as follows: in [8], some of the leading platforms, programming libraries, and web repositories used to develop and test ANN and deep neural network (DNN) models were introduced in detail. Again, in [9], a case study on stock markets was considered by utilizing multilayer feed-forward perceptron (MLP) algorithms and developing models.
Some time after Minsky and Papert's [10] considerable attempt in 1969, Hornik et al. [11] showed in 1989 that the universal approximation property is satisfactorily provided by feed-forward NNs (FNNs) holding one hidden layer. In 1987, inspired by Kolmogorov [12], Hecht-Nielsen [13] presented a modified version of Kolmogorov's theorem using stronger arguments. From the theoretical framework, it was shown that a continuous function defined on any compact set can be approximated, and that the accuracy of the approximation is guaranteed by increasing the number of neurons. In [14,15], it was proved that any continuous function whose domain is a compact set with regular topology can be approximated using appropriate sigmoid activation functions, even with the help of a network such as $\underline{N}_{\psi,\bar n}$ below. To date, many authors have contributed to the literature a great deal of work on FNN approaches for functions of several variables, both in general and in special cases: see [16,17,18,19,20,21], etc.
In this paper, we delve into approximation results for MLP-based operators employing the activation function described in [22]. Here, MLP stands for an FNN architecture composed of multiple layers. Each layer contains "neurons", and each neuron is linked to the corresponding ones in the previous and next layers; in general, these links are called "synapses" or "connections". Since the flow of information is from one layer to the next, this mechanism is called "feed-forward". No processing is conducted in the first layer, called the "input layer". Layers in the middle part of the architecture are called "hidden layers". The neurons of the input layer feed the first hidden layer, and right after all feed-forward operations, the final output is acquired in the last layer, called the "output layer" (see Figure 1) [11,23]. Recently, Ismailov, in Chapter 5 of [24], explained why two hidden layers can be more feasible than one in FNNs when approximating a continuous multivariate function. He claimed that such NNs can approximate all continuous multivariable functions even if they have very few hidden neurons, and added that the number of hidden neurons, whether small or large, does not affect the approximation accuracy. Furthermore, in [25], Costarelli and Piconi introduced two types of MNN-based algorithms enabled by the classical hyperbolic tangent function to demonstrate the effectiveness of their latest method on satellite images derived from a dataset. They aimed to improve and rescale the relevant dataset by making a comparison with existing interpolation methods. With similar motivation, in [26], Angeloni et al. used discrete MNN operators in their Kantorovich forms.
They demonstrated the ability of these operators to model two-dimensional structures such as satellite images, emphasizing their suitability for deterministic modeling and rescaling optimization approaches. In [27], Baxhaku et al. proposed a new family of MNN operators and supported their theoretical findings with numerical applications to image compression and decompression. Another inspiring work from the literature is [28], where Kadak developed a new Kantorovich variant of fuzzy NN interpolation operators employing a multivariate sigmoidal function; he also proposed a novel interpolation algorithm to process color images. In [29], Costarelli and Sambucini presented a comparative analysis of the algorithmic method with numerical methods, constructing interval-valued sets starting from fuzzy sets, and a well-known method used for digital image processing. Finally, Karateke, motivated by the generalization and mathematical properties of the hyperbolic tangent activation function, applied the relevant activation function to a real-world problem and dataset in her study (see [30]).
It is well known that Anastassiou was the first to present NN convergence for continuous functions and to give the rates of convergence of Cardaliaguet–Euvrard-type NN operators. He was able to activate the modulus of continuity by creating Jackson-type inequalities in his theories. Moreover, he has investigated approximation results not only for single-variable but also for multivariable functions, by employing the hyperbolic tangent and similar types of functions, such as generalized logistic functions (please see [22,32,33,34,35,36,37,38]).
An ANN's architecture based upon multiple hidden layers is presented in a mathematical sense as:
$$\underline{N}_{\psi,\bar n}(x) = \sum_{\bar i = 0}^{\bar n} \bar k_{\bar i}\, \psi\big(\tilde{\omega}_{\bar i} \cdot x + \varpi_{\bar i}\big), \quad x \in \mathbb{R}^{s},\ s \in \mathbb{N},$$
such that $x = (x_1, \ldots, x_s)$; $\tilde{\omega}_{\bar i}, \bar k_{\bar i} \in \mathbb{R}^{s}$; $\varpi_{\bar i} \in \mathbb{R}$ with $0 \le \bar i \le \bar n$, $\bar n \in \mathbb{N}$.
We define:
$$\underline{N}^{2}_{\psi,\bar n}(x) = \sum_{\bar i = 0}^{\bar n} \bar k_{\bar i}\, \psi\big(\tilde{\omega}_{\bar i} \cdot \underline{N}_{\psi,\bar n}(x) + \varpi_{\bar i}\big) = \sum_{\bar i = 0}^{\bar n} \bar k_{\bar i}\, \psi\Bigg(\tilde{\omega}_{\bar i} \cdot \Bigg(\sum_{\bar j = 0}^{\bar n} \bar k_{\bar j}\, \psi\big(\tilde{\omega}_{\bar j} \cdot x + \varpi_{\bar j}\big)\Bigg) + \varpi_{\bar i}\Bigg).$$
Besides, we define
$$\underline{N}^{3}_{\psi,\bar n}(x) = \sum_{\bar i = 0}^{\bar n} \bar k_{\bar i}\, \psi\big(\tilde{\omega}_{\bar i} \cdot \underline{N}^{2}_{\psi,\bar n}(x) + \varpi_{\bar i}\big).$$
In a nutshell, we define a typical feed-forward iterated MNN with $m \in \mathbb{N}$ hidden layers as follows:
$$\underline{N}^{m}_{\psi,\bar n}(x) = \sum_{\bar i = 0}^{\bar n} \bar k_{\bar i}\, \psi\big(\tilde{\omega}_{\bar i} \cdot \underline{N}^{m-1}_{\psi,\bar n}(x) + \varpi_{\bar i}\big), \quad m \in \mathbb{N}.$$
Above, $\tilde{\omega}_{\bar i} = (\tilde{\omega}_{\bar i,1}, \tilde{\omega}_{\bar i,2}, \ldots, \tilde{\omega}_{\bar i,s}) \in \mathbb{R}^{s}$ are the connection weights, the $\bar k_{\bar i}$ are the coefficients, $\tilde{\omega}_{\bar i} \cdot x$ is the inner product of $\tilde{\omega}_{\bar i}$ and $x$; thus $\psi(\tilde{\omega}_{\bar i} \cdot x + \varpi_{\bar i}) \in \mathbb{R}$, and $\underline{N}_{\psi,\bar n}(x) \in \mathbb{R}^{s}$ since $\bar k_{\bar i} \in \mathbb{R}^{s}$; the $\varpi_{\bar i} \in \mathbb{R}$ are the thresholds, and $\psi$ is the activation function. For intensive knowledge of NNs, those concerned are advised to take a look at [39,40,41]. For activation functions trainable thanks to their parameters, please see [22,30].
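To make the recursion above concrete, the following Python sketch evaluates such an iterated network numerically. It is a minimal sketch under illustrative assumptions: the random weights, the dimension $s$, the number of neurons, and the particular choice of a parameterized half-hyperbolic tangent activation are placeholders, not values taken from the paper.

```python
import numpy as np

def half_tanh(x, q=1.0, beta=1.0):
    """Illustrative activation: (1 - q e^{-beta x}) / (1 + q e^{-beta x})."""
    return (1.0 - q * np.exp(-beta * x)) / (1.0 + q * np.exp(-beta * x))

def layer(x, W, k, b, psi=half_tanh):
    """One hidden layer: sum_i k_i * psi(w_i . x + b_i), returning a vector in R^s."""
    # W: (n+1, s) weights, k: (n+1, s) coefficients, b: (n+1,) thresholds
    activations = psi(W @ x + b)          # shape (n+1,)
    return activations @ k                # shape (s,)

def iterated_network(x, W, k, b, m):
    """m-fold iterated network: N^m(x) = sum_i k_i psi(w_i . N^{m-1}(x) + b_i)."""
    out = np.asarray(x, dtype=float)
    for _ in range(m):
        out = layer(out, W, k, b)
    return out

rng = np.random.default_rng(0)
s, n = 3, 8
W = rng.normal(size=(n + 1, s))
k = rng.normal(size=(n + 1, s))
b = rng.normal(size=n + 1)
y = iterated_network(rng.normal(size=s), W, k, b, m=3)
print(y.shape)  # (3,)
```

Each iteration maps $\mathbb{R}^s$ back into $\mathbb{R}^s$ (because the coefficients $\bar k_{\bar i}$ are vectors in $\mathbb{R}^s$), which is exactly what allows the composition $\underline{N}^{m}_{\psi,\bar n} = \underline{N}_{\psi,\bar n} \circ \underline{N}^{m-1}_{\psi,\bar n}$ above.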
The organization of this paper is as follows. Section 1 presents a literature review that provides motivation, together with the above description of the typical feed-forward iterated MNN architecture with $m \in \mathbb{N}$ hidden layers. Section 2 gives background on our activation function, denoted by $\tilde\psi_q$ for $q, \beta > 0$, and on the construction of the related density function, denoted by $D_q$ for $q > 0$, with its important properties. Section 3 deals with some lemmas and definitions used to prove the approximation results for the $\mathbb{C}$-valued normalized MNN operators. Section 4, which forms the core of the work, establishes the MNN approximations activated by the parameterized half-hyperbolic tangent function over $\mathbb{R}^{\bar N}$, $\bar N \in \mathbb{N}$, with complex values. The basic tools of the theory are the multivariate versions of the trigonometric and hyperbolic-type Taylor formulae. Section 5 is devoted to some final remarks and future directions. Finally, the Abbreviations and Symbols table, which summarizes important symbols used throughout the paper, has been added to help readers follow the mathematical flow.

2. Preliminaries

Below, we deal with the function called the "parameterized half-hyperbolic tangent function" for $q, \beta > 0$, inspired by [22,42]: for all $\bar x \in \mathbb{R}$,
$$\tilde\psi_q(\bar x) := \frac{1 - q e^{-\beta \bar x}}{1 + q e^{-\beta \bar x}}. \tag{1}$$
Some important properties of (1) are presented as:
  • $\tilde\psi_q(0) = \frac{1-q}{1+q}$ for $\bar x = 0$.
  • Reciprocal anti-symmetry: $\tilde\psi_q(-\bar x) = -\tilde\psi_{1/q}(\bar x)$ is satisfied for $\bar x \in \mathbb{R}$; $q > 0$.
  • Since $\tilde\psi_q'(\bar x) = \frac{2 q \beta\, e^{\beta \bar x}}{(e^{\beta \bar x} + q)^2} > 0$ is derived for every $\bar x \in \mathbb{R}$; $q, \beta > 0$, $\tilde\psi_q$ is strictly increasing over $\mathbb{R}$.
  • For $\bar x \in \mathbb{R}$, $q, \beta > 0$, $\tilde\psi_q''(\bar x) = \frac{2 q \beta^2 e^{\beta \bar x}\,(q - e^{\beta \bar x})}{(e^{\beta \bar x} + q)^3} \in C(\mathbb{R})$. Thus, $\tilde\psi_q$ is strictly concave upward for $\bar x < \frac{\ln q}{\beta}$ and strictly concave downward for $\bar x > \frac{\ln q}{\beta}$, with $\tilde\psi_q''\big(\frac{\ln q}{\beta}\big) = 0$. For a deeper understanding and background of (1), please see [22,43].
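The listed properties of $\tilde\psi_q$ can be checked numerically. The sketch below does so for one illustrative parameter pair $(q, \beta) = (2.5, 1.3)$; the grid and tolerances are arbitrary choices.

```python
import numpy as np

def psi_q(x, q, beta):
    """psi~_q(x) = (1 - q e^{-beta x}) / (1 + q e^{-beta x}), Equation (1)."""
    return (1 - q * np.exp(-beta * x)) / (1 + q * np.exp(-beta * x))

def psi_q_dd(x, q, beta):
    """Second derivative: 2 q beta^2 e^{beta x} (q - e^{beta x}) / (e^{beta x} + q)^3."""
    u = np.exp(beta * x)
    return 2 * q * beta**2 * u * (q - u) / (u + q) ** 3

q, beta = 2.5, 1.3
x = np.linspace(-6, 6, 2001)

assert np.isclose(psi_q(0.0, q, beta), (1 - q) / (1 + q))        # value at 0
assert np.allclose(psi_q(-x, q, beta), -psi_q(x, 1 / q, beta))   # reciprocal anti-symmetry
assert np.all(np.diff(psi_q(x, q, beta)) > 0)                    # strictly increasing

x0 = np.log(q) / beta                                            # inflection point
assert abs(psi_q_dd(x0, q, beta)) < 1e-12                        # psi''(ln q / beta) = 0
assert psi_q_dd(x0 - 0.5, q, beta) > 0 and psi_q_dd(x0 + 0.5, q, beta) < 0
print("all properties verified")
```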
Now, we introduce the density function denoted by $D_q$ in (2); for its basic properties, please see [22]. Since $\tilde\psi_q$ is strictly increasing and $\bar x + 1 > \bar x - 1$, for all $\bar x \in \mathbb{R}$; $q, \beta > 0$,
$$D_q(\bar x) := \frac{1}{4}\Big(\tilde\psi_q(\bar x + 1) - \tilde\psi_q(\bar x - 1)\Big) > 0 \tag{2}$$
is well defined.
Theorem 1
([22]). One has that
$$\sum_{\bar j = -\infty}^{\infty} D_q(\bar x - \bar j) = 1,$$
for all $\bar x \in \mathbb{R}$ and $q > 0$. Thus,
$$\sum_{\bar j = -\infty}^{\infty} D_q(\bar n \bar x - \bar j) = 1,$$
for every $\bar n \in \mathbb{N}$, $\bar x \in \mathbb{R}$, $q > 0$. In like manner, it holds that
$$\sum_{\bar j = -\infty}^{\infty} D_{1/q}(\bar x - \bar j) = 1,$$
for any $\bar x \in \mathbb{R}$, $q > 0$. Moreover, using the "symmetry" property in Equation (3) of [42], we obtain
$$D_{1/q}(\bar x - \bar j) = D_q(\bar j - \bar x).$$
Thus,
$$\sum_{\bar j = -\infty}^{\infty} D_q(\bar j - \bar x) = 1,$$
and
$$\sum_{\bar j = -\infty}^{\infty} D_q(\bar j + \bar x) = 1$$
are satisfied for every $\bar x \in \mathbb{R}$, $q > 0$.
Theorem 2
([22]). It is verified that
$$\int_{-\infty}^{\infty} D_q(\bar x)\, d\bar x = 1.$$
In other words, $D_q$ is a density function on $\mathbb{R}$ for $q > 0$.
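Theorems 1 and 2 can be sanity-checked numerically by truncating the bilateral series and the integral. The sketch below does this for one illustrative pair $(q, \beta) = (3, 1)$; the truncation window and step size are arbitrary choices.

```python
import numpy as np

def psi_q(x, q, beta):
    """Parameterized half-hyperbolic tangent, Equation (1)."""
    return (1 - q * np.exp(-beta * x)) / (1 + q * np.exp(-beta * x))

def D_q(x, q, beta):
    """Density function, Equation (2): (1/4)[psi~_q(x+1) - psi~_q(x-1)]."""
    return 0.25 * (psi_q(x + 1, q, beta) - psi_q(x - 1, q, beta))

q, beta = 3.0, 1.0
j = np.arange(-200, 201)           # truncation of the bilateral series

# Partition of unity (Theorem 1): sum_j D_q(x - j) = 1 for any real x
for x in (0.0, 0.37, -2.5):
    assert abs(D_q(x - j, q, beta).sum() - 1.0) < 1e-10

# Total mass (Theorem 2): the integral of D_q over R equals 1 (trapezoid rule)
t = np.linspace(-50.0, 50.0, 200001)
v = D_q(t, q, beta)
integral = np.sum(0.5 * (v[1:] + v[:-1]) * np.diff(t))
print(abs(integral - 1.0) < 1e-6)  # True
```

The partition-of-unity identity is what makes the normalized operators of Section 3 reproduce constants exactly, so verifying it numerically is a useful smoke test before implementing them.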
The following results are of vital importance for establishing the core theory.
Theorem 3
([22]). Let $0 < \lambda < 1$; $\bar n \in \mathbb{N}$ such that $\bar n^{1-\lambda} > 2$; $q, \beta > 0$. Then,
$$\sum_{\substack{\bar k = -\infty \\ |\bar n \bar x - \bar k| \ge \bar n^{1-\lambda}}}^{\infty} D_q(\bar n \bar x - \bar k) < 2 \max\Big(q, \frac{1}{q}\Big) e^{2\beta}\, e^{-\beta \bar n^{1-\lambda}} =: C e^{-\beta \bar n^{1-\lambda}}$$
holds for $C := 2 \max\big(q, \frac{1}{q}\big) e^{2\beta}$.
Theorem 4
([22]). Let $\bar x \in [\tilde a, \tilde b] \subset \mathbb{R}$ and $\bar n \in \mathbb{N}$ such that $\lceil \bar n \tilde a \rceil \le \lfloor \bar n \tilde b \rfloor$. For $q, \beta > 0$, we have $\beta_q > \rho_q > 0$ with $D_q(\rho_q) = D_q(0)$, and $\beta_q > 1$. Then,
$$\frac{1}{\sum_{\bar k = \lceil \bar n \tilde a \rceil}^{\lfloor \bar n \tilde b \rfloor} D_q(\bar n \bar x - \bar k)} < \max\Bigg(\frac{1}{D_q(\beta_q)}, \frac{1}{D_{1/q}(\beta_{1/q})}\Bigg) =: \Lambda_q$$
is valid. Here, $\lceil \bar n \tilde a \rceil$ represents the ceiling of the number inside, and $\lfloor \bar n \tilde b \rfloor$ represents the integral part.
Remark 1
([22]). (i) For suitable $\bar x \in [\tilde a, \tilde b]$ and all $q > 0$,
$$\lim_{\bar n \to +\infty} \sum_{\bar k = \lceil \bar n \tilde a \rceil}^{\lfloor \bar n \tilde b \rfloor} D_q(\bar n \bar x - \bar k) \ne 1$$
is satisfied.
(ii) For sufficiently large $\bar n$, $\lceil \bar n \tilde a \rceil \le \lfloor \bar n \tilde b \rfloor$ is true, where $[\tilde a, \tilde b] \subset \mathbb{R}$. Thus,
$$\sum_{\bar k = \lceil \bar n \tilde a \rceil}^{\lfloor \bar n \tilde b \rfloor} D_q(\bar n \bar x - \bar k) \le 1$$
is valid, with $\lceil \bar n \tilde a \rceil \le \bar k \le \lfloor \bar n \tilde b \rfloor$ if and only if $\tilde a \le \frac{\bar k}{\bar n} \le \tilde b$.

3. $\mathbb{C}$-Valued Normalized MNN Operators

Let us consider the Banach space of complex numbers $(\mathbb{C}, |\cdot|)$. To generate $\mathbb{C}$-valued linear MNN operators, we need the following basic information.
Remark 2
([32]). For $\bar x = (\bar x_1, \ldots, \bar x_{\bar N}) \in \mathbb{R}^{\bar N}$, $q > 0$, $\bar N \in \mathbb{N}$, set
$$M_q(\bar x_1, \ldots, \bar x_{\bar N}) := M_q(\bar x) := \prod_{\bar i = 1}^{\bar N} D_q(\bar x_{\bar i}).$$
The following also applies:
(i) For all $\bar x \in \mathbb{R}^{\bar N}$, $M_q(\bar x) > 0$;
(ii) For every $\bar x \in \mathbb{R}^{\bar N}$ and $\bar k := (\bar k_1, \ldots, \bar k_{\bar N}) \in \mathbb{Z}^{\bar N}$,
$$\sum_{\bar k = -\infty}^{\infty} M_q(\bar x - \bar k) := \sum_{\bar k_1 = -\infty}^{\infty} \sum_{\bar k_2 = -\infty}^{\infty} \cdots \sum_{\bar k_{\bar N} = -\infty}^{\infty} M_q(\bar x_1 - \bar k_1, \ldots, \bar x_{\bar N} - \bar k_{\bar N}) = 1;$$
(iii) For all $\bar x \in \mathbb{R}^{\bar N}$ and $\bar n, \bar N \in \mathbb{N}$,
$$\sum_{\bar k = -\infty}^{\infty} M_q(\bar n \bar x - \bar k) = 1;$$
and
(iv)
$$\int_{\mathbb{R}^{\bar N}} M_q(\bar x)\, d\bar x = 1$$
are satisfied. Thus, $M_q$ is a density function for the multivariate case.
Moreover, in the multivariate sense, $\|\bar x\|_\infty := \max(|\bar x_1|, \ldots, |\bar x_{\bar N}|)$, $\bar x \in \mathbb{R}^{\bar N}$, and $\infty := (\infty, \ldots, \infty)$, $-\infty := (-\infty, \ldots, -\infty)$ are preferred, so that
$$\lceil \bar n \tilde a \rceil := \big(\lceil \bar n \tilde a_1 \rceil, \ldots, \lceil \bar n \tilde a_{\bar N} \rceil\big), \qquad \lfloor \bar n \tilde b \rfloor := \big(\lfloor \bar n \tilde b_1 \rfloor, \ldots, \lfloor \bar n \tilde b_{\bar N} \rfloor\big)$$
are defined, such that $\tilde a := (\tilde a_1, \ldots, \tilde a_{\bar N})$, $\tilde b := (\tilde b_1, \ldots, \tilde b_{\bar N})$.
It is clear that for $0 < \lambda < 1$, $\bar n, \bar N \in \mathbb{N}$ and a fixed $\bar x \in \mathbb{R}^{\bar N}$,
$$\sum_{\bar k = \lceil \bar n \tilde a \rceil}^{\lfloor \bar n \tilde b \rfloor} M_q(\bar n \bar x - \bar k) = \sum_{\bar k_1 = \lceil \bar n \tilde a_1 \rceil}^{\lfloor \bar n \tilde b_1 \rfloor} \cdots \sum_{\bar k_{\bar N} = \lceil \bar n \tilde a_{\bar N} \rceil}^{\lfloor \bar n \tilde b_{\bar N} \rfloor} \prod_{\bar i = 1}^{\bar N} D_q(\bar n \bar x_{\bar i} - \bar k_{\bar i}) = \prod_{\bar i = 1}^{\bar N} \Bigg( \sum_{\bar k_{\bar i} = \lceil \bar n \tilde a_{\bar i} \rceil}^{\lfloor \bar n \tilde b_{\bar i} \rfloor} D_q(\bar n \bar x_{\bar i} - \bar k_{\bar i}) \Bigg)$$
is valid. Thus,
$$\sum_{\bar k = \lceil \bar n \tilde a \rceil}^{\lfloor \bar n \tilde b \rfloor} M_q(\bar n \bar x - \bar k) = \sum_{\substack{\bar k = \lceil \bar n \tilde a \rceil \\ \|\frac{\bar k}{\bar n} - \bar x\|_\infty \le \frac{1}{\bar n^\lambda}}}^{\lfloor \bar n \tilde b \rfloor} M_q(\bar n \bar x - \bar k) + \sum_{\substack{\bar k = \lceil \bar n \tilde a \rceil \\ \|\frac{\bar k}{\bar n} - \bar x\|_\infty > \frac{1}{\bar n^\lambda}}}^{\lfloor \bar n \tilde b \rfloor} M_q(\bar n \bar x - \bar k).$$
In the last quantity, $\|\frac{\bar k}{\bar n} - \bar x\|_\infty > \frac{1}{\bar n^\lambda}$ means the existence of some $\tilde v \in \{1, \ldots, \bar N\}$ with $|\frac{\bar k_{\tilde v}}{\bar n} - \bar x_{\tilde v}| > \frac{1}{\bar n^\lambda}$. Moreover, note that the two sums run over disjoint sets of vectors $\bar k$.
(v) One has, by following Theorem 3 and as in [36] (pp. 379–380), for $\bar n \in \mathbb{N}$ such that $\bar n^{1-\lambda} > 2$, $0 < \lambda < 1$, and $\bar x \in \prod_{\bar i = 1}^{\bar N} [\tilde a_{\bar i}, \tilde b_{\bar i}]$:
$$\sum_{\substack{\bar k = \lceil \bar n \tilde a \rceil \\ \|\frac{\bar k}{\bar n} - \bar x\|_\infty > \frac{1}{\bar n^\lambda}}}^{\lfloor \bar n \tilde b \rfloor} M_q(\bar n \bar x - \bar k) < C e^{-\beta \bar n^{1-\lambda}}.$$
(vi) By [36], one also has that
$$0 < \frac{1}{\sum_{\bar k = \lceil \bar n \tilde a \rceil}^{\lfloor \bar n \tilde b \rfloor} M_q(\bar n \bar x - \bar k)} < \Lambda_q^{\bar N},$$
for all $\bar x \in \prod_{\bar i = 1}^{\bar N} [\tilde a_{\bar i}, \tilde b_{\bar i}]$, $\bar n \in \mathbb{N}$, where $\Lambda_q := \max\big(\frac{1}{D_q(\beta_q)}, \frac{1}{D_{1/q}(\beta_{1/q})}\big)$ is the constant of Theorem 4.
(vii) For $\bar n \in \mathbb{N}$ such that $\bar n^{1-\lambda} > 2$, $\bar x \in \mathbb{R}^{\bar N}$,
$$\sum_{\substack{\bar k = -\infty \\ \|\frac{\bar k}{\bar n} - \bar x\|_\infty > \frac{1}{\bar n^\lambda}}}^{\infty} M_q(\bar n \bar x - \bar k) < C e^{-\beta \bar n^{1-\lambda}},$$
such that $0 < \lambda < 1$. Moreover, it yields that for suitable $\bar x \in \prod_{\bar i = 1}^{\bar N} [\tilde a_{\bar i}, \tilde b_{\bar i}]$,
$$\lim_{\bar n \to \infty} \sum_{\bar k = \lceil \bar n \tilde a \rceil}^{\lfloor \bar n \tilde b \rfloor} M_q(\bar n \bar x - \bar k) \ne 1$$
is true.
Again inspired by [32,44]:
Definition 1.
For every $f \in C\big(\prod_{\bar i = 1}^{\bar N} [\tilde a_{\bar i}, \tilde b_{\bar i}], \mathbb{C}\big)$, $\bar x = (\bar x_1, \ldots, \bar x_{\bar N}) \in \prod_{\bar i = 1}^{\bar N} [\tilde a_{\bar i}, \tilde b_{\bar i}]$, $\bar N \in \mathbb{N}$, we define the "complex-valued linear normalized MNN operator" as:
$$S_{\bar n}(f, \bar x_1, \ldots, \bar x_{\bar N}) := S_{\bar n}(f, \bar x) := \frac{\sum_{\bar k = \lceil \bar n \tilde a \rceil}^{\lfloor \bar n \tilde b \rfloor} f\big(\frac{\bar k}{\bar n}\big)\, M_q(\bar n \bar x - \bar k)}{\sum_{\bar k = \lceil \bar n \tilde a \rceil}^{\lfloor \bar n \tilde b \rfloor} M_q(\bar n \bar x - \bar k)} = \frac{\sum_{\bar k_1 = \lceil \bar n \tilde a_1 \rceil}^{\lfloor \bar n \tilde b_1 \rfloor} \sum_{\bar k_2 = \lceil \bar n \tilde a_2 \rceil}^{\lfloor \bar n \tilde b_2 \rfloor} \cdots \sum_{\bar k_{\bar N} = \lceil \bar n \tilde a_{\bar N} \rceil}^{\lfloor \bar n \tilde b_{\bar N} \rfloor} f\big(\frac{\bar k_1}{\bar n}, \ldots, \frac{\bar k_{\bar N}}{\bar n}\big) \prod_{\bar i = 1}^{\bar N} D_q(\bar n \bar x_{\bar i} - \bar k_{\bar i})}{\prod_{\bar i = 1}^{\bar N} \Big( \sum_{\bar k_{\bar i} = \lceil \bar n \tilde a_{\bar i} \rceil}^{\lfloor \bar n \tilde b_{\bar i} \rfloor} D_q(\bar n \bar x_{\bar i} - \bar k_{\bar i}) \Big)},$$
such that $\lceil \bar n \tilde a_{\bar i} \rceil \le \lfloor \bar n \tilde b_{\bar i} \rfloor$, $\bar i = 1, \ldots, \bar N$. Furthermore, notice that for large enough $\bar n \in \mathbb{N}$, $\tilde a_{\bar i} \le \frac{\bar k_{\bar i}}{\bar n} \le \tilde b_{\bar i}$ is true if and only if $\lceil \bar n \tilde a_{\bar i} \rceil \le \bar k_{\bar i} \le \lfloor \bar n \tilde b_{\bar i} \rfloor$, $\bar i = 1, \ldots, \bar N$.
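As a small numerical illustration of Definition 1, the sketch below evaluates $S_{\bar n}$ for $\bar N = 2$ on the box $[0,1]^2$ with a smooth complex-valued test function. The parameter choices ($q$, $\beta$, the test function, the evaluation point) are illustrative assumptions only.

```python
import numpy as np
from math import ceil, floor

def psi_q(x, q, beta):
    return (1 - q * np.exp(-beta * x)) / (1 + q * np.exp(-beta * x))

def D_q(x, q, beta):
    return 0.25 * (psi_q(x + 1, q, beta) - psi_q(x - 1, q, beta))

def S_n(f, x, a, b, n, q=1.5, beta=2.0):
    """Normalized MNN operator of Definition 1 on the box prod_i [a_i, b_i]."""
    grids = [np.arange(ceil(n * ai), floor(n * bi) + 1) for ai, bi in zip(a, b)]
    mesh = np.meshgrid(*grids, indexing="ij")          # multi-indices k
    weight = np.ones_like(mesh[0], dtype=float)
    for xi, ki in zip(x, mesh):
        weight *= D_q(n * xi - ki, q, beta)            # M_q(n x - k), product density
    values = f(*[ki / n for ki in mesh])               # f(k / n), complex allowed
    return (values * weight).sum() / weight.sum()      # normalized average

f = lambda u, v: np.exp(1j * (u + 2 * v))              # complex-valued test function
x = (0.4, 0.6)
errs = [abs(S_n(f, x, (0, 0), (1, 1), n) - f(*x)) for n in (8, 32, 128)]
print(errs[2] < errs[0] and errs[2] < 1e-2)  # True: the error shrinks as n grows
```

Because the weights $M_q(\bar n \bar x - \bar k)$ concentrate around $\bar k \approx \bar n \bar x$, the normalized average converges to $f(\bar x)$ as $\bar n \to \infty$, which is the behavior quantified by the theorems of Section 4.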
Remark 3.
For all $g \in C\big(\prod_{\bar i = 1}^{\bar N} [\tilde a_{\bar i}, \tilde b_{\bar i}]\big)$, we define the "associate operator" as:
$$\tilde S_{\bar n}(g, \bar x) := \frac{\sum_{\bar k = \lceil \bar n \tilde a \rceil}^{\lfloor \bar n \tilde b \rfloor} g\big(\frac{\bar k}{\bar n}\big)\, M_q(\bar n \bar x - \bar k)}{\sum_{\bar k = \lceil \bar n \tilde a \rceil}^{\lfloor \bar n \tilde b \rfloor} M_q(\bar n \bar x - \bar k)}.$$
It is obvious that $\tilde S_{\bar n}$ is a positive linear operator. Thus, one has:
(i) $\tilde S_{\bar n}(1, \bar x) = 1$ for all $\bar x \in \prod_{\bar i = 1}^{\bar N} [\tilde a_{\bar i}, \tilde b_{\bar i}]$;
(ii) $S_{\bar n}(f) \in C\big(\prod_{\bar i = 1}^{\bar N} [\tilde a_{\bar i}, \tilde b_{\bar i}], \mathbb{C}\big)$ and $\tilde S_{\bar n}(g) \in C\big(\prod_{\bar i = 1}^{\bar N} [\tilde a_{\bar i}, \tilde b_{\bar i}]\big)$.
Proposition 1.
For all $\bar x \in \prod_{\bar i = 1}^{\bar N} [\tilde a_{\bar i}, \tilde b_{\bar i}]$, $f \in C\big(\prod_{\bar i = 1}^{\bar N} [\tilde a_{\bar i}, \tilde b_{\bar i}], \mathbb{C}\big)$:
$$\big| S_{\bar n}(f, \bar x) \big| \le \frac{\sum_{\bar k = \lceil \bar n \tilde a \rceil}^{\lfloor \bar n \tilde b \rfloor} \big| f\big(\frac{\bar k}{\bar n}\big) \big|\, M_q(\bar n \bar x - \bar k)}{\sum_{\bar k = \lceil \bar n \tilde a \rceil}^{\lfloor \bar n \tilde b \rfloor} M_q(\bar n \bar x - \bar k)} = \tilde S_{\bar n}\big(|f|, \bar x\big)$$
is satisfied.
Remark 4.
Since $|f| \in C\big(\prod_{\bar i = 1}^{\bar N} [\tilde a_{\bar i}, \tilde b_{\bar i}]\big)$, for $\bar n \in \mathbb{N}$,
$$\big| S_{\bar n}(f, \bar x) \big| \le \tilde S_{\bar n}\big(|f|, \bar x\big)$$
is verified.
Proposition 2.
When $g \in C\big(\prod_{\bar i = 1}^{\bar N} [\tilde a_{\bar i}, \tilde b_{\bar i}]\big)$ and $c \in \mathbb{C}$, it yields that $c g \in C\big(\prod_{\bar i = 1}^{\bar N} [\tilde a_{\bar i}, \tilde b_{\bar i}], \mathbb{C}\big)$. Thus, for all $\bar x \in \prod_{\bar i = 1}^{\bar N} [\tilde a_{\bar i}, \tilde b_{\bar i}]$,
$$S_{\bar n}(c g, \bar x) = c\, \tilde S_{\bar n}(g, \bar x)$$
is true. Moreover, one has
$$S_{\bar n}(c) = c$$
for every $c \in \mathbb{C}$, precisely because $\tilde S_{\bar n}(1) = 1$.
Proposition 3.
For $\bar x := (\bar x_1, \ldots, \bar x_{\bar N}) \in \prod_{\bar i = 1}^{\bar N} [\tilde a_{\bar i}, \tilde b_{\bar i}]$, $f \in C\big(\prod_{\bar i = 1}^{\bar N} [\tilde a_{\bar i}, \tilde b_{\bar i}], \mathbb{C}\big)$, it is suitable to write
$$S^*_{\bar n}(f, \bar x) := \sum_{\bar k = \lceil \bar n \tilde a \rceil}^{\lfloor \bar n \tilde b \rfloor} f\Big(\frac{\bar k}{\bar n}\Big)\, M_q(\bar n \bar x - \bar k) = \sum_{\bar k_1 = \lceil \bar n \tilde a_1 \rceil}^{\lfloor \bar n \tilde b_1 \rfloor} \sum_{\bar k_2 = \lceil \bar n \tilde a_2 \rceil}^{\lfloor \bar n \tilde b_2 \rfloor} \cdots \sum_{\bar k_{\bar N} = \lceil \bar n \tilde a_{\bar N} \rceil}^{\lfloor \bar n \tilde b_{\bar N} \rfloor} f\Big(\frac{\bar k_1}{\bar n}, \ldots, \frac{\bar k_{\bar N}}{\bar n}\Big) \prod_{\bar i = 1}^{\bar N} D_q(\bar n \bar x_{\bar i} - \bar k_{\bar i}),$$
where $\lceil \bar n \tilde a_{\bar i} \rceil \le \lfloor \bar n \tilde b_{\bar i} \rfloor$, $\bar i = 1, \ldots, \bar N$, $\bar N \in \mathbb{N}$.
Remark 5.
Since
$$S_{\bar n}(f, \bar x) = \frac{S^*_{\bar n}(f, \bar x)}{\sum_{\bar k = \lceil \bar n \tilde a \rceil}^{\lfloor \bar n \tilde b \rfloor} M_q(\bar n \bar x - \bar k)},$$
one has that
$$S_{\bar n}(f, \bar x) - f(\bar x) = \frac{S^*_{\bar n}(f, \bar x) - f(\bar x) \sum_{\bar k = \lceil \bar n \tilde a \rceil}^{\lfloor \bar n \tilde b \rfloor} M_q(\bar n \bar x - \bar k)}{\sum_{\bar k = \lceil \bar n \tilde a \rceil}^{\lfloor \bar n \tilde b \rfloor} M_q(\bar n \bar x - \bar k)}.$$
Thus, for every $\bar x \in \prod_{\bar i = 1}^{\bar N} [\tilde a_{\bar i}, \tilde b_{\bar i}]$,
$$\big| S_{\bar n}(f, \bar x) - f(\bar x) \big| \overset{(6)}{\le} \Lambda_q^{\bar N}\, \Bigg| S^*_{\bar n}(f, \bar x) - f(\bar x) \sum_{\bar k = \lceil \bar n \tilde a \rceil}^{\lfloor \bar n \tilde b \rfloor} M_q(\bar n \bar x - \bar k) \Bigg|.$$
To estimate the quantity on the right-hand side of (8), we need some basic concepts.
Definition 2
([37]). The first modulus of continuity of $f$ is defined as below:
$$\omega_1(f, \delta) := \sup_{\bar x, \bar y \in \Omega:\ \|\bar x - \bar y\|_p \le \delta} \big| f(\bar x) - f(\bar y) \big|, \qquad 0 < \delta \le \mathrm{diam}(\Omega),$$
where $\Omega$ is a convex and compact subset of $\big(\mathbb{R}^{\bar N}, \|\cdot\|_p\big)$, $p \in [1, \infty]$, and $(\mathbb{C}, |\cdot|)$ is a Banach space, for all $f \in C(\Omega, \mathbb{C})$. Whenever $\delta > \mathrm{diam}(\Omega)$,
$$\omega_1(f, \delta) = \omega_1\big(f, \mathrm{diam}(\Omega)\big)$$
is verified. Moreover, notice that $\omega_1(f, \delta)$ is increasing for $\delta > 0$, and for every $f \in C(\Omega, \mathbb{C})$, $\omega_1(f, \delta) \to 0$ as $\delta \to 0$.
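A brute-force grid approximation of $\omega_1$ with $p = \infty$ can help build intuition for Definition 2. In the sketch below, the test function, the box, and the grid resolution are illustrative assumptions; a finite grid only gives a lower approximation of the supremum.

```python
import numpy as np

def omega_1(f, delta, a, b, grid=40):
    """Grid approximation of the first modulus of continuity with p = infinity:
    max over ||x - y||_inf <= delta of |f(x) - f(y)| on the box prod_i [a_i, b_i]."""
    axes = np.meshgrid(*[np.linspace(ai, bi, grid) for ai, bi in zip(a, b)],
                       indexing="ij")
    pts = np.stack([ax.ravel() for ax in axes], axis=1)           # all grid points
    vals = f(*[pts[:, i] for i in range(pts.shape[1])])
    dist = np.abs(pts[:, None, :] - pts[None, :, :]).max(axis=2)  # Chebyshev distance
    diff = np.abs(vals[:, None] - vals[None, :])
    return diff[dist <= delta + 1e-12].max()

f = lambda u, v: np.sin(u + v)        # Lipschitz with constant <= 2 in the sup norm
w_small = omega_1(f, 0.1, (0, 0), (1, 1))
w_large = omega_1(f, 0.5, (0, 0), (1, 1))
print(w_small <= w_large)  # True: omega_1 is increasing in delta
```

For this Lipschitz test function, $\omega_1(f, \delta) \le 2\delta$, so the monotonicity and the $\omega_1(f, \delta) \to 0$ behavior stated above are both visible numerically.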
Proposition 4.
The partial derivative of $f$ is denoted by $f_{\bar\alpha}$ for $f \in C^2\big(\prod_{\bar i = 1}^{\bar N} [\tilde a_{\bar i}, \tilde b_{\bar i}], \mathbb{C}\big)$, $\bar N \in \mathbb{N}$, where $\bar\alpha := (\bar\alpha_1, \ldots, \bar\alpha_{\bar N})$, $\bar\alpha_{\bar i} \in \mathbb{Z}_+$, $\bar i = 1, \ldots, \bar N$. Moreover, for $\bar l = 0, 1, 2$ and $|\bar\alpha| := \sum_{\bar i = 1}^{\bar N} \bar\alpha_{\bar i} = \bar l$, we write $f_{\bar\alpha} := \frac{\partial^{|\bar\alpha|} f}{\partial \bar x^{\bar\alpha}}$, which is called a partial derivative of order $\bar l$. One denotes
$$\omega_{1,2}^{\max}(f_{\bar\alpha}, h) := \max_{\bar\alpha : |\bar\alpha| = 2} \omega_1(f_{\bar\alpha}, h),$$
as well as
$$\|f_{\bar\alpha}\|_{\infty,2}^{\max} := \max_{|\bar\alpha| = 2} \|f_{\bar\alpha}\|_\infty,$$
where $\|\cdot\|_\infty$ represents the supremum norm.
Now, we take $p = \infty$:
Theorem 5
([45]). (Trigonometric Taylor formula) If $f \in C^2([t_1, t_2], \mathbb{C})$ with $\tilde a, \bar x \in [t_1, t_2]$, then
$$f(\bar x) - f(\tilde a) = f'(\tilde a) \sin(\bar x - \tilde a) + 2 f''(\tilde a) \sin^2\Big(\frac{\bar x - \tilde a}{2}\Big) + \int_{\tilde a}^{\bar x} \Big[ f''(t) + f(t) - \big( f''(\tilde a) + f(\tilde a) \big) \Big] \sin(\bar x - t)\, dt$$
is satisfied.
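The identity of Theorem 5 can be checked numerically for any concrete $C^2$ function. Below, $f(t) = e^{t/2}$ and the endpoints $\tilde a = 0.3$, $\bar x = 1.1$ are arbitrary choices for the check; the integral is approximated by the composite trapezoid rule.

```python
import numpy as np

# Trigonometric Taylor formula (Theorem 5), checked for f(t) = exp(t/2):
# f(x) - f(a) = f'(a) sin(x-a) + 2 f''(a) sin^2((x-a)/2)
#               + int_a^x [f''(t) + f(t) - (f''(a) + f(a))] sin(x-t) dt
f   = lambda t: np.exp(0.5 * t)
fp  = lambda t: 0.5 * f(t)      # first derivative
fpp = lambda t: 0.25 * f(t)     # second derivative

a, x = 0.3, 1.1
t = np.linspace(a, x, 200001)
integrand = (fpp(t) + f(t) - (fpp(a) + f(a))) * np.sin(x - t)
integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))

rhs = fp(a) * np.sin(x - a) + 2 * fpp(a) * np.sin((x - a) / 2) ** 2 + integral
print(abs((f(x) - f(a)) - rhs) < 1e-9)  # True
```

Note that $2\sin^2\big(\frac{\bar x - \tilde a}{2}\big) = 1 - \cos(\bar x - \tilde a)$, which is the form in which this term reappears in the estimates of Section 4.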
Remark 6.
For $0 \le t \le 1$, $\bar z = (\bar z_1, \ldots, \bar z_{\bar k})$, $\bar x_0 := (\bar x_{01}, \ldots, \bar x_{0\bar k}) \in \Upsilon$, where $\Upsilon$ is an open convex subset of $\mathbb{R}^{\bar k}$, $\bar k \ge 2$, it is obvious that $\bar x_0 + t(\bar z - \bar x_0) \in \Upsilon$. Let $f \in C^2(\Upsilon, \mathbb{C})$; then,
$$g_{\bar z}(t) := f\big(\bar x_0 + t(\bar z - \bar x_0)\big),$$
$$g_{\bar z}'(t) = \sum_{\bar i = 1}^{\bar k} (\bar z_{\bar i} - \bar x_{0\bar i}) \frac{\partial f}{\partial \bar x_{\bar i}}\big(\bar x_{01} + t(\bar z_1 - \bar x_{01}), \ldots, \bar x_{0\bar k} + t(\bar z_{\bar k} - \bar x_{0\bar k})\big),$$
$$g_{\bar z}''(t) = \Bigg[ \Bigg( \sum_{\bar i = 1}^{\bar k} (\bar z_{\bar i} - \bar x_{0\bar i}) \frac{\partial}{\partial \bar x_{\bar i}} \Bigg)^{2} f \Bigg] \big(\bar x_{01} + t(\bar z_1 - \bar x_{01}), \ldots, \bar x_{0\bar k} + t(\bar z_{\bar k} - \bar x_{0\bar k})\big)$$
are true, where $f_{\bar\alpha} := \frac{\partial^{|\bar\alpha|} f}{\partial \bar x^{\bar\alpha}}$ with $\bar\alpha := (\bar\alpha_1, \ldots, \bar\alpha_{\bar k})$, $\bar\alpha_{\bar i} \in \mathbb{Z}_+$, $\bar i = 1, \ldots, \bar k$, and $|\bar\alpha| := \sum_{\bar i = 1}^{\bar k} \bar\alpha_{\bar i} = 2$. Thus, for $t = 0$,
$$g_{\bar z}(0) = f(\bar x_0), \qquad g_{\bar z}(1) = f(\bar z), \qquad g_{\bar z}'(0) = \sum_{\bar i = 1}^{\bar k} (\bar z_{\bar i} - \bar x_{0\bar i}) \frac{\partial f}{\partial \bar x_{\bar i}}(\bar x_{01}, \ldots, \bar x_{0\bar k}),$$
and
$$g_{\bar z}''(0) = \Bigg[ \Bigg( \sum_{\bar i = 1}^{\bar k} (\bar z_{\bar i} - \bar x_{0\bar i}) \frac{\partial}{\partial \bar x_{\bar i}} \Bigg)^{2} f \Bigg] (\bar x_{01}, \ldots, \bar x_{0\bar k}).$$
Obviously, $g_{\bar z} \in C^2([0,1], \mathbb{C})$, and one obtains, by Theorem 5, that
$$f(\bar z_1, \ldots, \bar z_{\bar k}) - f(\bar x_{01}, \ldots, \bar x_{0\bar k}) = g_{\bar z}(1) - g_{\bar z}(0) = g_{\bar z}'(0) \sin 1 + 2 g_{\bar z}''(0) \sin^2\frac{1}{2} + \int_0^1 \Big[ g_{\bar z}''(t) + g_{\bar z}(t) - \big( g_{\bar z}''(0) + g_{\bar z}(0) \big) \Big] \sin(1 - t)\, dt.$$
Theorem 6
([45]). (Hyperbolic Taylor formula) If $f \in C^2([t_1, t_2], \mathbb{C})$ with $\tilde a, \bar x \in [t_1, t_2]$, then
$$f(\bar x) - f(\tilde a) = f'(\tilde a) \sinh(\bar x - \tilde a) + 2 f''(\tilde a) \sinh^2\Big(\frac{\bar x - \tilde a}{2}\Big) + \int_{\tilde a}^{\bar x} \Big[ f''(t) - f(t) - \big( f''(\tilde a) - f(\tilde a) \big) \Big] \sinh(\bar x - t)\, dt$$
is verified.
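As with the trigonometric case, Theorem 6 can be verified numerically for a concrete $C^2$ function. Here $f(t) = \frac{1}{t+2}$ and the endpoints $\tilde a = -0.5$, $\bar x = 0.9$ are arbitrary choices for the check.

```python
import numpy as np

# Hyperbolic Taylor formula (Theorem 6), checked for f(t) = 1/(t + 2):
# f(x) - f(a) = f'(a) sinh(x-a) + 2 f''(a) sinh^2((x-a)/2)
#               + int_a^x [f''(t) - f(t) - (f''(a) - f(a))] sinh(x-t) dt
f   = lambda t: 1.0 / (t + 2.0)
fp  = lambda t: -1.0 / (t + 2.0) ** 2    # first derivative
fpp = lambda t: 2.0 / (t + 2.0) ** 3     # second derivative

a, x = -0.5, 0.9
t = np.linspace(a, x, 200001)
integrand = (fpp(t) - f(t) - (fpp(a) - f(a))) * np.sinh(x - t)
integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))

rhs = fp(a) * np.sinh(x - a) + 2 * fpp(a) * np.sinh((x - a) / 2) ** 2 + integral
print(abs((f(x) - f(a)) - rhs) < 1e-9)  # True
```

Here $2\sinh^2\big(\frac{\bar x - \tilde a}{2}\big) = \cosh(\bar x - \tilde a) - 1$; the only structural differences from Theorem 5 are the hyperbolic kernel $\sinh(\bar x - t)$ and the minus sign in front of $f$ inside the integral.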
Remark 7.
In the hyperbolic case, one analogously has that
$$f(\bar z_1, \ldots, \bar z_{\bar k}) - f(\bar x_{01}, \ldots, \bar x_{0\bar k}) = g_{\bar z}(1) - g_{\bar z}(0) = g_{\bar z}'(0) \sinh 1 + 2 g_{\bar z}''(0) \sinh^2\frac{1}{2} + \int_0^1 \Big[ g_{\bar z}''(t) - g_{\bar z}(t) - \big( g_{\bar z}''(0) - g_{\bar z}(0) \big) \Big] \sinh(1 - t)\, dt.$$
Remark 8.
Here, let $\frac{\bar k}{\bar n}, \bar x \in \prod_{\bar i = 1}^{\bar N} [\tilde a_{\bar i}, \tilde b_{\bar i}]$ be such that $\frac{\bar k}{\bar n} := \big(\frac{\bar k_1}{\bar n}, \ldots, \frac{\bar k_{\bar N}}{\bar n}\big)$ and $\bar x := (\bar x_1, \ldots, \bar x_{\bar N})$, $\bar N \in \mathbb{N}$. For $f \in C^2\big(\prod_{\bar i = 1}^{\bar N} [\tilde a_{\bar i}, \tilde b_{\bar i}], \mathbb{C}\big)$ and $g_{\frac{\bar k}{\bar n}}(t) := f\big(\bar x + t\big(\frac{\bar k}{\bar n} - \bar x\big)\big)$, $0 \le t \le 1$, one has (by (10))
$$f\Big(\frac{\bar k}{\bar n}\Big) - f(\bar x) = \Bigg[ \sum_{\bar i = 1}^{\bar N} \Big( \frac{\bar k_{\bar i}}{\bar n} - \bar x_{\bar i} \Big) \frac{\partial f}{\partial \bar x_{\bar i}}(\bar x) \Bigg] \sin 1 + 2 \Bigg[ \Bigg( \sum_{\bar i = 1}^{\bar N} \Big( \frac{\bar k_{\bar i}}{\bar n} - \bar x_{\bar i} \Big) \frac{\partial}{\partial \bar x_{\bar i}} \Bigg)^{2} f \Bigg](\bar x)\, \sin^2\frac{1}{2} + \tilde r.$$
Let us denote the remainder by
$$\tilde r := \int_0^1 \Bigg\{ \Bigg[ \Bigg( \sum_{\bar i = 1}^{\bar N} \Big( \frac{\bar k_{\bar i}}{\bar n} - \bar x_{\bar i} \Big) \frac{\partial}{\partial \bar x_{\bar i}} \Bigg)^{2} f \Bigg]\Big(\bar x + t\Big(\frac{\bar k}{\bar n} - \bar x\Big)\Big) + f\Big(\bar x + t\Big(\frac{\bar k}{\bar n} - \bar x\Big)\Big) - \Bigg[ \Bigg( \sum_{\bar i = 1}^{\bar N} \Big( \frac{\bar k_{\bar i}}{\bar n} - \bar x_{\bar i} \Big) \frac{\partial}{\partial \bar x_{\bar i}} \Bigg)^{2} f \Bigg](\bar x) - f(\bar x) \Bigg\} \sin(1 - t)\, dt$$
$$= \int_0^1 \Bigg\{ \sum_{\bar\alpha : |\bar\alpha| = 2} \frac{2}{\prod_{\bar i = 1}^{\bar N} \bar\alpha_{\bar i}!} \prod_{\bar i = 1}^{\bar N} \Big( \frac{\bar k_{\bar i}}{\bar n} - \bar x_{\bar i} \Big)^{\bar\alpha_{\bar i}} \Big[ f_{\bar\alpha}\Big(\bar x + t\Big(\frac{\bar k}{\bar n} - \bar x\Big)\Big) - f_{\bar\alpha}(\bar x) \Big] + \Big[ f\Big(\bar x + t\Big(\frac{\bar k}{\bar n} - \bar x\Big)\Big) - f(\bar x) \Big] \Bigg\} \sin(1 - t)\, dt.$$
Thus, it holds that
$$|\tilde r| \le \int_0^1 \Bigg\{ \sum_{\bar\alpha : |\bar\alpha| = 2} \frac{2}{\prod_{\bar i = 1}^{\bar N} \bar\alpha_{\bar i}!} \prod_{\bar i = 1}^{\bar N} \Big| \frac{\bar k_{\bar i}}{\bar n} - \bar x_{\bar i} \Big|^{\bar\alpha_{\bar i}} \Big| f_{\bar\alpha}\Big(\bar x + t\Big(\frac{\bar k}{\bar n} - \bar x\Big)\Big) - f_{\bar\alpha}(\bar x) \Big| + \Big| f\Big(\bar x + t\Big(\frac{\bar k}{\bar n} - \bar x\Big)\Big) - f(\bar x) \Big| \Bigg\} \sin(1 - t)\, dt$$
$$\le \int_0^1 \Bigg\{ \sum_{\bar\alpha : |\bar\alpha| = 2} \frac{2}{\prod_{\bar i = 1}^{\bar N} \bar\alpha_{\bar i}!} \prod_{\bar i = 1}^{\bar N} \Big| \frac{\bar k_{\bar i}}{\bar n} - \bar x_{\bar i} \Big|^{\bar\alpha_{\bar i}}\, \omega_1\Big( f_{\bar\alpha}, t \Big\| \frac{\bar k}{\bar n} - \bar x \Big\|_\infty \Big) + \omega_1\Big( f, t \Big\| \frac{\bar k}{\bar n} - \bar x \Big\|_\infty \Big) \Bigg\} \sin(1 - t)\, dt.$$
Note that for $0 < \lambda < 1$ and $\bar i = 1, \ldots, \bar N$,
$$\Big\| \frac{\bar k}{\bar n} - \bar x \Big\|_\infty \le \frac{1}{\bar n^\lambda} \implies \Big| \frac{\bar k_{\bar i}}{\bar n} - \bar x_{\bar i} \Big| \le \frac{1}{\bar n^\lambda}$$
is satisfied. One also observes, using $\sum_{\bar\alpha : |\bar\alpha| = 2} \frac{2}{\prod_{\bar i = 1}^{\bar N} \bar\alpha_{\bar i}!} = \bar N^2$ (by the multinomial theorem), that the last bound is at most
$$\Bigg[ \omega_{1,2}^{\max}\Big( f_{\bar\alpha}, \frac{1}{\bar n^\lambda} \Big) \frac{\bar N^2}{\bar n^{2\lambda}} + \omega_1\Big( f, \frac{1}{\bar n^\lambda} \Big) \Bigg] \int_0^1 \sin(1 - t)\, dt = (1 - \cos 1) \Bigg[ \omega_{1,2}^{\max}\Big( f_{\bar\alpha}, \frac{1}{\bar n^\lambda} \Big) \frac{\bar N^2}{\bar n^{2\lambda}} + \omega_1\Big( f, \frac{1}{\bar n^\lambda} \Big) \Bigg].$$
One has proved that
$$|\tilde r| \le (1 - \cos 1) \Bigg[ \omega_{1,2}^{\max}\Big( f_{\bar\alpha}, \frac{1}{\bar n^\lambda} \Big) \frac{\bar N^2}{\bar n^{2\lambda}} + \omega_1\Big( f, \frac{1}{\bar n^\lambda} \Big) \Bigg],$$
whenever $\big\| \frac{\bar k}{\bar n} - \bar x \big\|_\infty \le \frac{1}{\bar n^\lambda}$.
Moreover, for $\tilde a := (\tilde a_1, \ldots, \tilde a_{\bar N})$, $\tilde b := (\tilde b_1, \ldots, \tilde b_{\bar N})$, in general,
$$|\tilde r| \le \int_0^1 \Bigg\{ \sum_{\bar\alpha : |\bar\alpha| = 2} \frac{2}{\prod_{\bar i = 1}^{\bar N} \bar\alpha_{\bar i}!} \prod_{\bar i = 1}^{\bar N} (\tilde b_{\bar i} - \tilde a_{\bar i})^{\bar\alpha_{\bar i}}\, 2 \|f_{\bar\alpha}\|_\infty + 2 \|f\|_\infty \Bigg\} \sin(1 - t)\, dt$$
$$\le \Big[ 2 \|\tilde b - \tilde a\|_\infty^2\, \|f_{\bar\alpha}\|_{\infty,2}^{\max}\, \bar N^2 + 2 \|f\|_\infty \Big] \int_0^1 \sin(1 - t)\, dt = \Big[ 2 \|\tilde b - \tilde a\|_\infty^2\, \|f_{\bar\alpha}\|_{\infty,2}^{\max}\, \bar N^2 + 2 \|f\|_\infty \Big] (1 - \cos 1)$$
is verified.
One has justified that
$$|\tilde r| \le \Big[ 2 \|\tilde b - \tilde a\|_\infty^2\, \|f_{\bar\alpha}\|_{\infty,2}^{\max}\, \bar N^2 + 2 \|f\|_\infty \Big] (1 - \cos 1) =: \rho.$$

4. Approximation Results

Here, the entire background comes from [42,44,46]. As a first step, employing the smoothness of f, trigonometric approximation will be examined.
Theorem 7.
Let $f \in C^2\big(\prod_{\bar i = 1}^{\bar N} [\tilde a_{\bar i}, \tilde b_{\bar i}], \mathbb{C}\big)$, $0 < \lambda < 1$, $\bar n, \bar N \in \mathbb{N}$, $\bar n^{1-\lambda} > 2$; $\bar x, \bar x_0 \in \prod_{\bar i = 1}^{\bar N} [\tilde a_{\bar i}, \tilde b_{\bar i}]$, $\tilde a := (\tilde a_1, \ldots, \tilde a_{\bar N})$, $\tilde b := (\tilde b_1, \ldots, \tilde b_{\bar N})$. Then, the following inequalities are satisfied:
(i)
$$\Bigg| S_{\bar n}(f, \bar x) - f(\bar x) - \Bigg[ \sum_{\bar i = 1}^{\bar N} \frac{\partial f(\bar x)}{\partial \bar x_{\bar i}}\, S_{\bar n}\big( (\cdot - \bar x_{\bar i}), \bar x \big) \Bigg] \sin 1 - 4 \Bigg[ \sum_{\bar\alpha : |\bar\alpha| = 2} f_{\bar\alpha}(\bar x)\, \frac{1}{\prod_{\bar i = 1}^{\bar N} \bar\alpha_{\bar i}!}\, S_{\bar n}\Big( \prod_{\bar i = 1}^{\bar N} (\cdot - \bar x_{\bar i})^{\bar\alpha_{\bar i}}, \bar x \Big) \Bigg] \sin^2\frac{1}{2} \Bigg|$$
$$\le \Lambda_q^{\bar N} \Bigg\{ (1 - \cos 1) \Bigg[ \omega_{1,2}^{\max}\Big( f_{\bar\alpha}, \frac{1}{\bar n^\lambda} \Big) \frac{\bar N^2}{\bar n^{2\lambda}} + \omega_1\Big( f, \frac{1}{\bar n^\lambda} \Big) \Bigg] + \Big[ 2 \|\tilde b - \tilde a\|_\infty^2\, \|f_{\bar\alpha}\|_{\infty,2}^{\max}\, \bar N^2 + 2 \|f\|_\infty \Big] (1 - \cos 1)\, C e^{-\beta \bar n^{1-\lambda}} \Bigg\};$$
(ii) assuming that $\frac{\partial f(\bar x_0)}{\partial \bar x_{\bar i}} = 0$, $\bar i = 1, \ldots, \bar N$, and $f_{\bar\alpha}(\bar x_0) = 0$ for all $\bar\alpha : |\bar\alpha| = 2$, we have that
$$\big| S_{\bar n}(f, \bar x_0) - f(\bar x_0) \big| \le \Lambda_q^{\bar N} \Bigg\{ (1 - \cos 1) \Bigg[ \omega_{1,2}^{\max}\Big( f_{\bar\alpha}, \frac{1}{\bar n^\lambda} \Big) \frac{\bar N^2}{\bar n^{2\lambda}} + \omega_1\Big( f, \frac{1}{\bar n^\lambda} \Big) \Bigg] + \Big[ 2 \|\tilde b - \tilde a\|_\infty^2\, \|f_{\bar\alpha}\|_{\infty,2}^{\max}\, \bar N^2 + 2 \|f\|_\infty \Big] (1 - \cos 1)\, C e^{-\beta \bar n^{1-\lambda}} \Bigg\};$$
(iii)
$$\big| S_{\bar n}(f, \bar x) - f(\bar x) \big| \le \Lambda_q^{\bar N} \Bigg\{ \sum_{\bar i = 1}^{\bar N} \Big| \frac{\partial f(\bar x)}{\partial \bar x_{\bar i}} \Big| \Big[ \frac{1}{\bar n^\lambda} + (\tilde b_{\bar i} - \tilde a_{\bar i})\, C e^{-\beta \bar n^{1-\lambda}} \Big] \sin 1 + 4 \sum_{\bar\alpha : |\bar\alpha| = 2} |f_{\bar\alpha}(\bar x)|\, \frac{1}{\prod_{\bar i = 1}^{\bar N} \bar\alpha_{\bar i}!} \Big[ \frac{1}{\bar n^{2\lambda}} + \prod_{\bar i = 1}^{\bar N} (\tilde b_{\bar i} - \tilde a_{\bar i})^{\bar\alpha_{\bar i}}\, C e^{-\beta \bar n^{1-\lambda}} \Big] \sin^2\frac{1}{2}$$
$$+ (1 - \cos 1) \Bigg[ \omega_{1,2}^{\max}\Big( f_{\bar\alpha}, \frac{1}{\bar n^\lambda} \Big) \frac{\bar N^2}{\bar n^{2\lambda}} + \omega_1\Big( f, \frac{1}{\bar n^\lambda} \Big) \Bigg] + \Big[ 2 \|\tilde b - \tilde a\|_\infty^2\, \|f_{\bar\alpha}\|_{\infty,2}^{\max}\, \bar N^2 + 2 \|f\|_\infty \Big] (1 - \cos 1)\, C e^{-\beta \bar n^{1-\lambda}} \Bigg\};$$
and
(iv)
$$\|S_{\bar n}(f) - f\|_\infty \le \Lambda_q^{\bar N} \Bigg\{ \sum_{\bar i = 1}^{\bar N} \Big\| \frac{\partial f}{\partial \bar x_{\bar i}} \Big\|_\infty \Big[ \frac{1}{\bar n^\lambda} + (\tilde b_{\bar i} - \tilde a_{\bar i})\, C e^{-\beta \bar n^{1-\lambda}} \Big] \sin 1 + 4 \sum_{\bar\alpha : |\bar\alpha| = 2} \|f_{\bar\alpha}\|_\infty\, \frac{1}{\prod_{\bar i = 1}^{\bar N} \bar\alpha_{\bar i}!} \Big[ \frac{1}{\bar n^{2\lambda}} + \prod_{\bar i = 1}^{\bar N} (\tilde b_{\bar i} - \tilde a_{\bar i})^{\bar\alpha_{\bar i}}\, C e^{-\beta \bar n^{1-\lambda}} \Big] \sin^2\frac{1}{2}$$
$$+ (1 - \cos 1) \Bigg[ \omega_{1,2}^{\max}\Big( f_{\bar\alpha}, \frac{1}{\bar n^\lambda} \Big) \frac{\bar N^2}{\bar n^{2\lambda}} + \omega_1\Big( f, \frac{1}{\bar n^\lambda} \Big) \Bigg] + \Big[ 2 \|\tilde b - \tilde a\|_\infty^2\, \|f_{\bar\alpha}\|_{\infty,2}^{\max}\, \bar N^2 + 2 \|f\|_\infty \Big] (1 - \cos 1)\, C e^{-\beta \bar n^{1-\lambda}} \Bigg\} =: \varrho_{\bar n}(f).$$
We would like to draw your attention to the fact that uniform and pointwise convergence are achieved, i.e., $S_{\bar n} \to I$ (the unit operator) as $\bar n \to \infty$.
Proof.
We take $\tilde r$ as it is in (12). Observe that
$$\bar Z_{\bar n} := \sum_{\bar k = \lceil \bar n \tilde a \rceil}^{\lfloor \bar n \tilde b \rfloor} M_q(\bar n \bar x - \bar k)\, \tilde r = \sum_{\substack{\bar k = \lceil \bar n \tilde a \rceil \\ \|\frac{\bar k}{\bar n} - \bar x\|_\infty \le \frac{1}{\bar n^\lambda}}}^{\lfloor \bar n \tilde b \rfloor} M_q(\bar n \bar x - \bar k)\, \tilde r + \sum_{\substack{\bar k = \lceil \bar n \tilde a \rceil \\ \|\frac{\bar k}{\bar n} - \bar x\|_\infty > \frac{1}{\bar n^\lambda}}}^{\lfloor \bar n \tilde b \rfloor} M_q(\bar n \bar x - \bar k)\, \tilde r.$$
Therefore,
$$|\bar Z_{\bar n}| \le \Bigg( \sum_{\substack{\bar k = \lceil \bar n \tilde a \rceil \\ \|\frac{\bar k}{\bar n} - \bar x\|_\infty \le \frac{1}{\bar n^\lambda}}}^{\lfloor \bar n \tilde b \rfloor} M_q(\bar n \bar x - \bar k) \Bigg) (1 - \cos 1) \Bigg[ \omega_{1,2}^{\max}\Big( f_{\bar\alpha}, \frac{1}{\bar n^\lambda} \Big) \frac{\bar N^2}{\bar n^{2\lambda}} + \omega_1\Big( f, \frac{1}{\bar n^\lambda} \Big) \Bigg] + \rho\, C e^{-\beta \bar n^{1-\lambda}}$$
$$\le (1 - \cos 1) \Bigg[ \omega_{1,2}^{\max}\Big( f_{\bar\alpha}, \frac{1}{\bar n^\lambda} \Big) \frac{\bar N^2}{\bar n^{2\lambda}} + \omega_1\Big( f, \frac{1}{\bar n^\lambda} \Big) \Bigg] + \rho\, C e^{-\beta \bar n^{1-\lambda}}.$$
Thus, the following is provided:
$$|\bar Z_{\bar n}| \le (1 - \cos 1) \Bigg[ \omega_{1,2}^{\max}\Big( f_{\bar\alpha}, \frac{1}{\bar n^\lambda} \Big) \frac{\bar N^2}{\bar n^{2\lambda}} + \omega_1\Big( f, \frac{1}{\bar n^\lambda} \Big) \Bigg] + \Big[ 2 \|\tilde b - \tilde a\|_\infty^2\, \|f_{\bar\alpha}\|_{\infty,2}^{\max}\, \bar N^2 + 2 \|f\|_\infty \Big] (1 - \cos 1)\, C e^{-\beta \bar n^{1-\lambda}}.$$
Also, by (11), we observe that
$$\sum_{\bar k = \lceil \bar n \tilde a \rceil}^{\lfloor \bar n \tilde b \rfloor} f\Big(\frac{\bar k}{\bar n}\Big) M_q(\bar n \bar x - \bar k) - f(\bar x) \sum_{\bar k = \lceil \bar n \tilde a \rceil}^{\lfloor \bar n \tilde b \rfloor} M_q(\bar n \bar x - \bar k) = \Bigg( \sum_{\bar i = 1}^{\bar N} \frac{\partial f(\bar x)}{\partial \bar x_{\bar i}} \sum_{\bar k = \lceil \bar n \tilde a \rceil}^{\lfloor \bar n \tilde b \rfloor} M_q(\bar n \bar x - \bar k) \Big( \frac{\bar k_{\bar i}}{\bar n} - \bar x_{\bar i} \Big) \Bigg) \sin 1$$
$$+ 2 \Bigg\{ \sum_{\bar\alpha : |\bar\alpha| = 2} \frac{2 f_{\bar\alpha}(\bar x)}{\prod_{\bar i = 1}^{\bar N} \bar\alpha_{\bar i}!} \Bigg( \sum_{\bar k = \lceil \bar n \tilde a \rceil}^{\lfloor \bar n \tilde b \rfloor} M_q(\bar n \bar x - \bar k) \prod_{\bar i = 1}^{\bar N} \Big( \frac{\bar k_{\bar i}}{\bar n} - \bar x_{\bar i} \Big)^{\bar\alpha_{\bar i}} \Bigg) \Bigg\} \sin^2\frac{1}{2} + \bar Z_{\bar n}.$$
The last quantity yields that
$$S^*_{\bar n}(f, \bar x) - f(\bar x) \sum_{\bar k = \lceil \bar n \tilde a \rceil}^{\lfloor \bar n \tilde b \rfloor} M_q(\bar n \bar x - \bar k) - \Bigg[ \sum_{\bar i = 1}^{\bar N} \frac{\partial f(\bar x)}{\partial \bar x_{\bar i}}\, S^*_{\bar n}\big( (\cdot - \bar x_{\bar i}), \bar x \big) \Bigg] \sin 1 - 2 \Bigg[ \sum_{\bar\alpha : |\bar\alpha| = 2} \frac{2 f_{\bar\alpha}(\bar x)}{\prod_{\bar i = 1}^{\bar N} \bar\alpha_{\bar i}!}\, S^*_{\bar n}\Big( \prod_{\bar i = 1}^{\bar N} (\cdot - \bar x_{\bar i})^{\bar\alpha_{\bar i}}, \bar x \Big) \Bigg] \sin^2\frac{1}{2} = \bar Z_{\bar n}.$$
We note that
$$\big| S^*_{\bar n}\big( (\cdot - \bar x_{\bar i}), \bar x \big) \big| \le S^*_{\bar n}\big( |\cdot - \bar x_{\bar i}|, \bar x \big) = \sum_{\bar k = \lceil \bar n \tilde a \rceil}^{\lfloor \bar n \tilde b \rfloor} \Big| \frac{\bar k_{\bar i}}{\bar n} - \bar x_{\bar i} \Big| M_q(\bar n \bar x - \bar k)$$
$$= \sum_{\substack{\bar k = \lceil \bar n \tilde a \rceil \\ \|\frac{\bar k}{\bar n} - \bar x\|_\infty \le \frac{1}{\bar n^\lambda}}}^{\lfloor \bar n \tilde b \rfloor} \Big| \frac{\bar k_{\bar i}}{\bar n} - \bar x_{\bar i} \Big| M_q(\bar n \bar x - \bar k) + \sum_{\substack{\bar k = \lceil \bar n \tilde a \rceil \\ \|\frac{\bar k}{\bar n} - \bar x\|_\infty > \frac{1}{\bar n^\lambda}}}^{\lfloor \bar n \tilde b \rfloor} \Big| \frac{\bar k_{\bar i}}{\bar n} - \bar x_{\bar i} \Big| M_q(\bar n \bar x - \bar k)$$
$$\le \frac{1}{\bar n^\lambda} + (\tilde b_{\bar i} - \tilde a_{\bar i}) \sum_{\substack{\bar k = \lceil \bar n \tilde a \rceil \\ \|\frac{\bar k}{\bar n} - \bar x\|_\infty > \frac{1}{\bar n^\lambda}}}^{\lfloor \bar n \tilde b \rfloor} M_q(\bar n \bar x - \bar k) \le \frac{1}{\bar n^\lambda} + (\tilde b_{\bar i} - \tilde a_{\bar i})\, C e^{-\beta \bar n^{1-\lambda}}.$$
We have verified that
$$\big| S^*_{\bar n}\big( (\cdot - \bar x_{\bar i}), \bar x \big) \big| \le \frac{1}{\bar n^\lambda} + (\tilde b_{\bar i} - \tilde a_{\bar i})\, C e^{-\beta \bar n^{1-\lambda}}, \qquad \bar i = 1, \ldots, \bar N.$$
Next, we see that
$$\Big| S^*_{\bar n}\Big( \prod_{\bar i = 1}^{\bar N} (\cdot - \bar x_{\bar i})^{\bar\alpha_{\bar i}}, \bar x \Big) \Big| \le S^*_{\bar n}\Big( \prod_{\bar i = 1}^{\bar N} |\cdot - \bar x_{\bar i}|^{\bar\alpha_{\bar i}}, \bar x \Big) = \sum_{\bar k = \lceil \bar n \tilde a \rceil}^{\lfloor \bar n \tilde b \rfloor} \prod_{\bar i = 1}^{\bar N} \Big| \frac{\bar k_{\bar i}}{\bar n} - \bar x_{\bar i} \Big|^{\bar\alpha_{\bar i}} M_q(\bar n \bar x - \bar k)$$
$$= \sum_{\substack{\bar k = \lceil \bar n \tilde a \rceil \\ \|\frac{\bar k}{\bar n} - \bar x\|_\infty \le \frac{1}{\bar n^\lambda}}}^{\lfloor \bar n \tilde b \rfloor} \prod_{\bar i = 1}^{\bar N} \Big| \frac{\bar k_{\bar i}}{\bar n} - \bar x_{\bar i} \Big|^{\bar\alpha_{\bar i}} M_q(\bar n \bar x - \bar k) + \sum_{\substack{\bar k = \lceil \bar n \tilde a \rceil \\ \|\frac{\bar k}{\bar n} - \bar x\|_\infty > \frac{1}{\bar n^\lambda}}}^{\lfloor \bar n \tilde b \rfloor} \prod_{\bar i = 1}^{\bar N} \Big| \frac{\bar k_{\bar i}}{\bar n} - \bar x_{\bar i} \Big|^{\bar\alpha_{\bar i}} M_q(\bar n \bar x - \bar k)$$
$$\le \frac{1}{\bar n^{2\lambda}} + \prod_{\bar i = 1}^{\bar N} (\tilde b_{\bar i} - \tilde a_{\bar i})^{\bar\alpha_{\bar i}}\, C e^{-\beta \bar n^{1-\lambda}}.$$
We have proved that
$$\Big| S^*_{\bar n}\Big( \prod_{\bar i = 1}^{\bar N} (\cdot - \bar x_{\bar i})^{\bar\alpha_{\bar i}}, \bar x \Big) \Big| \le \frac{1}{\bar n^{2\lambda}} + \prod_{\bar i = 1}^{\bar N} (\tilde b_{\bar i} - \tilde a_{\bar i})^{\bar\alpha_{\bar i}}\, C e^{-\beta \bar n^{1-\lambda}}.$$
At last, we observe that
$$\Bigg| S_{\bar n}(f, \bar x) - f(\bar x) - \Bigg[ \sum_{\bar i = 1}^{\bar N} \frac{\partial f(\bar x)}{\partial \bar x_{\bar i}}\, S_{\bar n}\big( (\cdot - \bar x_{\bar i}), \bar x \big) \Bigg] \sin 1 - 4 \Bigg[ \sum_{\bar\alpha : |\bar\alpha| = 2} f_{\bar\alpha}(\bar x)\, \frac{1}{\prod_{\bar i = 1}^{\bar N} \bar\alpha_{\bar i}!}\, S_{\bar n}\Big( \prod_{\bar i = 1}^{\bar N} (\cdot - \bar x_{\bar i})^{\bar\alpha_{\bar i}}, \bar x \Big) \Bigg] \sin^2\frac{1}{2} \Bigg|$$
$$\le \Lambda_q^{\bar N}\, |\bar Z_{\bar n}| = \Lambda_q^{\bar N} \Bigg| S^*_{\bar n}(f, \bar x) - f(\bar x) \sum_{\bar k = \lceil \bar n \tilde a \rceil}^{\lfloor \bar n \tilde b \rfloor} M_q(\bar n \bar x - \bar k) - \Bigg[ \sum_{\bar i = 1}^{\bar N} \frac{\partial f(\bar x)}{\partial \bar x_{\bar i}}\, S^*_{\bar n}\big( (\cdot - \bar x_{\bar i}), \bar x \big) \Bigg] \sin 1 - 4 \Bigg[ \sum_{\bar\alpha : |\bar\alpha| = 2} f_{\bar\alpha}(\bar x)\, \frac{1}{\prod_{\bar i = 1}^{\bar N} \bar\alpha_{\bar i}!}\, S^*_{\bar n}\Big( \prod_{\bar i = 1}^{\bar N} (\cdot - \bar x_{\bar i})^{\bar\alpha_{\bar i}}, \bar x \Big) \Bigg] \sin^2\frac{1}{2} \Bigg|.$$
All of the above are put together to prove the theorem. □
Remark 9.
Let $f \in C^2\big(\prod_{\bar i = 1}^{\bar N} [\tilde a_{\bar i}, \tilde b_{\bar i}], \mathbb{C}\big)$, $\bar N \in \mathbb{N}$. By the mean value theorem, we have that $\sinh \bar x = \sinh \bar x - \sinh 0 = \cosh(\xi)\,(\bar x - 0)$, for some $\xi$ between $0$ and $\bar x$, for any $\bar x \in \mathbb{R}$.
Hence,
$$|\sinh \bar x| \le \|\cosh\|_{\infty, [-1,1]}\, |\bar x|, \qquad \bar x \in [-1, 1].$$
However,
$$\|\cosh\|_{\infty, [-1,1]} = \cosh 1.$$
Thus, we have
$$|\sinh \bar x| \le (\cosh 1)\, |\bar x|, \qquad \bar x \in [-1, 1].$$
Let $\frac{\bar k}{\bar n} := \big(\frac{\bar k_1}{\bar n}, \ldots, \frac{\bar k_{\bar N}}{\bar n}\big)$ and $\bar x := (\bar x_1, \ldots, \bar x_{\bar N})$, with $\frac{\bar k}{\bar n}, \bar x \in \prod_{\bar i = 1}^{\bar N} [\tilde a_{\bar i}, \tilde b_{\bar i}]$; then (by (12), where $g_{\frac{\bar k}{\bar n}}(t) := f\big(\bar x + t\big(\frac{\bar k}{\bar n} - \bar x\big)\big)$, $0 \le t \le 1$), we have
$$f\Big(\frac{\bar k}{\bar n}\Big) - f(\bar x) = \Bigg[ \sum_{\bar i = 1}^{\bar N} \Big( \frac{\bar k_{\bar i}}{\bar n} - \bar x_{\bar i} \Big) \frac{\partial f}{\partial \bar x_{\bar i}}(\bar x) \Bigg] \sinh 1 + 2 \Bigg[ \Bigg( \sum_{\bar i = 1}^{\bar N} \Big( \frac{\bar k_{\bar i}}{\bar n} - \bar x_{\bar i} \Big) \frac{\partial}{\partial \bar x_{\bar i}} \Bigg)^{2} f \Bigg](\bar x)\, \sinh^2\frac{1}{2} + \tilde r,$$
with the remainder
$$\tilde r := \int_0^1 \Bigg\{ \Bigg[ \Bigg( \sum_{\bar i = 1}^{\bar N} \Big( \frac{\bar k_{\bar i}}{\bar n} - \bar x_{\bar i} \Big) \frac{\partial}{\partial \bar x_{\bar i}} \Bigg)^{2} f \Bigg]\Big(\bar x + t\Big(\frac{\bar k}{\bar n} - \bar x\Big)\Big) - f\Big(\bar x + t\Big(\frac{\bar k}{\bar n} - \bar x\Big)\Big) - \Bigg[ \Bigg( \sum_{\bar i = 1}^{\bar N} \Big( \frac{\bar k_{\bar i}}{\bar n} - \bar x_{\bar i} \Big) \frac{\partial}{\partial \bar x_{\bar i}} \Bigg)^{2} f \Bigg](\bar x) + f(\bar x) \Bigg\} \sinh(1 - t)\, dt$$
$$= \int_0^1 \Bigg\{ \sum_{\bar\alpha : |\bar\alpha| = 2} \frac{2}{\prod_{\bar i = 1}^{\bar N} \bar\alpha_{\bar i}!} \prod_{\bar i = 1}^{\bar N} \Big( \frac{\bar k_{\bar i}}{\bar n} - \bar x_{\bar i} \Big)^{\bar\alpha_{\bar i}} \Big[ f_{\bar\alpha}\Big(\bar x + t\Big(\frac{\bar k}{\bar n} - \bar x\Big)\Big) - f_{\bar\alpha}(\bar x) \Big] - \Big[ f\Big(\bar x + t\Big(\frac{\bar k}{\bar n} - \bar x\Big)\Big) - f(\bar x) \Big] \Bigg\} \sinh(1 - t)\, dt.$$
Therefore, using $|\sinh(1 - t)| \le (\cosh 1)(1 - t)$ for $0 \le t \le 1$, it yields that
$$|\tilde r| \le \int_0^1 \Bigg\{ \sum_{\bar\alpha : |\bar\alpha| = 2} \frac{2}{\prod_{\bar i = 1}^{\bar N} \bar\alpha_{\bar i}!} \prod_{\bar i = 1}^{\bar N} \Big| \frac{\bar k_{\bar i}}{\bar n} - \bar x_{\bar i} \Big|^{\bar\alpha_{\bar i}} \Big| f_{\bar\alpha}\Big(\bar x + t\Big(\frac{\bar k}{\bar n} - \bar x\Big)\Big) - f_{\bar\alpha}(\bar x) \Big| + \Big| f\Big(\bar x + t\Big(\frac{\bar k}{\bar n} - \bar x\Big)\Big) - f(\bar x) \Big| \Bigg\} |\sinh(1 - t)|\, dt$$
$$\le (\cosh 1) \int_0^1 \Bigg\{ \sum_{\bar\alpha : |\bar\alpha| = 2} \frac{2}{\prod_{\bar i = 1}^{\bar N} \bar\alpha_{\bar i}!} \prod_{\bar i = 1}^{\bar N} \Big| \frac{\bar k_{\bar i}}{\bar n} - \bar x_{\bar i} \Big|^{\bar\alpha_{\bar i}}\, \omega_1\Big( f_{\bar\alpha}, t \Big\| \frac{\bar k}{\bar n} - \bar x \Big\|_\infty \Big) + \omega_1\Big( f, t \Big\| \frac{\bar k}{\bar n} - \bar x \Big\|_\infty \Big) \Bigg\} (1 - t)\, dt.$$
Notice here that for $0 < \lambda < 1$, $\bar i = 1, \ldots, \bar N$,
$$\Big\| \frac{\bar k}{\bar n} - \bar x \Big\|_\infty \le \frac{1}{\bar n^\lambda} \implies \Big| \frac{\bar k_{\bar i}}{\bar n} - \bar x_{\bar i} \Big| \le \frac{1}{\bar n^\lambda}.$$
We also see that the last bound is at most
$$(\cosh 1) \Bigg\{ \omega_{1,2}^{\max}\Big( f_{\bar\alpha}, \frac{1}{\bar n^\lambda} \Big) \sum_{\bar\alpha : |\bar\alpha| = 2} \frac{2}{\prod_{\bar i = 1}^{\bar N} \bar\alpha_{\bar i}!} \frac{1}{\bar n^{2\lambda}} + \omega_1\Big( f, \frac{1}{\bar n^\lambda} \Big) \Bigg\} \int_0^1 (1 - t)\, dt = \frac{\cosh 1}{2} \Bigg[ \omega_{1,2}^{\max}\Big( f_{\bar\alpha}, \frac{1}{\bar n^\lambda} \Big) \frac{\bar N^2}{\bar n^{2\lambda}} + \omega_1\Big( f, \frac{1}{\bar n^\lambda} \Big) \Bigg].$$
We note that
$$|\tilde r| \le \frac{\cosh 1}{2} \Bigg[ \omega_{1,2}^{\max}\Big( f_{\bar\alpha}, \frac{1}{\bar n^\lambda} \Big) \frac{\bar N^2}{\bar n^{2\lambda}} + \omega_1\Big( f, \frac{1}{\bar n^\lambda} \Big) \Bigg],$$
given that $\big\| \frac{\bar k}{\bar n} - \bar x \big\|_\infty \le \frac{1}{\bar n^\lambda}$.
We notice also that, in general,
$$|\tilde r| \le (\cosh 1) \int_0^1 \Bigg\{ \sum_{\bar\alpha : |\bar\alpha| = 2} \frac{2}{\prod_{\bar i = 1}^{\bar N} \bar\alpha_{\bar i}!} \prod_{\bar i = 1}^{\bar N} (\tilde b_{\bar i} - \tilde a_{\bar i})^{\bar\alpha_{\bar i}}\, 2 \|f_{\bar\alpha}\|_\infty + 2 \|f\|_\infty \Bigg\} (1 - t)\, dt$$
$$\le (\cosh 1) \Big[ 2 \|\tilde b - \tilde a\|_\infty^2\, \|f_{\bar\alpha}\|_{\infty,2}^{\max}\, \bar N^2 + 2 \|f\|_\infty \Big] \int_0^1 (1 - t)\, dt = (\cosh 1) \Big[ \|\tilde b - \tilde a\|_\infty^2\, \|f_{\bar\alpha}\|_{\infty,2}^{\max}\, \bar N^2 + \|f\|_\infty \Big],$$
where $\tilde a := (\tilde a_1, \ldots, \tilde a_{\bar N})$, $\tilde b := (\tilde b_1, \ldots, \tilde b_{\bar N})$.
We have proved that
$$|\tilde r| \le (\cosh 1) \Big[ \|\tilde b - \tilde a\|_\infty^2\, \|f_{\bar\alpha}\|_{\infty,2}^{\max}\, \bar N^2 + \|f\|_\infty \Big] =: \tilde\rho.$$
We continue with the hyperbolic approximation.
Theorem 8. 
Let $f\in C^{2}\big(\prod_{\bar{i}=1}^{\bar{N}}[\tilde{a}_{\bar{i}},\tilde{b}_{\bar{i}}],\mathbb{C}\big)$, $0<\lambda<1$; $\bar{n},\bar{N}\in\mathbb{N}$ with $\bar{n}^{1-\lambda}>2$; $\bar{x},\bar{x}_{0}\in\prod_{\bar{i}=1}^{\bar{N}}[\tilde{a}_{\bar{i}},\tilde{b}_{\bar{i}}]$; $\tilde{a}:=(\tilde{a}_{1},\ldots,\tilde{a}_{\bar{N}})$, $\tilde{b}:=(\tilde{b}_{1},\ldots,\tilde{b}_{\bar{N}})$. Then,
(i)
$$
\Bigg|S_{\bar{n}}(f,\bar{x})-f(\bar{x})-\sum_{\bar{i}=1}^{\bar{N}}\frac{\partial f(\bar{x})}{\partial\bar{x}_{\bar{i}}}\,S_{\bar{n}}\big((\cdot-\bar{x}_{\bar{i}}),\bar{x}\big)\sinh(1)
-4\sum_{\substack{\bar{\alpha}:=(\bar{\alpha}_{1},\ldots,\bar{\alpha}_{\bar{N}}),\\ |\bar{\alpha}|=2}}\frac{f_{\bar{\alpha}}(\bar{x})}{\prod_{\bar{i}=1}^{\bar{N}}\bar{\alpha}_{\bar{i}}!}\,S_{\bar{n}}\Big(\prod_{\bar{i}=1}^{\bar{N}}(\cdot-\bar{x}_{\bar{i}})^{\bar{\alpha}_{\bar{i}}},\bar{x}\Big)\sinh^{2}\Big(\frac{1}{2}\Big)\Bigg|
$$
$$
\le q^{\bar{N}}\cosh(1)\Bigg\{\frac{1}{2}\Bigg[\omega_{1,2}^{\max}\Big(f_{\bar{\alpha}},\frac{1}{\bar{n}^{\lambda}}\Big)\frac{\bar{N}^{2}}{\bar{n}^{2\lambda}}+\omega_{1}\Big(f,\frac{1}{\bar{n}^{\lambda}}\Big)\Bigg]
+\Big[\|\tilde{b}-\tilde{a}\|_{\infty}^{2}\,\|f_{\bar{\alpha}}\|_{\infty,2}^{\max}\,\bar{N}^{2}+\|f\|_{\infty}\Big]\,C\,e^{-\beta\bar{n}^{1-\lambda}}\Bigg\},
$$
(ii) Assume that $\frac{\partial f(\bar{x}_{0})}{\partial\bar{x}_{\bar{i}}}=0$, $\bar{i}=1,\ldots,\bar{N}$, and $f_{\bar{\alpha}}(\bar{x}_{0})=0$ for all $\bar{\alpha}$ with $|\bar{\alpha}|=2$; then, one has that
$$
\big|S_{\bar{n}}(f,\bar{x}_{0})-f(\bar{x}_{0})\big|
\le q^{\bar{N}}\cosh(1)\Bigg\{\frac{1}{2}\Bigg[\omega_{1,2}^{\max}\Big(f_{\bar{\alpha}},\frac{1}{\bar{n}^{\lambda}}\Big)\frac{\bar{N}^{2}}{\bar{n}^{2\lambda}}+\omega_{1}\Big(f,\frac{1}{\bar{n}^{\lambda}}\Big)\Bigg]
+\Big[\|\tilde{b}-\tilde{a}\|_{\infty}^{2}\,\|f_{\bar{\alpha}}\|_{\infty,2}^{\max}\,\bar{N}^{2}+\|f\|_{\infty}\Big]\,C\,e^{-\beta\bar{n}^{1-\lambda}}\Bigg\},
$$
(iii)
$$
\big|S_{\bar{n}}(f,\bar{x})-f(\bar{x})\big|\le q^{\bar{N}}\Bigg\{\sum_{\bar{i}=1}^{\bar{N}}\Big|\frac{\partial f(\bar{x})}{\partial\bar{x}_{\bar{i}}}\Big|\Big(\frac{1}{\bar{n}^{\lambda}}+\big(\tilde{b}_{\bar{i}}-\tilde{a}_{\bar{i}}\big)\,C\,e^{-\beta\bar{n}^{1-\lambda}}\Big)\sinh(1)
$$
$$
+\,4\sum_{\bar{\alpha}:\,|\bar{\alpha}|=2}\frac{|f_{\bar{\alpha}}(\bar{x})|}{\prod_{\bar{i}=1}^{\bar{N}}\bar{\alpha}_{\bar{i}}!}\Big(\frac{1}{\bar{n}^{2\lambda}}+\prod_{\bar{i}=1}^{\bar{N}}\big(\tilde{b}_{\bar{i}}-\tilde{a}_{\bar{i}}\big)^{\bar{\alpha}_{\bar{i}}}\,C\,e^{-\beta\bar{n}^{1-\lambda}}\Big)\sinh^{2}\Big(\frac{1}{2}\Big)
$$
$$
+\cosh(1)\Bigg[\frac{1}{2}\Bigg(\omega_{1,2}^{\max}\Big(f_{\bar{\alpha}},\frac{1}{\bar{n}^{\lambda}}\Big)\frac{\bar{N}^{2}}{\bar{n}^{2\lambda}}+\omega_{1}\Big(f,\frac{1}{\bar{n}^{\lambda}}\Big)\Bigg)
+\Big(\|\tilde{b}-\tilde{a}\|_{\infty}^{2}\,\|f_{\bar{\alpha}}\|_{\infty,2}^{\max}\,\bar{N}^{2}+\|f\|_{\infty}\Big)\,C\,e^{-\beta\bar{n}^{1-\lambda}}\Bigg]\Bigg\},
$$
and
(iv)
$$
\|S_{\bar{n}}f-f\|_{\infty}\le q^{\bar{N}}\Bigg\{\sum_{\bar{i}=1}^{\bar{N}}\Big\|\frac{\partial f}{\partial\bar{x}_{\bar{i}}}\Big\|_{\infty}\Big(\frac{1}{\bar{n}^{\lambda}}+\big(\tilde{b}_{\bar{i}}-\tilde{a}_{\bar{i}}\big)\,C\,e^{-\beta\bar{n}^{1-\lambda}}\Big)\sinh(1)
+4\sum_{\bar{\alpha}:\,|\bar{\alpha}|=2}\frac{\|f_{\bar{\alpha}}\|_{\infty}}{\prod_{\bar{i}=1}^{\bar{N}}\bar{\alpha}_{\bar{i}}!}\Big(\frac{1}{\bar{n}^{2\lambda}}+\prod_{\bar{i}=1}^{\bar{N}}\big(\tilde{b}_{\bar{i}}-\tilde{a}_{\bar{i}}\big)^{\bar{\alpha}_{\bar{i}}}\,C\,e^{-\beta\bar{n}^{1-\lambda}}\Big)\sinh^{2}\Big(\frac{1}{2}\Big)
$$
$$
+\cosh(1)\Bigg[\frac{1}{2}\Bigg(\omega_{1,2}^{\max}\Big(f_{\bar{\alpha}},\frac{1}{\bar{n}^{\lambda}}\Big)\frac{\bar{N}^{2}}{\bar{n}^{2\lambda}}+\omega_{1}\Big(f,\frac{1}{\bar{n}^{\lambda}}\Big)\Bigg)
+\Big(\|\tilde{b}-\tilde{a}\|_{\infty}^{2}\,\|f_{\bar{\alpha}}\|_{\infty,2}^{\max}\,\bar{N}^{2}+\|f\|_{\infty}\Big)\,C\,e^{-\beta\bar{n}^{1-\lambda}}\Bigg]\Bigg\}=:\varkappa_{\bar{n}}(f).
$$
Again, we highlight that both pointwise and uniform convergence are achieved; that is, $S_{\bar{n}}\to I$ (the unit operator) as $\bar{n}\to\infty$.
Proof. 
Let $\tilde{r}$ be the same as in (15). One has that
$$
\bar{Z}_{\bar{n}}:=\sum_{\bar{k}=\lceil\bar{n}\tilde{a}\rceil}^{\lfloor\bar{n}\tilde{b}\rfloor}M_{q}(\bar{n}\bar{x}-\bar{k})\,\tilde{r}
=\sum_{\substack{\bar{k}=\lceil\bar{n}\tilde{a}\rceil\\ \|\frac{\bar{k}}{\bar{n}}-\bar{x}\|_{\infty}\le\frac{1}{\bar{n}^{\lambda}}}}^{\lfloor\bar{n}\tilde{b}\rfloor}M_{q}(\bar{n}\bar{x}-\bar{k})\,\tilde{r}
+\sum_{\substack{\bar{k}=\lceil\bar{n}\tilde{a}\rceil\\ \|\frac{\bar{k}}{\bar{n}}-\bar{x}\|_{\infty}>\frac{1}{\bar{n}^{\lambda}}}}^{\lfloor\bar{n}\tilde{b}\rfloor}M_{q}(\bar{n}\bar{x}-\bar{k})\,\tilde{r}.
$$
Thus,
$$
|\bar{Z}_{\bar{n}}|\le\Bigg(\sum_{\substack{\bar{k}=\lceil\bar{n}\tilde{a}\rceil\\ \|\frac{\bar{k}}{\bar{n}}-\bar{x}\|_{\infty}\le\frac{1}{\bar{n}^{\lambda}}}}^{\lfloor\bar{n}\tilde{b}\rfloor}M_{q}(\bar{n}\bar{x}-\bar{k})\Bigg)
\frac{\cosh(1)}{2}\Bigg[\omega_{1,2}^{\max}\Big(f_{\bar{\alpha}},\frac{1}{\bar{n}^{\lambda}}\Big)\frac{\bar{N}^{2}}{\bar{n}^{2\lambda}}+\omega_{1}\Big(f,\frac{1}{\bar{n}^{\lambda}}\Big)\Bigg]+\tilde{\rho}\,C\,e^{-\beta\bar{n}^{1-\lambda}}
$$
$$
\le\frac{\cosh(1)}{2}\Bigg[\omega_{1,2}^{\max}\Big(f_{\bar{\alpha}},\frac{1}{\bar{n}^{\lambda}}\Big)\frac{\bar{N}^{2}}{\bar{n}^{2\lambda}}+\omega_{1}\Big(f,\frac{1}{\bar{n}^{\lambda}}\Big)\Bigg]+\tilde{\rho}\,C\,e^{-\beta\bar{n}^{1-\lambda}}.
$$
We have established that
$$
|\bar{Z}_{\bar{n}}|\le\frac{\cosh(1)}{2}\Bigg[\omega_{1,2}^{\max}\Big(f_{\bar{\alpha}},\frac{1}{\bar{n}^{\lambda}}\Big)\frac{\bar{N}^{2}}{\bar{n}^{2\lambda}}+\omega_{1}\Big(f,\frac{1}{\bar{n}^{\lambda}}\Big)\Bigg]
+\cosh(1)\Big[\|\tilde{b}-\tilde{a}\|_{\infty}^{2}\,\|f_{\bar{\alpha}}\|_{\infty,2}^{\max}\,\bar{N}^{2}+\|f\|_{\infty}\Big]\,C\,e^{-\beta\bar{n}^{1-\lambda}}.
$$
By (16), one notes that
$$
\sum_{\bar{k}=\lceil\bar{n}\tilde{a}\rceil}^{\lfloor\bar{n}\tilde{b}\rfloor}f\Big(\frac{\bar{k}}{\bar{n}}\Big)M_{q}(\bar{n}\bar{x}-\bar{k})-f(\bar{x})\sum_{\bar{k}=\lceil\bar{n}\tilde{a}\rceil}^{\lfloor\bar{n}\tilde{b}\rfloor}M_{q}(\bar{n}\bar{x}-\bar{k})
=\Bigg(\sum_{\bar{i}=1}^{\bar{N}}\frac{\partial f(\bar{x})}{\partial\bar{x}_{\bar{i}}}\sum_{\bar{k}=\lceil\bar{n}\tilde{a}\rceil}^{\lfloor\bar{n}\tilde{b}\rfloor}M_{q}(\bar{n}\bar{x}-\bar{k})\Big(\frac{\bar{k}_{\bar{i}}}{\bar{n}}-\bar{x}_{\bar{i}}\Big)\Bigg)\sinh(1)
$$
$$
+2\Bigg\{\sum_{\substack{\bar{\alpha}:=(\bar{\alpha}_{1},\ldots,\bar{\alpha}_{\bar{N}}),\\ |\bar{\alpha}|=2}}\frac{2f_{\bar{\alpha}}(\bar{x})}{\prod_{\bar{i}=1}^{\bar{N}}\bar{\alpha}_{\bar{i}}!}\Bigg(\sum_{\bar{k}=\lceil\bar{n}\tilde{a}\rceil}^{\lfloor\bar{n}\tilde{b}\rfloor}M_{q}(\bar{n}\bar{x}-\bar{k})\prod_{\bar{i}=1}^{\bar{N}}\Big(\frac{\bar{k}_{\bar{i}}}{\bar{n}}-\bar{x}_{\bar{i}}\Big)^{\bar{\alpha}_{\bar{i}}}\Bigg)\Bigg\}\sinh^{2}\Big(\frac{1}{2}\Big)+\bar{Z}_{\bar{n}}.
$$
The last quantity yields
$$
S_{\bar{n}}^{*}(f,\bar{x})-f(\bar{x})\sum_{\bar{k}=\lceil\bar{n}\tilde{a}\rceil}^{\lfloor\bar{n}\tilde{b}\rfloor}M_{q}(\bar{n}\bar{x}-\bar{k})
-\sum_{\bar{i}=1}^{\bar{N}}\frac{\partial f(\bar{x})}{\partial\bar{x}_{\bar{i}}}\,S_{\bar{n}}^{*}\big((\cdot-\bar{x}_{\bar{i}}),\bar{x}\big)\sinh(1)
-4\sum_{\bar{\alpha}:\,|\bar{\alpha}|=2}\frac{f_{\bar{\alpha}}(\bar{x})}{\prod_{\bar{i}=1}^{\bar{N}}\bar{\alpha}_{\bar{i}}!}\,S_{\bar{n}}^{*}\Big(\prod_{\bar{i}=1}^{\bar{N}}(\cdot-\bar{x}_{\bar{i}})^{\bar{\alpha}_{\bar{i}}},\bar{x}\Big)\sinh^{2}\Big(\frac{1}{2}\Big)=\bar{Z}_{\bar{n}}.
$$
As earlier, one verifies that
$$
\big|S_{\bar{n}}^{*}\big((\cdot-\bar{x}_{\bar{i}}),\bar{x}\big)\big|\le\frac{1}{\bar{n}^{\lambda}}+\big(\tilde{b}_{\bar{i}}-\tilde{a}_{\bar{i}}\big)\,C\,e^{-\beta\bar{n}^{1-\lambda}},\qquad\bar{i}=1,\ldots,\bar{N}.
$$
Moreover, it holds that
$$
\Big|S_{\bar{n}}^{*}\Big(\prod_{\bar{i}=1}^{\bar{N}}(\cdot-\bar{x}_{\bar{i}})^{\bar{\alpha}_{\bar{i}}},\bar{x}\Big)\Big|\le\frac{1}{\bar{n}^{2\lambda}}+\prod_{\bar{i}=1}^{\bar{N}}\big(\tilde{b}_{\bar{i}}-\tilde{a}_{\bar{i}}\big)^{\bar{\alpha}_{\bar{i}}}\,C\,e^{-\beta\bar{n}^{1-\lambda}}.
$$
Finally, one finds that
$$
\Bigg|S_{\bar{n}}(f,\bar{x})-f(\bar{x})-\sum_{\bar{i}=1}^{\bar{N}}\frac{\partial f(\bar{x})}{\partial\bar{x}_{\bar{i}}}\,S_{\bar{n}}\big((\cdot-\bar{x}_{\bar{i}}),\bar{x}\big)\sinh(1)
-4\sum_{\bar{\alpha}:\,|\bar{\alpha}|=2}\frac{f_{\bar{\alpha}}(\bar{x})}{\prod_{\bar{i}=1}^{\bar{N}}\bar{\alpha}_{\bar{i}}!}\,S_{\bar{n}}\Big(\prod_{\bar{i}=1}^{\bar{N}}(\cdot-\bar{x}_{\bar{i}})^{\bar{\alpha}_{\bar{i}}},\bar{x}\Big)\sinh^{2}\Big(\frac{1}{2}\Big)\Bigg|
\le q^{\bar{N}}\,|\bar{Z}_{\bar{n}}|
$$
$$
=q^{\bar{N}}\Bigg|S_{\bar{n}}^{*}(f,\bar{x})-f(\bar{x})\sum_{\bar{k}=\lceil\bar{n}\tilde{a}\rceil}^{\lfloor\bar{n}\tilde{b}\rfloor}M_{q}(\bar{n}\bar{x}-\bar{k})
-\sum_{\bar{i}=1}^{\bar{N}}\frac{\partial f(\bar{x})}{\partial\bar{x}_{\bar{i}}}\,S_{\bar{n}}^{*}\big((\cdot-\bar{x}_{\bar{i}}),\bar{x}\big)\sinh(1)
-4\sum_{\bar{\alpha}:\,|\bar{\alpha}|=2}\frac{f_{\bar{\alpha}}(\bar{x})}{\prod_{\bar{i}=1}^{\bar{N}}\bar{\alpha}_{\bar{i}}!}\,S_{\bar{n}}^{*}\Big(\prod_{\bar{i}=1}^{\bar{N}}(\cdot-\bar{x}_{\bar{i}})^{\bar{\alpha}_{\bar{i}}},\bar{x}\Big)\sinh^{2}\Big(\frac{1}{2}\Big)\Bigg|.
$$
When we consider all of the above together, we establish the theorem. □
Remark 10. 
By (7), for $f\in C\big(\prod_{\bar{i}=1}^{\bar{N}}[\tilde{a}_{\bar{i}},\tilde{b}_{\bar{i}}],\mathbb{C}\big)$, we find that $\|S_{\bar{n}}f\|_{\infty}\le\|f\|_{\infty}<\infty$, and $S_{\bar{n}}f\in C\big(\prod_{\bar{i}=1}^{\bar{N}}[\tilde{a}_{\bar{i}},\tilde{b}_{\bar{i}}],\mathbb{C}\big)$.
Explicitly,
$$
\|S_{\bar{n}}^{2}f\|_{\infty}=\|S_{\bar{n}}(S_{\bar{n}}f)\|_{\infty}\le\|S_{\bar{n}}f\|_{\infty}\le\|f\|_{\infty}
$$
is true.
Thus, we find that, for $\bar{k}\in\mathbb{N}$,
$$
\|S_{\bar{n}}^{\bar{k}}f\|_{\infty}\le\|f\|_{\infty}
$$
holds by the “contraction property”.
Moreover, we obtain
$$
\|S_{\bar{n}}^{\bar{k}}f-S_{\bar{n}}^{\bar{k}-1}f\|_{\infty}\le\|S_{\bar{n}}f-f\|_{\infty}.
$$
Additionally, $S_{\bar{n}}1=1$, and hence $S_{\bar{n}}^{\bar{k}}1=1$, $\bar{k}\in\mathbb{N}$.
In addition, we have (see pp. 401–402 of [36]) that
$$
\|S_{\bar{n}}^{r}f-f\|_{\infty}\le r\,\|S_{\bar{n}}f-f\|_{\infty},\qquad r\in\mathbb{N}.
$$
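The last bound follows from the telescoping identity $S_{\bar{n}}^{r}f-f=\sum_{i=0}^{r-1}S_{\bar{n}}^{i}(S_{\bar{n}}f-f)$ together with the contraction property. The mechanism holds for any sup-norm-nonincreasing linear operator that fixes constants; the sketch below uses a simple local-averaging operator on a sample grid as a hypothetical stand-in for $S_{\bar{n}}$ (an illustration of the inequality, not the MNN operator itself):

```python
import numpy as np

def smooth(v):
    """Sup-norm-nonincreasing linear operator: 3-point average with replicated endpoints.

    Its weights are nonnegative and sum to 1 per output, mirroring S_n 1 = 1."""
    padded = np.concatenate(([v[0]], v, [v[-1]]))
    return (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0

rng = np.random.default_rng(0)
f = rng.standard_normal(50)            # sampled "function" values on a grid

base = np.max(np.abs(smooth(f) - f))   # ||S f - f||_sup
g = f.copy()
for r in range(1, 6):
    g = smooth(g)                      # g = S^r f
    # telescoping bound: ||S^r f - f|| <= r ||S f - f||
    assert np.max(np.abs(g - f)) <= r * base + 1e-12
```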
Ultimately, we show the following:
Theorem 9. 
Let all the assumptions of Theorems 7 and 8 hold. Then,
(i)
$$
\|S_{\bar{n}}^{r}f-f\|_{\infty}\le r\,\varrho_{\bar{n}}(f),
$$
where $\varrho_{\bar{n}}(f)$ is as in (13), and
(ii)
$$
\|S_{\bar{n}}^{r}f-f\|_{\infty}\le r\,\varkappa_{\bar{n}}(f),
$$
where $\varkappa_{\bar{n}}(f)$ is as in (16).
As a result, the speed of convergence of $S_{\bar{n}}^{r}\to I$ might be considered as good as that of $S_{\bar{n}}$; see also [35].
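To make the convergence $S_{\bar{n}}\to I$ concrete, the following univariate sketch builds a normalized operator from a parameterized half-hyperbolic tangent and checks that the approximation error shrinks as $\bar{n}$ grows. The specific forms used here are assumptions for illustration: an activation $\varphi_{q,\beta}(x)=(1-qe^{-\beta x})/(1+qe^{-\beta x})$, a density $D_{q}(x)=\frac{1}{4}\big(\varphi_{q,\beta}(x+1)-\varphi_{q,\beta}(x-1)\big)$, and parameters $q=1.5$, $\beta=2$, in the spirit of the constructions in [22,42]; they should be checked against the paper's actual definitions of $\tilde{\psi}_{q}$ and $D_{q}$.

```python
import numpy as np

def phi(x, q=1.5, beta=2.0):
    """Assumed parameterized half-hyperbolic tangent (1 - q e^{-bx}) / (1 + q e^{-bx}),
    written in the algebraically identical, overflow-safe form 1 - 2q / (e^{bx} + q)."""
    return 1.0 - 2.0 * q / (np.exp(beta * x) + q)

def density(x, q=1.5, beta=2.0):
    """Assumed bell-shaped density D_q(x) = (phi(x + 1) - phi(x - 1)) / 4."""
    return (phi(x + 1, q, beta) - phi(x - 1, q, beta)) / 4.0

def S(f, x, n, a=0.0, b=1.0):
    """Normalized univariate NN operator: weighted average of the samples f(k/n)."""
    k = np.arange(np.ceil(n * a), np.floor(n * b) + 1)
    w = density(n * x - k)
    return np.sum(f(k / n) * w) / np.sum(w)

f = lambda t: np.sin(2 * np.pi * t)
errs = [max(abs(S(f, x, n) - f(x)) for x in np.linspace(0.1, 0.9, 17))
        for n in (20, 80, 320)]
assert errs[0] > errs[1] > errs[2]   # the approximation error shrinks as n grows
```

The maximal error over interior points decreases as $n$ grows, in line with the $1/\bar{n}^{\lambda}$-type rates established above.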

5. Conclusions and Discussion

In theoretical terms, our findings comprise a series of approximation results obtained through Jackson-type inequalities involving the modulus of continuity of the multivariate function and of its partial derivatives. Our ℂ-valued normalized MNN operators are built from a multidimensional density function generated by the parameterized half-hyperbolic tangent function. Moreover, this study aims to fill a gap in the literature by establishing complex-valued hyperbolic- and trigonometric-type convergence results, employing Ostrowski- and Opial-type inequalities, suitable norms, and trigonometric- and hyperbolic-type Taylor formulae to derive the rate of convergence.
On the other hand, when this study is considered in the context of real-world applications, it is believed that the accuracy of the outputs on the datasets used mainly in machine learning applications can be increased by using a parameterized, that is, trainable, activation function. As the applied studies reviewed in the Introduction show, the freedom to choose the activation function appropriately and flexibly within an ANN architecture provides serious advantages in producing more effective solutions to real-world problems.
Although we focused on the theoretical side of ANN and MNN operators in the current study, we believe that this theoretical basis will provide more robust ANN/DNN models, solution methods, and tools for real-world applications in the future.
For example, the feed-forward neural network (FNN, also known as MLP) architecture and the MNN operators considered in this study may be advantageous for reproducing the dataset called the “training set” in the literature with less error. In addition, this architecture offers “universality” and “versatility” for the solutions of real-world problems in areas such as natural language processing (NLP), complex classification, image recognition/compression/decompression, digital image processing, and object identification, because it performs better on nonlinear tasks.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The author would like to thank Professor George A. Anastassiou for generously sharing his time and views.

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

Symbol : Description
$\mathbb{R}$; $(\mathbb{C},|\cdot|)$ : The set of real numbers; the Banach space of complex numbers
$\mathbb{N}$ : The set of natural numbers
$\bar{k},\bar{n},\bar{N},r,\bar{i}$ : Arbitrarily chosen natural numbers
$\underline{N}_{\psi,\bar{n}}$ : Mathematical formulation of an “ANN architecture” based upon multiple hidden layers
$x=(x_{1},\ldots,x_{s})$ : An $s$-dimensional element of $\mathbb{R}^{s}$
$[\tilde{a}_{\bar{i}},\tilde{b}_{\bar{i}}]$ : Closed subinterval of $\mathbb{R}$
$\bar{x}:=(\bar{x}_{1},\ldots,\bar{x}_{\bar{N}})$ : An element of $\prod_{\bar{i}=1}^{\bar{N}}[\tilde{a}_{\bar{i}},\tilde{b}_{\bar{i}}]$
$\psi$ : A general activation function of an ANN architecture
$\tilde{\omega}_{\bar{i}}$ : Connection weights of an ANN architecture
$\varpi_{\bar{i}}$ : Thresholds of an ANN architecture
$\tilde{\psi}_{q}$ : Parameterized half-hyperbolic tangent activation function
$q,\beta$ : Parameters of the parameterized half-hyperbolic tangent activation function
$D_{q}$ : Density function
$M_{q}$ : Density function for the multivariate case
$\|\bar{x}\|_{\infty}$ : Supremum norm of $\bar{x}$ in the multivariate case
$S_{\bar{n}}$ : Complex-valued linear normalized MNN operator
$\tilde{S}_{\bar{n}}$ : Complex-valued linear normalized MNN associate operator
$S_{\bar{n}}^{*}$ : A component of the operator $S_{\bar{n}}$
$\omega_{1}$ : The first modulus of continuity
$\tilde{r}$ : Remainder of the multivariate hyperbolic Taylor formula

References

  1. Ansari, K.J.; Özger, F. Pointwise and weighted estimates for Bernstein-Kantorovich type operators including beta function. Indian J. Pure Appl. Math. 2024. [Google Scholar] [CrossRef]
  2. Savaş, E.; Mursaleen, M. Bézier Type Kantorovich q-Baskakov Operators via Wavelets and Some Approximation Properties. Bull. Iran. Math. Soc. 2023, 49, 68. [Google Scholar] [CrossRef]
  3. Cai, Q.; Aslan, R.; Özger, F.; Srivastava, H.M. Approximation by a new Stancu variant of generalized (λ,μ)-Bernstein operators. Alex. Eng. J. 2024, 107, 205–214. [Google Scholar] [CrossRef]
  4. Ayman-Mursaleen, M.; Zaman, M.N.; Sharma, S.K.; Cai, Q.-B. Invariant means and lacunary sequence spaces of order (α,β). Demonstr. Math. 2024, 57, 20240003. [Google Scholar] [CrossRef]
  5. Rao, N.; Ayman-Mursaleen, M.; Aslan, R. A note on a general sequence of λ-Szász Kantorovich type operators. Comput. Appl. Math. 2024, 43, 428. [Google Scholar] [CrossRef]
  6. Alamer, A.; Nasiruzzaman, M. Approximation by Stancu variant of λ-Bernstein shifted knots operators associated by Bézier basis function. J. King Saud Univ. Sci. 2024, 36, 103333. [Google Scholar] [CrossRef]
  7. Ayman-Mursaleen, M.; Zaman, M.N.; Rao, N.; Dilshad, M. Approximation by the modified λ-Bernstein-polynomial in terms of basis function. AIMS Math. 2024, 9, 4409–4426. [Google Scholar] [CrossRef]
  8. Mudarra, A.; Valdivia, D.; Ducange, P.; Germán, M.; Rivera, A.J.; Pérez-Godoy, M.D. Nets4Learning: A Web Platform for Designing and Testing ANN/DNN Models. Electronics 2024, 13, 4378. [Google Scholar] [CrossRef]
  9. Rashedi, K.A.; Ismail, M.T.; Al Wadi, S.; Serroukh, A.; Alshammari, T.S.; Jaber, J.J. Multi-Layer Perceptron-Based Classification with Application to Outlier Detection in Saudi Arabia Stock Returns. J. Risk Financ. Manag. 2024, 17, 69. [Google Scholar] [CrossRef]
  10. Minsky, M.; Papert, S. Perceptrons; MIT Press: Cambridge, MA, USA, 1969. [Google Scholar]
  11. Hornik, K.; Stinchcombe, M.; White, H. Multilayer feedforward networks are universal approximators. Neural Netw. 1989, 2, 359–366. [Google Scholar] [CrossRef]
  12. Kolmogorov, A.N. On the representation of continuous functions of many variables by superposition of continuous functions of one variable and addition. Transl. Am. Math. Soc. 1963, 2, 55–59. [Google Scholar]
  13. Hecht-Nielsen, R. Kolmogorov’s mapping neural network existence theorem. In Proceedings of the International Conference on Neural Networks, San Diego, CA, USA, 21–24 June 1987; IEEE Press: New York, NY, USA, 1987; pp. 11–14. [Google Scholar]
  14. Cybenko, G. Approximation by superpositions of a sigmoidal function. Math. Control. Signals Syst. 1989, 2, 303–314. [Google Scholar] [CrossRef]
  15. Funahashi, K.I. On the approximate realization of continuous mappings by neural networks. Neural Netw. 1989, 2, 183–192. [Google Scholar] [CrossRef]
  16. Chen, T.; Chen, H. Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Trans. Neural Netw. 1995, 6, 911–917. [Google Scholar] [CrossRef]
  17. Chui, C.K.; Li, X. Approximation by ridge functions and neural networks with one hidden layer. J. Approx. Theory 1992, 70, 131–141. [Google Scholar] [CrossRef]
  18. Hahm, N.; Hong, B.I. An approximation by neural networks with a fixed weight. Comput. Math. Appl. 2004, 47, 1897–1903. [Google Scholar] [CrossRef]
  19. Costarelli, D.; Spigler, R. Approximation results for neural network operators activated by sigmoidal functions. Neural Netw. 2013, 44, 101–106. [Google Scholar] [CrossRef]
  20. Costarelli, D.; Spigler, R. Multivariate neural network operators with sigmoidal activation functions. Neural Netw. 2013, 48, 72–77. [Google Scholar] [CrossRef]
  21. Costarelli, D. Neural network operators: Constructive interpolation of multivariate functions. Neural Netw. 2015, 67, 28–36. [Google Scholar] [CrossRef]
  22. Anastassiou, G.A. Banach Space Valued Ordinary and Fractional Neural Network Approximation Based on q-Deformed and β-Parametrized Half Hyperbolic Tangent. In Parametrized, Deformed and General Neural Networks; Studies in Computational Intelligence; Springer: Cham, Switzerland, 2023; Volume 1116. [Google Scholar]
  23. Pinkus, A. Approximation theory of the MLP model in neural networks. Acta Numer. 1999, 8, 143–195. [Google Scholar] [CrossRef]
  24. Ismailov, V.E. Ridge Functions and Applications in Neural Networks, Mathematical Surveys and Monographs, Vol. 263; American Mathematical Society: Providence, RI, USA, 2021; p. 186. [Google Scholar]
  25. Costarelli, D.; Piconi, M. Implementation of neural network operators with applications to remote sensing data. arXiv 2024, arXiv:2412.00375. [Google Scholar]
  26. Angeloni, L.; Bloisi, D.D.; Burghignoli, P.; Comite, D.; Costarelli, D.; Piconi, M.; Sambucini, A.R.; Troiani, A.; Veneri, A. Microwave Remote Sensing of Soil Moisture, Above Ground Biomass and Freeze-Thaw Dynamic: Modeling and Empirical Approaches. arXiv 2024, arXiv:2412.03523. [Google Scholar]
  27. Baxhaku, F.; Berisha, A.; Agrawal, P.N.; Baxhaku, B. Multivariate neural network operators activated by smooth ramp functions. Expert Syst. Appl. 2025, 269, 126119. [Google Scholar] [CrossRef]
  28. Kadak, U. Multivariate fuzzy neural network interpolation operators and applications to image processing. Expert Syst. Appl. 2022, 206, 117771. [Google Scholar] [CrossRef]
  29. Costarelli, D.; Sambucini, A.R. A comparison among a fuzzy algorithm for image rescaling with other methods of digital image processing. Constr. Math. Anal. 2024, 7, 45–68. [Google Scholar] [CrossRef]
  30. Karateke, S. Some Mathematical Properties of Flexible Hyperbolic Tangent Activation Function with Application to Deep Neural Networks. 2025; accepted. [Google Scholar]
  31. Available online: https://alexlenail.me/NN-SVG/ (accessed on 26 January 2025).
  32. Anastassiou, G.A. Multivariate hyperbolic tangent neural network approximation. Comput. Math. 2011, 61, 809–821. [Google Scholar]
  33. Anastassiou, G.A. Rate of convergence of some neural network operators to the unit-univariate case. J. Math. Anal. Appl. 1997, 212, 237–262. [Google Scholar] [CrossRef]
  34. Anastassiou, G.A. Multivariate sigmoidal neural network approximation. Neural Netw. 2011, 24, 378–386. [Google Scholar] [CrossRef]
  35. Anastassiou, G.A. Approximation by neural networks iterates. In Advances in Applied Mathematics and Approximation Theory, Springer Proceedings in Mathematics & Statistics; Anastassiou, G., Duman, O., Eds.; Springer: New York, NY, USA, 2013; pp. 1–20. [Google Scholar]
  36. Anastassiou, G. Intelligent Systems II: Complete Approximation by Neural Network Operators; Springer: Heidelberg, NY, USA, 2016. [Google Scholar]
  37. Anastassiou, G.A. Intelligent Computations: Abstract Fractional Calculus, Inequalities, Approximations; Springer: Heidelberg, NY, USA, 2018. [Google Scholar]
  38. Karateke, S. On an (ι,x0)-Generalized Logistic-Type Function. Fundam. J. Math. Appl. 2024, 7, 35–52. [Google Scholar] [CrossRef]
  39. Haykin, S. Neural Networks: A Comprehensive Foundation, 2nd ed.; Prentice Hall: New York, NY, USA, 1998. [Google Scholar]
  40. McCulloch, W.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 1943, 7, 115–133. [Google Scholar] [CrossRef]
  41. Mitchell, T.M. Machine Learning; WCB-McGraw-Hill: New York, NY, USA, 1997. [Google Scholar]
  42. Anastassiou, G.A.; Karateke, S. Parametrized Half-Hyperbolic Tangent Function-Activated Complex-Valued Neural Network Approximation. Symmetry 2024, 16, 1568. [Google Scholar] [CrossRef]
  43. Arai, A. Exactly solvable supersymmetric quantum mechanics. J. Math. Anal. Appl. 1991, 158, 63–79. [Google Scholar] [CrossRef]
  44. Anastassiou, G.A. Perturbed Hyperbolic Tangent Function-Activated Complex-Valued Trigonometric and Hyperbolic Neural Network High Order Approximation. In Trigonometric and Hyperbolic Generated Approximation Theory; World Scientific: Singapore, 2025. [Google Scholar]
  45. Anastassiou, G.A. Opial and Ostrowski Type Inequalities Based on Trigonometric and Hyperbolic Type Taylor Formulae. Malaya J. Mat. 2023, 11, 1–26. [Google Scholar] [CrossRef]
  46. Ali, H.A.; Pales, Z. Taylor-type expansions in terms of exponential polynomials. Math. Inequalities Appl. 2022, 25, 1123–1141. [Google Scholar] [CrossRef]
Figure 1. A classical and very general ANN architecture with two hidden layers [31].