1. Motivation
Under the umbrella of Artificial Intelligence (AI), neural network (NN) operators provide a powerful framework for solving complex real-life problems in various fields of science, such as cybersecurity, healthcare, economics, medicine, psychology, and sociology. Because they are differentiable, their parameters can be optimized directly. To work effectively, however, NN operators require a well-chosen activation function, and the literature offers a very wide range of activation functions [1]. In this study, we choose a parametrized and deformed half-hyperbolic tangent function as the activation function. Functions of this type can be more effective in reaching the optimum solution due to their trainability [2]. One of the interesting aspects of this work is that all higher-order approximations are based on trigonometric and hyperbolic-type Taylor formula inequalities (see [3,4,5]). Next, we emphasize the following: as is well known, the human brain is medically proven to be an asymmetrical organ. Consequently, NNs, which imitate its operation, are not symmetrical mathematical structures. In our paper, however, we build an approximation apparatus that is as close to symmetry as possible. Namely, the activation function we initially use is reciprocal anti-symmetrical (see Proposition 1). This is the building block for the density function used in our approximation NN operators, and we prove that this density function is a reciprocal symmetric function (see Equation (3)). This constitutes our study’s connection to the general “symmetry” phenomenon.
The first construction of approximation by NN operators of the “Cardaliaguet-Euvrard” and “Squashing” types was given by G. A. Anastassiou in 1997. As a fruit of this construction, he also introduced to the literature the calculation of the speed of convergence, via rates expressed through the modulus of continuity [6].
The mathematical expression of the one-hidden-layer neural network (NN) architecture is presented as
$$ N_n(x) = \sum_{j=0}^{n} c_j \, \sigma\bigl( \langle a_j \cdot x \rangle + b_j \bigr), $$
for $x \in \mathbb{R}^s$, $s \in \mathbb{N}$, where $a_j \in \mathbb{R}^s$ are the connection weights, $c_j \in \mathbb{R}$ is the coefficient, $\langle a_j \cdot x \rangle$ is the inner product of $a_j$ and $x$, $b_j \in \mathbb{R}$ is the threshold, and $\sigma$ is the activation function. For more background on NNs, the reader is referred to [7,8,9].
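As a quick illustration, the following minimal Python sketch evaluates such a one-hidden-layer network. The random parameters and the np.tanh placeholder activation are illustrative assumptions only; they are not the deformed half-hyperbolic tangent activation constructed in Section 2.

    import numpy as np

    def one_hidden_layer_nn(x, a, b, c, sigma=np.tanh):
        # Evaluates N_n(x) = sum_{j=0}^{n} c_j * sigma(<a_j . x> + b_j).
        # Shapes: x is (s,), a is (n+1, s), b and c are (n+1,).
        return np.sum(c * sigma(a @ x + b))

    rng = np.random.default_rng(0)
    n, s = 4, 3                            # hypothetical sizes
    a = rng.standard_normal((n + 1, s))    # connection weights a_j
    b = rng.standard_normal(n + 1)         # thresholds b_j
    c = rng.standard_normal(n + 1)         # coefficients c_j
    print(one_hidden_layer_nn(rng.standard_normal(s), a, b, c))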
This paper is organized as follows: in Section 1, we present our motivation. In Section 2, we build up our activation function step by step; we also include our basic estimates, which form the basis for the main results. We devote Section 3 and Section 4 to the construction of the density function and to the creation of the complex-valued linear NN operators, respectively. In Section 5, we perform deformed and parametrized half-hyperbolic tangent function-activated high-order neural network approximations to continuous functions with complex values over compact intervals of the real line. All convergences come with rates expressed via the moduli of continuity of the high-order derivatives of the involved functions, derived from very tight Jackson-type inequalities. We conclude with Section 6.
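For orientation only, the high-order estimates of Section 5 typically take a Jackson-type shape of the following schematic form, where $F_n$ stands generically for an NN operator, $N$ is the order of differentiability, $0 < \beta < 1$ is a parameter, $C$ is a constant, and $\omega_1$ is the first modulus of continuity; this is a template reflecting our reading of such results, and the exact statements appear in the theorems below:
$$ \bigl| F_n(f, x) - f(x) \bigr| \le C \, \omega_1\!\Bigl( f^{(N)}, \frac{1}{n^{\beta}} \Bigr) \frac{1}{n^{\beta N}} + \text{(negligible tail terms)}. $$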
  3. Construction of Density Function 
In this section, we aim to create the density function 
, and we also present its basic properties, which will be used throughout the paper. It is clear that 
So, for every , let us consider the following density function:
Furthermore, note that
      
so the x-axis is a horizontal asymptote. One can write that
      
Remark 1. - (i)
 Let ; then  and , that is, . Thus,  is strictly increasing on .
- (ii)
 Let ; then  and , namely, . Therefore,  is strictly decreasing on .
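To make the construction and the monotonicity just described tangible, here is a minimal numerical sketch in Python. The specific formulas below are assumptions modeled on related constructions in the literature, namely a deformed, parametrized half-hyperbolic tangent h(x) = (1 - q e^{-lambda x}) / (1 + q e^{-lambda x}) with q, lambda > 0, and a density taken as a quarter of the difference of unit shifts; they are illustrative stand-ins, not the exact definitions elided above.

    import numpy as np

    Q, LAM = 1.5, 2.0   # assumed deformation and shape parameters, q, lambda > 0

    def h(x):
        # Assumed activation: deformed half-hyperbolic tangent (illustrative).
        e = np.exp(-LAM * np.asarray(x, dtype=float))
        return (1.0 - Q * e) / (1.0 + Q * e)

    def W(x):
        # Assumed bell-shaped density: scaled difference of shifted activations.
        x = np.asarray(x, dtype=float)
        return 0.25 * (h(x + 1.0) - h(x - 1.0))

    print(W(np.array([-4.0, -1.0, 0.0, 1.0, 4.0])))  # rises, peaks, then decays
    print(W(12.0), W(-12.0))  # tails vanish: the x-axis is a horizontal asymptote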
Remark 2. Let . Then,
Explicitly, according to , one determines that  for , and  is strictly concave downward on . Therefore,  is a bell-shaped function on . Moreover,  is satisfied.
Remark 3. The maximum value of  is .
Theorem 1 ([10]). We determine that
In this manner, it holds that
Theorem 2. Thus, this means that  is a density function over  such that .
 Next, the following result is needed.
Theorem 3 ([10]). Let , and  with ; . Then,
where .
Let  and  denote the ceiling and the integral part of a number, respectively.
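In numerical work, these correspond to the standard ceiling and floor operations; for the nonnegative arguments arising here, the integral part coincides with the floor, e.g. in Python:

    import math
    print(math.ceil(2.3), math.floor(2.3))   # 3 2
    print(math.ceil(5.0), math.floor(5.0))   # 5 5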
Let us continue with the following conclusion:
Theorem 4 ([10]). Let  and  so that . For , we consider the number  with  and . Then,
We also mention the following:
Remark 4 ([10]). (i) We also notice that
where
(ii) Let . For large , we always have . Also, , iff . In general, it holds that
5. Approximation Results
Now, we are ready to perform complex-valued neural network high-order approximations to a function f, with rates given. Let us start with a trigonometric approach.
Theorem 5. Let , , ,  Then,
- (2) 
 If  we obtain
Note here that there is a high rate of convergence at 
- (3) 
 In addition, we obtain
namely, , pointwise and uniformly,
- (4) 
Finally, it holds that
and a high speed of convergence at  is attained.
Proof. Inspired by [3], and applying the trigonometric Taylor’s formula for
, let 
; then,
        
Furthermore, it holds that
        
We assume that
        
when ; in other words, for large enough , we assume . Thus, this yields  or .
For the case , the following are obtained:
        
        (employing 
, ∀
)
        
        namely,
        
So, we have proven that when 
, it is always true that
        
        (employing 
, ∀ 
)
        
- (b)
One more time, let .
Again, we apply , ∀ .
Then, we calculate
        
        using 
, ∀
, and calculate
        
        i.e.,
        
As a consequence, we obtain the following:
        
- (1)
 - (2)
If , according to (14), we have
            
			Here, we keep in mind that 
 has a high convergence rate.
        
- (3)
In addition, according to (14), we have
We state that the convergence is pointwise and uniform, such that .
Lastly, we obtain (∀, ):
		 
The theorem is proved.    □
 We move ahead with a hyperbolic high-order neural network approximation.
Theorem 6. Let , , ,  Then,
- (2) 
 If  we obtain
considering the high rate of convergence at .
- (3)
 In addition, we obtain
which yields , pointwise and uniformly. Here, again,  is our high convergence speed.
 Proof.  Using the mean value theorem, we write
            
            for some 
 in 
, for any 
.
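For the reader’s convenience, the classical mean value theorem invoked here states that, for a function $g$ continuous on $[a, b]$ and differentiable on $(a, b)$,
$$ g(b) - g(a) = g'(\xi)\,(b - a) \quad \text{for some } \xi \in (a, b). $$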
Thus,
            
			In other words, there exists 
 such that
            
            where 
Inspired by [3,4], and applying the hyperbolic Taylor’s formula for
, when 
, then
            
So, we obtain
            
            where
            
			We assume that
            
For large enough , let , that is, when .
Hence,  or .
For , we have the following:
			
So, we have verified that when 
, it gives
            
Now, let us again assume that 
; then,
            
Also, we have
            
			Thus, it yields that
            
Therefore, we determine that
            
Also, we have that
            
            and
            
			We obtain that
            
According to (12), and based on (16)–(19), we acquire the following:
            
- (2)
If , according to (20), we achieve that
            with the high rate of convergence at 
- (3)
Moreover, according to (20), we gain
This yields pointwise and uniform convergence, such that .
Consequently, we gain (∀, ):
            
The theorem is proved.    □
 Now, we go further with a hybrid-type, i.e., hyperbolic–trigonometric high-order NN approximation.
Theorem 7. Let , , ,  Then,
- (2) 
 If   we obtain
and in (21),  appears as the highest speed of convergence.
Proof. Inspired by [4], we employ the hyperbolic–trigonometric Taylor’s formula for :
When 
, then
            
From Theorems 5 and 6, we determine that
            
            where
            
Without loss of generality, let us consider that .
Hence,  or .
For , we gain the following cases:
            
Namely, if 
, then
            
            Finally, when 
, we always determine that
            
Again, let 
; then,
            
If we let 
, we obtain
            
which is why there exists  such that
            
So,
            
We now prove that
            
We observe that
            
			As a result, we obtain (∀
, 
):
            
The theorem is established.    □
 Now, a general trigonometric result will be considered.
Theorem 8. Let , , ,  and  such that  Then,
- (2) 
 If   we obtain
 is the high convergence speed in both (1) and (2).
Proof. This proof is inspired by [4] (Chapter 3, Theorem 3.13, pp. 84–89), and also by the proof of Theorem 7.   □
 We finalize with a general hyperbolic result.
Theorem 9. Let , , , , and let  with  Then,
- (2) 
 If   we obtain
Again,  is the high convergence speed in both (1) and (2).
Proof. This proof is inspired by [4], and also by the proof of Theorem 7.    □