Article

Performance Analysis of Half-Hyperbolic Convolution (HHC)-Type Operators via Regression-Based Metrics

George A. Anastassiou, Seda Karateke and Metin Zontul
1 Department of Mathematical Sciences, University of Memphis, Memphis, TN 38152, USA
2 Department of Software Engineering, Faculty of Engineering and Natural Sciences, Istanbul Atlas University, Istanbul 34408, Türkiye
3 Department of Computer Engineering, Faculty of Engineering and Natural Sciences, Sivas University of Science and Technology, Sivas 58000, Türkiye
* Author to whom correspondence should be addressed.
Algorithms 2026, 19(3), 217; https://doi.org/10.3390/a19030217
Submission received: 26 February 2026 / Accepted: 11 March 2026 / Published: 13 March 2026
(This article belongs to the Special Issue Recent Advances in Numerical Algorithms and Their Applications)

Abstract

In this paper, we first introduce the adjustable half-hyperbolic (adj HH) tangent function as an activation function. We then establish both quantitative and qualitative convergence results for HH-activated convolution-type positive linear operators (PLOs) acting on the space of bounded and continuous functions on the real line. The theoretical convergence results are numerically validated by means of error decay plots obtained using Python (version 3.13). Moreover, we compare three different classes of HHC-type operators in terms of their convergence behavior and approximation performance. Finally, we conclude by discussing several potential application areas that illustrate the relevance of the presented theoretical framework.

1. Introduction and Preliminaries

In the contemporary artificial intelligence (AI) era, the design and analysis of neural network (NN) operators have become increasingly important, particularly with respect to their approximation properties and convergence behavior. Activation functions play a central role in determining the stability, learning capacity, and expressive power of neural architectures. Adjustable and asymmetric activation mechanisms provide additional flexibility, allowing for improved approximation performance and enhanced adaptability to complex data structures.
Convolution-type operators form a fundamental class in approximation theory and signal processing. Their systematic study within the framework of positive linear operators (PLOs) originated with Bernstein’s constructive proof of the Weierstrass theorem [1], and was further strengthened by Korovkin’s classical convergence theorem [2]. A unified operator-theoretic treatment of convolution, integral, and polynomial operators was developed by Butzer and Nessel [3,4]. In recent decades, Anastassiou has extended these ideas toward probabilistic and NN-inspired operators, providing quantitative approximation estimates and connections with modern learning models [5,6,7,8].
From the viewpoint of NNs, smooth and bounded activation functions such as the hyperbolic tangent are known to promote stable learning and efficient approximation. To overcome limitations such as slow convergence and limited flexibility, several parametrized and deformed variants of the tanh function have been proposed and analyzed [9,10,11]. Experimental evidence indicates that introducing asymmetry into the activation function can significantly improve accuracy and representational capacity [9]. Moreover, adaptive hyperbolic tangent activations have demonstrated superior performance in data mining and learning tasks due to their enhanced generalization ability [10].
Parallel to these developments, symmetrized neural network operators have been shown to exhibit improved convergence behavior and sharper approximation bounds compared to classical constructions, especially in convolution-type settings [6,8]. Motivated by these advances, the present work investigates convolution-type PLOs generated by the adjustable half-hyperbolic (adj HH) tangent activation function, establishing quantitative and qualitative convergence results supported by numerical validation.
This paper is organized as follows. In Section 1, we introduce the adj HH-tangent function and present the necessary preliminaries and notation. Section 2 develops the technical instruments: several versions of the operators are constructed, and their approximation behavior is analyzed. Section 3 is devoted to the main theoretical results, where we establish convergence properties and quantitative estimates for the proposed operators using tools from the theory of PLOs in approximation theory. Section 4 provides numerical examples and graphical illustrations that support the theoretical findings via regression-based metrics. Finally, Section 5 contains the concluding remarks and discusses possible directions for future research.
Definition 1
([5]). For $\kappa, \ell \in \mathbb{R}^{+}$ and $y \in \mathbb{R}$, we use the following “adjustable half-hyperbolic (adj HH) tangent function”:
$$h_{\kappa,\ell}(y) := \frac{1 - \kappa\, e^{-\ell y}}{1 + \kappa\, e^{-\ell y}}. \tag{1}$$
Here, $\ell$ controls the steepness of the transition, while $\kappa$ controls the asymmetry. Moreover, $h_{\kappa,\ell} \in C^{\infty}(\mathbb{R})$, being a composition of exponentials and rational operations.
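For concreteness, the activation can be coded in a few lines of Python. The sketch below is our own illustration (the helper name adj_hh_tangent and the parameter names kappa, ell are ours, chosen to match the notation of Algorithm 1 in Section 4); it is not the implementation from Appendix A.

```python
import numpy as np

def adj_hh_tangent(y, kappa=1.0, ell=1.0):
    """Adjustable half-hyperbolic tangent h_{kappa,ell}(y) of Definition 1."""
    e = np.exp(-ell * np.asarray(y, dtype=float))
    return (1.0 - kappa * e) / (1.0 + kappa * e)

# Sanity check: kappa = 1 and ell = 2 recover the classical tanh, since
# (1 - e^{-2y}) / (1 + e^{-2y}) = tanh(y).
y = np.linspace(-3.0, 3.0, 7)
print(np.allclose(adj_hh_tangent(y, kappa=1.0, ell=2.0), np.tanh(y)))  # True
```

In general, $h_{1,\ell}(y) = \tanh(\ell y / 2)$, so the classical hyperbolic tangent is the symmetric special case of Definition 1.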
Definition 2.
Note that $y - 1 < y + 1$ for every $y \in \mathbb{R}$. Then, let
$$D_{\kappa,\ell}(y) := \frac{1}{4}\left[ h_{\kappa,\ell}(y+1) - h_{\kappa,\ell}(y-1) \right] > 0,$$
and observe that
$$D_{\kappa,\ell}(-y) = \frac{1}{4}\left[ h_{\kappa,\ell}(-y+1) - h_{\kappa,\ell}(-y-1) \right] = D_{\frac{1}{\kappa},\ell}(y)$$
for $\kappa, \ell > 0$. Define the symmetrized kernel
$$D^{*}_{\kappa,\ell}(y) := \frac{D_{\kappa,\ell}(y) + D_{\frac{1}{\kappa},\ell}(y)}{2} > 0, \tag{4}$$
which is symmetric about the vertical axis, i.e., $D^{*}_{\kappa,\ell}(-y) = D^{*}_{\kappa,\ell}(y)$ for every $y \in \mathbb{R}$. It is known that
$$\sum_{j=-\infty}^{\infty} D^{*}_{\kappa,\ell}(y - j) = 1.$$
Again,
$$\int_{-\infty}^{\infty} D^{*}_{\kappa,\ell}(y)\,dy = 1$$
is valid, and for all $n \in \mathbb{N}$,
$$\int_{-\infty}^{\infty} D^{*}_{\kappa,\ell}(ny - v)\,dv = 1.$$
Thus, $D^{*}_{\kappa,\ell}$ works as a “density kernel”.
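The density property is easy to confirm numerically. The following sketch, again our own and under the same naming assumptions, checks the unit mass of $D^{*}_{\kappa,\ell}$ and its evenness for one sample parameter pair:

```python
import numpy as np
from scipy.integrate import quad

def adj_hh(y, kappa, ell):
    """Adj HH tangent h_{kappa,ell}(y) of (1)."""
    e = np.exp(-ell * y)
    return (1.0 - kappa * e) / (1.0 + kappa * e)

def D(y, kappa, ell):
    """Asymmetric kernel D_{kappa,ell} of Definition 2."""
    return 0.25 * (adj_hh(y + 1.0, kappa, ell) - adj_hh(y - 1.0, kappa, ell))

def D_star(y, kappa, ell):
    """Symmetrized density kernel D*_{kappa,ell} of (4)."""
    return 0.5 * (D(y, kappa, ell) + D(y, 1.0 / kappa, ell))

kappa, ell = 2.0, 1.5
mass, _ = quad(lambda t: D_star(t, kappa, ell), -np.inf, np.inf)
print(f"total mass of D*: {mass:.12f}")                                # ~ 1.0
print(np.isclose(D_star(0.7, kappa, ell), D_star(-0.7, kappa, ell)))   # evenness: True
```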
Definition 3
([12]). For $\varepsilon > 0$ and $f \in C_B(\mathbb{R}) := \{ f : \mathbb{R} \to \mathbb{R} \mid f \text{ is bounded and continuous} \}$, the modulus of continuity is defined by
$$\omega_1(f, \varepsilon) := \sup_{\substack{u_1, u_2 \in \mathbb{R} \\ |u_1 - u_2| \le \varepsilon}} \left| f(u_1) - f(u_2) \right|.$$
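Where a closed form of $\omega_1$ is unavailable, a grid search gives a usable estimate (a lower bound of the supremum, restricted to a finite window). This small sketch and its window parameters are our own illustration, not part of the paper's implementation:

```python
import numpy as np

def omega1(f, eps, a=-10.0, b=10.0, num=4001):
    """Grid estimate (lower bound) of the modulus of continuity omega_1(f, eps)."""
    y = np.linspace(a, b, num)
    fy = f(y)
    h = y[1] - y[0]
    kmax = max(1, int(eps / h))  # consider grid shifts j*h <= eps
    return max(np.abs(fy[j:] - fy[:-j]).max() for j in range(1, kmax + 1))

f = lambda y: np.exp(-y ** 2)
for eps in (0.5, 0.25, 0.125):
    print(eps, omega1(f, eps))  # shrinks with eps, as uniform continuity dictates
```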

2. Proposed Operators and Preliminary Results

Here, we focus on establishing a solid foundation for our approximation results with emphasis on the PLO properties of our operators.
Definition 4.
For all $v, y \in \mathbb{R}$, $n \in \mathbb{N}$, and $f \in C_B(\mathbb{R})$, the “classical adj HHC-type operator” $C_n$ is defined as follows:
$$C_n(f)(y) := \int_{-\infty}^{\infty} f\!\left(\frac{v}{n}\right) D^{*}_{\kappa,\ell}(ny - v)\,dv, \tag{9}$$
where $D^{*}_{\kappa,\ell}$ is as in (4).
Definition 5.
For $v, y \in \mathbb{R}$, $n \in \mathbb{N}$, and $f \in C_B(\mathbb{R})$, the “adj HHC-based Kantorovich-type operator” $K_n$ is defined by
$$K_n(f)(y) := \int_{-\infty}^{\infty} \left( n \int_{\frac{v}{n}}^{\frac{v+1}{n}} f(q)\,dq \right) D^{*}_{\kappa,\ell}(ny - v)\,dv. \tag{10}$$
Definition 6.
For every $v, y \in \mathbb{R}$, $n \in \mathbb{N}$, and $f \in C_B(\mathbb{R})$, the “adj HHC-quadrature-type operator” $Q_n$ is defined by
$$Q_n(f)(y) := \int_{-\infty}^{\infty} \left( \sum_{j=1}^{m} \alpha_j\, f\!\left( \frac{v}{n} + \frac{j}{nm} \right) \right) D^{*}_{\kappa,\ell}(ny - v)\,dv, \tag{11}$$
where $\sum_{j=1}^{m} \alpha_j = 1$ with $\alpha_j \ge 0$.
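A minimal NumPy/SciPy sketch of Definitions 4–6 follows. It is our own reading of the three operators, not the published Appendix A listing; the window half-width L (see the sliding-window truncation justified in Remark 2 below) and the default quadrature weights are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad

def D_star(y, kappa=1.0, ell=1.0):
    """Symmetrized density kernel D*_{kappa,ell} of (4)."""
    h = lambda t, k: (1 - k * np.exp(-ell * t)) / (1 + k * np.exp(-ell * t))
    D = lambda t, k: 0.25 * (h(t + 1.0, k) - h(t - 1.0, k))
    return 0.5 * (D(y, kappa) + D(y, 1.0 / kappa))

def C_op(f, y, n, kappa=1.0, ell=1.0, L=60.0):
    """Classical adj HHC-type operator C_n(f)(y), window [ny - L, ny + L]."""
    phi = lambda v: f(v / n) * D_star(n * y - v, kappa, ell)
    return quad(phi, n * y - L, n * y + L, limit=200)[0]

def K_op(f, y, n, kappa=1.0, ell=1.0, L=60.0):
    """Kantorovich-type operator K_n(f)(y); inner average computed by quadrature."""
    inner = lambda v: n * quad(f, v / n, (v + 1.0) / n)[0]
    phi = lambda v: inner(v) * D_star(n * y - v, kappa, ell)
    return quad(phi, n * y - L, n * y + L, limit=200)[0]

def Q_op(f, y, n, alpha=(0.5, 0.5), kappa=1.0, ell=1.0, L=60.0):
    """Quadrature-type operator Q_n(f)(y); weights alpha_j >= 0 sum to 1."""
    m = len(alpha)
    S = lambda v: sum(a * f(v / n + (j + 1.0) / (n * m)) for j, a in enumerate(alpha))
    phi = lambda v: S(v) * D_star(n * y - v, kappa, ell)
    return quad(phi, n * y - L, n * y + L, limit=200)[0]

f = lambda y: np.exp(-y ** 2)
for n in (5, 10, 20):
    print(n, C_op(f, 0.5, n), K_op(f, 0.5, n), Q_op(f, 0.5, n))
```

As $n$ grows, all three printed values approach $f(0.5) = e^{-1/4} \approx 0.7788$, in line with Theorem 1 below.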
Remark 1
([8]). For $f \in C_B(\mathbb{R})$, it is known that the operators $C_n, K_n, Q_n : C_B(\mathbb{R}) \to C_B(\mathbb{R})$ are PLOs satisfying $C_n(1) = K_n(1) = Q_n(1) = 1$ for $n \in \mathbb{N}$. Moreover, if $f \in C^{i}(\mathbb{R})$ and $f^{(j)} \in C_B(\mathbb{R})$ for $j = 0, 1, \ldots, i$, then these operators commute with differentiation up to order $i$; in other words,
$$\left( Ł_n(f) \right)^{(j)} = Ł_n\!\left( f^{(j)} \right), \quad \text{for each } Ł_n \in \{ C_n, K_n, Q_n \}, \tag{12}$$
$j = 0, 1, \ldots, i$, and all $y \in \mathbb{R}$.
Proposition 1.
The inequality
$$\int_{\mathcal{R}} D^{*}_{\kappa,\ell}(ny - v)\,dv < \left( \kappa + \frac{1}{\kappa} \right) e^{-\ell\left( n^{1-\delta} - 1 \right)}$$
is satisfied for $0 < \delta < 1$; $n \in \mathbb{N}$ with $n^{1-\delta} > 2$; and $\kappa, \ell > 0$, where
$$\mathcal{R} := \left\{ v \in \mathbb{R} : |ny - v| \ge n^{1-\delta} \right\}.$$
Proof. 
For every $y \in \mathbb{R}$, we have
$$D_{\kappa,\ell}(y) = \frac{1}{4}\left[ h_{\kappa,\ell}(y+1) - h_{\kappa,\ell}(y-1) \right].$$
Let $y \ge 1$, i.e., $0 \le y - 1 < y + 1$. Applying the mean value theorem, there exists $c \in (y-1, y+1)$ such that
$$h'_{\kappa,\ell}(c) = \frac{h_{\kappa,\ell}(y+1) - h_{\kappa,\ell}(y-1)}{2}.$$
Thus,
$$D_{\kappa,\ell}(y) = \frac{1}{2}\, h'_{\kappa,\ell}(c).$$
Compute
$$h'_{\kappa,\ell}(y) = \frac{2\kappa\ell\, e^{-\ell y}}{\left( 1 + \kappa\, e^{-\ell y} \right)^{2}};$$
hence,
$$D_{\kappa,\ell}(y) = \frac{\kappa\ell\, e^{\ell c}}{\left( e^{\ell c} + \kappa \right)^{2}}, \quad y - 1 < c < y + 1.$$
Therefore,
$$D_{\kappa,\ell}(y) < \kappa\ell\, e^{-\ell(y-1)}, \quad y \ge 1.$$
Similarly,
$$D_{\frac{1}{\kappa},\ell}(y) < \frac{\ell}{\kappa}\, e^{-\ell(y-1)}, \quad y \ge 1.$$
Adding the last two estimates and using (4) yields
$$D^{*}_{\kappa,\ell}(y) < \frac{1}{2}\left( \kappa + \frac{1}{\kappa} \right) \ell\, e^{-\ell(|y|-1)}, \quad |y| \ge 1,$$
where the absolute value follows from the evenness of $D^{*}_{\kappa,\ell}$. With
$$\mathcal{R} := \left\{ v \in \mathbb{R} : |ny - v| \ge n^{1-\delta} \right\}$$
and $n^{1-\delta} > 2$, we obtain
$$\int_{\mathcal{R}} D^{*}_{\kappa,\ell}(ny - v)\,dv < \frac{1}{2}\left( \kappa + \frac{1}{\kappa} \right) \int_{\mathcal{R}} \ell\, e^{-\ell\left( |ny - v| - 1 \right)}\,dv = \left( \kappa + \frac{1}{\kappa} \right) \ell \int_{n^{1-\delta}}^{\infty} e^{-\ell(t-1)}\,dt = \left( \kappa + \frac{1}{\kappa} \right) e^{-\ell\left( n^{1-\delta} - 1 \right)}.$$
   □
Proposition 2.
For $\kappa, \ell, \Delta > 0$,
$$\int_{-\infty}^{+\infty} |p|^{\Delta}\, D^{*}_{\kappa,\ell}(p)\,dp \le \frac{h_{1,\ell}(1)}{\Delta + 1} + \left( \frac{1}{\kappa} + \kappa \right) \frac{e^{\ell}}{\ell^{\Delta}}\, \Gamma(\Delta + 1) < \infty \tag{25}$$
is satisfied.
Proof. 
Let us write
$$\int_{-\infty}^{+\infty} |p|^{\Delta}\, D^{*}_{\kappa,\ell}(p)\,dp = 2 \int_{0}^{\infty} p^{\Delta}\, D^{*}_{\kappa,\ell}(p)\,dp.$$
We use the fact from [8] that the kernel attains its maximum at $\frac{\ln \kappa}{\ell}$, namely
$$D_{\kappa,\ell}\!\left( \frac{\ln \kappa}{\ell} \right) = \frac{1 - e^{-\ell}}{2\left( 1 + e^{-\ell} \right)} = \frac{h_{1,\ell}(1)}{2},$$
so that $D^{*}_{\kappa,\ell}(y) \le \frac{h_{1,\ell}(1)}{2}$ for all $y \in \mathbb{R}$, and that, for $|y| \ge 1$,
$$D^{*}_{\kappa,\ell}(y) < \frac{1}{2}\left( \kappa + \frac{1}{\kappa} \right) \ell\, e^{-\ell(|y|-1)}.$$
Hence,
$$2 \int_{0}^{\infty} p^{\Delta}\, D^{*}_{\kappa,\ell}(p)\,dp = 2\left[ \int_{0}^{1} p^{\Delta}\, D^{*}_{\kappa,\ell}(p)\,dp + \int_{1}^{\infty} p^{\Delta}\, D^{*}_{\kappa,\ell}(p)\,dp \right] \le 2 \int_{0}^{1} p^{\Delta}\,dp\; \frac{h_{1,\ell}(1)}{2} + \left( \kappa + \frac{1}{\kappa} \right) \ell \int_{1}^{\infty} p^{\Delta}\, e^{-\ell(p-1)}\,dp \le \frac{h_{1,\ell}(1)}{\Delta + 1} + \left( \kappa + \frac{1}{\kappa} \right) \frac{e^{\ell}}{\ell^{\Delta}}\, \Gamma(\Delta + 1).$$
This proves (25).    □
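Proposition 2 can be sanity-checked numerically. The sketch below is ours; it evaluates the absolute moment on the left of (25) and compares it with the right-hand side for a few values of $\Delta$:

```python
import numpy as np
from math import exp, gamma
from scipy.integrate import quad

def D_star(y, kappa, ell):
    """Symmetrized density kernel D*_{kappa,ell} of (4)."""
    h = lambda t, k: (1 - k * np.exp(-ell * t)) / (1 + k * np.exp(-ell * t))
    D = lambda t, k: 0.25 * (h(t + 1.0, k) - h(t - 1.0, k))
    return 0.5 * (D(y, kappa) + D(y, 1.0 / kappa))

def abs_moment(delta, kappa, ell):
    """Left-hand side of (25)."""
    return quad(lambda p: abs(p) ** delta * D_star(p, kappa, ell), -np.inf, np.inf)[0]

def prop2_bound(delta, kappa, ell):
    """Right-hand side of (25); h_{1,ell}(1) = (1 - e^{-ell}) / (1 + e^{-ell})."""
    h11 = (1.0 - exp(-ell)) / (1.0 + exp(-ell))
    return h11 / (delta + 1.0) + (1.0 / kappa + kappa) * exp(ell) * gamma(delta + 1.0) / ell ** delta

kappa, ell = 2.0, 1.5
for delta in (1.0, 2.0, 3.0):
    print(f"Delta={delta}: moment={abs_moment(delta, kappa, ell):.6f} "
          f"<= bound={prop2_bound(delta, kappa, ell):.6f}")
```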
Proposition 3.
Let $\bar{\kappa}, n \in \mathbb{N}$, $\kappa \in \mathbb{R}^{+}$, and $y_0 \in \mathbb{R}$, and assume that the operator $C_n$ is defined as in (9). Then,
$$C_n\!\left( |\cdot - y_0|^{\bar{\kappa}} \right)(y_0) > 0,$$
and
$$0 < C_n\!\left( |\cdot - y_0|^{\bar{\kappa}} \right)(y_0) \le \frac{1}{n^{\bar{\kappa}}}\left[ \frac{h_{1,\ell}(1)}{\bar{\kappa} + 1} + \left( \frac{1}{\kappa} + \kappa \right) \frac{e^{\ell}}{\ell^{\bar{\kappa}}}\, \Gamma(\bar{\kappa} + 1) \right] < +\infty,$$
where the right-hand side tends to $0$ as $n \to +\infty$.
Proof. 
It suffices to observe that
$$0 < C_n\!\left( |\cdot - y_0|^{\bar{\kappa}} \right)(y_0) = \int_{-\infty}^{\infty} \left| \frac{v}{n} - y_0 \right|^{\bar{\kappa}} D^{*}_{\kappa,\ell}(ny_0 - v)\,dv = \frac{1}{n^{\bar{\kappa}}} \int_{-\infty}^{\infty} |ny_0 - v|^{\bar{\kappa}}\, D^{*}_{\kappa,\ell}(ny_0 - v)\,dv = \frac{1}{n^{\bar{\kappa}}} \int_{-\infty}^{+\infty} |p|^{\bar{\kappa}}\, D^{*}_{\kappa,\ell}(p)\,dp \le \frac{1}{n^{\bar{\kappa}}}\left[ \frac{h_{1,\ell}(1)}{\bar{\kappa} + 1} + \left( \frac{1}{\kappa} + \kappa \right) \frac{e^{\ell}}{\ell^{\bar{\kappa}}}\, \Gamma(\bar{\kappa} + 1) \right],$$
using (25). This is finite and tends to $0$ as $n \to \infty$.    □
Proposition 4.
Let the operator $K_n$ be defined as in (10) for $\bar{\kappa}, n \in \mathbb{N}$, $\kappa \in \mathbb{R}^{+}$, and $y_0 \in \mathbb{R}$. Then,
$$K_n\!\left( |\cdot - y_0|^{\bar{\kappa}} \right)(y_0) > 0,$$
and
$$0 < K_n\!\left( |\cdot - y_0|^{\bar{\kappa}} \right)(y_0) \le \frac{2^{\bar{\kappa}-1}}{n^{\bar{\kappa}}}\left[ 1 + \frac{h_{1,\ell}(1)}{\bar{\kappa} + 1} + \left( \frac{1}{\kappa} + \kappa \right) \frac{e^{\ell}}{\ell^{\bar{\kappa}}}\, \Gamma(\bar{\kappa} + 1) \right] < +\infty,$$
where the right-hand side tends to $0$ as $n \to +\infty$.
Proof. 
One has that
$$0 < K_n\!\left( |\cdot - y_0|^{\bar{\kappa}} \right)(y_0) = \int_{-\infty}^{\infty} \left( n \int_{0}^{\frac{1}{n}} \left| q + \frac{v}{n} - y_0 \right|^{\bar{\kappa}} dq \right) D^{*}_{\kappa,\ell}(ny_0 - v)\,dv \le \int_{-\infty}^{\infty} \left( \frac{1}{n} + \left| \frac{v}{n} - y_0 \right| \right)^{\bar{\kappa}} D^{*}_{\kappa,\ell}(ny_0 - v)\,dv = \frac{1}{n^{\bar{\kappa}}} \int_{-\infty}^{\infty} \left( 1 + |ny_0 - v| \right)^{\bar{\kappa}} D^{*}_{\kappa,\ell}(ny_0 - v)\,dv \le \frac{2^{\bar{\kappa}-1}}{n^{\bar{\kappa}}} \int_{-\infty}^{\infty} \left( 1 + |ny_0 - v|^{\bar{\kappa}} \right) D^{*}_{\kappa,\ell}(ny_0 - v)\,dv = \frac{2^{\bar{\kappa}-1}}{n^{\bar{\kappa}}} \left[ 1 + \int_{-\infty}^{+\infty} |p|^{\bar{\kappa}}\, D^{*}_{\kappa,\ell}(p)\,dp \right] \le \frac{2^{\bar{\kappa}-1}}{n^{\bar{\kappa}}} \left[ 1 + \frac{h_{1,\ell}(1)}{\bar{\kappa} + 1} + \left( \frac{1}{\kappa} + \kappa \right) \frac{e^{\ell}}{\ell^{\bar{\kappa}}}\, \Gamma(\bar{\kappa} + 1) \right],$$
where we used the convexity inequality $(a + b)^{\bar{\kappa}} \le 2^{\bar{\kappa}-1}\left( a^{\bar{\kappa}} + b^{\bar{\kappa}} \right)$ for $a, b \ge 0$, together with (25).    □
Proposition 5.
Consider $\bar{\kappa}, n \in \mathbb{N}$, $\kappa \in \mathbb{R}^{+}$, and $y_0 \in \mathbb{R}$, and suppose that the operator $Q_n$ is defined as in (11). Then,
$$Q_n\!\left( |\cdot - y_0|^{\bar{\kappa}} \right)(y_0) > 0,$$
and
$$0 < Q_n\!\left( |\cdot - y_0|^{\bar{\kappa}} \right)(y_0) \le \frac{2^{\bar{\kappa}-1}}{n^{\bar{\kappa}}}\left[ 1 + \frac{h_{1,\ell}(1)}{\bar{\kappa} + 1} + \left( \frac{1}{\kappa} + \kappa \right) \frac{e^{\ell}}{\ell^{\bar{\kappa}}}\, \Gamma(\bar{\kappa} + 1) \right],$$
which is finite and tends to $0$ as $n \to +\infty$.
Proof. 
It is known that
$$0 < Q_n\!\left( |\cdot - y_0|^{\bar{\kappa}} \right)(y_0) = \int_{-\infty}^{+\infty} \left( \sum_{j=1}^{m} \alpha_j \left| \frac{v}{n} + \frac{j}{nm} - y_0 \right|^{\bar{\kappa}} \right) D^{*}_{\kappa,\ell}(ny_0 - v)\,dv \le \int_{-\infty}^{+\infty} \left( \frac{1}{n} + \left| \frac{v}{n} - y_0 \right| \right)^{\bar{\kappa}} D^{*}_{\kappa,\ell}(ny_0 - v)\,dv = \frac{1}{n^{\bar{\kappa}}} \int_{-\infty}^{+\infty} \left( 1 + |ny_0 - v| \right)^{\bar{\kappa}} D^{*}_{\kappa,\ell}(ny_0 - v)\,dv,$$
since $0 < \frac{j}{nm} \le \frac{1}{n}$ and $\sum_{j=1}^{m} \alpha_j = 1$. The rest of the proof is identical to that of Proposition 4.    □

3. Main Results

Theorem 1.
We denote by $Ł_n$ any of the operators $C_n$, $K_n$, or $Q_n$ for $n \in \mathbb{N}$, acting on $C_B(\mathbb{R})$. For $y \in \mathbb{R}$ and $f \in C_B(\mathbb{R})$, the following holds:
$$\left| Ł_n(f)(y) - f(y) \right| \le 2\, \omega_1\!\left( f,\; Ł_n\!\left( |\cdot - y| \right)(y) \right) < +\infty.$$
Moreover, if $f$ is uniformly continuous, then $\lim_{n \to +\infty} Ł_n(f)(y) = f(y)$.
Proof. 
This proof is obtained through a straightforward application of Theorem 3.6 in [8] together with Propositions 3–5.    □
Theorem 2.
Assume that $f \in C^{N}(\mathbb{R})$ with $f, f^{(N)} \in C_B(\mathbb{R})$, and that $f^{(j)}(y) = 0$ for $j = 1, \ldots, N$; $N \in \mathbb{N}$. Then,
$$\left| Ł_n(f)(y) - f(y) \right| \le \omega_1\!\left( f^{(N)},\; \left( Ł_n\!\left( |\cdot - y|^{N+1} \right)(y) \right)^{\frac{1}{N+1}} \right) \left[ \frac{Ł_n\!\left( |\cdot - y|^{N} \right)(y)}{N!} + \frac{\left( Ł_n\!\left( |\cdot - y|^{N+1} \right)(y) \right)^{\frac{N}{N+1}}}{(N+1)!} \right]$$
is finite. Thus, $\lim_{n \to \infty} Ł_n(f)(y) = f(y)$. Specifically, if $N = 1$, then we obtain
$$\left| Ł_n(f)(y) - f(y) \right| \le \omega_1\!\left( f',\; \left( Ł_n\!\left( |\cdot - y|^{2} \right)(y) \right)^{\frac{1}{2}} \right) \left[ Ł_n\!\left( |\cdot - y| \right)(y) + \frac{\left( Ł_n\!\left( |\cdot - y|^{2} \right)(y) \right)^{\frac{1}{2}}}{2} \right] < \infty.$$
Again, $\lim_{n \to \infty} Ł_n(f)(y) = f(y)$ is satisfied.
Proof. 
Applying Theorem 4.1 of [7] together with Propositions 3–5 immediately yields the proof.    □
Notation 1.
Let $\bar{\kappa}, n \in \mathbb{N}$. Then, set
$$\rho_{1,n}(\bar{\kappa}) := \frac{1}{n^{\bar{\kappa}}}\left[ \frac{h_{1,\ell}(1)}{\bar{\kappa} + 1} + \left( \frac{1}{\kappa} + \kappa \right) \frac{e^{\ell}}{\ell^{\bar{\kappa}}}\, \Gamma(\bar{\kappa} + 1) \right] \to 0, \quad \text{as } n \to +\infty, \tag{41}$$
and
$$\rho_{2,n}(\bar{\kappa}) := \frac{2^{\bar{\kappa}-1}}{n^{\bar{\kappa}}}\left[ 1 + \frac{h_{1,\ell}(1)}{\bar{\kappa} + 1} + \left( \frac{1}{\kappa} + \kappa \right) \frac{e^{\ell}}{\ell^{\bar{\kappa}}}\, \Gamma(\bar{\kappa} + 1) \right] \to 0, \quad \text{as } n \to +\infty. \tag{42}$$
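Since $\rho_{1,n}$ and $\rho_{2,n}$ are fully explicit, their decay can be tabulated directly. A small sketch of ours, with $\kappa = \ell = 1$ as in Example 1:

```python
from math import exp, gamma

def rho1(n, kb, kappa=1.0, ell=1.0):
    """rho_{1,n}(kb) of (41)."""
    h11 = (1.0 - exp(-ell)) / (1.0 + exp(-ell))
    return (h11 / (kb + 1) + (1.0 / kappa + kappa) * exp(ell) * gamma(kb + 1) / ell ** kb) / n ** kb

def rho2(n, kb, kappa=1.0, ell=1.0):
    """rho_{2,n}(kb) of (42)."""
    h11 = (1.0 - exp(-ell)) / (1.0 + exp(-ell))
    core = 1.0 + h11 / (kb + 1) + (1.0 / kappa + kappa) * exp(ell) * gamma(kb + 1) / ell ** kb
    return 2.0 ** (kb - 1) * core / n ** kb

for n in (5, 10, 20, 40, 80):
    print(n, rho1(n, 1), rho2(n, 1))   # O(1/n) decay for kb = 1, cf. (41) and (42)
```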
Corollary 1.
Let $N \in \mathbb{N}$, $f \in C^{N}(\mathbb{R})$ with $f, f^{(N)} \in C_B(\mathbb{R})$, and $y \in \mathbb{R}$. Assume that $f^{(i)}(y) = 0$ for $i = 1, \ldots, N$. Then, for $n \in \mathbb{N}$,
$$\left| C_n(f)(y) - f(y) \right| \le \omega_1\!\left( f^{(N)},\; \rho_{1,n}(N+1)^{\frac{1}{N+1}} \right) \left[ \frac{\rho_{1,n}(N)}{N!} + \frac{\rho_{1,n}(N+1)^{\frac{N}{N+1}}}{(N+1)!} \right] < +\infty.$$
Thus, $\lim_{n \to +\infty} C_n(f)(y) = f(y)$.
Proof. 
By a direct application of Proposition 3 and Theorem 2 with (41).    □
Corollary 2.
Let $N \in \mathbb{N}$, $f \in C^{N}(\mathbb{R})$ with $f, f^{(N)} \in C_B(\mathbb{R})$, and $y \in \mathbb{R}$. Assume $f^{(i)}(y) = 0$ for $i = 1, \ldots, N$. Then, for $n \in \mathbb{N}$,
$$\max\left\{ \left| K_n(f)(y) - f(y) \right|,\; \left| Q_n(f)(y) - f(y) \right| \right\} \le \omega_1\!\left( f^{(N)},\; \rho_{2,n}(N+1)^{\frac{1}{N+1}} \right) \left[ \frac{\rho_{2,n}(N)}{N!} + \frac{\rho_{2,n}(N+1)^{\frac{N}{N+1}}}{(N+1)!} \right] < +\infty.$$
Hence, $\lim_{n \to +\infty} K_n(f)(y) = \lim_{n \to +\infty} Q_n(f)(y) = f(y)$.
Proof. 
The proof follows directly from Theorem 2, Propositions 4 and 5, and (42).    □
Corollary 3.
Let $f \in C_B(\mathbb{R})$. Then, for $n \in \mathbb{N}$,
$$\left\| C_n(f) - f \right\|_{\infty} \le 2\, \omega_1\!\left( f, \rho_{1,n}(1) \right) < +\infty. \tag{45}$$
If $f$ is also uniformly continuous, we obtain $\lim_{n \to +\infty} C_n(f) = f$, pointwise and uniformly.
Proof. 
An application of Proposition 3, Theorem 1, and (41) completes the proof.    □
Corollary 4.
Let $f \in C_B(\mathbb{R})$. Then, for $n \in \mathbb{N}$,
$$\max\left\{ \left\| K_n(f) - f \right\|_{\infty},\; \left\| Q_n(f) - f \right\|_{\infty} \right\} \le 2\, \omega_1\!\left( f, \rho_{2,n}(1) \right) < +\infty. \tag{46}$$
If $f$ is also uniformly continuous, we obtain $\lim_{n \to +\infty} K_n(f) = \lim_{n \to +\infty} Q_n(f) = f$, pointwise and uniformly.
Proof. 
Combining Propositions 4 and 5, Theorem 1, and (42) establishes the proof.    □
Theorem 3.
Let $f^{(j)} \in C_B(\mathbb{R})$ for $j = 0, 1, \ldots, N$. Then, for $n \in \mathbb{N}$,
$$\left\| \left( C_n(f) \right)^{(j)} - f^{(j)} \right\|_{\infty} \le 2\, \omega_1\!\left( f^{(j)}, \rho_{1,n}(1) \right) < +\infty$$
is valid. Moreover, if $f^{(j)}$ is uniformly continuous, then $\lim_{n \to +\infty} \left( C_n(f) \right)^{(j)} = f^{(j)}$, pointwise and uniformly.
Proof. 
Corollary 3 and (12) together yield the proof.    □
Theorem 4.
Let $f^{(j)} \in C_B(\mathbb{R})$ for $j = 0, 1, \ldots, N$. Then, for $n \in \mathbb{N}$,
$$\max\left\{ \left\| \left( K_n(f) \right)^{(j)} - f^{(j)} \right\|_{\infty},\; \left\| \left( Q_n(f) \right)^{(j)} - f^{(j)} \right\|_{\infty} \right\} \le 2\, \omega_1\!\left( f^{(j)}, \rho_{2,n}(1) \right) < +\infty.$$
If $f^{(j)}$ is uniformly continuous, we obtain $\lim_{n \to +\infty} \left( K_n(f) \right)^{(j)} = \lim_{n \to +\infty} \left( Q_n(f) \right)^{(j)} = f^{(j)}$, pointwise and uniformly.
Proof. 
The proof follows from Corollary 4 and (12).    □

4. Numerical Approximation and Error Analysis via Regression-Based Metrics

In this section, we illustrate the approximation behavior of the adj HHC-type operators by numerical experiments. All computations are performed on the real line and confirm the theoretical convergence results established in the previous sections. Algorithms 1–6 summarize the computational procedures, while, for reproducibility, the complete Python implementation is provided in Appendix A (Listings A1–A4).
Algorithm 1 Activation, kernel, and target function
Require: parameters $\kappa, \ell > 0$
1: Define the activation function: $h(y; \kappa, \ell) \leftarrow \dfrac{1 - \kappa\, e^{-\ell y}}{1 + \kappa\, e^{-\ell y}}$.
2: Define the adj HHC kernel: $D(y; \kappa, \ell) \leftarrow \dfrac{1}{4}\left[ h(y+1; \kappa, \ell) - h(y-1; \kappa, \ell) \right]$.
3: Define the test function: $f(y) \leftarrow \begin{cases} 0, & y < 0, \\ e^{-y}, & y \ge 0. \end{cases}$
Algorithm 2 Numerical evaluation of $C_n(f)(y)$ (truncated adaptive quadrature)
Require: $y \in \mathbb{R}$, $n \in \mathbb{N}$, parameters $(\kappa, \ell)$, truncation $L > 0$
1: $a \leftarrow ny - L$,  $b \leftarrow ny + 0.8L$
2: $\phi(v) \leftarrow f(v/n)\, D(ny - v; \kappa, \ell)$
3: Compute $I \approx \int_a^b \phi(v)\,dv$ using adaptive quadrature
4: return $C_n(f)(y) \approx I$
Algorithm 3 Evaluation of $K_n(f)(y)$ (finite weighted sampling)
Require: $y \in \mathbb{R}$, $n \in \mathbb{N}$, $m \in \mathbb{N}$, weights $\alpha_1, \ldots, \alpha_m$ with $\sum_{j=1}^{m} \alpha_j = 1$
1: $S \leftarrow 0$
2: for $j = 1$ to $m$ do
3:   $S \leftarrow S + \alpha_j\, f\!\left( y + \frac{j}{nm} \right)$
4: end for
5: return $K_n(f)(y) \approx S$
Algorithm 4 Numerical evaluation of $Q_n(f)(y)$ (adj quadrature)
Require: $y \in \mathbb{R}$, $n \in \mathbb{N}$, $m \in \mathbb{N}$, weights $\alpha_1, \ldots, \alpha_m$, parameters $(\kappa, \ell)$, truncation $L > 0$
1: $a \leftarrow ny - L$,  $b \leftarrow ny + L$
2: $S(v) \leftarrow \sum_{j=1}^{m} \alpha_j\, f\!\left( \frac{v}{n} + \frac{j}{nm} \right)$
3: $\psi(v) \leftarrow S(v)\, D(ny - v; \kappa, \ell)$
4: Compute $I \approx \int_a^b \psi(v)\,dv$ using adaptive quadrature
5: return $Q_n(f)(y) \approx I$
Algorithm 5 Numerical experiment pipeline (tables and figures)
Require: interval $[y_{\min}, y_{\max}]$, number of points $N$, set of $n$ values, reference $n_{\mathrm{ref}}$, parameters $(\kappa, \ell)$, $(m, \alpha)$
1: Construct a uniform grid $\{y_i\}_{i=1}^{N} \subset [y_{\min}, y_{\max}]$
2: Compute $t_i \leftarrow f(y_i)$ for all $i$
3: for each $n$ do
4:   Compute $C_i \leftarrow C_n(f)(y_i)$, $K_i \leftarrow K_n(f)(y_i)$, $Q_i \leftarrow Q_n(f)(y_i)$
5:   Evaluate metrics for $(t_i, C_i)$, $(t_i, K_i)$, $(t_i, Q_i)$ via Algorithm 6
6:   Report a metric table for $C_n$, $K_n$, $Q_n$
7: end for
8: Plot $f$ and $C_{n_{\mathrm{ref}}}(f)$, $K_{n_{\mathrm{ref}}}(f)$, $Q_{n_{\mathrm{ref}}}(f)$
9: Plot pointwise errors $|C_{n_{\mathrm{ref}}}(f) - f|$, $|K_{n_{\mathrm{ref}}}(f) - f|$, $|Q_{n_{\mathrm{ref}}}(f) - f|$
10: For each $n$, compute $\|T_n(f) - f\|_{\infty} = \max_i |a_i - t_i|$ and plot versus $n$ in log–log scale
Algorithm 6 Discrete error metrics on a sampling grid
Require: grid $\{y_i\}_{i=1}^{N}$, targets $t_i = f(y_i)$, approximations $a_i = T_n(f)(y_i)$
1: $\mathrm{MSE} \leftarrow \frac{1}{N} \sum_{i=1}^{N} (t_i - a_i)^2$
2: $\mathrm{RMSE} \leftarrow \sqrt{\mathrm{MSE}}$
3: $\mathrm{MAE} \leftarrow \frac{1}{N} \sum_{i=1}^{N} |t_i - a_i|$
4: $\mathrm{MaxE} \leftarrow \max_{1 \le i \le N} |t_i - a_i|$
5: $R^2 \leftarrow 1 - \dfrac{\sum_{i=1}^{N} (t_i - a_i)^2}{\sum_{i=1}^{N} (t_i - \bar{t})^2}$, where $\bar{t} = \frac{1}{N} \sum_{i=1}^{N} t_i$
6: $\mathrm{MAPE} \leftarrow \dfrac{100}{N} \sum_{i=1}^{N} \dfrac{|t_i - a_i|}{|t_i| + \varepsilon}$ (small $\varepsilon > 0$)
7: return MSE, RMSE, MAE, MaxE, $R^2$, MAPE
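Algorithm 6 translates almost verbatim into NumPy. The rendering below is our own; the published Listing A2 may differ in details such as the choice of $\varepsilon$:

```python
import numpy as np

def error_metrics(t, a, eps=1e-12):
    """Discrete metrics of Algorithm 6 for targets t_i and approximations a_i."""
    t = np.asarray(t, dtype=float)
    a = np.asarray(a, dtype=float)
    e = t - a
    mse = float(np.mean(e ** 2))
    return {
        "MSE": mse,
        "RMSE": float(np.sqrt(mse)),
        "MAE": float(np.mean(np.abs(e))),
        "MaxE": float(np.max(np.abs(e))),
        "R2": 1.0 - float(np.sum(e ** 2) / np.sum((t - t.mean()) ** 2)),
        "MAPE": 100.0 * float(np.mean(np.abs(e) / (np.abs(t) + eps))),
    }

# usage: error_metrics(f(grid), operator_values_on_grid)
```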

4.1. Quantitative Error Measures

Let $[a, b] \subset \mathbb{R}$ be a fixed compact interval containing the main support of $f$, let $f \in C_B(\mathbb{R})$, and let
$$Ł_n \in \{ C_n, K_n, Q_n \}.$$
We consider a uniform partition of $[a, b]$ given by
$$y_i = a + (i-1)h, \quad i = 1, 2, \ldots, N,$$
where $h = \frac{b-a}{N-1}$ is the mesh size. The values $\{y_i\}_{i=1}^{N}$ thus form a sampling grid on $[a, b]$, which is used to evaluate both the target function $f$ and its operator approximations $Ł_n(f)$. We then define the discrete pointwise error by
$$e_i := Ł_n(f)(y_i) - f(y_i), \quad i = 1, \ldots, N.$$
Motivated by Corollaries 3 and 4, which provide estimates in the uniform norm, we introduce the following quantitative error metrics (see [13]).
(i)
Maximum error (MaxE)
$$E(Ł_n, f) := \max\{ |e_1|, |e_2|, \ldots, |e_N| \} \le \left\| Ł_n(f) - f \right\|_{\infty}.$$
This metric directly reflects the theoretical estimates in (45) and (46).
(ii)
Mean absolute error (MAE)
$$\mathrm{MAE}(Ł_n, f) := \frac{1}{N} \sum_{i=1}^{N} |e_i| = \frac{1}{N} \| e \|_1,$$
where $\|\cdot\|_1$ denotes the $\ell_1$-norm of the discrete error vector $e = (e_1, \ldots, e_N)$.
(iii)
Mean squared error (MSE)
$$\mathrm{MSE}(Ł_n, f) := \frac{1}{N} \sum_{i=1}^{N} |e_i|^2 = \frac{1}{N} \| e \|_2^2,$$
where $\|\cdot\|_2$ is the Euclidean norm of the discrete error vector.
(iv)
Root mean squared error (RMSE)
$$\mathrm{RMSE}(Ł_n, f) := \sqrt{ \mathrm{MSE}(Ł_n, f) }.$$
This quantity penalizes large local deviations and highlights pointwise instabilities.
(v)
Coefficient of determination ($R^2$)
$$R^2(Ł_n, f) := 1 - \frac{ \sum_{i=1}^{N} |e_i|^2 }{ \sum_{i=1}^{N} | f(y_i) - \bar{f} |^2 }, \quad \bar{f} = \frac{1}{N} \sum_{i=1}^{N} f(y_i).$$
Values of $R^2$ close to 1 indicate strong agreement between $Ł_n(f)$ and $f$.
These discrete metrics provide numerical counterparts of the theoretical convergence results established in Corollaries 3 and 4, and they allow for a detailed comparison of the approximation behavior of the operators $C_n$, $K_n$, and $Q_n$ for increasing values of $n$.

4.2. Examples

Example 1.
As a test function, we consider the bounded, infinitely differentiable function $f(y) = e^{-y^2}$, $y \in \mathbb{R}$. This function is a standard benchmark in approximation theory, since it is smooth, rapidly decreasing at infinity, and satisfies the assumptions required for convolution-type PLOs.
We take $D^{*}_{\kappa,\ell}$ in (4) as the density kernel, the “classical adj HHC-type operator” $C_n$ in (9) as the PLO, and the adj HH-tangent function in (1) as the activation function, with $\kappa = \ell = 1$.
As Figure 1, Figure 2 and Figure 3 and Table 1 show, the numerical results and convergence plots reveal that the proposed operator $C_n$ provides an accurate fit on the considered interval, with errors decaying as $n$ increases.
Remark 2.
Theoretical justification of the “sliding window truncation”:
(1) The HHC-type operator is defined by an improper integral, so a direct numerical evaluation over the infinite integration domain is not feasible. However, the structure of the kernel $D^{*}_{\kappa,\ell}$ allows for a rigorous truncation. Since $h_{\kappa,\ell}$ converges exponentially to $\pm 1$ as $y \to \pm\infty$, there exist suitable constants $c, C > 0$ such that
$$D^{*}_{\kappa,\ell}(y) \le C\, e^{-c|y|}$$
is satisfied for all $y \in \mathbb{R}$. Consequently, the kernel is rapidly decaying and effectively localized around the origin.
(2) Change of variables and concentration near $v = ny$: Observe that the kernel is evaluated at $ny - v$. Hence, the integrand $f\!\left(\frac{v}{n}\right) D^{*}_{\kappa,\ell}(ny - v)$ is significant only when $v$ is close to $ny$; for $|v - ny|$ large, the kernel suppresses the contribution exponentially. This yields the estimate
$$\left| \int_{|v - ny| > L} f\!\left(\frac{v}{n}\right) D^{*}_{\kappa,\ell}(ny - v)\,dv \right| \le \|f\|_{\infty} \int_{|t| > L} D^{*}_{\kappa,\ell}(t)\,dt,$$
uniformly in $y \in \mathbb{R}$, and the right-hand side tends to $0$ as $L \to \infty$.
(3) Sliding window truncation: Based on the previous arguments, the improper integral can be approximated by
$$C_n(f)(y) \approx \int_{ny - L}^{ny + L} f\!\left(\frac{v}{n}\right) D^{*}_{\kappa,\ell}(ny - v)\,dv,$$
where the integration interval slides with the evaluation point $y$. Please note that this “sliding window” is not an ad hoc numerical trick; it is a direct consequence of the approximate-identity structure of the kernel $D^{*}_{\kappa,\ell}$.
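The tail mass that the sliding window discards can be observed directly. The sketch below (our own, with the kernel helper repeated for self-containment) prints the mass of $D^{*}_{\kappa,\ell}$ outside $[-L, L]$, which decays roughly like $e^{-\ell L}$:

```python
import numpy as np
from scipy.integrate import quad

def D_star(y, kappa=1.0, ell=1.0):
    """Symmetrized density kernel D*_{kappa,ell} of (4)."""
    h = lambda t, k: (1 - k * np.exp(-ell * t)) / (1 + k * np.exp(-ell * t))
    D = lambda t, k: 0.25 * (h(t + 1.0, k) - h(t - 1.0, k))
    return 0.5 * (D(y, kappa) + D(y, 1.0 / kappa))

for L in (5.0, 10.0, 20.0, 40.0):
    tail = 2.0 * quad(lambda t: D_star(t), L, np.inf)[0]  # evenness: twice one tail
    print(f"L={L:5.1f}  tail mass ~ {tail:.3e}")
```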
Example 2.
In this example, we numerically investigate the approximation behavior of the three HHC-type operators acting on a bounded function defined on the real line. We consider the bounded function
$$f(y) = \begin{cases} 0, & y < 0, \\ e^{-y}, & y \ge 0, \end{cases} \tag{60}$$
which is not symmetric and exhibits a jump at the origin, making it a challenging benchmark for the proposed operators. We compare $C_n$, $K_n$, and $Q_n$ generated by the adj HH tangent kernel. For the numerical implementation, the improper integrals are approximated by a sliding-window truncation of the form
$$\int_{-\infty}^{+\infty} (\cdot)\,dv \approx \int_{ny - L}^{ny + L} (\cdot)\,dv,$$
where a sufficiently large $L > 0$ is chosen to ensure numerical stability and accuracy.
As shown in Figure 4, Figure 5 and Figure 6 and Table 2, the numerical experiments and graphical convergence results demonstrate that the proposed operators $C_n$, $K_n$, and $Q_n$ exhibit satisfactory performance within the considered functional setting.
The operators are evaluated on a uniform grid in the interval $[-5, 5]$ for several values of $n$. Approximation quality is measured using classical regression-based metrics, including the root mean squared error (RMSE), mean absolute error (MAE), maximum error, and the coefficient of determination $R^2$.
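An illustrative, deliberately coarse version of this experiment can be assembled from the sketches above (C_op, K_op, Q_op after Definition 6 and error_metrics after Algorithm 6). It is our own pipeline; the grid, weights, and truncation differ from the published listings, so the printed numbers are not expected to reproduce Table 2 exactly.

```python
import numpy as np

# reuses C_op, K_op, Q_op and error_metrics from the earlier sketches
f = lambda y: 0.0 if y < 0 else float(np.exp(-y))   # test function (60), scalar form

ys = np.linspace(-5.0, 5.0, 41)        # coarser than the paper's grid, for speed
t = np.array([f(y) for y in ys])
for n in (5, 10, 20):
    for name, op in (("C_n", C_op), ("K_n", K_op), ("Q_n", Q_op)):
        a = np.array([op(f, y, n) for y in ys])
        m = error_metrics(t, a)
        print(f"n={n:2d} {name}: RMSE={m['RMSE']:.3e} MaxE={m['MaxE']:.3e} R2={m['R2']:.4f}")
```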

5. Discussion and Future Directions

The use of ML error metrics such as RMSE and MAE provides an average-case, regression-oriented interpretation of the approximation behavior of the proposed operators. These metrics complement classical uniform error estimates and allow for a transparent numerical comparison of different operator constructions at finite resolution. From an ML perspective, the proposed convolution-type operators can be interpreted as deterministic regression models whose approximation accuracy is assessed using standard ML error metrics.
In this study, we have introduced and rigorously analyzed new classes of convolution-type approximation operators constructed via the adj HH-tangent activation function within the framework of PLOs, and we have established quantitative convergence estimates toward the identity operator, supported by precise inequalities involving the modulus of continuity.
Furthermore, by extending the analysis to simultaneous approximation settings, we have demonstrated the robustness and adaptability of these operators. The integration of adjustability into the operator design not only enhances theoretical understanding but also holds potential for practical applications in NN training and deep learning (DL) architectures.
Future research may further explore the performance of these operators on real-world datasets and their adaptations within stochastic frameworks. Related problems arise in fields such as image processing, computer vision, signal processing and acoustics, and natural language processing (NLP), among other real-life applications.

Author Contributions

Conceptualization, G.A.A., S.K. and M.Z.; methodology, G.A.A. and S.K.; software, M.Z. and S.K.; validation, G.A.A., S.K. and M.Z.; formal analysis, G.A.A.; investigation, G.A.A., S.K. and M.Z.; resources, G.A.A.; writing—original draft preparation, G.A.A. and S.K.; writing—review and editing, G.A.A., S.K. and M.Z.; visualization, S.K. and M.Z.; supervision, G.A.A.; project administration, S.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors thank the reviewers for their constructive comments.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Python Implementation

The complete Python implementation corresponding to Algorithms 1–6 is listed below.
Listing A1. Python code: activation, kernel, target function, and operators Cn, Kn, Qn.
Listing A2. Python code: discrete error metrics.
Listing A3. Python code: grid setup and metric table generation.
Listing A4. Python code: figures for approximations and errors.

References

  1. Bernstein, S.N. Démonstration du théorème de Weierstrass fondée sur le calcul des probabilités. Commun. Soc. Math. Kharkow 1912, 13, 1–2. [Google Scholar]
  2. Korovkin, P.P. Linear Operators and Approximation Theory; Hindustan Publ. Corp.: Delhi, India, 1953. [Google Scholar]
  3. Butzer, P.L.; Nessel, R.J. Fourier Analysis and Approximation; Academic Press: New York, NY, USA, 1971; Volumes I–II. [Google Scholar]
  4. Butzer, P.L.; Nessel, R.J. Approximation Theory and Functional Analysis; Academic Press: New York, NY, USA, 1971. [Google Scholar]
  5. Anastassiou, G.A. Parametrized, Deformed and General Neural Networks; Springer: Berlin/Heidelberg, Germany, 2023. [Google Scholar]
  6. Anastassiou, G.A. Approximation by symmetrized and perturbed hyperbolic tangent activated convolution type operators. Mathematics 2024, 12, 3302. [Google Scholar] [CrossRef]
  7. Anastassiou, G.A. Neural networks in infinite domain as positive linear operators. Ann. Univ. Sci. Bp. Sect. Comp. 2025, 58, 15–29. [Google Scholar] [CrossRef]
  8. Anastassiou, G.A. Approximation by symmetrized and perturbed hyperbolic tangent activated convolutions as positive linear operators. Mod. Math. Methods 2025, 3, 72–84. [Google Scholar] [CrossRef]
  9. Kim, D.; Kim, W.; Kim, S. Tanh works better with asymmetry. Adv. Neural Inf. Process. Syst. 2023, 36, 12536–12554. [Google Scholar]
  10. Xu, S. Data mining using an adaptive HONN model with hyperbolic tangent neurons. In Knowledge Management and Acquisition for Smart Systems and Services; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  11. Zhang, S.; Ren, G. RoSwish: A novel rotating swish activation function with adaptive rotation around zero. Neural Netw. 2025, 192, 107892. [Google Scholar] [CrossRef] [PubMed]
  12. Mamedov, R.G. Asymptotic approximation of differentiable functions with linear positive operators. Dokl. Akad. Nauk SSSR 1959, 128, 471–474. (In Russian) [Google Scholar]
  13. Chicco, D.; Warrens, M.J.; Jurman, G. The coefficient of determination R-squared is more informative than SMAPE, MAE, MAPE, MSE and RMSE in regression analysis evaluation. PeerJ Comput. Sci. 2021, 7, e623. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Approximation of the Gaussian function $f(y) = e^{-y^2}$ by the HHC-type PLOs $C_n$ for increasing values of $n$. The operator curves converge uniformly to $f$ on compact subsets of $\mathbb{R}$, illustrating the theoretical approximation properties.
Figure 2. Pointwise absolute errors $|C_n(f) - f|$ for the HHC-type operators approximating $f(y) = e^{-y^2}$. As $n$ increases, the error curves uniformly decrease, confirming the theoretical convergence of the operators.
Figure 3. Uniform norm error decay of the HHC-type operators for the approximation of the bounded function $f(y) = e^{-y^2}$. The decrease in the supremum norm $\|C_n(f) - f\|_{\infty}$ as $n$ increases confirms the quantitative convergence predicted by the theoretical results.
Figure 4. Comparison of operator approximations for a fixed parameter $n = 20$. The target function is depicted by the dashed curve, while $C_n$, $K_n$, and $Q_n$ denote the adj classical, adj Kantorovich-type, and adj quadrature HHC-type operators, respectively.
Figure 5. Pointwise absolute error functions for $n = 20$. The curves represent the absolute deviations $|C_n(f) - f|$, $|K_n(f) - f|$, and $|Q_n(f) - f|$ corresponding to the adj classical, adj Kantorovich-type, and adj quadrature HHC-type operators, respectively.
Figure 6. Uniform error decay of the HHC-type operators. The log–log plot illustrates the convergence behavior of $C_n$, $K_n$, and $Q_n$ in $\|T_n(f) - f\|_{\infty}$ as $n$ increases. Here, $T_n$ represents $C_n$, $K_n$, and $Q_n$, respectively.
Table 1. Error analysis for the approximation of $f(y) = e^{-y^2}$ by the HHC-type operators $C_n$. The decay of RMSE, MAE, and the uniform error $\|C_n(f) - f\|_{\infty}$, together with $R^2 \to 1$, confirms convergence as $n$ increases.

| n  | RMSE           | MAE            | Max Error      | R²      |
|----|----------------|----------------|----------------|---------|
| 5  | 3.619494 × 10⁻² | 2.097727 × 10⁻² | 1.142367 × 10⁻¹ | 0.98597 |
| 10 | 1.044906 × 10⁻² | 5.881949 × 10⁻³ | 3.369132 × 10⁻² | 0.99883 |
| 20 | 2.724851 × 10⁻³ | 1.520008 × 10⁻³ | 8.852413 × 10⁻³ | 0.99992 |
| 40 | 6.887967 × 10⁻⁴ | 3.835620 × 10⁻⁴ | 2.242478 × 10⁻³ | 0.99999 |
Table 2. Regression-based error metrics for the HHC-type operators $C_n$, $K_n$, and $Q_n$ applied to the test function in (60).

| n  | Operator | RMSE          | MAE           | Max Error     | R²     |
|----|----------|---------------|---------------|---------------|--------|
| 5  | $C_n$    | 8.816 × 10⁻²  | 2.808 × 10⁻²  | 5.618 × 10⁻¹  | 0.8052 |
| 5  | $K_n$    | 8.982 × 10⁻²  | 2.226 × 10⁻²  | 9.140 × 10⁻¹  | 0.7978 |
| 5  | $Q_n$    | 9.498 × 10⁻²  | 3.398 × 10⁻²  | 4.812 × 10⁻¹  | 0.7739 |
| 10 | $C_n$    | 6.287 × 10⁻²  | 1.409 × 10⁻²  | 4.761 × 10⁻¹  | 0.9009 |
| 10 | $K_n$    | 6.028 × 10⁻²  | 1.074 × 10⁻²  | 7.197 × 10⁻¹  | 0.9089 |
| 10 | $Q_n$    | 6.881 × 10⁻²  | 1.908 × 10⁻²  | 4.862 × 10⁻¹  | 0.8813 |
| 20 | $C_n$    | 4.324 × 10⁻²  | 7.436 × 10⁻³  | 3.757 × 10⁻¹  | 0.9531 |
| 20 | $K_n$    | 4.099 × 10⁻²  | 6.326 × 10⁻³  | 4.949 × 10⁻¹  | 0.9579 |
| 20 | $Q_n$    | 4.809 × 10⁻²  | 1.004 × 10⁻²  | 4.556 × 10⁻¹  | 0.9420 |
| 40 | $C_n$    | 2.585 × 10⁻²  | 4.124 × 10⁻³  | 2.360 × 10⁻¹  | 0.9832 |
| 40 | $K_n$    | 3.448 × 10⁻³  | 1.527 × 10⁻³  | 1.497 × 10⁻²  | 0.9997 |
| 40 | $Q_n$    | 3.027 × 10⁻²  | 4.932 × 10⁻³  | 3.312 × 10⁻¹  | 0.9770 |
| 80 | $C_n$    | 8.770 × 10⁻³  | 2.049 × 10⁻³  | 8.198 × 10⁻²  | 0.9981 |
| 80 | $K_n$    | 1.732 × 10⁻³  | 7.672 × 10⁻⁴  | 7.519 × 10⁻³  | 0.9999 |
| 80 | $Q_n$    | 1.125 × 10⁻²  | 1.876 × 10⁻³  | 1.270 × 10⁻¹  | 0.9968 |
