Abstract
In this study, we investigate the univariate quantitative symmetrized approximation of complex-valued continuous functions on a compact interval by complex-valued symmetrized and perturbed neural network operators. These approximations are derived by establishing Jackson-type inequalities involving the modulus of continuity of the high-order derivatives of the approximated function. Our approximations are of trigonometric and hyperbolic type. The symmetrized operators are defined by means of a density function generated by a q-deformed and λ-parametrized hyperbolic tangent function, which is a sigmoid function. These accelerated approximations are given pointwise and in the uniform norm. The related complex-valued feed-forward neural networks have one hidden layer.
Keywords:
q-deformed and λ-parametrized hyperbolic tangent; complex-valued symmetrized neural network approximation; complex-valued quasi-interpolation operator; modulus of continuity; trigonometric and hyperbolic accelerated approximation
MSC:
41A17; 41A25; 41A99; 42A10
1. Introduction
The author of [1,2] (see Chapters 2–5) was the first to establish neural network approximation of continuous functions with rates, by very specifically defined neural network operators of Cardaliaguet–Euvrard and “Squashing” types, employing the modulus of continuity of the engaged function or its high-order derivative and producing very tight Jackson-type inequalities. The author addresses both the univariate and multivariate cases. The defining “bell-shaped” and “squashing” functions of these operators are assumed to have compact support.
In addition, inspired by [3], the author continued his studies on neural network approximation by introducing and using the appropriate quasi-interpolation operators of sigmoidal and hyperbolic tangent type, treating both the univariate and multivariate cases; this work resulted in [4].
Brain asymmetry has been observed in animals and humans in terms of structure, function and behaviour. This lateralization is thought to reflect evolutionary, hereditary, developmental, experiential and pathological factors. Therefore, it is natural for our study to consider deformed neural network activation functions and operators. This article is thus a specific study guided by the philosophy of approaching reality as closely as possible.
Consequently, the author performs symmetrized q-deformed and λ-parametrized hyperbolic tangent function-activated high-order neural network approximations to complex-valued continuous functions over compact intervals of the real line. All convergences come with rates expressed via the modulus of continuity of the involved functions' high-order derivatives and are derived from very tight Jackson-type inequalities.
In this study, the basis of our higher-order approximations is a set of trigonometric- and hyperbolic-type Taylor formulae newly discovered by the author.
Our compact intervals are not necessarily symmetric about the origin. The applied symmetrization technique and the newly introduced related operators cut the data fed to the neural networks in half, thus greatly accelerating their convergence to the unit operator.
For related recent studies, you may refer to [5,6,7,8,9,10,11,12,13,14]. For a detailed basic general study in neural networks, you may consult [15,16,17].
The author’s recent monumental and extensive monographs [18,19] take the reader into great depths and are expected to revolutionize the study of neural networks and AI in general.
A multilayer feed-forward neural network can be defined as follows (with hidden layers):
Let $x = (x_1, \ldots, x_s) \in \mathbb{R}^s$, $s \in \mathbb{N}$, and let $\sigma$ denote the activation function of the network. A single hidden layer of $n + 1$ units produces
$$N(x) = \sum_{j=0}^{n} c_j\, \sigma\big(\langle a_j \cdot x \rangle + b_j\big),$$
where, for $0 \le j \le n$, $b_j \in \mathbb{R}$ are the thresholds, $a_j = (a_{j1}, \ldots, a_{js}) \in \mathbb{R}^s$ are the connection weights, and $c_j \in \mathbb{R}$ are the coefficients.
Here, $\langle a_j \cdot x \rangle = \sum_{i=1}^{s} a_{ji} x_i$ is the inner product of $a_j$ and $x$; thus, each hidden unit outputs $\sigma\big(\langle a_j \cdot x \rangle + b_j\big)$, and the network output is their weighted sum.
We define the output of the first hidden layer componentwise as follows:
$$N^{(1)}_i(x) := \sigma\big(\langle a^{(1)}_i \cdot x \rangle + b^{(1)}_i\big), \quad i = 1, \ldots, m_1.$$
Furthermore, we can define the output of the second hidden layer as
$$N^{(2)}_i(x) := \sigma\big(\langle a^{(2)}_i \cdot N^{(1)}(x) \rangle + b^{(2)}_i\big), \quad i = 1, \ldots, m_2, \qquad N^{(1)}(x) := \big(N^{(1)}_1(x), \ldots, N^{(1)}_{m_1}(x)\big).$$
And, in general, for the $k$-th hidden layer and the network output we define the following:
$$N^{(k)}_i(x) := \sigma\big(\langle a^{(k)}_i \cdot N^{(k-1)}(x) \rangle + b^{(k)}_i\big), \qquad N(x) := \sum_{i=1}^{m_L} c_i\, N^{(L)}_i(x),$$
where $L$ is the number of hidden layers. In this work, we deal with networks having one hidden layer.
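As a quick illustration of the one-hidden-layer form above, the following NumPy sketch evaluates $N(x) = \sum_{j=0}^{n} c_j\, \sigma(\langle a_j \cdot x\rangle + b_j)$ for randomly chosen parameters; the function and variable names and the choice $\sigma = \tanh$ are illustrative assumptions, not the specific operators studied below.

```python
import numpy as np

def one_hidden_layer_network(x, weights, biases, coeffs, sigma=np.tanh):
    """Evaluate N(x) = sum_j coeffs[j] * sigma(<weights[j], x> + biases[j]).

    x       : input vector in R^s
    weights : (n+1, s) array of connection weights a_j
    biases  : (n+1,)   array of thresholds b_j
    coeffs  : (n+1,)   array of coefficients c_j
    sigma   : activation function (tanh here, purely for illustration)
    """
    pre_activations = weights @ x + biases   # inner products <a_j, x> plus thresholds
    return coeffs @ sigma(pre_activations)   # weighted sum over the hidden units

# Illustrative usage: n + 1 = 10 hidden units, s = 3 inputs, random parameters.
rng = np.random.default_rng(0)
n_plus_1, s = 10, 3
x = rng.normal(size=s)
print(one_hidden_layer_network(x,
                               rng.normal(size=(n_plus_1, s)),
                               rng.normal(size=n_plus_1),
                               rng.normal(size=n_plus_1)))
```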
2. Basics
Here, we follow [18] (pp. 455–460).
Our activation function to be used here is the q-deformed and λ-parametrized hyperbolic tangent
$$g_{q,\lambda}(x) := \frac{e^{\lambda x} - q e^{-\lambda x}}{e^{\lambda x} + q e^{-\lambda x}}, \quad x \in \mathbb{R},$$
where $\lambda > 0$ is the parameter and $q > 0$ is the deformation coefficient; typically, $\lambda = q = 1$, which recovers the classical hyperbolic tangent $\tanh x$.
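For readers who wish to experiment numerically, here is a minimal sketch of this activation function, assuming the closed form stated above (as in Chapter 18 of [18]). The second check uses the identity $g_{q,\lambda}(x) = \tanh\!\big(\lambda x - \tfrac{1}{2}\ln q\big)$, obtained by factoring $e^{(\ln q)/2}$ out of the numerator and the denominator.

```python
import numpy as np

def g(x, q=1.0, lam=1.0):
    """q-deformed and lambda-parametrized hyperbolic tangent:
    g_{q,lam}(x) = (e^{lam x} - q e^{-lam x}) / (e^{lam x} + q e^{-lam x})."""
    e_pos, e_neg = np.exp(lam * x), np.exp(-lam * x)
    return (e_pos - q * e_neg) / (e_pos + q * e_neg)

x = np.linspace(-3.0, 3.0, 7)

# q = lam = 1 recovers the classical hyperbolic tangent.
assert np.allclose(g(x, q=1.0, lam=1.0), np.tanh(x))

# In general the deformation is a horizontal shift: g_{q,lam}(x) = tanh(lam*x - ln(q)/2).
assert np.allclose(g(x, q=2.0, lam=1.5), np.tanh(1.5 * x - 0.5 * np.log(2.0)))

print(g(x, q=2.0, lam=1.5))
```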
For more details, read Chapter 18 of [18]: “q-Deformed and λ-Parametrized Hyperbolic Tangent Based Banach Space Valued Ordinary and Fractional Neural Network Approximation”.
The chapter motivates our current work.
The proposed “symmetrization method” aims to halve the data fed to our neural networks.
We will employ the following density function:
; .
Therefore,
is an even function, symmetric with respect to the y-axis.
By (18.18) of [18], we have
which share the same maximum at symmetric points.
By Theorem 18.1, p. 458 of [18], we have
Consequently, we derive that
By Theorem 18.2, p. 459 of [18], we have that
so that
Therefore, is a density function.
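The display formulas of this subsection are not reproduced here, so the following sketch works with an assumed reconstruction: $M_{q,\lambda}(x) := \tfrac{1}{4}\big(g_{q,\lambda}(x+1) - g_{q,\lambda}(x-1)\big)$, following Chapter 18 of [18], and, for illustration only, a symmetrized density taken as the average $\Phi(x) := \tfrac{1}{2}\big(M_{q,\lambda}(x) + M_{1/q,\lambda}(x)\big)$; the exact normalization used in the paper may differ. The code checks numerically that this $\Phi$ is even and that $\sum_{k \in \mathbb{Z}} \Phi(x - k) = 1$, i.e., that it behaves as a density in the sense discussed above.

```python
import numpy as np

def g(x, q, lam):
    """q-deformed, lambda-parametrized hyperbolic tangent (see the sketch above)."""
    e_pos, e_neg = np.exp(lam * x), np.exp(-lam * x)
    return (e_pos - q * e_neg) / (e_pos + q * e_neg)

def M(x, q, lam):
    """Density generated by g_{q,lam}, assumed as in Chapter 18 of [18]:
    M_{q,lam}(x) = (1/4) * (g_{q,lam}(x + 1) - g_{q,lam}(x - 1))."""
    return 0.25 * (g(x + 1, q, lam) - g(x - 1, q, lam))

def Phi(x, q, lam):
    """Assumed symmetrized density: the average of M_{q,lam} and M_{1/q,lam}."""
    return 0.5 * (M(x, q, lam) + M(x, 1.0 / q, lam))

q, lam = 2.0, 1.5
x = np.linspace(-4.0, 4.0, 17)

# Evenness: Phi(-x) = Phi(x), i.e. symmetry with respect to the y-axis.
assert np.allclose(Phi(-x, q, lam), Phi(x, q, lam))

# Partition of unity over the integer translates: sum_k Phi(x - k) = 1.
k = np.arange(-60, 61)
translate_sums = Phi(x[:, None] - k[None, :], q, lam).sum(axis=1)
assert np.allclose(translate_sums, 1.0)

print("evenness and partition of unity verified numerically")
```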
By Theorem 18.3, p. 459 of [18], we have the following:
Let , and with ; . Then,
where
Similarly, we obtain
Consequently, we obtain
where
Here, $\lceil \cdot \rceil$ denotes the ceiling of the number, and $\lfloor \cdot \rfloor$ denotes its integral part.
We will mention the following:
Theorem 18.4 (p. 459, [18]): Let and so that . For , we consider , such that , and . Then,
Similarly, we consider , such that , and . Thus,
Hence,
and
Consequently, the following holds:
so that
That is
We have proved the following:
Theorem 1.
Let and so that . For , we consider , such that , and . Also, consider , such that , and . Then,
We have the following:
Remark 1.
(I) By Remark 18.5, p. 460 of [18], we have
and
Therefore, the following holds:
Hence,
even if
because then
and equivalently,
which is true by
(II) Let $[a, b] \subset \mathbb{R}$. For a large enough $n$, we always have $\lceil na \rceil \le \lfloor nb \rfloor$. Also, $a \le \frac{k}{n} \le b$ iff $\lceil na \rceil \le k \le \lfloor nb \rfloor$. So, in general, the following holds:
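The index equivalence in (II) can be checked in a few lines; the endpoints and the value of $n$ below are arbitrary illustrative choices.

```python
import math

a, b, n = -0.25, 2.75, 50
lo, hi = math.ceil(n * a), math.floor(n * b)   # ceiling(na) and integral part of nb

# a <= k/n <= b holds exactly for the integers k with ceiling(na) <= k <= floor(nb).
inside = [k for k in range(-10 * n, 10 * n + 1) if a <= k / n <= b]
assert inside == list(range(lo, hi + 1))

print(f"at n = {n}, k runs from {lo} to {hi} ({hi - lo + 1} indices)")
```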
Let $(X, \|\cdot\|)$ be a Banach space.
Definition 1.
Let and . We introduce and define the X-valued symmetrized and perturbed linear neural network operators:
The modulus of continuity of $f \in C([a, b], X)$ is defined by
$$\omega_1(f, \delta) := \sup_{\substack{x, y \in [a, b] \\ |x - y| \le \delta}} \|f(x) - f(y)\|, \quad \delta > 0.$$
The same is defined for $f \in C_{UB}(\mathbb{R}, X)$ (uniformly continuous and bounded functions), for $f \in C_B(\mathbb{R}, X)$ (bounded and continuous functions), and for $f \in C_U(\mathbb{R}, X)$ (uniformly continuous functions).
The fact that $f \in C([a, b], X)$ or $f \in C_U(\mathbb{R}, X)$ is equivalent to $\lim_{\delta \to 0} \omega_1(f, \delta) = 0$.
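The next sketch approximates $\omega_1(f, \delta)$ on a grid for a complex-valued continuous function, so that the complex modulus plays the role of the norm of $X$ (as in the specialization below); the sample function $f(t) = e^{it} + t/2$ and the grid size are illustrative choices.

```python
import numpy as np

def modulus_of_continuity(f, a, b, delta, grid_points=1001):
    """Grid approximation of omega_1(f, delta) = sup_{|x - y| <= delta} |f(x) - f(y)|
    for f on [a, b]; the complex absolute value stands in for the norm of X."""
    x = np.linspace(a, b, grid_points)
    values = f(x)
    pair_distances = np.abs(x[:, None] - x[None, :])
    pair_differences = np.abs(values[:, None] - values[None, :])
    return pair_differences[pair_distances <= delta].max()

# A complex-valued continuous function on [0, pi], matching the setting of this paper.
f = lambda t: np.exp(1j * t) + 0.5 * t

for delta in (0.5, 0.1, 0.02):
    print(delta, modulus_of_continuity(f, 0.0, np.pi, delta))
# The printed values decrease toward 0 as delta -> 0, reflecting the equivalence above.
```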
In this work, we specialize to the case of $X = \mathbb{C}$, the field of complex numbers, which is a Banach space over the real numbers.
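Since the display defining the operators of Definition 1 is not reproduced above, the sketch below implements, for orientation only, the classical quasi-interpolation prototype from [4,18],
$$G_n(f, x) = \frac{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor} f\!\left(\tfrac{k}{n}\right) \Phi(nx - k)}{\sum_{k=\lceil na\rceil}^{\lfloor nb\rfloor} \Phi(nx - k)},$$
with the assumed symmetrized density $\Phi$ of the earlier sketch. The symmetrized and perturbed operators of this paper refine this basic form, so the code should be read as a baseline, not as the paper's operators. The decreasing uniform errors printed at the end preview the convergence statements of Section 3.

```python
import numpy as np

def g(x, q, lam):
    """Numerically stable form of the activation: g_{q,lam}(x) = tanh(lam*x - ln(q)/2)."""
    return np.tanh(lam * x - 0.5 * np.log(q))

def Phi(x, q=2.0, lam=1.5):
    """Assumed symmetrized density built from g_{q,lam} (see the sketch in Section 2)."""
    M = lambda y, p: 0.25 * (g(y + 1, p, lam) - g(y - 1, p, lam))
    return 0.5 * (M(x, q) + M(x, 1.0 / q))

def G(f, n, x, a, b):
    """Prototype quasi-interpolation operator (baseline, not this paper's operators):
    G_n(f, x) = sum_k f(k/n) Phi(n*x - k) / sum_k Phi(n*x - k),
    with k running from ceil(n*a) to floor(n*b)."""
    k = np.arange(np.ceil(n * a), np.floor(n * b) + 1)
    weights = Phi(n * np.asarray(x, dtype=float)[..., None] - k)
    return (f(k / n) * weights).sum(axis=-1) / weights.sum(axis=-1)

# Empirical uniform error for a complex-valued function on [0, pi]:
# the error decreases as n grows, previewing the convergence results of Section 3.
f = lambda t: np.exp(1j * t)
a, b = 0.0, np.pi
x = np.linspace(a, b, 301)
for n in (10, 50, 250):
    print(n, np.max(np.abs(G(f, n, x, a, b) - f(x))))
```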
3. Main Results
We present $\mathbb{C}$-valued symmetrized and perturbed neural network high-order approximations to a given function, with rates. We start with a trigonometric approximation.
Theorem 2.
Let , , , Then,
(1)
(2) if we obtain
and notice here the high rate of convergence at
(3) furthermore, we obtain
i.e., , pointwise and uniformly;
(4) and finally, the following holds:
and again, here we achieve the high speed of convergence at
Proof.
As it is very lengthy but similar to the proof of Theorem 4.5 of [19] (pp. 120–129), it is omitted. Basically, the density function used here replaces the density function of the reference. □
We continue with a hyperbolic high-order symmetrized and perturbed neural network approximation.
Theorem 3.
Let , , , Then,
(1)
(2) if we obtain
and notice here the high rate of convergence at
(3) furthermore, we obtain
and it follows that , pointwise and uniformly;
in addition,
(4)
and again, here we achieve the high speed of convergence at
Proof.
It is very similar to the lengthy proof of Theorem 4.6 of [19] (pp. 129–138). Basically, the density function used here replaces the density function of the reference. □
Next follows a mixed hyperbolic–trigonometric high-order symmetrized and perturbed neural network approximation.
Theorem 4.
Let , , , Then,
(1)
Proof.
It is very similar to the long proof of Theorem 4.7 of [19] (pp. 139–145). Basically, the density function used here replaces the density function of the reference. □
We continue with a general symmetrized and perturbed trigonometric result.
Theorem 5.
Let , , , Also, let with Then,
(1)
(2) if we obtain
The high speed of convergence in (1) and (2) is
Proof.
As it is similar to the proof of Theorem 4.8 of [19], p. 146, it is omitted. Basically, the density function used here replaces the density function of the reference. □
We finish with a general symmetrized and perturbed hyperbolic result.
Theorem 6.
Let , , , Also, let with Then,
(1)
(2) if we obtain
The high speed of convergence in (1) and (2) is
Proof.
As it is similar to the proof of Theorem 4.9 of [19], p. 147, it is omitted. Basically, the density function used here replaces the density function of the reference. □
4. Conclusions
The author herein exhibits symmetrized q-deformed and λ-parametrized hyperbolic tangent function-activated high-order neural network approximations to complex-valued continuous functions over compact intervals of the real line. All convergences come with rates expressed via the modulus of continuity of the involved functions' high-order derivatives. The basis of our higher-order approximations herein is a set of trigonometric- and hyperbolic-type Taylor formulae newly discovered by the author. Our compact intervals are not necessarily symmetric about the origin. The applied symmetrization technique and the newly introduced related operators cut the data fed to the neural networks in half, thus greatly accelerating their convergence to the unit operator. The setting is trigonometric and hyperbolic.
Funding
This research received no external funding.
Data Availability Statement
No new data were created or analyzed in this study. Data sharing is not applicable to this article.
Acknowledgments
We would like to thank the reviewers who generously shared their time and opinions.
Conflicts of Interest
The author declares no conflicts of interest.
References
- Anastassiou, G.A. Rate of convergence of some neural network operators to the unit-univariate case. J. Math. Anal. Appl. 1997, 212, 237–262.
- Anastassiou, G.A. Quantitative Approximations; Chapman & Hall/CRC: Boca Raton, FL, USA; New York, NY, USA, 2001.
- Chen, Z.; Cao, F. The approximation operators with sigmoidal functions. Comput. Math. Appl. 2009, 58, 758–765.
- Anastassiou, G.A. Intelligent Systems: Approximation by Artificial Neural Networks; Intelligent Systems Reference Library; Springer: Berlin/Heidelberg, Germany, 2011; Volume 19.
- Yu, D.; Cao, F. Construction and approximation rate for feedforward neural network operators with sigmoidal functions. J. Comput. Appl. Math. 2025, 453, 116150.
- Cen, S.; Jin, B.; Quan, Q.; Zhou, Z. Hybrid neural-network FEM approximation of diffusion coefficient in elliptic and parabolic problems. IMA J. Numer. Anal. 2024, 44, 3059–3093.
- Coroianu, L.; Costarelli, D.; Natale, M.; Pantiş, A. The approximation capabilities of Durrmeyer-type neural network operators. J. Appl. Math. Comput. 2024, 70, 4581–4599.
- Warin, X. The GroupMax neural network approximation of convex functions. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 11608–11612.
- Fabra, A.; Guasch, O.; Baiges, J.; Codina, R. Approximation of acoustic black holes with finite element mixed formulations and artificial neural network correction terms. Finite Elem. Anal. Des. 2024, 241, 104236.
- Grohs, P.; Voigtlaender, F. Proof of the theory-to-practice gap in deep learning via sampling complexity bounds for neural network approximation spaces. Found. Comput. Math. 2024, 24, 1085–1143.
- Basteri, A.; Trevisan, D. Quantitative Gaussian approximation of randomly initialized deep neural networks. Mach. Learn. 2024, 113, 6373–6393.
- De Ryck, T.; Mishra, S. Error analysis for deep neural network approximations of parametric hyperbolic conservation laws. Math. Comp. 2024, 93, 2643–2677.
- Liu, J.; Zhang, B.; Lai, Y.; Fang, L. Hull form optimization research based on multi-precision back-propagation neural network approximation model. Int. J. Numer. Methods Fluids 2024, 96, 1445–1460.
- Yoo, J.; Kim, J.; Gim, M.; Lee, H. Error estimates of physics-informed neural networks for initial value problems. J. Korean Soc. Ind. Appl. Math. 2024, 28, 33–58.
- Haykin, S. Neural Networks: A Comprehensive Foundation, 2nd ed.; Prentice Hall: New York, NY, USA, 1998.
- McCulloch, W.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 1943, 5, 115–133.
- Mitchell, T.M. Machine Learning; WCB-McGraw-Hill: New York, NY, USA, 1997.
- Anastassiou, G.A. Parametrized, Deformed and General Neural Networks; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 2023.
- Anastassiou, G.A. Trigonometric and Hyperbolic Generated Approximation Theory; World Scientific: Hackensack, NJ, USA; London, UK; Singapore, 2025.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).