Multiple-Composite Quantitative Approximation by Multivariate Kantorovich–Choquet Neural Networks
Abstract
1. Introduction
- Sequential Composition: Applying one activation function after another (e.g., forming f_2(f_1(x)) from two activations f_1 and f_2). A recent development is "PolyCom" (Polynomial Composition), which combines polynomial functions with others such as ReLU to accelerate convergence and improve accuracy. Another example is composing functions to create "normalized cusp neural network operators", which can reduce infinite domains to compact supports, enhancing approximation capabilities.
- Parallel Composition (Concatenation): Computing the outputs of different activation functions (f_1(x), ..., f_k(x)) for the same input and concatenating them into a single vector. This allows the network to utilize multiple non-linearities at once.
- Dynamic Activation Composition (Dyn): A technique that uses learnable, normalized convex combinations of basis activation functions. This allows the network to adaptively "mix" activations during training, enhancing model adaptability and performance. A minimal code sketch of these three schemes follows this list.
- Multivariate/Multi-dimensional Activations: Moving beyond simple scalar functions to functions that take multiple inputs and produce multiple outputs (e.g., generalizing ReLU to a second-order cone projection). These approaches are shown to have higher expressive power than traditional single-input, single-output activations.
- Increased Expressiveness: Complex compositions allow networks to learn more intricate patterns and handle non-linearly separable data more effectively.
- Improved Accuracy: Studies have shown that combining different activation functions can lead to better performance compared to using a single standard activation, particularly for less predictable data distributions.
- Faster Convergence: Certain compositions, such as multi-kernel activation functions (multi-KAF) or dynamic mixtures, can help models converge faster by better adapting to the data.
- Controlled Output Range: Composing functions can limit the output to specific, desirable ranges, which helps in focusing on important information and filtering out noise.
- Better Gradient Flow: Some compositions, such as concatenating Swish and Tanh, can provide paths with non-zero derivatives, helping to mitigate vanishing gradient problems.
- Flexibility: Research indicates that composing activation functions, such as using a "cusp" function (itself a composition of two functions), results in more flexible and powerful neural networks.
- Learned Activations: Instead of manually selecting the composition, techniques like “Trainable Adaptive Activation Function Structure (TAAFS)” learn the optimal combination of activation functions during training.
- Efficiency Concerns: While composing functions can improve accuracy, it may add to the computational load. However, the performance gains often justify the added complexity.
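The first three composition schemes above can be made concrete in a few lines of code. The following PyTorch sketch is only illustrative: the choice of ReLU, Tanh, and Sigmoid as basis activations, and all names in it, are ours and not constructions from this paper.

```python
import torch
import torch.nn as nn

class ComposedActivations(nn.Module):
    """Illustrative sketch of three activation-composition schemes."""

    def __init__(self):
        super().__init__()
        # Basis activations to compose; any differentiable scalar
        # activations could be substituted here.
        self.basis = [torch.relu, torch.tanh, torch.sigmoid]
        # Learnable logits for the dynamic (convex-mixture) scheme.
        self.logits = nn.Parameter(torch.zeros(len(self.basis)))

    def sequential(self, x: torch.Tensor) -> torch.Tensor:
        # Sequential composition: f3(f2(f1(x))).
        for f in self.basis:
            x = f(x)
        return x

    def parallel(self, x: torch.Tensor) -> torch.Tensor:
        # Parallel composition: concatenate f1(x), ..., fk(x)
        # along the feature dimension.
        return torch.cat([f(x) for f in self.basis], dim=-1)

    def dynamic(self, x: torch.Tensor) -> torch.Tensor:
        # Dynamic composition: sum_i w_i * f_i(x), where the weights
        # w = softmax(logits) are normalized, convex, and learned.
        w = torch.softmax(self.logits, dim=0)
        return sum(wi * f(x) for wi, f in zip(w, self.basis))

x = torch.randn(4, 8)
m = ComposedActivations()
print(m.sequential(x).shape)  # torch.Size([4, 8])
print(m.parallel(x).shape)    # torch.Size([4, 24])
print(m.dynamic(x).shape)     # torch.Size([4, 8])
```

Because the weights in the dynamic scheme always sum to one, the mixed activation inherits properties such as boundedness and monotonicity whenever every basis activation has them, which is one reason convex mixtures are convenient for approximation-theoretic analysis.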
2. Background
2.1. Description of Choquet Integral [33]
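On a finite space X = {x_1, ..., x_n}, given a capacity mu (a monotone set function with mu(empty set) = 0) and f >= 0, the Choquet integral reduces to the sum over i of (f(x_(i)) - f(x_(i-1))) * mu({x_(i), ..., x_(n)}), where the points are reindexed so that the values f(x_(i)) are nondecreasing and f(x_(0)) := 0. A minimal numeric sketch follows; the three-point space and the square-root capacity in it are illustrative choices of ours, not data from the paper.

```python
import math

def choquet_integral(values, capacity):
    """Discrete Choquet integral of a nonnegative function.

    values:   dict mapping each point to f(point), with f >= 0
    capacity: callable on frozensets of points, monotone and
              satisfying capacity(frozenset()) == 0
    """
    # Reindex points so that f(x_(1)) <= f(x_(2)) <= ... <= f(x_(n)).
    pts = sorted(values, key=values.get)
    total, prev = 0.0, 0.0
    for i, p in enumerate(pts):
        # Upper level set A_(i) = {x_(i), ..., x_(n)}.
        upper = frozenset(pts[i:])
        total += (values[p] - prev) * capacity(upper)
        prev = values[p]
    return total

# Illustrative example on X = {a, b, c} with the capacity
# mu(S) = sqrt(|S| / |X|): monotone, submodular, mu(X) = 1.
def mu(S):
    return math.sqrt(len(S) / 3)

f = {"a": 1.0, "b": 3.0, "c": 2.0}
# Equals 1*mu({a,b,c}) + 1*mu({b,c}) + 1*mu({b}) ~ 2.394
print(choquet_integral(f, mu))
```

When mu is additive this recovers the ordinary integral on a finite space; the point of the Choquet setting is that mu is only assumed monotone (and, for the operators studied here, submodular).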
2.2. On Multi-Composite Activation Functions
3. Main Results
4. Conclusions
Funding
Data Availability Statement
Conflicts of Interest
References
- Anastassiou, G.A. Rate of Convergence of Some Neural Network Operators to the Unit-Univariate Case. J. Math. Anal. Appl. 1997, 212, 237–262.
- Anastassiou, G.A. Quantitative Approximations; Chapman & Hall/CRC: Boca Raton, FL, USA, 2001.
- Chen, Z.; Cao, F. The approximation operators with sigmoidal functions. Comput. Math. Appl. 2009, 58, 758–765.
- Anastassiou, G.A. Intelligent Systems: Approximation by Artificial Neural Networks; Intelligent Systems Reference Library; Springer: Berlin/Heidelberg, Germany, 2011; Volume 19.
- Anastassiou, G.A. Intelligent Systems II: Complete Approximation by Neural Network Operators; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 2016.
- Anastassiou, G.A. Intelligent Computations: Abstract Fractional Calculus, Inequalities, Approximations; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 2018.
- Anastassiou, G.A. Parametrized, Deformed and General Neural Networks; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 2023.
- Costarelli, D.; Spigler, R. Approximation results for neural network operators activated by sigmoidal functions. Neural Netw. 2013, 44, 101–106.
- Costarelli, D.; Spigler, R. Multivariate neural network operators with sigmoidal activation functions. Neural Netw. 2013, 48, 72–77.
- Haykin, S.S. Neural Networks: A Comprehensive Foundation, 2nd ed.; Prentice Hall: New York, NY, USA, 1998.
- McCulloch, W.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 1943, 7, 115–133.
- Mitchell, T.M. Machine Learning; WCB-McGraw-Hill: New York, NY, USA, 1997.
- Yu, D.; Cao, F. Construction and approximation rate for feed-forward neural network operators with sigmoidal functions. J. Comput. Appl. Math. 2025, 453, 116150.
- Cen, S.; Jin, B.; Quan, Q.; Zhou, Z. Hybrid neural-network FEM approximation of diffusion coefficient in elliptic and parabolic problems. IMA J. Numer. Anal. 2024, 44, 3059–3093.
- Coroianu, L.; Costarelli, D.; Natale, M.; Pantiş, A. The approximation capabilities of Durrmeyer-type neural network operators. J. Appl. Math. Comput. 2024, 70, 4581–4599.
- Warin, X. The GroupMax neural network approximation of convex functions. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 11608–11612.
- Fabra, A.; Guasch, O.; Baiges, J.; Codina, R. Approximation of acoustic black holes with finite element mixed formulations and artificial neural network correction terms. Finite Elem. Anal. Des. 2024, 241, 104236.
- Grohs, P.; Voigtlaender, F. Proof of the theory-to-practice gap in deep learning via sampling complexity bounds for neural network approximation spaces. Found. Comput. Math. 2024, 24, 1085–1143.
- Basteri, A.; Trevisan, D. Quantitative Gaussian approximation of randomly initialized deep neural networks. Mach. Learn. 2024, 113, 6373–6393.
- De Ryck, T.; Mishra, S. Error analysis for deep neural network approximations of parametric hyperbolic conservation laws. Math. Comp. 2024, 93, 2643–2677.
- Jie, L.; Baoji, Z.; Yuyang, L.; Liqian, F. Hull form optimization research based on multi-precision back-propagation neural network approximation model. Int. J. Numer. Methods Fluids 2024, 96, 1445–1460.
- Yoo, J.; Kim, J.; Gim, M.; Lee, H. Error estimates of physics-informed neural networks for initial value problems. J. Korean Soc. Ind. Appl. Math. 2024, 28, 33–58.
- Kaur, J.; Goyal, M. Hyers–Ulam stability of some positive linear operators. Stud. Univ. Babeş-Bolyai Math. 2025, 70, 105–114.
- Abel, U.; Acu, A.-M.; Heilmann, M.; Raşa, I. On some Cauchy problems and positive linear operators. Mediterr. J. Math. 2025, 22, 20.
- Moradi, H.R.; Furuichi, S.; Sababheh, M. Operator quadratic mean and positive linear maps. J. Math. Inequal. 2024, 18, 1263–1279.
- Bustamante, J.; Torres-Campos, J.D. Power series and positive linear operators in weighted spaces. Serdica Math. J. 2024, 50, 225–250.
- Acu, A.-M.; Raşa, I.; Sofonea, F. Composition of some positive linear integral operators. Demonstr. Math. 2024, 57, 20240018.
- Patel, P.G. On positive linear operators linking gamma, Mittag-Leffler and Wright functions. Int. J. Appl. Comput. Math. 2024, 10, 152.
- Wang, Z.; Klir, G.J. Generalized Measure Theory; Springer: New York, NY, USA, 2009.
- Ozger, F.; Aslan, A.R.; Merve, E. Some approximation results on a class of Szász–Mirakjan–Kantorovich operators including non-negative parameter. Numer. Funct. Anal. Optim. 2025, 46, 461–484.
- Costarelli, D.; Piconi, M. Strong and weak sharp bounds for neural network operators in Sobolev–Orlicz spaces and their quantitative extensions to Orlicz spaces. Bull. Sci. Math. 2026, 208, 103791.
- Saini, S.; Singh, U. Kantorovich-Type Stochastic Neural Network Operators for the Mean-Square Approximation of Certain Second-Order Stochastic Processes. arXiv 2026, arXiv:2601.03634.
- Choquet, G. Theory of capacities. Ann. Inst. Fourier 1954, 5, 131–295.
- Denneberg, D. Non-Additive Measure and Integral; Kluwer: Dordrecht, The Netherlands, 1994.
- Gal, S. Uniform and pointwise quantitative approximation by Kantorovich–Choquet type integral operators with respect to monotone and submodular set functions. Mediterr. J. Math. 2017, 14, 205.
- Anastassiou, G.A. General Multi-Composite Sigmoid Relied Banach Space Valued Univariate Neural Network Approximation. In Parametrized, Deformed and General Neural Networks; Studies in Computational Intelligence; Springer: Cham, Switzerland, 2023.