Article

Symmetrized, Perturbed Hyperbolic Tangent-Based Complex-Valued Trigonometric and Hyperbolic Neural Network Accelerated Approximation

by
George A. Anastassiou
Department of Mathematical Sciences, University of Memphis, Memphis, TN 38152, USA
Mathematics 2025, 13(10), 1688; https://doi.org/10.3390/math13101688
Submission received: 18 April 2025 / Revised: 16 May 2025 / Accepted: 19 May 2025 / Published: 21 May 2025

Abstract

In this study, we investigate the univariate quantitative symmetrized approximation of complex-valued continuous functions on a compact interval by complex-valued symmetrized and perturbed neural network operators. These approximations are derived by establishing Jackson-type inequalities involving the modulus of continuity of the high-order derivatives of the function under consideration. Our approximations are of trigonometric and hyperbolic type. The symmetrized operators are defined via a density function generated by a q-deformed and λ-parametrized hyperbolic tangent function, which is a sigmoid function. These accelerated approximations are pointwise and with respect to the uniform norm. The related complex-valued feed-forward neural networks have one hidden layer.

1. Introduction

The author of [1,2] (see Chapters 2–5) was the first to establish neural network approximation to continuous functions with rates, by very specifically defined neural network operators of Cardaliaguet–Euvrard and “Squashing” types, employing the modulus of continuity of the engaged function or its high-order derivative and producing very tight Jackson-type inequalities. The author addresses both the univariate and multivariate cases. The “bell-shaped” and “squashing” functions defining these operators are assumed to have compact support.
In addition, inspired by [3], the author continued his studies on neural network approximation by introducing and using the proper quasi-interpolation operators of sigmoidal and hyperbolic tangent type, which resulted in [4], by treating both the univariate and multivariate cases.
Brain asymmetry has been observed in animals and humans in terms of structure, function, and behavior. This lateralization is thought to reflect evolutionary, hereditary, developmental, experiential, and pathological factors. Therefore, it is natural for our study to consider deformed neural network activation functions and operators; this article is a specific study under the philosophy of approaching reality as closely as possible.
Consequently, the author performs symmetrized q-deformed and λ-parametrized hyperbolic tangent function-activated high-order neural network approximations to complex-valued continuous functions over compact intervals of the real line. All convergences are with rates expressed via the modulus of continuity of the involved functions' high-order derivatives, and are derived from very tight Jackson-type inequalities.
In this study, the basis of our higher-order approximations is some trigonometric- and hyperbolic-type Taylor formulae newly discovered by the author.
Our compact intervals are not necessarily symmetric about the origin. The applied symmetrization technique and the newly introduced related operators cut in half the data fed to the neural networks, thus greatly accelerating their convergence to the unit operator.
For related recent studies, you may refer to [5,6,7,8,9,10,11,12,13,14]. For a detailed basic general study in neural networks, you may consult [15,16,17].
The author’s recent monumental and extensive monographs [18,19] take the reader into great depths and are expected to revolutionize the study of neural networks and AI in general.
A multilayer feed-forward neural network with $m \in \mathbb{N}$ hidden layers can be defined as follows:
Let $x \in \mathbb{R}^s$, $s \in \mathbb{N}$, where $x = (x_1, \ldots, x_s)$; $\alpha_j, c_j \in \mathbb{R}^s$; $b_j \in \mathbb{R}$, with $0 \le j \le n$, $n \in \mathbb{N}$.
Here, $\alpha_j \cdot x$ is the inner product, and thus $\sigma(\alpha_j \cdot x + b_j) \in \mathbb{R}$; and $N_n(x) \in \mathbb{R}^s$, since $c_j \in \mathbb{R}^s$, as it comes from
$$N_n(x) = \sum_{j=0}^{n} c_j\, \sigma(\alpha_j \cdot x + b_j).$$
We define the following:
$$N_n^2(x) = \sum_{j=0}^{n} c_j\, \sigma\big(\alpha_j \cdot N_n(x) + b_j\big) = \sum_{j=0}^{n} c_j\, \sigma\Big(\alpha_j \cdot \Big(\sum_{j=0}^{n} c_j\, \sigma(\alpha_j \cdot x + b_j)\Big) + b_j\Big).$$
Furthermore, we can define
$$N_n^3(x) = \sum_{j=0}^{n} c_j\, \sigma\big(\alpha_j \cdot N_n^2(x) + b_j\big).$$
And, in general, we define the following:
$$N_n^m(x) = \sum_{j=0}^{n} c_j\, \sigma\big(\alpha_j \cdot N_n^{m-1}(x) + b_j\big), \quad \text{for } m \in \mathbb{N}.$$
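The following short Python sketch (not part of the original article) illustrates this iterated one-hidden-layer construction numerically; the weights $\alpha_j$, $b_j$, $c_j$, the sizes, and the choice $\sigma = \tanh$ are arbitrary placeholders.

```python
import numpy as np

def one_layer(x, A, b, c, sigma=np.tanh):
    """One-hidden-layer map N_n(x) = sum_j c_j * sigma(a_j . x + b_j).

    x : (s,) input; A : (n+1, s) rows a_j; b : (n+1,) biases;
    c : (n+1, s) output weights, so N_n(x) is again a vector in R^s.
    """
    z = sigma(A @ x + b)   # (n+1,) hidden activations
    return c.T @ z         # (s,) output: sum_j z_j * c_j

def multilayer(x, A, b, c, m):
    """m-fold composition N_n^m(x) = N_n(N_n^{m-1}(x)), with N_n^1 = N_n."""
    y = x
    for _ in range(m):
        y = one_layer(y, A, b, c)
    return y

# toy example: s = 3 inputs, n + 1 = 5 hidden units, m = 2 layers
rng = np.random.default_rng(0)
s, n = 3, 4
A = rng.normal(size=(n + 1, s))
b = rng.normal(size=n + 1)
c = rng.normal(size=(n + 1, s))
x = rng.normal(size=s)
print(multilayer(x, A, b, c, m=2))
```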

2. Basics

Here, we follow [18] (pp. 455–460).
Our activation function to be used here is
$$g_{q,\lambda}(x) := \frac{e^{\lambda x} - q\, e^{-\lambda x}}{e^{\lambda x} + q\, e^{-\lambda x}}, \quad \lambda, q > 0,\ x \in \mathbb{R},$$
where $\lambda$ is the parameter and $q$ is the deformation coefficient, typically with $0 < \lambda, q \le 1$.
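As an illustration only (not from the paper), the next snippet evaluates $g_{q,\lambda}$ directly from its definition and checks numerically that for $q = 1$ it reduces to $\tanh(\lambda x)$, and that in general it coincides with the shifted hyperbolic tangent $\tanh\!\big(\lambda x - \tfrac{\ln q}{2}\big)$; the parameter values are arbitrary.

```python
import numpy as np

def g(x, q, lam):
    """q-deformed, lambda-parametrized hyperbolic tangent g_{q,lambda}(x)."""
    return (np.exp(lam * x) - q * np.exp(-lam * x)) / \
           (np.exp(lam * x) + q * np.exp(-lam * x))

x = np.linspace(-5.0, 5.0, 11)
# q = 1: the deformation disappears and g reduces to tanh(lambda * x)
print(np.allclose(g(x, 1.0, 2.0), np.tanh(2.0 * x)))                      # True
# in general g_{q,lambda}(x) = tanh(lambda * x - ln(q) / 2)
print(np.allclose(g(x, 0.7, 2.0), np.tanh(2.0 * x - np.log(0.7) / 2)))    # True
```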
For more details, read Chapter 18 of [18]: “q-Deformed and λ -Parametrized Hyperbolic Tangent based Banach space Valued Ordinary and Fractional Neural Network Approximation”.
The chapter motivates our current work.
The proposed “symmetrization method” aims to use half of the data feed to our neural networks.
We will employ the following density function:
$$M_{q,\lambda}(x) := \tfrac{1}{4}\big[g_{q,\lambda}(x+1) - g_{q,\lambda}(x-1)\big] > 0, \quad x \in \mathbb{R};\ q, \lambda > 0.$$
We have
$$M_{q,\lambda}(-x) = M_{\frac{1}{q},\lambda}(x), \quad x \in \mathbb{R};\ q, \lambda > 0, \tag{3}$$
and
$$M_{\frac{1}{q},\lambda}(-x) = M_{q,\lambda}(x), \quad x \in \mathbb{R};\ q, \lambda > 0. \tag{4}$$
Adding (3) and (4), we obtain
$$M_{q,\lambda}(x) + M_{\frac{1}{q},\lambda}(x) = M_{q,\lambda}(-x) + M_{\frac{1}{q},\lambda}(-x),$$
which is the key to this work. Therefore,
$$\Phi(x) := \frac{M_{q,\lambda}(x) + M_{\frac{1}{q},\lambda}(x)}{2}$$
is an even function, symmetric with respect to the y-axis.
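A minimal numerical sketch (not part of the paper) of $M_{q,\lambda}$ and of the symmetrized kernel $\Phi$, verifying the symmetry relation (3) and the evenness of $\Phi$ for one arbitrary choice of $q$ and $\lambda$:

```python
import numpy as np

def g(x, q, lam):
    # numerically stable form of (e^{lam x} - q e^{-lam x})/(e^{lam x} + q e^{-lam x})
    return np.tanh(lam * x - 0.5 * np.log(q))

def M(x, q, lam):
    """Kernel M_{q,lambda}(x) = (1/4)[g_{q,lambda}(x + 1) - g_{q,lambda}(x - 1)]."""
    return 0.25 * (g(x + 1, q, lam) - g(x - 1, q, lam))

def Phi(x, q, lam):
    """Symmetrized kernel Phi(x) = [M_{q,lambda}(x) + M_{1/q,lambda}(x)] / 2."""
    return 0.5 * (M(x, q, lam) + M(x, 1.0 / q, lam))

q, lam = 0.5, 1.0
x = np.linspace(-4.0, 4.0, 9)
print(np.allclose(M(-x, q, lam), M(x, 1.0 / q, lam)))   # symmetry relation (3)
print(np.allclose(Phi(-x, q, lam), Phi(x, q, lam)))     # Phi is even
```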
By (18.18) of [18], we have
$$M_{q,\lambda}\!\left(\frac{\ln q}{2\lambda}\right) = \frac{\tanh \lambda}{2} \quad \text{and} \quad M_{\frac{1}{q},\lambda}\!\left(-\frac{\ln q}{2\lambda}\right) = \frac{\tanh \lambda}{2}, \quad \lambda > 0;$$
that is, the two kernels attain the same maximum value at symmetric points.
By Theorem 18.1, p. 458 of [18], we have
$$\sum_{i=-\infty}^{\infty} M_{q,\lambda}(x - i) = 1 \quad \text{and} \quad \sum_{i=-\infty}^{\infty} M_{\frac{1}{q},\lambda}(x - i) = 1, \quad \forall\, x \in \mathbb{R},\ \lambda, q > 0.$$
Consequently, we derive that
$$\sum_{i=-\infty}^{\infty} \Phi(x - i) = 1, \quad \forall\, x \in \mathbb{R}.$$
By Theorem 18.2, p. 459 of [18], we have
$$\int_{-\infty}^{\infty} M_{q,\lambda}(x)\, dx = 1 \quad \text{and} \quad \int_{-\infty}^{\infty} M_{\frac{1}{q},\lambda}(x)\, dx = 1,$$
so that
$$\int_{-\infty}^{\infty} \Phi(x)\, dx = 1.$$
Therefore, $\Phi$ is a density function.
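The partition-of-unity and unit-mass properties above can be checked numerically; the sketch below (illustrative only, with arbitrary parameters, a truncated series, and a simple Riemann sum in place of the exact series and integral) does so for a few sample points:

```python
import numpy as np

g   = lambda x, q, lam: np.tanh(lam * x - 0.5 * np.log(q))          # stable form of g_{q,lambda}
M   = lambda x, q, lam: 0.25 * (g(x + 1, q, lam) - g(x - 1, q, lam))
Phi = lambda x, q, lam: 0.5 * (M(x, q, lam) + M(x, 1.0 / q, lam))

q, lam = 0.5, 1.0

# partition of unity: sum_i Phi(x - i) ~ 1 (series truncated to |i| <= 200)
for x in (0.0, 0.3, -1.7):
    i = np.arange(-200, 201)
    print(x, Phi(x - i, q, lam).sum())

# unit mass: integral of Phi over R ~ 1 (Riemann sum on [-50, 50])
t, dt = np.linspace(-50.0, 50.0, 100001, retstep=True)
print((Phi(t, q, lam) * dt).sum())
```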
By Theorem 18.3, p. 459 of [18], we have the following:
Let $0 < \alpha < 1$, and $n \in \mathbb{N}$ with $n^{1-\alpha} > 2$; $q, \lambda > 0$. Then,
$$\sum_{\substack{k = -\infty \\ |nx - k| \ge n^{1-\alpha}}}^{\infty} M_{q,\lambda}(nx - k) < 2 \max\!\left(q, \tfrac{1}{q}\right) e^{4\lambda}\, e^{-2\lambda n^{1-\alpha}} = T\, e^{-2\lambda n^{1-\alpha}},$$
where $T := 2 \max\!\left(q, \tfrac{1}{q}\right) e^{4\lambda}$.
Similarly, we obtain
$$\sum_{\substack{k = -\infty \\ |nx - k| \ge n^{1-\alpha}}}^{\infty} M_{\frac{1}{q},\lambda}(nx - k) < T\, e^{-2\lambda n^{1-\alpha}}.$$
Consequently, we obtain
$$\sum_{\substack{k = -\infty \\ |nx - k| \ge n^{1-\alpha}}}^{\infty} \Phi(nx - k) < T\, e^{-2\lambda n^{1-\alpha}}.$$
Here, $\lceil \cdot \rceil$ denotes the ceiling of the number, and $\lfloor \cdot \rfloor$ its integral part.
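The exponential tail estimate can likewise be illustrated numerically. In the sketch below (not from the paper), the infinite tail sum is truncated to a window where $\Phi$ is non-negligible, and the values of $q$, $\lambda$, $\alpha$, and $x$ are arbitrary choices:

```python
import numpy as np

g   = lambda x, q, lam: np.tanh(lam * x - 0.5 * np.log(q))
M   = lambda x, q, lam: 0.25 * (g(x + 1, q, lam) - g(x - 1, q, lam))
Phi = lambda x, q, lam: 0.5 * (M(x, q, lam) + M(x, 1.0 / q, lam))

q, lam, alpha, x = 0.5, 1.0, 0.5, 0.3
T = 2.0 * max(q, 1.0 / q) * np.exp(4.0 * lam)

for n in (50, 100, 200):                                  # n^(1-alpha) > 2 holds here
    k = np.arange(int(n * x) - 400, int(n * x) + 401)     # window where Phi is non-negligible
    far = np.abs(n * x - k) >= n ** (1.0 - alpha)
    tail = Phi(n * x - k, q, lam)[far].sum()
    bound = T * np.exp(-2.0 * lam * n ** (1.0 - alpha))
    print(n, tail, bound, tail < bound)                   # the tail stays below the bound
```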
We will mention the following:
Theorem 18.4 (p. 459, [18]): Let $x \in [a, b] \subset \mathbb{R}$ and $n \in \mathbb{N}$ so that $\lceil na \rceil \le \lfloor nb \rfloor$. For $q, \lambda > 0$, we consider $\lambda_q > z_0 > 0$ such that $M_{q,\lambda}(z_0) = M_{q,\lambda}(0)$, with $\lambda_q > 1$. Then,
$$\frac{1}{\sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} M_{q,\lambda}(nx - k)} < \max\!\left\{\frac{1}{M_{q,\lambda}(\lambda_q)}, \frac{1}{M_{\frac{1}{q},\lambda}\big(\lambda_{\frac{1}{q}}\big)}\right\} =: \Delta(q).$$
Similarly, we consider $\lambda_{\frac{1}{q}} > z_1 > 0$ such that $M_{\frac{1}{q},\lambda}(z_1) = M_{\frac{1}{q},\lambda}(0)$, with $\lambda_{\frac{1}{q}} > 1$. Thus,
$$\frac{1}{\sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} M_{\frac{1}{q},\lambda}(nx - k)} < \max\!\left\{\frac{1}{M_{\frac{1}{q},\lambda}\big(\lambda_{\frac{1}{q}}\big)}, \frac{1}{M_{q,\lambda}(\lambda_q)}\right\} = \Delta(q).$$
Hence,
$$\sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} M_{q,\lambda}(nx - k) > \frac{1}{\Delta(q)},$$
and
$$\sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} M_{\frac{1}{q},\lambda}(nx - k) > \frac{1}{\Delta(q)}.$$
Consequently, the following holds:
$$\sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} \frac{M_{q,\lambda}(nx - k) + M_{\frac{1}{q},\lambda}(nx - k)}{2} > \frac{2}{2\,\Delta(q)} = \frac{1}{\Delta(q)},$$
so that
$$\frac{1}{\sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} \frac{M_{q,\lambda}(nx - k) + M_{\frac{1}{q},\lambda}(nx - k)}{2}} < \Delta(q);$$
that is,
$$\frac{1}{\sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} \Phi(nx - k)} < \Delta(q).$$
We have proved the following:
Theorem 1. 
Let $x \in [a, b] \subset \mathbb{R}$ and $n \in \mathbb{N}$ so that $\lceil na \rceil \le \lfloor nb \rfloor$. For $q, \lambda > 0$, we consider $\lambda_q > z_0 > 0$ such that $M_{q,\lambda}(z_0) = M_{q,\lambda}(0)$, with $\lambda_q > 1$. Also, consider $\lambda_{\frac{1}{q}} > z_1 > 0$ such that $M_{\frac{1}{q},\lambda}(z_1) = M_{\frac{1}{q},\lambda}(0)$, with $\lambda_{\frac{1}{q}} > 1$. Then,
$$\frac{1}{\sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} \Phi(nx - k)} < \Delta(q).$$
We have the following:
Remark 1. 
(I) By Remark 18.5, p. 460 of [18], we have
$$\lim_{n \to \infty} \sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} M_{q,\lambda}(n x_1 - k) \ne 1, \quad \text{for some } x_1 \in [a, b],$$
and
$$\lim_{n \to \infty} \sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} M_{\frac{1}{q},\lambda}(n x_2 - k) \ne 1, \quad \text{for some } x_2 \in [a, b].$$
Therefore, the following holds:
$$\lim_{n \to \infty} \sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} \frac{M_{q,\lambda}(n x_1 - k) + M_{\frac{1}{q},\lambda}(n x_2 - k)}{2} \ne 1.$$
Hence,
$$\lim_{n \to \infty} \sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} \frac{M_{q,\lambda}(n x_1 - k) + M_{\frac{1}{q},\lambda}(n x_1 - k)}{2} \ne 1,$$
even if
$$\lim_{n \to \infty} \sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} M_{\frac{1}{q},\lambda}(n x_1 - k) = 1,$$
because then
$$\lim_{n \to \infty} \left[\, \sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} \frac{M_{q,\lambda}(n x_1 - k)}{2} + \frac{1}{2} \right] \ne 1,$$
and equivalently,
$$\lim_{n \to \infty} \sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} \frac{M_{q,\lambda}(n x_1 - k)}{2} \ne \frac{1}{2},$$
which is true by
$$\lim_{n \to \infty} \sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} M_{q,\lambda}(n x_1 - k) \ne 1.$$
(II) Let $[a, b] \subset \mathbb{R}$. For large enough $n$, we always have $\lceil na \rceil \le \lfloor nb \rfloor$. Also, $a \le \frac{k}{n} \le b$ iff $\lceil na \rceil \le k \le \lfloor nb \rfloor$. So, in general, the following holds:
$$\sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} \Phi(nx - k) \le 1.$$
Let $(X, \|\cdot\|)$ be a Banach space.
Definition 1. 
Let $f \in C([a, b], X)$ and $n \in \mathbb{N}$ : $\lceil na \rceil \le \lfloor nb \rfloor$. We introduce and define the X-valued symmetrized and perturbed linear neural network operators:
$$F_n^{s}(f, x) := \frac{\sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} f\!\left(\frac{k}{n}\right) \Phi(nx - k)}{\sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} \Phi(nx - k)} = \frac{\sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} f\!\left(\frac{k}{n}\right) \left[M_{q,\lambda}(nx - k) + M_{\frac{1}{q},\lambda}(nx - k)\right]}{\sum_{k = \lceil na \rceil}^{\lfloor nb \rfloor} \left[M_{q,\lambda}(nx - k) + M_{\frac{1}{q},\lambda}(nx - k)\right]}.$$
The modulus of continuity is defined by
$$\omega_1(f, \delta) := \sup_{\substack{x, y \in [a, b] \\ |x - y| \le \delta}} \|f(x) - f(y)\|, \quad \delta > 0.$$
The same is defined for $f \in C_{uB}(\mathbb{R}, X)$ (uniformly continuous and bounded functions), for $f \in C_B(\mathbb{R}, X)$ (bounded and continuous functions), and for $f \in C_u(\mathbb{R}, X)$ (uniformly continuous functions).
The fact that $f \in C([a, b], X)$ or $f \in C_u(\mathbb{R}, X)$ is equivalent to $\lim_{\delta \to 0} \omega_1(f, \delta) = 0$.
In this work, we specialize to the case of $X = (\mathbb{C}, |\cdot|)$, the field of complex numbers, which is a Banach space over the real numbers.
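To make the construction concrete, here is a minimal numerical sketch (not part of the paper) of the operator $F_n^{s}(f, x)$ of Definition 1 for a complex-valued function; the interval, the parameters $q$ and $\lambda$, and the test function $f(t) = e^{it}$ are arbitrary illustrative choices.

```python
import numpy as np

g   = lambda x, q, lam: np.tanh(lam * x - 0.5 * np.log(q))          # stable form of g_{q,lambda}
M   = lambda x, q, lam: 0.25 * (g(x + 1, q, lam) - g(x - 1, q, lam))
Phi = lambda x, q, lam: 0.5 * (M(x, q, lam) + M(x, 1.0 / q, lam))

def F_ns(f, x, n, a, b, q=0.5, lam=1.0):
    """Symmetrized, perturbed operator F_n^s(f, x) on [a, b] (numerical sketch)."""
    k = np.arange(np.ceil(n * a), np.floor(n * b) + 1)   # k = ceil(na), ..., floor(nb)
    w = Phi(n * x - k, q, lam)                           # kernel weights
    return np.sum(f(k / n) * w) / np.sum(w)              # weighted average of samples f(k/n)

f = lambda t: np.exp(1j * t)          # complex-valued test function
a, b, x = -1.0, 2.0, 0.4
for n in (10, 100, 1000):
    print(n, abs(F_ns(f, x, n, a, b) - f(x)))            # pointwise error decreases with n
```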

3. Main Results

We present $\mathbb{C}$-valued symmetrized and perturbed neural network high-order approximations, with rates, to a given function. We start with a trigonometric approximation.
Theorem 2. 
Let $f \in C^2([a, b], \mathbb{C})$, $0 < \alpha < 1$, $n \in \mathbb{N} : n^{1-\alpha} > 2$, $x \in [a, b]$. Then,
(1)
$$\left| F_n^{s}(f, x) - f(x) \right| \le \Delta(q) \Bigg\{ |f'(x)| \left[ \frac{1}{n^{\alpha}} + (b - a)\, T\, e^{-2\lambda n^{1-\alpha}} \right] + \frac{|f''(x)|}{2} \left[ \frac{1}{n^{2\alpha}} + (b - a)^2\, T\, e^{-2\lambda n^{1-\alpha}} \right] + \left[ \frac{\omega_1\!\left(f'' + f, \frac{1}{n^{\alpha}}\right)}{2 n^{2\alpha}} + \left\| f'' + f \right\|_{\infty} (b - a)^2\, T\, e^{-2\lambda n^{1-\alpha}} \right] \Bigg\};$$
(2) if $f'(x) = f''(x) = 0$, we obtain
$$\left| F_n^{s}(f, x) - f(x) \right| \le \Delta(q) \left[ \frac{\omega_1\!\left(f'' + f, \frac{1}{n^{\alpha}}\right)}{2 n^{2\alpha}} + \left\| f'' + f \right\|_{\infty} (b - a)^2\, T\, e^{-2\lambda n^{1-\alpha}} \right],$$
and notice here the high rate of convergence at $n^{-3\alpha}$;
(3) furthermore, we obtain
$$\left\| F_n^{s}(f) - f \right\|_{\infty} \le \Delta(q) \Bigg\{ \|f'\|_{\infty} \left[ \frac{1}{n^{\alpha}} + (b - a)\, T\, e^{-2\lambda n^{1-\alpha}} \right] + \frac{\|f''\|_{\infty}}{2} \left[ \frac{1}{n^{2\alpha}} + (b - a)^2\, T\, e^{-2\lambda n^{1-\alpha}} \right] + \left[ \frac{\omega_1\!\left(f'' + f, \frac{1}{n^{\alpha}}\right)}{2 n^{2\alpha}} + \left\| f'' + f \right\|_{\infty} (b - a)^2\, T\, e^{-2\lambda n^{1-\alpha}} \right] \Bigg\},$$
i.e., $\lim_{n \to +\infty} F_n^{s}(f) = f$, pointwise and uniformly;
(4) and finally, the following holds:
$$\left| F_n^{s}(f, x) - f(x) - f'(x)\, F_n^{s}\big(\sin(\cdot - x), x\big) - 2 f''(x)\, F_n^{s}\!\left(\sin^2\!\left(\tfrac{\cdot - x}{2}\right), x\right) \right| \le \Delta(q) \left[ \frac{\omega_1\!\left(f'' + f, \frac{1}{n^{\alpha}}\right)}{2 n^{2\alpha}} + \left\| f'' + f \right\|_{\infty} (b - a)^2\, T\, e^{-2\lambda n^{1-\alpha}} \right],$$
and again, here we achieve the high speed of convergence at $n^{-3\alpha}$.
Proof. 
As it is very lengthy but similar to the proof of Theorem 4.5 of [19] (pp. 120–129), it is omitted. Basically, the density function here, Φ , replaces the density function of the reference M q , λ .
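As a purely illustrative check (not part of the paper), one can observe numerically how fast $|F_n^{s}(f, x) - f(x)|$ decays for a smooth complex-valued $f$ and compare the observed order qualitatively with the rates in Theorem 2; the test function, interval, and parameters below are arbitrary choices.

```python
import numpy as np

g   = lambda x, q, lam: np.tanh(lam * x - 0.5 * np.log(q))
M   = lambda x, q, lam: 0.25 * (g(x + 1, q, lam) - g(x - 1, q, lam))
Phi = lambda x, q, lam: 0.5 * (M(x, q, lam) + M(x, 1.0 / q, lam))

def F_ns(f, x, n, a, b, q=0.5, lam=1.0):
    k = np.arange(np.ceil(n * a), np.floor(n * b) + 1)
    w = Phi(n * x - k, q, lam)
    return np.sum(f(k / n) * w) / np.sum(w)

f = lambda t: np.exp(1j * t) + (1.0 + 2.0j) * t ** 2     # smooth complex-valued test function
a, b, x = -1.0, 2.0, 0.4

ns = [10, 20, 40, 80, 160, 320]
errs = [abs(F_ns(f, x, n, a, b) - f(x)) for n in ns]
orders = [np.log2(errs[i] / errs[i + 1]) for i in range(len(errs) - 1)]
print(errs)      # pointwise errors
print(orders)    # observed order of decay as n doubles
```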
We continue with a hyperbolic high-order symmetrized and perturbed neural network approximation.
Theorem 3. 
Let $f \in C^2([a, b], \mathbb{C})$, $0 < \alpha < 1$, $n \in \mathbb{N} : n^{1-\alpha} > 2$, $x \in [a, b]$. Then,
(1)
$$\left| F_n^{s}(f, x) - f(x) \right| \le \Delta(q) \cosh(b - a) \Bigg\{ |f'(x)| \left[ \frac{1}{n^{\alpha}} + (b - a)\, T\, e^{-2\lambda n^{1-\alpha}} \right] + \frac{|f''(x)|}{2} \left[ \frac{1}{n^{2\alpha}} + (b - a)^2\, T\, e^{-2\lambda n^{1-\alpha}} \right] + \left[ \frac{\omega_1\!\left(f'' - f, \frac{1}{n^{\alpha}}\right)}{2 n^{2\alpha}} + \left\| f'' - f \right\|_{\infty} (b - a)^2\, T\, e^{-2\lambda n^{1-\alpha}} \right] \Bigg\};$$
(2) if $f'(x) = f''(x) = 0$, we obtain
$$\left| F_n^{s}(f, x) - f(x) \right| \le \Delta(q) \cosh(b - a) \left[ \frac{\omega_1\!\left(f'' - f, \frac{1}{n^{\alpha}}\right)}{2 n^{2\alpha}} + \left\| f'' - f \right\|_{\infty} (b - a)^2\, T\, e^{-2\lambda n^{1-\alpha}} \right],$$
and notice here the high rate of convergence at $n^{-3\alpha}$;
(3) furthermore, we obtain
$$\left\| F_n^{s}(f) - f \right\|_{\infty} \le \Delta(q) \cosh(b - a) \Bigg\{ \|f'\|_{\infty} \left[ \frac{1}{n^{\alpha}} + (b - a)\, T\, e^{-2\lambda n^{1-\alpha}} \right] + \frac{\|f''\|_{\infty}}{2} \left[ \frac{1}{n^{2\alpha}} + (b - a)^2\, T\, e^{-2\lambda n^{1-\alpha}} \right] + \left[ \frac{\omega_1\!\left(f'' - f, \frac{1}{n^{\alpha}}\right)}{2 n^{2\alpha}} + \left\| f'' - f \right\|_{\infty} (b - a)^2\, T\, e^{-2\lambda n^{1-\alpha}} \right] \Bigg\},$$
and it follows that $\lim_{n \to +\infty} F_n^{s}(f) = f$, pointwise and uniformly;
in addition,
(4)
$$\left| F_n^{s}(f, x) - f(x) - f'(x)\, F_n^{s}\big(\sinh(\cdot - x), x\big) - 2 f''(x)\, F_n^{s}\!\left(\sinh^2\!\left(\tfrac{\cdot - x}{2}\right), x\right) \right| \le \Delta(q) \cosh(b - a) \left[ \frac{\omega_1\!\left(f'' - f, \frac{1}{n^{\alpha}}\right)}{2 n^{2\alpha}} + \left\| f'' - f \right\|_{\infty} (b - a)^2\, T\, e^{-2\lambda n^{1-\alpha}} \right],$$
and again, here we achieve the high speed of convergence at $n^{-3\alpha}$.
Proof. 
It is very similar to the lengthy proof of Theorem 4.6 of [19] (pp. 129–138). Basically, the density function here, Φ , replaces the density function of the reference M q , λ .
Next follows a mixed hyperbolic–trigonometric high-order symmetrized and perturbed neural network approximation.
Theorem 4. 
Let $f \in C^4([a, b], \mathbb{C})$, $0 < \alpha < 1$, $n \in \mathbb{N} : n^{1-\alpha} > 2$, $x \in [a, b]$. Then,
(1)
$$\Bigg| F_n^{s}(f, x) - f(x) - \frac{f'(x)}{2} F_n^{s}\big(\sinh(\cdot - x) + \sin(\cdot - x), x\big) - \frac{f''(x)}{2} F_n^{s}\big(\cosh(\cdot - x) - \cos(\cdot - x), x\big) - \frac{f^{(3)}(x)}{2} F_n^{s}\big(\sinh(\cdot - x) - \sin(\cdot - x), x\big) - f^{(4)}(x)\, F_n^{s}\!\left(\sinh^2\!\left(\tfrac{\cdot - x}{2}\right) - \sin^2\!\left(\tfrac{\cdot - x}{2}\right), x\right) \Bigg|$$
$$\le \Delta(q) \left( \frac{\cosh(b - a) + 1}{2} \right) \left[ \frac{\omega_1\!\left(f^{(4)} - f, \frac{1}{n^{\alpha}}\right)}{2 n^{2\alpha}} + \left\| f^{(4)} - f \right\|_{\infty} (b - a)^2\, T\, e^{-2\lambda n^{1-\alpha}} \right];$$
(2) if $f^{(i)}(x) = 0$, $i = 1, 2, 3, 4$, we obtain
$$\left| F_n^{s}(f, x) - f(x) \right| \le \Delta(q) \left( \frac{\cosh(b - a) + 1}{2} \right) \left[ \frac{\omega_1\!\left(f^{(4)} - f, \frac{1}{n^{\alpha}}\right)}{2 n^{2\alpha}} + \left\| f^{(4)} - f \right\|_{\infty} (b - a)^2\, T\, e^{-2\lambda n^{1-\alpha}} \right],$$
and in the last inequality, we observe the high speed of convergence at $n^{-3\alpha}$.
Proof. 
It is very similar to the long proof of Theorem 4.7 of [19] (pp. 139–145). Basically, the density function here, Φ , replaces the density function of the reference M q , λ .
We continue with a general symmetrized and perturbed trigonometric result.
Theorem 5. 
Let $f \in C^4([a, b], \mathbb{C})$, $0 < \alpha < 1$, $n \in \mathbb{N} : n^{1-\alpha} > 2$, $x \in [a, b]$. Also, let $\bar{\alpha}, \bar{\beta} \in \mathbb{R}$ with $\bar{\alpha}\, \bar{\beta}\, (\bar{\alpha}^2 - \bar{\beta}^2) \ne 0$. Then,
(1)
$$\Bigg| F_n^{s}(f, x) - f(x) - \frac{f'(x)}{\bar{\alpha} \bar{\beta} (\bar{\beta}^2 - \bar{\alpha}^2)} F_n^{s}\big(\bar{\beta}^3 \sin(\bar{\alpha}(\cdot - x)) - \bar{\alpha}^3 \sin(\bar{\beta}(\cdot - x)), x\big) - \frac{f''(x)}{\bar{\beta}^2 - \bar{\alpha}^2} F_n^{s}\big(\cos(\bar{\alpha}(\cdot - x)) - \cos(\bar{\beta}(\cdot - x)), x\big) - \frac{f^{(3)}(x)}{\bar{\alpha} \bar{\beta} (\bar{\beta}^2 - \bar{\alpha}^2)} F_n^{s}\big(\bar{\beta} \sin(\bar{\alpha}(\cdot - x)) - \bar{\alpha} \sin(\bar{\beta}(\cdot - x)), x\big) - \frac{2 \left( f^{(4)}(x) + (\bar{\alpha}^2 + \bar{\beta}^2) f''(x) \right)}{(\bar{\alpha} \bar{\beta})^2 (\bar{\beta}^2 - \bar{\alpha}^2)}\, F_n^{s}\!\left(\bar{\beta}^2 \sin^2\!\left(\tfrac{\bar{\alpha}(\cdot - x)}{2}\right) - \bar{\alpha}^2 \sin^2\!\left(\tfrac{\bar{\beta}(\cdot - x)}{2}\right), x\right) \Bigg|$$
$$\le \frac{\Delta(q)}{\left|\bar{\beta}^2 - \bar{\alpha}^2\right|} \left[ \frac{\omega_1\!\left(f^{(4)} + (\bar{\alpha}^2 + \bar{\beta}^2) f'' + \bar{\alpha}^2 \bar{\beta}^2 f, \frac{1}{n^{\alpha}}\right)}{n^{2\alpha}} + 2 \left\| f^{(4)} + (\bar{\alpha}^2 + \bar{\beta}^2) f'' + \bar{\alpha}^2 \bar{\beta}^2 f \right\|_{\infty} (b - a)^2\, T\, e^{-2\lambda n^{1-\alpha}} \right];$$
(2) if $f^{(i)}(x) = 0$, $i = 1, 2, 3, 4$, we obtain
$$\left| F_n^{s}(f, x) - f(x) \right| \le \frac{\Delta(q)}{\left|\bar{\beta}^2 - \bar{\alpha}^2\right|} \left[ \frac{\omega_1\!\left(f^{(4)} + (\bar{\alpha}^2 + \bar{\beta}^2) f'' + \bar{\alpha}^2 \bar{\beta}^2 f, \frac{1}{n^{\alpha}}\right)}{n^{2\alpha}} + 2 \left\| f^{(4)} + (\bar{\alpha}^2 + \bar{\beta}^2) f'' + \bar{\alpha}^2 \bar{\beta}^2 f \right\|_{\infty} (b - a)^2\, T\, e^{-2\lambda n^{1-\alpha}} \right].$$
The high speed of convergence in (1) and (2) is $n^{-3\alpha}$.
Proof. 
As it is similar to the proof of Theorem 4.8 of [19], p. 146, it is omitted. Basically, the density function here, Φ , replaces the density function of the reference M q , λ .
We finish with a general symmetrized and perturbed hyperbolic result.
Theorem 6. 
Let $f \in C^4([a, b], \mathbb{C})$, $0 < \alpha < 1$, $n \in \mathbb{N} : n^{1-\alpha} > 2$, $x \in [a, b]$. Also, let $\bar{\alpha}, \bar{\beta} \in \mathbb{R}$ with $\bar{\alpha}\, \bar{\beta}\, (\bar{\alpha}^2 - \bar{\beta}^2) \ne 0$. Then,
(1)
$$\Bigg| F_n^{s}(f, x) - f(x) - \frac{f'(x)}{\bar{\alpha} \bar{\beta} (\bar{\beta}^2 - \bar{\alpha}^2)} F_n^{s}\big(\bar{\beta}^3 \sinh(\bar{\alpha}(\cdot - x)) - \bar{\alpha}^3 \sinh(\bar{\beta}(\cdot - x)), x\big) - \frac{f''(x)}{\bar{\beta}^2 - \bar{\alpha}^2} F_n^{s}\big(\cosh(\bar{\beta}(\cdot - x)) - \cosh(\bar{\alpha}(\cdot - x)), x\big) - \frac{f^{(3)}(x)}{\bar{\alpha} \bar{\beta} (\bar{\beta}^2 - \bar{\alpha}^2)} F_n^{s}\big(\bar{\alpha} \sinh(\bar{\beta}(\cdot - x)) - \bar{\beta} \sinh(\bar{\alpha}(\cdot - x)), x\big) - \frac{2 \left( f^{(4)}(x) - (\bar{\alpha}^2 + \bar{\beta}^2) f''(x) \right)}{(\bar{\alpha} \bar{\beta})^2 (\bar{\beta}^2 - \bar{\alpha}^2)}\, F_n^{s}\!\left(\bar{\alpha}^2 \sinh^2\!\left(\tfrac{\bar{\beta}(\cdot - x)}{2}\right) - \bar{\beta}^2 \sinh^2\!\left(\tfrac{\bar{\alpha}(\cdot - x)}{2}\right), x\right) \Bigg|$$
$$\le \frac{\Delta(q) \cosh(b - a)}{\left|\bar{\beta}^2 - \bar{\alpha}^2\right|} \left[ \frac{\omega_1\!\left(f^{(4)} - (\bar{\alpha}^2 + \bar{\beta}^2) f'' + \bar{\alpha}^2 \bar{\beta}^2 f, \frac{1}{n^{\alpha}}\right)}{n^{2\alpha}} + 2 \left\| f^{(4)} - (\bar{\alpha}^2 + \bar{\beta}^2) f'' + \bar{\alpha}^2 \bar{\beta}^2 f \right\|_{\infty} (b - a)^2\, T\, e^{-2\lambda n^{1-\alpha}} \right];$$
(2) if $f^{(i)}(x) = 0$, $i = 1, 2, 3, 4$, we obtain
$$\left| F_n^{s}(f, x) - f(x) \right| \le \frac{\Delta(q) \cosh(b - a)}{\left|\bar{\beta}^2 - \bar{\alpha}^2\right|} \left[ \frac{\omega_1\!\left(f^{(4)} - (\bar{\alpha}^2 + \bar{\beta}^2) f'' + \bar{\alpha}^2 \bar{\beta}^2 f, \frac{1}{n^{\alpha}}\right)}{n^{2\alpha}} + 2 \left\| f^{(4)} - (\bar{\alpha}^2 + \bar{\beta}^2) f'' + \bar{\alpha}^2 \bar{\beta}^2 f \right\|_{\infty} (b - a)^2\, T\, e^{-2\lambda n^{1-\alpha}} \right].$$
The high speed of convergence in (1) and (2) is $n^{-3\alpha}$.
Proof. 
As it is similar to the proof of Theorem 4.9 of [19], p. 147, it is omitted. Basically, the density function here, Φ , replaces the density function of the reference M q , λ . □

4. Conclusions

The author herein exhibits symmetrized q-deformed and λ-parametrized hyperbolic tangent function-activated high-order neural network approximations to complex-valued continuous functions over compact intervals of the real line. All convergences are with rates expressed via the modulus of continuity of the involved functions' high-order derivatives. The basis of our higher-order approximations is some trigonometric- and hyperbolic-type Taylor formulae newly discovered by the author. Our compact intervals are not necessarily symmetric about the origin. The applied symmetrization technique and the newly introduced related operators cut in half the data fed to the neural networks, thus greatly accelerating their convergence to the unit operator. The setting is both trigonometric and hyperbolic.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

We would like to thank the reviewers who generously shared their time and opinions.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Anastassiou, G.A. Rate of convergence of some neural network operators to the unit-univariate case. J. Math. Anal. Appl. 1997, 212, 237–262. [Google Scholar] [CrossRef]
  2. Anastassiou, G.A. Quantitative Approximations; Chapman & Hall/CRC: Boca Raton, FL, USA; New York, NY, USA, 2001. [Google Scholar]
  3. Chen, Z.; Cao, F. The approximation operators with sigmoidal functions. Comput. Math. Appl. 2009, 58, 758–765. [Google Scholar] [CrossRef]
  4. Anastassiou, G.A. Intelligent Systems: Approximation by Artificial Neural Networks; Intelligent Systems Reference Library; Springer: Berlin/Heidelberg, Germany, 2011; Volume 19. [Google Scholar]
  5. Yu, D.; Cao, F. Construction and approximation rate for feedforward neural network operators with sigmoidal functions. J. Comput. Appl. Math. 2025, 453, 116150. [Google Scholar] [CrossRef]
  6. Cen, S.; Jin, B.; Quan, Q.; Zhou, Z. Hybrid neural-network FEM approximation of diffusion coefficient in elliptic and parabolic problems. IMA J. Numer. Anal. 2024, 44, 3059–3093. [Google Scholar] [CrossRef]
  7. Coroianu, L.; Costarelli, D.; Natale, M.; Pantiş, A. The approximation capabilities of Durrmeyer-type neural network operators. J. Appl. Math. Comput. 2024, 70, 4581–4599. [Google Scholar] [CrossRef]
  8. Warin, X. The GroupMax neural network approximation of convex functions. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 11608–11612. [Google Scholar] [CrossRef] [PubMed]
  9. Fabra, A.; Guasch, O.; Baiges, J.; Codina, R. Approximation of acoustic black holes with finite element mixed formulations and artificial neural network correction terms. Finite Elem. Anal. Des. 2024, 241, 104236. [Google Scholar] [CrossRef]
  10. Grohs, P.; Voigtlaender, F. Proof of the theory-to-practice gap in deep learning via sampling complexity bounds for neural network approximation spaces. Found. Comput. Math. 2024, 24, 1085–1143. [Google Scholar] [CrossRef]
  11. Basteri, A.; Trevisan, D. Quantitative Gaussian approximation of randomly initialized deep neural networks. Mach. Learn. 2024, 113, 6373–6393. [Google Scholar] [CrossRef]
  12. De Ryck, T.; Mishra, S. Error analysis for deep neural network approximations of parametric hyperbolic conservation laws. Math. Comp. 2024, 93, 2643–2677. [Google Scholar] [CrossRef]
  13. Liu, J.; Zhang, B.; Lai, Y.; Fang, L. Hull form optimization research based on multi-precision back-propagation neural network approximation model. Int. J. Numer. Methods Fluids 2024, 96, 1445–1460. [Google Scholar] [CrossRef]
  14. Yoo, J.; Kim, J.; Gim, M.; Lee, H. Error estimates of physics-informed neural networks for initial value problems. J. Korean Soc. Ind. Appl. Math. 2024, 28, 33–58. [Google Scholar]
  15. Haykin, S. Neural Networks: A Comprehensive Foundation, 2nd ed.; Prentice Hall: New York, NY, USA, 1998. [Google Scholar]
  16. McCulloch, W.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 1943, 7, 115–133. [Google Scholar] [CrossRef]
  17. Mitchell, T.M. Machine Learning; WCB-McGraw-Hill: New York, NY, USA, 1997. [Google Scholar]
  18. Anastassiou, G.A. Parametrized, Deformed and General Neural Networks; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 2023. [Google Scholar]
  19. Anastassiou, G.A. Trigonometric and Hyperbolic Generated Approximation Theory; World Scientific: Hackensack, NJ, USA; London, UK; Singapore, 2025. [Google Scholar]
