There was an error in the original publication [1]. A correction has been made to Section 2. Basics, Paragraphs 6–8:
So we have h2 : ℝ → (−1, 1), h1|(−1,1) : (−1, 1) → (−1, 1), and the strictly increasing function H := h1|(−1,1) ◦ h2 : ℝ → (−1, 1) with H(0) = 0, whose graph contains an arc of finite length starting at (−1, h1(h2(−1))) and terminating at (1, h1(h2(1))). We also call this arc H. In particular, H is negative and convex over (−1, 0], and positive and concave over [0, 1).
Thus the arc has compact support [−1, 1] and behaves like a squashing function; see [3], Ch. 1, p. 8.
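For concreteness, here is a minimal numeric sketch, assuming the choice h1 = h2 = tanh (an assumption for illustration; any pair satisfying the stated hypotheses works). It checks that H(0) = 0, that H is negative on (−1, 0) and positive on (0, 1), and a midpoint-convexity spot check on (−1, 0]:

```python
import math

def H(x):
    # One concrete instance of H = h1|(-1,1) ∘ h2, here with h1 = h2 = tanh
    # (an assumed choice, not the only one satisfying the hypotheses).
    return math.tanh(math.tanh(x))

print(H(0.0))            # 0.0, so H(0) = 0
print(H(-0.5), H(0.5))   # ≈ -0.432 and ≈ 0.432: negative left of 0, positive right of 0

# Midpoint-convexity spot check on (-1, 0]: H((a+b)/2) <= (H(a)+H(b))/2
a, b = -0.9, -0.1
print(H((a + b) / 2) <= (H(a) + H(b)) / 2)   # True
```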
From now on we work with |H|, whose graph is a cusp joining the points (−1, |h1(h2(−1))|), (0, 0), and (1, h1(h2(1))), all three of which belong to the graph of |H|, again with compact support [−1, 1].
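A similar sketch, under the same assumed choice h1 = h2 = tanh, exhibits the cusp of |H| at the origin through its differing one-sided slopes, with |H| set to 0 outside the support [−1, 1]:

```python
import math

def H(x):
    # Assumed concrete instance: h1 = h2 = tanh
    return math.tanh(math.tanh(x))

def absH(x):
    # |H| restricted to its compact support [-1, 1]
    return abs(H(x)) if -1.0 <= x <= 1.0 else 0.0

print(absH(-1.0), absH(0.0), absH(1.0))   # ≈ 0.642, 0.0, ≈ 0.642: the three marked points

# One-sided difference quotients at 0 differ in sign, so the graph has a cusp there
eps = 1e-6
print((absH(eps) - absH(0.0)) / eps)    # ≈ +1.0 (slope from the right)
print((absH(0.0) - absH(-eps)) / eps)   # ≈ -1.0 (slope from the left)
```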
Typically H has a steeper slope than that of h2, but it is flatter and closer to the x-axis than h2 is; e.g., tanh(tanh x) has asymptotes ±tanh(1) ≈ ±0.76, while tanh x has asymptotes ±1. Clearly H has applications in spiking neural networks.
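A quick check of the asymptote claim, again under the assumed choice h1 = h2 = tanh: tanh(tanh x) saturates at ±tanh(1) ≈ ±0.7616, and its magnitude never exceeds that of tanh x:

```python
import math

for x in (0.5, 1.0, 2.0, 5.0, 10.0):
    inner = math.tanh(x)
    outer = math.tanh(inner)   # H(x) = tanh(tanh x)
    print(f"x = {x:4.1f}   tanh x = {inner:.4f}   tanh(tanh x) = {outer:.4f}")

# As x grows, tanh x -> 1, so tanh(tanh x) -> tanh(1) ≈ 0.7616,
# and |tanh(tanh x)| < |tanh x| for x != 0 (H sits closer to the x-axis).
print(math.tanh(1.0))   # 0.7615941559557649
```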
The author states that the scientific conclusions are unaffected. This correction was approved by the Section Editor-in-Chief. The original publication has also been updated.
Reference
- Anastassiou, G.A. Composition of Activation Functions and the Reduction to Finite Domain. Mathematics 2025, 13, 3177.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).