Advances in the Approximation of the Matrix Hyperbolic Tangent

In this paper, we introduce two approaches to compute the matrix hyperbolic tangent. One of them is based on its own definition and uses the matrix exponential, while the other is focused on its Taylor series expansion. For this second approximation, we analyse two different alternatives to evaluate the corresponding matrix polynomials. This resulted in three stable and accurate codes, which we implemented in MATLAB and compared, numerically and computationally, by means of a battery of tests composed of distinct state-of-the-art matrices. Our results show that the Taylor series-based methods were more accurate, although somewhat more computationally expensive, than the approach based on the matrix exponential. To avoid this drawback, we propose the use of a set of formulas that allows us to evaluate polynomials more efficiently than the traditional Paterson–Stockmeyer method, thus substantially reducing the number of matrix products (practically equal in number to the approach based on the matrix exponential) without penalising the accuracy of the result.


Introduction and Notation
Matrix functions have been an increasing focus of attention due to their applications to new and interesting problems related, e.g., to statistics [1], Lie theory [2], differential equations (the matrix exponential function e^{At} can be considered a classical example for its application in the solution of first-order differential systems Y'(t) = AY(t), with A ∈ C^{n×n}, and for its use in the development of exponential integrators for nonlinear ODEs and PDEs; see [3] for example), approximation theory, and many other areas of science and engineering [4].
There are different ways to define the notion of the function f(A) of a square matrix A. The most common are via the Jordan canonical form, via Hermite interpolation, and via the Cauchy integral formula. The equivalence among the different definitions of a matrix function can be found in [5]. Several general methods have been proposed for evaluating matrix functions, among which we can highlight the Taylor or Padé approximations and methods based on the Schur form of a matrix [4].
Among the most well-known matrix functions, we have the matrix hyperbolic cosine and the matrix hyperbolic sine functions, respectively defined in terms of the matrix exponential function e^A by means of the following expressions:

cosh(A) = (e^A + e^{-A})/2,   sinh(A) = (e^A - e^{-A})/2.   (1)

These matrix functions are applied, e.g., in the study of communicability analysis in complex networks [6], or to construct the exact series solution of coupled hyperbolic systems [7]. Precisely due to their applicability, the numerical computation of these functions has received remarkable and growing attention in recent years. A set of state-of-the-art algorithms to calculate these functions developed by the authors can be found in [8][9][10][11].
On the other hand, we have the matrix hyperbolic tangent function, defined as

tanh(A) = sinh(A)(cosh(A))^{-1} = (cosh(A))^{-1} sinh(A),   (2)

and used, for instance, to give an analytical solution of the radiative transfer equation [12], in the heat transference field [13,14], in the study of symplectic systems [15,16], in graph theory [17], and in the development of special types of exponential integrators [18,19].
In this work, we propose and study two different implementations that compute the matrix hyperbolic tangent function: the first uses the matrix exponential function whereas the second is based on its Taylor series expansion. In addition, for the second approach, we use and compare two different alternatives to evaluate the matrix polynomials involved in the series expansion.

The Matrix Exponential Function-Based Approach
This first option is derived from the matrix hyperbolic tangent function definition as expressed in Equations (1) and (2), from which the following matrix rational expression is immediately deduced:

tanh(A) = (e^{2A} - I)(e^{2A} + I)^{-1} = (e^{2A} + I)^{-1}(e^{2A} - I),   (3)

where I denotes the identity matrix with the same order as A. Equation (3) reduces the approximation of the matrix hyperbolic tangent function to the computation of the matrix exponential e^{2A}.
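As a minimal illustration of Equation (3), the following Python sketch computes tanh(A) from the matrix exponential. This is our own didactic fragment, not the paper's MATLAB code tanh_expm: the function name and the use of SciPy's expm are our choices, and no scaling and squaring is applied, so it is adequate only for matrices of moderate norm.

```python
import numpy as np
from scipy.linalg import expm

def tanh_via_expm(A):
    """Equation (3): tanh(A) = (e^{2A} + I)^{-1} (e^{2A} - I)."""
    E = expm(2.0 * A)
    I = np.eye(A.shape[0])
    # Solve (e^{2A} + I) X = e^{2A} - I rather than forming an inverse.
    return np.linalg.solve(E + I, E - I)
```

Solving the linear system instead of explicitly inverting e^{2A} + I mirrors the practice described for Algorithm 1 below.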
There is an extensive literature (see, e.g., [4,20]) on the approximation of the matrix exponential function and the difficulties that its computation entails [21]. The most competitive methods used in practice are either those based on polynomial approximations or those based on Padé rational approaches, with the former, in general, being more accurate and having lower computational costs [22].
In recent years, different polynomial approaches to the matrix exponential function have been proposed, depending on the type of matrix polynomial used. For example, some approximations use the Hermite matrix polynomials [23], while others derive from Taylor polynomials [22,24]. More recently, a new method based on Bernoulli matrix polynomials was also proposed in [25].
All these methods use the scaling and squaring technique, based on the identity

e^A = (e^{2^{-s}A})^{2^s},   (4)

satisfied by the matrix exponential. In the scaling phase, an integer scaling factor s is taken, and the approximation of e^{2^{-s}A} is computed using any of the proposed methods, so that the required precision is obtained with the lowest possible computational cost. In the squaring phase, we obtain e^A by s repeated squaring operations.
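Identity (4) can be illustrated schematically as follows. This Python sketch is ours, not one of the cited algorithms: it uses a fixed-order truncated Taylor series for the scaled exponential (the order 12 and the function name are arbitrary choices), whereas the cited methods select the order and s adaptively.

```python
import numpy as np

def exp_scaling_squaring(A, s, m=12):
    """e^A via identity (4): Taylor-approximate e^{2^{-s}A}, square s times."""
    # Scaling phase: truncated Taylor series of e^X with X = 2^{-s} A.
    X = A / (2.0 ** s)
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, m + 1):
        term = term @ X / k   # X^k / k!
        E = E + term
    # Squaring phase: e^A = (e^{2^{-s}A})^{2^s}.
    for _ in range(s):
        E = E @ E
    return E
```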

The Taylor Series-Based Approach
The other possibility for computing the matrix hyperbolic tangent function is to use its Taylor series expansion

tanh(A) = Σ_{k≥1} [2^{2k}(2^{2k} - 1) B_{2k}/(2k)!] A^{2k-1},

where B_{2k} are the Bernoulli numbers. As in the case of the matrix exponential, it is highly recommended to use the scaling and squaring technique to reduce the norm of the matrix involved and, thus, to obtain a good approximation of the matrix hyperbolic tangent with an acceptable computational cost. The double angle formula for the matrix hyperbolic tangent function,

tanh(2A) = 2(I + tanh^2(A))^{-1} tanh(A),   (5)

which is derived from the scalar identity tanh(2x) = 2 tanh(x)/(1 + tanh^2(x)), makes it possible to compute T_s = tanh(A) by using the following recurrence:

T_0 = tanh(2^{-s}A),   T_i = 2(I + T_{i-1}^2)^{-1} T_{i-1},   i = 1, 2, ..., s.

Throughout this paper, we denote by σ(A) the set of eigenvalues of a matrix A ∈ C^{n×n} and by I_n (or I) the identity matrix of order n. In addition, ρ(A) refers to the spectral radius of A, defined as

ρ(A) = max{|λ| : λ ∈ σ(A)}.

With ⌈x⌉, we denote the value obtained by rounding x to the nearest integer greater than or equal to x, and ⌊x⌋ is the value obtained by rounding x to the nearest integer less than or equal to x. The matrix norm ||·|| will stand for any subordinate matrix norm and, in particular, ||·||_1 denotes the 1-norm.
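The recovering recurrence above can be sketched in a few lines of Python (an illustrative fragment of ours, under the assumption that T0 already approximates tanh(2^{-s}A); it is not the paper's MATLAB implementation):

```python
import numpy as np

def tanh_recover(T0, s):
    """Apply the double-angle recurrence (5):
    T_i = 2 (I + T_{i-1}^2)^{-1} T_{i-1}, for i = 1, ..., s."""
    I = np.eye(T0.shape[0])
    T = T0
    for _ in range(s):
        # Solve (I + T^2) X = 2T instead of inverting I + T^2.
        T = np.linalg.solve(I + T @ T, 2.0 * T)
    return T
```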
This work is organised as follows. First, Section 2 incorporates the algorithms corresponding to the different approaches previously described for approximating the matrix hyperbolic tangent and for computing the scaling parameter and the order of the Taylor polynomials. Next, Section 3 details the experiments carried out to compare the numerical properties of the codes to be evaluated. Finally, in Section 4, we present our conclusions.

The Matrix Exponential Function-Based Algorithm
The first algorithm designed, called Algorithm 1, computes the matrix hyperbolic tangent by means of the matrix exponential according to Formula (3). In Steps 1 and 2, Algorithm 2 from [26] is responsible for computing e^{2^{-s}B} by means of the Taylor approximation of order m_k, with B = 2A. In Step 3, T ≈ tanh(2^{-s}B) is worked out using Formula (3). In this phase, T is computed by solving a system of linear equations, with e^{2^{-s}B} + I being the coefficient matrix and e^{2^{-s}B} - I being the right-hand side term. Finally, through Steps 4-8, tanh(A) is recovered from T by using the squaring technique and the double angle Formula (5).

Taylor Approximation-Based Algorithms
Let

tanh(x) = Σ_{n≥1} [2^{2n}(2^{2n} - 1) B_{2n}/(2n)!] x^{2n-1}   (6)

be the Taylor series expansion of the hyperbolic tangent function, with radius of convergence r = π/2, where B_{2n} are the Bernoulli numbers, defined by the recursive expression

B_0 = 1,   B_n = -1/(n+1) Σ_{k=0}^{n-1} C(n+1, k) B_k,   n ≥ 1.

The following proposition is easily obtained: Let A ∈ C^{n×n} be a matrix satisfying ρ(A) < π/2. Then, the matrix hyperbolic tangent tanh(A) can be defined for A as the Taylor series

tanh(A) = Σ_{n≥1} [2^{2n}(2^{2n} - 1) B_{2n}/(2n)!] A^{2n-1}.   (7)

From [4] (Theorem 4.7), the series in Equation (7) converges if the distinct eigenvalues λ_1, λ_2, ..., λ_t of A satisfy |λ_i| < π/2.
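The Bernoulli recursion and the series coefficients in Equations (6) and (7) can be verified numerically with exact rational arithmetic. The following Python sketch is our own check (function names are ours); it reproduces the familiar scalar expansion tanh(x) = x - x^3/3 + 2x^5/15 - ...

```python
from fractions import Fraction
from math import comb, factorial

def bernoulli(m):
    """Bernoulli numbers B_0..B_m from the recursion
    sum_{k=0}^{n} C(n+1, k) B_k = 0 for n >= 1 (so B_1 = -1/2)."""
    B = [Fraction(1)]
    for n in range(1, m + 1):
        B.append(-sum(comb(n + 1, k) * B[k] for k in range(n)) / (n + 1))
    return B

def tanh_coeff(n, B):
    """Coefficient of x^{2n-1} in series (6):
    2^{2n} (2^{2n} - 1) B_{2n} / (2n)!."""
    return 2**(2*n) * (2**(2*n) - 1) * B[2*n] / factorial(2*n)
```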
To simplify the notation, we write the Taylor series (7) as tanh(A) = A P(B), with B = A^2, and we denote by

T_{2m+1}(A) = A P_m(B),   P_m(B) = Σ_{i=0}^{m} p_i B^i,   (8)

the Taylor approximation of order 2m + 1. There exist several alternatives that can be applied to obtain P_m(B), such as the Paterson-Stockmeyer method [27] or the Sastre formulas [28], with the latter being more efficient, in terms of matrix products, than the former.
Algorithm 2 works out tanh(A) by means of the Taylor approximation (8) of the scaled matrix 4^{-s}B. In addition, it uses the Paterson-Stockmeyer method for the matrix polynomial evaluation and, finally, it applies the recurrence in Equation (5) for recovering tanh(A).
Phase I of Algorithm 2 is in charge of estimating the integers m and s so that the Taylor approximation of the scaled matrix B is computed accurately and efficiently. Then, in Phase II, once the integer m_k has been chosen from the set M = {2, 4, 6, 9, 12, 16, 20, 25, 30, ...}, and the powers B^i, 2 ≤ i ≤ q, are calculated, with q = ⌈√m_k⌉ or q = ⌊√m_k⌋ as an integer divisor of m_k, the Paterson-Stockmeyer method computes P_{m_k}(B) with the necessary accuracy and with a minimal computational cost as

P_{m_k}(B) = (((p_{m_k} B^q + p_{m_k-1} B^{q-1} + ... + p_{m_k-q+1} B + p_{m_k-q} I) B^q + p_{m_k-q-1} B^{q-1} + ... + p_{m_k-2q+1} B + p_{m_k-2q} I) B^q + ...) B^q + p_{q-1} B^{q-1} + ... + p_1 B + p_0 I.   (9)

Taking into account Equation (9), the computational cost of Algorithm 2 is O((2k + 4 + 8s)/3 n^3) flops. Finally, in Phase III, the matrix hyperbolic tangent of matrix A is recovered by squaring and repeatedly solving a system of linear equations equivalent to Equation (5).
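The nested evaluation (9) can be sketched generically as follows. This Python fragment is our own illustration of the Paterson-Stockmeyer scheme (function name ours); it forms only the powers B^2, ..., B^q explicitly and then applies Horner's rule in B^q over blocks of q coefficients.

```python
import numpy as np

def paterson_stockmeyer(p, B, q):
    """Evaluate P(B) = sum_i p[i] * B^i by the Paterson-Stockmeyer
    scheme: Horner's rule in B^q over blocks of q coefficients."""
    n = B.shape[0]
    # Explicit powers B^0, B^1, ..., B^q.
    pw = [np.eye(n), B]
    for _ in range(2, q + 1):
        pw.append(pw[-1] @ B)
    m = len(p) - 1
    r = m // q                        # number of Horner steps in B^q
    # Highest-degree block, then fold in the remaining blocks.
    P = sum(p[r*q + i] * pw[i] for i in range(m - r*q + 1))
    for j in range(r - 1, -1, -1):
        P = P @ pw[q] + sum(p[j*q + i] * pw[i] for i in range(q))
    return P
```

Only about q + r - 1 matrix products are needed, versus m - 1 for naive Horner evaluation in B.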
With the purpose of evaluating P_m(B) in Equation (8) more efficiently than with the Paterson-Stockmeyer method used in Phase II of Algorithm 2, the formulas provided in [28] were taken into consideration in the design of Algorithm 3. Specifically, we use the evaluation formulas for Taylor-based matrix polynomial approximations of orders m = 8, 14+, and 21+ in a similar way to the evaluation described in [22] (Sections 3.1-3.3) for the matrix exponential function. Nevertheless, the Paterson-Stockmeyer method is still used for orders m = 2 and m = 4.
Following the notation given in [22] (Section 4) for an order m, the suffix "+" in m = 14+ and m = 21+ means that these Taylor approximations are more accurate than the approximations of order m = 14 and m = 21, respectively, since the former are composed of a few more polynomial terms. The coefficients of these additional terms are similar, but not identical, to the corresponding traditional Taylor approximation ones. It is convenient to clarify that we used the order m = 14+, instead of the order m = 15+, because we did not find a real solution for the coefficients of the corresponding evaluation formula of order m = 15+.
The evaluation formulas for order m = 8, which comprise the system of non-linear equations to be solved for determining the unknown coefficients c_i, i = 1, ..., 6, are given in Equation (10), where T_17(A), or A P_8(B), refers to the Taylor polynomial of order 17 of the function tanh(A) given by Equation (8).

Algorithm 3: Given a matrix A ∈ C^{n×n}, this algorithm computes T = tanh(A) by means of the Taylor approximation with the Sastre evaluation formulas; in its recovering phase, it solves for X the system of linear equations BX = 2T.

Regarding the non-linear equations for order m = 14+ and its unknown coefficients c_i, i = 1, ..., 13, we have Equation (11), where A P_14(B) represents the Taylor polynomial of order 29 of the function tanh(A) given by Equation (8). If we denote as p_15 and p_16 the Taylor polynomial coefficients corresponding to the powers B^15 and B^16, respectively, the relative errors of the coefficients b_15 and b_16 with respect to them were computed to two decimal digits. The evaluation formulas related to the system of non-linear equations for order m = 21+, with the coefficients c_i, i = 1, ..., 21, to be determined, are given in Equation (12), where A P_21 stands for the Taylor polynomial of order 43 of the function tanh(A) given by Equation (8). Likewise, the relative errors made by the coefficients b_22, b_23, and b_24 with respect to their corresponding Taylor polynomial coefficients p_22, p_23, and p_24, which accompany the respective powers B^22, B^23, and B^24, were computed to two decimal digits. Similarly to [22] (Sections 3.1-3.3), we obtained different sets of solutions for the coefficients in Equations (10)-(12) using the vpasolve function (https://es.mathworks.com/help/symbolic/vpasolve.html, accessed on 7 March 2020) from the MATLAB Symbolic Computation Toolbox with variable precision arithmetic. For the case of m = 21+, the random option of vpasolve was used, which allowed us to obtain different solutions for the coefficients after running it 1000 times. From all the sets of real solutions provided, we selected the most stable ones according to the stability check proposed in [28] (Ex. 3.2).

Polynomial Order m and Scaling Value s Calculation
The computation of m and s in Phase I of Algorithms 2 and 3 is based on the relative forward error of approximating tanh(A) by means of the Taylor approximation in Equation (8). This error, defined as

E_f = ||I - tanh(A)^{-1} T_{2m+1}(A)||,

can be expressed as a power series in B = A^2 and bounded in terms of the quantities β_k = ||B^k||^{1/k} (see Theorem 1.1 from [29]): the bound guarantees that E_f < u whenever β_k ≤ Θ_m, where u = 2^{-53} is the unit roundoff in IEEE double precision arithmetic. The values of Θ_m can be computed with any given precision by using symbolic computations, as shown in Tables 1 and 2, depending on the polynomial evaluation alternative selected.

Algorithm 4 provides the Taylor approximation order m_k ∈ M, lower ≤ k ≤ upper, where m_lower and m_upper are, respectively, the minimum and maximum orders used, and the scaling factor s, together with 2^{-s}A and the necessary powers of 4^{-s}B for computing T in Phase II of Algorithm 2.

Algorithm 4: Given a matrix A ∈ C^{n×n}, the values Θ from Table 1, a minimum order m_lower ∈ M, a maximum order m_upper ∈ M, with M = {2, 4, 6, 9, 12, 16, 20, 25, 30}, and a tolerance tol, this algorithm computes the order of the Taylor approximation m_k ∈ M, m_lower ≤ m_k ≤ m_upper, and the scaling factor s, together with 2^{-s}A and the necessary powers of 4^{-s}B for computing P_{m_k}(4^{-s}B) from (9).

In Steps 1-4 of Algorithm 4, the required powers of B for working out P_{m_k}(B) are computed. Then, in Step 5, β_k is obtained by using Algorithm 1 from [30].
As lim_{k→∞} ||B^k||^{1/k} = ρ(B), where ρ is the spectral radius of matrix B, then lim_{m→∞} |β_m - β_{m-1}| = 0. Hence, given a small tolerance value tol, Steps 6-16 test if there is a value β_k such that |β_k - β_{k-1}| < tol and β_k < Θ_k. In addition, the necessary powers of B for computing P_{m_k}(B) are calculated. Next, the scaling factor s ≥ 0 is provided in Step 17:

s = max(0, ⌈1/2 log_2(β_k/Θ_k)⌉),

so that 4^{-s} β_k ≤ Θ_k. With those values of m_k and s, we guarantee that

E_f(T_{2m_k+1}(2^{-s}A)) < u,   (13)

i.e., the relative forward error of T_{2m_k+1}(2^{-s}A) is lower than the unit roundoff u.
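Under the Θ-based bound described above, the scaling factor in Step 17 is the smallest s ≥ 0 with 4^{-s} β_k ≤ Θ_k. The following one-line Python helper is our own reconstruction of that computation (not the paper's MATLAB code):

```python
import math

def scaling_factor(beta, theta):
    """Smallest integer s >= 0 such that 4^{-s} * beta <= theta,
    i.e. s = max(0, ceil(0.5 * log2(beta / theta)))."""
    if beta <= theta:
        return 0
    return math.ceil(0.5 * math.log2(beta / theta))
```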
Step 18 checks whether the matrices A and B should be scaled or not. If so, the algorithm analyses the possibility of reducing the order of the Taylor polynomial, but at the same time ensuring that Equation (13) is verified (Steps 19-23). For this purpose, the scaling value corresponding to the order of the Taylor polynomial immediately below the one previously obtained is calculated as well. If both values are identical, the polynomial order reduction is performed. Once the optimal scaling parameter s has been determined, the matrices A and B are scaled (Steps 24-27).
Table 2. Values of Θ_{m_k}, 1 ≤ k ≤ 5, for polynomial evaluation using the Sastre formulas.

Numerical Experiments
The following codes were implemented in MATLAB to test the accuracy and the efficiency of the different algorithms proposed: tanh_expm, tanh_tayps, and tanh_pol. Three types of matrices with distinct features were used to build a battery of tests that enabled us to compare the numerical performance of these codes. The MATLAB Symbolic Math Toolbox, with 256 digits of precision, was used to compute the "exact" matrix hyperbolic tangent function using the vpa (variable-precision floating-point arithmetic) function. The test battery featured the following three sets:

(a) Diagonalizable complex matrices: one hundred diagonalizable 128 × 128 complex matrices obtained as the result of A = V · D · V^{-1}, where D is a diagonal matrix (with real and complex eigenvalues) and V is an orthogonal matrix, V = H/√n, where H is a Hadamard matrix and n is the matrix order. For the 1-norm, we have that 2.56 ≤ ||A||_1 ≤ 256. The matrix hyperbolic tangent was calculated "exactly" as tanh(A) = V · tanh(D) · V^T using the vpa function.

(b) Non-diagonalizable complex matrices: one hundred non-diagonalizable 128 × 128 complex matrices computed as A = V · J · V^{-1}, where J is a Jordan matrix with complex eigenvalues whose moduli are less than 5 and whose algebraic multiplicities are randomly generated between 1 and 4. V is a random orthogonal matrix with elements in the interval [-0.5, 0.5]. For the 1-norm, we obtained that 45.13 ≤ ||A||_1 ≤ 51.18. The "exact" matrix hyperbolic tangent was computed as tanh(A) = V · tanh(J) · V^{-1} by means of the vpa function.

(c) Matrices from the Matrix Computation Toolbox (MCT) [31] and from the Eigtool MATLAB Package (EMP) [32]: fifty-three matrices with dimension lower than or equal to 128 were chosen because of their highly different and significant characteristics. We decided to scale these matrices so that their 1-norm did not exceed 512. As a result, we obtained that 1 ≤ ||A||_1 ≤ 489.3.
The "exact" matrix hyperbolic tangent was calculated by using the two following methods together and the vpa function:
• Find a matrix V and a diagonal matrix D so that A = VDV^{-1}, by using the MATLAB function eig. In this case, tanh(A) = V · tanh(D) · V^{-1}.
• Compute the Taylor approximation of the hyperbolic tangent function (T_{2m+1}), with different polynomial orders (m) and scaling parameters (s). This procedure is finished when the obtained result is the same for the distinct values of m and s in IEEE double precision.
The "exact" matrix hyperbolic tangent was considered only if the results provided by the two methods coincided. Although MCT and EMP together comprise 72 matrices, only 53 of them, 42 from MCT and 11 from EMP, were considered for our purposes. On the one hand, matrix 6 from MCT and matrix 10 from EMP were rejected because the relative error made by some of the codes to be evaluated was greater than or equal to unity. This was due to the ill-conditioning of these matrices for the hyperbolic tangent function. On the other hand, matrices 4, 12, 17, 18, 23, 35, 40, 46, and 51 from MCT and matrices 7, 9, 16, and 17 from EMP were not generated because they did not satisfy the described criterion to obtain the "exact" matrix hyperbolic tangent. Finally, matrices 8, 13, 15, and 18 from EMP were discarded as they are also part of MCT.
For each of the three previously mentioned sets of matrices, one test was independently carried out, each corresponding to an experiment to analyse the numerical properties and to account for the computational cost of the different implemented codes. All these experiments were run on an HP Pavilion dv8 Notebook PC with an Intel Core i7 CPU Q720 @1.60 GHz processor and 6 GB of RAM, using MATLAB R2020b.

First, Table 3 shows the percentage of cases in which the normwise relative errors of tanh_expm are lower than, greater than, or equal to those of tanh_tayps and tanh_pol. These normwise relative errors were obtained as

Er = ||tanh(A) - t̃anh(A)||_1 / ||tanh(A)||_1,

where tanh(A) represents the exact solution and t̃anh(A) stands for the approximate one. As we can appreciate, from the point of view of the accuracy of the results, tanh_tayps outperformed tanh_expm in 68% of the matrices for Test 1 and in 100% and 77.36% of them for Tests 2 and 3, respectively. On the other hand, tanh_pol obtained slightly more modest results, with improvement percentages equal to 56%, 100%, and 69.81% for Tests 1, 2, and 3, respectively.

Table 4 reports the computational complexity of the algorithms. This complexity was expressed as the number of matrix products required to calculate the hyperbolic tangent of the different matrices that make up each of the test cases. This number of products includes the matrix multiplications and the cost of the systems of linear equations solved in the recovering phase by all the codes, together with one more product in Step 2 of Algorithm 1 for tanh_expm.
The cost of each system of linear equations with n right-hand side vectors, where n denotes the size of the square coefficient matrices, was taken as 4/3 matrix products. The cost of other arithmetic operations, such as the sum of matrices or the product of a matrix by a vector, was not taken into consideration. As can be seen, the lowest computational cost corresponded to tanh_expm, followed by tanh_pol and tanh_tayps. For example, tanh_expm demanded 1810 matrix products to compute the matrices belonging to Test 1, compared with 1847 for tanh_pol and 2180 for tanh_tayps.

Respectively for the three experiments, Figures 1-3 illustrate the normwise relative errors (a), the performance profiles (b), the ratio of the relative errors (c), the lowest and highest relative error rates (d), the ratio of the matrix products (e), and the polynomial orders (f) for the three codes to be evaluated. Figures 1a, 2a and 3a correspond to the normwise relative error. As they reveal, the three methods under evaluation were numerically stable for all the matrices that were computed, and all of them provided very accurate results, with relative errors always less than 10^{-11}. The solid line that appears in these figures traces the function κ_tanh·u, where κ_tanh (or cond) stands for the condition number of the matrix hyperbolic tangent function [4] (Chapter 3), and u represents the unit roundoff.
It is clear that the errors incurred by all the codes were usually quite close to this line for the three experiments, and even below it, as was largely the case for Tests 1 and 3. For the sake of readability, normwise relative errors lower than 10^{-20} were plotted with that value in the figures; notwithstanding, their original quantities were maintained for the rest of the results. Figures 1b, 2b and 3b depict the performance profiles of the codes. In them, the α coordinate on the x-axis ranges from 1 to 5 in steps of 0.1. For a concrete value of α, the p coordinate on the y-axis indicates the probability that the considered algorithm has a relative error lower than or equal to α times the smallest relative error over all the methods on the given test.
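The performance profile just described can be computed directly from the error data. The following Python sketch is our own generic illustration (not the plotting code used for the figures):

```python
import numpy as np

def performance_profile(errors, alphas):
    """errors[i, j]: relative error of method i on problem j.
    Returns p[i, k]: fraction of problems where method i's error is
    within alphas[k] times the smallest error over all methods."""
    best = errors.min(axis=0)
    return np.array([[np.mean(errors[i] <= a * best) for a in alphas]
                     for i in range(errors.shape[0])])
```

A method whose curve reaches p = 1 at small α is, in this sense, the most reliable on the test set.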
The implementation of tanh_tayps always achieved the results with the highest accuracy, followed closely by tanh_pol. For Tests 1 and 2, Figures 1b and 2b indicate that the results provided by them are very similar, although the difference in favour of tanh_tayps is more remarkable in Test 3. As expected from the percentages given in Table 3, tanh_expm computed the hyperbolic tangent function with the worst accuracy, most notably for the matrices from Tests 2 and 3.

More precisely, the relationship among the normwise relative errors incurred by the codes under examination is displayed in Figures 1c, 2c and 3c. All these ratios are presented in decreasing order with respect to Er(tanh_tayps)/Er(tanh_expm). This factor is less than 1 for the great majority of the matrices, which indicates that tanh_tayps and tanh_pol are more accurate than tanh_expm for estimating the hyperbolic tangent function.
These data are further corroborated by the results shown in Figures 1d, 2d and 3d. These graphs report the percentages of matrices, for each of the tests, in which each code resulted in the lowest or highest normwise relative error among the errors provided by all of them. Thus, tanh_tayps exhibited the smallest relative error in 44% of the matrices for Test 1 and in 47% of them for Test 3, followed by tanh_pol in 29% and 36% of the cases, respectively. For all the other cases, tanh_expm was the most reliable method. For Test 2, tanh_pol was the most accurate code in 53% of the matrices, and tanh_tayps was the most accurate in 47% of them. In line with these results, tanh_expm was found to be the approach that led to the largest relative errors in 51% of the cases in Test 1, in 100% of them in Test 2, and in 64% for Test 3.

In contrast, although tanh_expm proved to be the most inaccurate code, it is also evident that its computational cost was usually the lowest one, as Table 4 and Figures 1e, 2e and 3e report. The ratio between the number of tanh_tayps and tanh_expm matrix products ranged from 0.82 to 1.43 for Test 1, was equal to 1.20 for Test 2, and ranged from 0.82 to 1.8 for Test 3. Regarding tanh_pol and tanh_expm, this quotient varied from 0.82 to 1.13 for Test 1, was equal to 1 for Test 2, and varied from 0.68 to 1.2 for Test 3.

Table 5 lists, in order for Tests 1, 2, and 3, the minimum, maximum, and average values of the degree m of the Taylor polynomials and the scaling parameter s employed by the three codes. Additionally, and in a more detailed way, Figures 1f, 2f and 3f illustrate the order of the polynomial used in the calculation of each of the matrices that compose the testbed. The value of m allowed to be chosen was between 2 and 30 for tanh_expm and tanh_tayps, and between 2 and 21 for tanh_pol. As we can see, the average value of m typically varied between 25 and 30 for tanh_expm.
It was around 25 for tanh_tayps, and it ranged from 14 to 21 for tanh_pol.

Having concluded the first part of the experimental results, we continue by comparing tanh_tayps and tanh_pol, the Taylor series-based codes that returned the best accuracy in the results. Table 6 presents the percentage of cases in which tanh_tayps yielded relative errors lower than, greater than, or equal to those of tanh_pol. According to the exposed values, both methods provided very similar results, and the percentage of cases in which each method was more accurate than the other was approximately equal to 50% for the different tests. Figures 4-6 incorporate the normwise relative errors (a), the ratio of relative errors (b), and the ratio of matrix products (c) between tanh_tayps and tanh_pol. The graphs corresponding to the performance profiles and the polynomial orders are not included now, since the results match those already shown in the previous figures. All this information is also complemented by Table 7, which collects, respectively for each test, the minimum, maximum, and average relative errors incurred by both methods, together with the standard deviation.

As Table 6 details, for Tests 1 and 3, tanh_tayps slightly improved on tanh_pol in the percentage of matrices in which the relative error committed was lower, although it is true that the difference between the results provided by the two methods was small in most cases. However, when such a difference occurred, it was more often in favour of tanh_tayps than the other way around, in quantitative terms.
With all this, we can also appreciate that the mean relative error and the standard deviation incurred by tanh_pol were lower than those of tanh_tayps. For the matrices that were part of Test 2, the numerical results achieved by both methods were almost identical, and the differences between them were not significant.

To conclude the analysis, with regard to the computational cost of both codes, as depicted in Figures 4c, 5c and 6c, it simply remains to note that tanh_tayps performed between 1 and 1.43 times as many matrix products as tanh_pol for Test 1, 1.2 times as many for Test 2, and between 1.06 and 1.5 times as many for Test 3. Therefore, tanh_pol computed the matrix hyperbolic tangent function with very similar accuracy in the results compared with tanh_tayps, but with a considerably lower computational cost.

Conclusions
Two alternative methods to approximate the matrix hyperbolic tangent were addressed in this work. The first was derived from its own definition and reduced to the computation of a matrix exponential. The second method dealt with its Taylor series expansion. In this latter approach, two alternatives were developed, which differed in how the evaluation of the matrix polynomials was performed. In addition, we provided algorithms to determine the scaling factor and the order of the polynomial. As a result, we dealt with a total of three MATLAB codes (tanh_expm, tanh_tayps, and tanh_pol), which were evaluated on a complete testbed populated with matrices of three different types.
The Taylor series-based codes reached the most accurate results in the tests, a fact that is in line with the recommendation in [33] of using a Taylor development over other alternatives whenever possible. However, codes based on Taylor series can be computationally expensive if the Paterson-Stockmeyer method is employed to evaluate the polynomial, as we confirmed with the tanh_tayps implementation. One way to reduce this problem is to use the Sastre formulas, as we did in our tanh_pol code, resulting in an efficient alternative that significantly reduced the number of matrix operations without affecting the accuracy.
The results found in this paper demonstrated that the three codes were stable and provided acceptable accuracy. However, and without underestimating the other two codes, the tanh_pol implementation proposed here offered the best ratio of accuracy/computational cost and proved to be an excellent method for the computation of the matrix hyperbolic tangent.

Conflicts of Interest:
The authors declare no conflict of interest.