Using Matrix Eigenvalues to Construct an Iterative Method with the Highest Possible Efficiency Index Two

Abstract: In this paper, we first derive a family of iterative schemes of fourth order. A weight function is used to maintain the optimality of the family. We then transform it into methods with several self-accelerating parameters to reach the highest possible convergence rate 8. To this end, we employ the eigenvalues of certain matrices together with the with-memory technique. Solving several nonlinear test equations shows that the proposed variants attain a computational efficiency index of two (the maximum amount possible) in practice.


Introduction
This paper is concerned with the numerical solution of nonlinear problems having the structure g(x) = 0. In fact, we look at iterative approaches to solving such nonlinear problems. It is well known that the celebrated Newton's scheme

x_{k+1} = x_k − g(x_k)/g'(x_k),

has local quadratic convergence for simple roots, while per iteration it requires one evaluation of the function and one of its first derivative. For some applications, one may refer to [1,2].
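As a concrete illustration, Newton's scheme above can be sketched in a few lines; the test function g(x) = x^2 − 2 and the starting point are illustrative choices, not taken from this paper's test set.

```python
def newton(g, dg, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{k+1} = x_k - g(x_k)/dg(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = g(x) / dg(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Illustrative example: the root sqrt(2) of g(x) = x**2 - 2, from x0 = 1.5.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
```

Each iteration costs one evaluation of g and one of g', matching the operation count stated above.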
A wide range of problems which are not related to nonlinear equations at first sight can be expressed as finding the solution of nonlinear equations in special spaces (e.g., in operator form). For example, finding approximate-analytic solutions to nonlinear stochastic differential equations [3] is possible via Chaplygin-type solvers, which are in fact Newton iterations in an appropriate operator environment imposed for solving such equations [4].
Let us recall that the efficiency index of an iterative scheme [5] is E = p^{1/d}, where d is the number of functional evaluations per cycle and p is the convergence order. Besides, for multi-point without-memory iterative schemes, the optimal convergence order is 2^{n−1}, requiring n functional evaluations per cycle [6]. Now some definitions are given which will be used later in this work. A famous fourth-order two-point without-memory method is King's method, given by [7]:

y_k = x_k − g(x_k)/g'(x_k),
x_{k+1} = y_k − (g(y_k)/g'(x_k)) · (g(x_k) + γ g(y_k))/(g(x_k) + (γ − 2) g(y_k)),  γ ∈ R. (1)

The authors of [8] built the following three-step scheme based on Ostrowski's method and the technique of the Chun weight function [9]:

y_k = x_k − g(x_k)/g'(x_k),
z_k = y_k − (g(y_k)/g'(x_k)) · (g(x_k) + γ g(y_k))/(g(x_k) + (γ − 2) g(y_k)),  γ ∈ R,
x_{k+1} = z_k − u(t_k) · (g[x_k, y_k] g(z_k))/(g[x_k, z_k] g[z_k, y_k]),  t_k = g(z_k)/g(x_k),  u(0) = u'(0) = 1. (2)

Ostrowski's method, proposed in [5], has fourth order of convergence and reads as follows:

y_k = x_k − g(x_k)/g'(x_k),
x_{k+1} = y_k − (g(y_k)/g'(x_k)) · g(x_k)/(g(x_k) − 2 g(y_k)). (3)

Using the divided difference g[x_k, y_k] = (g(x_k) − g(y_k))/(x_k − y_k), this method can be rewritten as follows:

x_{k+1} = y_k − g(y_k)/(2 g[x_k, y_k] − g'(x_k)). (4)

This method supports the Kung–Traub optimality conjecture on the highest possible convergence order for methods without memory. Accordingly, the efficiency indices of Newton's and Ostrowski's methods are 2^{1/2} ≈ 1.414 and 4^{1/3} ≈ 1.587, respectively. In this work, we turn the famous Ostrowski method into a family of Steffensen-like methods, ref. [10]. This technique eliminates the disadvantage of calculating the derivative of the function. A family of optimal two-step methods with three self-accelerating parameters is obtained, which uses the weight function technique to maintain the optimality of the without-memory methods. In addition, the matrix eigenvalue technique is employed to prove the convergence order of the proposed methods.
To explain the motivation of the current manuscript clearly, we should address why we need such high-precision results, and for which applications. The answer is that we mostly do not need high precision; the current study is chiefly useful from a theoretical point of view, as it proposes a general family of methods with memory that attains a 100% order improvement without any additional functional evaluations. From the application point of view, we employ multiple-precision arithmetic in the numerical simulations only to verify the order of convergence in the numerical tests. In applications, a method with a higher order of convergence clearly reaches its convergence radius faster and gives the final solution in reasonable time.
We describe the structure of the modified two-step without-memory Ostrowski methods in Section 2. The improvement of the convergence rate of this family is attained by employing several self-accelerating parameters. Such parameters are computed in each loop from the information of the current and the previous iterations, which accelerates the convergence without further functional evaluations. The efficiency index of the new method is two (the highest efficiency index available). The theoretical proof is presented in Section 3. Computational pieces of evidence are brought forward in Section 4 and uphold the analytical results. Finally, we provide concluding remarks in Section 5.

Derivation of Methods and Convergence Analysis
By looking at relation (4), it can be seen that this method uses the derivative of the function in the first and second steps; hence, the two-point family (4) achieves fourth order of convergence employing only three functional evaluations (viz., g(x_k), g(y_k), and g'(x_k)) per full iteration. To derive new methods, we approximate g'(x_k) appearing in the first step of (4) as follows:

g'(x_k) ≈ g[x_k, w_k],  w_k = x_k + β g(x_k),  β ∈ R, β ≠ 0. (5)

In what follows, the derivative g'(x_k) in the second step will be estimated via g[y_k, w_k] H(t_k), where H is a differentiable weight function that relies on the real variable t_k = g(y_k)/g(x_k). Thus, starting from the scheme (4) and the approximation (5), we obtain the following two-point method:

w_k = x_k + β g(x_k),
y_k = x_k − g(x_k)/g[x_k, w_k],
x_{k+1} = y_k − g(y_k)/(2 g[x_k, y_k] − g[y_k, w_k] H(t_k)). (6)

The next theorem specifies the weight function and the conditions under which the convergence rate of (6) achieves the optimal order four.
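The Steffensen-style approximation (5) can be checked numerically; the cubic test function, the point x = 1.5, and the value β = 0.01 below are illustrative choices.

```python
def divided_difference(g, a, b):
    """First-order divided difference g[a, b] = (g(a) - g(b)) / (a - b)."""
    return (g(a) - g(b)) / (a - b)

# Derivative-free estimate of g'(x): set w = x + beta*g(x), use g[x, w].
g = lambda x: x**3 - x - 2.0
dg = lambda x: 3.0 * x**2 - 1.0   # exact derivative, for comparison only
x, beta = 1.5, 0.01
w = x + beta * g(x)
approx = divided_difference(g, x, w)   # close to dg(x) = 5.75
```

The smaller β|g(x)| is, the closer g[x_k, w_k] is to g'(x_k), which is why no derivative evaluation is needed in (6).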

Theorem 1.
Let the function g : D ⊂ R → R have a simple root x* in the open interval D. As long as the starting point x_0 is close enough to the exact root, the sequence {x_k} obtained via (6) tends to x*. Moreover, if H is a real function satisfying H(0) = 1, H'(0) = −1, |H''(0)| < ∞, and β ≠ 0, then method (6) attains fourth order of convergence.
Proof. The proof of this theorem is similar to the proofs of convergence order for related schemes in the literature; see, e.g., ref. [11]. It is hence omitted, and we state only the final error equation, which is expressed in terms of the derivatives of g at x* and the quantities h_0 = H(0), h_1 = H'(0), h_2 = H''(0). Hence, the fourth-order convergence is established. The proof is ended.
Some of the functions that satisfy Theorem 1 are, for instance, H(t) = 1 − t, H(t) = 1/(1 + t), and H(t) = e^t − 2t.
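The admissibility conditions of Theorem 1 are easy to verify numerically; the sketch below checks H(0) = 1 and H'(0) = −1 by a central finite difference for the three sample weight functions listed above.

```python
import math

def check_weight(H, eps=1e-6, tol=1e-4):
    """Verify H(0) = 1 and H'(0) = -1 numerically (central difference)."""
    H0 = H(0.0)
    dH0 = (H(eps) - H(-eps)) / (2.0 * eps)
    return abs(H0 - 1.0) < tol and abs(dH0 + 1.0) < tol

candidates = [lambda t: 1.0 - t,
              lambda t: 1.0 / (1.0 + t),
              lambda t: math.exp(t) - 2.0 * t]
results = [check_weight(H) for H in candidates]   # all True
```

By contrast, a function such as H(t) = 1 + t fails the test, since H'(0) = +1.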
By considering a new accelerating parameter γ, a two-parameter two-step method (9) can be obtained. The method (6) can also be converted into a three-parameter without-memory method (10) by adding two further self-accelerating parameters. Theorem 2. Under conditions similar to those of Theorem 1, the iterative methods (9) and (10) have fourth order of convergence and satisfy the error equations (11) and (12), respectively. Proof. This is proved as in Theorem 1; hence, it is omitted.
We also note here that (12) can be rewritten as follows:

One-Parametric Method
It is observed from the error Equation (7) that the convergence order of the presented method (6) exceeds four when β = −1/g'(x*). Since g'(x*) is not available, we approximate the parameter β by β_k = −1/N'_3(x_k), where N_3 is Newton's interpolating polynomial of degree three built from the available nodes of the current and previous iterations, as in (14). Combining (6) with (14), one is able to propose a family of two-point Ostrowski–Steffensen-type methods with memory, given in (16). Theorem 3. Under conditions similar to those of Theorem 1, as long as β_k in (16) is computed recursively by (14), the convergence R-order is six.
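The self-accelerating idea can be illustrated with a deliberately simplified one-parameter Steffensen-type sketch: β is refreshed each cycle from the latest divided difference, which tends to −1/g'(x*). This is an illustration of the with-memory principle only, not the paper's full scheme (16) with its Newton-interpolation accelerator.

```python
def steffensen_with_memory(g, x0, beta0=0.01, tol=1e-12, max_iter=50):
    """Illustrative one-parameter method with memory:
    w_k = x_k + beta_k * g(x_k),  x_{k+1} = x_k - g(x_k) / g[x_k, w_k],
    with the accelerator update beta_{k+1} = -1 / g[x_k, w_k]."""
    x, beta = x0, beta0
    for _ in range(max_iter):
        gx = g(x)
        if gx == 0.0:
            return x
        w = x + beta * gx
        dd = (g(w) - gx) / (w - x)   # g[x_k, w_k], approximates g'(x_k)
        x_next = x - gx / dd
        beta = -1.0 / dd             # memory: reuse dd as accelerator
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

root = steffensen_with_memory(lambda x: x**3 - x - 2.0, 1.5)
```

The accelerator costs no extra functional evaluations, since g[x_k, w_k] is already computed within the cycle; this is exactly the "free" order improvement exploited by (16).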
Proof. The matrix approach, discussed initially in [12], is used to obtain the convergence rate of such an accelerated method. Recall that for a single-step s-point procedure with memory, a lower bound on the R-order is given by the spectral radius of an associated matrix M^(s) = (m_ij) whose entries encode how the error at each new point depends on the errors at the points x_k, …, x_{k−s}. The spectral radius of the product M = M_1 M_2 ⋯ M_s is then a lower bound on the R-order of the s-step method ϕ = ϕ_1 ∘ ϕ_2 ∘ ⋯ ∘ ϕ_s. We can state each of the estimates x_{k+1}, y_k, and w_k as a function of the available information g(x_k), g(y_k), g(w_k) from the k-th iterate and g(x_{k−1}), g(y_{k−1}), g(w_{k−1}) from the previous iterate. From the relations (16) and (17), we create the corresponding matrices; for their product, the eigenvalues are (6, 0, 0, 0). It follows that the convergence rate of the with-memory method (16) is six. The proof is complete now.
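The eigenvalue technique itself is easy to demonstrate on a classical case: the secant method's error recursion e_{k+1} ~ e_k · e_{k−1} has exponent matrix [[1, 1], [1, 0]], whose spectral radius (1 + √5)/2 is the well-known R-order. The sketch below (not the paper's 4×4 matrices, which are omitted in the text above) computes this spectral radius from the characteristic polynomial.

```python
import math

def spectral_radius_2x2(a, b, c, d):
    """Largest |eigenvalue| of [[a, b], [c, d]], from the characteristic
    polynomial x**2 - (a + d)*x + (a*d - b*c)."""
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4.0 * det
    if disc >= 0.0:
        r = math.sqrt(disc)
        return max(abs((tr + r) / 2.0), abs((tr - r) / 2.0))
    return math.sqrt(det)  # complex conjugate pair: |lambda| = sqrt(det)

# Secant method: e_{k+1} ~ e_k * e_{k-1}  ->  exponent matrix [[1,1],[1,0]].
order = spectral_radius_2x2(1.0, 1.0, 1.0, 0.0)   # golden ratio
```

The proof of Theorem 3 applies the same idea to the 4×4 product matrix built from (16) and (17), whose dominant eigenvalue is 6.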

Two-Parametric Method
Now, similar to the prior case, we build from (9) the derivative-free method with memory (18). Theorem 4. Let x_0 be an initial approximation close enough to the root x* of g(x) = 0. If the parameters β_k and γ_k are computed recursively, then the convergence R-order of (18) is at least 7.
Proof. Using the appropriate matrices as in the proof of Theorem 3 and substituting them into the target product matrix, we obtain that (18) has seventh order of convergence. In fact, the proof is similar to that of Theorem 3 and is hence omitted.

Tri-Parametric Method
The method (10) with memory can be expressed as the iteration (19), k = 1, 2, 3, ⋯. Now, we establish a theorem determining the convergence rate of (19).

Theorem 5.
With the hypotheses of Theorem 3, and provided that the three parameters β_k, γ_k, and λ_k are recursively calculated in (19), the convergence rate of the with-memory method suggested in (19) is 7.53.
Proof. From the relation (19), and similar to the argument used in Theorem 3, we construct the corresponding matrix and compute its spectral radius. We present the remaining stages of the construction, up to the maximum order improvement (i.e., 100%), as follows. (I) We first consider the three-parameter iterative method (20), k = 2, 3, 4, ⋯. Theorem 6. Under the same conditions as in Theorems 1 and 5, (20) converges to x* with convergence rate 7.77.
Proof. In a similar fashion, one obtains the result by the matrix technique. (II) Next, we study the tri-parametric iterative method (21). Theorem 7. With the hypotheses of Theorem 3, and provided that the three parameters β, γ, and λ are recursively calculated in (21), (21) converges to x* with convergence order 7.89.

Proof.
Proving this theorem is similar to that of Theorem 3.
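A remark on the fractional rates above: the values 7.53, 7.77, and 7.89, together with the final order 8, are consistent with error recursions of the form e_{k+1} ~ e_k^7 · e_{k−1}^q for q = 4, 6, 7, 8, whose R-order is the positive root of p^2 − 7p − q = 0. This reading of the stated rates is our assumption; the paper's exact recursions are elided in the text above.

```python
import math

def r_order(q):
    """Positive root of p**2 - 7*p - q = 0, i.e., the R-order implied by
    an error recursion e_{k+1} ~ e_k**7 * e_{k-1}**q (assumed form)."""
    return (7.0 + math.sqrt(49.0 + 4.0 * q)) / 2.0

# q = 4, 6, 7, 8 reproduces the quoted rates 7.53, 7.77, 7.89, and 8.
orders = {q: round(r_order(q), 2) for q in (4, 6, 7, 8)}
```

Under this assumption, each additional accelerator raises the memory exponent q, and q = 8 closes the gap to the integer order 8 claimed in the abstract.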

Numerical Results
The principal purpose of the numerical examples is to verify the validity of the theoretical developments on a variety of test problems using high-accuracy computations. All computations were performed in Mathematica 11 [13].
In the tables, the abbreviations Div, TNE, and Iter are used as follows: Div denotes divergence; TNE is the total number of functional evaluations required for a method to perform the specified iterations; Iter is the number of iterations. We report the errors |x_k − α| of the approximations to the simple zeros of g_i(x), i = 1, 2, 3. The computational order of convergence (r_c) [14] can be calculated via

r_c ≈ ln(|x_{k+1} − x_k| / |x_k − x_{k−1}|) / ln(|x_k − x_{k−1}| / |x_{k−1} − x_{k−2}|).

We check the effectiveness of the new without- and with-memory methods, employing the presented methods (6), (16), (18), (19), and (24) against existing schemes from the literature [29].
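The computational order of convergence r_c defined above can be evaluated from the last four iterates of any run; the sketch below applies it to Newton iterates for x^2 − 2 (an illustrative test, not one of the paper's g_i), recovering the expected order 2.

```python
import math

def coc(xs):
    """Computational order of convergence r_c from the last four iterates:
    r_c = ln(|x_{k+1}-x_k| / |x_k-x_{k-1}|)
        / ln(|x_k-x_{k-1}| / |x_{k-1}-x_{k-2}|)."""
    e2 = abs(xs[-1] - xs[-2])
    e1 = abs(xs[-2] - xs[-3])
    e0 = abs(xs[-3] - xs[-4])
    return math.log(e2 / e1) / math.log(e1 / e0)

# Newton iterates for g(x) = x**2 - 2, x0 = 1.5 (quadratic convergence).
xs = [1.5]
for _ in range(3):
    x = xs[-1]
    xs.append(0.5 * (x + 2.0 / x))
order = coc(xs)   # close to 2
```

In double precision only low orders can be observed before the error hits machine epsilon; this is precisely why the paper resorts to multiple-precision arithmetic to confirm orders as high as 8.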
In Tables 1-7, we show the numerical results obtained by applying the different methods with memory for approximating the solutions of g_i(x) = 0, i = 1, 2, 3. Here, ≈ stands for an approximation of the solution, written only to provide an overview of it. All calculations are performed using 2000-digit floating-point arithmetic in Wolfram Mathematica; that is, we care about numbers of very small magnitude and do not allow the programming package to treat them as zero automatically. Higher orders can only be observed in the convergence phase, and more clearly in high-precision computing. The numerical results shown in Tables 1-7 confirm the theoretical discussions and the efficiency of the proposed scheme under different choices of the weight functions. [Table residue: computational orders of convergence equal to 6.00 for the sixth-order method across all tests.] Table 8. Comparison of the improvement of convergence order of the proposed method with other schemes.

We end this section by pointing out that the extension of our methods to systems of nonlinear equations (see some applications of nonlinear systems in [31-33]) requires the computation of a divided difference operator (DDO), which would be a dense matrix. This dense structure of the DDO matrix restricts the usefulness of such methods. Because of this, we consider our proposed methods only for the scalar case.

Conclusions
In this paper, we have used the idea of the weight function and have turned Ostrowski's method into a method of optimal order. We have constructed with-memory methods using the same number of evaluations and without requiring the calculation of a derivative of the function. Then, by means of accelerating parameters and the eigenvalues of matrices, we created with-memory methods of higher orders. With interpolatory accelerating parameters, the methods with memory reached up to a 100% improvement of the convergence order. Numerical tests verified the better performance of the proposed methods over the others. Employing such an efficient numerical scheme for practical problems in solving stochastic differential equations [34] is worth investigating in future work.

Data Availability Statement: Data sharing is not applicable to this article, as no new data were used in this study. All data used have been clearly mentioned in the text.