Sixteenth-Order Optimal Iterative Scheme Based on Inverse Interpolatory Rational Function for Nonlinear Equations

Abstract: The principal motivation of this paper is to propose a general scheme that is applicable to every existing multi-point optimal eighth-order method/family of methods to produce a further sixteenth-order scheme. By adopting our technique, we can extend all the existing optimal eighth-order schemes whose first sub-step employs Newton's method to sixteenth-order convergence. The developed technique has an optimal convergence order in the sense of the classical Kung–Traub conjecture. In addition, we fully investigate the computational and theoretical properties, along with a main theorem that establishes the convergence order and the asymptotic error constant. By using Mathematica 11 with its high-precision computability, we checked the efficiency of our methods and compared them with existing robust methods of the same convergence order.


Introduction
The formation of high-order multi-point iterative techniques for the approximate solution of nonlinear equations has always been a crucial problem in computational mathematics and numerical analysis. Such methods provide an effective approximate solution, up to a specified degree of accuracy, of the nonlinear equation Ω(x) = 0, where Ω : C → C is a holomorphic map/function in a neighborhood of the required zero ξ. A certain recognition has been given to the construction of sixteenth-order iterative methods in the last two decades. There are several reasons behind this, among them advanced digital computer arithmetic, symbolic computation, the desired accuracy of the required solution within a small number of iterations, smaller residual errors, CPU time, a smaller difference between two consecutive iterations, etc. (for more details please see Traub [1] and Petković et al. [2]).
We have a handful of optimal iterative methods of order sixteen [3][4][5][6][7][8][9]. Most of these methods are improvements or extensions of classical methods, e.g., Newton's method, Newton-like methods or Ostrowski's method, at the cost of additional functional values and/or first-order derivatives, or of extra sub-steps added to the native schemes.
In addition, to our knowledge we have very few techniques [5,10] that are applicable to every optimal 8-order method (whose first sub-step employs Newton's method) to obtain a further optimal 16-order convergent scheme. Presently, general schemes applicable to every iterative method of a particular order to obtain further high-order methods are more important than obtaining a high-order version of a single native method. Finding such general schemes is a more attractive and harder task in the area of numerical analysis.
Therefore, in this manuscript we pursue the development of a scheme that is applicable to every optimal 8-order scheme whose first sub-step is the classical Newton's method, in order to obtain further optimal 16-order convergence, rather than applying the technique only to a certain method. The construction of our technique is based on the rational approximation approach. The main advantage of the constructed technique is that it is applicable to every optimal 8-order scheme whose first sub-step employs Newton's method. Therefore, we can choose any iterative method/family of methods from [5,[11][12][13][14][15][16][17][18][19][20][21][22][23][24][25], etc. to obtain a further 16-order optimal scheme. The effectiveness of our technique is illustrated by several numerical examples, and it is found that our methods produce superior results compared to the existing optimal methods of the same convergence order.

Construction of the Proposed Optimal Scheme
Here, we present an optimal 16-order general iterative scheme that is the main contribution of this study. In this regard, we consider a general 8-order scheme, defined as follows: where φ 4 and ψ 8 are optimal schemes of order four and eight, respectively. We adopt Newton's method as a fourth sub-step to obtain a 16-order scheme, given by which is non-optimal with respect to the conjecture given by Kung and Traub [5], because it uses six functional values at each step. We can decrease the number of functional values with the help of the following third-order rational function γ(x) where the values of the disposable parameters b i (1 ≤ i ≤ 5) can be found with the help of the following tangency constraints Then, the last sub-step iteration is replaced by one that does not require Ω'(t r ). Expressions (2) and (6) yield an optimal sixteenth-order scheme. It is vital to note that the function γ(x) in (4) plays a significant role in the construction of an optimal 16-order scheme.
In this paper, we adopt a different last sub-step iteration, defined as follows: where Q Ω can be considered a correction term, naturally called an "error corrector". A last sub-step of this type is handier for the convergence analysis and additionally for the study of dynamics through basins of attraction. The easy way of obtaining such a fourth sub-step iteration with a feasible error corrector is to apply the Inverse Function Theorem [26] to (5). Since ξ is a simple root (i.e., γ'(ξ) ≠ 0), there is a unique map τ(x) satisfying γ(τ(x)) = x in a certain neighborhood of γ(ξ). Hence, we adopt such an inverse map τ(x) to obtain the needed last sub-step of the form (7) instead of using γ(x) in (5).
With the help of the Inverse Function Theorem, we obtain the final sub-step iteration from expression (5): where b i , i = 1, 2, . . . , 5 are disposable constants. We can find them by adopting the following tangency conditions One should note that the rational function on the right side of (8) is regarded as an error corrector. Indeed, the desired last sub-step iteration (8) is obtained using the inverse interpolatory function approach meeting the tangency constraints (9). Clearly, the last sub-step iteration (6) is more suitable than (3) for the error analysis. It remains to determine the parameters b i (1 ≤ i ≤ 5) in (8). By using the first two tangency conditions, we obtain By adopting the last three tangency constraints and expression (10), we have the following three independent relations which further yield where Let us consider that the rational function (8) cuts the x-axis at x = x r+1 , in order to obtain the next estimate x r+1 . Then, we obtain which further yields, by using the above values of b 1 , b 2 and b 3 , where Finally, by using expressions (2) and (14), we have where θ 2 and θ 3 are defined earlier. We show in Theorem 1 that the convergence order reaches the optimal value sixteen without adopting any additional functional evaluations. It is vital to note that only the coefficients A 0 and B 0 from φ 4 (x r , w r ) and ψ 8 (x r , w r , z r ), respectively, contribute to the development of the asymptotic error constant, as can be seen in Theorem 1.
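To make the four-sub-step pattern above concrete, the following Python sketch implements one step of it, with the last sub-step realized by inverse Hermite interpolation through the five available data, a close relative of the rational error corrector in (8) (the exact rational form of the paper is not reproduced here). The helpers `ostrowski4` (a classical optimal fourth-order method standing in for φ 4) and `secant_correct` (a simple, non-optimal stand-in for ψ 8) are illustrative placeholders only:

```python
def ostrowski4(f, df, x, w):
    # Classical Ostrowski fourth-order step (stand-in for phi_4);
    # re-evaluates f(x), f(w), f'(x) for simplicity of the sketch.
    fx, fw = f(x), f(w)
    return w - fw / df(x) * fx / (fx - 2.0 * fw)

def secant_correct(f, df, x, w, z):
    # Secant-type correction standing in for psi_8 (NOT an optimal
    # eighth-order scheme; substitute any scheme from the cited families).
    fw, fz = f(w), f(z)
    return z - fz * (z - w) / (fz - fw)

def sixteenth_step(f, df, x, phi4=ostrowski4, psi8=secant_correct):
    """One step of the generic pattern: Newton -> phi4 -> psi8 ->
    inverse-interpolatory correction, using only the functional values
    f(x), f'(x), f(w), f(z), f(t)."""
    fx, dfx = f(x), df(x)
    w = x - fx / dfx                      # first sub-step: Newton
    z = phi4(f, df, x, w)                 # second sub-step
    t = psi8(f, df, x, w, z)              # third sub-step
    # Inverse Hermite interpolation: build tau(y) with tau(f(u)) = u
    # at the four points and tau'(f(x)) = 1/f'(x); next iterate is tau(0).
    ys = [fx, fx, f(w), f(z), f(t)]       # doubled node carries the derivative
    dd = [x, x, w, z, t]                  # divided-difference table (in place)
    for k in range(1, 5):
        for i in range(4, k - 1, -1):
            if ys[i] == ys[i - k]:
                dd[i] = 1.0 / dfx         # repeated node: use tau'(f(x))
            else:
                dd[i] = (dd[i] - dd[i - 1]) / (ys[i] - ys[i - k])
    val = dd[4]                           # Horner evaluation of tau at y = 0
    for i in range(3, -1, -1):
        val = dd[i] + val * (0.0 - ys[i])
    return val
```

With a genuinely optimal fourth- and eighth-order pair plugged in for the two placeholders, this pattern consumes exactly five functional values per step, matching the operation count discussed above.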

Theorem 1.
Let Ω : C → C be an analytic function in a region containing the simple zero ξ, and let the initial guess x = x 0 be sufficiently close to ξ for guaranteed convergence. In addition, we consider that φ 4 (x r , w r ) and ψ 8 (x r , w r , z r ) are any optimal 4- and 8-order schemes, respectively. Then, the proposed scheme (15) has optimal 16-order convergence.
Proof. Let e r = x r − ξ be the error at the rth step. With the help of Taylor's series expansion, we expand the functions Ω(x r ) and Ω'(x r ) around x = ξ under the assumption Ω'(ξ) ≠ 0, which leads us to: and where c j = Ω^(j)(ξ)/(j! Ω'(ξ)) for j = 2, 3, . . . , 16. By inserting the expressions (16) and (17) in the first sub-step of (15), we have where G k = G k (c 2 , c 3 , . . . , c 16 ) are given in terms of c 2 , c 3 , . . . , c 16 , with two coefficients written explicitly Expanding Ω(w r ) about the point x = ξ with the help of the Taylor series, we obtain As in the beginning, we consider that φ 4 (x r , w r ) and ψ 8 (x r , w r , z r ) are optimal schemes of order four and eight, respectively. Then, it is obvious that they satisfy error equations of the following forms and respectively, where A 0 , B 0 ≠ 0. By using the Taylor series expansion, we further obtain and where With the help of expressions (16)–(23), we have Finally, we obtain The above expression (25) shows that our scheme (15) reaches 16-order convergence. The scheme (15) is also optimal with respect to the Kung–Traub conjecture, since it uses only five functional values at each step. Hence, this completes the proof. Remark 1. Generally, we would expect the presented general scheme (15) to contain further terms from A 0 , A 1 , . . . , A 12 and B 0 , B 1 , . . . , B 8 . However, it is clear from expression (25) that the asymptotic error constant involves only A 0 and B 0 . This simplicity of the asymptotic error constant is a consequence of adopting the inverse interpolatory function with the tangency constraints.
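The opening steps of the proof can be checked symbolically. The following snippet (a verification sketch, not part of the paper's derivation) expands one Newton sub-step around the simple zero ξ, with Ω'(ξ) normalized to 1, and recovers the familiar error equation e w = c 2 e r^2 + 2(c 3 − c 2^2) e r^3 + O(e r^4) that underlies the expansion of the first sub-step:

```python
import sympy as sp

e, c2, c3, c4 = sp.symbols('e c2 c3 c4')
# Taylor expansion of Omega about the simple zero xi, with Omega'(xi)
# normalized to 1, so Omega(xi + e) = e + c2*e**2 + c3*e**3 + c4*e**4 + ...
f = e + c2*e**2 + c3*e**3 + c4*e**4
fp = sp.diff(f, e)
# Error after one Newton sub-step: e_w = e - Omega/Omega'
e_w = sp.series(e - f / fp, e, 0, 4).removeO()
print(sp.expand(e_w))   # equals c2*e**2 + 2*(c3 - c2**2)*e**3
```

The same mechanical expansion, carried to higher order and composed with the error equations of φ 4 and ψ 8, reproduces the structure claimed in (25).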

Special Cases
This section is devoted to the discussion of some important special cases of the proposed scheme. Therefore, we consider the following:

1.
We assume the optimal eighth-order scheme suggested by Cordero et al. [13]. By using this scheme, we obtain the following new optimal 16-order scheme, where b 1 , b 2 , b 3 ∈ R, provided b 2 + b 3 ≠ 0. Let us consider b 1 = b 2 = 1 and b 3 = 2 in the above scheme, denoted by (OM1).

2.
Again, we consider another optimal 8-order scheme, presented by Behl and Motsa in [11]. In this way, we obtain another new optimal family of 16-order methods, which is given by where b ∈ R. We choose b = −1/2 in this expression, denoted by (OM2).

3.
Let us choose one more optimal 8-order scheme, proposed by Džunić and Petković [15]. Therefore, we have Let us denote the above scheme by (OM3).

4.

Now, we pick another optimal family of eighth-order iterative methods, given by Bi et al. in [12]. By adopting this scheme, we further have where α ∈ R and Ω[·, ·] is a first-order divided difference. Let us consider α = 1 in the above scheme, denoted by (OM4). In a similar fashion, we can develop several new and interesting optimal sixteenth-order schemes by considering any optimal eighth-order scheme from the literature whose first sub-step employs the classical Newton's method.

Numerical Experiments
This section is dedicated to examining the convergence behavior of the particular methods mentioned in the Special Cases section. Therefore, we consider some standard test functions, which are given as follows: Here, we confirm the theoretical results of the earlier sections on the basis of the obtained values of |x r+1 − x r |/|x r − x r−1 |^16 and the computational convergence order. We display the iteration index (n), the approximated zeros (x r ), the absolute residual error of the corresponding function (|Ω(x r )|), the error between consecutive iterations |x r+1 − x r |, the ratio |x r+1 − x r |/|x r − x r−1 |^16, the asymptotic error constant η = lim r→∞ |x r+1 − x r |/|x r − x r−1 |^16 and the computational convergence order (ρ) in Table 1. To calculate (ρ), we adopt the standard estimate ρ ≈ ln(|x r+1 − x r |/|x r − x r−1 |) / ln(|x r − x r−1 |/|x r−1 − x r−2 |). We calculate (ρ), the asymptotic error term and the other remaining parameters to a high number of significant digits (a minimum of 1000 significant digits) to reduce rounding-off errors. However, due to limited space, we report the values of x r and ρ up to 25 and 5 significant figures, respectively. Additionally, we report |x r+1 − x r |/|x r − x r−1 |^16 and η to 10 significant figures. In addition, the absolute residual error of the function |Ω(x r )| and the error between consecutive iterations |x r+1 − x r | are reported up to 2 significant digits with exponent, as can be seen in Tables 1–3.
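As a small illustration of how (ρ) can be computed in practice (here in Python rather than the Mathematica 11 used in the paper, and with plain Newton iteration as an assumed stand-in for the tested methods):

```python
import math

def coc(xs):
    """Computational convergence order from the last four iterates:
    rho = ln(|x_{r+1}-x_r| / |x_r-x_{r-1}|)
        / ln(|x_r-x_{r-1}| / |x_{r-1}-x_{r-2}|)."""
    d2 = abs(xs[-1] - xs[-2])
    d1 = abs(xs[-2] - xs[-3])
    d0 = abs(xs[-3] - xs[-4])
    return math.log(d2 / d1) / math.log(d1 / d0)

# Demo on plain Newton iteration (order 2) for Omega(x) = x^2 - 2:
f, df = lambda x: x**2 - 2, lambda x: 2 * x
xs = [1.5]
for _ in range(4):
    xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))
print(round(coc(xs), 2))   # close to 2, the order of Newton's method
```

For the sixteenth-order schemes of the paper, the same formula is applied to iterates computed in multiple-precision arithmetic, since the consecutive differences underflow double precision after one or two steps.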
Furthermore, the estimated zeros to 25 significant figures are also given in Table 1. Now, we compare our 16-order methods with the optimal 16-order families of iterative schemes proposed by Sharma et al. [7], Geum and Kim [3,4] and Ullah et al. [8]. Among these schemes, we pick the iterative methods given by expression (29), expression (Y1) (for more detail please see Table 1 of Geum and Kim [3]), expression (K2) (please have a look at Table 1 of Geum and Kim [4] for more details) and expression (9), denoted by SM, GK1, GK2 and MM, respectively. The numbering and titles of the methods (used for comparisons) are taken from their original research papers. Table 1. Convergence behavior of methods OM1, OM2, OM3 and OM4 on Ω 1 (x)–Ω 8 (x).
It is straightforward to say that our proposed methods not only converge very fast towards the required zero, but also have small asymptotic error constants. We want to demonstrate that our methods perform better than the existing ones. Therefore, instead of manipulating the results by considering self-made examples and/or cherry-picking among the starting points, we consider 4 numerical examples; the first one is taken from Sharma et al. [7]; the second one is considered from Geum and Kim [3]; the third one is picked from Geum and Kim [4] and the fourth one is considered from Ullah et al. [8], with the same starting points that are mentioned in their research articles. Additionally, we want to check what the outcomes will be if we assume numerical examples and starting guesses that are not suggested in their articles. Therefore, we assume another numerical example from Behl et al. [27]. For detailed information on the considered examples or test functions, please see Table 4.
We provide two comparison tables for every test function. The first one is associated with |Ω(x r )| and is given in Table 2. The second one is related to |x r+1 − x r | and the corresponding results are given in Table 3. In addition, in the case where an exact zero is not available, we use an estimated zero of the considered function, correct to 1000 significant figures, to calculate |x r − ξ|. All the computations have been executed by adopting the programming package Mathematica 11 with multiple precision arithmetic. Finally, b 1 (±b 2 ) stands for b 1 × 10^(±b 2 ) in Tables 1–3. One of the additional test functions (see Table 4) is Ω 13 (x) = (x − 2)^2 − log x − 33x [27], with starting point x 0 = 37.5 and zero ξ ≈ 36.98947358294466986534473.

Conclusions
We constructed a general optimal scheme of 16-order that is applicable to every optimal 8-order iterative method/family of iterative methods whose first sub-step employs the classical Newton's method, unlike earlier studies, where researchers suggested a high-order version or extension of certain existing methods, such as Ostrowski's method or King's method [28], etc. This means that we can choose any iterative method/family of methods from [5,[11][12][13][14][15][16][17][18][19][20][21], etc. to obtain a further optimal 16-order scheme. The construction of the presented technique is based on the inverse interpolatory approach. Our scheme also satisfies the Kung–Traub conjecture on the optimality of iterative methods. In addition, we compared our methods with existing methods of the same convergence order on several nonlinear scalar problems. The results in Tables 2 and 3 also illustrate the superiority of our methods over the existing methods, even when choosing the same test problems and the same initial guesses. Tables 1–3 confirm that smaller |Ω(x r )|, smaller |x r+1 − x r | and simpler asymptotic error terms are associated with our iterative methods. The superiority of our methods over the existing robust methods may be due to the inherent structure of our technique, with its simple asymptotic error constants and inverse interpolatory approach.