Abstract
The principal motivation of this paper is to propose a general scheme that is applicable to every existing multi-point optimal eighth-order method/family of methods to produce a further sixteenth-order scheme. By adopting our technique, we can extend every existing optimal eighth-order scheme whose first sub-step employs Newton’s method to sixteenth-order convergence. The developed technique has an optimal convergence order in the sense of the classical Kung–Traub conjecture. In addition, we fully investigate the computational and theoretical properties, along with a main theorem that establishes the convergence order and the asymptotic error constant. Using Mathematica-11 and its high-precision arithmetic, we check the efficiency of our methods and compare them with existing robust methods of the same convergence order.
1. Introduction
The construction of high-order multi-point iterative techniques for the approximate solution of nonlinear equations has always been a crucial problem in computational mathematics and numerical analysis. Such methods provide an effective approximate solution, up to a specified degree of accuracy, of a nonlinear equation f(x) = 0,
where f is a holomorphic map/function in a neighborhood of the required zero ξ. A certain recognition has been given to the construction of sixteenth-order iterative methods in the last two decades. There are several reasons behind this; among them are advanced digital computer arithmetic, symbolic computation, the desired accuracy of the required solution within a small number of iterations, smaller residual errors, reduced CPU time, and a smaller difference between two consecutive iterations (for more details, please see Traub [] and Petković et al. []).
A handful of optimal iterative methods of order sixteen are available in the literature [,,,,,,]. Most of them are improvements or extensions of classical methods, e.g., Newton’s method (or Newton-like methods) and Ostrowski’s method, at the cost of additional function and/or first-order derivative evaluations, or of extra sub-steps appended to the native schemes.
In addition, to the best of our knowledge, only a few techniques [,] are applicable to every optimal eighth-order method (whose first sub-step employs Newton’s method) to obtain an optimal sixteenth-order scheme. At present, general schemes that can be applied to every iterative method of a given order to produce higher-order methods are more valuable than a high-order version of a single native method. Finding such general schemes is an attractive and challenging task in numerical analysis.
Therefore, in this manuscript we develop a scheme that applies to every optimal eighth-order scheme whose first sub-step is the classical Newton’s method and raises it to optimal sixteenth-order convergence, rather than a technique tied to one particular method. The construction of our technique is based on a rational approximation approach. Consequently, we can choose any iterative method/family of methods from [,,,,,,,,,,,,,,,], etc. to obtain a further sixteenth-order optimal scheme. The effectiveness of our technique is illustrated by several numerical examples, in which our methods produce better results than existing optimal methods of the same convergence order.
2. Construction of the Proposed Optimal Scheme
Here, we present an optimal sixteenth-order general iterative scheme, which is the main contribution of this study. In this regard, we consider a general eighth-order scheme, which is defined as follows:
where the second and third sub-steps are optimal schemes of order four and eight, respectively.
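To fix ideas, the following schematic sketch (in Python, with the hypothetical callables psi4 and psi8 standing in for any optimal fourth- and eighth-order sub-steps from the cited literature) shows the assumed structure: the first sub-step is a classical Newton step, and the later sub-steps may reuse the values already computed. Ostrowski’s classical second step is included only as a familiar example of a fourth-order sub-step, not as a particular choice made in this paper.

```python
def general_eighth_order_step(f, df, psi4, psi8, x):
    """One pass of the assumed three-step structure with Newton's method first."""
    y = x - f(x) / df(x)          # first sub-step: classical Newton's method
    z = psi4(f, df, x, y)         # any optimal fourth-order sub-step
    w = psi8(f, df, x, y, z)      # any optimal eighth-order sub-step
    return y, z, w

def ostrowski_second_step(f, df, x, y):
    # Ostrowski's classical fourth-order correction, reusing f(x) and f'(x);
    # it could be passed as psi4 above.
    return y - f(y) * f(x) / (df(x) * (f(x) - 2.0 * f(y)))
```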
We adopt Newton’s method as a fourth sub-step to obtain a sixteenth-order scheme, which is given by
which is non-optimal with regard to the conjecture of Kung and Traub [] because it requires six functional evaluations at each step. We can decrease the number of functional evaluations with the help of the following third-order rational function
where the values of the disposable parameters can be found with the help of the following tangency constraints
Then, the last sub-step iteration is replaced by
which does not require a derivative evaluation at the newest iterate. Expressions (2) and (6) yield an optimal sixteenth-order scheme. It is vital to note that the rational function in (4) plays a significant role in the construction of an optimal sixteenth-order scheme.
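For clarity, the evaluation count behind this optimality claim can be spelled out. In the usual reading of the Kung–Traub conjecture, a scheme without memory using n functional evaluations per iteration has order at most 2^(n-1); the count below matches the six- and five-evaluation figures mentioned above.

```latex
% Evaluation count behind the optimality claim (Kung--Traub bound $2^{\,n-1}$):
\begin{align*}
 \text{optimal eighth-order base (2):}\quad              & n = 4, & 2^{\,n-1} &= 8  \ \text{(attained, optimal)},\\
 \text{(2) followed by a full Newton step (3):}\quad     & n = 6, & 2^{\,n-1} &= 32 \ \text{(only 16 attained, non-optimal)},\\
 \text{(2) followed by the corrected step (6):}\quad     & n = 5, & 2^{\,n-1} &= 16 \ \text{(attained, optimal)}.
\end{align*}
```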
In this paper, we adopt a different last sub-step iteration, which is defined as follows:
where the second term can be considered a correction term, naturally called an “error corrector”. A last sub-step of this type is handier for the convergence analysis and also for the dynamical study through basins of attraction. The easiest way of obtaining such a fourth sub-step with a feasible error corrector is to apply the Inverse Function Theorem [] to (5). Since ξ is a simple root (i.e., f′(ξ) ≠ 0), we have a unique inverse map of f in a certain neighborhood of ξ. Hence, we adopt such an inverse map to obtain the needed last sub-step of the form (7) instead of using the form (5).
With the help of the Inverse Function Theorem, we obtain the final sub-step iteration from expression (5):
where the coefficients are disposable constants. We can determine them by adopting the following tangency conditions
One should note that the rational function on the right-hand side of (8) is regarded as an error corrector. Indeed, the desired last sub-step iteration (8) is obtained using the inverse interpolatory function approach while meeting the tangency constraints (9). Clearly, the last sub-step iteration (6) looks more suitable than (3) for the error analysis. It remains for us to determine the parameters in (8).
By using the first two tangency conditions, we obtain
By adopting the last three tangency constraints and expression (10), we have the following three independent relations
which further yield
where .
Let us consider that the rational function (8) cuts the x-axis at the next estimation, in order to determine that new approximation. Then, we obtain
which, by using the above parameter values, further yields
where .
Finally, by using expressions (2) and (14), we have
where the quantities involved are defined earlier. In Theorem 1 below, we show that the convergence order reaches the optimal value sixteen without adopting any additional functional evaluations. It is vital to note that only two coefficients, one from the fourth-order scheme and one from the eighth-order scheme, contribute to the needed asymptotic error constant, which can be found in Theorem 1.
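Before stating the convergence theorem, the idea behind the error corrector can be illustrated numerically. The sketch below uses a generic polynomial inverse interpolation through the available iterates, not the particular rational form (8) constrained by the tangency conditions (9): it interpolates x as a function of f and evaluates the inverse interpolant at f = 0 to produce the next estimate.

```python
def inverse_interpolation_step(nodes):
    """nodes: list of (x_i, f(x_i)) pairs; returns the estimate where f vanishes."""
    xs = [x for x, _ in nodes]
    fs = [fx for _, fx in nodes]
    n = len(nodes)
    # Newton divided differences for the inverse interpolant Q with Q(f_i) = x_i.
    coef = xs[:]
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (fs[i] - fs[i - j])
    # Evaluate Q at f = 0 using Horner's rule on the Newton form.
    q = coef[-1]
    for i in range(n - 2, -1, -1):
        q = coef[i] + q * (0.0 - fs[i])
    return q

# Example: one inverse-interpolation step towards the root of f(x) = x**3 - 2.
f = lambda x: x**3 - 2
pts = [1.2, 1.25, 1.3, 1.35]
print(inverse_interpolation_step([(x, f(x)) for x in pts]))  # ~ 1.2599 (cube root of 2)
```

The design choice behind (8) is the same inversion idea, with the rational form and tangency conditions chosen so that no extra functional evaluations are needed.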
Theorem 1.
Let f be an analytic function in a region containing the simple zero ξ, and let the initial guess be sufficiently close to ξ to guarantee convergence. In addition, we consider that the second and third sub-steps are any optimal fourth- and eighth-order schemes, respectively. Then, the proposed scheme (15) has optimal sixteenth-order convergence.
Proof.
Let e_r = x_r − ξ be the error at the rth step. With the help of Taylor series expansion, we expand f(x_r) and f′(x_r) around ξ under the assumption f′(ξ) ≠ 0, which leads us to:
and
where c_k = f^(k)(ξ)/(k! f′(ξ)) for k = 2, 3, 4, ….
By inserting expressions (16) and (17) into the first sub-step of (15), we have
where the coefficients are given in terms of the c_k, with the first two of them written explicitly, etc.
With the help of the Taylor series, we also have the following expansion:
As stated at the beginning, we consider that the second and third sub-steps are optimal schemes of order four and eight, respectively. Then, it is obvious that they satisfy error equations of the following forms
and
respectively, where . By using the Taylor series expansion, we further obtain
and
where and .
Finally, we obtain
Remark 1.
Generally, one would expect the presented general scheme (15) to contain further terms from the fourth- and eighth-order error equations. However, there is no doubt from expression (25) that the asymptotic error constant involves only two of their coefficients. This simplicity of the asymptotic error constant is due to adopting the inverse interpolatory function together with the tangency constraints.
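The Taylor-expansion machinery used in the proof can also be reproduced with a computer algebra system. The small sympy sketch below applies it only to the first (Newton) sub-step and recovers its familiar error equation, using the customary normalized derivatives c_k = f^(k)(ξ)/(k! f′(ξ)); the full sixteenth-order expansion leading to (25) follows the same pattern, only with longer series.

```python
import sympy as sp

e = sp.symbols('e')                      # error of the current iterate
c2, c3, c4 = sp.symbols('c2 c3 c4')      # normalized derivatives at the root

# f(xi + e)/f'(xi) and f'(xi + e)/f'(xi) as truncated Taylor series in e.
f_series  = e + c2*e**2 + c3*e**3 + c4*e**4
df_series = sp.diff(f_series, e)

# Newton error: e_new = e - f/f', expanded and truncated at order 5.
e_new = sp.series(e - f_series / df_series, e, 0, 5).removeO()
print(sp.expand(e_new))   # leading terms: c2*e**2 + 2*(c3 - c2**2)*e**3 + ...
```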
Special Cases
This section is devoted to the discussion of some important special cases of the proposed scheme. Therefore, we consider the following:
- We assume an optimal eighth-order scheme suggested by Cordero et al. []. By using this scheme, we obtain the following new optimal sixteenth-order scheme where , provided . Let us consider and in the above scheme; the resulting method is denoted by .
- Again, we consider another optimal eighth-order scheme, presented by Behl and Motsa in []. In this way, we obtain another new optimal family of sixteenth-order methods, which is given by the following, where . We choose in this expression; the resulting method is called .
- Let us choose one more optimal eighth-order scheme, proposed by Džunić and Petković []. Therefore, we have the following. Let us call the above scheme .
- Now, we pick another optimal family of eighth-order iterative methods, given by Bi et al. in []. By adopting this scheme, we further have the following, where and is a first-order finite difference. Let us consider in the above scheme; the resulting method is denoted by .
In a similar fashion, we can develop several new and interesting optimal sixteenth-order schemes by considering any optimal eighth-order scheme from the literature whose first sub-step employs the classical Newton’s method.
3. Numerical Experiments
This section is dedicated to examining the convergence behavior of the particular methods mentioned in the Special Cases section. Therefore, we consider some standard test functions, which are given as follows:
| [] | |
| [] | |
| [] | |
| [] | |
| [] | |
| [] | |
| [] | |
| [] | |
Here, we confirm the theoretical results of the earlier sections on the basis of the obtained results and the computational convergence order. In Table 1, we display the iteration index, the approximated zeros, the absolute residual error of the corresponding function, the error between consecutive iterations, the asymptotic error constant and the computational convergence order. To calculate the computational convergence order, we adopt the following method
Table 1.
Convergence behavior of methods , , and on –.
We calculate the errors, the asymptotic error term and the other remaining parameters to a high number of significant digits (a minimum of 1000 significant digits) to reduce the rounding-off error. However, due to the restricted paper length, we depict some of these values to 25 and 5 significant figures, respectively, and others to 10 significant figures. In addition, the absolute residual error in the function and the error between consecutive iterations are depicted to 2 significant digits with exponents, as can be seen in Table 1, Table 2 and Table 3.
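As an illustration of how such a computational convergence order can be obtained in high precision, the sketch below uses one common three-error estimate, ρ ≈ ln|e_{r+1}/e_r| / ln|e_r/e_{r−1}|, implemented with mpmath; this is a generic example and may differ in detail from the exact formula adopted above.

```python
from mpmath import mp, mpf, log, fabs

mp.dps = 1000   # work with 1000 significant digits, as in the tables

def coc(e_prev, e_curr, e_next):
    """Computational convergence order from three consecutive errors."""
    return log(fabs(e_next / e_curr)) / log(fabs(e_curr / e_prev))

# Sanity check with artificial errors obeying |e_{r+1}| = C * |e_r|**16.
C, e0 = mpf('0.5'), mpf('1e-2')
e1 = C * e0**16
e2 = C * e1**16
print(coc(e0, e1, e2))   # close to 16
```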
Table 2.
Comparison of residual error on the test examples –.
Table 3.
Comparison of error in the consecutive iterations on the test examples –.
Furthermore, the estimated zeros are also given to 25 significant figures in Table 1.
Now, we compare our sixteenth-order methods with the optimal sixteenth-order families of iterative schemes proposed by Sharma et al. [], Geum and Kim [,] and Ullah et al. []. Among these schemes, we pick the iterative methods given by expression (29), expression (Y1) (for more detail, please see Table 1 of Geum and Kim []), expression (K2) (see Table 1 of Geum and Kim [] for more details) and expression (9), respectively. The numbering and titles of the methods used for comparison are taken from their original research papers.
We want to demonstrate that our methods perform better than the existing ones. Therefore, instead of manipulating the results by choosing self-made examples and/or cherry-picking among the starting points, we take four numerical examples: the first is taken from Sharma et al. [], the second from Geum and Kim [], the third from Geum and Kim [] and the fourth from Ullah et al. [], with the same starting points mentioned in their research articles. Additionally, we want to check the outcomes when a numerical example and starting guesses not suggested in their articles are used. Therefore, we consider another numerical example from Behl et al. []. For detailed information on the considered examples or test functions, please see Table 4.
Table 4.
Test problems.
We provide two comparison tables for every test function. The first is associated with the residual errors reported in Table 2, while the second is related to the errors between consecutive iterations, with the corresponding results depicted in Table 3. In addition, when the exact zero is not available, we use an estimated zero of the considered function, correct to 1000 significant figures, to calculate the errors. All the computations have been executed with the Mathematica programming package using multiple precision arithmetic. Finally, stands for in Table 1, Table 2 and Table 3.
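For reproducibility, a minimal sketch of how such table entries (the residual |f(x_r)| and the difference |x_{r+1} − x_r|) can be tabulated in multiple precision is given below; a Newton iteration on a hypothetical test function and starting point stands in for the sixteenth-order methods, whose full expressions are not reproduced here.

```python
from mpmath import mp, mpf, fabs, nstr

mp.dps = 1000                     # multiple precision arithmetic

f  = lambda x: x**3 - 2           # hypothetical test function
df = lambda x: 3 * x**2
x  = mpf('1.5')                   # hypothetical starting point

for r in range(1, 6):
    x_new = x - f(x) / df(x)      # Newton's method as a stand-in iteration
    # Residual and consecutive-iteration error, shown to 2 significant digits.
    print(r, nstr(fabs(f(x_new)), 2), nstr(fabs(x_new - x), 2))
    x = x_new
```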
4. Conclusions
We constructed a general optimal sixteenth-order scheme that is suitable for every optimal eighth-order iterative method/family of methods whose first sub-step employs the classical Newton’s method, unlike earlier studies in which researchers suggested a high-order version or extension of certain existing methods, such as Ostrowski’s method or King’s method [], etc. This means that we can choose any iterative method/family of methods from [,,,,,,,,,,,], etc. to obtain a further optimal sixteenth-order scheme. The construction of the presented technique is based on the inverse interpolatory approach. Our scheme also satisfies the Kung–Traub conjecture on the optimality of iterative methods. In addition, we compared our methods with existing methods of the same convergence order on several nonlinear scalar problems. The results in Table 2 and Table 3 illustrate the superiority of our methods over the existing methods, even when the same test problem and the same initial guess are chosen. Table 1, Table 2 and Table 3 confirm that our iterative methods exhibit smaller residual errors, smaller errors between consecutive iterations and simple asymptotic error terms. The superiority of our methods over the existing robust methods may be due to the inherent structure of our technique, with its simple asymptotic error constants and inverse interpolatory approach.
Author Contributions
Both authors contributed equally to this work.
Funding
This research received no external funding.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964.
- Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations; Academic Press: New York, NY, USA, 2012.
- Geum, Y.H.; Kim, Y.I. A family of optimal sixteenth-order multipoint methods with a linear fraction plus a trivariate polynomial as the fourth-step weighting function. Comput. Math. Appl. 2011, 61, 3278–3287.
- Geum, Y.H.; Kim, Y.I. A biparametric family of optimally convergent sixteenth-order multipoint methods with their fourth-step weighting function as a sum of a rational and a generic two-variable function. J. Comput. Appl. Math. 2011, 235, 3178–3188.
- Kung, H.T.; Traub, J.F. Optimal order of one-point and multi-point iteration. J. ACM 1974, 21, 643–651.
- Neta, B. On a family of multipoint methods for non-linear equations. Int. J. Comput. Math. 1981, 9, 353–361.
- Sharma, J.R.; Guha, R.K.; Gupta, P. Improved King’s methods with optimal order of convergence based on rational approximations. Appl. Math. Lett. 2013, 26, 473–480.
- Ullah, M.Z.; Al-Fhaid, A.S.; Ahmad, F. Four-point optimal sixteenth-order iterative method for solving nonlinear equations. J. Appl. Math. 2013, 2013, 850365.
- Sharifi, S.; Salimi, M.; Siegmund, S.; Lotfi, T. A new class of optimal four-point methods with convergence order 16 for solving nonlinear equations. Math. Comput. Simul. 2016, 119, 69–90.
- Behl, R.; Amat, S.; Magreñán, Á.A.; Motsa, S.S. An efficient optimal family of sixteenth order methods for nonlinear models. J. Comput. Appl. Math. 2019, 354, 271–285.
- Behl, R.; Motsa, S.S. Geometric construction of eighth-order optimal families of Ostrowski’s method. Sci. World J. 2015, 2015, 11.
- Bi, W.; Ren, H.; Wu, Q. Three-step iterative methods with eighth-order convergence for solving nonlinear equations. J. Comput. Appl. Math. 2009, 225, 105–112.
- Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. Three-step iterative methods with optimal eighth-order convergence. J. Comput. Appl. Math. 2011, 235, 3189–3194.
- Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. New modifications of Potra-Pták’s method with optimal fourth and eighth order of convergence. J. Comput. Appl. Math. 2010, 234, 2969–2976.
- Džunić, J.; Petković, M.S. A family of three point methods of Ostrowski’s type for solving nonlinear equations. J. Appl. Math. 2012, 2012, 425867.
- Liu, L.; Wang, X. Eighth-order methods with high efficiency index for solving nonlinear equations. Appl. Math. Comput. 2010, 215, 3449–3454.
- Sharma, J.R.; Sharma, R. A new family of modified Ostrowski’s methods with accelerated eighth order convergence. Numer. Algorithms 2010, 54, 445–458.
- Soleymani, F.; Vanani, S.K.; Khan, M.; Sharifi, M. Some modifications of King’s family with optimal eighth-order of convergence. Math. Comput. Model. 2012, 55, 1373–1380.
- Soleymani, F.; Sharifi, M.; Mousavi, B.S. An improvement of Ostrowski’s and King’s techniques with optimal convergence order eight. J. Optim. Theory Appl. 2012, 153, 225–236.
- Thukral, R.; Petković, M.S. A family of three point methods of optimal order for solving nonlinear equations. J. Comput. Appl. Math. 2010, 233, 2278–2284.
- Wang, X.; Liu, L. Modified Ostrowski’s method with eighth-order convergence and high efficiency index. Appl. Math. Lett. 2010, 23, 549–554.
- Salimi, M.; Lotfi, T.; Sharifi, S.; Siegmund, S. Optimal Newton-Secant like methods without memory for solving nonlinear equations with its dynamics. Int. J. Comput. Math. 2017, 94, 1759–1777.
- Salimi, M.; Long, N.M.A.N.; Sharifi, S.; Pansera, B.A. A multi-point iterative method for solving nonlinear equations with optimal order of convergence. Jpn. J. Ind. Appl. Math. 2018, 35, 497–509.
- Sharifi, S.; Ferrara, M.; Salimi, M.; Siegmund, S. New modification of Maheshwari method with optimal eighth order of convergence for solving nonlinear equations. Open Math. 2016, 14, 443–451.
- Lotfi, T.; Sharifi, S.; Salimi, M.; Siegmund, S. A new class of three point methods with optimal convergence order eight and its dynamics. Numer. Algorithms 2016, 68, 261–288.
- Apostol, T.M. Mathematical Analysis; Addison-Wesley Publishing Company, Inc.: Boston, MA, USA, 1974.
- Behl, R.; Cordero, A.; Motsa, S.S.; Torregrosa, J.R. Construction of fourth-order optimal families of iterative methods and their dynamics. Appl. Math. Comput. 2015, 271, 89–101.
- King, R.F. A family of fourth order methods for nonlinear equations. SIAM J. Numer. Anal. 1973, 10, 876–879.
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).