Abstract
A nonlinear equation is mathematically transformed to a coupled system of quasi-linear equations in the two-dimensional space. Then, a linearized approximation renders a fractional iterative scheme, which requires one evaluation of the given function per iteration. A local convergence analysis is adopted to determine the optimal values of a and b. Moreover, upon combining the fractional iterative scheme with the generalized quadrature methods, fourth-order optimal iterative schemes are derived. Finite differences based on three data points are used to estimate the optimal values of a and b. We recast the Newton iterative method into two types of derivative-free iterative schemes by using the finite difference technique. A three-point generalized Hermite interpolation technique is developed, which includes weight functions with certain constraints. Inserting the derived interpolation formulas into the triple Newton method, eighth-order optimal iterative schemes are constructed, which require four evaluations of functions per iteration.
Keywords:
nonlinear equation; two-dimensional approach; fractional iterative scheme; modified derivative-free Newton method; quadratures; fourth-order optimal iterative scheme; three-point generalized Hermite interpolation; eighth-order optimal iterative scheme
MSC:
41A25; 65D99; 65H05
1. Introduction
In this paper, we develop several novel iterative schemes to solve the nonlinear equation
\[ f(x) = 0, \quad (1) \]
which frequently occurs in engineering and scientific applications. For solving Equation (1),
\[ x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} \quad (2) \]
is the well-known Newton iterative method. Many methods were modified from the Newton iterative method [1,2,3,4,5,6,7,8], and they are effective for solving nonlinear equations. We are going to replace f'(x_n) in the denominator of Equation (2) by a linear function of f(x_n), such as a + b f(x_n) for some constants a and b. In doing so, a major drawback of the Newton iterative method, its sensitivity to the initial guess, can be avoided, and at the same time, a major advantage is that f'(x) is no longer needed.
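To make the idea concrete, the following minimal Python sketch contrasts the classical Newton step (2) with the proposed replacement of f'(x_n) by a + b f(x_n); the test function and the parameter values are illustrative choices, not taken from the later sections.

```python
# A minimal sketch, assuming the replacement f'(x_n) -> a + b*f(x_n);
# the test function and parameter values are illustrative only.
def newton(f, df, x, tol=1e-14, max_iter=50):
    """Classical Newton iteration (2): x <- x - f(x)/f'(x)."""
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / df(x)
    return x

def fractional(f, x, a, b, tol=1e-14, max_iter=50):
    """Derivative-free variant: f'(x) is replaced by a + b*f(x)."""
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / (a + b * fx)   # one function evaluation per iteration
    return x

f  = lambda x: x**3 - 2.0        # root r = 2**(1/3)
df = lambda x: 3.0 * x**2
r  = 2.0 ** (1.0 / 3.0)
print(newton(f, df, 1.0))
print(fractional(f, 1.0, a=3.0 * r**2, b=0.8))  # a ~ f'(r); b illustrative
```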
The iterative schemes in [2,9,10,11,12,13,14] were based on quadratures; these are two-step-type iterative schemes with third-order convergence, needing the first Newton step to generate a trial solution and then a correction at the second step by some quadrature rule. Our fractional iterative scheme is of the one-step type and also of third-order convergence, which saves much function computation per iteration.
Weerakoon and Fernando [2] resorted to a trapezoidal quadrature rule to derive an arithmetic-mean Newton method with third-order convergence. After that, third-order iterative schemes based on different quadrature methods were developed in [9,10,11,12,15,16], which require three function evaluations per iteration. They have the same order and the same efficiency index (E.I.). However, according to the conjecture of Kung and Traub [17], the optimal order of an iterative scheme based on three function evaluations is four, with E.I. = 1.5874.
For the fourth-order optimal iterative scheme (FOIS), which performs three function evaluations at each iteration, there are many methods [1,17,18,19,20,21,22,23,24,25,26,27,28,29]. Recently, Liu and Liu [30] derived a double-weight function technique to derive the FOIS. Iterative schemes using difference modifications of Potra-Ptak's method with optimal fourth and eighth orders of convergence were presented by Cordero et al. [31]. However, a two-step FOIS based on the fractional iterative scheme is not yet developed. We are going to propose a simple method based on generalized quadratures to derive the FOIS, which is the optimal combination of two third-order iterative schemes to be developed in this paper.
Besides the quadratures and the finite difference approximation used in the Ostrowski method and its modifications in [32,33,34,35,36], the function interpolation technique is often used to generate high-order iterative schemes. Data interpolation is a mathematical process to construct the interpolant from the given data, in which the derivative at each point is inserted to set up the Hermite interpolant; this was used in higher-order iterative schemes [30,37,38,39,40,41,42,43]. We also develop three-point generalized Hermite interpolation techniques, and a new class of three-step eighth-order optimal iterative schemes (EOIS) is constructed in this paper, which involves four evaluations of functions and is optimal in the sense of Kung and Traub.
Previously, Zhanlav et al. [44] used the generating function method for constructing new EOIS iterations, from which our technique is quite different.
The scalar equation obtained in engineering applications is usually an implicit function of the unknown variable. For instance, the target equation used in the shooting method to solve a nonlinear boundary value problem is an implicit equation, with f being an implicit function of x, and in this situation, it is hard to obtain the derivative term f'(x). When the Newton iterative method cannot be applied to solve this sort of problem, the proposed fractional iterative schemes have the advantage of solving this type of problem without using f'(x).
The paper is organized as follows. A two-dimensional variant of Newton's iterative method is developed in Section 2. We derive a fractional iterative scheme, whose convergence criterion is proven. The convergence behavior of the fractional iterative scheme is analyzed in Section 3. We verify the performance of the proposed iterative schemes in Section 4 by computing several numerical examples. In Section 5, we combine the fractional iterative scheme with the quadrature methods to generate the FOIS. We reduce the fractional iterative schemes to some derivative-free iterative schemes in Section 6, and the convergence is identified. The Hermite interpolation is introduced in Section 7, and a three-point interpolation formula is derived. The results are used in Section 8 to derive the three-point generalized Hermite interpolations. In Section 9, we construct the EOIS by using the weight functions obtained from the three-point generalized Hermite interpolations, and examples are given. Finally, we draw conclusions in Section 10. The abbreviations used are listed in the Abbreviations section.
2. Two-Dimensional Generalization of Newton’s Method
To motivate the present study, we begin with
where r is a simple solution of f(x) = 0 with f(r) = 0 and f'(r) ≠ 0. Inserting Equation (3) into Equation (2) and using the relation derived from Equation (4), with higher-order terms neglected, yields
This iterative scheme is a variant close to the Newton iterative scheme (2). We will prove that the iterative scheme (5), like Equation (2), is quadratically convergent, as stated in Theorem 3 in Section 3. Below, we derive an iterative scheme with a form similar to Equation (5), but one that is cubically convergent, rather than quadratically convergent like Equation (5).
When a new variable is defined by
where we suppose that can be decomposed as , from Equation (6) the following identity holds:
By Equations (1) and (7), we have
where a is an accelerating parameter and a splitting function, to be discussed below, is involved. While one term is added on both sides of Equation (8), we add the corresponding term on both sides of Equation (7) to render Equation (9). Herein, the problem of finding the solution of Equation (1) is mathematically transformed to a coupled system of quasi-linear Equations (8) and (9) in the two-dimensional space.
The splitting technique in Equation (9) was used to solve Equation (1) in [45]. Then, Liu et al. [46] proposed a derivative-free iterative scheme using it. We further carry out a theoretical analysis directly in the two-dimensional space.
When the nth-step values are obtained, the linearizations of Equations (8) and (9) around them are
which form a two-dimensional linear system for the values at the next step. We adopt shortened notations for the function values at the nth step.
From Equations (10) and (11), we can obtain the solution at the next step by
which renders
Both the numerator and denominator on the right-hand side of Equation (12) are multiplied by the same quantity, and using Equation (6), it can be refined to
Remark 1.
Mathematically, after adding the same term on both sides, Equation (1) is equivalent to
which is then multiplied by x,
If the current value is already known, we can seek the next one by
Upon taking the same notation, Equation (15) goes back to Equation (14). However, this one-dimensional approach cannot generate Equation (13); without setting the problem in the two-dimensional space as carried out in Equations (8) and (9), it would be hard to determine a and b and to prove Theorem 1 given below.
We are going to show that, without resorting to the derivative term, third-order convergence of the iterative scheme can also be realized. Our aim is to reduce the number of function evaluations to one per iteration, without using the derivative term, while maintaining an order of convergence of three.
Letting
where b is a constant, we can cancel the common factor in the fractional term of Equation (14), and with the aid of Equation (16), we can achieve
which, including the two constant parameters a and b, is a novel iterative scheme to solve Equation (1). For later use, the iterative scheme (17) is called a fractional iterative scheme.
Theorem 1.
For solving f(x) = 0, the iterative scheme (17) is convergent if
A finer criterion is
3. Convergence Analysis of Fractional Iterative Scheme
The convergence analysis of Equation (17) is given below.
Theorem 2.
The iterative scheme (17) for solving f(x) = 0 has third-order convergence, with the parameters given by
Proof.
For the proof of convergence, let r be a simple solution of f(x) = 0, i.e., f(r) = 0 and f'(r) ≠ 0. Thus, by giving
it follows that
where
Inserting Equation (26) into Equation (17) yields
where we have used the first relation in Equation (24), and the remaining coefficients are given by
If the second relation in Equation (24) holds, we have
and at the same time, Equation (29) reduces to
Equation (30) indicates the third-order convergence. □
Theorem 3.
The iterative scheme (5) for solving f(x) = 0 has second-order convergence.
Proof.
By Equation (29),
proves this theorem. □
In practice, the iterative scheme (5) is a variant of the Newton iterative scheme (2). Both orders of convergence are two.
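As a numerical illustration of Theorem 2, the sketch below iterates a frozen-denominator scheme of the form x_{n+1} = x_n - f(x_n)/(a + b f(x_n)). Since Equation (24) is not written out above, the parameter values a = f'(r) and b = f''(r)/(2f'(r)) are an assumption, consistent with the slope/curvature estimates of Section 6; a short calculation shows that this choice cancels the first- and second-order error terms of the frozen iteration.

```python
# A hedged check of third-order convergence, assuming the optimal
# parameters are a = f'(r) and b = f''(r)/(2 f'(r)) (an assumption;
# the paper's exact Equation (24) is not reproduced here).
import math

f = lambda x: math.exp(x) - 2.0          # illustrative test, root r = ln 2
r = math.log(2.0)
a = 2.0                                  # f'(r) = e^r = 2
b = 0.5                                  # f''(r)/(2 f'(r)) = 1/2

x, errs = 1.0, []
for _ in range(3):
    e = abs(x - r)
    if e == 0.0:
        break
    errs.append(e)
    x -= f(x) / (a + b * f(x))           # frozen fractional iteration

# For a third-order method, |e_{n+1}|/|e_n|**3 settles to a constant.
print([e1 / e0**3 for e0, e1 in zip(errs, errs[1:])])
```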
Notice that the Newton method (2) is a single-point second-order optimal iterative scheme, with two function operations, f(x_n) and f'(x_n), per iteration. Halley [47] derived the following extension of the Newton method to a third-order iterative scheme:
However, because it needs three function operations, on f(x_n), f'(x_n) and f''(x_n), it is not an optimal iterative scheme. Besides the Halley method, there are many two-point iterative schemes of third-order convergence. Liu and Lee [48] generalized many quadrature-type third-order iterative schemes to
Based on the three function operations f(x_n), f'(x_n) and f'(y_n), Equation (32) is not an optimal iterative scheme. It is interesting that, upon comparing Equation (31) to Equation (17), these two iterative schemes are the same if we take a = f'(x_n) and b = -f''(x_n)/(2f'(x_n)). But merely with the constant a and b given by Equation (24), the iterative scheme (17) is of third-order convergence. It must be emphasized that we need neither f'(x_n) and f''(x_n) nor a two-point operation to achieve the third-order convergence. Therefore, the key issue of Equations (17) and (24) is that we need to give a precise estimation of a and b without using the information from f'(x_n) and f''(x_n).
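To make the correspondence explicit, the following one-line computation (our own verification sketch) divides the numerator and denominator of Halley's correction by 2f'(x_n):

```latex
% Halley's method (31) recast in the fractional form of Equation (17):
x_{n+1}
= x_n - \frac{2 f(x_n)\, f'(x_n)}{2 f'(x_n)^2 - f(x_n)\, f''(x_n)}
= x_n - \frac{f(x_n)}{\, f'(x_n) + \left( -\dfrac{f''(x_n)}{2 f'(x_n)} \right) f(x_n) \,},
```

so the identification a = f'(x_n), b = -f''(x_n)/(2f'(x_n)) reproduces Halley's method pointwise, whereas scheme (17) freezes a and b as constants.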
4. Numerical Verifications
Some examples are used to evaluate the iterative scheme (17), which is subjected to the convergence criteria:
We fix the convergence tolerance for all numerical tests. The numerically computed order of convergence (COC) is defined in [2]:
\[ \mathrm{COC} \approx \frac{\ln |e_{n+1}/e_n|}{\ln |e_n/e_{n-1}|}, \quad (33) \]
where e_n = x_n - r.
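A small helper makes the COC computation in Equation (33) explicit; the test function is an illustrative choice, and the root-based definition of [2] is assumed.

```python
# COC per Equation (33), using the known root r (definition of [2]).
import math

def coc(xs, r):
    """ln|e_{n+1}/e_n| / ln|e_n/e_{n-1}| from the last three iterates."""
    e0, e1, e2 = (abs(x - r) for x in xs[-3:])
    return math.log(e2 / e1) / math.log(e1 / e0)

# Newton iterates for f(x) = x**2 - 2: COC should approach 2.
xs = [1.5]
for _ in range(3):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))
print(coc(xs, math.sqrt(2.0)))   # ~2.0
```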
The presently computed results are compared to those obtained by the Newton method (NM), the Halley method (HM) [47] in Equation (31) and the method of Li (LM) [18]:
The orders of convergence for the NM, HM and LM are two, three and four, respectively.
We first use the following example:
to present the monotonic decrease of the sequence generated by the method in Equation (17). The parameters a and b are specified directly, rather than given by Equation (24).
There are three solutions of Equation (35). We consider four cases:
We take specified values of a, b and the initial guess for each of the cases (a)-(d). Cases (a) and (b) tend to the first solution; case (c) tends to the second solution; and case (d) tends to the third solution.
Due to the monotonic decrease of the sequence, all the COCs are near the third order, and the iterations converge very fast. In the last column of Table 1, we list the NIs obtained by the LM of [18]. Starting from the same initial values, for the first two cases the scheme (17) converges slightly faster than the LM. As mentioned by Li [18], the iterative scheme (34) requires two evaluations of the function and one first-order derivative per iteration. Therefore, the scheme (17), with one evaluation of the function, saves much of the computational cost.
Table 1.
For Equation (35), showing the monotonically decreasing sequence with respect to the number n of steps and the COC for different cases, and listing the number of iterations (NI) of the present method. The last column is the NI of the LM.
Other test examples are given by
Table 2 lists the initial guesses, a and b, and the NIs for the different methods. We can observe that the present iterative scheme converges faster than the NM and HM. The NM and HM do not perform well for one of the solutions when a poor initial guess is used, and they cannot be applied to solve the problem in Equation (39).
Table 2.
For Equations (36)-(39), listing the initial guess, a and b, and the NIs of the present method. The last two columns are the NIs of the NM and HM.
5. Fourth-Order Optimal Iterative Schemes
Now, we propose some new FOISs by a constant-weight combination of the third-order iterative scheme in Equations (17) and (24) with the following one. Before that, we cite the following result [48].
Lemma 1.
The following two-step iterative scheme has third-order convergence:
where W satisfies
Here,
The error equation reads as
Proof.
The proof can refer to [48]. □
Theorem 4.
Proof.
The combination of Equations (17) and (40) is given by Equation (42), whose weighting factors are subject to
Then, we consider the weighted combination of the error equations in Equations (30) and (41), such that the combined coefficient preceding the cubic error term is zero:
Equation (45) leads to
Solving Equations (44) and (46), we can derive Equation (43). Thus, the error equation of the optimally combined iterative scheme in Equation (42) is
This completes the proof of Theorem 4. □
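Since the explicit coefficients of the error Equations (30) and (41) are not written out above, the following generic sketch records the cancellation argument with unnamed leading constants C_1 and C_2 (our notation, not the paper's):

```latex
% Generic sketch: two third-order schemes combined by weights w_1, w_2.
%   e_{n+1}^{(1)} = C_1 e_n^3 + O(e_n^4),  e_{n+1}^{(2)} = C_2 e_n^3 + O(e_n^4).
% Imposing consistency and cancellation of the cubic term,
%   w_1 + w_2 = 1,   w_1 C_1 + w_2 C_2 = 0   (C_1 \neq C_2),
% gives
w_1 = \frac{C_2}{C_2 - C_1}, \qquad
w_2 = \frac{-C_1}{C_2 - C_1}, \qquad
w_1 e_{n+1}^{(1)} + w_2 e_{n+1}^{(2)} = O(e_n^4).
```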
When we take suitable parameter values, the conditions in Theorem 4 are satisfied. In Table 3, we solve Equation (35) by using the FOIS (42) and list the results for the three different solutions, which show large values of the COC.
Let a free parameter be introduced. Using it in Equation (40), we have
The best value of the parameter is chosen such that
is an FOIS.
6. Derivative-Free Iterative Schemes
In this section, we approximate a and b in Equation (24). With two initial guesses chosen near the solution, a finite difference approximation of a and b is taken as
Inserting the a and b of Equation (48) into Equation (17), we solve Equation (35), and the related data are tabulated in Table 5.
Table 5.
For Equation (35), listing the initial guesses, a and b, and the NIs and COCs of the present method for the three solutions.
Table 6.
For Equations (36)-(39), listing the initial guesses, a and b, and the NIs and COCs of the present method.
Remark 3.
For the solution of one of the test equations, there do not exist two initial guesses whose function values bracket the solution. However, we place both initial guesses on the right side of the solution, and the present iterative scheme is still applicable, finding the solution in 14 iterations, as shown in Table 6.
If the curve of f(x) vs. x is available, we can observe a rough position of the solution r, and then the slope and the curvature can be estimated roughly. Intuitively, we can estimate a and b by the slope and curvature. In order to maintain fast convergence, we must choose the two initial points quite close to the solution r, such that
are very close to a and b in Equation (24). For Equation (35), if we take the initial points close to the first solution, the NI is greatly reduced from 11 to 5 and COC = 2.904; for the second solution, the NI reduces to 6 and COC = 2.862; and for the third solution, the NI reduces to 5 and COC = 2.936. Here, the NI and COC are improved in comparison with those in Table 5.
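A hedged sketch of such slope/curvature estimates follows: the exact formulas of Equations (48) and (49) are not reproduced above, so central differences on three nearby data points are assumed here, with b taken as curvature/(2 x slope), consistent with the frozen-parameter assumption sketched after Theorem 3.

```python
# Hypothetical estimate_ab: three-data finite differences for a and b;
# the paper's exact Equations (48)-(49) are not reproduced here.
import math

def estimate_ab(f, x0, h):
    """Slope/curvature estimates from f(x0 - h), f(x0), f(x0 + h)."""
    fm, f0, fp = f(x0 - h), f(x0), f(x0 + h)
    slope = (fp - fm) / (2.0 * h)            # ~ f'(x0)
    curv = (fp - 2.0 * f0 + fm) / (h * h)    # ~ f''(x0)
    return slope, curv / (2.0 * slope)       # (a, b)

f = lambda x: math.exp(x) - 2.0              # illustrative test, root ln 2
a, b = estimate_ab(f, 0.7, 0.05)             # data taken near the root
x = 1.0
for _ in range(8):
    x -= f(x) / (a + b * f(x))               # fractional scheme, frozen a, b
print(x, math.log(2.0))
```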
A greedy search, such as the 2D golden section search algorithm over a given range, can help us obtain the optimal values of a and b for fast convergence. However, it would require much more computation. Instead of a greedy search in the plane of (a, b), we discuss the influence of the estimates in Equation (49) by varying a parameter c. For different c, the estimates are different. In Table 7, we list the results. Obviously, the COC defined in Equation (33) is sensitive to the estimated values when they approach the optimal values, but the NI does not vary much. When the estimates tend to the optimal values, COC = 2.9041 tends to the theoretical value COC = 3, as listed in Table 1.
Table 7.
For Equation (35) with one solution and a given initial guess, listing the estimated parameters, NIs and COCs of the present method for different values of c.
As performed in Equations (48) and (17), we introduce a derivative-free modification of the Newton variant in Equation (5) to
where
For Equations (36)–(39) solved by the first derivative-free Newton method (FDFNM), the related data are tabulated in Table 8.
Table 8.
For Equations (36)-(39), listing the initial guesses, A and B, and the NIs and COCs of the FDFNM.
We have
By neglecting the higher-order terms, a quadratic equation follows from Equation (53):
Thus, we can derive
Inserting Equation (54) into Equation (52), we can derive the second modified Newton method:
which is different from the first modified Newton method (5). Let
and we can obtain the second derivative-free Newton method (SDFNM):
Table 9.
For Equations (36)-(39), listing the initial guesses, A and B, and the NIs and COCs of the SDFNM.
Upon comparing Table 6, Table 8 and Table 9, the performances of the present method in Equations (48) and (17), the FDFNM in Equations (50) and (51), and the SDFNM in Equations (56) and (57) are almost the same.
As a practical application of the proposed iterative schemes, let us consider a nonlinear boundary value problem:
An exact solution is
In the conventional shooting method, an unknown initial slope is assumed, and we integrate Equation (58) with the given initial condition and the assumed slope, which results in an implicit equation to be solved. The exact value of the initial slope is available for comparison.
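A hedged sketch of this application: the shooting mismatch F(s) is implicit in the initial slope s, so F'(s) is unavailable in closed form and a derivative-free iteration applies directly. Since Equation (58) is not reproduced above, the classic boundary value problem y'' = 3y^2/2, y(0) = 4, y(1) = 1 (exact initial slope s = -8) is used as a stand-in; with b = 0 and a taken as the latest two-point divided difference, the fractional form reduces to a secant-type iteration.

```python
# Shooting method with a derivative-free iteration on the implicit
# mismatch F(s); the BVP below is an illustrative stand-in, not the
# paper's Equation (58).
from scipy.integrate import solve_ivp

def F(s):
    """Mismatch y(1; s) - 1 for y'' = 1.5*y**2, y(0) = 4, y'(0) = s."""
    sol = solve_ivp(lambda t, y: [y[1], 1.5 * y[0] ** 2],
                    (0.0, 1.0), [4.0, s], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1] - 1.0

# With b = 0 and a = F[s_{n-1}, s_n], the fractional step reduces to a
# secant-type, derivative-free iteration.
s0, s1 = -7.0, -7.5
F0 = F(s0)
for _ in range(12):
    F1 = F(s1)
    if abs(F1) < 1e-8:
        break
    a = (F1 - F0) / (s1 - s0)      # two-point divided-difference estimate of a
    s0, F0 = s1, F1
    s1 = s1 - F1 / a               # fractional step with b = 0
print(s1)                          # approaches the exact slope -8
```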
7. Hermite Interpolation
As extensions of the one-step Newton method (2), there are, respectively, the two-step and three-step methods of the double Newton and triple Newton type:
However, due to the low efficiency index (E.I.) = 1.414 of these iterative schemes, they are rarely used for the solution of nonlinear equations. Below, we employ the generalized Hermite interpolation techniques to raise the value of the E.I.
We fix
The Hermite function for the interpolation of the data of a function at two points is such that
If a polynomial is to match these four conditions in Equation (63), it is at least a second-order function of x. When the derivative at the second point is computed from Equation (61), it is not independent of the data; hence, there exists a Hermite interpolation formula to predict f'(y_n) from the data f(x_n), f'(x_n) and f(y_n). The two-point Hermite interpolation formula was generalized in [30], involving a weight function.
The second-order Hermite polynomial is constructed according to the Hermite interpolation conditions [30]:
Wang and Liu [36] derived
Then, it follows from Equations (61) and (62) that
which expresses a certain two-point generalized Hermite interpolation.
Definition 1.
A two-point generalized Hermite interpolation of f'(y_n) in terms of f(x_n), f'(x_n) and f(y_n) is depicted by
where the weight function satisfies
If one replaces f'(y_n) in the first equation in Equation (60) by that in Equation (65), it generates an FOIS [17,37,38]:
The E.I. of Equation (67) is now raised to E.I. = 1.587, which is better than the E.I. = 1.414 of the double Newton method.
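As an illustration of Equation (67), the sketch below uses the classical un-weighted Hermite estimate f'(y_n) ≈ 2f[x_n, y_n] - f'(x_n), a special case of Definition 1 (the general weight function W is not reproduced here), in the second step; three function evaluations per iteration give order four.

```python
# A hedged instance of Equation (67): Newton predictor plus a corrector
# using the un-weighted two-point Hermite estimate of f'(y_n).
def fois_step(f, df, x):
    fx, dfx = f(x), df(x)
    y = x - fx / dfx                            # Newton predictor
    fy = f(y)
    hp = 2.0 * (fy - fx) / (y - x) - dfx        # Hermite estimate of f'(y)
    return y - fy / hp                          # corrector

f  = lambda x: x**3 + 4.0 * x**2 - 10.0         # classic test equation
df = lambda x: 3.0 * x**2 + 8.0 * x
r  = 1.3652300134140969                         # reference root
x, xs = 1.0, [1.0]
for _ in range(3):
    x = fois_step(f, df, x)
    xs.append(x)
# successive errors should shrink roughly to the fourth power
print([abs(v - r) for v in xs])
```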
This fact encourages us to also replace the corresponding derivative in Equation (60) by the following three-point interpolation formula:
such that the combination of Equations (60), (65) and (68) leads to
With certain conditions on the weight functions, the E.I. of the iterative scheme (69) can be further raised to E.I. = 1.682. These are three-point EOISs.
8. Three-Point Generalized Hermite Interpolations
Using the third-order Hermite polynomial for the three-point interpolation, Wang and Liu [36] and Petković [37] derived
where
and the remaining divided differences are defined in the same fashion.
From the first two equations in Equation (69) and Equation (62), we can derive the following divided differences:
Definition 2.
A three-point generalized Hermite interpolation of f'(z_n) in terms of f(x_n), f'(x_n), f(y_n) and f(z_n) is defined by Equations (73) and (74) as
where the weight functions satisfy
Equation (77) is a special case of Equation (76). Unlike that in Equation (74), for the generalized interpolation in Equation (75), the weight functions can be chosen independently.
The weight function is subjected to four conditions, so it is somewhat difficult to obtain directly. We attempt to construct it by a function with merely two conditions.
Lemma 2.
A function with
can be obtained from
where
Proof.
Taking advantage of Lemma 2, we can replace Equation (76) by
Different interpolation techniques appear in the literature. Using a rational function, Sharma and Sharma [50] derived
which in terms of Equation (71) can be written as
Based on the Taylor series expansion,
was derived by Bi et al. [34], which, with the aid of Equations (71) and (72), can be recast to
Both interpolations (81) and (82) are special cases of the generalized interpolation in Equation (75).
To observe the accuracy of Equation (75) and the other two interpolations (81) and (82), we consider two definite functions:
We define
to be the relative error of the interpolation, where the two quantities compared are, respectively, the value calculated from Equation (75) and the exact value.
The following cases adopt simple weight functions:
Table 10 lists the REs.
Table 10.
For the two functions in Equation (83), listing the REs computed at the respective evaluation points. The numbers in parentheses denote powers of 10.
For the first function, the accuracy of Cases (a)-(d) is much better than the others because the interpolant is itself a third-order polynomial; however, for the second function, the accuracies of all cases are of comparable levels.
9. Three-Point Eighth-Order Optimal Iterative Schemes
In this section, we combine the two-point and three-point generalized Hermite interpolations to generate some eighth-order optimal iterative schemes (EOIS).
Theorem 5.
Proof.
Before giving the proof, we emphasize that for the special case with the weight functions given in Equation (77), Wang and Liu [36] have proven the eighth-order convergence of the iterative scheme (69) and derived the corresponding error equation. To save space, the details are not written here, and the error equation is not written out explicitly.
Let the errors of the iterate and the two substeps be denoted as usual. As shown in [36],
where
In view of Equations (88)-(92), the quantities in Equation (62) have the following asymptotic estimations:
Therefore, we merely need to expand g to the third order by
where Equation (93) was taken into account.
Inserting Equations (94) and (95) into the last equation in Equation (69) yields
since the weight function satisfies Equation (86); the weight functions take the same values as those listed in Equation (76) with those in Equation (77); and according to [36], the coefficients preceding the lower-order error terms are zeros for the iterative scheme (69). Equation (96) indicates that Equation (69) has eighth-order convergence. Because we do not derive the error equation in an explicit form, many processes were omitted. □
Although the details of the derivation of the corresponding error equation are not given here, we give two examples to verify the performance of the iterative scheme (69), in which the weight functions g and h are independent, unlike that in [36] as shown by Case (a) in Equation (85), wherein g is related to h by Equations (75) and (74). We abbreviate the method in [36] as the WLM. For the purpose of comparison, the WLM is written as follows:
This iterative scheme was proven in [36] to have eighth-order convergence, which is optimal because there are four function operations per iteration. Compared to Equation (69), which also has eighth-order convergence with the same number of function operations, the two methods differ in their construction techniques.
The iterative scheme (69), which involves the independent weight functions g and h, is definitely more general than that in Equations (97) and (98). The theoretical basis of Equation (69) is the generalization of the two-point and three-point Hermite interpolation methods.
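For a concrete baseline, the sketch below implements the classical un-weighted three-step construction in the spirit of Equations (69) and (97)-(98): the derivatives at the second and third substeps are replaced by two-point and three-point (cubic) Hermite estimates, so each iteration uses only the four evaluations f(x_n), f'(x_n), f(y_n), f(z_n). The general weight functions g and h of the paper are not included; this is a standard instance, not the paper's general scheme.

```python
# A hedged sketch of a classical three-step, eighth-order construction:
# cubic Hermite H through (x, fx, dfx), (y, fy), (z, fz) supplies H'(z).
def eois_step(f, df, x):
    fx, dfx = f(x), df(x)
    y = x - fx / dfx                                   # step 1: Newton
    fy = f(y)
    fxy = (fy - fx) / (y - x)
    z = y - fy / (2.0 * fxy - dfx)                     # step 2: FOIS (67)
    fz = f(z)
    # step 3: divided differences for the cubic Hermite estimate of f'(z)
    fxxy = (fxy - dfx) / (y - x)
    fyz = (fz - fy) / (z - y)
    fxyz = (fyz - fxy) / (z - x)
    fxxyz = (fxyz - fxxy) / (z - x)
    Hz = dfx + 2.0 * fxxy * (z - x) \
         + fxxyz * (2.0 * (z - x) * (z - y) + (z - x) ** 2)
    return z - fz / Hz                                 # final correction

f  = lambda x: x**3 + 4.0 * x**2 - 10.0                # classic test equation
df = lambda x: 3.0 * x**2 + 8.0 * x
r, x = 1.3652300134140969, 1.0
for _ in range(2):
    x = eois_step(f, df, x)
    print(abs(x - r))     # errors should collapse at an eighth-order rate
```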
Substituting the functions in Cases (a)-(f) into the iterative scheme (69), we obtain different algorithms; in particular, using the functions in Equations (81) and (82), we recover the iterative schemes developed in [34,50], which are abbreviated as the SSM and BRWM in Table 11. The same convergence criteria as above are adopted. It can be seen that these iterative schemes have the same performance.
Table 11.
For the two test functions with their respective initial guesses, comparing the numbers of iterations.
10. Conclusions
We have derived a simple iterative scheme to solve nonlinear equations from a two-dimensional approach, with two constant parameters involved. Carrying out the convergence analysis, the parameters were constrained by a derived inequality, which resulted in a third-order convergent iterative scheme. The presented method is derivative-free, and the theoretically and numerically proven order of convergence is three. We proved that the iterative scheme is a variant of Newton's method, improved from quadratic convergence to cubic convergence. Numerical results revealed that the new method can be of practical interest for finding the solution quickly; at each iteration, it merely requires one evaluation of the given function. The proposed fractional iterative scheme was combined with the generalized quadrature methods to develop a class of fourth-order optimal iterative schemes. Examples were given to show that the COC is close to four. We employed a finite difference technique on the data at three points near the solution to estimate the optimal values of the two parameters, whose efficiency was confirmed by numerical tests. In terms of weight functions with certain constraints, two new generalized Hermite interpolation techniques were developed for the transformations between the derivatives at two points and at three points. A modification of the triple Newton method with the two generalized Hermite interpolation formulas realizes the eighth-order optimal iterative scheme, whose E.I. = 1.682 is better than the E.I. = 1.414 of the triple Newton method. Two examples were used to test the accuracy of the generalized Hermite interpolations and the efficiency of the proposed eighth-order optimal iterative schemes. In summary, the novel points of this paper are as follows:
- A two-dimensional approach to the modification of the Newton method is developed.
- The resulting fractional iterative scheme, involving two parameters, is of the one-step type and of third-order convergence, which saves much function computation per iteration.
- A new fourth-order optimal iterative scheme was constructed.
- A new three-point generalized Hermite interpolation was constructed, which is quite general.
- A new eighth-order optimal iterative scheme was constructed.
Author Contributions
Methodology, C.-S.L. and C.-W.C.; Software, C.-S.L., E.R.E.-Z. and C.-W.C.; Validation, C.-S.L. and C.-W.C.; Formal analysis, C.-S.L. and C.-W.C.; Investigation, C.-S.L., E.R.E.-Z. and C.-W.C.; Resources, E.R.E.-Z. and C.-W.C.; Data curation, C.-S.L.; Writing—original draft, C.-S.L.; Writing—review & editing, C.-W.C.; Visualization, C.-S.L., E.R.E.-Z. and C.-W.C.; Supervision, C.-S.L. and C.-W.C.; Project administration, C.-W.C.; Funding acquisition and Resources, E.R.E.-Z. and C.-W.C. All authors have read and agreed to the published version of the manuscript.
Funding
This work was financially supported by the National United University [grant number: 111I1206-8] and the National Science and Technology Council [grant number: NSTC 112-2221-E-239-022]. This study is also supported via funding from Prince Sattam bin Abdulaziz University, project number (PSAU/2023/R/1445).
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| BRWM: | Bi, Ren and Wu method |
| COC: | computed order of convergence |
| DFNM: | derivative-free Newton method |
| E.I.: | efficiency index |
| EOIS: | eighth-order optimal iterative scheme |
| FDFNM: | first derivative-free Newton method |
| FOIS: | fourth-order optimal iterative scheme |
| HM: | Halley method |
| LM: | Li method |
| NI: | number of iterations |
| NM: | Newton method |
| RE: | relative error |
| SDFNM: | second derivative-free Newton method |
| SSM: | Sharma and Sharma method |
| WLM: | Wang and Liu method |
References
- Chun, C.; Ham, Y. Some fourth-order modifications of Newton’s method. Appl. Math. Comput. 2008, 197, 654–658. [Google Scholar] [CrossRef]
- Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
- Noor, M.A.; Noor, K.I.; Al-Said, E.; Waseem, M. Some new iterative methods for nonlinear equations. Math. Prob. Eng. 2010, 2010, 198943. [Google Scholar] [CrossRef]
- Morlando, F. A class of two-step Newton’s methods with accelerated third-order convergence. Gen. Math. Notes 2015, 29, 17–26. [Google Scholar]
- Thukral, R. New modification of Newton method with third-order convergence for solving nonlinear equations of type f(0) = 0. Am. J. Comput. Appl. Math. 2016, 6, 14–18. [Google Scholar]
- Saqib, M.; Iqbal, M. Some multi-step iterative methods for solving nonlinear equations. Open J. Math. Sci. 2017, 1, 25–33. [Google Scholar] [CrossRef]
- Qureshi, U.K. A new accelerated third-order two-step iterative method for solving nonlinear equations. Math. Theo. Model. 2018, 8, 64–68. [Google Scholar]
- Ali, F.; Aslam, W.; Ali, K.; Anwar, M.A.; Nadeem, A. New family of iterative methods for solving nonlinear models. Discr. Dyn. Nat. Soc. 2018, 2018, 9619680. [Google Scholar] [CrossRef]
- Homeier, H.H.H. On Newton-type methods with cubic convergence. J. Comput. Appl. Math. 2005, 176, 425–432. [Google Scholar] [CrossRef]
- Ozban, A.Y. Some new variants of Newton’s method. Appl. Math. Lett. 2004, 17, 677–682. [Google Scholar] [CrossRef]
- Lukic, T.; Ralevic, N.M. Geometric mean Newton’s method for simple and multiple roots. Appl. Math. Lett. 2008, 21, 30–36. [Google Scholar] [CrossRef]
- Ababneh, O.Y. New Newton’s method with third-order convergence for solving nonlinear equations. World Acad. Sci. Eng. Tech. 2012, 6, 1269–1271. [Google Scholar]
- Abdul-Hassan, N.Y. Two new predictor-corrector iterative methods with third- and ninth-order convergence for solving nonlinear equations. Math. Theo. Model. 2016, 6, 44–56. [Google Scholar]
- Kou, J.; Li, Y.; Wang, X. Third-order modification of Newton's method. J. Comput. Appl. Math. 2007, 205, 1–5. [Google Scholar]
- Chun, C. On the construction of iterative methods with at least cubic convergence. Appl. Math. Comput. 2007, 189, 1384–1392. [Google Scholar] [CrossRef]
- Verma, K.L. On the centroidal mean Newton’s method for simple and multiple roots of nonlinear equations. Int. J. Comput. Sci. Math. 2016, 7, 126–143. [Google Scholar] [CrossRef]
- Liu, C.S.; Li, T.L. A new family of fourth-order optimal iterative schemes and remark on Kung and Traub’s conjecture. J. Math. 2021, 2021, 5516694. [Google Scholar] [CrossRef]
- Li, S. Fourth-order iterative method without calculating the higher derivatives for nonlinear equation. J. Algor. Comput. Tech. 2019, 13, 1–8. [Google Scholar] [CrossRef]
- Chun, C. Some fourth-order iterative methods for solving nonlinear equations. Appl. Math. Comput. 2008, 195, 454–459. [Google Scholar] [CrossRef]
- King, R. A family of fourth-order iterative methods for nonlinear equations. SIAM J. Numer. Anal. 1973, 10, 876–879. [Google Scholar] [CrossRef]
- Chun, C. Certain improvements of Chebyshev-Halley methods with accelerated fourth-order convergence. Appl. Math. Comput. 2007, 189, 597–601. [Google Scholar] [CrossRef]
- Kou, J.; Li, Y.; Wang, X. Fourth-order iterative methods free from second derivative. Appl. Math. Comput. 2007, 184, 880–885. [Google Scholar]
- Chun, C. Some variants of King’s fourth-order family of methods for nonlinear equations. Appl. Math. Comput. 2007, 190, 57–62. [Google Scholar] [CrossRef]
- Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1960. [Google Scholar]
- Maheshwari, A.K. A fourth order iterative method for solving nonlinear equations. Appl. Math. Comput. 2009, 211, 383–391. [Google Scholar] [CrossRef]
- Ghanbari, B. A new general fourth-order family of methods for finding simple roots of nonlinear equations. J. King Saud Univ. Sci. 2011, 23, 395–398. [Google Scholar] [CrossRef]
- Khattri, S.K.; Noor, M.A.; Al-Said, E. Unifying fourth-order family of iterative methods. Appl. Math. Lett. 2011, 24, 1295–1300. [Google Scholar] [CrossRef]
- Kumar, S.; Kanwar, V.; Singh, S. Modified efficient families of two and three-step predictor-corrector iterative methods for solving nonlinear equations. Appl. Math. 2010, 1, 153–158. [Google Scholar] [CrossRef]
- Chicharro, F.I.; Cordero, A.; Garrido, N.; Torregrosa, J.R. Wide stability in a new family of optimal fourth-order iterative methods. Comput. Math. Meth. 2019, 1, e1023. [Google Scholar] [CrossRef]
- Liu, D.; Liu, C.S. Two-point generalized Hermite interpolation: Double-weight function and functional recursion methods for solving nonlinear equations. Math. Comput. Simul. 2022, 193, 317–330. [Google Scholar] [CrossRef]
- Cordero, A.; Hueso, J.L.; Martinez, E.; Torregrosa, J.R. New modifications of Potra-Ptak’s method with optimal fourth and eighth orders of convergence. J. Comput. Appl. Math. 2010, 234, 2969–2976. [Google Scholar] [CrossRef]
- Chun, C.; Ham, Y. Some sixth-order variants of Ostrowski root-finding methods. Appl. Math. Comput. 2007, 193, 389–394. [Google Scholar] [CrossRef]
- Kou, J.; Li, Y.; Wang, X. Some variants of Ostrowski's method with seventh-order convergence. J. Comput. Appl. Math. 2007, 209, 153–159. [Google Scholar] [CrossRef]
- Bi, W.; Ren, H.; Wu, Q. Three-step iterative methods with eighth-order convergence for solving nonlinear equations. J. Comput. Appl. Math. 2009, 225, 105–112. [Google Scholar] [CrossRef]
- Bi, W.; Wu, Q.; Ren, H. A new family of eighth-order iterative methods for solving nonlinear equations. Appl. Math. Comput. 2009, 214, 236–245. [Google Scholar] [CrossRef]
- Wang, X.; Liu, L. Modified Ostrowski’s method with eighth-order convergence and high efficiency index. Appl. Math. Lett. 2010, 23, 549–554. [Google Scholar] [CrossRef]
- Petković, M.S. On optimal multipoint methods for solving nonlinear equations. Novi Sad J. Math. 2009, 39, 123–130. [Google Scholar]
- Petković, M.S.; Petković, L.D. Families of optimal multipoint methods for solving nonlinear equations: A survey. Appl. Anal. Discr. Math. 2010, 4, 1–22. [Google Scholar]
- Neta, B.; Petković, M.S. Construction of optimal order nonlinear solvers using inverse interpolation. Appl. Math. Comput. 2010, 217, 2448–2455. [Google Scholar] [CrossRef]
- Soleymani, F.; Shateyi, S.; Salmani, H. Computing simple roots by an optimal sixteenth-order class. J. Appl. Math. 2012, 2012, 958020. [Google Scholar] [CrossRef]
- Matinfar, M.; Aminzadeh, M. Three-step iterative methods with eighth-order convergence for solving nonlinear equations. J. Interpol. Approx. Sci. Comput. 2013, 2013, jiasc-00013. [Google Scholar] [CrossRef][Green Version]
- Zafar, F.; Yasmin, N.; Akram, S.; Junjua, M.D. A general class of derivative-free optimal root finding methods based on rational interpolation. Sci. World J. 2015, 2015, 935260. [Google Scholar] [CrossRef] [PubMed]
- Junjua, M.D.; Zafar, F.; Yasmin, N. Optimal derivative-free root finding methods based on inverse interpolation. Mathematics 2019, 7, 164. [Google Scholar] [CrossRef]
- Zhanlav, T.; Chuluunbaatar, O.; Ulziibayar, V. Generating function method for constructing new iterations. Appl. Math. Comput. 2017, 315, 414–423. [Google Scholar] [CrossRef]
- Liu, C.S. A new splitting technique for solving nonlinear equations by an iterative scheme. J. Math. Res. 2020, 12, 40–48. [Google Scholar] [CrossRef]
- Liu, C.S.; Hong, H.K.; Lee, T.L. A splitting method to solve a single nonlinear equation with derivative-free iterative schemes. Math. Comput. Simul. 2021, 190, 837–847. [Google Scholar] [CrossRef]
- Halley, E. A new exact and easy method for finding the roots of equations generally and without any previous reduction. Philos. Trans. Roy. Soc. London 1694, 18, 136–148. [Google Scholar]
- Liu, C.S.; Li, T.L. A new family of generalized quadrature methods for solving nonlinear equations. Asian-Eur. J. Math. 2022, 15, 2250044. [Google Scholar] [CrossRef]
- Cordero, A.; Franceschi, J.; Torregrosa, J.R.; Zagati, A.C. A convex combination approach for mean-based variants of Newton’s method. Symmetry 2019, 11, 1106. [Google Scholar] [CrossRef]
- Sharma, J.R.; Sharma, R. A new family of modified Ostrowski’s methods with accelerated eighth order convergence. Numer. Algorithms 2010, 54, 445–458. [Google Scholar] [CrossRef]