Abstract
In this paper, we iteratively solve a scalar nonlinear equation f(x) = 0, where f: I ⊂ ℝ → ℝ, and I includes at least one real root r. Three novel two-step iterative schemes equipped with memory-updating methods are developed; they are variants of the fixed-point Newton method. A triple-data interpolation is carried out by the second-degree Newton polynomial, which is used to update the values of the two critical parameters. The relaxation factor in the supplementary variable is accelerated by imposing an extra condition on the interpolant. The new memory method (NMM) can raise the efficiency index (E.I.) significantly. We apply the NMM to five existing fourth-order iterative methods, and the computed order of convergence (COC) and E.I. are evaluated by numerical tests. When the relaxation factor acceleration technique is combined with the modified Džunić memory method, the value of E.I. is much larger than that predicted by Kung and Traub [J. Assoc. Comput. Machinery 1974, 21] for iterative methods without memory.
Keywords: nonlinear equation; two-step iterative schemes; new memory updating method; relaxation factor; supplementary variable
MSC: 65H05; 65B99; 41A25
1. Introduction
Three novel two-step iterative schemes with memory will be proposed to solve a given scalar nonlinear equation:
where I is an interval to include the real solution of .
For iteratively solving Equation (1), the famous Newton method (NM) is
x_{n+1} = x_n - f(x_n)/f'(x_n). (2)
Obviously, the Newton method requires f to be differentiable; even so, the NM remains a popular iterative method for solving Equation (1), owing to its simplicity.
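For concreteness, a minimal Python sketch of the Newton iteration (2) is given below. The test function, starting point, tolerance, and iteration cap are illustrative assumptions, not values taken from this paper.

```python
def newton(f, df, x0, tol=1e-12, maxit=100):
    """Newton method (2): x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for n in range(maxit):
        fx, dfx = f(x), df(x)
        if dfx == 0.0:                      # breakdown: the derivative vanishes
            raise ZeroDivisionError("f'(x) = 0 encountered at x = %g" % x)
        x_new = x - fx / dfx
        if abs(x_new - x) < tol:            # stop when the correction is tiny
            return x_new, n + 1
        x = x_new
    return x, maxit

# Illustrative use: solve x**3 - 2 = 0 from x0 = 1.5 (hypothetical example).
root, iters = newton(lambda x: x**3 - 2.0, lambda x: 3.0 * x**2, 1.5)
```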
From Equation (2), we can define the following Newton iteration function:
It follows that
The critical point of the mapping has two sources: one is the root of , and the other is the zero point of , which causes . When the iteration tends to the zero point of , the NM no longer converges to the real solution of . For the function , as an example, we have
It follows that
Because of and , if the initial point is located near 0 and 1, the NM does not converge to the true solution of .
In summary, the NM possesses some drawbacks, such as sensitivity to the initial guess, division by a nearly zero denominator, and nonconvergence near the critical values that are not roots of . To overcome these difficulties, we propose the following perturbation of the Newton method. Mathematically, Equation (1) can be written as
where is determined in [1]. By canceling and on both sides of
we can achieve
which is equivalent to Equation (1).
From Equation (8):
which, upon dividing both sides by the factor preceding , yields
The iterative scheme (10) was developed as a one-step continuation Newton-like method [2], and it was used as the first step in the multistep iterative schemes in [3,4,5]. Some dynamical analyses of Equation (10) can be seen in [6,7]. When , Equation (10) is still applicable, whereas Equation (2) fails. As pointed out by Wu [8], Equation (10) has some merits over the NM.
The following second-order boundary value problem (BVP) demonstrates the usefulness of Equation (1):
Upon letting
and using and to render and automatically, we can transform Equations (11) and (12) to an initial value problem of the following ordinary differential equation (ODE):
the solution is endowed with an unknown value x, given by
If a real value exists for x, then we have a real solution for . By imposing ,
results in the equation for determining x; then, by Equations (16) and (13), the exact solution can be obtained.
Sometimes is obtained from a nonlinear ODE, rather than the linear ODE in Equation (11). We further consider a nonlinear BVP:
One assumes an initial value with x being unknown and integrates Equation (18) with the initial conditions and . The nonlinear equation for satisfying the right-end boundary condition in Equation (19) is then derived. Since is an implicit function of x, the function cannot be written out explicitly. In this nonlinear problem, when we apply the NM to solve , we encounter difficulty in calculating . Recently, Liu et al. [9] proposed a single-step memory-dependent method to solve by a Lie-symmetry formulation of Equation (18).
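To make the shooting formulation concrete, the following sketch assembles a residual function F(x) whose zero is sought. The particular ODE, boundary values, and tolerances are hypothetical, chosen only to keep the example self-contained; F(x) is defined only implicitly through the numerical integration, as discussed above.

```python
from scipy.integrate import solve_ivp

# Hypothetical BVP for illustration: u'' = -u + u**3, u(0) = 0, u(1) = 0.5.
a, b = 0.0, 0.5

def rhs(t, y):
    u, v = y                               # y = (u, u')
    return [v, -u + u**3]

def F(x):
    """Shooting residual F(x) = u(1; x) - b for the unknown initial slope x = u'(0)."""
    sol = solve_ivp(rhs, (0.0, 1.0), [a, x], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1] - b

# F(x) = 0 can then be solved by any of the iterative schemes in this paper;
# note that F'(x) is not available in closed form, so derivative-free or
# memory-type schemes are attractive here.
```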
Consider the following one [10]:
which simulates the time-varying relation between stress and strain of an elastic–perfectly plastic material. In the above, is the elastic modulus and is the yield stress of the material. We need to solve the nonlinear scalar equation to determine x, but is governed by a system of first-order ODEs coupled to x in Equation (20). The difficulty in using the NM to solve Equation (21) is that the transition from the elastic phase to the plastic phase is not smooth.
Nonlinear equations arise in many engineering problems, and adapted mathematical methods have been proposed for solving them, e.g., a weighted density functional theory for an inhomogeneous 12-6 Lennard–Jones fluid and the Euler–Lagrange equation derived from the density functional theory of inhomogeneous fluids [11], the governing mathematical equations defining the physical features of the first-grade viscoelastic nanofluid flow and heat transfer models [12], and a specialized nonlinear Fredholm integral equation in the turbo-reactors industry [13].
If one attempts to obtain an approximate analytical solution of a nonlinear BVP, a functional iteration method may be a useful tool. A conventional functional iteration method is the Picard iteration method; however, it has a major disadvantage in its slow convergence. To improve the convergence property, He [14,15] proposed the variational iteration method, which is a modification of the Picard iteration method for second-order nonlinear initial value problems. Recently, Wang et al. [16] developed an accurate predictor–corrector Picard iteration method for solving nonlinear problems. For the Newton–Kurchatov method for solving nonlinear equations, Argyros et al. [17] presented a semilocal analysis and derived weaker sufficient semilocal convergence criteria, and Argyros and Shakhno [18] established local convergence for Banach space valued equations.
The drawbacks and limitations of the NM mentioned above have induced many studies, continuing to the present, that provide modifications of the Newton method, such as the Adomian decomposition method [19], the decomposition method [20], the arithmetic mean quadrature method [21], the contra-harmonic mean and quadrature method [22], power-means variants [23], the modified homotopy perturbation method [24], the generating function method [25], the perturbation of Newton's method [26], and the variants of Bawazir's iterative methods [27].
To improve the low-order convergence of the one-step iterative scheme and to enhance the convergence order, many multistep iterative schemes were developed. One can refer to [28,29] for many discussions of the multistep iterative methods. According to the conjecture of Kung and Traub [30], the upper bound of the efficiency index (E.I.) for the optimal iterative scheme with m evaluations of functions is E.I. = 2^{(m-1)/m}. For m = 2, the NM is an optimal iterative scheme with E.I. = 1.414. With m = 3, the Halley method is not optimal, having a low value E.I. = 1.44225. The conjecture of Kung and Traub [30] is only applicable to integer-order convergence schemes. The computed order of convergence (COC) proposed in [21] can be adopted to evaluate the convergence order of an iterative scheme. In this paper, we extend the Newton method by the idea of perturbations, which involves some optimal parameters determined by the convergence analysis. The COC of these two-step iterative schemes is larger than four.
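The efficiency indices quoted in this paragraph follow from E.I. = p^{1/m}, where p is the convergence order and m the number of function evaluations per step; the short check below simply reproduces those numbers.

```python
# Efficiency index E.I. = p**(1/m): order p gained per m function evaluations.
print(2 ** (1 / 2))   # Newton:  p = 2, m = 2  ->  1.4142...
print(3 ** (1 / 3))   # Halley:  p = 3, m = 3  ->  1.4422...
print(4 ** (1 / 3))   # optimal two-step scheme:   p = 4, m = 3  ->  1.5874...
print(8 ** (1 / 4))   # optimal three-step scheme: p = 8, m = 4  ->  1.6818...
```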
Liu et al. [31] verified that the following iterative scheme:
is of third-order convergence, if and . By using the accelerating parameters, we can speed up the convergence. A more detailed analysis of the iterative scheme (22) can be seen in [1]. Equation (22) was addressed by Liu [32] from a two-dimensional approach together with the splitting technique. If we take and , Equation (22) is known to be a fixed-point Newton method:
Traub [33] developed a simple accelerating method by giving and :
By taking the memory of into account, the order of convergence can be raised. Traub's technique is a typical method with memory, in which the data from the previous iteration are adopted in the current iteration. For recent progress on memory methods with accelerating parameters in multistep iterative schemes, one can refer to [5,27,34,35,36,37,38,39,40]. One major goal of this paper is to develop two-step iterative schemes with a new memory method that determines the accelerating parameters by an updating technique using the information at the current step.
2. Three New Two-Step Iterative Schemes
In 2010, Wang and Liu [41] proposed
which is a two-step iterative scheme based on Equation (10). The error equation is
where and . Equation (25) is a two-step iterative scheme because it involves two variables, and , and two steps for computing : the first step computes , and then and are inserted into the second step to compute .
Wang and Zhang [42] developed a family of Newton-type two-step iterative schemes with a memory method for solving nonlinear equations, whose R-convergence order is increased from 4 to 4.5616, 4.7913, and 5, depending on which updating techniques are used for the accelerating parameters. Nowadays, most memory-accelerating methods do not include the derivative term in the iterative schemes.
2.1. First New Two-Step Iterative Scheme
Instead of Equation (25), we consider an extension of Equation (23) to a two-step iterative scheme:
where is a parameter whose optimal value is to be determined. The first step is a variant of the so-called fixed-point Newton method .
Theorem 1.
The function is sufficiently differentiable on the domain I, and is a simple root with and . If is sufficiently close to r within the radius of convergence, then the iterative scheme (27) for solving has fourth-order convergence:
where the optimal value of β is given by
Furthermore, the error of reads as
where .
Proof.
Define
as usual,
A straightforward computation renders
where
By using the second equation in (27), subtracting r from both sides, and invoking Equations (34), (38), and (39), we can derive
If we take , then , , and then Equation (28) is derived.
If is taken, Equation (38) reduces to
The proof of Theorem 1 is completed. □
Notice that the error Equation (28) is the same as that of Ostrowski's two-step fourth-order optimal iterative scheme [43]. Since Equation (27) is different from Equation (25), the error Equation (28) is different from that in Equation (26); if , the iterative scheme (25) also has fourth-order convergence.
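For reference, the classical Ostrowski scheme mentioned above can be sketched as follows; this is the standard two-step method from [43], not the proposed scheme (27), and the tolerance and iteration cap are illustrative assumptions.

```python
def ostrowski(f, df, x0, tol=1e-14, maxit=50):
    """Ostrowski's two-step fourth-order optimal scheme:
       y     = x - f(x)/f'(x),
       x_new = y - f(y)/f'(x) * f(x)/(f(x) - 2 f(y))."""
    x = x0
    for n in range(maxit):
        fx, dfx = f(x), df(x)
        y = x - fx / dfx                                  # Newton predictor
        fy = f(y)
        x_new = y - fy * fx / (dfx * (fx - 2.0 * fy))     # Ostrowski corrector
        if abs(x_new - x) < tol:
            return x_new, n + 1
        x = x_new
    return x, maxit
```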
2.2. Second New Two-Step Iterative Scheme
Let
where and are two parameters whose optimal values are to be determined.
Theorem 2.
The function is sufficiently differentiable on the domain I, and is a simple root with and . If is sufficiently close to r within the radius of convergence, then the iterative scheme (42) with for solving has fourth-order convergence:
If the optimal value of β is given by
then the iterative scheme (42) with has fifth-order convergence.
Proof.
2.3. Third New Two-Step Iterative Scheme
We further consider
The second step is enhanced by using , rather than , in Equation (27). Equation (46) is a two-step iterative scheme because it involves two variables, and , and two steps for computing : the first step computes , and then is inserted into the second step to compute .
Theorem 3.
The function is sufficiently differentiable on the domain I, and is a simple root with and . If is sufficiently close to r within the radius of convergence, then the iterative scheme (46) for solving has fourth-order convergence:
If , Equation (47) reduces to . That is, the iterative scheme (46) has fifth-order convergence.
3. Four New Memory Methods
3.1. The First and Second New Memory Methods
The error Equation (40) is simpler than that in Equation (26). Theorem 1 indicates that the optimal value of is . However, because the root r is itself unknown, and are not available; yet they are critical parameters for enhancing the performance of the proposed two-step iterative schemes.
Memory-dependent techniques to obtain suitable values of the parameters were developed in [34,44,45,46,47,48]. Let and . We develop a new memory method for updating the values of A and B with the current values. In Equation (27), there are only two current values, and , which are insufficient to update . Therefore, we introduce a supplementary variable obtained by the fixed-point Newton method:
Then, with the three data , a second-degree Newton polynomial that is an interpolant is given by
where
It is easy to derive , and
In general, .
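A minimal sketch of this triple-data interpolation is given below; the node and value names are generic placeholders, and how the derivatives of the interpolant are used to update the two parameters is specified in the text above and in the FNMUM below.

```python
def newton_poly2(x0, f0, x1, f1, x2, f2):
    """Second-degree Newton interpolation polynomial through three data points:
       N2(t) = f0 + f[x0,x1](t - x0) + f[x0,x1,x2](t - x0)(t - x1).
       Returns N2 and its first and second derivatives."""
    d01 = (f1 - f0) / (x1 - x0)             # first divided difference f[x0, x1]
    d12 = (f2 - f1) / (x2 - x1)             # first divided difference f[x1, x2]
    d012 = (d12 - d01) / (x2 - x0)          # second divided difference f[x0, x1, x2]

    N2 = lambda t: f0 + d01 * (t - x0) + d012 * (t - x0) * (t - x1)
    dN2 = lambda t: d01 + d012 * (2.0 * t - x0 - x1)
    d2N2 = lambda t: 2.0 * d012             # the second derivative is constant
    return N2, dN2, d2N2
```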
The new algorithm based on Theorem 1, namely the first new memory-updating method (FNMUM), is depicted by (i) giving , , , and and computing and by
(ii) for , perform the following computations until convergence:
There are three function evaluations, of , , and , so that the optimal order of convergence is four and E.I. = 1.5874. The role of , which does not engage in the iteration, is different from that of and : and are step variables used in the iteration in Equations (55) and (56), while is computed from Equation (54) to provide an extra datum used in Equations (57) and (58) to update the values of and .
Therefore, the present parameter-updating technique is different from the memory-dependent accelerating techniques in [34,44,45,46,47,48]. In the FNMUM, no previous iteration values of and are used apart from the initial values and . Therefore, the new memory method saves much more computational cost than the previous memory-accelerating techniques.
The second new memory-updating method (SNMUM) based on Theorem 2 is depicted by (i) giving , , , and and computing and by
(ii) for , perform the following computations until convergence:
In the SNMUM, only the initial values and are guessed, and no previous iteration values of and are used, which renders it more computationally cost-effective than the previous memory-accelerating techniques.
3.2. The Third New Memory Method
According to Theorem 3, the third new memory-updating method (TNMUM), is depicted by (i) giving , , , and and computing and by
(ii) for , perform the following computations until convergence:
Similarly, in the TNMUM, only the initial values and are guessed, and no previous iteration values of and are used; it is quite computationally cost-effective.
3.3. The Accelerated Third New Memory Method
In the previous three new memory methods, the datum of at was not used. We modify the supplementary variable by
where is a relaxation factor to be designed.
Then, we set
where was given by Equation (50). Inserting
into Equation (72), we can derive
Through some manipulations we can derive
which can be used to update .
The new algorithm based on Theorem 3 and Equations (71) and (75), namely the accelerated third new memory-updating method (ATNMUM), is depicted by (i) giving , , , and and computing and by
(ii) for , perform the following computations until convergence:
In Equation (46), we take the free parameter .
In the ATNMUM, the initial values , , and are guessed, and no previous iteration values of and are used; it is quite computationally cost-effective.
In the above four iterative algorithms, FNMUM, SNMUM, TNMUM, and ATNMUM, a function with is sufficient. We suppose that the interval I includes at least one real root. In general, the nonlinear equation may possess many roots, which can be determined by the iterative schemes with different initial guesses of .
4. A Simple Memory Approach to Existent Iterative Schemes
4.1. The WMM Method
Wang [36] modified the Ostrowski-type method with memory by using a self-accelerating parameter , given by
whose error equation was proven to be
Substituting the first equation into the second one in Equation (83) shows that it is indeed a two-step method:
Upon taking to be an updating parameter, the order can be raised by the Wang memory method (WMM), which is depicted by (i) giving , , , and and computing by
(ii) perform for ,
There exist four evaluations of functions for , , , and .
4.2. The ZLHMM Method
Zheng et al. [49] modified the Steffensen-type method with an accelerating parameter :
whose error equation is
Let and . The ZLH memory method (ZLHMM) reads as (i) giving , , and computing by
(ii) for ,
There exist three evaluations of functions for , , and .
4.3. The CCTMM Method
Chicharro et al. [37] proposed a two-step iterative scheme:
where , , and , and . They derived the following error equation:
If we take , the convergence order increases by at least one.
Let . The CCT memory method (CCTMM) reads as (i) giving , , and computing by Equation (93), and (ii) for ,
In the numerical test, we take . Three evaluations of functions for , , and are needed.
4.4. The DMM Method
In 2013, Džunić [34] proposed a two-step iterative scheme:
where , , and , and . Džunić [34] verified that the convergence order is at least seven when the parameters and p are accelerated by the memory-dependent technique using the third-degree and fourth-degree Newton polynomials.
Here, we employ the second-degree Newton polynomial in Equation (50) to update and p. Let and . The new Džunić memory method (DMM) reads as (i) giving , , , and computing and by Equation (53), as well as (ii) performing, for ,
In the numerical test, we take . Three evaluations of functions for , , and are needed.
Like the accelerated third new memory method in Section 3.3, we propose a modification of DMM (MDMM). The new memory method of MDMM reads as (i) giving , , , , and computing and by Equation (53), as well as (ii) performing, for ,
4.5. The CLBTMM Method
In 2015, Cordero et al. [50] proposed a two-step iterative scheme:
where . They argued that the memory-dependent method possesses at least seventh-order convergence.
Let and . The new CLBT memory method (CLBTMM) reads as (i) giving , , , and computing and by Equation (53), as well as (ii) performing, for ,
Three evaluations of functions for , , and are needed.
5. Numerical Verifications of the New Memory-Updating Method
We give some examples to assess the performance of the proposed iterative methods by the numerically computed order of convergence (COC), which is approximated by [21,51]
The triple consists of the last three values. If and are not computable due to and , we can shift the triple forward to .
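As a sketch, one common way to evaluate such a computed order of convergence from the last iterates is shown below; the exact definitions of COC1 and COC2 used in the tables are those given in the text, so this is only an illustrative approximation.

```python
import math

def coc(x_nm1, x_n, x_np1, r):
    """Computed order of convergence from the last three iterates and the
       (converged) root r:  log(e_{n+1}/e_n) / log(e_n/e_{n-1})."""
    e0, e1, e2 = abs(x_nm1 - r), abs(x_n - r), abs(x_np1 - r)
    return math.log(e2 / e1) / math.log(e1 / e0)
```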
To save space, we test three examples by
The corresponding solutions are, respectively, , , and .
As noticed by Džunić [34], exhibits nontrivial behavior, since two relatively close roots, and , appear, and a singularity is close to the sought root . In practice, we solved by the Newton method. With , the NM spent 12 iterations to find ; with , the NM did not converge to within 500 iterations.
With three evaluations of functions, the optimal order of convergence of a two-step iterative scheme without memory is four, and E.I. = 1.5874. In Table 1, we list the number of iterations (NI) needed to satisfy , where, as expected, the value of E.I. is near or larger than 1.5874. The roots of are triple; however, the FNMUM can still quickly find the solution . Through the new memory-updating method, the convergence order is significantly increased.
Table 1.
The NI, COC1, COC2, and E.I. for the first new memory-updating method (FNMUM).
Compared with the Newton method mentioned above, for , the FNMUM with spent five iterations to find ; with , the FNMUM spent eight iterations for ; and with , it took six iterations for .
For , the exact values of and are available; hence, we can estimate the errors of the parameters' values by ERR1 = and ERR2 = . Table 2 demonstrates that and tend to the exact values.
Table 2.
The values of and tend to exact ones.
In Table 3, we list the number of iterations (NI) to satisfy , where the value of E.I. is near or larger than 1.5874. As expected, the second new memory-updating method significantly increased the COC by more than 4.7 for and . However, for , the SNMUM is weaker.
Table 3.
The NI, COC1, COC2, and E.I. for the second new memory-updating method (SNMUM).
In Table 4, we list the results for the third new memory-updating method (TNMUM). As expected, the COCs are greater than four for and .
Table 4.
The NI, COC1, COC2, and E.I. for the third new memory-updating method (TNMUM).
In Table 5, we list the results for the accelerated third new memory-updating method (ATNMUM). As expected, the COCs are larger than those in Table 3.
Table 5.
The NI, COC1, COC2, and E.I. for the accelerated third new memory-updating method (ATNMUM).
We can estimate the errors by ERR1 = and ERR2 = for the solution of by using the ATNMUM. Table 6 demonstrates that tends to the exact value, and quickly approaches owing to the design of the relaxation factor in Equation (75).
Table 6.
The values of and tend to exact ones.
In Table 7, we list the number of iterations (NI) for solving by the WMM, which requires four evaluations of functions; hence, we take E.I. = (COC1)^{1/4}. The good performance of the new memory method can be seen, which raises COC = 4 to values in the range .
Table 7.
The NI, COC1, COC2, and E.I. for the WMM method.
In Table 8, we list the number of iterations (NI) for solving by the ZLHMM method.
Table 8.
The NI, COC1, COC2, and E.I. for the ZLHMM method.
In Table 9, we list the number of iterations (NI) to satisfy , where, as expected, the value of E.I. is near or larger than 1.7. Moreover, with the same initial value , the presented COC1 and NI for are better than those computed by Chicharro et al. [37], where NI = 6 and ACOC = 4.476. The values 1.705, 1.736, and 1.986 are also better than the E.I. = 8^{1/4} = 1.682 obtained by the eighth-order optimal iterative scheme with four evaluations of functions.
Table 9.
The NI, COC1, COC2, and E.I. for the CCTMM method.
In Table 10, the presented COC2 for is better than that computed by Džunić [34], where COC = 6.9 is smaller than 7.3. Even though we do not use the full memory information, the performance is better than that in [34].
Table 10.
The NI, COC1, COC2, and E.I. for the DMM method.
In Table 11, the presented COC2 for is better than that computed by Džunić [34], where COC = 6.9 is smaller than 11.959. Even though we do not use the full memory information, the performance is better than that in [34]. Compared with Table 10, the higher performance in Table 11 was gained by introducing a relaxation factor in the modified Džunić memory method. The COC2 = 21.009 is abnormally high for the solution of .
Table 11.
The NI, COC1, COC2, and E.I. for the MDMM method.
In Table 12, the presented COC2 = 8.53 for with the root is better than that computed in [52], where COC = 7 is smaller than 8.53. Even though we do not use the full memory information, the performance is better than that in [52], where and were used for the memory-dependent function in Equation (119); is the third-degree Newton interpolation polynomial.
Table 12.
The NI, COC1, COC2, and E.I. for the CLBTMM method.
The presented COC2 = 9.579 for is better than that computed in [50], where COC = 6.9041 was obtained by using , , , and in Equation (119); and are, respectively, the third-degree and fourth-degree Newton interpolation polynomials. For with the root , the presented COC2 = 9.686 is better than the COC = 6.9727 computed in [50].
6. Enhancing the Updating Speed of Parameters
To further accelerate the updating speed of the parameters and , according to [52], we can take and . The memory-dependent method of CLBTM reads as (i) giving , , , and giving or computing and by Equation (53), and , as well as (ii) performing, for ,
where
In Table 13, the presented COC2 = 9.157 for is better than that computed in [52]. The presented COC2 = 15.634 for is better than that computed in [50], where COC = 6.9041 was obtained. The presented COC2 = 10.159 for is better than that computed in [50], where COC = 6.9727 was obtained.
Table 13.
The NI, COC1, COC2, and E.I. for the CLBTM method.
Comparing Table 13 with Table 12, the convergence speed, as reflected in the values of COC and E.I., is enhanced by the CLBTM; however, its complexity is increased compared with the iterative scheme CLBTMM in Section 4.5.
At this point, we can point out the differences between the new memory method (NMM) and the memory method (MM): in the MM, is computed, but it is not needed in the NMM; the supplementary variable is updated by Equation (131) in the MM, but no update of w is required in the NMM; and in the NMM, a lower-degree polynomial is sufficient, whereas for the MM, the higher-degree polynomials , , are necessary. In the authors' experience, the NMM saves much more computational cost than the MM.
7. Concluding Remarks
Traub [33] was the first to develop a memory method from Steffensen's iterative scheme:
By giving and , the above iterative scheme can be initiated. We notice that in the proposed new memory-updating method, we do not need to compute , which saves one more function evaluation compared with Traub's memory method.
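A commonly cited form of Traub's Steffensen-type scheme with memory is sketched below for comparison; the self-accelerating update gamma_{n+1} = -1/f[x_n, x_{n+1}] is the textbook version and is stated here as an assumption, and the starting parameter, tolerance, and iteration cap are illustrative.

```python
def traub_steffensen_memory(f, x0, gamma0=0.01, tol=1e-14, maxit=100):
    """Traub-type Steffensen scheme with memory (textbook form):
       w_n = x_n + gamma_n f(x_n),
       x_{n+1} = x_n - f(x_n)/f[x_n, w_n],
       gamma_{n+1} = -1/f[x_n, x_{n+1}]   (approximates -1/f'(r))."""
    x, gamma = x0, gamma0
    fx = f(x)
    for n in range(maxit):
        w = x + gamma * fx
        fw = f(w)
        dd = (fw - fx) / (w - x)                 # divided difference f[x_n, w_n]
        x_new = x - fx / dd
        fx_new = f(x_new)
        if abs(x_new - x) < tol:
            return x_new, n + 1
        gamma = -(x_new - x) / (fx_new - fx)     # memory update: -1/f[x_n, x_{n+1}]
        x, fx = x_new, fx_new
    return x, maxit
```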
In [34], the accelerating technique is based on the memory of by
In addition to the cost of storing the information from the previous iteration, there is an expensive computational cost for computing , , and . Since the work in [34], there has been extensive literature on this memory style using similar techniques. The new memory approach proposed in this paper updates the accelerating parameters without using the information at the previous iteration; it takes the current values of into account for updating and , and it is more computationally cost-effective than the previous techniques. The three new two-step iterative schemes, developed together with four updating techniques, worked very well to solve nonlinear equations quickly. Numerical examples revealed that, without computing extra functions, the new memory-updating methods can raise the convergence order by several orders and significantly enhance the values of E.I.
We introduced an accelerating technique in the third new memory method by imposing to determine the relaxation factor. The values of COC and E.I. are greatly raised by this accelerated third new memory-updating method. When the relaxation factor acceleration technique was combined with the modified Džunić memory method, very high values of COC = 13.222 and E.I. = 2.46 were achieved. High performance is achieved by the proposed two-step iterative methods for finding the roots of nonlinear equations.
The novelties involved in this paper are as follows:
- Developing three novel two-step iterative schemes with simple forms, which are derivative-free.
- A second-degree Newton polynomial was used to update two critical parameters, and , greatly saving computational cost: only three function evaluations are needed in the new memory method (NMM).
- The NMM was applied to five existing two-step iterative schemes, WMM, ZLHMM, CCTMM, DMM, and CLBTMM, with the high values of E.I. all being larger than 1.5874.
- The new idea of imposing to determine the relaxation factor was developed, whose resulting E.I. is larger than the value E.I. = 1.5874 of the fourth-order optimal iterative scheme.
- Combining the relaxation factor acceleration technique with the modified Džunić memory method, very high values of COC and E.I. were achieved.
Author Contributions
Conceptualization, C.-S.L.; Methodology, C.-S.L. and C.-W.C.; Software, C.-S.L. and C.-W.C.; Validation, C.-S.L. and C.-W.C.; Formal analysis, C.-S.L. and C.-W.C.; Investigation, C.-S.L. and C.-W.C.; Resources, C.-W.C.; Data curation, C.-S.L. and C.-W.C.; Writing—original draft, C.-S.L.; Writing—review and editing, C.-W.C.; Visualization, C.-S.L. and C.-W.C.; Supervision, C.-S.L.; Project administration, C.-S.L.; Funding acquisition, C.-W.C. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
The data presented in this study are available on request from the corresponding authors. The data are not publicly available due to privacy restrictions.
Conflicts of Interest
The authors declare no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| ATNMUM | Accelerated third new memory-updating method |
| BVP | Boundary value problem |
| CCTMM | Chicharro, Cordero, and Torregrosa’s memory method |
| CLBTM | Cordero, Lotfi, Bakhtiari, and Torregrosa’s method |
| CLBTMM | Cordero, Lotfi, Bakhtiari, and Torregrosa’s memory method |
| COC | Computed order of convergence |
| DMM | Džunić’s memory method |
| E.I. | Efficiency index |
| FNMUM | First new memory-updating method |
| MDMM | Modification of Džunić’s memory method |
| NI | Number of iterations |
| NM | Newton method |
| NMM | New memory method |
| ODE | Ordinary differential equation |
| SNMUM | Second new memory-updating method |
| TNMUM | Third new memory-updating method |
| WMM | Wang memory method |
| ZLHMM | Zheng, Li, and Huang’s memory method |
References
- Liu, C.S.; El-Zahar, E.R.; Chang, C.W. A two-dimensional variant of Newton’s method and a three-point Hermite interpolation: Fourth- and eighth-order optimal iterative schemes. Mathematics 2023, 11, 4529. [Google Scholar] [CrossRef]
- Wu, X.Y. A new continuation Newton-like method and its deformation. Appl. Math. Comput. 2000, 112, 75–78. [Google Scholar] [CrossRef]
- Lee, M.Y.; Kim, Y.I.; Magreñán, Á.A. On the dynamics of tri-parametric family of optimal fourth-order multiple-zero finders with a weight function of the principal mth root of a function-function ratio. Appl. Math. Comput. 2017, 315, 564–590. [Google Scholar]
- Zafar, F.; Cordero, A.; Torregrosa, J.R. Stability analysis of a family of optimal fourth-order methods for multiple roots. Numer. Algor. 2019, 81, 947–981. [Google Scholar] [CrossRef]
- Thangkhenpau, G.; Panday, S.; Mittal, S.K.; Jäntschi, L. Novel parametric families of with and without memory iterative methods for multiple roots of nonlinear equations. Mathematics 2023, 14, 2036. [Google Scholar] [CrossRef]
- Singh, M.K.; Singh, A.K. A derivative free globally convergent method and its deformations. Arab. J. Math. 2021, 10, 481–496. [Google Scholar] [CrossRef]
- Singh, M.K.; Argyros, I.K. The dynamics of a continuous Newton-like method. Mathematics 2022, 10, 3602. [Google Scholar] [CrossRef]
- Wu, X.Y. Newton-like method with some remarks. Appl. Math. Comput. 2007, 118, 433–439. [Google Scholar] [CrossRef]
- Liu, C.S.; Chang, C.W.; Kuo, C.L. Memory-accelerating methods for one-step iterative schemes with Lie-symmetry method solving nonlinear boundary value problem. Symmetry 2024, 16, 120. [Google Scholar] [CrossRef]
- Liu, C.S. Elastoplastic models and oscillators solved by a Lie-group differential algebraic equations method. Int. J. Non-Linear Mech. 2015, 69, 93–108. [Google Scholar] [CrossRef]
- Yu, Y.X. A novel weighted density functional theory for adsorption, fluid-solid interfacial tension, and disjoining properties of simple liquid films on planar solid surfaces. J. Chem. Phys. 2009, 131, 024704. [Google Scholar] [CrossRef] [PubMed]
- Alazwari, M.A.; Abu-Hamdeh, N.H.; Goodarzi, M. Entropy optimization of first-grade viscoelastic nanofluid flow over a stretching sheet by using classical Keller-box scheme. Mathematics 2021, 9, 2563. [Google Scholar] [CrossRef]
- Khan, F.A.; Aldhabani, M.S.; Alamer, A.; Alshaban, E.; Alamrani, F.M.; Mohammed, H.I.A. Almost nonlinear contractions under locally finitely transitive relations with applications to integral equations. Mathematics 2023, 11, 4749. [Google Scholar] [CrossRef]
- He, J.H. Variational iteration method—A kind of non-linear analytical technique: Some examples. Int. J. Non-linear Mech. 1999, 34, 699–708. [Google Scholar] [CrossRef]
- He, J.H. Variational iteration method for autonomous ordinary systems. Appl. Math. Comput. 2000, 114, 115–123. [Google Scholar] [CrossRef]
- Wang, X.; He, W.; Feng, H.; Atluri, S.N. Fast and accurate predictor-corrector methods using feedback-accelerated Picard iteration for strongly nonlinear problems. Comput. Model. Eng. Sci. 2024, 139, 1263–1294. [Google Scholar] [CrossRef]
- Argyros, I.K.; Shakhno, S.M.; Yarmola, H.P. Extended semilocal convergence for the Newton-Kurchatov method. Mat. Stud. 2020, 53, 85–91. [Google Scholar]
- Argyros, I.K.; Shakhno, S.M. Extended local convergence for the combined Newton-Kurchatov method under the generalized Lipschitz conditions. Mathematics 2019, 7, 207. [Google Scholar] [CrossRef]
- Abbasbandy, S. Improving Newton-Raphson method for nonlinear equations by modified Adomian decomposition method. Appl. Math. Comput. 2003, 145, 887–893. [Google Scholar] [CrossRef]
- Chun, C. Iterative methods improving Newton’s method by the decomposition method. Comput. Math. Appl. 2005, 50, 1559–1568. [Google Scholar] [CrossRef]
- Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
- Morlando, F. A class of two-step Newton’s methods with accelerated third-order convergence. Gen. Math. Notes 2015, 29, 17–26. [Google Scholar]
- Ogbereyivwe, O.; Umar, S.S. Behind Weerakoon and Fernando’s scheme: Is Weerakoon and Fernando’s scheme version computationally better than its power-means variants? FUDMA J. Sci. 2023, 7, 368–371. [Google Scholar] [CrossRef]
- Saqib, M.; Iqbal, M. Some multi-step iterative methods for solving nonlinear equations. Open J. Math. Sci. 2017, 1, 25–33. [Google Scholar] [CrossRef]
- Zhanlav, T.; Chuluunbaatar, O.; Ulziibayar, V. Generating function method for constructing new iterations. Appl. Math. Comput. 2017, 315, 414–423. [Google Scholar] [CrossRef]
- Argyros, I.K.; Regmi, S.; Shakhno, S.; Yarmola, H. Perturbed Newton methods for solving nonlinear equations with applications. Symmetry 2022, 14, 2206. [Google Scholar] [CrossRef]
- Chanu, W.H.; Panday, S.; Thangkhenpau, G. Development of optimal iterative methods with their applications and basins of attraction. Symmetry 2022, 14, 2020. [Google Scholar] [CrossRef]
- Petković, M.; Neta, B.; Petković, L.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations; Elsevier: Amsterdam, The Netherlands, 2013. [Google Scholar]
- Amat, S.; Busquier, S. Advances in Iterative Methods for Nonlinear Equations; SEMA SIMAI Springer Series; Springer: Cham, Switzerland, 2016. [Google Scholar]
- Kung, H.T.; Traub, J.F. Optimal order of one-point and multi-point iterations. J. Assoc. Comput. Machinery 1974, 21, 643–651. [Google Scholar] [CrossRef]
- Liu, C.S.; Hong, H.K.; Lee, T.L. A splitting method to solve a single nonlinear equation with derivative-free iterative schemes. Math. Comput. Simul. 2021, 190, 837–847. [Google Scholar] [CrossRef]
- Liu, C.S. A new splitting technique for solving nonlinear equations by an iterative scheme. J. Math. Res. 2020, 12, 40–48. [Google Scholar] [CrossRef]
- Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: New York, NY, USA, 1964. [Google Scholar]
- Džunić, J. On efficient two-parameter methods for solving nonlinear equations. Numer. Algor. 2013, 63, 549–569. [Google Scholar]
- Lotfi, T.; Soleymani, F.; Noori, Z.; Kılıçman, A.; Khaksar Haghani, F. Efficient iterative methods with and without memory possessing high efficiency indices. Discrete Dyn. Nat. Soc. 2014, 2014, 912796. [Google Scholar] [CrossRef]
- Wang, X. An Ostrowski-type method with memory using a novel self-accelerating parameter. J. Comput. Appl. Math. 2018, 330, 710–720. [Google Scholar] [CrossRef]
- Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Dynamics of iterative families with memory based on weight functions procedure. Appl. Math. Comput. 2019, 354, 286–298. [Google Scholar] [CrossRef]
- Torkashvand, V.; Kazemi, M.; Moccari, M. Structure a family of three-step with-memory methods for solving nonlinear equations and their dynamics. Math. Anal. Convex Optim. 2021, 2, 119–137. [Google Scholar]
- Sharma, E.; Panday, S.; Mittal, S.K.; Joit, D.M.; Pruteanu, L.L.; Jäntschi, L. Derivative-free families of with- and without-memory iterative methods for solving nonlinear equations and their engineering applications. Mathematics 2023, 14, 4512. [Google Scholar] [CrossRef]
- Thangkhenpau, G.; Panday, S.; Bolundut, L.C.; Jäntschi, L. Efficient families of multi-point iterative methods and their self-acceleration with memory for solving nonlinear equations. Symmetry 2023, 15, 1546. [Google Scholar] [CrossRef]
- Wang, H.; Liu, H. Note on a cubically convergent Newton-type method under weak conditions. Acta Appl. Math. 2010, 110, 725–735. [Google Scholar] [CrossRef]
- Wang, X.; Zhang, T. A new family of Newton-type iterative methods with and without memory for solving nonlinear equations. Calcolo 2014, 51, 1–15. [Google Scholar] [CrossRef]
- Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1960. [Google Scholar]
- Wang, X. A family of Newton-type iterative methods using some special self-accelerating parameters. Int. J. Comput. Math. 2018, 95, 2112–2127. [Google Scholar] [CrossRef]
- Jain, P.; Chand, P.B. Derivative free iterative methods with memory having higher R-order of convergence. Int. J. Nonl. Sci. Numer. Simul. 2020, 21, 641–648. [Google Scholar] [CrossRef]
- Zhou, X.; Liu, B. Iterative methods for multiple roots with memory using self-accelerating technique. J. Comput. Appl. Math. 2023, 428, 115181. [Google Scholar] [CrossRef]
- Džunić, J.; Petković, M.S.; Petković, L.D. Three-point methods with and without memory for solving nonlinear equations. Appl. Math. Comput. 2012, 218, 4917–4927. [Google Scholar]
- Džunić, J.; Petković, M.S. On generalized multipoint root-solvers with memory. J. Comput. Appl. Math. 2012, 236, 2909–2920. [Google Scholar]
- Zheng, Q.; Li, J.; Huang, F. Optimal Steffensen-type families for solving nonlinear equations. Appl. Math. Comput. 2011, 217, 9592–9597. [Google Scholar] [CrossRef]
- Cordero, A.; Lotfi, T.; Bakhtiari, P.; Torregrosa, J.R. An efficient two-parameter family with memory for nonlinear equations. Numer. Algor. 2015, 68, 323–335. [Google Scholar] [CrossRef]
- Petković, M.S. Remarks on “On a general class of multipoint root-finding methods of high computational efficiency”. SIAM J. Numer. Anal. 2011, 49, 1317–1319. [Google Scholar] [CrossRef]
- Torkashvand, V.; Kazemi, M. On an efficient family with memory with high order of convergence for solving nonlinear equations. Int. J. Indus. Math. 2020, 12, IJIM-1260. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).