Abstract
In this paper, some one-step iterative schemes with memory-accelerating methods are proposed to update three critical values, $f'(r)$, $f''(r)$, and $f'''(r)$, of a nonlinear equation $f(x)=0$ with r being its simple root. We can achieve high values of the efficiency index (E.I.) over the bound $4^{1/3} \approx 1.587$ with three function evaluations and over the bound $2^{1/2} \approx 1.414$ with two function evaluations. A third-degree Newton interpolatory polynomial is derived to update these critical values per iteration. We introduce relaxation factors into the Džunić method and its variant, which are updated to render fourth-order convergence by the memory-accelerating technique. We developed six types of optimal one-step iterative schemes with the memory-accelerating method, rendering fourth-order convergence or better, whose originals have second-order convergence without memory and without using specific optimal values of the parameters. We evaluated the performance of these one-step iterative schemes by the computed order of convergence (COC) and the E.I. with numerical tests. A Lie symmetry method to solve a second-order nonlinear boundary-value problem with high efficiency and high accuracy was also developed.
1. Introduction
An elementary, yet very important problem is solving a nonlinear equation $f(x) = 0$. Given an initial guess $x_0$, suppose that it is quite close to the real root r with $f(r) = 0$; we can approximate the nonlinear equation by its linearization
$$f(x_0) + f'(x_0)(x - x_0) = 0.$$
When $f'(x_0) \neq 0$, solving this equation for x yields
$$x = x_0 - \frac{f(x_0)}{f'(x_0)}.$$
Along this line of thinking,
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} \quad (3)$$
is coined as the Newton method (NM), which exhibits quadratic convergence. Many studies have since arisen, and this work continues to the present, in which various fourth-order methods modify the Newton method with the aim of solving nonlinear equations more quickly and stably [1,2,3,4,5,6,7]. In general, the fourth-order iterative methods are two-step with at least three function evaluations. Kung and Traub conjectured that a multi-step iterative scheme without memory based on m evaluations of functions has an optimal convergence order $2^{m-1}$. When the fourth-order iterative method is optimal, the bound of the efficiency index (E.I.) is $4^{1/3} \approx 1.587$.
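As a concrete illustration of the NM, a minimal sketch follows; the test function $x^3 - 2$, the starting point, and the tolerance are placeholder choices of ours, not taken from the paper's test set.

```python
def newton(f, df, x0, tol=1e-15, max_iter=100):
    """Newton method (NM): x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for n in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:       # convergence criterion on the residual
            return x, n
        x = x - fx / df(x)      # one Newton step; quadratic convergence near a simple root
    return x, max_iter

# Placeholder test problem: solve x^3 - 2 = 0 starting from x0 = 1.
root, iters = newton(lambda x: x**3 - 2.0, lambda x: 3.0 * x**2, 1.0)
```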
When a one-step iterative scheme with two function evaluations is considered, like the NM, its optimal order is two, while the E.I. reduces to $2^{1/2} \approx 1.414$. From this aspect, the multi-step iterative scheme is superior to the one-step iterative scheme with multiple function evaluations. In the local convergence analysis of an iterative scheme for solving nonlinear equations near the root r, three critical values, $f'(r)$, $f''(r)$, and $f'''(r)$, and their ratios are dominant; they appear in the first three Taylor coefficients. In many iterative schemes, the accelerating parameter and the optimal parameter are determined by these critical values. But the root r is itself an unknown constant, so the precise values of $f'(r)$, $f''(r)$, and $f'''(r)$ are not available. The primary goal of many memory methods is to develop a powerful updating technique, based on the memory of the previous values of the variables, to quickly obtain these critical values step-by-step; they will be used in several one-step iterative schemes developed in this paper for achieving high values of the computed order of convergence (COC) and the E.I. with the memory-accelerating technique.
Traub [8] was the first to develop a memory-dependent accelerating method from Steffensen's iterative scheme, by introducing an adjoint variable $w_n = x_n + \gamma_n f(x_n)$ and updating the accelerating parameter $\gamma_n$ from the data of the previous iteration:
With this modification, by taking the memory of the previous iteration into account, the computational order of convergence is raised from 2 to at least 2.414. Iterative methods using information from the current and previous iterations are the methods with memory. In Equation (4), while $x_n$ is a step variable, $w_n$ is an adjoint variable, which does not have an iteration of its own. The role of $w_n$ is different from that of the intermediate variable in a two-step iterative scheme. Later, we will introduce a supplementary variable, which merely provides an extra datum used in the data interpolation; its role is different from those of $x_n$ and $w_n$.
In 2013, Džunić [9] proposed a modification of Steffensen's and Traub's iterative schemes by introducing two parameters, $\gamma$ and p, in
$$w_n = x_n + \gamma f(x_n), \qquad x_{n+1} = x_n - \frac{f(x_n)}{f[x_n, w_n] + p f(w_n)}, \quad (5)$$
where $f[x_n, w_n] = (f(w_n) - f(x_n))/(w_n - x_n)$. The error equation was derived as
$$e_{n+1} = (1 + \gamma f'(r))(c_2 + p)\, e_n^2 + O(e_n^3), \quad (6)$$
where $c_2 = f''(r)/(2 f'(r))$ is a ratio of the second and first Taylor coefficients. Taking $\gamma = -1/f'(r)$ is sufficient for the vanishing of the second-order error term; hence, there is freedom to assign the value for the accelerating parameter p.
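To make the roles of the two parameters concrete, the sketch below implements one step of this scheme, assuming the standard form of Equation (5); the parameter values in the usage lines are placeholders, not the optimal choices.

```python
def dzunic_step(f, x, gamma, p):
    """One step of the two-parameter scheme (5) of [9] (assumed standard
    form): derivative-free, with two function evaluations per step."""
    fx = f(x)
    w = x + gamma * fx                # adjoint (supplementary) point
    fw = f(w)
    dd = (fw - fx) / (w - x)          # divided difference f[x, w]
    return x - fx / (dd + p * fw)     # gamma = -1/f'(r) removes the e_n^2 term; p stays free

# With fixed, non-optimal gamma and p the scheme is only second order;
# updating them per iteration from memory raises the order, as developed below.
x = 1.0
for _ in range(10):
    x = dzunic_step(lambda t: t**3 - 2.0, x, gamma=-0.1, p=0.0)
```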
Only a few papers have been concerned with one-step iterative schemes with memory [9,10,11]. In this paper, we develop several memory-dependent one-step iterative methods for the high-performance solution of nonlinear equations. The methodology is to introduce a free parameter and a combination function, which are then optimized to raise the order of convergence; this approach is original and novel. The strategy of updating the values of the parameters with the memory-accelerating method can significantly speed up the convergence and raise the value of the E.I. toward its limit bound.
The novelties involved in the paper are as follows:
- Introducing a free parameter into an existing or newly created model of the iterative scheme.
- Inserting a combination function into two iterative schemes; the parameter or combination function is then optimized to raise the order of convergence.
- Several of the ideas presented here are novel and have not yet appeared in the literature; they can promote the development of fourth-order one-step iterative schemes while saving computational cost.
- For the application of the derivative-free one-step iterative schemes, we developed a powerful Lie symmetry method to solve a second-order nonlinear boundary-value problem.
The rest of the paper proceeds as follows. In Section 2, we introduce two basic one-step iterative schemes, which are the starting point and motivate the development of the accelerating techniques to update the critical parameters appearing in the third-order iterative schemes. Section 3 gives a detailed local convergence analysis of these two basic one-step iterative schemes; a new concept of the optimal combination of these two schemes is given in Theorem 3. In Section 4, some numerical experiments are carried out with the computed order of convergence (COC) to evaluate the performance in comparison with some fourth-order optimal two-step iterative schemes. In Section 5, we introduce updating techniques for the three critical parameters $f'(r)$, $f''(r)$, and $f'''(r)$ by using the memory-updating technique and the third-degree Newton interpolation polynomial. The idea of the supplementary variable is introduced, and the result is the first memory-accelerating technique. In Section 6, we first derive a new third-order iterative scheme as a variant of Džunić's method; then, updating techniques with two parameters and with three parameters are developed by the memory methods, yielding the second memory-accelerating technique. In Section 7, we improve Džunić's method and propose new optimal combination methods; three more memory-accelerating techniques are developed. In Section 8, we introduce a relaxation factor into Džunić's method, and the optimal value of the relaxation factor is derived; the sixth memory-accelerating technique is developed. As a practical application, a Lie symmetry method is developed in Section 9 to solve a second-order boundary-value problem. Finally, we conclude with the achievements in Section 10.
2. Preliminaries
Mathematically speaking, $f(x) = 0$ is equivalent to a split equation
in which a constant is to be determined. If $x_n$ is known, we can find the next iterate $x_{n+1}$ by solving the resulting equation.
Upon viewing the terms including $x_{n+1}$ as the coefficient on both sides and dividing through, we can obtain
which includes a parameter to be assigned. The above iterative scheme was developed in [12] as a one-step continuation Newton-like method; it is referred to as Wu's method. The iterative scheme (9) was used by Lee et al. [13], Zafar et al. [14], and Thangkhenpau et al. [15] as the first step in multi-step iterative schemes for finding multiple zeros. Recently, Singh and Singh [16] and also Singh and Argyros [17] gave a detailed dynamical analysis of the continuous Newton-like method (9).
The iterative scheme (10), which includes two parameters to be assigned, is referred to as Liu's method [18].
In 2021, Liu et al. [19] verified that the iterative scheme (10) in general has first-order convergence; however, if one parameter takes its optimal value in terms of the simple root r of $f(x) = 0$, the order rises to two; moreover, if both parameters take their optimal values, the order further rises to three. This technique is akin to the method of using accelerating parameters to speed up the convergence, whose optimal values can be determined by the convergence analysis.
Memory methods reuse the information from previous iterations; they are not required to evaluate the function or its derivative at any new points, but they are required to store this information. The so-called R-order of convergence of a memory method increases, and at the same time, the E.I. may exceed that of the corresponding method without memory. Džunić [9] earlier developed an efficient two-parameter method for solving nonlinear equations by using the memory technique and Newton polynomial interpolation to determine the accelerating parameters. For the progress of memory methods with accelerating parameters in multi-step iterative schemes, one can refer to [20,21,22,23,24,25,26,27,28,29].
3. Convergence Analysis
Compared to the Newton method in Equation (3), Equation (9) is still applicable when $f'(x_n) = 0$. In this situation, the Newton method would fail, which restricts its practical application. The iterative scheme (9) has some remarkable advantages over the Newton method [30]. It is interesting to study the convergence behaviors of the iterative scheme (9) and of its variant, the iterative scheme (10).
Wang and Liu [31] proposed a two-step iterative scheme as an extension of Equation (9):
where a parameter is introduced. The error formula was proven to be
The iterative scheme (12) is not optimal: with three function evaluations, it yields third-order, not fourth-order, convergence. This motivated us to further investigate the local convergence property of the iterative schemes (9) and (10). A new idea of the optimal combination of the iterative schemes (9) and (10) is introduced, such that a one-step optimal fourth-order iterative scheme can be achieved, which is better than the two-step iterative scheme (12).
Theorem 1.
The iterative scheme (9) for solving $f(x) = 0$ has third-order convergence if
Proof.
Let
which is small when $x_n$ and r are sufficiently close. Repeating Equation (15) for the next iterate and subtracting yields
As usual, we have
This ends the proof of Theorem 1. □
Theorem 2.
Proof.
It can be proven similarly by inserting Equation (17) into Equation (10):
where we use Equation (22), for which
This ends the proof of Theorem 2. □
The combination of two iterative schemes of the same order cannot always yield a new iterative scheme whose convergence order is raised by one. The conditions for the success of the combination are that, in the two error equations of these two iterative schemes, the coefficients preceding $e_n^3$ cannot be the same, and at the same time, the coefficients preceding $e_n^4$ cannot be the same. The cancellation mechanism is sketched below.
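The cancellation can be written out in generic notation; the coefficients $\alpha_1$, $\alpha_2$ and the weight $\theta$ below are ours, introduced only for this sketch.

```latex
\begin{align*}
x_{n+1}^{(1)} - r &= \alpha_1 e_n^3 + O(e_n^4), \qquad
x_{n+1}^{(2)} - r = \alpha_2 e_n^3 + O(e_n^4), \qquad \alpha_1 \neq \alpha_2, \\
x_{n+1} &= \theta\, x_{n+1}^{(1)} + (1-\theta)\, x_{n+1}^{(2)}, \\
e_{n+1} &= \bigl[\theta \alpha_1 + (1-\theta) \alpha_2\bigr]\, e_n^3 + O(e_n^4).
\end{align*}
% Choosing \theta = \alpha_2/(\alpha_2 - \alpha_1) annihilates the e_n^3 term,
% so the combined one-step scheme attains fourth-order convergence.
```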
Theorem 3.
4. Numerical Experiments
For the purposes of comparison, we list the fourth-order iterative schemes developed by Chun [3]:
by King [4]:
where β is a parameter, and by Chun and Ham [1]:
Equations (31)–(33) are fourth-order optimal iterative schemes with the E.I. $= 4^{1/3} \approx 1.587$. However, the E.I. of the iterative scheme (26) can be larger, as shown in Section 5. Furthermore, the iterative scheme (26) is a single-step scheme, rather than a two-step scheme like (31)–(33).
The convergence criteria are given by
a prescribed tolerance ε, which is fixed for all tests. In [32], the numerically computed order of convergence (COC) is approximated by
$$\mathrm{COC} \approx \frac{\ln\left|\dfrac{x_{n+1}-r}{x_n-r}\right|}{\ln\left|\dfrac{x_n-r}{x_{n-1}-r}\right|}, \quad (35)$$
where r is a solution of $f(x) = 0$.
The iterative schemes in Equations (9), (10), and (26) are named, respectively, Algorithm 1, Algorithm 2, and Algorithm 3. We consider a simple case to specify how to calculate the COC. Starting from the given initial guess, the NM achieves the root in seven iterations, Algorithms 1 and 2 in five iterations, and Algorithm 3 in four iterations. For each triple of the data of x, we can compute the COC by Equation (35). The triple comprises the last three values of x before convergence, and we set the convergence value of x to be r. If the COC is not computable from this triple because a factor in Equation (35) vanishes, we shift the triple back by one iteration, and so on.
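A small helper following Equation (35) makes this procedure concrete; the triple-shifting fallback mirrors the rule just described (the function names are ours).

```python
import math

def coc(triple, r):
    """COC of Equation (35) from three successive iterates
    (x_{n-1}, x_n, x_{n+1}) and the converged root r."""
    x_prev, x_mid, x_last = triple
    num = math.log(abs((x_last - r) / (x_mid - r)))
    den = math.log(abs((x_mid - r) / (x_prev - r)))
    return num / den

def coc_from_history(xs, r):
    """Use the last triple before convergence; if it is not computable
    (a factor vanishes), shift the triple back by one step, and so on."""
    for k in range(len(xs) - 3, -1, -1):
        try:
            return coc(xs[k:k + 3], r)
        except (ValueError, ZeroDivisionError):
            continue
    return float('nan')
```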
From the data in Table 1, we take the COC of the NM to be 1.999, of Algorithm 1 to be 3.046, of Algorithm 2 to be 2.968, and of Algorithm 3 to be 4.899. As expected, Algorithms 1 and 2 both have near third-order convergence; Algorithm 3, with COC = 4.899, even exceeds the theoretical fourth order.
Table 1.
A comparison of the COCs computed by different methods. ×: undefined.
The test examples are given by
The corresponding solutions r are, respectively, the known simple roots of the five test functions.
Algorithm 1 was tested by Wu [12,30], and Algorithm 2 was tested by Liu et al. [19]. Because Algorithm 3 is a new iterative scheme, we tested it on the above examples. In Table 2, for different functions, we list the number of iterations (NI) obtained by the presently developed Algorithm 3, compared to the NM, the method of Jarratt [33] (JM), the method of Traub–Ostrowski [8] (TM), the method of King [4] (KM) with a fixed β, and the method of Chun and Ham [1] (CM).
Table 2.
The comparison of different methods for the number of iterations.
Algorithm 3 theoretically has fourth-order convergence with the optimal values of the parameters. For the first example, Algorithm 3 converges faster than the other fourth-order iterative schemes. For the other examples, Algorithm 3 is much better than the NM and is competitive with the other fourth-order iterative schemes: JM, TM, KM, and CM.
5. Updating Three Parameters by Memory Method
In order to achieve the fourth-order convergence of the iterative scheme in Equation (26), the values of $f'(r)$, $f''(r)$, and $f'''(r)$ must be known a priori; however, these values are unknown because the root r is an unknown constant.
Therefore, we introduce a supplementary variable, predicted by the Newton scheme, to be used in the data interpolation:
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}.$$
Based on the updated data of $x_n$ and $y_n$ and the previous data of $x_{n-1}$ and $y_{n-1}$, we can construct a third-degree Newton interpolatory polynomial by
where
It is easy to derive
We update the values of A, B, and Q in Equation (28) by
Now, we have the first memory-accelerating technique for the iterative scheme (26) in Theorem 3, which reads as (i) giving the initial values of A, B, and Q and the initial points and (ii) performing, for $n = 1, 2, \ldots$:
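A sketch of this update step is given below, under the assumption that the four stored data pairs are the current and supplementary points and their predecessors; the derivatives of the cubic Newton interpolant, evaluated at the newest iterate, serve as the memory-based estimates of $f'(r)$, $f''(r)$, and $f'''(r)$ that refresh A, B, and Q.

```python
def newton_interp_derivatives(xs, fs, t):
    """Build the third-degree Newton interpolatory polynomial N3 through the
    four stored pairs (xs[i], fs[i]) and return (N3'(t), N3''(t), N3'''(t)),
    the memory-based estimates of f'(r), f''(r), f'''(r)."""
    n = len(xs)                           # n = 4 for a cubic interpolant
    dd = list(fs)                         # running column of divided differences
    c = [dd[0]]                           # c[k] = f[x_0, ..., x_k]
    for k in range(1, n):
        for i in range(n - k):
            dd[i] = (dd[i + 1] - dd[i]) / (xs[i + k] - xs[i])
        c.append(dd[0])
    # N3(x) = c0 + c1*u + c2*u*v + c3*u*v*w with u = x-x0, v = x-x1, w = x-x2
    u, v, w = t - xs[0], t - xs[1], t - xs[2]
    n1 = c[1] + c[2] * (u + v) + c[3] * (u * v + u * w + v * w)  # N3'(t)
    n2 = 2.0 * c[2] + 2.0 * c[3] * (u + v + w)                   # N3''(t)
    n3 = 6.0 * c[3]                                              # N3'''(t)
    return n1, n2, n3
```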
Some numerical tests of the first memory-accelerating technique for the iterative scheme are listed in Table 3, for which we can observe that the values of the COC are very large. Because the derivative term $f'(x_n)$ was included, the E.I. $= \mathrm{COC}^{1/3}$ was computed. All E.I.s are greater than the E.I. = 1.587 of the optimal fourth-order iterative scheme without memory, which has the same number of function evaluations. The last four E.I.s are also greater than the E.I. = 1.682 of the optimal eighth-order iterative scheme without memory and with four function evaluations.
Table 3.
The NI, COC, and E.I. for the method by updating three parameters, A, B, and Q, in the first memory-accelerating technique.
6. A New Fourth-Order Iterative Scheme with Memory Updating
Below, we will develop the memory-accelerating technique without using the derivative term $f'(x_n)$. The test examples are changed to
a new set of five functions; the corresponding solutions r are, respectively, their known simple roots.
In order to compare the numerical results with those given in [9], a new function is added in Equation (51). For consistency, the other four functions are relabeled to replace those appearing in Equations (36) and (38)–(40).
In this section, a new one-step iterative scheme with the aid of a supplementary variable and a relaxation factor is introduced. The detailed convergence analysis is derived. Then, we accelerate the introduced parameters by the memory-updating technique.
6.1. A New Third-Order Result
We address the following iterative scheme:
which is a novel one-step iterative scheme with an adjoint variable. Equation (55) is a variant of Džunić's method [9], in which $f(x_n)$ appears in the denominator instead of $f(w_n)$.
Theorem 4.
If we take
then the order of convergence is further increased to four.
Proof.
Using the Taylor series yields
where
Hence, it follows from Equations (57) and (58) that
Through some operations, we can obtain
which can be written as
where
Then, Equation (65) is further reduced to
Letting the coefficient of the second-order error term vanish, we can derive a relation between the relaxation factor and p:
Taking this relation, we prove Equation (60). By using it and Equation (68), the coefficient preceding the third-order error term in Equation (69) can be simplified to
which, upon further substitutions, reduces to
If Equation (61) is satisfied, the error equation becomes of fourth order. This completes the proof of Theorem 4. □
Theorem 4 gives us a clue to achieving a fourth-order iterative scheme (55), if the parameters are given by Equations (60) and (61). However, these involve three unknown critical values at the root r. We will apply the memory-accelerating technique to adapt the values of the parameters in Equation (55) per iteration. The accuracy of this memory-dependent adaptation technique depends on how many current and previous data are taken into account. Similar to what was performed in [9], we could further estimate the lower bound of the convergence order of such a memory-accelerating method upon giving the updating technique for the parameters. Instead of deriving formulas for this kind of estimation, we use the numerical values of the COC to display the numerical performance.
6.2. Updating Two Parameters
To mimic the memory-updating procedure in Section 5, we can obtain a memory method for the iterative scheme in Equation (55) by (i) giving the initial values of A and B and the initial points and (ii) performing, for $n = 1, 2, \ldots$:
In the above iterative scheme, the third parameter is held at a given constant value.
To demonstrate the usefulness of the above iterative scheme, we consider the solution of one test equation. We fix the initial values and vary the constant value of the third parameter in Table 4. Because only two function evaluations are required, the E.I. $= \mathrm{COC}^{1/2}$ was computed.
Table 4.
The NI, COC, and E.I. for the method by updating two parameters, A and B.
6.3. Updating Three Parameters
Table 4 reveals an optimal value of the constant parameter, at which the COC and E.I. are the best. Indeed, by setting the coefficient preceding the third-order error term to zero in Theorem 4, we can truly obtain a fourth-order iterative scheme, whose parameter is determined by Equation (61).
Thus, we have the second memory-accelerating technique for the iterative scheme (55) in Theorem 4, using the parameter in Equation (61), which reads as (i) giving the initial values of A, B, and the relaxation factor and the initial points, and (ii) performing, for $n = 1, 2, \ldots$:
Some numerical tests of the second memory-accelerating technique for the iterative scheme are listed in Table 5. For the test equation treated in [9], the COC = 3.535 obtained is slightly larger than the COC = 3.48 obtained in [9].
Table 5.
The NI, COC, and E.I. for the method by updating three parameters, A, B, and the relaxation factor, in the second memory-accelerating technique.
7. Improvement of Džunić's Method and Optimal Combination
In this section, we improve Džunić's method [9] by deriving the optimal parameters and their accelerating techniques. The ideas of two optimal combinations, of Džunić's and Wu's methods and of Džunić's and Liu's methods, are introduced. Then, three new one-step iterative schemes with memory-updating techniques are developed.
7.1. Improvement of Džunić's Memory Method
As was performed in [9], γ and p in Equation (5) were taken as accelerating parameters updated per iteration. To guarantee the third-order convergence of Džunić's method, $\gamma = -1/f'(r)$ is sufficient, as shown in Equation (6). Therefore, there exists the freedom to choose the value of p.
Theorem 5.
Proof.
At the same time, Equation (65) is modified to
Since we are not interested in the details of the fourth-order error equation, we write
where
Letting the coefficient of the third-order error term vanish, we can derive
the optimal p, if we take the optimal γ. By using Equation (83), the coefficient preceding the third-order error term is reduced to zero. This completes the proof of Theorem 5. □
The third memory-accelerating technique for the iterative scheme (5) in Theorem 5, using p in Equation (83), reads as (i) giving the initial values of A and p and the initial points and (ii) performing, for $n = 1, 2, \ldots$:
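The following sketch indicates the flavor of such a memory-accelerating loop, reusing newton_interp_derivatives from the sketch in Section 5 and assuming the standard form of scheme (5); since the exact update of p is the one derived in Equation (83), the formula used here is a plausible stand-in, not the paper's result.

```python
def memory_accelerated(f, x0, iters=12):
    """Memory-accelerating loop for the two-parameter scheme (5) (sketch):
    per step, the cubic Newton interpolant through the four most recent data
    pairs refreshes the estimates of f'(r) and f''(r), which in turn refresh
    the accelerating parameters."""
    gamma, p = -0.01, 0.0              # conservative initial parameter values
    x, pairs = x0, []                  # memory: stored (point, value) data
    for _ in range(iters):
        fx = f(x)
        w = x + gamma * fx             # supplementary (adjoint) point
        fw = f(w)
        pairs += [(x, fx), (w, fw)]
        dd = (fw - fx) / (w - x)
        x = x - fx / (dd + p * fw)     # one step of scheme (5)
        if len(pairs) >= 4:
            xs, fs = zip(*pairs[-4:])
            d1, d2, _ = newton_interp_derivatives(list(xs), list(fs), x)
            gamma = -1.0 / d1          # estimate of -1/f'(r), cf. Equation (6)
            p = -d2 / (2.0 * d1)       # assumed stand-in for Equation (83)
    return x
```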
Some numerical tests of the third memory-accelerating technique for the iterative scheme are listed in Table 6. We can notice that, for the test equation treated in [9], the COC = 4.002 obtained is larger than the COC = 3.48 obtained in [9].
Table 6.
The NI, COC, and E.I. for the method by updating two parameters, A and p, in the third memory-accelerating technique.
7.2. Optimal Combination of Džunić's and Wu's Iterative Methods
Theorem 6.
Proof.
This ends the proof of Theorem 6. □
The fourth memory-accelerating technique for the iterative scheme (96) in Theorem 6 reads as (i) giving the initial values of A, B, and Q and the initial points and (ii) performing, for $n = 1, 2, \ldots$:
Some numerical tests of the fourth memory-accelerating technique for the iterative scheme are listed in Table 7, which shows that the values of the COC are high; however, this technique needs the derivative term $f'(x_n)$. We can notice that, for the test equation treated in [9], the COC = 6.643 obtained is much larger than the COC = 3.48 obtained in [9].
Table 7.
The NI, COC, and E.I. for the method by updating three parameters, A, B, and Q, in the fourth memory-accelerating technique.
7.3. Optimal Combination of Džunić's and Liu's Iterative Methods
Theorem 7.
Proof.
This ends the proof of Theorem 7. □
The fifth memory-accelerating technique for the iterative scheme (105) in Theorem 7 is (i) giving the initial values of A, B, and Q and the initial points and (ii) performing, for $n = 1, 2, \ldots$:
Some numerical tests of the fifth memory-accelerating technique for the iterative scheme are listed in Table 8. We can notice that, for the test equation treated in [9], the COC = 5.013 obtained is larger than the COC = 3.48 obtained in [9].
Table 8.
The NI, COC, and E.I. for the method by updating three parameters, A, B, and Q, in the fifth memory-accelerating technique.
8. Modification of Džunić's Method
As was performed in [9], γ and p in Equation (5) were taken as accelerating parameters. To guarantee the third-order convergence of Džunić's method, $\gamma = -1/f'(r)$ is sufficient, as shown in Equation (6). Therefore, there exists the freedom to choose the value of p. We propose a modification of Džunić's method by
introducing a relaxation factor, to be determined for increasing the order of convergence. If the relaxation factor is set to unity, Džunić's method is recovered. The present modification is different from those analyzed in Section 7.1 and Section 7.2, where p is a free parameter.
Theorem 8.
For solving $f(x) = 0$, the iterative scheme (113) with the relaxation factor has third-order convergence:
Proof.
The sixth memory-accelerating technique for the iterative scheme (113) in Theorem 8 is (i) giving the initial values of A, B, and the relaxation factor and the initial points and (ii) performing, for $n = 1, 2, \ldots$:
Some numerical tests of the sixth memory-accelerating technique for the iterative scheme are listed in Table 9. For the test equation treated in [9], the COC = 8.526 is much larger than the COC = 3.48 obtained in [9].
Table 9.
The NI, COC, and E.I. for the method by updating three parameters, A, B, and the relaxation factor, in the sixth memory-accelerating technique.
In Equation (113), if we take the relaxation factor to be unity, then Džunić's method is recovered. In Table 10, the values of the COC are compared, which shows that the COC obtained by the sixth memory-accelerating technique is larger than that obtained by Džunić's memory method.
Table 10.
The values of the COC for Džunić's memory method and the sixth memory-accelerating technique.
Notice that, by using the suggested initial values given in [9] and using the residual-based estimate
$$\mathrm{COC} \approx \frac{\ln\left|f(x_{n+1})/f(x_n)\right|}{\ln\left|f(x_n)/f(x_{n-1})\right|} \quad (124)$$
instead of that in Equation (35), we can obtain the COC = 3.538, which is close to the lower bound of 3.56 derived in [9]. However, using other initial values, the COC = 4.990 is obtained by Equation (35), and the COC = 3.895 is obtained by Equation (124). Most papers have used Equation (35) to compute the COC. In any case, the lower bound derived in [9] may underestimate the true value of the COC. As shown in Table 10, all COCs obtained by Džunić's memory method are greater than 3.56.
9. A Lie Symmetry Method
As a practical application of the proposed iterative schemes, we developed a Lie symmetry method based on the Lie group $SL(2, \mathbb{R})$ to solve the second-order nonlinear boundary-value problem. This Lie symmetry method was first developed in [34] for computing the eigenvalues of the generalized Sturm–Liouville problem.
Let
whose exact solution is
In the conventional shooting method, an unknown initial slope is assumed and Equation (125) is integrated with the given initial conditions, which results in an implicit equation for the slope to be solved.
From Equation (125), a nonlinear system consisting of two first-order ordinary differential equations follows:
Let $\mathbf{A}$ denote the coefficient matrix. The Lie symmetry system in Equation (128) permits the Lie-group symmetry $SL(2, \mathbb{R})$, the special linear group of real $2 \times 2$ matrices, because $\mathrm{tr}\,\mathbf{A} = 0$.
By using the closure property of the Lie group, there exists a group element of $SL(2, \mathbb{R})$, such that the following mapping holds:
where
and x is an unknown weighting factor to be determined.
We can derive
Since the boundary values are given, we can obtain the unknown initial slope from Equations (132) and (134):
It is interesting that the unknown slope can be derived explicitly in Equation (136) by using the Lie symmetry method, which is more powerful than the traditional shooting method, for which no such explicit formula is available.
Now, we apply the fourth-order Runge–Kutta method to integrate Equation (125) with the initial conditions, including the slope given in Equation (136) in terms of x. The right-end value must satisfy the prescribed boundary condition, which constitutes an implicit equation in x. We take a fixed number of steps in the Runge–Kutta method and fix the initial guess. In Table 11, we compare the NI, the error of the right-end condition, and the maximum error of u obtained by comparing to Equation (127). The weighting factor is obtained such that the initial slope computed from Equation (136) is very close to the exact one.
Table 11.
For Equations (125) and (126), comparing the performances of the second, third, fifth, and sixth memory-accelerating techniques in the solution of a second-order nonlinear boundary-value problem by the Lie symmetry method.
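To indicate how the derivative-free schemes plug into this procedure, the sketch below assembles the right-end mismatch as a scalar residual in the weighting factor x; rk4_integrate is a standard fourth-order Runge–Kutta integrator, while slope_from_x is a hypothetical stand-in for the explicit formula (136).

```python
def rk4_integrate(rhs, u0, v0, a=0.0, b=1.0, steps=1000):
    """Classical fourth-order Runge-Kutta for the system u' = v, v' = rhs(t, u, v)."""
    h = (b - a) / steps
    t, u, v = a, u0, v0
    for _ in range(steps):
        k1u, k1v = v, rhs(t, u, v)
        k2u, k2v = v + 0.5*h*k1v, rhs(t + 0.5*h, u + 0.5*h*k1u, v + 0.5*h*k1v)
        k3u, k3v = v + 0.5*h*k2v, rhs(t + 0.5*h, u + 0.5*h*k2u, v + 0.5*h*k2v)
        k4u, k4v = v + h*k3v, rhs(t + h, u + h*k3u, v + h*k3v)
        u += h * (k1u + 2*k2u + 2*k3u + k4u) / 6.0
        v += h * (k1v + 2*k2v + 2*k3v + k4v) / 6.0
        t += h
    return u, v

def residual(x, rhs, u0, u1, slope_from_x):
    """Right-end mismatch as an implicit function of the weighting factor x;
    slope_from_x stands in for the explicit slope formula (136)."""
    u_end, _ = rk4_integrate(rhs, u0, slope_from_x(x))
    return u_end - u1

# The scalar equation residual(x) = 0 is then solved by any of the
# derivative-free memory-accelerating schemes above, e.g., memory_accelerated.
```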
Instead of the Dirichlet boundary conditions in Equation (126), we consider mixed-type boundary conditions:
Now, from Equation (135), it follows that
10. Conclusions
In this paper, we addressed five one-step iterative methods: A (Wu's method), B (Liu's method), C (a novel method), D (Džunić's method), and E (a modification of Džunić's method). Without using specific values of the parameters and without memory, they all have second-order convergence; when specific optimal values of the parameters are used, they have third-order convergence. The three critical values $f'(r)$, $f''(r)$, and $f'''(r)$ enter these optimal parameters and are crucial for achieving good performance of the designed iterative schemes, rendering the coefficients preceding the lower-order error terms zero. We introduced a combination function, which is determined by raising the order of convergence. The optimal combination of A and B can generate a fourth-order one-step iterative scheme. When the values of the parameters and the combination function were obtained by a memory-accelerating method, with the third-degree Newton polynomial interpolating the previous and current data, we obtained the first memory-accelerating technique to realize a fourth-order one-step iterative scheme.
In the novel method C, a relaxation factor appears. Using the memory-accelerating method to update the values of the relaxation factor and the other parameters, we obtained the second memory-accelerating technique to realize the fourth-order convergence of the derived novel one-step iterative scheme.
We mathematically improved Džunić's method to an iterative scheme with fourth-order convergence, and the third memory-accelerating technique was developed to realize a fourth-order one-step iterative scheme based on Džunić's memory method.
The optimal combination of A and D generated the fourth memory-accelerating technique to realize a fourth-order one-step iterative scheme based on an optimal combination function between Džunić's and Wu's methods. The optimal combination of B and D generated the fifth memory-accelerating technique to realize a fourth-order one-step iterative scheme based on an optimal combination function between Džunić's and Liu's methods.
In E, we finally introduced a relaxation factor into Džunić's method, which is optimized to fourth-order convergence by the sixth memory-accelerating technique.
In the first and fourth memory-accelerating techniques, three evaluations of the function and its derivative were required. In contrast, the second, third, fifth, and sixth memory-accelerating techniques needed only two function evaluations. Numerical tests confirmed that these fourth-order one-step iterative schemes performed very well, with high values of the COC and E.I. Among them, the fifth memory-accelerating technique was the best one, with the highest COC and E.I. for all test examples. Recall that the efficiency index of the optimal fourth-order two-step iterative scheme with three function evaluations and without memory is E.I. $= 4^{1/3} \approx 1.587$.
As an application of the derivative-free one-step iterative schemes with the second, third, fifth, and sixth memory-accelerating techniques, a second-order nonlinear boundary-value problem was solved by the Lie symmetry method. It is remarkable that the Lie symmetry method can express the unknown initial slope as an explicit formula in the weighting factor x, whose implicit nonlinear equation can be solved with high efficiency and high accuracy.
The basic iterative schemes in Equations (9) and (10) are applicable to finding the multiple roots of a nonlinear equation, for instance one with a triple root, which can be treated well by the proposed accelerated one-step iterative schemes. As for systems of nonlinear equations, more study is needed to extend the presented accelerating techniques.
Author Contributions
Conceptualization, C.-S.L.; Methodology, C.-S.L. and C.-W.C.; Software, C.-S.L. and C.-W.C.; Validation, C.-S.L. and C.-W.C.; Formal analysis, C.-S.L. and C.-W.C.; Investigation, C.-S.L., C.-W.C. and C.-L.K.; Resources, C.-S.L., C.-W.C. and C.-L.K.; Data curation, C.-S.L., C.-W.C. and C.-L.K.; Writing—original draft, C.-S.L.; Writing—review & editing, C.-W.C.; Visualization, C.-S.L., C.-W.C. and C.-L.K.; Supervision, C.-S.L. and C.-W.C.; Project administration, C.-W.C. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
The data presented in this study are available on request from the corresponding authors. The data are not publicly available due to privacy restrictions.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Chun, C.; Ham, Y. Some fourth-order modifications of Newton’s method. Appl. Math. Comput. 2008, 197, 654–658. [Google Scholar] [CrossRef]
- Noor, M.A.; Noor, K.I.; Waseem, M. Fourth-order iterative methods for solving nonlinear equations. Int. J. Appl. Math. Eng. Sci. 2010, 4, 43–52. [Google Scholar]
- Chun, C. Some fourth-order iterative methods for solving nonlinear equations. Appl. Math. Comput. 2008, 195, 454–459. [Google Scholar] [CrossRef]
- King, R. A family of fourth-order iterative methods for nonlinear equations. SIAM J. Numer. Anal. 1973, 10, 876–879. [Google Scholar] [CrossRef]
- Li, S. Fourth-order iterative method without calculating the higher derivatives for nonlinear equation. J. Algorithms Comput. Technol. 2019, 13. [Google Scholar] [CrossRef]
- Chun, C. Certain improvements of Chebyshev-Halley methods with accelerated fourth-order convergence. Appl. Math. Comput. 2007, 189, 597–601. [Google Scholar] [CrossRef]
- Kou, J.; Li, Y.; Wang, X. Fourth-order iterative methods free from second derivative. Appl. Math. Comput. 2007, 184, 880–885. [Google Scholar]
- Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
- Džunić, J. On efficient two-parameter methods for solving nonlinear equations. Numer. Algorithms 2013, 63, 549–569. [Google Scholar]
- Haghani, F.K. A modified Steffensen's method with memory for nonlinear equations. Int. J. Math. Model. Comput. 2015, 5, 41–48. [Google Scholar]
- Khdhr, F.W.; Saeed, R.K.; Soleymani, F. Improving the computational efficiency of a variant of Steffensen’s method for nonlinear equations. Mathematics 2019, 7, 306. [Google Scholar] [CrossRef]
- Wu, X.Y. A new continuation Newton-like method and its deformation. Appl. Math. Comput. 2000, 112, 75–78. [Google Scholar] [CrossRef]
- Lee, M.Y.; Kim, Y.I.; Magreñán, Á.A. On the dynamics of a tri-parametric family of optimal fourth-order multiple-zero finders with a weight function of the principal mth root of a function-to-function ratio. Appl. Math. Comput. 2017, 315, 564–590. [Google Scholar]
- Zafar, F.; Cordero, A.; Torregrosa, J.R. Stability analysis of a family of optimal fourth-order methods for multiple roots. Numer. Algorithms 2019, 81, 947–981. [Google Scholar] [CrossRef]
- Thangkhenpau, G.; Panday, S.; Mittal, S.K.; Jäntschi, L. Novel parametric families of with- and without-memory iterative methods for multiple roots of nonlinear equations. Mathematics 2023, 14, 2036. [Google Scholar] [CrossRef]
- Singh, M.K.; Singh, A.K. A derivative free globally convergent method and its deformations. Arab. J. Math. 2021, 10, 481–496. [Google Scholar] [CrossRef]
- Singh, M.K.; Argyros, I.K. The dynamics of a continuous Newton-like method. Mathematics 2022, 10, 3602. [Google Scholar] [CrossRef]
- Liu, C.S. A new splitting technique for solving nonlinear equations by an iterative scheme. J. Math. Res. 2020, 12, 40–48. [Google Scholar] [CrossRef]
- Liu, C.S.; Hong, H.K.; Lee, T.L. A splitting method to solve a single nonlinear equation with derivative-free iterative schemes. Math. Comput. Simul. 2021, 190, 837–847. [Google Scholar] [CrossRef]
- Džunić, J.; Petković, M.S. On generalized biparametric multipoint root finding methods with memory. J. Comput. Appl. Math. 2014, 255, 362–375. [Google Scholar]
- Wang, X.; Zhang, T. A new family of Newton-type iterative methods with and without memory for solving nonlinear equations. Calcolo 2014, 51, 1–15. [Google Scholar] [CrossRef]
- Cordero, A.; Lotfi, T.; Bakhtiari, P.; Torregrosa, J.R. An efficient two-parameter family with memory for nonlinear equations. Numer. Algorithms 2015, 68, 323–335. [Google Scholar] [CrossRef]
- Chanu, W.H.; Panday, S.; Thangkhenpau, G. Development of optimal iterative methods with their applications and basins of attraction. Symmetry 2022, 14, 2020. [Google Scholar] [CrossRef]
- Lotfi, T.; Soleymani, F.; Noori, Z.; Kiliçman, A.; Haghani, F.K. Efficient iterative methods with and without memory possessing high efficiency indices. Discret. Dyn. Nat. Soc. 2014, 2014, 912796. [Google Scholar] [CrossRef]
- Wang, X. An Ostrowski-type method with memory using a novel self-accelerating parameter. J. Comput. Appl. Math. 2018, 330, 710–720. [Google Scholar] [CrossRef]
- Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Dynamics of iterative families with memory based on weight functions procedure. Appl. Math. Comput. 2019, 354, 286–298. [Google Scholar] [CrossRef]
- Torkashvand, V.; Kazemi, M.; Moccari, M. Structure of a family of three-step with-memory methods for solving nonlinear equations and their dynamics. Math. Anal. Convex Optim. 2021, 2, 119–137. [Google Scholar]
- Sharma, E.; Panday, S.; Mittal, S.K.; Joit, D.M.; Pruteanu, L.L.; Jäntschi, L. Derivative-free families of with- and without-memory iterative methods for solving nonlinear equations and their engineering applications. Mathematics 2023, 14, 4512. [Google Scholar] [CrossRef]
- Thangkhenpau, G.; Panday, S.; Bolundut, L.C.; Jäntschi, L. Efficient families of multi-point iterative methods and their self-acceleration with memory for solving nonlinear equations. Symmetry 2023, 15, 1546. [Google Scholar] [CrossRef]
- Wu, X.Y. Newton-like method with some remarks. Appl. Math. Comput. 2007, 118, 433–439. [Google Scholar] [CrossRef]
- Wang, H.; Liu, H. Note on a cubically convergent Newton-type method under weak conditions. Acta Appl. Math. 2010, 110, 725–735. [Google Scholar] [CrossRef]
- Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
- Argyros, I.K.; Chen, D.; Qian, Q. The Jarratt method in Banach space setting. J. Comput. Appl. Math. 1994, 51, 103–106. [Google Scholar] [CrossRef]
- Liu, C.S. Computing the eigenvalues of the generalized Sturm–Liouville problems based on the Lie-group SL(2,ℝ). J. Comput. Appl. Math. 2012, 236, 4547–4560. [Google Scholar] [CrossRef]