Abstract
In this paper, two nonlinear variants of the Newton method are developed for solving nonlinear equations. The derivative-free nonlinear one-step iterative scheme of fractional type with fourth-order convergence contains three parameters, whose optimal values are obtained by a memory-dependent updating method. Then, as extensions of the one-step method of linear fractional type, we explore two- and three-step iterative schemes of fractional type, which possess sixth- and twelfth-order convergence when the parameters’ values are optimal; the efficiency indexes are 6^{1/2} ≈ 2.449 and 12^{1/3} ≈ 2.289, respectively. An extra variable is supplemented into the second-degree Newton polynomial for the data interpolation of the two-step iterative scheme of fractional type, and a relaxation factor is accelerated by the memory-dependent method. Three memory-dependent updating methods are developed for the three-step iterative schemes of linear fractional type, whose performance is greatly strengthened. When the first step of the three-step iterative scheme uses the nonlinear fractional type model, the order of convergence is raised to sixteen. The efficiency index also increases to 16^{1/3} ≈ 2.520, and a third-degree Newton polynomial is taken to update the values of the optimal parameters.
Keywords:
nonlinear equation; nonlinear perturbation of Newton method; fractional type iterative schemes; multi-step iterative scheme; memory-dependent method
MSC:
65H05; 41A25
1. Introduction
We consider a second-order nonlinear ordinary differential equation subject to boundary values:
where is a given nonlinear continuous function, and a and b are given constants. We integrate Equation (1), starting from the initial values and , where x is an unknown value determined by in Equation (2), which results in a nonlinear equation:
where is a given continuous function, not necessarily a differentiable function. Therefore, the solutions for Equations (1) and (2) can be obtained by solving an implicit nonlinear equation to find the root x, where is the value of at .
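As a concrete illustration of how the implicit nonlinear equation arises, the following is a minimal sketch of the shooting reduction described above, written for an illustrative boundary value problem u'' = f(t, u, u') with u(a) = α and u(b) = β; the function names, the RK4 integrator, and the sample right-hand side are assumptions for illustration only, not the formulation used in the numerical sections.

```python
import numpy as np

def rk4_integrate(f, t0, t1, y0, n_steps=200):
    """Integrate y' = f(t, y) from t0 to t1 with the classical RK4 method."""
    h = (t1 - t0) / n_steps
    t, y = t0, np.asarray(y0, dtype=float)
    for _ in range(n_steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

def shooting_residual(x, a, b, alpha, beta, rhs):
    """F(x) = u(b; x) - beta, where x is the unknown initial slope u'(a)."""
    def ode(t, y):  # y = [u, u']
        return np.array([y[1], rhs(t, y[0], y[1])])
    u_b = rk4_integrate(ode, a, b, [alpha, x])[0]
    return u_b - beta

# Example: u'' = -u with u(0) = 0, u(1) = 1; the root of F(x) = 0 is the
# initial slope matching the right boundary (exact value 1/sin(1) ≈ 1.1884).
F = lambda x: shooting_residual(x, 0.0, 1.0, 0.0, 1.0, lambda t, u, up: -u)
print(F(1.0), F(1.2))  # F changes sign, so a root lies between 1.0 and 1.2
```

Any of the derivative-free iterative schemes developed below can then be applied to such a residual function, since only values of the function are needed.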
The linear fractional one-step iterative scheme below [1]
is cubically convergent if and , where and . The parameters and can be updated to speed up the convergence by the memory-dependent method [2]. One can refer to [3,4,5,6,7,8,9,10] for more memory-dependent iterative methods.
For the Newton method, there exist some weak points, as pointed out in [11]. In addition, for an odd function with , there exist two-cycle points of the Newton method. Let
be the mapping function of the Newton method. A cyclic point is determined by
with and , which becomes a nonlinear equation:
When is a solution of Equation (7), is also a solution, because of and . The pair are two-cycle points of the Newton method, with the properties and ; hence, and .
For as an instance, it follows from Equation (7) that
whose solutions are and , which are two-cycle points of the Newton method for the function . When the initial guess satisfies , the Newton method is convergent; however, when , the Newton method is divergent.
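Since the example function is not reproduced above, the following minimal sketch demonstrates the same two-cycle phenomenon for the illustrative odd function f(x) = x^3 − 5x, which is an assumption chosen only because its Newton mapping sends 1 to −1 and back.

```python
def f(x):
    return x**3 - 5.0 * x

def df(x):
    return 3.0 * x**2 - 5.0

def newton_map(x):
    """One Newton step: g(x) = x - f(x)/f'(x)."""
    return x - f(x) / df(x)

# Starting exactly on a two-cycle point, the Newton iterates oscillate
# between 1 and -1 forever and never approach a root of f (0 and ±sqrt(5)).
x = 1.0
for n in range(6):
    print(n, x)
    x = newton_map(x)
```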
He et al. [12] proposed an iterative algorithm for approximating a common element of the set of fixed points of a nonexpansive mapping and the solutions of a variational inequality on Hadamard manifolds. They proved that the sequence generated by the suggested algorithm converges strongly to the common solution of the fixed point problem. Later, Zhou et al. [13] built an accurate threshold representation theory and, on that basis, developed a fast and efficient iterative threshold algorithm of log-sum regularization. Note that the log-sum regularization possesses an exceptionally strong capability to solve the sparsity problem.
We plan to develop a new iterative scheme to overcome these drawbacks of the Newton method. The idea behind the development of the new iterative scheme is the SOR technique [14] for the following system of linear equations:
where
and , , and , respectively, represent a diagonal matrix, a strictly upper triangular matrix, and a strictly lower triangular matrix of .
An equivalent linear system obtained from Equations (9) and (10) is
Equation (11) is multiplied by w, and then is added on both sides,
the corresponding iterative form is the SOR [15]:
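The component-wise form of the SOR iteration is not reproduced above; for reference, the following is a minimal sketch of the standard SOR sweep for a linear system Ax = b with relaxation factor w, where the test matrix and tolerance are illustrative assumptions.

```python
import numpy as np

def sor(A, b, w=1.1, x0=None, tol=1e-12, max_iter=500):
    """Component-wise SOR iteration for Ax = b with relaxation factor w."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use already-updated components (j < i) and old components (j > i).
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1.0 - w) * x_old[i] + w * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, k + 1
    return x, max_iter

# Illustrative diagonally dominant system.
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 10.0])
x, iters = sor(A, b, w=1.1)
print(x, iters)
```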
Traub’s technique is a typical method with memory, in which the data that appeared in the previous iteration are adopted in the following iteration [16]:
By giving and and incorporating the memory of , Traub’s iterative scheme can proceed to find the solution of upon convergence. For a recent report on the progress of the memory method with accelerating parameters in the one-step iterative method, one can refer to [2], while for the two-step iterative method, one can refer to [11]. One major goal of the paper is the development of multi-step iterative schemes with a new memory method that determines the accelerating parameters by an updating technique using information at the current step.
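The exact form of the scheme in Equation (14) is not reproduced above; purely as an illustration of how a memory-accelerated parameter works, the following is a minimal sketch of a Steffensen-type derivative-free iteration in the spirit of Traub's with-memory idea, where the particular update gamma_{n+1} = -1/f[x_n, w_n] is an assumption for illustration.

```python
def with_memory_steffensen(f, x0, gamma0=0.01, tol=1e-14, max_iter=50):
    """Derivative-free iteration with a self-accelerating parameter gamma.

    w_n         = x_n + gamma_n * f(x_n)
    x_{n+1}     = x_n - f(x_n) / f[x_n, w_n]
    gamma_{n+1} = -1 / f[x_n, w_n]   (memory: reuse the current divided difference)
    """
    x, gamma = x0, gamma0
    for n in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x, n
        w = x + gamma * fx
        dd = (f(w) - fx) / (w - x)   # divided difference f[x_n, w_n]
        x = x - fx / dd
        gamma = -1.0 / dd            # accelerating parameter for the next step
    return x, max_iter

# Illustrative usage on a simple nonlinear equation.
root, n_iter = with_memory_steffensen(lambda x: x**3 + 4.0 * x**2 - 10.0, x0=1.0)
print(root, n_iter)
```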
The remaining content of the paper is arranged as follows. Two types of one-step iterative schemes are introduced in Section 2 as nonlinear perturbations of the Newton method. Section 3 gives a local convergence analysis of them for obtaining the optimal values of the parameters with fourth-order convergence. In Section 4, we evaluate these two one-step iterative schemes using the optimal values; a memory-dependent technique is developed for updating the optimal values of the nonlinear one-step iterative scheme of fractional type. In Section 5, we develop multi-step iterative schemes of fractional type and give a detailed convergence analysis. Numerical experiments of the fractional type iterative schemes are executed in Section 6. In Section 7, we derive the memory-dependent method for determining the critical parameters in three iterative schemes of linear fractional type, and new updating methods are developed for the linear fractional type three-step iterative scheme. An accelerated two-step memory-dependent iterative scheme is developed in Section 8. A nonlinear three-step iterative scheme of fractional type is developed in Section 9, where an accelerated memory-dependent method based on the Newton interpolant is used to update three critical parameters. Finally, the achievements are summarized in Section 10.
2. Nonlinear Perturbations of Newton Method
The idea leading from Equation (9) to Equations (12) and (13) motivates a nonlinear perturbation of the Newton method, which includes the introduction of a parameter w, adding the same term on both sides, and generating an iterative form. We realize it for Equation (3) as follows. By adding on both sides, Equation (3) can be extended to
is split into , such that
Next, we move to the right-hand side, which results in
where is a weight factor and H is a weight function; and the iterative form is
which results in
It is a new one-step iterative scheme to solve Equation (3), including a parameter and a weight function H, to be assigned. If , Equation (16) is a continuation Newton-like method developed by the author of [17]. If one takes and , the Newton method (NM) is recovered.
Comparing with a third-order iterative scheme, namely the Halley method [18] (HM)
Equation (16) possesses two advantages: the convergence order increases to four, and is not needed.
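For comparison, a minimal sketch of the classical Halley iteration (17) is given below; it needs f, f', and f'' at every step, and the test function is an illustrative assumption.

```python
def halley(f, df, d2f, x0, tol=1e-14, max_iter=50):
    """Classical Halley method: x_{n+1} = x_n - 2 f f' / (2 f'^2 - f f'')."""
    x = x0
    for n in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x, n
        dfx, d2fx = df(x), d2f(x)
        x = x - 2.0 * fx * dfx / (2.0 * dfx**2 - fx * d2fx)
    return x, max_iter

# Illustrative usage on f(x) = x^3 + 4x^2 - 10.
root, n_iter = halley(lambda x: x**3 + 4 * x**2 - 10,
                      lambda x: 3 * x**2 + 8 * x,
                      lambda x: 6 * x + 8, x0=1.0)
print(root, n_iter)
```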
As a variant of Equation (16), we consider the following nonlinear one-step iterative scheme of fractional type:
which includes two parameters and a weight function H, to be assigned. When compared with Equation (17), Equation (18) does not need the differential terms and . For , , and in Equation (18), the resulting iterative scheme was proven to have third-order convergence in [19].
To improve the efficiency of the one-step iterative schemes, many iterative schemes based on two-step and three-step methods were presented in [20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36], and some were based on the multi-composition of functions [37,38,39,40].
3. Convergence Analysis of Equations (16) and (18)
Theorem 1.
The function is sufficiently differentiable on the domain I, and is a simple root with and . With
Equation (16) is of fourth-order convergence.
Proof.
Let r be a simple solution of with and . We suppose that is sufficiently close to r, such that
is a sufficiently small number. Subtracting by Equation (20) renders
Using the Taylor series yields
from which we have
due to and . Then, we have
where Equation (19) was used, and
Theorem 2.
The function is sufficiently differentiable on the domain I, and is a simple root with and . With
Equation (18) is of fourth-order convergence.
4. Numerical Results Based on Theorems 1 and 2
4.1. Numerical Results
In [41], the computed order of convergence (COC) of an iterative scheme is evaluated by
We consider as discussed in Section 1. For Equation (16), using , and , and for Equation (18), using , , , and , we compare the number of iterations (NIs) and the COC with those computed by the Newton method (NM) for different initial guesses, , as shown in Table 1. For the initial guess , the NM is divergent owing to , where is a solution of Equation (8). The many decimal digits quoted here guarantee that is a solution of Equation (8), thus satisfying the error tolerance with .
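The COC formula itself is not reproduced above; the following is a minimal sketch of one common error-ratio form based on three successive errors with respect to a known root (assuming the definition of [41]), applied to illustrative iterates.

```python
import math

def coc(xs, r):
    """Computed order of convergence from the last three approximations in xs,
    assuming the exact root r is known (error-ratio form)."""
    e = [abs(x - r) for x in xs[-3:]]
    return math.log(e[2] / e[1]) / math.log(e[1] / e[0])

# Illustrative iterates of a quadratically convergent method toward r = 1.
iterates = [1.2, 1.02, 1.0002, 1.00000002]
print(coc(iterates, 1.0))  # close to 2 for quadratic convergence
```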
Table 1.
For different methods, the NIs and COCs computed with different initial guesses for .
4.2. A Memory-Updating Method for Equation (18)
We update the values of , , and in Equation (29) for the iterative Scheme (18) by the memory-dependent method. For this purpose, an extra variable is supplemented by
Now, we have and as the current data in memory and and as the updated data. Accordingly, we can construct the second- and third-degree Newton polynomials by
where are data points of ; are data points of ; and
In Equation (29), we take , , and . By Theorem 2, we have a one-step memory-dependent iterative Scheme (18): (i) giving , , and , and , and (ii) calculating for ,
where or can be selected.
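The update formulas displayed above are not reproduced here; the following minimal sketch only illustrates the underlying ingredient, namely how a Newton interpolation polynomial of second or third degree and its derivative can be evaluated from the stored data by divided differences. The function names and the test data are assumptions for illustration.

```python
def divided_differences(xs, fs):
    """Newton divided-difference coefficients for nodes xs and values fs."""
    n = len(xs)
    coef = list(fs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef  # coef[k] = f[x_0, ..., x_k]

def newton_poly_derivative(xs, fs, t):
    """Derivative at t of the Newton interpolation polynomial through (xs, fs)."""
    coef = divided_differences(xs, fs)
    dN = 0.0
    # Differentiate N(t) = sum_k coef[k] * prod_{j<k} (t - xs[j]) by the product rule.
    for k in range(1, len(xs)):
        s = 0.0
        for m in range(k):            # drop one factor at a time
            prod = 1.0
            for j in range(k):
                if j != m:
                    prod *= (t - xs[j])
            s += prod
        dN += coef[k] * s
    return dN

# Illustrative check: a third-degree interpolant of f(x) = x^3 reproduces f'(1) = 3.
xs = [0.8, 0.9, 1.0, 1.1]
fs = [x**3 for x in xs]
print(newton_poly_derivative(xs, fs, 1.0))
```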
We consider , whose solutions are −1, 0, and 1. By using , and with and , Table 2 lists the results for different values of , where E.I. = COC^{1/2} signifies the efficiency index for two evaluations of functions and .
Table 2.
A one-step memory-dependent accelerating technique for the iterative Scheme (18) to solve .
We consider , whose solution is 1.365230013414097. By starting from , using and with and , Table 3 lists the results for different values of .
Table 3.
A one-step memory-dependent accelerating technique for the iterative Scheme (18) to solve .
Table 2 and Table 3 reveal that the values of COC can exceed the theoretical value of four if a good choice of is made for the iterative Scheme (18) with the memory-dependent technique.
We solve Equations (1) and (2) with , , and again by using the one-step memory-dependent accelerating technique for the iterative Scheme (18) with . We take , , and ; through three iterations, we obtain the same results as those shown in Section 4.1; however, the memory-dependent technique converges faster than the scheme without memory.
We consider the Troesch problem [42]:
When is large, a strong singularity appears in the boundary layer near the right end.
We solve Equation (43) with by using the one-step memory-dependent accelerating technique for the iterative Scheme (18) with . We take , , and , and through four iterations, we obtain COC = 3.473, , and the solution with an error at the right end of . In Table 4, we list the numerical results at some points for the case . It is clear that the present method is more accurate than the B-spline method [43].
Table 4.
Comparing the numerical results for the Troesch equation with .
The Bratu problem is used as a benchmark problem to test numerical methods for solving nonlinear boundary value problems [44]:
where is solved from the following equation:
For , and are obtained by the iterative Scheme (18) with NI = 4 and NI = 3, respectively. When we apply the NM to solve Equation (46), we cannot find the second solution, and for the first solution, it uses up 22 iterations.
We solve Equation (44) by using the one-step memory-dependent accelerating technique for the iterative Scheme (18) with . For the first solution, we take , , and , and through three iterations, we obtain the slope ; is the maximum error obtained for . For the second solution, we take , , and , and through four iterations, we obtain the slope ; is the maximum error of .
5. Multi-Step Iterative Schemes of Fractional Type
As the extensions of Equation (4), we propose the following fractional type two-step iterative scheme:
as well as the fractional type three-step iterative scheme:
Theorem 3.
The function is sufficiently differentiable on the domain I, and is a simple root with and . If
then the iterative Scheme (4) has third-order convergence.
Proof.
Let
In view of Equation (50), is a function of ; hence, F is deemed to be a function of . It follows from Equations (4), (21), and (51) that
where
Invoking the Taylor series for generates the following:
It is apparent from Equations (53) and (54) that
By inserting into the formulas
we have
Then, from Equations (56) and (57), it follows that
which, with the help of Equations (49) and (60), generates
Hence, Equation (55) becomes
and from Equation (52), we can obtain
The third-order convergence is thus proven. □
Theorem 4.
The function is sufficiently differentiable on the domain I, and is a simple root with and . Under the conditions in Equation (49), the iterative Scheme (47) has sixth-order convergence, and the iterative Scheme (48) has twelfth-order convergence.
Proof.
When is a small quantity, we have the following series:
We rewrite Equation (62) as
Then, from the first one in Equation (47) and Equations (51), (65), and (20), it follows that
where
We expand around r, with , as follows:
Inserting Equations (66) and (68) into the second one in Equation (47) yields
owing to Equations (20) and (49), it becomes
Upon using Equations (64) and (67), we can obtain
Thus, we complete the proof that the iterative Scheme (47) has sixth-order convergence.
Inserting Equations (66) and (68) into the second one in Equation (48) and using Equations (49) and (64) yields
Then, we expand around the root r in terms of the Taylor series:
Upon inserting Equations (72) and (73) into the last one in Equation (48), we have
which, owing to Equations (20), (49), (64), and (67), becomes
The twelfth-order convergence of Equation (48) is thus proven. □
It should be noted that the following three-step iterative scheme
has ninth-order convergence [45]. The first step is obtained from the Halley method in Equation (17). In Equation (76), six evaluations of functions are required, such that E.I. = 9^{1/6} ≈ 1.442, which is the same as that of the Halley method. For Equation (48) with and , E.I. = 12^{1/3} ≈ 2.289 is much larger than that of Equation (76).
There are two main reasons why the E.I.s of the iterative Schemes (47) and (48) are 6^{1/2} ≈ 2.449 and 12^{1/3} ≈ 2.289, respectively. The first reason is that the optimal parameters and are used in all steps of Equations (47) and (48). The second reason is that only one new function is used in the second step of Equation (47), and only two new functions and are used in the second and third steps of Equation (48). Therefore, when Equation (47) takes the optimal parameters’ values, its E.I. is 6^{1/2}; only two function evaluations, and , are required. When Equation (48) takes the optimal parameters’ values, its E.I. is 12^{1/3}; only three function evaluations, , , and , are required.
6. Numerical Experiments Based on Theorems 3 and 4
Equations (4), (47), and (48) are sequentially labeled as Algorithms 1–3. The requirement of is mandatory for the convergence of most algorithms that include derivative terms, like the Newton method. In order to investigate the applicability of Algorithms 1–3 under this condition, we consider a simple case of , where is a double root and . By taking and and starting from , Algorithm 1 converges in three iterations, while both Algorithms 2 and 3 converge in two iterations. Even for , where is a triple root and , with and , and starting from , Algorithm 1 converges in six iterations, Algorithm 2 in four iterations, and Algorithm 3 in three iterations.
Then, we consider with ; we can compute the COC by Equation (35) in Table 5 to find the root , where and are used in Algorithms 1–3.
Table 5.
For , the COCs computed by different methods; × denotes not computable.
Unlike the NM, which is sensitive to the initial value , the new methods are insensitive to the initial value . For example, we seek another root of by the NM, starting from , which converges to (third root) with two iterations; to with eleven iterations starting from ; and to with six iterations starting from . The new methods converge to with three or four iterations, no matter which initial value is used among .
The other test examples are given by
In Table 6, the NIs are tabulated for solving , starting from . We compare the computed results with those of the Newton method (NM), the Halley method [18], the method of Soheili et al. [46], and the method of Bahgat [47]. The values of the parameters used are and .
Table 6.
For , the NIs for different methods.
In Table 7, the NIs and COCs obtained by Algorithms 1–3 with different parameters of are tabulated. It can be seen that Algorithm 3 is faster than Algorithms 1 and 2. Even though the values of the parameters are not the best ones, Algorithms 2 and 3 converge very fast.
Table 7.
For , a comparison between the number of iterations (NIs) and the COCs for Algorithms 1–3.
Table 8 tabulates NIs obtained by Algorithms 1–3, NM, NNT (the method of Noor et al. [35]), CM (the method of Chun [20]), and NRM (the method of Noor et al. [21]).
Table 8.
A comparison of different methods for the number of iterations.
We evaluate the E.I. = p^{1/m}, as defined by Traub [16], where p is the order of convergence and m is the number of function evaluations per iteration. In Table 9, for different methods, we list the E.I. obtained by Algorithms 1–3, NM, HM (the Halley method [18]), CM (the method of Chun [20]), NRM (the method of Noor et al. [21]), LM (the method of Li [48]), MCM (the method of Milovanovic and Cvetkovic [49]), AM (the method of Abdul-Hassan [50]), and AHHRM (the method of Ahmad et al. [51]).
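Under Traub's definition E.I. = p^{1/m}, the indexes compared in Table 9 can be reproduced directly, as in the small sketch below; the orders and evaluation counts used are those stated in the text, and the labels are illustrative.

```python
# Traub's efficiency index: E.I. = p**(1/m), with p the convergence order and
# m the number of function evaluations per iteration.
def efficiency_index(p, m):
    return p ** (1.0 / m)

# Newton (p=2, m=2), Halley (p=3, m=3), two-step scheme (47) (p=6, m=2),
# three-step scheme (48) (p=12, m=3).
for name, p, m in [("NM", 2, 2), ("HM", 3, 3), ("Eq. (47)", 6, 2), ("Eq. (48)", 12, 3)]:
    print(f"{name}: E.I. = {efficiency_index(p, m):.4f}")
```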
Table 9.
A comparison of different methods for the efficiency index (E.I.).
7. Memory-Dependent Updating Iterative Schemes
In Equations (4), (47), and (48), the values of and are crucial, whose optimal values are and . In this section, we approximate and without using the differentials.
Given and , we take
close to and in Equation (49), where .
In Equations (4), (47), and (48), if and are replaced by and , we can quickly obtain the solution, as shown in Table 10, for , where and .
Table 10.
For with the solution and , a comparison of the number of iterations (NIs) for different methods.
For Algorithm 2, COC = 6.016 is obtained, and for Algorithm 3, COC = 7.077 is obtained. The value COC = 7.077 is much smaller than the theoretical value of 12, as demonstrated in Table 9. However, from Equation (47), there are two function evaluations, and , which, according to the conjecture of Kung and Traub, permit an optimal order of only 2^{2−1} = 2, which is smaller than COC = 6.016. Similarly, from Equation (48), there are three function evaluations, , , and , which, according to the conjecture of Kung and Traub, permit an optimal order of only 2^{3−1} = 4, which is smaller than COC = 7.077.
In Table 11, for different functions, we list the NIs and COC obtained by Algorithms 1–3. For , we obtain the root .
Table 11.
A comparison of Algorithms 1–3 for the NIs and COCs of different functions.
In Algorithm 3, the value of COC, as just mentioned, was 7.077. However, this value is significantly smaller than the theoretically expected value of 12. The main reason behind this discrepancy is that we just computed two rough values of and by Equation (82) with two ad hoc values of and , which are not the optimal values of and . Below, we update the values of and by using the memory-dependent technique to obtain a better approximation of the optimal parameters and . Higher-order data interpolation by a higher-degree polynomial can enhance the convergence order; however, more algebraic operations are needed at the same time. By balancing the number of function evaluations, the algebraic operations, and their impact on the convergence order, we employ polynomial data interpolation up to the second degree to approximate and .
To raise the COC for Algorithm 3, we can update the values and in Equation (82) after the first iteration by proposing the following Algorithm 4, which is depicted by (i) giving , , , and , such that , and computing and by Equation (82), and (ii) calculating for ,
Here, and and E.I. = 1.682 are optimal ones. Table 12 lists the results obtained by Algorithm 4. Some E.I.s are larger than 1.682.
Table 12.
The NIs, COCs and E.I.s for Algorithm 4.
There are some self-accelerating iterative methods for simple roots [3,52,53], which were then extended to self-accelerating techniques for iterative methods for multiple roots [8,54]. In Equations (86)–(88), the self-accelerating technique for and is quite simple compared to those in the literature.
The term in Equation (88) can be computed by the second-order polynomial interpolation:
In doing so, the evaluations of functions are reduced from , , , and to , , and , and the E.I. can be increased. With this modification, the iterative scheme is named Algorithm 5. Table 13 lists the NIs, COCs, and E.I.s obtained by Algorithm 5. For three evaluations of functions, and E.I. = 1.5874 are the optimal ones. However, the values of E.I. in Table 13 are much larger than E.I. = 1.5874. In Table 13, the result of COC = 15.517 for is larger than that given in Table 7 of [3] with COC = 6.9. Here, the presented Algorithm 5 requires three evaluations of functions, but it needed a rough range to be specified that includes the solution as an inner point.
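The second-order interpolation formula referred to above is not reproduced here; as an illustration of the idea, the following minimal sketch estimates a derivative value by differentiating the second-degree Newton interpolant through three already-computed data points, so that no extra function evaluation is spent. The function name and sample data are assumptions.

```python
def derivative_from_three_points(x0, x1, x2, f0, f1, f2, t):
    """Estimate f'(t) from the second-degree Newton interpolant through
    (x0, f0), (x1, f1), (x2, f2); no extra function evaluation is needed."""
    f01 = (f1 - f0) / (x1 - x0)      # first divided differences
    f12 = (f2 - f1) / (x2 - x1)
    f012 = (f12 - f01) / (x2 - x0)   # second divided difference
    # N2(t) = f0 + f01 (t - x0) + f012 (t - x0)(t - x1)
    return f01 + f012 * ((t - x0) + (t - x1))

# Illustrative check with f(x) = x^2: the quadratic interpolant is exact,
# so the estimate at t = 1.5 equals the true derivative 3.0.
print(derivative_from_three_points(1.0, 1.2, 1.6, 1.0, 1.44, 2.56, 1.5))
```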
Table 13.
The NIs, COCs, and E.I.s for Algorithm 5.
An iterative method that uses information from the current and previous iterations is called a method with memory. In addition to the given initial guesses and , and —calculated by Equations (86)–(88) and Equations (89)–(91)—only need the current values of , , , , , and ; hence, we point out that both Algorithms 4 and 5 are without the use of the memory of previous values. Other memory-dependent techniques can be found in [3,52,53,54,55,56].
Moreover, we develop a more advanced updating technique using the information from and the second-order Newton polynomial interpolation, namely Algorithm 6, where we replace and with and , and update them with and . Algorithm 6 is depicted by (i) giving , , , and , such that , and computing and by Equation (82), and (ii) calculating for ,
Table 14 lists the NIs, COCs, and E.I.s obtained by Algorithm 6. All E.I.s are larger than 1.5874. We found that the NI is not sensitive to the initial values of and ; however, we adjust and to make E.I. as large as possible.
Table 14.
The NIs, COCs, and E.I.s for Algorithm 6.
At this point, we have presented three memory-dependent updating techniques for the three-step iterative Scheme (48) of fractional type. Through numerical tests on six nonlinear equations, Algorithms 5 and 6 proved to be better than Algorithm 4.
8. An Accelerated Two-Step Memory-Dependent Method
Instead of Equation (36), we consider
where is to be determined. Equation (98) supplies an extra datum for Equation (47).
Then, we impose an extra condition
to determine . Inserting
into Equation (99) yields
which can be used to update .
The accelerated two-step memory-updating method (ATSMUM) is defined as follows: (i) giving , , , and , and computing and by Equation (82), and (ii) calculating for ,
The ATSMUM reasonably saves the computational cost.
Table 15 lists the results obtained by the ATSMUM. All E.I.s are larger than 1.5874. As designed, tended to be very small values.
Table 15.
The NIs, COCs, and E.I.s for the ATSMUM.
The result of COC = 9.865 for is larger than that given in Table 7 in [3] with COC = 6.9, which requires two extra evaluations of previous functions and .
In order to investigate the effect of on the convergence behavior, we give some testing values of in Equation (98) and do not consider the accelerating technique in Equation (108). Table 16 lists the results without using the accelerating technique in Equation (108). When the parameter in Equation (98) is given by trial and error, the resulting iterative scheme is usually not the optimal one. The particular benefit of the memory-dependent technique in accelerating the relaxation factor of the iterative scheme can be seen in the increased values of the COC and E.I.
Table 16.
The NIs, COCs, and E.I.s without using the accelerating technique in Equation (108).
9. Nonlinear Fractional Type Three-Step Iterative Scheme
As an extension of Equation (48), we propose the following nonlinear fractional type three-step iterative scheme:
Theorem 5.
The function is sufficiently differentiable on the domain I, and is a simple root with and . If
then the iterative Scheme (109) has sixteenth-order convergence.
Proof.
In Equations (109) and (110), we let , and , which are updated by a third-degree Newton interpolant , so that we have a three-step Algorithm 7 with a memory-updating method: (i) giving , , , , and , such that , and computing and by Equation (82), and (ii) calculating for ,
where we can take , or .
We consider again. By starting from , using and with , and , we determine that NI = 3, COC = 19.74, and E.I. = 2.703 are much larger than the values in Table 3.
Table 17 lists the NIs, COCs, and E.I.s obtained by Algorithm 7. For and , two iterations are enough to obtain the exact solution, so that the COC and E.I. are not defined, since they need at least three data points.
Table 17.
The NIs, COCs, and E.I.s for Algorithm 7.
10. Conclusions
A nonlinear perturbation of the fixed-point Newton method was derived; it includes two parameters, and a weight function permits fourth-order convergence with three critical parameters. We developed a one-step memory-dependent iterative scheme by updating the optimal values of the parameters with a third-degree Newton polynomial; a supplementary variable was computed for the data interpolation. The E.I. was over 2, which is better than some two-step fourth-order convergent iterative schemes modified from the Newton method, whose E.I. is 4^{1/3} ≈ 1.587. We derived a one-step iterative scheme of fractional type in Algorithm 1. Because the order of convergence is three and the efficiency index is three, we simply extended Algorithm 1 to two-step and three-step iterative schemes, namely Algorithms 2 and 3. It is interesting that the orders of convergence are largely increased to six and twelve, and the E.I.s become 6^{1/2} ≈ 2.449 and 12^{1/3} ≈ 2.289, respectively. For the two-step iterative scheme, the relaxation factor was accelerated, and its performance was very good. Moreover, for the nonlinear three-step iterative scheme of fractional type, the E.I. becomes 16^{1/3} ≈ 2.520. We developed three memory-dependent updating techniques to gradually obtain the optimal values of the critical parameters for the three-step iterative scheme of fractional type. Through several numerical tests, listed and compared in the tables presented herein, we revealed that the new methods can find the solution quickly, without using the information of the differentials, and have better convergence efficiency and performance than most methods in the literature. The proposed iterative schemes are cost-saving, with low computational complexity: two function evaluations and some algebraic operations for updating the optimal parameters are required for the two-step iterative scheme of fractional type, and three function evaluations and some algebraic operations are required for the three-step iterative scheme of fractional type. These iterative schemes are especially attractive for practical use owing to their simple structures, and they only need the implicit form of the nonlinear equation; neither the explicit form of the function nor the differential term is required, which can be a great advantage in many practical engineering applications.
Author Contributions
Conceptualization, C.-S.L. and C.-W.C.; methodology, C.-S.L. and C.-W.C.; software, C.-S.L. and C.-W.C.; validation, C.-S.L. and C.-W.C.; formal analysis, C.-S.L. and C.-W.C.; investigation, C.-S.L. and C.-W.C.; resources, C.-W.C.; data curation, C.-W.C.; writing—original draft preparation, C.-S.L.; writing—review and editing, C.-W.C.; visualization, C.-W.C.; supervision, C.-S.L. and C.-W.C.; project administration, C.-W.C.; funding acquisition, C.-W.C. All authors have read and agreed to the published version of the manuscript.
Funding
This work was financially supported by the National Science and Technology Council [grant number NSTC 112-2221-E-239-022].
Data Availability Statement
The data presented in this study are available on request from the corresponding authors.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Liu, C.S.; Hong, H.K.; Lee, T.L. A splitting method to solve a single nonlinear equation with derivative-free iterative schemes. Math. Comput. Simul. 2021, 190, 837–847. [Google Scholar] [CrossRef]
- Liu, C.S.; Chang, C.W.; Kuo, C.L. Memory-accelerating methods for one-step iterative schemes with Lie-symmetry method solving nonlinear boundary value problem. Symmetry 2024, 16, 120. [Google Scholar] [CrossRef]
- Džunić, J. On efficient two-parameter methods for solving nonlinear equations. Numer. Algorithms 2012, 63, 549–569. [Google Scholar]
- Lotfi, T.; Soleymani, F.; Noori, Z.; Kılıçman, A.; Khaksar Haghani, F. Efficient iterative methods with and without memory possessing high efficiency indices. Discret. Dyn. Nat. Soc. 2014, 2014, 912796. [Google Scholar] [CrossRef]
- Wang, X. An Ostrowski-type method with memory using a novel self-accelerating parameter. J. Comput. Appl. Math. 2018, 330, 710–720. [Google Scholar] [CrossRef]
- Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Dynamics of iterative families with memory based on weight functions procedure. Appl. Math. Comput. 2019, 354, 286–298. [Google Scholar] [CrossRef]
- Torkashvand, V.; Kazemi, M.; Moccari, M. Structure a family of three-step with-memory methods for solving nonlinear equations and their dynamics. Math. Anal. Convex Optim. 2021, 2, 119–137. [Google Scholar]
- Thangkhenpau, G.; Panday, S.; Mittal, S.K.; Jäntschi, L. Novel parametric families of with and without memory iterative methods for multiple roots of nonlinear equations. Mathematics 2023, 11, 2036. [Google Scholar] [CrossRef]
- Sharma, E.; Panday, S.; Mittal, S.K.; Joit, D.M.; Pruteanu, L.L.; Jäntschi, L. Derivative-free families of with- and without-memory iterative methods for solving nonlinear equations and their engineering applications. Mathematics 2023, 11, 4512. [Google Scholar] [CrossRef]
- Thangkhenpau, G.; Panday, S.; Bolundut, L.C.; Jäntschi, L. Efficient families of multi-point iterative methods and their self-acceleration with memory for solving nonlinear equations. Symmetry 2023, 15, 1546. [Google Scholar] [CrossRef]
- Liu, C.S.; Chang, C.W. New memory-updating methods in two-step Newton’s variants for solving nonlinear equations with high efficiency index. Mathematics 2024, 12, 581. [Google Scholar] [CrossRef]
- He, H.; Peng, J.; Li, H. Iterative approximation of fixed point problems and variational inequality problems on Hadamard manifolds. UPB Sci. Bull. Ser. A 2022, 84, 25–36. [Google Scholar]
- Zhou, X.; Liu, X.; Zhang, G.; Jia, L.; Wang, X.; Zhao, Z. An iterative threshold algorithm of log-sum regularization for sparse problem. IEEE Trans. Circuits Syst. Video Technol. 2023, 33, 4728–4740. [Google Scholar] [CrossRef]
- Liu, C.S.; El-Zahar, E.R.; Chang, C.W. Dynamical optimal values of parameters in the SSOR, AOR and SAOR testing using the Poisson linear equations. Mathematics 2023, 11, 3828. [Google Scholar] [CrossRef]
- Hadjidimos, A. Successive overrelaxation (SOR) and related methods. J. Comput. Appl. Math. 2000, 123, 177–199. [Google Scholar] [CrossRef]
- Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: New York, NY, USA, 1964. [Google Scholar]
- Wu, X.Y. A new continuation Newton-like method and its deformation. Appl. Math. Comput. 2000, 112, 75–78. [Google Scholar] [CrossRef]
- Halley, E. A new exact and easy method for finding the roots of equations generally and without any previous reduction. Philos. Trans. R. Soc. Lond. 1694, 8, 136–147. [Google Scholar]
- Liu, C.S.; El-Zahar, E.R.; Chang, C.W. A two-dimensional variant of Newton’s method and a three-point Hermite interpolation: Fourth- and eighth-order optimal iterative schemes. Mathematics 2023, 11, 4529. [Google Scholar] [CrossRef]
- Chun, C. Iterative methods improving Newton’s method by the decomposition method. Comput. Math. Appl. 2005, 50, 1559–1568. [Google Scholar] [CrossRef]
- Noor, M.A.; Noor, K.I.; Al-Said, E.; Waseem, M. Some new iterative methods for nonlinear equations. Math. Probl. Eng. 2010, 2010, 198943. [Google Scholar] [CrossRef]
- Morlando, F. A class of two-step Newton’s methods with accelerated third-order convergence. Gen. Math. Notes 2015, 29, 17–26. [Google Scholar]
- Saqib, M.; Iqbal, M. Some multi-step iterative methods for solving nonlinear equations. Open J. Math. Sci. 2017, 1, 25–33. [Google Scholar] [CrossRef]
- Qureshi, U.K. A new accelerated third-order two-step iterative method for solving nonlinear equations. Math. Theory Model. 2018, 8, 64–68. [Google Scholar]
- Ali, F.; Aslam, W.; Ali, K.; Anwar, M.A.; Nadeem, A. New family of iterative methods for solving nonlinear models. Discret. Dyn. Nat. Soc. 2018, 2018, 1–12. [Google Scholar] [CrossRef]
- Zhanlav, T.; Chuluunbaatar, O.; Ulziibayar, V. Generating function method for constructing new iterations. Appl. Math. Comput. 2017, 315, 414–423. [Google Scholar] [CrossRef]
- Qureshi, S.; Soomro, A.; Shaikh, A.A.; Hincal, E.; Gokbulut, N. A novel multistep iterative technique for models in medical sciences with complex dynamics. Comput. Math. Methods Med. 2022, 2022, 7656451. [Google Scholar] [CrossRef] [PubMed]
- Argyros, I.K.; Regmi, S.; John, J.A.; Jayaraman, J. Extended convergence for two sixth order methods under the same weak conditions. Foundations 2023, 3, 127–139. [Google Scholar] [CrossRef]
- Chanu, W.H.; Panday, S.; Thangkhenpau, G. Development of optimal iterative methods with their applications and basins of attraction. Symmetry 2022, 14, 2020. [Google Scholar] [CrossRef]
- Noor, M.A.; Noor, K.I. Three-step iterative methods for nonlinear equations. Appl. Math. Comput. 2006, 183, 322–327. [Google Scholar]
- Noor, M.A.; Noor, K.I. Some iterative schemes for nonlinear equations. Appl. Math. Comput. 2006, 183, 774–779. [Google Scholar]
- Noor, M.A. New iterative schemes for nonlinear equations. Appl. Math. Comput. 2007, 187, 937–943. [Google Scholar] [CrossRef]
- Noor, K.I. New family of iterative methods for nonlinear equations. Appl. Math. Comput. 2007, 190, 553–558. [Google Scholar] [CrossRef]
- Noor, K.I.; Noor, M.A. Predictor-corrector Halley method for nonlinear equations. Appl. Math. Comput. 2007, 188, 1587–1591. [Google Scholar]
- Noor, M.A.; Noor, K.I.; Mohyud-Din, S.T.; Shabbir, A. An iterative method with cubic convergence for nonlinear equations. Appl. Math. Comput. 2006, 183, 1249–1255. [Google Scholar] [CrossRef]
- Noor, M.A.; Noor, K.I.; Waseem, M. Fourth-order iterative methods for solving nonlinear equations. Int. J. Appl. Math. Eng. Sci. 2010, 4, 43–52. [Google Scholar]
- Jain, P. Steffensen type methods for solving nonlinear equations. Appl. Math. Comput. 2007, 194, 527–533. [Google Scholar]
- Cordero, A.; Hueso, J.L.; Martinez, E.; Torregrosa, J.R. Steffensen type methods for solving nonlinear equations. J. Comput. Appl. Math. 2012, 236, 3058–3064. [Google Scholar] [CrossRef]
- Soleymani, F.; Hosseinabadi, V. New third- and sixth-order derivative-free techniques for nonlinear equations. J. Math. Res. 2011, 3, 107–112. [Google Scholar] [CrossRef]
- Hafiz, M.A.; Bahgat, M.S.M. Solving nonsmooth equations using family of derivative-free optimal methods. J. Egypt. Math. Soc. 2013, 21, 38–43. [Google Scholar] [CrossRef]
- Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
- Roberts, S.M.; Shipman, J.S. On the closed form solution of Troesch’s problem. J. Comput. Phys. 1976, 21, 291. [Google Scholar] [CrossRef]
- Khuri, S.A.; Sayfy, A. Troesch’s problem: A B-spline collocation approach. Math. Comput. Model. 2011, 54, 1907–1918. [Google Scholar] [CrossRef]
- Abbasbandy, S.; Hashemi, M.S.; Liu, C.S. The Lie-group shooting method for solving the Bratu equation. Commun. Nonlinear Sci. Numer. Simul. 2011, 16, 4238–4249. [Google Scholar] [CrossRef]
- Qureshi, S.; Ramos, H.; Soomro, A.K. A new nonlinear ninth-order root-finding method with error analysis and basins of attraction. Mathematics 2021, 9, 1996. [Google Scholar] [CrossRef]
- Soheili, A.R.; Ahmadian, S.A.; Naghipoor, J. A family of predictor-corrector methods based on weight combination of quadratures for solving nonlinear equations. Int. J. Nonlinear Sci. 2008, 6, 29–33. [Google Scholar]
- Bahgat, M.S.M. New two-step iterative methods for solving nonlinear equations. J. Math. Res. 2012, 4, 128–131. [Google Scholar] [CrossRef]
- Li, S. Fourth-order iterative method without calculating the higher derivatives for nonlinear equation. J. Algorithms Comput. Technol. 2019, 13, 1–8. [Google Scholar] [CrossRef]
- Milovanovic, G.V.; Cvetkovic, A.S. A note on three-step iterative methods for nonlinear equations. Stud. Univ. Babes-Bolyai Math. 2007, 3, 137–146. [Google Scholar]
- Abdul-Hassan, N.Y. Two new predictor-corrector iterative methods with third- and ninth-order convergence for solving nonlinear equations. Math. Theory Model. 2016, 6, 44–56. [Google Scholar]
- Ahmad, F.; Hussain, S.; Hussain, S.; Rafiq, A. New twelfth order J-Halley method for solving nonlinear equations. Open Sci. J. Math. Appl. 2013, 1, 1–4. [Google Scholar]
- Wang, X. A family of Newton-type iterative methods using some special self-accelerating parameters. Int. J. Comput. Math. 2018, 95, 2112–2127. [Google Scholar] [CrossRef]
- Jain, P.; Chand, P.B. Derivative free iterative methods with memory having higher R-order of convergence. Int. J. Nonlinear Sci. Numer. Simul. 2020, 21, 641–648. [Google Scholar] [CrossRef]
- Zhou, X.; Liu, B. Iterative methods for multiple roots with memory using self-accelerating technique. J. Comput. Appl. Math. 2023, 428, 115181. [Google Scholar] [CrossRef]
- Džunić, J.; Petković, M.S.; Petković, L.D. Three-point methods with and without memory for solving nonlinear equations. Appl. Math. Comput. 2012, 218, 4917–4927. [Google Scholar]
- Džunić, J.; Petković, M.S. On generalized multipoint root-solvers with memory. J. Comput. Appl. Math. 2012, 236, 2909–2920. [Google Scholar]