Abstract
The local convergence analysis of the m-step Newton-Jarratt composite scheme of order 2m+1 has been given previously. However, the convergence order was obtained using Taylor series and assumptions requiring the existence of at least the fifth derivative of the mapping involved, a derivative which does not appear in the method. These assumptions limit the applicability of the method. Moreover, no a priori error estimates, radius of convergence, or uniqueness-of-solution results were given. These drawbacks are addressed in this paper. In particular, the convergence analysis is based only on the operators appearing in the method, namely the operator itself and its first derivative. Moreover, the radius of convergence is established, a priori estimates are derived, and the isolation of the solution is discussed using generalized continuity assumptions on the derivative. Furthermore, the more challenging semi-local convergence analysis, not previously studied, is presented using majorizing sequences. Both analyses depend on generalized continuity assumptions on the Jacobian of the mapping involved, which are used to control the derivative and sharpen the error distances. Numerical examples validate the sufficient convergence conditions presented in the theory.
MSC:
65H10; 47J25; 49M15
1. Introduction
The problem of solving nonlinear equations and systems of nonlinear equations is a fundamental and challenging task in numerical analysis with significant applications in many branches of science and engineering. Since closed-form or analytical solutions are rarely available, research has focused heavily on developing and analyzing iterative methods to approximate a solution to an equation of the form
where F is a Fréchet-differentiable operator defined on a convex subset D of a Banach space with values in a Banach space B. The solution is typically found as the limit of a sequence generated by an iterative method. The results obtained here hold for any norm denoted by . There are a variety of iterative methods for solving such equations and nonlinear systems [1,2,3,4,5]. The most well-known is Newton’s method, which is renowned for its quadratic convergence under standard assumptions [6]. To achieve a higher order of convergence, numerous modified Newton- or Newton-like methods have been proposed in the literature [7,8,9,10]. Among the most efficient and widely studied families are multipoint methods, which can attain a high order of convergence while keeping the number of function evaluations low. In this context, the Newton-Jarratt family and its variants are particularly prominent, offering fourth-order convergence or higher [11,12]. The development of even more efficient high-order multipoint methods continues to be an active area of research [10,13,14,15,16,17,18,19].
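Since Newton's method is the baseline against which the higher-order schemes discussed here are measured, a minimal self-contained Python sketch may be helpful; the 2×2 test system and the starting point below are our own illustrative choices, not taken from the paper:

```python
# Newton's method for the illustrative system
#   F(x, y) = (x^2 + y^2 - 4, x*y - 1) = 0.
# The 2x2 linear system J d = F is solved by Cramer's rule.

def F(v):
    x, y = v
    return (x * x + y * y - 4.0, x * y - 1.0)

def J(v):
    x, y = v
    return ((2.0 * x, 2.0 * y), (y, x))

def newton(v, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        f1, f2 = F(v)
        (a, b), (c, d) = J(v)
        det = a * d - b * c
        # Newton correction d = J^{-1} F via Cramer's rule
        dx = (f1 * d - b * f2) / det
        dy = (a * f2 - f1 * c) / det
        v = (v[0] - dx, v[1] - dy)
        if max(abs(f1), abs(f2)) < tol:
            break
    return v

root = newton((2.0, 0.5))
```

Under standard assumptions the iteration converges quadratically; here it reaches machine precision in a handful of steps.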
Recently, Sharma and Kumar [20] introduced an efficient and elegant class of Newton-Jarratt-like methods. Their approach involves a multi-step scheme designed to increase the order of convergence systematically. As described in their work, the key innovation is that each additional step raises the order of convergence by two at the cost of only a single additional function evaluation. Furthermore, the computationally expensive inverse of the Fréchet derivative is calculated only once per iteration and then reused in all subsequent sub-steps. This design, which they term a method with a 'frozen inverse operator', makes the algorithm highly efficient, especially for large systems of equations.
We provide a complete theoretical framework covering both local and the more challenging semi-local convergence. The local analysis establishes a radius of convergence and gives uniqueness results, improving the study in [20] through larger convergence domains and tighter error bounds under our weaker hypotheses. Critically, and as a significant extension of that work, we also present the first semi-local convergence analysis for this class of methods, using majorizing sequences. Owing to this analysis, we can guarantee convergence from a given initial point without assuming the existence of a solution beforehand. This is crucial for practical applications, where the existence of a solution is often not known. The theoretical results are then validated through numerical examples.
We extend the applicability of the multi-step method introduced in [20], which is defined for a natural number m, m ≥ 2, and each n = 0, 1, 2, ….
The method begins with a Newton-Jarratt-like predictor step and is then followed by a series of additional steps. It is important to note that the derivative evaluations and the inverse operator are computed only once per iteration. The subsequent steps reuse this fixed operator to iteratively improve the solution, enhancing the order of convergence without requiring additional costly derivative evaluations or inversions. The pseudo-code for the method is provided in Algorithm 1.
Algorithm 1. The m-step Newton-Jarratt method (2).
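To make the 'frozen inverse operator' idea concrete, here is an illustrative scalar Python sketch; the predictor point and the frozen operator A below are our own assumptions for demonstration, not the exact sub-step coefficients of scheme (2), which are given in [20]:

```python
# Illustrative multi-step iteration with a frozen operator (scalar case).
# The derivative information is computed once per outer iteration; the
# m - 1 corrector sub-steps all reuse the single frozen scalar A, so only
# one "inversion" is needed per iteration.

def multistep_frozen(f, fp, x, m=3, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        if abs(f(x)) < tol:
            break
        dx = f(x) / fp(x)                 # classical Newton correction
        y = x - (2.0 / 3.0) * dx          # Jarratt-type intermediate point
        A = 0.5 * (3.0 * fp(y) - fp(x))   # frozen operator (illustrative choice)
        z = x - dx                        # predictor step
        for _ in range(m - 1):            # corrector sub-steps reuse A
            z = z - f(z) / A
        x = z
    return x

# Example: real root of f(t) = t^3 + t - 3
root = multistep_frozen(lambda t: t**3 + t - 3.0,
                        lambda t: 3.0 * t**2 + 1.0,
                        x=1.0, m=3)
```

The point of the structure is that adding corrector sub-steps costs one extra residual evaluation each, while the expensive factorization (here, the scalar A) is shared by all of them.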
However, there are a number of problems with the Taylor series approach which restrict the applicability of the method (2).
- Motivation for our paper
- (P1)
- The existence of at least the fifth derivative is assumed in [20] to show the local convergence, provided that , where m is a natural number. Let us consider , and define the function f accordingly; here the coefficients are real numbers satisfying the stated constraints. It follows by the definition of the function f that is a solution of the equation . But the fifth derivative of the function f is not bounded, since it is not continuous at . Therefore, the results in [20] cannot assure the convergence of the method to . However, the method converges to if we take . Thus, the sufficient convergence conditions in [20] can be weakened. It is also worth noting that only F and F′ appear in the method.
- (P2)
- There is no knowledge in advance about the natural number K for which the prescribed error tolerance is met. Thus, the number of iterations K is unknown.
- (P3)
- Information about the isolation of is not available.
- (P4)
- The most important and challenging semi-local convergence is not given.
- (P5)
- The convergence is established only for .
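Regarding (P1), a classical function of this type from the local-convergence literature (our hedged reconstruction; the paper's exact parameters are elided in the present copy) is

```latex
f(t) =
\begin{cases}
t^{3}\ln t^{2} + t^{5} - t^{4}, & t \neq 0,\\
0, & t = 0,
\end{cases}
\qquad D = \left[-\tfrac{1}{2}, \tfrac{3}{2}\right],
```

with $f(1)=0$; here $f'''(t) = 6\ln t^{2} + 60t^{2} - 24t + 22$ is already unbounded as $t \to 0$, so no boundedness assumption on the higher derivatives can hold on $D$.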
The problems (P1)–(P5) appear in studies utilizing the Taylor series approach. In this paper, we handle these problems in order to extend the applicability of the method.
- Novelty of our Paper
- (P1)′
- The local convergence is based on the operators F and F′, which are the only operators appearing in the method. Moreover, generalized continuity assumptions [21] are used to control the derivative and sharpen the error distances.
- (P2)′
- The number of iterations K is known in advance, since a priori error estimates become available.
- (P3)′
- Domains are determined containing only one solution.
- (P4)′
- The semi-local convergence is developed using majorizing sequences [4,21,22,23].
- (P5)′
- The local and semi-local convergence is given in the more general setting of a Banach space. Notice that although our technique is applied to method (2), it can also be used on other methods exhibiting problems (P1)–(P5), along the same lines [11,12,15,22,24,25,26,27,28,29,30,31,32,33,34,35,36].
The rest of this paper is organized as follows. Section 2 is devoted to the local convergence analysis of method (2), where we establish a radius of convergence and provide uniqueness results under our weaker hypotheses. The more demanding semi-local convergence analysis, based on majorizing sequences, is presented in Section 3. Section 4 presents numerical results that validate the theoretical conditions and demonstrate the performance of the method on several problems. Finally, we provide concluding remarks in Section 5.
2. Local Convergence
The analysis uses some scalar functions. Set .
- Suppose:
- (C1)
- There exists a continuous and nondecreasing function such that the function has a smallest zero in the interval . We shall denote such a zero by and set .
- (C2)
- There exists a continuous and nondecreasing function such that the associated functions, defined by the formulas displayed, have smallest zeros in the interval . We shall denote such zeros accordingly and set r to be the smallest of them. This parameter shall be shown to be the radius of convergence for the method (2) in Theorem 1. It follows from the definition of r that the stated inequalities hold at each point of the interval. The real functions involved, in particular w, relate to the operators in the method (2).
- (C3)
- There exists a solution of the equation and an invertible operator such that for eachDefine the region .
- (C4)
- for each .
- (C5)
- .
Remark 1.
Some choices of M are the identity operator, the derivative F′ evaluated at some auxiliary point other than the solution, or F′ evaluated at the solution itself. In the last case, it follows by the condition (C3) that the solution is a simple solution of the equation. The main local convergence theorem for the method (2) follows next.
Theorem 1.
Suppose that conditions ()–() hold. Then, the sequence generated by the method (2) converges to , provided that . Moreover, the following items hold:
where
and for
Proof.
Notice that by the definition of the radius r, (4), (7)–(9), it follows that r, c, d exist and belong to the interval . We shall show by induction that for each
and for
It follows by (13) and the Banach lemma on invertible operators that is invertible and
In particular, for , the operator is invertible. So, the iterates are well defined by the method (2). Then, by the first substep, we can write in turn
We need estimates
by (),
so
Then, by (3), (6) (for ), (14) (for ), (16) and (17), identity (15) can give
Thus, the iterate and item (10) hold if . Then, by the second substep of the method (2), we get in turn that
Notice that
or
Hence, the identity (19) can give
So, the iterate and the item (11) holds if . Then, we can write for by method (2) if
In view of (3), (6) (for ) and (20)–(22), identity (23) can give
Thus, the iterate and the items (12) hold for the current indices. Simply exchange the corresponding quantities in the preceding calculations to complete the induction for items (10)–(12). Notice also that all the iterates of the method (2) belong to the stated ball. Moreover, the existence of d is guaranteed by (3)–(6) and items (10)–(12). Finally, by passing to the limit in (7), we deduce the convergence to the solution. □
The following result provides a region inside which the only solution of the equation is .
Proposition 1.
Suppose the condition () holds in the ball for some and there exists such that
Define the region . Then, the only solution of the equation in the region is .
Proof.
Suppose that there exists a solution of the equation such that . Define the linear operator . Then, it follows by the condition () and (25) that
Therefore, the operator is invertible. Finally, from the identity
we deduce that . □
Remark 2.
- (1)
- The limit point r can be replaced by in the condition ().
- (2)
- Under all the conditions ()–() one can set and in the Proposition 1.
3. Semi-Local Convergence
The choice of the initial point is challenging in the general setting of a Banach space. In the special case when one can certainly employ the bisection method [2,3]:
- (a)
- The midpoint is taken as an approximation to the solution .
- (b)
- The interval is replaced by the half-interval on which the function changes sign. The convergence of this method can then always be guaranteed.
One can refer to [2,3] and the references therein for more information on the bisection method and its assistance in the determination of initial points.
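A minimal sketch of this bisection strategy for producing an initial point on the real line (the specific test function is our own illustration, not from the paper):

```python
import math

def bisection(F, a, b, tol=1e-10):
    """Bracketing method: requires a sign change of F on [a, b]."""
    fa = F(a)
    while (b - a) > tol:
        c = 0.5 * (a + b)          # midpoint as the current approximation
        fc = F(c)
        if fa * fc <= 0.0:         # sign change in [a, c]: keep the left half
            b = c
        else:                      # otherwise the root lies in [c, b]
            a, fa = c, fc
    return 0.5 * (a + b)

# Initial point for a Newton-type method, bracketed on [0, 1]:
x0 = bisection(lambda t: math.cos(t) - t, 0.0, 1.0)
```

The midpoint returned in this way is then handed to the faster Newton-type iteration, whose convergence is only local.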
We shall first recall the definition of a majorizing sequence [4,21,22,23].
Definition 1.
Let be a sequence in a complete normed linear space. Suppose that there exists a nonnegative scalar sequence such that for each
It is worth noting that if the inequality above holds, then . Hence, the scalar sequence is nonnegative. Then, the sequence is majorizing for . Moreover, if the sequence converges to , then so does and
where .
As in the local case, the analysis relies on some scalar functions. Moreover, the formulas and calculations are the same, but , , and w are replaced by , , and v, respectively.
- Suppose:
- (H1)
- There exists a continuous and nondecreasing function such that the function has a smallest zero in the interval . Let us denote such a zero by s and set .
- (H2)
- There exists a continuous and nondecreasing function . Define the scalar sequences, for the given initial values, by the recurrences displayed. These sequences are shown to be majorizing for the sequence generated by method (2). But let us first present a convergence result for them.
- (H3)
- There exists such that for each ,
- (H4)
- There exists and an invertible operator such that for eachDefine the region . By this condition and () it follows that for : . So the linear operator is invertible. Therefore, we can take .
- (H5)
- .
- (H6)
- .
Remark 3.
Possible selections for or , with .
- The semi-local analysis for the method (2) follows next.
Theorem 2.
Suppose the conditions ()–() hold. Then, the sequence generated by method (2) exists and converges to some such that
Proof.
We shall establish, by induction, items (29)–(32) for each n.
Notice that for , (31) gives by the method (2) and (27) that
Item (29) holds if by the definition of in () and (27), since
Moreover, the iterate . Let . Then, we have by () and ()
so
and iterates for exist by the method (2) for .
Then, we can write by the second substep and induction
which can give
where we also used the estimates
We also have
So, the iterate .
By the first substep of the method (2), we have the identity
leading to
so by the third, fourth …, i-th substep of the method (2)
and
Thus, the items (31) hold and the iterate .
Given the first substep for n exchanged by , we have then
so,
Then, it also follows by (27), (33) (for ) and the first substep of the method (2)
and
The induction for items (29)–(32) is completed and all iterates belong to . By the triangle inequality (32) gives
Therefore, the sequence is Cauchy in the Banach space and, as such, it converges to some limit point (since the ball is a closed set). Moreover, by passing to the limit in (35) and using the continuity of the operator F, we conclude that the limit point solves the equation. Finally, by passing to the limit in (36), we show item (28). □
As in the local case, a domain is determined with only one solution of the equation .
Proposition 2.
Suppose there exists a solution of the equation for some , the condition holds in the ball and there exists such that
Define the domain . Then, z is the only solution of the equation in the domain .
Proof.
Suppose there exists a solution of the equation such that . Define the linear operator
Then, it follows by () and (37) that
So L is invertible, and from
we conclude that . □
Remark 4.
- (1)
- The limit point can be replaced by s in the condition ().
- (2)
- Under all the conditions ()–() we can take and in the Proposition 2.
- (3)
- The sufficient semi-local convergence conditions ()–() are very general. Clearly, if the functions and v are specialized more, concrete results can be obtained, which include the rate and order of convergence. But in this paper, we wanted to minimize the limitations of our approach.
4. Numerical Results
In this section, we present numerical results to validate the theoretical analysis of the preceding sections. We apply the proposed m-step method (2) to several nonlinear systems of equations to demonstrate its efficiency and confirm its high order of convergence. The numerical experiments were performed in C++ on a machine equipped with an Intel(R) Core(TM) i7-10510U CPU @ 1.80 GHz. For more precise calculations, we used the Boost Multiprecision Library, which allowed us to employ high-precision floating-point arithmetic, configured to 200 digits, so that the computed results are close to the theoretical ones.
To validate our results, we tested the implemented approach on some test examples. As in paper [20], for all test cases, the iteration process is terminated when the following condition is met:
For each example, we track the error at each iteration. To numerically verify the theoretical convergence rate, we use the Computational Order of Convergence (COC), which is approximated using the following formula based on four consecutive iterations [17]:
where and are iterates close to the final solution.
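Assuming the standard four-iterate approximation (our reading of the COC formula of [17]), a short Python sketch that recovers the expected second order for Newton's method on a scalar example:

```python
import math

def coc(x0, x1, x2, x3):
    """Approximate COC from four consecutive iterates (x3 is the most recent)."""
    e1, e2, e3 = abs(x1 - x0), abs(x2 - x1), abs(x3 - x2)
    return math.log(e3 / e2) / math.log(e2 / e1)

# Generate iterates with Newton's method on f(t) = t^2 - 2 (order 2 expected).
xs = [1.5]
for _ in range(4):
    t = xs[-1]
    xs.append(t - (t * t - 2.0) / (2.0 * t))

rho = coc(*xs[-4:])
```

In double precision the estimate is reliable only while the step sizes stay above rounding level, which is why iterates close to, but not at, the final solution are used.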
We tested the method (2) on several examples used in [20], and also added some general functions, suggested in [28], that are commonly used for testing algorithms that solve systems of nonlinear equations. We used the same initial approximations as in the aforementioned articles. Numerical results are presented for the following problems:
Example 1.
A system of 2 equations given by: with initial approximation (see [32]).
Example 2.
A system of 3 equations [16] given by: with initial approximation .
Example 3.
A system of n equations [33] given by:
with initial approximation , where the solution is: for . We select .
Example 4.
A system of three equations:
, .
Example 5.
Discrete boundary value function [29]
where , , , and , where . We take .
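A hedged reconstruction of this test function, following the standard definition in [29] (the boundary conditions, the step h = 1/(n+1), the nodes t_i = ih and the starting point x_i = t_i(t_i − 1) below are our reading of that source):

```python
def discrete_bvp_residual(x):
    """Discrete boundary value function of [29] (our reconstruction):
    f_i(x) = 2*x_i - x_{i-1} - x_{i+1} + h^2*(x_i + t_i + 1)^3 / 2."""
    n = len(x)
    h = 1.0 / (n + 1)
    xe = [0.0] + list(x) + [0.0]     # boundary values x_0 = x_{n+1} = 0
    F = []
    for i in range(1, n + 1):
        t = i * h
        F.append(2.0 * xe[i] - xe[i - 1] - xe[i + 1]
                 + 0.5 * h * h * (xe[i] + t + 1.0) ** 3)
    return F

def standard_start(n):
    """Standard starting point x_i = t_i * (t_i - 1)."""
    h = 1.0 / (n + 1)
    return [i * h * (i * h - 1.0) for i in range(1, n + 1)]

F0 = discrete_bvp_residual(standard_start(5))
```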
Example 6.
Discrete integral equation function [29]
where , , , and , where . We take .
Example 7.
Broyden tridiagonal function [28]
where , . We take .
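A hedged reconstruction of this test function from [28] (our reading of the standard form; verify against the source):

```python
def broyden_tridiagonal(x):
    """Broyden tridiagonal function (our reconstruction of the form in [28]):
    f_i(x) = (3 - 2*x_i)*x_i - x_{i-1} - 2*x_{i+1} + 1."""
    n = len(x)
    xe = [0.0] + list(x) + [0.0]     # boundary values x_0 = x_{n+1} = 0
    return [(3.0 - 2.0 * xe[i]) * xe[i] - xe[i - 1] - 2.0 * xe[i + 1] + 1.0
            for i in range(1, n + 1)]

F0 = broyden_tridiagonal([-1.0] * 3)   # standard starting point (-1, ..., -1)
```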
Example 8.
Broyden banded function [28].
where , , and . We take .
The results for each problem are summarized in Table 1, showing the total number of iterations required to meet the stopping criterion (k), the elapsed time (e-time), the norm of the difference between successive approximations, the norm of the residual, and the calculated COC. This enables a comprehensive comparison of the method's performance for various values of m. From this table, we can conclude that the computational order of convergence is quite close to the theoretical one for almost all examples (the theoretical order of convergence for m = 2, 3, 4 is 5, 7, and 9, respectively). The error and residual norms decrease even below the prescribed accuracy. Increasing the value of m does not always lead to better performance time, since it may not always reduce the number of iterations. For instance, in Example 8 there is no need to increase m from 3 to 4, since the iteration count is not reduced; however, it is worthwhile to increase it from 2 to 3. In Example 6, in contrast, increasing m from 3 to 4 leads to a significant reduction in execution time.
Table 1.
Performance of the m-step Newton-Jarratt method (2) (Examples 1–8).
We also prepared plots showing the values of the previously mentioned norms at each iteration on a logarithmic scale (see Figure 1 and Figure 2). The convergence of the method (2) is shown graphically for different values of m, with a comparison across all examples (colors represent different examples). The lines in Figure 1 show how quickly the residual norm approaches zero, in other words, how quickly the algorithm converges to an approximate solution with the prescribed accuracy. As can be seen, increasing m from 2 to 4 leads either to faster convergence or to a smaller error. For instance, Example 4 requires one fewer iteration for each increase of m by one. Example 7 does not reduce the number of iterations when m is increased from 3 to 4, but the residual norm is clearly closer to zero. From these plots we can also conclude that the most significant step the algorithm makes is at the last iteration. A similar situation is shown in Figure 2. Notice that the convergence plots for Example 5 and Example 6 are identical for all tested values of m (m = 2, 3, 4).
Figure 1.
Convergence Analysis of for different values of m.
Figure 2.
Convergence Analysis of for different values of m.
We have also provided an example of a nonlinear equation involving a mapping where the majorant functions and w are determined without recourse to high-order differentiability (see Example 9). Moreover, this example provides error bounds on the distances that involve Lipschitz constants. Notice that such a priori bounds are not available in [20] or other studies using Taylor series.
Example 9.
Let us consider the mapping for , defined by
Then, notice that solves the equation . Moreover, the Fréchet derivative of the mapping F is given by
It follows that . We shall take . Furthermore, the conditions (), () and () are validated if we choose , and . For this case, we can calculate the radius of convergence. In the case , we must calculate , and using the formulas from (). After obtaining these functions, we find the smallest zeros and then the radius of convergence r. Notice that there are two options for choosing M:
We calculated the parameters and the radius r for both options and collected the results in Table 2, which shows that one selection of M yields a greater radius of convergence than the other.
Table 2.
Radii of Convergence for the Newton-Jarratt Method (2).
We also wanted to test our method on some ordinary and partial differential equations. For this purpose, we take the Lane–Emden equation [30], which reads as
and Klein–Gordon equation [26]:
We compare results with the multi-step version of the Newton–Raphson method (NRM) presented in [35]:
We also employ the Chebyshev pseudo-spectral collocation method for temporal and spatial discretizations, as was done in the previously mentioned article:
for (39), and
for (40).
We tested this setting in a slightly different environment; we did not use a high-precision library. We made this choice so that our results would be more representative of a realistic computing environment relying on standard hardware floating-point types. We also set the dimension of the system to 500 for (39) and 4420 for (40) in order to test our method on high-dimensional problems. We used the same initial approximation and domain as in [35]. For (39), the method (2) converges to the solution with the stated accuracy after 16 steps, while Newton-Raphson needs 23. For (40), the Newton-Jarratt-like method needed only three steps to converge with that accuracy, while Newton-Raphson took four. The results of the described experiments are presented in Table 3 and Table 4, in the same format as in [35] for comparison purposes. Note that each iteration of the method (2) is more expensive than an NRM iteration, since it requires an additional evaluation of the Jacobian, matrix-vector multiplications, and the solution of a linear system whose right-hand side is a matrix. In the case of the partial differential equation (40), this plays a crucial role: both algorithms converge in a small number of steps, which also leads to the same practical order of convergence, and because of these costly operations it takes more time for the method (2) to converge to the stated accuracy. However, in the case of (39), the order of convergence of the method (2) is higher than that of NRM, and this matters more for the total execution time; so, in this case, the method (2) takes less time to converge.
5. Conclusions
In this work, we have presented a significant extension of the convergence analysis for the class of multi-step Newton-Jarratt-like methods of order 2m+1 initially proposed in [20]. While the computational efficiency of their scheme is noteworthy, its practical application was limited by a theoretical framework that relied on strong conditions, namely the existence and boundedness of derivatives up to the fifth order. Other limitations of the Taylor series approach, and how to handle them, can be found in items (P1)–(P5) and (P1)′–(P5)′ of the Introduction.
The primary contribution of this paper is the removal of these restrictive assumptions. We have developed a new convergence analysis using only conditions on the first Fréchet derivative. This approach, based on generalized Lipschitz-type continuity, not only confirms the local convergence of the method but does so under a much weaker and more practical set of hypotheses. This significantly broadens the range of nonlinear problems to which this efficient method can be applied with confidence.
Furthermore, we have established the first semi-local convergence analysis for this family of methods. By employing the concept of majorizing sequences, we provide criteria that guarantee convergence from an initial guess without prior knowledge of the existence of a solution. This is a crucial step forward, as it provides a practical tool for verifying the method’s applicability to real-world problems where the solution’s location is unknown.
The theoretical results were rigorously validated through numerical experiments. We expanded the range of challenging examples for testing the method (2) by using the large-scale problems from [28], thereby demonstrating the effectiveness of the method. The computational order of convergence (COC) was found to be in close agreement with the theoretical order. We also tested the method on ordinary and partial differential equations and compared the results with an existing, well-known method.
In summary, by providing a more robust theoretical foundation with both local and semi-local convergence under weaker assumptions, our work substantially extends the applicability and utility of this powerful class of Newton-Jarratt-like methods for solving systems of nonlinear equations. The ideas in this paper shall be used to extend the applicability of other methods in our future research [11,12,15,22,24,25,26,27,28,29,30,31,32,33,34,35,36].
Author Contributions
Conceptualization, I.K.A., S.S. and M.S.; methodology, I.K.A., S.S. and M.S.; software, I.K.A., S.S. and M.S.; validation, I.K.A., S.S. and M.S.; formal analysis, I.K.A., S.S. and M.S.; investigation, I.K.A., S.S. and M.S.; resources, I.K.A., S.S. and M.S.; data curation, I.K.A., S.S. and M.S.; writing—original draft preparation, I.K.A., S.S. and M.S.; writing—review and editing, I.K.A., S.S. and M.S.; visualization, I.K.A., S.S. and M.S.; supervision, I.K.A., S.S. and M.S.; project administration, I.K.A., S.S. and M.S.; funding acquisition, I.K.A., S.S. and M.S. All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
No new data were created or analyzed in this study. Data sharing is not applicable to this article.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Argyros, I.K.; Shakhno, S. Extended Local Convergence for the Combined Newton-Kurchatov Method Under the Generalized Lipschitz Conditions. Mathematics 2019, 7, 207. [Google Scholar] [CrossRef]
- Costabile, F.; Gualtieri, M.; Capizzano, S. An iterative method for the computation of the solutions of nonlinear equations. Calcolo 1999, 36, 17–34. [Google Scholar] [CrossRef]
- Nisha, S.; Parida, P.K. An improved bisection Newton-like method for enclosing simple zeros of nonlinear equations. SeMA J. 2015, 72, 83–92. [Google Scholar] [CrossRef]
- Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
- Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Hoboken, NJ, USA, 1964. [Google Scholar]
- Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1960. [Google Scholar]
- Darvishi, M.T.; Barati, A. Super cubic iterative methods to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 188, 1678–1685. [Google Scholar] [CrossRef]
- Homeier, H.H.H. A modified Newton method with cubic convergence: The multivariate case. J. Comput. Appl. Math. 2004, 169, 161–169. [Google Scholar] [CrossRef]
- Noor, M.A.; Saleem, M. Some iterative methods for solving a system of nonlinear equations. Comput. Math. Appl. 2009, 57, 101–106. [Google Scholar] [CrossRef]
- Xiao, X.; Yin, H. A new class of methods with higher order of convergence for solving systems of nonlinear equations. Appl. Math. Comput. 2016, 264, 300–309. [Google Scholar] [CrossRef]
- Behl, R.; Cordero, A.; Motsa, S.S.; Torregrosa, J.R. Stable high-order iterative methods for solving nonlinear models. Appl. Math. Comput. 2017, 303, 70–80. [Google Scholar] [CrossRef]
- Cordero, A.; Feng, L.; Magreñán, Á.A.; Torregrosa, J.R. A new fourth-order family for solving nonlinear problems and its dynamics. J. Math. Chem. 2015, 53, 893–910. [Google Scholar] [CrossRef]
- Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. Increasing the convergence order of an iterative method for nonlinear systems. Appl. Math. Lett. 2012, 25, 2369–2374. [Google Scholar] [CrossRef]
- Esmaeili, H.; Ahmadi, M. An efficient three-step method to solve system of non linear equations. Appl. Math. Comput. 2015, 266, 1093–1101. [Google Scholar]
- Sharma, J.R.; Sharma, R.; Bahl, A. An improved Newton-Traub composition for solving systems of nonlinear equations. Appl. Math. Comput. 2016, 290, 98–110. [Google Scholar]
- Sharma, J.R.; Arora, H. An efficient derivative-free family of seventh order methods for systems of nonlinear equations. SeMA J. 2016, 73, 39–75. [Google Scholar] [CrossRef]
- Weerakoon, S.; Fernando, T.G.I. A Variant of Newton’s Method with Accelerated Third-Order Convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
- Xiao, X.Y.; Yin, H.W. Increasing the order of convergence for iterative methods to solve nonlinear systems. Calcolo 2016, 53, 285–300. [Google Scholar]
- Xiao, X.; Yin, H. Accelerating the convergence speed of iterative methods for solving nonlinear systems. Appl. Math. Comput. 2018, 333, 8–19. [Google Scholar] [CrossRef]
- Sharma, J.R.; Kumar, S. A class of accurate Newton–Jarratt-like methods with applications to nonlinear models. Comput. Appl. Math. 2022, 41, 46. [Google Scholar] [CrossRef]
- Argyros, I.K. The Theory and Application of Iteration Methods, 2nd ed.; Taylor and Francis: Boca Raton, FL, USA, 2022. [Google Scholar]
- Argyros, I.K.; Shakhno, S. Extended Two-Step-Kurchatov Method for Solving Banach Space Valued Nondifferentiable Equations. Int. J. Appl. Comput. Math. 2020, 6, 2. [Google Scholar] [CrossRef]
- Argyros, I.K.; Shakhno, S.; Yarmola, H. Two-Step Solver for Nonlinear Equations. Symmetry 2019, 11, 128. [Google Scholar] [CrossRef]
- Alzahrani, A.K.H.; Behl, R.; Alshomrani, A.S. Some higher-order iteration functions for solving nonlinear models. Appl. Math. Comput. 2018, 334, 80–93. [Google Scholar] [CrossRef]
- Cordero, A.; Lotfi, T.; Bakhtiari, P.; Mahdiani, K.; Torregrosa, J.R. Some new efficient multipoint iterative methods for solving nonlinear systems of equations. Int. J. Comput. Math. 2015, 92, 1921–1934. [Google Scholar]
- Jang, T.S. An integral equation formalism for solving the nonlinear Klein–Gordon equation. Appl. Math. Comput. 2014, 243, 322–338. [Google Scholar] [CrossRef]
- Madhu, K.; Elango, A.; Landry, R.J.; Al-arydah, M. New Multi-Step Iterative Methods for Solving Systems of Nonlinear Equations and Their Application on GNSS Pseudorange Equations. Sensors 2020, 20, 5976. [Google Scholar] [CrossRef]
- Moré, J.J.; Garbow, B.S.; Hillstrom, K.H. Testing Unconstrained Optimization Software. ACM Trans. Math. Softw. 1981, 7, 17–41. [Google Scholar] [CrossRef]
- Moré, J.J.; Cosnard, M.Y. Numerical solution of nonlinear equations. ACM Trans. Math. Softw. 1979, 5, 64–85. [Google Scholar] [CrossRef]
- Motsa, S.S.; Shateyi, S. New Analytic Solution to the Lane–Emden Equation of Index 2. Math. Probl. Eng. 2012, 614796. [Google Scholar] [CrossRef]
- Regmi, S. Optimized Iterative Methods with Applications in Diverse Disciplines; Nova Science Publisher: New York, NY, USA, 2021. [Google Scholar]
- Sharma, J.R.; Arora, H. Improved Newton-like methods for solving systems of nonlinear equations. SeMA J. 2016, 74, 147–163. [Google Scholar] [CrossRef]
- Sharma, J.R.; Arora, H. Efficient derivative-free numerical methods for solving systems of nonlinear equations. Comput. Appl. Math. 2016, 35, 269–284. [Google Scholar] [CrossRef]
- Sharma, J.R.; Gupta, P. An efficient fifth order method for solving systems of nonlinear equations. Comput. Math. Appl. 2014, 67, 591–601. [Google Scholar] [CrossRef]
- Ullah, M.Z.; Serra-Capizzano, S.; Ahmad, F.; Al-Aidarous, E.S. Higher order multi-step iterative method for computing the numerical solution of systems of nonlinear equations: Application to nonlinear PDEs and ODEs. Appl. Math. Comput. 2015, 269, 972–987. [Google Scholar] [CrossRef]
- Usman, M.; Iqbal, J.; Khan, A.; Ullah, I.; Khan, H.; Alzabut, J.; Alkhawar, H.M. A new iterative multi-step method for solving nonlinear equation. MethodsX 2025, 15, 103394. [Google Scholar] [CrossRef] [PubMed]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).