Abstract
A new family of iterative methods for solving systems of nonlinear equations is introduced, extending and unifying several existing approaches within a general framework. In the single-variable case, Newton’s method was accelerated using Padé approximations. This strategy is then extended to systems of nonlinear equations, leading to the construction of higher-order iteration functions with convergence orders ranging from two to four. By varying the order (p, q) of the Padé approximation, several known methods and their modifications emerge as special cases of this generalized approach. Theoretical convergence rates are established, and numerical experiments, conducted using high-precision arithmetic, confirm both the predicted behavior and improved computational efficiency across a variety of test problems.
MSC:
65H05
1. Introduction
A new family of iterative methods for numerical solution of systems of nonlinear equations, achieving orders of convergence from two to four, is developed and presented. The main contribution of this work is in the generalization and extension of several well-known approaches, including methods originally formulated for single-variable nonlinear equations [1].
Methods that accelerate Newton’s method are commonly encountered in the literature. Multistep methods of order three, four, and higher can be constructed using weight functions [1,2,3,4]. The convergence order of classical methods has been improved through numerous approaches, including those of Chebyshev, Halley, King, Newton–Steffensen, Ostrowski, and Potra–Pták; see [1] and references therein.
Methods that avoid second- and higher-order derivatives are of particular interest for nonlinear equations [1,5,6,7,8,9]. Likewise, approaches that optimize or eliminate function, matrix, and Jacobian evaluations, while emphasizing computational efficiency, are especially relevant in the context of nonlinear systems [10,11,12,13,14,15,16]. For example, in [15], the authors propose a parametric fourth-order method that reduces evaluation complexity by employing divided differences in place of full Jacobian evaluations. In [17], a parametric family of fourth-order Chebyshev–Halley type methods for systems of nonlinear equations is introduced. Fourth-order convergence is achieved in just two steps, without requiring second derivatives. In [18], a one-parameter family of iterative methods for solving nonlinear systems is proposed that avoids Jacobian computation by using divided differences. Fourth-order convergence is attained with minimal computational cost, requiring only one function evaluation, one divided difference, and one matrix inversion per iteration. Emphasis is placed on maximizing computational efficiency and balancing convergence order with operation count. In [19], a new class of frozen Jacobian multi-step iterative methods is presented, requiring only one Jacobian evaluation and LU decomposition per iteration. The two-step method achieves third-order convergence, while the three-step method is of order four. In [20], a new fourth-order parametric family is introduced. This two-step iterative method achieves fourth-order convergence using a single Jacobian inversion per iteration. The method depends on one free parameter, allowing for tuning in terms of stability and performance. The concept of maximally efficient iterative schemes is introduced in [21], defined as methods that attain the highest possible efficiency index for a given convergence order. The work builds on the Cordero–Torregrosa conjecture, which generalizes Kung–Traub’s optimality principle to vector-valued systems and helps define new metrics for method comparison. The approach is validated on large-scale nonlinear systems.
A central feature of this work is the use of Padé approximations to construct higher-order iteration functions. The well-known Cauchy method for a single equation $f(x)=0$ is defined as
\[
x_{k+1} = x_k - \frac{2}{1+\sqrt{1-2L_f(x_k)}}\,\frac{f(x_k)}{f'(x_k)},
\]
where
\[
L_f(x_k) = \frac{f(x_k)\,f''(x_k)}{f'(x_k)^{2}} .
\]
However, the method involves a square root and a second derivative, which can make the iterative process costly or impractical. To address this, the square-root term is replaced with a rational approximant, and a suitable substitute for $L_f$ is chosen. A Padé approximation of order (p, q) is employed as the rational approximant. By varying the order (p, q) of the Padé approximation, one obtains a family of methods, including extensions of several classical root-finding methods to the setting of systems of nonlinear equations.
We establish the theoretical convergence rates of the proposed methods and validate them through high-precision numerical experiments, demonstrating both accuracy and computational efficiency.
The paper is structured as follows: Section 2 presents the development of the new family of methods, based on the generalization of single-variable techniques to nonlinear systems. Section 3 provides the convergence analysis. Section 4 illustrates several well-known methods that fall within our proposed family. Section 5 offers numerical examples, along with a discussion and comparison of computational complexity between the proposed methods and other recently developed approaches.
Table 1 summarizes the third- and fourth-order methods cited in references, including their convergence order and computational complexity, expressed in terms of function evaluations, first derivative evaluations, and matrix inversions or linear system solutions performed in each iterative step.
Table 1.
Comparison of methods by the number of function evaluations and matrix inversions.
2. Development of the Methods
We consider a family of iterative methods for solving systems of nonlinear equations $F(x)=0$, where $F : D \subseteq \mathbb{R}^{n} \to \mathbb{R}^{n}$ is assumed to be continuously differentiable on an open convex set $D$. We also assume that there exists $x^{*} \in D$ such that $F(x^{*})=0$, and that the Fréchet derivative $F'(x)$ is continuous and nonsingular at $x^{*}$.
Let be an matrix, and let be defined as the Padé approximation of order to the function at 0. Padé approximations for are shown in Table 2.
Table 2.
Padé approximations for .
The matrices and are defined as follows. Let polynomials and denote the numerator and the denominator of the function :
By evaluating these polynomials at the matrix , we obtain the matrices
where denotes the identity matrix. Based on and , we define the matrix as
This way, the Padé approximation is associated with the matrix .
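As an illustration of this construction (a minimal Mathematica-style sketch, not the notation of the paper), suppose the numerator and denominator polynomials of the chosen Padé approximant are given by hypothetical coefficient lists pCoeffs and qCoeffs, ordered from the constant term upward. The matrix value of the approximant can then be formed with a linear solve in place of an explicit inverse:

```mathematica
(* Evaluate a polynomial with coefficients {c0, c1, ..., cm} at a square matrix A;
   MatrixPower[A, 0] yields the identity matrix of matching size. *)
matrixPoly[coeffs_List, A_?MatrixQ] :=
  Sum[coeffs[[k + 1]] MatrixPower[A, k], {k, 0, Length[coeffs] - 1}]

(* Matrix value of the (p, q) Pade approximant, taken here as Q(A)^(-1).P(A);
   LinearSolve avoids forming the inverse of Q(A) explicitly. *)
padeAtMatrix[pCoeffs_List, qCoeffs_List, A_?MatrixQ] :=
  LinearSolve[matrixPoly[qCoeffs, A], matrixPoly[pCoeffs, A]]
```

For example, padeAtMatrix[{1}, {1}, A] returns the identity matrix, corresponding to the trivial (0, 0) approximant, for which the resulting iteration reduces to Newton’s method.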
From now on, we consider a family of iterative methods defined by
with the iteration function of the form
where
The proposed family (PM) can also be written as
Theorem 1, which establishes the convergence of the proposed family of methods, is stated and proved in the following section.
3. Convergence Analysis
Theorem 1.
Assume that the vector function is k-times Fréchet differentiable in a convex set D containing a solution $x^{*}$ of $F(x)=0$, and that the Fréchet derivative $F'(x)$ is continuous and nonsingular at $x^{*}$. Then the family of methods (9), (10), and (11) has order of convergence two for (p, q) = (0, 0), three for (p, q) = (1, 0) or (0, 1), and four otherwise. The corresponding asymptotic error constants are for , and in other cases, as in Table 3, with
and
Table 3.
Coefficients for .
Proof of Theorem 1.
Let us consider the proposed family of methods (9), (10), and (11). For the sake of simplicity, let us omit the indices p, q by and , and let us denote . Now, with given by (12), we have
where
It directly follows that
and the following relations are obtained:
Since
from (14) and (16), the following is derived:
where
This leads to the following:
The Padé approximation of order (p, q) is a rational function whose Taylor expansion agrees with the ordinary series expansion up to order p + q. Different choices of the parameters p and q yield different rational approximants. For our discussion, the following approximation for is of interest:
with different values of a, b, and d depending on p and q. The form (18) is chosen to facilitate a simpler and more transparent analysis.
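In general, writing the approximant as a ratio of polynomials of degrees at most p and q (our notation, not necessarily that of (18)), the defining property is

\[
f_{p,q}(s) = \frac{P_p(s)}{Q_q(s)}, \qquad f(s) - f_{p,q}(s) = O\!\left(s^{\,p+q+1}\right) \quad \text{as } s \to 0 .
\]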
Let and . From (15) and (18), we obtain the following by direct multiplication:
Two cases are distinguished. In the first case, when , the corresponding values of a, b, and d are listed in Table 4. In the second case, for , the parameters take the fixed values and .
Table 4.
Coefficients associated with (18).
For each choice of p, q, the following expressions are obtained from (19):
With from (13), we obtain asymptotic error constants from Table 3.
The order of convergence of the presented family of methods is four for all (p, q) other than (0, 0), (1, 0), and (0, 1). In the remaining three cases, for (p, q) = (0, 0) (Newton’s method) the order of convergence is two, and for (p, q) = (1, 0) and (0, 1) the order of convergence is three. □
4. Some Special Cases of the Family
Some well-known methods for solving nonlinear equations are special cases of our family for simple roots [1]. Their extensions to systems of nonlinear equations are likewise special cases of our family of methods for such systems. This can be illustrated with three fourth-order methods: the first from [39], the second from [35,39], and the third being Jarratt’s method.
4.1. First Method
The general form of the first method [39] is given by
Using , we directly obtain
The method (20) belongs to the proposed family (6) for and . According to Theorem 1, the order of convergence is four.
4.2. Second Method
The general form of the second method [35,39] is given by
Using , we obtain
The method (21) belongs to our family (6) for and . According to Theorem 1, the order of convergence is four.
4.3. Jarratt’s Method
The general form of Jarratt’s method [2] is
Using , we obtain
The method (22) belongs to our family (6) for p = 1 and q = 1. According to Theorem 1, the order of convergence is four.
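For reference, the classical two-step Jarratt scheme for systems can be written, in a commonly used notation that may differ from (22), as

\[
\begin{aligned}
y^{(k)} &= x^{(k)} - \tfrac{2}{3}\,F'(x^{(k)})^{-1}F(x^{(k)}),\\
x^{(k+1)} &= x^{(k)} - \tfrac{1}{2}\,\bigl[3F'(y^{(k)}) - F'(x^{(k)})\bigr]^{-1}\bigl[3F'(y^{(k)}) + F'(x^{(k)})\bigr]\,F'(x^{(k)})^{-1}F(x^{(k)}),
\end{aligned}
\]

which requires one function evaluation, two Jacobian evaluations, and linear solves in place of the inverses.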
5. Numerical Examples
All computations were carried out using high-precision arithmetic with up to 20,000 digits in Mathematica 8. For each group of nonlinear systems, we used the exit criterion from the respective original paper. Initial values were likewise taken from the corresponding original paper for each system; they are listed either with the systems or in tables.
The proposed family of methods is denoted by PM. In all cases, the computational order of convergence of our methods was very close to the corresponding theoretical order of convergence, which supports the theoretical results obtained in this paper; the theoretical order of convergence is the one established in Theorem 1. Members of the proposed family with third-order convergence are denoted by PM(3), and those with fourth-order convergence by PM(4).
Every iterate is obtained from the previous one by adding one or more terms of the form $M^{-1}v$, where $M$ is a real matrix and $v$ is a vector. The matrix $M$ and the vector $v$ differ across the methods used, but in every case the action of $M^{-1}$ is obtained by solving the corresponding linear system with Gaussian elimination with partial pivoting, rather than by forming the inverse explicitly. This part of the calculation can be delegated to standard linear solvers, such as the LinearSolve function in Mathematica.
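As a minimal Mathematica-style sketch (with hypothetical routines F and J returning the residual vector and the Jacobian matrix at a point; this is not the full PM iteration), such a correction term can be computed as:

```mathematica
(* A correction of the form M^(-1).v, obtained by solving M.z == v;
   the inverse of M is never formed explicitly. *)
correction[M_?MatrixQ, v_List] := LinearSolve[M, v]

(* Example: a plain Newton step, x - J(x)^(-1).F(x). *)
newtonStep[F_, J_, x_List] := x - correction[J[x], F[x]]
```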
Let
Using
we obtain
Since it holds that
from (9), (10), and (11), it follows that
and
Within the procedure, each iteration involves two Jacobian evaluations and one function evaluation, accompanied by two LU factorizations used to construct the LM and FB operators, for PM members with fourth-order convergence. Subsequently, three linear systems are solved—two using the LM operator and one using the FB operator. These computational steps are comparable to the profiles presented in Table 1.
For q = 0 and any p, the FB operator is omitted, resulting in only two linear system solves per iteration. Nevertheless, for p > 1, the method retains fourth-order convergence, consistent with the theoretical results.
The computational cost of the PM family can be compared with the fourth-order method (2) from [17], which will be referred to as KC. Both the PM family and the KC method share the same structure, as both involve one function evaluation, two Jacobian evaluations, and the solution of two linear systems. The first step is identical in both methods. The distinction appears in the second step, where the involved matrices are different. In the general case of the KC method, matrices are raised to the second or third power, except in the special case for , where no matrix powers are needed. In PM, the maximum matrix exponent is limited to . Additionally, for q = 0, the PM family requires one fewer linear system solution. Notably, both KC and PM include Jarratt’s method as a special case.
Following the analysis outlined in [18], the total number of products and quotients required by fourth-order PM family methods can be expressed as follows:
while the fourth-order KC methods require a similar number of operations, depending on the value of the parameter a:
In Figure 1, the operational efficiency index , as defined in [18], is compared across different parameter values for the PM(4) and KC methods. It can be concluded that the computational cost of our proposed family lies within the range of the computational costs of the KC family.
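Indices of this kind are conventionally of the form $I = \rho^{1/C}$, where $\rho$ is the convergence order and $C$ is the per-iteration cost counted in evaluations, products, and quotients; we assume this standard convention here, without restating the precise cost counts of [18].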
Figure 1.
Operational efficiency index comparison between PM(4) and KC methods.
In the presented examples, the equation was solved using test functions from [10,11,12,17,18,19,20,31,33,35,36,40,41,42] with corresponding starting values.
Example 1.
We consider the following nonlinear systems of equations: system (1f), appearing in [10,12,20,36], system (1g), also appearing in [42], and the remaining systems, taken from [33]:
| (1a) | , , |
| (1b) | , |
| (1c) | , |
| (1d) | , , |
| (1e) | , |
| (1f) | , for odd n, , , |
| (1g) | , |
Exact solutions are denoted as or, if there are two exact solutions, as and . In cases where the exact solution was not available, we used approximations.
The computational order of convergence (COC) was calculated as
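A commonly used approximation based on three consecutive iterates, which we assume here, is

\[
\mathrm{COC} \approx \frac{\ln\bigl(\lVert x^{(k+1)} - x^{(k)}\rVert \,/\, \lVert x^{(k)} - x^{(k-1)}\rVert\bigr)}{\ln\bigl(\lVert x^{(k)} - x^{(k-1)}\rVert \,/\, \lVert x^{(k-1)} - x^{(k-2)}\rVert\bigr)} .
\]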
In the presented examples, a COC value of 4 was reached after several initial iterations.
Corresponding starting values for these systems of equations are listed in Table 5. The stopping criterion, as in [33], was
Table 5.
Comparison of iteration counts for PM(3), PM(4), and reference methods in Example 1.
Members of the PM family were generated for all considered pairs (p, q).
In Table 5 and Figure 2, we present a comparison of results obtained using the proposed family of methods, PM (with orders of convergence three and four), alongside methods from [33]: CN (order 2), Tr (order 3), and NAd (order 4). For each system, the initial estimate is specified, and the number of iterations (iter) required to satisfy the exit criterion is reported for each method. Results for CN, Tr, and NAd are quoted directly from [33]. Results for PM include the corresponding (p, q) pairs for which the reported number of iterations was obtained. For PM(4), “all” refers to all (p, q) pairs except (0, 1), (1, 0), and (0, 0), since the first two yield third-order and the last yields second-order convergence.
Figure 2.
Comparison of iteration counts across methods and examples.
Overall, the PM methods demonstrate strong stability and require fewer iterations compared to the methods from [33], while also exhibiting higher orders of convergence. When considering only the number of iterations to reach the exit criterion, the third-order method Tr outperforms the third-order PM method in one instance (Example 1d), whereas in all other cases, PM performs equally or better. Compared to NAd, the fourth-order PM methods perform better in 10 cases, require one additional iteration in two cases, and yield identical results in the remaining cases.
Figure 2 shows the iteration counts across methods and examples, derived from the data presented in Table 5. Fourth-order methods are marked in blue, third-order methods in green, and Newton’s method in red.
Example 2.
We consider nonlinear systems of equations from [35]. Example (2b) also appears in [39].
| (2a) | , , |
| (2b) | , , |
| (2c) | , , |
The following stopping criterion was used:
Table 6 compares the members of the proposed family, PM(3) and PM(4), with the methods NM, AM(3), AM(4), and NR; the examples and the results reported for these reference methods are taken from [35].
Table 6.
Comparison of iteration counts for PM(3), PM(4), and reference methods in Example 2.
NM refers to Newton’s method. AM(3) is the third-order method free from second derivatives, defined by (2.7) in [35], while AM(4) is the improved fourth-order Arithmetic Mean Newton method given by (3.5) in [35].
NR denotes the iterative scheme defined by (1.1) in [35]. PM(3) and PM(4) are third- and fourth-order members of the proposed family, with specific values of (p, q).
Observing only the number of iterations required to satisfy the exit criterion in Table 6, the third-order method, AM, outperforms the third-order PM method in Example (2a). Similarly, in Example (2b), the fourth-order method NR performs better than the fourth-order PM method. In all other cases, PM requires fewer or equal iterations compared to AM and NR while also achieving a smaller error (Err).
Example 3.
We consider Example 1 from [17], which also appears in [3,11]:
| (3a) | . |
Performance of fourth-order methods from PM is compared to the method (2) from [17], referred to as KC, with KC1, KC2, and KC3 denoting the cases with parameter values , , and , respectively. A comparison of the results in Table 7 and Table 8 highlights a favorable performance of the PM(4) methods.
Table 7.
Numerical results for the KC methods from [17] for Example (3a).
Table 8.
Numerical results of the PM(4) family for Example (3a).
Example 4.
We consider Example 5 from [17], also known as the Bratu problem, which also appears in [18]:
Taking and , with an equidistant mesh , and using the standard second-order difference scheme
we obtain the nonlinear system
| (4a) | . |
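In its classical one-dimensional form, which we assume here, the Bratu problem and its central-difference discretization on the mesh $x_i = ih$, $h = 1/n$, read

\[
u'' + \lambda e^{u} = 0, \qquad u(0) = u(1) = 0,
\]
\[
\frac{u_{i-1} - 2u_i + u_{i+1}}{h^{2}} + \lambda e^{u_i} = 0, \qquad i = 1, \dots, n-1, \qquad u_0 = u_n = 0,
\]

so that the unknowns $u_1, \dots, u_{n-1}$ form a nonlinear system of the type considered above.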
Results are compared between the KC1, KC2, and KC3 methods from [17] and the PM(4) methods in the same manner as in Example 3, and are presented in Table 9 and Table 10.
Table 9.
Numerical results for the KC methods from [17] for Example (4a).
Table 10.
Numerical results of the PM(4) family for Example (4a).
Example 5.
The Broyden Banded Function is often used for numerical verification of iterative methods for nonlinear systems of equations. The proposed family of methods is tested on Example 5 from [18]:
| (5a) | . |
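In the standard test-set formulation, which we assume here, the Broyden banded function is given componentwise by

\[
F_i(x) = x_i\bigl(2 + 5x_i^{2}\bigr) + 1 - \sum_{j \in J_i} x_j\bigl(1 + x_j\bigr), \qquad
J_i = \{\, j : j \neq i,\; \max(1, i-5) \le j \le \min(n, i+1) \,\}, \qquad i = 1, \dots, n .
\]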
The dimension of the system and the stopping criterion were the same as in [18]: and .
The performance of several best-performing members of our family is shown in Table 11. The results are presented after three iterations and are comparable to those of the methods listed in Table 5 of [18]. Our family exhibits slightly better performance than the best methods in that table for (p, q) = (4, 2) and (p, q) = (2, 2).
Table 11.
Numerical results of the PM(4) family for Example (5a).
Example 6.
The proposed family of methods is tested on Examples 1 to 4 from [20], using identical initial estimates and stopping criterion, , or . Example (6d) is also studied in [10,12,20,31,36].
| (6a) | |
| (6b) | |
| (6c) | |
| (6d) |
We present the range of parameters that yielded the best results for each example. Notably, our family includes Jarratt’s method as a special case for (p, q) = (1, 1), which was also favorably compared to other methods in [20]. As shown in Table 12, Table 13, Table 14, Table 15 and Table 16, a specific subset of the proposed family exhibits good performance across the tested examples, both in absolute terms and when compared with Jarratt’s method.
Table 12.
Performance of the PM(4) methods for Example (6a), .
Table 13.
Comparison results of the PM(4) methods for Example (6b), .
Table 14.
Comparison results of the PM(4) methods for Example (6c), .
Table 15.
Comparison results of the PM(4) methods for Example (6d), .
Table 16.
Comparison results of the PM(4) methods for Example (6d), .
Example 7.
The performance of the proposed family of methods was evaluated using three problems (Experiments 1 to 3) from [19]. Example (7c) also appears in [40].
| (7a) | |
| (7b) | |
| (7c) |
In [19], the stopping criteria were for Experiment 1, and for Experiments 2 and 3. Using fourth-order methods, these criteria were satisfied after four iterations in most cases, after five iterations in one case, and after three iterations in the remaining ones. Table 17, Table 18 and Table 19 present the results obtained with methods from the PM(4) family after four iterations, which enables comparison with the computations reported in [19]. For each example, the range of parameters that yielded the best results is presented, including the case (p, q) = (1, 1), which corresponds to Jarratt’s method.
Table 17.
Comparison results of PM(4) methods for Example (7a), .
Table 18.
Comparison results of PM(4) methods for Example (7b), .
Table 19.
Comparison results of the PM(4) methods for Example (7c), .
Performance in the case of large nonlinear systems is an important aspect to investigate. We tested several examples on a Windows 11 PC equipped with an Intel Core i5-1135G7 CPU (2.4 GHz) and 8 GB RAM. Example (7c) was analyzed by measuring the CPU time required for the fifth iteration, with , across varying system sizes and different numerical precisions (100, 500, 1000, and 2000 digits), as shown in Figure 3. A similar dependence can be observed in all three cases.
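As a minimal sketch of how such timings can be collected in Mathematica (pmStep is a hypothetical routine performing one iteration of a chosen family member; Timing and SetPrecision are standard functions), one may use:

```mathematica
(* CPU time, in seconds, of one iteration of pmStep started from x0,
   carried out with d-digit working precision. *)
timeIteration[pmStep_, x0_List, d_Integer] :=
  First @ Timing[ pmStep[ SetPrecision[x0, d] ] ]
```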
Figure 3.
CPU time as a function of system size and numerical precision d.
Furthermore, we provide a comparison of the quantity among different fourth-order members of the PM family , computed with a 2000-digit precision, and . The results for Examples (7b) and (7c), each after five iterations, with the initial approximations and , respectively, are shown in Figure 4 and Figure 5. These results highlight the influence of the parameters p and q, i.e., the Padé approximations employed. In Figure 4, we observe that for larger values of the parameters p and q, the norm does not change significantly, whereas the computational time increases.
Figure 4.
Comparison of for PM(4) methods applied to Example (7b).
Figure 5.
Comparison of for PM(4) methods applied to Example (7c).
In both cases, for each choice of the parameters p and q, all methods converged. A comparison of the results in Figure 4 and Figure 5 indicates that, in general, no single member of the proposed family can be regarded as “best”, since convergence depends on the problem, the choice of the initial approximation, and other parameters. It should be noted that for , the PM family member corresponds to a generalization of the Ostrowski method [43] for systems of equations, which exhibits numerical results comparable to those of other members of the PM family. A particular case with q = 0 is noteworthy, since the denominator in (4) then reduces to 1 and the FB operator is omitted, lowering the number of linear solves per iteration from three to two and reducing CPU time.
Figure 6 and Figure 7 present the ratios of CPU times for three iterations of Example (7c), computed with 1000-digit precision, using the PM(4) methods. Figure 6 shows the ratio of CPU time for n = 100 relative to n = 50, while Figure 7 shows the ratio for n = 200 relative to n = 100. Although the value of changes only slightly as the system size increases, the CPU time grows noticeably.
Figure 6.
Ratio of CPU times for n = 100 compared to n = 50 in Example (7c).
Figure 7.
Ratio of CPU times for n = 200 compared to n = 100 in Example (7c).
6. Conclusions
We have presented a family of iterative methods for the numerical solution of systems of nonlinear equations. The methods for nonlinear systems are adapted from the single-variable case, where the original PM framework was first developed and analyzed. By extending the structure and convergence properties of these one-dimensional methods to higher dimensions, we preserve their theoretical foundations while enabling practical application to nonlinear systems.
These methods exhibit convergence orders ranging from second to fourth, and the numerical results align well with the theoretical analysis. Based on both theoretical insights and computational examples, the proposed methods are efficient and practical for solving nonlinear systems.
Notably, cases (1f), (4a), (5a), and (6d) demonstrate that the efficiency of the methods is preserved even for large-scale systems. Several known methods or special cases are included within our family, and many other established methods are closely related, as illustrated throughout the paper. It is likely that additional methods may also be interpreted as members or near variants of this family.
In terms of computation time, methods with higher indices p and q tend to require more time when applied to simpler functions such as polynomials. However, for more complex functions, most methods in the family exhibit comparable runtimes. These contrasting behaviors are exemplified in the results for solving nonlinear equations in one variable, as shown in [4].
Compared to reference methods from the literature, the PM family demonstrates favorable numerical performance. In the presented examples, PM methods require fewer or equal iterations to satisfy the exit criterion, while also achieving smaller error values. This balance of convergence order, stability, and computational efficiency highlights the practical value of the proposed family.
Author Contributions
Conceptualization, Đ.H. and D.H.; methodology, Đ.H. and D.H.; software, Đ.H. and V.T.; validation, D.H. and V.T.; formal analysis, Đ.H. and D.H.; investigation, Đ.H. and D.H.; data curation, V.T. and P.O.; writing—original draft preparation, Đ.H. and D.H.; writing—review and editing, Đ.H., D.H., V.T. and P.O.; visualization, Đ.H., D.H., V.T. and P.O.; supervision, Đ.H. and P.O.; project administration, D.H. and V.T. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported by the Ministry of Science, Technological Development and Innovation of the Republic of Serbia under projects 451-03-137/2025-03/200125, 451-03-136/2025-03/200125, and 451-03-137/2025-03/200156.
Data Availability Statement
The original contributions presented in this study are included in the article. For further inquiries, please contact the corresponding author.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Herceg, Đ.; Herceg, D. A family of methods for solving nonlinear equations. Appl. Math. Comput. 2015, 259, 882–895. [Google Scholar] [CrossRef]
- Jarratt, P. Some fourth-order multipoint iterative methods for solving equations. Math. Comput. 1966, 20, 434–437. [Google Scholar] [CrossRef]
- Yaseen, S.; Zafar, F. A new sixth-order Jarratt-type iterative method for systems of nonlinear equations. Arab. J. Math. 2022, 11, 585–599. [Google Scholar] [CrossRef]
- Babajee, D.K.R.; Madhu, K.; Jayaraman, J. A family of higher order multipoint iterative methods based on power mean for solving nonlinear equations. Afr. Mat. 2016, 27, 865–876. [Google Scholar] [CrossRef]
- Li, S.; Liu, X.; Zhang, X. A Few Iterative Methods by Using [1, n]-Order Padé Approximation of Function and the Improvements. Mathematics 2019, 7, 55. [Google Scholar] [CrossRef]
- Özban, A.Y. Some new variants of Newton’s method. Appl. Math. Lett. 2004, 17, 677–682. [Google Scholar] [CrossRef]
- Kou, J. Fourth-order variants of Cauchy’s method for solving non-linear equations. Appl. Math. Comput. 2007, 192, 113–119. [Google Scholar] [CrossRef]
- Kou, J.; Li, Y.; Wang, X. Fourth-order iterative methods free from second derivative. Appl. Math. Comput. 2007, 184, 880–885. [Google Scholar] [CrossRef]
- Frontini, M.; Sormani, E. Some variants of Newton’s method with third-order convergence. Appl. Math. Comput. 2003, 140, 419–426. [Google Scholar] [CrossRef]
- Esmaeili, H.; Ahmadi, M. An efficient three-step method to solve system of nonlinear equations. Appl. Math. Comput. 2015, 266, 1093–1101. [Google Scholar] [CrossRef]
- Singh, H.; Sharma, J.R. Simple and efficient fifth-order solvers for systems of nonlinear problems. Math. Model. Anal. 2023, 28, 527–543. [Google Scholar] [CrossRef]
- Wang, X.; Sun, M. A new family of fourth-order Ostrowski-type iterative methods for solving nonlinear systems. AIMS Math. 2024, 9, 10255–10266. [Google Scholar] [CrossRef]
- Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. Increasing the convergence order of an iterative method for nonlinear systems. Appl. Math. Lett. 2012, 25, 2369–2374. [Google Scholar] [CrossRef]
- Babajee, D.K.R.; Madhu, K.; Jayaraman, J. On some improved harmonic mean Newton-like methods for solving systems of nonlinear equations. Algorithms 2015, 8, 895–909. [Google Scholar] [CrossRef]
- Cordero, A.; Maimó, J.G.; Rodríguez-Cabral, A.; Torregrosa, J.R. Two-step fifth-order efficient Jacobian-free iterative method for solving nonlinear systems. Mathematics 2024, 12, 3341. [Google Scholar] [CrossRef]
- Quinga, S.; Pavon, W.; Ortiz, N.; Calvopiña, H.; Yépez, G.; Quinga, M. Numerical solution of the nonlinear convection–diffusion equation using the fifth-order iterative method by Newton–Jarratt. Mathematics 2025, 13, 1164. [Google Scholar] [CrossRef]
- Kansal, M.; Cordero, A.; Bhalla, S.; Torregrosa, J.R. New fourth- and sixth-order classes of iterative methods for solving systems of nonlinear equations and their stability analysis. Numer. Algorithms 2021, 87, 1017–1060. [Google Scholar] [CrossRef]
- Bhalla, S.; Singh, G.; Ramos, H.; Behl, R.; Alshehri, H. A Fourth-Order Parametric Iterative Approach for Solving Systems of Nonlinear Equations. Computation 2025, 13, 241. [Google Scholar] [CrossRef]
- Al-Obaidi, R.H.; Darvishi, M.T. Constructing a Class of Frozen Jacobian Multi-Step Iterative Solvers for Systems of Nonlinear Equations. Mathematics 2022, 10, 2952. [Google Scholar] [CrossRef]
- Cordero, A.; Jordán, C.; Sanabria-Codesal, E.; Torregrosa, J.R. Design, Convergence and Stability of a Fourth-Order Class of Iterative Methods for Solving Nonlinear Vectorial Problems. Fractal Fract. 2021, 5, 125. [Google Scholar] [CrossRef]
- Cordero, A.; Rojas-Hiciano, R.V.; Torregrosa, J.R.; Vassileva, M.P. Maximally efficient damped composed Newton-type methods to solve nonlinear systems of equations. Appl. Math. Comput. 2025, 492, 129231. [Google Scholar] [CrossRef]
- Cordero, A.; Torregrosa, J.R. Variants of Newton’s method for functions of several variables. Appl. Math. Comput. 2006, 183, 199–208. [Google Scholar] [CrossRef]
- Homeier, H.H.H. A modified Newton method with cubic convergence: The multivariable case. J. Comput. Appl. Math. 2004, 169, 161–169. [Google Scholar] [CrossRef]
- Noor, M.A.; Waseem, M. Some iterative methods for solving a system of nonlinear equations. Comput. Math. Appl. 2009, 57, 101–106. [Google Scholar] [CrossRef]
- Frontini, M.; Sormani, E. Third-order methods from quadrature formulae for solving systems of nonlinear equations. Appl. Math. Comput. 2004, 149, 771–782. [Google Scholar] [CrossRef]
- Grau-Sánchez, M.; Grau, A.; Noguera, M. On the computational efficiency index and some iterative methods for solving systems of nonlinear equations. J. Comput. Appl. Math. 2011, 236, 1259–1266. [Google Scholar] [CrossRef]
- Khirallah, M.Q.; Hafiz, M.A. Novel third-order methods for solving a system of nonlinear equations. Bull. Math. Sci. Appl. 2012, 2, 1–14. [Google Scholar] [CrossRef]
- Darvishi, M.T.; Barati, A. Super cubic iterative methods to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 188, 1678–1685. [Google Scholar] [CrossRef]
- Xiao, X.; Yin, H. A new class of methods with higher order of convergence for solving systems of nonlinear equations. Appl. Math. Comput. 2015, 264, 300–309. [Google Scholar] [CrossRef]
- Liu, Z.; Fang, Q. A new Newton-type method with third order for solving systems of nonlinear equations. J. Appl. Math. Phys. 2015, 3, 1256–1261. [Google Scholar] [CrossRef]
- Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
- Darvishi, M.T.; Barati, A. A third-order Newton-type method to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 187, 630–635. [Google Scholar] [CrossRef]
- Cordero, A.; Martínez, E.; Torregrosa, J.R. Iterative methods of order four and five for systems of nonlinear equations. J. Comput. Appl. Math. 2009, 231, 541–551. [Google Scholar] [CrossRef]
- Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton–Jarratt’s composition. Numer. Algorithms 2010, 55, 87–99. [Google Scholar] [CrossRef]
- Babajee, D.K.R.; Cordero, A.; Soleymani, F.; Torregrosa, J.R. On a novel fourth-order algorithm for solving systems of nonlinear equations. J. Appl. Math. 2012, 1, 165452. [Google Scholar] [CrossRef]
- Sharma, J.R.; Guha, R.K.; Sharma, R. An efficient fourth-order weighted-Newton method for systems of nonlinear equations. Numer. Algorithms 2013, 62, 307–323. [Google Scholar] [CrossRef]
- Narang, M.; Bhatia, S.; Kanwar, V. New two-parameter Chebyshev–Halley-like family of fourth and sixth-order methods for systems of nonlinear equations. Appl. Math. Comput. 2016, 275, 394–403. [Google Scholar] [CrossRef]
- Darvishi, M.T.; Barati, A. A fourth-order method from quadrature formulae to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 188, 257–261. [Google Scholar] [CrossRef]
- Madhu, K.; Jayaraman, J. Some higher-order Newton-like methods for solving system of nonlinear equations and its applications. Int. J. Appl. Comput. Math. 2017, 3, 2213–2230. [Google Scholar] [CrossRef]
- Lotfi, T.; Bakhtiari, P.; Cordero, A.; Mahdiani, K.; Torregrosa, J.R. Some new efficient multipoint iterative methods for solving nonlinear systems of equations. Int. J. Comput. Math. 2015, 92, 1921–1934. [Google Scholar] [CrossRef]
- Liu, Z.; Zheng, Q.; Huang, C.-E. Third- and fifth-order Newton–Gauss methods for solving nonlinear equations with n variables. Appl. Math. Comput. 2016, 290, 250–257. [Google Scholar] [CrossRef]
- Moscoso-Martínez, M.; Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Performance of a New Sixth-Order Class of Iterative Schemes for Solving Non-Linear Systems of Equations. Mathematics 2023, 11, 1374. [Google Scholar] [CrossRef]
- Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press Inc.: New York, NY, USA, 1966. [Google Scholar]
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).