Abstract
Multistep methods typically rely on Taylor series to attain their convergence order, which necessitates the existence of derivatives that do not appear in the iterative functions themselves. Other issues are the absence of a priori error estimates and of information about the radius of convergence or the uniqueness of the solution. These restrictions limit the use of such methods, even in cases where they do converge. Consequently, local convergence analysis emerges as a more effective approach, since it relies on criteria involving only the operators appearing in the methods. This expands the applicability of such methods, including in non-Euclidean space settings. Furthermore, this work uses majorizing sequences to address the more challenging semi-local convergence analysis, which was not explored in earlier research. We adopt generalized continuity constraints to control the derivatives and obtain sharper error estimates. The sufficient convergence criteria are demonstrated through examples.
MSC:
65H10; 65Y20; 65G99; 41A58
1. Introduction
A scalar nonlinear equation (NE) or a system of nonlinear equations (SNE) can be derived by transforming models from science, engineering, and nature. Such equations are crucial in mathematics [1,2,3,4,5,6]. Solutions to these nonlinear problems enable us to forecast weather, analyze fluid dynamics, model populations, and predict financial markets. Therefore, to solve the following SNE by approximating its solution, we choose the initial approximation
The above-defined operator F maps an open subset of a Banach space into . It is usually impossible to find analytical solutions to equations such as (1). As a result, we are limited to using iterative methods. For instance, one of the most frequently used iterative techniques is the Newton–Raphson method, which helps us obtain an approximate solution of such problems.
Using an iterative technique, researchers apply the Newton–Raphson method or similar approaches to achieve the desired accuracy of the solution. Additionally, they study the convergence, extensions or modifications, stability, and basin of attraction. The following are popular single-step iterative methods for solving (1):
These methods are known as Newton's, the secant, and Steffensen's methods, respectively. However, their convergence order (CO) does not exceed two [1,2,4,5,7,8]. The CO can be increased by adding more steps. For this reason, Cordero et al. [9] proposed a sixth-order method without memory and a method with memory of CO 6.60. These schemes are defined for and each by:
- (1)
- Sixth CO without memory:
Method (2) specializes to the preceding ones provided that we stop only at the first substep and choose for Newton’s, for secant and for Steffensen’s method. Thus, the convergence study of (2) includes the preceding methods.
- (2)
- 6.60 CO with memory:
The CO is shown in [9] by adopting Taylor series and the fifth derivative of F. However, this approach has certain constraints.
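To make the contrast between the single-step building blocks concrete, the following is a minimal scalar sketch; the test equation f(x) = x³ − 2 = 0 and the starting points are our own illustrative choices, not taken from the source:

```python
# Illustrative sketch of the three classical single-step methods mentioned
# above, applied to the sample scalar equation f(x) = x**3 - 2 = 0.
f = lambda x: x**3 - 2.0
df = lambda x: 3.0 * x**2

def newton(x, tol=1e-12, itmax=50):
    for _ in range(itmax):
        x_new = x - f(x) / df(x)           # uses the derivative
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

def secant(x0, x1, tol=1e-12, itmax=50):
    for _ in range(itmax):
        dd = (f(x1) - f(x0)) / (x1 - x0)   # divided difference replaces f'
        x2 = x1 - f(x1) / dd
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

def steffensen(x, tol=1e-12, itmax=50):
    for _ in range(itmax):
        fx = f(x)
        if fx == 0.0:
            return x
        dd = (f(x + fx) - fx) / fx         # derivative-free divided difference
        x_new = x - fx / dd
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = 2.0 ** (1.0 / 3.0)
print(abs(newton(1.0) - root) < 1e-10)       # True
print(abs(secant(1.0, 1.5) - root) < 1e-10)  # True
print(abs(steffensen(1.2) - root) < 1e-10)   # True
```

All three converge to the cube root of 2, but only the first requires the derivative; the latter two use divided differences, which is why method (2) covers them as special cases.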
- Motivation:
- (P1)
- The inverses of derivatives, divided differences, or high-order derivatives are typically needed for the local convergence analysis. The local analysis of convergence (LAC) in [9] shows that the proof requires derivatives up to the fifth order, which do not appear in the method. These limitations restrict the application of the methods. As an example, we choose the following fundamental and motivational function F on . If , defined as where are three parameters with . We can easily see that the function is not bounded on at and . Thus, the convergence of procedures (2) and (3) is not guaranteed by the local convergence findings in [9]. However, if, for instance, , and , then the iterative schemes (2) and (3) converge to . This observation implies that these criteria can be weakened.
- (P2)
- The results are applicable only on .
- (P3)
- There is no subset that exclusively contains as its solution of (1).
- (P4)
- The more important and more difficult to obtain semi-local analysis of convergence (SLAC) was not given in the earlier work [9].
- (P5)
- There is no indication or information regarding the integer j that satisfies the condition such that for each , where .
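The phenomenon behind issue (P1) can be checked numerically. The exact parameter values are stripped from this extraction, so the instance below uses a commonly cited choice from the local-convergence literature, f(t) = t³ ln(t²) + t⁵ − t⁴ with f(0) = 0; this particular choice is an assumption, not necessarily the source's:

```python
# Sketch for issue (P1): the third derivative of f contains a ln(t^2) term
# and is unbounded near t = 0, so Taylor-based convergence proofs that need
# higher derivatives fail -- yet Newton's method still converges to t* = 1.
import math

def f(t):
    return 0.0 if t == 0.0 else t**3 * math.log(t**2) + t**5 - t**4

def df(t):
    return 0.0 if t == 0.0 else 3*t**2 * math.log(t**2) + 2*t**2 + 5*t**4 - 4*t**3

t = 0.9                      # starting point inside the domain
for _ in range(25):
    t = t - f(t) / df(t)     # plain Newton step

print(abs(t - 1.0) < 1e-12)  # converges despite the unbounded third derivative
```

This is exactly the gap the generalized continuity conditions are meant to close: convergence is observed in practice even though the derivative-based criteria do not apply.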
We note that the previous research [9] suffers from the above-mentioned issues (P1)–(P5). This is the primary reason for conducting this investigation. Our method addresses these issues. The generality of the new technique makes it useful for extending the applicability of other methods in the same way [3,6,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23]. Method (2) outperforms existing methods in terms of residual error and the absolute error difference between consecutive iterations. Additionally, it requires significantly less computational time and exhibits a more stable CO than the existing methods. This is the novelty of our paper.
The rest of the article includes LAC and SLAC of the method in Section 2 and Section 3, respectively. Section 4 provides numerical examples to illustrate the practical implementation and effectiveness of the methods. These examples demonstrate the accuracy, efficiency, radii of convergence, and convergence of the iterative schemes in solving SNE. Section 5 concludes this study with a summary of findings, key contributions, and potential directions for future research.
2. Local Convergence Analysis
The analysis uses some conditions.
- Suppose:
- (1)
- There exist functions , continuous and nondecreasing, defined on the interval , and so that the equation has a positive smallest solution (SS), which is denoted by .
- (2)
- There exist majorant functions (see conditions (Q)) defined on the interval , continuous and nondecreasing, so that the equation has an SS , where
- (3)
- The equation has an SS , where the function is given by
- (4)
- The equation has an SS , where the function is given as . Let us introduce the parameter by . The parameter is established as the radius of convergence for the method, as demonstrated in Theorem 1. This result confirms that represents the threshold within which the method always converges. Let us denote by the open and closed balls in , respectively, with center and radius . To simplify the presentation, we use the same symbol for the norm of linear operators as for the norm of elements of the Banach space. The real functions and the parameter are connected to the divided differences in method (2).
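Numerically, the radius is the smallest positive value at which a majorant function reaches 1. The concrete majorant functions for method (2) are built in Remarks 1 and 2 and are stripped from this extraction, so as an illustration we use the classical Newton majorant g₁(t) = Lt / (2(1 − L₀t)) with center/restricted Lipschitz constants L₀, L, whose radius has the well-known closed form 2/(2L₀ + L); this choice is an assumption:

```python
# Sketch: compute the convergence radius as the smallest positive root of
# g(t) = 1 by bisection, and check it against the known closed form for the
# assumed Newton majorant g1(t) = L*t / (2*(1 - L0*t)).
def smallest_root(g, hi, tol=1e-14):
    """Bisection for the smallest positive root of g(t) - 1 = 0 on (0, hi);
    assumes g is nondecreasing with g(0) < 1 and g(hi) > 1."""
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

L0, L = 2.0, 3.0                              # illustrative Lipschitz constants
g1 = lambda t: L * t / (2.0 * (1.0 - L0 * t))
R = smallest_root(g1, hi=1.0 / L0 - 1e-12)    # stay left of the pole at 1/L0
print(abs(R - 2.0 / (2.0 * L0 + L)) < 1e-10)  # closed form: 2/(2*L0 + L)
```

For method (2) one would compute each gᵢ's smallest root the same way and take R as the minimum, exactly as the definition of the parameter above prescribes.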
- Suppose:
- (H1)
- There exists a solution to in the domain , together with a linear operator M that is invertible, i.e., M possesses a unique inverse.
- (H2)
- for each . Set .
- (H3)
- for each , and
- (H4)
- for some to be given later.
- Moreover, we consider:
- (Q)
- All the iterates of the method (2) exist and where the functions are continuous and nondecreasing and the sequences are nonnegative.
- The functions and sequences appearing in the conditions are specialized later in terms of the conditions (see Remarks 1 and 2).
The local convergence result for (2) follows from the following conditions.
Theorem 1.
Under the conditions and , the sequence converges to , provided that the initial value lies within the ball but is not equal to the solution itself.
Proof.
The estimate (11) and the standard lemma on invertible operators due to Banach [2,4,5,6,8,14,19] give that the linear operator is invertible and
The first substep of the method (2) implies
It follows in turn by (4), (6), , (12) and (13),
So, the iterate , and the expression (8) holds if .
Mathematical induction is employed to demonstrate the validity of the following items:
where the parameter is defined by (4).
- By hypothesis, . Then, the conditions and (4) give in turn
The second substep of the method (2) gives, as previously,
Hence, the iterate , and the assertion (9) holds if .
Next, the third substep of the method (2) gives
The following estimate is needed:
so by (14) and (15),
where we also used the conditions ,
and
Hence, the iterate and the assertion (10) holds if . The proof for the statements in Equations (7)–(10) can be quickly finished by replacing with in the previous steps of the calculation. This substitution allows us to apply the same reasoning to the next case in the sequence. Additionally, based on the calculation for , we have
and it follows that the iterate and . □
Proposition 1.
There is a solution to within a neighborhood of , specifically in the ball for some radius . Then, we have
where is a continuous and nondecreasing function and there exists such that
Let . Then, is the only solution of in .
Proof.
Remark 1.
The LAC of the method (3) is analyzed in an analogous way in two interesting cases. Let L be any invertible linear operator.
- (I)
- Let . Suppose that the hypotheses and hold, but with and , and the functions and are and . In order to define the corresponding function , notice in turn the calculation, leading to. Therefore, we can choose. Let us assume that the equation has an SS .
- (II)
- Let for each . Suppose that there exists such that and. Then, notice that, and similarly,
Remark 2.
The conditions shall be dropped by expressing the scalar sequences in terms of the conditions and
for each , are nonnegative parameters and is a continuous and nondecreasing function.
We first determine and :
but
So, we have
Consequently, we obtain
Thus,
and
so
and
but
Thus, we obtain
In addition, we have
similarly,
and
so
In a similar way, we obtain
Consequently, we set
In view of these calculations, the parameter R appearing in condition is defined by
- (III)
- Possible choices for the linear operator can be(secant [2,4,5,7]);(Kurchatov [8,10,19]);;(Kurchatov [8,10,19]).Moreover, for M, we can choose or or . Other choices exist [4,5,14].
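The secant- and Kurchatov-type choices above all rest on a first-order divided difference [x, y; F]. The componentwise construction below is one standard formula from the literature, not necessarily the one used in the source; the test operator F is our own illustration:

```python
# Sketch: a common componentwise first-order divided difference [x, y; F]
# for systems, mixing components of y into x one coordinate at a time.
def divided_difference(F, x, y):
    n = len(x)
    M = [[0.0] * n for _ in range(n)]
    for j in range(n):
        zj  = y[:j + 1] + x[j + 1:]   # first j+1 components from y
        zj1 = y[:j] + x[j:]           # first j components from y
        Fj, Fj1 = F(zj), F(zj1)
        for i in range(n):
            M[i][j] = (Fj[i] - Fj1[i]) / (y[j] - x[j])
    return M

# Verify the defining property  [x, y; F](x - y) = F(x) - F(y)
F = lambda v: [v[0]**2 + v[1], v[0] * v[1] - 1.0]
x, y = [1.0, 2.0], [3.0, -1.0]
M = divided_difference(F, x, y)
lhs = [sum(M[i][j] * (x[j] - y[j]) for j in range(2)) for i in range(2)]
Fx, Fy = F(x), F(y)
print(all(abs(lhs[i] - (Fx[i] - Fy[i])) < 1e-12 for i in range(2)))  # True
```

The defining property holds by telescoping, so the same routine can realize the secant choice [xₙ, xₙ₋₁; F] or the Kurchatov choice [2xₙ − xₙ₋₁, xₙ₋₁; F], provided the corresponding coordinates differ.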
3. Semi-Local Convergence
Sequences are developed that are shown to be majorizing for the sequence generated by Formula (2). Let be functions such that is continuous and nondecreasing and are continuous.
Suppose:
- the equation has a positive SS, say . Define the scalar sequence , where are given nonnegative sequences for , and each by
Lemma 1.
Suppose that there exists such that for and each
Then, we have
and there exists so that .
Proof.
The functions and the parameter relate to the method (2) as follows:
- Suppose:
- (C1)
- There exist a point and an invertible linear operator M such that .
- (C2)
- for each . It follows that the linear operator is invertible, since. Hence, we can choose . Set .
- (C3)
- for each .
- (C4)
- , where the parameter is specified later.
The following conditions are imposed on the iterates .
- (E)
- The iterates exist and satisfy, for each , , where are nonnegative sequences. In addition, and depend on the iterates of the method, and are nonnegative sequences that will later be expressed in terms of the conditions –.
- The main semi-local result for the method (2) follows.
Theorem 2.
Given the conditions through and , there exists a solution to the expression . Further, the following properties hold:
Proof.
The assertions (21)–(23) are established by induction. The assertions (21) hold if by (18), (2) and , since
Thus, the iterate . Notice that
so
and iterates are well defined.
Then, from the second substep of the iterative scheme (2), we obtain
So, by the conditions , , (24) and (25), we have
and
Thus, the iterate and the assertion (22) holds.
Similarly, from the third substep of the method (2), we obtain in turn, as in the local case,
and
where we used
and
Therefore, the iterate and the assertion (23) holds.
Using the first substep of method (2), we can express
leading to
Hence, we have in turn
and
The proof by induction for the assertions in Equations (21)–(23) has been completed, and it is shown that the sequence is contained within the ball . As a result, the sequence is a Cauchy sequence in the Banach space . Since every Cauchy sequence in a Banach space converges, it follows that converges to some limit lying in the ball . Notice that . Hence, we obtain by (28) and the continuity of F.
Finally, from the estimate
we obtain
provided that . □
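The mechanism of the proof is that the scalar sequence dominates the actual iterates. The exact sequence for method (2) is defined above in stripped formulas, so the sketch below uses the classical Kantorovich special case for Newton's method as a stand-in; the scalar test equation and constants are illustrative assumptions:

```python
# Sketch (classical Kantorovich case, not the source's exact sequence):
# with eta = |f'(x0)^{-1} f(x0)| and a scaled Lipschitz constant Lm,
#   t0 = 0, t1 = eta, t_{n+1} = t_n + Lm*(t_n - t_{n-1})**2 / (2*(1 - Lm*t_n)),
# and the majorization reads |x_{n+1} - x_n| <= t_{n+1} - t_n.
f, df = lambda x: x * x - 2.0, lambda x: 2.0 * x

x = 1.0
eta = abs(f(x) / df(x))      # 0.5
Lm = 1.0                     # |f'(x0)^{-1}| * (Lipschitz constant of f') = (1/2)*2

t_prev, t = 0.0, eta
dominated = True
for _ in range(6):
    x_new = x - f(x) / df(x)
    t_new = t + Lm * (t - t_prev) ** 2 / (2.0 * (1.0 - Lm * t))
    # each actual step is no longer than the corresponding scalar step
    dominated = dominated and abs(x_new - x) <= (t - t_prev) + 1e-15
    x, t_prev, t = x_new, t, t_new

print(dominated)                      # True: the scalar sequence majorizes
print(abs(x - 2.0 ** 0.5) < 1e-12)    # and Newton has indeed converged
```

Because the increasing, bounded scalar sequence is Cauchy, so are the iterates, which is precisely the argument used to obtain the limit point in Theorem 2.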
The region of uniqueness for the solution is now identified; that is, we establish the set within which the solution is guaranteed to be unique.
Proposition 2.
Set .
Assume the following:
- There exists a solution to for some ;
- The condition is satisfied within the ball ;
- There exists a constant such that
Then, there are no other points within satisfying the equation , ensuring that is the unique root in this domain.
Proof.
Let with with . Define the linear operator . In view of condition and (31), one obtains
Thus, we deduce that □
Remark 3.
Under all the conditions of Theorem 2, set and .
Remark 4.
Replace the conditions by given by
- (E′)
- ,, ,, ,, and .
Next, the scalar sequences appearing in the condition are specialized. The following calculations are required:
and thus,
In a similar way, we have
so, we obtain
Thus, we have
which further yields
Therefore, we obtain
In the similar fashion, we have
so
Moreover, we obtain
and
so
Hence, we obtain
Under these choices,
Remark 5.
- (I)
- (II)
- . Notice that, and similarly, where .
4. Numerical Experiments
4.1. Local Area Convergence
In Example 1, we examine the LAC and present the computational results in Table 1. These findings are derived from a system of differential equations.
4.2. Semi-Local Area of Convergence
The remaining examples illustrate the SLAC. Table 2 provides the numerical results of the SLAC for the boundary value problem in Example 2, for which we have chosen a larger SNE with 60 variables. In Example 3, we investigate another applied science problem, namely the Hammerstein integral equation, to demonstrate the applicability and efficacy of method (2). The abscissas and weights are listed in Table 3 and the numerical outcomes in Table 4. In the final Example 4, we consider another SNE, with numerical results shown in Table 5. We compared method (2) with existing sixth-order methods, selecting the following approaches for evaluation: method (8) from Abbasbandy et al. [24], method (14) from Hueso et al. [25], and method (6) from Wang and Li [26]. Finally, we included method (5) proposed by Lotfi et al. [27] in our comparison.
Moreover, we study the computational order of convergence (COC), determined by the following formulas:
or approximated COC [15,16] by:
The conditions for terminating the program are specified as follows:
- (i)
- The difference between successive iterations satisfies
- (ii)
- The norm of the operator at the current point meets the condition .
Here, represents an extremely small tolerance level, ensuring high precision and stability in the computational results. All computations used Mathematica 11 with multi-precision arithmetic, and the computer's configuration details are listed below. Device Name: HP; Windows 10 Enterprise; OS Build: 19045.2006; Processor: Intel(R) Core(TM) i7-4790 CPU @ 3.60 GHz; Installed RAM: 8.00 GB (7.89 GB usable); System type: 64-bit operating system, x64-based processor; Version: 22H2.
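The approximated COC can be computed from three consecutive step lengths without knowing the exact solution. The sketch below implements that standard formula and checks it on plain Newton (an illustrative choice, not one of the compared methods):

```python
# Sketch of the approximated computational order of convergence (ACOC):
#   rho_k = ln(e_{k+1}/e_k) / ln(e_k/e_{k-1}),  with e_k = |x_{k+1} - x_k|.
import math

def acoc(xs):
    e = [abs(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
    return [math.log(e[k + 1] / e[k]) / math.log(e[k] / e[k - 1])
            for k in range(1, len(e) - 1)]

# Newton on f(x) = x^2 - 2 should exhibit order ~2
f, df = lambda x: x * x - 2.0, lambda x: 2.0 * x
xs = [1.0]
for _ in range(5):
    xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))

orders = acoc(xs)
print(abs(orders[-1] - 2.0) < 0.1)  # the estimates approach 2
```

For a sixth-order scheme the same routine would produce estimates approaching 6, which is how the COC columns of Tables 2, 4, and 5 are obtained.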
Example 1.
Let us examine the following system of differential equations:
which describe the motion of a particle in three dimensions, with for . The solution is then associated with , given as
By (33), we obtain
For and , we have
Then, compute for both methods (see Remarks 1 and 2). For the selection of the function in the condition , other parameters are given below:
The radii of convergence of methods (2) and (3) for Example 1 are depicted in Table 1.
Table 1.
Distinct convergence radii for Example 1.
| Methods | ||||||
|---|---|---|---|---|---|---|
| Method (2) | 0.2616 | 0.1536 | 0.1393 | 0.1164 | - | 0.1164 |
| Method (3) | 0.2616 | 0.1536 | 0.1393 | - | 0.1125 | 0.1125 |
Example 2.
Boundary value problems (BVPs) [17] are fundamental in applied science, involving differential equations with conditions at distinct points, often boundaries. These issues represent real-world processes such as fluid dynamics, nuclear reactors, heat transfer, optimization, quantum mechanics, and applied science, leading us to choose the following BVP (for details, see [18]):
with . Divide the interval into ℓ parts, which further provides
Set . Then, we have
by discretization. Thus, we obtain a system of
For example, we have an SNE of order with and . The following is the required solution for the system (36) mentioned above:
Table 2 provides a detailed overview of the performance for Example 2, including the computational order of convergence (COC), CPU time, number of iterations, residual errors, and the error differences between consecutive iterations. These computational results offer valuable insight into the efficiency and accuracy of the computed solution, and allow us to compare both the computational time and the convergence behavior of method (2) with those of the existing methods.
Table 2.
Computational results of Example 2.
| Methods | n | CPU Timing | ||||
|---|---|---|---|---|---|---|
| Lotfi et al. [27] | 4 | 5.0541 | 135.752 | |||
| Wang and Li [26] | 4 | 6.0281 | 127.473 | |||
| Abbasbandy [24] | 4 | 6.0293 | 347.9 | |||
| Hueso et al. [25] | 4 | 5.0358 | 222.306 | |||
| Method (2) | 4 | 6.0616 | 104.265 |
CPU timing and are calculated based on the number of iterations required to reach the desired accuracy.
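Since the concrete BVP (36) and its discretized system are stripped from this extraction, the discretization step can still be sketched on a classical test problem; the BVP y'' = 1.5y², y(0) = 4, y(1) = 1 (exact solution y = 4/(1+x)²), the grid size, and the plain-Newton solver below are illustrative assumptions, not the source's exact setup:

```python
# Sketch: central-difference discretization of an illustrative second-order
# BVP turns it into an SNE, which is then solved by Newton's method.
def solve_dense(A, b):
    """Plain Gaussian elimination with partial pivoting (pure Python)."""
    n = len(b)
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[p], b[k], b[p] = A[p], A[k], b[p], b[k]
        for r in range(k + 1, n):
            m = A[r][k] / A[k][k]
            for c in range(k, n):
                A[r][c] -= m * A[k][c]
            b[r] -= m * b[k]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (b[k] - sum(A[k][c] * x[c] for c in range(k + 1, n))) / A[k][k]
    return x

ell = 10
h = 1.0 / ell
ya, yb = 4.0, 1.0
n = ell - 1                      # interior unknowns y_1 .. y_{ell-1}

def F(y):                        # residual of  y'' - 1.5*y^2 = 0, scaled by h^2
    out = []
    for i in range(n):
        left = ya if i == 0 else y[i - 1]
        right = yb if i == n - 1 else y[i + 1]
        out.append(left - 2.0 * y[i] + right - 1.5 * h * h * y[i] ** 2)
    return out

def J(y):                        # tridiagonal Jacobian of F
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = -2.0 - 3.0 * h * h * y[i]
        if i > 0:
            A[i][i - 1] = 1.0
        if i < n - 1:
            A[i][i + 1] = 1.0
    return A

y = [ya + (yb - ya) * (i + 1) * h for i in range(n)]   # linear initial guess
for _ in range(20):
    d = solve_dense(J(y), [-v for v in F(y)])
    y = [yi + di for yi, di in zip(y, d)]

exact = [4.0 / (1.0 + (i + 1) * h) ** 2 for i in range(n)]
err = max(abs(a - b) for a, b in zip(y, exact))
print(err < 0.05)   # second-order accurate discretization on this grid
```

With ℓ = 60 subdivisions one obtains a 59-variable SNE of the same tridiagonal structure, which is the kind of system the compared sixth-order methods are timed on in Table 2.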
Example 3.
We investigate a widely recognized problem, the Hammerstein integral equation, as detailed in [2] (pp. 19–20). The primary objective is to evaluate and contrast the effectiveness and practicality of iterative scheme (2) against the earlier ones of the same CO. The Hammerstein integral equation is given below:
The kernel G is given by
To convert the given equation into a finite-dimensional problem, we utilize the Gauss–Legendre quadrature formula (GLQF), which allows us to approximate integrals with greater accuracy. Then, we obtain
The values (abscissas) and (weights) are calculated using the GLQF for and are shown in Table 3. Let be defined as the approximations of by . This leads to an SNE, which is defined as follows:
where
Table 3.
The values (abscissas) and (weights).
| j | ||
|---|---|---|
| 1 | ||
| 2 | ||
| 3 | ||
| 4 | ||
| 5 | ||
| 6 | ||
| 7 | ||
| 8 | ||
| 9 | ||
| 10 |
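The exact kernel and nonlinearity of this example are not fully reproduced in the extraction; a Hammerstein instance that is common in this literature is x(s) = 1 + (1/2)∫₀¹ G(s,t) x(t)³ dt with G(s,t) = s(1−t) for s ≤ t and t(1−s) otherwise, and that assumed instance is what the sketch below discretizes (with the exact 3-point Gauss–Legendre rule rather than the 10-point rule of Table 3):

```python
# Sketch: Gauss-Legendre quadrature turns the assumed Hammerstein equation
# into an SNE in the nodal values, solved here by simple fixed-point
# iteration (the mapping is a contraction for this instance).
import math

# 3-point Gauss-Legendre rule on [-1, 1], mapped to [0, 1]
xi = [-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0)]
wt = [5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0]
t = [(z + 1.0) / 2.0 for z in xi]          # abscissas in [0, 1]
w = [v / 2.0 for v in wt]                  # scaled weights

def G(s, u):
    return s * (1.0 - u) if s <= u else u * (1.0 - s)

m = 3
x = [1.0] * m                              # initial approximation
for _ in range(50):
    x = [1.0 + 0.5 * sum(w[j] * G(t[i], t[j]) * x[j] ** 3 for j in range(m))
         for i in range(m)]

# residual of the discrete nonlinear system
res = max(abs(x[i] - 1.0 - 0.5 * sum(w[j] * G(t[i], t[j]) * x[j] ** 3
                                     for j in range(m))) for i in range(m))
print(res < 1e-12)              # the SNE is solved to high accuracy
print(all(v > 1.0 for v in x))  # solution lies slightly above 1
```

Replacing the fixed-point loop with method (2) and the 10-point rule of Table 3 yields the 10-variable SNE whose results are reported in Table 4.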
Table 4 provides a detailed overview of the performance for Example 3, including the computational order of convergence (COC), CPU time, number of iterations, residual errors, and the error differences between consecutive iterations. These computational results offer valuable insight into the efficiency and accuracy of the computed solution, and allow us to compare both the computational time and the convergence behavior of method (2) with those of the existing methods. Our required solution is given below:
Table 4.
Computational results of Example 3.
| Methods | n | CPU Timing | ||||
|---|---|---|---|---|---|---|
| Lotfi et al. [27] | 3 | 5.1023 | 2.86834 | |||
| Wang and Li [26] | 3 | 6.0410 | 2.82447 | |||
| Abbasbandy [24] | 3 | 6.0428 | 3.67107 | |||
| Hueso et al. [25] | 4 | 5.0099 | 10.559 | |||
| Method (2) | 3 | 5.9995 | 3.30025 |
CPU timing and are calculated based on the number of iterations required to reach the desired accuracy.
Example 4.
Further, we analyze another large SNE with 100 variables to demonstrate the method's effectiveness and scalability when applied to more complex, large-scale systems. The analysis highlights the method's capacity to tackle significant computational challenges and its applicability to a wide range of practical problems involving large-scale nonlinear systems. Thus, we consider the following system:
The required zero for this problem is . In Table 5, we provide the computational order of convergence (COC), CPU time, number of iterations, residual errors, and the error differences between consecutive iterations for Example 4.
Table 5.
Computational results of Example 4.
| Methods | n | CPU Timing | ||||
|---|---|---|---|---|---|---|
| Lotfi et al. [27] | 4 | 6.0252 | 48.3817 | |||
| Wang and Li [26] | 4 | 6.0260 | 51.2171 | |||
| Abbasbandy [24] | 4 | 6.0268 | 78.1555 | |||
| Hueso et al. [25] | 4 | 5.0318 | 100.235 | |||
| Method (2) | 4 | 6.0244 | 50.3581 |
CPU timing and are calculated based on the number of iterations required to reach the desired accuracy.
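The 100-variable system of Example 4 is stripped from this extraction, so as a stand-in illustration of scalability the sketch below solves the classical Broyden tridiagonal test problem with 100 unknowns by plain Newton (the test problem and the dense solver are our own assumptions):

```python
# Sketch: Newton's method on the Broyden tridiagonal test problem
#   F_i(x) = (3 - 2*x_i)*x_i - x_{i-1} - 2*x_{i+1} + 1,  x_0 = x_{N+1} = 0,
# with N = 100 unknowns, from the standard starting point (-1, ..., -1).
def solve_dense(A, b):
    """Plain Gaussian elimination with partial pivoting (pure Python)."""
    n = len(b)
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[p], b[k], b[p] = A[p], A[k], b[p], b[k]
        for r in range(k + 1, n):
            m = A[r][k] / A[k][k]
            for c in range(k, n):
                A[r][c] -= m * A[k][c]
            b[r] -= m * b[k]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (b[k] - sum(A[k][c] * x[c] for c in range(k + 1, n))) / A[k][k]
    return x

N = 100

def F(x):
    out = []
    for i in range(N):
        left = x[i - 1] if i > 0 else 0.0
        right = x[i + 1] if i < N - 1 else 0.0
        out.append((3.0 - 2.0 * x[i]) * x[i] - left - 2.0 * right + 1.0)
    return out

def J(x):   # tridiagonal Jacobian
    A = [[0.0] * N for _ in range(N)]
    for i in range(N):
        A[i][i] = 3.0 - 4.0 * x[i]
        if i > 0:
            A[i][i - 1] = -1.0
        if i < N - 1:
            A[i][i + 1] = -2.0
    return A

x = [-1.0] * N                  # standard starting point
for _ in range(10):
    d = solve_dense(J(x), [-v for v in F(x)])
    x = [a + s for a, s in zip(x, d)]

res = max(abs(v) for v in F(x))
print(res < 1e-10)   # the residual confirms convergence at this scale
```

A sixth-order scheme such as method (2) reaches comparable residuals in fewer outer iterations, which is what the iteration counts and CPU timings of Table 5 quantify.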
5. Concluding Remarks
A new methodology has been proposed in this study, which expands the use of multistep approaches without relying on Taylor series expansions that impose derivative-based conditions not present in the method. Other drawbacks of earlier work involve the absence of a priori, computable error estimates and of uniqueness-of-solution results. These concerns are all positively addressed in this paper. Indeed, the sufficient convergence requirements of our approach depend solely on the operators involved in the technique. Additionally, our approach establishes the uniqueness of the solution, determines the radii of convergence, and provides a priori upper estimates for the error distances involved in methods (2) and (3). Furthermore, the SLAC, which is not discussed in earlier papers, is also presented in this work. Method (2) outperforms existing methods in terms of residual error and the absolute difference in error between two consecutive iterations. Furthermore, it consumes significantly less time than the existing methods and demonstrates a more stable CO. In future research, we will explore how this methodology can be used to extend other iterative methods in a similar manner, owing to its generality [7,8,10,11,12,13,14,15,16,17,18,19,28,29,30,31,32,33,34,35,36,37,38,39].
Author Contributions
Conceptualization, R.B. and I.K.A.; methodology, R.B. and I.K.A.; software, R.B. and I.K.A.; validation, R.B. and I.K.A., formal analysis, R.B. and I.K.A.; investigation, R.B. and I.K.A.; resources, R.B. and I.K.A.; data curation, R.B. and I.K.A.; writing—original draft preparation, R.B. and I.K.A.; writing—review and editing, R.B., I.K.A., S.A. and A.M.A.; visualization, R.B., I.K.A., S.A. and A.M.A., supervision, R.B. and I.K.A. All authors have read and agreed to the published version of the manuscript.
Funding
This study is supported via funding from Prince Sattam bin Abdulaziz University project number (PSAU/2024/01/31597).
Data Availability Statement
The original contributions presented in this study are included in the article.
Acknowledgments
The authors extend their appreciation to Prince Sattam bin Abdulaziz University for funding this research work through the project number (PSAU/2024/01/31597).
Conflicts of Interest
The authors declare that there are no conflicts of interest.
References
- Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: New York, NY, USA, 1964.
- Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970.
- Cordero, A.; Gómez, E.; Torregrosa, J.R. Efficient high-order iterative methods for solving nonlinear systems and their application on heat conduction problems. Complexity 2017, 2017, 6457532.
- Argyros, I.K.; Magrenán, Ȧ.M. Iterative Methods and Their Dynamics with Applications; CRC Press: New York, NY, USA; Taylor & Francis: Boca Raton, FL, USA, 2018.
- Argyros, I.K. The Theory and Applications of Iteration Methods; CRC Press: New York, NY, USA; Taylor & Francis: Boca Raton, FL, USA, 2022.
- Hernández, M.A.; Martinez, E. On the semilocal convergence of a three steps Newton-type process under mild convergence conditions. Numer. Algor. 2015, 70, 377–392.
- Steffensen, J.F. Remarks on iteration. Skand. Aktuarietidskr. 1933, 16, 64–72.
- Shakhno, S.M. Gauss-Newton-Kurchatov method for the solution of nonlinear least-squares problems. J. Math. Sci. 2020, 247, 58–72.
- Cordero, A.; Garrido, N.; Torregrosa, J.R.; Triguero-Navarro, P. Design of iterative methods with memory for solving nonlinear systems. Math. Methods Appl. Sci. 2023, 46, 12361–12377.
- Wang, X.; Jin, Y.; Zhao, Y. Derivative-free iterative methods with some Kurchatov-type accelerating parameters for solving nonlinear systems. Symmetry 2021, 13, 943.
- Cordero, A.; Torregrosa, J.R. A modified Newton-Jarratt's composition. Numer. Algor. 2010, 55, 87–99.
- Behl, R.; Bhalla, S.; Magrenán, Ȧ.A.; Kumar, S. An efficient high order iterative scheme for large nonlinear systems with dynamics. J. Comput. Appl. Math. 2022, 404, 113249.
- Xiao, X.Y.; Yin, H.W. Increasing the order of convergence for iterative methods to solve nonlinear systems. Calcolo 2016, 53, 285–300.
- Ezquerro, J.A.; Hernández, M.A. New iterations of R-order four with reduced computational cost. BIT Numer. Math. 2009, 49, 325–342.
- Grau-Sánchez, M.; Grau, A.; Noguera, M. On the computational efficiency index and some iterative methods for solving systems of nonlinear equations. J. Comput. Appl. Math. 2011, 236, 1259–1266.
- Grau-Sánchez, M.; Grau, A.; Noguera, M. Ostrowski type methods for solving systems of nonlinear equations. Appl. Math. Comput. 2011, 218, 2377–2385.
- Sharma, J.R.; Gupta, P. An efficient fifth order method for solving systems of nonlinear equations. Comput. Math. Appl. 2014, 67, 591–601.
- Kou, J.; Li, Y.; Wang, X. Some modifications of Newton's method with fifth-order convergence. J. Comput. Appl. Math. 2007, 209, 146–152.
- Shakhno, S.M. Nonlinear majorants for investigation of methods of linear interpolation for the solution of nonlinear equations. In Proceedings of the ECCOMAS 2004 European Congress on Computational Methods in Applied Sciences and Engineering, Jyväskylä, Finland, 24–28 July 2004.
- Cordero, A.; Rojas-Hiciano, R.V.; Torregrosa, J.R.; Vassileva, M.P. Maximally efficient damped composed Newton-type methods to solve nonlinear systems of equations. Appl. Math. Comput. 2025, 492, 129231.
- Cordero, A.; Maimó, J.G.; Rodríguez-Cabral, A.; Torregrosa, J.R. Two-step fifth-order efficient Jacobian-free iterative method for solving nonlinear systems. Mathematics 2024, 12, 3341.
- Singh, H.; Sharma, J.R. A two-point Newton-like method of optimal fourth order convergence for systems of nonlinear equations. J. Complex. 2025, 86, 101907.
- Kumar, S.; Sharma, J.R.; Jäntschi, L. An optimal family of eighth-order methods for multiple roots and their complex dynamics. Symmetry 2024, 16, 1045.
- Abbasbandy, S.; Bakhtiari, P.; Cordero, A.; Torregrosa, J.R.; Lotfi, T. New efficient methods for solving nonlinear systems of equations with arbitrary even order. Appl. Math. Comput. 2016, 287–288, 94–103.
- Hueso, J.L.; Martínez, E.; Teruel, C. Convergence, efficiency and dynamics of new fourth and sixth order families of iterative methods for nonlinear systems. J. Comput. Appl. Math. 2015, 275, 412–420.
- Wang, X.; Li, Y. An efficient sixth-order Newton type method for solving nonlinear systems. Algorithms 2017, 10, 45.
- Lotfi, T.; Bakhtiari, P.; Cordero, A.; Mahdiani, K.; Torregrosa, J.R. Some new efficient multipoint iterative methods for solving nonlinear systems of equations. Int. J. Comput. Math. 2015, 92, 1921–1934.
- Wang, X. Fixed-point iterative method with eighth-order constructed by undetermined parameter technique for solving nonlinear systems. Symmetry 2021, 13, 863.
- Cordero, A.; Villalba, E.G.; Torregrosa, J.R.; Triguero-Navarro, P. Convergence and stability of a parametric class of iterative schemes for solving nonlinear systems. Mathematics 2021, 9, 86.
- Amiri, A.; Cordero, A.; Darvishi, M.T.; Torregrosa, J.R. A fast algorithm to solve systems of nonlinear equations. J. Comput. Appl. Math. 2019, 354, 242–258.
- Chicharro, F.I.; Cordero, A.; Garrido, N.; Torregrosa, J.R. A new efficient parametric family of iterative methods for solving nonlinear systems. J. Differ. Equ. Appl. 2019, 25, 1454–1467.
- Singh, A. An efficient fifth-order Steffensen-type method for solving systems of nonlinear equations. Int. J. Comput. Sci. Math. 2021, 9, 501–514.
- Sharma, J.R.; Arora, H. Efficient derivative-free numerical methods for solving systems of nonlinear equations. Comput. Appl. Math. 2016, 35, 269–284.
- Wang, X.; Zhang, T.; Qian, W.; Teng, M. Seventh-order derivative-free iterative method for solving nonlinear systems. Numer. Algor. 2015, 70, 545–558.
- Artidiello, S.; Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. Multidimensional generalization of iterative methods for solving nonlinear problems by means of weight-function procedure. Appl. Math. Comput. 2015, 268, 1064–1071.
- Sharma, J.R.; Guha, R.K.; Sharma, R. An efficient fourth order weighted-Newton method for systems of nonlinear equations. Numer. Algor. 2013, 62, 307–323.
- Wang, X.; Chen, X. Derivative-free Kurchatov-type accelerating iterative method for solving nonlinear systems: Dynamics and applications. Fractal Fract. 2022, 6, 59.
- Cordero, A.; Maimó, J.G.; Torregrosa, J.R.; Vassileva, M.P. Iterative methods with memory for solving systems of nonlinear equations using a second order approximation. Mathematics 2019, 7, 1069.
- Chicharro, F.I.; Cordero, A.; Garrido, N.; Torregrosa, J.R. On the improvement of the order of convergence of iterative methods for solving nonlinear systems by means of memory. Appl. Math. Lett. 2020, 104, 106277.
Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).