Abstract
The local convergence analysis of a two-step vectorial method of accelerators with order five has been shown previously. But the convergence order five is obtained using Taylor series and assumptions on the existence of at least the fifth derivative of the mapping involved, which is not present in the method. These assumptions limit the applicability of the method. Moreover, a priori error estimates or the radius of convergence or uniqueness of the solution results have not been given. All these concerns are addressed in this paper. Furthermore, the more challenging semi-local convergence analysis, not previously studied, is presented using majorizing sequences. The convergence for both analyses depends on the generalized continuity of the Jacobian of the mapping involved, which is used to control it and sharpen the error distances. Numerical examples validate the sufficient convergence conditions presented in the theory.
MSC:
65H10; 47J25; 49M15
1. Introduction
Let j denote a fixed natural number, Ω ⊆ R^j be an open and convex set, and F : Ω → R^j be a continuously differentiable mapping with a Jacobian denoted by F′.
Numerous problems from applied mathematics, scientific computing, and engineering can be written using mathematical modeling [,,,,,,,,,] as a system of equations of the form F(x) = 0.
A solution of the system of equations is attainable in analytical form only in special cases. That explains why most solution schemes for such a system of equations are iterative.
Newton’s method is undoubtedly the most popular; it is defined for x_0 ∈ Ω and each n = 0, 1, 2, … by x_{n+1} = x_n − F′(x_n)^{−1} F(x_n).
Newton’s method is of convergence order two and has served as the first substep of higher convergence order schemes due to its computational efficiency (CE). In particular, Newton’s method is the first optimal (vectorial) scheme in the sense of an assumption which is given next.
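As a concrete illustration of the iteration just described, the following Python sketch implements Newton’s method for a 2×2 system. The example system, starting point, and tolerance are illustrative assumptions, not taken from the paper.

```python
import math

def newton_2d(F, J, x0, tol=1e-12, max_iter=50):
    """Newton iteration x_{n+1} = x_n - J(x_n)^{-1} F(x_n) for a 2x2 system."""
    x, y = x0
    for _ in range(max_iter):
        f1, f2 = F(x, y)
        if max(abs(f1), abs(f2)) < tol:
            break
        a, b, c, d = J(x, y)           # Jacobian entries [[a, b], [c, d]]
        det = a * d - b * c            # assumed nonzero near the solution
        dx = (d * f1 - b * f2) / det   # solve J * [dx, dy]^T = [f1, f2]^T
        dy = (a * f2 - c * f1) / det
        x, y = x - dx, y - dy
    return x, y

# Illustrative system: x^2 + y^2 = 4 and x = y, with solution (sqrt 2, sqrt 2).
F = lambda x, y: (x * x + y * y - 4.0, x - y)
J = lambda x, y: (2.0 * x, 2.0 * y, 1.0, -1.0)
sol = newton_2d(F, J, (1.0, 1.0))
```

The quadratic convergence is visible in practice: once the iterate is close to the solution, the residual roughly squares at every step.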
Conjecture 1
([]). The convergence order of any Newton-type method (without memory), which is defined on , cannot exceed the bound , , where is the number of function evaluations of the entries of per iteration and is the number of function evaluations of F. Moreover, the iterative method is called optimal if it attains this bound.
Two-step optimal Newton-type methods with accelerators of order four have already been studied in Refs. [,], respectively, and are defined for by
where is a function such that
or
where, for ,
We shall also use the equivalent versions of given for by
The local convergence order four is established in [] for method (4) using Taylor series expansions and by assuming the existence of at least the fifth derivative of the mapping F, which does not appear in method (4) (or method (3)). However, several issues limit the applicability of these methods.
1.1. Motivational Issues
- ()
- The convergence order four is shown in [] by utilizing Taylor series expansions and assuming the existence of at least the fifth derivative of F, which does not appear in method (4). In particular, the following local convergence result is shown in Ref. [] for method (4).
Theorem 1.
Suppose that is sufficiently many times differentiable in a neighborhood of a simple solution . Then, the sequence generated by method (4) converges to . Moreover, the following error equation holds
where .
It is worth noting that the proof of this result uses Taylor series expansions and requires the existence of at least the fifth derivative of the mapping F, which does not appear in the method.
We look at a toy example where the results of Ref. [] cannot apply. Let , . Define by
where and . It follows from this definition that the fourth derivative of F does not exist, since for , is discontinuous at zero. Notice that solves the equation . Moreover, if , both methods (3) and (4) converge to . This observation suggests that the sufficient convergence conditions in [] or other studies using Taylor series can be replaced by weaker ones.
- ()
- There are no computable a priori estimates on the error distances . Hence, we cannot tell in advance how many iterations are required to achieve a desired error tolerance .
- ()
- There is no information on the uniqueness of the solution in a neighborhood of it.
- ()
- The more challenging and important semi-local convergence analysis has not been studied previously.
1.2. Innovation
These problems constitute our motivation for this paper. Problems – are addressed as follows.
- The local convergence analysis is presented using only conditions on the mappings which appear in method (4), i.e., F and .
- A natural number k is determined in advance such that for each . Moreover, the radius of convergence is given. Consequently, the initial points are picked from a specific neighborhood of such that .
- A domain is specified containing only one solution of the system of equations .
- The semi-local convergence analysis is provided using majorizing sequences [].
Both types of convergence analysis relying on generalized continuity are used to control and sharpen the error estimates and .
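To illustrate the second innovation item above, suppose the error bounds imply a linear contraction ‖x_{n+1} − x*‖ ≤ q‖x_n − x*‖ for some q ∈ (0, 1) (a simplifying assumption made only for this sketch; the bounds developed below are sharper). Then an index k guaranteeing ‖x_k − x*‖ ≤ ε can be computed in advance:

```python
import math

def a_priori_iterations(e0, q, eps):
    """Smallest k with e0 * q**k <= eps, assuming the linear decay
    e_{n+1} <= q * e_n with a contraction factor q in (0, 1)."""
    if e0 <= eps:
        return 0
    return math.ceil(math.log(eps / e0) / math.log(q))
```

For example, halving the error at every step (q = 0.5) from an initial distance of 1 requires ten iterations to reach a tolerance of 10^{-3}.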
Since we will refer to neighborhoods of points in , we use the standard notation for open and closed balls. Given a point x and a radius r > 0, we define the following:
- The open ball of radius r around x is B(x, r) = {y ∈ R^j : ‖y − x‖ < r}, which includes all points strictly within distance r from x.
- The closed ball of radius r around x is B[x, r] = {y ∈ R^j : ‖y − x‖ ≤ r}, which also contains the boundary points exactly at distance r from x.
The remainder of the paper is organized as follows. In Section 2, we introduce our notation and establish the local convergence theorems, culminating in the error bounds and uniqueness results. Section 3 contains the semi-local convergence analysis via majorizing sequences. Section 4 discusses the implementation details, including the computational cost and comparisons with classical schemes. Finally, in Section 5, we conclude with numerical experiments that showcase the benefits of our approach, highlighting cases where existing fourth-order methods struggle or require more stringent assumptions.
2. Local Convergence
Some convergence conditions are needed. Let .
Suppose the following hold:
- ()
- There exists a continuous, nondecreasing function such that the function has a minimal positive zero. We shall denote such a zero by and set .
- ()
- There exists a continuous, nondecreasing function such that for defined bythe function has a minimal positive zero in . We shall denote such a zero by .
- ()
- For , the function defined as
is such that has a minimal positive zero in , which is called . Set , where .
- ()
- For functions , , and and given by
and
the function has a minimal positive zero in , which is denoted by . Set
We show in Theorem 2 that the number is a convergence radius for method (4). Next, the parameter and the functions and w are associated with .
- ()
- There exists an invertible linear operator , where k is a natural number and is a solution of the nonlinear system of equations , such that for each ,
Set .
- ()
- for each .
- ()
- .
- ()
- There exists a parameter such that
Remark 1.
One can choose , the identity operator, or , where is a convenient point other than , or . The last selection implies that is a simple solution. However, no such assumption is made or implied here. Thus, the method (4) can be used to approximate solutions of multiplicity 2, 3, …. Other choices for S are also possible [,,,].
The main local convergence result is based on the conditions ()–(). Set .
Theorem 2.
Suppose that conditions ()–() hold. Then, the following items hold for the sequence , provided :
and .
Proof.
Notice that item (9) holds if . Induction is used to show items (9)–(11). Let . The application of the conditions () and () and definitions (5) and (6) can, in turn, give
The Banach lemma on linear invertible operators [,] and (12) assure the existence of and the following estimate:
Notice now that for , the iterate is well defined by the first substep of method (4) if and
It follows that, by estimate (15), item (10) holds for and the iterate . Notice that by (13), for and the condition (), the accelerators , and and the iterate are well defined. Moreover, we can, in turn, write the following:
Some estimates are needed before we revisit (16).
By condition (), (15), and the induction hypotheses, we obtain, in turn,
But we have
so by (),
and
Thus, by (18) and the condition (), we have
so
which gives
and
where we also used
or
and
Finally, by (23), we deduce that . □
A domain is now specified that contains only as a solution of the system of equations .
Proposition 1.
Suppose the following:
There exists solving the system of equations for some , and there exists such that the condition () holds in ,
Set . Then, the only solution of the system of equations in the domain is .
Proof.
Let us consider , provided that . In view of the condition () and (24), we have
Therefore, . Finally, from the identity
we conclude that . □
Remark 2.
- (i)
- The function has two versions. So, in practice, we shall use the smaller of the two. Notice that if the two versions cross on , then is chosen as the smallest on each interval.
- (ii)
- A choice for and is provided if all conditions ()–() hold in Proposition 1.
3. Semi-Local Convergence
In this section, the formulas and calculations are similar to those of the local convergence analysis, but the terms are replaced by and , respectively.
Suppose the following:
- ()
- There exists a continuous, nondecreasing function so that the function has a minimal positive zero. Let us denote such a zero by s. Set .
- ()
- There exists a continuous and nondecreasing function . Define the sequences , and for , some and each , by
and
A general convergence condition for the sequence is needed, since it is shown to be majorizing for in Theorem 3.
- ()
- There exists such that for each ,
In view of the condition () and (26), it follows that and there exists such that
It is known that is the unique least upper bound of the sequence . There exists a relationship between the functions and and the mappings in the method (4).
- ()
- There exist and an invertible mapping S so that for each ,
Set . Notice that if , the condition () gives
Thus, the linear mapping is invertible. Consequently, the iterate is well defined by the first substep of the method. Thus, we can choose .
- ()
- for each .
- ()
- There exists so that for each ,and
- ()
- .
Remark 3.
Some possible selections for S are the following: , , or , where is an auxiliary point other than . Other choices for S are also possible.
The semi-local convergence result for method (4) can now follow.
Theorem 3.
Suppose that conditions ()–() hold. Then, the sequence generated by method (4) satisfies the assertions
and
Moreover, the point is well-defined and solves the system of equations .
Proof.
The assertions (27)–(29) are shown by induction. Notice that, by the definition of , (26), and the method (4), assertion (27) holds if and . So, the assertions (27) and (28) hold if and the iterate .
Then, by ()–() and the induction hypotheses, we obtain that exists and
As in the local case, we need some estimates.
so
and
where we also used
Thus, assertion (29) holds, and the iterate .
Then, in view of the first substep of the method (4), we can write, in turn, that
It follows that
The induction for assertions (27)–(29) is completed. Notice that by condition (), the sequence is Cauchy, being convergent. But all the iterates are such that and (27)–(30) hold. It follows that the sequence is also Cauchy in and, as such, has a limit denoted by .
Let in estimate (42) to obtain . Furthermore, by assertions (28) and (29), and the triangle inequality, one can have
So, for , and using the triangle inequality,
Therefore, we conclude by (46) that, if ,
□
Next, a domain is given with only one solution for the system of equations .
Proposition 2.
Suppose that there exists a solution for some , the condition () holds on , and there exists so that
Set .
Then, the only solution of the system of equations in the domain is .
Proof.
Suppose that there exists a solution of the system of equations in the domain satisfying . Let us define the linear mapping by . Then, by condition () and (48), we have, in turn, that
we deduce that . □
Remark 4.
- (i)
- The limit point in the condition () can be replaced by s, given in ().
- (ii)
- If all conditions, ()–(), hold in Proposition 2, then one can take and .
4. Numerical Work
In this section, we first consider some alternatives to conditions () and ().
Case: and . Local convergence. In view of the estimate
we obtain
Thus, we have from (50) that
If , we obtain
Thus, and
Hence, we can have
Consequently, we can drop the condition () and replace the function h by
where
Semi-local convergence
We have the estimates
and
Thus, we can choose
so
General Case:
Local convergence
Let us introduce the following conditions:
for some and each , or
for each .
In this case, notice that
for finite , since
or equivalently . Thus, we can set
Thus, we have
Semi-local convergence
Suppose that
for some finite and each .
Then,
but is replaced by , defined for some finite i by the following:
where
To further validate the efficiency and accuracy of the proposed method, we present three numerical examples of varying complexity. These examples serve as benchmarks to compare the convergence behavior, computational efficiency, and numerical stability of different iterative methods.
In all calculations, the default tolerance was set to to ensure high precision in the numerical results. The maximum number of iterations for each method was limited to 50 to prevent excessive computational overhead while maintaining convergence efficiency. This limitation was chosen based on empirical observations that most efficient methods reach convergence well within this range, making further iterations redundant and computationally wasteful. Additionally, the reported CPU timing was obtained as the average over 50 independent runs, providing a more stable and representative measure of computational performance by reducing the impact of potential fluctuations in execution time. This approach is valuable in ensuring that the results are not unduly influenced by temporary variations in computational load, background processes, or system-specific execution conditions. Such fluctuations can distort comparative performance assessments, leading to misleading conclusions about the relative efficiency of the evaluated methods. All numerical experiments were conducted using Google Colab’s cloud computing resources. The runtime used for computations was equipped with an Intel Xeon CPU @2.20 GHz, 13 GB RAM, and a Tesla K80 accelerator with 12 GB of GDDR5 VRAM. This environment ensured that performance benchmarks were consistent and comparable across different test cases.
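The averaged-timing protocol described above can be reproduced with a small harness such as the following Python sketch (the function name and the averaging count default are illustrative, not taken from the original experiments):

```python
import time

def average_run_time(run, repeats=50):
    """Average the wall-clock time of run() over `repeats` independent runs,
    damping fluctuations from background load as described above."""
    total = 0.0
    for _ in range(repeats):
        start = time.perf_counter()
        run()                      # the solver invocation being benchmarked
        total += time.perf_counter() - start
    return total / repeats
```

Using a high-resolution monotonic clock such as `time.perf_counter` avoids artifacts from system clock adjustments during the benchmark.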
The first example focuses on a small system to illustrate the fundamental properties of the methods, while the second example scales up the problem size to assess medium-sized system performance. The third example examines a large-scale nonlinear system to demonstrate how well the methods handle computational challenges at an increased scale.
In this section, we compare the performance of Method (4) with two established iterative methods for solving systems of nonlinear equations. The first is Method (8) from []. The second is a sixth-order method without memory []. For completeness, we recall their definitions below as used in the numerical experiments.
- 1.
- Abbasbandy []
- 2.
- A sixth-order method without memory []
where
Example 1.
Let j = 3 and Ω = B, and define the mapping F for , by
From this definition, it follows that the Jacobian of mapping F is given by
Notice that solves the system of equations and . Then, for , the conditions hold if and , since .
Next, we compute ρ using (5), yielding the following radii: , , , and . Among these values, the minimum radius of convergence is . The chosen parameters are and . For the numerical experiment, the initial guess was set to .
We compare the methods’ performance for this example in Table 1.
Table 1.
Comparison of methods for Example 1.
Example 2.
Consider the nonlinear system of equations of size 200 as follows:
We set the initial estimate , , and to obtain the solution . The results are summarized in Table 2.
Table 2.
Comparison of methods for Example 2.
Example 3.
We analyze a large-scale nonlinear system of 300 equations in 300 variables to demonstrate the method’s efficiency and scalability in addressing significant computational challenges.
To demonstrate its broad applicability to real-world large-scale nonlinear problems, we consider the following system:
The required solution for this system is given by . We used as the initial guess, and the parameters were set to and . Given the complexity of this system, computational efficiency becomes a critical factor. We compare the performance of various methods in Table 3.
Table 3.
Comparison of methods for Example 3.
The numerical experiments presented demonstrate the efficiency and robustness of the tested methods across different problem sizes. The proposed approach consistently showed reduced computation time while maintaining high accuracy, particularly in large-scale systems. These results confirm the method’s potential for practical applications in solving complex systems of nonlinear equations efficiently.
5. Conclusions
A finer local convergence analysis for method (4) is presented without the Taylor series expansions used in Ref. [], which in turn introduce the drawbacks . The new analysis uses generalized continuity assumptions to control the derivative and sharpen the bounds on the error distances . Moreover, the rest of the assumptions rely only on the mappings appearing in method (4), i.e., F and . Furthermore, the more challenging and important semi-local convergence analysis, which has not been studied previously, is also provided by relying on majorizing sequences. The same technique can be used to extend the applicability of other methods, such as (3), or other methods along the same lines [,,,,,,,,,,,,,]. This is the direction of our future work.
Numerical experiments complete this paper.
Author Contributions
Conceptualization, I.K.A., S.S., Y.S., S.R. and N.S. All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Argyros, I.K. The Theory and Applications of Iteration Methods, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2022. [Google Scholar]
- Argyros, I.K.; Shakhno, S. Extended Two-Step-Kurchatov Method for Solving Banach Space Valued Nondifferentiable Equations. Int. J. Appl. Comput. Math. 2020, 6, 2. [Google Scholar] [CrossRef]
- Arroyo, V.; Cordero, A.; Torregrosa, J.R. Approximation of artificial satellites’ preliminary orbits: The efficiency challenge. Math. Comput. Model. 2011, 54, 1802–1807. [Google Scholar] [CrossRef]
- Behl, R.; Bhalla, S.; Magreñán, Á.A.; Kumar, S. An efficient high order iterative scheme for large nonlinear systems with dynamics. J. Comput. Appl. Math. 2022, 404, 113249. [Google Scholar] [CrossRef]
- Cordero, A.; Rojas-Hiciano, R.V.; Torregrosa, J.R.; Vassileeva, M.P. A highly efficient class of optimal fourth-order methods for solving nonlinear systems. Numer. Algorithms 2024, 95, 1879–1904. [Google Scholar] [CrossRef]
- Jarratt, P. Some fourth order multipoint iterative methods for solving equations. Math. Comput. 1966, 20, 434–437. [Google Scholar] [CrossRef]
- Ortega, J.M.; Rheinboldt, W.G. Iterative Solutions of Nonlinear Equations in Several Variables; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2000. [Google Scholar] [CrossRef]
- Shakhno, S.M.; Yarmola, H.P.; Shunkin, Y.V. Convergence analysis of the Gauss-Newton-Potra method for nonlinear least squares problems. Mat. Stud. 2018, 50, 211–221. [Google Scholar] [CrossRef]
- Sharma, J.R.; Kumar, S. A class of accurate Newton-Jarratt-like methods with applications to nonlinear models. Comput. Appl. Math. 2022, 41, 46. [Google Scholar] [CrossRef]
- Traub, J.F. Iterative Methods for the Solution of Equations; Chelsea Publishing Company: New York, NY, USA, 1982. [Google Scholar]
- Singh, H.; Sharma, J.R.; Kumar, S. A simple yet efficient two-step fifth-order weighted-Newton method for nonlinear models. Numer. Algorithms 2022, 93, 203–225. [Google Scholar] [CrossRef]
- Abbasbandy, S.; Bakhtiari, P.; Cordero, A.; Torregrosa, J.R.; Lotfi, T. New efficient methods for solving nonlinear systems of equations with arbitrary even order. Appl. Math. Comput. 2016, 287–288, 94–103. [Google Scholar] [CrossRef]
- Cordero, A.; Garrido, N.; Torregrosa, J.R.; Triguero-Navarro, P. Design of iterative methods with memory for solving nonlinear systems. Math. Methods Appl. Sci. 2023, 46, 12361–12377. [Google Scholar] [CrossRef]
- Budzko, D.A.; Cordero, A.; Torregrosa, J.R. A new family of iterative methods widening areas of convergence. Appl. Math. Comput. 2015, 252, 405–417. [Google Scholar] [CrossRef]
- Chun, C.; Lee, M.Y.; Neta, B.; Dzunic, J. On optimal fourth-order iterative methods free from second derivative and their dynamics. Appl. Math. Comput. 2012, 218, 6427–6438. [Google Scholar] [CrossRef]
- Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. Increasing the convergence order of an iterative method for nonlinear systems. Appl. Math. Lett. 2012, 25, 2369–2374. [Google Scholar] [CrossRef]
- Cordero, A.; Rojas-Hiciano, R.V.; Torregrosa, J.R.; Vassileva, M.P. Fractal complexity of a new biparametric family of fourth optimal order based on the Ermakov-Kalitkin scheme. Fractal Fract. 2023, 7, 459. [Google Scholar] [CrossRef]
- Grau-Sánchez, M.; Noguera, M.; Gutiérrez, J. On new computational local orders of convergence. Appl. Math. Lett. 2012, 25, 2023–2030. [Google Scholar] [CrossRef]
- King, R. A family of fourth order methods for nonlinear equations. SIAM J. Numer. Anal. 1973, 10, 876–879. [Google Scholar] [CrossRef]
- Kung, H.T.; Traub, J.F. Optimal order of one-point and multi-point iteration. J. ACM 1974, 21, 643–651. [Google Scholar] [CrossRef]
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).