Abstract
Comparisons between Newton’s and Steffensen-like methods are given for solving systems of equations as well as Banach space valued equations. Our idea of the restricted convergence domain is used to compare the sufficient convergence criteria of these methods under the same conditions as in previous papers. The following advantages are obtained: an enlarged convergence domain, tighter error estimates, and more precise information on the location of the solution. These advantages hold under Lipschitz constants that are the same as, or at least as tight as, the earlier ones, of which they are specializations. Hence, the applicability of these methods is extended. Numerical experiments complete this study.
MSC:
47H99; 65J15
1. Introduction
Many problems in computational sciences and other areas reduce, via mathematical modeling [1], to the problem of approximating a locally unique solution of a general nonlinear equation or a system of equations of the form
with G being a continuous operator mapping a convex subset D of a Banach space into a Banach space .
Solutions of such equations can rarely be found in closed form, so most solution methods for these equations are iterative. The study of the convergence of iterative procedures is usually of two types: semi-local and local convergence analysis. The semi-local convergence (SL) analysis uses data around an initial point to give criteria assuring the convergence of the iterative procedure, whereas the local one (LC) uses information around a solution to find the radii of convergence balls. Note that in computational sciences, the practice of numerical analysis for finding such solutions is connected to Newton’s method (NM),
NM is undoubtedly the most popular method for generating a sequence that converges quadratically (under certain hypotheses [1]) to . Here, denotes the space of bounded linear operators from into . There is extensive literature on local as well as semi-local convergence results for NM under Lipschitz-type conditions.
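For reference, NM in its standard form (our rendering, in the notation of [1], assuming the inverse of the derivative exists at each iterate) is:

```latex
x_{n+1} = x_n - G'(x_n)^{-1} G(x_n), \qquad n = 0, 1, 2, \ldots
```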
There are several iterative processes that use divided differences instead of derivatives, because the operator G may not be differentiable. The operator , , with , is the first divided difference when
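Recall the standard defining property of a first-order divided difference (a common convention; the paper’s precise condition may differ slightly):

```latex
[x, y; G] \in L(X, Y), \qquad [x, y; G](x - y) = G(x) - G(y) \quad \text{for } x \neq y .
```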
We can consider the approximation , where and are known data at the point . Depending on the data, this approximation improves, and we obtain the Secant-like method [2,3,4,5]
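In its classical form (shown here as a reference point; Secant-like variants replace the points by nearby auxiliary points), the Secant method reads:

```latex
x_{n+1} = x_n - [x_{n-1}, x_n; G]^{-1} G(x_n), \qquad n = 1, 2, \ldots
```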
In general, symmetric divided differences perform better. We can see this in the Center–Steffensen ( and ) [2,6] and Kurchatov methods ( and ) [7]; they both maintain the quadratic convergence of Newton’s method by approximating the derivative through symmetric divided differences with respect to the [1,3,8,9,10,11]. Following this idea, in this work we consider the derivative-free point-to-point iterative process of the Steffensen-like method given by
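The divided-difference choices commonly associated with the Center–Steffensen and Kurchatov methods (a hedged rendering; the precise choices in [2,6,7] may differ in notation) can be written as:

```latex
% Center--Steffensen:
x_{n+1} = x_n - [\,x_n + G(x_n),\; x_n - G(x_n);\, G\,]^{-1} G(x_n),
% Kurchatov:
x_{n+1} = x_n - [\,2x_n - x_{n-1},\; x_{n-1};\, G\,]^{-1} G(x_n).
```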
where for a real number . Thus, we are considering a symmetric divided difference to approximate the derivative in NM. Furthermore, by varying the parameter , we can approach the value .
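As an illustration, here is a minimal scalar sketch of a Steffensen-like iteration of this type (an assumption standing in for the paper’s method (4): in one dimension the symmetric divided difference on the points x ± a·g(x) is just a difference quotient):

```python
def steffensen_like(g, x0, a=0.01, tol=1e-12, max_iter=50):
    """Derivative-free Steffensen-like iteration for a scalar equation g(x) = 0.

    The derivative is approximated by the symmetric divided difference
    (g(u) - g(v)) / (u - v) with u = x + a*g(x) and v = x - a*g(x).
    """
    x = x0
    for _ in range(max_iter):
        gx = g(x)
        if abs(gx) < tol:
            break
        u, v = x + a * gx, x - a * gx
        dd = (g(u) - g(v)) / (u - v)  # symmetric divided difference
        x -= gx / dd
    return x

# Example: solve x^2 - 2 = 0 starting from x0 = 1.5
root = steffensen_like(lambda x: x * x - 2.0, 1.5)
```

Note that as a → 0 the divided difference tends to the derivative, so the iteration approaches NM, in line with the remark above.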
In a similar way, we can consider the Inexact Steffensen-like method
2. Convergence for Steffensen-like Method
Throughout this work, we denote by and , respectively, the open and closed balls with center and radius .
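In the usual notation (our rendering, with $X$ the underlying Banach space, center $x_0 \in X$, and radius $\rho > 0$):

```latex
B(x_0, \rho) = \{\, x \in X : \|x - x_0\| < \rho \,\}, \qquad
\overline{B}(x_0, \rho) = \{\, x \in X : \|x - x_0\| \le \rho \,\}.
```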
We start by presenting our extension of the celebrated Newton–Kantorovich theorem for solving nonlinear equations given in [1] under the following conditions:
- There exist and such that and.
- There exists such that for that. Set .
- There exists such that for that.
Theorem 1
(Extended Newton–Kantorovich theorem). Let be a continuously Fréchet differentiable operator. Assume that conditions – and are satisfied. Further, assume
Then, the sequence generated by Newton’s method (2) is well defined in , remains in , and converges to a solution of equation , where is the smallest positive zero of the polynomial . Moreover, the following estimates hold
and
where
and
Furthermore, if there exists , such that
the limit point is the only solution of equation in the set .
Remark 1.
The following Lipschitz condition has been used for some and all :
However, then
and
since
The sufficient SL convergence condition given by Kantorovich [1] (see also [3,6,8,9,10,11]) under and (in non-affine invariant form) is
Then, by (6)–(10), we have
but not vice versa, unless . The error estimates obtained under (10), as well as the uniqueness results, are less precise, since replaces M (and ).
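For orientation, the classical Kantorovich criterion, in one common non-affine invariant normalization with Lipschitz constant $M$ for $G'$ and $\eta = \|G'(x_0)^{-1} G(x_0)\|$ (the paper’s condition (10) may use an equivalent form), reads:

```latex
h = M \eta \le \tfrac{1}{2}.
```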
Similar extensions hold in the case of the Steffensen-like method (4). Indeed, let us consider and present the semi-local convergence result in Theorem 1 of [2], but given in affine form.
Theorem 2.
Let be a continuously differentiable operator. Assume conditions and with
are satisfied. Moreover, assume
and
where is the smallest positive root of the polynomial
Then, the sequence generated by the Steffensen-like method (4) is well defined in , remains in , and converges to a solution of equation . Moreover, the following error estimates hold
and
where
and
Furthermore, the limit point is unique in , where
Remark 2.
In order to compare Theorem 2 with its extension that follows as in [2], we introduce the divided difference of order one [2,3,7] given by
with
Using (instead of that used in [2]) and (19), we get the following extension of Theorem 2.
Assume
Then, we have
from which it follows by the Banach lemma on invertible operators [1,5,8] that
Hence, we get:
Theorem 3.
Let be a continuously differentiable operator. Assume conditions – and (20) are satisfied. Moreover, assume
and
Then, the sequence generated by the Steffensen-like method (4) is well defined in , remains in , and converges to a solution of equation . Moreover, the following error estimates hold
and
Furthermore, the limit point is unique in .
Proof.
Simply repeat the proof of Theorem 2 in [2], but in affine invariant form, and use instead of for the upper bounds on the inverses of the operators involved. □
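For completeness, the Banach lemma on invertible operators cited in this derivation (see [1,5,8]) states, in its standard form:

```latex
A \in L(X, X), \quad \|I - A\| < 1
\;\Longrightarrow\;
A^{-1} \in L(X, X)
\quad \text{and} \quad
\|A^{-1}\| \le \frac{1}{1 - \|I - A\|}.
```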
Remark 3.
In view of (7)–(9), we have
and
which justify the advantages stated in the introduction. The computation of the parameter requires that of and M. Hence, the advantages are obtained at the same computational cost as before. A further improvement can be obtained if, in and , is replaced by , since in this case the tighter can replace M in all previous results, since .
3. Convergence of Inexact Steffensen-like Method
We shall first develop an auxiliary result concerning a majorizing sequence for method (5). Let a and b be real numbers. Define the parameters and functions
where,
Notice that is the unique positive root of equation .
The sequence shall be shown to be majorizing for the sequence in Theorem 4. First, however, two convergence results are presented for the sequence .
Lemma 1.
Assume that for
Then, the following items hold
and
Proof.
It follows immediately from the definition of the sequence and condition (24). □
Next, a second convergence result follows for the sequence .
Lemma 2.
Let , and be real numbers. Assume that
Then, the sequence generated by (23) is well defined, non-decreasing, bounded from above by , and converges to its unique least upper bound , which satisfies
Proof.
We shall show, using mathematical induction, that for
Estimate (27) is satisfied for by condition (25). Then, by (23), we have
Assume that
Estimate (29) motivates us to introduce the sequence of functions .
We need a relationship between consecutive functions . We can write
In particular, by the definition of and Equation (30), we get . Then, (29) holds if
where
So, instead of (31), we must show that , which is true by condition (25). Hence, the induction for (27) is completed.
It follows that the sequence is non-decreasing, bounded from above by , and as such it converges to its unique least upper bound , which satisfies (26). □
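To illustrate the majorization idea numerically, here is a sketch built on the classical Kantorovich scalar polynomial p(t) = (M/2)t² − t + η (an assumption standing in for the paper’s recurrence (23), whose explicit form is given above):

```python
import math


def kantorovich_majorizing(M, eta, n_terms=25):
    """Newton iterates on p(t) = (M/2)*t**2 - t + eta starting from t0 = 0.

    Under M*eta <= 1/2, the sequence is non-decreasing and converges to the
    smallest positive root t* = (1 - sqrt(1 - 2*M*eta)) / M, which majorizes
    the Banach space iterates in the classical theory.
    """
    p = lambda t: 0.5 * M * t * t - t + eta
    dp = lambda t: M * t - 1.0  # negative on [0, t*), so no division by zero
    ts = [0.0]
    for _ in range(n_terms):
        t = ts[-1]
        ts.append(t - p(t) / dp(t))
    return ts


# Example with M = 1, eta = 0.4 (so M*eta = 0.4 <= 1/2):
seq = kantorovich_majorizing(M=1.0, eta=0.4)
t_star = 1.0 - math.sqrt(1.0 - 2 * 0.4)  # smallest positive root of p
```

The sequence increases monotonically to t*, which is exactly the behavior Lemma 2 establishes for the majorizing sequence of method (5).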
Next, we present the semi-local convergence analysis of method (5) using conditions , , , and the preceding Lemma and notations.
Theorem 4.
Assume that conditions , , , with replaced by , and (24) or (25) are satisfied. Then, the sequence generated by method (5) is well defined in , remains in for , and converges to a solution of equation . Moreover, the following estimates hold
and
Furthermore, the limit point is the only solution of equation in the set
, where .
Proof.
It follows from the estimates
Therefore,
leading to
so, we conclude
The rest follows as in Theorem 2 in [2]. □
4. Numerical Examples
Example 1.
Let , and , , . Define function g on D by
Then, we have
We can see that condition (10) is not verified, so there is no SL convergence for Newton’s method using this Kantorovich condition. In contrast, the advantage of the new method (5) is that there is convergence, as we can see in Table 1 and Table 2, obtaining the solutions
Table 1.
Sequences for method (4).
Table 2.
Sequence for method (5).
In a similar way, in Table 2, we can see the convergence of sequence built using method (5) with the parameters defined above.
Example 2.
Let be the domain of continuous real functions defined on the interval . We consider the max–norm. Suppose that , and F on D is defined as
where are given in , and M represents a kernel defined as a Green’s function
From Equation (33) for , the derivative is given by:
We choose , and we conclude from Equations (33)–(35) that , , , , , and , then . Observe that and . Since , the condition given in Equation (10) is not satisfied. Therefore, convergence is not assured by the Kantorovich criterion. In contrast, the advantage of the new method (5) is that there is convergence, as we can see in Table 3, obtaining the solution and verifying :
Table 3.
Sequence for method (5).
Analogous to Example 1, we can construct the sequence that converges to its limit point.
5. Discussion
Note that if all methods are convergent, the new error bounds are at least as tight, since the Lipschitz constants are at least as small. For instance, in Example 1, the Lipschitz constant (see ()) used before is , so , where and M are the constants used. Then, the new majorizing sequence (see Theorem 1) is tighter than the one used by Kantorovich, where (for Newton’s method). The same is true for the Steffensen-like methods (4) and (5) (see Theorem 2 and Remark 3). Moreover, if all Newton and Steffensen methods are compared at the same time, and the error estimates are obtained using majorizing sequences that in turn depend on the constants “M” and “K”, respectively, then the tighter error bounds are given by those with the smallest constants. Notice also that such a comparison was the main topic of the motivational paper [2]. Observe that methods (4) and (5) are derivative-free. They should be used when the derivative is hard to compute or does not exist. It is clear that for sufficiently small , these methods behave similarly to Newton’s (see also [2]).
6. Conclusions
More precise majorizing sequences have been used to expand the convergence domain for methods (4) and (5) under the same or weaker conditions than those in earlier works [1,2,4,5,10,11]. Further benefits include improved error estimates and an improved uniqueness ball. The technique can be applied in a similar way to extend the usage of other methods [5,6,7,8].
Author Contributions
Conceptualization, I.K.A., C.A., J.C. and D.G.; data curation, I.K.A., C.A., J.C. and D.G.; methodology, I.K.A., C.A., J.C. and D.G.; project administration, D.G.; formal analysis, I.K.A., C.A., J.C. and D.G.; investigation, I.K.A., C.A., J.C. and D.G.; resources, I.K.A., C.A., J.C. and D.G.; writing—original draft preparation, I.K.A., C.A., J.C. and D.G.; writing—review and editing, I.K.A., C.A., J.C. and D.G.; visualization, I.K.A., C.A., J.C. and D.G.; supervision, I.K.A., C.A., J.C. and D.G. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by Universidad de Las Américas, Quito, Ecuador, grant number FGE.DGS.21.04.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Pergamon Press: Oxford, UK, 1982.
- Amat, S.; Ezquerro, J.A.; Hernández-Verón, M.A. On a Steffensen-like method for solving nonlinear equations. Calcolo 2016, 53, 171–188.
- Argyros, I.K. On the secant method. Publ. Math. Debrecen 1993, 43, 223–238.
- Hernández-Verón, M.Á.; Magreñán, Á.A.; Rubio, M.J. Dynamics and local convergence of a family of derivative-free iterative processes. J. Comput. Appl. Math. 2019, 354, 414–430.
- Magreñán, Á.A.; Gutiérrez, J.M. Real dynamics for damped Newton’s method applied to cubic polynomials. J. Comput. Appl. Math. 2015, 275, 527–538.
- Argyros, I.K. Unified convergence criteria for iterative Banach space valued methods with applications. Mathematics 2021, 9, 1942.
- Shakhno, S.M. On a Kurchatov’s method of linear interpolation for solving nonlinear equations. Proc. Appl. Math. Mech. 2004, 4, 650–651.
- Argyros, I.K. The Convergence Theory and Applications of Iterative Methods, 2nd ed.; CRC Press/Taylor and Francis: Boca Raton, FL, USA, 2022.
- Argyros, I.K.; Magreñán, Á.A. Iterative Methods and Their Dynamics with Applications; CRC Press: New York, NY, USA, 2017.
- Ezquerro, J.A.; Hernández-Verón, M.Á. Newton’s Method: An Updated Approach of Kantorovich’s Theory; Frontiers in Mathematics; Springer: Cham, Switzerland, 2017.
- Ezquerro, J.A.; Hernández-Verón, M.Á. Mild Differentiability Conditions for Newton’s Method in Banach Spaces; Frontiers in Mathematics; Springer: Cham, Switzerland, 2020.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).