Abstract
A plethora of sufficient convergence criteria has been provided for single-step iterative methods to solve Banach space valued operator equations. However, an interesting question remains unanswered: is it possible to provide unified convergence criteria for single-step iterative methods, which are weaker than earlier ones without additional hypotheses? The answer is yes. In particular, we provide only one sufficient convergence criterion suitable for single-step methods. Moreover, we also give a finer convergence analysis. Numerical experiments involving boundary value problems and Hammerstein-like integral equations complete this paper.
1. Introduction
Numerous applications from mathematics, economics, engineering, physics, chemistry, biology, and medicine, to mention a few, can be modeled as follows:
$$F(x) = 0, \qquad (1)$$
with an operator $F:\Omega \subseteq X \to Y$ acting between Banach spaces $X$ and $Y$, where the set $\Omega$ is nonempty. That is why determining a solution, denoted by $x^*$, of Equation (1) is of extreme importance. However, this task is difficult in general. Ideally, one desires $x^*$ to be available in closed form, but this is accomplished only in some instances. Practitioners and researchers resort mostly to iterative methods, generating a sequence approximating $x^*$ under certain conditions on the initial data. The most popular single-step methods are as follows:
Newton’s [1,2]
Secant [3]
where
Steffensen-like [4]
for and being parameters.
Newton-type [5,6,7,8]
where
Stirling’s [9]
where and are used to find fixed points of equation
Picard’s [10,11]
Numerous other single step methods can be found in [12,13,14] and the references therein.
Clearly, all the preceding methods can be written in a unified way as follows:
where and
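For illustration, the following minimal Python sketch implements a generic single-step scheme of this type, assuming it can be written as $x_{n+1} = x_n - A(x_n)^{-1}F(x_n)$ for an invertible linear operator $A(x_n)$; choosing $A = F'$ recovers Newton's method. The test system and starting point are merely illustrative.

```python
import numpy as np

def single_step(F, A, x0, tol=1e-12, max_iter=50):
    """Generic single-step iteration x_{n+1} = x_n - A(x_n)^{-1} F(x_n).

    Choosing A = F' gives Newton's method; a frozen or approximate A
    gives Newton-type variants.
    """
    x = np.asarray(x0, dtype=float)
    for n in range(max_iter):
        step = np.linalg.solve(A(x), F(x))   # solve A(x_n) s = F(x_n)
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x, n + 1

# Illustrative 2x2 system (not from the paper): x^2 + y^2 = 1, x = y.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])  # A = F'

x_star, iters = single_step(F, J, x0=[1.0, 0.5])
print(x_star, iters)   # approx (0.7071, 0.7071) in a handful of iterations
```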
We usually study two types of convergence for iterative methods. The local convergence uses information about the solution $x^*$ to find the radii of convergence balls. The semilocal convergence uses information about the initial point $x_0$ that guarantees convergence to $x^*$. Sufficient convergence criteria for these methods have been provided by many authors [2,12,13].
The following common questions (Q) arise in the semilocal study of these methods:
- Q1
- Can the convergence region be extended since it is small in general?
- Q2
- Can the error estimates on $\|x_{n+1}-x_n\|$ and $\|x_n - x^*\|$ become tighter? Otherwise, we compute more iterates than we should to reach a predecided error tolerance.
- Q3
- Can the convergence criteria be weakened?
- Q4
- Can the location of the solution $x^*$ be more precise?
- Q5
- Is there a uniform way of studying single-step methods?
- Q6
- Are there uniform convergence criteria for single-step methods?
The novelty of our paper is that we answer all of these questions (Q) positively, without additional conditions.
In order to deal with single-step methods, we first consider the following iteration:
where the function involved is related to the initial data. The task of choosing this function so that the resulting sequence is majorizing for all of the methods listed previously is very difficult in general.
We define a special case of sequences given by (9) as follows:
for each , where the parameters involved are nonnegative. We shall show that all majorizing sequences used to study the preceding methods are specializations of the sequence given by (10).
Similarly, in the case of local convergence we show all preceding methods can be studied using the estimate as follows:
where are nonnegative parameters, and
We suppose from now on that $\{t_n\}$ is a majorizing sequence for $\{x_n\}$. Recall that an increasing real sequence $\{t_n\}$ is majorizing for a sequence $\{x_n\}$ in a Banach space if $\|x_{n+1} - x_n\| \le t_{n+1} - t_n$ for each $n = 0, 1, 2, \ldots$ [11]. Additional conditions are needed to show that where
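To make the majorizing property concrete, the following sketch checks the inequality $\|x_{n+1}-x_n\| \le t_{n+1}-t_n$ numerically for Newton's method, using the classical Kantorovich scalar majorant as a stand-in for the more general sequence (10); the scalar test problem is merely illustrative.

```python
def kantorovich_majorant(L, eta, n_steps):
    """Classical scalar majorizing sequence for Newton's method:
    t_0 = 0, t_1 = eta, t_{k+1} = t_k + L (t_k - t_{k-1})^2 / (2 (1 - L t_k))."""
    t = [0.0, eta]
    for _ in range(n_steps - 1):
        t.append(t[-1] + L * (t[-1] - t[-2]) ** 2 / (2.0 * (1.0 - L * t[-1])))
    return t

# Illustrative scalar problem (not from the paper): F(x) = x^2 - 2, x0 = 1.5.
F = lambda x: x ** 2 - 2.0
dF = lambda x: 2.0 * x
x0 = 1.5
eta = abs(F(x0) / dF(x0))        # ||F'(x0)^{-1} F(x0)||
L = 2.0 / abs(dF(x0))            # Lipschitz constant of F'(x0)^{-1} F' on R

t = kantorovich_majorant(L, eta, 5)
xs = [x0]
for _ in range(5):
    xs.append(xs[-1] - F(xs[-1]) / dF(xs[-1]))

# Majorizing property: |x_{k+1} - x_k| <= t_{k+1} - t_k; for this simple
# quadratic the bound is attained up to rounding, hence the tiny tolerance.
for k in range(4):
    print(abs(xs[k + 1] - xs[k]) <= t[k + 1] - t[k] + 1e-15)
```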
2. Majorizing Sequences and Convergence Analysis
In this section, we use majorizing sequence (10) to deal first with the semi-local convergence analysis for sequence
We provide very general sufficient criteria for the convergence of sequence (10).
Theorem 1.
Suppose that for each and
and
Then, the sequence developed by (10) exists, is nondecreasing, bounded from above by and converges to its unique least upper bound denoted by , which satisfies
Proof.
Using the definition of sequence we see that holds for each Moreover, by condition (12), So, sequence converges to □
Remark 1.
Condition (12) can be satisfied only in some special cases. Next, we provide stronger conditions, which can easily be verified.
It is convenient for the following convergence analysis to develop real functions, parameters and sequences. Define functions on the interval for by the following:
and sequence
Suppose that equations
and
have minimal solutions and respectively in the interval satisfying the following:
Notice that
and
Indeed, by the definition of sequence and function h, we obtain in turn by adding and subtracting (in the definition of ) the following:
Remark 2.
Functions h and g appear in the proof of Theorem 1. The former is related to two consecutive functions and (see (20)). Then, (21) is true if (17) holds for The latter relates to the limit of these recurrent functions and is independent of This function g then should satisfy (27), and that happens if (18) holds. Condition (see (19)) is needed to show that (22) holds for , which will imply the following:
and the induction for can begin. The condition is needed to show (29).
Theorem 2.
Proof.
We shall show by induction
Remark 3.
The conditions of Theorem 2 imply condition (12) of Theorem 1 but not necessarily vice versa.
Next, we specialize these results to some interesting cases, justifying the advantages already stated.
Case 1: Newton’s method.
Let us first summarize what is known. Suppose the following conditions (C) hold:
and
where
Next, we present the celebrated Newton–Kantorovich theorem (NKT) [10].
Theorem 3.
Suppose the conditions (C) hold. Then, Newton’s method converges to a unique solution of equation in and
and
where and
Let us see what we obtain under our conditions. Suppose the following conditions (A) hold:
Set
and
where
Remark 4.
Notice that but Hence, U is used to define It is important to see that, in practice, the computation of the Lipschitz constant requires that of the center Lipschitz constant and that of the restricted Lipschitz constant ℓ as special cases. Hence, the conditions involving and ℓ are not additional to the one involving Moreover, they are also weaker. This is also verified in the numerical section. In other words, the condition involving implies the other two, but not necessarily vice versa.
Next, we present our extended version of the Newton–Kantorovich Theorem 3.
Theorem 4.
Suppose the conditions (A) hold. Then, Newton’s method converges to a unique solution of equation in and the following:
for each
Proof.
Simply choose and Then, (19) reduces to (31). In particular, we use the following estimates:
so by the Banach perturbation lemma on invertible linear operators [10] and the following:
Then, since
we obtain the following:
where we also used the following:
and
for each So, the sequence is majorizing for Then, the sequence is fundamental (i.e., Cauchy) in , which is a Banach space, so it converges to some which solves Equation (1), since
as Then, we conclude that since F is a continuous operator, where Let with Set Using the center Lipschitz condition, we have the following:
so follows since exists and □
Remark 5.
- (a)
- We have, by the definition of U, so and Hence, we have and
- (b)
- The proof in Theorem 3 used the less precise estimate as follows: Our modification leads to (31) instead of (30). Moreover, in [15] we showed Theorem 4 but using the following: where so Hence, our results extend the ones in [15] too.
- (c)
Comments similar to the ones given in the previous five remarks can be made for the methods that follow in this Section.
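To illustrate the point of Remark 5 (b) numerically, the sketch below compares the classical Kantorovich majorizing sequence with a refinement in the spirit of [15,27], in which the denominator employs the smaller center-Lipschitz constant $L_0 \le L$; the recursion and the constants shown are illustrative stand-ins rather than the exact sequences (30) and (31). Term by term, the refined sequence is never larger, which is the sense in which the new error bounds are tighter.

```python
def majorant(L_num, L_den, eta, n_steps):
    """t_0 = 0, t_1 = eta, t_{k+1} = t_k + L_num (t_k - t_{k-1})^2 / (2 (1 - L_den t_k))."""
    t = [0.0, eta]
    for _ in range(n_steps):
        t.append(t[-1] + L_num * (t[-1] - t[-2]) ** 2 / (2.0 * (1.0 - L_den * t[-1])))
    return t

# Illustrative constants (assumed, not the paper's), with L0 <= L.
L, L0, eta = 2.0, 1.2, 0.2
classical = majorant(L, L, eta, 6)    # Kantorovich majorant: L used everywhere
refined   = majorant(L, L0, eta, 6)   # refinement: center constant L0 in the denominator

for t_old, t_new in zip(classical, refined):
    print(f"{t_old:.6f}  {t_new:.6f}")  # the refined terms are never larger
```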
Case 2: Secant method [14]
Choose and
The nonzero parameters are again connected to the following:
for each
for each provided that
The standard condition used in connection with the secant method [14] is the following:
for each Then, we have again the following:
and
The old majorizing sequence [14] is defined by the following:
with the following estimates:
However, ours is as follows:
with corresponding estimates
which are tighter, where
The old sufficient convergence criterion [14] is but the new one is (for ) , which is weaker. Hence, we obtain the semi-local convergence of the secant method.
Theorem 5.
Under the preceding conditions, the secant method converges, and with
Proof.
As in Theorem 4, we obtain the following:
and
(see also [14]). □
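For illustration, a minimal scalar sketch of the secant iteration $x_{n+1} = x_n - [x_{n-1}, x_n; F]^{-1}F(x_n)$ follows, with the standard divided difference [14]; the test equation and starting points are merely illustrative.

```python
def secant(F, x_prev, x_curr, tol=1e-12, max_iter=50):
    """Scalar secant iteration x_{n+1} = x_n - [x_{n-1}, x_n; F]^{-1} F(x_n),
    with the divided difference [u, v; F] = (F(v) - F(u)) / (v - u)."""
    for n in range(max_iter):
        dd = (F(x_curr) - F(x_prev)) / (x_curr - x_prev)  # divided difference
        x_prev, x_curr = x_curr, x_curr - F(x_curr) / dd
        if abs(x_curr - x_prev) < tol:
            break
    return x_curr, n + 1

# Illustrative equation (not from the paper): x^3 - 10 = 0, root 10^(1/3).
root, iters = secant(lambda x: x ** 3 - 10.0, x_prev=2.0, x_curr=2.5)
print(root, iters)   # approx 2.1544..., convergence order about 1.618
```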
Case 3: Newton-type method [8,16]
Choose: and
The parameters are connected to the following:
and
Set
and
The conditions in [8,16] use the following:
and
We have the following:
so
and
The old majorizing sequence [8,16] is defined for by
with the following estimates:
However, ours is for
with the following estimates:
The old sufficient convergence criterion [8,16] is the following:
The new one is the following:
However, so again condition C is weaker than
Hence, we obtain the semilocal convergence of the Newton-type method.
Theorem 6.
Under the preceding conditions, the Newton-type method converges, and with
Proof.
It follows from the aforementioned estimates (see also [8,16]). Hence, again the results are extended.
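For illustration, the sketch below implements a Newton-type iteration $x_{n+1} = x_n - A(x_n)^{-1}F(x_n)$ with $A(x) \approx F'(x)$, here frozen at the starting point (the modified Newton choice), which is only one of the variants covered in [8,16]; the system and starting point are merely illustrative.

```python
import numpy as np

def newton_type(F, A, x0, tol=1e-12, max_iter=100):
    """Newton-type iteration x_{n+1} = x_n - A(x_n)^{-1} F(x_n), where A(x)
    approximates F'(x); with A frozen at x0 the convergence is linear."""
    x = np.asarray(x0, dtype=float)
    for n in range(max_iter):
        step = np.linalg.solve(A(x), F(x))
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x, n + 1

# Illustrative system (not from the paper): x^2 + y^2 = 1, x = y.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
x0 = np.array([1.0, 0.5])
A_frozen = lambda _: J(x0)        # A(x) = F'(x0) for every iterate

x_star, iters = newton_type(F, A_frozen, x0)
print(x_star, iters)              # same limit as Newton, more (linear) steps
```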
Similar benefits are derived in the local convergence case.
Suppose the conditions (B) hold:
is a simple solution of equation
and
where Then, we have the following local convergence result arrived at independently by Rheinboldt [17] and Traub [18]. □
Theorem 7.
Suppose that the conditions (B) hold. Then, Newton’s method converges to so that the following holds:
for each provided that
In our case, we consider the conditions (D):
is a simple solution of equation
Set
where
Theorem 8.
Suppose that the conditions (D) hold. Then, Newton’s method converges to so the following holds:
for each provided that where
Proof.
Choose in (11). Then, we obtain the following:
□
Remark 6.
We have again the following:
so
where and (see also the numerical section).
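Assuming the classical radius expressions (the Rheinboldt/Traub radius $2/(3L)$ for Theorem 7 and a center-Lipschitz-based radius of the standard form $2/(2L_0+L)$, taken here as an assumption), the comparison in Remark 6 is of the following type, with $L_0 \le L$:

```latex
% Assumed standard radius expressions; L_0 \le L denotes the center-Lipschitz constant.
r_{TR} = \frac{2}{3L}, \qquad r = \frac{2}{2L_0 + L}, \qquad
L_0 \le L \;\Longrightarrow\; 2L_0 + L \le 3L \;\Longrightarrow\; r \ge r_{TR}.
```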
The same benefits can be obtained for the other single-step methods. Moreover, our idea can similarly be extended to multi-step and multi-point methods [4,5,13,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37].
3. Numerical Experiments
We conduct some experiments showing that the old convergence criteria are not verified, but ours are. Hence, there is no assurance that the methods converge under the old conditions. However, under our approach, convergence can be established.
Example 1.
Define function as the following:
where are parameters. Then, clearly, for large and small , can be arbitrarily small. Notice that as too. So, the utilization of Newton's method is extended numerous (infinitely many) times under the data ().
Example 2.
Let and for . Define the function f on Ω as the following:
We consider Case 1 (Newton's method). Then, we obtain and However, then, for all So, the Newton–Kantorovich theorem cannot assure convergence. However, we have for all Hence, our result guarantees convergence to as long as
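A rough numeric check can be sketched as follows, assuming the classical choices $f(x) = x^3 - d$, $x_0 = 1$, and $\Omega = [d, 2-d]$ with $d \in (0, 1/2)$, which give $\eta = (1-d)/3$, $L = 2(2-d)$, and $L_0 = 3-d$; the precise new criterion (31) is not reproduced here, so a representative weaker criterion $(L_0 + L)\eta \le 1$ built from the center constant is used instead.

```python
# Rough numeric check under the assumptions stated above (not the paper's
# exact criterion): f(x) = x^3 - d, x0 = 1, Omega = [d, 2 - d].
for d in (0.40, 0.45, 0.48, 0.49):
    eta = (1.0 - d) / 3.0        # |f'(x0)^{-1} f(x0)|
    L   = 2.0 * (2.0 - d)        # Lipschitz constant of f'(x0)^{-1} f' on Omega
    L0  = 3.0 - d                # center-Lipschitz constant at x0
    print(f"d={d:.2f}  2*L*eta={2*L*eta:.3f}  (L0+L)*eta={(L0+L)*eta:.3f}")
# For d = 0.48 and 0.49 the Kantorovich quantity stays above 1 while the
# center-based quantity drops below 1.
```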
Example 3.
Let $X = Y$ be the space of functions which are continuous on , equipped with the max-norm. Choose . Define F on Ω by the following:
ξ is a number and K is the Green’s kernel given by the following:
Example 4.
Let and Ω be as in the Example 3. It is well known that the boundary value problem [2]
can be given as a Hammerstein-like nonlinear integral equation as follows:
where λ is a parameter. Then, define by the following:
Choose and Then, clearly since Suppose Then, conditions (C) are satisfied for the following:
and Notice that
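A minimal discretize-and-solve sketch for a Hammerstein-type model problem is given below; the forcing term, the nonlinearity $\lambda x^2 + x^3$, the value $\lambda = 1/2$, and the kernel $Q(s,t) = \min(s,t)(1-\max(s,t))$ are illustrative placeholders rather than the exact data of Example 4.

```python
import numpy as np

# Hammerstein-type model problem (illustrative data):
#   x(s) = s + \int_0^1 Q(s, t) [ lam * x(t)^2 + x(t)^3 ] dt,
# with Q(s, t) = min(s, t) * (1 - max(s, t)) and lam = 0.5.
m, lam = 41, 0.5
s = np.linspace(0.0, 1.0, m)
w = np.full(m, 1.0 / (m - 1)); w[0] = w[-1] = 0.5 / (m - 1)   # trapezoid weights
Q = np.minimum(s[:, None], s[None, :]) * (1.0 - np.maximum(s[:, None], s[None, :]))

def F(x):        # discretized operator equation F(x) = 0
    return x - s - Q @ (w * (lam * x**2 + x**3))

def J(x):        # its Jacobian (derivative of the discretization)
    return np.eye(m) - Q * (w * (2.0 * lam * x + 3.0 * x**2))[None, :]

x = s.copy()     # starting guess x0(s) = s
for _ in range(20):
    step = np.linalg.solve(J(x), F(x))
    x -= step
    if np.linalg.norm(step, np.inf) < 1e-12:
        break
print(np.linalg.norm(F(x), np.inf))   # residual of the computed discrete solution
```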
The rest of the examples are given for the local convergence study of Newton’s method.
Example 5.
Let and Define mapping E on Ω for as
Then, conditions (B) and (D) hold, provided that and since Notice that
and
Hence, our radius of convergence is larger.
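A back-of-the-envelope computation, assuming the classical data $E(x) = e^x - 1$, $x^* = 0$, and $\Omega$ the closed unit ball about $x^*$ (so that $L = e$ and $L_0 = e - 1$), together with the radius expressions assumed after Remark 6:

```python
import math

# Assumed data for Example 5: E(x) = exp(x) - 1, x* = 0, unit ball, so that
# the Lipschitz constant is L = e and the center-Lipschitz constant is L0 = e - 1.
L, L0 = math.e, math.e - 1.0

r_traub  = 2.0 / (3.0 * L)        # Rheinboldt/Traub radius (Theorem 7)
r_center = 2.0 / (2.0 * L0 + L)   # center-Lipschitz-based radius (assumed form)

print(f"r_TR = {r_traub:.4f},  r = {r_center:.4f}")   # 0.2453 < 0.3249
```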
Example 6.
Let and Ω be as in Example 3. Define F on Ω as
By this definition, we obtain the following:
for all So, we can choose However, then, we again obtain the following:
4. Conclusions
We have provided a single sufficient criterion for the semi-local convergence of single-step methods. Upon specializing the parameters involved, we showed that although our majorizing sequence is more general than the earlier ones, the convergence criteria are weaker (i.e., the utility of the methods is extended), the upper error estimates are tighter (i.e., no more iterates than before are required to achieve a predecided error tolerance), and the ball containing the solution is at most as large as before. These benefits are obtained without additional hypotheses. According to our new technique, we locate a more accurate domain containing the iterates than the earlier ones, leading to an at least as tight Lipschitz condition.
Our theoretical results are further justified using numerical experiments. In the future, we plan to extend these results by replacing the Lipschitz constants by generalized functions along the same lines [2,12,13].
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Conflicts of Interest
The author declares no conflict of interest.
References
- Argyros, I.K. Convergence and Applications of Newton-Type Iterations; Springer: Berlin, Germany, 2008. [Google Scholar]
- Ezquerro, J.A.; Hernandez, M.A. Newton’s Method: An Updated Approach of Kantorovich’s Theory; Springer: Cham, Switzerland, 2018. [Google Scholar]
- Shakhno, S.M.; Gnatyshyn, O.P. On an iterative algorithm of order 1.839... for solving nonlinear least squares problems. Appl. Math. Comput. 2005, 161, 253–264. [Google Scholar]
- Steffensen, J.F. Remarks on iteration. Skand. Aktuarietidskr. 1933, 16, 64–72. [Google Scholar] [CrossRef]
- Cătinaş, E. The inexact, inexact perturbed, and quasi-Newton methods are equivalent models. Math. Comp. 2005, 74, 291–301. [Google Scholar] [CrossRef]
- Dennis, J.E., Jr. On Newton-like methods. Numer. Math. 1968, 11, 324–330. [Google Scholar] [CrossRef]
- Nashed, M.Z.; Chen, X. Convergence of Newton-like methods for singular operator equations using outer inverses. Numer. Math. 1993, 66, 235–257. [Google Scholar] [CrossRef]
- Yamamoto, T. A convergence theorem for Newton-like methods in Banach spaces. Numer. Math. 1987, 51, 545–557. [Google Scholar] [CrossRef] [Green Version]
- Argyros, I.K. Computational Theory of Iterative Methods; Series: Studies in Computational Mathematics, 15; Chui, C.K., Wuytack, L., Eds.; Elsevier Publ. Co.: New York, NY, USA, 2007. [Google Scholar]
- Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Pergamon Press: Oxford, UK, 1982. [Google Scholar]
- Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; First published by Academic Press, New York and London, 1970; SIAM Publications: Philadelphia, PA, USA, 2000. [Google Scholar]
- Argyros, I.K.; Magréñan, A.A. Iterative Methods and Their Dynamics with Applications; CRC Press: New York, NY, USA, 2017. [Google Scholar]
- Argyros, I.K.; Magréñan, A.A. A Contemporary Study of Iterative Methods; Elsevier Academic Press: New York, NY, USA, 2018. [Google Scholar]
- Potra, F.A.; Pták, V. Nondiscrete Induction and Iterative Processes; Research Notes in Mathematics, 103; Pitman (Advanced Publishing Program): Boston, MA, USA, 1984. [Google Scholar]
- Argyros, I.K.; Hilout, S. Weaker conditions for the convergence of Newton’s method. J. Complex. 2012, 28, 364–387. [Google Scholar] [CrossRef] [Green Version]
- Chen, X.; Yamamoto, T. Convergence domains of certain iterative methods for solving nonlinear equations. Numer. Funct. Anal. Optim. 1989, 10, 37–48. [Google Scholar] [CrossRef]
- Rheinboldt, W.C. An Adaptive Continuation Process of Solving Systems of Nonlinear Equations; Banach Ctr. Publ. 3; Polish Academy of Science: Warsaw, Poland, 1978; pp. 129–142. [Google Scholar]
- Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
- Behl, R.; Maroju, P.; Martinez, E.; Singh, S. A study of the local convergence of a fifth order iterative method. Indian J. Pure Appl. Math. 2020, 51, 439–455. [Google Scholar]
- Deuflhard, P. Newton methods for nonlinear problems. In Affine Invariance and Adaptive Algorithms; Springer Series in Computational Mathematics, 35; Springer: Berlin, Germany, 2004. [Google Scholar]
- Grau-Sánchez, M.; Grau, À.; Noguera, M. Ostrowski type methods for solving systems of nonlinear equations. Appl. Math. Comput. 2011, 218, 2377–2385. [Google Scholar] [CrossRef]
- Gutiérrez, J.M.; Magreñán, Á.A.; Romero, N. On the semilocal convergence of Newton-Kantorovich method under center-Lipschitz conditions. Appl. Math. Comput. 2013, 221, 79–88. [Google Scholar] [CrossRef]
- Magréñan, A.A.; Argyros, I.K.; Rainer, J.J.; Sicilia, J.A. Ball convergence of a sixth-order Newton-like method based on means under weak conditions. J. Math. Chem. 2018, 56, 2117–2131. [Google Scholar] [CrossRef]
- Magréñan, A.A.; Gutiérrez, J.M. Real dynamics for damped Newton’s method applied to cubic polynomials. J. Comput. Appl. Math. 2015, 275, 527–538. [Google Scholar] [CrossRef]
- Soleymani, F.; Lotfi, T.; Bakhtiari, P. A multi-step class of iterative methods for nonlinear systems. Optim. Lett. 2014, 8, 1001–1015. [Google Scholar] [CrossRef]
- Argyros, I.K. On the Newton–Kantorovich hypothesis for solving equations. J. Comput. Appl. Math. 2004, 169, 315–332. [Google Scholar] [CrossRef] [Green Version]
- Argyros, I.K.; Hilout, S. On an improved convergence analysis of Newton’s method. Appl. Math. Comput. 2013, 225, 372–386. [Google Scholar] [CrossRef]
- Dennis, J.E., Jr.; Schnabel, R.B. Numerical Methods for Unconstrained Optimization and Nonlinear Equations; First published by Prentice-Hall, Englewood Cliffs, New Jersey, 1983; SIAM: Philadelphia, PA, USA, 1996. [Google Scholar]
- Deuflhard, P.; Heindl, G. Affine invariant convergence theorems for Newton’s method and extensions to related methods. SIAM J. Numer. Anal. 1979, 16, 1–10. [Google Scholar] [CrossRef]
- Ezquerro, J.A.; Gutiérrez, J.M.; Hernández, M.A.; Romero, N.; Rubio, M.J. The Newton method: From Newton to Kantorovich (Spanish). Gac. R. Soc. Mat. Esp. 2010, 13, 53–76. [Google Scholar]
- Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
- Proinov, P.D. General local convergence theory for a class of iterative processes and its applications to Newton’s method. J. Complex. 2009, 25, 38–62. [Google Scholar] [CrossRef] [Green Version]
- Proinov, P.D. New general convergence theory for iterative processes and its applications to Newton-Kantorovich type theorems. J. Complex. 2010, 26, 3–42. [Google Scholar] [CrossRef] [Green Version]
- Shakhno, S.M.; Iakymchuk, R.P.; Yarmola, H.P. Convergence analysis of a two step method for the nonlinear least squares problem with decomposition of operator. J. Numer. Appl. Math. 2018, 128, 82–95. [Google Scholar]
- Sharma, J.R.; Guha, R.K.; Sharma, R. An efficient fourth order weighted—Newton method for systems of nonlinear equations. Numer. Algorithms 2013, 62, 307–323. [Google Scholar] [CrossRef]
- Verma, R. New Trends in Fractional Programming; Nova Science Publisher: New York, NY, USA, 2019. [Google Scholar]
- Zabrejko, P.P.; Nguen, D.F. The majorant method in the theory of Newton-Kantorovich approximations and the Pták error estimates. Numer. Funct. Anal. Optim. 1987, 9, 671–684. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).