Abstract
The aim of this article is to present a unified semi-local convergence analysis for k-step iterative methods containing the inverse of a flexible and frozen linear operator for Banach space valued operators. Special choices of the linear operator reduce the method to Newton-type, Newton's, Stirling's, Steffensen's, or other methods. The analysis is based on center- as well as Lipschitz-type conditions and on our idea of the restricted convergence region. This idea yields a region containing the iterates that is at least as small as the one used before, and consequently a tighter convergence analysis.
MSC Subject Classification:
65G99; 65H10; 47H17; 49M15
1. Introduction
Let $X$ and $Y$ be Banach spaces and $\Omega \subseteq X$ be a nonempty open set. By $\mathcal{L}(X, Y)$ we denote the space of bounded linear operators from $X$ into $Y$. Let also $U(x, \rho)$ stand for the open ball centered at $x \in X$ and of radius $\rho > 0$, and $\bar{U}(x, \rho)$ stand for its closure.
There is a plethora of problems from diverse disciplines, such as mathematics [1,2,3,4,5,6,7,8,9,10,11,12,13], optimization [3,4,5,6,7,8], mathematical programming [7,8], chemistry [7], biology [1,2,12], physics [9,13], economics [8], statistics [13], engineering [1,2,9,10,11,12,13], and others, that can be reduced to finding a solution $x^*$ of the equation:

$$F(x) = 0, \qquad (1)$$
where $F : \Omega \subseteq X \to Y$ is a continuous operator. Ideally, the solution $x^*$ of Equation (1) should be unique in a neighborhood about it and available in closed form. However, the latter can be achieved only in special cases. This difficulty leads researchers to the construction of iterative methods that generate a sequence $\{x_n\}$ converging to $x^*$.
The most widely-used iterative method is Newton's, defined for each $n = 0, 1, 2, \ldots$ by:

$$x_{n+1} = x_n - F'(x_n)^{-1} F(x_n), \qquad (2)$$

where $x_0 \in \Omega$ is an initial point.
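For a scalar equation, iteration (2) can be sketched as follows. This is only a minimal illustration, not the Banach space setting of the article; the test function, starting point, and tolerances are chosen purely for the example:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)   # scalar analogue of F'(x_n)^{-1} F(x_n)
        x -= step
        if abs(step) < tol:       # stop once the correction is negligible
            break
    return x

# Example: solve x^2 - 2 = 0 starting from x0 = 1
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
print(root)  # approximately sqrt(2)
```

The stopping rule on the size of the correction, rather than on $|f(x)|$, mirrors the distance estimates used in semi-local convergence analyses.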
Newton's method is a special case of one-point iterative methods without memory, defined for each $n = 0, 1, 2, \ldots$ by:

$$x_{n+1} = \varphi(x_n), \qquad (3)$$
where the iteration function has suitable smoothness properties. The order of convergence depends explicitly on the derivatives of the functions appearing in the method. Moreover, the computational cost increases in general, especially as the convergence order increases, since successive derivatives must be computed [1,2,3,4,5,6,7,8,9,10,11,12,13].
That is why researchers and practitioners have developed iterative methods that, on the one hand, avoid the computation of derivatives and, on the other hand, achieve a high order of convergence. In particular, we unify the study of such methods by considering k-step iterative methods with a frozen linear operator, defined for each $n = 0, 1, 2, \ldots$ by:

$$x_n^{(0)} = x_n, \quad x_n^{(j)} = x_n^{(j-1)} - A(x_n)^{-1} F\big(x_n^{(j-1)}\big), \quad j = 1, \ldots, k, \quad x_{n+1} = x_n^{(k)}, \qquad (4)$$
where $x_0 \in \Omega$ is an initial point and $A(x) \in \mathcal{L}(X, Y)$ for each $x \in \Omega$. Special choices of the operator A lead to well-known methods. If $k = 1$ and $A(x) = F'(x)$ for each $x \in \Omega$, we obtain Newton's method (2), whereas, if $k > 1$ and $A(x) = F'(x)$ for each $x \in \Omega$, we obtain a method whose semi-local convergence was given in [12]. If A is a suitable divided-difference operator, we obtain Steffensen-type methods. Stirling's and other one-point methods are also special cases of Method (4). Based on the above, it is important to study the semi-local convergence analysis of Method (4). It is well known that, as the convergence order increases, the convergence region decreases in general. To avoid this problem as well, we introduce a center-Lipschitz-type condition that helps us determine a region, at least as small as the one used before, containing the iterates $\{x_n\}$. This way, the resulting Lipschitz constants are at least as small, and a tighter convergence analysis is obtained.
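The frozen structure of Method (4) — one evaluation of the linear operator per outer step, reused for all k inner corrections — can be illustrated in the scalar case. This is a sketch only; the choice of A, the test function, and all parameters are illustrative assumptions, not part of the article's analysis:

```python
def k_step_frozen(f, A, x0, k=3, tol=1e-12, max_outer=50):
    """One outer step evaluates A(x_n) once ("frozen") and reuses it
    for all k inner corrections, mimicking Method (4) in the scalar case."""
    x = x0
    for _ in range(max_outer):
        a = A(x)                  # flexible linear operator, frozen at x_n
        y = x
        for _ in range(k):        # k inner steps with the same a
            y = y - f(y) / a
        if abs(y - x) < tol:
            return y
        x = y
    return x

# A(x) = f'(x) gives a frozen Newton-type method; a divided difference
# such as (f(x + f(x)) - f(x)) / f(x) would give a derivative-free,
# Steffensen-type variant.
root = k_step_frozen(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0, k=2)
print(root)
```

Choosing k = 1 with A = f' reduces this to Newton's method (2); in the Banach space setting, freezing A means a single factorization of the linear operator is amortized over all k inner steps.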
2. Convergence Conditions
We shall assume that for some and The semi-local convergence analysis of Method (4) is based on Condition (A) (see also the Conclusion Section 4):
- (a1)
- is a differentiable operator in the sense of Fréchet, , and there exists such that
- (a2)
- There exist such that for each :Set
- (a3)
- There exist such that for each :
- (a4)
- There exist and such that and for each :
- (a5)
- There exists such that . Set
From now on, we assume Condition (A).
3. Semi-Local Convergence
We need some auxiliary results to show the semi-local convergence of Method (4).
Lemma 1.
Proof.
We have that are well defined for each Using (5), (a1) and (a2), we have in turn that:
Let and We assume from now on that the previous hypotheses are satisfied. Let and Then, we have:
Set:
Similarly, we can write:
so:
and:
Hence, we arrive at:
Lemma 2.
The following assertions hold for
and:
Proof.
We have that for each since and:
It follows that for belong in Define:
Next, we study Method (4) for in an analogous way to It follows from Lemma 1 that and:
Hence, with , is well defined,
so:
where:
Define as previously,
Then, we have again that:
and:
Next, we continue for By Lemma 1, and:
Notice that Then, for and since we get as in (14):
where Then, as before, we can write:
so for We are motivated by the preceding items to define recurrent relations:
Hence, we arrive at:
Lemma 3.
Suppose that the hypotheses of Lemma 1 hold. Then, for each
Proof.
As in the cases we get for each
and for
That is, we obtain:
and:
□
Define function on the interval by:
We have that and It then follows from the intermediate value theorem that equation has at least one solution in Denote by s the smallest such solution. Notice that for:
a simple inductive argument shows that:
and:
Hence, we arrive at:
Lemma 4.
Suppose that (20) holds. Then, sequences and are decreasing.
Proof.
It follows immediately from (19). □
Then, we can show:
Theorem 1.
Suppose Condition (A) is satisfied and for each fixed number of steps equation:
has at least one positive solution. Denote by r the smallest such solution. Moreover, suppose that (20) is satisfied and $\bar{U}(x_0, r) \subseteq \Omega$. Then, the sequence $\{x_n\}$ generated by Method (4) is well defined, remains in $\bar{U}(x_0, r)$ for each $n = 0, 1, 2, \ldots$, and converges to a solution $x^*$ of the equation $F(x) = 0$. The solution is unique in
Proof.
It follows from the previous results that the iterates belong in $\bar{U}(x_0, r)$. We must show that the sequence $\{x_n\}$ is complete:
so $\{x_n\}$ is a complete sequence in the Banach space X and, as such, it converges to some $x^* \in \bar{U}(x_0, r)$, since $\bar{U}(x_0, r)$ is a closed set. Moreover, we have:
so Furthermore, to show the uniqueness part, let with Set By (a4) and (a6), we get in turn that:
by (27), so Then, from the identity we conclude that
Remark 1.
As noted in the Introduction, even if specialized to Theorem 1 can give better results, since As an example, consider the uniqueness result in [12], where:
but for
4. Conclusions
We presented a semi-local convergence analysis for a k-step iterative method with a flexible and frozen linear operator. The results obtained in this article reduce to the ones given in [1,2,12], if we choose $A(x) = F'(x)$ for each $x \in \Omega$. Moreover, in this special case, our results have the following advantages over these works:
- (1) Larger convergence region, leading to more initial points;
- (2) Tighter upper bound estimates on the distances involved, which means that fewer iterations are needed to arrive at a desired error tolerance;
- (3) The information on the location of the solution is at least as precise.
These advantages are obtained since we locate a ball inside the old ball containing the iterates. The Lipschitz constants then depend on the smaller ball, which is why they are at least as small as the old ones. It is also worth noticing that these advantages come at no extra cost, because the new constants are special cases of the old ones. That is, no additional effort is required to compute them. A plethora of numerical examples where the new constants are strictly smaller than the old ones can be found in [3,4,5,6,7,8]. Finally, other choices of the operator A lead to methods not studied before.
Author Contributions
Conceptualization, I.K.A.; Editing, S.G.; Data Curation, S.G.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Amat, S.; Busquier, S.; Plaza, S. On two families of high order Newton type methods. Appl. Math. Lett. 2012, 25, 2209–2217. [Google Scholar] [CrossRef]
- Amat, S.; Argyros, I.K.; Busquier, S.; Hernandez, M.A. On two high-order families of frozen Newton-type methods. Numer. Linear Algebra Appl. 2018, 25, e2126. [Google Scholar] [CrossRef]
- Argyros, I.K.; Ezquerro, J.A.; Gutierrez, J.M.; Hernandez, M.A.; Hilout, S. On the semi-local convergence of efficient Chebyshev-Secant-type methods. J. Comput. Appl. Math. 2011, 235, 3195–3206. [Google Scholar] [CrossRef]
- Argyros, I.K.; Hilout, S. Weaker conditions for the convergence of Newton’s method. J. Complex. 2012, 28, 364–387. [Google Scholar] [CrossRef]
- Argyros, I.K.; Magréñan, A.A. A Contemporary Study of Iterative Methods; Elsevier (Academic Press): New York, NY, USA, 2018. [Google Scholar]
- Argyros, I.K.; Magreñán, A.A. Iterative Methods and Their Dynamics with Applications; CRC Press: New York, NY, USA, 2017. [Google Scholar]
- Argyros, I.K.; George, S.; Thapa, N. Mathematical Modeling for the Solution of Equations and Systems of Equations with Applications; Nova Science Publishers: New York, NY, USA, 2018; Volume I. [Google Scholar]
- Argyros, I.K.; George, S.; Thapa, N. Mathematical Modeling for the Solution of Equations and Systems of Equations with Applications; Nova Science Publishers: New York, NY, USA, 2018; Volume II. [Google Scholar]
- Behl, R.; Cordero, A.; Motsa, S.S.; Torregrosa, J.R. An eighth-order family of optimal multiple root finders and its dynamics. Numer. Algorithms 2018, 77, 1249–1272. [Google Scholar] [CrossRef]
- Cordero, A.; Hueso, J.L.; Martinez, E.; Torregrosa, J.R. Generating optimal derivative free iterative methods for nonlinear equations by using polynomial interpolation. Math. Comput. Mod. 2013, 57, 1950–1956. [Google Scholar] [CrossRef]
- Kantorovich, L.V.; Akilov, G.P. Functional Analysis in Normed Spaces; Pergamon Press: New York, NY, USA, 1982. [Google Scholar]
- Hernandez, M.A.; Martinez, E.; Tervel, C. Semi-local convergence of a k-step iterative process and its application for solving a special kind of conservative problems. Numer. Algorithms 2017, 76, 309–331. [Google Scholar]
- Sharma, J.R.; Guha, R.K.; Sharma, R. An efficient fourth order weighted Newton method for systems of nonlinear equations. Numer. Algorithms 2013, 62, 307–323. [Google Scholar] [CrossRef]
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).