Unified Convergence Criteria for Iterative Banach Space Valued Methods with Applications

Abstract: A plethora of sufficient convergence criteria has been provided for single-step iterative methods used to solve Banach space valued operator equations. However, an interesting question remains unanswered: is it possible to provide unified convergence criteria for single-step iterative methods that are weaker than earlier ones, without additional hypotheses? The answer is yes. In particular, we provide only one sufficient convergence criterion suitable for single-step methods. Moreover, we also give a finer convergence analysis. Numerical experiments involving boundary value problems and Hammerstein-like integral equations complete this paper.


Introduction
Numerous applications from mathematics, economics, engineering, physics, chemistry, biology, and medicine, to mention a few, can be modeled as the equation

F(x) = 0, (1)

with an operator F : Ω ⊆ T_1 → T_2 acting between Banach spaces T_1 and T_2, where Ω is nonempty. That is why determining a solution, denoted by x*, of Equation (1) is of extreme importance. However, this task is difficult in general. Ideally, one desires x* in closed form, but this is accomplished only in some instances. Practitioners and researchers therefore resort mostly to iterative methods, which generate a sequence approximating x* under certain conditions on the initial data. The most popular single-step methods are as follows:

Newton's [1,2]
x_{m+1} = x_m − F′(x_m)^{−1} F(x_m). (2)

Secant [3]
x_{m+1} = x_m − [x_m, x_{m−1}; F]^{−1} F(x_m), (3)

where [·, ·; F] : Ω × Ω → L(T_1, T_2) is a divided difference of order one.

Steffensen-like [4]
x_{m+1} = x_m − [x_m + λ_1 F(x_m), x_m + λ_2 F(x_m); F]^{−1} F(x_m), (4)

for T_1 = T_2 and λ_1, λ_2 being parameters.

Newton-type [5–8]
x_{m+1} = x_m − A_m^{−1} F(x_m), (5)

where A_m = A(x_m), A : Ω → L(T_1, T_2).

Stirling's [9]
x_{m+1} = x_m − G′(x_m)^{−1} G(x_m),

where T_1 = T_2 and G(y) = y − F(y) is used to find fixed points of the equation x = G(x).
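As a concrete illustration of the single-step pattern shared by these methods, here is a minimal Python sketch of Newton's method (2) in the scalar case; the test equation F(x) = x³ − 8 and the tolerances are our own choices, not taken from the paper.

```python
# Minimal sketch (toy example, not from the paper): Newton's method (2)
# for a scalar equation F(x) = 0, here F(x) = x**3 - 8 with solution x* = 2.
# Tolerance and iteration cap are arbitrary choices.

def newton(F, dF, x0, tol=1e-12, max_iter=50):
    """Newton iteration: x_{m+1} = x_m - F'(x_m)^{-1} F(x_m)."""
    x = x0
    for _ in range(max_iter):
        step = F(x) / dF(x)   # F'(x_m)^{-1} F(x_m) in the scalar case
        x -= step
        if abs(step) < tol:
            break
    return x

root = newton(lambda x: x ** 3 - 8.0, lambda x: 3.0 * x ** 2, x0=3.0)
```

In a Banach space setting, the division becomes the solution of the linear system F′(x_m) d = F(x_m); the structure of the iteration is unchanged.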
The following common questions (Q) arise in the semilocal study of these methods:

Q1 Can the convergence region be extended, since it is small in general?
Q2 Can the estimates on ‖x_m − x*‖ and ‖x_{m+1} − x_m‖ become tighter? Otherwise, we compute more iterates than we should to reach a predecided error tolerance.
Q3 Can the convergence criteria be weakened?
Q4 Can the location of the solution be more precise?
Q5 Is there a uniform way of studying single-step methods?
Q6 Are there uniform convergence criteria for single-step methods?
The novelty of our paper is that we answer positively to all these questions (Q), without additional conditions.
In order to deal with single-step methods, we first consider a general iteration (9), defined by a function ψ related to the initial data. The task of choosing ψ so that the sequence {t_m} is majorizing for all of the methods listed previously is very difficult in general.
We define a special case (10) of the sequences given by (9), in which the parameters a_i, i = 1, . . . , 6, are nonnegative. We shall show that all majorizing sequences used to study the preceding methods are specializations of {t_m} given by (10). Similarly, in the case of local convergence, we show that all of the preceding methods can be studied using an estimate of the form ‖x_{m+1} − x*‖ ≤ λ_m ‖x_m − x*‖, where c_1, c_2, c_3, d_1, d_2, d_3 are nonnegative parameters, e_m = ‖x_m − x*‖ and

λ_m = (c_1 e_m + c_2 e_{m−1} + c_3) / (1 − (d_1 e_m + d_2 e_{m−1} + d_3)).

We suppose from now on that {t_m} is a majorizing sequence for {x_m}. Recall that an increasing real sequence {t_m} is majorizing for a sequence {x_m} in a Banach space T_1 if ‖x_{m+1} − x_m‖ ≤ t_{m+1} − t_m for each m = 0, 1, 2, . . . [11]. Additional conditions are needed to show that F(ρ) = 0, where ρ := lim_{m→∞} x_m.
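For readers less familiar with majorizing sequences, the classical Kantorovich sequence for Newton's method is a well-known concrete instance of this idea (a special case, not the general form (10)); the constants L and η below are illustrative values chosen by us.

```python
# Classical Kantorovich majorizing sequence for Newton's method, shown
# only as an illustration of the majorizing-sequence concept:
#   t_0 = 0, t_1 = eta,
#   t_{m+1} = t_m + L*(t_m - t_{m-1})**2 / (2*(1 - L*t_m)),
# which is increasing and convergent when h = L*eta <= 1/2, with limit
# t* = (1 - sqrt(1 - 2*h)) / L.

def kantorovich_majorizer(L, eta, n_terms=30):
    t_prev, t = 0.0, eta
    seq = [t_prev, t]
    for _ in range(n_terms - 2):
        t_next = t + L * (t - t_prev) ** 2 / (2.0 * (1.0 - L * t))
        t_prev, t = t, t_next
        seq.append(t)
    return seq

seq = kantorovich_majorizer(L=1.0, eta=0.4)  # h = 0.4 <= 1/2, so {t_m} converges
```

Since ‖x_{m+1} − x_m‖ ≤ t_{m+1} − t_m, the convergence of {t_m} forces {x_m} to be a Cauchy sequence, which is exactly how majorizing sequences are used in the semilocal analysis.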
The paper also contains the semi-local as well as the local convergence analysis based on sequence (10) in Section 2. The numerical experiments can be found in Section 3. Conclusions appear in Section 4.

Majorizing Sequences and Convergence Analysis
In this section, we use the majorizing sequence (10) to deal first with the semi-local convergence analysis of the sequence {x_m}.
We provide very general sufficient criteria for the convergence of sequence (10).

Remark 1. Condition (12) is the general sufficient convergence criterion used in Theorem 1.
It is convenient for the convergence analysis that follows to develop real functions, parameters, and sequences. Define functions f_i on the interval [0, 1), for µ = t_2 − t_1, together with the associated sequence. Suppose that the equations

h(t) = 0 (17)

and

g(t) = 0 (18)

have minimal solutions δ and λ, respectively, in the interval (0, 1), satisfying (19). Indeed, by the definition of the sequence { f_i } and the function h, we obtain in turn, by adding and subtracting f_i(t) (in the definition of f_{i+1}(t)), the relation (20). In particular, by the definition of δ and (20), we obtain (21), since h(δ) = 0.

Remark 2.
Functions h and g appear in the proof of Theorem 1. The former is related to two consecutive functions f_i and f_{i+1} (see (20)). Then, (21) is true if (17) holds for t = δ. The latter relates to the limit of these recurrent functions f_i and is independent of i. This function g should then satisfy (27), which happens if (18) holds. The condition 0 ≤ δ_1 ≤ δ (see (19)) is needed to show that (22) holds for i = 1. Next, we show the convergence of the sequence {t_m} under conditions (17)–(19).

Theorem 2. Under conditions (17)–(19), the conclusions of Theorem 1 hold for the sequence {t_k}, but with t** replaced by s = t_1 + µ/(1 − δ).
Proof. We shall show (22) by induction. Item (22) holds for i = 1 by (19). Then, the definition of the sequence {t_i} and (22) give (23) and (24). Suppose now that (22) holds for some i. Evidently, item (25) then holds, by (23) and (24) (since δ^{i+1} ≥ 0), provided that (27) does, by the definition of f_i, (20) and (21). In view of (21), instead of (27) it suffices to verify an estimate which is true by the definition of λ and (19). Hence, the induction for (22) is completed, and items (23) and (24) hold. Consequently, the sequence {t_k} converges to t*.
Remark 3. The conditions of Theorem 2 imply condition (12) of Theorem 1 but not necessarily vice versa.
Next, we specialize ā_i, b̄_i, a_i, b_i in some interesting cases, justifying the advantages already stated.

Case 1: Newton's method. Let us first abbreviate what is known. Suppose that the following conditions (C) hold:

F′(x_0)^{−1} ∈ L(T_2, T_1), ‖F′(x_0)^{−1}F(x_0)‖ ≤ η,
‖F′(x_0)^{−1}(F′(v) − F′(w))‖ ≤ ℓ_1‖v − w‖ for each v, w ∈ Ω,

and the Kantorovich criterion (30). Next, we present the celebrated Newton–Kantorovich theorem (NKT) [10].
where u_0 = η and {u_m} denotes the corresponding majorizing sequence. Let us see what we obtain under our conditions. Suppose that the following conditions (A) hold: the center Lipschitz condition

‖F′(x_0)^{−1}(F′(v) − F′(x_0))‖ ≤ ℓ_0‖v − x_0‖ for each v ∈ Ω,

the restricted Lipschitz condition

‖F′(x_0)^{−1}(F′(v) − F′(w))‖ ≤ ℓ‖v − w‖ for each v, w ∈ Ω_0,

where Ω_0 is the restricted region determined by ℓ_0 and the ball U, together with the new criterion (31).

Remark 4.
Notice that ℓ = ℓ(Ω, ℓ_0) but ℓ_1 = ℓ_1(Ω). Hence, the ball U is used to define ℓ. It is important to see that, in practice, the computation of the Lipschitz constant ℓ_1 requires that of the center Lipschitz constant ℓ_0 and that of the restricted Lipschitz constant ℓ as special cases. Hence, the conditions involving ℓ_0 and ℓ are not additional to the one involving ℓ_1. Moreover, they are also weaker. This is also verified in the numerical section. In other words, the condition involving ℓ_1 implies the other two, but not necessarily vice versa.
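The point that the center Lipschitz constant can be strictly smaller than the full Lipschitz constant is easy to check numerically. The sketch below uses a toy problem of our own choosing (not one of the paper's examples): F(x) = x³ on Ω = [0.5, 1.5] with x_0 = 1, where analytically the center constant is 2.5 and the full constant is 3.

```python
# Toy illustration (our own example): for F(x) = x**3 on Omega = [0.5, 1.5]
# with x0 = 1, estimate the center Lipschitz constant l0 (deviations of F'
# from F'(x0) only) and the full Lipschitz constant l1 (all pairs), both
# scaled by F'(x0)^{-1}.  Analytically l0 = max|x + 1| = 2.5 and
# l1 = max|x + y| = 3, so l0 < l1.

def lipschitz_constants(n=201, lo=0.5, hi=1.5, x0=1.0):
    xs = [lo + i * (hi - lo) / (n - 1) for i in range(n)]
    dF = lambda x: 3.0 * x * x
    inv = 1.0 / dF(x0)                  # F'(x0)^{-1} as a scalar
    l0 = max(abs(inv * (dF(x) - dF(x0))) / abs(x - x0)
             for x in xs if x != x0)
    l1 = max(abs(inv * (dF(x) - dF(y))) / abs(x - y)
             for x in xs for y in xs if x != y)
    return l0, l1

l0, l1 = lipschitz_constants()
```

The sampled value of l1 approaches 3 only in the limit of a fine grid, but l0 < l1 is already visible on a coarse one; this is exactly the gap the weaker criteria exploit.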

Proof. Simply choose the parameters of sequence (10) in terms of ℓ_0, ℓ, and η. Then, (19) reduces to (31). In particular, we use the estimate

‖F′(x_m)^{−1}F′(x_0)‖ ≤ 1 / (1 − ℓ_0‖x_m − x_0‖),

which follows from the Banach perturbation lemma on invertible linear operators [10] together with the center Lipschitz condition.

Remark 5. (a) By the definition of the majorizing sequences, we have t_m ≤ u_m for each m, and hence t* ≤ u*.
(b) Our modification leads to (31) instead of (30). Moreover, in [15] we showed Theorem 4 but under a stronger condition; hence, our results extend the ones in [15] too.
(c) Let us see how the parameters δ_1, δ, λ and the functions h, g look in the case of Newton's method.
Comments similar to the ones given in the previous five remarks can be made for the methods that follow in this Section.

Case 2: Secant method [14]. Choose the parameters of sequence (10) accordingly; the nonzero parameters are again connected to the initial data. The standard condition used in connection with the secant method [14] is

‖[v_1, v_2; F] − [z, w; F]‖ ≤ ℓ_1(‖v_1 − z‖ + ‖v_2 − w‖) for each v_1, v_2, z, w ∈ Ω.

Then, we have again ℓ ≤ ℓ_1 and ℓ_0 ≤ ℓ_1. The old majorizing sequence {u_n} [14] is defined using ℓ_1, together with the corresponding estimates; ours uses ℓ̃_k instead, where ℓ̃_k = ℓ_0 for k = 0 and ℓ̃_k = ℓ for k = 1, 2, . . ., and the corresponding estimates are tighter. The old sufficient convergence criterion [14] is β + 2ℓ_1η ≤ 1, but the new one is (for ℓ_0 = ℓ) β + 2ℓη ≤ 1, which is weaker. Hence, we obtain the semi-local convergence of the secant method.
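For concreteness, a minimal scalar sketch of the secant method (3) and its divided difference follows; the test equation F(x) = x² − 2 and the tolerances are our own choices.

```python
# Minimal sketch (toy example, not from the paper): the secant method
#   x_{m+1} = x_m - [x_m, x_{m-1}; F]^{-1} F(x_m),
# where the divided difference in the scalar case is
#   [u, v; F] = (F(u) - F(v)) / (u - v).

def secant(F, x_prev, x0, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        dd = (F(x0) - F(x_prev)) / (x0 - x_prev)  # divided difference
        if dd == 0.0:
            break                                  # degenerate step guard
        x_next = x0 - F(x0) / dd
        x_prev, x0 = x0, x_next
        if abs(x0 - x_prev) < tol:
            break
    return x0

root = secant(lambda x: x * x - 2.0, x_prev=1.0, x0=2.0)
```

Note that two starting points x_{−1}, x_0 are needed, which is why the criterion above involves both β (related to ‖x_0 − x_{−1}‖) and η.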

Theorem 5. Under the preceding conditions, the conclusions of Theorem 4 hold for the secant method.
Proof. As in Theorem 4 (see also [14]).

Case 3: Newton-type method [8,16]
The parameters are connected to the initial data, and the conditions in [8,16] use the corresponding Lipschitz constants; we have ℓ_5 ≤ ℓ_8 and ℓ_6 ≤ ℓ_9. The old majorizing sequence {u_n} [8,16] comes with its corresponding estimates.

However, our majorizing sequence is defined with the tighter constants and the correspondingly tighter estimates. The old sufficient convergence criterion [8,16] is condition C_1, involving σ_1; the new one is condition C, involving σ. However, σ ≤ σ_1, so again condition C is weaker than C_1. Hence, we obtain the semilocal convergence of the Newton-type method.
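One popular instance of the Newton-type iteration (5) is the modified Newton method with the frozen choice A_m = F′(x_0) (our choice of specialization here, used purely for illustration); it trades quadratic for linear convergence but evaluates the derivative only once.

```python
# Sketch of a Newton-type iteration (5) with the frozen choice
# A_m = F'(x_0) for all m (modified Newton).  Convergence is linear
# rather than quadratic, hence the larger iteration cap.  Toy scalar
# example of our own choosing.

def frozen_newton(F, dF, x0, tol=1e-12, max_iter=200):
    A = dF(x0)                  # A_m = A(x_0) fixed for all m
    x = x0
    for _ in range(max_iter):
        step = F(x) / A
        x -= step
        if abs(step) < tol:
            break
    return x

root = frozen_newton(lambda x: x ** 3 - 8.0, lambda x: 3.0 * x ** 2, x0=3.0)
```

The contraction factor near x* = 2 is |1 − F′(2)/F′(3)| = 1 − 12/27 ≈ 0.56, so roughly 45 iterations are needed for twelve digits, versus a handful for full Newton.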

Theorem 6. Under the preceding conditions, the conclusions of Theorem 4 hold for the Newton-type method.
Proof. It follows from the aforementioned estimates (see also [8,16]). Hence, again the results are extended. Similar benefits are derived in the local convergence case. Suppose that the conditions (B) hold. Then, we have the following local convergence result, arrived at independently by Rheinboldt [17] and Traub [18].
In our case, we consider the conditions (D), under which x* ∈ Ω is a simple solution of the equation F(x) = 0.

Numerical Experiments
We conduct some experiments showing that the old convergence criteria are not verified, but ours are. Hence, there is no assurance that the methods converge under the old conditions, whereas under our approach convergence can be established.
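In the same spirit as the Hammerstein-like experiments, the following self-contained sketch discretizes a toy Hammerstein integral equation of our own choosing (kernel s·t and coefficient 1/2; this is not the paper's test problem) with the trapezoidal rule and solves the resulting nonlinear system by Newton's method.

```python
# Toy Hammerstein-type equation (our own example, not from the paper):
#   x(s) = 1 + 0.5 * int_0^1 s * t * x(t)**2 dt,
# discretized with the trapezoidal rule on n nodes and solved by
# Newton's method on the nodal values.  The continuous solution is
# x(s) = 1 + c*s with 3c**2 - 16c + 6 = 0, c = (16 - sqrt(184))/6.

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def solve_hammerstein(n=21, tol=1e-12, max_iter=50):
    h = 1.0 / (n - 1)
    s = [i * h for i in range(n)]
    w = [h / 2 if i in (0, n - 1) else h for i in range(n)]  # trapezoid weights
    x = [1.0] * n                                  # initial guess x(s) = 1
    for _ in range(max_iter):
        integral = sum(w[j] * s[j] * x[j] ** 2 for j in range(n))
        F = [x[i] - 1.0 - 0.5 * s[i] * integral for i in range(n)]
        # Jacobian entries: dF_i/dx_j = delta_ij - s_i * w_j * s_j * x_j
        J = [[(1.0 if i == j else 0.0) - s[i] * w[j] * s[j] * x[j]
              for j in range(n)] for i in range(n)]
        d = gauss_solve(J, F)
        x = [x[i] - d[i] for i in range(n)]
        if max(abs(v) for v in d) < tol:
            break
    return s, x

s_nodes, x_vals = solve_hammerstein()
```

The discretization error of the trapezoidal rule is O(h²), so the computed nodal values agree with 1 + c·s to roughly three digits on 21 nodes; refining the grid improves this in the usual way.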
The rest of the examples are given for the local convergence study of Newton's method; in each case, our radius of convergence is larger.

Conclusions
We have provided a single sufficient criterion for the semi-local convergence of single-step methods. Upon specializing the parameters involved, we showed that although our majorizing sequence is more general than earlier ones, the convergence criteria are weaker (i.e., the utility of the methods is extended), the upper error estimates are tighter (i.e., at most as many iterates are required to achieve a predecided error tolerance), and the ball containing the solution is at most as large. These benefits are obtained without additional hypotheses. According to our new technique, we locate a more accurate domain than the earlier ones containing the iterates, leading to a Lipschitz condition with an at least as small constant.
Our theoretical results are further justified using numerical experiments. In the future, we plan to extend these results by replacing the Lipschitz constants by generalized functions along the same lines [2,12,13].