Extended Kung–Traub Methods for Solving Equations with Applications

Kung and Traub (1974) proposed an iterative method for solving equations defined on the real line. Convergence order four was shown using Taylor expansions, requiring the existence of the fifth derivative, which does not appear in the method. These hypotheses limit its applicability to functions that are at least five times differentiable, although the method may converge even when they fail. As far as we know, no semi-local convergence analysis has been given in this setting. Our goal is to extend the applicability of this method in both the local and the semi-local convergence case, and in the more general setting of Banach space valued operators. Moreover, we use our idea of recurrent functions together with conditions only on the first derivative and the divided difference, which do appear in the method. This idea can be used to extend other high-order multipoint and multistep methods. Numerical experiments testing the convergence criteria complement this study.


Introduction
We consider approximating a solution x* of the equation

F(x) = 0, (1)

where F : Ω ⊂ V_1 −→ V_2 is an operator acting between Banach spaces V_1 and V_2 with Ω ≠ ∅. Kung and Traub, in [1], introduced a fourth-order iterative method for solving nonlinear equations on the real line. This method in Banach space is defined for n = 0, 1, 2, . . . by

y_n = x_n − F'(x_n)^{−1} F(x_n),
x_{n+1} = y_n − [y_n, x_n; F]^{−1} F'(x_n) [y_n, x_n; F]^{−1} F(y_n). (2)

Here [·, ·; F] : Ω × Ω −→ L(V_1, V_2) is a divided difference of order one [2]. The convergence order was obtained using Taylor expansions and hypotheses on the derivatives of F of order up to five. Note that the method itself involves only the first derivative and a divided difference, so the assumptions on the fifth derivative reduce the applicability of the method [1,3–5].
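For the real-line case, the two substeps of method (2) can be sketched as follows. This is a hypothetical scalar illustration, not the paper's Banach space setting: the test equation x³ − 8 = 0 and all parameter names are ours, and for scalars the divided difference of order one reduces to (F(y) − F(x))/(y − x), so the operator products in the second substep commute.

```python
def divided_difference(F, y, x):
    """Scalar divided difference of order one, [y, x; F]."""
    return (F(y) - F(x)) / (y - x)

def kung_traub(F, dF, x0, tol=1e-12, max_iter=50):
    """Sketch of method (2) on the real line: a Newton substep followed by
    a correction built from the divided difference [y_n, x_n; F]."""
    x = x0
    for _ in range(max_iter):
        fx = F(x)
        y = x - fx / dF(x)          # first substep: Newton step
        if y == x:                  # F(x) vanished to machine precision
            return x
        d = divided_difference(F, y, x)
        # second substep; in the scalar case the factors commute
        x_new = y - (dF(x) / d) * (F(y) / d)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Hypothetical example: solve x**3 - 8 = 0 with root x* = 2.
root = kung_traub(lambda t: t**3 - 8.0, lambda t: 3.0 * t**2, x0=1.5)
```

Note that only F and F' are evaluated, consistent with the paper's point that the fifth derivative appears in the convergence analysis of [1] but not in the method itself.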
Obviously, λ(t) is not bounded on Ω. Therefore, the convergence of method (2) is not guaranteed by the analysis in [1]. In order to avoid Taylor series expansions but still obtain the fourth order of convergence for method (2), we use the computational order of convergence (COC) and the approximate computational order of convergence (ACOC), which do not require more than the first derivative (see Remark 2(b)).
In this paper, we introduce a majorant sequence and use our idea of recurrent functions to extend the applicability of method (2). Our analysis includes error bounds and results on the uniqueness of x* based on computable Lipschitz constants, not given before in [1] or in other similar studies using Taylor series [3–13]. The advantages of the extended method include: applications to nonlinear Banach space valued equations, not limited to systems in finite-dimensional Euclidean space; local convergence with computable upper error bounds not given before; and the semi-local convergence, proved here for the first time. The motivation for writing this paper is the extension of the applicability of method (2), as already illustrated by the example. The novelty of the paper includes the extension of the convergence domain in both the local and the semi-local convergence case, and the introduction of the recurrent-functions proving technique, which can be used in other methods too [14–27].
The rest of the paper is organized as follows: Section 2 presents results on majorizing sequences; Sections 3 and 4 contain the semi-local and local convergence analysis, respectively; Section 5 presents the numerical experiments; and Section 6 gives concluding remarks.

Majorizing Sequences
We present results on majorizing sequences.

Definition 1. Let {u_n} be a sequence in a Banach space. Then, a nondecreasing scalar sequence {m_n} is called majorizing for {u_n} if ‖u_{n+1} − u_n‖ ≤ m_{n+1} − m_n for each n = 0, 1, 2, . . . .
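Definition 1 can be checked numerically. The sketch below uses hypothetical choices of ours, not from the paper: {u_n} are Newton iterates for x² − 2 = 0 on the real line (so the norm is the absolute value), and {m_n} is a geometric majorant whose increments 0.5 · 0.2ⁿ dominate the actual steps.

```python
def newton_iterates(x0, n):
    """Newton iterates for x**2 - 2 = 0, converging to sqrt(2)."""
    u = [x0]
    for _ in range(n):
        x = u[-1]
        u.append(x - (x * x - 2.0) / (2.0 * x))
    return u

u = newton_iterates(1.0, 5)

# Scalar majorant: increments 0.5 * 0.2**n, so {m_n} is nondecreasing.
m = [0.0]
for n in range(5):
    m.append(m[-1] + 0.5 * 0.2 ** n)

# Definition 1: |u_{n+1} - u_n| <= m_{n+1} - m_n for every n.
is_majorizing = all(
    abs(u[n + 1] - u[n]) <= m[n + 1] - m[n] for n in range(5)
)
```

Since {m_n} is a bounded nondecreasing scalar sequence, it converges; the majorization condition then forces {u_n} to be Cauchy, which is exactly how such sequences are used in the semi-local analysis.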
Proof. It follows from the definition of the sequences {s_n}, {t_n} and hypotheses (5)–(7). Hypotheses (6) and (7) can be verified only in special cases. That is why we introduce stronger hypotheses implying those of Lemma 1, but not necessarily vice versa.
By these definitions, it follows from the intermediate value theorem that the functions f and g have zeros in the interval (0, 1). Denote the smallest such zeros by b_1 and b_2, respectively. Moreover, we have f_∞(t) ≤ 0 for each t ∈ M. Furthermore, define the scalar sequences {γ_n} and {δ_n}. Next, we present a second auxiliary result on majorizing sequences.
We also have (15)–(17), which hold for k = 0 by (11). Suppose that estimates (15) and (16) hold for k = 1, 2, . . . , n. It then follows from the induction hypotheses and (17) that the sequences {s_k} and {t_k} are nondecreasing. Estimate (15) holds if we show a stronger estimate instead. For this, we need a relationship between two consecutive functions f_k. By the definition of the function f_k, we can write such a relationship, in turn, by adding and subtracting f_k. Then, we can show, instead of (18), an estimate which is true by (8).
The function g_k(t) can be rewritten so that, again, we need a relationship between two consecutive functions g_k. By adding and subtracting g_k from g_{k+1}, we obtain an estimate which is true by (11). The induction for estimates (15)–(17) is completed. Hence, the sequences {s_n} and {t_n} are nondecreasing and bounded from above by t**, so they converge to t*.

Theorem 1.
Suppose that conditions (H1)–(H4) hold. Then, the sequence {x_n} generated by method (2) is well defined in U[x_0, t*], remains in U[x_0, t*] for each n = 0, 1, 2, . . ., and converges to a solution x* of equation (1).

Proof. The assertions shall be proven by induction on k. It follows from the first substep of method (2) that (A_0) is true and y_0 ∈ U[x_0, t*]. We can write the corresponding estimate by the first substep of method (2) for n = 0 and (H2). Next, we show the invertibility of the linear operator [y_0, x_0; F]. Indeed, we have the needed bound by (H2), so by the Banach lemma on invertible linear operators [20], [y_0, x_0; F]^{−1} exists, and the iterate x_1 is well defined by the second substep of method (2) for n = 0. We can then write an estimate leading to (B_0). We also obtain that [y_k, x_k; F]^{−1} exists for each k = 1, 2, . . . . We shall show the assertions hold for k = n + 1. By the second substep of method (2), we can write the corresponding identity, in turn. Then, by condition (H3) and the induction hypotheses, we obtain the needed bound. We must show that F'(x_{n+1}) is invertible. Indeed, this follows by (H2). Hence, we obtain by method (2) and the two preceding estimates that (A_k) holds for k = n + 1. We also obtain y_{n+1} ∈ U[x_0, t*]. In view of the first substep of method (2), we can write an estimate showing (B_k) for k = n + 1. Moreover, we deduce that x_{n+2} ∈ U[x_0, t*] and that the sequence {x_n} is Cauchy in the Banach space V_1. Hence, it converges to some x* ∈ U[x_0, t*]. By letting n −→ ∞ in the estimate and using the continuity of F, we obtain F(x*) = 0.
Concerning the uniqueness of the solution x*, we have the following result. Then, by (H2) and (ii), we obtain the assertion.
Then, the parameter r_1 is defined as the smallest solution of the corresponding scalar equation. Moreover, define the functions q and p on the interval S. Suppose that the equation ψ_2(t) = 0 has a smallest solution r_2 ∈ S_0 − {0}. We shall prove that r = min{r_1, r_2} is a convergence radius for method (2). Set S_1 = [0, r). By these definitions, the required estimates hold. As in the semi-local convergence case, we develop the following conditions (C1)–(C4). Suppose these conditions hold. Then, we can show the local convergence result for method (2).

Remark 2.
(a) The value r_1 was given by us in [6] as the radius of convergence for Newton's method. It then follows from (24) that r ≤ r_1.

Hence, the radius of convergence r for method (2) cannot be larger than Newton's. Notice that the radius of convergence given independently by Rheinboldt [7] and Traub [8] is ρ = 2/(3K), where K is the Lipschitz constant on Ω. We also have ρ ≤ r, since L_0 ≤ K and L ≤ K.

(b) We compute the computational order of convergence (COC), defined by

ξ = ln(‖x_{n+1} − x*‖/‖x_n − x*‖) / ln(‖x_n − x*‖/‖x_{n−1} − x*‖),

or the approximate computational order of convergence (ACOC),

ξ_1 = ln(‖x_{n+1} − x_n‖/‖x_n − x_{n−1}‖) / ln(‖x_n − x_{n−1}‖/‖x_{n−1} − x_{n−2}‖).

In this way, we obtain the convergence order in practice and avoid assuming the existence of higher-order Fréchet derivatives of the operator F.
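The ACOC needs only three consecutive differences of iterates, with no knowledge of x* or of higher derivatives. A minimal sketch, checked here on a hypothetical example of ours (Newton iterates for x² − 2 = 0, whose order is 2):

```python
import math

def acoc(x):
    """ACOC values from a list of iterates x[0], x[1], ...:
    ln(e_{k+1}/e_k) / ln(e_k/e_{k-1}) with e_k = |x_{k+1} - x_k|."""
    e = [abs(x[k + 1] - x[k]) for k in range(len(x) - 1)]
    return [
        math.log(e[k + 1] / e[k]) / math.log(e[k] / e[k - 1])
        for k in range(1, len(e) - 1)
    ]

# Hypothetical check: Newton iterates for x**2 - 2 = 0 (second order).
xs = [1.0]
for _ in range(5):
    t = xs[-1]
    xs.append(t - (t * t - 2.0) / (2.0 * t))
orders = acoc(xs)
```

The later entries of `orders` settle near 2, the theoretical order of Newton's method; applied to iterates of method (2), the same computation would be expected to approach 4.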
Next, we present a uniqueness of the solution result.
In general, the radius of convergence decreases when the order increases. However, notice that in the local convergence Examples 2 and 3, the radii for the fourth-order method (2) compare favorably to the ones given in [7,8] for Newton's method (see r and ρ).
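The radius comparison of Remark 2(a) can be made concrete. The sketch below is a hedged illustration, not taken from the paper's examples: the constants are the standard ones for F(x) = eˣ − 1 on U(0, 1) with x* = 0, and the formula r_1 = 2/(2L_0 + L) is the Newton radius of [6] under center and restricted Lipschitz conditions.

```python
import math

# Constants for the classic example F(x) = exp(x) - 1 on U(0, 1), x* = 0
# (an assumption of ours for illustration, not an example from this paper).
L0 = math.e - 1.0                  # center-Lipschitz constant for F'
L = math.exp(1.0 / (math.e - 1))   # restricted Lipschitz constant for F'
K = math.e                         # Lipschitz constant on the whole of Omega

rho = 2.0 / (3.0 * K)              # Rheinboldt/Traub radius
r1 = 2.0 / (2.0 * L0 + L)          # Newton radius of [6]; r <= r1 by (24)
```

Since L_0 ≤ K and L ≤ K, we get 2L_0 + L ≤ 3K and hence ρ ≤ r_1, which is the enlargement of the convergence domain that the paper's weaker Lipschitz conditions provide.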

Conclusions
The Kung–Traub method was revisited, and its applicability was extended in both the semi-local and the local convergence case, and from the real line to the Banach space setting. Our analysis includes error bounds and uniqueness information on x* not available before, obtained under weak conditions. This idea is very general and can be used to extend the applicability of other methods as well.