Extending the Convergence Domain of Methods of Linear Interpolation for the Solution of Nonlinear Equations

Abstract: Solving equations in abstract spaces is important, since many problems from diverse disciplines require it. The solutions of these equations cannot in general be obtained in closed form. That difficulty forces us to develop ever-improving iterative methods. In this paper we improve the applicability of such methods. Our technique is very general and can be used to expand the applicability of other methods as well. We use two methods of linear interpolation, namely the Secant method and the Kurchatov method. The investigation of Kurchatov's method has so far been carried out under rather strict conditions. In this work, using the majorant principle of Kantorovich and our new idea of the restricted convergence domain, we present an improved semilocal convergence analysis of these methods. We establish the quadratic order of convergence of the Kurchatov method and the order $\frac{1+\sqrt{5}}{2}$ for the Secant method, and we find improved a priori and a posteriori estimates of the methods' errors.


Introduction
We consider solving the nonlinear equation
$$F(x) = 0. \qquad (1)$$
The Secant method
$$x_{n+1} = x_n - [x_{n-1}, x_n; F]^{-1} F(x_n), \quad n = 0, 1, \ldots, \qquad (2)$$
is a popular device for solving such equations. This is due to the following: the simplicity of the method, the small amount of computation at each iteration, and the fact that the iterative formula uses values of the operator from only the two previous iterates. A lot of works are dedicated to this method [1][2][3]. In [4] the Secant method is used for solving the nonlinear least squares problem. Kurchatov's method of linear interpolation is given by
$$x_{n+1} = x_n - [2x_n - x_{n-1}, x_{n-1}; F]^{-1} F(x_n), \quad n = 0, 1, \ldots \qquad (3)$$
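For a scalar equation, the iteration (3) can be sketched in a few lines, since the divided difference reduces to a difference quotient. The test function and starting points below are illustrative choices, not an example from the paper.

```python
def kurchatov(f, x0, x_minus1, tol=1e-12, max_iter=50):
    """Kurchatov's method for a scalar equation f(x) = 0:
    x_{n+1} = x_n - f(x_n)/dd, where dd is the divided difference
    of f at the points 2*x_n - x_{n-1} and x_{n-1}."""
    x_prev, x = x_minus1, x0
    for _ in range(max_iter):
        u, v = 2 * x - x_prev, x_prev
        dd = (f(u) - f(v)) / (u - v)        # [2x_n - x_{n-1}, x_{n-1}; f]
        x_prev, x = x, x - f(x) / dd
        if abs(x - x_prev) < tol:
            break
    return x

root = kurchatov(lambda x: x * x - 2.0, 1.0, 1.5)   # approximates sqrt(2)
```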
Definition 1 ([6]). Let $F$ be a nonlinear operator defined on a subset $\Omega$ of a Banach space $B_1$ with values in a Banach space $B_2$, and let $x, y$ be two points of $\Omega$. A linear operator from $B_1$ to $B_2$, denoted by $[x, y; F]$, which satisfies the conditions
(4) $[x, y; F](x - y) = F(x) - F(y)$ for all fixed $x, y \in \Omega$;
(5) $[x, x; F] = F'(x)$ whenever the Fréchet derivative $F'(x)$ exists;
is called a divided difference of $F$ at the points $x$ and $y$.
Note that (4) and (5) do not uniquely determine the divided difference with the exception of the case when B 1 is one-dimensional. For specific spaces, the differences are defined in Section 6.
If the divided difference $[x, y; F]$ of $F$ satisfies (7) or (8), then $F$ is Fréchet differentiable on $\Omega$. Moreover, if both (7) and (8) are fulfilled, then the Fréchet derivative is Lipschitz continuous on $\Omega$ with Lipschitz constant $L = p + \bar{p}$ [8].
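In the scalar case the standard divided difference $[x, y; f] = (f(x) - f(y))/(x - y)$ satisfies both conditions (4) and (5); a quick numerical check, with $f$ an arbitrary smooth choice:

```python
def dd(f, x, y):
    """Scalar divided difference [x, y; f]."""
    return (f(x) - f(y)) / (x - y)

f = lambda t: t ** 3
df = lambda t: 3 * t ** 2            # the exact derivative of f

x, y = 1.2, 0.7
# condition (4): [x, y; f](x - y) = f(x) - f(y)
assert abs(dd(f, x, y) * (x - y) - (f(x) - f(y))) < 1e-14
# condition (5): [x, x; f] = f'(x), approached here as y -> x
assert abs(dd(f, x, x + 1e-8) - df(x)) < 1e-6
```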
but not necessarily vice versa, $r \le \bar{r}$ and $\bar{h}(t) \le h(t)$ for each $t \in [0, r]$. Hence, the applicability of the Secant method is extended, and under no additional conditions, since all new Lipschitz conditions are specializations of the old ones. Moreover, in practice the computation of $p_0$ requires the computation of the other $p$ constants as special cases. Some further advantages are reported after Proposition 1. It is also worth noticing that $(c_3)$ and $(c_4)$ help define $\Omega_0$, through which $p_0$, $\bar{p}_0$ and $p$ are defined as well. With the old approach $p$ depends only on $\Omega$, which contains $\Omega_0$. In our approach the iterates $x_n$ remain in $\Omega_0$ (not in $\Omega$, as used in [1]). That is why our new $p$ constants are at least as tight as the old ones. Therein lies the novelty of our paper: this new idea helps us extend the applicability of these methods. Since the new constants are specializations of the old ones, no additional conditions are needed to obtain these extensions.
It is worth noting that $\Omega_0$ can be defined as in the proof of Theorem 1.

Theorem 1. Suppose that the conditions (C) hold. Then the iterative procedure (2) is well defined and the sequence generated by it converges to a root $x^*$ of the equation $F(x) = 0$. Moreover, the following error estimate holds, where
$$t_{n+1} = \frac{p\, t_n t_{n-1}}{1 - pa - 2pr_0 + p(t_n + t_{n-1})}, \quad n = 0, 1, \ldots$$
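The majorizing sequence in Theorem 1 can be tabulated before running the method itself. A minimal sketch, reading the recurrence as $t_{n+1} = p\,t_n t_{n-1}/(1 - pa - 2pr_0 + p(t_n + t_{n-1}))$ with $t_{-1} = r_0 + a$, $t_0 = r_0$; the constant values below are purely illustrative, not taken from the paper's example:

```python
def majorizing_sequence(p, a, r0, n_terms=10):
    """Majorizing sequence t_{-1} = r0 + a, t_0 = r0,
    t_{n+1} = p*t_n*t_{n-1} / (1 - p*a - 2*p*r0 + p*(t_n + t_{n-1}))."""
    t_prev, t = r0 + a, r0
    seq = [t]
    for _ in range(n_terms):
        denom = 1 - p * a - 2 * p * r0 + p * (t + t_prev)
        t_prev, t = t, p * t * t_prev / denom
        seq.append(t)
    return seq

ts = majorizing_sequence(p=0.5, a=0.1, r0=0.3)   # decreases monotonically to 0
```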
The semilocal convergence of the discussed methods was previously based on the verification of criterion (11). If this criterion is not satisfied, there is no guarantee that the methods converge. We have now replaced (11) by (10), which is weaker (see (12)).
Proof. Notice that the sequence $\{t_n\}_{n \ge 0}$ is generated by applying the iterative method (2) to a real polynomial. It is easy to see that the sequence converges monotonically to zero. In addition, we have the following. We prove by induction that the iterative method is well defined and that (17) holds. Using $(c_2)$, $(c_5)$, (13) and (14), it follows that (17) holds for $n = -1, 0$. Let $k$ be a nonnegative integer and suppose that (17) is fulfilled for all $n \le k$.
In view of the Banach lemma [7], $A_{k+1}$ is invertible. Next, we prove that the iterate exists for $n = k + 1$. By condition $(c_4)$, and then from (20)-(22), in view of (16) and (17), we obtain the corresponding estimates. Hence, the iterative method is well defined for each $n$. Estimate (23) shows that $\{x_n\}_{n \ge 0}$ is a Cauchy sequence in the space $B_1$, so it converges. Letting $k$ tend to infinity in (23), we get (13). It is easy to see that $x^*$ is a root of the equation $F(x) = 0$, because, according to (22), $F(x_n) \to 0$ as $n \to \infty$.

Proof. From equality (15) it follows that the order of convergence of the real sequence $\{t_n\}_{n \ge 0}$ is the only positive root of the equation $t^2 - t - 1 = 0$, namely $\frac{1+\sqrt{5}}{2}$. Then, by (13) and Kantorovich's majorant principle, the sequence $\{x_n\}_{n \ge 0}$ also has order of convergence $\frac{1+\sqrt{5}}{2}$. Concerning the uniqueness of the solution, we have the following result.
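The order $\frac{1+\sqrt{5}}{2} \approx 1.618$ is easy to observe numerically by estimating $\log e_{n+1}/\log e_n$ for the Secant method on a scalar equation; the function and starting points below are illustrative choices:

```python
import math

def secant(f, x0, x1, n_steps):
    """Secant iterates for f(x) = 0, returned as a list."""
    xs = [x0, x1]
    for _ in range(n_steps):
        x0, x1 = x1, x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        xs.append(x1)
    return xs

f = lambda x: x ** 3 - 2
root = 2 ** (1 / 3)
errs = [abs(x - root) for x in secant(f, 1.0, 1.5, 5)]
# For errors e_n -> 0 with order q, log(e_{n+1})/log(e_n) -> q.
orders = [math.log(errs[i + 1]) / math.log(errs[i]) for i in range(3, 6)]
```

The computed ratios settle near the golden ratio, consistent with the theorem.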

Proposition 1.
Under the conditions (C), further suppose that, for $d > 0$, where $y^* \in \Omega_1$ and $F(y^*) = 0$. Then, we get the following. If, additionally, the second divided difference of $F$ exists and satisfies a Lipschitz condition with constant $q$, then the majorizing function for $F$ is a cubic polynomial, and the following theorem holds.

Theorem 2.
Under the (C) conditions (except $(c_5)$), further suppose $(h_1)$ and presume that $pa + qa^2 \le 1$; denote the quantities below. Let $\bar{h}$ be a real polynomial. If the following inequality is satisfied, then the iterative method (2) is well defined and the sequence generated by it converges to the solution $x^*$ of the equation $F(x) = 0$. Moreover, the following estimate is satisfied. The proof is analogous to the proof of Theorem 1.
(b) We report similar advantages in the case of Theorem 2; see, e.g., [1], where instead of $(h_2)$ the following condition is used on $\Omega$.

A Posteriori Estimation of Error of the Secant Method
If the constants $a, c, p, q$ are known, then we can compute the sequence $\{t_n\}_{n \ge 0}$ before generating the sequence $\{x_n\}_{n \ge 0}$ by the iterative Secant algorithm. With the help of inequalities (13) and (27), the a priori estimate of the error of the Secant method is obtained. We now derive an a posteriori estimate of the method's error, which is sharper than the a priori one.
Proof. By condition $(c_4)$, it is easy to see that $pa + 2pt_0 \le 1$. Then, according to the Banach lemma, $[x_n, x^*; F]$ is invertible. From (4) we can write the corresponding identity, and using (24) and (31) we obtain an inequality from which the claim follows. If the second divided difference of $F$ exists and satisfies a Lipschitz condition with constant $q$, then the following theorem holds.
Proof. The proof of this theorem is similar to that of the previous one, except that instead of inequalities (32) the following majorizing inequalities are used.

Semilocal Convergence of the Kurchatov's Method
Sufficient conditions for the semilocal convergence of Kurchatov's method (3), as well as its speed of convergence, are given by the following theorem.
Let $\bar{h}$ be a real polynomial. Then the iterative procedure (3) is well defined and the sequence generated by it converges to the solution $x^*$ of Equation (1). Moreover, the following inequality is satisfied, where $t_0 = r_0$, $t_{-1} = r_0 + a$.

Proof. The proof of the theorem is carried out with the help of Kantorovich majorants, as in Theorem 1, but we also use the crucial estimate

Corollary 2. The convergence order of iterative Kurchatov's procedure (3) is quadratic.
Proof. According to (34), the convergence of the sequence $\{t_n\}_{n \ge 0}$ to zero is at least quadratic; that is, there exist $C \ge 0$ and $N > 0$ such that for all $n \ge N$ the following inequality holds. Given this inequality, the quadratic order of convergence of the sequence $\{t_n\}_{n \ge 0}$ follows from (34), and by (33) the quadratic convergence order of the sequence $\{x_n\}_{n \ge 0}$ of Kurchatov's method (3) follows.
Thus, Kurchatov's method has the same quadratic convergence order as Newton's method, but it does not require the calculation of derivatives.
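This derivative-free quadratic behavior is easy to observe on a scalar example; the function and starting points below are illustrative choices, not taken from the paper:

```python
import math

def kurchatov_iterates(f, x_minus1, x0, n_steps):
    """Kurchatov iterates for f(x) = 0, returned as a list (x0 first)."""
    xs = [x0]
    x_prev, x = x_minus1, x0
    for _ in range(n_steps):
        u, v = 2 * x - x_prev, x_prev
        dd = (f(u) - f(v)) / (u - v)     # no derivative of f is evaluated
        x_prev, x = x, x - f(x) / dd
        xs.append(x)
    return xs

f = lambda x: math.exp(x) - 2            # root: ln 2
errs = [abs(x - math.log(2)) for x in kurchatov_iterates(f, 0.4, 0.9, 5)]
# Successive errors shrink roughly like e_{n+1} ~ C * e_n^2.
```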

Remark 4.
We obtain similar advantages as the ones reported earlier for Theorem 2.

A Posteriori Estimation of Error of the Kurchatov's Method
If the constants $a, c, p, q$ are known, then we can compute the sequence $\{t_n\}_{n \ge 0}$ before generating the sequence $\{x_n\}_{n \ge 0}$ by the iterative algorithm (3). With the help of inequality (33), the a priori estimate of the error of Kurchatov's method is obtained. We now derive an a posteriori estimate of the method's error, which is sharper than the a priori one.

Theorem 6. Let the conditions of Theorem 5 be fulfilled. Denote the quantities below. Then for $n = 1, 2, 3, \ldots$ the following estimate holds.

Proof. The proof of the theorem is similar to that in [8]. From conditions (7) and (9), it is easy to see that $qa^2 + 2pt_0 \le 1$. Then, by the Banach lemma, $[x_n, x^*; F]$ is invertible. From (4) we can write the corresponding identity, and using (35) and (36) we obtain the required inequality, from which the estimate follows.

Proposition 2.
Under the conditions of Theorem 6, further suppose that, for $\mu > 0$,

Proof. This time, we have
The rest follows as in Proposition 1.

Remark 5.
The results reported here can immediately be extended further if we work in $\bar{\Omega}_0$ instead of the set $\Omega_0$, provided $a < \bar{r}$. The new $p$ constants will be at least as tight as the ones presented previously in our paper, since $\bar{\Omega}_0 \subseteq \Omega_0$.

Numerical Experiments
In this section, we verify the conditions of the convergence theorems for the considered methods on some nonlinear operators, and we also compare the old and new radii of the convergence domains as well as the error estimates. We first consider the representation of first-order divided differences for specific nonlinear operators [5,6].
Let us consider a nonlinear integral equation, where $K(s, t, x)$ is a continuous function of its arguments and continuously differentiable with respect to $x$. In this case $[x, y; F]$ is defined by the formula below; if $x(t) - y(t) = 0$ holds for some $t = t_j$, then the corresponding limit is taken.

Let us choose $x_0 = 0.5$ and $x_{-1} = 0.5001$. Then we get $\bar{p}$ and the corresponding constants for the theorems in [1].

Table 1 lists the radii and convergence domains of the considered methods. They are solutions of the corresponding equations and satisfy the condition $\bar{r}_0 \le \bar{r}$. We see that $U(x_0, \bar{r}_0) \subset \Omega$ holds. Moreover, for Kurchatov's method $U(x_0, 3\bar{r}_0) \approx (-0.04166, 1.04166)$ and $V_0 \subset \Omega$. So the assumptions of the theorems are fulfilled.

Next, we show that the error estimates hold, i.e., $\|x_n - x^*\| \le t_n$, and compare them with the corresponding ones in [1]. Tables 2 and 3 give results for the Secant method (2), and Table 4 for Kurchatov's method (3). Tables 2-4 show the superiority of our results over the earlier ones, i.e., the obtained error estimates are tighter in all cases. This means that fewer iterations than before are needed to reach a predetermined error tolerance.
In view of (14), (15), $\bar{r}_0 < r_0$ and Remark 1, we get $t_n^{\mathrm{NEW}} \le t_n^{\mathrm{OLD}}$.
The Secant and Kurchatov methods solve this system in under 5 iterations for $\varepsilon = 10^{-10}$ and the specified initial approximations. Let us now choose $x_0(s) = 5$ and $x_{-1}(s) = 6$. Both methods give an approximate solution of the integral equation in under 13 iterations for $\varepsilon = 10^{-10}$. To solve the linear integral equation at each iteration, the Nystrom method was applied, using a trapezoidal quadrature formula with 101 nodes. In the graphs, $P_n$ denotes $\|x_n - x_{n-1}\|$ and $E_n$ denotes $\|x_n - x^*\|$ (see Figure 1). We can see that $E_n = O(h^2)$, where $h = 0.01$. This corresponds to the error estimate of the trapezoidal quadrature formula.
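The $O(h^2)$ error of the trapezoidal rule underlying the Nystrom discretization can be checked in isolation: halving $h$ should reduce the error by a factor of about 4. The integrand below is an arbitrary smooth choice, not the paper's kernel:

```python
import math

def trapezoid(f, a, b, n_nodes):
    """Composite trapezoidal rule with n_nodes equally spaced nodes."""
    h = (b - a) / (n_nodes - 1)
    inner = sum(f(a + i * h) for i in range(1, n_nodes - 1))
    return h * (0.5 * (f(a) + f(b)) + inner)

exact = math.e - 1                                     # integral of e^x over [0, 1]
e1 = abs(trapezoid(math.exp, 0.0, 1.0, 101) - exact)   # h = 0.01
e2 = abs(trapezoid(math.exp, 0.0, 1.0, 201) - exact)   # h = 0.005
ratio = e1 / e2                                        # ~4 for an O(h^2) rule
```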
Figure 1. Values of (a) $\|x_n - x_{n-1}\|$ and (b) $\|x_n - x^*\|$ at each iteration.

Conclusions
The investigations conducted showed the effectiveness of applying the Kantorovich majorant principle for determining the convergence and the order of convergence of iterative difference methods.
The convergence of the Secant method (2) with order $\frac{1+\sqrt{5}}{2}$ and the quadratic convergence order of Kurchatov's method (3) are established. According to this technique, nonlinear majorants for a nonlinear operator are constructed, taking into account the conditions imposed on it. By using our idea of restricted convergence regions, we find tighter Lipschitz constants, leading to a finer convergence analysis of these methods than in [1]. Our new technique can be used to extend the applicability of other methods along the same lines. More details on the extensions were given in Remark 1.