Article

On the Convergence of Two-Step Kurchatov-Type Methods under Generalized Continuity Conditions for Solving Nonlinear Equations

1 Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
2 Department of Theory of Optimal Processes, Ivan Franko National University of Lviv, Universytetska Str. 1, 79000 Lviv, Ukraine
3 Department of Mathematics, University of Houston, Houston, TX 77204, USA
4 Department of Computational Mathematics, Ivan Franko National University of Lviv, Universytetska Str. 1, 79000 Lviv, Ukraine
* Authors to whom correspondence should be addressed.
Symmetry 2022, 14(12), 2548; https://doi.org/10.3390/sym14122548
Submission received: 23 October 2022 / Revised: 12 November 2022 / Accepted: 30 November 2022 / Published: 2 December 2022
(This article belongs to the Section Mathematics)

Abstract

The study of the microworld and of quantum physics, including the fundamental standard models, is closely related to symmetry principles. These phenomena reduce to solving nonlinear equations in suitable abstract spaces, and such equations are solved mostly iteratively. That is why two-step iterative methods of the Kurchatov type for solving nonlinear operator equations are investigated, using an approximation of the Fréchet derivative of the operator of the nonlinear equation by divided differences. The local and semi-local convergence of the methods is studied under the condition that the first-order divided differences satisfy generalized Lipschitz conditions. The convergence conditions and the speed of convergence of these methods are determined. Moreover, a domain of uniqueness is found for the solution. The results of numerical experiments validate the theoretical results. The new idea can be used on other iterative methods utilizing inverses of divided differences of order one.

1. Introduction

Let X and Y stand for Banach spaces and Ω be a convex and nonempty subset of X. A plethora of applications from diverse disciplines can be solved if reduced to a nonlinear equation of the form
F(x) = 0.  (1)
This reduction takes place using mathematical modeling [1,2]. Then, a solution x* ∈ Ω is to be found that answers the application. The solution may be a number, a vector, a matrix, or a function. This task is very challenging in general. Obviously, the solution x* is desired in closed form. However, in practice, this is achievable only in rare cases. That is why researchers mostly develop iterative methods that converge to x* under some conditions on the initial data.
A popular method is Newton's method [2,3,4,5], defined for a starting point x_0 ∈ Ω and all n = 0, 1, 2, … by
x_{n+1} = x_n − F′(x_n)^{−1} F(x_n).  (2)
Here, F′ denotes the Fréchet derivative of the operator F. The convergence rate of Newton's method is quadratic. However, this method requires the calculation of the derivative of the operator F [1,2,3], which is not always easy, or even possible; in particular, the operator may not be given analytically, with only an algorithm for computing it on the computer being known. Then, Newton's method (2) and its modifications [4,5,6,7,8] using derivatives are not suitable for solving (1). In this case, we can use difference methods [1,3,9,10,11]. The simplest of them is the Secant method [2,3,6,7]
x_{n+1} = x_n − [x_{n−1}, x_n; F]^{−1} F(x_n)  (3)
for all n = 0, 1, 2, …, where x_{−1}, x_0 are starting points. The Secant method was extended to the solution of (1) in Banach spaces by J.W. Schmidt [9], and it was studied under different conditions in many papers [2,7]. The convergence order of the method (3) is equal to (1 + √5)/2 ≈ 1.618. A method with the higher, quadratic, convergence order is described for all n = 0, 1, 2, … by the formula
x_{n+1} = x_n − [x_{n−1}, 2x_n − x_{n−1}; F]^{−1} F(x_n).  (4)
This method is known as the method of linear interpolation, or Kurchatov's method. It matches Newton's method in convergence order, yet it does not require analytically given derivatives, just as the Secant method does not. The method (4) was proposed for the first time by V.A. Kurchatov in [12] for the one-dimensional case. In Banach spaces, the method (4) was first presented in the works of S.M. Shakhno [13,14]. In addition, this method was studied by many authors: I.K. Argyros, H. Ren, J.A. Ezquerro, and M.A. Hernández [15,16,17]. The Kurchatov method uses only first-order divided differences in its iterative formula. However, the study of its convergence often additionally requires conditions on the second-order divided differences; this is what theoretically ensures the second order of convergence. Two-step Kurchatov methods were studied by I.K. Argyros, S. George, H. Kumar, P.K. Parida, and S.M. Shakhno [18,19].
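In the one-dimensional case, the method (4) is straightforward to sketch in code. The following minimal Python illustration (the test function and starting points are arbitrary choices, not the experiments of Section 4) uses the scalar divided difference [u, v; F] = (F(u) − F(v))/(u − v):

```python
def kurchatov(F, x_prev, x, tol=1e-10, max_iter=100):
    """Kurchatov's method (4): x_{n+1} = x_n - [x_{n-1}, 2x_n - x_{n-1}; F]^{-1} F(x_n)."""
    for _ in range(max_iter):
        u, v = x_prev, 2 * x - x_prev
        A = (F(u) - F(v)) / (u - v)   # divided difference replacing F'(x_n)
        x_next = x - F(x) / A
        if abs(x_next - x) <= tol:
            return x_next
        x_prev, x = x, x_next
    return x

# e.g. solve x**3 = 0.9 from x_{-1} = 1.0001, x_0 = 1.0
root = kurchatov(lambda t: t**3 - 0.9, 1.0001, 1.0)
```

Note that, unlike Newton's method (2), no derivative of F is evaluated: each step uses only values of F itself.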
In this article, we propose the following modification of the method (4).
Let x_{−1}, x_0 ∈ Ω. Define the two-step Kurchatov-type methods for all n = 0, 1, 2, … by
A_n = [x_{n−1}, 2x_n − x_{n−1}; F],  y_n = x_n − A_n^{−1} F(x_n),  x_{n+1} = y_n − A_n^{−1} F(y_n)  (5)
and
y_n = x_n − A_n^{−1} F(x_n),  B_n = [x_n, 2y_n − x_n; F],  x_{n+1} = y_n − B_n^{−1} F(y_n).  (6)
It is known that multi-step methods converge faster than the corresponding one-step methods. Therefore, there is a growing interest in the development and theoretical study of the convergence of such algorithms. It is worth noting that the method (5) uses the same inverse operator in both steps. This helps to reduce the total number of calculations compared to the corresponding one-step method, especially for large-scale problems.
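The difference between the two variants can be sketched in a few lines of Python (an illustration, not the authors' implementation): the method (5) reuses the divided difference A_n in both substeps, while the method (6) builds a fresh B_n for the second substep.

```python
def dd(F, u, v):
    """First-order divided difference [u, v; F] of a scalar function."""
    return (F(u) - F(v)) / (u - v)

def two_step_5(F, x_prev, x, tol=1e-10, max_iter=100):
    """Method (5): one divided difference A_n serves both substeps."""
    for _ in range(max_iter):
        A = dd(F, x_prev, 2 * x - x_prev)
        y = x - F(x) / A          # first substep
        x_next = y - F(y) / A     # second substep reuses A
        if abs(x_next - x) <= tol:
            return x_next
        x_prev, x = x, x_next
    return x

def two_step_6(F, x_prev, x, tol=1e-10, max_iter=100):
    """Method (6): a fresh B_n = [x_n, 2y_n - x_n; F] for the second substep."""
    for _ in range(max_iter):
        A = dd(F, x_prev, 2 * x - x_prev)
        y = x - F(x) / A
        if y == x:                # already at a root: avoid a zero denominator in B
            return x
        B = dd(F, x, 2 * y - x)
        x_next = y - F(y) / B
        if abs(x_next - x) <= tol:
            return x_next
        x_prev, x = x, x_next
    return x
```

In the Banach-space setting, "reusing A_n" means that a single factorization of A_n serves both linear solves, which is the computational saving noted above.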
We provide the local as well as the semi-local convergence analysis for these methods under generalized conditions. Moreover, these conditions include only operators that appear in methods. The local convergence is given in Section 2. The semi-local convergence is presented in Section 3, followed by the examples and the concluding remarks in Section 4 and Section 5, respectively.

2. Local Convergence

It is convenient for the study of the local convergence of the methods to introduce some parameters and real functions. Set M = [0, +∞).
Suppose:
(1) There exists a function φ_0 : M × M → R, continuous and nondecreasing in both variables, such that the equation
φ_0(t, 3t) − 1 = 0
has a smallest solution ϱ ∈ M \ {0}.
Set M_0 = [0, ϱ).
(2) There exists a function φ : M_0 × M_0 → R, continuous and nondecreasing in both variables, such that the equation
h_1(t) − 1 = 0
has a smallest solution r_1 ∈ M_0 \ {0}, where the function h_1 : M_0 → R is given by
h_1(t) = φ(2t, 3t) / (1 − φ_0(t, 3t)).
Define the function h_2 : M_0 → R by
h_2(t) = φ((1 + h_1(t))t, 3t) h_1(t) / (1 − φ_0(t, 3t)).
(3) The equation
h_2(t) − 1 = 0
has a smallest solution r_2 ∈ M_0 \ {0}.
Define the parameter
r = min{r_j}, j = 1, 2.  (7)
This parameter will be shown to be a radius of convergence in Theorem 1 for the method (5).
Set M_1 = [0, r). Then, it follows from the definition (7) that for all t ∈ M_1
0 ≤ φ_0(t, 3t) < 1  (8)
and
0 ≤ h_j(t) < 1.  (9)
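For concrete choices of φ_0 and φ, the radii r_1, r_2 and r can be computed numerically. A minimal sketch, under the assumption of illustrative linear functions φ_0(t_1, t_2) = a(t_1 + t_2) and φ(t_1, t_2) = b(t_1 + t_2) (the coefficients a and b below are arbitrary, not taken from this paper):

```python
def bisect(g, lo, hi, tol=1e-12):
    """Root of an increasing function g on [lo, hi], assuming g(lo) < 0 < g(hi)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

a, b = 0.5, 0.7                          # illustrative coefficients
phi0 = lambda t1, t2: a * (t1 + t2)      # so phi0(t, 3t) = 4*a*t
phi = lambda t1, t2: b * (t1 + t2)

rho = 1.0 / (4.0 * a)                    # smallest positive solution of phi0(t, 3t) = 1
h1 = lambda t: phi(2 * t, 3 * t) / (1 - phi0(t, 3 * t))
h2 = lambda t: phi((1 + h1(t)) * t, 3 * t) * h1(t) / (1 - phi0(t, 3 * t))

r1 = bisect(lambda t: h1(t) - 1, 0.0, rho * (1 - 1e-9))
r2 = bisect(lambda t: h2(t) - 1, 0.0, rho * (1 - 1e-9))
r = min(r1, r2)                          # for these linear choices, r = r1 = 1/(5*b + 4*a)
```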
Let U(v, d) and U[v, d] stand for the open and closed balls in X, respectively, with center v ∈ X and radius d > 0. By L(X, Y) we denote the space of bounded linear operators from X into Y.
The convergence analysis uses the conditions ( C ) for both methods.
Suppose:
(C1) The equation F(x) = 0 has a simple solution x* ∈ Ω such that F′(x*)^{−1} ∈ L(Y, X).
(C2) ‖F′(x*)^{−1}([u_1, u_2; F] − F′(x*))‖ ≤ φ_0(‖u_1 − x*‖, ‖u_2 − x*‖) for all u_1, u_2 ∈ Ω.
Set Ω_0 = U(x*, ϱ) ∩ Ω.
(C3) ‖F′(x*)^{−1}([u_3, u_4; F] − [u_5, x*; F])‖ ≤ φ(‖u_3 − u_5‖, ‖u_4 − x*‖) for all u_3, u_4, u_5 ∈ Ω_0.
(C4) U[x*, 3r] ⊂ Ω.
Next, the local convergence is established for the method (5).
Theorem 1.
Suppose that the conditions (C) hold. Moreover, if the starting points x_{−1}, x_0 ∈ U(x*, r) \ {x*}, then the sequence {x_n} generated by Formula (5) exists in U(x*, r), stays in U(x*, r) for all n = 0, 1, 2, … and converges to x*. Moreover, the following assertions hold:
‖y_n − x*‖ ≤ h_1(r)‖x_n − x*‖ ≤ ‖x_n − x*‖ < r  (10)
and
‖x_{n+1} − x*‖ ≤ h_2(r)‖x_n − x*‖ ≤ ‖x_n − x*‖ < r,  (11)
where the radius r is given by Formula (7) and the functions h j are as previously defined.
Proof. 
By hypothesis, x_{−1}, x_0 ∈ U(x*, r) \ {x*}. Then, by applying the conditions (C1), (C2), the definition of the radius r and (8), we have
‖F′(x*)^{−1}(A_0 − F′(x*))‖ ≤ φ_0(‖x_{−1} − x*‖, 2‖x_0 − x*‖ + ‖x_{−1} − x*‖) = q_0 ≤ φ_0(r, 3r) < 1.  (12)
It follows from (12) and the Banach lemma on invertible operators [4] that A_0^{−1} ∈ L(Y, X) and
‖A_0^{−1} F′(x*)‖ ≤ 1 / (1 − φ_0(‖x_{−1} − x*‖, 2‖x_0 − x*‖ + ‖x_{−1} − x*‖)).  (13)
Moreover, the iterates y_0 and x_1 are well defined by the two substeps of the method (5). In view of this, we can write
y_0 − x* = x_0 − x* − A_0^{−1} F(x_0) = A_0^{−1}(A_0 − [x_0, x*; F])(x_0 − x*).  (14)
Using (7) and (9) (for j = 1), (C3), (C4), (13) and (14), we obtain
‖y_0 − x*‖ ≤ (p_0 / (1 − q_0)) ‖x_0 − x*‖ ≤ h_1(r)‖x_0 − x*‖ ≤ ‖x_0 − x*‖ < r,
where we also used
‖F′(x*)^{−1}(A_0 − [x_0, x*; F])‖ ≤ φ(‖x_{−1} − x_0‖, ‖2x_0 − x_{−1} − x*‖) ≤ φ(‖x_{−1} − x*‖ + ‖x_0 − x*‖, 2‖x_0 − x*‖ + ‖x_{−1} − x*‖) = p_0 ≤ φ(2r, 3r)
and
‖2x_0 − x_{−1} − x*‖ ≤ 2‖x_0 − x*‖ + ‖x_{−1} − x*‖ ≤ 3r.
Similarly, by the second substep of the method (5), we can write
x_1 − x* = y_0 − x* − A_0^{−1} F(y_0) = A_0^{−1}(A_0 − [y_0, x*; F])(y_0 − x*),
so
‖x_1 − x*‖ ≤ (φ(‖x_{−1} − y_0‖, ‖2x_0 − x_{−1} − x*‖) / (1 − q_0)) ‖y_0 − x*‖ ≤ h_2(r)‖x_0 − x*‖ ≤ ‖x_0 − x*‖.
Hence, the estimates (10) and (11) hold for n = 0. By simply replacing x_{−1}, x_0, y_0, x_1 by x_m, x_{m+1}, y_{m+1}, x_{m+2}, m = −1, 0, 1, … in the preceding calculations, the induction for the estimates (10) and (11) is completed. It follows that
‖x_{m+1} − x*‖ ≤ c‖x_m − x*‖ < r,
where c = h_2(r) ∈ [0, 1). Thus, we conclude that lim_{m→∞} x_m = x*. □
Next, a uniqueness result is presented for the solution of the equation F(x) = 0.
Proposition 1.
Suppose:
(a) There exists a solution u* ∈ U(x*, ϱ_1) of the equation F(x) = 0 for some ϱ_1 > 0.
(b) The conditions (C1) and (C2) hold.
(c) There exists ϱ_2 ≥ ϱ_1 such that
φ_0(ϱ_2, 0) < 1.
Set Ω_1 = U[x*, ϱ_2] ∩ Ω.
Then, the equation F(x) = 0 is uniquely solvable by the element x* in the region Ω_1.
Proof. 
Let T = [u*, x*; F]. It then follows from (a)–(c), in turn, that
‖F′(x*)^{−1}(T − F′(x*))‖ ≤ φ_0(‖u* − x*‖, 0) ≤ φ_0(ϱ_2, 0) < 1.
Hence, the operator T is invertible. Then, by the identity
u* − x* = T^{−1}(F(u*) − F(x*)) = T^{−1}(0 − 0) = 0,
we conclude that u* = x*. □
Concerning the local convergence analysis of the method (6), the function h_1 is clearly the same, whereas the function h_3 corresponding to h_2 is given by
h_3(t) = φ((1 + h_1(t))t, (2h_1(t) + 1)t) h_1(t) / (1 − φ_0(t, (2h_1(t) + 1)t)).
This is due to the similar computation
x_{n+1} − x* = B_n^{−1}(B_n − [y_n, x*; F])(y_n − x*),
‖x_{n+1} − x*‖ ≤ ‖B_n^{−1} F′(x*)‖ ‖F′(x*)^{−1}(B_n − [y_n, x*; F])‖ ‖y_n − x*‖ ≤ (φ(‖y_n − x_n‖, 2‖y_n − x*‖ + ‖x_n − x*‖) / (1 − φ_0(‖x_n − x*‖, 2‖y_n − x*‖ + ‖x_n − x*‖))) ‖y_n − x*‖ ≤ h_3(r̄)‖x_n − x*‖,
where
r̄ = min{r_1, r_3}
and r_3 is the smallest solution of the equation h_3(t) − 1 = 0 in the interval [0, ϱ̄), where ϱ̄ = min{ϱ, ϱ_3} and ϱ_3 is the smallest positive solution of the equation
φ_0(t, (2h_1(t) + 1)t) − 1 = 0
(if it exists). Hence, we arrive at the corresponding local convergence result for the method (6).
Theorem 2.
Suppose that the conditions (C) hold with r̄ replacing r. Then, the conclusions of Theorem 1 hold for the method (6), with the function h_3 replacing h_2.
Clearly, the uniqueness result of Proposition 1 also holds for the method (6).

3. Semi-Local Convergence

The analysis is based on majorizing sequences. Let c ≥ 0 and η ≥ 0 be given parameters. Suppose that there exists a function ψ_0 : M × M → R, continuous and nondecreasing in both variables, such that the equation
ψ_0(t − c, t − c) − 1 = 0
has a smallest solution ϱ_4 > c. Set M_2 = [0, ϱ_4). Moreover, suppose that there exists a function ψ : M_2 × M_2 → R which is continuous and nondecreasing in both variables.
Define the sequence {t_n} by t_{−1} = 0, t_0 = c, s_0 = c + η and, for all n = 0, 1, 2, …,
t_{n+1} = s_n + ψ(s_n − t_{n−1}, t_n − t_{n−1})(s_n − t_n) / (1 − ψ_0(t_{n−1} − c, 2t_n − t_{n−1} − c)),  s_{n+1} = t_{n+1} + b_{n+1} / (1 − ψ_0(t_n − c, 2t_{n+1} − t_n − c)),  (23)
where α_n = ψ(s_n − t_{n−1}, t_n − t_{n−1})(s_n − t_n) and b_{n+1} = (1 + ψ_0(t_{n+1}, s_n))(t_{n+1} − s_n) + α_n.
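For given ψ_0 and ψ, the recurrence (23) is easy to tabulate. A sketch with illustrative linear majorant functions (the coefficients, c and η are arbitrary choices, not data from the paper):

```python
def majorizing_sequence(psi0, psi, c, eta, n_terms=12):
    """Tabulate t_0 = c, t_1, t_2, ... of (23), starting from t_{-1} = 0, s_0 = c + eta."""
    t_prev, t, s = 0.0, c, c + eta
    ts = [t]
    for _ in range(n_terms):
        alpha = psi(s - t_prev, t - t_prev) * (s - t)
        t_next = s + alpha / (1.0 - psi0(t_prev - c, 2.0 * t - t_prev - c))
        b = (1.0 + psi0(t_next, s)) * (t_next - s) + alpha
        s_next = t_next + b / (1.0 - psi0(t - c, 2.0 * t_next - t - c))
        t_prev, t, s = t, t_next, s_next
        ts.append(t)
    return ts

# illustrative linear majorant functions
psi0 = lambda u, v: 0.3 * (u + v)
psi = lambda u, v: 0.5 * (u + v)
ts = majorizing_sequence(psi0, psi, c=0.05, eta=0.05)
```

For these choices the tabulated {t_n} is nondecreasing and bounded, in agreement with Lemma 1 below.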
Next we present a convergence result for the sequence { t n } .
Lemma 1.
Suppose that, for all n = 0, 1, 2, …,
0 ≤ ψ_0(t_n − c, 2t_n − t_{n−1} − c) < 1 and c ≤ t_n < λ for some λ > 0.  (24)
Then, the sequence {t_n} given by Formula (23) is nondecreasing and convergent to its unique least upper bound t* ∈ [0, λ].
Proof. 
The sequence { t n } is nondecreasing and bounded from above by λ and as such it is convergent to t * . □
The conditions (H) shall be used in the semi-local convergence analysis, first for the method (5).
Suppose:
(H1) There exist points x_{−1}, x_0 ∈ Ω and parameters c ≥ 0, η ≥ 0 such that F′(x_0)^{−1}, A_0^{−1} ∈ L(Y, X), ‖x_0 − x_{−1}‖ ≤ c and ‖A_0^{−1} F(x_0)‖ ≤ η.
(H2) ‖F′(x_0)^{−1}([x, y; F] − F′(x_0))‖ ≤ ψ_0(‖x − x_0‖, ‖y − x_0‖) for all x, y ∈ Ω.
Set Ω_2 = U(x_0, ϱ_4) ∩ Ω.
(H3) ‖F′(x_0)^{−1}([x, y; F] − [z, u; F])‖ ≤ ψ(‖x − z‖, ‖y − u‖) for all x, y, z, u ∈ Ω_2.
(H4) The condition (24) holds,
and
(H5) U[x_0, 3t*] ⊂ Ω.
Next, the semi-local convergence of the method (5) is presented based on the conditions ( H ) and the preceding terminology.
Theorem 3.
Suppose that the conditions (H) hold. Then, the sequence {x_n} generated by the method (5) is well defined in U(x_0, t*), remains in U(x_0, t*) for all n = −1, 0, 1, 2, … and converges to a solution x* ∈ U[x_0, t*] of the equation F(x) = 0. Moreover, the following error estimates hold:
‖x* − x_n‖ ≤ t* − t_n.  (25)
Proof. 
The proof follows that of Theorem 1, with some small differences. The iterates y_0 and x_1 are well defined by the condition (H1) and the substeps of the method (5) for n = 0. We also have
‖y_0 − x_0‖ = ‖A_0^{−1} F(x_0)‖ ≤ η = s_0 − t_0 < t*,
so the iterate y_0 ∈ U(x_0, t*). Then, proceeding as in Theorem 1 but with ψ_0, ψ and x_0 in place of φ_0, φ and x*, we obtain the estimates
‖F′(x_0)^{−1}(A_n − F′(x_0))‖ ≤ ψ_0(‖x_{n−1} − x_0‖, ‖2x_n − x_{n−1} − x_0‖) ≤ ψ_0(t_{n−1} − t_0, t_n − t_{n−1} + t_n − t_0) < 1, n = 1, 2, …,
so
‖A_n^{−1} F′(x_0)‖ ≤ 1 / (1 − ψ_0(t_{n−1} − c, t_n − t_{n−1} + t_n − c)),
and
‖F′(x_0)^{−1} F(y_n)‖ = ‖F′(x_0)^{−1}([y_n, x_n; F] − A_n)(y_n − x_n)‖ ≤ ψ(‖y_n − x_{n−1}‖, ‖x_n − 2x_n + x_{n−1}‖)‖y_n − x_n‖ ≤ ψ(s_n − t_{n−1}, t_n − t_{n−1})(s_n − t_n),
leading to
‖x_{n+1} − y_n‖ ≤ ‖A_n^{−1} F′(x_0)‖ ‖F′(x_0)^{−1}([y_n, x_n; F] − A_n)(y_n − x_n)‖ ≤ ψ(s_n − t_{n−1}, t_n − t_{n−1})(s_n − t_n) / (1 − ψ_0(t_{n−1} − c, 2t_n − t_{n−1} − c)) = α_n / (1 − ψ_0(t_{n−1} − c, 2t_n − t_{n−1} − c)) = t_{n+1} − s_n,
and
‖x_{n+1} − x_0‖ ≤ ‖x_{n+1} − y_n‖ + ‖y_n − x_0‖ ≤ t_{n+1} − s_n + s_n − t_0 = t_{n+1} − c < t*,
where we also used
‖y_n − x_{n−1}‖ ≤ ‖y_n − x_n‖ + ‖x_n − x_{n−1}‖ ≤ s_n − t_n + t_n − t_{n−1} = s_n − t_{n−1},
‖2x_n − x_{n−1} − x_0‖ ≤ 2‖x_n − x_0‖ + ‖x_{n−1} − x_0‖ ≤ 3t*.
Moreover, we can write
F(x_{n+1}) = F(x_{n+1}) − F(y_n) + F(y_n) = [x_{n+1}, y_n; F](x_{n+1} − y_n) + F(y_n),
so
‖F′(x_0)^{−1} F(x_{n+1})‖ ≤ ‖F′(x_0)^{−1}(([x_{n+1}, y_n; F] − F′(x_0)) + F′(x_0))‖ ‖x_{n+1} − y_n‖ + α_n ≤ (1 + ψ_0(‖x_{n+1} − x_0‖, ‖y_n − x_0‖))‖x_{n+1} − y_n‖ + α_n ≤ (1 + ψ_0(t_{n+1}, s_n))(t_{n+1} − s_n) + α_n = b_{n+1}.
Thus, we obtain from (23) and the second substep of the method (5) that
‖y_{n+1} − x_{n+1}‖ ≤ ‖A_{n+1}^{−1} F′(x_0)‖ ‖F′(x_0)^{−1} F(x_{n+1})‖ ≤ b_{n+1} / (1 − ψ_0(t_n − c, 2t_{n+1} − t_n − c)) = s_{n+1} − t_{n+1}
and
‖y_{n+1} − x_0‖ ≤ ‖y_{n+1} − x_{n+1}‖ + ‖x_{n+1} − x_0‖ ≤ s_{n+1} − t_{n+1} + t_{n+1} − c = s_{n+1} − c < t*.
It follows from (28) and (30) that the sequence {x_n} is complete (Cauchy) in the Banach space X, since the majorizing sequence {t_n} is convergent, and as such it converges to some point x* ∈ U[x_0, t*]. Furthermore, by letting n → ∞ in (29) and using the continuity of F, we conclude that F(x*) = 0. Then, from the estimate
‖x_{n+i} − x_n‖ ≤ ‖x_{n+i} − x_{n+i−1}‖ + … + ‖x_{n+1} − x_n‖ ≤ t_{n+i} − t_{n+i−1} + … + t_{n+1} − t_n = t_{n+i} − t_n
and letting i → ∞, we show (25). □
Proposition 2.
Suppose:
(1) There exists a solution v* ∈ U(x_0, ϱ_6) of the equation F(x) = 0 for some ϱ_6 > 0.
(2) The condition on F′(x_0)^{−1} in (H1) and the condition (H2) hold on U(x_0, ϱ_6).
(3) There exists ϱ_7 > ϱ_6 such that
ψ_0(ϱ_7, ϱ_6) < 1.  (31)
Set Ω_4 = U[x_0, ϱ_7] ∩ Ω.
Then, the equation F(x) = 0 is uniquely solvable by x* in the region Ω_4.
Proof. 
Let w* ∈ Ω_4 with F(w*) = 0. Define the linear operator G = [w*, v*; F]. By applying (H2) and (31), we obtain
‖F′(x_0)^{−1}(G − F′(x_0))‖ ≤ ψ_0(‖w* − x_0‖, ‖v* − x_0‖) ≤ ψ_0(ϱ_7, ϱ_6) < 1.
So, the linear operator G is invertible. Therefore, from the identity
w* − v* = G^{−1}(F(w*) − F(v*)) = G^{−1}(0 − 0) = 0,
we conclude that w* = v*. □
The majorizing sequence {t_n} for the method (6) is defined similarly by
t_{n+1} = s_n + α_n / (1 − ψ_0(t_n − c, 2s_n − t_n − c)) and s_{n+1} = t_{n+1} + b_{n+1} / (1 − ψ_0(t_n − c, 2t_{n+1} − t_n − c)).  (32)
Lemma 2.
Suppose that, for all n = 0, 1, 2, …,
0 ≤ ψ_0(t_n − c, 2s_n − t_n − c) < 1, ψ_0(t_n − c, 2t_{n+1} − t_n − c) < 1 and t_n ≤ μ for some μ > 0.
Then, the sequence {t_n} given by Formula (32) is nondecreasing and convergent to its unique least upper bound t̄* ∈ [0, μ].
Theorem 4.
Suppose that the conditions ( H ) hold with (32), t ¯ * replacing (24) and t * , respectively. Then, the conclusions of Theorem 3 hold for the method (6).
The uniqueness of the solution x * is given in Proposition 2.
Remark 1.
(1) Proposition 2 is shown without using all the conditions of Theorem 3; however, if all the conditions are used, we can set ϱ_6 = t*. In this case, x* = v*.
(2) If Ω = X, then 2x_n − x_{n−1} ∈ Ω for all x_n, x_{n−1} ∈ Ω. Consequently, the condition (C4) can be replaced by (C4)′: U[x*, r] ⊂ Ω for the method (5) or U[x*, r̄] ⊂ Ω for the method (6), and similarly (H5) can be replaced by (H5)′: U[x_0, t*] ⊂ Ω for the method (5) or U[x_0, t̄*] ⊂ Ω for the method (6).
(3) The parameter ϱ_4, which is given in closed form, can replace t* or t̄* in the condition (H5) or (H5)′.

4. Numerical Examples

In this section, we provide examples to verify the theoretical results.
Example 1.
Let X = Y = R and Ω ⊂ R. Define the function F on Ω by
F(x) = x³ − q, q > 0, so that x* = q^{1/3}.
Then,
φ_0(t_1, t_2) = A_0 t_1 + B_0 t_2, where A_0 = max_{x∈Ω} |x + 2x*| / (3(x*)²), B_0 = max_{x∈Ω} |2x + x*| / (3(x*)²),
φ(t_1, t_2) = A_1 t_1 + B_1 t_2, where A_1 = max_{x∈Ω_0} |x| / (x*)², B_1 = max_{x∈Ω_0} |2x + x*| / (3(x*)²),
ψ_0(t_1, t_2) = A_0 t_1 + B_0 t_2, where A_0 = max_{x∈Ω} |x + 2x_0| / (3x_0²), B_0 = max_{x∈Ω} |2x + x_0| / (3x_0²),
ψ(t_1, t_2) = A_1 t_1 + B_1 t_2, where A_1 = B_1 = max_{x∈Ω_2} |x| / x_0².
Local case. Let Ω = (0, 1.5) and q = 0.9. Then, x* ≈ 0.9655, Ω_0 ≈ (0.7830, 1.1479), r = r̄ ≈ 0.0874 and U[x*, 3r] ≈ [0.7033, 1.2277] ⊂ Ω.
Semi-local case. Let Ω = (0, 1.5), q = 0.9, x_0 = 1 and x_{−1} = 1.05. Then, Ω_2 = (0.55, 1.45). The majorizing sequences for the methods (5) and (6) are
{t_n} = {0, 0.0500, 0.0898, 0.1084, 0.1152, 0.1164, …, 0.1165},
{t̄_n} = {0, 0.0500, 0.0904, 0.1101, 0.1177, 0.1192, …, 0.1193},
respectively. So, U[x_0, 3t*] ≈ [0.6506, 1.3494] ⊂ Ω and U[x_0, 3t̄*] ≈ [0.6422, 1.3578] ⊂ Ω.
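The sequence {t_n} listed above can be reproduced by iterating (23) with the data of this example (a sketch; the maxima defining ψ_0 and ψ are evaluated over Ω = (0, 1.5) and Ω_2 = (0.55, 1.45), and small last-digit deviations come from rounding in the printed values):

```python
# Example 1, semi-local case: q = 0.9, x_0 = 1, x_{-1} = 1.05
F = lambda x: x**3 - 0.9
x0, xm1 = 1.0, 1.05
c = abs(x0 - xm1)                                    # c = 0.05
A0 = (F(xm1) - F(2 * x0 - xm1)) / (2 * (xm1 - x0))   # A_0 = [x_{-1}, 2x_0 - x_{-1}; F]
eta = abs(F(x0) / A0)                                # eta = |A_0^{-1} F(x_0)|

# linear majorants: psi_0 uses max|x + 2x_0|/(3x_0^2) = 3.5/3 and
# max|2x + x_0|/(3x_0^2) = 4/3 over Omega; psi uses max|x|/x_0^2 = 1.45 over Omega_2
psi0 = lambda t1, t2: (3.5 / 3.0) * t1 + (4.0 / 3.0) * t2
psi = lambda t1, t2: 1.45 * (t1 + t2)

t_prev, t, s = 0.0, c, c + eta
ts = [t]
for _ in range(10):
    alpha = psi(s - t_prev, t - t_prev) * (s - t)
    t_next = s + alpha / (1 - psi0(t_prev - c, 2 * t - t_prev - c))
    b = (1 + psi0(t_next, s)) * (t_next - s) + alpha
    s = t_next + b / (1 - psi0(t - c, 2 * t_next - t - c))
    t_prev, t = t, t_next
    ts.append(t)
# ts now approximates {0.0500, 0.0898, 0.1084, 0.1152, ...}
```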
Example 2.
Consider the system of m equations
Σ_{j=1}^{m} x_j + e^{x_i} − 1 = 0, i = 1, …, m.
Here, X = Y = R^m, Ω ⊂ R^m and x* = (0, …, 0)^T.
Then,
φ_0(t_1, t_2) = A_0 t_1 + B_0 t_2, where A_0 = B_0 = max_{x∈Ω} e^{‖x‖} / (2γ),
φ(t_1, t_2) = A_1 t_1 + B_1 t_2, where A_1 = B_1 = max_{x∈Ω_0} e^{‖x‖} / (2γ), γ = |((F′(x*))^{−1})_{1,1}|,
ψ_0(t_1, t_2) = A_0 t_1 + B_0 t_2, where A_0 = B_0 = max_{x∈Ω} e^{‖x‖} / (2γ),
ψ(t_1, t_2) = A_1 t_1 + B_1 t_2, where A_1 = B_1 = max_{x∈Ω_2} e^{‖x‖} / (2γ), γ = |((F′(x_0))^{−1})_{1,1}|.
Local case. Let m = 5 and Ω = U(x*, 1). Then, Ω_0 ≈ U(x*, 0.3492), r = r̄ ≈ 0.1719 and U[x*, 3r] ⊆ [−0.5157, 0.5157]^m ⊂ Ω.
Semi-local case. Let m = 5, Ω = U(x*, 1), x_0 = (0.02, …, 0.02)^T and x_{−1} = x_0 + 0.0001. Then, Ω_2 ≈ U(x_0, 0.4502). The majorizing sequences for the methods (5) and (6) are
{t_n} = {0, 0.0001, 0.1047, 0.1266, 0.1372, …, 0.1387},
{t̄_n} = {0, 0.0001, 0.1064, 0.1323, 0.1460, …, 0.1487},
respectively. So, U[x_0, 3t*] ⊆ [−0.3962, 0.4362]^m ⊂ Ω and U[x_0, 3t̄*] ⊆ [−0.4260, 0.4660]^m ⊂ Ω.
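The method (5) for this system can be sketched with the standard componentwise divided difference of an operator on R^m, which satisfies [x, y; F](x − y) = F(x) − F(y) (an illustration, not the authors' implementation):

```python
import numpy as np

def F(x):
    """F_i(x) = sum_j x_j + exp(x_i) - 1, with the solution x* = (0, ..., 0)^T."""
    return x.sum() + np.exp(x) - 1.0

def divided_difference(F, x, y):
    """[x, y; F] on R^m: column j mixes the first j components of x with the
    last m - j components of y, so the defining identity holds by telescoping."""
    m = x.size
    A = np.empty((m, m))
    for j in range(m):
        u = np.concatenate((x[:j + 1], y[j + 1:]))
        v = np.concatenate((x[:j], y[j:]))
        A[:, j] = (F(u) - F(v)) / (x[j] - y[j])
    return A

def two_step_kurchatov(F, x_prev, x, tol=1e-8, max_iter=50):
    """Method (5): both substeps use the same matrix A_n."""
    for _ in range(max_iter):
        A = divided_difference(F, x_prev, 2 * x - x_prev)
        y = x - np.linalg.solve(A, F(x))
        x_next = y - np.linalg.solve(A, F(y))
        if np.linalg.norm(x_next - x) <= tol:
            return x_next
        x_prev, x = x, x_next
    return x

m = 5
x0 = np.full(m, 0.02)
root = two_step_kurchatov(F, x0 + 1e-4, x0)
```

For simplicity, A_n is passed to np.linalg.solve twice; in practice, a single LU factorization of A_n (e.g. via scipy.linalg.lu_factor) would serve both substeps.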
Let us apply the methods (4)–(6) to the considered nonlinear problems with different initial approximations x_0. All these methods require an additional approximation x_{−1}; it is computed by the rule x_{−1} = x_0 + 10^{−4}. The stopping condition for the iterative process is ‖x_{n+1} − x_n‖ ≤ 10^{−8}.
Table 1 and Table 2 show the number of iterations needed to solve the single equation of Example 1 and the system of equations of Example 2 for m = 10.
Figure 1 and Figure 2 demonstrate that the norms ‖F(x_n)‖ and ‖x_n − x_{n−1}‖ for the two-step Kurchatov-type methods (5) and (6) decrease faster than for Kurchatov's method (4).

5. Conclusions

The objective of this work was to develop a process for studying the convergence of iterative methods containing inverses of linear operators under weak conditions. These conditions involve only the operators appearing in the methods. In particular, a local and a semi-local convergence analysis of the two-step Kurchatov-type methods is provided under generalized Lipschitz conditions for divided differences of order one only. Regions of convergence and of uniqueness of the solution are established, and the results of the numerical experiments are given. The developed technique does not depend on the particular methods studied here. That is why it can also be used on other methods that contain inverses of divided differences, or inverses of linear operators in general.
Future work involves the application of this process to other single-step and multi-step iterative methods with inverses [14,15,17,20,21]. We will also study analogs of the studied methods when the Fréchet derivative is replaced by the Gateaux derivative.

Author Contributions

Conceptualization, I.K.A., S.S., S.R. and H.Y.; methodology, I.K.A., S.S., S.R. and H.Y.; software, I.K.A., S.S., S.R. and H.Y.; validation, I.K.A., S.S., S.R. and H.Y.; formal analysis, I.K.A., S.S., S.R. and H.Y.; investigation, I.K.A., S.S., S.R. and H.Y.; resources, I.K.A., S.S., S.R. and H.Y.; data curation, I.K.A., S.S., S.R. and H.Y.; writing—original draft preparation, I.K.A., S.S., S.R. and H.Y.; writing—review and editing, I.K.A., S.S., S.R. and H.Y.; visualization, I.K.A., S.S., S.R. and H.Y.; supervision, I.K.A., S.S., S.R. and H.Y.; project administration, I.K.A., S.S., S.R. and H.Y.; funding acquisition, I.K.A., S.S., S.R. and H.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  2. Dennis, J.E., Jr.; Schnabel, R.B. Numerical Methods for Unconstrained Optimization and Nonlinear Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1983. [Google Scholar]
  3. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice Hall: Hoboken, NJ, USA, 1964. [Google Scholar]
  4. Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Pergamon Press: Oxford, UK, 1982. [Google Scholar]
  5. Schwetlick, H. Numerische Lösung Nichtlinearer Gleichungen; VEB Deutscher Verlag der Wissenschaften: Berlin, Germany, 1979. [Google Scholar]
  6. Wang, X. Convergence of Newton’s method and uniqueness of the solution of equations in Banach space. IMA J. Numer. Anal. 2000, 20, 123–134. [Google Scholar] [CrossRef] [Green Version]
  7. Yamamoto, T. Historical developments in convergence analysis for Newton’s and Newton-like methods. J. Comput. Appl. Math. 2000, 124, 1–23. [Google Scholar] [CrossRef] [Green Version]
  8. Werner, W. Über ein Verfahren der Ordnung 1 + √2 zur Nullstellenbestimmung. Numer. Math. 1979, 32, 333–342. [Google Scholar] [CrossRef]
  9. Schmidt, J.W. Untere Fehlerschranken für Regula–Falsi Verhahren. Period. Math. Hungar. 1978, 9, 241–247. [Google Scholar] [CrossRef]
  10. Argyros, I.K. On the Secant method. Publ. Math. 1993, 43, 223–238. [Google Scholar] [CrossRef]
  11. Potra, F.A. On an iterative algorithm of order 1.839…for solving nonlinear operator equations. Numer. Funct. Anal. Optimiz. 1985, 7, 75–106. [Google Scholar] [CrossRef]
  12. Kurchatov, V.A. On a method of linear interpolation for the solution of functional equations. Dokl. Akad. Nauk SSSR 1971, 198, 524–526. Translation in Soviet Math. Dokl. 1971, 12, 835–838. (In Russian) [Google Scholar]
  13. Shakhno, S.M. On a Kurchatov’s method of linear interpolation for solving nonlinear equations. PAMM—Proc. Appl. Math. Mech. 2004, 4, 650–651. [Google Scholar] [CrossRef]
  14. Shakhno, S.M. On the difference method with quadratic convergence for solving nonlinear operator equations. Mat. Stud. 2006, 26, 105–110. (In Ukrainian) [Google Scholar]
  15. Argyros, I.K. A Kantorovich-type analysis for a fast iterative method for solving nonlinear equations. J. Math. Anal. Appl. 2007, 332, 97–108. [Google Scholar] [CrossRef] [Green Version]
  16. Argyros, I.K.; Ren, H. On the Kurchatov method for solving equations under weak conditions. Appl. Math. Comput. 2016, 273, 98–113. [Google Scholar] [CrossRef]
  17. Ezquerro, J.A.; Grau, A.; Grau-Sánchez, M.; Ángel Hernández, M. On the efficiency of two variants of Kurchatov’s method for solving nonlinear systems. Numer. Algor. 2013, 64, 685–698. [Google Scholar] [CrossRef]
  18. Argyros, I.K.; Shakhno, S. Extended Two-Step-Kurchatov Method for Solving Banach Space Valued Nondifferentiable Equations. Int. J. Appl. Comput. Math. 2020, 6, 32. [Google Scholar] [CrossRef]
  19. Kumar, H.; Parida, P.K. On semilocal convergence of two-step Kurchatov method. Int. J. Comput. Math. 2019, 96, 1548–1566. [Google Scholar] [CrossRef]
  20. Dhiman, D.; Mishra, L.N.; Mishra, V.N. Solvability of some non-linear functional integral equations via measure of noncompactness. Adv. Stud. Contemp. Math. 2022, 32, 157–172. [Google Scholar]
  21. Sharma, N.; Mishra, L.N.; Mishra, V.N.; Pandey, S. Solution of Delay Differential equation via N 1 v iteration algorithm. Eur. J. Pure Appl. Math. 2020, 13, 1110–1130. [Google Scholar] [CrossRef]
Figure 1. Example 1: norm of the residual (A) and norm of the correction (B) at each iteration.
Figure 2. Example 2: norm of the residual (A) and norm of the correction (B) at each iteration.
Table 1. Results for Example 1 (number of iterations).

x_0    Method (4)    Method (5)    Method (6)
1      4             3             3
10     11            9             7
100    17            13            10
Table 2. Results for Example 2 (number of iterations).

x_0           Method (4)    Method (5)    Method (6)
(1,…,1)^T     5             4             3
(10,…,10)^T   13            10            8
(20,…,20)^T   25            19            14
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Argyros, I.K.; Shakhno, S.; Regmi, S.; Yarmola, H. On the Convergence of Two-Step Kurchatov-Type Methods under Generalized Continuity Conditions for Solving Nonlinear Equations. Symmetry 2022, 14, 2548. https://doi.org/10.3390/sym14122548

