Article

Improving Convergence Analysis of the Newton–Kurchatov Method under Weak Conditions

Ioannis K. Argyros 1, Stepan Shakhno 2 and Halyna Yarmola 2
1 Department of Mathematics, Cameron University, Lawton, OK 73505, USA
2 Faculty of Applied Mathematics and Informatics, Ivan Franko National University of Lviv, 79000 Lviv, Ukraine
* Author to whom correspondence should be addressed.
Computation 2020, 8(1), 8; https://doi.org/10.3390/computation8010008
Submission received: 8 January 2020 / Revised: 20 January 2020 / Accepted: 23 January 2020 / Published: 26 January 2020

Abstract

The technique of using the restricted convergence region is applied to study the semilocal convergence of the Newton–Kurchatov method. The analysis is carried out under weak conditions on the derivatives and the first-order divided differences. Consequently, weaker sufficient convergence criteria and more accurate error estimates are obtained. A special case of the weak conditions is also considered.

1. Introduction

Nonlinear equations, in particular systems of nonlinear algebraic or transcendental equations, often arise when numerical methods are used to solve applied problems. A popular method for solving such equations is Newton's [1,2,3]; however, it requires differentiability of the nonlinear function. Difference methods [1,2,3,4] do not have this requirement and can be applied to equations with nondifferentiable functions [5]. For some problems, the nonlinear function can be represented as the sum of a differentiable and a nondifferentiable part. In this case, methods with operator decomposition are often used [1,2,6,7,8,9]. Numerical examples show that their convergence is faster than that of difference methods and Newton-type methods [10,11,12].
Let us consider the equation
$$H(x) \equiv F(x) + G(x) = 0, \tag{1}$$
where $F, G : D \subset X \to Y$. Here $F$ is a differentiable operator, $G$ is a continuous operator, $D$ is an open convex set, and $X$ and $Y$ are Banach spaces.
We use the Newton–Kurchatov method [8,13,14,15,16] for solving Equation (1) numerically:
$$x_{n+1} = x_n - A_n^{-1} H(x_n), \quad n \ge 0, \tag{2}$$
where $A_n = F'(x_n) + G(2x_n - x_{n-1}; x_{n-1})$. It is a combination of the Newton and Kurchatov methods [3,4,17], whose formulas for solving the equation $F(x) = 0$ are
$$x_{n+1} = x_n - F'(x_n)^{-1} F(x_n), \quad n \ge 0,$$
and
$$x_{n+1} = x_n - F(2x_n - x_{n-1}; x_{n-1})^{-1} F(x_n), \quad n \ge 0,$$
respectively; $G(\cdot\,;\cdot)$ and $F(\cdot\,;\cdot)$ denote first-order divided differences. Let $\tilde{x} \in D$. Denote
$$B(\tilde{x}, R) = \{ x \in X : \|x - \tilde{x}\| < R \}$$
and
$$\overline{B(\tilde{x}, R)} = \{ x \in X : \|x - \tilde{x}\| \le R \}.$$
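For illustration, the following is a minimal sketch of the method in Equation (2) for a scalar equation, assuming $X = Y = \mathbb{R}$; the function and variable names are illustrative choices, not taken from the paper.

```python
# Sketch of the Newton-Kurchatov iteration (2) for a scalar equation
# F(x) + G(x) = 0: F is differentiable (derivative dF), G merely continuous.
# Assumes the two starting points satisfy x_prev != x0.

def divdiff(G, u, v):
    """First-order divided difference G(u; v) = (G(u) - G(v)) / (u - v)."""
    return (G(u) - G(v)) / (u - v)

def newton_kurchatov(F, dF, G, x_prev, x0, tol=1e-15, max_iter=50):
    """Iterate x_{n+1} = x_n - A_n^{-1} (F(x_n) + G(x_n)) with
    A_n = F'(x_n) + G(2 x_n - x_{n-1}; x_{n-1}), starting from x_{-1}, x_0."""
    for _ in range(max_iter):
        A = dF(x0) + divdiff(G, 2 * x0 - x_prev, x_prev)
        x_next = x0 - (F(x0) + G(x0)) / A
        x_prev, x0 = x0, x_next
        if abs(x0 - x_prev) < tol:
            break
    return x0
```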
Our semilocal convergence analysis is based on some generalized conditions. Suppose that for each $x, y \in D$:
$$\|A_0^{-1}(F'(x) - F'(x_0))\| \le \omega_1^0(\|x - x_0\|), \tag{3}$$
$$\|A_0^{-1}\big(G(x; y) - G(2x_0 - x_{-1}; x_{-1})\big)\| \le \omega_2^0\big(\|x - (2x_0 - x_{-1})\|, \|y - x_{-1}\|\big), \tag{4}$$
where $\omega_1^0 : \mathbb{R}_+ \to \mathbb{R}_+$ and $\omega_2^0 : \mathbb{R}_+ \times \mathbb{R}_+ \to \mathbb{R}_+$ are nondecreasing functions. Let $a > 0$. Suppose that the equation
$$\omega_1^0(u) + \omega_2^0(3u + a, u + a) = 1 \tag{5}$$
has at least one positive solution, and denote by $R_0$ the smallest such solution. Set $D_0 = D \cap B(x_0, 3R_0)$ and suppose that for each $x, y, u, v \in D_0$ with $2y - x \in D_0$:
$$\|A_0^{-1}(F'(x) - F'(y))\| \le \omega_1(\|x - y\|), \tag{6}$$
$$\|A_0^{-1}\big(G(2y - x; x) - G(u; v)\big)\| \le \omega_2\big(\|2y - x - u\|, \|x - v\|\big), \tag{7}$$
where $\omega_1 : \mathbb{R}_+ \to \mathbb{R}_+$ and $\omega_2 : \mathbb{R}_+ \times \mathbb{R}_+ \to \mathbb{R}_+$ are nondecreasing functions. Moreover, $\omega_1(tr) \le h(t)\,\omega_1(r)$ for $t \in [0, 1]$ and $r \in [0, R]$, where $h : [0, 1] \to \mathbb{R}$ [11].
In this article, we also consider $\varepsilon$-type conditions: for $x, y \in D$,
$$\|A_0^{-1}(F'(x) - F'(x_0))\| \le \varepsilon_1^0, \tag{8}$$
$$\|A_0^{-1}\big(G(x; y) - G(2x_0 - x_{-1}; x_{-1})\big)\| \le \varepsilon_2^0, \tag{9}$$
and for $\bar{x}, \bar{y}, u, v \in D_0$ with $2\bar{y} - \bar{x} \in D_0$,
$$\|A_0^{-1}(F'(\bar{x}) - F'(\bar{y}))\| \le \varepsilon_1, \tag{10}$$
$$\|A_0^{-1}\big(G(2\bar{y} - \bar{x}; \bar{x}) - G(u; v)\big)\| \le \varepsilon_2, \tag{11}$$
where $\varepsilon_1^0$, $\varepsilon_1$, $\varepsilon_2^0$, and $\varepsilon_2$ are positive constants, for some $D_0 \subseteq D$.

2. Semilocal Analysis

Set $\Phi = \int_0^1 h(t)\,dt$. For instance, if $\omega_1(u) = 2\ell u$, then $h(t) = t$ and $\Phi = 1/2$.
Theorem 1.
Let $F$ and $G$ be nonlinear operators with the specified properties. Assume that:
  • the linear operator $A_0$, where $x_{-1}, x_0 \in D$, is invertible;
  • $$\|A_0^{-1}(F(x_0) + G(x_0))\| \le \eta, \quad \|x_0 - x_{-1}\| \le \alpha; \tag{12}$$
  • Equations (3) and (4) hold on $D$, and Equations (6) and (7) hold on $D_0$;
  • the equation
    $$u\left(1 - \frac{m}{1 - \omega_1^0(u) - \omega_2^0(3u + \alpha, u + \alpha)}\right) - \eta = 0, \tag{13}$$
    where $m = \Phi\,\omega_1(\eta) + \max\{\omega_2(\eta + \alpha, \alpha), \omega_2(2\eta, \eta)\}$, has at least one positive solution greater than $\eta$ and $\alpha$; denote by $R$ the smallest such solution;
  • $\omega_1^0(R) + \omega_2^0(3R + \alpha, R + \alpha) < 1$, $M = \dfrac{m}{1 - \omega_1^0(R) - \omega_2^0(3R + \alpha, R + \alpha)} < 1$, and $B(x_0, 3R) \subseteq D$.
Then the sequence $\{x_n\}_{n \ge 0}$ generated by Equation (2) is well defined, remains in $B(x_0, R)$, and converges to a unique solution $x^* \in \overline{B(x_0, R)}$ of Equation (1), with $R < R_0$.
Proof. 
The proof is carried out by mathematical induction and is similar to the one in [8], with some differences. By Equations (2) and (12), for $n = 0$ we have
$$\|x_1 - x_0\| \le \|A_0^{-1}(F(x_0) + G(x_0))\| \le \eta < R$$
and $x_1 \in B(x_0, R)$. Using the conditions in Equations (3) and (4), we get
$$\|I - A_0^{-1}A_1\| = \|A_0^{-1}(A_0 - A_1)\| \le \omega_1^0(\|x_1 - x_0\|) + \omega_2^0\big(\|2x_0 - x_{-1} - 2x_1 + x_0\|, \|x_{-1} - x_0\|\big)$$
$$\le \omega_1^0(\eta) + \omega_2^0(2\eta + \alpha, \alpha) \le \omega_1^0(R) + \omega_2^0(2R + \alpha, \alpha) \le \omega_1^0(R) + \omega_2^0(3R + \alpha, R + \alpha) < 1.$$
According to the Banach lemma on inverse operators [1], $A_1^{-1}A_0$ exists and
$$\|A_1^{-1}A_0\| \le \frac{1}{1 - \omega_1^0(R) - \omega_2^0(3R + \alpha, R + \alpha)}.$$
Then we have
$$A_0^{-1}(F(x_1) + G(x_1)) = A_0^{-1}\big(F(x_1) - F(x_0) - F'(x_0)(x_1 - x_0)\big) + A_0^{-1}\big(G(x_1) - G(x_0) - G(2x_0 - x_{-1}; x_{-1})(x_1 - x_0)\big)$$
$$= \int_0^1 A_0^{-1}\big(F'(x_0 + t(x_1 - x_0)) - F'(x_0)\big)\,dt\,(x_1 - x_0) + A_0^{-1}\big(G(x_1; x_0) - G(2x_0 - x_{-1}; x_{-1})\big)(x_1 - x_0).$$
Hence, by the conditions in Equations (6) and (7), we obtain
$$\|x_2 - x_1\| = \|A_1^{-1}(F(x_1) + G(x_1))\| \le \|A_1^{-1}A_0\|\,\|A_0^{-1}(F(x_1) + G(x_1))\|$$
$$\le \frac{\Phi\,\omega_1(\|x_1 - x_0\|) + \omega_2\big(\|2x_0 - x_{-1} - x_1\|, \|x_{-1} - x_0\|\big)}{1 - \omega_1^0(R) - \omega_2^0(3R + \alpha, R + \alpha)}\,\|x_1 - x_0\|$$
$$\le \frac{\Phi\,\omega_1(\eta) + \omega_2(\eta + \alpha, \alpha)}{1 - \omega_1^0(R) - \omega_2^0(3R + \alpha, R + \alpha)}\,\|x_1 - x_0\| \le M\|x_1 - x_0\| < \eta.$$
On the other hand,
$$\|x_2 - x_0\| \le \|x_2 - x_1\| + \|x_1 - x_0\| \le (M + 1)\|x_1 - x_0\| \le (M + 1)\eta = \frac{1 - M^2}{1 - M}\,\eta < \frac{1}{1 - M}\,\eta = R.$$
Therefore, $x_2 \in B(x_0, R)$. Suppose that for $k = 1, \ldots, n - 1$ the following hold:
  • $A_k^{-1}A_0$ exists and $\|A_k^{-1}A_0\| \le \dfrac{1}{1 - \omega_1^0(R) - \omega_2^0(3R + \alpha, R + \alpha)}$;
  • $\|x_{k+1} - x_k\| \le M\|x_k - x_{k-1}\| \le M^k\|x_1 - x_0\| < \eta$;
  • $x_{k+1} \in B(x_0, R)$.
Then, using the conditions in Equations (3) and (4), for $k = n$ we have
$$\|I - A_0^{-1}A_n\| = \|A_0^{-1}(A_0 - A_n)\| \le \omega_1^0(\|x_0 - x_n\|) + \omega_2^0\big(\|2x_0 - x_{-1} - 2x_n + x_{n-1}\|, \|x_{-1} - x_{n-1}\|\big) \le \omega_1^0(R) + \omega_2^0(3R + \alpha, R + \alpha) < 1.$$
According to the Banach lemma on inverse operators [1], $A_n^{-1}A_0$ exists and
$$\|A_n^{-1}A_0\| \le \frac{1}{1 - \omega_1^0(R) - \omega_2^0(3R + \alpha, R + \alpha)}.$$
By the equality
$$A_0^{-1}(F(x_n) + G(x_n)) = A_0^{-1}\big(F(x_n) - F(x_{n-1}) - F'(x_{n-1})(x_n - x_{n-1})\big) + A_0^{-1}\big(G(x_n) - G(x_{n-1}) - G(2x_{n-1} - x_{n-2}; x_{n-2})(x_n - x_{n-1})\big)$$
$$= \int_0^1 A_0^{-1}\big(F'(x_{n-1} + t(x_n - x_{n-1})) - F'(x_{n-1})\big)\,dt\,(x_n - x_{n-1}) + A_0^{-1}\big(G(x_n; x_{n-1}) - G(2x_{n-1} - x_{n-2}; x_{n-2})\big)(x_n - x_{n-1})$$
and the conditions in Equations (6) and (7), we have
$$\|x_{n+1} - x_n\| = \|A_n^{-1}(F(x_n) + G(x_n))\| \le \|A_n^{-1}A_0\|\,\|A_0^{-1}(F(x_n) + G(x_n))\|$$
$$\le \frac{\Phi\,\omega_1(\|x_n - x_{n-1}\|) + \omega_2\big(\|2x_{n-1} - x_{n-2} - x_n\|, \|x_{n-1} - x_{n-2}\|\big)}{1 - \omega_1^0(R) - \omega_2^0(3R + \alpha, R + \alpha)}\,\|x_n - x_{n-1}\|$$
$$\le \frac{\Phi\,\omega_1(\eta) + \omega_2(2\eta, \eta)}{1 - \omega_1^0(R) - \omega_2^0(3R + \alpha, R + \alpha)}\,\|x_n - x_{n-1}\| \le M\|x_n - x_{n-1}\| \le M^n\|x_1 - x_0\| < \eta.$$
Next, we show that $x_{n+1} \in B(x_0, R)$. Indeed,
$$\|x_{n+1} - x_0\| \le \|x_{n+1} - x_n\| + \|x_n - x_{n-1}\| + \cdots + \|x_1 - x_0\| \le (M^n + M^{n-1} + \cdots + M + 1)\|x_1 - x_0\| \le \frac{1 - M^{n+1}}{1 - M}\,\eta < \frac{1}{1 - M}\,\eta = R,$$
and $x_{n+1} \in B(x_0, R)$. Moreover, we show that the sequence $\{x_n\}_{n \ge 0}$ is fundamental. Indeed, for $p \ge 1$,
$$\|x_{n+p} - x_n\| \le \|x_{n+p} - x_{n+p-1}\| + \|x_{n+p-1} - x_{n+p-2}\| + \cdots + \|x_{n+1} - x_n\| \le (M^{p-1} + M^{p-2} + \cdots + 1)\|x_{n+1} - x_n\|$$
$$= \frac{1 - M^p}{1 - M}\,\|x_{n+1} - x_n\| \le \frac{1 - M^p}{1 - M}\,M^n\eta < \frac{M^n}{1 - M}\,\eta.$$
Therefore, $\{x_n\}_{n \ge 0}$ is a fundamental sequence and converges to some $x^* \in \overline{B(x_0, R)}$. Furthermore, we prove that $x^*$ is the unique solution of Equation (1). Since
$$\|A_0^{-1}H(x_n)\| \le \big(\Phi\,\omega_1(\eta) + \omega_2(2\eta, \eta)\big)\,\|x_n - x_{n-1}\|$$
and $\|x_n - x_{n-1}\| \to 0$ as $n \to \infty$, we get $H(x^*) = 0$. Finally, suppose there exists $y^* \in \overline{B(x_0, R)}$, $y^* \ne x^*$, such that $H(y^*) = 0$. Denote
$$A = \int_0^1 F'(x^* + t(y^* - x^*))\,dt + G(y^*; x^*).$$
Using the conditions in Equations (3) and (4), we get
$$\|A_0^{-1}(A_0 - A)\| \le \int_0^1 \omega_1^0\big(\|x_0 - x^* - t(y^* - x^*)\|\big)\,dt + \omega_2^0\big(\|2x_0 - x_{-1} - y^*\|, \|x_{-1} - x^*\|\big)$$
$$\le \int_0^1 \omega_1^0\big((1 - t)\|x_0 - x^*\| + t\|x_0 - y^*\|\big)\,dt + \omega_2^0\big(\|x_0 - x_{-1}\| + \|x_0 - y^*\|, \|x_{-1} - x_0\| + \|x_0 - x^*\|\big)$$
$$\le \omega_1^0(R) + \omega_2^0(R + \alpha, R + \alpha) < 1.$$
According to the Banach lemma on inverse operators, $A$ is invertible, and in view of
$$A(y^* - x^*) = H(y^*) - H(x^*)$$
it follows that $y^* = x^*$. □
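Since Equation (13) defines $R$ only implicitly, a numerical root-finder is needed in practice. The following hedged sketch (assuming $\omega_1^0$, $\omega_2^0$, $\omega_1$, $\omega_2$, and $\Phi$ are supplied by the user as Python callables and a constant, and that the root lies below a user-chosen upper bound `hi`) locates the smallest admissible solution by a sign-change scan followed by Brent's method.

```python
# Locating the smallest solution R of Equation (13) greater than eta and
# alpha. omega10(u), omega20(s, t), omega1(u), omega2(s, t) are
# user-supplied nondecreasing functions; Phi is the integral of h.
import numpy as np
from scipy.optimize import brentq

def smallest_root(f, lo, hi, samples=10_000):
    """Smallest root of f on (lo, hi], located via a sign-change scan."""
    grid = np.linspace(lo, hi, samples)
    vals = [f(u) for u in grid]
    for i in range(samples - 1):
        if vals[i] == 0.0:
            return grid[i]
        if vals[i] * vals[i + 1] < 0:          # sign change brackets a root
            return brentq(f, grid[i], grid[i + 1])
    return None                                 # criterion not satisfied

def radius_R(omega10, omega20, omega1, omega2, Phi, eta, alpha, hi=1.0):
    m = Phi * omega1(eta) + max(omega2(eta + alpha, alpha), omega2(2 * eta, eta))
    def lhs(u):                                 # left-hand side of (13)
        d = 1.0 - omega10(u) - omega20(3 * u + alpha, u + alpha)
        return u * (1.0 - m / d) - eta if d > 0 else float("nan")
    return smallest_root(lhs, max(eta, alpha), hi)
```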
Theorem 2.
Let $F$ and $G$ be nonlinear operators with the specified properties. Assume that:
  • the linear operator $A_0$, where $x_{-1}, x_0 \in D$, is invertible;
  • Equations (8)–(11) hold;
  • there exist numbers $\eta > 0$, $\gamma$, and $R > 0$ such that
    $$\|A_0^{-1}(F(x_0) + G(x_0))\| \le \eta, \quad \|x_0 - x_{-1}\| < R, \quad 0 < \varepsilon_1^0 + \varepsilon_2^0 < 1,$$
    $$\gamma = \frac{\varepsilon_1 + \varepsilon_2}{1 - (\varepsilon_1^0 + \varepsilon_2^0)} < 1, \quad \frac{\eta}{1 - \gamma} < R, \quad B(x_0, 3R) \subseteq D.$$
Then the sequence $\{x_n\}_{n \ge 0}$ generated by Equation (2) is well defined, remains in $B(x_0, R)$, and converges to a unique solution $x^* \in \overline{B(x_0, R)}$ of Equation (1). Moreover, the following inequality holds for each $n \ge 0$:
$$\|x_n - x^*\| \le \frac{\gamma^n}{1 - \gamma}\,\eta.$$
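The estimate in Theorem 2 is a plain geometric-series bound and is easy to evaluate once the $\varepsilon$-constants are known. A small sketch, assuming these constants have already been computed for the problem at hand:

```python
# A priori error bound of Theorem 2: ||x_n - x*|| <= gamma^n / (1 - gamma) * eta.
def apriori_bound(eps10, eps20, eps1, eps2, eta, n):
    """Return gamma and the bound on ||x_n - x*|| for iterate n."""
    if not (0 < eps10 + eps20 < 1):
        raise ValueError("condition 0 < eps_1^0 + eps_2^0 < 1 fails")
    gamma = (eps1 + eps2) / (1 - (eps10 + eps20))
    if not (gamma < 1):
        raise ValueError("contraction condition gamma < 1 fails")
    return gamma, gamma**n / (1 - gamma) * eta
```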
Notice that, in practice, one possibility is that $\varepsilon_1^0$ and $\varepsilon_2^0$ may be functions of $\|x - x_0\|$, that is,
$$\varepsilon_1^0 : [0, \infty) \to [0, \infty) \quad \text{and} \quad \varepsilon_2^0 : [0, \infty) \to [0, \infty)$$
nondecreasing and continuous functions. Suppose that the equation
$$\varepsilon_1^0(u) + \varepsilon_2^0(u) = 1$$
has a minimal positive solution $\rho$. Then the set $D_0$ can be defined as $D_0 = D \cap B(x_0, \rho)$. Moreover, in this case we have
$$\|A_0^{-1}(A_0 - A_n)\| \le \varepsilon_1^0 + \varepsilon_2^0 < 1,$$
so $\|A_n^{-1}A_0\| \le \dfrac{1}{1 - (\varepsilon_1^0 + \varepsilon_2^0)}$. However, $D_0$ can also be defined in other ways, depending on the construction of $F$ and $G$.
Let
$$\omega_1^0(\|x - y\|) = 2\ell_0\|x - y\|, \quad \omega_2^0\big(\|x - u\|, \|y - v\|\big) = p_0\big(\|x - u\| + \|y - v\|\big),$$
$$\omega_1(\|x - y\|) = 2\ell\|x - y\|, \quad \omega_2\big(\|x - u\|, \|y - v\|\big) = p\big(\|x - u\| + \|y - v\|\big).$$
Then we obtain from Theorem 1 the convergence analysis of the method in Equation (2) under Lipschitz conditions.
Corollary 1.
Let $F$ and $G$ be nonlinear operators with the specified properties. Assume that:
  • the linear operator $A_0$, where $x_{-1}, x_0 \in D$, is invertible;
  • there exist numbers $\eta > 0$ and $\alpha > 0$ such that Equation (12) is satisfied;
  • the Lipschitz conditions are fulfilled for each $x, y \in D$:
    $$\|A_0^{-1}(F'(x) - F'(x_0))\| \le 2\ell_0\|x - x_0\|,$$
    $$\|A_0^{-1}\big(G(x; y) - G(2x_0 - x_{-1}; x_{-1})\big)\| \le p_0\big(\|x - (2x_0 - x_{-1})\| + \|y - x_{-1}\|\big),$$
    and for each $x, y, u, v \in D_0$:
    $$\|A_0^{-1}(F'(x) - F'(y))\| \le 2\ell\|x - y\|,$$
    $$\|A_0^{-1}(G(x; y) - G(u; v))\| \le p\big(\|x - u\| + \|y - v\|\big);$$
  • the equation
    $$s\left(1 - \frac{m}{1 - 2\ell_0 s - p_0(4s + 2\alpha)}\right) - \eta = 0,$$
    where $m = \ell\eta + \max\{p(\eta + 2\alpha), 3p\eta\}$, has at least one positive solution greater than $\eta$ and $\alpha$; denote by $R$ the smallest such solution;
  • $2\ell_0 R + p_0(4R + 2\alpha) < 1$, $M = \dfrac{m}{1 - 2\ell_0 R - p_0(4R + 2\alpha)} < 1$, and $B(x_0, 3R) \subseteq D$.
Then the sequence $\{x_n\}_{n \ge 0}$ generated by Equation (2) is well defined, remains in $B(x_0, R)$, and converges to a unique solution $x^* \in \overline{B(x_0, R)}$ of Equation (1).
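In the Lipschitz case, the majorizing equation above is an explicit scalar equation, so $R$ and $M$ can be computed directly. A hedged sketch, reusing `smallest_root` from the snippet after Theorem 1 (the constants $\ell_0$, $p_0$, $\ell$, $p$ are problem data; here $\Phi = 1/2$, since $h(t) = t$):

```python
# Radius R and contraction factor M for the Lipschitz case of Corollary 1.
def lipschitz_radius(l0, p0, l, p, eta, alpha, hi=1.0):
    m = l * eta + max(p * (eta + 2 * alpha), 3 * p * eta)  # Phi*omega1(eta) = l*eta
    def lhs(s):
        d = 1.0 - 2 * l0 * s - p0 * (4 * s + 2 * alpha)
        return s * (1.0 - m / d) - eta if d > 0 else float("nan")
    R = smallest_root(lhs, max(eta, alpha), hi)            # defined earlier
    if R is None:
        return None
    M = m / (1.0 - 2 * l0 * R - p0 * (4 * R + 2 * alpha))
    return R, M
```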
Remark 1.
The conditions in [8] corresponding to Equations (6) and (7) are given for each $x, y, u, v \in D$ by
$$\|A_0^{-1}(F'(x) - F'(y))\| \le \omega_1^1(\|x - y\|),$$
$$\|A_0^{-1}(G(x; y) - G(u; v))\| \le \omega_2^1\big(\|x - u\|, \|y - v\|\big).$$
Notice that $\omega_1 = \omega_1(\omega_1^0)$ and $\omega_2 = \omega_2(\omega_2^0)$, i.e., they are functions of $\omega_1^0$ and $\omega_2^0$, and since $D_0 \subseteq D$,
$$\omega_1^0(t) \le \omega_1^1(t), \tag{17}$$
$$\omega_2^0(s, t) \le \omega_2^1(s, t), \tag{18}$$
$$\omega_1(t) \le \omega_1^1(t), \tag{19}$$
$$\omega_2(s, t) \le \omega_2^1(s, t), \tag{20}$$
$$m \le m_1, \tag{21}$$
$$\varepsilon_1^0 \le \varepsilon_1^1, \quad \varepsilon_2^0 \le \varepsilon_2^1, \quad \varepsilon_1 \le \varepsilon_1^1, \quad \varepsilon_2 \le \varepsilon_2^1, \tag{22}$$
$$\ell_0 \le \ell_1, \tag{23}$$
$$p_0 \le p_1, \tag{24}$$
where $m_1 = \Phi\,\omega_1^1(\eta) + \max\{\omega_2^1(\eta + \alpha, \alpha), \omega_2^1(2\eta, \eta)\}$ and $M_1 = \dfrac{m_1}{1 - \omega_1^1(R_1) - \omega_2^1(3R_1 + \alpha, R_1 + \alpha)}$.
It is easy to see that if $R_1 \le R$, then $M_1 \le M$, and if $R_1 \ge R$, then $M_1 \ge M$.
In the Lipschitz case, the functions $\omega_1^1$ and $\omega_2^1$ have the form
$$\omega_1^1(\|x - y\|) = 2\ell_1\|x - y\|, \quad \omega_2^1\big(\|x - u\|, \|y - v\|\big) = p_1\big(\|x - u\| + \|y - v\|\big).$$
It follows from the above that we obtain the following improvements:
1. Weaker sufficient convergence criteria, since
$$\omega_1^0(\|x_0 - x_n\|) + \omega_2^0\big(\|2x_0 - x_{-1} - 2x_n + x_{n-1}\|, \|x_{-1} - x_{n-1}\|\big) \le \omega_1^1(\|x_0 - x_n\|) + \omega_2^1\big(\|2x_0 - x_{-1} - 2x_n + x_{n-1}\|, \|x_{-1} - x_{n-1}\|\big)$$
and
$$\Phi\,\omega_1(\|x_n - x_{n-1}\|) + \omega_2\big(\|2x_{n-1} - x_{n-2} - x_n\|, \|x_{n-1} - x_{n-2}\|\big) \le \Phi\,\omega_1^1(\|x_n - x_{n-1}\|) + \omega_2^1\big(\|2x_{n-1} - x_{n-2} - x_n\|, \|x_{n-1} - x_{n-2}\|\big),$$
so that $M_n \le M_n^1$ and $M_n\|x_n - x_{n-1}\| \le M_n^1\|x_n - x_{n-1}\|$, where
$$M_n = \frac{\Phi\,\omega_1(\|x_n - x_{n-1}\|) + \omega_2\big(\|2x_{n-1} - x_{n-2} - x_n\|, \|x_{n-1} - x_{n-2}\|\big)}{1 - \omega_1^0(\|x_0 - x_n\|) - \omega_2^0\big(\|2x_0 - x_{-1} - 2x_n + x_{n-1}\|, \|x_{-1} - x_{n-1}\|\big)} \le M,$$
$$M_n^1 = \frac{\Phi\,\omega_1^1(\|x_n - x_{n-1}\|) + \omega_2^1\big(\|2x_{n-1} - x_{n-2} - x_n\|, \|x_{n-1} - x_{n-2}\|\big)}{1 - \omega_1^1(\|x_0 - x_n\|) - \omega_2^1\big(\|2x_0 - x_{-1} - 2x_n + x_{n-1}\|, \|x_{-1} - x_{n-1}\|\big)} \le M_1,$$
but not necessarily vice versa unless $\omega_1^1 = \omega_1^0$ and $\omega_2^1 = \omega_2^0$.
2. Fewer iterates are required to achieve a desired error tolerance on $\|x_{n+1} - x_n\|$.
3. Better information on the location of the solution $x^*$.
Notice that $(\omega_1^0, \omega_1)$ and $(\omega_2^0, \omega_2)$ are special cases of the old functions $\omega_1^1$ and $\omega_2^1$, respectively, so no additional information or computational effort is required to obtain these improvements. This technique of using the restricted convergence region can be used to extend the applicability of other iterative methods along the same lines. Finally, the Lipschitz functions and parameters can become at least as small if $D_0$ is replaced by $D_1 := D \cap B(x_1, R_0 - b)$, $b = \max\{\eta, \alpha\}$; notice that $D_1 \subseteq D_0$. The results can then be adjusted in this setting. Numerical examples where Equations (17)–(24) hold as strict inequalities can be found in [1,6].

3. Numerical Results

In this section, we test the old and the new convergence criteria.
Let $X = Y = \mathbb{R}$. In this case $\|x\| = |x|$ for $x \in X$ or $x \in Y$, $D = (a, b)$, and $D_0 = (a_0, b_0)$. Let us define the function $F + G : \mathbb{R} \to \mathbb{R}$, where
$$F(x) = e^{x - 0.5} + x^3 - 1.3, \quad G(x) = 0.2x\,|x^2 - 2|.$$
The exact solution of $F(x) + G(x) = 0$ is $x^* = 0.5$. Let $D = (0, 1.4)$; note that $|x^2 - 2| = 2 - x^2$ on $D$. Then we have
$$F'(x) = e^{x - 0.5} + 3x^2,$$
$$G(x, y) = \frac{0.2x(2 - x^2) - 0.2y(2 - y^2)}{x - y} = 0.2(2 - x^2 - xy - y^2),$$
$$A_0 = e^{x_0 - 0.5} + 3x_0^2 + 0.2(2 - x_{-1}^2 - x_{-1}x_0 - x_0^2),$$
and
$$|A_0^{-1}(F'(x) - F'(y))| \le \frac{e^{b - 0.5} + 3|x + y|}{|A_0|}\,|x - y|,$$
$$|A_0^{-1}(G(x, y) - G(u, v))| \le \frac{0.2}{|A_0|}\big(|u + x + y|\,|u - x| + |v + y + u|\,|v - y|\big).$$
In view of this, we can take
$$\omega_1^0(|x - x_0|) = \frac{e^{b - 0.5} + 3|1.4 + x_0|}{|A_0|}\,|x - x_0|,$$
$$\omega_2^0\big(|x - (2x_0 - x_{-1})|, |y - x_{-1}|\big) = \frac{0.2}{|A_0|}\Big(\big(|2x_0 - x_{-1}| + 2b\big)\,|x - (2x_0 - x_{-1})| + (2x_0 + b)\,|y - x_{-1}|\Big),$$
$$\omega_1(|x - y|) = \frac{e^{b_0 - 0.5} + 6b_0}{|A_0|}\,|x - y|, \quad \omega_2\big(|2y - x - u|, |x - v|\big) = \frac{0.6\,b_0}{|A_0|}\big(|2y - x - u| + |x - v|\big),$$
$$\omega_1^1(|x - y|) = \frac{e^{b - 0.5} + 6b}{|A_0|}\,|x - y|, \quad \omega_2^1\big(|x - u|, |y - v|\big) = \frac{0.6\,b}{|A_0|}\big(|x - u| + |y - v|\big).$$
Let $x_0 = 0.55$ and $x_{-1} = 0.551$. Then we get $\alpha = 0.001$, $\eta \approx 0.0479$, $R_0 \approx 0.2011$, $m \approx 0.1431$, $m_1 \approx 0.1750$, and $D_0 = D \cap B(x_0, 3R_0) = (0, 1.153)$. To get the radius of convergence, we solve Equation (13) and the analogous equation from [8]. Each such equation has two positive solutions, and here the smallest solutions satisfy the conditions of the corresponding theorems. So we get $R \approx 0.0602$, $R_1 \approx 0.0714$, $M \approx 0.2043$, $M_1 \approx 0.3283$, $B(x_0, 3R) = (0.3693, 0.7307) \subset D$, and $B(x_0, 3R_1) = (0.3359, 0.7641) \subset D$. The error estimates are given in Table 1. For the error $|x_{n+1} - x_n|$, $n \ge 1$, it holds that
$$|x_{n+1} - x_n| \le M_n|x_n - x_{n-1}| \le M|x_n - x_{n-1}|$$
and
$$|x_{n+1} - x_n| \le M_n^1|x_n - x_{n-1}| \le M_1|x_n - x_{n-1}|.$$
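As a sanity check on this first example, the iteration in Equation (2) can be run directly. The following sketch is an illustration, not the authors' code; the printed step sizes for $n \ge 1$ should match the first column of Table 1 up to rounding.

```python
# First numerical example: F(x) = exp(x - 0.5) + x^3 - 1.3,
# G(x) = 0.2 * x * |x^2 - 2|, exact solution x* = 0.5,
# starting points x_{-1} = 0.551, x_0 = 0.55.
import math

F = lambda x: math.exp(x - 0.5) + x**3 - 1.3
dF = lambda x: math.exp(x - 0.5) + 3 * x**2           # F'(x)
G = lambda x: 0.2 * x * abs(x**2 - 2)

x_prev, x0 = 0.551, 0.55
for n in range(5):
    u = 2 * x0 - x_prev                                # Kurchatov point
    A = dF(x0) + (G(u) - G(x_prev)) / (u - x_prev)     # A_n from Equation (2)
    x_next = x0 - (F(x0) + G(x0)) / A
    print(f"n = {n}: |x_(n+1) - x_n| = {abs(x_next - x0):.4e}")
    x_prev, x0 = x0, x_next
```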
Let $x_0 = 0.53$ and $x_{-1} = 0.6$. Then we get $\alpha = 0.07$, $\eta \approx 0.0293$, $R_0 \approx 0.1892$, $m \approx 0.1114$, $m_1 \approx 0.1431$, and $D_0 = D \cap B(x_0, 3R_0) = (0, 1.0975)$. In this case, only the largest solutions satisfy the conditions of the corresponding theorems. So we get $R \approx 0.1624$, $R_1 \approx 0.1109$, $M \approx 0.8199$, and $M_1 \approx 0.7362$. Moreover, $B(x_0, R) = (0.3676, 0.6924) \subset D$ and $B(x_0, R_1) = (0.4191, 0.6409) \subset D$. The error estimates are given in Table 2.
Thus, more accurate error estimates are obtained, since $M_n|x_n - x_{n-1}| \le M_n^1|x_n - x_{n-1}|$, although $M|x_n - x_{n-1}| \le M_1|x_n - x_{n-1}|$ in the first case and $M|x_n - x_{n-1}| \ge M_1|x_n - x_{n-1}|$ in the second.

Author Contributions

Conceptualization, editing, I.K.A.; investigation, H.Y. and S.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Argyros, I.K.; Magrenán, Á.A. A Contemporary Study of Iterative Methods; Elsevier (Academic Press): New York, NY, USA, 2018.
  2. Argyros, I.K.; Magrenán, Á.A. Iterative Methods and Their Dynamics with Applications: A Contemporary Study; CRC Press: Boca Raton, FL, USA, 2017.
  3. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970.
  4. Kurchatov, V.A. On a method of linear interpolation for the solution of functional equations. Dokl. Akad. Nauk SSSR Ser. Math. Phys. 1971, 198, 524–526.
  5. Hernandez, M.A.; Rubio, M.J. A uniparametric family of iterative processes for solving nondifferentiable operators. J. Math. Anal. Appl. 2002, 275, 821–834.
  6. Argyros, I.K.; Shakhno, S.; Yarmola, H. Two-Step Solver for Nonlinear Equations. Symmetry 2019, 11, 128.
  7. Shakhno, S.; Babjak, A.-V.; Yarmola, H. Combined Newton–Potra method for solving nonlinear operator equations. J. Comput. Appl. Math. 2015, 3, 170–178.
  8. Shakhno, S.M.; Yarmola, H.P. Convergence of the Newton–Kurchatov Method Under Weak Conditions. J. Math. Sci. 2019, 243, 1–10.
  9. Shakhno, S.; Yarmola, H. On the two-step method for solving nonlinear equations with nondifferentiable operator. Proc. Appl. Math. Mech. 2012, 12, 617–618.
  10. Argyros, I.K.; Ren, H. On the convergence of a Newton-like method under weak conditions. Commun. Korean Math. Soc. 2011, 26, 575–584.
  11. Argyros, I.K.; Hilout, S. Newton–Kantorovich approximations under weak continuity conditions. J. Appl. Math. Comput. 2011, 37, 361–375.
  12. Zabrejko, P.P.; Nguen, D.F. The majorant method in the theory of Newton–Kantorovich approximations and the Pták error estimates. Numer. Funct. Anal. Optim. 1987, 9, 671–684.
  13. Argyros, I.K.; Shakhno, S. Extended Local Convergence for the Combined Newton–Kurchatov Method Under the Generalized Lipschitz Conditions. Mathematics 2019, 7, 207.
  14. Iakymchuk, R.; Shakhno, S.; Yarmola, H. Combined Newton–Kurchatov method for solving nonlinear operator equations. PAMM 2016, 16, 719–720.
  15. Shakhno, S.M. Combined Newton–Kurchatov method under the generalized Lipschitz conditions for the derivatives and divided differences. J. Comput. Appl. Math. 2015, 2, 78–89.
  16. Shakhno, S.M.; Yarmola, H.P. Two-point method for solving nonlinear equation with nondifferentiable operator. Mat. Stud. 2009, 36, 213–220.
  17. Shakhno, S.M. On the difference method with quadratic convergence for solving nonlinear operator equations. Mat. Stud. 2006, 26, 105–110.
Table 1. Results for $\varepsilon = 10^{-15}$.

| $n$ | $\lvert x_{n+1} - x_n \rvert$ | $M_n \lvert x_n - x_{n-1} \rvert$ | $M \lvert x_n - x_{n-1} \rvert$ | $M_n^1 \lvert x_n - x_{n-1} \rvert$ | $M_1 \lvert x_n - x_{n-1} \rvert$ |
|---|---|---|---|---|---|
| 1 | $2.0602 \times 10^{-3}$ | $6.8518 \times 10^{-3}$ | $9.7951 \times 10^{-3}$ | $9.1409 \times 10^{-3}$ | $1.5738 \times 10^{-2}$ |
| 2 | $3.1428 \times 10^{-6}$ | $8.9551 \times 10^{-5}$ | $4.2097 \times 10^{-4}$ | $1.1958 \times 10^{-4}$ | $6.7636 \times 10^{-4}$ |
| 3 | $7.0617 \times 10^{-12}$ | $5.2824 \times 10^{-9}$ | $6.4218 \times 10^{-7}$ | $7.0457 \times 10^{-9}$ | $1.0318 \times 10^{-6}$ |
| 4 | $0$ | $1.8032 \times 10^{-17}$ | $1.4429 \times 10^{-12}$ | $2.4050 \times 10^{-17}$ | $2.3184 \times 10^{-12}$ |
Table 2. Results for $\varepsilon = 10^{-15}$.

| $n$ | $\lvert x_{n+1} - x_n \rvert$ | $M_n \lvert x_n - x_{n-1} \rvert$ | $M \lvert x_n - x_{n-1} \rvert$ | $M_n^1 \lvert x_n - x_{n-1} \rvert$ | $M_1 \lvert x_n - x_{n-1} \rvert$ |
|---|---|---|---|---|---|
| 1 | $7.3779 \times 10^{-4}$ | $3.1227 \times 10^{-3}$ | $2.3990 \times 10^{-2}$ | $4.2444 \times 10^{-3}$ | $2.1541 \times 10^{-2}$ |
| 2 | $3.9991 \times 10^{-7}$ | $1.6564 \times 10^{-5}$ | $6.0489 \times 10^{-4}$ | $2.2443 \times 10^{-5}$ | $5.4314 \times 10^{-4}$ |
| 3 | $1.1419 \times 10^{-13}$ | $2.1230 \times 10^{-10}$ | $3.2787 \times 10^{-7}$ | $2.8737 \times 10^{-10}$ | $2.9440 \times 10^{-7}$ |
| 4 | $0$ | $3.2808 \times 10^{-20}$ | $9.3616 \times 10^{-14}$ | $4.4410 \times 10^{-20}$ | $8.4060 \times 10^{-14}$ |
