
On the Convergence of a Kurchatov-Type Method for Solving Nonlinear Equations and Its Applications

Ioannis K. Argyros, Stepan Shakhno and Halyna Yarmola
1 Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
2 Department of Theory of Optimal Processes, Ivan Franko National University of Lviv, Universytetska Str. 1, 79000 Lviv, Ukraine
3 Department of Computational Mathematics, Ivan Franko National University of Lviv, Universytetska Str. 1, 79000 Lviv, Ukraine
* Author to whom correspondence should be addressed.
AppliedMath 2024, 4(4), 1539-1554; https://doi.org/10.3390/appliedmath4040082
Submission received: 8 November 2024 / Revised: 5 December 2024 / Accepted: 5 December 2024 / Published: 19 December 2024

Abstract

A local and a semi-local convergence analysis are presented for the Kurchatov-type method for solving nonlinear equations numerically in a Banach space. The method depends on a real parameter. By specializing the parameter, we obtain methods already studied in the literature under different types of conditions, such as Newton's, Steffensen's, and Kurchatov's methods, the Secant method, and others. This study is carried out under generalized conditions for first-order divided differences, as well as first-order derivatives. In both the local and the semi-local case, the error estimates, the radii of the region of convergence, and the regions of the solution's uniqueness are determined. A numerical majorizing sequence is constructed for studying semi-local convergence. The approach of restricted convergence regions is used to develop the convergence analysis of the considered method. The new approach allows a comparison of the convergence of different methods under a uniform set of conditions. In particular, the assumption of generalized continuity used to control the divided difference provides more precise knowledge of the location of the solution as well as tighter error estimates. Moreover, the generality of the approach makes it useful for studying other methods in an analogous way. Numerical examples demonstrate the applicability of our theoretical results.

1. Introduction

Many mathematical models that describe physical or technological processes require solving nonlinear problems. These can include systems of nonlinear algebraic or transcendental equations, nonlinear integral equations, nonlinear boundary value problems for ordinary differential equations, and more complex problems described by nonlinear partial differential equations. Generally, these problems are represented by an equation of the form [1,2,3]
$$F(z) = 0. \qquad (1)$$
Here, $F : D \subseteq B_1 \to B_2$ is a nonlinear operator, $B_1$ and $B_2$ denote Banach spaces, and $D$ is an open and convex set. Recall that a Banach space is a complete normed linear space, that is, a linear space equipped with a norm in which every Cauchy sequence converges [4]. Moreover, the operator $F : D \to B_2$ is said to be Fréchet-differentiable at $x \in D$ if there exists a bounded linear operator $A$ from $B_1$ into $B_2$ such that
$$\lim_{h \to 0} \frac{1}{\|h\|}\,\|F(x + h) - F(x) - A(h)\| = 0.$$
The linear operator $A$ is denoted by $F'(x)$ and is called the Fréchet derivative of $F$ at $x$ [4]. Furthermore, let $\{x_n\}$ be a sequence in $B_1$. Then, a sequence $\{m_n\} \subset [0, \infty)$ for which
$$\|x_{n+1} - x_n\| \le m_{n+1} - m_n \quad \text{for each } n = 0, 1, 2, \ldots$$
holds is called a majorizing sequence for $\{x_n\}$ [4].
It is very rare to find an exact solution to such problems. Therefore, an important task is the development and study of numerical methods for solving (1). Nonlinear problems are usually solved by iterative methods, in particular, by methods with derivatives and methods with divided differences.
The most widely used method for solving the nonlinear Equation (1) is Newton's method, which has quadratic convergence order [1,2]:
$$z_0 \in D, \quad z_{n+1} = z_n - [F'(z_n)]^{-1} F(z_n), \quad n \ge 0. \qquad (2)$$
However, it can be applied only when the operator $F$ is differentiable. If there are difficulties with the calculation of the derivative, then we can approximate the derivative by first-order divided differences [3,5,6].
Definition 1.
Let $F$ be a nonlinear operator defined on a subset $D$ of a Banach space $B_1$ with values in a Banach space $B_2$, and let $x, y$ be two points of $D$. A linear operator from $B_1$ to $B_2$, denoted by $[x, y; F]$, which satisfies the following conditions is called a first-order divided difference of $F$ at the points $x$ and $y$:
(1) For all points $x, y \in D$ with $x \ne y$,
$$[x, y; F](x - y) = F(x) - F(y);$$
(2) If there exists a Fréchet derivative $F'(x)$, then
$$[x, x; F] = F'(x).$$
For Fréchet-differentiable operators, the following equality holds:
$$[x, y; F] = \int_0^1 F'(x + t(y - x))\, dt.$$
One of the methods with divided differences is the Secant method [3,7]:
$$z_{-1}, z_0 \in D, \quad z_{n+1} = z_n - [z_n, z_{n-1}; F]^{-1} F(z_n), \quad n \ge 0, \qquad (3)$$
with a convergence order equal to $\frac{1 + \sqrt{5}}{2}$. The method of linear interpolation (the Kurchatov method), like Newton's method, has quadratic convergence order and is described by the formula [1,6]
$$z_{-1}, z_0 \in D, \quad z_{n+1} = z_n - [2z_n - z_{n-1}, z_{n-1}; F]^{-1} F(z_n), \quad n \ge 0. \qquad (4)$$
The order of convergence of method (4) is theoretically obtained under the assumption that the first- and second-order divided differences of the nonlinear operator satisfy the classical Lipschitz conditions. Derivative-free methods are often employed to solve nonlinear problems involving a non-differentiable operator.
In this article, we study the uniparametric family of Kurchatov-type methods
$$\lambda \in \mathbb{R}, \quad z_{-1}, z_0 \in D, \quad x_n = (1 - \lambda) z_n + \lambda z_{n-1}, \quad y_n = (1 + \lambda) z_n - \lambda z_{n-1},$$
$$A_n = [x_n, y_n; F], \quad \text{and} \quad z_{n+1} = z_n - A_n^{-1} F(z_n), \quad n \ge 0. \qquad (5)$$
It should be noted that setting $\lambda = 1$ in method (5) yields the Kurchatov method (4). If $\lambda = 0$ and the operator $F$ is differentiable, then we obtain Newton's method (2). Other choices of $\lambda$ are possible, leading to other methods [1,3,6].
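To make the choice of $\lambda$ concrete, the following is a minimal sketch of iteration (5) for a scalar equation, assuming the standard scalar divided difference $[x, y; f] = (f(x) - f(y))/(x - y)$; the function names, tolerance, and test equation are illustrative and not part of the original presentation. For $\lambda = 0$ the points $x_n$ and $y_n$ coincide and this quotient degenerates, so one would use $f'(z_n)$ directly, as in Newton's method (2).

```python
# Minimal sketch of the Kurchatov-type iteration (5) for a scalar equation f(z) = 0.
# The divided difference [x, y; f] = (f(x) - f(y)) / (x - y) assumes x != y,
# which holds here whenever lam != 0 and z_n != z_{n-1}.

def divided_difference(f, x, y):
    """First-order divided difference [x, y; f] of a scalar function."""
    return (f(x) - f(y)) / (x - y)

def kurchatov_type(f, z_prev, z, lam, tol=1e-12, max_iter=50):
    """Run z_{n+1} = z_n - [x_n, y_n; f]^{-1} f(z_n) with
    x_n = (1 - lam) z_n + lam z_{n-1}, y_n = (1 + lam) z_n - lam z_{n-1}."""
    for _ in range(max_iter):
        x = (1 - lam) * z + lam * z_prev      # first substep of (5)
        y = (1 + lam) * z - lam * z_prev      # second substep of (5)
        A = divided_difference(f, x, y)       # A_n = [x_n, y_n; f]
        z_prev, z = z, z - f(z) / A           # third substep of (5)
        if abs(z - z_prev) <= tol:            # stop when the step size is small
            break
    return z

# lam = 1 recovers the Kurchatov method (4); the test equation z^3 - 1 = 0 is Example 4 below.
print(kurchatov_type(lambda z: z**3 - 1.0, 1.3, 1.2, lam=1.0))   # ~1.0
```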
Motivation for the paper.
There are certain restrictions limiting the applicability of (5). This method was proposed in [6]. Its local convergence was studied under the condition that $F \in C^4(D)$, while the semi-local convergence was analyzed for a non-differentiable integral operator $F$. Let us look at a toy example. Choose $D = (-1.5, 1.5)$ and define the function $f : D \to \mathbb{R}$ as
$$f(t) = \begin{cases} \alpha_1 t^3 \log t^4 + \alpha_2 t^5 + \alpha_3 t^4, & t \ne 0, \\ 0, & t = 0, \end{cases}$$
where $\alpha_1, \alpha_2, \alpha_3 \in \mathbb{R}$ satisfy $\alpha_1 \ne 0$ and $\alpha_2 + \alpha_3 = 0$. It follows from the definition of $f$ that $f^{(3)}(t)$ is not continuous at $t = 0 \in D$. Consequently, the results in [6] cannot assure that $\lim_{n \to \infty} z_n = z^*$, where $z^*$ denotes a solution of the equation $f(t) = 0$. However, method (5) converges to the solution $z^* = 1 \in D$ if, e.g., $z_{-1} = 0.95$, $z_0 = 1.05$, $\alpha_1 = 1$, $\alpha_2 = 1$, and $\alpha_3 = -1$. These observations indicate that the conditions in [6] can be replaced by new ones that are weaker.
The convergence analysis in [6] uses conditions on $F \in C^{(4)}(D)$, but such derivatives do not appear in the method.
Novelty of the paper.
The new local and semi-local convergence analyses are shown using conditions only on the operators which are present in method (5), that is to say, $F$ and its divided difference of order one. The analysis is valid in a Banach space for operators more general than an integral equation. The generalized continuity used to control the divided difference allows for tighter estimates of $\|z_n - z^*\|$ as well as better knowledge of the location of the solution $z^*$.
As can be seen in Section 2 and Section 3, the developed approach is very general. Thus, it can be used to extend the applicability of other methods along the same lines [4,8,9,10,11,12,13,14,15]. Another advantage of this approach is that a comparison between different methods studied under different conditions becomes possible.
This paper is structured as follows: We conduct a local and a semi-local convergence analysis of method (5) under generalized conditions for first-order divided differences, as well as first-order derivatives, using the approach of restricted convergence regions. These results are presented in Section 2 and Section 3, respectively. In both cases, the uniqueness regions for the solution of the nonlinear problem are obtained. Furthermore, in Section 4, we present numerical examples to demonstrate the reliability of the theoretical results. The concept of our investigation is succinctly represented in Figure 1.

2. Local Convergence

The results of the local analysis are important, since they illustrate how difficult it is to select initial points.
Certain real functions play a role in the local convergence analysis of method (5). The notations $U(x, r)$ and $U[x, r]$ denote the open and closed balls, respectively, centered at the point $x$ with radius $r$. Let us set $S = [0, \infty)$.
Define the functions $g_1 : S \to S$ and $g_2 : S \to S$ as
$$g_1(t) = (|1 - \lambda| + |\lambda|)\, t \quad \text{and} \quad g_2(t) = (|1 + \lambda| + |\lambda|)\, t. \qquad (6)$$
Suppose the following:
(C1) There exists a function $\omega_0 : S \times S \to S$, continuous on $S \times S$ and strictly increasing in both variables, such that the equation $\omega_0(g_1(t), g_2(t)) - 1 = 0$ has at least one positive root. We denote by $\rho_0$ the smallest such root and set $S_0 = [0, \rho_0)$.
(C2) There exists a function $\omega : S_0 \times S_0 \to S$, continuous on $S_0 \times S_0$ and strictly increasing in both variables, such that for the function $h : S_0 \to S$ given by
$$h(t) = \frac{\omega(t + g_1(t), g_2(t))}{1 - \omega_0(g_1(t), g_2(t))}, \qquad (7)$$
the equation $h(t) - 1 = 0$ has at least one root in the interval $(0, \rho_0)$. We denote by $r^*$ the smallest such root and set $S_1 = [0, r^*)$.
It follows from these definitions that for each $t \in S_1$,
$$0 \le \omega_0(g_1(t), g_2(t)) < 1 \qquad (8)$$
and
$$0 \le h(t) < 1. \qquad (9)$$
The parameter r * is shown to be a radius of convergence for method (5) in Theorem 1.
Define the parameter
$$\rho^* = \max\{|1 - \lambda| + |\lambda|,\; |1 + \lambda| + |\lambda|\}\, r^*. \qquad (10)$$
There is a connection between the real functions ω 0 and ω and the operators in method (5).
(C3) There exist a solution $z^* \in D$ of the equation $F(z) = 0$ and an operator $L \in \mathcal{L}(B_1, B_2)$ such that $L^{-1} \in \mathcal{L}(B_2, B_1)$ and
$$\|L^{-1}([x, y; F] - L)\| \le \omega_0(\|x - z^*\|, \|y - z^*\|)$$
for each $x, y \in D$. Set $D_0 = D \cap U(z^*, \rho_0)$.
(C4) $\|L^{-1}([x, y; F] - [z, z^*; F])\| \le \omega(\|x - z\|, \|y - z^*\|)$ for each $x, y, z \in D_0$.
(C5) $U(z^*, \rho^*) \subseteq D_0$.
Remark 1. 
(1) Some popular selections, although not necessarily the most flexible for the operator, are $L = I$ or $L = F'(\bar{x})$, or in particular $L = F'(z^*)$, where $\bar{x}$ is an auxiliary point. In the case of $L = F'(z^*)$, the solution $z^*$ is simple. However, this assumption is not made or implied by the conditions (C1)–(C5). Consequently, our results can be used to find solutions of multiplicity greater than one using method (5).
(2) The proof of Theorem 1 that follows shows that the condition (C4) can be replaced by
(C4′) $\|L^{-1}([x, y; F] - [z, z^*; F])\| \le \bar{\omega}(\|x - z\|, \|y - z^*\|)$ for each $x, y \in D_1$ and $z = z_{n+1} = z_n - A_n^{-1} F(z_n)$, $z \in U(z^*, \rho_0)$,
where $\bar{\omega}$ is as $\omega$. In this case, $\bar{\omega} \le \omega$ and the results are more precise. However, the condition (C4′) can be verified only in special cases.
Next, the local convergence of method (5) relies on the conditions ( C 1 ) ( C 5 ) and the preceding notation.
Theorem 1.
Suppose that the conditions (C1)–(C5) hold and choose $z_0, z_{-1} \in U(z^*, r^*)$ such that $z_0 \ne z_{-1}$. Then, for $x_0 = (1 - \lambda) z_0 + \lambda z_{-1}$ and $y_0 = (1 + \lambda) z_0 - \lambda z_{-1}$, the sequence $\{z_n\}$ generated by method (5) is well defined in $U(z^*, r^*)$ for each $n = 0, 1, \ldots$ and converges to the solution $z^* \in U(z^*, r^*)$ of the equation. Moreover, the following error estimates hold for each $n = 0, 1, \ldots$:
$$\|z_{n+1} - z^*\| \le \frac{\omega(\|z_n - z^*\| + \|x_n - z^*\|, \|y_n - z^*\|)}{1 - \omega_0(\|x_n - z^*\|, \|y_n - z^*\|)}\, \|z_n - z^*\| < h(r^*)\, \|z_n - z^*\| = \|z_n - z^*\| < r^*. \qquad (11)$$
Proof. 
The estimate (11) is shown using mathematical induction. By hypothesis, $z_0, z_{-1} \in U(z^*, r^*)$. We can write, in turn, that
$$x_0 - z^* = (1 - \lambda) z_0 + \lambda z_{-1} - z^* = (1 - \lambda)(z_0 - z^*) + \lambda (z_{-1} - z^*),$$
so
$$\|x_0 - z^*\| \le |1 - \lambda|\, \|z_0 - z^*\| + |\lambda|\, \|z_{-1} - z^*\| < (|1 - \lambda| + |\lambda|)\, r^* \le \rho^*. \qquad (12)$$
Similarly, we obtain
$$\|y_0 - z^*\| < (|1 + \lambda| + |\lambda|)\, r^* \le \rho^*. \qquad (13)$$
Thus, according to condition (C5), we have $x_0, y_0 \in U[z^*, \rho^*]$. Notice that $y_0 \ne x_0$, since $z_0 \ne z_{-1}$. Thus, the divided difference $A_0$ is well defined. Next, we show that $A_0 = [x_0, y_0; F]$ is invertible.
Using (6), (8), (12), and (13) and condition (C3), we determine, in turn, that
$$\|L^{-1}(A_0 - L)\| \le \omega_0(\|x_0 - z^*\|, \|y_0 - z^*\|) \le \omega_0(\rho^*, \rho^*) < 1. \qquad (14)$$
The Banach lemma on invertible linear operators [4] and (14) imply that $A_0^{-1} \in \mathcal{L}(B_2, B_1)$ and
$$\|A_0^{-1} L\| \le \frac{1}{1 - \omega_0(\|x_0 - z^*\|, \|y_0 - z^*\|)}. \qquad (15)$$
Moreover, the iterate $z_1$ is well defined by the third substep of method (5) for $n = 0$. We need to show that $z_1 \in U(z^*, r^*)$ and that (11) holds for $n = 0$. The third substep of method (5) gives
$$z_1 - z^* = z_0 - z^* - A_0^{-1} F(z_0) = A_0^{-1}(A_0 - [z_0, z^*; F])(z_0 - z^*). \qquad (16)$$
According to (8)–(11), (15), and (16) and conditions (C3) and (C4), we determine, in turn, that
$$\|z_1 - z^*\| \le \|A_0^{-1} L\|\, \|L^{-1}(A_0 - [z_0, z^*; F])\|\, \|z_0 - z^*\| \le \frac{\omega(\|x_0 - z_0\|, \|y_0 - z^*\|)}{1 - \omega_0(\|x_0 - z^*\|, \|y_0 - z^*\|)}\, \|z_0 - z^*\| \le \frac{\omega(\|z_0 - z^*\| + \|x_0 - z^*\|, \|y_0 - z^*\|)}{1 - \omega_0(\|x_0 - z^*\|, \|y_0 - z^*\|)}\, \|z_0 - z^*\| < h(r^*)\, \|z_0 - z^*\| = \|z_0 - z^*\| < r^*,$$
showing (11) for $n = 0$ and that the iterate $z_1 \in U(z^*, r^*)$, where we used
$$\|x_0 - z_0\| \le \|x_0 - z^*\| + \|z_0 - z^*\|. \qquad (17)$$
The preceding calculations can be repeated by simply exchanging $z_{-1}, z_0, A_0$ with $z_{m-1}, z_m, A_m$, respectively, where $m$ is a natural number. So, we obtain
$$\|z_{m+1} - z^*\| \le \frac{\omega(\|x_m - z_m\|, \|y_m - z^*\|)}{1 - \omega_0(\|x_m - z^*\|, \|y_m - z^*\|)}\, \|z_m - z^*\| \le \frac{\omega(\|z_m - z^*\| + \|x_m - z^*\|, \|y_m - z^*\|)}{1 - \omega_0(\|x_m - z^*\|, \|y_m - z^*\|)}\, \|z_m - z^*\| < h(r^*)\, \|z_m - z^*\| = \|z_m - z^*\| < r^*, \qquad (18)$$
which completes the induction for (11) and also shows that the iterate $z_{m+1} \in U(z^*, r^*)$.
Finally, according to (18), there exists $\alpha \in [0, 1)$ such that
$$\|z_{m+1} - z^*\| \le \alpha\, \|z_m - z^*\| \le \alpha^{m+1}\, \|z_0 - z^*\| < r^*. \qquad (19)$$
Consequently, according to (19), we conclude that the iterate $z_{m+1} \in U(z^*, r^*)$ and $\lim_{m \to \infty} z_m = z^*$. □
A region is determined in the next result which contains only z * as a solution to the equation F ( z ) = 0 .
Proposition 1.
Suppose the following:
(a) The condition (C3) holds in $U(z^*, r_1)$ for some $r_1 > 0$.
(b) There exists $r_2 \ge r_1$ such that
$$\omega_0(r_2, 0) < 1. \qquad (20)$$
Set $D_1 = D \cap U[z^*, r_2]$.
Then, the only solution to the equation $F(z) = 0$ in the region $D_1$ is $z^*$.
Proof. 
Suppose that there exists $\tilde{z} \in D_1$ solving the equation $F(z) = 0$ with $\tilde{z} \ne z^*$. It follows that the divided difference $M = [\tilde{z}, z^*; F]$ is well defined. Then, according to (a) and (b), we obtain
$$\|L^{-1}(M - L)\| \le \omega_0(\|\tilde{z} - z^*\|, 0) \le \omega_0(r_2, 0) < 1. \qquad (21)$$
Then, according to (21) and the Banach lemma on invertible linear operators, we determine that $M^{-1} \in \mathcal{L}(B_2, B_1)$. Thus, from the identity
$$\tilde{z} - z^* = M^{-1}(F(\tilde{z}) - F(z^*)) = M^{-1}(0 - 0) = M^{-1}(0) = 0,$$
we determine that $\tilde{z} = z^*$. □
Remark 2.
Clearly, if all the conditions (C1)–(C5) hold in Proposition 1, then we can certainly choose $r_1 = r^*$.

3. Semi-Local Convergence

This analysis uses majorizing sequences [3,4] developed to control the iterates $\{z_n\}$.
The conditions and computations are similar to those of the local analysis of method (5), but the roles of $z^*$, $\omega_0$, and $\omega$ are exchanged with $z_0$, $v_0$, and $v$, respectively, where $v_0$ and $v$ are real functions.
Suppose the following:
(H1) There exists a function $v_0 : S \times S \to S$, continuous on $S \times S$ and nondecreasing in both variables, such that the equation $v_0(g_1(t), g_2(t)) - 1 = 0$ has at least one positive root. We denote by $R_0$ the smallest such root and set $S_2 = [0, R_0)$.
(H2) There exists a function $v : S_2 \times S_2 \to S$, continuous on $S_2 \times S_2$ and nondecreasing in both variables. Define the sequence $\{a_n\}$ for $a_{-1} = 0$, $a_0 \ge 0$, $a_1 \ge a_0$, and each $n = 0, 1, \ldots$ as
$$a_{n+2} = a_{n+1} + \frac{v(a_{n+1} - a_n + |\lambda|(a_n - a_{n-1}),\; |\lambda|(a_n - a_{n-1}))}{1 - v_0(|1 - \lambda|\, a_n + |\lambda|\, a_{n-1},\; |1 + \lambda|\, a_n + |\lambda|\, a_{n-1})}\, (a_{n+1} - a_n). \qquad (22)$$
The sequence { a n } shall be shown to be majorizing for { z n } in Theorem 2. But first, a general convergence condition is given for the sequence { a n } .
(H3) There exists $R \in [0, R_0)$ such that for each $n = 0, 1, \ldots$,
$$v_0(|1 - \lambda|\, a_n + |\lambda|\, a_{n-1},\; |1 + \lambda|\, a_n + |\lambda|\, a_{n-1}) < 1 \quad \text{and} \quad a_n \le R_0.$$
It follows from the initial conditions that $a_{-1} \le a_0 \le a_1$. Then, according to (22) for $n = 0$, the condition (H3), and the hypothesis that the functions $v_0$ and $v$ are nondecreasing in each variable, it follows that $a_1 \le a_2$. Suppose that $a_m \le a_{m+1}$ for all integers $m = 0, 1, 2, \ldots, n$. Then, according to the same hypothesis on the functions $v_0$ and $v$ and (H3), it follows that $a_{m+1} \le a_{m+2}$, which completes the induction for
$$0 \le a_n \le a_{n+1} \le R_0,$$
and there exists $a^* \in [0, R_0]$ such that
$$\lim_{n \to +\infty} a_n = a^*.$$
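For illustration, a minimal sketch of how the scalar sequence $\{a_n\}$ from (22) can be generated is given below; the concrete functions $v_0$, $v$ and the starting values in the example call are placeholders chosen only so that the denominator in (22) stays positive, not the functions derived later in Section 4.

```python
# Minimal sketch of the majorizing sequence (22), assuming v0 and v are given as
# callables of two nonnegative arguments (continuous and nondecreasing, as in (H1)-(H2)).

def majorizing_sequence(v0, v, lam, a0, a1, n_terms=15):
    """Return [a_{-1}, a_0, a_1, ...] generated by the recurrence (22) with a_{-1} = 0."""
    a = [0.0, a0, a1]                                   # a_{-1}, a_0, a_1
    for _ in range(n_terms):
        a_nm1, a_n, a_np1 = a[-3], a[-2], a[-1]
        num = v(a_np1 - a_n + abs(lam) * (a_n - a_nm1), abs(lam) * (a_n - a_nm1))
        den = 1.0 - v0(abs(1 - lam) * a_n + abs(lam) * a_nm1,
                       abs(1 + lam) * a_n + abs(lam) * a_nm1)
        a.append(a_np1 + num / den * (a_np1 - a_n))     # recurrence (22)
    return a

# Hypothetical linear v0, v; with these choices the increments shrink and a_n approaches a*.
seq = majorizing_sequence(lambda s, t: 0.4 * (s + t), lambda s, t: 0.5 * (s + t),
                          lam=0.2, a0=0.0, a1=0.01)
print(seq[-1])   # approximate limit a*
```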
There is a connection between the real functions v 0 and v and the operators in method (5).
(H4) There exist a point $z_0 \in D$ and an operator $L \in \mathcal{L}(B_1, B_2)$ such that $L^{-1} \in \mathcal{L}(B_2, B_1)$ and
$$\|L^{-1}([x, y; F] - L)\| \le v_0(\|x - z_0\|, \|y - z_0\|)$$
for each $x, y \in D$.
Let $z_{-1}, z_0 \in D$ be such that $\|z_0 - z_{-1}\| \le a_0$. We can write, based on the first two substeps of method (5),
$$x_n - z_0 = (1 - \lambda) z_n + \lambda z_{n-1} - z_0 = (1 - \lambda)(z_n - z_0) + \lambda (z_{n-1} - z_0),$$
$$\|x_n - z_0\| \le |1 - \lambda|\, \|z_n - z_0\| + |\lambda|\, \|z_{n-1} - z_0\| \le (|1 - \lambda| + |\lambda|)\, a^*,$$
and similarly,
$$\|y_n - z_0\| \le |1 + \lambda|\, \|z_n - z_0\| + |\lambda|\, \|z_{n-1} - z_0\| \le (|1 + \lambda| + |\lambda|)\, a^*,$$
provided that these iterates exist and belong to $U(z_0, \gamma)$.
In particular, for $n = 0$, the condition (H4) gives
$$\|L^{-1}(A_0 - L)\| \le v_0((|1 - \lambda| + |\lambda|)\, a^*,\; (|1 + \lambda| + |\lambda|)\, a^*) < 1.$$
Hence, $A_0^{-1} \in \mathcal{L}(B_2, B_1)$ and the iterate $z_1$ is well defined by the third substep of method (5). Let us choose $a_1 \ge a_0 + \|A_0^{-1} F(z_0)\|$.
Set $D_3 = D \cap U(z_0, R_0)$.
(H5) $\|L^{-1}([x, y; F] - [z, u; F])\| \le v(\|x - z\|, \|y - u\|)$ for each $x, y, z, u \in D_3$, and
(H6) $U(z_0, \gamma) \subseteq D_3$, where $\gamma = \max\{|1 - \lambda| + |\lambda|,\; |1 + \lambda| + |\lambda|\}\, a^*$.
Remark 3.
As in the local convergence analysis, possible choices are $L = I$, $L = F'(z_0)$, or $L = [u_1, u_2; F]$, where $u_1, u_2$ are auxiliary points with $u_1 \ne u_2$; the last choice can be made when the operator $F$ is not necessarily differentiable.
The main semi-local convergence analysis of method (5) follows.
Theorem 2.
Suppose that the conditions (H1)–(H6) hold. Then, the sequence $\{z_n\}$ generated by method (5) is well defined in $U(z_0, a^*)$ and remains in $U(z_0, a^*)$ for each $n = 0, 1, \ldots$, and there exists a solution $z^* \in U[z_0, a^*]$ of the equation $F(z) = 0$ such that the sequence $\{z_n\}$ converges to $z^*$ and
$$\|z_n - z^*\| \le a^* - a_n. \qquad (23)$$
Proof. 
Mathematical induction is used to establish the estimate
$$\|z_{m+1} - z_m\| \le a_{m+1} - a_m \qquad (24)$$
for each $m = -1, 0, \ldots$. Estimate (24) holds for $m = -1, 0$ based on the initial conditions $\|z_0 - z_{-1}\| \le a_0 < a^*$ and $\|z_1 - z_0\| = \|A_0^{-1} F(z_0)\| \le a_1 - a_0 < a^*$. Moreover, we determine that the iterates $z_{-1}, z_1 \in U(z_0, a^*)$. According to the arguments below the condition (H3), the iterates $x_{m+1}, y_{m+1} \in U(z_0, \gamma)$.
We also have the estimate
$$\|L^{-1}(A_{m+1} - L)\| \le v_0(\|x_{m+1} - z_0\|, \|y_{m+1} - z_0\|) < 1.$$
Thus, $A_{m+1}^{-1} \in \mathcal{L}(B_2, B_1)$,
$$\|A_{m+1}^{-1} L\| \le \frac{1}{1 - v_0(\|x_{m+1} - z_0\|, \|y_{m+1} - z_0\|)}, \qquad (25)$$
and the iterate $z_{m+2}$ is well defined by the third substep of method (5). Furthermore, based on the condition (H6), the iterate $z_{m+1} \in U(z_0, a^*)$. Then, we can write, based on the third substep of method (5),
$$F(z_{m+1}) = F(z_{m+1}) - F(z_m) - A_m(z_{m+1} - z_m) = ([z_{m+1}, z_m; F] - A_m)(z_{m+1} - z_m), \qquad (26)$$
leading to
$$\|z_{m+2} - z_{m+1}\| \le \|A_{m+1}^{-1} L\|\, \|L^{-1} F(z_{m+1})\| \le \frac{v(\|z_{m+1} - x_m\|, \|z_m - y_m\|)\, \|z_{m+1} - z_m\|}{1 - v_0(\|x_{m+1} - z_0\|, \|y_{m+1} - z_0\|)} \le \frac{v(a_{m+1} - a_m + |\lambda|(a_m - a_{m-1}),\; |\lambda|(a_m - a_{m-1}))\, (a_{m+1} - a_m)}{1 - v_0(|1 - \lambda|\, a_m + |\lambda|\, a_{m-1},\; |1 + \lambda|\, a_m + |\lambda|\, a_{m-1})} \le a_{m+2} - a_{m+1}.$$
The induction for (24) is complete, and
$$\|z_{m+2} - z_0\| \le \|z_{m+2} - z_{m+1}\| + \|z_{m+1} - z_0\| \le a_{m+2} - a_{m+1} + a_{m+1} - a_0 = a_{m+2} - a_0 \le a^* - a_0.$$
Hence, the iterate $z_{m+2} \in U(z_0, a^*)$. Therefore, the sequence $\{a_m\}$ is majorizing for $\{z_m\}$. So, there exists $z^* \in U[z_0, a^*]$ such that $\lim_{m \to +\infty} z_m = z^*$. According to (26), we obtain the estimate
$$\|L^{-1} F(z_{m+1})\| \le v(a_{m+1} - a_m + |\lambda|(a_m - a_{m-1}),\; |\lambda|(a_m - a_{m-1}))\, (a_{m+1} - a_m). \qquad (27)$$
Letting $m \to +\infty$ in (27), we conclude that $F(z^*) = 0$. Finally, from (24) and the triangle inequality,
$$\|z_{m+i} - z_m\| \le a_{m+i} - a_m, \quad i = 0, 1, 2, \ldots. \qquad (28)$$
Thus, by letting $i \to +\infty$ in (28), we deduce (23). □
Next, a region is specified that contains only one solution.
Proposition 2.
Suppose the following:
(i) The equation $F(z) = 0$ has a solution $y^* \in U(z_0, R_2)$ for some $R_2 > 0$.
(ii) The condition (H4) holds in the ball $U(z_0, R_2)$.
(iii) There exists $R_3 \ge R_2$ such that
$$v_0(R_3, R_2) < 1. \qquad (29)$$
Define the region $D_4 = D \cap U[z_0, R_3]$.
Then, the only solution to the equation $F(z) = 0$ in the region $D_4$ is $y^*$.
Proof. 
Suppose that the equation $F(z) = 0$ has a solution $q \in D_4$ such that $q \ne y^*$. Then, the divided difference $T = [q, y^*; F]$ is well defined. In view of the conditions (ii) and (29), we determine, in turn, that
$$\|L^{-1}(T - L)\| \le v_0(\|q - z_0\|, \|y^* - z_0\|) \le v_0(R_3, R_2) < 1.$$
Hence, $T$ is invertible. Finally, from the identity
$$q - y^* = T^{-1}(F(q) - F(y^*)) = T^{-1}(0) = 0,$$
we deduce that $q = y^*$. □
Remark 4.
(i) Under the conditions (H1)–(H6), we can let $y^* = z^*$ and $R_2 = a^*$.
(ii) It follows from the proof of Theorem 2 that the iterates $\{z_n\} \subset U(z_0, a^* - a_0)$.

4. Numerical Examples

This section presents the results of verifying the convergence conditions of Theorems 1 and 2 for method (5) and shows the applicability of the considered method for solving different nonlinear problems. The study was conducted for a nonlinear equation, a system of nonlinear equations, a Hammerstein integral equation, and a boundary value problem. These problems and similar ones are often used to test the applicability of iterative methods (see [1,3,4]). The nonlinear Hammerstein integral equations are a special case of Fredholm integral equations of the second kind and have a physical foundation, as they originate from electromagnetic fluid dynamics. The experiments were conducted in GNU Octave 7.3.0. The condition $\|z_{n+1} - z_n\| \le \varepsilon$ was used to stop the iterative process. The calculations were performed with $\varepsilon = 10^{-8}$ (for problems 1 and 2) and $\varepsilon = 10^{-5}$ (for problem 3), and the norms $\|\cdot\|$ and $\|\cdot\|_{C[a,b]}$ were used.
Example 1.
Consider the system of $m$ nonlinear equations
$$F_i(z) = \sum_{j=1}^{m} z_j + e^{z_i} - 1 = 0, \quad i = 1, \ldots, m.$$
Here, $B_1 = B_2 = \mathbb{R}^m$, $D = (-1, 1)^m \subset \mathbb{R}^m$, and the exact solution is $z^* = (0, \ldots, 0)^T$.
It is easy to see that the elements of the Jacobian matrix and the divided difference matrix have the following forms:
$$F'(z)_{i,j} = \begin{cases} e^{z_i} + 1, & i = j, \\ 1, & i \ne j, \end{cases} \qquad \text{and} \qquad [x, y; F]_{i,j} = \begin{cases} \dfrac{e^{x_i} - e^{y_i}}{x_i - y_i} + 1, & i = j, \\ 1, & i \ne j. \end{cases}$$
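As an illustration, the following is a minimal sketch of Example 1 solved with method (5); the parameters $m$, $\lambda$, and the starting points follow the text, while the code layout (vectorized NumPy arrays, the stopping tolerance) is an assumption of this sketch rather than the original Octave implementation.

```python
# Minimal sketch of Example 1: F_i(z) = sum_j z_j + exp(z_i) - 1, solved by method (5).
import numpy as np

def F(z):
    return z.sum() + np.exp(z) - 1.0          # componentwise: sum_j z_j + e^{z_i} - 1

def divided_difference(x, y):
    """[x, y; F]: off-diagonal entries 1, diagonal (e^{x_i} - e^{y_i})/(x_i - y_i) + 1."""
    m = x.size
    A = np.ones((m, m))
    A[np.diag_indices(m)] = (np.exp(x) - np.exp(y)) / (x - y) + 1.0
    return A

def kurchatov_type(z_prev, z, lam, eps=1e-8, max_iter=20):
    for _ in range(max_iter):
        x = (1 - lam) * z + lam * z_prev
        y = (1 + lam) * z - lam * z_prev
        z_prev, z = z, z - np.linalg.solve(divided_difference(x, y), F(z))
        if np.linalg.norm(z - z_prev, np.inf) <= eps:
            break
    return z

m, lam = 25, 0.4
z = kurchatov_type(np.full(m, 0.11), np.full(m, 0.10), lam)   # z_{-1}, z_0 as in Table 1
print(np.linalg.norm(z, np.inf))                              # distance to z* = 0
```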
Let us consider the local case and choose $L = F'(z^*)$. Then, we have
$$F'(z^*)_{i,j} = \begin{cases} 2, & i = j, \\ 1, & i \ne j, \end{cases} \qquad [F'(z^*)]^{-1}_{i,j} = \begin{cases} \alpha, & i = j, \\ \beta, & i \ne j, \end{cases}$$
and $L^{-1}([x, y; F] - L) = L^{-1} \operatorname{diag}\left(\dfrac{e^{x_1} - e^{y_1}}{x_1 - y_1} - 1, \ldots, \dfrac{e^{x_m} - e^{y_m}}{x_m - y_m} - 1\right)$. Therefore, the functions $\omega_0$ and $\omega$ have the following forms:
$$\omega_0(\|x - z^*\|, \|y - z^*\|) = \frac{(e - 1)\,\|L^{-1}\|}{2}\,(\|x - z^*\| + \|y - z^*\|)$$
and
$$\omega(\|x - z\|, \|y - z^*\|) = \frac{e^{\min\{1, \rho_0\}}\,\|L^{-1}\|}{2}\,(\|x - z\| + \|y - z^*\|).$$
Let $m = 25$ and $\lambda = 0.4$. Then, $\rho_0 \approx 0.2206$, $D_0 \subset U(z^*, 0.2206)$, $r^* \approx 0.1111$, $U(z^*, r^*) \subset (-0.1111, 0.1111)^m$, $\rho^* \approx \max\{0.1111, 0.2000\} = 0.2000$, and $U(z^*, \rho^*) \subset D_0$.
Table 1 shows the results obtained for the initial approximations $z_0 = (0.1, \ldots, 0.1)^T$ and $z_{-1} = (0.11, \ldots, 0.11)^T$. Method (5) converges in three iterations. Thus, error estimate (11) holds for all $n \ge 0$, and the sequence $\{z_n\}_{n \ge -1}$ remains in $U(z^*, r^*)$ and converges to the exact solution.
Let us consider the semi-local case. Choosing $L = [x_0, y_0; F]$, we obtain the following functions:
$$v_0(\|x - z_0\|, \|y - z_0\|) = \frac{e\,\|L^{-1}\|}{2}\,(\|x - z_0\| + \|y - z_0\| + \|x_0 - z_0\| + \|y_0 - z_0\|)$$
and
$$v(\|x - z\|, \|y - u\|) = \frac{e^{\kappa_0}\,\|L^{-1}\|}{2}\,(\|x - z\| + \|y - u\|).$$
Let $m = 25$, $\lambda = 0.2$, and the initial approximations $z_0 = (0.1, \ldots, 0.1)^T$, $z_{-1} = (0.11, \ldots, 0.11)^T$. Then, we determine that $R_0 \approx 0.1784$, $D_3 \subset (-0.0785, 0.2784)^m$, and $\kappa_0 = 0.2784$, and the majorizing sequence
$$\{a_n\} = \{0, 0.0100, 0.1098, 0.1221, 0.1237\}$$
converges to $a^* \approx 0.1237$. The convergence ball is $U(z_0, a^*) \subset (-0.0237, 0.2237)^m$, $\gamma \approx \max\{0.1237, 0.1732\} = 0.1732$, and $U(z_0, \gamma) \subset (-0.0732, 0.2732)^m \subset D_3$.
Table 2 shows that the error estimate (23) holds for all $n \ge 0$ and (24) holds for all $n \ge -1$. The sequence $\{z_n\}_{n \ge -1}$ remains in $U(z_0, a^*)$ and converges to the exact solution.
Example 2.
Consider the nonlinear integral equation
$$F(z(t)) = z(t) - \alpha \int_0^1 t\, s\, z^3(s)\, ds = 0.$$
Here, $B_1 = B_2 = C[0, 1]$, $\alpha > 0$ is some constant, and the exact solution is $z^*(t) = 0$.
Then, we can write
$$F'(z(t))\, h(t) = h(t) - 3\alpha \int_0^1 t\, s\, z^2(s)\, h(s)\, ds$$
and
$$[x(t), y(t); F]\, h(t) = h(t) - \alpha \int_0^1 t\, s\, \big(x^2(s) + x(s) y(s) + y^2(s)\big)\, h(s)\, ds.$$
Since $z^*(t) = 0$, we have $F'(z^*(t))\, h(t) = h(t) - 3\alpha \int_0^1 t\, s\, (z^*(s))^2\, h(s)\, ds = h(t) = I h(t)$, where $I$ is the identity operator. In the local case, we obtain for $L = F'(z^*(t))$ the following functions:
$$\omega_0(\|x - z^*\|, \|y - z^*\|) = 2\alpha\,(\|x - z^*\| + \|y - z^*\|)$$
and
$$\omega(\|x - z\|, \|y - z^*\|) = 2 \min\{1, \rho_0\}\, \alpha\,(\|x - z\| + \|y - z^*\|).$$
Let us choose $\lambda = 0.1$ and $\alpha = 1$. Then, $\rho_0 \approx 0.2273$, $D_0 \subset U(z^*, 0.2273)$, $r^* \approx 0.1708$, $\rho^* \approx \max\{0.1708, 0.2050\} = 0.2050$, and $U(z^*, \rho^*) \subset D_0$.
Let us choose $\lambda = 0.01$ and $\alpha = 1$. Then, $\rho_0 \approx 0.2475$, $D_0 \subset U(z^*, 0.2475)$, $r^* \approx 0.1807$, $\rho^* \approx \max\{0.1807, 0.1843\} = 0.1843$, and $U(z^*, \rho^*) \subset D_0$.
To solve the integral equation, the quadrature method based on Simpson's rule with $h = 1/m$ was applied. The calculation was carried out for $m = 50$, $\alpha = 1$, and $\lambda = 0.1$. The initial approximations were $z_0(t) = 0.1t$ and $z_{-1}(t) = 0.1t + 0.01$ for $t \in [0, 1]$. Table 3 shows that the error estimate (11) holds for all $n \ge 0$, and the sequence $\{z_n\}_{n \ge -1}$ remains in $U(z^*, r^*)$ and converges to the exact solution.
Figure 2 shows the error $|z^*(t) - z_n(t)|$ at each iteration. These graphs illustrate the decrease in the error at each iteration and its distribution over the specified interval. The maximum error values at each iteration are presented in Table 3.
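The discretization used above can be sketched as follows, assuming a standard Nyström approach: the integral is replaced by the composite Simpson rule on the nodes $t_0, \ldots, t_m$, which turns the integral equation into a nonlinear system in the nodal values, and the divided difference is assembled from the formula for $[x(t), y(t); F]$ given above. The array layout and tolerance are assumptions of this sketch, not the original Octave code.

```python
# Minimal sketch of the Nystrom discretization of Example 2 with composite Simpson weights,
# solved by method (5); m, alpha, lam, and the starting functions follow the text.
import numpy as np

m, alpha, lam = 50, 1.0, 0.1
t = np.linspace(0.0, 1.0, m + 1)                     # nodes t_i = i/m (m even for Simpson)
w = np.full(m + 1, 2.0); w[1::2] = 4.0; w[0] = w[-1] = 1.0
w *= 1.0 / (3.0 * m)                                 # composite Simpson weights h/3 * [1,4,2,...,4,1]

def F(z):
    return z - alpha * t * (w * t * z**3).sum()      # F(z)_i = z_i - alpha t_i sum_j w_j t_j z_j^3

def divided_difference(x, y):
    # [x, y; F]_{ij} = delta_{ij} - alpha t_i w_j t_j (x_j^2 + x_j y_j + y_j^2)
    return np.eye(m + 1) - alpha * np.outer(t, w * t * (x**2 + x * y + y**2))

z_prev, z = 0.1 * t + 0.01, 0.1 * t                  # z_{-1}(t), z_0(t)
for _ in range(10):
    xs = (1 - lam) * z + lam * z_prev
    ys = (1 + lam) * z - lam * z_prev
    z_prev, z = z, z - np.linalg.solve(divided_difference(xs, ys), F(z))
    if np.max(np.abs(z - z_prev)) <= 1e-8:
        break
print(np.max(np.abs(z)))                             # error against the exact solution z*(t) = 0
```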
Let us consider the semi-local case and choose $z_0(t) = 0.1t$, $z_{-1}(t) = 0.1t + 0.01$, $t \in [0, 1]$, and $L = [x_0, y_0; F]$. Then, $x_0(t) = 0.1t + 0.01\lambda$, $y_0(t) = 0.1t - 0.01\lambda$, and
$$\|I - L\| \le \alpha\big((0.1 + 0.01\lambda)^2 + |(0.1 + 0.01\lambda)(0.1 - 0.01\lambda)| + (0.1 - 0.01\lambda)^2\big) = p.$$
Moreover, the values $\alpha$ and $\lambda$ are chosen so that $p < 1$. As a result, we obtain the estimate
$$\|L^{-1}\| \le \frac{1}{1 - p}$$
and the functions
$$v_0(\|x - z_0\|, \|y - z_0\|) = \frac{\alpha\,(2 + |0.1 + 0.01\lambda|)}{1 - p}\,(\|x - z_0\| + \|y - z_0\|) + \frac{\alpha}{1 - p}\,\big|3\|z_0\|^2 - \|x_0\|^2 - \|y_0\|^2 - \|x_0\|\,\|y_0\|\big|$$
and
$$v(\|x - z\|, \|y - u\|) = \frac{3\alpha\,\kappa_0}{1 - p}\,(\|x - z\| + \|y - u\|).$$
Let us choose $\lambda = 0.1$ and $\alpha = 1$. Then, $p = 0.030001$, $R_0 \approx 0.2099$, $D_3 \subset U(z_0, 0.2099)$, $\kappa_0 = 0.3099$, $a^* \approx 0.1213$, $\gamma \approx \max\{0.1213, 0.1455\} = 0.1455$, and $U(z_0, \gamma) \subset D_3$.
Let us choose $\lambda = 0.01$ and $\alpha = 1$. Then, $p = 0.0300$, $R_0 \approx 0.2287$, $D_3 \subset U(z_0, 0.2287)$, $\kappa_0 = 0.3287$, $a^* \approx 0.1214$, $\gamma \approx \max\{0.1214, 0.1238\} = 0.1238$, and $U(z_0, \gamma) \subset D_3$.
Table 4 shows that the error estimate (23) holds for all $n \ge 0$ and (24) holds for all $n \ge -1$. The sequence $\{z_n\}_{n \ge -1}$ remains in $U(z_0, a^*)$ and converges to the exact solution. These results were obtained for $\alpha = 1$, $m = 50$, and $\lambda = 0.1$.
Example 3.
Consider the nonlinear boundary value problem [4]
$$u''(t) = 2\,(u(t) - 0.5t + 1)^3, \quad 0 < t < 1, \qquad u(0) = 0, \quad u(1) = 0.$$
Here, $B_1 = B_2 = C[0, 1]$ and the exact solution is $u^*(t) = \dfrac{1}{1 + t} + \dfrac{1}{2}t - 1$.
Let $t_i = ih$, $i = 0, \ldots, m$, $h = 1/m$, where $m$ is a natural number, and denote $\theta_i \approx u(t_i)$, $i = 1, \ldots, m - 1$. To solve problem 3, we use the finite difference method. As a result, we obtain the system of nonlinear equations $F(z) = 0$, where
$$F_i(z) = \theta_{i+1} - 2\theta_i + \theta_{i-1} - 2h^2\,(\theta_i - 0.5 t_i + 1)^3 = 0, \quad i = 1, \ldots, m - 1, \quad \theta_0 = \theta_m = 0,$$
and $z = (\theta_1, \ldots, \theta_{m-1})^T$.
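A minimal sketch of this discretization and its solution by method (5) follows. The tridiagonal divided-difference matrix uses the identity $(a^3 - b^3)/(a - b) = a^2 + ab + b^2$ applied to the nonlinear term, which satisfies Definition 1 for this $F$; the parameters follow the text, while the linear-algebra details are assumptions of the sketch.

```python
# Minimal sketch of Example 3: finite-difference system for the BVP, solved by method (5).
import numpy as np

m, lam, eps = 100, 0.5, 1e-5
h = 1.0 / m
t = np.linspace(0.0, 1.0, m + 1)
u_exact = 1.0 / (1.0 + t) + 0.5 * t - 1.0             # exact solution u*(t)
ti = t[1:-1]                                          # interior nodes t_1, ..., t_{m-1}

def F(theta):
    # F_i = theta_{i+1} - 2 theta_i + theta_{i-1} - 2 h^2 (theta_i - 0.5 t_i + 1)^3, theta_0 = theta_m = 0
    p = np.concatenate(([0.0], theta, [0.0]))
    return p[2:] - 2.0 * p[1:-1] + p[:-2] - 2.0 * h**2 * (theta - 0.5 * ti + 1.0)**3

def divided_difference(x, y):
    a, b = x - 0.5 * ti + 1.0, y - 0.5 * ti + 1.0
    diag = -2.0 - 2.0 * h**2 * (a**2 + a * b + b**2)   # (a^3 - b^3)/(a - b) = a^2 + ab + b^2
    return np.diag(diag) + np.diag(np.ones(m - 2), 1) + np.diag(np.ones(m - 2), -1)

z_prev = u_exact[1:-1] - 0.51                          # z_{-1}
z = u_exact[1:-1] - 0.50                               # z_0
for _ in range(20):
    xs = (1 - lam) * z + lam * z_prev
    ys = (1 + lam) * z - lam * z_prev
    z_prev, z = z, z - np.linalg.solve(divided_difference(xs, ys), F(z))
    if np.max(np.abs(z - z_prev)) <= eps:
        break
print(np.max(np.abs(u_exact[1:-1] - z)))               # discretization error at the interior nodes
```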
The considered method (5) converges in five iterations for $m = 100$, $\lambda = 0.5$, and $\varepsilon = 10^{-5}$. The initial approximations were $z_0 = u^*(t) - 0.5$ and $z_{-1} = u^*(t) - 0.51$, $t = t_i$, $i = 1, \ldots, m - 1$. Figure 3 shows the error $|u^*(t_i) - \theta_i|$, $i = 0, \ldots, m$, at the last iteration; $\max_{i = 0, \ldots, m} |u^*(t_i) - \theta_i| \approx 1.5021 \times 10^{-5}$.
Example 4.
Let $B_1 = B_2 = \mathbb{R}$ and $D = (0, 2)$, and let $F : D \to \mathbb{R}$ be defined by
$$F(z) = z^3 - 1.$$
The exact solution of $F(z) = 0$ is $z^* = 1$.
Let us show that the assumptions (C1)–(C5) hold. We can write $F'(z) = 3z^2$ and $[x, y; F] = x^2 + xy + y^2$. Let us choose $L = F'(z^*)$. Next, we obtain
$$[x, y; F] - F'(z^*) = x^2 + xy + y^2 - 3(z^*)^2 = x^2 - (z^*)^2 + y^2 - (z^*)^2 + xy - xz^* + xz^* - (z^*)^2 = (x - z^*)(x + 2z^*) + (y - z^*)(x + y + z^*),$$
$$[x, y; F] - [z, z^*; F] = x^2 + xy + y^2 - z^2 - zz^* - (z^*)^2 = x^2 - z^2 + xy - zy + zy - zz^* + y^2 - (z^*)^2 = (x - z)(x + z + y) + (y - z^*)(y + z + z^*).$$
Therefore,
$$\omega_0(|x - z^*|, |y - z^*|) = A_0 |x - z^*| + B_0 |y - z^*|, \quad A_0 = \max_{x \in D} \frac{|x + 2z^*|}{3(z^*)^2}, \quad B_0 = \max_{x, y \in D} \frac{|x + y + z^*|}{3(z^*)^2},$$
$$\omega(|x - z|, |y - z^*|) = A |x - z| + B |y - z^*|, \quad A = \max_{x, y, z \in D_0} \frac{|x + y + z|}{3(z^*)^2}, \quad B = \max_{x, y \in D_0} \frac{|x + y + z^*|}{3(z^*)^2}.$$
Let $\lambda = 0.1$. Then, $g_1(t) = t$, $g_2(t) = \frac{6}{5}t$, $A_0 = \frac{4}{3}$, $B_0 = \frac{5}{3}$, and
$$\omega_0(g_1(t), g_2(t)) - 1 = \frac{4}{3}t + 2t - 1 = \frac{10}{3}t - 1 = 0.$$
The last equation has the root $\rho_0 = \frac{3}{10}$, and $D_0 = \left(\frac{7}{10}, \frac{13}{10}\right)$. Then, $A = \frac{13}{10}$, $B = \frac{6}{5}$, and the equation $h(t) - 1 = 0$ takes the form
$$\frac{\left(2A + \frac{6}{5}B\right)t}{1 - \frac{10}{3}t} - 1 = \frac{\frac{101}{25}t - 1 + \frac{10}{3}t}{1 - \frac{10}{3}t} = 0.$$
The solution of this equation is $r^* = \frac{75}{553}$, $\rho^* = \max\left\{\frac{75}{553}, \frac{90}{553}\right\} = \frac{90}{553}$, and
$$U(z^*, \rho^*) = \left(\frac{463}{553}, \frac{643}{553}\right) \subset D_0.$$
So, the assumptions (C1)–(C5) hold.
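The closed-form values above can be cross-checked numerically; the following is a minimal sketch that recomputes $\rho_0$, $r^*$, and $\rho^*$ by bisection on $\omega_0(g_1(t), g_2(t)) - 1 = 0$ and $h(t) - 1 = 0$, using the constants $A_0$, $B_0$, $A$, $B$ derived in this example. The bisection helper is an assumption of the sketch.

```python
# Minimal sketch verifying rho_0 = 3/10, r* = 75/553, and rho* = 90/553 for Example 4.
lam = 0.1
g1 = lambda t: (abs(1 - lam) + abs(lam)) * t          # g1(t) = t
g2 = lambda t: (abs(1 + lam) + abs(lam)) * t          # g2(t) = (6/5) t
A0, B0 = 4.0 / 3.0, 5.0 / 3.0
A, B = 13.0 / 10.0, 6.0 / 5.0
omega0 = lambda s, u: A0 * s + B0 * u
omega = lambda s, u: A * s + B * u
h = lambda t: omega(t + g1(t), g2(t)) / (1.0 - omega0(g1(t), g2(t)))

def bisect(fun, lo, hi, iters=80):
    """Bisection for a root of fun on [lo, hi], assuming fun(lo) < 0 < fun(hi)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if fun(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

rho0 = bisect(lambda t: omega0(g1(t), g2(t)) - 1.0, 0.0, 1.0)
r_star = bisect(lambda t: h(t) - 1.0, 0.0, 0.999 * rho0)
rho_star = max(abs(1 - lam) + abs(lam), abs(1 + lam) + abs(lam)) * r_star
print(rho0, r_star, rho_star)   # ~0.3000, ~0.1356 (= 75/553), ~0.1628 (= 90/553)
```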

5. Conclusions

We have developed a unified local and semi-local convergence analysis of a family of Kurchatov-type methods depending on one parameter for solving nonlinear operator equations under generalized conditions in a Banach space. Moreover, we have studied the uniqueness of the solution of the nonlinear Equation (1). Numerical examples that demonstrate the applicability of our theoretical results are also provided. Some of the advantages of the new approach are as follows:
  • A comparison between different methods becomes possible, since their convergence is studied under uniform conditions;
  • The assumptions involve only the operators which are present in the method, in contrast to earlier studies using assumptions involving derivatives not in the method [6,7,16,17,18,19];
  • The generalized continuity assumption imposed on the divided difference leads to better information on the location of the solution $z^*$ and fewer iterates to obtain the error tolerance than before, since the bounds on $\|z_n - z^*\|$ are tighter;
  • Finally, the generality of the new approach helps with the extension of the applicability of other methods in a similar way [4,8,9,10,11,12,13,14,15]. This is the direction of our future research.

Author Contributions

Conceptualization, I.K.A.; Investigation, I.K.A., S.S. and H.Y.; Visualization, I.K.A., S.S. and H.Y.; Writing—original draft, I.K.A., S.S. and H.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Argyros, I.K.; George, S. Improved convergence analysis for the Kurchatov method. Nonlinear Funct. Anal. Appl. 2017, 22, 41–58. [Google Scholar]
  2. Ezquerro, J.A.; Hernández-Verón, M.A. Mild Differentiability Conditions for Newton’s Method in Banach Spaces; Frontiers in Mathematics; Springer: Cham, Switzerland, 2020. [Google Scholar]
  3. Argyros, I.K.; Shakhno, S.; Regmi, S.; Yarmola, H. On the complexity of a unified convergence analysis for iterative methods. J. Complex. 2023, 79, 101781. [Google Scholar] [CrossRef]
  4. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  5. Balázs, M.; Goldner, G. On existence of divided differences in linear spaces. Rev. Anal. Numér. Théorie Approx. 1973, 2, 5–9. [Google Scholar] [CrossRef]
  6. Hernández-Verón, M.A.; Magreñán, Á.A.; Martínez, E.; Villalba, E.G. Solving non-differentiable Hammerstein integral equations via first-order divided differences. Numer. Algorithms 2024, 97, 567–594. [Google Scholar] [CrossRef]
  7. Ezquerro, J.A.; Grau, A.; Grau-Sánchez, M.; Hernández-Verón, M.A. A new class of secant-like methods for solving nonlinear systems of equations. Commun. Appl. Math. Comput. Sci. 2014, 9, 201–213. [Google Scholar] [CrossRef]
  8. Abad, M.F.; Cordero, A.; Torregrosa, J.R. A family of seventh-order schemes for solving nonlinear systems. Bull. Math. Soc. Sci. Math. Roum. 2014, 57, 133–145. [Google Scholar]
  9. Artidiello, S.; Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. Multidimensional generalization of iterative methods for solving nonlinear problems by means of weight-function procedure. Appl. Math. Comput. 2015, 268, 1064–1071. [Google Scholar] [CrossRef]
  10. Behl, R.; Kanwar, V.; Sharma, K.K. Optimal equi-scaled families of Jarratt’s method. Int. J. Comput. Math. 2013, 90, 408–422. [Google Scholar] [CrossRef]
  11. Chicharro, F.; Cordero, A.; Gutiérrez, J.M.; Torregrosa, J.R. Complex dynamics of derivative-free methods for nonlinear equations. Appl. Math. Comput. 2013, 219, 7023–7035. [Google Scholar] [CrossRef]
  12. Lin, Y.; Bao, L.; Jia, X. Convergence analysis of a variant of the Newton method for solving nonlinear equations. Comput. Math. Appl. 2010, 59, 2121–2127. [Google Scholar] [CrossRef]
  13. Magreñán, Á.A. A new tool to study real dynamics: The convergence plane. Appl. Math. Comput. 2014, 248, 215–224. [Google Scholar] [CrossRef]
  14. Sharma, J.R.; Arora, H. Efficient Jarratt-like methods for solving systems of nonlinear equations. Calcolo 2014, 51, 193–210. [Google Scholar] [CrossRef]
  15. Wang, X.; Zhang, T. A family of Steffensen type methods with seventh-order convergence. Numer. Algorithms 2013, 62, 429–444. [Google Scholar] [CrossRef]
  16. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
  17. Ezquerro, J.A.; Gonzalez, D.; Hernández, M.A. A variant of the Newton-Kantorovich theorem for nonlinear integral equations of mixed Hammerstein type. Appl. Math. Comput. 2012, 218, 9536–9546. [Google Scholar] [CrossRef]
  18. Grau-Sánchez, M.; Noguera, M.; Amat, S. On the approximation of derivatives using divided difference operators preserving the local convergence order of iterative methods. J. Comput. Appl. Math. 2011, 237, 363–372. [Google Scholar] [CrossRef]
  19. Behl, R.; Cordero, A.; Motsa, S.S.; Torregrosa, J.R. Stable high-order iterative methods for solving nonlinear models. Appl. Math. Comput. 2017, 303, 70–88. [Google Scholar] [CrossRef]
Figure 1. The concept of the investigation.
Figure 2. Error for problem 2.
Figure 3. Error for problem 3.
Table 1. Error estimates (11) for Example 1.
n | $\|z_{n+1} - z^*\|$ | Right side of estimate (11)
−1 | 1.0000 × 10^{−1} | –
0 | 2.0480 × 10^{−4} | 3.4786 × 10^{−2}
1 | 3.3397 × 10^{−9} | 2.4639 × 10^{−5}
2 | 4.1561 × 10^{−14} | 4.0265 × 10^{−10}

Table 2. Error estimates (23) and (24) for Example 1.
n | $\|z_{n+1} - z^*\|$ | $a^* - a_{n+1}$ | $\|z_{n+1} - z_n\|$ | $a_{n+1} - a_n$
−2 | 1.1000 × 10^{−1} | 1.2369 × 10^{−1} | – | –
−1 | 1.0000 × 10^{−1} | 1.1369 × 10^{−1} | 1.0000 × 10^{−2} | 1.0000 × 10^{−2}
0 | 2.0480 × 10^{−4} | 1.3894 × 10^{−2} | 9.9795 × 10^{−2} | 9.9795 × 10^{−2}
1 | 1.4398 × 10^{−9} | 1.5640 × 10^{−3} | 2.0480 × 10^{−4} | 1.2330 × 10^{−2}
2 | 4.5643 × 10^{−15} | 3.4364 × 10^{−5} | 1.4398 × 10^{−9} | 1.5296 × 10^{−3}

Table 3. Error estimate (11) for Example 2.
n | $\|z_{n+1} - z^*\|$ | Right side of estimate (11)
−1 | 1.0000 × 10^{−1} | –
0 | 4.0245 × 10^{−4} | 1.5167 × 10^{−2}
1 | 1.0295 × 10^{−8} | 4.2258 × 10^{−6}
2 | 2.6081 × 10^{−13} | 1.0769 × 10^{−10}
3 | 6.6072 × 10^{−18} | 2.7281 × 10^{−15}

Table 4. Error estimates (23) and (24) for Example 2.
n | $\|z_{n+1} - z^*\|$ | $a^* - a_{n+1}$ | $\|z_{n+1} - z_n\|$ | $a_{n+1} - a_n$
−2 | 1.1000 × 10^{−1} | 1.2128 × 10^{−1} | – | –
−1 | 1.0000 × 10^{−1} | 1.1128 × 10^{−1} | 1.0000 × 10^{−2} | 1.0000 × 10^{−2}
0 | 4.0245 × 10^{−4} | 1.0882 × 10^{−2} | 1.0040 × 10^{−1} | 1.0040 × 10^{−1}
1 | 1.0295 × 10^{−8} | 5.8292 × 10^{−4} | 4.0246 × 10^{−4} | 1.0299 × 10^{−2}
2 | 2.6081 × 10^{−13} | 3.4152 × 10^{−6} | 1.0295 × 10^{−8} | 5.7951 × 10^{−4}
3 | 6.6072 × 10^{−18} | 9.2440 × 10^{−10} | 2.6081 × 10^{−13} | 3.4143 × 10^{−6}
