Article

Convergence Analysis and Dynamical Nature of an Efficient Iterative Method in Banach Spaces

Deepak Kumar, Sunil Kumar, Janak Raj Sharma and Lorentz Jantschi
1 Department of Mathematics, Sant Longowal Institute of Engineering and Technology, Longowal 148106, India
2 Department of Mathematics, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Chennai 601103, India
3 Department of Physics and Chemistry, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
4 Chemical Doctoral School, Babes-Bolyai University, 400028 Cluj-Napoca, Romania
* Authors to whom correspondence should be addressed.
Mathematics 2021, 9(19), 2510; https://doi.org/10.3390/math9192510
Submission received: 14 August 2021 / Revised: 16 September 2021 / Accepted: 26 September 2021 / Published: 7 October 2021
(This article belongs to the Special Issue Mathematical and Molecular Topology)

Abstract

We study the local convergence of a fifth-order method and its multi-step version in Banach spaces. The hypotheses used are based on the first Fréchet derivative only. The new approach provides a computable radius of convergence, error bounds on the distances involved, and estimates on the uniqueness of the solution. Such estimates are not provided in approaches using Taylor expansions of higher order derivatives, which may not exist or may be very expensive or impossible to compute. Numerical examples are provided to validate the theoretical results. The convergence domains of the methods are also checked through the complex geometry revealed by drawing the basins of attraction. The boundaries of the basins show fractal-like shapes, and the basins themselves display a symmetric structure.

1. Introduction

Let $X$, $Y$ be Banach spaces and $D \subseteq X$ be a closed and convex set. In this study, we locate a solution $x^*$ of the nonlinear equation
$$G(x) = 0, \qquad (1)$$
where $G : D \subseteq X \to Y$ is a Fréchet-differentiable operator. In computational sciences, many problems can be written in the form of (1); see, for example, [1,2,3]. The solutions of such equations are rarely attainable in closed form. This is why most methods for solving these equations are usually iterative. The most well-known method for approximating a simple solution $x^*$ of Equation (1) is Newton's method, given by
$$x_{m+1} = x_m - G'(x_m)^{-1} G(x_m), \quad \text{for each } m = 0, 1, 2, \ldots, \qquad (2)$$
and has a quadratic order of convergence. In order to attain a higher order of convergence, a number of modified Newton or Newton-like methods have been proposed in the literature (see [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20] and the references cited therein). In particular, Sharma and Kumar [18] recently proposed a fifth-order method for approximating the solution of $G(x) = 0$ using the Newton–Chebyshev composition, defined for each $m = 0, 1, 2, \ldots$ by
$$\begin{aligned} y_m &= x_m - \Gamma_m G(x_m), \\ z_m &= y_m - \Gamma_m G(y_m), \\ x_{m+1} &= z_m - \big(2I - \Gamma_m [z_m, y_m; G]\big)\, \Gamma_m G(z_m), \end{aligned} \qquad (3)$$
where $\Gamma_m = G'(x_m)^{-1}$ and $[z_m, y_m; G]$ is the first-order divided difference of $G$. The method has been shown to be computationally more efficient than existing methods of a similar nature.
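For illustration, a minimal Python sketch of one iteration of method (3) is given below. It is not part of the original paper: the test system, the Gauss–Legendre approximation of the integral form of the divided difference (the form used later in Section 4), and all function names are our own choices.

```python
import numpy as np

def divided_difference(jac, x, y, nodes=5):
    """Approximate [x, y; G] = int_0^1 G'(y + t (x - y)) dt by Gauss-Legendre quadrature."""
    t, w = np.polynomial.legendre.leggauss(nodes)    # nodes/weights on [-1, 1]
    t, w = 0.5 * (t + 1.0), 0.5 * w                  # map them to [0, 1]
    return sum(wi * jac(y + ti * (x - y)) for wi, ti in zip(w, t))

def fifth_order_step(G, jac, x):
    """One iteration of the Newton-Chebyshev composition (3)."""
    J = jac(x)
    y = x - np.linalg.solve(J, G(x))                 # first (Newton) substep
    z = y - np.linalg.solve(J, G(y))                 # second substep, frozen Jacobian
    DD = divided_difference(jac, z, y)               # [z_m, y_m; G]
    A = 2.0 * np.eye(len(x)) - np.linalg.solve(J, DD)
    return z - A @ np.linalg.solve(J, G(z))          # third substep

# Hypothetical test system (not from the paper): G(x) = (x1^2 + x2 - 11, x1 + x2^2 - 7)
G = lambda x: np.array([x[0]**2 + x[1] - 11.0, x[0] + x[1]**2 - 7.0])
jac = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])
x = np.array([2.5, 1.5])
for _ in range(4):
    x = fifth_order_step(G, jac, x)
print(x)   # should approach the root near (3, 2)
```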
An important part of the development of an iterative method is the study of its convergence, which is usually divided into two categories: semilocal and local convergence. The semilocal convergence analysis is based on information around an initial point and gives criteria that ensure the convergence of the iteration procedure. The local convergence analysis is based on information around a solution and provides estimates of the radii of the convergence balls. Local results are important since they indicate how difficult it is to choose suitable initial points. There exist many studies which deal with the local and semilocal convergence analysis of iterative methods; see, for example, [3,4,5,7,8,9,10,11,13,16,19,21,22,23]. The semilocal convergence of method (3) in Banach spaces has been established in [18]. In the present work, we study the local convergence of this method and its multi-step version, including a computable radius of convergence, error bounds on the distances involved, and estimates on the uniqueness of the solution.
We summarize the contents of the paper. In Section 2, the local convergence of method (3) (including the radius of convergence, error bounds, and uniqueness results) is studied. The generalized multi-step version is presented in Section 3. Numerical examples verifying the theoretical results are presented in Section 4. In Section 5, the basins of attraction are studied to visually check the convergence domains of the methods. Finally, some conclusions are reported in Section 6.

2. Local Convergence

The local convergence analysis of method (3) is presented in this section. Let $L_0 > 0$, $L > 0$, $L_1 > 0$, and $M > 0$ be given parameters. It is convenient to define some functions and parameters for the local convergence study that follows. Define the function $g_1(t)$ on the interval $[0, 1/L_0)$ by
$$g_1(t) = \frac{L t}{2(1 - L_0 t)}$$
and the parameter
$$r_1 = \frac{2}{2 L_0 + L} < \frac{1}{L_0}.$$
Then, we have that $g_1(r_1) = 1$ and $0 \le g_1(t) < 1$ for each $t \in [0, r_1)$. Moreover, define the functions $g_2(t)$ and $h_2(t)$ on the interval $[0, 1/L_0)$ by
$$g_2(t) = \left(1 + \frac{M}{1 - L_0 t}\right) g_1(t)$$
and
$$h_2(t) = g_2(t) - 1.$$
We have that $h_2(0) = -1 < 0$ and $h_2(r_1) = \frac{M}{1 - L_0 r_1} > 0$. By the intermediate value theorem, the function $h_2(t)$ has at least one zero in the interval $(0, r_1)$; denote the smallest such zero by $r_2$. Finally, define the functions $K(t)$, $g_3(t)$, and $h_3(t)$ on the interval $[0, 1/L_0)$ by
$$K(t) = 1 + \frac{1}{1 - L_0 t}\,\big(L_0 + L_1 (g_2(t) + g_1(t))\big)\, t,$$
$$g_3(t) = \left(1 + \frac{M K(t)}{1 - L_0 t}\right) g_2(t)$$
and
$$h_3(t) = g_3(t) - 1.$$
We have that $h_3(0) = -1 < 0$ and $h_3(r_2) = \frac{M K(r_2)}{1 - L_0 r_2} > 0$. By the intermediate value theorem, the function $h_3(t)$ has at least one zero in $(0, r_2)$; denote the smallest such zero by $r_3$. Set
$$r = \min\{r_i\}, \quad i = 1, 2, 3.$$
Then, we obtain that
$$0 < r \le r_1.$$
Then, for each $t \in [0, r)$,
$$0 \le g_1(t) < 1,$$
$$0 \le g_2(t) < 1$$
and
$$0 \le g_3(t) < 1.$$
Let $U(v, \rho)$ and $\overline{U}(v, \rho)$ denote the open and closed balls in $X$ with radius $\rho > 0$ and centre $v \in X$.
Using the above notations, we then describe the local convergence analysis of method (3).
Theorem 1.
Suppose $G : D \subseteq X \to Y$ is a Fréchet-differentiable function and let $[\cdot, \cdot\,; G] : X \times X \to \mathcal{L}(X, Y)$ be a divided difference operator. Suppose that there exist $x^* \in D$, $L_0 > 0$, $L > 0$, $L_1 > 0$, and $M \ge 1$ such that, for each $x, y \in D$,
$$G(x^*) = 0, \quad G'(x^*)^{-1} \in \mathcal{L}(Y, X),$$
$$\|G'(x^*)^{-1}(G'(x) - G'(x^*))\| \le L_0 \|x - x^*\|,$$
$$\|G'(x^*)^{-1}(G'(x) - G'(y))\| \le L \|x - y\|,$$
$$\|G'(x^*)^{-1} G'(x)\| \le M,$$
$$\|G'(x^*)^{-1}([x, y; G] - G'(x^*))\| \le L_1 (\|x - x^*\| + \|y - x^*\|),$$
and
$$\overline{U}(x^*, r) \subseteq D,$$
where $r$ is defined by (5). Then, for each $m = 0, 1, \ldots$, the sequence $\{x_m\}$ generated by method (3) for $x_0 \in U(x^*, r) \setminus \{x^*\}$ is well defined, stays in $U(x^*, r)$, and converges to $x^*$. Furthermore, the following estimates hold:
$$\|y_m - x^*\| \le g_1(\|x_m - x^*\|)\, \|x_m - x^*\| < \|x_m - x^*\| < r,$$
$$\|z_m - x^*\| \le g_2(\|x_m - x^*\|)\, \|x_m - x^*\| < \|x_m - x^*\| < r$$
and
$$\|x_{m+1} - x^*\| \le g_3(\|x_m - x^*\|)\, \|x_m - x^*\|,$$
where the “g” functions are as defined previously. Furthermore, if there exists $T \in [r, 2/L_0)$ such that $\overline{U}(x^*, T) \subseteq D$, then $x^*$ is the only solution of $G(x) = 0$ in $\overline{U}(x^*, T)$.
Proof. 
We shall show the estimates (16)–(18) using mathematical induction. Using (4), (11), and the hypothesis $x_0 \in U(x^*, r) \setminus \{x^*\}$, we obtain that
$$\|G'(x^*)^{-1}(G'(x_0) - G'(x^*))\| \le L_0 \|x_0 - x^*\| < L_0 r < 1.$$
It follows from (19) and the Banach lemma [3] that $G'(x_0)^{-1} \in \mathcal{L}(Y, X)$ and
$$\|G'(x_0)^{-1} G'(x^*)\| \le \frac{1}{1 - L_0 \|x_0 - x^*\|} < \frac{1}{1 - L_0 r}.$$
Hence, $y_0$ is well defined for $m = 0$. Then, by using (4), (7), (12), and (20), we have
$$\begin{aligned} \|y_0 - x^*\| &= \big\|x_0 - x^* - G'(x_0)^{-1} G(x_0)\big\| \\ &\le \big\|G'(x_0)^{-1} G'(x^*)\big\| \left\| \int_0^1 G'(x^*)^{-1}\big[G'(x^* + \theta(x_0 - x^*)) - G'(x_0)\big]\, d\theta \right\| \|x_0 - x^*\| \\ &\le \frac{L \|x_0 - x^*\|^2}{2(1 - L_0 \|x_0 - x^*\|)} = g_1(\|x_0 - x^*\|)\, \|x_0 - x^*\| < \|x_0 - x^*\| < r, \end{aligned}$$
which shows (16) for $m = 0$ and $y_0 \in U(x^*, r)$.
Notice that, for each $\theta \in [0, 1]$, $\|x^* + \theta(x_0 - x^*) - x^*\| = \theta \|x_0 - x^*\| < r$; that is, $x^* + \theta(x_0 - x^*) \in U(x^*, r)$. We can write
$$G(x_0) = G(x_0) - G(x^*) = \int_0^1 G'(x^* + \theta(x_0 - x^*))(x_0 - x^*)\, d\theta.$$
Then, using (13) and (21), we have
$$\|G'(x^*)^{-1} G(x_0)\| = \left\| \int_0^1 G'(x^*)^{-1} G'(x^* + \theta(x_0 - x^*))(x_0 - x^*)\, d\theta \right\| \le M \|x_0 - x^*\|.$$
Similarly, we obtain
$$\|G'(x^*)^{-1} G(y_0)\| \le M \|y_0 - x^*\|,$$
$$\|G'(x^*)^{-1} G(z_0)\| \le M \|z_0 - x^*\|.$$
Using the second substep of method (3), (8), (20), (21), (27), and (24), we obtain that
$$\begin{aligned} \|z_0 - x^*\| &\le \|y_0 - x^*\| + \big\|G'(x_0)^{-1} G'(x^*)\big\|\, \big\|G'(x^*)^{-1} G(y_0)\big\| \\ &\le \|y_0 - x^*\| + \frac{M \|y_0 - x^*\|}{1 - L_0 \|x_0 - x^*\|} = \left(1 + \frac{M}{1 - L_0 \|x_0 - x^*\|}\right) \|y_0 - x^*\| \\ &\le \left(1 + \frac{M}{1 - L_0 \|x_0 - x^*\|}\right) g_1(\|x_0 - x^*\|)\, \|x_0 - x^*\| = g_2(\|x_0 - x^*\|)\, \|x_0 - x^*\| < \|x_0 - x^*\| < r, \end{aligned}$$
which shows (17) for $m = 0$ and $z_0 \in U(x^*, r)$.
Next, consider the linear operator $A_0 = 2I - G'(x_0)^{-1}[z_0, y_0; G]$; by using (11), (14), and (20), we obtain
$$\begin{aligned} \|A_0\| &= \big\|2I - G'(x_0)^{-1}[z_0, y_0; G]\big\| \le 1 + \big\|G'(x_0)^{-1}\big(G'(x_0) - [z_0, y_0; G]\big)\big\| \\ &\le 1 + \big\|G'(x_0)^{-1} G'(x^*)\big\| \Big( \big\|G'(x^*)^{-1}(G'(x_0) - G'(x^*))\big\| + \big\|G'(x^*)^{-1}\big(G'(x^*) - [z_0, y_0; G]\big)\big\| \Big) \\ &\le 1 + \frac{1}{1 - L_0 \|x_0 - x^*\|}\Big( L_0 \|x_0 - x^*\| + L_1\big(\|z_0 - x^*\| + \|y_0 - x^*\|\big) \Big) \\ &\le 1 + \frac{1}{1 - L_0 \|x_0 - x^*\|}\Big( L_0 + L_1\big(g_2(\|x_0 - x^*\|) + g_1(\|x_0 - x^*\|)\big) \Big) \|x_0 - x^*\| = K(\|x_0 - x^*\|). \end{aligned}$$
Then, using Equations (4), (9), (25), and (26), we obtain that
$$\begin{aligned} \|x_1 - x^*\| &\le \|z_0 - x^*\| + \|A_0\|\, \big\|G'(x_0)^{-1} G'(x^*)\big\|\, \big\|G'(x^*)^{-1} G(z_0)\big\| \\ &\le \|z_0 - x^*\| + \frac{M K(\|x_0 - x^*\|)\, \|z_0 - x^*\|}{1 - L_0 \|x_0 - x^*\|} = \left(1 + \frac{M K(\|x_0 - x^*\|)}{1 - L_0 \|x_0 - x^*\|}\right) \|z_0 - x^*\| \\ &\le \left(1 + \frac{M K(\|x_0 - x^*\|)}{1 - L_0 \|x_0 - x^*\|}\right) g_2(\|x_0 - x^*\|)\, \|x_0 - x^*\| = g_3(\|x_0 - x^*\|)\, \|x_0 - x^*\| < \|x_0 - x^*\| < r, \end{aligned}$$
which proves (18) for $m = 0$ and $x_1 \in U(x^*, r)$. By simply replacing $x_0$, $y_0$, $z_0$, and $x_1$ by $x_m$, $y_m$, $z_m$, and $x_{m+1}$ in the preceding estimates, we arrive at (16)–(18). Then, from the estimates $\|x_{m+1} - x^*\| < \|x_m - x^*\| < r$, we deduce that $\lim_{m \to \infty} x_m = x^*$ and $x_{m+1} \in U(x^*, r)$.
Finally, we show the uniqueness part. Let $Q = \int_0^1 G'(y^* + t(x^* - y^*))\, dt$ for some $y^* \in \overline{U}(x^*, T)$ with $G(y^*) = 0$. Using (15), we obtain that
$$\|G'(x^*)^{-1}(Q - G'(x^*))\| \le \int_0^1 L_0 \|y^* + t(x^* - y^*) - x^*\|\, dt \le L_0 \int_0^1 (1 - t)\, \|x^* - y^*\|\, dt \le \frac{L_0}{2}\, T < 1.$$
It follows from (29) that $Q$ is invertible. Then, from the identity $0 = G(x^*) - G(y^*) = Q(x^* - y^*)$, we deduce that $x^* = y^*$. □
Remark 1.
By (11) and the estimate
$$\|G'(x^*)^{-1} G'(x)\| = \|G'(x^*)^{-1}(G'(x) - G'(x^*)) + I\| \le 1 + \|G'(x^*)^{-1}(G'(x) - G'(x^*))\| \le 1 + L_0 \|x - x^*\|,$$
condition (13) can be dropped and replaced by
$$M(t) = 1 + L_0 t$$
or
$$M(t) = M = 2, \quad \text{since } t \in [0, 1/L_0).$$

3. Generalized Method

The multistep version of (3), consisting of $q + 1$ ($q \in \mathbb{N}$) steps, is expressed as
$$\begin{aligned} z_m^{(0)} &= y_m - \Gamma_m G(y_m), \\ z_m^{(1)} &= z_m - \psi(x_m, y_m, z_m)\, G(z_m), \\ z_m^{(2)} &= z_m^{(1)} - \psi(x_m, y_m, z_m)\, G(z_m^{(1)}), \\ &\;\;\vdots \\ z_m^{(q-1)} &= z_m^{(q-2)} - \psi(x_m, y_m, z_m)\, G(z_m^{(q-2)}), \\ z_m^{(q)} = x_{m+1} &= z_m^{(q-1)} - \psi(x_m, y_m, z_m)\, G(z_m^{(q-1)}), \end{aligned} \qquad (30)$$
where $y_m = x_m - \Gamma_m G(x_m)$, $z_m^{(0)} = z_m$, $\psi(x_m, y_m, z_m) = \big(2I - \Gamma_m [z_m, y_m; G]\big)\Gamma_m$, and $\Gamma_m = G'(x_m)^{-1}$.
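A hedged Python sketch of one full iteration of scheme (30) follows (again not from the paper; the quadrature rule for the divided difference and all names are our own, and the explicit inverse is used only for brevity):

```python
import numpy as np

def multistep_iteration(G, jac, x, q=2, nodes=5):
    """One iteration of the generalized scheme (30); q = 1 recovers method (3)."""
    t, w = np.polynomial.legendre.leggauss(nodes)
    t, w = 0.5 * (t + 1.0), 0.5 * w

    J = jac(x)
    solve = lambda b: np.linalg.solve(J, b)                        # Gamma_m applied to b
    y = x - solve(G(x))
    z = y - solve(G(y))                                            # z_m^(0) = z_m

    DD = sum(wi * jac(y + ti * (z - y)) for wi, ti in zip(w, t))   # [z_m, y_m; G]
    psi = (2.0 * np.eye(len(x)) - solve(DD)) @ np.linalg.inv(J)    # psi(x_m, y_m, z_m)

    for _ in range(q):                                             # z_m^(1), ..., z_m^(q)
        z = z - psi @ G(z)
    return z                                                       # z_m^(q) = x_{m+1}
```

Each pass through the final loop reuses the same frozen operator $\psi(x_m, y_m, z_m)$ and costs only one additional evaluation of $G$, which is what makes the higher orders inexpensive.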
Next, we show that the generalized scheme (30) possesses convergence order 2 q + 3 .

3.1. Order of Convergence

The definition of the divided difference is required to derive the convergence order of (30). For this, we recall the result on Taylor's expansion of vector functions (see [24]).
Lemma 1.
Let $G : D \subseteq \mathbb{R}^n \to \mathbb{R}^n$ be $r$-times Fréchet differentiable in a convex set $D \subseteq \mathbb{R}^n$. Then, for any $x, h \in \mathbb{R}^n$, the following expansion holds:
$$G(x + h) = G(x) + G'(x) h + \frac{1}{2!} G''(x) h^2 + \frac{1}{3!} G'''(x) h^3 + \cdots + \frac{1}{(r-1)!} G^{(r-1)}(x) h^{r-1} + R_r,$$
where
$$\|R_r\| \le \frac{1}{r!} \sup_{0 \le t \le 1} \big\|G^{(r)}(x + t h)\big\|\, \|h\|^r \quad \text{and} \quad h^r = (h, h, \overset{r}{\ldots}, h).$$
The divided difference operator $[\cdot, \cdot\,; G] : D \times D \subseteq \mathbb{R}^n \times \mathbb{R}^n \to \mathcal{L}(\mathbb{R}^n)$ is defined by (see [24])
$$[x + h, x; G] = \int_0^1 G'(x + t h)\, dt, \quad x, h \in \mathbb{R}^n.$$
Expanding $G'(x + t h)$ in a Taylor series at the point $x$ and integrating, we obtain
$$[x + h, x; G] = \int_0^1 G'(x + t h)\, dt = G'(x) + \frac{1}{2} G''(x) h + \frac{1}{6} G'''(x) h^2 + O(h^3),$$
where $h^i = (h, h, \overset{i}{\ldots}, h)$, $h \in \mathbb{R}^n$.
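As a quick scalar sanity check of this expansion (our own illustration, not part of the paper), one can take $G(x) = e^x$, for which the divided-difference integral has the closed form $(e^{x+h} - e^x)/h$, and compare its series in $h$ with $G'(x) + G''(x)h/2 + G'''(x)h^2/6$:

```python
import sympy as sp

# For G(x) = exp(x), [x + h, x; G] = int_0^1 exp(x + t h) dt = (exp(x + h) - exp(x)) / h.
x, h = sp.symbols('x h')
dd = (sp.exp(x + h) - sp.exp(x)) / h
print(sp.series(dd, h, 0, 3))
# expected: exp(x) + h*exp(x)/2 + h**2*exp(x)/6 + O(h**3),
# i.e. G'(x) + G''(x) h / 2 + G'''(x) h^2 / 6 + ..., in agreement with the expansion above.
```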
Let $e_m = x_m - x^*$. Expanding $G(x_m)$ in a neighbourhood of $x^*$ and assuming that $\Gamma = G'(x^*)^{-1}$ exists, we obtain
$$G(x_m) = G'(x^*)\big(e_m + A_2 e_m^2 + A_3 e_m^3 + A_4 e_m^4 + A_5 e_m^5 + O(e_m^6)\big),$$
where $A_i = \frac{1}{i!} \Gamma G^{(i)}(x^*) \in L_i(\mathbb{R}^n, \mathbb{R}^n)$ and $e_m^i = (e_m, e_m, \overset{i}{\ldots}, e_m)$, $e_m \in \mathbb{R}^n$, $i = 2, 3, \ldots$
Additionally,
$$G'(x_m) = G'(x^*)\big(I + 2 A_2 e_m + 3 A_3 e_m^2 + 4 A_4 e_m^3 + O(e_m^4)\big),$$
$$G''(x_m) = G'(x^*)\big(2 A_2 + 6 A_3 e_m + 12 A_4 e_m^2 + O(e_m^3)\big),$$
$$G'''(x_m) = G'(x^*)\big(6 A_3 + 24 A_4 e_m + O(e_m^2)\big).$$
The inversion of $G'(x_m)$ yields
$$G'(x_m)^{-1} = \Big(I - 2 A_2 e_m + (4 A_2^2 - 3 A_3) e_m^2 - (4 A_4 - 6 A_2 A_3 - 6 A_3 A_2 + 8 A_2^3) e_m^3 + O(e_m^4)\Big)\,\Gamma.$$
We are in a position to investigate scheme (30)’s convergence behaviour. As a result, the following theorem is established:
Theorem 2.
Suppose that
(i) $G : D \subseteq \mathbb{R}^n \to \mathbb{R}^n$ is a sufficiently many times differentiable mapping;
(ii) there exists a solution $x^* \in D$ of the equation $G(x) = 0$ such that $G'(x^*)$ is nonsingular.
Then, the sequence $\{x_m\}$ generated by method (30) for an initial approximation $x_0 \in D$ sufficiently close to $x^*$ converges to $x^*$ with order $2q + 3$, $q \in \mathbb{N}$.
Proof. 
Employing (34) and (38) in the Newton iteration for $y_m$, we obtain that
$$\begin{aligned} \tilde e_m = y_m - x^* &= A_2 e_m^2 - 2(A_2^2 - A_3) e_m^3 + (4 A_2^3 - 4 A_2 A_3 - 3 A_3 A_2 + 3 A_4) e_m^4 \\ &\quad - (8 A_2^4 + 6 A_3^2 + 6 A_2 A_4 + 4 A_4 A_2 - 8 A_2^2 A_3 - 6 A_2 A_3 A_2 - 6 A_3 A_2^2) e_m^5 + O(e_m^6). \end{aligned}$$
The Taylor series of $G(y_m)$ about $x^*$ yields
$$G(y_m) = G'(x^*)\big(\tilde e_m + A_2 \tilde e_m^2 + A_3 \tilde e_m^3 + A_4 \tilde e_m^4 + O(\tilde e_m^5)\big).$$
Substituting (38)–(40) in the first step of (30), we obtain
$$\bar e_m = z_m - x^* = 2 A_2^2 e_m^3 + (4 A_2 A_3 - 9 A_2^3 + 3 A_3 A_2) e_m^4 + O(e_m^5).$$
Using Equations (35)–(37) in (33) for $x + h = z_m$, $x = y_m$, and $h = \bar e_m - \tilde e_m$, it follows that
$$[z_m, y_m; G] = G'(x^*)\big(I + A_2(\bar e_m + \tilde e_m) + O(\tilde e_m^2, \bar e_m^2)\big)$$
and
$$\Gamma_m [z_m, y_m; G] = I - 2 A_2 e_m + (4 A_2^2 - 3 A_3) e_m^2 + A_2(\bar e_m + \tilde e_m) + O(e_m^3).$$
As a result, we arrive at the conclusion
$$\psi(x_m, y_m, z_m) = \Big(I - 5 A_2^2 e_m^2 + 2\,(10 A_2^3 - 4 A_2 A_3 - 3 A_3 A_2) e_m^3 + O(e_m^4)\Big)\, G'(x^*)^{-1}.$$
In addition, we have
$$G(z_m) = G'(x^*)\big(\bar e_m + O(\bar e_m^2)\big).$$
Using (42) and (43) in the second step of method (30), it follows that
$$z_m^{(1)} - x^* = 10 A_2^4 e_m^5 + O(e_m^6).$$
The expansion of $G(z_m^{(q-1)})$ about $x^*$ yields
$$G(z_m^{(q-1)}) = G'(x^*)\Big((z_m^{(q-1)} - x^*) + A_2 (z_m^{(q-1)} - x^*)^2 + \cdots\Big).$$
Then, we have
$$\begin{aligned} \psi(x_m, y_m, z_m)\, G(z_m^{(q-1)}) &= \Big(I - 5 A_2^2 e_m^2 + 2\,(10 A_2^3 - 4 A_2 A_3 - 3 A_3 A_2) e_m^3 + O(e_m^4)\Big) G'(x^*)^{-1} \\ &\quad \times G'(x^*)\Big((z_m^{(q-1)} - x^*) + A_2 (z_m^{(q-1)} - x^*)^2 + \cdots\Big) \\ &= (z_m^{(q-1)} - x^*) - 5 A_2^2 (z_m^{(q-1)} - x^*) e_m^2 + A_2 (z_m^{(q-1)} - x^*)^2 + \cdots. \end{aligned}$$
Using (46) in (30), we obtain
$$z_m^{(q)} - x^* = 5 A_2^2 (z_m^{(q-1)} - x^*) e_m^2 - A_2 (z_m^{(q-1)} - x^*)^2 + \cdots.$$
Since we know from (44) that $z_m^{(1)} - x^* = 10 A_2^4 e_m^5 + O(e_m^6)$, from (47) for $q = 2, 3, \ldots$ we therefore have
$$z_m^{(2)} - x^* = 5 A_2^2 e_m^2 (z_m^{(1)} - x^*) + \cdots = 50 A_2^6 e_m^7 + O(e_m^8)$$
and
$$z_m^{(3)} - x^* = 5 A_2^2 e_m^2 (z_m^{(2)} - x^*) + \cdots = 250 A_2^8 e_m^9 + O(e_m^{10}).$$
Proceeding by induction, it follows that
$$e_{m+1} = z_m^{(q)} - x^* = 2 \cdot 5^q A_2^{2q+2} e_m^{2q+3} + O(e_m^{2q+4}).$$
This completes the proof of Theorem 2. □
Remark 2.
Note that method (3) utilizes three function evaluations, one derivative evaluation, and one inverse operator per full iteration and converges to the solution with fifth order. The generalized scheme (30), which reduces to (3) for $q = 1$, generates methods with increasing convergence orders $5, 7, 9, \ldots$ corresponding to $q = 1, 2, 3, \ldots$, at the additional cost of only one function evaluation per extra step. This fulfils the main aim of developing higher order methods while keeping the computational cost under control.
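The claimed orders can be checked experimentally. The sketch below (our own illustration; the error norms shown are hypothetical placeholders, not data from the paper) estimates the computational order of convergence from successive error norms:

```python
import numpy as np

def computational_order(errors):
    """Estimate the order of convergence from successive error norms ||x_m - x*||,
    using rho_m ~ log(e_{m+1} / e_m) / log(e_m / e_{m-1})."""
    e = np.asarray(errors, dtype=float)
    return np.log(e[2:] / e[1:-1]) / np.log(e[1:-1] / e[:-2])

# Hypothetical error norms of a fifth-order run (illustrative values only)
errs = [1e-2, 2e-9, 5e-43]
print(computational_order(errs))   # entries close to 5 indicate fifth-order convergence
```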

3.2. Local Convergence

Along the same lines as for method (3), we present the local convergence analysis of method (30). Define the functions $\bar g_2$, $\lambda$, $\mu$, and $h_\mu$ on the interval $[0, r_2)$ by
$$\bar g_2(t) = \frac{K(t)}{1 - L_0 t},$$
$$\lambda(t) = 1 + M \bar g_2(t),$$
$$\mu(t) = \lambda^q(t)\, g_2(t)$$
and
$$h_\mu(t) = \mu(t) - 1.$$
We have that $h_\mu(0) < 0$. Suppose that
$$\mu(t) \to +\infty \ \text{or to a number greater than } 1 \quad \text{as } t \to r_2^-.$$
Denote by $r^{(q)}$ the smallest zero of the function $h_\mu$ on the interval $(0, r_2)$. Define $r^*$ by
$$r^* = \min\{r_1, r^{(q)}\}.$$
Proposition 1.
Suppose that the conditions of Theorem 1 hold. Then, the sequence $\{x_m\}$ generated by method (30) for $x_0 \in U(x^*, r^*) \setminus \{x^*\}$ is well defined, remains in $U(x^*, r^*)$, and converges to $x^*$. Moreover, the following estimates hold:
$$\begin{aligned} \|y_m - x^*\| &\le g_1(\|x_m - x^*\|)\, \|x_m - x^*\| \le \|x_m - x^*\| < r^*, \\ \|z_m - x^*\| &\le g_2(\|x_m - x^*\|)\, \|x_m - x^*\| \le \|x_m - x^*\|, \\ \|z_m^{(i)} - x^*\| &\le \lambda^i(\|x_m - x^*\|)\, \|z_m - x^*\| \le \lambda^i(\|x_m - x^*\|)\, g_2(\|x_m - x^*\|)\, \|x_m - x^*\| < \|x_m - x^*\|, \quad i = 1, 2, \ldots, q - 1, \end{aligned}$$
and
$$\|x_{m+1} - x^*\| = \|z_m^{(q)} - x^*\| \le \lambda^q(\|x_m - x^*\|)\, \|z_m - x^*\| \le \mu(\|x_m - x^*\|)\, \|x_m - x^*\|.$$
Furthermore, $x^*$ is the only solution of $G(x) = 0$ in $D_1 = D \cap U(x^*, r^*)$.
Proof. 
Only the new estimates (50) and (51) will be shown; the first two estimates follow exactly as in the proof of Theorem 1. We obtain that
$$\|\psi(x_m, y_m, z_m)\, G'(x^*)\| \le \big\|2I - G'(x_m)^{-1}[z_m, y_m; G]\big\|\, \big\|G'(x_m)^{-1} G'(x^*)\big\| \le \frac{K(\|x_m - x^*\|)}{1 - L_0 \|x_m - x^*\|} = \bar g_2(\|x_m - x^*\|).$$
Moreover, we have
$$\begin{aligned} \|z_m^{(1)} - x^*\| &= \big\|z_m - x^* - \psi(x_m, y_m, z_m)\, G(z_m)\big\| \\ &\le \|z_m - x^*\| + \big\|\psi(x_m, y_m, z_m)\, G'(x^*)\big\|\, \big\|G'(x^*)^{-1} G(z_m)\big\| \\ &\le \|z_m - x^*\| + \bar g_2(\|x_m - x^*\|)\, M\, \|z_m - x^*\| \\ &= \lambda(\|x_m - x^*\|)\, \|z_m - x^*\| \le \mu(\|x_m - x^*\|)\, \|x_m - x^*\|. \end{aligned}$$
Similarly, we obtain
$$\begin{aligned} \|z_m^{(2)} - x^*\| &\le \lambda(\|x_m - x^*\|)\, \|z_m^{(1)} - x^*\| \le \lambda^2(\|x_m - x^*\|)\, \|z_m - x^*\|, \\ &\;\;\vdots \\ \|z_m^{(i)} - x^*\| &\le \lambda^i(\|x_m - x^*\|)\, \|z_m - x^*\|, \\ \|x_{m+1} - x^*\| &= \|z_m^{(q)} - x^*\| \le \lambda^q(\|x_m - x^*\|)\, \|z_m - x^*\| \le \mu(\|x_m - x^*\|)\, \|x_m - x^*\|. \end{aligned}$$
That is, $x_m, y_m, z_m, z_m^{(i)} \in U(x^*, r^*)$, $i = 1, 2, \ldots, q$, and
$$\|x_{m+1} - x^*\| \le \bar c\, \|x_m - x^*\|,$$
where $\bar c = \mu(\|x_0 - x^*\|) \in [0, 1)$, so $\lim_{m \to \infty} x_m = x^*$ and $x_{m+1} \in U(x^*, r^*)$. The uniqueness result is standard, as shown in Theorem 1. □
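A small numerical sketch of how the radii $r_1$, $r_2$, and $r^{(q)}$ can be computed from given Lipschitz constants is given below. It is our own illustration of the definitions above (it uses a simple bracketing root finder and the constants quoted for Example 3 in Section 4 purely as placeholders), and it is not intended to reproduce the tabulated values authoritatively.

```python
import numpy as np
from scipy.optimize import brentq

def radii(L0, L, L1, M, q):
    """Compute r1, r2, r^(q) and r* for scheme (30) from the g, K, lambda, mu functions above."""
    g1 = lambda t: L * t / (2.0 * (1.0 - L0 * t))
    g2 = lambda t: (1.0 + M / (1.0 - L0 * t)) * g1(t)
    K  = lambda t: 1.0 + (L0 + L1 * (g2(t) + g1(t))) * t / (1.0 - L0 * t)
    g2bar = lambda t: K(t) / (1.0 - L0 * t)
    lam = lambda t: 1.0 + M * g2bar(t)
    mu  = lambda t: lam(t) ** q * g2(t)

    r1 = 2.0 / (2.0 * L0 + L)
    eps = 1e-9 * r1                                    # assumes mu(eps) < 1 < mu(r2 - eps)
    r2 = brentq(lambda t: g2(t) - 1.0, eps, r1 - eps)  # zero of h2 in (0, r1)
    rq = brentq(lambda t: mu(t) - 1.0, eps, r2 - eps)  # zero of h_mu in (0, r2)
    return r1, r2, rq, min(r1, rq)

# Placeholder call with the constants quoted for Example 3 (q = 1, 2, 3 correspond to M5, M7, M9)
for q in (1, 2, 3):
    print(q, radii(L0=8.137146, L=8.137146, L1=8.137146, M=0.610065, q=q))
```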

4. Numerical Examples

Here, we shall demonstrate the theoretical results of local convergence which we have proved in Section 2 and Section 3. To do so, the methods of the family (30) of order five, seven, and nine are chosen. Let us denote these methods by $M_5$, $M_7$, and $M_9$, respectively. The divided difference in the examples is computed by $[x, y; F] = \int_0^1 F'(y + \theta(x - y))\, d\theta$. We consider three numerical examples, which are presented as follows:
Example 1.
Let us consider $B = \mathbb{R}^{m-1}$ for a natural number $m \ge 2$. $B$ is equipped with the max-norm $\|x\| = \max_{1 \le i \le m-1} |x_i|$, and the corresponding matrix norm is $\|A\| = \max_{1 \le i \le m-1} \sum_{j=1}^{m-1} |a_{ij}|$ for $A = (a_{ij})_{1 \le i, j \le m-1}$. Consider the two-point boundary value problem on the interval $[0, 1]$:
$$v'' + v^{3/2} = 0, \quad v(0) = v(1) = 0.$$
Let us denote $\Delta = 1/m$, $u_i = i\Delta$, and $v_i = v(u_i)$ for each $i = 0, 1, \ldots, m$. We can write the discretization of $v''$ at the points $u_i$ in the following form:
$$v_i'' \approx \frac{v_{i-1} - 2 v_i + v_{i+1}}{\Delta^2} \quad \text{for each } i = 1, 2, \ldots, m - 1.$$
Using the boundary conditions in (54), we obtain that $v_0 = v_m = 0$, and (54) is equivalent to the system of nonlinear equations $F(v) = 0$ with $v = (v_1, v_2, \ldots, v_{m-1})^T$ of the form
$$\Delta^2 v_1^{3/2} - 2 v_1 + v_2 = 0, \qquad v_{i-1} + \Delta^2 v_i^{3/2} - 2 v_i + v_{i+1} = 0 \quad \text{for each } i = 2, 3, \ldots, m - 1.$$
Using (55), the Fréchet derivative of the operator $F$ is given by
$$F'(v) = \begin{pmatrix} \frac{3}{2}\Delta^2 v_1^{1/2} - 2 & 1 & 0 & \cdots & 0 \\ 1 & \frac{3}{2}\Delta^2 v_2^{1/2} - 2 & 1 & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 1 & \frac{3}{2}\Delta^2 v_{m-2}^{1/2} - 2 & 1 \\ 0 & \cdots & 0 & 1 & \frac{3}{2}\Delta^2 v_{m-1}^{1/2} - 2 \end{pmatrix}.$$
Choosing m = 11 , the corresponding solution is x * = ( 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 ) T , and we have L 0 = L = L 1 = 3.942631477 and M = 2 . The parameters using method (30) are given in Table 1.
Thus, it follows that the above-considered methods of scheme (30) converge to x * and remain in U ¯ ( x * , r * ) .
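For completeness, a sketch of how the discretized operator F and its Fréchet derivative might be assembled for this example is shown below (our own illustration; the absolute value inside the 3/2 power is only a numerical safeguard and is not part of the original formulation). The resulting pair can, for instance, be fed to the fifth_order_step sketch from the Introduction.

```python
import numpy as np

def make_bvp(m=11):
    """Discretization of v'' + v^(3/2) = 0, v(0) = v(1) = 0, with m - 1 unknowns (Example 1)."""
    d = 1.0 / m                                       # the mesh size (Delta in the text)
    def F(v):
        out = np.empty(m - 1)
        for i in range(m - 1):
            left = v[i - 1] if i > 0 else 0.0         # boundary value v_0 = 0
            right = v[i + 1] if i < m - 2 else 0.0    # boundary value v_m = 0
            out[i] = left - 2.0 * v[i] + right + d**2 * np.abs(v[i])**1.5
        return out
    def J(v):
        A = np.diag(-2.0 + 1.5 * d**2 * np.sqrt(np.abs(v)))
        A += np.diag(np.ones(m - 2), 1) + np.diag(np.ones(m - 2), -1)
        return A
    return F, J

F, J = make_bvp(11)      # 10 unknowns; the solution is x* = (0, ..., 0)^T
```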
Example 2.
Researchers have determined that the velocity of blood in an artery is a function of the distance of the blood from the artery's central axis (Figure 1). According to Poiseuille's law, the velocity (in cm/s) of blood that is $r$ cm from the central axis of an artery is given by the function
$$S(r) = C(R^2 - r^2),$$
where $R$ is the radius of the artery and $C$ is a constant that depends on the viscosity of the blood and the pressure difference between the two ends of the vessel. Assume that, for a particular artery,
$$C = 1.76 \times 10^5 \ \mathrm{cm/s}$$
and
$$R = 1.2 \times 10^{-2} \ \mathrm{cm}.$$
Using these numerical values, the problem reduces to
$$f_2(x) = 25.344 - 176{,}000\, x^2 = 0,$$
where x = r .
The graph of the function f 2 ( x ) is shown in Figure 2.
The zero of f 2 ( x ) = 0 is x * = 0.012 ; then, we have L 0 = L = L 1 = 84.2803 and M = 5280 . The parameters using method (30) are given in Table 2.
It follows that the above-considered methods of scheme (30) will converge to x * and remain in U ¯ ( x * , r * ) if r * is chosen as shown in Table 2.
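As an illustration only (not a computation reported in the paper), a scalar instance of method (3) applied to $f_2$ converges rapidly from a nearby starting guess; here the exact divided difference of the quadratic is used, and the starting point is our own choice.

```python
# Scalar form of method (3) applied to f2(x) = 25.344 - 176000 x^2 (Example 2).
f  = lambda x: 25.344 - 176000.0 * x**2
df = lambda x: -352000.0 * x
dd = lambda a, b: -176000.0 * (a + b)     # exact divided difference [a, b; f2] for a quadratic

x = 0.05                                  # hypothetical starting guess near x* = 0.012
for _ in range(5):
    y = x - f(x) / df(x)
    z = y - f(y) / df(x)
    x = z - (2.0 - dd(z, y) / df(x)) * f(z) / df(x)
print(x)                                  # should approach 0.012
```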
Example 3.
Consider the quasi-one-dimensional isentropic flow of a perfect gas through a variable-area channel, shown in Figure 3.
The relationship between the Mach number $M$ and the flow area $A$, derived by Zucrow and Hoffman [25], is given by
$$\varepsilon = \frac{A}{A^*} = \frac{1}{M}\left[\frac{2}{\gamma + 1}\left(1 + \frac{\gamma - 1}{2} M^2\right)\right]^{(\gamma + 1)/(2(\gamma - 1))},$$
where $A^*$ is the choking area (i.e., the area where $M = 1$) and $\gamma$ is the specific heat ratio of the flowing gas. The area–Mach-number relation is shown in Figure 4.
For each value of $\varepsilon$, two values of $M$ exist, one less than unity (i.e., subsonic flow) and one greater than unity (i.e., supersonic flow). For the values $\varepsilon = 5.00$ and $\gamma = 1.4$, Equation (57) becomes
$$f_3(x) = 5 - 0.578704\, \frac{(1 + 0.2\, x^2)^3}{x},$$
where x = M . The graph of the function f 3 ( x ) is shown in Figure 5, and the zero is x * = 0.116689 . Then, we have that
L = L 0 = L 1 = 8.137146 , and M = 0.610065 .
The parameters using method (30) are given in Table 3.
The computed values of r * show that the considered methods of the scheme (30) will converge to x * and remain in U ¯ ( x * , r * ) .

5. Study of Complex Dynamics of the Method

To view the geometry of the methods $M_5$, $M_7$, and $M_9$ of the family (30) in the complex plane, we present the basins of attraction of the roots obtained by applying the methods to some test polynomials (see Table 4). The corresponding basins are displayed in Figure 6, Figure 7 and Figure 8. To draw the basins, we use a rectangular region $R \subset \mathbb{C}$ of size $[-2, 2] \times [-2, 2]$ and assign a different colour to each basin. The dark region is assigned to the points for which the method diverges.
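The basin pictures can be reproduced along the following lines. The sketch below (our own, with our own grid resolution, iteration cap, tolerance, and colour handling) applies a scalar complex form of method (3) to $P_1(z) = z^2 - 4$ on $[-2, 2] \times [-2, 2]$ and colours each grid point by the root it converges to, leaving divergent points dark:

```python
import numpy as np
import matplotlib.pyplot as plt

def m5_step(p, dp, z):
    """One vectorized step of method (3) for a scalar complex polynomial p."""
    y = z - p(z) / dp(z)
    w = y - p(y) / dp(z)
    diff = np.where(w == y, 1.0, w - y)
    dd = np.where(w == y, dp(y), (p(w) - p(y)) / diff)   # divided difference [w, y; p]
    return w - (2.0 - dd / dp(z)) * p(w) / dp(z)

p, dp, roots = (lambda z: z**2 - 4), (lambda z: 2 * z), [2.0, -2.0]
xs = np.linspace(-2.0, 2.0, 400)
Z = xs[None, :] + 1j * xs[:, None]
with np.errstate(all="ignore"):
    for _ in range(25):
        Z = m5_step(p, dp, Z)
basin = np.full(Z.shape, -1)                             # -1: no convergence (dark)
for k, r in enumerate(roots):
    basin[np.abs(Z - r) < 1e-6] = k
plt.imshow(basin, extent=[-2, 2, -2, 2], origin="lower")
plt.show()
```

The higher order members $M_7$ and $M_9$ are obtained by appending further corrector substeps with the same frozen operator, exactly as in scheme (30).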

6. Conclusions

In this work, we have extended the applicability of method (3) by presenting its local convergence analysis and studying its complex dynamics. Rather than using procedures that depend on higher order derivatives together with Taylor series expansions, we have used only the first derivative, since this is the only derivative that actually appears in the method. A further benefit of our approach is the computation of uniqueness balls in which the iterates lie, as well as estimates on $\|x_n - x^*\|$. These goals are accomplished using our Lipschitz-type conditions. The theoretical results so obtained are verified on some applied problems. Finally, we have examined the stability of the method by means of a tool from complex dynamics, namely the basins of attraction.

Author Contributions

Conceptualization, S.K. and J.R.S.; Formal analysis, S.K. and L.J.; Investigation, S.K. and L.J.; Methodology, D.K.; Writing-original draft, D.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Technical University of Cluj-Napoca open access publication grant.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We would like to express our gratitude to the anonymous reviewers for their help with the publication of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Potra, F.-A.; Ptak, V. Nondiscrete Induction and Iterative Processes; Research Notes in Mathematics; Pitman: Boston, MA, USA, 1984.
2. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964.
3. Argyros, I.K.; Hilout, S. Computational Methods in Nonlinear Analysis; World Scientific Publishing Company: Singapore, 2013.
4. Amat, S.; Hernández, M.A.; Romero, N. Semilocal convergence of a sixth order iterative method for quadratic equations. Appl. Numer. Math. 2012, 62, 833–841.
5. Argyros, I.K.; Ren, H. Improved local analysis for certain class of iterative methods with cubic convergence. Numer. Algor. 2012, 59, 505–521.
6. Argyros, I.K.; Regmi, S. Undergraduate Research at Cameron University on Iterative Procedures in Banach and Other Spaces; Nova Science Publisher: New York, NY, USA, 2019.
7. Argyros, I.K.; Sharma, J.R.; Kumar, D. Ball convergence of the Newton–Gauss method in Banach space. SeMA 2017, 74, 429–439.
8. Babajee, D.K.R.; Dauhoo, M.Z.; Darvishi, M.T.; Barati, A. A note on the local convergence of iterative methods based on Adomian decomposition method and 3-node quadrature rule. Appl. Math. Comput. 2008, 200, 452–458.
9. Candela, V.; Marquina, A. Recurrence relations for rational cubic methods I: The Halley method. Computing 1990, 44, 169–184.
10. Candela, V.; Marquina, A. Recurrence relations for rational cubic methods II: The Chebyshev method. Computing 1990, 45, 355–367.
11. Chun, C.; Stănică, P.; Neta, B. Third-order family of methods in Banach spaces. Comput. Math. Appl. 2011, 61, 1665–1675.
12. Cordero, A.; Torregrosa, J.R. Variants of Newton's method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698.
13. Ezquerro, J.A.; Hernández, M.A. Recurrence relations for Chebyshev-type methods. Appl. Math. Optim. 2000, 41, 227–236.
14. Usurelu, G.I.; Bejenaru, A.; Postolache, M. Newton-like methods and polynomiographic visualization of modified Thakur processes. Int. J. Comput. Math. 2021, 98, 1049–1068.
15. Gdawiec, K.; Kotarski, W.; Lisowska, A. Polynomiography based on the nonstandard Newton-like root finding methods. Abstr. Appl. Anal. 2015, 2015, 797594.
16. Hasanov, V.I.; Ivanov, I.G.; Nedzhibov, G. A new modification of Newton's method. Appl. Math. Eng. 2002, 27, 278–286.
17. Sharma, J.R.; Arora, H. On efficient weighted-Newton methods for solving systems of nonlinear equations. Appl. Math. Comput. 2013, 222, 497–506.
18. Sharma, J.R.; Kumar, D. A fast and efficient composite Newton–Chebyshev method for systems of nonlinear equations. J. Complex. 2018, 49, 56–73.
19. Ren, H.; Wu, Q. Convergence ball and error analysis of a family of iterative methods with cubic convergence. Appl. Math. Comput. 2009, 209, 369–378.
20. Weerakoon, S.; Fernando, T.G.I. A variant of Newton's method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93.
21. Hernández, M.A.; Salanova, M.A. Modification of the Kantorovich assumptions for semilocal convergence of the Chebyshev method. J. Comput. Appl. Math. 2000, 126, 131–143.
22. Gutiérrez, J.M.; Magreñán, A.A.; Romero, N. On the semilocal convergence of Newton–Kantorovich method under center-Lipschitz conditions. Appl. Math. Comput. 2013, 221, 79–88.
23. Kou, J.S.; Li, Y.T.; Wang, X.H. A modification of Newton's method with third-order convergence. Appl. Math. Comput. 2006, 181, 1106–1111.
24. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970.
25. Hoffman, J.D. Numerical Methods for Engineers and Scientists; McGraw-Hill Book Company: New York, NY, USA, 1992.
Figure 1. Cut-away view of an artery.
Figure 2. Graph of $f_2(x)$.
Figure 3. In quasi-one-dimensional flow, the stream-tube cross-sectional area is allowed to vary in one direction, $A = A(x)$.
Figure 4. The area–Mach-number relation.
Figure 5. Graph of $f_3(x)$.
Figure 6. Basins of attraction of $M_5$, $M_7$, and $M_9$ for polynomial $P_1(z)$.
Figure 7. Basins of attraction of $M_5$, $M_7$, and $M_9$ for polynomial $P_2(z)$.
Figure 8. Basins of attraction of $M_5$, $M_7$, and $M_9$ for polynomial $P_3(z)$.
Table 1. Numerical results for Example 1.
M_5: r_1 = 0.00791011, r^(1) = 0.00470691, r* = 0.00470691.
M_7: r_1 = 0.00791011, r^(2) = 8.50886 × 10^{-10}, r* = 8.50886 × 10^{-10}.
M_9: r_1 = 0.00791011, r^(3) = 1.61122 × 10^{-13}, r* = 1.61122 × 10^{-13}.
Table 2. Numerical results for Example 2.
M_5: r_1 = 0.169092, r^(1) = 0.0724823, r* = 0.0724823.
M_7: r_1 = 0.169092, r^(2) = 0.0331151, r* = 0.0331151.
M_9: r_1 = 0.169092, r^(3) = 0.0140628, r* = 0.0140628.
Table 3. Numerical results for Example 3.
M_5: r_1 = 0.0819303, r^(1) = 0.050974, r* = 0.050974.
M_7: r_1 = 0.0819303, r^(2) = 0.0355748, r* = 0.0355748.
M_9: r_1 = 0.0819303, r^(3) = 0.0254287, r* = 0.0254287.
Table 4. Comparison of performance based on basins of attraction of the methods.
1. $P_1(z) = z^2 - 4$. Roots (colour): 2 (red), −2 (green). Best performer: $M_5$, $M_7$, $M_9$. Poor performer: –.
2. $P_2(z) = z^3 - z$. Roots (colour): 1 (red), 0 (green), −1 (blue). Best performer: $M_5$. Poor performer: $M_7$, $M_9$.
3. $P_3(z) = z^6 + \frac{15}{7}z^5 + 5z^4 + \frac{7}{3}z^3 - z^2 + z + 1$. Roots (colour): −0.8277 (cyan), −0.7654 − 1.9514i (yellow), −0.6562 (purple), −0.7654 + 1.9514i (blue), 0.4357 − 0.4786i (green), 0.4357 + 0.4786i (red). Best performer: $M_5$, $M_7$. Poor performer: $M_9$.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
