Article

Local Convergence and Attraction Basins of Higher Order, Jarratt-Like Iterations

1
Department of Mathematics, Sant Longowal Institute of Engineering & Technology, Longowal, Punjab 148106, India
2
Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
*
Author to whom correspondence should be addressed.
Mathematics 2019, 7(12), 1203; https://doi.org/10.3390/math7121203
Submission received: 11 November 2019 / Revised: 2 December 2019 / Accepted: 4 December 2019 / Published: 8 December 2019

Abstract

We studied the local convergence of a family of sixth order Jarratt-like methods in a Banach space setting. The procedure applied provides the radius of convergence and bounds on errors under conditions based on the first Fréchet-derivative only. Such estimates are not available in approaches using Taylor expansions of higher order derivatives, which may be nonexistent or costly to compute. In this sense, we extend the applicability of the methods considered, since they can be applied to a wider class of functions. Numerical testing on examples shows that the present results can be applied to cases where earlier results are not applicable. Finally, the convergence domains are assessed by means of a geometrical approach, namely, the basins of attraction, which allow us to identify members of the family with stable convergence behavior and members with unstable behavior.

1. Introduction

We provide local criteria for finding a unique solution δ of the nonlinear equation
H ( x ) = 0 ,
for Banach space valued mappings H : D ⊆ X → Y, where H is differentiable in the sense of Fréchet [1,2]. Many authors have studied local and semilocal convergence criteria of iterative methods (see, for example, [3,4,5,6,7,8,9,10,11,12,13,14]).
The most well-known iterative method for approximating a solution δ of Equation (1) is Newton’s method, which is given by
x_{n+1} = x_n - H'(x_n)^{-1} H(x_n), for each n = 0, 1, 2, ...,
which has a quadratic order of convergence. In order to achieve higher convergence order, a number of modified, multistep Newton’s or Newton-type iterations have been developed in the literature; see [3,4,6,7,9,10,11,12,15,16,17,18,19] and references cited therein.
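As a minimal illustration of the iteration, the following sketch applies scalar Newton's method to H(x) = x^2 - 2 (an arbitrary test function chosen here only for demonstration):

```python
# Scalar Newton iteration x_{n+1} = x_n - H(x_n)/H'(x_n),
# applied to the illustrative choice H(x) = x^2 - 2.
H = lambda x: x * x - 2.0
Hp = lambda x: 2.0 * x  # first derivative of H

x = 1.5  # initial guess
for _ in range(6):
    x -= H(x) / Hp(x)

print(x)  # converges to sqrt(2) ≈ 1.414213562...
```

The quadratic order is visible in how the number of correct digits roughly doubles at each step.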
There is another important class of multistep methods based on Jarratt or Jarratt-type methods [20,21,22]. Such methods have been extensively studied in the literature; see [23,24,25,26,27,28] and references therein. In particular, Alzahrani et al. [23] have recently proposed a class of sixth order methods for approximating a solution of H(x) = 0 using a Jarratt-like composite scheme. These methods are very attractive and their local convergence analysis is worthy of study. The authors have presented some important special cases of the class, which are defined for each n = 0, 1, 2, ... by
Method-I:
y_n = x_n - (2/3) H'(x_n)^{-1} H(x_n),
z_n = x_n - [(12 (H'(x_n) + H'(y_n))^{-1} H'(x_n) - 9 I) (H'(x_n) + H'(y_n))^{-1} H'(x_n) + (5/2) I] H'(x_n)^{-1} H(x_n),
x_{n+1} = z_n + [2 H'(x_n)^{-1} - 6 (H'(x_n) + H'(y_n))^{-1}] H(z_n).
Method-II:
y_n = x_n - (2/3) H'(x_n)^{-1} H(x_n),
z_n = x_n - [9 (H'(x_n) + H'(y_n))^{-1} H'(x_n) + (3/2) H'(x_n)^{-1} (H'(x_n) + H'(y_n)) - (13/2) I] H'(x_n)^{-1} H(x_n),
x_{n+1} = z_n + [2 H'(x_n)^{-1} - 6 (H'(x_n) + H'(y_n))^{-1}] H(z_n).
Method-III:
y_n = x_n - (2/3) H'(x_n)^{-1} H(x_n),
z_n = x_n - (1/4) [3 (3 H'(y_n) - H'(x_n))^{-1} (H'(x_n) + H'(y_n)) + I] H'(x_n)^{-1} H(x_n),
x_{n+1} = z_n + [2 H'(x_n)^{-1} - 6 (H'(x_n) + H'(y_n))^{-1}] H(z_n).
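For concreteness, a finite-dimensional sketch of Method-I (using numpy, with explicit inverses for readability rather than efficiency; the test function and starting point below are our own illustrative choices) can be written as:

```python
import numpy as np

def method_one_step(H, J, x):
    """One step of Method-I for a system H(x) = 0 with Jacobian J(x)."""
    I = np.eye(x.size)
    Hx, Jx = H(x), J(x)
    Jx_inv = np.linalg.inv(Jx)            # H'(x_n)^{-1}
    y = x - (2.0 / 3.0) * Jx_inv @ Hx
    A_inv = np.linalg.inv(Jx + J(y))      # (H'(x_n) + H'(y_n))^{-1}
    z = x - ((12 * A_inv @ Jx - 9 * I) @ A_inv @ Jx + 2.5 * I) @ Jx_inv @ Hx
    return z + (2 * Jx_inv - 6 * A_inv) @ H(z)

# Illustrative use on H(x) = (x1^2 - 1, x2^2 - 4), root (1, 2):
H = lambda v: np.array([v[0] ** 2 - 1, v[1] ** 2 - 4])
J = lambda v: np.array([[2 * v[0], 0.0], [0.0, 2 * v[1]]])
x = np.array([2.0, 3.0])
for _ in range(5):
    x = method_one_step(H, J, x)

print(x)  # ≈ (1, 2)
```

Methods II and III differ only in the second sub-step and can be coded analogously.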
The sixth order of convergence of these methods was established in [23] by using Taylor expansions and hypotheses requiring derivatives up to the sixth order, although only first order derivatives appear in the methods. Such hypotheses on higher derivatives restrict the applicability of the methods. As a motivational example, let us consider a function Q on X = Y = R, D = [-1/2, 5/2], defined by
Q(x) = x^3 ln x^2 + x^5 - x^4 for x ≠ 0, and Q(0) = 0.
We have that
Q'(x) = 3 x^2 ln x^2 + 5 x^4 - 4 x^3 + 2 x^2,
Q''(x) = 6 x ln x^2 + 20 x^3 - 12 x^2 + 10 x
and
Q'''(x) = 6 ln x^2 + 60 x^2 - 24 x + 22.
Then, Q''' is unbounded on D. Notice also that the proofs of convergence in [23] use Taylor expansions up to the term containing the sixth Fréchet-derivative. In this study, we discuss the local convergence of the methods defined above by employing hypotheses only on the first Fréchet-derivative, taking advantage of its Lipschitz continuity. In addition, we present the results in the more general setting of a Banach space.
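A quick numerical check (a sketch; the sample points are arbitrary) confirms that the third derivative blows up near the origin:

```python
import math

# Third derivative of the motivational example:
# Q'''(x) = 6 ln x^2 + 60 x^2 - 24 x + 22, which -> -infinity as x -> 0.
def Q3(x):
    return 6 * math.log(x ** 2) + 60 * x ** 2 - 24 * x + 22

print(Q3(1e-2), Q3(1e-4), Q3(1e-8))  # increasingly large negative values
```

So no Lipschitz (or even boundedness) condition on the sixth derivative can hold on D, yet the convergence analysis below still applies.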
The rest of the paper is organized as follows. In Section 2, we present the local convergence of Methods I, II, and III. Theoretical results are validated through numerical examples in Section 3. Section 4 is devoted to checking the stability of the methods by means of a complex dynamical tool, namely, the basins of attraction. Concluding remarks are given in Section 5.

2. Local Convergence Analysis

Here we discuss the local convergence analysis of Method-I, Method-II, and Method-III. In the analysis, we find the radius of convergence and computable error bounds on the distances ‖x_n - δ‖, and then establish the uniqueness of the solution δ in a certain ball based on some Lipschitz constants.

2.1. Convergence for Method-I

Let ξ_0 : [0, +∞) → [0, +∞) be an increasing and continuous function with ξ_0(0) = 0. Assume that the equation
ξ_0(s) = 1
has at least one positive solution. Denote by ρ_0 the smallest such solution.
Let ξ : [0, ρ_0) → [0, +∞) and ξ_1 : [0, ρ_0) → [0, +∞) also be increasing and continuous functions with ξ(0) = 0. Moreover, define scalar functions on the interval [0, ρ_0) by
g_1^1(s) = (1/(1 - ξ_0(s))) [∫_0^1 ξ((1 - θ)s) dθ + (1/3) ∫_0^1 ξ_1(θs) dθ],
and
h_1^1(s) = g_1^1(s) - 1.
Suppose that
ξ_1(0) < 3.
By (5), we have h_1^1(0) < 0 and h_1^1(s) → +∞ as s → ρ_0^-. It follows from the intermediate value theorem that the equation h_1^1(s) = 0 has at least one solution in the interval (0, ρ_0). Denote by r_1 the smallest such solution.
Suppose that the equation
p(s) = 1
has at least one positive solution, where p(s) = (1/2) [ξ_0(s) + ξ_0(g_1^1(s)s)]. Denote by ρ_1 the smallest such solution.
Set ρ = min{ρ_0, ρ_1}. Define functions g_2^1, h_2^1, g_3^1, and h_3^1 on the interval [0, ρ) by
g_2^1(s) = ∫_0^1 ξ((1 - θ)s) dθ / (1 - ξ_0(s)) + 3 (ξ_0(s) + ξ_0(g_1^1(s)s))^2 ∫_0^1 ξ_1(θs) dθ / (8 (1 - p(s))^2 (1 - ξ_0(s))) + 3 (ξ_0(s) + ξ_0(g_1^1(s)s)) ∫_0^1 ξ_1(θs) dθ / (4 (1 - p(s))^2),
h_2^1(s) = g_2^1(s) - 1,
g_3^1(s) = [1 + (ξ_0(s) + ξ_0(g_1^1(s)s) + ξ_1(s)) ∫_0^1 ξ_1(θ g_2^1(s)s) dθ / ((1 - ξ_0(s)) (1 - p(s)))] g_2^1(s)
and
h_3^1(s) = g_3^1(s) - 1.
We get that h_2^1(0) = h_3^1(0) = -1, h_2^1(s) → +∞ as s → ρ^-, and h_3^1(s) → +∞ as s → ρ^-. Denote by r_2 and r_3 the smallest solutions of the equations h_2^1(s) = 0 and h_3^1(s) = 0 in (0, ρ), respectively.
Define a radius of convergence r by
r = min{r_i}, i = 1, 2, 3.
Then, we have that for each s [ 0 , r ) ,
0 ≤ g_i^1(s) < 1.
In order to study Method-I, we need to rewrite it in a more convenient form.
Lemma 1.
Suppose that iterates { x n } , { y n } and { z n } are well defined for each n = 0 , 1 , 2 , . Then, Method-I can be rewritten as
y_n = x_n - (2/3) H'(x_n)^{-1} H(x_n),
z_n = x_n - H'(x_n)^{-1} H(x_n) - (3/2) A_n^{-1} (H'(x_n) - H'(y_n)) A_n^{-1} (H'(x_n) - H'(y_n)) H'(x_n)^{-1} H(x_n) - 3 A_n^{-1} (H'(x_n) - H'(y_n)) A_n^{-1} H(x_n),
x_{n+1} = z_n + 2 H'(x_n)^{-1} (H'(y_n) - 2 H'(x_n)) A_n^{-1} H(z_n),
where A_n = H'(x_n) + H'(y_n).
Proof. 
By the second sub-step of Method-I, we have in turn that
z_n = x_n - H'(x_n)^{-1} H(x_n) - [(12 A_n^{-1} H'(x_n) - 9 I) A_n^{-1} H'(x_n) + (3/2) I] H'(x_n)^{-1} H(x_n)
= x_n - H'(x_n)^{-1} H(x_n) - 3 [(4 A_n^{-1} H'(x_n) - 3 I) A_n^{-1} H'(x_n) + (1/2) I] H'(x_n)^{-1} H(x_n)
= x_n - H'(x_n)^{-1} H(x_n) - 12 (A_n^{-1} H'(x_n) - (1/2) I) (A_n^{-1} H'(x_n) - (1/4) I) H'(x_n)^{-1} H(x_n).
But by using the estimates
A_n^{-1} H'(x_n) - (1/4) I = (1/4) A_n^{-1} (4 H'(x_n) - H'(x_n) - H'(y_n)) = (1/4) A_n^{-1} ((H'(x_n) - H'(y_n)) + 2 H'(x_n))
and
A_n^{-1} H'(x_n) - (1/2) I = A_n^{-1} (H'(x_n) - (1/2) A_n) = (1/2) A_n^{-1} (H'(x_n) - H'(y_n)),
in the preceding estimates, we show the equivalent second sub-step of Method-I.
To show the equivalent third sub-step of Method-I notice that
H'(x_n)^{-1} - 3 (H'(x_n) + H'(y_n))^{-1} = H'(x_n)^{-1} (H'(x_n) + H'(y_n) - 3 H'(x_n)) A_n^{-1} = H'(x_n)^{-1} (H'(y_n) - 2 H'(x_n)) A_n^{-1}.
Then, from the third sub-step of Method-I and the preceding estimate, we obtain in turn, that
x_{n+1} = z_n + 2 (H'(x_n)^{-1} - 3 A_n^{-1}) H(z_n),
leading to the equivalent third sub-step of Method-I. □
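The algebraic equivalence established in Lemma 1 can be spot-checked numerically (a sketch; the random matrices and vectors below are arbitrary stand-ins for H'(x_n), H'(y_n), and H(x_n)):

```python
import numpy as np

rng = np.random.default_rng(0)
I = np.eye(3)
Jx = I + 0.1 * rng.standard_normal((3, 3))   # stand-in for H'(x_n)
Jy = I + 0.1 * rng.standard_normal((3, 3))   # stand-in for H'(y_n)
Hx = rng.standard_normal(3)                  # stand-in for H(x_n)
x = rng.standard_normal(3)

Jx_inv = np.linalg.inv(Jx)
A_inv = np.linalg.inv(Jx + Jy)               # A_n^{-1}
E = A_inv @ (Jx - Jy)                        # A_n^{-1}(H'(x_n) - H'(y_n))

# Second sub-step of Method-I: original form vs. the Lemma 1 form.
z_orig = x - ((12 * A_inv @ Jx - 9 * I) @ A_inv @ Jx + 2.5 * I) @ Jx_inv @ Hx
z_lemma = x - Jx_inv @ Hx - 1.5 * E @ E @ Jx_inv @ Hx - 3.0 * E @ A_inv @ Hx

print(np.max(np.abs(z_orig - z_lemma)))  # agreement up to roundoff
```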
Let U(v, ρ) and Ū(v, ρ) denote, respectively, the open and closed balls in X with center v ∈ X and radius ρ > 0. Next, we study the local convergence of Method-I.
Theorem 1.
Let H : D ⊆ X → Y be a continuously Fréchet-differentiable operator. Suppose that there exist δ ∈ D and functions ξ_0, ξ, and ξ_1 as defined previously, such that for each x ∈ D
H(δ) = 0, H'(δ)^{-1} ∈ L(Y, X),
‖H'(δ)^{-1} (H'(x) - H'(δ))‖ ≤ ξ_0(‖x - δ‖).
Set: D_0 = D ∩ U(δ, ρ_0).
‖H'(δ)^{-1} (H'(x) - H'(y))‖ ≤ ξ(‖x - y‖), for each x, y ∈ D_0,
‖H'(δ)^{-1} H'(x)‖ ≤ ξ_1(‖x - δ‖), for each x ∈ D_0,
Ū(δ, r) ⊆ D
and (4)–(6) hold, where ρ_0 and r are defined previously. Then, the sequence {x_n} generated by Method-I for x_0 ∈ U(δ, r) \ {δ} is well defined, remains in U(δ, r) for each n = 0, 1, ..., and converges to δ. Moreover, the following estimates hold.
‖y_n - δ‖ ≤ g_1^1(‖x_n - δ‖) ‖x_n - δ‖ < ‖x_n - δ‖ < r,
‖z_n - δ‖ ≤ g_2^1(‖x_n - δ‖) ‖x_n - δ‖ < ‖x_n - δ‖
and
‖x_{n+1} - δ‖ ≤ g_3^1(‖x_n - δ‖) ‖x_n - δ‖,
where the “g” functions were defined previously. Furthermore, if there exists r* ≥ r such that ∫_0^1 ξ_0(θ r*) dθ < 1, then the limit point δ is the only solution of the equation H(x) = 0 in D_1 = D ∩ Ū(δ, r*).
Proof. 
We shall show the estimates (14)–(16) using mathematical induction. By hypothesis x 0 U ( δ , r ) { δ } , (4), (9), and (10), we get that
‖H'(δ)^{-1} (H'(x_0) - H'(δ))‖ ≤ ξ_0(‖x_0 - δ‖) < 1.
It follows from (17) and the Banach Lemma on invertible operators [3,16] that H ( x 0 ) 1 L ( Y , X ) and
‖H'(x_0)^{-1} H'(δ)‖ ≤ 1 / (1 - ξ_0(‖x_0 - δ‖))
and y 0 is well defined by the first step of Method-I for n = 0 . In view of (7) and (12), we get that
H(x_0) = H(x_0) - H(δ) = ∫_0^1 H'(δ + θ(x_0 - δ)) (x_0 - δ) dθ,
so,
‖H'(δ)^{-1} H(x_0)‖ = ‖∫_0^1 H'(δ)^{-1} H'(δ + θ(x_0 - δ)) (x_0 - δ) dθ‖ ≤ ∫_0^1 ξ_1(θ ‖x_0 - δ‖) dθ ‖x_0 - δ‖.
Notice that ‖δ + θ(x_0 - δ) - δ‖ = θ ‖x_0 - δ‖ < r for each θ ∈ [0, 1]. That is, δ + θ(x_0 - δ) ∈ U(δ, r).
Using the first sub-step of Method-I for n = 0 and (7), we can write
y_0 - δ = x_0 - δ - H'(x_0)^{-1} H(x_0) + (1/3) H'(x_0)^{-1} H(x_0).
Then, we have by Equations (7), (9), (11), (18), (19), and (20) that
‖y_0 - δ‖ ≤ ‖x_0 - δ - H'(x_0)^{-1} H(x_0)‖ + (1/3) ‖H'(x_0)^{-1} H(x_0)‖
≤ ‖H'(x_0)^{-1} H'(δ)‖ ‖∫_0^1 H'(δ)^{-1} [H'(δ + θ(x_0 - δ)) - H'(x_0)] (x_0 - δ) dθ‖ + (1/3) ‖H'(x_0)^{-1} H'(δ)‖ ‖H'(δ)^{-1} H(x_0)‖
≤ (1/(1 - ξ_0(‖x_0 - δ‖))) [∫_0^1 ξ((1 - θ) ‖x_0 - δ‖) dθ + (1/3) ∫_0^1 ξ_1(θ ‖x_0 - δ‖) dθ] ‖x_0 - δ‖
= g_1^1(‖x_0 - δ‖) ‖x_0 - δ‖ < ‖x_0 - δ‖ < r,
which shows (14) for n = 0 and y 0 U ( δ , r ) .
Next, we shall show that A_0 = H'(x_0) + H'(y_0) is invertible. Using (10) and (21), we obtain that
‖(2 H'(δ))^{-1} (A_0 - 2 H'(δ))‖ ≤ (1/2) [‖H'(δ)^{-1} (H'(x_0) - H'(δ))‖ + ‖H'(δ)^{-1} (H'(y_0) - H'(δ))‖]
≤ (1/2) [ξ_0(‖x_0 - δ‖) + ξ_0(‖y_0 - δ‖)]
≤ (1/2) [ξ_0(‖x_0 - δ‖) + ξ_0(g_1^1(‖x_0 - δ‖) ‖x_0 - δ‖)] = p(‖x_0 - δ‖) ≤ p(r) < 1.
Hence, we get that
‖A_0^{-1} H'(δ)‖ ≤ 1 / (2 (1 - p(‖x_0 - δ‖))).
So, z 0 is well defined and by the second sub-step of Method-I in Lemma 1
‖z_0 - δ‖ ≤ ‖x_0 - δ - H'(x_0)^{-1} H(x_0)‖ + (3/2) ‖A_0^{-1} H'(δ)‖^2 (‖H'(δ)^{-1} (H'(x_0) - H'(δ))‖ + ‖H'(δ)^{-1} (H'(y_0) - H'(δ))‖)^2 ‖H'(x_0)^{-1} H'(δ)‖ ‖H'(δ)^{-1} H(x_0)‖ + 3 ‖A_0^{-1} H'(δ)‖^2 (‖H'(δ)^{-1} (H'(x_0) - H'(δ))‖ + ‖H'(δ)^{-1} (H'(y_0) - H'(δ))‖) ‖H'(δ)^{-1} H(x_0)‖
≤ [∫_0^1 ξ((1 - θ) ‖x_0 - δ‖) dθ / (1 - ξ_0(‖x_0 - δ‖)) + 3 (ξ_0(‖x_0 - δ‖) + ξ_0(g_1^1(‖x_0 - δ‖) ‖x_0 - δ‖))^2 ∫_0^1 ξ_1(θ ‖x_0 - δ‖) dθ / (8 (1 - p(‖x_0 - δ‖))^2 (1 - ξ_0(‖x_0 - δ‖))) + 3 (ξ_0(‖x_0 - δ‖) + ξ_0(g_1^1(‖x_0 - δ‖) ‖x_0 - δ‖)) ∫_0^1 ξ_1(θ ‖x_0 - δ‖) dθ / (4 (1 - p(‖x_0 - δ‖))^2)] ‖x_0 - δ‖
= g_2^1(‖x_0 - δ‖) ‖x_0 - δ‖ ≤ ‖x_0 - δ‖ < r,
which proves (15) for n = 0 and z 0 U ( δ , r ) .
Hence, x_1 is well defined by the last sub-step of Method-I for n = 0. Then, by using the third sub-step of Method-I in Lemma 1, we get that
‖x_1 - δ‖ ≤ ‖z_0 - δ‖ + 2 ‖H'(x_0)^{-1} H'(δ)‖ (‖H'(δ)^{-1} (H'(y_0) - H'(δ))‖ + ‖H'(δ)^{-1} (H'(x_0) - H'(δ))‖ + ‖H'(δ)^{-1} H'(x_0)‖) ‖A_0^{-1} H'(δ)‖ ‖H'(δ)^{-1} H(z_0)‖
≤ [1 + (ξ_0(‖x_0 - δ‖) + ξ_0(g_1^1(‖x_0 - δ‖) ‖x_0 - δ‖) + ξ_1(‖x_0 - δ‖)) ∫_0^1 ξ_1(θ g_2^1(‖x_0 - δ‖) ‖x_0 - δ‖) dθ / ((1 - ξ_0(‖x_0 - δ‖)) (1 - p(‖x_0 - δ‖)))] g_2^1(‖x_0 - δ‖) ‖x_0 - δ‖
= g_3^1(‖x_0 - δ‖) ‖x_0 - δ‖ ≤ ‖x_0 - δ‖ < r,
which proves (16) for n = 0 and x_1 ∈ U(δ, r). By simply replacing x_0, y_0, z_0, x_1 by x_n, y_n, z_n, x_{n+1} in the preceding estimates, we arrive at (14)–(16). Then, from the estimate ‖x_{n+1} - δ‖ ≤ c ‖x_n - δ‖ < r, where c = g_3^1(‖x_0 - δ‖) ∈ [0, 1), we deduce that lim_{n→∞} x_n = δ and x_{n+1} ∈ U(δ, r).
Finally, we show the uniqueness part. Let P = ∫_0^1 H'(δ* + t(δ - δ*)) dt for some δ* ∈ D_1 with H(δ*) = 0. Using (8), we get that
‖H'(δ)^{-1} (P - H'(δ))‖ ≤ ∫_0^1 ξ_0(θ ‖δ - δ*‖) dθ ≤ ∫_0^1 ξ_0(θ r*) dθ < 1.
It follows from (26) that P is invertible. Then, from the identity 0 = H(δ) - H(δ*) = P(δ - δ*), we conclude that δ = δ*. □

2.2. Convergence of Method-II

Set g_1^2(s) = g_1^1(s) and h_1^2(s) = h_1^1(s). Define functions g_2^2, h_2^2, g_3^2, and h_3^2 on [0, ρ) by
g_2^2(s) = ∫_0^1 ξ((1 - θ)s) dθ / (1 - ξ_0(s)) + 3 (ξ_0(s) + ξ_0(g_1^2(s)s)) ∫_0^1 ξ_1(θs) dθ / (4 (1 - p(s)) (1 - ξ_0(s))^2),
h_2^2(s) = g_2^2(s) - 1,
g_3^2(s) = [1 + (ξ_0(s) + ξ_0(g_1^2(s)s) + 2 ξ_1(s)) ∫_0^1 ξ_1(θ g_2^2(s)s) dθ / ((1 - ξ_0(s)) (1 - p(s)))] g_2^2(s),
and
h_3^2(s) = g_3^2(s) - 1.
We also get that h_2^2(0) = h_3^2(0) = -1, h_2^2(s) → +∞ as s → ρ^-, and h_3^2(s) → +∞ as s → ρ^-.
Denote by r_2 and r_3 the smallest solutions of the equations h_2^2(s) = 0 and h_3^2(s) = 0 in (0, ρ), respectively.
Set:
r = min{r_i}, i = 1, 2, 3.
Then, we have that for each s ∈ [0, r), 0 ≤ g_i^2(s) < 1.
Lemma 2.
Suppose that iterates { x n } , { y n } , and { z n } are well defined for each n = 0 , 1 , 2 , . Then, Method-II can be rewritten as
y_n = x_n - (2/3) H'(x_n)^{-1} H(x_n),
z_n = x_n - H'(x_n)^{-1} H(x_n) - (3/2) [A_n^{-1} (H'(x_n) - H'(y_n)) H'(x_n)^{-1} (H'(x_n) - H'(y_n)) H'(x_n)^{-1} H(x_n) + A_n^{-1} (H'(x_n) - H'(y_n)) H'(x_n)^{-1} H(x_n)],
x_{n+1} = z_n + 2 H'(x_n)^{-1} (H'(y_n) - 2 H'(x_n)) A_n^{-1} H(z_n),
where A_n = H'(x_n) + H'(y_n).
Proof. 
By Lemma 1, we only need to show the second sub-step of Method-II. We can write
z_n = x_n - H'(x_n)^{-1} H(x_n) - (3/2) E_n H'(x_n)^{-1} H(x_n),
where
E_n = 6 A_n^{-1} H'(x_n) + H'(x_n)^{-1} A_n - 5 I
= 6 A_n^{-1} H'(x_n) - 6 I + H'(x_n)^{-1} A_n + I
= 6 A_n^{-1} (H'(x_n) - A_n) + H'(x_n)^{-1} A_n + I
= -6 A_n^{-1} H'(y_n) + 2 I + H'(x_n)^{-1} H'(y_n)
= A_n^{-1} (-6 H'(y_n) + 2 A_n + A_n H'(x_n)^{-1} H'(y_n))
= A_n^{-1} (-3 H'(y_n) + 2 H'(x_n) + H'(y_n) H'(x_n)^{-1} H'(y_n))
= A_n^{-1} (2 (H'(x_n) - H'(y_n)) - H'(y_n) + H'(y_n) H'(x_n)^{-1} H'(y_n))
= A_n^{-1} (2 (H'(x_n) - H'(y_n)) + H'(y_n) (H'(x_n)^{-1} H'(y_n) - I))
= A_n^{-1} (2 (H'(x_n) - H'(y_n)) + H'(y_n) H'(x_n)^{-1} (H'(y_n) - H'(x_n)))
= A_n^{-1} (2 I - H'(y_n) H'(x_n)^{-1}) (H'(x_n) - H'(y_n))
= A_n^{-1} (2 H'(x_n) - H'(y_n)) H'(x_n)^{-1} (H'(x_n) - H'(y_n))
= A_n^{-1} (H'(x_n) + (H'(x_n) - H'(y_n))) H'(x_n)^{-1} (H'(x_n) - H'(y_n))
= A_n^{-1} (H'(x_n) - H'(y_n)) H'(x_n)^{-1} (H'(x_n) - H'(y_n)) + A_n^{-1} (H'(x_n) - H'(y_n)).
Replacing E_n in the estimate above it, we conclude the proof. □
Next, we present the local convergence analysis of Method-II in an analogous way to Method-I using the preceding notations.
Theorem 2.
Suppose that the hypotheses of Theorem 1 are satisfied but r is defined by (27). Then, the conclusions of Theorem 1 hold with Method-II replacing Method-I and the g^2 functions replacing the g^1 functions.

2.3. Convergence for Method-III

Set g_1^3(s) = g_1^1(s) and h_1^3(s) = h_1^1(s). Suppose that the equation
q(s) := (1/2) (3 ξ_0(g_1^1(s)s) + ξ_0(s)) = 1
has at least one positive solution. Denote by ρ_2 the smallest such solution.
Set ρ = min{ρ_0, ρ_2}. We define functions g_2^3, h_2^3, g_3^3, and h_3^3 on [0, ρ) by
g_2^3(s) = ∫_0^1 ξ((1 - θ)s) dθ / (1 - ξ_0(s)) + 3 (ξ_0(s) + ξ_0(g_1^3(s)s)) ∫_0^1 ξ_1(θs) dθ / (4 (1 - q(s)) (1 - ξ_0(s))),
h_2^3(s) = g_2^3(s) - 1,
g_3^3(s) = [1 + (ξ_0(s) + ξ_0(g_1^3(s)s) + 2 ξ_1(s)) ∫_0^1 ξ_1(θ g_2^3(s)s) dθ / ((1 - ξ_0(s)) (1 - p(s)))] g_2^3(s),
and
h_3^3(s) = g_3^3(s) - 1.
We also get that h_2^3(0) = h_3^3(0) = -1, h_2^3(s) → +∞ as s → ρ^-, and h_3^3(s) → +∞ as s → ρ^-. Denote by r_2 and r_3 the smallest solutions of the equations h_2^3(s) = 0 and h_3^3(s) = 0 in (0, ρ), respectively.
Set:
r = min{r_i}, i = 1, 2, 3.
Then, we have that for each s ∈ [0, r), 0 ≤ g_i^3(s) < 1.
As in the previous two methods we need the auxiliary result.
Lemma 3.
Suppose that iterates { x n } , { y n } , and { z n } are well defined for each n = 0 , 1 , 2 , . Then, Method-III can be rewritten as
y_n = x_n - (2/3) H'(x_n)^{-1} H(x_n),
z_n = x_n - H'(x_n)^{-1} H(x_n) - (3/2) B_n^{-1} (H'(x_n) - H'(y_n)) H'(x_n)^{-1} H(x_n),
x_{n+1} = z_n + 2 H'(x_n)^{-1} (H'(y_n) - 2 H'(x_n)) A_n^{-1} H(z_n),
where B_n = 3 H'(y_n) - H'(x_n).
Proof. 
We have that
z_n = x_n - H'(x_n)^{-1} H(x_n) - (3/4) (B_n^{-1} A_n - I) H'(x_n)^{-1} H(x_n).
But
B_n^{-1} A_n - I = B_n^{-1} (A_n - B_n) = 2 B_n^{-1} (H'(x_n) - H'(y_n)),
so by replacing this estimate in the preceding one, we complete the proof. □
Next, we present the local convergence analysis of Method-III in an analogous way to Method-I using the preceding notations.
Theorem 3.
Suppose that the hypotheses of Theorem 1 are satisfied but r is defined by (28) and the g^3 functions replace the g^1 functions. Then, the conclusions of Theorem 1 hold with Method-III replacing Method-I.
Proof. 
We have
‖(2 H'(δ))^{-1} (B_0 - 3 H'(δ) + H'(δ))‖ ≤ (1/2) [3 ‖H'(δ)^{-1} (H'(y_0) - H'(δ))‖ + ‖H'(δ)^{-1} (H'(x_0) - H'(δ))‖] ≤ (1/2) [3 ξ_0(g_1^3(‖x_0 - δ‖) ‖x_0 - δ‖) + ξ_0(‖x_0 - δ‖)] = q(‖x_0 - δ‖) < 1,
so
‖B_0^{-1} H'(δ)‖ ≤ 1 / (2 (1 - q(‖x_0 - δ‖))).
The rest of the proof follows as the proof of Theorem 2. □
Remark 1.
 
(a) 
In view of (10) and the estimate
‖H'(δ)^{-1} H'(x)‖ = ‖H'(δ)^{-1} (H'(x) - H'(δ)) + I‖ ≤ 1 + ‖H'(δ)^{-1} (H'(x) - H'(δ))‖ ≤ 1 + ξ_0(‖x - δ‖),
condition (12) can be dropped, with ξ_1 defined instead by
ξ_1(s) = 1 + ξ_0(s).
(b) 
The results obtained here can be used for operators H satisfying an autonomous differential equation [2] of the form
H'(x) = T(H(x)),
where T is a known continuous operator. Since H'(δ) = T(H(δ)) = T(0), we can apply the results without actually knowing the solution δ. As an example, let H(x) = e^x - 1. Then, we can choose T(x) = x + 1.
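For this example the identity can be verified directly, since T(H(x)) = (e^x - 1) + 1 = e^x = H'(x); a one-line numerical check (a sketch, with arbitrary sample points):

```python
import math

H = lambda x: math.exp(x) - 1.0   # H(x) = e^x - 1
T = lambda w: w + 1.0             # T(x) = x + 1
Hp = lambda x: math.exp(x)        # H'(x) = e^x

# H'(x) = T(H(x)) holds identically:
print(all(abs(Hp(x) - T(H(x))) < 1e-12 for x in (-1.0, 0.0, 0.7, 2.0)))
```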
(c) 
It is worth noticing that Methods I, II, and III do not change when we use the conditions of Theorems 1, 2, and 3 instead of the stronger conditions used in [23]. Moreover, we can compute the theoretical order of convergence through the computational order of convergence (COC) [29],
COC = ln(‖x_{n+1} - δ‖ / ‖x_n - δ‖) / ln(‖x_n - δ‖ / ‖x_{n-1} - δ‖), for each n = 1, 2, ...,
or the approximate computational order of convergence (ACOC) [2], given by
ACOC = ln(‖x_{n+1} - x_n‖ / ‖x_n - x_{n-1}‖) / ln(‖x_n - x_{n-1}‖ / ‖x_{n-1} - x_{n-2}‖), for each n = 2, 3, ....
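A small helper for estimating ACOC from the trailing iterates of a sequence (a sketch; the Newton sequence used below is only for illustration and should report an order close to 2):

```python
import math

def acoc(xs):
    """ACOC estimated from the last four iterates of a scalar sequence xs."""
    e = [abs(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
    return math.log(e[-1] / e[-2]) / math.log(e[-2] / e[-3])

# Newton's method on x^2 - 2, starting from 1.5:
xs = [1.5]
for _ in range(3):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))

print(acoc(xs))  # close to 2, the order of Newton's method
```

The same helper applies to vector sequences by replacing the absolute values with norms.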

3. Numerical Examples

To validate the results of the convergence theorems, we present a few numerical examples.
Example 1.
Suppose that X = Y = C[0, 1], where C[0, 1] stands for the space of continuous functions defined on [0, 1]. We shall use the maximum norm. Let D = Ū(0, 1). Define the operator H on D by
H(μ)(x) = μ(x) - 5 ∫_0^1 x τ μ(τ)^3 dτ.
From above equation, we have that
H'(μ)(λ)(x) = λ(x) - 15 ∫_0^1 x τ μ(τ)^2 λ(τ) dτ, for each λ ∈ D.
Then, for δ = 0, μ(x) = 0, and λ(x) = 1, we have ξ_0(s) = 7.5 s, ξ(s) = 15 s, and ξ_1(s) = 2. Using the definitions of the parameters r_1, r_2, and r_3, their computed values are given in Table 1.
Thus, the convergence of the considered methods to δ = 0 is guaranteed, provided that x_0 ∈ U(δ, r).
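The radius r_1 of this example can be reproduced with a short bisection (a sketch; for ξ_0(s) = 7.5s, ξ(s) = 15s, and ξ_1(s) = 2 the integrals in g_1^1 evaluate to 7.5s and 2 by hand, giving g_1^1(s) = (7.5s + 2/3)/(1 - 7.5s) and the closed-form value r_1 = 1/45):

```python
def g11(s):
    # g_1^1(s) = [int_0^1 xi((1-t)s) dt + (1/3) int_0^1 xi_1(t s) dt] / (1 - xi_0(s))
    # with xi_0(s) = 7.5 s, xi(s) = 15 s, xi_1(s) = 2 (values of Example 1).
    return (7.5 * s + 2.0 / 3.0) / (1.0 - 7.5 * s)

def bisect(f, a, b, iters=100):
    """Plain bisection for f(a) < 0 < f(b)."""
    for _ in range(iters):
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0.0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

# Solve h_1^1(s) = g_1^1(s) - 1 = 0 on (0, rho_0), rho_0 = 1/7.5:
r1 = bisect(lambda s: g11(s) - 1.0, 1e-12, 1.0 / 7.5 - 1e-12)
print(r1)  # 1/45 ≈ 0.022222...
```

The radii r_2 and r_3 can be obtained the same way from g_2^1 and g_3^1, whose integrals are equally easy to evaluate for these linear ξ functions.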
Example 2.
We consider the law of population growth. Let N(t), λ, and ν be the population at time t, the constant birth rate of the population, and the immigration rate, respectively. Then, the equation governing population growth is given as (see [30])
dN(t)/dt = λ N(t) + ν.
The solution of this differential equation is given by
N(t) = N_0 e^{λt} + (ν/λ)(e^{λt} - 1),
where N_0 is the initial population.
For a particular case study, the problem is given as follows. Suppose that a certain population initially contains 1,000,000 individuals, that 435,000 individuals immigrate into the community in the first year, and that 1,564,000 individuals are present at the end of one year. The problem is to find the birth rate λ of this population.
To determine the birth rate ( λ ), we solve the equation
H(x) = 1564 - 1000 e^x - (435/x)(e^x - 1) = 0,
wherein x = λ. The solution δ of this equation is 0.1009979296. Then, we have that ξ_0(s) = L_0 s, ξ(s) = L s, and ξ_1(s) = M, where
L = L_0 = |H'(δ)^{-1}| max_{0.1 ≤ x ≤ 0.2} |H''(x)| = 1.038089,
and
M = |H'(δ)^{-1}| max_{0.1 ≤ x ≤ 0.2} |H'(x)| = 1.097991.
Then, for the above set of values, the parameters are given in Table 2.
Thus, the results of the theorems ensure convergence of Methods I, II, and III to the solution δ = 0.1009979296.
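The root δ can be reproduced with a few Newton steps (a sketch; the derivative below was computed by hand and Newton is used only to verify the stated solution):

```python
import math

def H(x):
    # H(x) = 1564 - 1000 e^x - (435/x)(e^x - 1)
    return 1564.0 - 1000.0 * math.exp(x) - (435.0 / x) * (math.exp(x) - 1.0)

def Hp(x):
    # H'(x), differentiated by hand
    return (-1000.0 * math.exp(x)
            - (435.0 / x) * math.exp(x)
            + (435.0 / x ** 2) * (math.exp(x) - 1.0))

x = 0.1  # initial guess near the reported solution
for _ in range(8):
    x -= H(x) / Hp(x)

print(x)  # ≈ 0.1009979296
```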
Example 3.
Let us consider the function H := (f_1, f_2, f_3) : D → R^3 defined by
H(x) = (10 x_1 + sin(x_1 + x_2) - 1, 8 x_2 - cos^2(x_3 - x_2) - 1, 12 x_3 + sin(x_3) - 1)^T,
where x = ( x 1 , x 2 , x 3 ) T .
The Fréchet-derivative is given by
H'(x) =
[ 10 + cos(x_1 + x_2)    cos(x_1 + x_2)           0
  0                      8 + sin(2(x_2 - x_3))    -sin(2(x_2 - x_3))
  0                      0                        12 + cos(x_3) ].
Using the initial approximation x_0 = (0, 0.5, 0.1)^T, we obtain the root δ of Function (33) as
δ = (0.068978349172666557, 0.24644241860918295, 0.076928911987536964)^T.
Then, we get that ξ_0(s) = ξ(s) = 0.269812 s and ξ_1(s) = 2. The calculated values of the parameters r_1, r_2, and r_3 are displayed in Table 3.
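The stated root can be checked with a plain Newton iteration using the Jacobian above (a sketch; Newton is used here only to verify δ, not as one of the studied methods):

```python
import numpy as np

def H(v):
    x1, x2, x3 = v
    return np.array([10 * x1 + np.sin(x1 + x2) - 1,
                     8 * x2 - np.cos(x3 - x2) ** 2 - 1,
                     12 * x3 + np.sin(x3) - 1])

def J(v):  # Fréchet-derivative of H
    x1, x2, x3 = v
    return np.array([
        [10 + np.cos(x1 + x2), np.cos(x1 + x2), 0.0],
        [0.0, 8 + np.sin(2 * (x2 - x3)), -np.sin(2 * (x2 - x3))],
        [0.0, 0.0, 12 + np.cos(x3)],
    ])

x = np.array([0.0, 0.5, 0.1])  # initial approximation from the text
for _ in range(20):
    x = x - np.linalg.solve(J(x), H(x))

print(x)  # ≈ (0.0689783492, 0.2464424186, 0.0769289120)
```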
Example 4.
Consider the motivational example given in the introduction. We have δ = 0. It follows that ξ_0(s) = L_0 s, ξ(s) = L s, and ξ_1(s) = 2, where L_0 = L = 146.66290. The parameters are given in Table 4.
Thus, the convergence of the methods to δ = 0 is guaranteed, provided that x_0 ∈ U(δ, r).

4. Basins of Attraction

In this section, we present the complex geometries of Methods I, II, and III based on their basins of attraction when the methods are applied to a complex polynomial P(z). The basin of attraction is a useful geometrical tool for comparing the convergence domains of iterative methods [3].
Let R : Ĉ → Ĉ be a rational map on the Riemann sphere. The orbit of a point z_0 ∈ Ĉ is defined as the set {z_0, R(z_0), R^2(z_0), ..., R^n(z_0), ...}. A point z_0 ∈ Ĉ is a fixed point of the rational function R if it satisfies R(z_0) = z_0. A periodic point z_0 of period m > 1 is a point such that R^m(z_0) = z_0, where m is the smallest such integer. A fixed point z_0 is called attracting if |R'(z_0)| < 1, repelling if |R'(z_0)| > 1, and neutral if |R'(z_0)| = 1. Moreover, if |R'(z_0)| = 0, the fixed point is superattracting. Let z_f^* be an attracting fixed point of the rational function R. The basin of attraction of the fixed point z_f^* is defined as
A(z_f^*) = {z_0 ∈ Ĉ : R^n(z_0) → z_f^* as n → ∞}.
The set of points whose orbits approach an attracting fixed point z_f^* is called the Fatou set. The complementary set, called the Julia set, is the closure of the set of repelling fixed points; it forms the boundary between the basins of attraction.
In our experiments, we took a square region D = [-3, 3] × [-3, 3] of the complex plane with 400 × 400 points and applied the iterative methods starting with each z_0 in the square. An iteration starting at a point z_0 in the square can converge to a zero of the polynomial P(z) or eventually diverge. The stopping criterion for convergence, up to a maximum of 25 iterations, was a tolerance of 10^{-3}. If the desired tolerance was not achieved in 25 iterations, we declared that the iteration from z_0 did not converge to any root. The strategy was as follows: a color was assigned to each starting point z_0 according to the zero in whose basin it lies, so that the attraction basins are distinguished by their colors; if the iteration did not converge within 25 iterations, the point was assigned to the region of black color.
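The procedure just described can be sketched as follows (using Newton's iteration on P(z) = z^2 - 1 as a simple stand-in for Methods I–III; grid size, tolerance, and iteration cap follow the text):

```python
import numpy as np

roots = np.array([1.0, -1.0])  # zeros of P(z) = z^2 - 1
n = 400
re, im = np.meshgrid(np.linspace(-3, 3, n), np.linspace(-3, 3, n))
z = re + 1j * im
for _ in range(25):
    z = z - (z ** 2 - 1) / (2 * z)  # one Newton step at every grid point

# Color index of the nearest root if within tolerance 1e-3, else -1 (black):
dist = np.abs(z[..., None] - roots)
label = np.where(dist.min(axis=-1) < 1e-3, dist.argmin(axis=-1), -1)

print((label >= 0).mean())  # fraction of converged starting points
```

To reproduce the actual figures, the Newton step inside the loop is replaced by one full step of Method-I, II, or III applied to P, and `label` is rendered as a colored image.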
Test problem 1. Let P_1(z) = z^2 - 1, with zeros {-1, 1}. The basins of attraction generated by the methods for this polynomial are shown in Figure 1. From this figure, it can be observed that Method-III has more stable behavior than Methods I and II. In addition, Method-III exhibits very little chaotic behavior at the boundary points compared to the other methods.
Test problem 2. Consider the polynomial P_2(z) = z^3 - 1, with zeros {-1/2 - (√3/2)i, -1/2 + (√3/2)i, 1}. The basins assessed by the methods are shown in Figure 2. In this case also, Method-III has the largest basins of attraction compared with Methods I and II. On the other hand, the fractal picture of Method-II has a large number of diverging points, shown by black zones.
Test problem 3. Consider the biquadratic polynomial P_3(z) = z^4 - 10 z^2 + 9, with simple zeros {-3, -1, 1, 3}. The basins for this polynomial are exhibited in Figure 3. We observed that Method-III showed good convergence, with wider basins of attraction of the zeros in comparison to the other methods. We also noticed that Method-II had bad stability properties.
Test problem 4. Let P_4(z) = z^5 - z, with simple zeros {-1, 1, 0, i, -i}. As in the previous problems, Method-III had good convergence properties for the solutions in comparison to the other methods (see Figure 4). Moreover, it was the best method in terms of the least chaotic behavior at the boundary points. On the contrary, Method-II had the highest number of divergent points, followed by Method-I.

5. Conclusions

In the present study, we discussed the convergence of existing Jarratt-like methods of sixth order. In earlier studies of convergence, the conditions used were based on Taylor expansions requiring up to the sixth or higher order derivatives of the function, although the iterative procedures use first order derivatives only. It is well understood that such hypotheses restrict the applicability of the schemes. The present study extends the applicability of the methods by using assumptions on the first order derivative only. Moreover, this approach provides the radius of convergence, bounds on the error, and estimates on the uniqueness of the solution of equations. These important elements of convergence are not established by approaches based on Taylor series expansions with higher order derivatives, which may not exist, or may be costly or difficult to calculate; with such approaches we also have no idea how close to the solution the initial guess must be chosen, so the initial guess becomes a shot in the dark. The theoretical results of convergence so obtained were verified by numerical testing. Finally, the convergence regions of the methods were also assessed by a graphical tool, namely, the basins of attraction.

Author Contributions

Methodology, J.R.S.; writing—review and editing, J.R.S.; conceptualization, D.K.; data curation, D.K.; investigation, I.K.A.; formal analysis, I.K.A.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Rheinboldt, W.C. An adaptive continuation process for solving systems of nonlinear equations. In Mathematical Models and Numerical Methods; Banach Center: Warsaw, Poland, 1977; pp. 129–142. [Google Scholar]
  2. Argyros, I.K. Computational Theory of Iterative Methods; Series: Studies in Computational Mathematics; Chui, C.K., Wuytack, L., Eds.; Elsevier Publ. Co.: New York, NY, USA, 2007; Volume 15. [Google Scholar]
  3. Argyros, I.K.; Magreñán, Á.A. Iterative Methods and Their Dynamics with Applications; CRC Press: New York, NY, USA, 2017. [Google Scholar]
  4. Argyros, I.K.; Sharma, J.R.; Kumar, D. Ball convergence of the Newton–Gauss method in Banach space. SeMA J. 2017, 74, 429–439. [Google Scholar] [CrossRef]
  5. Babajee, D.K.R.; Dauhoo, M.Z.; Darvishi, M.T.; Barati, A. A note on the local convergence of iterative methods based on Adomian decomposition method and 3-node quadrature rule. Appl. Math. Comput. 2008, 200, 452–458. [Google Scholar] [CrossRef]
  6. Argyros, I.K.; Kumar, D.; Sharma, J.R. Study of optimal eighth order weighted-Newton methods in Banach spaces. Commun. Korean Math. Soc. 2018, 33, 677–693. [Google Scholar]
  7. Argyros, I.K.; Sharma, J.R.; Kumar, D. On the local convergence of weighted-Newton methods under weak conditions in Banach spaces. Ann. Univ. Sci. Budapest. Sect. Comp. 2018, 47, 127–139. [Google Scholar]
  8. Kumar, D.; Argyros, I.K.; Sharma, J.R. Convergence ball and complex geometry of an iteration function of higher order. Mathematics 2019, 7, 28. [Google Scholar] [CrossRef] [Green Version]
  9. Sharma, J.R.; Kumar, D. A fast and efficient composite Newton–Chebyshev method for systems of nonlinear equations. J. Complex. 2018, 49, 56–73. [Google Scholar] [CrossRef]
  10. Sharma, J.R.; Argyros, I.K.; Kumar, D. Newton-like methods with increasing order of convergence and their convergence analysis in Banach space. SeMA J. 2018. [Google Scholar] [CrossRef]
  11. Hernández, M.A.; Salanova, M.A. Modification of the Kantorovich assumptions for semilocal convergence of the Chebyshev method. J. Comput. Appl. Math. 2000, 126, 131–143. [Google Scholar] [CrossRef] [Green Version]
  12. Gutiérrez, J.M.; Magreñán, A.A.; Romero, N. On the semilocal convergence of Newton–Kantorovich method under center-Lipschitz conditions. Appl. Math. Comput. 2013, 221, 79–88. [Google Scholar]
  13. Jaiswal, J.P. Semilocal convergence of an eighth-order method in Banach spaces and its computational efficiency. Numer. Algor. 2016, 71, 933–951. [Google Scholar] [CrossRef]
  14. Jaiswal, J.P.; Pandey, B. Recurrence relations and semilocal convergence of a fifth order method in Banach spaces. Int. J. Pure Appl. Math. 2016, 108, 767–780. [Google Scholar] [CrossRef] [Green Version]
  15. Darvishi, M.T.; Barati, A. A third-order Newton-type method to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 187, 630–635. [Google Scholar] [CrossRef]
  16. Darvishi, M.T.; Barati, A. A fourth-order method from quadrature formulae to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 188, 257–261. [Google Scholar] [CrossRef]
  17. Xiao, X.; Yin, H. A simple and efficient method with high order convergence for solving systems of nonlinear equations. Comput. Math. Appl. 2015, 69, 1220–1231. [Google Scholar] [CrossRef]
  18. Narang, M.; Bhatia, S.; Kanwar, V. New two-parameter Chebyshev–Halley–like family of fourth and sixth–order methods for systems of nonlinear equations. Appl. Math. Comput. 2016, 275, 394–403. [Google Scholar] [CrossRef]
  19. Xiao, X.; Yin, H. Achieving higher order of convergence for solving systems of nonlinear equations. Appl. Math. Comput. 2017, 311, 251–261. [Google Scholar] [CrossRef]
  20. Jarratt, P. Some effcient fourth order multipoint methods for solving equations. BIT 1969, 9, 119–124. [Google Scholar] [CrossRef]
  21. Jarratt, P. Some fourth order multipoint iterative methods for solving equations. Math. Comput. 1966, 20, 434–437. [Google Scholar] [CrossRef]
  22. Jarratt, P. Multipoint iterative methods for solving certain equations. Comput. J. 1966, 8, 398–400. [Google Scholar] [CrossRef] [Green Version]
  23. Alzahrani, A.K.H.; Behl, R.; Alshomrani, A. Some higher-order iteration functions for solving nonlinear models. Appl. Math. Comput. 2018, 334, 80–93. [Google Scholar] [CrossRef]
  24. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton–Jarratt’s composition. Numer. Algor. 2010, 55, 87–99. [Google Scholar] [CrossRef]
  25. Sharma, J.R.; Arora, H. Efficient Jarratt–like methods for solving systems of nonlinear equations. Calcolo 2014, 51, 193–210. [Google Scholar] [CrossRef]
  26. Sharma, J.R.; Kumar, D.; Argyros, I.K.; Magreñán, Á.A. On a bi–parametric family of fourth order composite Newton–Jarratt methods for nonlinear systems. Mathematics 2019, 7, 492. [Google Scholar] [CrossRef] [Green Version]
  27. Ahmad, F.; Tohidi, E.; Ullah, M.Z.; Carrasco, J.A. Higher order multi–step Jarratt–like method for solving systems of nonlinear equations: Application to PDEs and ODEs. Comp. Math. Appl. 2015, 70, 624–636. [Google Scholar] [CrossRef] [Green Version]
  28. Junjua, M.U.D.; Akram, S.; Yasmin, N.; Zafar, F. A new Jarratt–type fourth-order method for solving system of nonlinear equations and applications. J. Appl. Math. 2015, 14. [Google Scholar] [CrossRef]
  29. Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
  30. Burden, R.L.; Faires, J.D. Numerical Analysis; Brooks/Cole: Boston, MA, USA, 2005. [Google Scholar]
Figure 1. Basins of attraction for test problem 1.
Figure 2. Basins of attraction for test problem 2.
Figure 3. Basins of attraction for test problem 3.
Figure 4. Basins of attraction for test problem 4.
Table 1. Numerical results for example 1.

        Method-I      Method-II     Method-III
r_1     0.022222      0.022222      0.022222
r_2     0.021452      0.021108      0.021427
r_3     0.00602156    0.00601723    0.00597996
r       0.00602156    0.00601723    0.00597996

Table 2. Numerical results for example 2.

        Method-I      Method-II     Method-III
r_1     0.40716       0.40716       0.40716
r_2     0.25898       0.25380       0.24430
r_3     0.140344      0.139924      0.135881
r       0.140344      0.139924      0.135881

Table 3. Numerical results for example 3.

        Method-I      Method-II     Method-III
r_1     0.823619      0.823619      0.823619
r_2     0.656626      0.644632      0.651275
r_3     0.189075      0.188885      0.187318
r       0.189075      0.188885      0.187318

Table 4. Numerical results for example 4.

        Method-I      Method-II     Method-III
r_1     0.00151519    0.00151519    0.00151519
r_2     0.00120798    0.00118591    0.00119813
r_3     0.000347836   0.000347487   0.000344604
r       0.000347836   0.000347487   0.000344604
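A pattern visible in Tables 1–4 is that the reported radius of convergence r is the smallest of the stage radii r_1, r_2, r_3. The following minimal check confirms this for the Method-I columns; note that r = min(r_1, r_2, r_3) is an observation read off the tables, not a formula stated in this excerpt.

```python
# Check that the reported convergence radius r equals min(r_1, r_2, r_3)
# for the Method-I column of each table (values copied from Tables 1-4).
tables = {
    "example 1": (0.022222, 0.021452, 0.00602156, 0.00602156),
    "example 2": (0.40716, 0.25898, 0.140344, 0.140344),
    "example 3": (0.823619, 0.656626, 0.189075, 0.189075),
    "example 4": (0.00151519, 0.00120798, 0.000347836, 0.000347836),
}

for name, (r1, r2, r3, r_reported) in tables.items():
    r = min(r1, r2, r3)
    assert r == r_reported, name
    print(f"{name}: r = {r}")
```

In every example the binding constraint is r_3, the radius associated with the final substep, which is consistent with r being taken as the minimum over all substeps.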