Article

Local Comparison between Two Ninth Convergence Order Algorithms for Equations

Samundra Regmi, Ioannis K. Argyros and Santhosh George
1 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
2 Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, Karnataka 575025, India
* Author to whom correspondence should be addressed.
Algorithms 2020, 13(6), 147; https://doi.org/10.3390/a13060147
Submission received: 24 May 2020 / Revised: 13 June 2020 / Accepted: 19 June 2020 / Published: 20 June 2020

Abstract: A local convergence comparison is presented between two ninth-order algorithms for solving nonlinear equations. In earlier studies, derivatives of up to the tenth order, none of which appear in the algorithms, were utilized to show convergence. Moreover, no computable error estimates, radius of convergence, or results on the uniqueness of the solution were given. The novelty of our study is that we address all of these concerns using only the first derivative, which is the only derivative that actually appears in the algorithms; this is how we extend their applicability. Our technique provides a direct comparison between the two algorithms under the same set of convergence criteria, and it can be used on other algorithms as well. Numerical experiments are utilized to test the convergence criteria.

1. Introduction

In this study, we consider the problem of finding a solution $x^*$ of the nonlinear equation
$$F(x) = 0, \tag{1}$$
where $F : D \subseteq B_1 \to B_2$ is a continuously differentiable nonlinear operator acting between the Banach spaces $B_1$ and $B_2$, and $D$ stands for an open, nonempty, convex subset of $B_1$. One would like to obtain a solution $x^*$ of (1) in closed form, but this can rarely be done, so most researchers and practitioners develop iterative algorithms that converge to $x^*$. It is worth noticing that a plethora of problems from diverse disciplines, such as applied mathematics, mathematical biology, chemistry, economics, physics, engineering and scientific computing, reduce to solving an equation of the form (1) [1,2,3,4]. Therefore, the study of these algorithms in the general setting of a Banach space is important. At this level of generality, we cannot use these algorithms to find solutions of multiplicity greater than one, since we assume the invertibility of $F'(x)$. There is an extensive literature on algorithms for solving systems [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24]. Our technique can be used to study the local convergence of these algorithms along the same lines. Algorithms (2) and (3) (i.e., when $B_1 = B_2 = \mathbb{R}^j$) cannot be used to solve underdetermined systems in this form. However, if the derivatives are replaced by Moore–Penrose inverses (as in the case of Newton's and other algorithms [1,3,4]), then the modified algorithms can be used to solve underdetermined systems too, and a similar local convergence analysis can be carried out; we do not pursue this task here. We also do not discuss local versus global convergence in the setting of a Banach space; instead, we refer the reader to subdivision solvers, which are global and guaranteed to find all solutions (when $B_1 = B_2 = \mathbb{R}^j$) [2,3,5,6,18,20,24]. These ideas could also be combined with Algorithms (2) and (3); we do not pursue this here.
In this paper, we study two efficient ninth-order algorithms from [23], defined for $n = 0, 1, 2, \ldots$ by
$$\begin{aligned} y_n &= x_n - F'(x_n)^{-1}F(x_n),\\ z_n &= x_n - \tfrac{2}{3}\big(A_n^{-1} + F'(x_n)^{-1}\big)F(x_n),\\ v_n &= z_n - \tfrac{1}{3}\big(4A_n^{-1} + F'(x_n)^{-1}\big)F(z_n),\\ x_{n+1} &= v_n - \tfrac{1}{3}\big(4A_n^{-1} + F'(x_n)^{-1}\big)F(v_n) \end{aligned} \tag{2}$$
and
$$\begin{aligned} y_n &= x_n - F'(x_n)^{-1}F(x_n),\\ z_n &= x_n - \tfrac{1}{2}\big(F'(y_n)^{-1} + F'(x_n)^{-1}\big)F(x_n),\\ v_n &= z_n - \tfrac{1}{2}\big(B_n + F'(x_n)^{-1}\big)F(z_n),\\ x_{n+1} &= v_n - \tfrac{1}{2}\big(B_n + F'(x_n)^{-1}\big)F(v_n), \end{aligned} \tag{3}$$
where $A_n = 3F'(y_n) - F'(x_n)$ and $B_n = F'(y_n)^{-1}F'(x_n)F'(y_n)^{-1}$.
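To make the comparison concrete, here is a minimal NumPy sketch (ours, not from [23]) of one iteration of each algorithm for systems in $\mathbb{R}^j$, transcribed from the sub-steps of (2) and (3) as reconstructed above. The names `F`, `J`, `step_alg2`, `step_alg3`, `H` and `JH` are placeholders, and linear solves replace explicit inverses wherever possible.

```python
import numpy as np

def step_alg2(F, J, x):
    """One iteration of Algorithm (2).
    F: callable, x -> F(x) (vector); J: callable, x -> F'(x) (Jacobian matrix)."""
    Jx = J(x)
    y = x - np.linalg.solve(Jx, F(x))
    A = 3.0 * J(y) - Jx  # A_n = 3 F'(y_n) - F'(x_n)
    z = x - (2.0 / 3.0) * (np.linalg.solve(A, F(x)) + np.linalg.solve(Jx, F(x)))
    v = z - (1.0 / 3.0) * (4.0 * np.linalg.solve(A, F(z)) + np.linalg.solve(Jx, F(z)))
    return v - (1.0 / 3.0) * (4.0 * np.linalg.solve(A, F(v)) + np.linalg.solve(Jx, F(v)))

def step_alg3(F, J, x):
    """One iteration of Algorithm (3)."""
    Jx = J(x)
    y = x - np.linalg.solve(Jx, F(x))
    Jy_inv = np.linalg.inv(J(y))
    B = Jy_inv @ Jx @ Jy_inv  # B_n = F'(y_n)^{-1} F'(x_n) F'(y_n)^{-1}
    z = x - 0.5 * (Jy_inv @ F(x) + np.linalg.solve(Jx, F(x)))
    v = z - 0.5 * (B @ F(z) + np.linalg.solve(Jx, F(z)))
    return v - 0.5 * (B @ F(v) + np.linalg.solve(Jx, F(v)))

# Example usage with H(w) from Example 1 of Section 3 (solution at the origin):
H = lambda w: np.array([np.exp(w[0]) - 1.0, (np.e - 1.0) / 2.0 * w[1] ** 2 + w[1], w[2]])
JH = lambda w: np.diag([np.exp(w[0]), (np.e - 1.0) * w[1] + 1.0, 1.0])
x = np.array([0.1, 0.1, 0.1])
for _ in range(3):
    x = step_alg2(H, JH, x)
print(x)  # all three components should be essentially zero
```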
The analysis in [23] uses assumptions on derivatives of $F$ of order up to ten. However, assumptions on such higher-order derivatives reduce the applicability of Algorithms (2) and (3). For example, let $B_1 = B_2 = \mathbb{R}$ and $D = [-\tfrac{1}{2}, \tfrac{3}{2}]$, and define $f$ on $D$ by
$$f(s) = \begin{cases} s^3\log s^2 + s^5 - s^4, & s \ne 0,\\ 0, & s = 0. \end{cases}$$
Then, we get $x^* = 1$,
$$f'(s) = 3s^2\log s^2 + 5s^4 - 4s^3 + 2s^2,$$
$$f''(s) = 6s\log s^2 + 20s^3 - 12s^2 + 10s$$
and
$$f'''(s) = 6\log s^2 + 60s^2 - 24s + 22.$$
Obviously, $f'''(s)$ is not bounded on $D$. Hence, the convergence of Algorithms (2) and (3) is not guaranteed by the analysis in [23].
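The unboundedness is easy to confirm numerically; the following sketch (ours, not from [23]) evaluates $f'''$ near $s = 0$, where the $6\log s^2$ term dominates:

```python
import math

# Third derivative of the motivational example, for s != 0.
fppp = lambda s: 6.0 * math.log(s ** 2) + 60.0 * s ** 2 - 24.0 * s + 22.0

for s in (1e-1, 1e-3, 1e-6, 1e-9):
    print(f"f'''({s:g}) = {fppp(s):.4f}")
# The 6*log(s^2) term drives f''' to -infinity as s -> 0,
# so no bound of the type used in [23] can hold on D.
```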
We are looking for a ball centered at $x^*$, of a certain radius, such that if one chooses a starting point $x_0$ from inside this ball, then the convergence of the method to $x^*$ is guaranteed; that is, we are interested in the ball convergence of these methods. Moreover, we also obtain upper bounds on $\|x_n - x^*\|$, a radius of convergence and results on the uniqueness of $x^*$, none of which were provided in [23]. Our technique can be used to enlarge the applicability of other algorithms in a similar manner [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24].
The rest of the paper is organized as follows. The convergence analysis of Algorithms (2) and (3) is given in Section 2, and examples are given in Section 3.

2. Ball Convergence

We present the ball convergence of Algorithms (2) and (3), which is based on some real functions and positive parameters. Let $S = [0, \infty)$.
Suppose there exists a continuous and increasing function $\omega_0$ on $S$ with values in itself, satisfying $\omega_0(0) = 0$, such that the equation
$$\omega_0(s) - 1 = 0$$
has a least positive zero, denoted by $r_1$. The existence of least zeros for this equation and for the functions that follow is verified using the Intermediate Value Theorem (IVT). Set $S_1 = [0, r_1)$. Define functions $g_1$, $h_1$ on $S_1$ by
$$g_1(s) = \frac{\int_0^1 \omega\big((1-\tau)s\big)\,d\tau}{1 - \omega_0(s)}$$
and
$$h_1(s) = g_1(s) - 1,$$
where the function $\omega$ on $S_1$ is continuous and increasing with $\omega(0) = 0$. We have $h_1(0) = -1 < 0$ and $h_1(s) \to \infty$ as $s \to r_1^-$. Denote by $R_1$ the least zero of the equation $h_1(s) = 0$ in $(0, r_1)$.
Suppose that the equation
$$p(s) - 1 = 0$$
has a least positive zero, denoted by $r_p$, where $p(s) = \frac{1}{2}\big(3\omega_0(g_1(s)s) + \omega_0(s)\big)$. Set $r_2 = \min\{r_1, r_p\}$ and $S_2 = [0, r_2)$. Define functions $g_2$ and $h_2$ on $S_2$ by
$$g_2(s) = g_1(s) + \frac{\big(\omega_0(s) + \omega_0(g_1(s)s)\big)\int_0^1 \omega_1(\tau s)\,d\tau}{2\big(1 - \omega_0(s)\big)\big(1 - p(s)\big)}$$
and
$$h_2(s) = g_2(s) - 1,$$
where $\omega_1$ is defined on $S_2$ and is also continuous and increasing. We have again $h_2(0) = -1$ and $h_2(s) \to \infty$ as $s \to r_2^-$. Denote by $R_2$ the least zero of the equation $h_2(s) = 0$ in $(0, r_2)$.
Suppose the equation
$$\omega_0(g_2(s)s) - 1 = 0$$
has a least positive zero, denoted by $r_3$. Set $r_4 = \min\{r_2, r_3\}$ and $S_3 = [0, r_4)$. Define functions $g_3$ and $h_3$ on $S_3$ by
$$g_3(s) = \left[g_1(g_2(s)s) + \frac{\omega_0(s) + \omega_0(g_2(s)s)}{\big(1 - \omega_0(s)\big)\big(1 - \omega_0(g_2(s)s)\big)} + \frac{\big(\omega_0(s) + \omega_0(g_1(s)s)\big)\int_0^1 \omega_1(\tau g_2(s)s)\,d\tau}{\big(1 - \omega_0(s)\big)\big(1 - p(s)\big)}\right]g_2(s)$$
and
$$h_3(s) = g_3(s) - 1.$$
Then, we have $h_3(0) = -1$ and $h_3(s) \to \infty$ as $s \to r_4^-$. Denote by $R_3$ the least solution of the equation $h_3(s) = 0$ in $(0, r_4)$.
Suppose that the equation
$$\omega_0(g_3(s)s) - 1 = 0$$
has a least positive zero, denoted by $r_5$. Set $r_6 = \min\{r_4, r_5\}$ and $S_4 = [0, r_6)$. Define functions $g_4$ and $h_4$ on $S_4$ by
$$g_4(s) = \left[g_1(g_3(s)s) + \frac{\omega_0(s) + \omega_0(g_3(s)s)}{\big(1 - \omega_0(s)\big)\big(1 - \omega_0(g_3(s)s)\big)} + \frac{\big(\omega_0(s) + \omega_0(g_1(s)s)\big)\int_0^1 \omega_1(\tau g_3(s)s)\,d\tau}{\big(1 - \omega_0(s)\big)\big(1 - p(s)\big)}\right]g_3(s)$$
and
$$h_4(s) = g_4(s) - 1.$$
We have $h_4(0) = -1$ and $h_4(s) \to \infty$ as $s \to r_6^-$. Denote by $R_4$ the least solution of the equation $h_4(s) = 0$ in $(0, r_6)$. Consider a radius of convergence $R$ given by
$$R = \min\{R_i\}, \quad i = 1, 2, 3, 4. \tag{8}$$
By these definitions, we have, for all $s \in [0, R)$,
$$0 \le \omega_0(s) < 1, \tag{9}$$
$$0 \le \omega_0(g_1(s)s) < 1, \tag{10}$$
$$0 \le \omega_0(g_2(s)s) < 1, \tag{11}$$
$$0 \le \omega_0(g_3(s)s) < 1 \tag{12}$$
and
$$0 \le g_i(s) < 1, \quad i = 1, 2, 3, 4. \tag{13}$$
Finally, define $U(x, a) = \{y \in B_1 : \|x - y\| < a\}$, and let $\bar{U}(x, a)$ denote its closure. We shall use the notation $e_n = \|x_n - x^*\|$ for all $n = 0, 1, 2, \ldots$.
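In practice, each radius $R_i$ is just the least positive zero of the corresponding $h_i$ on an interval at whose right end $h_i$ blows up, so any bracketing root finder can compute it. The sketch below is a minimal illustration (ours): it assumes the Lipschitz-type choices $\omega_0(s) = L_0 s$ and $\omega(s) = Ls$ of Example 1 in Section 3 and recovers $R_1$; the remaining radii follow the same pattern from $h_2$, $h_3$ and $h_4$.

```python
import math
from scipy.optimize import brentq  # bracketing root finder from SciPy

# Illustrative choices (these are the functions of Example 1 in Section 3):
# omega_0(s) = L0*s and omega(s) = L*s.
L0 = math.e - 1.0
L = math.exp(1.0 / (math.e - 1.0))

omega0 = lambda s: L0 * s
r1 = 1.0 / L0  # least positive zero of omega_0(s) - 1 = 0

# g_1(s) = (int_0^1 omega((1 - tau)*s) dtau) / (1 - omega_0(s));
# for the linear omega above the integral equals L*s/2.
g1 = lambda s: (L * s / 2.0) / (1.0 - omega0(s))
h1 = lambda s: g1(s) - 1.0  # h_1(0) < 0 and h_1 -> infinity as s -> r_1

R1 = brentq(h1, 1e-12, r1 * (1.0 - 1e-12))
print(R1)  # approx 0.38269191..., matching R_1 of Example 1
```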
The following conditions (A) shall be used:
(A1)
$F : D \subseteq B_1 \to B_2$ is continuously differentiable, and there exists a simple solution $x^*$ of the equation $F(x) = 0$ such that $F'(x^*)$ is invertible.
(A2)
There exists a continuous and increasing function $\omega_0$ from $S$ into itself with $\omega_0(0) = 0$ such that, for all $x \in D$,
$$\|F'(x^*)^{-1}\big(F'(x) - F'(x^*)\big)\| \le \omega_0(\|x - x^*\|).$$
Set $D_0 = D \cap U(x^*, r_1)$.
(A3)
There exists a continuous and increasing function $\omega$ from $S_1$ into $S$ with $\omega(0) = 0$ such that, for each $x, y \in D_0$,
$$\|F'(x^*)^{-1}\big(F'(y) - F'(x)\big)\| \le \omega(\|y - x\|).$$
Set $D_1 = D \cap U(x^*, r_2)$.
(A4)
There exists a continuous function $\omega_1$ from $S_2$ into $S$ such that, for all $x \in D_1$,
$$\|F'(x^*)^{-1}F'(x)\| \le \omega_1(\|x - x^*\|).$$
(A5)
$\bar{U}(x^*, R) \subseteq D$, where $R$ is defined in (8), and the radii $r_1$, $r_p$, $r_3$, $r_5$ exist.
(A6)
There exists $R^* \ge R$ such that
$$\int_0^1 \omega_0(\tau R^*)\,d\tau < 1.$$
Set $D_2 = D \cap U(x^*, R^*)$.
Next, the local convergence result for Algorithm (2) follows.
Theorem 1.
Under the conditions (A), choose $x_0 \in U(x^*, R)\setminus\{x^*\}$. Then, the sequence $\{x_n\}$ generated by Algorithm (2) is well defined, stays in $U(x^*, R)$ and satisfies $\lim_{n\to\infty} x_n = x^*$. Moreover, the following estimates hold true:
$$\|y_n - x^*\| \le g_1(e_n)e_n \le e_n < R, \tag{14}$$
$$\|z_n - x^*\| \le g_2(e_n)e_n \le e_n, \tag{15}$$
$$\|v_n - x^*\| \le g_3(e_n)e_n \le e_n \tag{16}$$
and
$$\|x_{n+1} - x^*\| \le g_4(e_n)e_n \le e_n, \tag{17}$$
where the functions $g_i$ are as introduced earlier and $R$ is defined by (8). Furthermore, $x^*$ is the only solution of the equation $F(x) = 0$ in the set $D_2$ given in (A6).
Proof. 
Consider $x \in U(x^*, R)\setminus\{x^*\}$. By (A1) and (A2),
$$\|F'(x^*)^{-1}\big(F'(x) - F'(x^*)\big)\| \le \omega_0(\|x - x^*\|) \le \omega_0(R) < 1,$$
so, by the Banach lemma on invertible operators [20], $F'(x)^{-1} \in L(B_2, B_1)$ with
$$\|F'(x)^{-1}F'(x^*)\| \le \frac{1}{1 - \omega_0(\|x - x^*\|)}. \tag{18}$$
Setting $x = x_0$, we obtain from the first sub-step of Algorithm (2) (for $n = 0$) that $y_0$ exists. Then, using this sub-step, (A1), (8), (A3), (18) and (13) (for $i = 1$), we get
$$\begin{aligned} \|y_0 - x^*\| &= \|x_0 - x^* - F'(x_0)^{-1}F(x_0)\|\\ &\le \|F'(x_0)^{-1}F'(x^*)\|\left\|\int_0^1 F'(x^*)^{-1}\big(F'(x^* + \tau(x_0 - x^*)) - F'(x_0)\big)(x_0 - x^*)\,d\tau\right\|\\ &\le \frac{\int_0^1 \omega\big((1-\tau)e_0\big)\,d\tau\, e_0}{1 - \omega_0(e_0)} = g_1(e_0)e_0 \le e_0 < R, \end{aligned} \tag{19}$$
so $y_0 \in U(x^*, R)$ and (14) is true for $n = 0$. We must show that $A_0$ is invertible, so that $z_0$, $v_0$ and $x_1$ exist by Algorithm (2) for $n = 0$. Indeed, we have, by (A2) and (19),
$$\begin{aligned} \big\|(2F'(x^*))^{-1}\big(3(F'(y_0) - F'(x^*)) + (F'(x^*) - F'(x_0))\big)\big\| &\le \tfrac{1}{2}\big(3\|F'(x^*)^{-1}(F'(y_0) - F'(x^*))\| + \|F'(x^*)^{-1}(F'(x_0) - F'(x^*))\|\big)\\ &\le \tfrac{1}{2}\big(\omega_0(e_0) + 3\omega_0(\|y_0 - x^*\|)\big) \le \tfrac{1}{2}\big(\omega_0(e_0) + 3\omega_0(g_1(e_0)e_0)\big) = p(e_0) \le p(R) < 1, \end{aligned} \tag{20}$$
so $A_0$ is invertible and
$$\|A_0^{-1}F'(x^*)\| \le \frac{1}{2\big(1 - p(e_0)\big)}. \tag{21}$$
Then, using the second sub-step of Algorithm (2), (8), (13) (for $i = 2$), (18) (for $x = x_0$), (19) and (20), we first have
$$\begin{aligned} z_0 - x^* &= x_0 - x^* - F'(x_0)^{-1}F(x_0) + \Big(F'(x_0)^{-1} - \tfrac{2}{3}A_0^{-1} - \tfrac{2}{3}F'(x_0)^{-1}\Big)F(x_0)\\ &= x_0 - x^* - F'(x_0)^{-1}F(x_0) + \tfrac{1}{3}F'(x_0)^{-1}\big(3F'(y_0) - F'(x_0) - 2F'(x_0)\big)A_0^{-1}F(x_0)\\ &= \big(x_0 - x^* - F'(x_0)^{-1}F(x_0)\big) + F'(x_0)^{-1}\big(F'(y_0) - F'(x_0)\big)A_0^{-1}F(x_0). \end{aligned} \tag{22}$$
So, using also the triangle inequality, we get
$$\|z_0 - x^*\| \le \left[g_1(e_0) + \frac{\big(\omega_0(\|y_0 - x^*\|) + \omega_0(e_0)\big)\int_0^1 \omega_1(\tau e_0)\,d\tau}{2\big(1 - \omega_0(e_0)\big)\big(1 - p(e_0)\big)}\right]e_0 \le g_2(e_0)e_0 \le e_0,$$
so $z_0 \in U(x^*, R)$ and (15) is true for $n = 0$. By the third sub-step of Algorithm (2) for $n = 0$, we write
$$\begin{aligned} v_0 - x^* &= z_0 - x^* - F'(z_0)^{-1}F(z_0) + \Big[F'(z_0)^{-1} - \tfrac{1}{3}\big(4A_0^{-1} + F'(x_0)^{-1}\big)\Big]F(z_0)\\ &= z_0 - x^* - F'(z_0)^{-1}F(z_0) + \tfrac{1}{3}\Big[3F'(z_0)^{-1} - F'(x_0)^{-1} - 4\big(3F'(y_0) - F'(x_0)\big)^{-1}\Big]F(z_0)\\ &= z_0 - x^* - F'(z_0)^{-1}F(z_0) + \Big(F'(z_0)^{-1}\big(F'(x_0) - F'(z_0)\big)F'(x_0)^{-1} + 2F'(x_0)^{-1}\big(F'(y_0) - F'(x_0)\big)A_0^{-1}\Big)F'(x^*)F'(x^*)^{-1}F(z_0). \end{aligned}$$
Then, using (8), (13) (for $i = 3$), (18) (for $x = z_0$) and (19)–(22), we get
$$\|v_0 - x^*\| \le \left[g_1(\|z_0 - x^*\|) + \frac{\omega_0(e_0) + \omega_0(\|z_0 - x^*\|)}{\big(1 - \omega_0(e_0)\big)\big(1 - \omega_0(\|z_0 - x^*\|)\big)} + \frac{\big(\omega_0(e_0) + \omega_0(\|y_0 - x^*\|)\big)\int_0^1 \omega_1(\tau\|z_0 - x^*\|)\,d\tau}{\big(1 - \omega_0(e_0)\big)\big(1 - p(e_0)\big)}\right]\|z_0 - x^*\| \le g_3(e_0)e_0 \le e_0 < R,$$
so $v_0 \in U(x^*, R)$ and (16) holds true for $n = 0$. Similarly, if we exchange the roles of $z_0$ and $v_0$, we first obtain
$$x_1 - x^* = v_0 - x^* - F'(v_0)^{-1}F(v_0) + \Big[F'(v_0)^{-1}\big(F'(x_0) - F'(v_0)\big)F'(x_0)^{-1} + 2F'(x_0)^{-1}\big(F'(y_0) - F'(x_0)\big)A_0^{-1}\Big]F(v_0).$$
So, we get that
$$\|x_1 - x^*\| \le \left[g_1(\|v_0 - x^*\|) + \frac{\omega_0(e_0) + \omega_0(\|v_0 - x^*\|)}{\big(1 - \omega_0(e_0)\big)\big(1 - \omega_0(\|v_0 - x^*\|)\big)} + \frac{\big(\omega_0(e_0) + \omega_0(\|y_0 - x^*\|)\big)\int_0^1 \omega_1(\tau\|v_0 - x^*\|)\,d\tau}{\big(1 - \omega_0(e_0)\big)\big(1 - p(e_0)\big)}\right]\|v_0 - x^*\| \le g_4(e_0)e_0 \le e_0,$$
so $x_1 \in U(x^*, R)$ and (17) is true for $n = 0$. Hence, estimates (14)–(17) are true for $n = 0$. Suppose (14)–(17) are true for $j = 0, 1, 2, \ldots, n-1$; then, by replacing $x_0, y_0, z_0, v_0, x_1$ with $x_j, y_j, z_j, v_j, x_{j+1}$ in the previous estimates, we immediately obtain that these estimates hold for $j = n$, completing the induction. Moreover, by the estimate
$$\|x_{n+1} - x^*\| \le \lambda\|x_n - x^*\| < R,$$
with $\lambda = g_4(e_0) \in [0, 1)$, we obtain $\lim_{n\to\infty} x_n = x^*$ and $x_{n+1} \in U(x^*, R)$. Finally, let $u \in D_2$ with $F(u) = 0$, and set $G = \int_0^1 F'\big(u + \tau(x^* - u)\big)\,d\tau$. In view of (A2) and (A6), we get
$$\|F'(x^*)^{-1}\big(G - F'(x^*)\big)\| \le \int_0^1 \omega_0\big((1-\tau)\|x^* - u\|\big)\,d\tau \le \int_0^1 \omega_0(\tau R^*)\,d\tau < 1,$$
so $G$ is invertible; from the identity $0 = F(x^*) - F(u) = G(x^* - u)$, we conclude that $x^* = u$. □
In a similar way, we provide the local convergence analysis for Algorithm (3). This time, the "$g$" and "$h$" functions are, respectively, $\bar{g}_1 = g_1$, $\bar{h}_1 = h_1$, $\bar{R}_1 = R_1$,
$$\bar{g}_2(s) = \bar{g}_1(s) + \frac{\big(\omega_0(s) + \omega_0(\bar{g}_1(s)s)\big)\int_0^1 \omega_1(\tau s)\,d\tau}{2\big(1 - \omega_0(s)\big)\big(1 - \omega_0(\bar{g}_1(s)s)\big)},$$
$$\bar{h}_2(s) = \bar{g}_2(s) - 1 \quad (\text{with } \bar{R}_2 \text{ the least positive zero of } \bar{h}_2(s) = 0),$$
$$\bar{g}_3(s) = \frac{\Big[\bar{g}_1(\bar{g}_2(s)s) + \frac{1}{2}c(s)\int_0^1 \omega_1(\tau\bar{g}_2(s)s)\,d\tau\Big]\bar{g}_2(s)}{1 - \omega_0(\bar{g}_1(s)s)},$$
$$\bar{h}_3(s) = \bar{g}_3(s) - 1 \quad (\text{with } \bar{R}_3 \text{ the least positive zero of } \bar{h}_3(s) = 0),$$
where
$$c(s) = \frac{1}{2}\left[\frac{\omega_0(s) + \omega_0(\bar{g}_2(s)s)}{1 - \omega_0(\bar{g}_2(s)s)} + \frac{1}{1 - \omega_0(\bar{g}_2(s)s)}\left(\omega_0(\bar{g}_1(s)s) + \omega_0(\bar{g}_2(s)s) + \frac{\omega_0(\bar{g}_2(s)s)}{1 - \omega_0(\bar{g}_1(s)s)}\big(\omega_0(\bar{g}_1(s)s) + \omega_0(s)\big)\right)\frac{1}{1 - \omega_0(\bar{g}_1(s)s)}\right],$$
$$\bar{g}_4(s) = \Big[\bar{g}_1(\bar{g}_3(s)s) + \frac{1}{2}d(s)\int_0^1 \omega_1(\tau\bar{g}_3(s)s)\,d\tau\Big]\bar{g}_3(s)$$
and
$$\bar{h}_4(s) = \bar{g}_4(s) - 1,$$
where
$$d(s) = \frac{1}{2}\left[\frac{\omega_0(s) + \omega_0(\bar{g}_3(s)s)}{1 - \omega_0(\bar{g}_3(s)s)} + \frac{1}{1 - \omega_0(\bar{g}_3(s)s)}\left(\omega_0(\bar{g}_1(s)s) + \omega_0(\bar{g}_3(s)s) + \frac{\omega_0(\bar{g}_3(s)s)}{1 - \omega_0(\bar{g}_1(s)s)}\big(\omega_0(\bar{g}_1(s)s) + \omega_0(s)\big)\right)\frac{1}{1 - \omega_0(\bar{g}_1(s)s)}\right]$$
and $\bar{R}_4$ is the least positive zero of the equation $\bar{h}_4(s) = 0$. A radius of convergence $\bar{R}$ is then defined, as in (8), by
$$\bar{R} = \min\{\bar{R}_i\}, \quad i = 1, 2, 3, 4.$$
Estimates (9)–(13) also hold with these changes. This time, we use the estimates
$$\begin{aligned} z_0 - x^* &= x_0 - x^* - F'(x_0)^{-1}F(x_0) + \Big[F'(x_0)^{-1} - \tfrac{1}{2}\big(F'(y_0)^{-1} + F'(x_0)^{-1}\big)\Big]F(x_0)\\ &= x_0 - x^* - F'(x_0)^{-1}F(x_0) + \tfrac{1}{2}F'(x_0)^{-1}\big(F'(y_0) - F'(x_0)\big)F'(y_0)^{-1}F(x_0), \end{aligned}$$
so
$$\|z_0 - x^*\| \le \left(\bar{g}_1(e_0) + \frac{\big(\omega_0(e_0) + \omega_0(\|y_0 - x^*\|)\big)\int_0^1 \omega_1(\tau e_0)\,d\tau}{2\big(1 - \omega_0(e_0)\big)\big(1 - \omega_0(\|y_0 - x^*\|)\big)}\right)e_0 \le \bar{g}_2(e_0)e_0 \le e_0 < \bar{R}.$$
Moreover, we can write
$$v_0 - x^* = z_0 - x^* - F'(z_0)^{-1}F(z_0) + \Big[F'(z_0)^{-1} - \tfrac{1}{2}B_0 - \tfrac{1}{2}F'(x_0)^{-1}\Big]F(z_0) = z_0 - x^* - F'(z_0)^{-1}F(z_0) + C_0F(z_0),$$
so
$$\|v_0 - x^*\| \le \Big(\bar{g}_1(\|z_0 - x^*\|) + \tfrac{1}{2}c(e_0)\int_0^1 \omega_1(\tau\|z_0 - x^*\|)\,d\tau\Big)\bar{g}_2(e_0)e_0 \le \bar{g}_3(e_0)e_0 \le e_0.$$
Then, we can write
$$x_1 - x^* = v_0 - x^* - F'(v_0)^{-1}F(v_0) + D_0F(v_0),$$
so
$$\|x_1 - x^*\| \le \Big(\bar{g}_1(\bar{g}_3(e_0)e_0) + \tfrac{1}{2}d(e_0)\int_0^1 \omega_1(\tau\bar{g}_3(e_0)e_0)\,d\tau\Big)\bar{g}_3(e_0)e_0 \le \bar{g}_4(e_0)e_0 \le e_0,$$
where
$$C_0 = F'(z_0)^{-1} - \tfrac{1}{2}F'(y_0)^{-1}F'(x_0)F'(y_0)^{-1} - \tfrac{1}{2}F'(x_0)^{-1} = b_1 + b_2,$$
$$b_1 = \tfrac{1}{2}\big(F'(z_0)^{-1} - F'(x_0)^{-1}\big) = \tfrac{1}{2}F'(z_0)^{-1}\big(F'(x_0) - F'(z_0)\big)F'(x_0)^{-1},$$
$$b_2 = \tfrac{1}{2}b_3$$
and
$$\begin{aligned} b_3 &= F'(z_0)^{-1} - F'(y_0)^{-1}F'(x_0)F'(y_0)^{-1} = F'(z_0)^{-1}\big[I - F'(z_0)F'(y_0)^{-1}F'(x_0)F'(y_0)^{-1}\big]\\ &= F'(z_0)^{-1}\big[F'(y_0) - F'(z_0)F'(y_0)^{-1}F'(x_0)\big]F'(y_0)^{-1}\\ &= F'(z_0)^{-1}\big[F'(y_0) - F'(z_0) + F'(z_0)\big(I - F'(y_0)^{-1}F'(x_0)\big)\big]F'(y_0)^{-1}\\ &= F'(z_0)^{-1}\big[\big(F'(y_0) - F'(z_0)\big) + F'(z_0)F'(y_0)^{-1}\big(F'(y_0) - F'(x_0)\big)\big]F'(y_0)^{-1}, \end{aligned}$$
so
$$\begin{aligned} \|C_0F'(x^*)\| &\le \|b_1F'(x^*)\| + \|b_2F'(x^*)\|\\ &\le \frac{1}{2}\left[\frac{\omega_0(e_0) + \omega_0(\|z_0 - x^*\|)}{1 - \omega_0(\|z_0 - x^*\|)} + \frac{1}{1 - \omega_0(\|z_0 - x^*\|)}\left(\omega_0(\|y_0 - x^*\|) + \omega_0(\|z_0 - x^*\|) + \frac{\omega_0(\|z_0 - x^*\|)}{1 - \omega_0(\|y_0 - x^*\|)}\big(\omega_0(\|y_0 - x^*\|) + \omega_0(e_0)\big)\right)\frac{1}{1 - \omega_0(\|y_0 - x^*\|)}\right]\\ &= c_0(e_0) \end{aligned}$$
and
$$D_0 = F'(v_0)^{-1} - \tfrac{1}{2}F'(y_0)^{-1}F'(x_0)F'(y_0)^{-1} - \tfrac{1}{2}F'(x_0)^{-1}$$
($v_0$ simply replaces $z_0$ in the definition of $C_0$), so
$$\|D_0F'(x^*)\| \le \frac{1}{2}\left[\frac{\omega_0(e_0) + \omega_0(\|v_0 - x^*\|)}{1 - \omega_0(\|v_0 - x^*\|)} + \frac{1}{1 - \omega_0(\|v_0 - x^*\|)}\left(\omega_0(\|y_0 - x^*\|) + \omega_0(\|v_0 - x^*\|) + \frac{\omega_0(\|v_0 - x^*\|)}{1 - \omega_0(\|y_0 - x^*\|)}\big(\omega_0(\|y_0 - x^*\|) + \omega_0(e_0)\big)\right)\frac{1}{1 - \omega_0(\|y_0 - x^*\|)}\right] = d_0(e_0).$$
Hence, with these changes, we obtain the local convergence analysis of Algorithm (3).
Theorem 2.
Under the conditions (A), the conclusions of Theorem 1 hold for Algorithm (3), but with $R$, $g_i$ and $h_i$ replaced by $\bar{R}$, $\bar{g}_i$ and $\bar{h}_i$, respectively.
Remark 1.
We can compute [17] the computational order of convergence (COC) defined by
$$\xi = \frac{\ln\left(\dfrac{\|x_{n+1} - x^*\|}{\|x_n - x^*\|}\right)}{\ln\left(\dfrac{\|x_n - x^*\|}{\|x_{n-1} - x^*\|}\right)}$$
or the approximate computational order of convergence
$$\xi_1 = \frac{\ln\left(\dfrac{\|x_{n+1} - x_n\|}{\|x_n - x_{n-1}\|}\right)}{\ln\left(\dfrac{\|x_n - x_{n-1}\|}{\|x_{n-1} - x_{n-2}\|}\right)}.$$
In this way, we obtain in practice the order of convergence without resorting to the computation of the higher-order derivatives that appear in the method, or in the sufficient convergence criteria usually appearing in the Taylor expansions used for the proofs of such results.
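As an illustration, here is a short routine (ours) that estimates $\xi_1$ from the last four iterates of a run:

```python
import math
import numpy as np

def acoc(xs):
    """Approximate computational order of convergence xi_1, computed from
    the last four iterates in xs (scalars or vectors)."""
    x4 = xs[-4:]
    d = [np.linalg.norm(np.subtract(x4[k + 1], x4[k])) for k in range(3)]
    return math.log(d[2] / d[1]) / math.log(d[1] / d[0])

# For a convergent run of a ninth-order method, acoc(iterates) should return
# a value close to 9, provided the iterates are not yet at machine precision.
```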

3. Numerical Examples

Example 1.
Let us consider a system of differential equations governing the motion of an object, given by
$$H_1'(x) = e^x, \quad H_2'(y) = (e-1)y + 1, \quad H_3'(z) = 1,$$
with initial conditions $H_1(0) = H_2(0) = H_3(0) = 0$. Let $H = (H_1, H_2, H_3)$, $B_1 = B_2 = \mathbb{R}^3$, $D = \bar{U}(0, 1)$ and $x^* = (0, 0, 0)^T$. Define the function $H$ on $D$, for $w = (x, y, z)^T$, by
$$H(w) = \left(e^x - 1,\ \frac{e-1}{2}y^2 + y,\ z\right)^T.$$
The Fréchet derivative is given by
$$H'(w) = \begin{pmatrix} e^x & 0 & 0\\ 0 & (e-1)y + 1 & 0\\ 0 & 0 & 1 \end{pmatrix}.$$
Notice that, using the conditions (A), we get, for $x^* = (0, 0, 0)^T$: $\omega_0(s) = (e-1)s$, $\omega(s) = e^{\frac{1}{e-1}}s$ and $\omega_1(s) = e^{\frac{1}{e-1}}$. The radii are
$R_1 = 0.38269191223238574472986783803208$, $R_2 = 0.19249424357776143135190238808718$,
$R_3 = 0.16097144932100204695046841152362$, $R_4 = 0.1731041505859549911594541526938$,
$\bar{R}_2 = 0.23043767601276282652733584654925$, $\bar{R}_3 = 2.5823927758875733218246750766411$,
$\bar{R}_4 = 0.2195161774302133161906880332026$.
Example 2.
Let $B_1 = B_2 = C[0, 1]$, the space of continuous functions defined on $[0, 1]$, equipped with the max norm, and let $D = \bar{U}(0, 1)$. Define the function $H$ on $D$ by
$$H(\varphi)(x) = \varphi(x) - 5\int_0^1 x\theta\,\varphi(\theta)^3\,d\theta.$$
We have that
$$H'(\varphi)(\xi)(x) = \xi(x) - 15\int_0^1 x\theta\,\varphi(\theta)^2\,\xi(\theta)\,d\theta, \quad \text{for each } \xi \in D.$$
Then, we get $x^* = 0$, so $\omega_0(s) = 7.5s$, $\omega(s) = 15s$ and $\omega_1(s) = 2$. Then, the radii are
$R_1 = 0.066666666666666666666666666666667$, $R_2 = 0.035324865589989970504625205194316$,
$R_3 = 0.047263681322789477534662694324652$, $R_4 = 0.021857976760806939464654163884916$,
$\bar{R}_2 = 1.1302558424873363485119170945836$, $\bar{R}_3 = 0.13819337319040553291316086870211$,
$\bar{R}_4 = 0.052052957742070200819473058118092$.
Example 3.
Returning to the motivational example given in the introduction of this study, we have, for $x^* = 1$: $\omega_0(s) = \omega(s) = 96.6629073\,s$ and $\omega_1(s) = 2$. Then, the radii are
$R_1 = 0.0068968199414654552878434223828208$, $R_2 = 0.00077090035103644658290300561986896$,
$R_3 = 0.00012680765288154951706510453757204$, $R_4 = 0.010244807279452188691903913309034$,
$\bar{R}_2 = 2.0195452298754390518809032073477$, $\bar{R}_3 = 0.044412236972383459243651770975703$,
$\bar{R}_4 = 1.9996509448227068883596757586929$.
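For this example, $\omega_0 = \omega = Ls$ with $L = 96.6629073$, and the integral in $g_1$ equals $Ls/2$, so that $g_1(s) = \frac{Ls/2}{1 - Ls}$ and $R_1$ has the closed form $R_1 = \frac{2}{3L}$. A one-line check (ours) confirms this:

```python
L = 96.6629073
R1 = 2.0 / (3.0 * L)  # least positive zero of g1(s) - 1 when omega0 = omega = L*s
print(R1)             # about 0.0068968199..., matching R_1 listed above
```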

Author Contributions

Conceptualization, S.R. and I.K.A.; methodology, S.R., I.K.A. and S.G.; software, I.K.A. and S.G.; validation, S.R., I.K.A. and S.G.; formal analysis, S.R., I.K.A. and S.G.; investigation, S.R., I.K.A. and S.G.; resources, S.R., I.K.A. and S.G.; data curation, S.R. and S.G.; writing—original draft preparation, S.R., I.K.A. and S.G.; writing—review and editing, I.K.A. and S.G.; visualization, S.R., I.K.A. and S.G.; supervision, I.K.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Aizenshtein, M.; Bartoň, M.; Elber, G. Global solutions of well-constrained transcendental systems using expression trees and a single solution test. Comput. Aided Des. 2012, 29, 265–279. [Google Scholar] [CrossRef]
  2. Amat, S.; Argyros, I.K.; Busquier, S.; Magreñán, A.A. Local convergence and the dynamics of a two-point four parameter Jarratt-like method under weak conditions. Numer. Algorithms 2017. [Google Scholar] [CrossRef]
  3. Argyros, I.K.; Magreñán, A.A. Iterative Methods and Their Dynamics with Applications; CRC Press: New York, NY, USA, 2017. [Google Scholar]
  4. Van Sosin, B.; Elber, G. Solving piecewise polynomial constraint systems with decomposition and a subdivision-based solver. Comput. Aided Des. 2017, 90, 37–47. [Google Scholar] [CrossRef]
  5. Argyros, I.K. Computational Theory of Iterative Methods; Series: Studies in Computational Mathematics, 15; Chui, C.K., Wuytack, L., Eds.; Elsevier: New York, NY, USA, 2007. [Google Scholar]
  6. Argyros, I.K.; Magreñán, A.A. A study on the local convergence and the dynamics of Chebyshev-Halley-type methods free from second derivative. Numer. Algorithms 2015, 71, 1–23. [Google Scholar] [CrossRef]
  7. Alzahrani, A.K.H.; Behl, R.; Alshomrani, A.S. Some higher-order iteration functions for solving nonlinear models. Appl. Math. Comput. 2018, 334, 80–93. [Google Scholar] [CrossRef]
  8. Bartoň, M. Solving polynomial systems using no-root elimination blending schemes. Comput. Aided Des. 2011, 43, 1870–1878. [Google Scholar] [CrossRef]
  9. Behl, R.; Cordero, A.; Motsa, S.S.; Torregrosa, J.R. Stable high-order iterative methods for solving nonlinear models. Appl. Math. Comput. 2017, 303, 70–88. [Google Scholar] [CrossRef]
  10. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method for functions of several variables. Appl. Math. Comput. 2006, 183, 199–208. [Google Scholar] [CrossRef]
  11. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton-Jarratt’s composition. Numer. Algorithms 2010, 55, 87–99. [Google Scholar] [CrossRef]
  12. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
  13. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. Increasing the convergence order of an iterative method for nonlinear systems. Appl. Math. Lett. 2012, 25, 2369–2374. [Google Scholar] [CrossRef] [Green Version]
  14. Darvishi, M.T.; Barati, A. Super cubic iterative methods to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 188, 1678–1685. [Google Scholar] [CrossRef]
  15. Esmaeili, H.; Ahmadi, M. An efficient three-step method to solve system of non linear equations. Appl. Math. Comput. 2015, 266, 1093–1101. [Google Scholar]
  16. Lotfi, T.; Bakhtiari, P.; Cordero, A.; Mahdiani, K.; Torregrosa, J.R. Some new efficient multipoint iterative methods for solving nonlinear systems of equations. Int. J. Comput. Math. 2015, 92, 1921–1934. [Google Scholar] [CrossRef] [Green Version]
  17. Magreñán, A.A. Different anomalies in a Jarratt family of iterative root finding methods. Appl. Math. Comput. 2014, 233, 29–38. [Google Scholar]
  18. Magreñán, A.A. A new tool to study real dynamics: The convergence plane. Appl. Math. Comput. 2014, 248, 29–38. [Google Scholar] [CrossRef] [Green Version]
  19. Noor, M.A.; Waseem, M. Some iterative methods for solving a system of nonlinear equations. Comput. Math. Appl. 2009, 57, 101–106. [Google Scholar] [CrossRef] [Green Version]
  20. Ortega, J.M.; Rheinboldt, W.C. Iterative Solutions of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  21. Sharma, J.R.; Arora, H. Improved Newton-like methods for solving systems of nonlinear equations. SeMA 2017, 74, 147–163. [Google Scholar] [CrossRef]
  22. Sharma, J.R.; Arora, H. Efficient derivative-free numerical methods for solving systems of nonlinear equations. Comput. Appl. Math. 2016, 35, 269–284. [Google Scholar] [CrossRef]
  23. Xiao, X.Y.; Yin, H.W. Increasing the order of convergence for iterative methods to solve nonlinear systems. Calcolo 2016, 53, 285–300. [Google Scholar] [CrossRef]
  24. Argyros, I.K.; George, S.; Magreñán, A.A. Local convergence for multi-point-parametric Chebyshev-Halley-type methods of higher convergence order. J. Comput. Appl. Math. 2015, 282, 215–224. [Google Scholar] [CrossRef]
