Article

Local Convergence of Traub’s Method and Its Extensions

1 Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, Mangalore 575 025, India
2 Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
* Author to whom correspondence should be addressed.
Fractal Fract. 2023, 7(1), 98; https://doi.org/10.3390/fractalfract7010098
Submission received: 3 December 2022 / Revised: 11 January 2023 / Accepted: 13 January 2023 / Published: 16 January 2023

Abstract:
In this article, we examine the local convergence of an extension of Newton's method in a Banach space setting. Traub introduced this third-order method, which is also known as the Arithmetic-Mean Newton's method and the Weerakoon–Fernando method. All previous works either used higher-order Taylor series expansions or could not derive the desired order of convergence. We study the local convergence of Traub's method and two of its modifications and obtain the convergence order of these methods without using Taylor series expansions. The radii of convergence, basins of attraction, comparisons of iterates with similar iterative methods, the approximate computational order of convergence (ACOC), and a representation of the number of iterations are provided.

1. Introduction

Many problems in engineering and the natural sciences can be modeled by an equation of the form
$$F(x) = 0, \qquad (1)$$
where $F : \Omega \subseteq X \to Y$ is a Fréchet differentiable function on a convex subset $\Omega \subseteq X$, and $X$ and $Y$ are Banach spaces [1]. We are interested in finding a locally unique solution of Equation (1). In general, no analytical or closed-form solution exists, so we turn to iterative methods. Newton's method, defined as
$$x_{n+1} = x_n - A_n^{-1}F(x_n), \quad n = 0, 1, 2, \ldots,$$
where $A_n = F'(x_n)$, is very popular. Almost all methods in the literature are modifications or extensions of this method; see [2]. Many authors have considered multi-step methods in order to increase efficiency as well as the order of convergence [3,4,5,6].
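The paper's own computations were carried out in MATLAB (see Section 3); purely as an illustration, the following is a minimal Python/NumPy sketch of this iteration for a system $F : \mathbb{R}^m \to \mathbb{R}^m$. The names `F` and `Fprime` are hypothetical placeholders for a user-supplied map and its Jacobian.

```python
import numpy as np

def newton(F, Fprime, x0, tol=1e-10, max_iter=50):
    """Newton's method: x_{n+1} = x_n - F'(x_n)^{-1} F(x_n)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # solve A_n s = F(x_n) instead of explicitly inverting A_n
        step = np.linalg.solve(Fprime(x), F(x))
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x
```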
Among the multi-step methods, the Mean Newton methods are well studied (see [7]). Traub introduced a modification of Newton’s method in [5] (see also [8]), defined as follows:
$$y_n = x_n - A_n^{-1}F(x_n), \qquad x_{n+1} = x_n - 2\Delta_n^{-1}F(x_n), \quad n = 0, 1, 2, \ldots, \qquad (2)$$
where $\Delta_n = F'(x_n) + F'(y_n)$. This method, also called the Arithmetic-Mean Newton's method, has an order of convergence of three. Later, Weerakoon and Fernando [9] derived the method using a trapezoidal approximation of the interpolatory quadrature formula. Frontini and Sormani [10] showed that this method is one of the most efficient quadrature-based variants of Newton's method. Later, Cordero et al. [11] extended the method, defining, for $n = 0, 1, 2, \ldots$,
$$y_n = x_n - A_n^{-1}F(x_n), \qquad z_n = x_n - 2\Delta_n^{-1}F(x_n), \qquad x_{n+1} = z_n - F'(y_n)^{-1}F(z_n), \qquad (3)$$
where $F : D \subseteq \mathbb{R}^n \to \mathbb{R}^n$. However, their analysis used Taylor series expansion and assumed the existence of derivatives up to order six. Sharma and Parhi [12] removed these assumptions and studied the method in Banach spaces; nevertheless, they were unable to obtain the desired order. Many local, as well as semi-local, convergence studies have been conducted on this method [7,13,14,15].
In this paper, we consider methods (2) and (3) and an extension defined for $n = 0, 1, 2, \ldots$ by
$$y_n = x_n - A_n^{-1}F(x_n), \qquad z_n = x_n - 2\Delta_n^{-1}F(x_n), \qquad x_{n+1} = z_n - F'(z_n)^{-1}F(z_n). \qquad (4)$$
This method has an order of six. Parhi and Gupta [15] used the above extension to obtain an efficient sixth-order method using a linear interpolation of $F'(z_n)$.
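Methods (2)–(4) share the first two sub-steps and differ only in the final one, so they fit naturally into a single routine. The sketch below is our illustration, not the authors' code; `traub_family` and its `order` flag are our own naming.

```python
import numpy as np

def traub_family(F, Fp, x0, order=3, tol=1e-12, max_iter=50):
    """Traub's method (2) (order=3) and its extensions
    (3) (order=5) and (4) (order=6). F is the map, Fp its Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        An = Fp(x)
        y = x - np.linalg.solve(An, F(x))             # Newton predictor
        Delta = An + Fp(y)                            # Delta_n = F'(x_n) + F'(y_n)
        z = x - 2.0 * np.linalg.solve(Delta, F(x))    # Traub (arithmetic-mean) step
        if order == 3:
            x_new = z                                 # method (2)
        elif order == 5:
            x_new = z - np.linalg.solve(Fp(y), F(z))  # method (3)
        else:
            x_new = z - np.linalg.solve(Fp(z), F(z))  # method (4)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```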
Our paper is organized as follows. In Section 2, each method's preliminary functions, definitions, and auxiliary results are given in order. Some numerical examples showing the radii of convergence and the approximate computational order of convergence (ACOC), and an example illustrating the basins of attraction, are given in Section 3. Section 3 also contains a representation of the number of iterations as a heatmap, together with tables that compare the iterates of methods (2)–(4) with the corresponding methods in [16]. Finally, the paper ends with conclusions in Section 4.

2. Main Results

Firstly, we introduce some functions required in the proofs, along with necessary notations and definitions.
Definition 1
([5], p. 9). If, for a sequence $\{z_n\}$ converging to $z^*$, there exists some $p \in [0, \infty)$ such that the limit
$$C_p = \lim_{n \to \infty} \frac{\|z_{n+1} - z^*\|}{\|z_n - z^*\|^p}$$
exists, then $p$ is called the order of convergence of the sequence $\{z_n\}$.
The above definition is somewhat restrictive. Ortega and Rheinboldt discussed the more general concepts of R-order and Q-order in [2]. Nevertheless, under the additional condition $0 < C_p < \infty$ on $C_p$, the above definition is equivalent to the Q- and R-orders (see [17,18]).
We use the R-order of convergence, defined as follows (see [19,20]).
A sequence $\{z_n\}$ converges to $z^*$ with R-order at least $p$ if there are constants $C \in (0, \infty)$ and $\gamma \in (0, 1)$ such that
$$\|z_{n+1} - z^*\| \le C\gamma^{p^n}, \quad n = 0, 1, 2, \ldots. \qquad (5)$$
Note that
$$\|z_{n+1} - z^*\| \le c\,\|z_n - z^*\|^p, \quad c > 0, \; n = 0, 1, 2, \ldots, \qquad (6)$$
for $p \in \mathbb{N}$ implies (5), provided $\|z_0 - z^*\| < 1$; indeed, for $p \ge 2$, iterating (6) yields $\|z_n - z^*\| \le c^{-1/(p-1)}\big(c^{1/(p-1)}\|z_0 - z^*\|\big)^{p^n}$, which is of the form (5) once $c^{1/(p-1)}\|z_0 - z^*\| < 1$.
For practical computation of the order of convergence, one may use the approximate computational order of convergence (ACOC) [21], defined as
$$\mathrm{ACOC} = \frac{\ln\big(\|z_{k+2} - z_{k+1}\| / \|z_{k+1} - z_k\|\big)}{\ln\big(\|z_{k+1} - z_k\| / \|z_k - z_{k-1}\|\big)}.$$
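In practice, this quotient is evaluated from the last four stored iterates; a small helper (our sketch) is:

```python
import numpy as np

def acoc(iterates):
    """ACOC from the last four iterates z_{k-1}, z_k, z_{k+1}, z_{k+2}."""
    z1, z2, z3, z4 = [np.asarray(z, dtype=float) for z in iterates[-4:]]
    num = np.log(np.linalg.norm(z4 - z3) / np.linalg.norm(z3 - z2))
    den = np.log(np.linalg.norm(z3 - z2) / np.linalg.norm(z2 - z1))
    return num / den
```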
Throughout this paper, the open and closed balls centered at $\tilde{u}$ with radius $\delta$ are denoted by
$$B(\tilde{u}, \delta) = \{u \in X : \|u - \tilde{u}\| < \delta\} \quad \text{and} \quad B[\tilde{u}, \delta] = \{u \in X : \|u - \tilde{u}\| \le \delta\},$$
respectively. Furthermore, $\|x_n - x^*\|$ is denoted by $\epsilon_{x_n}$, $\|y_n - x^*\|$ by $\epsilon_{y_n}$, and $\|z_n - x^*\|$ by $\epsilon_{z_n}$. Let $K_1$, $K_2$, and $M$ be positive real constants; our proofs are subject to the following conditions.
Assumption 1.
Assume:
 1. $x^*$ is a root of $F(x) = 0$ and $A_*^{-1}$ exists, where $A_* = F'(x^*)$;
 2. there exists $K_1 > 0$ such that $\|A_*^{-1}(F'(x) - F'(y))\| \le K_1\|x - y\|$ for all $x, y \in \Omega$;
 3. there exists $M > 0$ such that $\|A_*^{-1}F''(x)\| \le M$ for all $x \in \Omega$;
 4. there exists $K_2 > 0$ such that $\|A_*^{-1}(F''(x) - F''(y))\| \le K_2\|x - y\|$ for all $x, y \in \Omega$.
First, we define the function $\zeta_1 : [0, \frac{1}{K_1}) \to \mathbb{R}$ as
$$\zeta_1(t) = \frac{K_1}{2(1 - K_1 t)}.$$
Let $\rho_1 = \frac{2}{3K_1}$. Note that $h_1(t) := \zeta_1(t)\,t$ is an increasing function on the interval $[0, \frac{1}{K_1})$. Furthermore, $h_1(0) = 0$ and $h_1(\rho_1) = 1$. That is,
$$0 \le \zeta_1(t)\,t < 1 \quad \forall\, t \in [0, \rho_1).$$
Define $\zeta_2 : [0, \frac{1}{K_1}) \to \mathbb{R}$ as
$$\zeta_2(t) = \frac{K_1}{2}\big(1 + \zeta_1(t)\,t\big)t,$$
and let
$$h_2(t) = \zeta_2(t) - 1.$$
Clearly, since $h_2(0) = -1$ and $h_2(t) \to \infty$ as $t \to \frac{1}{K_1}^-$, the intermediate value theorem guarantees a smallest positive root of $h_2$, say $\rho_2$, in the interval $(0, \frac{1}{K_1})$. So,
$$0 \le \zeta_2(t) < 1 \quad \forall\, t \in [0, \rho_2).$$
Consider $\zeta_3 : [0, \rho_2) \to \mathbb{R}$,
$$\zeta_3(t) = \frac{1}{2(1 - \zeta_2(t))}\Big(M\zeta_1(t) + \frac{K_2}{12}\Big(1 + \frac{K_1}{2}t\Big)\Big), \qquad (10)$$
and let
$$h_3(t) = \zeta_3(t)\,t^2 - 1.$$
Note that $h_3(0) = -1$ and $h_3(t) \to \infty$ as $t \to \rho_2^-$. The intermediate value theorem guarantees a smallest positive root $\rho_3$ in $(0, \rho_2)$ such that
$$0 \le \zeta_3(t)\,t^2 < 1 \quad \forall\, t \in [0, \rho_3).$$
Let
$$R = \min\{\rho_1, \rho_3, 1\}; \qquad (12)$$
then
$$0 \le \zeta_1(t)\,t < 1, \quad 0 \le \zeta_2(t) < 1 \quad \text{and} \quad 0 \le \zeta_3(t)\,t^2 < 1 \quad \forall\, t \in [0, R). \qquad (13)$$
Furthermore, one can see that $\zeta_3$ is an increasing function on $[0, \rho_2)$, and hence
$$\zeta_3(t) \le \zeta_3(R) \quad \forall\, t \in [0, R).$$
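The radii $\rho_1$, $\rho_2$, $\rho_3$, and $R$ can be computed numerically as the smallest positive roots of the functions above. The sketch below (our reconstruction, using SciPy's `brentq` root finder) reproduces the Example 1 row of Table 1, for which $K_1 = K_2 = M = 2$:

```python
from scipy.optimize import brentq

K1, K2, M = 2.0, 2.0, 2.0   # constants of Example 1 with r = 1

zeta1 = lambda t: K1 / (2.0 * (1.0 - K1 * t))
zeta2 = lambda t: 0.5 * K1 * (1.0 + zeta1(t) * t) * t
zeta3 = lambda t: (M * zeta1(t) + K2 / 12.0 * (1.0 + 0.5 * K1 * t)) \
                  / (2.0 * (1.0 - zeta2(t)))

rho1 = 2.0 / (3.0 * K1)
# smallest positive roots of h2(t) = zeta2(t) - 1 and h3(t) = zeta3(t) t^2 - 1
rho2 = brentq(lambda t: zeta2(t) - 1.0, 1e-9, 1.0 / K1 - 1e-9)
rho3 = brentq(lambda t: zeta3(t) * t**2 - 1.0, 1e-9, rho2 - 1e-9)
R = min(rho1, rho3, 1.0)
print(rho1, rho2, rho3, R)   # approx. 0.3333, 0.38197, 0.33196, 0.33196
```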
Theorem 1.
Let Assumption 1 hold and let $R$ be as in (12). Then the sequence $\{x_n\}$ defined by (2) with $x_0 \in B(x^*, R) \setminus \{x^*\}$ converges to $x^*$ in such a way that
$$\epsilon_{x_{n+1}} \le \zeta_3(R)\,\epsilon_{x_n}^3, \qquad (15)$$
where $\zeta_3$ is as defined in (10).
Proof. 
First, we show that $F'(x_0)^{-1}$ exists and is bounded. Using Assumption 1 and (12),
$$\|A_*^{-1}(F'(x_0) - A_*)\| \le K_1\,\epsilon_{x_0} \le K_1 R < 1.$$
Hence, by the Banach lemma on invertible operators [22], $F'(x_0)^{-1}$ exists and
$$\|F'(x_0)^{-1}A_*\| \le \frac{1}{1 - K_1\epsilon_{x_0}}. \qquad (16)$$
From (2), we have
$$y_0 - x^* = x_0 - x^* - F'(x_0)^{-1}\big(F(x_0) - F(x^*)\big) = F'(x_0)^{-1}A_*\,A_*^{-1}\int_0^1 \big(F'(x_0) - F'(x^* + t(x_0 - x^*))\big)\,dt\,(x_0 - x^*);$$
hence, by Assumption 1 and Equations (13) and (16),
$$\epsilon_{y_0} \le \|F'(x_0)^{-1}A_*\|\int_0^1 K_1\,\|x_0 - x^* - t(x_0 - x^*)\|\,dt\;\epsilon_{x_0} \le \frac{K_1}{2(1 - K_1\epsilon_{x_0})}\,\epsilon_{x_0}^2 = \zeta_1(\epsilon_{x_0})\,\epsilon_{x_0}^2 < \epsilon_{x_0} < R, \qquad (17)$$
so the iterate $y_0 \in B(x^*, R)$. Next, we employ the Banach lemma on invertible operators [22], Equation (13), and Assumption 1 to show that $\Delta_0^{-1}$ exists and is bounded:
$$\|(2A_*)^{-1}(\Delta_0 - 2A_*)\| \le \frac{1}{2}\big\|A_*^{-1}\big(F'(x_0) - A_* + F'(y_0) - A_*\big)\big\| \le \frac{1}{2}\big(K_1\epsilon_{x_0} + K_1\epsilon_{y_0}\big) \le \frac{K_1}{2}\big(1 + \zeta_1(\epsilon_{x_0})\epsilon_{x_0}\big)\epsilon_{x_0} = \zeta_2(\epsilon_{x_0}) < 1.$$
So, $\Delta_0^{-1}$ exists and
$$\|\Delta_0^{-1}A_*\| \le \frac{1}{2(1 - \zeta_2(\epsilon_{x_0}))}.$$
From (2),
$$x_1 - x^* = x_0 - x^* - 2\Delta_0^{-1}F(x_0) = \Delta_0^{-1}\Big[\int_0^1\big(F'(x_0) - F'(x^* + t(x_0 - x^*))\big)\,dt\,(x_0 - x^*) + \int_0^1\big(F'(y_0) - F'(x^* + t(x_0 - x^*))\big)\,dt\,(x_0 - x^*)\Big] = \Delta_0^{-1}\Big[\int_0^1\!\!\int_0^1 F''\big(x^* + t(x_0 - x^*) + \theta(x_0 - x^* - t(x_0 - x^*))\big)\,d\theta\,(x_0 - x^* - t(x_0 - x^*))\,dt\,(x_0 - x^*) + \int_0^1\!\!\int_0^1 F''\big(x^* + t(x_0 - x^*) + \theta(y_0 - x^* - t(x_0 - x^*))\big)\,d\theta\,(y_0 - x^* - t(x_0 - x^*))\,dt\,(x_0 - x^*)\Big].$$
Let $\theta_{x_0} = \theta(1 - t)(x_0 - x^*)$ and $\theta_{y_0} = \theta(y_0 - x^* - t(x_0 - x^*))$. Now, we split $x_1 - x^*$ as follows:
$$x_1 - x^* = \Delta_0^{-1}A_*\,[B_1 + B_2 + B_3], \qquad (19)$$
where
$$B_1 = A_*^{-1}\int_0^1\!\!\int_0^1 F''\big(x^* + t(x_0 - x^*) + \theta_{x_0}\big)\,d\theta\,(1 - 2t)(x_0 - x^*)\,dt\,(x_0 - x^*),$$
$$B_2 = A_*^{-1}\int_0^1\!\!\int_0^1 F''\big(x^* + t(x_0 - x^*) + \theta_{y_0}\big)\,d\theta\,(y_0 - x^*)\,dt\,(x_0 - x^*),$$
$$B_3 = A_*^{-1}\Big[\int_0^1\!\!\int_0^1 F''\big(x^* + t(x_0 - x^*) + \theta_{x_0}\big)\,d\theta\,t\,dt - \int_0^1\!\!\int_0^1 F''\big(x^* + t(x_0 - x^*) + \theta_{y_0}\big)\,d\theta\,t\,dt\Big](x_0 - x^*)^2.$$
Using Assumption 1, we have
$$\|B_1\| \le \sup_{t \in [0, 1]}\Big\|\int_0^1 A_*^{-1}F''\big(x^* + t(x_0 - x^*) + \theta_{x_0}\big)\,d\theta\Big\|\;\Big|\int_0^1 (1 - 2t)\,dt\Big|\;\epsilon_{x_0}^2 = 0. \qquad (20)$$
In addition, by Assumption 1 and (17), we have
$$\|B_2\| \le M\,\epsilon_{y_0}\,\epsilon_{x_0} \le M\,\zeta_1(\epsilon_{x_0})\,\epsilon_{x_0}^3, \qquad (21)$$
and
$$\|B_3\| \le K_2\int_0^1\!\!\int_0^1 t\,(t - \theta)\,\|x_0 - y_0\|\,d\theta\,dt\;\epsilon_{x_0}^2 \le \frac{K_2}{12}\,\|x_0 - y_0\|\,\epsilon_{x_0}^2 \le \frac{K_2}{12}\big(\epsilon_{x_0} + \epsilon_{y_0}\big)\epsilon_{x_0}^2 \le \frac{K_2}{12}\big(1 + \zeta_1(\epsilon_{x_0})\epsilon_{x_0}\big)\epsilon_{x_0}^3. \qquad (22)$$
So, using (19)–(22) and (13),
$$\epsilon_{x_1} \le \|\Delta_0^{-1}A_*\|\Big(M\zeta_1(\epsilon_{x_0}) + \frac{K_2}{12}\big(1 + \zeta_1(\epsilon_{x_0})\epsilon_{x_0}\big)\Big)\epsilon_{x_0}^3 \le \frac{1}{2(1 - \zeta_2(\epsilon_{x_0}))}\Big(M\zeta_1(\epsilon_{x_0}) + \frac{K_2}{12}\Big(1 + \frac{K_1}{2}\epsilon_{x_0}\Big)\Big)\epsilon_{x_0}^3 \le \zeta_3(\epsilon_{x_0})\,\epsilon_{x_0}^3 \le \zeta_3(R)\,R^2\,\epsilon_{x_0} < \epsilon_{x_0} < R. \qquad (23)$$
Hence, $x_1 \in B(x^*, R)$ and, by (23), (15) holds for $n = 1$. The proof is completed by induction, replacing $x_0$, $y_0$, $x_1$ with $x_n$, $y_n$, $x_{n+1}$, respectively. □
Next, to provide the convergence analysis of method (3), we define some functions and parameters below.
Define $\zeta_4 : [0, \frac{1}{K_1}) \to \mathbb{R}$ as
$$\zeta_4(t) = K_1\,\zeta_1(t)\,t^2$$
and $h_4 : [0, \frac{1}{K_1}) \to \mathbb{R}$ as
$$h_4(t) = \zeta_4(t) - 1.$$
Then, $h_4(0) = -1$ and $h_4(t) \to \infty$ as $t \to \frac{1}{K_1}^-$. Hence, by the intermediate value theorem, $h_4$ has a smallest zero $\rho_4 \in (0, \frac{1}{K_1})$, and
$$0 \le \zeta_4(t) < 1 \quad \forall\, t \in [0, \rho_4).$$
Define $\zeta_5 : [0, \rho_4) \to \mathbb{R}$ by
$$\zeta_5(t) = \frac{K_1}{1 - \zeta_4(t)}\Big(\zeta_1(t) + \frac{1}{2}\zeta_3(t)\,t\Big)\zeta_3(t), \qquad (26)$$
and $h_5 : [0, \rho_4) \to \mathbb{R}$ by
$$h_5(t) = \zeta_5(t)\,t^4 - 1.$$
Observe that $h_5(0) = -1$ and $h_5(t) \to \infty$ as $t \to \rho_4^-$. Using the intermediate value theorem, $h_5$ has a smallest zero $\rho_5$ in the interval $(0, \rho_4)$.
R 1 = min { R , ρ 5 } ,
then
0 ζ 4 ( t ) < 1 and 0 ζ 5 ( t ) t 4 < 1 t [ 0 , R 1 ) .
It is clear that ζ 5 is an increasing function in [ 0 , ρ 4 ) . In particular,
ζ 5 ( t ) ζ 5 ( R 1 ) t [ 0 , R 1 ) .
Theorem 2.
Let Assumption 1 hold, and let the sequence $\{x_n\}$ be defined by (3) with $x_0 \in B(x^*, R_1) \setminus \{x^*\}$, where $R_1$ is as in (27). Then, the sequence $\{x_n\}$ converges to $x^*$ and
$$\epsilon_{x_{n+1}} \le \zeta_5(R_1)\,\epsilon_{x_n}^5,$$
where $\zeta_5$ is as defined in (26).
Proof. 
We use induction to prove the theorem. Clearly, one can mimic the proof of Theorem 1 to obtain
$$\epsilon_{z_0} \le \zeta_3(\epsilon_{x_0})\,\epsilon_{x_0}^3. \qquad (28)$$
Now, we show that $F'(y_0)^{-1}$ exists and is bounded, using Assumption 1 and (28):
$$\|A_*^{-1}(F'(y_0) - F'(x^*))\| \le K_1\,\epsilon_{y_0} \le K_1\,\zeta_1(\epsilon_{x_0})\,\epsilon_{x_0}^2 = \zeta_4(\epsilon_{x_0}) \le \zeta_4(R_1) < 1.$$
Hence, $F'(y_0)^{-1}$ is invertible and
$$\|F'(y_0)^{-1}A_*\| \le \frac{1}{1 - \zeta_4(\epsilon_{x_0})}. \qquad (32)$$
From (3),
$$x_1 - x^* = z_0 - x^* - F'(y_0)^{-1}\big[F(z_0) - F(x^*)\big] = F'(y_0)^{-1}\Big[F'(y_0)(z_0 - x^*) - \int_0^1 F'(x^* + t(z_0 - x^*))(z_0 - x^*)\,dt\Big] = F'(y_0)^{-1}A_*\int_0^1 A_*^{-1}\big(F'(y_0) - F'(x^* + t(z_0 - x^*))\big)\,dt\,(z_0 - x^*).$$
Hence, by using Assumption 1, (28) and (32), we obtain
$$\epsilon_{x_1} \le \|F'(y_0)^{-1}A_*\|\int_0^1 K_1\,\|y_0 - x^* - t(z_0 - x^*)\|\,dt\;\epsilon_{z_0} \le \frac{1}{1 - \zeta_4(\epsilon_{x_0})}\int_0^1 K_1\big(\epsilon_{y_0} + t\,\epsilon_{z_0}\big)\,dt\;\epsilon_{z_0} \le \frac{K_1}{1 - \zeta_4(\epsilon_{x_0})}\Big(\epsilon_{y_0} + \frac{\epsilon_{z_0}}{2}\Big)\epsilon_{z_0} \le \frac{K_1}{1 - \zeta_4(\epsilon_{x_0})}\Big(\zeta_1(\epsilon_{x_0})\epsilon_{x_0}^2 + \frac{1}{2}\zeta_3(\epsilon_{x_0})\epsilon_{x_0}^3\Big)\zeta_3(\epsilon_{x_0})\epsilon_{x_0}^3 = \frac{K_1}{1 - \zeta_4(\epsilon_{x_0})}\Big(\zeta_1(\epsilon_{x_0}) + \frac{1}{2}\zeta_3(\epsilon_{x_0})\epsilon_{x_0}\Big)\zeta_3(\epsilon_{x_0})\,\epsilon_{x_0}^5 = \zeta_5(\epsilon_{x_0})\,\epsilon_{x_0}^5 < \epsilon_{x_0} < R_1. \qquad (33)$$
That is, $x_1 \in B(x^*, R_1)$, and from (33) and (29),
$$\epsilon_{x_1} \le \zeta_5(R_1)\,\epsilon_{x_0}^5.$$
The rest of the proof follows as in Theorem 1. □
To prove the convergence of method (4), we introduce some more functions and parameters. Let $\zeta_6 : [0, \rho_2) \to \mathbb{R}$ be defined as
$$\zeta_6(t) = K_1\,\zeta_3(t)\,t^3$$
and $h_6 : [0, \rho_2) \to \mathbb{R}$ as
$$h_6(t) = \zeta_6(t) - 1.$$
Then, $h_6(0) = -1$ and $h_6(t) \to \infty$ as $t \to \rho_2^-$. By the intermediate value theorem, there exists a smallest zero $\rho_6$ in the interval $(0, \rho_2)$ such that
$$0 \le \zeta_6(t) < 1 \quad \forall\, t \in [0, \rho_6).$$
Lastly, we define $\zeta_7 : [0, \rho_6) \to \mathbb{R}$ by
$$\zeta_7(t) = \frac{K_1}{2(1 - \zeta_6(t))}\,\zeta_3^2(t) \qquad (37)$$
and $h_7 : [0, \rho_6) \to \mathbb{R}$ by
$$h_7(t) = \zeta_7(t)\,t^5 - 1.$$
Then, $h_7(0) = -1$ and $h_7(t) \to \infty$ as $t \to \rho_6^-$. The intermediate value theorem gives a smallest root $\rho_7$ in $(0, \rho_6)$ such that
$$0 \le \zeta_7(t)\,t^5 < 1 \quad \forall\, t \in [0, \rho_7).$$
If
$$R_2 = \min\{R, \rho_7\}, \qquad (39)$$
then
$$0 \le \zeta_6(t) < 1 \quad \text{and} \quad 0 \le \zeta_7(t)\,t^5 < 1 \quad \forall\, t \in [0, R_2).$$
Furthermore, observe that $\zeta_7$ is an increasing function on $[0, \rho_6)$. Specifically,
$$\zeta_7(t) \le \zeta_7(R_2) \quad \forall\, t \in [0, R_2). \qquad (41)$$
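These radii are again smallest positive roots and can be computed by extending the earlier root-finding sketch (same constants $K_1$, $K_2$, $M$, functions `zeta1`–`zeta3`, values `rho2`, `R`, and `brentq`; our illustration, under the assumption $\rho_4 < \rho_2$, which holds for Examples 1–3):

```python
zeta4 = lambda t: K1 * zeta1(t) * t**2
zeta5 = lambda t: K1 / (1.0 - zeta4(t)) * (zeta1(t) + 0.5 * zeta3(t) * t) * zeta3(t)
zeta6 = lambda t: K1 * zeta3(t) * t**3
zeta7 = lambda t: K1 / (2.0 * (1.0 - zeta6(t))) * zeta3(t)**2

# smallest positive roots of h4, ..., h7
rho4 = brentq(lambda t: zeta4(t) - 1.0, 1e-9, 1.0 / K1 - 1e-9)
rho5 = brentq(lambda t: zeta5(t) * t**4 - 1.0, 1e-9, rho4 - 1e-9)
rho6 = brentq(lambda t: zeta6(t) - 1.0, 1e-9, rho2 - 1e-9)
rho7 = brentq(lambda t: zeta7(t) * t**5 - 1.0, 1e-9, rho6 - 1e-9)
R1, R2 = min(R, rho5), min(R, rho7)
```

For the constants of Example 1, this gives, e.g., $\rho_4 \approx 0.36603$ and $\rho_7 \approx 0.33207$; the computed values can be compared against Table 1.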
Theorem 3.
Let Assumption 1 hold. Then the sequence $\{x_n\}$ defined by (4) with $x_0 \in B(x^*, R_2) \setminus \{x^*\}$, where $R_2$ is as in (39), converges to $x^*$ in such a way that
$$\epsilon_{x_{n+1}} \le \zeta_7(R_2)\,\epsilon_{x_n}^6, \qquad (42)$$
where $\zeta_7$ is as in (37).
Proof. 
In extension (4), only the last step differs from extension (3), so we can repeat the earlier arguments to obtain
$$\epsilon_{z_0} \le \zeta_3(\epsilon_{x_0})\,\epsilon_{x_0}^3. \qquad (40)$$
Now, as in the previous case, we show that $F'(z_0)^{-1}$ exists, using Assumption 1 and (40):
$$\|A_*^{-1}(F'(z_0) - A_*)\| \le K_1\,\epsilon_{z_0} \le K_1\,\zeta_3(\epsilon_{x_0})\,\epsilon_{x_0}^3 = \zeta_6(\epsilon_{x_0}) \le \zeta_6(R_2) < 1.$$
Hence, $F'(z_0)^{-1}$ is invertible and
$$\|F'(z_0)^{-1}A_*\| \le \frac{1}{1 - \zeta_6(\epsilon_{x_0})}. \qquad (43)$$
From (4),
$$x_1 - x^* = z_0 - x^* - F'(z_0)^{-1}F(z_0) = F'(z_0)^{-1}\Big[F'(z_0)(z_0 - x^*) - \int_0^1 F'(x^* + t(z_0 - x^*))(z_0 - x^*)\,dt\Big] = F'(z_0)^{-1}A_*\int_0^1 A_*^{-1}\big(F'(z_0) - F'(x^* + t(z_0 - x^*))\big)\,dt\,(z_0 - x^*).$$
Hence, by using (43), Assumption 1, and (40), it follows that
$$\epsilon_{x_1} \le \|F'(z_0)^{-1}A_*\|\,\Big\|\int_0^1 A_*^{-1}\big(F'(z_0) - F'(x^* + t(z_0 - x^*))\big)\,dt\Big\|\;\epsilon_{z_0} \le \frac{1}{1 - \zeta_6(\epsilon_{x_0})}\int_0^1 K_1(1 - t)\,dt\;\epsilon_{z_0}^2 \le \frac{K_1}{2(1 - \zeta_6(\epsilon_{x_0}))}\,\zeta_3(\epsilon_{x_0})\epsilon_{x_0}^3 \cdot \zeta_3(\epsilon_{x_0})\epsilon_{x_0}^3 = \zeta_7(\epsilon_{x_0})\,\epsilon_{x_0}^6 < \epsilon_{x_0} < R_2, \qquad (45)$$
so $x_1 \in B(x^*, R_2)$. Furthermore, from (41) and (45), we have
$$\epsilon_{x_1} \le \zeta_7(R_2)\,\epsilon_{x_0}^6.$$
Hence, (42) is satisfied for n = 1 . Now, the proof follows as in Theorem 2. □
The conditions that guarantee a unique solution are given in the following lemma.
Lemma 1.
Suppose Assumption 1 holds and $x^*$ is a simple solution of the equation $F(x) = 0$. Then, $F(x) = 0$ has a unique solution $x^*$ in $E := \Omega \cap B[x^*, \bar{r}]$, provided
$$K_1\bar{r} < 2.$$
Proof. 
Let $p \in E$ be such that $F(p) = 0$. Define $J = \int_0^1 F'(x^* + u(p - x^*))\,du$. Then, by Assumption 1, we have, in turn,
$$\|A_*^{-1}(J - A_*)\| \le K_1\int_0^1 \|x^* + u(p - x^*) - x^*\|\,du \le K_1\int_0^1 u\,\|p - x^*\|\,du \le \frac{K_1\bar{r}}{2} < 1.$$
It follows that $J$ is invertible, and hence $p = x^*$ by the identity $0 = F(p) - F(x^*) = J(p - x^*)$. □

3. Illustrations and Numerical Examples

In this section, we illustrate our results using numerical examples. In the first three examples, we compute the radii of convergence. The next example compares the iterates of methods (2)–(4) with those of the corresponding methods in [16]. We also compute the ACOC for Examples 2 and 4 (the iterations in Examples 1 and 3 converge within three iterations from almost all initial points, so we have not computed the ACOC for those examples). An illustration of the basins of attraction and a representation of the number of iterations as a heatmap follow.
The values of $\rho_i$, $i \in \{1, 2, \ldots, 7\}$, for Examples 1–3 are given in Table 1, and the ACOC values for Examples 2 and 4 are given in Table 2.
Example 1.
Let $X = Y = \mathbb{R}$, $\Omega = [r, 2 - r]$ with $r \in (2 - \sqrt{2}, 1)$, and let $F : \Omega \to Y$ be defined by
$$F(x) = x^3 - r.$$
Here, $x^* = r^{1/3}$, $M = 2(2 - r)r^{-2/3}$, $K_1 = 2(2 - r)r^{-2/3}$, and $K_2 = 2r^{-2/3}$. For instance, if we take $r = 1$, from Table 1 we obtain the values of $R$, $R_1$, and $R_2$ as $0.33196$, $0.30365$, and $0.331963$, respectively.
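As a quick numerical check (our sketch, reusing the hypothetical `traub_family` routine from Section 1), Traub's method on this $F$ with $r = 1$, started inside $B(x^*, R)$, converges in a few iterations:

```python
import numpy as np

F  = lambda x: np.array([x[0]**3 - 1.0])
Fp = lambda x: np.array([[3.0 * x[0]**2]])

x = traub_family(F, Fp, x0=[1.3], order=3)   # |x0 - x*| = 0.3 < R = 0.33196
print(x)   # approx. [1.0]
```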
Example 2.
Let $X = Y = \mathbb{R}^3$, $\Omega = B[0, 1]$. Define the function $F$ on $\Omega$, for $w = (a_1, a_2, a_3)^T$, by
$$F(w) = \Big(e^{a_1} - 1,\ \frac{e - 1}{2}a_2^2 + a_2,\ a_3\Big)^T.$$
Here, $x^* = (0, 0, 0)^T$ and
$$F'(w) = \begin{pmatrix} e^{a_1} & 0 & 0 \\ 0 & (e - 1)a_2 + 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
Furthermore, $M = e$, $K_1 = e - 1$, and $K_2 = e$. Similar to the previous case, we obtain $R = 0.36315$, $R_1 = 0.33616$, and $R_2 = 0.36314$ (see Table 1).
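The map and its diagonal Jacobian translate directly into code; the following sketch (again assuming the `traub_family` routine from Section 1) runs extension (3) from a starting point inside $B(x^*, R_1)$:

```python
import numpy as np

e = np.e
F  = lambda w: np.array([np.exp(w[0]) - 1.0,
                         0.5 * (e - 1.0) * w[1]**2 + w[1],
                         w[2]])
Fp = lambda w: np.diag([np.exp(w[0]), (e - 1.0) * w[1] + 1.0, 1.0])

x = traub_family(F, Fp, x0=[0.15, 0.15, 0.15], order=5)  # ||x0|| < R1 = 0.33616
print(x)   # approx. the zero vector
```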
Example 3.
Define $F$ on $\Omega = [-1, 1]$ as
$$F(x) = \sin x,$$
with $x^* = 0$. We obtain $M = 1$, $K_1 = 1$, and $K_2 = 1$. Consequently, $R = 0.66119$, $R_1 = 0.618403$, and $R_2 = 0.66119$ (see Table 1).
In the next example, we compare the performance of the methods (2)–(4) with that of Noor–Waseem-type methods studied in [16].
Example 4.
The system of equations
$$3t_1^2 t_2 + t_2^2 = 1, \qquad t_1^4 + t_1 t_2^3 = 1$$
has solutions $(1, 0.2)$, $(0.4, 1.3)$, and $(0.9, 0.3)$ (rounded to one decimal place). The solution $(0.9, 0.3)$ is considered for approximation using methods (2)–(4) and the corresponding methods studied in [16]. We use the initial point $(2, 1)$ in our computations. Table 3, Table 4 and Table 5 provide the obtained results.
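The system and its Jacobian are easy to write down, and the iterates can be checked against the first column of Table 3 (a sketch, assuming the `traub_family` routine from Section 1):

```python
import numpy as np

F  = lambda t: np.array([3.0 * t[0]**2 * t[1] + t[1]**2 - 1.0,
                         t[0]**4 + t[0] * t[1]**3 - 1.0])
Fp = lambda t: np.array([[6.0 * t[0] * t[1],       3.0 * t[0]**2 + 2.0 * t[1]],
                         [4.0 * t[0]**3 + t[1]**3, 3.0 * t[0] * t[1]**2      ]])

x = traub_family(F, Fp, x0=[2.0, 1.0], order=3)
print(x)   # approx. (0.99277999485, 0.30644044651), cf. Table 3
```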
The next example is to compare basins of attraction for each of the discussed methods.
Example 5.
Define $F$ on $\mathbb{R}^2$ by
$$F(x, y) = (x^3 - y,\ y^3 - x),$$
with roots $r_1 = (-1, -1)$, $r_2 = (0, 0)$, and $r_3 = (1, 1)$.
The sub-figures (a), (b), (c), and (d) in Figure 1 are generated using $400 \times 400$ equally spaced grid points of the rectangular region $D = \{(x, y) : x, y \in [-2, 2]\}$ as initial points for the iterations. The points that converge to $r_1$, $r_2$, and $r_3$ are colored cyan, magenta, and yellow, respectively. The points that do not converge to any root after 50 iterations are marked black. The stopping criterion used is $\|x_n - x^*\| < 10^{-8}$, and the algorithm used is the same as in [23]. The sub-figures (e), (f), (g), and (h) in Figure 1 are generated on the same grid for the corresponding methods and show the number of iterations required to converge at each grid point; initial points that did not converge within 50 iterations are again shown in black. The technique used can be found in Ardelean [24].
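The basin and heatmap data can be reproduced along the following lines. This sketch is ours (the paper's figures were produced in MATLAB following [23,24]) and computes the data for Newton's method, i.e., panels (a) and (e):

```python
import numpy as np

F  = lambda v: np.array([v[0]**3 - v[1], v[1]**3 - v[0]])
Fp = lambda v: np.array([[3.0 * v[0]**2, -1.0], [-1.0, 3.0 * v[1]**2]])
roots = [np.array([-1.0, -1.0]), np.array([0.0, 0.0]), np.array([1.0, 1.0])]

n = 400
xs = np.linspace(-2.0, 2.0, n)
basin = -np.ones((n, n), dtype=int)    # -1 = no convergence (black); 0,1,2 = r1,r2,r3
iters = np.full((n, n), 50, dtype=int) # iteration counts for the heatmap panel

for i, a in enumerate(xs):
    for j, b in enumerate(xs):
        v = np.array([a, b])
        for k in range(50):
            try:
                v = v - np.linalg.solve(Fp(v), F(v))   # Newton step
            except np.linalg.LinAlgError:
                break                                  # singular Jacobian
            d = [np.linalg.norm(v - r) for r in roots]
            if min(d) < 1e-8:                          # stopping criterion of the paper
                basin[j, i] = int(np.argmin(d))
                iters[j, i] = k + 1
                break
# basin can now be colored cyan/magenta/yellow and iters rendered as a heatmap
```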
We used a PC with an Intel Core i7 processor running Ubuntu 22.04.1 LTS. The programs were executed in MATLAB R2022b.

4. Conclusions

Traub's method (also known as the Arithmetic-Mean Newton's method and the Weerakoon–Fernando method) and two of its extensions were studied in this paper using assumptions only on the derivatives of the operator up to order two. The theoretical parameters were verified using examples. The dynamics of the methods were also included in this study.

Author Contributions

Conceptualization, M.S.K., K.R., S.G., J.P. and I.K.A.; methodology, M.S.K., K.R., S.G., J.P. and I.K.A.; software, M.S.K., K.R., S.G., J.P. and I.K.A.; validation, M.S.K., K.R., S.G., J.P. and I.K.A.; formal analysis, M.S.K., K.R., S.G., J.P. and I.K.A.; investigation, M.S.K., K.R., S.G., J.P. and I.K.A.; resources, M.S.K., K.R., S.G., J.P. and I.K.A.; data curation, M.S.K., K.R., S.G., J.P. and I.K.A.; writing—original draft preparation, M.S.K., K.R., S.G., J.P. and I.K.A.; writing—review and editing, M.S.K., K.R., S.G., J.P. and I.K.A.; visualization, M.S.K., K.R., S.G., J.P. and I.K.A.; supervision, M.S.K., K.R., S.G., J.P. and I.K.A.; project administration, M.S.K., K.R., S.G., J.P. and I.K.A.; funding acquisition, M.S.K., K.R., S.G., J.P. and I.K.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

Santhosh George and Jidesh P are supported by SERB Project, Grant No. CRG/2021/004776. Krishnendu R thanks UGC, Govt of India, for the support. Muhammed Saeed K would like to thank National Institute of Technology Karnataka, India, for the support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Magrenán, A.A.; Argyros, I.K. A Contemporary Study of Iterative Methods: Convergence, Dynamics and Applications; Academic Press: London, UK, 2018. [Google Scholar]
  2. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Computer Science and Applied Mathematics; Academic Press: New York, NY, USA, 1970; ISBN 978-0-12-528550-6. [Google Scholar]
  3. Behl, R.; Arora, H. CMMSE: A novel scheme having seventh-order convergence for nonlinear systems. J. Comput. Appl. Math. 2022, 404, 113301. [Google Scholar] [CrossRef]
  4. Shakhno, S.M. On an iterative algorithm with superquadratic convergence for solving nonlinear operator equations. J. Comput. Appl. Math. 2009, 231, 222–235. [Google Scholar] [CrossRef] [Green Version]
  5. Traub, J.F. Iterative Methods for the Solution of Equations; Chelsea Publishing: New York, NY, USA, 1982; ISBN 978-0-8284-0312-2. [Google Scholar]
  6. Petković, M. Multipoint Methods for Solving Nonlinear Equations; Elsevier/Academic Press: Amsterdam, The Netherlands, 2013. [Google Scholar]
  7. Özban, A.Y. Some new variants of Newton’s method. Appl. Math. Lett. 2004, 17, 677–682. [Google Scholar] [CrossRef] [Green Version]
  8. Petković, L.D.; Petković, M.S. A note on some recent methods for solving nonlinear equations. Appl. Math. Comput. 2007, 185, 368–374. [Google Scholar] [CrossRef]
  9. Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
  10. Frontini, M.; Sormani, E. Some variant of Newton’s method with third-order convergence. Appl. Math. Comput. 2003, 140, 419–426. [Google Scholar] [CrossRef]
  11. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. Increasing the convergence order of an iterative method for nonlinear systems. Appl. Math. Lett. 2012, 25, 2369–2374. [Google Scholar] [CrossRef] [Green Version]
  12. Sharma, D.; Parhi, S.K. On the local convergence of modified Weerakoon’s method in Banach spaces. J. Anal. 2020, 28, 867–877. [Google Scholar] [CrossRef]
  13. Argyros, I.K.; Cho, Y.J.; George, S. Local convergence for some third-order iterative methods under weak conditions. J. Korean Math. Soc. 2016, 53, 781–793. [Google Scholar] [CrossRef] [Green Version]
  14. Nishani, H.P.S.; Weerakoon, S.; Fernando, T.G.I.; Liyanage, M. Weerakoon-Fernando Method with accelerated third-order convergence for systems of nonlinear equations. IJMMNO 2018, 8, 287. [Google Scholar] [CrossRef]
  15. Parhi, S.K.; Gupta, D.K. A sixth order method for nonlinear equations. Appl. Math. Comput. 2008, 203, 50–55. [Google Scholar] [CrossRef]
  16. George, S.; Sadan, A.R.; Jidesh, P.; Argyros, I.K. On the Order of Convergence of Noor-Waseem Method. Mathematics 2022, 10, 4544. [Google Scholar] [CrossRef]
  17. Beyer, W.A.; Ebanks, B.R.; Qualls, C.R. Convergence rates and convergence-order profiles for sequences. Acta Appl. Math. 1990, 20, 267–284. [Google Scholar] [CrossRef]
  18. Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint methods for solving nonlinear equations: A survey. Appl. Math. Comput. 2014, 226, 635–660. [Google Scholar] [CrossRef] [Green Version]
  19. Ezquerro, J.A.; Hernández, M.A. On the R-order of convergence of Newton’s method under mild differentiability conditions. J. Comput. Appl. Math. 2006, 197, 53–61. [Google Scholar] [CrossRef] [Green Version]
  20. Potra, F.A.; Pták, V. Nondiscrete Induction and Iterative Processes; Pitman: New York, NY, USA, 1984. [Google Scholar]
  21. Cordero, A.; Torregrosa, J.R. Variants of Newton’s Method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
  22. Argyros, I.K. The Theory and Applications of Iteration Methods, 2nd ed.; Engineering Series; CRC Press: Boca Raton, FL, USA; Taylor and Francis Group: Abingdon, UK, 2022. [Google Scholar]
  23. Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Drawing Dynamical and Parameters Planes of Iterative Families and Methods. Sci. World J. 2013, 2013, e780153. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  24. Ardelean, G. A comparison between iterative methods by using the basins of attraction. Appl. Math. Comput. 2011, 218, 88–95. [Google Scholar] [CrossRef]
Figure 1. Basins of attraction for Example 5. (a) Newton's Method; (b) Weerakoon Method; (c) Extension 1 (3); (d) Extension 2 (4). Heat maps for (e) Newton's Method; (f) Weerakoon Method; (g) Extension 1 (3); (h) Extension 2 (4).
Table 1. The parameters $\rho_i$ for Examples 1–3.

Example | ρ1 | ρ2 | ρ3 | ρ4 | ρ5 | ρ6 | ρ7
1 | 0.3333 | 0.38197 | 0.33196 | 0.36603 | 0.30365 | 0.34469 | 0.33207
2 | 0.3880 | 0.44459 | 0.36315 | 0.42604 | 0.33616 | 0.38418 | 0.36559
3 | 0.6667 | 0.76393 | 0.66119 | 0.73205 | 0.61840 | 0.68749 | 0.66163
Table 2. ACOC of Examples 2 and 4.

Example | Root | x0 | Traub's Method (2) | Extension (3) | Extension (4)
2 | (0, 0, 0) | (1, 0.03, 0.03) | 2.98 | 4.64 | 5.70
2 | (0, 0, 0) | (0.5, 0.5, 0.5) | 2.90 | 4.24 | 5.32
4 | (0.9, 0.3) | (2, 1) | 2.91 | 4.60 | 4.43
4 | (0.9, 0.3) | (1.3, 0.4) | 3.01 | 4.48 | 5.60
Table 3. Traub's Method (2) and the Noor–Waseem Method in [16]; $x_k = (t_{1k}, t_{2k})$.

k | Traub's Method (2) | Noor–Waseem Method in [16]
0 | (2.0000000000000000, 1.0000000000000000) | (2.0000000000000000, 1.0000000000000000)
1 | (1.02074824149820786, 0.25352907082513398) | (1.01962359355810994, 0.26538605472406479)
2 | (0.99287801967429134, 0.30629644813087153) | (0.99285365860566110, 0.30634643384624071)
3 | (0.99277999485546530, 0.30644044650526403) | (0.99277999485264400, 0.30644044650915097)
4 | (0.99277999485112322, 0.30644044651102042) | (0.99285365860566110, 0.30634643384624071)
5 | (0.99277999485112322, 0.30644044651102042) | (0.99277999485264400, 0.30644044650915097)
Table 4. The Fifth-Order Method (3) and the Noor–Waseem fifth-order extension in [16]; $x_k = (t_{1k}, t_{2k})$.

k | Fifth-Order Method (3) | Method (3) in [16]
0 | (2.0000000000000000, 1.0000000000000000) | (2.0000000000000000, 1.0000000000000000)
1 | (0.99339266265870362, 0.30563908458855637) | (0.97999747117802393, 0.31079296183420979)
2 | (0.99277999485112611, 0.30644044651101687) | (0.99252009675815366, 0.30661919359513767)
3 | (0.99277999485112322, 0.30644044651102042) | (0.99277988170910103, 0.30644055554978738)
4 | (0.99277999485112322, 0.30644044651102042) | (0.99277999485110035, 0.30644044651104612)
Table 5. The Sixth-Order Method (4) and the Noor–Waseem sixth-order extension in [16]; $x_k = (t_{1k}, t_{2k})$.

k | Sixth-Order Method (4) | Method (4) in [16]
0 | (2.0000000000000000, 1.0000000000000000) | (2.0000000000000000, 1.0000000000000000)
1 | (0.99278598580223975, 0.30643277796171902) | (1.03759994297628344, 0.26149549469920185)
2 | (0.99277999485112322, 0.30644044651102042) | (0.99619799193796287, 0.30257508692302936)
3 | (0.99277999485112322, 0.30644044651102042) | (0.99277999575683006, 0.30644044541552573)