Article

On the Convergence Order of Jarratt-Type Methods for Nonlinear Equations

by
Shobha M. Erappa
1,*,
Suma P. Bheemaiah
1,
Santhosh George
2,
Kanagaraj Karuppaiah
3 and
Ioannis K. Argyros
4
1
Department of Mathematics, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576 104, Udupi, India
2
Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, Surathkal 575 025, Mangaluru, India
3
Department of Mathematics, Srinivasa Ramanujan Centre, SASTRA Deemed to Be University, Kumbakonam 612 001, Tamil Nadu, India
4
Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
*
Author to whom correspondence should be addressed.
Axioms 2025, 14(6), 401; https://doi.org/10.3390/axioms14060401
Submission received: 15 April 2025 / Revised: 21 May 2025 / Accepted: 22 May 2025 / Published: 24 May 2025
(This article belongs to the Section Mathematical Analysis)

Abstract:
The order of convergence of Jarratt-type methods for solving nonlinear equations is determined without relying on Taylor expansion. Unlike previous studies, we rely solely on assumptions on the derivatives of the involved operator up to the second order. Because the proof presented in this paper is independent of the Taylor series expansion, the need for assumptions on higher-order derivatives of the involved operator is removed, enhancing the applicability of these methods. The applicability is broadened further by employing generalized conditions in the local convergence analysis and majorizing sequences in the semi-local analysis. This study includes numerical examples and basins of attraction for the methods.
MSC:
41A58; 41A25; 49M15; 90C56; 26A24

1. Introduction

Several real-world problems can be mathematically modelled as an equation of the form
$$G(v) = 0,$$
where $G : E \subseteq B_1 \to B_2$ is a nonlinear operator mapping between the Banach spaces $B_1$ and $B_2$, and $E$ is an open convex subset of $B_1$. One of the significant hurdles appearing in practice is determining solutions $v^*$ of (1). Iterative methods are extensively used for approximating solutions of nonlinear equations, especially when exact solutions are not explicitly obtainable. Among the most widely used quadratically convergent iterative methods is Newton's method (NM), as it converges rapidly from any sufficiently good initial guess. Even though this method provides a good convergence rate, the need to compute and invert the derivative of the operator in each iterative step limits its applicability. To overcome this, several Newton-like methods are available in the literature [1,2,3,4,5,6,7,8,9]. One such successful attempt was made by Ren et al. in [10], who provided an iterative method of order six, given in (2). Recall that a sequence $\{a_n\}$ in $B_1$ with $\lim_{n\to\infty} a_n = \alpha$ is said to be convergent of order $q > 1$ if there exists a nonzero constant $C$ such that
$$\lim_{n\to\infty} \frac{\|a_{n+1} - \alpha\|}{\|a_n - \alpha\|^q} = C.$$
Previous studies primarily used Taylor expansion to determine the order of convergence, which necessitates the existence of derivatives, mostly of higher order. An alternative involves employing the computational order of convergence (COC) [11], defined as
$$\bar\rho = \frac{\ln\big(\|a_{n+1} - \alpha\| / \|a_n - \alpha\|\big)}{\ln\big(\|a_n - \alpha\| / \|a_{n-1} - \alpha\|\big)},$$
where $a_{n-1}$, $a_n$, and $a_{n+1}$ are three consecutive iterates near the root $\alpha$, or the approximate computational order of convergence (ACOC), defined as
$$\bar\rho^* = \frac{\ln\big(\|a_{n+1} - a_n\| / \|a_n - a_{n-1}\|\big)}{\ln\big(\|a_n - a_{n-1}\| / \|a_{n-1} - a_{n-2}\|\big)},$$
where $a_{n-2}$, $a_{n-1}$, $a_n$, and $a_{n+1}$ are four consecutive iterates near the root $\alpha$.
The limitation of COC and ACOC for iterative methods lies in their susceptibility to the oscillating behavior of the approximations and to slow convergence during the early iterations [12]. As a result, COC and ACOC do not, in general, accurately reflect the actual convergence order.
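To make these definitions concrete, the following is a small self-contained sketch (our own illustration, not part of the paper) that evaluates COC and ACOC for Newton's method on a scalar equation; the helper names and the test function $f(v) = v^2 - 2$ with starting point $1.3$ are assumptions chosen for demonstration.

```python
import math

def coc(iterates, alpha):
    """Computational order of convergence (COC) from three consecutive
    iterates a_{n-1}, a_n, a_{n+1} near the known root alpha."""
    a_prev, a_curr, a_next = iterates[-3], iterates[-2], iterates[-1]
    return (math.log(abs(a_next - alpha) / abs(a_curr - alpha))
            / math.log(abs(a_curr - alpha) / abs(a_prev - alpha)))

def acoc(iterates):
    """Approximate computational order of convergence (ACOC) from four
    consecutive iterates; no knowledge of the root is required."""
    a0, a1, a2, a3 = iterates[-4:]
    return (math.log(abs(a3 - a2) / abs(a2 - a1))
            / math.log(abs(a2 - a1) / abs(a1 - a0)))

# Newton's method on f(v) = v^2 - 2 (order 2 expected).
f = lambda v: v * v - 2.0
df = lambda v: 2.0 * v
vs = [1.3]
for _ in range(5):
    vs.append(vs[-1] - f(vs[-1]) / df(vs[-1]))
```

Both estimates come out close to the expected order 2 for this quadratically convergent example, while the early iterates illustrate the oscillation issue mentioned above.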
In [10], Ren et al. considered the following iterative scheme:
$$\begin{aligned}
w_n &= v_n - \tfrac{2}{3}A'(v_n)^{-1}A(v_n),\\
z_n &= v_n - \tfrac{1}{2}\,[3A'(w_n) - A'(v_n)]^{-1}[3A'(w_n) + A'(v_n)]\,A'(v_n)^{-1}A(v_n),\\
v_{n+1} &= z_n - A'(z_n)^{-1}A(z_n), \qquad n = 0, 1, 2, \ldots,
\end{aligned}$$
for solving (1) when $B_1 = B_2 = \mathbb{R}^k$.
In [10], Taylor's expansion is used to establish sixth-order convergence, but the analysis requires conditions on the derivatives of $A$ up to the seventh order. These assumptions restrict the applicability of Method (2) to problems involving operators that are at least seven times differentiable.
In this work, we first determine the order of convergence of the scheme (see [8,13]) defined for each $n = 0, 1, 2, \ldots$ by
$$\begin{aligned}
w_n &= v_n - \tfrac{2}{3}A'(v_n)^{-1}A(v_n),\\
v_{n+1} &= v_n - \tfrac{1}{2}\,[3A'(w_n) - A'(v_n)]^{-1}[3A'(w_n) + A'(v_n)]\,A'(v_n)^{-1}A(v_n),
\end{aligned}$$
where $B_1$ and $B_2$ are Banach spaces.
Additionally, we extend the method to a fifth-order scheme, given by
$$\begin{aligned}
w_n &= v_n - \tfrac{2}{3}A'(v_n)^{-1}A(v_n),\\
z_n &= v_n - \tfrac{1}{2}\,[3A'(w_n) - A'(v_n)]^{-1}[3A'(w_n) + A'(v_n)]\,A'(v_n)^{-1}A(v_n),\\
v_{n+1} &= z_n - A'\!\left(\tfrac{3w_n - v_n}{2}\right)^{-1}A(z_n), \qquad n = 0, 1, 2, \ldots,
\end{aligned}$$
in Section 4.
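Since the three schemes differ only in their final substep, they can be sketched together in a few lines of NumPy for finite-dimensional problems. The function names `A`, `dA`, and the step-function structure below are our own conventions, not from the paper, which works in general Banach spaces.

```python
import numpy as np

def jarratt_step(A, dA, v):
    """One step of the two-substep scheme (3); returns (w_n, v_{n+1})."""
    Av, Jv = A(v), dA(v)
    d = np.linalg.solve(Jv, Av)            # A'(v_n)^{-1} A(v_n)
    w = v - (2.0 / 3.0) * d
    Jw = dA(w)
    M = 3.0 * Jw - Jv                      # 3A'(w_n) - A'(v_n)
    v_new = v - 0.5 * np.linalg.solve(M, (3.0 * Jw + Jv) @ d)
    return w, v_new

def method2_step(A, dA, v):
    """One step of the sixth-order scheme (2): scheme (3)'s substeps
    followed by a Newton correction at z_n."""
    _, z = jarratt_step(A, dA, v)
    return z - np.linalg.solve(dA(z), A(z))

def method4_step(A, dA, v):
    """One step of the fifth-order scheme (4): the Jacobian of the last
    substep is evaluated at (3w_n - v_n)/2 instead of z_n."""
    Av, Jv = A(v), dA(v)
    d = np.linalg.solve(Jv, Av)
    w = v - (2.0 / 3.0) * d
    Jw = dA(w)
    z = v - 0.5 * np.linalg.solve(3.0 * Jw - Jv, (3.0 * Jw + Jv) @ d)
    return z - np.linalg.solve(dA(0.5 * (3.0 * w - v)), A(z))
```

Scheme (4) reuses the already-computed points $w_n$ and $v_n$ in its last substep, so it avoids one fresh Jacobian location compared to scheme (2).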
In Section 2, we establish a third-order convergence for Method (3), and in Section 3, we demonstrate a sixth-order convergence for (2), relying on assumptions about the derivatives of A up to the second order. Consequently, our analysis broadens the applicability of methods (3), (2), and (4) to problems that could not be addressed using the approaches in [10,14,15,16,17].
In Section 5, we examine the constraints of our approach and propose novel strategies to overcome these limitations for both local and semi-local convergence scenarios. The convergence conditions are solely tied to the operators involved in the method for both the semi-local and local cases.
The remaining part of the paper includes the efficiency index in Section 6, a numerical demonstration in Section 7, and basins of attraction in Section 8, concluding with a summary in Section 9.

2. Convergence Order of Iterative Scheme (3)

The analysis of local convergence relies on the following assumptions:
(A1)
$A'(v^*)^{-1}$ exists and there exists $L_0 > 0$ such that, for all $v, w \in E$,
$$\|A'(v^*)^{-1}(A'(v) - A'(w))\| \le L_0\|v - w\|.$$
(A2)
There exists $L_1 > 0$ such that, for all $v, w \in E$,
$$\|A'(v^*)^{-1}(A''(v) - A''(w))\| \le L_1\|v - w\|.$$
(A3)
There exists $L_2 > 0$ such that
$$\|A'(v^*)^{-1}A''(v)\| \le L_2, \quad v \in E,$$
and
(A4)
There exists $L_3 > 0$ such that
$$\|A'(v^*)^{-1}A'(v)\| \le L_3, \quad v \in E.$$
Using the constants $L_0$, $L_1$, $L_2$, and $L_3$, we define continuous nondecreasing functions (CNF) $\Theta_1, H_1 : [0, \tfrac{1}{L_0}) \to \mathbb{R}$ as follows:
$$\Theta_1(t) = \frac{L_0}{2}\left(3\left(1 + \frac{2L_3}{3(1 - L_0 t)}\right) + 1\right)t$$
and
$$H_1(t) = \Theta_1(t) - 1.$$
Note that $H_1(0) = -1$ and $H_1(t) \to \infty$ as $t \to \tfrac{1}{L_0}^-$; hence $H_1(t) = 0$ possesses a smallest positive root in $[0, \tfrac{1}{L_0})$, denoted by $\lambda$. Define CNF $g_1, h_1 : [0, \lambda) \to \mathbb{R}$ by
g 1 ( t ) = 1 48 ( 1 Θ 1 ( t ) ) ( 1 L 0 t ) 2 × [ ( 93 ( L 1 L 0 t ) + 32 L 1 L 3 + 36 L 0 L 2 ) ( 1 L 0 t ) + 12 L 0 L 2 L 3 ]
and
$$h_1(t) = g_1(t)\,t^2 - 1.$$
As $h_1(0) = -1$ and $h_1(t) \to \infty$ as $t \to \lambda^-$, there exists a smallest root of $h_1(t) = 0$ in $[0, \lambda)$, which we call $\lambda_1$. Set
$$r = \min\left\{\lambda_1, \tfrac{1}{2L_0}, 1\right\}.$$
Then, we have
$$0 \le g_1(t)\,t^2 < 1, \quad t \in [0, r).$$
Throughout the paper, $B(v_0, \rho)$ and $B[v_0, \rho]$ denote the open and closed balls, respectively, centered at $v_0 \in B_1$ with radius $\rho > 0$.
Theorem 1. 
Assuming (A1)–(A4) hold, the sequence $\{v_n\}$ given by (3) with initial value $v_0 \in B(v^*, r)$ converges to $v^*$, and the following estimate is valid:
$$\|v_{n+1} - v^*\| \le g_1(r)\,\|v_n - v^*\|^3.$$
Proof. 
An inductive argument will be employed for the proof. As a first step, we demonstrate that the operator $3A'(w) - A'(v)$ is invertible for all $v$ and $w$ belonging to the open ball $B(v^*, r)$. Note that, by (A1), we have
$$\begin{aligned}
\|(2A'(v^*))^{-1}(3A'(w) - A'(v) - 2A'(v^*))\|
&\le \tfrac{3}{2}\|A'(v^*)^{-1}(A'(w) - A'(v^*))\| + \tfrac{1}{2}\|A'(v^*)^{-1}(A'(v) - A'(v^*))\|\\
&\le \tfrac{L_0}{2}\big(3\|w - v^*\| + \|v - v^*\|\big) \le \tfrac{3}{2}L_0 r + \tfrac{1}{2}L_0 r = 2L_0 r < 1.
\end{aligned}$$
Therefore, by applying Banach's Lemma (BL) on the invertibility of operators, $3A'(w) - A'(v)$ is invertible, and by (8) we have
$$\|(3A'(w) - A'(v))^{-1}A'(v^*)\| \le \frac{1}{2\left(1 - \tfrac{L_0}{2}\big(3\|w - v^*\| + \|v - v^*\|\big)\right)}.$$
Similarly, one can prove that
$$\|A'(v)^{-1}A'(v^*)\| \le \frac{1}{1 - L_0\|v - v^*\|}.$$
Next, by Method (3), we have
$$\begin{aligned}
v_1 - v^* &= v_0 - v^* - \tfrac{1}{2}(3A'(w_0) - A'(v_0))^{-1}(3A'(w_0) + A'(v_0))A'(v_0)^{-1}A(v_0)\\
&= (3A'(w_0) - A'(v_0))^{-1}\Big[(3A'(w_0) - A'(v_0))(v_0 - v^*) - \tfrac{1}{2}(3A'(w_0) + A'(v_0))A'(v_0)^{-1}A(v_0)\Big].
\end{aligned}$$
For convenience, let $P = (3A'(w_0) - A'(v_0))^{-1}$. In order to prove (7), we rearrange Equation (11) as follows:
v 1 v = P [ ( 3 A ( w 0 ) A ( v 0 ) ) 2 A ( v 0 ) + ( 2 I 1 2 ( 3 A ( w 0 ) + A ( v 0 ) ) A ( v 0 ) 1 ) A ( v 0 ) = P 3 0 1 0 1 A ( v + t ( v 0 v ) + θ ( w 0 v t ( v 0 v ) ) ) d θ × ( w 0 v t ( v 0 v ) ) d t ( v 0 v ) 0 1 0 1 A ( v + t ( v 0 v ) + θ ( 1 t ) ( v 0 v ) ) d t ( 1 t ) ( v 0 v ) 2 + 3 2 0 1 A ( w 0 + θ ( v 0 w 0 ) ) d θ ( v 0 w 0 ) A ( v 0 ) 1 A ( v 0 ) = P 3 0 1 0 1 A ( v + t ( v 0 v ) + θ ( w 0 v t ( v 0 v ) ) ) d θ × ( ( 1 3 t ) ( v 0 v ) 2 ) d t + 3 0 1 0 1 A ( v + t ( v 0 v ) + θ ( w 0 v t ( v 0 v ) ) ) d θ × 2 3 ( v 0 v A ( v 0 ) 1 A ( v 0 ) ) d t ( v 0 v ) ( because w 0 v = 1 3 ( v 0 v ) + 2 3 ( v 0 v A ( v 0 ) 1 A ( v 0 ) ) ) 0 1 0 1 A ( v + t ( v 0 v ) + θ ( 1 t ) ( v 0 v ) ) d t ( 1 t ) ( v 0 v ) 2 + 0 1 A ( w 0 + θ ( v 0 w 0 ) ) d θ ( A ( v 0 ) 1 A ( v 0 ) ) 2 .
Let S 1 ( θ , t ) = A ( v + t ( v 0 v ) + θ ( w 0 v t ( v 0 v ) ) ) , S 2 ( θ , t ) = A ( v + t ( v 0 v ) + θ ( 1 t ) ( v 0 v ) ) , and S 3 ( θ ) = A ( w 0 + θ ( v 0 w 0 ) ) . Then, by (12), we have
v 1 v = P 0 1 0 1 S 1 ( θ , t ) d θ ( ( 1 2 t ) ( v 0 v ) 2 ) d t + 2 0 1 0 1 S 1 ( θ , t ) d θ d t × A ( v 0 ) 1 ( A ( v 0 ) 0 1 A ( v + τ ( v 0 v ) ) d τ ) ( v 0 v ) 2 0 1 0 1 S 2 ( θ , t ) d θ d t ( v 0 v ) 2 + 0 1 0 1 ( S 2 ( θ , t ) S 1 ( θ , t ) ) t d t ( v 0 v ) 2 d θ + 0 1 S 3 ( θ ) d θ ( A ( v 0 ) 1 A ( v 0 ) ) 2 = : I 1 + I 2 + I 3 + I 4 + I 5 ,
where
I 1 = P 0 1 0 1 S 1 ( θ , t ) d θ ( ( 1 2 t ) ( v 0 v ) 2 ) d t , I 2 = 2 P 0 1 0 1 S 1 ( θ , t ) d θ d t A ( v 0 ) 1 ( A ( v 0 ) 0 1 A ( v + τ ( v 0 v ) ) d τ ) ( v 0 v ) 2 , I 3 = P 0 1 0 1 ( S 2 ( θ , t ) S 1 ( θ , t ) ) t d t ( v 0 v ) 2 d θ , I 4 = P 0 1 0 1 ( S 3 ( θ ) S 2 ( θ , t ) ) d θ d t ( v 0 v ) 2
and
I 5 = P 0 1 S 3 ( θ ) d θ [ ( A ( v 0 ) 1 0 1 A ( v + τ ( v 0 v ) ) ) 2 I ] ( v 0 v ) 2 .
Next, we estimate the norms of I 1 , I 2 , I 3 , I 4 , and I 5 . Note that
I 1 = P 0 1 0 1 S 1 ( θ , t ) d θ ( 1 2 t ) ( v 0 v ) 2 d t P A ( v ) 0 1 0 1 A ( v ) 1 [ A ( v + t ( v 0 v ) + θ ( w 0 v t ( v 0 v ) ) ) A ( v ) ] d θ | 1 2 t | v 0 v 2 d t + ( 3 A ( w 0 ) A ( v 0 ) ) 1 A ( v ) 0 1 ( 1 2 t ) ( v 0 v ) 2 d t L 1 P A ( v ) 0 1 0 1 t ( v 0 v ) + θ ( w 0 v t ( v 0 v ) ) × | 1 2 t | v 0 v 2 d θ d t L 1 P A ( v ) 0 1 3 | t | | 1 2 t | 2 v 0 v + | 1 2 t | 2 w 0 v v 0 v 2 d θ d t L 1 P A ( v ) 3 8 v 0 v 3 + 1 4 w 0 v v 0 v 2 ,
which is obtained using (A2) and (10) (with $v = v_0$). Note that $\|w_0 - v^*\| \le \|w_0 - v_0\| + \|v_0 - v^*\|$, and using (10), we have
$$\|w_0 - v_0\| = \tfrac{2}{3}\|A'(v_0)^{-1}A(v_0)\| \le \tfrac{2}{3}\,\|A'(v_0)^{-1}A'(v^*)\|\left\|\int_0^1 A'(v^*)^{-1}A'(v^* + t(v_0 - v^*))\,dt\,(v_0 - v^*)\right\| \le \frac{2L_3}{3(1 - L_0\|v_0 - v^*\|)}\|v_0 - v^*\|.$$
Therefore,
$$\|w_0 - v^*\| \le \left(1 + \frac{2L_3}{3(1 - L_0\|v_0 - v^*\|)}\right)\|v_0 - v^*\|,$$
and hence, by (9) (with $w = w_0$ and $v = v_0$), we have
$$\|P\,A'(v^*)\| \le \frac{1}{2\left(1 - \tfrac{L_0}{2}\left(3\left(1 + \frac{2L_3}{3(1 - L_0\|v_0 - v^*\|)}\right) + 1\right)\|v_0 - v^*\|\right)}.$$
Therefore, using (16) in (14), we obtain
$$\|I_1\| \le \frac{L_1\left(3 + 2\left(1 + \frac{2L_3}{3(1 - L_0\|v_0 - v^*\|)}\right)\right)}{16\left(1 - \tfrac{L_0}{2}\left(3\left(1 + \frac{2L_3}{3(1 - L_0\|v_0 - v^*\|)}\right) + 1\right)\|v_0 - v^*\|\right)}\|v_0 - v^*\|^3.$$
Next,
I 2 = 2 P A ( v ) 0 1 0 1 A ( v ) 1 S 1 ( θ , t ) d θ d t A ( v 0 ) 1 A ( v ) × 0 1 A ( v ) 1 ( A ( v 0 ) A ( v + τ ( v 0 v ) ) ) d τ ( v 0 v ) 2
and by (9), (10), (A1), and (A3), we have
I 2 L 0 L 2 2 ( 1 L 0 2 ( 3 ( 1 + 2 L 3 3 ( 1 L 0 v 0 v ) ) + 1 ) v 0 v ) ( 1 L 0 v 0 v ) v 0 v 3 .
By using (9) and (A2), we have
I 3 L 1 8 ( 1 L 0 2 ( 3 ( 1 + 2 L 3 3 ( 1 L 0 v 0 v ) ) + 1 ) v 0 v ) × ( v 0 v + w 0 v ) v 0 v 2 .
Thus, by (15) and (16), we have
I 3 L 1 8 ( 1 L 0 2 ( 3 ( 1 + 2 L 3 3 ( 1 L 0 v 0 v ) ) + 1 ) v 0 v ) × 2 + 2 L 3 3 ( 1 L 0 v 0 v ) v 0 v 3 L 1 ( 3 ( 1 L 0 v 0 v ) + L 3 ) 12 ( 1 L 0 2 ( 3 ( 1 + 2 L 3 3 ( 1 L 0 v 0 v ) ) + 1 ) v 0 v ) ( 1 L 0 v 0 v ) v 0 v 3 .
Similarly, we have
I 4 = P A ( v ) 0 1 0 1 A ( v ) 1 ( S 3 ( θ ) S 2 ( θ , t ) ) d θ d t ( v 0 v ) 2 L 1 2 ( 1 L 0 2 ( 3 ( 1 + 2 L 3 3 ( 1 L 0 v 0 v ) ) + 1 ) v 0 v ) × ( 3 4 v 0 v + w 0 v + 1 2 v 0 w 0 ) v 0 v 2 .
So, by using (15) and (16), we have
I 4 L 1 2 ( 1 L 0 2 ( 3 ( 1 + 2 L 3 3 ( 1 L 0 v 0 v ) ) + 1 ) v 0 v ) × 5 4 + 3 2 ( 1 + 2 L 3 3 ( 1 L 0 v 0 v ) ) v 0 v 3 = L 1 ( 11 11 L 0 v 0 v + 4 L 3 ) 8 ( 1 L 0 2 ( 3 ( 1 + 2 L 3 3 ( 1 L 0 v 0 v ) ) + 1 ) v 0 v ) ( 1 L 0 v 0 v ) v 0 v 3 .
Next, we shall obtain an estimate for I 5 . Observe that
I 5 = P A ( v ) 0 1 A ( v ) 1 S 3 ( θ ) d θ × A ( v 0 ) 1 0 1 A ( v + τ ( v 0 v ) ) d τ 2 I ( v 0 v ) 2 P A ( v ) A ( v ) 1 S 3 ( θ ) A ( v 0 ) 1 A ( v ) × 0 1 A ( v ) 1 ( A ( v + τ ( v 0 v ) ) A ( v 0 ) ) d τ + 0 1 A ( v ) 1 A ( v + τ ( v 0 v ) ) d τ A ( v 0 ) 1 A ( v ) × 0 1 A ( v ) 1 ( A ( v + τ ( v 0 v ) ) A ( v 0 ) ) d τ v 0 v 2 .
Therefore, by (9) and (A1)–(A4), we have
I 5 1 4 ( 1 L 0 2 ( 3 ( 1 + 2 L 3 3 ( 1 L 0 v 0 v ) ) + 1 ) v 0 v ) ( 1 L 0 v 0 v ) 2 × L 2 L 0 ( 1 L 0 v 0 v + L 3 ) v 0 v 3 .
Thus, from (13)–(23), we have
$$\|v_1 - v^*\| \le g_1(\|v_0 - v^*\|)\,\|v_0 - v^*\|^3.$$
Therefore, the iterate $v_1 \in B(v^*, r)$, because $g_1(\|v_0 - v^*\|)\|v_0 - v^*\|^3 \le g_1(r)\,r^2\,\|v_0 - v^*\| \le \|v_0 - v^*\| \le r$.
Simply replace v 0 , w 0 , and v 1 in the preceding arguments with v n , w n , and v n + 1 to complete the induction for (7). □
Theorem 2. 
The method specified in Equation (3) exhibits a convergence order of 3.
Proof. 
The argument used in the proof is similar to that of Theorem 3 in [18]; however, we include it here for completeness. Let $e_n = \|v_n - v^*\|$ and let $q$ be the maximal value for which a constant $C > 0$ can be found such that
$$\lim_{n\to\infty} \frac{e_{n+1}}{e_n^q} = C.$$
Then, since $e_n < r < 1$, it follows from (5) that, for sufficiently large $n$, we have
$$e_{n+1} = g_1(r)\,e_n^3\,[1 + a_n],$$
with $a_n$ converging to zero as $n$ tends to infinity. So, by (24) and (25), we obtain
$$\frac{e_{n+1}}{e_n^3} \longrightarrow g_1(r).$$
Thus, by (24), we have $q = 3$ and $C = g_1(r)$; that is, the convergence order is $q = 3$. □

3. Analysis of Convergence Order of (2)

For the analysis, we require some more CNFs. Let $\Theta_2, H_2 : [0, \lambda) \to \mathbb{R}$ be defined by
$$\Theta_2(t) = L_0\,g_1(t)\,t^3$$
and
$$H_2(t) = \Theta_2(t) - 1.$$
Given that $H_2(0) = -1$ and $H_2(t) \to \infty$ as $t \to \lambda^-$, we can conclude that the equation $H_2(t) = 0$ possesses a smallest positive solution within $[0, \lambda)$, denoted $\lambda_2$. Let $g_2, h_2 : [0, \lambda_2) \to \mathbb{R}$ be CNF defined by
$$g_2(t) = \frac{L_0\,g_1(t)^2}{2\big(1 - L_0\,g_1(t)\,t^3\big)}$$
and
$$h_2(t) = g_2(t)\,t^5 - 1.$$
Similarly, $h_2(0) = -1$ and $h_2(t) \to \infty$ as $t \to \lambda_2^-$. Therefore, $h_2(t) = 0$ possesses a smallest positive solution in $[0, \lambda_2)$, denoted by $\lambda_3$. Let
$$R = \min\left\{\lambda_3, \tfrac{1}{2L_0}, 1\right\}.$$
Then,
$$0 \le g_2(t)\,t^5 < 1, \quad t \in [0, R].$$
Theorem 3. 
Assuming (A1)–(A4) hold, the sequence $\{v_n\}$ given by (2) with initial value $v_0 \in B(v^*, R)$ converges to $v^*$, and the following estimate is valid:
$$\|v_{n+1} - v^*\| \le g_2(R)\,\|v_n - v^*\|^6.$$
Proof. 
Adopting the same proof strategy as in Theorem 1, we find that
$$\|z_n - v^*\| \le g_1(\|v_n - v^*\|)\,\|v_n - v^*\|^3.$$
Note that, by (10) and (A1),
$$\begin{aligned}
\|v_{n+1} - v^*\| &= \|z_n - v^* - A'(z_n)^{-1}A(z_n)\| \le \frac{L_0}{2(1 - L_0\|z_n - v^*\|)}\|z_n - v^*\|^2\\
&\le \frac{L_0\,g_1(\|v_n - v^*\|)^2}{2\big(1 - L_0\,g_1(\|v_n - v^*\|)\|v_n - v^*\|^3\big)}\|v_n - v^*\|^6 \le g_2(R)\,\|v_n - v^*\|^6.
\end{aligned}$$
Now, since $g_2(R)\|v_n - v^*\|^6 \le g_2(R)\,R^5\,\|v_n - v^*\| \le \|v_n - v^*\| \le R$, the iterate $v_{n+1} \in B(v^*, R)$. □
Theorem 4. 
The method outlined in (2) exhibits a sixth order of convergence.
Proof. 
Employ a proof strategy analogous to that used for Theorem 2. □

4. Analysis of Convergence Order of (4)

To analyze the convergence order of Method (4), we require some more CNFs, as in the previous sections. Let $g_3, h_3 : [0, \tfrac{\sqrt{3} - 1}{L_0}) \to \mathbb{R}$ be CNFs defined by
$$g_3(t) = \frac{L_0}{2\left(1 - \dfrac{L_0^2 t^2}{2(1 - L_0 t)}\right)}\left(\frac{L_0}{1 - L_0 t} + g_1(t)\,t\right)g_1(t)$$
and
$$h_3(t) = g_3(t)\,t^4 - 1.$$
Then, $h_3(0) = -1$ and $h_3(t) \to \infty$ as $t \to \tfrac{\sqrt{3} - 1}{L_0}^-$. Therefore, $h_3(t) = 0$ possesses a smallest positive solution in $[0, \tfrac{\sqrt{3} - 1}{L_0})$, which we call $\lambda_4$. Let
$$R_1 = \min\left\{\lambda_4, \tfrac{1}{2L_0}, 1\right\}.$$
Then, for all $t \in [0, R_1]$,
$$0 \le g_3(t)\,t^4 < 1.$$
Theorem 5. 
Assuming (A1)–(A4) hold, the sequence $\{v_n\}$ given by (4) with initial value $v_0 \in B(v^*, R_1)$ converges to $v^*$, and the following estimate is valid:
$$\|v_{n+1} - v^*\| \le g_3(R_1)\,\|v_n - v^*\|^5.$$
Proof. 
In imitation of the proof presented for Theorem 1, we obtain
$$\|z_n - v^*\| \le g_1(\|v_n - v^*\|)\,\|v_n - v^*\|^3.$$
Note that, by (10) and (A1),
$$\begin{aligned}
\|v_{n+1} - v^*\| &= \left\|z_n - v^* - A'\!\left(\tfrac{3w_n - v_n}{2}\right)^{-1}A(z_n)\right\|\\
&\le \frac{L_0}{1 - L_0\left\|\tfrac{3w_n - v_n}{2} - v^*\right\|}\left(\left\|\tfrac{3w_n - v_n}{2} - v^*\right\| + \tfrac{1}{2}\|z_n - v^*\|\right)\|z_n - v^*\|\\
&\le \frac{L_0}{2\big(1 - L_0\|v_n - v^* - A'(v_n)^{-1}A(v_n)\|\big)}\Big(2\|v_n - v^* - A'(v_n)^{-1}A(v_n)\| + \|z_n - v^*\|\Big)\|z_n - v^*\|\\
&\le g_3(R_1)\,\|v_n - v^*\|^5,
\end{aligned}$$
where we used the identity $\tfrac{3w_n - v_n}{2} - v^* = v_n - v^* - A'(v_n)^{-1}A(v_n)$. Now, since $g_3(R_1)\|v_n - v^*\|^5 \le g_3(R_1)\,R_1^4\,\|v_n - v^*\| \le \|v_n - v^*\| \le R_1$, the iterate $v_{n+1} \in B(v^*, R_1)$.
Theorem 6. 
The method defined by (4) exhibits a convergence order of 5 .
Proof. 
Resembles the proof given for Theorem 2. □
The subsequent result addresses the uniqueness property of the solutions derived from Methods (3), (2), and (4).
Theorem 7. 
Suppose Assumption (A1) holds and the equation $A(v) = 0$ has a simple solution $v^*$. Then, $v^*$ is the only solution of the equation $A(v) = 0$ in the set $E_1 = E \cap B[v^*, \rho]$, provided that
$$L_0 < \frac{2}{\rho}.$$
Proof. 
Suppose $c \in E_1$ is such that $A(c) = 0$. Define the operator $N := \int_0^1 A'(v^* + \gamma(c - v^*))\,d\gamma$. Then, by Assumption (A1) and (34), we have
$$\|A'(v^*)^{-1}(N - A'(v^*))\| \le L_0\int_0^1 \|v^* + \gamma(c - v^*) - v^*\|\,d\gamma \le L_0\int_0^1 \gamma\,\|v^* - c\|\,d\gamma \le \frac{L_0\,\rho}{2} < 1.$$
So, by BL, $N$ is invertible, and hence we obtain $c = v^*$ from the identity $0 = A(c) - A(v^*) = N(c - v^*)$.

5. Convergence Under Generalized Conditions

The applicability of Method (3) and Method (4) can be extended. Notice that the second Condition (A2) can easily be violated, even for simple scalar functions. Define the function
$$A(t) = \begin{cases} t^2\log t + 5t^5 - 5t^4, & t \neq 0,\\ 0, & t = 0. \end{cases}$$
Since $v^* = 1$ and $A''(t)$ is unbounded as $t \to 0$, Condition (A2) is violated in any neighborhood containing 0 and 1. This necessitates a convergence analysis under generalized conditions on the operators inherent to the methods.
First, the local convergence is considered under some conditions. Set $T = [0, +\infty)$.
Assume the following:
(H1)
Consider a CNF $\psi_0 : T \to T$ for which the smallest positive solution of $\psi_0(t) - 1 = 0$ is $\rho_0$. Let $T_0$ be the interval $[0, \rho_0)$.
(H2)
Let $\rho_1 \in T_0 \setminus \{0\}$ be the smallest positive solution (SPS) of $\sigma_1(t) - 1 = 0$, where the function $\sigma_1 : T_0 \to T$ is given by
$$\sigma_1(t) = \frac{\displaystyle\int_0^1 \psi((1 - \theta)t)\,d\theta + \frac{1}{3}\left(1 + \int_0^1 \psi_0(\theta t)\,d\theta\right)}{1 - \psi_0(t)},$$
for some CNF $\psi : T_0 \to T$.
(H3)
Let $p(t) - 1 = 0$ have an SPS $\rho_p \in T_0 \setminus \{0\}$, where $p : T_0 \to T$ is given by
$$p(t) = \tfrac{3}{2}\,\psi\big((\sigma_1(t) + 1)t\big) + \psi_0(t).$$
Let T 1 = [ 0 , ρ p ) .
(H4)
The equation $\sigma_2(t) - 1 = 0$ has an SPS denoted by $\rho_2 \in T_1 \setminus \{0\}$, where $\sigma_2 : T_1 \to T$ is given by
$$\sigma_2(t) = \frac{\displaystyle\int_0^1 \psi((1 - \theta)t)\,d\theta}{1 - \psi_0(t)} + \frac{3\,\bar\psi(t)\left(1 + \displaystyle\int_0^1 \psi_0(\theta t)\,d\theta\right)}{4\,(1 - p(t))(1 - \psi_0(t))},$$
where
$$\bar\psi(t) = \min\big\{\psi\big((1 + \sigma_1(t))t\big),\ \psi_0(\sigma_1(t)t) + \psi_0(t)\big\}.$$
(H5)
The equation $q(t) - 1 = 0$ has an SPS denoted by $\rho_q \in T_1 \setminus \{0\}$, where $q : T_1 \to T$ is given by
$$q(t) = \psi_0\!\left(\frac{(3\sigma_1(t) + 1)t}{2}\right).$$
Let T 2 = [ 0 , ρ q ) .
(H6)
The equation $\sigma_3(t) - 1 = 0$ has an SPS denoted by $\rho_3$, where $\sigma_3 : T_2 \to T$ is given by
$$\sigma_3(t) = \frac{\displaystyle\int_0^1 \psi((1 - \theta)\sigma_2(t)t)\,d\theta}{1 - \psi_0(\sigma_2(t)t)} + \frac{3\,\bar{\bar\psi}(t)\left(1 + \displaystyle\int_0^1 \psi_0(\theta\sigma_2(t)t)\,d\theta\right)}{2\,(1 - q(t))(1 - \psi_0(\sigma_2(t)t))},$$
where
$$\bar{\bar\psi}(t) = \frac{q(t) + \psi_0(\sigma_2(t)t)}{1 - q(t)}.$$
Let
ρ = min { ρ i } , i = 1 , 2 , 3 .
The developed functions ψ 0 and ψ are related to the operators in Method (4).
(H7)
There exists an invertible linear operator $L$ and a solution $v^* \in E$ of the equation $A(v) = 0$ such that, for each $v \in E$,
$$\|L^{-1}(A'(v) - L)\| \le \psi_0(\|v - v^*\|).$$
Notice that, under Condition (H1) and (35),
$$\|L^{-1}(A'(v^*) - L)\| \le \psi_0(0) < 1.$$
Thus, $A'(v^*)$ is invertible. Let $E_1 = E \cap B(v^*, \rho_0)$.
(H8)
$\|L^{-1}(A'(w) - A'(v))\| \le \psi(\|w - v\|)$ for each $v, w \in E_1$
and
(H9)
B [ v , ρ ] E .
The main local analysis for Method (4) follows in the next result.
Theorem 8. 
Let Conditions (H1)–(H9) hold. Then, the following assertions are satisfied, provided that $v_0 \in B(v^*, \rho) \setminus \{v^*\}$:
$$\{v_n\} \subset B(v^*, \rho),$$
$$\|w_n - v^*\| \le \sigma_1(\|v_n - v^*\|)\,\|v_n - v^*\| \le \|v_n - v^*\| < \rho,$$
$$\|z_n - v^*\| \le \sigma_2(\|v_n - v^*\|)\,\|v_n - v^*\| \le \|v_n - v^*\|,$$
$$\|v_{n+1} - v^*\| \le \sigma_3(\|v_n - v^*\|)\,\|v_n - v^*\| \le \|v_n - v^*\|,$$
and $\lim_{n\to\infty} v_n = v^*$, where the functions $\sigma_i$ are as defined above and the radius $\rho$ is given by Formula (35).
Proof. 
Let $T' = [0, \rho)$. Then, for each $t \in T'$,
$$0 \le \psi_0(t) < 1,$$
$$0 \le p(t) < 1,$$
$$0 \le q(t) < 1,$$
and
$$0 \le \sigma_i(t) < 1.$$
The assertions given in (36)–(39) are shown by induction. Let $u \in B(v^*, \rho)$ be arbitrary. Conditions (H1) and (H7) and Formula (35) give
$$\|L^{-1}(A'(u) - L)\| \le \psi_0(\|u - v^*\|) \le \psi_0(\rho) < 1.$$
Thus, $A'(u)$ is invertible:
$$\|A'(u)^{-1}L\| \le \frac{1}{1 - \psi_0(\|u - v^*\|)}.$$
Using (35), (43) (for $i = 1$), (H8), and (44),
$$\|w_0 - v^*\| \le \frac{\displaystyle\int_0^1 \psi((1 - \theta)\|v_0 - v^*\|)\,d\theta\,\|v_0 - v^*\| + \frac{1}{3}\left(1 + \int_0^1 \psi_0(\theta\|v_0 - v^*\|)\,d\theta\right)\|v_0 - v^*\|}{1 - \psi_0(\|v_0 - v^*\|)} \le \sigma_1(\|v_0 - v^*\|)\,\|v_0 - v^*\| \le \|v_0 - v^*\| < \rho.$$
Thus, the iterate $w_0 \in E$ and assertion (37) holds for $n = 0$.
The following estimate establishes the invertibility of the linear operator $3A'(w_0) - A'(v_0)$ and hence the existence of the iterate $z_0$ in the second substep of Method (4):
$$\|(2L)^{-1}(3A'(w_0) - A'(v_0) - 2L)\| \le \tfrac{3}{2}\|L^{-1}(A'(w_0) - A'(v_0))\| + \|L^{-1}(A'(v_0) - L)\| \le \tfrac{1}{2}\big(3\psi(\|w_0 - v_0\|) + 2\psi_0(\|v_0 - v^*\|)\big) \le p(\|v_0 - v^*\|) < 1,$$
where we used Conditions (H3) and (H7) and Formulas (35), (41), and (36). Hence, by (46),
$$\|(3A'(w_0) - A'(v_0))^{-1}L\| \le \frac{1}{2\big(1 - p(\|v_0 - v^*\|)\big)}.$$
Moreover, the second substep gives
$$z_0 - v^* = v_0 - v^* - A'(v_0)^{-1}A(v_0) + \Big[I - \tfrac{1}{2}(3A'(w_0) - A'(v_0))^{-1}(3A'(w_0) + A'(v_0))\Big]A'(v_0)^{-1}A(v_0).$$
It follows from (35), (43) (for $i = 2$), (44), (45), (47), and (48) that
$$\|z_0 - v^*\| \le \sigma_2(\|v_0 - v^*\|)\,\|v_0 - v^*\| \le \|v_0 - v^*\|.$$
Thus, the iterate $z_0 \in E$ and assertion (38) holds for $n = 0$. Next, the invertibility of the linear operator $A'\!\left(\tfrac{3w_0 - v_0}{2}\right)$ establishes the existence of the iterate $v_1$ as follows:
$$\left\|L^{-1}\left(A'\!\left(\tfrac{3w_0 - v_0}{2}\right) - L\right)\right\| \le \psi_0\!\left(\left\|\tfrac{3w_0 - v_0}{2} - v^*\right\|\right) \le \psi_0\!\left(\frac{3\|w_0 - v^*\| + \|v_0 - v^*\|}{2}\right) \le q(\|v_0 - v^*\|) < 1 \quad (\text{by (35) and (42)}),$$
so
$$\left\|A'\!\left(\tfrac{3w_0 - v_0}{2}\right)^{-1}L\right\| \le \frac{1}{1 - q(\|v_0 - v^*\|)}.$$
Then, the last substep of Method (4) gives, in turn,
$$v_1 - v^* = z_0 - v^* - A'(z_0)^{-1}A(z_0) + A'(z_0)^{-1}\left[A'\!\left(\tfrac{3w_0 - v_0}{2}\right) - A'(z_0)\right]A'\!\left(\tfrac{3w_0 - v_0}{2}\right)^{-1}A(z_0).$$
Using (35), (H8), (43) (for $i = 3$), (49), (50), and (51),
$$\|v_1 - v^*\| \le \left[\frac{\displaystyle\int_0^1 \psi((1 - \theta)\|z_0 - v^*\|)\,d\theta}{1 - \psi_0(\|z_0 - v^*\|)} + \frac{\bar{\bar\psi}(\|v_0 - v^*\|)\left(1 + \displaystyle\int_0^1 \psi_0(\theta\|z_0 - v^*\|)\,d\theta\right)}{1 - \psi_0(\|z_0 - v^*\|)}\right]\|z_0 - v^*\| \le \sigma_3(\|v_0 - v^*\|)\,\|v_0 - v^*\| \le \|v_0 - v^*\|.$$
Hence, the iterate $v_1 \in E$ and assertion (39) holds for $n = 0$. The induction is completed by replacing $v_0$, $w_0$, $z_0$, and $v_1$ with $v_k$, $w_k$, $z_k$, and $v_{k+1}$ in the preceding calculations. Finally, from the estimate
$$\|v_{k+1} - v^*\| \le c\,\|v_k - v^*\| < \rho,$$
where $c = \sigma_3(\|v_0 - v^*\|) \in [0, 1)$, it follows that $\lim_{k\to\infty} v_k = v^*$ and the iterate $v_{k+1} \in E$.
The isolation of the solution $v^*$ is discussed in the proposition presented below.
Proposition 1. 
Suppose there exists a solution $\bar v \in B(v^*, \rho_4)$ of the equation $A(v) = 0$ for some $\rho_4 > 0$, Condition (H7) holds for the ball $B(v^*, \rho_4)$, and there exists $\rho_5 \ge \rho_4$ such that
$$\int_0^1 \psi_0(\theta\rho_5)\,d\theta < 1.$$
Let $E_2 = E \cap B[v^*, \rho_5]$. Then, the equation $A(v) = 0$ is uniquely solvable by $v^*$ in the region $E_2$.
Proof. 
Define the linear operator $L_1 = \int_0^1 A'(v^* + \theta(\bar v - v^*))\,d\theta$. Then, by Condition (H7) for the ball $B(v^*, \rho_4)$ and (54),
$$\|L^{-1}(L_1 - L)\| \le \int_0^1 \psi_0(\theta\|\bar v - v^*\|)\,d\theta \le \int_0^1 \psi_0(\theta\rho_5)\,d\theta < 1.$$
Hence, $\bar v = v^*$ follows from the identity $\bar v - v^* = L_1^{-1}(A(\bar v) - A(v^*)) = L_1^{-1}(0) = 0$.
Remark 1. 
(1) A possible choice for $L$ is $A'(v^*)$. In practice, $L$ should be chosen to tighten the function $\psi_0$. Notice also that (H7) does not necessarily imply that $v^*$ is a simple solution or that $A$ is differentiable at $v^*$.
(2) The results for Method (3) are obtained by restriction to the first two substeps of Method (4).
An analogous approach is followed in the semi-local analysis but the role of v is exchanged for v 0 and those of functions ψ 0 and ψ for φ 0 and φ , respectively, which are developed below.
Suppose
(e1)
There exists a CNF $\varphi_0 : T \to T$ such that $\varphi_0(t) - 1 = 0$ has an SPS denoted by $\rho_6 \in T \setminus \{0\}$.
Set $T_2 = [0, \rho_6)$. Let $\varphi : T_2 \to T$ be a CNF. Define the sequence $\{\alpha_n\}$ by $\alpha_0 = 0$, some $\beta_0 \ge 0$, and, for each $n = 0, 1, 2, \ldots$,
$$\begin{aligned}
\tilde p_n &= \tfrac{3}{2}\varphi(\beta_n - \alpha_n) + \varphi_0(\alpha_n),\\
\gamma_n &= \beta_n + \frac{\big(3\varphi(\beta_n - \alpha_n) + 4(1 + \varphi_0(\alpha_n))\big)(\beta_n - \alpha_n)}{8(1 - \tilde p_n)},\\
\tilde\mu_n &= \left(1 + \int_0^1 \varphi_0(\alpha_n + \theta(\gamma_n - \alpha_n))\,d\theta\right)(\gamma_n - \alpha_n) + \tfrac{3}{2}(1 + \varphi_0(\alpha_n))(\beta_n - \alpha_n),\\
\tilde q_n &= \varphi_0\!\left(\frac{3\beta_n - \alpha_n}{2}\right),\\
\alpha_{n+1} &= \gamma_n + \frac{\tilde\mu_n}{1 - \tilde q_n},\\
\xi_{n+1} &= \left(1 + \int_0^1 \varphi_0(\alpha_n + \theta(\alpha_{n+1} - \alpha_n))\,d\theta\right)(\alpha_{n+1} - \alpha_n) + \tfrac{3}{2}(1 + \varphi_0(\alpha_n))(\beta_n - \alpha_n),
\end{aligned}$$
and
$$\beta_{n+1} = \alpha_{n+1} + \frac{2}{3}\,\frac{\xi_{n+1}}{1 - \varphi_0(\alpha_{n+1})}.$$
(e2)
There exists $\rho_7 \in [0, \rho_6)$ such that, for each $n = 0, 1, 2, \ldots$,
$$\tilde p_n < 1, \qquad \tilde q_n < 1, \qquad \varphi_0(\alpha_n) < 1, \qquad \text{and} \qquad \alpha_n \le \rho_7.$$
Consequently, $0 \le \alpha_n \le \beta_n \le \gamma_n \le \alpha_{n+1} \le \rho_7$ and there exists $\rho_8 \in [0, \rho_7]$ such that $\lim_{n\to\infty} \alpha_n = \rho_8$.
The functions φ 0 and φ are connected to the operators on the iterative scheme given in (4).
(e3)
There exists $v_0 \in E$ such that, for each $v \in E$,
$$\|L^{-1}(A'(v) - L)\| \le \varphi_0(\|v - v_0\|).$$
Let $E_3 = E \cap B(v_0, \rho_6)$. Notice that (e1) and (e3) imply that the operator $A'(v_0)$ is invertible. Assume $\|A'(v_0)^{-1}A(v_0)\| \le \tfrac{3}{2}\beta_0$.
(e4)
$\|L^{-1}(A'(w) - A'(v))\| \le \varphi(\|w - v\|)$ for each $v, w \in E_3$ and
(e5)
$B[v_0, \rho_8] \subseteq E$.
As in the local case, we obtain the following estimates in turn, using induction:
$$\|w_0 - v_0\| = \tfrac{2}{3}\|A'(v_0)^{-1}A(v_0)\| \le \beta_0 = \beta_0 - \alpha_0 < \rho_8,$$
$$z_n - w_n = \tfrac{1}{6}(3A'(w_n) - A'(v_n))^{-1}\Big[4\big(3A'(w_n) - A'(v_n)\big) - 3\big(3A'(w_n) + A'(v_n)\big)\Big]\left(\tfrac{3}{2}(w_n - v_n)\right),$$
$$\|z_n - w_n\| \le \frac{\tfrac{1}{8}\Big[3\|L^{-1}(A'(w_n) - A'(v_n))\| + 4\|L^{-1}A'(v_n)\|\Big]}{1 - \tilde p_n}\|w_n - v_n\| \le \gamma_n - \beta_n,$$
$$\|z_n - v_0\| \le \|z_n - w_n\| + \|w_n - v_0\| \le \gamma_n - \beta_n + \beta_n - \alpha_0 = \gamma_n < \rho_8,$$
where $\tilde p_n = \tfrac{3}{2}\varphi(\beta_n - \alpha_n) + \varphi_0(\alpha_n)$. Moreover,
$$A(z_n) = \int_0^1 A'(v_n + \theta(z_n - v_n))\,d\theta\,(z_n - v_n) - \tfrac{3}{2}A'(v_n)(w_n - v_n),$$
$$\|L^{-1}A(z_n)\| \le \left(1 + \int_0^1 \varphi_0(\alpha_n + \theta(\gamma_n - \alpha_n))\,d\theta\right)(\gamma_n - \alpha_n) + \tfrac{3}{2}(1 + \varphi_0(\alpha_n))(\beta_n - \alpha_n) = \tilde\mu_n,$$
$$\|v_{n+1} - z_n\| \le \frac{\tilde\mu_n}{1 - \tilde q_n} = \alpha_{n+1} - \gamma_n,$$
where
$$\varphi_0\!\left(\left\|\tfrac{3w_n - v_n}{2} - v_0\right\|\right) \le \varphi_0\!\left(\frac{2\|w_n - v_0\| + \|w_n - v_n\|}{2}\right) \le \varphi_0\!\left(\frac{2\beta_n + \beta_n - \alpha_n}{2}\right) = \tilde q_n < 1,$$
and
$$\|v_{n+1} - v_0\| \le \|v_{n+1} - z_n\| + \|z_n - v_0\| \le \alpha_{n+1} - \gamma_n + \gamma_n = \alpha_{n+1} < \rho_8,$$
$$\|L^{-1}A(v_{n+1})\| \le \left(1 + \int_0^1 \varphi_0(\alpha_n + \theta(\alpha_{n+1} - \alpha_n))\,d\theta\right)(\alpha_{n+1} - \alpha_n) + \tfrac{3}{2}(1 + \varphi_0(\alpha_n))(\beta_n - \alpha_n) = \xi_{n+1},$$
$$\|w_{n+1} - v_{n+1}\| \le \tfrac{2}{3}\|A'(v_{n+1})^{-1}L\|\,\|L^{-1}A(v_{n+1})\| \le \frac{2}{3}\,\frac{\xi_{n+1}}{1 - \varphi_0(\|v_{n+1} - v_0\|)} \le \frac{2}{3}\,\frac{\xi_{n+1}}{1 - \varphi_0(\alpha_{n+1})} = \beta_{n+1} - \alpha_{n+1},$$
and
$$\|w_{n+1} - v_0\| \le \beta_{n+1} - \alpha_{n+1} + \alpha_{n+1} - \alpha_0 = \beta_{n+1} < \rho_8.$$
It follows from (55)–(62) that the sequence $\{v_n\}$ is complete, since $\{\alpha_n\}$ is convergent by Condition (e2) and $B_1$ is a Banach space. Hence, there exists $v^* \in B[v_0, \rho_8]$ such that $\lim_{n\to\infty} v_n = v^*$. Then, by letting $n \to \infty$ in
$$\|L^{-1}A(v_{n+1})\| \le \xi_{n+1},$$
we deduce that $A(v^*) = 0$. Finally, notice that
$$\|v_{n+j} - v_n\| \le \alpha_{n+j} - \alpha_n, \quad j = 0, 1, 2, \ldots$$
Thus, letting $j \to \infty$,
$$\|v^* - v_n\| \le \rho_8 - \alpha_n.$$
Hence, the semi-local result for Method (4) is achieved.
Theorem 9. 
Let Conditions (e1)–(e5) hold. Then, there exists $v^* \in B[v_0, \rho_8]$ solving the equation $A(v) = 0$. Moreover, the following assertions hold:
$$\{v_n\} \subset B(v_0, \rho_8),$$
$$\|w_n - v_n\| \le \beta_n - \alpha_n,$$
$$\|z_n - w_n\| \le \gamma_n - \beta_n,$$
$$\|v_{n+1} - z_n\| \le \alpha_{n+1} - \gamma_n,$$
and
$$\|v^* - v_n\| \le \rho_8 - \alpha_n.$$
The uniqueness property of the solution is specified in the next result.
Proposition 2. 
Suppose that there exists a solution $\bar v \in B(v_0, \rho_9)$ of the equation $A(v) = 0$ for some $\rho_9 > 0$, that Condition (e3) holds in the ball $B(v_0, \rho_9)$, and that there exists $\rho_{10} \ge \rho_9$ such that
$$\int_0^1 \varphi_0\big((1 - \theta)\rho_9 + \theta\rho_{10}\big)\,d\theta < 1.$$
Let $E_4 = E \cap B[v_0, \rho_{10}]$. Then, the only possible solution of the equation $A(v) = 0$ in the region $E_4$ is $\bar v$.
Proof. 
Let $\bar z \in E_4$ with $A(\bar z) = 0$ and let $L_2 = \int_0^1 A'(\bar v + \theta(\bar z - \bar v))\,d\theta$ be the corresponding linear operator. It follows that
$$\|L^{-1}(L_2 - L)\| \le \int_0^1 \varphi_0\big((1 - \theta)\|\bar v - v_0\| + \theta\|\bar z - v_0\|\big)\,d\theta \le \int_0^1 \varphi_0\big((1 - \theta)\rho_9 + \theta\rho_{10}\big)\,d\theta < 1.$$
Thus, $L_2$ is invertible, and we deduce $\bar z = \bar v$.
Remark 2. 
(1) A possible choice for $L$ is $A'(v_0)$.
(2) If the conditions (e1)–(e5) hold, then set ρ 9 = ρ 8 and v ¯ = v in Proposition 2.
(3) Replace the limit point ρ 8 by ρ 6 in Condition (e5).
(4) Clearly, the results for Method (3) are obtained by simply restricting the process to the first two substeps of Method (4).

6. Efficiency Indices

There are several measures for comparing iterative methods other than the order of convergence; one of them is method efficiency. Recall that the informational efficiency, introduced by Traub [19], is given by $E.I = o/s$, where $o$ is the order of the method and $s$ is the number of function evaluations per iteration. Ostrowski [20] coined the term computational efficiency (C.E), defined as $C.E = \varpi^{1/\theta_f}$, where $\varpi$ is the order of convergence of the method and $\theta_f$ is the number of function evaluations. Thus, the E.I and C.E of Method (2) are $\frac{6}{5} = 1.2$ and $6^{1/5} = 1.4310$; the E.I and C.E of Method (3) are $\frac{4}{3} = 1.33$ and $4^{1/3} = 1.587$; and the E.I and C.E of Method (4) are $\frac{5}{5} = 1$ and $5^{1/5} = 1.3797$.
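These figures can be reproduced with a two-line helper (our own illustration; the formulas are as quoted above, and the helper name is an assumption).

```python
def indices(order, evals):
    """Informational efficiency E.I = o/s and Ostrowski's computational
    efficiency C.E = o**(1/s), for a method of convergence order `order`
    using `evals` function evaluations per iteration."""
    return order / evals, order ** (1.0 / evals)

# Values quoted in Section 6 for Methods (2), (3), and (4):
ei2, ce2 = indices(6, 5)   # 1.2 and about 1.4310
ei3, ce3 = indices(4, 3)   # about 1.33 and 1.587
ei4, ce4 = indices(5, 5)   # 1.0 and about 1.3797
```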

7. Numerical Example

Example 1. 
Consider $B_1 = B_2 = \mathbb{R}^3$, $v^* = (0, 0, 0)^T$, and $E = B[0, 1]$. Let $A$ be the function defined on $E$, for $u = (v, w, z)^T$, by
$$A(u) = \left(e^v - 1,\ \frac{e - 1}{6}w^3 + w,\ \frac{z^3}{6} + z\right)^T.$$
Then, the first and second Fréchet derivatives are as follows:
$$A'(u) = \begin{pmatrix} e^v & 0 & 0\\ 0 & \frac{e - 1}{2}w^2 + 1 & 0\\ 0 & 0 & \frac{z^2}{2} + 1 \end{pmatrix}$$
and
$$A''(u) = \left(\begin{array}{ccc|ccc|ccc} e^v & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & (e - 1)w & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & z \end{array}\right).$$
Now, one can notice that $A'(v^*) = A'(v^*)^{-1} = \mathrm{diag}(1, 1, 1)$. Thus, with $\|A'(v^*)\| = 1$, we have
$$\|A'(v^*)^{-1}(A'(v) - A'(w))\| \le (e - 1)\|v - w\|,$$
$$\|A'(v^*)^{-1}(A''(v) - A''(w))\| \le (e - 1)\|v - w\|,$$
$$\|A'(v^*)^{-1}A''(v)\| \le e,$$
$$\|A'(v^*)^{-1}A'(v)\| \le e^{\frac{1}{e - 1}}.$$
Hence, $L_0 = L_1 = e - 1$, $L_2 = e$, and $L_3 = e^{\frac{1}{e - 1}}$. With $\lambda_1 = 0.1146$, $\lambda_2 = 0.7115$, $\lambda_3 = 0.6959$, and $\lambda_4 = 0.3001$, we obtain $r = R = R_1 = 0.0667$.
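As a quick numerical check (our own sketch, not part of the paper), Method (2) can be run on this system, taken here as $A(u) = \big(e^v - 1,\ \frac{e-1}{6}w^3 + w,\ \frac{z^3}{6} + z\big)^T$; the starting point $u_0 = (0.05, 0.05, 0.05)^T$ inside $E = B[0, 1]$ is an assumption chosen for illustration.

```python
import numpy as np

E_CONST = np.e

def A(u):
    v, w, z = u
    return np.array([np.exp(v) - 1.0,
                     (E_CONST - 1.0) / 6.0 * w**3 + w,
                     z**3 / 6.0 + z])

def dA(u):
    v, w, z = u
    return np.diag([np.exp(v),
                    (E_CONST - 1.0) / 2.0 * w**2 + 1.0,
                    z**2 / 2.0 + 1.0])

def step2(u):
    """One iteration of the sixth-order scheme (2)."""
    d = np.linalg.solve(dA(u), A(u))
    w = u - 2.0 / 3.0 * d
    M = 3.0 * dA(w) - dA(u)
    z = u - 0.5 * np.linalg.solve(M, (3.0 * dA(w) + dA(u)) @ d)
    return z - np.linalg.solve(dA(z), A(z))

u = np.array([0.05, 0.05, 0.05])   # hypothetical starting point in B[0, 1]
for _ in range(3):
    u = step2(u)
```

Within a few steps, the iterates collapse to the solution $v^* = (0, 0, 0)^T$ at machine precision.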
Example 2. 
Consider the nonlinear integral equation of Hammerstein type given by
$$A(v)(\theta) = v(\theta) - 5H(v)(\theta),$$
where $H$ is the operator
$$H(v)(\theta) = \theta\int_0^1 \beta\,v^3(\beta)\,d\beta,$$
defined on $B_1 = B_2 = C[0, 1]$, the space of all continuous functions on the interval $[0, 1]$. Let $E = B[0, 1]$. Then, the first Fréchet derivative is given by
$$A'(v)(\psi)(\theta) = \psi(\theta) - 15\,\theta\int_0^1 \gamma\,v^2(\gamma)\,\psi(\gamma)\,d\gamma, \quad \text{for all } v \in E.$$
One can observe that $v^* = v^*(\theta) = 0$ is a solution of $A(v) = 0$. Then, by applying Conditions (A1)–(A4), we have $L_0 = L_1 = L_2 = 7.5$ and $L_3 = 2$. With $\lambda_1 = 0.5028$, $\lambda_2 = 0.0129$, $\lambda_3 = 0.0523$, and $\lambda_4 = 0.1333$, we obtain $r = 0.0667$, $R = 0.0523$, and $R_1 = 0.0667$.
Example 3. 
Consider the following nonlinear system, whose roots are not known precisely:
$$A(v, w) = \begin{pmatrix} v^3 - 3vw^2 - 1\\ 3v^2w - w^3 + 1 \end{pmatrix}.$$
The approximate solutions obtained using (2) and (3) with initial point $v_0 = (0, 1)$ at each iteration are shown in Table 1. It is observed that the approximate solution is $v^* = (-0.290514555507251, 1.084215081491351)$ with accuracy $10^{-14}$.
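A minimal sketch of this computation with scheme (3) follows, assuming the system $A(v, w) = (v^3 - 3vw^2 - 1,\ 3v^2w - w^3 + 1)^T$ (the real and imaginary parts of $(v + iw)^3 = 1 - i$) and the starting point $v_0 = (0, 1)$ from the text.

```python
import numpy as np

def A(u):
    v, w = u
    return np.array([v**3 - 3.0 * v * w**2 - 1.0,
                     3.0 * v**2 * w - w**3 + 1.0])

def dA(u):
    # Jacobian of A (Cauchy-Riemann structure of (v + iw)**3).
    v, w = u
    return np.array([[3.0 * v**2 - 3.0 * w**2, -6.0 * v * w],
                     [6.0 * v * w, 3.0 * v**2 - 3.0 * w**2]])

def step3(u):
    """One iteration of the third-order scheme (3)."""
    d = np.linalg.solve(dA(u), A(u))
    w = u - 2.0 / 3.0 * d
    M = 3.0 * dA(w) - dA(u)
    return u - 0.5 * np.linalg.solve(M, (3.0 * dA(w) + dA(u)) @ d)

u = np.array([0.0, 1.0])
for _ in range(8):
    u = step3(u)
```

The iterates settle on the cube root $2^{1/6}(\cos 105^\circ + i\sin 105^\circ)$ of $1 - i$ nearest to the starting point.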
Remark 3. 
Observe that Example 3 illustrates our method of approximating solutions of systems of nonlinear equations. Studies on the approximation of nonlinear systems of equations can be seen in works like [1,2,4,5]. Observe that from a theoretical view-point, when the findings in [4,5] are compared with the proposed sixth-order Jarratt-type method given in Equation (2), it can be seen that the iterative method in [5] focuses on achieving fourth-order convergence. While this is an improvement over the classical Newton’s method, it is lower than the sixth-order convergence achieved by one of the methods discussed in (2). But in the case of iterative methods mentioned in [4], some methods require the computation of second- or higher-order Fréchet derivatives, which can be computationally expensive. Thus, the limitations of the iterative methods in [5] and [4] compared to (2) are primarily related to the order of convergence and dependence on higher-order derivatives.
Considering the efficiency indices, with $\mathrm{E.I.} = p/d$ and $\mathrm{C.E.} = p^{1/d}$ (where $p$ is the convergence order and $d$ the number of evaluations per step), the E.I. and C.E. of the method in [5] are $\frac{4}{3} = 1.33$ and $4^{1/3} = 1.587$, while the E.I. and C.E. of the method discussed in [4] are $\frac{8}{7} = 1.142$ and $8^{1/7} = 1.345$. Overall, a key advantage of our approach is that it determines the order of convergence using assumptions on the derivatives of the involved operator only up to the second order, avoiding Taylor expansion and the need for higher-order derivatives.
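The two indices quoted above are easy to recompute; the following snippet uses our reading of them ($\mathrm{E.I.} = p/d$ and $\mathrm{C.E.} = p^{1/d}$), which reproduces the stated values:

```python
# Efficiency indices: E.I. = p/d (informational) and C.E. = p**(1/d),
# where p is the convergence order and d the evaluations per step.
def efficiency(p, d):
    return p / d, p ** (1.0 / d)

for name, p, d in [("method of [5]", 4, 3), ("method of [4]", 8, 7)]:
    ei, ce = efficiency(p, d)
    print(f"{name}: E.I. = {ei:.3f}, C.E. = {ce:.3f}")
```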
In the following illustrations, we compare the iterates and the convergence order of Methods (3), (2), and (4) with those of the following methods:
Noor–Waseem-type methods [21]: Given for $n = 0, 1, 2, \ldots$ as
$$w_n = v_n - A'(v_n)^{-1} A(v_n), \qquad v_{n+1} = v_n - 4 G_n^{-1} A(v_n),$$
where $G_n = 3 A'\!\left(\frac{2 v_n + w_n}{3}\right) + A'(w_n)$,
$$w_n = v_n - A'(v_n)^{-1} A(v_n), \qquad z_n = v_n - 4 G_n^{-1} A(v_n), \qquad v_{n+1} = z_n - A'(w_n)^{-1} A(z_n),$$
and
$$w_n = v_n - A'(v_n)^{-1} A(v_n), \qquad z_n = v_n - 4 G_n^{-1} A(v_n), \qquad v_{n+1} = z_n - A'(z_n)^{-1} A(z_n).$$
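Under our reading of the reconstructed formulas, the two-step Noor–Waseem method can be sketched for the $2 \times 2$ system of Example 4 below; this is an illustrative implementation (Jacobian written out by hand, iteration count our choice), not the authors' code:

```python
import numpy as np

def A(p):
    """System of Example 4: 3*t1^2*t2 + t2^2 = 1, t1^4 + t1*t2^3 = 1."""
    t1, t2 = p
    return np.array([3*t1**2*t2 + t2**2 - 1, t1**4 + t1*t2**3 - 1])

def dA(p):
    """Jacobian of A."""
    t1, t2 = p
    return np.array([[6*t1*t2,           3*t1**2 + 2*t2],
                     [4*t1**3 + t2**3,   3*t1*t2**2]])

def noor_waseem_step(v):
    w = v - np.linalg.solve(dA(v), A(v))     # Newton predictor
    G = 3*dA((2*v + w) / 3) + dA(w)          # blended Jacobian G_n
    return v - 4*np.linalg.solve(G, A(v))    # corrector

v = np.array([2.0, -1.0])
for _ in range(10):
    v = noor_waseem_step(v)
print(v)  # approaches (0.992780, 0.306440), matching Table 2
```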
Newton–Simpson-type methods [22,23]: Given for $n = 0, 1, 2, \ldots$ as
$$w_n = v_n - A'(v_n)^{-1} A(v_n), \qquad v_{n+1} = v_n - 6 G_n^{-1} A(v_n),$$
where $G_n = A'(v_n) + 4 A'\!\left(\frac{v_n + w_n}{2}\right) + A'(w_n)$,
$$w_n = v_n - A'(v_n)^{-1} A(v_n), \qquad z_n = v_n - 6 G_n^{-1} A(v_n), \qquad v_{n+1} = z_n - A'(w_n)^{-1} A(z_n),$$
and
$$w_n = v_n - A'(v_n)^{-1} A(v_n), \qquad z_n = v_n - 6 G_n^{-1} A(v_n), \qquad v_{n+1} = z_n - A'(z_n)^{-1} A(z_n).$$
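Similarly, the three-step Newton–Simpson variant (the one with the frozen-Jacobian correction $v_{n+1} = z_n - A'(w_n)^{-1} A(z_n)$) can be sketched for the system of Example 4 below; again this is our illustrative reading of the reconstructed formulas, not the authors' code:

```python
import numpy as np

def A(p):
    """System of Example 4: 3*t1^2*t2 + t2^2 = 1, t1^4 + t1*t2^3 = 1."""
    t1, t2 = p
    return np.array([3*t1**2*t2 + t2**2 - 1, t1**4 + t1*t2**3 - 1])

def dA(p):
    """Jacobian of A."""
    t1, t2 = p
    return np.array([[6*t1*t2,           3*t1**2 + 2*t2],
                     [4*t1**3 + t2**3,   3*t1*t2**2]])

def newton_simpson_step(v):
    w = v - np.linalg.solve(dA(v), A(v))       # Newton predictor
    G = dA(v) + 4*dA((v + w) / 2) + dA(w)      # Simpson-rule Jacobian average G_n
    z = v - 6*np.linalg.solve(G, A(v))
    return z - np.linalg.solve(dA(w), A(z))    # frozen-Jacobian correction

v = np.array([2.0, -1.0])
for _ in range(8):
    v = newton_simpson_step(v)
print(v)  # approaches (0.992780, 0.306440), matching Table 3
```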
Example 4. 
Let $T = T_1 = \mathbb{R}^2$. Consider the system of equations [24]
$$3 t_1^2 t_2 + t_2^2 = 1, \qquad t_1^4 + t_1 t_2^3 = 1.$$
Observe that $a_1 = (1, 0.2)$, $a_2 = (0.4, 1.3)$, and $a_3 = (0.9, 0.3)$ are (approximate) solutions of the above system of equations. The approximation of the solution $a_3$ using Methods (65)–(70), (3), (2), and (4), starting with $v_0 = (2, -1)$, is given. The results are displayed in Table 2, Table 3 and Table 4.

8. Basins of Attraction

In an iterative method, the set of all initial points that lead to convergence to a solution of an equation is called the basin of attraction [25,26]. Using the approach of basins of attraction, we obtain the region of convergence of the iterative schemes (2), (3), and (4) when applied to the following examples:
Example 5. 
$\alpha^3 - \beta = 0$, $\beta^3 - \alpha = 0$, with solutions $\{(-1, -1), (0, 0), (1, 1)\}$.
Example 6. 
$3\alpha^2\beta - \beta^3 = 0$, $\alpha^3 - 3\alpha\beta^2 - 1 = 0$, with solutions
$\left\{\left(-\frac{1}{2}, \frac{\sqrt{3}}{2}\right), \left(-\frac{1}{2}, -\frac{\sqrt{3}}{2}\right), (1, 0)\right\}$.
Example 7. 
$\alpha^2 + \beta^2 - 4 = 0$, $3\alpha^2 + 7\beta^2 - 16 = 0$, with solutions
$\{(\sqrt{3}, 1), (\sqrt{3}, -1), (-\sqrt{3}, 1), (-\sqrt{3}, -1)\}$.
Corresponding to the roots of each system of nonlinear equations, the basins of attraction are generated on the rectangular domain $D = \{(\alpha, \beta) \in \mathbb{R}^2 : -2 \le \alpha \le 2, \ -2 \le \beta \le 2\}$ with $401 \times 401$ equidistant grid points. Each initial point $(\alpha_0, \beta_0) \in D$ is assigned the color of the root to which the corresponding iterative method converges, starting from $(\alpha_0, \beta_0)$. In Figure 1, Figure 2 and Figure 3, we assign (a) red to $(-1, -1)$, blue to $(0, 0)$, and green to $(1, 1)$ for Example 5; (b) red to $\left(-\frac{1}{2}, \frac{\sqrt{3}}{2}\right)$, blue to $\left(-\frac{1}{2}, -\frac{\sqrt{3}}{2}\right)$, and green to $(1, 0)$ for Example 6; and (c) red to $(\sqrt{3}, 1)$, blue to $(\sqrt{3}, -1)$, green to $(-\sqrt{3}, 1)$, and yellow to $(-\sqrt{3}, -1)$ for Example 7, respectively. If the method either diverges to infinity or does not converge, the point is marked black. A maximum of 100 iterations and a tolerance of $10^{-8}$ are used.
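The basin computation described above can be sketched as follows for Example 5. As an illustrative stand-in we use classical Newton's method rather than Methods (2)–(4) (whose full formulations appear earlier in the article), and a coarser $81 \times 81$ grid for speed; the tolerance and iteration cap follow the text:

```python
import numpy as np

# Roots of Example 5: alpha^3 - beta = 0, beta^3 - alpha = 0
roots = np.array([[-1.0, -1.0], [0.0, 0.0], [1.0, 1.0]])

def F(p):
    a, b = p
    return np.array([a**3 - b, b**3 - a])

def JF(p):
    a, b = p
    return np.array([[3*a**2, -1.0], [-1.0, 3*b**2]])

def classify(p, tol=1e-8, max_iter=100):
    """Index of the root Newton converges to from p, or -1 (painted black)."""
    p = np.asarray(p, dtype=float)
    for _ in range(max_iter):
        J = JF(p)
        if abs(np.linalg.det(J)) < 1e-14:   # singular Jacobian: give up
            return -1
        p = p - np.linalg.solve(J, F(p))
        d = np.linalg.norm(roots - p, axis=1)
        if d.min() < tol:
            return int(d.argmin())
    return -1

# Root index per grid point; a plotting routine would map indices to colors.
n = 81
grid = np.array([[classify((a, b))
                  for a in np.linspace(-2, 2, n)]
                 for b in np.linspace(-2, 2, n)])
```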

9. Conclusions

We studied the Jarratt-type method of convergence order three and two of its extensions, of convergence orders six and five, respectively. As mentioned in the introduction, we used assumptions on $A'$ and $A''$ only, meaning Methods (2), (3), and (4) can be used to solve problems that cannot be handled by earlier convergence analyses based on Taylor expansion. We discussed the limitations of our approach and developed new ways to overcome them in Section 5. Finally, we compared the methods with other similar methods using an example. Also, using the basins of attraction approach, the convergence regions of Methods (2), (3), and (4) were given. In future research, our ideas will be extended to other methods to obtain similar benefits [6,7,8,9,10,11,12,14,15,16,17,18,19,20,24,25,26,27,28].

Author Contributions

Conceptualization, S.P.B.; Methodology, S.G.; Validation, I.K.A.; Formal analysis, S.M.E.; Investigation, S.G.; Resources, K.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ahmad, F.; Tohidi, E.; Ullah, M.Z. Higher order multi-step Jarratt-like method for solving systems of nonlinear equations: Application to PDEs and ODEs. Comput. Math. Appl. 2015, 70, 624–636. [Google Scholar] [CrossRef]
  2. Ullah, M.Z.; Serra-Capizzano, S.; Ahmad, F. An efficient multi-step iterative method for computing the numerical solution of systems of nonlinear equations associated with ODEs. Appl. Math. Comput. 2015, 250, 249–259. [Google Scholar] [CrossRef]
  3. Argyros, I.K. The Theory and Applications of Iteration Methods; Taylor and Francis Group, CRC Press: Boca Raton, FL, USA, 2022; Volume 2. [Google Scholar]
  4. Ullah, M.Z.; Soleymani, F.; Al-Fhaid, A.S. Numerical solution of nonlinear systems by a general class of iterative methods with application to nonlinear PDEs. Numer. Algorithms 2014, 67, 223–242. [Google Scholar] [CrossRef]
  5. Yu, J.; Wang, X. A single parameter fourth-order Jarratt type iterative method for solving nonlinear systems. AIMS Math. 2025, 10, 7847–7863. [Google Scholar] [CrossRef]
  6. Bartle, R.G. Newton’s method in Banach spaces. Proc. Am. Math. Soc. 1955, 6, 827–831. [Google Scholar]
  7. Ben-Israel, A. A Newton-Raphson method for the solution of systems of equations. J. Math. Anal. Appl. 1966, 15, 243–252. [Google Scholar] [CrossRef]
  8. Jarratt, P. Some fourth order multipoint iterative methods for solving equations. Math. Comput. 1966, 20, 434–437. [Google Scholar] [CrossRef]
  9. Saheya, B.; Chen, G.Q.; Sui, Y.K.; Wu, C.Y. A new Newton-like method for solving nonlinear equations. SpringerPlus 2016, 5, 1269. [Google Scholar] [CrossRef]
  10. Ren, H.; Wu, Q.; Bi, W. New variants of Jarratt’s method with sixth-order convergence. Numer. Algorithms 2009, 52, 585–603. [Google Scholar] [CrossRef]
  11. Weerakoon, S.; Fernando, T. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
  12. Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint methods for solving nonlinear equations: A survey. Appl. Math. Comput. 2014, 226, 635–660. [Google Scholar] [CrossRef]
  13. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton-Jarratt’s composition. Numer. Algorithms 2010, 55, 87–99. [Google Scholar] [CrossRef]
  14. Behl, R.; Cordero, A.; Motsa, S.S.; Torregrosa, J.R. On developing fourth-order optimal families of methods for multiple roots and their dynamics. Appl. Math. Comput. 2015, 265, 520–532. [Google Scholar] [CrossRef]
  15. Magreñán, A.A. Different anomalies in a Jarratt family of iterative root finding methods. Appl. Math. Comput. 2014, 233, 29–38. [Google Scholar]
  16. Shakhno, S.M.; Iakymchuk, R.P.; Yarmola, H.P. Convergence analysis of a two step method for the nonlinear squares problem with decomposition of operator. J. Numer. Appl. Math. 2018, 128, 82–95. [Google Scholar]
  17. Shakhno, S.M.; Gnatyshyn, O.P. On an iterative algorithm of order 1.839… for solving nonlinear operator equations. Appl. Math. Comput. 2005, 161, 253–264. [Google Scholar]
  18. Cárdenas, E.; Castro, R.; Sierra, W. A Newton-type midpoint method with high efficiency index. J. Math. Anal. Appl. 2020, 491, 124381. [Google Scholar] [CrossRef]
  19. Traub, J.F. Iterative Methods for the Solution of Equations; Chelsea Publishing Company: New York, NY, USA, 1982. [Google Scholar]
  20. Ostrowski, A.M. Solution of Equations and Systems of Equations: Pure and Applied Mathematics; A Series of Monographs and Textbooks; Elsevier: Amsterdam, The Netherlands, 2016; Volume 9. [Google Scholar]
  21. Noor, M.A.; Waseem, M.; Noor, K.I. New iterative technique for solving a system of nonlinear equations. Appl. Math. Comput. 2015, 271, 446–466. [Google Scholar] [CrossRef]
  22. Alqahtani, H.F.; Behl, R.; Kansal, M. Higher-Order Iteration Schemes for Solving Nonlinear Systems of Equations. Mathematics 2019, 7, 937. [Google Scholar] [CrossRef]
  23. Jayakumar, J. Generalized Simpson-Newton’s Method for Solving Nonlinear Equations with Cubic Convergence. IOSR J. Math. 2013, 7, 58–61. [Google Scholar] [CrossRef]
  24. Iliev, A.; Iliev, I. Numerical method with order t for solving system nonlinear equations. Collect. Sci. Work. 2000, 30, 3–4. [Google Scholar]
  25. Chun, C.; Lee, M.Y.; Neta, B.; Džunić, J. On optimal fourth-order iterative methods free from second derivative and their dynamics. Appl. Math. Comput. 2012, 218, 6427–6438. [Google Scholar] [CrossRef]
  26. Scott, M.; Neta, B.; Chun, C. Basin attractors for various methods. Appl. Math. Comput. 2011, 218, 2584–2599. [Google Scholar] [CrossRef]
  27. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Classics in Applied Mathematics; SIAM: Philadelphia, PA, USA, 2000; Volume 30. [Google Scholar]
  28. Werner, W. Über ein Verfahren der Ordnung $1 + \sqrt{2}$ zur Nullstellenbestimmung. Numer. Math. 1979, 32, 333–342. [Google Scholar] [CrossRef]
Figure 1. Dynamical plane of Method (2) with basins of attraction for Example 5 (left), Example 6 (middle), and Example 7 (right).
Figure 2. Dynamical plane of Method (3) with basins of attraction for Example 5 (left), Example 6 (middle), and Example 7 (right).
Figure 3. Dynamical plane of Method (4) with basins of attraction for Example 5 (left), Example 6 (middle), and Example 7 (right).
Table 1. Results for Example 3.
n | Method (2) | Method (3)
1 | (−0.285212504390587, 1.085353003161222) | (−0.291165568093180, 1.084718626530083)
2 | (−0.290514555536393, 1.084215081898184) | (−0.290514555507251, 1.084215081491351)
3 | (−0.290514555536393, 1.084215081898184) | (−0.290514555507251, 1.084215081491351)
4 | (−0.290514555536393, 1.084215081898184) | (−0.290514555507251, 1.084215081491351)
Table 2. Methods of order 3.
Here $x_k = (t_{1k}, t_{2k})$ and the ratio is $\epsilon_{k+1}/\epsilon_k^3$ with $\epsilon_k = \|x_k - x^*\|$.
k | Noor–Waseem Method (65) $x_k$ | Ratio | Newton–Simpson Method (68) $x_k$ | Ratio | Method (3) $x_k$ | Ratio
0 | (2.000000, −1.000000) | – | (2.000000, −1.000000) | – | (2.000000, −1.000000) | –
1 | (1.264067, −0.166747) | 0.052791 | (1.263927, −0.166887) | 0.052792 | (1.151437, 0.051449) | 0.040459
2 | (1.019624, 0.265386) | 0.259247 | (1.019452, 0.265424) | 0.259156 | (0.994771, 0.304342) | 0.536597
3 | (0.992854, 0.306346) | 1.578713 | (0.992853, 0.306348) | 1.580144 | (0.992780, 0.306440) | 1.951273
4 | (0.992780, 0.306440) | 1.977941 | (0.992780, 0.306440) | 1.977957 | (0.992780, 0.306440) | 1.979028
5 | (0.992780, 0.306440) | 1.979028 | (0.992780, 0.306440) | 1.979028 | (0.992780, 0.306440) | 1.979028
Table 3. Methods of order 5.
Here $x_k = (t_{1k}, t_{2k})$ and the ratio is $\epsilon_{k+1}/\epsilon_k^5$ with $\epsilon_k = \|x_k - x^*\|$.
k | Noor–Waseem Method (66) $x_k$ | Ratio | Newton–Simpson Method (69) $x_k$ | Ratio | Method (4) $x_k$ | Ratio
0 | (2.000000, −1.000000) | – | (2.000000, −1.000000) | – | (2.000000, −1.000000) | –
1 | (1.127204, 0.054887) | 0.004363 | (1.127146, 0.054883) | 0.004363 | (1.144528, 0.069067) | 0.004375
2 | (0.993331, 0.305731) | 0.501551 | (0.993328, 0.305734) | 0.501670 | (0.994305, 0.304922) | 0.495553
3 | (0.992780, 0.306440) | 3.889725 | (0.992780, 0.306440) | 3.889832 | (0.992780, 0.306440) | 3.847630
4 | (0.992780, 0.306440) | 3.916553 | (0.992780, 0.306440) | 3.916553 | (0.992780, 0.306440) | 3.916553
Table 4. Methods of order 6.
Here $x_k = (t_{1k}, t_{2k})$ and the ratio is $\epsilon_{k+1}/\epsilon_k^6$ with $\epsilon_k = \|x_k - x^*\|$.
k | Noor–Waseem Method (67) $x_k$ | Ratio | Newton–Simpson Method (70) $x_k$ | Ratio | Method (2) $x_k$ | Ratio
0 | (2.000000, −1.000000) | – | (2.000000, −1.000000) | – | (2.000000, −1.000000) | –
1 | (1.067979, 0.174843) | 0.001211 | (1.067906, 0.174885) | 0.001211 | (1.027012, 0.256566) | 0.001057
2 | (0.992784, 0.306436) | 1.383068 | (0.992784, 0.306436) | 1.384152 | (0.992780, 0.306440) | 3.122403
3 | (0.992780, 0.306440) | 5.509412 | (0.992780, 0.306440) | 5.509414 | (0.992780, 0.306440) | 5.509727