Article

Semi-Local Convergence of Two Derivative-Free Methods of Order Six for Solving Equations under the Same Conditions

by Ioannis K. Argyros 1,*, Christopher I. Argyros 1, Jinny Ann John 2 and Jayakumar Jayaraman 2

1 Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
2 Department of Mathematics, Puducherry Technological University, Pondicherry 605014, India
* Author to whom correspondence should be addressed.
Foundations 2022, 2(4), 1022-1030; https://doi.org/10.3390/foundations2040068
Submission received: 5 September 2022 / Revised: 23 October 2022 / Accepted: 25 October 2022 / Published: 1 November 2022
(This article belongs to the Special Issue Iterative Methods with Applications in Mathematical Sciences)

Abstract

We present a semi-local convergence analysis of two competing derivative-free methods of order six for solving non-linear equations. The sufficient convergence criteria are the same, making a direct comparison between them possible. The existing convergence technique uses the standard Taylor series approach, which requires derivatives up to order seven. The novelty and originality of our work lie in the fact that, in contrast to previous research, our convergence theorems only demand the first derivative. In addition, formulas for determining the region of uniqueness for the solution, the convergence radii, and error estimates are provided. Such results cannot be found in works relying on seventh derivatives. As a consequence, we broaden the utility of these productive methods. The confirmation of our convergence findings through application problems brings this research to a close.

1. Introduction

Let B denote a complete normed linear space and let D ⊆ B be a non-null, open, and convex set. Non-linear equations of the type

  F(x) = 0,   (1)

where F : D ⊆ B → B is Fréchet differentiable, may be used to model a wide range of complex scientific and engineering problems [1,2,3,4,5]. Mathematicians have long struggled to overcome non-linearity, since such equations are very difficult to solve analytically. Because of this, scientists and researchers commonly employ iterative methods. Newton's method is a popular iterative method for dealing with non-linear equations, but it is only of convergence order two. Many novel, higher-order iterative strategies for dealing with non-linear equations have been developed in recent years and are currently in use [6,7,8,9,10]. However, in most of these publications, the convergence theorems were derived by applying high-order derivatives. Furthermore, no results were given regarding the error distances, the radii of convergence, or the region in which the solution is unique.
In research articles on iterative methods, it is crucial to determine the region where convergence is guaranteed. Most of the time, this convergence region is rather small, and it is desirable to enlarge it without making any extra assumptions. Likewise, while investigating the convergence of iterative methods, precise error distances must be estimated. Taking these points into consideration, we developed convergence theorems for two methods, GM6 (2) and SM6 (3), which were proposed in [11] and [9], respectively. Let
  u_n = x_n + F(x_n),  v_n = x_n − F(x_n),

and let [u_n, v_n; F] denote the divided difference of order one [1,4]. Method GM6 is defined by

  y_n = x_n − [u_n, v_n; F]⁻¹ F(x_n),
  z_n = y_n − M_n⁻¹ F(y_n),   (2)
  x_{n+1} = z_n − M_n⁻¹ F(z_n),  where M_n = 2[y_n, x_n; F] − [u_n, v_n; F],

and method SM6 by

  y_n = x_n − [u_n, v_n; F]⁻¹ F(x_n),
  z_n = y_n − (3I − 2[u_n, v_n; F]⁻¹[y_n, x_n; F]) [u_n, v_n; F]⁻¹ F(y_n),   (3)
  x_{n+1} = z_n − (3I − 2[u_n, v_n; F]⁻¹[y_n, x_n; F]) [u_n, v_n; F]⁻¹ F(z_n).
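In the scalar case the divided difference reduces to [a, b; F] = (F(a) − F(b))/(a − b), and one step of each method can be sketched as follows. This is a minimal double-precision illustration; the outer solver loop and its tolerance are our own choices, not part of the paper:

```python
def gm6_step(F, x):
    # Divided difference of order one: [a, b; F] = (F(a) - F(b)) / (a - b)
    dd = lambda a, b: (F(a) - F(b)) / (a - b)
    u, v = x + F(x), x - F(x)        # u_n = x_n + F(x_n), v_n = x_n - F(x_n)
    A = dd(u, v)
    y = x - F(x) / A                 # first sub-step of (2)
    M = 2 * dd(y, x) - A             # M_n = 2[y_n, x_n; F] - [u_n, v_n; F]
    z = y - F(y) / M                 # second sub-step
    return z - F(z) / M              # third sub-step

def sm6_step(F, x):
    dd = lambda a, b: (F(a) - F(b)) / (a - b)
    u, v = x + F(x), x - F(x)
    A = dd(u, v)
    y = x - F(x) / A
    C = (3 - 2 * dd(y, x) / A) / A   # scalar form of (3I - 2A^{-1}B)A^{-1}
    z = y - C * F(y)                 # second sub-step of (3)
    return z - C * F(z)              # third sub-step

def solve(F, x0, step, tol=1e-12, max_iter=20):
    # Stop before stepping once |F(x)| is tiny: the divided differences
    # above need u != v, i.e. F(x) != 0.
    x = x0
    for _ in range(max_iter):
        if abs(F(x)) < tol:
            break
        x = step(F, x)
    return x
```

For instance, for F(x) = x³ − 20 of Example 4 with x₀ = 2.7, both `solve(F, 2.7, gm6_step)` and `solve(F, 2.7, sm6_step)` reach the root 20^(1/3) ≈ 2.7144176 within a few iterations.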
The convergence analysis of these methods in [9,11] is based on derivatives of F up to order seven and only yields the convergence rate. As a consequence, the applicability of these methods is limited. To emphasize this, we define F on D = [−1/2, 3/2] by

  F(x) = x³ ln(x²) + x⁵ − x⁴ if x ≠ 0, and F(0) = 0.

It is then easy to see that, due to the unboundedness of F‴ on D, the convergence results for GM6 [11] and SM6 [9] do not apply to this elementary example. Moreover, these articles do not provide any formula for estimating the error ‖x_n − x*‖, the convergence region, or the uniqueness and precise location of the root x*. This encourages us to develop convergence theorems, and hence to compare the convergence domains of GM6 and SM6, by considering assumptions only on the divided difference [·, ·; F]. Our research provides important formulas for the estimation of ‖x_n − x*‖ and the convergence radii. This study also discusses the precise location and uniqueness of x*.
It is worth noting that the articles [9,11] did not provide such formulas or locations either; the originality and novelty of our work derive from this fact. The same advantages can be obtained if our methodology is applied to other single- or multi-step methods using inverses of divided differences or derivatives along the same lines [11,12,13]. In particular, our earlier work [1] used derivatives of order one and therefore cannot be used to solve equations containing non-differentiable operators. We provide such an example in the numerical Section 4 (see Example 3).
The other contents of this material can be summarized as follows: In Section 2, we develop two scalar sequences that are proved as majorizing sequences for methods (2) and (3). Section 3 discusses the semi-local convergence properties of the methods under consideration ((2) and (3)). The numerical testing of convergence outcomes is described in Section 4. A discussion is given in Section 5, while concluding remarks are provided in Section 6.

2. Majorizing Sequences

Two scalar sequences are generated, which are shown to be majorizing for method (2) and method (3), respectively. Let s ≥ 0 and s₀ ≥ 0 be given parameters. Set T = [0, ∞).
  • Suppose the following:
There exist continuous and non-decreasing functions h₁ : T → R, h₃ : T → R, ψ₀ : T × T → R such that, for h₂(t) = h₁(t)t + s₀ and h₄(t) = h₃(t)t + s₀, the equation

  ψ₀(h₂(t), h₄(t)) − 1 = 0   (4)

has a least solution, denoted by ρ₀ ∈ T \ {0}. Set T₀ = [0, ρ₀). Let ψ : T₀ × T₀ × T₀ × T₀ → R be a continuous and non-decreasing function. Moreover, define the first scalar sequence {a_n} by

  a₀ = 0,  b₀ = s,
  p_n = ψ(a_n, b_n, h₂(a_n), h₄(a_n))(b_n − a_n),
  q_n = p_n + ψ₀(h₂(a_n), h₄(a_n)),
  c_n = b_n + p_n/(1 − q_n),   (5)
  λ_n = (1 + ψ₀(c_n, b_n))(c_n − b_n) + p_n,
  a_{n+1} = c_n + λ_n/(1 − q_n),
  μ_{n+1} = (1 + ψ₀(a_n, a_{n+1}))(a_{n+1} − a_n) + (1 + ψ₀(h₂(a_n), h₄(a_n)))(b_n − a_n),
  b_{n+1} = a_{n+1} + μ_{n+1}/(1 − ψ₀(h₂(a_{n+1}), h₄(a_{n+1}))).
The second scalar sequence is also denoted by {a_n} and is defined, with p_n, q_n, λ_n, and μ_{n+1} as above, by

  σ_n = [1 + 2ψ(a_n, b_n, h₂(a_n), h₄(a_n))/(1 − ψ₀(h₂(a_n), h₄(a_n)))]/(1 − ψ₀(h₂(a_n), h₄(a_n))),
  c_n = b_n + σ_n p_n,   (6)
  a_{n+1} = c_n + σ_n λ_n,
  b_{n+1} = a_{n+1} + μ_{n+1}/(1 − ψ₀(h₂(a_{n+1}), h₄(a_{n+1}))).
The functions ψ 0 and ψ are assumed to be symmetric without loss of generality. Next, a common convergence result is given. The limit point for each of them is called a * , although it is generally not the same.
Lemma 1.
Suppose that, for each n = 0, 1, 2, …,

  q_n < 1,  ψ₀(a_n, b_n) < 1,  and  a_n < τ   (7)

for some τ > 0. Then, the sequence generated by Formula (5) (or Formula (6)) is non-decreasing, bounded from above by τ, and convergent to its least upper bound a* ∈ [s, τ].
Proof. 
Definition (5) of sequence { a n } (or sequence { a n } given by Formula (6)) and condition (7) immediately imply the result. □
Remark 1.
The parameter τ can possibly be chosen as τ < ρ₀. However, it can also be chosen simply to satisfy condition (7), thus forcing the convergence of Formula (5) or (6).
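Formula (5) is a straightforward recursion, so condition (7) can be checked numerically for given ψ₀, ψ, h₁, h₃, s, and s₀. The sketch below does this for the reconstruction of (5) used here; the particular ψ and h choices at the end are hypothetical, for illustration only, and are not taken from the paper:

```python
def majorizing_sequence(psi0, psi, h1, h3, s, s0, n_terms=15):
    """Generate the scalar sequence {a_n} of Formula (5), stopping as soon
    as condition (7) fails (q_n >= 1) or a denominator is non-positive.
    Returns the terms computed so far."""
    h2 = lambda t: h1(t) * t + s0
    h4 = lambda t: h3(t) * t + s0
    a, b = 0.0, s
    terms = [a]
    for _ in range(n_terms):
        p = psi(a, b, h2(a), h4(a)) * (b - a)
        q = p + psi0(h2(a), h4(a))
        if q >= 1.0:
            break  # condition (7) violated
        c = b + p / (1.0 - q)
        lam = (1.0 + psi0(c, b)) * (c - b) + p
        a_next = c + lam / (1.0 - q)
        mu = ((1.0 + psi0(a, a_next)) * (a_next - a)
              + (1.0 + psi0(h2(a), h4(a))) * (b - a))
        denom = 1.0 - psi0(h2(a_next), h4(a_next))
        if denom <= 0.0:
            break
        b = a_next + mu / denom
        a = a_next
        terms.append(a)
    return terms

# Hypothetical illustration (these psi and h choices are made up):
seq = majorizing_sequence(lambda s_, t: 0.3 * (s_ + t),
                          lambda t1, t2, t3, t4: 0.2 * (t1 + t2 + t3 + t4),
                          lambda t: 1.0, lambda t: 1.0, s=0.1, s0=0.05)
```

By construction, the computed terms are non-negative and non-decreasing as long as the guards hold, which mirrors the monotonicity claim of Lemma 1.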
The notation U(x, α) and U[x, α] stands for the open and closed balls, respectively, in B with center x ∈ B and radius α > 0.

3. Semi-Local Convergence

The common conditions connected to the parameters and functions of the previous section are described below.
  • Suppose the following:
( H 1 )
There exists x₀ ∈ D such that x₀ + F(x₀) ∈ D, x₀ − F(x₀) ∈ D, [u₀, v₀; F]⁻¹ ∈ L(B, B), ‖[u₀, v₀; F]⁻¹ F(x₀)‖ ≤ s, and ‖F(x₀)‖ ≤ s₀.
( H 2 )
‖F′(x₀)⁻¹([u₁, u₂; F] − F′(x₀))‖ ≤ ψ₀(‖u₁ − x₀‖, ‖u₂ − x₀‖),
‖I − [u₁, x₀; F]‖ ≤ h₁(‖u₁ − x₀‖),
‖I + [u₂, x₀; F]‖ ≤ h₃(‖u₂ − x₀‖) for all u₁, u₂ ∈ D.
Set D₀ = D ∩ U(x₀, ρ₀).
( H 3 )
‖F′(x₀)⁻¹([u₃, u₄; F] − [u₅, u₆; F])‖ ≤ ψ(‖u₃ − x₀‖, ‖u₄ − x₀‖, ‖u₅ − x₀‖, ‖u₆ − x₀‖) for all u₃, u₄, u₅, u₆ ∈ D₀.
( H 4 )
Condition (7) holds.
( H 5 )
U[x₀, a₁*] ⊆ D, where a₁* = max{a*, h₂(a*), h₄(a*)}.
  • Next, the semi-local convergence is presented for method (2).
Theorem 1.
Suppose that conditions (H1)–(H5) hold. Then, the sequence {x_n} starting at x₀ ∈ D and generated by method (2) (or method (3)) is well defined, remains in U(x₀, a*), and converges to some x* ∈ U[x₀, a*] satisfying F(x*) = 0. Moreover, the following assertion holds for all n ≥ 0:

  ‖x* − x_n‖ ≤ a* − a_n.   (8)
Proof. 
Mathematical induction is employed to show that

  ‖y_m − x_m‖ ≤ b_m − a_m,   (9)
  ‖z_m − y_m‖ ≤ c_m − b_m,   (10)

and

  ‖x_{m+1} − z_m‖ ≤ a_{m+1} − c_m.   (11)

Estimate (9) holds if m = 0, since according to condition (H1),

  ‖y₀ − x₀‖ = ‖[u₀, v₀; F]⁻¹ F(x₀)‖ ≤ s = b₀ − a₀ < a*,

and the iterate y₀ ∈ U(x₀, a*). We need the following estimates:

  ‖v_m − x₀‖ = ‖x_m − x₀ − (F(x_m) − F(x₀)) − F(x₀)‖
    = ‖(I − [x_m, x₀; F])(x_m − x₀) − F(x₀)‖
    ≤ ‖I − [x_m, x₀; F]‖ ‖x_m − x₀‖ + ‖F(x₀)‖
    ≤ h₁(‖x_m − x₀‖)‖x_m − x₀‖ + s₀ = h₂(‖x_m − x₀‖) ≤ a₁*.

Similarly, we obtain

  ‖u_m − x₀‖ ≤ ‖I + [x_m, x₀; F]‖ ‖x_m − x₀‖ + ‖F(x₀)‖ ≤ h₃(‖x_m − x₀‖)‖x_m − x₀‖ + s₀ = h₄(‖x_m − x₀‖) ≤ a₁*.
Hence, u_m, v_m ∈ U[x₀, a₁*] ⊆ D. By applying condition (H2), we obtain

  ‖F′(x₀)⁻¹([u_m, v_m; F] − F′(x₀))‖ ≤ ψ₀(‖u_m − x₀‖, ‖v_m − x₀‖) ≤ ψ₀(h₂(a_m), h₄(a_m)) < 1.   (12)

Then, the perturbation lemma on linear operators attributed to Banach [4] asserts that [u_m, v_m; F]⁻¹ ∈ L(B, B) and

  ‖[u_m, v_m; F]⁻¹ F′(x₀)‖ ≤ 1/(1 − ψ₀(h₂(a_m), h₄(a_m))).   (13)
Similarly, we have

  ‖F′(x₀)⁻¹(M_m − F′(x₀))‖ ≤ ‖F′(x₀)⁻¹([y_m, x_m; F] − [u_m, v_m; F])‖ + ‖F′(x₀)⁻¹([y_m, x_m; F] − F′(x₀))‖
    ≤ ψ(‖x_m − x₀‖, ‖y_m − x₀‖, ‖u_m − x₀‖, ‖v_m − x₀‖) + ψ₀(‖x_m − x₀‖, ‖y_m − x₀‖) = q̄_m ≤ q_m.   (14)
Moreover, according to the first sub-step of method (2), we can write

  F(y_m) = F(y_m) − F(x_m) − [u_m, v_m; F](y_m − x_m) = ([y_m, x_m; F] − [u_m, v_m; F])(y_m − x_m),

so that

  ‖F′(x₀)⁻¹ F(y_m)‖ ≤ ψ(‖x_m − x₀‖, ‖y_m − x₀‖, ‖u_m − x₀‖, ‖v_m − x₀‖) ‖y_m − x_m‖ = p̄_m ≤ p_m.   (15)

Consequently, with the second sub-step, we have

  ‖z_m − y_m‖ ≤ p̄_m/(1 − q̄_m) ≤ p_m/(1 − q_m) = c_m − b_m

and

  ‖z_m − x₀‖ ≤ ‖z_m − y_m‖ + ‖y_m − x₀‖ ≤ c_m − b_m + b_m = c_m < a*.

That is, (10) holds and the iterate z_m ∈ U(x₀, a*).
  • In view of the identity F(z_m) = F(z_m) − F(y_m) + F(y_m) = [z_m, y_m; F](z_m − y_m) + F(y_m), (15), and (H2), we obtain

  ‖F′(x₀)⁻¹ F(z_m)‖ ≤ (1 + ψ₀(‖z_m − x₀‖, ‖y_m − x₀‖)) ‖z_m − y_m‖ + p̄_m = λ̄_m ≤ λ_m.   (16)

Therefore, with the third sub-step of method (2) and (16), we obtain

  ‖x_{m+1} − z_m‖ ≤ ‖M_m⁻¹ F′(x₀)‖ ‖F′(x₀)⁻¹ F(z_m)‖ ≤ λ̄_m/(1 − q̄_m) ≤ λ_m/(1 − q_m) = a_{m+1} − c_m

and

  ‖x_{m+1} − x₀‖ ≤ ‖x_{m+1} − z_m‖ + ‖z_m − x₀‖ ≤ a_{m+1} − c_m + c_m = a_{m+1} < a*.

It thus follows that estimate (11) holds, and that the iterate x_{m+1} ∈ U(x₀, a*).
  • Moreover, the first sub-step of method (2) gives

  F(x_{m+1}) = F(x_{m+1}) − F(x_m) − [u_m, v_m; F](y_m − x_m) = [x_{m+1}, x_m; F](x_{m+1} − x_m) − [u_m, v_m; F](y_m − x_m),

hence,

  ‖F′(x₀)⁻¹ F(x_{m+1})‖ ≤ (1 + ψ₀(‖x_{m+1} − x₀‖, ‖x_m − x₀‖)) ‖x_{m+1} − x_m‖ + (1 + ψ₀(‖u_m − x₀‖, ‖v_m − x₀‖)) ‖y_m − x_m‖ = μ̄_{m+1} ≤ μ_{m+1}.   (17)

Furthermore, with (17) and (13), we obtain

  ‖y_{m+1} − x_{m+1}‖ ≤ ‖[u_{m+1}, v_{m+1}; F]⁻¹ F′(x₀)‖ ‖F′(x₀)⁻¹ F(x_{m+1})‖
    ≤ μ̄_{m+1}/(1 − ψ₀(h₂(a_{m+1}), h₄(a_{m+1}))) ≤ μ_{m+1}/(1 − ψ₀(h₂(a_{m+1}), h₄(a_{m+1}))) = b_{m+1} − a_{m+1}

and

  ‖y_{m+1} − x₀‖ ≤ ‖y_{m+1} − x_{m+1}‖ + ‖x_{m+1} − x₀‖ ≤ b_{m+1} − a_{m+1} + a_{m+1} = b_{m+1} < a*.
  • Thus, the induction for estimates (9)–(11) is finished.
  • According to Lemma 1, the sequence {a_m} is convergent and hence Cauchy. It then follows from estimates (9)–(11) that the sequence {x_m} is also Cauchy in the Banach space B, and therefore convergent to some x* ∈ U[x₀, a*]. Then, by letting m → ∞ in (17) and using the continuity of the operator F, we deduce that F(x*) = 0. Finally, from the estimate

  ‖x_{m+j} − x_m‖ ≤ a_{m+j} − a_m for all j = 1, 2, …,

we obtain estimate (8) by letting j → ∞. □
The uniqueness of the solution is discussed in the next result.
Proposition 1.
Suppose the following:
(1) 
There exists a solution ξ ∈ U(x₀, ρ₁) of the equation F(x) = 0 for some ρ₁ > 0.
(2) 
Condition ( H 2 ) holds on the ball U ( x 0 , ρ 1 ) .
(3) 
There exists ρ₂ ≥ ρ₁ such that

  ψ₀(ρ₁, ρ₂) < 1.   (18)

Set D₂ = D ∩ U[x₀, ρ₂].
Then, the equation F(x) = 0 is uniquely solvable by ξ in the region D₂.
Proof. 
Let ξ₁ ∈ D₂ with F(ξ₁) = 0. Define the linear operator G = [ξ, ξ₁; F] = [ξ + F(ξ), ξ₁ + F(ξ₁); F] (since F(ξ) = F(ξ₁) = 0). It then follows from (H2) and (18) that

  ‖F′(x₀)⁻¹(G − F′(x₀))‖ ≤ ψ₀(‖ξ − x₀‖, ‖ξ₁ − x₀‖) ≤ ψ₀(ρ₁, ρ₂) < 1.

Hence, the linear operator G is invertible, and since G(ξ − ξ₁) = F(ξ) − F(ξ₁) = 0 − 0 = 0, we conclude that ξ = ξ₁. □
Remark 2.
(1) 
If all the conditions of Theorem 1 hold, then we can set ρ 1 = a * .
(2) 
The parameter ρ 0 given in closed form can replace a * in condition ( H 5 ).
Concerning the proof of Theorem 1, when method (3) is used instead, we notice that

  ‖z_m − y_m‖ = ‖(I + 2[u_m, v_m; F]⁻¹([u_m, v_m; F] − [y_m, x_m; F]))[u_m, v_m; F]⁻¹ F(y_m)‖
    ≤ (1/(1 − ψ₀(‖u_m − x₀‖, ‖v_m − x₀‖)))(1 + 2ψ(‖x_m − x₀‖, ‖y_m − x₀‖, ‖u_m − x₀‖, ‖v_m − x₀‖)/(1 − ψ₀(‖u_m − x₀‖, ‖v_m − x₀‖))) p̄_m
    = σ̄_m p̄_m ≤ c_m − b_m

and similarly,

  ‖x_{m+1} − z_m‖ ≤ σ̄_m λ̄_m ≤ σ_m λ_m = a_{m+1} − c_m,

whereas ‖y_{m+1} − x_{m+1}‖ ≤ b_{m+1} − a_{m+1} is obtained as in method (2).
  • The rest is omitted as it is identical to the proof of Theorem 1.

4. Numerical Examples

Example 1.
Let B = R³ and D = U[0, 1]. Define the function F on D, for x = (x₁, x₂, x₃)ᵀ, by

  F(x) = F(x₁, x₂, x₃) = (e^{x₁} − 1, ((e − 1)/2) x₂² + x₂, x₃)ᵀ.

We have x* = (0, 0, 0)ᵀ. The divided difference is given by [x, y; F] = ∫₀¹ F′(y + θ(x − y)) dθ for all x, y ∈ D with x ≠ y. Moreover, by this definition, it follows that for x₀ = (0.1, 0.1, 0.1)ᵀ, the (H) conditions are satisfied provided that ψ₀(s, t) = ((e − 1)/2)(s + t), ψ(s, t) = (e/2)(s + t), and τ = ρ₀ = 1/(e − 1). The iterates are as given in Table 1. We can see that methods (2) and (3) both converge to x*, and that method SM6 is faster in this example.
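For this particular F, each component depends on a single variable, so the integral divided difference is a diagonal matrix with closed-form entries. A double-precision sketch of GM6 (2) on this example follows; the tolerance and iteration cap are our own choices:

```python
import numpy as np

E = np.e

def F(x):
    return np.array([np.exp(x[0]) - 1.0,
                     0.5 * (E - 1.0) * x[1] ** 2 + x[1],
                     x[2]])

def dd(x, y):
    """Integral divided difference [x, y; F] for this separable F: the
    matrix is diagonal and each 1-D divided difference has a closed form."""
    d1 = np.exp(x[0]) if x[0] == y[0] else (np.exp(x[0]) - np.exp(y[0])) / (x[0] - y[0])
    d2 = 0.5 * (E - 1.0) * (x[1] + y[1]) + 1.0   # exact for the quadratic component
    d3 = 1.0                                      # third component is linear
    return np.diag([d1, d2, d3])

def gm6(x, tol=1e-12, max_iter=10):
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        u, v = x + Fx, x - Fx
        A = dd(u, v)
        y = x - np.linalg.solve(A, Fx)
        M = 2.0 * dd(y, x) - A
        z = y - np.linalg.solve(M, F(y))
        x = z - np.linalg.solve(M, F(z))
    return x

x_star = gm6(np.array([0.1, 0.1, 0.1]))
```

The computed iterate lands on x* = (0, 0, 0)ᵀ to machine accuracy, in line with Table 1.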
Example 2.
Let B = R⁵ and D = U[0, 1], and consider the system of five equations defined by

  ∑_{j=1, j≠i}^{5} x_j − e^{−x_i} = 0,  1 ≤ i ≤ 5,

whose solution is x* = (0.20388835470224016, …, 0.20388835470224016)ᵀ (all five components equal). Choose x₀ = (0.3, 0.3, 0.3, 0.3, 0.3)ᵀ. The divided difference is given by [x, y; F] = ∫₀¹ F′(y + θ(x − y)) dθ for all x, y ∈ D with x ≠ y. The error estimates are as given in Table 2. Hence, we can conclude that methods (2) and (3) converge to x*.
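The methods carry over to systems once a divided-difference operator is chosen. The sketch below runs GM6 (2) on this system in double precision, using the standard component-wise first-order divided difference as a stand-in for the integral form of the text; this substitution, the solver loop, and the tolerance are our own choices:

```python
import numpy as np

def F(x):
    s = x.sum()
    return np.array([s - x[i] - np.exp(-x[i]) for i in range(x.size)])

def divided_difference(F, u, v):
    """Component-wise first-order divided difference [u, v; F];
    requires u[j] != v[j] for every j."""
    n = u.size
    M = np.empty((n, n))
    for j in range(n):
        a = np.concatenate((v[:j + 1], u[j + 1:]))
        b = np.concatenate((v[:j], u[j:]))
        M[:, j] = (F(a) - F(b)) / (v[j] - u[j])
    return M

def gm6(F, x, tol=1e-10, max_iter=10):
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        u, v = x + Fx, x - Fx
        A = divided_difference(F, u, v)
        y = x - np.linalg.solve(A, Fx)
        M = 2.0 * divided_difference(F, y, x) - A
        z = y - np.linalg.solve(M, F(y))
        x = z - np.linalg.solve(M, F(z))
    return x

x_sol = gm6(F, np.full(5, 0.3))
```

Starting from the symmetric point x₀, all five components stay equal along the iteration, and the run converges to the value quoted above within a handful of steps.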
Example 3.
Let B = (R², ‖·‖_∞). Let us consider the following non-linear system:

  y² + |x − 1| − 1/4 = 0,
  x + |y| − 3/2 = 0.

Here, ‖u‖_∞ = max{|u₁|, |u₂|} for u = (u₁, u₂) ∈ R². The operator F = (F₁, F₂) associated with this system is defined by

  F₁(u₁, u₂) = u₂² + |u₁ − 1| − 1/4 and F₂(u₁, u₂) = u₁ + |u₂| − 3/2.

Clearly, the operator F is not differentiable, so methods containing the derivative cannot be used to solve this system [1,6]. We then use [u, v; F] ∈ M₂ₓ₂(R), the set of 2 × 2 real matrices, defined by

  [u, v; F]_{k,1} = (F_k(v₁, v₂) − F_k(u₁, v₂))/(v₁ − u₁),
  [u, v; F]_{k,2} = (F_k(u₁, v₂) − F_k(u₁, u₂))/(v₂ − u₂),

provided that v₁ ≠ u₁ and v₂ ≠ u₂. We subsequently pick the initial point (0.8, 0.3). Then, after two iterations, both methods (2) and (3) give the solution x* = (1, 1/2)ᵀ.
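Under the same caveats as before (double precision rather than the precision used for the tables; the stopping rule and iteration cap are our own choices), this non-differentiable system and its divided-difference matrix can be sketched as:

```python
import numpy as np

def F(w):
    x, y = w
    return np.array([y ** 2 + abs(x - 1.0) - 0.25,
                     x + abs(y) - 1.5])

def dd(u, v):
    """The 2x2 divided-difference matrix defined in the text
    (requires v1 != u1 and v2 != u2)."""
    M = np.empty((2, 2))
    for k in range(2):
        Fk = lambda a, b: F(np.array([a, b]))[k]
        M[k, 0] = (Fk(v[0], v[1]) - Fk(u[0], v[1])) / (v[0] - u[0])
        M[k, 1] = (Fk(u[0], v[1]) - Fk(u[0], u[1])) / (v[1] - u[1])
    return M

def gm6(x, max_iter=5):
    for _ in range(max_iter):
        Fx = F(x)
        # Stop once a residual component (nearly) vanishes: the divided
        # differences are undefined when u_n and v_n share a coordinate.
        if np.min(np.abs(Fx)) < 1e-12:
            break
        u, v = x + Fx, x - Fx
        A = dd(u, v)
        y = x - np.linalg.solve(A, Fx)
        M = 2.0 * dd(y, x) - A
        z = y - np.linalg.solve(M, F(y))
        x = z - np.linalg.solve(M, F(z))
    return x

x_approx = gm6(np.array([0.8, 0.3]))
```

In plain double precision the iterates approach (1, 1/2) even though F is not differentiable there; reproducing the two-iteration accuracy reported in the text would require higher-precision arithmetic.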
Example 4.
Consider the non-linear equation F(x) = x³ − 20 defined on B = R, where D = U[2, 1]. We obtain x* = 2.714417616594906572… Take x₀ = 2.7. The divided difference is given by [x, y; F] = ∫₀¹ F′(y + θ(x − y)) dθ for all x, y ∈ D with x ≠ y. The error estimates are as given in Table 3. Hence, we can conclude that methods (2) and (3) converge to x*.

5. Discussion

The semi-local convergence analysis of methods of high convergence order has also been discussed in [1]. However, this was only performed for two-step methods, under hypotheses involving F″ and using the method of recurrent functions. In the present paper, we dealt with three-step methods under weaker hypotheses, involving only F′, and under more general conditions. In addition, it must be noted that the second Fréchet derivative used in the analysis of [1] does not appear in the methods of the present paper, nor in the methods studied in [1]. Hence, we have extended the applicability of the two-step methods in [1] as well as that of the three-step methods of the present paper. It is also worth noting that the technique developed here may be applied to the solution of high-order boundary value problems of the form given in [14,15,16,17].

6. Conclusions

A comparison was made between the convergence balls of two derivative-free equation solvers that are similar in efficiency. The ball convergence of G M 6 and S M 6 solely required a generalized Lipschitz continuity. Finally, our analytical conclusions were validated against real-world application challenges. Our technique was demonstrated with the use of methods (2) and (3) as examples. However, the technique does not really depend on these methods. Therefore, it can be used on other single and multi-step methods using inverses of divided differences of order one or derivatives along the same lines [6,8,9,12]. This will be the topic of our future research.

Author Contributions

Conceptualization, I.K.A., C.I.A., J.A.J. and J.J.; methodology, I.K.A., C.I.A., J.A.J. and J.J.; software, I.K.A., C.I.A., J.A.J. and J.J.; validation, I.K.A., C.I.A., J.A.J. and J.J.; formal analysis, I.K.A., C.I.A., J.A.J. and J.J.; investigation, I.K.A., C.I.A., J.A.J. and J.J.; resources, I.K.A., C.I.A., J.A.J. and J.J.; data curation, I.K.A., C.I.A., J.A.J. and J.J.; writing—original draft preparation, I.K.A., C.I.A., J.A.J. and J.J.; writing—review and editing, I.K.A., C.I.A., J.A.J. and J.J.; visualization, I.K.A., C.I.A., J.A.J. and J.J.; supervision, I.K.A., C.I.A., J.A.J. and J.J.; project administration, I.K.A., C.I.A., J.A.J. and J.J.; funding acquisition, I.K.A., C.I.A., J.A.J. and J.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:

L(U, V)  Set of bounded linear operators from U into V
{a_n}   Scalar (majorizing) sequence

References

  1. Argyros, I.; Khattri, S. A unifying semi-local analysis for iterative algorithms of high convergence order. J. Nonlinear Anal. Optim. Theory Appl. 2013, 4, 85–103.
  2. Argyros, I.K. Unified convergence criteria for iterative Banach space valued methods with applications. Mathematics 2021, 9, 1942.
  3. Argyros, I.K. The Theory and Applications of Iteration Methods; CRC Press: Boca Raton, FL, USA, 2022.
  4. Kantorovich, L.V.; Akilov, G.P. Functional Analysis in Normed Spaces; Pergamon Press: Oxford, UK, 1964.
  5. Magreñán, Á.A.; Gutiérrez, J.M. Real dynamics for damped Newton’s method applied to cubic polynomials. J. Comput. Appl. Math. 2015, 275, 527–538.
  6. Behl, R.; Bhalla, S.; Magreñán, Á.A.; Moysi, A. An Optimal Derivative Free Family of Chebyshev–Halley’s Method for Multiple Zeros. Mathematics 2021, 9, 546.
  7. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton–Jarratt’s composition. Numer. Algorithms 2010, 55, 87–99.
  8. Ezquerro, J.; Hernández, M.; Romero, N.; Velasco, A. On Steffensen’s method on Banach spaces. J. Comput. Appl. Math. 2013, 249, 9–23.
  9. Sharma, J.R.; Arora, H. An efficient derivative free iterative method for solving systems of nonlinear equations. Appl. Anal. Discret. Math. 2013, 7, 390–403.
  10. Steffensen, J. Remarks on iteration. Scand. Actuar. J. 1933, 1933, 64–72.
  11. Grau-Sánchez, M.; Noguera, M.; Amat, S. On the approximation of derivatives using divided difference operators preserving the local convergence order of iterative methods. J. Comput. Appl. Math. 2013, 237, 363–372.
  12. Hernández, M.; Rubio, M. A uniparametric family of iterative processes for solving nondifferentiable equations. J. Math. Anal. Appl. 2002, 275, 821–834.
  13. Liu, Z.; Zheng, Q.; Zhao, P. A variant of Steffensen’s method of fourth-order convergence and its applications. Appl. Math. Comput. 2010, 216, 1978–1983.
  14. El-Gamel, M.; Adel, W.; El-Azab, M. Bernoulli polynomial and the numerical solution of high-order boundary value problems. Math. Nat. Sci. 2019, 4, 45–59.
  15. Adel, W.; Sabir, Z. Solving a new design of nonlinear second-order Lane–Emden pantograph delay differential model via Bernoulli collocation method. Eur. Phys. J. Plus 2020, 135, 427.
  16. Adel, W. A fast and efficient scheme for solving a class of nonlinear Lienard’s equations. Math. Sci. 2020, 14, 167–175.
  17. Zahra, W.; Ouf, W.; El-Azab, M. A robust uniform B-spline collocation method for solving the generalized PHI-four equation. Appl. Appl. Math. Int. J. 2016, 11, 24.
Table 1. Error estimates for Example 1.

Method            ‖x₁ − x*‖         ‖x₂ − x*‖         ‖x₃ − x*‖          ‖x₄ − x*‖
Method (2) (GM6)  3.333 × 10⁻⁵      4.85711 × 10⁻⁸    4.65495 × 10⁻²⁵    3.60678 × 10⁻¹²⁷
Method (3) (SM6)  2.65222 × 10⁻⁵    9.93835 × 10⁻⁹    2.75132 × 10⁻²⁹    1.23853 × 10⁻¹⁵²
Table 2. Error estimates for Example 2.

Method            ‖x₁ − x*‖         ‖x₂ − x*‖         ‖x₃ − x*‖          ‖x₄ − x*‖
Method (2) (GM6)  7.4587996 × 10⁻³  1.67162 × 10⁻⁹    2.11783 × 10⁻⁴⁹    8.75714 × 10⁻²⁸⁹
Method (3) (SM6)  5.99747 × 10⁻³    3.6518 × 10⁻¹⁰    1.86099 × 10⁻⁵³    3.25959 × 10⁻³¹³
Table 3. Error estimates for Example 4.

Method            ‖x₁ − x*‖         ‖x₂ − x*‖         ‖x₃ − x*‖          ‖x₄ − x*‖
Method (2) (GM6)  4.62347 × 10⁻³    5.09219 × 10⁻⁶    9.08917 × 10⁻²⁴    2.93927 × 10⁻¹³⁰
Method (3) (SM6)  3.65222 × 10⁻³    1.01023 × 10⁻⁶    4.524756 × 10⁻²⁸   3.65299 × 10⁻¹⁵⁶
