Article

On the Semi-Local Convergence of Two Competing Sixth Order Methods for Equations in Banach Space

1 Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
2 Department of Theory of Optimal Processes, Ivan Franko National University of Lviv, Universytetska Str. 1, 79000 Lviv, Ukraine
3 Department of Mathematics, University of Houston, Houston, TX 77204, USA
4 Department of Computational Mathematics, Ivan Franko National University of Lviv, Universytetska Str. 1, 79000 Lviv, Ukraine
* Author to whom correspondence should be addressed.
Algorithms 2023, 16(1), 2; https://doi.org/10.3390/a16010002
Submission received: 20 November 2022 / Revised: 14 December 2022 / Accepted: 19 December 2022 / Published: 20 December 2022
(This article belongs to the Collection Feature Papers in Algorithms)

Abstract
A plethora of methods are used for solving equations in the finite-dimensional Euclidean space. Higher-order derivatives are often utilized to establish the local convergence order of such methods. However, these derivatives do not appear in the methods themselves. Moreover, no bounds on the error and no uniqueness information for the solution are given either. Thus, the applicability of these methods is restricted to equations with operators that are sufficiently many times differentiable. These limitations motivate us to write this paper. In particular, we present the more interesting semi-local convergence analysis, not given previously, for two sixth-order methods that are run under the same set of conditions. The technique is based only on the first derivative, which is the only derivative appearing in the methods. This way, the methods become applicable to a wider class of equations in the more general setting of Banach space-valued operators. Hence, the applicability of these methods is extended. This is the novelty of the paper. The same technique can be used on other methods. Finally, examples are used to test the convergence of the methods.

1. Introduction

Let us consider a Fréchet differentiable operator $F : \Omega \subset X \to Y$, where $X$, $Y$ are Banach spaces and $\Omega$ ($\neq \emptyset$) is a convex and open set. In computational sciences and other related fields, equations of the type
$$F(x) = 0 \tag{1}$$
are regularly used to address numerous complicated problems. It is important to realize that obtaining the solutions of these equations is a challenging problem, since the solutions can be found analytically only in a limited number of cases. Therefore, iterative procedures are often developed to solve these equations. However, it is a difficult task to create an effective iterative strategy for dealing with Equation (1). The popular Newton's method is widely used to solve this equation. In order to increase the convergence order, modifications of methods such as Chebyshev's, Jarratt's, etc., have been developed.
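As a baseline, a Newton step for (1) in the finite-dimensional case can be sketched as follows (a minimal NumPy illustration; the function names are ours, and solving a linear system replaces the formal inverse):

```python
import numpy as np

def newton(F, J, x0, tol=1e-10, max_iter=50):
    """Newton's method for F(x) = 0: x_{n+1} = x_n - F'(x_n)^{-1} F(x_n)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = F(x)
        if np.linalg.norm(fx) <= tol:
            break
        # Solve F'(x) d = F(x) instead of forming the inverse explicitly.
        x = x - np.linalg.solve(J(x), fx)
    return x

# Scalar example written as a 1-D system: F(x) = x^2 - 2, root sqrt(2).
root = newton(lambda x: np.array([x[0]**2 - 2.0]),
              lambda x: np.array([[2.0 * x[0]]]),
              [1.0])
```

The higher-order methods below reuse exactly this building block, $F'(x_n)^{-1} F(x_n)$, in their substeps.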
Various higher-order iterative methods for computing a solution of (1) have been provided in [1,2,3]. These methods are based on Newton-like methods [2,3,4,5,6,7,8,9,10]. In [11], two cubically convergent iterative procedures were designed by Cordero and Torregrosa. Another third-order convergent method, based on two evaluations of $F$, one of $F'$, and one matrix inversion per iteration, was presented by Darvishi and Barati [5], who also suggested methods of convergence order four. Sharma et al. [12] composed two weighted-Newton steps to generate an efficient fourth-order weighted-Newton method for nonlinear systems. In addition, fourth- and sixth-order convergent iterative algorithms were developed by Sharma and Arora [13] to solve nonlinear systems.
The main objective of this article is to extend the applicability of the sixth-order convergence methods that we have selected from [13,14], respectively. These methods are:
$$
\begin{aligned}
y_n &= x_n - \alpha F'(x_n)^{-1} F(x_n), \\
z_n &= x_n - \left[ \tfrac{23}{8} I - 3 F'(x_n)^{-1} F'(y_n) + \tfrac{9}{8} \left( F'(x_n)^{-1} F'(y_n) \right)^2 \right] F'(x_n)^{-1} F(x_n), \\
x_{n+1} &= z_n - \left[ \tfrac{5}{2} I - \tfrac{3}{2} F'(x_n)^{-1} F'(y_n) \right] F'(x_n)^{-1} F(z_n)
\end{aligned} \tag{2}
$$
and
$$
\begin{aligned}
y_n &= x_n - \alpha F'(x_n)^{-1} F(x_n), \\
z_n &= x_n - \left[ I + \tfrac{21}{8} F'(x_n)^{-1} F'(y_n) - \tfrac{9}{2} \left( F'(x_n)^{-1} F'(y_n) \right)^2 + \tfrac{15}{8} \left( F'(x_n)^{-1} F'(y_n) \right)^3 \right] F'(x_n)^{-1} F(x_n), \\
x_{n+1} &= z_n - \left[ 3 I - \tfrac{5}{2} F'(x_n)^{-1} F'(y_n) + \left( \tfrac{1}{2} F'(x_n)^{-1} F'(y_n) \right)^2 \right] F'(x_n)^{-1} F(z_n),
\end{aligned} \tag{3}
$$
respectively. If $\alpha = \tfrac{2}{3}$, methods (2) and (3) reduce to the methods designed in [12,14], respectively. The motivation and the benefits of using these methods are well explained in [13,14]. These methods require two derivative evaluations, one inverse, and two operator evaluations per iteration. The convergence analysis was given in the special case when $X = Y = \mathbb{R}^i$. The local convergence of these methods was shown with the application of expensive Taylor expansions. Moreover, the existence of derivatives up to order seven was assumed, although these derivatives do not appear in the methods. This approach reduces their applicability.
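To make the structure of these methods concrete, the following sketch implements method (2) for a finite-dimensional system (our own NumPy reconstruction, assuming $X = Y = \mathbb{R}^m$; the function names, the test system, and the solver scaffolding are ours):

```python
import numpy as np

def method2(F, dF, x0, alpha=2/3, tol=1e-10, max_iter=100):
    """Sketch of method (2): per iteration, two derivative evaluations
    F'(x_n), F'(y_n), one inverse, and two operator evaluations F(x_n), F(z_n)."""
    x = np.asarray(x0, dtype=float)
    I = np.eye(x.size)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            break
        Jinv = np.linalg.inv(dF(x))          # F'(x_n)^{-1}, reused in all substeps
        y = x - alpha * (Jinv @ Fx)
        A = Jinv @ dF(y)                     # A_n = F'(x_n)^{-1} F'(y_n)
        z = x - (23/8 * I - 3 * A + 9/8 * A @ A) @ (Jinv @ Fx)
        x = z - (5/2 * I - 3/2 * A) @ (Jinv @ F(z))
    return x

# Hypothetical test system: F(x) = (x1^2 + x2^2 - 4, x1 - x2), root (sqrt 2, sqrt 2).
sol = method2(lambda v: np.array([v[0]**2 + v[1]**2 - 4, v[0] - v[1]]),
              lambda v: np.array([[2*v[0], 2*v[1]], [1.0, -1.0]]),
              [1.5, 1.0])
```

Method (3) differs only in the bracketed polynomials in $A_n$ applied in the second and third substeps.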
Motivation for writing this paper. Let us look at the following function to illustrate this point. With $X = Y = \mathbb{R}$, define $F$ on $\Omega = [-0.5, 1.5]$ by
$$
F(t) =
\begin{cases}
0, & \text{if } t = 0, \\
t^3 \ln t^2 + t^5 - t^4, & \text{if } t \neq 0.
\end{cases}
$$
Then, the third derivative $F'''$ is unbounded on $\Omega$, which makes convergence results requiring higher-order derivatives ineffective for methods (2) and (3) applied to this function. Notice also that the results in [15,16] cannot be used to solve equations with operators that are not at least seven times differentiable, although these methods may converge. Moreover, existing results provide little information regarding the bounds of the error, the domain of convergence, or the location of the solution.
Novelty of the paper. The new approach addresses these concerns in the more general setting of Banach spaces. Moreover, we use conditions only on the derivative $F'$, which appears in these methods. Furthermore, we investigate the convergence in detail in order to determine convergence radii and computable error bounds, and to locate a region in which $x^*$ is the only solution. Another benefit of this analysis is that it simplifies the difficult task of selecting the initial point $x_0$. Consequently, we are motivated to investigate and compare the semi-local convergence of (2) and (3) (not given in [15,16]) under an identical set of conditions. Additionally, error estimates on $\|x_n - x^*\|$ and convergence theorems are provided, and the uniqueness of the solution in a ball is discussed.
Future work. The methods mentioned previously can also be extended with our technique along the same lines. These methods can be used to solve equations such as those in the related works [15,16,17].
The rest of this article is organized as follows: Section 2 contains results on majorizing sequences. Section 3 gives the semi-local convergence of the methods. Section 4 and Section 5 contain numerical examples and conclusions, respectively.

2. Majorizing Sequences

The real sequences defined in this section shall be shown to be majorizing for method (2) and method (3) in the next section.
Let $t_0 = 0$ and $s_0 = |\alpha| \xi$ for some $\xi \geq 0$, and let $w_0 : [0, \infty) \to [0, \infty)$ and $w : [0, \infty) \to [0, \infty)$ be continuous and nondecreasing functions. Define the sequence $\{t_n\}$ for all $n = 0, 1, 2, \ldots$ by
$$
\begin{aligned}
b_n &= w_0(t_n) + w_0(s_n) \quad \text{or} \quad w(s_n - t_n), \\
\gamma_n &= \frac{b_n}{1 - w_0(t_n)}, \\
u_n &= s_n + \frac{1}{8 |\alpha|} \left( 8 |\alpha - 1| + 6 \gamma_n + 9 \gamma_n^2 \right) (s_n - t_n), \\
\delta_n &= \left( 1 + \int_0^1 w_0(t_n + \theta (u_n - t_n)) \, d\theta \right) (u_n - t_n) + \frac{1}{|\alpha|} \left( 1 + w_0(t_n) \right) (s_n - t_n), \\
t_{n+1} &= u_n + \frac{\left( 1 + \tfrac{3}{2} \gamma_n \right) \delta_n}{1 - w_0(t_n)}, \\
p_{n+1} &= \left( 1 + \int_0^1 w_0(t_n + \theta (t_{n+1} - t_n)) \, d\theta \right) (t_{n+1} - t_n) + \frac{1}{|\alpha|} (1 + w_0(t_n)) (s_n - t_n) \\
&\quad \text{or} \quad \int_0^1 w((1 - \theta)(t_{n+1} - t_n)) \, d\theta \, (t_{n+1} - t_n) + \frac{1}{|\alpha|} (1 + w_0(t_n)) (s_n - t_n) + (1 + w_0(t_n)) (t_{n+1} - t_n),
\end{aligned}
$$
and
$$
s_{n+1} = t_{n+1} + \frac{|\alpha| \, p_{n+1}}{1 - w_0(t_{n+1})}. \tag{5}
$$
We use the same notation for the second sequence, defined by
$$
\begin{aligned}
u_n &= s_n + \frac{1}{8 |\alpha|} \left( 8 |1 - \alpha| + 6 \gamma_n + 9 \gamma_n^2 + 15 \gamma_n^3 \right) (s_n - t_n), \\
t_{n+1} &= u_n + \frac{\tfrac{1}{4} \left( 3 + 8 \gamma_n + \gamma_n^2 \right) \delta_n}{1 - w_0(t_n)},
\end{aligned}
$$
and
$$
s_{n+1} = t_{n+1} + \frac{|\alpha| \, p_{n+1}}{1 - w_0(t_{n+1})}, \tag{6}
$$
with $b_n$, $\gamma_n$, $\delta_n$, and $p_{n+1}$ as in (5).
Next, common convergence criteria are developed for these sequences.
Lemma 1.
Suppose that the sequence $\{t_n\}$ generated by Formula (5) or Formula (6) satisfies
$$w_0(t_n) < 1 \tag{7}$$
and
$$t_n < \tau \tag{8}$$
for some parameter $\tau > 0$ and all $n = 0, 1, 2, \ldots$. Then, the sequence is bounded from above by $\tau$, nondecreasing, and convergent to some $t^* \in [0, \tau]$.
Proof. 
It follows from Formulas (5) and (6) and the conditions (7) and (8) that the conclusions of Lemma 1 hold. In particular, the limit point $t^*$ is the unique least upper bound of the sequence.
Notice that $\tau$ and $t^*$ do not have to be the same for each sequence. □
If the function $w_0$ is strictly increasing, a possible choice is $\tau = w_0^{-1}(1)$.
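To illustrate how the sequence (5) and the criteria (7) and (8) can be checked in practice, the following sketch iterates (5) for Lipschitz-type choices $w_0(t) = L_0 t$, $w(t) = L t$ (the constants, and the use of the variant $b_n = w(s_n - t_n)$, are our own illustrative assumptions, not values from the paper). The loop stops as soon as (7) or (8) fails, which for unfavorable data can happen after a few steps; convergence must be verified for the particular $w_0$, $w$, $\alpha$, $\xi$ at hand:

```python
# Iterate the majorizing sequence (5) and report whether the convergence
# criteria (7): w0(t_n) < 1 and (8): t_n < tau hold along the way.
L0, L, alpha, xi = 0.5, 0.6, 2/3, 0.01      # hypothetical constants
w0 = lambda t: L0 * t
w  = lambda t: L * t
tau = 1.0 / L0                               # tau = w0^{-1}(1), w0 strictly increasing

t, s = 0.0, abs(alpha) * xi
ts = [t]
ok = True
for n in range(20):
    if not (w0(t) < 1 and t < tau):          # criteria (7) and (8)
        ok = False
        break
    g = w(s - t) / (1 - w0(t))               # gamma_n with b_n = w(s_n - t_n)
    u = s + (8*abs(alpha - 1) + 6*g + 9*g**2) / (8*abs(alpha)) * (s - t)
    # For linear w0, the integral of w0(t_n + theta(u_n - t_n)) is L0*(t + (u-t)/2).
    d = (1 + L0*(t + (u - t)/2)) * (u - t) + (1 + w0(t)) * (s - t) / abs(alpha)
    t1 = u + (1 + 1.5*g) * d / (1 - w0(t))
    p = (1 + L0*(t + (t1 - t)/2)) * (t1 - t) + (1 + w0(t)) * (s - t) / abs(alpha)
    t, s = t1, t1 + abs(alpha) * p / (1 - w0(t1))
    ts.append(t)
```

By construction $t_n \leq s_n \leq u_n \leq t_{n+1}$ while the criteria hold, so the generated prefix is nondecreasing.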
The semi-local convergence is discussed in the next Section.

3. Convergence

The following common set of conditions is sufficient for the convergence of these methods.
Suppose:
$(C_1)$ There exist a starting point $x_0 \in \Omega$ and a parameter $\xi \geq 0$ such that $F'(x_0)^{-1} \in L(Y, X)$ and
$$\|F'(x_0)^{-1} F(x_0)\| \leq \xi.$$
$(C_2)$ $\|F'(x_0)^{-1} (F'(v) - F'(x_0))\| \leq w_0(\|v - x_0\|)$ for all $v \in \Omega$, where the function $w_0 : [0, \infty) \to [0, \infty)$ is continuous and nondecreasing.
$(C_3)$ The equation $w_0(t) - 1 = 0$ has a smallest positive solution $\rho$. Set $T = [0, \rho)$ and $\Omega_0 = B(x_0, \rho) \cap \Omega$.
$(C_4)$ $\|F'(x_0)^{-1} (F'(v_2) - F'(v_1))\| \leq w(\|v_2 - v_1\|)$ for all $v_1, v_2 \in \Omega_0$, where the function $w : T \to [0, \infty)$ is continuous and nondecreasing.
$(C_5)$ The conditions (7) and (8) hold, and
$(C_6)$ $B[x_0, t^*] \subset \Omega$.
Next, the semi-local convergence is given first for method (2).
Theorem 1.
Suppose that the conditions $(C_1)$–$(C_6)$ hold. Then, the sequence $\{x_n\}$ generated by Formula (2) is well defined in the ball $B(x_0, t^*)$, remains in $B(x_0, t^*)$, and converges to a limit point $x^* \in B[x_0, t^*]$ satisfying $F(x^*) = 0$. Moreover, the solution $x^*$ relates to the sequence $\{t_n\}$ by
$$\|x_n - x^*\| \leq t^* - t_n \quad \text{for all } n = 0, 1, 2, \ldots. \tag{9}$$
Proof. 
The following items shall be shown using mathematical induction on the number k:
$$\|y_k - x_k\| \leq s_k - t_k, \tag{10}$$
$$\|z_k - y_k\| \leq u_k - s_k, \tag{11}$$
$$\|x_{k+1} - z_k\| \leq t_{k+1} - u_k. \tag{12}$$
Item (10) holds for $k = 0$, since by the condition $(C_1)$, the definition of the method (2), and the sequence (5),
$$\|y_0 - x_0\| = |\alpha| \, \|F'(x_0)^{-1} F(x_0)\| \leq |\alpha| \xi = s_0 - t_0 < t^*.$$
Notice also that the iterates $y_0$, $z_0$, and $x_1$ are well defined and $y_0 \in B(x_0, t^*)$. Then, for $v \in B(x_0, t^*)$, conditions (7), $(C_1)$, and $(C_2)$ give
$$\|F'(x_0)^{-1} (F'(v) - F'(x_0))\| \leq w_0(\|v - x_0\|) < 1.$$
This estimate, together with the standard Banach lemma on invertible linear operators [2], implies that $F'(v)^{-1}$ exists and
$$\|F'(v)^{-1} F'(x_0)\| \leq \frac{1}{1 - w_0(\|v - x_0\|)}. \tag{13}$$
By substituting the value of $y_k$ given by the first substep into the second substep of the method (2), we have
$$z_k - y_k = \left( \alpha I - \tfrac{23}{8} I + 3 A_k - \tfrac{9}{8} A_k^2 \right) F'(x_k)^{-1} F(x_k) = -\frac{1}{8 \alpha} \left[ 8 (\alpha - 1) I - 6 (I - A_k) - 9 (I - A_k)^2 \right] (y_k - x_k), \tag{14}$$
where $A_k = F'(x_k)^{-1} F'(y_k)$.
In view of the definition of the sequence (5), condition $(C_3)$, (13) (for $v = x_k$), and the identity (14), we obtain the estimate
$$\|z_k - y_k\| \leq \frac{1}{8 |\alpha|} \left( 8 |\alpha - 1| + 6 \bar{\gamma}_k + 9 \bar{\gamma}_k^2 \right) \|y_k - x_k\| \leq \frac{1}{8 |\alpha|} \left( 8 |\alpha - 1| + 6 \gamma_k + 9 \gamma_k^2 \right) (s_k - t_k) = u_k - s_k, \tag{15}$$
where
$$\bar{b}_k = w_0(\|x_k - x_0\|) + w_0(\|y_k - x_0\|) \quad \text{or} \quad w(\|y_k - x_k\|)$$
and $\bar{\gamma}_k = \dfrac{\bar{b}_k}{1 - w_0(\|x_k - x_0\|)}$.
The following estimates are also used:
$$\|I - A_k\| = \|F'(x_k)^{-1} (F'(x_k) - F'(y_k))\| \leq \|F'(x_k)^{-1} F'(x_0)\| \left( \|F'(x_0)^{-1} (F'(x_k) - F'(x_0))\| + \|F'(x_0)^{-1} (F'(y_k) - F'(x_0))\| \right) \leq \frac{w_0(\|x_k - x_0\|) + w_0(\|y_k - x_0\|)}{1 - w_0(\|x_k - x_0\|)} = \bar{\gamma}_k \leq \frac{w_0(t_k) + w_0(s_k)}{1 - w_0(t_k)} = \gamma_k,$$
and similarly
$$\|I - A_k\| \leq \|F'(x_k)^{-1} F'(x_0)\| \, \|F'(x_0)^{-1} (F'(x_k) - F'(y_k))\| \leq \frac{w(\|y_k - x_k\|)}{1 - w_0(\|x_k - x_0\|)} = \bar{\gamma}_k \leq \frac{w(s_k - t_k)}{1 - w_0(t_k)} = \gamma_k.$$
It also follows from (15) that the estimate (11) holds, and
$$\|z_k - x_0\| \leq \|z_k - y_k\| + \|y_k - x_0\| \leq u_k - s_k + s_k = u_k < t^*.$$
Thus, the iterate $z_k \in B(x_0, t^*)$.
By the first substep of method (2), one can write
$$F(z_k) = F(z_k) - F(x_k) + F(x_k) = \int_0^1 F'(x_k + \theta (z_k - x_k)) \, d\theta \, (z_k - x_k) - \frac{1}{\alpha} F'(x_k) (y_k - x_k), \tag{16}$$
leading to the estimate
$$\|F'(x_0)^{-1} F(z_k)\| \leq \left( 1 + \int_0^1 w_0(\|x_k - x_0\| + \theta \|z_k - x_k\|) \, d\theta \right) \|z_k - x_k\| + \frac{1}{|\alpha|} \left( 1 + w_0(\|x_k - x_0\|) \right) \|y_k - x_k\| = \bar{\delta}_k \leq \delta_k.$$
Then, by the third substep of method (2),
$$\|x_{k+1} - z_k\| = \left\| \left( I + \tfrac{3}{2} (I - A_k) \right) F'(x_k)^{-1} F(z_k) \right\| \leq \frac{\left( 1 + \tfrac{3}{2} \bar{\gamma}_k \right) \bar{\delta}_k}{1 - w_0(\|x_k - x_0\|)} \leq \frac{\left( 1 + \tfrac{3}{2} \gamma_k \right) \delta_k}{1 - w_0(t_k)} = t_{k+1} - u_k,$$
and
$$\|x_{k+1} - x_0\| \leq \|x_{k+1} - z_k\| + \|z_k - x_0\| \leq t_{k+1} - u_k + u_k = t_{k+1} < t^*.$$
Hence, the iterate $x_{k+1} \in B(x_0, t^*)$ and the item (12) holds.
Similarly to (16), one also obtains
$$F(x_{k+1}) = F(x_{k+1}) - F(x_k) + F(x_k) = \int_0^1 F'(x_k + \theta (x_{k+1} - x_k)) \, d\theta \, (x_{k+1} - x_k) - \frac{1}{\alpha} F'(x_k) (y_k - x_k),$$
leading to
$$\|F'(x_0)^{-1} F(x_{k+1})\| \leq \left( 1 + \int_0^1 w_0(\|x_k - x_0\| + \theta \|x_{k+1} - x_k\|) \, d\theta \right) \|x_{k+1} - x_k\| + \frac{1}{|\alpha|} (1 + w_0(\|x_k - x_0\|)) \|y_k - x_k\| = \bar{p}_{k+1} \leq \left( 1 + \int_0^1 w_0(t_k + \theta (t_{k+1} - t_k)) \, d\theta \right) (t_{k+1} - t_k) + \frac{1}{|\alpha|} (1 + w_0(t_k)) (s_k - t_k) = p_{k+1}, \tag{18}$$
since
$$\|x_{k+1} - x_k\| \leq \|x_{k+1} - z_k\| + \|z_k - y_k\| + \|y_k - x_k\| \leq t_{k+1} - u_k + u_k - s_k + s_k - t_k = t_{k+1} - t_k.$$
On the other hand, we can write
$$F(x_{k+1}) = F(x_{k+1}) - F(x_k) - F'(x_k)(x_{k+1} - x_k) - \frac{1}{\alpha} F'(x_k)(y_k - x_k) + F'(x_k)(x_{k+1} - x_k)$$
and
$$\|F'(x_0)^{-1} F(x_{k+1})\| \leq \int_0^1 w((1 - \theta) \|x_{k+1} - x_k\|) \, d\theta \, \|x_{k+1} - x_k\| + \frac{1}{|\alpha|} (1 + w_0(\|x_k - x_0\|)) \|y_k - x_k\| + (1 + w_0(\|x_k - x_0\|)) \|x_{k+1} - x_k\| = \bar{p}_{k+1} \leq \int_0^1 w((1 - \theta)(t_{k+1} - t_k)) \, d\theta \, (t_{k+1} - t_k) + \frac{1}{|\alpha|} (1 + w_0(t_k)) (s_k - t_k) + (1 + w_0(t_k)) (t_{k+1} - t_k) = p_{k+1}.$$
Then, by the first substep of the method (2),
$$\|y_{k+1} - x_{k+1}\| \leq |\alpha| \, \|F'(x_{k+1})^{-1} F'(x_0)\| \, \|F'(x_0)^{-1} F(x_{k+1})\| \leq \frac{|\alpha| \, \bar{p}_{k+1}}{1 - w_0(\|x_{k+1} - x_0\|)} \leq \frac{|\alpha| \, p_{k+1}}{1 - w_0(t_{k+1})} = s_{k+1} - t_{k+1}, \tag{20}$$
and $\|y_{k+1} - x_0\| \leq \|y_{k+1} - x_{k+1}\| + \|x_{k+1} - x_0\| \leq s_{k+1} - t_{k+1} + t_{k+1} = s_{k+1} < t^*$. It follows that the item (10) holds with $k + 1$ replacing $k$ and that $y_{k+1} \in B(x_0, t^*)$. This completes the induction for the items (10)–(12).
The condition $(C_5)$ implies that the sequence $\{t_k\}$ is convergent and hence Cauchy. Consequently, the sequence $\{x_k\}$ is also Cauchy by the estimates (10)–(12), and as such it converges to some limit point $x^* \in B[x_0, t^*]$. Furthermore, by letting $k \to \infty$ in (18), the continuity of the operator $F$ implies $F(x^*) = 0$.
Let $m \geq 0$. Then, by (10)–(12), the following can be written:
$$\|x_{k+m} - x_k\| \leq \|x_{k+m} - x_{k+m-1}\| + \|x_{k+m-1} - x_{k+m-2}\| + \ldots + \|x_{k+1} - x_k\| \leq t_{k+m} - t_{k+m-1} + t_{k+m-1} - t_{k+m-2} + \ldots + t_{k+1} - t_k = t_{k+m} - t_k. \tag{21}$$
By letting $m \to \infty$ in the estimate (21), the item (9) follows. □
Remark 1.
The parameter $\rho$ can replace the limit point $t^*$ in the condition $(C_6)$ or $\tau$ in the condition (8).
The next result discusses the location and the uniqueness of a solution of the equation $F(x) = 0$.
Proposition 1.
Suppose:
$(i)$ There exists a solution $\bar{x} \in B(x_0, \rho_1)$ of the equation $F(x) = 0$ for some parameter $\rho_1 > 0$.
$(ii)$ The condition $(C_2)$ holds.
$(iii)$ For some $\rho_2 \geq \rho_1$,
$$\int_0^1 w_0((1 - \theta) \rho_1 + \theta \rho_2) \, d\theta < 1.$$
Set $\Omega_1 = B(x_0, \rho_2) \cap \Omega$. Then, the equation $F(x) = 0$ is uniquely solved by $\bar{x}$ in the region $\Omega_1$.
Proof. 
Let $M = \int_0^1 F'(\bar{x} + \theta (\bar{y} - \bar{x})) \, d\theta$ for some $\bar{y} \in \Omega_1$ with $F(\bar{y}) = 0$. The application of the conditions $(ii)$ and $(iii)$ gives
$$\|F'(x_0)^{-1} (M - F'(x_0))\| \leq \int_0^1 w_0((1 - \theta) \|\bar{x} - x_0\| + \theta \|\bar{y} - x_0\|) \, d\theta \leq \int_0^1 w_0((1 - \theta) \rho_1 + \theta \rho_2) \, d\theta < 1,$$
concluding that the linear operator $M$ is invertible and that $\bar{x} = \bar{y}$, since
$$M(\bar{x} - \bar{y}) = F(\bar{x}) - F(\bar{y}) = 0. \quad \Box$$
Remark 2.
The uniqueness result given in Proposition 1 does not use all the conditions of Theorem 1. However, if all these conditions are used, then one can set $\rho_1 = t^*$.
By using method (3) instead of method (2) and sequence (6) instead of sequence (5), one obtains, along the same lines as the proof of Theorem 1 (under the conditions $(C_1)$–$(C_6)$), the following estimates:
$$
\begin{aligned}
\|z_k - y_k\| &= \left\| \left[ (\alpha - 1) I - \tfrac{21}{8} A_k + \tfrac{9}{2} A_k^2 - \tfrac{15}{8} A_k^3 \right] F'(x_k)^{-1} F(x_k) \right\| \\
&= \frac{1}{8} \left\| \left[ 8 (\alpha - 1) I - 6 (I - A_k) - 9 (I - A_k)^2 + 15 (I - A_k)^3 \right] F'(x_k)^{-1} F(x_k) \right\| \\
&\leq \frac{1}{8 |\alpha|} \left( 8 |\alpha - 1| + 6 \bar{\gamma}_k + 9 \bar{\gamma}_k^2 + 15 \bar{\gamma}_k^3 \right) \|y_k - x_k\| \\
&\leq \frac{1}{8 |\alpha|} \left( 8 |\alpha - 1| + 6 \gamma_k + 9 \gamma_k^2 + 15 \gamma_k^3 \right) (s_k - t_k) = u_k - s_k,
\end{aligned}
$$
and
$$
\begin{aligned}
\|x_{k+1} - z_k\| &= \left\| \tfrac{1}{4} \left( 3 I + 8 (I - A_k) + (I - A_k)^2 \right) F'(x_k)^{-1} F(z_k) \right\| \\
&\leq \frac{\tfrac{1}{4} \left( 3 + 8 \bar{\gamma}_k + \bar{\gamma}_k^2 \right) \bar{\delta}_k}{1 - w_0(\|x_k - x_0\|)} \leq \frac{\tfrac{1}{4} \left( 3 + 8 \gamma_k + \gamma_k^2 \right) \delta_k}{1 - w_0(t_k)} = t_{k+1} - u_k.
\end{aligned}
$$
Moreover, the estimate on $\|y_{k+1} - x_{k+1}\|$ is the same as in (20). Hence, the following result is reached for the method (3).
Theorem 2.
Under the conditions $(C_1)$–$(C_6)$, the conclusions of Theorem 1 hold for the method (3), provided that the sequence (5) is replaced by the sequence (6).

4. Numerical Example

Let us apply methods (2) and (3) with $\alpha = \tfrac{2}{3}$ to solve the following nonlinear problems.
Example 1.
Consider the system of nonlinear equations $F : \mathbb{R}^m \to \mathbb{R}^m$ defined by
$$
\begin{aligned}
F_i(x) &= 3 \upsilon_i^3 + 2 \upsilon_{i+1} - 5 + \sin(\upsilon_i - \upsilon_{i+1}) \sin(\upsilon_i + \upsilon_{i+1}), && i = 1, \\
F_i(x) &= 3 \upsilon_i^3 + 2 \upsilon_{i+1} - 5 + \sin(\upsilon_i - \upsilon_{i+1}) \sin(\upsilon_i + \upsilon_{i+1}) + 4 \upsilon_i - \upsilon_{i-1} \exp(\upsilon_{i-1} - \upsilon_i) - 3, && 1 < i < m, \\
F_i(x) &= 4 \upsilon_i - \upsilon_{i-1} \exp(\upsilon_{i-1} - \upsilon_i) - 3, && i = m.
\end{aligned}
$$
Here $x = (\upsilon_1, \ldots, \upsilon_m)^T$. The initial approximation is $x_0 = (2s, \ldots, 2s)^T$, where $s$ is a real number. The exact solution is $x^* = (1, \ldots, 1)^T$. The iterative process is stopped when
$$\|F(x_{n+1})\| \leq 10^{-10}.$$
Table 1 and Table 2 show the values of the errors for different $s$ and $m = 5$. Notice that the closer $x_0$ is to $x^*$, the faster the convergence.
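For reference, the system of Example 1 can be coded as follows (our own NumPy transcription of the reconstructed formulas; to keep the sketch self-contained we verify the exact solution and use a plain Newton iteration with a forward-difference Jacobian rather than methods (2) and (3)):

```python
import numpy as np

def F(x):
    """Residual of the Example 1 system for m = len(x)."""
    m = len(x)
    f = np.zeros(m)
    for i in range(m):
        if i < m - 1:   # terms present for equations i = 1, ..., m-1
            f[i] += (3*x[i]**3 + 2*x[i+1] - 5
                     + np.sin(x[i] - x[i+1]) * np.sin(x[i] + x[i+1]))
        if i > 0:       # terms present for equations i = 2, ..., m
            f[i] += 4*x[i] - x[i-1]*np.exp(x[i-1] - x[i]) - 3
    return f

def newton(F, x0, tol=1e-10, h=1e-7, max_iter=50):
    """Plain Newton iteration with a forward-difference Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = F(x)
        if np.linalg.norm(fx) <= tol:
            break
        J = np.empty((x.size, x.size))
        for j in range(x.size):
            e = np.zeros(x.size); e[j] = h
            J[:, j] = (F(x + e) - fx) / h
        x = x - np.linalg.solve(J, fx)
    return x

s = 0.45
x = newton(F, 2*s*np.ones(5))    # x0 = (2s, ..., 2s), here (0.9, ..., 0.9)
```

At $x^* = (1, \ldots, 1)^T$ the residual vanishes identically, which is a quick consistency check on the transcription.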
Example 2.
Consider the boundary value problem
$$y''(t) - y'(t) \tan(t) + 2 y^2(t) \sin(t) = 0, \quad 0 < t < \frac{\pi}{2}, \qquad y(0) = 0, \quad y(\pi/2) = 1.$$
Denote $\upsilon_i = y(t_i)$, $i = 0, \ldots, m+1$, where $t_i = ih$ and $h = \frac{\pi}{2(m+1)}$. Using the approximations of the first- and second-order derivatives
$$y''(t_i) \approx \frac{\upsilon_{i-1} - 2 \upsilon_i + \upsilon_{i+1}}{h^2}, \qquad y'(t_i) \approx \frac{\upsilon_{i+1} - \upsilon_{i-1}}{2h}, \quad i = 1, \ldots, m,$$
the following system of nonlinear equations
$$
\begin{aligned}
F_i(x) &= -2 \upsilon_i + \upsilon_{i+1} - \frac{h}{2} \upsilon_{i+1} \tan(t_i) + 2 h^2 \upsilon_i^2 \sin(t_i) = 0, && i = 1, \\
F_i(x) &= \upsilon_{i-1} - 2 \upsilon_i + \upsilon_{i+1} - \frac{h}{2} (\upsilon_{i+1} - \upsilon_{i-1}) \tan(t_i) + 2 h^2 \upsilon_i^2 \sin(t_i) = 0, && i = 2, \ldots, m-1, \\
F_i(x) &= \upsilon_{i-1} - 2 \upsilon_i + 1 - \frac{h}{2} (1 - \upsilon_{i-1}) \tan(t_i) + 2 h^2 \upsilon_i^2 \sin(t_i) = 0, && i = m
\end{aligned}
$$
with $x = (\upsilon_1, \ldots, \upsilon_m)^T$ is obtained.
Figure 1 shows $\|F(x_n)\|$ at each iteration. The results are obtained for $m = 49$ and the tolerance $\varepsilon = 10^{-10}$. The starting approximations were given by $x_{0,i} = \sin(ih) + 0.6$ (graphs on the left) and $x_{0,i} = 0.5 \sin(ih)$ (graphs on the right). Notice that the method (2) converges faster than (3) for both starting points.
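The discretized system of Example 2 can be assembled in vectorized form as follows (our own transcription of the reconstructed equations, writing the tangent as tan; the function name is ours). The boundary values $y(0) = 0$ and $y(\pi/2) = 1$ are folded in by padding the unknown vector:

```python
import numpy as np

def bvp_residual(x, m=49):
    """Finite-difference residual of the discretized boundary value problem."""
    h = np.pi / (2 * (m + 1))
    t = h * np.arange(1, m + 1)              # interior nodes t_1, ..., t_m
    v = np.concatenate(([0.0], x, [1.0]))    # pad with y(0) = 0 and y(pi/2) = 1
    # v[:-2], v[1:-1], v[2:] are v_{i-1}, v_i, v_{i+1} for i = 1, ..., m.
    return (v[:-2] - 2*v[1:-1] + v[2:]
            - (h/2) * (v[2:] - v[:-2]) * np.tan(t)
            + 2 * h**2 * v[1:-1]**2 * np.sin(t))
```

Note that all interior nodes satisfy $t_i < \pi/2$, so $\tan(t_i)$ stays finite for the grid used here.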

5. Conclusions

The local convergence analysis of the method (2) and the method (3) was previously given under hypotheses on the seventh derivative in the space $\mathbb{R}^i$. That analysis did not provide computable error bounds or uniqueness results for the solution. The rest of the methods listed in the Introduction have the same limitations. We wrote this paper to address these problems and to extend the applicability of these methods, demonstrating the approach on method (2) and method (3); the new approach works on the rest of the aforementioned methods as well. In particular, we considered the semi-local convergence analysis for these methods, which is more interesting and challenging than the local convergence. Computable error estimates as well as uniqueness results for the solution were given in the more general setting of Banach spaces. Moreover, the convergence is based only on the derivative appearing in the methods and $\omega$-continuity conditions. The new approach will be applied in the future to other iterative methods.

Author Contributions

Conceptualization, I.K.A., S.S., S.R. and H.Y.; methodology, I.K.A., S.S., S.R. and H.Y.; software, I.K.A., S.S., S.R. and H.Y.; validation, I.K.A., S.S., S.R. and H.Y.; formal analysis, I.K.A., S.S., S.R., and H.Y.; investigation, I.K.A., S.S., S.R., and H.Y.; resources, I.K.A., S.S., S.R. and H.Y.; data curation, I.K.A., S.S., S.R. and H.Y.; writing—original draft preparation, I.K.A., S.S., S.R. and H.Y.; writing—review and editing, I.K.A., S.S., S.R. and H.Y.; visualization, I.K.A., S.S., S.R., and H.Y.; supervision, I.K.A., S.S., S.R. and H.Y.; project administration, I.K.A., S.S., S.R. and H.Y.; funding acquisition, I.K.A., S.S., S.R. and H.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Argyros, I.K. The Theory and Applications of Iteration Methods, 2nd ed.; Engineering Series; CRC Press: Boca Raton, FL, USA, 2022.
2. Shakhno, S.M. Convergence of the two-step combined method and uniqueness of the solution of nonlinear operator equations. J. Comput. Appl. Math. 2014, 261, 378–386.
3. Shakhno, S.M. On an iterative algorithm with superquadratic convergence for solving nonlinear operator equations. J. Comput. Appl. Math. 2009, 231, 222–235.
4. Argyros, I.K.; Shakhno, S.; Yarmola, H. Two-step solver for nonlinear equation. Symmetry 2019, 11, 128.
5. Darvishi, M.T.; Barati, A. A fourth-order method from quadrature formulae to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 188, 257–261.
6. Hueso, J.L.; Martínez, E.; Teruel, C. Convergence, efficiency and dynamics of new fourth and sixth order families of iterative methods for nonlinear systems. J. Comput. Appl. Math. 2015, 275, 412–420.
7. Jarratt, P. Some fourth order multipoint iterative methods for solving equations. Math. Comp. 1966, 20, 434–437.
8. Kou, J.; Li, Y. An improvement of the Jarratt method. Appl. Math. Comput. 2007, 189, 1816–1821.
9. Magrenán, Á.A. Different anomalies in a Jarratt family of iterative root-finding methods. Appl. Math. Comput. 2014, 233, 29–38.
10. Chun, C.; Neta, B. Developing high order methods for the solution of systems of nonlinear equations. Appl. Math. Comput. 2019, 342, 178–190.
11. Cordero, A.; Torregrosa, J.R. Variants of Newton's method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698.
12. Sharma, J.R.; Guha, R.K.; Sharma, R. An efficient fourth order weighted-Newton method for systems of nonlinear equations. Numer. Algor. 2013, 62, 307–323.
13. Sharma, J.R.; Arora, H. Efficient Jarratt-like methods for solving systems of nonlinear equations. Calcolo 2014, 51, 193–210.
14. Xiao, X.; Yin, H. A simple and efficient method with high order convergence for solving systems of nonlinear equations. Comput. Math. Appl. 2015, 69, 1220–1231.
15. Zhang, J.; Yang, G. Low-complexity tracking control of strict-feedback systems with unknown control directions. IEEE Trans. Autom. Control 2019, 64, 5175–5182.
16. Zhang, X.; Dai, L. Image enhancement based on rough set and fractional order differentiator. Fractal Fract. 2020, 6, 214.
17. Ding, W.; Wang, Q.; Zhang, J. Analysis and prediction of COVID-19 epidemic in South Africa. ISA Trans. 2022, 124, 182–190.
Figure 1. Example 2: norm of residual at each iteration.
Table 1. The values $\|x_n - x^*\|$ at each iteration for $s = 0.35, 0.4, 0.45$.

n    s = 0.35                  s = 0.4                   s = 0.45
     (2)         (3)           (2)         (3)           (2)         (3)
0    3.0000e-01  3.0000e-01    2.0000e-01  2.0000e-01    1.0000e-01  1.0000e-01
1    1.2680e-01  8.3460e-01    3.7373e-03  2.7441e-02    2.3940e-05  4.1343e-04
2    2.0491e-05  6.6401e-02    3.3085e-14  6.3708e-07    0           4.1744e-14
3    0           1.6444e-05                                          0
4                0
Table 2. The values $\|x_n - x^*\|$ at each iteration for $s = 1, 2.5, 5$.

n    s = 1                     s = 2.5                   s = 5
     (2)         (3)           (2)         (3)           (2)         (3)
0    1.0000e+00  1.0000e+00    4.0000e+00  4.0000e+00    9.0000e+00  9.0000e+00
1    8.1192e-02  1.0405e-01    1.1974e+00  1.3057e+00    3.3764e+00  3.5981e+00
2    1.8507e-06  7.5738e-05    1.2931e-01  1.9303e-01    9.3516e-01  1.1262e+00
3    0           0             2.1824e-05  6.2796e-04    6.7872e-02  1.3871e-01
4                              0           2.2427e-13    7.0948e-07  2.0438e-04
5                                          0                         2.6645e-15

Argyros, I.K.; Shakhno, S.; Regmi, S.; Yarmola, H. On the Semi-Local Convergence of Two Competing Sixth Order Methods for Equations in Banach Space. Algorithms 2023, 16, 2. https://doi.org/10.3390/a16010002