Article

Some High-Order Convergent Iterative Procedures for Nonlinear Systems with Local Convergence

Ramandeep Behl 1,†, Ioannis K. Argyros 2,*,† and Fouad Othman Mallawi 1,†
1 Department of Mathematics, Faculty of Science, King Abdulaziz University, P.O. Box 80203, Jeddah 21589, Saudi Arabia
2 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2021, 9(12), 1375; https://doi.org/10.3390/math9121375
Submission received: 2 May 2021 / Revised: 7 June 2021 / Accepted: 8 June 2021 / Published: 14 June 2021
(This article belongs to the Special Issue Application of Iterative Methods for Solving Nonlinear Equations)

Abstract

In this study, we suggest the local convergence of three iterative schemes that work for systems of nonlinear equations. In earlier results, such as those of Amiri et al. (see also the works by Behl et al., Argyros et al., Chicharro et al., Cordero et al., Geum et al., Gutiérrez, Sharma, Weerakoon and Fernando, and Awawdeh), the authors used hypotheses on high-order derivatives that do not appear in the iterative procedures themselves. Therefore, those methods have a restricted area of applicability. The main difference between our study and earlier ones is that we adopt only the first-order derivative in the convergence analysis (the only derivative that appears in the proposed iterative procedures). No results on computable error distances or uniqueness were given in the aforementioned studies on $\mathbb{R}^k$; we address these problems too. Moreover, by working in Banach spaces, the applicability of the iterative procedures is extended even further. We examine the convergence criteria on several real-life problems, along with a counterexample that completes this study.

1. Introduction

The most common and difficult problem in the field of computational mathematics is to obtain a solution of
$$F(x) = 0, \qquad (1)$$
where $F : \Omega \subseteq B_1 \to B_2$ is Fréchet-differentiable, $B_1$ and $B_2$ are Banach spaces, and $\Omega \subseteq B_1$ is a non-empty convex set. It is usually hard, indeed almost illusory, to obtain the exact solution of such problems in analytic form. This is one of the main reasons why we must compute an efficient approximate solution, up to any specified accuracy, by means of an iterative procedure.
Therefore, researchers have put great effort into developing new iterative methods over the past few decades. The accuracy of a solution also depends on several factors, among them: the choice of iterative method, the initial approximation(s), and the structure of the considered problem, implemented in software such as Maple, Fortran, MATLAB or Mathematica. Further, users of these iterative schemes face several issues, including: the choice of starting point, a derivative that vanishes near the root (in the case of derivative-free multi-point schemes), difficulties near the initial point, slow convergence, divergence, convergence to an undesired solution, oscillation, and outright failure of the iterative method (for further information, please see [1,2,3,4,5]).
We study the local convergence of the Banach space valued iterative procedures of convergence orders eight, eight and seven, defined for each $\sigma = 0, 1, 2, \ldots$, respectively, by
$$y_\sigma = x_\sigma - F'(x_\sigma)^{-1}F(x_\sigma),$$
$$z_\sigma = y_\sigma - \left[\frac{5}{4}I - \frac{1}{2}F'(y_\sigma)^{-1}F'(x_\sigma) + \frac{1}{4}\left(F'(y_\sigma)^{-1}F'(x_\sigma)\right)^2\right]F'(y_\sigma)^{-1}F(y_\sigma),$$
$$x_{\sigma+1} = z_\sigma - \left[\frac{3}{2}I - F'(y_\sigma)^{-1}F'(x_\sigma) + \frac{1}{2}\left(F'(y_\sigma)^{-1}F'(x_\sigma)\right)^2\right]F'(y_\sigma)^{-1}F(z_\sigma), \qquad (2)$$
$$y_\sigma = x_\sigma - F'(x_\sigma)^{-1}F(x_\sigma),$$
$$z_\sigma = y_\sigma - \left[\frac{1}{4}I + \frac{1}{2}F'(y_\sigma)^{-1}F'(x_\sigma) + \frac{1}{4}\left(F'(y_\sigma)^{-1}F'(x_\sigma)\right)^2\right]F'(x_\sigma)^{-1}F(y_\sigma),$$
$$x_{\sigma+1} = z_\sigma - \left[\frac{1}{2}I + \frac{1}{2}\left(F'(y_\sigma)^{-1}F'(x_\sigma)\right)^2\right]F'(x_\sigma)^{-1}F(z_\sigma), \qquad (3)$$
and
$$y_\sigma = x_\sigma - F'(x_\sigma)^{-1}F(x_\sigma),$$
$$z_\sigma = y_\sigma - \frac{1}{\beta}F'(x_\sigma)^{-1}F(y_\sigma),$$
$$w_\sigma = z_\sigma - F'(x_\sigma)^{-1}\left[\frac{2(1-\beta)}{\beta}F(y_\sigma) + \beta F(z_\sigma)\right],$$
$$x_{\sigma+1} = w_\sigma - Q(t_\sigma)F'(x_\sigma)^{-1}F(w_\sigma), \qquad (4)$$
with $t_\sigma = I - \frac{1}{\beta}F'(x_\sigma)^{-1}[y_\sigma, z_\sigma; F]$, where $F : \Omega \subseteq B_1 \to B_2$ is Fréchet-differentiable, $B_1$ and $B_2$ are Banach spaces, $\Omega$ is non-empty, convex and open, $x_0 \in \Omega$ is an initial guess, $\beta \in \mathbb{R} - \{0\}$, and $[\cdot, \cdot; F] : \Omega \times \Omega \to \mathcal{L}(B_1, B_2)$ is a standard divided difference of order one [6]. Notice that by $\left(F'(y_\sigma)^{-1}F'(x_\sigma)\right)^2$ we mean $F'(y_\sigma)^{-1}F'(x_\sigma)F'(y_\sigma)^{-1}F'(x_\sigma)$, which exists as a composition of two linear operators. The following concerns arise for Reference [7] (the same is true for the studies mentioned in the papers [8,9,10,11,12,13,14,15,16,17,18,19,20]):
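Since the substeps above are operator-valued, it may help to see one of them in concrete form. The following is a minimal sketch of a single step of scheme (2) as reconstructed above; the function names are ours, and the demo system is the one of Example 2 in Section 3 (linear solves are used in place of explicit inverses):

```python
import numpy as np

def step_method2(F, J, x):
    """One step of scheme (2): a Newton substep at x, then two corrector
    substeps weighted by powers of A = F'(y)^{-1} F'(x)."""
    I = np.eye(len(x))
    Jx = J(x)
    y = x - np.linalg.solve(Jx, F(x))
    Jy = J(y)
    A = np.linalg.solve(Jy, Jx)                      # A = F'(y)^{-1} F'(x)
    z = y - (1.25 * I - 0.5 * A + 0.25 * A @ A) @ np.linalg.solve(Jy, F(y))
    return z - (1.5 * I - A + 0.5 * A @ A) @ np.linalg.solve(Jy, F(z))

# Demo on the system of Example 2: F(u) = (e^{u1} - 1, ((e-1)/2)u2^2 + u2, u3)^T
e = np.e
F = lambda u: np.array([np.exp(u[0]) - 1.0, (e - 1) / 2 * u[1] ** 2 + u[1], u[2]])
J = lambda u: np.diag([np.exp(u[0]), (e - 1) * u[1] + 1.0, 1.0])

x = np.array([0.2, 0.2, 0.2])
for _ in range(3):
    x = step_method2(F, J, x)
print(np.linalg.norm(x))  # essentially zero: the iterates converge to x* = (0,0,0)^T
```

Each outer step costs one Jacobian factorization at $x_\sigma$ and one at $y_\sigma$, which is what makes the high order attractive in practice.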
(1)
These procedures were studied in [7] for the special case $B_1 = B_2 = \mathbb{R}^j$, $j = 1, 2, 3, \ldots$, by using Taylor series and hypotheses on derivatives of order up to nine, which do not appear in these iterative procedures. Such hypotheses limit the applicability of the iterative procedures. As a motivational example, consider the function $F$ on $B_1 = B_2 = \mathbb{R}$, $\Omega = \left[-\frac{1}{2}, \frac{3}{2}\right]$, defined by
$$F(\theta) = \begin{cases} \theta^3\ln\theta^2 + \theta^5 - \theta^4, & \theta \neq 0, \\ 0, & \theta = 0. \end{cases}$$
We get
$$F'(\theta) = 3\theta^2\ln\theta^2 + 5\theta^4 - 4\theta^3 + 2\theta^2,$$
$$F''(\theta) = 6\theta\ln\theta^2 + 20\theta^3 - 12\theta^2 + 10\theta,$$
$$F'''(\theta) = 6\ln\theta^2 + 60\theta^2 - 24\theta + 22.$$
Clearly, $F'''(\theta)$ is not bounded on $\Omega$. Therefore, results requiring the existence of $F'''(\theta)$ (or higher derivatives) cannot be applied to study the convergence of (2)–(4).
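A quick numerical check illustrates the unboundedness (the function name is ours): the $6\ln\theta^2$ term drives $F'''$ to $-\infty$ as $\theta \to 0$ inside $\Omega$.

```python
import math

def d3F(t):
    # third derivative computed above: 6 ln(t^2) + 60 t^2 - 24 t + 22
    return 6 * math.log(t ** 2) + 60 * t ** 2 - 24 * t + 22

for t in (1e-2, 1e-4, 1e-8):
    print(t, d3F(t))  # increasingly negative as t shrinks: no bound on Omega
```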
(2)
No computable error bounds on $\|x_\sigma - x^*\|$ were given. Hence, we do not know in advance how many iterates must be computed to achieve a prescribed error tolerance.
(3)
Uniqueness results are not given in [7]. Here, $x^*$ denotes a solution of Equation (1).
In this paper, we address all of the problems (1)–(3) using only the first derivative, which does appear in these iterative procedures. Hence, we extend the applicability of these procedures to the more general setting of Banach spaces. Moreover, because of its generality, our approach can be used to extend the usage of other methods [8,9,10,11,12,13,14,15,16,17,18,19,21,22,23,24,25] in the same way.

2. Local Convergence

We study iterative procedure (2) first. Let $\psi_0 : [0, \infty) \to [0, \infty)$ be a continuous and increasing function. Assume:
(i) The equation
$$\psi_0(\theta) - 1 = 0$$
has a minimal positive solution $\rho_0$.
Set $I_0 = [0, 2\rho_0)$. Let $\psi : I_0 \to [0, \infty)$ be continuous and increasing, and define the function $G_1$ on $I_0$ by
$$G_1(\theta) = \frac{\int_0^1 \psi((1-\mu)\theta)\,d\mu}{1 - \psi_0(\theta)}.$$
(ii) The equation
$$G_1(\theta) - 1 = 0$$
has a minimal solution $r_1 \in I_0 - \{0\}$.
(iii) The equation
$$\psi_0(G_1(\theta)\theta) - 1 = 0$$
has a minimal positive solution $\rho_1$. Set $\rho_2 = \min\{\rho_0, \rho_1\}$.
Let $v : I_1 \to [0, \infty)$ be continuous and increasing, where $I_1 = [0, \rho_2)$, and define the function $G_2$ on $I_1$ by
$$G_2(\theta) = \left[1 + \frac{1}{4}\left(\frac{\psi_0(\theta) + \psi_0(G_1(\theta)\theta)}{1 - \psi_0(G_1(\theta)\theta)}\right)^2 \frac{\int_0^1 v(\mu G_1(\theta)\theta)\,d\mu}{1 - \psi_0(G_1(\theta)\theta)}\right]G_1(\theta).$$
(iv) The equation
$$G_2(\theta) - 1 = 0$$
has a minimal solution $r_2 \in I_1 - \{0\}$.
(v) The equation
$$\psi_0(G_2(\theta)\theta) - 1 = 0$$
has a minimal positive solution $\rho_3$. Set $\rho = \min\{\rho_2, \rho_3\}$.
Define the function $G_3$ on $I_3 = [0, \rho)$ by
$$G_3(\theta) = \left[\frac{\int_0^1\psi((1-\mu)G_2(\theta)\theta)\,d\mu}{1-\psi_0(G_2(\theta)\theta)} + \frac{\left(\psi_0(G_2(\theta)\theta)+\psi_0(G_1(\theta)\theta)\right)\int_0^1 v(\mu G_2(\theta)\theta)\,d\mu}{\left(1-\psi_0(G_1(\theta)\theta)\right)\left(1-\psi_0(G_2(\theta)\theta)\right)} + \frac{1}{2}\left(\frac{\psi_0(\theta)+\psi_0(G_1(\theta)\theta)}{1-\psi_0(G_1(\theta)\theta)}\right)^2\frac{\int_0^1 v(\mu G_2(\theta)\theta)\,d\mu}{1-\psi_0(G_1(\theta)\theta)}\right]G_2(\theta).$$
(vi) The equation
$$G_3(\theta) - 1 = 0$$
has a minimal solution $r_3 \in I_3 - \{0\}$.
A radius of convergence $r$ shall be shown to be
$$r = \min\{r_i\}, \quad i = 1, 2, 3. \qquad (9)$$
Notice that, for all $\theta \in [0, r)$,
$$0 \le \psi_0(\theta) < 1,$$
$$0 \le \psi_0(G_1(\theta)\theta) < 1,$$
$$0 \le \psi_0(G_2(\theta)\theta) < 1,$$
and
$$0 \le G_i(\theta) < 1, \quad i = 1, 2, 3.$$
Let $S(a, b)$ denote the open ball with center $a \in \Omega$ and radius $b > 0$, and let $\bar S(a, b)$ stand for its closure. The conditions (B) are used in the local convergence analysis of iterative procedure (2), with the $\psi$ functions as given previously. Assume:
(B1) $F : \Omega \to B_2$ is Fréchet-differentiable and there exists $x^* \in \Omega$ such that
$$F(x^*) = 0 \quad \text{and} \quad F'(x^*)^{-1} \in \mathcal{L}(B_2, B_1).$$
(B2) For all $x \in \Omega$,
$$\|F'(x^*)^{-1}\left(F'(x) - F'(x^*)\right)\| \le \psi_0(\|x - x^*\|).$$
Set $\Omega_0 = \Omega \cap S(x^*, \rho_0)$.
(B3) For all $x, y \in \Omega_0$,
$$\|F'(x^*)^{-1}\left(F'(x) - F'(y)\right)\| \le \psi(\|x - y\|).$$
Set $\Omega_1 = \Omega \cap S(x^*, \rho_2)$.
(B4) For all $x \in \Omega_1$,
$$\|F'(x^*)^{-1}F'(x)\| \le v(\|x - x^*\|).$$
(B5) $\bar S(x^*, \tilde r) \subseteq \Omega$, where $\rho$ exists and the radius $\tilde r$ is specified later for each method.
(B6) There exists $\bar r \ge r$ such that
$$\int_0^1 \psi_0(\mu\bar r)\,d\mu < 1.$$
Set $\Omega_2 = \Omega \cap \bar S(x^*, \bar r)$.
Next, we develop the analysis of iterative procedure (2) using the preceding notation and the conditions (B).
Theorem 1.
Under the conditions (B) with $\tilde r = r$, suppose further that $x_0 \in S(x^*, r) - \{x^*\}$. Then, the sequence $\{x_\sigma\}$ generated by iterative scheme (2) is well defined, remains in $S(x^*, r)$ for all $\sigma = 0, 1, 2, \ldots$, and converges to $x^*$. Moreover, the following assertions hold:
$$\|y_\sigma - x^*\| \le G_1(\|x_\sigma - x^*\|)\|x_\sigma - x^*\| \le \|x_\sigma - x^*\| < r, \qquad (14)$$
$$\|z_\sigma - x^*\| \le G_2(\|x_\sigma - x^*\|)\|x_\sigma - x^*\| \le \|x_\sigma - x^*\|, \qquad (15)$$
and
$$\|x_{\sigma+1} - x^*\| \le G_3(\|x_\sigma - x^*\|)\|x_\sigma - x^*\| \le \|x_\sigma - x^*\| < r, \qquad (16)$$
where the functions $G_i$ are given previously and $r$ is defined by (9). Furthermore, $x^*$ is the only solution of the equation $F(x) = 0$ in the set $\Omega_2$ given in (B6).
Proof. 
The sequence $\{x_\sigma\}$ is shown to be well defined, to remain in $S(x^*, r)$, and to converge to $x^*$ using mathematical induction; in the process, we also verify the estimates (14)–(16). Let $x \in S(x^*, r) - \{x^*\}$. Using (B2) and (9), we have
$$\|F'(x^*)^{-1}\left(F'(x) - F'(x^*)\right)\| \le \psi_0(\|x - x^*\|) \le \psi_0(r) < 1.$$
The Banach perturbation lemma on invertible operators [6], together with this estimate, ensures the existence of $F'(x)^{-1}$ and
$$\|F'(x)^{-1}F'(x^*)\| \le \frac{1}{1 - \psi_0(\|x - x^*\|)},$$
so
$$\|y_\sigma - x^*\| = \left\|x_\sigma - x^* - F'(x_\sigma)^{-1}F(x_\sigma)\right\| \le \|F'(x_\sigma)^{-1}F'(x^*)\|\left\|\int_0^1 F'(x^*)^{-1}\left[F'\left(x^* + \mu(x_\sigma - x^*)\right) - F'(x_\sigma)\right]d\mu\,(x_\sigma - x^*)\right\| \le \frac{\int_0^1\psi((1-\mu)\|x_\sigma - x^*\|)\,d\mu}{1 - \psi_0(\|x_\sigma - x^*\|)}\|x_\sigma - x^*\| = G_1(\|x_\sigma - x^*\|)\|x_\sigma - x^*\| \le \|x_\sigma - x^*\| < r,$$
which shows (14). Next, the second substep of (2) can be written as
$$z_\sigma - x^* = \left(y_\sigma - x^* - F'(y_\sigma)^{-1}F(y_\sigma)\right) - \frac{1}{4}\left(I - F'(y_\sigma)^{-1}F'(x_\sigma)\right)^2 F'(y_\sigma)^{-1}F(y_\sigma).$$
Since $\|y_\sigma - x^* - F'(y_\sigma)^{-1}F(y_\sigma)\| \le \|y_\sigma - x^*\|$ (by the Newton-step estimate above applied at $y_\sigma$) and $I - F'(y_\sigma)^{-1}F'(x_\sigma) = F'(y_\sigma)^{-1}\left(F'(y_\sigma) - F'(x_\sigma)\right)$, we get
$$\|z_\sigma - x^*\| \le \left[1 + \frac{1}{4}\left(\frac{\psi_0(\|x_\sigma - x^*\|) + \psi_0(\|y_\sigma - x^*\|)}{1 - \psi_0(\|y_\sigma - x^*\|)}\right)^2\frac{\int_0^1 v(\mu\|y_\sigma - x^*\|)\,d\mu}{1 - \psi_0(\|y_\sigma - x^*\|)}\right]\|y_\sigma - x^*\| \le G_2(\|x_\sigma - x^*\|)\|x_\sigma - x^*\|,$$
which shows (15). For the third substep, we use the identity
$$x_{\sigma+1} - x^* = \left(z_\sigma - x^* - F'(z_\sigma)^{-1}F(z_\sigma)\right) + F'(z_\sigma)^{-1}\left(F'(y_\sigma) - F'(z_\sigma)\right)F'(y_\sigma)^{-1}F(z_\sigma) - \frac{1}{2}\left(I - F'(y_\sigma)^{-1}F'(x_\sigma)\right)^2 F'(y_\sigma)^{-1}F(z_\sigma),$$
so that
$$\|x_{\sigma+1} - x^*\| \le \left[\frac{\int_0^1\psi((1-\mu)\|z_\sigma - x^*\|)\,d\mu}{1 - \psi_0(\|z_\sigma - x^*\|)} + \frac{\left(\psi_0(\|z_\sigma - x^*\|) + \psi_0(\|y_\sigma - x^*\|)\right)\int_0^1 v(\mu\|z_\sigma - x^*\|)\,d\mu}{\left(1 - \psi_0(\|y_\sigma - x^*\|)\right)\left(1 - \psi_0(\|z_\sigma - x^*\|)\right)} + \frac{1}{2}\left(\frac{\psi_0(\|x_\sigma - x^*\|) + \psi_0(\|y_\sigma - x^*\|)}{1 - \psi_0(\|y_\sigma - x^*\|)}\right)^2\frac{\int_0^1 v(\mu\|z_\sigma - x^*\|)\,d\mu}{1 - \psi_0(\|y_\sigma - x^*\|)}\right]\|z_\sigma - x^*\| \le G_3(\|x_\sigma - x^*\|)\|x_\sigma - x^*\| < r,$$
which shows (16). The induction for the assertions (14)–(16) is completed by simply replacing $x_\sigma, y_\sigma, z_\sigma$ and $x_{\sigma+1}$ by $x_{\sigma+1}, y_{\sigma+1}, z_{\sigma+1}$ and $x_{\sigma+2}$, respectively, in the preceding calculations. From the estimate
$$\|x_{\sigma+2} - x^*\| \le q\,\|x_{\sigma+1} - x^*\| < r, \quad \text{where } 0 \le q = G_3(\|x_0 - x^*\|) < 1,$$
it follows that $\lim_{\sigma\to\infty} x_\sigma = x^*$. Finally, set $T = \int_0^1 F'\left(u + \mu(x^* - u)\right)d\mu$ for some $u \in \Omega_2$ with $F(u) = 0$. Then, by the hypotheses (B2) and (B6), we obtain
$$\|F'(x^*)^{-1}\left(T - F'(x^*)\right)\| \le \int_0^1\psi_0(\mu\|x^* - u\|)\,d\mu \le \int_0^1\psi_0(\mu\bar r)\,d\mu < 1,$$
so $T^{-1}$ exists, and $u = x^*$ is implied by the estimate $0 = F(x^*) - F(u) = T(x^* - u)$. □
Secondly, we study iterative procedure (3) in an analogous way. The function $G_1$ is unchanged, so we set $\bar G_1 = G_1$; however, the functions $G_2$ and $G_3$ must be re-defined as
$$\bar G_2(\theta) = \left[\frac{\int_0^1\psi((1-\mu)\bar G_1(\theta)\theta)\,d\mu}{1-\psi_0(\bar G_1(\theta)\theta)} + \frac{\left(\psi_0(\theta)+\psi_0(\bar G_1(\theta)\theta)\right)\int_0^1 v(\mu\bar G_1(\theta)\theta)\,d\mu}{\left(1-\psi_0(\theta)\right)\left(1-\psi_0(\bar G_1(\theta)\theta)\right)} + \frac{3}{4}\left(\frac{\psi_0(\theta)+\psi_0(\bar G_1(\theta)\theta)}{1-\psi_0(\bar G_1(\theta)\theta)}\right)^2\frac{\int_0^1 v(\mu\bar G_1(\theta)\theta)\,d\mu}{1-\psi_0(\theta)} + \frac{\psi_0(\theta)+\psi_0(\bar G_1(\theta)\theta)}{1-\psi_0(\bar G_1(\theta)\theta)}\,\frac{\int_0^1 v(\mu\bar G_1(\theta)\theta)\,d\mu}{1-\psi_0(\bar G_1(\theta)\theta)}\right]\bar G_1(\theta)$$
and
$$\bar G_3(\theta) = \left[\frac{\int_0^1\psi((1-\mu)\bar G_2(\theta)\theta)\,d\mu}{1-\psi_0(\bar G_2(\theta)\theta)} + \frac{\left(\psi_0(\theta)+\psi_0(\bar G_2(\theta)\theta)\right)\int_0^1 v(\mu\bar G_2(\theta)\theta)\,d\mu}{\left(1-\psi_0(\theta)\right)\left(1-\psi_0(\bar G_2(\theta)\theta)\right)} + \frac{1}{2}\left(\frac{\psi_0(\theta)+\psi_0(\bar G_1(\theta)\theta)}{1-\psi_0(\bar G_1(\theta)\theta)}\right)^2\frac{\int_0^1 v(\mu\bar G_2(\theta)\theta)\,d\mu}{1-\psi_0(\theta)} + \frac{\psi_0(\theta)+\psi_0(\bar G_1(\theta)\theta)}{1-\psi_0(\bar G_1(\theta)\theta)}\int_0^1 v(\mu\bar G_2(\theta)\theta)\,d\mu\right]\bar G_2(\theta),$$
respectively, with $\bar r_2$ and $\bar r_3$ the corresponding minimal solutions of $\bar G_2(\theta) - 1 = 0$ and $\bar G_3(\theta) - 1 = 0$, and $\bar r_1 = r_1$.
The radius corresponding to method (3) is defined similarly by
$$\bar r = \min\{\bar r_i\}, \quad i = 1, 2, 3.$$
Then, we arrive at the following theorem with these changes:
Theorem 2.
Under the conditions (B) with $\tilde r = \bar r$, suppose further that $x_0 \in S(x^*, \bar r) - \{x^*\}$. Then, the sequence $\{x_\sigma\}$ generated by iterative scheme (3) is well defined, remains in $S(x^*, \bar r)$ for all $\sigma = 0, 1, 2, \ldots$, and converges to $x^*$. Moreover, the following assertions hold:
$$\|y_\sigma - x^*\| \le \bar G_1(\|x_\sigma - x^*\|)\|x_\sigma - x^*\| \le \|x_\sigma - x^*\| < \bar r,$$
$$\|z_\sigma - x^*\| \le \bar G_2(\|x_\sigma - x^*\|)\|x_\sigma - x^*\| \le \|x_\sigma - x^*\|,$$
and
$$\|x_{\sigma+1} - x^*\| \le \bar G_3(\|x_\sigma - x^*\|)\|x_\sigma - x^*\| \le \|x_\sigma - x^*\| < \bar r,$$
where the functions $\bar G_i$ are given previously. Furthermore, $x^*$ is the only solution of the equation $F(x) = 0$ in the set $\Omega_2$ given in (B6).
Proof. 
By simply repeating the proof of Theorem 1, but using iterative procedure (3) instead of method (2), we obtain, with $A_\sigma = F'(y_\sigma)^{-1}F'(x_\sigma)$, the decomposition
$$z_\sigma - x^* = \left(y_\sigma - x^* - F'(y_\sigma)^{-1}F(y_\sigma)\right) + F'(y_\sigma)^{-1}\left(F'(x_\sigma) - F'(y_\sigma)\right)F'(x_\sigma)^{-1}F(y_\sigma) + \frac{3}{4}\left(I - A_\sigma\right)^2 F'(x_\sigma)^{-1}F(y_\sigma) + \left(I - A_\sigma\right)F'(y_\sigma)^{-1}F(y_\sigma).$$
Bounding the four terms by (B2)–(B4), exactly as in the proof of Theorem 1, yields the bracket defining $\bar G_2$ evaluated at $\|x_\sigma - x^*\|$, so
$$\|z_\sigma - x^*\| \le \bar G_2(\|x_\sigma - x^*\|)\|x_\sigma - x^*\| \le \|x_\sigma - x^*\|.$$
Similarly, the decomposition
$$x_{\sigma+1} - x^* = \left(z_\sigma - x^* - F'(z_\sigma)^{-1}F(z_\sigma)\right) + F'(z_\sigma)^{-1}\left(F'(x_\sigma) - F'(z_\sigma)\right)F'(x_\sigma)^{-1}F(z_\sigma) + \frac{1}{2}\left(I - A_\sigma\right)^2 F'(x_\sigma)^{-1}F(z_\sigma) + \left(I - A_\sigma\right)F'(y_\sigma)^{-1}F(z_\sigma)$$
leads, in the same way, to
$$\|x_{\sigma+1} - x^*\| \le \bar G_3(\|x_\sigma - x^*\|)\|x_\sigma - x^*\| \le \|x_\sigma - x^*\|.$$
The proof of uniqueness of the solution is given in Theorem 1. □
Next, in order to study the local convergence of iterative procedure (4), we add to the conditions (B) the following condition (B′):
(B′) 
For some continuous and increasing function $H : [0, 2\rho) \to [0, \infty)$, we have, for each $\sigma$,
$$\|Q(t_\sigma)\| \le H(\|x_\sigma - x^*\|).$$
Again, there is no change in the function $G_1$, so we set $\bar{\bar G}_1 = \bar G_1 = G_1$; however, we have to re-define the remaining functions as
$$\bar{\bar G}_2(\theta) = \left[1 + \frac{1}{|\beta|}\,\frac{\int_0^1 v(\mu\bar{\bar G}_1(\theta)\theta)\,d\mu}{1 - \psi_0(\theta)}\right]\bar{\bar G}_1(\theta),$$
$$\bar{\bar G}_3(\theta) = \bar{\bar G}_2(\theta) + \frac{\left|\frac{2(1-\beta)}{\beta}\right|\int_0^1\psi_0(\mu\bar{\bar G}_1(\theta)\theta)\,d\mu\,\bar{\bar G}_1(\theta) + |\beta|\int_0^1 v(\mu\bar{\bar G}_2(\theta)\theta)\,d\mu\,\bar{\bar G}_2(\theta)}{1 - \psi_0(\theta)},$$
and
$$\bar{\bar G}_4(\theta) = \left[1 + H(\theta)\,\frac{\int_0^1 v(\mu\bar{\bar G}_3(\theta)\theta)\,d\mu}{1 - \psi_0(\theta)}\right]\bar{\bar G}_3(\theta).$$
We define the radius of convergence for method (4) by
$$\bar{\bar r} = \min\{\bar{\bar r}_i\}, \quad i = 1, 2, 3, 4,$$
where $\bar{\bar r}_i$ is the smallest positive solution of the equation $\bar{\bar G}_i(\theta) - 1 = 0$.
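For special choices these equations can be solved in closed form. As an illustrative sketch (variable names ours), take $\psi_0(\theta) = \psi(\theta) = L\theta$ and $v(\theta) = 2$ (the setting of Example 1 in Section 3) with $\beta = 1$; then, assuming the reconstructed $\bar{\bar G}_2$ above, the substitution $s = L\theta$ reduces $\bar{\bar G}_2(\theta) = 1$ to $3s^2 - 7s + 2 = 0$, whose root in $(0, 1)$ is $s = 1/3$:

```python
# psi0(t) = psi(t) = L t, v = 2, beta = 1: G2bb(t) = (1 + 2/(1 - L t)) G1(t)
L = 96.6629073                                   # Lipschitz constant of Example 1
G1 = lambda t: (L * t / 2) / (1 - L * t)
G2bb = lambda t: (1 + 2 / (1 - L * t)) * G1(t)

r2 = 1 / (3 * L)                                 # root s = L t = 1/3 in closed form
print(round(r2, 8))  # 0.00344841, the r2 entry for method (4), beta = 1, in Table 1
```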
With these new functions, we arrive at the following theorem:
Theorem 3.
Under the conditions (B) and (B′) with $\tilde r = \bar{\bar r}$, suppose further that $x_0 \in S(x^*, \bar{\bar r}) - \{x^*\}$. Then, the sequence $\{x_\sigma\}$ generated by iterative scheme (4) is well defined, remains in $S(x^*, \bar{\bar r})$ for all $\sigma = 0, 1, 2, \ldots$, and converges to $x^*$. Moreover, the following assertions hold:
$$\|y_\sigma - x^*\| \le \bar{\bar G}_1(\|x_\sigma - x^*\|)\|x_\sigma - x^*\| \le \|x_\sigma - x^*\| < \bar{\bar r},$$
$$\|z_\sigma - x^*\| \le \bar{\bar G}_2(\|x_\sigma - x^*\|)\|x_\sigma - x^*\| \le \|x_\sigma - x^*\|,$$
$$\|w_\sigma - x^*\| \le \bar{\bar G}_3(\|x_\sigma - x^*\|)\|x_\sigma - x^*\| \le \|x_\sigma - x^*\|,$$
and
$$\|x_{\sigma+1} - x^*\| \le \bar{\bar G}_4(\|x_\sigma - x^*\|)\|x_\sigma - x^*\| \le \|x_\sigma - x^*\| < \bar{\bar r},$$
where the functions $\bar{\bar G}_i$ are given previously. Furthermore, $x^*$ is the only solution of the equation $F(x) = 0$ in the set $\Omega_2$ given in (B6).
Proof. 
By simply repeating the proof of Theorem 1 but using iterative procedure (4) instead of method (2), we get, from $z_\sigma - x^* = y_\sigma - x^* - \frac{1}{\beta}F'(x_\sigma)^{-1}F(y_\sigma)$,
$$\|z_\sigma - x^*\| \le \left[1 + \frac{1}{|\beta|}\,\frac{\int_0^1 v(\mu\|y_\sigma - x^*\|)\,d\mu}{1 - \psi_0(\|x_\sigma - x^*\|)}\right]\|y_\sigma - x^*\| \le \bar{\bar G}_2(\|x_\sigma - x^*\|)\|x_\sigma - x^*\| \le \|x_\sigma - x^*\|;$$
from $w_\sigma - x^* = z_\sigma - x^* - F'(x_\sigma)^{-1}\left[\frac{2(1-\beta)}{\beta}F(y_\sigma) + \beta F(z_\sigma)\right]$,
$$\|w_\sigma - x^*\| \le \|z_\sigma - x^*\| + \frac{\left|\frac{2(1-\beta)}{\beta}\right|\int_0^1\psi_0(\mu\|y_\sigma - x^*\|)\,d\mu\,\|y_\sigma - x^*\| + |\beta|\int_0^1 v(\mu\|z_\sigma - x^*\|)\,d\mu\,\|z_\sigma - x^*\|}{1 - \psi_0(\|x_\sigma - x^*\|)} \le \bar{\bar G}_3(\|x_\sigma - x^*\|)\|x_\sigma - x^*\| \le \|x_\sigma - x^*\|;$$
and, from $x_{\sigma+1} - x^* = w_\sigma - x^* - Q(t_\sigma)F'(x_\sigma)^{-1}F(w_\sigma)$,
$$\|x_{\sigma+1} - x^*\| \le \left[1 + H(\|x_\sigma - x^*\|)\,\frac{\int_0^1 v(\mu\|w_\sigma - x^*\|)\,d\mu}{1 - \psi_0(\|x_\sigma - x^*\|)}\right]\|w_\sigma - x^*\| \le \bar{\bar G}_4(\|x_\sigma - x^*\|)\|x_\sigma - x^*\| \le \|x_\sigma - x^*\|.$$
The proof of uniqueness of the solution is given in Theorem 1. □

3. Numerical Examples

Here, we present computational results based on the theoretical results suggested in this paper. We compare the iterative procedures (2)–(4), with
$$Q(t_\sigma) = I + \beta\left(t_\sigma - \left(1 - \frac{1}{\beta}\right)I\right) + \frac{1}{4}\left(6\beta^2 - \beta\right)\left(t_\sigma - \left(1 - \frac{1}{\beta}\right)I\right)^2,$$
on the basis of their radii of convergence. In view of the preceding definition of $H(\theta)$, we choose
$$H(\theta) = 1 + \frac{\psi_0(\theta) + \psi_0(G_1(\theta)\theta)}{2\left(1 - \psi_0(\theta)\right)} + \frac{|6\beta - 1|\left(\psi_0(\theta) + \psi_0(G_1(\theta)\theta)\right)^2}{16|\beta|\left(1 - \psi_0(\theta)\right)^2}$$
for method (4); in this way, hypothesis (B′) is satisfied. We use $[x, y; F] = \int_0^1 F'\left(y + \mu(x - y)\right)d\mu$. We chose a good mixture of standard and applied science problems for the computational results, illustrated in Examples 1–5; the results are listed in Table 1, Table 2, Table 3, Table 4 and Table 5. Additionally, we obtain the computational order of convergence (COC), approximated by
$$\lambda = \frac{\ln\dfrac{\|x_{\sigma+1} - x^*\|}{\|x_\sigma - x^*\|}}{\ln\dfrac{\|x_\sigma - x^*\|}{\|x_{\sigma-1} - x^*\|}}, \quad \text{for } \sigma = 1, 2, \ldots,$$
or the approximated computational order of convergence (ACOC) [19], given by
$$\lambda = \frac{\ln\dfrac{\|x_{\sigma+1} - x_\sigma\|}{\|x_\sigma - x_{\sigma-1}\|}}{\ln\dfrac{\|x_\sigma - x_{\sigma-1}\|}{\|x_{\sigma-1} - x_{\sigma-2}\|}}, \quad \text{for } \sigma = 2, 3, \ldots$$
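Both orders are computed from norms of successive differences only, so no knowledge of $x^*$ is needed for the ACOC. A minimal sketch (names ours; Newton's method for $x^2 - 2 = 0$ is used purely as a hypothetical check, whose ACOC should be close to its order 2):

```python
import math

def acoc(xs):
    """ACOC computed from the last four iterates of a scalar sequence."""
    d = [abs(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
    return math.log(d[-1] / d[-2]) / math.log(d[-2] / d[-3])

xs = [1.5]
for _ in range(3):
    x = xs[-1]
    xs.append(x - (x * x - 2) / (2 * x))   # Newton step for x^2 - 2 = 0
print(acoc(xs))  # close to 2
```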
In addition, we adopt $\epsilon = 10^{-100}$ as the error tolerance, and the terminating criteria for solving nonlinear systems or scalar equations are: (i) $\|x_{\sigma+1} - x_\sigma\| < \epsilon$, and (ii) $\|F(x_\sigma)\| < \epsilon$.
The computations are performed with the package Mathematica 11, using multiple precision arithmetic.
Example 1.
Following the example presented in the Introduction, for $x^* = 1$ we can set
$$\psi_0(\theta) = \psi(\theta) = 96.6629073\,\theta \quad and \quad v(\theta) = 2.$$
In Table 1, we present the radii for Example 1.
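With $\psi_0(\theta) = \psi(\theta) = L\theta$, the equation $G_1(\theta) = 1$ reads $L\theta/2 = 1 - L\theta$, giving the closed-form radius $r_1 = 2/(3L)$; a one-line check against Table 1 (variable names ours):

```python
# G1(t) = (L t / 2) / (1 - L t) = 1  =>  r1 = 2 / (3 L)
L = 96.6629073
r1 = 2 / (3 * L)
print(round(r1, 8))  # 0.00689682, the r1 entry of Table 1
```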
Example 2.
Let $B_1 = B_2 = \mathbb{R}^3$ and $\Omega = \bar S(0, 1)$. Define $F$ on $\Omega$, for $u = (u_1, u_2, u_3)^T$, by
$$F(u) = \left(e^{u_1} - 1,\; \frac{e-1}{2}u_2^2 + u_2,\; u_3\right)^T.$$
Then, we obtain the Fréchet-derivative
$$F'(u) = \begin{pmatrix} e^{u_1} & 0 & 0 \\ 0 & (e-1)u_2 + 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
Hence, for $x^* = (0, 0, 0)^T$ we have $F'(x^*) = F'(x^*)^{-1} = \mathrm{diag}\{1, 1, 1\}$, and we can set
$$\psi_0(\theta) = (e-1)\theta, \quad \psi(\theta) = e^{\frac{1}{e-1}}\theta \quad and \quad v(\theta) = e^{\frac{1}{e-1}}.$$
So, we obtain the convergence radii mentioned in Table 2.
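When $\psi_0(\theta) = L_0\theta$ and $\psi(\theta) = L\theta$ with $L_0 \neq L$, the equation $G_1(\theta) = 1$ reads $L\theta/2 = 1 - L_0\theta$, i.e. $r_1 = 2/(L + 2L_0)$; a quick check of the $r_1$ entry of Table 2 (names ours):

```python
import math

L0 = math.e - 1                   # psi0 slope
L = math.e ** (1 / (math.e - 1))  # psi slope
r1 = 2 / (L + 2 * L0)
print(round(r1, 6))  # 0.382692, the r1 entry of Table 2
```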
Example 3.
The kinematic synthesis problem for steering [20,26] is given as
E i ν 2 sin η i ν 3 H i ν 2 sin φ i ν 3 2 + H i ν 2 cos φ i + 1 H i ν 2 cos η i 1 2 ν 1 ν 2 sin η i ν 3 ν 2 cos φ i + 1 ν 1 ν 2 cos η i ν 3 ν 2 sin φ i ν 3 2 = 0 , for i = 1 , 2 , 3 ,
where
E i = ν 3 ν 2 sin φ i sin φ 0 ν 1 ν 2 sin φ i ν 3 + ν 2 cos φ i cos φ 0 , i = 1 , 2 , 3
and
H i = ν 3 ν 2 sin η i + ν 2 cos η i + ν 3 ν 1 ν 2 sin η 0 + ν 2 cos η 0 + ν 1 ν 3 , i = 1 , 2 , 3 .
In Table 6, we present the values of $\eta_i$ and $\varphi_i$ (in radians).
For $\Omega = \bar S\left(x^*, \frac{1}{8}\right)$, the approximate solution is
$$x^* = (0.9051567, 0.6977417, 0.6508335)^T.$$
Then, we can set
$$\psi_0(\theta) = \psi(\theta) = \theta \quad and \quad v(\theta) = 2.$$
We provide the radii of convergence for Example 3 in Table 3.
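With the functions $G_1, G_2, G_3$ of Section 2 and the above choices $\psi_0(\theta) = \psi(\theta) = \theta$, $v(\theta) = 2$, the radii of Table 3 for method (2) can be reproduced numerically; a small bisection sketch (all names ours):

```python
def bisect(g, lo, hi, tol=1e-12):
    """Bisection for the root of the increasing function g on [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

psi0 = lambda t: t      # psi0 = psi = identity for Example 3
intv = 2.0              # integral of v(mu s) over mu in [0, 1], with v = 2

G1 = lambda t: (t / 2) / (1 - psi0(t))   # since psi(t) = t
def G2(t):
    g = G1(t) * t
    return (1 + 0.25 * ((psi0(t) + psi0(g)) / (1 - psi0(g))) ** 2
            * intv / (1 - psi0(g))) * G1(t)
def G3(t):
    g1, g2 = G1(t) * t, G2(t) * t
    return ((psi0(g2) / 2 / (1 - psi0(g2)))
            + (psi0(g2) + psi0(g1)) * intv / ((1 - psi0(g1)) * (1 - psi0(g2)))
            + 0.5 * ((psi0(t) + psi0(g1)) / (1 - psi0(g1))) ** 2
            * intv / (1 - psi0(g1))) * G2(t)

r1 = bisect(lambda t: G1(t) - 1, 1e-6, 0.99)
r2 = bisect(lambda t: G2(t) - 1, 1e-6, r1)
r3 = bisect(lambda t: G3(t) - 1, 1e-6, r2)
print(r1, r2, r3)  # about 0.666667, 0.518914, 0.435347, as in Table 3
```

Each $r_{i+1}$ is bracketed on $(0, r_i)$, since $G_{i+1}(r_i) > G_i(r_i) = 1$ and the $G_i$ are increasing there.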
Example 4.
Consider the following nonlinear system involving logarithmic functions:
$$H(\nu) = \left(\ln(\nu_1 + 1) - \frac{\nu_1}{20},\; \ldots,\; \ln(\nu_N + 1) - \frac{\nu_N}{20}\right)^T,$$
where $\nu = (\nu_1, \nu_2, \nu_3, \ldots, \nu_N)^T$. For $N = 50$, the required zero is $x^* = (0, 0, 0, \ldots, 0)^T$. Then, for $\Omega = \bar S\left(x^*, \frac{1}{20}\right)$, we can set
$$\psi_0(\theta) = \psi(\theta) = \frac{1}{2}\theta \quad and \quad v(\theta) = 2.$$
The radii of convergence for Example 4 are given in Table 4.
Example 5.
Let $B_1 = B_2 = C[0, 1]$, the space of functions continuous on $[0, 1]$ equipped with the max norm, and let $\Omega = \bar S(0, 1)$. Consider the function $F$ on $\Omega$ given by
$$F(\phi)(x) = \phi(x) - 5\int_0^1 x\tau\,\phi(\tau)^3\,d\tau,$$
which yields the Fréchet-derivative
$$F'(\phi)(\mu)(x) = \mu(x) - 15\int_0^1 x\tau\,\phi(\tau)^2\mu(\tau)\,d\tau, \quad for \; \mu \in \Omega.$$
We have $x^* = 0$ and can set
$$\psi_0(\theta) = 7.5\,\theta, \quad \psi(\theta) = 15\,\theta \quad and \quad v(\theta) = 2.$$
We list the radii of convergence for Example 5 in Table 5.
Remark 1.
We notice that, in all five examples, method (2) has a larger radius of convergence than the other listed methods. So, we conclude that method (2) is better than methods (3) and (4) in terms of the set of convergent starting points and the domain of convergence.

4. Conclusions

A comparative study was presented for three high convergence order methods utilizing only the first derivative (and the divided difference of order one), which are the only operators appearing in these methods. Our analysis provides error bounds and results on the uniqueness of $x^*$ that can be computed using majorant functions. In earlier studies, these concerns were not addressed, and the procedures were restricted to operators possessing ninth-order derivatives, which do not appear in these methods. Since our technique is so general, it can be used to extend other procedures as well. In our numerical experiments, a comparison of the convergence radii is given.

Author Contributions

R.B. and I.K.A.: Conceptualization; Methodology; Validation; Writing—Original Draft Preparation; Writing—Review & Editing, F.O.M.: Review & Editing. All authors have read and agreed to the published version of the manuscript.

Funding

Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia, under Grant No. G-110-130-1441.

Acknowledgments

This project was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia, under Grant No. G-110-130-1441. The authors, therefore, acknowledge with thanks DSR for technical and financial support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ostrowski, A.M. Solutions of Equations and System of Equations; Academic Press: New York, NY, USA, 1964. [Google Scholar]
  2. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall Series in Automatic Computation; Chelsea Publishing Company: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  3. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  4. Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations; Academic Press: Cambridge, MA, USA, 2012. [Google Scholar]
  5. Burden, R.L.; Faires, J.D. Numerical Analysis; PWS Publishing Company: Boston, MA, USA, 2001. [Google Scholar]
  6. Amat, S.; Argyros, I.K.; Busquier, S.; Magreñán, A.A. Local convergence and the dynamics of a two-point four parameter Jarratt-like method under weak conditions. Numer. Algorith. 2017, 74, 371–391. [Google Scholar] [CrossRef]
  7. Amiri, A.; Cordero, A.; Darvishi, M.T.; Torregrosa, J.R. Stability analysis of a parametric family of seventh-order iterative methods for solving nonlinear systems. Appl. Math. Comput. 2018, 323, 43–57. [Google Scholar] [CrossRef]
  8. Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Drawing dynamical and parameter planes of iterative families and methods. Sci. World J. 2013, 2013, 506–519. [Google Scholar] [CrossRef]
  9. Cordero, A.; Garcéa-Maimó, J.; Torregrosa, J.R.; Vassileva, M.P.; Vindel, P. Chaos in King’s iterative family. Appl. Math. Lett. 2013, 26, 842–848. [Google Scholar] [CrossRef] [Green Version]
  10. Cordero, A.; Garcéa-Maimó, J.; Torregrosa, J.R.; Vassileva, M.P. Multidimensional stability analysis of a family of biparametric iterative methods. J. Math. Chem. 2017, 55, 1461–1480. [Google Scholar] [CrossRef] [Green Version]
  11. Cordero, A.; Gómez, E.; Torregrosa, J.R. Efficient high-order iterative methods for solving nonlinear systems and their application on heat conduction problems. Complexity 2017, 2017, 6457532. [Google Scholar] [CrossRef] [Green Version]
  12. Cordero, A.; Gutiérrez, J.M.; Magreñán, A.A.; Torregrosa, J.R. Stability analysis of a parametric family of iterative methods for solving nonlinear models. Appl. Math. Comput. 2016, 285, 26–40. [Google Scholar] [CrossRef]
  13. Cordero, A.; Soleymani, F.; Torregrosa, J.R. Dynamical analysis of iterative methods for nonlinear systems or how to deal with the dimension. Appl. Math. Comput. 2014, 244, 398–412. [Google Scholar] [CrossRef]
  14. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
  15. Geum, Y.H.; Kim, Y.I.; Neta, B. A sixth-order family of three-point modified newton-like multiple-root finders and the dynamics behind their extraneous fixed points. Appl. Math. Comput. 2016, 283, 120–140. [Google Scholar] [CrossRef] [Green Version]
  16. Gutiérrez, J.M.; Hernández, M.A.; Romero, N. Dynamics of a new family of iterative processes for quadratic polynomials. J. Comput. Appl. Math. 2010, 233, 2688–2695. [Google Scholar] [CrossRef] [Green Version]
  17. Gutiérrez, J.M.; Plaza, S.; Romero, N. Dynamics of a fifth-order iterative method. Int. J. Comput. Math. 2012, 89, 822–835. [Google Scholar] [CrossRef]
  18. Sharma, J.R.; Arora, H. Improved Newton-like methods for solving systems of nonlinear equations. SeMA J. 2017, 74, 147–163. [Google Scholar] [CrossRef]
  19. Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
  20. Awawdeh, F. On new iterative method for solving systems of nonlinear equations. Numer. Algor. 2010, 54, 395–409. [Google Scholar] [CrossRef]
  21. Argyros, I.; Behl, R.; Motsa, S.S. Local Convergence of an Optimal Eighth Order Method under Weak Conditions. Algorithms 2015, 8, 645–655. [Google Scholar] [CrossRef] [Green Version]
  22. Behl, R.; Cordero, A.; Motsa, S.S.; Torregrosa, J.R. Stable high-order iterative methods for solving nonlinear models. Appl. Math. Comput. 2017, 303, 70–88. [Google Scholar] [CrossRef]
  23. Blanchard, P. The dynamics of Newton’s method. Proc. Symp. Appl. Math. 1994, 49, 139–154. [Google Scholar]
  24. Blanchard, P. Complex analytic dynamics on the Riemann sphere. Bull. AMS 1984, 11, 85–141. [Google Scholar] [CrossRef] [Green Version]
  25. Magreñán, A.A. Different anomalies in a Jarratt family of iterative root-finding methods. Appl. Math. Comput. 2014, 233, 29–38. [Google Scholar]
  26. Tsoulos, I.G.; Stavrakoudis, A. On locating all roots of systems of nonlinear equations inside bounded domain using global optimization methods. Nonlinear Anal. Real World Appl. 2010, 11, 2465–2471. [Google Scholar] [CrossRef]
Table 1. Radii for Example 1.

Methods   β      r1           r2           r3           r4            r             x0       σ    λ
(2)       -      0.00689682   0.00536828   0.00450397   -             0.00450397    1.004    2    7.9936
(3)       -      0.00689682   0.00376088   0.00298051   -             0.00298051    1.001    2    7.9989
(4)       1      0.00689682   0.00344841   0.0015606    0.000621105   0.000621105   1.0005   2    6.9996
(4)       -0.2   0.00689682   0.00141825   0.00105789   0.00039502    0.00039502    1.0003   2    7.9997

It is straightforward to see that method (2) is better than the other listed methods, because it has the largest radius of convergence.
Table 2. Radii for Example 2.

Methods   β      r1         r2         r3          r4          r           x0                        σ    λ
(2)       -      0.382692   0.30056    0.254218    -           0.254218    (0.2, 0.2, 0.2)^T         3    7.8793
(3)       -      0.382692   0.214132   0.172611    -           0.172611    (0.16, 0.16, 0.16)^T      3    8.0000
(4)       1      0.382692   0.198328   0.0949498   0.040525    0.040525    (0.04, 0.04, 0.04)^T      3    6.9999
(4)       -0.2   0.382692   0.083595   0.0640997   0.0257119   0.0257119   (0.021, 0.021, 0.021)^T   3    7.9987

On the basis of the above table, method (2) has a larger radius of convergence than the other listed methods. So, we conclude that it is better than methods (3) and (4).
Table 3. Radii for Example 3.

Methods   β      r1         r2         r3          r4          r           x0                     σ    λ
(2)       -      0.666667   0.518914   0.435347    -           0.435347    (1, 0.79, 0.75)^T      5    5.9774
(3)       -      0.666667   0.363537   0.288104    -           0.288104    (1.01, 0.82, 0.72)^T   4    5.9920
(4)       1      0.666667   0.333333   0.150852    0.0600378   0.0600378   (0.85, 0.64, 0.61)^T   4    5.8733
(4)       -0.2   0.666667   0.137092   0.102259    0.0381838   0.0381838   (0.88, 0.67, 0.63)^T   4    5.9998

Method (2) has a larger radius of convergence than methods (3) and (4). This means that method (2) has a wider domain for the choice of starting points; that is, method (2) has more convergent points than methods (3) and (4).
Table 4. Radii of convergence for Example 4.

Methods   β      r1        r2         r3         r4          r           x0                       σ    λ
(2)       -      1.33333   1.03783    0.870694   -           0.870694    (0.8, 0.8, …, 0.8)^T     3    8.0001
(3)       -      1.33333   0.727075   0.576209   -           0.576209    (0.5, 0.5, …, 0.5)^T     3    8.0000
(4)       1      1.33333   0.666667   0.301704   0.120076    0.120076    (0.11, 0.11, …, 0.11)^T  3    7.0000
(4)       -0.2   1.33333   0.274184   0.204518   0.0763676   0.0763676   (0.07, 0.07, …, 0.07)^T  3    8.0000

We notice from the above table that method (2) offers better choices of starting points than methods (3) and (4), since the latter have a smaller domain of convergence.
Table 5. Radii of convergence for Example 5.

Methods   β      r1          r2          r3           r4           r
(2)       -      0.0666667   0.0527602   0.0438416    -            0.0438416
(3)       -      0.0666667   0.0368014   0.0303958    -            0.0303958
(4)       1      0.0666667   0.0292298   0.0118907    0.00440901   0.00440901
(4)       -0.2   0.0666667   0.0103807   0.00760454   0.00270217   0.00270217

On the basis of the above table, method (2) has a larger domain of convergence than methods (3) and (4).
Table 6. Values of η_i and φ_i (in radians) for Example 3.

i   η_i                     φ_i
0   1.3954170041747090114   1.7461756494150842271
1   1.7444828545735749268   2.0364691127919609051
2   2.0656234369405315689   2.2390977868265978920
3   2.4600678478912500533   2.4600678409809344550