Article

Extended Seventh Order Derivative Free Family of Methods for Solving Nonlinear Equations

by Ramandeep Behl 1,*, Ioannis K. Argyros 2, Fouad Othman Mallawi 1 and Sattam Alharbi 1,3

1 Department of Mathematics, Faculty of Science, King Abdulaziz University, P.O. Box 80203, Jeddah 21589, Saudi Arabia
2 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
3 Department of Mathematics, College of Science and Humanities in Al-Kharj, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(3), 736; https://doi.org/10.3390/math11030736
Submission received: 26 December 2022 / Revised: 25 January 2023 / Accepted: 25 January 2023 / Published: 1 February 2023

Abstract:
A plethora of applications from the computational sciences can be identified for a system of nonlinear equations in an abstract space. These equations are mostly solved with an iterative method, because an analytical method does not exist for such problems. The convergence of the method is established by sufficient conditions. Recently, there has been a surge in the development of methods of high convergence order. Local convergence results reveal the degree of difficulty in choosing the initial points. However, these methods may converge even in cases not guaranteed by the conditions. Moreover, it is not known in advance how many iterations should be carried out to reach a certain error tolerance. Furthermore, no computable information is provided about the isolation of the solution in a certain region containing it. The aforementioned concerns constitute the motivation for writing this article. The novelty of the work is the expansion of the applicability of the method under ω-continuity conditions on the involved operator. The technique is demonstrated using a derivative-free three-step method of convergence order seven. However, it was found that it can be used with the same effectiveness on other methods containing inverses of linear operators. The technique uses only information about the operators appearing in the method. This is in contrast to earlier works, which utilized derivatives or divided differences not appearing in the method and which may not even exist for the problem at hand. The numerical experiments complement the theory.

1. Introduction

Finding an iterative method to approximate a locally unique solution x* of the following nonlinear system is one of the most challenging and difficult tasks:
F(x) = 0.  (1)
Here, the operator F is defined on a subset D of a Banach space X with values in itself. Most iterative methods that generate a sequence of iterates require certain initial hypotheses to converge to x*.
A popular iterative method is defined for all σ = 0, 1, 2, … as
x_{σ+1} = x_σ − F′(x_σ)^{−1} F(x_σ).  (2)
This is the so-called Newton's method, which is only quadratically convergent [1,2,3]. One of its biggest drawbacks is that it requires the inversion of the Fréchet derivative at each step. Moreover, computing the derivative of a nonlinear operator is a challenging and time-consuming task, and some Fréchet derivatives do not even exist. Therefore, scholars have focused on derivative-free iterative methods.
If the derivative F′ in scheme (2) is replaced by a divided difference of order one, an efficient derivative-free iterative method is obtained, defined for all σ = 0, 1, 2, … as
x_{σ+1} = x_σ − F[u_σ, x_σ]^{−1} F(x_σ),  u_σ = x_σ + F(x_σ),  (3)
where F[u_σ, x_σ] : D × D → ℒ(X) is a divided difference of order one for F, and ℒ(X) stands for the space of continuous linear operators from X into itself. This is the so-called Steffensen's method, which is also quadratically convergent [4,5,6].
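As an illustration of how little scheme (3) needs beyond evaluations of F, the following is a minimal scalar sketch (illustrative code; the test equation t³ − 2 = 0 and all names are our own choices, not from the paper):

```python
def steffensen(F, x, tol=1e-12, max_iter=50):
    """Steffensen's method (3): Newton's scheme with F'(x) replaced by the
    first-order divided difference F[u, x] = (F(u) - F(x)) / (u - x),
    where u = x + F(x)."""
    for _ in range(max_iter):
        fx = F(x)
        if abs(fx) < tol:
            break
        u = x + fx
        dd = (F(u) - fx) / (u - x)  # divided difference F[u, x]; note u - x = F(x)
        x = x - fx / dd
    return x

# Solve t^3 - 2 = 0 starting from x0 = 1.2; the root is 2**(1/3)
root = steffensen(lambda t: t**3 - 2.0, 1.2)
```

Each step costs two evaluations of F and no derivatives, which is exactly the trade underlying the higher-order schemes below.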
Other single-step methods are the secant [7,8] and the Kurchatov [9] methods, defined, respectively, by
x_{σ+1} = x_σ − F[x_{σ−1}, x_σ]^{−1} F(x_σ)  and  x_{σ+1} = x_σ − F[2x_σ − x_{σ−1}, x_{σ−1}]^{−1} F(x_σ).  (4)
The convergence orders of these methods are (1 + √5)/2 and 2, respectively. To elevate the convergence order as well as the efficiency, a plethora of iterative methods has been developed [10,11,12,13]. Among them, special attention has been paid to the local convergence of the method studied in [14,15], which is defined for x_0 ∈ D by
y_σ = x_σ − F[u_σ, v_σ]^{−1} F(x_σ),  u_σ = x_σ + F(x_σ),  v_σ = x_σ − F(x_σ),  σ = 0, 1, 2, …,
z_σ = y_σ − (3I − 2F[u_σ, v_σ]^{−1} F[y_σ, x_σ]) F[u_σ, v_σ]^{−1} F(y_σ),
x_{σ+1} = z_σ − ((13/4)I − F[u_σ, v_σ]^{−1} F[z_σ, y_σ] ((7/2)I − (5/4) F[u_σ, v_σ]^{−1} F[z_σ, y_σ])) F[u_σ, v_σ]^{−1} F(z_σ).  (5)
The motivation, construction, validity and comparison to other methods using similar information are explained in [14,15]. Taylor series expansions and derivatives of the eighth order are used in that convergence analysis, although such derivatives do not appear anywhere in the expression (5). In this way, the convergence order seven was determined. Notice that no computable error bounds or information on the uniqueness of the solution are obtained using, for example, Lipschitz-type functions. Moreover, a region of uniqueness for the solution is not specified either. These problems limit the applicability of the method.
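To make the structure of the three substeps of (5) concrete, here is a scalar sketch of one full step (a hedged illustration: the usual scalar divided difference (F(x) − F(y))/(x − y) is used, and the test equation t² − 2 = 0 is our own choice, not from [14,15]):

```python
def dd(F, x, y):
    # Scalar first-order divided difference F[x, y] = (F(x) - F(y)) / (x - y)
    return (F(x) - F(y)) / (x - y)

def method5_step(F, x):
    """One step of the three-step seventh-order scheme (5) in the scalar case."""
    u, v = x + F(x), x - F(x)
    A = dd(F, u, v)                  # F[u, v]
    y = x - F(x) / A                 # first substep
    T1 = dd(F, y, x) / A             # F[u, v]^{-1} F[y, x]
    z = y - (3 - 2 * T1) * F(y) / A  # second substep
    T2 = dd(F, z, y) / A             # F[u, v]^{-1} F[z, y]
    return z - (13/4 - T2 * (7/2 - (5/4) * T2)) * F(z) / A  # third substep

F = lambda t: t * t - 2.0
x = 1.5
for _ in range(2):                   # two steps already reach machine precision
    x = method5_step(F, x)
```

Note that all three substeps reuse the single inverse (here, the single division by A), which is the source of the method's low cost per cycle.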
A motivational example is considered. Define the function F on X = ℝ with D = [0.4, 1.3] by
F(τ) = τ² ln τ² + τ⁴ − τ³ for τ ≠ 0, and F(0) = 0.  (6)
The second derivative of the function F does not exist on D. Therefore, results requiring the existence of F″(τ) or higher-order derivatives cannot be applied when studying the convergence of (5) to τ* = 1. Notice that there is a plethora of iteration functions [16,17,18,19,20,21,22] used for the solution of equations that raise the same concerns. It is mentioned in these articles that x_0 should be sufficiently close to x* for convergence to be achieved. However, nothing is said about how close x_0 must be to x*. That is, the choice of the starting point is a shot in the "dark". Hence, the radius of the convergence ball is required. The counterexample (6) can also be used against other methods proposed in [6,7,22,23]. Local convergence results are important, since they demonstrate how difficult it is to choose the starting point. In this article, only hypotheses on the continuity of the operators appearing in method (5) are employed. Moreover, an approach based on Lipschitz-type functions is developed to find the convergence radii as well as the error estimates and the uniqueness-of-solution results. Furthermore, a range for the starting guess x_0 is given that guarantees the convergence of (5).

2. Convergence

Parameters and functions are used in the local convergence analysis of method (5). Set A = [0, ∞) and consider parameters a ≥ 0, b ≥ 0 and d ≥ 0.
Suppose:
(1)
The function p_0(τ) − 1 has a minimal zero S_0 ∈ A \ {0} for some continuous and nondecreasing function p_0 : A → A. Set A_0 = [0, S_0).
(2)
The function q_1(τ) − 1 has a minimal zero R_1 ∈ A_0 \ {0} for some continuous and nondecreasing function p : A_0 → A, where the function q_1 : A_0 → A is defined by
q_1(τ) = p(τ) / (1 − p_0(τ)).
(3)
The function q_2(τ) − 1 has a minimal zero R_2 ∈ A_0 \ {0} for some continuous and nondecreasing functions p_1 : A_0 → A and p_3 : A_0 → A, where the function q_2 : A_0 → A is defined by
q_2(τ) = [ p_3(τ) / (1 − p_0(τ)) + 2d (p_0(τ) + p_1(τ)) / (1 − p_0(τ))² ] q_1(τ).
(4)
The function q_3(τ) − 1 has a minimal zero R_3 ∈ A_0 \ {0} for some continuous and nondecreasing functions p_4 : A_0 → A and p_5 : A_0 → A, where the function q_3 : A_0 → A is defined by
q_3(τ) = [ p_4(τ) / (1 − p_0(τ)) + d h(τ)(5h(τ) + 4) / (4(1 − p_0(τ))) ] q_2(τ),
where
h(τ) = (p_0(τ) + p_5(τ)) / (1 − p_0(τ)).
The parameter R defined by
R = min{R_j},  j = 1, 2, 3,
is proven to be a convergence radius for method (5). Set A_1 = [0, R).
(5)
The definition of R implies that for all τ ∈ A_1,
0 ≤ p_0(τ) < 1
and
0 ≤ q_j(τ) < 1
hold.
The notations U(x*, ρ) and U[x*, ρ] are used to denote the open and closed balls in X, respectively, with center x* and radius ρ > 0. The functions p are as previously given and x* is a simple solution of Equation (1). Moreover, the following conditions are considered. Suppose:
(H1)
For all x ∈ D:
‖F′(x*)^{−1}(F[x + F(x), x − F(x)] − F′(x*))‖ ≤ p_0(‖x − x*‖),  ‖I + F[x, x*]‖ ≤ a  and  ‖I − F[x, x*]‖ ≤ b.
Set D_0 = U[x*, R] ∩ D.
(H2)
For all x ∈ D_0:
‖F′(x*)^{−1}(F[x + F(x), x − F(x)] − F[x, x*])‖ ≤ p(‖x − x*‖),
‖F′(x*)^{−1}(F[y, x] − F′(x*))‖ ≤ p_1(‖x − x*‖),
‖F′(x*)^{−1}(F[x, x*] − F′(x*))‖ ≤ p_2(‖x − x*‖),
‖F′(x*)^{−1}(F[x + F(x), x − F(x)] − F[y, x*])‖ ≤ p_3(‖x − x*‖),
‖F′(x*)^{−1}(F[x + F(x), x − F(x)] − F[z, x*])‖ ≤ p_4(‖x − x*‖),
‖F′(x*)^{−1}(F[z, y] − F′(x*))‖ ≤ p_5(‖x − x*‖), and
‖F′(x*)^{−1} F[x, x*]‖ ≤ d,
and
(H3)
U[x*, R*] ⊂ D, where R* = max{aR, bR, R}.
Next, the local convergence of method (5) is developed with the preceding notation.
Theorem 1. 
Suppose that the hypotheses (H1)–(H3) hold, and select a starting point x_0 ∈ U(x*, R) \ {x*}. Then, the following error bounds hold for all σ = 0, 1, 2, …:
‖y_σ − x*‖ ≤ q_1(‖x_σ − x*‖) ‖x_σ − x*‖ ≤ ‖x_σ − x*‖ < R,  (10)
‖z_σ − x*‖ ≤ q_2(‖x_σ − x*‖) ‖x_σ − x*‖ ≤ ‖x_σ − x*‖,  (11)
and
‖x_{σ+1} − x*‖ ≤ q_3(‖x_σ − x*‖) ‖x_σ − x*‖ ≤ ‖x_σ − x*‖,  (12)
with the functions q_j (j = 1, 2, 3) and the radii R and R* as previously given. Moreover, the sequence {x_σ} produced by method (5) converges to x*.
Proof. 
Items (10)–(12) are proven using induction. Let x ∈ U(x*, R) \ {x*} be arbitrary. Then, it follows by (H1) that
‖x + F(x) − x*‖ = ‖(I + F[x, x*])(x − x*)‖ ≤ ‖I + F[x, x*]‖ ‖x − x*‖ ≤ aR ≤ R*.
Similarly, the following is obtained:
‖x − F(x) − x*‖ = ‖(I − F[x, x*])(x − x*)‖ ≤ ‖I − F[x, x*]‖ ‖x − x*‖ ≤ bR ≤ R*.
Thus, the points x + F(x) and x − F(x) belong to U[x*, R*] ⊂ D (by (H3)). The application of (H1), (7) and (8) leads to
‖F′(x*)^{−1}(F[x + F(x), x − F(x)] − F′(x*))‖ ≤ p_0(‖x − x*‖) ≤ p_0(R) < 1.
Hence, F[x + F(x), x − F(x)]^{−1} ∈ ℒ(X) with
‖F[x + F(x), x − F(x)]^{−1} F′(x*)‖ ≤ 1 / (1 − p_0(‖x − x*‖)),
due to a lemma by Banach on invertible operators [1,2,8,23]. Notice also that for x = x_0, the iterate y_0 exists according to the first substep of method (5) for σ = 0, and it can be written as
y_0 − x* = F[u_0, v_0]^{−1}(F[u_0, v_0] − F[x_0, x*])(x_0 − x*).
The usage of (7), (9) (for j = 1), (H2), (13) (for x = x_0) and (14) leads to
‖y_0 − x*‖ ≤ p(‖x_0 − x*‖) ‖x_0 − x*‖ / (1 − p_0(‖x_0 − x*‖)) ≤ q_1(‖x_0 − x*‖) ‖x_0 − x*‖ ≤ ‖x_0 − x*‖ < R,
proving that the iterate y_0 ∈ U[x*, R] and that (10) holds for σ = 0. The iterates z_0 and x_1 also exist, by (13) and the second and third substeps of method (5) for σ = 0. The second substep can be written as
z_0 − x* = y_0 − x* − F[u_0, v_0]^{−1} F(y_0) − 2F[u_0, v_0]^{−1}(F[u_0, v_0] − F′(x*) + F′(x*) − F[y_0, x_0]) F[u_0, v_0]^{−1} F(y_0).
By (7), (9) (for j = 2), (H2), (13) (for x = x_0), (15), (16) and the triangle inequality, it follows that
‖z_0 − x*‖ ≤ [ p_3(‖x_0 − x*‖) / (1 − p_0(‖x_0 − x*‖)) + 2d (p_0(‖x_0 − x*‖) + p_1(‖x_0 − x*‖)) / (1 − p_0(‖x_0 − x*‖))² ] ‖y_0 − x*‖ ≤ q_2(‖x_0 − x*‖) ‖x_0 − x*‖ ≤ ‖x_0 − x*‖,
proving that the iterate z_0 ∈ U[x*, R] and that (11) holds for σ = 0. The following estimate is also needed:
‖F[u_0, v_0]^{−1} F[z_0, y_0] − I‖ = ‖F[u_0, v_0]^{−1}(F[z_0, y_0] − F′(x*) + F′(x*) − F[u_0, v_0])‖ ≤ (p_0(‖x_0 − x*‖) + p_5(‖x_0 − x*‖)) / (1 − p_0(‖x_0 − x*‖)) = h(‖x_0 − x*‖) := h_0.
Moreover, according to the third substep of method (5), it follows that
x_1 − x* = z_0 − x* − F[u_0, v_0]^{−1} F(z_0) − (1/4)[ 5(F[u_0, v_0]^{−1} F[z_0, y_0] − I)² − 4(F[u_0, v_0]^{−1} F[z_0, y_0] − I) ] F[u_0, v_0]^{−1} F(z_0).
Next, using (7), (9) (for j = 3), (H2), (13) (for x = x_0), (15), (17)–(19), and the triangle inequality, the following is obtained:
‖x_1 − x*‖ ≤ [ p_4(‖x_0 − x*‖) / (1 − p_0(‖x_0 − x*‖)) + d(5h_0² + 4h_0) / (4(1 − p_0(‖x_0 − x*‖))) ] ‖z_0 − x*‖ ≤ q_3(‖x_0 − x*‖) ‖x_0 − x*‖ ≤ ‖x_0 − x*‖,
proving that the iterate x_1 ∈ U[x*, R] and that (12) holds for σ = 0. Hence, estimates (10)–(12) are shown for σ = 0. Simply replace x_0, y_0, z_0, x_1 by x_i, y_i, z_i, x_{i+1} in the preceding calculations to complete the induction for items (10)–(12). It then follows from the estimate
‖x_{i+1} − x*‖ ≤ c ‖x_i − x*‖ < R,  where c = q_3(‖x_0 − x*‖) ∈ [0, 1),
that the iterate x_{i+1} ∈ U[x*, R] and lim_{i→∞} x_i = x*. □
A uniqueness region for the solution x * is established in the next result.
Proposition 1. 
Suppose that the following conditions hold:
(H4)
There exists a solution z* ∈ U(x*, r) of the equation F(x) = 0 for some r > 0.
(H5)
The third condition in (H2) holds on the ball U(x*, r),
and
(H6)
There exists R_1 > r such that
p_2(R_1) < 1.
Define the region D_1 = U[x*, R_1] ∩ D. Then, the only solution of the equation F(x) = 0 in the region D_1 is x*.
Proof. 
Define the linear operator M = F[z*, x*]. By applying the conditions (H4)–(H6), it follows that
‖F′(x*)^{−1}(M − F′(x*))‖ ≤ p_2(‖z* − x*‖) ≤ p_2(R_1) < 1.
Thus, the linear operator M is invertible. Then, it follows from
z* − x* = M^{−1}(F(z*) − F(x*)) = M^{−1}(0) = 0
that z* = x*. □
Remark 1. 
(a)
The following choices are considered:
F[x, y] = (1/2)(F′(x) + F′(y))
or
F[x, y] = ∫_0^1 F′(x + θ(y − x)) dθ,
or the standard definition of the divided difference when X = ℝ^i [6,7,13,23].
Moreover, suppose that
‖F′(x*)^{−1}(F′(x) − F′(x*))‖ ≤ φ_0(‖x − x*‖)
and
‖F′(x*)^{−1}(F′(x) − F′(y))‖ ≤ φ(‖x − y‖),
where the functions φ_0 : A → A and φ : A → ℝ are continuous and nondecreasing (see also Examples 1 and 2).
(b)
Conditions (H1)–(H3) can be condensed by using instead the classical condition for studying methods involving divided differences, namely
‖F′(x*)^{−1}(F[θ_1, θ_2] − F[θ_3, θ_4])‖ ≤ p_6(‖θ_1 − θ_3‖, ‖θ_2 − θ_4‖)
for all θ_1, θ_2, θ_3, θ_4 ∈ D, where the function p_6 : A × A → A is continuous and nondecreasing in both variables.
(c)
Clearly, under all the conditions (H1)–(H3), one can set r = R in Proposition 1.
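The two choices in part (a) behave differently: the integral form satisfies the secant-type identity F[x, y](x − y) = F(x) − F(y) exactly, while the averaged-derivative form is in general only an approximation to it. A small numerical check (the toy 2-D operator and its Jacobian below are our own illustration, not from the paper):

```python
import numpy as np

def F(w):
    # Toy 2-D operator with a known Jacobian, used only to test the choices
    return np.array([w[0]**2 - w[1], w[0] + w[1]**3])

def J(w):
    # Exact Fréchet derivative (Jacobian) of F
    return np.array([[2.0*w[0], -1.0], [1.0, 3.0*w[1]**2]])

x = np.array([1.0, 0.5])
y = np.array([0.2, -0.3])

# Choice 1: F[x, y] = (F'(x) + F'(y)) / 2
dd_avg = 0.5 * (J(x) + J(y))

# Choice 2: F[x, y] = integral of F'(x + t(y - x)) over t in [0, 1],
# computed with 8-point Gauss-Legendre quadrature (exact here, since the
# Jacobian entries are polynomials of low degree along the segment)
nodes, weights = np.polynomial.legendre.leggauss(8)
nodes = 0.5 * (nodes + 1.0)   # map nodes from [-1, 1] to [0, 1]
weights = 0.5 * weights
dd_int = sum(w * J(x + t * (y - x)) for t, w in zip(nodes, weights))

# The integral form reproduces F(x) - F(y) exactly; the averaged form does not
exact = np.allclose(dd_int @ (x - y), F(x) - F(y))
```

This distinction matters in practice: with the integral choice, the divided-difference operator inherits the defining secant property used throughout the convergence analysis.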

3. Numerical Applications

This section is dedicated to the application of the theoretical results of the earlier sections. Some nonlinear problems are chosen for computational comparisons. The numerical results are listed in Table 1, Table 2, Table 3 and Table 4. Further, the computational order of convergence (COC) for an iterative sequence {ν_σ} in X, given by
λ = ln( ‖ν_{σ+1} − ν*‖ / ‖ν_σ − ν*‖ ) / ln( ‖ν_σ − ν*‖ / ‖ν_{σ−1} − ν*‖ ),  for σ = 1, 2, …,  (22)
is used, as well as the approximated computational order of convergence (ACOC) [22], given by
λ* = ln( ‖ν_{σ+1} − ν_σ‖ / ‖ν_σ − ν_{σ−1}‖ ) / ln( ‖ν_σ − ν_{σ−1}‖ / ‖ν_{σ−1} − ν_{σ−2}‖ ),  for σ = 2, 3, …  (23)
The important point is that formulas (22) and (23) do not require any derivatives of the involved operator F for the determination of COC and ACOC. The following termination criteria are used for the solutions of the nonlinear systems:
(i)
‖ν_{σ+1} − ν_σ‖ < ε, and
(ii)
‖F(ν_σ)‖ < ε, where ε = 10^{−100}.
All the computational work is performed with the help of Mathematica 11. In addition, multiple-precision arithmetic is employed for better numerical results.
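Since (23) uses only differences of successive iterates, it is easy to monitor at run time; a minimal helper (illustrative code, not the paper's Mathematica implementation):

```python
import math

def acoc(d1, d2, d3):
    """Approximated computational order of convergence (23), where
    d1 = ||x_{s+1} - x_s||, d2 = ||x_s - x_{s-1}||, d3 = ||x_{s-1} - x_{s-2}||."""
    return math.log(d1 / d2) / math.log(d2 / d3)

# Hypothetical differences from a quadratically convergent run (e_{k+1} ~ e_k^2)
order = acoc(1e-8, 1e-4, 1e-2)
```

For this idealized sequence the helper returns 2.0, matching the expected quadratic order.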
Example 1. 
Let X = ℝ³ and D = U[0, 1]. Then, for w = (w_1, w_2, w_3)^T, consider the operator F defined by
F(w) = ( w_1, e^{w_2} − 1, ((e − 1)/2) w_3² + w_3 )^T.  (24)
The derivative F′ is calculated as
F′(w) = diag( 1, e^{w_2}, (e − 1) w_3 + 1 ).
The solution x* = (0, 0, 0)^T is verified using (24). Consequently, F′(x*) = F′(x*)^{−1} = diag{1, 1, 1} = I. Then, under the first or the second choice listed in Remark 1, the conditions (H1)–(H3) are verified, provided that
a = c = d = (1/2)(1 + e^{1/(e−1)}),  b = (1/2)(3 + e^{1/(e−1)}),  φ_0(τ) = (e − 1)τ,  φ(τ) = e^{1/(e−1)} τ,
p_0(τ) = (1/2)(φ_0(aτ) + φ_0(bτ)),  p(τ) = (1/2)(φ(cτ) + φ_0(bτ)),  p_1(τ) = (1/2)(φ_0(q_1(τ)τ) + φ_0(τ)),  p_2(τ) = (1/2)φ_0(τ),
p_3(τ) = (1/2)(φ((a + q_1(τ))τ) + φ_0(bτ)),  p_4(τ) = (1/2)(φ((a + q_2(τ))τ) + φ_0(bτ)),  and  p_5(τ) = (1/2)(φ_0(q_2(τ)τ) + φ_0(q_1(τ)τ)).
Table 1 provides the radii, number of iterations, convergence order, CPU timing and initial approximation of method (5) for Example 1.
Table 1. Numerical results of Example 1.
| Case | R | R* | x_0 | ‖F(x_σ)‖ | ‖x_{σ+1} − x_σ‖ | σ | λ | CPU timing |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Method (5) | 0.07531 | 0.180351 | (17/100, 17/100, 17/100)^T | 8.39844 × 10^{−222} | 8.39844 × 10^{−222} | 3 | 6.93750 | 0.20457 |
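As a cross-check of Table 1 that is independent of the paper's Mathematica setup, the following sketch implements method (5) for systems using the standard componentwise divided difference mentioned in Remark 1 and runs it on the operator (24) from x_0 = (0.17, 0.17, 0.17)^T. The fallback to a small central difference when a component has already converged is a pragmatic choice of ours, not from the paper:

```python
import numpy as np

def F(w):
    # Operator (24) of Example 1; the solution is x* = (0, 0, 0)
    return np.array([w[0], np.exp(w[1]) - 1.0, 0.5*(np.e - 1.0)*w[2]**2 + w[2]])

def divdiff(Fun, x, y):
    """Standard divided difference for systems:
    M[i,j] = (F_i(x_1..x_j, y_{j+1}..y_n) - F_i(x_1..x_{j-1}, y_j..y_n)) / (x_j - y_j)."""
    n = x.size
    M = np.zeros((n, n))
    for j in range(n):
        d = x[j] - y[j]
        if d == 0.0:
            # Component already converged: use a small central difference instead
            h = 1e-8
            xp, xm = x.copy(), x.copy()
            xp[j] += h
            xm[j] -= h
            M[:, j] = (Fun(xp) - Fun(xm)) / (2.0 * h)
        else:
            hi = np.concatenate([x[:j+1], y[j+1:]])
            lo = np.concatenate([x[:j], y[j:]])
            M[:, j] = (Fun(hi) - Fun(lo)) / d
    return M

def method5_step(Fun, x):
    I = np.eye(x.size)
    u, v = x + Fun(x), x - Fun(x)
    Ainv = np.linalg.inv(divdiff(Fun, u, v))   # F[u, v]^{-1}, the only inverse used
    y = x - Ainv @ Fun(x)
    T1 = Ainv @ divdiff(Fun, y, x)
    z = y - (3.0*I - 2.0*T1) @ (Ainv @ Fun(y))
    T2 = Ainv @ divdiff(Fun, z, y)
    W = (13/4)*I - T2 @ ((7/2)*I - (5/4)*T2)
    return z - W @ (Ainv @ Fun(z))

x = np.array([0.17, 0.17, 0.17])
for _ in range(3):
    x = method5_step(F, x)
# x is now at the solution (0, 0, 0) to within roundoff
```

In double precision the iterates settle at the solution after very few steps, consistent with the small iteration count reported in Table 1.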
Example 2. 
Let X = C[0, 1], the space of functions continuous on [0, 1] equipped with the max norm, and let D = U(0, 1). Consider the operator Ω defined on D by
Ω(Ψ)(x) = Ψ(x) − 5 ∫_0^1 x γ Ψ(γ)³ dγ.
Then, this definition gives the derivative
Ω′(Ψ)(μ)(x) = μ(x) − 15 ∫_0^1 x γ Ψ(γ)² μ(γ) dγ,  for μ ∈ D.
Under the first or the second choice in Remark 1, the conditions (H1)–(H3) are verified, provided that
a = b = d = 1/2,  φ_0(τ) = 7.5τ,  φ(τ) = 15τ,  p_0(τ) = (15/4)τ,  p(τ) = (15/2)τ,  p_1(τ) = (15/2)τ,  p_2(τ) = (15/4)τ,  p_3(τ) = (15/2)τ,  p_4(τ) = (15/2)τ  and  p_5(τ) = (15/4)τ.
For x * = 0 , the radii of method (5), for Example 2, are recorded in Table 2.
Table 2. Radii of convergence for Example 2.
| Case | R | R* |
| --- | --- | --- |
| Method (5) | 0.05744 | 0.02872 |
Example 3. 
The kinematic synthesis problem for automotive steering (for details, please see [10,21]) is defined by
[x_1 x_2 sin(α_κ) − x_3 (x_2 cos(φ_κ) + 1) − (x_1 x_2 cos(α_κ) − x_3)(x_2 sin(φ_κ) − x_3)]² − [A_κ (x_2 sin(α_κ) − x_3) − B_κ (x_2 sin(φ_κ) − x_3)]² − [B_κ (x_2 cos(φ_κ) + 1) − B_κ (x_2 cos(α_κ) − 1)]² = 0.
The quantities A_κ and B_κ are given by
A_κ = x_2 (cos(φ_κ) − cos(φ_0)) − x_3 x_2 (sin(φ_κ) − sin(φ_0)) − (x_1 − x_2 sin(φ_κ)) x_3,
B_κ = (x_3 − x_1) x_2 sin(η_0) − x_3 x_2 sin(α_κ) + x_2 cos(η_0) + x_1 − x_3 x_2 cos(α_κ),  κ = 1, 2, 3.
In Table 5, the values of α κ and φ κ appear (in radians).
For D = U[x*, 1], the approximate solution is
x* ≈ (0.9051567, 0.6977417, 0.6508335)^T.
The number of iterations, convergence order, CPU timing and initial approximation of method (5) for Example 3 are displayed in Table 3.
Table 3. Numerical results for Example 3.
| Case | x_0 | ‖F(x_σ)‖ | ‖x_{σ+1} − x_σ‖ | σ | λ | CPU timing |
| --- | --- | --- | --- | --- | --- | --- |
| Method (5) | (7/10, 7/10, 7/10)^T | 4.01003 × 10^{−127} | 3.66513 × 10^{−125} | 4 | 5.03818 | 2.05587 |
Example 4. 
Consider a system of nonlinear equations (details can be found in Grau-Sánchez et al. [13]), which is given by
F_k(x_1, x_2, …, x_χ) = Σ_{j=1, j≠k}^{χ} ( x_j − e^{−x_k} ) = 0,  1 ≤ k ≤ χ.
For the particular value χ = 10, a 10 × 10 system of nonlinear equations is obtained. The convergence order, CPU timing and initial approximation of methods (2), (3) and (5) for Example 4 appear in Table 4.
Table 4. Numerical results of Example 4.
| Case | x_0 | ‖F(x_σ)‖ | ‖x_{σ+1} − x_σ‖ | σ | λ | CPU timing |
| --- | --- | --- | --- | --- | --- | --- |
| Method (2) | (1/2, 1/2, …, 1/2)^T (10 components) | 1.21348 × 10^{−120} | 8.60364 × 10^{−122} | 6 | 2.01662 | 2.04416 |
| Method (3) | (1/2, 1/2, …, 1/2)^T (10 components) | 1.54853 × 10^{−178} | 1.09792 × 10^{−179} | 8 | 2.0113 | 4.24586 |
| Method (5) | (1/2, 1/2, …, 1/2)^T (10 components) | 5.00670 × 10^{−379} | 3.54977 × 10^{−380} | 3 | 7.06516 | 0.43371 |
Example 4 converges to the solution x* = (0.56714, 0.56714, …, 0.56714)^T (10 components).
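The displayed solution value can be checked directly, assuming the system has the form F_k(x) = Σ_{j≠k} (x_j − e^{−x_k}): with every component equal to the root c of c = e^{−c} (≈ 0.567143, the value of the Lambert W function at 1), each F_k reduces to (χ − 1)(c − e^{−c}) = 0. A quick sketch:

```python
import math

chi = 10

def F(x):
    # System of Example 4: F_k(x) = sum over j != k of (x_j - exp(-x_k))
    return [sum(x[j] - math.exp(-x[k]) for j in range(chi) if j != k)
            for k in range(chi)]

# Solve c = exp(-c) by fixed-point iteration; the map is a contraction near
# the root, so the iteration converges
c = 0.5
for _ in range(100):
    c = math.exp(-c)

residual = max(abs(r) for r in F([c] * chi))
# c is approximately 0.567143 and the residual is at roundoff level
```

This confirms that the vector with all components equal to 0.56714… is indeed a solution of the system.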
Table 5. Values of α κ and φ κ (in radians) for Example 3.
| κ | α_κ | φ_κ |
| --- | --- | --- |
| 0 | 1.39541 | 1.74617 |
| 1 | 1.74448 | 2.03646 |
| 2 | 2.06562 | 2.23909 |
| 3 | 2.46006 | 2.46006 |
Remark 2. (1) It is clear from Table 4 that method (5) performs better than methods (2) and (3) in terms of residual error, the difference between two consecutive iterates, CPU timing, COC and number of iterations.

4. Concluding Remarks

The local convergence of a high-order method has been established under weak ω-continuity conditions, using hypotheses involving only the operators appearing in method (5). The earlier convergence analyses [14,15,24,25] utilized Taylor series expansions and required the existence of high-order derivatives which are not included in method (5) and which, moreover, may not even exist for the problem at hand. The introduction of the ω-continuity conditions provided a computable error analysis as well as the determination of a uniqueness ball for the solution. In this way, the number of iterates that must be computed to obtain a desired error tolerance is known a priori. A region is specified inside which there is only one solution. The determination of the convergence ball provides a choice for the initial points. It is also worth noting that in previous approaches, without the establishment of the convergence ball, the initial point was a shot in the dark. Hence, the applicability of method (5) is extended to cases where this was not possible before. The technique is applicable to similar single-step, multistep and multipoint methods using inverses [7,8,20,23]. This is the direction of future work.

Author Contributions

R.B.: Supervision; conceptualization; methodology; validation; writing—original draft preparation; writing–review and editing. I.K.A.: conceptualization; methodology; validation; writing—original draft preparation; writing—review and editing. F.O.M.: conceptualization; writing—review and editing, supervision. S.A.: conceptualization; writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia, under Grant No. (KEP-MSc: 58-130-1443).

Data Availability Statement

Not applicable.

Acknowledgments

This project was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia, under grant no. (KEP-MSc: 58-130-1443). The authors, therefore, acknowledge DSR with thanks for technical and financial support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gutierrez, J.M.; Magreñán, Á.A.; Romero, N. On the semi-local convergence of Newton-Kantorovich method under certain Lipschitz conditions. Appl. Math. Comput. 2013, 221, 79–88. [Google Scholar]
  2. Proinov, P.D. General local convergence theory for a class of iterative processes and its applications to Newton’s method. J. Complex. 2009, 25, 38–62. [Google Scholar] [CrossRef]
  3. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  4. Ren, H.; Wu, Q.; Bi, W. A class of two-step Steffensen type methods with fourth-order convergence. Appl. Math. Comput. 2009, 209, 206–210. [Google Scholar]
  5. Steffensen, J.F. Remarks on iteration. Skand. Aktuar. Tidsr. 1933, 16, 64–72. [Google Scholar] [CrossRef]
  6. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Hoboken, NJ, USA, 1964. [Google Scholar]
  7. Argyros, I.K.; Magreñán, Á.A. Iterative Methods and Their Dynamics with Applications; CRC Press, Taylor & Francis: New York, NY, USA, 2017. [Google Scholar]
  8. Magreñán, Á.A.; Argyros, I.K. A Contemporary Study of Iterative Methods: Convergence, Dynamics and Applications; Academic Press: New York, NY, USA; Elsevier: Amsterdam, The Netherlands, 2019. [Google Scholar]
  9. Shakhno, S.M.; Iakymchuk, R.P.; Yarmola, H.P. Convergence analysis of a two-step method for the nonlinear least squares problem with decomposition of the operator. J. Numer. Anal. Appl. Math. 2018, 128, 82–95. [Google Scholar]
  10. Awawdeh, F. On new iterative method for solving systems of nonlinear equations. Numer. Algor. 2010, 54, 395–409. [Google Scholar] [CrossRef]
  11. Bahl, A.; Cordero, A.; Sharma, R.; Torregrosa, J.R. A novel bi-parametric sixth order iterative scheme for solving nonlinear systems and its dynamics. Appl. Math. Comput. 2019, 357, 147–166. [Google Scholar] [CrossRef]
  12. Cordero, A.; Hueso, J.L.; Martinez, E.; Torregrosa, J.R. A modified Newton-Jarratt’s composition. Numer. Algor. 2010, 55, 87–99. [Google Scholar] [CrossRef]
  13. Grau-Sánchez, M.; Noguera, M.; Amat, S. On the approximation of derivatives using divided difference operators preserving the local convergence order of iterative methods. J. Comput. Appl. Math. 2013, 237, 363–372. [Google Scholar] [CrossRef]
  14. Wang, X.; Zhang, T. A family of Steffensen-type methods with seventh order convergence. Numer. Algor. 2013, 62, 429–444. [Google Scholar] [CrossRef]
  15. Wang, X.; Zhang, T.; Qian, W.; Teng, M. Seventh order derivative free method for solving nonlinear systems. Numer. Algor. 2015, 70, 545–558. [Google Scholar] [CrossRef]
  16. Sharma, J.R.; Arora, H. An efficient derivative free iterative method for solving systems of nonlinear equation. Appl. Anal. Discret. Math. 2013, 7, 390–403. [Google Scholar] [CrossRef]
  17. Sharma, J.R.; Arora, H. A novel derivative free algorithm with seventh order convergence for solving systems of nonlinear equations. Numer. Algor. 2014, 4, 917–933. [Google Scholar] [CrossRef]
  18. Sharma, J.R.; Arora, H. Efficient derivative-free numerical methods for solving systems of nonlinear equations. Comp. Appl. Math. 2016, 35, 269–284. [Google Scholar] [CrossRef]
  19. Sharma, J.R.; Guha, R.K.; Sharma, R. An efficient fourth order weighted-Newton method for systems of nonlinear equations. Numer. Algor. 2013, 62, 307–323. [Google Scholar] [CrossRef]
  20. Sharma, J.R.; Gupta, P. Efficient Family of Traub-Steffensen-Type Methods for Solving Systems of Nonlinear Equations. Advan. Numer. Anal. 2014, 2014, 152187. [Google Scholar] [CrossRef]
  21. Tsoulos, I.G.; Stavrakoudis, A. On locating all roots of systems of nonlinear equations inside bounded domain using global optimization methods. Nonlinear Anal. Real World Appl. 2010, 11, 2465–2471. [Google Scholar] [CrossRef]
  22. Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
  23. Argyros, I. The Theory and Application of Iteration Methods, 2nd ed.; Engineering Series; CRC Press, Taylor and Francis Pub. Group: Boca Raton, FL, USA, 2022. [Google Scholar]
  24. Zhanlav, T.; Otgondorj, K. Higher order Jarratt-like iterations for solving system of nonlinear equations. Appl. Math. Comput. 2021, 395, 125849. [Google Scholar] [CrossRef]
  25. Zheng, Q.; Zhao, P.; Huang, F. A family of fourth-order Steffensen-type methods with the applications on solving nonlinear ODEs. Appl. Math. Comput. 2011, 217, 8196–8203. [Google Scholar] [CrossRef]