Article

Three-Step Derivative-Free Method of Order Six

1
Department of Mathematics, University Centre for Research and Development, Chandigarh University, Mohali 140413, Punjab, India
2
Department of Mathematics, Sant Longowal Institute of Engineering and Technology, Longowal 148106, Punjab, India
3
Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
4
Department of Mathematics, University of Houston, Houston, TX 77204, USA
*
Author to whom correspondence should be addressed.
Foundations 2023, 3(3), 573-588; https://doi.org/10.3390/foundations3030034
Submission received: 10 July 2023 / Revised: 23 August 2023 / Accepted: 31 August 2023 / Published: 11 September 2023
(This article belongs to the Section Mathematical Sciences)

Abstract

Derivative-free iterative methods are useful for approximating numerical solutions when the given function lacks explicit derivative information or when derivatives are too expensive to compute. Exploring the convergence properties of such methods is crucial to their development. Determining the convergence behavior and practical applicability of these approaches requires both local and semi-local convergence analysis. In this study, we explore the convergence properties of a sixth-order derivative-free method. Previous local convergence studies assumed the existence of derivatives of high order even though the method itself does not use any derivatives. These assumptions limited its applicability. In this paper, we extend the local analysis by providing estimates for the error bounds of the method. Consequently, its applicability expands across a broader range of problems. Moreover, the more important and challenging semi-local convergence, not investigated in earlier studies, is also developed. Additionally, we survey recent advancements in this field. The outcomes presented in this paper can prove valuable to practitioners and researchers engaged in the development and analysis of derivative-free numerical algorithms. Numerical tests further illustrate and validate the theoretical results.
MSC:
65Y20; 65H10; 47H17; 41A58

1. Introduction

There are several numerical methods such as Newton, Broyden, secant and Steffensen [1,2,3,4,5,6,7,8] that can be used to approximate the solution x * of
F(x) = 0,   (1)

with F : Ω ⊆ B → B a continuous operator mapping a Banach space B into itself. Newton’s method is a popular iterative method to solve Equation (1). Iterative methods are mainly utilized when it is not possible to obtain the solution x* in analytical or closed form. Instead, these methods generate a sequence of approximations that converges to x*.
Steffensen’s method [1,2], defined for n = 0, 1, 2, … by

x_{n+1} = x_n − B_n^{-1} F(x_n),   (2)

where B_n = [w_n, x_n; F] and w_n = x_n + F(x_n), is often employed to provide a sequence of iterates converging quadratically to x*.
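For a scalar equation, the scheme above reduces to a few lines of code; the test function F(x) = x² − 2 and the starting point below are our own illustrative choices, not taken from the paper.

```python
# Illustrative sketch of Steffensen's method for a scalar equation F(x) = 0.
# B_n reduces to the divided difference (F(w_n) - F(x_n)) / (w_n - x_n),
# with w_n = x_n + F(x_n); no derivative of F is evaluated.

def steffensen(F, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = F(x)
        if abs(fx) < tol:
            break
        w = x + fx                     # w_n = x_n + F(x_n)
        B = (F(w) - fx) / (w - x)      # B_n = [w_n, x_n; F]
        x = x - fx / B                 # x_{n+1} = x_n - B_n^{-1} F(x_n)
    return x

root = steffensen(lambda x: x * x - 2.0, 1.0)
print(root)  # approximately sqrt(2) = 1.4142135...
```

Only two function evaluations are needed per step, which is what makes such schemes attractive when derivatives are unavailable.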
Many iterative approaches have been developed to improve the efficiency and the order of convergence (see [9,10,11,12,13]). An approach established in [13] is given, for x_0 ∈ Ω, as
u_1 = x_n − a F(x_n),  u_2 = x_n + b F(x_n),
y_n = x_n − [u_1, u_2; F]^{-1} F(x_n),
v_n = I − [u_1, u_2; F]^{-1} [y_n, x_n; F],
A_n = (I + 2 v_n + 2 (c − 2) v_n^2) [u_1, u_2; F]^{-1},
z_n = y_n − A_n F(y_n),
x_{n+1} = z_n − A_n F(z_n),   (3)
where a, b, c ∈ R and [·,·; F] : Ω × Ω → L(B). Note that only the inversion of the same linear operator [u_1, u_2; F] and additional function evaluations are required per step. The local analysis (LA) of the method is shown in [13] to be of order six for a ≠ b, under conditions on F^{(1)}, F^{(2)}, …, F^{(7)}, which are not present in the method, and provided that B = R^k. Moreover, the Taylor series expansion is used. We consider as a motivational scalar example solving h(t) = 0, where
h(t) = t^6 log t + 7 t^7 − 7 t^6  if t ≠ 0,  and  h(t) = 0  if t = 0.
We let Ω = [−1.4, 1.3]. We notice that the equation h(t) = 0 is solved by t* = 1 ∈ Ω. The results in [13] require the existence and boundedness of the seventh derivative about the solution. But this derivative is not bounded on Ω. Thus, the findings in [13] do not imply that lim_{n→∞} x_n = 1, although the sequence {x_n} does converge to 1. That is why it is useful to weaken the convergence criteria in [13] and not rely on derivatives of high order, such as F^{(j)}, j = 1, 2, …, 7, which are not used in the method. These limitations constitute the motivation for writing this paper.
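To see that the sequence generated by Method (3) does converge to t* = 1 for this example, one can run a small numerical experiment; the parameter values a = 0.01, b = 0.02, c = 2, the starting point and the stopping rule below are our own illustrative choices, not prescribed by [13] or by this paper.

```python
import math

def h(t):
    # motivational example: h(t) = t^6 log t + 7 t^7 - 7 t^6 (t != 0), h(0) = 0
    return t ** 6 * math.log(t) + 7 * t ** 7 - 7 * t ** 6 if t != 0 else 0.0

def dd(F, s, t):
    # first-order divided difference [s, t; F]
    return (F(s) - F(t)) / (s - t)

def method3(F, x, a=0.01, b=0.02, c=2.0, tol=1e-10, max_iter=20):
    # scalar version of the three-step method (3); only divided
    # differences of order one are used, never a derivative of F
    for _ in range(max_iter):
        fx = F(x)
        if abs(fx) < tol:
            break
        u1, u2 = x - a * fx, x + b * fx
        D = dd(F, u1, u2)                          # [u1, u2; F]
        y = x - fx / D                             # first substep
        v = 1.0 - dd(F, y, x) / D
        A = (1.0 + 2.0 * v + 2.0 * (c - 2.0) * v ** 2) / D
        z = y - A * F(y)                           # second substep
        x = z - A * F(z)                           # third substep
    return x

print(method3(h, 1.05))  # converges to t* = 1, with no derivative of h used
```

The run illustrates the point of the paper: the iterates converge even though h has an unbounded seventh derivative on Ω, so a convergence theory should not depend on it.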
Motivational Limitations
(a) 
The existence of high-order derivatives not present in the method.
(b) 
B = R^k (a finite-dimensional setting only).
(c) 
An a priori error analysis is not provided for ||x_n − x*||.
(d) 
Results on the isolation of the solution are not present, either.
(e) 
The more challenging and important semi-local analysis (SLA) is not given.
We notice also that concerns (a)–(e) are usually present in the study of other methods using the Taylor series approach [2,3,4,5,6,7,8,9,10,11,12,13,14].
The present paper positively addresses all of these concerns, (a)–(e), as follows:
Novelty of the paper
(a)′ 
The analysis of convergence uses only divided differences of order one, which are present in the method, and not the seventh-order (or even higher) derivatives used in [13] and in the other methods [2,3,4,5,6,7,8,9,10,11,12,13,14] that utilize the Taylor series expansion approach.
(b)′ 
The analysis of convergence is carried out in the more general setting of Banach space valued operators, not only on R .
(c)′ 
An a priori error analysis is provided to determine upper error bounds on ||x_n − x*||. This allows the number of iterations required to achieve a predecided error tolerance to be determined in advance.
(d)′ 
Computational results on the isolation of solutions are developed based on generalized continuity conditions [3,4,5,6,7,8] on the divided differences (see conditions ( C 1 ) and ( C 2 ) in Section 2).
It is also worth noting that the usual conditions in the convergence analysis of this and the other methods mentioned in the aforementioned references require that F′(x*) is invertible. That is, x* must be a simple solution of the equation F(x) = 0, although the derivative F′(x*) is not present in Method (3). Thus, the earlier results in [13] cannot assure the convergence of Method (3) in cases where the operator F is nondifferentiable, although the method may converge. But conditions (C1) and (C2) of our approach do not require F′(x*) to exist or be invertible.
Thus, our approach can be utilized to solve equations like (1) in cases where the operator is nondifferentiable.
(e)′ 
The semi-local analysis is developed and requires the usage of sequences that are majorizing [3] for Method (3).
Therefore, advantages (a)′–(e)′ extend the applicability of Method (3). Moreover, the methodology of this paper can be used on other methods [2,3,4,5,6,7,8,9,10,11,12,13,14] utilizing inverses of linear operators along the same lines in order to also achieve advantages (a)′–(e)′.
The concepts of local convergence and semi-local convergence are crucial for analyzing the behavior and effectiveness of iterative algorithms in mathematical optimization and numerical analysis. These ideas help us comprehend how iterative techniques behave and how they might be used in practical situations. By studying LA and SLA, we gain a deeper knowledge of how iterative algorithms converge towards solutions and how they behave close to these solutions. The definitions, characteristics and practical implications of LA and SLA are examined in this paper, shedding light on their importance from an application perspective. A brief description of LA and SLA is given below.
Definition 1.
Local convergence analysis uses information about the actual solution to determine the rate and radius of convergence of the method. This typically involves estimating the size of the region around the true solution where the method is guaranteed to converge. This type of analysis also usually involves deriving upper bounds on the error norms, which provide an estimate of how close the iterates of the method are to the true solution.
Definition 2.
Semi-local convergence behavior of the method is studied using the information from the initial point, typically by deriving sufficient conditions that guarantee convergence of the method. This analysis is usually carried out without any knowledge of the actual solution of the problem.
Generalized Lipschitz-type conditions are often used in both semi-local and local convergence analyses. These conditions involve bounding the difference between the iterates of the method and the true solution using a Lipschitz constant or a related quantity. These conditions can be used to derive sufficient conditions for convergence, as well as to estimate the rate and radius of convergence of the method.
It is crucial to examine how Method (3) converges in both the LA (Section 2) and the SLA (Section 3) settings. Moreover, our approach offers a priori error estimates and results on the isolation of x* not provided before, and it does so in the Banach space setting. This approach also enables a comparison of the convergence criteria of different methods; when a method is examined with our technique, the resulting conditions may be weaker than those provided earlier. The numerical examples are included in Section 4. This section contains nonlinear equations and systems of equations as well as integral equations as a sample of problems to which Method (3) can be applied. Finally, concluding remarks are given in Section 5.

2. Local Analysis

The conditions are described below.
( C 1 )
There exist continuous and nondecreasing functions f_1 : M → M with M = [0, +∞), f_2 : M → M, and φ_0 : M × M → M such that the equation

φ_0(f_1(t), f_2(t)) − 1 = 0   (4)

has a positive solution; the smallest such solution (PSS) is denoted by δ. Let M_1 = [0, δ).
( C 2 )
There exist an invertible linear operator L and a point x* ∈ Ω with F(x*) = 0 such that, for each x ∈ Ω,

||L^{-1}([u_1, u_2; F] − L)|| ≤ φ_0(||u_1 − x*||, ||u_2 − x*||),
||u_1 − x*|| ≤ f_1(||d||) and ||u_2 − x*|| ≤ f_2(||d||),

where d = x − x*. We let Ω_0 = Ω ∩ u(x*, δ).
( C 3 )
There exist continuous and nondecreasing functions φ_1 : M_1 → M, φ : M_1 × M_1 × M_1 → M and φ_2 : M_1 × M_1 × M_1 × M_1 → M such that, for each x ∈ Ω_0,

||L^{-1}([x, x*; F] − L)|| ≤ φ_1(||d||),
||L^{-1}([u_1, u_2; F] − [x, x*; F])|| ≤ φ(||d||, ||u_1 − x*||, ||u_2 − x*||),
||L^{-1}([u_1, u_2; F] − [y, x; F])|| ≤ φ_2(||d||, ||y − x*||, ||u_1 − x*||, ||u_2 − x*||).
( C 4 )
The equation g_1(t) − 1 = 0 has a PSS denoted by r_1 ∈ M − {0}, where g_1 : M_1 → M is defined by

g_1(t) = φ(t, f_1(t), f_2(t)) / (1 − φ_0(f_1(t), f_2(t))).

We also define the function p : M_1 → M by

p(t) = φ_2(t, g_1(t)t, f_1(t), f_2(t)) / (1 − φ_0(f_1(t), f_2(t))).
( C 5 )
The equations g_k(t) − 1 = 0, k = 2, 3, have PSS denoted by r_2, r_3 ∈ M − {0}, respectively, where g_2, g_3 : M_1 → M are defined by

g_2(t) = [1 + (1 + 2 p(t) + 2 |c − 2| p(t)^2)(1 + ∫_0^1 φ_1(θ g_1(t) t) dθ) / (1 − φ_0(f_1(t), f_2(t)))] g_1(t)

and

g_3(t) = [1 + (1 + 2 p(t) + 2 |c − 2| p(t)^2)(1 + ∫_0^1 φ_1(θ g_2(t) t) dθ) / (1 − φ_0(f_1(t), f_2(t)))] g_2(t).
( C 6 )
u[x*, r] ⊆ Ω, with r = min{r_i}, i = 1, 2, 3.
It then follows, for each t ∈ [0, r), that

0 ≤ φ_0(f_1(t), f_2(t)) < 1   (5)

and

0 ≤ g_i(t) < 1.   (6)
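In practice, δ and the radii r_1, r_2, r_3 are smallest positive solutions (PSS) of scalar equations of the form q(t) − 1 = 0. The helper below shows one way to locate such a PSS numerically; the scan-plus-bisection strategy and the concrete choices of φ_0, f_1, f_2 at the end are ours, purely for illustration, and not part of the paper.

```python
import math

def smallest_positive_solution(q, t_max=1.0, n_scan=10000, tol=1e-12):
    # locate the first sign change of q(t) - 1 on (0, t_max], then bisect
    g = lambda t: q(t) - 1.0
    a = 0.0
    for i in range(1, n_scan + 1):
        b = i * (t_max / n_scan)
        if g(a) < 0.0 <= g(b):
            while b - a > tol:
                m = 0.5 * (a + b)
                if g(m) < 0.0:
                    a = m
                else:
                    b = m
            return 0.5 * (a + b)
        a = b
    return None  # no root found on (0, t_max]

# Illustrative choices: phi_0(t1, t2) = (e - 1)(t1 + t2)/2 with f1(t) = f2(t) = t,
# so q(t) = phi_0(f1(t), f2(t)) = (e - 1) t and the PSS is delta = 1 / (e - 1).
delta = smallest_positive_solution(lambda t: (math.e - 1.0) * t)
print(delta)  # approximately 0.58198
```

The same helper applies verbatim to the equations g_i(t) − 1 = 0 once the functions φ, φ_0, φ_1, φ_2, f_1 and f_2 of a concrete example are supplied.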
Theorem 1.
Assume conditions (C1)–(C6) hold and pick x_0 ∈ u(x*, r) − {x*}. Then, the following items hold:

{x_n} ⊂ u(x*, r),   (7)
||y_n − x*|| ≤ g_1(||d_n||) ||d_n|| ≤ ||d_n|| < r,   (8)
||z_n − x*|| ≤ g_2(||d_n||) ||d_n|| ≤ ||d_n||,   (9)
||x_{n+1} − x*|| ≤ g_3(||d_n||) ||d_n|| ≤ ||d_n||,   (10)

where d_n = x_n − x*, the radius r is as defined in condition (C6) and the functions g_i are as previously given.
Proof. 
Items (7)–(10) are validated through mathematical induction. By the hypothesis x_0 ∈ u(x*, r) − {x*} and conditions (C1), (C2) and (C6), it follows that

||L^{-1}([x_0 − a F(x_0), x_0 + b F(x_0); F] − L)|| ≤ φ_0(||x_0 − a F(x_0) − x*||, ||x_0 + b F(x_0) − x*||) ≤ φ_0(f_1(||d_0||), f_2(||d_0||)) ≤ φ_0(r, r) < 1.   (11)
Thus, [x_0 − a F(x_0), x_0 + b F(x_0); F]^{-1} exists, and

||[x_0 − a F(x_0), x_0 + b F(x_0); F]^{-1} L|| ≤ 1 / (1 − φ_0(f_1(||d_0||), f_2(||d_0||))),   (12)

by the standard Banach perturbation lemma [3] on inverses of linear operators. Then, from the first substep of (3), the iterate y_0 exists, and

y_0 − x* = d_0 − [u_1, u_2; F]^{-1} F(x_0) = [u_1, u_2; F]^{-1}([u_1, u_2; F] − [x_0, x*; F])(d_0).
Using (C3), (C6), (6) (for i = 1), (11) and (12), we obtain

||y_0 − x*|| ≤ ||[u_1, u_2; F]^{-1} L|| ||L^{-1}([u_1, u_2; F] − [x_0, x*; F])|| ||d_0|| ≤ g_1(||d_0||) ||d_0|| ≤ ||d_0|| < r.

Hence, the iterate y_0 ∈ u(x*, r), and item (8) is validated for n = 0.
Notice that the iterates z_0 and x_1 are also well defined by the invertibility of the linear operator [u_1, u_2; F]. Some estimates are needed:

||v_0|| = ||I − [u_1, u_2; F]^{-1} [y_0, x_0; F]|| ≤ ||[u_1, u_2; F]^{-1} L|| ||L^{-1}([u_1, u_2; F] − [y_0, x_0; F])|| ≤ φ_2(||d_0||, ||y_0 − x*||, ||u_1 − x*||, ||u_2 − x*||) / (1 − φ_0(f_1(||d_0||), f_2(||d_0||))) = P_0   (14)

and

F(y_0) = F(y_0) − F(x*) = ∫_0^1 F′(x* + a(y_0 − x*)) da (y_0 − x*) = [∫_0^1 F′(x* + a(y_0 − x*)) da − L + L](y_0 − x*),

so

||L^{-1} F(y_0)|| ≤ (1 + ∫_0^1 φ_1(a ||y_0 − x*||) da) ||y_0 − x*||.
Then, by the second substep of (3), (6) (for i = 2), (11) and (14), we have

z_0 − x* = y_0 − x* − A_0 F(y_0),

so it follows that

||z_0 − x*|| ≤ [1 + (1 + 2 P_0 + 2 |c − 2| P_0^2)(1 + ∫_0^1 φ_1(θ ||y_0 − x*||) dθ) / (1 − φ_0(||u_1 − x*||, ||u_2 − x*||))] ||y_0 − x*|| ≤ g_2(||d_0||) ||d_0|| ≤ ||d_0||.
Therefore, the iterate z_0 ∈ u(x*, r), and item (9) is validated for n = 0. Similarly, from the last substep of (3), we have

||d_1|| ≤ [1 + (1 + 2 P_0 + 2 |c − 2| P_0^2)(1 + ∫_0^1 φ_1(θ ||z_0 − x*||) dθ) / (1 − φ_0(||u_1 − x*||, ||u_2 − x*||))] ||z_0 − x*|| ≤ g_3(||d_0||) ||d_0|| ≤ ||d_0||.

Therefore, the iterate x_1 ∈ u(x*, r), and item (7) holds for n = 1. These calculations can be repeated with x_0, y_0, z_0, x_1 replaced by x_m, y_m, z_m, x_{m+1}, completing the induction for items (7)–(10). Then, from the estimate

||d_{m+1}|| ≤ ξ ||d_m|| < r,

where ξ = g_3(||d_0||) ∈ [0, 1), it follows that lim_{m→∞} x_m = x*. □
Remark 1.
(i) 
The real functions f_1 and f_2 are left unspecified in Theorem 1. But some choices are motivated by the calculation

u_1 − x* = d − a F(x) = (I − a[x, x*; F])(d) = [(I − aL) − aL L^{-1}([x, x*; F] − L)](d),

so

||u_1 − x*|| ≤ [||I − aL|| + |a| ||L|| φ_1(||d||)] ||d||.

Thus, we can choose

f_1(t) = (||I − aL|| + |a| ||L|| φ_1(t)) t,

and similarly

f_2(t) = (||I + bL|| + |b| ||L|| φ_1(t)) t.
(ii) 
Conditions can be expressed without u_1 and u_2; for example,

(C2)′ ||L^{-1}([x, y; F] − L)|| ≤ φ̄_0(||d||, ||y − x*||) for all x, y ∈ Ω,

where φ̄_0 is as φ_0. But then, we must replace the radius r in (C6) by

r̄ = max{r, f_1(r), f_2(r)}.

Notice, however, that condition (C2)′ is stronger than (C2), and the function φ̄_0 is less tight than φ_0.
(iii) 
The linear operator L is chosen so that the functions φ are as tight as possible. Some popular choices are L = F′(x*) (the differentiable case) or L = [x_{-1}, x_0; F], x_{-1}, x_0 ∈ Ω (the nondifferentiable case). It is worth noticing that the invertibility of F′(x*) is not assumed or implied.
The next result discusses the isolation of solution x * .
Proposition 1.
Suppose that the equation F(x) = 0 has a solution δ* ∈ u(x*, δ_1) for some δ_1 > 0, that condition (C2) is valid in the ball u(x*, δ_1) and that there exists δ_2 ≥ δ_1 such that

φ_1(δ_2) < 1.   (17)

Define the set Ω_1 = Ω ∩ u[x*, δ_2]. Then, x* is the only solution of the equation F(x) = 0 in the set Ω_1.
Proof. 
Suppose that δ* ≠ x*. Then, the divided difference [δ*, x*; F] exists, and (17) offers

||L^{-1}([δ*, x*; F] − L)|| ≤ φ_1(||δ* − x*||) ≤ φ_1(δ_2) < 1.

Hence, [δ*, x*; F]^{-1} exists, and

δ* − x* = [δ*, x*; F]^{-1}(F(δ*) − F(x*)) = [δ*, x*; F]^{-1}(0) = 0.

Therefore, we deduce δ* = x*. □

3. Semi-Local Analysis

The role of x* is exchanged with that of x_0 in this analysis. In particular, we suppose the items described below.
( H 1 )
There exist continuous and nondecreasing functions ψ_0 : M × M → M, f_3 : M → M and f_4 : M → M such that the equation

ψ_0(f_3(t), f_4(t)) − 1 = 0

has a PSS denoted by δ_3. We let Ω_2 = Ω ∩ u(x_0, δ_3) and M_2 = [0, δ_3).
( H 2 )
There exist an initial point x_0 ∈ Ω and an invertible linear operator L such that, for each x ∈ Ω,

||L^{-1}([u_1, u_2; F] − L)|| ≤ ψ_0(||u_1 − x_0||, ||u_2 − x_0||),
||u_1 − x_0|| ≤ f_3(||x − x_0||) and ||u_2 − x_0|| ≤ f_4(||x − x_0||).
Notice that conditions (H1) and (H2) give, for x = x_0,

||L^{-1}([u_1, u_2; F] − L)|| ≤ ψ_0(f_3(0), f_4(0)) < 1.

Thus, the operator [u_1, u_2; F] is invertible, and we can set ||[u_1, u_2; F]^{-1} F(x_0)|| ≤ b_0 for some b_0 ≥ 0.
( H 3 )
There exists a continuous and nondecreasing function ψ : M_2 × M_2 × M_2 × M_2 → M such that, for each x, y ∈ Ω_2,

||L^{-1}([u_1, u_2; F] − [x, y; F])|| ≤ ψ(||x − x_0||, ||y − x_0||, ||u_1 − x_0||, ||u_2 − x_0||).
We define the scalar sequences {a_r}, {b_r} and {c_r} by a_0 = 0, b_0 ∈ [0, δ_3) and, for each r = 0, 1, 2, …,

q_r = ψ(a_r, b_r, f_3(a_r), f_4(a_r)) / (1 − ψ_0(f_3(a_r), f_4(a_r))),
λ_r = ψ(a_r, b_r, f_3(a_r), f_4(a_r))(b_r − a_r),
c_r = b_r + (1 + 2 q_r + 2 |c − 2| q_r^2) λ_r / (1 − ψ_0(f_3(a_r), f_4(a_r))),
μ_r = (1 + ∫_0^1 ψ_0(b_r + θ(c_r − b_r)) dθ)(c_r − b_r) + λ_r,
a_{r+1} = c_r + (1 + 2 q_r + 2 |c − 2| q_r^2) μ_r / (1 − ψ_0(f_3(a_r), f_4(a_r))),
δ_{r+1} = ψ(a_r, a_{r+1}, f_3(a_r), f_4(a_r))(a_{r+1} − a_r) + (1 + ψ_0(f_3(a_r), f_4(a_r)))(a_{r+1} − b_r)   (18)

and

b_{r+1} = a_{r+1} + δ_{r+1} / (1 − ψ_0(f_3(a_{r+1}), f_4(a_{r+1}))).
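The behavior of these majorizing sequences is easy to explore numerically. The sketch below is entirely ours: it iterates {a_r}, {b_r}, {c_r} for user-supplied ψ_0, ψ, f_3, f_4, replaces the integral term of μ_r by a simple endpoint upper bound, and the concrete Lipschitz-style choices at the end are illustrative only.

```python
def majorizing(psi0, psi, f3, f4, b0, c=2.0, steps=20):
    # returns the sequence {a_r}, or None if 1 - psi0(f3(a_r), f4(a_r)) <= 0,
    # i.e. if condition (H4) fails along the way
    a, b = 0.0, b0
    seq = [a]
    for _ in range(steps):
        denom = 1.0 - psi0(f3(a), f4(a))
        if denom <= 0.0:
            return None
        w = psi(a, b, f3(a), f4(a))
        q = w / denom
        lam = w * (b - a)
        k = 1.0 + 2.0 * q + 2.0 * abs(c - 2.0) * q * q
        cr = b + k * lam / denom
        # crude upper bound replacing the integral term of mu_r
        mu = (1.0 + psi0(f3(a), f4(a))) * (cr - b) + lam
        a_next = cr + k * mu / denom
        delta = psi(a, a_next, f3(a), f4(a)) * (a_next - a) \
            + (1.0 + psi0(f3(a), f4(a))) * (a_next - b)
        denom_next = 1.0 - psi0(f3(a_next), f4(a_next))
        if denom_next <= 0.0:
            return None
        a, b = a_next, a_next + delta / denom_next
        seq.append(a)
    return seq

# Illustrative Lipschitz-style choices:
seq = majorizing(psi0=lambda s, t: 0.4 * (s + t),
                 psi=lambda s, t, u, v: 0.2 * (s + t + u + v),
                 f3=lambda t: t, f4=lambda t: t, b0=0.05)
print(seq[-1])  # the (approximate) limit a*, which bounds ||x_r - x_0||
```

When the computed sequence is nondecreasing and bounded, the run gives numerical evidence that (H4) holds for the chosen functions and starting data.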
( H 4 )
There exists δ ¯ [ 0 , δ 3 ) , so
ψ 0 ( f 3 ( a r ) , f 4 ( a r ) ) < 1 and a r δ ¯ for all r = 0 , 1 , 2 , .
Then, by Formula (18), this condition and the identity

F(y_r) = F(y_r) − F(x_r) − [u_1, u_2; F](y_r − x_r) = ([y_r, x_r; F] − [u_1, u_2; F])(y_r − x_r),

we obtain

||L^{-1} F(y_r)|| ≤ ψ(||x_r − x_0||, ||y_r − x_0||, ||u_1 − x_0||, ||u_2 − x_0||) ||y_r − x_r|| ≤ ψ(a_r, b_r, f_3(a_r), f_4(a_r))(b_r − a_r) = λ_r.
Moreover,

F(z_r) = F(z_r) − F(y_r) + F(y_r),
||L^{-1} F(z_r)|| ≤ (1 + ∫_0^1 ψ_0(||y_r − x_0|| + θ ||z_r − y_r||) dθ) ||z_r − y_r|| + λ_r ≤ (1 + ∫_0^1 ψ_0(b_r + θ(c_r − b_r)) dθ)(c_r − b_r) + λ_r = μ_r,

||x_{r+1} − z_r|| ≤ ||A_r L|| ||L^{-1} F(z_r)|| ≤ (1 + 2 q_r + 2 |c − 2| q_r^2) μ_r / (1 − ψ_0(f_3(a_r), f_4(a_r))) = a_{r+1} − c_r

and

||x_{r+1} − x_0|| ≤ ||x_{r+1} − z_r|| + ||z_r − x_0|| ≤ a_{r+1} − c_r + c_r − a_0 = a_{r+1} < a*.
Furthermore,

F(x_{r+1}) = F(x_{r+1}) − F(x_r) − [u_1, u_2; F](y_r − x_r) = ([x_{r+1}, x_r; F] − [u_1, u_2; F])(x_{r+1} − x_r) + [u_1, u_2; F](x_{r+1} − y_r),

so

||L^{-1} F(x_{r+1})|| ≤ ψ(a_r, a_{r+1}, f_3(a_r), f_4(a_r))(a_{r+1} − a_r) + (1 + ψ_0(f_3(a_r), f_4(a_r)))(a_{r+1} − b_r) = δ_{r+1}.   (19)
Consequently, we obtain

||y_{r+1} − x_{r+1}|| ≤ ||[u_1, u_2; F]^{-1} L|| ||L^{-1} F(x_{r+1})|| ≤ δ_{r+1} / (1 − ψ_0(f_3(a_{r+1}), f_4(a_{r+1}))) = b_{r+1} − a_{r+1}

and

||y_{r+1} − x_0|| ≤ ||y_{r+1} − x_{r+1}|| + ||x_{r+1} − x_0|| ≤ b_{r+1} − a_{r+1} + a_{r+1} − a_0 = b_{r+1} < a*.
Hence, the sequence {x_r} is fundamental (Cauchy) in B, so it converges to some x* ∈ u[x_0, a*], where a* = lim_{r→∞} a_r. Letting r → +∞ in (19), we obtain F(x*) = 0. Hence, we can present the semi-local analysis of Method (3).
Theorem 2.
Assume conditions (H1)–(H4) hold. Then, the following items hold:

{x_r} ⊂ u(x_0, a*), ||y_r − x_r|| ≤ b_r − a_r, ||z_r − y_r|| ≤ c_r − b_r, ||x_{r+1} − z_r|| ≤ a_{r+1} − c_r,

and there exists x* ∈ u[x_0, a*] with F(x*) = 0. Moreover, for r = 0, 1, 2, …,

||x* − x_r|| ≤ a* − a_r.   (20)
Proof. 
All items except (20) were shown above Theorem 2. By the estimate

||x_{i+m} − x_i|| ≤ a_{i+m} − a_i,   (21)

we deduce (20) by letting m → +∞ in (21). □
Remark 2.
Comments as in Remark 1 can be made. In particular, choices for the functions f_3 and f_4 can be

f_3(t) = (||I − aL|| + |a| ||L|| ψ_1(t)) t + |a| ||F(x_0)||

and

f_4(t) = (||I + bL|| + |b| ||L|| ψ_1(t)) t + |b| ||F(x_0)||,

provided that

||L^{-1}([x, x_0; F] − L)|| ≤ ψ_1(||x − x_0||)

for each x ∈ Ω_2 and some continuous and nondecreasing function ψ_1 : M_2 → M. The computation for the derivation of the function f_3 is

u_1 − x_0 = [(I − aL) − aL L^{-1}([x, x_0; F] − L)](x − x_0) − a F(x_0),

so

||u_1 − x_0|| ≤ (||I − aL|| + |a| ||L|| ψ_1(||x − x_0||)) ||x − x_0|| + |a| ||F(x_0)||.

Similar computations lead to the definition of the function f_4.
The isolation of solutions is discussed in the next result.
Proposition 2.
Assume that the equation F(x) = 0 has a solution h ∈ u(x_0, δ_4) for some δ_4 > 0, that condition (H2) holds in the ball u(x_0, δ_4) and that there exists δ_5 ≥ δ_4 such that

ψ_0(δ_4, δ_5) < 1.   (22)

Consider the set Ω_3 = Ω ∩ u[x_0, δ_5]. Then, h is the only solution of the equation F(x) = 0 in the set Ω_3.
Proof. 
Let h_1 ∈ Ω_3 with F(h_1) = 0. Define the operator T = ∫_0^1 F′(h + θ(h_1 − h)) dθ. It follows, in view of condition (H2) and (22), that

||L^{-1}(T − L)|| ≤ ψ_0(||h − x_0||, ||h_1 − x_0||) ≤ ψ_0(δ_4, δ_5) < 1.

Hence, T is invertible, and from h_1 − h = T^{-1}(F(h_1) − F(h)) = T^{-1}(0) = 0, we conclude that h_1 = h. □

4. Numerical Tests

In the following numerical examples, we estimate the real parameters defined in the preceding sections.
Example 1.
Let B = R × R × R and Ω = u(ξ*, 1) with ξ* = (0, 0, 0)^T. Define the mapping H for ξ = (ξ_1, ξ_2, ξ_3)^T, ξ_i ∈ R, by

H(ξ) = (ξ_1, e^{ξ_2} − 1, ((e − 1)/2) ξ_3^2 + ξ_3)^T.

This definition provides that the derivative H′ of the mapping H is the Jacobian matrix

H′(ξ) = diag(1, e^{ξ_2}, (e − 1) ξ_3 + 1).

Notice that H(ξ*) = 0 and H′(ξ*) = I. Then, conditions (C1)–(C5) are validated for

φ_0(θ_1, θ_2) = ((e − 1)/2)(θ_1 + θ_2), φ(θ_1, θ_2, θ_3) = ((e − 1)/2)(θ_1 + θ_2 + θ_3), φ_1(θ) = ((e − 1)/2) θ.
Next, we obtain the radius of convergence r by using (C6) as r = min{r_i}, i = 1, 2, 3. The parameter r_1 is the smallest positive root of g_1(t) − 1 = 0, which gives r_1 ≈ 0.14976; r_2 is the smallest positive root of g_2(t) − 1 = 0, which gives r_2 ≈ 0.05704; and r_3 is the smallest positive root of g_3(t) − 1 = 0, which gives r_3 ≈ 0.03913. Therefore,

r = min{0.14976, 0.05704, 0.03913} = 0.03913.
Figure 1 shows graphically that r_3 is the radius of convergence in Example 1.
Example 2.
Consider B = C[0, 1], the space of continuous functions on [0, 1], and Ω = u[l*, 1]. Consider the nonlinear integral equation [14]

l(d) = ∫_0^1 T(d, ω) (l(ω)^{3/2} + l(ω)^2 / 2) dω,

where the kernel T is given by

T(d, ω) = (1 − d) ω if ω ≤ d, and T(d, ω) = d (1 − ω) if d ≤ ω.

Notice that l*(d) = 0 solves this equation. Define H : Ω ⊂ C[0, 1] → C[0, 1] by

H(l)(d) = l(d) − ∫_0^1 T(d, ω) (l(ω)^{3/2} + l(ω)^2 / 2) dω.
The derivative H′ is given by

H′(l) q(d) = q(d) − ∫_0^1 T(d, ω) ((3/2) l(ω)^{1/2} + l(ω)) q(ω) dω;

since H′(l*) = I, it follows that

||H′(l*)^{-1}(H′(l) − H′(q))|| ≤ (5/16) ||l − q||.   (23)

In (23), switch q with l_0 to obtain

||H′(l*)^{-1}(H′(l) − H′(l_0))|| ≤ (5/16) ||l − l_0||.
Thus, we can take

φ_0(θ_1, θ_2) = θ_1 + θ_2, φ(θ_1, θ_2, θ_3) = θ_1 + θ_2 + θ_3, φ_1(θ) = θ.

The parameters r_i, i = 1, 2, 3, are the PSS of the equations g_i(t) − 1 = 0; on solving, we have r_1 ≈ 0.12867, r_2 ≈ 0.04784 and r_3 ≈ 0.02734. Then, the radius r is

r = min{0.12867, 0.04784, 0.02734} = 0.02734.
Figure 2 shows graphically that r_3 is the radius of convergence in Example 2.
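A quick numeric cross-check (ours, not in the paper) of the kernel bound behind such Lipschitz-type constants: the integral of T(d, ·) over [0, 1] equals d(1 − d)/2, which is maximized at d = 1/2 with value 1/8.

```python
def T(d, w):
    # Green's-type kernel of Example 2
    return (1.0 - d) * w if w <= d else d * (1.0 - w)

def kernel_integral(d, n=2000):
    # composite trapezoidal rule for the integral of T(d, .) over [0, 1]
    h = 1.0 / n
    s = 0.5 * (T(d, 0.0) + T(d, 1.0))
    for i in range(1, n):
        s += T(d, i * h)
    return s * h

vals = [kernel_integral(i / 100.0) for i in range(101)]
print(max(vals))  # approximately 1/8 = 0.125, attained at d = 1/2
```

Since T(d, ·) is piecewise linear with its kink on the grid, the trapezoidal rule here is exact up to rounding.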
Example 3.
Let B = C[0, 1] be as in Example 2 and Ω = u[0, 1]. Consider H on Ω defined by

H(φ)(l) = φ(l) − 10 ∫_0^1 l ρ φ(ρ)^3 dρ.

This definition gives

H′(φ)(ξ)(l) = ξ(l) − 30 ∫_0^1 l ρ φ(ρ)^2 ξ(ρ) dρ for each ξ ∈ Ω.

Since l* = 0, we can set

φ_0(θ_1, θ_2) = 2(θ_1 + θ_2), φ(θ_1, θ_2, θ_3) = 2(θ_1 + θ_2 + θ_3), φ_1(θ) = θ/5.
Then, using (C6), we have

r = min{0.07057, 0.02136, 0.01192} = 0.01192.
Figure 3 shows graphically that r_3 is the radius of convergence in Example 3.
Example 4.
Let B = R, Ω = (−1, 1) and define Θ on Ω by

Θ(x) = e^x − 1.

Then, it is obvious that x* = 0 and Θ′(x*) = 1. For x, r, s, u, v, w ∈ Ω_0, it follows that

|Θ′(x*)^{-1}([x, r; Θ] − Θ′(x*))| = |∫_0^1 (Θ′(Ψx + (1 − Ψ)r) − Θ′(x*)) dΨ| = |∫_0^1 (e^{Ψx + (1 − Ψ)r} − 1) dΨ| ≤ ((e − 1)/2)(|x − x*| + |r − x*|),

|Θ′(x*)^{-1}([x, r; Θ] − [x, s; Θ])| = |∫_0^1 (Θ′(Ψx + (1 − Ψ)r) − Θ′(Ψx + (1 − Ψ)s)) dΨ| ≤ e^{1/(e−1)} ∫_0^1 (1 − Ψ) dΨ |r − s| = (e^{1/(e−1)}/2) |r − s|

and, similarly,

|Θ′(x*)^{-1}([u, v; Θ] − [w, v; Θ])| ≤ (e^{1/(e−1)}/2) |u − w|.

That is to say, condition (C3) is true for

φ_0(θ_1, θ_2) = ((e − 1)/2)(θ_1 + θ_2), φ(θ_1, θ_2, θ_3) = (e^{1/(e−1)}/2)(θ_1 + θ_2 + θ_3), φ_1(θ) = (e^{1/(e−1)}/2) θ.

Then, from condition (C6), it follows that

r = min{0.53377, 0.10194, 0.04567} = 0.04567.
Figure 4 shows graphically that r_3 is the radius of convergence in Example 4.
Example 5.
According to research in biology, the velocity of blood in an artery depends on the distance from the central axis of the artery (Figure 5). Poiseuille’s law states that this dependence is

S(r) = C (R^2 − r^2),

where S is the velocity (in cm/s) of the blood, r (in cm) is the distance from the central axis of the artery, R is the radius of the artery and C is a constant that depends on the viscosity of the blood. We consider the case of an artery with

C = 1.76 × 10^5 cm/s

and

R = 1.2 × 10^{-2} cm.
Then, the problem becomes

H(x) = 25.344 − 176000 x^2 = 0,

where x = r. The point x* = 0.012 solves H(x) = 0. Thus, we obtain

φ_0(θ_1, θ_2) = 8.42(θ_1 + θ_2), φ(θ_1, θ_2, θ_3) = 8.42(θ_1 + θ_2 + θ_3), φ_1(θ) = 5θ.

Therefore, r = min{r_i}, i = 1, 2, 3, yields

r = min{0.15885 × 10^{-1}, 0.30577 × 10^{-2}, 0.16119 × 10^{-2}} = 0.16119 × 10^{-2}.
Figure 6 shows graphically that r_3 is the radius of convergence in Example 5.
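As a cross-check (ours), the positive root of H can be written in closed form: x* = sqrt(25.344 / 176000) = 0.012 cm, which is exactly the radius R of the artery, i.e. the point where the blood velocity vanishes.

```python
import math

# positive root of H(x) = 25.344 - 176000 x^2
x_star = math.sqrt(25.344 / 176000.0)
print(x_star)  # 0.012 (up to rounding)
```

This agrees with the physical setup, since S(R) = C(R^2 − R^2) = 0 at the arterial wall.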

5. Conclusions

The focus of this paper was to provide a comprehensive local and semi-local convergence analysis (LA and SLA) of a derivative-free sixth-order method in Banach space. It is noteworthy that the convergence was investigated in earlier studies by supposing the existence of derivatives of high order which do not appear in the iterative method. In contrast, our approach only uses the first-order divided differences that are actually present in the iterative process. This feature makes the method applicable to a wider range of functions, thereby expanding its utility. In the analysis, we present error estimates and a convergence ball that bounds the iterates, providing further benefits to the analysis of convergence. In addition, sufficient conditions are developed to show the uniqueness of the solution in the given domain. To verify the theoretical results, we conducted numerical tests on several problems, demonstrating the effectiveness of this approach. The specific advantages were listed and explained in items (a)′–(e)′ of the introduction. Moreover, this technique can be extended to other methods, making it a valuable contribution to the theory of iterative methods. This will be the focus of our future research on the methods appearing in [2,3,4,6,8,9,10,11,12,14].

Author Contributions

S.K.: Writing—Original Draft Preparation; J.R.S.: Writing—Review and Editing; I.K.A.: Conceptualization; Methodology; S.R.: Validation. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

Correction Statement

This article has been republished with a minor correction to the Data Availability Statement. This change does not affect the scientific content of the article.

References

  1. Steffensen, J.F. Remarks on iteration. Scand. Actuar. J. 1933, 16, 64–72. [Google Scholar] [CrossRef]
  2. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  3. Argyros, I.K. Computational Theory of Iterative Methods; Series: Studies in Computational Mathematics; Chui, C.K., Wuytack, L., Eds.; Elsevier Publ. Co.: New York, NY, USA, 2007. [Google Scholar]
  4. Liu, Z.; Zheng, Q.; Zhao, P. A variant of Steffensen’s method of fourth-order convergence and its applications. Appl. Math. Comput. 2010, 216, 1978–1983. [Google Scholar] [CrossRef]
  5. Grau-Sánchez, M.; Grau, Á.; Noguera, M. Frozen divided difference scheme for solving systems of nonlinear equations. J. Comput. Appl. Math. 2011, 253, 1739–1743. [Google Scholar] [CrossRef]
  6. Ezquerro, J.A.; Hernández, M.A.; Romero, N. Solving nonlinear integral equations of Fredholm type with high order iterative methods. J. Comput. Appl. Math. 2011, 36, 1449–1463. [Google Scholar] [CrossRef]
  7. Ezquerro, J.A.; Grau, Á.; Grau-Sánchez, M.; Hernández, M.A.; Noguera, M. Analysing the efficiency of some modifications of the secant method. Comput. Math. Appl. 2012, 64, 2066–2073. [Google Scholar] [CrossRef]
  8. Grau-Sánchez, M.; Noguera, M. A technique to choose the most efficient method between secant method and some variants. Appl. Math. Comput. 2012, 218, 6415–6426. [Google Scholar] [CrossRef]
  9. Awawdeh, F. On new iterative method for solving systems of nonlinear equations. Numer. Algorithms 2010, 54, 395–409. [Google Scholar] [CrossRef]
  10. Cordero, A.; Hueso, J.L.; Martinez, E.; Torregrosa, J.R. A modified Newton-Jarratt’s composition. Numer. Algorithms 2010, 55, 87–99. [Google Scholar] [CrossRef]
  11. Grau-Sanchez, M.; Noguera, M.; Amat, S. On the approximation of derivatives using divided difference operators preserving the local convergence order of iterative methods. J. Comput. Appl. Math. 2013, 237, 363–372. [Google Scholar] [CrossRef]
  12. Wang, X.; Zhang, T.; Qian, W.; Teng, M. Seventh-order derivative-free iterative method for solving nonlinear systems. Numer. Algorithms 2015, 70, 545–558. [Google Scholar] [CrossRef]
  13. Kansal, M.; Cordero, A.; Bhalla, S.; Torregrosa, J.R. Memory in a new variant of King’s family for solving nonlinear systems. Mathematics 2020, 8, 1251. [Google Scholar] [CrossRef]
  14. Sharma, J.R.; Kumar, S.; Argyros, I.K. Generalized Kung-Traub method and its multi-step iteration in Banach spaces. J. Complex. 2019, 54, 101400. [Google Scholar] [CrossRef]
Figure 1. Graph for radius of convergence of Example 1.
Figure 2. Graph for radius of convergence of Example 2.
Figure 3. Graph for radius of convergence of Example 3.
Figure 4. Graph for radius of convergence of Example 4.
Figure 5. Cut-away view of an artery.
Figure 6. Graph for radius of convergence of Example 5.