
Convergence of Derivative-Free Iterative Methods with or without Memory in Banach Space

1. Department of Mathematical & Computational Science, National Institute of Technology Karnataka, Mangaluru 575025, India
2. Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
3. Department of Mathematics, University of Houston, Houston, TX 77204, USA
* Author to whom correspondence should be addressed.
Foundations 2023, 3(3), 589-601; https://doi.org/10.3390/foundations3030035
Submission received: 2 August 2023 / Revised: 12 September 2023 / Accepted: 18 September 2023 / Published: 19 September 2023
(This article belongs to the Section Mathematical Sciences)

Abstract

A method without memory and a method with memory are developed free of derivatives for solving equations in Banach spaces. The convergence order of these methods has been established in the scalar case using Taylor expansions and hypotheses on higher-order derivatives which do not appear in these methods. However, such hypotheses limit their applicability. That is why, in this paper, their local and semi-local convergence analyses (which have not been given previously) are provided using only the divided differences of order one, which actually appear in these methods. Moreover, we provide computable error distances and uniqueness-of-the-solution results, which have not been given before. Since our technique is very general, it can be used to extend the applicability of other methods using linear operators with inverses along the same lines. Numerical experiments are also provided in this article to illustrate the theoretical results.

1. Introduction

Let $F : D \subseteq B_1 \to B_2$ be a differentiable operator in the Fréchet sense, where $D$ is a nonempty, convex, and open set, and $B_1$, $B_2$ are Banach spaces.
A plethora of problems are modeled using the equation
$$F(x) = 0. \qquad (1)$$
Equation (1) can be defined on the real line or the complex plane, or it can constitute a system of equations derived from a discretization of a boundary value problem (see also the numerical Section 4 for such examples). To find a solution $x_*$ of Equation (1), we rely mostly on iterative methods. This is the case since solutions in closed form can be obtained only in special cases.
The method of successive substitutions (Picard's method) and Newton's method [1,2,3,4] have been used extensively to generate a sequence approximating a solution $x_* \in D$ of Equation (1). But these are of convergence order one and two, respectively. Another drawback is the use of the Fréchet derivative in the case of Newton's method. That is why the secant method was introduced, which avoids the derivative and is of order $\frac{1 + \sqrt{5}}{2} < 2$. Later, the Steffensen and Kurchatov methods were developed, which are also derivative-free and of convergence order two [5,6,7,8,9]. However, it is important to develop derivative-free methods of order greater than two. Our study contributes in this direction.
In particular, iterative methods without memory use only the current iterate, whereas those with memory rely on the current iterate and the previous ones [10,11]. The idea behind the latter is to increase the convergence order without additional operator evaluations. This type of method is important, since it is derivative-free.
In this article, we develop a local and semi-local analysis of convergence for two methods. The first one is without memory and the second method is with memory. The methods are defined, respectively, as:
$$z_n = x_n + \alpha F(x_n), \quad y_n = x_n - [x_n, z_n; F]^{-1}F(x_n), \quad x_{n+1} = y_n - A_n [y_n, z_n; F]^{-1}F(y_n), \qquad (2)$$
where $\alpha \in \mathbb{R}$ or $\alpha \in \mathbb{C}$ and $A_n : D \times D \to L(B_1, B_2)$, and
$$z_n = x_n - [x_n, x_{n-1}; F]^{-1}F(x_n), \quad y_n = x_n - [x_n, z_n; F]^{-1}F(x_n), \quad x_{n+1} = y_n - A_n [y_n, z_n; F]^{-1}F(y_n). \qquad (3)$$
These methods are extensions of Traub's work on Steffensen-like methods [4]. Method (2) is without memory and uses two operator evaluations and one inverse evaluation per complete step. Method (3) is with memory, requires similar computations, and is faster than Method (2). Methods (2) and (3) are also studied in [12] when $B_1 = B_2 = \mathbb{R}$. They are of order four and $2 + \sqrt{6}$, respectively, in the scalar case, provided that $A(0) = A'(0) = 1$ and $|A''(0)| < \infty$ [12].
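To make the structure of Method (2) concrete, here is a minimal scalar-case sketch (an illustration, not the authors' implementation). The divided difference is the usual quotient, and the weight $A_n = 1 + F(y_n)/F(x_n)$ is one hypothetical instance of the weight function $A(t) = 1 + t + \beta t^2$ of Remark 1 with $\beta = 0$:

```python
def divided_difference(F, u, v):
    # First-order divided difference [u, v; F] for a scalar function.
    return (F(u) - F(v)) / (u - v)

def method2(F, x0, alpha=0.1, tol=1e-12, max_iter=50):
    """Derivative-free Method (2) in the scalar case.

    Weight A_n = A(t_n) with A(t) = 1 + t (beta = 0), t_n = F(y_n)/F(x_n),
    which satisfies A(0) = A'(0) = 1 and |A''(0)| < infinity.
    """
    x = x0
    for _ in range(max_iter):
        if abs(F(x)) < tol:          # guard: also avoids z == x below
            return x
        z = x + alpha * F(x)                              # z_n = x_n + alpha F(x_n)
        y = x - F(x) / divided_difference(F, x, z)        # first substep
        A = 1.0 + F(y) / F(x)                             # weight A(t) = 1 + t
        x = y - A * F(y) / divided_difference(F, y, z)    # second substep
    return x

# Solve x^2 - 2 = 0 starting from x0 = 1.5.
root = method2(lambda x: x * x - 2.0, 1.5)
```

The scheme only ever evaluates $F$ and first-order divided differences, in line with the derivative-free theme of the paper.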
Motivation. The convergence order is shown using the Taylor series expansion approach, which is based on derivatives up to order five (which do not appear in these methods), limiting their applicability. As a simple but motivational example:
Let $B_1 = B_2 = \mathbb{R}$, $\Omega = [-0.4, 1.3]$. Define the function $g$ on $\Omega$ by
$$g(t) = \begin{cases} t^3 \log t^2 + t^5 - t^4 & \text{if } t \neq 0 \\ 0 & \text{if } t = 0. \end{cases}$$
It is clear that, in this example, the exact solution is $t_* = 1$. Clearly, $g'''(t)$ is not bounded on $\Omega$. Therefore, the local convergence of these methods is not guaranteed by the analyses in [4,12]. However, the methods may converge (see the numerical Section 4).
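To see the obstruction concretely: with the reading $g(t) = t^3 \log t^2 + t^5 - t^4$ (the one consistent with $t_* = 1$), differentiating three times gives $g'''(t) = 6 \log t^2 + 22 + 60t^2 - 24t$, which is unbounded as $t \to 0$ because of the logarithmic term. A small numerical check (our naming):

```python
import math

def g3(t):
    # Third derivative of g(t) = t^3 log t^2 + t^5 - t^4 for t != 0:
    # g'''(t) = 6 log t^2 + 22 + 60 t^2 - 24 t
    return 6.0 * math.log(t * t) + 22.0 + 60.0 * t * t - 24.0 * t

# |g'''| blows up as t -> 0 because of the log t^2 term:
samples = [0.1, 0.01, 0.001, 0.0001]
values = [abs(g3(t)) for t in samples]
```

Any convergence theory whose hypotheses involve $g'''$ (or higher derivatives) therefore cannot apply on any interval containing $0$.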
Other concerns are the lack of upper error estimates on $\|x_n - x_*\|$ and of results on the location and uniqueness of $x_*$. These concerns constitute our motivation for writing this article. The same limitations appear in the study of other methods [4,5,6,7,8,9,10,11,12,13,14,15,16,17]. Our approach is applicable to those methods along the same lines.
Novelty. We find a computable convergence radius and error estimates relying only on the divided differences of order one actually appearing in these methods and on generalized conditions on $F$. This is how we extend the utilization of these methods. Notice that local convergence results for iterative methods are significant, since they reveal how difficult it is to pick a starting point $x_0$. Our idea can be used analogously with other methods, and for the same reasons, because it is so general. Moreover, the more important and difficult semi-local convergence analysis (not presented in [12]) is also developed in this paper.
The local analysis is developed in Section 2, followed by the semi-local convergence in Section 3, whereas the examples appear in Section 4, followed by the concluding remarks in Section 5.

2. Local Analysis

We first develop the ball convergence analysis of Method (2) using real parameters and functions. Let $M = [0, \infty)$ and $a \geq 0$.
Suppose function:
(i)
$$\xi_0(t, at) - 1 = 0$$
has a minimal zero (MZ) $R_0 \in M \setminus \{0\}$ for some function $\xi_0 : M \times M \to M$ that is nondecreasing and continuous (NDC). Let $M_0 = [0, R_0)$.
(ii)
$$\zeta_1(t) - 1 = 0$$
has an MZ $d_1 \in M_0 \setminus \{0\}$, where $\xi : M_0 \times M_0 \to M$ is NDC and $\zeta_1 : M_0 \to M$ is defined by
$$\zeta_1(t) = \frac{\xi(t, at)}{1 - \xi_0(t, at)}.$$
(iii)
$$\xi_0(\zeta_1(t)t, 0) - 1 = 0, \quad \xi_0(\zeta_1(t)t, at) - 1 = 0$$
have MZs $R_1, R_2 \in M_0 \setminus \{0\}$, respectively. Let $R = \min\{R_1, R_2\}$ and $M_1 = [0, R)$.
(iv)
$$\zeta_2(t) - 1 = 0$$
has an MZ $d_2 \in M_1 \setminus \{0\}$ for some functions $\xi_1 : M_1 \to M$, $\xi_2 : M_1 \times M_1 \to M$ that are NDC, and $\zeta_2 : M_1 \to M$ defined by
$$\zeta_2(t) = \left[ \frac{\xi(\zeta_1(t)t, at)\, \xi_1(\zeta_1(t)t)}{(1 - \xi_0(\zeta_1(t)t, 0))(1 - \xi_0(\zeta_1(t)t, at))} + \frac{\xi_2(t, \zeta_1(t)t)\, \xi_1(\zeta_1(t)t)}{1 - \xi_0(\zeta_1(t)t, at)} \right] \zeta_1(t).$$
The parameter
$$d = \min\{d_i\}, \quad i = 1, 2 \qquad (4)$$
is shown in Theorem 1 to be a convergence radius for Method (2). Set $M_2 = [0, d)$.
The definition of the parameter $d$ implies that
$$0 \leq \xi_0(t, at) < 1, \quad 0 \leq \xi_0(\zeta_1(t)t, 0) < 1, \qquad (5)$$
$$0 \leq \xi_0(\zeta_1(t)t, at) < 1, \quad \text{and} \quad 0 \leq \zeta_i(t) < 1 \qquad (6)$$
are valid for all $t \in M_2$.
By $\bar{U}(x_*, \lambda)$ we denote the closure of the open ball $U(x_*, \lambda)$ with center $x_* \in B_1$ and radius $\lambda > 0$.
The following conditions are needed.
Suppose:
(h1)
There exists an invertible operator $L$ such that
$$\|L^{-1}([x, y; F] - L)\| \leq \xi_0(\|x - x_*\|, \|y - x_*\|)$$
and
$$\|I + \alpha [x, x_*; F]\| \leq a$$
for each $x, y \in D$.
Set $D_0 = U(x_*, R_0) \cap D$.
(h2)
$$\|L^{-1}([x, z; F] - [x, x_*; F])\| \leq \xi(\|x - x_*\|, \|z - x_*\|),$$
$$\|L^{-1} F(x)\| \leq \xi_1(\|x - x_*\|)$$
and
$$\|I - A(x, y)\| \leq \xi_2(\|x - x_*\|, \|y - x_*\|)$$
for each $x, y, z \in D_0$.
(h3)
$\bar{U}(x_*, \tilde{d}_*) \subseteq D$ for $\tilde{d}_* = \max\{a\tilde{d}, \tilde{d}\}$ and $\tilde{d}$ to be given later,
and
(h4)
There exists $d_* \geq \tilde{d}_*$ satisfying $\xi_0(0, d_*) < 1$ or $\xi_0(d_*, 0) < 1$.
Let $D_1 = \bar{U}(x_*, d_*) \cap D$.
The local analysis of Method (2) uses the conditions (H), i.e., (h1)–(h4), and is given in:
Theorem 1.
Under the conditions (H) for $\tilde{d} = d$, pick $x_0 \in U(x_*, d) \setminus \{x_*\}$. Then, the sequence $\{x_n\}$ is convergent to $x_*$. Moreover, this limit is the only zero of $F$ in the set $D_1$ given in (h4).
Proof. 
The following assertions shall be shown using induction on $m$:
$$\|y_m - x_*\| \leq \zeta_1(\|x_m - x_*\|)\|x_m - x_*\| \leq \|x_m - x_*\| < d \qquad (7)$$
and
$$\|x_{m+1} - x_*\| \leq \zeta_2(\|x_m - x_*\|)\|x_m - x_*\| \leq \|x_m - x_*\|, \qquad (8)$$
with the radius $d$ as defined in (4) and the functions $\zeta_i$ as given previously. We have
$$z_0 - x_* = x_0 - x_* + \alpha F(x_0) = x_0 - x_* + \alpha [x_0, x_*; F](x_0 - x_*) = (I + \alpha [x_0, x_*; F])(x_0 - x_*),$$
so
$$\|z_0 - x_*\| \leq a \|x_0 - x_*\| < \tilde{d}_*. \qquad (9)$$
Using (4), (5), (h1), and (h3), we obtain
$$\|L^{-1}([x_0, z_0; F] - L)\| \leq \xi_0(\|x_0 - x_*\|, \|z_0 - x_*\|) \leq \xi_0(d, ad) < 1,$$
which, together with a lemma on inverses of linear operators due to Banach [8], implies that the linear operator $[x_0, z_0; F]$ is invertible and
$$\|[x_0, z_0; F]^{-1} L\| \leq \frac{1}{1 - \xi_0(\|x_0 - x_*\|, \|z_0 - x_*\|)}. \qquad (10)$$
Notice that y 0 exists by the first substep of Method (2), from which we can also have
$$y_0 - x_* = x_0 - x_* - [x_0, z_0; F]^{-1}F(x_0) = [x_0, z_0; F]^{-1}\left([x_0, z_0; F] - [x_0, x_*; F]\right)(x_0 - x_*). \qquad (11)$$
By (4), (6) (for i = 1 ), (h2), (h3), (10), and (11), we obtain
$$\|y_0 - x_*\| \leq \frac{\xi(\|x_0 - x_*\|, \|z_0 - x_*\|)\|x_0 - x_*\|}{1 - \xi_0(\|x_0 - x_*\|, \|z_0 - x_*\|)} \leq \zeta_1(\|x_0 - x_*\|)\|x_0 - x_*\| \leq \|x_0 - x_*\| < d, \qquad (12)$$
showing (7) for $m = 0$ and that the iterate $y_0 \in U(x_*, d)$. Notice also that the iterate $x_1$ exists by the second substep of Method (2), from which we can also have
$$\begin{aligned} x_1 - x_* &= y_0 - x_* - [y_0, x_*; F]^{-1}F(y_0) + \left([y_0, x_*; F]^{-1} - [y_0, z_0; F]^{-1}\right)F(y_0) + (I - A_0)[y_0, z_0; F]^{-1}F(y_0) \\ &= [y_0, x_*; F]^{-1}\left([y_0, z_0; F] - [y_0, x_*; F]\right)[y_0, z_0; F]^{-1}F(y_0) + (I - A_0)[y_0, z_0; F]^{-1}F(y_0). \end{aligned} \qquad (13)$$
Then, in view of (4), (6) (for i = 1 ), (10), (12), and (13), we obtain
$$\begin{aligned} \|x_1 - x_*\| &\leq \left[ \frac{\xi(\|y_0 - x_*\|, \|z_0 - x_*\|)\, \xi_1(\|y_0 - x_*\|)}{(1 - \xi_0(\|y_0 - x_*\|, 0))(1 - \xi_0(\|y_0 - x_*\|, \|z_0 - x_*\|))} + \frac{\xi_2(\|x_0 - x_*\|, \|y_0 - x_*\|)\, \xi_1(\|y_0 - x_*\|)}{1 - \xi_0(\|y_0 - x_*\|, \|z_0 - x_*\|)} \right] \|y_0 - x_*\| \\ &\leq \zeta_2(\|x_0 - x_*\|)\|x_0 - x_*\| \leq \|x_0 - x_*\|, \end{aligned} \qquad (14)$$
showing (8) for $m = 0$ and that the iterate $x_1 \in U(x_*, d)$. Simply replace $z_0, x_0, y_0, x_1$ by $z_m, x_m, y_m, x_{m+1}$ in the previous calculations to complete the induction for (7) and (8). Then, from the estimate
$$\|x_{m+1} - x_*\| \leq \gamma \|x_m - x_*\| < d,$$
where $\gamma = \zeta_2(\|x_0 - x_*\|) \in [0, 1)$, we conclude $\lim_{m \to \infty} x_m = x_*$ and $x_{m+1} \in U(x_*, d)$. Let $Q = [x_*, q; F]$ for some $q \in D_1$ with $F(q) = 0$. By (h1) and (h4), we obtain
$$\|L^{-1}(Q - L)\| \leq \xi_0(0, \|q - x_*\|) \leq \xi_0(0, d_*) < 1.$$
Therefore, $q = x_*$ follows from the invertibility of $Q$ and the identity $0 = F(q) - F(x_*) = Q(q - x_*)$. □
Remark 1.
(a) 
We can compute the computational order of convergence (COC), defined by
$$\xi = \frac{\ln\left(\|x_{n+1} - x_*\| / \|x_n - x_*\|\right)}{\ln\left(\|x_n - x_*\| / \|x_{n-1} - x_*\|\right)},$$
or the approximate computational order of convergence (ACOC)
$$\xi_1 = \frac{\ln\left(\|x_{n+1} - x_n\| / \|x_n - x_{n-1}\|\right)}{\ln\left(\|x_n - x_{n-1}\| / \|x_{n-1} - x_{n-2}\|\right)}.$$
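The ACOC needs only stored iterates, not the solution $x_*$. The sketch below (our naming) computes it and checks it on Newton's method for $x^2 - 2 = 0$, where the order should come out close to two:

```python
import math

def acoc(xs):
    """Approximate computational order of convergence from the last four iterates."""
    d = [abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
    return math.log(d[-1] / d[-2]) / math.log(d[-2] / d[-3])

# Newton's method on x^2 - 2 = 0 (order two) as a sanity check.
xs = [1.5]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))
order = acoc(xs)
```

In practice one stops collecting differences before they fall to machine precision, since the ratios in the formula then become noise.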
(b) 
The choice $A(t) = 1 + t + \beta t^2$, $t = F(x)^{-1}F(y)$, satisfies the conditions $A(0) = A'(0) = 1$ and $|A''(0)| < \infty$ required to show the fourth order of convergence of Method (2). Next, we show how to choose the function $\xi_2$ in this case. Notice that we have, for $x \neq x_*$,
$$\|(L(x - x_*))^{-1}\left(F(x) - F(x_*) - L(x - x_*)\right)\| \leq \|L^{-1}([x, x_*; F] - L)\| \leq \xi_0(\|x - x_*\|, 0),$$
so we can take
$$\xi_2(s, t) = \xi_0(s, 0).$$
(c) 
The usual choice is $L = F'(x_*)$ [8]. But this implies that the operator $F$ is differentiable at $x = x_*$ and that $x_*$ is a simple solution. This makes the choice unattractive for solving non-differentiable equations. However, if $L$ is chosen to be different from $F'(x_*)$, then one can also solve non-differentiable equations.
(d) 
The parameter $a$ can be replaced by a real function as follows:
$$I + \alpha [x, x_*; F] = I + \alpha L + \alpha L L^{-1}([x, x_*; F] - L),$$
so
$$\|I + \alpha [x, x_*; F]\| \leq \|I + \alpha L\| + |\alpha| \|L\| \xi_0(\|x - x_*\|, 0).$$
Thus, we can set
$$a(t) = \|I + \alpha L\| + |\alpha| \|L\| \xi_0(t, 0),$$
where $a$ is a nondecreasing function defined on $M_0$. Then, $a(t)$ can replace $a$ in the preceding results.
Next, we develop the ball convergence analysis of Method (3) in an analogous way. But this time, the "$\zeta$" functions are defined as
$$\bar{\zeta}_1(t) = \frac{\xi(t, t)}{1 - \xi_0(t, t)}, \quad \bar{\zeta}_2(t) = \frac{\xi(t, \bar{\zeta}_1(t)t)}{1 - \xi_0(t, \bar{\zeta}_1(t)t)}$$
and
$$\bar{\zeta}_3(t) = \left[ \frac{\xi_0(\bar{\zeta}_2(t)t, \bar{\zeta}_1(t)t)\, \xi_1(\bar{\zeta}_2(t)t)}{(1 - \xi_0(\bar{\zeta}_2(t)t, 0))(1 - \xi_0(\bar{\zeta}_2(t)t, \bar{\zeta}_1(t)t))} + \frac{\xi_2(t, \bar{\zeta}_2(t)t)\, \xi_1(\bar{\zeta}_2(t)t)}{1 - \xi_0(\bar{\zeta}_2(t)t, \bar{\zeta}_1(t)t)} \right] \bar{\zeta}_2(t),$$
and
$$\bar{d} = \min\{\bar{d}_1, \bar{d}_2, \bar{d}_3\}, \quad \tilde{d} = \bar{d},$$
where $\bar{d}_1, \bar{d}_2, \bar{d}_3$ denote the least zeros in $M_0 \setminus \{0\}$ of the functions $\bar{\zeta}_i(t) - 1$, $i = 1, 2, 3$, respectively, which are assumed to exist.
The motivation for the introduction of the functions $\bar{\zeta}_i$ comes from the estimates
$$\|z_n - x_*\| = \|[x_n, x_{n-1}; F]^{-1}\left([x_n, x_{n-1}; F] - [x_n, x_*; F]\right)(x_n - x_*)\| \leq \frac{\xi(\|x_n - x_*\|, \|x_{n-1} - x_*\|)\|x_n - x_*\|}{1 - \xi_0(\|x_n - x_*\|, \|x_{n-1} - x_*\|)} \leq \bar{\zeta}_1(\|x_n - x_*\|)\|x_n - x_*\| \leq \|x_n - x_*\| < \bar{d},$$
$$\|y_n - x_*\| = \|[x_n, z_n; F]^{-1}\left([x_n, z_n; F] - [x_n, x_*; F]\right)(x_n - x_*)\| \leq \frac{\xi(\|x_n - x_*\|, \|z_n - x_*\|)\|x_n - x_*\|}{1 - \xi_0(\|x_n - x_*\|, \|z_n - x_*\|)} \leq \bar{\zeta}_2(\|x_n - x_*\|)\|x_n - x_*\| \leq \|x_n - x_*\|$$
and, as in (13),
$$\begin{aligned} \|x_{n+1} - x_*\| &\leq \left[ \frac{\xi_0(\bar{\zeta}_2(\|x_n - x_*\|)\|x_n - x_*\|, \bar{\zeta}_1(\|x_n - x_*\|)\|x_n - x_*\|)\, \xi_1(\bar{\zeta}_2(\|x_n - x_*\|)\|x_n - x_*\|)}{(1 - \xi_0(\bar{\zeta}_2(\|x_n - x_*\|)\|x_n - x_*\|, 0))(1 - \xi_0(\bar{\zeta}_2(\|x_n - x_*\|)\|x_n - x_*\|, \bar{\zeta}_1(\|x_n - x_*\|)\|x_n - x_*\|))} \right. \\ &\quad \left. + \frac{\xi_2(\|x_n - x_*\|, \bar{\zeta}_2(\|x_n - x_*\|)\|x_n - x_*\|)\, \xi_1(\bar{\zeta}_2(\|x_n - x_*\|)\|x_n - x_*\|)}{1 - \xi_0(\bar{\zeta}_2(\|x_n - x_*\|)\|x_n - x_*\|, \bar{\zeta}_1(\|x_n - x_*\|)\|x_n - x_*\|)} \right] \bar{\zeta}_2(\|x_n - x_*\|)\|x_n - x_*\| \\ &\leq \bar{\zeta}_3(\|x_n - x_*\|)\|x_n - x_*\| \leq \|x_n - x_*\|. \end{aligned}$$
Hence, we arrive at the corresponding local convergence result for Method (3).
Theorem 2.
Suppose the conditions (H) hold with $\tilde{d} = \bar{d}$. Then, the conclusions of Theorem 1 hold for Method (3), with $d$ and $\zeta_i$ replaced by $\bar{d}$ and $\bar{\zeta}_i$, respectively.

3. Semi-Local Analysis

The analysis in this case uses a majorant sequence [1,2,3,8].
Assume the following:
(e1)
There exist continuous and nondecreasing functions $f : M \to \mathbb{R}$ and $p_0 : M \times M \to \mathbb{R}$ such that the equation $p_0(t, f(t)) - 1 = 0$ has a smallest positive solution, denoted by $s$. Set $M_2 = [0, s)$.
(e2)
There exists a continuous and nondecreasing function $p : M_2 \times M_2 \times M_2 \to \mathbb{R}$. Define the sequence $\{\alpha_n\}$ by $\alpha_0 = 0$, some $\beta_0 \geq 0$, and for each $n = 0, 1, 2, \ldots$
$$\begin{aligned} c_n &= (1 + p_0(\alpha_n, \beta_n))(\beta_n - \alpha_n) + (1 + p_0(\alpha_n, f(\alpha_n)))(\beta_n - \alpha_n), \\ \alpha_{n+1} &= \beta_n + \frac{p(\alpha_n, \beta_n, f(\alpha_n))\, c_n}{1 - p_0(\beta_n, f(\alpha_n))}, \\ b_{n+1} &= (1 + p_0(\alpha_n, \alpha_{n+1}))(\alpha_{n+1} - \alpha_n) + (1 + p_0(\alpha_n, f(\alpha_n)))(\beta_n - \alpha_n) \end{aligned}$$
and
$$\beta_{n+1} = \alpha_{n+1} + \frac{b_{n+1}}{1 - p_0(\alpha_{n+1}, f(\alpha_{n+1}))}.$$
A convergence criterion for this sequence is:
(e3)
There exists $s_0 \in M_2$ such that, for each $n = 0, 1, 2, \ldots$, $p_0(\beta_n, f(\alpha_n)) < 1$, $p_0(\alpha_n, f(\alpha_n)) < 1$, and $\alpha_n \leq s_0$. It follows from the definition of the sequence and this condition that $0 \leq \alpha_n \leq \beta_n \leq \alpha_{n+1} \leq s_0$, and that there exists $\alpha_* \in [0, s_0]$ such that $\lim_{n \to \infty} \alpha_n = \alpha_*$. These functions are connected to the operators of the method.
(e4)
There exist an invertible operator $L$ and a point $x_0 \in D$ such that, for each $x, y \in D$,
$$\|L^{-1}([x, y; F] - L)\| \leq p_0(\|x - x_0\|, \|y - x_0\|)$$
and, for $z = x + \alpha F(x)$,
$$\|z - x_0\| \leq f(\|x - x_0\|).$$
Set $D_2 = D \cap U(x_0, s)$.
(e5)
For $A = A(x, y, z)$ and each $x, y, z \in D_2$,
$$\|A\| \leq p(\|x - x_0\|, \|y - x_0\|, \|z - x_0\|),$$
and
(e6)
$U[x_0, \alpha_*] \subseteq D$.
It follows from (e1) and (e4) that $p_0(\|x_0 - x_0\|, \|z_0 - x_0\|) \leq p_0(0, f(0)) < 1$. Thus, the linear operator $[x_0, z_0; F]$ is invertible. That is why we can set $\beta_0 \geq \|[x_0, z_0; F]^{-1}F(x_0)\|$. The motivational calculations for the majorant sequence follow in turn by induction:
$$F(y_n) = F(y_n) - F(x_n) - [x_n, z_n; F](y_n - x_n),$$
$$\|L^{-1}F(y_n)\| \leq (1 + p_0(\|x_n - x_0\|, \|y_n - x_0\|))\|y_n - x_n\| + (1 + p_0(\|x_n - x_0\|, \|z_n - x_0\|))\|y_n - x_n\| = \bar{c}_n \leq (1 + p_0(\alpha_n, \beta_n))(\beta_n - \alpha_n) + (1 + p_0(\alpha_n, f(\alpha_n)))(\beta_n - \alpha_n) = c_n,$$
$$\|A_n\| \leq p(\|x_n - x_0\|, \|y_n - x_0\|, \|z_n - x_0\|) \leq p(\alpha_n, \beta_n, f(\alpha_n)),$$
$$\|L^{-1}([y_n, z_n; F] - L)\| \leq p_0(\|y_n - x_0\|, \|z_n - x_0\|) \leq p_0(\beta_n, f(\alpha_n)) < 1,$$
$$\|[y_n, z_n; F]^{-1}L\| \leq \frac{1}{1 - p_0(\beta_n, f(\alpha_n))},$$
$$\|x_{n+1} - y_n\| \leq \|A_n\| \|[y_n, z_n; F]^{-1}L\| \|L^{-1}F(y_n)\| \leq \frac{p(\alpha_n, \beta_n, f(\alpha_n))\, c_n}{1 - p_0(\beta_n, f(\alpha_n))} = \alpha_{n+1} - \beta_n,$$
$$\|x_{n+1} - x_0\| \leq \|x_{n+1} - y_n\| + \|y_n - x_0\| \leq \alpha_{n+1} - \beta_n + \beta_n - \alpha_0 = \alpha_{n+1} < \alpha_*,$$
$$F(x_{n+1}) = F(x_{n+1}) - F(x_n) - [x_n, z_n; F](y_n - x_n),$$
$$\|L^{-1}F(x_{n+1})\| \leq (1 + p_0(\|x_n - x_0\|, \|x_{n+1} - x_0\|))\|x_{n+1} - x_n\| + (1 + p_0(\|x_n - x_0\|, \|z_n - x_0\|))\|y_n - x_n\| = \bar{b}_{n+1} \leq (1 + p_0(\alpha_n, \alpha_{n+1}))(\alpha_{n+1} - \alpha_n) + (1 + p_0(\alpha_n, f(\alpha_n)))(\beta_n - \alpha_n) = b_{n+1},$$
$$\|y_{n+1} - x_{n+1}\| \leq \|[x_{n+1}, z_{n+1}; F]^{-1}L\| \|L^{-1}F(x_{n+1})\| \leq \frac{b_{n+1}}{1 - p_0(\alpha_{n+1}, f(\alpha_{n+1}))} = \beta_{n+1} - \alpha_{n+1}$$
and
$$\|y_{n+1} - x_0\| \leq \|y_{n+1} - x_{n+1}\| + \|x_{n+1} - x_0\| \leq \beta_{n+1} - \alpha_{n+1} + \alpha_{n+1} - \alpha_0 = \beta_{n+1} < \alpha_*.$$
Thus, the iterates $x_n, y_n, z_n$ remain in $U(x_0, \alpha_*)$, and the sequence $\{x_n\}$ is Cauchy in the Banach space $B_1$; as such, it is convergent to some $x_* \in U[x_0, \alpha_*]$ (since $U[x_0, \alpha_*]$ is a closed set). By letting $n \to \infty$, we deduce $F(x_*) = 0$.
Thus, we arrive at the semi-local convergence result for Method (2).
Theorem 3.
Assume the conditions (e1)–(e6) hold. Then, the sequence { x n } is well-defined, remains in U [ x 0 , α * ] , and is convergent to a solution x * U [ x 0 , α * ] of the equation F ( x ) = 0 .
The uniqueness of the solution is discussed next.
Proposition 1.
Assume the following:
(i) 
There exists a solution x ¯ U ( x 0 , s 1 ) of the equation F ( x ) = 0 for some s 1 > 0 .
(ii) 
The first condition in (e4) holds in the ball U ( x 0 , s 1 ) .
(iii) 
There exists s 2 s 1 so that
p 0 ( s 1 , s 2 ) < 1 .
Set D 3 = D U [ x 0 , s 2 ] . Then, the equation F ( x ) = 0 is uniquely solvable by x ¯ in the domain D 3 .
Proof. 
Let $\bar{y} \in D_3$ with $F(\bar{y}) = 0$ and $\bar{y} \neq \bar{x}$. Then, the divided difference $E = [\bar{x}, \bar{y}; F]$ is well defined, and we have the estimate
$$\|L^{-1}(E - L)\| \leq p_0(\|\bar{x} - x_0\|, \|\bar{y} - x_0\|) \leq p_0(s_1, s_2) < 1.$$
It follows that the linear operator $E$ is invertible. Then, we can write
$$\bar{x} - \bar{y} = E^{-1}\left(F(\bar{x}) - F(\bar{y})\right) = E^{-1}(0) = 0.$$
Thus, we deduce y ¯ = x ¯ .
Remark 2.
(i) 
The limit point α * can be switched with s in the condition (e6).
(ii) 
Under all the conditions of Theorem 3, we can take x ¯ = x * and s 1 = α * .
(iii) 
As in the local case, a choice for the real function $f$ can be provided, motivated by the calculation
$$z - x_0 = x - x_0 + \alpha(F(x) - F(x_0)) + \alpha F(x_0) = (I + \alpha [x, x_0; F])(x - x_0) + \alpha F(x_0) = \left[(I + \alpha L) + \alpha L L^{-1}([x, x_0; F] - L)\right](x - x_0) + \alpha F(x_0).$$
Thus, we can take
$$f(t) = \left[\|I + \alpha L\| + |\alpha| \|L\| p_0(t, 0)\right] t + |\alpha| \|F(x_0)\|.$$
The semi-local analysis of convergence for Method (3) follows along the same lines.

4. Numerical Examples

In the first examples, we use the standard and popular divided difference [1,4,12]
$$[x, y; F] = \int_0^1 F'(y + \theta(x - y))\, d\theta, \quad \alpha = 1,$$
and $\xi_2$ as in Remark 1.
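This integral form satisfies the secant identity $[x, y; F](x - y) = F(x) - F(y)$ by the fundamental theorem of calculus. A quick quadrature check in the scalar case (a sketch with our naming; for $F(t) = t^3$ the integrand is quadratic, so composite Simpson is exact up to rounding):

```python
def Fprime(t):
    # Derivative of F(t) = t^3.
    return 3.0 * t * t

def divided_difference(x, y, n=100):
    # [x, y; F] = integral_0^1 F'(y + theta (x - y)) dtheta
    # via composite Simpson's rule with n (even) subintervals.
    h = 1.0 / n
    s = Fprime(y) + Fprime(x)
    for k in range(1, n):
        theta = k * h
        s += (4.0 if k % 2 else 2.0) * Fprime(y + theta * (x - y))
    return s * h / 3.0

dd = divided_difference(2.0, 1.0)  # secant slope of t^3 on [1, 2]: (8 - 1)/(2 - 1) = 7
```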
The first three examples validate our local convergence analysis results.
Example 1.
Consider the kinematic system
Consider the kinematic system
$$F_1'(v_1) = e^{v_1}, \quad F_2'(v_2) = (e - 1)v_2 + 1, \quad F_3'(v_3) = 1,$$
with $F_1(0) = F_2(0) = F_3(0) = 0$. Let $F = (F_1, F_2, F_3)$. Let $B_1 = B_2 = \mathbb{R}^3$, $D = \bar{U}(0, 1)$, $x_* = (0, 0, 0)^T$. Define the function $F$ on $D$ for $w = (v_1, v_2, v_3)^T$ by
$$F(w) = \left(e^{v_1} - 1, \; \frac{e - 1}{2}v_2^2 + v_2, \; v_3\right)^T.$$
Then, we obtain
$$F'(w) = \begin{pmatrix} e^{v_1} & 0 & 0 \\ 0 & (e - 1)v_2 + 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
The conditions (H) are validated if we choose $\xi_0(u_1, u_2) = \frac{e - 1}{2}(u_1 + u_2)$, $\xi(u_1, u_2) = \frac{1}{2}e^{\frac{1}{e - 1}}(u_1 + u_2)$, $\xi_1(u_1) = e^{\frac{1}{e - 1}}$, $\xi_2(u_1, u_2) = \frac{e - 1}{2}u_1$, and $a = \frac{e}{2}$. Then, by using (i)–(iv) and solving the corresponding scalar equations, we deduce that the radii are:
$$d_1 = 0.241677, \quad d_2 = 0.192518, \quad \bar{d}_1 = 0.285075, \quad \bar{d}_2 = 0.285075, \quad \bar{d}_3 = 0.251558.$$
Therefore, Method (3) provides the larger radius for this example. Consequently, we conclude $d = d_2$ and $\bar{d} = \bar{d}_3$.
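The radii $d_1$ and $d_2$ can be recovered numerically by bisecting $\zeta_1(t) - 1$ and $\zeta_2(t) - 1$ with the $\xi$ functions listed above (as we read them: $\xi_0$ with coefficient $\frac{e-1}{2}$, $\xi$ with coefficient $\frac{1}{2}e^{1/(e-1)}$, $\xi_1 = e^{1/(e-1)}$, $a = e/2$). The following sketch (our naming) should reproduce the reported digits up to bisection tolerance:

```python
import math

E = math.e
L0 = (E - 1.0) / 2.0                   # coefficient of xi_0 and xi_2
K = 0.5 * math.exp(1.0 / (E - 1.0))    # coefficient of xi
XI1 = math.exp(1.0 / (E - 1.0))        # xi_1 (constant function)
a = E / 2.0

xi0 = lambda u1, u2: L0 * (u1 + u2)
xi = lambda u1, u2: K * (u1 + u2)
xi2 = lambda u1, u2: L0 * u1

def zeta1(t):
    return xi(t, a * t) / (1.0 - xi0(t, a * t))

def zeta2(t):
    s = zeta1(t) * t
    term1 = xi(s, a * t) * XI1 / ((1.0 - xi0(s, 0.0)) * (1.0 - xi0(s, a * t)))
    term2 = xi2(t, s) * XI1 / (1.0 - xi0(s, a * t))
    return (term1 + term2) * zeta1(t)

def smallest_zero(phi, lo, hi, tol=1e-12):
    # Bisection for the zero of phi(t) - 1 on (lo, hi); phi is increasing here.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) < 1.0 else (lo, mid)
    return 0.5 * (lo + hi)

d1 = smallest_zero(zeta1, 1e-9, 0.45)
d2 = smallest_zero(zeta2, 1e-9, d1)
```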
The iterates are given in Table 1.
Notice that Method (3) is also faster than (2) in this example.
Example 2.
Consider $B_1 = B_2 = C[0, 1]$, $D = \bar{U}(0, 1)$, and $F : D \to B_2$, given as
$$F(\lambda)(t) = \lambda(t) - 5\int_0^1 t\tau\, \lambda(\tau)^3\, d\tau.$$
We have that
$$F'(\lambda)(\xi)(t) = \xi(t) - 15\int_0^1 t\tau\, \lambda(\tau)^2 \xi(\tau)\, d\tau, \quad \text{for each } \xi \in D.$$
Then, we find that $x_* = 0$. Hence, the conditions (H) are validated for $\xi_0(u_1, u_2) = \frac{15}{4}(u_1 + u_2)$, $\xi(u_1, u_2) = \frac{15}{2}(u_1 + u_2)$, $\xi_1(u_1) = 2$, $\xi_2(u_1, u_2) = \frac{15}{4}u_1$, and $a = 7$. Then, the radii are:
$$d_1 = 0.011111, \quad d_2 = 0.00865678, \quad \bar{d}_1 = 0.044444, \quad \bar{d}_2 = 0.044444, \quad \bar{d}_3 = 0.0413453.$$
Hence, we conclude d = d 2 and d ¯ = d ¯ 3 .
Example 3.
For the academic example in the introduction, we have $\xi_0(u_1, u_2) = \xi(u_1, u_2) = \frac{96.6629073}{2}(u_1 + u_2)$, $\xi_1(u_1) = 2$, $\xi_2(u_1, u_2) = \frac{96.6629073}{2}u_1$, and $a = 5$. Then, the radii are:
$$d_1 = 0.0017242, \quad d_2 = 0.00138313, \quad \bar{d}_1 = 0.00517261, \quad \bar{d}_2 = 0.005172613, \quad \bar{d}_3 = 0.0044859.$$
Hence, we conclude d = d 2 and d ¯ = d ¯ 3 .
Concerning the semi-local case and the application of the methods, we provide two more examples. The first involves non-differentiable mappings.
Example 4.
Let $D = \mathbb{R} \times \mathbb{R}$. The $2 \times 2$ nonlinear and non-differentiable system to be solved is
$$3t_1^2 t_2 + t_2^2 - 1 + |t_1 - 1| = 0,$$
$$t_1^4 + t_1 t_2^3 - 1 + |t_2| = 0.$$
The system can also be described as
F = ( F 1 , F 2 ) ,
where
$$F_1(t_1, t_2) = 3t_1^2 t_2 + t_2^2 - 1 + |t_1 - 1|, \quad F_2(t_1, t_2) = t_1^4 + t_1 t_2^3 - 1 + |t_2|.$$
The system becomes $F(t_1, t_2) = 0$. Then, $A = [\cdot, \cdot; F]$ is the $2 \times 2$ real matrix defined for $\bar{t} = [t_1, t_2]^T$ and $\tilde{t} = [t_3, t_4]^T$ by
$$[\bar{t}, \tilde{t}; F]_{i,1} = \frac{F_i(t_3, t_4) - F_i(t_1, t_4)}{t_3 - t_1} \quad \text{for } t_1 \neq t_3$$
and
$$[\bar{t}, \tilde{t}; F]_{i,2} = \frac{F_i(t_1, t_4) - F_i(t_1, t_2)}{t_4 - t_2} \quad \text{for } t_2 \neq t_4,$$
$i = 1, 2$. Otherwise, set $[\cdot, \cdot; F] = O$. Notice that these matrices constitute standard divided differences [9,10,11,17]. Let us choose $\bar{t}_0 = [5, 5]^T$ and $\tilde{t}_0 = [1, 0]^T$ as the starters for scheme (2). Then, the solution of the system is $t_* = [t_1^*, t_2^*]^T$ for
$$t_1^* = 0.894655373334687 \quad \text{and} \quad t_2^* = 0.327826421746298.$$
The solution is obtained after four iterations for both methods.
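As a sanity check on the divided-difference matrix defined above, one can verify the secant identity $[\bar{t}, \tilde{t}; F](\tilde{t} - \bar{t}) = F(\tilde{t}) - F(\bar{t})$, which follows by telescoping the two columns, even though $F$ is non-differentiable (a sketch; function names are ours):

```python
def F(t1, t2):
    # The non-differentiable system of Example 4.
    return (3.0 * t1 ** 2 * t2 + t2 ** 2 - 1.0 + abs(t1 - 1.0),
            t1 ** 4 + t1 * t2 ** 3 - 1.0 + abs(t2))

def dd_matrix(tbar, ttilde):
    # Standard first-order divided difference of F at (tbar, ttilde).
    t1, t2 = tbar
    t3, t4 = ttilde
    M = [[0.0, 0.0], [0.0, 0.0]]
    for i in range(2):
        M[i][0] = (F(t3, t4)[i] - F(t1, t4)[i]) / (t3 - t1)
        M[i][1] = (F(t1, t4)[i] - F(t1, t2)[i]) / (t4 - t2)
    return M

# Use the example's starters as the two arguments.
tbar, ttilde = (5.0, 5.0), (1.0, 0.0)
M = dd_matrix(tbar, ttilde)
lhs = [M[i][0] * (ttilde[0] - tbar[0]) + M[i][1] * (ttilde[1] - tbar[1])
       for i in range(2)]
rhs = [F(*ttilde)[i] - F(*tbar)[i] for i in range(2)]
```

The identity holds exactly (up to rounding) by construction, which is what makes these matrices usable in place of the Fréchet derivative.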
Example 5.
The system of equations
$$3t_1^2 t_2 + t_2^2 = 1, \quad t_1^4 + t_1 t_2^3 = 1$$
has solutions near $(1, 0.2)$, $(0.4, 1.3)$, and $(0.9, 0.3)$. The solution near $(0.9, 0.3)$ is the one approximated using Methods (2) and (3). We use the initial point $(2, -1)$ in our computation.
The iterates are given in Table 2.
Notice that Method (3) is faster than Method (2) in this example.

5. Conclusions

There are some drawbacks when Taylor series expansions are used to find the order of convergence of iterative methods. Some of these are: (a) high-order derivatives which do not appear in the methods must exist; (b) computable estimates of $\|x_n - x_*\|$ are not given; and (c) uniqueness results for the solution $x_*$ are not given. These drawbacks create problems, such as not knowing how to pick initial points or how many iterates are needed to achieve a pre-decided error tolerance. We developed a technique in this paper so general that it can be applied to extend the applicability of other methods along the same lines [1,2,3,4,5,6,7,8,9,12,13,14,15,16,17]. In particular, we addressed problems (a)–(c) using generalized conditions only on the divided differences of order one, which are the only operators appearing in these methods. Hence, we extended the applicability of these methods to the more general setting of Banach space-valued equations. Numerical experiments where the convergence criteria are tested complete this paper. The idea of this paper shall be used in future work to extend the applicability of similar methods [5,12,13,14,15,16].

Author Contributions

Conceptualization, S.G., I.K.A. and S.R.; methodology, S.G., I.K.A. and S.R.; software, S.G., I.K.A. and S.R.; validation, S.G., I.K.A. and S.R.; formal analysis, S.G., I.K.A. and S.R.; investigation, S.G., I.K.A. and S.R.; resources, S.G., I.K.A. and S.R.; data curation, S.G., I.K.A. and S.R.; writing—original draft preparation, S.G., I.K.A. and S.R.; writing—review and editing, S.G., I.K.A. and S.R.; visualization, S.G., I.K.A. and S.R.; supervision, S.G., I.K.A. and S.R.; project administration, S.G., I.K.A. and S.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Correction Statement

This article has been republished with a minor correction to the Data Availability Statement. This change does not affect the scientific content of the article.

References

  1. Argyros, I.K.; George, S.; Magreñán, A.A. Local convergence for multi-point-parametric Chebyshev-Halley-type method of higher convergence order. J. Comput. Appl. Math. 2015, 282, 215–224. [Google Scholar] [CrossRef]
  2. Argyros, I.K.; Magreñán, A.A. A study on the local convergence and the dynamics of Chebyshev-Halley-type methods free from second derivative. Numer. Algorithms 2015, 71, 1–23. [Google Scholar] [CrossRef]
  3. Argyros, I.K.; George, S. On the complexity of extending the convergence region for Traub’s method. J. Complex. 2020, 56, 101423. [Google Scholar] [CrossRef]
  4. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  5. Kung, H.T.; Traub, J.F. Optimal order of one point and multi point iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651. [Google Scholar] [CrossRef]
  6. Magreñán, A.A. A new tool to study real dynamics: The convergence plane. Appl. Math. Comput. 2014, 248, 215–224. [Google Scholar] [CrossRef]
  7. Neta, B. A new family of high order methods for solving equations. Int. J. Comput. Math. 1983, 14, 191–195. [Google Scholar] [CrossRef]
  8. Ortega, J.M.; Rheinboldt, W.G. Iterative Solutions of Nonlinear Equations in Several Variables; SIAM: New York, NY, USA, 1970. [Google Scholar]
  9. Steffensen, J.F. Remarks on iteration. Skand. Aktuarietidskr. 1933, 16, 64–72. [Google Scholar] [CrossRef]
  10. Shakhno, S.M.; Gnatyshyn, O.P. On an iterative method of order 1.839... for solving nonlinear least squares problems. Appl. Math. Comput. 2005, 161, 253–264. [Google Scholar]
  11. Shakhno, S.M.; Iakymchuk, R.P.; Yarmola, H.P. Convergence analysis of a two step scheme for the nonlinear squares problem with decomposition of operator. J. Numer. Appl. Math. 2018, 128, 82–95. [Google Scholar]
  12. Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Dynamics of iterative families with memory based on weight functions procedure. J. Comput. Appl. Math. 2019, 354, 286–298. [Google Scholar] [CrossRef]
  13. Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Drawing dynamical and parameters planes of iterative families and methods. Sci. World J. 2013, 2013, 780153. [Google Scholar] [CrossRef] [PubMed]
  14. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
  15. Dzunic, J.; Petkovic, M.S. On generalized biparametric multi point root finding methods with memory. J. Comput. Appl. Math. 2014, 255, 362–375. [Google Scholar] [CrossRef]
  16. Petkovic, M.S.; Dzunic, J.; Petkovic, L.D. A family of two-point methods with memory for solving nonlinear equations. Appl. Anal. Discret. Math. 2011, 5, 298–317. [Google Scholar] [CrossRef]
  17. Sharma, J.R.; Gupta, P. On some highly efficient derivative free methods with and without memory for solving nonlinear equations. Int. J. Comput. Methods 2015, 12, 1–28. [Google Scholar] [CrossRef]
Table 1. Iterates of Method (2) and Method (3).
n x n by (2) x n by (3)
−1(0.2000, 0.2000, 0.2000)(0.2000, 0.2000, 0.2000)
0(0.1000, 0.1000, 0.1000)( 0.1000, 0.1000, 0.1000)
1( 0.0044, 0.0526, 0)( 0.0000, 0.0457, 0)
2(0.0000, 0.0325, 0)(0.0000, 0.0276, 0)
3(0.0000, 0.0215, 0)(−0.0000, 0.0181, 0)
4(0.0000, 0.0147, 0)(−0.0000, 0.0124, 0)
5(0.0000, 0.0103, 0)(−0.0000, 0.0087, 0)
6(0.0000, 0.0074, 0)(−0.0000, 0.0062, 0)
7(0.0000, 0.0053, 0)(−0.0000, 0.0045, 0)
8(0.0000, 0.0038, 0)(−0.0000, 0.0032, 0)
9(0.0000, 0.0028, 0)(−0.0000, 0.0024, 0)
10(0.0000, 0.0020, 0)(−0.0000, 0.0017, 0)
11(0.0000, 0.0015, 0)(−0.0000, 0.0013, 0)
12(0.0000, 0.0011, 0)(−0.0000, 0.0009, 0)
13(0.0000, 0.0008, 0)(−0.0000, 0.0007, 0)
14(0.0000, 0.0006, 0)(−0.0000, 0.0005, 0)
15(0.0000, 0.0004, 0)(−0.0000, 0.0004, 0)
16(0.0000, 0.0003, 0)(−0.0000, 0.0003, 0)
17(0.0000, 0.0002, 0)(−0.0000, 0.0002, 0)
18(0.0000, 0.0002, 0)(−0.0000, 0.0001, 0)
19(0.0000, 0.0001, 0)(−0.0000, 0.0001, 0)
20(0.0000, 0.0001, 0)(−0.0000, 0.0001, 0)
21(0.0000, 0.0001, 0)(−0.0000, 0.0001, 0)
22(0.0000, 0.0001, 0)(0, 0, 0)
Table 2. Iterates of Method (2) and Method (3).
n x n by (2) x n by (3)
−1(1.9, −0.9)
0(2.000000, −1.000000)(2.000000, −1.000000)
1(1.953072, −0.962331)(1.153994, 0.203527)
2(1.903627, −0.920635)(0.996799, 0.301846)
3(1.851328, −0.874390)(0.992780, 0.306440)
4(1.795779, −0.822929)(0.992780, 0.306440)
5(1.736504, −0.765386)
6(1.672947, −0.700609)
7(1.604467, −0.627018)
8(1.530378, −0.542399)
9(1.450068, −0.443592)
10(1.363359, −0.326162)
11(1.271401, −0.184796)
12(1.178280, −0.018149)
13(1.091066, 0.152382)
14(1.020124, 0.270191)
15(0.993678, 0.305320)
16(0.992780, 0.306440)
17(0.992780, 0.306440)