Article

Convergence of High-Order Derivative-Free Algorithms for the Iterative Solution of Systems of Not Necessarily Differentiable Equations

1 Department of Mathematics, University of Houston, Houston, TX 77205, USA
2 Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
3 Department of Mathematical & Computational Science, National Institute of Technology Karnataka, Surathkal 575 025, India
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(5), 723; https://doi.org/10.3390/math12050723
Submission received: 24 January 2024 / Revised: 23 February 2024 / Accepted: 25 February 2024 / Published: 29 February 2024

Abstract
In this study, we extended the applicability of a derivative-free algorithm to encompass the solution of operators that may be either differentiable or non-differentiable. Conditions weaker than those in earlier studies are employed for the convergence analysis. The earlier results considered assumptions up to the existence of the ninth derivative of the main operator, even though no derivatives appear in the algorithm, and the use of Taylor series on a finite-dimensional Euclidean space restricts the applicability of the algorithm. Moreover, the previous results could not be used for non-differentiable equations, although the algorithm could converge. The new local result uses only conditions on the divided difference in the algorithm to show convergence. Moreover, the more challenging semi-local convergence, which had not previously been studied, is considered using majorizing sequences. The paper includes results on upper bounds of the error estimates and on domains where the equation has only one solution. The methodology of this paper is applicable to other algorithms using inverses and in the setting of a Banach space. Numerical examples further validate our approach.

1. Introduction

Let F denote a mapping of a subset D ⊆ E into E, where E is a Banach space. In a plethora of applications, researchers reduce the problem at hand to finding a solution x* ∈ D of
F ( x ) = 0 .
The analytical version of the solution  x *  is difficult to determine in general. Therefore, iterative algorithms have been developed that generate sequences that converge to  x *  by means of some initial hypotheses [1,2,3,4].
Newton's scheme [1,2,3], defined by
x_0 ∈ D, x_{n+1} = x_n − F′(x_n)⁻¹F(x_n), n = 0, 1, 2, …,
is a popular quadratic-order algorithm. Recently, there has been a surge in the need to develop an algorithm with an order higher than two [5,6,7,8,9]. The Taylor series expansion provides the local order of convergence. But, there are limitations to this approach:
(C1)
The convergence analysis is usually only local and E = ℝ^j, where j is a natural number.
(C2)
The sufficient convergence hypotheses involve F^{(d)}, where d = 1 + the order of convergence.
(C3)
No a priori and computational error distances are available.
(C4)
The isolation of the solution  x *  is not discussed.
(C5)
The semi-local convergence, which is considered more interesting and challenging than the local convergence, is not discussed.
Our idea addresses concerns (C1)–(C5) as follows:
(C1)’
The analysis is developed in Banach space.
(C2)’
The sufficient convergence hypotheses involve only the operators appearing in the algorithm (see Algorithm 1), i.e., the divided differences. This is in contrast with the motivational work in [10], which uses hypotheses on high-order derivatives that do not appear in the algorithm to show its convergence.
(C3)’ 
Error estimates become available under the concept of  ω  continuity [1,2,3,11] in the local and majorizing sequences [3,7,12,13] in the semi-local case.
(C4)’ 
The isolation of the solution  x *  is specified.
and
(C5)’ 
The semi-local convergence analysis of the algorithm is studied.
An algorithm taken from [10] is used to demonstrate this idea. However, the same idea applies similarly to other algorithms containing inverses of linear operators [4,11,12,13,14,15,16,17,18,19,20,21].
Let us restate the algorithm, formatted in the Banach space setting, as follows, for n = 0, 1, 2, …:
Algorithm 1
Step 1: Given x_0 ∈ D, solve A_n u_1 = F(x_n) for u_1.
Step 2: Set y_n = x_n − u_1.
Step 3: Solve  A n u 2 = F ( y n ) ,  for  u 2 .  
Step 4: Solve  A n u 3 = G n u 2 ,  for  u 3 .  
Step 5: Solve  A n u 4 = G n u 3 ,  for  u 4 .  
Step 6: Set z_n = y_n − a_0 u_2 − (3 − 2a_0)u_3 − (a_0 − 2)u_4.
Step 7: Solve  A n u 5 = F ( z n ) ,  for  u 5 .  
Step 8: Solve  A n u 6 = Q n u 5 ,  for  u 6 .  
Step 9: Solve  A n u 7 = Q n u 6 ,  for  u 7 .  
Step 10: Solve  A n u 8 = Q n u 7 ,  for  u 8 .  
Step 11: Solve  A n u 9 = Q n u 8 ,  for  u 9 .  
Step 12: Set x_{n+1} = z_n − a_1 u_5 − a_2 u_6 − a_3 u_7 − a_4 u_8 − a_5 u_9.
Step 13: If  x n + 1 = x n ,  STOP. Otherwise, repeat the process with  n n + 1 .  
Here, a_0 and a_5 are free real parameters, a_1 = a_5 + 4, a_2 = −4a_5 − 6, a_3 = 6a_5 + 4 and a_4 = −4a_5 − 1; b_0, b_1, b_2 are fixed real numbers; w_n = x_n + b_0F(x_n), h_n = y_n + b_1F(y_n), l_n = z_n + b_2F(z_n), A_n = [x_n, w_n; F], G_n = [h_n, y_n; F] and Q_n = [l_n, z_n; F].
Here, [·, ·; F] : D × D → L(E) is a divided difference of order one for the operator F [11,18,20,22], and the notation L(E) is used for the set of continuous linear operators mapping E into itself. An interesting point of the algorithm is that, because the same coefficient operator A_n is used throughout, only one LU decomposition has to be performed per iteration when solving, e.g., linear systems with multiple right-hand sides. Thus, the algorithm without memory includes three steps (see Step 2, Step 6, and Step 12, where the iterates y_n, z_n, x_{n+1} are computed, respectively) and five free non-zero parameters. In Section 4, the parameters are further specialized. The convergence order is shown to be eight in Theorem 1 in [10]. But the existence of the ninth derivative is required for the local convergence analysis in [10]. Thus, if F is not differentiable at least that many times, the conclusions in [10] cannot assure the convergence of the algorithm to x*. But the algorithm can converge. Other limitations are listed in the aforementioned concerns (C1)–(C5). As a further motivation, consider the following example: if D̃ stands for any neighborhood containing the numbers t = 0, 1, define F(t) = 4t³ log t + 7t⁵ − 7t⁴ for t ≠ 0. Clearly, t* = 1 ∈ D̃ solves the equation F(t) = 0. But the conclusions in [10] cannot assure that lim_{n→∞} x_n = 1, although the algorithm converges to t*, since the function F is not continuous at t = 0 ∈ D̃. A more important semi-local analysis of the algorithm, which has not yet been presented, is also developed in this paper [14,15,16,17,20,22].
The rest of the paper contains the following: a local analysis of the algorithm is provided in Section 2, a semi-local analysis is provided in Section 3, numerical examples are provided in Section 4, and the paper ends with a conclusion in Section 5.

2. Local Analysis

We use the symbols  U ( x , a )  and  U [ x , a ]  to denote open and closed balls in  E ,  respectively, with center  x E  and of radius  a > 0 .  Let M denote the nonnegative axis, and NFC stands for a function that is nondecreasing and continuous on M or some subset of it. Then, the following hypotheses are required in the local analysis.
Assume:
(H1)
Nondecreasing and continuous functions (NFC) f1 : M → M, φ0 : M × M → M exist, so that the equation
φ0(f1(t), t) − 1 = 0
admits a minimal positive solution (MPS) denoted by ρ. Let M0 = [0, ρ). It follows that for each t ∈ M0∖{0}
0 ≤ φ0(f1(t), t) < 1
and, consequently, the function  λ : M 0 M  provided by
λ(t) = 1 / (1 − φ0(f1(t), t))
is positive.
(H2)
NFC f2, f3, φ1 : M0 → M and φ : M0 × M0 → M exist, such that the equations g_i(t) − 1 = 0, i = 1, 2, 3 admit MPS denoted by ρ_i, respectively, where the functions g_i : M0 → M are given as
g1(t) = λ(t)φ(f1(t), t),
g2(t) = [φ(f1(t), g1(t)t) + |1 − a0|(1 + ∫₀¹ φ1(θg1(t)t) dθ) + |3 − 2a0| λ(t)(1 + φ0(f2(t), g1(t)t)) + |a0 − 2| λ²(t)(1 + φ0(f2(t), g1(t)t))²] λ(t)g1(t),
and for
λ1(t) = φ(f1(t), g2(t)t) + |1 − a1|(1 + ∫₀¹ φ1(θg2(t)t) dθ) + |a2| λ(t)(1 + φ0(f3(t), g2(t)t)) + |a3| λ²(t)(1 + φ0(f3(t), g2(t)t))² + |a4| λ³(t)(1 + φ0(f3(t), g2(t)t))³ + |a5| λ⁴(t)(1 + φ0(f3(t), g2(t)t))⁴,
g3(t) = λ1(t)λ(t)g2(t).
Define parameter
ρ* = min{ρ_i : i = 1, 2, 3}
and the set M* = [0, ρ*). These definitions imply that if t ∈ M*
0 ≤ g_i(t) < 1.
(H3)
L is an invertible operator on E, such that for each x ∈ D
‖w − x*‖ ≤ f1(‖x − x*‖),
‖L⁻¹([x, w; F] − L)‖ ≤ φ0(‖w − x*‖, ‖x − x*‖), w = x + b0F(x).
Define the region D0 = D ∩ S(x*, ρ*) with S(x*, ρ*) = {y ∈ D : ‖y − x*‖ < ρ*}.
(H4)
‖h − x*‖ ≤ f2(‖y − x*‖), ‖l − x*‖ ≤ f3(‖z − x*‖),
‖L⁻¹([x, x*; F] − L)‖ ≤ φ1(‖x − x*‖),
‖L⁻¹([x, w; F] − [x, x*; F])‖ ≤ φ(‖w − x*‖, ‖x − x*‖)
for x ∈ D0, w = x + b0F(x), h = y + b1F(y) and l = z + b2F(z), where y, z are provided by the corresponding substeps of the algorithm.
It is shown that  y , z  exist (see Proof of Theorem 1).
and
(H5)
S[x*, ρ*] ⊂ D, where S[x*, ρ*] is the closure of S(x*, ρ*).
A local analysis of the algorithm follows.
Theorem 1. 
Under conditions (H1)–(H5), the sequence {x_n} is convergent to x*, provided that x0 ∈ U(x*, ρ*)∖{x*}. Moreover, the following assertions hold
{x_n} ⊂ S(x*, ρ*),
‖y_n − x*‖ ≤ g1(‖x_n − x*‖)‖x_n − x*‖ ≤ ‖x_n − x*‖ < ρ*,
‖z_n − x*‖ ≤ g2(‖x_n − x*‖)‖x_n − x*‖ ≤ ‖x_n − x*‖,
and
‖x_{n+1} − x*‖ ≤ g3(‖x_n − x*‖)‖x_n − x*‖ ≤ ‖x_n − x*‖,
where the functions  g i  are as previously provided and the radius  ρ *  is defined by the Formula (4).
Proof. 
From the hypothesis, x0 ∈ S(x*, ρ*). If F(x0) ≠ 0 and b0 ≠ 0, then w0 ≠ x0. It follows that the divided difference A0 = [x0, w0; F] is well defined. Then, by (H1), (4) and (H3), we obtain
‖L⁻¹(A0 − L)‖ ≤ φ0(‖w0 − x*‖, ‖x0 − x*‖) ≤ φ0(f1(‖x0 − x*‖), ‖x0 − x*‖) ≤ φ0(f1(ρ*), ρ*) < 1.
Thus, by the Banach perturbation lemma on linear operators with inverses [1,2,6,18], A0⁻¹ exists,
‖A0⁻¹L‖ ≤ 1 / (1 − φ0(f1(‖x0 − x*‖), ‖x0 − x*‖))
and the iterate  y 0  exists in the first substep of the algorithm.
Then, we can write
y0 − x* = x0 − x* − A0⁻¹F(x0) = A0⁻¹(A0 − [x0, x*; F])(x0 − x*).
Using (4), (5) (for  i = 1  ), (11) and (12)
‖y0 − x*‖ ≤ ‖A0⁻¹L‖ ‖L⁻¹(A0 − [x0, x*; F])‖ ‖x0 − x*‖ ≤ φ(‖w0 − x*‖, ‖x0 − x*‖)‖x0 − x*‖ / (1 − φ0(f1(‖x0 − x*‖), ‖x0 − x*‖)) ≤ g1(‖x0 − x*‖)‖x0 − x*‖ ≤ ‖x0 − x*‖ < ρ*.
Thus, in the second substep of the algorithm, the following happens
z0 − x* = y0 − x* − A0⁻¹F(y0) + (1 − a0)A0⁻¹F(y0) − (3 − 2a0)A0⁻¹G0A0⁻¹F(y0) − (a0 − 2)A0⁻¹G0A0⁻¹G0A0⁻¹F(y0)
leading to
‖z0 − x*‖ ≤ [φ(‖w0 − x*‖, ‖y0 − x*‖)λ(‖x0 − x*‖) + |1 − a0| λ(‖x0 − x*‖)(1 + ∫₀¹ φ1(θ‖y0 − x*‖) dθ) + |3 − 2a0| λ²(‖x0 − x*‖)(1 + φ0(‖h0 − x*‖, ‖y0 − x*‖)) + |a0 − 2| λ³(‖x0 − x*‖)(1 + φ0(‖h0 − x*‖, ‖y0 − x*‖))²] ‖y0 − x*‖ ≤ g2(‖x0 − x*‖)‖x0 − x*‖ ≤ ‖x0 − x*‖.
Hence, the iterate z0 ∈ S(x*, ρ*) and the assertion (8) holds if n = 0.
Similarly, in the last substep of the algorithm, we write
x1 − x* = z0 − x* − A0⁻¹F(z0) + (1 − a1)u5 − a2u6 − a3u7 − a4u8 − a5u9,
leading to
‖x1 − x*‖ ≤ [φ(‖w0 − x*‖, ‖z0 − x*‖)λ(‖x0 − x*‖) + |1 − a1| λ(‖x0 − x*‖)(1 + ∫₀¹ φ1(θ‖z0 − x*‖) dθ) + |a2| λ²(‖x0 − x*‖)(1 + φ0(‖l0 − x*‖, ‖z0 − x*‖)) + |a3| λ³(‖x0 − x*‖)(1 + φ0(‖l0 − x*‖, ‖z0 − x*‖))² + |a4| λ⁴(‖x0 − x*‖)(1 + φ0(‖l0 − x*‖, ‖z0 − x*‖))³ + |a5| λ⁵(‖x0 − x*‖)(1 + φ0(‖l0 − x*‖, ‖z0 − x*‖))⁴] ‖z0 − x*‖ ≤ g3(‖x0 − x*‖)‖x0 − x*‖ ≤ ‖x0 − x*‖.
Thus, assertion (6) holds if  n = 1  and (9) if  n = 0 .  Repeat the preceding calculations with  x m , y m , z m , x m + 1  replacing  x 0 , y 0 , z 0 , x 1  to complete the induction for assertions (7)–(9). Then, from the estimate
‖x_{m+1} − x*‖ ≤ c‖x_m − x*‖ < ρ*,
where c = g3(‖x0 − x*‖) ∈ [0, 1), we conclude that lim_{m→∞} x_m = x* and that the iterate x_{m+1} ∈ S(x*, ρ*).     □
The uniqueness region can be determined.
Proposition 1. 
Suppose:
A solution x̄ ∈ S(x*, ρ4) of the equation F(x) = 0 exists for some ρ4 > 0; the condition (H3) holds on the ball S(x*, ρ4); and ρ5 ≥ ρ4 exists, such that
∫₀¹ φ1(θρ5) dθ < 1.
Define the region D1 = D ∩ S[x*, ρ5]. Then, the equation F(x) = 0 is uniquely solvable by x* in the region D1.
Proof. 
Let us consider the divided difference M = [x̄, x*; F], provided that x̄ ≠ x*. Then, it follows by (H3) and (15) that
‖L⁻¹(M − L)‖ ≤ ∫₀¹ φ1(θ‖x̄ − x*‖) dθ ≤ ∫₀¹ φ1(θρ5) dθ < 1.
Thus, the linear operator M is invertible. It follows from x̄ − x* = M⁻¹(F(x̄) − F(x*)) = M⁻¹(0) = 0 that x̄ = x*.     □
Remark 1. 
 
(i)
We can certainly choose  ρ 4 = ρ *  in Proposition 1.
(ii)
Possible choices for the functions f_i can be obtained as follows:
w0 − x* = x0 − x* + b0F(x0) = (I + b0[x0, x*; F])(x0 − x*) = ((I + b0L) + b0L L⁻¹([x0, x*; F] − L))(x0 − x*),
so that
‖w0 − x*‖ ≤ [‖I + b0L‖ + |b0| ‖L‖ φ1(‖x0 − x*‖)] ‖x0 − x*‖.
Thus, we can define
f1(t) = (‖I + b0L‖ + |b0| ‖L‖ φ1(t)) t.
Similarly, we set
f2(t) = (‖I + b1L‖ + |b1| ‖L‖ φ1(t)) t
and
f3(t) = (‖I + b2L‖ + |b2| ‖L‖ φ1(t)) t.
(iii)
A possible choice for L in local convergence studies may be L = F′(x*), or L = I, or any other linear operator satisfying the conditions (H1)–(H5) (see also Example 1 in Section 4).

3. Semi-Local Analysis

The roles of x*, φ0 and φ are exchanged with x0, ψ0 and ψ, respectively, as follows:
Assume:
(E1)
NFC f4 : M → M and ψ0 : M × M → M exist, such that the equation
ψ0(f4(t), t) − 1 = 0
has MPS ρ6 ∈ M∖{0}. Let M1 = [0, ρ6). For NFC ψ : M1 × M1 × M1 → M, define the sequence {α_n} by α0 = 0, β0 ∈ [0, ρ6) and, for each n = 0, 1, 2, …,
q_n = 1 / (1 − ψ0(f4(α_n), α_n)), p_n = ψ(α_n, β_n, f4(α_n))(β_n − α_n),
γ_n = β_n + |a0| q_n p_n + |3 − 2a0| q_n² p_n (1 + ψ0(f5(β_n), β_n)) + |a0 − 2| q_n³ p_n (1 + ψ0(f5(β_n), β_n))²,
d_n = (1 + ψ0(α_n, γ_n))(γ_n − α_n) + p_n,
α_{n+1} = γ_n + |a1| q_n d_n + |a2| q_n² d_n (1 + ψ0(f6(γ_n), γ_n)) + |a3| q_n³ d_n (1 + ψ0(f6(γ_n), γ_n))² + |a4| q_n⁴ d_n (1 + ψ0(f6(γ_n), γ_n))³ + |a5| q_n⁵ d_n (1 + ψ0(f6(γ_n), γ_n))⁴,
δ_{n+1} = ψ(α_n, α_{n+1}, f4(α_n))(α_{n+1} − α_n) + (1 + ψ0(f4(α_n), α_n))(α_{n+1} − β_n),
and
β_{n+1} = α_{n+1} + δ_{n+1} / (1 − ψ0(f4(α_{n+1}), α_{n+1})).
(E2)
ρ7 ∈ [0, ρ6) exists, such that for each n = 0, 1, 2, …
ψ0(f4(α_n), α_n) < 1 and α_n ≤ ρ7.
It follows that 0 ≤ α_n ≤ β_n ≤ γ_n ≤ α_{n+1} < ρ7 and that ρ8 ∈ [0, ρ7] exists, such that lim_{n→∞} α_n = ρ8.
(E3)
An invertible linear operator L and a point x0 ∈ D exist, such that for each x ∈ D
‖L⁻¹([w, x; F] − L)‖ ≤ ψ0(‖w − x0‖, ‖x − x0‖)
and
‖w − x0‖ ≤ f4(‖x − x0‖).
Notice that by condition (E1)
‖L⁻¹([w0, x0; F] − L)‖ ≤ ψ0(‖w0 − x0‖, 0) ≤ ψ0(f4(0), 0) < 1.
Thus, the linear operator  A 0 = [ w 0 , x 0 ; F ]  is invertible and we can take
‖A0⁻¹F(x0)‖ ≤ β0.
(E4)
Let D1 = D ∩ S(x0, ρ6) and
‖L⁻¹([y, x; F] − [w, x; F])‖ ≤ ψ(‖x − x0‖, ‖y − x0‖, ‖w − x0‖)
for each x, y ∈ D1.
and
(E5)
S[x0, ρ8] ⊂ D.
Then, using induction, as in the local case, we obtain the estimates
‖y0 − x0‖ = ‖A0⁻¹F(x0)‖ ≤ β0 = β0 − α0 < ρ8,
F(y_n) = F(y_n) − F(x_n) − [w_n, x_n; F](y_n − x_n) = ([y_n, x_n; F] − [w_n, x_n; F])(y_n − x_n),
‖L⁻¹F(y_n)‖ ≤ ψ(‖x_n − x0‖, ‖y_n − x0‖, ‖w_n − x0‖)‖y_n − x_n‖ ≤ ψ(α_n, β_n, f4(α_n))(β_n − α_n) = p_n,
‖A_n⁻¹L‖ ≤ 1 / (1 − ψ0(‖w_n − x0‖, ‖x_n − x0‖)) ≤ 1 / (1 − ψ0(f4(α_n), α_n)) = q_n,
‖z_n − y_n‖ ≤ |a0| q_n p_n + |3 − 2a0| q_n² p_n (1 + ψ0(‖h_n − x0‖, ‖y_n − x0‖)) + |a0 − 2| q_n³ p_n (1 + ψ0(‖h_n − x0‖, ‖y_n − x0‖))² ≤ |a0| q_n p_n + |3 − 2a0| q_n² p_n (1 + ψ0(f5(β_n), β_n)) + |a0 − 2| q_n³ p_n (1 + ψ0(f5(β_n), β_n))² = γ_n − β_n,
‖z_n − x0‖ ≤ ‖z_n − y_n‖ + ‖y_n − x0‖ ≤ γ_n − β_n + β_n − α0 = γ_n < ρ8,
‖x_{n+1} − z_n‖ ≤ |a1| q_n d_n + |a2| q_n² d_n (1 + ψ0(f6(γ_n), γ_n)) + |a3| q_n³ d_n (1 + ψ0(f6(γ_n), γ_n))² + |a4| q_n⁴ d_n (1 + ψ0(f6(γ_n), γ_n))³ + |a5| q_n⁵ d_n (1 + ψ0(f6(γ_n), γ_n))⁴ = α_{n+1} − γ_n,
‖x_{n+1} − x0‖ ≤ ‖x_{n+1} − z_n‖ + ‖z_n − x0‖ ≤ α_{n+1} − γ_n + γ_n − α0 = α_{n+1} < ρ8,
F(x_{n+1}) = F(x_{n+1}) − F(x_n) − A_n(y_n − x_n) = (F(x_{n+1}) − F(x_n) − A_n(x_{n+1} − x_n)) + A_n(x_{n+1} − y_n) = ([x_{n+1}, x_n; F] − A_n)(x_{n+1} − x_n) + A_n(x_{n+1} − y_n),
‖L⁻¹F(x_{n+1})‖ ≤ ψ(‖x_n − x0‖, ‖x_{n+1} − x0‖, ‖w_n − x0‖)‖x_{n+1} − x_n‖ + (1 + ψ0(‖w_n − x0‖, ‖x_n − x0‖))‖x_{n+1} − y_n‖ ≤ ψ(α_n, α_{n+1}, f4(α_n))(α_{n+1} − α_n) + (1 + ψ0(f4(α_n), α_n))(α_{n+1} − β_n) = δ_{n+1}
‖y_{n+1} − x_{n+1}‖ ≤ ‖A_{n+1}⁻¹L‖ ‖L⁻¹F(x_{n+1})‖ ≤ δ_{n+1} / (1 − ψ0(‖w_{n+1} − x0‖, ‖x_{n+1} − x0‖)) ≤ δ_{n+1} / (1 − ψ0(f4(α_{n+1}), α_{n+1})) = β_{n+1} − α_{n+1},
‖y_{n+1} − x0‖ ≤ ‖y_{n+1} − x_{n+1}‖ + ‖x_{n+1} − x0‖ ≤ β_{n+1} − α_{n+1} + α_{n+1} − α0 = β_{n+1} < ρ8.
Thus, {x_n} ⊂ S(x0, ρ8) and {x_n} is a Cauchy sequence in the Banach space E. Hence, x* ∈ S[x0, ρ8] exists, such that lim_{n→∞} x_n = x*.
By letting n → ∞ in (16), we obtain F(x*) = 0. Notice that from the estimate
‖x_{n+j} − x_n‖ ≤ α_{n+j} − α_n.
Letting j → ∞ in (17), we obtain
‖x* − x_n‖ ≤ ρ8 − α_n.
Therefore, we arrive at the semi-local result for Algorithm 1:
Theorem 2. 
Suppose that conditions (E1)–(E5) hold. Then, the following assertions hold
{x_n} ⊂ S(x0, ρ8),
‖y_n − x_n‖ ≤ β_n − α_n,
‖x_{n+1} − z_n‖ ≤ α_{n+1} − γ_n
and x* ∈ S[x0, ρ8] exists, solving the equation F(x) = 0.
The uniqueness of the solution for the equation F(x) = 0 is specified next.
Proposition 2. 
Suppose: A solution x̃ ∈ S(x0, ρ9) of the equation F(x) = 0 exists for some ρ9 > 0; condition (E3) holds on the ball S(x0, ρ9); and ρ10 ≥ ρ9 exists, such that
ψ 0 ( ρ 9 , ρ 10 ) < 1 .
Define the region D2 = D ∩ S[x0, ρ10]. Then, the equation F(x) = 0 is uniquely solvable by x̃ in the region D2.
Proof. 
Let ỹ ∈ D2 with F(ỹ) = 0 and ỹ ≠ x̃. Then, the divided difference Q = [ỹ, x̃; F] is well defined. It then follows from (18) that
‖L⁻¹(Q − L)‖ ≤ ψ0(‖ỹ − x0‖, ‖x̃ − x0‖) ≤ ψ0(ρ9, ρ10) < 1.
Therefore, we deduce  y ˜ = x ˜ .     □
Remark 2. 
 
(i)
The limit point  ρ 8  can be replaced by  ρ 6  in (E5) (provided in the condition (E1)).
(ii)
Suppose that all conditions (E1)–(E5) hold in Proposition 2. Then, set ρ9 = ρ8 and x̃ = x*.
(iii)
Functions  f i , i = 4 , 5 , 6  can be specified as in the local case by the following estimates:
w − x0 = [(I + b0L) + b0L L⁻¹([x, x0; F] − L)](x − x0) + b0F(x0),
so that
‖w − x0‖ ≤ (‖I + b0L‖ + |b0| ‖L‖ ψ1(‖x − x0‖))‖x − x0‖ + |b0| ‖F(x0)‖.
Hence, we can define
f4(α_n) = (‖I + b0L‖ + |b0| ‖L‖ ψ1(α_n)) α_n + |b0| ‖F(x0)‖.
Similarly, we choose
f5(β_n) = (‖I + b1L‖ + |b1| ‖L‖ ψ1(β_n)) β_n + |b1| ‖F(x0)‖
and
f6(γ_n) = (‖I + b2L‖ + |b2| ‖L‖ ψ1(γ_n)) γ_n + |b2| ‖F(x0)‖.
(iv)
A possible choice for L may be  L = A 0 , provided that the operator  A 0  is invertible or  L = I .  Other choices are possible, as long as conditions (E1)–(E4) are validated.

4. Numerical Examples

In this Section, we chose a0 = 3, a5 = 0, b0 = −1, b1 = 1, and b2 = −1 for all of the examples to obtain the specialization of the algorithm defined by Algorithm 2.
Algorithm 2
Step 1: Given x_0 ∈ D, solve A_n u_1 = F(x_n) for u_1.
Step 2: Set y_n = x_n − u_1.
Step 3: Solve  A n u 2 = F ( y n ) ,  for  u 2 .  
Step 4: Solve  A n u 3 = G n u 2 ,  for  u 3 .  
Step 5: Solve  A n u 4 = G n u 3 ,  for  u 4 .  
Step 6: Set z_n = y_n − 3u_2 + 3u_3 − u_4.
Step 7: Solve  A n u 5 = F ( z n ) ,  for  u 5 .  
Step 8: Solve A_n u_6 = Q_n u_5 for u_6.
Step 9: Solve A_n u_7 = Q_n u_6 for u_7.
Step 10: Solve  A n u 8 = Q n u 7 ,  for  u 8 .  
Step 11: Set x_{n+1} = z_n − 4u_5 + 6u_6 − 4u_7 + u_8.
Step 12: If  x n + 1 = x n ,  STOP. Otherwise, repeat the process with  n n + 1 .  
Here, w_n = x_n − F(x_n), h_n = y_n + F(y_n), l_n = z_n − F(z_n), A_n = [x_n, w_n; F], G_n = [h_n, y_n; F] and Q_n = [l_n, z_n; F].
Also, we considered the choice of the divided difference [x, y; F] = ∫₀¹ F′(x + θ(y − x)) dθ and L = F′(x*).
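The integral-mean divided difference just mentioned requires F′. For nondifferentiable mappings, a classical derivative-free alternative is the coordinate-wise (standard) divided difference as in, e.g., [18]. The sketch below is our own illustration with an arbitrary smooth test map; it checks the defining secant property [x, y; F](y − x) = F(y) − F(x).

```python
import math

def dd_matrix(F, x, y):
    # coordinate-wise first-order divided difference [x, y; F] in R^n:
    # column j compares F at points mixing the first j components of y with the rest of x
    n = len(x)
    A = [[0.0] * n for _ in range(n)]
    for j in range(n):
        zj = list(y[:j + 1]) + list(x[j + 1:])   # (y_1..y_j, x_{j+1}..x_n)
        zp = list(y[:j]) + list(x[j:])           # (y_1..y_{j-1}, x_j..x_n)
        Fj, Fp = F(zj), F(zp)
        for i in range(n):
            A[i][j] = (Fj[i] - Fp[i]) / (y[j] - x[j])
    return A

# arbitrary smooth test map (our choice, only for the check)
def F(v):
    return [v[0] ** 2 + v[1], math.sin(v[0]) - v[1] ** 3]

x, y = [1.0, 2.0], [0.5, 1.5]
A = dd_matrix(F, x, y)
lhs = [sum(A[i][j] * (y[j] - x[j]) for j in range(2)) for i in range(2)]
rhs = [F(y)[i] - F(x)[i] for i in range(2)]
print(lhs, rhs)  # the two vectors agree up to rounding
```

The telescoping construction makes the secant property hold exactly, which is the defining requirement of a divided difference of order one.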
In Example 1, we provide the choice of the operator L as well as the functions φ0, φ and φ1 to validate the local convergence conditions (H1)–(H5). Notice that the functions f1, f2 and f3 are chosen as in Remark 1 (ii). There is no need to choose the operator L in the rest of the examples, as the convergence of the aforementioned algorithm is established there numerically (semi-local convergence). The stopping criterion is ‖x_n − x_{n−1}‖ < ϵ, where ϵ is the desired error tolerance.
Example 1. 
Let M = ℝ × ℝ × ℝ and Ω = S[x*, 1]. The mapping F is defined on Ω for a = (a_1, a_2, a_3)^T ∈ M as
F(a) = (a_1, e^{a_2} − 1, ((e − 1)/2)a_3² + a_3)^T.
Then, the derivative F′ is calculated to be
F′(a) = diag(1, e^{a_2}, (e − 1)a_3 + 1).
Then, x* = (0, 0, 0)^T solves the equation F(a) = 0. Moreover, the definition of F′ provides F′(x*) = I. Take L = I.
Then, conditions (H3)–(H5) are valid if we define the functions f_i as provided in Remark 1 and set
φ1(t) = ((e − 1)/2)t, f1(t) = f2(t) = f3(t) = φ1(t)t, φ0(s, t) = ((e − 1)/2)(t + f1(t)),
φ(s, t) = ((e − 1)/2) f1(t).
These choices of scalar functions validate the conditions of Theorem 1. This assures the convergence of the sequence  { x n }  to solution  x * .  
Then, from Formula (4), we deduce
ρ 1 = 0.59619959522338323554725671786752 ,
ρ 2 = 0.24652697702590073500524094190487
and
ρ 3 = ρ * = 0.10739579951416893362926171861238 .
Example 2. 
The solution of the following nonlinear system is sought:
3θ_1²θ_2 + θ_2² − 1 + |θ_1 − 1| = 0,
θ_1⁴ + θ_1θ_2³ − 1 + |θ_2| = 0.
Let  F = ( Q 1 , Q 2 )  for  ( θ 1 , θ 2 ) R × R , where
Q_1 = 3θ_1²θ_2 + θ_2² − 1 + |θ_1 − 1| and Q_2 = θ_1⁴ + θ_1θ_2³ − 1 + |θ_2|.
Then, the system becomes
F(s) = 0 for s = (θ_1, θ_2)^T.
The divided difference L = [·, ·; F] belongs to the space M_{2×2}(ℝ) and is the standard 2 × 2 matrix divided difference on ℝ² [11,18]. Let us choose x0 = (5, 5)^T. It turns out that the algorithm converges to the solution x*, as the initial guess x0 is close enough to it. Hence, there is no need to validate the conditions of Theorem 2, which are sufficient. The application of the algorithm then provides the solution θ* = (θ_1*, θ_2*)^T after three iterations, where
θ 1 * = 0.894655373334687
and
θ_2* = 0.327826421746298.
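As an illustration only, the nondifferentiable system of Example 2 can be solved with a coordinate-wise divided difference in place of any derivative. To be clear about the assumptions: this is not the authors' MATLAB code, and only the first, Steffensen-type substep of the algorithm is iterated (so more iterations are needed than with the eighth-order Algorithm 2); the offset b0, the starting point (1.2, 0.3), and the tolerances are our own choices.

```python
import math

def F(v):
    t1, t2 = v
    return [3.0 * t1 ** 2 * t2 + t2 ** 2 - 1.0 + abs(t1 - 1.0),
            t1 ** 4 + t1 * t2 ** 3 - 1.0 + abs(t2)]

def dd_matrix(F, x, w):
    # coordinate-wise first-order divided difference [x, w; F]
    n = len(x)
    A = [[0.0] * n for _ in range(n)]
    for j in range(n):
        zj = list(w[:j + 1]) + list(x[j + 1:])
        zp = list(w[:j]) + list(x[j:])
        Fj, Fp = F(zj), F(zp)
        for i in range(n):
            A[i][j] = (Fj[i] - Fp[i]) / (w[j] - x[j])
    return A

def solve(A, b):
    # Gaussian elimination with partial pivoting
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def derivative_free_newton(F, x0, b0=1e-6, tol=1e-10, max_iter=100):
    # repeated first substep: x <- x - [x, w; F]^{-1} F(x), w = x + b0 F(x);
    # a small floor keeps the divided-difference denominators away from zero
    x = list(x0)
    for _ in range(max_iter):
        fx = F(x)
        if max(abs(c) for c in fx) < tol:
            break
        w = [xi + (b0 * fi if abs(b0 * fi) > 1e-9 else 1e-9) for xi, fi in zip(x, fx)]
        d = solve(dd_matrix(F, x, w), fx)
        x = [xi - di for xi, di in zip(x, d)]
    return x

sol = derivative_free_newton(F, [1.2, 0.3])
print(sol)  # near (0.894655..., 0.327826...)
```

Despite the absolute-value terms, no derivative of F is ever requested, which is the point of the divided-difference formulation.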
Example 3. 
Consider the system of 100 equations defined by
F ( x ) = 0 ,
where
F(x)(i) = x_i² sin(x_{i+1}) − 1, if 1 ≤ i ≤ 99; F(x)(i) = x_i² sin(x_i) − 1, if i = 100.
The results obtained for the initial point  ( 2 , 2 , , 2 )  are provided in Table 1.
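A sketch in the same spirit for Example 3 (again ours: it iterates only the repeated first, Steffensen-type substep with illustrative parameters rather than the eighth-order Algorithm 2, so the iteration count will not match Table 1):

```python
import math

def F(x):
    # F(x)(i) = x_i^2 sin(x_{i+1}) - 1 for i < 100 and x_i^2 sin(x_i) - 1 for i = 100
    n = len(x)
    return [x[i] ** 2 * math.sin(x[i + 1] if i < n - 1 else x[i]) - 1.0
            for i in range(n)]

def dd_matrix(F, x, w):
    # coordinate-wise first-order divided difference [x, w; F]
    n = len(x)
    A = [[0.0] * n for _ in range(n)]
    for j in range(n):
        zj = list(w[:j + 1]) + list(x[j + 1:])
        zp = list(w[:j]) + list(x[j:])
        Fj, Fp = F(zj), F(zp)
        for i in range(n):
            A[i][j] = (Fj[i] - Fp[i]) / (w[j] - x[j])
    return A

def solve(A, b):
    # Gaussian elimination with partial pivoting
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def derivative_free_newton(F, x0, b0=1e-6, tol=1e-10, max_iter=100):
    # repeated first substep: x <- x - [x, w; F]^{-1} F(x), w = x + b0 F(x)
    x = list(x0)
    for _ in range(max_iter):
        fx = F(x)
        if max(abs(c) for c in fx) < tol:
            break
        w = [xi + (b0 * fi if abs(b0 * fi) > 1e-9 else 1e-9) for xi, fi in zip(x, fx)]
        d = solve(dd_matrix(F, x, w), fx)
        x = [xi - di for xi, di in zip(x, d)]
    return x

sol = derivative_free_newton(F, [2.0] * 100)
print(sol[0])  # each component approaches 1.068223...
```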
Example 4. 
In this example, we consider a system of five equations defined by
F ( x ) = 0 ,
where
F(x)(i) = ∑_{j=1, j≠i}^{5} x_j − e^{−x_i}.
The results obtained for the initial point (1, 1, 1, 1, 1) are provided in Table 2.
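For completeness, the same illustrative sketch applied to Example 4 (our code, again using only the repeated first, Steffensen-type substep and our own parameter choices, so it is slower than the eighth-order Algorithm 2 reported in Table 2):

```python
import math

def F(x):
    # F(x)(i) = sum_{j != i} x_j - exp(-x_i)
    s = sum(x)
    return [s - x[i] - math.exp(-x[i]) for i in range(len(x))]

def dd_matrix(F, x, w):
    # coordinate-wise first-order divided difference [x, w; F]
    n = len(x)
    A = [[0.0] * n for _ in range(n)]
    for j in range(n):
        zj = list(w[:j + 1]) + list(x[j + 1:])
        zp = list(w[:j]) + list(x[j:])
        Fj, Fp = F(zj), F(zp)
        for i in range(n):
            A[i][j] = (Fj[i] - Fp[i]) / (w[j] - x[j])
    return A

def solve(A, b):
    # Gaussian elimination with partial pivoting
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def derivative_free_newton(F, x0, b0=1e-6, tol=1e-10, max_iter=100):
    # repeated first substep: x <- x - [x, w; F]^{-1} F(x), w = x + b0 F(x)
    x = list(x0)
    for _ in range(max_iter):
        fx = F(x)
        if max(abs(c) for c in fx) < tol:
            break
        w = [xi + (b0 * fi if abs(b0 * fi) > 1e-9 else 1e-9) for xi, fi in zip(x, fx)]
        d = solve(dd_matrix(F, x, w), fx)
        x = [xi - di for xi, di in zip(x, d)]
    return x

sol = derivative_free_newton(F, [1.0] * 5)
print(sol)  # each component approaches 0.203888...
```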
We used a 4-core 64-bit Windows machine with 11th Gen Intel(R) Core (TM) i5-1135G7 CPU @ 2.40 GHz for all our computations using MATLAB R2023b.

5. Conclusions

A three-step, eighth-order algorithm free of derivatives of the operator was studied in this paper using assumptions only on the first-order divided difference of the operator. Earlier studies based on Taylor series expansions made assumptions up to the ninth-order derivative, which does not appear in the algorithm [10].
We provided sufficient convergence conditions involving only the operators appearing in the algorithm, computable upper bounds on the error ‖x_n − x*‖, and uniqueness-of-solution results. It is worth noticing that the methodology of this study does not depend on the convergence order of the iterative algorithm, as the convergence conditions make no use of it. Moreover, the assumption that the solution is simple is neither made nor implied by the convergence conditions. Thus, when the convergence conditions are satisfied, the algorithm also finds solutions of multiplicity greater than one. The approach in this paper can be applied to other algorithms with inverses to obtain the same benefits [4,20,21,22]. This will be the focus of our future research.

Author Contributions

Conceptualization, S.R., I.K.A. and S.G.; algorithm study, S.R., I.K.A. and S.G.; software, S.R., I.K.A. and S.G.; validation, S.R., I.K.A. and S.G.; formal analysis, S.R., I.K.A. and S.G.; investigation, S.R., I.K.A. and S.G.; resources, S.R., I.K.A. and S.G.; data curation, S.R., I.K.A. and S.G.; writing—original draft preparation, S.R., I.K.A. and S.G.; writing—review and editing, S.R., I.K.A. and S.G.; visualization, S.R., I.K.A. and S.G.; supervision, S.R., I.K.A. and S.G.; project administration, S.R., I.K.A. and S.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

References

  1. Argyros, I.K.; George, S. Ball comparison between two optimal eight-order Algorithm under weak conditions. SeMA J. 2015, 72, 1–11. [Google Scholar] [CrossRef]
  2. Argyros, I.K.; George, S. Local convergence of two competing third order Algorithm in Banach space. Appl. Math. 2014, 4, 341–350. [Google Scholar]
  3. Argyros, C.I.; Regmi, S.; Argyros, I.K.; George, S. Contemporary Algorithms: Theory and Applications; NOVA Publishers: Hauppauge, NY, USA, 2022; Volume II. [Google Scholar]
  4. Wang, X.; Zhang, T.; Qian, W.; Teng, M. Seventh-order derivative-free iterative Algorithm for solving nonlinear systems. Numer. Algorithms 2015, 70, 545–558. [Google Scholar] [CrossRef]
  5. Ahmad, F.; Tohidi, E.; Ullah, M.Z.; Carrasco, J.A. Higher order multi-step Jarratt-like Algorithm for solving systems of nonlinear equations: Application to PDEs and ODEs. Comput. Math. Appl. 2015, 70, 624–636. [Google Scholar]
  6. Alharbey, R.A.; Kansal, M.; Behl, R.; Machado, J.A.T. Efficient Three-Step Class of Eighth-Order Multiple Root Solvers and Their Dynamics. Symmetry 2019, 11, 837. [Google Scholar] [CrossRef]
  7. Behl, R.; Cordero, A.; Motsa, S.S.; Torregrosa, J.R. On developing fourth-order optimal families of Algorithm for multiple roots and their dynamics. Appl. Math. Comput. 2015, 265, 520–532. [Google Scholar] [CrossRef]
  8. Budzkoa, D.A.; Cordero, A.; Torregrosa, J.R. New family of iterative Algorithm based on the Ermakov–Kalitkin Algorithm for solving nonlinear systems of equations. Comput. Math. Math. Phys. 2015, 55, 1947–1959. [Google Scholar] [CrossRef]
  9. Cordero, A.; Soleymani, F.; Torregro, J.R. Dynamical analysis of iterative Algorithm for nonlinear systems or how to deal with the dimension? Appl. Math. Comput. 2014, 244, 398–412. [Google Scholar] [CrossRef]
  10. Ahmad, F.; Soleymani, F.; Haghani, F.K.; Serra-Capizzano, S. Higher order derivative-free iterative Algorithm with and without memory for systems of nonlinear equations. Appl. Math. Comput. 2017, 314, 199–211. [Google Scholar] [CrossRef]
  11. Potra, F.-A. A characterisation of the divided differences of an operator which can be represented by Riemann integrals. Math.-Rev. Anal. Numér. Thérie Approx. Anal. Numér. Théor. Approx. 1980, 2, 251–253. [Google Scholar]
  12. Shakhno, S.M.; Iakymchuk, R.P.; Yarmola, H.P. Convergence analysis of a two step Algorithm for the nonlinear least squares problem with decomposition of operator. J. Numer. Appl. Math. 2018, 128, 82–95. [Google Scholar]
  13. Shakhno, S.M.; Gnatyshyn, O.P. On an iterative algorithm of order 1.839… for solving nonlinear operator equations. Appl. Math. Appl. 2005, 161, 253–264. [Google Scholar] [CrossRef]
  14. Grau-Sánchez, M.; Noguera, M.; Diaz-Barrero, J.L. Note on the efficiency of some iterative Algorithm for solving nonlinear equations. SeMA J. 2015, 71, 15–22. [Google Scholar] [CrossRef]
  15. Magreñán, A.A. Different anomalies in a Jarratt family of iterative root finding Algorithm. Appl. Math. Comput. 2014, 233, 29–38. [Google Scholar] [CrossRef]
  16. Magreñán, A.A. A new tool to study real dynamics: The convergence plane. Appl. Math. Comput. 2014, 248, 29–38. [Google Scholar] [CrossRef]
  17. Montazeri, H.; Soleymani, F.; Shateyi, S.; Motsa, S. On a new Algorithm for computing the numerical solution of systems of nonlinear equations. J. Appl. Math. 2012, 15, 751975. [Google Scholar]
  18. Ortega, J.M.; Rheinbolt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  19. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Upper Saddle River, NJ, USA, 1964. [Google Scholar]
  20. Sharma, J.R.; Arora, H.; Petković, M.S. An efficient derivative free family of fourth order Algorithm for solving systems of nonlinear equations. Appl. Math. Comput. 2014, 235, 383–393. [Google Scholar] [CrossRef]
  21. Sharma, J.R.; Arora, H. Efficient derivative-free numerical Algorithm for solving systems of nonlinear equations. Comput. Appl. Math. 2016, 35, 269–284. [Google Scholar] [CrossRef]
  22. Singh, R.; Panday, S. Efficient optimal eighth order Algorithm for solving nonlinear equations. AIP Conf. Proc. 2023, 2728, 030013. [Google Scholar] [CrossRef]
Table 1. Iterated solutions of Example 3.

Iteration n | Solution x_i | |F(x)(i)| | Time (s)
1 | 0.52465745776846004734532218993115 | 0.86211497725137 | 19.196941
2 | 1.0666417888794666022247900197049 | 0.003827358620836 | 19.437201
3 | 1.0682235441972490182834127193622 | 1.110223024625157 × 10⁻¹⁶ | 28.369249
4 | 1.0682235441972490182834711142631 | 1.110223024625157 × 10⁻¹⁶ | 37.619249
Table 2. Iterated solutions of Example 4.

Iteration n | Solution x_i | |F(x)(i)| | Time (s)
1 | 0.20391080591998655968666298576863 | 1.081148328339054 × 10⁻⁴ | 0.002347
2 | 0.20388835470224017654139458954887 | 1.110223024625157 × 10⁻¹⁶ | 0.002501
3 | 0.20388835470224017654139458954887 | 1.110223024625157 × 10⁻¹⁶ | 0.002839

Regmi, S.; Argyros, I.K.; George, S. Convergence of High-Order Derivative-Free Algorithms for the Iterative Solution of Systems of Not Necessarily Differentiable Equations. Mathematics 2024, 12, 723. https://doi.org/10.3390/math12050723
