Article

Enhancing Equation Solving: Extending the Applicability of Steffensen-Type Methods

by Ramandeep Behl ¹,*, Ioannis K. Argyros ² and Monairah Alansari ¹
¹ Mathematical Modelling and Applied Computation Research Group (MMAC), Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
² Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(21), 4551; https://doi.org/10.3390/math11214551
Submission received: 12 September 2023 / Revised: 31 October 2023 / Accepted: 1 November 2023 / Published: 5 November 2023

Abstract

Local convergence analysis is mostly carried out using the Taylor series expansion approach, which requires high-order derivatives that do not appear in the iterative methods themselves. There are other limitations to this approach, such as the following: the analysis is restricted to finite-dimensional Euclidean spaces, and no a priori computable error bounds on the distances involved or uniqueness of the solution results are provided. The local convergence analysis in this paper positively addresses these concerns in the more general setting of a Banach space. The convergence conditions involve only the operators appearing in the methods. The more important semi-local convergence analysis, not studied before, is developed by using majorizing sequences. Both types of convergence analyses are based on the concept of generalized continuity. Although we study a certain class of methods, the same approach applies to extend the applicability of other schemes along the same lines.
MSC:
65G99; 47H17; 49M15

1. Introduction

Iterative methods are a powerful tool in numerical analysis used to solve nonlinear equations. Nonlinear equations often arise in a variety of fields, including physics, engineering, economics, and finance, and are notoriously difficult to solve analytically. Such problems can be transformed into a form like
$$G(x) = 0, \qquad (1)$$
where $G : D \subseteq E \to E$ and $E$ is a Banach space.
Iterative methods are numerical techniques that start with an initial guess and iteratively refine the solution until a desired level of accuracy is achieved. This makes them particularly useful for solving complex nonlinear equations that cannot be easily solved using traditional analytical methods. The usage of iterative methods for solving nonlinear equations has revolutionized many areas of science and engineering, and continues to be an important research topic in the field of numerical analysis.
The Newton–Raphson method, one of the most significant iterative techniques, is defined as
$$x_{\sigma+1} = x_\sigma - G'(x_\sigma)^{-1} G(x_\sigma).$$
Newton's method is a classic and widely used iterative algorithm for finding solutions of nonlinear systems. However, it has some limitations. For example, it can fail to converge if the initial guess is too far from the required solution, and if the Fréchet derivative of the operator $G$ is singular or nearly so, the inverse of $G'$ is difficult or impossible to obtain. To address these issues, various extensions of Newton's method have been developed over time, such as Steffensen's method and higher-order versions of Steffensen's method. These extensions improve the convergence rate and robustness of Newton's method, making it effective for a wider range of problems. In this context, researchers continue to explore new ways to extend Newton's method and other numerical methods, further expanding the range of applications where they can be used effectively.
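To make the contrast concrete, here is a minimal scalar sketch in Python comparing Newton's step with Steffensen's derivative-free step; the test equation $g(t) = e^t - 2 = 0$ and the starting point are illustrative choices of ours, not taken from the paper.

```python
import math

# Illustrative scalar equation g(t) = exp(t) - 2 = 0, root t = ln 2.
g = lambda t: math.exp(t) - 2.0
dg = lambda t: math.exp(t)            # derivative, required by Newton only

def newton(t, steps=6):
    for _ in range(steps):
        t -= g(t) / dg(t)             # x_{s+1} = x_s - G'(x_s)^{-1} G(x_s)
    return t

def steffensen(t, steps=6):
    for _ in range(steps):
        u = t + g(t)                  # auxiliary point u = x + G(x)
        dd = (g(u) - g(t)) / (u - t)  # divided difference [u, x; G]
        t -= g(t) / dd                # derivative replaced by [u, x; G]
    return t

print(newton(1.0), steffensen(1.0))   # both tend to ln 2 = 0.6931...
```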
One of these fourth-order convergent methods, presented by Singh [1], is defined by
$$u_\sigma = x_\sigma + G(x_\sigma), \quad A_\sigma = [u_\sigma, x_\sigma; G], \quad y_\sigma = x_\sigma - A_\sigma^{-1} G(x_\sigma), \quad x_{\sigma+1} = T(x_\sigma), \qquad (2)$$
where $[\cdot\,, \cdot\,; G] : D \times D \to \mathcal{L}(E)$ is a divided difference of order one, $\mathcal{L}(E)$ denotes the space of bounded linear operators from $E$ into $E$, and $T$ is any iteration function of convergence order four.
Some of the special cases of scheme (2) are given below:
Special case 1:
$$B_\sigma = [y_\sigma, x_\sigma; G] + [u_\sigma, y_\sigma; G] - [u_\sigma, x_\sigma; G], \quad T(x_\sigma) = y_\sigma - B_\sigma^{-1} G(y_\sigma).$$
Method (2) becomes
$$y_\sigma = x_\sigma - A_\sigma^{-1} G(x_\sigma), \quad x_{\sigma+1} = y_\sigma - B_\sigma^{-1} G(y_\sigma). \qquad (3)$$
Method (3), also studied by Wang et al. [2], is the multidimensional extension of the scalar method proposed by Ren et al. [3].
Special case 2:
$$T(x_\sigma) = y_\sigma - \big(3I - A_\sigma^{-1}([y_\sigma, x_\sigma; G] + [u_\sigma, y_\sigma; G])\big) A_\sigma^{-1} G(y_\sigma).$$
Then, method (2) is reduced to
$$y_\sigma = x_\sigma - A_\sigma^{-1} G(x_\sigma), \quad x_{\sigma+1} = y_\sigma - \big(3I - A_\sigma^{-1}([y_\sigma, x_\sigma; G] + [u_\sigma, y_\sigma; G])\big) A_\sigma^{-1} G(y_\sigma). \qquad (4)$$
Scheme (4) was presented by Sharma and Arora in [4].
Special case 3:
$$T(x_\sigma) = y_\sigma - [y_\sigma, x_\sigma; G]^{-1} [u_\sigma, x_\sigma; G]\, [u_\sigma, y_\sigma; G]^{-1} G(y_\sigma).$$
It follows by this choice that method (2) specializes to
$$y_\sigma = x_\sigma - A_\sigma^{-1} G(x_\sigma), \quad x_{\sigma+1} = y_\sigma - [y_\sigma, x_\sigma; G]^{-1} [u_\sigma, x_\sigma; G]\, [u_\sigma, y_\sigma; G]^{-1} G(y_\sigma). \qquad (5)$$
Scheme (5) was proposed by Cordero et al. [5]. Some other important cases are mentioned by Singh [1].
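All of these special cases are derivative-free once a divided difference is available. As an illustration (ours, not taken from the paper), the following Python sketch implements special case (3) for systems in $\mathbb{R}^n$, using the componentwise first-order divided difference that is standard in the cited literature (e.g., Ortega and Rheinboldt [8]); the zero-denominator guard is a pragmatic addition of our own. It is reused in the numerical examples of Section 5.

```python
import numpy as np

def divided_difference(G, y, x):
    """First-order divided difference [y, x; G] as an n x n matrix,
    built columnwise from componentwise differences of G."""
    n = x.size
    A = np.empty((n, n))
    for j in range(n):
        z_hi = np.concatenate((y[:j + 1], x[j + 1:]))  # (y_1,...,y_j, x_{j+1},...,x_n)
        z_lo = np.concatenate((y[:j], x[j:]))          # (y_1,...,y_{j-1}, x_j,...,x_n)
        h = y[j] - x[j]
        if h == 0.0:                 # coincident points: fall back to a tiny
            h = 1e-12                # forward step (a pragmatic choice)
            z_hi = z_lo.copy()
            z_hi[j] += h
        A[:, j] = (G(z_hi) - G(z_lo)) / h
    return A

def solver3(G, x, tol=1e-12, max_iter=100):
    """Steffensen-type method (3): derivative-free, fourth order."""
    for k in range(max_iter):
        Gx = G(x)
        if np.linalg.norm(Gx) < tol:
            return x, k
        u = x + Gx                                   # u_s = x_s + G(x_s)
        A = divided_difference(G, u, x)              # A_s = [u_s, x_s; G]
        y = x - np.linalg.solve(A, Gx)               # first substep
        B = (divided_difference(G, y, x)
             + divided_difference(G, u, y) - A)      # B_s of special case (3)
        x_new = y - np.linalg.solve(B, G(y))         # second substep
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter
```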
There are certain limitations with earlier works using the Taylor series expansion approach. Below is a list.
(L1)
The local convergence analysis is carried out only in the case when $E = \mathbb{R}^k$, where $k$ is a natural number.
(L2)
There are no computable error bounds on the distances $\|x^* - x_\sigma\|$. Therefore, we do not know a priori how many iterations must be carried out to reach a certain pre-decided error tolerance.
(L3)
No uniqueness-of-the-solution results are provided.
(L4)
The existence is assumed of derivatives that are not present in the method. As an example for method (2), consider $D = [-0.5, 1.5]$ and the function $G : D \to \mathbb{R}$ defined by
$$G(t) = \begin{cases} t^4 \log t + 2t^5 - 2t^4, & t \neq 0, \\ 0, & t = 0. \end{cases}$$
Clearly, $t^* = 1 \in D$ solves the equation $G(t) = 0$. But the fourth derivative $G^{(4)}(t) = 24 \log t + 240 t + 2$ is unbounded on $D$, since it is not continuous at $t = 0 \in D$. Therefore, the results in [1] cannot assure the convergence of method (2) to $t^*$. However, method (2) converges to $t^*$ using the starting point $x_0 = 1.2 \in D$.
(L5)
The choice of the initial point is a “shot in the dark”, since no computable radius of convergence is provided.
(L6)
The more important semi-local convergence is not provided in [1] for method (2) or special cases (3)–(5).
In this paper, we address these limitations.
(L1)′
The convergence analysis is carried out in the setting of a Banach space.
(L2)′
A priori computable upper error bounds on the distances are provided. Hence, we know in advance the number of iterations to be carried out in order to achieve a desired error tolerance.
(L3)′
A neighborhood is specified that contains only one solution.
(L4)′
The convergence is established using only the operators in method (2).
(L5)′
The radius of convergence is determined. Hence, if we choose an initial point from the ball with this radius, the convergence is assured.
(L6)′
The semi-local convergence is developed by utilizing majorizing sequences.
It is worth noting that the convergence conditions are based on the concept of generalized continuity (see e.g., conditions  ( H 4 ) ( H 6 ) ). Our approach can be used to extend the applicability of other methods along the same lines.

2. Convergence Analysis I: Local

Let $B = [0, +\infty)$.
Assume:
(H1)
There exist functions $\Delta : B \to B$ and $\phi_0 : B \times B \to B$ which are increasing as well as continuous (IC) such that the equation $\phi_0(t, \Delta(t)) - 1 = 0$ admits a smallest positive solution (sps), denoted by $\rho$. Let $B_0 = [0, \rho)$.
(H2)
There exists an IC function $\phi : B_0 \times B_0 \to B$ such that the equation $h_1(t) - 1 = 0$ has a sps $s_1 \in B_0 - \{0\}$, where $h_1 : B_0 \to B$ is defined by
$$h_1(t) = \frac{\phi(t, \Delta(t))}{1 - \phi_0(t, \Delta(t))}.$$
(H3)
There exists an IC function $h_2 : B_1 \subseteq B_0 \to B$ such that the equation $h_2(t) - 1 = 0$ admits a sps $s_2 \in B_1 - \{0\}$. The set $B_1$ and the function $h_2$ are developed later. Define the parameter
$$s = \min\{s_1, s_2\}. \qquad (6)$$
The functions  ϕ 0  and  ϕ  are associated with the data in method (2) as follows:
(H4)
There exists an operator $P$ such that $P^{-1} \in \mathcal{L}(E)$ and, for each $x \in D$ with $u = x + G(x)$,
$$\|P^{-1}([u, x; G] - P)\| \le \phi_0(\|x - x^*\|, \|u - x^*\|) \quad \text{and} \quad \|u - x^*\| \le \Delta(\|x - x^*\|).$$
Let $D_0 = D \cap U(x^*, \rho)$.
(H5)
$$\|P^{-1}([u, x; G] - [x, x^*; G])\| \le \phi(\|x - x^*\|, \|u - x^*\|).$$
(H6)
$\|T(x) - x^*\| \le h_2(\|x - x^*\|)\,\|x - x^*\|$ for each $x \in D_0$.
(H7)
$U[x^*, s^*] \subseteq D$, where $s^* = \max\{s, \Delta(s^*)\}$.
(H8)
There exists an IC function $v : B \to B$ such that, for each $x \in D$,
$$\|P^{-1}([x, x^*; G] - P)\| \le v(\|x - x^*\|).$$
(H9)
There exists $s_3 \ge s$ such that $v(s_3) < 1$.
Let $D_1 = D \cap U[x^*, s_3]$ and $B_2 = [0, s)$. It follows by these definitions that, for each $t \in B_2$,
$$0 \le \phi_0(t, \Delta(t)) < 1, \qquad (7)$$
$$0 \le h_1(t) < 1 \qquad (8)$$
and
$$0 \le h_2(t) < 1. \qquad (9)$$
Next, the local convergence analysis of method (2) uses the conditions  ( H 1 ) ( H 9 )  in combination with the preceding notation.
Theorem 1.
Assume the conditions $(H_1)$–$(H_9)$ are satisfied. Then, the following assertions hold provided that $x_0 \in U(x^*, s) - \{x^*\}$:
$$\{x_\sigma\} \subset U(x^*, s), \qquad (10)$$
$$\|y_\sigma - x^*\| \le h_1(\|x_\sigma - x^*\|)\,\|x_\sigma - x^*\| \le \|x_\sigma - x^*\| < s, \qquad (11)$$
$$\|x_{\sigma+1} - x^*\| \le h_2(\|x_\sigma - x^*\|)\,\|x_\sigma - x^*\| \le \|x_\sigma - x^*\| \qquad (12)$$
and
$$\lim_{\sigma \to \infty} x_\sigma = x^*. \qquad (13)$$
Moreover, the point  x *  is the only solution of the equation  G ( x ) = 0  in the set  D 1 .
Proof. 
The assertions (10)–(12) are validated using induction. By the conditions $(H_1)$, $(H_3)$, (6), and (7), and the choice of the starting point $x_0$, we have
$$\|P^{-1}(A_0 - P)\| \le \phi_0(\|x_0 - x^*\|, \|u_0 - x^*\|) \le \phi_0(s, \Delta(s)) < 1, \qquad (14)$$
which implies $A_0^{-1} \in \mathcal{L}(E)$ by the standard Banach perturbation lemma for linear operators [6,7,8] and
$$\|A_0^{-1} P\| \le \frac{1}{1 - \phi_0(\|x_0 - x^*\|, \Delta(\|x_0 - x^*\|))}. \qquad (15)$$
Thus, the iterate $y_0$ is well defined by the first substep of method (2) for $\sigma = 0$. Moreover, we can write in turn that
$$y_0 - x^* = x_0 - x^* - A_0^{-1} G(x_0) = A_0^{-1}(A_0 - [x_0, x^*; G])(x_0 - x^*). \qquad (16)$$
Employing (6), (8), $(H_4)$, (15), and (16), we obtain
$$\|y_0 - x^*\| \le \frac{\phi(\|x_0 - x^*\|, \|u_0 - x^*\|)\,\|x_0 - x^*\|}{1 - \phi_0(\|x_0 - x^*\|, \Delta(\|x_0 - x^*\|))} \le h_1(\|x_0 - x^*\|)\,\|x_0 - x^*\| \le \|x_0 - x^*\| < s. \qquad (17)$$
Hence, the iterate $y_0 \in U(x^*, s)$ and the assertion (11) holds for $\sigma = 0$. Notice that the iterate $x_1$ is well defined by the second substep of method (2). Then, the application of (6), (9), and $(H_6)$ implies
$$\|x_1 - x^*\| = \|T(x_0) - x^*\| \le h_2(\|x_0 - x^*\|)\,\|x_0 - x^*\| \le \|x_0 - x^*\|. \qquad (18)$$
So, the iterate $x_1 \in U(x^*, s)$, validating (10) and also (12) for $\sigma = 0$. The induction for the assertions (10)–(12) is terminated if $x_m, y_m, x_{m+1}$ replace, respectively, $x_0, y_0, x_1$ in the preceding calculations. Furthermore, the estimate
$$\|x_{m+1} - x^*\| \le d\,\|x_m - x^*\| \le \|x_m - x^*\|, \qquad (19)$$
for $d = h_2(\|x_0 - x^*\|) \in [0, 1)$, leads to $\lim_{m \to \infty} x_m = x^*$ and the iterate $x_{m+1} \in U(x^*, s)$. Therefore, the assertions (10) (for $\sigma = m + 1$) and (13) are satisfied. It is left to show the uniqueness of the solution in the set $D_1$. Let $y^* \in D_1$ be a solution of the equation $G(x) = 0$. Then, the conditions $(H_8)$ and $(H_9)$ give in turn that
$$\|P^{-1}([y^*, x^*; G] - P)\| \le v(\|y^* - x^*\|) \le v(s_3) < 1.$$
Thus, the linear operator $[y^*, x^*; G]^{-1} \in \mathcal{L}(E)$ exists and
$$y^* - x^* = [y^*, x^*; G]^{-1}(G(y^*) - G(x^*)) = 0, \qquad (20)$$
since $G(y^*) = G(x^*) = 0$. Hence, we conclude by identity (20) that $y^* = x^*$. □
Remark 1. The second condition in $(H_4)$ is left uncluttered. A possible choice for the function $\Delta$ is motivated by the calculations
$$u - x^* = x - x^* + G(x) = (I + [x, x^*; G])(x - x^*) = \big(I + P + P P^{-1}([x, x^*; G] - P)\big)(x - x^*);$$
thus,
$$\|u - x^*\| \le [1 + \|P\| + \|P\|\, v(\|x - x^*\|)]\,\|x - x^*\|.$$
Hence, we can set
$$\Delta(t) = (1 + \|P\| + \|P\|\, v(t))\, t.$$
Moreover, if $\Delta(s) \le s$ is not satisfied, then the condition $(H_7)$ can be replaced by
$(H_7)'$ $U[x^*, s^*] \subseteq D$, where $s^* = \max\{s, \Delta(s)\}$.
The function  h 2  can be determined further provided that the operator T is specialized (see Section 4).

3. Convergence II: Semi-Local

Majorizing sequences [6,8] are employed for this type of convergence. Let $a_0 = 0$ and $b_0 \ge 0$. Define the sequence $\{a_\sigma\}$ by
$$a_{\sigma+1} = b_\sigma + \xi_\sigma, \quad \gamma_{\sigma+1} = \psi(a_\sigma, a_{\sigma+1}, \Delta(a_\sigma))(a_{\sigma+1} - a_\sigma) + (1 + \psi_0(a_\sigma, \Delta(a_\sigma)))(a_{\sigma+1} - b_\sigma)$$
$$\text{and} \quad b_{\sigma+1} = a_{\sigma+1} + \frac{\gamma_{\sigma+1}}{1 - \psi_0(a_{\sigma+1}, \Delta(a_{\sigma+1}))}, \qquad (21)$$
where $\{\xi_\sigma\}$ is a sequence of non-negative parameters to be determined later and $\Delta : B \to B$, $\psi_0 : B \times B \to B$, $\psi : B \times B \times B \to B$ are given IC functions. A general convergence result is needed for the sequence $\{a_\sigma\}$.
Lemma 1.
Assume:
(C1)
$\psi_0(a_\sigma, \Delta(a_\sigma)) < 1$ and $a_\sigma \le \bar{a}$ for some $\bar{a} \ge b_0$.
Then, the following assertions hold
$$0 \le a_\sigma \le b_\sigma \le a_{\sigma+1} \le \bar{a}, \qquad (22)$$
and there exists $a^* \in [0, \bar{a}]$ such that
$$\lim_{\sigma \to \infty} a_\sigma = a^*. \qquad (23)$$
Proof. 
Formula (21) and condition $(C_1)$ imply assertion (22), from which (23) follows. The limit point $a^*$ is the (unique) least upper bound of the sequence $\{a_\sigma\}$. □
The sequence $\{a_\sigma\}$ is shown to be majorizing for $\{x_\sigma\}$ in Theorem 2. But, first, let us associate the functions $\psi_0$, $\psi$, and $\psi_1$ with the operators in method (2) as follows:
(C2)
There exists a linear operator $P$ such that $P^{-1} \in \mathcal{L}(E)$ and, for each $x \in D$ with $u = x + G(x)$,
$$\|P^{-1}([u, x; G] - P)\| \le \psi_0(\|x - x_0\|, \|u - x_0\|), \qquad \|u - x_0\| \le \Delta(\|x - x_0\|).$$
It follows by the first condition that there exists $x_0 \in D$ such that
$$\|P^{-1}([u_0, x_0; G] - P)\| \le \psi_0(\|x_0 - x_0\|, \Delta(\|x_0 - x_0\|)) = \psi_0(0, \Delta(0)) < 1;$$
thus, $A_0^{-1} \in \mathcal{L}(E)$. Set $\|A_0^{-1} G(x_0)\| \le b_0$, $B_3 = [0, \bar{a})$ and $D_3 = D \cap U(x_0, \bar{a})$.
(C3)
$\|P^{-1}([y, x; G] - [u, x; G])\| \le \psi(\|x - x_0\|, \|y - x_0\|, \|u - x_0\|)$ for each $x, y, u \in D_3$.
(C4)
There exists an IC function $\psi_1 : B \times B \to B$ such that
$$\|P^{-1}([x, y; G] - P)\| \le \psi_1(\|x - x_0\|, \|y - x_0\|).$$
(C5)
$\|T(x_\sigma) - y_\sigma\| \le a_{\sigma+1} - b_\sigma$, where
$$y_\sigma = x_\sigma - [x_\sigma + G(x_\sigma), x_\sigma; G]^{-1} G(x_\sigma).$$
(C6)
The equation $\psi_1(t, t) - 1 = 0$ has a smallest positive solution $s_1$ and there exists $s_2 \ge s_1$ such that $\psi_1(s_1, s_2) < 1$.
Let $D_4 = D \cap U[x_0, s_2]$.
(C7)
$U[x_0, a_1^*] \subseteq D$, where $a_1^* = \max\{a^*, \Delta(a^*)\}$.
Notice that $\bar{a} \ge 0$ can be chosen as the smallest positive solution of the equation
$$\psi_0(t, \Delta(t)) - 1 = 0$$
(if it exists).
The semi-local convergence analysis relies on conditions  ( C 1 ) ( C 7 )  under the developed terminology.
Theorem 2.
Assume that conditions $(C_1)$–$(C_7)$ are satisfied. Then, the following assertions hold:
$$\{x_\sigma\} \subset U(x_0, a^*), \qquad (24)$$
$$\|y_\sigma - x_\sigma\| \le b_\sigma - a_\sigma, \qquad (25)$$
$$\|x_{\sigma+1} - y_\sigma\| \le a_{\sigma+1} - b_\sigma \qquad (26)$$
and there exists a solution $x^*$ of the equation $G(x) = 0$ such that
$$x^* \in U[x_0, a^*], \quad \lim_{\sigma \to \infty} x_\sigma = x^*, \quad G(x^*) = 0 \qquad (27)$$
and
$$\|x^* - x_\sigma\| \le a^* - a_\sigma. \qquad (28)$$
Proof. 
As in the local case, induction is used to show assertions (24)–(26). Clearly, assertion (24) holds for $\sigma = 0$. The conditions in $(C_2)$ and Formula (21) imply the existence of the iterate $y_0$ and
$$\|y_0 - x_0\| = \|A_0^{-1} G(x_0)\| \le b_0 = b_0 - a_0 < a^*. \qquad (29)$$
Thus, iterate  y 0 U ( x 0 , a * )  and assertion (25) holds if  σ = 0 .
Condition $(C_5)$ gives
$$\|x_{\sigma+1} - y_\sigma\| = \|T(x_\sigma) - y_\sigma\| \le a_{\sigma+1} - b_\sigma$$
and
$$\|x_{\sigma+1} - x_0\| \le \|x_{\sigma+1} - y_\sigma\| + \|y_\sigma - x_0\| \le a_{\sigma+1} - b_\sigma + b_\sigma - a_0 = a_{\sigma+1} < a^*.$$
Hence, iterate  x σ + 1 U ( x 0 , a * )  and assertion (26) holds for  σ = 0 . Then, by the first substep of method (2), we can write:
$$G(x_{\sigma+1}) = G(x_{\sigma+1}) - G(x_\sigma) - A_\sigma(y_\sigma - x_\sigma) = G(x_{\sigma+1}) - G(x_\sigma) - A_\sigma(x_{\sigma+1} - x_\sigma) + A_\sigma(x_{\sigma+1} - x_\sigma) - A_\sigma(y_\sigma - x_\sigma)$$
$$= ([x_{\sigma+1}, x_\sigma; G] - A_\sigma)(x_{\sigma+1} - x_\sigma) + A_\sigma(x_{\sigma+1} - y_\sigma),$$
leading by  ( C 2 ) , ( C 3 ) , and the induction hypothesis to
$$\|P^{-1} G(x_{\sigma+1})\| \le \psi(\|x_\sigma - x_0\|, \|x_{\sigma+1} - x_0\|, \|u_\sigma - x_0\|)\,\|x_{\sigma+1} - x_\sigma\| + (1 + \psi_0(\|x_\sigma - x_0\|, \|u_\sigma - x_0\|))\,\|x_{\sigma+1} - y_\sigma\|$$
$$= \bar{\gamma}_{\sigma+1} \le \psi(a_\sigma, a_{\sigma+1}, \Delta(a_\sigma))(a_{\sigma+1} - a_\sigma) + (1 + \psi_0(a_\sigma, \Delta(a_\sigma)))(a_{\sigma+1} - b_\sigma) = \gamma_{\sigma+1}. \qquad (30)$$
Thus, we obtain by the first substep of method (2) for  ( σ + 1 )  replacing  σ  and condition  ( C 2 ) , (21), and (30)
$$\|y_{\sigma+1} - x_{\sigma+1}\| \le \|A_{\sigma+1}^{-1} P\|\,\|P^{-1} G(x_{\sigma+1})\| \le \frac{\bar{\gamma}_{\sigma+1}}{1 - \psi_0(\|x_{\sigma+1} - x_0\|, \|u_{\sigma+1} - x_0\|)} \le \frac{\gamma_{\sigma+1}}{1 - \psi_0(a_{\sigma+1}, \Delta(a_{\sigma+1}))} = b_{\sigma+1} - a_{\sigma+1} \qquad (31)$$
and
$$\|y_{\sigma+1} - x_0\| \le \|y_{\sigma+1} - x_{\sigma+1}\| + \|x_{\sigma+1} - x_0\| \le b_{\sigma+1} - a_{\sigma+1} + a_{\sigma+1} - a_0 = b_{\sigma+1} < a^*.$$
The induction for relations (24)–(26) is completed. By condition $(C_1)$, the sequence $\{a_\sigma\}$ converges to $a^*$; hence, it is also fundamental (i.e., a Cauchy sequence). Moreover, by (29) and (31), the sequence $\{a_\sigma\}$ is majorizing for $\{x_\sigma\}$. Therefore, the sequence $\{x_\sigma\}$ is also fundamental in the Banach space $E$ and, as such, convergent to some $x^* \in U[x_0, a^*]$ (since $U[x_0, a^*]$ is a closed set). Furthermore, by letting $\sigma \to \infty$ in (30) and using the continuity of the operator $G$, we obtain $G(x^*) = 0$. Finally, from the estimate
$$\|x_{\sigma+j} - x_\sigma\| \le a_{\sigma+j} - a_\sigma \qquad (32)$$
and by letting $j \to +\infty$, assertion (28) follows. Hence, assertions (27) and (28) are also satisfied. □
The isolation of a solution of the equation  G ( x ) = 0  is discussed in the following item.
Proposition 1.
Assume that there exists a solution $z_1 \in U(x_0, s_3)$ of the equation $G(x) = 0$ for some $s_3 > 0$; that condition $(C_2)$ holds in the ball $U(x_0, s_3)$; and that there exists $s_4 \ge s_3$ such that
$$\psi_0(s_3, s_4) < 1. \qquad (33)$$
Set $D_4 = D \cap U[x_0, s_4]$. Then, $z_1$ is the only solution of the equation $G(x) = 0$ in the set $D_4$.
Proof. 
Let $z_2 \in D_4$ be such that $G(z_2) = 0$ with $z_1 \neq z_2$. The divided difference $Q = [z_1 + G(z_1), z_2; G]$ is then well defined. It follows by condition $(C_2)$ and (33) that
$$\|P^{-1}(Q - P)\| \le \psi_0(\|z_1 - x_0\|, \|z_2 - x_0\|) \le \psi_0(s_3, s_4) < 1. \qquad (34)$$
Thus, $Q^{-1} \in \mathcal{L}(E)$. Then, from the identity
$$z_1 - z_2 = Q^{-1}(G(z_1) - G(z_2)) = Q^{-1}(0) = 0, \qquad (35)$$
we conclude that  z 2 = z 1 . □
Remark 2.
(i) 
The limit point  a *  can be replaced by  a ¯  in condition  ( C 7 ) .
(ii) 
It is clear that, under all conditions  ( C 1 ) ( C 7 ) , one can choose  z 1 = x *  and  s 3 = a *  in Proposition 1.
(iii) 
Notice also that, as in the local case,
$$\Delta(t) = (1 + \|P\| + \|P\|\,\psi_2(t))\,t + \|G(x_0)\|,$$
provided that
$$\|P^{-1}([x, x_0; G] - P)\| \le \psi_2(\|x - x_0\|)$$
for some IC function  ψ 2 .

4. Special Cases and Applications

The functions $h_1$ and $h_2$ can be specialized:
Local Case 1: Assume:
$(H_6)'$ There exists an IC function $\phi_1 : B_0 \times B_0 \times B_0 \to B$ such that, for each $x, y \in D_0$,
$$\|P^{-1}([y, x; G] - [u, x; G])\| \le \phi_1(\|x - x^*\|, \|y - x^*\|, \|u - x^*\|).$$
We need the estimates
$$\|P^{-1}(B_\sigma - P)\| = \|P^{-1}([y_\sigma, x_\sigma; G] - [u_\sigma, x_\sigma; G] + [u_\sigma, y_\sigma; G] - P)\| \le \phi_1(\|x_\sigma - x^*\|, \|y_\sigma - x^*\|, \|u_\sigma - x^*\|) + \phi_0(\|y_\sigma - x^*\|, \|u_\sigma - x^*\|) = \bar{q}_\sigma$$
and, from
$$B_\sigma - [y_\sigma, x^*; G] = ([y_\sigma, x_\sigma; G] - [u_\sigma, x_\sigma; G]) + ([u_\sigma, y_\sigma; G] - [y_\sigma, x^*; G]),$$
$$\|P^{-1}(B_\sigma - [y_\sigma, x^*; G])\| \le \phi_1(\|x_\sigma - x^*\|, \|y_\sigma - x^*\|, \|u_\sigma - x^*\|) + \phi_0(\|y_\sigma - x^*\|, \|u_\sigma - x^*\|) = \bar{p}_\sigma.$$
Define the functions $q, p : B_0 \to B$ by
$$q(t) = \phi_1(t, h_1(t)t, \Delta(t)) + \phi_0(h_1(t)t, \Delta(t)), \quad p(t) = \phi_1(t, h_1(t)t, \Delta(t)) + \phi_0(h_1(t)t, \Delta(t)).$$
Assume that the equation $q(t) - 1 = 0$ admits a smallest positive solution $s_0 \in B_0$. Then, $B_\sigma^{-1} \in \mathcal{L}(E)$ and the condition $(H_6)$ can be replaced by $(H_6)'$ provided that
$$h_2(t) = \frac{p(t)\, h_1(t)}{1 - q(t)}.$$
The motivation for this definition of the function $h_2$ follows from the estimates
$$x_{\sigma+1} - x^* = y_\sigma - x^* - B_\sigma^{-1} G(y_\sigma)$$
and
$$\|x_{\sigma+1} - x^*\| \le \|B_\sigma^{-1} P\|\,\|P^{-1}(B_\sigma - [y_\sigma, x^*; G])\|\,\|y_\sigma - x^*\| \le \frac{\bar{p}_\sigma}{1 - \bar{q}_\sigma}\, h_1(\|x_\sigma - x^*\|)\,\|x_\sigma - x^*\|.$$
Local Case 2
Assume:
$(H_6)''$
$$\|P^{-1}([u, x; G] - [y, x^*; G])\| \le \phi_2(\|x - x^*\|, \|y - x^*\|, \|u - x^*\|), \quad \|P^{-1}([u, x; G] - [u, y; G])\| \le \phi_3(\|x - x^*\|, \|y - x^*\|, \|u - x^*\|),$$
together with $(H_6)'$, for each $x, y \in D_0$, where $\phi_2 : B_0 \times B_0 \times B_0 \to B$ and $\phi_3 : B_0 \times B_0 \times B_0 \to B$ are IC functions. This time, by the second substep of method (2), we can write in turn
$$x_{\sigma+1} - x^* = y_\sigma - x^* - A_\sigma^{-1} G(y_\sigma) - A_\sigma^{-1}\big((A_\sigma - [y_\sigma, x_\sigma; G]) + (A_\sigma - [u_\sigma, y_\sigma; G])\big) A_\sigma^{-1} G(y_\sigma).$$
But $G(y_\sigma) = G(y_\sigma) - G(x^*) = [y_\sigma, x^*; G](y_\sigma - x^*)$, so
$$\|P^{-1} G(y_\sigma)\| = \|(P^{-1}([y_\sigma, x^*; G] - P) + I)(y_\sigma - x^*)\| \le (1 + v(\|y_\sigma - x^*\|))\,\|y_\sigma - x^*\|$$
$$\le \big(1 + v(h_1(\|x_\sigma - x^*\|)\,\|x_\sigma - x^*\|)\big)\, h_1(\|x_\sigma - x^*\|)\,\|x_\sigma - x^*\| = \bar{\mu}_\sigma\, h_1(\|x_\sigma - x^*\|)\,\|x_\sigma - x^*\|.$$
Moreover, we have
$$\|y_\sigma - x^* - A_\sigma^{-1} G(y_\sigma)\| \le \frac{\phi_2(\|x_\sigma - x^*\|, \|y_\sigma - x^*\|, \|u_\sigma - x^*\|)}{1 - \phi_0(\|x_\sigma - x^*\|, \|u_\sigma - x^*\|)}\,\|y_\sigma - x^*\| = \bar{\alpha}_\sigma\,\|y_\sigma - x^*\|,$$
$$\|P^{-1}(A_\sigma - [y_\sigma, x_\sigma; G])\| \le \phi_1(\|x_\sigma - x^*\|, \|y_\sigma - x^*\|, \|u_\sigma - x^*\|) = \bar{\beta}_\sigma$$
and
$$\|P^{-1}(A_\sigma - [u_\sigma, y_\sigma; G])\| \le \phi_3(\|x_\sigma - x^*\|, \|y_\sigma - x^*\|, \|u_\sigma - x^*\|) = \bar{\gamma}_\sigma.$$
By summing up, we obtain
$$\|x_{\sigma+1} - x^*\| \le \left[\bar{\alpha}_\sigma + \frac{(\bar{\beta}_\sigma + \bar{\gamma}_\sigma)\,\bar{\mu}_\sigma}{(1 - \phi_0(\|x_\sigma - x^*\|, \|u_\sigma - x^*\|))^2}\right] \|y_\sigma - x^*\|.$$
The preceding estimate justifies the definition of the function $h_2$ as follows:
$$\mu(t) = 1 + v(h_1(t)t), \quad \alpha(t) = \frac{\phi_2(t, h_1(t)t, \Delta(t))}{1 - \phi_0(t, \Delta(t))}, \quad \beta(t) = \phi_1(t, h_1(t)t, \Delta(t)), \quad \gamma(t) = \phi_3(t, h_1(t)t, \Delta(t))$$
and
$$h_2(t) = \left[\alpha(t) + \frac{(\beta(t) + \gamma(t))\,\mu(t)}{(1 - \phi_0(t, \Delta(t)))^2}\right] h_1(t).$$
Thus, the delicate condition $(H_6)$ can be replaced by $(H_6)''$. Similarly, we specialize the sequences $\{a_\sigma\}$ and $\{b_\sigma\}$ in the semi-local convergence analysis of method (2).
Semi-local Case 1
We have
$$\|x_{\sigma+1} - y_\sigma\| = \|B_\sigma^{-1} G(y_\sigma)\| \le \frac{\psi(a_\sigma, b_\sigma, \Delta(a_\sigma))(b_\sigma - a_\sigma)}{1 - c_\sigma} = a_{\sigma+1} - b_\sigma,$$
with $c_\sigma = \psi(a_\sigma, b_\sigma, \Delta(a_\sigma)) + \psi_0(b_\sigma, \Delta(a_\sigma))$, where the following estimates, obtained from the definition of method (2), are used:
$$G(y_\sigma) = G(y_\sigma) - G(x_\sigma) - A_\sigma(y_\sigma - x_\sigma) = ([y_\sigma, x_\sigma; G] - [u_\sigma, x_\sigma; G])(y_\sigma - x_\sigma),$$
$$\|P^{-1} G(y_\sigma)\| \le \psi(\|x_\sigma - x_0\|, \|y_\sigma - x_0\|, \|u_\sigma - x_0\|)\,\|y_\sigma - x_\sigma\| \le \psi(a_\sigma, b_\sigma, \Delta(a_\sigma))(b_\sigma - a_\sigma)$$
and
$$\|B_\sigma^{-1} P\| \le \frac{1}{1 - c_\sigma}.$$
Thus, condition $(C_5)$ can be dropped provided that it is replaced by $(C_5)'$:
$$a_{\sigma+1} = b_\sigma + \frac{\psi(a_\sigma, b_\sigma, \Delta(a_\sigma))(b_\sigma - a_\sigma)}{1 - c_\sigma}$$
and $c_\sigma < 1$ for each $\sigma = 0, 1, 2, \ldots$.
Semi-local Case 2:
Assume:
$(C_5)''$ $\|P^{-1}([u, x; G] - [u, y; G])\| \le \psi_1(\|x - x_0\|, \|y - x_0\|, \|u - x_0\|)$ for each $x, y \in D_0$ and some IC function $\psi_1 : B_0 \times B_0 \times B_0 \to B$.
We can write, as before,
$$x_{\sigma+1} - y_\sigma = -A_\sigma^{-1} G(y_\sigma) - A_\sigma^{-1}\big((A_\sigma - [y_\sigma, x_\sigma; G]) + (A_\sigma - [u_\sigma, y_\sigma; G])\big) A_\sigma^{-1} G(y_\sigma),$$
so
$$\|x_{\sigma+1} - y_\sigma\| \le \left[1 + \frac{\psi(a_\sigma, b_\sigma, \Delta(a_\sigma)) + \psi_1(a_\sigma, b_\sigma, \Delta(a_\sigma))}{1 - \psi_0(a_\sigma, \Delta(a_\sigma))}\right] \frac{\psi(a_\sigma, b_\sigma, \Delta(a_\sigma))(b_\sigma - a_\sigma)}{1 - \psi_0(a_\sigma, \Delta(a_\sigma))} = a_{\sigma+1} - b_\sigma.$$
Thus, again, condition $(C_5)$ can be replaced by $(C_5)''$ in Theorem 2.
Remark 3.
Concerning the choice of the linear operator $P$, we can suggest two interesting cases:
(1)
$P = G'(x^*)$, if the operator $G$ is differentiable, in the local convergence case.
(2)
$P = [x_0, x_1; G]$, $x_0, x_1 \in D$, if the operator is not necessarily differentiable, or $P = G'(x^*)$ if the operator is differentiable, in the semi-local convergence case.
Other possible choices are given in [6,7,9].

5. Numerical Applications

To evaluate the effectiveness of our mathematical methods, we test them on a variety of problems, including systems of differential equations, Hammerstein integral equations of the first kind [7,8,10], steering motion problems, and boundary value problems. In addition, we also chose a nonlinear, nondifferentiable system; such functions arise in a wide range of applications and are particularly challenging to solve. By solving these problems using our methods, we can determine their accuracy, efficiency, and suitability for various applications.
First of all, we obtain the radii of convergence for iterative solver (2), so that we have an idea of how close the initial approximation must be for convergence to the exact solution. After choosing a suitable initial approximation, we carry out the computation and determine the computational order of convergence, which shows how quickly the iterative solver approaches the exact solution. For the computational order of convergence (COC), the following formula is used:
$$\kappa = \frac{\ln\left(\|x_{\rho+1} - x^*\| \,/\, \|x_\rho - x^*\|\right)}{\ln\left(\|x_\rho - x^*\| \,/\, \|x_{\rho-1} - x^*\|\right)}, \quad \text{for } \rho = 1, 2, \ldots, \qquad (36)$$
or the approximate computational order of convergence (ACOC) [5,10] is given by
$$\kappa^* = \frac{\ln\left(\|x_{\rho+1} - x_\rho\| \,/\, \|x_\rho - x_{\rho-1}\|\right)}{\ln\left(\|x_\rho - x_{\rho-1}\| \,/\, \|x_{\rho-1} - x_{\rho-2}\|\right)}, \quad \text{for } \rho = 2, 3, \ldots \qquad (37)$$
To assess efficiency, we additionally recorded the CPU time for the computation. Finally, we report the number of iterations required to achieve the specified accuracy, as well as the residual error.
Iterative solvers use stopping criteria to decide when to stop iterating and accept the current approximation as the solution. Depending on the type of problem being solved and the method being used, a variety of stopping criteria can be adopted. We opt for the following standard criteria:
(i)
$\|x_{k+1} - x_k\| < \epsilon$, and
(ii)
$\|G(x_k)\| < \epsilon$,
where $\epsilon = 10^{-100}$ is the error tolerance. The computations are performed in multiple precision arithmetic with the help of Mathematica 11.
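In double precision one cannot reach $\epsilon = 10^{-100}$, so the following Python sketch (our own illustration, not the authors' Mathematica code) applies the same two criteria with a looser tolerance and then evaluates the ACOC $\kappa^*$ of formula (37) from the last step lengths.

```python
import numpy as np

def iterate_with_acoc(step, G, x0, eps=1e-12, max_iter=50):
    """Run an iteration map `step` until criterion (i) or (ii) holds,
    then estimate the ACOC kappa* from the last three step lengths
    (assumes at least three steps were taken)."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(max_iter):
        xs.append(step(xs[-1]))
        if (np.linalg.norm(xs[-1] - xs[-2]) < eps        # (i) small step
                or np.linalg.norm(G(xs[-1])) < eps):     # (ii) small residual
            break
    d = [np.linalg.norm(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
    kappa = np.log(d[-1] / d[-2]) / np.log(d[-2] / d[-3])  # formula (37)
    return xs[-1], kappa
```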
Example 1.
Consider the following system of differential equations:
$$G_1'(w_1) = e^{w_1}, \quad G_2'(w_2) = (e - 1) w_2 + 1, \quad G_3'(w_3) = 1,$$
subject to $G_1(0) = G_2(0) = G_3(0) = 0$. We consider $G = (G_1, G_2, G_3)$ and $\Lambda = U[0, 1]$. Then, $\xi = (0, 0, 0)^T$ is a solution. The function $G$ is defined on $\Lambda$, for $w = (w_1, w_2, w_3)^T$, by
$$G(w) = \left(e^{w_1} - 1,\ \frac{e - 1}{2}\, w_2^2 + w_2,\ w_3\right)^T.$$
This definition gives
$$G'(w) = \begin{pmatrix} e^{w_1} & 0 & 0 \\ 0 & (e - 1) w_2 + 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
Thus, by the definition of $G$, it follows that $G'(\xi) = I$; hence $\|P\| = 1$ for the choice $P = G'(\xi)$. Let $[x, y; G] = \int_0^1 G'(x + \theta(y - x))\, d\theta$. Then, hypotheses $(H_1)$–$(H_4)$ are verified for
$$\phi_0(t_1, t_2) = \frac{e}{2}(t_1 + t_2), \quad \phi(t_1, t_2) = \frac{e - 1}{2}\, t_2, \quad \phi_1(t_1, t_2, t_3) = \frac{e - 1}{2}(t_1 + t_2), \quad \Delta(t) = (2 + v(t))\, t,$$
$$v(t) = \frac{e - 1}{2}\, t, \quad \phi_2(t_1, t_2, t_3) = \frac{e}{2}(t_1 + t_2) + \frac{e - 1}{2}\, t_3, \quad \text{and} \quad \phi_3(t_1, t_2, t_3) = \frac{e}{2}(t_1 + t_2).$$
The computational results are shown in Table 1.
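For instance, using the divided-difference solver sketched in Section 1, the residual of Example 1 can be checked directly (a quick double-precision sanity check of ours, not the authors' multiprecision run):

```python
import numpy as np

# Residual of Example 1; the solution is x* = (0, 0, 0)^T.
G1 = lambda w: np.array([np.exp(w[0]) - 1.0,
                         0.5 * (np.e - 1.0) * w[1] ** 2 + w[1],
                         w[2]])

x, iters = solver3(G1, np.array([0.3, 0.3, 0.3]))  # solver3 from Section 1
print(x, iters)   # approaches (0, 0, 0) in a few iterations
```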
Example 2.
Let  D = U [ 0 , 1 ]  and  A 1 = A 2 = C [ 0 , 1 ] . Consider the nonlinear integral equation of the first kind Hammerstein operator H defined by
G ( Υ ) ( x ) = Υ ( x ) 3 0 1 x ζ Υ ( ζ ) 3 d ζ .
The calculation for the derivative gives
G Υ ( q ) ( x ) = q ( x ) 9 0 1 x ζ Υ ( ζ ) 2 q ( ζ ) d ζ ,
for $q \in C[0, 1]$. By this choice of the operator $G$, conditions $(C_1)$–$(C_4)$ are verified; we choose
$$\phi_0(t_1, t_2) = \frac{9}{4}(t_1 + t_2), \quad \phi(t_1, t_2) = \frac{9}{4}(t_1 + t_2), \quad \phi_1(t_1, t_2, t_3) = \frac{9}{4}(t_1 + t_2), \quad \phi_2(t_1, t_2, t_3) = \frac{9}{4}(t_1 + t_2 + t_3),$$
$$\phi_3(t_1, t_2, t_3) = \frac{9}{4}(t_1 + t_2), \quad v(t) = \frac{9}{4}\, t, \quad \Delta(t) = (2 + v(t))\, t,$$
with $P = G'(x^*) = I$, where $I$ is the identity operator. Adopting these functions, we obtain the radii of solver (2) for Example 2 in Table 2.
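The radius $s_1$ of Table 2 can be reproduced directly from these functions; the bisection routine below is a sketch of ours (the paper does not specify how the radii were computed). It first locates $\rho$ from $(H_1)$ and then the root of $h_1(t) - 1 = 0$ below it.

```python
# Majorant functions of Example 2, as displayed above (9/4 = 2.25).
phi0 = lambda t1, t2: 2.25 * (t1 + t2)
phi = lambda t1, t2: 2.25 * (t1 + t2)
v = lambda t: 2.25 * t
Delta = lambda t: (2.0 + v(t)) * t
h1 = lambda t: phi(t, Delta(t)) / (1.0 - phi0(t, Delta(t)))

def smallest_root(f, a=1e-12, b=1.0, iters=200):
    """Bisection for a root of f on (a, b); assumes one sign change."""
    for _ in range(iters):
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0.0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

rho = smallest_root(lambda t: phi0(t, Delta(t)) - 1.0)            # ~0.1346
s1 = smallest_root(lambda t: h1(t) - 1.0, b=rho * (1.0 - 1e-9))   # ~0.070361
print(rho, s1)   # s1 matches Table 2
```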
Example 3.
The system of nonlinear equations is a powerful tool for solving boundary value problems in many fields, such as physics, engineering, and finance. It allows for the modeling of complex systems with multiple variables and nonlinear relationships. By solving the system of equations, we can obtain solutions that satisfy the boundary conditions and accurately represent the behavior of the system. Therefore, we consider a boundary value problem (see [8]), which is given by
u + u 3 = 0
with  u ( 0 ) = u ( 1 ) = 0 . The interval  [ 0 , 1 ]  is divided into 1006 sections to yield
γ 0 = 0 < γ 1 < γ 2 < < γ 1006 = 1 , γ k + 1 = γ k + h , h = 1 1006 .
Then, we can choose  u 0 = u ( γ 0 ) = 0 , u 1 = u ( γ 1 ) , , u 1006 = u ( γ 1006 ) = 0 . We have
$$u_\theta'' = \frac{u_{\theta-1} - 2u_\theta + u_{\theta+1}}{h^2}, \quad \theta = 1, 2, 3, \ldots, 1005,$$
acquired by the use of the discretization approach. The following $1005 \times 1005$ nonlinear system of equations is obtained:
$$u_{\theta-1} + u_{\theta+1} - 2u_\theta + h^2 u_\theta^3 = 0. \qquad (38)$$
We present the iterations and the COC of Example 3 in Table 3. System (38) converges to the following resulting column vector (not a matrix; the middle components are omitted):
$$x^* = \big(0.00105, 0.00210, 0.00315, 0.00420, 0.00525, 0.00630, 0.00735, 0.00840, 0.00945, 0.0105, 0.0115, 0.0126, 0.0136, 0.0147, \ldots, 0.991, 0.992, 0.992, 0.993, 0.994, 0.995, 0.996, 0.996, 0.997, 0.998, 0.999\big)^{tr}.$$
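The discretized residual is easy to assemble; the following sketch solves a much coarser version of the same boundary value problem (20 interior points instead of the paper's 1005, to keep the illustration fast) with the solver from Section 1.

```python
import numpy as np

N = 20                      # interior points; the paper uses 1005
h = 1.0 / (N + 1)

def G_bvp(u):
    """Residual of (38): u_{k-1} + u_{k+1} - 2 u_k + h^2 u_k^3, zero boundary."""
    up = np.concatenate(([0.0], u, [0.0]))    # pad with u(0) = u(1) = 0
    return up[:-2] + up[2:] - 2.0 * u + h ** 2 * u ** 3

x, iters = solver3(G_bvp, 0.4 * np.ones(N))   # solver3 from Section 1
print(iters, np.linalg.norm(G_bvp(x)))
```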
Example 4.
We examine one of the most well-known applied science problems, the Hammerstein integral equation (see pp. 19–20 in [8]), to compare the effectiveness and applicability of our suggested methods with those of others. It is given by
$$x(s) = 1 + \frac{1}{5} \int_0^1 G(s, t)\, x(t)^3\, dt,$$
where $x \in C[0, 1]$, $s, t \in [0, 1]$, and the kernel $G$ is
$$G(s, t) = \begin{cases} (1 - s)\, t, & t \le s, \\ s\, (1 - t), & s \le t. \end{cases}$$
To convert the aforementioned equation into a finite-dimensional problem, we use the Gauss–Legendre quadrature formula $\int_0^1 g(t)\, dt \approx \sum_{j=1}^{10} w_j\, g(t_j)$, where the abscissas $t_j$ and the weights $w_j$ are those of the 10-point Gauss–Legendre rule. Denoting the approximations of $x(t_i)$ by $x_i$ $(i = 1, 2, \ldots, 10)$, one obtains the system of nonlinear equations
$$5 x_i - 5 - \sum_{j=1}^{10} a_{ij}\, x_j^3 = 0, \quad i = 1, 2, \ldots, 10, \qquad a_{ij} = \begin{cases} w_j\, t_j\, (1 - t_i), & j \le i, \\ w_j\, t_i\, (1 - t_j), & i < j. \end{cases}$$
The abscissas $t_j$ and the weights $w_j$ of the 10-point rule are shown in Table 4.
The convergence approaches the root $x^* = (1.001, 1.006, 1.014, 1.021, 1.026, 1.026, 1.021, 1.014, 1.006, 1.001)^{tr}$. This is tested in Table 5.
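A compact way to reproduce this system numerically is to generate the nodes and weights with NumPy's Gauss–Legendre routine, map them from $[-1, 1]$ to $[0, 1]$, and assemble $a_{ij}$; the sketch below then applies the solver from Section 1 (our own double-precision illustration; the paper works in multiprecision Mathematica).

```python
import numpy as np

nodes, weights = np.polynomial.legendre.leggauss(10)
t = 0.5 * (nodes + 1.0)      # abscissas t_j mapped from [-1, 1] to [0, 1]
w = 0.5 * weights            # correspondingly scaled weights w_j

j = np.arange(10)
a = np.where(j[None, :] <= j[:, None],       # j <= i : w_j t_j (1 - t_i)
             (w * t) * (1.0 - t)[:, None],
             (w * (1.0 - t)) * t[:, None])   # i < j : w_j t_i (1 - t_j)

G_ham = lambda x: 5.0 * x - 5.0 - a @ x ** 3
x, iters = solver3(G_ham, 1.1 * np.ones(10))  # solver3 from Section 1
print(np.round(x, 3))   # ~ (1.001, 1.006, ..., 1.006, 1.001), cf. Table 5
```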
Example 5.
Here, we solve the nonlinear, nondifferentiable system given as
$$3 t_1^2 t_2 + t_2^2 - 1 + |t_1 - 1| = 0,$$
$$t_1^4 + t_1 t_2^3 - 1 + |t_2| = 0.$$
Then, we set $G = (G_1, G_2)$ with $G_1 = Q_1 + Q_2$ and $G_2 = Q_3 + Q_4$, where
$$Q_1(t_1, t_2) = 3 t_1^2 t_2 + t_2^2 - 1, \quad Q_2(t_1, t_2) = |t_1 - 1|, \quad Q_3(t_1, t_2) = t_1^4 + t_1 t_2^3 - 1, \quad \text{and} \quad Q_4(t_1, t_2) = |t_2|.$$
Notice that $Q_2$ and $Q_4$ constitute the nondifferentiable part of the system. The convergence towards the root $(0.8946554, 0.3278265)^{tr}$ is tested in Table 6.
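Because the scheme only ever evaluates $G$, the absolute-value terms need no special treatment; a direct check with the solver from Section 1 (our illustration, which may need a few more iterations in double precision than the multiprecision run of Table 6):

```python
import numpy as np

G_nd = lambda t: np.array([
    3.0 * t[0] ** 2 * t[1] + t[1] ** 2 - 1.0 + abs(t[0] - 1.0),
    t[0] ** 4 + t[0] * t[1] ** 3 - 1.0 + abs(t[1])])

x, iters = solver3(G_nd, np.array([0.9, 0.03]))  # solver3 from Section 1
print(x)   # ~ (0.8946554, 0.3278265), cf. Table 6
```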
Example 6 (Bratu 2D Problem). The widely recognized two-dimensional Bratu problem, as described in [11,12], is defined as follows:
$$u_{xx} + u_{tt} + C e^u = 0 \ \text{on } \Omega, \quad \text{with boundary conditions } u = 0 \text{ on } \partial\Omega, \quad \text{where } \Omega = \{(x, t) : 0 \le x \le 1,\ 0 \le t \le 1\}.$$
The approximate solution for a nonlinear partial differential equation can be determined by employing finite difference discretization. This approach simplifies the problem into solving a system of nonlinear equations. Let us denote the approximate solution at the grid points of the mesh as  u i , j , where  u i , j  represents the solution at position  x i  and time  t j . Additionally, we define M and N as the number of steps in the x and t directions, and h and k as the corresponding step sizes. To tackle the provided partial differential equation, we will apply the central difference method to  u x x  and  u t t , i.e.,
$$u_{xx}(x_i, t_j) = \frac{u_{i+1,j} - 2u_{i,j} + u_{i-1,j}}{h^2}, \qquad C = 0.1, \quad t \in [0, 1].$$
We seek the solution to a system with dimension  225 × 225  by choosing  M = 16  and  N = 16 . It converges to the following resulting column vector (not a matrix):
x * = 0.000619 , 0.00104 , 0.00135 , 0.00157 , 0.00173 , 0.00184 , 0.00190 , 0.00192 , 0.00190 , 0.00184 , 0.00173 , 0.00157 , 0.00135 , 0.00104 , 0.000619 , 0.00104 , 0.00181 , 0.00239 , 0.00281 , 0.00312 , 0.00333 , 0.00345 , 0.00349 , 0.00345 , 0.00333 , 0.00312 , 0.00281 , 0.00239 , 0.00181 , 0.00104 , 0.00135 , 0.00239 , 0.00318 , 0.00379 , 0.00423 , 0.00453 , 0.0047 , 0.00476 , 0.00470 , 0.00453 , 0.00423 , 0.00379 , 0.00318 , 0.00239 , 0.00135 , 0.00157 , 0.00281 , 0.00379 , 0.00453 , 0.00508 , 0.00545 , 0.00567 , 0.00574 , 0.00567 , 0.00545 , 0.00508 , 0.00453 , 0.00379 , 0.00281 , 0.00157 , 0.00173 , 0.00312 , 0.00423 , 0.00508 , 0.00571 , 0.00614 , 0.00639 , 0.00648 , 0.00639 , 0.00614 , 0.00571 , 0.00508 , 0.00423 , 0.00312 , 0.00173 , 0.00184 , 0.00333 , 0.00453 , 0.00545 , 0.00614 , 0.00661 , 0.00689 , 0.00698 , 0.00689 , 0.00661 , 0.00614 , 0.00545 , 0.00453 , 0.00333 , 0.00184 , 0.00190 , 0.00345 , 0.00470 , 0.00567 , 0.00639 , 0.00689 , 0.00719 , 0.00728 , 0.00719 , 0.00689 , 0.00639 , 0.00567 , 0.00470 , 0.00345 , 0.00190 , 0.00192 , 0.00349 , 0.00476 , 0.00574 , 0.00648 , 0.00698 , 0.00728 , 0.00738 , 0.00728 , 0.00698 , 0.00648 , 0.00574 , 0.00476 , 0.00349 , 0.00192 , 0.00190 , 0.00345 , 0.00470 , 0.00567 , 0.00639 , 0.00689 , 0.00719 , 0.00728 , 0.00719 , 0.00689 , 0.00639 , 0.00567 , 0.00470 , 0.00345 , 0.00190 , 0.00184 , 0.00333 , 0.00453 , 0.00545 , 0.00614 , 0.00661 , 0.00689 , 0.00698 , 0.00689 , 0.00661 , 0.00614 , 0.00545 , 0.00453 , 0.00333 , 0.00184 , 0.00173 , 0.00312 , 0.00423 , 0.00508 , 0.00571 , 0.00614 , 0.00639 , 0.00648 , 0.00639 , 0.00614 , 0.00571 , 0.00508 , 0.00423 , 0.00312 , 0.00173 , 0.00157 , 0.00281 , 0.00379 , 0.00453 , 0.00508 , 0.00545 , 0.00567 , 0.00574 , 0.00567 , 0.00545 , 0.00508 , 0.00453 , 0.00379 , 0.00281 , 0.00157 , 0.00135 , 0.00239 , 0.00318 , 0.00379 , 0.00423 , 0.00453 , 0.00470 , 0.00476 , 0.00470 , 0.00453 , 0.00423 , 0.00379 , 0.00318 , 0.00239 , 0.00135 , 0.00104 , 0.00181 , 0.00239 , 0.00281 , 0.00312 , 0.00333 , 0.00345 , 0.00349 , 0.00345 , 0.00333 , 0.00312 , 0.00281 , 0.00239 , 0.00181 , 0.00104 , 0.000619 , 0.00104 , 0.00135 , 0.00157 , 0.00173 , 0.00184 , 0.00190 , 0.00192 , 0.00190 , 0.00184 , 0.00173 , 0.00157 , 0.00135 , 0.00104 , 0.000619 t r .
The computational results are given in Table 7.
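For completeness, here is how the Bratu residual of this example can be assembled with central differences on the $15 \times 15$ interior grid (our own sketch, reusing the solver from Section 1; the initial guess is the sine-product estimate reported with Table 7).

```python
import numpy as np

M = N = 16                  # mesh steps; 15 x 15 = 225 interior unknowns
h = 1.0 / M
C = 0.1

def G_bratu(u_flat):
    """Residual u_xx + u_tt + C e^u at the interior grid points."""
    U = np.zeros((M + 1, N + 1))                  # zero boundary values
    U[1:M, 1:N] = u_flat.reshape(M - 1, N - 1)
    inner = U[1:M, 1:N]
    lap = (U[2:, 1:N] + U[:-2, 1:N] + U[1:M, 2:] + U[1:M, :-2]
           - 4.0 * inner) / h ** 2                # 5-point Laplacian
    return (lap + C * np.exp(inner)).ravel()

i = np.arange(1, M)
u0 = (0.1 * np.outer(np.sin(i * np.pi * h), np.sin(i * np.pi * h))).ravel()
x, iters = solver3(G_bratu, u0)   # solver3 from Section 1
print(iters, np.linalg.norm(G_bratu(x)))
```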

6. Conclusions

A process is introduced that establishes convergence for a family of Steffensen-like methods. The advantage of this process over earlier ones is that the condition on the existence of $G'(x^*)^{-1}$ is not required. Consequently, the methods can be applied to solve nondifferentiable equations with a convergence theory to back them up. Moreover, the process is applicable to other methods involving the inverses of linear operators, such as [1,2,3,4,5,6,7,8,9,10,13,14,15,16,17,18]. It is worth noting that, in earlier work, the local convergence analysis is shown using Taylor series expansions and assumptions on high-order derivatives not present in the method; furthermore, no computable error estimates or uniqueness-of-the-solution results are provided there. Finally, the more interesting semi-local convergence analysis, not studied in [1,19,20,21,22,23,24,25,26,27,28,29], is also developed in this paper. All these concerns are positively addressed here in the more general setting of a Banach space. Therefore, the applicability of such methods is extended in both the local and the semi-local convergence cases using only conditions on the operators appearing in the method. The numerical applications further complement the theoretical findings.

Author Contributions

R.B. and I.K.A.: Conceptualization; Methodology; Validation; Writing—Original Draft Preparation; Writing—Review. M.A.: Writing—Review and Editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Education and King Abdulaziz University, DSR, Jeddah, Saudi Arabia under grant no. (IFPIP:1305-247-1443).

Data Availability Statement

Not applicable.

Acknowledgments

The authors gratefully acknowledge technical and financial support provided by the Ministry of Education and King Abdulaziz University, DSR, Jeddah, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Singh, A. An efficient fifth-order Steffensen-type method for solving systems of nonlinear equations. Int. J. Comput. Sci. Math. 2018, 5, 501–514. [Google Scholar] [CrossRef]
  2. Wang, X.; Zhang, T. A family of Steffensen type methods with seventh-order convergence. Numer. Algorithms 2013, 62, 429–444. [Google Scholar] [CrossRef]
  3. Ren, H.; Wu, Q.; Bi, W. A class of two-step Steffensen type methods of fourth order convergence. Appl. Math. Comput. 2009, 209, 206–210. [Google Scholar] [CrossRef]
  4. Sharma, J.R.; Arora, H. An efficient derivative free method for solving systems of nonlinear equations. Appl. Anal. Discret. Math. 2013, 7, 390–403. [Google Scholar] [CrossRef]
  5. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A new technique to obtain derivative-free optimal iterative methods for solving nonlinear equation. J. Comput. Appl. Math. 2013, 252, 95–102. [Google Scholar] [CrossRef]
  6. Argyros, I.K. The Theory and Application of Iteration Methods, 2nd ed.; Engineering Series; Routledge: Boca Raton, FL, USA, 2022. [Google Scholar]
  7. Magreñán, A.A.; Argyros, I.K. A Contemporary Study of Iterative Methods: Convergence, Dynamics and Applications; Academic Press: Cambridge, MA, USA; Elsevier: Amsterdam, The Netherlands, 2018. [Google Scholar]
  8. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  9. Burden, R.L.; Faires, J.D. Numerical Analysis; PWS Publishing Company: Boston, MA, USA, 2001. [Google Scholar]
  10. Grau-Sánchez, M.; Grau, A.; Noguera, M. Frozen divided difference scheme for solving systems of nonlinear equations. J. Comput. Appl. Math. 2011, 235, 1739–1743. [Google Scholar] [CrossRef]
  11. Kapania, R.K. A pseudo-spectral solution of 2-parameter Bratu’s equation. Comput. Mech. 1990, 6, 55–63. [Google Scholar] [CrossRef]
  12. Simpson, R.B. A method for the numerical determination of bifurcation states of nonlinear systems of equations. SIAM J. Numer. Anal. 1975, 12, 439–451. [Google Scholar] [CrossRef]
  13. Abad, M.; Cordero, A.; Torregrosa, J.R. A family of seventh-order schemes. Bull. Math. Soc. Sci. Math. Roum. 2014, 105, 133–145. [Google Scholar]
  14. Grau-Sánchez, M.; Noguera, M.; Amat, S. On the approximation of derivatives using divided difference operators preserving the local convergence order of iterative methods. J. Comput. Appl. Math. 2013, 237, 363–372. [Google Scholar] [CrossRef]
  15. Liu, Z.; Zheng, Q.; Zhao, P. A variant of Steffensen’s method of fourth-order convergence and its applications. Appl. Math. Comput. 2010, 216, 1978–1983. [Google Scholar] [CrossRef]
  16. Ostrowski, A.M. Solution of Equations and Systems of Equations, Pure and Applied Mathematics; Academic Press: New York, NY, USA; London, UK, 1960; Volume IX. [Google Scholar]
  17. Sharma, J.R.; Arora, H. Efficient derivative-free numerical methods for solving systems of nonlinear equations. Comput. Appl. Math. 2016, 35, 269–284. [Google Scholar] [CrossRef]
  18. Sharma, J.R.; Guha, R.K.; Sharma, R. An efficient fourth order weighted-Newton method for systems of nonlinear equations. Numer. Algorithms 2013, 62, 307–323. [Google Scholar] [CrossRef]
  19. Alefeld, G.E.; Potra, F.A. Some efficient methods for enclosing simple zeros of nonlinear equations. BIT 1992, 32, 334–344. [Google Scholar] [CrossRef]
  20. Costabile, F.; Gualtieri, M.I.; Serra-Capizzano, S. An iterative method for the computation of the solutions of nonlinear equations. Calcolo 1999, 36, 17–34. [Google Scholar] [CrossRef]
  21. Ezquerro, J.A.; Grau-Sánchez, M.; Grau, A.; Hernández, M.A. Construction of derivative-free iterative methods from Chebyshev’s method. Anal. Appl. 2013, 11, 1350009. [Google Scholar] [CrossRef]
  22. Ezquerro, J.A.; Grau-Sánchez, M.; Grau, A.; Hernández, M.A.; Noguera, M.; Romero, N. On iterative methods with accelerated convergence for solving systems of nonlinear equations. J. Optim. Theory Appl. 2011, 151, 163–174. [Google Scholar] [CrossRef]
  23. Galántai, A.; Abaffy, J. Always convergent iteration methods for nonlinear equations of Lipschitz functions. Numer. Algorithms 2015, 69, 443–453. [Google Scholar] [CrossRef]
  24. Grau-Sánchez, M.; Noguera, M. A technique to choose the most efficient method between secant method and some variants. Appl. Math. Comput. 2012, 218, 6415–6426. [Google Scholar] [CrossRef]
  25. Hernández, M.A.; Rubio, M.J. Semilocal convergence of the secant method under mild convergence conditions of differentiability. Comput. Math. Appl. 2002, 44, 277–285. [Google Scholar] [CrossRef]
  26. Potra, F.A.; Pták, V. A generalization of Regula Falsi. Numer. Math. 1981, 36, 333–346. [Google Scholar] [CrossRef]
  27. Potschka, A. Backward step control for global Newton-type methods. SIAM J. Numer. Anal. 2016, 54, 361–387. [Google Scholar] [CrossRef]
  28. Grau-Sánchez, M.; Noguera, M.; Gutiérrez, J.M. Frozen iterative methods using divided differences "à la Schmidt–Schwetlick". J. Optim. Theory Appl. 2014, 160, 931–948. [Google Scholar] [CrossRef]
  29. Schmidt, J.W.; Schwetlick, H. Ableitungsfreie Verfahren mit höherer Konvergenzgeschwindigkeit. Computing 1968, 3, 215–226. [Google Scholar] [CrossRef]
Table 1. Numerical results of solver (2) for Example 1.

| $s_0$ | $s_1$ | $s_2$ | $s$ |
|---|---|---|---|
| 0.18365 | 0.19369 | 0.13771 | 0.13771 |

| Cases | $x_0$ | $\|G(x_\sigma)\|$ | $\|x_{\sigma+1} - x_\sigma\|$ | $\sigma$ | $\kappa$ | CPU timing |
|---|---|---|---|---|---|---|
| Solver (3) | $(0.3, 0.3, 0.3)^{tr}$ | $9.1 \times 10^{-132}$ | $9.1 \times 10^{-132}$ | 4 | 4.0000 | 0.0237102 |
| Solver (4) | $(0.3, 0.3, 0.3)^{tr}$ | $1.3 \times 10^{-350}$ | $1.3 \times 10^{-350}$ | 5 | 4.0000 | 0.0362553 |
| Solver (5) | $(0.3, 0.3, 0.3)^{tr}$ | $6.6 \times 10^{-114}$ | $6.6 \times 10^{-114}$ | 4 | 4.0000 | 0.0251659 |
Table 2. Radii of solver (2) for Example 2.

| $s_0$ | $s_1$ | $s_2$ | $s$ |
|---|---|---|---|
| 0.078031 | 0.070361 | 0.052654 | 0.052654 |
Table 3. Numerical results for Example 3.

| Cases | $x_0$ | $\|G(x_\sigma)\|$ | $\|x_{\sigma+1} - x_\sigma\|$ | $\sigma$ | $\kappa$ | CPU timing |
|---|---|---|---|---|---|---|
| Solver (3) | $(0.4, 0.4, \ldots, 0.4)^{tr}$ (1005 components) | $2.0 \times 10^{-290}$ | $2.2 \times 10^{-285}$ | 4 | 4.0001 | 8395.78 |
| Solver (4) | $(0.4, 0.4, \ldots, 0.4)^{tr}$ (1005 components) | $1.8 \times 10^{-275}$ | $4.6 \times 10^{-271}$ | 4 | 4.0002 | 13475.8 |
| Solver (5) | $(0.4, 0.4, \ldots, 0.4)^{tr}$ (1005 components) | $5.5 \times 10^{-286}$ | $5.8 \times 10^{-281}$ | 4 | 4.0002 | 29148.4 |
Table 4. The abscissas $t_j$ and weights $w_j$ of the Gauss–Legendre quadrature formula.

| $j$ | $t_j$ | $w_j$ |
|---|---|---|
| 1 | 0.01304673574141413996101799 | 0.03333567215434406879678440 |
| 2 | 0.06746831665550774463395165 | 0.07472567457529029657288816 |
| 3 | 0.16029521585048779688283632 | 0.10954318125799102199776746 |
| 4 | 0.28330230293537640460036703 | 0.13463335965499817754561346 |
| 5 | 0.42556283050918439455758700 | 0.14776211235737643508694649 |
| 6 | 0.57443716949081560544241300 | 0.14776211235737643508694649 |
| 7 | 0.71669769706462359539963297 | 0.13463335965499817754561346 |
| 8 | 0.83970478414951220311716368 | 0.10954318125799102199776746 |
| 9 | 0.93253168334449225536604834 | 0.07472567457529029657288816 |
| 10 | 0.98695326425858586003898201 | 0.03333567215434406879678440 |
Table 5. Numerical results for Example 4.

| Cases | $x_0$ | $\|G(x_\sigma)\|$ | $\|x_{\sigma+1} - x_\sigma\|$ | $\sigma$ | $\kappa$ | CPU timing |
|---|---|---|---|---|---|---|
| Solver (3) | $(1.1, 1.1, \ldots, 1.1)^{tr}$ (10 components) | $5.8 \times 10^{-379}$ | $1.2 \times 10^{-379}$ | 4 | 3.9999 | 0.893317 |
| Solver (4) | $(1.1, 1.1, \ldots, 1.1)^{tr}$ (10 components) | $1.9 \times 10^{-342}$ | $4.1 \times 10^{-343}$ | 4 | 4.0000 | 0.796277 |
| Solver (5) | $(1.1, 1.1, \ldots, 1.1)^{tr}$ (10 components) | $8.4 \times 10^{-372}$ | $1.8 \times 10^{-372}$ | 4 | 3.9999 | 0.760152 |
Table 6. Numerical results for Example 5.

| Cases | $x_0$ | $\|G(x_\sigma)\|$ | $\|x_{\sigma+1} - x_\sigma\|$ | $\sigma$ | $\kappa$ | CPU timing |
|---|---|---|---|---|---|---|
| Solver (3) | $(0.9, 0.03)^{tr}$ | $1.9 \times 10^{-157}$ | $7.5 \times 10^{-158}$ | 5 | 3.0000 | 0.0449242 |
| Solver (4) | $(0.9, 0.03)^{tr}$ | $5.8 \times 10^{-174}$ | $2.2 \times 10^{-174}$ | 5 | 3.0000 | 0.0528398 |
| Solver (5) | $(0.9, 0.03)^{tr}$ | $1.7 \times 10^{-187}$ | $6.5 \times 10^{-188}$ | 5 | 3.0000 | 0.0525841 |
Table 7. Numerical results for Example 6.

| Cases | $\|G(x_\sigma)\|$ | $\|x_{\sigma+1} - x_\sigma\|$ | $\sigma$ | $\kappa$ | CPU timing |
|---|---|---|---|---|---|
| Solver (3) | $2.2 \times 10^{-194}$ | $2.5 \times 10^{-193}$ | 3 | 3.9997 | 1004.13 |
| Solver (4) | $4.1 \times 10^{-194}$ | $4.5 \times 10^{-193}$ | 3 | 3.9997 | 1640.15 |
| Solver (5) | $2.6 \times 10^{-194}$ | $2.9 \times 10^{-193}$ | 3 | 3.9997 | 1741.74 |

The initial estimate is $x_0 = \big(0.1 \sin(i \pi h) \sin(j \pi h), \ldots, 0.1 \sin(i \pi h) \sin(j \pi h)\big)^{tr}$, $i, j = 1, 2, 3, \ldots, 15$.