Article

An Algorithm Derivative-Free to Improve the Steffensen-Type Methods

by Miguel A. Hernández-Verón 1, Sonia Yadav 2, Ángel Alberto Magreñán 1,*, Eulalia Martínez 3 and Sukhjit Singh 2

1 Department of Mathematics and Computation, University of La Rioja, 26006 Logroño, Spain
2 Department of Mathematics, Dr BR Ambedkar National Institute of Technology, Jalandhar 144011, India
3 Instituto Universitario de Matemática Multidisciplinar, Universitat Politècnica de València, 46022 València, Spain
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(1), 4; https://doi.org/10.3390/sym14010004
Submission received: 13 November 2021 / Revised: 10 December 2021 / Accepted: 13 December 2021 / Published: 21 December 2021

Abstract: Solving equations of the form $H(x)=0$ is one of the most frequently faced problems in mathematics and in other sciences such as chemistry or physics. In general, these equations cannot be solved without iterative methods. Steffensen-type methods, which are defined using divided differences and are therefore derivative-free, are usually considered for these problems when $H$ is a non-differentiable operator, due to their accuracy and efficiency. However, in general, the accessibility of these iterative methods is small. The main interest of this paper is to improve the accessibility of Steffensen-type methods, that is, the set of starting points that converge to the roots when these methods are applied. We achieve this by means of a predictor–corrector iterative process: as predictor, we use an iterative process based on symmetric divided differences, with good accessibility, and, as corrector, the Center-Steffensen method, which has quadratic convergence. In addition, the dynamical studies presented show, in an experimental way, that this iterative process also improves the region of accessibility of Steffensen-type methods. Moreover, we analyze the semilocal convergence of the proposed predictor–corrector iterative process in two cases: when $H$ is differentiable and when $H$ is non-differentiable. Summing up, we present an effective alternative to Newton's method for non-differentiable operators, where that method cannot be applied. The theoretical results are illustrated with numerical experiments.

1. Introduction

One of the most studied problems in numerical mathematics is finding the solution of nonlinear systems of equations
$$H(x)=0, \qquad (1)$$
where $H:\Omega\subseteq\mathbb{R}^m\to\mathbb{R}^m$ is a nonlinear operator, $H\equiv(H_1,H_2,\dots,H_m)$ with $H_i:\Omega\subseteq\mathbb{R}^m\to\mathbb{R}$, $1\leq i\leq m$, and $\Omega$ is a non-empty open convex domain. In this context, iterative methods are a powerful tool for solving these equations [1]. Many applied problems can be reduced to solving systems of nonlinear equations, which is one of the most basic problems in mathematics. These problems arise in all scientific areas: in mathematics and physics, and especially in a diverse range of engineering applications [2,3]. Applications can be found in the geometric theory of the relativistic string [4], in solving nonlinear equations in porous media problems [5,6], in solving nonlinear stochastic differential equations by the first-order finite difference method [7], in solving nonlinear Volterra integral equations [8], and in many other settings.
In general, two aspects must be considered when choosing an iterative process to approximate a solution of Equation (1). The first is the computational efficiency of the iterative process [9]. The other, equally important, is the accessibility of the iterative process [10], which represents the possibility of locating starting points that ensure the convergence of the sequence generated by the iterative process to a solution of Equation (1). Newton's method, due to its characteristics, is usually taken as the reference for measuring both aspects. However, this method has a serious shortcoming: the derivative $H'(x)$ has to be computed and evaluated at each iteration. This makes it inapplicable when the equations involved present non-differentiable operators, and in situations when the evaluation of the derivative is too expensive in terms of computation and time. In these cases, one common alternative is to approximate the derivatives by divided differences using a numerical derivation formula, whereby derivative-free iterative processes are obtained. For this purpose, authors use first-order divided differences [9,11]. First, we denote by $\mathcal{L}(\mathbb{R}^m,\mathbb{R}^m)$ the space of bounded linear operators from $\mathbb{R}^m$ to $\mathbb{R}^m$. An operator $[x,y;D]\in\mathcal{L}(\mathbb{R}^m,\mathbb{R}^m)$ is called a first-order divided difference for the operator $D:\Omega\subseteq\mathbb{R}^m\to\mathbb{R}^m$ on the points $x$ and $y$ $(x\neq y)$ if it satisfies
$$[x,y;D](x-y)=D(x)-D(y). \qquad (2)$$
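For illustration, the identity (2) can be checked numerically with the usual columnwise construction of a first-order divided difference (the same construction reappears in Section 3.3.2). The following is a minimal Python sketch; the operator D used here is hypothetical.

```python
import numpy as np

def D(x):
    # A hypothetical nonlinear operator on R^2, used only for illustration.
    return np.array([x[0]**2 + x[1], np.sin(x[1]) + x[0]])

def divided_difference(F, x, y):
    # Columnwise construction: the j-th column switches the j-th argument
    # from y_j to x_j; it requires x_j != y_j for every j.
    m = len(x)
    A = np.zeros((m, m))
    for j in range(m):
        u = np.concatenate([x[:j + 1], y[j + 1:]])
        v = np.concatenate([x[:j], y[j:]])
        A[:, j] = (F(u) - F(v)) / (x[j] - y[j])
    return A

x, y = np.array([0.3, -0.2]), np.array([0.1, 0.4])
A = divided_difference(D, x, y)
# The defining identity (2) holds by telescoping over the columns:
print(np.allclose(A @ (x - y), D(x) - D(y)))  # True
```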
In this paper, we consider derivative-free iterative processes based on the previous ideas. However, these methods also have a serious shortcoming: their region of accessibility is reduced. In [10], the accessibility of an iterative process is increased by means of an analytical procedure, which consists of modifying the convergence conditions. In this work, instead, we increase accessibility by constructing a predictor–corrector iterative process. This iterative process has a first prediction phase and then a second accurate approximation phase. The first phase allows us, by applying the predictor method, to locate a starting point for the corrector method that ensures convergence to a solution of the equation.
Kung and Traub presented in [12] a class of derivative-free iterative processes, which contains Steffensen-type methods as a special case. In [13], a generalized Steffensen-type method is considered, with the following algorithm:
$$x_0\in\Omega,\ \alpha,\beta\in[0,1],\qquad y_n=x_n-\alpha H(x_n),\quad z_n=x_n+\beta H(x_n),\quad x_{n+1}=x_n-[y_n,z_n;H]^{-1}H(x_n),\quad n\geq0. \qquad (3)$$
As special cases of the previous algorithm we obtain the three most well-known Steffensen-type methods: for $\alpha=0$ and $\beta=1$ we obtain the original Steffensen method, the Backward-Steffensen method is obtained for $\alpha=1$ and $\beta=0$, and the Center-Steffensen method is obtained for $\alpha=1$ and $\beta=1$.
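In one dimension the divided difference reduces to a difference quotient, so the whole family (3) can be sketched in a few lines of Python; the starting point below is an arbitrary choice of ours.

```python
def steffensen_type(H, x0, alpha, beta, tol=1e-12, max_iter=100):
    # One-dimensional version of (3); [y_n, z_n; H] is a difference quotient.
    x = x0
    for _ in range(max_iter):
        hx = H(x)
        if abs(hx) < tol:
            break
        y, z = x - alpha * hx, x + beta * hx
        x = x - hx * (y - z) / (H(y) - H(z))
    return x

H1 = lambda z: z**3 - 1
print(steffensen_type(H1, 1.2 + 0.1j, alpha=1, beta=1))  # Center-Steffensen; tends to 1
```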
Notice that, if we consider Newton's method,
$$x_{n+1}=x_n-H'(x_n)^{-1}H(x_n),\quad n\geq0;\qquad x_0\in\Omega\text{ given}, \qquad (4)$$
which is one of the most used iterative methods [14,15,16,17,18] to approximate a solution $x^*$ of $H(x)=0$, the Steffensen-type methods are obtained as special cases of this method, where the evaluation of $H'(x_n)$ at each step is approximated by the first-order divided difference $[x_n-\alpha H(x_n),\,x_n+\beta H(x_n);H]$. Steffensen-type methods have been widely studied by many recognized researchers, such as Alarcón, Amat, Busquier and López ([19]), who presented a study and applications of the Steffensen method to boundary-value problems; Argyros ([20]), who gave an improved convergence theorem for the Steffensen method; and Ezquerro, Hernández, Romero and Velasco ([21]), who studied the generalization of the Steffensen method to Banach spaces.
Symmetric divided differences generally perform better. This fact can be seen in the dynamical behavior of the Center-Steffensen method (see Section 2), which is the best, in terms of convergence, of the Steffensen-type methods given previously. Moreover, by approximating the derivative through divided differences that are symmetric with respect to $x_n$, this method maintains the quadratic convergence of Newton's method, and it also has the same computational efficiency as Newton's method. However, to achieve second order in practice, an iterate close enough to the solution is needed, so that the divided difference is a good approximation of the first derivative of $H$ used in Newton's method. Otherwise, some extra iterations in comparison with Newton's method are required. Basically, when the norm of $H(x)$ is large, the divided difference approximates the first derivative of $H$ poorly. So, in general, the set of valid starting points of the Steffensen-type methods is poor. This can be observed experimentally by means of the basins of attraction shown in Section 2, and it explains why Steffensen-type methods are less used than Newton's method to approximate solutions of equations with differentiable operators.
Thus, we have two main objectives in this work. On the one hand, in the case of differentiable operators, where Newton's method can also be applied, our objective is to construct a predictor–corrector iterative process with accessibility and efficiency comparable to those of Newton's method. On the other hand, our second objective is to ensure that this predictor–corrector iterative process behaves like Newton's method in the case of non-differentiable operators, where Newton's method cannot be applied.
Following this idea, in this paper we consider the derivative-free point-to-point iterative process given by
$$x_0\text{ given in }\Omega,\qquad x_{n+1}=x_n-[x_n-\mathrm{Tol},\,x_n+\mathrm{Tol};H]^{-1}H(x_n),\quad n\geq0, \qquad (5)$$
where $\mathrm{Tol}=(tol,tol,\dots,tol)\in\mathbb{R}^m$ for a real number $tol>0$. Thus, we use a symmetric divided difference to approximate the derivative that appears in Newton's method. Furthermore, by varying the parameter $tol$, we can approximate the value of $H'(x_n)$ as closely as we wish. Notice that, in the differentiable case, for $tol=0$ we recover Newton's method. The dynamical behavior of this simple iterative process is similar to that of Newton's method, varying with the parameter $tol$.
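In one dimension, an iteration of (5) is a Newton-like step in which $H'(x_n)$ is replaced by the symmetric difference quotient over $2\,tol$. A minimal sketch, with hypothetical parameter values:

```python
def predictor(H, x0, tol_param, n_steps):
    # One-dimensional version of (5): the symmetric difference quotient
    # over 2*tol stands in for H'(x).
    x = x0
    for _ in range(n_steps):
        dd = (H(x + tol_param) - H(x - tol_param)) / (2 * tol_param)
        x = x - H(x) / dd
    return x

# With H1(z) = z^3 - 1, a few steps of (5) already approach the root z = 1,
# even for a fairly large tol:
print(predictor(lambda z: z**3 - 1, 1.2 + 0.1j, tol_param=0.5, n_steps=8))
```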
However, although by reducing the value of $tol$ we can reach a speed of convergence similar to that of Newton's method, the order of convergence of (5) is only linear. That is why we consider this method as a predictor, due to its good accessibility, and then take as corrector the Center-Steffensen method:
$$x_0\in\Omega,\qquad y_n=x_n-H(x_n),\quad z_n=x_n+H(x_n),\quad x_{n+1}=x_n-[y_n,z_n;H]^{-1}H(x_n),\quad n\geq0, \qquad (6)$$
as a corrector method, whose order of convergence is quadratic.
The paper is organized as follows. Section 2 contains the motivation of the paper. In Section 3, we present a semilocal convergence analysis of the new method in both the differentiable and the non-differentiable case; moreover, some numerical experiments are shown in which the theoretical results are illustrated numerically. Next, Section 4 contains the study of the dynamical behavior of the predictor–corrector method. Finally, in Section 5, we present the conclusions of the work carried out.

2. Motivation

When iterative processes defined by divided differences are applied to find the solutions of nonlinear equations, it is important to note that the region of accessibility is reduced with respect to Newton's method. In practice, we can see this circumstance with the basins of attraction (the set of points of the plane such that initial conditions chosen in the set dynamically evolve to a particular attractor ([22,23])) of iterative methods when they are applied to solve a complex equation $H(z)=0$, where $H:\mathbb{C}\to\mathbb{C}$ and $z\in\mathbb{C}$.
First, in the differentiable case, we compare the dynamical behavior of Newton's method, the Steffensen-type methods (3) and the iterative process given in (5) for solving the complex equation $H_1(z)=z^3-1=0$. In the non-differentiable case, we compare the Steffensen-type methods (3) and the iterative process given in (5) for solving the complex equation $H_2(z)=z(z^2+2|z|-5)=0$. Our objective is to show that the accessibility region of the iterative process (5) is comparable to that of Newton's method in the differentiable case, and notably greater than those of the Steffensen-type methods (3) in both cases, differentiable and non-differentiable. In each case, the favorable choice of the iterative process (5) as a predictor method is borne out.
We will show the fractal pictures generated when approximating the three solutions of $H_1(z)=0$, namely $z^*=1$, $z^{**}=-0.5-0.866025i$ and $z^{***}=-0.5+0.866025i$, and the ones generated when approximating the three solutions of $H_2(z)=0$, namely $z^*=0$, $z^{**}=1-\sqrt{6}$ and $z^{***}=-1+\sqrt{6}$. We are interested in identifying the attraction basins of the three solutions $z^*$, $z^{**}$ and $z^{***}$ [23]. These basins also allow us to compare the regions of accessibility of the methods.
In all the cases, a tolerance of $10^{-3}$ and a maximum of 100 iterations are used. If we have not reached the desired tolerance after 100 iterations, we stop and decide that the iterative method starting at $z_0$ does not converge to any zero.
The regions of accessibility of the iterative methods, when they are applied to approximate the solutions $z^*$, $z^{**}$ and $z^{***}$ of $H_1(z)=z^3-1=0$, are shown in Figure 1 and Figure 2. The strategy used is the following: a color is assigned to each solution of the equation and, if the iteration does not converge, black is used. To obtain the pictures, red, yellow, and blue have been assigned to the attraction basins of the three zeros. The basins shown have been generated using Mathematica 10 [24].
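The convergence percentages reported below (Table 1 and Table 2) can be estimated by a direct scan of a grid of starting points. The following Python sketch does this for method (5) applied to $H_1$; the grid, window and stopping rule are our assumptions, not necessarily the exact setup used for the figures.

```python
import numpy as np

H1 = lambda z: z**3 - 1
roots = [1, -0.5 + 0.866025j, -0.5 - 0.866025j]
tol_param = 0.5  # the parameter tol of method (5)

def step5(z):
    # One iteration of (5): symmetric difference quotient over 2*tol.
    dd = (H1(z + tol_param) - H1(z - tol_param)) / (2 * tol_param)
    return z - H1(z) / dd

def percentage(step, grid=200, box=2.0, tol=1e-3, max_iter=100):
    # Count starting points whose orbit approaches some root within tol.
    pts = np.linspace(-box, box, grid)
    hits = 0
    for a in pts:
        for b in pts:
            z = complex(a, b)
            try:
                for _ in range(max_iter):
                    z = step(z)
                    if min(abs(z - r) for r in roots) < tol:
                        hits += 1
                        break
            except (ZeroDivisionError, OverflowError):
                pass  # singular difference quotient or diverging orbit
    return 100.0 * hits / grid**2

print(percentage(step5))
```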
If we observe the behavior of these methods, it is clear that methods (3) are stricter with respect to the starting point than Newton's method (see the black zone). However, if we consider the iterative process (5), as seen in Figure 3, varying the parameter $tol$, a dynamical behavior similar to that of Newton's method can be obtained.
Figure 1 and Figure 2 show the dynamical behavior of Newton's method and of the Steffensen, Backward-Steffensen and Center-Steffensen methods; the predictor method (5) clearly behaves better than the Steffensen-type methods (3).
Once the accessibility has been graphically analyzed, showing that method (5) is better than the Steffensen-type methods (3) and similar to Newton's method in terms of convergence, we want to quantify this numerically; for that purpose, we compute the percentage of points that converge. This information is presented in Table 1.
Nevertheless, the use of derivative-free iterative methods is necessary when the operator $H$ is non-differentiable. For this reason, one aim of this work is, starting from the predictor method (5), to preserve, in some way, the good accessibility of Newton's method.
Then, in the non-differentiable case, if we use the Steffensen-type methods defined in (3) to solve the equation $H_2(z)=z(z^2+2|z|-5)=0$, the predictor method (5) improves on the accessibility region of the Steffensen-type methods (3), as we can see in Figure 4 and Figure 5, where the basins of attraction of the solutions of this equation are drawn for the mentioned methods.
Once the accessibility has been graphically analyzed, showing that method (5) is better than the other ones, we verify it numerically; for that purpose, we compute the percentage of points that converge. This information is given in Table 2.
As we have just seen, the predictor iterative process (5) has significantly better dynamical behavior than the Steffensen-type methods, being similar to Newton's method in the differentiable case. Therefore, we can say that the predictor iterative process has good accessibility, improving that of the Steffensen-type methods in both cases, differentiable and non-differentiable. This leads us to construct a predictor–corrector iterative process, using the Center-Steffensen method as the corrector, which maintains its quadratic convergence. Consequently, we consider the predictor–corrector method:
$$\begin{cases}u_0\in\Omega\text{ given},\\ u_{j+1}=u_j-[u_j-\mathrm{Tol},\,u_j+\mathrm{Tol};H]^{-1}H(u_j),\quad j=0,\dots,N_0-1,\\ x_0=u_{N_0},\\ y_n=x_n-H(x_n),\quad z_n=x_n+H(x_n),\quad n\geq0,\\ x_{n+1}=x_n-[y_n,z_n;H]^{-1}H(x_n),\quad n\geq0,\end{cases} \qquad (7)$$
where $\mathrm{Tol}=(tol,tol,\dots,tol)\in\mathbb{R}^m$ for a real number $tol>0$. Thus, this predictor–corrector method is a Steffensen-type method with good accessibility and, from an iteration to be determined, quadratic convergence.
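For reference, the scheme (7) can be written down directly. The following numpy sketch assumes the columnwise first-order divided difference described in Section 3.3.2 and a float64 stopping threshold (the experiments in Section 3.3 instead use 100-digit arithmetic and a tolerance of $10^{-30}$); the helper names are ours.

```python
import numpy as np

def divided_difference(H, x, y):
    # Columnwise first-order divided difference; requires x[j] != y[j] for all j.
    m = len(x)
    A = np.zeros((m, m))
    for j in range(m):
        u = np.concatenate([x[:j + 1], y[j + 1:]])
        v = np.concatenate([x[:j], y[j:]])
        A[:, j] = (H(u) - H(v)) / (x[j] - y[j])
    return A

def predictor_corrector(H, u0, tol_param, N0, n_corrector=50, eps=1e-12):
    Tol = np.full_like(u0, tol_param, dtype=float)
    x = u0.astype(float).copy()
    for _ in range(N0):                     # predictor phase: method (5)
        A = divided_difference(H, x - Tol, x + Tol)
        x = x - np.linalg.solve(A, H(x))
    for _ in range(n_corrector):            # corrector phase: method (6)
        hx = H(x)
        if np.linalg.norm(hx, np.inf) < eps:
            break
        B = divided_difference(H, x - hx, x + hx)  # assumes hx has no zero entry
        x = x - np.linalg.solve(B, hx)
    return x
```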

3. Semilocal Convergence

From the dynamical study carried out previously, it is evident that, if we denote by $D_{Corr}=\{x_0\in\Omega:\{x_n\},\text{ given by (6), converges}\}$ and $D_{Pred}=\{x_0\in\Omega:\{x_n\},\text{ given by (5), converges}\}$ the accessibility domains of the iterative processes (6) and (5), then $D_{Corr}\subseteq D_{Pred}$. That is, the set of starting points that ensure convergence for method (6) is smaller than the corresponding set for method (5). In this section we show that, starting from an element $x_0\in D_{Pred}$, we can locate a point $x_{N_0}$ such that $x_{N_0}\in D_{Corr}$; therefore, we obtain a starting point that ensures the convergence of method (6). Thus, performing some iterations with the predictor method, we locate a point $x_{N_0}$ that ensures the convergence of method (6), thereby increasing the accessibility of the Center-Steffensen method.
The semilocal convergence study is based on imposing conditions on the initial approximation $u_0$, together with certain conditions on the operator $H$, and provides conditions on the initial approximation that guarantee the convergence of the sequence (7) to the solution $x^*$. To analyze the semilocal convergence of iterative processes that do not use derivatives in their algorithms, the conditions are usually required on the divided difference operator, although, in the case that the operator $H$ is Fréchet differentiable, the divided difference operator can be defined from the Fréchet derivative of $H$.

3.1. Differentiable Operators

Next, we establish the semilocal convergence of the iterative process given in (7) for differentiable operators. So, we consider $H:\Omega\subseteq\mathbb{R}^m\to\mathbb{R}^m$ a Fréchet differentiable operator and suppose that there exists
$$[v,w;H]=\int_0^1 H'(tv+(1-t)w)\,dt, \qquad (8)$$
for each pair of distinct points $v,w\in\Omega$. Notice that, as $H$ is Fréchet differentiable, $[x,x;H]=H'(x)$.
Now, we suppose the following initial conditions:
(D1) Let $u_0\in\Omega$ be such that there exists $\Gamma_0=[H'(u_0)]^{-1}$ with $\|\Gamma_0\|\leq\beta$ and $\|H(u_0)\|\leq\delta_0$.
(D2) $\|H'(x)-H'(y)\|\leq K\|x-y\|$, for $x,y\in\Omega$, with $K\in\mathbb{R}_+$.
Firstly, we obtain some technical results.
Lemma 1.
The following items are verified.
(i) Let $R>0$ with $B(u_0,R+\mathrm{Tol})\subseteq\Omega$. If $\beta K(R+\mathrm{Tol})<1$ then, for each pair of distinct points $y,z\in B(u_0,R+\mathrm{Tol})$, there exists $[y,z;H]^{-1}$ such that
$$\|[y,z;H]^{-1}\|\leq\frac{\beta}{1-\beta K(R+\mathrm{Tol})}. \qquad (9)$$
(ii) If $u_j,u_{j-1}\in\Omega$, for $j=0,1,\dots,N_0$, then
$$\|H(u_j)\|\leq\frac{K}{2}\left(\mathrm{Tol}+\|u_j-u_{j-1}\|\right)\|u_j-u_{j-1}\|. \qquad (10)$$
(iii) If $x_j,x_{j-1}\in\Omega$, for $j\geq1$, then
$$\|H(x_j)\|\leq\frac{K}{2}\left(\|H(x_{j-1})\|+\|x_j-x_{j-1}\|\right)\|x_j-x_{j-1}\|. \qquad (11)$$
Proof. 
To prove item (i), from (D1), we can write
$$\|I-\Gamma_0[y,z;H]\|\leq\|\Gamma_0\|\,\|H'(u_0)-[y,z;H]\|\leq\beta\left\|\int_0^1\big(H'(ty+(1-t)z)-H'(u_0)\big)\,dt\right\|\leq\beta K\int_0^1\|t(y-u_0)+(1-t)(z-u_0)\|\,dt\leq\beta K(R+\mathrm{Tol}).$$
Then, by the Banach Lemma for inverse operators [25], item (i) is proved.
Regarding item (ii), from the Taylor expansion of the operator $H$ and (7), we can obtain
$$H(u_j)=H(u_{j-1})+H'(u_{j-1})(u_j-u_{j-1})+\int_0^1\big(H'(u_{j-1}+t(u_j-u_{j-1}))-H'(u_{j-1})\big)\,dt\,(u_j-u_{j-1})$$
$$=\big(H'(u_{j-1})-[u_{j-1}-\mathrm{Tol},\,u_{j-1}+\mathrm{Tol};H]\big)(u_j-u_{j-1})+\int_0^1\big(H'(u_{j-1}+t(u_j-u_{j-1}))-H'(u_{j-1})\big)\,dt\,(u_j-u_{j-1}).$$
Taking norms in the last equality and noting that, by (8) and (D2), $\|H'(u_{j-1})-[u_{j-1}-\mathrm{Tol},\,u_{j-1}+\mathrm{Tol};H]\|\leq\frac{K}{2}\,\mathrm{Tol}$, the proof of item (ii) follows.
Item (iii) is proved analogously to item (ii), just considering the algorithm of the predictor–corrector iterative process (7). □
To simplify the notation, from now on we denote
$$A_j=[u_j-\mathrm{Tol},\,u_j+\mathrm{Tol};H],\qquad B_j=[x_j-H(x_j),\,x_j+H(x_j);H],$$
and the parameters $a_0=\beta^2K\delta_0$ and $b_0=\beta K\,tol$. Other parameters which will be used are
$$M=\frac{L}{2}\,(b_0+L\,a_0),\quad\text{where}\quad L=\frac{1}{1-b_0-\beta KR}.$$
Moreover, notice that the polynomial equation $p(t)=0$, where
$$p(t)=2a_0(1-b_0)-\left(2+a_0-5b_0+3b_0^2\right)\beta Kt+(4-5b_0)\beta^2K^2t^2-2\beta^3K^3t^3,$$
has at least one positive real root, since $p(0)>0$ and $p(t)\to-\infty$ as $t\to\infty$. Then, we denote by $R$ the smallest positive root of the polynomial equation $p(t)=0$.
Finally, we denote by $[x]$ the integer part of the real number $x$.
Theorem 2.
Let $H:\Omega\subseteq\mathbb{R}^m\to\mathbb{R}^m$ be a Fréchet differentiable operator defined on a nonempty open convex domain $\Omega$. Suppose that conditions (D1) and (D2) are satisfied and that there exists $tol>0$ such that $M<1$, $R<\frac{1-b_0}{\beta K}$ and $B(u_0,R+\mathrm{Tol})\subseteq\Omega$. If we consider
$$N_0\geq\begin{cases}1+\left[\dfrac{\log(\mathrm{Tol}/\delta_0)}{\log(M)}\right]&\text{if }\mathrm{Tol}<\delta_0,\\[1mm]1&\text{if }\mathrm{Tol}\geq\delta_0,\end{cases} \qquad (12)$$
then the predictor–corrector iterative process (7), starting at $u_0$, converges to $x^*$, a solution of $H(x)=0$. Moreover, $u_j,x_n,x^*\in\overline{B(u_0,R)}$ for $j=1,\dots,N_0$ and $n\geq0$.
Proof. 
First, notice that it is easy to check that $R=\frac{L\beta\delta_0}{1-M}$.
Then, from item (i) of the previous Lemma, since $u_0\pm\mathrm{Tol}\in B(u_0,R+\mathrm{Tol})$, there exists $A_0^{-1}$ with $\|A_0^{-1}\|\leq\frac{\beta}{1-\beta K(R+\mathrm{Tol})}=L\beta$. Then, $u_1$ is well defined and $\|u_1-u_0\|\leq\|A_0^{-1}\|\,\|H(u_0)\|\leq L\beta\delta_0<R$, so that $u_1\in B(u_0,R)$. Now, obviously, $u_1\pm\mathrm{Tol}\in B(u_0,R+\mathrm{Tol})$ and, again from item (i) of the previous Lemma, there exists $A_1^{-1}$ with $\|A_1^{-1}\|\leq L\beta$. Then, $u_2$ is well defined and, from (10), we have that
$$\|H(u_1)\|\leq\frac{K}{2}\left(\mathrm{Tol}+\|u_1-u_0\|\right)\|u_1-u_0\|\leq\frac{K}{2}\left(\mathrm{Tol}+L\beta\delta_0\right)\|u_1-u_0\|. \qquad (13)$$
Moreover, from (13), we get
$$\|H(u_1)\|\leq\frac{K}{2}\left(\mathrm{Tol}+L\beta\delta_0\right)L\beta\delta_0=M\delta_0.$$
Therefore, we obtain
$$\|u_2-u_1\|\leq\|A_1^{-1}\|\,\|H(u_1)\|\leq L\beta\,\frac{K}{2}\left(\mathrm{Tol}+L\beta\delta_0\right)\|u_1-u_0\|\leq M\|u_1-u_0\|,$$
and $\|u_2-u_1\|<\|u_1-u_0\|$, since $M<1$.
Consequently, it is easy to check that $u_2\in B(u_0,R)$, since
$$\|u_2-u_0\|\leq\|u_2-u_1\|+\|u_1-u_0\|\leq(1+M)\|u_1-u_0\|<\frac{1}{1-M}\,L\beta\delta_0=R.$$
Following a recursive procedure, it is easy to check the following relationships for $j=1,2,\dots,N_0$:
(a) There exists $A_{j-1}^{-1}$ with $\|A_{j-1}^{-1}\|\leq L\beta$;
(b) $\|H(u_{j-1})\|\leq\frac{K}{2}\left(\mathrm{Tol}+L\beta\delta_0\right)\|u_{j-1}-u_{j-2}\|$;
(c) $\|H(u_{j-1})\|\leq M^{j-1}\delta_0$;
(d) $\|u_j-u_{j-1}\|\leq M\|u_{j-1}-u_{j-2}\|<\|u_{j-1}-u_{j-2}\|$;
(e) $\|u_j-u_0\|\leq\left(1+M+\dots+M^{j-1}\right)\|u_1-u_0\|<\frac{1}{1-M}\,L\beta\delta_0=R$.
Now, from the algorithm of the predictor–corrector iterative process (7), we consider $x_0=u_{N_0}\in B(u_0,R)$. Then, from the hypothesis required of the parameter $N_0$ in (12), we have that $\|x_0\pm H(x_0)-u_0\|=\|u_{N_0}\pm H(u_{N_0})-u_0\|\leq\|u_{N_0}-u_0\|+M^{N_0}\delta_0\leq\|u_{N_0}-u_0\|+\mathrm{Tol}$. Then $x_0\pm H(x_0)\in B(u_0,R+\mathrm{Tol})$. So, from item (i) of Lemma 1, there exists $B_0^{-1}$ with $\|B_0^{-1}\|\leq L\beta$.
On the other hand, from item (ii) of Lemma 1, we obtain
$$\|H(x_0)\|=\|H(u_{N_0})\|\leq\frac{K}{2}\left(\mathrm{Tol}+\|u_{N_0}-u_{N_0-1}\|\right)\|u_{N_0}-u_{N_0-1}\|\leq\frac{K}{2}\left(\mathrm{Tol}+\|u_1-u_0\|\right)\|u_{N_0}-u_{N_0-1}\|\leq\frac{K}{2}\left(\mathrm{Tol}+L\beta\delta_0\right)\|u_{N_0}-u_{N_0-1}\|.$$
Then,
$$\|x_1-x_0\|\leq\|B_0^{-1}\|\,\|H(x_0)\|\leq L\beta\,\frac{K}{2}\left(\mathrm{Tol}+L\beta\delta_0\right)\|u_{N_0}-u_{N_0-1}\|\leq M\|u_{N_0}-u_{N_0-1}\|.$$
As a direct consequence, we get that
$$\|x_1-x_0\|<\|u_{N_0}-u_{N_0-1}\|\quad\text{and}\quad\|x_1-x_0\|\leq M^{N_0}\|u_1-u_0\|,$$
and
$$\|x_1-u_0\|\leq\left(1+M+\dots+M^{N_0}\right)\|u_1-u_0\|<\frac{1}{1-M}\,L\beta\delta_0=R.$$
Therefore, $x_1\in B(u_0,R)$ and, from item (iii) of Lemma 1, we have
$$\|H(x_1)\|\leq\frac{K}{2}\left(\|H(x_0)\|+\|x_1-x_0\|\right)\|x_1-x_0\|\leq\frac{K}{2}\left(\mathrm{Tol}+\|u_1-u_0\|\right)\|x_1-x_0\|\leq M^{N_0+1}\delta_0.$$
So, we have that $\|x_1\pm H(x_1)-u_0\|\leq\|x_1-u_0\|+M^{N_0+1}\delta_0\leq\|x_1-u_0\|+\mathrm{Tol}$. Then $x_1\pm H(x_1)\in B(u_0,R+\mathrm{Tol})$. Now, from item (i) of Lemma 1, there exists $B_1^{-1}$ with $\|B_1^{-1}\|\leq L\beta$.
Moreover, we get
$$\|x_2-x_1\|\leq\|B_1^{-1}\|\,\|H(x_1)\|\leq L\beta\,\frac{K}{2}\left(\mathrm{Tol}+L\beta\delta_0\right)\|x_1-x_0\|=M\|x_1-x_0\|<\|x_1-x_0\|,$$
and
$$\|x_2-u_0\|\leq\left(1+M+\dots+M^{N_0+1}\right)\|u_1-u_0\|<\frac{1}{1-M}\,L\beta\delta_0=R.$$
Now, following an inductive procedure, it is easy to check the following recurrence relations for $j\geq1$:
(a′) There exists $B_{j-1}^{-1}$ with $\|B_{j-1}^{-1}\|\leq L\beta$;
(b′) $\|H(x_{j-1})\|\leq\frac{K}{2}\left(\mathrm{Tol}+L\beta\delta_0\right)\|x_{j-1}-x_{j-2}\|$;
(c′) $\|H(x_{j-1})\|\leq M^{N_0+j-1}\delta_0$;
(d′) $\|x_j-x_{j-1}\|\leq M\|x_{j-1}-x_{j-2}\|<\|x_{j-1}-x_{j-2}\|$;
(e′) $\|x_j-u_0\|\leq\left(1+M+\dots+M^{N_0+j-1}\right)\|u_1-u_0\|<\frac{1}{1-M}\,L\beta\delta_0=R$.
Now, using $M<1$, for $n\geq0$ we have
$$\|x_{n+j}-x_n\|\leq\sum_{i=1}^{j}\|x_{n+i}-x_{n+i-1}\|\leq\sum_{i=1}^{j}M^{N_0+n+i-1}\|u_1-u_0\|<\frac{M^{N_0+n}}{1-M}\|u_1-u_0\|.$$
Hence, $\{x_n\}$ is a Cauchy sequence, which therefore converges to some $x^*$. Since
$$\|H(x_n)\|\leq M^{N_0+n}\delta_0,$$
we conclude that $H(x^*)=0$ by the continuity of $H$. □
Next, we present a uniqueness result for the predictor–corrector iterative process (7).
Theorem 3.
Under the conditions of the previous theorem, the solution $x^*$ of the equation $H(x)=0$ is unique in $B(u_0,R)$.
Proof. 
To prove the uniqueness, suppose that $y^*$ is another solution of (1) in $B(u_0,R)$. If $Q=[x^*,y^*;H]$ is invertible, then $x^*=y^*$, since $Q(x^*-y^*)=H(x^*)-H(y^*)$. But
$$\|I-\Gamma_0Q\|\leq\|\Gamma_0\|\,\|H'(u_0)-Q\|\leq\beta\left\|\int_0^1\big(H'(ty^*+(1-t)x^*)-H'(u_0)\big)\,dt\right\|\leq\beta KR<1.$$
Therefore, by the Banach Lemma for inverse operators, there exists $Q^{-1}$ and then $x^*=y^*$. □

3.2. Non-Differentiable Operators

In this section, we want to obtain a semilocal convergence result for the iterative process (7) when $H$ is a non-differentiable operator. In order to obtain it, we must suppose that, for each pair of distinct points $x,y\in\Omega$, there exists a first-order divided difference of $H$ at these points. As we consider $\Omega$ an open convex domain of $\mathbb{R}^m$, this condition is satisfied ([9,26]). Moreover, it is also necessary to impose a condition on the first-order divided difference of the operator $H$. As appears in [27,28], a Lipschitz-continuous or a Hölder-continuous condition can be considered but, in those cases, it is known [29] that the Fréchet derivative of $H$ exists in $\Omega$. Therefore, such conditions cannot be verified if the operator $H$ is non-differentiable. Then, to establish the semilocal convergence of the iterative process given in (7) for a non-differentiable operator $H$, we suppose that the following conditions hold:
(ND1) Let $u_0\in\Omega$ be such that $A_0^{-1}$ exists, with $\|A_0^{-1}\|\leq\beta_0$ and $\|H(u_0)\|\leq\delta_0$.
(ND2) $\|[x,y;H]-[u,v;H]\|\leq P+K\left(\|x-u\|+\|y-v\|\right)$, $P,K\geq0$, for $x,y,u,v\in\Omega$ with $x\neq y$, $u\neq v$.
To simplify the notation, from now on, we denote
$$\tilde{M}=\beta_0\left(P+K(\beta_0\delta_0+2\,\mathrm{Tol})\right)\qquad\text{and}\qquad S=\frac{\tilde{M}}{1-\beta_0\left(P+2K(R+\mathrm{Tol})\right)}.$$
In these conditions, we start our study by obtaining a technical result, whose proof is evident from the algorithm given in (7).
Lemma 4.
The following items can be easily verified.
(i) If $u_j,u_{j-1}\in\Omega$, for $j=0,1,\dots,N_0$, then
$$H(u_j)=\left([u_j,u_{j-1};H]-A_{j-1}\right)(u_j-u_{j-1}).$$
(ii) If $x_j,x_{j-1}\in\Omega$, for $j\geq1$, then
$$H(x_j)=\left([x_j,x_{j-1};H]-B_{j-1}\right)(x_j-x_{j-1}).$$
Theorem 5.
Under the conditions (ND1)–(ND2), suppose that the real equation
$$t=\frac{\beta_0\delta_0\left(1-\beta_0\left(P+2K(t+\mathrm{Tol})\right)\right)}{1-\beta_0\left(P+2K(t+\mathrm{Tol})\right)-\tilde{M}} \qquad (17)$$
has at least one positive root, whose smallest positive root is denoted by $R$, and that there exists $tol>0$ satisfying
$$\tilde{M}+\beta_0\left(P+2K(R+\mathrm{Tol})\right)<1 \qquad (18)$$
and $B(u_0,R+\mathrm{Tol})\subseteq\Omega$. If we consider
$$N_0\geq\begin{cases}2+\left[\dfrac{\log\left(\mathrm{Tol}/\tilde{M}\delta_0\right)}{\log(S)}\right]&\text{if }\mathrm{Tol}<\dfrac{\beta_0\delta_0(P+\beta_0\delta_0K)}{1-2\beta_0\delta_0K},\\[1mm]1&\text{if }\mathrm{Tol}\geq\dfrac{\beta_0\delta_0(P+\beta_0\delta_0K)}{1-2\beta_0\delta_0K},\end{cases} \qquad (19)$$
then the predictor–corrector iterative process (7), starting at $u_0$, converges to $x^*$, a solution of $H(x)=0$. Moreover, $u_j,x_n,x^*\in\overline{B(u_0,R)}$, for $j=1,\dots,N_0$ and $n\geq0$, and $x^*$ is the unique solution of $H(x)=0$ in $B(u_0,R)\cap\Omega$.
Proof. 
First, notice that $\mathrm{Tol}<\frac{\beta_0\delta_0(P+\beta_0\delta_0K)}{1-2\beta_0\delta_0K}$ if and only if $\mathrm{Tol}<\tilde{M}\delta_0$. Moreover, the smallest positive real root $R$ of (17) satisfies
$$R=\frac{\beta_0\delta_0}{1-S}.$$
Second, we prove that $u_j$ is well defined and $u_j\in B(u_0,R)$ for $j=0,1,2,\dots,N_0$. From condition (ND1), $u_1$ is well defined and
$$\|u_1-u_0\|\leq\|A_0^{-1}\|\,\|H(u_0)\|\leq\beta_0\delta_0<R.$$
Thus, $u_1\in B(u_0,R)$ and $u_1\pm\mathrm{Tol}\in B(u_0,R+\mathrm{Tol})$. Using Lemma 4, we get
$$\|H(u_1)\|=\left\|\left([u_1,u_0;H]-[u_0-\mathrm{Tol},u_0+\mathrm{Tol};H]\right)(u_1-u_0)\right\|\leq\left(P+K(\|u_1-u_0\|+2\,\mathrm{Tol})\right)\|u_1-u_0\|\leq\beta_0\left(P+K(\beta_0\delta_0+2\,\mathrm{Tol})\right)\delta_0=\tilde{M}\delta_0.$$
Now,
$$\|I-A_0^{-1}A_1\|\leq\|A_0^{-1}\|\,\|A_1-A_0\|\leq\beta_0\left\|[u_1-\mathrm{Tol},u_1+\mathrm{Tol};H]-[u_0-\mathrm{Tol},u_0+\mathrm{Tol};H]\right\|\leq\beta_0\left(P+K\left(\|u_1-u_0\|+\|u_1-u_0\|\right)\right)\leq\beta_0(P+2KR)<1.$$
Hence, by using the Banach Lemma for inverse operators, $A_1^{-1}$ exists and
$$\|A_1^{-1}\|\leq\frac{\beta_0}{1-\beta_0(P+2KR)}.$$
Thus, $u_2$ is well defined. Moreover,
$$\|u_2-u_1\|\leq\|A_1^{-1}\|\,\|H(u_1)\|\leq\frac{\tilde{M}}{1-\beta_0(P+2KR)}\|u_1-u_0\|\leq\frac{\tilde{M}}{1-\beta_0\left(P+2K(R+\mathrm{Tol})\right)}\|u_1-u_0\|=S\|u_1-u_0\|<\|u_1-u_0\|<R,$$
and $u_2\in B(u_0,R)$, as
$$\|u_2-u_0\|\leq\|u_2-u_1\|+\|u_1-u_0\|\leq(S+1)\|u_1-u_0\|<\frac{\beta_0\delta_0}{1-S}=R.$$
In a similar way, by using the principle of mathematical induction, we can establish the following recurrence relations for $j=1,2,\dots,N_0$:
(A1) $\|A_j^{-1}\|\leq\dfrac{\beta_0}{1-\beta_0(P+2KR)}$;
(A2) $\|H(u_j)\|\leq\left(P+K(\beta_0\delta_0+2\,\mathrm{Tol})\right)\|u_j-u_{j-1}\|\leq\tilde{M}S^{j-1}\delta_0$;
(A3) $\|u_j-u_{j-1}\|\leq S\|u_{j-1}-u_{j-2}\|\leq S^{j-1}\|u_1-u_0\|<\beta_0\delta_0<R$;
(A4) $\|u_j-u_0\|<\dfrac{\beta_0\delta_0}{1-S}=R$.
To study the corrector phase of (7), we consider $x_0=u_{N_0}\in B(u_0,R)$. Using Lemma 4, we get
$$\|H(u_{N_0})\|\leq\left(P+K(\|u_{N_0}-u_{N_0-1}\|+2\,\mathrm{Tol})\right)\|u_{N_0}-u_{N_0-1}\|\leq\left(P+K(\beta_0\delta_0+2\,\mathrm{Tol})\right)S^{N_0-1}\|u_1-u_0\|\leq\beta_0\left(P+K(\beta_0\delta_0+2\,\mathrm{Tol})\right)S^{N_0-1}\delta_0=\tilde{M}S^{N_0-1}\delta_0, \qquad (22)$$
and, by the hypothesis required of the parameter $N_0$ in (19), we have
$$\|x_0\pm H(x_0)-u_0\|\leq\|x_0-u_0\|+\|H(x_0)\|\leq\|u_{N_0}-u_0\|+\tilde{M}S^{N_0-1}\delta_0<R+\mathrm{Tol},$$
so $B_0=[x_0-H(x_0),\,x_0+H(x_0);H]$ is well defined. Now, we consider
$$\|I-A_0^{-1}B_0\|\leq\|A_0^{-1}\|\,\|B_0-A_0\|\leq\beta_0\left\|[x_0-H(x_0),x_0+H(x_0);H]-[u_0-\mathrm{Tol},u_0+\mathrm{Tol};H]\right\|\leq\beta_0\left(P+2K\left(\|x_0\pm H(x_0)-u_0\|+\mathrm{Tol}\right)\right)\leq\beta_0\left(P+2K(R+\mathrm{Tol})\right)<1.$$
Hence, $B_0^{-1}$ exists and
$$\|B_0^{-1}\|\leq\frac{\beta_0}{1-\beta_0\left(P+2K(R+\mathrm{Tol})\right)}. \qquad (23)$$
Using (22) and (23), we get
$$\|x_1-x_0\|\leq\|B_0^{-1}\|\,\|H(x_0)\|\leq\frac{\beta_0\left(P+K(\beta_0\delta_0+2\,\mathrm{Tol})\right)}{1-\beta_0\left(P+2K(R+\mathrm{Tol})\right)}\|u_{N_0}-u_{N_0-1}\|\leq S\|u_{N_0}-u_{N_0-1}\|\leq S^{N_0}\|u_1-u_0\|<\|u_1-u_0\|<R,$$
and
$$\|x_1-u_0\|\leq\|x_1-x_0\|+\|u_{N_0}-u_{N_0-1}\|+\dots+\|u_1-u_0\|\leq\left(S^{N_0}+S^{N_0-1}+\dots+S+1\right)\|u_1-u_0\|<\frac{\beta_0\delta_0}{1-S}=R.$$
Hence, $x_1\in B(u_0,R)$. Again, using Lemma 4 and condition (ND2), we have
$$\|H(x_1)\|\leq\left\|[x_1,x_0;H]-[x_0-H(x_0),x_0+H(x_0);H]\right\|\,\|x_1-x_0\|\leq\left(P+K\left(\|x_1-x_0\|+2\|H(x_0)\|\right)\right)S^{N_0}\|u_1-u_0\|\leq\beta_0\left(P+K\left(\beta_0\delta_0+2\tilde{M}\delta_0S^{N_0-1}\right)\right)S^{N_0}\delta_0\leq\beta_0\left(P+K(\beta_0\delta_0+2\,\mathrm{Tol})\right)\delta_0S^{N_0}=\tilde{M}\delta_0S^{N_0}, \qquad (24)$$
and then
$$\|x_1\pm H(x_1)-u_0\|\leq\|x_1-u_0\|+\|H(x_1)\|\leq\frac{\beta_0\delta_0}{1-S}+\tilde{M}\delta_0S^{N_0}\leq R+\mathrm{Tol}.$$
Then $B_1$ is well defined; therefore,
$$\|I-A_0^{-1}B_1\|\leq\|A_0^{-1}\|\,\|B_1-A_0\|\leq\beta_0\left(P+2K\left(\|x_1\pm H(x_1)-u_0\|+\mathrm{Tol}\right)\right)\leq\beta_0\left(P+2K(R+\mathrm{Tol})\right)<1.$$
Hence, $B_1^{-1}$ exists and
$$\|B_1^{-1}\|\leq\frac{\beta_0}{1-\beta_0\left(P+2K(R+\mathrm{Tol})\right)}. \qquad (25)$$
Using (24) and (25), we get
$$\|x_2-x_1\|\leq\|B_1^{-1}\|\,\|H(x_1)\|\leq\frac{\beta_0\left(P+K(\beta_0\delta_0+2\,\mathrm{Tol})\right)}{1-\beta_0\left(P+2K(R+\mathrm{Tol})\right)}\|x_1-x_0\|=S\|x_1-x_0\|\leq S^{N_0+1}\|u_1-u_0\|<\beta_0\delta_0,$$
and
$$\|x_2-u_0\|\leq\|x_2-x_1\|+\|x_1-x_0\|+\dots+\|u_1-u_0\|\leq\left(S^{N_0+1}+S^{N_0}+\dots+1\right)\|u_1-u_0\|<\frac{\beta_0\delta_0}{1-S}=R.$$
Hence, $x_2\in B(u_0,R)$.
Using mathematical induction, we can establish the following recurrence relations for $j\geq1$:
(B1) $\|B_j^{-1}\|\leq\dfrac{\beta_0}{1-\beta_0\left(P+2K(R+\mathrm{Tol})\right)}$;
(B2) $\|H(x_j)\|\leq\left(P+K(\beta_0\delta_0+2\,\mathrm{Tol})\right)\|x_j-x_{j-1}\|\leq\tilde{M}\delta_0S^{N_0+j-1}<\tilde{M}\delta_0$;
(B3) $\|x_{j+1}-x_j\|\leq S\|x_j-x_{j-1}\|\leq S^{N_0+j}\|u_1-u_0\|<\beta_0\delta_0<R$;
(B4) $\|x_{j+1}-u_0\|<\dfrac{\beta_0\delta_0}{1-S}=R$.
Now, using $S<1$, for $n\geq0$ we have
$$\|x_{n+j}-x_n\|\leq\sum_{i=1}^{j}\|x_{n+i}-x_{n+i-1}\|\leq S^{N_0}\sum_{i=1}^{j}S^{n+i-1}\|u_1-u_0\|\leq\frac{S^{N_0+n}}{1-S}\|u_1-u_0\|.$$
Hence, $\{x_n\}$ is a Cauchy sequence, which therefore converges to some $x^*$. Since
$$\|H(x_n)\|\leq\left(P+K(\beta_0\delta_0+2\,\mathrm{Tol})\right)\|x_n-x_{n-1}\|,$$
and $\|x_n-x_{n-1}\|\to0$ as $n\to\infty$, we conclude that $H(x^*)=0$ by the continuity of $H$. □
Theorem 6.
Under the conditions of the previous theorem, the solution $x^*$ of the equation $H(x)=0$ is unique in $B(u_0,R)$.
Proof. 
To prove the uniqueness of $x^*$, let $y^*$ be another solution of $H(x)=0$ in $B(u_0,R)$. If $Q=[y^*,x^*;H]$ is invertible, then $y^*=x^*$, since $Q(y^*-x^*)=H(y^*)-H(x^*)=0$. But
$$\|I-A_0^{-1}Q\|\leq\|A_0^{-1}\|\,\|Q-A_0\|\leq\beta_0\left(P+K\left(\|y^*-u_0\|+\|x^*-u_0\|+2\,\mathrm{Tol}\right)\right)\leq\beta_0\left(P+2K(R+\mathrm{Tol})\right)<1.$$
Hence, by the Banach Lemma for inverse operators, $Q^{-1}$ exists. Therefore, $y^*=x^*$. □

3.3. Numerical Experiments

Now, we perform a numerical experiment to show the applicability of the theoretical results obtained previously. We deal with nonlinear integral equations, which are used in a great variety of applied problems in electrostatics, low-frequency electromagnetic problems, electromagnetic scattering problems and the propagation of acoustic and elastic waves ([30,31]). We focus on the nonlinear integral equation of Hammerstein type, expressed as
$$[H(x)](s)=x(s)-w(s)-\int_a^b G(s,t)\,M(t,x(t))\,dt,\quad s\in[a,b], \qquad (29)$$
where $-\infty<a<b<+\infty$, $G$ is the Green's function, $w$ and $M$ are known functions, and $x$ is the solution to be obtained.
We solve the equation $H(x)=0$, where $H:\Omega\subseteq C[a,b]\to C[a,b]$, by transforming the problem into a nonlinear system. First, we approximate the integral by a quadrature formula with weights $q_j$ and nodes $t_j$, $j=1,2,\dots,n$. The discretization of the problem using these nodes gives the following nonlinear system:
$$x_j=w_j+\sum_{i=1}^{n}e_{ji}\,M(t_i,x_i),\quad j=1,2,\dots,n,$$
where
$$e_{ji}=q_iG(t_j,t_i)=\begin{cases}q_i\,\dfrac{(b-t_j)(t_i-a)}{b-a},&i\leq j,\\[1mm]q_i\,\dfrac{(b-t_i)(t_j-a)}{b-a},&i>j.\end{cases}$$
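For illustration, the matrix $E$ can be assembled as follows. This sketch assumes a Gauss–Legendre quadrature with $n=8$ nodes on $[a,b]=[0,1]$; the text does not fix a particular quadrature formula.

```python
import numpy as np

a, b, n = 0.0, 1.0, 8
nodes, weights = np.polynomial.legendre.leggauss(n)
t = 0.5 * (b - a) * nodes + 0.5 * (a + b)   # nodes mapped from [-1, 1] to [a, b]
q = 0.5 * (b - a) * weights                 # corresponding weights
E = np.zeros((n, n))
for j in range(n):
    for i in range(n):
        if i <= j:
            E[j, i] = q[i] * (b - t[j]) * (t[i] - a) / (b - a)
        else:
            E[j, i] = q[i] * (b - t[i]) * (t[j] - a) / (b - a)
```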
We can formulate the system in $\mathbb{R}^n$ by using the following functions and matrices:
$$H(x)\equiv x-w-E\,M(t,x)=0,$$
with
$$x=(x_1,x_2,\dots,x_n)^T,\quad w=(w_1,w_2,\dots,w_n)^T,\quad E=(e_{ji})_{j,i=1}^{n}.$$
To illustrate the theoretical results in both the differentiable and the non-differentiable case, we take $w_j=1/5$ for $j=1,\dots,n$,
$$M(t,x)=\lambda x(t)^3+\sigma|x(t)|,$$
with $\lambda,\sigma\in\mathbb{R}$, and $[a,b]=[0,1]$. Specifically, we solve the nonlinear system
$$H(x)\equiv x-\tfrac{1}{5}-E\left(\lambda\,a_x+\sigma\,b_x\right)=0,\quad H:\mathbb{R}^8\to\mathbb{R}^8, \qquad (31)$$
where $x=(x_1,x_2,\dots,x_8)^T$, $\tfrac{1}{5}=\left(\tfrac15,\tfrac15,\dots,\tfrac15\right)^T$, $a_x=\left(x_1^3,x_2^3,\dots,x_8^3\right)^T$, $b_x=\left(|x_1|,|x_2|,\dots,|x_8|\right)^T$ and $E=(e_{ji})_{j,i=1}^{8}$.
So, we are now in a position to apply the theoretical development in both cases, the differentiable and the non-differentiable one.

3.3.1. H a Differentiable Operator

We consider, in the nonlinear integral equation described above, the values $\lambda=1$ and $\sigma=0$, so that we have a differentiable problem. Moreover, we work in the domain $\Omega=B(0,1)\subset\mathbb{R}^8$ with the infinity norm. In these terms, for the associated operator $H$ it is easy to characterize the Fréchet derivative $H'$, so we have
$$H(x)\equiv x-\tfrac{1}{5}-\lambda E\,a_x,\qquad H'(x)=I-3\lambda E\,\mathrm{diag}\left(x_1^2,\dots,x_8^2\right).$$
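In code, the differentiable system and its Jacobian read as follows; a sketch reusing the matrix E assembled above, with a few Newton steps (4) as a sanity check.

```python
lam = 1.0
H = lambda x: x - 0.2 - lam * (E @ x**3)
Hp = lambda x: np.eye(8) - 3 * lam * E @ np.diag(x**2)   # H'(x)

x = np.full(8, 1/3)          # starting point u0
for _ in range(8):           # Newton's method (4)
    x = x - np.linalg.solve(Hp(x), H(x))
print(np.round(x, 6))        # approaches the solution quoted at the end of this subsection
```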
Then, applying the theoretical results obtained in the previous sections, we take as starting point $u_0=(1/3,1/3,\dots,1/3)$ and different values of $\mathrm{Tol}=(tol,tol,\dots,tol)$. The values of the parameters that appear in the semilocal convergence study are $\beta=1.0435$, $\delta_0=0.1380$, $K=0.75$ and $a_0=0.1127$. Other results, such as $N_0$ and the radii of the domains of existence and uniqueness of the solution, can be found in Table 3. As can be seen in this table, when $tol$ decreases, so does the semilocal convergence radius, while the value of $N_0$ remains similar.
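The $N_0$ column of Table 3 can be reproduced from the rule (12) with the parameters just quoted; a minimal sketch, where $R$ is read from Table 3 for each value of $tol$:

```python
import math

beta, delta0, K = 1.0435, 0.1380, 0.75

def predictor_steps(tol, R):
    b0 = beta * K * tol
    L = 1.0 / (1.0 - b0 - beta * K * R)
    a0 = beta**2 * K * delta0            # = 0.1127, as quoted above
    M = 0.5 * L * (b0 + L * a0)
    if tol >= delta0:
        return 1
    return 1 + math.floor(math.log(tol / delta0) / math.log(M))

for tol, R in [(0.13, 0.253051), (0.05, 0.201863), (0.01, 0.185348), (0.001, 0.182101)]:
    print(tol, predictor_steps(tol, R))  # reproduces the N0 column of Table 3
```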
Finally, we obtain the approximated solution of the nonlinear integral Equation (31) by applying Newton's method (4) and the new predictor–corrector Steffensen-type method (7). We run the corresponding algorithms in Matlab, working in variable precision arithmetic with 100 digits, using as stopping criterion $\|x_{n+1}-x_n\|<10^{-30}$ and with the starting point and values of $tol$ used and obtained in the semilocal convergence study. The results in Table 4 show that the behavior of the new predictor–corrector Steffensen method is as good as that of Newton's method. The approximated solution, rounded to 6 digits, is:
$$\tilde{x}=[0.200080,\ 0.200378,\ 0.200749,\ 0.201001,\ 0.201001,\ 0.200749,\ 0.200378,\ 0.200080].$$

3.3.2. H a Non-Differentiable Operator

If, in (29), we work again in $\Omega=B(0,1)$, considering $m=8$, $\lambda=1$ and $\sigma=1/2$, we obtain the non-differentiable system of nonlinear equations
$$H(x)\equiv x-\tfrac{1}{5}-E\left(a_x+\tfrac{1}{2}b_x\right). \qquad (32)$$
In these terms, we characterize the divided difference operator by using the following formula:
$$[x,y;H]_{ij}=\frac{1}{x_j-y_j}\left(H_i(x_1,\dots,x_j,y_{j+1},\dots,y_m)-H_i(x_1,\dots,x_{j-1},y_j,\dots,y_m)\right),$$
so that
$$[u,v;H]=I-(\lambda C+\sigma D),$$
where $C=(c_{ji})_{j,i=1}^{8}$ with $c_{ji}=0$ if $i\neq j$ and $c_{ii}=e_{ii}\left(u_i^2+v_i^2+u_iv_i\right)$, while $D=(d_{ji})_{j,i=1}^{8}$ with $d_{ji}=0$ if $i\neq j$ and $d_{ii}=e_{ii}\,\dfrac{|u_i|-|v_i|}{u_i-v_i}$. Furthermore, working in the domain $\Omega=B(0,1)$, we have the following bounds:
$$\|[x,y;H]-[u,v;H]\|\leq P+K\left(\|x-u\|+\|y-v\|\right)\quad\text{with}\quad P=2\|E\|\,|\sigma|\quad\text{and}\quad K=3|\lambda|\,\|E\|.$$
Now, taking as starting point $u_0=(1/3,1/3,\dots,1/3)$ and different values of $\mathrm{Tol}=(tol,tol,\dots,tol)$, we have the following bounds for the parameters involved in the semilocal convergence Theorem 5: $\beta_0=1.1163$, $\delta_0=0.1588$, $K=0.375$, $P=0.125$ and $a_0=0.0742$. Other results, such as $N_0$ and the radii of the domains of existence and uniqueness of the solution, can be found in Table 5. We can corroborate a behavior similar to the differentiable case, that is, when $tol$ decreases, so does the semilocal convergence radius.
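As a floating-point sanity check, the non-differentiable system (32) can be run through the predictor_corrector sketch given after (7) in Section 2, reusing the matrix E assembled in Section 3.3; tol = 0.035 and $N_0=2$ are taken from the first row of Table 5.

```python
lam, sig = 1.0, 0.5
H_nd = lambda x: x - 0.2 - E @ (lam * x**3 + sig * np.abs(x))

u0 = np.full(8, 1/3)
x = predictor_corrector(H_nd, u0, tol_param=0.035, N0=2)
print(np.round(x, 6))   # close to the solution quoted below
```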
Finally, we obtain the approximated solution of the nonlinear integral Equation (32) by applying the Center-Steffensen method (3) and the new predictor–corrector method (7). We run the corresponding algorithms in Matlab, working in variable precision arithmetic with 100 digits, using as stopping criterion $\|x_{n+1}-x_n\|<10^{-30}$ and with the starting point and values of $tol$ obtained in the semilocal convergence study. The results in Table 6 show that the behavior of the new predictor–corrector Steffensen method improves on the Center-Steffensen method. The approximated solution, rounded to 6 digits, is:
$$\tilde{x}=[0.201133,\ 0.205354,\ 0.210668,\ 0.214297,\ 0.214297,\ 0.210668,\ 0.205354,\ 0.201133].$$

4. Dynamical Behavior of Predictor–Corrector Method

In this section, we compare the behavior of the predictor–corrector method (7) for the functions $H_1$ and $H_2$ used in the motivation section, for different values of $tol$ and $N_0$. In this case, we are more demanding when computing the attraction basins: in all the cases, a tolerance of $10^{-6}$ and a maximum of 100 iterations are used. If we have not reached the desired tolerance after 100 iterations, we stop and decide that the iterative method starting at $z_0$ does not converge to any zero.
For the differentiable case, as we can see in Figure 6, Figure 7 and Figure 8, by increasing the value of $N_0$ we can achieve accessibility comparable to that of Newton's method. Once the accessibility has been graphically analyzed, showing that method (7) is better than the Steffensen-type methods (see Figure 2), we examine its behavior numerically; for that purpose, we compute the percentage of points that converge. This information is given in Table 7.
Similarly, in the non-differentiable case, as can be seen in Figure 9, Figure 10 and Figure 11, we verify that, by increasing the value of $N_0$, we can achieve accessibility comparable to that presented by Newton's method in the differentiable case.
Once the accessibility has been graphically analyzed, showing that method (7) is better than the Steffensen-type methods (see Figure 4), we examine its behavior numerically; for that purpose, we compute the percentage of points that converge. This information is given in Table 8.

5. Concluding Remarks

Due to the inconvenience of applying Steffensen-type iterative processes in terms of their accessibility, we have built a predictor–corrector iterative process that, while maintaining the efficiency of Steffensen-type methods, improves their accessibility. Thus, it can be used as an efficient alternative to Newton's method when applied to nonlinear systems of non-differentiable equations.

Author Contributions

Investigation, M.A.H.-V., S.Y., Á.A.M., E.M. and S.S.; Writing—original draft, M.A.H.-V., S.Y., Á.A.M., E.M. and S.S.; Writing—review and editing, M.A.H.-V., S.Y., Á.A.M., E.M. and S.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by the project PGC2018-095896-B-C21-C22 of Spanish Ministry of Economy and Competitiveness and by the project of Generalitat Valenciana Prometeo/2016/089.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Regmi, S. Optimized Iterative Methods with Applications in Diverse Disciplines; Nova Science Publisher: New York, NY, USA, 2021.
  2. Argyros, I.K.; Cho, Y.J.; Hilout, S. Numerical Methods for Equations and Its Applications; CRC Press/Taylor and Francis: Boca Raton, FL, USA, 2012.
  3. Argyros, I.K.; Hilout, S. Numerical Methods in Nonlinear Analysis; World Scientific Publishing Co.: Hackensack, NJ, USA, 2013.
  4. Barbashov, B.M.; Nesterenko, V.V.; Chervyakov, A.M. General solutions of nonlinear equations in the geometric theory of the relativistic string. Commun. Math. Phys. 1982, 84, 471–481.
  5. Brugnano, L.; Casulli, V. Iterative Solution of Piecewise Linear Systems. SIAM J. Sci. Comput. 2008, 30, 463–472.
  6. Difonzo, F.V.; Masciopinto, C.; Vurro, M.; Berardi, M. Shooting the Numerical Solution of Moisture Flow Equation with Root Water Uptake Models: A Python Tool. Water Resour. Manag. 2021, 35, 2553–2567.
  7. Soheili, A.R.; Soleymani, F. Iterative methods for nonlinear systems associated with finite difference approach in stochastic differential equations. Numer. Algorithms 2016, 71, 89–102.
  8. Gou, F.; Liu, J.; Liu, W.; Luo, L. A finite difference method for solving nonlinear Volterra integral equation. J. Univ. Chin. Acad. Sci. 2016, 33, 329–333.
  9. Grau-Sánchez, M.; Noguera, M.; Amat, S. On the approximation of derivatives using divided difference operators preserving the local convergence order of iterative methods. J. Comput. Appl. Math. 2013, 237, 363–372.
  10. Argyros, I.K.; George, S. On the complexity of extending the convergence region for Traub's method. J. Complex. 2020, 56, 101423.
  11. Argyros, I.K. On the Secant method. Publ. Math. Debrecen 1993, 43, 223–238.
  12. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. ACM 1973, 21, 643–651.
  13. Amat, S.; Ezquerro, J.A.; Hernández-Verón, M.A. On a Steffensen-like method for solving nonlinear equations. Calcolo 2016, 53, 171–188.
  14. Abbasbandy, S.; Asady, B. Newton's method for solving fuzzy nonlinear equations. Appl. Math. Comput. 2004, 159, 349–356.
  15. Chun, C. Iterative methods improving Newton's method by the decomposition method. Comput. Math. Appl. 2005, 50, 1559–1568.
  16. Galántai, A. The theory of Newton's method. J. Comput. Appl. Math. 2000, 124, 25–44.
  17. Kelley, C.T. Solving Nonlinear Equations with Newton's Method; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2003.
  18. Magreñán, Á.A.; Argyros, I.K. A Contemporary Study of Iterative Methods; Academic Press: Cambridge, MA, USA; Elsevier: Hoboken, NJ, USA, 2018.
  19. Alarcón, V.; Amat, S.; Busquier, S.; López, D.J. A Steffensen's type method in Banach spaces with applications on boundary-value problems. J. Comput. Appl. Math. 2008, 216, 243–250.
  20. Argyros, I.K. A new convergence theorem for Steffensen's method on Banach spaces and applications. Southwest J. Pure Appl. Math. 1997, 1, 23–29.
  21. Ezquerro, J.A.; Hernández, M.A.; Romero, N.; Velasco, A.I. On Steffensen's method on Banach spaces. J. Comput. Appl. Math. 2013, 249, 9–23.
  22. Kneisl, K. Julia sets for the super-Newton method, Cauchy's method, and Halley's method. Chaos 2001, 11, 359–370.
  23. Varona, J.L. Graphic and numerical comparison between iterative methods. Math. Intell. 2002, 24, 37–46.
  24. Wolfram, S. The Mathematica Book, 5th ed.; Wolfram Media/Cambridge University Press: Cambridge, UK, 2003.
  25. Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Pergamon Press: Oxford, UK, 1982.
  26. Balazs, M.; Goldner, G. On existence of divided differences in linear spaces. Rev. Anal. Numer. Theor. Approx. 1973, 2, 3–6.
  27. Hilout, S. Convergence analysis of a family of Steffensen-type methods for generalized equations. J. Math. Anal. Appl. 2008, 329, 753–761.
  28. Moccari, M.; Lotfi, T. On a two-step optimal Steffensen-type method: Relaxed local and semi-local convergence analysis and dynamical stability. J. Math. Anal. Appl. 2018, 468, 240–269.
  29. Hernández, M.A.; Rubio, M.J. A uniparametric family of iterative processes for solving non-differentiable equations. J. Math. Anal. Appl. 2002, 275, 821–834.
  30. Bruns, D.D.; Bailey, J.E. Nonlinear feedback control for operating a nonisothermal CSTR near an unstable steady state. Chem. Eng. Sci. 1977, 32, 257–264.
  31. Wazwaz, A.M. Applications of Integral Equations; Linear and Nonlinear Integral Equations; Springer: Berlin/Heidelberg, Germany, 2011.
Figure 1. Newton's method applied to $H_1(z)=z^3-1$.
Figure 2. Basins of attraction for the polynomial $H_1(z)=z^3-1$.
Figure 3. Basins of attraction for the polynomial $H_1(z)=z^3-1$.
Figure 4. Basins of attraction for the equation $H_2(z)=z(z^2+2|z|-5)=0$.
Figure 5. Basins of attraction for the equation $H_2(z)=z(z^2+2|z|-5)=0$.
Figure 6. Basins of attraction for the polynomial $H_1(z)=z^3-1$.
Figure 7. Basins of attraction for the polynomial $H_1(z)=z^3-1$.
Figure 8. Basins of attraction for the polynomial $H_1(z)=z^3-1$.
Figure 9. Basins of attraction for the equation $H_2(z)=z(z^2+2|z|-5)$.
Figure 10. Basins of attraction for the equation $H_2(z)=z(z^2+2|z|-5)$.
Figure 11. Basins of attraction for the equation $H_2(z)=z(z^2+2|z|-5)$.
Table 1. Percentage of convergence points for $H_1(z)=z^3-1$.

Method                          Percentage of Convergent Points
Newton's method                 100%
Steffensen                      4.35%
Backward-Steffensen             6.17%
Center-Steffensen               9.15%
Method (5) with tol = 0.002     100%
Method (5) with tol = 0.5       100%
Method (5) with tol = 0.75      100%
Table 2. Percentage of convergence points for $H_2(z)=z(z^2+2|z|-5)=0$.

Method                          Percentage of Convergent Points
Steffensen                      5.37%
Backward-Steffensen             5.91%
Center-Steffensen               9.38%
Method (5) with tol = 0.002     100%
Method (5) with tol = 0.5       100%
Method (5) with tol = 0.75      100%
Table 3. Radii of the semilocal convergence balls for different values of tol.

tol       R          N_0
0.13      0.253051   1
0.05      0.201863   1
0.01      0.185348   2
0.001     0.182101   2
Table 4. Numerical results with starting guess $u_0=\left(\frac13,\frac13,\dots,\frac13\right)^T$.

Method               Newton        (7), tol = 0.13   (7), tol = 0.05   (7), tol = 0.01   (7), tol = 0.001
k                    5             6                 6                 5                 5
||x_{n+1} - x_n||    7.46581e-31   1.60297e-61       1.60297e-61       7.52955e-31       7.38325e-31
||H(x_{n+1})||       7.56107e-31   1.58277e-61       1.58277e-61       7.43468e-31       7.47747e-31
Table 5. Radii of the semilocal convergence balls for different values of tol.

tol       R          N_0
0.035     0.307562   2
0.03      0.299809   2
0.02      0.286741   2
0.01      0.275946   3
Table 6. Numerical results with starting guess $x_0=\left(\frac13,\frac13,\dots,\frac13\right)^T$.

Method               (3), α = β = 1   (7), tol = 0.035   (7), tol = 0.03   (7), tol = 0.02   (7), tol = 0.01
k                    6                5                  5                 5                 5
||x_{n+1} - x_n||    1.09662e-61      7.7866e-31         7.7866e-31        7.31874e-31       4.74014e-31
||H(x_{n+1})||       1.02405e-61      7.27131e-31        7.27131e-31       6.83442e-31       4.42645e-31
Table 7. Percentage of convergence points for $H_1(z)=z^3-1$.

Method (7)                      Percentage of Convergent Points
tol = 0.002 and N_0 = 3         69.34%
tol = 0.5 and N_0 = 3           69.51%
tol = 0.75 and N_0 = 3          70.04%
tol = 0.002 and N_0 = 5         93.03%
tol = 0.5 and N_0 = 5           93.27%
tol = 0.75 and N_0 = 5          94.56%
tol = 0.002 and N_0 = 10        97.13%
tol = 0.5 and N_0 = 10          98.89%
tol = 0.75 and N_0 = 10         99.36%
Table 8. Percentage of convergence points for $H_2(z)=z(z^2+2|z|-5)$.

Method (7)                      Percentage of Convergent Points
tol = 0.002 and N_0 = 3         58.44%
tol = 0.5 and N_0 = 3           58.58%
tol = 0.75 and N_0 = 3          58.66%
tol = 0.002 and N_0 = 5         97.60%
tol = 0.5 and N_0 = 5           97.54%
tol = 0.75 and N_0 = 5          97.35%
tol = 0.002 and N_0 = 10        99.58%
tol = 0.5 and N_0 = 10          99.62%
tol = 0.75 and N_0 = 10         99.57%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
