Abstract
A derivative-free method without memory and a derivative-free method with memory are developed for solving equations in Banach spaces. The convergence order of these methods has been established in the scalar case using Taylor expansions and hypotheses on higher-order derivatives which do not appear in the methods. Such hypotheses limit their applicability. That is why, in this paper, their local and semi-local convergence analyses (which have not been given previously) are provided using only the divided differences of order one, which actually appear in these methods. Moreover, we provide computable error distances and results on the uniqueness of the solution, which have not been given before. Since our technique is very general, it can be used to extend the applicability of other methods using linear operators with inverses along the same lines. Numerical experiments are also provided in this article to illustrate the theoretical results.
1. Introduction
Let $F: D \subset B_1 \rightarrow B_2$ be an operator that is differentiable in the Fréchet sense, where $D$ is a nonempty, convex, and open set and $B_1, B_2$ are Banach spaces.
A plethora of problems can be modeled using the equation
$$F(x) = 0. \qquad (1)$$
Equation (1) can be defined on the real line or the complex plane, or it can constitute a system of equations derived from the discretization of a boundary value problem (see also the numerical Section 4 for such examples). Then, to find a solution of Equation (1), we rely mostly on iterative methods. This is the case since solutions in closed form can be obtained only in special cases.
The method of successive substitutions (or Picard's method) and Newton's method [1,2,3,4] have been used extensively to generate sequences approximating a solution of Equation (1). But these are of convergence orders one and two, respectively. Another drawback is the usage of the Fréchet derivative in the case of Newton's method. That is why the secant method was introduced, which avoids the derivative and is of order $\frac{1+\sqrt{5}}{2} \approx 1.618$. Later, the Steffensen and the Kurchatov methods were developed, which are also derivative-free and of convergence order two [5,6,7,8,9]. However, it is important to develop derivative-free methods of order greater than two. Our study contributes in this direction.
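Although the analysis in this paper is set in Banach space, the scalar case conveys the idea. The following is a minimal sketch of the classical Steffensen method (our illustration, not Method (2) below):

```python
def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Steffensen's method: derivative-free, locally quadratically convergent.

    The derivative f'(x) is replaced by the first-order divided difference
    [x, x + f(x); f] = (f(x + f(x)) - f(x)) / f(x).
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dd = (f(x + fx) - fx) / fx  # divided difference [x, x + f(x); f]
        x = x - fx / dd             # Newton-type step, derivative-free
    return x

print(steffensen(lambda t: t**3 - 2.0, 1.2))  # ~ 1.259921 (cube root of 2)
```

Like Newton's method, it is of order two, but no derivative is ever evaluated.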
In particular, iterative methods without memory use only the current iterate, whereas those with memory rely on the current iterate and previous ones [10,11]. The idea behind the latter is to increase the convergence order without additional operator evaluations. Such methods are important since they are derivative-free.
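For contrast, here is a minimal sketch of the secant method, the prototypical method with memory: each step reuses the pair $(x_{n-1}, f(x_{n-1}))$ kept from the previous step, so the order rises to $\frac{1+\sqrt{5}}{2} \approx 1.618$ at the cost of only one new evaluation per step:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant method: a method with memory of the previous iterate."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1) < tol:
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)  # divided-difference step
        x0, f0 = x1, f1                        # memory: keep previous data
        x1, f1 = x2, f(x2)                     # only ONE new evaluation
    return x1

print(secant(lambda t: t**3 - 2.0, 1.0, 1.5))  # ~ 1.259921
```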
In this article, we develop the local and semi-local convergence analysis of two methods: the first is without memory and the second is with memory. The methods are defined, respectively, as:
where $w_m = x_m + F(x_m)$ or $w_m = x_m - F(x_m)$, and $[\cdot, \cdot; F]: D \times D \rightarrow \mathcal{L}(B_1, B_2)$ is a divided difference of order one for the operator $F$ [4,12].
These methods are extensions of Traub's work on Steffensen-like methods [4]. Method (2) is without memory and uses two operator evaluations and one inverse per complete step. Method (3) is with memory; it requires similar calculations but is faster than Method (2). Methods (2) and (3) are also studied in [12] when $B_1 = B_2 = \mathbb{R}$. They are of order four and of even higher order, respectively, in the scalar case, provided that derivatives of up to order five exist [12].
Motivation: The convergence order in [12] is shown using the Taylor series expansion approach, which is based on derivatives of up to order five (which do not appear in these methods), limiting their applicability. As a simple but motivational example:
Let $B_1 = B_2 = \mathbb{R}$ and $D = \left[-\frac{1}{2}, \frac{3}{2}\right]$. Define function $g$ on $D$ with
$$g(t) = \begin{cases} t^3 \log t^2 + t^5 - t^4, & t \neq 0, \\ 0, & t = 0. \end{cases}$$
It is clear that, in this example, the exact solution is $t^* = 1$. Clearly, $g'''$ is not bounded on $D$. Therefore, the local analysis of convergence for these methods is not guaranteed by the analysis in [4,12]. However, the methods may converge (see the numerical Section 4).
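Assuming the motivational example above (our reconstruction of it), a quick numerical check confirms both claims: the third derivative blows up near the origin, yet a derivative-free Steffensen-type iteration still converges to $t^* = 1$:

```python
import math

def g(t):
    return t**3 * math.log(t**2) + t**5 - t**4 if t != 0.0 else 0.0

def g3(t):
    # third derivative: 6 ln(t^2) + 60 t^2 - 24 t + 22, unbounded as t -> 0
    return 6.0 * math.log(t**2) + 60.0 * t**2 - 24.0 * t + 22.0

for t in (1e-2, 1e-4, 1e-6):
    print(f"g'''({t:g}) = {g3(t):.1f}")  # tends to -infinity

# A Steffensen iteration nevertheless converges to the root t* = 1:
x = 1.1
for _ in range(12):
    fx = g(x)
    if abs(fx) < 1e-14:
        break
    x -= fx**2 / (g(x + fx) - fx)
print("root:", x)
```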
Other concerns are the lack of upper error estimates on $\|x_m - x^*\|$ and of results on the location and uniqueness of the solution $x^*$. These concerns constitute our motivation for writing this article. The same limitations appear in the study of other methods [4,5,6,7,8,9,10,11,12,13,14,15,16,17]. Our approach is applicable to those methods along the same lines.
Novelty: We find a computable convergence radius and error estimates relying only on the divided differences of order one, which actually appear in these methods, and on generalized conditions on them. That is how we extend the utilization of these methods. Notice that local convergence results for iterative methods are significant, since they reveal how difficult it is to pick a starting point $x_0$. Because our idea is so general, it can be used analogously with other methods. Moreover, the more important and difficult semi-local convergence analysis (not presented in [12]) is also developed in this paper.
2. Local Analysis
We first develop the ball convergence analysis of Method (2) using real parameters and scalar functions. Let $T = [0, \infty)$ and $a \geq 0$.
Suppose:
- (i)
- has a minimal zero (MZ) for some nondecreasing and continuous (NDC) function. Let
- (ii)
- has an MZ where is NDC and is defined by
- (iii)
- have an MZ, respectively. Let and
- (iv)
- has an MZ for some NDC functions and , defined by
The parameter
$$d = \min\{d_i\} \qquad (4)$$
is shown in Theorem 1 to be a convergence radius for Method (2). Set $T_1 = [0, d)$. The definition of the parameter $d$ implies that the estimates
$$0 \leq h_i(t) < 1 \qquad (5)$$
are valid for all $t \in T_1$.
By $\overline{B}(x, \rho)$, we denote the closure of the open ball $B(x, \rho)$ with center $x \in B_1$ and radius $\rho > 0$.
The following conditions are needed.
Suppose:
- (h1)
- There exists an invertible operator $L$ so that $\|L^{-1}([x, y; F] - L)\| \leq h_0(\|x - x^*\|, \|y - x^*\|)$ for each $x, y \in D$. Set $D_0 = D \cap B(x^*, d_0)$.
- (h2)
- and for each
- (h3)
- for and to be given later, and
- (h4)
- There exists $d_3 \geq d$ satisfying $h_0(d_3, 0) < 1$ or $h_0(0, d_3) < 1$. Let $\Omega = D_0 \cap \overline{B}(x^*, d_3)$.
The local analysis of Method (2) uses the conditions (h1)-(h4), collectively denoted by (H), and is given in:
Theorem 1.
Under the conditions (H), pick $x_0 \in B(x^*, d) \setminus \{x^*\}$. Then, the sequence $\{x_m\}$ generated by Method (2) is convergent to $x^*$. Moreover, this limit $x^*$ is the only zero of $F$ in the set $\Omega$ given in (h4).
Proof.
The following assertions shall be shown using induction on $m$:
and
with radius $d$ as defined in (4) and the functions $h_i$ as given previously. We have
Using (4), (5), (h1), and (h3), we obtain
which, together with the lemma on inverses of linear operators due to Banach [8], implies that the linear operator is invertible and
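For reference, the Banach lemma invoked at this step can be stated as follows:

```latex
% Banach perturbation lemma: a linear operator close enough to the
% identity (in norm) is invertible, with a computable inverse bound.
\[
  \|I - M\| \le c < 1
  \;\Longrightarrow\;
  M \text{ is invertible and } \ \|M^{-1}\| \le \frac{1}{1 - c}.
\]
```

Here, the role of $M$ is essentially played by $L^{-1}$ applied to the divided-difference operator of the method, with the bound $c < 1$ supplied by the estimates in (5).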
Notice that the iterate $y_0$ exists by the first substep of Method (2), from which we can also have
By (4), (6) (for ), (h2), (h3), (10), and (11), we obtain
showing (7) for $m = 0$ and that the iterate $y_0 \in B(x^*, d)$. Notice also that the iterate $x_1$ exists by the second substep of Method (2), from which we can also have
Then, in view of (4), (6) (for ), (10), (12), and (13), we obtain
showing (8) for $m = 0$ and that the iterate $x_1 \in B(x^*, d)$. Simply replace $x_0, y_0, x_1$ by $x_m, y_m, x_{m+1}$ in the previous calculations to complete the induction for (7) and (8). Then, from the estimation
where $c \in [0, 1)$, we conclude that $\lim_{m \rightarrow \infty} x_m = x^*$ and $x_m \in B(x^*, d)$ for all $m$. Let $F(q) = 0$ for some $q \in \Omega$ with $q \neq x^*$, and set $Q = [q, x^*; F]$. By (h1) and (h4), we obtain
$$\|L^{-1}(Q - L)\| \leq h_0(\|q - x^*\|, 0) \leq h_0(d_3, 0) < 1.$$
Therefore, $q = x^*$ follows by the invertibility of $Q$ and the identity $Q(q - x^*) = F(q) - F(x^*) = 0$. □
Remark 1.
- (a)
- We can compute the computational order of convergence (COC), defined by
$$\xi = \frac{\ln\left(\|x_{m+1} - x^*\| / \|x_m - x^*\|\right)}{\ln\left(\|x_m - x^*\| / \|x_{m-1} - x^*\|\right)},$$
or the approximate computational order of convergence (ACOC)
$$\xi_1 = \frac{\ln\left(\|x_{m+1} - x_m\| / \|x_m - x_{m-1}\|\right)}{\ln\left(\|x_m - x_{m-1}\| / \|x_{m-1} - x_{m-2}\|\right)}$$
(see the sketch after this remark).
- (b)
- The choice satisfies the conditions and required to show the fourth convergence order of Method (2). Next, we show how to choose the function in this case. Notice that we have, so
- (c)
- The usual choice is $L = F'(x^*)$ [8]. But this implies that the operator $F$ is differentiable at $x^*$ and that the solution $x^*$ is simple. This makes it unattractive for solving non-differentiable equations. However, if $L$ is chosen to be different from $F'(x^*)$, then one can also solve non-differentiable equations.
- (d)
- The parameter $a$ can be replaced by a real function as follows. Thus, we can set
where $a$ is a non-decreasing function defined in. Then, it can replace $a$ in the preceding results.
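The COC and ACOC defined in item (a) above are straightforward to compute from stored iterates; here is a minimal sketch (the fourth-order error data below are synthetic, for illustration only):

```python
import math

def coc(errs):
    """Computational order of convergence from error norms
    e_m = ||x_m - x*||: xi = ln(e_{m+1}/e_m) / ln(e_m/e_{m-1})."""
    e0, e1, e2 = errs[-3:]
    return math.log(e2 / e1) / math.log(e1 / e0)

def acoc(xs):
    """Approximate COC when x* is unknown: uses increments
    |x_{m+1} - x_m| in place of the true errors (needs >= 4 iterates)."""
    s = [abs(b - a) for a, b in zip(xs, xs[1:])]
    return math.log(s[-1] / s[-2]) / math.log(s[-2] / s[-3])

# Errors from a fourth-order method shrink roughly like e -> C e^4:
errs = [1e-1, 1e-4, 1e-16]
print(coc(errs))  # ~ 4.0
```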
Next, we develop the ball convergence analysis of Method (3) in an analogous way. But this time, the “$\bar{h}$” functions are defined as
and
and
and the least zeros of the functions, respectively, exist.
Hence, we arrive at the corresponding local convergence result for Method (3).
Theorem 2.
Under the conditions (H), the conclusions of Theorem 1 hold for Method (3), with $d$ and $h_i$ replaced by $\bar{d}$ and $\bar{h}_i$, respectively.
3. Semi-Local Analysis
The analysis in this case uses a majorant sequence [1,2,3,8].
Assume the following:
- (e1)
- There exist continuous and nondecreasing functions so that the equation has a smallest positive solution, denoted as $s_0$. Set $T_2 = [0, s_0)$.
- (e2)
- There exists a continuous and nondecreasing function $f$. Define the sequence $\{t_m\}$ for some $t_0 \geq 0$ and each $m$ by
and
A convergence criterion for this sequence is:
- (e3)
- There exists $s \in [0, s_0)$ such that, for each $m$, and. It follows by the definition of the sequence and this condition that $t_m \leq t_{m+1} \leq s$, and there exists $t^* \in [0, s]$ such that $\lim_{m \rightarrow \infty} t_m = t^*$. These scalar functions are connected to the operators of the method.
- (e4)
- There exists an invertible operator $L$ so that, for each and some, and for. Set
- (e5)
- For and each, and
- (e6)
It follows by (e1) and (e4) that the linear operator is invertible. That is why we can set
The motivational calculations for the majorant sequence follow in turn by induction:
and
Thus, the iterates remain in $\overline{B}(x_0, t^*)$, and the sequence $\{x_m\}$ is Cauchy in the Banach space $B_1$ and, as such, convergent to some $x^* \in \overline{B}(x_0, t^*)$ (since $\overline{B}(x_0, t^*)$ is a closed set). By letting $m \rightarrow \infty$, we deduce $F(x^*) = 0$.
Thus, we arrive at the semi-local convergence result for Method (2).
Theorem 3.
Assume the conditions (e1)-(e6) hold. Then, the sequence $\{x_m\}$ generated by Method (2) is well-defined, remains in $\overline{B}(x_0, t^*)$, and is convergent to a solution $x^* \in \overline{B}(x_0, t^*)$ of the equation $F(x) = 0$.
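The recursion defining $\{t_m\}$ in (e2) is specific to Method (2), but the logic of a criterion like (e3) can be illustrated with the classical Newton-Kantorovich majorant sequence (a generic stand-in, not the paper's recursion; $L$ and $\eta$ are the usual Lipschitz constant and first-step length):

```python
def majorant(L, eta, n=25):
    """Generic Kantorovich-type majorant sequence (illustration only):
        t_0 = 0, t_1 = eta,
        t_{m+1} = t_m + L (t_m - t_{m-1})^2 / (2 (1 - L t_m)).
    If 2*L*eta <= 1, the sequence is nondecreasing and bounded above,
    hence convergent; its limit t* majorizes ||x_m - x_0||."""
    t = [0.0, eta]
    for _ in range(n):
        step = L * (t[-1] - t[-2]) ** 2 / (2.0 * (1.0 - L * t[-1]))
        t.append(t[-1] + step)
    return t

seq = majorant(L=1.0, eta=0.4)  # 2*L*eta = 0.8 <= 1: convergent
print(seq[-1])                  # ~ (1 - sqrt(1 - 2*L*eta)) / L = 0.5528
```

Verifying such a scalar criterion before running the method is exactly how a semi-local result tells the user, in advance, whether a given starting point is admissible.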
The uniqueness of the solution is discussed next.
Proposition 1.
Assume the following:
- (i)
- There exists a solution $q \in B(x_0, r)$ of the equation $F(x) = 0$ for some $r > 0$.
- (ii)
- The first condition in (e4) holds in the ball $B(x_0, r)$.
- (iii)
- There exists $r_1 \geq r$ so that
Set $D_2 = D \cap \overline{B}(x_0, r_1)$. Then, the equation $F(x) = 0$ is uniquely solvable by $q$ in the domain $D_2$.
Proof.
Let $q_1 \in D_2$ with $F(q_1) = 0$. Then, the divided difference $E = [q_1, q; F]$ is well-defined, and we have the estimate
It follows that the linear operator E is invertible. Then, we can write
Thus, we deduce $q_1 = q$. □
Remark 2.
- (i)
- The limit point $t^*$ can be switched with $s$ in the condition (e6).
- (ii)
- Under all the conditions of Theorem 3, we can take $q = x^*$ and $r = t^*$.
- (iii)
- As in the local case, a choice for the real function $f$ can be provided, motivated by the calculation:
Thus, we can take
The semi-local analysis of convergence for Method (3) follows along the same lines.
4. Numerical Examples
In the first example, we use the standard and popular divided difference [1,4,12]
$$[x, y; F]_{i,j} = \frac{F_i(x_1, \ldots, x_j, y_{j+1}, \ldots, y_k) - F_i(x_1, \ldots, x_{j-1}, y_j, \ldots, y_k)}{x_j - y_j}, \quad 1 \leq i, j \leq k,$$
and the operator $L$ as in Remark 1.
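A direct implementation of this divided difference (a sketch; the helper `divided_difference` and the test map are ours) verifies the secant equation $[x, y; F](x - y) = F(x) - F(y)$:

```python
import numpy as np

def divided_difference(F, x, y):
    """Standard first-order divided difference [x, y; F] for F: R^k -> R^k.

    Column j mixes coordinates of x and y:
    A[i, j] = ( F_i(x_1..x_j,     y_{j+1}..y_k)
              - F_i(x_1..x_{j-1}, y_j..y_k) ) / (x_j - y_j).
    """
    k = len(x)
    A = np.empty((k, k))
    for j in range(k):
        zu = np.concatenate((x[: j + 1], y[j + 1:]))  # x up to j, y after
        zl = np.concatenate((x[:j], y[j:]))           # x up to j-1, y after
        A[:, j] = (F(zu) - F(zl)) / (x[j] - y[j])
    return A

# Sanity check of the secant equation on a smooth map:
F = lambda v: np.array([v[0] ** 2 + v[1], np.sin(v[0]) + v[1] ** 3])
x, y = np.array([1.0, 2.0]), np.array([0.5, 1.0])
A = divided_difference(F, x, y)
print(np.allclose(A @ (x - y), F(x) - F(y)))  # True
```

The columns telescope across mixed arguments, which is what produces the secant equation; that identity is the property the convergence analysis actually uses.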
The first three examples validate our local convergence analysis results.
Example 1.
Consider the kinematic system
$$F_1'(x) = e^x, \quad F_2'(y) = (e - 1)y + 1, \quad F_3'(z) = 1,$$
with $F_1(0) = F_2(0) = F_3(0) = 0$. Let $F = (F_1, F_2, F_3)$, $B_1 = B_2 = \mathbb{R}^3$, $D = \overline{B}(0, 1)$, and $x^* = (0, 0, 0)^T$. Define function $F$ on $D$ for $v = (v_1, v_2, v_3)^T$ by
$$F(v) = \left(e^{v_1} - 1, \ \frac{e - 1}{2} v_2^2 + v_2, \ v_3\right)^T.$$
Then, we obtain
$$F'(v) = \begin{pmatrix} e^{v_1} & 0 & 0 \\ 0 & (e - 1)v_2 + 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
The conditions (H) are validated if we choose and and. Then, by using (i)-(iv) and solving the scalar equations, we deduce that the radii are:
Therefore, Method (3) provides the largest radius for the example. Consequently, we conclude and
The iterates are given in Table 1.
Example 2.
Consider $B_1 = B_2 = C[0, 1]$, $D = \overline{B}(0, 1)$, and $F: D \rightarrow B_2$, given as
$$F(\phi)(x) = \phi(x) - 5 \int_0^1 x \theta \phi(\theta)^3 \, d\theta.$$
We have that
$$F'(\phi(\xi))(x) = \xi(x) - 15 \int_0^1 x \theta \phi(\theta)^2 \xi(\theta) \, d\theta \quad \text{for each } \xi \in D.$$
Then, we find that $x^* = 0$. Hence, the conditions (H) are validated, and the radii are:
Hence, we conclude and
Example 3.
For the academic example in the introduction, we compute the corresponding functions and parameters. Then, the radii are:
Hence, we conclude and
Concerning the semi-local case and the application of the methods, we provide two more examples. The first involves non-differentiable mappings.
Example 4.
Let $B_1 = B_2 = \mathbb{R}^2$. The nonlinear and non-differentiable system to be solved is
$$3x^2 y + y^2 - 1 + |x - 1| = 0,$$
$$x^4 + x y^3 - 1 + |y| = 0.$$
The system can also be described as
where
The system becomes Then, as which is a real matrix for and by
and
Otherwise, set
Notice that these matrices constitute standard divided differences [9,10,11,17]. Let us choose and to be the starters for scheme (2). Then, the solution of the system is $x^* \doteq (0.894655, 0.327827)^T$.
The solution is obtained after four iterations for both methods.
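The following sketch reproduces this kind of computation with a secant-type iteration (a simplified stand-in for schemes (2) and (3), using the non-differentiable system as reconstructed above):

```python
import numpy as np

def F(v):
    # Non-differentiable test system (Example 4, as reconstructed above):
    #   3 x^2 y + y^2 - 1 + |x - 1| = 0
    #   x^4 + x y^3 - 1 + |y|      = 0
    x, y = v
    return np.array([3 * x**2 * y + y**2 - 1 + abs(x - 1),
                     x**4 + x * y**3 - 1 + abs(y)])

def dd(x, y):
    # Entrywise standard divided difference [x, y; F] from Section 4.
    k = len(x)
    A = np.empty((k, k))
    for j in range(k):
        zu = np.concatenate((x[: j + 1], y[j + 1:]))
        zl = np.concatenate((x[:j], y[j:]))
        A[:, j] = (F(zu) - F(zl)) / (x[j] - y[j])
    return A

# Secant-type iteration: x_{m+1} = x_m - [x_{m-1}, x_m; F]^{-1} F(x_m).
xm, xn = np.array([1.0, 0.0]), np.array([0.9, 0.4])
for _ in range(15):
    if np.linalg.norm(F(xn)) < 1e-13:
        break
    xm, xn = xn, xn - np.linalg.solve(dd(xm, xn), F(xn))
print(xn)  # ~ (0.894655, 0.327827)
```

Because the Jacobian is replaced by divided differences throughout, the non-differentiable terms $|x - 1|$ and $|y|$ cause no difficulty, which is precisely the point of choosing $L \neq F'(x^*)$ in Remark 1.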
5. Conclusions
There are drawbacks when Taylor series expansions are used to establish the convergence order of iterative methods. Some of these are: (a) derivatives of high order, which do not appear in the methods, must exist; (b) computable estimates of $\|x_m - x^*\|$ are not given; and (c) results on the uniqueness of the solution are not given. These drawbacks create problems, such as not knowing how to pick initial points or how many iterates are needed to achieve a pre-decided error tolerance. The technique developed in this paper is so general that it can be applied to extend the applicability of other methods along the same lines [1,2,3,4,5,6,7,8,9,12,13,14,15,16,17]. In particular, we addressed problems (a)-(c) using generalized conditions only on the first derivative and the divided differences of order one. Notice that only divided differences of order one appear in these methods. Hence, we extended the applicability of these methods in the more general setting of Banach space-valued equations. Numerical experiments in which the convergence criteria are tested complete this paper. The idea of this paper shall be used in future work to extend the applicability of similar methods [5,12,13,14,15,16].
Author Contributions
Conceptualization, S.G., I.K.A. and S.R.; methodology, S.G., I.K.A. and S.R.; software, S.G., I.K.A. and S.R.; validation, S.G., I.K.A. and S.R.; formal analysis, S.G., I.K.A. and S.R.; investigation, S.G., I.K.A. and S.R.; resources, S.G., I.K.A. and S.R.; data curation, S.G., I.K.A. and S.R.; writing—original draft preparation, S.G., I.K.A. and S.R.; writing—review and editing, S.G., I.K.A. and S.R.; visualization, S.G., I.K.A. and S.R.; supervision, S.G., I.K.A. and S.R.; project administration, S.G., I.K.A. and S.R. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Data is contained within the article.
Conflicts of Interest
The authors declare that there are no conflicts of interest.
Correction Statement
This article has been republished with a minor correction to the Data Availability Statement. This change does not affect the scientific content of the article.
References
- Argyros, I.K.; George, S.; Magreñán, A.A. Local convergence for multi-point-parametric Chebyshev-Halley-type method of higher convergence order. J. Comput. Appl. Math. 2015, 282, 215–224. [Google Scholar] [CrossRef]
- Argyros, I.K.; Magreñán, A.A. A study on the local convergence and the dynamics of Chebyshev-Halley-type methods free from second derivative. Numer. Algorithms 2015, 71, 1–23. [Google Scholar] [CrossRef]
- Argyros, I.K.; George, S. On the complexity of extending the convergence region for Traub’s method. J. Complex. 2020, 56, 101423. [Google Scholar] [CrossRef]
- Traub, J.F. Iterative Methods for the Solution of Equations; Prentice Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
- Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651. [Google Scholar] [CrossRef]
- Magreñán, A.A. A new tool to study real dynamics: The convergence plane. Appl. Math. Comput. 2014, 248, 215–224. [Google Scholar] [CrossRef]
- Neta, B. A new family of high order methods for solving equations. Int. J. Comput. Math. 1983, 14, 191–195. [Google Scholar] [CrossRef]
- Ortega, J.M.; Rheinboldt, W.G. Iterative Solutions of Nonlinear Equations in Several Variables; SIAM: New York, NY, USA, 1970. [Google Scholar]
- Steffensen, J.F. Remarks on iteration. Skand. Aktuarietidskr. 1933, 16, 64–72. [Google Scholar] [CrossRef]
- Shakhno, S.M.; Gnatyshyn, O.P. On an iterative method of order 1.839... for solving nonlinear least squares problems. Appl. Math. Comput. 2005, 161, 253–264. [Google Scholar]
- Shakhno, S.M.; Iakymchuk, R.P.; Yarmola, H.P. Convergence analysis of a two step scheme for the nonlinear squares problem with decomposition of operator. J. Numer. Appl. Math. 2018, 128, 82–95. [Google Scholar]
- Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Dynamics of iterative families with memory based on weight functions procedure. J. Comput. Appl. Math. 2019, 354, 286–298. [Google Scholar] [CrossRef]
- Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Drawing dynamical and parameters planes of iterative families and methods. Sci. World J. 2013, 2013, 780153. [Google Scholar] [CrossRef] [PubMed]
- Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
- Dzunic, J.; Petkovic, M.S. On generalized biparametric multi point root finding methods with memory. J. Comput. Appl. Math. 2014, 255, 362–375. [Google Scholar] [CrossRef]
- Petkovic, M.S.; Dzunic, J.; Petkovic, L.D. A family of two-point methods with memory for solving nonlinear equations. Appl. Anal. Discret. Math. 2011, 5, 298–317. [Google Scholar] [CrossRef]
- Sharma, J.R.; Gupta, P. On some highly efficient derivative free methods with and without memory for solving nonlinear equations. Int. J. Comput. Methods 2015, 12, 1–28. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).