Abstract
In this study, we present the local convergence analysis of three iterative schemes for solving systems of nonlinear equations. In earlier results, such as those of Amiri et al. (see also the works by Behl et al., Argyros et al., Chicharro et al., Cordero et al., Geum et al., Gutiérrez, Sharma, Weerakoon and Fernando, Awawdeh), the authors used hypotheses on high order derivatives that do not appear in these iterative procedures. Therefore, these methods have a restricted area of applicability. The main difference between our study and earlier ones is that our convergence analysis uses only the first order derivative, which is the only derivative appearing in the proposed iterative procedures. Moreover, the aforementioned studies provide no computable error distances or uniqueness results; we address these problems too. In addition, by working in Banach spaces, the applicability of the iterative procedures is extended even further. We examine the convergence criteria on several real life problems along with a counterexample that completes this study.
MSC:
65G99; 65H10
1. Introduction
The most common and difficult problem in the field of computational mathematics is to obtain the solutions of
where is a Fréchet-differentiable operator, and are Banach spaces, and is a non-empty convex set. It is hard to obtain an exact solution in closed analytic form for such problems; in simple words, this is almost never possible. This is one of the main reasons why we approximate the solution, up to any specified degree of accuracy, by means of an iterative procedure.
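For orientation only, and not as one of the schemes studied below, a basic Newton-type iteration for a system F(x) = 0 can be sketched as follows; the 2 × 2 test system, the tolerance and the iteration cap are illustrative assumptions.

```python
import numpy as np

def newton(F, J, x0, tol=1e-12, max_iter=50):
    """Classical Newton iteration x_{k+1} = x_k - J(x_k)^{-1} F(x_k)."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        step = np.linalg.solve(J(x), F(x))  # solve the linear Newton system
        x = x - step
        if np.linalg.norm(step) < tol:      # stop once the correction is tiny
            return x, k + 1
    return x, max_iter

# Illustrative 2x2 system: x1^2 + x2^2 - 1 = 0 and x1 - x2 = 0.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
root, iterations = newton(F, J, x0=[1.0, 0.5])
print(root, iterations)  # converges to (1/sqrt(2), 1/sqrt(2))
```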
Therefore, researchers have been putting great effort into developing new iterative methods over the past few decades. In addition, the accuracy of a computed solution depends on several factors, among them the choice of iterative method, the initial approximation(s), the structure of the considered problem, and the software used, such as Maple, Fortran, MATLAB, Mathematica, and so forth. Further, users of these iterative schemes face several issues, some of which include: the choice of starting point, the derivative being zero near the root (or an analogous issue for derivative-free multi-point schemes), difficulty near the initial point, slow convergence, divergence, convergence to an undesired solution, oscillation, and failure of the iterative method (for further information, please see [1,2,3,4,5]).
We study the local convergence of Banach space valued iterative procedures of orders eight, eight and seven, defined for each , respectively, by
and
with a Fréchet-differentiable operator, and Banach spaces, a non-empty, convex and open domain, an initial guess, , and a standard divided difference of order one [6]. Notice that by , we mean , which exists as a composition of two linear operators. The following concerns arise for Reference [7] (the same is true for the studies mentioned in the papers [8,9,10,11,12,13,14,15,16,17,18,19,20]):
- (1)
- These procedures were studied in [7] for the special case when , by using Taylor series and hypotheses on derivatives up to order nine (which do not appear in these iterative procedures). These hypotheses limit the applicability of the iterative procedures. Let us consider a motivational example: we take the following function H on , for which we obtain
- (2)
- No computable error bounds are provided. Hence, we do not know in advance how many iterates should be computed to achieve some prescribed error tolerance.
- (3)
- Uniqueness results are not given in [7]. Here, denotes a solution of Equation (1).
In this paper, we address all of the problems (1)–(3) using only the first derivative, which is the only derivative that appears in these iterative procedures. Hence, we extend the applicability of these procedures to the more general setting of Banach spaces. Moreover, because of its generality, our approach can extend the usage of other methods [8,9,10,11,12,13,14,15,16,17,18,19,21,22,23,24,25] in the same way.
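The divided difference of order one appearing in the schemes above [6] admits several realizations; one common componentwise choice is sketched below. This is a generic illustration, not necessarily the exact operator used in [7] or in this paper, and it assumes x[j] ≠ y[j] for every coordinate.

```python
import numpy as np

def divided_difference(F, x, y):
    """Componentwise first-order divided difference [x, y; F].

    Column j is built from two points that differ only in coordinate j,
    so the secant property [x, y; F](x - y) = F(x) - F(y) holds by
    telescoping. Assumes x[j] != y[j] for every j.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = x.size
    D = np.zeros((n, n))
    for j in range(n):
        z_hi = np.concatenate((x[:j + 1], y[j + 1:]))  # x up to coord j, y after
        z_lo = np.concatenate((x[:j], y[j:]))          # x up to coord j-1, y after
        D[:, j] = (np.asarray(F(z_hi)) - np.asarray(F(z_lo))) / (x[j] - y[j])
    return D
```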
2. Local Convergence
First of all, we study iterative procedure (2). Let be a continuous and increasing function. Assume:
(i) Equation
has a minimal positive solution .
Set . Assume the next function to be continuous and increasing. Define a function on in the following way:
(ii) Equation
has a minimal solution .
(iii) Equation
has a minimal positive solution . Set .
Consider the next function to be continuous and increasing, where . Define a function on in the following way:
(iv) Equation
has a minimal solution .
(v) We assume that equation
has a minimal positive solution and .
Define another function on by:
(vi) Equation
has a minimal solution .
A radius of convergence r shall be shown to be
Notice that
and
for all .
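Each of the radii above is the smallest positive solution of a scalar equation built from the majorant functions, and r is typically the smallest of these individual radii. A minimal numerical sketch of this computation is given below; the three placeholder functions are hypothetical stand-ins for illustration only, not the functions defined in this section.

```python
import numpy as np
from scipy.optimize import brentq

def smallest_positive_root(g, upper=10.0, samples=100_000):
    """Smallest positive zero of g on (0, upper], located by a sign-change
    scan and refined with Brent's method."""
    ts = np.linspace(1e-12, upper, samples)
    vals = np.array([g(t) for t in ts])
    for i in range(samples - 1):
        if vals[i] == 0.0:
            return ts[i]
        if vals[i] * vals[i + 1] < 0.0:
            return brentq(g, ts[i], ts[i + 1])
    raise ValueError("no sign change found; enlarge the search interval")

# Placeholder majorant equations g_i(t) = 0 (hypothetical, for illustration only).
g = [lambda t: 2.0 * t - 1.0,
     lambda t: 3.0 * t**2 + t - 1.0,
     lambda t: 4.0 * t - 1.0]

r = min(smallest_positive_root(gi) for gi in g)  # radius of convergence r
print(r)
```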
Let stand for the closure of a ball with center and radius . The conditions are used in the local convergence analysis of iterative procedure (2), provided that the functions are as given previously. Assume:
- (B1)
- is Fréchet-differentiable and there exists such that
- (B2)
- For all . Set .
- (B3)
- For all . Set .
- (B4)
- For all
- (B5)
- , exists and is defined later.
- (B6)
- There exists such that . Set .
Next, we develop the analysis of iterative procedure (2) using the preceding notation and conditions .
Theorem 1.
Under the conditions for , suppose further that . Then, the sequence generated by iterative scheme (2) is well defined, remains in for all and converges to . Moreover, the following assertions hold
and
where the functions are given previously and r is defined by (9). Furthermore, is the only solution of the equation in the region given by .
Proof.
The sequence shall be shown, by mathematical induction, to be well defined, to remain in and to converge to . In order to achieve this, we shall also show estimates (14)–(16). Let us assume that . Using , (8) and (9), we have
The Banach perturbation lemma on invertible operators [6], together with estimate (16), ensures the existence of
so
and
The induction for assertions (14)–(16) is completed by simply substituting and by and , respectively, in the preceding calculations. It then follows from the estimate
where , that . Finally, set with . Then, by hypotheses and , we obtain
so the conclusion is implied by the existence of the inverse of this operator and the estimate above.
□
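For the reader's convenience, the uniqueness step at the end of the proof follows the standard pattern. A sketch reads as follows, where Q is an auxiliary operator and φ₀ denotes the majorant function of (B2); the symbols used here are indicative only, since the displayed formulas of the conditions are not repeated.

```latex
\[
Q=\int_0^1 F'\!\bigl(x^{*}+\theta\,(x^{**}-x^{*})\bigr)\,d\theta ,\qquad
\bigl\|F'(x^{*})^{-1}\bigl(Q-F'(x^{*})\bigr)\bigr\|
   \le \int_0^1 \varphi_0\!\bigl(\theta\,\|x^{**}-x^{*}\|\bigr)\,d\theta<1 ,
\]
so $Q^{-1}$ exists by the Banach lemma, and
$0=F(x^{**})-F(x^{*})=Q\,(x^{**}-x^{*})$ forces $x^{**}=x^{*}$.
```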
Secondly, we study iterative procedure (3) in an analogous way. There will be no change in the function . However, we must re-define the functions and in the following way with :
and
respectively.
Define the radius of convergence corresponding to method (3) similarly by
Then, we arrive at the following theorem with these changes:
Theorem 2.
Under the conditions for , suppose further that . Then, the sequence generated by iterative scheme (3) is well defined, remains in for all and converges to . Moreover, the following assertions hold
and
where the functions are given previously. Furthermore, is the only solution of the equation in the region given by .
Proof.
By simply repeating the proof of Theorem 1 but using iterative procedure (3) instead of method (2), we get the estimates
and
The proof of uniqueness of the solution is given in Theorem 1. □
Next, in order to study the local convergence of iterative procedure (4), we add a condition to as follows:
- (B′)
- For some functions continuous and increasing, we have
Again, there is no change in the function . However, we have to re-define the functions and in the following way for :
and
We define the radius of convergence for method (4) in the following way:
where is the smallest positive solution of the equation
With these new functions, we arrive at the following theorem:
Theorem 3.
Under the conditions for , suppose further that . Then, the sequence generated by iterative scheme (4) is well defined, remains in for all and converges to . Moreover, the following assertions hold
and
where the functions are given previously. Furthermore, is the only solution of the equation in the region given by .
3. Numerical Examples
Here, we present computational results based on the theoretical results suggested in this paper. We also compare the results of iterative procedures (2)–(4) on the basis of their radii of convergence. By the preceding definition of , we choose
for method (4). In this way, hypothesis is satisfied. We use . We choose a balanced mixture of standard and applied science problems for the computational results, which are illustrated in Examples 1–5. The results are listed in Table 1, Table 2, Table 3, Table 4 and Table 5. Additionally, we obtain the approximated computational order of convergence by means of
or [19] by:
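Both order estimates can be evaluated directly from the computed iterates. The sketch below assumes the two standard formulas, one using the known solution (as in [19]) and one, the approximated computational order of convergence, using only successive corrections; the exact expressions employed in the paper are not reproduced here.

```python
import numpy as np

def coc(errors):
    """Order estimate from errors e_k = ||x_k - x*|| (needs the exact solution):
    rho_k = ln(e_{k+1}/e_k) / ln(e_k/e_{k-1})."""
    e = np.asarray(errors, dtype=float)
    return np.log(e[2:] / e[1:-1]) / np.log(e[1:-1] / e[:-2])

def acoc(iterates):
    """Approximated order estimate from the corrections s_k = ||x_{k+1} - x_k||,
    which does not require the exact solution."""
    x = [np.asarray(v, dtype=float) for v in iterates]
    s = np.array([np.linalg.norm(x[k + 1] - x[k]) for k in range(len(x) - 1)])
    return np.log(s[2:] / s[1:-1]) / np.log(s[1:-1] / s[:-2])
```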
Table 1.
Radii for Example 1.
Table 2.
Radii for Example 2.
Table 3.
Radii for Example 3.
Table 4.
Radii of convergence for Example 4.
Table 5.
Radii of convergence for Example 5.
In addition, we adopt as the error tolerance, and the terminating criteria used to solve the nonlinear systems or scalar equations are: and .
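A combined termination test of the kind described above might be realized as follows; the tolerance value is a hypothetical placeholder, since the value used in the paper is not reproduced here.

```python
import numpy as np

TOL = 1e-12  # hypothetical tolerance (placeholder, not the paper's value)

def should_stop(x_new, x_old, F):
    """Stop when both the correction and the residual are below the tolerance."""
    return (np.linalg.norm(np.asarray(x_new) - np.asarray(x_old)) < TOL and
            np.linalg.norm(np.asarray(F(x_new))) < TOL)
```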
The computations are performed with a software package using multiple precision arithmetic.
Example 1.
Following the example presented in the Introduction, for , we can set
In Table 1, we present the radii of convergence for Example 1.
Example 2.
Let and . Assume F on Ω with as
where . Then, we obtain
the Fréchet-derivative. Hence, for , we have
Thus, we obtain the convergence radii listed in Table 2.
Example 3.
The kinematic synthesis problem for steering [20,26] is given as
where
and
In Table 6, we present the values of and (in radians).
Table 6.
Values of and (in radians) for Example 3.
The approximated solution is for
Then, we get
We provide the radii of convergence for Example 3 in Table 3.
Example 4.
Consider the following nonlinear system that involves logarithmic functions
where . For , the required zero is . Then, we have for
We list the radii of convergence for Example 4 in Table 4.
Example 5.
Let us consider that and introduce the domain of maps continuous on , equipped with the max norm. We consider the following function φ on :
which further yields:
We have and
We list the radii of convergence for Example 5 in Table 5.
4. Conclusions
A comparative study was presented for three methods of high convergence order that utilize only the first derivative (and a divided difference of order one), which are the only operators actually appearing in these methods. Our analysis produced error bounds and results on the uniqueness of the solution that can be computed using majorant functions. In earlier studies, these concerns were not addressed, and applicability was restricted to operators whose derivatives up to order nine exist, even though these derivatives do not appear in the methods. Since our technique is general, it can be extended to other procedures. In our numerical experiments, a comparison of the convergence radii is given.
Author Contributions
R.B. and I.K.A.: Conceptualization; Methodology; Validation; Writing—Original Draft Preparation; Writing—Review & Editing, F.O.M.: Review & Editing. All authors have read and agreed to the published version of the manuscript.
Funding
Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia, under Grant No. G-110-130-1441.
Acknowledgments
This project was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia, under Grant No. G-110-130-1441. The authors, therefore, acknowledge with thanks DSR for technical and financial support.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Ostrowski, A.M. Solutions of Equations and System of Equations; Academic Press: New York, NY, USA, 1964. [Google Scholar]
- Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall Series in Automatic Computation; Chelsea Publishing Company: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
- Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
- Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations; Academic Press: Cambridge, MA, USA, 2012. [Google Scholar]
- Burden, R.L.; Faires, J.D. Numerical Analysis; PWS Publishing Company: Boston, MA, USA, 2001. [Google Scholar]
- Amat, S.; Argyros, I.K.; Busquier, S.; Magreñán, A.A. Local convergence and the dynamics of a two-point four parameter Jarratt-like method under weak conditions. Numer. Algorith. 2017, 74, 371–391. [Google Scholar] [CrossRef]
- Amiri, A.; Cordero, A.; Darvishi, M.T.; Torregrosa, J.R. Stability analysis of a parametric family of seventh-order iterative methods for solving nonlinear systems. Appl. Math. Comput. 2018, 323, 43–57. [Google Scholar] [CrossRef]
- Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Drawing dynamical and parameter planes of iterative families and methods. Sci. World J. 2013, 2013, 506–519. [Google Scholar] [CrossRef]
- Cordero, A.; Garcéa-Maimó, J.; Torregrosa, J.R.; Vassileva, M.P.; Vindel, P. Chaos in King’s iterative family. Appl. Math. Lett. 2013, 26, 842–848. [Google Scholar] [CrossRef]
- Cordero, A.; Garcéa-Maimó, J.; Torregrosa, J.R.; Vassileva, M.P. Multidimensional stability analysis of a family of biparametric iterative methods. J. Math. Chem. 2017, 55, 1461–1480. [Google Scholar] [CrossRef][Green Version]
- Cordero, A.; Gómez, E.; Torregrosa, J.R. Efficient high-order iterative methods for solving nonlinear systems and their application on heat conduction problems. Complexity 2017, 2017, 6457532. [Google Scholar] [CrossRef]
- Cordero, A.; Gutiérrez, J.M.; Magreñán, A.A.; Torregrosa, J.R. Stability analysis of a parametric family of iterative methods for solving nonlinear models. Appl. Math. Comput. 2016, 285, 26–40. [Google Scholar] [CrossRef]
- Cordero, A.; Soleymani, F.; Torregrosa, J.R. Dynamical analysis of iterative methods for nonlinear systems or how to deal with the dimension. Appl. Math. Comput. 2014, 244, 398–412. [Google Scholar] [CrossRef]
- Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
- Geum, Y.H.; Kim, Y.I.; Neta, B. A sixth-order family of three-point modified newton-like multiple-root finders and the dynamics behind their extraneous fixed points. Appl. Math. Comput. 2016, 283, 120–140. [Google Scholar] [CrossRef]
- Gutiérrez, J.M.; Hernández, M.A.; Romero, N. Dynamics of a new family of iterative processes for quadratic polynomials. J. Comput. Appl. Math. 2010, 233, 2688–2695. [Google Scholar] [CrossRef]
- Gutiérrez, J.M.; Plaza, S.; Romero, N. Dynamics of a fifth-order iterative method. Int. J. Comput. Math. 2012, 89, 822–835. [Google Scholar] [CrossRef]
- Sharma, J.R.; Arora, H. Improved Newton-like methods for solving systems of nonlinear equations. SeMA J. 2017, 74, 147–163. [Google Scholar] [CrossRef]
- Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
- Awawdeh, F. On new iterative method for solving systems of nonlinear equations. Numer. Algor. 2010, 54, 395–409. [Google Scholar] [CrossRef]
- Argyros, I.; Behl, R.; Motsa, S.S. Local Convergence of an Optimal Eighth Order Method under Weak Conditions. Algorithms 2015, 8, 645–655. [Google Scholar] [CrossRef]
- Behl, R.; Cordero, A.; Motsa, S.S.; Torregrosa, J.R. Stable high-order iterative methods for solving nonlinear models. Appl. Math. Comput. 2017, 303, 70–88. [Google Scholar] [CrossRef]
- Blanchard, P. The dynamics of Newton’s method. Proc. Symp. Appl. Math. 1994, 49, 139–154. [Google Scholar]
- Blanchard, P. Complex analytic dynamics on the Riemann sphere. Bull. AMS 1984, 11, 85–141. [Google Scholar] [CrossRef]
- Magreñán, A.A. Different anomalies in a Jarratt family of iterative root-finding methods. Appl. Math. Comput. 2014, 233, 29–38. [Google Scholar]
- Tsoulos, I.G.; Stavrakoudis, A. On locating all roots of systems of nonlinear equations inside bounded domain using global optimization methods. Nonlinear Anal. Real World Appl. 2010, 11, 2465–2471. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).