Abstract
Symmetries play an important role in the study of the dynamics of physical systems, which are in turn converted into nonlinear equations to be solved. Jarratt’s method and its variants have been used extensively for this purpose. Motivated by this, the present study develops a unified local convergence analysis of higher order Jarratt-type schemes for equations defined on Banach spaces. Such schemes have previously been studied on the multidimensional Euclidean space under the assumption that high order derivatives (not appearing in the schemes) exist. Moreover, no computable error estimates or results on the uniqueness of the solution are given. These problems restrict the applicability of the methods. We address all of them by using only the first order derivative (the only derivative appearing in the schemes). Hence, the region of applicability of the existing schemes is enlarged. Owing to its generality, our technique can be applied to other methods. Numerical experiments from chemistry and other disciplines of applied sciences complete this study.
MSC:
65H10; 65G99; 49M15
1. Introduction
Problems from the applied sciences, such as mathematics, biology, chemistry, and physics (including symmetries), to mention a few, are converted into nonlinear equations, which are solved by iterative methods, since exact solutions are hard to find. Let and denote Banach spaces and stand for an open and convex set. Moreover, we use the notation for the space of continuous linear operators mapping into . The task of determining a solution of the equation:
where is Fréchet differentiable, is of great significance in computational disciplines. Finding a solution in closed form is desirable but rarely attainable. That is why one resorts to iterative schemes approximating , provided that certain convergence criteria hold.
One of the most basic and popular iterative methods is known as the Newton method [1], which is defined as follows:
It has second order convergence. However, it is a one-point method, and one-point methods have several issues concerning order and computational efficiency. For instance, if we want to attain a third order one-point iterative method, then we need the evaluations of the function f, the first order derivative , and the second order derivative (more details can be found in [1,2]). Second or higher order derivatives are either time consuming to compute or do not exist. Thus, researchers have focused on the most important class of iterative methods, known as multi-point methods. These multi-point methods are of great practical importance, since they overcome the theoretical limits of one-point methods (more details can be found in [1]).
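To make the discussion concrete, here is a minimal computational sketch of Newton’s method for a system F(x) = 0; it assumes F and its Jacobian are available as callables, and the function names and tolerances are illustrative rather than taken from this paper.

```python
import numpy as np

def newton(F, J, x0, tol=1e-12, max_iter=50):
    """Minimal Newton iteration x_{n+1} = x_n - F'(x_n)^{-1} F(x_n).

    F : callable returning the residual vector F(x)
    J : callable returning the Jacobian matrix F'(x)
    x0: initial guess (must lie inside the convergence ball)
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(J(x), F(x))   # solve the linear system F'(x) s = F(x)
        x = x - step
        if np.linalg.norm(step) < tol:       # stop when the correction is small
            break
    return x

# Illustrative use: solve x^2 - 2 = 0 starting from x0 = 1
root = newton(lambda x: np.array([x[0]**2 - 2.0]),
              lambda x: np.array([[2.0 * x[0]]]),
              [1.0])
```

Each step uses one evaluation of F, one of F′, and one linear solve; multi-point methods reuse such ingredients to raise the convergence order without resorting to second derivatives.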
Ostrowski [2] was the first to suggest an optimal [3] multi-point scheme of fourth order requiring only three functional evaluations. Later, Jarratt [4,5] (1966, 1969) and King [6] (1973) gave many optimal fourth order multi-point methods. King further demonstrated that Ostrowski’s method is a special case of his scheme. A plethora of such schemes can be found in [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33] and the references therein.
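For orientation, one well-known representative of this class is Jarratt’s fourth-order method [4,5], which (in standard notation, writing F for the operator in Equation (1)) reads:

```latex
\begin{aligned}
y_n &= x_n - \tfrac{2}{3}\,F'(x_n)^{-1}F(x_n),\\
x_{n+1} &= x_n - \tfrac{1}{2}\,\bigl[\,3F'(y_n) - F'(x_n)\,\bigr]^{-1}\bigl[\,3F'(y_n) + F'(x_n)\,\bigr]\,F'(x_n)^{-1}F(x_n),
\end{aligned}
```

so that each iteration uses one evaluation of F and two of F′.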
In particular, we study the (local) convergence of the three-step scheme [33], given for each
as well as the j-step scheme [33]:
with , is a continuous operator of a scheme with order , and . We have left it as general as possible in order to include numerous special cases. As an example, it can be . Other choices are given by (31), (32), and in paper [33]. Schemes (2) and (3) were shown in [33] to be of order and , respectively, when . However, high order derivatives were used there in order to establish the convergence order.
Moreover, we refer the reader to [33] for a plethora of choices of leading to already studied schemes or new schemes. Some choices are also given by us in the numerical section of this study. The computational efficiencies and other benefits were also presented in [33].
We have some concerns with the aforementioned studies:
- (a) The convergence order was established by utilizing Taylor series expansions requiring higher order derivatives (not appearing in the schemes);
- (b) Lack of computable estimates on the distances ;
- (c) Results related to the uniqueness of the solution are not given;
- (d) We do not know in advance how many iterates are needed to achieve a prescribed error tolerance;
- (e) Earlier studies have been carried out only on the multidimensional Euclidean space.
Concerns (a)–(e) limit the applicability of Schemes (2) and (3) and similar ones [1,4,5,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,28,29,30,31,32]. Let us consider a motivational example. Consider the following function F on , defined as:
We obtain:
Thus, we see that is not bounded in . Therefore, results requiring the existence of or higher derivatives cannot be applied to study the convergence of (2), (3), and the methods in [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32].
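For completeness, a typical instance of such a motivational function, used in several of the cited works and consistent with the description above, is the following (the exact function and domain in the original may differ):

```latex
F(x) =
\begin{cases}
x^{3}\ln x^{2} + x^{5} - x^{4}, & x \neq 0,\\[2pt]
0, & x = 0,
\end{cases}
\qquad \Omega = \left[-\tfrac{1}{2},\tfrac{3}{2}\right],
```

for which x = 1 solves F(x) = 0, while F'''(x) = 6 ln x^2 + 60x^2 - 24x + 22 is unbounded on Ω, so convergence results requiring a bounded third derivative do not apply.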
The novelty of our study lies in the fact that we address concerns (a)–(e) using only the derivative (the only one appearing in the schemes) as well as very general conditions. In this way, we also provide computable upper bound estimates on and results on the uniqueness of the solution. Moreover, our results are obtained in the more general setting of a Banach space. Hence, the region of applicability of these schemes is extended. It is worth noticing that computing the convergence radii reveals how limited the choice of initial points is. Our idea is so general that it can be used on other schemes in a similar fashion [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33]. We suppose from now on that is a simple solution of Equation (1).
It is worth noticing that symmetry principles are foundational in quantum physics and the study of the micro-world. Once such problems are converted to equations of the form (1), their solutions are hard to obtain in closed form or analytically. That is why schemes of this kind are important to study.
2. Analysis in the Sense of Local Convergence
It is convenient for the analysis of scheme (2) to develop some parameters and functions. Suppose that scalar equation has real solutions with . Then, is called the smallest or minimal solution of the equation in . Set .
Suppose equation(s):
(i)
has a smallest solution for some continuous and non-decreasing function . Set .
(ii)
have the smallest solutions , respectively, for some continuous and non-decreasing functions , and with:
(iii)
has a smallest solution for some continuous and non-decreasing function with:
The parameter defined by:
shall be shown to be a convergence radius for scheme (2). Set . It follows from this definition that:
By , we denote the closure of ball with center and of radius .
The local convergence analysis of scheme (2) relies on the conditions provided that the scalar functions are as previously defined.
Suppose:
- (A1) For each . Set .
- (A2) For each , where .
- (A3) and
- (A4) There exists satisfying: Set .
Next, the local convergence analysis of scheme (2) is developed using conditions with functions as previously defined.
Theorem 1.
Proof.
The following assertions shall be shown using mathematical induction:
and
where the functions are as given previously and the radius is as defined by (7).
Hence, by the Banach lemma on invertible operators [8], with:
Then, the iterates , and are well defined. Thus, from the three steps of scheme (2), we can write, respectively:
and
By (7), (9), (10) (for ), (15) (for ), , and (16)–(18), we get respectively:
and
so , and estimates (12)–(14) hold for . The induction for (11)–(14) is completed if , and are replaced by , and , respectively, in the preceding calculations. The completion of the induction gives the estimate:
with , from which we have and .
In order to show the uniqueness part, we set with . Then, in view of and , we obtain:
leading to , since , and . □
Remark 1.
- (a) By and the estimation: condition can be dropped and replaced by:
- (b) The results obtained here can be used for operators F satisfying the autonomous differential equation [9,10] of the form: where P is a known continuous operator. Since , we can apply the results without actually knowing the solution . Let us consider an example. Then, we can choose .
- (c) If and , then was shown in [9,10] to be the convergence radius for Newton’s method. It follows from (7) and the definition of that the convergence radius ρ of the method (2) cannot be larger than the convergence radius of the second order Newton’s method. As already noted in [9,10], is at least as large as the convergence ball given by Rheinboldt [28]: In particular, for (where is the constant on ), we have that: and Therefore, our convergence ball is at most three times larger than Rheinboldt’s. The same value for is given by Traub [1].
- (d) Method (2) remains unchanged if we use the conditions of Theorem 1 instead of the stronger conditions given in [33]. Moreover, for the error bounds in practice we can use the Computational Order of Convergence (COC) [32]: or the Approximate Computational Order of Convergence (ACOC) [32] given by: In this way, the convergence order is obtained without evaluating derivatives higher than the first Fréchet derivative. A short computational sketch of these quantities is given after this remark.
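The following is a minimal sketch of how the ACOC (and, when the solution is known, the COC) can be computed from the stored iterates; the function and variable names are illustrative only.

```python
import numpy as np

def acoc(xs):
    """Approximate Computational Order of Convergence from successive iterates.

    xs : list of at least four iterates x_{n-2}, x_{n-1}, x_n, x_{n+1} (scalars or vectors).
    Uses rho ~ ln(||x_{n+1}-x_n|| / ||x_n-x_{n-1}||) / ln(||x_n-x_{n-1}|| / ||x_{n-1}-x_{n-2}||).
    """
    e = [np.linalg.norm(np.subtract(xs[k + 1], xs[k])) for k in range(len(xs) - 1)]
    return np.log(e[-1] / e[-2]) / np.log(e[-2] / e[-3])

def coc(xs, x_star):
    """Computational Order of Convergence, when the solution x_star is known."""
    e = [np.linalg.norm(np.subtract(x, x_star)) for x in xs]
    return np.log(e[-1] / e[-2]) / np.log(e[-2] / e[-3])
```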
Next, we present the local convergence of scheme (3) in an analogous way. Define functions on as:
and
where is a continuous and non-decreasing function, and
Suppose equations:
have the smallest solutions in denoted by , respectively.
Define parameter:
It shall be shown that is a convergence radius for scheme (3).
Consider conditions with replacing . Moreover, replace the second condition in by:
Let us call the resulting conditions . Then, as in Theorem 1, we get in turn the estimates that also motivate the introduction of the functions:
Hence, we get the local convergence result for scheme (3).
3. Numerical Applications
We present computational results based on the theoretical results proposed in this paper. We choose and , respectively, in scheme (2) in order to obtain fifth and sixth order iterative procedures. In particular, we have:
where
and
where
For more details on these values, please see the article by Zhanlav and Otgondorj [33]. Next, we show how to choose the functions for Schemes (31) and (32), respectively.
Suppose equation:
has a smallest solution , where:
Case
Set . Define function by:
The choice of functions and is justified by the estimates:
so
where we also used:
so
and
Case
The preceding calculations for in this case suggest that this function can be defined by:
since:
Moreover, suppose equation:
has a smallest solution for:
Define function by:
and
The choice of functions and is justified by the estimates:
so
where we also adopted the estimates:
so
and
Next, we find the functions and (with ) for scheme (32) in an analogous way. We can write:
so
where we also used the following estimate:
since
where
Hence, we define function by:
In view of the previous calculations for finding and the third substep of Schemes (31) and (32), we define :
Moreover, to find the radius for Schemes (31) and (32), we use the equations involving and as defined previously in this section. We compare these methods on the basis of their radii of convergence. In addition, we choose as the error tolerance. The termination criteria for solving nonlinear systems or scalar equations are: and .
The computations are performed with the package , using multiple precision arithmetic.
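As a hedged illustration of how the radii can be computed in practice, the sketch below finds the smallest positive solution of a scalar equation of the form g(t) = 1 by scanning for a sign change and refining it with bisection; the function g, the search interval, and the Lipschitz-type constants in the usage lines are placeholders, not values from this paper.

```python
import numpy as np

def smallest_root(g, t_max, samples=10_000, bisect_iters=100):
    """Smallest t in (0, t_max) with g(t) = 1, assuming g is continuous.

    g     : scalar function (placeholder for the scalar functions of Section 2)
    t_max : right end of the search interval (placeholder)
    Returns None if no sign change of g(t) - 1 is detected on the grid.
    """
    ts = np.linspace(1e-12, t_max, samples)
    vals = np.array([g(t) - 1.0 for t in ts])
    idx = np.where(np.sign(vals[:-1]) * np.sign(vals[1:]) < 0)[0]
    if len(idx) == 0:
        return None
    a, b = ts[idx[0]], ts[idx[0] + 1]
    for _ in range(bisect_iters):                       # plain bisection refinement
        m = 0.5 * (a + b)
        if (g(a) - 1.0) * (g(m) - 1.0) <= 0.0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

# Illustrative use with hypothetical Lipschitz-type data, w0(t) = L0*t and g1(t) = L*t/(2*(1 - L0*t)):
L0, L = 1.0, 1.5
r1 = smallest_root(lambda t: L * t / (2.0 * (1.0 - L0 * t)), t_max=1.0 / L0 - 1e-9)
```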
Example 1.
Following the example presented in the introduction, for , we can set:
In this way, conditions are satisfied. Then, by solving the equations , we find the solutions and, using (7), we determine ρ. Hence, the conclusions of Theorem 1 hold. In Table 1, we present the numerical values of the radii and ρ for Example 1.
Table 1.
Radii for Example 1.
Example 2.
We choose a well-known nonlinear PDE problem of molecular interaction [12,33], which is given as follows:
subject to the following conditions:
where .
First, we discretize the above PDE (33) by adopting the central divided difference:
which further suggests the following system of nonlinear equations:
where . We choose in order to obtain a system of nonlinear equations of order and . The approximate solution for is:
Then, we get by the conditions :
The above functions clearly satisfy the conditions . Then, by solving the equations , we find the solutions and, using (7), we determine ρ. Hence, the conclusions of Theorem 1 hold. We provide the computational values of the radii of convergence for Example 2 in Table 2.
Table 2.
Radii for Example 2.
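To indicate how a central-difference discretization turns a boundary value problem into a system of nonlinear equations, here is a simplified sketch for a generic 1D problem u'' = f(u) with zero boundary values; it is a stand-in under stated assumptions, not the specific problem (33).

```python
import numpy as np

def bvp_residual(u_inner, f, h, ua=0.0, ub=0.0):
    """Residual of the central-difference discretization of u'' = f(u).

    u_inner : values of u at the interior grid points x_1, ..., x_m
    f       : nonlinearity (placeholder; problem (33) has its own right-hand side)
    h       : uniform step size
    The k-th equation is (u_{k-1} - 2 u_k + u_{k+1}) / h**2 - f(u_k) = 0.
    """
    u = np.concatenate(([ua], u_inner, [ub]))          # attach the boundary values
    return (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2 - f(u[1:-1])

# Newton's method or a Jarratt-type scheme can then be applied to F(u) = bvp_residual(u, f, h).
```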
Example 3.
The kinematic synthesis problem for steering [11,31] is given as:
where
and
In Table 3, we present the values of and (in radians).
Table 3.
Values of and (in radians) for Example 3.
The approximate solution for is:
Then, we get:
We can easily verify that the above functions satisfy the conditions . Then, by solving the equations , we find the solutions and, using (7), we determine ρ. Hence, the conclusions of Theorem 1 hold. We provide the computational radii of convergence for Example 3 in Table 4.
Table 4.
Radii for Example 3.
Example 4.
We choose a prominent 2D Bratu problem [7,30], which is given by:
Let us assume that is a numerical approximation over the grid points of the mesh. In addition, and denote the numbers of steps in the x and y directions, respectively, while h and k are the corresponding step sizes. In order to find the solution of PDE (34), we adopt the following approach:
which further yields the following system of nonlinear equations (SNE):
By choosing and , we obtain a large SNE of order , whose required root is:
the column vector. Choose and . Then, we have:
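As a hedged sketch of how such a discretized system can be assembled, the code below forms the central-difference residual of the standard 2D Bratu equation u_xx + u_yy + C e^u = 0 on the unit square with zero boundary data; the constant C and the grid parameters are placeholders, and the exact formulation of (34) in the paper may differ.

```python
import numpy as np

def bratu_residual(U_inner, C, h, k):
    """Central-difference residual for the standard 2D Bratu problem
    u_xx + u_yy + C*exp(u) = 0 with u = 0 on the boundary.

    U_inner : (m x n) array of unknowns at the interior grid points
    C       : Bratu parameter (placeholder)
    h, k    : step sizes in the x and y directions
    """
    m, n = U_inner.shape
    U = np.zeros((m + 2, n + 2))                 # zero Dirichlet boundary values
    U[1:-1, 1:-1] = U_inner
    u_xx = (U[:-2, 1:-1] - 2.0 * U[1:-1, 1:-1] + U[2:, 1:-1]) / h**2
    u_yy = (U[1:-1, :-2] - 2.0 * U[1:-1, 1:-1] + U[1:-1, 2:]) / k**2
    return u_xx + u_yy + C * np.exp(U[1:-1, 1:-1])

# The flattened map u -> bratu_residual(u.reshape(m, n), C, h, k).ravel()
# defines the SNE to which schemes such as (31) and (32) can be applied.
```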
In this way, conditions are satisfied. Then, by solving the equations , we find the solutions and, using (7), we determine ρ. Hence, the conclusions of Theorem 1 hold. The computational results are depicted in Table 5.
Table 5.
Radii of convergence for Example 4.
Remark 2.
We have observed from Examples 1–4 that method (32) (for ) has a larger radius of convergence compared to the other particular cases of (31) and (32). So, we deduce that this particular case is better than the other particular cases of (31) and (32) in terms of the set of convergent initial points and the domain of convergence. It also shows a consistent computational order of convergence.
4. Conclusions
A unified local convergence analysis is presented for a family of higher order Jarratt-type schemes on Banach spaces. Our analysis uses only the derivative appearing in these schemes, in contrast to other approaches using derivatives of order and for Schemes (2) and (3), respectively (which do not appear in these schemes). Hence, the applicability of these schemes is extended. Moreover, our analysis gives computable error distances and answers regarding the uniqueness of the solution. This was not done in the earlier work [33]. Our idea provides a new way of looking at iterative schemes, so it can extend the applicability of this and other schemes [3,4,5,6,7,8,9,10,13,14,15,16,17,18,19,20,21,22,23,24,25,26]. Finally, numerical experiments are conducted to solve problems from chemistry and other disciplines of applied sciences. Notice in particular that many problems from the micro-world are symmetric.
Author Contributions
R.B. and I.K.A.: Conceptualization; Methodology; Validation; Writing–Original Draft Preparation; Writing–Review & Editing. F.O.M. and C.I.A.: Review & Editing. All authors have read and agreed to the published version of the manuscript.
Funding
Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia, under grant no. KEP-MSc-49-130-42.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Acknowledgments
This project was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia, under grant no. (KEP-MSc-49-130-42). The authors, therefore, acknowledge with thanks DSR for their technical and financial support.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall Series in Automatic Computation; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
- Ostrowski, A.M. Solutions of Equations and System of Equations; Academic Press: New York, NY, USA, 1960. [Google Scholar]
- Kung, H.T.; Traub, J.F. Optimal order of one-point and multi-point iteration. J. ACM 1974, 21, 643–651. [Google Scholar] [CrossRef]
- Jarratt, P. Some efficient fourth order multipoint methods for solving equations. Nord. Tidskr. Inf. Behandl. 1969, 9, 119–124. [Google Scholar] [CrossRef]
- Jarratt, P. Some fourth order multipoint iterative methods for solving equations. Math. Comp. 1966, 20, 434–437. [Google Scholar] [CrossRef]
- King, R.F. A family of fourth order methods for nonlinear equations. SIAM J. Numer. Anal. 1973, 10, 876–879. [Google Scholar] [CrossRef]
- Kapania, R.K. A pseudo-spectral solution of 2-parameter Bratu’s equation. Comput. Mech. 1990, 6, 55–63. [Google Scholar] [CrossRef]
- Amat, S.; Argyros, I.K.; Busquier, S.; Magreñán, A.A. Local convergence and the dynamics of a two-point four parameter Jarratt-like method under weak conditions. Numer. Algorith. 2017, 74, 371–391. [Google Scholar] [CrossRef]
- Argyros, I.K. Convergence and Application of Newton-Type Iterations; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
- Argyros, I.K.; Hilout, S. Numerical Methods in Nonlinear Analysis; World Scientific Publ. Comp.: Hoboken, NJ, USA, 2013. [Google Scholar]
- Awawdeh, F. On new iterative method for solving systems of nonlinear equations. Numer. Algor. 2010, 54, 395–409. [Google Scholar] [CrossRef]
- Bahl, A.; Cordero, A.; Sharma, R.; Torregrosa, J.R. A novel bi-parametric sixth order iterative scheme for solving nonlinear systems and its dynamics. Appl. Math. Comput. 2019, 357, 147–166. [Google Scholar] [CrossRef]
- Berardi, M.; Difonzo, F.; Vurro, M.; Lopez, L. The 1D Richards’ equation in two layered soils: A Filippov approach to treat discontinuities. Adv. Water Resour. 2018, 115, 264–272. [Google Scholar] [CrossRef]
- Casulli, V.; Zanolli, P. A Nested Newton-Type Algorithm for Finite Volume Methods Solving Richards’ Equation in Mixed Form. SIAM J. Sci. Comput. 2010, 32, 2255–2273. [Google Scholar] [CrossRef]
- Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Drawing dynamical and parameter planes of iterative families and methods. Sci. World J. 2013, 2013, 506–519. [Google Scholar] [CrossRef]
- Cordero, A.; García-Maimó, J.; Torregrosa, J.R.; Vassileva, M.P.; Vindel, P. Chaos in King’s iterative family. Appl. Math. Lett. 2013, 26, 842–848. [Google Scholar] [CrossRef]
- Cordero, A.; García-Maimó, J.; Torregrosa, J.R.; Vassileva, M.P. Multidimensional stability analysis of a family of biparametric iterative methods. J. Math. Chem. 2017, 55, 1461–1480. [Google Scholar] [CrossRef]
- Cordero, A.; Gómez, E.; Torregrosa, J.R. Efficient high-order iterative methods for solving nonlinear systems and their application on heat conduction problems. Complexity 2017, 2017, 6457532. [Google Scholar] [CrossRef]
- Cordero, A.; Gutiérrez, J.M.; Magreñán, A.A.; Torregrosa, J.R. Stability analysis of a parametric family of iterative methods for solving nonlinear models. Appl. Math. Comput. 2016, 285, 26–40. [Google Scholar] [CrossRef]
- Cordero, A.; Soleymani, F.; Torregrosa, J.R. Dynamical analysis of iterative methods for nonlinear systems or how to deal with the dimension. Appl. Math. Comput. 2014, 244, 398–412. [Google Scholar] [CrossRef]
- Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
- Geum, Y.H.; Kim, Y.I.; Neta, B. A sixth-order family of three-point modified newton-like multiple-root finders and the dynamics behind their extraneous fixed points. Appl. Math. Comput. 2016, 283, 120–140. [Google Scholar] [CrossRef]
- Gutiérrez, J.M.; Hernández, M.A.; Romero, N. Dynamics of a new family of iterative processes for quadratic polynomials. J. Comput. Appl. Math. 2010, 233, 2688–2695. [Google Scholar] [CrossRef]
- Gutiérrez, J.M.; Plaza, S.; Romero, N. Dynamics of a fifth-order iterative method. Int. J. Comput. Math. 2012, 89, 822–835. [Google Scholar] [CrossRef]
- Illiano, D.; Pop, L.S.; Radu, F.A. Iterative schemes for surfactant transport in porous media. Comput. Geosci. 2021, 25, 805–822. [Google Scholar] [CrossRef]
- Magreñán, A.A. Different anomalies in a Jarratt family of iterative root-finding methods. Appl. Math. Comput. 2014, 233, 29–38. [Google Scholar]
- Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations; Academic Press: Cambridge, MA, USA, 2013. [Google Scholar]
- Rheinboldt, W.C. An adaptive continuation process for solving systems of nonlinear equations. Pol. Acad. Sci. Banach Ctr. Publ. 1978, 3, 129–142. [Google Scholar] [CrossRef]
- Sharma, J.R.; Arora, H. Improved Newton-like methods for solving systems of nonlinear equations. SeMA J. 2017, 74, 147–163. [Google Scholar] [CrossRef]
- Simpson, R.B. A method for the numerical determination of bifurcation states of nonlinear systems of equations. SIAM J. Numer. Anal. 1975, 12, 439–451. [Google Scholar] [CrossRef]
- Tsoulos, I.G.; Stavrakoudis, A. On locating all roots of systems of nonlinear equations inside bounded domain using global optimization methods. Nonlinear Anal. Real World Appl. 2010, 11, 2465–2471. [Google Scholar] [CrossRef]
- Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
- Zhanlav, T.; Otgondorj, K. Higher order Jarratt-like iterations for solving system of nonlinear equations. Appl. Math. Comput. 2021, 395, 125849. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).