Abstract
The foremost aim of this paper is to present a local convergence study of high-order iterative procedures for solving nonlinear problems involving Banach space valued operators. We deploy suppositions only on the first-order derivative of the operator. Our conditions generalize the Lipschitz and Hölder conditions used in earlier studies. Moreover, when we specialize to these cases, they provide: a larger radius of convergence, tighter bounds on the distances involved, more precise information on the solution and smaller Lipschitz or Hölder constants. Hence, we extend the applicability of these procedures. Our new technique can also be used to broaden the usage of existing iterative procedures. Finally, we check our results on a good number of numerical examples, which demonstrate that they can solve problems where earlier studies cannot apply.
Keywords:
iterative method; local convergence; Banach space; Lipschitz constant; order of convergence
MSC:
65G99; 65H10; 47J25; 47J05; 65D10; 65D99
1. Introduction
One of the most fundamental problems in numerical analysis is how to approximate a locally unique zero of
where is a Fréchet-differentiable operator. In addition, are two Banach spaces and is a convex subset of Banach space . We denote as the space of bounded linear operators from to .
Approximating a unique solution is vital, since several problems can be transformed into Equation (1) by adopting mathematical modeling [1,2,3,4,5,6,7,8]. However, it is not always possible to obtain in closed form. Therefore, most of the schemes to solve such problems are iterative. The convergence study of an iterative scheme that uses information about is known as local convergence. Determining the convergence domain of an iterative method is an important task in guaranteeing convergence. Hence, it is very essential to provide the radius of convergence.
We are interested in the local study of multi-point high-order convergent method [1] given by
where is the starting point; for , the method reaches at least fourth order and, for , fifth order. The hypotheses on the derivatives of S restrict the applicability of the scheme in Equation (2). As a motivational example, we consider a function S on , defined by
Then, we have that
and
Then, obviously, the third-order derivative is unbounded on . There is a plethora of research articles on iterative schemes [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22]. The initial guess must be close enough to the required solution for guaranteed convergence. However, this does not tell us: how to choose , how to find a convergence radius, the bounds on , or the uniqueness results. We deal with these problems for the method in Equation (2) in Section 2.
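A minimal numerical check of this motivational example can be sketched as follows, assuming a concrete choice of S commonly used in this literature, namely S(t) = t^3 ln(t^2) + t^5 − t^4 with S(0) = 0 (this choice is an assumption, since the explicit formula is omitted above):

```python
import numpy as np

# Assumed motivational function: S(t) = t^3*ln(t^2) + t^5 - t^4, S(0) = 0,
# whose zero is t* = 1. Its third derivative, for t != 0, is
# S'''(t) = 6*ln(t^2) + 60*t^2 - 24*t + 22.
def third_derivative(t):
    return 6.0 * np.log(t**2) + 60.0 * t**2 - 24.0 * t + 22.0

# The logarithmic term diverges as t -> 0, so S''' is unbounded on any
# neighborhood of 0, which is why Taylor-based convergence proofs fail here.
for t in [1e-1, 1e-3, 1e-6, 1e-9]:
    print(f"t = {t:.0e},  S'''(t) = {third_derivative(t):.2f}")
```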
We enlarge the applicability of the scheme in Equation (2) by adopting hypotheses only on the first-order derivative of S and generalized conditions. In addition, we avoid the use of Taylor series expansions. In this way, there is no need to use higher-order derivatives to establish the convergence order of the scheme in Equation (2). We adopt and for the order of convergence, which avoid higher-order derivatives (see Remark 1 (d)). When the generalized conditions are specialized to the Lipschitz case (see Remark 1 (a)) or the Hölder case [1] (see Remark 1 (c)), the advantages mentioned in the Introduction are obtained.
2. Convergence Analysis
The local convergence analysis relies on some parameters and scalar functions. Let us assume , and let be a non-decreasing continuous function on having values in with , where or .
Suppose equation
has a minimal positive solution .
Consider that functions on are continuous and increasing with . Moreover, we choose functions and on the interval as follows:
Suppose that
From Equation (4), we have and as . Then, by the intermediate value theorem, we know that the function has zeros in . Denote by the smallest such zero of function . Assume equations and have minimal positive solutions and , respectively. Set
Furthermore, define some functions and on in the following way:
We obtain again and , as . Let us denote and as the smallest zero of the functions and , respectively, on the interval . Finally, we define the convergence radius as follows:
Then, we have
and
Let and stand, respectively, for the open and closed balls in with center and radius .
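In practice, the radius τ can be obtained numerically by locating the smallest positive zero of each function on (0, p) and taking the minimum. The following is a minimal sketch under assumed Lipschitz-type majorants w0(t) = L0 t and w(t) = L t with illustrative constants; the h-functions below are placeholders in the spirit of the definitions above, not the exact ones:

```python
import numpy as np
from scipy.optimize import brentq

L0, L = 0.7, 1.0                 # hypothetical Lipschitz-type constants
p = 1.0 / L0                     # assumed right end of the interval (0, p)

def smallest_positive_zero(h, upper, samples=10_000):
    """Smallest positive zero of h on (0, upper) via sign-change bracketing."""
    ts = np.linspace(1e-12, upper * (1.0 - 1e-12), samples)
    vals = h(ts)
    for i in range(len(ts) - 1):
        if vals[i] * vals[i + 1] <= 0.0:
            return brentq(h, ts[i], ts[i + 1])
    raise ValueError("no sign change found on (0, upper)")

# Placeholder majorant functions for two substeps of an iterative scheme.
h1 = lambda t: L * t / (2.0 * (1.0 - L0 * t)) - 1.0   # Newton-type first substep
h2 = lambda t: L * t / (1.0 - L0 * t) - 1.0           # cruder bound for a later substep

tau = min(smallest_positive_zero(h, p) for h in (h1, h2))
print("convergence radius tau ≈", tau)
```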
Next, we present the local convergence analysis of the method in Equation (2) using the preceding notations.
Theorem 1.
Let be a Fréchet-differentiable operator. We assume that are non-decreasing continuous functions with . Let be such that Equation (4) is satisfied and exist. In addition, we consider the zero is well defined, such that, for each ,
and
Further, we consider that, for each ,
and
where the convergence radius τ is given by Equation (5). Then, the sequence obtained for by the scheme in Equation (2) is well defined, remains in for each , and converges to . Moreover, the following estimates hold
and
where the functions are defined previously. Furthermore, if
then the point is the unique zero of in .
Proof.
We use mathematical induction to demonstrate that the sequence is well defined in and converges to . By the hypothesis , and Equations (3), (5) and (12), we obtain
In view of Equation (20) and the Banach lemma on invertible operators [2,3], , and are well defined by the first two substeps of the method in Equation (2) and
Using the first substep of the scheme in Equation (2), together with Equations (5), (7) (for ), (11), (13), (14) and (21), we obtain
which implies Equation (16) for and . By the second substep of the method in Equation (2), we can write
Notice that with in Equation (20), and are well defined and
By Equations (5), (6) (for ), (9), (11), (13), (14), and (21)–(24), we get
which implies Equation (17) for and .
Then, by the third substep of the method in Equation (2), together with Equations (5), (7) (for ), (9), (10), and (21)–(25) (for ), we obtain
which shows Equation (18) for and . By replacing , by , , in the preceding estimates, we arrive at Equations (16)–(18). Therefore, in view of the estimates
we deduce that and .
Finally, we show the uniqueness part. We assume that with and define . Using Equations (12) and (19), we get
It is confirmed from Equation (28) that Q is an invertible operator. Then, in view of the identity
we deduce that . □
Remark 1.
- (a)
- It is clear from Equation (12) that the condition in Equation (14) can be dropped and adopted as follows: since . Further, Singh et al. [1] considered the following conditions for each in the Hölder case for (corresponding to Equation (4)). In our case, we have , thus holds, since . Hence, the improvements stated in the Abstract of this paper hold for (see the numerical examples too). The estimates used in [1,23] are not better than ours.
- (b)
- (c)
- If are constant functions and , then we showed in [2,13], using only Equations (12) and (13) for the case of Newton's method (see the definition of the function too), thus . Therefore, the convergence radius τ has maximum value and is the convergence radius of Newton's method. Rheinboldt [22] and Traub [8] provided the following convergence radius instead of . On the other hand, Argyros [2,3] proposed the following convergence radius , where is the Lipschitz constant for Equation (8) on Δ. However, we have , thus and . The convergence radius q adopted in [24] is smaller than the radius proposed by Dennis and Schnabel [3]. However, q cannot be computed using the Lipschitz constants (see the numerical sketch after this remark).
- (d)
- The convergence order of the scheme in Equation (2) was demonstrated in [24] by adopting the fifth-order derivative of S. On the other hand, our approach requires hypotheses only on the first-order derivative of S. To obtain the convergence order, we adopt the following techniques: the computational order of convergence (COC) or the approximate computational order of convergence (ACOC) [19], Neither technique requires any derivative(s). It is also vital to note that the exact zero is not needed in the case of ACOC (see also the numerical sketch after this remark).
- (e)
- Consider an operator S satisfying the autonomous differential equation [2,3] , where T is a given continuous operator. By , we can use our results without prior knowledge of the required solution . For example, . Therefore, we obtain .
- (f)
- In view of the estimates and, similarly, we can replace the terms in the definition of the functions and by , respectively. If and, say, are constants, then the new functions and are tighter than the old ones, leading to a larger τ and tighter error bounds on the distances (if ).
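The following two sketches illustrate items (c) and (d) numerically. First, a hedged check of the radius comparison in (c), assuming the classical formulas for Newton's method, 2/(3 L1) for the Rheinboldt/Traub radius and 2/(2 L0 + L) for the Argyros radius; the constants below are purely illustrative:

```python
L0, L, L1 = 0.5, 1.0, 1.0     # hypothetical Lipschitz-type constants with L0 <= L <= L1

r_TR = 2.0 / (3.0 * L1)       # Rheinboldt [22] / Traub [8] radius
r_A  = 2.0 / (2.0 * L0 + L)   # Argyros [2,3] radius

print(f"Rheinboldt/Traub radius: {r_TR:.4f}")
print(f"Argyros radius:          {r_A:.4f}")
assert r_A >= r_TR            # holds whenever 2*L0 + L <= 3*L1
```

Second, a hedged sketch of the COC and ACOC estimates in (d), using the standard formulas from the literature; the Newton test run on f(t) = t^2 − 2 is purely illustrative:

```python
import numpy as np

def coc(xs, x_star):
    """Computational order of convergence from the last three iterates and the exact zero."""
    e = [np.linalg.norm(np.atleast_1d(x) - x_star) for x in xs[-3:]]
    return np.log(e[2] / e[1]) / np.log(e[1] / e[0])

def acoc(xs):
    """Approximate computational order of convergence; no exact zero is needed."""
    d = [np.linalg.norm(np.atleast_1d(xs[i + 1]) - np.atleast_1d(xs[i]))
         for i in range(len(xs) - 4, len(xs) - 1)]
    return np.log(d[2] / d[1]) / np.log(d[1] / d[0])

xs = [1.5]                                   # Newton iterates for f(t) = t^2 - 2
for _ in range(3):
    t = xs[-1]
    xs.append(t - (t**2 - 2.0) / (2.0 * t))

print("COC  ≈", coc(xs, np.sqrt(2.0)))       # both should be close to 2
print("ACOC ≈", acoc(xs))
```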
3. Concrete Examples
Here, we test the convergence conditions using concrete examples.
Example 1.
Here, we consider one of the well-known Hammerstein integral equations (see [25], pp. 19–20), defined by:
where the kernel S is:
We use in Equation (54), where and are the abscissas and weights, respectively. Denoting the approximations of by , we obtain the following system of nonlinear equations:
By the Gauss–Legendre quadrature formula, we obtain the values of and for , which are depicted in Table 1.
Table 1.
Abscissas and weights for .
The required approximate root is:
Then, we get and . We have the following radii of convergence for this problem in Table 2.
Table 2.
Distinct convergence radii for Example 1.
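A hedged sketch of the discretization described in Example 1 is given below: Gauss–Legendre abscissas and weights on [0, 1] turn the integral equation into a nonlinear system, which is then solved numerically. The kernel and the cubic nonlinearity used here are assumptions chosen for illustration, since the concrete Equation (54) is not reproduced above:

```python
import numpy as np
from scipy.optimize import fsolve

m = 8                                        # assumed number of quadrature nodes
nodes, weights = np.polynomial.legendre.leggauss(m)
t = 0.5 * (nodes + 1.0)                      # map abscissas from [-1, 1] to [0, 1]
w = 0.5 * weights                            # rescale the weights accordingly

def G(s, tt):
    # Hypothetical Green's-function kernel, a common choice in such examples.
    return np.where(tt <= s, tt * (1.0 - s), s * (1.0 - tt))

def F(x):
    # Residual of x_i = 1 + (1/5) * sum_j w_j * G(t_i, t_j) * x_j^3, i = 1, ..., m.
    K = G(t[:, None], t[None, :])
    return x - 1.0 - 0.2 * K @ (w * x**3)

x = fsolve(F, np.ones(m))
print("approximate solution at the nodes:", np.round(x, 6))
```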
Example 2.
The solution is the same as the solution of Equation (1), where is defined by
We get
Moreover,
thus, since ,
Hence, we have
thus, by Remark 1 (a), we can choose
Therefore, our results can be applied, but not the ones in [1], because is unbounded on Δ. The radii of convergence for the problem in Example 2 are given in Table 3.
Table 3.
Distinct convergence radii for Example 2.
Example 3.
Consider a system of differential equations,
that describes the movement of a particle in three dimensions with for . Then, the solution relates to , given as
It follows from Equation (61) that
Then, we have that and , where and . We have the following radii of convergence for Example 3, depicted in Table 4 and Table 5.
Table 4.
Convergence radii for Example 3.
Table 5.
Convergence radii for Example 3 with (call them bar functions).
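A hedged computational sketch in the spirit of Example 3 follows. The three-dimensional operator below is a hypothetical stand-in with zero at the origin (the concrete system from the example is omitted above), and Newton's method, the basic building block of the scheme in Equation (2), is started from an assumed point inside a ball around the solution:

```python
import numpy as np

def F(v):
    # Hypothetical operator with solution (0, 0, 0), of the kind used in such examples.
    x, y, z = v
    return np.array([np.exp(x) - 1.0, 0.5 * (np.e - 1.0) * y**2 + y, z])

def J(v):
    # Jacobian (Fréchet derivative) of the hypothetical operator above.
    x, y, z = v
    return np.array([[np.exp(x), 0.0, 0.0],
                     [0.0, (np.e - 1.0) * y + 1.0, 0.0],
                     [0.0, 0.0, 1.0]])

v = np.array([0.2, 0.2, 0.2])   # assumed starting point inside a ball around the zero
for _ in range(6):
    v = v - np.linalg.solve(J(v), F(v))
print("Newton iterate after 6 steps:", v)
```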
Example 4.
The chemical reaction [26] illustrated in this case shows how and are utilized at rates and , respectively, in a continuous stirred tank reactor (CSTR), given by:
Douglas [27] analyzed the CSTR problem for designing simple feedback control systems. The following mathematical formulation was adopted:
where the parameter has a physical meaning and is described in [26,27]. For the particular choice , we obtain the corresponding equation:
The function S has four zeros . Nonetheless, the desired zero for Equation (62) is . We assume . Then, we have and
Table 6.
Different radii of convergence for Example 4.
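A hedged numerical sketch for Example 4: the CSTR model reduces to a quartic polynomial whose four zeros can be computed directly, and the desired (physically meaningful) zero is then selected among them. The coefficients below are those commonly quoted for this problem in the literature, but since the concrete Equation (62) is not reproduced above, they should be treated as an assumption:

```python
import numpy as np

# Assumed quartic for the CSTR transfer function (coefficients of x^4, ..., x^0).
coeffs = [1.0, 11.50, 47.49, 83.06325, 51.23266875]

zeros = np.roots(coeffs)
print("all four zeros:", zeros)

# Keep the real zeros and pick the desired one among them.
real_zeros = zeros[np.abs(zeros.imag) < 1e-10].real
print("real zeros:", np.round(real_zeros, 4))
```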
Author Contributions
R.B. and I.K.A.: Conceptualization; Methodology; Validation; Writing–Original Draft Preparation; Writing–Review & Editing. A.S.A.: Review & Editing.
Funding
Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia, under Grant No. G: 424-130-1440.
Acknowledgments
This project was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia, under Grant No. G: 424-130-1440. The authors, therefore, acknowledge with thanks DSR for technical and financial support.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Singh, S.; Gupta, D.K.; Badoni, R.P.; Martínez, E.; Hueso, J.L. Local convergence of a parameter based iteration with Hölder continuous derivative in Banach spaces. Calcolo 2017, 54, 527–539. [Google Scholar] [CrossRef]
- Argyros, I.K. Convergence and Application of Newton-Type Iterations; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
- Argyros, I.K.; Hilout, S. Computational Methods in Nonlinear Analysis; World Scientific Publishing Company: Hackensack, NJ, USA, 2013. [Google Scholar]
- Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. Increasing the order of convergence of iterative schemes for solving nonlinear systems. J. Comput. Appl. Math. 2012, 252, 86–94. [Google Scholar] [CrossRef]
- Hernández, M.A.; Martínez, E. On the semilocal convergence of a three steps Newton-type process under mild convergence conditions. Numer. Algor. 2015, 70, 377–392. [Google Scholar] [CrossRef]
- Potra, F.A.; Pták, V. Nondiscrete Induction and Iterative Processes; Research Notes in Mathematics 103; Pitman: Boston, MA, USA, 1984. [Google Scholar]
- Sharma, J.R.; Guha, R.K.; Sharma, R. An efficient fourth-order weighted-Newton method for systems of nonlinear equations. Numer. Algor. 2013, 62, 307–323. [Google Scholar] [CrossRef]
- Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall Series in Automatic Computation; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
- Amat, S.; Busquier, S.; Plaza, S.; Gutiérrez, J.M. Geometric constructions of iterative functions to solve nonlinear equations. J. Comput. Appl. Math. 2003, 157, 197–205. [Google Scholar] [CrossRef]
- Amat, S.; Busquier, S.; Plaza, S. Dynamics of the King and Jarratt iterations. Aequationes Math. 2005, 69, 212–223. [Google Scholar] [CrossRef]
- Amat, S.; Hernández, M.A.; Romero, N. A modified Chebyshev’s iterative method with at least sixth order of convergence. Appl. Math. Comput. 2008, 206, 164–174. [Google Scholar] [CrossRef]
- Argyros, I.K.; Magreñán, Á.A. Ball convergence theorems and the convergence planes of an iterative method for nonlinear equations. SeMA J. 2015, 71, 39–55. [Google Scholar]
- Argyros, I.K.; George, S. Local convergence of some higher-order Newton-like methods with frozen derivatives. SeMA J. 2015, 70, 47–59. [Google Scholar] [CrossRef]
- Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
- Cordero, A.; Torregrosa, J.R. Variants of Newton’s method for functions of several variables. Appl. Math. Comput. 2006, 183, 199–208. [Google Scholar] [CrossRef]
- Ezquerro, J.A.; Hernández, M.A. New iterations of R-order four with reduced computational cost. BIT Numer. Math. 2009, 49, 325–342. [Google Scholar] [CrossRef]
- Ezquerro, J.A.; Hernández, M.A. A uniparametric Halley-type iteration with free second derivative. Int. J. Pure Appl. Math. 2003, 6, 99–110. [Google Scholar]
- Gutiérrez, J.M.; Hernández, M.A. Recurrence relations for the super-Halley method. Comput. Math. Appl. 1998, 36, 1–8. [Google Scholar] [CrossRef]
- Kou, J. A third-order modification of Newton method for systems of nonlinear equations. Appl. Math. Comput. 2007, 191, 117–121. [Google Scholar]
- Montazeri, H.; Soleymani, F.; Shateyi, S.; Motsa, S.S. On a new method for computing the numerical solution of systems of nonlinear equations. J. Appl. Math. 2012, 2012. [Google Scholar] [CrossRef]
- Petković, M.S.; Neta, B.; Petković, L.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations; Elsevier: Amsterdam, The Netherlands, 2013. [Google Scholar]
- Rheinboldt, W.C. An Adaptive Continuation Process for Solving Systems of Nonlinear Equations; Banach Center Publications: Warszawa, Poland, 1978; Volume 3, pp. 129–142. [Google Scholar]
- Martínez, E.; Singh, S.; Hueso, J.L.; Gupta, D.K. Enlarging the convergence domain in local convergence studies for iterative methods in Banach spaces. Appl. Math. Comput. 2016, 281, 252–265. [Google Scholar] [CrossRef]
- Xiao, X.Y.; Yin, H.W. Increasing the order of convergence for iterative methods to solve nonlinear systems. Calcolo 2016, 53, 285–300. [Google Scholar] [CrossRef]
- Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
- Constantinides, A.; Mostoufi, N. Numerical Methods for Chemical Engineers with MATLAB Applications; Prentice Hall PTR: Upper Saddle River, NJ, USA, 1999. [Google Scholar]
- Douglas, J.M. Process Dynamics and Control; Prentice Hall: Englewood Cliffs, NJ, USA, 1972; Volume 2. [Google Scholar]
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).