Abstract
The problem of approximating a solution of an equation is of extreme importance, since numerous problems from diverse disciplines reduce to solving such equations. The solutions are found using iterative schemes, since closed form solutions are in general not available. That is why it is important to study the convergence order of solvers. We extended the applicability of an eighth-order convergent solver for solving Banach space valued equations. Earlier studies adopted suppositions up to the ninth Fréchet derivative, although derivatives of order higher than one do not appear in these solvers. In contrast, we used only suppositions on the Lipschitz constants and the first-order Fréchet derivative. Hence, we extended the applicability of these solvers and provided computable convergence radii for them, which were not given in the earlier works. We showed improvements only for a certain class of solvers, but our technique can be used to extend the applicability of other solvers in the literature in a similar fashion. We used a variety of numerical problems to show that our results apply to nonlinear problems where the earlier ones do not.
1. Introduction
A plethora of problems from diverse disciplines such as Applied Mathematics, Mathematical Biology, Chemistry, Economics, Physics, Environmental Sciences, and Engineering reduce to equations on abstract spaces via mathematical modeling. A closed form solution is obtained only in rare cases. That is why it is important to develop iterative schemes that generate a sequence converging to the solution under suitable hypotheses on the initial information. Hence, we consider the problem of finding an approximate, locally unique solution of the equation
$$\Gamma(x)=0, \qquad (1)$$
which is one of the top priorities in the field of numerical analysis. Here, Γ is a Fréchet differentiable operator defined on a convex subset of a Banach space and taking values in a Banach space, and Γ′ takes its values in the space of continuous linear operators between these spaces.
We have several examples where researchers demonstrated the applicability of (1). They transformed real life problems into the form (1) by mathematical modeling; details can be found in [1,2,3,4,5,6,7]. We have to rely on iterative solvers, since it is not always feasible to obtain the solution in explicit form. A small number of globally convergent methods exist that do not require a sufficiently close starting point, e.g., the bisection method or the regula falsi method. However, most algorithms determine one zero at a time: once a zero has been determined with sufficient accuracy, the polynomial is deflated and the algorithm is applied again to the deflated polynomial. Methods that determine all zeros simultaneously also exist and have theoretical importance [8]; details of such methods can be found in [9,10,11,12,13,14,15,16,17,18,19]. Consequently, a large number of iterative solvers are available for problems such as expression (1). The analysis of such solvers involves local convergence, which relies on information in a neighborhood of the solution α and ensures the convergence of the iteration procedure. One of the most significant tasks in the analysis of iterative procedures is to determine the convergence region; hence, we provide the radius of convergence.
For this purpose, we rewrite the iterative solver suggested in [20] in the following way:
where an initial point is required and the scheme involves a bilinear operator. In a special case of the parameters, solver (2) reduces to a fourth-order convergent solver studied in [20]. Shah et al. [20] suggested fourth-order convergence by adopting Taylor series expansions and suppositions up to the ninth-order derivative of the involved function. Such constraints hamper the applicability of solver (2), even though only the first-order derivative appears in it. To illustrate this, let us consider the following academic function:
Then, we have that
and
From the above derivatives, it is straightforward to see that the third-order derivative of this function is unbounded on its domain. A large number of research articles are available in the literature [1,2,3,4,5,6,7,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34]. In the majority of these articles, the authors state that the starting guess must be adequately close to the solution α, but they do not indicate how to pick the starting guess, how much closeness is sufficient for convergence, how to find the convergence radius, bounds on the error distances, or results on uniqueness. We address all of these questions for solver (2) in the next section.
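Since the displayed academic function is not reproduced above, the following minimal Python sketch uses a hypothetical stand-in of the same type, f(t) = t³ ln(t²) + t⁵ − t⁴ with a zero at t = 1; its third derivative contains the term 6 ln(t²) and is therefore unbounded in magnitude near the origin, which is exactly the obstruction for convergence proofs that rely on higher-order derivatives.

```python
import numpy as np

# Hypothetical stand-in for the academic example: f(t) = t^3 ln(t^2) + t^5 - t^4.
# f(1) = 0, yet f'''(t) = 6 ln(t^2) + 60 t^2 - 24 t + 22 is unbounded in
# magnitude as t -> 0, so proofs assuming bounded third (or higher) derivatives
# fail, while only f' is actually used by the solver.
def f(t):
    return t**3 * np.log(t**2) + t**5 - t**4

def f3(t):  # third derivative, computed by hand
    return 6.0 * np.log(t**2) + 60.0 * t**2 - 24.0 * t + 22.0

if __name__ == "__main__":
    print("f(1) =", f(1.0))                  # 0.0: t = 1 is a zero
    for t in [1e-1, 1e-3, 1e-6]:
        print(f"f'''({t:g}) = {f3(t):.3f}")  # unbounded in magnitude near 0
```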
In the present study, we adopt only conditions on the first-order derivative of Γ, in the form of generalized Lipschitz conditions. In addition, we avoid Taylor series expansions, since they require higher-order derivatives of Γ, and we work with Lipschitz-type parameters instead. In this way, we do not need to assume higher-order derivatives to establish the convergence order of (2). Further, we compute the convergence order using the computational order of convergence (COC), defined as
$$\xi=\frac{\ln\left(\|x_{n+1}-\alpha\|/\|x_{n}-\alpha\|\right)}{\ln\left(\|x_{n}-\alpha\|/\|x_{n-1}-\alpha\|\right)},\quad n=1,2,\ldots, \qquad (3)$$
or the approximate computational order of convergence (ACOC) [21], defined as
$$\xi^{*}=\frac{\ln\left(\|x_{n+1}-x_{n}\|/\|x_{n}-x_{n-1}\|\right)}{\ln\left(\|x_{n}-x_{n-1}\|/\|x_{n-1}-x_{n-2}\|\right)},\quad n=2,3,\ldots, \qquad (4)$$
where x_n denotes the nth iterate generated by the solver. Neither quantity requires derivatives of order higher than one. It is also vital to note that the ACOC does not need prior knowledge of the exact root α. Finally, we investigate the applicability of our results on several numerical examples, where the earlier works are not applicable.
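As a small illustration of how expressions (3) and (4) are used in practice, the following sketch computes the COC and the ACOC from a stored sequence of iterates; the iterates shown are hypothetical, and the definitions coincide with the standard ones stated above.

```python
import math

def coc(xs, alpha):
    """Computational order of convergence (3): needs the exact root alpha."""
    e = [abs(x - alpha) for x in xs]
    return [math.log(e[n + 1] / e[n]) / math.log(e[n] / e[n - 1])
            for n in range(1, len(e) - 1)]

def acoc(xs):
    """Approximate computational order of convergence (4): no root needed."""
    d = [abs(xs[n + 1] - xs[n]) for n in range(len(xs) - 1)]
    return [math.log(d[n + 1] / d[n]) / math.log(d[n] / d[n - 1])
            for n in range(1, len(d) - 1)]

if __name__ == "__main__":
    # Hypothetical iterates of a second-order scheme converging to alpha = 0,
    # with errors 1e-1, 1e-2, 1e-4, 1e-8 (the error roughly squares each step).
    xs = [1e-1, 1e-2, 1e-4, 1e-8]
    print("COC :", coc(xs, 0.0))   # values equal to 2 for these errors
    print("ACOC:", acoc(xs))       # close to 2, without using alpha
```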
2. Study of Local Convergence
In this section, we present the local convergence study of solver (2). To this end, we adopt some auxiliary scalar functions that are non-decreasing and continuous on the non-negative real axis. We assume that the equation
has a minimal positive solution and
In addition, we define further auxiliary functions on this interval as follows:
Each of these functions changes sign on the interval; hence, by the intermediate value theorem, both of them have zeros there. Denote the smallest such zeros of the respective functions accordingly. Further, we define two more functions on the interval as follows:
Again, a sign change occurs on the interval; let the minimal zero of the corresponding function there be denoted accordingly. Finally, we define the convergence radius r in the following way:
Then, we have that for each
and
The usual notation is used for the open and closed balls, respectively, with the indicated center and common radius.
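To indicate how the radius r can be computed in practice, the sketch below brackets the smallest positive zero of each function of the form "g − 1" by a scan-and-bisection step and takes the minimum. The majorant functions g1, g2 used here are hypothetical stand-ins built from constant Lipschitz parameters L0, L, since the actual functions attached to solver (2) are not reproduced above; only the mechanics of computing r are illustrated.

```python
def smallest_positive_root(h, t_max, steps=10**4, tol=1e-12):
    """Bisection for the smallest positive zero of h on (0, t_max); None if absent."""
    dt = t_max / steps
    a = dt
    for k in range(1, steps):
        b = (k + 1) * dt
        if h(a) * h(b) <= 0.0:           # sign change brackets a root
            while b - a > tol:
                m = 0.5 * (a + b)
                if h(a) * h(m) <= 0.0:
                    b = m
                else:
                    a = m
            return 0.5 * (a + b)
        a = b
    return None

# Hypothetical majorant functions with constant parameters L0 <= L
# (stand-ins for the functions attached to solver (2)).
L0, L = 2.0, 3.0
g1 = lambda t: L * t / (2.0 * (1.0 - L0 * t))        # Newton-type first substep
g2 = lambda t: g1(t) * (1.0 + 1.0 / (1.0 - L0 * t))  # illustrative second substep
h_list = [lambda t, g=g: g(t) - 1.0 for g in (g1, g2)]

rho0 = 1.0 / L0                                       # where 1 - L0*t vanishes
radii = [smallest_positive_root(h, 0.999 * rho0) for h in h_list]
r = min(radii)                                        # convergence radius candidate
print("candidate radii:", radii, "-> r =", r)
```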
The local convergence analysis of solver (2) is based on the conditions (A1)–(A4):
- (A1)
- Γ is a Fréchet-differentiable operator.
- (A2)
- The auxiliary functions introduced above are non-decreasing and continuous.
- (A3)
- There exists a zero α of Γ such that the stated invertibility and Lipschitz-type conditions hold for every point in the set under consideration. Set the corresponding subset accordingly.
- (A4)
- The stated conditions hold and the closed ball of radius r centered at α is contained in the domain of Γ, where r is the radius defined above.
Then, we present the main local convergence result.
Theorem 1.
Under the conditions (A1)–(A4), the sequence generated by solver (2) from any starting point in the ball of radius r centered at α exists, remains in that ball for all n, and converges to α, so that
and
Furthermore, if
then α is the unique root of Γ in the set indicated above.
Proof.
We use mathematical induction to show that the iterates in expressions (19)–(21) are well defined, remain in the stated ball, and converge to the required zero α. Adopting the hypotheses, together with (5)–(7) and (13), we obtain
From expression (22) and the Banach lemma on invertible operators [1,2], it follows that the relevant inverse exists, the corresponding iterate is well defined, and
To show that exists, it suffices to show that . Using (5), (6), (8), (13) and (15), we get in turn that
so is well defined and
By the definition of and the first substep of (2), we can write
We also have by (16)
In view of (2), (5), (6), (9), (14), (15), (22) and (24), we obtain
which illustrates (22) for and .
Next, we have to prove that . By (5), (6), (10), (13), (15), (16) and (28), we get
Hence, is valid by solver (2), and
Then, by the last sub step of solver (2), (5), (6), (11), (15), (28) and (30), we have in turn that
which shows (20) for the initial index. By replacing the quantities for the initial index with those for a general index in the preceding estimates, we obtain (19) and (20) for every index. Then, in view of the estimates
we conclude that the iterates remain in the stated ball and converge to α. Finally, the uniqueness of the solution must be established. To this end, assume that there exists a second zero of Γ in the indicated set, and introduce the associated linear operator Q.
So, Q is invertible in view of
which yields that the second zero coincides with α. ☐
Remark 1.
- (a)
- (b)
- (c)
- If the auxiliary functions are constant (i.e., classical Lipschitz conditions are used), the radii take a closed form, and the first of them is the convergence radius of the classical Newton's solver. Rheinboldt [22] and Traub [5] suggested instead the convergence radius 2/(3L), whereas Argyros [1,2] obtained the radius 2/(2L₀ + L), where L is a Lipschitz parameter for (10) on the given set and L₀ ≤ L. Hence, we have 2/(3L) ≤ 2/(2L₀ + L), so the latter radius is at least as large, and the ratio of the two tends to 1/3 as L₀/L → 0. The convergence radius q suggested by Dennis and Schnabel [1] is smaller than the radius given above; however, q cannot be calculated using only the Lipschitz conditions. A small numerical comparison of these radii is given after this remark.
- (d)
- By adopting conditions on the ninth-order derivative of the operator Γ, the order of convergence of solver (2) was provided by Shah et al. [20]. In contrast, we assume hypotheses only on the first-order derivative of Γ. To obtain the computational order of convergence, we adopted expressions (3) and (4).
- (e)
- Assume that Γ satisfies an autonomous differential equation [1,2] of the form Γ′(x) = P(Γ(x)), where P is a given continuous operator. Then, since Γ′(α) = P(Γ(α)) = P(0), our results apply without knowledge of α; it suffices to choose P appropriately for the given Γ.
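The following sketch makes the comparison in Remark 1(c) concrete: it evaluates the Rheinboldt/Traub radius 2/(3L) and the radius 2/(2L₀ + L) for a few hypothetical pairs of Lipschitz parameters with L₀ ≤ L, showing that the latter is never smaller and can approach three times the former.

```python
def radius_rheinboldt_traub(L):
    # Classical Newton convergence radius suggested by Rheinboldt and Traub.
    return 2.0 / (3.0 * L)

def radius_argyros(L0, L):
    # Enlarged radius using the center-Lipschitz constant L0 <= L.
    return 2.0 / (2.0 * L0 + L)

if __name__ == "__main__":
    L = 1.0
    for L0 in (1.0, 0.5, 0.1, 0.01):   # hypothetical center-Lipschitz constants
        r_tr = radius_rheinboldt_traub(L)
        r_a = radius_argyros(L0, L)
        print(f"L0={L0:<5} r_TR={r_tr:.4f}  r_A={r_a:.4f}  ratio={r_a / r_tr:.3f}")
    # As L0/L -> 0 the ratio r_A / r_TR tends to 3, i.e., the radius can be
    # up to three times larger when the center-Lipschitz constant is used.
```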
3. Numerical Experimentation
Here, we illustrate the theoretical results developed in Section 2. The same choice of the involved operator is made in the first four examples.
Example 1.
Let and . We study the mixed Hammerstein-like equation [6,23], defined by
where
defined in . The solution is the same as zero of (1), where , given as:
But
Then, we have that
and since ,
Therefore, we can choose
Hence, by Remark 1(a), we can set
However, the theorems in [20] cannot be utilized to solve this problem, because the required Lipschitz condition is not satisfied. Notice, though, that our theorems can be utilized. We have the following radii for Example 1:
so
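The point of Example 1 is that the derivative need not satisfy a Lipschitz condition for our theorems to apply. As a heavily simplified, hypothetical illustration (the actual functions of Example 1 are not reproduced above), a Hölder-type modulus w(t) = c·t^(1/2) is non-decreasing and continuous with w(0) = 0, yet w(t)/t is unbounded near 0, so no Lipschitz constant exists:

```python
import math

# Hypothetical Hölder-type modulus: non-decreasing, continuous, w(0) = 0,
# but not Lipschitz, since w(t)/t = c / sqrt(t) is unbounded as t -> 0+.
c = 1.0
w = lambda t: c * math.sqrt(t)

for t in (1e-2, 1e-4, 1e-8):
    print(f"t={t:g}  w(t)={w(t):.4g}  w(t)/t={w(t) / t:.4g}")
# The growing ratio w(t)/t shows that no finite Lipschitz constant bounds w,
# which is why Lipschitz-based results such as those in [20] do not apply,
# while the generalized conditions of Section 2 still do.
```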
Example 2.
Consider the following setting and, on the indicated domain, define a function Γ as follows:
Then, we obtain
Hence, for we can choose , , and . By adopting these functions and parameters, we obtain the following radii for Example 2:
so
Example 3.
Let us choose the space indicated below, equipped with the max norm. In addition, we consider the stated domain. Choose a function Γ on it defined by
which yields
Then, we have that and , and . We have the following radii for Example 3:
so
Example 4.
We return to the academic problem considered in the introduction. For it, we can choose the corresponding auxiliary functions and parameters. By adopting these functions and parameters, we obtain the following radii for Example 4:
so
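To close the numerical section, the following sketch shows the kind of experiment reported above, run on the hypothetical academic function from the sketch in Section 1. Solver (2) itself is not reproduced in this extract, so plain Newton iteration is used as a stand-in; the ACOC of expression (4) is then reported.

```python
import math

def f(t):
    # Hypothetical academic function from the sketch in Section 1.
    return t**3 * math.log(t**2) + t**5 - t**4

def fprime(t):
    # First derivative, the only derivative the iteration actually uses.
    return 3.0 * t**2 * math.log(t**2) + 2.0 * t**2 + 5.0 * t**4 - 4.0 * t**3

def acoc(xs):
    # Approximate computational order of convergence, expression (4).
    d = [abs(xs[n + 1] - xs[n]) for n in range(len(xs) - 1)]
    return [math.log(d[n + 1] / d[n]) / math.log(d[n] / d[n - 1])
            for n in range(1, len(d) - 1)]

# Newton iteration (a stand-in for solver (2), which is not reproduced here).
x, xs = 1.3, [1.3]
for _ in range(5):
    x = x - f(x) / fprime(x)
    xs.append(x)

print("last iterates:", xs[-3:])   # approaching the zero at t = 1
print("ACOC        :", acoc(xs)[-1])  # close to 2, as expected for Newton
```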
4. Concluding Assertions
We first generalized solver (2) from functions on the real line to Banach space valued operators. Then, we presented a local convergence analysis in this setting using generalized-continuity conditions. Our analysis uses only the first derivative, which is the only derivative appearing in the solver. In the special case of the real line, derivatives up to order seven were used in earlier work. Notice that these high-order derivatives do not appear in solver (2), yet their use limits its applicability, as we saw in the introduction. Hence, the applicability of solver (2) has been significantly extended. Numerical examples and applications complete the paper.
Author Contributions
Both authors contributed equally to this paper. All authors have read and agreed to the published version of the manuscript.
Funding
This work was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under grant No. (D-274-130-1440). The authors, therefore, acknowledge with thanks DSR technical and financial support.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Argyros, I.K. Convergence and Application of Newton-Type Iterations; Springer: New York, NY, USA, 2008.
- Argyros, I.K.; Hilout, S. Computational Methods in Nonlinear Analysis; World Scientific Publ. Comp.: Hackensack, NJ, USA, 2013.
- Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. Increasing the order of convergence of iterative schemes for solving nonlinear systems. J. Comput. Appl. Math. 2012, 252, 86–94.
- Sharma, J.R.; Guha, R.K.; Sharma, R. An efficient fourth-order weighted-Newton method for systems of nonlinear equations. Numer. Algorithms 2013, 62, 307–323.
- Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall Series in Automatic Computation; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964.
- Hernández, M.A.; Martínez, E. On the semilocal convergence of a three steps Newton-type process under mild convergence conditions. Numer. Algorithms 2015, 70, 377–392.
- Potra, F.A.; Pták, V. Nondiscrete Induction and Iterative Processes; Research Notes in Mathematics; Pitman: Boston, MA, USA, 1984; Volume 103.
- Henrici, P. Applied and Computational Complex Analysis; Wiley and Sons: New York, NY, USA, 1974; Volume 1.
- Aberth, O. Iteration methods for finding all zeros of a polynomial simultaneously. Math. Comput. 1973, 27, 339–344.
- Lázaro, M.; Martín, P.; Agüero, A.; Ferrer, I. The polynomial pivots as initial values for a new root-finding iterative method. J. Appl. Math. 2015, 2015, 413816.
- Pan, V.Y. Optimal and nearly optimal algorithms for approximating polynomial zeros. Comput. Math. Appl. 1996, 3, 97–138.
- Pan, V.Y. Solving a polynomial equation: Some history and recent progress. SIAM Rev. 1997, 39, 187–220.
- Pan, V.Y. Univariate polynomials: Nearly optimal algorithms for numerical factorization and root-finding. J. Symb. Comput. 2002, 33, 701–733.
- Pan, V.Y.; Zheng, A.L. New progress in real and complex polynomial root-finding. Comput. Math. Appl. 2011, 61, 1305–1334.
- Kyurkchiev, N.V. Initial Approximations and Root Finding Methods; Wiley: New York, NY, USA, 1998.
- Henrici, P.; Watkins, B.O. Finding zeros of a polynomial by the Q-D algorithm. Commun. ACM 1965, 8, 570–574.
- Hubbard, J.; Schleicher, D.; Sutherland, S. How to find all roots of complex polynomials by Newton’s method. Invent. Math. 2001, 146, 1–33.
- Petković, M.; Ilić, S.; Tričković, S. A family of simultaneous zero-finding methods. Comput. Math. Appl. 1997, 34, 49–59.
- Petković, M.S.; Petković, L.D.; Herceg, D.D. Point estimation of a family of simultaneous zero-finding methods. Comput. Math. Appl. 1998, 36, 1–12.
- Shah, F.A.; Noor, M.A.; Shafiq, M.A. Some generalized recurrence relations and iterative methods for nonlinear equations by using decomposition techniques. Appl. Math. Comput. 2015, 251, 378–386.
- Kou, J. A third-order modification of Newton method for systems of nonlinear equations. Appl. Math. Comput. 2007, 191, 117–121.
- Rheinboldt, W.C. An adaptive continuation process for solving systems of nonlinear equations. Pol. Acad. Sci. Banach Ctr. Publ. 1978, 3, 129–142.
- Ezquerro, J.A.; Hernández, M.A. New iterations of R-order four with reduced computational cost. BIT Numer. Math. 2009, 49, 325–342.
- Gutiérrez, J.M.; Hernández, M.A. Recurrence relations for the super-Halley method. Comput. Math. Appl. 1998, 36, 1–8.
- Petković, M.S.; Neta, B.; Petković, L.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations; Elsevier: New York, NY, USA, 2013.
- Amat, S.; Busquier, S.; Plaza, S.; Gutiérrez, J.M. Geometric constructions of iterative functions to solve nonlinear equations. J. Comput. Appl. Math. 2003, 157, 197–205.
- Amat, S.; Busquier, S.; Plaza, S. Dynamics of the King and Jarratt iterations. Aequationes Math. 2005, 69, 212–223.
- Amat, S.; Hernández, M.A.; Romero, N. A modified Chebyshev’s iterative method with at least sixth order of convergence. Appl. Math. Comput. 2008, 206, 164–174.
- Argyros, I.K.; Magreñán, Á.A. Ball convergence theorems and the convergence planes of an iterative method for nonlinear equations. SeMA 2015, 71, 39–55.
- Argyros, I.K.; George, S. Local convergence of some higher-order Newton-like method with frozen derivative. SeMA 2015, 70, 47–59.
- Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698.
- Cordero, A.; Torregrosa, J.R. Variants of Newton’s method for functions of several variables. Appl. Math. Comput. 2006, 183, 199–208.
- Ezquerro, J.A.; Hernández, M.A. A uniparametric Halley-type iteration with free second derivative. Int. J. Pure Appl. Math. 2003, 6, 99–110.
- Montazeri, H.; Soleymani, F.; Shateyi, S.; Motsa, S.S. On a new method for computing the solution of systems of nonlinear equations. J. Appl. Math. 2012, 2012, 751975.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).