Abstract
In this article, we propose a new methodology to construct and study generalized three-step numerical methods for solving nonlinear equations in Banach spaces. These methods are very general and include other methods already in the literature as special cases. The convergence analysis of the specialized methods has been given by assuming the existence of high-order derivatives that do not appear in these methods. Such constraints limit the applicability of the methods to equations involving operators that are sufficiently many times differentiable, although the methods may converge. Moreover, the convergence is shown under a different set of conditions in each study. Motivated by optimization considerations and the above concerns, we present a unified convergence analysis for the generalized numerical methods relying on conditions involving only the operators appearing in the method. This is the novelty of the article. Special cases and examples are presented to conclude this article.
MSC:
41A25; 47H17; 49M15; 65G99; 65J15
1. Introduction
A plethora of applications from diverse disciplines of computational sciences are converted to nonlinear equations such as
using (mathematical) modeling [1,2,3,4]. The nonlinear operator F is defined on an open and convex subset of a Banach space X with values in X. The solution of the equation is denoted by Numerical methods are mainly used to find since the analytic form of the solution is available only in special cases.
Researchers, as well as practitioners, have proposed numerous numerical methods under different sets of convergence conditions using high-order derivatives that are not present in the methods.
Let us consider an example.
Example 1.
Define the function F on by
Clearly, the point solves the equation It follows that
Then, the function F does not have a bounded third derivative in
Hence, the convergence of many high-order methods (although they may converge) cannot be shown with such analyses. In order to address these concerns, we propose a unified approach for dealing with the convergence of these numerical methods that takes into account only the operators appearing in them. Hence, the usage of these methods becomes possible under weaker conditions.
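A function often used in the literature to make exactly this point (assumed here for illustration; it need not be the authors' exact choice in Example 1) is F(t) = t^3 ln t^2 + t^5 - t^4 on [-1/2, 3/2], with solution t* = 1; its third derivative F'''(t) = 6 ln t^2 + 60 t^2 - 24 t + 22 is unbounded near t = 0, yet Newton's method converges:

```python
import math

def F(t):
    # assumed illustration: F(t) = t^3 ln(t^2) + t^5 - t^4, with F(1) = 0
    return t**3 * math.log(t**2) + t**5 - t**4

def dF(t):
    # F'(t) = 3 t^2 ln(t^2) + 2 t^2 + 5 t^4 - 4 t^3
    return 3 * t**2 * math.log(t**2) + 2 * t**2 + 5 * t**4 - 4 * t**3

def d3F(t):
    # F'''(t) = 6 ln(t^2) + 60 t^2 - 24 t + 22: unbounded as t -> 0
    return 6 * math.log(t**2) + 60 * t**2 - 24 * t + 22

# Newton's method converges to t* = 1 ...
t = 0.9
for _ in range(10):
    t -= F(t) / dF(t)
print(t)  # ≈ 1.0

# ... even though F''' blows up near the origin:
print(d3F(1e-6), d3F(1e-12))  # increasingly large in magnitude
```

A Taylor-based analysis requiring a bounded third derivative on the whole interval therefore cannot certify the convergence observed above.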
Let be a starting point. Define the generalized numerical method by
where and are given operators chosen so that
The specialization of (2) is
where or or or and are linear operators on and with values in respectively. By choosing some of these linear operators equal to the zero linear operator O in (3), we obtain the methods studied in [5]. Moreover, if then we obtain the methods studied in [6,7]. In particular, the methods in [5] are of the special form
while the methods in [7,8] are of the form
where is a given parameter, and are linear operators acting between and In particular, operators must have a special form to obtain the fourth, seventh or eighth order of convergence.
Further specifications of operators “” lead to well-studied methods, a few of which are listed below (other choices can be found in [6,7,9,10]):
- Newton method (second order) [1,4,11,12]:
- Jarratt method (second order) [13]:
- Traub-type method (fifth order) [14]:
- Homeier method (third order) [15]:
- Cordero–Torregrosa method (third order) [2]: or
- Noor–Wasseem method (third order) [3]:
- Xiao–Yin method (third order) [16]:
- Cordero–Torregrosa method (fifth order) [2]: or
- Sharma–Arora method (fifth order) [17,18]:
- Xiao–Yin method (fifth order) [16]:
- Traub-type method (second order) [14]:where is a divided difference of order one.
- Moccari–Lofti method (fourth order) [19]:
- Wang–Zhang method (seventh order) [8,16,20]:where is any fourth-order Steffensen-type iteration method.
- Sharma–Arora method (seventh order) [17]:
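As one representative member of family (2), consider a frozen-Jacobian three-step variant in which every substep reuses F'(x_n) (an illustrative assumption, not one of the cited methods verbatim), applied here to a small test system chosen for convenience:

```python
import numpy as np

def F(v):
    # illustrative test system with solution (1, 2)
    x, y = v
    return np.array([x**2 + y - 3.0, x + y**2 - 5.0])

def J(v):
    # Jacobian F'(v)
    x, y = v
    return np.array([[2.0 * x, 1.0], [1.0, 2.0 * y]])

def three_step(x0, tol=1e-12, maxit=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        Jx = J(x)                              # Jacobian frozen at x_n
        y = x - np.linalg.solve(Jx, F(x))      # first substep: Newton
        z = y - np.linalg.solve(Jx, F(y))      # second substep
        x_new = z - np.linalg.solve(Jx, F(z))  # third substep
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

sol = three_step([1.5, 2.5])
print(sol)  # ≈ [1, 2]
```

Different choices of the linear operators in the three substeps recover the specializations listed above.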
The local, as well as the semi-local, convergence for methods (4) and (5) was presented in [17], respectively, using hypotheses relating only to the operators in these methods. However, the local convergence analysis of method (6) requires the usage of derivatives or divided differences of order higher than two, which do not appear in method (6). These high-order derivatives restrict the applicability of method (6) to equations whose operator F has high-order derivatives, although method (6) may converge (see Example 1).
Similar restrictions exist for the convergence of the aforementioned methods of order three or above.
It is also worth noticing that the fifth-order method by Sharma [18]
cannot be handled with the analyses given previously [5,6,7] for method (4), method (5), or method (6).
Based on all of the above, it is clearly important to study the convergence of method (2) and its specialization, method (3), with the approach employed for method (4) or (5). This way, the resulting unified convergence criteria can be applied to their specialized methods, whether listed previously or not. Hence, this is the motivation as well as the novelty of the article.
There are two important types of convergence: the semi-local and the local. The semi-local uses information involving the initial point to provide criteria, assuring the convergence of the numerical method, while the local one is based on the information about the solution to find the radii of the convergence balls.
Local convergence results are vital even though the solution is unknown in general, since they determine the convergence order of the numerical method. This kind of result also demonstrates the degree of difficulty in selecting starting points. There are cases when the radius of convergence of the numerical method can be determined without knowledge of the solution.
As an example, let Suppose function F satisfies an autonomous differential equation [5,21] of the form
where H is a continuous function. Notice that or In the case of , we can choose (see also the numerical section).
Moreover, the local results can be applied to projection numerical methods, such as Arnoldi’s, the generalized minimum residual method (GMRES) and the generalized conjugate residual method (GCR), to combined Newton/finite projection numerical methods, and, in relation to the mesh independence principle, to develop the cheapest and most efficient mesh refinement techniques [1,5,11,21].
In this article, we introduce a majorant sequence and use our idea of recurrent functions to extend the applicability of the numerical method (2). Our analysis includes error bounds and results on the uniqueness of based on computable Lipschitz constants not given before in [5,13,21,22,23,24] and in other similar studies using the Taylor series. This idea is very general. Hence, it applies also to other numerical methods [10,14,22,25].
2. Convergence Analysis of Method
The local convergence analysis is followed by the semi-local one. Let and for some Consider functions and that are continuous and nondecreasing in each variable.
Suppose that equations
have the smallest solutions, The parameter defined by
shall be shown to be a radius of convergence for method (2). Let It follows by the definition of radius that for all
The notation denotes an open ball with center and of radius By , we denote the closure of
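As a concrete illustration of how such a radius is computed: under the classical center-Lipschitz condition with constant L, a standard first-substep choice from the Newton literature (an assumption here, since the h-functions of this section are kept general) is h1(t) = Lt/(2(1 - Lt)); the radius is then the smallest positive root of h1(t) = 1, namely 2/(3L), which bisection recovers:

```python
def smallest_positive_root(g, lo, hi, iters=200):
    # Bisection for the smallest root of g in (lo, hi); assumes g < 0
    # near lo and g nondecreasing, as for the h-functions of this section.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

L = 2.0                                        # assumed Lipschitz constant
h1 = lambda t: L * t / (2.0 * (1.0 - L * t))   # classical Newton choice
rho = smallest_positive_root(lambda t: h1(t) - 1.0, 0.0, 1.0 / L)
print(rho)   # 2 / (3 L) = 1/3 here
```

The radius of convergence of a multi-step method is then the minimum of the corresponding roots over its substeps.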
The following conditions are used in the local convergence analysis of the method (2).
Suppose the following:
Next, the main local convergence analysis is presented for method (2).
Theorem 1.
Suppose that the conditions (H1)–(H4) hold and Then, the sequence generated by method (2) is well defined and converges to Moreover, the following estimates hold
and
Proof.
Let Then, it follows from the first condition in (H1) the definition of (26) (for ) and the first substep of method (2) for that
showing estimate (27) for and the iterate Similarly,
and
showing estimates (28) and (29), respectively, and the iterates By simply replacing by in the preceding calculations, the induction for estimates (27)–(29) is completed. Then, from the estimate
where
we conclude and □
Remark 1.
It follows from the proof of Theorem 1 that can be chosen in particular as and Thus, condition (H2) should hold for all and not Clearly, in this case, the resulting functions are at least as tight as the functions , leading to a radius of convergence at least as large as ρ (see the numerical section).
Concerning the semi-local convergence of method (2), let us introduce scalar sequences and defined for and the rest of the iterates, depending on operators and F (see the next section). These sequences shall be shown to be majorizing for method (2). However, first, a convergence result for these sequences is needed.
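As a concrete stand-in for the general sequences of this section (an assumption for illustration), the classical Kantorovich majorizing sequence for Newton's method, t_0 = 0, t_1 = eta, t_{n+1} = t_n + L (t_n - t_{n-1})^2 / (2 (1 - L t_n)), is nondecreasing and, provided 2 L eta <= 1, converges to its least upper bound t* = (1 - sqrt(1 - 2 L eta)) / L:

```python
import math

def kantorovich_sequence(L, eta, n=30):
    # Majorizing sequence for Newton's method: an illustrative special
    # case of the general sequences considered in this section.
    t_prev, t = 0.0, eta
    seq = [t_prev, t]
    for _ in range(n):
        t_prev, t = t, t + L * (t - t_prev) ** 2 / (2.0 * (1.0 - L * t))
        seq.append(t)
    return seq

L, eta = 1.0, 0.4                 # assumed data; here 2 L eta = 0.8 <= 1
seq = kantorovich_sequence(L, eta)
t_star = (1.0 - math.sqrt(1.0 - 2.0 * L * eta)) / L  # least upper bound
print(seq[-1], t_star)            # the sequence approaches t_star
```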
Lemma 1.
Suppose that
and
for some Then, the sequence is convergent to its unique least upper bound
Proof.
Theorem 2.
Suppose the following:
(H5) Iterates generated by method (2) exist, belong in and satisfy the conditions of Lemma 1 for all
(H6)
and
for all and
(H7)
Then, there exists such that
Proof.
It follows by condition (H5) that the scalar sequence is Cauchy, being convergent. Thus, by condition (H6), the sequence is also Cauchy in the Banach space X, and as such, it converges to some (since is a closed set). □
Remark 2.
(i) Additional conditions are needed to show The same is true for the results on the uniqueness of the solution.
(ii) The limit point is not given in closed form. So, it can be replaced by λ in Theorem 2.
3. Special Cases I
The iterates of method (3) are assumed to exist, and operator F has a divided difference of order one.
Local Convergence
Three possibilities are presented for the local cases based on different estimates for the determination of the functions It follows by method (3) that
- (P1)
- andHence, the functions are selected to satisfyA practical non-discrete choice for the function is given byAnother choice is given byThe choices of functions and can follow similarly.
- (P2)
- Let be a linear operator. By we denoteThus, the functions must satisfyandClearly, the function can be chosen again as in case (P1). The functions and can be defined similarly.
- (P3)
- Assume ∃ function continuous and non-decreasing such thatThen, we can writeleading to
Similarly, for the other two steps, we obtain in the last choice
and
Thus, the function satisfies
or
Finally, concerning the choice of the function by the third substep of method (3)
so the function must satisfy
or
where
The functions and can also be defined with the other two choices as those of function given previously.
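Method (3) and the choices (P1)–(P3) above rest on a first-order divided difference [x, y; F]. One standard componentwise realization (a common construction in the literature, assumed here for illustration) satisfies the secant identity [x, y; F](x - y) = F(x) - F(y):

```python
import numpy as np

def divided_difference(F, x, y):
    # Componentwise first-order divided difference [x, y; F]:
    # column j mixes the first j components of x with the trailing
    # components of y, so the columns telescope to F(x) - F(y).
    n = len(x)
    M = np.empty((n, n))
    for j in range(n):
        u = np.concatenate([x[:j + 1], y[j + 1:]])
        v = np.concatenate([x[:j], y[j:]])
        M[:, j] = (F(u) - F(v)) / (x[j] - y[j])
    return M

def F(v):
    x1, x2 = v
    return np.array([x1**2 + x2 - 3.0, x1 + x2**2 - 5.0])  # illustrative

x = np.array([1.2, 2.1])
y = np.array([0.8, 1.7])
M = divided_difference(F, x, y)
print(np.allclose(M @ (x - y), F(x) - F(y)))  # True: secant identity
```

As x approaches y, this operator approaches the Jacobian F'(y), which is why such divided differences can replace derivatives in the derivative-free members of family (3).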
Semi-local Convergence
Concerning this case, instead of the conditions of Theorem 2 (see (H6)), we can have for method (3)
and
Notice that under these choices,
and
Then, the conclusions of Theorem 2 hold for method (3). Even more specialized choices of the linear operators appearing in these methods, as well as function can be found in the Introduction, the next section, or in [1,2,11,21] and the references therein.
4. Special Cases II
This section contains even more specialized cases of method (2) and method (3). In particular, we study the local and semi-local convergence first of method (22) and second of method (20). Notice that to obtain method (22), we set in method (3)
Moreover, for method (20), we let
and
5. Local Convergence of Method
The local convergence analysis of method (23) utilizes some functions and parameters. Let
Suppose the following:
- (i)
- ∃ function continuous and non-decreasing such that equationhas a smallest solution Let
- (ii)
- ∃ function continuous and non-decreasing such that equationhas a smallest solution where the function defined by
- (iii)
- Equationhas a smallest solution Letand
- (iv)
- Equationhas a smallest solution where the function is defined as
- (v)
- Equationhas a smallest solution where the function is defined by
The parameter defined by
is proven to be a radius of convergence for method (23) in Theorem 3. Let Then, it follows by these definitions that
and
The conditions required are as follows:
(C1) Equation has a simple solution
(C2)
Set
(C3)
and
(C4)
Next, the main local convergence result follows for method (23).
Theorem 3.
Suppose that conditions (C1)–(C4) hold and Then, the sequence generated by method (23) is well defined in remains in and is convergent to Moreover, the following assertions hold:
and
where functions are defined previously and the radius ρ is given by Formula (37).
Proof.
Let By using conditions (C1), (C2) and (37), we have that
If then the iterate is well defined by the first substep of method (23) and we can write
Thus, the iterate and (41) holds for The iterate is well defined by the second substep of method (23), so we can write
Notice that linear operator exists by (45) (for ). It follows by (37), (40) (for ), (C3), (45) (for ), in turn that
Thus, the iterate and (42) holds for where we also used (C1) and (C2) to obtain the estimate
Moreover, the iterate is well defined by the third substep of method (23), so we can have
leading to
Therefore, the iterate and (43) holds for
The uniqueness of the solution result for method (23) follows.
Proposition 1.
Suppose the following:
- (i)
- Equation has a simple solution for some
- (ii)
- Condition (C2) holds.
- (iii)
- There exists such that
Set Then, the only solution of equation in the set is
Proof.
Let be such that Define the linear operator It then follows by (ii) and (52) that
Hence, we deduce by the invertibility of J and the estimate □
Remark 3.
Under all conditions of Theorem 3, we can set
Example 2.
Consider the motion system
with Let Let Let function F on Ω for given as
Using this definition, we obtain the derivative as
Hence, Let with Moreover, the norm for is
Conditions (C1)–(C3) are verified for and Then, the radii are
Example 3.
If is equipped with the max-norm, consider given as
We obtain
Clearly, and the conditions (C1)–(C3) hold for and Then, the radii are
6. Semi-Local Convergence of Method
As in the local case, we use some functions and parameters for the method (23).
Suppose:
There exists function that is continuous and non-decreasing such that equation
has a smallest solution Consider function to be continuous and non-decreasing. Define the scalar sequences for and by
This sequence is proven to be majorizing for method (23) in Theorem 4. However, first, we provide a general convergence result for sequence (54).
Lemma 2.
Suppose that
and there exists such that
Then, sequence converges to some
Proof.
Next, the operator F is related to the scalar functions.
Suppose the following:
- (h1)
- There exists such that and
- (h2)
- for allSet
- (h3)
- for all
- (h4)
- Conditions of Lemma 2 hold.and
- (h5)
We present the semi-local convergence result for the method (23).
Theorem 4.
Suppose that conditions (h1)–(h5) hold. Then, sequence given by method (23) is well defined, remains in and converges to a solution of equation Moreover, the following assertions hold:
and
Proof.
Thus, the iterate and (57) holds for
Let Then, as in Theorem 3, we get
Hence, if we set , iterates and are well defined by method (23) for Suppose iterates also exist for all integer values k smaller than Then, we have the estimates
and
where we also used
so
and
so
and
Proposition 2.
Suppose:
- (i)
- There exists a solution of equation for some
- (ii)
- Condition (h2) holds.
- (iii)
- There exists such that
Set Then, is the only solution of equation in the region
Proof.
Let with Define the linear operator Then, by (h2) and (62), we obtain in turn that
Thus, □
The next two examples show how to choose the functions , and the parameter
Example 4.
Set Let us consider a scalar function F defined on the set for by
Choose Then, the conditions (h1)–(h3) are verified for and
Example 5.
Consider and Then the problem [5]
is also given as an integral equation of the form
where ι is a constant and is the Green’s function
Consider as
Choose and Then, clearly since If then conditions (C1)–(C3) are satisfied for
Hence,
7. Local Convergence of Method
The local analysis relies on certain parameters and real functions. Let and be positive parameters. Set provided that
Define the function by
Notice that parameter
is the only solution of equation
in the set
Define the parameter by
Notice that Set
Define the function by
The equation
has a smallest solution by the intermediate value theorem, since and as It shall be shown that R is a radius of convergence for method (20). It follows by these definitions that
and
The following conditions are used:
- (C1)
- There exists a solution of equation such that
- (C2)
- There exist positive parameters and such thatandSet
- (C3)
- There exists a positive constant such thatand
- (C4)
Next, the local convergence of method (20) is presented using the preceding terminology and conditions.
Theorem 5.
Under conditions (C1)–(C4), further suppose that Then, the sequence generated by method (20) is well defined in stays in and is convergent to so that
and
where the functions and the radius ρ are defined previously.
Proof.
It follows by method (20), (C1), (C2) and in turn that
Hence, the iterate exists by the first substep of method (20) for It follows from the first substep of method (20), (C2) and (C3), that
Hence, and
Thus, the iterate exists by the second substep of method (20). Then, as in (70), we obtain in turn that
Therefore, the iterate and (67) holds for
Concerning the uniqueness of the solution (not given in [9]), we provide the result.
Proposition 3.
Suppose:
- (i)
- The point is a simple solution for some of equation
- (ii)
- There exists positive parameter such that
- (iii)
- There exists such that
Set Then, is the only solution of equation in the set
Proof.
Thus, we conclude by the invertibility of P and identity □
Remark 4.
(i) Notice that not all conditions of Theorem 5 are used in Proposition 3. If they were, then we can set
(ii) By the definition of set we have
Therefore, the parameter
where is the corresponding Lipschitz constant in [1,3,9,19] appearing in the condition
Thus, the radius of convergence in [1,7,8,20] uses instead of That is by (78)
8. Majorizing Sequences for Method
Let be given positive parameters and and Consider recurrent polynomials defined on the interval T for by
and polynomials
and
Then, the following auxiliary result connecting these polynomials can be shown.
Lemma 3.
The following assertions hold:
polynomials and have smallest zeros in the interval denoted by and respectively,
and
Moreover, define functions on the interval T by
and
Then,
and
Proof.
Assertions (81)–(84) hold by the definition of these functions and basic algebra. By the intermediate value theorem, polynomials and have zeros in the interval since and Then, assertions (85) and (86) follow by the definition of these polynomials and zeros and Next, assertions (91) and (94) also follow from (87), (88) and the definition of these polynomials. □
The preceding result is connected to the scalar sequence defined by
where
Moreover, define parameters and
Then, the first convergence result for sequence follows.
Lemma 4.
Suppose
and
Then, scalar sequence is non-decreasing, bounded from above by and converges to its unique least upper bound Moreover, the following error bounds hold
and
Proof.
By the definition of we obtain
so and (103) holds for Suppose assertions (101)–(103) hold for each By (99) and (100) we have
and
By the induction hypotheses sequences are increasing. Evidently, estimate (101) holds if
or
where By (91), (93), and (98) estimate (107) holds.
By (92) and (94), assertion (108) holds. Hence, (100) and (103) also hold. Notice that can be written as where and Hence, we get
and
so
It follows that sequence is non-decreasing, bounded from above by Thus, it converges to □
Next, a second convergence result for sequence (95) is presented, whose sufficient criteria are weaker but more difficult to verify than those of Lemma 4.
Lemma 5.
Suppose
and
hold. Then, sequence is increasing and bounded from above by so it converges to its unique least upper bound
9. Semi-Local Convergence of Method
The conditions (C) shall be used in the semi-local convergence analysis of method (20).
Suppose
- (C1)
- There exist such that and
- (C2)
- There exists such that for allSet for
- (C3)
- There exists such that for all
- (C4)
- where
Remark 5.
The results in [19] are given in non-affine form. The benefits of using affine invariant results over non-affine ones are well known [1,5,11,21]. In particular, they assumed and
(C3)′ holds for all By the definition of the set we get
so
and
Hence, K can replace in the results in [19]. Notice also that using (C3)′ they estimated
and
where are defined for by
where But using the weaker condition (C2) we obtain respectively,
and
which are tighter estimates than (115) and (116), respectively. Hence, can replace and (118), (119) can replace (115), (116), respectively, in the proof of Theorem 3 in [19]. Examples where (112)–(114) are strict can be found in [1,5,11,21]. Simple induction shows that
and
These estimates justify the claims made in the introduction of this work along the same lines. The local results in [19] can also be extended using our technique.
Next, we present the semi-local convergence result for the method (20).
Theorem 6.
Suppose that conditions (C) hold. Then, the iteration generated by method (20) exists in remains in and with so that
Proof.
It follows from the comment above Theorem 6. □
Next, we present the uniqueness of the solution result, where conditions (C) are not necessarily utilized.
Proposition 4.
Suppose the following:
- (i)
- There exists a simple solution for some
- (ii)
- Condition (C2) holdsand
- (iii)
- There exists such that
Set Then, the element is the only solution of equation in the region
Proof.
Let with Define Then, in view of (ii) and (iii),
Therefore, we conclude is a consequence of the invertibility of Q and the identity □
Remark 6.
(i) Notice that r can be chosen to be
(ii) The results can be extended further as follows. Replace
(C3)″ and Then, we have
(iii)
Another way is to define the set provided that Moreover, suppose Then, if condition (C3)″ holds on , say, with constant , we have that
also holds. Hence, tighter or can replace K in Theorem 6.
10. Conclusions
The convergence analysis is developed for generalized three-step numerical methods. The advantages of the new approach include weaker convergence criteria and a uniform set of conditions utilizing information on these methods, in contrast to earlier works on special cases of these methods, where the existence of high-order derivatives is assumed to prove convergence. The methodology is very general and does not depend on the methods. That is why it can be applied to multi-step and other numerical methods, which shall be the topic of future work.
A weak point of this methodology is that the computation of the majorant functions “h” at this level of generality is hard in general. Notice that this is not the case for the special cases of method (2) or method (3) given below them (see, for example, Examples 4 and 5). As far as we know, no other methodology can be compared with the one introduced in this article for handling the semi-local or local convergence of method (2) or method (3) at this level of generality.
Author Contributions
Conceptualization, M.I.A., I.K.A., S.R. and S.G.; methodology, M.I.A., I.K.A., S.R. and S.G.; software, M.I.A., I.K.A., S.R. and S.G.; validation, M.I.A., I.K.A., S.R. and S.G.; formal analysis, M.I.A., I.K.A., S.R. and S.G.; investigation, M.I.A., I.K.A., S.R. and S.G.; resources, M.I.A., I.K.A., S.R. and S.G.; data curation, M.I.A., I.K.A., S.R. and S.G.; writing—original draft preparation, M.I.A., I.K.A., S.R. and S.G.; writing—review and editing, M.I.A., I.K.A., S.R. and S.G.; visualization, M.I.A., I.K.A., S.R. and S.G.; supervision, M.I.A., I.K.A., S.R. and S.G.; project administration, M.I.A., I.K.A., S.R. and S.G.; funding acquisition, M.I.A., I.K.A., S.R. and S.G. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Appell, J.; DePascale, E.; Lysenko, J.V.; Zabrejko, P.P. New results on Newton-Kantorovich approximations with applications to nonlinear integral equations. Numer. Funct. Anal. Optim. 1997, 18, 1–17. [Google Scholar] [CrossRef]
- Ezquerro, J.A.; Hernandez, M.A. Newton’s Method: An Updated Approach of Kantorovich’s Theory; Birkhäuser: Cham, Switzerland, 2018. [Google Scholar]
- Proinov, P.D. New general convergence theory for iterative processes and its applications to Newton-Kantorovich type theorems. J. Complex. 2010, 26, 3–42. [Google Scholar] [CrossRef] [Green Version]
- Regmi, S.; Argyros, I.K.; George, S.; Argyros, C. Numerical Processes for Approximating Solutions of Nonlinear Equations. Axioms 2022, 11, 307. [Google Scholar] [CrossRef]
- Argyros, I.K. The Theory and Applications of Iteration Methods, 2nd ed.; Engineering Series; CRC Press: Boca Raton, FL, USA; Taylor and Francis Group: Abingdon, UK, 2022. [Google Scholar]
- Zhanlav, K.H.; Otgondorj, K.H.; Sauul, L. A unified approach to the construction of higher-order derivative-free iterative methods for solving systems of nonlinear equations. Int. J. Comput. Math. 2021. [Google Scholar]
- Zhanlav, T.; Chun, C.; Otgondorj, K.H.; Ulziibayar, V. High order iterations for systems of nonlinear equations. Int. J. Comput. Math. 2020, 97, 1704–1724. [Google Scholar] [CrossRef]
- Wang, X. An Ostrowski-type method with memory using a novel self-accelerating parameters. J. Comput. Appl. Math. 2018, 330, 710–720. [Google Scholar] [CrossRef]
- Moccari, M.; Lofti, T. On a two-step optimal Steffensen-type method: Relaxed local and semi-local convergence analysis and dynamical stability. J. Math. Anal. Appl. 2018, 468, 240–269. [Google Scholar] [CrossRef]
- Shakhno, S.M.; Gnatyshyn, O.P. On an iterative Method of order 1.839… for solving nonlinear least squares problems. Appl. Math. Comput. 2005, 161, 253–264. [Google Scholar]
- Argyros, I.K. Unified Convergence Criteria for Iterative Banach Space Valued Methods with Applications. Mathematics 2021, 9, 1942. [Google Scholar] [CrossRef]
- Potra, F.-A.; Pták, V. Nondiscrete Induction and Iterative Processes; Pitman Publishing: Boston, MA, USA, 1984. [Google Scholar]
- Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
- Traub, J.F. Iterative Methods for the Solution of Equations; Prentice Hall: Hoboken, NJ, USA, 1964. [Google Scholar]
- Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Pergamon Press: Oxford, UK, 1982. [Google Scholar]
- Xiao, X.; Yin, H. Achieving higher order of convergence for solving systems of nonlinear equations. Appl. Math. Comput. 2017, 311, 251–261. [Google Scholar] [CrossRef]
- Sharma, J.R.; Arora, H. Efficient derivative-free numerical methods for solving systems of nonlinear equations. Comput. Appl. Math. 2016, 35, 269–284. [Google Scholar] [CrossRef]
- Sharma, J.R.; Guha, R.K. Simple yet efficient Newton-like method for systems of nonlinear equations. Calcolo 2016, 53, 451–473. [Google Scholar] [CrossRef]
- Noor, M.A.; Waseem, M. Some iterative methods for solving a system of nonlinear equations. Comput. Math. Appl. 2009, 57, 101–106. [Google Scholar] [CrossRef] [Green Version]
- Wang, X.; Zhang, T. A family of Steffensen type methods with seventh-order convergence. Numer. Algor. 2013, 62, 429–444. [Google Scholar] [CrossRef]
- Argyros, I.K.; Magréñan, A.A. A Contemporary Study of Iterative Methods; Elsevier: Amsterdam, The Netherlands; Academic Press: New York, NY, USA, 2018. [Google Scholar]
- Grau-Sanchez, M.; Grau, A.; Noguera, M. Ostrowski type methods for solving system of nonlinear equations. Appl. Math. Comput. 2011, 218, 2377–2385. [Google Scholar] [CrossRef]
- Homeier, H.H.H. A modified Newton method with cubic convergence: The multivariate case. J. Comput. Appl. Math. 2004, 169, 161–169. [Google Scholar] [CrossRef] [Green Version]
- Kou, J.; Wang, X.; Li, Y. Some eight order root finding three-step methods. Commun. Nonlinear Sci. Numer. Simul. 2010, 15, 536–544. [Google Scholar] [CrossRef]
- Verma, R. New Trends in Fractional Programming; Nova Science Publisher: New York, NY, USA, 2019. [Google Scholar]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).