Abstract
New three-step with-memory iterative methods for solving nonlinear equations are presented. We enhance the convergence order of an existing eighth-order without-memory iterative method by transforming it into a with-memory method. The acceleration of the convergence order is achieved by introducing two self-accelerating parameters computed using the Hermite interpolating polynomial. The R-order of convergence of the proposed uni- and bi-parametric with-memory methods is thereby increased from 8 to 9 and 10, respectively. This increase in convergence order is accomplished without requiring additional function evaluations, making the with-memory methods computationally efficient. The efficiency index of our with-memory methods NWM9 and NWM10 increases from 1.6818 to 1.7320 and 1.7783, respectively. Numerical testing confirms the theoretical findings and emphasizes the superior efficacy of the suggested methods when compared to some well-known methods in the existing literature.
Keywords:
nonlinear equations; adaptive method with memory; self-accelerator; R-order; efficiency index
MSC:
65H05; 28A80; 41A25
1. Introduction
The pursuit of computing accurate solutions of nonlinear equations is a continual quest in the field of numerical computation. Analytical methods for finding the exact roots of a nonlinear equation f(x) = 0, where f is a real function defined on the open interval I, are either complex or nonexistent. We can therefore only depend on iterative methods to obtain an approximate solution with the desired level of accuracy. The Newton method is one of the most widely used iterative methods for solving nonlinear equations, given by
x_{n+1} = x_n - f(x_n)/f'(x_n), n = 0, 1, 2, ...
The Newton method is a single-point, second-order method that requires the evaluation of one function and one derivative at each iteration. It is a method without memory, and it is optimal as per the Kung–Traub conjecture [1], which states that any without-memory iterative method requiring k function evaluations per iteration is considered optimal when its order of convergence equals 2^(k-1). A one-point iterative method relying on k function evaluations achieves a maximum order of k [1,2]. Multipoint iterative schemes are highly significant as they surpass the theoretical limits of any one-point iterative method. Nevertheless, they may also lower the efficiency index of the method. The performance of an iterative method is quantified by its efficiency index, expressed as follows [1,3]:
E = p^(1/d).
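As an illustrative sketch of the Newton iteration described above (the test function, initial guess and tolerance below are our own choices, not from the paper):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Return an approximate root of f and the number of iterations performed."""
    x = x0
    for n in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x, n
        x = x - fx / df(x)  # Newton step: x_{n+1} = x_n - f(x_n)/f'(x_n)
    return x, max_iter

# Example: f(x) = x^2 - 2 has the root sqrt(2).
root, iters = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Each step costs one evaluation of f and one of f', matching the two evaluations per iteration noted in the text.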
Here, p represents the order of the iterative method and d the number of function evaluations per iteration. However, the with-memory approach not only exceeds this theoretical limit but also enhances the efficiency indices of the methods. In recent years, there has been considerable interest in extending without-memory methods to with-memory methods by using accelerating parameters [4,5]. The order of convergence of multipoint with-memory iterative methods is greatly boosted, without any additional function evaluations, by utilizing information from both the current and previous iterations.
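The three-step methods considered here use d = 4 function evaluations per iteration, so the efficiency indices quoted in the abstract can be reproduced directly (a minimal sketch; note the abstract's 1.7320 truncates 9^(1/4) ~ 1.73205):

```python
def efficiency_index(p, d):
    """Efficiency index E = p**(1/d): order p, d function evaluations per iteration."""
    return p ** (1.0 / d)

# Three-step schemes here use d = 4 evaluations per iteration.
indices = {p: round(efficiency_index(p, 4), 4) for p in (8, 9, 10)}
```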
In this manuscript, a novel parametric three-point iterative method with memory is developed, wherein the R-order of convergence is enhanced from 8 to 9 and 10 using one and two parameters, respectively. The remaining part of this work is organized as follows: In Section 2, we develop new parametric three-point iterative methods with memory by introducing self-accelerating parameters using Hermite-interpolating polynomials. In Section 3, we present the results of numerical calculations by comparing the newly proposed methods with other well-known methods on test functions. Finally, concluding remarks are provided in Section 4.
2. Analysis of Convergence for With-Memory Methods
In this section, we first introduce a parameter in the second step, and then another parameter in the third step, of the three-step scheme proposed by Sharma and Arora [6] in 2021. We increase the order of convergence by replacing these fixed parameters with iteratively updated self-accelerating parameters.
2.1. The Uni-Parametric With-Memory Method and Its Convergence Analysis
Here, we introduce the parameter in the second sub-step of the eighth-order without-memory scheme presented in [6]:
The error expressions for each sub-step of the above scheme are:
and
where e_n = x_n - ξ and c_j = f^(j)(ξ)/(j! f'(ξ)), for j = 2, 3, .... We obtain the following with-memory iterative scheme by replacing the fixed parameter with its iterative counterpart in (3):
and the above scheme is denoted by NWM9. Now, from Expression (6), it is clear that the convergence order of Algorithm (3) is eight. Next, to accelerate the order of convergence of the algorithm presented in (3) from eight to nine, we would need the exact value of the parameter that annihilates the eighth-order error term; this value depends on derivatives of f at the root ξ and is therefore not attainable in practice. So, we approximate the parameter using the available data from the current and previous iterations, chosen so that the eighth-order asymptotic convergence constant in Error Expression (6) vanishes. The formula for the self-accelerating parameter is as follows:
where
Note: The required interpolation conditions are satisfied by the Hermite interpolation polynomial, so the self-accelerating parameter can be expressed in terms of the derivatives of this polynomial.
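The paper builds these estimates from fifth- and sixth-degree Hermite polynomials over several nodes. As a simplified sketch of the same idea (reusing already-computed values of f and f', so no extra function evaluations are needed), the two-point cubic Hermite interpolant gives a closed-form second-derivative estimate at the newest node:

```python
import math

def hermite_second_derivative(x0, f0, df0, x1, f1, df1):
    """Second derivative at x1 of the cubic Hermite interpolant that matches
    f and f' at both x0 and x1 (closed form, with h = x1 - x0)."""
    h = x1 - x0
    return 6.0 * (f0 - f1) / h ** 2 + 2.0 * (df0 + 2.0 * df1) / h

# For f(x) = exp(x), the estimate approaches f''(1) = e as x0 -> x1 = 1.
est = hermite_second_derivative(0.9, math.exp(0.9), math.exp(0.9),
                                1.0, math.e, math.e)
```

The estimate is exact for polynomials up to degree three, which is why higher-degree Hermite interpolants (as used in the paper) can cancel correspondingly higher-order error terms.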
Theorem 1.
Let the Hermite polynomial of degree m interpolate the function f at interpolation nodes belonging to an interval I, where the derivative f^(m+1) is continuous in I and the Hermite polynomial matches f and its prescribed derivatives at the nodes. Suppose that
- (1) All nodes are sufficiently near to the root ξ.
- (2) The stated condition holds.

Then the error estimates of the theorem follow.
Proof.
We can calculate the expressions of the sixth-degree and fifth-degree Hermite interpolation polynomials as:
Now, we obtain the following equations by differentiating Equation (14) three times and Equation (15) two times, each evaluated at its respective iteration point:
Next, a Taylor series expansion of f at the interpolation nodes in I about the zero ξ of f provides
The definition of the R-order of convergence [7] and the following statement [8] can be used to estimate the order of convergence of the iterative scheme (7).
Theorem 2.
If the errors e_k = x_k - ξ evaluated by an iterative root-finding method (IM) fulfill
e_{k+1} ≤ c (e_k)^m (e_{k-1})^n, k ≥ k_0,
then the R-order of convergence of IM, denoted by O_R(IM, ξ), satisfies the inequality O_R(IM, ξ) ≥ t*, where t* is the unique positive solution of the equation t^2 - m t - n = 0.
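Under a statement of this type, the R-order is the positive root of a quadratic indicial equation. A small sketch (the exponent pairs below are illustrative choices whose positive roots match the claimed orders 9 and 10; the exact exponents come from the error relations in the proofs):

```python
import math

def r_order(m, n):
    """Unique positive root of t**2 - m*t - n = 0, via the quadratic formula."""
    return (m + math.sqrt(m * m + 4.0 * n)) / 2.0

# Illustrative exponent pairs whose positive roots are 9 and 10:
order_nwm9 = r_order(8, 9)    # (t - 9)(t + 1) = t^2 - 8t - 9
order_nwm10 = r_order(9, 10)  # (t - 10)(t + 1) = t^2 - 9t - 10
```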
Now, for the new with-memory iterative scheme (7), we can state the following convergence theorem.
Theorem 3.
Proof.
Suppose the IM produces a sequence converging to the root ξ of f with a certain R-order; then we can express
and
Next, this error ratio tends to the asymptotic error constant as k → ∞, and then
The error expression for the with-memory scheme (7) can be obtained using (4)–(6) and the varying parameter .
and
Here, the higher-order terms in Equations (34)–(36) are excluded. Now, let the R-orders of convergence of the intermediate iterative sequences be p and q, respectively; then
and
2.2. The Bi-Parametric With-Memory Method and Its Convergence Analysis
Now, we introduce a new parameter in the third sub-step of the uni-parametric with-memory method presented in (7):
Now, we obtain the error expressions for each sub-step of (43) as:
and
where e_n = x_n - ξ and c_j = f^(j)(ξ)/(j! f'(ξ)), for j = 2, 3, .... We obtain the following with-memory iterative scheme by replacing the fixed parameters with their iterative counterparts in (43):
and the above scheme is denoted by NWM10. Now, from (46), it is clear that the convergence order of Algorithm (43) is nine. Next, to accelerate the order of convergence of the algorithm presented in (43) from nine to ten, we would need the exact value of the second parameter that annihilates the ninth-order error term; this value depends on derivatives of f at the root ξ and is therefore not attainable in practice. So, we approximate the parameter using information from both the current and previous iterations, chosen so that the ninth-order asymptotic convergence constant in the error expression (46) vanishes. The formula for this self-accelerating parameter is as follows:
where
and the remaining quantities can be calculated by Equation (9).
Note: The required interpolation conditions are satisfied by the Hermite interpolation polynomial, so the second self-accelerating parameter can be expressed in terms of the derivatives of this polynomial.
Theorem 4.
Let the Hermite polynomial of degree m interpolate the function f at interpolation nodes within an interval I, where the derivative f^(m+1) is continuous in I and the Hermite polynomial matches f and its prescribed derivatives at the nodes. Suppose that
- (1) All nodes are sufficiently near to the root ξ.
- (2) The stated condition holds.

Then the error estimates of the theorem follow.
Proof.
We can calculate the expression of the seventh-degree Hermite interpolation polynomial as:
Now, the following equation is derived by taking the fourth derivative of Equation (55) at the corresponding iteration point.
Next, a Taylor series expansion of f at the interpolation nodes in I about the zero ξ of f provides
and
Now, for the with-memory iterative scheme (47), we can state the following convergence theorem.
Theorem 5.
Proof.
Suppose the IM produces a sequence converging to the root ξ of f with a certain R-order; then we can express
and
Next, this error ratio tends to the asymptotic error constant as k → ∞, and then
The error expression of the with-memory scheme (47) can be obtained using (44)–(46) and the varying parameter .
and
Here, the higher-order terms in Equations (66)–(68) are excluded. Now, let the R-orders of convergence of the intermediate iterative sequences be p and q, respectively; then
and
Note: A number of optimal-order without-memory methods are available in the literature, such as [6,9], but not every without-memory iterative scheme can be extended to a with-memory version.
3. Numerical Results and Discussion
In this section, we provide numerical examples to illustrate the efficiency and effectiveness of the newly formulated three-step with-memory methods NWM9 (7) and NWM10 (47). These methods are compared with some existing well-known three-step methods, SA8, NAJJ10, XT10 and NJ10, presented in references [6,10,5,11], respectively, using the test functions listed in Table 1. The compared iterative methods are listed below.
Table 1.
Test problems, their zeros and initial approximations.
In 2021, Sharma and Arora [6] proposed a three-step eighth-order without-memory iterative method (SA8), which is defined as:
In 2018, Choubey et al. [10] proposed a tenth-order with-memory iterative method (NAJJ10) using two self-accelerating parameters, which is defined as:
where the two self-accelerating parameters are calculated using data from the current and previous iterations.
In 2013, Wang and Zhang [5] developed a family of three-step with-memory iterative schemes (XT10) for nonlinear equations given by:
where the weight functions and parameters are as defined in [5]. Furthermore, the self-accelerating parameter is calculated using data from the previous iteration.
In 2016, Choubey and Jaiswal [11] developed a bi-parametric with-memory iterative method (NJ10) with tenth-order convergence, the sub-steps of which are:
where the two parameters are real, and the self-accelerating parameter is calculated using data from the previous iteration.
Planck’s radiation law problem: It calculates the energy density within an isothermal blackbody and is given by [12]:
where λ is the wavelength of the radiation, T is the absolute temperature of the blackbody, B is the Boltzmann constant, P is the Planck constant and c is the speed of light. We are interested in determining the wavelength λ that corresponds to the maximum energy density v(λ). From (79), we obtain
so that the maxima of v occur when
After that, if we set x = Pc/(BλT), then (81) is satisfied if
Thus, the solutions of Equation (82) provide the maximum wavelength of radiation as λ_max = Pc/(B T x*), where x* is a solution of (82).
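A minimal sketch of solving this reduced scalar equation, assuming the common form f(x) = exp(-x) - 1 + x/5 = 0 of this test problem (a plain Newton iteration stands in here for the paper's higher-order methods):

```python
import math

def f(x):
    # Assumed common reduced form of the Planck problem: exp(-x) - 1 + x/5 = 0
    return math.exp(-x) - 1.0 + x / 5.0

def df(x):
    return -math.exp(-x) + 0.2

x = 5.0  # initial guess near the physical root
for _ in range(20):
    x -= f(x) / df(x)
# x is now close to 4.9651..., and lam_max = P*c / (B*T*x)
```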
In Table 1, we have considered five distinct nonlinear functions, displaying their roots ξ and their initial approximations x_0. The formula for the computational order of convergence (COC) is given by [13]:
COC ≈ ln|(x_{n+1} - ξ)/(x_n - ξ)| / ln|(x_n - ξ)/(x_{n-1} - ξ)|.
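The COC can be estimated from three consecutive iterates (the standard Weerakoon–Fernando-type estimate; the Newton sanity check below is our own illustration):

```python
import math

def coc(e_prev, e_curr, e_next):
    """COC estimate from three consecutive absolute errors |x_n - xi|."""
    return math.log(e_next / e_curr) / math.log(e_curr / e_prev)

# Sanity check on Newton iterates for f(x) = x^2 - 2 (expected order: 2).
xi = math.sqrt(2.0)
xs = [1.0]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))
errors = [abs(x - xi) for x in xs[1:]]
rho = coc(errors[1], errors[2], errors[3])
```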
All the compared results are given in Table 2. Table 2 contains the absolute differences between the last two consecutive iterations, |x_{n+1} - x_n|, and the absolute residual error |f(x_n)| for up to three iterations for each function, along with the COC, for the proposed methods in comparison to some well-known existing methods. All computations presented here have been performed in MATHEMATICA 12.2. The findings showcased in Table 2 validate the theoretical results of the newly proposed methods, highlighting their efficiency in comparison to some well-known iterative methods. Also, the errors in consecutive iterations are presented through Figure 1.
Table 2.
Comparisons of test methods after three iterations.
Figure 1.
Comparison of the methods based on the error in consecutive iterations, |x_{n+1} - x_n|, after the first three full iterations.
From Table 2, it is confirmed that the accuracy of the results improved not only compared to the without-memory scheme but also with some well-used with-memory schemes.
4. Conclusions
In this manuscript, we have introduced three-step with-memory iterative techniques for solving nonlinear equations by introducing single and double self-accelerating parameters. The primary goal is to enhance the convergence order of the optimal eighth-order method without requiring additional computations. This is achieved by introducing self-accelerating parameters, and their estimates, into the eighth-order method. The estimates of these self-accelerating parameters are calculated using the Hermite interpolating polynomial. The inclusion of the parameters increases the R-order of convergence of the with-memory methods NWM9 and NWM10 from 8 to 9 and 10, respectively. The results show that the suggested techniques NWM9 and NWM10 have faster convergence and smaller asymptotic constant values than other current approaches. Furthermore, the overall performance of the newly presented approach is outstanding, with a quick convergence speed that makes it a promising alternative for solving nonlinear equations.
By applying the discussed approach, interested researchers may extend well-known higher optimal-order without-memory iterative methods to with-memory algorithms with single or multiple self-accelerating parameters, achieving improved efficiency for single- or multivariate functions.
Author Contributions
Conceptualization, J.P.J.; methodology, S.K.M.; formal analysis, S.K.M.; writing-review and editing, E.S. and S.P. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
No public involvement in any aspect of this research.
Acknowledgments
The authors would like to pay their sincere thanks to the reviewers for their useful suggestions.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Traub, J.F. Iterative Methods for the Solution of Equations; American Mathematical Society: Providence, RI, USA, 1982. [Google Scholar]
- Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. ACM 1974, 21, 643–651. [Google Scholar] [CrossRef]
- Ostrowski, A.M. Solution of Equations in Euclidean and Banach Spaces; Academic Press Inc.: Cambridge, MA, USA, 1973. [Google Scholar]
- Khan, W.A. Numerical simulation of Chun-Hui He’s iteration method with applications in engineering. Int. J. Numer. Methods Heat Fluid Flow 2022, 32, 944–955. [Google Scholar] [CrossRef]
- Wang, X.; Zhang, T. Some Newton-type iterative methods with and without memory for solving nonlinear equations. Int. J. Comput. Methods 2014, 11, 1350078. [Google Scholar] [CrossRef]
- Sharma, J.R.; Arora, H. An efficient family of weighted-Newton methods with optimal eighth order convergence. Appl. Math. Lett. 2014, 29, 1–6. [Google Scholar] [CrossRef]
- Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1970. [Google Scholar]
- Alefeld, G.; Herzberger, J. Introduction to Interval Computation; Academic Press: New York, NY, USA, 2012. [Google Scholar]
- Babajee, D.K.R.; Cordero, A.; Soleymani, F.; Torregrosa, J.R. On improved three-step schemes with high efficiency index and their dynamics. Numer. Algorithms 2014, 65, 153–169. [Google Scholar] [CrossRef]
- Choubey, N.; Cordero, A.; Jaiswal, J.P.; Torregrosa, J.R. Dynamical techniques for analyzing iterative schemes with memory. Complexity 2018, 2018, 1232341. [Google Scholar]
- Choubey, N.; Jaiswal, J.P. Two-and three-point with memory methods for solving nonlinear equations. Numer. Anal. Appl. 2017, 10, 74–89. [Google Scholar] [CrossRef]
- Bradie, B. A Friendly Introduction to Numerical Analysis; Pearson Education Inc.: New Delhi, India, 2006. [Google Scholar]
- Weerakoon, S.; Fernando, T. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).