Abstract
In this article, we introduce a novel three-step iterative algorithm with memory for finding the roots of nonlinear equations. The convergence order of an established eighth-order iterative method is elevated by transforming it into a with-memory variant. The improvement in the convergence order is achieved by introducing two self-accelerating parameters, calculated using the Hermite interpolating polynomial. As a result, the R-order of convergence of the proposed bi-parametric with-memory iterative algorithm is enhanced from 8 to 10.5208. Notably, this enhancement in the convergence order is accomplished without any extra function evaluations. Moreover, the efficiency index of the newly proposed with-memory iterative algorithm improves from 1.3161 to 1.6011. Extensive numerical testing on a range of problems confirms the usefulness and superior performance of the presented algorithm relative to some well-known existing algorithms.
1. Introduction
Addressing nonlinear equations is a critical challenge in science and engineering, particularly in fields such as gas dynamics and elasticity, where problems are often reduced to solving a single-variable nonlinear equation $f(x) = 0$, where $f$ acts as a scalar function on an open interval $D$. Traditional analytical methods frequently prove inadequate for determining the roots of such equations, making iterative numerical methods indispensable, especially given the ongoing advancement of computational technology.
The classical one-point Newton's method [1] for a nonlinear equation $f(x) = 0$ is defined by the iterative formula:
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad n = 0, 1, 2, \ldots,$$
where $f(x_n)$ is the function value and $f'(x_n)$ is its derivative at the current iterate. The Newton–Raphson method, known for its quadratic convergence near simple roots, requires the evaluation of both the function and its derivative in each iteration. Researchers consistently strive to enhance the convergence rate of iterative methods.
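For reference, a minimal sketch of this iteration in Python; the test function, tolerance, and starting point are illustrative choices, not values taken from this paper.

```python
def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Classical Newton-Raphson iteration: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:        # residual small enough: accept x as the root
            return x
        x -= fx / df(x)          # one Newton step (two evaluations: f and f')
    return x

# Illustrative use: solve x**2 - 2 = 0 (root sqrt(2)) starting from x0 = 1.
root = newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0)
```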
Multipoint methods for solving nonlinear equations offer significant advantages over one-point methods due to their computational efficiency and higher convergence order. Researchers have shown considerable interest in constructing optimal multipoint methods without memory in the sense of the Kung–Traub conjecture [2], which states that a without-memory method using $n$ functional evaluations per iteration can reach an optimal convergence order of at most $2^{n-1}$.
Recent innovations have improved a variety of numerical methods, including the Adomian decomposition Newton–Raphson [1], bisection [3], Chebyshev–Halley [4], Chun–Neta [5], collocation [6], Galerkin [7], and Jarratt methods [8], as well as the Nash–Moser iteration [9], Thukral method [10], Osada method [11], Ostrowski method [12], Picard iteration [13], diverse quadrature formulas [14,15], super-Halley method [16], and Traub–Steffensen method [17].
As the convergence rate increases, the number of function evaluations required per iteration also tends to increase, which can reduce the efficiency index. The efficiency index of an iterative method quantifies its performance and is defined as [2,18]:
$$EI = p^{1/n},$$
where $p$ is the convergence order of the iterative method and $n$ is the number of function and derivative evaluations performed per iteration.
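Two worked values of this index: the first is a standard fact, and the second is an arithmetic consistency check of the figures reported in this paper, under the assumption (not stated in the surviving text) that the proposed scheme uses five function and derivative evaluations per iteration.

```latex
% EI = p^{1/n}, with p the convergence order, n the evaluations per iteration
\mathrm{EI}_{\text{Newton}} = 2^{1/2} \approx 1.4142
\qquad
\mathrm{EI}_{\text{NWM11}} = 10.5208^{1/5} \approx 1.6011
```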
On the other hand, iterative methods that incorporate memory make use of information from both recent and past iterations to boost both the convergence order and the efficiency index. Recent advancements in the field have seen significant contributions in extending without-memory methods to with-memory methods using self-accelerating parameters. In 2022, Choubey et al. [19] transformed a fourth-order without-memory iterative method into a with-memory method using one self-accelerating parameter and achieved sixth-order convergence. In 2023, Sharma et al. [20] upgraded an eighth-order without-memory iterative method to a with-memory method using two self-accelerating parameters and attained tenth-order convergence. Also in 2023, Abdullah et al. [21] developed a with-memory method by enhancing a without-memory method with one parameter, which improved its convergence order from 6 to 7.2749. Additionally, in the same year, Thangkhenpau et al. [22] developed a derivative-free without-memory iterative method with eighth-order convergence and then expanded it to a with-memory method using four self-accelerating parameters, which resulted in an increase in the convergence order from 8 to 15.5156. In their pursuit of resolving nonlinear equations with multiple roots, Thangkhenpau et al. introduced a novel scheme offering both with- and without-memory-based variants [23]. In recent years, the development of with-memory iterative methods has garnered considerable interest among researchers. For a deeper understanding, one can refer to [23,24,25,26,27,28,29,30,31,32,33,34] and the references cited therein.
In this research paper, a novel bi-parametric three-step with-memory iterative algorithm is introduced, which elevates the R-order of convergence from 8 to 10.5208 and achieves an efficiency index of 1.6011. The paper is organized as follows. Section 2 details the development of the new bi-parametric three-point with-memory iterative algorithm, obtained by integrating self-accelerating parameters into the first and third steps of an existing eighth-order without-memory iterative algorithm, together with a thorough convergence analysis. Section 3 presents an extensive evaluation through numerical tests, providing a rigorous comparison of the proposed method with other well-established algorithms. Finally, Section 4 summarizes the study and its implications.
2. Analysis of Convergence for With-Memory Algorithm
In this section, the two parameters introduced in the first and third steps, respectively, of the algorithm in [35], proposed by Butsakorn Kong-ied in 2021, are utilized to increase its order of convergence.
Using Taylor-series approximation about the root, the expressions for $f(x_n)$ and $f'(x_n)$ can be written as:
$$f(x_n) = A\left(e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + \cdots\right), \quad (4)$$
$$f'(x_n) = A\left(1 + 2 c_2 e_n + 3 c_3 e_n^2 + 4 c_4 e_n^3 + \cdots\right), \quad (5)$$
where $A = f'(\xi)$, $\xi$ is the zero of $f$, $e_n = x_n - \xi$, and $c_k = \frac{f^{(k)}(\xi)}{k!\, f'(\xi)}$ for $k \geq 2$.
After substituting the values of Equations (4) and (5) into the first step of (3), the error $e_{y,n} = y_n - \xi$ of the first substep is found to be of second order in $e_n$, with a leading coefficient that depends on $c_2$ and on the first self-accelerating parameter.
Furthermore, the analogous expansion of $f(y_n)$ can be written as $f(y_n) = A\left(e_{y,n} + c_2 e_{y,n}^2 + \cdots\right)$.
After substituting the values of Equations (4)–(7) into the second step of (3), the error $e_{z,n} = z_n - \xi$ of the second substep is obtained as an expansion in $e_n$ whose leading coefficient again depends on the $c_k$ and on the self-accelerating parameters.
Also, the corresponding expansions of $f(z_n)$ and of the other quantities appearing in the third step can be given in the same way (Equations (9) and (10)).
Finally, after substituting the values of Equations (8)–(10) into the third step of (3), the error $e_{n+1} = x_{n+1} - \xi$ of the full step is obtained as the eighth-order expansion (11), whose leading coefficients depend on the two self-accelerating parameters.
Now, by replacing the two parameters in Equation (3) with the self-accelerating approximations calculated using Equations (13) and (14), respectively, the following with-memory iterative scheme (12) is obtained.
The above scheme is denoted by NWM11. At this point, from (11), it is clear that the convergence order of Algorithm (3) is 8 for fixed values of the two parameters. To accelerate the order of convergence of Algorithm (3) from 8 to 11, the parameters would have to equal certain quantities depending on the derivatives of $f$ at the root; however, these exact values are not attainable in practice. Instead, the parameters are replaced by approximations that are updated iteratively using the available data from the current and previous iterations, chosen so that the asymptotic convergence constants of the eighth-, ninth-, and tenth-order terms in the error expression (11) vanish. The parameters are chosen as follows:
where $H_6$ and $H_5$ denote the sixth-degree and fifth-degree Hermite interpolating polynomials constructed from the function and derivative values already available from the current and previous iterations. It should be noted that the Hermite interpolating polynomial satisfies the usual interpolation conditions (matching the available function and derivative values at the nodes) for $n \geq 1$; consequently, the two self-accelerating parameters can be expressed in terms of the derivatives of $H_6$ and $H_5$, respectively.
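To make the self-accelerating idea concrete on the simplest possible example, the sketch below implements Traub's classical one-parameter method with memory (a textbook scheme, not the NWM11 method itself, whose update formulas are those of Equations (13) and (14)): the parameter γ is refreshed from quantities already computed, which raises the R-order of a Steffensen-type step from 2 to about $1+\sqrt{2} \approx 2.414$ with no extra function evaluations. The test function and starting values are illustrative.

```python
def traub_with_memory(f, x0, gamma0=0.01, tol=1e-12, max_iter=50):
    """Traub's one-parameter with-memory method (Steffensen-type).

    w_n         = x_n + gamma_n * f(x_n)
    x_{n+1}     = x_n - f(x_n) / f[x_n, w_n]
    gamma_{n+1} = -1 / f[x_n, w_n]   (self-accelerating update: reuses known data)
    """
    x, gamma = x0, gamma0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:              # residual small enough: accept x
            return x
        w = x + gamma * fx
        dd = (f(w) - fx) / (w - x)     # divided difference f[x_n, w_n] ~ f'(x)
        x -= fx / dd                   # Steffensen-type step
        gamma = -1.0 / dd              # parameter for the NEXT iteration
    return x

# Illustrative use: solve x**3 - 2 = 0 starting from x0 = 1.5.
root = traub_with_memory(lambda x: x**3 - 2, 1.5)
```

The same principle underlies Equations (13) and (14): the Hermite interpolating polynomials built from already-computed values supply increasingly accurate approximations of the ideal parameter values as the iteration proceeds.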
Theorem 1.
Let $H_m$ be the Hermite polynomial of degree $m$ interpolating the function Ω at interpolation nodes contained in an interval $I$, and let the derivative $\Omega^{(m+1)}$ be continuous on $I$. Suppose that all nodes lie in a neighborhood of the root ξ. Then the errors of the derivatives of $H_m$ with respect to the corresponding derivatives of Ω admit asymptotic expressions in terms of $\Omega^{(m+1)}$ and the nodal errors, and, after simplification, these yield the estimates used in the parameter updates (13) and (14).
Proof.
The sixth-degree and fifth-degree Hermite interpolation polynomials are given in Equations (22) and (23), respectively.
In order to obtain the following equations, Equation (22) is differentiated three times and Equation (23) two times at the corresponding interpolation nodes:
The Taylor-series expansion of $f$ at the interpolation nodes in $I$, about its simple zero ξ, provides
Similarly,
Putting (28) and (30) into (24), and (27) and (29) into (25), we obtain
and
By using Equations (26), (31) and (32), the result is
and
Hence,
or
This completes the proof of Theorem 1. □
R-Order of Convergence: A sequence $\{x_n\}$ converges to ξ with an R-order of convergence of at least $p$ if there exist constants $C > 0$ and $\eta \in (0, 1)$ such that [36]
$$|x_n - \xi| \leq C \eta^{p^n}, \qquad n = 0, 1, 2, \ldots$$
This definition of the R-order of convergence, together with the following statement from [37], provides an estimate of the convergence order of the iterative scheme (12).
Theorem 2.
Suppose that the errors of an iterative root-finding method (IM) satisfy a relation of the form
$$e_{n+1} \leq c \prod_{i=0}^{k} e_{n-i}^{\,m_i},$$
with a constant $c > 0$ and nonnegative exponents $m_i$. Then the R-order of convergence of the IM, denoted $O_R(\mathrm{IM}, \xi)$, satisfies the inequality $O_R(\mathrm{IM}, \xi) \geq s^*$, where $s^*$ is the unique positive solution of the equation $s^{k+1} - \sum_{i=0}^{k} m_i\, s^{k-i} = 0$ [37].
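As a small worked instance of Theorem 2 (applied to the with-memory Steffensen sketch shown earlier, not to the present scheme): an error relation of the form $e_{n+1} \sim C\, e_n^2\, e_{n-1}$ corresponds to $k = 1$, $m_0 = 2$, $m_1 = 1$, giving

```latex
s^2 - 2s - 1 = 0
\quad\Longrightarrow\quad
s^* = 1 + \sqrt{2} \approx 2.414,
```

so the R-order of that simple method is at least $1 + \sqrt{2}$. The system (52) below plays the same role for the coupled error sequences of the proposed method.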
Going further, the new iterative scheme with memory (12) is governed by the following convergence theorem.
Theorem 3.
In the iterative method (12), let the varying parameters be calculated using Equations (13) and (14). If the initial guess $x_0$ is sufficiently close to a simple zero ξ of $f$, then the R-order of convergence of the iterative method (12) with memory is at least 10.5208.
Proof.
Let the iterative method (IM) generate the sequence $\{x_n\}$ converging to the root ξ of $f$, and let $r$ denote its R-order. Then one may write
and
Here, the variable factor tends to the asymptotic error constant of the method as $n \to \infty$, and then
The resulting error expression of the with-memory scheme (12) can be obtained using Equations (6), (8) and (11) together with the varying parameters computed from Equations (13) and (14):
and
It should be noted that in Equations (44)–(46), the higher-order terms are excluded.
Furthermore, if the R-orders of convergence of the iterative sequences $\{y_n\}$ and $\{z_n\}$ of the first and second substeps are $p$ and $q$, respectively, then
and
Now, from Equations (42) and (44), the obtained result is
Also, from Equations (37), (42) and (45), the following is obtained
Again, from Equations (37), (38), (42) and (46), the result is
where the constant factors collect the corresponding asymptotic error coefficients.
Since the exponents of the errors on both sides of each relation must agree, equating the exponents of $e_{n-1}$ in the pairs of relations (47)–(49), (48)–(50) and (43)–(51) leads to the following system of equations:
Solving (52) gives the values of $p$, $q$, and $r$, with $r = 10.5208$. As a result, the R-order of convergence of the with-memory iterative method (12) is at least 10.5208. □
3. Numerical Discussion
In this section, the convergence behavior of the newly developed with-memory method (NWM11) presented in (12) is explored by applying it to a range of nonlinear equations and assessing its effectiveness. The nonlinear test functions, along with their roots and initial guesses used in the numerical analysis, are listed below as Examples 1–8.
Example 1.
Example 2.
Example 3.
Example 4.
Example 5.
Example 6.
Example 7.
Example 8.
The proposed method NWM11 (12) is evaluated against several well-established methods documented in the literature: BK8 (53), KP10 (54), OSO10 (55), NJ10 (56), NAJJ10 (57), and XT10 (58), which are described below.
In 2021, Butsakorn Kong-ied (BK8) [35] developed an eighth-order iterative method, defined as follows:
In 2024, Devi and Maroju (KP10) [38] developed a tenth-order iterative method, defined as:
In 2023, Ogbereyivwe et al. (OSO10) [39] developed a tenth-order iterative method, defined as:
where the auxiliary weight functions are as defined in [39].
In 2016, Choubey and Jaiswal (NJ10) [32] developed a bi-parametric with-memory iterative method, with tenth-order convergence for solving nonlinear equations, defined as:
where $T \in \mathbb{R}$ is a free parameter and the self-accelerating parameter is calculated from the data of the previous iteration.
In 2018, Choubey et al. (NAJJ10) [33] proposed a tenth-order with-memory iterative method using two self-accelerating parameters, defined as:
where the two real self-accelerating parameters are calculated from the data of the previous iterations.
In 2013, Wang and Zhang (XT10) [34] developed a family of three-step with-memory iterative schemes for nonlinear equations, defined as:
where the scheme's coefficients are real parameters and the self-accelerating parameter is calculated from the previous iterates.
All the comparative results for these methods are summarized in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8. These tables present the absolute differences between the last two consecutive iterations ($|x_{n+1} - x_n|$) and the absolute residual error ($|f(x_n)|$) for up to three iterations on each test function, along with the computational order of convergence (COC) of the proposed method in comparison to some well-known existing methods. The COC is determined using the following equation [40]:
$$\rho \approx \frac{\ln \left| (x_{n+1} - \xi)/(x_n - \xi) \right|}{\ln \left| (x_n - \xi)/(x_{n-1} - \xi) \right|}.$$
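A sketch of how this COC can be computed from stored iterates, assuming the exact root ξ is known (as it is for the test problems above); the function name and interface are illustrative.

```python
import math

def coc(xs, root):
    """Computational order of convergence from the last three iterates.

    rho ~ ln|e_{n+1}/e_n| / ln|e_n/e_{n-1}|, where e_k = x_k - root.
    """
    e0, e1, e2 = (abs(x - root) for x in xs[-3:])
    return math.log(e2 / e1) / math.log(e1 / e0)

# Illustrative use with iterates produced by any of the methods above:
# rho = coc(iterates, known_root)
```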
For all numerical calculations, the programming software Mathematica 12.2 was used. For the newly proposed with-memory algorithm (NWM11), fixed initial values of the two self-accelerating parameters were selected to start the first iteration.
Based on the numerical results in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8 and Figure 1, it can be concluded that the newly proposed with-memory algorithm (NWM11) is competitive and converges quickly toward the roots, with the smallest absolute residual errors and the smallest differences between consecutive iterations among the methods compared. Additionally, the computed COC values support the theoretical convergence order of the newly presented algorithm on the test functions.
4. Conclusions
In this paper, a three-point with-memory iterative algorithm featuring two self-accelerating parameters is presented. By incorporating these parameters, computed using the Hermite interpolating polynomial, into an existing eighth-order method, its R-order of convergence is raised from 8 to 10.5208 and its efficiency index from EI = 1.3161 to EI = 1.6011, without additional function evaluations. The algorithm not only accelerates convergence but also requires fewer function evaluations than other established algorithms, despite its higher convergence order. The findings demonstrate that the newly developed NWM11 algorithm offers faster convergence and lower asymptotic error constants, positioning it as a highly efficient alternative for solving nonlinear equations.
Author Contributions
Conceptualization, S.K.M. and S.P.; methodology, S.K.M. and S.P.; software, S.K.M., S.P. and C.E.S.; validation, S.K.M. and S.P.; formal analysis, S.P., S.K.M. and L.J.; resources, S.K.M.; writing—original draft preparation, S.P. and S.K.M.; writing—review and editing, S.K.M., S.P. and C.E.S.; visualization, S.P. and S.K.M.; supervision, S.P. and L.J. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the Technical University of Cluj-Napoca’s open-access publication grant.
Data Availability Statement
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare no conflicts of interest.
References
1. Pho, K.H. Improvements of the Newton–Raphson method. J. Comput. Appl. Math. 2022, 408, 114106.
2. Traub, J.F. Iterative Methods for the Solution of Equations; American Mathematical Society: Providence, RI, USA, 1982; Volume 312.
3. Gutierrez, C.; Gutierrez, F.; Rivara, M.C. Complexity of the bisection method. Theor. Comput. Sci. 2007, 382, 131–138.
4. Sharma, H.; Kansal, M. A modified Chebyshev–Halley-type iterative family with memory for solving nonlinear equations and its stability analysis. Math. Methods Appl. Sci. 2023, 46, 12549–12569.
5. Petković, I.; Herceg, D. Computers in mathematical research: The study of three-point root-finding methods. Numer. Algorithms 2020, 84, 1179–1198.
6. Lu, Y.; Tang, Y. Solving fractional differential equations using collocation method based on hybrid of block-pulse functions and Taylor polynomials. Turk. J. Math. 2021, 45, 1065–1078.
7. Assari, P.; Dehghan, M. A meshless local Galerkin method for solving Volterra integral equations deduced from nonlinear fractional differential equations using the moving least squares technique. Appl. Numer. Math. 2019, 143, 276–299.
8. Argyros, I.K.; Sharma, D.; Argyros, C.I.; Parhi, S.K.; Sunanda, S.K.; Argyros, M.I. Extended three-step sixth-order Jarratt-like methods under generalized conditions for nonlinear equations. Arab. J. Math. 2022, 11, 443–457.
9. Temple, B.; Young, R. Inversion of a non-uniform difference operator and a strategy for Nash–Moser. Methods Appl. Anal. 2022, 29, 265–294.
10. Putri, R.Y.; Wartono, W. Modifikasi metode Schroder tanpa turunan kedua dengan orde konvergensi empat [Modification of Schröder's method without second derivatives with fourth-order convergence]. Aksioma J. Mat. Dan Pendidik. Mat. 2020, 11, 240–251.
11. Argyros, I.K.; George, S. Local convergence of Osada's method for finding zeros with multiplicity. In Understanding Banach Spaces; Sánchez, D.G., Ed.; Nova Science Publishers: Hauppauge, NY, USA, 2019; pp. 147–151.
12. Postigo Beleña, C. Ostrowski's method for solving nonlinear equations and systems. J. Mech. Eng. Autom. 2023, 13, 1–6.
13. Ivanov, S.I. General local convergence theorems about the Picard iteration in arbitrary normed fields with applications to super-Halley method for multiple polynomial zeros. Mathematics 2020, 8, 1599.
14. Coclite, G.M.; Fanizzi, A.; Lopez, L.; Maddalena, F.; Pellegrino, S.F. Numerical methods for the nonlocal wave equation of the peridynamics. Appl. Numer. Math. 2020, 155, 119–139.
15. Darvishi, M.T.; Barati, A. A fourth-order method from quadrature formulae to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 188, 257–261.
16. Nisha, S.; Parida, P.K. Super-Halley method under majorant conditions in Banach spaces. Cubo (Temuco) 2020, 22, 55–70.
17. Sharma, J.R.; Kumar, D.; Argyros, I.K. An efficient class of Traub–Steffensen-like seventh order multiple-root solvers with applications. Symmetry 2019, 11, 518.
18. Ostrowski, A.M. Solution of Equations in Euclidean and Banach Spaces; Academic Press: Cambridge, MA, USA, 1973.
19. Choubey, N.; Jaiswal, J.P.; Choubey, A. Family of multipoint with memory iterative schemes for solving nonlinear equations. Int. J. Appl. Comput. Math. 2022, 8, 83.
20. Sharma, E.; Mittal, S.K.; Jaiswal, J.P.; Panday, S. An efficient bi-parametric with-memory iterative method for solving nonlinear equations. Appl. Math. 2023, 3, 1019–1033.
21. Abdullah, S.; Choubey, N.; Dara, S. An efficient two-point iterative method with memory for solving non-linear equations and its dynamics. J. Appl. Math. Comput. 2024, 70, 285–315.
22. Thangkhenpau, G.; Panday, S.; Mittal, S.K. New derivative-free families of four-parametric with and without memory iterative methods for nonlinear equations. In International Conference on Science, Technology and Engineering; Springer Nature: Singapore, 2023; pp. 313–324.
23. Thangkhenpau, G.; Panday, S.; Mittal, S.K.; Jäntschi, L. Novel parametric families of with and without memory iterative methods for multiple roots of nonlinear equations. Mathematics 2023, 11, 2036.
24. Liu, C.S.; Chang, C.W. New memory-updating methods in two-step Newton's variants for solving nonlinear equations with high efficiency index. Mathematics 2024, 12, 581.
25. Erfanifar, R. A class of efficient derivative-free iterative methods with and without memory for solving nonlinear equations. Comput. Math. Comput. Model. Appl. 2022, 1, 20–26.
26. Howk, C.L.; Hueso, J.L.; Martínez, E.; Teruel, C. A class of efficient high-order iterative methods with memory for nonlinear equations and their dynamics. Math. Meth. Appl. Sci. 2018, 41, 7263–7282.
27. Sharma, H.; Kansal, M.; Behl, R. An efficient two-step iterative family adaptive with memory for solving nonlinear equations and their applications. Math. Comput. Appl. 2022, 27, 97.
28. Thangkhenpau, G.; Panday, S.; Bolundut, L.C.; Jäntschi, L. Efficient families of multi-point iterative methods and their self-acceleration with memory for solving nonlinear equations. Symmetry 2023, 15, 1546.
29. Thangkhenpau, G.; Panday, S.; Chanu, W.H. New efficient bi-parametric families of iterative methods with engineering applications and their basins of attraction. Results Control Optim. 2023, 12, 100243.
30. Chanu, W.H.; Panday, S.; Thangkhenpau, G. Development of optimal iterative methods with their applications and basins of attraction. Symmetry 2022, 14, 2020.
34. Wang, X.; Zhang, T. Some Newton-type iterative methods with and without memory for solving nonlinear equations. Int. J. Comput. Meth. 2014, 11, 1350078.
35. Kong-ied, B. Two new eighth and twelfth order iterative methods for solving nonlinear equations. Int. J. Math. Comput. Sci. 2021, 16, 333–344.
36. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2000.
37. Alefeld, G.; Herzberger, J. Introduction to Interval Computations; Academic Press: Berlin, Germany, 2012.
38. Devi, K.; Maroju, P. Local convergence study of tenth-order iterative method in Banach spaces with basin of attraction. AIMS Math. 2024, 9, 6648–6667.
39. Ogbereyivwe, O.; Izevbizua, O.; Umar, S.S. Some high-order convergence modifications of the Householder method for nonlinear equations. Commun. Nonlinear Anal. 2023, 11, 1–11.
40. Weerakoon, S.; Fernando, T. A variant of Newton's method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93.
Figure 1. Comparison of the algorithms based on the error in consecutive iterations, $|x_{n+1} - x_n|$, after the first three iterations.
Table 1. Comparison of without-memory and with-memory algorithms after the first three (n = 3) iterations for Example 1. Rows: BK8, KP10, OSO10, NJ10, NAJJ10, XT10, NWM11; columns: $|x_{n+1} - x_n|$, $|f(x_n)|$, and COC.
Table 2. Comparison of without-memory and with-memory algorithms after the first three (n = 3) iterations for Example 2. Rows: BK8, KP10, OSO10, NJ10, NAJJ10, XT10, NWM11; columns: $|x_{n+1} - x_n|$, $|f(x_n)|$, and COC.
Table 3. Comparison of without-memory and with-memory algorithms after the first three (n = 3) iterations for Example 3. Rows: BK8, KP10, OSO10, NJ10, NAJJ10, XT10, NWM11; columns: $|x_{n+1} - x_n|$, $|f(x_n)|$, and COC.
Table 4. Comparison of without-memory and with-memory algorithms after the first three (n = 3) iterations for Example 4. Rows: BK8, KP10, OSO10, NJ10, NAJJ10, XT10, NWM11; columns: $|x_{n+1} - x_n|$, $|f(x_n)|$, and COC.
Table 5. Comparison of without-memory and with-memory algorithms after the first three (n = 3) iterations for Example 5. Rows: BK8, KP10, OSO10, NJ10, NAJJ10, XT10, NWM11; columns: $|x_{n+1} - x_n|$, $|f(x_n)|$, and COC.
Table 6. Comparison of without-memory and with-memory algorithms after the first three (n = 3) iterations for Example 6. Rows: BK8, KP10, OSO10, NJ10, NAJJ10, XT10, NWM11; columns: $|x_{n+1} - x_n|$, $|f(x_n)|$, and COC.
Table 7. Comparison of without-memory and with-memory algorithms after the first three (n = 3) iterations for Example 7. Rows: BK8, KP10, OSO10, NJ10, NAJJ10, XT10, NWM11; columns: $|x_{n+1} - x_n|$, $|f(x_n)|$, and COC.
Table 8. Comparison of without-memory and with-memory algorithms after the first three (n = 3) iterations for Example 8. Rows: BK8, KP10, OSO10, NJ10, NAJJ10, XT10, NWM11; columns: $|x_{n+1} - x_n|$, $|f(x_n)|$, and COC.