Article

Enhanced Ninth-Order Memory-Based Iterative Technique for Efficiently Solving Nonlinear Equations

by Shubham Kumar Mittal 1, Sunil Panday 1,* and Lorentz Jäntschi 2
1 Department of Mathematics, National Institute of Technology Manipur, Langol, Imphal 795004, Manipur, India
2 Department of Physics and Chemistry, Technical University of Cluj-Napoca, 103-105 Muncii Blvd., 400641 Cluj-Napoca, Romania
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(22), 3490; https://doi.org/10.3390/math12223490
Submission received: 4 October 2024 / Revised: 3 November 2024 / Accepted: 6 November 2024 / Published: 8 November 2024
(This article belongs to the Special Issue New Trends and Developments in Numerical Analysis: 2nd Edition)

Abstract

In this article, we present a novel three-step with-memory iterative method for solving nonlinear equations. We improve the convergence order of a well-known optimal eighth-order iterative method by converting it into a with-memory version. A Hermite interpolating polynomial is utilized to compute a self-accelerating parameter that raises the convergence order. The proposed uni-parametric with-memory iterative method improves the R-order of convergence from 8 to 8.8989, and no additional function evaluations are required to achieve this improvement. Consequently, the efficiency index increases from 1.6818 to 1.7272. Extensive numerical testing on a variety of problems shows that the proposed method is more effective than several well-known existing methods.

1. Introduction

Many complex problems in science and engineering involve nonlinear equations of the form ζ(x) = 0, where ζ : D ⊆ ℝ → ℝ is a scalar function defined on an open interval D. Solutions to these equations typically cannot be expressed in closed form. As traditional analytical methods are often insufficient for solving such equations, iterative numerical techniques have become indispensable, especially with advancements in computational technology that enable more efficient and accurate solutions. Newton’s method [1] is a widely used iterative technique for approximating the simple root ξ of ζ(x) = 0. It follows the iterative formula:
$$x_{n+1} = x_n - \frac{\zeta(x_n)}{\zeta'(x_n)}, \qquad n = 0, 1, 2, \ldots$$
where ζ is the function and ζ′ is its derivative. Newton’s method is known for its quadratic convergence near the root, and it requires the evaluation of both the function and its derivative in each iteration. Researchers continue to refine Newton’s method to improve convergence rates and enhance its practical applicability. Multipoint iterative methods have emerged as the most efficient root-solvers, surpassing the theoretical limitations of one-point methods in terms of convergence order and computational efficiency. These advantages have led to a surge of interest in multipoint methods in the current era. The advancement of symbolic computation and multi-precision arithmetic has further accelerated their development.
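For illustration, the short Python sketch below applies the Newton iteration quoted above; the test function, its derivative, the starting guess, and the tolerance are our own illustrative choices and are not taken from the article.

```python
# Minimal sketch of Newton's method for a scalar nonlinear equation zeta(x) = 0.
# The test function, derivative, starting guess, and tolerance are illustrative
# assumptions, not values from the paper.
import math

def newton(zeta, dzeta, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = zeta(x)
        if abs(fx) < tol:          # residual small enough: accept x as the root
            break
        x = x - fx / dzeta(x)      # Newton update: x_{n+1} = x_n - zeta(x_n)/zeta'(x_n)
    return x

if __name__ == "__main__":
    # Example: solve x^3 - 2 = 0, whose simple root is 2**(1/3).
    root = newton(lambda x: x**3 - 2, lambda x: 3 * x**2, x0=1.0)
    print(root, math.isclose(root, 2 ** (1 / 3)))
```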
However, while improving the convergence rate is advantageous, it may also lead to a higher number of function evaluations, which can ultimately decrease the efficiency index of these methods. The efficiency index, as discussed in [2], measures the balance between the order of convergence and the number of functional evaluations per step, represented by the formula E = ρ^{1/γ}, where ρ is the order of convergence and γ is the number of functional and derivative evaluations conducted per iteration. Researchers have a particular interest in developing accelerated multipoint methods with memory due to their high computational efficiency. They aim to enhance the order of these methods beyond the limits of optimal methods, guided by Kung–Traub’s conjecture, which posits that n + 1 function evaluations can achieve an optimal convergence order of at most 2^n [2].
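As a quick check of the figures quoted in this paper, the efficiency index E = ρ^{1/γ} can be evaluated directly. The following minimal sketch reproduces the values 1.6818 and 1.7272; the count of four evaluations per iteration follows from the scheme analysed in Section 2, which uses ζ(x_n), ζ′(x_n), ζ(y_n), and ζ(z_n).

```python
# Efficiency index E = rho**(1/gamma): rho = convergence order,
# gamma = function/derivative evaluations per iteration.
def efficiency_index(rho: float, gamma: int) -> float:
    return rho ** (1.0 / gamma)

# Both the optimal eighth-order method and its with-memory variant use
# four evaluations per iteration: zeta(x_n), zeta'(x_n), zeta(y_n), zeta(z_n).
print(round(efficiency_index(8.0, 4), 4))       # 1.6818
print(round(efficiency_index(8.8989, 4), 4))    # 1.7272
```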
In contrast, iterative methods that incorporate memory leverage information from the current and previous iterations to increase the efficiency index as well as the convergence order. Significant efforts have been made recently to extend without-memory methods to with-memory methods by employing self-accelerating parameters. Liu et al. [3] upgraded a single-step without-memory method of second order to a with-memory method using one self-accelerating parameter and achieved fourth-order convergence. Sharma et al. [4] upgraded an eighth-order without-memory iterative method to a with-memory method using two self-accelerating parameters and attained tenth-order convergence. Additionally, Thangkhenpau et al. [5] developed a derivative-free without-memory iterative method of eighth order and then extended it to a with-memory method using four self-accelerating parameters, which increased the convergence order from 8 to 15.5156. In recent years, the development of with-memory iterative methods has garnered considerable interest among researchers. Notable contributors to the development of with-memory methods include Choubey et al. [6,7,8], Erfanifar [9], Howk et al. [10], Sharma et al. [11,12], Wang and Zhang [13], Liu et al. [14], and Panday et al. [15].
In this article, we propose a new three-step, uni-parametric, with-memory iterative technique for efficiently solving nonlinear equations. Although the convergence analysis involves lengthy error computations, the method itself is simple and modular to implement, which facilitates practical adoption, and it improves the R-order of convergence from 8 to 8.8989. An efficiency index of 1.7272 is attained by this method. By adding a self-accelerating parameter to the third step of an existing optimal eighth-order without-memory iterative technique [16] and conducting a comprehensive convergence analysis, a new uni-parametric three-point with-memory iterative method is developed, as described in Section 2. A detailed assessment using numerical tests is presented in Section 3, which offers a thorough comparison of the suggested method with other well-established methods. Section 4 provides a comprehensive summary of the research findings and their implications.

2. Analysis of Convergence for With-Memory Method

In this section, we introduce a parameter α in the third step of the three-step optimal eighth-order scheme proposed by Matthies et al. [16] in 2016 in order to increase its order of convergence from eight to nine. After adding the parameter α to the third step of the scheme in [16], we obtain
$$\begin{aligned}
y_n &= x_n - \frac{\zeta(x_n)}{\zeta'(x_n)},\\
z_n &= x_n - \frac{\zeta(x_n)}{\zeta'(x_n)}\left[1 + \frac{\zeta(y_n)}{\zeta(x_n)} + \left(1 + \frac{1}{1 + \zeta(x_n)/\zeta'(x_n)}\right)\left(\frac{\zeta(y_n)}{\zeta(x_n)}\right)^{2}\right],\\
x_{n+1} &= z_n - \frac{\zeta(z_n)}{\zeta[z_n,y_n] + (z_n-y_n)\,\zeta[z_n,y_n,x_n] + (z_n-y_n)(z_n-x_n)\,\zeta[z_n,y_n,x_n,x_n] + \alpha\,\zeta(z_n)},
\end{aligned}$$
where ζ[z_n, y_n], ζ[z_n, y_n, x_n], and ζ[z_n, y_n, x_n, x_n] denote divided differences, defined by ζ[z_n, y_n] = (ζ(z_n) − ζ(y_n))/(z_n − y_n), ζ[z_n, y_n, x_n] = (ζ[z_n, y_n] − ζ[y_n, x_n])/(z_n − x_n), and ζ[z_n, y_n, x_n, x_n] = (ζ[z_n, y_n, x_n] − ζ[y_n, x_n, x_n])/(z_n − x_n), with the repeated-node difference ζ[y_n, x_n, x_n] = (ζ[y_n, x_n] − ζ′(x_n))/(y_n − x_n).
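A minimal Python sketch of these divided differences and of the third-step denominator is given below; the helper names are ours, and the fixed α corresponds to the without-memory scheme above.

```python
# Sketch of the divided differences used in the third sub-step.
def dd2(f, a, b):                    # zeta[a, b]
    return (f(a) - f(b)) / (a - b)

def dd3(f, a, b, c):                 # zeta[a, b, c]
    return (dd2(f, a, b) - dd2(f, b, c)) / (a - c)

def dd4_repeated(f, df, a, b, c):    # zeta[a, b, c, c] with the doubled node c
    dd3_bcc = (dd2(f, b, c) - df(c)) / (b - c)   # zeta[b, c, c] uses zeta'(c)
    return (dd3(f, a, b, c) - dd3_bcc) / (a - c)

def third_step_denominator(f, df, z, y, x, alpha):
    """Denominator of the third sub-step for a fixed parameter alpha."""
    return (dd2(f, z, y)
            + (z - y) * dd3(f, z, y, x)
            + (z - y) * (z - x) * dd4_repeated(f, df, z, y, x)
            + alpha * f(z))
```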
Using Taylor-series approximation, the expressions for ζ ( x n ) and ζ ( x n ) can be written as
$$\zeta(x_n) = A\left(e_n + c_2 e_n^{2} + c_3 e_n^{3} + c_4 e_n^{4} + c_5 e_n^{5} + c_6 e_n^{6} + c_7 e_n^{7} + c_8 e_n^{8}\right) + O(e_n^{9}),$$
$$\zeta'(x_n) = A\left(1 + 2c_2 e_n + 3c_3 e_n^{2} + 4c_4 e_n^{3} + 5c_5 e_n^{4} + 6c_6 e_n^{5} + 7c_7 e_n^{6} + 8c_8 e_n^{7} + 9c_9 e_n^{8}\right) + O(e_n^{9}),$$
where A = ζ′(ξ), ξ is the simple root of ζ(x), e_n = x_n − ξ, and c_j = ζ^{(j)}(ξ)/(j! ζ′(ξ)) for j = 2, 3, ….
Now, the error expressions for the first two sub-steps of (2) are given by [16]:
$$e_{n,y} = c_2 e_n^{2} + \left(-2c_2^{2} + 2c_3\right)e_n^{3} + \left(4c_2^{3} - 7c_2 c_3 + 3c_4\right)e_n^{4} + O(e_n^{5}),$$
$$e_{n,z} = c_2\left(c_2 + 5c_2^{2} - c_3\right)e_n^{4} + \left(-8c_2^{3} - 36c_2^{4} - 2c_3^{2} + c_2^{2}\left(-1 + 32c_3\right) + c_2\left(4c_3 - 2c_4\right)\right)e_n^{5} + O(e_n^{6}),$$
and we obtain the error expansion for the third sub-step of (2) as:
$$e_{n+1} = c_2^{2}\left(c_2 + 5c_2^{2} - c_3\right)\left[(\alpha + c_2)\left(c_2 + 5c_2^{2} - c_3\right) + c_4\right]e_n^{8} + O(e_n^{9}),$$
where e_{n,y} = y_n − ξ, e_{n,z} = z_n − ξ, and α ∈ ℝ. We obtain the following with-memory iterative scheme by replacing α with a self-accelerating parameter α_n in (2):
$$\begin{aligned}
y_n &= x_n - \frac{\zeta(x_n)}{\zeta'(x_n)},\\
z_n &= x_n - \frac{\zeta(x_n)}{\zeta'(x_n)}\left[1 + \frac{\zeta(y_n)}{\zeta(x_n)} + \left(1 + \frac{1}{1 + \zeta(x_n)/\zeta'(x_n)}\right)\left(\frac{\zeta(y_n)}{\zeta(x_n)}\right)^{2}\right],\\
x_{n+1} &= z_n - \frac{\zeta(z_n)}{\zeta[z_n,y_n] + (z_n-y_n)\,\zeta[z_n,y_n,x_n] + (z_n-y_n)(z_n-x_n)\,\zeta[z_n,y_n,x_n,x_n] + \alpha_n\,\zeta(z_n)}.
\end{aligned}$$
The above scheme is denoted by NWM9. Now, from (7), it is clear that the convergence order of the algorithm (2) is eight whenever α ≠ −c_4/(c_2 + 5c_2² − c_3) − c_2. Next, to accelerate the order of convergence of the algorithm presented in (8) from eight to nine, we could take α = −c_4/(c_2 + 5c_2² − c_3) − c_2 = −ζ^{(iv)}(ξ)ζ′(ξ)/(12ζ″(ξ)ζ′(ξ) + 30(ζ″(ξ))² − 4ζ‴(ξ)ζ′(ξ)) − ζ″(ξ)/(2ζ′(ξ)), but in reality the exact values of ζ′(ξ), ζ″(ξ), ζ‴(ξ), and ζ^{(iv)}(ξ) are not attainable in practice. So, we replace the parameter α by α_n. The parameter α_n can be calculated using the data available from the current and previous iterations and satisfies the condition lim_{n→∞} α_n = −c_4/(c_2 + 5c_2² − c_3) − c_2, so that the eighth-order asymptotic error constant in the error expression (7) vanishes. The formula for α_n is as follows:
$$\alpha_n = -\frac{H_7^{(iv)}(z_n)\,\zeta'(x_n)}{12H_5''(x_n)\,\zeta'(x_n) + 30\left(H_5''(x_n)\right)^{2} - 4H_6'''(y_n)\,\zeta'(x_n)} - \frac{H_5''(x_n)}{2\zeta'(x_n)},$$
where the Hermite interpolating polynomials H m ( x ) for m = 5 , 6 , 7 are given by
$$\begin{aligned}
H_7(x) ={}& \zeta(z_n) + (x - z_n)\zeta[z_n,y_n] + (x - z_n)(x - y_n)\zeta[z_n,y_n,x_n] + (x - z_n)(x - y_n)(x - x_n)\zeta[z_n,y_n,x_n,x_n] \\
&+ (x - z_n)(x - y_n)(x - x_n)^{2}\zeta[z_n,y_n,x_n,x_n,z_{n-1}] + (x - z_n)(x - y_n)(x - x_n)^{2}(x - z_{n-1})\zeta[z_n,y_n,x_n,x_n,z_{n-1},y_{n-1}] \\
&+ (x - z_n)(x - y_n)(x - x_n)^{2}(x - z_{n-1})(x - y_{n-1})\zeta[z_n,y_n,x_n,x_n,z_{n-1},y_{n-1},x_{n-1}] \\
&+ (x - z_n)(x - y_n)(x - x_n)^{2}(x - z_{n-1})(x - y_{n-1})(x - x_{n-1})\zeta[z_n,y_n,x_n,x_n,z_{n-1},y_{n-1},x_{n-1},x_{n-1}],
\end{aligned}$$
$$\begin{aligned}
H_6(x) ={}& \zeta(y_n) + (x - y_n)\zeta[y_n,x_n] + (x - y_n)(x - x_n)\zeta[y_n,x_n,x_n] + (x - y_n)(x - x_n)^{2}\zeta[y_n,x_n,x_n,z_{n-1}] \\
&+ (x - y_n)(x - x_n)^{2}(x - z_{n-1})\zeta[y_n,x_n,x_n,z_{n-1},y_{n-1}] + (x - y_n)(x - x_n)^{2}(x - z_{n-1})(x - y_{n-1})\zeta[y_n,x_n,x_n,z_{n-1},y_{n-1},x_{n-1}] \\
&+ (x - y_n)(x - x_n)^{2}(x - z_{n-1})(x - y_{n-1})(x - x_{n-1})\zeta[y_n,x_n,x_n,z_{n-1},y_{n-1},x_{n-1},x_{n-1}],
\end{aligned}$$
$$\begin{aligned}
H_5(x) ={}& \zeta(x_n) + (x - x_n)\zeta[x_n,x_n] + (x - x_n)^{2}\zeta[x_n,x_n,z_{n-1}] + (x - x_n)^{2}(x - z_{n-1})\zeta[x_n,x_n,z_{n-1},y_{n-1}] \\
&+ (x - x_n)^{2}(x - z_{n-1})(x - y_{n-1})\zeta[x_n,x_n,z_{n-1},y_{n-1},x_{n-1}] + (x - x_n)^{2}(x - z_{n-1})(x - y_{n-1})(x - x_{n-1})\zeta[x_n,x_n,z_{n-1},y_{n-1},x_{n-1},x_{n-1}].
\end{aligned}$$
Note: The Hermite interpolating polynomials H_m(x), m = 5, 6, 7, satisfy H_m′(x_n) = ζ′(x_n). Consequently, every occurrence of ζ′(x_n) in the expression (9) for α_n may equivalently be replaced by H_m′(x_n) for m = 5, 6, 7.
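To make the construction of these interpolants concrete, the following Python sketch builds a Newton-form Hermite interpolant with doubled nodes (the confluent divided-difference table) and evaluates its second derivative, as needed for H_5″(x_n). The helper names and the sample iterate values are ours; only doubled nodes are handled, since that is all the polynomials above require. For a cubic test function the interpolant reproduces the function exactly, so the printed second derivative equals ζ″(x_n).

```python
# Sketch: confluent (doubled-node) divided differences and a Newton-form
# Hermite interpolant, assuming at most two equal consecutive nodes.
import sympy as sp

def confluent_divided_differences(nodes, f, df):
    """Newton divided-difference coefficients; doubled nodes use zeta'(node)."""
    n = len(nodes)
    table = [[0.0] * n for _ in range(n)]
    for i in range(n):
        table[i][0] = f(nodes[i])
    for j in range(1, n):
        for i in range(n - j):
            if j == 1 and nodes[i] == nodes[i + 1]:
                table[i][1] = df(nodes[i])            # zeta[x, x] = zeta'(x)
            else:
                table[i][j] = (table[i + 1][j - 1] - table[i][j - 1]) / (nodes[i + j] - nodes[i])
    return [table[0][j] for j in range(n)]

zeta  = lambda t: t**3 - 2.0          # illustrative test function
dzeta = lambda t: 3.0 * t**2
# Hypothetical iterate history (placeholders, not values from the paper):
x_prev, y_prev, z_prev, x_curr = 1.00, 1.30, 1.26, 1.2599
nodes = [x_curr, x_curr, z_prev, y_prev, x_prev, x_prev]   # nodes of H_5
coeffs = confluent_divided_differences(nodes, zeta, dzeta)

x = sp.Symbol('x')
H, basis = 0, sp.Integer(1)
for node, c in zip(nodes, coeffs):     # Newton form: sum of c_k * prod(x - earlier nodes)
    H += c * basis
    basis *= (x - node)

H5_second = sp.diff(H, x, 2).subs(x, x_curr)
print(float(H5_second), 6 * x_curr)    # both close to zeta''(x_curr) = 6*x_curr
```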
Theorem 1.
Let H_m be the Hermite interpolating polynomial of degree m that interpolates the function ζ at the nodes z_n, y_n, x_n, x_n, z_{n−1}, y_{n−1}, x_{n−1}, x_{n−1} contained in an interval D ⊂ ℝ, and let the derivative ζ^{(m+1)} be continuous in D, with H_m(x_n) = ζ(x_n) and H_m′(x_n) = ζ′(x_n). Suppose that all nodes z_n, y_n, x_n, z_{n−1}, y_{n−1}, x_{n−1} lie in a neighborhood of the root ξ. Then,
$$H_7^{(iv)}(z_n) \sim 24\,\zeta'(\xi)\left(c_4 - c_8\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^{2}\right),$$
$$H_6'''(y_n) \sim 6\,\zeta'(\xi)\left(c_3 - c_7\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^{2}\right),$$
$$H_5''(x_n) \sim 2\,\zeta'(\xi)\left(c_2 - c_6\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^{2}\right),$$
and
$$\alpha_n = -\frac{H_7^{(iv)}(z_n)\,\zeta'(x_n)}{12H_5''(x_n)\,\zeta'(x_n) + 30\left(H_5''(x_n)\right)^{2} - 4H_6'''(y_n)\,\zeta'(x_n)} - \frac{H_5''(x_n)}{2\zeta'(x_n)} = -\frac{c_4}{c_2 + 5c_2^{2} - c_3} - c_2 + O\!\left(e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^{2}\right).$$
Again, after simplification, we have
$$(\alpha_n + c_2)\left(c_2 + 5c_2^{2} - c_3\right) + c_4 = O\!\left(e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^{2}\right).$$
Proof. 
The interpolation errors of the seventh-degree, sixth-degree, and fifth-degree Hermite interpolating polynomials can be written as
$$\zeta(x) - H_7(x) = \frac{\zeta^{(8)}(\delta)}{8!}\,(x - z_n)(x - y_n)(x - x_n)^{2}(x - z_{n-1})(x - y_{n-1})(x - x_{n-1})^{2},$$
$$\zeta(x) - H_6(x) = \frac{\zeta^{(7)}(\delta)}{7!}\,(x - y_n)(x - x_n)^{2}(x - z_{n-1})(x - y_{n-1})(x - x_{n-1})^{2},$$
$$\zeta(x) - H_5(x) = \frac{\zeta^{(6)}(\delta)}{6!}\,(x - x_n)^{2}(x - z_{n-1})(x - y_{n-1})(x - x_{n-1})^{2}.$$
Now, we obtain the below-mentioned equations by rearranging and differentiating Equation (15) four times at the point x = z n , Equation (16) three times at the point x = y n , and Equation (17) two times at the point x = x n , respectively.
$$H_7^{(iv)}(z_n) = \zeta^{(iv)}(z_n) - 24\,\frac{\zeta^{(8)}(\delta)}{8!}\,(z_n - z_{n-1})(z_n - y_{n-1})(z_n - x_{n-1})^{2},$$
$$H_6'''(y_n) = \zeta'''(y_n) - 6\,\frac{\zeta^{(7)}(\delta)}{7!}\,(y_n - z_{n-1})(y_n - y_{n-1})(y_n - x_{n-1})^{2},$$
$$H_5''(x_n) = \zeta''(x_n) - 2\,\frac{\zeta^{(6)}(\delta)}{6!}\,(x_n - z_{n-1})(x_n - y_{n-1})(x_n - x_{n-1})^{2}.$$
Next, Taylor-series expansion of ζ and its derivatives at the points x_n, y_n, and z_n in D and at δ ∈ D about the zero ξ of ζ provides
$$\zeta'(x_n) = \zeta'(\xi)\left(1 + 2c_2 e_n + 3c_3 e_n^{2} + O(e_n^{3})\right),$$
$$\zeta''(x_n) = \zeta'(\xi)\left(2c_2 + 6c_3 e_n + O(e_n^{2})\right),$$
$$\zeta'''(y_n) = \zeta'(\xi)\left(6c_3 + 24c_4 e_{n,y} + O(e_{n,y}^{2})\right).$$
Similarly,
$$\zeta^{(iv)}(z_n) = \zeta'(\xi)\left(24c_4 + 120c_5 e_{n,z} + O(e_{n,z}^{2})\right),$$
$$\zeta^{(6)}(\delta) = \zeta'(\xi)\left(6!\,c_6 + 7!\,c_7 e_{\delta} + O(e_{\delta}^{2})\right),$$
$$\zeta^{(7)}(\delta) = \zeta'(\xi)\left(7!\,c_7 + 8!\,c_8 e_{\delta} + O(e_{\delta}^{2})\right),$$
$$\zeta^{(8)}(\delta) = \zeta'(\xi)\left(8!\,c_8 + 9!\,c_9 e_{\delta} + O(e_{\delta}^{2})\right),$$
where e_δ = δ − ξ. Putting (24) and (27) in (18), (23) and (26) in (19), and (22) and (25) in (20), we obtain
$$H_7^{(iv)}(z_n) \sim 24\,\zeta'(\xi)\left(c_4 - c_8\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^{2}\right),$$
$$H_6'''(y_n) \sim 6\,\zeta'(\xi)\left(c_3 - c_7\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^{2}\right),$$
and
$$H_5''(x_n) \sim 2\,\zeta'(\xi)\left(c_2 - c_6\, e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^{2}\right).$$
Using Equations (21), (28), (29) and (30), we have
$$-\frac{H_7^{(iv)}(z_n)\,\zeta'(x_n)}{12H_5''(x_n)\,\zeta'(x_n) + 30\left(H_5''(x_n)\right)^{2} - 4H_6'''(y_n)\,\zeta'(x_n)} - \frac{H_5''(x_n)}{2\zeta'(x_n)} = -\frac{c_4}{c_2 + 5c_2^{2} - c_3} - c_2 + O\!\left(e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^{2}\right).$$
And hence,
$$\alpha_n = -\frac{c_4}{c_2 + 5c_2^{2} - c_3} - c_2 + O\!\left(e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^{2}\right),$$
or
$$(\alpha_n + c_2)\left(c_2 + 5c_2^{2} - c_3\right) + c_4 = O\!\left(e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^{2}\right).$$
This completes the proof of Theorem 1. □
The definition of R-order of convergence [17] and the statement in [18] can be used to estimate the order of convergence of the iterative scheme (8).
Theorem 2.
If the errors e_j = x_j − ξ produced by an iterative root-finding method IM satisfy
$$e_{k+1} \sim \prod_{i=0}^{n}\left(e_{k-i}\right)^{m_i}, \qquad k \ge k\left(\{e_k\}\right),$$
then the R-order of convergence of IM, denoted by O_R(IM, ξ), satisfies the inequality O_R(IM, ξ) ≥ s*, where s* is the unique positive solution of the equation s^{n+1} − ∑_{i=0}^{n} m_i s^{n−i} = 0 [18].
Proof. 
The proof of the above Theorem 2 can be found in [18]. □
Presently, for the new iterative scheme with memory (8), we can state the subsequent convergence theorem.
Theorem 3.
In the iterative method (8), let α n be a varying parameter, calculated by Equation (9). If an initial guess x 0 is sufficiently near to a simple zero ξ of ζ ( x ) , then the R-order of convergence of the iterative method (8) with memory is at least 8.8989.
Proof. 
Let the iterative method (IM) generate the sequence {x_n} converging to the root ξ of ζ(x). Assuming an R-order of convergence O_R(IM, ξ) ≥ r, we can write
$$e_{n+1} \sim D_{n,r}\, e_n^{r},$$
and
$$e_n \sim D_{n-1,r}\, e_{n-1}^{r}.$$
Next, as n → ∞, D_{n,r} tends to the asymptotic error constant D_r of IM; then,
$$e_{n+1} \sim D_{n,r}\left(D_{n-1,r}\, e_{n-1}^{r}\right)^{r} = D_{n,r}\, D_{n-1,r}^{\,r}\, e_{n-1}^{r^{2}}.$$
The resulting error expression of the with-memory scheme (8) can be obtained using (5)–(7) and the varying parameter α n .
$$e_{n,y} = y_n - \xi \sim c_2\, e_n^{2},$$
$$e_{n,z} = z_n - \xi \sim c_2\left(c_2 + 5c_2^{2} - c_3\right)e_n^{4},$$
and
$$e_{n+1} = x_{n+1} - \xi \sim c_2^{2}\left(c_2 + 5c_2^{2} - c_3\right)\left[(\alpha_n + c_2)\left(c_2 + 5c_2^{2} - c_3\right) + c_4\right]e_n^{8}.$$
Here, the higher-order terms in Equations (38)–(40) are excluded.
Now, let the R-order convergence of the iterative sequences { y n } and { z n } be p and q, respectively; then,
$$e_{n,y} \sim D_{n,p}\, e_n^{p} \sim D_{n,p}\left(D_{n-1,r}\, e_{n-1}^{r}\right)^{p} = D_{n,p}\, D_{n-1,r}^{\,p}\, e_{n-1}^{rp},$$
and
$$e_{n,z} \sim D_{n,q}\, e_n^{q} \sim D_{n,q}\left(D_{n-1,r}\, e_{n-1}^{r}\right)^{q} = D_{n,q}\, D_{n-1,r}^{\,q}\, e_{n-1}^{rq}.$$
Now, by Equations (36) and (38), we obtain
$$e_{n,y} \sim c_2\, e_n^{2} \sim c_2\left(D_{n-1,r}\, e_{n-1}^{r}\right)^{2} = c_2\, D_{n-1,r}^{\,2}\, e_{n-1}^{2r}.$$
Also, by Equations (36) and (39), we obtain
$$e_{n,z} \sim c_2\left(c_2 + 5c_2^{2} - c_3\right)e_n^{4} \sim c_2\left(c_2 + 5c_2^{2} - c_3\right)\left(D_{n-1,r}\, e_{n-1}^{r}\right)^{4} = c_2\left(c_2 + 5c_2^{2} - c_3\right)D_{n-1,r}^{\,4}\, e_{n-1}^{4r}.$$
Again, by Equations (33), (36) and (40), we have
$$e_{n+1} \sim c_2^{2}\left(c_2 + 5c_2^{2} - c_3\right)\left[(\alpha_n + c_2)\left(c_2 + 5c_2^{2} - c_3\right) + c_4\right]e_n^{8},$$
and, using (33) to bound the bracket by a multiple of e_{n−1,z} e_{n−1,y} e_{n−1}², together with e_{n−1,z} ∼ D_{n−1,q} e_{n−1}^{q} and e_{n−1,y} ∼ D_{n−1,p} e_{n−1}^{p},
$$e_{n+1} = O\!\left(e_{n-1,z}\, e_{n-1,y}\, e_{n-1}^{2}\left(D_{n-1,r}\, e_{n-1}^{r}\right)^{8}\right) = O\!\left(D_{n-1,q}\, D_{n-1,p}\, D_{n-1,r}^{\,8}\, e_{n-1}^{\,8r + p + q + 2}\right),$$
where only the dominant powers of e_{n−1} have been retained,
since r > q > p. By equating the exponents of e_{n−1} appearing in the pairs of relations (41) and (43), (42) and (44), and (37) and (45), we obtain the following system of equations:
$$rp = 2r, \qquad rq = 4r, \qquad r^{2} = 8r + p + q + 2.$$
The solution of the system of Equation (46) is r = 4 + 2√6 ≈ 8.8989, q = 4, and p = 2. As a result, the R-order of convergence of the with-memory iterative method (8) is at least 8.8989. □
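The quoted R-order follows from the quadratic r² = 8r + 8 obtained after substituting p = 2 and q = 4; the short sketch below evaluates it numerically.

```python
# Sketch: solve r**2 - 8*r - 8 = 0 (the system with p = 2, q = 4),
# whose positive root is r = 4 + 2*sqrt(6).
import math

r = 4 + 2 * math.sqrt(6)
print(round(r, 4))                                     # approximately 8.899

p, q = 2, 4                                            # via the quadratic formula
r_quad = (8 + math.sqrt(64 + 4 * (p + q + 2))) / 2
print(round(r_quad, 4))                                # same value, about 8.8990
```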

3. Numerical Discussion

This section examines the convergence behavior of the newly developed with-memory technique NWM9 introduced in (8). Our goal is to evaluate the effectiveness of the proposed method by applying it to a variety of nonlinear problems. The nonlinear test functions, along with their roots and the initial guesses used in our numerical analysis, are described below:
  • Example 1: ζ 1 ( x ) = 1 + x 2 · e cos x / 2 ( x + 1 ) e sin x / 2 , x 0 = 0.9 , ξ 0.8475
  • Example 2: ζ 2 ( x ) = e x 3 + cos x + 1 x 2 + x + 1 , x 0 = 0.8 , ξ 1.0787
  • Example 3: ζ₃(x) = x·e^{x²} − sin²x + 3cos x + 5, x₀ = −1.2, ξ ≈ −1.2076
  • Example 4: ζ 4 ( x ) = e x 2 ( 1 + x 3 + x 6 ) ( x 2 ) , x 0 = 1.95 , ξ 2.0000
  • Example 5: ζ₅(x) = x⁷ − 4x⁴ + x − 1, x₀ = 1.58, ξ ≈ 1.5749
  • Example 6: ζ₆(x) = e^{x² − 4} + sin(x − 2) − x⁴ + 15, x₀ = 2.1, ξ ≈ 2.0000
  • Example 7: ζ₇(x) = e^{x² + x − 2} − 1, x₀ = −2.01, ξ ≈ −2.0000
  • Example 8 [19,20]: In civil engineering, beams in mathematical models are horizontal elements that support loads and span openings, sometimes called lintels if made of stone or brick. “Floor joist” or “roof joist” designates beams supporting floors or roofs, respectively. Stringers support lighter bridge deck loads, while floor beams handle heavier transverse loads. Girders, constructed from metal plates or concrete, bear terminal loads of smaller beams, enhancing rigidity and extending spans. Various nonlinear mathematical models have been developed to specify the precise beam location. The model below is an example which was taken from [19,20]:
    ζ₈(x) = x⁴ + 4x³ − 24x² + 16x + 16 = 0.
The roots of the above fourth-order polynomial are 2, 2, and −4 ± 2√3, and the initial guess for ζ₈(x) is taken as x₀ = 0.55 (a quick numerical check of these roots is sketched below).
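The following minimal sketch verifies the quoted roots of the beam-model polynomial with a standard polynomial root finder; it is a check, not part of the proposed method.

```python
# Sketch: verify the roots of x^4 + 4x^3 - 24x^2 + 16x + 16 = 0.
import numpy as np

coeffs = [1, 4, -24, 16, 16]              # coefficients in descending powers
print(np.roots(coeffs))                   # the double root 2 appears as two nearly equal values,
                                          # plus -4 + 2*sqrt(3) and -4 - 2*sqrt(3)
print(-4 + 2 * np.sqrt(3), -4 - 2 * np.sqrt(3))
```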
We compare our method NWM9 (8) to various well-established methods published in the literature, including MSSV8 (48), ACD8 (49), LE8 (50), SH8 (51), BAC8 (52), and TKM9 (53), which are discussed below:
In 2016, Matthies et al. (MSSV8) [16] developed an optimal eighth-order iterative method which is defined as
$$\begin{aligned}
y_n &= x_n - \frac{\zeta(x_n)}{\zeta'(x_n)},\\
z_n &= x_n - \frac{\zeta(x_n)}{\zeta'(x_n)}\left[1 + \frac{\zeta(y_n)}{\zeta(x_n)} + \left(1 + \frac{1}{1 + \zeta(x_n)/\zeta'(x_n)}\right)\left(\frac{\zeta(y_n)}{\zeta(x_n)}\right)^{2}\right],\\
x_{n+1} &= z_n - \frac{\zeta(z_n)}{\zeta[z_n,y_n] + (z_n - y_n)\,\zeta[z_n,y_n,x_n] + (z_n - y_n)(z_n - x_n)\,\zeta[z_n,y_n,x_n,x_n]}.
\end{aligned}$$
In 2024, Abdullah et al. (ACD8) [21] developed an optimal eighth-order iterative method which is defined as
$$\begin{aligned}
y_n &= x_n - \frac{\zeta(x_n)}{\zeta'(x_n)},\\
z_n &= x_n - \frac{\zeta(x_n)\left(\zeta(x_n)^{3} + 2\zeta(x_n)\zeta(y_n)^{2}\right)}{\zeta'(x_n)\left(\zeta(x_n) - \zeta(y_n)\right)\left(\zeta(x_n)^{2} + \zeta(y_n)^{2}\right)},\\
x_{n+1} &= z_n - \frac{\zeta(z_n)\,(z_n - y_n)}{\zeta(z_n) - \zeta(y_n)}\left(A(u_n) + B(v_n) + H(w_n)\right),
\end{aligned}$$
where u_n = ζ(z_n)/ζ(y_n), v_n = ζ(z_n)/ζ(x_n), and w_n = ζ(y_n)/ζ(x_n).
In 2014, Lotfi and Eftekhari (LE8) [22] developed an optimal eighth-order iterative method which is defined as
$$\begin{aligned}
y_n &= x_n - \frac{\zeta(x_n)}{\zeta'(x_n)},\\
z_n &= y_n - \frac{\zeta(x_n)}{\zeta(x_n) - 2\zeta(y_n)}\cdot\frac{\zeta(y_n)}{\zeta'(x_n)},\\
x_{n+1} &= z_n - \left(K(t_1)\times L(t_2)\times P(t_3)\right)\frac{\zeta(z_n)\,\zeta[x_n,y_n]}{\zeta[x_n,z_n]\,\zeta[y_n,z_n]},
\end{aligned}$$
where t_1 = ζ(z_n)/ζ(x_n), t_2 = ζ(y_n)/ζ(x_n), and t_3 = ζ(z_n)/ζ(y_n).
In 2020, Solaiman and Hashim (SH8) [23] developed an optimal eighth-order iterative method which is defined as
y n = x n ζ ( x n ) ζ ( x n ) , z n = y n ζ ( y n ) ζ ( y n ) 2 ( ζ ( y n ) ) 2 ζ ( y n ) R ( x n , y n ) 4 ( ζ ( y n ) ) 4 4 ζ ( y n ) ( ζ ( y n ) ) 2 R ( x n , y n ) + ( ζ ( y n ) ) 2 ( R ( x n , y n ) ) 2 , x n + 1 = z n ζ ( z n ) ζ ( z n ) ,
where ζ′(y_n), ζ′(z_n), and R(x_n, y_n) are approximated as
$$\zeta'(y_n) \approx 2\zeta[y_n, x_n] - \zeta'(x_n),$$
$$\zeta'(z_n) \approx \zeta[z_n, x_n]\left(2 + \frac{x_n - z_n}{y_n - z_n}\right) - \frac{(x_n - z_n)^{2}}{(x_n - y_n)(y_n - z_n)}\,\zeta[x_n, y_n] + \zeta'(x_n)\,\frac{y_n - z_n}{x_n - y_n},$$
$$R(x_n, y_n) \approx \left(3\,\frac{\zeta(y_n) - \zeta(x_n)}{y_n - x_n} - 2\zeta'(y_n) - \zeta'(x_n)\right)\frac{2}{x_n - y_n}.$$
In 2020, Behl et al. (BAC8) [24] developed an optimal eighth-order iterative method which is defined as
$$\begin{aligned}
w_n &= x_n + \beta\,\zeta(x_n), \qquad \beta \in \mathbb{R},\\
y_n &= x_n - \frac{\zeta(x_n)}{\zeta[w_n, x_n]},\\
z_n &= y_n - \frac{\zeta(w_n)\,\zeta(y_n)\,(y_n - x_n)}{\left(\zeta(w_n) - \zeta(y_n)\right)\left(\zeta(y_n) - \zeta(x_n)\right)},\\
x_{n+1} &= z_n - \frac{\zeta(z_n)\,(w_n - x_n)(w_n - y_n)(x_n - y_n)}{\zeta[y_n, z_n]\,(w_n - x_n)(w_n - z_n)(x_n - z_n) - a\,(y_n - z_n)},
\end{aligned}$$
where a = ζ[x_n, z_n](w_n − y_n)(w_n − z_n) − ζ[w_n, z_n](x_n − y_n)(x_n − z_n).
In 2021, Torkashvand et al. (TKM9) [25] proposed a family of with-memory iterative methods with a ninth order of convergence which is defined as
$$\begin{aligned}
w_n &= x_n + \beta_n\,\zeta(x_n),\\
y_n &= x_n - \frac{\zeta(x_n)}{\zeta[x_n, w_n]},\\
z_n &= y_n - \frac{\zeta[x_n, y_n] - \zeta[y_n, w_n] + \zeta[x_n, w_n]}{\zeta[x_n, y_n]^{2}}\,\zeta(y_n),\\
x_{n+1} &= z_n - \frac{\zeta(z_n)}{\zeta[x_n, z_n] + \left(\zeta[w_n, x_n, y_n] - \zeta[w_n, x_n, z_n] - \zeta[y_n, x_n, z_n]\right)(x_n - z_n)},
\end{aligned}$$
where β_n is a self-accelerating parameter approximating −1/ζ′(ξ); it is computed via equation (3.11) of [25] with β_0 = 0.001.
Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8 summarize the comparative results of all the methods. In these tables, we present the absolute differences between consecutive iterates (|x_n − x_{n−1}|) and the absolute residual error (|ζ(x_n)|) for up to three iterations on each function, along with the computational order of convergence (COC) of the proposed method in comparison with some well-known existing methods. The following equation is used to determine the COC [26]:
$$\mathrm{COC} = \frac{\log\left|\zeta(x_n)/\zeta(x_{n-1})\right|}{\log\left|\zeta(x_{n-1})/\zeta(x_{n-2})\right|}.$$
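A minimal sketch of this formula is given below; the residual values are placeholders. Note that reproducing the residuals reported in the tables (down to 10^−1200 and smaller) requires multiprecision arithmetic, as used in the Mathematica computations described next, rather than double-precision floats.

```python
# Sketch: computational order of convergence from the last three residuals.
import math

def coc(f_n, f_nm1, f_nm2):
    """COC = log|f(x_n)/f(x_{n-1})| / log|f(x_{n-1})/f(x_{n-2})|."""
    return math.log(abs(f_n / f_nm1)) / math.log(abs(f_nm1 / f_nm2))

# Example with made-up residuals shrinking at roughly eighth order:
print(round(coc(8e-90, 5e-12, 3e-2), 3))   # about 8, as expected for an eighth-order scheme
```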
All numerical computations were performed using the software Mathematica 12.2. To begin the first iteration of our newly proposed with-memory method NWM9, we set the initial parameter value α_0 = 0.01.
The numerical results in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8 and Figure 1 show that the newly proposed with-memory method NWM9 is highly competitive and converges quickly towards the roots, with smaller absolute residual errors and smaller differences between consecutive iterates than the other existing methods. Furthermore, the numerical findings show that the computational order of convergence aligns with the theoretical convergence order of the newly proposed method on the test functions.

4. Conclusions

In this article, we present a three-point with-memory iterative method that uses a self-accelerating parameter to accelerate the convergence of an existing optimal eighth-order scheme for nonlinear equations. By adding this parameter, which is calculated using a Hermite interpolating polynomial and requires no extra function evaluations, we improve the efficiency index of the eighth-order method from EI = 1.6818 to EI = 1.7272 and raise its R-order of convergence from 8 to 8.8989. The method therefore attains a higher convergence order than several well-known approaches at the same number of function evaluations per iteration. Our results show that the proposed approach NWM9 is a very efficient option for solving nonlinear equations, providing better performance with faster convergence and smaller asymptotic error constants.

Author Contributions

Conceptualisation, S.K.M. and S.P.; methodology, S.K.M. and S.P.; software, S.K.M., S.P., and L.J.; validation, S.K.M. and S.P.; formal analysis, S.P., S.K.M., and L.J.; resources, S.K.M.; writing—original draft preparation, S.P. and S.K.M.; writing—review and editing, S.K.M., S.P., and L.J.; visualization, S.P. and S.K.M.; supervision, S.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Pho, K.H. Improvements of the Newton–Raphson method. J. Comput. Appl. Math. 2022, 408, 114106. [Google Scholar] [CrossRef]
  2. Traub, J.F. Iterative Methods for the Solution of Equations; Mathematical Association of America: Washington, DC, USA, 1982; Volume 312. [Google Scholar]
  3. Liu, C.S.; Chang, C.W.; Kuo, C.L. Memory-Accelerating Methods for One-Step Iterative Schemes with Lie Symmetry Method Solving Nonlinear Boundary-Value Problem. Symmetry 2024, 16, 120. [Google Scholar] [CrossRef]
  4. Sharma, E.; Mittal, S.K.; Jaiswal, J.P.; Panday, S. An Efficient Bi-Parametric with-Memory Iterative Method for Solving Nonlinear Equations. AppliedMath 2023, 3, 1019–1033. [Google Scholar] [CrossRef]
  5. Thangkhenpau, G.; Panday, S.; Mittal, S.K. New Derivative-Free Families of Four-Parametric with and Without Memory Iterative Methods for Nonlinear Equations. In Proceedings of the International Conference on Science, Technology and Engineering, Coimbatore, India, 17–18 November 2023; Springer Nature: Singapore, 2023; pp. 313–324. [Google Scholar] [CrossRef]
  6. Choubey, N.; Jaiswal, J.P.; Choubey, A. Family of multipoint with memory iterative schemes for solving nonlinear equations. Int. J. Appl. Comput. Math. 2022, 8, 83. [Google Scholar] [CrossRef]
  7. Choubey, N.; Jaiswal, J.P. Two-and three-point with memory methods for solving nonlinear equations. Numer. Anal. Appl. 2017, 10, 74–89. [Google Scholar] [CrossRef]
  8. Choubey, N.; Cordero, A.; Jaiswal, J.P.; Torregrosa, J.R. Dynamical techniques for analyzing iterative schemes with memory. Complexity 2018, 2018, 1232341. [Google Scholar] [CrossRef]
  9. Erfanifar, R. A class of efficient derivative free iterative method with and without memory for solving nonlinear equations. Comput. Math. Comput. Model. Appl. 2022, 1, 20–26. [Google Scholar]
  10. Howk, C.L.; Hueso, J.L.; Martínez, E.; Teruel, C. A class of efficient high-order iterative methods with memory for nonlinear equations and their dynamics. Math. Methods Appl. Sci. 2018, 41, 7263–7282. [Google Scholar] [CrossRef]
  11. Sharma, H.; Kansal, M.; Behl, R. An Efficient Two-Step Iterative Family Adaptive with Memory for Solving Nonlinear Equations and Their Applications. Math. Comput. Appl. 2022, 27, 97. [Google Scholar] [CrossRef]
  12. Sharma, E.; Panday, S.; Mittal, S.K.; Joița, D.M.; Pruteanu, L.L.; Jäntschi, L. Derivative-free families of with-and without-memory iterative methods for solving nonlinear equations and their engineering applications. Mathematics 2023, 11, 4512. [Google Scholar] [CrossRef]
  13. Wang, X.; Zhang, T. Some Newton-type iterative methods with and without memory for solving nonlinear equations. Int. J. Comput. Methods 2014, 11, 1350078. [Google Scholar] [CrossRef]
  14. Liu, C.S.; Chang, C.W. New Memory-Updating Methods in Two-Step Newton’s Variants for Solving Nonlinear Equations with High Efficiency Index. Mathematics 2024, 12, 581. [Google Scholar] [CrossRef]
  15. Panday, S.; Mittal, S.K.; Stoenoiu, C.E.; Jäntschi, L. A New Adaptive Eleventh-Order Memory Algorithm for Solving Nonlinear Equations. Mathematics 2024, 12, 1809. [Google Scholar] [CrossRef]
  16. Matthies, G.; Salimi, M.; Sharifi, S.; Varona, J.L. An optimal three-point eighth-order iterative method without memory for solving nonlinear equations with its dynamics. Jpn. J. Ind. Appl. Math. 2016, 33, 751–766. [Google Scholar] [CrossRef]
  17. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2000. [Google Scholar]
  18. Alefeld, G.; Herzberger, J. Introduction to Interval Computation; Academic Press: Cambridge, MA, USA, 2012. [Google Scholar]
  19. Abdullah, S.; Choubey, N.; Dara, S. Dynamical analysis of optimal iterative methods for solving nonlinear equations with applications. J. Appl. Anal. Comput. 2024, 14, 3349–3376. [Google Scholar]
  20. Naseem, A.; Rehman, M.A.; Qureshi, S.; Ide, N.A.D. Graphical and numerical study of a newly developed root-finding algorithm and its engineering applications. IEEE Access 2023, 11, 2375–2383. [Google Scholar] [CrossRef]
  21. Abdullah, S.; Choubey, N.; Dara, S. Optimal fourth-and eighth-order iterative methods for solving nonlinear equations with basins of attraction. J. Appl. Math. Comput. 2024, 70, 3477–3507. [Google Scholar] [CrossRef]
  22. Lotfi, T.; Eftekhari, T. A New Optimal Eighth-Order Ostrowski-Type Family of Iterative Methods for Solving Nonlinear Equations. Chin. J. Math. 2014, 2014, 369713. [Google Scholar] [CrossRef]
  23. Solaiman, O.S.; Hashim, I. Optimal Eighth-Order Solver for Nonlinear Equations with Applications in Chemical Engineering. Intell. Autom. Soft Comput. 2021, 27, 379–390. [Google Scholar] [CrossRef]
  24. Behl, R.; Alshomrani, A.S.; Chun, C. A general class of optimal eighth-order derivative free methods for nonlinear equations. J. Math. Chem. 2020, 58, 854–867. [Google Scholar] [CrossRef]
  25. Torkashvand, V.; Kazemi, M.; Moccari, M. Structure a family of three-step with-memory methods for solving nonlinear equations and their dynamics. Math. Anal. Convex Optim. 2021, 2, 119–137. [Google Scholar]
  26. Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
Figure 1. Comparison of the methods based on the error in consecutive iterations, |x_n − x_{n−1}|, after the first three iterations.
Table 1. Comparisons of without-memory and with-memory methods after the first three (n = 3) iterations for ζ_1(x).

Method   |x_1 − x_0|   |x_2 − x_1|   |x_3 − x_2|   |ζ_1(x_3)|   COC
MSSV8   0.05252   3.7900 × 10^−11   7.6785 × 10^−84   2.3346 × 10^−665   8.0000
ACD8   0.05252   3.4318 × 10^−11   2.1348 × 10^−84   5.1265 × 10^−670   8.0000
LE8   0.05252   3.9805 × 10^−12   3.7451 × 10^−95   2.4628 × 10^−759   8.0000
SH8   0.05252   4.5810 × 10^−12   2.2774 × 10^−92   9.1021 × 10^−735   8.0000
BAC8   0.05252   1.4930 × 10^−10   1.8619 × 10^−78   1.1669 × 10^−621   8.0000
TKM9   0.05252   1.1405 × 10^−11   3.6015 × 10^−96   7.2790 × 10^−817   8.5292
NWM9   0.05784   4.1302 × 10^−11   1.9899 × 10^−93   3.8586 × 10^−831   8.9618
Table 2. Comparisons of without-memory and with-memory methods after the first three (n = 3) iterations for ζ_2(x).

Method   |x_1 − x_0|   |x_2 − x_1|   |x_3 − x_2|   |ζ_2(x_3)|   COC
MSSV8   0.27875   1.4921 × 10^−7   8.9745 × 10^−55   1.3200 × 10^−431   8.0000
ACD8   0.27875   2.5147 × 10^−6   1.7019 × 10^−45   6.4324 × 10^−358   8.0000
LE8   0.27875   4.6097 × 10^−6   4.9934 × 10^−44   8.1313 × 10^−347   8.0000
SH8   0.27875   1.7169 × 10^−6   2.8891 × 10^−48   1.5956 × 10^−381   8.0000
BAC8   0.26044   1.8314 × 10^−2   1.8518 × 10^−11   1.7828 × 10^−82   7.9934
TKM9   0.27875   1.5174 × 10^−6   1.6804 × 10^−51   5.9549 × 10^−434   8.5281
NWM9   0.27875   7.3581 × 10^−8   7.2640 × 10^−62   1.5269 × 10^−544   8.9548
Table 3. Comparisons of without-memory and with-memory methods after the first three (n = 3) iterations for ζ_3(x).

Method   |x_1 − x_0|   |x_2 − x_1|   |x_3 − x_2|   |ζ_3(x_3)|   COC
MSSV8   0.00765   2.9645 × 10^−15   1.4589 × 10^−114   1.0193 × 10^−907   8.0000
ACD8   0.00765   1.3767 × 10^−15   1.3565 × 10^−117   2.4483 × 10^−932   8.0000
LE8   0.00765   2.1364 × 10^−17   7.5667 × 10^−134   3.8042 × 10^−1064   8.0000
SH8   0.00765   3.8290 × 10^−17   1.5719 × 10^−131   2.5750 × 10^−1045   8.0000
BAC8   0.00765   8.0553 × 10^−11   1.2473 × 10^−74   8.3702 × 10^−584   8.0000
TKM9   0.00765   4.2164 × 10^−16   3.1405 × 10^−130   9.7191 × 10^−1103   8.5327
NWM9   0.00765   2.9645 × 10^−15   4.6276 × 10^−129   6.0799 × 10^−1139   8.8852
Table 4. Comparisons of without-memory and with-memory methods after the first three (n = 3) iterations for ζ_4(x).

Method   |x_1 − x_0|   |x_2 − x_1|   |x_3 − x_2|   |ζ_4(x_3)|   COC
MSSV8   0.05000   7.8095 × 10^−10   9.1958 × 10^−72   4.5442 × 10^−567   8.0000
ACD8   0.05000   7.5624 × 10^−10   1.8286 × 10^−72   2.8568 × 10^−573   8.0000
LE8   0.05000   1.0812 × 10^−10   9.4320 × 10^−80   4.2313 × 10^−632   8.0000
SH8   0.05000   1.6034 × 10^−11   3.3188 × 10^−87   1.4946 × 10^−692   8.0000
BAC8   0.05000   4.5664 × 10^−9   8.6966 × 10^−65   2.0124 × 10^−510   8.0000
TKM9   0.05000   2.4915 × 10^−10   4.9782 × 10^−82   1.3669 × 10^−693   8.5313
NWM9   0.05000   7.7203 × 10^−10   3.6473 × 10^−80   5.5446 × 10^−706   8.9007
Table 5. Comparisons of without-memory and with-memory methods after the first three (n = 3) iterations for ζ_5(x).

Method   |x_1 − x_0|   |x_2 − x_1|   |x_3 − x_2|   |ζ_5(x_3)|   COC
MSSV8   0.00506   2.8919 × 10^−14   3.9009 × 10^−104   1.9376 × 10^−821   8.0000
ACD8   0.00506   3.5871 × 10^−15   3.6316 × 10^−112   1.8167 × 10^−886   8.0000
LE8   0.00506   1.3336 × 10^−16   1.6545 × 10^−125   4.2086 × 10^−995   8.0000
SH8   0.00506   1.8393 × 10^−16   5.7646 × 10^−124   2.4321 × 10^−982   8.0000
BAC8   0.00506   2.5941 × 10^−9   8.1745 × 10^−59   3.6057 × 10^−453   8.0000
TKM9   0.00506   3.8227 × 10^−15   1.0436 × 10^−119   3.1705 × 10^−1010   8.5323
NWM9   0.00506   2.9141 × 10^−14   3.1787 × 10^−117   7.0560 × 10^−1033   8.9092
Table 6. Comparisons of without-memory and with-memory methods after the first three (n = 3) iterations for ζ_6(x).

Method   |x_1 − x_0|   |x_2 − x_1|   |x_3 − x_2|   |ζ_6(x_3)|   COC
MSSV8   0.10000   2.1291 × 10^−10   1.9115 × 10^−78   2.1784 × 10^−621   8.0000
ACD8   0.10000   2.7671 × 10^−10   1.7064 × 10^−78   9.6367 × 10^−623   8.0000
LE8   0.10000   7.6557 × 10^−10   6.7672 × 10^−75   6.8097 × 10^−594   8.0000
SH8   0.10000   4.9364 × 10^−10   1.4124 × 10^−76   1.7126 × 10^−607   8.0000
BAC8   0.05633   4.3659 × 10^−2   9.7679 × 10^−6   5.3083 × 10^−35   8.3857
TKM9   0.10000   5.6142 × 10^−10   9.0560 × 10^−82   1.0615 × 10^−692   8.5296
NWM9   0.10000   2.6493 × 10^−10   2.3101 × 10^−85   9.8138 × 10^−753   8.9103
Table 7. Comparisons of without-memory and with-memory methods after the first three (n = 3) iterations for ζ_7(x).

Method   |x_1 − x_0|   |x_2 − x_1|   |x_3 − x_2|   |ζ_7(x_3)|   COC
MSSV8   0.01000   4.6629 × 10^−15   9.0468 × 10^−114   5.4488 × 10^−903   8.0000
ACD8   0.01000   1.6809 × 10^−15   7.8996 × 10^−118   5.6400 × 10^−936   8.0000
LE8   0.01000   1.5489 × 10^−17   5.1353 × 10^−138   2.2496 × 10^−1101   8.0000
SH8   0.01000   5.5733 × 10^−17   5.0151 × 10^−131   6.4674 × 10^−1043   8.0000
BAC8   0.01000   1.0436 × 10^−14   1.6235 × 10^−110   1.6709 × 10^−876   8.0000
TKM9   0.01000   7.7009 × 10^−16   6.9025 × 10^−129   5.6202 × 10^−1093   8.5324
NWM9   0.01000   4.6629 × 10^−15   3.3257 × 10^−128   2.3441 × 10^−1133   8.8878
Table 8. Comparisons of without-memory and with-memory methods after the first three (n = 3) iterations for ζ_8(x).

Method   |x_1 − x_0|   |x_2 − x_1|   |x_3 − x_2|   |ζ_8(x_3)|   COC
MSSV8   0.01410   6.4479 × 10^−16   1.5052 × 10^−122   5.9152 × 10^−974   8.0000
ACD8   0.01410   3.2077 × 10^−17   7.2736 × 10^−134   2.2649 × 10^−1065   8.0000
LE8   0.01410   4.6217 × 10^−17   6.0900 × 10^−133   2.4665 × 10^−1058   8.0000
SH8   0.01410   1.2569 × 10^−18   5.3125 × 10^−147   2.4104 × 10^−1172   8.0000
BAC8   0.01410   2.1035 × 10^−10   2.6249 × 10^−72   6.8772 × 10^−566   8.0000
TKM9   0.01410   2.6040 × 10^−16   7.0174 × 10^−135   6.8920 × 10^−1145   8.5322
NWM9   0.01410   6.4479 × 10^−16   1.0519 × 10^−137   3.8029 × 10^−1232   9.0000
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
