Article

An Enhanced Method with Memory Derived from Newton’s Scheme in Solving Nonlinear Equations: Higher R-Order Convergence and Numerical Performance

by Runqi Xue ¹, Yalin Li ², Enbin Song ¹ and Tao Liu ²,*
¹ College of Mathematics, Sichuan University, Chengdu 610064, China
² School of Mathematics and Statistics, Northeastern University at Qinhuangdao, Qinhuangdao 066004, China
* Author to whom correspondence should be addressed.
Axioms 2025, 14(5), 387; https://doi.org/10.3390/axioms14050387
Submission received: 14 April 2025 / Revised: 17 May 2025 / Accepted: 20 May 2025 / Published: 21 May 2025
(This article belongs to the Special Issue Numerical Analysis and Applied Mathematics)

Abstract:
This article introduces a novel iterative solver with memory, derived from Newton’s scheme, for nonlinear scalar equations. The key innovation lies in integrating memory into the iterative process, enhancing the convergence rate without increasing the quantity of functional evaluations compared to conventional two-point solvers. The proposed method achieves a superior R-order of convergence by adaptively utilizing past iterates to refine the current approximation. A rigorous theoretical analysis is presented to establish the convergence properties of the method. Extensive computational experiments demonstrate that the method with memory not only accelerates convergence but also enhances accuracy with fewer iterations. These findings suggest that the proposed solver has significant potential for applications in scientific computing and numerical optimization, where efficient and high-order iterative methods are crucial.

1. Introductory Notes

Solving nonlinear scalar equations is fundamental in numerous scientific and engineering disciplines, as many real-world problems involve nonlinear behavior that cannot be accurately captured by linear models [1,2,3]. These equations frequently arise in physics, engineering, economics, and biology, where exact analytical solutions are often unavailable [4,5]. In physics and engineering, nonlinear equations govern the behavior of complex systems such as fluid dynamics, structural mechanics, and electrodynamics [6]. A notable example is the motion of a projectile under air resistance, modeled by the following equation:
$$\bar{m}\,\frac{dv}{dt} = \bar{m}\,g - \bar{k}\,v^2,$$
where v represents velocity, $\bar{m}$ is the mass, g is the gravitational acceleration, and $\bar{k}$ is the drag coefficient. Similarly, in financial mathematics [7], the valuation of options under transaction costs is governed by the nonlinear Black–Scholes equation, whose nonlinearity stems precisely from those transaction costs. To illustrate, solving this nonlinear partial differential equation with a finite difference discretization leads to a collection of nonlinear algebraic equations, typically large and sparse, that must be tackled by iterative Newton-type methods. Biological and medical sciences also rely heavily on nonlinear models to describe population dynamics, such as the logistic growth model, as follows:
$$\frac{dP}{dt} = \bar{r}\,P\left(1 - \frac{P}{K}\right),$$
wherein $\bar{r}$ is the intrinsic growth rate, P is the population size, and K denotes the carrying capacity of the environment.
Let us now introduce the fundamental concepts of solving nonlinear equations using iterative methods [1], with a particular focus on multipoint methods. These methods are known for their higher computational efficiency compared to classical one-point methods. We provide a classification of iteration solvers via the information needed from current and previous iterates, and we discuss key features such as the convergence order, computational efficiency, initial approximations, and stopping criteria.

1.1. Classification of Iterative Solvers

Iterative schemes for resolving
$$f(x) = 0$$
can be classified based on the information they use [8,9]:
  • One-point methods: These methods use only the current approximation $x_k$ to calculate the subsequent approximation $x_{k+1}$. A classic example is Newton's scheme (NIM, second-order speed), expressed as follows:
    $$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}.$$
  • One-point methods with memory [8]: These methods reuse previous approximations $x_{k-1}, \ldots, x_{k-n}$ to compute $x_{k+1}$. The secant solver is a well-known instance, expressed as follows:
    $$x_{k+1} = x_k - \frac{x_k - x_{k-1}}{f(x_k) - f(x_{k-1})}\, f(x_k).$$
  • Multipoint methods without memory: These methods use the current approximation x k and additional expressions w 1 ( x k ) , w 2 ( x k ) , , w n ( x k ) to compute x k + 1 ; see [10,11] for more details.
  • Multipoint methods with memory: These methods use information from multiple previous iterates, including both approximations and additional expressions [12,13]. A minimal code sketch of the first two classes is given right after this list.
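To make the first two classes concrete, the following Wolfram Mathematica sketch implements Newton's scheme and the secant solver; the test function and starting values are illustrative choices of ours, not taken from the experiments of Section 4.

newton[f_, x0_, n_] := NestList[# - f[#]/f'[#] &, x0, n];    (* one-point: uses only x_k *)
secant[f_, x0_, x1_, n_] := Module[{xs = {x0, x1}},          (* with memory: reuses x_{k-1} *)
  Do[AppendTo[xs,
    xs[[-1]] - (xs[[-1]] - xs[[-2]])/(f[xs[[-1]]] - f[xs[[-2]]]) f[xs[[-1]]]], {n}];
  xs];
g[t_] := Cos[t] - t;    (* illustrative test function with a simple zero near 0.739 *)
newton[g, 1.0, 5]       (* quadratic convergence *)
secant[g, 0.5, 1.0, 6]  (* R-order (1 + Sqrt[5])/2 with one new f-evaluation per step *)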
Despite the widespread use of Newton-type iterative methods, traditional solvers face several well-known limitations. One-point methods, like Newton’s method, exhibit quadratic convergence but require the computation of derivatives, which can be computationally expensive or analytically intractable for complex functions. Additionally, these methods are highly sensitive to the choice of the initial approximation; poor initial guesses can lead to divergence or slow convergence. To address these issues, multipoint methods have been developed, improving convergence rates by incorporating additional function evaluations. However, these approaches often come at the cost of increased computational effort, as each iteration requires multiple function evaluations, reducing overall efficiency.
Multipoint methods without memory aim to balance accuracy and computational cost by evaluating the function at strategically chosen auxiliary points [14]. Nevertheless, their performance is still limited by the quantity of function evaluations required per cycle, which restricts their efficacy for large-scale problems. On the other hand, schemes with memory offer a promising alternative by utilizing past iterates to boost convergence without additional functional evaluations. This advantage allows them to achieve higher-order convergence while maintaining computational efficiency. However, existing memory-based approaches often lack a systematic framework for integrating historical information optimally, leading to suboptimal performance in certain cases. The present work aims to address these limitations by constructing a Newton-type iterative solver with memory that maximizes convergence efficiency while preserving the quantity of function evaluations per cycle.

1.2. One-Point Iteration Solvers

Several one-point iterative schemes for finding simple zeros have been studied in the literature [15]. Halley's method (cubic convergence) is expressed as follows [16]:
$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)} \left[ 1 - \frac{f(x_k)\, f''(x_k)}{2 f'(x_k)^2} \right]^{-1},$$
and Chebyshev's method (cubic convergence) as follows [17]:
$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)} \left[ 1 + \frac{f(x_k)\, f''(x_k)}{2 f'(x_k)^2} \right].$$

1.3. Methods for Multiple Zeros

For functions with multiple zeros, modified versions of one-point methods are introduced. For example, Schröder’s method for multiple zeros of multiplicity m is given by the following [18,19]:
$$x_{k+1} = x_k - m\, \frac{f(x_k)}{f'(x_k)}.$$

1.4. Motivation and Organization

The development of methods with memory has been a research direction in recent years [20,21], as such methods offer the potential for higher convergence rates while preserving computational efficiency. By integrating memory into a Newton-type solver, it is possible to construct a more effective iterative scheme that reduces the number of required iterations and improves accuracy. Motivated by these considerations, this study aims to construct a higher-order iterative scheme with memory that outperforms traditional solvers in terms of convergence speed and computational cost. Our approach is designed to achieve a superior R-order of convergence while maintaining the identical quantity of function evaluations as existing two-point methods. Through rigorous analysis and extensive numerical tests, we show that the proposed scheme provides a practical and efficient alternative in resolving nonlinear equations.
The remainder of this investigation is structured as follows. Section 2 lays out the mathematical formulations for two-point solvers, providing the necessary theoretical background. Section 3 presents our main contribution: a two-step, derivative-free method with memory derived from a Newton-type solver. We provide a detailed derivation of the proposed method along with a mathematical analysis of its convergence order. Section 4 validates the efficacy of the proposed scheme through various numerical tests. Finally, Section 5 concludes the study with a discussion of the findings and several directions for forthcoming research.

2. Concerning Two-Point Methods

This section provides an overview of two-point methods, which play a significant role in the computational resolution of nonlinear equations; see, e.g., [22,23]. The development of such methods aims at enhancing convergence efficiency by utilizing function evaluations at multiple points within each iteration.
Two-point methods extend the classical one-point iterative schemes by incorporating an additional evaluation point, thus improving the order of convergence. The pioneering work of Traub (1964) [8] established the systematic study of these methods, although optimal schemes emerged later. Notably, Ostrowski’s fourth-order method, developed in 1960, set a foundation for subsequent advancements.

2.1. Order of Convergence

The convergence speed r of an iterative scheme is defined as the rate at which the sequence of approximations $\{x_k\}$ tends to the root α. For an iteration function ϕ, the speed of convergence is provided by the following [8]:
$$\lim_{k \to \infty} \frac{|\phi(x_k) - \alpha|}{|x_k - \alpha|^r} = A_r,$$
where A r is the asymptotic error constant.
In [1], the authors provided a generic definition of the convergence order, known as the R-order, given as follows ($k = 0, 1, \ldots$):
$$R_r\{x_k\} = \begin{cases} \limsup_{k \to \infty} |e_k|^{1/k}, & \text{if } r = 1, \\[4pt] \limsup_{k \to \infty} |e_k|^{1/r^k}, & \text{if } r > 1, \end{cases}$$
wherein $e_k = x_k - \alpha$, and $R_r$ is the R-factor. The R-order of convergence for an iteration process at the point α is
$$R\text{-order} = \begin{cases} \infty, & \text{if } R_m = 0 \text{ for all } m \in [1, \infty), \\[4pt] \inf\{ m \in [1, \infty) : R_m = 1 \}, & \text{otherwise}. \end{cases}$$

2.2. Computational Efficacy

The computational efficacy of an iteration solver is a measure of its performance, considering both the convergence speed and the cost per iterate. The efficiency index (TEI) is defined as [8]
$$TEI = r^{1/\theta_f},$$
where $\theta_f$ is the quantity of functional evaluations per iterate. The Kung–Traub conjecture [24] states that multipoint solvers without memory, requiring $n + 1$ functional evaluations per iterate, can attain order at most $2^n$. Schemes reaching this bound are called optimal, and they are relatively rare.
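As a worked instance of this index, Newton's scheme attains $r = 2$ with $\theta_f = 2$ evaluations (f and f′), so $TEI = 2^{1/2} \approx 1.4142$, whereas an optimal fourth-order two-point scheme with $\theta_f = 3$ reaches $TEI = 4^{1/3} \approx 1.5874$; these are precisely the values compared numerically in Section 4.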

2.3. Composite Solvers

Composite multipoint schemes construct higher-order schemes by composing existing iterations. For example, let $\phi_1$ denote Newton's iteration function and let $\phi_2$ denote Halley's iteration function,
$$\phi_2(x) = x - \frac{f(x)\, f'(x)}{f'(x)^2 - \frac{1}{2}\, f(x)\, f''(x)};$$
then the composition
$$\phi_3(x) = \phi_2(\phi_1(x))$$
leads to a sixth-order iteration.
Although theoretically promising, the computational efficiency of such compositions must be carefully analyzed to justify their practical use.

2.4. Traub’s Two-Point Methods

Traub introduced several third-order two-point methods based on interpolation. A representative iteration is provided by the following:
$$x_{k+1} = x_k - \frac{f(x_k)}{f'\!\left( x_k - \frac{1}{2}\, u(x_k) \right)},$$
wherein $u(x) = f(x)/f'(x)$ and the denominator estimates the derivative at an adjusted point to improve accuracy.
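As a minimal sketch (with an illustrative cubic test problem of ours, not from the paper), Traub's iteration above reads in Mathematica:

traubStep[f_, x_] := x - f[x]/f'[x - f[x]/(2 f'[x])];   (* u(x) = f(x)/f'(x) inlined *)
NestList[traubStep[Function[t, t^3 - 8], #] &, 1.9, 4]  (* third-order convergence to 2 *)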
In the 1960s and 1970s, Traub [8] pioneered the study of multipoint schemes, systematically improving Newton-type and Steffensen-type solvers by incorporating interpolation and additional function evaluations to enhance convergence rates. His work laid the foundation for what are now known as optimal iterative methods, striking a balance between computational efficiency and convergence order. Traub’s third-order two-point methods, such as the iteration scheme given above, exemplify this approach by refining derivative approximations through strategically chosen auxiliary points. These advancements influenced subsequent research on iterative schemes with/without memory, inspiring the development of more efficient solvers that exploit past iterates to further improve convergence without additional function evaluations. Today, Traub’s framework remains fundamental in the design of high-order root-finding algorithms, particularly in contexts requiring rapid and accurate numerical solutions.

2.5. Ostrowski’s Fourth-Order Method

Ostrowski's method (OSM) was one of the first two-point schemes achieving fourth-order convergence while requiring only three functional evaluations per iteration; see, for instance, [25]. It is given as follows:
$$y_k = x_k - \frac{f(x_k)}{f'(x_k)}, \qquad x_{k+1} = y_k - \frac{f(y_k)}{f'(x_k)} \cdot \frac{f(x_k)}{f(x_k) - 2 f(y_k)}.$$
This solver serves as a basis for many generalizations, including King’s family of methods [26].
The method excels when applied to functions with well-behaved derivatives, where the denominator in (11) remains stable, ensuring numerical robustness. In cases where the function exhibits strong smoothness properties, OSM provides rapid convergence with fewer iterations compared to lower-order methods. However, its performance can degrade when applied to functions with multiple roots or near singularities, where derivative approximations may lead to numerical instability.
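For concreteness, a hedged Mathematica sketch of one OSM cycle follows; the cubic test function, the working precision, and the starting point are our own illustrative choices.

osmStep[f_, x_] := Module[{y = x - f[x]/f'[x]},          (* Newton substep *)
  y - (f[y]/f'[x]) f[x]/(f[x] - 2 f[y])];                (* Ostrowski correction *)
NestList[osmStep[Function[t, t^3 - 8], #] &, 1.9`50, 4]  (* fourth-order convergence to 2 *)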

2.6. Derivative-Free Approximations and Generalizations

Several approaches avoid explicit second derivatives by using divided differences or numerical integration techniques [27]. A notable example is the Newton-secant two-point solver expressed as follows:
$$x_{k+1} = x_k - u(x_k)\, \frac{f(x_k)}{f(x_k) - f(x_k - u(x_k))},$$
which approximates $f'(x_k)$ by the secant-based divided difference built from $x_k$ and $x_k - u(x_k)$.
Optimal two-point methods achieve the best trade-off between convergence rate and computational cost, with Ostrowski’s method and its variants reaching a TEI of approximately 1.587.
Two-point methods offer advantages in solving nonlinear equations, particularly when derivative evaluations are costly or unreliable. Their development continues to be an active area of research, with modern extensions incorporating adaptive strategies and hybrid approaches to further enhance robustness and performance.
Before ending this section, it is worth mentioning that the author in [28] proposed an effective one-step Steffensen-like solver with memory, achieving an R-order of convergence equal to $\frac{1}{2}(3 + \sqrt{17}) \approx 3.56$, which is formulated as
$$w_k = x_k + \beta_k f(x_k), \quad \beta_k = -\frac{1}{N_2'(x_k)}, \quad p_k = -\frac{N_3''(w_k)}{2 N_3'(w_k)}, \quad x_{k+1} = x_k - \frac{f(x_k)}{f[x_k, w_k] + p_k f(w_k)},$$
where $N_2(\cdot)$ and $N_3(\cdot)$ are Newton interpolatory polynomials of appropriate degrees. Method (13) provides a family of methods with two parameters, which can be adapted so as to enhance the convergence order without any additional function evaluations. This motivates us to investigate a bi-parametric family of methods in the next section.

3. An Enhanced Iterative Scheme with Memory

Let us initially consider the following structure:
$$z_k = x_k - \frac{f(x_k)}{f'(x_k)}, \quad k \geq 0, \qquad x_{k+1} = z_k - \frac{f(z_k)}{f'(x_k)} \cdot \frac{f(x_k) - \frac{1}{2} f(z_k)}{f(x_k) - \frac{5}{2} f(z_k)}.$$
Now we propose a Newton-type iteration method by modifying Newton's scheme in the first substep and introducing a free parameter a. This modification preserves the convergence order while injecting a into the error equation, which later allows the solver's speed to be enhanced. So first we consider
$$z_k = x_k - \frac{f(x_k)}{f'(x_k)} \left[ 1 + \frac{a}{2} \left( \frac{f(x_k)}{f'(x_k)} \right)^2 \right], \quad k \geq 0, \qquad x_{k+1} = z_k - \frac{f(z_k)}{f'(x_k)} \cdot \frac{f(x_k) - \frac{1}{2} f(z_k)}{f(x_k) - \frac{5}{2} f(z_k)},$$
and then we replace the first derivative in this Newton-type method (15) by a divided difference, as follows:
$$z_k = x_k - \frac{f(x_k)}{Appr} \left[ 1 + \frac{a}{2} \left( \frac{f(x_k)}{Appr} \right)^2 \right], \quad k \geq 0, \qquad x_{k+1} = z_k - \frac{f(z_k)}{Appr} \cdot \frac{f(x_k) - \frac{1}{2} f(z_k)}{f(x_k) - \frac{5}{2} f(z_k)},$$
where
$$Appr = \frac{f(x_k) - f(h_k)}{x_k - h_k} = f[x_k, h_k], \qquad h_k = x_k + \beta f(x_k)^3, \quad \beta \in \mathbb{R} \setminus \{0\}.$$
On the one hand, this iteration scheme is based on the Newton-type solver (15); on the other hand, since the derivative is replaced by a divided difference, (16) is in fact of Steffensen type and could also be called a Steffensen-type method. Writing Taylor series up to appropriate orders and carrying out several simplifications, it is possible to derive the theorem below.
Theorem 1.
Suppose that $\alpha \in D$ is a simple zero of the sufficiently smooth function $f : D \subseteq \mathbb{C} \to \mathbb{C}$. As long as the initial guess $x_0$ is selected close enough to α, then (16) converges to α with fourth-order accuracy.
Proof. 
The proof is straightforward (and similar to [27,29]). Let α be the simple root of the function f. Assuming that f is adequately smooth, we expand $f(x_k)$ and the divided difference Appr in Taylor series about α, leading to the following expressions:
$$f(x_k) = f'(\alpha) \left[ err_k + l_2\, err_k^2 + l_3\, err_k^3 + l_4\, err_k^4 + O(err_k^5) \right],$$
and
$$Appr = f'(\alpha) \left[ 1 + 2 l_2\, err_k + 3 l_3\, err_k^2 + \left( 4 l_4 + l_2\, \beta f'(\alpha)^3 \right) err_k^3 + O(err_k^4) \right].$$
In this context, the error term is represented as $err_k = x_k - \alpha$, while the coefficients $l_k$ are defined as
$$l_k = \frac{1}{k!}\, \frac{f^{(k)}(\alpha)}{f'(\alpha)}, \quad k \geq 2.$$
Inserting Equations (18) and (19) into (16), we arrive at the following:
$$z_k = \alpha + l_2\, err_k^2 + \left( -\frac{a}{2} - 2 l_2^2 + 2 l_3 \right) err_k^3 + \left[ l_2 \left( \frac{3a}{2} + \beta f'(\alpha)^3 - 7 l_3 \right) + 4 l_2^3 + 3 l_4 \right] err_k^4 + O(err_k^5).$$
Utilizing Equations (18)–(20), the corresponding error equation associated with (16) is given by the following:
$$err_{k+1} = l_2 \left( l_3 - a \right) err_k^4 + O(err_k^5).$$
This concludes the proof by establishing the fourth order of convergence. □
Our objective is to annihilate the asymptotic error constant, expressed as $\eta = l_2 (l_3 - a)$. To achieve this, the R-order can be enhanced by implementing the following choice:
$$a = l_3.$$
Since the exact location of the root is unknown, the direct application of relation (22) is not feasible in its precise form. Therefore, an iterative approximation must be employed. This yields a memory-enhanced variant of Newton's scheme, constructed using the following:
$$a_k = \bar{l}_3,$$
where $\bar{l}_3 \approx l_3$. Consider the case where $N_3(t)$ represents the Newton interpolatory polynomial of degree three, constructed from the four available approximations of the root $x_k, x_{k-1}, z_{k-1}, h_{k-1}$ at the conclusion of every cycle. In this context, we introduce a novel method with memory, as follows (PM1):
$$\begin{aligned} z_k &= x_k - \frac{f(x_k)}{Appr} \left[ 1 + \frac{a_k}{2} \left( \frac{f(x_k)}{Appr} \right)^2 \right], \quad k \geq 0, \\ x_{k+1} &= z_k - \frac{f(z_k)}{Appr} \cdot \frac{f(x_k) - \frac{1}{2} f(z_k)}{f(x_k) - \frac{5}{2} f(z_k)}, \\ a_k &= \frac{1}{6}\, \frac{N_3'''(x_k)}{N_3'(x_k)}, \quad k \geq 1. \end{aligned}$$
It is important to note that the interpolating polynomial can be expressed in the following form in the Wolfram Mathematica environment [30]:
data = {{x, fx}, {W, fW}, {Y, fY}, {X, fX}};   (* nodes x_{k-1}, h_{k-1}, z_{k-1}, x_k with their f-values *)
L[t_] := InterpolatingPolynomial[data, t] // Simplify;   (* N_3(t) through the four nodes *)
a1[k] = (1/6) (L'''[X]/L'[X]);   (* self-accelerating parameter a_k evaluated at x_k *)
The speed enhancement for (24) relies on the introduction of a free nonzero parameter, which is varied at each iteration. This parameter is computed via information from both the current and previous iterations. Therefore, the method can be classified as one with memory, as per Traub’s classification [8].
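To complement the interpolation snippet above, the following sketch assembles one full PM1 cycle (24); the helper names (pm1Step, accel), the test function, and the working precision are our own illustrative choices and not the authors' code.

pm1Step[f_, x_, a_, beta_] := Module[{h, appr, u, z},
  h = x + beta f[x]^3;                       (* auxiliary node h_k *)
  appr = (f[x] - f[h])/(x - h);              (* Appr = f[x_k, h_k] *)
  u = f[x]/appr;
  z = x - u (1 + (a/2) u^2);                 (* modified first substep *)
  {z - (f[z]/appr) (f[x] - f[z]/2)/(f[x] - 5 f[z]/2), z, h}];
accel[f_, nodes_] := Module[{p},             (* a_k = (1/6) N3'''(x_k)/N3'(x_k) *)
  p[t_] = InterpolatingPolynomial[{#, f[#]} & /@ nodes, t];
  (1/6) p'''[nodes[[1]]]/p'[nodes[[1]]]];
fun = Function[t, t^3 - 8];                  (* illustrative test problem *)
{x1, z0, h0} = pm1Step[fun, 1.9`200, 10^-4, 10^-4];
a1 = accel[fun, {x1, 1.9`200, z0, h0}];      (* memory: nodes x_k, x_{k-1}, z_{k-1}, h_{k-1} *)
{x2, z1, h1} = pm1Step[fun, x1, a1, 10^-4];

In a production implementation the four f-values fed to accel would of course be reused from the previous cycle rather than recomputed, so that the count of functional evaluations per cycle stays at three.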
We now proceed to discuss the analytical aspects underlying our presented scheme, defined in (24).
Theorem 2.
Consider a function $f(x)$ that is adequately smooth in the vicinity of its simple zero, denoted by α. If an initial estimate $x_0$ is chosen such that it lies adequately close to α, then the iterative scheme (24) incorporating memory exhibits an R-order of convergence of no less than $2 + \sqrt{5} \approx 4.23607$.
Proof. 
Suppose $\{x_k\}$ denotes the sequence of approximated roots obtained via (24). The error equations for (24), with the self-accelerator $a = a_k$, are given by the following:
$$\widehat{err}_k = h_k - \alpha \sim l_{k,1}\, err_k,$$
$$\widetilde{err}_k = z_k - \alpha \sim l_{k,2}\, err_k^2,$$
$$err_{k+1} = x_{k+1} - \alpha \sim l_{k,4}\, err_k^4.$$
A key distinction between our approach and previous methodologies, such as the one presented in [28], lies in the treatment of the acceleration parameter. In prior works, the acceleration was primarily implemented through modifications to the parameter β. In contrast, our approach retains β as a fixed nonzero parameter while incorporating a novel memorization mechanism applied to the new parameter a at the conclusion of the first substep. This modification introduces an alternative acceleration strategy that differs fundamentally from the conventional adjustments made solely to β. We derive the following result:
$$l_2 \left( l_3 - a_k \right) \sim err_{k-1}.$$
By substituting the expression for $l_2 (l_3 - a_k)$ from (28) into (27), the following formulation can be obtained:
$$err_{k+1} \sim l_{k,4}\, err_{k-1}\, err_k^4.$$
It is possible to notice that, in general, the error equation takes the form $err_{k+1} \sim A\, err_k^r$, where the parameters A and r must be determined. Consequently, it follows that $err_k \sim A\, err_{k-1}^r$, leading to the subsequent derivation:
$$err_{k-1} \sim A^{-1/r}\, err_k^{1/r}.$$
From this, it is straightforward to obtain
$$err_k^r \sim A^{-1/r}\, C\, err_k^{4 + 1/r},$$
where C is a constant. Equating the exponents ultimately leads to the equation
$$r = 4 + \frac{1}{r},$$
that is, $r^2 - 4r - 1 = 0$, which admits the two solutions $\{2 - \sqrt{5},\; 2 + \sqrt{5}\}$. Evidently, the value $r = 2 + \sqrt{5} \approx 4.23607$ is the appropriate choice, establishing the R-order of convergence for the memory-based method (24). This concludes the proof. □
The enhancement of the R-order is attained without introducing any additional functional evaluations, ensuring that the newly developed memory-based method maintains a high TEI. This approach extends the existing scheme (16), effectively increasing the R-order from 4 to $2 + \sqrt{5}$. The procedure of the proposed solver is depicted in the flowchart in Figure 1.
The proposed acceleration scheme (24) is novel and effective, offering an improvement in the speed, while requiring no further functional evaluations. This advantage sets it apart from optimal two-step methods that lack memory.
Furthermore, it is worth mentioning that another formulation of the presented memory-based scheme can be derived by employing a backward finite difference approximation at the initial stage of the first substep, coupled with a slight modification in the acceleration parameters. In other words, we introduce an alternative memory-based approach (PM2) that preserves the R-order $r = 2 + \sqrt{5}$ and is formulated as follows (only $h_k$ differs, while Appr is unchanged compared to PM1):
$$\begin{aligned} h_k &= x_k - \beta f(x_k)^3, \\ z_k &= x_k - \frac{f(x_k)}{Appr} \left[ 1 + \frac{a_k}{2} \left( \frac{f(x_k)}{Appr} \right)^2 \right], \quad k \geq 0, \\ x_{k+1} &= z_k - \frac{f(z_k)}{Appr} \cdot \frac{f(x_k) - \frac{1}{2} f(z_k)}{f(x_k) - \frac{5}{2} f(z_k)}, \\ a_k &= \frac{1}{6}\, \frac{N_3'''(x_k)}{N_3'(x_k)}, \quad k \geq 1. \end{aligned}$$
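In code, PM2 amounts to a one-line change in the pm1Step sketch shown earlier (assuming that sketch):

(* PM2: inside pm1Step, replace the forward node by the backward one: *)
h = x - beta f[x]^3;   (* all other substeps and Appr remain unchanged *)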
Theorem 3.
Consider a function f ( x ) that is adequately smooth in the vicinity of its simple zero, denoted by α. If a starting estimate x 0 is chosen such that it lies close enough to α, then the solver (33) incorporating memory exhibits an R-order of convergence of no less than 4.23607.
Proof 
Since the proof follows the same reasoning as Theorem 2, it is omitted for brevity. □
From one perspective, since β is a parameter that can assume either a positive or negative sign, the order of convergence and the conditions required to achieve it remain identical, as established in Theorems 2 and 3. However, in practical computations, the choice of sign influences the magnitude of posterior error as well as round-off errors, which, in turn, may lead to variations in the final numerical results. These differences will be analyzed in Section 4.
The extension of the proposed techniques to the broader context of solving nonlinear systems of equations $F(x) = 0$ may be systematically achieved by incorporating the formal definition of divided difference operators (DDOs). Specifically, the first-order DDO of the vector-valued function F, evaluated at the pair of points x and y, can be formulated in a component-wise fashion, as follows:
$$[x, y; F]_{i,j} = \frac{F_i(x_1, \ldots, x_j, y_{j+1}, \ldots, y_n) - F_i(x_1, \ldots, x_{j-1}, y_j, \ldots, y_n)}{x_j - y_j}, \quad 1 \leq i, j \leq n.$$
The expression under consideration represents a bounded linear operator that fulfills the identity $[y, x; F](y - x) = F(y) - F(x)$. According to a result established by Potra [31], there is a necessary and sufficient criterion for characterizing the divided difference operator through the framework of the Riemann integral; in particular, this holds when the function F adheres to the Lipschitz-type continuity condition
$$\left\| [x, y; F] - [u, v; F] \right\| \leq H \left( \|x - u\| + \|y - v\| \right).$$
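The component-wise definition above translates directly into code. The following Mathematica sketch (function names ours) builds the DDO matrix and checks the stated identity on an illustrative two-dimensional F:

ddo[F_, a_, b_] := Module[{n = Length[a]},   (* [a, b; F]_{i,j} per the definition above *)
  Table[(F[Join[a[[1 ;; j]], b[[j + 1 ;;]]]][[i]] -
         F[Join[a[[1 ;; j - 1]], b[[j ;;]]]][[i]])/(a[[j]] - b[[j]]),
        {i, n}, {j, n}]];
Fv = Function[v, {v[[1]]^2 + v[[2]], v[[1]] v[[2]] - 1}];  (* illustrative F *)
yv = {1.5, 2.5}; xv = {1., 2.};
Chop[ddo[Fv, yv, xv].(yv - xv) - (Fv[yv] - Fv[xv])]        (* {0, 0}: identity holds *)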
It is worth mentioning that the work in [32] proposed a symmetric operator for the second-order DDO, albeit at a higher computational expense compared to the first-order DDO. Accordingly, it must be stated that the extension of the proposed solvers for a nonlinear system is inefficient due to the load of computing DDO matrices. Because of this, we restrict our investigation to the scalar case in which the proposed methods are efficient.

4. Computational Aspects

The computational efficacy of various iteration solvers, both with and without memory, can be systematically assessed by employing the definition of the TEI given in Section 2. Based on this criterion, we obtain
$$TEI(3) \approx 1.4142 < TEI(4) = TEI(5) \approx 1.4422 < TEI(11) = TEI(16) \approx 1.5874 < TEI(24) = TEI(33) \approx 1.6180,$$
where the numbers in parentheses refer to the corresponding iteration schemes.
We perform an evaluation and comparative analysis of different iterative schemes for determining the simple roots of several nonlinear test functions. These experiments are implemented in the Mathematica programming environment [33], utilizing multiple-precision arithmetic with 5000-digit precision to accurately illustrate the high R-order exhibited by both PM1 and PM2. The comparison is conducted among methods that require the same number of functional evaluations per iterate.
In this section, the computational order of convergence is determined utilizing the following relation [29,34,35]:
$$\zeta = \frac{\ln \left| f(x_k)/f(x_{k-1}) \right|}{\ln \left| f(x_{k-1})/f(x_{k-2}) \right|}.$$
The obtained ζ value provides a reliable estimation of the theoretical order of convergence, provided that the iterative method does not exhibit pathological behavior, such as slow convergence in the initial iterations or oscillatory behavior of approximations.
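Under the assumption that the last three iterates are available, ζ can be computed with a small helper such as the following:

zeta[f_, xs_List] /; Length[xs] >= 3 :=
  Log[Abs[f[xs[[-1]]]/f[xs[[-2]]]]]/Log[Abs[f[xs[[-2]]]/f[xs[[-3]]]]];
(* e.g., zeta[fun, {x0, x1, x2, x3}] estimates the order from the last three residuals *)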
Finally, the numerical comparisons for the selected experiments are presented. These experiments were conducted using the stopping criterion $|f(x_k)| < \delta$ with $\delta = 10^{-500}$. Additionally, the parameter values $\beta = a_0 = 0.0001$ were used when necessary to fine-tune the iterative process, especially in cases where small initial approximations were employed.
The selection of starting approximations is crucial for the convergence of iteration solvers. A non-iterative scheme based on numerical integration was introduced in [36] for finding initial approximations. This method uses sigmoid-like functions such as tanh and arctan to approximate the root without requiring derivative information.
Common criteria include checking the difference between successive approximations, as follows:
$$|x_k - x_{k-1}| < \tau,$$
or evaluating the function value at the current approximation:
$$|f(x_k)| < \delta.$$
These criteria ensure that the iterative process terminates when the desired accuracy is achieved or when further iterations are unlikely to improve the result.
Example 1.
([37]). We examine the nonlinear test function on the domain $D = [1.5, 2.5]$ below:
$$f_1(x) = (x^3 - 8)(x^2 - \tan(x)),$$
utilizing an initial estimate of $x_0 = 1.9$, with the sought zero $\alpha = 2$. The associated observations are provided in Table 1.
It is crucial to emphasize that when applying any root-finding algorithm exhibiting local convergence, the careful selection of starting approximations is of paramount significance. If the starting values lie sufficiently near to the desired zeros, the anticipated (analytical) convergence order is observed in practice. Conversely, when the starting approximations are distant from the actual roots, iterative methods tend to exhibit slower convergence, particularly in the early stages of the iterative process.
Example 2.
Next, we analyze the performance of different iterative methods in determining the complex root of the nonlinear problem
$$f_2(x) = \sin(x) + \frac{1}{x} + x + (-1 + 2I),$$
by initializing the procedure with $x_0 = 1.3 - 2.3I$, where the exact root is given by $\alpha \approx 0.2886 - 1.2422I$. The results for this scenario are presented in Table 2.
A close inspection of Table 1 and Table 2 clearly demonstrates that the root approximations exhibit remarkable accuracy when the proposed memory-based method is employed. Furthermore, the values from the fourth and fifth iterations, as reported in Table 1 and Table 2, are included solely to illustrate the convergence speed of the evaluated solvers. In most practical applications, such additional iterations are generally unnecessary.
The numerical observations presented in Table 1 and Table 2 illustrate the convergence behavior of various iterative methods applied to solve nonlinear scalar equations. In Example 1, the methods were tested on the function $f_1(x)$ within the domain $D = [1.5, 2.5]$, with the starting guess $x_0 = 1.9$. The obtained results demonstrate that the OSM, PM1, and PM2 exhibit significantly faster convergence rates compared to the NIM. Specifically, OSM, PM1, and PM2 attain highly accurate solutions within just five iterations, with residual function values decreasing to approximately $10^{-856}$, $10^{-907}$, and $10^{-983}$, respectively. Additionally, the convergence factor ζ confirms the expected order of convergence, where NIM converges quadratically (ζ = 2.00), while OSM, PM1, and PM2 attain higher convergence rates. The higher accuracy of PM1 and PM2 compared to OSM demonstrates the effectiveness of the employed approximations in accelerating convergence.
For the complex-valued nonlinear equation $f_2(x)$ analyzed in Example 2, a similar pattern emerges. The iterative methods were initialized at $x_0 = 1.3 - 2.3I$, targeting the exact complex root $\alpha \approx 0.2886 - 1.2422I$. The numerical findings in Table 2 further confirm the superiority of higher-order methods over NIM. OSM, PM1, and PM2 exhibit remarkably small residual function values in just five iterations, dropping to approximately $10^{-300}$, $10^{-304}$, and $10^{-315}$, respectively. Again, the orders of convergence, $\zeta \approx 4.00$ for OSM and $\zeta \approx 4.23$ for PM1 and PM2, validate the high efficiency of these methods. Notably, while NIM requires more iterations to reach a reasonable accuracy, the new approaches maintain their superior convergence behavior, reinforcing their practical utility in solving nonlinear problems with complex roots. For Example 2, we provide Figure 2, which shows the convergence history for another initial approximation through the comparison of different methods.
A small value of the parameter β was selected for the numerical tests, since smaller values tend to produce larger basins of attraction. However, based on the structure of both PM1 and PM2, memorization is applied only to $a_k$; there is no adaptation or acceleration based on β. This is because in $h_k = x_k \pm \beta f(x_k)^3$ the function value $f(x_k)$ appears with the power three.
Moreover, the developed memory-based approaches (24) and (33) have been applied to a range of test problems, consistently yielding results that align with the aforementioned observations. Consequently, we can assert that the theoretical findings are well supported by numerical tests, thereby affirming the robustness and high computational efficiency of the proposed method.

5. Conclusions

Classical iteration solvers, such as Newton’s scheme, are widely employed due to their quadratic convergence properties; however, their efficiency can be further enhanced by incorporating memory, which exploits previously computed information to accelerate convergence without increasing the number of function evaluations. In this article, we have introduced a scheme with memory. The construction of this method leverages the existing framework of Newton’s method while incorporating memory to improve the convergence properties. Theoretical analysis confirms that the presented scheme achieves a higher R-order of convergence compared to the original two-point solver, without increasing the computational cost in terms of functional evaluations. Numerical experiments further validate the efficiency and robustness of the method, demonstrating its superiority in terms of convergence speed and accuracy.
The success of the proposed solver with memory provides some avenues for future works. Firstly, the extension of this approach to other families of iteration solvers, such as Halley’s or Chebyshev’s schemes, could be explored to further enhance their convergence properties. Secondly, the application of the method with memory to systems of nonlinear equations is a promising direction. Additionally, the development of hybrid methods that combine memory-based techniques with other acceleration strategies, such as weight functions or parameter optimization, could lead to even more efficient algorithms.

Author Contributions

All authors contributed equally. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Scientific Research Project of Jilin Provincial Department of Education (JJKH20251638KJ), the Open Fund Project of Marine Ecological Restoration and Smart Ocean Engineering Research Center of Hebei Province (HBMESO2321), the Technical Service Project of Eighth Geological Brigade of Hebei Bureau of Geology and Mineral Resources Exploration (KJ2022-021), the Technical Service Project of Hebei Baodi Construction Engineering Co., Ltd. (KJ2024-012) and the Sichuan Science and Technology Program (2025ZNSFSC0073).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

It is important to highlight that data sharing does not apply to this manuscript, as no novel datasets were produced during the course of this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ortega, J.M.; Rheinboldt, W.C. Iterative Solutions of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  2. Soheili, A.R.; Amini, M.; Soleymani, F. A family of Chaplygin–type solvers for Itô stochastic differential equations. Appl. Math. Comput. 2019, 340, 296–304. [Google Scholar] [CrossRef]
  3. Abdullah, S.; Choubey, N.; Dara, S. A new three-step optimal without memory iterative scheme for solving non-linear equations with basins of attraction. Comput. Methods Differ. Equ. 2024, in press. [Google Scholar] [CrossRef]
  4. Algehyne, E.A.; Ebaid, A.; El-Zahar, E.R.; Aldhabani, M.S.; Areshi, M.; Al-Jeaid, H.K. Projectile Motion in Special Theory of Relativity: Re-Investigation and New Dynamical Properties in Vacuum. Mathematics 2023, 11, 3890. [Google Scholar] [CrossRef]
  5. Shil, S.; Kumar Nashine, H.; Soleymani, F. On an inversion-free algorithm for the nonlinear matrix problem Xα+A∗X−βA+B∗X−γB=I. Int. J. Comput. Math. 2022, 99, 2555–2567. [Google Scholar] [CrossRef]
  6. Ahmad, F.; ur Réhman, S.; Zaka Ullah, M.; Moaiteq Aljahdali, H.; Ahmad, S.; Alshomrani, A.S.; Carrasco, J.A.; Ahmad, S.; Sivasankaran, S. Frozen Jacobian multistep iterative method for solving nonlinear IVPs and BVPs. Complexity 2017, 2017, 9407656. [Google Scholar] [CrossRef]
  7. Al-Zhour, Z.; Barfeie, M.; Soleymani, F.; Tohidi, E. A computational method to price with transaction costs under the nonlinear Black-Scholes model. Chaos Solitons Fractals 2019, 127, 291–301. [Google Scholar] [CrossRef]
  8. Traub, J. Iterative Methods for the Solution of Equations, 2nd ed.; Chelsea Publishing Company: New York, NY, USA, 1982. [Google Scholar]
  9. McNamee, J.M.; Pan, V.Y. Numerical Methods for Roots of Polynomials–Part II; Elsevier: Amsterdam, The Netherlands, 2013. [Google Scholar]
  10. Bahi, M.; Beggas, M.; Nesba, N.; Imtiaz, A. A numerical solution of parabolic quasi-variational inequality nonlinear using Newton-multigrid method. Iran. J. Numer. Anal. Optim. 2024, 14, 991–1015. [Google Scholar]
  11. Torkashvand, V.; Kazemi, M.; Azimi, M. Efficient family of three-step with-memory methods and their dynamics. Comput. Methods Differ. Equ. 2024, 12, 599–609. [Google Scholar]
  12. Wang, X.; Tao, Y. A new Newton method with memory for solving nonlinear equations. Mathematics 2020, 8, 108. [Google Scholar] [CrossRef]
  13. Torkashvand, V.; Lotfi, T.; Fariborzi Araghi, M.A. A new family of adaptive methods with memory for solving nonlinear equations. Math. Sci. 2019, 13, 1–20. [Google Scholar] [CrossRef]
  14. Ma, X.; Nashine, H.K.; Shil, S.; Soleymani, F. Exploiting higher computational efficiency index for computing outer generalized inverses. Appl. Numer. Math. 2022, 175, 18–28. [Google Scholar] [CrossRef]
  15. Kyncheva, V.K.; Yotov, V.V.; Ivanov, S.I. Convergence of Newton, Halley and Chebyshev iterative methods as methods for simultaneous determination of multiple polynomial zeros. Appl. Numer. Math. 2017, 112, 146–154. [Google Scholar] [CrossRef]
  16. Halley, E. A new, exact, and easy method of finding the roots of any equations generally, and that without any previous reduction. Philos. Trans. R. Soc. 1694, 18, 136–148. (In Latin) [Google Scholar] [CrossRef]
  17. Chebyshev, P. Complete Works of P.L. Chebishev; USSR Academy of Sciences: Moscow, Russia, 1973; pp. 7–25. (In Russian) [Google Scholar]
  18. Ivanov, S.I. A general approach to the study of the convergence of Picard iteration with an application to Halley’s method for multiple zeros of analytic functions. J. Math. Anal. Appl. 2022, 513, 126238. [Google Scholar] [CrossRef]
  19. Ivanov, S.I. Unified Convergence Analysis of Chebyshev-Halley Methods for Multiple Polynomial Zeros. Mathematics 2022, 10, 135. [Google Scholar] [CrossRef]
  20. Cordero, A.; Torregrosa, J.R. Low-complexity root-finding iteration functions with no derivatives of any order of convergence. J. Comput. Appl. Math. 2014, 275, 502–515. [Google Scholar] [CrossRef]
  21. Liu, C.-S.; Chang, C.-W. Updating to optimal parametric values by memory-dependent methods: Iterative schemes of fractional type for solving nonlinear equations. Mathematics 2024, 12, 1032. [Google Scholar] [CrossRef]
  22. Abdullah, S.; Choubey, N.; Dara, S. An efficient two-point iterative method with memory for solving non-linear equations and its dynamics. J. Appl. Math. Comput. 2023, 70, 285–315. [Google Scholar] [CrossRef]
  23. Džunić, J.; Petković, M.S. On generalized biparametric multipoint root finding methods with memory. J. Comput. Appl. Math. 2014, 255, 362–375. [Google Scholar] [CrossRef]
  24. Kung, H.T.; Traub, J.F. Optimal order of one-point and multi-point iteration. J. ACM 1974, 21, 643–651. [Google Scholar] [CrossRef]
  25. Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1966. [Google Scholar]
  26. King, R. A family of fourth order methods for nonlinear equations. SIAM J. Numer. Anal. 1973, 10, 876–879. [Google Scholar] [CrossRef]
  27. Kiyoumarsi, F. On the construction of fast Steffensen–type iterative methods for nonlinear equations. Int. J. Comput. Meth. 2018, 15, 1850002. [Google Scholar] [CrossRef]
  28. Džunić, J. On efficient two-parameter methods for solving nonlinear equations. Numer. Algor. 2013, 63, 549–569. [Google Scholar] [CrossRef]
  29. Cordero, A.; Lotfi, T.; Bakhtiari, P.; Torregrosa, J.R. An efficient two-parametric family with memory for nonlinear equations. Numer. Algor. 2015, 68, 323–335. [Google Scholar] [CrossRef]
  30. Kumar, N.; Cordero, A.; Jaiswal, J.P.; Torregrosa, J.R. An efficient extension of three-step optimal iterative scheme into with memory and its stability. J. Anal. 2025, 33, 267–290. [Google Scholar] [CrossRef]
  31. Potra, F.-A. A characterisation of the divided differences of an operator which can be represented by Riemann integrals. Revu. Anal. Numér. Théor. Approx. 1980, 2, 251–253. [Google Scholar]
  32. Grau-Sánchez, M.; Noguera, M.; Amat, S. On the approximation of derivatives using divided difference operators preserving the local convergence order of iterative methods. J. Comput. Appl. Math. 2013, 237, 363–372. [Google Scholar] [CrossRef]
  33. Wagon, S. Mathematica in Action, 3rd ed.; Springer: New York, NY, USA, 2010. [Google Scholar]
  34. Wang, X.; Zhang, T. A new family of Newton-type iterative methods with and without memory for solving nonlinear equations. Calcolo 2014, 51, 1–15. [Google Scholar] [CrossRef]
  35. Kumar Mittal, S.; Panday, S.; Jäntschi, L. Enhanced ninth-order memory-based iterative technique for efficiently solving nonlinear equations. Mathematics 2024, 12, 3490. [Google Scholar] [CrossRef]
  36. Yun, B.I. A non-iterative method for solving non-linear equations. Appl. Math. Comput. 2008, 198, 691–699. [Google Scholar] [CrossRef]
  37. Sharifi, M.; Karimi Vanani, S.; Khaksar Haghani, F.; Arab, M.; Shateyi, S. On a derivative-free variant of King’s family with memory. Sci. World J. 2014, 2014, 514075. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Flowchart of the proposed memory-based solver.
Figure 2. Convergence histories for different methods under $x_0 = 1 - 2I$.
Table 1. Detailed comparative analysis of the results obtained for Example 1.

Solvers | $|f(x_1)|$ | $|f(x_2)|$ | $|f(x_3)|$ | $|f(x_4)|$ | $|f(x_5)|$ | ζ
NIM | 1.20 | 2.22 × 10^{-2} | 7.48 × 10^{-6} | 8.46 × 10^{-13} | 1.08 × 10^{-26} | 2.00
OSM | 2.64 × 10^{-2} | 2.39 × 10^{-12} | 1.60 × 10^{-52} | 3.22 × 10^{-213} | 5.26 × 10^{-856} | 4.00
PM1 | 5.49 × 10^{-2} | 4.22 × 10^{-11} | 8.76 × 10^{-50} | 1.25 × 10^{-213} | 1.09 × 10^{-907} | 4.23
PM2 | 3.33 × 10^{-2} | 4.28 × 10^{-12} | 5.66 × 10^{-54} | 2.21 × 10^{-231} | 6.85 × 10^{-983} | 4.23
Table 2. Detailed comparative analysis of the results obtained for Example 2.

Solvers | $|f(x_1)|$ | $|f(x_2)|$ | $|f(x_3)|$ | $|f(x_4)|$ | $|f(x_5)|$ | ζ
NIM | 2.13 | 0.93 | 0.04 | 1.20 × 10^{-4} | 8.93 × 10^{-10} | 2.00
OSM | 0.76 | 1.98 × 10^{-4} | 1.71 × 10^{-18} | 9.53 × 10^{-75} | 9.12 × 10^{-300} | 4.00
PM1 | 1.03 | 7.86 × 10^{-4} | 8.13 × 10^{-17} | 9.95 × 10^{-72} | 2.30 × 10^{-304} | 4.23
PM2 | 0.95 | 5.63 × 10^{-4} | 2.02 × 10^{-17} | 2.72 × 10^{-74} | 3.23 × 10^{-315} | 4.23
