1. Introduction
The resolution of nonlinear equations of the form f(x) = 0 represents a fundamental problem in numerical analysis, with broad applications in science and engineering [1,2]. Unlike linear equations, nonlinear equations often lack closed-form analytical solutions, so iterative methods are needed to approximate their roots. Iterative schemes are numerical techniques that generate successive approximations to the roots of nonlinear equations: they begin with an initial guess and refine it through repeated application of a fixed algorithm, ideally converging to a solution within a desired tolerance.
In recent decades, many iterative methods have been designed for this purpose, and each technique exhibits distinct convergence properties, stability criteria, and computational requirements. However, most of the proposed algorithms do not work as expected when the solution to be approximated has a multiplicity greater than one.
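Newton's behavior at a multiple root illustrates the problem. A minimal sketch (the test function f(x) = (x − 1)² is our own choice, not one of the paper's examples): at a root of multiplicity m, Newton's method degrades from quadratic to linear convergence, with error ratio (m − 1)/m.

```python
# Newton's method applied to f(x) = (x - 1)^2, a root of multiplicity m = 2.
# At a multiple root Newton converges only linearly, with ratio (m - 1)/m = 0.5.

def newton(f, fp, x0, steps):
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x - f(x) / fp(x))
    return xs

f = lambda x: (x - 1.0) ** 2
fp = lambda x: 2.0 * (x - 1.0)

xs = newton(f, fp, 2.0, 8)
errors = [abs(x - 1.0) for x in xs]
ratios = [e1 / e0 for e0, e1 in zip(errors, errors[1:])]
print(ratios)  # every ratio is 0.5: linear convergence, not quadratic
```

Here the error is exactly halved at each step, so thousands of iterations would be needed for high accuracy, which motivates multiplicity-aware schemes.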
A common drawback of iterative methods for nonlinear equations with multiple solutions is that they require a priori knowledge of the multiplicity m of the solution. The literature contains classical schemes that do not require such knowledge. The first known example is Schröder's method [3], designed by applying Newton's method to μ(x) = f(x)/f'(x), yielding

x_{k+1} = x_k − f(x_k) f'(x_k) / (f'(x_k)² − f(x_k) f''(x_k)), k ≥ 0.

Schröder's scheme retains the quadratic convergence of Newton's procedure. However, the literature on this subject is limited, and there are few methods that work efficiently without requiring knowledge of the multiplicity.
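A sketch of Schröder's iteration follows; the test function f(x) = (x − 1)²(x + 2), with a double root at x = 1, is our own choice for illustration.

```python
# Schröder's method: Newton applied to mu(x) = f(x)/f'(x), which gives
# x_{k+1} = x_k - f(x)*f'(x) / (f'(x)^2 - f(x)*f''(x)).
# It recovers quadratic convergence at a multiple root without knowing m.

def schroder(f, fp, fpp, x0, tol=1e-12, max_iter=50):
    x = x0
    for k in range(max_iter):
        fx, fpx, fppx = f(x), fp(x), fpp(x)
        denom = fpx ** 2 - fx * fppx
        if denom == 0.0:
            break
        step = fx * fpx / denom
        x -= step
        if abs(step) < tol:
            return x, k + 1
    return x, max_iter

# double root at x = 1, simple root at x = -2 (illustrative choice)
f = lambda x: (x - 1.0) ** 2 * (x + 2.0)
fp = lambda x: 3.0 * (x - 1.0) * (x + 1.0)
fpp = lambda x: 6.0 * x

root, iters = schroder(f, fp, fpp, 2.0)
print(root, iters)
```

A handful of iterations suffice, in contrast with the linear behavior of plain Newton at the same root.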
Regarding the design of efficient methods, one of the most widely used techniques is the inclusion of memory in the iterative expression. Iterative methods with memory were introduced by Traub [4]; the next iterate x_{k+1} is obtained using the current iterate x_k and previous ones x_{k−1}, …, x_{k−m}. Their general expression is

x_{k+1} = φ(x_k; x_{k−1}, …, x_{k−m}), k ≥ m.

These methods allow the order of convergence to be increased without adding functional evaluations [5] and generally show greater stability [6].
In this regard, Cordero et al. [7] designed two methods with a similar idea, based on Kurchatov's second-order convergent scheme [8]. The first method, KM, consists of the application of the Kurchatov scheme to μ(x) = f(x)/f'(x), resulting in

x_{k+1} = x_k − μ(x_k) / μ[2x_k − x_{k−1}, x_{k−1}], k ≥ 1,

where μ[u, v] = (μ(u) − μ(v))/(u − v) denotes the divided difference, and it holds the second order of convergence. The second method, KMD, applies the Kurchatov procedure to a derivative-free counterpart of μ and also holds the second order of convergence of the original method.
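As a rough illustration of the KM idea (the exact expressions are given in [7]; the Kurchatov step applied to μ = f/f' and the test polynomial below are our own sketch, not the authors' code):

```python
# Kurchatov step applied to mu = f/f' (a sketch of the KM idea):
#   x_{k+1} = x_k - mu(x_k) / mu[2*x_k - x_{k-1}, x_{k-1}],
# where mu[u, v] = (mu(u) - mu(v)) / (u - v) is a divided difference.

def km_sketch(f, fp, x0, x1, tol=1e-12, max_iter=50):
    def mu(x):
        fx, fpx = f(x), fp(x)
        if fpx == 0.0:
            return 0.0  # limit value at the multiple root (fx is 0 there too)
        return fx / fpx
    xm, x = x0, x1
    for k in range(max_iter):
        u, v = 2.0 * x - xm, xm
        dd = (mu(u) - mu(v)) / (u - v)  # u - v = 2*(x - xm) != 0 while iterating
        xm, x = x, x - mu(x) / dd
        if abs(x - xm) < tol:
            return x, k + 1
    return x, max_iter

# double root at x = 1; mu(x) = (x - 1)(x + 2)/(3(x + 1)) has a simple root there
f = lambda x: (x - 1.0) ** 2 * (x + 2.0)
fp = lambda x: 3.0 * (x - 1.0) * (x + 1.0)

root, iters = km_sketch(f, fp, 2.0, 1.6)
print(root, iters)
```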
Our aim in this work is to provide efficient methods applicable to nonlinear problems with multiple roots. In addition to the issue of knowing the multiplicity, there is also a need for efficient schemes that are applicable to nonlinear, non-differentiable functions with multiple roots. First, we deal with the design of an iterative scheme with memory for solving multiple-root nonlinear equations without knowing the multiplicity m. Starting from the secant method

x_{k+1} = x_k − f(x_k)(x_k − x_{k−1}) / (f(x_k) − f(x_{k−1})), k ≥ 1,

whose order of convergence is (1 + √5)/2 ≈ 1.618, our purpose is the application of the secant method to μ(x) = f(x)/f'(x), which yields

x_{k+1} = x_k − μ(x_k)(x_k − x_{k−1}) / (μ(x_k) − μ(x_{k−1})), k ≥ 1. (2)

Method (2), named from now on SD, is a procedure with memory and derivatives to obtain the multiple roots of the nonlinear equation f(x) = 0. For non-differentiable problems, based on [9], we propose the application of the secant method to a derivative-free approximation of μ, resulting in the parametric scheme (3). Let us remark that (3), named from now on SF, is a family of iterative procedures to obtain the multiple roots of the nonlinear equation f(x) = 0 that includes memory and is derivative-free.
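The SD construction can be sketched as follows; the test function is again our own choice, and this is an illustration of the secant-on-μ idea rather than the authors' implementation.

```python
# SD idea: the secant method applied to mu(x) = f(x)/f'(x).
# mu has a simple root wherever f has a root of any multiplicity m,
# so the secant iteration converges with order (1 + sqrt(5))/2 ~ 1.618.

def sd_sketch(f, fp, x0, x1, tol=1e-12, max_iter=60):
    def mu(x):
        fx, fpx = f(x), fp(x)
        return 0.0 if fpx == 0.0 else fx / fpx  # limit value at the root
    m0, m1 = mu(x0), mu(x1)
    for k in range(max_iter):
        if m1 == m0:
            break  # secant step undefined
        x2 = x1 - m1 * (x1 - x0) / (m1 - m0)
        x0, x1, m0, m1 = x1, x2, m1, mu(x2)
        if abs(x1 - x0) < tol:
            return x1, k + 1
    return x1, max_iter

f = lambda x: (x - 1.0) ** 2 * (x + 2.0)   # double root at x = 1
fp = lambda x: 3.0 * (x - 1.0) * (x + 1.0)

root, iters = sd_sketch(f, fp, 2.0, 1.5)
print(root, iters)
```

No multiplicity value appears anywhere in the iteration, which is precisely the design goal.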
This work is organized as follows. Section 2 analyzes the error equation of method (2) and explores its stability. Section 3 studies the convergence of family (3). Section 4 presents the numerical benchmark. Finally, Section 5 covers the main conclusions.
2. Method with Memory and Derivatives for Solving Multiple-Root Nonlinear Equations
The SD method, built from the secant scheme, can be applied to solve differentiable nonlinear equations with multiple roots. In this section, we address the analysis of its convergence and the study of its stability.
2.1. Analysis of Convergence of SD Method
Theorem 1. Let f: D ⊆ ℝ → ℝ be a sufficiently differentiable function in an open set D, and let α ∈ D be a root of unknown multiplicity m of f(x) = 0. If the initial guesses x_0 and x_1 are close enough to α, then (2) converges to α with order (1 + √5)/2, with its error equation being

e_{k+1} = C e_k e_{k−1} + O_3(e_k, e_{k−1}),

where C is a constant depending on the derivatives of f at α; e_k = x_k − α and e_{k−1} = x_{k−1} − α are the errors at steps k and k − 1, respectively; and O_3(e_k, e_{k−1}) collects the products of the terms e_k and e_{k−1} whose powers sum to 3 or greater values.

Proof. Let e_k = x_k − α. Expanding f(x_k) and f(x_{k−1}) around α via Taylor series yields
the corresponding power series in e_k and e_{k−1}. The quotient μ(x_k) = f(x_k)/f'(x_k) is expanded accordingly, so the expression for x_{k+1} follows by substitution in (2). The difference between x_{k+1} and α then yields the error at the new iterate.
Therefore, the error equation is

e_{k+1} = C e_k e_{k−1} + O_3(e_k, e_{k−1}). (4)

Assuming that the method has R-order p [10], i.e., e_{k+1} ∼ e_k^p, then e_{k+1} ∼ e_{k−1}^{p²}. In the case of (4), e_{k+1} ∼ e_k e_{k−1} ∼ e_{k−1}^{p+1}, so p² = p + 1. The positive root of p² − p − 1 = 0 is p = (1 + √5)/2 ≈ 1.618, so the order of the method is (1 + √5)/2. □
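The characteristic equation for the R-order can be checked directly; a small numerical sanity check (not part of the original proof):

```python
# The R-order p of the SD scheme satisfies e_{k+1} ~ e_{k-1}^{p^2} and
# e_{k+1} ~ e_k * e_{k-1} ~ e_{k-1}^{p+1}, hence p^2 = p + 1.
import math

p = (1.0 + math.sqrt(5.0)) / 2.0  # positive root of p^2 - p - 1 = 0
print(p)  # 1.618033988749895
```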
2.2. Dynamics of SD Method
The stability analysis of iterative methods with memory differs from that of procedures without memory. Although the fundamentals are similar [11,12], the analysis must be adapted to fixed point functions defined on ℝ² [13,14].
Since an iterative method with memory of the form x_{k+1} = φ(x_k, x_{k−1}) cannot have fixed points, we define the auxiliary map G: ℝ² → ℝ² such that

G(x_{k−1}, x_k) = (x_k, x_{k+1}) = (x_k, φ(x_k, x_{k−1})).

Provided that the iterative schemes are applied to polynomials, the resulting iteration function is rational.
A fixed point (z, x) of G satisfies G(z, x) = (z, x), so

z = x and φ(x, x) = x.
Recalling [15], a fixed point (z, x) can be classified in terms of the eigenvalues λ_1, λ_2 of G′(z, x) (the Jacobian matrix of G) as follows:
- attracting, when |λ_j| < 1 for j = 1, 2;
- unstable, when at least one eigenvalue satisfies |λ_j| > 1;
- repelling, when |λ_j| > 1 for j = 1, 2.
Dynamical planes represent the basins of attraction of the attracting fixed points. They are generated using a mesh of 400 × 400 initial guesses in a square of the real plane, following the guidelines of [16]. We establish convergence to an attracting fixed point when the difference between two consecutive iterates falls below a fixed tolerance, and we consider that the method does not converge if 40 iterations are reached. The orange and blue basins represent convergence to the attracting fixed points, black regions represent non-convergence, and each fixed point is marked with a white star.
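The basin-computation loop described above can be sketched as follows; as a stand-in for the paper's operator we use the plain secant iteration on f(x) = x² − 1 over pairs of initial guesses, with a reduced 40 × 40 mesh for speed (the paper uses 400 × 400).

```python
# Basin computation on a mesh of initial pairs (x0, x1), following the text:
# iterate until two consecutive iterates differ by less than a tolerance,
# declaring non-convergence after 40 iterations.

def basin_index(x0, x1, roots, tol=1e-8, max_iter=40):
    f = lambda x: x * x - 1.0  # illustrative test function, roots at -1 and 1
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:
            return -1  # secant step undefined: mark as non-convergent
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            for i, r in enumerate(roots):
                if abs(x2 - r) < 1e-4:
                    return i  # index of the basin this point belongs to
            return -1
        x0, x1 = x1, x2
    return -1

n = 40  # reduced mesh; the paper uses 400 x 400
grid = [-3.0 + 6.0 * i / (n - 1) for i in range(n)]
plane = [[basin_index(a, b, (-1.0, 1.0)) for a in grid] for b in grid]
converged = sum(v >= 0 for row in plane for v in row) / (n * n)
print(converged)
```

Coloring `plane` by basin index (and black for −1) reproduces the kind of picture shown in the dynamical planes.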
Let us apply the iterative method (2) to solve a polynomial p(x) with a root α of multiplicity m. Its fixed point operator is
Proposition 1. The operator G has three fixed points:
- two superattracting fixed points, corresponding to the roots of p(x);
- one further fixed point, which is unstable.
Proof. Fixed points satisfy G(z, x) = (z, x), so z = x and x = φ(x, x). If x is a root of p(x), the previous equation is satisfied. Otherwise, a further polynomial equation must hold, so three values are obtained: the two roots of p(x) and one additional point.
The Jacobian matrix G′(z, x) has eigenvalues λ_1 and λ_2. Evaluating them at the fixed points yields the following:
- for the two fixed points associated with the roots of p(x), both eigenvalues vanish, so these points are superattracting;
- for the remaining fixed point, at least one eigenvalue has modulus greater than 1, so it is unstable.
□
Proposition 2. The operator G does not have free critical points.

Proof. Solving det(G′(z, x)) = 0, we obtain at least one eigenvalue equal to 0. Evaluating the eigenvalues of G′ at each of the two attracting fixed points, the second eigenvalue vanishes only at the corresponding root, so each critical point matches an attracting fixed point. Since the critical points match the roots of the nonlinear equation and there are no additional critical points, the operator G does not have free critical points. □
Figure 1 represents the dynamical plane of method (2) when applied to p(x). It reveals that the iterative method enjoys wide stability, as no black regions appear: every initial guess converges to one of the roots of the polynomial.
The multiplicity value directly affects the shape and size of the basins of attraction. Applying (2) to the polynomial for different values of m, we obtain the dynamical planes of Figure 2. As Figure 2 shows, the basin of attraction of the multiple root increases as m does.
3. Derivative-Free Method with Memory for Solving Multiple-Root Nonlinear Equations
Family SF, built from the secant scheme, can be applied to solve non-differentiable nonlinear equations with multiple roots. In this section, we address the analysis of its convergence.
Theorem 2. Let f: D ⊆ ℝ → ℝ be a sufficiently differentiable function in an open set D, and let α ∈ D be a root of unknown multiplicity m of f(x) = 0. If the initial guesses x_0 and x_1 are close enough to α, then (3) converges to α with order (1 + √5)/2, with its error equation being

e_{k+1} = C̃ e_k e_{k−1} + O_3(e_k, e_{k−1}),

where C̃ is a constant depending on the derivatives of f at α, and O_3(e_k, e_{k−1}) collects the products of the terms e_k and e_{k−1} whose powers sum to 3 or greater values.

Proof. Proceeding in a similar way to Theorem 1, we obtain the Taylor expansions of the required evaluations of f around α. Approximating the resulting power by means of Newton's binomial, the previous term yields
an expansion in terms of e_k and e_{k−1}. The difference of the function evaluations leads to the divided difference, so the expression for x_{k+1} follows by substitution in (3). The difference between x_{k+1} and α then yields the error at the new iterate. Therefore, the error equation is

e_{k+1} = C̃ e_k e_{k−1} + O_3(e_k, e_{k−1}).
Assuming that the method has R-order p [10], i.e., e_{k+1} ∼ e_k^p, then e_{k+1} ∼ e_{k−1}^{p²}. In this case, e_{k+1} ∼ e_k e_{k−1} ∼ e_{k−1}^{p+1}, so p² = p + 1. The positive root of p² − p − 1 = 0 is p = (1 + √5)/2 ≈ 1.618, so the order of the method is (1 + √5)/2. □
Note that the parameter of the family does not affect the lowest-order term of the previous error equation, which is why all members of family SF keep the same order of convergence for every value of the parameter.
4. Numerical Benchmark
We assess the proposed iterative schemes by solving three nonlinear equations:
- the first, with one root of multiplicity two;
- the second, with one root of multiplicity three;
- the third, with three roots of multiplicity two.
We compare the proposed methods with the schemes from [7], which we denote by KM and KMD, with and without derivatives, respectively, and with the technique from [17], which we denote by gTM, which includes derivatives.
The numerical tests are conducted with Matlab R2023a on a PC equipped with an Intel Core i7-14700 processor (Intel, Santa Clara, CA, USA) at 2.10 GHz and 16 GB of RAM. The computations use variable-precision arithmetic with 500 digits of mantissa to guarantee that the stopping criterion is reached without division-by-zero problems. We also impose a maximum of 100 iterations as a stopping criterion.
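A comparable high-precision setup can be reproduced outside Matlab; here Python's `decimal` standard library stands in for variable-precision arithmetic (the 500-digit figure follows the text), with Newton's iteration for √2 as a toy computation.

```python
# 500-digit arithmetic analogous to the vpa setting described above,
# illustrated with Newton's iteration for sqrt(2).
from decimal import Decimal, getcontext

getcontext().prec = 500  # digits of mantissa, as in the benchmark setup

x = Decimal(3) / Decimal(2)
two = Decimal(2)
for _ in range(12):  # each step roughly doubles the correct digits
    x = (x + two / x) / 2

print(str(x)[:20])  # 1.414213562373095048
```

With 500-digit arithmetic, residuals far below double precision can be reported without the iteration stagnating at machine epsilon.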
The results are collected in Table 1, Table 2 and Table 3. These tables report the initial guess x_0, the number of iterations k needed to reach the solution, the approximated computational order of convergence (ACOC) [18], the residual between the last two iterates, the value of the function at the last iterate, and the CPU time (in seconds).
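The ACOC can be computed from three consecutive iterate differences. A sketch follows, using the plain secant method (order ≈ 1.618) on a test equation of our choosing as the iteration, rather than any method from the tables.

```python
# ACOC [18]:
#   p_k = ln(|x_{k+1}-x_k| / |x_k-x_{k-1}|) / ln(|x_k-x_{k-1}| / |x_{k-1}-x_{k-2}|)
import math

def acoc(xs):
    d = [abs(b - a) for a, b in zip(xs, xs[1:]) if b != a]
    return [math.log(d[k] / d[k - 1]) / math.log(d[k - 1] / d[k - 2])
            for k in range(2, len(d))]

# secant iterates for f(x) = x^2 - 2 (simple root at sqrt(2))
f = lambda x: x * x - 2.0
xs = [1.0, 2.0]
while abs(xs[-1] - xs[-2]) > 1e-10 and len(xs) < 20:
    x0, x1 = xs[-2], xs[-1]
    xs.append(x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0)))

print(acoc(xs)[-1])  # close to the theoretical order (1 + sqrt(5))/2
```

In practice, only the last few ACOC values are meaningful, since early iterates are far from the asymptotic regime and the final differences approach machine precision.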
As shown in Table 1, all the methods obtain good results for the chosen initial points, and the approximated computational order of convergence coincides with the theoretical one. Although schemes SD and SF require the most iterations, they attain the smallest residual at the last iteration, except for the KM method, which can be considered the best-performing method for approximating the multiple roots of this equation. Interestingly, for the chosen initial points, the SD and SF methods require less CPU time than the corresponding comparison methods with and without derivatives, respectively.
As Table 2 and Table 3 show, all the methods obtain good results for the chosen initial points. The approximated computational order of convergence coincides with the theoretical one, and the number of iterations needed to satisfy the stopping criterion is almost the same for all methods. In addition, the CPU time is lower for methods SD and SF. However, as expected, the accuracy of the schemes proposed in this work is lower, since their theoretical order of convergence is lower than that of the methods considered for comparison.
5. Conclusions
In this work, we modify a memory-based method to make it applicable to obtaining multiple roots, without needing to know their multiplicity, while maintaining its order of convergence. The order of convergence of the method is proved theoretically, and this study is complemented with a dynamical analysis of the scheme applied to nonlinear polynomials with a simple root and others with multiplicity greater than one. This analysis shows the dynamical planes for various multiplicities, obtaining vast basins of attraction in all cases, with points that converge to the multiple roots. As a result of this study, stable convergence to both simple and multiple roots is expected.
We then modify the proposed method to obtain the SF method, a derivative-free memory-based family of iterative methods with the same characteristics as the SD method but also including a real parameter. This variant of the SD method enhances its flexibility and applicability in situations where derivative information is difficult or costly to obtain.
By running the KM, KMD, gTM, SD, and SF methods on several examples, we conclude that the proposed SD and SF methods show excellent performance and are more efficient than the KM, KMD, and gTM methods in terms of runtime and computational cost. From these findings, the SD and SF methods offer several comparative advantages. First, they efficiently handle multiple roots without knowledge of the multiplicity. Moreover, the SF variant offers a derivative-free option that broadens its scope of application. Finally, both methods outperform the classical approaches in terms of runtime and computational efficiency.
Nevertheless, some limitations must be acknowledged. The present work focused exclusively on problems on the real line, leaving aside systems of nonlinear equations and problems in the complex plane. Moreover, while the dynamical properties of the SD method were investigated in polynomial cases, a deeper exploration is needed for more general nonlinear functions, including a more detailed analysis of the influence of the parameter on the stability of the iterative family. These limitations open interesting directions for future research. Potential extensions include adapting the SD method and the SF family to systems of nonlinear equations, exploring their applicability to the numerical solution of nonlinear PDE discretizations, and performing a more comprehensive dynamical study in higher dimensions and complex domains. Such developments would further establish the versatility and robustness of memory-based iterative methods in modern computational mathematics.
Author Contributions
Conceptualization, F.I.C. and N.G.-S.; methodology, F.I.C.; software, J.H.J.; validation, J.H.J.; formal analysis, F.I.C.; investigation, J.H.J.; resources, F.I.C.; data curation, J.H.J.; writing—original draft preparation, J.H.J.; writing—review and editing, N.G.-S.; visualization, N.G.-S.; supervision, F.I.C.; project administration, F.I.C.; funding acquisition, F.I.C. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by “Ayuda a Primeros Proyectos de Investigación (PAID-06-23) and (PAID-11-24), both from Vicerrectorado de Investigación de la Universitat Politècnica de València (UPV)”.
Conflicts of Interest
The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
References
- Bhavna; Bhatia, S. Convergence analysis of optimal iterative family for multiple roots and its applications. J. Math. Chem. 2024, 62, 2007–2038.
- Argyros, C.; Argyros, M.I.; Argyros, I.K.; Magreñán, A.A.; Sarría, I. Local and semi-local convergence for Chebyshev two point like methods with applications in different fields. J. Comput. Appl. Math. 2023, 426, 115072.
- Schröder, E. Über unendlich viele Algorithmen zur Auflösung der Gleichungen. Math. Ann. 1870, 2, 317–365.
- Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964.
- Petković, M.S.; Džunić, J.; Neta, B. Interpolatory multipoint methods with memory for solving nonlinear equations. Appl. Math. Comput. 2011, 218, 2533–2541.
- Abdullah, S.; Choubey, N.; Dara, S. An efficient two-point iterative method with memory for solving non-linear equations and its dynamics. J. Appl. Math. Comput. 2024, 70, 285–315.
- Cordero, A.; Garrido, N.; Torregrosa, J.R.; Triguero-Navarro, P. Modifying Kurchatov’s method to find multiple roots of nonlinear equations. Appl. Numer. Math. 2024, 198, 11–21.
- Kurchatov, V.A. On a method of linear interpolation for the solution of functional equations. Dokl. Akad. Nauk SSSR 1971, 198, 524–526.
- King, R.F. A secant method for multiple roots. BIT Numer. Math. 1977, 17, 321–328.
- Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: Cambridge, MA, USA, 1970.
- Beardon, A.F. Iteration of Rational Functions: Complex Analytic Dynamical Systems; Springer: Berlin/Heidelberg, Germany, 1991.
- Milnor, J. Dynamics in One Complex Variable: Introductory Lectures; Princeton University Press: Princeton, NJ, USA, 2006.
- Campos, B.; Cordero, A.; Torregrosa, J.R.; Vindel, P. A multidimensional dynamical approach to iterative methods with memory. Appl. Math. Comput. 2015, 271, 701–715.
- Campos, B.; Cordero, A.; Torregrosa, J.R.; Vindel, P. Stability of King’s family of iterative methods with memory. J. Comput. Appl. Math. 2017, 318, 504–514.
- Robinson, R.C. An Introduction to Dynamical Systems: Continuous and Discrete; American Mathematical Society: Providence, RI, USA, 2012.
- Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Drawing dynamical and parameters planes of iterative families and methods. Sci. World J. 2013, 2013, 780153.
- Cordero, A.; Neta, B.; Torregrosa, J.R. Memorizing Schröder’s Method as an Efficient Strategy for Estimating Roots of Unknown Multiplicity. Mathematics 2021, 9, 2570.
- Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).