Article

Secant-Type Iterative Classes for Nonlinear Equations with Multiple Roots

by Francisco I. Chicharro 1,*,†, Neus Garrido-Saez 1,† and Julissa H. Jerezano 1,2,†
1 Instituto de Matemática Multidisciplinar, Universitat Politècnica de València, Camí de Vera s/n, 46022 València, Spain
2 Departamento de Matemática, Universidad Nacional Autónoma de Honduras, San Pedro Sula 21102, Honduras
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Algorithms 2025, 18(9), 568; https://doi.org/10.3390/a18090568
Submission received: 31 July 2025 / Revised: 4 September 2025 / Accepted: 5 September 2025 / Published: 9 September 2025

Abstract

General-purpose iterative methods for solving nonlinear equations provide approximate solutions to problems without closed-form solutions. However, these methods lose some of their properties when the problems have multiple roots or are not differentiable, in which case specific methods are required. Moreover, in most problems the multiplicity of the root is unknown, which narrows the range of applicable methods. In this work we propose two iterative classes with memory for solving nonlinear equations with multiple roots without knowing the multiplicity. One of the proposals includes derivatives, while the other is derivative-free, obtained from the first by means of divided differences and a parameter in its iterative expression. The order of convergence of the proposed schemes is analyzed. The stability of the methods is studied using real dynamics, showing their good behavior. A numerical benchmark confirms the theoretical study.

1. Introduction

The resolution of nonlinear equations of the form f(x) = 0 represents a fundamental problem in numerical analysis, with broad applications in science and engineering [1,2]. Unlike linear equations, nonlinear equations often lack closed-form analytical solutions, so iterative methods are needed for their approximation. Iterative schemes constitute a class of numerical techniques designed to generate successive approximations to the roots of nonlinear equations. These methods begin with an initial guess and refine it through repeated application of a defined algorithm, ideally converging to a solution within a desired tolerance.
In recent decades, many iterative methods have been designed for this purpose, and each technique exhibits distinct convergence properties, stability criteria, and computational requirements. However, most of the proposed algorithms do not work as expected when the solution to be approximated has a multiplicity greater than one.
A common drawback of iterative methods for solving nonlinear equations with multiple solutions lies in the requirement of a priori knowledge of the multiplicity m of the solution. We can find in the literature classical schemes that do not require such knowledge. The first known example is Schröder’s method [3], designed by applying Newton’s method to u(x) = f(x)/f'(x), yielding
x_{k+1} = x_k - u(x_k)/u'(x_k) = x_k - f(x_k) f'(x_k) / [f'(x_k)^2 - f(x_k) f''(x_k)], k = 0, 1, …
Schröder’s scheme retains the quadratic convergence of Newton’s procedure. However, the literature on this subject is limited, and there are few methods that, in addition to working efficiently, do not require knowledge of this multiplicity.
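As an illustration of how Schröder's scheme handles an unknown multiplicity, here is a minimal Python sketch; the cubic test function, starting point, and stopping rule below are illustrative choices, not taken from the paper.

```python
def schroder(f, df, d2f, x0, tol=1e-12, max_iter=50):
    """Schroder iteration x <- x - f f' / (f'^2 - f f''): quadratic for roots of any multiplicity."""
    x = x0
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        denom = dfx ** 2 - fx * d2fx
        if denom == 0.0:                 # landed on the root or a degenerate point
            break
        x_new = x - fx * dfx / denom
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Triple root at x = 1; the multiplicity is never supplied to the method.
# Plain Newton would only converge linearly here.
root = schroder(lambda x: (x - 1) ** 3,
                lambda x: 3 * (x - 1) ** 2,
                lambda x: 6 * (x - 1),
                x0=2.0)
```

For this particular f, u(x) = f(x)/f'(x) = (x - 1)/3 is linear with a simple root at x = 1, which is why the iteration reaches it in very few steps.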
Regarding the design of efficient methods, one of the most widely used techniques is the inclusion of memory in the iterative expression. Iterative methods with memory were introduced by Traub [4]; in them, the next iterate x_{k+1} is obtained using the current iterate x_k and previous ones x_{k-1}, x_{k-2}, …. Their general expression is
x_{k+1} = ξ(x_k, x_{k-1}, …, x_{k-K}), k = K, K + 1, …
These types of methods allow the order of convergence to be increased without adding functional evaluations [5] and generally show greater stability [6].
In this regard, Cordero et al. [7] designed two methods with a similar idea based on Kurchatov’s second-order convergent method [8]. The first method, KM, consists of the application of the Kurchatov scheme to u(x) = f(x)/f'(x), resulting in
x_{k+1} = x_k - u(x_k) / u[2x_k - x_{k-1}, x_{k-1}], k = 1, 2, …,
holding the second order of convergence. The second method, KMD, applies the Kurchatov procedure to v(x) = f(x)/f[x + f(x), x] and also holds the second order of convergence of the original method.
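A minimal sketch of the KM iteration above, assuming u(x) = f(x)/f'(x) and the divided difference u[a, b] = (u(a) - u(b))/(a - b); the test function, initial guesses, and zero-denominator safeguards are illustrative additions, not taken from [7].

```python
def km(f, df, x0, x1, tol=1e-12, max_iter=100):
    """Kurchatov scheme applied to u(x) = f(x)/f'(x) (sketch of method KM)."""
    def u(x):
        dfx = df(x)
        return 0.0 if dfx == 0.0 else f(x) / dfx
    xk_1, xk = x0, x1
    for _ in range(max_iter):
        a, b = 2 * xk - xk_1, xk_1
        if a == b:
            break
        dd = (u(a) - u(b)) / (a - b)     # divided difference u[2x_k - x_{k-1}, x_{k-1}]
        if dd == 0.0:
            break
        x_new = xk - u(xk) / dd
        xk_1, xk = xk, x_new
        if abs(xk - xk_1) < tol:
            break
    return xk

# Double root at x = 1; the multiplicity is never supplied to the method.
root = km(lambda x: (x - 1) ** 2 * (x + 1),
          lambda x: (x - 1) * (3 * x + 1),
          x0=2.0, x1=1.8)
```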
Our aim in this work is to provide efficient methods applicable to solving nonlinear problems with multiple roots. In addition to the issue of knowing the multiplicity, there is also a need to design efficient schemes that are applicable to nonlinear and non-differentiable functions with multiple roots. First, we deal with the design of an iterative scheme with memory for solving multiple-root nonlinear equations without knowledge of the multiplicity m. From the secant method
x_{k+1} = x_k - f(x_k)(x_k - x_{k-1}) / [f(x_k) - f(x_{k-1})], k = 1, 2, …,
whose order of convergence is Φ = (1 + √5)/2, our purpose is to apply the secant method to u(x) = f(x)/f'(x), which yields
x_{k+1} = x_k - u(x_k)(x_k - x_{k-1}) / [u(x_k) - u(x_{k-1})], k = 1, 2, …
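A minimal Python sketch of this secant iteration on u(x) = f(x)/f'(x); the test function, initial guesses, and guards against vanishing denominators are illustrative choices, not taken from the paper.

```python
def sd(f, df, x0, x1, tol=1e-12, max_iter=100):
    """Secant iteration applied to u(x) = f(x)/f'(x), as in iteration (2)."""
    def u(x):
        dfx = df(x)
        return 0.0 if dfx == 0.0 else f(x) / dfx
    xk_1, xk = x0, x1
    for _ in range(max_iter):
        uk, uk_1 = u(xk), u(xk_1)
        if uk == uk_1:                   # secant denominator vanished
            break
        x_new = xk - uk * (xk - xk_1) / (uk - uk_1)
        xk_1, xk = xk, x_new
        if abs(xk - xk_1) < tol:
            break
    return xk

# Double root at x = 1; the multiplicity is never supplied to the method.
root = sd(lambda x: (x - 1) ** 2 * (x + 1),
          lambda x: (x - 1) * (3 * x + 1),
          x0=2.0, x1=1.8)
```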
Method (2), named from now on SD, is a procedure to obtain the multiple roots of the nonlinear equation f(x) = 0 that includes memory and derivatives. For non-differentiable problems, based on [9], we propose the application of the secant method to v(x) = f(x)/f[x + βf(x), x], resulting in
x_{k+1} = x_k - v(x_k)(x_k - x_{k-1}) / [v(x_k) - v(x_{k-1})], k = 1, 2, …
Let us remark that (3), named from now on SF_β, is a family of iterative procedures to obtain the multiple roots of the nonlinear equation f(x) = 0 that includes memory and is derivative-free.
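The derivative-free family (3) admits a similar sketch. The value β = 0.01, the guards against divided differences lost to rounding, and the test problem are illustrative choices, not taken from the paper.

```python
def sf_beta(f, x0, x1, beta=1.0, tol=1e-12, max_iter=100):
    """Secant iteration applied to v(x) = f(x)/f[x + beta*f(x), x], as in family (3)."""
    def v(x):
        fx = f(x)
        if fx == 0.0:
            return 0.0
        dd = (f(x + beta * fx) - fx) / (beta * fx)   # f[x + beta f(x), x]
        if dd == 0.0:
            return None                  # increment below machine resolution
        return fx / dd
    xk_1, xk = x0, x1
    for _ in range(max_iter):
        vk, vk_1 = v(xk), v(xk_1)
        if vk is None or vk_1 is None or vk == vk_1:
            break
        x_new = xk - vk * (xk - xk_1) / (vk - vk_1)
        xk_1, xk = xk, x_new
        if abs(xk - xk_1) < tol:
            break
    return xk

# Same double root at x = 1, but no derivative is evaluated anywhere.
root = sf_beta(lambda x: (x - 1) ** 2 * (x + 1), x0=2.0, x1=1.8, beta=0.01)
```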
This work is organized as follows. Section 2 analyzes the error equation of method (2) and explores its stability. Section 3 studies the convergence of family (3). Section 4 computes the numerical benchmark. Finally, Section 5 covers the main conclusions.

2. Method with Memory and Derivatives for Solving Multiple-Root Nonlinear Equations

The SD method can be applied to solve nonlinear differentiable equations with multiple roots from the secant point of view. In this section, we address the analysis of its convergence and the study of its stability.

2.1. Analysis of Convergence of SD Method

Theorem 1.
Let f : D ⊂ R → R be a sufficiently differentiable function in an open set D, and let α be a root of unknown multiplicity m ∈ N \ {1} of f(x) = 0. If the initial guesses x_0 and x_1 are close enough to α, then (2) converges to α with order Φ = (1 + √5)/2, with its error equation being
e_{k+1} = -(c_1/m) e_k e_{k-1} + O_3(e_k, e_{k-1}),
where c_j = (m!/(m + j)!) f^{(m+j)}(α)/f^{(m)}(α) for j = 1, 2, …; e_k = x_k - α and e_{k-1} = x_{k-1} - α are the errors at steps k and k - 1, respectively; and O_3 collects the products of powers of e_k and e_{k-1} whose exponents sum to 3 or greater.
Proof. 
Let e_k = x_k - α. Expanding f(x_k) and f'(x_k) around α via Taylor series yields
f(x_k) = [f^{(m)}(α)/m!] e_k^m (1 + c_1 e_k) + O(e_k^{m+2}), f'(x_k) = [f^{(m)}(α)/m!] e_k^{m-1} [m + (m + 1) c_1 e_k] + O(e_k^{m+1}).
The quotient u(x_k) = f(x_k)/f'(x_k) is
u(x_k) = { [f^{(m)}(α)/m!] e_k^m (1 + c_1 e_k) + O(e_k^{m+2}) } / { [f^{(m)}(α)/m!] e_k^{m-1} [m + (m + 1) c_1 e_k] + O(e_k^{m+1}) } = (1/m) e_k - (c_1/m^2) e_k^2 + O(e_k^3),
so the expression for u ( x k 1 ) is
u(x_{k-1}) = (1/m) e_{k-1} - (c_1/m^2) e_{k-1}^2 + O(e_{k-1}^3).
The difference between u ( x k ) and u ( x k 1 ) yields
u(x_k) - u(x_{k-1}) = (1/m)(e_k - e_{k-1}) - (c_1/m^2)(e_k^2 - e_{k-1}^2) + O_3(e_k, e_{k-1}).
Therefore, the error equation is
x_{k+1} - α = x_k - α - u(x_k)(x_k - α - x_{k-1} + α) / [u(x_k) - u(x_{k-1})] = e_k - u(x_k)(e_k - e_{k-1}) / [u(x_k) - u(x_{k-1})]
= e_k - [(1/m) e_k - (c_1/m^2) e_k^2](e_k - e_{k-1}) / [(1/m)(e_k - e_{k-1}) - (c_1/m^2)(e_k^2 - e_{k-1}^2)]
= e_k - [e_k - (c_1/m) e_k^2](e_k - e_{k-1}) / {(e_k - e_{k-1})[1 - (c_1/m)(e_k + e_{k-1})]}
= -(c_1/m) e_k e_{k-1} / [1 - (c_1/m)(e_k + e_{k-1})] = -(c_1/m) e_k e_{k-1} + O_3(e_k, e_{k-1}).
Assuming that the method has R-order p [10], i.e., e_{k+1} ∼ e_k^p and e_k ∼ e_{k-1}^p, then e_{k+1} ∼ e_{k-1}^{p^2}. In the case of (4), e_{k+1} ∼ e_k e_{k-1} ∼ e_{k-1}^p e_{k-1} = e_{k-1}^{p+1}. The positive root of p^2 = p + 1 is Φ = (1 + √5)/2, so the order of the method is Φ. □

2.2. Dynamics of SD Method

The stability analysis of iterative methods with memory differs from that of procedures that lack memory. Despite the fundamentals being similar [11,12], the analysis must be adapted to the fixed point functions in R 2 [13,14].
Since an iterative method with memory of the form x_{k+1} = σ(x_k, x_{k-1}) does not define a self-map of R, and therefore has no fixed points in the usual sense, we define the auxiliary map Σ : R^2 → R^2 such that
Σ(x_{k-1}, x_k) = (x_k, x_{k+1}) = (x_k, σ(x_k, x_{k-1})), k = 1, 2, …
Provided that iterative schemes are applied to polynomials, the resulting iteration function is rational.
A fixed point (w, z) of Σ satisfies (w, z) = Σ(w, z), that is,
w = z, z = σ(w, z).
Recalling [15], a fixed point x = (w, z) can be classified in terms of the eigenvalues λ_1, λ_2, …, λ_n of Σ'(x), the Jacobian matrix of Σ, as follows:
  • attracting, when |λ_j| < 1 for all j;
  • unstable, when at least one eigenvalue satisfies |λ_j| > 1;
  • repelling, when |λ_j| > 1 for all j.
Dynamical planes represent the basins of attraction of the attracting fixed points. They are generated using a mesh of 400 × 400 initial guesses in the square [-3, 3] × [-3, 3], following the guidelines of [16]. We establish convergence to an attracting fixed point when the difference between two consecutive iterates is lower than 10^{-6}, and we consider that the method does not converge if 40 iterations are reached. The orange and blue basins represent convergence to the attracting fixed points, black basins represent divergence, and a white star marks every fixed point.
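The mesh procedure above can be sketched as follows, assuming the fixed-point operator Σ derived below for p(x) = (x - 1)^m (x + 1); a coarse 40 × 40 mesh is used here for speed, whereas the paper uses 400 × 400, and no plotting is performed.

```python
def basin(w0, z0, m, tol=1e-6, max_iter=40):
    """Root the orbit of (w0, z0) under Sigma settles near (1.0 or -1.0), or None."""
    w, z = w0, z0
    for _ in range(max_iter):
        den = (m - 1) * (w + z) + (m + 1) * (w * z + 1)
        if den == 0.0:
            return None
        w, z = z, ((m + 1) * (w + z) + (m - 1) * (w * z + 1)) / den
        if abs(z - w) < tol:             # two consecutive iterates agree
            return 1.0 if abs(z - 1.0) < abs(z + 1.0) else -1.0
    return None

# Classify every point of a uniform mesh on [-3, 3] x [-3, 3] for m = 2.
n, m = 40, 2
counts = {1.0: 0, -1.0: 0, None: 0}
for i in range(n):
    for j in range(n):
        w0 = -3.0 + 6.0 * i / (n - 1)
        z0 = -3.0 + 6.0 * j / (n - 1)
        counts[basin(w0, z0, m)] += 1
```

Coloring each mesh point by the returned label reproduces the basin structure of the dynamical planes; the counts alone already show that both basins are non-empty.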
Let us apply the iterative method (2) to solve the polynomial p(x) = (x - 1)^m (x + 1), where x = 1 is a root of multiplicity m. Its fixed point operator is
Σ(w, z) = ( z, [(m + 1)(w + z) + (m - 1)(wz + 1)] / [(m - 1)(w + z) + (m + 1)(wz + 1)] ).
Proposition 1.
The fixed points of Σ ( w , z ) are
  • (w, z)_1 = (1, 1) and (w, z)_2 = (-1, -1), which are superattracting;
  • (w, z)_3 = ((1 - m)/(1 + m), (1 - m)/(1 + m)), which is unstable.
Proof. 
Fixed points satisfy (w, z) = Σ(w, z) = (z, [(m + 1)(w + z) + (m - 1)(wz + 1)] / [(m - 1)(w + z) + (m + 1)(wz + 1)]), so w = z and
z = [2z(m + 1) + (m - 1)(z^2 + 1)] / [2z(m - 1) + (m + 1)(z^2 + 1)]
⟺ z^3(m + 1) + 2z^2(m - 1) + z(m + 1) = z^2(m - 1) + 2z(m + 1) + m - 1
⟺ z^3(m + 1) + z^2(m - 1) - z(m + 1) - m + 1 = 0
⟺ (m + 1) z (z^2 - 1) = (m - 1)(1 - z^2).
If z = ±1, the previous equation is satisfied. Otherwise,
z = (1 - m)/(1 + m),
so three fixed points have been obtained: (w, z)_1 = (1, 1), (w, z)_2 = (-1, -1), and (w, z)_3 = ((1 - m)/(1 + m), (1 - m)/(1 + m)).
The Jacobian matrix is
Σ'(w, z) =
( 0                  1                )
( 4m(1 - z^2)/D^2    4m(1 - w^2)/D^2  ), where D = (m - 1)(w + z) + (m + 1)(wz + 1),
whose eigenvalues are λ_1(w, z) and λ_2(w, z). Evaluating the eigenvalues at the fixed points yields the following:
  • For (w, z)_1 = (1, 1): λ_1(1, 1) = 0 and λ_2(1, 1) = 0, so (w, z)_1 is superattracting;
  • For (w, z)_2 = (-1, -1): λ_1(-1, -1) = 0 and λ_2(-1, -1) = 0, so (w, z)_2 is superattracting;
  • For (w, z)_3 = ((1 - m)/(1 + m), (1 - m)/(1 + m)): λ_1 = (1 - √5)/2, with |λ_1| < 1, and λ_2 = (1 + √5)/2 > 1, so (w, z)_3 is unstable. □
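The eigenvalue computation above can be checked numerically, assuming the Jacobian entries dσ/dw = 4m(1 - z^2)/D^2 and dσ/dz = 4m(1 - w^2)/D^2 derived in the proof; the sampled multiplicities are illustrative choices.

```python
import math

def eigenvalues_at(w, z, m):
    """Eigenvalues of [[0, 1], [a, b]], the Jacobian of Sigma at (w, z)."""
    D = (m - 1) * (w + z) + (m + 1) * (w * z + 1)
    a = 4 * m * (1 - z * z) / D ** 2
    b = 4 * m * (1 - w * w) / D ** 2
    disc = math.sqrt(b * b + 4 * a)      # characteristic polynomial t^2 - b t - a
    return (b - disc) / 2, (b + disc) / 2

# At (w, z)_3 = ((1 - m)/(1 + m), (1 - m)/(1 + m)) the eigenvalues should be
# (1 - sqrt(5))/2 and (1 + sqrt(5))/2, independently of the multiplicity m.
results = []
for m in (2, 3, 5, 20):
    zf = (1 - m) / (1 + m)
    results.append(eigenvalues_at(zf, zf, m))
```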
Proposition 2.
The operator Σ ( w , z ) does not have free critical points.
Proof. 
Solving det(Σ'(w, z)) = 0, we obtain at least one eigenvalue equal to 0:
det(Σ'(w, z)) = 0 ⟺ 4m(z^2 - 1) / [(m - 1)(w + z) + (m + 1)(wz + 1)]^2 = 0 ⟺ z = ±1.
When z = 1, the eigenvalues of Σ'(w, 1) are λ_1(w, 1) = 0 and λ_2(w, 1) = (1 - w)/[m(1 + w)]. The second eigenvalue is 0 only when w = 1. Therefore, (w, z)_1 = (1, 1) is a critical point that matches the attracting fixed point.
When z = -1, the eigenvalues of Σ'(w, -1) are λ_1(w, -1) = 0 and λ_2(w, -1) = m(1 + w)/(1 - w). The second eigenvalue is 0 only when w = -1. Therefore, (w, z)_2 = (-1, -1) is a critical point that matches the attracting fixed point.
Since the critical points match the roots of the nonlinear equation and there are no additional critical points, operator Σ ( w , z ) does not have free critical points. □
Figure 1 represents the dynamical plane of method (2) when applied to p(x) = (x - 1)^2 (x + 1).
Figure 1 reveals that the iterative method exhibits broad stability, given the absence of black regions: every initial guess converges to one of the roots of the polynomial.
The multiplicity value directly affects the shape and size of the basins of attraction. Applying (2) to the polynomial p(x) = (x - 1)^m (x + 1) for different values of m, we obtain the dynamical planes of Figure 2.
As Figure 2 shows, the basin of attraction of x = 1 increases as m does.

3. Derivative-Free Method with Memory for Solving Multiple-Root Nonlinear Equations

The SF_β family can be applied to solve nonlinear, non-differentiable equations with multiple roots from the secant point of view. In this section, we address the analysis of its convergence.
Theorem 2.
Let f : D ⊂ R → R be a sufficiently differentiable function in an open set D, and let α be a root of unknown multiplicity m ∈ N \ {1} of f(x) = 0. If the initial guesses x_0 and x_1 are close enough to α, then (3) converges to α with order Φ = (1 + √5)/2, with its error equation being
e_{k+1} = -(c_1/m) e_k e_{k-1} + O_3(e_k, e_{k-1}),
where c_1 = (m!/(m + 1)!) f^{(m+1)}(α)/f^{(m)}(α), and O_3 collects the products of powers of e_k and e_{k-1} whose exponents sum to 3 or greater.
Proof. 
Proceeding in a similar way to Theorem 1, we obtain the Taylor expansion of f ( x k ) around α as
f(x_k) = [f^{(m)}(α)/m!] e_k^m (1 + c_1 e_k) + O(e_k^{m+2}),
and
f(x_k + βf(x_k)) = [f^{(m)}(α)/m!] (e_k + βf(x_k))^m [1 + c_1(e_k + βf(x_k))] + O(e_k^{m+2}).
Approximating the m-th power with Newton’s binomial,
(e_k + βf(x_k))^m = Σ_{j=0}^{m} C(m, j) e_k^{m-j} β^j f(x_k)^j ≈ e_k^m + m e_k^{m-1} βf(x_k),
the previous term yields
f(x_k + βf(x_k)) = [f^{(m)}(α)/m!] [e_k^m + m e_k^{m-1} βf(x_k)] [1 + c_1 e_k + c_1 βf(x_k)] + O(e_k^{m+2}).
The difference f(x_k + βf(x_k)) - f(x_k) is
f(x_k + βf(x_k)) - f(x_k) = [f^{(m)}(α)/m!] e_k^{m-1} βf(x_k) [m + c_1(m + 1) e_k + c_1 m βf(x_k)] + O(e_k^{m+2}),
so the divided difference results in
f[x_k + βf(x_k), x_k] = [f(x_k + βf(x_k)) - f(x_k)] / (βf(x_k)) = [f^{(m)}(α)/m!] e_k^{m-1} [m + c_1(m + 1) e_k] + O(e_k^{m+1}).
Therefore, the expression v(x) = f(x)/f[x + βf(x), x] satisfies
v(x_k) = { [f^{(m)}(α)/m!] e_k^m (1 + c_1 e_k) + O(e_k^{m+2}) } / { [f^{(m)}(α)/m!] e_k^{m-1} [m + c_1(m + 1) e_k] + O(e_k^{m+1}) } = (1/m) e_k - (c_1/m^2) e_k^2 + O(e_k^3),
and
v(x_{k-1}) = (1/m) e_{k-1} - (c_1/m^2) e_{k-1}^2 + O(e_{k-1}^3).
The difference between v ( x k ) and v ( x k 1 ) yields
v(x_k) - v(x_{k-1}) = (1/m)(e_k - e_{k-1}) - (c_1/m^2)(e_k^2 - e_{k-1}^2) + O_3(e_k, e_{k-1}).
Therefore, the error equation is
x_{k+1} - α = x_k - α - v(x_k)(x_k - α - x_{k-1} + α) / [v(x_k) - v(x_{k-1})] = e_k - v(x_k)(e_k - e_{k-1}) / [v(x_k) - v(x_{k-1})]
= e_k - [(1/m) e_k - (c_1/m^2) e_k^2](e_k - e_{k-1}) / [(1/m)(e_k - e_{k-1}) - (c_1/m^2)(e_k^2 - e_{k-1}^2)]
= e_k - [e_k - (c_1/m) e_k^2] / [1 - (c_1/m)(e_k + e_{k-1})]
= -(c_1/m) e_k e_{k-1} + O_3(e_k, e_{k-1}).
Assuming that the method has R-order p [10], i.e., e_{k+1} ∼ e_k^p and e_k ∼ e_{k-1}^p, then e_{k+1} ∼ e_{k-1}^{p^2}. In the case of the previous error equation, e_{k+1} ∼ e_k e_{k-1} ∼ e_{k-1}^p e_{k-1} = e_{k-1}^{p+1}. The positive root of p^2 = p + 1 is Φ = (1 + √5)/2, so the order of the method is Φ. □
Note that parameter β does not affect the lowest-order term of the previous error equation, which is why all the members of family SF_β keep the same order of convergence for every value of β.

4. Numerical Benchmark

We assess the proposed iterative schemes by solving the following nonlinear equations:
  • f_1(x) = x^2 e^x - sin(x) + x = 0, which has one root of multiplicity two, x* = 0;
  • f_2(x) = (x^2 - e^x - 3x + 2)^3 = 0, which has one root of multiplicity three, x* ≈ 0.25753029;
  • f_3(x) = (x - 1)^2 (x - 2)^2 (x - 3)^2 = 0, which has three roots of multiplicity two, x*_1 = 1, x*_2 = 2, and x*_3 = 3.
We compare the proposed methods with the schemes of [17], denoted KM and KMD (with and without derivatives, respectively), and with the technique of [7], denoted gTM, which includes derivatives.
The numerical tests are conducted with Matlab R2023a on a PC equipped with an Intel Core i7-14700 processor (Intel, Santa Clara, CA, USA) at 2.10 GHz and 16 GB of RAM. The computations use variable-precision arithmetic with 500 digits of mantissa to guarantee that the stopping criterion |x_{k+1} - x_k| + |f(x_{k+1})| < 10^{-25} is reached without division-by-zero problems. We also use a maximum of 100 iterations as a stopping criterion.
The results are collected in Table 1, Table 2 and Table 3. These tables collect the initial guesses x_0, x_1, x_2, the number of iterations k needed to reach the solution, the approximated computational order of convergence ACOC [18], the residual |x_{k+1} - x_k| at the last iterate, the value of |f(x_{k+1})|, and the CPU time (in seconds).
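The ACOC estimate reported in the tables can be computed from the iterates alone. A possible implementation following the formula of [18], ACOC_k = ln(d_{k+1}/d_k) / ln(d_k/d_{k-1}) with d_k = |x_{k+1} - x_k|, is sketched below; the synthetic sequence used for illustration is an assumption, not data from the paper.

```python
import math

def acoc(xs):
    """ACOC estimates from a list of iterates; needs at least four entries."""
    d = [abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
    out = []
    for k in range(1, len(d) - 1):
        if d[k - 1] > 0 and d[k] > 0 and d[k + 1] > 0 and d[k] != d[k - 1]:
            out.append(math.log(d[k + 1] / d[k]) / math.log(d[k] / d[k - 1]))
    return out

# Synthetic iterates whose consecutive differences shrink with order
# p = (1 + sqrt(5))/2; the estimate should recover p.
p = (1 + math.sqrt(5)) / 2
diffs = [0.5]
for _ in range(5):
    diffs.append(diffs[-1] ** p)
xs = [0.0]
for d in diffs:
    xs.append(xs[-1] + d)
estimates = acoc(xs)
```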
As shown in Table 1, all the methods obtain good results for the chosen initial points, and the approximated computational order of convergence coincides with the theoretical one. Although schemes SD and SF_1 require the most iterations, they yield the smallest residuals at the last iteration, except for the KM method, which can be considered the best-performing method for approximating the multiple root of f_1. Interestingly, for the initial points chosen, the SD and SF_1 methods require less CPU time than the corresponding comparison methods with and without derivatives, respectively.
As we can see in Table 2 and Table 3, all the methods obtain good results for the chosen initial points. The approximated computational order of convergence coincides with the theoretical one, and the number of iterations needed to satisfy the stopping criterion is almost the same for all methods. In addition, the CPU time is lower for methods SD and SF_1. However, as expected, the accuracy of the schemes proposed in this work is lower, since they have a lower theoretical order of convergence than the methods considered for comparison.

5. Conclusions

In this work, we modify a memory-based method to make it applicable to obtaining multiple roots (without needing to know their multiplicity) while maintaining its order of convergence. The order of convergence of the method is proved theoretically, and this study is complemented with a dynamical analysis of the scheme applied to nonlinear polynomials with a simple root and others with multiplicity m > 1. This analysis shows the dynamical planes for various multiplicities, obtaining vast basins of attraction in all cases, with points that converge to the multiple roots. As a result of this study, stable convergence to both simple and multiple roots is expected.
We then modify the proposed method to obtain SF_β, a derivative-free family of iterative methods with memory that shares the characteristics of the SD method while including a real parameter. This variant of the SD method enhances its flexibility and applicability in situations where derivative information is difficult or costly to obtain.
By running the KM, KMD, gTM, SD, and SF_β methods on several examples, we can conclude that the proposed SD and SF_β methods show excellent performance and are more efficient than the KM, KMD, and gTM methods in terms of runtime and computational cost. From these findings, we conclude that the SD and SF_β methods have several comparative advantages. First, they efficiently handle multiple roots without knowledge of the multiplicity. Moreover, the SF_β variant offers a derivative-free option that broadens its scope of application. Finally, both methods outperform classical approaches in terms of runtime and computational efficiency.
Nevertheless, some limitations must be acknowledged. The present work focused exclusively on problems on the real line, leaving aside the treatment of systems of nonlinear equations or problems in the complex plane. Moreover, while dynamical properties were investigated for the SD method in polynomial cases, a deeper exploration is needed for more general nonlinear functions, including a deeper analysis of the influence of parameter β on the stability of the SF_β iterative family. These limitations open interesting directions for future research. Potential extensions include adapting the SD and SF_β families to systems of nonlinear equations, exploring their applicability to the nonlinear systems arising from the discretization of PDEs, and performing a more comprehensive dynamical study in higher dimensions and complex domains. Such developments would further establish the versatility and robustness of memory-based iterative methods in modern computational mathematics.

Author Contributions

Conceptualization, F.I.C. and N.G.-S.; methodology, F.I.C.; software, J.H.J.; validation, J.H.J.; formal analysis, F.I.C.; investigation, J.H.J.; resources, F.I.C.; data curation, J.H.J.; writing—original draft preparation, J.H.J.; writing—review and editing, N.G.-S.; visualization, N.G.-S.; supervision, F.I.C.; project administration, F.I.C.; funding acquisition, F.I.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by “Ayuda a Primeros Proyectos de Investigación (PAID-06-23) and (PAID-11-24), both from Vicerrectorado de Investigación de la Universitat Politècnica de València (UPV)”.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Bhavna; Bhatia, S. Convergence analysis of optimal iterative family for multiple roots and its applications. J. Math. Chem. 2024, 62, 2007–2038.
  2. Argyros, C.; Argyros, M.I.; Argyros, I.K.; Magreñán, A.A.; Sarría, I. Local and semi-local convergence for Chebyshev two-point-like methods with applications in different fields. J. Comput. Appl. Math. 2023, 426, 115072.
  3. Schröder, E. Über unendlich viele Algorithmen zur Auflösung der Gleichungen. Math. Ann. 1870, 2, 317–365.
  4. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964.
  5. Petković, M.S.; Džunić, J.; Neta, B. Interpolatory multipoint methods with memory for solving nonlinear equations. Appl. Math. Comput. 2011, 218, 2533–2541.
  6. Abdullah, S.; Choubey, N.; Dara, S. An efficient two-point iterative method with memory for solving non-linear equations and its dynamics. J. Appl. Math. Comput. 2024, 70, 285–315.
  7. Cordero, A.; Garrido, N.; Torregrosa, J.R.; Triguero-Navarro, P. Modifying Kurchatov’s method to find multiple roots of nonlinear equations. Appl. Numer. Math. 2024, 198, 11–21.
  8. Kurchatov, V.A. On a method of linear interpolation for the solution of functional equations. Dokl. Akad. Nauk SSSR 1971, 198, 524–526.
  9. King, R.F. A secant method for multiple roots. BIT Numer. Math. 1977, 17, 321–328.
  10. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: Cambridge, MA, USA, 1970.
  11. Beardon, A.F. Iteration of Rational Functions: Complex Analytic Dynamical Systems; Springer: Berlin/Heidelberg, Germany, 1991.
  12. Milnor, J. Dynamics in One Complex Variable: Introductory Lectures; Princeton University Press: Princeton, NJ, USA, 2006.
  13. Campos, B.; Cordero, A.; Torregrosa, J.R.; Vindel, P. A multidimensional dynamical approach to iterative methods with memory. Appl. Math. Comput. 2015, 271, 701–715.
  14. Campos, B.; Cordero, A.; Torregrosa, J.R.; Vindel, P. Stability of King’s family of iterative methods with memory. J. Comput. Appl. Math. 2017, 318, 504–514.
  15. Robinson, R.C. An Introduction to Dynamical Systems: Continuous and Discrete; American Mathematical Society: Providence, RI, USA, 2012.
  16. Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Drawing dynamical and parameters planes of iterative families and methods. Sci. World J. 2013, 2013, 780153.
  17. Cordero, A.; Neta, B.; Torregrosa, J.R. Memorizing Schröder’s method as an efficient strategy for estimating roots of unknown multiplicity. Mathematics 2021, 9, 2570.
  18. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698.
Figure 1. Dynamical plane of the SD method applied to (x - 1)^2 (x + 1).
Figure 2. Dynamical planes of the SD method applied to (x - 1)^m (x + 1): (a) m = 3; (b) m = 5; (c) m = 20.
Table 1. Results for equation f_1(x) = 0.

Method   x_0   x_1   x_2   k    ACOC     |x_{k+1} - x_k|      |f(x_{k+1})|          CPU Time (s)
SD       0.5   0.6   -     10   1.6184   3.165767·10^{-35}    1.173023·10^{-112}    0.00255
KM       0.5   0.6   -     8    2.0002   3.156288·10^{-41}    1.137939·10^{-162}    0.00281
gTM      0.5   0.6   0.7   9    1.8301   3.003245·10^{-26}    8.021029·10^{-95}     0.00296
SF_1     0.5   0.6   -     10   1.6184   6.49214·10^{-37}     8.69093·10^{-118}     0.00301
KMD      0.5   0.6   -     7    2.0226   1.938624·10^{-32}    1.299837·10^{-126}    0.00360
Table 2. Results for equation f_2(x) = 0.

Method   x_0   x_1    x_2   k    ACOC     |x_{k+1} - x_k|      |f(x_{k+1})|          CPU Time (s)
SD       0.1   0.05   -     7    1.6028   7.704388·10^{-36}    3.008738·10^{-285}    0.00248
KM       0.1   0.05   -     6    1.9948   5.700043·10^{-46}    2.835103·10^{-454}    0.00280
gTM      0.1   0.05   0.1   6    1.8071   1.681077·10^{-34}    4.406565·10^{-310}    0.00282
SF_1     0.1   0.05   -     10   1.5675   1.381989·10^{-26}    1.288346·10^{-210}    0.00350
KMD      0.1   0.05   -     9    1.9985   2.495773·10^{-46}    1.018431·10^{-457}    0.00420
Table 3. Results for equation f_3(x) = 0.

Method   x_0   x_1   x_2   k    ACOC     |x_{k+1} - x_k|      |f(x_{k+1})|          CPU Time (s)
SD       1.8   1.7   -     7    1.9866   7.222381·10^{-34}    4.215194·10^{-133}    0.00252
KM       1.8   1.7   -     7    1.9831   8.220072·10^{-41}    6.169903·10^{-161}    0.00315
gTM      1.8   1.7   1.6   8    1.8318   2.697373·10^{-45}    3.503401·10^{-164}    0.00340
SF_1     1.8   1.7   -     8    1.6212   1.109661·10^{-27}    2.570449·10^{-88}     0.00334
KMD      1.8   1.7   -     7    1.9919   8.11058·10^{-40}     1.217645·10^{-156}    0.03518