Article

A Two-Stage Numerical Algorithm for the Simultaneous Extraction of All Zeros of Meromorphic Functions

Faculty of Physics and Technology, University of Plovdiv Paisii Hilendarski, 24 Tzar Asen, 4000 Plovdiv, Bulgaria
*
Author to whom correspondence should be addressed.
AppliedMath 2025, 5(4), 138; https://doi.org/10.3390/appliedmath5040138
Submission received: 11 August 2025 / Revised: 17 September 2025 / Accepted: 26 September 2025 / Published: 6 October 2025

Abstract

In this paper, we present an effective two-stage numerical algorithm for the simultaneous finding of all roots of meromorphic functions in a region within the complex plane. At the first stage, we construct a polynomial with the same roots as the considered function; at the second stage, we apply a method for the simultaneous approximation of its roots. To show the efficiency and applicability of our algorithm, together with its advantages over the classical Newton, Halley and Chebyshev iterative methods, we present three numerical examples, in which we apply it to two test functions and to an important engineering problem.

1. Introduction

Suppose that f : ℂ → ℂ is an arbitrary function. Solving the equation
f(x) = 0        (1)
is one of the main tasks arising from real-world problems. It is well known that iteration methods are among the most efficient tools for solving (1). Undoubtedly, the most famous among them are Newton's method, Halley's method [1] and Chebyshev's method [2]. Convergence analysis and some historical notes about these methods, applied to simple and multiple zeros of analytic functions, can be found in [3,4,5]. However, it can be a very difficult task to find all the roots of (1), or even to detect their number, within a given finite region by applying iterative methods that involve derivatives of f. A simple example is the function f(x) = √x, whose only root is zero, yet the mentioned methods might not find it because its first derivative has a singularity at x = 0.
Polynomials are quite a different case. Iteration methods for polynomial zeros have drawn great interest among mathematicians over the last 70 years (see, e.g., [6,7,8] and the references therein). In particular, a detailed convergence analysis of Newton, Halley and Chebyshev's methods for simple and multiple polynomial zeros has been conducted in the recent papers [9,10,11,12]. Let P(x) = a_0 x^n + a_1 x^{n-1} + ⋯ + a_n be a complex polynomial of degree n ≥ 1. In 1891, Weierstrass [13] offered a different approach to finding the zeros of P, namely to compute all of them at once, i.e., simultaneously. He established and studied the first simultaneous method, which can be defined as follows:
x^{(k+1)} = x^{(k)} - W(x^{(k)}), \quad k = 0, 1, 2, \ldots,
where the Weierstrass iteration function W : D ⊂ ℂ^n → ℂ^n is given by W(x) = (W_1(x), …, W_n(x)) with
W_i(x) = \frac{P(x_i)}{a_0 \prod_{j \ne i} (x_i - x_j)} \quad (i = 1, \ldots, n),
where D denotes the set of all vectors in ℂ^n with pairwise distinct components. The second simultaneous method in the literature is due to the Bulgarian mathematicians Dochev and Byrnev [14], and the third is due to Ehrlich [15] in 1967; it was rediscovered by Börsch-Supan [16] in 1970.
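To make the notation above concrete, here is a minimal Python sketch of the Weierstrass (Durand–Kerner) iteration; the function names, the fixed iteration count and the toy polynomial are our own illustrative choices, not part of the original method.

```python
import numpy as np

def weierstrass_correction(p, x):
    """W_i(x) = P(x_i) / (a_0 * prod_{j != i}(x_i - x_j)) for a polynomial
    given by its coefficients p = [a_0, a_1, ..., a_n] (highest degree first)."""
    W = np.empty(len(x), dtype=complex)
    for i in range(len(x)):
        diff = x[i] - np.delete(x, i)          # pairwise distinct components assumed
        W[i] = np.polyval(p, x[i]) / (p[0] * np.prod(diff))
    return W

def weierstrass_method(p, x0, steps=50):
    """Simultaneous iteration x^(k+1) = x^(k) - W(x^(k))."""
    x = np.array(x0, dtype=complex)
    for _ in range(steps):
        x = x - weierstrass_correction(p, x)
    return x

# toy usage: the zeros of x^3 - 1, started near the three cube roots of unity
print(weierstrass_method([1, 0, 0, -1], [1.1 + 0.1j, -0.4 + 0.9j, -0.5 - 0.8j]))
```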
The approximation of analytic functions by polynomials has a very long history, associated with the works of Taylor, Euler, Lagrange and Newton since the early 18th century. However, a great disadvantage of such an approach is that the zeros of f may bear no relation to those of its approximation P. An obvious example is
f(x) = e^x \quad \text{with} \quad P(x) = \sum_{i=0}^{n} \frac{x^i}{i!},
since e^x has no zeros at all, while its Taylor polynomial P has exactly n of them.
Therefore, to reduce the solution of (1) to the solution of some polynomial equation, many authors have attempted to find a polynomial that has exactly the same zeros as f in some domain within the complex plane (see, e.g., [17,18,19]). In 1995, Tovmasyan and Kosheleva summarized the results in this direction in the following theorem:
Theorem 1
([20] Theorem 1). Let D ⊂ ℂ be an (n + 1)-connected domain with boundary Γ, let the function f : ℂ → ℂ be analytic in D, and let P be a monic polynomial of degree n ≥ 1 with coefficients
a_l = -\frac{1}{l} \sum_{j=0}^{l-1} a_j c_{l-j}, \quad l = 1, \ldots, n \quad (a_0 = 1),
where
c_0 = n \quad \text{and} \quad c_k = \frac{1}{2\pi i} \oint_\Gamma z^k \frac{f'(z)}{f(z)}\, dz, \quad k = 1, 2, \ldots
Then, the zeros of f in D coincide with the zeros of P.
In order to avoid using the derivatives of f, we reformulate this theorem in the following equivalent form (see also [21]):
Theorem 2.
Let the assumptions of Theorem 1 be fulfilled with
c_k = \frac{1}{2\pi i} \left( i\, z^k \operatorname{Arg} f(z) \Big|_\Gamma - k \oint_\Gamma z^{k-1} \ln f(z)\, dz \right), \quad k = 1, 2, \ldots
Then, the zeros of f in D coincide with the ones of P.
Proof. 
The proof follows immediately from Theorem 1 because of the identity
\oint_\Gamma z^k \frac{f'(z)}{f(z)}\, dz = z^k \big( \ln|f(z)| + i \operatorname{Arg} f(z) \big) \Big|_\Gamma - k \oint_\Gamma z^{k-1} \ln f(z)\, dz = i\, z^k \operatorname{Arg} f(z) \Big|_\Gamma - k \oint_\Gamma z^{k-1} \ln f(z)\, dz.
   □
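The sketch below illustrates, under our own assumptions, how the quantities of Theorems 1 and 2 can be approximated on a circular contour: the unwrapped phase of f supplies both the boundary term and the imaginary part of ln f, a plain trapezoidal rule handles the remaining contour integral, and the recurrence for the coefficients a_l is applied afterwards. The circular contour, the sample count and the helper name are our choices; the implementation described in Section 2.2 relies instead on MATLAB's vectorized adaptive quadrature [25].

```python
import numpy as np

def polynomial_from_contour(f, center, radius, m=4096):
    """Approximate the coefficients [1, a_1, ..., a_n] of the monic polynomial P of
    Theorem 2 for a function f analytic on and inside the circle |z - center| = radius
    and zero-free on the circle itself.  Purely illustrative."""
    theta = np.linspace(0.0, 2.0 * np.pi, m + 1)            # closed contour, last node = first
    z = center + radius * np.exp(1j * theta)
    dz = 1j * radius * np.exp(1j * theta)                    # z'(theta)
    w = f(z)
    phase = np.unwrap(np.angle(w))                           # continuous branch of Arg f along Gamma
    logf = np.log(np.abs(w)) + 1j * phase                    # continuous branch of ln f

    n = int(round((phase[-1] - phase[0]) / (2.0 * np.pi)))   # argument principle: number of zeros
    c = [complex(n)]                                         # c_0 = n
    for k in range(1, n + 1):
        boundary = 1j * z[0] ** k * (phase[-1] - phase[0])   # i z^k Arg f(z) |_Gamma
        g = z ** (k - 1) * logf * dz                         # integrand of the remaining integral
        integral = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(theta))   # trapezoidal rule
        c.append((boundary - k * integral) / (2j * np.pi))

    a = [1.0 + 0j]                                           # monic: a_0 = 1
    for l in range(1, n + 1):
        a.append(-sum(a[j] * c[l - j] for j in range(l)) / l)
    return np.array(a)

# toy usage: f has the zeros 0.5 and -0.3 + 0.2i inside the unit circle,
# so the output should be close to [1, -0.2 - 0.2i, -0.15 + 0.1i]
f = lambda z: (z - 0.5) * (z + 0.3 - 0.2j) * np.exp(z)
print(polynomial_from_contour(f, 0.0, 1.0))
```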
Obviously, the main drawback of Theorem 2 is that it pertains only to analytic functions and not to meromorphic ones, which, being a larger class of functions, have many more applications in the natural sciences, engineering and elsewhere. The problem of locating the zeros and poles of meromorphic functions also has a long history. Since 1932, many authors have proposed different numerical techniques to solve this problem; however, most of them are either ill-conditioned or computationally expensive, because they generate orthogonal polynomials for certain bilinear forms, deal with Vandermonde or Hankel matrices, embed additional refinement mechanisms, etc. (see, e.g., [22,23,24] and the references therein).
In this paper, we propose a new two-stage numerical algorithm for the simultaneous determination of all roots of Equation (1) in a region D ⊂ ℂ whenever f is meromorphic in D. Our algorithm unites and refines several well-known techniques into an effective and easily implementable procedure. More precisely, at the first stage, only by tracking the phase of f (Arg f), we find a subdomain D′ ⊂ D in which f is analytic. Then, using Theorem 2, we compute the coefficients of a polynomial P that has the same zeros as f in D. At the second stage, we implement a method for the simultaneous approximation of all the zeros of P. In summary, the main novelties and advantages of our study are as follows:
(i)
We use a new effective empirical method for locating the poles of f only by tracking its phase;
(ii)
Our algorithm may not require the computation of any derivatives, depending on the choice of the method at the second stage;
(iii)
We compute all the zeros of f at once with high accuracy.

2. Description of the Algorithm

In this section, we first provide a general description of our algorithm. Then, we implement it using some particular approaches at certain steps.

2.1. The General Algorithm

Let the function f : ℂ → ℂ be meromorphic in a closed region D ⊂ ℂ with boundary Γ.
  • Stage 1. Take a rectangle containing the domain D and compute ln|f(z)| at every node of a mesh of p × q points. Identifying the points where ln|f(z)| > K, for a preselected real number K (a suitable choice is K > 1), cover the rectangle with squares or circles of side (radius) r, which is preselected depending on the function f: the closer the poles, or the poles and roots, of f are to each other, the smaller r should be chosen. To sift out false poles, we track the change of Arg f(z) on these squares (circles); note that Arg f(z) decreases around a pole (a small code sketch of this phase test is given after the description of the two stages). If more than one pole, or a pole together with roots, is detected in some of the squares, we choose a smaller r and track the change of Arg f(z) on the newly taken squares (circles). This procedure is repeated until we isolate all the poles of f. Then, denoting by D_1, …, D_s the domains containing the poles of f, we obtain a domain
    D′ = D ∖ (D_1 ∪ ⋯ ∪ D_s)
    in which the function f is analytic. Finally, setting Γ′ to be the boundary of D′ and applying Theorem 2, we obtain the coefficients a_1, …, a_n of the corresponding polynomial P.
  • Stage 2. Choose an initial vector x^{(0)} ∈ ℂ^n and a simultaneous method, and apply it to compute all the zeros of P.
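The following sketch of the phase test used in Stage 1 is purely illustrative: on each probing circle it estimates the winding number of f from its unwrapped phase, so a negative count signals at least one pole inside, while any other count at a flagged cell calls for a retry with a smaller radius. The helper names, the sample count and the toy function are our assumptions.

```python
import numpy as np

def winding_number(f, center, radius, m=720):
    """Net change of Arg f / (2*pi) along the circle |z - center| = radius;
    by the argument principle this equals (#zeros - #poles) inside."""
    theta = np.linspace(0.0, 2.0 * np.pi, m + 1)
    phase = np.unwrap(np.angle(f(center + radius * np.exp(1j * theta))))
    return int(round((phase[-1] - phase[0]) / (2.0 * np.pi)))

def classify_cells(f, candidates, r):
    """Classify candidate centres flagged by ln|f| > K: a negative winding number
    means Arg f decreases around the circle, i.e. at least one pole lies inside;
    otherwise the cell is ambiguous (e.g. a pole together with zeros) and should
    be probed again with a smaller radius."""
    poles, refine = [], []
    for c in candidates:
        (poles if winding_number(f, c, r) < 0 else refine).append(c)
    return poles, refine

# toy usage: a simple pole at 0.3 is correctly reported as a pole
f = lambda z: (z + 0.5) / (z - 0.3)
print(classify_cells(f, [0.3 + 0.0j], 0.05))
```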

2.2. Our Implementation

  • Stage 1. In our implementation, we consider D as a square with side R, meshed by 8000 × 8000 points, and cover it with circles of radius r, which differs between the examples. Identifying the nodes with ln|f(z)| > 1.1, on each of the circles we apply Cauchy's argument principle and track the continuity of Arg f(z) via MATLAB's unwrap(angle(f)) function in order to detect the number of poles inside it. Thus, we extract the domain D′ in which the function f is analytic. Then, computing the integrals in Theorem 2 by the vectorized adaptive quadrature of [25], we obtain the coefficients of the polynomial P.
  • Stage 2. Using the second coefficient a_1 and the degree n of the polynomial P, we generate Aberth's initial approximation (see [26]) x^{(0)} ∈ ℂ^n, which is given as follows:
    x_\nu^{(0)} = -\frac{a_1}{n} + r_0 \exp(i\theta_\nu), \quad \theta_\nu = \frac{\pi}{n}\left(2\nu - \frac{3}{2}\right), \quad \nu = 1, \ldots, n,        (5)
    where r_0 = R/2. Then, in order to avoid any use of derivatives, we use the following family of cubically convergent simultaneous methods, constructed and studied in [27,28] (a code sketch of this stage is given after this description):
    x^{(k+1)} = T_\alpha(x^{(k)}), \quad k = 0, 1, 2, \ldots,        (6)
    with the iteration function T α being defined by
    T_\alpha(x) = (T_1(x), \ldots, T_n(x)) \quad \text{with} \quad T_i(x) = x_i - W_i(x)\, \frac{1 + (\alpha - 1) \sum_{j \ne i} \frac{W_j(x)}{x_i - x_j}}{1 + \alpha \sum_{j \ne i} \frac{W_j(x)}{x_i - x_j}},
    where α ∈ ℂ, while W is the above-defined Weierstrass correction.
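The following Python sketch puts the two ingredients of our Stage 2 together: Aberth's starting vector (5) and the iteration (6). The function names, the fixed number of iterations and the toy polynomial are our own illustrative choices; the sketch assumes a monic polynomial given by its coefficient list.

```python
import numpy as np

def weierstrass(p, x):
    """Weierstrass corrections W_i(x) for the monic polynomial p = [1, a_1, ..., a_n]."""
    return np.array([np.polyval(p, xi) / np.prod(xi - np.delete(x, i))
                     for i, xi in enumerate(x)])

def aberth_start(a1, n, r0):
    """Aberth's initial approximations (5): n points on the circle of radius r0
    centred at -a_1/n, where a_1 is the coefficient of x^(n-1)."""
    nu = np.arange(1, n + 1)
    return -a1 / n + r0 * np.exp(1j * (np.pi / n) * (2 * nu - 1.5))

def family_step(p, x, alpha=1.0):
    """One iteration x -> T_alpha(x) of the family (6); alpha = 1 gives Ehrlich's
    (Borsch-Supan's) method."""
    W = weierstrass(p, x)
    T = np.empty_like(x)
    for i in range(len(x)):
        s = np.sum(np.delete(W, i) / (x[i] - np.delete(x, i)))
        T[i] = x[i] - W[i] * (1 + (alpha - 1) * s) / (1 + alpha * s)
    return T

def simultaneous_solve(p, alpha=1.0, r0=1.0, steps=60):
    """Stage 2 for a monic polynomial p = [1, a_1, ..., a_n]."""
    x = aberth_start(p[1], len(p) - 1, r0)
    for _ in range(steps):
        x = family_step(p, x, alpha)
    return x

# toy usage: the monic cubic (x - 1)(x - 2)(x + 1.5) = x^3 - 1.5x^2 - 2.5x + 3;
# expected output (up to rounding): [-1.5, 1, 2]
print(np.sort_complex(simultaneous_solve([1.0, -1.5, -2.5, 3.0], r0=2.0)))
```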
Remark 1.
At the second stage, different ways of choosing the initial guess can be used, e.g., taking its coordinates randomly from D (see, for example, [29] and the references therein).

3. Numerical Examples

In this section, we apply our algorithm to some functions that cause problems for the classical iteration methods of Newton, Halley and Chebyshev. In order to obtain higher precision in extracting the poles and the roots of the considered functions, we use a mesh of 8000 × 8000 points and origin-centered squares with sides 4 and 2.6 for Example 1 and Example 2, respectively, while a square with side 0.6 centered at 0.9 is used in Example 3. At the second stage, we apply the family (6) with α = 1, which is, in fact, the famous Ehrlich method [15], also known as Börsch-Supan's method [16], and we implement the following a posteriori error estimate ([29], Corollary 4.2):
  • A posteriori error estimate. Let P be a complex polynomial of degree n ≥ 2 and let (x^{(k)})_{k=0}^∞ be a sequence of vectors in ℂ^n with pairwise distinct coordinates. Then, for every k ≥ 0, there is a vector ξ ∈ ℂ^n of the roots of P such that
    E(x^{(k)}) = \left\| \frac{W(x^{(k)})}{d(x^{(k)})} \right\| < \tau \quad \Longrightarrow \quad \| x^{(k)} - \xi \| \le \varepsilon_k = \Phi\big(E(x^{(k)})\big)\, \| W(x^{(k)}) \|,
    where the division W/d is componentwise with d_i(x) = min_{j≠i} |x_i − x_j|, and the number τ and the function Φ : [0, τ] → ℝ₊ are defined by
    \tau = \frac{1}{\big(1 + \sqrt{n-1}\,\big)^2} \quad \text{and} \quad \Phi(t) = \frac{2}{1 - (n-2)t + \sqrt{(1 - (n-2)t)^2 - 4t}}.
Using this error estimate, we apply the following stopping criterion:
E(x^{(k)}) < \tau \quad \text{and} \quad \varepsilon_k < 10^{-10}.        (8)
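A sketch of how the above estimate and the criterion (8) can be coded is given below. For concreteness we take the norm to be the max-norm and d(x) to be the vector of distances d_i(x) = min_{j≠i} |x_i − x_j|; the helper names and the tolerance argument are our assumptions and are not part of [29].

```python
import numpy as np

def error_estimate(p, x):
    """A posteriori estimate in the spirit of [29], Corollary 4.2, for the monic
    polynomial p = [1, a_1, ..., a_n] and the current approximations x
    (pairwise distinct); the max-norm is used here."""
    n = len(x)
    W = np.array([np.polyval(p, xi) / np.prod(xi - np.delete(x, i))
                  for i, xi in enumerate(x)])
    d = np.array([np.min(np.abs(xi - np.delete(x, i))) for i, xi in enumerate(x)])
    E = np.max(np.abs(W) / d)                                 # E(x) = || W(x) / d(x) ||
    tau = 1.0 / (1.0 + np.sqrt(n - 1)) ** 2
    if E < tau:
        t = E
        Phi = 2.0 / (1.0 - (n - 2) * t + np.sqrt((1.0 - (n - 2) * t) ** 2 - 4.0 * t))
        eps = Phi * np.max(np.abs(W))                         # guaranteed bound on || x - xi ||
    else:
        eps = np.inf                                          # the bound does not apply yet
    return E, tau, eps

def converged(p, x, tol=1e-10):
    """Stopping criterion (8): E(x) < tau and eps < tol."""
    E, tau, eps = error_estimate(p, x)
    return E < tau and eps < tol
```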
Moreover, to show the convergence behavior of Newton, Halley and Chebyshev's methods when applied to the chosen functions, we define the quantity ϵ_k = |z_k − z_{k+1}| for all k ≥ 0 and use the stopping criterion ϵ_k < 10^{−10} (see Table 1).
In the tables below, for each of the examples, we give the values of k, E(x^{(k)}), τ, ε_k, ϵ_k and ε_{k+1}. All numbers in the remainder of the paper are given to at least six decimal digits.
In our first example, we take a function from [30] that causes many problems for the famous Newton, Halley and Chebyshev’s methods.
Example 1
([30] Example 3). Consider the following function:
f_1(z) = \frac{1}{z^2 (z - 1)(z^2 + 9)} + z \sin z + e^{-3z} + 4
with poles 0 and 1 and with roots −0.163231 ± 1.778842i, −0.349178 ± 1.194062i, 0.978436, 0.169748 and −0.133271 in the circle |z| ≤ 2.
At the first stage, we obtain the corresponding polynomial of f 1 as follows:
P_1(x) = x^7 + 0.009906x^6 + 3.939586x^5 − 2.271493x^4 + 2.251773x^3 − 4.866621x^2 + 0.125048x + 0.109315
and we use Aberth's initial vector (5) with n = 7, a_1 = 0.009906 and r_0 = 2.
Despite the rough initial guess, one can see from Table 2 that the stopping criterion (8) is satisfied at the sixth iteration with an error estimate less than 10^{−11}. At the next iteration, the roots are found with an accuracy of 10^{−34}.
We have to note that, when trying to find the zeros of f_1, Chebyshev's method diverges when started from the initial point 0.999990 + 0.044504i (see Table 1), while started from 1.948440 + 0.445041i it converges to a root outside the circle. As for Newton and Halley's methods, it is seen from Table 1 that Halley's method 'jumps out of the circle' when started from the initial guess 0.000669 − 0.445041i, despite its closeness to the roots inside, and Newton's method finds the root −0.349178 + 1.194062i instead of one closer to the initial guess. In fact, all three methods encounter difficulties in finding the root 0.978436, perhaps because of its closeness to a pole.
In Figure 1a,b, the values of ln|f_1(z)| and the trajectories of the approximations to the roots are shown. On the left, the white stars depict the zeros, while the black ones depict the poles of f_1. On the right, the blue points depict the initial guess, and the red ones the roots.
To show the high precision of our algorithm, for the next example, we constructed a function with very close poles and zeros.
Example 2.
We consider the function
f_2(z) = \frac{(z + 0.001)(z - 1.001 - 0.2i)\, \sin z}{e^z - e^{0.0005}}
in the circle |z| ≤ 1.3.
We obtain its corresponding polynomial as
P_2(x) = x^3 − (1 + 0.2i)x^2 − (0.001 + 0.0002i)x + 5.991 × 10^{−11} − 3.765 × 10^{−10} i
and we use Aberth's initial approximation (5) with r_0 = 1.3.
One can see from Table 2 that the stopping criterion (8) is satisfied at the tenth iteration with an error estimate less than 10^{−13}, and at the eleventh step all the roots are found with an accuracy of 10^{−36}. On the other hand, both Halley and Chebyshev's methods run into great difficulties in finding the root 0 of f_2. One can see from Table 1 that, regardless of how close the initial guess is to 0, the three methods converge to another root.
As in the previous example, in Figure 2a,b, we plot the values of ln|f_2(z)| and the trajectories of the approximations to the roots.
In the last example, we consider a function that models an important chemical engineering problem, studied in ([31], Problem 7). It is worth noting that the mentioned authors, being unable to solve the problem directly due to its singularities, transformed it into a more suitable transcendental function. In our example, a solution of the original problem is given.
Example 3
(Fractional conversion in a reactor [5,31]). The fractional chemical conversion in a reactor is described by the following function:
f_3(z) = \frac{z}{1 - z} - 5 \ln \frac{0.4(1 - z)}{0.4 - 0.5z} + 4.45977,
where z is the fractional conversion of the limiting reactant. We consider f_3 in the circle with center 0.9 and radius 0.3, which contains its zeros 0.757396 and 1.098983.
At the first stage, we obtain the following corresponding polynomial:
P_3(x) = x^2 − 1.856380x + 0.832366
and then we use Aberth's initial approximation (5) with r_0 = 0.3.
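As an illustration only (and not the authors' code), the coefficients of P_3 can be passed to the simultaneous_solve sketch from Section 2.2; since P_3 is a quadratic, the output is easy to check against the quadratic formula.

```python
# hypothetical reuse of the simultaneous_solve sketch from Section 2.2
p3 = [1.0, -1.856380, 0.832366]            # monic coefficients of P_3
print(sorted(simultaneous_solve(p3, alpha=1.0, r0=0.3).real))
# the quadratic formula gives x = (1.856380 ± sqrt(1.856380^2 - 4*0.832366)) / 2,
# i.e. approximately 0.757396 and 1.098984, matching the zeros reported above
```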
One can see from Table 2 that the stopping criterion (8) is satisfied at the fourth iteration with an error estimate less than 10^{−24}, and at the next step all the roots are found with an accuracy of 10^{−75}. For Chebyshev's method, and even for Newton's, there are infinitely many divergent points in the considered region. An example is the point 0.703493 + 0.042135i, despite its closeness to one of the roots (see Table 1).
The values of ln|f_3(z)| and the trajectories of the approximations to the roots are plotted in Figure 3a and Figure 3b, respectively.

4. Conclusions

We have united and refined several previous approaches into a new, effective and easily implementable two-stage numerical algorithm for the simultaneous extraction of all the zeros of a meromorphic function f in a given domain within the complex plane. Our method involves some simple techniques for excluding the poles and finding the roots of f without the need to compute any of its derivatives. To show the advantages and applicability of our method, we have conducted three examples, in which we apply it to two test functions and to a chemical engineering problem for which the famous Newton, Halley and Chebyshev iterative methods fail or run into great difficulties in finding some of the roots. Our method faces no difficulties in extracting all the roots of ill-conditioned functions with high precision, although the manual adjustment of the parameters at the first stage causes some inconvenience. Our approach could be further extended to finding the roots of multivariate functions and systems of nonlinear equations.

Author Contributions

Conceptualization, I.K.I.; formal analysis, S.I.I. and I.K.I.; investigation, I.K.I. and S.I.I.; methodology, I.K.I. and S.I.I.; software, I.K.I. The authors contributed equally to the writing. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Halley, E. A new, exact, and easy method of finding the roots of any equations generally, and that without any previous reduction. Philos. Trans. R. Soc. 1694, 18, 136–148. (In Latin) [Google Scholar] [CrossRef]
  2. Chebyshev, P. Complete Works of P.L. Chebyshev; USSR Academy of Sciences: Moscow, Russia, 1973; pp. 7–25. Available online: http://books.e-heritage.ru/book/10079542 (accessed on 1 January 2020). (In Russian)
  3. Proinov, P.D. General local convergence theory for a class of iterative processes and its applications to Newton’s method. J. Complexity 2009, 25, 38–62. [Google Scholar] [CrossRef]
  4. Ivanov, S.I. A general approach to the study of the convergence of Picard iteration with an application to Halley’s method for multiple zeros of analytic functions. J. Math. Anal. Appl. 2022, 513, 126238. [Google Scholar] [CrossRef]
  5. Kostadinova, S.G.; Ivanov, S.I. Chebyshev’s Method for Multiple Zeros of Analytic Functions: Convergence, Dynamics and Real-World Applications. Mathematics 2024, 12, 3043. [Google Scholar] [CrossRef]
  6. Pan, V.Y. Solving a polynomial equation: Some history and recent progress. SIAM Rev. 1997, 39, 187–220. [Google Scholar] [CrossRef]
  7. Petković, M. Point Estimation of Root Finding Methods; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 2008; Volume 1933. [Google Scholar] [CrossRef]
  8. McNamee, J.M.; Pan, V. Numerical Methods for Roots of Polynomials. Part II; Studies in Computational Mathematics; Elsevier: Amsterdam, The Netherlands, 2013; Volume 16. [Google Scholar]
  9. Proinov, P.D.; Ivanov, S.I. On the convergence of Halley’s method for multiple polynomial zeros. Mediterr. J. Math. 2015, 12, 555–572. [Google Scholar] [CrossRef]
  10. Proinov, P.D.; Ivanov, S.I. On the convergence of Halley’s method for simultaneous computation of polynomial zeros. J. Numer. Math. 2015, 23, 379–394. [Google Scholar] [CrossRef]
  11. Ivanov, S.I. On the convergence of Chebyshev’s method for multiple polynomial zeros. Results Math. 2016, 69, 93–103. [Google Scholar] [CrossRef]
  12. Ivanov, S.I. Unified Convergence Analysis of Chebyshev-Halley Methods for Multiple Polynomial Zeros. Mathematics 2022, 10, 135. [Google Scholar] [CrossRef]
  13. Weierstrass, K. Neuer Beweis des Satzes, dass jede ganze rationale Function einer Veränderlichen dargestellt werden kann als ein Product aus linearen Functionen derselben Veränderlichen. Sitzungsber. Königl. Preuss. Akad. Wiss. Berlin 1891, II, 1085–1101. Available online: https://biodiversitylibrary.org/page/29263197 (accessed on 1 January 2020).
  14. Dochev, K.; Byrnev, P. Certain modifications of Newton’s method for the approximate solution of algebraic equations. USSR Comput. Math. Math. Phys. 1964, 4, 174–182. [Google Scholar] [CrossRef]
  15. Ehrlich, L. A modified Newton method for polynomials. Commun. ACM 1967, 10, 107–108. [Google Scholar] [CrossRef]
  16. Börsch-Supan, W. Residuenabschätzung für Polynom-Nullstellen mittels Lagrange Interpolation. Numer. Math. 1970, 14, 287–296. [Google Scholar] [CrossRef]
  17. Lehmer, D.H. The Graeffe process as applied to power series. MTOAC 1945, 1, 377–383. [Google Scholar] [CrossRef]
  18. Delves, L.M.; Lyness, J.N. A Numerical Method for Locating the Zeros of an Analytic Function. Math. Comput. 1967, 21, 543–560. [Google Scholar] [CrossRef]
  19. Davies, B. Locating the zeros of an analytic function. Math. Comput. 1986, 66, 36–49. [Google Scholar] [CrossRef]
  20. Tovmasyan, N.; Kosheleva, T. On a certain method for finding zeros of analytic functions and its application to solving boundary value problems. Sib. Math. J. 1995, 36, 988–998. [Google Scholar] [CrossRef]
  21. Giri, D.D.V. Complex Variable Theorems for Finding Zeroes and Poles of Transcendental Functions. J. Phys. Conf. Ser. 2019, 1334, 012005. [Google Scholar] [CrossRef]
  22. Chen, H. On locating the zeros and poles of a meromorphic function. J. Comput. Appl. Math. 2022, 402, 113796. [Google Scholar] [CrossRef]
  23. Dziedziewicz, S.; Warecka, M.; Lech, R.; Kowalczyk, P. Self-adaptive mesh generator for global complex roots and poles finding algorithm. IEEE Trans. Microw. Theory Tech. 2023, 71, 2854–2863. [Google Scholar] [CrossRef]
  24. Frantsuzov, V.A.; Artemyev, A.V. A global argument-based algorithm for finding complex zeros and poles to investigate plasma kinetic instabilities. J. Comput. Appl. Math. 2025, 456, 116217. [Google Scholar] [CrossRef]
  25. Shampine, L. Vectorized adaptive quadrature in MATLAB. J. Comput. Appl. Math. 2008, 211, 131–140. [Google Scholar] [CrossRef]
  26. Aberth, O. Iteration Methods for Finding all Zeros of a Polynomial Simultaneously. Math. Comp. 1973, 27, 339–344. [Google Scholar] [CrossRef]
  27. Ivanov, S.I. A unified semilocal convergence analysis of a family of iterative algorithms for computing all zeros of a polynomial simultaneously. Numer. Algorithms 2017, 75, 1193–1204. [Google Scholar] [CrossRef]
  28. Pavkov, T.M.; Kabadzhov, V.G.; Ivanov, I.K.; Ivanov, S.I. Local convergence analysis of a one parameter family of simultaneous methods with applications to real-world problems. Algorithms 2023, 16, 103. [Google Scholar] [CrossRef]
  29. Proinov, P.D.; Ivanov, S.I. A new family of Sakurai-Torii-Sugiura type iterative methods with high order of convergence. J. Comput. Appl. Math. 2024, 436, 115428. [Google Scholar] [CrossRef]
  30. Kravanja, P.; Barel, M.V.; Haegemans, A. On computing zeros and poles of meromorphic functions. In Computational Methods and Function Theory; World Scientific Publishing Co. Pte Ltd.: Singapore, 1999; Volume 1997, pp. 359–369. [Google Scholar] [CrossRef]
  31. Gritton, K.S.; Seader, J.; Lin, W.J. Global homotopy continuation procedures for seeking all roots of a nonlinear equation. Comput. Chem. Eng. 2001, 25, 1003–1019. [Google Scholar] [CrossRef]
Figure 1. Graph of the two stages for Example 1.
Figure 2. Graph of the two stages for Example 2.
Figure 3. Graph of the two stages for Example 3.
Table 1. Convergence of Newton, Halley and Chebyshev's methods for Examples 1–3.

Function | Method    | Initial Guess        | Root                  | k  | ϵ_k
f_1      | Newton    | 0.999990 + 0.044504i | −0.349178 + 1.194062i | 14 | 3.852 × 10^{−13}
f_1      | Halley    | 0.000669 − 0.445041i | 1.413711 − 3.573154i  | 6  | 1.872 × 10^{−20}
f_1      | Chebyshev | 0.999990 + 0.044504i | The method diverges   |    |
f_2      | Newton    | 0.000001 − 0.001136i | −0.001                | 5  | 1.643 × 10^{−14}
f_2      | Halley    | 0.000001 − 0.001136i | −0.001                | 4  | 1.135 × 10^{−12}
f_2      | Chebyshev | 0.000001 − 0.001136i | −0.001                | 4  | 4.644 × 10^{−14}
f_3      | Newton    | 0.703493 + 0.042135i | The method diverges   |    |
f_3      | Halley    | 0.703493 + 0.042135i | 0.757396              | 3  | 1.323 × 10^{−22}
f_3      | Chebyshev | 0.703493 + 0.042135i | The method diverges   |    |
Table 2. Numerical data for Examples 1–3.

Polynomial | k  | E(x^{(k)})       | τ        | ε_k              | ε_{k+1}
P_1        | 6  | 8.77 × 10^{−12}  | 0.084040 | 5.384 × 10^{−12} | 2.875 × 10^{−34}
P_2        | 10 | 1.037 × 10^{−11} | 0.171573 | 1.036 × 10^{−14} | 1.116 × 10^{−36}
P_3        | 4  | 1.481 × 10^{−25} | 0.25     | 5.060 × 10^{−26} | 1.110 × 10^{−75}