Mathematics
  • Article
  • Open Access

14 December 2025

Shared-Pole Carathéodory–Fejér Approximations for Linear Combinations of φ-Functions

Department of Mathematics, King Khalid University, Abha 61421, Saudi Arabia
This article belongs to the Special Issue Numerical Methods for Scientific Computing

Abstract

We develop a shared-denominator Carathéodory–Fejér (CF) method for efficiently evaluating linear combinations of φ-functions for matrices whose spectrum lies on the negative real axis, as required in exponential integrators for large stiff ODE systems. The entire family is approximated with a single set of poles (a common denominator). The shared pole set is obtained by assembling a stacked Hankel matrix from Chebyshev boundary data for all target functions and computing a single SVD; the zeros of the associated singular-vector polynomial, mapped via the standard CF slit transform, yield the poles. With the poles fixed, per-function residues and constants are recovered by a robust least squares fit on a suitable grid of the negative real axis. For any linear combination of resolvent operators applied to right-hand sides, the evaluation reduces to one shifted linear solve per pole with a single combined right-hand side, so the dominant cost matches that of computing a single φ-function action. Numerical experiments indicate geometric convergence at a rate consistent with Halphen's constant, and for highly stiff problems our algorithm outperforms existing Taylor and Krylov polynomial-based algorithms.

1. Introduction

Over the last two decades, the φ -functions have become central objects in the design of exponential integrators for stiff systems; see, e.g., the survey by Hochbruck and Ostermann [1], the review by Minchev and Wright [2], and the references therein. Typical target problems include semilinear diffusion–reaction equations, Schrödinger-type equations, advection–diffusion–reaction systems, and more general evolution equations obtained by spatial discretization of parabolic or highly oscillatory PDEs in physics and engineering leading to a system of ODEs:
$u'(t) = A u(t) + g(u(t), t), \qquad u(t_0) = u_0, \qquad u(t) \in \mathbb{C}^d,$
where $A \in \mathbb{C}^{d \times d}$ is obtained from the spatial discretization and g is a nonlinear vector function. In such applications, the matrix A is usually large and sparse, and in many cases it is negative semidefinite.
Exponential integrators have emerged as a successful class of numerical methods for systems of ODEs. A broad class of exponential integrators reduces each stage to a linear combination of φ –function actions of the form (see, e.g., [1,3,4,5,6,7,8,9,10]):
$w = \sum_{j=0}^{p} \varphi_j(A) v_j.$
For a scalar argument z C , the functions φ k admit a convenient integral representation
$\varphi_0(z) = e^z, \qquad \varphi_k(z) = \frac{1}{(k-1)!} \int_0^1 e^{(1-\tau) z} \, \tau^{k-1} \, d\tau, \quad k \ge 1,$
which, by the holomorphic functional calculus, extends directly to matrices:
$\varphi_k(A) = \frac{1}{(k-1)!} \int_0^1 e^{(1-\tau) A} \, \tau^{k-1} \, d\tau, \quad k \ge 1.$
Equivalently, one can use the power series representation $\varphi_k(A) = \sum_{m=0}^{\infty} \frac{A^m}{(m+k)!}$.
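To make the scalar definitions concrete, here is a small Python sketch (illustrative only; the paper's codes are in MATLAB) that evaluates φ-functions both by the truncated power series and by the recurrence $\varphi_{k+1}(z) = (\varphi_k(z) - 1/k!)/z$, which follows directly from the series; the function names are ours.

```python
import math

def phi_series(k, z, terms=60):
    """phi_k(z) via the truncated power series sum_m z^m / (m + k)!."""
    return sum(z**m / math.factorial(m + k) for m in range(terms))

def phi_recurrence(p, z):
    """phi_0(z), ..., phi_p(z) via phi_{k+1}(z) = (phi_k(z) - 1/k!)/z.
    The subtraction is cancellation-prone for |z| << 1, where the series
    (or a scaling strategy) is preferable."""
    vals = [math.exp(z)]
    for k in range(p):
        vals.append((vals[k] - 1.0 / math.factorial(k)) / z)
    return vals
```

For moderate arguments the two agree to machine precision; for example, `phi_recurrence(3, -2.0)[1]` matches the closed form $\varphi_1(-2) = (e^{-2} - 1)/(-2)$.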
As an illustration of the pivotal role of the φ-functions in exponential integrators, consider the problem (1) with step size $h > 0$. The classical fourth-order exponential time differencing Runge–Kutta method (ETDRK4) of Cox and Matthews [9], modified later by Kassam and Trefethen [11] for stability, reads as follows. Given $u_n \approx u(t_n)$, define the stage values
$a_n = e^{\frac{h}{2}A} u_n + A^{-1}\big(e^{\frac{h}{2}A} - I\big)\, g(u_n, t_n),$
$b_n = e^{\frac{h}{2}A} u_n + A^{-1}\big(e^{\frac{h}{2}A} - I\big)\, g(a_n, t_n + \tfrac{h}{2}),$
$c_n = e^{\frac{h}{2}A} a_n + A^{-1}\big(e^{\frac{h}{2}A} - I\big)\big(2 g(b_n, t_n + \tfrac{h}{2}) - g(u_n, t_n)\big).$
Then, the step from t n to t n + 1 = t n + h is given by
$u_{n+1} = e^{hA} u_n + h^{-2} A^{-3} \Big\{ \big[ -4I - hA + e^{hA}\big(4I - 3hA + (hA)^2\big) \big]\, g(u_n, t_n) + 2\big[ 2I + hA + e^{hA}(-2I + hA) \big]\big( g(a_n, t_n + \tfrac{h}{2}) + g(b_n, t_n + \tfrac{h}{2}) \big) + \big[ -4I - 3hA - (hA)^2 + e^{hA}(4I - hA) \big]\, g(c_n, t_n + h) \Big\}.$
Using the fact that ([12], Section 10.7.4)
$\varphi_0(z) = e^z, \qquad \varphi_1(z) = \frac{e^z - 1}{z}, \qquad \varphi_2(z) = \frac{e^z - 1 - z}{z^2}, \qquad \varphi_3(z) = \frac{e^z - 1 - z - \tfrac{1}{2}z^2}{z^3},$
the ETDRK4 scheme can be written in the form
$a_n = \varphi_0\big(\tfrac{h}{2}A\big) u_n + \tfrac{h}{2}\, \varphi_1\big(\tfrac{h}{2}A\big)\, g(u_n, t_n),$
$b_n = \varphi_0\big(\tfrac{h}{2}A\big) u_n + \tfrac{h}{2}\, \varphi_1\big(\tfrac{h}{2}A\big)\, g(a_n, t_n + \tfrac{h}{2}),$
$c_n = \varphi_0\big(\tfrac{h}{2}A\big) a_n + \tfrac{h}{2}\, \varphi_1\big(\tfrac{h}{2}A\big)\big( 2 g(b_n, t_n + \tfrac{h}{2}) - g(u_n, t_n) \big).$
The step from t n to t n + 1 = t n + h is then given by
$u_{n+1} = \varphi_0(hA) u_n + h\, \varphi_1(hA)\, g(u_n, t_n) + h\, \varphi_2(hA)\big[ -3 g(u_n, t_n) + 2 g(a_n, t_n + \tfrac{h}{2}) + 2 g(b_n, t_n + \tfrac{h}{2}) - g(c_n, t_n + h) \big] + 4h\, \varphi_3(hA)\big[ g(u_n, t_n) - g(a_n, t_n + \tfrac{h}{2}) - g(b_n, t_n + \tfrac{h}{2}) + g(c_n, t_n + h) \big].$
This formulation is algebraically equivalent to the original ETDRK4 scheme of Cox and Matthews, but all matrix coefficients are expressed as linear combinations of the φ -functions φ j , j = 0 , 1 , 2 , 3 , making the scheme stable and less prone to subtractive cancellation [11].
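As a concrete, easily checked illustration, the φ-form of ETDRK4 can be exercised on a scalar problem $u' = \lambda u + g(u, t)$. For constant forcing $g \equiv c$ the stage weights of $\varphi_2$ and $\varphi_3$ cancel, and the scheme reproduces the variation-of-constants solution $u(h) = \varphi_0(h\lambda) u_0 + h\, \varphi_1(h\lambda)\, c$ exactly, which makes a convenient unit test. The Python sketch below is ours, with the φ-functions evaluated by truncated series.

```python
import math

def phi(k, z, terms=60):
    # scalar phi_k via the power series (adequate for moderate |z|)
    return sum(z**m / math.factorial(m + k) for m in range(terms))

def etdrk4_step(lam, u, t, h, g):
    """One ETDRK4 step for the scalar problem u' = lam*u + g(u, t),
    written with the phi-functions as in the text."""
    e2, p2 = phi(0, h*lam/2), phi(1, h*lam/2)
    a = e2*u + (h/2)*p2*g(u, t)
    b = e2*u + (h/2)*p2*g(a, t + h/2)
    c = e2*a + (h/2)*p2*(2*g(b, t + h/2) - g(u, t))
    gu, ga, gb, gc = g(u, t), g(a, t + h/2), g(b, t + h/2), g(c, t + h)
    return (phi(0, h*lam)*u + h*phi(1, h*lam)*gu
            + h*phi(2, h*lam)*(-3*gu + 2*ga + 2*gb - gc)
            + 4*h*phi(3, h*lam)*(gu - ga - gb + gc))
```

With $g \equiv 0$ the step reduces to multiplication by $\varphi_0(h\lambda) = e^{h\lambda}$, as it must.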
The demand for computing combinations of the form (2) has led to a substantial body of work devoted specifically to the numerical evaluation of matrix φ-functions. Niesen and Wright [8] proposed a Krylov subspace algorithm for computing $\varphi_k(A)v$ that is now widely used in exponential integrator codes. Their algorithm has been improved and extended to several Krylov-based algorithms that simultaneously evaluate several linear combinations of the form (2); see, e.g., Luan et al. [13], Gaudreault et al. [5], and Caliari et al. [14]. Recently, Al-Mohy [3] proposed an algorithm based on the Taylor series that simultaneously calculates several linear combinations of the form (2). For the implementation of rational Krylov subspaces, see, e.g., Moret [15], Bergermann and Stoll [16], and the references therein.
For φ-functions of medium-sized matrices, Berland, Skaflestad, and Wright [17] developed the expint MATLAB package, emphasizing that the stability and efficiency of exponential integrators hinge on the accurate evaluation of the underlying φ-functions. A more recent contribution is the scaling-and-recovering algorithm of Al-Mohy and Liu [18], which extends the work of Al-Mohy and Higham [19] for the matrix exponential.
In this context, our goal in the present work is to develop a CF-based rational approximation framework that exploits a shared set of poles for the family $\{\varphi_j\}_{j=0}^{p}$. By constructing near-best scalar rational approximants with a common denominator on the negative real axis, and then lifting them to the matrix level, we obtain an efficient mechanism for evaluating general linear combinations of the form (2) using only one set of shifted factorizations $(A - \theta_\ell I)$ across all indices j and all vectors $v_j$.
In this manuscript, we focus on approximating the linear combination (2) simultaneously by constructing a shared-pole CF rational approximation for each φ j of the form [20]
$r_j(x) = r^{(j)} + \sum_{\ell=1}^{n} \frac{\eta_\ell^{(j)}}{x - \theta_\ell}, \qquad x \in \mathbb{R}^- := (-\infty, 0], \quad j = 0, \ldots, p,$
where the pole set $\{\theta_\ell\}_{\ell=1}^{n}$ is common to all j, while the residues $\eta_\ell^{(j)}$ and constants $r^{(j)}$ depend on j. Equivalently,
$r_j(x) = \frac{p_{j,n}(x)}{q_n(x)}, \qquad q_n(x) = \prod_{\ell=1}^{n} (x - \theta_\ell),$
so that all $r_j$ share the same denominator $q_n$ and differ only in their numerators $p_{j,n}$.
The CF approach proposed by Trefethen [21,22] and Trefethen and Gutknecht [23] constructs near-best real rational approximants to scalar functions from boundary data using singular structure of a Hankel matrix. Its roots trace back to the early 20th-century work of Carathéodory and Fejér on the relation between the extrema of harmonic functions and their coefficients [24]; an extensive historical review is given in [23]. The use of CF approximants for matrix functions goes back to Trefethen, Weideman, and Schmelzer ([20], Section 4) for the matrix exponential. Schmelzer and Trefethen [25] subsequently used CF approximants to evaluate actions φ j ( A ) v , typically with distinct pole sets for each j. They also advocate a common-pole strategy exploiting block-matrix identities among the φ -functions ([25], Section 4), yielding rational approximants that share a single denominator and thereby allowing reuse of the same shifted factorizations A θ I across multiple right-hand sides. They did not develop a general framework for arbitrary linear combinations of the form (2) with heterogeneous right-hand sides, which we provide below. Moreover, they explicitly remark that these approximants are far from optimal.
A key advantage of our approach is that the required degree of the CF rational approximants (and hence the attainable accuracy, up to the conditioning of the problem) tends to be independent of the spectral radius of A: the same shared pole set yields geometric decay uniformly on $\mathbb{R}^-$, so a large spectral radius does not force a higher degree n. By contrast, algorithms based on Taylor series, like that of Al-Mohy [3], and those based on standard Krylov subspaces, like [5,8,14], typically require degrees that grow with $\|A\|$ [16].
This paper is organized as follows: In Section 2, we show how the shared poles, residues, and constants can be computed and propose an algorithm for their computation. In Section 3, we present several results showing that the shared-pole rational approximants retain the exponential accuracy. Section 4 presents the main algorithm for computing the linear combination in (2). Next we present our numerical experiments in Section 5. Finally, we draw some concluding remarks in Section 6.
In the next section, we describe how to construct the shared pole set $\{\theta_\ell\}_{\ell=1}^{n}$ together with the per-function residues $\eta_\ell^{(j)}$ and constants $r^{(j)}$ that define the CF approximants $r_j$ in (3).

2. Constructing Shared Poles and Per–Function Residues and Constants

This section builds on the CF analysis and practice of [20,23,25]. Let $f_0, \ldots, f_p$ be analytic in $\mathbb{C} \setminus \mathbb{R}^-$ and continuous up to $\mathbb{R}^-$. Assume further that $|f_j(z)| \to 0$ as $|z| \to \infty$ in a left sector containing $\mathbb{R}^-$, for $j = 0, \ldots, p$.
We consider the conformal map
$z(w) = \sigma \left( \frac{w - 1}{w + 1} \right)^2,$
where $\sigma > 0$ is a suitably chosen scaling parameter. It maps the interior of the unit disk onto $\mathbb{C} \setminus \mathbb{R}^-$ and the unit circle $w = e^{i\theta}$, $0 \le \theta \le 2\pi$, onto $\mathbb{R}^-$, on which
$z(e^{i\theta}) = \sigma \left( \frac{e^{i\theta} - 1}{e^{i\theta} + 1} \right)^2 = \sigma \, \frac{t - 1}{t + 1}, \qquad t = \cos\theta.$
Since $(t-1)/(t+1) \to -\infty$ as $t \to -1^+$ and $f_j(z) \to 0$ as $z \to -\infty$ along $\mathbb{R}^-$ by assumption, we set
$F_j(-1) := \lim_{t \to -1^+} f_j\!\left( \sigma \, \frac{t - 1}{t + 1} \right) = 0,$
so that $F_j$ is continuous on $[-1, 1]$. For each $j = 0, \ldots, p$ we define the boundary function
$F_j(t) = f_j(z(w)) = f_j\!\left( \sigma \, \frac{t - 1}{t + 1} \right), \qquad t \in [-1, 1].$
Since $F_j$ is continuous on $[-1, 1]$, it has the Chebyshev expansion
$F_j(t) = \sum_{k=0}^{\infty} c_k^{(j)} T_k(t),$
where $c_0^{(j)} = a_0^{(j)}$ and $c_k^{(j)} = 2 a_k^{(j)}$ for $k \ge 1$, with
$a_k^{(j)} = \frac{1}{\pi} \int_{-1}^{1} \frac{F_j(t)\, T_k(t)}{\sqrt{1 - t^2}} \, dt = \frac{1}{2\pi} \int_0^{2\pi} F_j(\cos\theta) \cos(k\theta) \, d\theta,$
since the Chebyshev polynomial T k ( t ) = cos ( k arccos t ) . Numerically these integrals are approximated using the composite trapezoidal rule in θ , which converges geometrically for 2 π -periodic analytic functions ([26], Theorem 3.1). The fast Fourier transform (FFT) can then be used to compute the coefficients efficiently because of periodicity ([27], Section 5.5).
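In Python (rather than the paper's MATLAB), the FFT-based coefficient computation can be sketched as follows; the helper name and parameter choices are ours.

```python
import numpy as np

def chebyshev_coeffs(F, N, K):
    """First K+1 Chebyshev coefficients of F on [-1, 1] from the N periodic
    samples F(cos(2*pi*k/N)), via one FFT (c_0 = a_0 and c_k = 2*a_k)."""
    theta = 2*np.pi*np.arange(N)/N
    G = F(np.cos(theta))             # even, 2*pi-periodic samples
    a = np.real(np.fft.fft(G))/N     # trapezoidal estimates of the a_k integrals
    c = 2*a[:K + 1]
    c[0] = a[0]
    return c
```

For instance, for $F = T_3$ the routine returns $c_3 = 1$ and (numerically) zero for the remaining coefficients.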
Trefethen and Gutknecht ([23], Section 1) truncate F j as
$F_j^{(K)}(t) = \sum_{k=0}^{K} c_k^{(j)} T_k(t),$
and write $t = (w + \bar{w})/2$ for $|w| = 1$, so that $T_k(t) = (w^k + \bar{w}^k)/2$ for $k \ge 1$, yielding
$F_j^{(K)}(t) = c_0^{(j)} + \tfrac{1}{2} \big( g_j(w) + g_j(\bar{w}) \big), \qquad g_j(w) = \sum_{k=1}^{K} c_k^{(j)} w^k.$
The key consequence of the CF theorem ([21], Theorem 3) is as follows: The polynomial g j in (5) has a unique best rational approximation of type ( n , n ) , denoted r n n ( j ) , with all poles lying outside the unit disk, and
$g_j(w) - r_{nn}^{(j)}(w) = s_{n+1}\, w^{K} B_j(w), \qquad B_j(w) = \frac{u_1 + u_2 w + \cdots + u_K w^{K-1}}{v_K + v_{K-1} w + \cdots + v_1 w^{K-1}},$
where $B_j(w)$ is a finite Blaschke product of degree n and $v = [v_1, v_2, \ldots, v_K]^T$ and $u = [u_1, u_2, \ldots, u_K]^T$ are the right and left singular vectors, respectively, associated with the singular value $s_{n+1}$ of the Hankel matrix
$H^{(j)} = \begin{bmatrix} c_1^{(j)} & c_2^{(j)} & \cdots & c_{K-1}^{(j)} & c_K^{(j)} \\ c_2^{(j)} & c_3^{(j)} & \cdots & c_K^{(j)} & 0 \\ \vdots & \vdots & & \vdots & \vdots \\ c_{K-1}^{(j)} & c_K^{(j)} & \cdots & 0 & 0 \\ c_K^{(j)} & 0 & \cdots & 0 & 0 \end{bmatrix}.$
That is, $H^{(j)} v = s_{n+1} u$; see ([22], Proposition 3.1) and ([23], Theorem 1).
In our applications the functions f j are real-valued on R (for example, the scalar φ -functions), and, hence, the boundary functions F j are real-valued on [ 1 , 1 ] . Consequently, all Chebyshev coefficients c k ( j ) are real and the Hankel matrices H ( j ) are real. It follows that we may take the singular vectors u and v in the relation H ( j ) v = s n + 1 u to be real, so that the associated Blaschke product B j and the type ( n , n ) rational approximant r n n ( j ) have real coefficients and poles occurring in complex conjugate pairs. For simplicity, we adopt this real-valued setting throughout the paper. The construction extends in a straightforward way to complex-valued functions f j , at the expense of working with complex Hankel matrices and singular vectors; we do not pursue this generality here.
Trefethen ([22], p. 303) shows that the best rational approximant $r_{nn}^{(j)}$ has n poles counted with multiplicity, and they are the zeros of the denominator polynomial of $B_j(w)$ in (6) that lie outside the unit circle. Thus, the residues and constants associated with the partial fraction form of $r_{nn}^{(j)}$ are also encoded in the vector u representing the numerator of $B_j(w)$; see Trefethen and Gutknecht ([23], Section 1) for further details. After obtaining the poles and residues in the w-plane, the conformal map z(w) defined in (4) is applied to transplant them to the z-plane and construct a near-best rational approximation for $f_j$.
Our novel approach in this paper is to construct a family of rational approximants $r_{nn}^{(j)}$ in a near-best sense for the functions $f_j$ such that $r_{nn}^{(j)}$, $j = 0, \ldots, p$, have the same denominator. That is, they share the same set of poles. We begin by forming the Hankel matrices $H^{(j)}$, $j = 0, \ldots, p$, as prescribed above and then stack them into the block matrix
$H = \begin{bmatrix} w_0 H^{(0)} \\ w_1 H^{(1)} \\ \vdots \\ w_p H^{(p)} \end{bmatrix},$
which we refer to as the stacked weighted Hankel matrix with positive weights $w_j$. We then compute the thin singular value decomposition (SVD) $H = U S V^T$ and let $v_{n+1} \in \mathbb{R}^K$ denote the right singular vector associated with the $(n+1)$st singular value $s_{n+1}$ (singular values ordered nonincreasingly). Equivalently, $v_{n+1}$ minimizes the quadratic form
$\sum_{j=0}^{p} \big\| w_j H^{(j)} v \big\|_2^2 \quad \text{over unit vectors } v \text{ orthogonal to } \operatorname{span}\{v_1, \ldots, v_n\}.$
The vector $v_{n+1}$ is regarded as the coefficient vector of the polynomial
$Q(q) = \sum_{i=1}^{K} (v_{n+1})_i \, q^{K-i}.$
We compute the zeros $\{q_\ell\}$ of Q lying outside the unit disk and retain the n zeros nearest to the unit circle. Finally, we map them to the slit plane via the conformal map z(w) in (4) as
$\theta_\ell = z(q_\ell), \qquad \ell = 1, \ldots, n.$
The resulting $\{\theta_\ell\}_{\ell=1}^{n}$ constitute a shared pole set (a common denominator) for all functions $\{f_j\}$. With these poles fixed, we approximate each $f_j$ by a shared-denominator partial fraction
$r_j(x) = r^{(j)} + \sum_{\ell=1}^{n} \frac{\eta_\ell^{(j)}}{x - \theta_\ell}, \qquad x \in \mathbb{R}^-.$
To compute the residues $\eta_\ell^{(j)}$ and constants $r^{(j)}$, we choose suitable grid points $\{x_k\}_{k=1}^{\tau} \subset [-M, 0] \subset \mathbb{R}^-$ (for some truncation parameter $M > 0$, proportional to σ in practice) and solve a robust least squares problem
$\min_{r^{(j)},\, \eta_\ell^{(j)}} \big\| L \beta^{(j)} - f_j(\mathbf{x}) \big\|_2, \qquad L = \Big[ \mathbf{1} \;\; \big( (x_k - \theta_\ell)^{-1} \big)_{k,\ell} \Big], \qquad \beta^{(j)} = \big[ r^{(j)}, \eta_1^{(j)}, \ldots, \eta_n^{(j)} \big]^T,$
where $f_j(\mathbf{x}) = [f_j(x_1), f_j(x_2), \ldots, f_j(x_\tau)]^T$ and $\mathbf{1}$ denotes the column vector whose entries are all ones. Another way to compute the residues $\eta_\ell^{(j)}$ and constants $r^{(j)}$ is to use the $(n+1)$st column of U, denote it u. Blocking u in accordance with the block rows of H,
$u = \big[ u^{(0)}, u^{(1)}, \ldots, u^{(p)} \big]^T,$
we have
$H^{(j)} v_{n+1} = \frac{s_{n+1}}{w_j}\, u^{(j)}, \qquad j = 0, \ldots, p.$
This shows that the blocks u ( j ) play, for each f j , a role analogous to the left singular vector in the one-function CF construction. In principle, one could recover residue information from these vectors along the lines of [22,23]. In Algorithm 1, however, we opt for the simpler and more robust least squares procedure (9); we do not exploit this alternative in our implementation.
Algorithm 1 Shared Denominator CF on $\mathbb{R}^-$: poles $\{\theta_\ell\}$, residues $\{\eta_\ell^{(j)}\}$, and constants $r^{(j)}$.
Input: 
Functions $f_0, \ldots, f_p$ satisfying the assumptions in Section 2; target degree n; CF parameters: scale $\sigma > 0$, truncation $K > n$, number of Chebyshev points $N > K$; positive weights $w_0, \ldots, w_p$ (default $w_j \equiv 1$); residue fit parameters: LS grid size τ; far-left bound M (e.g., $M = 100\sigma$).
Output: 
Shared poles $\{\theta_\ell\}_{\ell=1}^{n}$; per-function constants $r^{(j)}$ and residues $\{\eta_\ell^{(j)}\}_{\ell=1}^{n}$.
  1:
Map and sample on the unit circle
  2:
Define $x(t) = \sigma \frac{t - 1}{t + 1}$ and sample $t_k = \cos(2\pi k / N)$,    $k = 0, \ldots, N - 1$.
  3:
for  j = 0 , , p  do
  4:
$F_j(t_k) \leftarrow f_j(x(t_k))$;
  5:
Compute the Chebyshev coefficients $c_0^{(j)}, \ldots, c_K^{(j)}$ exploiting the FFT.
  6:
Form the $K \times K$ Hankel matrix $H^{(j)}$.           ▹ see (7)
  7:
end for
  8:
Stacked-weighted Hankel SVD and pole extraction
  9:
Build $H = \big[ w_0 H^{(0)}; \; w_1 H^{(1)}; \; \ldots; \; w_p H^{(p)} \big]$ by stacking the blocks vertically.
10:
Compute the thin SVD H = U S V T .  ▹ singular values nonincreasing.
11:
Extract v n + 1 , the ( n + 1 ) st column of V.
12:
Compute the roots of the polynomial $\sum_{i=1}^{K} (v_{n+1})_i \, q^{K-i}$.
13:
Select the n roots with $|q_\ell| > 1$ nearest to the unit circle; denote them $\{q_\ell\}_{\ell=1}^{n}$.
14:
$\theta_\ell = \sigma \left( \frac{q_\ell - 1}{q_\ell + 1} \right)^2$, $\ell = 1, \ldots, n$.            ▹ shared poles
15:
Per-function residues and constants via x-plane least squares on  $[-M, 0]$
16:
Build a log-dense grid $\{x_k\}_{k=1}^{\tau}$ on $[-M, 0]$.
17:
Construct the matrix:
$L = \Big[ \mathbf{1} \;\; \big( (x_k - \theta_\ell)^{-1} \big)_{k,\ell} \Big] \in \mathbb{C}^{\tau \times (n+1)}.$
18:
Compute the thin Q R factorization L = Q R .
19:
for  j = 0 , , p  do
20:
$y \leftarrow \big[ f_j(x_k) \big]_{k=1}^{\tau}$
21:
Solve $R \beta^{(j)} = Q^* y$         ▹ least squares for $\beta^{(j)} \in \mathbb{C}^{n+1}$
22:
Set $r^{(j)} = \beta_1^{(j)}$ and $\eta_\ell^{(j)} = \beta_{\ell+1}^{(j)}$ for $\ell = 1, \ldots, n$.
23:
end for
24:
return shared poles $\{\theta_\ell\}$, constants $\{r^{(j)}\}$, residues $\{\eta_\ell^{(j)}\}$.
The quadratic form in (8) makes clear how the single denominator is optimized across the family $\{f_j\}$. Larger $w_j$ bias the pole set toward features of $f_j$ (improving its fit, potentially at the expense of others). A simple and effective choice is $w_0 = \cdots = w_p$, so each $f_j$ contributes equally to the shared poles. In the terminology of [25], the "common poles" used in practice correspond to taking the denominator from a single base function (often $e^x$); in our framework, this is the extreme weight choice $w_0 = 1$ and $w_j = 0$ for $1 \le j \le p$. This perspective aligns with their comment that such approximations are "far from optimal."

3. Exponential Accuracy of Shared-Denominator Rational Approximants

Let $D := \mathbb{C} \setminus \mathbb{R}^-$. Write $\mathcal{A}(D)$ for the class of functions analytic in D and continuous up to $\mathbb{R}^-$ in the nontangential sense. The results below show that shared denominator (shared-pole) CF rational approximants retain exponential accuracy. Trefethen, Weideman, and Schmelzer [20] observed that quadrature formulas can be interpreted as rational approximations (and conversely). Consequently, it suffices to establish exponential convergence of the periodic trapezoidal rule for the boundary integral representation (after mapping D to the exterior of the unit disk); this holds whenever the integrand extends analytically to a fixed annulus about the unit circle, and thus yields exponential accuracy for the CF approximants; see, e.g., ([26], Theorem 2.1).
Theorem 1.
Fix a finite family $\mathcal{F} = \{f_0, \ldots, f_p\} \subset \mathcal{A}(D)$. There exist constants $C > 0$ and $\rho > 1$, depending only on $\mathcal{F}$ and on $\mathbb{R}^-$, such that for each $n \in \mathbb{N}$ there are polynomials
$q_n \ (\deg q_n \le n), \qquad p_{j,n} \ (\deg p_{j,n} \le n, \ 0 \le j \le p),$
with a single denominator $q_n$ (independent of j) satisfying
$\max_{0 \le j \le p} \; \sup_{x \le 0} \big| f_j(x) - p_{j,n}(x)/q_n(x) \big| \le C \rho^{-n}.$
Proof. 
Let $\Phi : D \to \{ w : |w| > 1 \}$ be a conformal bijection with $\Phi(\infty) = \infty$ and $\Phi'(\infty) > 0$, and let $g_D(z, \infty) = \log |\Phi(z)|$. For any $\sigma > 1$ define the level curve $\Gamma_\sigma := \{ z \in D : |\Phi(z)| = \sigma \}$ and parameterize it by $\zeta(\theta) = \Phi^{-1}(\sigma e^{i\theta})$, $0 \le \theta < 2\pi$. For $x \in \mathbb{R}^-$,
$f_j(x) = \frac{1}{2\pi} \int_0^{2\pi} \frac{f_j(\zeta(\theta)) \, \zeta'(\theta)}{\zeta(\theta) - x} \, d\theta = \frac{1}{n} \sum_{m=0}^{n-1} \frac{\omega_m f_j(\zeta_m)}{\zeta_m - x} + E_{j,n}(x),$
with $\zeta_m = \zeta(2\pi m / n)$ and $\omega_m = \zeta'(2\pi m / n)$. The integrand is analytic and bounded in a strip $|\operatorname{Im} \theta| < a$, so by the exponentially convergent trapezoidal rule, $\sup_{x \le 0} |E_{j,n}(x)| \le C_1 \eta^{-n}$ for some $\eta > 1$ independent of j and x; see ([26], Theorem 2.1). Clearing denominators gives a rational $r_{j,n}(x) = p_{j,n}(x)/q_n(x)$ with $q_n(x) = \prod_{m=0}^{n-1} (\zeta_m - x)$, common to all j. Taking $\rho = \eta$ proves the claim.    □
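The exponentially convergent trapezoidal rule invoked in the proof is easy to observe numerically. The short Python check below (ours, not from the paper) integrates the 2π-periodic analytic function $e^{\cos\theta}$ and watches the error decay geometrically in the number of nodes.

```python
import numpy as np

def trap(n):
    """Composite trapezoidal rule for the 2*pi-periodic integrand e^{cos(theta)}."""
    th = 2*np.pi*np.arange(n)/n
    return (2*np.pi/n)*np.exp(np.cos(th)).sum()

ref = trap(200)                              # fully converged reference (= 2*pi*I_0(1))
errs = [abs(trap(n) - ref) for n in (4, 8, 12, 16)]
```

Each additional batch of nodes shrinks the error by a roughly constant factor until rounding error is reached, exactly the geometric behavior the proof exploits.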
This leads to two important corollaries. The first shows that any linear combination admits shared denominator approximants with exponential accuracy; the second treats matrix arguments.
Corollary 1.
For any $\alpha \in \mathbb{C}^{p+1}$, there exist $C_\alpha > 0$ and $\rho > 1$ such that the combination $F_\alpha := \sum_{j=0}^{p} \alpha_j f_j$ admits type (n,n) shared denominator approximants with
$\sup_{x \le 0} \big| F_\alpha(x) - P_n(x)/q_n(x) \big| \le C_\alpha \rho^{-n}.$
Proof. 
Let $r_{j,n}(x) = p_{j,n}(x)/q_n(x)$ be the shared denominator approximants from Theorem 1, so that
$\sup_{x \le 0} \big| f_j(x) - r_{j,n}(x) \big| \le C_j \rho^{-n}, \qquad 0 \le j \le p.$
For $\alpha = (\alpha_0, \ldots, \alpha_p)$ define $P_n(x) := \sum_{j=0}^{p} \alpha_j p_{j,n}(x)$ and $R_n(x) := P_n(x)/q_n(x)$. Then,
$\big| F_\alpha(x) - R_n(x) \big| = \Big| \sum_{j=0}^{p} \alpha_j \big( f_j(x) - r_{j,n}(x) \big) \Big| \le \sum_{j=0}^{p} |\alpha_j| \, C_j \, \rho^{-n};$
hence, $\sup_{x \le 0} | F_\alpha(x) - P_n(x)/q_n(x) | \le C_\alpha \rho^{-n}$ with $C_\alpha := \sum_{j=0}^{p} |\alpha_j| C_j$.    □
Corollary 2.
Let A be normal with spectrum $\sigma(A) \subset \mathbb{R}^-$. Then there exist $C > 0$ and $\rho_1 > 1$ such that
$\big\| f_j(A) - p_{j,n}(A)/q_n(A) \big\|_2 \le \sup_{x \le 0} \big| f_j(x) - p_{j,n}(x)/q_n(x) \big| \le C \rho_1^{-n}.$
If A is nonnormal and the field of values satisfies $W(A) \subset \mathbb{C} \setminus \mathbb{R}^-$, Crouzeix's theorem [28] yields
$\big\| f_j(A) - p_{j,n}(A)/q_n(A) \big\| \le C \rho_2^{-n},$
for some $C > 0$ and $\rho_2 > 1$.
Proof. 
First, suppose A is normal and $\sigma(A) \subset \mathbb{R}^-$. By the spectral theorem, there is a unitary U and a real diagonal $\Lambda$ with entries in $\mathbb{R}^-$ such that $A = U \Lambda U^*$. For any rational r without poles on $\mathbb{R}^-$, and since the 2-norm is unitarily invariant, we have
$\big\| f_j(A) - r(A) \big\|_2 = \big\| U \big( f_j(\Lambda) - r(\Lambda) \big) U^* \big\|_2 = \big\| f_j(\Lambda) - r(\Lambda) \big\|_2 = \max_{\lambda \in \sigma(A)} \big| f_j(\lambda) - r(\lambda) \big|.$
Apply Theorem 1 with a contour $\Gamma_\sigma$ enclosing $\mathbb{R}^-$; the same construction provides a uniform bound on any compact subset inside $\Gamma_\sigma$, hence on $\sigma(A) \subset \mathbb{R}^-$:
$\big\| f_j(A) - p_{j,n}(A)/q_n(A) \big\|_2 \le \sup_{x \le 0} \big| f_j(x) - p_{j,n}(x)/q_n(x) \big| \le C \rho_1^{-n},$
for some $C > 0$ and $\rho_1 > 1$.
If A is nonnormal and $W(A) \subset \mathbb{C} \setminus \mathbb{R}^-$, pick $\sigma > 1$ so that the level curve $\Gamma_\sigma$ encloses the compact set $W(A)$ at positive distance. By the same Cauchy-trapezoidal construction, the rational $r_n = p_{j,n}/q_n$ satisfies
$\sup_{z \in W(A)} \big| f_j(z) - r_n(z) \big| \le \hat{C} \rho_2^{-n},$
for some $\hat{C} > 0$ and $\rho_2 > 1$. Therefore, we have
$\big\| f_j(A) - r_n(A) \big\| \le C_{\mathrm{Crx}} \sup_{z \in W(A)} \big| f_j(z) - r_n(z) \big| \le C \rho_2^{-n}$
by Crouzeix's theorem with the universal constant $C_{\mathrm{Crx}} \le 1 + \sqrt{2}$.    □
In view of classical potential-theoretic results on best type-(n,n) rational approximation of $e^x$ on $\mathbb{R}^-$, the geometric rate ρ in Theorem 1 can be identified with the reciprocal of Halphen's constant $H \approx 1/9.28903$, i.e., $\rho = 1/H \approx 9.28903$; see ([20], Section 4). Motivated by this prediction, in the numerical sweep of Section 5.1 we vary the CF scale parameter $\sigma \in \{5, 7, 9, 11, 13\}$ and measure the familywise worst error $E_n$ and fitted rates ρ. As reported in Table 1, for all $p = 0, \ldots, 4$ the fitted rates lie between 9 and 10, and $\sigma = 9$ consistently yields both the largest ρ and the smallest worst error $E_{14}$. Thus, $\sigma = 9$ is a robust, near-optimal choice for the CF scale parameter, and we adopt this fixed default in Algorithm 1.
Table 1. Fitted rate ρ and familywise worst error $E_{14}$ on $[-M, 0]$ with $M = 100\sigma$, $5 \le \sigma \le 13$.

4. Evaluating the Linear Combination of the φ -Functions with Shared Poles

The functions of interest $\{\varphi_j\}_{j=0}^{p}$ are evaluated on the slit plane $\mathbb{C} \setminus \mathbb{R}^-$, with $\mathbb{R}^-$ playing the role of a branch cut, and they satisfy the decay property $\varphi_j(z) \to 0$ as $|z| \to \infty$ in a sector containing the negative real axis. A shared denominator captures the common structure across the family. If $r_j$ denotes a CF rational approximant to $\varphi_j$ as in (3), then for matrix arguments with spectra lying on the negative real axis we have
$r_j(A) = r^{(j)} I + \sum_{\ell=1}^{n} \eta_\ell^{(j)} (A - \theta_\ell I)^{-1}.$
Consequently, the linear combination (2) is approximated by
$\sum_{j=0}^{p} \varphi_j(A) v_j \approx \sum_{j=0}^{p} r^{(j)} v_j + \sum_{j=0}^{p} \sum_{\ell=1}^{n} \eta_\ell^{(j)} (A - \theta_\ell I)^{-1} v_j = \sum_{j=0}^{p} r^{(j)} v_j + \sum_{\ell=1}^{n} (A - \theta_\ell I)^{-1} v_{\mathrm{rhs}}^{(\ell)},$
where
$v_{\mathrm{rhs}}^{(\ell)} := \sum_{j=0}^{p} \eta_\ell^{(j)} v_j, \qquad \ell = 1, \ldots, n.$
Thus, we solve only n shifted linear systems—one per pole—so the cost scales with the pole count n, rather than with ( p + 1 ) n as it would if each r j had a different pole set.
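The bookkeeping behind the combined right-hand sides is pure linear algebra and can be verified directly. In the Python snippet below the pole and residue data are arbitrary hypothetical values (not an actual CF table); it checks that n solves with combined right-hand sides match the naive $(p+1)n$ solves.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, p = 6, 4, 2
A = rng.standard_normal((d, d))
theta = rng.standard_normal(n) + 1j*rng.standard_normal(n)   # hypothetical shared poles
eta = rng.standard_normal((p + 1, n)) + 1j*rng.standard_normal((p + 1, n))
rconst = rng.standard_normal(p + 1)                          # constants r^(j)
V = rng.standard_normal((d, p + 1))                          # columns v_0, ..., v_p

# naive evaluation: one shifted solve per (function, pole) pair -> (p+1)*n solves
w_naive = V @ rconst + sum(
    np.linalg.solve(A - theta[l]*np.eye(d), eta[j, l]*V[:, j])
    for j in range(p + 1) for l in range(n))

# shared-pole evaluation: n solves, each with one combined right-hand side
w_shared = V @ rconst + sum(
    np.linalg.solve(A - theta[l]*np.eye(d), V @ eta[:, l])
    for l in range(n))
```

By linearity of the resolvent, the two evaluations agree to rounding error while the second performs only n shifted solves.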
A substantial computational saving is possible when the matrix A and the vectors $\{v_j\}_{j=0}^{p}$ are real, which is the most common situation in practice. Because each $\varphi_j$ is real-valued on $\mathbb{R}^-$, the CF approximants $r_j$ can be chosen with real coefficients, so all non-real poles and residues occur in complex conjugate pairs (apart from any real poles). In this case, for an even degree n we order the poles so that
$\theta_{n+1-\ell} = \bar{\theta}_\ell, \qquad \ell = 1, \ldots, \tfrac{n}{2},$
and we reorder the residues accordingly:
$\eta_{n+1-\ell}^{(j)} = \overline{\eta_\ell^{(j)}}, \qquad j = 0, \ldots, p.$
Since A and the $v_j$ are real, this implies
$v_{\mathrm{rhs}}^{(n+1-\ell)} = \sum_{j=0}^{p} \eta_{n+1-\ell}^{(j)} v_j = \overline{\sum_{j=0}^{p} \eta_\ell^{(j)} v_j} = \overline{v_{\mathrm{rhs}}^{(\ell)}}, \qquad \ell = 1, \ldots, \tfrac{n}{2}.$
Hence, the corresponding solutions satisfy
$x^{(n+1-\ell)} := (A - \theta_{n+1-\ell} I)^{-1} v_{\mathrm{rhs}}^{(n+1-\ell)} = \overline{(A - \theta_\ell I)^{-1} v_{\mathrm{rhs}}^{(\ell)}} = \overline{x^{(\ell)}},$
and, therefore,
$\sum_{\ell=1}^{n} (A - \theta_\ell I)^{-1} v_{\mathrm{rhs}}^{(\ell)} = 2 \operatorname{Re} \sum_{\ell=1}^{n/2} (A - \theta_\ell I)^{-1} v_{\mathrm{rhs}}^{(\ell)}.$
In the real case, it thus suffices to solve only n / 2 shifted linear systems.
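The conjugate-pair reduction is likewise easy to check numerically. In the Python snippet below the poles and right-hand sides are arbitrary conjugate-symmetric data (not CF output), and A is real; the sum over all n solves equals twice the real part of the sum over the first n/2.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6
A = rng.standard_normal((d, d))                    # real matrix
half = np.array([-1.0 + 2.0j, -3.0 + 0.5j])        # poles in the upper half-plane
theta = np.concatenate([half, half.conj()[::-1]])  # theta_{n+1-l} = conj(theta_l), n = 4
b_half = rng.standard_normal((d, 2)) + 1j*rng.standard_normal((d, 2))
B = np.hstack([b_half, b_half.conj()[:, ::-1]])    # v_rhs^{(n+1-l)} = conj(v_rhs^{(l)})

solves = [np.linalg.solve(A - th*np.eye(d), B[:, l]) for l, th in enumerate(theta)]
full = sum(solves)                                 # all n = 4 shifted solves
halved = 2*np.real(solves[0] + solves[1])          # only the first n/2 solves needed
```

The full sum has (numerically) zero imaginary part and coincides with the halved computation, confirming that only n/2 shifted systems must actually be solved.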
We are now in a position to present our algorithm for computing linear combinations of several φ -function actions; it is given in Algorithm 2. It is worth noticing that the m shifted linear systems in line 11 of Algorithm 2 need not be solved sequentially. Since each system corresponds to a distinct pole and has an independent right-hand side, they can be solved in parallel.
Algorithm 2 Linear combination of φ -functions with shared poles.
Input:
  •  $A \in \mathbb{C}^{d \times d}$ and vectors $v_1, \ldots, v_\tau \in \mathbb{C}^d$
  •  Degree n (default n = 14) of the rational approximants $r_j$ (for real data, n is even so that poles occur in conjugate pairs)
  •  A family $\{\varphi_{j_k}\}_{k=1}^{\tau}$, where $j_k \in \mathcal{I} \subset \mathbb{N} \cup \{0\}$ and $|\mathcal{I}| = \tau$
  •  Shared CF data: poles $\{\theta_\ell\}_{\ell=1}^{n}$, residues $\eta_\ell^{(j_k)}$, and constants $r^{(j_k)}$ constructed via Algorithm 1 for the scalar functions $\varphi_{j_1}, \ldots, \varphi_{j_\tau}$
Output:  $w \approx \sum_{k=1}^{\tau} \varphi_{j_k}(A) v_k$.
  1:
Order $\{\theta_\ell\}_{\ell=1}^{n}$ so that complex-conjugate pole pairs are adjacent, and reorder the corresponding residues $\eta_\ell^{(j_k)}$ accordingly.
  2:
m = n
  3:
if realData then
  4:
$m \leftarrow n/2$
  5:
end if
  6:
$V = [v_1, v_2, \ldots, v_\tau]$   ▹ $d \times \tau$ matrix
  7:
w = 0   ▹ d × 1 zero vector
  8:
for  = 1 to m do
  9:
$\gamma^{(\ell)} \leftarrow \big[ \eta_\ell^{(j_1)}, \eta_\ell^{(j_2)}, \ldots, \eta_\ell^{(j_\tau)} \big]^T$
10:
$v_{\mathrm{rhs}}^{(\ell)} \leftarrow V \gamma^{(\ell)}$   ▹ one combined right-hand side
11:
Solve $(A - \theta_\ell I)\, x^{(\ell)} = v_{\mathrm{rhs}}^{(\ell)}$
12:
    if realData then
13:
$x^{(\ell)} \leftarrow 2 \operatorname{Re} x^{(\ell)}$
14:
    end if
15:
$w \leftarrow w + x^{(\ell)}$
16:
end for
17:
$w \leftarrow w + V \, \big[ r^{(j_1)}, r^{(j_2)}, \ldots, r^{(j_\tau)} \big]^T$

5. Numerical Experiments

This section presents three numerical experiments. The first investigates the algorithmic parameters and the geometric convergence rate of the CF approximation while the second experiment implements Algorithm 2 for a highly nonnormal matrix. The third experiment involves the 2D Poisson matrix.
All runs were performed in MATLAB R2022b on a single desktop (Intel® Core™ i7–7700T @ 2.90 GHz, 16 GB RAM, Intel Corporation, Santa Clara, CA, USA). To contextualize performance and accuracy, we compare the following five routines:
cfphimv:   
our MATLAB routine for Algorithm 2 (https://github.com/aalmohy/cfphimv (accessed on 9 December 2025)).
phimv:    
Al-Mohy’s algorithm ([3], Algorithm 2) (https://github.com/aalmohy/phimv (accessed on 20 October 2025)).
phi_funm: 
Al-Mohy and Liu’s algorithm ([18], Algorithm 5.1) (https://github.com/xiaobo-liu/phi_funm (accessed on 20 October 2025)). This algorithm evaluates several φ –functions of a moderate size matrix jointly via a scaling and recovering strategy. We use this routine to compute a reference solution for medium-sized problems.
bamphi:  
Caliari, Cassini, and Živković’s routine [14], combining Newton form polynomial interpolation at special nodes with Krylov techniques (https://github.com/francozivcovich/bamphi (accessed on 20 October 2025)).
kiops:    
The adaptive Krylov solver of Gaudreault, Rainwater, and Tokman [5] based on the incomplete orthogonalization procedure (https://gitlab.com/stephane.gaudreault/kiops (accessed on 20 October 2025)).
The routines phimv, bamphi, and kiops all natively accept block right-hand sides and, in one call, evaluate several linear combinations of the form (2). We use each routine with its default settings; for kiops, we set the tolerance to the double-precision machine epsilon for consistency with the other routines.

5.1. Numerical Sweep for Shared-Denominator CF Tables

The purpose of this experiment is to (i) empirically verify geometric convergence of the shared denominator CF approximants for the family { φ j } j = 0 p on R , and (ii) identify practical defaults for the CF scale  σ and degree n that deliver double precision accuracy uniformly across j.
For $p \in \{0, 1, 2, 3, 4\}$ we build type-(n,n) shared-pole tables at $n \in \{6, 8, 10, 12, 14\}$ and $\sigma \in \{5, 7, 9, 11, 13\}$. Pole extraction via Algorithm 1 uses K = 100 Chebyshev coefficients and N = 1024 Chebyshev points. With the shared poles $\{\theta_\ell\}$ fixed, per-function residues and constants are fitted by least squares (9) on a log-dense training grid of size $\tau = 8000$ over $[-M, 0]$ with $M = 100\sigma$. Accuracy is measured on an independent testing grid of size $\tau = 20{,}000$ over $[-M, 0]$. For each configuration, we compute the familywise worst error
$E_n = \max_{0 \le j \le p} \; \max_{x \in [-M, 0]} \big| \varphi_j(x) - r_j(x) \big|.$
We report the geometric rate ρ, obtained from a least squares fit $\log E_n \approx a + b n$ over the interior degrees $\{8, 10, 12\}$ to reduce endpoint bias, yielding $\rho = e^{-b}$ and $C = e^{a}$. Across all p, the fitted rates lie near the reciprocal of Halphen's constant, and the worst errors at $n = 14$ are between $10^{-14}$ and $10^{-13}$. Similar behavior holds for $\sigma \in \{5, 7, 11, 13\}$, with $\sigma = 9$ consistently optimal by rate and end error.
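The rate-fitting step is a two-parameter linear least squares problem in $(n, \log E_n)$. A minimal Python sketch (ours, run here on synthetic data with a known rate rather than the measured errors of Table 1):

```python
import numpy as np

def fit_rate(ns, errs):
    """Fit log E_n ~ a + b*n by least squares; return (C, rho) = (e^a, e^{-b})."""
    b, a = np.polyfit(ns, np.log(errs), 1)
    return np.exp(a), np.exp(-b)

ns = np.array([8.0, 10.0, 12.0])
errs = 0.5*9.289**(-ns)          # synthetic E_n = C*rho^{-n} with C = 0.5, rho = 9.289
C, rho = fit_rate(ns, errs)
```

On exactly log-linear data the fit recovers C and ρ to machine precision, so any departure of the fitted ρ from the Halphen prediction reflects the data, not the fitting procedure.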
We conclude that choosing $\sigma = 9$ is sufficient to achieve a nearly optimal geometric convergence rate across $p = 0, \ldots, 4$. This default aligns with the scales reported in [20,25]. Accordingly, we recommend $\sigma = 9$ as the CF scale parameter in Algorithm 1.

5.2. Chebyshev Spectral Laplacian (Dirichlet)

We use a Chebyshev collocation discretization on the interval $[0, L]$ with homogeneous Dirichlet boundary conditions ([3], Section 5). Starting from the Chebyshev first-derivative matrix D on $\xi \in [-1, 1]$, the change of variables $x = \frac{L}{2}(\xi + 1)$ implies $\frac{d}{dx} = \frac{2}{L} \frac{d}{d\xi}$, so the discrete second derivative on $[0, L]$ is $(2/L)^2 D^2$. Enforcing the Dirichlet conditions by deleting the first and last rows/columns yields a dense, strongly nonnormal matrix $A \in \mathbb{R}^{(N-1) \times (N-1)}$ whose spectrum lies on the negative real axis and whose spectral radius grows with N, making it a stiff, representative test for exponential integrators. The code in Table 2 (adapted from [29]) constructs A for general N and L.
Table 2. MATLAB code for the discrete second derivative on [ 0 , L ] with Dirichlet conditions.
We consider $N \in \{50, 100, 150, 200, 250\}$ and $L = 2$. The aim is to demonstrate the speed of our shared denominator CF rational approximant (with n = 14) relative to Taylor- and Krylov polynomial-based routines. As $\|A\|$ and the nonnormality grow, polynomial approaches typically require higher degrees (or smaller steps) to control the error, so their cost escalates. By contrast, the CF rational approximants employ a fixed set of poles that capture the branch cut of the φ-family, making accuracy essentially insensitive to the spectral radius of A; the dominant work reduces to a small number of shifted linear solves. This is therefore a highly stiff test where rational approximants should retain both accuracy and speed as N increases. For p = 3 and each N, we generate a random set of vectors $\{v_0, v_1, \ldots, v_p\} \subset \mathbb{R}^{N-1}$ and evaluate $w_N = \sum_{j=0}^{p} \varphi_j(A) v_j$ using cfphimv, phimv, bamphi, and kiops. The reference solution is computed using phi_funm.
The data in Table 3 highlight three key points. (i) The cost of cfphimv is essentially flat in N: even as ‖A‖₂ increases from 3.0 × 10⁵ to 1.9 × 10⁸ and the matrix becomes more nonnormal, the runtime stays below 0.5 s and the relative error remains in the 10⁻¹¹–10⁻¹⁰ range. (ii) The Krylov/Taylor-based methods (phimv, bamphi, kiops) degrade rapidly with N: phimv and bamphi become slower by one to two orders of magnitude and eventually time out, and kiops either times out or returns unusably large errors (up to 10⁸). (iii) Beyond N ≈ 100, only cfphimv continues to deliver both accuracy and subsecond turnaround. This supports the main claim of the paper: once the shared pole set is precomputed, evaluating the full linear combination of φ-functions reduces to solving a small number of shifted systems, and this remains stable and fast even for highly stiff, strongly nonnormal matrices.
Table 3. Runtimes and relative errors for the linear combination w N = j = 0 p φ j ( A ) v j with a hard per-call timeout of 300 s. “TO” = timed out; “ER” = routine error.

5.3. Two-Dimensional Poisson Matrix

In this experiment we use the two-dimensional Poisson matrix P ∈ ℝ^(N²×N²) obtained from the standard five-point finite difference discretization of −Δu = f on (0, 1)² with homogeneous Dirichlet boundary conditions. With lexicographic ordering of the N² interior grid points, P can be written as a sum of Kronecker products
P = I_N ⊗ T + T ⊗ I_N,
where T ∈ ℝ^(N×N) is the tridiagonal Toeplitz matrix tridiag(−1, 2, −1) corresponding to the one-dimensional second-difference operator. The matrix P can be generated by the MATLAB command P = gallery('poisson', N). Poisson matrices of this type arise ubiquitously in the finite difference and finite element discretization of diffusion and heat equations, electrostatics and potential problems, pressure Poisson equations in incompressible flow, and in image processing and graph-based models where discrete Laplacians are used for smoothing and regularization.
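For illustration, a hedged NumPy equivalent of the gallery call, assembled directly from the Kronecker formula above (the function name `poisson2d` is ours):

```python
import numpy as np

def poisson2d(N):
    """2D Poisson matrix P = I kron T + T kron I, where T = tridiag(-1, 2, -1),
    matching MATLAB's gallery('poisson', N) in exact arithmetic."""
    T = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
    I = np.eye(N)
    return np.kron(I, T) + np.kron(T, I)
```

A quick sanity check: P is symmetric with constant diagonal 4, and its eigenvalues are the pairwise sums λ_k + λ_ℓ of the one-dimensional eigenvalues given below.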
The spectral structure of P can be characterized explicitly. The one-dimensional matrix T is diagonalized by the discrete sine transform matrix S with entries
S_jk = √(2/(N+1)) sin(jkπ/(N+1)), j, k = 1, …, N,
yielding T = S Λ S T with eigenvalues
λ_k = 2 − 2 cos(kπ/(N+1)) = 4 sin²(kπ/(2(N+1))), k = 1, …, N.
By standard properties of Kronecker products,
P = (S ⊗ S) Λ_2D (S ⊗ S)ᵀ, Λ_2D = diag(λ_{k,ℓ}), λ_{k,ℓ} = λ_k + λ_ℓ, k, ℓ = 1, …, N,
so the eigenvectors of P are the tensor products s_k ⊗ s_ℓ of the one-dimensional sine modes. Detailed expositions of this construction and its use in fast Poisson solvers and spectral analysis can be found, for example, in Strang ([27], Section 5.5) and Golub and Van Loan ([30], Section 4.8).
To evaluate f(P)b efficiently for a scalar function f analytic on the spectrum of P and a vector b ∈ ℂ^(N²), we exploit the explicit diagonalization
P = (S ⊗ S) Λ_2D (S ⊗ S)ᵀ, Λ_2D = diag(λ_{k,ℓ}), λ_{k,ℓ} = λ_k + λ_ℓ.
We first reshape b into an N × N matrix B such that vec ( B ) = b , where the vec operator stacks the columns of a matrix on top of each other. Using the identity
(S ⊗ S)ᵀ vec(B) = vec(SᵀBS),
we compute the spectral coefficients B ^ = S T B S . Defining
F_{k,ℓ} = f(λ_{k,ℓ}), k, ℓ = 1, …, N,
we apply f ( Λ 2 D ) by pointwise multiplication,
Z_{k,ℓ} = F_{k,ℓ} B̂_{k,ℓ}, k, ℓ = 1, …, N,
and transform back via B_f = S Z Sᵀ, so that
f(P) b = vec(B_f).
In this procedure, we never form S ⊗ S or f(P) explicitly; the dominant cost is a small number of N × N matrix–matrix products (or fast sine transforms), i.e., O(N³) flops with an explicit S or O(N² log N) flops with FFT-based discrete sine transforms ([30], Section 4.8). We use this spectral machinery, combined with multiprecision arithmetic using the Multiprecision Computing Toolbox (ver. 5.1.0) [31], to generate highly accurate reference solutions.
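The steps above can be sketched as follows, a minimal NumPy illustration using an explicit S rather than fast sine transforms (the function name `apply_fP` is ours, and f is any vectorized scalar function):

```python
import numpy as np

def apply_fP(f, b, N):
    """Evaluate f(P) b for the N^2 x N^2 2D Poisson matrix P via the
    sine-transform diagonalization, without forming P or f(P)."""
    k = np.arange(1, N + 1)
    S = np.sqrt(2.0 / (N + 1)) * np.sin(np.outer(k, k) * np.pi / (N + 1))
    lam = 4.0 * np.sin(k * np.pi / (2 * (N + 1))) ** 2   # 1D eigenvalues
    B = b.reshape(N, N, order='F')                        # vec(B) = b
    Bhat = S.T @ B @ S                                    # spectral coefficients
    F = f(lam[:, None] + lam[None, :])                    # f(lam_k + lam_l)
    Bf = S @ (F * Bhat) @ S.T                             # back-transform
    return Bf.reshape(-1, order='F')                      # vec(B_f)
```

Replacing the two dense multiplications by S with FFT-based discrete sine transforms lowers the cost from O(N³) to O(N² log N), as noted above.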
We now use this eigenvalue decomposition as a high-accuracy reference to evaluate the linear combination (2). Let h = 1/(N+1) be the mesh size and N = 2^k for k = 6, …, 10. We define the scaled operator
A = −(1/h²) P.
Thus, A is negative definite and increasingly stiff as N grows (indeed, ‖A‖₁ = O(N²)). For each N and four (p = 3) randomly generated vectors {v_j}_{j=0}^{p} ⊂ ℝ^(N²), we evaluate the linear combination
w_N = Σ_{j=0}^{p} φ_j(A) v_j
using the routines cfphimv, phimv, kiops, and bamphi. The reference solution is computed via the spectral diagonalization of A described above, with φ_j(−λ_{k,ℓ}/h²) applied to the eigenvalues in multiprecision arithmetic, and the relative error is measured in the 1-norm against this reference.
Table 4 presents the results. For the smallest sizes (N = 64, 128), all four algorithms reach very small relative errors, typically between 10⁻¹⁴ and 10⁻¹¹, confirming that they all resolve the highly stiff operator A on this model problem. However, the runtimes differ markedly: cfphimv is already more than an order of magnitude faster than phimv and substantially faster than kiops and bamphi. As N increases, the stiffness and dimension grow rapidly (‖A‖₁ increases by roughly two orders of magnitude over the tested range). For N = 256, phimv already hits the time limit of 600 s, while kiops and bamphi remain accurate but require tens of seconds. For N ≥ 512, all three polynomial- or Krylov-based routines time out, whereas cfphimv continues to deliver accurate results with runtimes ranging from a fraction of a second (for N = 64) to about 80 s (for N = 1024). The modest growth of the cfphimv error with N (remaining around 10⁻⁹ at N = 1024) indicates that the rational approximation error and the effect of the reference solver are both well under control. Overall, this experiment shows that the shared-denominator CF approach can handle very stiff, large-scale diffusion operators for linear combinations of φ-functions at a cost comparable to a small number of shifted linear solves, while competing Taylor and polynomial Krylov methods become prohibitively slow or fail to complete within the time limit.
Table 4. Runtimes and relative errors for the linear combination w N = j = 0 p φ j ( A ) v j with A the scaled 2D Poisson matrix and a hard per-call timeout of 600 s. “TO” = timed out.

6. Conclusions

We presented a shared-denominator CF framework for evaluating linear combinations of φ-functions of stable matrices, a core task in exponential integrators. The key idea is to approximate {φ_j}_{j=0}^{p} on the negative real axis ℝ⁻ with rational functions that share a single pole set {θ_ℓ}_{ℓ=1}^{n}, while allowing per-function residues {η_ℓ^(j)} and constants {r^(j)}. We obtain the poles from a single SVD of a stacked weighted Hankel matrix built from Chebyshev boundary data of all the functions, and then recover the residues and constants via a robust least-squares fit on a log-dense grid of ℝ⁻. With these ingredients fixed, any linear combination w = Σ_{j=0}^{p} φ_j(A) v_j reduces to solving only n/2 shifted linear systems (for real data and even n), one per shared pole and independent of p, each with a single combined right-hand side, as described in Algorithm 2. Thus, the dominant cost matches that of evaluating a single φ-function action.
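The reduction just described can be sketched in a few lines of Python; this is a hedged illustration rather than our production code, with `theta`, `eta`, and `r` standing in for the precomputed CF tables (real data assumed, and only one pole from each conjugate pair passed, hence the factor 2 Re):

```python
import numpy as np

def shared_pole_eval(A, V, theta, eta, r):
    """Evaluate w = sum_j r[j] V[:, j]
                  + 2 Re sum_l (A - theta[l] I)^{-1} c_l,
    where c_l = sum_j eta[l, j] V[:, j] is the single combined
    right-hand side for pole theta[l] (cf. Algorithm 2)."""
    n = A.shape[0]
    w = V @ r                                   # constant terms
    for l, th in enumerate(theta):
        c = (V @ eta[l]).astype(complex)        # combined RHS for this pole
        w = w + 2.0 * np.real(np.linalg.solve(A - th * np.eye(n), c))
    return w
```

Only len(theta) shifted solves are performed regardless of p; in a matrix-free setting each dense solve would be replaced by an iterative solver for (A − θ_ℓ I).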
On the theoretical side, we proved a “no assumptions” shared-denominator approximation theorem on ℝ⁻ that yields a geometric rate Cρⁿ, ρ < 1, for a finite family of analytic functions and their linear combinations, and we lifted these bounds to matrix arguments for normal matrices (via the spectral theorem) and for nonnormal matrices whose field of values avoids the cut (via Crouzeix’s theorem). This places our construction on the same exponential-convergence footing as CF/contour-trapezoid approaches, with the observed rates closely tied to the classical constants for slit domains.
Our numerical sweeps corroborate the theory: for p = 0, …, 4 and moderate degrees n ≤ 14, the worst-case scalar errors across {φ_j} decay geometrically to about 10⁻¹⁴, with a near-uniformly effective CF scale around σ ≈ 9. The experiments also clarify the distinct roles of the parameters used in practice. Importantly, because the rational tables are constructed on the continuous negative real axis and then applied to A via shifted solves, their effectiveness is largely insensitive to the spectral radius of A, in contrast to Taylor and polynomial Krylov approaches, whose difficulty grows directly with ‖A‖.
Because our evaluation reduces to solves with the shifted operators (A − θ_ℓ I) against a single combined right-hand side per pole, the method is immediately amenable to matrix-free implementations.

Funding

This research was funded by the Deanship of Scientific Research at King Khalid University through the Research Groups Program under Grant No. RGP.1/318/45.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

We thank the reviewers for their insightful comments and suggestions that helped to improve the presentation of this paper.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Hochbruck, M.; Ostermann, A. Exponential integrators. Acta Numer. 2010, 19, 209–286. [Google Scholar] [CrossRef]
  2. Minchev, B.V.; Wright, W.M. A Review of Exponential Integrators for First Order Semi-Linear Problems; Technical Report 2/05; Norwegian University of Science and Technology: Trondheim, Norway, 2005. [Google Scholar]
  3. Al-Mohy, A.H. Computing Linear Combinations of φ-Function Actions for Exponential Integrators. arXiv 2025, arXiv:2509.26475. [Google Scholar]
  4. Al-Mohy, A.H.; Higham, N.J. Computing the Action of the Matrix Exponential, with an Application to Exponential Integrators. SIAM J. Sci. Comput. 2011, 33, 488–511. [Google Scholar] [CrossRef]
  5. Gaudreault, S.; Rainwater, G.; Tokman, M. KIOPS: A Fast Adaptive Krylov Subspace Solver for Exponential Integrators. J. Comput. Phys. 2018, 372, 236–255. [Google Scholar] [CrossRef]
  6. Hochbruck, M.; Lubich, C.; Selhofer, H. Exponential Integrators for Large Systems of Differential Equations. SIAM J. Sci. Comput. 1998, 19, 1552–1574. [Google Scholar] [CrossRef]
  7. Koskela, A.; Ostermann, A. Exponential Taylor Methods: Analysis and Implementation. Comput. Math. Appl. 2013, 65, 487–499. [Google Scholar] [CrossRef]
  8. Niesen, J.; Wright, W.M. Algorithm 919: A Krylov Subspace Algorithm for Evaluating the φ-Functions Appearing in Exponential Integrators. ACM Trans. Math. Softw. 2012, 38, 22. [Google Scholar] [CrossRef]
  9. Cox, S.; Matthews, P. Exponential Time Differencing for Stiff Systems. J. Comput. Phys. 2002, 176, 430–455. [Google Scholar] [CrossRef]
  10. Luan, V.T. Efficient Exponential Runge–Kutta Methods of High Order: Construction and Implementation. BIT Numer. Math. 2021, 61, 535–560. [Google Scholar] [CrossRef]
  11. Kassam, A.K.; Trefethen, L.N. Fourth-Order Time-Stepping for Stiff PDEs. SIAM J. Sci. Comput. 2005, 26, 1214–1233. [Google Scholar] [CrossRef]
  12. Higham, N.J. Functions of Matrices: Theory and Computation; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2008; p. xx+425. [Google Scholar] [CrossRef]
  13. Luan, V.T.; Pudykiewicz, J.A.; Reynolds, D.R. Further Development of Efficient and Accurate Time Integration Schemes for Meteorological Models. J. Comput. Phys. 2019, 376, 817–837. [Google Scholar] [CrossRef]
  14. Caliari, M.; Cassini, F.; Zivcovich, F. BAMPHI: Matrix-Free and Transpose-Free Action of Linear Combinations of φ-Functions from Exponential Integrators. J. Comput. Appl. Math. 2023, 423, 114973. [Google Scholar] [CrossRef]
  15. Moret, I. On RD-rational Krylov approximations to the core-functions of exponential integrators. Numer. Linear Algebra Appl. 2007, 14, 445–457. [Google Scholar] [CrossRef]
  16. Bergermann, K.; Stoll, M. Adaptive Rational Krylov Methods for Exponential Runge–Kutta Integrators. SIAM J. Matrix Anal. Appl. 2024, 45, 744–770. [Google Scholar] [CrossRef]
  17. Berland, H.; Skaflestad, B.; Wright, W.M. EXPINT—A MATLAB Package for Exponential Integrators. ACM Trans. Math. Softw. 2007, 33, 4-es. [Google Scholar] [CrossRef]
  18. Al-Mohy, A.H.; Liu, X. A Scaling and Recovering Algorithm for the Matrix φ-Functions. arXiv 2025, arXiv:2506.01193. [Google Scholar]
  19. Al-Mohy, A.H.; Higham, N.J. A New Scaling and Squaring Algorithm for the Matrix Exponential. SIAM J. Matrix Anal. Appl. 2009, 31, 970–989. [Google Scholar] [CrossRef]
  20. Trefethen, L.N.; Weideman, J.A.C.; Schmelzer, T. Talbot quadratures and rational approximations. BIT Numer. Math. 2006, 46, 653–670. [Google Scholar] [CrossRef]
  21. Trefethen, L.N. Near-circularity of the error curve in complex Chebyshev approximation. J. Approx. Theory 1981, 31, 344–367. [Google Scholar] [CrossRef]
  22. Trefethen, L.N. Rational Chebyshev approximation on the unit disk. Numer. Math. 1981, 37, 297–320. [Google Scholar] [CrossRef]
  23. Trefethen, L.N.; Gutknecht, M.H. The Carathéodory–Fejér Method for Real Rational Approximation. SIAM J. Numer. Anal. 1983, 20, 420–436. [Google Scholar] [CrossRef]
  24. Carathéodory, C.; Fejér, L. Über den Zusammenhang der Extremen von harmonischen Funktionen mit ihren Koeffizienten und über den Picard–Landau’schen Satz. Rend. Circ. Mat. Palermo 1911, 32, 218–239. [Google Scholar] [CrossRef]
  25. Schmelzer, T.; Trefethen, L.N. Evaluating Matrix Functions for Exponential Integrators via Carathéodory–Fejér Approximation and Contour Integrals. Electron. Trans. Numer. Anal. 2007, 29, 1–18. [Google Scholar]
  26. Trefethen, L.N.; Weideman, J.A.C. The Exponentially Convergent Trapezoidal Rule. SIAM Rev. 2014, 56, 385–458. [Google Scholar] [CrossRef]
  27. Strang, G. Introduction to Applied Mathematics; Wellesley–Cambridge Press: Wellesley, MA, USA, 1986. [Google Scholar]
  28. Crouzeix, M.; Palencia, C. The Numerical Range is a (1 + √2)-Spectral Set. SIAM J. Matrix Anal. Appl. 2017, 38, 649–655. [Google Scholar] [CrossRef]
  29. Trefethen, L.N. Spectral Methods in MATLAB; SIAM: Philadelphia, PA, USA, 2000. [Google Scholar] [CrossRef]
  30. Golub, G.H.; Van Loan, C.F. Matrix Computations; Johns Hopkins University Press: Baltimore, MD, USA, 2013. [Google Scholar]
  31. Advanpix. Multiprecision Computing Toolbox; Advanpix: Tokyo, Japan, 2025; Available online: http://www.advanpix.com (accessed on 15 August 2023).
