
Eigenvalue Estimates via Pseudospectra †

by Georgios Katsouleas *,‡, Vasiliki Panagakou ‡ and Panayiotis Psarrakos ‡

Department of Mathematics, Zografou Campus, National Technical University of Athens, 15773 Athens, Greece
* Author to whom correspondence should be addressed.
† This paper is dedicated to Mr. Constantin M. Petridi.
‡ These authors contributed equally to this work.
Mathematics 2021, 9(15), 1729; https://doi.org/10.3390/math9151729
Submission received: 30 May 2021 / Revised: 16 July 2021 / Accepted: 18 July 2021 / Published: 22 July 2021
(This article belongs to the Special Issue Numerical Linear Algebra and the Applications)

Abstract
In this note, given a matrix \(A \in \mathbb{C}^{n \times n}\) (or a general matrix polynomial \(P(z)\), \(z \in \mathbb{C}\)) and an arbitrary scalar \(\lambda_0 \in \mathbb{C}\), we show how to define a sequence \(\{\mu_k\}_{k \in \mathbb{N}}\) which converges to some element of its spectrum. The scalar \(\lambda_0\) serves as the initial term (\(\mu_0 = \lambda_0\)), while additional terms are constructed through a recursive procedure, exploiting the fact that each term \(\mu_k\) of this sequence is in fact a point lying on the boundary curve of some pseudospectral set of \(A\) (or \(P(z)\)). Then, the next term in the sequence is detected in the direction which is normal to this curve at the point \(\mu_k\). Repeating the construction for additional initial points, it is possible to approximate peripheral eigenvalues, localize the spectrum and even obtain spectral enclosures. Hence, as a by-product of our method, a computationally cheap procedure for approximate pseudospectra computations emerges. An advantage of the proposed approach is that it does not make any assumptions on the location of the spectrum. The fact that all computations are performed on dynamically chosen locations on the complex plane which converge to the eigenvalues, rather than on a large number of predefined points on a rigid grid, can be used to accelerate conventional grid algorithms. Parallel implementation of the method, or its use in conjunction with randomization techniques, can lead to further computational savings when applied to large-scale matrices.

1. Introduction

The theory of pseudospectra originates in numerical analysis and can be traced back to Landau [1], Varah [2], Wilkinson [3], Demmel [4], and Trefethen [5], motivated by the need to obtain insights into systems evolving in ways that the eigenvalues alone could not explain. This is especially true in problems where the underlying matrices or linear operators are non-normal or exhibit in some sense large deviations from normality. A better understanding of such systems can be gained through the concept of the pseudospectrum, which, for a matrix \(A \in \mathbb{C}^{n \times n}\) and a parameter \(\epsilon > 0\), was introduced as the subset of the complex plane that is bounded by the \(\epsilon^{-1}\)-level set of the norm of the resolvent \((\mu I - A)^{-1}\). A second definition, stated in terms of perturbations, characterizes the elements of this set as eigenvalues of some perturbation \(A + E\) with \(\|E\| \le \epsilon\). In this sense, the notion of pseudospectrum provides information that goes beyond eigenvalues, while retaining the advantage of being a natural extension of the spectral set. In fact, for different values of the magnitude \(\epsilon\), the pseudospectrum provides a global perspective on the effects of perturbations; this is in stark contrast to the concept of condition number, where only the worst-case scenario is considered.
On one hand, the pseudospectrum may be used as a visualization tool to reveal information regarding the matrix itself and the sensitivity of its eigenvalues. Applications within numerical analysis include the convergence of nonsymmetric matrix iterations [6], backward error analysis of eigenvalue algorithms [7], and stability of spectral methods [8]. On the other hand, it is a versatile tool that has been used to obtain quantitative bounds on the transient behavior of differential equations in finite time, which may deviate from the long-term asymptotic behavior [9]. Important results involving pseudospectra have also been obtained in the context of spectral theory and spectral properties of banded Toeplitz matrices [10,11]. Although emphasis has been placed on the standard eigenproblem, attention has also been drawn to matrix pencils [12] and more general matrix polynomials [13,14] arising in vibrating systems, control theory, etc. For a comprehensive overview of this research field and its applications, the interested reader may refer to [15].
In this note, we propose an application of pseudospectral sets as a means of obtaining eigenvalue estimates in the vicinity of some complex scalar. In particular, given a matrix (or a general matrix polynomial) and a scalar \(\lambda_0 \in \mathbb{C}\), we construct a sequence \(\{\mu_k\}_{k \in \mathbb{N}}\) that converges to some element of its spectrum. The scalar \(\lambda_0\) serves as the initial term (\(\mu_0 = \lambda_0\)), while additional terms are constructed through an iterative procedure, exploiting the fact that each term \(\mu_k\) of this sequence is in fact a point lying on the boundary curve of some pseudospectral set. Then, the next term in the sequence is detected in the direction perpendicular to the tangent line at the point \(\mu_k\). Repeating the construction for a tuple of initial points encircling the spectrum, several peripheral eigenvalues are approximated. Since the pseudospectrum may be disconnected, this procedure allows the identification of individual connected components and, as a by-product, a convenient and numerically efficient procedure for approximate pseudospectrum computation emerges. Moreover, this approach is clearly amenable to parallelization or randomization and can lead to significant computational savings when applied to problems involving large-scale matrices.
Our paper is organized as follows. In Section 2, we develop the necessary theoretical background for the method and provide examples for the constant matrix case. As confirmed by numerical experiments, the method can provide a sufficiently accurate pseudospectrum computation at a much-reduced computational cost, especially in cases where the spectrum is convexly independent (i.e., each eigenvalue does not lie in the convex hull of the others) or exhibits large eigenvalue gaps. A second application of the method, to Perron-root approximation for non-negative matrices, is also presented. Then, Section 3 shows how the procedure may be modified to estimate the spectrum of more general matrix polynomials. Numerical experiments showcasing the application of the method to damped mass-spring and gyroscopic systems conclude the paper.

2. Eigenvalues via Pseudospectra

Let \(A \in \mathbb{C}^{n \times n}\) be a matrix with spectrum \(\sigma(A) = \{\mu \in \mathbb{C} : \det(\mu I - A) = 0\}\), where \(\det(\cdot)\) denotes the determinant of a matrix. With respect to the \(\|\cdot\|_2\)-norm, the pseudospectrum of \(A\) is defined by
\[
\sigma_\epsilon(A) = \left\{ \mu \in \mathbb{C} : \frac{1}{\|(\mu I - A)^{-1}\|_2} \le \epsilon \right\} = \left\{ \mu \in \mathbb{C} : \mu \in \sigma(A + E) \ \text{for some} \ E \in \mathbb{C}^{n \times n} \ \text{with} \ \|E\| \le \epsilon \right\} = \left\{ \mu \in \mathbb{C} : s_{\min}(\mu I - A) \le \epsilon \right\},
\]
where \(s_{\min}(\cdot)\) denotes the smallest singular value of a matrix and \(\epsilon > 0\) is the maximum norm of admissible perturbations.
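The third characterization above is the computationally useful one: membership of a point \(\mu\) in \(\sigma_\epsilon(A)\) reduces to a single smallest-singular-value evaluation. A minimal sketch (ours, not the authors' code; Python with NumPy is assumed here and in the sketches below):

```python
import numpy as np

def smin(A, mu):
    """Smallest singular value of mu*I - A (np.linalg.svd returns singular values in descending order)."""
    n = A.shape[0]
    return np.linalg.svd(mu * np.eye(n) - A, compute_uv=False)[-1]

def in_pseudospectrum(A, mu, eps):
    """True iff mu belongs to the eps-pseudospectrum of A in the 2-norm."""
    return smin(A, mu) <= eps
```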
For every choice of increasing positive parameters \(0 < \epsilon_1 < \epsilon_2 < \epsilon_3 < \cdots\), the corresponding closed, strictly nested sequence of pseudospectra
\[
\sigma_{\epsilon_1}(A) \subset \sigma_{\epsilon_2}(A) \subset \sigma_{\epsilon_3}(A) \subset \cdots
\]
is obtained. In fact, the respective boundaries satisfy the inclusions
\[
\partial\sigma_{\epsilon_1}(A) \subseteq \{\mu \in \mathbb{C} : s_{\min}(\mu I - A) = \epsilon_1\}, \quad \partial\sigma_{\epsilon_2}(A) \subseteq \{\mu \in \mathbb{C} : s_{\min}(\mu I - A) = \epsilon_2\}, \quad \partial\sigma_{\epsilon_3}(A) \subseteq \{\mu \in \mathbb{C} : s_{\min}(\mu I - A) = \epsilon_3\}, \quad \dots
\]
It is also clear that, for any \(\lambda \in \sigma(A)\), \(s_{\min}(\lambda I - A) = 0\).
Our objective now is to exploit the properties of these sets to detect an eigenvalue of \(A\) in the vicinity of a given scalar \(\lambda_0 \in \mathbb{C} \setminus \sigma(A)\). This given point of interest may be considered to lie on the boundary of some pseudospectral set; i.e., there exists some positive parameter \(\hat{\epsilon}_1 > 0\) such that
\[
\lambda_0 \in \partial\sigma_{\hat{\epsilon}_1}(A) \subseteq \{\mu \in \mathbb{C} : s_{\min}(\mu I - A) = \hat{\epsilon}_1\}. \tag{1}
\]
Indeed, the points satisfying the equality \(s_{\min}(\mu I - A) = \epsilon\) for some \(\epsilon > 0\) and lying in the interior of \(\sigma_\epsilon(A)\) are finite in number. Thus, in the generic case, we may think of the inclusion (1) as an equality.
We consider the real-valued function \(g_A : \mathbb{C} \to \mathbb{R}_+\) with \(g_A(z) = s_{\min}(zI - A)\). In the process of formulating a curve-tracing algorithm for pseudospectrum computation [16], Brühl analyzed \(g_A(z)\) and, identifying \(\mathbb{C} \cong \mathbb{R}^2\), noted that its differentiability is explained by the following theorem from [17]:
Theorem 1.
Let the matrix-valued function \(P(\chi) : \mathbb{R}^d \to \mathbb{C}^{n \times n}\) be real analytic in a neighborhood of \(\chi_0 = (x_0^1, \dots, x_0^d)\), and let \(\sigma_0\) be a simple nonzero singular value of \(P(\chi_0)\) with \(u_0, v_0\) its associated left and right singular vectors, respectively.
Then, there exists a neighborhood \(\mathcal{N}\) of \(\chi_0\) on which a simple nonzero singular value \(\sigma(\chi)\) of \(P(\chi)\) is defined, with corresponding left and right singular vectors \(u(\chi)\) and \(v(\chi)\), respectively, such that \(\sigma(\chi_0) = \sigma_0\), \(u(\chi_0) = u_0\), \(v(\chi_0) = v_0\), and the functions \(\sigma\), \(u\), \(v\) are real analytic on \(\mathcal{N}\). The partial derivatives of \(\sigma(\chi)\) are given by
\[
\frac{\partial \sigma(\chi_0)}{\partial \chi_j} = \mathrm{Re}\left( u_0^* \, \frac{\partial P(\chi_0)}{\partial \chi_j} \, v_0 \right), \quad j = 1, \dots, d.
\]
Hence, recalling (1) and assuming that \(\hat{\epsilon}_1\) is a simple singular value of the matrix \(P(\lambda_0) = \lambda_0 I - A\), we have
\[
\nabla s_{\min}(zI - A)\big|_{z = \lambda_0} = \left( \mathrm{Re}(v_{\min}^* u_{\min}), \, \mathrm{Im}(v_{\min}^* u_{\min}) \right) \equiv v_{\min}^* u_{\min},
\]
where \(u_{\min}\) and \(v_{\min}\) denote the left and right singular vectors of \(\lambda_0 I - A\) associated with \(\hat{\epsilon}_1 = s_{\min}(\lambda_0 I - A)\), respectively [16] (Corollary 2.2).
On the other hand, if \(\lambda\) is an eigenvalue of \(A\) near \(\lambda_0\), it holds that \(|\lambda - \lambda_0| \ge \hat{\epsilon}_1\). The latter observation follows from the fact that
\[
\sigma_\epsilon(A) \supseteq \sigma(A) + D(0, \epsilon) = \{ z \in \mathbb{C} : \mathrm{dist}(z, \sigma(A)) \le \epsilon \},
\]
where \(D(0, \epsilon) = \{ z \in \mathbb{C} : |z| \le \epsilon \}\) and equality holds for normal matrices; were \(|\lambda - \lambda_0| < \hat{\epsilon}_1\), the point \(\lambda_0\) would lie in the interior of \(\sigma_{\hat{\epsilon}_1}(A)\) rather than on its boundary. So, the scalar
\[
\mu_1 = \lambda_0 - \hat{\epsilon}_1 \cdot \frac{\nabla s_{\min}(zI - A)}{\left\| \nabla s_{\min}(zI - A) \right\|} \bigg|_{z = \lambda_0} = \lambda_0 - s_{\min}(\lambda_0 I - A) \cdot \frac{v_{\min}(\lambda_0)^* u_{\min}(\lambda_0)}{\left| v_{\min}(\lambda_0)^* u_{\min}(\lambda_0) \right|}
\]
can be considered an estimate of the eigenvalue \(\lambda\). In particular, \(\lambda_0 \in \partial\sigma_{\hat{\epsilon}_1}(A)\) and \(\mu_1\) lies in the interior of \(\sigma_{\hat{\epsilon}_1}(A)\). Moreover, the sequence
\[
\begin{aligned}
\mu_0 &= \lambda_0, \\
\mu_1 &= \mu_0 - s_{\min}(\mu_0 I - A) \cdot \frac{v_{\min}(\mu_0)^* u_{\min}(\mu_0)}{\left| v_{\min}(\mu_0)^* u_{\min}(\mu_0) \right|}, \\
\mu_2 &= \mu_1 - s_{\min}(\mu_1 I - A) \cdot \frac{v_{\min}(\mu_1)^* u_{\min}(\mu_1)}{\left| v_{\min}(\mu_1)^* u_{\min}(\mu_1) \right|}, \\
&\;\;\vdots \\
\mu_k &= \mu_{k-1} - s_{\min}(\mu_{k-1} I - A) \cdot \frac{v_{\min}(\mu_{k-1})^* u_{\min}(\mu_{k-1})}{\left| v_{\min}(\mu_{k-1})^* u_{\min}(\mu_{k-1}) \right|}, \quad k = 1, 2, \dots
\end{aligned} \tag{2}
\]
converges to \(\lambda\).
The above process requires the computation of the triplet
\[
\left( s_{\min}(\mu_k I - A), \; u_{\min}(\mu_k), \; v_{\min}(\mu_k) \right)
\]
at every point \(\mu_k\); see [18].
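A sketch of the iteration (2), under the simplicity assumption of Theorem 1; for moderate \(n\), a full SVD at each point is the simplest (if not the cheapest) way to obtain the required triplet, and the tolerance and iteration cap below are assumptions of ours:

```python
import numpy as np

def eigenvalue_sequence(A, mu0, tol=1e-12, max_iter=500):
    """Construct the sequence (2): mu_k = mu_{k-1} - s_min * (v* u)/|v* u|."""
    n = A.shape[0]
    mu = complex(mu0)
    for _ in range(max_iter):
        U, S, Vh = np.linalg.svd(mu * np.eye(n) - A)
        s = S[-1]                            # s_min(mu*I - A)
        if s <= tol:                         # mu is numerically an eigenvalue
            break
        u = U[:, -1]                         # left singular vector for s_min
        v = Vh[-1, :].conj()                 # right singular vector for s_min
        grad = np.vdot(v, u)                 # v^* u, the gradient as a complex number
        mu = mu - s * grad / abs(grad)       # step of length s_min along the inward normal
    return mu
```

For instance, `eigenvalue_sequence(A, np.linalg.norm(A, 2))` starts from a point on the circle \(|z| = \|A\|_2\) and typically converges to a peripheral eigenvalue.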
Remark. To avoid the computational burden of computing the (left and right) singular vectors, a cheaper alternative would be to consider at each iteration (\(k = 0, 1, 2, \dots\)) the canonical octagon with vertices
\[
p_{k,j} = \mu_k + e^{i j \pi / 4} \cdot s_{\min}(\mu_k I - A), \quad j = 0, 1, 2, \dots, 7,
\]
instead, and simply compute
\[
\theta_{k,j} = s_{\min}(p_{k,j} I - A), \quad j = 0, 1, 2, \dots, 7.
\]
In this case, instead of (2), we can set
\[
\mu_{k+1} = \mu_k + e^{i j_0 \pi / 4} \cdot \theta_{k,j_0},
\]
with \(j_0\) such that
\[
\theta_{k,j_0} = \min_{j = 0, 1, 2, \dots, 7} \theta_{k,j} = \min_{j = 0, 1, 2, \dots, 7} s_{\min}(p_{k,j} I - A).
\]
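A sketch of this derivative-free variant (again ours): each step costs eight additional smallest-singular-value evaluations but avoids singular vectors entirely.

```python
import numpy as np

def octagon_step(A, mu):
    """One derivative-free step: probe the octagon of radius s_min(mu*I - A)."""
    n = A.shape[0]
    r = np.linalg.svd(mu * np.eye(n) - A, compute_uv=False)[-1]
    phases = np.exp(1j * np.arange(8) * np.pi / 4)        # e^{i j pi/4}, j = 0, ..., 7
    thetas = np.array([np.linalg.svd((mu + r * p) * np.eye(n) - A,
                                     compute_uv=False)[-1] for p in phases])
    j0 = int(np.argmin(thetas))                           # vertex with the smallest s_min
    return mu + phases[j0] * thetas[j0]
```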

2.1. Numerical Experiments

2.1.1. Pseudospectrum Computation

The approximating sequences in (2) may be utilized to implement a computationally cheap procedure to visualize matrix pseudospectra, at least in cases where the order of the matrix is small or when its spectrum exhibits large eigenvalue gaps. Several related techniques for pseudospectrum computation have appeared in the literature. These fall largely into two categories: grid algorithms [14] and path-following algorithms [16,19,20,21]. Grid algorithms begin by evaluating the function \(s_{\min}(zI - A)\) on a predefined grid on the complex plane and lead to a graphical visualization of the boundary \(\partial\sigma_\epsilon(A)\) by plotting the \(\epsilon\)-contours of \(s_{\min}(zI - A)\). This approach faces two severe challenges: namely, the requirement of a priori information on the location of the spectrum to correctly identify a suitable region to discretize, as well as the typically large number of grid points at which the computations have to be performed. Path-following algorithms, on the other hand, require an initial step to detect a starting point on the curve \(\partial\sigma_\epsilon(A)\) and then proceed to compute additional boundary points for each connected component of \(\sigma_\epsilon(A)\). The main drawbacks of this latter approach lie in the difficulty of performing the initial step and the need to correctly identify every connected component of \(\sigma_\epsilon(A)\) in order to repeat the procedure and properly trace its boundary. Moreover, cases where pseudospectrum computation is required for a whole tuple of parameters \(\epsilon\) drastically compromise the efficiency of path-following algorithms.
Our approach is to use the approximating sequences (2) to decrease the number of singular value evaluations and therefore speed up the computation of pseudospectra. The basic steps are outlined as follows:
  • Select a tuple of initial points \(\{\mu_0^j\}_{j=1}^s \subset \mathbb{C}\) encircling the spectrum; for instance, these can be chosen on the circle \(\{z \in \mathbb{C} : |z| = \|A\|\}\).
  • Construct eigenvalue-approximating sequences \(\{\mu_k^j\}_{k=0}^{n_j}\) (\(j = 1, \dots, s\)), as in (2). If \(\epsilon_k^j > 0\) (\(k = 1, \dots, n_j\)) are such that \(\mu_k^j \in \partial\sigma_{\epsilon_k^j}(A)\), the length \(n_j\) of each sequence is determined so that \(s_{\min}(\mu_{n_j}^j I - A) \le \epsilon_0\) for all \(j = 1, \dots, s\), where \(\epsilon_0\) is some prefixed parameter value. In other words, \(\epsilon_0\) indicates the tolerance to which the eigenvalues approached by the constructed sequences should be approximated, and it corresponds to the minimum parameter for which pseudospectra will be computed.
  • Classify the sequences into distinct clusters according to the proximity of their final terms. This step may be performed with a k-means clustering algorithm, using a suitable criterion to evaluate the optimal number of groups; see the sketch following this list.
  • Compute
    \[
    u = \min_{j = 1, \dots, s} \; \max_{k = 1, \dots, n_j} \epsilon_k^j \ (> \epsilon_0) \qquad \text{and} \qquad \ell = \max_{j = 1, \dots, s} \; \min_{k = 1, \dots, n_j} \epsilon_k^j \ (< \epsilon_0).
    \]
  • If necessary, repeat the procedure for \(t\) additional points between the centroids of the detected clusters, constructing additional sequences, so that
    \[
    \min_{j = s+1, \dots, s+t} \; \max_{k = 1, \dots, n_j} \epsilon_k^j > u \qquad \text{and} \qquad \max_{j = s+1, \dots, s+t} \; \min_{k = 1, \dots, n_j} \epsilon_k^j < \ell.
    \]
  • Detect boundary points of \(\sigma_\epsilon(A)\) for any choice of parameter \(\epsilon \in [\ell, u]\) by interpolation along the polygonal chains formed by the total of \(s + t\) constructed sequences of points.
  • Fit closed spline curves passing through the respective sets of boundary points in \(\partial\sigma_\epsilon(A)\) for the various choices of \(\epsilon \in [\ell, u]\) to obtain sketches of the corresponding pseudospectra \(\sigma_\epsilon(A)\).
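A sketch of the clustering step mentioned above, assuming scikit-learn is available; note that scikit-learn's k-means is Euclidean, whereas Application 1 below uses a cityblock (sum of absolute differences) metric, which would require a different implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cluster_final_terms(final_mus, k_max=10):
    """Group the (complex) final terms of the trajectories into clusters."""
    X = np.column_stack([np.real(final_mus), np.imag(final_mus)])
    best_k, best_score, best_labels = 2, -1.0, None
    for k in range(2, min(k_max, len(X) - 1) + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        score = silhouette_score(X, labels)      # silhouette criterion: higher is better
        if score > best_score:
            best_k, best_score, best_labels = k, score, labels
    return best_k, best_labels
```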
The proposed method successfully localizes the spectrum, initiating the procedure with a restricted number of points. Then, singular value computations are kept to a minimum by considering points only on the constructed sequences. Pseudospectrum components corresponding to peripheral eigenvalues \(\lambda \notin \mathrm{co}(\sigma(A) \setminus \{\lambda\})\), i.e., eigenvalues not in the convex hull of the others, are thus extremely easy to identify. This approach is also well-suited to cases where the matrix has a convexly independent spectrum; i.e., when \(\lambda \notin \mathrm{co}(\sigma(A) \setminus \{\lambda\})\) for every \(\lambda \in \sigma(A)\). Moreover, it is clearly amenable to parallelization, which could lead to significant computational savings in cases of large matrices.
Application 1.
We consider a random matrix \(A \in \mathbb{C}^{6 \times 6}\), the sole constraint being that its eigenvalues are distant from each other; the real and imaginary parts of its entries follow the standardized normal distribution scaled by \(10^4\). For the proposed procedure, we select initial points \(\{\mu_0^j\}_{j=1}^{10} \subset \{z \in \mathbb{C} : |z| = \|A\|_1\}\) and exploit the fact that the corresponding sequences \(\{\mu_k^j\}_{k=0}^{n_j}\) generated as in (2) converge to some element of \(\sigma(A)\). The number \(n_j\) of terms in each sequence (\(j = 1, \dots, 10\)) is determined so that all values \(\{s_{\min}(\mu_{n_j}^j I - A)\}_{j=1}^{10}\) do not exceed \(\epsilon_0 = 0.5\). The sequences are organized into distinct clusters, grouping together those sequences which approximate the same element of \(\sigma(A)\). This grouping is performed using a k-means clustering algorithm, where the optimal number of clusters is evaluated via the silhouette criterion, using a distance metric based on the sum of absolute differences between points. Since six different groups are identified, clearly all elements of \(\sigma(A)\) have been sufficiently approximated by at least one of the sequences. For an illustration, refer to Figure 1a; different colors have been used to differentiate between polygonal chains corresponding to distinct clusters. The construction so far required 914 singular value computations. Having calculated all parameter values \(\epsilon_k^j\) such that \(\mu_k^j \in \partial\sigma_{\epsilon_k^j}(A)\) during the previous procedure, it is possible to interpolate between these known points along the trajectories formed by \(\{\mu_k^j\}_{k=0}^{n_j}\) (\(j = 1, \dots, 10\)) to approximate boundary points of \(\partial\sigma_\epsilon(A)\) for selected values \(\epsilon > \epsilon_0 = 0.5\). Since all ten trajectories converge to eigenvalues from points encircling \(\sigma(A)\), to obtain better pseudospectra approximations, it is necessary to repeat the procedure for additional, suitably selected points. Hence, for each cluster we consider three additional points; see Figure 1b. In particular, denoting \(c_1, \dots, c_6\) the centroids of the clusters, for each \(j\) we consider the three centroids \(\{c_{j,k}\}_{k=1}^3\) which lie closest to \(c_j\) and take the convex combinations
\[
p_k^j = \frac{1}{6} \left( 5 c_j + c_{j,k} \right), \quad k = 1, 2, 3.
\]
Then, additional sequences corresponding to these extra points are constructed, so that the desired parameter values of \(\epsilon\) for which pseudospectra should be computed (in this instance, the triple \(\epsilon = 1, 5, 10 \in [\ell, u]\)) may be interpolated within these trajectories, as for the ten initial ones. This imposes an extra cost of 1170 additional singular value computations (2084 in total). The resulting approximations of the pseudospectra components identified by the upper left corner trajectories for \(\epsilon = 1, 5, 10\) are depicted in greater detail in Figure 1c; the relevant eigenvalue is indicated by “*”.
An advantage of this procedure is that it does not require any a priori knowledge of the initial region \(\Omega\) of the complex plane where the spectrum is located. In fact, the very nature of this specific example, whose spectrum covers a wide area \(\Omega\), would render computations on a suitable grid impractical. Another way in which this method diverges from conventional grid algorithms is that the computations are performed on a dynamically chosen set of points, iteratively selected as the corresponding trajectories converge to peripheral eigenvalues and identify the relevant pseudospectrum components, rather than on a large number of predefined points on a rigid grid.
Application 2.
To demonstrate how the procedure works in cases of larger matrices, in this application we examine the matrix \(A = 10^{-7} \cdot \mathrm{PORES2}\), where \(\mathrm{PORES2}\) is a \(1024 \times 1024\) matrix from the Harwell-Boeing sparse matrix collection [22] related to a non-symmetric computational fluid dynamics problem. Here, the factor \(10^{-7}\) is used for scaling purposes and is related to the order of the \(\|\cdot\|_1\)-norm of the matrix under consideration. Initiating the procedure with 30 equidistributed points on the circle \(\{z \in \mathbb{C} : |z| = \frac{1}{2}\|A\|_1\}\), the method required a total of 810 singular value computations for a minimum parameter value of \(\epsilon_0 = 0.005\); the resulting pseudospectra visualizations for \(\epsilon = 10^{-1}, 10^{-1.5}, 10^{-2}\) are depicted in Figure 2. For this example, we have opted not to introduce additional points.
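At this scale, dense SVDs at every point become the bottleneck. One standard workaround (a sketch of ours, not necessarily the implementation used for the experiment; it assumes the Matrix Market file pores_2.mtx is available locally) exploits \(s_{\min}(\mu I - A) = 1/\|(\mu I - A)^{-1}\|_2\), estimating the norm of the inverse by power iteration on a single sparse LU factorization per point:

```python
import numpy as np
import scipy.sparse as sp
from scipy.io import mmread
from scipy.sparse.linalg import splu

def smin_sparse(A, mu, iters=30, seed=0):
    """Estimate s_min(mu*I - A) = 1/||(mu*I - A)^{-1}||_2 for sparse A."""
    n = A.shape[0]
    M = mu * sp.identity(n, dtype=complex, format='csc') - A.tocsc()
    lu = splu(M)                                   # one factorization per point mu
    x = np.random.default_rng(seed).standard_normal(n).astype(complex)
    for _ in range(iters):                         # power iteration on (M M^*)^{-1}
        x = lu.solve(lu.solve(x), trans='H')
        x /= np.linalg.norm(x)
    return 1.0 / np.linalg.norm(lu.solve(x))       # x ~ left singular vector of s_min

A = 1e-7 * mmread('pores_2.mtx')                   # scaled as in the text
print(smin_sparse(A, 0.5 + 0.5j))                  # an arbitrary probe point
```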
Perron root computation. Applications of non-negative matrices, i.e., matrices with exclusively non-negative real entries, abound in such diverse fields as probability theory, dynamical systems, Internet search engines, tournament matrices, etc. In this context, the dominant eigenvalue of non-negative matrices, also referred to as the Perron root, is of central importance. Localization of the Perron root has been extensively studied in the literature; relevant bounds can be found in [23,24,25,26,27]. Its computation is typically carried out using the power method; the convergence rate of this approach depends on the relative magnitudes of the two dominant eigenvalues. Relevant methods have appeared in [28,29,30], among others. As a second application of the approximating sequences (2), the following experiment reports an elegant way of approximating Perron roots.
Application 3.
For this experiment, we considered a tuple of 50 non-negative matrices \(\{A_\ell\}_{\ell=1}^{50} \subset \mathbb{R}_+^{500 \times 500}\) with uniformly distributed entries in \((0, 50)\). The symmetry of \(\sigma_\epsilon(A_\ell)\) with respect to the real axis suggests that it suffices to restrict the computations exclusively to the closed upper half-plane. Hence, for each of the matrices \(A_\ell\), we initiated the construction of the sequences (2) from equidistributed initial terms \(\{\mu_0^{\ell,j}\}_{j=1}^{10} \subset \{z \in \mathbb{C} : |z| = 10^4 \cdot \|A_\ell\|_1, \ \mathrm{Im}(z) \ge 0\}\) (\(\ell = 1, \dots, 50\)). As expected, the rightmost of these points formed sequences converging to the Perron root of \(A_\ell\), while each of the remaining ones approximated some other peripheral eigenvalue. In the generic case, the magnitude of the second largest eigenvalue of \(A_\ell\) was much smaller than the Perron root. Figure 3 is illustrative of this separation; the blue curve traces the boundary of the numerical range of such a matrix, red points indicate its eigenvalues, while the cyan lines correspond to the trajectories of the constructed sequences. Denoting \(\{\mu_k^{\ell,j}\}_{k=0}^{n_{\ell,j}}\) (\(j = 1, \dots, s_\ell\)) those sequences approximating the Perron root \(\lambda_\ell \in \sigma(A_\ell)\) (\(\ell = 1, \dots, 50\)), the relative error in each iteration, \(\left| \mu_k^{\ell,j} - \lambda_\ell \right| / \left| \lambda_\ell \right|\), \(k = 0, 1, \dots, \min_j n_{\ell,j}\), decreases rapidly, even though the initial points \(\mu_0^{\ell,j}\) (\(j = 1, \dots, s_\ell\)) were chosen to be extremely remote from \(\sigma(A_\ell)\). Averages
\[
\frac{1}{50} \sum_{\ell=1}^{50} \frac{1}{s_\ell} \sum_{j=1}^{s_\ell} \frac{\left| \mu_k^{\ell,j} - \lambda_\ell \right|}{\left| \lambda_\ell \right|}
\]
of these relative approximation errors over the tuple of matrices for the first \(k = 1, 2, \dots, 5\) iterations are reported in the first column of Table 1, verifying that a reliable estimate for the Perron root may, in the generic case, be obtained after the computation of as few as 3 terms of the corresponding trajectories.
The remaining \((10 - s_\ell)\) sequences converge to some other peripheral eigenvalues \(\lambda_{\ell,s_\ell+1}, \dots, \lambda_{\ell,10} \in \sigma(A_\ell)\), reasonable approximations of which require a rather larger number of iterations, as can be seen from the second column of Table 1, which reports
\[
\frac{1}{50} \sum_{\ell=1}^{50} \frac{1}{10 - s_\ell} \sum_{j=s_\ell+1}^{10} \frac{\left| \mu_k^{\ell,j} - \lambda_{\ell,j} \right|}{\left| \lambda_{\ell,j} \right|}.
\]
Application 3 suggests that any reasonable upper bound \(\mu_0 \in \mathbb{R}\) suffices to yield reliable estimates for the Perron root after the computation of only 2–3 terms of the sequence (2).
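A self-contained sketch of this remark (the matrix size and the number of steps are our choices): starting from the classical row-sum upper bound on the Perron root, a few steps of (2) already give a good estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
A = 50 * rng.random((n, n))                      # non-negative, entries uniform in (0, 50)
mu = complex(A.sum(axis=1).max())                # Perron root <= max row sum
for _ in range(3):                               # 2-3 steps of (2) already suffice
    U, S, Vh = np.linalg.svd(mu * np.eye(n) - A)
    grad = np.vdot(Vh[-1, :].conj(), U[:, -1])   # v^* u
    mu = mu - S[-1] * grad / abs(grad)
perron = max(np.linalg.eigvals(A), key=lambda z: z.real).real
print(abs(mu - perron) / perron)                 # relative error of the estimate
```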
The previous experiment may seem excessively optimistic. Indeed, there can be instances when the situation is much more demanding.
Application 4.
The Frank matrix is well-known to have ill-conditioned eigenvalues. For this application, we test the behavior of the proposed method on the Frank matrix of order 32, whose normalized matrix of eigenvectors has condition number \(7.81 \times 10^{11}\). Figure 4 depicts the resulting pseudospectra visualizations for \(\epsilon = 0.001, 0.005, 0.01, 0.02, 0.03\), initiating the procedure from 30 points located on the upper semiellipse centered at \((40, 0)\) with semi-major and semi-minor axis lengths equal to 70 and 15, respectively. The depicted trajectories were constructed so that the final terms in each polygonal chain lie within \(\sigma_{0.001}(A)\). Then, according to the distances of the final terms of consecutive sequences, at most two additional points are introduced on the line segment connecting these respective final terms. The necessary iterations for the construction of the relevant sequences are reported in Table 2 for different numbers of initial points.
The approximating quality of the sequences is much compromised when compared to the generic case, requiring many more iterations, especially for the eigenvalues with the smallest real parts; these are also the most ill-conditioned ones. In fact, the seven rightmost sequences, converging to the Perron root (refer to Figure 4), display the fastest convergence; the second group of thirteen sequences, leading to the intermediate eigenvalues, is somewhat more compromised, while the leftmost sequences naturally exhibit even more diminished approximation quality. Mean relative approximation errors for these three groups are reported in Table 3.
For the numerical experiments in this section, we have restricted ourselves to initial points encircling the spectrum. Another option would be to use our method in tandem with randomization techniques for the selection of the initial points.

3. Matrix Polynomials

The derivation of eigenvalue approximating sequences may be readily extended to account for the general matrix polynomial case
\[
P(\lambda) = A_m \lambda^m + A_{m-1} \lambda^{m-1} + \cdots + A_1 \lambda + A_0,
\]
where \(\lambda \in \mathbb{C}\) and \(A_j \in \mathbb{C}^{n \times n}\) (\(j = 0, 1, \dots, m\)), with \(A_m \ne 0\). Recall that the spectrum of \(P(\lambda)\) is the set of all its eigenvalues; i.e., \(\sigma(P) = \{\lambda \in \mathbb{C} : \det P(\lambda) = 0\}\). For a scalar \(\lambda_0 \in \sigma(P)\), the nonzero solutions \(v_0 \in \mathbb{C}^n\) of the system \(P(\lambda_0) v_0 = 0\) are the eigenvectors of \(P(\lambda)\) corresponding to \(\lambda_0\).
The \(\epsilon\)-pseudospectrum of \(P(\lambda)\) was introduced in [14] for a given parameter \(\epsilon > 0\) and a set of nonnegative weights \(w = (w_0, w_1, \dots, w_m) \in \mathbb{R}_+^{m+1}\) as the set
\[
\sigma_{\epsilon,w}(P) = \left\{ \lambda \in \mathbb{C} : \det P_\Delta(\lambda) = 0, \ \|\Delta_j\| \le \epsilon w_j, \ j = 0, 1, \dots, m \right\} \tag{3}
\]
of eigenvalues of all admissible perturbations \(P_\Delta(\lambda)\) of \(P(\lambda)\) of the form
\[
P_\Delta(\lambda) = (A_m + \Delta_m) \lambda^m + (A_{m-1} + \Delta_{m-1}) \lambda^{m-1} + \cdots + (A_1 + \Delta_1) \lambda + (A_0 + \Delta_0),
\]
where the norms of the matrices \(\Delta_j \in \mathbb{C}^{n \times n}\) (\(j = 0, 1, \dots, m\)) satisfy the specified \((\epsilon, w)\)-related constraints. In contrast to the constant matrix case, a whole tuple of perturbing matrices \(\Delta_j\) is involved, which explains the presence of the additional parameter vector \(w\) in the definition of \(\sigma_{\epsilon,w}(P)\). However, considering for some \(A \in \mathbb{C}^{n \times n}\) the pencil \(P(\lambda) = I_n \lambda - A\), note that (3) reduces to the usual \(\epsilon\)-pseudospectrum of the matrix \(A\) for the choice \(w = (w_0, w_1) = (1, 0)\), since
\[
\sigma_{\epsilon,(1,0)}(P) = \left\{ \lambda \in \mathbb{C} : \det(I_n \lambda - (A + \Delta_0)) = 0, \ \|\Delta_0\| \le \epsilon \right\} = \sigma_\epsilon(A).
\]
In the general case, the nonnegative weights \(\{w_j\}_{j=0}^m\) allow freedom in how perturbations are measured; for example, in an absolute sense when \(w_0 = w_1 = \cdots = w_m = 1\), or in a relative sense when \(w_j = \|A_j\|\) (\(j = 0, 1, \dots, m\)). On the other hand, the choice \(\epsilon = 0\) leads to \(\sigma_{0,w}(P) = \sigma(P)\).
From a computational viewpoint, a more convenient characterization [14] (Lemma 2.1) of this set is given by
\[
\sigma_{\epsilon,w}(P) = \left\{ \lambda \in \mathbb{C} : s_{\min}(P(\lambda)) \le \epsilon \, q_w(\lambda) \right\}, \tag{4}
\]
where \(s_{\min}(P(\lambda))\) is the minimum singular value of the matrix \(P(\lambda)\) and the scalar polynomial
\[
q_w(\lambda) = w_m |\lambda|^m + w_{m-1} |\lambda|^{m-1} + \cdots + w_1 |\lambda| + w_0
\]
is defined in terms of the weights \(\{w_j\}_{j=0}^m\) used in the definition (3) of \(\sigma_{\epsilon,w}(P)\). In fact, since the eigenvalues of \(P_\Delta(\lambda)\) are continuous with respect to the entries of its coefficient matrices, the boundary of \(\sigma_{\epsilon,w}(P)\) satisfies
\[
\partial\sigma_{\epsilon,w}(P) \subseteq \left\{ \lambda \in \mathbb{C} : s_{\min}(P(\lambda)) = \epsilon \, q_w(\lambda) \right\};
\]
the equality \(s_{\min}(P(\lambda)) = \epsilon \, q_w(\lambda)\) is satisfied, for some \(\epsilon > 0\), only at a finite number of points \(\lambda \in \mathrm{int}\,\sigma_{\epsilon,w}(P)\).
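The characterization (4) makes membership testing for matrix polynomials essentially as cheap as in the constant matrix case: one SVD of \(P(\lambda)\) plus the evaluation of the scalar polynomial \(q_w\). A minimal sketch (ours), with coefficients ordered from \(A_0\) up to \(A_m\):

```python
import numpy as np

def q_w(lam, w):
    """q_w(lambda) = w_0 + w_1*|lambda| + ... + w_m*|lambda|^m."""
    return sum(wj * abs(lam) ** j for j, wj in enumerate(w))

def eval_P(lam, coeffs):
    """P(lambda) = A_0 + A_1*lambda + ... + A_m*lambda^m; coeffs = [A_0, ..., A_m]."""
    return sum(Aj * lam ** j for j, Aj in enumerate(coeffs))

def in_poly_pseudospectrum(coeffs, w, lam, eps):
    """True iff lam lies in the (eps, w)-pseudospectrum of P, via (4)."""
    smin = np.linalg.svd(eval_P(lam, coeffs), compute_uv=False)[-1]
    return smin <= eps * q_w(lam, w)
```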
Suppose now that we want to approximate an eigenvalue of a matrix polynomial which lies in the neighborhood of some point of interest \(\mu_0 \in \mathbb{C} \setminus \sigma(P)\) of the complex plane. Expression (4) suggests that the derivation of a convergent sequence in Section 2 may be readily adapted for our purposes. Indeed, for every scalar \(\mu_0 \in \mathbb{C}\), there exists some \(\hat{\epsilon}_1 > 0\) such that \(\mu_0 \in \partial\sigma_{\hat{\epsilon}_1,w}(P)\), and then (4) implies \(\hat{\epsilon}_1 = s_{\min}(P(\mu_0)) / q_w(\mu_0)\). Moreover, assuming \(s_{\min}(P(\mu_0))\) is a simple singular value of the matrix \(P(\mu_0)\), we may invoke Theorem 1 to conclude that the function \(g_P : \mathbb{C} \to \mathbb{R}_+\) with \(g_P(z) = s_{\min}(P(z))\) is real analytic in a neighborhood of \(\mu_0 = x_0 + i y_0\). In fact,
\[
\nabla g_P(x_0 + i y_0) = \left( \mathrm{Re}\left( u_{\min}^* \, \frac{\partial P(x_0 + i y_0)}{\partial x} \, v_{\min} \right), \ \mathrm{Re}\left( u_{\min}^* \, \frac{\partial P(x_0 + i y_0)}{\partial y} \, v_{\min} \right) \right),
\]
where \(u_{\min}\) and \(v_{\min}\) denote the left and right singular vectors of \(P(\mu_0)\) associated with \(s_{\min}(P(\mu_0)) = \hat{\epsilon}_1 q_w(\mu_0)\), respectively [13] (Corollary 4.2).
As in the constant matrix case, moving from the initial point \(\mu_0 \in \partial\sigma_{\hat{\epsilon}_1,w}(P)\) towards the interior of \(\sigma_{\hat{\epsilon}_1,w}(P)\) in the direction normal to the curve \(\partial\sigma_{\hat{\epsilon}_1,w}(P)\), the scalar
\[
\mu_1 = \mu_0 - \hat{\epsilon}_1 \cdot \frac{\nabla\left( s_{\min}(P(z)) - \hat{\epsilon}_1 q_w(z) \right)}{\left\| \nabla\left( s_{\min}(P(z)) - \hat{\epsilon}_1 q_w(z) \right) \right\|} \Bigg|_{z = x_0 + i y_0},
\]
with \(\hat{\epsilon}_1 = s_{\min}(P(\mu_0)) / q_w(\mu_0)\), can be considered an estimate of some eigenvalue \(\lambda \in \sigma(P)\). In this way, a sequence \(\{\mu_k\}_{k \in \mathbb{N}}\) converging to the eigenvalue \(\lambda \in \sigma(P)\) is recursively defined, with initial point \(\mu_0\) and general term
\[
\mu_k = \mu_{k-1} - \frac{s_{\min}(P(\mu_{k-1}))}{q_w(\mu_{k-1})} \cdot \frac{\nabla\left( s_{\min}(P(z)) - \hat{\epsilon}_{k-1} q_w(z) \right)}{\left\| \nabla\left( s_{\min}(P(z)) - \hat{\epsilon}_{k-1} q_w(z) \right) \right\|} \Bigg|_{z = \mu_{k-1} = x_{k-1} + i y_{k-1}}. \tag{5}
\]
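A sketch of one step of (5) (ours, under the simplicity assumption of Theorem 1). For an analytic \(P\), \(\partial P/\partial x = P'(z)\) and \(\partial P/\partial y = i P'(z)\), so the gradient of \(s_{\min}(P(z))\), identified with a complex number, is \(\overline{u_{\min}^* P'(z) v_{\min}}\); for \(q_w(z) = \sum_j w_j |z|^j\), the gradient is \((z/|z|)\sum_j j\, w_j |z|^{j-1}\). These identifications, and the helper itself, are ours:

```python
import numpy as np

def poly_step(coeffs, w, mu):
    """One step mu -> mu_next of the recursion (5); coeffs = [A_0, ..., A_m], mu != 0."""
    P  = sum(Aj * mu ** j for j, Aj in enumerate(coeffs))
    dP = sum(j * Aj * mu ** (j - 1) for j, Aj in enumerate(coeffs) if j > 0)
    U, S, Vh = np.linalg.svd(P)
    s, u, v = S[-1], U[:, -1], Vh[-1, :].conj()           # s_min and its singular vectors
    q  = sum(wj * abs(mu) ** j for j, wj in enumerate(w))
    dq = (mu / abs(mu)) * sum(j * wj * abs(mu) ** (j - 1)
                              for j, wj in enumerate(w) if j > 0)
    eps_hat = s / q                                       # level of the current boundary curve
    grad = np.conj(np.vdot(u, dP @ v)) - eps_hat * dq     # gradient of s_min(P(z)) - eps_hat*q_w(z)
    return mu - eps_hat * grad / abs(grad)                # move eps_hat along the inward normal
```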

Numerical Experiments

The steps outlined in Section 2.1.1 are readily modified using the sequences in (5) to yield spectral enclosures for matrix polynomials.
Application 5
([31], Example 3). We consider the \(50 \times 50\) matrix polynomial \(P(\lambda) = A_2 \lambda^2 + A_1 \lambda + A_0\), where
\[
A_2 = I_{50}, \quad A_1 = \mathrm{tridiag}(-3, 9, -3), \quad A_0 = \mathrm{tridiag}(-5, 15, -5),
\]
describing a damped mass-spring system [14,32], and set nonnegative weights \(w = (1, 1, 1)\), measuring perturbations of the coefficient matrices \(\{A_j\}_{j=0}^2\) in an absolute sense. We initiate the procedure with 15 equidistributed initial points \(\{\mu_0^j\}_{j=1}^{15}\) on the semicircle
\[
\left\{ z \in \mathbb{C} : |z| = 15 = \operatorname*{median}_{j=0,1,2} \|A_j\|_1, \ \mathrm{Im}(z) \ge 0 \right\}
\]
and proceed to determine eigenvalue approximating sequences \(\{\mu_k^j\}_{k=0}^{n_j}\) (\(j = 1, \dots, 15\)) according to (5), so that their final terms all lie in the interior of \(\sigma_{0.01,w}(P)\). This computation requires 722 iterations. As in the constant matrix case, interpolation between the values \(\epsilon_k^j\) such that \(\mu_k^j \in \partial\sigma_{\epsilon_k^j,w}(P)\) along the trajectories formed by \(\{\mu_k^j\}_{k=0}^{n_j}\) (\(j = 1, \dots, 15\)) results in approximations of \(\partial\sigma_{\epsilon,w}(P)\) for \(\epsilon = 0.1, 0.2, 0.3, 0.4, 0.5\), as seen in Figure 5a. Note that this yields a sufficiently accurate sketch of \(\sigma_{\epsilon,w}(P)\) and is very competitive when compared to other methods. For instance, Figure 5b is obtained via the procedure in [31] applied to a \(400 \times 400\) grid on the relevant region \(\Omega = [-20, 5] \times [-15, 15]\). This latter approach is far more computationally intensive, requiring 71,575 iterations to visualize \(\sigma_{\epsilon,w}(P)\) for the same tuple of parameters.
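A sketch of this system's coefficients (assuming the tridiagonal reconstruction above, whose off-diagonal signs do not affect the 1-norms) together with a single boundary-level evaluation; poly_step from the previous sketch can then be iterated from any point of the semicircle.

```python
import numpy as np

def tridiag(a, b, c, n):
    """n-by-n tridiagonal matrix with constant diagonals (a, b, c)."""
    return (np.diag(np.full(n - 1, a), -1) + np.diag(np.full(n, b))
            + np.diag(np.full(n - 1, c), 1))

n = 50
A2, A1, A0 = np.eye(n), tridiag(-3.0, 9.0, -3.0, n), tridiag(-5.0, 15.0, -5.0, n)
coeffs, w = [A0, A1, A2], [1.0, 1.0, 1.0]          # absolute perturbation weights

mu = 15j                                           # a point on the initial semicircle
smin = np.linalg.svd(A2 * mu**2 + A1 * mu + A0, compute_uv=False)[-1]
eps_hat = smin / (1 + abs(mu) + abs(mu) ** 2)      # level at which mu sits on a boundary curve
```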
In case a more detailed spectral localization is desired, our method may be adapted, as in Application 1, to identify individual pseudospectrum components. Our next experiment also serves to illustrate the fact that the number of initial trajectories that are attracted by the individual eigenvalues to form the related clusters is intimately connected to eigenvalue sensitivity.
Application 6
([13], Example 5.1). We consider the mass-spring system from ([13], Ex. 5.2) defining the \(3 \times 3\) selfadjoint matrix polynomial
\[
P(\lambda) = A_2 \lambda^2 + A_1 \lambda + A_0 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 5 \end{bmatrix} \lambda^2 + \begin{bmatrix} 0 & 0 & 0 \\ 0 & 3 & -1 \\ 0 & -1 & 6 \end{bmatrix} \lambda + \begin{bmatrix} 2 & -1 & 0 \\ -1 & 3 & 0 \\ 0 & 0 & 10 \end{bmatrix}
\]
and set \(w = (1, 1, 1)\). As in Application 5, computations are restricted exclusively to the closed upper half-plane. However, the close proximity of the eigenvalues \(-0.08 \pm i 1.45\), \(-0.75 \pm i 0.86\), \(-0.51 \pm i 1.25\) (indicated by “*” in Figure 6), as well as the fact that the pair \(\lambda = -0.51 \pm i 1.25\) is less sensitive than the other two, necessitates the use of many initial points. Indeed, as demonstrated in Figure 6a, initiating the procedure with 40 equidistributed initial points on \(\{z \in \mathbb{C} : |z| = \min_j \|A_j\|_1, \ \mathrm{Im}(z) \ge 0\}\) results in \(\sigma(P)\) being under-represented in the resulting clusters. Correctly approximating all three elements of the spectrum in the upper half-plane enforces the use of as many as 80 points on the selected semicircle. The length \(n_j\) of each sequence \(\{\mu_k^j\}_{k=0}^{n_j}\) (\(j = 1, \dots, 80\)) is determined so that all values \(\{s_{\min}(P(\mu_{n_j}^j))\}_{j=1}^{80}\) do not exceed the prefixed parameter value \(\epsilon_0 = 0.01\); this construction involved 1162 singular value computations. Using the squared Euclidean distance as the metric for computing the cluster evaluation criterion, three distinct groups are correctly identified, each converging to a different eigenvalue in the closed upper half-plane, as in Figure 6b. Note that the least sensitive eigenvalue \(\lambda = -0.51 + i 1.25\) ends up attracting only one of these sequences, the corresponding group being a singleton. To correctly sketch the boundaries of \(\sigma_{\epsilon,w}(P)\) for the triple \(\epsilon = 0.24, 0.48, 0.73\) (\(> \epsilon_0 = 0.01\)), we introduce six additional points for each cluster. Indeed, denoting \(c_1, c_2, c_3\) the centroids of the clusters, for each cluster \(j = 1, 2, 3\) we consider the vertices \(\{p_i^j\}_{i=1}^6\) of a canonical hexagon centered at \(c_j\) with maximal diameter equal to \(\min_{i \ne j} \|c_j - c_i\|\). These vertices are indicated by circles in Figure 6b and are used as starting points to construct the additional trajectories indicated by the black lines in Figure 6c. Note that it should be possible to interpolate all three selected parameters \(\epsilon = 0.24, 0.48, 0.73\) along these additional lines as well, which explains why most of these trajectories have also been extended in the opposite direction, modifying the definition of the sequences in (5) in each instance accordingly. The construction of the additional sequences requires 202 singular value computations (leading to a total of 1364 iterations), while the resulting approximations of the pseudospectra boundaries for \(\epsilon = 0.24, 0.48, 0.73\) are depicted in Figure 6c.
Application 7
([13], Example 5.3). This experiment tests the behavior of the method on a damped gyroscopic system described by the \(100 \times 100\) matrix polynomial
\[
P(\lambda) = M \lambda^2 + (G + D) \lambda + K,
\]
with
\[
M = I_{10} \otimes \frac{4 I_{10} + B + B^T}{6} + 1.30 \, \frac{4 I_{10} + B + B^T}{6} \otimes I_{10}, \qquad G = 1.35 \, I_{10} \otimes (B - B^T) + 1.10 \, (B - B^T) \otimes I_{10},
\]
\[
D = \mathrm{tridiag}(-0.1, 0.3, -0.1), \qquad K = I_{10} \otimes (B + B^T - 2 I_{10}) + 1.20 \, (B + B^T - 2 I_{10}) \otimes I_{10},
\]
and \(B\) the \(10 \times 10\) nilpotent matrix having ones on its subdiagonal and zeros elsewhere. Note that \(M, K\) are positive and negative definite, respectively, \(G\) is skew-symmetric, and the tridiagonal \(D\) is a damping matrix.
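A sketch constructing these Kronecker-structured coefficients (assuming the reconstruction of the formulas above, including the signs of the tridiagonal damping matrix):

```python
import numpy as np

I10 = np.eye(10)
B = np.diag(np.ones(9), -1)                        # nilpotent: ones on the subdiagonal
C = (4 * I10 + B + B.T) / 6

M = np.kron(I10, C) + 1.30 * np.kron(C, I10)
G = 1.35 * np.kron(I10, B - B.T) + 1.10 * np.kron(B - B.T, I10)
K = np.kron(I10, B + B.T - 2 * I10) + 1.20 * np.kron(B + B.T - 2 * I10, I10)
D = (np.diag(np.full(99, -0.1), -1) + np.diag(np.full(100, 0.3))
     + np.diag(np.full(99, -0.1), 1))

coeffs = [K, G + D, M]                             # P(lam) = M*lam^2 + (G + D)*lam + K
```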
Starting with 50 points on
\[
\left\{ z \in \mathbb{C} : |z| = 15 = \mathrm{median}\{\|K\|_1, \|G + D\|_1, \|M\|_1\}, \ \mathrm{Im}(z) \ge 0 \right\}
\]
and then 5 additional points on the perpendicular bisector of the line segment defined by the two centroids of the resulting clusters (indicated by the blue circles), the resulting pseudospectrum approximation required 1212 iterations in total with \(\epsilon_0 = 0.002\) and can be seen in Figure 7a. The algorithm in [31] applied to a \(300 \times 300\) grid on the region \(\Omega = [-4, 4] \times [-3, 3]\) required 29,110 iterations to obtain comparable visualizations for the same triple \(\epsilon = 0.004, 0.02, 0.1\) in Figure 7b.
We conclude this section by examining the behavior of the method on a non-symmetric example.
Application 8
([31], Example 2). We consider the \(20 \times 20\) gyroscopic system
\[
P(\lambda) = A_2 \lambda^2 + A_1 \lambda + A_0 = I_{20} \, \lambda^2 + \begin{bmatrix} i I_{10} & 0 \\ 0 & 5 I_{10} \end{bmatrix} \lambda + \begin{bmatrix} 1 & \cdots & 1 \\ \vdots & \ddots & \vdots \\ 1 & \cdots & 1 \end{bmatrix}
\]
(with \(A_0\) the \(20 \times 20\) matrix of all ones) and \(w = (1, 1, 1)\). Starting with 21 points on
\[
\left\{ z \in \mathbb{C} : |z| = 25 = \|A_0\|_1 + \|A_1\|_1, \ \mathrm{Re}(z) \ge 0 \right\}
\]
and \(\epsilon_0 = 0.001\), five clusters are detected (Figure 8a) after 1140 iterations. Then, two additional points are introduced on each of the line segments defined by the centroids of the detected clusters (indicated by the blue circles in Figure 8b), raising the iterations to a total of 2662 in order to determine the 20 corresponding trajectories (indicated by grey lines in Figure 8c). The corresponding visualizations in Figure 8d, obtained via [31] applied to a \(400 \times 400\) grid on the region \(\Omega = [-20, 20] \times [-15, 10]\), required 88,462 iterations.

4. Concluding Remarks

In this note, we have shown how to define sequences which, beginning from arbitrary complex scalars, converge to some element of the spectrum of a matrix. This approach can be applied both to constant matrices and to more general matrix polynomials, and it can be used as a means of obtaining estimates for those eigenvalues that lie in the vicinity of the initial term of the sequence. This construction is also useful when no information on the location of the spectrum is known a priori. In such cases, repeating the construction from arbitrary points, it is possible to detect peripheral eigenvalues, localize the spectrum and even obtain spectral enclosures.
As an application, in this paper we used this construction to compute the pseudospectrum of a matrix or a matrix polynomial. Thus, a useful technique for speeding up pseudospectra computations emerges, which is essential for applications. An advantage of the proposed approach is that it does not make any assumptions on the location of the spectrum. The fact that all computations are performed on dynamically chosen locations on the complex plane which converge to the eigenvalues, rather than on a large number of predefined points on a rigid grid, can be seen as an improvement over conventional grid algorithms.
Parallel implementation of the method can lead to further computational savings when applied to large matrices. Another option would be to apply this method combined with randomization techniques for the selection of the initial points of the sequences. In the large-scale matrix case, this method may be helpful in obtaining a first impression of the shape and size of the pseudospectrum, and even in computing a rough approximation. Then, if desired, this could be used in conjunction with local versions of the grid algorithm and small local meshes about individual areas of interest.

Author Contributions

All authors have equally contributed to the conceptualization of this paper, to software implementation and to the original draft preparation. Funding acquisition and project administration: P.P. All authors have read and agreed to the submitted version of the manuscript.

Funding

This research is carried out/funded in the context of the project “Approximation algorithms and randomized methods for large-scale problems in computational linear algebra” (MIS 5049095) under the call for proposals “Researchers’ support with an emphasis on young researchers–2nd Cycle”. The project is co-financed by Greece and the European Union (European Social Fund—ESF) by the Operational Programme Human Resources Development, Education and Lifelong Learning 2014–2020.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Landau, H.J. On Szegö’s eigenvalue distribution theorem and non-Hermitian kernels. J. Analyse Math. 1975, 28, 335–357.
  2. Varah, J.M. On the separation of two matrices. SIAM J. Numer. Anal. 1979, 16, 216–222.
  3. Wilkinson, J.H. Sensitivity of eigenvalues II. Utilitas Math. 1986, 30, 243–286.
  4. Demmel, J.W. A counterexample for two conjectures about stability. IEEE Trans. Autom. Control 1987, 32, 340–342.
  5. Trefethen, L.N. Approximation theory and numerical linear algebra. In Algorithms for Approximation, II (Shrivenham, 1988); Chapman & Hall: London, UK, 1990; pp. 336–360.
  6. Nachtigal, N.M.; Reddy, S.C.; Trefethen, L.N. How fast are nonsymmetric matrix iterations? SIAM J. Matrix Anal. Appl. 1992, 13, 778–795.
  7. Mosier, R.G. Root neighborhoods of a polynomial. Math. Comput. 1986, 47, 265–273.
  8. Reddy, S.C.; Trefethen, L.N. Lax-stability of fully discrete spectral methods via stability regions and pseudo-eigenvalues. Comput. Methods Appl. Mech. Eng. 1990, 80, 147–164.
  9. Higham, D.J.; Trefethen, L.N. Stiffness of ODEs. BIT Numer. Math. 1993, 33, 285–303.
  10. Davies, E.; Simon, B. Eigenvalue estimates for non-normal matrices and the zeros of random orthogonal polynomials on the unit circle. J. Approx. Theory 2006, 141, 189–213.
  11. Böttcher, A.; Grudsky, S.M. Spectral Properties of Banded Toeplitz Matrices; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2005.
  12. Van Dorsselaer, J.L.M. Pseudospectra for matrix pencils and stability of equilibria. BIT Numer. Math. 1997, 37, 833–845.
  13. Lancaster, P.; Psarrakos, P. On the pseudospectra of matrix polynomials. SIAM J. Matrix Anal. Appl. 2005, 27, 115–129.
  14. Tisseur, F.; Higham, N.J. Structured pseudospectra for polynomial eigenvalue problems with applications. SIAM J. Matrix Anal. Appl. 2001, 23, 187–208.
  15. Trefethen, L.N.; Embree, M. Spectra and Pseudospectra: The Behavior of Nonnormal Matrices and Operators; Princeton University Press: Princeton, NJ, USA, 2005.
  16. Brühl, M. A curve tracing algorithm for computing the pseudospectrum. BIT 1996, 36, 441–454.
  17. Sun, J.-G. A note on simple non-zero singular values. J. Comput. Math. 1988, 6, 235–267.
  18. Kokiopoulou, E.; Bekas, C.; Gallopoulos, E. Computing smallest singular triplets with implicitly restarted Lanczos bidiagonalization. Appl. Numer. Math. 2004, 49, 39–61.
  19. Bekas, C.; Gallopoulos, E. Cobra: Parallel path following for computing the matrix pseudospectrum. Parallel Comput. 2001, 27, 1879–1896.
  20. Bekas, C.; Gallopoulos, E. Parallel computation of pseudospectra by fast descent. Parallel Comput. 2002, 28, 223–242.
  21. Trefethen, L.N. Computation of pseudospectra. Acta Numer. 1999, 8, 247–295.
  22. Duff, I.S.; Grimes, R.G.; Lewis, J.G. Sparse matrix test problems. ACM Trans. Math. Softw. 1989, 15, 1–14.
  23. Kolotilina, L.Y. Lower bounds for the Perron root of a nonnegative matrix. Linear Algebra Appl. 1993, 180, 133–151.
  24. Liu, S.L. Bounds for the greatest characteristic root of a nonnegative matrix. Linear Algebra Appl. 1996, 239, 151–160.
  25. Duan, X.; Zhou, B. Sharp bounds on the spectral radius of a nonnegative matrix. Linear Algebra Appl. 2013, 439, 2961–2970.
  26. Xing, R.; Zhou, B. Sharp bounds on the spectral radius of nonnegative matrices. Linear Algebra Appl. 2014, 449, 194–209.
  27. Liao, P. Bounds for the Perron root of nonnegative matrices and spectral radius of iteration matrices. Linear Algebra Appl. 2017, 530, 253–265.
  28. Elsner, L.; Koltracht, I.; Neumann, M.; Xiao, D. On accurate computations of the Perron root. SIAM J. Matrix Anal. Appl. 1993, 14, 456–467.
  29. Lu, L. Perron complement and Perron root. Linear Algebra Appl. 2002, 341, 239–248.
  30. Dembélé, D. A method for computing the Perron root for primitive matrices. Numer. Linear Algebra Appl. 2021, 28, e2340.
  31. Fatouros, S.; Psarrakos, P. An improved grid method for the computation of the pseudospectra of matrix polynomials. Math. Comput. Model. 2009, 49, 55–65.
  32. Tisseur, F.; Meerbergen, K. The quadratic eigenvalue problem. SIAM Rev. 2001, 43, 235–286.
Figure 1. Pseudospectrum computation for a random \(A \in \mathbb{C}^{6 \times 6}\) with spectral gaps, using 10 initial points.
Figure 2. Pseudospectra computations for a non-symmetric sparse matrix of order 1024 from the Harwell-Boeing collection and \(\epsilon = 10^{-1}, 10^{-1.5}, 10^{-2}\).
Figure 3. Indicative numerical range of a \(500 \times 500\) non-negative matrix and 10 approximating trajectories.
Figure 4. Pseudospectra computation for the Frank matrix of order 32 and \(\epsilon = 0.001, 0.005, 0.01, 0.02, 0.03\). Additional points selected between the endpoints of the initial sequences are denoted by red circles, while eigenvalues are denoted by red stars.
Figure 5. Pseudospectrum computation for a damped mass-spring system. (a) Approximate pseudospectra visualization, interpolating along 15 trajectories of converging sequences. (b) Pseudospectra visualization using the modified grid algorithm in [31].
Figure 6. Pseudospectra computations for a vibrating system.
Figure 7. Comparison of pseudospectra computation for a damped gyroscopic system and \(\epsilon = 0.004, 0.02, 0.1\). (a) Computation using 50 initial points. (b) Computation using the algorithm in [31].
Figure 8. Comparison of pseudospectra computation for a gyroscopic system and \(\epsilon = 0.1, 0.2, 0.4, 0.6, 0.8\). (a) Cluster detection using 21 initial points. (b) Locations of additional points. (c) Pseudospectra visualizations interpolating along the trajectories of 21 initial and 20 additional points. (d) Pseudospectra visualization using the modified grid algorithm in [31].
Table 1. Relative approximation errors for the Perron root and other peripheral eigenvalues of \(500 \times 500\) non-negative matrices.

| # of Iterations | Mean Rel. Error (Perron Root) | Mean Rel. Error (Other Eigenvalues) |
|---|---|---|
| 1 | 0.0011 | 0.4205 |
| 2 | 7.0082 × 10⁻⁷ | 0.1783 |
| 3 | 4.4907 × 10⁻¹⁰ | 0.1030 |
| 4 | 2.8798 × 10⁻¹³ | 0.0680 |
| 5 | 9.2285 × 10⁻¹⁶ | 0.0483 |
Table 2. Number of iterations for different numbers of initial points (\(\epsilon_0 = 0.001\)).

| # of Initial Points | 10 | 15 | 30 |
|---|---|---|---|
| Iterations (initial points) | 11,206 | 18,455 | 35,159 |
| Iterations (additional points) | 16,116 | 14,872 | 11,883 |
| Iterations (total) | 27,322 | 33,327 | 47,042 |
Table 3. Relative approximation errors for the Perron root and other eigenvalues of the Frank matrix of order 32.

| # of Iterations | Mean Rel. Error (Perron Root) | Mean Rel. Error (Intermediate Eigenvalues) | Mean Rel. Error (Leftmost Eigenvalues) |
|---|---|---|---|
| 1 | 0.0916 | 0.2795 | 11.0866 |
| 100 | 0.0373 | 0.1507 | 5.7968 |
| 200 | 0.0192 | 0.1206 | 5.3541 |
| 300 | 0.0103 | 0.1103 | 5.1174 |
| 400 | 0.0055 | 0.0956 | 4.9582 |
| 500 | 0.0029 | 0.0843 | 4.8652 |
