Eigenvalue Estimates via Pseudospectra

Abstract: In this note, given a matrix A ∈ C^{n×n} (or a general matrix polynomial P(z), z ∈ C) and an arbitrary scalar λ_0 ∈ C, we show how to define a sequence {μ_k}_{k∈N} which converges to some element of its spectrum. The scalar λ_0 serves as initial term (μ_0 = λ_0), while additional terms are constructed through a recursive procedure, exploiting the fact that each term μ_k of this sequence is in fact a point lying on the boundary curve of some pseudospectral set of A (or P(z)). Then, the next term in the sequence is detected in the direction which is normal to this curve at the point μ_k. Repeating the construction for additional initial points, it is possible to approximate peripheral eigenvalues, localize the spectrum and even obtain spectral enclosures. Hence, as a by-product of our method, a computationally cheap procedure for approximate pseudospectra computations emerges. An advantage of the proposed approach is that it does not make any assumptions on the location of the spectrum. The fact that all computations are performed on some dynamically chosen locations on the complex plane which converge to the eigenvalues, rather than on a large number of predefined points on a rigid grid, can be used to accelerate conventional grid algorithms. Parallel implementation of the method or use in conjunction with randomization techniques can lead to further computational savings when applied to large-scale matrices.


Introduction
The theory of pseudospectra originates in numerical analysis and can be traced back to Landau [1], Varah [2], Wilkinson [3], Demmel [4], and Trefethen [5], motivated by the need to obtain insights into systems evolving in ways that the eigenvalues alone could not explain. This is especially true in problems where the underlying matrices or linear operators are non-normal or exhibit, in some sense, large deviations from normality. A better understanding of such systems can be gained through the concept of the pseudospectrum, which, for a matrix A ∈ C^{n×n} and a positive parameter ε > 0, was introduced as the subset of the complex plane that is bounded by the ε^{−1}-level set of the norm of the resolvent (μI − A)^{−1}. A second definition, stated in terms of perturbations, characterizes the elements of this set as eigenvalues of some perturbation A + E with ‖E‖ ≤ ε. In this sense, the notion of pseudospectrum provides information that goes beyond eigenvalues, while retaining the advantage of being a natural extension of the spectral set. In fact, for different values of the magnitude ε, the pseudospectrum provides a global perspective on the effects of perturbations; this is in stark contrast to the concept of condition number, where only the worst-case scenario is considered.
On one hand, the pseudospectrum may be used as a visualization tool to reveal information regarding the matrix itself and the sensitivity of its eigenvalues. Applications within numerical analysis include convergence of nonsymmetric matrix iterations [6], backward error analysis of eigenvalue algorithms [7], and stability of spectral methods [8]. On the other hand, it is a versatile tool that has been used to obtain quantitative bounds on the transient behavior of differential equations in finite time, which may deviate from the long-term asymptotic behavior [9]. Important results involving pseudospectra have also been obtained in the context of spectral theory and spectral properties of banded Toeplitz matrices [10,11]. Although emphasis has been placed on the standard eigenproblem, attention has also been drawn to matrix pencils [12] and more general matrix polynomials [13,14] arising in vibrating systems, control theory, etc. For a comprehensive overview of this research field and its applications, the interested reader may refer to [15].
In this note, we propose an application of pseudospectral sets as a means to obtain eigenvalue estimates in the vicinity of some complex scalar. In particular, given a matrix (or a general matrix polynomial) and a scalar λ_0 ∈ C, we construct a sequence {μ_k}_{k∈N} that converges to some element of its spectrum. The scalar λ_0 serves as initial term (μ_0 = λ_0), while additional terms are constructed through an iterative procedure, exploiting the fact that each term μ_k of this sequence is in fact a point lying on the boundary curve of some pseudospectral set. Then, the next term in the sequence is detected in the direction perpendicular to the tangent line at the point μ_k. Repeating the construction for a tuple of initial points encircling the spectrum, several peripheral eigenvalues are approximated. Since the pseudospectrum may be disconnected, this procedure allows the identification of individual connected components and, as a by-product, a convenient and numerically efficient procedure for approximate pseudospectrum computation emerges. Moreover, this approach is clearly amenable to parallelization or randomization and can lead to significant computational savings when applied to problems involving large-scale matrices.
Our paper is organized as follows. In Section 2, we provide the necessary theoretical background on the method and provide examples for the constant matrix case. As confirmed by numerical experiments, the method can provide a sufficiently accurate pseudospectrum computation at a much-reduced computational cost, especially in cases where the spectrum is convexly independent (i.e., each eigenvalue does not lie in the convex hull of the others) or exhibits large eigenvalue gaps. A second application of the method on Perron-root approximation for non-negative matrices is presented. Then, Section 3 shows how the procedure may be modified to estimate the spectrum of more general matrix polynomials. Numerical experiments showcasing the application of the method on damped mass-spring and gyroscopic systems conclude the paper.

Eigenvalues via Pseudospectra
Let the matrix A ∈ C^{n×n} have spectrum σ(A) = {μ ∈ C : det(μI − A) = 0}, where det(·) denotes the determinant of a matrix. With respect to the ‖·‖_2-norm, the pseudospectrum of A is defined by

σ_ε(A) = {μ ∈ C : s_min(μI − A) ≤ ε},

where s_min(·) denotes the smallest singular value of a matrix and ε > 0 is the maximum norm of admissible perturbations.
For every choice of increasing positive parameters 0 < ε_1 < ε_2 < ε_3 < ⋯, a corresponding closed, strictly nested sequence of pseudospectra is obtained. In fact, the respective boundaries satisfy the inclusions

∂σ_{ε_k}(A) ⊂ int σ_{ε_{k+1}}(A), k = 1, 2, ….

It is also clear that, for any λ ∈ σ(A), s_min(λI − A) = 0.
Our objective now is to exploit the properties of these sets to detect an eigenvalue of A in the vicinity of a given scalar λ_0 ∈ C∖σ(A). This given point of interest may be considered to lie on the boundary of some pseudospectral set; i.e., there exists some positive parameter ε̂_1 > 0 such that λ_0 ∈ ∂σ_{ε̂_1}(A). Recall that

∂σ_ε(A) ⊆ {μ ∈ C : s_min(μI − A) = ε}. (1)

Indeed, points satisfying the equality s_min(μI − A) = ε for some ε > 0 and lying in the interior of σ_ε(A) are finite in number. Thus, in the generic case, we may think of the inclusion (1) as an equality, so that ε̂_1 = s_min(λ_0 I − A). We consider the real-valued function g_A : C → R_+ with g_A(z) = s_min(zI − A). In the process of formulating a curve-tracing algorithm for pseudospectrum computation [16], Brühl analyzed g_A(z) and, identifying C ≡ R^2, noted that its differentiability is explained by the following theorem from [17]:

Theorem 1. Let the matrix-valued function P(χ) : R^d → C^{n×n} be real analytic in a neighborhood of χ_0 = (x_1^0, …, x_d^0) and let σ_0 be a simple nonzero singular value of P(χ_0) with u_0, v_0 its associated left and right singular vectors, respectively.
Then, there exists a neighborhood N of χ_0 on which a simple nonzero singular value σ(χ) of P(χ) is defined with corresponding left and right singular vectors u(χ) and v(χ), respectively, such that σ(χ_0) = σ_0, u(χ_0) = u_0, v(χ_0) = v_0 and the functions σ, u, v are real analytic on N. The partial derivatives of σ(χ) are given by

∂σ/∂x_j (χ) = Re( u(χ)^* (∂P(χ)/∂x_j) v(χ) ), j = 1, …, d.

Hence, recalling (1) and assuming ε̂_1 is a simple singular value of the matrix P(λ_0) = λ_0 I − A, then, identifying C ≡ R^2,

∇g_A(λ_0) = (Re(u_min^* v_min), −Im(u_min^* v_min)) ≡ v_min^* u_min,

where u_min and v_min denote the left and right singular vectors of λ_0 I − A associated to ε̂_1. On the other hand, if λ is an eigenvalue of A near λ_0, it holds |λ − λ_0| ≥ ε̂_1. The latter observation follows from the fact that

σ(A) + D(0, ε) ⊆ σ_ε(A),

where D(0, ε) = {z ∈ C : |z| ≤ ε} and equality holds for normal matrices. So, the scalar

μ_1 = λ_0 − ε̂_1 · ∇g_A(λ_0)/|∇g_A(λ_0)|

can be considered to be an estimate of the eigenvalue λ. In particular, λ_0 ∈ ∂σ_{ε̂_1}(A) and μ_1 lies in the interior of σ_{ε̂_1}(A). Moreover, the sequence

μ_{k+1} = μ_k − ε̂_{k+1} · ∇g_A(μ_k)/|∇g_A(μ_k)|, where ε̂_{k+1} = s_min(μ_k I − A), k = 0, 1, 2, …, (2)

converges to λ. The above process requires the computation of the triplet (s_min, u_min, v_min) at every point μ_k; see [18].
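For illustration, the iteration (2) can be sketched in a few lines of NumPy. The function names, the stopping tolerance and the iteration cap below are our own illustrative choices, not part of the method's description; the gradient is formed as v_min^* u_min, identified with a complex number.

```python
import numpy as np

def smin_triplet(M):
    """Smallest singular value of M together with its left/right singular vectors."""
    U, S, Vh = np.linalg.svd(M)           # singular values come sorted descending
    return S[-1], U[:, -1], Vh[-1, :].conj()

def eigenvalue_sequence(A, mu0, tol=1e-10, max_iter=200):
    """Iterate (2): mu_{k+1} = mu_k - eps_{k+1} * grad/|grad|, with
    eps_{k+1} = s_min(mu_k I - A) and grad = v_min^* u_min."""
    I = np.eye(A.shape[0])
    mu, path = mu0, [mu0]
    for _ in range(max_iter):
        eps, u, v = smin_triplet(mu * I - A)
        if eps < tol:                      # mu is (numerically) an eigenvalue
            break
        grad = np.vdot(v, u)               # v_min^* u_min as a complex number
        mu = mu - eps * grad / abs(grad)   # step of length eps along the inward normal
        path.append(mu)
    return mu, path
```

For a normal matrix the very first step already lands on the nearest eigenvalue, since in that case s_min(μ_0 I − A) equals the distance of μ_0 to the spectrum.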
Remark. To avoid the computational burden of computing the (left and right) singular vectors, a cheaper alternative would be to consider at each iteration (k = 0, 1, 2, …) the canonical octagon with vertices

p_{k,j} = μ_k + ε̂_{k+1} e^{ijπ/4}, j = 0, 1, …, 7,

instead, and simply compute θ_{k,j} = s_min(p_{k,j} I − A), j = 0, 1, …, 7, taking as μ_{k+1} the vertex that attains the smallest value θ_{k,j}.
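Assuming the minimizing vertex is taken as the next iterate, this derivative-free variant could be sketched as follows (names illustrative):

```python
import numpy as np

def octagon_step(A, mu, eps):
    """Derivative-free step: evaluate s_min at the vertices of a canonical
    octagon of radius eps centered at mu and move to the minimizing vertex."""
    I = np.eye(A.shape[0])
    vertices = [mu + eps * np.exp(1j * j * np.pi / 4) for j in range(8)]
    thetas = [np.linalg.svd(p * I - A, compute_uv=False)[-1] for p in vertices]
    j_best = int(np.argmin(thetas))
    return vertices[j_best], thetas[j_best]
```

Only eight smallest-singular-value evaluations are needed per step, with no singular vectors required.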

Pseudospectrum Computation
The approximating sequences in (2) may be utilized to implement a computationally cheap procedure to visualize matrix pseudospectra, at least in cases where the order of the matrix is small or when its spectrum exhibits large eigenvalue gaps. Several related techniques for pseudospectrum computation have appeared in the literature. These fall largely into two categories: grid [14] and path-following algorithms [16,19-21]. Grid algorithms begin by evaluating the function s_min(zI − A) on a predefined grid on the complex plane and lead to a graphical visualization of the boundary ∂σ_ε(A) by plotting the ε-contours of s_min(zI − A). This approach faces two severe challenges; namely, the requirement of a-priori information on the location of the spectrum to correctly identify a suitable region to discretize, as well as the typically large number of grid points upon which the computations have to be performed. Path-following algorithms, on the other hand, require an initial step to detect a starting point on the curve ∂σ_ε(A) and then proceed to compute additional boundary points for each connected component of σ_ε(A). The main drawbacks of this latter approach lie in the difficulty in performing the initial step and the need to correctly identify every connected component of σ_ε(A) in order to repeat the procedure and properly trace its boundary. Moreover, cases where pseudospectrum computation is required for a whole tuple of parameters drastically compromise the efficiency of path-following algorithms.
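For contrast, the kernel of the classical grid approach can be sketched in a few lines of NumPy (names illustrative). Its cost is one smallest-singular-value computation per mesh node, which is precisely what the proposed trajectories avoid:

```python
import numpy as np

def grid_smin(A, re, im):
    """Grid algorithm kernel: tabulate s_min(zI - A) on a fixed rectangular
    mesh; the eps-level contours of the result trace the boundary of
    the eps-pseudospectrum."""
    I = np.eye(A.shape[0])
    smin = np.empty((len(im), len(re)))
    for r, y in enumerate(im):
        for c, x in enumerate(re):
            smin[r, c] = np.linalg.svd((x + 1j * y) * I - A,
                                       compute_uv=False)[-1]
    return smin
```

A 400 × 400 mesh, as in some of the comparisons below, already amounts to 160,000 SVD calls.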
Our approach is to use the approximating sequences (2) to decrease the number of singular value evaluations and therefore speed up the computation of pseudospectra. The basic steps are outlined as follows:

i. Select a tuple of initial points {μ_0^j}_{j=1}^s ⊂ C encircling the spectrum; for instance, these can be chosen on the circle {z ∈ C : |z| = ‖A‖}.

ii. Construct eigenvalue approximating sequences {μ_k^j}_{k=0}^{n_j} (j = 1, …, s), as in (2), terminating each one as soon as s_min(μ_{n_j}^j I − A) ≤ ε_0 (j = 1, …, s), where ε_0 is some prefixed parameter value. In other words, ε_0 indicates the tolerance with which the eigenvalues approached by the constructed sequences should be approximated and corresponds to the minimum parameter for which pseudospectra will be computed.

iii. Classify the sequences into distinct clusters, according to the proximity of their final terms. This step may be performed using a k-means clustering algorithm, using a suitable criterion to evaluate the optimal number of groups.

iv. For each desired value ε > ε_0, approximate boundary points of ∂σ_ε(A) by interpolating along the constructed trajectories, where the parameter values ε_k^j with μ_k^j ∈ ∂σ_{ε_k^j}(A) are already available.

The proposed method successfully localizes the spectrum, initiating the procedure with a restricted number of points. Then, singular value computations are kept to a minimum by considering points only on the constructed sequences. Pseudospectrum components corresponding to peripheral eigenvalues λ ∉ co(σ(A)∖{λ}), i.e., eigenvalues not in the convex hull of the others, are thus extremely easy to identify. This approach is also well-suited to cases where the matrix has a convexly independent spectrum; i.e., when λ ∉ co(σ(A)∖{λ}) for every λ ∈ σ(A). Moreover, it is clearly amenable to parallelization, which could lead to significant computational savings in cases of large matrices.

Application 1. We consider a random matrix A ∈ C^{6×6}, the sole constraint being that its eigenvalues are distant from each other; the real and imaginary parts of its entries follow the standardized normal distribution scaled by 10^4. For the proposed procedure, we select ten initial points and construct the corresponding sequences until the associated parameter values do not exceed ε_0 = 0.5. The sequences are organized into distinct clusters, grouping together those sequences which approximate the same element of σ(A). This grouping is performed using a k-means clustering algorithm, where the optimal number of clusters is evaluated via the silhouette criterion and using a distance metric based on the sum of absolute differences between points.
Since six different groups are identified, clearly all elements of σ(A) have been sufficiently approximated by at least one of the sequences. For an illustration, refer to Figure 1a; different colors have been used to differentiate between polygonal chains corresponding to distinct clusters. The construction so far required 914 singular value computations. Having calculated all parameter values ε_k^j such that μ_k^j ∈ ∂σ_{ε_k^j}(A) during the previous procedure, it is possible to interpolate between these known points along the trajectories formed by {μ_k^j}_{k=0}^{n_j} (j = 1, …, 10) to approximate boundary points of ∂σ_ε(A) for selected values ε > ε_0 = 0.5. Since all ten trajectories converge to eigenvalues from points encircling σ(A), to obtain better pseudospectra approximations, it is necessary to repeat the procedure for additional suitably selected points. Hence, for each cluster we consider three additional points; see Figure 1b. In particular, denoting by c_1, …, c_6 the centroids of the clusters, for each j we consider the three centroids which lie closest to c_j and take suitable convex combinations of c_j with each of them. Then, additional sequences corresponding to these extra points are constructed, so that the desired parameter values of ε for which pseudospectra should be computed (in this instance, the triple ε = 1, 5, 10) may be interpolated within these trajectories, as for the ten initial ones. This imposes an extra cost of 1170 additional singular value computations (2084 in total). The resulting approximations of the pseudospectra components identified by the upper-left-corner trajectories for ε = 1, 5, 10 are depicted in greater detail in Figure 1c; the relevant eigenvalue is indicated by "*".
An advantage of this procedure is that it does not require some a-priori knowledge of the initial region Ω on the complex plane where the spectrum is located. In fact, the very nature of this specific example, whose spectrum covers a wide area Ω, would render computations on a suitable grid impractical. Another way in which this method diverges from conventional grid algorithms is in that the computations are performed on a dynamically chosen set of points, iteratively selected as the corresponding trajectories converge to peripheral eigenvalues and identify the relevant pseudospectrum components, rather than on a large number of predefined points on a rigid grid.
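The proximity-based grouping of final terms (step iii above) can also be mimicked without a full k-means implementation; the following single-linkage-style sketch, with an illustrative tolerance, conveys the idea under the assumption that distinct eigenvalue clusters are well separated:

```python
def cluster_final_terms(finals, tol):
    """Group the final terms of the trajectories by proximity: two terms share
    a cluster when they lie within tol of each other (a simple stand-in for
    the k-means/silhouette step described in the text)."""
    clusters = []
    for z in finals:
        for c in clusters:
            if any(abs(z - w) <= tol for w in c):
                c.append(z)
                break
        else:
            clusters.append([z])
    return [sum(c) / len(c) for c in clusters]   # one centroid per cluster
```

The returned centroids can then serve as the points c_j around which the additional trajectories are started.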

Application 2.
To demonstrate how the procedure works in cases of larger matrices, in this application we examine the matrix A = 10^{−7} · Pores2, where Pores2 is a 1024 × 1024 matrix from the Harwell-Boeing sparse matrix collection [22] related to a non-symmetric computational fluid dynamics problem. Here, the factor 10^{−7} is used for scaling purposes and is related to the order of the ‖·‖_1-norm of the matrix under consideration. Initiating the procedure with 30 equidistributed points on the circle {z ∈ C : |z| = (1/2)‖A‖_1}, the method required a total of 810 singular value computations for a minimum parameter value of ε_0 = 0.005; the resulting pseudospectra visualizations for ε = 10^{−1}, 10^{−1.5}, 10^{−2} are depicted in Figure 2. For this example, we have opted not to introduce additional points.

Perron root computation. Applications of non-negative matrices, i.e., matrices with exclusively non-negative real entries, abound in such diverse fields as probability theory, dynamical systems, Internet search engines, tournament matrices, etc. In this context, the dominant eigenvalue of non-negative matrices, also referred to as the Perron root, is of central importance. Localization of the Perron root has been extensively studied in the literature; relevant bounds can be found in [23-27]. Its computation is typically carried out using the power method; the convergence rate of this approach depends on the relative magnitudes of the two dominant eigenvalues. Relevant methods have appeared in [28-30], among others. As a second application of the approximating sequences (2), the following experiment reports an elegant way of approximating Perron roots.

Application 3.
For this experiment, we considered a tuple of 50 non-negative matrices {A_ℓ}_{ℓ=1}^{50} ⊂ R_+^{500×500} with uniformly distributed entries in (0, 50). The symmetry of σ(A_ℓ) with respect to the real axis suggests that it suffices to restrict the computations exclusively to the closed upper half-plane. Hence, for each of the matrices A_ℓ, we initiated the construction of the sequences (2) from equidistributed initial terms {μ_0^{ℓ,j}}_{j=1}^{10} ⊂ {z ∈ C : |z| = 10^4 · ‖A_ℓ‖_1, Im(z) ≥ 0} (ℓ = 1, …, 50). As expected, the rightmost of these points formed sequences converging to the Perron root of A_ℓ, while each of the remaining ones approximated some other peripheral eigenvalue. In the generic case, the magnitude of the second largest eigenvalue of A_ℓ was much smaller than the Perron root. The corresponding iteration counts are reported in Table 1, verifying that a reliable estimate for the Perron root may in the generic case be obtained after the computation of as few as 3 terms in the corresponding trajectories.
The remaining (10 − s_ℓ) sequences converge to some other peripheral eigenvalues λ_{ℓ,1}, λ_{ℓ,2}, … ∈ σ(A_ℓ), reasonable approximations of which require a rather larger number of iterations, as reported in the second column of Table 1. Application 3 suggests that any reasonable upper bound μ_0 ∈ R suffices to yield reliable estimates for the Perron root after computation of only 2-3 terms in the sequence (2).
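A minimal sketch of this use of the sequence (2), assuming the real upper bound μ_0 = ‖A‖_1 as starting point and with an illustrative function name, may read:

```python
import numpy as np

def perron_estimate(A, iters=3):
    """Estimate the Perron root of a nonnegative matrix with a few terms of
    the sequence (2), started from the real upper bound mu_0 = ||A||_1."""
    I = np.eye(A.shape[0])
    mu = np.linalg.norm(A, 1) + 0j        # induced 1-norm: max column sum
    for _ in range(iters):
        U, S, Vh = np.linalg.svd(mu * I - A)
        eps, u, v = S[-1], U[:, -1], Vh[-1, :].conj()
        grad = np.vdot(v, u)              # v_min^* u_min
        mu = mu - eps * grad / abs(grad)  # one step of (2)
    return mu
```

Since the iterates approach the Perron root monotonically from the right along the real axis, a handful of steps typically suffices, in line with Table 1.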
The previous experiment may seem excessively optimistic. Indeed, there can be instances when the situation is much more demanding.

Application 4.
The Frank matrix is well known to have ill-conditioned eigenvalues. For this application, we test the behavior of the proposed method on the Frank matrix of order 32, whose normalized matrix of eigenvectors has condition number 7.81 × 10^11. Figure 4 depicts the resulting pseudospectra visualizations for ε = 0.001, 0.005, 0.01, 0.02, 0.03, initiating the procedure from 30 points located on the upper semi-ellipse centered at (40, 0) with semi-major and semi-minor axis lengths equal to 70 and 15, respectively. The depicted trajectories were constructed so that the final terms in each polygonal chain lie within σ_{0.001}(A). Then, according to the distances of the final terms of consecutive sequences, at most two additional points are introduced on the line segment connecting these respective final terms. The necessary iterations for the construction of the relevant sequences are reported in Table 2 for different numbers of initial points.
The approximating quality of the sequences is much compromised compared to the generic case, requiring many more iterations, especially for the eigenvalues with the smallest real parts; these are also the most ill-conditioned ones. In fact, the seven rightmost sequences converging to the Perron root (refer to Figure 4) display the fastest convergence, the second group of thirteen sequences leading to the intermediate eigenvalues is somewhat more compromised, while the leftmost sequences naturally exhibit even more diminished approximation quality. Mean relative approximation errors for these three groups are reported in Table 3.

For the numerical experiments in this section, we have restricted ourselves to initial points encircling the spectrum. Another option would be to use our method in tandem with randomization techniques for the initial point selection.

Matrix Polynomials
The derivation of eigenvalue approximating sequences may be readily extended to account for the general matrix polynomial case

P(λ) = A_m λ^m + A_{m−1} λ^{m−1} + ⋯ + A_1 λ + A_0,

where λ ∈ C and A_j ∈ C^{n×n} (j = 0, 1, …, m), with A_m ≠ 0. Recall that the spectrum of P(λ) is the set of all its eigenvalues; i.e., σ(P) = {λ ∈ C : det P(λ) = 0}. For a scalar λ_0 ∈ σ(P), the nonzero solutions v_0 ∈ C^n to the system P(λ_0)v_0 = 0 are the eigenvectors of P(λ) corresponding to λ_0.
The ε-pseudospectrum of P(λ) was introduced in [14] for a given parameter ε > 0 and a set of nonnegative weights w ∈ R_+^{m+1} as the set

σ_{ε,w}(P) = {λ ∈ C : det P_Δ(λ) = 0, ‖Δ_j‖ ≤ ε w_j, j = 0, 1, …, m} (3)

of eigenvalues of all admissible perturbations P_Δ(λ) of P(λ) of the form

P_Δ(λ) = (A_m + Δ_m)λ^m + ⋯ + (A_1 + Δ_1)λ + (A_0 + Δ_0),

where the norms of the matrices Δ_j ∈ C^{n×n} (j = 0, 1, …, m) satisfy the specified (ε, w)-related constraints. In contrast to the constant matrix case, a whole tuple of perturbing matrices Δ_j is involved, which explains the presence of the additional parameter vector w in the definition of σ_{ε,w}(P). However, considering for some A ∈ C^{n×n} the pencil P(λ) = I_n λ − A, note that (3) reduces to the usual ε-pseudospectrum of the matrix A ∈ C^{n×n} for the choice of weights w = {w_0, w_1} = {1, 0}. In the general case, the nonnegative weights {w_j}_{j=0}^m allow freedom in how perturbations are measured; for example, in an absolute sense when w_0 = w_1 = ⋯ = w_m = 1, or in a relative sense when w_j = ‖A_j‖ (j = 0, 1, …, m). On the other hand, the choice ε = 0 leads to σ_{0,w}(P) = σ(P).
From a computational viewpoint, a more convenient characterization [14] (Lemma 2.1) for this set is given by

σ_{ε,w}(P) = {λ ∈ C : s_min(P(λ)) ≤ ε q_w(|λ|)},

where s_min(P(λ)) is the minimum singular value of the matrix P(λ) and the scalar polynomial

q_w(|λ|) = w_m |λ|^m + ⋯ + w_1 |λ| + w_0

is defined in terms of the weights {w_j}_{j=0}^m used in the definition (3) of σ_{ε,w}(P). In fact, since the eigenvalues of P_Δ(λ) are continuous with respect to the entries of its coefficient matrices, the boundary of σ_{ε,w}(P) satisfies

∂σ_{ε,w}(P) ⊆ {λ ∈ C : s_min(P(λ)) = ε q_w(|λ|)}; (4)

the equality s_min(P(λ)) = ε q_w(|λ|) is satisfied for some ε > 0 only for a finite number of points λ ∈ int(σ_{ε,w}(P)). Suppose now that we want to approximate an eigenvalue of a matrix polynomial which lies in the neighborhood of some point of interest μ_0 ∈ C∖σ(P) on the complex plane. Expression (4) suggests that the derivation of a convergent sequence in Section 2 may be readily adapted for our purposes. Indeed, for every scalar μ_0 ∈ C, there exists some ε̂_1 > 0 such that μ_0 ∈ ∂σ_{ε̂_1,w}(P), and then (4) implies

ε̂_1 = s_min(P(μ_0)) / q_w(|μ_0|).

Moreover, assuming s_min(P(μ_0)) is a simple singular value of the matrix P(μ_0), we may invoke Theorem 1 to conclude that the function g_P : C → R_+ with g_P(z) = s_min(P(z)) is real analytic in a neighborhood of μ_0 = x_0 + iy_0. In fact, identifying C ≡ R^2,

∇g_P(μ_0) = (Re(u_min^* P′(μ_0) v_min), −Im(u_min^* P′(μ_0) v_min)),

where u_min and v_min denote the left and right singular vectors of P(μ_0) associated to s_min(P(μ_0)) = ε̂_1 q_w(|μ_0|), respectively [13] (Corollary 4.2). As in the constant matrix case, moving from the initial point μ_0 ∈ ∂σ_{ε̂_1,w}(P) towards the interior of σ_{ε̂_1,w}(P) in the direction normal to the curve ∂σ_{ε̂_1,w}(P), the scalar

μ_1 = μ_0 − ε̂_1 q_w(|μ_0|) · ∇[s_min(P(z)) − ε̂_1 q_w(|z|)]|_{z=μ_0} / | ∇[s_min(P(z)) − ε̂_1 q_w(|z|)]|_{z=μ_0} |,

with ε̂_1 = s_min(P(μ_0))/q_w(|μ_0|), can be considered to be an estimate of some eigenvalue λ ∈ σ(P).
In this way, a convergent sequence {μ_k}_{k∈N} to the eigenvalue λ ∈ σ(P) is recursively defined with initial point μ_0 and general term

μ_{k+1} = μ_k − ε̂_{k+1} q_w(|μ_k|) · ∇[s_min(P(z)) − ε̂_{k+1} q_w(|z|)]|_{z=μ_k} / | ∇[s_min(P(z)) − ε̂_{k+1} q_w(|z|)]|_{z=μ_k} |, where ε̂_{k+1} = s_min(P(μ_k))/q_w(|μ_k|), k = 0, 1, 2, …. (5)
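A plausible NumPy rendering of one step of (5) follows; the names are illustrative, the gradient of s_min(P(z)) is formed via Theorem 1 as the conjugate of u_min^* P′(z) v_min, and the gradient of q_w(|z|) is q_w′(|z|)·z/|z|:

```python
import numpy as np

def poly_step(coeffs, w, mu):
    """One step of (5) for P(z) = sum_j A_j z^j with weight vector w.
    coeffs = [A_0, A_1, ..., A_m]; returns the next iterate and eps_hat."""
    P  = sum(A * mu**j for j, A in enumerate(coeffs))
    dP = sum(j * A * mu**(j - 1) for j, A in enumerate(coeffs) if j > 0)
    U, S, Vh = np.linalg.svd(P)
    smin, u, v = S[-1], U[:, -1], Vh[-1, :].conj()
    qw  = sum(wj * abs(mu)**j for j, wj in enumerate(w))
    dqw = sum(j * wj * abs(mu)**(j - 1) for j, wj in enumerate(w) if j > 0)
    eps = smin / qw                       # parameter with mu on the boundary
    # gradient of s_min(P(z)) - eps*q_w(|z|) at z = mu, as a complex number
    grad = np.conj(np.vdot(u, dP @ v)) - eps * dqw * mu / abs(mu)
    return mu - smin * grad / abs(grad), eps
```

For the pencil P(z) = Iz − A with w = {1, 0}, the step reduces to the constant-matrix iteration (2), which provides a simple sanity check.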

Numerical Experiments
The steps outlined in Section 2.1.1 are readily modified using the sequences in (5) to yield spectral enclosures for matrix polynomials.

Application 5 ([31], Example 3).
We consider the 50 × 50 matrix polynomial P(λ) of [31] (Example 3); the constructed trajectories are depicted in Figure 5a. Note this yields a sufficiently accurate sketch of σ_{ε,w}(P) and is very competitive when compared to other methods. For instance, Figure 5b is obtained via the procedure in [31] applied to a 400 × 400 grid on the relevant region Ω = [−20, 5] × [−15, 15]. This latter approach is far more computationally intensive, requiring 71,575 iterations to visualize σ_{ε,w}(P) for the same tuple of parameters.
In case a more detailed spectral localization is desired, our method may be adapted, as in Application 1, to identify individual pseudospectrum components. Our next experiment also serves to illustrate the fact that the number of initial trajectories that are attracted by the individual eigenvalues to form the related clusters is intimately connected to eigenvalue sensitivity.

Application 6 ([13], Example 5.1). We consider the mass-spring system from [13] defining the 3 × 3 selfadjoint matrix polynomial and set w = {1, 1, 1}. As in Application 5, computations are restricted exclusively to the closed upper half-plane. However, the close proximity of the eigenvalues −0.08 ± i1.45, −0.75 ± i0.86, −0.51 ± i1.25 (indicated by "*" in Figure 6), as well as the fact that the pair λ = −0.51 ± i1.25 is less sensitive than the other two, necessitates the use of many initial points. Indeed, as demonstrated in Figure 6a, initiating the procedure with 40 equidistributed initial points on {z ∈ C : |z| = min_j ‖A_j‖_1, Im(z) ≥ 0} results in σ(P) being under-represented in the resulting clusters. Correctly approximating all three elements of the spectrum on the upper half-plane enforces the use of as many as 80 points on the selected semicircle. The length n_j of each sequence is again determined by the prescribed tolerance ε_0 = 0.01. Using the squared Euclidean distance as the metric for computing the cluster evaluation criterion, three distinct groups are correctly identified, each converging to a different eigenvalue in the closed upper half-plane, as in Figure 6b. Note that the least sensitive eigenvalue λ = −0.51 + i1.25 ends up attracting only one of these sequences, the corresponding group being a singleton. To correctly sketch the boundaries of σ_{ε,w}(P) for the triple ε = 0.24, 0.48, 0.73 (> ε_0 = 0.01), we introduce six additional points for each cluster.
Indeed, denoting by c_1, c_2, c_3 the centroids of the clusters, for each cluster j = 1, 2, 3 we consider the vertices {p_i^j}_{i=1}^6 of a canonical hexagon centered at c_j with maximal diameter equal to min_{i≠j} ‖c_j − c_i‖. These vertices are indicated by circles in Figure 6b and are used as starting points to construct the additional trajectories indicated by the black lines in Figure 6c. Note that it should be possible to interpolate all three selected parameters ε = 0.24, 0.48, 0.73 along these additional lines as well, which explains why most of these trajectories have been extended in the opposite direction too, modifying the definition of the sequences in (5) in each instance accordingly. The construction of the additional sequences requires 202 singular value computations (leading to a total of 1364 iterations), while the resulting approximations of the pseudospectra boundaries for ε = 0.24, 0.48, 0.73 are depicted in Figure 6c.

Figure 6. (a) Partial spectral identification, due to close eigenvalue proximity. (b) Complete spectral identification with an increased number of initial points.

We conclude this section by examining the behavior of the method on a non-symmetric example.

Concluding Remarks
In this note, we have shown how to define sequences which, beginning from arbitrary complex scalars, converge to some element of the spectrum of a matrix. This approach can be applied both to constant matrices and to more general matrix polynomials and can be used as a means to obtain estimates for those eigenvalues that lie in the vicinity of the initial term of the sequence. This construction is also useful when no information on the location of the spectrum is a priori known. In such cases, repeating the construction from arbitrary points, it is possible to detect peripheral eigenvalues, localize the spectrum and even obtain spectral enclosures.
As an application, in this paper we used this construction to compute the pseudospectrum of a matrix or a matrix polynomial. Thus, a useful technique for speeding up pseudospectra computations emerges, which is essential for applications. An advantage of the proposed approach is that it does not make any assumptions on the location of the spectrum. The fact that all computations are performed on some dynamically chosen locations on the complex plane which converge to the eigenvalues, rather than on a large number of predefined points on a rigid grid, can be seen as an improvement over conventional grid algorithms.
Parallel implementation of the method can lead to further computational savings when applied to large matrices. Another option would be to apply this method combined with randomization techniques for the selection of the initial points of the sequences. In the large-scale matrix case, this method may be helpful in obtaining a first impression of the shape and size of the pseudospectrum and even computing a rough approximation. Then, if desired, this could be used in conjunction with local versions of the grid algorithm and small local meshes about individual areas of interest.