Clustering/Distribution Analysis and Preconditioned Krylov Solvers for the Approximated Helmholtz Equation and Fractional Laplacian in the Case of Complex-Valued, Unbounded Variable Coefficient Wave Number µ

Abstract: We consider the Helmholtz equation and the fractional Laplacian in the case of a complex-valued, unbounded variable-coefficient wave number µ, approximated by finite differences. In a recent analysis, singular value and eigenvalue clustering results have been obtained for a τ preconditioning when the variable-coefficient wave number µ is uniformly bounded. Here, we extend the analysis to the unbounded case, focusing on the case of a power singularity. Several numerical experiments concerning the spectral behavior and the convergence of the related preconditioned GMRES are presented.

[…], where Γ(·) is the Gamma function. More explicitly, our problem consists in finding fast solvers for the numerical approximation of a two-dimensional nonlocal Helmholtz equation with fractional Laplacian, described by the equations

(−∆)^{α/2} u(x, y) + µ(x, y) u(x, y) = v(x, y), (x, y) ∈ Ω ⊂ R², α ∈ (1, 2),
u(x, y) = 0, (x, y) ∈ Ω^c, (1)

with a given variable-coefficient, complex-valued wave number µ = µ(x, y) and source term v. Here, Ω is taken to be [0, 1]² ⊂ R² and Ω^c is the complement of Ω. In what follows, µ(x, y) = […] for every i, j ∈ Z. The discrete version of the fractional Laplacian in such a setting is given by (−∆_h)^{α/2} u(x, y) := […], where the b's are the Fourier coefficients of the function t_α and i is the imaginary unit. Proceeding as in [1], we trace the original problem back to solving the linear system […], where B_n = (1/h^α) B̂_n and B̂_n is the two-level symmetric Toeplitz matrix generated by t_α(η, Ψ), i.e., B̂_n = T_n(t_α). For the sake of simplicity, the previous equation is rewritten in the following scaled form: […]. For the two-level notation and the theory regarding Toeplitz structures, we refer to [3].
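The two-level Toeplitz part B̂_n = T_n(t_α) can be assembled directly from the Fourier coefficients of the generating function. The following is a minimal sketch, assuming the standard symbol t_α(θ₁, θ₂) = (4 sin²(θ₁/2) + 4 sin²(θ₂/2))^{α/2} (a hypothesis consistent with max t_α = 2^{3α/2} reported in the figure captions); the coefficients are approximated by a 2D FFT on a fine grid:

```python
import numpy as np

def symbol_t_alpha(t1, t2, alpha):
    # assumed form of the generating function t_alpha (nonnegative, even,
    # with a unique zero of order alpha at the origin)
    return (4 * np.sin(t1 / 2) ** 2 + 4 * np.sin(t2 / 2) ** 2) ** (alpha / 2)

def fourier_coeffs(alpha, n, m=512):
    # approximate Fourier coefficients b_{j,k}, |j|, |k| < n, via a 2D FFT
    th = 2 * np.pi * np.arange(m) / m
    T1, T2 = np.meshgrid(th, th, indexing="ij")
    C = np.fft.fft2(symbol_t_alpha(T1, T2, alpha)).real / m**2
    idx = lambda j: j % m  # wraparound gives the negative-index coefficients
    return np.array([[C[idx(j), idx(k)] for k in range(-n + 1, n)]
                     for j in range(-n + 1, n)])  # b[j+n-1, k+n-1] = b_{j,k}

def two_level_toeplitz(alpha, n):
    # assemble T_n(t_alpha): outer (block) level indexed by j-k,
    # inner level indexed by p-q
    b = fourier_coeffs(alpha, n)
    A = np.zeros((n * n, n * n))
    for j in range(n):
        for k in range(n):
            for p in range(n):
                for q in range(n):
                    A[j * n + p, k * n + q] = b[j - k + n - 1, p - q + n - 1]
    return A
```

Since the assumed symbol is real, even, and nonnegative, the resulting matrix is symmetric with eigenvalues in the range of t_α.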
In the case where µ(x, y) = 1/(x + iy)^γ, we can give sufficient conditions on the coefficient γ, depending on α, in order to guarantee that {h^α D_n(µ)}_n is zero-distributed in the eigenvalue/singular value sense, thus obtaining the spectral distribution of the sequence {Â_n}_n, which, under mild conditions, has to coincide with that of {B̂_n}_n. In the next section, we first introduce the necessary tools, then present theoretical results completing those in [1,2], together with related numerical experiments. The numerical experiments concern the visualization of the distribution/clustering results and the optimal performance of the related preconditioning when the preconditioned GMRES is used.
We highlight that the spectral analysis of the considered preconditioned and nonpreconditioned matrix-sequences for unbounded µ(x, y) is completely new. In fact, in [1,2] the assumption of boundedness of the wave number is always employed; furthermore, in [2] the results focus on eigenvalue localization, while in [1] the singular value analysis is the main target. Finally, we stress that our eigenvalue results are nontrivial, given the non-Hermitian and even non-normal nature of the involved matrix-sequences.

Spectral Analysis
First, we report a few definitions regarding the spectral and singular value distribution, the notion of clustering, and a few relevant relationships among the various concepts. Then, we present the main theoretical tool, taken from [4], and we perform a spectral analysis of the various matrix-sequences. Numerical experiments and visualization results corroborating the analysis are presented in the last part of the section.

Definition 1. Let {A_n}_n be a sequence of matrices, with A_n of size d_n, and let ψ : D ⊂ R^t → C^{r×r} be a measurable function defined on a set D with 0 < µ_t(D) < ∞.

•
We say that {A_n}_n has an (asymptotic) singular value distribution described by ψ, and we write {A_n}_n ∼_σ ψ.

•
We say that {A_n}_n has an (asymptotic) spectral (or eigenvalue) distribution described by ψ, and we write {A_n}_n ∼_λ ψ.

If A ∈ C^{m×m}, then the singular values and the eigenvalues of A are denoted by σ_1(A), …, σ_m(A) and λ_1(A), …, λ_m(A), respectively. Furthermore, if A ∈ C^{m×m} and 1 ≤ p ≤ ∞, then ∥A∥_p denotes the Schatten p-norm of A, i.e., the p-norm of the vector (σ_1(A), …, σ_m(A)); see [5] for a comprehensive treatment of the subject. The Schatten ∞-norm ∥A∥_∞ is the largest singular value of A and coincides with the spectral norm ∥A∥. The Schatten 1-norm ∥A∥_1 is the sum of the singular values of A and coincides with the so-called trace norm of A, while the Schatten 2-norm ∥A∥_2 coincides with the Frobenius norm of A, which enjoys great popularity in the numerical analysis community because of its low computational cost.
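The Schatten norms recalled above are straightforward to evaluate from the singular values; the following helper is an illustrative sketch (names are ours, not from the paper):

```python
import numpy as np

def schatten_norm(A, p):
    # Schatten p-norm: the l^p norm of the singular value vector of A;
    # p = inf gives the spectral norm, p = 1 the trace norm, p = 2 Frobenius
    s = np.linalg.svd(A, compute_uv=False)
    if np.isinf(p):
        return s.max()
    return (s ** p).sum() ** (1.0 / p)

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])
```

For p = 2 this reproduces the Frobenius norm, computable directly from the entries at low cost, which is exactly the reason for its popularity mentioned above.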
At this point, we introduce the definition of clustering, which, as for the distribution notions, is a concept of asymptotic type. For z ∈ C and ϵ > 0, let B(z, ϵ) be the disk with center z and radius ϵ, B(z, ϵ) := {w ∈ C : |w − z| < ϵ}. For S ⊆ C and ϵ > 0, we denote by B(S, ϵ) the ϵ-expansion of S, defined as B(S, ϵ) := ∪_{z∈S} B(z, ϵ).
Definition 2. Let {A_n}_n be a sequence of matrices, with A_n of size d_n tending to infinity, and let S ⊆ C be a nonempty closed subset of C. {A_n}_n is strongly clustered at S in the sense of the eigenvalues if, for each ϵ > 0, the number of eigenvalues of A_n outside B(S, ϵ) is bounded by a constant q_ϵ independent of n. {A_n}_n is weakly clustered at S if, for each ϵ > 0, the number of eigenvalues of A_n outside B(S, ϵ) is o(d_n) as n → ∞. If {A_n}_n is strongly or weakly clustered at S and S is not connected, then the connected components of S are called sub-clusters. Of special importance in the theory of preconditioning is the case of spectral single-point clustering, where S is made up of a unique complex number s. The same notions hold for the singular values, where s is a nonnegative number and S is a nonempty closed subset of the nonnegative real numbers.
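For a finite matrix size, the clustering notion translates into a simple diagnostic: counting the eigenvalues at distance at least ϵ from the target set S. A minimal sketch of such an outlier count (the same quantity N_o(ϵ) reported in the tables of the numerical section):

```python
import numpy as np

def n_outliers(eigs, S, eps):
    # number of eigenvalues lying outside B(S, eps), i.e., whose distance
    # from the (finite) set S is at least eps; a sequence is clustered at S
    # when this count grows more slowly than the matrix size
    eigs = np.asarray(eigs, dtype=complex)
    S = np.asarray(S, dtype=complex)
    dist = np.min(np.abs(eigs[:, None] - S[None, :]), axis=1)
    return int(np.sum(dist >= eps))
```

For instance, the synthetic spectrum 1 + 1/k, k = 1, …, 100, is weakly clustered at s = 1: only the first few eigenvalues fall outside a fixed neighborhood of 1.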
For a measurable function g : D ⊆ R^t → C, the essential range of g is defined as ER(g) := {z ∈ C : µ_t({g ∈ B(z, ϵ)}) > 0 for all ϵ > 0}, where {g ∈ B(z, ϵ)} := {x ∈ D : g(x) ∈ B(z, ϵ)}. ER(g) is always closed and, if g is continuous and D is contained in the closure of its interior, then ER(g) coincides with the closure of the image of g.
Hence, if {A_n}_n ∼_λ ψ (with {A_n}_n, ψ as in Definition 1), then, by ([6], Theorem 4.2), {A_n}_n is weakly clustered at the essential range of ψ, defined as the union of the essential ranges of the eigenvalue functions λ_i(ψ), i = 1, …, r: ER(ψ) := ∪_{i=1}^{r} ER(λ_i(ψ)). All the considerations above can be translated into the singular value setting as well, with obvious minimal modifications.
In addition, the following result holds.
Theorem 1. If ER(ψ) = {s}, with s a fixed complex number, then the following equivalence holds: {A_n}_n ∼_λ ψ if and only if {A_n}_n is weakly clustered at s in the eigenvalue sense. Analogously, if ER(|ψ|) = {s}, with s a fixed nonnegative number, then {A_n}_n ∼_σ ψ if and only if {A_n}_n is weakly clustered at s in the singular value sense.
A noteworthy example covered by the previous theorem is that of zero-distributed sequences {A_n}_n, expressed by definition as {A_n}_n ∼_σ 0 (see [3]).
We will make use of Theorem 1 in [4], which we report below and which extends previous results in [6] obtained in the context of the distribution of the zeros of perturbed orthogonal polynomials.
Theorem 2. Let {X_n}_n be a matrix-sequence such that each X_n is Hermitian of size d_n and {X_n}_n ∼_λ f, where f is a measurable function defined on a subset of R^q for some q, with finite and positive Lebesgue measure. Let {Y_n}_n be a matrix-sequence such that ∥Y_n∥₂² = o(n). Then {X_n + Y_n}_n ∼_λ f.

Main Results
We study the eigenvalue distribution of the two matrix-sequences {h^α D_n(µ)}_n and {A_n = B̂_n + h^α D_n(µ)}_n, in the sense of Definition 1. The same kind of matrices and matrix-sequences are treated in [1,2]: in [2] eigenvalue localization results are studied, while in [1] singular value and eigenvalue distribution results are obtained; in both cases, the coefficient µ(x, y) is assumed bounded. Here, we extend the results in the quoted literature.
Proof. In the proof we strongly rely on Theorem 2. Therefore, we first compute […]. Then, we estimate the quantity […]. Now, the first sum can be estimated as […]. Note that 2γ − 2 < 0 for every γ ∈ (0, 1). A basic computation leads to […]. As a consequence, we conclude that […] for every γ ∈ (0, 1). This immediately implies that ∥h^α D_n(µ)∥₂² = o(n) for every α ∈ (1, 2), as required to apply Theorem 1 in [4], and this concludes the proof.
From the computation in Equation (8), we immediately deduce the following: […]. Finally, when γ = 1, we obtain the estimate […]. As above, this leads to the conclusion that ∥h^α D_n(µ)∥₂² = o(n). With reference to the proof of Theorem 3, from a technical point of view, it should be observed that in [7–9] one can find more refined bounds for terms of the form ∑_{j=1}^{n} j^ℓ, for various choices of the real parameter ℓ.
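The leading-order behavior of such power sums is easy to check numerically: for ℓ > −1, ∑_{j=1}^{n} j^ℓ behaves like n^{ℓ+1}/(ℓ+1) as n → ∞. A quick sketch:

```python
import numpy as np

def power_sum(n, ell):
    # direct evaluation of sum_{j=1}^n j^ell
    return np.sum(np.arange(1, n + 1, dtype=float) ** ell)

def leading_term(n, ell):
    # leading-order asymptotics n^(ell+1)/(ell+1), valid for ell > -1
    return n ** (ell + 1) / (ell + 1)
```

For ℓ ≤ −1 the asymptotics change (logarithmic growth at ℓ = −1, convergence for ℓ < −1), which is exactly why the admissible range of γ matters in the estimates above.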
The next corollary complements the previous result.
Corollary 1. Assume that there exist positive constants c ≤ C for which […]. Then […].

Proof. The statement follows directly from Theorem 3, with the observation that […]. Finally, the proof is concluded by invoking Theorem 2 with d_n = n².

Preconditioning
For a symmetric Toeplitz matrix T_n, the natural τ preconditioner τ(T_n) := T_n − H(T_n) was already considered decades ago in [10–12], when a great amount of theoretical and computational work was dedicated to preconditioning strategies for structured linear systems. Here, H(T_n) denotes a Hankel matrix, whose entries are constant along each antidiagonal and whose precise definition is the following: the first row and the last column of H(T_n) are given by [t_2, t_3, …, t_{n−1}, 0, 0] and [0, 0, t_{n−1}, …, t_3, t_2]^T, respectively. Notice that, by using the sine transform matrix S_n, defined as [S_n]_{j,k} = √(2/(n+1)) sin(jkπ/(n+1)), j, k = 1, …, n, it is known that every τ matrix is diagonalized as τ(T_n) = S_n Λ_n S_n, where Λ_n is the diagonal matrix of the eigenvalues of τ(T_n) and S_n = ([S_n]_{j,k}) is the real, symmetric, orthogonal matrix defined before, so that S_n = S_n^T = S_n^{−1}. Furthermore, the matrix S_n is associated with the fast sine transform of type I (see [13,14] for several other sine/cosine transforms). Indeed, the multiplication of the matrix S_n by a real vector can be conducted in O(n log n) real operations, at roughly half the cost of the celebrated fast Fourier transform [15]. Therefore, all the relevant matrix operations in this algebra cost O(n log n) real operations, including matrix-matrix multiplication, inversion, the solution of a linear system, and the computation of the spectrum, i.e., of the diagonal entries of Λ_n.
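As a one-level sketch of this machinery, the code below builds the sine transform matrix S_n and verifies that it diagonalizes the discrete Laplacian tridiag(−1, 2, −1), which is itself a τ matrix with the well-known eigenvalues 2 − 2 cos(kπ/(n+1)) (the function names are illustrative):

```python
import numpy as np

def dst1_matrix(n):
    # sine transform of type I: real, symmetric, orthogonal
    j = np.arange(1, n + 1)
    return np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * np.outer(j, j) / (n + 1))

def tau_eigs_laplacian(n):
    # eigenvalues of tridiag(-1, 2, -1): 2 - 2 cos(k pi/(n+1)), k = 1..n
    k = np.arange(1, n + 1)
    return 2 - 2 * np.cos(np.pi * k / (n + 1))

n = 16
S = dst1_matrix(n)
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Lam = S @ T @ S  # S T S should be the diagonal matrix of eigenvalues
```

In practice, of course, S_n is never formed explicitly: the products S_n v are carried out by the fast sine transform in O(n log n) operations, as recalled above.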
Using standard and known techniques, the τ algebra has d-level versions for every d ≥ 1, with Λ_n, the diagonal matrix of the eigenvalues, obtained as a d-level sine transform of type I of the first column of T_n. Again, the quoted d-level transform and all the relevant matrix operations in the related algebra cost O(ν(n) log ν(n)) real operations, which is quasi-optimal given that the matrices have size ν(n).
At the algebraic level, the explicit construction can be conducted recursively using the additive decomposition (9): first at the outermost level, and then applying the same operation to every block, which is a (d − 1)-level symmetric Toeplitz matrix, and so on, until matrices with scalar entries are reached.
In light of the excellent structural, spectral, and computational features of the τ algebra in d levels, two different types of τ preconditioning for the related linear systems were proposed in [2] and one in [1] (with d = 2 and n = (n, n)). Here, we consider the latter. In fact, {T_n(f) − τ(T_n(f))}_n ∼_{σ,λ} 0 for any Lebesgue integrable f, thanks to the distribution results on multilevel Hankel matrix-sequences generated by L¹ functions proven in [16]. From this, as proven in [1], the preconditioner P_n = τ(T_n(t_α)) is such that the preconditioned matrix-sequence is clustered at 1, both in the eigenvalue and in the singular value sense, under mild assumptions: using the notion of an approximating class of sequences, the eigenvalue perturbation results in [6], and the GLT apparatus [3], it is enough that µ(x, y) is Riemann integrable or simply bounded. Here, we extend the spectral distribution results to the case where µ(x, y) is not bounded and even not integrable. More precisely, as in Corollary 1, we consider the case of a power singularity.
for every x, y ∈ [0, 1]. Consider the preconditioner P_n = τ(T_n(t_α)). Then, for every γ ∈ [0, 1) and for every α ∈ (1, 2), we have […] (11).

Proof. We rely on Theorem 2, on Corollary 1, and on a standard symmetrization trick. First of all, we observe that the eigenvalues of […] and […] are the same, because the two matrices are similar. The same holds for […]. Now X_n is real symmetric, and in fact positive definite, and so is B̂_n. As proven in [1], the spectral distribution function of {X_n}_n is 1, thanks to a basic use of the GLT theory. Furthermore, the minimal eigenvalue of P_n = τ(T_n(t_α)) is positive and tends to zero as h^α, since t_α has a unique zero of order α at zero (see, e.g., [17]). Therefore, […]. As a consequence of Corollary 1, we deduce […] if and only if γ < 1. In conclusion, the desired result follows from Theorem 2 with d_n = n² and f = 1.
Theorem 4 cannot be sharp, since the estimate in (11) would hold even if the preconditioner were chosen as P_n = h^α I_{n²}. A more careful estimate would require taking into account that the eigenvalues of τ(T_n(t_α)) are explicitly known; in fact, we will see in the numerical experiments that the spectral clustering at 1 of the preconditioned matrix-sequence is observed also for γ much larger than 1.

Numerical Evidence: Visualizations of the Original Matrix-Sequence
In the current subsection, we report visualizations regarding the analysis in Theorem 3. More precisely, in Figures 1–4 we plot the eigenvalues of the matrix Â_n when the matrix size is n² = 2¹² and when (α, γ) ∈ {(1.2, 1), (1.4, 1), (1.6, 1), (1.8, 1)}, satisfying the assumption γ < α + 1 of Theorem 3. As can be observed, the clustering at zero of the imaginary parts of the eigenvalues of Â_n and the relation {Â_n}_n ∼_λ t_α are visible already for a moderate matrix size of 2¹². A remarkable fact is that no outliers are present: the imaginary parts are always negligible, and the graphs of an equispaced sampling of t_α and of the real parts of the eigenvalues, both taken in nondecreasing order, superpose completely.

Numerical Evidence: Preconditioning, Visualizations, GMRES Iterations
In the present subsection, we consider the preconditioned matrices. In more detail, Tables 1–4 concern the measure of the clustering at 1 with radius ϵ = 0.1, 0.01, for γ = 0.5, 0.8, 1, 1.5, for α = 1.2, 1.4, 1.6, 1.8, and for various matrix dimensions n² = 2⁸, 2¹⁰, 2¹². As can be seen, the number N_o(ϵ) = N_o(ϵ, n) increases moderately with n, but the percentage of outliers with respect to the matrix size tends to zero fast, in agreement with the forecast of Theorem 4, at least for γ < 1. The situation is indeed better than the theoretical predictions: even when the condition γ < 1 is violated, we still observe a clustering at 1, which is not a surprise given the comments after Theorem 4.

The clustering in the preconditioning setting is also visualized in Figures 5–8, while Table 5 accounts for the fact that the localization around 1 is very good, since we do not have large outliers. The moderate size of the outliers indicates that the preconditioned GMRES is expected to be optimal and robust with respect to all the involved parameters. The latter is evident in Table 6, with a slight increase in the number of iterations when γ increases, since the number and the magnitude of the outliers become slightly larger.

Table 5. Maximal distance of the eigenvalues of the preconditioned matrix from 1 for increasing dimension n².

n²    α = 1.2           α = 1.4           α = 1.6           α = 1.8
2⁴    1.289012 × 10⁻¹   1.065957 × 10⁻¹   8.471230 × 10⁻²   5.120695 × 10⁻²
2⁵    1.660444 × 10⁻¹   1.580600 × 10⁻¹   1.257282 × 10⁻¹   6.848000 × 10⁻²
2⁶    2.157879 × 10⁻…   …                 …                 …

Here, we check the clustering at zero of the imaginary parts of the eigenvalues of Â_n and the relation {Â_n}_n ∼_λ t_α for a moderate matrix size of 2¹², for γ = 3, and for α = 1.2, 1.4, 1.6, 1.8; see Figures 9–12. We stress that the condition γ < α + 1 in Theorem 3 is violated. Nevertheless, the clustering at 0 of the imaginary parts is present, and the agreement between an equispaced sampling of t_α and the real parts of the eigenvalues of Â_n is still striking.
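The overall preconditioned GMRES workflow can be illustrated on a drastically simplified 1D analogue (a hypothetical toy, not the paper's 2D fractional experiment): for α = 2 the Toeplitz part is the tridiagonal Laplacian, which coincides with its own τ part, so the τ preconditioner is applied exactly via the sine transform:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, LinearOperator

# hypothetical 1D toy (alpha = 2): A = B + h^2 D, B = tridiag(-1, 2, -1),
# D = diag(mu(x_j)) with the power singularity mu(x) = x^(-gamma)
n, gamma = 200, 0.5
h = 1.0 / (n + 1)
x = h * np.arange(1, n + 1)
B = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsc()
A = (B + diags(h**2 * x ** (-gamma))).tocsc()

# tau preconditioner: for this B, tau(B) = B, with known eigenvalues
# 2 - 2 cos(k pi/(n+1)); P^{-1} = S diag(1/lam) S with the dense DST-I
# matrix S (a fast O(n log n) sine transform would be used in practice)
k = np.arange(1, n + 1)
lam = 2 - 2 * np.cos(np.pi * k / (n + 1))
S = np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * np.outer(k, k) / (n + 1))

def apply_Pinv(v):
    # P^{-1} v computed by two sine transforms and a diagonal scaling
    return S @ ((S @ v) / lam)

M = LinearOperator((n, n), matvec=apply_Pinv)
b = np.ones(n)
u, info = gmres(A, b, M=M)  # info == 0 signals convergence
```

In this toy setting the preconditioned matrix is a small perturbation of the identity, so GMRES converges in a handful of iterations, mirroring the behavior reported in the tables for the 2D fractional case.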

However, the number and the magnitude of the outliers in the preconditioned matrices start to become significant, as reported in Table 7 and Figure 13, and hence the number of preconditioned GMRES iterations starts to grow moderately with the matrix size, as can be seen in Table 8.

Conclusions
In this work, we considered a fractional Helmholtz equation approximated by ad hoc centered differences with variable wave number µ(x, y), in the specific case where the complex-valued function µ(x, y) has a pole of order γ. Eigenvalue distribution and clustering results have been derived, extending those in [1,2]. The numerical results presented in this work corroborate the analysis.
Many more intricate cases can be treated using the same type of theoretical apparatus, including the GLT theory [3,18] and non-Hermitian perturbation results, such as those in [4,6].We list a few of them.

•
The numerical results in Section 2.5 seem to indicate that the spectral distribution of the original matrix-sequence and the spectral clustering at 1 of the preconditioned matrix-sequence hold also when the Frobenius norm condition in [4] is violated; this is an indication that Theorem 1 in [4] may not be sharp. A related conjecture is that the key condition ∥Y_n∥₂² = o(n) in Theorem 2 could be replaced by ∥Y_n∥_p^p = o(n) for any p ∈ [1, ∞), which would be very useful when the trace norm is considered, i.e., for p = 1.
• Definition 1 has been reported with the matrix size of the symbol equal to r ≥ 1. In our study, for matrices arising from finite differences, the parameter r is always equal to 1. However, when considering isogeometric analysis approximations with polynomial degree p and regularity k ≤ p − 1, we have r = (p − k)^d [19,20]. Notice that a particular case of the previous formula is that of degree-p finite elements in space dimension d, which leads to r = p^d [20,21], since k = 0. Discontinuous Galerkin techniques of degree p are also covered: we have r = (p + 1)^d [19], because k = −1.

•
The above considerations could be considered also in the case where the fractional Laplacian is defined on a non-Cartesian d-dimensional domain Ω, or equipped with variable coefficients, or with approximations on graded grids.In fact, the related GLT theory is already available [3,18,19] for encompassing such a generality, while non-Hermitian perturbation tools do not depend on a specific structure of the involved matrix sequences.

Theorem 4.
Assume that there exist positive constants c ≤ C for which c/|x + iy|^γ ≤ |µ(x, y)| ≤ C/|x + iy|^γ for every x, y ∈ [0, 1].

Figure 1 .
Figure 1. Eigenvalues of the matrix Â_n for γ = 1, α = 1.2, and n² = 2¹². The left panel reports the eigenvalues in the complex plane. The right panel reports in blue the real parts of the eigenvalues and in red the equispaced samplings of t_α in nondecreasing order, in the interval [min t_α = 0, max t_α = 2^{3α/2}].

Figure 2 .
Figure 2. Eigenvalues of the matrix Â_n for γ = 1, α = 1.4, and n² = 2¹². The left panel reports the eigenvalues in the complex plane. The right panel reports in blue the real parts of the eigenvalues and in red the equispaced samplings of t_α in nondecreasing order, in the interval [min t_α = 0, max t_α = 2^{3α/2}].

Figure 3 .
Figure 3. Eigenvalues of the matrix Â_n for γ = 1, α = 1.6, and n² = 2¹². The left panel reports the eigenvalues in the complex plane. The right panel reports in blue the real parts of the eigenvalues and in red the equispaced samplings of t_α in nondecreasing order, in the interval [min t_α = 0, max t_α = 2^{3α/2}].

Figure 4 .
Figure 4. Eigenvalues of the matrix Â_n for γ = 1, α = 1.8, and n² = 2¹². The left panel reports the eigenvalues in the complex plane. The right panel reports in blue the real parts of the eigenvalues and in red the equispaced samplings of t_α in nondecreasing order, in the interval [min t_α = 0, max t_α = 2^{3α/2}].

Figure 9.
Figure 9. Eigenvalues of the matrix Â_n for γ = 3, α = 1.2, and n² = 2¹². The left panel reports the eigenvalues in the complex plane. The right panel reports in blue the real parts of the eigenvalues and in red the equispaced samplings of t_α in nondecreasing order, in the interval [min t_α = 0, max t_α = 2^{3α/2}].

Table 1 .
Number of outliers N_o(ϵ) with respect to a neighborhood of 1 of radius ϵ = 0.1 or ϵ = 0.01, and the related percentage, for increasing dimension n².

Table 2 .
Number of outliers N_o(ϵ) with respect to a neighborhood of 1 of radius ϵ = 0.1 or ϵ = 0.01, and the related percentage, for increasing dimension n².

Table 3 .
Number of outliers N_o(ϵ) with respect to a neighborhood of 1 of radius ϵ = 0.1 or ϵ = 0.01, and the related percentage, for increasing dimension n².

Table 4 .
Number of outliers N_o(ϵ) with respect to a neighborhood of 1 of radius ϵ = 0.1 or ϵ = 0.01, and the related percentage, for increasing dimension n².

Table 6 .
Number of preconditioned GMRES iterations to solve the linear system for increasing dimension n², up to tol = 10⁻¹¹.

Table 7 .
Number of outliers N_o(ϵ) with respect to a neighborhood of 1 of radius ϵ = 0.1 or ϵ = 0.01, and the related percentage, for increasing dimension n².

Table 8 .
Number of preconditioned GMRES iterations to solve the linear system for increasing dimension n², up to tol = 10⁻¹¹.
Author Contributions: S.S.-C. is responsible for funding acquisition; the contribution of all authors is equal regarding all the listed items: conceptualization, methodology, validation, investigation, data curation, writing (original draft preparation), writing (review and editing), visualization, supervision, project administration. All authors have read and agreed to the published version of the manuscript.

Funding: The work of Stefano Serra-Capizzano is supported by GNCS-INdAM and is funded by the European High-Performance Computing Joint Undertaking (JU) under grant agreement No. 955701. The JU receives support from the European Union's Horizon 2020 research and innovation programme and Belgium, France, Germany, and Switzerland. Furthermore, Stefano Serra-Capizzano is grateful for the support of the Laboratory of Theory, Economics and Systems, Department of Computer Science at Athens University of Economics and Business.