Article

Clustering/Distribution Analysis and Preconditioned Krylov Solvers for the Approximated Helmholtz Equation and Fractional Laplacian in the Case of Complex-Valued, Unbounded Variable Coefficient Wave Number μ

by Andrea Adriani 1, Stefano Serra-Capizzano 1,2,* and Cristina Tablino-Possio 3
1 Department of Science and High Technology, University of Insubria, Via Valleggio 11, 22100 Como, Italy
2 Division of Scientific Computing, Department of Information Technology, Uppsala University, Lägerhyddsv 2, hus 2, SE-751 05 Uppsala, Sweden
3 Department of Mathematics and Applications, University of Milano-Bicocca, Via Cozzi 53, 20125 Milano, Italy
* Author to whom correspondence should be addressed.
Algorithms 2024, 17(3), 100; https://doi.org/10.3390/a17030100
Submission received: 22 January 2024 / Revised: 14 February 2024 / Accepted: 19 February 2024 / Published: 26 February 2024

Abstract: We consider the Helmholtz equation and the fractional Laplacian in the case of a complex-valued, unbounded, variable-coefficient wave number μ, approximated by finite differences. In a recent analysis, singular value clustering and eigenvalue clustering have been proved for a τ preconditioning when the variable-coefficient wave number μ is uniformly bounded. Here, we extend the analysis to the unbounded case by focusing on the case of a power singularity. Several numerical experiments concerning the spectral behavior and the convergence of the related preconditioned GMRES are presented.

1. Introduction

In the present work, the fractional Laplacian operator $(-\Delta)^{\alpha/2}(\cdot)$ is considered. Its formal definition is
$$(-\Delta)^{\alpha/2} u(x,y) = c_\alpha\,\mathrm{P.V.}\int_{\mathbb{R}^2} \frac{u(x,y)-u(\tilde x,\tilde y)}{\big[(x-\tilde x)^2+(y-\tilde y)^2\big]^{\frac{2+\alpha}{2}}}\, d\tilde x\, d\tilde y,\qquad c_\alpha = \frac{2^{\alpha}\,\Gamma\!\left(\frac{\alpha+2}{2}\right)}{\pi\,\big|\Gamma\!\left(-\frac{\alpha}{2}\right)\big|},$$
where $\Gamma(\cdot)$ is the Gamma function. More explicitly, our problem consists in finding fast solvers for the numerical approximation of a two-dimensional nonlocal Helmholtz equation with fractional Laplacian, described by the equations
$$(-\Delta)^{\alpha/2} u(x,y) + \mu(x,y)\,u(x,y) = v(x,y),\quad (x,y)\in\Omega\subset\mathbb{R}^2,\ \alpha\in(1,2),\qquad u(x,y)=0,\quad (x,y)\in\Omega^c,$$
with a given variable-coefficient, complex-valued wave number $\mu=\mu(x,y)$ and source term v. Here, $\Omega$ is taken to be $[0,1]^2\subset\mathbb{R}^2$ and $\Omega^c$ is the complement of $\Omega$. In what follows, $\mu(x,y)=1/(x+\mathrm{i}y)^{\gamma}$ for some $\gamma>0$; the case of a bounded $\mu(x,y)$ has been studied by Adriani et al. [1] and by Li et al. [2].
To approximate Equation (1), we employ fractional centered differences (FCD). Given a positive integer n, we take $h=\frac{1}{n+1}$ as the step size and define $x_i=ih$ and $y_j=jh$ for every $i,j\in\mathbb{Z}$. The discrete version of the fractional Laplacian in this setting is given by
$$(-\Delta_h)^{\alpha/2} u(x,y) := \frac{1}{h^{\alpha}}\sum_{k_1,k_2\in\mathbb{Z}} b^{(\alpha)}_{k_1,k_2}\, u(x+k_1 h,\, y+k_2 h),$$
where $b^{(\alpha)}_{k_1,k_2}$ are the Fourier coefficients of the function
$$t_\alpha(\eta,\Psi) = \left(4\sin^2\frac{\eta}{2} + 4\sin^2\frac{\Psi}{2}\right)^{\frac{\alpha}{2}},$$
that is,
$$b^{(\alpha)}_{k_1,k_2} = \frac{1}{4\pi^2}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi} t_\alpha(\eta,\Psi)\, e^{-\mathrm{i}(k_1\eta+k_2\Psi)}\, d\eta\, d\Psi,$$
where $\mathrm{i}$ is the imaginary unit.
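As an illustration, the coefficients $b^{(\alpha)}_{k_1,k_2}$ can be approximated numerically by applying a 2D FFT to samples of $t_\alpha$; the following Python sketch is ours (the function name and grid size are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def fcd_coefficients(alpha, K, M=256):
    """Approximate the Fourier coefficients b^(alpha)_{k1,k2} of
    t_alpha(eta, Psi) = (4 sin^2(eta/2) + 4 sin^2(Psi/2))^(alpha/2)
    for |k1|, |k2| <= K, using the trapezoidal rule on an M x M grid,
    which reduces to a 2D FFT of the symbol samples."""
    grid = 2 * np.pi * np.arange(M) / M - np.pi          # uniform grid on [-pi, pi)
    E, P = np.meshgrid(grid, grid, indexing="ij")
    t = (4 * np.sin(E / 2) ** 2 + 4 * np.sin(P / 2) ** 2) ** (alpha / 2)
    coeffs = np.fft.fft2(t) / M**2                       # quadrature via FFT
    b = {}
    for k1 in range(-K, K + 1):
        for k2 in range(-K, K + 1):
            # the factor (-1)^(k1+k2) accounts for the grid starting at -pi
            b[(k1, k2)] = ((-1.0) ** (k1 + k2) * coeffs[k1 % M, k2 % M]).real
    return b
```

Since $t_\alpha$ is real and even in both variables, the computed coefficients are real and satisfy $b^{(\alpha)}_{k_1,k_2}=b^{(\alpha)}_{-k_1,-k_2}=b^{(\alpha)}_{k_2,k_1}$, which can be used as a sanity check.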
Proceeding as in [1], we trace the original problem back to solving the linear system
$$A_{\mathbf n} u := (B_{\mathbf n} + D_n(\mu))\, u = f,\qquad \mathbf n=(n,n),$$
where $B_{\mathbf n} = \frac{1}{h^{\alpha}}\hat B_{\mathbf n}$ and $\hat B_{\mathbf n}$ is the two-level symmetric Toeplitz matrix generated by $t_\alpha(\eta,\Psi)$, i.e., $\hat B_{\mathbf n} = T_{\mathbf n}(t_\alpha)$, with
$$B_{\mathbf n} = \begin{bmatrix} B_0 & B_1 & B_2 & \cdots & B_{n-2} & B_{n-1}\\ B_1 & B_0 & B_1 & \cdots & B_{n-3} & B_{n-2}\\ B_2 & B_1 & B_0 & \cdots & B_{n-4} & B_{n-3}\\ \vdots & & & \ddots & & \vdots\\ B_{n-2} & B_{n-3} & B_{n-4} & \cdots & B_0 & B_1\\ B_{n-1} & B_{n-2} & B_{n-3} & \cdots & B_1 & B_0 \end{bmatrix},$$
$$B_j = \begin{bmatrix} b^{(\alpha)}_{0,j} & b^{(\alpha)}_{1,j} & b^{(\alpha)}_{2,j} & \cdots & b^{(\alpha)}_{n-2,j} & b^{(\alpha)}_{n-1,j}\\ b^{(\alpha)}_{1,j} & b^{(\alpha)}_{0,j} & b^{(\alpha)}_{1,j} & \cdots & b^{(\alpha)}_{n-3,j} & b^{(\alpha)}_{n-2,j}\\ b^{(\alpha)}_{2,j} & b^{(\alpha)}_{1,j} & b^{(\alpha)}_{0,j} & \cdots & b^{(\alpha)}_{n-4,j} & b^{(\alpha)}_{n-3,j}\\ \vdots & & & \ddots & & \vdots\\ b^{(\alpha)}_{n-2,j} & b^{(\alpha)}_{n-3,j} & b^{(\alpha)}_{n-4,j} & \cdots & b^{(\alpha)}_{0,j} & b^{(\alpha)}_{1,j}\\ b^{(\alpha)}_{n-1,j} & b^{(\alpha)}_{n-2,j} & b^{(\alpha)}_{n-3,j} & \cdots & b^{(\alpha)}_{1,j} & b^{(\alpha)}_{0,j} \end{bmatrix}.$$
For the sake of simplicity, the previous equation is rewritten in the scaled form
$$\hat A_{\mathbf n} u := (\hat B_{\mathbf n} + h^{\alpha} D_n(\mu))\, u = v,\qquad \mathbf n=(n,n).$$
For the two-level notations and the theory regarding Toeplitz structures, refer to [3].
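For concreteness, the two-level Toeplitz structure above can be assembled directly from the coefficient array; the sketch below is ours (hypothetical helper names, dense storage only for small n):

```python
import numpy as np

def two_level_toeplitz(b):
    """Assemble the n^2 x n^2 two-level Toeplitz matrix with entries
    B[(i1,j1),(i2,j2)] = b[i1-i2, j1-j2], where b is a (2n-1) x (2n-1)
    array of coefficients indexed from -(n-1) to n-1 (0-based shift n-1)."""
    n = (b.shape[0] + 1) // 2
    idx = np.arange(n)
    d = idx[:, None] - idx[None, :] + (n - 1)   # difference table, shifted
    # 4D fancy indexing [i1, j1, i2, j2], then flattening the index pairs
    A4 = b[d[:, None, :, None], d[None, :, None, :]]
    return A4.reshape(n * n, n * n)

def assemble_system(bhat, alpha, gamma):
    """Scaled matrix hat(A)_n = hat(B)_n + h^alpha D_n(mu) for
    mu(x,y) = 1/(x + iy)^gamma on the grid x_i = ih, y_j = jh, h = 1/(n+1)."""
    n = (bhat.shape[0] + 1) // 2
    h = 1.0 / (n + 1)
    X, Y = np.meshgrid(np.arange(1, n + 1) * h, np.arange(1, n + 1) * h,
                       indexing="ij")
    mu = 1.0 / (X + 1j * Y) ** gamma
    return two_level_toeplitz(bhat) + h**alpha * np.diag(mu.ravel())
```

Note that the diagonal part $D_n(\mu)$ samples μ at the interior grid points only, so the singularity of μ at the origin is never evaluated exactly.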
In the case where $\mu(x,y)=1/(x+\mathrm{i}y)^{\gamma}$, we can give sufficient conditions on the coefficient γ, depending on α, guaranteeing that $\{h^{\alpha} D_n(\mu)\}_n$ is zero-distributed in the eigenvalue/singular value sense, thus obtaining the spectral distribution of the sequence $\{\hat A_{\mathbf n}\}_n$ which, under mild conditions, has to coincide with that of $\{\hat B_{\mathbf n}\}_n$. In the next section, we first introduce the necessary tools and then present theoretical results completing those in [1,2], together with related numerical experiments. The numerical experiments concern the visualization of the distribution/clustering results and the optimal performance of the related preconditioning when the preconditioned GMRES is used.
We highlight that the spectral analysis of the considered preconditioned and nonpreconditioned matrix-sequences for unbounded μ(x,y) is completely new. In fact, in [1,2] the assumption of boundedness of the wave number is always employed; furthermore, in [2] the results focus on eigenvalue localization, while in [1] the singular value analysis is the main target. Finally, we stress that our eigenvalue results are nontrivial, given the non-Hermitian and even non-normal nature of the involved matrix-sequences.

2. Spectral Analysis

First, we report a few definitions regarding the spectral and singular value distribution, the notion of clustering and a few relevant relationships among the various concepts. Then, we present the main theoretical tool taken from [4] and we perform a spectral analysis of the various matrix-sequences. Numerical experiments and visualization results corroborating the analysis are presented in the last part of the section.
Definition 1.
Let $\{A_n\}_n$ be a sequence of matrices, with $A_n$ of size $d_n$, and let $\psi: D\subset\mathbb{R}^t \to \mathbb{C}^{r\times r}$ be a measurable function defined on a set D with $0<\mu_t(D)<\infty$.
  • We say that $\{A_n\}_n$ has an (asymptotic) singular value distribution described by ψ, and we write $\{A_n\}_n \sim_\sigma \psi$, if
    $$\lim_{n\to\infty} \frac{1}{d_n}\sum_{i=1}^{d_n} F(\sigma_i(A_n)) = \frac{1}{\mu_t(D)}\int_D \frac{\sum_{i=1}^{r} F(\sigma_i(\psi(x)))}{r}\, dx,\qquad \forall F\in C_c(\mathbb{R}).$$
  • We say that $\{A_n\}_n$ has an (asymptotic) spectral (or eigenvalue) distribution described by ψ, and we write $\{A_n\}_n \sim_\lambda \psi$, if
    $$\lim_{n\to\infty} \frac{1}{d_n}\sum_{i=1}^{d_n} F(\lambda_i(A_n)) = \frac{1}{\mu_t(D)}\int_D \frac{\sum_{i=1}^{r} F(\lambda_i(\psi(x)))}{r}\, dx,\qquad \forall F\in C_c(\mathbb{C}).$$
If $A\in\mathbb{C}^{m\times m}$, then the singular values and the eigenvalues of A are denoted by $\sigma_1(A),\ldots,\sigma_m(A)$ and $\lambda_1(A),\ldots,\lambda_m(A)$, respectively. Furthermore, if $A\in\mathbb{C}^{m\times m}$ and $1\le p\le\infty$, then $\|A\|_p$ denotes the Schatten p-norm of A, i.e., the p-norm of the vector $(\sigma_1(A),\ldots,\sigma_m(A))$; see [5] for a comprehensive treatment of the subject. The Schatten ∞-norm $\|A\|_\infty$ is the largest singular value of A and coincides with the spectral norm $\|A\|$. The Schatten 1-norm $\|A\|_1$ is the sum of the singular values of A and coincides with the so-called trace norm of A, while the Schatten 2-norm $\|A\|_2$ coincides with the Frobenius norm of A, which is of great popularity in the numerical analysis community because of its low computational complexity.
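The Schatten p-norms just recalled can be computed directly from the singular value vector; a minimal sketch (the function name is ours):

```python
import numpy as np

def schatten_norm(A, p=2):
    """Schatten p-norm of A: the l^p norm of its singular value vector.
    p=1 gives the trace norm, p=2 the Frobenius norm, p=inf the spectral norm."""
    s = np.linalg.svd(A, compute_uv=False)
    return s.max() if np.isinf(p) else (s**p).sum() ** (1.0 / p)
```

For instance, for A = diag(3, 4) one obtains trace norm 7, Frobenius norm 5, and spectral norm 4; the p = 2 case agrees with `np.linalg.norm`, which is the cheap route mentioned in the text.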
At this point, we introduce the definition of clustering, which, like the distribution notions, is a concept of asymptotic type. For $z\in\mathbb{C}$ and $\epsilon>0$, let $B(z,\epsilon)$ be the disk with center z and radius ε, $B(z,\epsilon) := \{w\in\mathbb{C} : |w-z|<\epsilon\}$. For $S\subseteq\mathbb{C}$ and $\epsilon>0$, we denote by $B(S,\epsilon)$ the ε-expansion of S, defined as $B(S,\epsilon) := \bigcup_{z\in S} B(z,\epsilon)$.
Definition 2.
Let $\{A_n\}_n$ be a sequence of matrices, with $A_n$ of size $d_n$ tending to infinity, and let $S\subseteq\mathbb{C}$ be a nonempty closed subset of $\mathbb{C}$. $\{A_n\}_n$ is strongly clustered at S in the sense of the eigenvalues if, for each $\epsilon>0$, the number of eigenvalues of $A_n$ outside $B(S,\epsilon)$ is bounded by a constant $q_\epsilon$ independent of n. In symbols,
$$q_\epsilon(n,S) := \#\{j\in\{1,\ldots,d_n\} : \lambda_j(A_n)\notin B(S,\epsilon)\} = O(1),\quad \text{as } n\to\infty.$$
$\{A_n\}_n$ is weakly clustered at S if, for each $\epsilon>0$,
$$q_\epsilon(n,S) = o(d_n),\quad \text{as } n\to\infty.$$
If $\{A_n\}_n$ is strongly or weakly clustered at S and S is not connected, then the connected components of S are called sub-clusters. Of special importance in the theory of preconditioning is the case of spectral single-point clustering, where S consists of a unique complex number s.
The same notions hold for the singular values, where s is a nonnegative number and S is a nonempty closed subset of the nonnegative real numbers.
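The counting function $q_\epsilon(n,S)$ of Definition 2 is straightforward to evaluate in experiments once the closed set S is approximated by a finite sample of points; a minimal sketch (names ours):

```python
import numpy as np

def outlier_count(eigs, S, eps):
    """q_eps(n, S): number of eigenvalues lying outside B(S, eps),
    with the closed set S approximated by a finite sample of points."""
    eigs = np.asarray(eigs, dtype=complex)
    S = np.asarray(S, dtype=complex)
    dist = np.abs(eigs[:, None] - S[None, :]).min(axis=1)
    return int((dist >= eps).sum())
```

This is exactly the quantity tabulated later (there denoted $N_o(\epsilon)$) in the single-point case S = {1} arising from the preconditioned matrices.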
For a measurable function $g: D\subseteq\mathbb{R}^t\to\mathbb{C}$, the essential range of g is defined as $\mathrm{ER}(g) := \{z\in\mathbb{C} : \mu_t(\{g\in B(z,\epsilon)\})>0 \text{ for all } \epsilon>0\}$, where $\{g\in B(z,\epsilon)\} := \{x\in D : g(x)\in B(z,\epsilon)\}$. $\mathrm{ER}(g)$ is always closed and, if g is continuous and D is contained in the closure of its interior, then $\mathrm{ER}(g)$ coincides with the closure of the image of g.
Hence, if $\{A_n\}_n\sim_\lambda \psi$ (with $\{A_n\}_n$, ψ as in Definition 1), then, by ([6], Theorem 4.2), $\{A_n\}_n$ is weakly clustered at the essential range of ψ, defined as the union of the essential ranges of the eigenvalue functions $\lambda_i(\psi)$, $i=1,\ldots,r$: $\mathrm{ER}(\psi) := \bigcup_{i=1}^{r} \mathrm{ER}(\lambda_i(\psi))$. All the considerations above carry over to the singular value setting with obvious minimal modifications.
In addition, the following result holds.
Theorem 1.
If $\mathrm{ER}(\psi)=\{s\}$, with s a fixed complex number, then the following equivalence holds: $\{A_n\}_n\sim_\lambda \psi$ if and only if $\{A_n\}_n$ is weakly clustered at s in the eigenvalue sense. Analogously, if $\mathrm{ER}(|\psi|)=\{s\}$, with s a fixed nonnegative number, then $\{A_n\}_n\sim_\sigma \psi$ if and only if $\{A_n\}_n$ is weakly clustered at s in the singular value sense.
A noteworthy example covered by the previous theorem is that of zero-distributed sequences $\{A_n\}_n$, i.e., by definition, sequences such that $\{A_n\}_n\sim_\sigma 0$ (see [3]).
We will make use of Theorem 1 in [4], reported below, which extends previous results obtained in [6] in the context of the zero distribution of zeros of perturbed orthogonal polynomials.
Theorem 2.
Let $\{X_n\}_n$ be a matrix-sequence such that each $X_n$ is Hermitian of size $d_n$ and $\{X_n\}_n\sim_\lambda f$, where f is a measurable function defined on a subset of $\mathbb{R}^q$ for some q, with finite and positive Lebesgue measure. If $\|Y_n\|_2 = o(\sqrt{d_n})$, with $\|\cdot\|_2$ being the Frobenius norm, then $\{Y_n\}_n\sim_\lambda 0$ and $\{X_n+Y_n\}_n\sim_\lambda f$.
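The stability expressed by Theorem 2 can be observed numerically: a small non-Hermitian perturbation of a Hermitian Toeplitz matrix moves the eigenvalues only slightly off the (real) range of the symbol. A minimal sketch under toy choices of ours (the matrices and sizes are illustrative, not from the paper):

```python
import numpy as np
from scipy.linalg import toeplitz

# Hermitian X_n: Toeplitz matrix generated by f(theta) = 2 - 2 cos(theta),
# whose eigenvalues lie in the symbol range [0, 4].
n = 200
X = toeplitz(np.r_[2.0, -1.0, np.zeros(n - 2)])

# Non-Hermitian perturbation Y_n with small spectral (and Frobenius) norm.
Y = 0.05j * np.diag(np.linspace(0.0, 1.0, n))

eigs = np.linalg.eigvals(X + Y)
# Since X is Hermitian (hence normal), the Bauer-Fike theorem bounds the
# displacement of each eigenvalue by the spectral norm of Y, here 0.05.
print(np.abs(eigs.imag).max())
```

The imaginary parts remain below the spectral norm of the perturbation, consistent with the eigenvalues staying distributed as the real symbol f.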

2.1. Main Results

We study the eigenvalue distribution of the two matrix-sequences $\{h^{\alpha} D_n(\mu)\}_n$ and $\{\hat A_{\mathbf n} = \hat B_{\mathbf n} + h^{\alpha} D_n(\mu)\}_n$, in the sense of Definition 1. The same kind of matrices and matrix-sequences are treated in [1,2]: in [2], eigenvalue localization results are studied, while in [1] singular value and eigenvalue distribution results are obtained; in both cases, the coefficient μ(x,y) is assumed to be bounded. Here, we extend the results of the quoted literature.
Theorem 3.
Let $\mu(x,y)=1/(x+\mathrm{i}y)^{\gamma}$. Then, for every $\gamma\ge 0$ such that $\alpha>\gamma-1$ ($\alpha\in(1,2)$), we have
a1. $\{h^{\alpha} D_n(\mu)\}_n \sim_\lambda 0$;
a2. $\{\hat B_{\mathbf n} + h^{\alpha} D_n(\mu)\}_n \sim_\lambda t_\alpha$.
Proof. 
In the proof, we rely strongly on Theorem 2. Therefore, we compute
$$\|h^{\alpha} D_n(\mu)\|_2^2 = \sum_{i,j=1}^{n} |\mu_{ij}|^2\, h^{2\alpha} = \sum_{i,j=1}^{n} \big(|ih|^2+|jh|^2\big)^{-\gamma}\, h^{2\alpha} = h^{2\alpha-2\gamma}\sum_{i,j=1}^{n} \frac{1}{(i^2+j^2)^{\gamma}}.$$
Then, we estimate the quantity $\sum_{i,j=1}^{n}\frac{1}{(i^2+j^2)^{\gamma}}$ under the hypothesis that $\gamma\in(0,1)$:
$$\sum_{i,j=1}^{n} \frac{1}{(i^2+j^2)^{\gamma}} = 2\sum_{i=1}^{n}\sum_{j=1}^{i-1} \frac{1}{(i^2+j^2)^{\gamma}} + \frac{1}{2^{\gamma}}\sum_{i=1}^{n} \frac{1}{i^{2\gamma}}.$$
Now, the first sum can be estimated as
$$2\sum_{i=1}^{n}\sum_{j=1}^{i-1} \frac{1}{(i^2+j^2)^{\gamma}} \le 2\sum_{i=1}^{n} \frac{i-1}{i^{2\gamma}} = 2\sum_{i=1}^{n} \frac{1}{i^{2\gamma-1}} - 2\sum_{i=1}^{n} \frac{1}{i^{2\gamma}}.$$
Therefore,
$$\sum_{i,j=1}^{n} \frac{1}{(i^2+j^2)^{\gamma}} \le 2\sum_{i=1}^{n} \frac{1}{i^{2\gamma-1}} + \left(\frac{1}{2^{\gamma}}-2\right)\sum_{i=1}^{n} \frac{1}{i^{2\gamma}}.$$
Note that $\frac{1}{2^{\gamma}}-2 < 0$ for every $\gamma\in(0,1)$. A basic computation leads to
$$2\sum_{i=1}^{n} \frac{1}{i^{2\gamma-1}} \le 2\int_0^n \frac{dt}{t^{2\gamma-1}} = \frac{n^{2-2\gamma}}{1-\gamma}$$
and
$$\sum_{i=1}^{n} \frac{1}{i^{2\gamma}} \ge \int_0^n \frac{dt}{(t+1)^{2\gamma}} = \begin{cases} \dfrac{(n+1)^{1-2\gamma}-1}{1-2\gamma}, & \gamma\neq\frac12,\\[4pt] \log(n+1), & \gamma=\frac12. \end{cases}$$
As a consequence, we conclude that
$$\sum_{i,j=1}^{n} \frac{1}{(i^2+j^2)^{\gamma}} \le \begin{cases} \dfrac{n^{2-2\gamma}}{1-\gamma} + \left(\dfrac{1}{2^{\gamma}}-2\right)\dfrac{(n+1)^{1-2\gamma}-1}{1-2\gamma}, & \gamma\neq\frac12,\\[6pt] \dfrac{n^{2-2\gamma}}{1-\gamma} + \left(\dfrac{1}{2^{\gamma}}-2\right)\log(n+1), & \gamma=\frac12, \end{cases}$$
so that $\sum_{i,j=1}^{n} (i^2+j^2)^{-\gamma} \le c(\gamma)\, n^{2-2\gamma}$ for every $\gamma\in(0,1)$, with $c(\gamma)$ a constant independent of n. This immediately implies that
$$h^{2\alpha-2\gamma}\sum_{i,j=1}^{n} \frac{1}{(i^2+j^2)^{\gamma}} \le c(\gamma)\, n^{2\gamma-2\alpha}\, n^{2-2\gamma} = c(\gamma)\, n^{2-2\alpha} = o(n^2)$$
for every $\alpha\in(1,2)$, as required to apply Theorem 1 in [4] and conclude the proof in this case.
From the computation in Equation (8), we immediately deduce the following: if $\gamma>1$, then $\sum_{i,j=1}^{n} (i^2+j^2)^{-\gamma} \le c_\gamma$ for every n, so that $\|h^{\alpha} D_n(\mu)\|_2^2 = o(n^2)$ if and only if $2\alpha-2\gamma+2>0$, that is, $\alpha>\gamma-1$.
Finally, when $\gamma=1$, we obtain the estimate
$$\sum_{i,j=1}^{n} \frac{1}{i^2+j^2} \le k + \int_1^n\!\!\int_1^n \frac{dx\,dy}{x^2+y^2} \le k + \int_1^n\!\!\int_0^{\pi/2} \frac{1}{r}\, d\theta\, dr = k + \frac{\pi}{2}\log(n),$$
with k being any constant, independent of n, satisfying
$$k \ge 2\sum_{j=1}^{\infty} \frac{1}{j^2+1} - \frac12.$$
As above, this leads to the conclusion that $\|h^{\alpha} D_n(\mu)\|_2^2 = o(n^2)$ if and only if $\alpha>\gamma-1$, that is, $\gamma<\alpha+1$. The proof is then complete by Theorem 2 with $d_n=n^2$. □
With reference to the proof of Theorem 3, from a technical point of view, it should be observed that in [7,8,9] one can find more refined bounds for terms such as $\sum_{j=1}^{n} j^{\ell}$, with various choices of the real parameter ℓ.
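The three growth regimes used in the proof (algebraic growth for γ < 1, logarithmic growth for γ = 1, boundedness for γ > 1) can be verified empirically; the following sketch is ours, with illustrative parameter values:

```python
import numpy as np

def double_sum(n, gamma):
    """S_n(gamma) = sum_{i,j=1}^n (i^2 + j^2)^(-gamma)."""
    i = np.arange(1, n + 1, dtype=float)
    return (1.0 / (i[:, None] ** 2 + i[None, :] ** 2) ** gamma).sum()

# gamma < 1: growth like n^(2 - 2*gamma); for gamma = 0.5 doubling n
# should roughly double the sum.
print(double_sum(800, 0.5) / double_sum(400, 0.5))
# gamma = 1: logarithmic growth, S_{2n} - S_n approaches (pi/2) log 2.
print(double_sum(800, 1.0) - double_sum(400, 1.0))
# gamma > 1: the double series converges, so S_n stays bounded.
print(double_sum(800, 2.0))
```

The observed ratios match the exponents $n^{2-2\gamma}$ and the constant $\pi/2$ appearing in the estimates above.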
The next corollary complements the previous result.
Corollary 1.
Assume that there exist positive constants $c\le C$ for which
$$c/|x+\mathrm{i}y|^{\gamma} \le |\mu(x,y)| \le C/|x+\mathrm{i}y|^{\gamma}$$
for every $x,y\in[0,1]$. Then, for every $\gamma\ge 0$ such that $\alpha>\gamma-1$ ($\alpha\in(1,2)$), we have
b1. $\{h^{\alpha} D_n(\mu)\}_n \sim_\lambda 0$;
b2. $\{\hat B_{\mathbf n} + h^{\alpha} D_n(\mu)\}_n \sim_\lambda t_\alpha$.
Proof. 
The claim follows directly from Theorem 3, with the observation that
$$c\,\delta_n \le \|h^{\alpha} D_n(\mu)\|_2 \le C\,\delta_n,$$
with $\delta_n = \|h^{\alpha} D_n(\mu_\gamma)\|_2$ and $\mu_\gamma(x,y) = 1/(x+\mathrm{i}y)^{\gamma}$.
Finally, the proof is concluded by invoking Theorem 2 with $d_n=n^2$. □

2.2. Preconditioning

For a symmetric Toeplitz matrix $T_n\in\mathbb{R}^{n\times n}$ with first column $[t_1,t_2,\ldots,t_n]^T$, the matrix $\tau(T_n)$ defined as
$$\tau(T_n) := T_n - H(T_n)$$
is the natural τ preconditioner of $T_n$, already considered decades ago in [10,11,12], when a great amount of theoretical and computational work was dedicated to preconditioning strategies for structured linear systems. Here, $H(T_n)$ denotes the Hankel matrix whose entries are constant along each antidiagonal and whose precise definition is the following: the first row and the last column of $H(T_n)$ are given by $[t_3,t_4,\ldots,t_n,0,0]$ and $[0,0,t_n,\ldots,t_4,t_3]^T$, respectively. Notice that, by using the sine transform matrix $S_n$, defined as
$$[S_n]_{k,j} = \sqrt{\frac{2}{n+1}}\,\sin\!\left(\frac{\pi k j}{n+1}\right),\qquad 1\le k,j\le n,$$
every τ matrix is diagonalized as $\tau(T_n) = S_n\Lambda_n S_n$, where $\Lambda_n$ is the diagonal matrix containing the eigenvalues of $\tau(T_n)$ and $S_n = ([S_n]_{j,k})$ is the real, symmetric, orthogonal matrix defined above, so that $S_n = S_n^T = S_n^{-1}$. Furthermore, the matrix $S_n$ is associated with the fast sine transform of type I (see [13,14] for several other sine/cosine transforms). Indeed, the multiplication of $S_n$ by a real vector can be carried out in $O(n\log n)$ real operations, at roughly half the cost of the celebrated fast Fourier transform [15]. Therefore, all the relevant matrix operations in this algebra cost $O(n\log n)$ real operations, including matrix–matrix multiplication, inversion, solution of a linear system, and computation of the spectrum, i.e., of the diagonal entries of $\Lambda_n$.
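The Hankel correction and the diagonalization by the sine transform can be checked directly. The sketch below is ours (function names and the specific test vector are illustrative); it builds τ(T_n) with the convention stated above and uses the dense DST-I matrix, which for large n would be replaced by the fast transform:

```python
import numpy as np
from scipy.linalg import toeplitz, hankel

def tau_matrix(t):
    """tau(T_n) = T_n - H(T_n) for the symmetric Toeplitz matrix with first
    column t = [t_1, ..., t_n] (t[0] on the diagonal). H(T_n) is the Hankel
    matrix with first row [t_3, ..., t_n, 0, 0] and last column
    [0, 0, t_n, ..., t_4, t_3]."""
    t = np.asarray(t, dtype=float)
    col = np.r_[t[2:], 0.0, 0.0]        # first column of H (= its first row)
    row = np.r_[0.0, 0.0, t[:1:-1]]     # last row of H
    return toeplitz(t) - hankel(col, row)

def sine_transform_matrix(n):
    """Normalized sine transform matrix of type I: real, symmetric, orthogonal."""
    k = np.arange(1, n + 1)
    return np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * np.outer(k, k) / (n + 1))

# Sanity check: S diagonalizes tau(T_n) for an arbitrary symmetric Toeplitz T_n.
t = np.array([2.0, -1.0, 0.3, 0.1, 0.05])
S = sine_transform_matrix(5)
D = S @ tau_matrix(t) @ S
print(np.abs(D - np.diag(np.diag(D))).max())   # off-diagonal residual, ~0
```

Note that a tridiagonal symmetric Toeplitz matrix has a vanishing Hankel correction, so it is already a τ matrix, consistent with the classical diagonalization of the discrete 1D Laplacian by the DST-I.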
Using standard and known techniques, the τ algebra has d-level versions for every $d\ge 1$, in which $\tau(T_{\mathbf n})$, with $T_{\mathbf n}$ a d-level symmetric Toeplitz matrix of size $\nu(\mathbf n) = n_1 n_2\cdots n_d$, $\mathbf n = (n_1,\ldots,n_d)$, has the diagonalization
$$\tau(T_{\mathbf n}) = S_{\mathbf n}\Lambda_{\mathbf n} S_{\mathbf n},\qquad S_{\mathbf n} = S_{n_1}\otimes\cdots\otimes S_{n_d},$$
where the diagonal matrix $\Lambda_{\mathbf n}$ is obtained as a d-level sine transform of type I of the first column of $T_{\mathbf n}$. Again, the quoted d-level transform and all the relevant matrix operations in the related algebra have a cost of $O(\nu(\mathbf n)\log\nu(\mathbf n))$ real operations, which is quasi-optimal given that the matrices have size $\nu(\mathbf n)$.
At the algebraic level, the explicit construction can be carried out recursively using the additive decomposition (9): first at the most external level, and then applying the same operation to any block which is a (d−1)-level symmetric Toeplitz matrix, and so on, until matrices with scalar entries are reached.
In light of the excellent structural, spectral, and computational features of the τ algebra in d levels, two different types of τ preconditioning for the related linear systems were proposed in [2] and one in [1] (with d = 2 and $\mathbf n = (n,n)$). Here, we consider the latter. In fact, $\{T_{\mathbf n}(f) - \tau(T_{\mathbf n}(f))\}_n \sim_{\sigma,\lambda} 0$ for any Lebesgue-integrable f, thanks to the distribution results on multilevel Hankel matrix-sequences generated by $L^1$ functions proven in [16]. From this, as proven in [1], the preconditioner $P_{\mathbf n} = \tau(T_{\mathbf n}(t_\alpha))$ is such that the preconditioned matrix-sequence is clustered at 1, both in the eigenvalue and in the singular value sense, under mild assumptions: using the notion of an approximating class of sequences, the eigenvalue perturbation results in [6], and the GLT apparatus [3], it is enough that μ(x,y) is Riemann integrable or simply bounded. Here, we extend the spectral distribution results to the case where μ(x,y) is not bounded and not even integrable. More precisely, as in Corollary 1, we consider the case of a power singularity.
Theorem 4.
Assume that there exist positive constants $c\le C$ for which
$$c/|x+\mathrm{i}y|^{\gamma} \le |\mu(x,y)| \le C/|x+\mathrm{i}y|^{\gamma}$$
for every $x,y\in[0,1]$. Consider the preconditioner $P_{\mathbf n} = \tau(T_{\mathbf n}(t_\alpha))$. Then, for every $\gamma\in[0,1)$ and for every $\alpha\in(1,2)$, we have
c1. $\{P_{\mathbf n}^{-1}\, h^{\alpha} D_n(\mu)\}_n \sim_\lambda 0$;
c2. $\{P_{\mathbf n}^{-1}(\hat B_{\mathbf n} + h^{\alpha} D_n(\mu))\}_n \sim_\lambda 1$.
Proof. 
We rely on Theorem 2, on Corollary 1, and on a standard symmetrization trick. First of all, we observe that the eigenvalues of $P_{\mathbf n}^{-1} h^{\alpha} D_n(\mu)$ and of $Y_n = P_{\mathbf n}^{-1/2}\, h^{\alpha} D_n(\mu)\, P_{\mathbf n}^{-1/2}$ coincide, because the two matrices are similar. The same holds for
$$P_{\mathbf n}^{-1}\big(\hat B_{\mathbf n} + h^{\alpha} D_n(\mu)\big)$$
and $X_n + Y_n$, with
$$X_n = P_{\mathbf n}^{-1/2}\, \hat B_{\mathbf n}\, P_{\mathbf n}^{-1/2}.$$
Now, $X_n$ is real symmetric and, in fact, positive definite, and so is $\hat B_{\mathbf n}$. As proven in [1], the spectral distribution function of $\{X_n\}_n$ is 1, thanks to a basic use of the GLT theory. Furthermore, the minimal eigenvalue of $P_{\mathbf n} = \tau(T_{\mathbf n}(t_\alpha))$ is positive and tends to zero as $h^{\alpha}$, since $t_\alpha$ has a unique zero of order α at the origin (see, e.g., [17]). Therefore,
$$\|h^{\alpha} P_{\mathbf n}^{-1/2}\| = \frac{h^{\alpha}}{\sqrt{\lambda_{\min}(\tau(T_{\mathbf n}(t_\alpha)))}} \le D\, h^{\alpha/2},\qquad \|P_{\mathbf n}^{-1/2}\| = \frac{1}{\sqrt{\lambda_{\min}(\tau(T_{\mathbf n}(t_\alpha)))}} \le D\, h^{-\alpha/2}.$$
As a consequence of Corollary 1, we deduce
$$\|Y_n\|_2 \le D^2\, \|D_n(\mu)\|_2 = o(n)$$
if and only if $\gamma<1$. In conclusion, the desired result follows from Theorem 2 with $d_n=n^2$ and $f\equiv 1$. □
Theorem 4 cannot be sharp, since the estimate in (11) would hold also if the preconditioner were chosen as $P_{\mathbf n} = h^{\alpha} I_{n^2}$. A more careful estimate would require taking into account that the eigenvalues of $\tau(T_{\mathbf n}(t_\alpha))$ are explicitly known; in fact, in the numerical experiments we will see that the spectral clustering at 1 of the preconditioned matrix-sequence is observed also for γ much larger than 1.

2.3. Numerical Evidence: Visualizations of the Original Matrix-Sequence

In the current subsection, we report visualizations regarding the analysis in Theorem 3. More precisely, in Figure 1, Figure 2, Figure 3 and Figure 4, we plot the eigenvalues of the matrix $\hat A_{\mathbf n}$ when the matrix size is $n^2 = 2^{12}$ and $(\alpha,\gamma)\in\{(1.2,1), (1.4,1), (1.6,1), (1.8,1)\}$, satisfying the assumption $\gamma<\alpha+1$ of Theorem 3. As can be observed, the clustering at zero of the imaginary parts of the eigenvalues of $\hat A_{\mathbf n}$ and the relation $\{\hat A_{\mathbf n}\}_n \sim_\lambda t_\alpha$ are visible already for the moderate matrix size $2^{12}$. A remarkable fact is that no outliers are present: the imaginary parts are always negligible and, in the plots, an equispaced sampling of $t_\alpha$ and the real parts of the eigenvalues, both taken in nondecreasing order, superpose completely.

2.4. Numerical Evidence: Preconditioning, Visualizations, GMRES Iterations

In the present subsection, we consider the preconditioned matrices. More in detail, Table 1, Table 2, Table 3 and Table 4 measure the clustering at 1 with radius $\epsilon = 0.1, 0.01$, for $\gamma = 0.5, 0.8, 1, 1.5$, for $\alpha = 1.2, 1.4, 1.6, 1.8$, and for various matrix sizes $n^2 = 2^8, 2^{10}, 2^{12}$. As can be seen, the number of outliers $N_o(\epsilon) = N_o(\epsilon, n)$ increases moderately with n, but its percentage with respect to the matrix size tends to zero fast, in agreement with the forecast of Theorem 4, at least for $\gamma<1$. The situation is in fact better than the theoretical predictions: even when the condition $\gamma<1$ is violated, we still observe a clustering at 1, which is not a surprise given the comments after Theorem 4.
The clustering in the preconditioning setting is also visualized in Figure 5, Figure 6, Figure 7 and Figure 8, while Table 5 shows that the localization around 1 is very good, since no large outliers occur. The moderate size of the outliers indicates that the preconditioned GMRES is expected to be optimal and robust with respect to all the involved parameters. The latter is evident in Table 6, with only a slight increase in the number of iterations when γ increases, in accordance with the slightly larger number and magnitude of the outliers.
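The mechanics of the preconditioned iteration can be illustrated on a one-dimensional analogue of the setting above (this reduction, together with all parameter values, is our simplification for illustration, not the paper's 2D experiment): a Toeplitz matrix generated by the 1D symbol $(4\sin^2(\eta/2))^{\alpha/2}$, perturbed by the complex diagonal $h^{\alpha}D_n(\mu)$, is preconditioned by the corresponding τ matrix applied through sine transforms:

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import LinearOperator, gmres

alpha, gamma, n = 1.5, 0.5, 256
h = 1.0 / (n + 1)

# Toeplitz coefficients of the 1D symbol via FFT-based quadrature.
Mq = 1 << 13
eta = 2.0 * np.pi * np.arange(Mq) / Mq
t = (np.fft.fft((4.0 * np.sin(eta / 2) ** 2) ** (alpha / 2)).real / Mq)[:n]

# System matrix: Toeplitz part plus complex power-singularity diagonal.
x = np.arange(1, n + 1) * h
A = toeplitz(t).astype(complex) + h**alpha * np.diag(1.0 / (x + 1j * x) ** gamma)

# Eigenvalues of tau(T_n): lambda_k = t_0 + 2 sum_j t_j cos(j k pi/(n+1)).
k = np.arange(1, n + 1)
theta = np.pi * k / (n + 1)
lam = t[0] + 2.0 * (t[1:, None] *
                    np.cos(np.outer(np.arange(1, n), theta))).sum(axis=0)

# Preconditioner action z -> S diag(1/lambda) S z via the (dense) DST-I matrix.
S = np.sqrt(2.0 / (n + 1)) * np.sin(np.outer(k, k) * np.pi / (n + 1))
prec = LinearOperator((n, n), matvec=lambda v: S @ ((S @ v) / lam),
                      dtype=complex)

b = np.ones(n, dtype=complex)
xsol, info = gmres(A, b, M=prec)
```

With the τ preconditioner the spectrum clusters near 1 and GMRES converges in a handful of iterations; in a production setting the dense matrix S would be replaced by the $O(n\log n)$ fast sine transform.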

2.5. Numerical Evidence: When the Hypotheses Are Violated

Here, we check the clustering at zero of the imaginary parts of the eigenvalues of $\hat A_{\mathbf n}$ and the relation $\{\hat A_{\mathbf n}\}_n \sim_\lambda t_\alpha$ for the moderate matrix size $2^{12}$, for $\gamma=3$, and for $\alpha = 1.2, 1.4, 1.6, 1.8$; see Figure 9, Figure 10, Figure 11 and Figure 12. We stress that the condition $\gamma<\alpha+1$ of Theorem 3 is violated. Nevertheless, the clustering at 0 of the imaginary parts is present, and the agreement between an equispaced sampling of $t_\alpha$ and the real parts of the eigenvalues of $\hat A_{\mathbf n}$ is still striking.
However, the number and the magnitude of the outliers of the preconditioned matrices start to become significant, as reported in Table 7 and Figure 13; hence, the number of preconditioned GMRES iterations starts to grow moderately with the matrix size, as can be seen in Table 8.

3. Conclusions

In this work, we considered a fractional Helmholtz equation approximated by ad hoc centered differences with variable wave number μ(x,y), in the specific case where the complex-valued function μ(x,y) has a pole of order γ. Eigenvalue distribution and clustering results have been derived, extending those in [1,2]. The numerical results presented in this work corroborate the analysis.
Many more intricate cases can be treated using the same type of theoretical apparatus, including the GLT theory [3,18] and non-Hermitian perturbation results, such as those in [4,6]. We list a few of them.
  • The numerical results in Section 2.5 seem to indicate that the spectral distribution of the original matrix-sequence and the spectral clustering at 1 of the preconditioned matrix-sequence hold also when the Frobenius norm condition in [4] is violated; this is an indication that Theorem 1 in [4] may not be sharp. A related conjecture is that the key condition $\|Y_n\|_2^2 = o(d_n)$ in Theorem 2 could be replaced by $\|Y_n\|_p^p = o(d_n)$, for any $p\in[1,\infty)$, which would be very useful when the trace norm is considered, i.e., for p = 1.
  • Definition 1 has been stated with a symbol of matrix size $r\ge 1$. In our study, for matrices arising from finite differences, the parameter r is always equal to 1. However, when considering isogeometric analysis approximations with polynomial degree p and regularity $C^k$, $0\le k\le p-1$, we have $r=(p-k)^d$ [19,20]. Notice that a particular case of the previous formula is that of order-p finite elements in space dimension d, which leads to $r=p^d$ [20,21], since k = 0. The discontinuous Galerkin techniques of degree p are also covered: we have $r=(p+1)^d$ [19], because k = −1.
  • The above considerations could also be developed in the case where the fractional Laplacian is defined on a non-Cartesian d-dimensional domain Ω, or equipped with variable coefficients, or approximated on graded grids. In fact, the related GLT theory is already available [3,18,19] to encompass such generality, while the non-Hermitian perturbation tools do not depend on a specific structure of the involved matrix-sequences.

Author Contributions

S.S.-C. is responsible for funding acquisition; the contribution of all authors is equal regarding all the listed items: Conceptualization, methodology, validation, investigation, data curation, writing—original draft preparation, writing—review and editing, visualization, supervision, project administration. All authors have read and agreed to the published version of the manuscript.

Funding

The work of Stefano Serra-Capizzano is supported by GNCS-INdAM and is funded by the European High-Performance Computing Joint Undertaking (JU) under grant agreement No. 955701. The JU receives support from the European Union’s Horizon 2020 research and innovation programme and Belgium, France, Germany, and Switzerland. Furthermore, Stefano Serra-Capizzano is grateful for the support of the Laboratory of Theory, Economics and Systems—Department of Computer Science at Athens University of Economics and Business.

Data Availability Statement

Data are contained within the article.

Acknowledgments

We thank the anonymous Referees for their careful work and for the explicit appreciation of our results.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Adriani, A.; Sormani, R.L.; Tablino-Possio, C.; Krause, R.; Serra-Capizzano, S. Asymptotic spectral properties and preconditioning of an approximated nonlocal Helmholtz equation with Caputo fractional Laplacian and variable coefficient wave number μ. arXiv 2024, arXiv:2402.10569. [Google Scholar]
  2. Li, T.-Y.; Chen, F.; Sun, H.W.; Sun, T. Preconditioning technique based on sine transformation for nonlocal Helmholtz equations with fractional Laplacian. J. Sci. Comput. 2023, 97, 17. [Google Scholar] [CrossRef]
  3. Garoni, C.; Serra-Capizzano, S. Generalized Locally Toeplitz Sequences: Theory and Applications; Springer: Cham, Switzerland, 2018; Volume II. [Google Scholar]
  4. Barbarino, G.; Serra-Capizzano, S. Non-Hermitian perturbations of Hermitian matrix-sequences and applications to the spectral analysis of the numerical approximation of partial differential equations. Numer. Linear Algebra Appl. 2020, 27, e2286. [Google Scholar] [CrossRef]
  5. Bhatia, R. Matrix Analysis, Graduate Texts in Mathematics; Springer: New York, NY, USA, 1997; Volume 169. [Google Scholar]
  6. Golinskii, L.; Serra-Capizzano, S. The asymptotic properties of the spectrum of nonsymmetrically perturbed Jacobi matrix sequences. J. Approx. Theory 2007, 144, 84–102. [Google Scholar] [CrossRef]
  7. Agarwal, R.P. Difference Equations and Inequalities: Second Edition, Revised and Expanded; Marcel Dekker: New York, NY, USA, 2000. [Google Scholar]
  8. Guo, S.-L.; Qi, F. Recursion Formulae for $\sum_{m=1}^{n} m^k$. Z. Anal. Anwend. 1999, 18, 1123–1130. [Google Scholar] [CrossRef]
  9. Kuang, J.C. Applied Inequalities, 2nd ed.; Hunan Education Press: Changsha, China, 1993. (In Chinese) [Google Scholar]
  10. Bini, D.; Capovani, M. Spectral and computational properties of band symmetric Toeplitz matrices. Linear Algebra Appl. 1983, 52–53, 99–126. [Google Scholar] [CrossRef]
  11. Chan, R.; Ng, M. Conjugate gradient methods for Toeplitz systems. SIAM Rev. 1996, 38, 427–482. [Google Scholar] [CrossRef]
  12. Serra-Capizzano, S. Superlinear PCG methods for symmetric Toeplitz systems. Math. Comp. 1999, 68, 793–803. [Google Scholar] [CrossRef]
  13. Di Benedetto, F.; Serra-Capizzano, S. Optimal multilevel matrix algebra operators. Linear Multilinear Algebra 2000, 48, 35–66. [Google Scholar] [CrossRef]
  14. Kailath, T.; Olshevsky, V. Displacement structure approach to discrete-trigonometric-transform based preconditioners of G. Strang type and of T. Chan type. SIAM J. Matrix Anal. Appl. 2005, 26, 706–734. [Google Scholar] [CrossRef]
  15. Loan, C.V. Computational Frameworks for the Fast Fourier Transform; Frontiers in Applied Mathematics; Society for Industrial and Applied Mathematics (SIAM): Philadelphia, PA, USA, 1992. [Google Scholar]
  16. Fasino, D.; Tilli, P. Spectral clustering properties of block multilevel Hankel matrices. Linear Algebra Appl. 2000, 306, 155–163. [Google Scholar] [CrossRef]
  17. Serra-Capizzano, S. On the extreme spectral properties of Toeplitz matrices generated by L1 functions with several minima/maxima. BIT 1996, 36, 135–142. [Google Scholar] [CrossRef]
  18. Barbarino, G. A systematic approach to reduced GLT. BIT 2022, 62, 681–743. [Google Scholar] [CrossRef]
  19. Barbarino, G.; Garoni, C.; Serra-Capizzano, S. Block generalized locally Toeplitz sequences: Theory and applications in the multidimensional case. Electr. Trans. Numer. Anal. 2020, 53, 113–216. [Google Scholar] [CrossRef]
  20. Garoni, C.; Speleers, H.; Ekström, S.-E.; Reali, A.; Serra-Capizzano, S.; Hughes, T.J.R. Symbol-based analysis of finite element and isogeometric B-spline discretizations of eigenvalue problems: Exposition and review. Arch. Comput. Methods Eng. 2019, 26, 1639–1690. [Google Scholar] [CrossRef]
  21. Garoni, C.; Serra-Capizzano, S.; Sesana, D. Spectral analysis and spectral symbol of d-variate Qp Lagrangian FEM stiffness matrices. SIAM J. Matrix Anal. Appl. 2015, 36, 1100–1128. [Google Scholar] [CrossRef]
Figure 1. Eigenvalues of the matrix $\hat A_{\mathbf n}$ for $\gamma=1$, $\alpha=1.2$ and $n^2=2^{12}$. The left panel reports the eigenvalues in the complex plane. The right panel reports in blue the real parts of the eigenvalues and in red the equispaced samplings of $t_\alpha$ in nondecreasing order, in the interval $[\min t_\alpha = 0,\ \max t_\alpha = 2^{3\alpha/2}]$.
Figure 2. Eigenvalues of the matrix $\hat A_{\mathbf n}$ for $\gamma=1$, $\alpha=1.4$ and $n^2=2^{12}$. The left panel reports the eigenvalues in the complex plane. The right panel reports in blue the real parts of the eigenvalues and in red the equispaced samplings of $t_\alpha$ in nondecreasing order, in the interval $[\min t_\alpha = 0,\ \max t_\alpha = 2^{3\alpha/2}]$.
Figure 3. Eigenvalues of the matrix $\hat A_{\mathbf n}$ for $\gamma=1$, $\alpha=1.6$ and $n^2=2^{12}$. The left panel reports the eigenvalues in the complex plane. The right panel reports in blue the real parts of the eigenvalues and in red the equispaced samplings of $t_\alpha$ in nondecreasing order, in the interval $[\min t_\alpha = 0,\ \max t_\alpha = 2^{3\alpha/2}]$.
Figure 4. Eigenvalues of the matrix Â_n for γ = 1, α = 1.8 and n² = 2¹². The left panel shows the eigenvalues in the complex plane. The right panel shows in blue the real parts of the eigenvalues and in red the equispaced samples of t_α, sorted in nondecreasing order, over the interval [min t_α = 0, max t_α = 2^(3α/2)].
Figure 5. Eigenvalues of the preconditioned matrix of size n² = 2¹² for γ = 0.5 and α ∈ {1.2, 1.4, 1.6, 1.8}, respectively.
Figure 6. Eigenvalues of the preconditioned matrix of size n² = 2¹² for γ = 0.8 and α ∈ {1.2, 1.4, 1.6, 1.8}, respectively.
Figure 7. Eigenvalues of the preconditioned matrix of size n² = 2¹² for γ = 1 and α ∈ {1.2, 1.4, 1.6, 1.8}, respectively.
Figure 8. Eigenvalues of the preconditioned matrix of size n² = 2¹² for γ = 1.5 and α ∈ {1.2, 1.4, 1.6, 1.8}, respectively.
Figure 9. Eigenvalues of the matrix Â_n for γ = 3, α = 1.2 and n² = 2¹². The left panel shows the eigenvalues in the complex plane. The right panel shows in blue the real parts of the eigenvalues and in red the equispaced samples of t_α, sorted in nondecreasing order, over the interval [min t_α = 0, max t_α = 2^(3α/2)].
Figure 10. Eigenvalues of the matrix Â_n for γ = 3, α = 1.4 and n² = 2¹². The left panel shows the eigenvalues in the complex plane. The right panel shows in blue the real parts of the eigenvalues and in red the equispaced samples of t_α, sorted in nondecreasing order, over the interval [min t_α = 0, max t_α = 2^(3α/2)].
Figure 11. Eigenvalues of the matrix Â_n for γ = 3, α = 1.6 and n² = 2¹². The left panel shows the eigenvalues in the complex plane. The right panel shows in blue the real parts of the eigenvalues and in red the equispaced samples of t_α, sorted in nondecreasing order, over the interval [min t_α = 0, max t_α = 2^(3α/2)].
Figure 12. Eigenvalues of the matrix Â_n for γ = 3, α = 1.8 and n² = 2¹². The left panel shows the eigenvalues in the complex plane. The right panel shows in blue the real parts of the eigenvalues and in red the equispaced samples of t_α, sorted in nondecreasing order, over the interval [min t_α = 0, max t_α = 2^(3α/2)].
Figure 13. Eigenvalues of the preconditioned matrix of size n² = 2¹² for γ = 3 and α = 1.8.
Table 1. Number of outliers N_o(ε) with respect to a neighborhood of 1 of radius ε = 0.1 or ε = 0.01, and the corresponding percentages, for increasing dimension n².

μ(x, y) = 1/(x + iy)^γ, γ = 0.5

α = 1.2
| n | N_o(0.1) | Percentage | N_o(0.01) | Percentage |
|---|---|---|---|---|
| 2⁴ | 2 | 7.812500 × 10⁻¹ | 227 | 8.867188 × 10¹ |
| 2⁵ | 6 | 5.859375 × 10⁻¹ | 302 | 2.949219 × 10¹ |
| 2⁶ | 16 | 3.906250 × 10⁻¹ | 307 | 7.495117 × 10⁰ |

α = 1.4
| n | N_o(0.1) | Percentage | N_o(0.01) | Percentage |
|---|---|---|---|---|
| 2⁴ | 1 | 3.906250 × 10⁻¹ | 89 | 3.476562 × 10¹ |
| 2⁵ | 5 | 4.882812 × 10⁻¹ | 99 | 9.667969 × 10⁰ |
| 2⁶ | 13 | 3.173828 × 10⁻¹ | 154 | 3.759766 × 10⁰ |

α = 1.6
| n | N_o(0.1) | Percentage | N_o(0.01) | Percentage |
|---|---|---|---|---|
| 2⁴ | 0 | 0 | 35 | 1.367188 × 10¹ |
| 2⁵ | 3 | 2.929688 × 10⁻¹ | 60 | 5.859375 × 10⁰ |
| 2⁶ | 8 | 1.953125 × 10⁻¹ | 112 | 2.734375 × 10⁰ |

α = 1.8
| n | N_o(0.1) | Percentage | N_o(0.01) | Percentage |
|---|---|---|---|---|
| 2⁴ | 0 | 0 | 20 | 7.812500 × 10⁰ |
| 2⁵ | 0 | 0 | 38 | 3.710938 × 10⁰ |
| 2⁶ | 0 | 0 | 73 | 1.782227 × 10⁰ |
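The outlier counts and percentages above can be reproduced, once the eigenvalues of the preconditioned matrix are available, by counting how many of them fall outside the disc of radius ε centered at 1. The following is a minimal sketch with a synthetic placeholder spectrum (not the eigenvalues of the actual preconditioned matrix):

```python
import numpy as np

def count_outliers(eigs, eps):
    """Number of eigenvalues z with |z - 1| >= eps, i.e. lying outside
    the eps-neighborhood of the cluster point 1."""
    return int(np.sum(np.abs(np.asarray(eigs) - 1.0) >= eps))

# Synthetic spectrum: 100 eigenvalues tightly clustered at 1, plus 2 outliers.
rng = np.random.default_rng(0)
eigs = np.concatenate([
    1.0 + 1e-4 * (rng.standard_normal(100) + 1j * rng.standard_normal(100)),
    np.array([0.5, 2.0 + 0.3j]),
])
for eps in (0.1, 0.01):
    N_o = count_outliers(eigs, eps)
    print(f"N_o({eps}) = {N_o}, percentage = {100.0 * N_o / eigs.size:.6e}")
```

In the tables, the percentage is computed exactly as above, i.e. 100 · N_o(ε)/n², with n² the matrix size.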
Table 2. Number of outliers N_o(ε) with respect to a neighborhood of 1 of radius ε = 0.1 or ε = 0.01, and the corresponding percentages, for increasing dimension n².

μ(x, y) = 1/(x + iy)^γ, γ = 0.8

α = 1.2
| n | N_o(0.1) | Percentage | N_o(0.01) | Percentage |
|---|---|---|---|---|
| 2⁴ | 3 | 1.171875 × 10⁰ | 233 | 9.101562 × 10¹ |
| 2⁵ | 7 | 6.835938 × 10⁻¹ | 395 | 3.857422 × 10¹ |
| 2⁶ | 16 | 3.906250 × 10⁻¹ | 456 | 1.113281 × 10¹ |

α = 1.4
| n | N_o(0.1) | Percentage | N_o(0.01) | Percentage |
|---|---|---|---|---|
| 2⁴ | 2 | 7.812500 × 10⁻¹ | 115 | 4.492188 × 10¹ |
| 2⁵ | 6 | 5.859375 × 10⁻¹ | 131 | 1.279297 × 10¹ |
| 2⁶ | 15 | 3.662109 × 10⁻¹ | 182 | 4.443359 × 10⁰ |

α = 1.6
| n | N_o(0.1) | Percentage | N_o(0.01) | Percentage |
|---|---|---|---|---|
| 2⁴ | 0 | 0 | 46 | 1.796875 × 10¹ |
| 2⁵ | 3 | 2.929688 × 10⁻¹ | 66 | 6.445312 × 10⁰ |
| 2⁶ | 8 | 1.953125 × 10⁻¹ | 116 | 2.832031 × 10⁰ |

α = 1.8
| n | N_o(0.1) | Percentage | N_o(0.01) | Percentage |
|---|---|---|---|---|
| 2⁴ | 0 | 0 | 20 | 7.812500 × 10⁰ |
| 2⁵ | 0 | 0 | 39 | 3.808594 × 10⁰ |
| 2⁶ | 0 | 0 | 75 | 1.831055 × 10⁰ |
Table 3. Number of outliers N_o(ε) with respect to a neighborhood of 1 of radius ε = 0.1 or ε = 0.01, and the corresponding percentages, for increasing dimension n².

μ(x, y) = 1/(x + iy)^γ, γ = 1

α = 1.2
| n | N_o(0.1) | Percentage | N_o(0.01) | Percentage |
|---|---|---|---|---|
| 2⁴ | 8 | 3.125000 × 10⁰ | 235 | 9.179688 × 10¹ |
| 2⁵ | 11 | 1.074219 × 10⁰ | 450 | 4.394531 × 10¹ |
| 2⁶ | 22 | 5.371094 × 10⁻¹ | 588 | 1.435547 × 10¹ |

α = 1.4
| n | N_o(0.1) | Percentage | N_o(0.01) | Percentage |
|---|---|---|---|---|
| 2⁴ | 2 | 7.812500 × 10⁻¹ | 128 | 5.000000 × 10¹ |
| 2⁵ | 6 | 5.859375 × 10⁻¹ | 164 | 1.601562 × 10¹ |
| 2⁶ | 15 | 3.662109 × 10⁻¹ | 217 | 5.297852 × 10⁰ |

α = 1.6
| n | N_o(0.1) | Percentage | N_o(0.01) | Percentage |
|---|---|---|---|---|
| 2⁴ | 0 | 0 | 58 | 2.265625 × 10¹ |
| 2⁵ | 2 | 1.953125 × 10⁻¹ | 76 | 7.421875 × 10⁰ |
| 2⁶ | 8 | 1.953125 × 10⁻¹ | 123 | 3.002930 × 10⁰ |

α = 1.8
| n | N_o(0.1) | Percentage | N_o(0.01) | Percentage |
|---|---|---|---|---|
| 2⁴ | 0 | 0 | 27 | 1.054688 × 10¹ |
| 2⁵ | 0 | 0 | 43 | 4.199219 × 10⁰ |
| 2⁶ | 0 | 0 | 78 | 1.904297 × 10⁰ |
Table 4. Number of outliers N_o(ε) with respect to a neighborhood of 1 of radius ε = 0.1 or ε = 0.01, and the corresponding percentages, for increasing dimension n².

μ(x, y) = 1/(x + iy)^γ, γ = 1.5

α = 1.2
| n | N_o(0.1) | Percentage | N_o(0.01) | Percentage |
|---|---|---|---|---|
| 2⁴ | 20 | 7.812500 × 10⁰ | 237 | 9.257812 × 10¹ |
| 2⁵ | 33 | 3.222656 × 10⁰ | 547 | 5.341797 × 10¹ |
| 2⁶ | 55 | 1.342773 × 10⁰ | 923 | 2.253418 × 10¹ |

α = 1.4
| n | N_o(0.1) | Percentage | N_o(0.01) | Percentage |
|---|---|---|---|---|
| 2⁴ | 9 | 3.515625 × 10⁰ | 154 | 6.015625 × 10¹ |
| 2⁵ | 13 | 1.269531 × 10⁰ | 246 | 2.402344 × 10¹ |
| 2⁶ | 24 | 5.859375 × 10⁻¹ | 367 | 8.959961 × 10⁰ |

α = 1.6
| n | N_o(0.1) | Percentage | N_o(0.01) | Percentage |
|---|---|---|---|---|
| 2⁴ | 3 | 1.171875 × 10⁰ | 78 | 3.046875 × 10¹ |
| 2⁵ | 5 | 4.882812 × 10⁻¹ | 117 | 1.142578 × 10¹ |
| 2⁶ | 11 | 2.685547 × 10⁻¹ | 181 | 4.418945 × 10⁰ |

α = 1.8
| n | N_o(0.1) | Percentage | N_o(0.01) | Percentage |
|---|---|---|---|---|
| 2⁴ | 1 | 3.906250 × 10⁻¹ | 42 | 1.640625 × 10¹ |
| 2⁵ | 1 | 9.765625 × 10⁻² | 61 | 5.957031 × 10⁰ |
| 2⁶ | 1 | 2.441406 × 10⁻² | 97 | 2.368164 × 10⁰ |
Table 5. Maximal distance of the eigenvalues of the preconditioned matrix from 1, for increasing dimension n².

μ(x, y) = 1/(x + iy)^γ

γ = 0.5
| n | α = 1.2 | α = 1.4 | α = 1.6 | α = 1.8 |
|---|---|---|---|---|
| 2⁴ | 1.289012 × 10⁻¹ | 1.065957 × 10⁻¹ | 8.471230 × 10⁻² | 5.120695 × 10⁻² |
| 2⁵ | 1.660444 × 10⁻¹ | 1.580600 × 10⁻¹ | 1.257282 × 10⁻¹ | 6.848000 × 10⁻² |
| 2⁶ | 2.157879 × 10⁻¹ | 2.018168 × 10⁻¹ | 1.609333 × 10⁻¹ | 9.085515 × 10⁻² |

γ = 0.8
| n | α = 1.2 | α = 1.4 | α = 1.6 | α = 1.8 |
|---|---|---|---|---|
| 2⁴ | 1.495892 × 10⁻¹ | 1.105071 × 10⁻¹ | 8.629435 × 10⁻² | 5.992612 × 10⁻² |
| 2⁵ | 1.699374 × 10⁻¹ | 1.589429 × 10⁻¹ | 1.257050 × 10⁻¹ | 6.831767 × 10⁻² |
| 2⁶ | 2.172238 × 10⁻¹ | 2.017168 × 10⁻¹ | 1.604429 × 10⁻¹ | 9.037168 × 10⁻² |

γ = 1
| n | α = 1.2 | α = 1.4 | α = 1.6 | α = 1.8 |
|---|---|---|---|---|
| 2⁴ | 1.775892 × 10⁻¹ | 1.177315 × 10⁻¹ | 9.016597 × 10⁻² | 6.761970 × 10⁻² |
| 2⁵ | 1.783878 × 10⁻¹ | 1.626862 × 10⁻¹ | 1.274329 × 10⁻¹ | 6.947072 × 10⁻² |
| 2⁶ | 2.226273 × 10⁻¹ | 2.038328 × 10⁻¹ | 1.612753 × 10⁻¹ | 9.085565 × 10⁻² |

γ = 1.5
| n | α = 1.2 | α = 1.4 | α = 1.6 | α = 1.8 |
|---|---|---|---|---|
| 2⁴ | 6.788881 × 10⁻¹ | 3.445147 × 10⁻¹ | 1.712513 × 10⁻¹ | 1.195110 × 10⁻¹ |
| 2⁵ | 8.354951 × 10⁻¹ | 3.702033 × 10⁻¹ | 1.651846 × 10⁻¹ | 1.109992 × 10⁻¹ |
| 2⁶ | 1.029919 × 10⁰ | 3.983900 × 10⁻¹ | 1.686921 × 10⁻¹ | 1.048754 × 10⁻¹ |
Table 6. Number of GMRES iterations, without preconditioning (column "–") and with the τ preconditioner (column "P_τ"), needed to solve the linear system for increasing dimension n², with stopping tolerance tol = 10⁻¹¹.

μ(x, y) = 1/(x + iy)^γ

γ = 0.5
|   | α = 1.2 |     | α = 1.4 |     | α = 1.6 |     | α = 1.8 |     |
| n | –       | P_τ | –       | P_τ | –       | P_τ | –       | P_τ |
|---|---|---|---|---|---|---|---|---|
| 2⁴ | 35 | 9 | 41 | 8 | 47 | 7 | 54 | 7 |
| 2⁵ | 54 | 9 | 67 | 9 | 83 | 8 | 101 | 7 |
| 2⁶ | 82 | 10 | 109 | 9 | 144 | 8 | 189 | 7 |
| 2⁷ | 124 | 11 | 177 | 10 | 251 | 9 | 351 | 7 |
| 2⁸ | 189 | 11 | 288 | 10 | 437 | 9 | >500 | 8 |
| 2⁹ | 287 | 11 | 467 | 10 | >500 | 9 | >500 | 8 |

γ = 0.8
|   | α = 1.2 |     | α = 1.4 |     | α = 1.6 |     | α = 1.8 |     |
| n | –       | P_τ | –       | P_τ | –       | P_τ | –       | P_τ |
|---|---|---|---|---|---|---|---|---|
| 2⁴ | 36 | 9 | 42 | 9 | 49 | 8 | 56 | 7 |
| 2⁵ | 55 | 10 | 68 | 9 | 85 | 8 | 103 | 7 |
| 2⁶ | 83 | 11 | 111 | 10 | 147 | 9 | 192 | 7 |
| 2⁷ | 126 | 11 | 180 | 10 | 256 | 9 | 356 | 8 |
| 2⁸ | 191 | 12 | 293 | 10 | 449 | 9 | >500 | 8 |
| 2⁹ | 290 | 12 | 477 | 11 | >500 | 10 | >500 | 8 |

γ = 1
|   | α = 1.2 |     | α = 1.4 |     | α = 1.6 |     | α = 1.8 |     |
| n | –       | P_τ | –       | P_τ | –       | P_τ | –       | P_τ |
|---|---|---|---|---|---|---|---|---|
| 2⁴ | 36 | 10 | 42 | 9 | 49 | 8 | 56 | 7 |
| 2⁵ | 55 | 11 | 69 | 10 | 86 | 9 | 104 | 7 |
| 2⁶ | 84 | 11 | 112 | 10 | 148 | 9 | 193 | 8 |
| 2⁷ | 127 | 12 | 182 | 10 | 258 | 9 | 359 | 8 |
| 2⁸ | 193 | 12 | 295 | 11 | 452 | 10 | >500 | 8 |
| 2⁹ | 293 | 12 | 480 | 11 | >500 | 10 | >500 | 8 |

γ = 1.5
|   | α = 1.2 |     | α = 1.4 |     | α = 1.6 |     | α = 1.8 |     |
| n | –       | P_τ | –       | P_τ | –       | P_τ | –       | P_τ |
|---|---|---|---|---|---|---|---|---|
| 2⁴ | 38 | 13 | 44 | 11 | 51 | 9 | 57 | 8 |
| 2⁵ | 59 | 14 | 71 | 12 | 88 | 10 | 106 | 8 |
| 2⁶ | 89 | 16 | 116 | 12 | 152 | 10 | 197 | 8 |
| 2⁷ | 136 | 17 | 188 | 13 | 264 | 10 | 366 | 8 |
| 2⁸ | 206 | 18 | 306 | 13 | 461 | 10 | >500 | 8 |
| 2⁹ | 312 | 20 | 498 | 14 | >500 | 11 | >500 | 9 |
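The gain in the P_τ columns above comes from the fact that τ matrices are simultaneously diagonalized by the discrete sine transform, so the preconditioner can be applied at the cost of two fast sine transforms plus a diagonal scaling. The sketch below illustrates this algebra on the simplest τ prototype, the 1D discrete Laplacian; it is an illustrative reduction (the paper's P_τ acts on the 2D fractional operator) and assembles the DST-I matrix explicitly for clarity, whereas a fast transform would be used in practice:

```python
import numpy as np

n = 16
# Orthogonal, symmetric DST-I matrix: S[j,k] = sqrt(2/(n+1)) sin(pi (j+1)(k+1)/(n+1))
idx = np.arange(1, n + 1)
S = np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * np.outer(idx, idx) / (n + 1))

# Prototype tau matrix: the 1D discrete Laplacian tridiag(-1, 2, -1)
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# S diagonalizes T, with known eigenvalues 2 - 2 cos(k pi/(n+1)), k = 1..n
D = S @ T @ S                       # S is symmetric and orthogonal, so S @ T @ S = S.T @ T @ S
off_diag = np.max(np.abs(D - np.diag(np.diag(D))))
known = 2 - 2 * np.cos(idx * np.pi / (n + 1))
print(off_diag)                     # numerically zero
print(np.max(np.abs(np.sort(np.diag(D)) - np.sort(known))))

# Applying the tau preconditioner to a vector: v -> S diag(d)^{-1} S v,
# which here solves T z = v exactly since T itself is a tau matrix.
v = np.ones(n)
z = S @ ((S @ v) / np.diag(D))
print(np.max(np.abs(T @ z - v)))    # residual, numerically zero
```

For the fractional problem, the τ preconditioner is only spectrally close to the coefficient matrix rather than equal to it, which is why the iteration counts are small and essentially constant in n instead of exactly one.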
Table 7. Number of outliers N_o(ε) with respect to a neighborhood of 1 of radius ε = 0.1 or ε = 0.01, and the corresponding percentages, for increasing dimension n².

μ(x, y) = 1/(x + iy)^γ, γ = 3

α = 1.8
| n | N_o(0.1) | Percentage | N_o(0.01) | Percentage |
|---|---|---|---|---|
| 2⁴ | 15 | 5.859375 × 10⁰ | 85 | 3.320312 × 10¹ |
| 2⁵ | 26 | 2.539062 × 10⁰ | 162 | 1.582031 × 10¹ |
| 2⁶ | 50 | 1.220703 × 10⁰ | 303 | 7.397461 × 10⁰ |
Table 8. Number of GMRES iterations, without preconditioning (column "–") and with the τ preconditioner (column "P_τ"), needed to solve the linear system for increasing dimension n², with stopping tolerance tol = 10⁻¹¹.

μ(x, y) = 1/(x + iy)^γ, γ = 3, α = 1.8

| n | – | P_τ |
|---|---|---|
| 2⁴ | 69 | 17 |
| 2⁵ | 125 | 22 |
| 2⁶ | 236 | 32 |
| 2⁷ | 454 | 47 |
| 2⁸ | >500 | 70 |
| 2⁹ | >500 | 108 |
Share and Cite

MDPI and ACS Style

Adriani, A.; Serra-Capizzano, S.; Tablino-Possio, C. Clustering/Distribution Analysis and Preconditioned Krylov Solvers for the Approximated Helmholtz Equation and Fractional Laplacian in the Case of Complex-Valued, Unbounded Variable Coefficient Wave Number μ. Algorithms 2024, 17, 100. https://doi.org/10.3390/a17030100
