Article

Matrix-Sequences of Geometric Means in the Case of Hidden (Asymptotic) Structures

by Danyal Ahmad 1, Muhammad Faisal Khan 2 and Stefano Serra-Capizzano 2,3,*

1 NUTECH School of Applied Sciences and Humanities, National University of Technology, Islamabad 44000, Pakistan
2 Department of Science and High Technology, University of Insubria, 22100 Como, Italy
3 Department of Information Technology, University of Uppsala, 75310 Uppsala, Sweden
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(3), 393; https://doi.org/10.3390/math13030393
Submission received: 14 December 2024 / Revised: 20 January 2025 / Accepted: 22 January 2025 / Published: 24 January 2025
(This article belongs to the Special Issue Numerical Analysis and Matrix Computations: Theory and Applications)

Abstract: In the current work, we analyze the spectral distribution of the geometric mean of two or more matrix-sequences constituted by Hermitian positive definite matrices, under the assumption that all input matrix-sequences belong to the same Generalized Locally Toeplitz (GLT) ∗-algebra. We consider the geometric mean for two matrices, using the Ando–Li–Mathias (ALM) definition, and then we pass to the extension of the idea to more than two matrices by introducing the Karcher mean. While there is no simple formula for the geometric mean of more than two matrices, iterative methods from the literature are employed to compute it. The main novelty of the work is the extension of the study in the distributional sense when input matrix-sequences belong to one of the GLT ∗-algebras. More precisely, we show that the geometric mean of more than two positive definite GLT matrix-sequences forms a new GLT matrix-sequence, with the GLT symbol given by the geometric mean of the individual symbols. Numerical experiments are reported concerning scalar and block GLT matrix-sequences in both one-dimensional and two-dimensional cases. A section with conclusions and open problems ends the current work.

1. Introduction

The study of geometric means of matrices has become increasingly important in various fields, including numerical linear algebra, differential equations, and signal processing. Positive definite matrices arise naturally in many applications, such as in the discretization of differential equations via methods like finite differences or finite elements. These methods produce matrix-sequences whose sizes increase as the mesh becomes finer. An essential concept related to matrix-sequences is the asymptotic eigenvalue distribution in the Weyl sense, which is a classical subject of study; see, e.g., [1,2,3]. Since the publication of the seminal papers by Tyrtyshnikov in 1996 [4] and Tilli in 1998 [5,6], there has been growing interest in this area, which eventually contributed to the development of the theory of Generalized Locally Toeplitz (GLT) sequences [7,8]. The increasing attention to this topic is not purely theoretical, as the asymptotic eigenvalue and singular value distributions have significant practical applications, particularly in the analysis of large-scale matrix computations, especially in the context of the numerical approximation of systems of (fractional) partial differential equations ((F)PDEs); see, e.g., the books and review papers [9,10,11,12,13] and references therein. In fact, it is worth noticing that virtually any meaningful approximation technique for a (F)PDE leads to a GLT matrix-sequence, including finite differences, finite elements of any order, discontinuous Galerkin techniques, finite volumes, isogeometric analysis, etc. In more detail, if we fix positive integers d, r, then the set of d-level r-block GLT matrix-sequences forms a maximal ∗-algebra of matrix-sequences, isometrically equivalent to the ∗-algebra of 2d-variate $r \times r$ matrix-valued measurable functions defined on $[0,1]^d \times [-\pi,\pi]^d$. Furthermore, a d-level r-block GLT sequence $\{A_n\}_n$ is uniquely associated with an $r \times r$ matrix-valued Lebesgue-measurable function κ, known as the GLT symbol, which is defined over the domain $D = [0,1]^d \times [-\pi,\pi]^d$. Notice that the set $[0,1]^d$ can be replaced by any bounded Peano–Jordan measurable subset of $\mathbb{R}^d$, as occurs with the notion of reduced GLT ∗-algebras; see [7] (pp. 398–399, Formula (59)) for the first occurrence, with applications to approximated PDEs on general non-Cartesian domains in d dimensions, ref. [8] (Section 3.1.4) for the first formal proposal, and ref. [14] for an exhaustive treatment, containing both the ∗-algebra theoretical results and a number of applications. This symbol provides a powerful tool for analyzing the singular value and eigenvalue distributions when the matrices $A_n$ are Hermitian and part of a matrix-sequence of increasing size. The notation $\{A_n\}_n \sim_{GLT} \kappa$ indicates that $\{A_n\}_n$ is a GLT sequence with symbol κ. Notably, the symbol of a GLT sequence is unique, in the sense that if $\{A_n\}_n \sim_{GLT} \kappa$ and $\{A_n\}_n \sim_{GLT} \xi$, then κ = ξ almost everywhere in $[0,1]^d \times [-\pi,\pi]^d$ [9,10,11,12]. Furthermore, by the ∗-algebra structure, $\{A_n\}_n \sim_{GLT} \kappa$ and $\{B_n\}_n \sim_{GLT} \kappa$ imply that $\{A_n - B_n\}_n \sim_{GLT} 0$, i.e., the matrix-sequence $\{A_n - B_n\}_n$ is zero-distributed; the latter is very important for building explicit matrix-sequences that approximate a given GLT matrix-sequence and whose inversion is computationally cheap, in the context of preconditioning of large linear systems.
In certain physical applications, it is often necessary to represent the results of multiple experiments through a single average matrix G, where the data are represented by a set of positive definite matrices $A_1, A_2, \dots, A_k$. The arithmetic mean $\frac{1}{k}\sum_{i=1}^{k} A_i$ is not appropriate in these cases, because it does not fulfill the requirement that the inverse of the mean should coincide with the average of the inverses, $G^{-1} = \frac{1}{k}\sum_{i=1}^{k} A_i^{-1}$. This property, which is crucial in certain physical models, is satisfied by the geometric mean. For positive real numbers $a_1, a_2, \dots, a_k$, the geometric mean is defined as
$$g = \left(\prod_{i=1}^{k} a_i\right)^{1/k},$$
a concept that is extended to the case of matrices [15,16] in a nontrivial way, where the difficulty is, of course, the lack of commutativity. A well-known definition that satisfies desirable properties such as congruence invariance, permutation invariance, and consistency with the scalar geometric mean was proposed by ALM [17]. They defined the geometric mean of two Hermitian positive definite (HPD) matrices as
$$G(A,B) = A^{1/2}\left(A^{-1/2} B A^{-1/2}\right)^{1/2} A^{1/2}. \qquad (1)$$
We recall the definition of functions applied to diagonalizable matrices, which we frequently utilize throughout our work. Suppose $A \in \mathbb{C}^{n \times n}$ is diagonalizable, meaning $A = M D M^{-1}$, where $D = (d_{ij})$ is a diagonal matrix, and f is a given function. In this case, $f(A)$ is defined as $M f(D) M^{-1}$, with $f(D)$ being the diagonal matrix whose diagonal entries are $f(d_{ii})$, for $i = 1, \dots, n$. In the case where f is a multi-valued function, the same branch of f must be chosen for any repeated eigenvalue.
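Since Equation (1) only involves principal square roots and inversions of HPD matrices, it translates directly into a few lines of code. The following minimal sketch (our own NumPy/SciPy illustration, not code from the paper's experiments) computes G(A, B) and checks the consistency-with-scalars property on a pair of commuting HPD matrices.

```python
# A minimal sketch (assuming NumPy/SciPy) of the ALM geometric mean (1)
# of two HPD matrices, via principal matrix square roots.
import numpy as np
from scipy.linalg import sqrtm, inv

def alm_mean(A, B):
    """G(A, B) = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}."""
    Ah = sqrtm(A)                  # principal square root of the HPD matrix A
    Ahi = inv(Ah)
    return Ah @ sqrtm(Ahi @ B @ Ahi) @ Ah

# Consistency with scalars: G(A, B) = (A B)^{1/2} for commuting A, B.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)        # HPD test matrix
B = A @ A                          # a polynomial in A, hence commuting with A
print(np.allclose(alm_mean(A, B), sqrtm(A @ B)))   # True
```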
If k > 2, then the ALM mean is obtained through a recursive iteration process where, at each step, the geometric means of k matrices are computed by reducing them to k − 1 matrices. However, a significant limitation of this method is its linear convergence, leading to a high computational cost due to the large number of iterations required at each recursive step. As a result, the computation of the ALM mean using this approach becomes quite expensive: despite the elegance of the ALM geometric mean for two matrices, it is computationally infeasible to extend this formula to more than two matrices [18].
To overcome these limitations, the Karcher mean [19] was introduced as a generalization of the geometric mean to more than two matrices. The Karcher mean of HPD matrices $A_1, A_2, \dots, A_k$ is defined as the unique positive definite solution of the matrix equation
$$\sum_{i=1}^{k} \log\left(A_i^{-1} X\right) = 0, \qquad (3)$$
as established by Moakher [15] (Proposition 3.4). This equation can be equivalently expressed in other forms, such as
$$\sum_{i=1}^{k} \log\left(X A_i^{-1}\right) = 0, \qquad \text{or} \qquad \sum_{i=1}^{k} \log\left(X^{1/2} A_i^{-1} X^{1/2}\right) = 0,$$
by utilizing the formula $M^{-1}\log(K)M = \log(M^{-1}KM)$, which holds for any invertible matrix M and any matrix K with real positive eigenvalues. This formulation arises from Riemannian geometry, where the space of positive definite matrices forms a Riemannian manifold with non-positive curvature (see [18,20,21]). The Karcher mean represents the center of mass (or barycenter) on this manifold [16]. In this manifold, the distance between two positive definite matrices A and B is defined as $\delta(A,B) = \|\log(A^{-1/2} B A^{-1/2})\|_F$, where $\|\cdot\|_F$ denotes the Frobenius norm.
Several numerical methods have been proposed for solving the Karcher mean equation. Initially, fixed-point iteration methods were used, but these methods suffered from slow convergence, especially in cases where the matrices involved were poorly conditioned [15]. Later, methods based on gradient descent in the Riemannian manifold were introduced. A common iteration scheme for approximating the Karcher mean is
$$X_{v+1} = X_v \exp\left(-\theta_v \sum_{i=1}^{k} \log\left(A_i^{-1} X_v\right)\right), \qquad X_0 \in \mathcal{P}_n, \quad v \ge 0, \qquad (4)$$
where X v is the approximation at the v-th step, and the exponential and logarithmic functions are matrix operations. Although this method improves convergence, it can exhibit slow linear convergence in certain cases. The iteration with X 0 = A 1 or X 0 = I and θ v = 1 / k , as considered in [15,22], can fail to converge for some matrices A 1 , , A k . Furthermore, similar iterations have been proposed in [23,24], but without specific recommendations on choosing the initial value or step length. While an optimal θ v could theoretically be determined using a line search strategy, this approach is often computationally expensive. Heuristic strategies for selecting the step size, as discussed in [20], may result in slow convergence in many cases.
To further enhance the convergence rate, the Richardson iteration method is employed. Indeed, the considered method improves the convergence by using a parameter θ , which controls the step size in each iteration [19]. More precisely, given X 0 P n , the Richardson iteration is given by
$$X_{v+1} = X_v - \theta\, X_v \sum_{i=1}^{k} \log\left(A_i^{-1} X_v\right) =: T(X_v), \qquad v \ge 0. \qquad (5)$$
Any solution of Equation (3) is a fixed point of the map T in (5). The iterative formula can also be rewritten as
$$X_{\nu+1} = X_\nu - \theta\, X_\nu^{1/2}\left(\sum_{i=1}^{k} \log\left(X_\nu^{1/2} A_i^{-1} X_\nu^{1/2}\right)\right) X_\nu^{1/2}, \qquad \nu \ge 0, \qquad (6)$$
provided that all the iterates X ν remain positive definite. Equation (6) further demonstrates that if X ν is Hermitian, then X ν + 1 is also a Hermitian matrix.
The parameter θ plays a crucial role in controlling the step size of each iteration, and choosing an optimal value of θ can significantly influence the convergence behavior of the iteration. If θ is small enough, the iteration is guaranteed to produce positive definite matrices and converge towards the solution. In particular, when the matrices $A_1, \dots, A_k$ commute, setting $\theta = 1/k$ ensures at least quadratic convergence. More generally, the optimal value of θ can be determined based on the condition numbers of the matrices $M_i = G^{1/2} A_i^{-1} G^{1/2}$, $i = 1, \dots, k$, where G is the desired solution. The closer the eigenvalues of $M_i$ are to 1, the faster the convergence is. This analysis guarantees that if the initial guess $X_0$ is close enough to the solution and is positive definite, the sequence $\{X_v\}$ generated by the iteration remains well-defined and converges to the desired solution. However, if the initial iterate is not positive definite, adjusting the value of θ or modifying the iteration scheme may be necessary to ensure that all iterates remain positive.
Numerical experiments show that selecting appropriate initial guesses, such as the arithmetic mean X 0 = 1 k i = 1 k A i or the identity matrix X 0 = I , can significantly affect the convergence rate. In particular, the cheap mean introduced in [25] provides a practical initial approximation that leads to faster convergence in many cases.
In our study, we consider the analysis of the Karcher mean for matrix-sequences, particularly those arising from the discretization of differential equations, which often form GLT sequences. Numerical results demonstrate that the geometric mean of not only two but more than two GLT matrix-sequences is itself a GLT matrix-sequence, with the symbol of the new sequence given by the geometric mean of the original symbols: the latter is formally proven in the case of two GLT matrix-sequences. Regarding the examples, we consider either scalar unilevel and multilevel GLT matrix-sequences or block GLT asymptotic structures, with special attention to cases stemming from the approximation by local methods of differential operators. By analyzing the spectral distribution of the geometric mean of these matrix-sequences, we provide new insights into the asymptotic behavior of large-scale matrix computations and their potential applications in numerical analysis.
This paper is structured as follows. In Section 2, we introduce notations, terminology, and preliminary results concerning Toeplitz and GLT structures, which are essential for the mathematical formulation of the problem and its technical solution. In Section 3, we present the geometric mean of two matrices and the Karcher mean for more than two matrices, followed by a discussion of the iterative methods employed for their computation; the section contains the GLT theoretical results. Section 4 contains numerical experiments that illustrate the (asymptotic) spectral behavior of the geometric mean for GLT matrix-sequences in both 1D and 2D settings and in both scalar and block cases. Finally, in Section 5 we draw conclusions and point out a few open problems.

2. Preliminaries

In this section, we provide the necessary tools for performing the spectral analysis of the matrices involved, based on the theory of unilevel and multilevel block GLT matrix-sequences.

2.1. Matrices and Matrix-Sequences

Given a square matrix $A \in \mathbb{C}^{m \times m}$, we denote by $A^*$ its conjugate transpose and by $A^\dagger$ the Moore–Penrose pseudoinverse of A. Recall that $A^\dagger = A^{-1}$ whenever A is invertible. Regarding matrix norms, $\|\cdot\|$ refers to the spectral norm and, for $1 \le p \le \infty$, the notation $\|\cdot\|_p$ stands for the Schatten p-norm, defined as the p-norm of the vector of the singular values. Note that the Schatten ∞-norm, which is equal to the largest singular value, coincides with the spectral norm $\|\cdot\|$; the Schatten 1-norm, being the sum of the singular values, is often referred to as the trace-norm; and the Schatten 2-norm coincides with the Frobenius norm. Schatten p-norms, as important special cases of unitarily invariant norms, are treated in detail in a wonderful book by Bhatia [26].
Finally, the expression matrix-sequence refers to any sequence of the form $\{A_n\}_n$, where $A_n$ is a square matrix of size $d_n$, with $d_n$ strictly increasing, so that $d_n \to \infty$ as $n \to \infty$. An r-block matrix-sequence, or simply a matrix-sequence if r can be deduced from the context, is a special $\{A_n\}_n$ in which the size of $A_n$ is $d_n = r\,\varphi_n$, with $r \ge 1$ fixed and $\varphi_n \in \mathbb{N}$ strictly increasing.

2.2. Multi-Index Notation

To effectively deal with multilevel structures, it is necessary to use multi-indices, which are vectors of the form $i = (i_1,\dots,i_d) \in \mathbb{Z}^d$. The related notation is listed below.
  • $\mathbf{0}, \mathbf{1}, \mathbf{2}, \dots$ are vectors of all zeros, ones, twos, etc.
  • $h \le k$ means that $h_r \le k_r$ for all $r = 1,\dots,d$. In general, relations between multi-indices are evaluated componentwise.
  • Operations between multi-indices, such as addition, subtraction, multiplication, and division, are also performed componentwise.
  • The multi-index interval $[h,k]$ is the set $\{j \in \mathbb{Z}^d : h \le j \le k\}$. We always assume that the elements in an interval $[h,k]$ are ordered in the standard lexicographic manner
    $$\Big[\big[\dots\big[\,[\,(j_1,\dots,j_d)\,]_{j_d=h_d,\dots,k_d}\big]_{j_{d-1}=h_{d-1},\dots,k_{d-1}}\dots\big]_{j_1=h_1,\dots,k_1}.$$
  • $j = h,\dots,k$ means that j varies from h to k, always following the lexicographic ordering.
  • $m \to \infty$ means that $\min(m) = \min_{j=1,\dots,d} m_j \to \infty$.
  • The product of all the components of m is denoted by $\nu(m) := \prod_{j=1}^{d} m_j$.
A multilevel matrix-sequence is a matrix-sequence $\{A_{\boldsymbol{n}}\}_n$ such that n varies in some infinite subset of $\mathbb{N}$, $\boldsymbol{n} = \boldsymbol{n}(n)$ is a multi-index in $\mathbb{N}^d$ depending on n, and $\boldsymbol{n} \to \infty$ when $n \to \infty$. This is typical of many approximations of differential operators in d dimensions.

2.3. Singular Value and Eigenvalue Distributions of a Matrix-Sequence

Let $\mu_k$ be the Lebesgue measure in $\mathbb{R}^k$. Throughout this work, all terminology from measure theory (such as "measurable set", "measurable function", "almost everywhere", etc.) always refers to the Lebesgue measure. Let $C_c(\mathbb{R})$ (resp., $C_c(\mathbb{C})$) be the space of continuous complex-valued functions with bounded support defined on $\mathbb{R}$ (resp., $\mathbb{C}$). If $A \in \mathbb{C}^{n \times n}$, the singular values and eigenvalues of A are denoted by $\sigma_1(A), \dots, \sigma_n(A)$ and $\lambda_1(A), \dots, \lambda_n(A)$, respectively. The set of the eigenvalues (i.e., the spectrum) of A is denoted by $\Lambda(A)$.
Definition 1
(Singular value and eigenvalue distribution of a matrix-sequence). Let $\{A_n\}_n$ be a matrix-sequence, with $A_n$ of size $d_n$, and let $\psi : D \subset \mathbb{R}^t \to \mathbb{C}^{r \times r}$ be a measurable function defined on a set D with $0 < \mu_t(D) < \infty$.
  • We say that $\{A_n\}_n$ has an (asymptotic) singular value distribution described by ψ, and we write $\{A_n\}_n \sim_\sigma \psi$, if
    $$\lim_{n\to\infty} \frac{1}{d_n}\sum_{i=1}^{d_n} F\big(\sigma_i(A_n)\big) = \frac{1}{\mu_t(D)}\int_D \frac{1}{r}\sum_{i=1}^{r} F\big(\sigma_i(\psi(x))\big)\,dx, \qquad \forall F \in C_c(\mathbb{R}).$$
  • We say that $\{A_n\}_n$ has an (asymptotic) spectral (or eigenvalue) distribution described by ψ, and we write $\{A_n\}_n \sim_\lambda \psi$, if
    $$\lim_{n\to\infty} \frac{1}{d_n}\sum_{i=1}^{d_n} F\big(\lambda_i(A_n)\big) = \frac{1}{\mu_t(D)}\int_D \frac{1}{r}\sum_{i=1}^{r} F\big(\lambda_i(\psi(x))\big)\,dx, \qquad \forall F \in C_c(\mathbb{C}).$$
  • If ψ describes both the singular value and eigenvalue distribution of $\{A_n\}_n$, we write $\{A_n\}_n \sim_{\sigma,\lambda} \psi$.
In this case, the function ψ is referred to as the eigenvalue (or spectral) symbol of { A n } n .
The same definition applies when the considered matrix-sequence shows a multilevel structure. In that case, n is replaced by the multi-index $\boldsymbol{n}$, uniformly in $A_{\boldsymbol{n}}$ and $d_{\boldsymbol{n}}$.
The informal meaning behind the spectral distribution definition is as follows: if ψ is continuous, then a suitable ordering of the eigenvalues $\{\lambda_j(A_n)\}_{j=1,\dots,d_n}$, assigned in correspondence with an equispaced grid on D, reconstructs approximately the r surfaces $x \mapsto \lambda_i(\psi(x))$, $i = 1,\dots,r$. For example, in the simplest case, where t = 1, $D = [a,b]$, and $d_n = nr$, the eigenvalues of $A_n$ are approximately equal—up to a few potential outliers—to $\lambda_i(\psi(x_j))$, where
$$x_j = a + \frac{j\,(b-a)}{n}, \qquad j = 1,\dots,n, \quad i = 1,\dots,r.$$
If t = 2 and $D = [a_1,b_1] \times [a_2,b_2]$, $d_n = n^2 r$, the eigenvalues of $A_n$ are approximately equal—again up to a few potential outliers—to $\lambda_i(\psi(x_{j_1}, y_{j_2}))$, where
$$x_{j_1} = a_1 + \frac{j_1\,(b_1-a_1)}{n}, \qquad y_{j_2} = a_2 + \frac{j_2\,(b_2-a_2)}{n}, \qquad j_1, j_2 = 1,\dots,n, \quad i = 1,\dots,r.$$
If the considered structure is two-level, then the subscript is $\boldsymbol{n} = (n_1,n_2)$ and $d_{\boldsymbol{n}} = n_1 n_2 r$.
Furthermore, for $t \ge 3$, a similar reasoning applies.
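As a small illustration of the informal meaning just described (our own sketch, assuming t = 1, r = 1, and the Hermitian Toeplitz sequence generated by $f(\theta) = 2 - 2\cos\theta$ as a working example), the sorted eigenvalues of $T_n(f)$ can be compared with a uniform sampling of the symbol:

```python
# Sorted eigenvalues of T_n(2 - 2cosθ) versus equispaced samples of the
# symbol over [0, π] (by evenness of f, sampling [0, π] suffices).
import numpy as np
from scipy.linalg import toeplitz

n = 200
col = np.zeros(n); col[0], col[1] = 2.0, -1.0     # stencil of 2 - 2cos(θ)
eigs = np.sort(np.linalg.eigvalsh(toeplitz(col)))
samples = np.sort(2 - 2 * np.cos(np.linspace(0, np.pi, n)))
print(np.max(np.abs(eigs - samples)))             # O(1/n), shrinking as n grows
```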
Finally, we report an observation which is useful in the following derivations.
Remark 1.
The relation $\{A_n\}_n \sim_\lambda f$ and $\Lambda(A_n) \subseteq S$ for all n imply that the range of f is a subset of the closure $\overline{S}$ of S. In particular, $\{A_n\}_n \sim_\lambda f$ with $A_n$ positive definite for all n implies that f is nonnegative definite almost everywhere (simply nonnegative almost everywhere if r = 1). The same applies when a multilevel matrix-sequence $\{A_{\boldsymbol{n}}\}_n$ is considered, and similar statements hold when singular values are taken into account.

2.4. Approximating Classes of Sequences

In this subsection, we present the notion of the approximating class of sequences and a related key result.
Definition 2
(Approximating class of sequences [27]). Let $\{A_n\}_n$ be a matrix-sequence and let $\{\{B_{n,j}\}_n\}_j$ be a class of matrix-sequences, with $A_n$ and $B_{n,j}$ of size $d_n$. We say that $\{\{B_{n,j}\}_n\}_j$ is an approximating class of sequences (a.c.s.) for $\{A_n\}_n$ if the following condition is met: for every j, there exists $n_j$ such that, for every $n \ge n_j$,
$$A_n = B_{n,j} + R_{n,j} + N_{n,j},$$
$$\mathrm{rank}(R_{n,j}) \le c(j)\, d_n \quad \text{and} \quad \|N_{n,j}\| \le \omega(j),$$
where $n_j$, $c(j)$, and $\omega(j)$ depend only on j, and
$$\lim_{j\to\infty} c(j) = \lim_{j\to\infty} \omega(j) = 0.$$
We write $\{\{B_{n,j}\}_n\}_j \xrightarrow{\text{a.c.s.}} \{A_n\}_n$ to denote that $\{\{B_{n,j}\}_n\}_j$ is an a.c.s. for $\{A_n\}_n$.
The following theorem represents the related convergence theory, and it is a powerful tool used, for example, in the construction of the GLT ∗-algebra.
Theorem 1
([27]). Let $\{A_n\}_n$, $\{B_{n,j}\}_n$, with $j, n \in \mathbb{N}$, be matrix-sequences and let $\psi, \psi_j : D \subset \mathbb{R}^d \to \mathbb{C}^{r \times r}$ be measurable functions defined on a set D with positive and finite Lebesgue measure. Suppose that:
1. $\{B_{n,j}\}_n \sim_\sigma \psi_j$ for every j;
2. $\{\{B_{n,j}\}_n\}_j \xrightarrow{\text{a.c.s.}} \{A_n\}_n$;
3. $\psi_j \to \psi$ in measure.
Then
$$\{A_n\}_n \sim_\sigma \psi.$$
Moreover, if all the involved matrices are Hermitian and the first assumption is replaced by $\{B_{n,j}\}_n \sim_\lambda \psi_j$ for every j (the other two being left unchanged), then $\{A_n\}_n \sim_\lambda \psi$.
We end this section by observing that the same definition can be given, and the corresponding results (with obvious changes) hold, when the involved matrix-sequences show a multilevel structure. In that case, n is replaced by the multi-index $\boldsymbol{n}$, uniformly in $A_{\boldsymbol{n}}$, $B_{\boldsymbol{n},j}$, $d_{\boldsymbol{n}}$.

2.5. Matrix-Sequences with Explicit or Hidden (Asymptotic) Structure

In this subsection, we introduce the three types of matrix structures that constitute the basic building blocks of the GLT ∗-algebras. To be more specific, for any positive integers d, r we consider the set of d-level r-block GLT matrix-sequences. For any such d, r, the considered set forms a ∗-algebra of matrix-sequences, which is maximal and isometrically equivalent to the maximal ∗-algebra of 2d-variate $r \times r$ matrix-valued measurable functions (with respect to the Lebesgue measure) defined canonically over $[0,1]^d \times [-\pi,\pi]^d$; see [9,10,11,12,28,29] and references therein.
The reduced version is essential when dealing with approximations of integro-differential operators (also in fractional versions) defined over general (non-Cartesian) domains. The idea was presented in [7,8] and it was exhaustively developed in [14], where the GLT symbols are again measurable functions defined over $\Omega \times [-\pi,\pi]^d$, with Ω Peano–Jordan measurable and contained in $[0,1]^d$. Also the reduced versions form maximal ∗-algebras, isometrically equivalent to the corresponding maximal ∗-algebras of measurable functions. The considered GLT ∗-algebras represent rich examples of hidden (asymptotic) structures. Their building blocks are formed by two classes of explicit algebraic structures, d-level r-block Toeplitz and sampling diagonal matrix-sequences (see Section 2.7 and Section 2.8), plus the asymptotic structures given by the zero-distributed matrix-sequences; see Section 2.6. It is worth noticing that the latter class plays the role of compact operators with respect to bounded linear operators, and in fact they form a two-sided ideal of matrix-sequences with respect to any of the GLT ∗-algebras.

2.6. Zero-Distributed Sequences

Zero-distributed sequences are defined as matrix-sequences $\{A_n\}_n$ such that $\{A_n\}_n \sim_\sigma 0$. Note that, for any $r \ge 1$, $\{A_n\}_n \sim_\sigma 0$ is equivalent to $\{A_n\}_n \sim_\sigma O_r$, where $O_r$ is the $r \times r$ zero matrix. The following theorem (see [11,30]) provides a useful characterization for detecting this type of sequence. We use the natural convention $1/\infty = 0$.
Theorem 2.
Let $\{A_n\}_n$ be a matrix-sequence, with $A_n$ of size $d_n$. Then
  • $\{A_n\}_n \sim_\sigma 0$ if and only if $A_n = R_n + N_n$ with $\mathrm{rank}(R_n)/d_n \to 0$ and $\|N_n\| \to 0$ as $n \to \infty$;
  • $\{A_n\}_n \sim_\sigma 0$ if there exists $p \in [1,\infty]$ such that $\|A_n\|_p / d_n^{1/p} \to 0$ as $n \to \infty$.
As in Section 2.4, the same definition can be given, and the corresponding result (with obvious changes) holds, when the involved matrix-sequences show a multilevel structure. In that case, n is replaced by the multi-index $\boldsymbol{n}$, uniformly in $A_{\boldsymbol{n}}$, $N_{\boldsymbol{n}}$, $R_{\boldsymbol{n}}$, $d_{\boldsymbol{n}}$.

2.7. Multilevel Block Toeplitz Matrices

Given $\boldsymbol{n} \in \mathbb{N}^d$, a matrix of the form
$$[A_{i-j}]_{i,j=1}^{\boldsymbol{n}} \in \mathbb{C}^{\nu(\boldsymbol{n})r \times \nu(\boldsymbol{n})r},$$
with blocks $A_k \in \mathbb{C}^{r \times r}$, $k \in [-(\boldsymbol{n}-\mathbf{1}), \boldsymbol{n}-\mathbf{1}]$, is called multilevel block Toeplitz or, more precisely, d-level r-block Toeplitz matrix.
Given a matrix-valued function $f : [-\pi,\pi]^d \to \mathbb{C}^{r \times r}$ belonging to $L^1([-\pi,\pi]^d)$, the $\boldsymbol{n}$-th Toeplitz matrix associated with f is defined as
$$T_{\boldsymbol{n}}(f) := [\hat{f}_{i-j}]_{i,j=1}^{\boldsymbol{n}} \in \mathbb{C}^{\nu(\boldsymbol{n})r \times \nu(\boldsymbol{n})r},$$
where
$$\hat{f}_k = \frac{1}{(2\pi)^d} \int_{[-\pi,\pi]^d} f(\theta)\, e^{-\mathrm{i}\,(k,\theta)}\, d\theta \in \mathbb{C}^{r \times r}, \qquad k \in \mathbb{Z}^d,$$
are the Fourier coefficients of f, in which $\mathrm{i}$ denotes the imaginary unit, the integrals are computed componentwise, and $(k,\theta) = k_1\theta_1 + \dots + k_d\theta_d$. Equivalently, $T_{\boldsymbol{n}}(f)$ can be expressed as
$$T_{\boldsymbol{n}}(f) = \sum_{|j_1|<n_1} \cdots \sum_{|j_d|<n_d} J_{n_1}^{(j_1)} \otimes \cdots \otimes J_{n_d}^{(j_d)} \otimes \hat{f}_{(j_1,\dots,j_d)},$$
where ⊗ denotes the Kronecker tensor product between matrices and $J_m^{(l)}$ is the matrix of order m whose (i,j) entry equals 1 if $i - j = l$ and zero otherwise.
$\{T_{\boldsymbol{n}}(f)\}_{\boldsymbol{n} \in \mathbb{N}^d}$ is the family of (multilevel block) Toeplitz matrices associated with f, which is called the generating function.
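For concreteness, the following sketch (our own illustration under the scalar assumption d = r = 1) builds $T_n(f)$ directly from numerically computed Fourier coefficients; applied to $f(\theta) = (2-2\cos\theta)^2$, used repeatedly below, it recovers the pentadiagonal stencil (1, −4, 6, −4, 1).

```python
# A sketch building T_n(f) = [f̂_{i-j}] for a real even generating function f,
# with Fourier coefficients computed by numerical quadrature.
import numpy as np
from scipy.integrate import quad
from scipy.linalg import toeplitz

def toeplitz_matrix(f, n):
    fhat = np.empty(n)
    for k in range(n):
        # f̂_k = (1/2π) ∫ f(θ) e^{-ikθ} dθ, real and even-symmetric for even real f
        fhat[k] = quad(lambda t: f(t) * np.cos(k * t), -np.pi, np.pi)[0] / (2 * np.pi)
    return toeplitz(fhat)          # symmetric: first column = first row

f = lambda t: (2 - 2 * np.cos(t)) ** 2
print(np.round(toeplitz_matrix(f, 5), 10))   # stencil (1, -4, 6, -4, 1)
```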

2.8. Block Diagonal Sampling Matrices

Given $d \ge 1$, $\boldsymbol{n} \in \mathbb{N}^d$, and a function $a : [0,1]^d \to \mathbb{C}^{r \times r}$, we define the multilevel block diagonal sampling matrix $D_{\boldsymbol{n}}(a)$ as the block diagonal matrix
$$D_{\boldsymbol{n}}(a) = \mathop{\mathrm{diag}}_{i=1,\dots,\boldsymbol{n}} a\!\left(\frac{i}{\boldsymbol{n}}\right) \in \mathbb{C}^{\nu(\boldsymbol{n})r \times \nu(\boldsymbol{n})r}.$$
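In the unilevel scalar case (d = r = 1, our simplifying assumption), the definition reduces to a one-line construction:

```python
# D_n(a) = diag(a(1/n), a(2/n), ..., a(n/n)) in the case d = r = 1.
import numpy as np

def diag_sampling(a, n):
    return np.diag(a(np.arange(1, n + 1) / n))

print(np.diag(diag_sampling(lambda x: x ** 2, 4)))   # [1/16, 1/4, 9/16, 1]
```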

2.9. The ∗-Algebra of d-Level r-Block GLT Matrix-Sequences

Let $r \ge 1$ be a fixed integer. A multilevel r-block GLT sequence, or simply a GLT sequence if we do not need to specify r, is a special multilevel r-block matrix-sequence equipped with a measurable function $\kappa : [0,1]^d \times [-\pi,\pi]^d \to \mathbb{C}^{r \times r}$, $d \ge 1$, called the symbol. The symbol is essentially unique, in the sense that if κ, ς are two symbols of the same GLT sequence, then κ = ς almost everywhere. We write $\{A_n\}_n \sim_{GLT} \kappa$ to denote that $\{A_n\}_n$ is a GLT sequence with symbol κ.
It can be proven that the set of multilevel block GLT sequences is the ∗-algebra generated by the three classes of sequences defined in Section 2.6, Section 2.7 and Section 2.8: zero-distributed, multilevel block Toeplitz, and block diagonal sampling matrix-sequences. The GLT class satisfies several algebraic and topological properties that are treated in detail in [9,10,11,12]. Here, we focus on the main operative properties, listed below, that represent a complete characterization of GLT sequences, equivalent to the full constructive definition.

GLT Axioms

  • GLT 1. If $\{A_n\}_n \sim_{GLT} \kappa$ then $\{A_n\}_n \sim_\sigma \kappa$ in the sense of Definition 1, with $t = 2d$ and $D = [0,1]^d \times [-\pi,\pi]^d$. Moreover, if each $A_n$ is Hermitian, then $\{A_n\}_n \sim_\lambda \kappa$, again in the sense of Definition 1 with $t = 2d$.
  • GLT 2. We have
    $\{T_{\boldsymbol{n}}(f)\}_n \sim_{GLT} \kappa(x,\theta) = f(\theta)$ if $f : [-\pi,\pi]^d \to \mathbb{C}^{r \times r}$ is in $L^1([-\pi,\pi]^d)$;
    $\{D_{\boldsymbol{n}}(a)\}_n \sim_{GLT} \kappa(x,\theta) = a(x)$ if $a : [0,1]^d \to \mathbb{C}^{r \times r}$ is Riemann-integrable;
    $\{Z_n\}_n \sim_{GLT} \kappa(x,\theta) = O_r$ if and only if $\{Z_n\}_n \sim_\sigma 0$.
  • GLT 3. If $\{A_n\}_n \sim_{GLT} \kappa$ and $\{B_n\}_n \sim_{GLT} \varsigma$, then:
    $\{A_n^*\}_n \sim_{GLT} \kappa^*$;
    $\{\alpha A_n + \beta B_n\}_n \sim_{GLT} \alpha\kappa + \beta\varsigma$ for all $\alpha, \beta \in \mathbb{C}$;
    $\{A_n B_n\}_n \sim_{GLT} \kappa\varsigma$;
    $\{A_n^\dagger\}_n \sim_{GLT} \kappa^{-1}$, provided that κ is invertible almost everywhere.
  • GLT 4. $\{A_n\}_n \sim_{GLT} \kappa$ if and only if there exist $\{B_{n,j}\}_n \sim_{GLT} \kappa_j$ such that
    $\{\{B_{n,j}\}_n\}_j \xrightarrow{\text{a.c.s.}} \{A_n\}_n$ and $\kappa_j \to \kappa$ in measure.
  • GLT 5. If $\{A_n\}_n \sim_{GLT} \kappa$ and $A_n = X_n + Y_n$, where
    every $X_n$ is Hermitian,
    $\|X_n\|, \|Y_n\| \le C$ for some constant C independent of n,
    $N(n)^{-1}\,\|Y_n\|_1 \to 0$, with $N(n)$ denoting the size of $A_n$,
    then $\{A_n\}_n \sim_\lambda \kappa$.
  • GLT 6. If $\{A_n\}_n \sim_{GLT} \kappa$ and each $A_n$ is Hermitian, then $\{f(A_n)\}_n \sim_{GLT} f(\kappa)$ for every continuous function $f : \mathbb{C} \to \mathbb{C}$.
Note that, by GLT 1, it is always possible to obtain the singular value distribution from the GLT symbol, while the eigenvalue distribution can be deduced only if the involved matrices are Hermitian or the related matrix-sequence is quasi-Hermitian in the sense of GLT 5.

3. Geometric Mean of GLT Matrix-Sequences

This section discusses the geometric mean of positive definite matrices, starting with the well-established case of two matrices and then considering multiple matrices. In particular we give distribution results in the case of GLT matrix-sequences.

3.1. Means of Two Matrices

The geometric mean of two positive numbers a and b is simply $\sqrt{ab}$, a well-known fact from basic arithmetic. However, extending this concept to HPD matrices introduces a number of challenges, as matrix multiplication is not commutative. The question of how to define a geometric mean for matrices in a way that preserves key properties such as congruence invariance, consistency with scalars, and symmetry was solved by ALM [17]. Their work presented an axiomatic approach to the geometric mean of two HPD matrices previously defined in Equation (1). ALM formalized the geometric mean for matrices by establishing a set of ten essential properties, known as the ALM axioms. These axioms ensure that the geometric mean behaves appropriately in the matrix setting. Here are three key properties:
1. Permutation invariance: $G(A,B) = G(B,A)$ for all $A, B \in \mathcal{P}_n$.
2. Congruence invariance: $G(M^*AM, M^*BM) = M^*G(A,B)M$ for all $A, B \in \mathcal{P}_n$ and all invertible matrices $M \in \mathbb{C}^{n \times n}$.
3. Consistency with scalars: $G(A,B) = (AB)^{1/2}$ for all commuting $A, B \in \mathcal{P}_n$ (note that $AB \in \mathcal{P}_n$ for all commuting $A, B \in \mathcal{P}_n$, because $(AB)^* = B^*A^* = BA = AB$ and AB is similar to the HPD matrix $A^{1/2}BA^{1/2} = A^{-1/2}(AB)A^{1/2}$).
When considering sequences of matrices, particularly in the framework of GLT sequences, the geometric mean operation is well-preserved under the GLT structure. If r = d = 1 and we consider two scalar unilevel GLT matrix-sequences, i.e., $\{A_n\}_n \sim_{GLT} \kappa$ and $\{B_n\}_n \sim_{GLT} \xi$, where $A_n, B_n \in \mathcal{P}_n$ are HPD matrices for every n, the matrix-sequence of their geometric means $\{G(A_n,B_n)\}_n$ also forms a scalar unilevel GLT matrix-sequence. The symbol of the resulting sequence is the geometric mean of the individual symbols κ and ξ.
Theorem 3
([11], Theorem 10.2). Let r = d = 1. Suppose $\{A_n\}_n \sim_{GLT} \kappa$ and $\{B_n\}_n \sim_{GLT} \xi$, where $A_n, B_n \in \mathcal{P}_n$ for every n. Assume that at least one between κ and ξ is nonzero almost everywhere. Then
$$\{G(A_n,B_n)\}_n \sim_{GLT} (\kappa\xi)^{1/2}$$
and
$$\{G(A_n,B_n)\}_n \sim_{\sigma,\lambda} (\kappa\xi)^{1/2}.$$
The previous result is easily extended to the case of matrix-sequences resulting from the geometric mean of two block multilevel GLT matrix-sequences, thanks to the powerful ∗-algebra structure of the considered spaces described in Section 2.9. Indeed, the following two generalizations of Theorem 3 hold.
Theorem 4.
Let r = 1 and $d \ge 1$. Suppose $\{A_{\boldsymbol{n}}\}_n \sim_{GLT} \kappa$ and $\{B_{\boldsymbol{n}}\}_n \sim_{GLT} \xi$, where $A_{\boldsymbol{n}}, B_{\boldsymbol{n}} \in \mathcal{P}_{\nu(\boldsymbol{n})}$ for every multi-index $\boldsymbol{n}$. Assume that at least one between κ and ξ is nonzero almost everywhere. Then
$$\{G(A_{\boldsymbol{n}},B_{\boldsymbol{n}})\}_n \sim_{GLT} (\kappa\xi)^{1/2}$$
and
$$\{G(A_{\boldsymbol{n}},B_{\boldsymbol{n}})\}_n \sim_{\sigma,\lambda} (\kappa\xi)^{1/2}.$$
Proof. 
Since both $A_{\boldsymbol{n}}$ and $B_{\boldsymbol{n}}$ are positive definite for every multi-index $\boldsymbol{n}$, the matrix-sequence $\{G(A_{\boldsymbol{n}},B_{\boldsymbol{n}})\}_n$ is well defined according to Formula (1), since $A_{\boldsymbol{n}}^{\pm 1/2}$ is well defined for every multi-index $\boldsymbol{n}$. According to the assumption, we start with the case where κ is nonzero almost everywhere. Hence, the matrix-sequence $\{A_{\boldsymbol{n}}^{-1}\}_n$ is a GLT matrix-sequence with GLT symbol $\kappa^{-1}$ by Axiom GLT 3, part 4. Since the square root is continuous and well defined over positive definite matrices, the matrix-sequence $\{A_{\boldsymbol{n}}^{-1/2}\}_n$ is also a GLT matrix-sequence, with GLT symbol $\kappa^{-1/2}$, by virtue of Axiom GLT 6.
Now, using GLT 3, part 3, two times, we infer that $\{A_{\boldsymbol{n}}^{-1/2} B_{\boldsymbol{n}} A_{\boldsymbol{n}}^{-1/2}\}_n$ is a GLT matrix-sequence with GLT symbol $\kappa^{-1}\xi$, where $X_{\boldsymbol{n}} = A_{\boldsymbol{n}}^{-1/2} B_{\boldsymbol{n}} A_{\boldsymbol{n}}^{-1/2}$ is positive definite because of the Sylvester inertia law. Hence, the square root of $X_{\boldsymbol{n}}$ is well defined and, by exploiting again Axiom GLT 6, we deduce that $\{(A_{\boldsymbol{n}}^{-1/2} B_{\boldsymbol{n}} A_{\boldsymbol{n}}^{-1/2})^{1/2}\}_n$ is a GLT matrix-sequence with GLT symbol $(\kappa^{-1}\xi)^{1/2}$. Finally, by exploiting Axiom GLT 3, part 4, we have that $\{A_{\boldsymbol{n}}^{1/2}\}_n$ is a GLT matrix-sequence with GLT symbol $\kappa^{1/2}$, and the application of GLT 3, part 3, two times leads to the desired conclusion
$$\{G(A_{\boldsymbol{n}},B_{\boldsymbol{n}})\}_n \sim_{GLT} (\kappa\xi)^{1/2},$$
where the latter and Axiom GLT 1 imply $\{G(A_{\boldsymbol{n}},B_{\boldsymbol{n}})\}_n \sim_{\sigma,\lambda} (\kappa\xi)^{1/2}$.
Finally, the other case where ξ is nonzero almost everywhere has the very same proof since G ( · , · ) is invariant under permutations and hence
$$G(A,B) = A^{1/2}\left(A^{-1/2} B A^{-1/2}\right)^{1/2} A^{1/2} = G(B,A) = B^{1/2}\left(B^{-1/2} A B^{-1/2}\right)^{1/2} B^{1/2},$$
so that the same steps can be repeated by exchanging $A_{\boldsymbol{n}}$ and $B_{\boldsymbol{n}}$. □
Theorem 5.
Let $r, d \ge 1$. Suppose $\{A_{\boldsymbol{n}}\}_n \sim_{GLT} \kappa$ and $\{B_{\boldsymbol{n}}\}_n \sim_{GLT} \xi$, where $A_{\boldsymbol{n}}, B_{\boldsymbol{n}} \in \mathcal{P}_{\nu(\boldsymbol{n})}$ for every multi-index $\boldsymbol{n}$. Assume that at least one between the minimal eigenvalue of κ and the minimal eigenvalue of ξ is nonzero almost everywhere. Then
$$\{G(A_{\boldsymbol{n}},B_{\boldsymbol{n}})\}_n \sim_{GLT} G(\kappa,\xi)$$
and
$$\{G(A_{\boldsymbol{n}},B_{\boldsymbol{n}})\}_n \sim_{\sigma,\lambda} G(\kappa,\xi).$$
Furthermore, $G(\kappa,\xi) = (\kappa\xi)^{1/2}$ whenever the GLT symbols κ and ξ commute.
Proof. 
The case r = 1 is already contained in Theorem 4, so we assume r > 1, i.e., a true block GLT setting. The proof is in fact a repetition of that of the previous theorem, with the only care needed in the GLT symbol part, where the multiplication is noncommutative.
Since both $A_{\boldsymbol{n}}$ and $B_{\boldsymbol{n}}$ are positive definite for every multi-index $\boldsymbol{n}$, the matrix-sequence $\{G(A_{\boldsymbol{n}},B_{\boldsymbol{n}})\}_n$ is well defined according to Formula (1), since $A_{\boldsymbol{n}}^{\pm 1/2}$ is well defined for every multi-index $\boldsymbol{n}$. According to the assumption, we start with the case where κ is invertible almost everywhere, so that $\{A_{\boldsymbol{n}}^{-1}\}_n \sim_{GLT} \kappa^{-1}$ by Axiom GLT 3, part 4, and $\{A_{\boldsymbol{n}}^{-1/2}\}_n \sim_{GLT} \kappa^{-1/2}$ thanks to Axiom GLT 6.
Now, using GLT 3, part 3, two times, we have $\{A_{\boldsymbol{n}}^{-1/2} B_{\boldsymbol{n}} A_{\boldsymbol{n}}^{-1/2}\}_n \sim_{GLT} \kappa^{-1/2}\xi\kappa^{-1/2}$, where
$$X_{\boldsymbol{n}} = A_{\boldsymbol{n}}^{-1/2} B_{\boldsymbol{n}} A_{\boldsymbol{n}}^{-1/2}$$
is positive definite because of the Sylvester inertia law. Hence, the square root of $X_{\boldsymbol{n}}$ is well defined and, by exploiting again Axiom GLT 6, we obtain $\{(A_{\boldsymbol{n}}^{-1/2} B_{\boldsymbol{n}} A_{\boldsymbol{n}}^{-1/2})^{1/2}\}_n \sim_{GLT} (\kappa^{-1/2}\xi\kappa^{-1/2})^{1/2}$. Finally, by exploiting Axiom GLT 3, part 4, on the matrix-sequence $\{A_{\boldsymbol{n}}^{-1/2}\}_n$ and using GLT 3, part 3, two times, we conclude
$$\{G(A_{\boldsymbol{n}},B_{\boldsymbol{n}})\}_n \sim_{GLT} \kappa^{1/2}\left(\kappa^{-1/2}\xi\kappa^{-1/2}\right)^{1/2}\kappa^{1/2}, \qquad (11)$$
where the symbol $\kappa^{1/2}(\kappa^{-1/2}\xi\kappa^{-1/2})^{1/2}\kappa^{1/2}$ is exactly $G(\kappa,\xi)$. Furthermore, relation (11) and Axiom GLT 1 imply $\{G(A_{\boldsymbol{n}},B_{\boldsymbol{n}})\}_n \sim_{\sigma,\lambda} G(\kappa,\xi)$, where $G(\kappa,\xi) = (\kappa\xi)^{1/2}$ whenever κ and ξ commute.
Finally, the remaining case where ξ is invertible almost everywhere has the very same proof since G ( · , · ) is invariant under permutations and hence
$$G(A,B) = A^{1/2}\left(A^{-1/2} B A^{-1/2}\right)^{1/2} A^{1/2} = G(B,A) = B^{1/2}\left(B^{-1/2} A B^{-1/2}\right)^{1/2} B^{1/2},$$
so that the same steps can be repeated by exchanging $A_{\boldsymbol{n}}$ and $B_{\boldsymbol{n}}$ and by exchanging κ and ξ. □
Remark 2.
In Theorem 5, we assume that at least one between κ and ξ is invertible almost everywhere, and the same is true in Theorem 4. The assumption is in fact only of a technical nature, allowing the use of part 4 of Axiom GLT 3 when a GLT matrix-sequence is inverted, so as to guarantee that the sequence of inverses is still of GLT nature. However, if we assume commutativity, then no inversion is required and the technical assumption is no longer needed. The latter observation is of interest, since it indicates that there is room for improving the result by somehow weakening the assumption. This will be the subject of future investigations.

3.2. Mean of More than Two Matrices

In this section, we describe the iterative method used to compute the Karcher mean for more than two HPD matrices. The Karcher mean is an extension of the geometric mean to more than two matrices and can be computed using an iterative method based on the Richardson-like iteration. As detailed earlier in the introduction (see Equation (6)), the iteration updates the approximation at each step based on the logarithmic correction term. The step-size parameter θ is dynamically computed during the iteration, ensuring that the process converges efficiently by accounting for the condition numbers of the matrices involved.
Specifically, θ is given by
$$\theta = \frac{2}{\gamma + \beta}, \qquad (13)$$
with β and γ computed as
$$[\beta, \gamma] = \left[\sum_{j=1}^{k} \frac{\log(c_j)}{c_j - 1},\ \sum_{j=1}^{k} \frac{c_j \log(c_j)}{c_j - 1}\right], \qquad (14)$$
where $c_j = \lambda_{\max}(M_j)/\lambda_{\min}(M_j)$, $M_j = G^{1/2} A_j^{-1} G^{1/2}$, and G is the current approximation of the Karcher mean.
The Richardson-like iteration can be implemented in different equivalent forms:
$$X = X - \theta\, X^{1/2}\left(\sum_{i=1}^{k}\log\left(X^{1/2} A_i^{-1} X^{1/2}\right)\right) X^{1/2}, \qquad G = X, \qquad (15)$$
$$X = X^{1/2}\left(I - \theta \sum_{i=1}^{k}\log\left(X^{1/2} A_i^{-1} X^{1/2}\right)\right) X^{1/2}, \qquad G = X, \qquad (16)$$
$$Y = Y - \theta\, Y^{1/2}\left(\sum_{i=1}^{k}\log\left(Y^{1/2} A_i Y^{1/2}\right)\right) Y^{1/2}, \qquad G = Y^{-1}, \qquad (17)$$
$$Y = Y^{1/2}\left(I - \theta \sum_{i=1}^{k}\log\left(Y^{1/2} A_i Y^{1/2}\right)\right) Y^{1/2}, \qquad G = Y^{-1}. \qquad (18)$$
Among these equivalent formulations, the first one, Equation (15), is the most practical for implementation. It avoids matrix inversions, which can introduce numerical instabilities and increase computational complexity. While formulations Equations (17) and (18) aim to reduce the number of matrix inversions, the final step requires inverting the result, which can lead to inaccuracies, especially for poorly conditioned matrices. Additionally, Equation (16) retains the simplicity of direct matrix operations without introducing unnecessary complications. A numerically more efficient approach uses Cholesky factorization to reduce the computational cost of forming matrix square roots at every step, enhancing efficiency as forming the Cholesky factor costs less than computing a full matrix square root [31].
Suppose $X_\nu = R_\nu^{T} R_\nu$ is the Cholesky decomposition of $X_\nu$, where $R_\nu$ is an upper triangular matrix. The iteration step can be rewritten as
$$X_{\nu+1} = X_\nu + \theta\, R_\nu^{T}\left(\sum_{i=1}^{k}\log\left(R_\nu^{-T} A_i R_\nu^{-1}\right)\right) R_\nu. \qquad (19)$$
In this formulation, the Cholesky factor $R_\nu$ is updated at each iteration. The condition number of the Cholesky factor $R_\nu$ in the spectral norm is the square root of the condition number of $X_\nu$, thus ensuring good numerical accuracy. For this heuristic to be effective, it is essential that $X_0$ provides a good approximation of G. Therefore, selecting $X_0$ as the cheap mean is critical. An adaptive version of this iteration has been proposed and implemented in the Matrix Means Toolbox [32].
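To make the scheme concrete, the following sketch (our own simplified implementation, not the adaptive code of the Matrix Means Toolbox [32]) realizes form (16) with the step size (13)-(14), using the current iterate in place of the limit G inside $M_j$ and the arithmetic mean as a simple substitute for the cheap mean initial guess.

```python
# A simplified sketch of the Richardson-like iteration (16) with adaptive θ.
import numpy as np
from scipy.linalg import sqrtm, logm, inv, eigvalsh

def karcher_mean(As, tol=1e-12, max_iter=200):
    X = sum(As) / len(As)                            # X_0: arithmetic mean
    for _ in range(max_iter):
        Xh = sqrtm(X)
        S = sum(logm(Xh @ inv(A) @ Xh) for A in As)  # Σ log(X^{1/2} A_i^{-1} X^{1/2})
        beta = gamma = 0.0
        for A in As:
            w = eigvalsh(Xh @ inv(A) @ Xh)           # spectrum of M_j (with G ≈ X)
            c = w[-1] / w[0]                         # condition number c_j
            q = np.log(c) / (c - 1) if c > 1 + 1e-13 else 1.0
            beta, gamma = beta + q, gamma + c * q
        theta = 2.0 / (gamma + beta)                 # θ = 2/(γ + β)
        X_new = Xh @ (np.eye(X.shape[0]) - theta * S) @ Xh   # form (16)
        if np.linalg.norm(X_new - X) <= tol * np.linalg.norm(X):
            return X_new
        X = X_new
    return X
```

For k = 2, the computed limit can be checked against the explicit ALM formula (1); when all the $A_i$ commute, the step size collapses to $\theta = 1/k$, as remarked in the introduction.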
Of course, the Richardson-like iteration is relevant for computing the Karcher mean efficiently: we also exploit its formal expression for theoretical purposes when dealing with sequences of matrices, particularly those involving GLT sequences. For the theory, we come back to relation (15) and consider the associated iteration
$$X_{\nu+1} = X_\nu - \theta\, X_\nu^{1/2}\left(\sum_{i=1}^{k}\log\left(X_\nu^{1/2} A_i^{-1} X_\nu^{1/2}\right)\right) X_\nu^{1/2}, \qquad (20)$$
with $X_0$ a given positive definite matrix. We know that $X_\nu$ converges to the geometric mean of $A_1, \dots, A_k$ as ν tends to infinity, for every fixed positive definite initial guess $X_0$.
Fix $r, d \ge 1$. Suppose now that the block multilevel matrix-sequences $\{A_{\boldsymbol{n}}^{(i)}\}_n \sim_{GLT} \kappa_i$, $i = 1, \dots, k$, are given, where $A_{\boldsymbol{n}}^{(1)}, \dots, A_{\boldsymbol{n}}^{(k)}$ are positive definite for every multi-index $\boldsymbol{n}$.
Due to the positive definiteness of the matrices $A_{\boldsymbol{n}}^{(i)}$ and because of $\{A_{\boldsymbol{n}}^{(i)}\}_n \sim_{GLT} \kappa_i$, from Axiom GLT 1 it follows that each $\kappa_i$ is nonnegative definite almost everywhere (see Remark 1).
In this setting, it is conjectured that the sequence of Karcher means $\{G(A_{\boldsymbol{n}}^{(1)}, \dots, A_{\boldsymbol{n}}^{(k)})\}_n$ forms a new GLT matrix-sequence whose symbol is the geometric mean of the individual symbols $\kappa_1, \dots, \kappa_k$, specifically $(\kappa_1 \cdots \kappa_k)^{1/k}$ if all the symbols commute and $G(\kappa_1, \dots, \kappa_k)$ in the general case. In order to attack the problem, the initial guess matrix-sequence $\{X_{\boldsymbol{n},0}\}_n$ must be of GLT type with nonnegative definite GLT symbol. In this way, thanks to (20) and using the GLT axioms as in the proof of Theorem 5, we easily deduce that $\{X_{\boldsymbol{n},\nu}\}_n$ is still a GLT matrix-sequence, with symbol $g_\nu$ converging to $G(\kappa_1, \dots, \kappa_k)$.
Finally, Theorem 1 and Axiom GLT 4 could be applied if we proved that $\{\{X_{\boldsymbol{n},\nu}\}_n\}_\nu$ is an a.c.s. for the limit sequence $\{G(A_{\boldsymbol{n}}^{(1)}, \dots, A_{\boldsymbol{n}}^{(k)})\}_n$. This could be proven using Schatten estimates like those in the second item of Theorem 2, but at the moment this is not easy, because the known convergence proofs for the Karcher iterations are all based on pointwise convergence.

4. Numerical Experiments

In this section, we present and critically analyze several selected examples, by considering matrix-sequences of geometric means of k HPD matrices for k = 2, 3. In particular, our numerics show asymptotic spectral properties of the resulting matrix-sequences in accordance with the theoretical results (and conjectures) of the previous section. We introduce a few examples in which the input matrix-sequences are either of Toeplitz type or are general r-block d-level GLT matrix-sequences, stemming from the approximation of differential operators via local methods such as finite differences, finite elements, and isogeometric analysis: in the first group we explore the geometric mean of two matrix-sequences and in the second group we consider the Karcher mean of three matrix-sequences, both in one-dimensional (1D) and two-dimensional (2D) settings, i.e., d = 1, 2 and r = 1; in the final group we deal with r-block GLT matrix-sequences with r = 2. We anticipate the strong agreement of the numerical evidence with the theoretical results in Theorems 3–5, and with the conjecture regarding the Karcher mean, even when the matrix-sizes are quite moderate. The latter is a nontrivial numerical finding, since all the theoretical results have an asymptotic spectral nature.

4.1. Example 1 (1D)

Let $A_n = T_n\big((2-2\cos\theta)^2\big)$ according to Section 2.7 and let $B_n$ be the finite difference discretization of the differential operator $\big(\alpha(x)\,u''\big)''$ on the interval (0,1), with boundary conditions $u(0) = u(1) = u'(0) = u'(1) = 0$, where α(x) is positive on (0,1). For the fourth-order boundary value problem
$$\big(\alpha(x)\,u''(x)\big)'' = f(x), \qquad u(0) = u(1) = u'(0) = u'(1) = 0,$$
we approximate the derivative $u^{(4)}(x)$ by using the second-order central FD scheme characterized by the stencil $(1, -4, 6, -4, 1)$. More specifically, for α, u smooth enough, we have $\big(\alpha(x)\,u''(x)\big)''\big|_{x=x_i}$ equal to
$$\frac{\alpha_{i-1}u_{i-2} - 2(\alpha_{i-1}+\alpha_i)u_{i-1} + (\alpha_{i-1}+4\alpha_i+\alpha_{i+1})u_i - 2(\alpha_{i+1}+\alpha_i)u_{i+1} + \alpha_{i+1}u_{i+2}}{h^4} + O(h^2),$$
for all $i = 2, 3, \dots, n+1$; here, $h = \frac{1}{n+3}$ and $x_i = i\,h$ for $i = 0, 1, \dots, n+3$.
By taking into account the homogeneous boundary conditions and by neglecting the $O(h^2)$ approximation error, we approximate the nodal value $u(x_i)$ with the value $u_i$ for $i = 0, 1, \dots, n+3$, where $u_0 = u_1 = u_{n+2} = u_{n+3} = 0$ and $u = (u_2, \dots, u_{n+1})^T$ is the solution of the linear system
$$\alpha_{i-1}u_{i-2} - 2(\alpha_{i-1}+\alpha_i)u_{i-1} + (\alpha_{i-1}+4\alpha_i+\alpha_{i+1})u_i - 2(\alpha_{i+1}+\alpha_i)u_{i+1} + \alpha_{i+1}u_{i+2} = h^4 f(x_i),$$
for all $i = 2, 3, \dots, n+1$.
The structure of the resulting matrix B n = B n ( α ) is as reported below
$$B_n = \begin{bmatrix}
\alpha_1{+}4\alpha_2{+}\alpha_3 & -2(\alpha_2{+}\alpha_3) & \alpha_3 & & & \\
-2(\alpha_2{+}\alpha_3) & \alpha_2{+}4\alpha_3{+}\alpha_4 & -2(\alpha_3{+}\alpha_4) & \alpha_4 & & \\
\alpha_3 & -2(\alpha_3{+}\alpha_4) & \alpha_3{+}4\alpha_4{+}\alpha_5 & -2(\alpha_4{+}\alpha_5) & \alpha_5 & \\
& \ddots & \ddots & \ddots & \ddots & \ddots \\
& \alpha_{n-2} & -2(\alpha_{n-2}{+}\alpha_{n-1}) & \alpha_{n-2}{+}4\alpha_{n-1}{+}\alpha_n & -2(\alpha_{n-1}{+}\alpha_n) & \alpha_n \\
& & \alpha_{n-1} & -2(\alpha_{n-1}{+}\alpha_n) & \alpha_{n-1}{+}4\alpha_n{+}\alpha_{n+1} & -2(\alpha_n{+}\alpha_{n+1}) \\
& & & \alpha_n & -2(\alpha_n{+}\alpha_{n+1}) & \alpha_n{+}4\alpha_{n+1}{+}\alpha_{n+2}
\end{bmatrix}. \qquad (21)$$
Looking at $B_n$ in (21), we observe that it can be written as
$$B_n = B_n(\alpha) = D_n^{+} K_n^{+} + D_n K_n + D_n^{-} K_n^{-}, \qquad (22)$$
with
$$K_n^{+} = \begin{bmatrix} 1 & -2 & 1 & & \\ & 1 & -2 & 1 & \\ & & \ddots & \ddots & \ddots \\ & & & 1 & -2 \\ & & & & 1 \end{bmatrix} = T_n\big(1 - 2e^{-\mathrm{i}\theta} + e^{-2\mathrm{i}\theta}\big),$$
$$K_n = \begin{bmatrix} 4 & -2 & & & \\ -2 & 4 & -2 & & \\ & \ddots & \ddots & \ddots & \\ & & -2 & 4 & -2 \\ & & & -2 & 4 \end{bmatrix} = T_n\big(4 - 2e^{\mathrm{i}\theta} - 2e^{-\mathrm{i}\theta}\big),$$
$$K_n^{-} = \begin{bmatrix} 1 & & & & \\ -2 & 1 & & & \\ 1 & -2 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 1 & -2 & 1 \end{bmatrix} = T_n\big(1 - 2e^{\mathrm{i}\theta} + e^{2\mathrm{i}\theta}\big),$$
$$D_n^{+} = \mathop{\mathrm{diag}}_{i=1,\dots,n} \alpha_{i+2} = \mathop{\mathrm{diag}}_{i=1,\dots,n} \alpha(x_{i+2}), \qquad D_n = \mathop{\mathrm{diag}}_{i=1,\dots,n} \alpha_{i+1} = \mathop{\mathrm{diag}}_{i=1,\dots,n} \alpha(x_{i+1}), \qquad D_n^{-} = \mathop{\mathrm{diag}}_{i=1,\dots,n} \alpha_i = \mathop{\mathrm{diag}}_{i=1,\dots,n} \alpha(x_i).$$
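The decomposition (22) is also a convenient recipe for assembling $B_n(\alpha)$ in code. The following sketch (our own, with α(x) = x as in the experiments below) builds the three Toeplitz factors and the three diagonal sampling matrices and, as a sanity check with constant α, recovers $T_n((2-2\cos\theta)^2)$.

```python
# Assembling B_n = D_n^+ K_n^+ + D_n K_n + D_n^- K_n^- as in (21)-(22).
import numpy as np
from scipy.linalg import toeplitz

def B(n, alpha):
    h = 1.0 / (n + 3)
    x = np.arange(n + 4) * h                        # grid x_i = i h
    Kp = toeplitz(np.r_[1.0, np.zeros(n - 1)],      # T_n(1 - 2e^{-iθ} + e^{-2iθ})
                  np.r_[1.0, -2, 1, np.zeros(n - 3)])
    K  = toeplitz(np.r_[4.0, -2, np.zeros(n - 2)])  # T_n(4 - 4cosθ)
    Km = Kp.T                                       # T_n(1 - 2e^{iθ} + e^{2iθ})
    Dp = np.diag(alpha(x[3:n + 3]))                 # diag α(x_{i+2})
    D  = np.diag(alpha(x[2:n + 2]))                 # diag α(x_{i+1})
    Dm = np.diag(alpha(x[1:n + 1]))                 # diag α(x_i)
    return Dp @ Kp + D @ K + Dm @ Km

# sanity check: constant α gives the pentadiagonal T_n((2-2cosθ)^2)
print(np.allclose(B(8, lambda x: np.ones_like(x)),
                  toeplitz(np.r_[6.0, -4, 1, np.zeros(5)])))   # True
```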
It is easy to check that the grids $G_n^{+} = \{x_{i+2}\}_{i=1,\dots,n}$, $G_n = \{x_{i+1}\}_{i=1,\dots,n}$, and $G_n^{-} = \{x_i\}_{i=1,\dots,n}$ are asymptotically uniform in [0,1], according to the notion given in [33]. In fact, for $G_n^{+} = \{x_{i+2}\}_{i=1,\dots,n}$ we have
$$\max_{i=1,\dots,n}\left|x_{i+2} - \frac{i}{n}\right| = \max_{i=1,\dots,n}\left|\frac{i+2}{n+3} - \frac{i}{n}\right| = \max_{i=1,\dots,n}\frac{|n(i+2) - i(n+3)|}{n(n+3)} = \max_{i=1,\dots,n}\frac{|2n-3i|}{n(n+3)} \le \frac{2}{n+3} \to 0.$$
For $G_n = \{x_{i+1}\}_{i=1,\dots,n}$ we have
$$\max_{i=1,\dots,n}\left|x_{i+1} - \frac{i}{n}\right| = \max_{i=1,\dots,n}\left|\frac{i+1}{n+3} - \frac{i}{n}\right| = \max_{i=1,\dots,n}\frac{|n-3i|}{n(n+3)} \le \frac{2}{n+3} \to 0.$$
For $G_n^{-} = \{x_i\}_{i=1,\dots,n}$ we have
$$\max_{i=1,\dots,n}\left|x_i - \frac{i}{n}\right| = \max_{i=1,\dots,n}\left|\frac{i}{n+3} - \frac{i}{n}\right| = \max_{i=1,\dots,n}\frac{3i}{n(n+3)} = \frac{3}{n+3} \to 0.$$
Hence, by GLT 2, part 2, and by [33], we deduce that $\{D_n^{+}\}_n \sim_{GLT} \alpha(x)$, $\{D_n\}_n \sim_{GLT} \alpha(x)$, and $\{D_n^{-}\}_n \sim_{GLT} \alpha(x)$. In conclusion, by invoking GLT 2–GLT 3, we infer
$$\{B_n\}_n \sim_{GLT} \alpha(x)\big(1 - 2e^{-\mathrm{i}\theta} + e^{-2\mathrm{i}\theta}\big) + \alpha(x)\big(4 - 2e^{\mathrm{i}\theta} - 2e^{-\mathrm{i}\theta}\big) + \alpha(x)\big(1 - 2e^{\mathrm{i}\theta} + e^{2\mathrm{i}\theta}\big) = \alpha(x)\,(2-2\cos\theta)^2.$$

Eigenvalue Distribution

We begin by numerically verifying the eigenvalue distribution of the matrix-sequence $\{G(A_n,B_n)\}_n$ with respect to its GLT symbol $(\kappa\xi)^{1/2}$, according to Theorem 3, with $A_n = T_n((2-2\cos\theta)^2)$, $B_n = B_n(\alpha)$ as in (21) with α(x) = x, $\kappa(x,\theta) = (2-2\cos\theta)^2$, and $\xi(x,\theta) = \alpha(x)(2-2\cos\theta)^2$. In Figure 1, we compare the eigenvalues of the geometric mean with a uniform sampling of the symbol. It is evident that, as n increases, the symbol provides a better and better approximation of the eigenvalues; a small computational sketch is reported below. Similar results for the two-dimensional case are shown in Section 4.2, Figure 2.
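A minimal sketch of this comparison (our own, reusing alm_mean and B(n, α) from the sketches above) reads:

```python
# Eigenvalues of G(A_n, B_n) versus a uniform sampling of (κξ)^{1/2},
# i.e., of sqrt(x)·(2 - 2cosθ)^2 over [0,1] × [0,π] (cf. Figure 1).
import numpy as np
from scipy.linalg import toeplitz

n = 100
An = toeplitz(np.r_[6.0, -4, 1, np.zeros(n - 3)])   # T_n((2-2cosθ)^2)
G = np.real(alm_mean(An, B(n, lambda x: x)))        # drop roundoff imaginary parts
eigs = np.sort(np.linalg.eigvalsh((G + G.T) / 2))

xg, tg = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, np.pi, 10))
ref = np.sort((np.sqrt(xg) * (2 - 2 * np.cos(tg)) ** 2).ravel())
print(np.max(np.abs(eigs - ref)))   # the two sorted sequences nearly overlap
```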

4.2. Example 2 (2D)

Let $A_{\boldsymbol{n}} = T_{\boldsymbol{n}}(F(\theta_1,\theta_2))$ according to the notation in Section 2.7 with d = 2, and let $B_{\boldsymbol{n}}$ be the finite difference discretization of the differential operator
$$\frac{\partial^2}{\partial x^2}\left(a(x,y)\,\frac{\partial^2}{\partial x^2}\right) + \frac{\partial^2}{\partial y^2}\left(b(x,y)\,\frac{\partial^2}{\partial y^2}\right),$$
on the open domain $\Omega = (0,1)^2$, with a(x,y), b(x,y) nonnegative on the closure of Ω, with homogeneous Dirichlet boundary conditions on u(x,y) and zero normal derivatives on ∂Ω.
Regarding $A_{\boldsymbol{n}}$, the generating function $F(\theta_1,\theta_2)$ is given by
$$F(\theta_1,\theta_2) = f(\theta_1) + f(\theta_2),$$
with $f(\theta) = (2-2\cos\theta)^2$.
Thus, we have
$$A_{\boldsymbol{n}} = T_{\boldsymbol{n}}\big(f(\theta_1) + f(\theta_2)\big) = T_{\boldsymbol{n}}\big(8 + 4\cos^2(\theta_1) + 4\cos^2(\theta_2) - 8\cos(\theta_1) - 8\cos(\theta_2)\big).$$
As in the one-dimensional setting in Section 4.1, we apply the second-order central finite difference scheme separately in the x- and y-directions, in perfect analogy with the 1D case, and take $a(x,y) = \alpha(x)$, $b(x,y) = \alpha(y)$. Choosing $\alpha(t) = t$, the related problem is semielliptic, but it is of separable nature. This separable nature is reflected algebraically in a tensor decomposition of the whole approximation, and in fact the resulting global matrix is given by the Kronecker sum
$$B_{\boldsymbol{n}} = B_{(n_1,n_2)} = B_{n_1}(\alpha) \otimes I_{n_2} + I_{n_1} \otimes B_{n_2}(\alpha),$$
where $I_n$ denotes the identity matrix of size n and $B_n(\alpha)$ is exactly the structure displayed in (21). By exploiting the GLT analysis in the one-dimensional case, the symbol for the two-dimensional matrix-sequence can be derived similarly. More precisely, we have
$$\{B_{\boldsymbol{n}}\}_n \sim_{GLT} \alpha(x)\,(2-2\cos\theta_1)^2 + \alpha(y)\,(2-2\cos\theta_2)^2.$$
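Computationally, the Kronecker sum structure makes the 2D assembly a two-line task. The sketch below (our own, reusing the 1D assembly B(n, α) from Example 1) builds the matrix for moderate $n_1$, $n_2$:

```python
# 2D matrix as a Kronecker sum of 1D discretizations, with α(t) = t.
import numpy as np

def B2d(n1, n2, alpha):
    return (np.kron(B(n1, alpha), np.eye(n2)) +
            np.kron(np.eye(n1), B(n2, alpha)))

Bn = B2d(20, 20, lambda t: t)  # size 400; symbol α(x)(2-2cosθ1)^2 + α(y)(2-2cosθ2)^2
```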
As shown in Figure 2, the agreement is remarkable even for the largest eigenvalues for which the absolute discrepancy is higher.

4.3. Example 3 (1D)

Let $A_n^{(1)} = T_n(3 + 2\cos\theta)$, $A_n^{(2)} = D_n(a)$, and $A_n^{(3)} = D_n(a)\,T_n(4 - 2\cos\theta)\,D_n(a)$, where $T_n(\cdot)$ is the Toeplitz operator for d = 1, as in Section 2.7, and $D_n(a)$ is the diagonal matrix generated by the continuous function $a(x) = x^2$, according to the notation in Section 2.8.

Eigenvalue Distribution

First, we aim to verify the eigenvalue distribution for more than two matrices using the Karcher mean. In Figure 3, we compare the eigenvalues of the Karcher mean with a uniform sampling of the resulting limit GLT symbol. It can be observed that the symbol provides a reliable approximation of the eigenvalues and, as n increases, the agreement improves, in accordance with the asymptotic nature of the spectral distribution. Similar results for the two-dimensional case are shown in Section 4.4, Figure 4. Both types of results corroborate the conjecture on the GLT nature of the limit matrix-sequence of the Karcher means, as discussed at the end of Section 3.2.

4.4. Example 4 (2D)

Consider the matrix $A_{\boldsymbol{n}}^{(1)} = T_{\boldsymbol{n}}(F(\theta_1,\theta_2))$, where $T_{\boldsymbol{n}}(\cdot)$ is the Toeplitz operator as in Section 2.7 with d = 2, and the function $F(\theta_1,\theta_2)$ is defined as $F(\theta_1,\theta_2) = f(\theta_1) + f(\theta_2)$, with $f(\theta) = 3 + 2\cos\theta$, resulting in $A_{\boldsymbol{n}}^{(1)} = T_{\boldsymbol{n}}(6 + 2\cos\theta_1 + 2\cos\theta_2)$.
Also, consider the diagonal matrix $A_{\boldsymbol{n}}^{(2)} = D_{\boldsymbol{n}}(a)$, where, according to the notation in Section 2.8, $D_{\boldsymbol{n}}(a)$ is the diagonal sampling matrix generated by the continuous function $a(x,y) = x^2 + y^2$, which is positive on the domain $(0,1)^2$.
We take the matrix $A_{\boldsymbol{n}}^{(3)} = D_{\boldsymbol{n}}(b)\,T_{\boldsymbol{n}}(G(\theta_1,\theta_2))\,D_{\boldsymbol{n}}(b)$, where $b(x,y) = 1/x + 1/y$ is positive and unbounded on the domain $(0,1)^2$, and where the generating function is $G(\theta_1,\theta_2) = f(\theta_1) + f(\theta_2)$, with $f(\theta) = 4 - 2\cos\theta$, so that $T_{\boldsymbol{n}}(G) = T_{\boldsymbol{n}}(8 - 2\cos\theta_1 - 2\cos\theta_2)$.
Also in the current example, the agreement between the limit GLT symbol and the displayed eigenvalues is remarkably good, thus supporting the conjecture on the Karcher means reported in the final part of Section 3.2.

4.5. Galerkin Discretization of the Laplacian Eigenvalue Problem

The one-dimensional Laplacian eigenvalue problem is given by the differential equation
$$-u_j''(x) = \lambda_j\,u_j(x), \qquad x \in (0,1),$$
with Dirichlet boundary conditions:
$$u_j(0) = u_j(1) = 0.$$
The goal is to find the eigenvalues $\lambda_j \in \mathbb{R}_+$ and the corresponding eigenfunctions $u_j \in H_0^1([0,1])$, for $j = 1, 2, \dots$, where $H_0^1([0,1])$ is the standard Sobolev space of $L^2$ functions with $L^2$ derivatives, vanishing at the boundary.

4.5.1. Weak Formulation

To derive the weak formulation, we multiply both sides of the differential equation by a test function $v \in H_0^1([0,1])$ and integrate over the interval [0,1], i.e., $-\int_0^1 u_j''(x)\,v(x)\,dx = \lambda_j \int_0^1 u_j(x)\,v(x)\,dx$. Using integration by parts on the left-hand side and noting that the boundary terms vanish, we deduce $-\int_0^1 u_j''(x)\,v(x)\,dx = \int_0^1 u_j'(x)\,v'(x)\,dx$, so that the weak form becomes
$$\int_0^1 u_j'(x)\,v'(x)\,dx = \lambda_j \int_0^1 u_j(x)\,v(x)\,dx,$$
for every test function $v \in H_0^1([0,1])$. The latter is rewritten compactly as
$$a(u_j, v) = \lambda_j\,(u_j, v),$$
where the bilinear form a ( u j , v ) and the L 2 inner product ( u j , v ) are defined as
$$a(u_j,v) := \int_0^1 u_j'(x)\,v'(x)\,dx, \qquad (u_j,v) := \int_0^1 u_j(x)\,v(x)\,dx.$$

4.5.2. Galerkin Approximation

The weak formulation allows us to use the Galerkin method to approximate the solution. Let $W_n = \mathrm{span}\{\phi_1, \dots, \phi_{N_n}\}$ be a finite-dimensional subspace of $H_0^1([0,1])$. The weak problem now becomes: find approximate eigenvalues $\lambda_{j,n} \in \mathbb{R}_+$ and eigenfunctions $u_{j,n} \in W_n$, for $j = 1, 2, \dots, N_n$, such that, for all $v_n \in W_n$, we have $a(u_{j,n}, v_n) = \lambda_{j,n}\,(u_{j,n}, v_n)$. By expanding $u_{j,n}$ and $v_n$ in terms of the basis functions $\{\phi_i\}$, we obtain $u_{j,n} = \sum_{i=1}^{N_n} c_i\,\phi_i$, $v_n = \sum_{k=1}^{N_n} d_k\,\phi_k$, and substituting into the bilinear forms, the generalized eigenvalue problem
$$K_n\,c_j = \lambda_{j,n}\,M_n\,c_j$$
is obtained, where the stiffness matrix $K_n$ and the mass matrix $M_n$ are defined as
$$K_n = \big[a(\phi_j,\phi_i)\big]_{i,j=1}^{N_n} = \left[\int_0^1 \phi_j'(x)\,\phi_i'(x)\,dx\right]_{i,j=1}^{N_n},$$
$$M_n = \big[(\phi_j,\phi_i)\big]_{i,j=1}^{N_n} = \left[\int_0^1 \phi_j(x)\,\phi_i(x)\,dx\right]_{i,j=1}^{N_n}.$$
Both K n and M n are symmetric and positive definite, due to the coercive character of the underlying bilinear forms.

4.6. Quadratic $C^0$ B-Spline Discretization

In the quadratic $C^0$ B-spline discretization of the one-dimensional Laplacian eigenvalue problem, the basis functions $\phi_1, \dots, \phi_{N_n}$ are chosen as B-splines of degree 2 defined on a uniform mesh with step size $\frac{1}{n}$. The basis functions are explicitly constructed on the knot sequence $\{0, 0, 0, \frac{1}{n}, \frac{1}{n}, \frac{2}{n}, \frac{2}{n}, \dots, \frac{n-1}{n}, \frac{n-1}{n}, 1, 1, 1\}$ (excluding the first and last B-splines, which do not vanish on the boundary of [0,1]); see [13]. The resulting normalized stiffness matrix $\frac{1}{n}K_n$ and mass matrix $n M_n$ are banded; their structure is described through the generating functions below.
The stiffness matrix $\frac{1}{n}K_n$ and the mass matrix $n M_n$ contain, as principal submatrices, the unilevel 2-block Toeplitz matrices $T_{n-1}(f)$ and $T_{n-1}(h)$, according to the notation in Section 2.7 with d = 1 and r = 2, where the generating functions f(θ) and h(θ) are given by
$$f(\theta) := \frac{1}{3}\left(\begin{bmatrix} 0 & -2 \\ 0 & -2 \end{bmatrix} e^{\mathrm{i}\theta} + \begin{bmatrix} 4 & -2 \\ -2 & 8 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ -2 & -2 \end{bmatrix} e^{-\mathrm{i}\theta}\right),$$
$$f(\theta) = \frac{1}{3}\begin{bmatrix} 4 & -2 - 2e^{\mathrm{i}\theta} \\ -2 - 2e^{-\mathrm{i}\theta} & 8 - 4\cos\theta \end{bmatrix},$$
and
$$h(\theta) := \frac{1}{30}\left(\begin{bmatrix} 0 & 3 \\ 0 & 1 \end{bmatrix} e^{\mathrm{i}\theta} + \begin{bmatrix} 4 & 3 \\ 3 & 12 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 3 & 1 \end{bmatrix} e^{-\mathrm{i}\theta}\right),$$
$$h(\theta) = \frac{1}{30}\begin{bmatrix} 4 & 3 + 3e^{\mathrm{i}\theta} \\ 3 + 3e^{-\mathrm{i}\theta} & 12 + 2\cos\theta \end{bmatrix}.$$
Since $\{T_n(g)\}_n \sim_{GLT} g$ for any Lebesgue integrable generating function g (Axiom GLT 2, part 1), the theory of GLT sequences and a basic use of the extradimensional approach [34,35] lead to
$$\left\{\tfrac{1}{n}K_n\right\}_n \sim_{GLT} f(\theta),$$
$$\{n M_n\}_n \sim_{GLT} h(\theta).$$
By Axiom GLT 3, a linear combination of two GLT sequences is again a GLT sequence, with the symbol being the corresponding linear combination of their symbols. The matrix-sequence $\{L_n\}_n$ with
$$L_n := \frac{1}{n}K_n + n M_n$$
is a linear combination of the sequences $\{\frac{1}{n}K_n\}_n$ and $\{n M_n\}_n$. Consequently,
$$\{L_n\}_n \sim_{GLT} f(\theta) + h(\theta) = e(\theta),$$
where f ( θ ) and h ( θ ) are the symbols of the stiffness matrix 1 n K n and the mass matrix n M n , respectively. The numerical evidence is convincing; furthermore, from the related Figure 5, we observe that there exist two branches of the spectrum in accordance with Theorem 5 with r = 2 .
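The two spectral branches can be visualized with a small sketch (our own, comparing the eigenvalues of the 2-block Toeplitz matrix $T_n(e)$ with the sampled eigenvalue functions $\lambda_1(e(\theta))$, $\lambda_2(e(\theta))$; the boundary corrections distinguishing $L_n$ from $T_n(e)$ affect only O(1) outliers).

```python
# Eigenvalues of the 2-block Toeplitz matrix T_n(e), e = f + h, versus the
# sampled eigenvalue branches λ_1(e(θ)), λ_2(e(θ)) (cf. Figure 5).
import numpy as np

ehat = {  # Fourier blocks of e = f + h (quadratic C0 case)
    1:  np.array([[0, -2], [0, -2]]) / 3 + np.array([[0, 3], [0, 1]]) / 30,
    0:  np.array([[4, -2], [-2, 8]]) / 3 + np.array([[4, 3], [3, 12]]) / 30,
    -1: np.array([[0, 0], [-2, -2]]) / 3 + np.array([[0, 0], [3, 1]]) / 30,
}

def block_toeplitz(blocks, n, r=2):
    T = np.zeros((n * r, n * r))
    for k, Ek in blocks.items():
        for i in range(n):
            if 0 <= i - k < n:                  # block T_{i,j} = ê_{i-j}
                T[i*r:(i+1)*r, (i-k)*r:(i-k+1)*r] = Ek
    return T

n = 100
eigs = np.sort(np.linalg.eigvalsh(block_toeplitz(ehat, n)))
branches = np.sort(np.concatenate(
    [np.linalg.eigvalsh(sum(Ek * np.exp(1j * k * t) for k, Ek in ehat.items()))
     for t in np.linspace(-np.pi, np.pi, n)]))
# the 2n sorted eigenvalues follow the two branches λ_1,2(e(θ))
```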

4.7. Cubic $C^1$ B-Spline Discretization

In the cubic $C^1$ B-spline discretization on a uniform mesh with step size $\frac{1}{n}$, the basis functions $\phi_1, \dots, \phi_{N_n}$ are chosen as B-splines of degree 3 with $C^1$ smoothness at the knots; see [13]. The resulting normalized stiffness matrix $\frac{1}{n}K_n$ and mass matrix $n M_n$ are again banded, with the 2-block Toeplitz structure described below.
According to the notation in Section 2.7 with d = 1 and r = 2, the stiffness matrix $\frac{1}{n}K_n$ and the mass matrix $n M_n$ contain, as principal submatrices, the unilevel 2-block Toeplitz matrices $T_{n-1}(f)$ and $T_{n-1}(h)$, where the generating functions f(θ) and h(θ) are $2 \times 2$ matrix-valued functions given by
$$f(\theta) := \frac{1}{40}\left(\begin{bmatrix} -15 & -15 \\ -3 & -15 \end{bmatrix} e^{\mathrm{i}\theta} + \begin{bmatrix} 48 & 0 \\ 0 & 48 \end{bmatrix} + \begin{bmatrix} -15 & -3 \\ -15 & -15 \end{bmatrix} e^{-\mathrm{i}\theta}\right),$$
$$f(\theta) = \frac{1}{40}\begin{bmatrix} 48 - 30\cos\theta & -15e^{\mathrm{i}\theta} - 3e^{-\mathrm{i}\theta} \\ -3e^{\mathrm{i}\theta} - 15e^{-\mathrm{i}\theta} & 48 - 30\cos\theta \end{bmatrix},$$
and
$$h(\theta) := \frac{1}{560}\left(\begin{bmatrix} 9 & 53 \\ 1 & 9 \end{bmatrix} e^{\mathrm{i}\theta} + \begin{bmatrix} 128 & 80 \\ 80 & 128 \end{bmatrix} + \begin{bmatrix} 9 & 1 \\ 53 & 9 \end{bmatrix} e^{-\mathrm{i}\theta}\right),$$
$$h(\theta) = \frac{1}{560}\begin{bmatrix} 128 + 18\cos\theta & 80 + 53e^{\mathrm{i}\theta} + e^{-\mathrm{i}\theta} \\ 80 + e^{\mathrm{i}\theta} + 53e^{-\mathrm{i}\theta} & 128 + 18\cos\theta \end{bmatrix}.$$
As discussed in Section 4.6, the same reasoning applies to the matrix-sequence { L n } n with
$$L_n := \frac{1}{n}K_n + n M_n,$$
which results in the GLT sequence symbol e ( θ ) = f ( θ ) + h ( θ ) . The numerical evidence is again strong and in fact we can see from the related Figure 6 that there exist two branches of the spectrum in accordance with Theorem 5 with r = 2 .

4.8. Minimal Eigenvalues and Conditioning

In the last part of the current numerical section, we consider the problem of understanding the extremal behavior of the spectrum and of the conditioning of $\{G(A_n^{(1)}, A_n^{(2)}, A_n^{(3)})\}_n$ as a function of the analytical features of the corresponding GLT symbol. The idea is borrowed from the literature, where we find papers dealing with the extremal eigenvalues in a Toeplitz setting [36,37,38,39], in an r-block Toeplitz setting [40,41], and in a differential setting [42,43], including multilevel cases with d > 1. Here, we restrict our attention to the unilevel scalar setting with d = r = 1, and we consider again the examples in Section 4.1 and Section 4.3.

4.8.1. Example 1 (1D): Minimal Eigenvalue

We consider the example in Section 4.1. The generating function $\kappa(x,\theta) = (2 - 2\cos\theta)^2$ of $A_n$ has a unique zero of order 4 at $\theta = 0$, while the matrix-sequence $\{B_n\}_n$ has the GLT symbol $\xi(x,\theta) = x(2 - 2\cos\theta)^2$, with a zero of order 1 at $x = 0$ combined with a zero of order 4 at $\theta = 0$. According to the theory, the minimal eigenvalue of $A_n$ is positive and tends to zero as $n^{-4}$ (see [36,37,38,39]). On the other hand, according to [42,43], the minimal eigenvalue of $B_n$ is also positive and tends to zero as $n^{-4}$. If we consider the geometric mean $(\kappa\xi)^{1/2}$ of the two symbols, we deduce that it again has zeros of order at most 4: as a consequence, we heuristically expect that the minimal eigenvalue of $X_n = G(A_n, B_n)$ is positive and tends to zero as $n^{-4}$, as is perfectly verified in Table 1, with $\alpha_j$ tending to 4. The procedure is as follows (a code sketch is reported after the list):
  • Set $X_n = G(A_n, B_n)$;
  • take $n_j = 40 \cdot 2^j$, $j = 0, 1, 2, 3$;
  • compute $\tau_j = \lambda_{\min}(X_{n_j})$, $j = 0, 1, 2, 3$;
  • compute $\alpha_j = \log_2 \frac{\tau_j}{\tau_{j+1}}$, $j = 0, 1, 2$.
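As illustrative stand-ins (our assumption, not the paper's exact construction, which is given in Section 4.1 and may differ in boundary details), we take $A_n = T_n\big((2-2\cos\theta)^2\big)$ and $B_n = D_n^{1/2} T_n\big((2-2\cos\theta)^2\big) D_n^{1/2}$ with $D_n = \mathrm{diag}_j(j/n)$, which match the symbols $\kappa$ and $\xi$ above; the two-matrix mean is computed via the ALM formula $G(A,B) = A^{1/2}\big(A^{-1/2} B A^{-1/2}\big)^{1/2} A^{1/2}$.

```python
# Sketch of the minimal-eigenvalue test of Section 4.8.1 with illustrative
# stand-ins for A_n and B_n (the exact constructions are in Section 4.1).
import numpy as np
from scipy.linalg import toeplitz, sqrtm, inv, eigvalsh

def alm_mean(A, B):
    # Two-matrix ALM geometric mean: A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}.
    R = sqrtm(A)
    Rinv = inv(R)
    return R @ sqrtm(Rinv @ B @ Rinv) @ R

taus = []
for n in [40, 80, 160, 320]:
    col = np.zeros(n)
    col[:3] = [6.0, -4.0, 1.0]         # Fourier coefficients of (2-2cos)^2
    A = toeplitz(col)                  # symbol kappa = (2-2cos(theta))^2
    D = np.diag(np.sqrt(np.arange(1, n + 1) / n))
    B = D @ A @ D                      # symbol xi = x (2-2cos(theta))^2
    taus.append(eigvalsh(alm_mean(A, B)).min())

alphas = [np.log2(taus[j] / taus[j + 1]) for j in range(3)]
print(alphas)                          # expected to approach 4, as in Table 1
```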

4.8.2. Example 3 (1D): Minimal Eigenvalue

We consider the example in Section 4.3. The generating function $\kappa_1$ of $A_n^{(1)}$ is strictly positive, the generating function $\kappa_2(x,\theta) = x^2$ of $A_n^{(2)}$ has a unique zero of order 2 at $x = 0$, while the matrix-sequence $\{A_n^{(3)}\}_n$ has the GLT symbol $\kappa_3(x,\theta) = x^4(2 - 2\cos\theta)^4$, with a zero of order 4 at $x = 0$ combined with a zero of order 4 at $\theta = 0$. According to the theory, the minimal eigenvalue of $A_n^{(1)}$ is positive and tends to $1 = \min \kappa_1$ as $n \to \infty$ (see [36,37,38,39]). On the other hand, according to [42,43], the minimal eigenvalue of $A_n^{(3)}$ is positive and tends to zero as $n^{-4}$, while for $A_n^{(2)}$ it is trivial to check that the minimal eigenvalue tends to zero as $n^{-2}$. If we consider the geometric mean $(\kappa_1\kappa_2\kappa_3)^{1/3}$ of the three symbols, we observe that it has zeros of order at most 2: hence, we heuristically expect that the minimal eigenvalue of $X_n = G(A_n^{(1)}, A_n^{(2)}, A_n^{(3)})$ is positive and tends to zero as $n^{-2}$, as is perfectly verified in Table 2, with $\alpha_j$ tending to 2. The same procedure as in Section 4.8.1 is used (a sketch follows the list):
  • Set $X_n = G(A_n^{(1)}, A_n^{(2)}, A_n^{(3)})$;
  • take $n_j = 40 \cdot 2^j$, $j = 0, 1, 2, 3$;
  • compute $\tau_j = \lambda_{\min}(X_{n_j})$, $j = 0, 1, 2, 3$;
  • compute $\alpha_j = \log_2 \frac{\tau_j}{\tau_{j+1}}$, $j = 0, 1, 2$.
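The same recipe applies with three matrices. Reusing the karcher_mean sketch reported after Section 4.7, and adopting hypothetical stand-ins compatible with the symbols of Section 4.3 (we take $\kappa_1(\theta) = 2 - \cos\theta$, which is strictly positive with minimum 1; the exact matrices used in Section 4.3 may differ), one can reproduce the trend $\alpha_j \to 2$ of Table 2.

```python
# Hedged sketch for Example 3, reusing karcher_mean (see the Section 4.7
# sketch). The matrices are hypothetical stand-ins consistent with the
# symbols kappa_1 = 2 - cos(theta), kappa_2 = x^2,
# kappa_3 = x^4 (2 - 2cos(theta))^4.
import numpy as np
from scipy.linalg import toeplitz, eigvalsh

taus = []
for n in [40, 80, 160, 320]:
    x = np.arange(1, n + 1) / n
    c1 = np.zeros(n); c1[:2] = [2.0, -0.5]             # 2 - cos(theta)
    A1 = toeplitz(c1)
    A2 = np.diag(x ** 2)                               # symbol x^2
    c3 = np.zeros(n); c3[:5] = [70.0, -56.0, 28.0, -8.0, 1.0]  # (2-2cos)^4
    D = np.diag(x ** 2)                                # x^2 = (x^4)^{1/2}
    A3 = D @ toeplitz(c3) @ D                          # symbol x^4 (2-2cos)^4
    X = karcher_mean([A1, A2, A3])
    taus.append(eigvalsh(X).min())

alphas = [np.log2(taus[j] / taus[j + 1]) for j in range(3)]
print(alphas)                                          # expected to approach 2
```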

5. Conclusions

We have studied the spectral distribution of the geometric mean of two or more matrix-sequences constituted by HPD matrices, under the assumption that the $k$ input matrix-sequences belong to the same $d$-level, $r$-block GLT ∗-algebra, where $k \geq 2$, $d, r \geq 1$. For $k = 2$ an explicit formula exists, and this has allowed us to prove that the new matrix-sequence is of GLT type for any fixed $d, r$, with the GLT symbol being the geometric mean of the two input symbols, according to Theorems 4 and 5. As a consequence, the spectral distribution of the geometric mean of $k = 2$ matrix-sequences is formally demonstrated. For $k > 2$, an explicit formula is not available, and we have considered the Karcher mean, for which efficient iterative procedures from the literature can be used for computational purposes. Interestingly enough, using the same tools as in the proof of Theorems 4 and 5, we have shown that all the matrix-sequences made by the Karcher mean iterates are still GLT matrix-sequences, with symbols converging to the geometric mean of the $k$ symbols, provided that the matrix-sequence of the initial input matrices is a GLT matrix-sequence made by HPD matrices.
The theoretical results have been confirmed through various numerical experiments, in which we compared the eigenvalues of the geometric mean with a uniform sampling of the geometric mean of the symbols. A very good agreement has been observed in all the numerical tests, even for very moderate matrix-sizes. In fact, as the matrix-size increases, the symbol provides a better and better approximation of the eigenvalues, both in one-dimensional and two-dimensional cases and both with scalar and block matrices, i.e., using GLT matrix-sequences taken from the differential world with $k = 2, 3$, $d, r = 1, 2$, including finite difference, finite element, and isogeometric analysis approximations of constant- and variable-coefficient differential operators.
A few open questions remain, and in the subsequent lines we indicate three main issues:
  • A formal proof of the GLT nature of the Karcher mean of $k > 2$ HPD GLT matrix-sequences has to be given, under the assumption that the initial guess is an HPD GLT matrix-sequence; in this respect, also from a computational viewpoint, starting with an initial guess $\{X_{n,0}\}_n$ whose GLT symbol is already the geometric mean of the input symbols should sensibly reduce the number of iterations;
  • In connection with Theorems 4 and 5 and $k = 2$, the ALM axioms suggest that the technical assumption of almost-everywhere invertibility of the input GLT symbols is not necessary, for every $d, r \geq 1$ (see Remark 2);
  • A completely open problem concerns the study of the extremal eigenvalues of the geometric means of GLT matrix-sequences as a function of the analytic features of the geometric means of the GLT symbols: in this direction, it should be recalled that a rich literature exists regarding the extremal eigenvalues in a Toeplitz setting [36,37,38,39], in an $r$-block Toeplitz setting [40,41], and in a differential setting [42,43], thus involving all the types of examples considered in the numerical experiments. Preliminary numerical experiments in the unilevel scalar setting with $k = 2, 3$ have been performed in Section 4.8, and the results are quite promising. Interestingly enough, and substantially mimicking the cases already studied in the literature, it seems that the order of the zeros of the GLT symbol decides the asymptotic behavior of the minimal eigenvalues, and hence of the conditioning, of $G(A_n^{(1)}, \ldots, A_n^{(k)})$, at least for $d = r = 1$, $k = 2, 3$.
All the previous items show that a lot of scientific research has still to be done in connection with the findings discussed in the current work.

Author Contributions

Methodology, M.F.K. and S.S.-C.; Software, M.F.K.; Formal analysis, M.F.K. and S.S.-C.; Resources, D.A. and S.S.-C.; Writing—original draft, M.F.K.; Writing—review & editing, M.F.K. and S.S.-C.; Supervision, S.S.-C.; Project administration, S.S.-C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The second and third authors are grateful to the “Gruppo Nazionale per il Calcolo Scientifico” (INdAM-GNCS) for its support. The work of Stefano Serra-Capizzano is partially supported by the INdAM-GNCS project “Analisi e applicazioni di matrici strutturate (a blocchi)” CUP E53C23001670001 and is funded via the European High-Performance Computing Joint Undertaking (JU) under grant agreement No. 955701. The JU receives support from the European Union’s Horizon 2020 research and innovation programme and from Belgium, France, Germany, and Switzerland. Furthermore, Stefano Serra-Capizzano is grateful for the support of the Laboratory of Theory, Economics and Systems—Department of Computer Science at Athens University of Economics and Business, and of the “Como Lake center for AstroPhysics” of Insubria University. Finally, the authors warmly thank the reviewers for their careful reading of the original manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Böttcher, A.; Silbermann, B. Introduction to Large Truncated Toeplitz Matrices; Universitext; Springer: New York, NY, USA, 1999. [Google Scholar]
  2. Grenander, U.; Szegö, G. Toeplitz forms and their applications. In California Monographs in Mathematical Sciences; University of California Press: Berkeley, CA, USA, 1958. [Google Scholar]
  3. Tracy, C.A.; Widom, H. Level-spacing distributions and the Airy kernel. Commun. Math. Phys. 1994, 159, 151–174. [Google Scholar] [CrossRef]
  4. Tyrtyshnikov, E.E. A unifying approach to some old and new theorems on distribution and clustering. Linear Algebra Appl. 1996, 232, 1–43. [Google Scholar] [CrossRef]
  5. Tilli, P. Locally Toeplitz sequences: Spectral properties and applications. Linear Algebra Appl. 1998, 278, 91–120. [Google Scholar] [CrossRef]
  6. Tilli, P. A note on the spectral distribution of Toeplitz matrices. Linear Multilinear Algebra 1998, 45, 147–159. [Google Scholar] [CrossRef]
  7. Serra-Capizzano, S. Generalized locally Toeplitz sequences: Spectral analysis and applications to discretized partial differential equations. Linear Algebra Appl. 2003, 366, 371–402. [Google Scholar] [CrossRef]
  8. Serra-Capizzano, S. The GLT class as a generalized Fourier analysis and applications. Linear Algebra Appl. 2006, 419, 180–233. [Google Scholar] [CrossRef]
  9. Barbarino, G.; Garoni, C.; Serra-Capizzano, S. Block generalized locally Toeplitz sequences: Theory and applications in the unidimensional case. Electron. Trans. Numer. Anal. 2020, 53, 28–112. [Google Scholar] [CrossRef]
  10. Barbarino, G.; Garoni, C.; Serra-Capizzano, S. Block generalized locally Toeplitz sequences: Theory and applications in the multidimensional case. Electron. Trans. Numer. Anal. 2020, 53, 113–216. [Google Scholar] [CrossRef]
  11. Garoni, C.; Serra-Capizzano, S. Generalized Locally Toeplitz Sequences: Theory and Applications; Springer: Cham, Switzerland, 2017; Volume I. [Google Scholar]
  12. Garoni, C.; Serra-Capizzano, S. Generalized Locally Toeplitz Sequences: Theory and Applications; Springer: Cham, Switzerland, 2018; Volume II. [Google Scholar]
  13. Garoni, C.; Speleers, H.; Ekström, S.-E.; Reali, A.; Serra-Capizzano, S.; Hughes, T.J.R. Symbol-based analysis of finite element and isogeometric B-spline discretizations of eigenvalue problems: Exposition and review. Arch. Comput. Methods Eng. State Art Rev. 2019, 26, 1639–1690. [Google Scholar] [CrossRef]
  14. Barbarino, G. A systematic approach to reduced GLT. BIT 2022, 62, 681–743. [Google Scholar] [CrossRef]
  15. Moakher, M. A differential geometric approach to the geometric mean of symmetric positive-definite matrices. SIAM J. Matrix Anal. Appl. 2005, 26, 735–747. [Google Scholar] [CrossRef]
  16. Bhatia, R. Positive Definite Matrices; Princeton University Press: Princeton, NJ, USA, 2007. [Google Scholar]
  17. Ando, T.; Li, C.-K.; Mathias, R. Geometric Means. Linear Algebra Appl. 2004, 385, 305–334. [Google Scholar] [CrossRef]
  18. Bini, D.A.; Meini, B.; Poloni, F. An effective matrix geometric mean satisfying the Ando-Li-Mathias properties. Math. Comput. 2010, 79, 437–452. [Google Scholar] [CrossRef]
  19. Bini, D.A.; Iannazzo, B. Computing the Karcher Mean of Symmetric Positive Definite Matrices. Linear Algebra Appl. 2013, 438, 1700–1710. [Google Scholar] [CrossRef]
  20. Fletcher, P.T.; Joshi, S. Riemannian geometry for the statistical analysis of diffusion tensor data. Signal Process. 2007, 87, 250–262. [Google Scholar] [CrossRef]
  21. Bonnabel, S.; Sepulchre, R. Riemannian metric and geometric mean for positive semidefinite matrices of fixed rank. SIAM J. Matrix Anal. Appl. 2009, 31, 1055–1070. [Google Scholar] [CrossRef]
  22. Manton, J.H. A globally convergent numerical algorithm for computing the centre of mass on compact Lie groups. In Proceedings of the Eighth International Conference on Control, Automation, Robotics and Vision, ICARCV 2004, Kunming, China, 6–9 December 2004. [Google Scholar]
  23. Rathi, Y.; Michailovich, O.; Tannenbaum, A. Segmenting images on the tensor manifold. In Proceedings of the Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; pp. 1–8. [Google Scholar]
  24. Barbaresco, F. New foundation of radar Doppler signal processing based on advanced differential geometry of symmetric spaces. In Proceedings of the International Radar Conference 2009, Bordeaux, France, 12–16 October 2009. [Google Scholar]
  25. Bini, D.A.; Iannazzo, B. A note on computing matrix geometric means. Adv. Comput. Math. 2011, 35, 175–192. [Google Scholar] [CrossRef]
  26. Bhatia, R. Matrix Analysis. In Graduate Texts in Mathematics; Springer: New York, NY, USA, 1997; Volume 169. [Google Scholar]
  27. Serra-Capizzano, S. Distribution results on the algebra generated by Toeplitz sequences: A finite-dimensional approach. Linear Algebra Appl. 2011, 328, 121–130. [Google Scholar] [CrossRef]
  28. Barbarino, G. Equivalence between GLT sequences and measurable functions. Linear Algebra Appl. 2017, 529, 397–412. [Google Scholar] [CrossRef]
  29. Garoni, C. Topological foundations of an asymptotic approximation theory for sequences of matrices with increasing size. Linear Algebra Appl. 2017, 513, 324–341. [Google Scholar] [CrossRef]
  30. Serra-Capizzano, S. Spectral behavior of matrix sequences and discretized boundary value problems. Linear Algebra Appl. 2001, 337, 37–78. [Google Scholar] [CrossRef]
  31. Higham, N.J. Functions of Matrices: Theory and Computation; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2008. [Google Scholar]
  32. Bini, D.A.; Iannazzo, B. The Matrix Means Toolbox. Available online: http://bezout.dm.unipi.it/software/mmtoolbox/ (accessed on 7 May 2010).
  33. Barbarino, G.; Garoni, C. An extension of the theory of GLT sequences: Sampling on asymptotically uniform grids. Linear Multilinear Algebra 2023, 71, 2008–2025. [Google Scholar] [CrossRef]
  34. Tyrtyshnikov, E.E. Extra Dimension Approach to Spectral Distributions. Private discussion, 1997. [Google Scholar]
  35. Barakitis, N.; Ferrari, P.; Furci, I.; Serra-Capizzano, S. An extradimensional approach and distributional results for the case of 2 × 2 block Toeplitz structures. Springer Proc. Math. Stat. 2025; in press. [Google Scholar]
  36. Böttcher, A.; Grudsky, S.M. On the condition numbers of large semi-definite Toeplitz matrices. Linear Algebra Appl. 1998, 279, 285–301. [Google Scholar] [CrossRef]
  37. Serra-Capizzano, S. On the extreme spectral properties of Toeplitz matrices generated by L1 functions with several minima/maxima. BIT 1996, 36, 135–142. [Google Scholar] [CrossRef]
  38. Serra-Capizzano, S. On the extreme eigenvalues of Hermitian (block) Toeplitz matrices. Linear Algebra Appl. 1998, 270, 109–129. [Google Scholar] [CrossRef]
  39. Serra-Capizzano, S.; Tilli, P. Extreme singular values and eigenvalues of non-Hermitian block Toeplitz matrices. J. Comput. Appl. Math. 1999, 108, 113–130. [Google Scholar] [CrossRef]
  40. Serra-Capizzano, S. Asymptotic results on the spectra of block Toeplitz preconditioned matrices. SIAM J. Matrix Anal. Appl. 1999, 20, 31–44. [Google Scholar] [CrossRef]
  41. Serra-Capizzano, S. Spectral and computational analysis of block Toeplitz matrices having nonnegative definite matrix-valued generating functions. BIT 1999, 39, 152–175. [Google Scholar] [CrossRef]
  42. Vassalos, P. Asymptotic results on the condition number of FD matrices approximating semi-elliptic PDEs. Electron. J. Linear Algebra 2018, 34, 566–581. [Google Scholar] [CrossRef]
  43. Noutsos, D.; Serra-Capizzano, S.; Vassalos, P. The conditioning of FD matrix sequences coming from semi-elliptic differential equations. Linear Algebra Appl. 2008, 428, 600–624. [Google Scholar] [CrossRef]
Figure 1. Comparison between the symbol $(\kappa \xi)^{1/2}$ and $\mathrm{eig}(G(A_n, B_n))$.
Figure 2. Comparison between the symbol $(\kappa \xi)^{1/2}$ and $\mathrm{eig}(G(A_n, B_n))$.
Figure 3. Comparison between the symbol $(\kappa_1 \kappa_2 \kappa_3)^{1/3}$ and $\mathrm{eig}(G(A_n^{(1)}, A_n^{(2)}, A_n^{(3)}))$.
Figure 4. Comparison between the symbol $(\kappa_1 \kappa_2 \kappa_3)^{1/3}$ and $\mathrm{eig}(G(A_n^{(1)}, A_n^{(2)}, A_n^{(3)}))$.
Figure 5. Comparison between the symbol $G(f(\theta), h(\theta), e(\theta))$ and $\mathrm{eig}(G(n^{-1}K_n, n M_n, L_n))$.
Figure 6. Comparison between the symbol $G(f(\theta), h(\theta), e(\theta))$ and $\mathrm{eig}(G(n^{-1}K_n, n M_n, L_n))$.
Table 1. Numerical behaviour of the minimal eigenvalue of Example 1, Section 4.8.1.

  $n_j$          $\tau_j$       $\alpha_j$
  $n_0 = 40$     0.00010994     3.8698
  $n_1 = 80$     0.00000752     3.9398
  $n_2 = 160$    0.00000049     4.0297
  $n_3 = 320$    0.00000003
Table 2. Numerical behaviour of the minimal eigenvalue of Example 3, Section 4.8.2.

  $n_j$          $\tau_j$        $\alpha_j$
  $n_0 = 40$     0.0014          2.2223
  $n_1 = 80$     0.00034797      2.0000
  $n_2 = 160$    0.000086993     2.0000
  $n_3 = 320$    0.000021748