Article

Factorizations and Accurate Computations with Min and Max Matrices

by Yasmina Khiar 1, Esmeralda Mainar 1,* and Eduardo Royo-Amondarain 2
1 Department of Applied Mathematics, University Research Institute of Mathematics and Its Applications (IUMA), University of Zaragoza, 50009 Zaragoza, Spain
2 Department of Applied Mathematics, Centro de Astropartículas y Física de Altas Energías (CAPA), University of Zaragoza, 50009 Zaragoza, Spain
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(5), 684; https://doi.org/10.3390/sym17050684
Submission received: 25 February 2025 / Revised: 25 April 2025 / Accepted: 27 April 2025 / Published: 29 April 2025
(This article belongs to the Section Mathematics)

Abstract
Min and max matrices are structured matrices that appear in diverse mathematical and computational applications. Their inherent structure facilitates highly accurate numerical solutions to algebraic problems. In this research, the total positivity of generalized min and max matrices is characterized, and their bidiagonal factorizations are derived. It is also demonstrated that these decompositions can be computed with high relative accuracy (HRA), enabling the precise computation of eigenvalues and singular values and the solution of linear systems. Notably, the discussed approach achieves relative errors on the order of the unit roundoff, even for large and ill-conditioned matrices. To illustrate the exceptional accuracy of this method, numerical experiments on quantum extensions of min and L-Hilbert matrices are presented, showcasing its superior precision compared to that of standard computational techniques.

1. Introduction

The study of matrices exhibiting specific structural regularities remains an active area of research, as they find applications in diverse fields of science and technology [1]. A particularly interesting class comprises matrices with repeated entries that form distinct geometric patterns. Because of their combination of simplicity and intriguing properties, these matrices are frequently analyzed [2,3,4].
In this context, a considerable amount of research has focused on the study of various extensions of the Γ-shaped and L-shaped min and max matrices [5,6,7,8]. The n × n min matrix M and max matrix ℳ are defined as follows:
$$
M = \begin{pmatrix}
1 & 1 & 1 & \cdots & 1 \\
1 & 2 & 2 & \cdots & 2 \\
1 & 2 & 3 & \cdots & 3 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & 2 & 3 & \cdots & n
\end{pmatrix}, \qquad
\mathcal{M} = \begin{pmatrix}
1 & 2 & 3 & \cdots & n \\
2 & 2 & 3 & \cdots & n \\
3 & 3 & 3 & \cdots & n \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
n & n & n & \cdots & n
\end{pmatrix}. \tag{1}
$$
Originally formulated by Pólya and Szegö [9], the min matrix (M) arises in various mathematical and statistical contexts. For instance, it can be considered as a covariance matrix of specific stochastic processes [10]. The min matrix was generalized for sequences [11], and several of its properties—including its total positivity, inverse, and determinant—were determined shortly thereafter [12].
In parallel, another set of matrices extensively analyzed in recent decades are those with all their minors being nonnegative—these are known as totally positive matrices. Well-known examples include Pascal [13], Green [14], and, under certain conditions, collocation, Wronskian, and Gram matrices of different bases [15,16,17]. Beyond their intrinsic value, many of these studies focus on finding a specific type of bidiagonal factorization, as this enables highly accurate solutions to several numerical linear algebra problems [18]. When the necessary conditions are met, machine-order precision is achieved, even for extremely ill-conditioned problems and high dimensions—this is known as high relative accuracy (HRA).
The layout of this paper is as follows. After recalling the basic principles of the bidiagonal factorization approach and high-relative-accuracy computations in Section 2, the main results of the paper are presented in Section 3. The total positivity of generalized versions of min and max matrices is explored. Their bidiagonal factorizations are provided for any given sequence, and the precise conditions under which these matrices are totally positive are derived. Consequently, a series of algebraic problems can be solved with outstanding precision. To illustrate the behavior of the discussed methods in a challenging scenario, numerical experiments are reported in Section 4 for quantum extensions of the classical min and L-Hilbert matrices involving q-numbers.

2. Preliminaries

A matrix is termed as totally positive (TP) if all its minors are nonnegative and as strictly totally positive (STP) if all its minors are strictly positive. In the literature, TP and STP matrices are also frequently referred to as totally nonnegative and totally positive matrices, respectively, as detailed in [19,20]. The applications of TP and STP matrices are notably interesting and diverse, with prominent examples discussed in [19,21,22].
A fundamental property of TP matrices is their closure under matrix multiplication; specifically, the product of two TP matrices is also a TP matrix (see Theorem 3.1 in [21]). This crucial property has motivated the factorization of TP matrices into products of simpler TP factors, a subject of extensive study within numerical linear algebra and computational mathematics.
Bidiagonal matrices are prevalent in numerical linear algebra because of a number of interesting structural properties that render them as powerful tools in a variety of problems, particularly when involved in matrix products (cf. [5]). These matrices exhibit computational advantages in various algorithms, including singular value decomposition (SVD) and eigenvalue computations.
Factorizations involving bidiagonal matrices provide elegant and often simpler proofs for the total positivity and other related properties of the resulting factorized matrices. This fundamental connection between bidiagonal factorizations and the total positivity not only offers theoretical insights but also has practical implications for developing efficient and accurate numerical methods for handling TP matrices and related matrix classes. The sparse structure of bidiagonal matrices contributes to the computational efficiency of these factorizations and subsequent matrix operations.
To achieve high-accuracy solutions in the numerical solution of algebraic problems involving TP matrices, a widely adopted approach in recent years has been to factorize an n × n TP matrix, A, into bidiagonal factors as follows:
$$
A = F_{n-1} \cdots F_1 \, D \, G_1 \cdots G_{n-1}, \tag{2}
$$
where, for i = 1, …, n−1, F_i and G_i are lower and upper triangular bidiagonal matrices, respectively, given by

$$
F_i = \begin{pmatrix}
1 & & & & & \\
& \ddots & & & & \\
& & 1 & & & \\
& & m_{i+1,1} & 1 & & \\
& & & \ddots & \ddots & \\
& & & & m_{n,n-i} & 1
\end{pmatrix}, \qquad
G_i^T = \begin{pmatrix}
1 & & & & & \\
& \ddots & & & & \\
& & 1 & & & \\
& & \widetilde{m}_{i+1,1} & 1 & & \\
& & & \ddots & \ddots & \\
& & & & \widetilde{m}_{n,n-i} & 1
\end{pmatrix}. \tag{3}
$$
The factor D is a diagonal matrix given by D = diag(p_{1,1}, …, p_{n,n}). Such a decomposition offers significant computational advantages, particularly in numerical algorithms where efficiency and stability are paramount concerns [23,24,25,26].
The elements m_{i,j}, m̃_{i,j}, and p_{i,i} can be computed using various methods. One particularly effective approach, adopted in this research, is Neville elimination, a process analogous to Gaussian elimination. This method systematically eliminates entries below the diagonal, column by column, by updating each row through the addition of a suitable multiple of the preceding one. Specifically, the procedure generates a sequence of matrices A^{(k)} = (a^{(k)}_{i,j})_{1≤i,j≤n} for k = 1, …, n, starting with A^{(1)} := A, such that in A^{(k)} all the entries at positions (i, j) with i > j and j = 1, …, k−1 are zero. The final matrix, A^{(n)}, is upper triangular.
At each step of the Neville elimination process, the pivots
$$
p_{i,j} := a^{(j)}_{i,j}, \quad 1 \le j \le i \le n,
$$
are obtained.
The Neville multipliers m_{i,j} in (3) are the values used to create zeros at the positions (i, j), given by

$$
m_{i,j} := \begin{cases}
p_{i,j}/p_{i-1,j}, & \text{if } p_{i-1,j} \ne 0, \\
0, & \text{if } p_{i-1,j} = 0,
\end{cases} \tag{4}
$$
for 1 ≤ j < i ≤ n. The diagonal pivots, p_{i,i}, correspond to the diagonal entries of the resulting upper triangular matrix, A^{(n)}. A similar procedure applied to A^T provides the multipliers, m̃_{i,j}, in (3).
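In code, one sweep of this process is short. The following Python sketch (our own helper, not from the paper; exact rational arithmetic from the standard library sidesteps rounding, and indices are 0-based) performs Neville elimination and returns the multipliers and diagonal pivots, assuming no row swaps are needed, as is the case for the nonsingular TP matrices considered here:

```python
from fractions import Fraction

def neville_multipliers(A):
    """Neville elimination of a square matrix A.
    Returns (m, p): m[i][j] holds the multiplier m_{i+1,j+1} for i > j,
    and p[i] the diagonal pivot p_{i+1,i+1} (1-based in the text)."""
    n = len(A)
    A = [[Fraction(v) for v in row] for row in A]
    m = [[Fraction(0)] * n for _ in range(n)]
    for j in range(n - 1):               # column being annihilated
        for i in range(n - 1, j, -1):    # bottom-up: row i minus row i-1
            if A[i - 1][j] != 0:
                m[i][j] = A[i][j] / A[i - 1][j]
                A[i] = [a - m[i][j] * b for a, b in zip(A[i], A[i - 1])]
            # if the entry above vanishes, the multiplier is defined as 0
    return m, [A[i][i] for i in range(n)]
```

For the 3 × 3 classical min matrix, this returns multipliers m_{2,1} = m_{3,1} = 1 and m_{3,2} = 0, with pivots (1, 1, 1).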
The complete process, which involves performing Neville elimination on both A and A T , is known as the complete Neville elimination. According to Theorem 2.2 in [27], the complete Neville elimination of a nonsingular matrix, A, can be performed without row and column swaps if and only if it can be factorized to the form shown in (2), and its multipliers, m i , j , m ˜ i , j , verify that
$$
m_{i,j} = 0 \;\Rightarrow\; m_{h,j} = 0 \;\; \forall\, h > i, \qquad \text{and} \qquad \widetilde{m}_{i,j} = 0 \;\Rightarrow\; \widetilde{m}_{h,j} = 0 \;\; \forall\, h > i.
$$
Moreover, under such conditions, the factorization shown in (2) is unique.
For completeness, let us mention that an alternative approach to obtain the bidiagonal factorization of A consists of computing the pivots p_{i,j} directly from the entries of A, as p_{i,1} = a_{i,1}, 1 ≤ i ≤ n, and

$$
p_{i,j} := \frac{\det A[i-j+1, \ldots, i \mid 1, \ldots, j]}{\det A[i-j+1, \ldots, i-1 \mid 1, \ldots, j-1]}, \quad 1 < j \le i \le n, \tag{5}
$$

where A[i_1, …, i_r | j_1, …, j_s] denotes the submatrix of A formed by rows i_1, …, i_r and columns j_1, …, j_s. Then, the multipliers m_{i,j} are obtained using (4), and, similarly, the entries m̃_{i,j} are computed from the pivots p̃_{i,j}, obtained as in (5) from A^T.
For further details, the reader is referred to the foundational research of Gasca and Peña [27,28,29] and the recent survey [18].
A useful application of the Neville elimination of a matrix, A, is that it provides straightforward criteria to determine whether A is TP. Rather than directly verifying the definition—that is, checking that all the minors are nonnegative—one can employ an alternative approach. The following result is based on Corollary 5.5 in [28] and the arguments presented on p. 116 in [27].
Theorem 1.
A given nonsingular matrix, A, is TP (resp., STP) if and only if the complete Neville elimination of A and A T can be performed without row swaps, the diagonal pivots of the Neville elimination of A are positive, and the multipliers of the Neville elimination of A and A T are nonnegative (resp., positive).
Additionally, an immediate consequence of computing the diagonal pivots p_{i,i} is that, because of the structure of the factorization in (2), the determinant of A is given by

$$
\det(A) = \prod_{i=1}^{n} p_{i,i}. \tag{6}
$$
Finally, following [30], all the information related to the bidiagonal factorization of a given n × n matrix, A, can be encoded in another n × n matrix, BD(A), whose entries coincide with the multipliers and pivots of the complete Neville elimination, namely,

$$
\bigl(BD(A)\bigr)_{i,j} := \begin{cases}
m_{i,j}, & i > j, \\
p_{i,i}, & i = j, \\
\widetilde{m}_{j,i}, & i < j.
\end{cases} \tag{7}
$$
This reformulation of A finds a direct application when A is TP, as it enables solving several algebraic problems without performing any inaccurate intermediate subtraction, thus satisfying the no inaccurate cancellation (NIC) condition.
When an algorithm satisfies the NIC condition, it ensures that the computed solution, x ^ , has high relative accuracy (HRA), meaning that its relative error is bounded by
$$
\frac{\lVert x - \hat{x} \rVert}{\lVert x \rVert} \le K u,
$$
where K is a positive constant independent of the dimension and the condition of the problem, u is the unit roundoff, and x is the exact solution [25].
In particular, if the entries of B D ( A ) are computed at the machine precision level, one can leverage existing algorithms that achieve HRA for problems such as computing the eigenvalues and singular values of A and its inverse or solving linear systems A x = b for alternating-sign component vectors, b. These computations can be efficiently performed using the TNTool package, developed by Koev [31], which provides MATLAB implementations of the corresponding subroutines—TNEigenValues, TNSingularValues, TNInverseExpand, and TNSolve.
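TNTool is a MATLAB package, but the role of the representation (7) is easy to illustrate in any language. The following Python sketch (our own helper, not a TNTool routine; exact rational arithmetic, 0-based indices) reassembles A from BD(A) by multiplying out the bidiagonal factors of (2):

```python
from fractions import Fraction

def from_bd(bd):
    """Expand the compact representation BD(A) of (7) back into
    A = F_{n-1} ... F_1 D G_1 ... G_{n-1}, the factorization (2)."""
    n = len(bd)
    def eye():
        return [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]
    A = eye()
    for i in range(n - 1, 0, -1):          # lower factors F_{n-1}, ..., F_1
        F = eye()
        for r in range(i, n):              # subdiagonal entries m_{r+1, r+1-i}
            F[r][r - 1] = Fraction(bd[r][r - i])
        A = matmul(A, F)
    D = eye()
    for i in range(n):                     # diagonal pivots p_{i,i}
        D[i][i] = Fraction(bd[i][i])
    A = matmul(A, D)
    for i in range(1, n):                  # upper factors G_1, ..., G_{n-1}
        G = eye()
        for r in range(i, n):              # superdiagonal entries from m~
            G[r - 1][r] = Fraction(bd[r - i][r])
        A = matmul(A, G)
    return A
```

For instance, BD = [[1, 1, 1], [1, 1, 0], [1, 0, 1]] expands to the 3 × 3 classical min matrix.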

3. Bidiagonal Factorizations of Generalized Min and Max Matrices

Given n ∈ ℕ and an n-tuple x = (x_1, …, x_n), the associated min matrix is defined as M_x = (M_{i,j})_{1≤i,j≤n}, where

$$
M_{i,j} = x_{\min(i,j)}. \tag{8}
$$
This results in the matrix with the following explicit structure:
$$
M_x = \begin{pmatrix}
x_1 & x_1 & x_1 & \cdots & x_1 \\
x_1 & x_2 & x_2 & \cdots & x_2 \\
x_1 & x_2 & x_3 & \cdots & x_3 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
x_1 & x_2 & x_3 & \cdots & x_n
\end{pmatrix}.
$$
Note that M x is a symmetric matrix, and its elements are arranged in a distinct Γ pattern. For the specific case where x i = i for i = 1 , , n , the classical min matrix, M, defined in (1) is obtained.
The inherent symmetry and simple structure of min matrices (8) make them particularly well suited for the application of Neville elimination, as demonstrated in the following result:
Theorem 2.
The n × n min matrix M_x in (8) for the n-tuple x = (x_1, …, x_n) admits a bidiagonal factorization (2) such that

$$
m_{i,1} = \widetilde{m}_{i,1} = 1, \;\; i = 2, \ldots, n; \qquad m_{i,j} = \widetilde{m}_{i,j} = 0, \;\; i = j+1, \ldots, n, \; j = 2, \ldots, n-1, \tag{9}
$$

and

$$
p_{1,1} = x_1, \qquad p_{i,i} = x_i - x_{i-1}, \quad i = 2, \ldots, n. \tag{10}
$$
Proof. 
Taking into account the Γ pattern of the entries of M_x = (M_{i,j})_{1≤i,j≤n}, it is deduced that M_{i,j} = M_{i−1,j} for j = 1, …, n−1 and i = j+1, …, n. So, from (4), the multipliers of its Neville elimination satisfy

$$
m_{i,1} = M_{i,1}/M_{i-1,1} = 1, \quad i = 2, \ldots, n.
$$

Moreover, the matrix obtained after the first step of this process is an upper triangular matrix, U = (u_{i,j})_{1≤i,j≤n}. This implies that

$$
m_{i,j} = 0, \quad i = j+1, \ldots, n, \; j = 2, \ldots, n-1.
$$

Additionally, the diagonal entries of U are explicitly given by u_{i,i} = p_{i,i} for i = 1, …, n, as stated in (10). Moreover, because M_x is a symmetric matrix, it is concluded that the multipliers m̃_{i,j} also satisfy (9). □
To illustrate the bidiagonal factorization (2), let us consider the case of a 4 × 4 min matrix associated with the tuple x = (x_1, …, x_4). From Theorem 2,

$$
M_x = \begin{pmatrix}
x_1 & x_1 & x_1 & x_1 \\
x_1 & x_2 & x_2 & x_2 \\
x_1 & x_2 & x_3 & x_3 \\
x_1 & x_2 & x_3 & x_4
\end{pmatrix} = F_3 F_2 F_1 \, D \, F_1^T F_2^T F_3^T,
$$

with

$$
F_3 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 1 \end{pmatrix}, \quad
F_2 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \quad
F_1 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix},
$$

and

$$
D = \operatorname{diag}\bigl( x_1, \; x_2 - x_1, \; x_3 - x_2, \; x_4 - x_3 \bigr).
$$

This factorization can be represented as in formula (7) by

$$
BD(M_x) = \begin{pmatrix}
x_1 & 1 & 1 & 1 \\
1 & x_2 - x_1 & 0 & 0 \\
1 & 0 & x_3 - x_2 & 0 \\
1 & 0 & 0 & x_4 - x_3
\end{pmatrix}.
$$
Using Formula (6), the well-known determinant expression for min matrices is derived (see Theorem 6.2 in [10]). Furthermore, by leveraging Theorems 1 and 2, the complete characterization of the total positivity of min matrices in terms of the underlying tuple x = (x_1, …, x_n) is obtained.
Theorem 3.
Let M_x ∈ ℝ^{n×n} be the min matrix in (8) for the n-tuple x = (x_1, …, x_n). Then,

$$
\det(M_x) = x_1 \prod_{i=2}^{n} (x_i - x_{i-1}). \tag{11}
$$

Moreover, M_x is nonsingular and TP if and only if the sequence {x_1, …, x_n} is strictly increasing, with all the elements being positive. In this scenario, the expression (11) for det(M_x) and the bidiagonal decomposition (2), represented by

$$
BD(M_x) = \begin{pmatrix}
x_1 & 1 & 1 & \cdots & 1 \\
1 & x_2 - x_1 & 0 & \cdots & 0 \\
1 & 0 & x_3 - x_2 & \ddots & \vdots \\
\vdots & \vdots & \ddots & \ddots & 0 \\
1 & 0 & \cdots & 0 & x_n - x_{n-1}
\end{pmatrix},
$$
satisfy the NIC condition and can be computed to HRA.
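Theorem 3 lends itself to a direct numerical check. The sketch below (helper names are ours; exact rational arithmetic, so the comparison is exact) builds M_x for a sample strictly increasing positive tuple and compares an elimination-based determinant with the product formula (11):

```python
from fractions import Fraction

def min_matrix(x):
    """M_x with entries x_min(i,j) (0-based indices)."""
    n = len(x)
    return [[x[min(i, j)] for j in range(n)] for i in range(n)]

def det(A):
    """Exact determinant by Gaussian elimination over the rationals
    (no pivoting; valid here since all leading minors are nonzero)."""
    A = [[Fraction(v) for v in row] for row in A]
    d = Fraction(1)
    for j in range(len(A)):
        d *= A[j][j]
        for i in range(j + 1, len(A)):
            f = A[i][j] / A[j][j]
            A[i] = [a - f * b for a, b in zip(A[i], A[j])]
    return d

# formula (11): det(M_x) = x_1 * prod_{i=2}^{n} (x_i - x_{i-1})
x = [Fraction(v) for v in (2, 5, 7, 11, 16)]   # strictly increasing, positive
formula = x[0]
for i in range(1, len(x)):
    formula *= x[i] - x[i - 1]
assert det(min_matrix(x)) == formula
```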
Example 1.
As an illustrative example, a quantum extension (or q-extension) of the classical min matrix (1) is considered from the increasing sequence x = ([1]_q, …, [n]_q), where, for q > 0, the q-integer [i]_q is defined as

$$
[i]_q := 1 + q + \cdots + q^{i-1} = \begin{cases}
\dfrac{1 - q^i}{1 - q}, & \text{if } q \ne 1, \\[6pt]
i, & \text{if } q = 1.
\end{cases}
$$
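In exact arithmetic, a q-integer helper is a one-liner (the function name q_int is ours), and the two closed forms above agree:

```python
from fractions import Fraction

def q_int(i, q):
    """q-integer [i]_q = 1 + q + ... + q^(i-1); reduces to i when q = 1."""
    return sum(Fraction(q) ** k for k in range(i))
```

For instance, q_int(3, Fraction(1, 2)) returns 7/4, which matches (1 − q³)/(1 − q) for q = 1/2.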
From Theorem 3, the corresponding min matrix, M x , is totally positive, and its determinant is given by
$$
\det(M_x) = x_1 \prod_{i=2}^{n} (x_i - x_{i-1}) = \prod_{i=2}^{n} \bigl( [i]_q - [i-1]_q \bigr) = \prod_{i=2}^{n} q^{i-1} = q^{n(n-1)/2}.
$$
Additionally, the bidiagonal factorization of M x is
$$
BD(M_x) = \begin{pmatrix}
1 & 1 & 1 & \cdots & 1 \\
1 & q & 0 & \cdots & 0 \\
1 & 0 & q^2 & \ddots & \vdots \\
\vdots & \vdots & \ddots & \ddots & 0 \\
1 & 0 & \cdots & 0 & q^{n-1}
\end{pmatrix}.
$$
Let us proceed similarly with the max matrices. For a given n-tuple x = (x_1, …, x_n), the max matrix is defined as ℳ_x = (ℳ_{i,j})_{1≤i,j≤n}, where the entries are given by

$$
\mathcal{M}_{i,j} = x_{\max(i,j)}, \quad i, j = 1, \ldots, n, \tag{12}
$$

which results in the following explicit form:

$$
\mathcal{M}_x = \begin{pmatrix}
x_1 & x_2 & x_3 & \cdots & x_n \\
x_2 & x_2 & x_3 & \cdots & x_n \\
x_3 & x_3 & x_3 & \cdots & x_n \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
x_n & x_n & x_n & \cdots & x_n
\end{pmatrix}.
$$
Max matrices are symmetric and exhibit a reverse L-shaped pattern in their entries. For the particular case x_i = i, i = 1, …, n, the classical max matrix ℳ in (1) is recovered.
In a clear analogy with min matrices, the following result derives the bidiagonal factorization of max matrices.
Theorem 4.
The max matrix ℳ_x in (12) for the n-tuple x = (x_1, …, x_n) admits a bidiagonal factorization (2) such that

$$
m_{i,1} = \widetilde{m}_{i,1} = \frac{x_i}{x_{i-1}}, \;\; i = 2, \ldots, n; \qquad m_{i,j} = \widetilde{m}_{i,j} = 0, \;\; i = j+1, \ldots, n, \; j = 2, \ldots, n-1, \tag{13}
$$

and

$$
p_{1,1} = x_1, \qquad p_{i,i} = \frac{x_i}{x_{i-1}} (x_{i-1} - x_i), \quad i = 2, \ldots, n. \tag{14}
$$
Proof. 
Taking into account the reverse L pattern of ℳ_x = (ℳ_{i,j})_{1≤i,j≤n}, it is deduced that ℳ_{i,j}/ℳ_{i−1,j} = x_i/x_{i−1} for j = 1, …, n−1 and i = j+1, …, n. From (4), the multipliers in the first step of the Neville elimination are given by

$$
m_{i,1} = \frac{x_i}{x_{i-1}}, \quad i = 2, \ldots, n.
$$

After this step, the resulting matrix is an upper triangular matrix whose diagonal entries correspond to the diagonal pivots in (14). Additionally, for i = j+1, …, n and j = 2, …, n−1, the multipliers satisfy m_{i,j} = 0. Because ℳ_x is symmetric, m_{i,j} = m̃_{i,j} for i > j, which completes the proof. □
From Theorem 4, for the specific case where n = 4, the bidiagonal factorization (2) of ℳ_x corresponding to the sequence x = (x_1, x_2, x_3, x_4) reads as

$$
\mathcal{M}_x = \begin{pmatrix}
x_1 & x_2 & x_3 & x_4 \\
x_2 & x_2 & x_3 & x_4 \\
x_3 & x_3 & x_3 & x_4 \\
x_4 & x_4 & x_4 & x_4
\end{pmatrix} = F_3 F_2 F_1 \, D \, F_1^T F_2^T F_3^T,
$$

with

$$
F_3 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & x_4/x_3 & 1 \end{pmatrix}, \quad
F_2 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & x_3/x_2 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \quad
F_1 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ x_2/x_1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix},
$$

and

$$
D = \operatorname{diag}\!\left( x_1, \; \tfrac{x_2}{x_1}(x_1 - x_2), \; \tfrac{x_3}{x_2}(x_2 - x_3), \; \tfrac{x_4}{x_3}(x_3 - x_4) \right).
$$

This factorization can be represented by

$$
BD(\mathcal{M}_x) = \begin{pmatrix}
x_1 & x_2/x_1 & x_3/x_2 & x_4/x_3 \\
x_2/x_1 & \tfrac{x_2}{x_1}(x_1 - x_2) & 0 & 0 \\
x_3/x_2 & 0 & \tfrac{x_3}{x_2}(x_2 - x_3) & 0 \\
x_4/x_3 & 0 & 0 & \tfrac{x_4}{x_3}(x_3 - x_4)
\end{pmatrix}.
$$
The determinants of max matrices can be deduced in a straightforward manner using Formula (6). Furthermore, Theorems 1 and 4 allow us to characterize the total positivity of max matrices in terms of the tuple { x 1 , , x n } .
Theorem 5.
Let ℳ_x ∈ ℝ^{n×n} be the max matrix in (12) for the n-tuple x = (x_1, …, x_n). Then,

$$
\det \mathcal{M}_x = x_n \prod_{i=2}^{n} (x_{i-1} - x_i). \tag{15}
$$

Moreover, ℳ_x is nonsingular and TP if and only if {x_1, …, x_n} is a strictly decreasing sequence of positive values. In this scenario, Formula (15) for det ℳ_x and the bidiagonal decomposition (2), represented by

$$
BD(\mathcal{M}_x) = \begin{pmatrix}
x_1 & \frac{x_2}{x_1} & \frac{x_3}{x_2} & \cdots & \frac{x_n}{x_{n-1}} \\[4pt]
\frac{x_2}{x_1} & \frac{x_2}{x_1}(x_1 - x_2) & 0 & \cdots & 0 \\[4pt]
\frac{x_3}{x_2} & 0 & \frac{x_3}{x_2}(x_2 - x_3) & \ddots & \vdots \\
\vdots & \vdots & \ddots & \ddots & 0 \\[2pt]
\frac{x_n}{x_{n-1}} & 0 & \cdots & 0 & \frac{x_n}{x_{n-1}}(x_{n-1} - x_n)
\end{pmatrix},
$$
satisfy the NIC condition and can be computed to HRA.
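As with Theorem 3, formula (15) can be verified directly. The following sketch (our own helpers, exact rational arithmetic) uses the decreasing positive sequence x_i = 1/i, which is in fact the L-Hilbert case of Example 2 below:

```python
from fractions import Fraction

def max_matrix(x):
    """Max matrix with entries x_max(i,j) (0-based indices)."""
    n = len(x)
    return [[x[max(i, j)] for j in range(n)] for i in range(n)]

def det(A):
    """Exact determinant by Gaussian elimination over the rationals."""
    A = [row[:] for row in A]
    d = Fraction(1)
    for j in range(len(A)):
        d *= A[j][j]
        for i in range(j + 1, len(A)):
            f = A[i][j] / A[j][j]
            A[i] = [a - f * b for a, b in zip(A[i], A[j])]
    return d

# strictly decreasing positive sequence: x_i = 1/i
n = 5
x = [Fraction(1, i) for i in range(1, n + 1)]

# formula (15): det = x_n * prod_{i=2}^{n} (x_{i-1} - x_i)
formula = x[-1]
for i in range(1, n):
    formula *= x[i - 1] - x[i]
assert det(max_matrix(x)) == formula
```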
Example 2.
A notable example of a max matrix is the so-called L-Hilbert matrix, often referred to as the loyal companion of the Hilbert matrix, paraphrasing Choi [32]. It is defined as
$$
L = (L_{i,j})_{1 \le i,j \le n}, \quad \text{where } L_{i,j} = \min\!\left( \frac{1}{i}, \frac{1}{j} \right), \quad i, j = 1, \ldots, n. \tag{16}
$$
Although it appears less frequently than the Hilbert matrix, the L-Hilbert matrix possesses several intriguing properties and has been explored in various mathematical contexts [33].
An interesting extension of the L-Hilbert matrix (16) is the quantum L-Hilbert matrix L(q) = (L_{i,j}(q))_{1≤i,j≤n}, q > 0, defined in terms of q-integers as follows:

$$
L_{i,j}(q) = \min\!\left( \frac{1}{[i]_q}, \frac{1}{[j]_q} \right), \quad i, j = 1, \ldots, n. \tag{17}
$$
Note that for q = 1 , the L-Hilbert matrix (16) is recovered.
Because x_n = 1/[n]_q, n ∈ ℕ, is a decreasing sequence, L_{i,j}(q) = min(x_i, x_j) = x_{max(i,j)}, i, j = 1, …, n. Then, using Theorem 5, it immediately follows that

$$
\det L(q) = \frac{1}{[n]_q} \prod_{i=2}^{n} \left( \frac{1}{[i-1]_q} - \frac{1}{[i]_q} \right) = \frac{1}{[n]_q} \prod_{i=2}^{n} \frac{q^{i-1}}{[i]_q \, [i-1]_q} = \frac{q^{n(n-1)/2}}{\prod_{i=2}^{n} [i]_q^2}.
$$
Moreover, the quantum L-Hilbert matrix, L(q), in (17) is a TP matrix, and its bidiagonal factorization (2) can be encoded as follows:

$$
BD(L(q)) = \begin{pmatrix}
1 & \frac{1}{[2]_q} & \frac{[2]_q}{[3]_q} & \cdots & \frac{[n-1]_q}{[n]_q} \\[4pt]
\frac{1}{[2]_q} & \frac{q}{[2]_q^2} & 0 & \cdots & 0 \\[4pt]
\frac{[2]_q}{[3]_q} & 0 & \frac{q^2}{[3]_q^2} & \ddots & \vdots \\
\vdots & \vdots & \ddots & \ddots & 0 \\[2pt]
\frac{[n-1]_q}{[n]_q} & 0 & \cdots & 0 & \frac{q^{n-1}}{[n]_q^2}
\end{pmatrix}.
$$
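Both the determinant formula and the diagonal of BD(L(q)) derived in this example can be cross-checked exactly (helper names are ours; q is taken rational so all computations are exact):

```python
from fractions import Fraction

def qint(i, q):
    """q-integer [i]_q = 1 + q + ... + q^(i-1)."""
    return sum(q ** k for k in range(i))

def det(A):
    """Exact determinant by Gaussian elimination over the rationals."""
    A = [row[:] for row in A]
    d = Fraction(1)
    for j in range(len(A)):
        d *= A[j][j]
        for i in range(j + 1, len(A)):
            f = A[i][j] / A[j][j]
            A[i] = [a - f * b for a, b in zip(A[i], A[j])]
    return d

n, q = 5, Fraction(3, 10)
x = [1 / qint(i, q) for i in range(1, n + 1)]
L = [[min(x[i], x[j]) for j in range(n)] for i in range(n)]

# closed form: det L(q) = q^{n(n-1)/2} / prod_{i=2}^{n} [i]_q^2
closed = q ** (n * (n - 1) // 2)
for i in range(2, n + 1):
    closed /= qint(i, q) ** 2
assert det(L) == closed

# diagonal pivots p_{i,i} = q^{i-1} / [i]_q^2; their product is det L(q)
piv_prod = Fraction(1)
for i in range(1, n + 1):
    piv_prod *= q ** (i - 1) / qint(i, q) ** 2
assert piv_prod == closed
```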
The high accuracy achieved by applying the derived bidiagonal factorizations to min and max matrices is illustrated in Section 4.

4. Numerical Experiments

To test the methods proposed in the previous sections, a series of experiments is provided, addressing several algebraic problems relevant in many applied contexts where interpolation and approximation problems must be solved accurately. These experiments involve the specific min and max matrices discussed in Examples 1 and 2.
For each analyzed matrix, the eigenvalues, the singular values, and the solutions of linear systems A x = b, with b having an alternating-sign component pattern, are computed. For this purpose, the standard MATLAB R2024a functions (eig, svd, and the ∖ command, respectively) were used and, alternatively, the subroutines provided by the TNTool package, which take the bidiagonal representation (7) as input. We compare both strategies by measuring their relative errors, ‖x − x̂‖/‖x‖, where x̂ is the approximation and x is the exact value, the latter computed using Wolfram Mathematica 13.3 with 100-digit arithmetic. To assess the ill-conditioning of the given matrices, we also calculated the condition number, κ₂.
Regarding the computational cost of the bidiagonal approach, both the eigenvalues and singular values are computed in O(n³) operations, whereas the solution of linear systems is achieved at a lower O(n²) cost.
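To see where the O(n²) count for linear systems comes from: each bidiagonal factor in (2) can be inverted against a vector in O(n) operations, and there are 2n − 1 factors. A minimal sketch follows (our own helper in exact arithmetic, illustrating the mechanics only; it is not the TNSolve routine and does not address the HRA bookkeeping):

```python
from fractions import Fraction

def solve_bd(bd, b):
    """Solve A y = b given the compact representation BD(A) of (7),
    applying the inverse of each bidiagonal factor of (2) in turn.
    Each sweep costs O(n) and there are O(n) factors: O(n^2) overall."""
    n = len(bd)
    y = [Fraction(v) for v in b]
    for i in range(n - 1, 0, -1):          # F_{n-1}^{-1}, ..., F_1^{-1}
        for r in range(i, n):              # forward substitution
            y[r] -= bd[r][r - i] * y[r - 1]
    for i in range(n):                     # D^{-1}
        y[i] /= bd[i][i]
    for i in range(1, n):                  # G_1^{-1}, ..., G_{n-1}^{-1}
        for r in range(n - 1, i - 1, -1):  # back substitution
            y[r - 1] -= bd[r - i][r] * y[r]
    return y
```

For BD(M_x) with x = (1, 2, 3), solving against b = (3, 5, 6) returns (1, 1, 1).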

4.1. HRA Computations with Min Matrices

In this first set of experiments, the quantum extension of the classical min matrix described in Example 1 is analyzed. The elements of the bidiagonal form, BD(M_x), which serve as input to the TNTool routines, are computed to HRA through Algorithm 1. Table 1 presents the relative errors in the computations of the lowest eigenvalue and singular value, as well as the solution of the system M_x y = b, for q = 0.2 and various matrix sizes, n. Note that the value of q is chosen to be low enough to pose a stringent test to numerical methods because, in this way, the entries of the matrix are relatively close to one another, making the computations vulnerable to cancellations. The components of b are chosen as random numbers uniformly distributed in [0, 10³], with an alternating-sign pattern to ensure HRA. As observed, when the matrix dimension and condition number increase, standard MATLAB methods quickly lose accuracy, whereas the bidiagonal approach consistently maintains machine-order errors in all the cases.
Algorithm 1 Computation to HRA of the bidiagonal form, B D ( M x ) .
Require:  n, q
    for i = 2 : n do
        BD(M_x)_{i,1} ← 1
        BD(M_x)_{1,i} ← 1
        for j = 2 : i − 1 do
            BD(M_x)_{i,j} ← 0
            BD(M_x)_{j,i} ← 0
        end for
    end for
    BD(M_x)_{1,1} ← 1
    for i = 2 : n do
        BD(M_x)_{i,i} ← q · BD(M_x)_{i−1,i−1}
    end for
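A direct Python transliteration of Algorithm 1 (the function name is ours) makes the HRA structure apparent: every entry is produced by products alone, with no subtractions:

```python
def bd_min_q(n, q):
    """BD(M_x) for the quantum min matrix with x = ([1]_q, ..., [n]_q):
    first row and column are 1, the diagonal is 1, q, q^2, ..., q^(n-1),
    and all remaining entries are 0. Products only, so no cancellation."""
    BD = [[0.0] * n for _ in range(n)]
    BD[0][0] = 1.0
    for i in range(1, n):
        BD[i][0] = 1.0                     # multipliers m_{i,1} = 1
        BD[0][i] = 1.0                     # multipliers m~_{i,1} = 1
        BD[i][i] = q * BD[i - 1][i - 1]    # pivots p_{i,i} = q^{i-1}
    return BD
```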

4.2. HRA Computations with Max Matrices

In this experiment, the quantum extension of the L-Hilbert matrix, which is a particular case of the max matrix described in Example 2, is studied. The elements of the bidiagonal form, BD(L(q)), are calculated to HRA using Algorithm 2. Table 2 reports the relative errors in the computations of the lowest eigenvalue and the lowest singular value and the solution of the system L(q) y = b. In this experiment, q = 0.3, and the components of b are chosen to follow an alternating-sign pattern, where their absolute values are randomly generated from a uniform distribution in [0, 10³]. The results clearly demonstrate that as the matrix dimension and condition number increase, standard methods rapidly lose accuracy, whereas the proposed approach consistently achieves high-precision results in all the analyzed dimensions.
Algorithm 2 Computation to HRA of the bidiagonal form, B D ( L ( q ) ) .
Require:  n, q
    for i = 2 : n do
        BD(L(q))_{i,1} ← [i−1]_q / [i]_q
        BD(L(q))_{1,i} ← [i−1]_q / [i]_q
        for j = 2 : i − 1 do
            BD(L(q))_{i,j} ← 0
            BD(L(q))_{j,i} ← 0
        end for
    end for
    for i = 1 : n do
        BD(L(q))_{i,i} ← q^{i−1} / [i]_q^2
    end for
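Algorithm 2 transliterates just as directly (the function name is ours); q-integers are sums of positive terms and every entry of BD(L(q)) is a product or quotient of them, so no inaccurate cancellation occurs:

```python
def bd_lhilbert_q(n, q):
    """BD(L(q)) for the quantum L-Hilbert matrix: first row and column
    carry the ratios [i-1]_q/[i]_q, the diagonal the pivots q^(i-1)/[i]_q^2,
    and all remaining entries are 0."""
    qi = [sum(q ** k for k in range(i)) for i in range(n + 1)]  # qi[i] = [i]_q
    BD = [[0.0] * n for _ in range(n)]
    for i in range(1, n):
        BD[i][0] = BD[0][i] = qi[i] / qi[i + 1]   # m_{i+1,1} = [i]_q/[i+1]_q
    for i in range(n):
        BD[i][i] = q ** i / qi[i + 1] ** 2        # p_{i+1,i+1} = q^i/[i+1]_q^2
    return BD
```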

5. Conclusions and Future Research

A generalization of min and max matrices, namely symmetric Γ-shaped and reverse L-shaped matrices, has been investigated. A comprehensive characterization of their total positivity has been developed, along with the derivation of their bidiagonal factorizations.
These factorizations provide exceptional accuracy when applied to a broad range of algebraic problems involving these matrices, including the computations of eigenvalues and singular values, as well as the solution of linear systems.
Numerical experiments have been conducted on particularly ill-conditioned matrices, achieving machine-level relative errors using the proposed methods. These results stand in sharp contrast to those obtained with standard, non-specialized algorithms, which fail to deliver accurate solutions, even for relatively small matrix sizes.
Overall, the obtained results contribute to the expanding field of structured matrices and their applications in numerical linear algebra, paving the way for more accurate and efficient computational approaches in both theoretical and applied contexts. Future lines of research that are planned to be explored include the study of different families of matrices that are natural extensions of the ones analyzed in this research.
Efficient computations involving block matrices hold significant importance in modern computing and play a crucial role across a wide spectrum of applications, as highlighted in [1]. As a compelling avenue for future research, the authors intend to investigate block min and max matrices constructed from a sequence X = {X_1, …, X_n} of square and nonsingular matrices X_i ∈ ℝ^{m×m}, i = 1, …, n, generalizing the Neville elimination procedure to provide block bidiagonal factorizations and exploring their potential applications.
As a possible limitation of the proposed methods, it should be noted that obtaining the bidiagonal factorization of the matrices may not be sufficient. In addition to the expressions for the multipliers and the diagonal pivots of the complete Neville elimination, an algorithm capable of computing them to HRA is required—something that is not always feasible, even when the matrices in question are known to be TP.

Author Contributions

Conceptualization, Y.K., E.M. and E.R.-A.; methodology, Y.K., E.M. and E.R.-A.; investigation, Y.K., E.M. and E.R.-A.; writing—original draft, Y.K., E.M. and E.R.-A.; writing—review and editing, Y.K., E.M. and E.R.-A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by Ministerio de Ciencia, Innovación y Universidades/Agencia Estatal de Investigación (PID2022-138569NB-I00, RED2022-134176-T) and Gobierno de Aragón (E41_23R).

Data Availability Statement

The source codes employed to run the numerical experiments are available upon request.

Acknowledgments

The authors thank the anonymous referees for their helpful comments and suggestions, which have improved this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Sidorov, N.A.; Dreglea, A.I.; Sidorov, D.N. Generalisation of the Frobenius Formula in the Theory of Block Operators on Normed Spaces. Mathematics 2021, 9, 3066. [Google Scholar] [CrossRef]
  2. Bryc, W.; Dembo, A.; Jiang, T. Spectral measure of large random Hankel, Markov and Toeplitz matrices. Ann. Probab. 2006, 34, 1–38. [Google Scholar] [CrossRef]
  3. Noschese, S.; Pasquini, L.; Reichel, L. Tridiagonal Toeplitz matrices: Properties and novel applications. Numer. Linear Algebra Appl. 2013, 20, 302–326. [Google Scholar] [CrossRef]
  4. Srivastava, H.M.; Ahmad, Q.Z.; Khan, N.; Khan, N.; Khan, B. Hankel and Toeplitz Determinants for a Subclass of q-Starlike Functions Associated with a General Conic Domain. Mathematics 2019, 7, 181. [Google Scholar] [CrossRef]
  5. Higham, N.J. The power of bidiagonal matrices. Electron. J. Linear Algebra 2024, 40, 453–474. [Google Scholar] [CrossRef]
  6. Li, H.; Yuan, P. A new generalization of the geometric min matrix and the geometric max matrix. J. Appl. Math. Comput. 2024, 71, 1521–1542. [Google Scholar] [CrossRef]
  7. Polatli, E. On some properties of a generalized min matrix. AIMS Math. 2023, 8, 26199–26212. [Google Scholar] [CrossRef]
  8. Štampach, F. The Hilbert L-matrix. J. Funct. Anal. 2022, 282, 109401. [Google Scholar] [CrossRef]
  9. Pólya, G.; Szegö, G. Problems and Theorems in Analysis II; Springer: Berlin, Germany, 1998. [Google Scholar]
  10. Mattila, M.; Haukkanen, P. Studying the various properties of MIN and MAX matrices—Elementary vs. more advanced methods. Spec. Matrices 2016, 4, 101–109. [Google Scholar] [CrossRef]
  11. Neudecker, H.; Trenkler, G.; Liu, S. Problem section. Stat. Pap. 2009, 50, 221–223. [Google Scholar] [CrossRef]
  12. Chu, K.; Puntanen, S.; Styan, G. Problem section. Stat. Pap. 2011, 52, 257–262. [Google Scholar] [CrossRef]
  13. Alonso, P.; Delgado, J.; Gallego, R.; Peña, J.M. Conditioning and accurate computations with Pascal matrices. J. Comput. Appl. Math. 2013, 252, 21–26. [Google Scholar] [CrossRef]
  14. Delgado, J.; Peña, G.; Peña, J.M. Accurate and fast computations with Green matrices. Appl. Math. Lett. 2023, 145, 108778. [Google Scholar] [CrossRef]
  15. Delgado, J.; Orera, H.; Peña, J.M. High Relative Accuracy With Collocation Matrices of -Jacobi Polynomials. Numer. Linear Algebra Appl. 2025, 32, e2602. [Google Scholar] [CrossRef]
  16. Marco, A.; Martínez, J.J.; Viaña, R. Accurate computations with collocation matrices of the Lupaş-type (p,q)-analogue of the Bernstein basis. Linear Algebra Appl. 2022, 651, 312–331.
  17. Mainar, E.; Peña, J.M.; Rubio, B. Accurate computations with Gram and Wronskian matrices of geometric and Poisson bases. Rev. Real Acad. Cienc. Exactas Fis. Nat. Ser. A-Mat. 2022, 116, 126.
  18. Martínez, J.J. The Application of the Bidiagonal Factorization of Totally Positive Matrices in Numerical Linear Algebra. Axioms 2024, 13, 258.
  19. Fallat, S.M.; Johnson, C.R. Totally Nonnegative Matrices; Princeton University Press: Princeton, NJ, USA, 2011.
  20. Koev, P. Accurate Computations with Totally Nonnegative Matrices. SIAM J. Matrix Anal. Appl. 2007, 29, 731–751.
  21. Ando, T. Totally positive matrices. Linear Algebra Appl. 1987, 90, 165–219.
  22. Pinkus, A. Totally Positive Matrices; Cambridge Tracts in Mathematics; Cambridge University Press: Cambridge, UK, 2009.
  23. Marco, A.; Martínez, J.J. Accurate computation of the Moore–Penrose inverse of strictly totally positive matrices. J. Comput. Appl. Math. 2019, 350, 299–308.
  24. Demmel, J.; Koev, P. The Accurate and Efficient Solution of a Totally Positive Generalized Vandermonde Linear System. SIAM J. Matrix Anal. Appl. 2005, 27, 142–152.
  25. Demmel, J.; Gu, M.; Eisenstat, S.; Slapničar, I.; Veselić, K.; Drmač, Z. Computing the singular value decomposition with high relative accuracy. Linear Algebra Appl. 1999, 299, 21–80.
  26. Demmel, J. Accurate Singular Value Decompositions of Structured Matrices. SIAM J. Matrix Anal. Appl. 2000, 21, 562–580.
  27. Gasca, M.; Peña, J.M. On Factorizations of Totally Positive Matrices. In Total Positivity and Its Applications; Springer: Dordrecht, The Netherlands, 1996; pp. 109–130.
  28. Gasca, M.; Peña, J.M. Total positivity and Neville elimination. Linear Algebra Appl. 1992, 165, 25–44.
  29. Gasca, M.; Peña, J.M. A matricial description of Neville elimination with applications to total positivity. Linear Algebra Appl. 1994, 202, 33–53.
  30. Koev, P. Accurate Eigenvalues and SVDs of Totally Nonnegative Matrices. SIAM J. Matrix Anal. Appl. 2005, 27, 1–23.
  31. Koev, P. TNTool: Software Package for Performing Virtually All Matrix Computations with Nonsingular Totally Nonnegative Matrices to High Relative Accuracy. Available online: https://sites.google.com/sjsu.edu/plamenkoev/home/software/tntool (accessed on 26 April 2025).
  32. Choi, M.D. Tricks or Treats with the Hilbert Matrix. Amer. Math. Mon. 1983, 90, 301–312.
  33. Štampach, F. Asymptotic Spectral Properties of the Hilbert L-Matrix. SIAM J. Matrix Anal. Appl. 2022, 43, 1658–1679.
Table 1. Relative errors obtained for the quantum extension of the min matrix in the computations of the minimum eigenvalue, the minimum singular value and the solution of linear systems for q = 0.2.

| n  | κ₂(M)       | eig         | TNEigenValues | svd         | TNSingularValues | M\b          | TNSolve     |
|----|-------------|-------------|---------------|-------------|------------------|--------------|-------------|
| 10 | 4.9 × 10^7  | 1.3 × 10^−9 | 2.2 × 10^−16  | 8.6 × 10^−11| 1.3 × 10^−15     | 8.6 × 10^−11 | 4.0 × 10^−16|
| 20 | 9.9 × 10^14 | 6.1 × 10^−2 | 6.4 × 10^−16  | 4.6 × 10^−4 | 8.9 × 10^−16     | 4.1 × 10^−4  | 1.1 × 10^−15|
| 30 | 1.5 × 10^22 | 1.3 × 10^6  | 1.0 × 10^−15  | 1.0 × 10^0  | 1.6 × 10^−15     | 4.1 × 10^104 | 1.6 × 10^−15|
| 40 | 1.9 × 10^29 | 1.1 × 10^13 | 1.9 × 10^−15  | 1.0 × 10^0  | 2.2 × 10^−15     | 1.3 × 10^256 | 2.1 × 10^−15|
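The matrices in Table 1 are quantum extensions of the classical min matrix with entries min(i, j). As background for why min matrices admit exact factorizations into simple triangular factors, recall that the classical min matrix factors as M = L·Lᵀ, where L is the lower triangular all-ones matrix, since min(i, j) counts the indices k with k ≤ i and k ≤ j. The sketch below verifies this identity in plain Python; the function names are illustrative and not part of the software used in the experiments:

```python
# Classical min matrix: M[i][j] = min(i+1, j+1) (1-based min of the indices).
def min_matrix(n):
    return [[min(i, j) + 1 for j in range(n)] for i in range(n)]

# Lower triangular all-ones matrix: L[i][k] = 1 for k <= i, else 0.
def lower_ones(n):
    return [[1 if k <= i else 0 for k in range(n)] for i in range(n)]

# Compute L @ L^T without external libraries; entry (i, j) counts
# the indices k with k <= i and k <= j, i.e. min(i, j) + 1.
def matmul_t(L):
    n = len(L)
    return [[sum(L[i][k] * L[j][k] for k in range(n)) for j in range(n)]
            for i in range(n)]

n = 6
assert min_matrix(n) == matmul_t(lower_ones(n))  # M = L L^T holds exactly
```

Because the factor L is totally positive with an obvious bidiagonal decomposition, this identity is the simplest instance of the bidiagonal factorizations whose quantum analogues are evaluated in the experiments above.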
Table 2. Relative errors obtained for the quantum extension of the L-Hilbert matrix in the computations of the minimum eigenvalue, the minimum singular value and the solution of linear systems for q = 0.3.

| n  | κ₂(M)       | eig          | TNEigenValues | svd          | TNSingularValues | M\b          | TNSolve     |
|----|-------------|--------------|---------------|--------------|------------------|--------------|-------------|
| 10 | 1.6 × 10^6  | 1.5 × 10^−10 | 1.9 × 10^−16  | 7.6 × 10^−12 | 3.8 × 10^−16     | 1.2 × 10^−11 | 5.5 × 10^−16|
| 20 | 5.4 × 10^11 | 3.6 × 10^−5  | 1.5 × 10^−15  | 8.0 × 10^−7  | 9.9 × 10^−16     | 1.2 × 10^−6  | 1.3 × 10^−15|
| 30 | 1.4 × 10^17 | 3.3 × 10^0   | 9.6 × 10^−16  | 1.4 × 10^−2  | 9.6 × 10^−16     | 3.0 × 10^2   | 1.7 × 10^−15|
| 40 | 3.1 × 10^22 | 2.3 × 10^6   | 1.2 × 10^−15  | 1.0 × 10^0   | 1.9 × 10^−15     | 2.6 × 10^10  | 1.8 × 10^−15|
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
