1. Introduction
The study of matrices exhibiting specific structural regularities remains an active area of research, as they find applications in diverse fields of science and technology [1]. A particularly interesting class comprises matrices with repeated entries that form distinct geometric patterns. Because of their combination of simplicity and intriguing properties, these matrices are frequently analyzed [2,3,4].
In this context, a considerable amount of research has focused on the study of various extensions of the -shaped and L-shaped min and max matrices [5,6,7,8]. The min matrix M and max matrix are defined as follows:
Originally formulated by Pólya and Szegö [9], the min matrix (M) arises in various mathematical and statistical contexts. For instance, it can be considered as a covariance matrix of specific stochastic processes [10]. The min matrix was generalized for sequences [11], and several of its properties—including its total positivity, inverse, and determinant—were determined shortly thereafter [12].
In parallel, another set of matrices extensively analyzed in recent decades are those with all their minors being nonnegative—these are known as totally positive matrices. Well-known examples include Pascal [13], Green [14], and, under certain conditions, collocation, Wronskian, and Gram matrices of different bases [15,16,17]. Beyond their intrinsic value, many of these studies focus on finding a specific type of bidiagonal factorization, as this enables highly accurate solutions to several numerical linear algebra problems [18]. When the necessary conditions are met, machine-order precision is achieved, even for extremely ill-conditioned problems and high dimensions—this is known as high relative accuracy (HRA).
The layout of this paper is as follows. After recalling the basic principles of the bidiagonal factorization approach and high-relative-accuracy computations in Section 2, the main results of the paper are presented in Section 3. The total positivity of generalized versions of min and max matrices is explored. Their bidiagonal factorizations are provided for any given sequence, and the precise conditions under which these matrices are totally positive are derived. Consequently, a series of algebraic problems can be solved with outstanding precision. To illustrate the behavior of the discussed methods in a challenging scenario, numerical experiments are reported in Section 4 for quantum extensions of the classical min and L-Hilbert matrices involving q-numbers.
2. Preliminaries
A matrix is termed totally positive (TP) if all its minors are nonnegative and strictly totally positive (STP) if all its minors are strictly positive. In the literature, TP and STP matrices are also frequently referred to as totally nonnegative and totally positive matrices, respectively, as detailed in [19,20]. The applications of TP and STP matrices are notably interesting and diverse, with prominent examples discussed in [19,21,22].
A fundamental property of TP matrices is their closure under matrix multiplication; specifically, the product of two TP matrices is also a TP matrix (see Theorem 3.1 in [21]). This crucial property has motivated the factorization of TP matrices into products of simpler TP factors, a subject of extensive study within numerical linear algebra and computational mathematics.
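As a quick numerical illustration of this closure property, the brute-force TP definition can be checked on a small example. The sketch below is illustrative only; `all_minors_nonneg` is a helper introduced here, not part of the paper or of any library:

```python
import numpy as np
from itertools import combinations

def all_minors_nonneg(A, tol=1e-12):
    """Check the TP definition directly: every minor of A is >= 0."""
    n = A.shape[0]
    for k in range(1, n + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                if np.linalg.det(A[np.ix_(rows, cols)]) < -tol:
                    return False
    return True

# Two small TP factors (a lower and an upper bidiagonal matrix) ...
L = np.array([[1.0, 0.0], [0.5, 1.0]])
U = np.array([[2.0, 1.0], [0.0, 1.0]])
# ... and their product, which the closure theorem guarantees is TP, too.
assert all_minors_nonneg(L) and all_minors_nonneg(U) and all_minors_nonneg(L @ U)
```

Checking all minors costs exponentially many determinants, which is exactly why the Neville elimination criterion recalled below is preferable beyond toy sizes.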
Bidiagonal matrices are prevalent in numerical linear algebra because of a number of interesting structural properties that render them powerful tools in a variety of problems, particularly when involved in matrix products (cf. [5]). These matrices exhibit computational advantages in various algorithms, including singular value decomposition (SVD) and eigenvalue computations.
Factorizations involving bidiagonal matrices provide elegant and often simpler proofs for the total positivity and other related properties of the resulting factorized matrices. This fundamental connection between bidiagonal factorizations and the total positivity not only offers theoretical insights but also has practical implications for developing efficient and accurate numerical methods for handling TP matrices and related matrix classes. The sparse structure of bidiagonal matrices contributes to the computational efficiency of these factorizations and subsequent matrix operations.
To achieve high-accuracy solutions in the numerical solution of algebraic problems involving TP matrices, a widely adopted approach in recent years has been to factorize a TP matrix, A, into bidiagonal factors as follows:
where, for , and are lower and upper triangular bidiagonal matrices, respectively, given by
The factor D is a diagonal matrix given by . Such a decomposition offers significant computational advantages, particularly in numerical algorithms where efficiency and stability are paramount concerns [23,24,25,26].
The elements , and can be computed using various methods. One particularly effective approach—adopted in this research—is Neville elimination, a process analogous to Gaussian elimination. This method systematically eliminates entries below the diagonal, column by column, by updating each row through the addition of a suitable multiple of the preceding one. Specifically, the procedure generates a sequence of matrices for , starting with and satisfying the property that all the entries at positions with and are zero. The final matrix, , is upper triangular.
At each step of the Neville elimination process, the pivots are obtained. The Neville multipliers in (3) are the values used to create zeros at the positions , given by
for . The diagonal pivots, , correspond to the diagonal entries of the resulting upper triangular matrix, . A similar procedure applied to provides the multipliers, , in (3).
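The elimination scheme described above can be sketched in a few lines. This is an illustrative reconstruction in Python rather than the paper's MATLAB setting, and it mirrors the textbook procedure, not the subtraction-free HRA algorithms of Section 4:

```python
import numpy as np

def neville_elimination(A):
    """Neville elimination: zero out each column below the diagonal,
    bottom-up, by subtracting a multiple of the *preceding* row
    (not the pivot row, as in Gaussian elimination).  Returns the
    multipliers and the final upper triangular matrix, whose diagonal
    entries are the diagonal pivots."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    mult = np.zeros((n, n))
    for j in range(n - 1):                # column currently being cleared
        for i in range(n - 1, j, -1):     # rows, from the bottom up
            if U[i, j] != 0.0:            # (no row swaps arise in the TP case)
                mult[i, j] = U[i, j] / U[i - 1, j]
                U[i, :] -= mult[i, j] * U[i - 1, :]
    return mult, U

# Min matrix for the tuple (1, 2, 3, 4): all first-column multipliers
# equal 1 and the diagonal pivots are (1, 1, 1, 1), so det = 1.
M = np.minimum.outer(np.arange(1, 5), np.arange(1, 5)).astype(float)
mult, U = neville_elimination(M)
```

Note that for this min matrix a single column pass already produces an upper triangular matrix, which is exactly the structural fact exploited in the proofs of Section 3.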
The complete process, which involves performing Neville elimination on both A and , is known as the complete Neville elimination. According to Theorem 2.2 in [27], the complete Neville elimination of a nonsingular matrix, A, can be performed without row and column swaps if and only if it can be factorized in the form shown in (2), and its multipliers, , verify that
Moreover, under such conditions, the factorization shown in (2) is unique.
For completeness, let us mention that an alternative approach to obtain the bidiagonal factorization of A consists of computing the pivots, , directly from the entries of A, as
, and
where denotes the submatrix of A formed by rows and columns . Then, the multipliers, , are obtained using (4), and, similarly, the entries, , are computed from the pivots, , that can be obtained as in (5) from .
For further details, the reader is referred to the foundational research of Gasca and Peña [27,28,29] and the recent survey [18].
A useful application of the Neville elimination of a matrix, A, is that it provides straightforward criteria to determine whether A is TP. Rather than directly verifying the definition—that is, checking that all the minors are nonnegative—one can employ an alternative approach. The following result is based on Corollary 5.5 in [28] and the arguments presented on p. 116 in [27].
Theorem 1. A given nonsingular matrix, A, is TP (resp., STP) if and only if the complete Neville elimination of A and can be performed without row swaps, the diagonal pivots of the Neville elimination of A are positive, and the multipliers of the Neville elimination of A and are nonnegative (resp., positive).
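A direct translation of this criterion can be sketched as follows. The helper `neville_data` is written here for illustration (it is not a routine from the paper), and a required row swap is simply reported as failure:

```python
import numpy as np

def neville_data(A):
    """Run Neville elimination; return (diagonal pivots, multipliers),
    or None if a zero entry above a nonzero one forces a row swap."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    mults = []
    for j in range(n - 1):
        for i in range(n - 1, j, -1):
            if U[i, j] != 0.0:
                if U[i - 1, j] == 0.0:
                    return None           # a row swap would be required
                m = U[i, j] / U[i - 1, j]
                U[i, :] -= m * U[i - 1, :]
                mults.append(m)
            else:
                mults.append(0.0)
    return np.diag(U).copy(), np.array(mults)

def is_tp(A, tol=1e-12):
    """Theorem 1: nonsingular A is TP iff the complete Neville elimination
    of A and A^T needs no row swaps, the diagonal pivots of A are positive,
    and all multipliers (of A and A^T) are nonnegative."""
    out_A, out_At = neville_data(A), neville_data(A.T)
    if out_A is None or out_At is None:
        return False
    pivots, m1 = out_A
    _, m2 = out_At
    return bool(np.all(pivots > tol) and np.all(m1 >= -tol) and np.all(m2 >= -tol))

# The min matrix of the increasing tuple (1,2,3,4) is TP; reordering the
# tuple to (2,1,3) breaks monotonicity, and a negative pivot appears.
idx3, idx4 = np.arange(3), np.arange(4)
M_tp = np.arange(1.0, 5.0)[np.minimum.outer(idx4, idx4)]
M_not = np.array([2.0, 1.0, 3.0])[np.minimum.outer(idx3, idx3)]
```

This costs O(n^3) work, against the exponential number of minors demanded by the raw definition.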
Additionally, an immediate consequence of computing the diagonal pivots, , is that because of the structure of the factorization in (2), the determinant of A is given by
Finally, following [30], all the information related to the bidiagonal factorization of a given matrix, A, can be encoded in another matrix, , whose entries coincide with the multipliers and pivots of the complete Neville elimination, namely,
This reformulation of A finds a direct application when A is TP, as it enables solving several algebraic problems without performing any intermediate subtraction—thus satisfying the no inaccurate cancellation (NIC) condition.
When an algorithm satisfies the NIC condition, it ensures that the computed solution, , has high relative accuracy (HRA), meaning that its relative error is bounded by
where K is a positive constant independent of the dimension and the condition of the problem, u is the unit roundoff, and x is the exact solution [25].
In particular, if the entries of are computed at the machine precision level, one can leverage existing algorithms that achieve HRA for problems such as computing the eigenvalues and singular values of A and its inverse or solving linear systems for alternating-sign component vectors, b. These computations can be efficiently performed using the TNTool package, developed by Koev [31], which provides MATLAB implementations of the corresponding subroutines—TNEigenValues, TNSingularValues, TNInverseExpand, and TNSolve.
3. Bidiagonal Factorizations of Generalized Min and Max Matrices
Given and an n-tuple, , the associated min matrix is defined as , where
This results in the matrix with the following explicit structure:
Note that is a symmetric matrix, and its elements are arranged in a distinct pattern. For the specific case where for , the classical min matrix, M, defined in (1) is obtained.
The inherent symmetry and simple structure of min matrices (8) make them particularly well suited for the application of Neville elimination, as demonstrated in the following result:
Theorem 2. The min matrix, , in (8) for the n-tuple, , admits a bidiagonal factorization (2) such that
and
Proof. Taking into account the pattern of the entries of , it is deduced that for and . So, from (4), the multipliers of its Neville elimination satisfy
Moreover, the matrix obtained after the first step of this process is an upper triangular matrix, . This implies that
Additionally, the diagonal entries of U are explicitly given by for , as stated in (10). Moreover, because is a symmetric matrix, it is concluded that the multipliers, , also satisfy (9). □
To illustrate the bidiagonal factorization (2), let us consider the case of a min matrix associated with the tuple . From Theorem 2, with
and
This factorization can be represented as in formula (7) by
Using Formula (6), the well-known determinant expression for min matrices is derived (see Theorem 6.2 in [10]). Furthermore, by leveraging Theorems 1 and 2, the complete characterization of the total positivity of min matrices in terms of the underlying sequence is obtained.
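The determinant expression in question is classically det = a_1 (a_2 - a_1) ... (a_n - a_{n-1}); this is stated here from the standard literature on min matrices, since the displayed formula is elided. A quick numerical cross-check:

```python
import numpy as np

def min_matrix_det(a):
    """Closed-form determinant of the min matrix with entries a_{min(i,j)}:
    det = a_1 * (a_2 - a_1) * ... * (a_n - a_{n-1})."""
    a = np.asarray(a, dtype=float)
    return a[0] * np.prod(np.diff(a))

a = np.array([1.0, 2.5, 4.0, 7.0])
M = a[np.minimum.outer(np.arange(4), np.arange(4))]  # entries a_{min(i,j)}
assert np.isclose(np.linalg.det(M), min_matrix_det(a))  # both give 6.75
```

The product of consecutive differences also makes the total positivity characterization below plausible at a glance: the determinant is positive precisely when the tuple is positive and strictly increasing.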
Theorem 3. Let be the min matrix in (8) for the n-tuple . Then,
Moreover, is nonsingular and TP if and only if the sequence is strictly increasing, with all the elements being positive. In this scenario, the expression (11) for and the bidiagonal decomposition (2), represented by
satisfy the NIC condition and can be computed to HRA.

Example 1. As an illustrative example, a quantum extension (or q-extension) of the classical min matrix (1) is considered from the increasing sequence , where, for , the q-integer is defined as
From Theorem 3, the corresponding min matrix, , is totally positive, and its determinant is given by
Additionally, the bidiagonal factorization of is

Let us proceed similarly with the max matrices. For a given n-tuple, , the max matrix is defined as , where the entries are given by
which results in the following explicit form:
Max matrices are symmetric and exhibit a reverse L-shaped pattern in their entries. For the particular case , , the classical max matrix, , in (1) is recovered.
In a clear analogy with min matrices, the following result derives the bidiagonal factorization of max matrices.
Theorem 4. The max matrix, , in (12) for the n-tuple admits a bidiagonal factorization (2) such that
and
Proof. Taking into account the reverse L-pattern of , it is deduced that for and . From (4), the multipliers in the first step of the Neville elimination are given by
After this step, the resulting matrix is an upper triangular matrix with diagonal entries corresponding to the diagonal pivots in (14). Additionally, for and , the off-diagonal entries satisfy . Because is symmetric, for , which completes the proof. □
From Theorem 4, for the specific case where , the bidiagonal factorization (2) of corresponding to the sequence reads as
with
and
This factorization can be represented by
The determinants of max matrices can be deduced in a straightforward manner using Formula (6). Furthermore, Theorems 1 and 4 allow us to characterize the total positivity of max matrices in terms of the tuple .
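Analogously to the min case, a max matrix can be built from the convention (i, j) entry a_{max(i,j)}, again an assumption standing in for the elided display (12). Carrying out the Neville elimination on the reverse-L pattern by hand gives the determinant a_n (a_1 - a_2) ... (a_{n-1} - a_n), which the sketch cross-checks:

```python
import numpy as np

def max_matrix(a):
    """Max matrix for the tuple a, with (i, j) entry a_{max(i, j)}."""
    a = np.asarray(a, dtype=float)
    idx = np.arange(len(a))
    return a[np.maximum.outer(idx, idx)]

def max_matrix_det(a):
    """det = a_n * (a_1 - a_2) * ... * (a_{n-1} - a_n); all factors are
    positive for a strictly decreasing positive tuple (the TP case)."""
    a = np.asarray(a, dtype=float)
    return a[-1] * np.prod(-np.diff(a))

a = [3.0, 2.0, 1.0]            # strictly decreasing and positive
A = max_matrix(a)              # rows (3,2,1), (2,2,1), (1,1,1)
assert np.isclose(np.linalg.det(A), max_matrix_det(a))  # both equal 1
```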
Theorem 5. Let be the max matrix in (12) for the n-tuple . Then,
Moreover, is nonsingular and TP if and only if is a strictly decreasing sequence of positive values. In this scenario, Formula (15) for and the bidiagonal decomposition (2), represented by
satisfy the NIC condition and can be computed to HRA.

Example 2. A notable example of a max matrix is the so-called L-Hilbert matrix, often referred to as the loyal companion of the Hilbert matrix, paraphrasing Choi [32]. It is defined as
Although it appears less frequently than the Hilbert matrix, the L-Hilbert matrix possesses several intriguing properties and has been explored in various mathematical contexts [33]. An interesting extension of the L-Hilbert matrix (16) is the quantum L-Hilbert matrix, , , defined in terms of q-integers as follows:
Note that for , the L-Hilbert matrix (16) is recovered. Because , is a decreasing sequence, , . Then, using Theorem 5, it immediately follows that
Moreover, the quantum L-Hilbert matrix, , in (17) is a TP matrix, and its bidiagonal factorization (2) can be encoded as follows:

The great accuracy of the results obtained by applying the derived bidiagonal factorizations to min and max matrices is illustrated in Section 4.
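A sketch of Example 2's construction, assuming the L-Hilbert entries 1/max(i, j) and their q-analogue 1/[max(i, j)]_q, i.e., the max matrix of the decreasing tuple a_k = 1/[k]_q. These entrywise formulas are assumptions standing in for the elided displays (16) and (17):

```python
import numpy as np

def q_int(k, q):
    """q-integer [k]_q = 1 + q + ... + q^(k-1)."""
    return sum(q**j for j in range(k))

def quantum_L_hilbert(n, q):
    """Max matrix of the decreasing tuple (1/[1]_q, ..., 1/[n]_q); its
    (i, j) entry is 1/[max(i, j)]_q, and q = 1 recovers 1/max(i, j)."""
    a = np.array([1.0 / q_int(k, q) for k in range(1, n + 1)])
    idx = np.arange(n)
    return a[np.maximum.outer(idx, idx)]

H = quantum_L_hilbert(4, 1.0)                    # classical L-Hilbert matrix
assert np.allclose(H[0], [1.0, 1/2, 1/3, 1/4])   # first row is 1/j
```

Since the tuple 1/[k]_q is strictly decreasing and positive, Theorem 5 applies directly, matching the total positivity claimed in Example 2.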
4. Numerical Experiments
To test the methods proposed in the previous sections, a series of experiments is provided, addressing several algebraic problems relevant in many applied contexts where interpolation and approximation problems must be solved accurately. These experiments involve the specific min and max matrices discussed in Examples 1 and 2.
For each analyzed matrix, the eigenvalues, the singular values, and the solutions of linear systems ( ), with b having an alternating-sign component pattern, are computed. For this purpose, the standard MATLAB R2024a functions (eig, svd, and the \ command, respectively) were used and, alternatively, the subroutines provided by the TNTool package, which take the bidiagonal representation (7) as input. We compare both strategies by measuring their relative errors, , where is the approximation, and x is the exact value, the latter computed using Wolfram Mathematica 13.3 with 100-digit arithmetic. To assess the ill-conditioning of the given matrices, we also calculated the condition number, .
Let us briefly recall that regarding the computational cost of the bidiagonal approach, both the eigenvalues and the singular values are computed in operations, whereas the solution of linear systems is achieved at a lower cost.
4.1. HRA Computations with Min Matrices
In this first set of experiments, the quantum extension of the classical min matrix described in Example 1 is analyzed. The elements of the bidiagonal form, , which serve as input to the TNTool routines, are computed to HRA through Algorithm 1.
Table 1 presents the relative errors in the computations of the lowest eigenvalue and singular value, as well as the solution of the system , for and various matrix sizes, n. Note that the value of q is chosen to be low enough to pose a stringent test to the numerical methods: in this way, the entries of the matrix are relatively close to each other, making the computations vulnerable to cancellations. The components of b are chosen as random numbers uniformly distributed in , with an alternating-sign pattern to ensure HRA. As observed, when the matrix dimension and condition number increase, standard MATLAB methods quickly lose accuracy, whereas the bidiagonal approach consistently maintains machine-order errors in all the cases.
Algorithm 1: Computation to HRA of the bidiagonal form, .
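The body of Algorithm 1 did not survive in this version, but a subtraction-free computation of the bidiagonal data for the quantum min matrix can be sketched from Theorem 2: the first-column multipliers equal 1, the remaining ones vanish, and the diagonal pivots are [i]_q - [i-1]_q = q^(i-1), evaluated as a plain power (no subtraction, hence NIC and HRA). This is a plausible reconstruction, not the paper's verbatim algorithm:

```python
from fractions import Fraction

def q_int(k, q):
    """q-integer [k]_q = 1 + q + ... + q^(k-1)."""
    return sum(q**j for j in range(k))

def quantum_min_pivots(n, q):
    """Diagonal pivots of the quantum min matrix, subtraction-free:
    p_1 = [1]_q = 1 and p_i = [i]_q - [i-1]_q = q^(i-1)."""
    return [q**(i - 1) for i in range(1, n + 1)]

# Exact rational arithmetic confirms that the power formula reproduces
# the pivot differences without ever performing the subtraction:
q, n = Fraction(1, 2), 6
pivots = quantum_min_pivots(n, q)
assert all(pivots[i - 1] == q_int(i, q) - q_int(i - 1, q) for i in range(1, n + 1))
```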
4.2. HRA Computations with Max Matrices
In this experiment, the quantum extension of the L-Hilbert matrix, which is a particular case of the max matrix described in Example 2, is studied. The elements of the bidiagonal form, , are calculated to HRA using Algorithm 2.
Table 2 reports the relative errors in the computations of the lowest eigenvalue and the lowest singular value and the solution of the system . In this experiment, , and the components of b are chosen to follow an alternating-sign pattern, where their absolute values are randomly generated from a uniform distribution in . The results clearly demonstrate that as the matrix dimension and condition number increase, standard methods rapidly lose accuracy, whereas the proposed approach consistently achieves high-precision results in all the analyzed dimensions.
Algorithm 2: Computation to HRA of the bidiagonal form, .
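As with Algorithm 1, the pseudocode is lost here; a subtraction-free sketch for the quantum L-Hilbert data can be derived by hand from the reverse-L structure, assuming entries 1/[max(i, j)]_q: the multipliers are [i-1]_q/[i]_q and the diagonal pivots are p_1 = 1 and p_i = q^(i-1)/[i]_q^2, all built from sums, products, and quotients of positive terms (hence NIC). This is a plausible reconstruction, not the paper's verbatim algorithm:

```python
from fractions import Fraction

def q_int(k, q):
    """q-integer [k]_q = 1 + q + ... + q^(k-1)."""
    return sum(q**j for j in range(k))

def quantum_L_hilbert_pivots(n, q):
    """Diagonal pivots p_i = q^(i-1) / [i]_q**2, computed without any
    subtraction, for the max matrix of the tuple a_k = 1/[k]_q."""
    return [q**(i - 1) / q_int(i, q) ** 2 for i in range(1, n + 1)]

# Exact check against the elimination formula p_i = a_i (a_{i-1} - a_i) / a_{i-1}
# obtained by one Neville pass over the reverse-L pattern:
q, n = Fraction(1, 2), 6
a = [1 / q_int(k, q) for k in range(1, n + 1)]
pivots = quantum_L_hilbert_pivots(n, q)
assert pivots[0] == a[0]
assert all(pivots[i] == a[i] * (a[i - 1] - a[i]) / a[i - 1] for i in range(1, n))
```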
5. Conclusions and Future Research
A generalization of min and max matrices—namely, symmetric matrices and reverse L-shaped matrices—has been investigated. A comprehensive characterization of their total positivity has been developed, along with the derivation of their bidiagonal factorizations.
These factorizations provide exceptional accuracy when applied to a broad range of algebraic problems involving these matrices, including the computations of eigenvalues and singular values, as well as the solution of linear systems.
Numerical experiments have been conducted on particularly ill-conditioned matrices, achieving machine-level relative errors using the proposed methods. These results stand in sharp contrast to those obtained with standard, non-specialized algorithms, which fail to deliver accurate solutions, even for relatively small matrix sizes.
Overall, the obtained results contribute to the expanding field of structured matrices and their applications in numerical linear algebra, paving the way for more accurate and efficient computational approaches in both theoretical and applied contexts. Future lines of research that are planned to be explored include the study of different families of matrices that are natural extensions of the ones analyzed in this research.
Efficient computations involving block matrices hold significant importance in modern computing and play a crucial role across a wide spectrum of applications, as highlighted in [1]. As a compelling avenue for future research, the authors intend to investigate block min and max matrices constructed from a sequence, , of square and nonsingular matrices, , , generalizing the Neville elimination procedure to provide block bidiagonal factorizations and exploring their potential applications.
As a possible limitation of the proposed methods, it should be noted that obtaining the bidiagonal factorization of the matrices may not be sufficient. In addition to the expressions for the multipliers and the diagonal pivots of the complete Neville elimination, an algorithm capable of computing them to HRA is required—something that is not always feasible, even when the matrices in question are known to be TP.