Article

Green Matrices, Minors and Hadamard Products

1 Departamento de Matemática Aplicada, Escuela de Ingeniería y Arquitectura, Universidad de Zaragoza/IUMA, 50018 Zaragoza, Spain
2 Departamento de Análisis Económico, Facultad de Economía y Empresa, Universidad de Zaragoza, 50005 Zaragoza, Spain
3 Departamento de Matemática Aplicada, Facultad de Ciencias, Universidad de Zaragoza/IUMA, 50009 Zaragoza, Spain
* Author to whom correspondence should be addressed.
Axioms 2023, 12(8), 774; https://doi.org/10.3390/axioms12080774
Submission received: 11 July 2023 / Revised: 2 August 2023 / Accepted: 8 August 2023 / Published: 10 August 2023
(This article belongs to the Special Issue Advances in Linear Algebra with Applications)

Abstract: Green matrices are interpreted as a discrete version of Green functions and are used when working with inhomogeneous linear systems of differential equations. This paper discusses accurate algebraic computations using a recent procedure to achieve an important factorization of these matrices with high relative accuracy, as well as alternative accurate methods. An algorithm to compute any minor of a Green matrix with high relative accuracy is also presented. The bidiagonal decomposition of the Hadamard product of Green matrices is obtained. Illustrative numerical examples are included.
MSC:
65D17; 65G50; 41A10; 41A15; 41A20

1. Introduction

A real value x ≠ 0 is said to be computed to high relative accuracy (HRA) whenever the computed value x̂ satisfies
$$\frac{|x-\hat{x}|}{|x|} < Cu,$$
where u is the unit round-off and C is a constant that depends on neither the conditioning of the corresponding problem nor the arithmetic precision. An algorithm can be performed to high relative accuracy (HRA) when it only uses multiplications, divisions, additions of numbers of the same sign, and subtractions of initial data (cf. [1]). In other words, the only operation that breaks HRA is the addition of numbers of opposite signs; subtraction of initial, exact data preserves HRA provided the floating-point arithmetic is well implemented. The cornerstone for devising HRA algorithms is obtaining an adequate parameterization of the considered class of matrices. A matrix is said to be totally positive (TP) if all its minors are nonnegative, and strictly totally positive (STP) if they are all positive (see [2,3,4]). In the literature, some authors call TP and STP matrices totally nonnegative and totally positive matrices, respectively (see [5]). TP matrices emerge in many applications in Approximation Theory, Statistics, Combinatorics, Differential Equations and Economics (cf. [2,3,4,5,6]) and, in particular, in connection with optimal bases in Computer Aided Geometric Design (cf. [7,8]). HRA algorithms have been developed for several classes of matrices. Among them, we first mention some subclasses of nonsingular TP matrices and matrices related to them: Generalized Vandermonde matrices [9], Bernstein-Vandermonde matrices [10,11,12], Said-Ball-Vandermonde matrices [13], Cauchy-Vandermonde matrices [14], Schoenmakers-Coffey matrices [15], h-Bernstein-Vandermonde matrices [16], tridiagonal Toeplitz matrices [17], and many other subclasses of TP matrices. HRA algorithms for algebraic problems related to other classes of matrices have also been developed.
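As a concrete illustration (ours, not from the paper), the following snippet shows how a single addition of numbers of opposite signs destroys all relative accuracy in double precision, while products, quotients and same-sign sums do not:

```python
# Illustration (ours): subtractive cancellation is the one operation that
# breaks high relative accuracy in floating-point arithmetic.
a = 1.0 + 1e-16        # rounds to exactly 1.0 in double precision
b = 1.0
diff = a - b           # computed difference: 0.0; exact difference: 1e-16
print(diff)            # 0.0, a relative error of 100%

# Products, quotients and same-sign additions keep the relative error at the
# level of the unit round-off u (about 1.1e-16 in double precision).
safe = (1.0 / 3.0) * 3.0 + 1.0
print(safe)
```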
For example, in [18] an HRA algorithm for the computation of the singular values was devised for a class of negative matrices. In [19], the authors develop a method to solve, with HRA, linear systems associated with a wide class of rank-structured matrices. This class contains the well-known Vandermonde and Cauchy matrices.
If the bidiagonal decomposition, BD(A), of a nonsingular TP matrix A is known to HRA, then many algebraic computations can be carried out with HRA by using the functions in the software library [20]: for instance, computing its eigenvalues, its singular values, its inverse, and the solution of linear systems Ax = b, where b has alternating signs. In [21], a method to obtain bidiagonal factorizations of Green matrices to HRA with O(n) elementary operations was introduced. In this paper we deepen the study of this method and its consequences, show alternative accurate methods for Green matrices and provide further properties of these matrices. Green matrices (see [2,3,22]) are usually interpreted as a discrete version of Green functions (see p. 237 of [22]) and are used when working with inhomogeneous systems of differential equations. Green functions emerge in the Sturm-Liouville boundary-value problem. They have important applications in many scientific fields, such as Mechanics (see [22]) or Econometrics (see [23]).
Schoenmakers-Coffey matrices form a subclass of Green matrices with important financial applications (see [24,25,26,27,28,29]). In [15], Schoenmakers-Coffey matrices were characterized by a parametrization containing n parameters, which was used there to perform HRA algebraic computations. In [21], a parametrization with 2n parameters leading to HRA computations for nonsingular TP Green matrices was presented. In Section 2, devoted to basic notations and auxiliary results, we recall this parametrization and, in the general case of nonsingular Green matrices, we also use it in Section 3 (in Theorem 4) to obtain alternative methods to compute the determinant and the inverse with HRA. In Section 4 we provide a method to compute all minors of a Green matrix with HRA. In Section 5, the bidiagonal factorization of the Hadamard product of Green matrices is obtained. In contrast to the case of general nonsingular Green matrices, we also prove that the Hadamard product of nonsingular TP Green matrices is again a nonsingular TP Green matrix. Section 6 includes a family of numerical experiments confirming the advantages of using the HRA methods. Finally, Section 7 summarizes the conclusions of the paper.

2. Basic Notations and Auxiliary Results

A desirable goal in the construction of numerical algorithms is high relative accuracy (cf. [30,31]). This goal has only been achieved for a few classes of matrices. We say that an algorithm for the solution of an algebraic problem is performed to high relative accuracy in floating-point arithmetic if the relative errors in the computations are of the order of the unit round-off (or machine precision), without being affected by the dimension or the conventional conditioning of the problem (see [31]). It is well known that algorithms to high relative accuracy are those avoiding subtractive cancellation, that is, only requiring the following arithmetic operations: products, quotients, and additions of numbers of the same sign (see p. 52 in [1]). Moreover, if the floating-point arithmetic is well implemented, the subtraction of initial data can also be performed without losing high relative accuracy (see p. 53 in [1]).
As mentioned in the Introduction, computations to high relative accuracy with totally positive matrices can be achieved by means of a proper representation of the matrices in terms of bidiagonal factorizations, which is in turn closely related to their Neville elimination (cf. [32,33]). Roughly speaking, Neville elimination is a procedure to create zeros in a column of a given matrix A ∈ R^{n×n} by adding to each row an appropriate multiple of the previous one.
Given A ∈ R^{n×n}, Neville elimination consists of n − 1 major steps:
$$A =: A^{(1)} \rightarrow \tilde{A}^{(1)} \rightarrow A^{(2)} \rightarrow \tilde{A}^{(2)} \rightarrow \cdots \rightarrow A^{(n)} =: U,$$
where U is an upper triangular matrix. Every major step in turn consists of two steps. In the first of these two steps, the Neville elimination obtains A ˜ ( k ) from the matrix A ( k ) by moving to the bottom the rows with a zero entry in column k below the main diagonal, if necessary. Then, in the second step the matrix A ( k + 1 ) is obtained from A ˜ ( k ) according to the following formula
$$a_{i,j}^{(k+1)} := \begin{cases} \tilde{a}_{i,j}^{(k)}, & \text{if } 1 \le i \le k,\\[4pt] \tilde{a}_{i,j}^{(k)} - \dfrac{\tilde{a}_{i,k}^{(k)}}{\tilde{a}_{i-1,k}^{(k)}}\,\tilde{a}_{i-1,j}^{(k)}, & \text{if } k+1 \le i, j \le n \text{ and } \tilde{a}_{i-1,k}^{(k)} \ne 0,\\[4pt] \tilde{a}_{i,j}^{(k)}, & \text{if } k+1 \le i \le n \text{ and } \tilde{a}_{i-1,k}^{(k)} = 0. \end{cases}$$
The process finishes when U := A^{(n)} is an upper triangular matrix. The entry
$$p_{i,j} := a_{i,j}^{(j)}, \quad 1 \le j \le i \le n,$$
is the (i, j) pivot and p_{i,i} is called the i-th diagonal pivot of the Neville elimination of A. The Neville elimination of A can be performed without row exchanges if all the pivots are nonzero. Then the value
$$m_{i,j} := a_{i,j}^{(j)}/a_{i-1,j}^{(j)} = p_{i,j}/p_{i-1,j}, \quad 1 \le j < i \le n,$$
is called the ( i , j ) multiplier. The complete Neville elimination of A consists of performing the Neville elimination to obtain the upper triangular matrix U = A ( n ) and next, the Neville elimination of the lower triangular matrix U T .
Neville elimination is an important tool to analyze if a given matrix is STP, as shown in this characterization derived from Theorem 4.1 and Corollary 5.5 of [32], and the arguments of p. 116 of [34].
Theorem 1. 
Let A be a nonsingular matrix. A is STP (resp., TP) if and only if the Neville elimination of A and A T can be performed without row exchanges, all the multipliers of the Neville elimination of A and A T are positive (resp., nonnegative), and the diagonal pivots of the Neville elimination of A are all positive.
In [34], it is shown that a nonsingular totally positive matrix A ∈ R^{n×n} can be decomposed as follows:
$$A = F_{n-1} F_{n-2} \cdots F_1 \, D \, G_1 G_2 \cdots G_{n-1}, \qquad (4)$$
where F_i ∈ R^{n×n} (respectively, G_i ∈ R^{n×n}) is the TP, lower (respectively, upper) triangular bidiagonal matrix given by
$$F_i = \begin{pmatrix} 1 & & & & & \\ & \ddots & & & & \\ & & 1 & & & \\ & & m_{i+1,1} & 1 & & \\ & & & \ddots & \ddots & \\ & & & & m_{n,n-i} & 1 \end{pmatrix}, \qquad G_i^T = \begin{pmatrix} 1 & & & & & \\ & \ddots & & & & \\ & & 1 & & & \\ & & \tilde{m}_{i+1,1} & 1 & & \\ & & & \ddots & \ddots & \\ & & & & \tilde{m}_{n,n-i} & 1 \end{pmatrix}, \qquad (5)$$
and D is a diagonal matrix whose diagonal entries are p_{i,i} > 0, i = 1,…,n. The diagonal elements p_{i,i} are the diagonal pivots of the Neville elimination of A. Moreover, the elements m_{i,j} and m̃_{i,j} are the multipliers of the Neville elimination of A and A^T, respectively. The bidiagonal factorization (4) is usually stored in the compact matrix form BD(A) as follows:
$$(\mathrm{BD}(A))_{ij} = \begin{cases} m_{i,j}, & \text{for } 1 \le j < i \le n,\\ \tilde{m}_{j,i}, & \text{for } 1 \le i < j \le n,\\ p_{i,i}, & \text{for } i = j \in \{1,\dots,n\}. \end{cases}$$
The transpose of a nonsingular totally positive matrix A is also totally positive and, using the factorization (4), it can be written as
$$A^T = G_{n-1}^T G_{n-2}^T \cdots G_1^T \, D \, F_1^T F_2^T \cdots F_{n-1}^T.$$
If, in addition, A is symmetric, then G_i = F_i^T, i = 1,…,n−1, and therefore
$$A = F_{n-1} F_{n-2} \cdots F_1 \, D \, F_1^T F_2^T \cdots F_{n-1}^T,$$
where F i , i = 1 , , n 1 , are the lower triangular bidiagonal matrices described in (5), whose off-diagonal entries coincide with the multipliers of the Neville elimination of A and D is the diagonal matrix with the diagonal pivots of the Neville elimination of A.
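To make the compact form concrete, here is a small NumPy sketch (the helper name bd_to_matrix is ours) that multiplies out the factorization A = F_{n−1}⋯F_1 D G_1⋯G_{n−1} from the compact matrix BD(A):

```python
import numpy as np

def bd_to_matrix(B):
    # Multiply out A = F_{n-1} ... F_1 D G_1 ... G_{n-1} from the compact
    # form BD(A): entries below the diagonal of B are the multipliers m_{i,j},
    # entries above it are the m~_{j,i}, and the diagonal holds the pivots p_{i,i}.
    n = B.shape[0]
    M = np.diag(np.diag(B)).astype(float)   # the diagonal factor D
    for i in range(1, n):                   # left factors F_1, ..., F_{n-1}
        F = np.eye(n)
        for k in range(i, n):               # F_i holds m_{k+1, k+1-i} at (k+1, k)
            F[k, k - 1] = B[k, k - i]
        M = F @ M
    for i in range(1, n):                   # right factors G_1, ..., G_{n-1}
        G = np.eye(n)
        for k in range(i, n):               # G_i holds m~_{k+1, k+1-i} at (k, k+1)
            G[k - 1, k] = B[k - i, k]
        M = M @ G
    return M
```

For instance, the 2 × 2 compact form B = [[2, 3], [4, 5]] (pivots 2 and 5, multipliers 4 and 3) reproduces the matrix [[2, 6], [8, 29]].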
Under certain conditions (cf. [34]), the factorization (4) is unique. On the other hand, Neville elimination can also provide bidiagonal factorizations as in (4) for some matrices that are singular or not TP; this will be the case in the following sections. When the matrix is singular, some diagonal entries of the matrix D in (4) can be zero.
Many of the subclasses of TP matrices mentioned in the Introduction are very ill-conditioned, so classical error analysis (cf. [35]) predicts great roundoff errors when performing algebraic calculations with those matrices. However, the parametrization provided by the bidiagonal decomposition allows us to carry out many algebraic computations with high relative accuracy. Let us recall that, given a nonsingular and totally positive matrix A, by providing its bidiagonal factorization B = BD(A) to high relative accuracy, the Matlab functions available in the software library TNTools in [20] compute to high relative accuracy A^{-1} (using the algorithm presented in [36]), the solution of Ax = b for vectors b with alternating signs, and the singular values of A, which coincide with the eigenvalues when the matrix is also symmetric, as happens with Green matrices. In particular:
  • TNInverseExpand(B) returns A^{-1}, requiring O(n^2) elementary operations.
  • TNSolve(B, d) returns the solution c of Ac = d. It requires O(n^2) elementary operations.
  • TNSingularValues(B) returns the singular values of A. Its computational cost is O(n^3).
Following [4], we now recall the definition of a Green matrix. Given two sequences of nonzero real numbers (u_i)_{1≤i≤n}, (v_i)_{1≤i≤n}, a Green matrix A = (a_{ij})_{1≤i,j≤n} is the symmetric matrix given by a_{ij} = u_i v_j if i ≤ j (or, equivalently, a_{ij} = u_{min{i,j}} v_{max{i,j}} for all i, j).
The initial parameters used in [21] to obtain HRA algorithms were
$$v_i, \qquad r_i := u_i / v_i, \qquad i = 1,\dots,n. \qquad (8)$$
Then the elements of the Green matrix A = (a_{ij})_{1≤i,j≤n} can be expressed as a_{ij} = r_i v_i v_j for i ≤ j. We also have u_i = v_i r_i for all i = 1,…,n.
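A minimal NumPy sketch (the function name is ours) of this parameterization:

```python
import numpy as np

def green_matrix(v, r):
    # Build the symmetric Green matrix a_ij = u_min(i,j) * v_max(i,j),
    # where u_i = v_i * r_i, from the parameters (8).
    v = np.asarray(v, dtype=float)
    u = v * np.asarray(r, dtype=float)
    n = len(v)
    I, J = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return u[np.minimum(I, J)] * v[np.maximum(I, J)]

A = green_matrix([1/3, 1/5], [3, 5])
print(A)    # approximately [[1/3, 1/5], [1/5, 1/5]]
```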
A result characterizing TP Green matrices was presented in Theorem 4.2 of [4] and on p. 214 of [2]. Let us now recall it. Using (8), it can be stated as follows.
Theorem 2. 
A Green matrix is TP if and only if the parameters (8) are formed by nonzero numbers of the same sign and
$$(0 <)\; r_1 \le r_2 \le \cdots \le r_n. \qquad (9)$$
The parameters (8) can be seen as natural parameters for a Green matrix taking into account Theorem 2 and the characterization of nonsingular Green matrices that we shall show below in Section 3.
Recently, a bidiagonal factorization of a Green matrix A to HRA has been derived. Let us now recall this decomposition of A. Using it, when A is also nonsingular TP, we can carry out some algebraic computations on A to HRA: the computation of its eigenvalues (which coincide with its singular values by the symmetry of the matrix), of its inverse and of the solution of some linear systems. Although this result is presented in [21], we include it with its proof for the sake of completeness.
Theorem 3. 
Let A be a Green matrix where the parameters (8) are known. Then, the bidiagonal factorization BD ( A ) of A can be computed to HRA. If, in addition, A is nonsingular TP, then its eigenvalues, A 1 and the solution of linear systems A x = b (where b has alternating signs) can also be computed with HRA.
Proof. 
The result in [21] was proved by systematically applying complete Neville elimination to A and keeping track of the corresponding steps in matrix form. The first major step of the complete Neville elimination (which makes the entries (n,1) and (1,n) zero by performing elementary row and column operations) was expressed in matrix form as
$$E_n\!\left(-\frac{v_n}{v_{n-1}}\right) A \, E_n\!\left(-\frac{v_n}{v_{n-1}}\right)^T = M,$$
where E_i(α), 2 ≤ i ≤ n, denotes the elementary matrix coinciding with the n × n identity matrix except for the entry (i, i−1), which is α instead of 0. With respect to the matrix M, all the elements of its last row and its last column are zero except for the entry (n, n), which is given by
$$d_n := v_n^2 \, (r_n - r_{n-1}).$$
Performing the major steps 2,…,n−1 of the complete Neville elimination to convert the entries (n−1,1), (1,n−1), …, (2,1), (1,2) to zeros, the following factorization was obtained:
$$E_2\!\left(-\frac{v_2}{v_1}\right) \cdots E_n\!\left(-\frac{v_n}{v_{n-1}}\right) A \, E_n\!\left(-\frac{v_n}{v_{n-1}}\right)^T \cdots E_2\!\left(-\frac{v_2}{v_1}\right)^T = D, \qquad (11)$$
where D = diag(d_1,…,d_n) is the diagonal matrix given by
$$d_1 = r_1 v_1^2, \qquad d_i = v_i^2 \, (r_i - r_{i-1}), \quad i > 1. \qquad (12)$$
From (11), the following factorization of A was deduced:
$$A = E_n\!\left(\frac{v_n}{v_{n-1}}\right) \cdots E_2\!\left(\frac{v_2}{v_1}\right) D \, E_2\!\left(\frac{v_2}{v_1}\right)^T \cdots E_n\!\left(\frac{v_n}{v_{n-1}}\right)^T, \qquad (13)$$
that is, BD ( A ) . Observe that all elements of the decomposition (13) can be obtained with HRA.
If A is a nonsingular and totally positive matrix, by using the algorithms devised in [36,37], the announced algebraic computations can be carried out to HRA from the bidiagonal factorization of A. □
Observe in the proof of the previous result that the only nonzero entries of the compacted matrix form of the bidiagonal factorization (commented above) of a Green matrix lie in the first row, the first column or the diagonal.
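The factorization (13) can be checked numerically. The following sketch (the sample parameters and the helper E are ours) builds the elementary matrices E_i(α) and verifies that the nested products reproduce the Green matrix:

```python
import numpy as np

def E(n, i, alpha):
    # Elementary matrix E_i(alpha): identity except for entry (i, i-1) = alpha
    # (1-based indices, as in the text).
    M = np.eye(n)
    M[i - 1, i - 2] = alpha
    return M

# Sample parameters (ours): v_i and strictly increasing r_i, so the Green
# matrix A is nonsingular and TP.
v = [1.0, 2.0, 3.0]
r = [1.0, 2.0, 3.0]
n = len(v)
u = [vi * ri for vi, ri in zip(v, r)]
A = np.array([[u[min(i, j)] * v[max(i, j)] for j in range(n)] for i in range(n)])

# Diagonal pivots d_i and the nested products of the factorization (13).
D = np.diag([r[0] * v[0] ** 2] + [v[i] ** 2 * (r[i] - r[i - 1]) for i in range(1, n)])
M = D
for i in range(2, n + 1):
    Ei = E(n, i, v[i - 1] / v[i - 2])
    M = Ei @ M @ Ei.T
print(np.allclose(M, A))      # True: the factorization reproduces A
```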
In Remark 2.2 of [21] we discussed the computational cost of the algorithm given in the proof of Theorem 3 and, taking into account the computational costs of the algorithms given in [36,37], the cost of applying it to compute the eigenvalues, the inverse or the solution of a linear system of equations for a Green matrix.

3. Accurate Method for the Inverse and Determinant of a Green Matrix

The next result is valid for all nonsingular Green matrices, which are characterized in terms of the parameters (8). For nonsingular TP Green matrices, it also provides an alternative way to that of Theorem 3 for calculating their inverses with HRA.
Theorem 4. 
A Green matrix A = (a_{ij})_{1≤i,j≤n} is nonsingular if and only if the parameters r_i of (8) satisfy r_i ≠ r_{i+1} for all i = 1,…,n−1. In this case, if we know the parameters (8), then we can compute det A and A^{-1} with HRA and O(n) elementary operations.
Proof. 
With the notations of the proof of Theorem 3, observe that the matrix
$$U := D \, E_2\!\left(\frac{v_2}{v_1}\right)^T \cdots E_n\!\left(\frac{v_n}{v_{n-1}}\right)^T$$
is upper triangular, its diagonal entries are the d_i (i = 1,…,n) given by (12) and
$$\det A = \det U = \prod_{i=1}^{n} d_i. \qquad (14)$$
Therefore, if we know the parameters (8), then we can compute det A by (12) and (14) with HRA and with a computational cost of O(n) elementary operations. Besides, Equation (14) also implies that det A ≠ 0 if and only if d_i ≠ 0 for all i = 1,…,n, which is equivalent by (12) to the fact that r_i ≠ r_{i+1} for all i = 1,…,n−1.
Besides, if we know the parameters (8), then we can also compute the parameters u_i = v_i r_i with HRA for all i = 1,…,n. Finally, in the nonsingular case of A = (a_{ij})_{1≤i,j≤n} with a_{ij} = u_i v_j if i ≤ j, by Theorem 1 of [15] its inverse C = A^{-1} = (c_{ij})_{1≤i,j≤n} is the tridiagonal matrix given by
$$c_{11} = \frac{u_2}{u_1 (u_2 v_1 - u_1 v_2)} = \frac{v_2 r_2}{v_1 r_1 (u_2 v_1 - u_1 v_2)}, \qquad c_{ii} = \frac{u_{i+1} v_{i-1} - u_{i-1} v_{i+1}}{(u_i v_{i-1} - u_{i-1} v_i)(u_{i+1} v_i - u_i v_{i+1})}, \quad i = 2,\dots,n-1,$$
$$c_{nn} = \frac{v_{n-1}}{v_n (u_n v_{n-1} - u_{n-1} v_n)}, \qquad c_{i+1,i} = c_{i,i+1} = \frac{1}{u_i v_{i+1} - u_{i+1} v_i}, \quad i = 1,\dots,n-1. \qquad (15)$$
Observe that, for i = 2,…,n,
$$u_i v_{i-1} - v_i u_{i-1} = r_i v_i v_{i-1} - v_i r_{i-1} v_{i-1} = v_{i-1} v_i \, (r_i - r_{i-1}),$$
and, for i = 2,…,n−1,
$$u_{i+1} v_{i-1} - v_{i+1} u_{i-1} = r_{i+1} v_{i+1} v_{i-1} - v_{i+1} r_{i-1} v_{i-1} = v_{i-1} v_{i+1} \, (r_{i+1} - r_{i-1}). \qquad (16)$$
If we know the parameters (8), then we can compute (15) via (16) with HRA and so A^{-1} with HRA, with a computational cost of O(n) elementary operations. □
Observe that the proof of Theorem 4 also accurately detects a zero determinant when the Green matrix is singular because, in this case, one of the pivots d_i is zero and so det(A) = 0 by (14).
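A Python sketch of Theorem 4 (the helper names are ours): the determinant via (12) and (14), and the tridiagonal inverse via the entries in (15), using only O(n) operations on the parameters:

```python
import numpy as np

def green_det(v, r):
    # det A = prod d_i with d_1 = r_1 v_1^2 and d_i = v_i^2 (r_i - r_{i-1}).
    d = [r[0] * v[0] ** 2] + [v[i] ** 2 * (r[i] - r[i - 1]) for i in range(1, len(v))]
    return float(np.prod(d))

def green_inverse(v, r):
    # Tridiagonal inverse of a nonsingular Green matrix (n >= 3 here),
    # entries as in (15) with u_i = v_i r_i.
    n = len(v)
    u = [vi * ri for vi, ri in zip(v, r)]
    C = np.zeros((n, n))
    C[0, 0] = u[1] / (u[0] * (u[1] * v[0] - u[0] * v[1]))
    C[n - 1, n - 1] = v[n - 2] / (v[n - 1] * (u[n - 1] * v[n - 2] - u[n - 2] * v[n - 1]))
    for i in range(1, n - 1):
        C[i, i] = (u[i + 1] * v[i - 1] - u[i - 1] * v[i + 1]) / (
            (u[i] * v[i - 1] - u[i - 1] * v[i]) * (u[i + 1] * v[i] - u[i] * v[i + 1]))
    for i in range(n - 1):
        C[i, i + 1] = C[i + 1, i] = 1.0 / (u[i] * v[i + 1] - u[i + 1] * v[i])
    return C
```

For v = (1, 2, 3) and r = (1, 2, 3), the Green matrix is [[1, 2, 3], [2, 8, 12], [3, 12, 27]]; green_det returns its determinant 36 and green_inverse agrees with numpy.linalg.inv.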
Recall that a TP matrix A is oscillatory if a certain power A^k is STP. The following remark shows that every nonsingular TP Green matrix A is oscillatory.
Remark 1. 
Taking into account Theorems 3 and 4, when the inequalities of (9) are strict, we have a nonsingular TP matrix A = (a_{ij})_{1≤i,j≤n}. In fact, since all entries a_{i,i+1} = u_i v_{i+1} = a_{i+1,i} are nonzero (i = 1,…,n−1), by Theorem 4.2 of [2] A is oscillatory. Remember (cf. Theorem 6.5 of [2]) that an oscillatory matrix has all its eigenvalues distinct and positive.

4. Accurate Method for the Minors of a Green Matrix

Let A = (a_{ij})_{1≤i,j≤n} be a real square matrix and k a positive integer with k ≤ n. We denote by Q_{k,n} the set of all strictly increasing sequences of k natural numbers less than or equal to n:
$$\alpha = (\alpha_i)_{i=1}^{k} \in Q_{k,n} \quad \text{if} \quad (1 \le)\, \alpha_1 < \alpha_2 < \cdots < \alpha_k \le n.$$
Given α, β ∈ Q_{k,n}, A[α|β] denotes the k × k submatrix of A formed by the rows numbered by α and the columns numbered by β.
As shown on p. 4 of [15], a Green matrix A = (a_{ij})_{1≤i,j≤n} (n ≥ 3) cannot be strictly totally positive because it always has zero minors. In fact, for any i, j with i < j (< n), det A[i, i+1 | j, j+1] = u_i v_j u_{i+1} v_{j+1} − u_i v_{j+1} u_{i+1} v_j = 0. The following result shows that we can compute all minors of a Green matrix with high relative accuracy. Its proof includes an explicit formula for the computation of the nonzero minors that can be evaluated with high relative accuracy.
Theorem 5. 
If A = ( a i j ) 1 i , j n is a Green matrix with known parameters (8), then we can compute any minor of A with HRA.
Proof. 
As proved on pp. 110–111 of [3], any minor det A[i_1,…,i_p | j_1,…,j_p] of a Green matrix A satisfies
$$\det A[i_1,\dots,i_p \,|\, j_1,\dots,j_p] = u_{k_1} \cdot \det\begin{pmatrix} u_{k_2} & u_{h_1}\\ v_{k_2} & v_{h_1} \end{pmatrix} \det\begin{pmatrix} u_{k_3} & u_{h_2}\\ v_{k_3} & v_{h_2} \end{pmatrix} \cdots \det\begin{pmatrix} u_{k_p} & u_{h_{p-1}}\\ v_{k_p} & v_{h_{p-1}} \end{pmatrix} \cdot v_{h_p}, \qquad (17)$$
where k_m = min(i_m, j_m) and h_m = max(i_m, j_m), provided that i_m, j_m < i_{m+1}, j_{m+1} (m = 1, 2,…,p−1). In all other cases, the minor is zero.
Taking into account that u_{k_1} = r_{k_1} v_{k_1} and that, for each m = 2,…,p,
$$\det\begin{pmatrix} u_{k_m} & u_{h_{m-1}}\\ v_{k_m} & v_{h_{m-1}} \end{pmatrix} = u_{k_m} v_{h_{m-1}} - v_{k_m} u_{h_{m-1}} = v_{h_{m-1}} v_{k_m} \, (r_{k_m} - r_{h_{m-1}}),$$
we can deduce from (17) that all nonzero minors of A can be calculated as
$$\det A[i_1,\dots,i_p \,|\, j_1,\dots,j_p] = r_{k_1} v_{k_1} v_{h_p} \prod_{m=2}^{p} \big( v_{h_{m-1}} v_{k_m} \, (r_{k_m} - r_{h_{m-1}}) \big). \qquad (19)$$
We can observe that, with the initial parameters (8), the calculation of (19) can be performed with HRA. □
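Formula (19), together with the zero pattern described above, can be sketched as follows (function name and 0-based indexing are ours):

```python
def green_minor(v, r, rows, cols):
    # Minor det A[rows | cols] of the Green matrix with parameters v, r
    # (0-based strictly increasing index lists); a sketch of formula (19).
    p = len(rows)
    k = [min(i, j) for i, j in zip(rows, cols)]
    h = [max(i, j) for i, j in zip(rows, cols)]
    for m in range(p - 1):
        # zero pattern: the minor vanishes unless i_m, j_m < i_{m+1}, j_{m+1}
        if not max(rows[m], cols[m]) < min(rows[m + 1], cols[m + 1]):
            return 0.0
    val = r[k[0]] * v[k[0]] * v[h[-1]]
    for m in range(1, p):
        val *= v[h[m - 1]] * v[k[m]] * (r[k[m]] - r[h[m - 1]])
    return val
```

For the Green matrix with v = (1, 2, 3), r = (1, 2, 3), i.e. [[1, 2, 3], [2, 8, 12], [3, 12, 27]], the minor on rows {1,2} and columns {1,2} is 4, the one on rows {1,2} and columns {2,3} is 0, and the one on rows {1,3} and columns {2,3} is 18, matching the determinants of the corresponding submatrices.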

5. Hadamard Product of Green Matrices

Given two n × m matrices A = (a_{ij}) and B = (b_{ij}), the Hadamard (or entrywise) product of A and B is the n × m matrix C = A ∘ B defined by c_{ij} = a_{ij} b_{ij}. Some properties of the ordinary product of totally positive matrices do not hold for the Hadamard product. A first such property is the fact that the product of two totally positive matrices is totally positive (see Theorem 3.1 of [2]). However, in general, the class of totally positive matrices is not closed under Hadamard products (see p. 120 of [4]). For instance, consider the 3 × 3 TP matrix A with 0 in entry (1,3) and 1 elsewhere. Then A and A^T are TP, but A ∘ A^T has determinant −1 and so it is not TP. Let us see this counterexample in more detail:
$$A \circ A^T = \begin{pmatrix} 1 & 1 & 0\\ 1 & 1 & 1\\ 1 & 1 & 1 \end{pmatrix} \circ \begin{pmatrix} 1 & 1 & 1\\ 1 & 1 & 1\\ 0 & 1 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 1 & 0\\ 1 & 1 & 1\\ 0 & 1 & 1 \end{pmatrix}$$
and then
$$\det(A \circ A^T) = \det\begin{pmatrix} 1 & 1 & 0\\ 1 & 1 & 1\\ 0 & 1 & 1 \end{pmatrix} = -1.$$
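This counterexample can be checked in a couple of lines of NumPy (the snippet is ours):

```python
import numpy as np

A = np.array([[1., 1., 0.],
              [1., 1., 1.],
              [1., 1., 1.]])
H = A * A.T                     # the Hadamard product is elementwise `*` in NumPy
print(np.linalg.det(H))         # approximately -1, so A o A^T is not TP
```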
However, the Hadamard product of totally positive Green matrices is a totally positive Green matrix (see p. 120 of [4]).
The second property concerns the bidiagonal factorization. If we know the bidiagonal factorizations BD(A) and BD(B) of two nonsingular totally positive matrices A and B, then [37] provides a method to obtain the bidiagonal factorization BD(AB) of their ordinary product AB. This method has not been extended to the Hadamard product. However, we are going to see that, if we know the parameters (8) of two Green matrices, then we can obtain the bidiagonal factorization (4) of their Hadamard product.
Theorem 6. 
If A and B are n × n Green matrices with parameters (8) given by v_i^A, r_i^A and v_i^B, r_i^B, respectively, then the bidiagonal factorization of the Hadamard product A ∘ B is given by
$$A \circ B = E_n\!\left(\frac{v_n^A v_n^B}{v_{n-1}^A v_{n-1}^B}\right) \cdots E_2\!\left(\frac{v_2^A v_2^B}{v_1^A v_1^B}\right) D \, E_2\!\left(\frac{v_2^A v_2^B}{v_1^A v_1^B}\right)^T \cdots E_n\!\left(\frac{v_n^A v_n^B}{v_{n-1}^A v_{n-1}^B}\right)^T,$$
where D = diag(d_1,…,d_n) is the diagonal matrix given by
$$d_1 = r_1^A (v_1^A)^2 \, r_1^B (v_1^B)^2, \qquad d_i = (v_i^A)^2 (v_i^B)^2 \left( r_i^A r_i^B - r_{i-1}^A r_{i-1}^B \right), \quad i > 1. \qquad (20)$$
Proof. 
Neville elimination must be performed. First, (v_n^A · v_n^B)/(v_{n−1}^A · v_{n−1}^B) times the last-but-one row is subtracted from the last row in order to transform the entry (n,1) of A ∘ B to zero. As in the Neville elimination of a Green matrix, this elementary operation also converts the remaining off-diagonal elements of the last row to zero. After carrying out the previous operation, the (n,n) entry of A ∘ B is transformed into
$$d_n := r_n^A (v_n^A)^2 \, r_n^B (v_n^B)^2 - \frac{v_n^A v_n^B}{v_{n-1}^A v_{n-1}^B} \, r_{n-1}^A v_{n-1}^A v_n^A \, r_{n-1}^B v_{n-1}^B v_n^B = (v_n^A)^2 (v_n^B)^2 \left( r_n^A r_n^B - r_{n-1}^A r_{n-1}^B \right).$$
By symmetry, if (v_n^A · v_n^B)/(v_{n−1}^A · v_{n−1}^B) times the last-but-one column is subtracted from the last column, we obtain a matrix where all the entries of the last column except (n,n) are zero. The (n,n) entry does not change and remains d_n. So, this first major step can be expressed in matrix form as
$$E_n\!\left(-\frac{v_n^A v_n^B}{v_{n-1}^A v_{n-1}^B}\right) (A \circ B) \, E_n\!\left(-\frac{v_n^A v_n^B}{v_{n-1}^A v_{n-1}^B}\right)^T = M.$$
If this process is continued to transform the entries (n−1,1), (1,n−1), …, (2,1), (1,2) to zeros, the following decomposition is obtained:
$$E_2\!\left(-\frac{v_2^A v_2^B}{v_1^A v_1^B}\right) \cdots E_n\!\left(-\frac{v_n^A v_n^B}{v_{n-1}^A v_{n-1}^B}\right) (A \circ B) \, E_n\!\left(-\frac{v_n^A v_n^B}{v_{n-1}^A v_{n-1}^B}\right)^T \cdots E_2\!\left(-\frac{v_2^A v_2^B}{v_1^A v_1^B}\right)^T = D,$$
where D is the diagonal matrix with diagonal entries in (20).
Taking into account the formula for the inverse of E_i(α), the following bidiagonal factorization BD(A ∘ B) of A ∘ B is obtained:
$$A \circ B = E_n\!\left(\frac{v_n^A v_n^B}{v_{n-1}^A v_{n-1}^B}\right) \cdots E_2\!\left(\frac{v_2^A v_2^B}{v_1^A v_1^B}\right) D \, E_2\!\left(\frac{v_2^A v_2^B}{v_1^A v_1^B}\right)^T \cdots E_n\!\left(\frac{v_n^A v_n^B}{v_{n-1}^A v_{n-1}^B}\right)^T.$$
 □
Observe that Theorem 6 does not require any additional property of the Green matrices that are factors of the Hadamard product: neither total positivity nor nonsingularity.
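Theorem 6 says, in effect, that A ∘ B is the Green matrix with parameters v_i = v_i^A v_i^B and r_i = r_i^A r_i^B, so its compact bidiagonal form follows the same pattern as in Theorem 3. A small sketch (the function name is ours):

```python
import numpy as np

def bd_hadamard_green(vA, rA, vB, rB):
    # Compact bidiagonal form of A o B following Theorem 6: the parameters
    # of the product Green matrix are v_i = v_i^A v_i^B and r_i = r_i^A r_i^B.
    v = [a * b for a, b in zip(vA, vB)]
    r = [a * b for a, b in zip(rA, rB)]
    n = len(v)
    B = np.zeros((n, n))
    B[0, 0] = r[0] * v[0] ** 2                     # d_1 in (20)
    for i in range(1, n):
        B[i, 0] = B[0, i] = v[i] / v[i - 1]        # multipliers
        B[i, i] = v[i] ** 2 * (r[i] - r[i - 1])    # d_i in (20)
    return B
```

Note that for vA = (1/3, 1/5), rA = (3, 5) and vB = (1/5, 1/3), rB = (5, 3), the product parameters are r = (15, 15), so the pivot d_2 vanishes and A ∘ B is singular.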
Remark 2. 
As recalled above, the Hadamard product of two totally positive Green matrices is a totally positive Green matrix. However, the Hadamard product of nonsingular Green matrices can be singular. Let us illustrate this with an example. Consider two nonsingular Green matrices A and B given by the sequences r^A = {3, 5}, v^A = {1/3, 1/5} and r^B = {5, 3}, v^B = {1/5, 1/3}. The Green matrices A and B can be written explicitly as
$$A = \begin{pmatrix} 1/3 & 1/5\\ 1/5 & 1/5 \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} 1/5 & 1/3\\ 1/3 & 1/3 \end{pmatrix}.$$
We can observe that A and B are both nonsingular. In contrast, the Hadamard product of A and B,
$$A \circ B = \begin{pmatrix} 1/15 & 1/15\\ 1/15 & 1/15 \end{pmatrix},$$
is clearly singular.
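A quick numerical confirmation of this remark (the snippet is ours):

```python
import numpy as np

A = np.array([[1/3, 1/5],
              [1/5, 1/5]])
B = np.array([[1/5, 1/3],
              [1/3, 1/3]])
print(np.linalg.det(A), np.linalg.det(B))   # both nonzero (2/75 and -2/45)
print(np.linalg.det(A * B))                 # 0: the Hadamard product is singular
```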
In contrast to the previous remark, the next result will show that the Hadamard product of two nonsingular TP Green matrices is again a nonsingular TP Green matrix.
Theorem 7. 
If A and B are n × n nonsingular TP Green matrices, then the Hadamard product A B is also a nonsingular TP Green matrix.
Proof. 
It is known that the Hadamard product of totally positive Green matrices is a totally positive Green matrix (see p. 120 of [4]). Let v_i^A, r_i^A and v_i^B, r_i^B (i = 1,…,n) be the parameters (8) of A and B, respectively. Since A and B are totally positive, by Theorem 2 we have that (0 <) r_1^A ≤ ⋯ ≤ r_n^A and (0 <) r_1^B ≤ ⋯ ≤ r_n^B, and since they are also nonsingular, we deduce from Theorem 4 that
$$(0 <)\; r_1^A < \cdots < r_n^A, \qquad (0 <)\; r_1^B < \cdots < r_n^B. \qquad (22)$$
Taking into account the bidiagonal factorization of a Green matrix given in Theorem 3 and the bidiagonal factorization of the Hadamard product of Green matrices given in Theorem 6, we deduce that the parameters r_i^{A∘B} of A ∘ B are given by r_i^{A∘B} = r_i^A r_i^B. Then, by (22), r_i^{A∘B} < r_{i+1}^{A∘B} for all i = 1,…,n−1 and so, by Theorem 4, we deduce that A ∘ B is nonsingular. □

6. Numerical Tests

Given a square nonsingular TP matrix A whose bidiagonal decomposition BD(A) is known to HRA, Koev ([37,38]) developed HRA algorithms to perform the following algebraic tasks:
  • computation of the eigenvalues of A,
  • computation of the singular values of A, and
  • computation of the solution of linear systems of equations A x = b , where b has an alternating sign pattern.
These methods were implemented by Koev in the software library TNTool available for download in [20] for the MATLAB and Octave environment. The corresponding functions are TNEigenvalues, TNSingularValues, and TNSolve, respectively. In addition, in [36] an algorithm to compute the inverse of A was developed. This algorithm was implemented by the authors in the function TNInverseExpand and contributed to the software library [20]. The previous four functions require as input argument BD ( A ) to HRA and, for the case of TNSolve, the vector b of the linear system A x = b to be solved is also necessary as second argument.
In the proof of Theorem 3, the bidiagonal decomposition of a Green matrix was recalled. The corresponding method to obtain this bidiagonal decomposition was implemented in a Matlab/Octave function TNBDGreen to be used together with the software library [20]. Algorithm 1 shows the pseudocode of this function.
Algorithm 1 Computation of the bidiagonal decomposition of a Green matrix A
Require: (v(i))_{i=1}^{n} and (r(i))_{i=1}^{n}
Ensure: B, the bidiagonal decomposition of A
    B(1,1) = r(1) · v(1)^2
    for i = 2 : n do
        B(i,1) = v(i)/v(i−1)
        B(1,i) = v(i)/v(i−1)
        B(i,i) = v(i)^2 · (r(i) − r(i−1))
    end for
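Algorithm 1 translates directly into Python/NumPy; the following sketch is ours (the actual TNBDGreen is a MATLAB/Octave function) and returns the compact matrix B = BD(A):

```python
import numpy as np

def tnbd_green(v, r):
    # Python sketch of Algorithm 1: compact bidiagonal decomposition of the
    # Green matrix with parameters v, r.  Only the first row, first column
    # and diagonal of B are nonzero.
    n = len(v)
    B = np.zeros((n, n))
    B[0, 0] = r[0] * v[0] ** 2
    for i in range(1, n):
        B[i, 0] = v[i] / v[i - 1]
        B[0, i] = v[i] / v[i - 1]
        B[i, i] = v[i] ** 2 * (r[i] - r[i - 1])
    return B
```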
Now we illustrate some of the theoretical results presented along this paper with numerical examples. First, in our numerical tests, we have considered the Green matrix A_30 of order 30 whose bidiagonal decomposition is given by the parameters v = (v_i)_{i=1}^{30} and r = (r_i)_{i=1}^{30} defined by
$$v_i = i \quad \text{and} \quad r_i = 1 + \left(\tfrac{1}{2}\right)^{40-i} \quad \text{for } i = 1,\dots,30.$$
It is the Green matrix with parameters given by the sequence v and the sequence u = (u_i)_{i=1}^{30} defined by u_i = i (1 + (1/2)^{40−i}) for i = 1,…,30. By Theorem 2, the matrix A_30 is TP and, by Theorem 4, it is also nonsingular. In fact, by Remark 1, A_30 is oscillatory. Then, by Theorem 3, BD(A_30) can be computed to HRA and, using this bidiagonal decomposition and the software library [20], the eigenvalues of A_30, for example, can also be computed to HRA.
The eigenvalues of A_30 have then been calculated with Mathematica using 200-digit precision. In addition, these eigenvalues have also been computed with Matlab in two different ways:
  • by using the usual function eig, and
  • by using the function TNEigenvalues with the bidiagonal decomposition BD(A_30) computed to HRA with TNBDGreen.
Then, in order to compare the accuracy of the approximations to the eigenvalues obtained with Matlab in these two ways (eig and TNEigenvalues), we have computed the relative errors, considering the eigenvalues obtained with Mathematica as exact. The eigenvalues of A_30 have been arranged in decreasing order (λ_1 > λ_2 > ⋯ > λ_30 > 0) to illustrate the computed relative errors. In Figure 1, the relative errors of the approximations to these eigenvalues by both methods are displayed. Although we have calculated the relative errors of both methods for a discrete set of eigenvalues, in order to improve the readability of the picture, the piecewise linear interpolant of these errors has been plotted for each method (piecewise linear interpolants are also used in the following figures of the paper). The figure shows that using BD(A_30) to HRA together with the function TNEigenvalues of Koev's software library ([20]) provides much more accurate results than those obtained with the usual Matlab function eig.
Figure 1 shows that, for the lower eigenvalues, eig Matlab command does not provide satisfactory approximations in contrast to the accurate approximations provided by the HRA bidiagonal decomposition of A joint with the function TNEigenvalues of Plamen Koev software library. Taking into account that the lowest eigenvalues present the most difficulties to be approximated, now we consider the Green matrices A n , n = 6 , 8 , , 40 , of order n given by the parameters v n = ( v i n ) i = 1 n and r n = ( r i n ) i = 1 n defined by
v_i^n = i and r_i^n = 1 + 1/2^(n+10-i) for i = 1, …, n.
The matrix A_n is the Green matrix given by the sequence v^n and the sequence u^n = (u_i^n)_{i=1}^n defined by u_i^n = i(1 + 1/2^(n+10-i)) for i = 1, …, n. We have computed the 2-norm condition numbers κ(A_n) = ||A_n||_2 · ||A_n^(-1)||_2 of these matrices for n = 6, 8, …, 40; they are listed in Table 1 and depicted in Figure 2. Given these huge condition numbers, we cannot expect sufficiently accurate results when performing algebraic computations with the usual methods for these matrices, so algorithms exploiting their structure are desirable in order to obtain good numerical results. The Green matrices A_n, n = 6, 8, …, 40, are nonsingular TP matrices, and their bidiagonal decompositions BD(A_n) can also be computed to HRA.
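The growth of κ(A_n) reported in Table 1 is easy to reproduce. The following Python sketch is our own (numpy.linalg.cond computes the 2-norm condition number via the SVD); it recovers the first few rows of the table to several digits:

```python
import numpy as np

def green_matrix(u, v):
    # A[i, j] = u[min(i, j)] * v[max(i, j)] (single-pair / Green matrix)
    i, j = np.indices((len(u), len(u)))
    return np.asarray(u, float)[np.minimum(i, j)] * np.asarray(v, float)[np.maximum(i, j)]

def A(n):
    # v_i = i, u_i = i * (1 + 1 / 2**(n + 10 - i)), i = 1, ..., n
    idx = np.arange(1, n + 1)
    v = idx.astype(float)
    u = v * (1.0 + 0.5 ** (n + 10 - idx))
    return green_matrix(u, v)

conds = {n: np.linalg.cond(A(n)) for n in range(6, 16, 2)}
# The condition number grows by roughly one order of magnitude with each step
# of n, e.g. kappa(A_6) ~ 3.8e6 and kappa(A_14) ~ 1.1e10 (cf. Table 1).
```

For n beyond about 32 the condition numbers exceed 1/eps in double precision, so the values in the last rows of Table 1 can only be trusted in order of magnitude when computed this way.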
We have computed approximations of the smallest eigenvalue of each of these matrices both by the HRA method and by the Matlab command eig, and we have also computed these eigenvalues with Mathematica using 200-digit precision. The relative errors of the approximations have been calculated taking the eigenvalues provided by Mathematica as exact. Table 2 shows the relative errors of the approximations to the smallest eigenvalue λ_n^n of each matrix A_n for n = 6, 8, …, 40, and Figure 3 displays these relative errors graphically.
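The loss of accuracy of a general-purpose eigensolver on the smallest eigenvalue can be reproduced without the TNTool library by comparing double-precision eigenvalues against a multiprecision reference. The following Python sketch is our own stand-in for the experiment: mpmath's symmetric eigensolver eigsy plays the role of Mathematica, and we take n = 20 to keep the multiprecision computation fast:

```python
import numpy as np
from mpmath import mp

n = 20
idx = np.arange(1, n + 1)
v = idx.astype(float)
u = v * (1.0 + 0.5 ** (n + 10 - idx))    # u_i = i * (1 + 1 / 2**(n + 10 - i))
i, j = np.indices((n, n))
A = u[np.minimum(i, j)] * v[np.maximum(i, j)]   # the Green matrix A_n

# Multiprecision reference for the smallest eigenvalue. The entries of A are
# exactly representable in double precision, so the conversion below is exact.
mp.dps = 50
E, _ = mp.eigsy(mp.matrix(A.tolist()))
lam_min_ref = min(E[k] for k in range(n))

# Double-precision smallest eigenvalue and its relative error.
lam_min_np = np.linalg.eigvalsh(A)[0]
rel_err = abs(lam_min_np - float(lam_min_ref)) / float(lam_min_ref)
# rel_err lies many orders of magnitude above machine precision (cf. Table 2).
```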
The bidiagonal decomposition of a Green matrix A to HRA, together with the functions TNInverseExpand and TNSolve of the software library [20], allows us to compute its inverse and the solution of A x = b, where b has alternating signs, to HRA. Let us now compare numerically the accuracy of our HRA methods with that of the usual Matlab methods. To this end, we have computed the inverse A_40^(-1) of the considered Green matrix of order 40 with Mathematica using 200-digit precision. We have also computed A_40^(-1) with Matlab, both with the usual function inv and with the function TNInverseExpand applied to BD(A_40) obtained to HRA with TNBDGreen. Then we have computed the componentwise relative errors of both Matlab approximations, together with their maximum and average; these data are shown in Table 3.
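A structural fact behind these inverse computations is classical for single-pair matrices (see, e.g., [22]): the inverse of a nonsingular Green matrix is tridiagonal, with positive diagonal and negative first off-diagonal entries. A small Python check, ours and with hypothetical parameters chosen so that the matrix is well conditioned:

```python
import numpy as np

# Hypothetical small Green matrix: u_i / v_i strictly increasing,
# so the matrix is nonsingular and TP.
u = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
v = np.array([5.0, 4.0, 3.0, 2.0, 1.0])
i, j = np.indices((5, 5))
A = u[np.minimum(i, j)] * v[np.maximum(i, j)]

Ainv = np.linalg.inv(A)

# Entries with |i - j| > 1 vanish up to rounding: the inverse is tridiagonal.
off = Ainv[np.abs(i - j) > 1]
# Diagonal entries are positive and first off-diagonal entries are negative,
# i.e. the nonzero entries follow the checkerboard sign pattern (-1)**(i + j).
diag = np.diag(Ainv)
sup = np.diag(Ainv, 1)
```

This tridiagonal structure is one reason structure-aware algorithms can compute the inverse so accurately: only O(n) quantities need to be determined.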
Now let us consider a system of linear equations A_40 x = b, where b has an alternating pattern of signs and the absolute values of its entries are randomly generated integers in the interval [1, 1000]. First, we calculated the exact solution x of the linear system with Mathematica. Then, we obtained approximations to this solution with the usual Matlab command \ and with TNSolve of the software library [20], using the bidiagonal factorization BD(A_40) provided to HRA by TNBDGreen. Finally, we computed the componentwise relative errors, which are shown in Table 4.
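The bookkeeping of this experiment, computing an exact reference solution and then componentwise relative errors, can be mimicked in Python with exact rational arithmetic playing the role of Mathematica. This is our own sketch on a small hypothetical Green system, not the A_40 of the experiment:

```python
from fractions import Fraction
import numpy as np

def solve_exact(A, b):
    """Gaussian elimination with partial pivoting in exact rational arithmetic."""
    n = len(b)
    M = [[Fraction(A[r][c]) for c in range(n)] + [Fraction(b[r])] for r in range(n)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [Fraction(0)] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

# Hypothetical 4-by-4 Green matrix and an alternating-sign right-hand side.
u = [1, 2, 3, 4]
v = [4, 3, 2, 1]
A = [[u[min(r, c)] * v[max(r, c)] for c in range(4)] for r in range(4)]
b = [7, -3, 5, -1]

x_exact = solve_exact(A, b)
x_float = np.linalg.solve(np.array(A, float), np.array(b, float))
rel_err = [abs(float(xe) - xf) / abs(float(xe)) for xe, xf in zip(x_exact, x_float)]
```

For this well-conditioned system both solvers agree to machine precision; the point of Table 4 is that for A_40, with κ(A_40) ≈ 1.6 × 10^19, the backslash solution loses essentially all accuracy while TNSolve does not. Note also that the exact solution inherits the alternating sign pattern of b, reflecting the checkerboard sign structure of the inverse of a nonsingular TP matrix, which is why no cancellation occurs in forming x componentwise.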

7. Conclusions

In [21], a method to obtain the bidiagonal factorization of a Green matrix to HRA was presented and used to derive accurate methods for algebraic computations with nonsingular TP matrices. In this paper, we illustrate the advantages of this method with a family of matrices, and we show that the parametrization used in [21] can also be used to construct accurate alternative methods and to solve other related problems: for instance, HRA methods to compute the inverse or the determinant of any nonsingular Green matrix, and a method to compute any minor of a Green matrix to HRA. The bidiagonal factorization of the Hadamard product of two Green matrices is also obtained, and other related properties are analyzed. In particular, the Hadamard product of nonsingular TP Green matrices is again a nonsingular TP Green matrix, in contrast to the Hadamard product of general nonsingular Green matrices, which need not be nonsingular.
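The closure claim for the Hadamard product can be checked directly from the entrywise definition: if A and B are Green matrices with parameters (u, v) and (u', v'), then (A ∘ B)_ij = (u_min u'_min)(v_max v'_max), so A ∘ B is the Green matrix with parameters (u u', v v'), and the product of two strictly increasing sequences of ratios u_i/v_i is again strictly increasing. A quick Python check, our own sketch with hypothetical parameters:

```python
import numpy as np

def green_matrix(u, v):
    i, j = np.indices((len(u), len(u)))
    return np.asarray(u, float)[np.minimum(i, j)] * np.asarray(v, float)[np.maximum(i, j)]

# Two hypothetical nonsingular TP Green matrices (u/v ratios strictly increasing).
u1, v1 = np.array([1.0, 2.0, 3.0, 4.0]), np.array([4.0, 3.0, 2.0, 1.0])
u2, v2 = np.array([1.0, 3.0, 5.0, 7.0]), np.array([2.0, 2.0, 2.0, 2.0])

A, B = green_matrix(u1, v1), green_matrix(u2, v2)
H = A * B                                   # Hadamard (entrywise) product
G = green_matrix(u1 * u2, v1 * v2)          # Green matrix with multiplied parameters

# H equals G, and the combined ratios are still strictly increasing,
# so H is again a nonsingular TP Green matrix.
ratios = (u1 * u2) / (v1 * v2)
```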

Author Contributions

The authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the Spanish research grants PGC2018-096321-B-I00 (MCIU/AEI) and RED2022-134176-T (MCI/AEI), and by Gobierno de Aragón (E41_23R).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
HRA — High relative accuracy
TP — Totally positive
STP — Strictly totally positive

References

  1. Demmel, J.; Gu, M.; Eisenstat, S.; Slapnicar, I.; Veselic, K.; Drmac, Z. Computing the singular value decomposition with high relative accuracy. Linear Algebra Appl. 1999, 299, 21–80.
  2. Ando, T. Totally positive matrices. Linear Algebra Appl. 1987, 90, 165–219.
  3. Karlin, S. Total Positivity; Stanford University Press: Stanford, CA, USA, 1968; Volume 1.
  4. Pinkus, A. Totally Positive Matrices; Cambridge Tracts in Mathematics 181; Cambridge University Press: Cambridge, UK, 2010.
  5. Fallat, S.M.; Johnson, C.R. Totally Nonnegative Matrices; Princeton Series in Applied Mathematics; Princeton University Press: Princeton, NJ, USA, 2011.
  6. Gasca, M.; Micchelli, C.A. Total Positivity and Its Applications; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1996.
  7. Delgado, J.; Orera, H.; Peña, J.M. Optimal properties of tensor product of B-bases. Appl. Math. Lett. 2021, 121, 107473.
  8. Delgado, J.; Peña, J.M. Extremal and optimal properties of B-bases collocation matrices. Numer. Math. 2020, 146, 105–118.
  9. Demmel, J.; Koev, P. The accurate and efficient solution of a totally positive generalized Vandermonde linear system. SIAM J. Matrix Anal. Appl. 2005, 27, 42–52.
  10. Marco, A.; Martínez, J.J. A fast and accurate algorithm for solving Bernstein-Vandermonde linear systems. Linear Algebra Appl. 2007, 422, 616–628.
  11. Marco, A.; Martínez, J.J. Polynomial least squares fitting in the Bernstein basis. Linear Algebra Appl. 2010, 433, 1254–1264.
  12. Marco, A.; Martínez, J.J. Accurate computations with totally positive Bernstein-Vandermonde matrices. Electron. J. Linear Algebra 2013, 26, 357–380.
  13. Marco, A.; Martínez, J.J. Accurate computations with Said-Ball-Vandermonde matrices. Linear Algebra Appl. 2010, 432, 2894–2908.
  14. Marco, A.; Martínez, J.J.; Peña, J.M. Accurate bidiagonal decomposition of totally positive Cauchy-Vandermonde matrices and applications. Linear Algebra Appl. 2017, 517, 63–84.
  15. Delgado, J.; Peña, G.; Peña, J.M. Accurate and fast computations with positive extended Schoenmakers-Coffey matrices. Numer. Linear Algebra Appl. 2016, 23, 1023–1031.
  16. Marco, A.; Martínez, J.J.; Viaña, R. Accurate bidiagonal decomposition of totally positive h-Bernstein-Vandermonde matrices and applications. Linear Algebra Appl. 2019, 579, 320–335.
  17. Delgado, J.; Orera, H.; Peña, J.M. Characterizations and accurate computations for tridiagonal Toeplitz matrices. Linear Multilinear Algebra 2022, 70, 4508–4527.
  18. Huang, R.; Xue, J. Accurate singular values of a class of parameterized negative matrices. Adv. Comput. Math. 2021, 47, 73.
  19. Huang, R. Accurate solutions of product linear systems associated with rank-structured matrices. J. Comput. Appl. Math. 2019, 347, 108–127.
  20. Koev, P. TNTool. Available online: https://math.mit.edu/~plamen/software/TNTool.html (accessed on 10 March 2023).
  21. Delgado, J.; Peña, G.; Peña, J.M. Accurate and fast computations with Green matrices. Appl. Math. Lett. 2023, 145, 108778.
  22. Gantmacher, F.P.; Krein, M.G. Oscillation Matrices and Kernels and Small Vibrations of Mechanical Systems, Revised ed.; AMS Chelsea: Providence, RI, USA, 2002.
  23. Ramsay, J.O.; Ramsey, J.B. Functional data analysis of the dynamics of the monthly index of nondurable goods production. J. Econom. 2002, 107, 327–344.
  24. Lord, R.; Pelsser, A. Level, Slope and Curvature: Art or Artefact? Appl. Math. Financ. 2007, 14, 105–130.
  25. Salinelli, E.; Serra-Capizzano, S.; Sesana, D. Eigenvalue-eigenvector structure of Schoenmakers-Coffey matrices via Toeplitz technology and applications. Linear Algebra Appl. 2016, 491, 138–160.
  26. Salinelli, E.; Sgarra, C. Correlation matrices of yields and total positivity. Linear Algebra Appl. 2006, 418, 682–692.
  27. Salinelli, E.; Sgarra, C. Shift, slope and curvature for a class of yields correlation matrices. Linear Algebra Appl. 2007, 426, 650–666.
  28. Salinelli, E.; Sgarra, C. Some results on correlation matrices for interest rates. Acta Appl. Math. 2011, 115, 291–318.
  29. Schoenmakers, J.; Coffey, B. Systematic Generation of Parametric Correlation Structures for the LIBOR Market Model. Int. J. Theor. Appl. Financ. 2003, 6, 507–519.
  30. Demmel, J. Accurate singular value decompositions of structured matrices. SIAM J. Matrix Anal. Appl. 1999, 21, 562–580.
  31. Demmel, J.; Dumitriu, I.; Holtz, O.; Koev, P. Accurate and efficient expression evaluation and linear algebra. Acta Numer. 2008, 17, 87–145.
  32. Gasca, M.; Peña, J.M. Total positivity and Neville elimination. Linear Algebra Appl. 1992, 165, 25–44.
  33. Gasca, M.; Peña, J.M. A matricial description of Neville elimination with applications to total positivity. Linear Algebra Appl. 1994, 202, 33–53.
  34. Gasca, M.; Peña, J.M. On Factorizations of Totally Positive Matrices. In Total Positivity and Its Applications; Gasca, M., Micchelli, C.A., Eds.; Kluwer Academic: Dordrecht, The Netherlands, 1996; pp. 109–130.
  35. Higham, N.J. Accuracy and Stability of Numerical Algorithms, 2nd ed.; SIAM: Philadelphia, PA, USA, 2002.
  36. Marco, A.; Martínez, J.J. Accurate computation of the Moore–Penrose inverse of strictly totally positive matrices. J. Comput. Appl. Math. 2019, 350, 299–308.
  37. Koev, P. Accurate computations with totally nonnegative matrices. SIAM J. Matrix Anal. Appl. 2007, 29, 731–751.
  38. Koev, P. Accurate eigenvalues and SVDs of totally nonnegative matrices. SIAM J. Matrix Anal. Appl. 2005, 27, 1–23.
Figure 1. Relative errors when computing the eigenvalues λ_i, i = 1, …, 30, with Matlab.
Figure 2. 2-norm condition number κ(A_n) for n = 6, 8, …, 40.
Figure 3. Relative errors when computing the eigenvalues λ_n^n for n = 6, 8, …, 40, with Matlab.
Table 1. 2-norm condition number κ(A_n) for n = 6, 8, …, 40.

 n     κ(A_n)
 6     3.81388 × 10^6
 8     3.41953 × 10^7
10     2.58123 × 10^8
12     1.74308 × 10^9
14     1.08872 × 10^10
16     6.41847 × 10^10
18     3.61933 × 10^11
20     1.97009 × 10^12
22     1.04201 × 10^13
24     5.38161 × 10^13
26     2.72417 × 10^14
28     1.35553 × 10^15
30     6.64583 × 10^15
32     3.21641 × 10^16
34     1.53904 × 10^17
36     7.29018 × 10^17
38     3.42223 × 10^18
40     1.59352 × 10^19
Table 2. Relative errors when computing the eigenvalues λ_n^n of A_n for n = 6, 8, …, 40.

 n     λ_n^n              TNEigenvalues      eig
 6     2.3869 × 10^-5     1.4194 × 10^-16    1.4422 × 10^-12
 8     5.9675 × 10^-6     2.8388 × 10^-16    1.5301 × 10^-13
10     1.4919 × 10^-6     2.8388 × 10^-16    4.6748 × 10^-12
12     3.7297 × 10^-7     1.4194 × 10^-16    5.1488 × 10^-11
14     9.3242 × 10^-8     4.2582 × 10^-16    9.7463 × 10^-10
16     2.3310 × 10^-8     8.5165 × 10^-16    3.6172 × 10^-9
18     5.8276 × 10^-9     2.8388 × 10^-16    6.3593 × 10^-9
20     1.4569 × 10^-9     7.0971 × 10^-16    8.9982 × 10^-8
22     3.6423 × 10^-10    1.1355 × 10^-15    1.9536 × 10^-7
24     9.1057 × 10^-11    2.8388 × 10^-16    2.1620 × 10^-6
26     2.2764 × 10^-11    4.2582 × 10^-16    3.5856 × 10^-6
28     5.6910 × 10^-12    0                  6.5372 × 10^-6
30     1.4228 × 10^-12    4.2582 × 10^-16    1.3889 × 10^-4
32     3.5569 × 10^-13    1.4194 × 10^-16    2.3374 × 10^-4
34     8.8922 × 10^-14    1.4194 × 10^-16    5.0519 × 10^-4
36     2.2231 × 10^-14    5.6777 × 10^-16    5.6332 × 10^-3
38     5.5577 × 10^-15    1.7033 × 10^-15    8.3078 × 10^-3
40     1.3894 × 10^-15    2.1291 × 10^-15    3.3643 × 10^-2
Table 3. Relative errors when computing A_40^(-1).

                     Maximum Rel. Error    Average Rel. Error
TNInverseExpand      2.1988 × 10^-16       4.8020 × 10^-17
inv                  1.0002 × 10^-2        4.9671 × 10^-4
Table 4. Componentwise relative errors when solving A_40 x = b.

Component i    TNSolve             \
 1             0.0000 × 10^0       2.3992 × 10^-7
 2             0.0000 × 10^0       1.0182 × 10^-4
 3             0.0000 × 10^0       8.5298 × 10^-4
 4             0.0000 × 10^0       4.1862 × 10^-3
 5             0.0000 × 10^0       7.8154 × 10^-3
 6             0.0000 × 10^0       2.2929 × 10^-2
 7             0.0000 × 10^0       9.3949 × 10^-2
 8             0.0000 × 10^0       6.6030 × 10^-2
 9             0.0000 × 10^0       2.1479 × 10^-1
10             2.1443 × 10^-16     2.2253 × 10^-1
11             1.3119 × 10^-16     2.3643 × 10^-1
12             0.0000 × 10^0       2.0137 × 10^-1
13             0.0000 × 10^0       2.5941 × 10^-1
14             0.0000 × 10^0       4.5052 × 10^-1
15             1.2024 × 10^-16     4.2045 × 10^-1
16             0.0000 × 10^0       4.1211 × 10^-1
17             0.0000 × 10^0       2.6014 × 10^0
18             0.0000 × 10^0       1.6916 × 10^0
19             0.0000 × 10^0       1.0482 × 10^0
20             1.1668 × 10^-16     5.1923 × 10^-1
21             0.0000 × 10^0       3.8324 × 10^0
22             0.0000 × 10^0       1.8851 × 10^0
23             0.0000 × 10^0       8.6188 × 10^-1
24             1.1740 × 10^-16     4.7406 × 10^-1
25             0.0000 × 10^0       1.4654 × 10^0
26             1.4173 × 10^-16     5.2659 × 10^-1
27             0.0000 × 10^0       5.2313 × 10^-1
28             0.0000 × 10^0       4.7117 × 10^-1
29             1.9056 × 10^-16     8.1661 × 10^-1
30             2.0596 × 10^-16     6.0549 × 10^-1
31             1.9884 × 10^-16     5.1771 × 10^-1
32             0.0000 × 10^0       8.8847 × 10^-1
33             1.4007 × 10^-16     2.4246 × 10^0
34             0.0000 × 10^0       2.5744 × 10^0
35             1.7406 × 10^-16     8.1932 × 10^-1
36             0.0000 × 10^0       2.3385 × 10^0
37             0.0000 × 10^0       2.4029 × 10^0
38             0.0000 × 10^0       2.1596 × 10^0
39             0.0000 × 10^0       3.4694 × 10^0
40             0.0000 × 10^0       1.4997 × 10^0