Green Matrices, Minors and Hadamard Products

Green matrices are interpreted as discrete versions of Green functions and arise when working with inhomogeneous linear systems of differential equations. This paper discusses accurate algebraic computations based on a recent procedure that achieves an important factorization of these matrices with high relative accuracy, as well as alternative accurate methods. An algorithm to compute any minor of a Green matrix with high relative accuracy is also presented, and the bidiagonal decomposition of the Hadamard product of Green matrices is obtained. Illustrative numerical examples are included.


Introduction
A real value x ≠ 0 is said to be computed to high relative accuracy (HRA) whenever the computed value x̃ satisfies

|x − x̃| / |x| ≤ C u,

where u is the unit round-off and C is a constant that does not depend on either the conditioning of the corresponding problem or the arithmetic precision. An algorithm can be performed to HRA when it only uses multiplications, divisions, sums of numbers of the same sign, and subtractions of initial data (cf. [1]). In other words, the only operation that breaks HRA is the sum of numbers of opposite signs. In fact, the subtraction of initial, exact data keeps HRA if the floating-point arithmetic is well implemented. The cornerstone in devising HRA algorithms is obtaining an adequate parameterization of the considered class of matrices. A matrix is said to be totally positive (TP) if all its minors are greater than or equal to zero, and it is said to be strictly totally positive (STP) if they are all greater than zero (see [2][3][4]).
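The role of these restrictions can be illustrated with a tiny Python experiment (ours, not part of the paper): subtracting nearly equal computed quantities loses all relative accuracy, while products and quotients of data keep the relative error at round-off level.

```python
def rel_err(exact, computed):
    """Relative error |x - x~| / |x| of a computed value."""
    return abs(exact - computed) / abs(exact)

# Subtracting nearly equal COMPUTED quantities destroys all correct digits:
eps = 1e-20
cancelled = (1.0 + eps) - 1.0   # 1.0 + eps rounds to 1.0, so the exact value eps is lost

# Products and quotients of initial data keep the relative error near round-off:
safe = (3.0 * eps) / 3.0
```

Here `cancelled` is 0.0, a 100% relative error, while `safe` recovers eps to full double precision.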
In the literature, some authors call TP and STP matrices totally nonnegative and totally positive matrices, respectively (see [5]). TP matrices emerge in many applications in Approximation Theory, Statistics, Combinatorics, Differential Equations or Economics (cf. [2][3][4][5][6]) and, in particular, in connection with optimal bases in Computer Aided Geometric Design (cf. [7,8]). HRA algorithms have been developed for some classes of matrices. Among them, we first mention a few subclasses of nonsingular TP matrices and matrices related to them: for instance, generalized Vandermonde matrices [9], Bernstein-Vandermonde matrices [10][11][12], Said-Ball-Vandermonde matrices [13], Cauchy-Vandermonde matrices [14], Schoenmakers-Coffey matrices [15], h-Bernstein-Vandermonde matrices [16], tridiagonal Toeplitz matrices [17], and many more subclasses of TP matrices. HRA algorithms for some algebraic problems related to other classes of matrices have also been developed. For example, in [18] an HRA algorithm for the computation of the singular values was devised for a class of negative matrices. In [19], the authors develop a method to solve linear systems associated with a wide class of rank-structured matrices with HRA. These matrices contain the well-known Vandermonde and Cauchy matrices.
If the bidiagonal decomposition, BD(A), of a nonsingular TP matrix A is known to HRA, then many algebraic computations can be carried out with HRA by using the functions in the software library [20]: for instance, computing its eigenvalues, its singular values, its inverse, and the solution of linear systems Ax = b where b has alternating signs. In [21], a method to obtain bidiagonal factorizations of Green matrices to HRA with O(n) elementary operations was introduced. Here we go deeper into this method and its consequences, show alternative accurate methods for Green matrices, and provide further properties of these matrices. Green matrices (see [2,3,22]) are usually interpreted as discrete versions of Green functions (see p. 237 of [22]) and are used when working with inhomogeneous systems of differential equations. Green functions emerge in the Sturm-Liouville boundary-value problem. They have important applications in many scientific fields, such as Mechanics (see [22]) or Econometrics (see [23]).
Schoenmakers-Coffey matrices form a subclass of Green matrices with important financial applications (see [24][25][26][27][28][29]). In [15], Schoenmakers-Coffey matrices were characterized by a parametrization containing n parameters, which was used there to perform HRA algebraic computations. In [21], a parametrization of 2n parameters leading to HRA computations for nonsingular TP Green matrices was presented. In Section 2, devoted to basic notations and auxiliary results, we recall this parametrization and, in the general case of nonsingular Green matrices, we also use it in Section 3 (in Theorem 4) to obtain alternative methods to compute the determinant and the inverse with HRA. In Section 4, we provide a method to compute all minors of a Green matrix with HRA. In Section 5, the bidiagonal factorization of the Hadamard product of Green matrices is obtained. We also prove that, in contrast to the case of nonsingular Green matrices, the Hadamard product of nonsingular TP Green matrices is again a nonsingular TP Green matrix. Section 6 includes a family of numerical experiments confirming the advantages of using the HRA methods. Finally, Section 7 summarizes the conclusions of the paper.

Basic Notations and Auxiliary Results
A desirable goal in the construction of numerical algorithms is high relative accuracy (cf. [30,31]). This goal has only been achieved for a few classes of matrices. We say that an algorithm for the solution of an algebraic problem is performed to high relative accuracy in floating-point arithmetic if the relative errors in the computations have the order of the unit round-off (or machine precision), without being affected by the dimension or the conventional conditioning of the problem (see [31]). It is well known that algorithms to high relative accuracy are those avoiding subtractive cancellation, that is, only requiring the following arithmetic operations: products, quotients, and additions of numbers of the same sign (see p. 52 in [1]). Moreover, if the floating-point arithmetic is well implemented, the subtraction of initial data can also be done without losing high relative accuracy (see p. 53 in [1]).
As mentioned in the Introduction, computations to high relative accuracy with totally positive matrices can be achieved by means of a proper representation of the matrices in terms of bidiagonal factorizations, which is in turn closely related to their Neville elimination (cf. [32,33]). Roughly speaking, Neville elimination is a procedure to make zeros in a column of a given matrix A ∈ R^{n×n} by adding to each row an appropriate multiple of the previous one.
Given A ∈ R^{n×n}, Neville elimination consists of n − 1 major steps producing a sequence of matrices

A =: A^{(1)} → Ã^{(1)} → A^{(2)} → Ã^{(2)} → ⋯ → A^{(n)} =: U,

where U is an upper triangular matrix. Every major step in turn consists of two steps.
In the first of these two steps, Neville elimination obtains Ã^{(k)} from the matrix A^{(k)} by moving to the bottom the rows with a zero entry in column k below the main diagonal, if necessary. Then, in the second step, the matrix A^{(k+1)} = (a_{ij}^{(k+1)}) is obtained from Ã^{(k)} = (ã_{ij}^{(k)}) according to the following formula:

a_{ij}^{(k+1)} = ã_{ij}^{(k)} − (ã_{ik}^{(k)} / ã_{i−1,k}^{(k)}) ã_{i−1,j}^{(k)}, if k + 1 ≤ i ≤ n and ã_{i−1,k}^{(k)} ≠ 0, and a_{ij}^{(k+1)} = ã_{ij}^{(k)} otherwise.

The process finishes when U := A^{(n)} is an upper triangular matrix. The entry p_{ij} := ã_{ij}^{(j)} (1 ≤ j ≤ i ≤ n) is the (i, j) pivot and p_{i,i} is called the i-th diagonal pivot of the Neville elimination of A.
The Neville elimination of A can be performed without row exchanges if all the pivots are nonzero. Then the value

m_{ij} := ã_{ij}^{(j)} / ã_{i−1,j}^{(j)} = p_{ij} / p_{i−1,j}

is called the (i, j) multiplier. The complete Neville elimination of A consists of performing the Neville elimination to obtain the upper triangular matrix U = A^{(n)} and, next, the Neville elimination of the lower triangular matrix U^T. Neville elimination is an important tool to analyze whether a given matrix is STP, as shown in the following characterization, derived from Theorem 4.1 and Corollary 5.5 of [32] and the arguments of p. 116 of [34].

Theorem 1.
Let A be a nonsingular matrix. A is STP (resp., TP) if and only if the Neville elimination of A and A^T can be performed without row exchanges, all the multipliers of the Neville elimination of A and A^T are positive (resp., nonnegative), and the diagonal pivots of the Neville elimination of A are all positive.
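The elimination just described is easy to sketch in code. The following Python (our illustration, not code from the paper) performs Neville elimination without row exchanges and recovers the multipliers and diagonal pivots used in the criterion of Theorem 1; for the symmetric Pascal matrix, a classical TP example, all of them equal 1.

```python
import numpy as np

def neville_elimination(A):
    """Neville elimination without row exchanges.

    Zeros out column k below the diagonal, bottom-up, each row using a
    multiple of the ROW ABOVE IT (not of a fixed pivot row, as in
    Gaussian elimination).  Returns the upper triangular U, the
    multipliers, and the diagonal pivots.  Assumes every needed pivot
    encountered in the process is nonzero.
    """
    U = np.array(A, dtype=float)
    n = U.shape[0]
    mult = np.zeros((n, n))
    for k in range(n - 1):                 # column being annihilated
        for i in range(n - 1, k, -1):      # bottom-up within the column
            m = U[i, k] / U[i - 1, k]
            mult[i, k] = m                 # the (i+1, k+1) multiplier in 1-based terms
            U[i, :] -= m * U[i - 1, :]
    return U, mult, np.diag(U).copy()

# Symmetric 3 x 3 Pascal matrix: all multipliers and pivots come out as 1.
P = np.array([[1., 1., 1.], [1., 2., 3.], [1., 3., 6.]])
U, mult, piv = neville_elimination(P)
```

Since P is symmetric, the same run also covers the elimination of P^T, so Theorem 1 certifies that P is STP.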
In [34], it is shown that a nonsingular totally positive matrix A ∈ R^{n×n} can be decomposed as

A = F_{n−1} F_{n−2} ⋯ F_1 D G_1 G_2 ⋯ G_{n−1},   (4)

where F_i ∈ R^{n×n} (respectively, G_i ∈ R^{n×n}) is the TP, lower (respectively, upper) triangular bidiagonal matrix with unit diagonal whose only nonzero off-diagonal entries are m_{i+1,1}, m_{i+2,2}, …, m_{n,n−i} (respectively, m̃_{i+1,1}, m̃_{i+2,2}, …, m̃_{n,n−i}), placed consecutively on the subdiagonal (respectively, superdiagonal) starting at position (i+1, i) (respectively, (i, i+1)),   (5)

and D is a diagonal matrix whose diagonal entries are p_{i,i} > 0, i = 1, …, n. The diagonal elements p_{i,i} are the diagonal pivots of the Neville elimination of A. Moreover, the elements m_{i,j} and m̃_{i,j} are the multipliers of the Neville elimination of A and A^T, respectively. The bidiagonal factorization (4) is usually compacted in matrix form BD(A): the (i, j) entry of BD(A) is m_{i,j} if i > j, m̃_{j,i} if i < j, and p_{i,i} if i = j. The transpose of a nonsingular totally positive matrix A is also totally positive and, using the factorization (4), it can be written as A^T = G_{n−1}^T ⋯ G_1^T D F_1^T ⋯ F_{n−1}^T. If, in addition, A is symmetric, then we have that G_i = F_i^T, i = 1, …, n − 1, and then

A = F_{n−1} ⋯ F_1 D F_1^T ⋯ F_{n−1}^T,

where F_i, i = 1, …, n − 1, are the lower triangular bidiagonal matrices described in (5), whose off-diagonal entries coincide with the multipliers of the Neville elimination of A, and D is the diagonal matrix with the diagonal pivots of the Neville elimination of A.
Under certain conditions (cf. [34]), the factorization (4) is unique. On the other hand, Neville elimination also allows us to obtain bidiagonal factorizations as in (4) for some matrices that are singular or that are not TP. This will happen in the following sections. When the matrix is singular, some diagonal entries of the matrix D of (4) can be zero.
Many of the subclasses of TP matrices mentioned in the Introduction are very ill-conditioned, so that classical error analysis (cf. [35]) predicts large roundoff errors when performing algebraic calculations with those matrices. However, the parametrization provided by the bidiagonal decomposition allows us to perform many algebraic calculations with high relative accuracy. Let us recall that, given a nonsingular totally positive matrix A whose bidiagonal factorization B = BD(A) is provided to high relative accuracy, the Matlab functions available in the software library TNTools in [20] compute to high relative accuracy A^{-1} (using the algorithm presented in [36]), the solution of Ax = b for vectors b with alternating signs, and the singular values of A, which coincide with the eigenvalues when the matrix is also symmetric, as happens with Green matrices. Following [4], we now recall the definition of a Green matrix. Given two sequences of nonzero real numbers (u_i)_{1≤i≤n}, (v_i)_{1≤i≤n}, a Green matrix A = (a_ij)_{1≤i,j≤n} is the symmetric matrix given by a_ij = u_i v_j if i ≤ j (or, equivalently, a_ij = u_{min{i,j}} v_{max{i,j}} for all i, j).
The initial parameters used in [21] to obtain HRA algorithms were

v_i,  r_i := u_i / v_i,  i = 1, …, n.   (8)

Then the elements of the Green matrix A = (a_ij)_{1≤i,j≤n} can be expressed as a_ij = r_i v_i v_j for i ≤ j. We also have that u_i = v_i r_i for all i = 1, …, n.
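For illustration, the definition and the parameterization (8) can be coded directly (our Python sketch; the function names are ours):

```python
import numpy as np

def green_matrix(u, v):
    """Symmetric Green matrix with a_ij = u_min(i,j) * v_max(i,j) (0-based lists)."""
    n = len(u)
    return np.array([[u[min(i, j)] * v[max(i, j)] for j in range(n)]
                     for i in range(n)])

def green_from_params(v, r):
    """Same matrix from the parameters (8): u_i = r_i v_i, so a_ij = r_i v_i v_j for i <= j."""
    u = [ri * vi for ri, vi in zip(r, v)]
    return green_matrix(u, v)

# Example with v = (1, 2, 3) and r = (1, 2, 3), i.e. u = (1, 4, 9):
A = green_from_params([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
```

The resulting matrix is symmetric by construction, since a_ij depends only on min(i, j) and max(i, j).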
In Theorem 4.2 of [4] or p. 214 of [2], a result characterizing TP Green matrices was presented. Let us now recall it. Using (8), it can be stated as follows.
Theorem 2. A Green matrix is TP if and only if the parameters (8) are formed by numbers of the same strict sign and

0 < r_1 ≤ r_2 ≤ ⋯ ≤ r_n.   (9)

The parameters (8) can be seen as natural parameters for a Green matrix, taking into account Theorem 2 and the characterization of nonsingular Green matrices that we shall show below in Section 3.
Recently, a bidiagonal factorization of a Green matrix A to HRA has been derived. Let us now recall this decomposition of A. Using it, when A is also nonsingular TP, we can carry out some algebraic computations with A to HRA: the computation of its eigenvalues (which coincide with its singular values by the symmetry of the matrix), of its inverse, and of the solution of some linear systems. Although this result is presented in [21], we include it with its proof for the sake of completeness.

Theorem 3. Let A be a Green matrix whose parameters (8) are known. Then the bidiagonal factorization BD(A) of A can be computed to HRA. If, in addition, A is nonsingular TP, then its eigenvalues, A^{-1} and the solution of linear systems Ax = b (where b has alternating signs) can also be computed with HRA.
Proof. The result in [21] was proved by systematically applying complete Neville elimination to A and keeping track of the corresponding steps in matrix form. So, the first major step of the complete Neville elimination (it makes zero the entries (n, 1) and (1, n) by performing row and column elementary operations) was expressed in matrix form as

E_n(−v_n/v_{n−1}) A E_n(−v_n/v_{n−1})^T = M,   (10)

where E_i(α) (2 ≤ i ≤ n) denotes the elementary matrix coinciding with the order n × n identity matrix except for the place (i, i − 1), which is α instead of 0. With respect to the matrix M, all the elements of its last row and its last column are zero except for the entry (n, n), which is given by (r_n − r_{n−1}) v_n^2. Performing the major steps 2, …, n − 1 of the complete Neville elimination to convert the entries (n − 1, 1), (1, n − 1), …, (2, 1), (1, 2) to zeros, the following factorization was obtained:

E_2(−v_2/v_1) ⋯ E_n(−v_n/v_{n−1}) A E_n(−v_n/v_{n−1})^T ⋯ E_2(−v_2/v_1)^T = D,   (11)

where D = diag(d_1, …, d_n) is the diagonal matrix given by

d_1 = r_1 v_1^2,  d_i = (r_i − r_{i−1}) v_i^2,  i = 2, …, n.   (12)

From (11), the following factorization of A was deduced:

A = E_n(v_n/v_{n−1}) ⋯ E_2(v_2/v_1) D E_2(v_2/v_1)^T ⋯ E_n(v_n/v_{n−1})^T,   (13)

that is, BD(A). Observe that all elements of the decomposition (13) can be obtained with HRA.
If A is a nonsingular and totally positive matrix, by using the algorithms devised in [36,37], the announced algebraic computations can be carried out to HRA from the bidiagonal factorization of A.
Observe in the proof of the previous result that the only nonzero entries of the compacted matrix form of the bidiagonal factorization (commented above) of a Green matrix lie in the first row, the first column or the diagonal.
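As a sketch (our Python, not the paper's Matlab implementation), the pivots and multipliers of this decomposition reduce, for a Green matrix, to d_1 = r_1 v_1^2, d_i = (r_i − r_{i−1}) v_i^2 and v_i/v_{i−1}; only products, quotients and subtractions of the initial data r are involved, which is what makes HRA possible. The factorization can then be verified by multiplying the elementary bidiagonal factors E_i back together:

```python
import numpy as np

def green_bd(v, r):
    """Pivots d_i and multipliers m_i of BD(A) for a Green matrix.

    Only products, quotients and subtractions r_i - r_{i-1} of initial
    data are used, so the computation can be carried out to HRA.
    """
    n = len(v)
    d = [r[0] * v[0] ** 2]
    m = [None]                        # m[0] unused; m[i] = v_i / v_{i-1}
    for i in range(1, n):
        d.append((r[i] - r[i - 1]) * v[i] ** 2)
        m.append(v[i] / v[i - 1])
    return d, m

def E(n, i, a):
    """Elementary bidiagonal factor E_i(a): identity plus a in entry (i, i-1)."""
    M = np.eye(n)
    M[i, i - 1] = a
    return M

v, r = [1.0, 2.0, 3.0], [1.0, 2.0, 3.0]
d, m = green_bd(v, r)
n = len(v)
L = np.eye(n)
for i in range(n - 1, 0, -1):         # L = E_n(m_n) ... E_2(m_2)
    L = L @ E(n, i, m[i])
A = L @ np.diag(d) @ L.T              # reconstructs the Green matrix
```

For these parameters u = (1, 4, 9), and the product L D L^T indeed returns the Green matrix with those entries.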
In Remark 2.2 of [21], we discussed the computational cost of the algorithm given in the proof of Theorem 3 and, taking into account the computational costs of the algorithms given in [36,37], the cost of its application to compute the eigenvalues, the inverse or the solution of a linear system of equations for a Green matrix.

Accurate Method for the Inverse and Determinant of a Green Matrix
The next result is valid for all nonsingular Green matrices, which are characterized in terms of the parameters (8). For nonsingular TP Green matrices, it also provides an alternative way to that of Theorem 3 for calculating their inverses with HRA.

Theorem 4. A Green matrix A = (a_ij)_{1≤i,j≤n} is nonsingular if and only if the parameters r_i of (8) satisfy r_i ≠ r_{i+1} for all i = 1, …, n − 1. In this case, if we know the parameters (8), then we can compute det A and A^{-1} with HRA and O(n) elementary operations.
Proof. With the notations of the proof of Theorem 3, observe that the matrix E_2(−v_2/v_1) ⋯ E_n(−v_n/v_{n−1}) A is upper triangular, with diagonal entries d_i (i = 1, …, n) given by (12), and hence

det A = d_1 d_2 ⋯ d_n.   (14)

Therefore, if we know the parameters (8), then we can compute det A by (12) and (14) with HRA, and the computational cost is O(n) elementary operations. Besides, Equation (14) also implies that det A ≠ 0 if and only if d_i ≠ 0 for all i = 1, …, n, which is equivalent by (12) to the fact that r_i ≠ r_{i+1} for all i = 1, …, n − 1.
Besides, if we know the parameters (8), then we can also compute the parameters u_i = v_i r_i with HRA for all i = 1, …, n. Finally, in the nonsingular case of A = (a_ij)_{1≤i,j≤n} with a_ij = u_i v_j if i ≤ j, by Theorem 1 of [15] its inverse C = A^{-1} = (c_ij)_{1≤i,j≤n} is the tridiagonal matrix given by

c_{i,i+1} = c_{i+1,i} = −1 / (u_{i+1} v_i − u_i v_{i+1}), i = 1, …, n − 1,
c_{1,1} = u_2 / (u_1 (u_2 v_1 − u_1 v_2)),
c_{n,n} = v_{n−1} / (v_n (u_n v_{n−1} − u_{n−1} v_n)),
c_{i,i} = (u_{i+1} v_{i−1} − u_{i−1} v_{i+1}) / ((u_i v_{i−1} − u_{i−1} v_i)(u_{i+1} v_i − u_i v_{i+1})), i = 2, …, n − 1,

and c_ij = 0 whenever |i − j| > 1. Observe that, for i = 2, …, n,

u_i v_{i−1} − u_{i−1} v_i = v_{i−1} v_i (r_i − r_{i−1}),   (15)

and, for i = 2, …, n − 1,

u_{i+1} v_{i−1} − u_{i−1} v_{i+1} = v_{i−1} v_{i+1} (r_{i+1} − r_{i−1}).   (16)

If we know the parameters (8), then we can compute (15) and (16) with HRA, and so A^{-1} with HRA and with a computational cost of O(n) elementary operations.
Observe that the proof of Theorem 4 also provides the determinant accurately when the Green matrix is singular because, in this case, one of the pivots d_i is zero and so det(A) = 0 by (14).
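Both computations can be sketched in Python (our code; the tridiagonal inverse entries are written in terms of u, v, with the end entries c_{1,1} and c_{n,n} reconstructed by us from the identity A C = I, so they should be checked against Theorem 1 of [15]):

```python
import numpy as np

def green_det(v, r):
    """det A as the product of the diagonal pivots d_i (Theorem 4), in O(n)."""
    det = r[0] * v[0] ** 2
    for i in range(1, len(v)):
        det *= (r[i] - r[i - 1]) * v[i] ** 2
    return det

def green_inverse(v, r):
    """Tridiagonal inverse of a nonsingular Green matrix.

    delta[i] = u_i v_{i-1} - u_{i-1} v_i = v_{i-1} v_i (r_i - r_{i-1});
    the only subtractions involve the initial data r (identity (15)).
    """
    n = len(v)
    u = [ri * vi for ri, vi in zip(r, v)]
    delta = [None] + [v[i - 1] * v[i] * (r[i] - r[i - 1]) for i in range(1, n)]
    C = np.zeros((n, n))
    for i in range(n - 1):
        C[i, i + 1] = C[i + 1, i] = -1.0 / delta[i + 1]
    C[0, 0] = u[1] / (u[0] * delta[1])
    C[n - 1, n - 1] = v[n - 2] / (v[n - 1] * delta[n - 1])
    for i in range(1, n - 1):
        C[i, i] = (u[i + 1] * v[i - 1] - u[i - 1] * v[i + 1]) / (delta[i] * delta[i + 1])
    return C

# Green matrix with v = r = (1, 2, 3), i.e. u = (1, 4, 9):
A = np.array([[1., 2., 3.], [2., 8., 12.], [3., 12., 27.]])
```

A quick check confirms det A = 1 · 4 · 9 = 36 and that green_inverse returns A^{-1}.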
Recall that a TP matrix A is oscillatory if a certain power A^k is STP. The following remark shows that a nonsingular TP Green matrix A is oscillatory.

Remark 1. Taking into account Theorems 3 and 4, when the inequalities of (9) are strict, we have a nonsingular TP matrix A = (a_ij)_{1≤i,j≤n}. In fact, since all the entries a_{i,i+1} = u_i v_{i+1} = a_{i+1,i} are nonzero (i = 1, …, n − 1), by Theorem 4.2 of [2] A is oscillatory. Remember (cf. Theorem 6.5 of [2]) that an oscillatory matrix has all its eigenvalues distinct and positive.

Accurate Method for the Minors of a Green Matrix
Let A = (a_ij)_{1≤i,j≤n} be a real square matrix and k a positive integer satisfying k ≤ n. Let Q_{k,n} denote the set of all strictly increasing sequences of k natural numbers less than or equal to n. Given α = (α_1, …, α_k), β = (β_1, …, β_k) ∈ Q_{k,n}, A[α|β] is by definition the k × k submatrix of A containing the rows numbered by α and the columns numbered by β.
As shown on p. 4 of [15], a Green matrix A = (a_ij)_{1≤i,j≤n} (n ≥ 3) cannot be strictly totally positive because it always has zero minors. In fact, for any i, j with i < j and any columns k, l with j ≤ k < l,

det A[i, j | k, l] = u_i v_k u_j v_l − u_i v_l u_j v_k = 0.

The following result shows that we can compute all minors of a Green matrix with high relative accuracy. Its proof includes an explicit formula for the computation of the nonzero minors that can be evaluated with high relative accuracy.

Theorem 5. If A = (a_ij)_{1≤i,j≤n} is a Green matrix with known parameters (8), then we can compute any minor of A with HRA.
Proof. As proved on pp. 110-111 of [3], any minor det A[i_1, …, i_p | j_1, …, j_p] of a Green matrix A satisfies

det A[i_1, …, i_p | j_1, …, j_p] = u_{k_1} v_{h_p} ∏_{m=1}^{p−1} (u_{k_{m+1}} v_{h_m} − u_{h_m} v_{k_{m+1}}),   (17)

where k_m = min(i_m, j_m) and h_m = max(i_m, j_m), provided i_m, j_m < i_{m+1}, j_{m+1} (m = 1, 2, …, p − 1). In all other cases, the minor is zero. Taking into account that u_{k_1} = r_{k_1} v_{k_1} and that, for each m = 1, …, p − 1,

u_{k_{m+1}} v_{h_m} − u_{h_m} v_{k_{m+1}} = v_{k_{m+1}} v_{h_m} (r_{k_{m+1}} − r_{h_m}),   (18)

we can deduce from (17) that all nonzero minors of A can be calculated as

det A[i_1, …, i_p | j_1, …, j_p] = r_{k_1} v_{k_1} v_{h_p} ∏_{m=1}^{p−1} v_{k_{m+1}} v_{h_m} (r_{k_{m+1}} − r_{h_m}).   (19)

We can observe that, with the initial parameters (8), the calculation of (19) can be performed with HRA.
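A Python sketch of this computation (ours; 1-based index sequences, as in the paper):

```python
def green_minor(v, r, rows, cols):
    """Minor det A[rows | cols] of a Green matrix from the parameters (8).

    rows and cols are strictly increasing 1-based index sequences of equal
    length.  The minor is nonzero only when max(rows[m], cols[m]) is smaller
    than min(rows[m+1], cols[m+1]) for every consecutive pair of indices;
    otherwise it is zero.
    """
    p = len(rows)
    k = [min(i, j) for i, j in zip(rows, cols)]   # k_m = min(i_m, j_m)
    h = [max(i, j) for i, j in zip(rows, cols)]   # h_m = max(i_m, j_m)
    if any(h[m] >= k[m + 1] for m in range(p - 1)):
        return 0.0
    det = r[k[0] - 1] * v[k[0] - 1] * v[h[p - 1] - 1]
    for m in range(p - 1):
        det *= v[k[m + 1] - 1] * v[h[m] - 1] * (r[k[m + 1] - 1] - r[h[m] - 1])
    return det
```

With v = r = (1, 2, 3), the full determinant comes out as 36 and the mixed minor with rows (1, 3), columns (2, 3) as 18, matching direct 3 × 3 computation.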

Hadamard Product of Green Matrices
Given two n × m matrices A = (a_ij) and B = (b_ij), the Hadamard (or entrywise) product of A and B is the n × m matrix C = A ∘ B defined by c_ij = a_ij b_ij. Some properties of the ordinary product of totally positive matrices do not hold for the Hadamard product. A first property is the fact that the product of two totally positive matrices is totally positive (see Theorem 3.1 of [2]). However, in general, the class of totally positive matrices is not closed under the Hadamard product (see p. 120 of [4]). For instance, consider the 3 × 3 TP matrix A with 0 in entry (1, 3) and 1 elsewhere.
Then A and A^T are TP, but A ∘ A^T has determinant −1 and so it is not TP. Let us see this counterexample in more detail:

A ∘ A^T = [1 1 0; 1 1 1; 1 1 1] ∘ [1 1 1; 1 1 1; 0 1 1] = [1 1 0; 1 1 1; 0 1 1], with det(A ∘ A^T) = −1.

However, the Hadamard product of totally positive Green matrices is a totally positive Green matrix (see p. 120 of [4]).
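The counterexample can be verified numerically (our illustrative Python):

```python
import numpy as np

# 3 x 3 TP matrix with a zero in entry (1, 3) and ones elsewhere.
A = np.array([[1., 1., 0.],
              [1., 1., 1.],
              [1., 1., 1.]])

H = A * A.T                  # Hadamard (entrywise) product A o A^T
det_H = np.linalg.det(H)     # equals -1, so H is not TP
```

Note that A itself is singular (its last two rows coincide), yet TP, while H fails total positivity through its negative determinant.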
The second announced property deals with the bidiagonal factorization. If we know the bidiagonal factorizations BD(A) and BD(B) of two nonsingular totally positive matrices A and B, then we can find in [37] a method to obtain the bidiagonal factorization BD(AB) of their ordinary product AB. This method has not been extended to the Hadamard product. However, we are going to see that, if we know the parameters (8) of two Green matrices, then we can obtain the bidiagonal factorization (4) of their Hadamard product.

Theorem 6. If A and B are n × n Green matrices with parameters (8) given by v_i^A, r_i^A and v_i^B, r_i^B, respectively, then the bidiagonal factorization of the Hadamard product A ∘ B is given by

A ∘ B = E_n(w_n/w_{n−1}) ⋯ E_2(w_2/w_1) D E_2(w_2/w_1)^T ⋯ E_n(w_n/w_{n−1})^T, where w_i := v_i^A v_i^B,

and D = diag(d_1, …, d_n) is the diagonal matrix given by

d_1 = r_1^A r_1^B w_1^2,  d_i = (r_i^A r_i^B − r_{i−1}^A r_{i−1}^B) w_i^2,  i = 2, …, n.   (20)

Proof. Neville elimination must be performed. First, (v_n^A v_n^B)/(v_{n−1}^A v_{n−1}^B) times the last but one row is subtracted from the last row in order to transform the entry (n, 1) of A ∘ B to zero. As in the case of the Neville elimination of a Green matrix, this elementary operation also converts the remaining off-diagonal elements of the last row to zero. After carrying out the previous operation, the (n, n) entry of A ∘ B is transformed into d_n. Then, when (v_n^A v_n^B)/(v_{n−1}^A v_{n−1}^B) times the last but one column is subtracted from the last column, a matrix where all the entries of the last column up to the place (n, n) are zero is obtained. The entry (n, n) does not change and d_n remains. So, this first major step can be expressed in matrix form as

E_n(−w_n/w_{n−1}) (A ∘ B) E_n(−w_n/w_{n−1})^T = M.

If this process is continued to transform the entries (n − 1, 1), (1, n − 1), …, (2, 1), (1, 2) to zeros, the following decomposition is obtained:

E_2(−w_2/w_1) ⋯ E_n(−w_n/w_{n−1}) (A ∘ B) E_n(−w_n/w_{n−1})^T ⋯ E_2(−w_2/w_1)^T = D,

where D is the diagonal matrix with diagonal entries in (20).
Taking into account the formula for the inverse of E_i(α), namely E_i(α)^{-1} = E_i(−α), the following bidiagonal factorization BD(A ∘ B) of A ∘ B is obtained:

A ∘ B = E_n((v_n^A v_n^B)/(v_{n−1}^A v_{n−1}^B)) ⋯ E_2((v_2^A v_2^B)/(v_1^A v_1^B)) D E_2((v_2^A v_2^B)/(v_1^A v_1^B))^T ⋯ E_n((v_n^A v_n^B)/(v_{n−1}^A v_{n−1}^B))^T.

Observe that Theorem 6 does not require any additional property of the Green matrices that are the factors of the Hadamard product: neither total positivity nor nonsingularity.

Remark 2. As recalled above, the Hadamard product of two totally positive Green matrices is a totally positive Green matrix. However, the Hadamard product of nonsingular Green matrices can be singular. Let us illustrate this with an example. Consider the two nonsingular Green matrices A and B given by the sequences r^A = {3, 5}, v^A = {1/3, 1/5} and r^B = {5, 3}, v^B = {1/5, 1/3}. The Green matrices A and B can be written explicitly as

A = [1/3 1/5; 1/5 1/5],  B = [1/5 1/3; 1/3 1/3].

We can observe that A and B are both nonsingular. In contrast, the Hadamard product A ∘ B = [1/15 1/15; 1/15 1/15] is singular.

In contrast to the previous remark, the next result shows that the Hadamard product of two nonsingular TP Green matrices is again a nonsingular TP Green matrix.

Theorem 7. If A and B are n × n nonsingular TP Green matrices, then the Hadamard product A ∘ B is also a nonsingular TP Green matrix.
Proof. It is known that the Hadamard product of totally positive Green matrices is a totally positive Green matrix (see p. 120 of [4]). Let v_i^A, r_i^A and v_i^B, r_i^B (i = 1, …, n) be the parameters (8) of A and B, respectively. Since A and B are totally positive, by Theorem 2 we have that 0 < r_1^A ≤ r_2^A ≤ ⋯ ≤ r_n^A and 0 < r_1^B ≤ r_2^B ≤ ⋯ ≤ r_n^B, and, since they are also nonsingular, we deduce from Theorem 4 that r_i^A < r_{i+1}^A and r_i^B < r_{i+1}^B for all i = 1, …, n − 1. Taking into account the bidiagonal factorization of a Green matrix given in Theorem 3 and the bidiagonal factorization of the Hadamard product of Green matrices given in Theorem 6, we deduce that the parameters of A ∘ B satisfy r_i^A r_i^B < r_{i+1}^A r_{i+1}^B for all i = 1, …, n − 1 and so, by Theorem 4, A ∘ B is nonsingular.
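These results can also be checked numerically: entrywise, (A ∘ B)_ij = (u_i^A u_i^B)(v_j^A v_j^B) for i ≤ j, so the Hadamard product of two Green matrices is again a Green matrix whose parameters (8) are the entrywise products of the parameters of A and B. The following Python sketch (ours) checks this identity and reproduces the singular example of Remark 2:

```python
import numpy as np

def green(v, r):
    """Green matrix from the parameters (8): a_ij = u_min v_max with u_i = r_i v_i."""
    u = [ri * vi for ri, vi in zip(r, v)]
    n = len(v)
    return np.array([[u[min(i, j)] * v[max(i, j)] for j in range(n)]
                     for i in range(n)])

vA, rA = [1.0, 2.0, 3.0], [1.0, 2.0, 4.0]
vB, rB = [2.0, 1.0, 1.0], [3.0, 5.0, 6.0]

# Entrywise product vs. the Green matrix built from the product parameters:
AB = green(vA, rA) * green(vB, rB)
G = green([a * b for a, b in zip(vA, vB)], [a * b for a, b in zip(rA, rB)])

# The example of Remark 2: r^A r^B = (15, 15) repeats, so A o B is
# singular even though A and B are both nonsingular.
S = green([1/3 * 1/5, 1/5 * 1/3], [3.0 * 5.0, 5.0 * 3.0])
```

The repeated product parameter 15 violates the nonsingularity criterion of Theorem 4 for the product matrix, exactly as in Remark 2.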

Numerical Tests
Given a square nonsingular TP matrix A whose bidiagonal decomposition BD(A) is known to HRA, Koev ([37,38]) developed HRA algorithms to perform the following algebraic tasks:
• computation of the eigenvalues of A,
• computation of the singular values of A, and
• computation of the solution of linear systems of equations Ax = b, where b has an alternating sign pattern.
These methods were implemented by Koev in the software library TNTool, available for download in [20] for the MATLAB and Octave environments. The corresponding functions are TNEigenvalues, TNSingularValues, and TNSolve, respectively. In addition, in [36] an algorithm to compute the inverse of A was developed. This algorithm was implemented by the authors in the function TNInverseExpand and contributed to the software library [20]. The previous four functions require as input argument BD(A) to HRA and, in the case of TNSolve, the vector b of the linear system Ax = b to be solved is also necessary as second argument.
In the proof of Theorem 3, the bidiagonal decomposition of a Green matrix was recalled. The corresponding method to obtain this bidiagonal decomposition was implemented in a Matlab/Octave function TNBDGreen to be used together with the software library [20].
In Algorithm 1 we can see the pseudocode of this function.

Algorithm 1 Computation of the bidiagonal decomposition of a Green matrix
Require: n, v = (v_1, …, v_n), r = (r_1, …, r_n)
d_1 ← r_1 v_1^2
for i = 2 to n do
  d_i ← (r_i − r_{i−1}) v_i^2
  m_i ← v_i / v_{i−1}
end for
return BD(A): diagonal entries d_1, …, d_n; entries m_2, …, m_n in the first column and, by symmetry, in the first row

Now we will illustrate some of the theoretical results presented along this paper with numerical examples. First, in our numerical tests, we have considered the Green matrix A_30 of order 30 whose bidiagonal decomposition is given by the parameters v = (v_i)_{i=1}^{30} and r = (r_i)_{i=1}^{30} defined by v_i = i and r_i = 1 + (1/2)^{40−i} for i = 1, …, 30. It is the Green matrix with parameters given by the sequence v and the sequence u = (u_i)_{i=1}^{30} defined by u_i = i (1 + (1/2)^{40−i}) for i = 1, …, 30. By Theorem 2, the matrix A_30 is TP and, by Theorem 4, it is also nonsingular. In fact, by Remark 1, A_30 is oscillatory. Then, by Theorem 3, BD(A_30) can be computed to HRA and, in addition, using this bidiagonal decomposition and the software library [20], for example, the eigenvalues of A_30 can also be computed to HRA.
The eigenvalues of A_30 have been calculated with Mathematica using 200-digit precision. In addition, these eigenvalues have also been computed with Matlab in two different ways:
1. by using the usual function eig, and
2. by using the function TNEigenvalues with the bidiagonal decomposition BD(A_30) computed to HRA with TNBDGreen.
Then, in order to compare the accuracy of the approximations to the eigenvalues obtained with Matlab in these two ways (eig and TNEigenvalues), we have obtained the relative errors, considering the eigenvalues obtained with Mathematica as exact. The eigenvalues of A_30 have been arranged in decreasing order (λ_1 > λ_2 > ⋯ > λ_30 > 0) to illustrate the computed relative errors. In Figure 1, the relative errors of the approximations to these eigenvalues by both methods are displayed. Although we have calculated the relative errors of both methods for a discrete number of eigenvalues, in order to improve the readability of the picture the piecewise linear interpolant of these errors has been plotted for each method (piecewise linear interpolants are also used in the following figures of the paper). The figure shows that using BD(A_30) to HRA together with the function TNEigenvalues of Koev's software library [20] provides much more accurate results than the usual Matlab function eig. In particular, Figure 1 shows that, for the lowest eigenvalues, the Matlab command eig does not provide satisfactory approximations, in contrast to the accurate approximations provided by the HRA bidiagonal decomposition of A together with the function TNEigenvalues of Plamen Koev's software library. Taking into account that the lowest eigenvalues present the most difficulties to be approximated, we now consider the Green matrices A_n, n = 6, 8, …, 40, of order n given by the parameters v^n = (v_i^n)_{i=1}^n and r^n = (r_i^n)_{i=1}^n defined by v_i^n = i and r_i^n = 1 + (1/2)^{n+10−i} for i = 1, …, n.
The matrix A_n corresponds to the Green matrix given by the sequence v^n and the sequence u^n = (u_i^n)_{i=1}^n defined by u_i^n = i (1 + (1/2)^{n+10−i}) for i = 1, …, n. We have computed the 2-norm condition numbers κ_2(A_n) = ‖A_n‖_2 ‖A_n^{-1}‖_2 of these matrices for n = 6, 8, …, 40. These condition numbers can be seen in Table 1; for a more intuitive representation, see Figure 2. Due to these huge condition numbers, we cannot expect accurate enough results when performing algebraic computations with the usual methods for those matrices. In this case, algorithms taking into account the structure of the matrices are desirable in order to obtain good numerical results. The Green matrices A_n, n = 6, 8, …, 40, are nonsingular TP matrices and their bidiagonal decompositions BD(A_n) can also be computed to HRA.
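The growth of these condition numbers is easy to reproduce (our Python sketch; np.linalg.cond computes the 2-norm condition number via the SVD):

```python
import numpy as np

def green(v, r):
    """Green matrix from the parameters (8): a_ij = u_min v_max with u_i = r_i v_i."""
    u = [ri * vi for ri, vi in zip(r, v)]
    n = len(v)
    return np.array([[u[min(i, j)] * v[max(i, j)] for j in range(n)]
                     for i in range(n)])

def A_family(n):
    """The test family of the experiments: v_i = i, r_i = 1 + (1/2)^(n+10-i)."""
    v = [float(i) for i in range(1, n + 1)]
    r = [1.0 + 2.0 ** (-(n + 10 - i)) for i in range(1, n + 1)]
    return green(v, r)

# 2-norm condition numbers for a few small orders of the family:
conds = {n: np.linalg.cond(A_family(n)) for n in (6, 8, 10)}
```

Already for these small orders the condition numbers are large, and they keep growing with n, which is why structure-exploiting HRA algorithms are needed for this family.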
Approximations to the lowest eigenvalue of these matrices have been obtained by the HRA method and by the Matlab command eig. These eigenvalues have also been computed with Mathematica using 200-digit precision. The relative errors of the previous approximations have then been calculated, considering the eigenvalues provided by Mathematica as exact. Table 2 shows the relative errors of the approximation to the lowest eigenvalue λ_n^n of each matrix A_n for n = 6, 8, …, 40. Figure 3 displays these same relative errors graphically.
The bidiagonal decomposition of a Green matrix A to HRA, together with the functions TNInverseExpand and TNSolve of the software library [20], allows us to compute its inverse and the solution of Ax = b, where b has alternating signs, to HRA. Now let us compare numerically the accuracy of our HRA methods with that of the usual methods in Matlab. In this respect, we have computed the inverse of the considered Green matrix of order 40, A_40^{-1}, with Mathematica using 200-digit precision. We have also computed the inverse A_40^{-1} with Matlab, both with the usual function inv and with the function TNInverseExpand using BD(A_40) to HRA obtained with the function TNBDGreen. Then we have computed the componentwise relative errors for both Matlab approximations. Finally, we have calculated the average of the componentwise relative errors and the greatest componentwise relative error for both approximations. These data can be seen in Table 3. Now let us consider a system of linear equations A_40 x = b, where b has an alternating pattern of signs with the absolute values of its entries randomly generated as integers in the interval [1, 1000]. First, we have calculated the exact solution x of the linear system with Mathematica. Then we have obtained approximations to the exact solution of the system with the usual Matlab command \ and with TNSolve of the software library [20], using the bidiagonal factorization BD(A_40) provided by TNBDGreen to HRA. Finally, we have computed the componentwise relative errors. These componentwise relative errors are shown in Table 4.

Conclusions
In [21], a method to obtain the bidiagonal factorization of a Green matrix to HRA was presented and used to derive accurate methods for algebraic computations with nonsingular TP Green matrices. In this paper, we illustrate with a family of matrices the advantages of using this method, and we show that the parametrization used in [21] can also be used to construct accurate alternative methods and to solve other related problems: for instance, HRA methods to compute the inverse or the determinant of any nonsingular Green matrix, and a method to compute any minor of a Green matrix with HRA. The bidiagonal factorization of the Hadamard product of two Green matrices is also obtained, and other related properties are analyzed. In particular, the class of nonsingular TP Green matrices is closed under the Hadamard product, in contrast to the class of nonsingular Green matrices.
