The Generalized Schur Algorithm and Some Applications

The generalized Schur algorithm is a powerful tool for computing classical matrix decompositions, such as the QR and LU factorizations. When applied to matrices with particular structures, the generalized Schur algorithm computes these factorizations with a complexity one order of magnitude lower than that of classical algorithms based on Householder or elementary transformations. In this manuscript, we describe the main features of the generalized Schur algorithm. We show that it helps to prove theoretical properties of the R factor of the QR factorization of some structured matrices, such as symmetric positive definite Toeplitz and Sylvester matrices, that can hardly be proven using classical linear algebra tools. Moreover, we propose a fast implementation of the generalized Schur algorithm for computing the rank of Sylvester matrices, which arise in a number of applications. Finally, we propose a generalized Schur based algorithm for computing the null-space of polynomial matrices.


Introduction
The generalized Schur algorithm (GSA) allows computing well-known matrix decompositions, such as QR and LU factorizations [1]. In particular, if the involved matrix is structured, i.e., Toeplitz, block-Toeplitz or Sylvester, the GSA computes the R factor of the QR factorization with a complexity one order of magnitude less than that of the classical QR algorithm [2], since it relies only on the knowledge of the so-called generators [2] associated with the given matrix, rather than on the knowledge of the matrix itself. The stability properties of the GSA are described in [3,4,5], where it is proven that the algorithm is weakly stable provided the involved hyperbolic rotations are performed in a stable way.
In this manuscript, we first show that, besides its efficiency properties, the GSA provides new theoretical insights on the bounds of the entries of the R factor of the QR factorization of some structured matrices. In particular, if the involved matrix is a symmetric positive definite (SPD) Toeplitz or a Sylvester matrix, we prove that all or some of the diagonal entries of R monotonically decrease in absolute value.
We then propose a faster implementation of the algorithm described in [6] for computing the rank of a Sylvester matrix S ∈ R^{(m+n)×(m+n)}, whose entries are the coefficients of two polynomials of degree m and n, respectively. This new algorithm is based on the GSA for computing the R factor of the QR factorization of S. The proposed modification of the GSA-based method has a computational cost of O(rl) floating point operations, where l = min{n, m} and r is the computed numerical rank.
It is well known that the upper triangular R factor of the QR factorization of a matrix A ∈ R^{n×n} is equal to the upper triangular Cholesky factor R_c ∈ R^{n×n} of A^T A, up to a diagonal sign matrix D, i.e., R = D R_c, D = diag(±1, . . . , ±1) ∈ R^{n×n}. In this manuscript, we assume, without loss of generality, that the diagonal entries of R and R_c are positive; since the two matrices are then equal, we denote both by R.
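This relation is easy to verify numerically. The following Python sketch (an illustration added here, not part of the original matlab codes) compares the R factor from the QR factorization of a random matrix with the Cholesky factor of A^T A:

```python
import numpy as np

# Sketch: the R factor of the QR factorization of A coincides, up to the
# signs of its rows, with the upper triangular Cholesky factor of A^T A.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))

R = np.linalg.qr(A, mode='r')              # R factor of the QR factorization
Rc = np.linalg.cholesky(A.T @ A).T         # upper triangular Cholesky factor

D = np.diag(np.sign(np.diag(R)))           # diagonal sign matrix
assert np.allclose(D @ R, Rc)              # R = D Rc
```

Since the Cholesky factor with positive diagonal is unique and (D R)^T (D R) = R^T R = A^T A, the assertion must hold for any nonsingular A.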
Finally, we propose a GSA-based approach for computing a null-space basis of a polynomial matrix, which is an important problem in several systems and control applications [7,8]. For instance, the computation of the null-space of a polynomial matrix arises when solving the column reduction problem of a polynomial matrix [9,10].
The manuscript is structured as follows. The main features of the GSA are provided in Section 2. In Section 3, a GSA implementation for computing the Cholesky factor R of an SPD Toeplitz matrix is described, which allows proving that the diagonal entries of R monotonically decrease. In Section 4, a GSA-based algorithm for computing the rank of a Sylvester matrix S is introduced, based on the computation of the Cholesky factor R of S^T S. Also in this case, it is proven that the first diagonal entries of R monotonically decrease. The GSA-based method to compute the null-space of polynomial matrices is proposed in Section 5. Numerical examples are reported in Section 6, followed by the conclusions in Section 7.

The Generalized Schur Algorithm
Many classical factorizations of a symmetric matrix, e.g., QR and LDL^T, can be obtained by the GSA. If the matrix is Toeplitz-like, the GSA computes these factorizations in a fast way. For the sake of completeness, the basic concepts of the GSA for computing the R factor of the QR factorization of structured matrices, such as Toeplitz and block-Toeplitz matrices, are introduced in this section. A comprehensive treatment of the topic can be found in [1,2].
Let A ∈ R^{n×n} be a symmetric positive definite (SPD) matrix. The semidefinite case is considered in Sections 4 and 5. The displacement of A with respect to a matrix Z of order n is defined as

∇_Z A = A − Z A Z^T,   (1)

while the displacement rank k of A with respect to Z is defined as the rank of ∇_Z A. The displacement ∇_Z A can be written as the sum of k rank-one matrices,

∇_Z A = Σ_{i=1}^{k_1} g_i^{(p)} g_i^{(p)T} − Σ_{i=1}^{k_2} g_i^{(n)} g_i^{(n)T},   k = k_1 + k_2,

and the vectors g_i^{(p)}, i = 1, . . . , k_1, and g_i^{(n)}, i = 1, . . . , k_2, are called the positive and the negative generators of A with respect to Z, respectively, or, if there is no ambiguity, simply the positive and negative generators of A.
The matrix G = [g_1^{(p)}, . . . , g_{k_1}^{(p)}, g_1^{(n)}, . . . , g_{k_2}^{(n)}]^T is called the generator matrix; setting J = diag(I_{k_1}, −I_{k_2}), the displacement can be written compactly as ∇_Z A = G^T J G. The matrix Z is a nilpotent matrix. In particular, for Toeplitz and block-Toeplitz matrices, Z can be chosen as the shift matrix (ones on the first subdiagonal and zeros elsewhere) and the block shift matrix, respectively. The implementation of the GSA relies only on the knowledge of the generators of A rather than on the knowledge of the matrix itself [1].
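To make the displacement structure concrete, the following Python sketch (ours, not part of the original matlab codes) verifies that a symmetric Toeplitz matrix has displacement rank 2 with respect to the shift matrix, using the two generators given later in Section 3:

```python
import numpy as np

# Displacement structure of a symmetric Toeplitz matrix: A - Z A Z^T has
# rank 2 and equals g1 g1^T - g2 g2^T for the two generators below.
t = np.array([5.0, 2.0, 1.0, 0.5, 0.25])       # first column of A
n = len(t)
A = np.array([[t[abs(i - j)] for j in range(n)] for i in range(n)])
Z = np.eye(n, k=-1)                             # shift matrix

D = A - Z @ A @ Z.T
assert np.linalg.matrix_rank(D) == 2            # displacement rank 2

g1 = t / np.sqrt(t[0])                          # positive generator
g2 = g1.copy()
g2[0] = 0.0                                     # negative generator
assert np.allclose(D, np.outer(g1, g1) - np.outer(g2, g2))
```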
Since ∇_Z A = G^T J G and Z^n = 0, adding the displacements of the matrices Z^j A Z^{jT}, j = 0, . . . , n − 1, telescopes to

A = Σ_{j=0}^{n−1} Z^j G^T J G Z^{jT},   (2)

which expresses the matrix A in terms of its generators. Exploiting Equation (2), we show how the GSA computes R by describing its first iteration. Observe that the matrix products involved in the right-hand side of Equation (2) have their first row equal to zero, with the exception of the first term, G^T J G.
A key role in the GSA is played by J-orthogonal matrices [11,12], i.e., matrices Φ satisfying Φ^T J Φ = J. Any such matrix Φ can be constructed in different ways [11,12,13,14]. For instance, it can be taken as a product of Givens and hyperbolic rotations. In particular, a Givens rotation acting on rows i and j of the generator matrix is chosen if J(i, i)J(j, j) > 0, i, j ∈ {1, . . . , k}, i ≠ j; otherwise, a hyperbolic rotation is used. Suitable choices of Φ allow efficient implementations of the GSA, as shown in Section 4.
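As an illustration (a sketch with our own helper name), a 2 × 2 hyperbolic rotation satisfying Φ^T J Φ = J with J = diag(1, −1) can be built as follows:

```python
import numpy as np

# A 2x2 hyperbolic rotation annihilating the second entry of [x1; x2];
# it requires |x1| > |x2| and satisfies Phi^T J Phi = J with J = diag(1, -1).
def hyperbolic_rotation(x1, x2):
    rho = x2 / x1
    c = 1.0 / np.sqrt(1.0 - rho**2)
    return np.array([[c, -rho * c], [-rho * c, c]])

J = np.diag([1.0, -1.0])
Phi = hyperbolic_rotation(3.0, 1.0)

assert np.allclose(Phi.T @ J @ Phi, J)                  # J-orthogonality
assert abs((Phi @ np.array([3.0, 1.0]))[1]) < 1e-12     # second entry annihilated
```

In finite precision, this naive formulation is exactly the step that must be implemented carefully to obtain the weak stability mentioned below.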
Let G_0 ≡ G and let Φ_1 be a J-orthogonal matrix such that

Φ_1 G_0 e_1 = β e_1,  β ≠ 0,   (4)

where e_i, i = 1, . . . , n, denotes the ith column of the identity matrix, i.e., Φ_1 reduces the first column of G_0 to a multiple of e_1. Furthermore, let G̃_1 ≡ Φ_1 G_0, and let g̃_1^T and Γ̃_1 be the first and the last k − 1 rows of G̃_1, respectively, i.e., G̃_1 = [g̃_1^T; Γ̃_1].
From Equation (4), it turns out that the first column of Γ̃_1 is zero. Let J̃ be the matrix obtained by deleting the first row and column from J; it is the signature matrix associated with Γ̃_1. Then, Equation (2) can be written as

A = g̃_1 g̃_1^T + A_1,

where G_1 ≡ [Z g̃_1, Γ̃_1^T]^T, that is, G_1 is obtained from G̃_1 by multiplying g̃_1 by Z, and A_1 ≡ Σ_{j=0}^{n−2} Z^j G_1^T J G_1 Z^{jT}. If A is a Toeplitz matrix, this multiplication by Z corresponds to displacing the entries of g̃_1 one position downward, while it corresponds to a block displacement downward in the first generator if A is a block-Toeplitz matrix.
Thus, the first column of G_1 is zero and, hence, g̃_1^T is the first row of the R factor of the QR factorization of A. The above procedure is applied recursively to A_1 to compute the remaining rows of R.
The jth iteration of the GSA, j = 1, . . . , n, involves the products Φ_j G_{j−1} and Z g̃_j. The former multiplication can be computed in O(k(n − j)) operations [11,12], while the latter is free if Z is either a shift or a block shift matrix. Therefore, if the displacement rank k of A is small compared to n, the GSA computes the R factor in O(kn^2) rather than in O(n^3) operations, as required by standard algorithms [15].
For the sake of completeness, the described GSA implementation is reported in the following matlab-style function. (The function givens is the matlab function having as input two scalars, x_1 and x_2, and as output an orthogonal 2 × 2 matrix Θ that annihilates the second entry of the vector [x_1, x_2]^T. The function Hrotate computes the coefficients of the 2 × 2 hyperbolic rotation Φ that, analogously, annihilates the second entry of the vector to which it is applied. The function Happly applies Φ to two rows of the generator matrix. Both Hrotate and Happly are defined in [12].)
The GSA has been proven to be weakly stable [3,4], provided the hyperbolic transformations involved in the construction of the matrices Φ j are performed in a stable way [3,11,12].

GSA for SPD Toeplitz Matrices
In this section, we describe the GSA for computing the R factor of the Cholesky factorization of an SPD Toeplitz matrix A, with R upper triangular, i.e., A = R^T R. Moreover, we show that the diagonal entries of R decrease monotonically.
Let A ∈ R^{n×n} be an SPD Toeplitz matrix, let Z ∈ R^{n×n} be the shift matrix, and let t = A(:, 1). Then ∇_Z A = A − Z A Z^T is a symmetric rank-2 matrix. Moreover, the generator matrix G is given by

G = [g_1, g_2]^T,   g_1 = t/√(t(1)),   g_2 = (t − t(1) e_1)/√(t(1)).

In this case, the GSA can be implemented in matlab-like style as follows.
The following lemma holds.

Lemma 1.
Let A be an SPD Toeplitz matrix and let R be its Cholesky factor, with R upper triangular. Then,

R(1, 1) ≥ R(2, 2) ≥ · · · ≥ R(n, n) > 0.

Proof. At each step i of GSA_chol, i = 1, . . . , n, a hyperbolic rotation is first applied to G_{i−1} in order to annihilate the entry G_{i−1}(2, i). Hence, the first row of the rotated G_{i−1} becomes row i of R. Finally, G_i(1, :) is obtained by displacing the entries of the first row of G_{i−1} one position to the right, while G_i(2, :) is equal to the second row of the rotated G_{i−1}. Consequently, the pivot available at step i + 1 is G_i(1, i + 1) = R(i, i), and the hyperbolic rotation yields

R(i + 1, i + 1) = √(R(i, i)^2 − G_i(2, i + 1)^2) ≤ R(i, i),

which proves the assertion.
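The GSA_chol iteration used in the proof can be sketched in Python as follows (a translation of the matlab-style scheme above, with our own function name); only the two generators are stored and updated, so the Cholesky factor of an n × n SPD Toeplitz matrix is obtained in O(n^2) operations:

```python
import numpy as np

def gsa_chol_toeplitz(t):
    """Upper triangular Cholesky factor R (A = R^T R) of the SPD Toeplitz
    matrix with first column t, computed from its two generators."""
    t = np.asarray(t, dtype=float)
    n = len(t)
    g1 = t / np.sqrt(t[0])                # positive generator
    g2 = g1.copy()
    g2[0] = 0.0                           # negative generator
    R = np.zeros((n, n))
    for i in range(n):
        if i > 0:
            # hyperbolic rotation annihilating g2[i] against the pivot g1[i]
            rho = g2[i] / g1[i]
            c = 1.0 / np.sqrt(1.0 - rho**2)
            g1, g2 = c * (g1 - rho * g2), c * (g2 - rho * g1)
        R[i, :] = g1
        g1[i + 1:] = g1[i:-1].copy()      # displace g1 one position rightward
        g1[i] = 0.0
    return R

t = np.array([4.0, 1.0, 0.5, 0.25])
A = np.array([[t[abs(i - j)] for j in range(4)] for i in range(4)])
R = gsa_chol_toeplitz(t)
assert np.allclose(R.T @ R, A)                      # A = R^T R
assert np.all(np.diff(np.diag(R)) <= 1e-12)         # Lemma 1: diagonal decreases
```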

Computing the Rank of Sylvester Matrices
In this section, we focus on the computation of the rank of Sylvester matrices. The numerical rank of a Sylvester matrix provides useful information for determining the degree of the greatest common divisor of the involved polynomials [6,16,17].
A GSA-based algorithm for computing the rank of S has been recently proposed in [6].It is based on the computation of the Cholesky factor R of S T S, with R upper triangular, i.e., R T R = S T S.
Here, we propose a more efficient variant of this algorithm that allows proving that the first entries of R monotonically decrease.
Let w_i ∈ R, i = 0, 1, . . . , n, and y_i ∈ R, i = 0, 1, . . . , m, be the coefficients of two univariate polynomials w(x) and y(x) of degree n and m, respectively. Let S ∈ R^{(m+n)×(m+n)} be the Sylvester matrix

S = [W, Y],   (6)

with W ∈ R^{(m+n)×m} and Y ∈ R^{(m+n)×n} band Toeplitz matrices whose columns contain shifted copies of the coefficients of w(x) and y(x), respectively. We now describe how the GSA-based algorithm proposed in [6] for computing the rank of S can be implemented in a faster way. This variant is based on the computation of the Cholesky factor R ∈ R^{(m+n)×(m+n)} of S^T S, with R upper triangular, i.e., R^T R = S^T S. The generator matrix G of S^T S with respect to Z is then given by [6]

G = [g_1, g_2, g_3, g_4]^T,   (8)

where the generators are expressed in terms of x_1 = S^T S e_1 and x_2 = S^T S e_{m+1}, with e_j the jth vector of the canonical basis of R^{m+n}; their explicit expressions are given in [6]. The algorithm proposed in [6] is based on the following GSA implementation for computing the R factor of the QR factorization of S.
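For concreteness, a small Python sketch (with a hypothetical helper `sylvester`, ours) builds S = [W, Y] from the coefficients of two polynomials and checks that the rank of S drops by the degree of their greatest common divisor:

```python
import numpy as np

def sylvester(w, y):
    """Sylvester matrix S = [W, Y] of polynomials with coefficient lists
    w (degree n) and y (degree m): m shifted copies of w, n shifted copies of y."""
    n, m = len(w) - 1, len(y) - 1
    S = np.zeros((m + n, m + n))
    for j in range(m):
        S[j:j + n + 1, j] = w
    for j in range(n):
        S[j:j + m + 1, m + j] = y
    return S

w = [1.0, -3.0, 2.0]      # w(x) = (x - 1)(x - 2)
y = [1.0, -4.0, 3.0]      # y(x) = (x - 1)(x - 3)
S = sylvester(w, y)
# gcd(w, y) = x - 1 has degree 1, so the 4x4 matrix S has rank 4 - 1 = 3
assert np.linalg.matrix_rank(S) == 3
```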
At the ith iteration of the algorithm, i = 1, . . . , n, the Givens rotations Θ_1 and Θ_2 are computed and applied, respectively, to the first and second generators and to the third and fourth generators, to annihilate G(2, i) and G(4, i). Then, the hyperbolic rotation

[c_1, −s_1; −s_1, c_1]

is applied to the first and the third row of G to annihilate G(3, i). Finally, the first row of G becomes the ith row of R, and the first row of G is multiplied by Z^T. Summarizing, at the first step of the ith iteration of the GSA, all entries of the ith column of G but the first one are annihilated. If the number of rows of G is greater than 2, this can be accomplished in different ways (see [5,14]).
Analyzing the pattern of the generators in Equation (8), we are able to derive a different implementation of the GSA that costs O(rl) operations, with l = min{n, m}. Moreover, this implementation allows proving that the first l diagonal entries of R are monotonically decreasing.
We observe that the matrix W^T W in Equation (6) is an SPD Toeplitz matrix. Moreover, the rows G(2, :) and G(4, :) have their first entry equal to zero and differ only in their entry in column m + 1. This particular pattern of G is close to the ones described in [13,14,18], allowing the design of an alternative GSA implementation with respect to that considered in [6], thereby reducing the complexity from O(r(n + m)) to O(rl), where r is the computed rank of S and l = min{n, m}.
Since the description of the above GSA implementation is quite cumbersome and similar to the algorithms reported in [13,14,18], we omit it here.The corresponding matlab pseudo-code can be obtained from the authors upon request.
If the matrix S has rank r < (n + m), a breakdown occurs at iteration r + 1 of the GSA [6]. Therefore, at each iteration of the algorithm, we check whether the condition in Equation (10) holds, where tol is a fixed tolerance on the current pivot. If Equation (10) is not satisfied at iteration k + 1, we stop the computation, taking k as the computed numerical rank of S.
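The rank test can be illustrated with a dense QR factorization standing in for the GSA (a sketch; the helper name and the choice of `tol` are our assumptions):

```python
import numpy as np

# Numerical rank of a Sylvester matrix read off the diagonal of its R factor:
# count the diagonal entries whose modulus exceeds a fixed tolerance tol.
def numerical_rank(S, tol=1e-10):
    R = np.linalg.qr(S, mode='r')
    return int(np.sum(np.abs(np.diag(R)) > tol))

# Sylvester matrix of w = (x - 1)(x - 2) and y = (x - 1)(x - 3): rank 3
S = np.array([[ 1.0,  0.0,  1.0,  0.0],
              [-3.0,  1.0, -4.0,  1.0],
              [ 2.0, -3.0,  3.0, -4.0],
              [ 0.0,  2.0,  0.0,  3.0]])
assert numerical_rank(S) == 3
```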
The R factor of the QR factorization of S is unique if the diagonal entries of R are positive. The considered GSA implementation, which yields the rank of S and is based on computing the R factor of the QR factorization of S, allows us to prove that the first l diagonal entries of R are in decreasing order, with l = min{m, n}. In fact, the following theorem holds.

Theorem 1. Let R^T R = S^T S be the Cholesky factorization of S^T S, with S the Sylvester matrix defined in Equation (6) and rank r ≥ l = min{m, n}. Then,

R(1, 1) ≥ R(2, 2) ≥ · · · ≥ R(l, l) > 0.   (11)

Proof. Each diagonal entry R(i, i) is determined by the ith entry of the first row of G at the end of iteration i, for i = 1, . . . , m + n. Let us define Ĝ ≡ G(:, 1 : l) and consider the following alternative implementation of the GSA for computing the first l columns of the Cholesky factor of S^T S.
We observe that, for i = 1, Ĝ(1, 1) is the only nonzero entry in the first column of Ĝ. Hence, R(1, 1 : l) = Ĝ(1, 1 : l), and the first iteration amounts only to shifting Ĝ(1, 1 : l) one position rightward. At the beginning of iteration i = 2, the second and the fourth row of Ĝ are equal, by Equation (8). Hence, when applying a Givens rotation to the first and the second row in order to annihilate the entry Ĝ(2, i), and when subsequently applying a hyperbolic rotation to the first and fourth row of Ĝ in order to annihilate Ĝ(4, i), it turns out that Ĝ(2, i : l) and Ĝ(4, i : l) are modified but remain equal to each other, while Ĝ(1, i : l) remains unchanged. The equality between Ĝ(2, :) and Ĝ(4, :) is maintained throughout iterations 1, 2, . . . , l.
Therefore, the second and the fourth row of Ĝ do not play any role in computing R(1 : l, 1 : l) and can be neglected.Hence, the GSA for computing R(1 : l, 1 : l) reduces only to applying a hyperbolic rotation to the first and the third generators, as described in the following algorithm.
Remark 1. The above GSA implementation allows proving the inequality in Equation (11). This property is difficult to obtain if the QR factorization is performed via Householder transformations or if the classical Cholesky factorization of S^T S is used.

GSA for Computing the Null-Space of Polynomial Matrices
In this section, we consider the problem of computing a polynomial basis X(s) ∈ R^{n×(n−ρ)} of the null-space of an m × n polynomial matrix

M(s) = M_0 + M_1 s + · · · + M_δ s^δ   (12)

of degree δ and rank ρ ≤ min(m, n). As described in [8,19,20], the above problem is equivalent to that of computing the null-space of a related block-Toeplitz matrix. Algorithms to solve this problem are proposed in [8,19], but they do not explicitly exploit the structure of the involved matrix. Algorithms for related problems have also been described in the literature, e.g., in [8,19,21,22].
In this paper, we propose an algorithm for computing the null-space of polynomial matrices based on a variant of the GSA for computing the null-space of a related band block-Toeplitz matrix [8].

Null-Space of Polynomial Matrices
A polynomial vector v(s) = v_0 + v_1 s + · · · + v_γ s^γ belongs to the null-space of M(s) if and only if the vector of stacked coefficients [v_0^T, v_1^T, . . . , v_γ^T]^T belongs to the null-space of the band block-Toeplitz matrix

T ∈ R^{m̄×n̄},   (13)

whose block columns contain downward-shifted copies of the coefficient matrices M_0, M_1, . . . , M_δ, where m̄ = m(δ + n_b) and n̄ = n n_b, with n_b = γ + 1 the number of block columns of T, which can be determined, e.g., by the algorithm described in [8]. Hence, the problem of computing the null-space of the polynomial matrix in Equation (12) is equivalent to that of computing the null-space of the matrix in Equation (13). To normalize the entries of this matrix, it is appropriate to first perform a QR factorization of each block column of T and to absorb the upper triangular factor U into the vector u(s) := U v(s). The convolution equation M(s)v(s) = 0 then becomes an equation of the type Q(s)u(s) = 0, where the coefficient matrices Q_i of Q(s) together form an orthonormalized matrix.
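The equivalence can be checked on a tiny instance (a Python sketch with matrices of our own choosing): for M(s) = [s + 1, s^2 + 2s + 1], the vector v(s) = [s + 1, −1]^T satisfies M(s)v(s) = 0, and its stacked coefficients lie in the null-space of the corresponding T:

```python
import numpy as np

# Block-Toeplitz reformulation of M(s) v(s) = 0 for
# M(s) = M0 + M1 s + M2 s^2 and v(s) of degree gamma = 1 (nb = 2).
M0 = np.array([[1.0, 1.0]])
M1 = np.array([[1.0, 2.0]])
M2 = np.array([[0.0, 1.0]])
blocks = [M0, M1, M2]
m, n = M0.shape
nb = 2                                   # number of block columns

T = np.zeros((m * (len(blocks) - 1 + nb), n * nb))
for j in range(nb):                      # jth block column: M0..Mdelta, shifted
    for i, Mi in enumerate(blocks):
        T[(i + j) * m:(i + j + 1) * m, j * n:(j + 1) * n] = Mi

# v(s) = [s + 1, -1]^T, i.e., v0 = [1, -1]^T, v1 = [1, 0]^T
u = np.array([1.0, -1.0, 1.0, 0.0])      # stacked coefficients [v0; v1]
assert np.allclose(T @ u, 0.0)
```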
Remark 2. Above, we have assumed that there are no constant vectors v in the kernel of M(s). If there are, the block column of M_i matrices has rank less than n, and the above factorization discovers this, in the sense that the matrix U is nonsquare and the matrices Q_i have fewer columns than the M_i, i = 0, 1, . . . , δ. This trivial null-space can be eliminated, and we therefore assume that the rank was full. For simplicity, from now on, we also assume that the coefficient matrices of the polynomial matrix M(s) were already normalized in this way, so that the block columns of T are orthonormalized. This normalization proves to be very useful in the sequel.
Denote by Z_{n_b} the block shift matrix of order n̄, with identity blocks on its first block subdiagonal and zero blocks elsewhere, where 0_n is the null matrix of order n ∈ N. If v ∈ ker(T) corresponds to a polynomial vector v(s) of degree i, then also Z_{n_b}^k v ∈ ker(T), k = 0, 1, . . . , γ − i, since Z_{n_b}^k v collects the coefficients of s^k v(s). In this case, the vector v is said to be a generator vector of a chain of length γ − i + 1 of the null-space of T.
The proposed algorithm for the computation of the null-space of polynomial matrices is based on the GSA for computing the R factor of the QR-factorization of the matrix T in Equation ( 13) and, if R is full column rank, its inverse R −1 .
Let us first assume that the matrix T is full rank, i.e., rank(T) = ρ = min{m̄, n̄}. Without loss of generality, we suppose m̄ ≥ n̄. If m̄ < n̄, the algorithm still computes the R factor in trapezoidal form [23]; moreover, in this case, we compute the first m̄ rows of the inverse of the matrix obtained by appending the last n̄ − m̄ rows of the identity matrix of order n̄ to R.
Let us consider the SPD block-Toeplitz matrix T̃ = T^T T, whose blocks are T̃_i ∈ R^{n×n}. Notice that, because of the normalization introduced before, we have T̃_0 = I_n and ‖T̃_i‖_2 ≤ 1; this is used below. The matrix

W = [T̃, I_n̄; I_n̄, 0_n̄]   (16)

can be factorized in such a way that R ∈ R^{n̄×n̄}, the R factor of the QR factorization of T (i.e., the Cholesky factor of T̃), and its inverse R^{−1} can be retrieved from the first n̄ columns of the computed factor.
The displacement and the displacement rank of W with respect to Z are given by

∇_Z(W) = W − Z W Z^T   (18)

and ρ(W, Z) = rank(∇_Z(W)), respectively, with ∇_Z(W) ∈ R^{2n̄×2n̄}.
Then, taking the order n of the blocks T̃_i, i = 0, 1, . . . , δ, into account, it turns out that ρ(W, Z) ≤ 2n. Hence, the displacement in Equation (18) can be written as the difference of two matrices of rank at most n. Since T̃_0 = I_n, the construction of G does not require any computation: its entries are directly read off the first block row of W.

Remark 3. Observe that, when increasing n_b with n_b ≥ δ + 1, the structures of W and ∇_Z(W) do not change, due to the block band structure of the matrix W. Consequently, the length of the corresponding generators changes, but their structure remains the same, since only T̃_0, T̃_1, . . . , T̃_δ and I_n are different from zero in the first block row.
The computation by the GSA of the R factor of T and of its inverse R −1 is made by only using the matrix G rather than the matrix T. Its implementation is a straightforward block matrix extension of the GSA described in Section 2.
Remark 4. By construction, the initial generator matrix G 0 has the first δ + 1 block rows and the block row n b + 1 different from zero.Therefore, the multiplication of G 0 by the J-orthogonal matrix H 1 does not modify the structure of the generator matrix.
Let G_0 = G. At each iteration i, i = 1, . . . , n_b, we start from the generator matrix G_{i−1}, whose blocks (of length n) i, i + 1, . . . , i + δ and n_b + 1, . . . , n_b + i are different from zero. We then look for a J-orthogonal matrix H_i such that the product H_i G_{i−1} has in positions (1 : n, (i − 1)n + 1 : in) and (n + 1 : 2n, (i − 1)n + 1 : in) a nonsingular upper triangular matrix and a zero matrix, respectively.
The generator matrix G_i is then obtained from H_i G_{i−1} by multiplying its first n̄ columns with Z. The J-orthogonal matrix H_i at the ith iteration of the GSA can be constructed as a product of n Householder matrices Ĥ_{i,j} and n hyperbolic rotations Ŷ_{i,j}, j = 1, . . . , n.
The multiplication by the Householder matrix Ĥ_{i,j} modifies the last n columns of the generator matrix, annihilating all of the last n entries but the (n + 1)st in row (i − 1)n + j, j = 1, . . . , n, while the multiplication by the hyperbolic rotation Ŷ_{i,j} acts on columns i and n + 1, annihilating the entry in position ((i − 1)n + j, n + 1). The modifications of the sparsity pattern of the generator matrix after the first and the ith iteration of the GSA are displayed in Figures 1 and 2, respectively.
The reliability of the GSA strongly depends on the way the hyperbolic rotations are computed. In [4,5,24], it is proven that the GSA is weakly stable if the hyperbolic rotations are implemented in an appropriate manner [3,11,12,24]. As previously mentioned, the GSA relies only on the knowledge of the generators of W rather than on the matrix T itself. Their computation involves the product T^T T, which can be accomplished in δ^2 n^3 flops. The ith iteration of the GSA involves the multiplication of n Householder matrices of size n by a matrix of size (i + δ + 1)n × n. Therefore, since the cost of the multiplications by the hyperbolic rotations is negligible with respect to that of the multiplications by the Householder matrices, the computational cost of iteration i is 4n^3(δ + i) flops. Hence, the overall computational cost of the GSA is Σ_{i=1}^{n_b} 4n^3(δ + i) = O(n^3 n_b(δ + n_b)) flops.

Figure 1. Modification of the sparsity pattern of the generator matrix G after the first iteration.
Figure 2. Modification of the sparsity pattern of the generator matrix G after the ith iteration.

GSA for Computing the Right Null-Space of Semidefinite Block Toeplitz Matrices
As already mentioned in Section 5.1, the number of block columns n_b of the matrix T in Equation (13) can be computed as described in [8]. For the sake of simplicity, in the considered examples, we choose n_b large enough to compute the null-space of T.
The structure and the computation via the GSA of the R factor of the QR factorization of a singular block-Toeplitz matrix T with rank ρ < n̄ ≤ m̄ are considered in [23].
A modification of the GSA for computing the null-space of Toeplitz matrices is described in [25]. In this paper, we extend the latter results to compute the null-space of T by modifying the GSA.
Without loss of generality, let us assume that the first n̄ − 1 columns of T are linearly independent, while the n̄th column depends linearly on the previous ones. Therefore, the first n̄ − 1 principal minors of T̃ are positive, while the n̄th one is zero. Let T̃ = QΛQ^T be the spectral decomposition of T̃, with Q = [q_1, . . . , q_n̄] orthogonal and Λ = diag(λ_1, . . . , λ_{n̄−1}, λ_n̄), with λ_1 ≥ · · · ≥ λ_{n̄−1} > λ_n̄ = 0. Let R_ε be the Cholesky factor of a regularized matrix T̃_ε, with R_ε upper triangular, i.e., T̃_ε = R_ε^T R_ε. Then, as ε → 0^+, the last column of R_ε^{−1} becomes closer and closer to a multiple of q_n̄, the eigenvector corresponding to the zero eigenvalue of T̃.
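This limiting behavior can be illustrated numerically (a Python sketch in which a generic singular SPD matrix and the regularization B + εI stand in, as assumptions, for the matrices of this section):

```python
import numpy as np

# For a singular SPD matrix B with unit null vector q, the normalized last
# column of R_eps^{-1}, with R_eps the Cholesky factor of B + eps*I,
# approaches a multiple of q as eps -> 0+.
q = np.full(4, 0.5)                       # unit null vector
B = np.eye(4) - np.outer(q, q)            # PSD matrix of rank 3, B q = 0

eps = 1e-10
R = np.linalg.cholesky(B + eps * np.eye(4)).T   # upper triangular factor
v = np.linalg.solve(R, np.eye(4)[:, -1])        # last column of R^{-1}
v /= np.linalg.norm(v)

assert min(np.linalg.norm(v - q), np.linalg.norm(v + q)) < 1e-4
```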
We observe that column n̄ + 1 of the generator matrix is not involved in the GSA until the very last iteration, since only its n̄th entry is different from zero. At the very last iteration, the hyperbolic rotation is applied to the n̄th and (n̄ + 1)st rows of G, i.e., to the n̄th row of G^{(+)} and the first row of G^{(−)}. Since T is singular, the two pivots involved have the same modulus (see [23,25]). Since a vector of the right null-space of T is determined up to multiplication by a constant, neglecting the term |G^{(+)}(n̄, n̄)|/ε, such a vector can be computed at the last iteration as the first column of the product of the matrix

[1, −θ; −θ, 1]

with the two rows involved. When a vector of the null-space is detected as a linear combination of row n̄ of G^{(+)} and row one of G^{(−)}, the new generator matrix G for the GSA is obtained by removing the corresponding columns from G [23,25].
The implementation of the modified GSA for computing the null-space of band block-Toeplitz matrices in Equation ( 13) is rather technical and can be obtained from the authors upon request.
The stability properties of the GSA have been studied in [4,5,24].The proposed algorithm inherits the stability properties of the GSA, which means that it is weakly stable.
Numerical Examples

Example 1. In the first example, the Sylvester matrix S ∈ R^{33×33} is constructed from two polynomials w(x) and y(x) whose greatest common divisor has degree 3; therefore, S has rank 30. The diagonal entries of the R factor computed by the GSA implementation described in Section 4 are displayed in Figure 3. Observe that the rank of the matrix can be retrieved from the number of diagonal entries of R above a certain tolerance. Moreover, it can be noticed that the first m = 15 diagonal entries monotonically decrease.

Example 2. As a second example, we consider the computation of the coprime factorization of a transfer function matrix, described in [9,26]. The results obtained by the proposed GSA-based algorithm were compared with those obtained by computing the null-space of the considered matrix with the function svd of matlab.
Let H(s) = N_r(s) D_r^{−1}(s) be the transfer function considered in [9,26]. As reported there, a minimal polynomial basis for the right null-space of the associated polynomial matrix M(s) is known. Let us consider n_b = 3. Then, T ∈ R^{20×21} is the block-Toeplitz matrix constructed from M(s) as described in Section 5.1. Let rank(T) = 17, let UΣV^T be the singular value decomposition of T computed by matlab, and let us define V_1 = V(:, 1 : 17) and V_2 = V(:, 18 : 21), the matrices of the right singular vectors corresponding to the nonzero and zero singular values of T, respectively. The modified GSA applied to T yields four vectors v_1, Z_{n_b} v_1, v_2, v_3 ∈ R^{21} belonging to the right null-space of M(s), with Z_{n_b} = diag(ones(14, 1), −7). Let X = [v_1, Z_{n_b} v_1, v_2, v_3]. In Table 1, the relative norm of T V_2, the relative norm of T X, the norm of V_1^T V_2, and the norm of V_1^T X are reported in Columns 1-4, respectively. These values show that the results provided by the svd of matlab and by the algorithm based on a modification of the GSA are comparable in terms of accuracy.

Example 3. This example can be found in [9,26]. Let H(s) = D_l^{−1}(s) N_r(s) be the transfer function considered there. Let us choose n_b = 4. Then, T ∈ R^{14×16} is the block-Toeplitz matrix constructed from M(s) as described in Section 5.1. Let rank(T) = 11, let UΣV^T be the singular value decomposition of T computed by matlab, and let us define V_1 = V(:, 1 : 11) and V_2 = V(:, 12 : 16), the matrices of the right singular vectors corresponding to the nonzero and zero singular values of T, respectively. The modified GSA applied to T yields the vectors v_1, Z_{n_b} v_1, Z_{n_b}^2 v_1, v_2, Z_{n_b} v_2, v_3 ∈ R^{16} of the right null-space, with Z_{n_b} = diag(ones(12, 1), −4).
In Table 2, the relative norm of T V_2, the relative norm of T X, the norm of V_1^T V_2, and the norm of V_1^T X are reported in Columns 1-4, respectively. As in Example 2, the results yielded by the considered algorithms are comparable in accuracy.

Conclusions
The generalized Schur algorithm is a powerful tool for computing classical matrix decompositions, such as the QR and LU factorizations. If the involved matrices have a particular structure, such as Toeplitz or Sylvester, the GSA computes these factorizations with a complexity one order of magnitude lower than that of classical algorithms based on Householder or elementary transformations.
After having emphasized the main features of the GSA, we have shown in this manuscript that the GSA helps to prove theoretical properties of the R factor of the QR factorization of some structured matrices. Moreover, fast implementations of the GSA for computing the rank of Sylvester matrices and the null-space of polynomial matrices have been proposed, relying on a modification of the GSA for computing the R factor of the QR factorization of band block-Toeplitz matrices with full column rank, together with its inverse. The numerical examples show that the proposed approach yields reliable results, comparable to those provided by the function svd of matlab.