
# Convertible Subspaces of Hessenberg-Type Matrices

1. Faculdade de Ciências, Universidade da Beira Interior, Rua Marquês d’Ávila e Bolama, 6201-001 Covilhã, Portugal
2. Centro de Matemática e Aplicações (CMA-UBI), Universidade da Beira Interior, Rua Marquês d’Ávila e Bolama, 6201-001 Covilhã, Portugal
3. Center for Research and Development in Mathematics and Applications (CIDMA), Universidade de Aveiro, Campus Universitário de Santiago, 3810-193 Aveiro, Portugal

\* Author to whom correspondence should be addressed.
Mathematics 2017, 5(4), 79; https://doi.org/10.3390/math5040079
Received: 16 November 2017 / Revised: 9 December 2017 / Accepted: 10 December 2017 / Published: 13 December 2017

## Abstract

We describe subspaces of generalized Hessenberg matrices where the determinant is convertible into the permanent by affixing ± signs. An explicit characterization of convertible Hessenberg-type matrices is presented. We conclude that convertible matrices with the maximum number of nonzero entries can be reduced to a basic set.

## 1. Introduction

Let $M_n(\mathbb{C})$ denote the space of all n-square matrices over the complex field $\mathbb{C}$, and let $S_n$ be the symmetric group of degree n. For $A = [a_{ij}] \in M_n(\mathbb{C})$, the permanent function is defined as:

$$\operatorname{per}(A) = \sum_{\sigma \in S_n} \prod_{i=1}^{n} a_{i\sigma(i)}.$$

The permanent function therefore resembles the determinant, which in turn is given by:

$$\det(A) = \sum_{\sigma \in S_n} \epsilon(\sigma) \prod_{i=1}^{n} a_{i\sigma(i)},$$

where $\epsilon$ denotes the sign function.
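In Python, the two definitions translate directly into (factorial-time) sums over all permutations; a minimal sketch, practical only for small n:

```python
from itertools import permutations
from math import prod

def perm_sign(p):
    """Sign of a permutation p, given as a tuple of 0-based images."""
    sign, seen = 1, [False] * len(p)
    for i in range(len(p)):
        if not seen[i]:
            # a cycle of length L contributes (-1)**(L - 1) to the sign
            j, length = i, 0
            while not seen[j]:
                seen[j] = True
                j, length = p[j], length + 1
            if length % 2 == 0:
                sign = -sign
    return sign

def per(A):
    """Permanent: the unsigned sum over all permutations."""
    n = len(A)
    return sum(prod(A[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def det(A):
    """Determinant: the same sum, weighted by the permutation sign."""
    n = len(A)
    return sum(perm_sign(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))
```

The only difference between the two functions is the sign factor, which is what makes conversion by affixing ± signs a natural question.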
Although not as prominent as the determinant, the permanent is still a well-known matrix function, with many applications in combinatorics and graph theory. However, while the determinant can be computed efficiently, no efficient algorithm for computing the permanent is known. The difficulty of computing the permanent directly leads to the idea of computing it by means of determinants. This problem dates back to 1913, to a work by Pólya [1], and it has been under intensive investigation since then. While it is clear that the permanent of a $2 \times 2$ matrix:
$$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}$$

equals the determinant of the related matrix:

$$B = \begin{pmatrix} a_{11} & -a_{12} \\ a_{21} & a_{22} \end{pmatrix},$$
Szegö [2] proved that for $n \ge 3$ there is no way to generalize this procedure. That is, there is no uniform way of changing the signs of the entries of a matrix $A \in M_n(\mathbb{C})$ in order to obtain a matrix B satisfying $\det(B) = \operatorname{per}(A)$.
In [3], Gibson proved that if A is an n-square $(0,1)$-matrix, and if the permanent of A can be converted into a determinant by affixing ± signs to the elements of the matrix, then A has at most $\Omega_n = \frac{1}{2}(n^2+3n-2)$ positive entries.
Later, Little [4] proved that an n-square $(0,1)$-matrix can be conveniently represented by means of a bipartite graph, and reinterpreted the problem of characterizing convertible matrices as that of characterizing bipartite graphs whose 1-factors can be counted by using Pfaffians in the manner suggested by Kasteleyn [5].
The computational complexity of deciding whether a $(0,1)$-matrix is convertible was studied by Vazirani and Yannakakis [6] and by Robertson, Seymour, and Thomas [7], leading the latter authors to design a polynomial-time algorithm that determines whether or not the permanent of a matrix is convertible into a determinant.
In this article, we consider mostly n-square $(0,1)$-matrices with the maximum number $\Omega_n$ of positive entries. For these cases, we present a procedure to determine whether or not a given matrix is convertible. Compared with previously available algorithms, this method does not rely on the associated bipartite graph, and it is more efficient. This result is presented in Section 4, where we introduce a new concept: the imprint. Before that, in Section 3, we define Hessenberg-type matrices, generalizing the well-known notion of Hessenberg matrices, and present some preliminary results. Our main results appear in Section 5. We extend Fonseca’s result [8] by presenting an explicit characterization of convertible Hessenberg-type matrices and of the corresponding subspaces. We conclude that convertible matrices can be reduced to a basic set.

## 2. Basic Definitions and Preliminary Results

Let us start with a brief summary of definitions and results that are subsequently used throughout the article.
In the present work, we consider square matrices exclusively, so the qualifier “square” will be dropped in what follows. In general, the order n of the matrices under consideration is arbitrary, except when explicitly stated otherwise.
A matrix $X \in M_n(\mathbb{C})$ is said to be convertible if there exists a $(1,-1)$-matrix $C \in M_n(\mathbb{C})$ such that:

$$\operatorname{per}(X) = \det(C \star X),$$

where $\star$ denotes the Hadamard (entrywise) product. As already mentioned in the Introduction, a convertible matrix of order n has at most $\Omega_n = \frac{1}{2}(n^2+3n-2)$ nonzero entries [3].
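Since $\operatorname{per}(X) = \det(C \star X)$ is a polynomial identity in the entries of X, comparing the two sums monomial by monomial shows that it holds exactly when $\epsilon(\sigma) \prod_i c_{i\sigma(i)} = 1$ for every permutation $\sigma$ whose diagonal lies inside the support of the pattern. This gives a finite test, and for small n one can even search for C by brute force. A sketch (exponential in $n^2$, not the polynomial-time algorithm of [7]):

```python
from itertools import permutations, product

def perm_sign(p):
    """Sign of a permutation p, given as a tuple of 0-based images."""
    sign, seen = 1, [False] * len(p)
    for i in range(len(p)):
        if not seen[i]:
            j, length = i, 0
            while not seen[j]:
                seen[j] = True
                j, length = p[j], length + 1
            if length % 2 == 0:
                sign = -sign
    return sign

def converts(S, C):
    """True iff every permutation supported on S picks up total sign +1,
    i.e. iff per(X) = det(C * X) for every X supported on S."""
    n = len(S)
    for p in permutations(range(n)):
        if all(S[i][p[i]] for i in range(n)):
            c = 1
            for i in range(n):
                c *= C[i][p[i]]
            if perm_sign(p) * c != 1:
                return False
    return True

def find_conversion(S):
    """Brute-force search over all 2**(n*n) sign matrices (small n only)."""
    n = len(S)
    for signs in product((1, -1), repeat=n * n):
        C = [list(signs[i * n:(i + 1) * n]) for i in range(n)]
        if converts(S, C):
            return C
    return None
```

For the full $3 \times 3$ Hessenberg pattern a conversion matrix is found, while for the all-ones matrix $J_3$ (which has $9 > \Omega_3 = 8$ nonzero entries) the search fails, in line with Szegö’s result.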
For a $(0,1)$-matrix S of order n, we define the associated coordinate subspace as:

$$M_n(S) = \{\, S \star X : X \in M_n(\mathbb{C}) \,\}.$$
It is clear that if S is convertible, then every element of $M_n(S)$ is also convertible, with respect to the same matrix C; i.e., there exists a $(1,-1)$-matrix $C \in M_n(\mathbb{C})$ such that:

$$\operatorname{per}(X) = \det(C \star X), \quad \text{for all } X \in M_n(S).$$

We will also say that $M_n(S)$ is convertible.
A well-known set of convertible matrices is the set of Hessenberg matrices. The matrix $A = [a_{ij}] \in M_n(\mathbb{C})$ is said to be a lower (upper) Hessenberg matrix if $a_{ij} = 0$ for $i < j-1$ ($i > j+1$). In [9], Gibson proved that the linear space of lower (or upper) Hessenberg matrices is a convertible subspace of $M_n(\mathbb{C})$. In [8], Fonseca extended Gibson’s result to a broader class of matrices. In the next section, we introduce a definition that further extends the class of matrices considered in [8].
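Gibson's result can be checked numerically. One sign pattern that does the job for a lower Hessenberg matrix, consistent with the $2 \times 2$ example of the Introduction and easily verified for small n, is to negate the superdiagonal entries; this is stated here as an illustrative choice, using naive permanent and determinant sums:

```python
from itertools import permutations
from math import prod

def perm_sign(p):
    """Sign of a permutation p, given as a tuple of 0-based images."""
    sign, seen = 1, [False] * len(p)
    for i in range(len(p)):
        if not seen[i]:
            j, length = i, 0
            while not seen[j]:
                seen[j] = True
                j, length = p[j], length + 1
            if length % 2 == 0:
                sign = -sign
    return sign

def per(A):
    n = len(A)
    return sum(prod(A[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def det(A):
    n = len(A)
    return sum(perm_sign(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

# A lower Hessenberg matrix: a_ij = 0 whenever j > i + 1 (entries illustrative).
A = [[5, 2, 0, 0],
     [1, 3, 7, 0],
     [4, 1, 2, 6],
     [8, 2, 5, 3]]

# Negate the n - 1 superdiagonal entries: only n - 1 signs are changed.
B = [row[:] for row in A]
for i in range(len(A) - 1):
    B[i][i + 1] = -B[i][i + 1]

assert per(A) == det(B)
```

Note that only $n-1$ signs are affixed, the fewest possible for a full pattern.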
Definition 1.
Two matrices A and B are permutation equivalent if there exist permutation matrices P and Q such that $B = PAQ$. In such a case, we denote it by $A \sim B$.
It is clear that if A is convertible and $A ∼ B$, then B is also convertible.
Proposition 1.
If a matrix $S \in M_n(\mathbb{C})$ is convertible, then it is sufficient to change at most $n-1$ signs in order to convert the permanent into the determinant.
Proof.
Trivial by Lemma 3 and by the Theorem in [9]. ☐
Let $A = [a_{ij}] \in M_n(\mathbb{C})$. We denote by $A(i_1, \ldots, i_k; j_1, \ldots, j_k)$ the submatrix of A obtained after removing rows $i_1, \ldots, i_k$ and columns $j_1, \ldots, j_k$.
The following result is given in the proof of Lemma 1 of [8]:
Lemma 1.
Let $S = [s_{ij}]$ be a convertible $(0,1)$-matrix, with $\operatorname{per}(S) = \det(C \star S)$, where $C = [c_{ij}]$ is a $(1,-1)$-matrix. If $s_{rs} = 1$, then $S(r;s)$ is also convertible, with:

$$\operatorname{per}(S(r;s)) = (-1)^{r+s}\, c_{rs} \det(C(r;s) \star S(r;s)).$$
A subspace version of this lemma follows immediately.
Proposition 2.
If $M_n(S)$ is a convertible subspace and $s_{ij} = 1$, then $M_{n-1}(S(i;j))$ is also a convertible subspace.
Corollary 1.
If $M_n(S)$ is a convertible subspace and $s_{i_1 j_1}, \ldots, s_{i_k j_k}$ are k nonzero elements of S, then $M_{n-k}(S(i_1, \ldots, i_k; j_1, \ldots, j_k))$ is also a convertible subspace.
Proof.
Trivial by induction. ☐

## 3. Hessenberg-Type Matrices

In this section, we extend further the class of matrices considered in [8]. Throughout what follows, the adjective “lower” for the lower Hessenberg matrices will be dropped, since no ambiguity will arise.
Definition 2.
An n-square matrix is a Hessenberg-type matrix if it has at most $n-1$ nonzero entries above the main diagonal. A coordinate subspace V of $M_n(\mathbb{C})$ is said to be a Hessenberg-type subspace if there is a Hessenberg-type $(0,1)$-matrix $S = [s_{ij}]$ with $s_{ij} = 1$ if $i \ge j$, such that:

$$V = M_n(S).$$

If S has exactly $k \le n-1$ nonzero entries above the main diagonal, we call V a $(k,n)$-Hessenberg-type subspace, or simply a $(k,n)$-subspace if there is no ambiguity. If $k = n-1$, then S is called a full Hessenberg-type matrix, and the corresponding subspace $M_n(S)$ is called a full Hessenberg-type subspace.
Note that a full Hessenberg-type $(0,1)$-matrix has precisely the maximum number $\Omega_n$ of nonzero entries allowed for convertibility.
Standard Hessenberg matrices are of course special cases of Hessenberg-type matrices. In particular, a matrix in an $(n-1,n)$-subspace with all $n-1$ nonzero entries located on the diagonal immediately above the main diagonal will be referred to as a full Hessenberg matrix.
Fonseca’s extension result [8], concerning a particular Hessenberg-type subspace, can be stated as follows:
Theorem 1.
Let $S = [s_{ij}] \in M_n(\mathbb{C})$ be a full Hessenberg-type $(0,1)$-matrix, and $\ell, k \in \{2, \ldots, n-1\}$, with $k < \ell + 1$. If the positions $(i,j)$ of the nonzero entries above the main diagonal satisfy:
then $M_n(S)$ is a convertible subspace. The $(1,-1)$-matrix $C \in M_n(\mathbb{C})$ such that $\operatorname{per}(X) = \det(C \star X)$ for all $X \in M_n(S)$ satisfies:

$$c_{ij} = \begin{cases} -1, & \text{if } (1 < i < k \ \vee\ k < j < n) \wedge (i > j), \\ \phantom{-}1, & \text{otherwise}. \end{cases}$$
Example 1.
For example, if $n = 7$, $k = 4$, and $\ell = 5$, then:
and the coordinate subspace $M 7 ( S )$ is convertible.
Next, we present some results concerning nonfull Hessenberg-type subspaces.
Proposition 3.
The $(1,n)$-subspace is convertible.
Proof.
Trivial by Theorem 1. ☐
Proposition 4.
Let S be a $(2,n)$-Hessenberg-type $(0,1)$-matrix with a 1 in position $(i_1, j_1)$, $j_1 > i_1$. S is convertible if and only if the position $(i_2, j_2)$ of the second 1 above the main diagonal is not in one of the following regions:
Proof.
Without loss of generality, let us consider the scheme below. Note that if $j_1 = i_1 + 1$, then we only have region I.
$(\Rightarrow)$
Let us first suppose that the second 1 is in region I. Assume, without loss of generality, that this second 1 is in the upper-right corner of S; if this is not the case, then there exists a submatrix $S'$ of S satisfying this condition, and the non-convertibility of $S'$ implies the non-convertibility of S. Note that $S(1;n)$ is an $(n-1)$-square matrix with $n-1$ nonzero entries above the main diagonal, which cannot be convertible because it has more than $\Omega_{n-1}$ nonzero entries [3]. Then, by Lemma 1, we conclude that S is not convertible.
The situation where the second 1 is in region II is no different from the previous case, since one can interchange the role of the two 1’s.
Next, let us suppose that the second 1 is in region III. Then $j_1 > i_1 + 1$, and we may assume, without loss of generality, that $j_1 = n$ and $i_2 = 1$ (an example of a matrix S of this form, with $n = 7$, is given below). If this is not the case, then, as we argued for the region I case, there exists a submatrix $S'$ of S satisfying this condition.
$$S = \begin{pmatrix}
1 & 0 & 0 & 0 & 1 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 & 0 & 0 \\
1 & 1 & 1 & 0 & 0 & 0 & 1 \\
1 & 1 & 1 & 1 & 0 & 0 & 0 \\
1 & 1 & 1 & 1 & 1 & 0 & 0 \\
1 & 1 & 1 & 1 & 1 & 1 & 0 \\
1 & 1 & 1 & 1 & 1 & 1 & 1
\end{pmatrix}.$$
Since the number of nonzero entries of S is $\frac{1}{2}(n^2+n+4)$, it follows that $S(1, i_1; j_2, n)$ is an $(n-2)$-square matrix with $\frac{1}{2}(n^2+n+4) - (n+5+i_1-j_2) = \frac{1}{2}(n^2-n-6) + j_2 - i_1$ nonzero entries, because $2 + 2 + i_1 + (n - j_2 + 1) = n + 5 + i_1 - j_2$ nonzero entries of S are eliminated. Since $j_2 - i_1 > 1$ in region III, the number of nonzero entries of this $(n-2)$-square matrix is greater than the maximum admissible $\Omega_{n-2} = \frac{1}{2}(n^2-n-4)$, and $S(1, i_1; j_2, n)$ is therefore not convertible. By Corollary 1, this implies that the initial matrix is also nonconvertible.
Finally, if the second 1 is in region IV, one can interchange the role of the 1’s above the main diagonal, thus falling in the previous case.
$(\Leftarrow)$
If the second 1 is not in any of the four regions, then we have three cases:
- if $j_2 = i_1 + 1$ and $i_2 < i_1$ (positions labeled by ⊙), then S is permutation equivalent to a convertible matrix by Theorem 1, permuting columns $j_2$ and $j_2 - 1$;
- if $i_2 = j_1 - 1$ and $j_2 > j_1$ (positions labeled by ⊗), then S is permutation equivalent to a convertible matrix by Theorem 1, permuting rows $i_2$ and $i_2 + 1$;
- otherwise (positions labeled by ∗), S is convertible by Theorem 1. ☐
Lemma 2.
For $n \ge 3$, the number of convertible $(2,n)$-subspaces with a nonzero entry at a fixed position $(i,j)$, $i \le j-1$, is:

$$\frac{1}{2}\left[n^2 + (5-2j)n + j^2 - j + i^2 - i - 8\right] \quad \text{if } i < j-1,$$

$$\frac{1}{2}\left[n^2 + (1-2i)n + 2i^2 - 4\right] \quad \text{if } i = j-1.$$
Proof.
Consider first the case $i < j-1$. By Proposition 4, the number of convertible $(2,n)$-subspaces is given by:

$$T_{n-j+1} + T_i + (n-i-2) + (j-3) = \frac{1}{2}\left[n^2 + (5-2j)n + j^2 - j + i^2 - i - 8\right],$$

where $T_n = \frac{n(n+1)}{2}$ is the nth triangular number.
For $i = j-1$, the number of convertible $(2,n)$-subspaces is likewise given by:

$$T_{n-i} + T_i - 2 = \frac{1}{2}\left[n^2 + (1-2i)n + 2i^2 - 4\right].$$
☐
Proposition 5.
For $n \ge 3$, the number of convertible $(2,n)$-subspaces is $\frac{1}{24}(n-1)(n-2)(n^2+13n-12)$.
Proof.
The formula clearly holds for $n = 3$. Let us prove it for general n by induction. Suppose then that the formula holds for n. Let S be an arbitrary $(2,n+1)$-Hessenberg-type $(0,1)$-matrix. Two cases may occur: apart from the $(1,1)$ position, there is either none or at least one nonzero entry in the first row of S. In the first case, it is clear that the number of convertible subspaces of $M_{n+1}(S)$ coincides with the number of convertible subspaces of $M_n(S(1;1))$, which is, by hypothesis:

$$\frac{1}{24}(n-1)(n-2)\left(n^2 + 13n - 12\right).$$
In the second case, one applies Lemma 2 at fixed positions $(1,j)$. For position $(1,2)$, the counting is provided by the second formula of Lemma 2 (the case $i = j-1$), which gives:

$$\frac{1}{2}\left[(n+1)^2 + (1-2)(n+1) + 2 - 4\right] = \frac{1}{2}\left(n^2 + n - 2\right).$$
For positions $(1,j)$, $j > 2$, the counting is provided by the first formula of Lemma 2 (the case $i < j-1$). Summing over all different possibilities gives:

$$\sum_{j=3}^{n+1} \left\{ \frac{1}{2}\left[(n+1)^2 + (5-2j)(n+1) + j^2 - j - 8\right] - (j-2) \right\},$$
where the term $(j-2)$ takes care of the double counting (since there may be two nonzero entries in the first row, apart from position $(1,1)$). This sum is easily performed, yielding:

$$\frac{1}{6}(n-1)\left(n^2 + 7n - 12\right).$$
Finally, summing the three contributions above for the number of convertible $(2,n+1)$-subspaces, we get:

$$\frac{1}{24}(n-1)n\left(n^2 + 15n + 2\right),$$

which is precisely the claimed formula evaluated at $n+1$. It follows by induction that the number of convertible $(2,n)$-subspaces is $\frac{1}{24}(n-1)(n-2)(n^2+13n-12)$. ☐
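Proposition 5 can be cross-checked by brute force for small n: enumerate all $(2,n)$-Hessenberg-type $(0,1)$-matrices and test each for convertibility by searching for a sign matrix C, using the fact that $\operatorname{per}(X) = \det(C \star X)$ holds on the whole subspace exactly when every permutation supported on S picks up a total sign of $+1$. A sketch for $n = 4$, where the formula predicts 14 of the 15 possible subspaces:

```python
from itertools import combinations, permutations, product

def perm_sign(p):
    """Sign of a permutation p, given as a tuple of 0-based images."""
    sign, seen = 1, [False] * len(p)
    for i in range(len(p)):
        if not seen[i]:
            j, length = i, 0
            while not seen[j]:
                seen[j] = True
                j, length = p[j], length + 1
            if length % 2 == 0:
                sign = -sign
    return sign

def is_convertible(S):
    """Brute-force search for a (1,-1)-matrix C converting S."""
    n = len(S)
    supported = [(perm_sign(p), p) for p in permutations(range(n))
                 if all(S[i][p[i]] for i in range(n))]
    for signs in product((1, -1), repeat=n * n):
        C = [signs[i * n:(i + 1) * n] for i in range(n)]
        ok = True
        for s, p in supported:
            c = 1
            for i in range(n):
                c *= C[i][p[i]]
            if s * c != 1:
                ok = False
                break
        if ok:
            return True
    return False

# enumerate all (2,4)-Hessenberg-type (0,1)-matrices
n = 4
upper = [(i, j) for i in range(n) for j in range(i + 1, n)]
count = 0
for pair in combinations(upper, 2):
    S = [[1 if i >= j else 0 for j in range(n)] for i in range(n)]
    for i, j in pair:
        S[i][j] = 1
    count += is_convertible(S)

# Proposition 5: (n-1)(n-2)(n^2 + 13n - 12)/24 convertible (2,n)-subspaces
assert count == (n - 1) * (n - 2) * (n**2 + 13 * n - 12) // 24
```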

## 4. The Imprint and a Criterion for Convertibility

The following new concept will allow us to determine, in a very simple way, whether a given $(0,1)$-matrix with $\Omega_n$ nonzero entries is convertible, thus providing a useful criterion for the convertibility of the associated subspaces.
Definition 3.
The imprint of an n-square $(0,1)$-matrix $S = [s_{ij}]$ is the array, sorted in increasing order, given by:

$$\begin{pmatrix} r_1 & r_2 & \ldots & r_n \\ c_1 & c_2 & \ldots & c_n \end{pmatrix},$$

where

$$r_k = \sum_{j=1}^{n} s_{i_k j}, \quad \text{for } k = 1, \ldots, n, \text{ with } \{i_1, \ldots, i_n\} = \{1, 2, \ldots, n\},$$

and

$$c_k = \sum_{i=1}^{n} s_{i j_k}, \quad \text{for } k = 1, \ldots, n, \text{ with } \{j_1, \ldots, j_n\} = \{1, 2, \ldots, n\}.$$

We will denote the imprint of S by $\operatorname{imp}(S)$.
Example 2.
Consider the following matrices:
$$S_1 = \begin{pmatrix}
1 & 1 & 0 & 0 & 0 \\
1 & 1 & 0 & 1 & 0 \\
1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & 1
\end{pmatrix}
\quad \text{and} \quad
S_2 = \begin{pmatrix}
1 & 1 & 0 & 0 & 0 \\
1 & 1 & 1 & 0 & 0 \\
1 & 1 & 1 & 1 & 0 \\
1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & 1
\end{pmatrix}.$$

Then,

$$\operatorname{imp}(S_1) = \begin{pmatrix} 2 & 3 & 5 & 5 & 5 \\ 2 & 3 & 4 & 5 & 5 \end{pmatrix}
\quad \text{and} \quad
\operatorname{imp}(S_2) = \begin{pmatrix} 2 & 3 & 4 & 5 & 5 \\ 2 & 3 & 4 & 5 & 5 \end{pmatrix}.$$
Lemma 3.
If S is permutation equivalent to a full Hessenberg $(0,1)$-matrix, then:

$$\operatorname{imp}(S) = \begin{pmatrix} 2 & 3 & \ldots & n-1 & n & n \\ 2 & 3 & \ldots & n-1 & n & n \end{pmatrix}.$$
The following proposition gives a necessary and sufficient criterion for convertibility of matrices in $M n ( C )$ with $Ω n$ nonzero entries.
Proposition 6.
S is a convertible n-square $(0,1)$-matrix with $\Omega_n$ nonzero entries if and only if:

$$\operatorname{imp}(S) = \begin{pmatrix} 2 & 3 & \ldots & n-1 & n & n \\ 2 & 3 & \ldots & n-1 & n & n \end{pmatrix}.$$
Proof.
$( ⇒ )$
If S is convertible and has $\Omega_n$ nonzero entries, then, by Corollary 2 in [3], S is permutation equivalent to a full Hessenberg matrix. Hence, by Lemma 3,

$$\operatorname{imp}(S) = \begin{pmatrix} 2 & 3 & \ldots & n-1 & n & n \\ 2 & 3 & \ldots & n-1 & n & n \end{pmatrix}.$$
$( ⇐ )$
If

$$\operatorname{imp}(S) = \begin{pmatrix} 2 & 3 & \ldots & n-1 & n & n \\ 2 & 3 & \ldots & n-1 & n & n \end{pmatrix},$$

it is possible to reorder the rows and columns so as to obtain a Hessenberg matrix. Hence, S is convertible. ☐
Remark 1.
It follows from the last result that not all matrices in an $(n-1,n)$-subspace are convertible. For example, the matrix:

$$S = \begin{pmatrix}
1 & 0 & 1 & 1 \\
1 & 1 & 0 & 1 \\
1 & 1 & 1 & 0 \\
1 & 1 & 1 & 1
\end{pmatrix}$$

is not convertible, because $\operatorname{imp}(S) \neq \begin{pmatrix} 2 & 3 & 4 & 4 \\ 2 & 3 & 4 & 4 \end{pmatrix}$.
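Proposition 6 turns the convertibility question for matrices with $\Omega_n$ nonzero entries into a two-line check. A sketch, applied to the matrix of Remark 1 and, for contrast, to the full Hessenberg matrix of the same order:

```python
def imprint(S):
    """Sorted row sums over sorted column sums of a (0,1)-matrix."""
    rows = sorted(sum(row) for row in S)
    cols = sorted(sum(col) for col in zip(*S))
    return rows, cols

def convertible_with_max_entries(S):
    """Proposition 6: an n-square (0,1)-matrix with Omega_n nonzero entries
    is convertible iff its imprint is (2, 3, ..., n-1, n, n) in both rows."""
    n = len(S)
    assert sum(map(sum, S)) == (n * n + 3 * n - 2) // 2  # Omega_n entries
    canonical = list(range(2, n)) + [n, n]
    return imprint(S) == (canonical, canonical)

# the matrix of Remark 1 (13 = Omega_4 nonzero entries, yet not convertible)
S = [[1, 0, 1, 1],
     [1, 1, 0, 1],
     [1, 1, 1, 0],
     [1, 1, 1, 1]]

# the full 4 x 4 Hessenberg matrix, which is convertible
H = [[1, 1, 0, 0],
     [1, 1, 1, 0],
     [1, 1, 1, 1],
     [1, 1, 1, 1]]
```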

## 5. Characterization of Full Hessenberg-Type Subspaces

In this section we consider full Hessenberg-type matrices and the corresponding subspaces. Our aim is to obtain an explicit characterization of convertible full Hessenberg-type subspaces.
Hessenberg-type matrices can be composed to produce new higher-dimensional matrices as follows.
Definition 4.
Let $S_1$ and $S_2$ be two Hessenberg-type $(0,1)$-matrices of orders m and n, respectively, such that the $k \times k$ lower-right corner submatrix of $S_1$ coincides with the $k \times k$ upper-left corner submatrix of $S_2$. The k-overlap $S_1 \bigodot_k S_2$ is the matrix of order $m+n-k$ obtained by superposing $S_1$ and $S_2$ along their main diagonals, overlapping the coincident $k \times k$ submatrix; the missing entries below and above the main diagonal are set to 1 and 0, respectively.
Note that the new matrix $S_1 \bigodot_k S_2$ is not necessarily of Hessenberg type, as the following count of nonzero entries above the main diagonal shows. Let a, b, and c respectively denote the number of 1’s above the main diagonal in $S_1$, $S_2$, and the common $k \times k$ submatrix. The number of 1’s above the main diagonal in $S_1 \bigodot_k S_2$ is then $(a-c) + c + (b-c) = a + b - c$. In the least favorable case, when both matrices $S_1$ and $S_2$ are full (i.e., $a = m-1$ and $b = n-1$), it follows that $S_1 \bigodot_k S_2$ is Hessenberg-type only if $c \ge k-1$.
Example 3.
For the two following (full) Hessenberg-type matrices:
$$S_1 = \begin{pmatrix}
1 & 0 & 1 & 0 \\
1 & 1 & 1 & 1 \\
1 & 1 & 1 & 0 \\
1 & 1 & 1 & 1
\end{pmatrix}
\quad \text{and} \quad
S_2 = \begin{pmatrix}
1 & 0 & 1 \\
1 & 1 & 1 \\
1 & 1 & 1
\end{pmatrix},$$

two possible overlaps are $S_1 \bigodot_1 S_2$ and $S_1 \bigodot_2 S_2$. In the first case, the resulting matrix is Hessenberg-type, whereas in the second case it is not.
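A sketch of the k-overlap construction of Definition 4, checked on the two matrices above; for these matrices the two admissible overlaps are $k = 1$ and $k = 2$ (larger corner submatrices do not coincide), and both have $a + b - c = 3 + 2 - 0 = 5$ ones above the diagonal:

```python
def overlap(S1, S2, k):
    """k-overlap of Definition 4: superpose S1 and S2 along their diagonals,
    identifying the k x k lower-right corner of S1 with the k x k upper-left
    corner of S2; missing entries are 1 below the diagonal and 0 above it."""
    m, n = len(S1), len(S2)
    assert all(S1[m - k + i][m - k + j] == S2[i][j]
               for i in range(k) for j in range(k)), "corners must coincide"
    N = m + n - k
    S = [[1 if i >= j else 0 for j in range(N)] for i in range(N)]
    for i in range(m):
        for j in range(m):
            S[i][j] = S1[i][j]
    for i in range(n):
        for j in range(n):
            S[m - k + i][m - k + j] = S2[i][j]
    return S

def ones_above_diagonal(S):
    return sum(S[i][j] for i in range(len(S)) for j in range(i + 1, len(S)))

S1 = [[1, 0, 1, 0],
      [1, 1, 1, 1],
      [1, 1, 1, 0],
      [1, 1, 1, 1]]
S2 = [[1, 0, 1],
      [1, 1, 1],
      [1, 1, 1]]
```

The 1-overlap has order 6, so its 5 above-diagonal ones stay within the Hessenberg-type bound $n-1 = 5$; the 2-overlap has order 5, where 5 exceeds the bound $n-1 = 4$.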
Lemma 4.
Let $S_1$ and $S_2$ be two full Hessenberg-type $(0,1)$-matrices. $S_1 \bigodot_k S_2$ is a full Hessenberg-type $(0,1)$-matrix if and only if the common $k \times k$ submatrix is a full Hessenberg-type $(0,1)$-matrix.
Proof.
If the $k \times k$ submatrix is full, the conclusion follows trivially from the count below Definition 4. By the same count, if $S_1 \bigodot_k S_2$ is full, we have $m + n - 2 - c = m + n - k - 1$, and thus the number of nonzero entries above the main diagonal in the $k \times k$ submatrix is $c = k - 1$. ☐
Proposition 7.
Let $S_1$ and $S_2$ be two full and convertible Hessenberg-type $(0,1)$-matrices. If a k-overlap $S_1 \bigodot_k S_2$ is a full Hessenberg-type matrix, then it is convertible.
Proof.
Let $S_1$ and $S_2$ be two full and convertible Hessenberg-type $(0,1)$-matrices of orders $m_1$ and $m_2$, respectively, and let $S = S_1 \bigodot_k S_2$ be the k-overlap matrix of order n, which by hypothesis is a full Hessenberg-type matrix. If $n = \max\{m_1, m_2\}$, then $S = S_1$ or $S = S_2$, and it is therefore convertible.
Suppose that $n > \max\{m_1, m_2\}$. Without loss of generality, $S_1$ occupies the upper-left corner of S and $S_2$ the lower-right corner; denote the $k \times k$ overlap submatrix by $S'$. It follows from the previous lemma that $S'$ is a full Hessenberg-type matrix. Moreover, it follows from Corollary 1 that $S'$ is convertible, since $S' = S_1(1, \ldots, m_1-k; 1, \ldots, m_1-k)$.
Let us calculate $\operatorname{imp}(S)$. For the last $m_2$ rows of S, the number of nonzero entries in each row is obtained by adding $m_1 - k$ to each value of the first row of $\operatorname{imp}(S_2)$:

$$\begin{pmatrix} 2 & \ldots & m_2-1 & m_2 & m_2 \end{pmatrix} + \begin{pmatrix} m_1-k & \ldots & m_1-k & m_1-k & m_1-k \end{pmatrix} = \begin{pmatrix} m_1-k+2 & \ldots & m_1+m_2-k-1 & m_1+m_2-k & m_1+m_2-k \end{pmatrix}.$$
Since $S'$ is a full convertible matrix, we have:

$$\operatorname{imp}(S') = \begin{pmatrix} 2 & \ldots & k-1 & k & k \\ 2 & \ldots & k-1 & k & k \end{pmatrix},$$

and the contribution of these k rows to the first row of $\operatorname{imp}(S_1)$ is:

$$\begin{pmatrix} 2 & \ldots & k-1 & k & k \end{pmatrix} + \begin{pmatrix} m_1-k & \ldots & m_1-k & m_1-k & m_1-k \end{pmatrix} = \begin{pmatrix} m_1-k+2 & \ldots & m_1-1 & m_1 & m_1 \end{pmatrix},$$

which corresponds to the last k values of the first row of $\operatorname{imp}(S_1)$. It follows that the first $m_1-k$ values of the first row of $\operatorname{imp}(S_1)$ correspond to the first $m_1-k$ rows of $S_1$, and therefore to the first $m_1-k$ rows of S. Hence, after taking into account these first $m_1-k$ values, the first row of $\operatorname{imp}(S)$ turns out to be:

$$\begin{pmatrix} 2 & 3 & \ldots & m_1+m_2-k-1 & m_1+m_2-k & m_1+m_2-k \end{pmatrix}.$$
Since a similar argument is clearly valid for the columns, we get:

$$\operatorname{imp}(S) = \begin{pmatrix} 2 & \ldots & m_1+m_2-k-1 & m_1+m_2-k & m_1+m_2-k \\ 2 & \ldots & m_1+m_2-k-1 & m_1+m_2-k & m_1+m_2-k \end{pmatrix},$$

and it follows from Proposition 6 that S is convertible. ☐
We will now show that a small family of matrices generates, by means of the overlap defined above, all possible full convertible Hessenberg-type subspaces.
For $n \ge 2$, consider the family of full n-square Hessenberg-type $(0,1)$-matrices whose $n-1$ nonzero entries above the main diagonal are located as follows: position $(1,n)$; positions $(1,2), \ldots, (1,k)$ in the first row; and positions $(k+1,n), \ldots, (n-1,n)$ in the last column, where the integer k can take any of the values $\{1, 2, \ldots, n-1\}$. In particular, if $k = n-1$, all the 1’s above the main diagonal are positioned in the first row, and if $k = 1$, all of them are positioned in the last column.
We will refer to these matrices as basic, with the understanding that for $n = 2$ the (only) basic matrix is the full $2 \times 2$ Hessenberg $(0,1)$-matrix.
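A sketch generating the basic matrices from the position description above, to be checked against the imprint criterion of Proposition 6:

```python
def imprint(S):
    """Sorted row sums over sorted column sums of a (0,1)-matrix."""
    rows = sorted(sum(row) for row in S)
    cols = sorted(sum(col) for col in zip(*S))
    return rows, cols

def basic(n, k):
    """Basic matrix of order n, 1 <= k <= n-1: ones on and below the main
    diagonal, plus (1,n), (1,2),...,(1,k) in the first row and
    (k+1,n),...,(n-1,n) in the last column (positions in 1-based indexing)."""
    S = [[1 if i >= j else 0 for j in range(n)] for i in range(n)]
    S[0][n - 1] = 1
    for j in range(1, k):          # (1,2), ..., (1,k)
        S[0][j] = 1
    for i in range(k, n - 1):      # (k+1,n), ..., (n-1,n)
        S[i][n - 1] = 1
    return S
```

For every admissible k, the generated matrix has exactly $n-1$ ones above the diagonal and the canonical imprint, consistent with Proposition 8 below.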
Proposition 8.
The basic matrices are convertible.
Proof.
The basic matrices are permutation equivalent to the full Hessenberg matrix (or coincide with it, for $n = 2$). ☐
Proposition 9.
If a full Hessenberg-type $( 0 , 1 )$-matrix is convertible, then it is basic or an overlap of two or more basic matrices.
Proof.
We will prove it by induction.
For $n = 3$ there are three convertible subspaces $M_3(S)$, where S is one of:

$$\begin{pmatrix} 1 & 1 & 0 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix}, \quad \begin{pmatrix} 1 & 0 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix}, \quad \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \end{pmatrix}.$$

The first matrix S is a 1-overlap of two 2-square $(0,1)$-Hessenberg matrices, whereas the last two are basic.
Suppose now that the proposition is valid for n. Note first that validity for n actually guarantees validity for $n-1$, and therefore for $n-k$ with $k \ge 1$, for the following reason. Suppose there were an $(n-1)$-square full convertible Hessenberg-type $(0,1)$-matrix that is neither basic nor an overlap of basic matrices. The 1-overlap of this matrix (by means of the diagonal position $(n-1,n-1)$) with the $2 \times 2$ basic matrix would then be a full Hessenberg-type matrix of order n, convertible by Proposition 7, but failing to be basic or an overlap of basic matrices, which is a contradiction.
Let then $S = [s_{ij}]$ be an $(n+1)$-square full convertible Hessenberg-type $(0,1)$-matrix. In what follows, we will repeatedly use Proposition 6, which establishes that for every integer $2 \le \ell \le n$, there is exactly one row and one column of S with exactly $\ell$ nonzero entries. Proposition 2 and Corollary 1 are also used.
Let $k+1$, $k \ge 1$, be the number of 1’s in the first row of S. It follows that rows $2, 3, \ldots, k$ are fully determined; in particular, for these rows, $s_{ij} = 0$ for $j > i$. Moreover, all entries in the first row at positions $(1,j)$ with $j \le k$ must be 1’s, for the following reason. The matrices $S(j;j)$ are convertible for all j, and must therefore have fewer nonzero entries above the diagonal than S. However, for $j \le k$ (and $j > 1$), no 1’s would be removed from S unless they were at positions $(1,j)$. So, $s_{12}, \ldots, s_{1k}$ are necessarily 1’s, with the remaining 1 of the first row appearing in an arbitrary position $(1,m)$, $m > k$ (of course, the above constraints are void for $k = 1$). Note that the column $j = m$ is also completely determined: it is already established that $s_{im} = 0$ for $1 < i \le k$, and it follows from Proposition 6 that the remaining entries must be 1’s. Now, if $m = n+1$, we are done, since S is then seen to be basic (in particular, this includes the case $k = n$). Otherwise, we have $m \le n$; let $S'$ be the square matrix formed by the first m rows and columns of S. This matrix $S'$ is clearly convertible, by Corollary 1. On the other hand, since the combined number of 1’s above the diagonal in the first row and last column of $S'$ is clearly $m-1$, the maximum number $\Omega_m$ of admissible nonzero entries is saturated, showing that $S'$ is of the basic type. Furthermore, $S(1, \ldots, k; 1, \ldots, k)$ is certainly Hessenberg-type, convertible, and full, since the number of 1’s above the diagonal in $S(1, \ldots, k; 1, \ldots, k)$ is precisely $n-k$, given that exactly k nonzero entries above the diagonal are removed from S. It is clear that S is an $(m-k)$-overlap of $S'$ with $S(1, \ldots, k; 1, \ldots, k)$, thus completing our proof, since $S(1, \ldots, k; 1, \ldots, k)$ is, by hypothesis, basic or an overlap of basic matrices. ☐
We will conclude by counting the number of convertible full Hessenberg-type subspaces of order n, or, equivalently, the corresponding full convertible Hessenberg-type $(0,1)$-matrices. Given an $(n-1,n)$-subspace, there are $\binom{T_{n-1}}{n-1}$ different combinations of distributing the nonzero entries above the diagonal, where $T_{n-1}$ is the $(n-1)$th triangular number. Of course, not all of them correspond to a convertible subspace. The number of convertible subspaces is established by the following proposition.
Proposition 10.
For $n \ge 2$, there are $3^{n-2}$ different full convertible Hessenberg-type $(0,1)$-matrices of order n.
Proof.
We will prove it by induction. The hypothesis is clearly verified for $n = 2$. Suppose then that the proposition is valid for n, and let us consider the $n+1$ case. It follows again from Proposition 6 that full convertible Hessenberg-type $(0,1)$-matrices S fall into two classes: there are either two 1’s in the first row, with one of them necessarily at position $(1,1)$, or two 1’s in the second row, necessarily at positions $(2,1)$ and $(2,2)$. In the first case, assigning the remaining 1 to position $(1,2)$ puts no restriction on the (full convertible n-square Hessenberg-type) matrix $S(1;1)$, so there are as many possibilities for S as there are for $S(1;1)$ (i.e., $3^{n-2}$). Assigning the remaining 1 to one of the positions $(1,j)$, $j > 2$, restricts the form of $S(1;1)$, but one can easily convince oneself that the union of all these cases exhausts all possibilities for $S(1;1)$, so one again gets $3^{n-2}$ possibilities. In the second case, it follows from the arguments in the proof of Proposition 9 that there must be a 1 at position $(1,2)$. There is then no restriction on the matrix $S(2;2)$, and one again gets $3^{n-2}$ possibilities. So, there are $3 \times 3^{n-2} = 3^{n-1}$ full convertible Hessenberg-type matrices of order $n+1$, thus completing the proof. ☐
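Proposition 10 can be verified directly for small orders: enumerate all $\binom{T_{n-1}}{n-1}$ full Hessenberg-type $(0,1)$-matrices and count those with the canonical imprint, which by Proposition 6 are exactly the convertible ones. A sketch:

```python
from itertools import combinations

def imprint(S):
    """Sorted row sums over sorted column sums of a (0,1)-matrix."""
    rows = sorted(sum(row) for row in S)
    cols = sorted(sum(col) for col in zip(*S))
    return rows, cols

def count_full_convertible(n):
    """Count full Hessenberg-type (0,1)-matrices of order n with canonical
    imprint, i.e. the convertible ones by Proposition 6."""
    upper = [(i, j) for i in range(n) for j in range(i + 1, n)]
    canonical = list(range(2, n)) + [n, n]
    count = 0
    for chosen in combinations(upper, n - 1):
        S = [[1 if i >= j else 0 for j in range(n)] for i in range(n)]
        for i, j in chosen:
            S[i][j] = 1
        count += imprint(S) == (canonical, canonical)
    return count
```

For $n = 4$ this yields $9 = 3^2$ out of the $\binom{6}{3} = 20$ candidate matrices, matching the proposition.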

## Acknowledgments

This work was supported in part by FCT–Portuguese Foundation for Science and Technology through the Center of Mathematics and Applications of University of Beira Interior, within project UID/MAT/00212/2013.

## Author Contributions

Henrique F. da Cruz, Ilda Inácio Rodrigues and Rogério Serôdio introduced the problem; Henrique F. da Cruz, Ilda Inácio Rodrigues, Rogério Serôdio and Alberto Simões investigated the problem; Henrique F. da Cruz, Ilda Inácio Rodrigues, Rogério Serôdio and José Velhinho reviewed the proofs and arguments and wrote the paper.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

1. Pólya, G. Aufgabe 424. Arch. Math. Phys. 1913, 20, 271.
2. Szegö, G. Lösung zu Aufgabe 424. Arch. Math. Phys. 1913, 21, 291–292.
3. Gibson, P.M. Conversion of the permanent into the determinant. Proc. Am. Math. Soc. 1971, 27, 471–476.
4. Little, C.H.C. A characterization of convertible (0,1)-matrices. J. Comb. Theory Ser. B 1975, 18, 187–208.
5. Kasteleyn, P.W. Graph theory and crystal physics. In Graph Theory and Theoretical Physics; Harary, F., Ed.; Academic Press: New York, NY, USA, 1967; pp. 43–110.
6. Vazirani, V.V.; Yannakakis, M. Pfaffian orientations, 0-1 permanents, and even cycles in directed graphs. Discret. Appl. Math. 1989, 25, 179–190.
7. Robertson, N.; Seymour, P.D.; Thomas, R. Permanents, Pfaffian orientations, and even directed circuits. Ann. Math. 1999, 150, 929–975.
8. Da Fonseca, C.M. An identity between the determinant and the permanent of Hessenberg-type matrices. Czechoslov. Math. J. 2011, 61, 917–921.
9. Gibson, P.M. An identity between permanents and determinants. Am. Math. Mon. 1969, 76, 270–271.