Article

# Euclidean Algorithm for Extension of Symmetric Laurent Polynomial Matrix and Its Application in Construction of Multiband Symmetric Perfect Reconstruction Filter Bank

by
Jianzhong Wang
Department of Mathematics and Statistics, Sam Houston State University, 1901 Ave. I, Huntsville, TX 77341-2206, USA
Submission received: 9 March 2017 / Revised: 14 April 2017 / Accepted: 17 April 2017 / Published: 20 April 2017
(This article belongs to the Special Issue Wavelet and Frame Constructions, with Applications)

## Abstract

For a given pair of s-dimensional real Laurent polynomials $( a → ( z ) , b → ( z ) )$, which has a certain type of symmetry and satisfies the dual condition $b → ( z ) T a → ( z ) = 1$, an $s × s$ Laurent polynomial matrix $A ( z )$ (together with its inverse $A − 1 ( z )$) is called a symmetric Laurent polynomial matrix extension of the dual pair $( a → ( z ) , b → ( z ) )$ if $A ( z )$ has similar symmetry, the inverse $A − 1 ( z )$ is also a Laurent polynomial matrix, the first column of $A ( z )$ is $a → ( z )$ and the first row of $A − 1 ( z )$ is $( b → ( z ) ) T$. In this paper, we introduce the Euclidean symmetric division and the symmetric elementary matrices in the Laurent polynomial ring and reveal their relation. Based on the Euclidean symmetric division algorithm in the Laurent polynomial ring, we develop a novel and effective algorithm for symmetric Laurent polynomial matrix extension. We also apply the algorithm in the construction of multi-band symmetric perfect reconstruction filter banks.

## 1. Introduction

In this paper, we develop a novel and effective algorithm for symmetric Laurent polynomial matrix extension (SLPME) and apply it to the construction of symmetric multi-band perfect reconstruction filter banks (SPRFBs). The paper continues the study begun in [1].
To describe the SLPME problem clearly, we first give some notions and notations. For a given matrix A, we denote by $A ( : , j )$ the j-th column of A and by $A ( i , : )$ its i-th row. Let $L$ be the ring of all Laurent polynomials (LPs) with real coefficients and $s ≥ 1$ an integer. An LP vector $a → ( z ) ∈ L s$ is called prime if there is $b → ( z ) ∈ L s$ such that $( b → ( z ) ) T a → ( z ) = 1$. In this case, we call $b → ( z )$ a dual of $a → ( z )$ and call $( a → ( z ) , b → ( z ) )$ a dual pair. An invertible LP matrix $M ( z ) ∈ L s × s$ is called $L$-invertible if $M − 1 ( z ) ∈ L s × s$, as well. We will denote by $G s$ the group of all $s × s$ $L$-invertible matrices. We write an s-dimensional column vector $a →$ as $a → = [ a 1 ; a 2 ; ⋯ ; a s ] ,$ and write its transpose as $a → T = [ a 1 , a 2 , ⋯ , a s ]$. The symmetry of LP vectors and matrices is defined as follows:
Definition 1.
An LP vector $a → ( z ) = [ a 1 ( z ) ; ⋯ ; a s ( z ) ] ∈ L s$ is called polar-symmetric (or $P +$-symmetric), if $a j ( z ) = a s + 1 − j ( 1 / z ) , 1 ≤ j ≤ s$, and called polar-antisymmetric (or $P −$-symmetric), if $a j ( z ) = − a s + 1 − j ( 1 / z )$. An LP matrix $M ( z ) ∈ L s × s$ is called vertically symmetric (or $V$-symmetric), if each of its columns is either $P +$-symmetric or $P −$-symmetric.
In the paper, we employ $ϵ$ for the sign notation: $ϵ = +$ or −. Thus, an LP vector is said to be $P ϵ$-symmetric if it is either $P +$-symmetric or $P −$-symmetric. When the sign is not stressed, we simplify $P ϵ$ to $P$. We now define an SLPME of an LP vector as follows:
Definition 2.
Let $a → ( z ) ∈ L s$ be a given $P$-symmetric prime vector. An LP matrix $A ( z ) ∈ G s$ is called an SLPME of $a → ( z )$ if $A ( z )$ is $V$-symmetric and $A ( : , 1 ) = a → ( z )$. Furthermore, $( A ( z ) , A − 1 ( z ) )$ is called an SLPME of a $P$-symmetric dual pair $( a → ( z ) , b → ( z ) )$ if $A ( z ) ∈ G s$ is an SLPME of $a → ( z )$ and $A − 1 ( 1 , : ) = b → T ( z )$.
It is worth pointing out that the construction of a dual pair with or without the symmetry property is also a key ingredient in LPME and SLPME. This problem has been completely resolved in [2].
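Although the development below is carried out abstractly in the ring $L$, the objects are concrete. As a minimal illustration (our own sketch, not code from the paper), a Laurent polynomial can be stored as an `{exponent: coefficient}` dict, and the dual condition $( b → ( z ) ) T a → ( z ) = 1$ checked by expanding the products; the dual pair used below is a hypothetical toy example.

```python
# Sketch only: Laurent polynomials (LPs) as {exponent: coefficient} dicts.
# The dual pair a(z) = [z; 1 - z], b(z) = [1; 1] is a toy example of ours.

def lp_mul(a, b):
    """Product of two LPs."""
    out = {}
    for i, ai in a.items():
        for j, bj in b.items():
            out[i + j] = out.get(i + j, 0.0) + ai * bj
    return {k: c for k, c in out.items() if abs(c) > 1e-12}

def lp_dot(avec, bvec):
    """(b(z))^T a(z): entrywise LP products of two LP vectors, summed."""
    out = {}
    for a, b in zip(avec, bvec):
        for k, c in lp_mul(a, b).items():
            out[k] = out.get(k, 0.0) + c
    return {k: c for k, c in out.items() if abs(c) > 1e-12}

a_vec = [{1: 1.0}, {0: 1.0, 1: -1.0}]    # [z; 1 - z]
b_vec = [{0: 1.0}, {0: 1.0}]             # [1; 1]
assert lp_dot(a_vec, b_vec) == {0: 1.0}  # b^T a = z + (1 - z) = 1
```

Since $b → T a → = z + ( 1 − z ) = 1$, this toy $a →$ is prime with dual $b →$.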
The study of the Laurent polynomial matrix extension (LPME) has a long history. In the early 1990s, the two-band LPME arose in the study of the construction of compactly-supported wavelets [3,4,5,6,7]. LPME problems also arise in the construction of multi-wavelets [8,9,10,11].
It has become well known that LPME is the core of the construction of multi-band perfect reconstruction filter banks (PRFB) and multi-band wavelets [12,13,14,15,16]. If a PRFB is represented in the polyphase form, then constructing the polyphase matrices of the PRFB is essentially identical to LPME. For the general study of multi-band PRFB, we refer to [1,17,18,19,20,21]. We mention that the algorithm proposed in [1] was based on Euclidean division in the ring $L$. The author revealed the relation between Euclidean division in $L$ and $L$-elementary matrices, then developed the algorithm for LPME using $L$-elementary matrix factorization.
Unfortunately, the algorithm for LPME cannot be applied to SLPME because it does not preserve symmetry in the factorization. A special case of SLPME was given in Theorem 4.3 of [22]. However, the development of effective algorithms for SLPME is still desirable. Recently, Chui, Han and Zhuang in [17] introduced a bottom-up algorithm to construct SPRFBs for a given dual pair of symmetric filters. Their algorithm consists of a forward (or top-down) phase and a backward (or bottom-up) phase. In the top-down phase, the algorithm gradually reduces the filters in the dual pair to the simplest ones, keeping the symmetry in the process. Thus, an SPRFB is first constructed for the simplest dual pair. Then, in the bottom-up phase, the algorithm builds the SLPME for the original dual pair. Their method does not employ the polyphase forms of filters. Hence, it is not directly linked to SLPME.
In this paper, we develop an SLPME algorithm in the framework of the Laurent polynomial algebra. We first introduce the Euclidean $L$-symmetric division algorithm, which preserves the symmetry of LPs in the division. Then, we introduce the symmetric $L$-elementary matrices in the Laurent polynomial ring and reveal the relation between the Euclidean $L$-symmetric division and the symmetric $L$-elementary transformation. Our SLPME algorithm is essentially based on symmetric $L$-elementary transformations of the $V$-symmetric matrices in the group $G s$.
The paper is organized as follows. In Section 2, we introduce $L$-symmetric vectors and matrices and their properties. In Section 3, we first develop the Euclidean symmetric division algorithms in the Laurent polynomial ring, introduce symmetric $L$-elementary matrices and reveal the relation between the Euclidean symmetric division and the symmetric $L$-elementary transformation. Then, at the end of the section, we present the Euclidean symmetric division algorithm for SLPME. In Section 4, we apply our SLPME algorithm in the construction of multi-band SPRFBs. In Section 5, we present several illustrative examples for the construction of symmetric multi-band SPRFBs and SLPMEs.

## 2. Symmetries of LP Vectors and Matrices

In this section, we study the symmetric properties of $P ϵ$-symmetric vectors and $V$-symmetric matrices. For $a → ( z ) = [ a 1 ( z ) ; ⋯ ; a s ( z ) ]$, we write $a ← ( z ) = [ a s ( z ) ; ⋯ ; a 1 ( z ) ]$. Then, $a → ( z )$ is $P ϵ$-symmetric if and only if $a → ( z ) = ϵ a ← ( 1 / z )$. Define:
$\overleftarrow{I} = \begin{pmatrix} 0 & 0 & \cdots & 0 & 1 \\ 0 & 0 & \cdots & 1 & 0 \\ \vdots & \vdots & & \vdots & \vdots \\ 1 & 0 & \cdots & 0 & 0 \end{pmatrix} .$
Then, $a ← ( z ) = I ← a → ( z )$, $I ← T = I ←$ and $I ← 2 = I$. Later, if no confusion arises, we will simplify $a → ( z )$ to $a →$, $M ( z )$ to M, and so on.
We denote by $( P ϵ ) s$ the set of all $P ϵ$-symmetric vectors in $L s$. Particularly, when $s = 1$, the vector $a → ( z )$ is reduced to a Laurent polynomial, say, $a ( z )$. Thus, $a ( z ) ∈ P ϵ$ if and only if $a ( z ) = ϵ a ( 1 / z )$.
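As a small sanity check of Definition 1 (a sketch of ours, with hypothetical helper names), the condition $a_j ( z ) = ϵ a_{s + 1 − j} ( 1 / z )$ can be tested directly on coefficient dicts:

```python
# Sketch only: test P_eps-symmetry of an LP vector (Definition 1),
# with each LP stored as an {exponent: coefficient} dict.

def lp_bar(a):
    """a(1/z): negate every exponent."""
    return {-k: c for k, c in a.items()}

def is_P_symmetric(avec, eps):
    """True iff a_j(z) = eps * a_{s+1-j}(1/z) for all j (eps = +1 or -1)."""
    s = len(avec)
    for j in range(s):
        mirror = lp_bar(avec[s - 1 - j])
        for k in set(avec[j]) | set(mirror):
            if abs(avec[j].get(k, 0.0) - eps * mirror.get(k, 0.0)) > 1e-12:
                return False
    return True

assert is_P_symmetric([{1: 1.0}, {0: 1.0}, {-1: 1.0}], +1)   # [z; 1; 1/z]
assert is_P_symmetric([{1: 1.0}, {}, {-1: -1.0}], -1)        # [z; 0; -1/z]
```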
Lemma 1.
Let $( a → , b → ) ∈ L s × L s$ be a symmetric dual pair. Then, they have the same symmetry, i.e., if $a →$ is $P ϵ$-symmetric, so is $b →$.
Proof.
We have $a → T ( z ) b → ( z ) = 1$ so that $a ← T ( 1 / z ) b ← ( 1 / z ) = 1$. Therefore, if $a → ∈ ( P ϵ ) s$, by $a → T ( z ) = ϵ a ← T ( 1 / z )$, we have:
$a → T ( z ) ( b → ( z ) − ϵ b ← ( 1 / z ) ) = a → T ( z ) b → ( z ) − ϵ 2 a ← T ( 1 / z ) b ← ( 1 / z ) = 0 ,$
which yields $b → ( z ) = ϵ b ← ( 1 / z )$, i.e., $b → ( z )$ is $P ϵ$-symmetric. The lemma is proven.  ☐
Definition 3.
A matrix $M ( z ) ∈ L s × s$ is called centrally polar symmetric, denoted by $C$-symmetric, if $M ( z ) = I ← M ( 1 / z ) I ←$.
All $C$-symmetric matrices in $L s × s$ form a semigroup of $L s × s$, denoted by $C s$; and all $L$-invertible, $C$-symmetric matrices in $C s$ form a subgroup of $G s$, denoted by $G C s$. By Definition 3, we have the following:
Proposition 1.
A matrix $M ( z ) = m i , j ( z ) i , j = 1 s$ is $C$-symmetric if and only if:
$m i , j ( z ) = m s + 1 − i , s + 1 − j ( 1 / z ) , 1 ≤ i ≤ s , 1 ≤ j ≤ s .$
Therefore, $M ( z ) ∈ C s ⟺ M T ( z ) ∈ C s$ and $M ( z ) ∈ G C s ⟺ M − 1 ( z ) ∈ G C s$.
We say that $V ( z )$ is a $V ϵ$-symmetric matrix if all columns of $V ( z )$ are $P ϵ$-symmetric.
Lemma 2.
For any $s ∈ N$, there exists a non-singular $V +$-symmetric matrix and a non-singular $V −$-symmetric one.
Proof.
We first prove the lemma for the $V +$-symmetric case by mathematical induction. For $s = 1 , 2 ,$ the matrices $M_1 ( z ) = [ 1 ]$ and $M_2 ( z ) = \begin{pmatrix} z & 1 / z \\ 1 / z & z \end{pmatrix}$ are non-singular $V +$-symmetric matrices because their determinants are not zero. Assume that the statement is true for each $m , 1 ≤ m ≤ k$. We prove that the statement is also true for $k + 1 , ( k > 1 )$. Let $M_{k − 1} ( z )$ be a $( k − 1 ) × ( k − 1 )$ non-singular $V +$-symmetric matrix. Then, so is the following $( k + 1 ) × ( k + 1 )$ matrix:
$M_{k + 1} ( z ) = \begin{pmatrix} z & 0 & 1 / z \\ 0 & M_{k − 1} ( z ) & 0 \\ 1 / z & 0 & z \end{pmatrix} .$
The proof is completed. For the $V −$-symmetric case, $M_1 ( z ) = [ z − 1 / z ]$ and $M_2 ( z ) = \begin{pmatrix} z & - 1 / z \\ - 1 / z & z \end{pmatrix}$ are non-singular and $V −$-symmetric. The remainder of the proof is similar.  ☐
The following proposition describes the role of $C$-symmetric matrices.
Proposition 2.
Any matrix in $C s$ represents a linear transformation from $( P ϵ ) s$ to $( P ϵ ) s$. Conversely, any linear transformation from $( P ϵ ) s$ to $( P ϵ ) s$ is realized by a matrix in $C s$.
Proof.
We first prove the proposition for the case of $( P + ) s$. If $S ( z ) ∈ C s$, then for any $a → ( z ) ∈ ( P + ) s$, writing $b → ( z ) = S ( z ) a → ( z )$, we have:
$b → ( z ) = S ( z ) a → ( z ) = I ← S ( 1 / z ) I ← a ← ( 1 / z ) = I ← ( S ( 1 / z ) a → ( 1 / z ) ) = b ← ( 1 / z ) .$
Hence, $S ( z ) a → ( z ) ∈ ( P + ) s$. On the other hand, if for any $a → ( z ) ∈ ( P + ) s$, $S ( z ) a → ( z ) ∈ ( P + ) s$, then we have:
$S ( z ) a → ( z ) = I ← ( S ( 1 / z ) a → ( 1 / z ) ) = I ← S ( 1 / z ) I ← I ← a → ( 1 / z ) = I ← S ( 1 / z ) I ← a → ( z ) ,$
which yields that the equality:
$S ( z ) A ( z ) = I ← S ( 1 / z ) I ← A ( z )$
holds for any $V +$-symmetric matrix. By Lemma 2, we can choose a non-singular matrix $A ( z )$ in (2), which yields $S ( z ) = I ← S ( 1 / z ) I ←$, i.e., $S ( z ) ∈ C s$. The proposition is proven. For the case of $( P − ) s$, the proof is similar.  ☐
Since $G C s ⊂ C s$, by Proposition 2, $G C s$ is a group of linear transformations on the set $( P ϵ ) s$. For the matrices in $G C s$, we have the following:
Proposition 3.
Assume $S ( z ) ∈ G C s$. Then, for any prime vector $a → ( z ) ∈ ( P ϵ ) s$, the vector $S ( z ) a → ( z ) ∈ ( P ϵ ) s$ is also prime.
Proof.
Assume that $a → ( z )$ is a $P +$-symmetric prime vector. Then, there is a $P +$-symmetric vector $b → ( z )$, such that $a → T b → = 1$. Therefore, we have $( S a → ) T ( S T ) − 1 b → = 1$, which indicates that $S ( z ) a → ( z )$ is a $P +$-symmetric prime vector. The proof is similar for $a → ∈ ( P − ) s$.  ☐
In linear algebra, a well-known result is that each invertible matrix can be written as a product of elementary matrices. To produce a similar factorization of a matrix in $G C s$, we introduce the $C$-symmetric elementary matrices. We first define the $L$-elementary matrices (which may not be $C$-symmetric).
Definition 4.
Let I be the $s × s$ identity matrix. An $s × s$ $L$-elementary matrix is obtained by performing one of the following $L$-elementary row operations on I:
(1)
Interchanging two rows, e.g., $( R i ) ↔ ( R j )$.
(2)
Multiplying a row by a non-zero real number c, e.g., $c R i → R i$.
(3)
Replacing a row by itself plus a multiple $q ( z ) ∈ L$ of another row, e.g., $R i + q ( z ) R j → R i$.
For convenience, we denote the $L$-elementary matrices in (1), (2) and (3) by $E [ i , j ]$, $E ( i ) ( c )$ and $E ( i , j ) ( q )$, and call them Types 1, 2 and 3, respectively. Since $E [ i , j ] = E [ j , i ]$, we agree that $i < j$ in $E [ i , j ]$. It is clear that an $L$-elementary matrix is $L$-invertible, and its inverse is of the same type. Indeed, we have the following:
$( E [ i , j ] )^{- 1} = E [ i , j ] , \quad ( E ( i ) ( c ) )^{- 1} = E ( i ) ( 1 / c ) , \quad ( E ( i , j ) ( q ) )^{- 1} = E ( i , j ) ( − q ) .$
Later, when the type of an $L$-elementary matrix is not stressed, we simply denote it by E. On the other hand, if the dimension of an $L$-elementary matrix needs to be stressed, then we write it as $E s$, $E s [ i j ]$, etc. For developing our SLPME algorithm, we define the $C$-symmetric elementary matrix based on Definition 4.
Definition 5.
Let $s ≥ 2$ be an integer. Write $q ¯ ( z ) = q ( 1 / z )$. In the products below, the left-hand side denotes the $C$-symmetric elementary matrix being defined, and the right-hand side is a product of the $L$-elementary matrices of Definition 4. When $s = 2 m$, the matrices:
$E [ i , j ] = E [ i , j ] E [ s + 1 − j , s + 1 − i ] , \; 1 ≤ i < j ≤ m ,$
$E ( i ) ( c ) = E ( i ) ( c ) E ( s + 1 − i ) ( c ) , \; 1 ≤ i ≤ m ,$
$E ( i , j ) ( q ) = E ( i , j ) ( q ) E ( s + 1 − i , s + 1 − j ) ( q ¯ ) , \; 1 ≤ i , j ≤ m , i ≠ j ,$
are called $C$-symmetric elementary matrices of Type 1, 2 or 3, respectively. When $s = 2 m + 1$, the matrices:
$E [ i , j ] = E [ i , j ] E [ s + 1 − j , s + 1 − i ] , \; 1 ≤ i < j ≤ m ,$
$E ( c ) = E ( m + 1 ) ( c ) \; \text{and} \; E ( i ) ( c ) = E ( i ) ( c ) E ( s + 1 − i ) ( c ) , \; 1 ≤ i ≤ m ,$
$E ( i , j ) ( q ) = E ( i , j ) ( q ) E ( s + 1 − i , s + 1 − j ) ( q ¯ ) , \; 1 ≤ i , j ≤ m + 1 , i ≠ j ,$
are called $C$-symmetric elementary matrices of Type 1, 2 or 3, respectively.
We denote by $E s$ the set of all $C$-symmetric elementary matrices in $G C s$ and by $E i s$ the set of all matrices of type i in $E s$.
We can verify that the inverses of $C$-symmetric elementary matrices are given by the following:
$( E [ i , j ] )^{- 1} = E [ i , j ] , \quad ( E ( i ) ( c ) )^{- 1} = E ( i ) ( 1 / c ) , \quad ( E ( i , j ) ( q ) )^{- 1} = E ( i , j ) ( − q ) .$
If we do not stress the type of $C$-symmetric elementary matrix, we will simply denote it by $E$. On the other hand, if we need to stress the dimension of an $s × s$ $C$-symmetric elementary matrix, we write it as $E s ( i ) ( c )$, $E s ( i , j ) ( q )$, and so on.
Example 1.
Let $q ( z ) ∈ L$ and $c ∈ R \ { 0 }$. All $C$-symmetric elementary matrices in $E 3$ are:
$E ( c ) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & c & 0 \\ 0 & 0 & 1 \end{pmatrix} , \quad E ( 1 ) ( c ) = \begin{pmatrix} c & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & c \end{pmatrix} ,$
$E ( 1 , 2 ) ( q ) = \begin{pmatrix} 1 & q ( z ) & 0 \\ 0 & 1 & 0 \\ 0 & q ( 1 / z ) & 1 \end{pmatrix} , \quad E ( 2 , 1 ) ( q ) = \begin{pmatrix} 1 & 0 & 0 \\ q ( z ) & 1 & q ( 1 / z ) \\ 0 & 0 & 1 \end{pmatrix} .$
By (4), their inverses are:
$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 / c & 0 \\ 0 & 0 & 1 \end{pmatrix} , \quad \begin{pmatrix} 1 / c & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 / c \end{pmatrix} , \quad \begin{pmatrix} 1 & - q ( z ) & 0 \\ 0 & 1 & 0 \\ 0 & - q ( 1 / z ) & 1 \end{pmatrix} , \quad \begin{pmatrix} 1 & 0 & 0 \\ - q ( z ) & 1 & - q ( 1 / z ) \\ 0 & 0 & 1 \end{pmatrix} .$
All $C$-symmetric elementary matrices in $E 4$ are:
$E [ 1 , 2 ] = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix} , \quad E ( 1 ) ( c ) = \begin{pmatrix} c & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & c \end{pmatrix} , \quad E ( 2 ) ( c ) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & c & 0 & 0 \\ 0 & 0 & c & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} ,$
$E ( 1 , 2 ) ( q ) = \begin{pmatrix} 1 & q ( z ) & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & q ( 1 / z ) & 1 \end{pmatrix} , \quad E ( 2 , 1 ) ( q ) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ q ( z ) & 1 & 0 & 0 \\ 0 & 0 & 1 & q ( 1 / z ) \\ 0 & 0 & 0 & 1 \end{pmatrix} .$
Their inverses are:
$\begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix} , \quad \begin{pmatrix} 1 / c & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 / c \end{pmatrix} , \quad \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 / c & 0 & 0 \\ 0 & 0 & 1 / c & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} ,$
$\begin{pmatrix} 1 & - q ( z ) & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & - q ( 1 / z ) & 1 \end{pmatrix} , \quad \begin{pmatrix} 1 & 0 & 0 & 0 \\ - q ( z ) & 1 & 0 & 0 \\ 0 & 0 & 1 & - q ( 1 / z ) \\ 0 & 0 & 0 & 1 \end{pmatrix} .$
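To make Example 1 concrete, here is a small check (our own sketch, not code from the paper) that the Type 3 matrix $E ( 1 , 2 ) ( q )$ of $E 3$ and its claimed inverse $E ( 1 , 2 ) ( − q )$ from (4) multiply to the identity, with matrix entries stored as Laurent polynomial coefficient dicts and a hypothetical $q ( z ) = 2 z + 3 / z$:

```python
# Sketch only: verify (4) for the 3x3 Type 3 matrix of Example 1:
# E^(1,2)(q) * E^(1,2)(-q) = I, entries being LP {exponent: coeff} dicts.

def lp_add(a, b):
    out = dict(a)
    for k, c in b.items():
        out[k] = out.get(k, 0.0) + c
    return {k: c for k, c in out.items() if abs(c) > 1e-12}

def lp_mul(a, b):
    out = {}
    for i, ai in a.items():
        for j, bj in b.items():
            out[i + j] = out.get(i + j, 0.0) + ai * bj
    return {k: c for k, c in out.items() if abs(c) > 1e-12}

def matmul(A, B):
    n = len(A)
    C = [[{} for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            acc = {}
            for k in range(n):
                acc = lp_add(acc, lp_mul(A[i][k], B[k][j]))
            C[i][j] = acc
    return C

def E12(q):
    """The C-symmetric elementary matrix E^(1,2)(q) for s = 3."""
    one, zero = {0: 1.0}, {}
    qbar = {-k: c for k, c in q.items()}        # q(1/z)
    return [[one, dict(q), zero],
            [zero, one, zero],
            [zero, qbar, one]]

q = {1: 2.0, -1: 3.0}                            # hypothetical q(z) = 2z + 3/z
I3 = [[{0: 1.0} if i == j else {} for j in range(3)] for i in range(3)]
assert matmul(E12(q), E12({k: -c for k, c in q.items()})) == I3
```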

## 3. Euclidean Algorithm for SLPME

For simplicity, in this paper we only discuss LPs with real coefficients; our results generalize trivially to LPs with coefficients in the complex field or other number fields. First, we recall some notations and notions used in [1]. We denote by $Π$ the ring of all (real) polynomials and write $Π h = Π \ { 0 }$. We also write $L h = L \ { 0 }$ and denote by $L m$ the group of all nonzero Laurent monomials: $L m = { c z ℓ ∈ L ; c ≠ 0 , ℓ ∈ Z }$. If $a ∈ L h$, writing $a ( z ) = \sum_{k = m}^{n} a_k z^k$, where $n ≥ m$ and $a_m a_n ≠ 0$, we define its highest degree as $deg + ( a ) = n$, its lowest degree as $deg − ( a ) = m$ and its support length as $supp ( a ) = n − m + 1$. When $a = 0$, we agree that $deg + ( 0 ) = − ∞$, $deg − ( 0 ) = ∞$ and $supp ( 0 ) = 0$.
Let the semi-group $Π h c ⊂ Π h$ be defined by $Π h c = { p ∈ Π h : p ( 0 ) ≠ 0 } .$ Then, the power mapping $π : L h → Π h c$,
$π ( a ( z ) ) = z^{− deg_− ( a )} a ( z ) ,$
defines an equivalence relation “∽” in $L h$, i.e., $a ∽ b$ if and only if $π ( a ) = π ( b )$. For convenience, we agree that $π ( 0 ) = 0$. It is obvious that $π ( c z ℓ ) = c$. In [1], we established the following Euclid’s division theorem for Laurent polynomials.
Theorem 1 ($L$-Euclid’s division theorem).
Let $( a , b ) ∈ L h × L h$. Then, there exists a unique pair $( q , r ) ∈ L × L$ such that:
$a = q b + r ,$
where, if $r ( z ) ≠ 0$,
$supp ( r ) + deg − ( a ) − 1 ≤ deg + ( r ) < supp ( b ) + deg − ( a ) − 1 .$
Furthermore, if $supp ( a ) ≥ supp ( b )$, then:
$1 ≤ supp ( q ) ≤ supp ( a ) − supp ( b ) + 1 .$
Remark 1.
In [1], we defined $supp ( a ) = n − m$ for $a ( z ) ≠ 0$ and $supp ( a ) = − ∞$ for $a ( z ) = 0$. In this paper, the definition of the support length is slightly changed so that it agrees with the standard convention. Accordingly, the inequality in (8) is updated to match the new definition.
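Theorem 1 can be realized by shifting both LPs to ordinary polynomials with the power mapping $π$, performing polynomial long division, and shifting back. The sketch below (our own helper names, using the support-length convention of Remark 1) follows that route:

```python
# Sketch only: L-Euclid's division a = q*b + r (Theorem 1), with LPs as
# {exponent: coefficient} dicts and supp as in Remark 1 (supp = n - m + 1).

def deg_plus(a):
    return max(a) if a else float("-inf")

def deg_minus(a):
    return min(a) if a else float("inf")

def supp_len(a):
    return max(a) - min(a) + 1 if a else 0

def lp_divmod(a, b):
    """Return (q, r) with a = q*b + r and deg_plus(r) < supp(b) + deg_minus(a) - 1."""
    assert b, "division by the zero LP"
    sa, sb = min(a), min(b)
    r = {k - sa: c for k, c in a.items()}   # pi(a), an ordinary polynomial
    d = {k - sb: c for k, c in b.items()}   # pi(b)
    q, degd = {}, max(d)
    while r and max(r) >= degd:             # polynomial long division
        k = max(r)
        coeff = r[k] / d[degd]
        q[k - degd] = q.get(k - degd, 0.0) + coeff
        for j, c in d.items():
            r[k - degd + j] = r.get(k - degd + j, 0.0) - coeff * c
        r = {kk: cc for kk, cc in r.items() if abs(cc) > 1e-12}
    # undo the power-mapping shifts
    return ({k + sa - sb: c for k, c in q.items()},
            {k + sa: c for k, c in r.items()})

# (z + 3 + 1/z) = (1 + 2/z)(z + 1) - 1/z
q, r = lp_divmod({1: 1.0, 0: 3.0, -1: 1.0}, {1: 1.0, 0: 1.0})
assert q == {0: 1.0, -1: 2.0} and r == {-1: -1.0}
```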
In [1], we already developed a Euclidean algorithm for LPME based on Theorem 1. We now develop a Euclidean algorithm for SLPME. For this purpose, we introduce two lemmas.
Lemma 3.
Let $( a , b ) ∈ L h × L h$ and $s ∈ Z$ be an integer satisfying $s ≤ deg − ( a )$ and $deg + ( a ) − s + 1 ≥ supp ( b )$. Then, there exists a unique pair $( q , r ) ∈ L h × L$ such that:
$a = q b + r ,$
where $1 ≤ supp q ≤ deg + ( a ) − s − supp ( b ) + 2$ and:
$supp ( r ) + s − 1 ≤ deg + ( r ) < supp ( b ) + s − 1 , if r ( z ) ≠ 0 .$
Proof.
In the case that $s = deg − a$, we have $supp ( a ) = deg + ( a ) − s + 1$. Hence, the lemma is identical to Theorem 1. We now assume $s < deg − ( a )$. Define $a ˜ ( z ) = a ( z ) + z s$. Then, $deg − ( a ˜ ) = s , deg + ( a ˜ ) = deg + ( a ) ,$ and $supp ( a ˜ ) = deg + ( a ) − s + 1 ≥ supp ( b )$. By Theorem 1, there is a unique pair $( q , r ˜ )$ such that:
$a ˜ = q b + r ˜ ,$
where, if $r ˜ ( z ) ≠ 0$,
$supp ( r ˜ ) + deg − ( a ˜ ) − 1 ≤ deg + ( r ˜ ) < supp ( b ) + deg − ( a ˜ ) − 1 , \quad 1 ≤ supp ( q ) ≤ supp ( a ˜ ) − supp ( b ) + 1 .$
Let $r ( z ) = r ˜ ( z ) − z s$. We have:
$a = q b + r .$
If $r ˜ ( z ) = z s$, then $supp ( r ) = 0$. In this case, it is clear that there exists a unique $q ∈ L h$ such that (9) holds. We now consider the case that $r ˜ ( z ) ≠ z s$. If $r ˜ ( z ) = 0$, then $r ( z ) = − z s$, so that $supp ( r ) = 1$ and $deg + ( r ) = s$. In this case, we must have $supp ( b ) ≥ 2$. Indeed, if $supp ( b ) = 1$, then $b ( z ) = c z ℓ , c ≠ 0$. Setting $q ( z ) = \frac{1}{c z^{ℓ}} a ( z )$ in (9), we get $r ( z ) = 0$, which leads to a contradiction with $r ( z ) = − z s$. Hence, we have $supp ( b ) ≥ 2$ so that (10) holds. Finally, if $r ˜$ is neither zero nor $z s$, by $r ˜ ( z ) = r ( z ) + z s$, we have $deg + ( r ) = deg + ( r ˜ )$ and $deg − ( r ) ≥ s$ so that (10) holds. The proof is completed.  ☐
For a real number x, we denote by $[ x ]$ the integer part of x, by $⌈ x ⌉$ the smallest integer no less than x and by $⌊ x ⌋$ the largest integer no greater than x. For instance, for $m ∈ Z$, $[ \frac{2 m + 1}{2} ] = ⌊ \frac{2 m + 1}{2} ⌋ = m$ and $⌈ \frac{2 m + 1}{2} ⌉ = m + 1$.
Lemma 4.
Assume $a ( z ) ∈ P ϵ$, $b ( z ) ∈ L h$ and $supp ( a ) > supp ( b )$. Define $k = ⌈ \frac{supp ( a ) − supp ( b )}{2} ⌉$. Then, there is a pair $( q ( z ) , r ( z ) ) ∈ L h × L$ such that:
$a ( z ) = q ( z ) b ( z ) + ϵ q ( 1 / z ) b ( 1 / z ) + r ( z ) ,$
where $r ( z ) ∈ P ϵ$, $1 ≤ supp ( q ) ≤ supp ( a ) − supp ( b ) + 1 − k$ and $supp ( r ) ≤ supp ( a ) − 2 k ≤ supp ( b )$.
Proof.
By $a ( z ) = ϵ a ( 1 / z )$, we may write $a ( z ) = \sum_{j = − m}^{m} a_j z^j$, where $a_j = ϵ a_{− j}$ and $a_m ≠ 0$ so that $supp ( a ) = 2 m + 1$. Define:
$a_t ( z ) = \sum_{j = m − k + 1}^{m} a_j z^j + \frac{1}{2} \sum_{j = k − m}^{m − k} a_j z^j .$
Then, $deg + ( a_t ) = m$ and $a_t ( z ) + ϵ a_t ( 1 / z ) = a ( z )$. Write $n = supp ( b )$. By $k = ⌈ \frac{supp ( a ) − supp ( b )}{2} ⌉$, we have $n = 2 m − 2 k + 1 + \frac{1}{2} ( 1 + ( − 1 )^{n − 1} )$. Applying Lemma 3 to $a_t ( z )$ and $b ( z )$ by setting $s = k − m$, we obtain a unique pair $( q , \hat{r} ) ∈ L h × L$ such that:
$a t ( z ) = q ( z ) b ( z ) + r ^ ( z ) ,$
where $deg + ( \hat{r} ) < n + ( k − m ) − 1 = m − k + \frac{1}{2} ( 1 + ( − 1 )^{n − 1} )$ and $deg − ( \hat{r} ) ≥ k − m$. Since $a_t ( z ) + ϵ a_t ( 1 / z ) = a ( z )$, we have:
$a ( z ) = q ( z ) b ( z ) + ϵ q ( 1 / z ) b ( 1 / z ) + r ^ ( z ) + ϵ r ^ ( 1 / z ) .$
Writing $r ( z ) = r ^ ( z ) + ϵ r ^ ( 1 / z )$, we have $r ( z ) ∈ P ϵ$ and $supp ( r ) ≤ 2 m − 2 k + 1 ≤ supp ( b )$. The proof is completed.  ☐
The proof suggests the following Euclidean $P$-symmetric division algorithm for computing $q ( z )$ and $r ( z )$ in the division of $a ( z ) ÷ b ( z )$ described by Lemma 4.
Algorithm 1 (Euclidean $P$-symmetric division algorithm).
Assume that $2 m > n$ and:
$a ( z ) = \sum_{j = − m}^{m} a_j z^j ∈ P ϵ , \quad b ( z ) = \sum_{j = − l}^{n − l} b_j z^j ∈ L h .$
1.
Compute $k = m − ⌊ n / 2 ⌋$.
2.
Construct $a_π ( z ) = z^{m − k} \left( \sum_{j = m − k + 1}^{m} a_j z^j + \frac{1}{2} \sum_{j = k − m}^{m − k} a_j z^j \right)$ and $b_π ( z ) = z^l b ( z )$.
3.
Perform polynomial division $a π = q π b π + r π$ to produce $( q π , r π ) ∈ Π 2$.
4.
Output $[ q ( z ) , r ( z ) ] = [ z^{k − m + l} q_π ( z ) , \; z^{k − m} r_π ( z ) + ϵ z^{m − k} r_π ( 1 / z ) ]$.
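A direct transcription of Algorithm 1 (our own sketch; helper names are ours) splits a $P ϵ$-symmetric $a$ into a top half $a_t$, divides $a_t$ by $b$ as in Lemma 3, and symmetrizes the remainder:

```python
# Sketch only: Euclidean P-symmetric division (Lemma 4 / Algorithm 1):
#   a(z) = q(z)b(z) + eps*q(1/z)b(1/z) + r(z),  r(z) P_eps-symmetric.
# LPs are {exponent: coefficient} dicts.

def lp_divmod(a, b):
    """Plain L-Euclid division a = q*b + r (Theorem 1)."""
    sa, sb = min(a), min(b)
    r = {k - sa: c for k, c in a.items()}
    d = {k - sb: c for k, c in b.items()}
    q, degd = {}, max(d)
    while r and max(r) >= degd:
        k = max(r)
        coeff = r[k] / d[degd]
        q[k - degd] = q.get(k - degd, 0.0) + coeff
        for j, c in d.items():
            r[k - degd + j] = r.get(k - degd + j, 0.0) - coeff * c
        r = {kk: cc for kk, cc in r.items() if abs(cc) > 1e-12}
    return ({k + sa - sb: c for k, c in q.items()},
            {k + sa: c for k, c in r.items()})

def symmetric_divmod(a, b, eps):
    """Symmetric division of a P_eps-symmetric a by b, supp(a) > supp(b)."""
    m = max(a)                        # supp(a) = 2m + 1 by symmetry
    k = m - (max(b) - min(b)) // 2    # step 1 (n in Algorithm 1 is supp(b) - 1)
    at = {j: c for j, c in a.items() if j > m - k}
    for j, c in a.items():            # halve the middle block of a
        if k - m <= j <= m - k:
            at[j] = at.get(j, 0.0) + 0.5 * c
    s = k - m
    at[s] = at.get(s, 0.0) + 1.0      # Lemma 3: divide a_t + z^s ...
    q, rh = lp_divmod(at, b)
    rh[s] = rh.get(s, 0.0) - 1.0      # ... then drop z^s from the remainder
    rh = {j: c for j, c in rh.items() if abs(c) > 1e-12}
    r = dict(rh)                      # r(z) = r_hat(z) + eps * r_hat(1/z)
    for j, c in rh.items():
        r[-j] = r.get(-j, 0.0) + eps * c
    return q, {j: c for j, c in r.items() if abs(c) > 1e-12}

# a = z^2 + 1 + 1/z^2, b = z + 1:  a = (z-1)b + (1/z-1)b(1/z) + 3
q, r = symmetric_divmod({2: 1.0, 0: 1.0, -2: 1.0}, {1: 1.0, 0: 1.0}, +1)
assert q == {1: 1.0, 0: -1.0} and r == {0: 3.0}
```

Note that the remainder $r = 3$ is $P +$-symmetric with $supp ( r ) = 1 ≤ supp ( b )$, as Lemma 4 guarantees.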
For $a → ∈ L s$, we define $∥ a → ∥ 0$ as the number of nonzero entries in $a →$ and define $supp a → = ∑ 1 ≤ i ≤ s supp ( a i )$. The following theorem describes the relation between the Euclidean $P$-symmetric division and the $C$-symmetric elementary transformation on $( P ϵ ) s$.
Theorem 2.
Assume that $a → 0 ∈ ( P ϵ ) s$ with $∥ a → 0 ∥ 0 > 2$. Then, there is a $C$-symmetric elementary matrix $E$ of Type 3, such that $a → 1 = E a → 0 ∈ ( P ϵ ) s$ with $supp a → 1 < supp a → 0$ and $∥ a → 1 ∥ 0 ≤ ∥ a → 0 ∥ 0$.
Proof.
Write $a → 0 = [ a 1 0 ; ⋯ ; a s 0 ]$. We first consider the case of $s = 2 m$. Since $∥ a → 0 ∥ 0 > 2$, by the $P ϵ$-symmetry of $a → 0$, there are at least two nonzero entries in $[ a 1 0 , ⋯ , a m 0 ]$, say $supp ( a j 0 ) ≥ supp ( a i 0 ) > 0$, where $i ≠ j$ and $i , j ≤ m$. By Theorem 1, there is a pair $( q , r ) ∈ L h × L$ such that $a j 0 = q a i 0 + r$ and $supp ( r ) < supp ( a i 0 )$, where r possibly vanishes. Let $a → 1 = E ( j , i ) ( − q ) a → 0 ∈ P s$. Then, $a j 1 ( z ) = r ( z )$, $a s + 1 − j 1 ( z ) = ϵ r ( 1 / z )$, and the other entries are unchanged. Hence, $supp a → 1 < supp a → 0$ and $∥ a → 1 ∥ 0 ≤ ∥ a → 0 ∥ 0$.
We now consider the case of $s = 2 m + 1 , m ≥ 1$. If there are at least two nonzero entries in $[ a 1 0 ; ⋯ ; a m 0 ]$, the proof is similar to what we have done for $s = 2 m$. Otherwise, $∥ a → 0 ∥ 0 = 3$, so that there is a nonzero entry $a i 0 , i < m + 1$ and $a m + 1 0 ≠ 0$. If $supp ( a i 0 ) ≥ supp ( a m + 1 0 )$, applying Theorem 1, we produce the pair $( q , r )$ such that $a i 0 = q a m + 1 0 + r$. Let $a → 1 = E ( i , m + 1 ) ( − q ) a → 0$. Then, in $a → 1 ( z )$, $a i 1 ( z ) = r ( z )$, $a s + 1 − i 1 ( z ) = ϵ r ( 1 / z )$, and the other entries are unchanged. Else, if $supp ( a m + 1 0 ) > supp ( a i 0 )$, by Lemma 4, there is $q ∈ L h$ and $r ∈ P ϵ$ such that:
$a m + 1 0 ( z ) = q ( z ) a i 0 ( z ) + ϵ q ( 1 / z ) a i 0 ( 1 / z ) + r ( z ) ,$
where $supp ( r ) < supp ( a m + 1 0 )$. Let $a → 1 = E ( m + 1 , i ) ( − q ) a → 0$. Then, $a m + 1 1 ( z ) = r ( z )$, and the other entries are unchanged. In both cases, we have $a → 1 ∈ ( P ϵ ) s$, $supp a → 1 < supp a → 0$ and $∥ a → 1 ∥ 0 ≤ ∥ a → 0 ∥ 0$. The proof is completed.  ☐
Definition 6.
Let $s ≥ 2$. A $P$-symmetric prime vector $a → ( z )$ is called the smallest one if it is given as follows:
(1)
$a → ( z ) = c e → m + 1 ∈ ( P + ) 2 m + 1$, where $e → m + 1$ is the $( m + 1 )$-th coordinate basis vector of $R 2 m + 1$.
(2)
$a → ( z ) ∈ ( P − ) 2 m + 1$ with only two nonzero entries $a i ( z ) = d ( z )$ and $a 2 m + 2 − i ( z ) = − d ( 1 / z ) , 1 ≤ i ≤ m$.
(3)
$a → ( z ) ∈ ( P ϵ ) 2 m$ with only two nonzero entries: $a i ( z ) = d ( z )$ and $a 2 m + 1 − i ( z ) = ϵ d ( 1 / z ) , 1 ≤ i ≤ m$.
Particularly, we call $a → ( z )$ normalized if $c = 1$ in (1) and $i = 1$ in (2) and (3).
In Definition 6, because $a → ( z )$ is prime, $d ( z )$ in (2) and (3) satisfies $gcd L ( d ( z ) , d ( 1 / z ) ) = 1$. Besides, we may normalize the smallest $P$-symmetric prime vector as follows: In (1), if $c ≠ 1$, then $E ( 1 / c ) a → ( z ) = e → m + 1$ is normalized. In (2) and (3), if $i ≠ 1$, then $E [ 1 , i ] a → ( z )$ is the normalized one. Repeating the $C$-symmetric elementary transformations in Theorem 2, we may transform a $P$-symmetric prime vector to the smallest one.
Corollary 1.
Assume that $a → 0$ is a $P ϵ$-symmetric prime vector. Then, there are finitely many $C$-symmetric elementary matrices ${ E j } j = 1 n$ of Type 3 such that $a → n = E n E n − 1 ⋯ E 1 a → 0$ is the smallest $P ϵ$-symmetric prime vector.
Proof.
We first assume that the prime vector $a → 0 ∈ ( P + ) 2 m + 1$, and it is not the smallest one. Then, $∥ a → 0 ∥ 0 > 2$. By Theorem 2 and mathematical induction, we can construct finitely many $C$-symmetric elementary matrices $E 1 , ⋯ , E k ∈ E 3$ such that $a → k = E k ⋯ E 1 a → 0$ has only one nonzero entry in ${ a 1 k , ⋯ , a m + 1 k }$. If $a m + 1 k ( z ) ≠ 0$, then $a → k = c e → m + 1$. Otherwise, there is $1 ≤ i ≤ m$ such that $a i k ( z ) ≠ 0$. Writing $d ( z ) = a i k ( z )$, by the $P +$-symmetry of $a → k ( z )$, we have $a 2 m + 2 − i k = d ( 1 / z )$ and $gcd L ( d ( z ) , d ( 1 / z ) ) = 1$. By the extended Euclidean algorithm in [1], we can find an LP pair $( g 1 ( z ) , g 2 ( z ) )$ such that $d ( z ) g 1 ( z ) + d ( 1 / z ) g 2 ( z ) = 1 .$ Let $d ˜ ( z ) = \frac{1}{2} ( g 1 ( z ) + g 2 ( 1 / z ) )$. Then:
$d ( z ) d ˜ ( z ) + d ( 1 / z ) d ˜ ( 1 / z ) = 1 .$
Defining $E k + 1 = E ( m + 1 , i ) ( d ˜ )$ and $E k + 2 = E ( i , m + 1 ) ( − d )$, we have $E k + 2 E k + 1 a → k = e → m + 1$. The proof for the case of $a → 0 ∈ ( P + ) 2 m + 1$ is completed.
We now consider the case of $a → 0 ∈ ( P − ) 2 m + 1$. Similar to the proof above, we can construct $E 1 , ⋯ , E k ∈ E 3$ such that $a → k = E k ⋯ E 1 a → 0$ has only one nonzero entry in ${ a 1 k , ⋯ , a m + 1 k }$. If $a m + 1 k ≠ 0$, because $a → k$ is prime, $a m + 1 k ( z ) = c z ℓ$. By $a → k ∈ ( P − ) 2 m + 1$, we would have $a m + 1 k ( z ) = − a m + 1 k ( 1 / z )$, which yields $c = 0$. Therefore, the only nonzero entry in ${ a 1 k , ⋯ , a m + 1 k }$ cannot be $a m + 1 k$. Assume now $a i k ( z ) = d ( z ) , i ≤ m$. Then, $a 2 m + 2 − i k ( z ) = − d ( 1 / z )$ and $a → k$ is smallest. The proof for $a → k ∈ ( P − ) 2 m + 1$ is completed. The proof for the case of $s = 2 m$ is similar.  ☐
When the vector $[ d ( z ) ; d ( 1 / z ) ]$ is prime, we choose $d ˜ ( z )$ satisfying (12) and define:
$D_ϵ ( z ) = \begin{pmatrix} d ( z ) & - ϵ \tilde{d} ( 1 / z ) \\ ϵ d ( 1 / z ) & \tilde{d} ( z ) \end{pmatrix} .$
Then, $D ϵ ( z )$ is an SLPME of the vector $[ d ( z ) ; ϵ d ( 1 / z ) ]$. The inverse of $D ϵ ( z )$ is:
$D_ϵ^{- 1} ( z ) = \begin{pmatrix} \tilde{d} ( z ) & ϵ \tilde{d} ( 1 / z ) \\ - ϵ d ( 1 / z ) & d ( z ) \end{pmatrix} .$
In (13), if we set $d ( z ) = \tilde{d} ( z ) = \frac{\sqrt{2}}{2}$, then $D_ϵ ( z )$ is reduced to:
$J_ϵ = \begin{pmatrix} \frac{\sqrt{2}}{2} & - ϵ \frac{\sqrt{2}}{2} \\ ϵ \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \end{pmatrix} ,$
which will be used in the construction of SLPMEs. In what follows, the submatrix of M that contains all elements $m k j$ of M with $k ∈ { k 1 , ⋯ , k ℓ }$ and $j ∈ { j 1 , ⋯ , j ℓ }$ is denoted by $M ( [ k 1 , ⋯ , k ℓ ] , [ j 1 , ⋯ , j ℓ ] )$.
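Numerically, $J_ϵ$ is simply a constant rotation/reflection by 45°; a quick check (our own sketch) confirms that $J_ϵ^{- 1} = J_ϵ^T$, i.e., it is an orthogonal matrix:

```python
# Sketch only: J_eps is a constant orthogonal matrix, so J_eps^{-1} = J_eps^T.
import math

def J(eps):
    c = math.sqrt(2) / 2
    return [[c, -eps * c], [eps * c, c]]

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

for eps in (+1, -1):
    Je = J(eps)
    JeT = [[Je[0][0], Je[1][0]], [Je[0][1], Je[1][1]]]   # transpose
    P = matmul2(Je, JeT)
    assert all(abs(P[i][j] - (1.0 if i == j else 0.0)) < 1e-12
               for i in range(2) for j in range(2))
```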
Lemma 5.
Let $s ≥ 2$ and $a → ( z ) ∈ L s$ be a normalized smallest $P$-symmetric prime vector. Let $D ϵ ( z )$ be the matrix in (13). Assume that $v ( z ) ∈ P +$, $w j + ( z ) ∈ P +$ and $w j − ∈ P −$ are arbitrary. Write:
$W_j^ϵ ( z ) = \begin{pmatrix} - d ( z ) w_j^+ ( z ) & ϵ d ( z ) w_j^- ( z ) \\ - ϵ d ( 1 / z ) w_j^+ ( z ) & d ( 1 / z ) w_j^- ( z ) \end{pmatrix} ,$
and:
$u_j ( z ) = \frac{\sqrt{2}}{2} ( w_j^+ ( z ) + w_j^- ( z ) ) .$
Then, an SLPME $A ( z )$ of $a → ( z )$ is constructed as follows.
(i)
For $a → ( z ) = e → m + 1 ∈ ( P + ) 2 m + 1$, we define $A ( z )$ as the following:
$A ( m + 1 , : ) = [ 1 , − w 1 + ( z ) , ⋯ , − w m + ( z ) , w m − ( z ) , ⋯ , w 1 − ( z ) ] , \quad A ( [ m + 1 − j , m + 1 + j ] , [ m + 2 − j , m + 1 + j ] ) = J + , \; 1 ≤ j ≤ m ,$
and the other entries are zero. Its inverse $A − 1 ( z )$ is the following:
$A − 1 ( 1 , : ) = [ u 1 ( z ) , ⋯ , u m ( z ) , 1 , u m ( 1 / z ) , ⋯ , u 1 ( 1 / z ) ] , \quad A − 1 ( [ m + 2 − j , m + 1 + j ] , [ m + 1 − j , m + 1 + j ] ) = J − , \; 1 ≤ j ≤ m ,$
and the other entries vanish.
(ii)
For $a → ( z ) = [ d ( z ) ; 0 ; ⋯ ; 0 ; − d ( 1 / z ) ] ∈ ( P − ) 2 m + 1$,
$A ( [ 1 , 2 m + 1 ] , [ 1 , 2 m + 1 ] ) = D − ( z ) , A ( [ 1 , m + 1 , 2 m + 1 ] , m + 1 ) = [ − d ( z ) v ( z ) ; 1 ; − d ( 1 / z ) v ( z ) ] , A ( [ 1 , 2 m + 1 ] , [ j + 1 , 2 m + 1 − j ] ) = W j − ( z ) , 1 ≤ j ≤ m − 1 , A ( [ m + 1 − j , m + 1 + j ] , [ m + 1 − j , m + 1 + j ] ) = J + , 1 ≤ j ≤ m − 1 ,$
and the other entries vanish. Its inverse $A − 1 ( z )$ is the following:
$A − 1 ( m + 1 , m + 1 ) = 1 , A − 1 ( [ 1 , 2 m + 1 ] , [ 1 , 2 m + 1 ] ) = D − − 1 ( z ) , A − 1 ( 1 , [ 2 : 2 m ] ) = [ u 1 ( z ) , ⋯ , u m − 1 ( z ) , v ( z ) , − u m − 1 ( 1 / z ) , ⋯ , − u 1 ( 1 / z ) ] , A − 1 ( [ m + 1 − j , m + 1 + j ] , [ m + 1 − j , m + 1 + j ] ) = J − , 1 ≤ j ≤ m − 1 ,$
and the other entries vanish.
(iii)
For $a → ( z ) = [ d ( z ) ; 0 ; ⋯ ; 0 ; ϵ d ( 1 / z ) ] ∈ ( P ϵ ) 2 m$,
$A ( [ 1 , 2 m ] , [ 1 , 2 m ] ) = D ϵ ( z ) , A ( [ 1 , 2 m ] , [ j + 1 , 2 m − j ] ) = W j ϵ ( z ) , 1 ≤ j ≤ m − 1 , A ( [ m + 1 − j , m + j ] , [ m + 1 − j , m + j ] ) = J ϵ , 1 ≤ j ≤ m − 1 ,$
and the other entries vanish. Its inverse $A − 1 ( z )$ is the following:
$A − 1 ( [ 1 , 2 m ] , [ 1 , 2 m ] ) = D ϵ − 1 ( z ) , A − 1 ( 1 , 2 : 2 m − 1 ) = [ u 1 ( z ) , ⋯ , u m − 1 ( z ) , ϵ u m − 1 ( 1 / z ) , ⋯ , ϵ u 1 ( 1 / z ) ] , A − 1 ( [ m + 1 − j , m + j ] , [ m + 1 − j , m + j ] ) = J ϵ − 1 , 1 ≤ j ≤ m − 1 ,$
and the other entries vanish.
Proof.
Recall that $w j + ( z ) = w j + ( 1 / z ) , w j − ( z ) = − w j − ( 1 / z ) , v ( z ) = v ( 1 / z )$. A direct computation then verifies that $A ( z )$ in (i), (ii) or (iii) is $V$-symmetric and $L$-invertible, and that $A − 1 ( z )$ is given by (18), (20) or (22), respectively. The proof is completed.  ☐
The SLPME of the smallest $P$-symmetric prime vector is not unique because $w j + ( z ) , w j − ( z ) ,$ and $v ( z )$ can be arbitrary. Besides, each $J ϵ$ can be replaced by $D ϵ ( z )$ in (13), where $d ( z )$ and $d ˜ ( z )$ can also be freely chosen.
We show the SLPMEs of some smallest $P$-symmetric prime vectors in the following example.
Example 2.
(i)
An SLPME of $a → ( z ) = [ 0 ; 0 ; 1 ; 0 ; 0 ]$ is given by:
$A ( z ) = \begin{pmatrix} 0 & \frac{\sqrt{2}}{2} & 0 & 0 & - \frac{\sqrt{2}}{2} \\ 0 & 0 & \frac{\sqrt{2}}{2} & - \frac{\sqrt{2}}{2} & 0 \\ 1 & - w_1^+ ( z ) & - w_2^+ ( z ) & w_2^- ( z ) & w_1^- ( z ) \\ 0 & 0 & \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} & 0 \\ 0 & \frac{\sqrt{2}}{2} & 0 & 0 & \frac{\sqrt{2}}{2} \end{pmatrix} ,$
whose inverse is:
$A^{- 1} ( z ) = \begin{pmatrix} u_1 ( z ) & u_2 ( z ) & 1 & u_2 ( 1 / z ) & u_1 ( 1 / z ) \\ \frac{\sqrt{2}}{2} & 0 & 0 & 0 & \frac{\sqrt{2}}{2} \\ 0 & \frac{\sqrt{2}}{2} & 0 & \frac{\sqrt{2}}{2} & 0 \\ 0 & - \frac{\sqrt{2}}{2} & 0 & \frac{\sqrt{2}}{2} & 0 \\ - \frac{\sqrt{2}}{2} & 0 & 0 & 0 & \frac{\sqrt{2}}{2} \end{pmatrix} .$
(ii)
An SLPME of $a → ( z ) = [ d ( z ) ; 0 ; 0 ; 0 ; − d ( 1 z ) ]$ is given by:
$A ( z ) = d ( z ) − d ( z ) w + ( z ) − d ( z ) v ( z ) − d ( z ) w − ( z ) d ˜ ( 1 z ) 0 2 2 0 2 2 0 0 0 1 0 0 0 − 2 2 0 2 2 0 − d ( 1 z ) d ( 1 z ) w + ( z ) − d ( 1 z ) v ( z ) d ( 1 z ) w − ( z ) d ˜ ( z ) ,$
whose inverse is:
$A − 1 ( z ) = d ˜ ( z ) u ( z ) v ( z ) − u ( 1 z ) − d ˜ ( 1 z ) 0 2 2 0 − 2 2 0 0 0 1 0 0 0 2 2 0 2 2 0 d ( 1 z ) 0 0 0 d ( z ) .$
(iii)
An SLPME of $a → ( z ) = [ d ( z ) ; 0 ; 0 ; 0 ; 0 ; ϵ d ( 1 z ) ]$ is:
$A ( z ) = d ( z ) − d ( z ) w 1 + ( z ) − d ( z ) w 2 + ( z ) − ϵ d ( 1 z ) w 2 − ( z ) − ϵ d ( 1 z ) w 1 − ( z ) − ϵ d ˜ ( 1 z ) 0 2 2 0 0 − ϵ 2 2 0 0 0 2 2 − ϵ 2 2 0 0 0 0 ϵ 2 2 2 2 0 0 0 ϵ 2 2 0 0 2 2 0 ϵ d ( 1 z ) − ϵ d ( 1 z ) w 1 + ( z ) − ϵ d ( 1 z ) w 2 + ( z ) d ( 1 z ) w 2 − ( z ) d ( 1 z ) w 1 − ( z ) d ˜ ( z ) ,$
whose inverse is:
$A − 1 ( z ) = d ˜ ( z ) u 1 ( z ) u 2 ( z ) ϵ u 2 ( 1 z ) ϵ u 1 ( 1 z ) ϵ d ˜ ( 1 z ) 0 2 2 0 0 ϵ 2 2 0 0 0 2 2 ϵ 2 2 0 0 0 0 − ϵ 2 2 2 2 0 0 0 − ϵ 2 2 0 0 2 2 0 − ϵ d ( 1 z ) 0 0 0 0 d ( z ) .$
We now give the main theorem for SLPME.
Theorem 3 (Euclidean symmetric division algorithm for SLPME).
Let $a → ( z ) ∈ L s$ be a $P$-symmetric prime vector. Then, the following Euclidean symmetric division algorithm realizes its SLPME:
1.
Apply Euclidean symmetric division to construct the $C$-symmetric elementary matrices $E 1 , ⋯ , E n$ such that $a → n = E n ⋯ E 1 a →$ is a normalized smallest $P$-symmetric prime vector.
2.
Apply Lemma 5 to construct an SLPME $A n ( z )$ of $a → n$ and its inverse $( A n ) − 1 ( z )$ by choosing $v ( z )$, $w j + ( z )$ and $w j − ( z )$ freely, say $v = w j + = w j − = 0$.
3.
Construct the SLPME for $a →$ by:
$A ( z ) = E 1 − 1 ⋯ E n − 1 A n ( z ) , A − 1 ( z ) = A n − 1 ( z ) E n ⋯ E 1 .$
Then, $A ( z )$ is an SLPME of $a →$.
If a dual pair $( a → , b → )$ is given, then Step (2) is replaced by the following to compute $A n ( z )$.
2.a
Compute $b → n ( z ) = ( E n − 1 ) T ⋯ ( E 1 − 1 ) T b → ( z )$.
2.b
If $a → ∈ ( P + ) 2 m + 1$, set:
$w j + ( z ) = 2 2 ( b j n ( z ) + b j n ( 1 / z ) ) , 1 ≤ j ≤ m , w j − ( z ) = 2 2 ( b j n ( z ) − b j n ( 1 / z ) ) , 1 ≤ j ≤ m .$
If $a →$ in $( P − ) 2 m + 1$ or in $( P ϵ ) 2 m$, set:
$w j + ( z ) = 2 2 ( b j + 1 n ( z ) + b j + 1 n ( 1 / z ) ) , 1 ≤ j ≤ m − 1 , w j − ( z ) = 2 2 ( b j + 1 n ( z ) − b j + 1 n ( 1 / z ) ) , 1 ≤ j ≤ m − 1 ,$
and also set $v ( z ) = b m + 1 n ( z )$ if $a → ∈ ( P − ) 2 m + 1$.
2.c
Construct the SLPME $A n ( z )$ as in Lemma 5 using $v ( z )$, $w j + ( z )$ and $w j − ( z )$.
Then, $[ A ( z ) , A − 1 ( z ) ]$ is an SLPME of the dual pair $( a → , b → )$.
Proof.
By the construction of $A n ( z )$, its first column $A n ( : , 1 )$ is the smallest $P$-symmetric prime vector. Since $A ( z ) = E 1 − 1 ⋯ E n − 1 A n ( z )$, we have $A ( : , 1 ) = a →$ and $A ( z )$ is $V$-symmetric and $L$-invertible, whose inverse can be computed by $A − 1 ( z ) = A n − 1 E n ⋯ E 1$. Hence, $A ( z )$ is an SLPME of $a →$. Assume now the dual pair $( a → , b → )$ is given. The computations in Steps (2.a) and (2.b) yield $A n − 1 ( 1 , : ) = ( b → n ) T$. Since $A − 1 = A n − 1 E n ⋯ E 1$, $A − 1 ( 1 , : ) = b → T$. Hence, $[ A ( z ) , A − 1 ( z ) ]$ is an SLPME of the pair $( a → , b → )$. The proof is completed.  ☐
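The bookkeeping in Step 3 can be sanity-checked numerically. The sketch below uses arbitrary constant (degree-zero) matrices rather than genuine Laurent polynomial matrices, so it only illustrates the algebra of Step 3, not the symmetric division itself; all matrix values are illustrative assumptions, not from the paper.

```python
# Toy numeric check of Step 3: if A_n has first column a_n and its inverse
# has first row b_n^T, then A = E1^{-1} E2^{-1} A_n has first column
# a = E1^{-1} E2^{-1} a_n, and A^{-1} = A_n^{-1} E2 E1 has first row b^T
# with b^T = b_n^T E2 E1.  Exact rational arithmetic via Fraction.
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def inv3(M):
    # inverse of a 3x3 matrix via the adjugate formula
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    adj = [[e*i - f*h, c*h - b*i, b*f - c*e],
           [f*g - d*i, a*i - c*g, c*d - a*f],
           [d*h - e*g, b*g - a*h, a*e - b*d]]
    return [[x / det for x in row] for row in adj]

E1 = [[F(1), 0, 0], [F(2), F(1), 0], [0, 0, F(1)]]   # elementary row ops
E2 = [[F(1), 0, 0], [0, F(1), 0], [F(3), 0, F(1)]]
An = [[F(1), F(1), 0], [F(0), F(1), F(2)], [F(1), 0, F(1)]]  # invertible

A = matmul(matmul(inv3(E1), inv3(E2)), An)
Ainv = matmul(matmul(inv3(An), E2), E1)

an = [[row[0]] for row in An]                # first column of A_n
a = matmul(matmul(inv3(E1), inv3(E2)), an)
assert [row[0] for row in A] == [row[0] for row in a]

bn = [inv3(An)[0]]                          # first row of A_n^{-1}
b = matmul(matmul(bn, E2), E1)
assert Ainv[0] == b[0]

I3 = [[F(1), 0, 0], [0, F(1), 0], [0, 0, F(1)]]
assert matmul(Ainv, A) == I3                # A^{-1} really inverts A
```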

## 4. Application in the Construction of Symmetric Multi-Band Perfect Reconstruction Filter Banks

In this section, we use the results in the previous section to construct symmetric M-band perfect reconstruction filter banks (SPRFBs). We adopt the standard notions and notations of digital signals, filters, the M-downsampling operator and the M-upsampling operator in signal processing (see [6,7]). In this paper, we restrict our study to real digital signals and simply call them signals.
Mathematically, a signal $x$ is defined as a bi-infinite real sequence, whose n-th term is denoted by $x ( n )$ or $x n$. A finite signal is a sequence that has only finite nonzero terms. All signals form a linear space, denoted by l. A filter $H : l → l , H x = y ,$ can be represented as a signal $H = ( . . . H ( − 1 ) , H ( 0 ) , H ( 1 ) , H ( 2 ) , . . . ) ∈ l$ that makes $y = H ∗ x$ well defined, where ∗ denotes the convolution operator:
$y ( n ) = ∑ k H ( k ) x ( n − k ) .$
A finite filter H is called a finite impulse response (FIR) filter; otherwise, it is called an infinite impulse response (IIR) filter. In this paper, we only study FIR filters. The z-transform of a signal $x$ is the Laurent series $x ( z ) = ∑ j ∈ Z x ( j ) z j ,$ where $z = e i θ , θ ∈ R ,$ resides on the unit circle of the complex plane $C$. Hence, $z ¯ = z − 1$. Similarly, the z-transform of an FIR filter H is the Laurent polynomial:
$H ( z ) = ∑ j ∈ Z H ( j ) z j .$
We define the support length of an FIR as the support length of its z-transform: $supp ( H ) = supp ( H ( z ) )$. By the convolution theorem, if $y = H ∗ x$, then $y ( z ) = H ( z ) x ( z )$.
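The convolution theorem just stated is easy to verify directly for finite signals and filters stored as coefficient dictionaries; the representation and the sample data below are illustrative choices, not from the paper.

```python
# Direct check of the convolution theorem: y = H * x in the time domain
# has z-transform y(z) = H(z) x(z).  Finite signals/filters are stored as
# {index: coefficient} dictionaries.
def convolve(H, x):
    # y(n) = sum_k H(k) x(n - k)
    supp = range(min(H) + min(x), max(H) + max(x) + 1)
    y = {n: sum(H[k] * x.get(n - k, 0) for k in H) for n in supp}
    return {n: c for n, c in y.items() if c}

def lp_mul(p, q):
    # coefficients of the Laurent polynomial product p(z) q(z)
    r = {}
    for i, pi in p.items():
        for j, qj in q.items():
            r[i + j] = r.get(i + j, 0) + pi * qj
    return {n: c for n, c in r.items() if c}

H = {-1: 1, 0: 2, 1: 1}    # FIR filter with z-transform z^{-1} + 2 + z
x = {0: 3, 2: -1}          # finite signal
assert convolve(H, x) == lp_mul(H, x) == {-1: 3, 0: 6, 1: 2, 2: -2, 3: -1}
```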
PRFBs have been widely used in many areas such as signal and image processing, data mining, feature extraction and compressive sensing [12,13,14,15,16]. The readers can find an introduction to PRFB in many references on signal processing and wavelets, say [6,7]. A PRFB consists of two sub-filter banks: an analysis filter bank, which decomposes a signal into different bands, and a synthesis filter bank, which composes a signal from its different band components. Assume that an analysis filter bank consists of the band-pass filter set ${ H 0 , H 1 , ⋯ , H M − 1 }$ and a synthesis one consists of the band-pass filter set ${ B 0 , B 1 , ⋯ , B M − 1 }$, where $H 0$ and $B 0$ are low-pass filters. They form an M-band PRFB if and only if the following condition holds:
$∑ j = 0 M − 1 B ¯ j ( ↑ M ) ( ↓ M ) H j = I ,$
where $↓ M$ is the M-downsampling operator, $↑ M$ is the M-upsampling operator, I is the identity operator and $B ¯ j$ denotes the conjugate filter of $B j$. Here, the conjugate of a real filter $a = ( ⋯ , a − 1 , a 0 , a 1 , ⋯ )$ is $a ¯ = ( ⋯ , a 1 , a 0 , a − 1 , ⋯ )$. Therefore, the z-transform of $B ¯ j$ is $B j ( z ) ¯ = B j ( z − 1 )$.
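As a toy illustration of the operators in this identity, the sketch below runs a two-band ($M = 2$) Haar-type analysis/synthesis chain in the time domain. The filter values and the placement of the $1 / M$ normalization are illustrative assumptions (normalization conventions differ between references); they are scaled here so that the composition reproduces the input exactly.

```python
# Time-domain perfect reconstruction for a two-band Haar-type bank:
# sum_j conj(B_j) * up2( down2( H_j * x ) ) should give back x.
def convolve(H, x):
    y = {}
    for k, hk in H.items():
        for n, xn in x.items():
            y[k + n] = y.get(k + n, 0) + hk * xn
    return y

def down(x, M):   # keep samples at multiples of M
    return {n // M: c for n, c in x.items() if n % M == 0}

def up(x, M):     # insert M - 1 zeros between samples
    return {M * n: c for n, c in x.items()}

def conj(H):      # conjugate filter of a real filter: H_bar(n) = H(-n)
    return {-n: c for n, c in H.items()}

H0, H1 = {0: 0.5, 1: 0.5}, {0: 0.5, 1: -0.5}   # analysis (low/high pass)
B0, B1 = {0: 1.0, 1: 1.0}, {0: 1.0, 1: -1.0}   # synthesis

x = {0: 2.0, 1: -1.0, 2: 3.0, 3: 5.0}
y = {}
for Hj, Bj in ((H0, B0), (H1, B1)):
    branch = convolve(conj(Bj), up(down(convolve(Hj, x), 2), 2))
    for n, c in branch.items():
        y[n] = y.get(n, 0) + c
y = {n: c for n, c in y.items() if abs(c) > 1e-12}
assert y == x    # perfect reconstruction
```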
The polyphase form of a signal is defined as follows:
Definition 7.
Let $x ( z )$ be the z-transform of a signal $x$ and $M ≥ 2$ an integer. The Laurent series:
$x [ M , k ] ( z ) = ∑ j x ( M j + k ) z j , k ∈ Z ,$
is called the k-th M-phase of $x$, and the vector of Laurent series $[ x [ M , 0 ] ( z ) ; ⋯ ; x [ M , M − 1 ] ( z ) ]$ is called an M-polyphase of $x$.
Since a filter can be identified with a signal, we define its polyphase in the same way. For instance, let $F ( z )$ be the z-transform of an FIR filter F. We call:
$F [ M , k ] ( z ) = ∑ j F ( M j + k ) z j .$
the k-th M-phase of F and call the LP vector $F → ( z ) = [ F [ M , 0 ] ( z ) ; ⋯ ; F [ M , M − 1 ] ( z ) ]$ the M-polyphase of F. We will abbreviate $F [ M , k ] ( z )$ to $F [ k ] ( z )$ if the band number M is not stressed. It is clear that $F ( z ) = ∑ k = 0 M − 1 z k F [ M , k ] ( z M )$. Since, for any filter F,
$F [ k + s M ] ( z ) = z − s F [ k ] ( z ) , s ∈ Z ,$
the M-polyphase of F can be generalized to $[ F [ M , s ] ( z ) ; ⋯ ; F [ M , s + M − 1 ] ( z ) ]$ with $s ∈ Z$. Then, in general,
$F ( z ) = ∑ k = s M − 1 + s z k F [ M , k ] ( z M ) .$
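The polyphase decomposition, the reconstruction identity and the shift relation above can all be checked in a few lines; the dictionary representation and the sample Laurent polynomial are illustrative.

```python
# Checking the M-polyphase decomposition, the reconstruction identity
# F(z) = sum_{k=0}^{M-1} z^k F[M,k](z^M), and the shift relation
# F[k + sM](z) = z^{-s} F[k](z), with Laurent polynomials stored as
# {exponent: coefficient} dictionaries.
def phase(F, M, k):
    # k-th M-phase: F[M,k](z) = sum_j F(Mj + k) z^j
    return {(n - k) // M: c for n, c in F.items() if (n - k) % M == 0}

def reconstruct(phases, M):
    # F(z) = sum_k z^k F[M,k](z^M)
    F = {}
    for k, Fk in phases.items():
        for j, c in Fk.items():
            F[M * j + k] = F.get(M * j + k, 0) + c
    return F

M = 3
F = {-2: 1, -1: 4, 0: 6, 1: 4, 2: 1}   # z^{-2} (1 + z)^4
phases = {k: phase(F, M, k) for k in range(M)}
assert reconstruct(phases, M) == F

# the shift relation with s = 1, k = 0:
assert phase(F, M, 3) == {j - 1: c for j, c in phase(F, M, 0).items()}
```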
For a filter bank ${ H 0 , ⋯ , H M − 1 }$, we define its M-polyphase matrix as:
$H ( z ) = H 0 [ 0 ] ( z ) H 1 [ 0 ] ( z ) ⋯ H M − 1 [ 0 ] ( z ) H 0 [ 1 ] ( z ) H 1 [ 1 ] ( z ) ⋯ H M − 1 [ 1 ] ( z ) ⋮ ⋮ ⋯ ⋮ H 0 [ M − 1 ] ( z ) H 1 [ M − 1 ] ( z ) ⋯ H M − 1 [ M − 1 ] ( z ) .$
The characterization identity (24) for PRFB can now be written as follows:
$B * ( z ) H ( z ) = 1 M I ,$
where $B * ( z )$ is the Hermitian adjoint matrix of $B ( z )$ and I is the identity matrix.
A pair of low-pass filters $( H 0 , B 0 )$ is called a conjugate pair if their M-polyphase forms satisfy:
$∑ k = 0 M − 1 H 0 [ k ] ( z ) ¯ B 0 [ k ] ( z ) = 1 M .$
We write $H → 0 ( z ) = [ H 0 [ 0 ] ( z ) ; ⋯ ; H 0 [ M − 1 ] ( z ) ]$ and $B → 0 ( z ) = [ B 0 [ 0 ] ( z ) ; ⋯ ; B 0 [ M − 1 ] ( z ) ]$. Then, the vector form of (28) is:
$( B → 0 ( z ) ) * H → 0 ( z ) = 1 M .$
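For a concrete instance of (27)-(29), consider a two-band Haar-type pair whose polyphase matrices have constant (degree-zero) entries; this is a minimal sketch with illustrative filter values, and the Hermitian adjoint reduces to the transpose because the entries are real scalars.

```python
# Constant-coefficient illustration of (27)-(29) for M = 2:
# B*(z) H(z) = (1/M) I, and the conjugate-pair condition for the
# low-pass column sums to 1/M.
M = 2
H = [[0.5, 0.5],
     [0.5, -0.5]]          # columns: H0, H1; rows: phases 0, 1
B = [[0.5, 0.5],
     [0.5, -0.5]]          # synthesis polyphase matrix

Bstar = [[B[j][i] for j in range(2)] for i in range(2)]  # transpose (real)
P = [[sum(Bstar[i][k] * H[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]
assert P == [[1.0 / M, 0.0], [0.0, 1.0 / M]]             # identity (27)

# conjugate-pair condition (28) for the low-pass column:
assert sum(H[k][0] * B[k][0] for k in range(2)) == 1.0 / M
```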
Recall that, in the previous section, we call $( a → ( z ) , b → ( z ) ) ∈ L 2$ a dual pair, if $b → T a → = 1$. Therefore, $( H → 0 ( z ) , B → 0 ( z ) )$ in (29) is a conjugate pair if and only if $( M H → 0 ( z ) , B → 0 ( z ) ¯ )$ is a dual pair.
The PRFB construction problem is the following: Assume that a conjugate pair of low-pass filters $( H 0 , B 0 )$ is given. Find the filter sets ${ H 1 , ⋯ , H M − 1 }$ and ${ B 1 , ⋯ , B M − 1 }$ such that the pair of filter banks ${ H 0 , H 1 , ⋯ , H M − 1 }$ and ${ B 0 , B 1 , ⋯ , B M − 1 }$ forms an M-band PRFB. The problem can be presented in the polyphase form: Let $( H → 0 ( z ) , B → 0 ( z ) )$ be the M-polyphase of $( H 0 , B 0 )$. Then, $( a → , b → ) = ( M H → 0 ( z ) , B → 0 ( z ) ¯ )$ is an LP dual pair. The PRFB construction problem then becomes that of finding an LPME $[ A ( z ) , A − 1 ( z ) ]$ of $( a → , b → )$ such that $A ( : , 1 ) = a → ( z )$ and $A − 1 ( 1 , : ) = ( b → ( z ) ) T$. Once the pair $[ A ( z ) , A − 1 ( z ) ]$ is constructed, the polyphase matrices for the PRFB are $H ( z ) = 1 M A ( z )$ and $B ( z ) = ( A − 1 ( z ) ) *$. Hence, the PRFB construction problem is essentially identical to the LPME one, which we have studied thoroughly in [1].
The symmetric PRFB (SPRFB) plays an important role in signal processing because it has the linear phase. An FIR $a$ is said to be $c ϵ$-symmetric (with respect to the symmetric center $c / 2 , c ∈ Z$) if $a j = ϵ a c − j$. It is clear that if c is even, then $supp ( a )$ is odd, else if c is odd, then $supp ( a )$ is even. In applications, we usually shift a given $c ϵ$-symmetric filter to the center zero if $supp ( a )$ is odd or to one if $supp ( a )$ is even. We abbreviate $c ϵ$-symmetric to symmetric if the symmetric center c and type (characterized by $ϵ$) are not stressed. For convenience, we will also call the z-transform of $a$ $c ϵ$-symmetric if $a$ is so. It is easy to verify that a $0 ϵ$-symmetric LP is $P ϵ$-symmetric, a $1 ϵ$-symmetric $F ( z )$ satisfies $F ( z ) = ϵ z F ( z − 1 )$ and a $( − 1 ) ϵ$-symmetric $F ( z )$ satisfies $F ( z ) = ϵ / z F ( z − 1 )$. We will denote by $P + ϵ$ and $P − ϵ$ the sets of all $1 ϵ$-symmetric and $( − 1 ) ϵ$-symmetric LPs, respectively. It is clear that, if $F ( z ) ∈ P ϵ$, so is $F ( z ) ¯$. If $F ( z ) ∈ P + ϵ$, then $F ( z ) ¯ ∈ P − ϵ$.
Assume that a conjugate pair of symmetric low-pass filters $( H 0 , B 0 )$ is given. An SPRFB construction problem is to find two symmetric filter sets ${ H 1 , ⋯ , H M − 1 }$ and ${ B 1 , ⋯ , B M − 1 }$ such that the pair of symmetric filter banks ${ H 0 , H 1 , ⋯ , H M − 1 }$ and ${ B 0 , B 1 , ⋯ , B M − 1 }$ forms an M-band SPRFB. Because the filters in a conjugate dual pair $( H 0 , B 0 )$ have the same symmetric type and center (see Lemma 1 or [17]), without loss of generality, we will assume that the given conjugate pair is $0 ϵ$-symmetric (if $supp ( H 0 )$ is odd) or $1 ϵ$-symmetric (if $supp ( H 0 )$ is even). Although the construction of PRFB has been well studied, the development of the algorithms for SPRFB is relatively new. The authors of [17] introduced a bottom-up algorithm to construct SPRFB for a given symmetric conjugate pair, without using SLPME. Our purpose in this section is to develop a novel algorithm based on the symmetric Euclidean SLPME algorithm introduced in the previous section. We want to put the algorithm in the framework of the matrix algebra on the Laurent polynomial ring to make it more constructive. The PRFB algorithm in [1] does not work for the construction of SPRFB, so a new development is required.
To develop the SPRFB algorithm based on M-polyphase representation, we need to characterize the M-polyphase of a symmetric filter. By computation, we can verify that the k-th M-phase of a symmetric filter satisfies the following:
(1)
If F is $0 ϵ$-symmetric, then:
$F [ k ] ( z ) = ϵ F [ − k ] ( z − 1 ) .$
(2)
If F is $1 ϵ$-symmetric, then:
$F [ k ] ( z ) = ϵ F [ − k + 1 ] ( z − 1 ) .$
(3)
If F is $M ϵ$-symmetric, then:
$F [ k ] ( z ) = ϵ z F [ − k ] ( z − 1 ) .$
(4)
If F is $( 1 − M ) ϵ$-symmetric, then:
$F [ k ] ( z ) = ϵ z − 1 F [ − k + 1 ] ( z − 1 ) .$
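Relations (1) and (2) can be spot-checked numerically for concrete symmetric filters; the filters below are illustrative choices with $ϵ = + 1$.

```python
# Spot checks of the phase relations for symmetric filters, epsilon = +1.
# Laurent polynomials are {exponent: coeff} dicts; conj_z is p(z) -> p(1/z).
def phase(F, M, k):
    # k-th M-phase: F[M,k](z) = sum_j F(Mj + k) z^j
    return {(n - k) // M: c for n, c in F.items() if (n - k) % M == 0}

def conj_z(p):
    return {-j: c for j, c in p.items()}

M = 3
F = {-2: 1, -1: 3, 0: 5, 1: 3, 2: 1}   # 0+-symmetric: F(n) = F(-n)
for k in range(-2, 3):
    assert phase(F, M, k) == conj_z(phase(F, M, -k))        # relation (1)

G = {-1: 1, 0: 2, 1: 2, 2: 1}          # 1+-symmetric: G(n) = G(1-n)
for k in range(-2, 3):
    assert phase(G, M, k) == conj_z(phase(G, M, -k + 1))    # relation (2)
```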
We call a vector in $L 2 m + 1$ $P z ϵ$-symmetric if it satisfies (32) and call a vector in $L 2 m$ $P z ¯ ϵ$-symmetric if it satisfies (33). We denote by $( P z ϵ ) 2 m + 1$ and $( P z ¯ ϵ ) 2 m$ the sets of all $P z ϵ$-symmetric vectors in $L 2 m + 1$ and $P z ¯ ϵ$-symmetric vectors in $L 2 m$, respectively. By computation, we have the following:
Proposition 4.
Let $E$ be a $( 2 m + 1 ) × ( 2 m + 1 )$ $C$-symmetric elementary matrix and $a → ∈ ( P z ϵ ) 2 m + 1$. Then, $E a → ∈ ( P z ϵ ) 2 m + 1$. Let $E$ be a $( 2 m ) × ( 2 m )$ $C$-symmetric elementary matrix and $a → ∈ ( P z ¯ ϵ ) 2 m$. Then, $E a → ∈ ( P z ¯ ϵ ) 2 m$.
We now characterize the M-polyphase of a symmetric filter F as follows:
Lemma 6.
Let $M ≥ 2 , m = ⌊ M / 2 ⌋$, and $F → ( z ) = [ F [ m − M + 1 ] ( z ) ; ⋯ ; F [ m ] ( z ) ]$ be the M-polyphase of a filter F.
1.
If M is odd and F is $0 ϵ$-symmetric, or M is even and F is $1 ϵ$-symmetric, then $F → ( z )$ is $P ϵ$-symmetric.
2.
Assume $M = 2 m$. If F is $0 ϵ$-symmetric, then (30) holds for $0 ≤ k ≤ m − 1$ and:
$F [ m ] ( z ) = ϵ z − 1 F [ m ] ( z − 1 ) ,$
else if F is $( 2 m ) ϵ$-symmetric, then (32) holds for $0 ≤ k ≤ m − 1$ and:
$F [ m ] ( z ) = ϵ F [ m ] ( z − 1 ) .$
3.
Assume $M = 2 m + 1$. If F is $1 ϵ$-symmetric, then (31) holds for $0 ≤ k ≤ m − 1$ and:
$F [ − m ] ( z ) = ϵ z F [ − m ] ( z − 1 ) ,$
else if F is $( − 2 m ) ϵ$-symmetric, then (32) holds for $0 ≤ k ≤ m − 1$ and:
$F [ − m ] ( z ) = ϵ F [ − m ] ( z − 1 ) .$
Proof.
We obtain Part 1 directly from (30) and (31). To prove Parts 2 and 3, according to (30)–(33), we only need to verify (34)–(37).
If $M = 2 m$ and F is $0 ϵ$-symmetric, by (26) and (30), $F [ m ] ( z ) = z − 1 F [ − m ] ( z )$ $= ϵ z − 1 F [ m ] ( z − 1 )$, which yields (34); and if F is $( 2 m ) ϵ$-symmetric, then $F [ m ] ( z ) = z − 1 F [ − m ] ( z ) = ϵ z − 1 z F [ m ] ( z − 1 ) = ϵ F [ m ] ( z − 1 )$, which yields (35).
If $M = 2 m + 1$ and F is $1 ϵ$-symmetric, then $F [ − m ] ( z ) = z F [ m + 1 ] ( z ) = ϵ z F [ − m ] ( z − 1 )$, which yields (36); and if F is $( 1 − M ) ϵ$-symmetric, then $F [ − m ] ( z ) = z F [ m + 1 ] ( z ) = ϵ z z − 1 F [ − m ] ( z − 1 ) = ϵ F [ − m ] ( z − 1 )$, which yields (37). The lemma is proven.  ☐
We call a vector in $L 2 m$ $P e ϵ$-symmetric if it satisfies (30) for $0 ≤ k ≤ m − 1$ and (34) and call it $( P e ϵ ) *$-symmetric if it satisfies (32) for $0 ≤ k ≤ m − 1$ and (35). Similarly, we call a vector in $L 2 m + 1$ $P o ϵ$-symmetric, if it satisfies (31) for $0 ≤ k ≤ m − 1$ and (36), and call it $( P o ϵ ) *$-symmetric if it satisfies (33) for $0 ≤ k ≤ m − 1$ and (37). We denote by $( P e ϵ ) 2 m$, $( ( P e ϵ ) * ) 2 m$, $( P o ϵ ) 2 m + 1$ and $( ( P o ϵ ) * ) 2 m + 1$, the sets of all $P e ϵ$-symmetric, $( P e ϵ ) *$-symmetric, $P o ϵ$-symmetric and $( P o ϵ ) *$-symmetric vectors, respectively. All of these symmetric vectors (other than $P$-symmetric) will be called $P ˜$-symmetric ones.
Example 3.
The vector $e → 1 ∈ R 2 m + 1$ is $( P o + ) *$-symmetric, but not $P o ϵ$-symmetric; and the vector $e → 2 m ∈ R 2 m$ is $( P e + ) *$-symmetric, but not $P e ϵ$-symmetric.
By Part 1 of Lemma 6, we have the following SPRFB construction algorithm:
Theorem 4.
Let $( H 0 , B 0 )$ be a conjugate pair of symmetric filters and $( H → 0 ( z ) , B → 0 ( z ) )$ the M-polyphase of the pair. Assume that M is odd and $H 0$ is $0 ϵ$-symmetric, or M is even and $H 0$ is $1 ϵ$-symmetric. Write $a → ( z ) = M H → 0 ( z )$ and $b → ( z ) = B → 0 ( z ) ¯$. Let $[ A ( z ) , A − 1 ( z ) ]$ be an SLPME of the dual pair $( a → ( z ) , b → ( z ) )$ computed by the Euclidean division algorithm in Theorem 3. Write $H ( z ) = 1 M A ( z )$ and $B ( z ) = ( A − 1 ) * ( z )$. Then, $[ H ( z ) , B ( z ) ]$ is the M-polyphase form of the M-band SPRFB, in which $H 0$ is a filter in the analysis filter bank and $B 0$ is in the synthesis bank.
Proof.
By $B → 0 * ( z ) H → 0 ( z ) = 1 M$, we have:
$( b → ( z ) ) T a → ( z ) = M ( B → 0 ( z ) ¯ ) T H → 0 ( z ) = M ( B → 0 ) * ( z ) H → 0 ( z ) = 1 .$
Hence, $( a → ( z ) , b → ( z ) )$ is a symmetric LP dual pair. By Theorem 3, $H ( : , 1 ) = 1 M A ( : , 1 ) = H → 0 ( z )$, $B ( : , 1 ) = ( A − 1 ) * ( 1 , : ) = ( b → ¯ ) T ( z ) = B → 0 ( z )$ and $B * ( z ) H ( z ) = 1 M I$. The theorem is proven.  ☐
Lemma 6 shows that, when the parity of the band number M mismatches the parity of the support length of the conjugate pair $( H 0 , B 0 )$, their M-polyphase forms are not $P$-symmetric. Thus, we cannot apply Theorem 3 to solve the SPRFB construction problem for $( H 0 , B 0 )$. To employ the results we already obtained in the previous section, we establish a relation between $P$-symmetry and $P ˜$-symmetry.
Definition 8.
The $( 2 m + 1 ) × 2 m$ matrix:
$S e ( z ) = 0 2 z 2 I 2 m − 1 0 0 2 2$
is called the symmetrizer for the vectors in $( P e ϵ ) 2 m$. The $( 2 m + 2 ) × ( 2 m + 1 )$ matrix:
$S o ( z ) = 2 2 0 0 I 2 m 2 2 z 0$
is called the symmetrizer for the vectors in $( P o ϵ ) 2 m + 1$.
Recall $z = e i θ$. It is easy to verify that the left inverse of $S e ( z )$ is:
$0 I 2 m − 1 0 2 2 z 0 2 2 = S e * ( z )$
and the left inverse of $S o ( z )$ is:
$2 2 0 2 z 2 0 I 2 m 0 = S o * ( z ) .$
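The left-inverse identities $S e * ( z ) S e ( z ) = I$ and $S o * ( z ) S o ( z ) = I$ can be spot-checked numerically; since the identities are rational in z, evaluating at a single generic nonzero point is a meaningful check. The dimensions ($m = 2$ for $S e$, $m = 1$ for $S o$) and the evaluation point are illustrative.

```python
# Numeric spot-check that Se*(z) and So*(z) are left inverses of the
# symmetrizers Se(z) and So(z), evaluated at a generic nonzero z.
r2 = 2 ** 0.5 / 2          # sqrt(2)/2
z = 0.7 + 0.4j

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def is_identity(P, tol=1e-12):
    return all(abs(P[i][j] - (1 if i == j else 0)) < tol
               for i in range(len(P)) for j in range(len(P)))

# m = 2: Se(z) is 5x4, Se*(z) is 4x5
Se = [[0, 0, 0, r2 * z],
      [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0],
      [0, 0, 0, r2]]
Se_star = [[0, 1, 0, 0, 0], [0, 0, 1, 0, 0], [0, 0, 0, 1, 0],
           [r2 / z, 0, 0, 0, r2]]
assert is_identity(matmul(Se_star, Se))

# m = 1: So(z) is 4x3, So*(z) is 3x4
So = [[r2, 0, 0],
      [0, 1, 0], [0, 0, 1],
      [r2 / z, 0, 0]]
So_star = [[r2, 0, 0, r2 * z],
           [0, 1, 0, 0], [0, 0, 1, 0]]
assert is_identity(matmul(So_star, So))
```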
Lemma 7.
We have the following.
(a)
If $F → ( z ) ∈ ( P e ϵ ) 2 m$, then $S e ( z ) F → ( z ) ∈ ( P ϵ ) 2 m + 1$. Conversely, if $a → ( z ) ∈ ( P ϵ ) 2 m + 1$, then $S e * ( z ) a → ( z ) ∈ ( P e ϵ ) 2 m$.
(b)
If $F → ( z ) ∈ ( P o ϵ ) 2 m + 1$, then $S o ( z ) F → ( z ) ∈ ( P ϵ ) 2 m + 2$. Conversely, if $a → ( z ) ∈ ( P ϵ ) 2 m + 2$, then $S o * ( z ) a → ( z ) ∈ ( P o ϵ ) 2 m + 1$.
(c)
If $F → ( z ) ∈ ( ( P e ϵ ) * ) 2 m$, then $S e ( z ) F → ( z ) ∈ ( P z ϵ ) 2 m + 1$. Conversely, if $a → ( z ) ∈ ( P z ϵ ) 2 m + 1$, then $S e * ( z ) a → ( z ) ∈ ( ( P e ϵ ) * ) 2 m$.
(d)
If $F → ( z ) ∈ ( ( P o ϵ ) * ) 2 m + 1$, then $S o ( z ) F → ( z ) ∈ ( P z ¯ ϵ ) 2 m + 2$. Conversely, if $a → ( z ) ∈ ( P z ¯ ϵ ) 2 m + 2$, then $S o * ( z ) a → ( z ) ∈ ( ( P o ϵ ) * ) 2 m + 1$.
Proof.
Let $F → ( z ) ∈ ( P e ϵ ) 2 m$. Writing $a → ( z ) = [ a 1 ( z ) ; ⋯ ; a 2 m + 1 ( z ) ] = S e ( z ) F → ( z )$ and applying (30) and (34), we have:
$a 1 ( z ) = 2 2 z F [ m ] ( z ) = 2 2 ϵ F [ m ] ( z − 1 ) , a j ( z ) = ϵ a 2 m + 2 − j ( z − 1 ) , 2 ≤ j ≤ m + 1 , a 2 m + 1 ( z ) = 2 2 F [ m ] ( z ) ,$
which show that $a →$ is $P ϵ$-symmetric. On the other hand, if $a → ( z ) = [ a 1 ( z ) ; ⋯ ; a 2 m + 1 ( z ) ]$ is $P ϵ$-symmetric, writing $u → ( z ) = S e * ( z ) a → ( z )$, for $j = 1 , ⋯ , 2 m − 1$, we have $u j ( z ) = a j + 1 ( z )$, so that $u j ( z ) = ϵ u 2 m − j ( z )$. We also have:
$u 2 m ( z ) = 2 2 z a 1 ( z ) + 2 2 a 2 m + 1 ( z ) .$
By $a 1 ( z ) = ϵ a 2 m + 1 ( z − 1 )$ and the identity above, we have:
$u 2 m ( z ) = ϵ ( 2 2 z − 1 a 2 m + 1 ( z − 1 ) + 2 2 a 1 ( z − 1 ) ) = ϵ z − 1 u 2 m ( z − 1 ) .$
Hence, $u → ( z ) ∈ ( P e ϵ ) 2 m$. The proof of Part (a) is completed. By similar computations, applying (31) and (36), (32) and (35), (33) and (37), respectively, we can prove Parts (b), (c) and (d) of the lemma.  ☐
Similar to Definition 6, we define the smallest $P ˜$-symmetric prime vector in the sets $( P e ϵ ) 2 m$ and $( P o ϵ ) 2 m + 1$.
Definition 9.
The smallest $P ˜$-symmetric prime vector is defined as follows:
(1)
The vector $a ˜ ( z ) = S e * ( c e → m + 1 )$ is called the smallest $P e +$-symmetric prime vector in $( P e + ) 2 m$.
(2)
Let $a → ∈ ( P − ) 2 m + 1$ be the smallest $P$-symmetric prime vector in (2) of Definition 6 with $i ≠ 1$. Then, $a ˜ ( z ) = S e * a →$ is called the smallest $P e −$-symmetric prime vector in $( P e − ) 2 m$.
(3)
Let $a → ∈ ( P ϵ ) 2 m + 2$ be the smallest $P$-symmetric prime vector in (3) of Definition 6 with $i ≠ 1$. Then, $a ˜ ( z ) = S o * a →$ is called the smallest $P o ϵ$-symmetric prime vector in $( P o ϵ ) 2 m + 1$.
By Definition 9, we immediately have the following:
Proposition 5.
The smallest $P ˜$-symmetric prime vector has the following form:
(1)
The smallest $P e ϵ$-symmetric prime vector in $( P e ϵ ) 2 m$ has the form $a → = [ a → h ; 0 ]$, where $a → h ( z )$ is the smallest $P ϵ$-symmetric prime vector in $P 2 m − 1$. Therefore, if $a ˜$ is the smallest $P e ϵ$-symmetric prime vector, then $S e a ˜$ is the smallest $P ϵ$-symmetric prime vector.
(2)
The smallest $P o ϵ$-symmetric prime vector in $( P o ϵ ) 2 m + 1$ has the form $a → = [ 0 ; a → t ]$, where $a → t ( z )$ is the smallest $P ϵ$-symmetric prime vector in $P 2 m$. Therefore, if $a ˜$ is the smallest $P o ϵ$-symmetric prime vector, then $S o a ˜$ is the smallest $P ϵ$-symmetric prime vector.
Example 4.
Assume $d ( z ) ∈ L$ satisfies $gcd L ( d ( z ) , d ( z − 1 ) ) = 1$. The smallest $( P e + )$-symmetric prime vector in $( P e + ) 4$ has the form $[ 0 ; c ; 0 ; 0 ] .$ The smallest $( P e − )$-symmetric prime vector in $( P e − ) 4$ has the form $[ d ( z ) ; 0 ; − d ( z − 1 ) ; 0 ] .$ The smallest $( P o ϵ )$-symmetric prime vector in $( P o ϵ ) 5$ has the form $[ 0 ; d ( z ) ; 0 ; 0 ; ϵ d ( z − 1 ) ]$ or $[ 0 ; 0 ; d ( z ) ; ϵ d ( z − 1 ) ; 0 ]$.
At the next step, we define $C ˜$-symmetric elementary matrices for transforming a $P ˜$-symmetric prime vector to the smallest one. By Lemma 7, we immediately have the following:
Lemma 8.
For any $E ∈ E 2 m + 1$, $a → ∈ ( P e ϵ ) 2 m$ and $b → ∈ ( ( P e ϵ ) * ) 2 m$, we have $S e * E S e a → ∈ ( P e ϵ ) 2 m$ and $S e * E S e b → ∈ ( ( P e ϵ ) * ) 2 m$. For any $E ∈ E 2 m + 2$, $a → ∈ ( P o ϵ ) 2 m + 1$ and $b → ∈ ( ( P o ϵ ) * ) 2 m + 1$, we have $S o * E S o a → ∈ ( P o ϵ ) 2 m + 1$ and $S o * E S o b → ∈ ( ( P o ϵ ) * ) 2 m + 1$.
We also have the following:
Lemma 9.
Let $E$ be an $s × s$ $C$-symmetric elementary matrix of Type 3.
(1)
If $s = 2 m + 2$, then $( S o * ( z ) E ( z ) S o ( z ) ) − 1 = S o * ( z ) E − 1 ( z ) S o ( z )$.
(2)
If $s = 2 m + 1$, then $( S e * ( z ) E ( z ) S e ( z ) ) − 1 = S e * ( z ) E − 1 ( z ) S e ( z )$.
Proof.
In the case of $s = 2 m + 2$, we have $S o * S o = I 2 m + 1$ and:
$S o S o * = 1 / 2 0 z / 2 0 I 2 m 0 1 / ( 2 z ) 0 1 / 2 = I 2 m + 2 + Q ,$
where
$Q = − 1 / 2 0 z / 2 0 0 2 m 0 1 / ( 2 z ) 0 − 1 / 2$
satisfies $S o * Q = 0$ and $Q S o = 0$. Therefore,
$( S o * E − 1 S o ) ( S o * E S o ) = S o * E − 1 ( I 2 m + 2 + Q ) E S o = I 2 m + 1 + S o * E − 1 Q E S o .$
If $E = E ( i , j ) ( q ) ∈ E 3 2 m + 2 , i , j ≠ 1$, then $E − 1 Q E = Q$, which yields $S o * E − 1 Q E S o = S o * Q S o = 0$. If $E = E ( 1 , j ) ( q )$, then $E − 1 Q = Q$; else if $E = E ( j , 1 ) ( q )$, then $Q E = Q$. By $S o * Q = 0$ and $Q S o = 0$, in both cases, we have $S o * E − 1 Q E S o = 0 .$ The lemma is proven in the case $s = 2 m + 2$. The proof for $s = 2 m + 1$ is similar.  ☐
By Lemmas 8 and 9, we define the $C ˜$-symmetric elementary matrices as follows.
Definition 10.
The matrix:
$E ˜ ( i , j ) ( q ) = S o * E ( i , j ) ( q ) S o , E ( i , j ) ( q ) ∈ E 3 2 m + 2 , S e * E ( i , j ) ( q ) S e , E ( i , j ) ( q ) ∈ E 3 2 m + 1 ,$
is called a $C ˜$-symmetric elementary matrix. We denote by $E ˜ s$ the set of all $s × s$ $C ˜$-symmetric elementary matrices.
As before, when the indices of $E ˜ ( i , j ) ( q )$ are not stressed, we simply write it as $E ˜$. If we need to stress the dimension of an $s × s$ $C ˜$-symmetric elementary matrix, we write it as $E ˜ s ( i , j ) ( q )$.
Proposition 6.
$E ˜ ( j , i ) ( q ) = [ E ˜ ( i , j ) ( q ¯ ) ] *$ and $[ E ˜ ( i , j ) ( q ) ] − 1 = E ˜ ( i , j ) ( − q )$.
Proof.
The first identity is derived from:
$[ E ( j , i ) ( q ¯ ) ] * = ( E ( i , j ) ( q ¯ ) ¯ ) T = ( E ( i , j ) ( q ) ) T = E ( j , i ) ( q ) .$
The second one is derived from $[ E ( i , j ) ( q ) ] − 1 = E ( i , j ) ( − q )$ and Lemma 9.  ☐
To derive the explicit expressions of $C ˜$-symmetric elementary matrices, we write:
$q → c ( z ) = 2 2 ( q ( z ) + z q ( z − 1 ) ) e → m , e → m ∈ R 2 m − 1 , q → o j ( z ) = 2 2 ( q ( z ) e → j + z − 1 q ( z − 1 ) e → 2 m + 1 − j ) , e → j , e → 2 m + 1 − j ∈ R 2 m , q → e j ( z ) = 2 2 ( z q ( z ) e → j + q ( z − 1 ) e → 2 m − j ) , e → j , e → 2 m − j ∈ R 2 m − 1 .$
When $1 ≤ j < i ≤ m ,$ by (38) and:
$E 2 m + 2 ( i + 1 , j + 1 ) ( q ) = 1 0 0 0 E 2 m ( i , j ) ( q ) 0 0 0 1 ,$
we have:
$E ˜ 2 m + 1 ( i + 1 , j + 1 ) ( q ) = 1 0 0 E 2 m ( i , j ) ( q ) , 1 ≤ j < i ≤ m .$
Similar computation yields:
$E ˜ 2 m + 1 ( j + 1 , 1 ) ( q ) = 1 0 q → o j I 2 m , 1 ≤ j ≤ m − 1 .$
For even-dimensional cases, we have:
$E ˜ 2 m ( i + 1 , j + 1 ) ( q ) = E 2 m − 1 ( i , j ) ( q ) 0 0 1 , 1 ≤ j < i ≤ m , E ˜ 2 m ( m + 1 , 1 ) ( q ) = I 2 m − 1 q → c 0 1 , E ˜ 2 m ( j + 1 , 1 ) ( q ) = I 2 m − 1 q → e j 0 1 , 1 ≤ j ≤ m − 1 .$
Example 5.
All elements of $E ˜ 4$ are derived from $E ( i , j ) ( q ) ∈ E 3 5$, where $1 ≤ i , j ≤ 3$ and $i ≠ j$. By Proposition 6, we only need to present $E ˜ 4 ( i , j ) ( q )$ for $( i , j ) = ( 2 , 1 ) , ( 3 , 1 )$, and $( 3 , 2 )$. By the formulas above, we obtain:
$E ˜ 4 ( 2 , 1 ) ( q ) = 1 0 0 2 2 z q ( z ) 0 1 0 0 0 0 1 2 2 q ( z − 1 ) 0 0 0 1 , E ˜ 4 ( 3 , 1 ) ( q ) = 1 0 0 0 0 1 0 q ^ ( z ) 0 0 1 0 0 0 0 1$
where $q ^ ( z ) = 2 2 ( q ( z ) + z q ( z − 1 ) )$, and:
$E ˜ 4 ( 3 , 2 ) ( q ) = 1 0 0 0 q ( z ) 1 q ( z − 1 ) 0 0 0 1 0 0 0 0 1 .$
All elements of $E ˜ 5$ are derived from $E ( i , j ) ( q ) ∈ E 3 6$. Similarly, we only need to present $E ˜ 5 ( i , j ) ( q )$ for $( i , j ) = ( 2 , 1 ) , ( 3 , 1 )$ and $( 3 , 2 )$. By the formulas above,
$E ˜ 5 ( 2 , 1 ) ( q ) = 1 0 0 0 0 2 2 q ( z ) 1 0 0 0 0 0 1 0 0 0 0 0 1 0 2 2 z − 1 q ( z − 1 ) 0 0 0 1 , E ˜ 5 ( 3 , 1 ) ( q ) = 1 0 0 0 0 0 1 0 0 0 2 2 q ( z ) 0 1 0 0 2 2 z − 1 q ( z − 1 ) 0 0 1 0 0 0 0 0 1 .$
$E ˜ 5 ( 3 , 2 ) ( q ) = 1 0 0 0 0 0 1 0 0 0 0 q ( z ) 1 0 0 0 0 0 1 q ( z − 1 ) 0 0 0 0 1 .$
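Proposition 6's inverse formula can be spot-checked on the explicit matrix $E ˜ 4 ( 2 , 1 ) ( q )$ above; the choice $q ( z ) = 1 + z$ and the evaluation point are illustrative assumptions.

```python
# Numeric spot-check that E~4(2,1)(q) * E~4(2,1)(-q) = I, i.e.,
# [E~(i,j)(q)]^{-1} = E~(i,j)(-q), evaluated at a generic nonzero z.
r2 = 2 ** 0.5 / 2          # sqrt(2)/2
z = 0.3 + 1.1j
q = lambda w: 1 + w        # arbitrary Laurent polynomial for the test

def E421(sign):
    # E~4(2,1)(sign * q), following the explicit form in Example 5
    return [[1, 0, 0, sign * r2 * z * q(z)],
            [0, 1, 0, 0],
            [0, 0, 1, sign * r2 * q(1 / z)],
            [0, 0, 0, 1]]

P = [[sum(E421(1)[i][k] * E421(-1)[k][j] for k in range(4))
      for j in range(4)] for i in range(4)]
assert all(abs(P[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(4) for j in range(4))
```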
We now generalize Lemma 4 to the sets $( P e ϵ ) 2 m$ and $( P o ϵ ) 2 m + 1$.
Lemma 10.
Assume that $a ( z ) ∈ P ϵ$, $b ( z ) ∈ P − ϵ$, $c ( z ) ∈ P + ϵ$ and $d ( z ) ∈ L h$.
(1)
If $supp ( a ) > supp ( b )$, then there is $p ( z ) ∈ P + +$ and $a 1 ( z ) ∈ P ϵ$ with $supp ( a 1 ) < supp ( b )$ such that $a ( z ) = b ( z ) p ( z ) + a 1 ( z )$. If $supp ( a ) < supp ( b )$, then there is $q ( z ) ∈ P − +$ and $b 1 ( z ) ∈ P − ϵ$ with $supp ( b 1 ) < supp ( a )$ such that $b ( z ) = q ( z ) a ( z ) + b 1 ( z )$.
(2)
If $supp ( a ) > supp ( c )$, then there is $q ( z ) ∈ P − +$ and $a 1 ( z ) ∈ P ϵ$ with $supp ( a 1 ) < supp ( c )$ such that $a ( z ) = c ( z ) q ( z ) + a 1 ( z )$. If $supp ( a ) < supp ( c )$, then there is $p ( z ) ∈ P + +$ and $c 1 ( z ) ∈ P + ϵ$ with $supp ( c 1 ) < supp ( a )$ such that $c ( z ) = p ( z ) a ( z ) + c 1 ( z )$.
(3)
If $supp ( b ) > supp ( d )$, there is a $q ( z ) ∈ L h$ and $b 1 ( z ) ∈ P − ϵ$ with $supp ( b 1 ) ≤ supp ( d )$ such that:
$b ( z ) = q ( z ) d ( z ) + ϵ / z q ( 1 / z ) d ( 1 / z ) + b 1 ( z ) .$
(4)
If $supp ( c ) > supp ( d )$, there is a $p ( z ) ∈ L h$ and $c 1 ( z ) ∈ P + ϵ$ with $supp ( c 1 ) ≤ supp ( d )$ such that:
$c ( z ) = p ( z ) d ( z ) + ϵ z p ( 1 / z ) d ( 1 / z ) + c 1 ( z ) .$
Proof.
We first prove (1). If $supp ( a ) > supp ( b )$, applying Lemma 4 to $a ( z )$ and $b ( z )$, we have $q ( z ) ∈ L h$ and $a 1 ( z ) ∈ P ϵ$