Article

Minimal State-Space Representation of Convolutional Product Codes

1
Departament de Matemàtiques, Universitat d’Alacant, E-03690 Alacant, Spain
2
CIDMA—Center for Research and Development in Mathematics and Applications, Department of Mathematics, University of Aveiro, 3810-193 Aveiro, Portugal
*
Author to whom correspondence should be addressed.
Mathematics 2021, 9(12), 1410; https://doi.org/10.3390/math9121410
Submission received: 31 March 2021 / Revised: 7 June 2021 / Accepted: 11 June 2021 / Published: 17 June 2021
(This article belongs to the Special Issue Algebra and Number Theory)

Abstract: In this paper, we study product convolutional codes described by state-space representations. In particular, we investigate how to derive state-space representations of the product code from the state-space representations of the horizontal and vertical convolutional codes. We present a systematic procedure to build such a representation with minimal dimension; i.e., one that is both reachable and observable.

1. Introduction

It is well known that the combination of codes can yield a new code with better properties than the constituent codes alone. Such combinations have been widely used in coding theory in different forms, e.g., concatenation, product codes, turbo codes, array codes, or the EVENODD and interleaving methods [1,2,3,4,5,6,7,8]. The advantages of combining codes include, for instance, larger distance, lower decoding complexity, or improved burst error correction. In this paper, we shall focus on the so-called product codes, which are a natural generalization of interleaved schemes. More concretely, we will focus on product convolutional codes.
In the context of block product codes, the codewords are constant matrices with entries in a finite field. We may consider that both rows and columns are encoded into error-correcting codes. Hence, for encoding, first the row redundant symbols are obtained (horizontal encoding using C h ), and then the column redundant symbols (vertical encoding using C v ). If C h has minimum distance d h and C v has minimum distance d v , it is easy to see that the product code, denoted by C h ⊗ C v , has minimum distance d h d v . This class of product codes has been thoroughly studied and is widely used to correct burst and random errors by means of many different decoding procedures. However, the product of two convolutional codes has been less investigated, and many properties that are known for block codes remain to be studied in the convolutional context.
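To make the distance statement concrete, the following sketch multiplies two small hypothetical block codes over GF(2) — a [3,1,3] repetition code as C h and a [3,2,2] parity-check code as C v (these example codes are not taken from the paper) — and checks by exhaustive encoding that the product code has minimum distance d h d v = 3 · 2 = 6:

```python
from itertools import product

def matmul_gf2(A, B):
    """Multiply two matrices over GF(2)."""
    return [[sum(A[i][t] * B[t][j] for t in range(len(B))) % 2
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(M):
    return [list(r) for r in zip(*M)]

# Hypothetical component block codes (for illustration only):
# horizontal [3,1,3] repetition code, vertical [3,2,2] parity-check code.
G_h = [[1], [1], [1]]            # n_h x k_h generator, d_h = 3
G_v = [[1, 0], [0, 1], [1, 1]]   # n_v x k_v generator, d_v = 2

weights = []
for bits in product([0, 1], repeat=2):      # all information matrices U (2x1)
    U = [[bits[0]], [bits[1]]]
    V = matmul_gf2(matmul_gf2(G_v, U), transpose(G_h))  # V = G_v U G_h^T
    w = sum(sum(row) for row in V)
    if w > 0:
        weights.append(w)

print(min(weights))  # 6: minimum distance of the product code
```

Exhaustive search is feasible here only because the message space is tiny; the point is just to see the distance d h d v appear.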
Naturally, the class of convolutional codes generalizes the class of linear block codes, and, therefore, they are mathematically more involved than block codes. In this context, the data are considered as a sequence, in contrast with block codes, which operate on fixed message blocks (matrices in this case). Even though they split the data into blocks of a fixed rate as block codes do, the relative position of each block in the sequence is taken into account. The blocks are not encoded independently, and previously encoded data (matrices in this case) affect the next encoded block. Because of this, convolutional codes have memory and can be viewed as linear systems over a finite field (see, for instance, [5,9,10,11,12,13,14,15,16,17,18]). A description of convolutional codes can be provided by a time-invariant discrete linear system, called a discrete-time state-space system in control theory (see [19,20,21]). Hence, we consider product convolutional codes described by state-space representations. Convolutional codes have already been thoroughly investigated within this framework, and fundamental system-theoretical properties, such as observability, reachability, and minimality, have been derived in [11,12,13,14].
It is worth mentioning the results derived in [22,23] on fundamental algebraic properties of the encoders representing product convolutional codes. In addition, the authors showed that every product convolutional code can be represented as a woven code, and introduced the notion of block distances. In [22], it was shown that, if the generator matrices of the horizontal and vertical convolutional codes are minimal basic, then the generator matrix of the product code is also minimal basic. In this work, we continue this thread of research, but within the input-state-space framework instead of working with generator matrices. We present a constructive methodology to build a minimal state-space representation for these codes from two minimal state-space representations of the corresponding horizontal C h and vertical C v convolutional codes. The derived representations are minimal and, therefore, reachable and observable. Moreover, they are easily constructed by sorting and selecting some of the entries of a given matrix built upon the state-space representations of C h and C v , directly and without using the encoder matrix representations of the convolutional codes.
Recently, there have been new advances in the original idea of deriving an algebraic decoding algorithm for convolutional codes using state-space representations. The idea was first proposed in [24] and heavily uses the structure of these representations to derive a general procedure, which allows for extending known decoding algorithms for block codes (like, e.g., the Berlekamp–Massey algorithm) to convolutional codes. More concretely, the algorithm iteratively computes the state vector x t inside the trellis diagram, and, once this state vector is constructed, the algorithm computes, in an algebraic manner, a new state vector x t + s , where s is related to the observability index of the state representation. Recently, these ideas have been further developed in [25,26]. Hence, the ideas of this paper can be used to build a minimal state-space representation of a product convolutional code with the property that its decoding can be simplified by considering the simpler horizontal and vertical component codes and applying the decoding algorithms developed in [25,26].
In [27], given an input–state–output representation of each of the convolutional codes C h and C v , two input–state–output representations of the product convolutional code C h ⊗ C v were introduced; however, neither of them is minimal, even if the two given input–state–output representations are both minimal. In this paper, we give a solution to this problem.
The rest of the paper is organized as follows: In Section 2, we introduce the background on polynomial matrices and convolutional codes needed to understand the paper. In Section 3, we describe how a product convolutional code can be viewed as a convolutional code whose generator matrix is the Kronecker product of the corresponding generator matrices. In Section 4, we provide a state-space realization of the product convolutional code based on a state-space realization of each of the convolutional codes involved in the product. Finally, in Section 5, we present the conclusions and future work.

2. Preliminaries

Let F be a finite field, F [ z ] the ring of polynomials in the variable z and coefficients in F , and F ( z ) the set of rational functions in the variable z and coefficients in F .
Assume that k and n are positive integers with n > k , denote by F [ z ] n × k the set of all n × k matrices with entries in F [ z ] , and denote by F [ z ] n the set F [ z ] n × 1 .
A matrix U ( z ) ∈ F [ z ] k × k is called unimodular if it admits a polynomial inverse; that is, its determinant is a nonzero element of F (see, for example, [28,29]).
Assume that G ( z ) ∈ F [ z ] n × k . The internal degree of G ( z ) is the maximum degree of the k × k minors of G ( z ) . We say that G ( z ) is basic if its internal degree is the minimum of the internal degrees of the matrices G ( z ) U ( z ) , for all invertible matrices U ( z ) ∈ F ( z ) k × k ; i.e., the internal degree of G ( z ) is as small as possible (see, for instance, [17,30,31,32,33]). In particular, if U ( z ) is unimodular, then G ( z ) and G ( z ) U ( z ) have the same internal degree. G ( z ) is called right prime if, for every factorization G ( z ) = G ¯ ( z ) U ( z ) , with G ¯ ( z ) ∈ F [ z ] n × k and U ( z ) ∈ F [ z ] k × k , necessarily U ( z ) is unimodular (see, for instance, [28,33,34]). Furthermore, G ( z ) is basic if and only if any (and therefore all) of the following equivalent conditions is satisfied: G ( z ) is right prime; G ( z ) has a polynomial left inverse [17,32].
Assume that G ( z ) = [ g i j ( z ) ] ∈ F [ z ] n × k and denote by ν j = max 1 ≤ i ≤ n deg g i j ( z ) the j-th column degree of G ( z ) . We say that G ( z ) is column reduced if the rank of the high-order coefficient matrix G^∞ = [ g i j ( ν j ) ] ∈ F n × k is k, where g i j ( ν j ) is the coefficient of z^(ν j) in g i j ( z ) . Equivalently, G ( z ) is column reduced if and only if its internal and external degrees coincide, where the external degree of G ( z ) is the number ∑ j = 1 k ν j . Note that the internal degree of a polynomial matrix is always less than or equal to its external degree [17]. For any G ( z ) ∈ F [ z ] n × k , there exists a unimodular matrix U ( z ) ∈ F [ z ] k × k such that G ( z ) U ( z ) is column reduced. Moreover, if G ( z ) , G ( z ) U ( z ) ∈ F [ z ] n × k are column reduced matrices with U ( z ) ∈ F [ z ] k × k unimodular, then G ( z ) and G ( z ) U ( z ) have the same column degrees, up to a permutation. Column reduced matrices are also called minimal matrices [12,13,33]. Basic and reduced matrices are also called minimal-basic matrices [30,31,32] or canonical matrices [17].
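The column-reducedness test is easy to mechanize: compute the column degrees, extract the high-order coefficient matrix, and check its rank. A minimal sketch over GF(2), using the matrix that will appear in Example 1 below (polynomials stored as low-to-high coefficient lists):

```python
def poly_deg(p):
    """Degree of a GF(2) polynomial given as a low-to-high coefficient list."""
    return max((i for i, c in enumerate(p) if c), default=-1)

def coeff(p, d):
    """Coefficient of z^d, or 0 if d is out of range."""
    return p[d] if 0 <= d < len(p) else 0

def gf2_rank(M):
    """Rank over GF(2) by Gaussian elimination (M is a list of rows)."""
    M = [row[:] for row in M]
    rank = 0
    for c in range(len(M[0])):
        piv = next((r for r in range(rank, len(M)) if M[r][c]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        for r in range(len(M)):
            if r != rank and M[r][c]:
                M[r] = [(a + b) % 2 for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

# G(z) = [z^2  z+1 ; z+1  z ; 1  1] from Example 1 below.
G = [[[0, 0, 1], [1, 1]],
     [[1, 1],    [0, 1]],
     [[1],       [1]]]

col_degs = [max(poly_deg(G[i][j]) for i in range(3)) for j in range(2)]
G_inf = [[coeff(G[i][j], col_degs[j]) for j in range(2)] for i in range(3)]
print(col_degs, gf2_rank(G_inf) == 2)  # column reduced iff rank equals k
```

Here the column degrees are (2, 1) and the high-order coefficient matrix has full column rank, so G ( z ) is column reduced.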
A rate k / n convolutional code C is an F [ z ] -submodule of rank k of the module F [ z ] n (see [18,20,35]). Since F [ z ] is a principal ideal domain, a convolutional code C always has a well-defined rank k, and there exists G ( z ) ∈ F [ z ] n × k , of rank k, such that (see [35])
C = im F [ z ] G ( z ) = { v ( z ) ∈ F [ z ] n | v ( z ) = G ( z ) u ( z ) with u ( z ) ∈ F [ z ] k } ,
where u ( z ) is the information vector, v ( z ) is the corresponding codeword, and G ( z ) is the generator or encoder matrix of C .
If G ( z ) F [ z ] n × k is a generator matrix of C and U ( z ) F [ z ] k × k is unimodular, then G ( z ) U ( z ) is also a generator matrix of C . Therefore, all generator matrices of C have the same internal degree. The degree or complexity of C is the internal degree of one (and therefore any) generator matrix and, therefore, is also equal to the external degree of one (and therefore any) column reduced generator matrix (see [17,34]). The column degrees of a basic and column reduced generator matrix of C are called Forney indices of C .
Since C always admits a basic and column reduced generator matrix G ( z ) ∈ F [ z ] n × k , the column degrees ν 1 , ν 2 , … , ν k of such a G ( z ) are the Forney indices of C and ∑ j = 1 k ν j = δ , the degree of C . From now on, we refer to a rate k / n convolutional code with degree δ as an ( n , k , δ ) convolutional code.
An ( n , k , δ ) convolutional code C can be described by a time invariant linear system (see [11,14,16,17]), denoted by ( A , B , C , D ) ,
x t + 1 = A x t + B u t
v t = C x t + D u t , t = 0 , 1 , 2 , … , x 0 = 0 , (1)
where A ∈ F m × m , B ∈ F m × k , C ∈ F n × m and D ∈ F n × k . For each instant t, we call x t ∈ F m the state vector, u t ∈ F k the input vector, and v t ∈ F n the output vector, and we say that the system ( A , B , C , D ) has dimension m. In the literature of linear systems, the above representation is known as the state-space representation (see, for example, [28,29,36,37,38]). If we define u ( z ) = ∑ t ≥ 0 u t z^t and v ( z ) = ∑ t ≥ 0 v t z^t , it follows from expression (1) that v ( z ) = G ( z ) u ( z ) , where
G ( z ) = C ( I m − z A )^(−1) B z + D (2)
is the transfer matrix of the system. We say that ( A , B , C , D ) is a realization of G ( z ) if G ( z ) is the transfer matrix of ( A , B , C , D ) .
For a given transfer matrix G ( z ) , there are, in general, many possible realizations. A realization ( A , B , C , D ) of G ( z ) is called minimal if it has minimal dimension, and this happens if and only if the pair ( A , B ) is reachable and the pair ( A , C ) is observable (see, for instance, [28,29,36]). Recall that the pair ( A , B ) is called reachable if
rank [ B A B ⋯ A^(δ−1) B ] = δ
or equivalently (see [39]), rank [ λ I δ − A B ] = δ , for all λ ∈ F ¯ , where F ¯ is the algebraic closure of F . Analogously, the pair ( A , C ) is observable if and only if the pair ( A^T , C^T ) is reachable. The dimension of a minimal realization of a transfer matrix G ( z ) is called the McMillan degree of G ( z ) . In the particular case that G ( z ) is a column reduced generator matrix of a convolutional code C , the McMillan degree of G ( z ) coincides with the degree δ of C .
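As a quick sanity check of the rank criterion, the following sketch builds the reachability matrix [ B A B ⋯ A^(δ−1) B ] over GF(2) for the pair ( A , B ) that will appear in Example 1 below and verifies that its rank equals δ = 3:

```python
def matmul_gf2(A, B):
    """Multiply two matrices over GF(2)."""
    return [[sum(A[i][t] * B[t][j] for t in range(len(B))) % 2
             for j in range(len(B[0]))] for i in range(len(A))]

def gf2_rank(M):
    """Rank over GF(2) by Gaussian elimination."""
    M = [row[:] for row in M]
    rank = 0
    for c in range(len(M[0])):
        piv = next((r for r in range(rank, len(M)) if M[r][c]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        for r in range(len(M)):
            if r != rank and M[r][c]:
                M[r] = [(a + b) % 2 for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

# Pair (A, B) of the realization in Example 1 below (delta = 3, over GF(2)).
A = [[0, 0, 0], [1, 0, 0], [0, 0, 0]]
B = [[1, 0], [0, 0], [0, 1]]

# Reachability matrix [B  AB  A^2 B], assembled column-blockwise.
R = [row[:] for row in B]
P = B
for _ in range(len(A) - 1):
    P = matmul_gf2(A, P)
    for i in range(len(A)):
        R[i] = R[i] + P[i]

print(gf2_rank(R))  # 3 = delta, so the pair (A, B) is reachable
```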
Reachability and observability represent two major concepts of control system theory. They were introduced by Kalman in [40] in the context of systems theory, and, in [35], the definitions of reachability and observability of convolutional codes were presented; see also [9,41,42,43]. These notions are not only important for characterizing the minimality of our state-space realization, but also describe the possibility of driving the state anywhere with an appropriate selection of inputs (reachability) and the ability of computing the state from the observation of the output sequence (observability).
A system ( A , B , C , D ) is a realization of a convolutional code C if C is equal to the set of outputs corresponding to polynomial inputs u ( z ) F [ z ] k and to zero initial conditions; i.e., x 0 = 0 . The minimal dimension of a realization of C is equal to the degree of C and the minimal realizations of the column reduced generator matrices of C are minimal realizations of the code.
If ( A ¯ , B ¯ , C ¯ , D ¯ ) , with A ¯ ∈ F m × m , B ¯ ∈ F m × k , C ¯ ∈ F n × m , and D ¯ ∈ F n × k , is a non-minimal realization of a transfer matrix with McMillan degree δ , then, from Kalman's decomposition theorem (see, for example, [28,29,36,38,40,44,45,46,47]), there exists an invertible matrix S ∈ F m × m such that
( S A ¯ S^(−1) , S B ¯ , C ¯ S^(−1) , D ¯ ) = ( [ A O A ˜ 13 O ; A ˜ 21 A ˜ 22 A ˜ 23 A ˜ 24 ; O O A ˜ 33 O ; O O A ˜ 43 A ˜ 44 ] , [ B ; B ˜ ; O ; O ] , [ C O C ˜ O ] , D ) ,
where A ∈ F δ × δ , B ∈ F δ × k , C ∈ F n × δ , the pair ( A , B ) is reachable, the pair ( A , C ) is observable, and
C ¯ ( I m − z A ¯ )^(−1) B ¯ z + D ¯ = C ( I δ − z A )^(−1) B z + D .
That is, ( A , B , C , D ) is a minimal realization of the transfer matrix G ( z ) . Moreover, if ( A ′ , B ′ , C ′ , D ′ ) is another minimal realization of G ( z ) , then there exists a unique invertible matrix P ∈ F δ × δ such that
A ′ = P A P^(−1) , B ′ = P B , C ′ = C P^(−1) , and D ′ = D .
The state-space representation in expression (1), also known as the driving representation, is different from the input–state–output representation (see [21]) given by
x t + 1 = A x t + B u t
y t = C x t + D u t ,
v t = [ y t ; u t ] , t = 0 , 1 , 2 , … , x 0 = 0 ,
where A ∈ F m × m , B ∈ F m × k , C ∈ F ( n − k ) × m and D ∈ F ( n − k ) × k . This input–state–output representation has been thoroughly studied by many authors [9,10,18,19,21,27,33,35,48], and the codewords are the finite support input–output sequences { v t } t ≥ 0 corresponding to finite support state sequences { x t } t ≥ 0 .
The next theorem (see [11,14]) provides a state-space realization for a given polynomial matrix, and it will be very useful in Section 4.
Theorem 1.
Let G ( z ) = [ g 1 ( z ) g 2 ( z ) ⋯ g k ( z ) ] ∈ F [ z ] n × k be a matrix with column degrees ν 1 , ν 2 , … , ν k . Assume that g j ( z ) = ∑ ℓ = 0 ν j g j ( ℓ ) z^ℓ , for j = 1 , 2 , … , k , and consider the matrices
A j = [ 0^T 0 ; I ν j − 1 0 ] ∈ F ν j × ν j , B j = [ 1 0 ⋯ 0 ]^T ∈ F ν j , C j = [ g j ( 1 ) g j ( 2 ) ⋯ g j ( ν j ) ] ∈ F n × ν j .
If δ = ∑ j = 1 k ν j and
A = diag ( A 1 , A 2 , … , A k ) ∈ F δ × δ , B = diag ( B 1 , B 2 , … , B k ) ∈ F δ × k , C = [ C 1 C 2 ⋯ C k ] ∈ F n × δ , D = [ g 1 ( 0 ) g 2 ( 0 ) ⋯ g k ( 0 ) ] ∈ F n × k ,
then the pair ( A , B ) is reachable. Moreover, if G ( z ) is column reduced, then the pair ( A , C ) is observable, and, therefore, ( A , B , C , D ) is a minimal realization of G ( z ) .
For the realization ( A , B , C , D ) of G ( z ) introduced in the previous theorem, it follows from expression (2) that G ( z ) = C E ( z ) + D , where
E ( z ) = diag ( E 1 ( z ) , E 2 ( z ) , … , E k ( z ) ) with E j ( z ) = [ z z^2 ⋯ z^(ν j) ]^T , for j = 1 , 2 , … , k . (3)
The following example will help us to understand the previous theorem.
Example 1.
Let F = G F ( 2 ) be the Galois field of two elements and consider the polynomial matrix
G ( z ) = [ z^2 z + 1 ; z + 1 z ; 1 1 ] ∈ F [ z ] 3 × 2 .
Since ν 1 = 2 , ν 2 = 1 , and rank G^∞ = 2 , it follows that G ( z ) is column reduced. Now consider the matrices
A 1 = [ 0 0 ; 1 0 ] , A 2 = [ 0 ] , B 1 = [ 1 ; 0 ] , B 2 = [ 1 ] , C 1 = [ 0 1 ; 1 0 ; 0 0 ] , C 2 = [ 1 ; 1 ; 0 ] , D = [ 0 1 ; 1 0 ; 1 1 ] .
Then, according to Theorem 1, it follows that ( A , B , C , D ) is a minimal state-space realization of G ( z ) with
A = diag ( A 1 , A 2 ) , B = diag ( B 1 , B 2 ) , and C = [ C 1 C 2 ] .
Moreover, E ( z ) = [ z 0 ; z^2 0 ; 0 z ] and G ( z ) = C E ( z ) + D .
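The realization of Example 1 can also be verified numerically. Since A is nilpotent, the transfer matrix C ( I − z A )^(−1) B z + D truncates to a polynomial, whose coefficient matrices can be compared with those of G ( z ) degree by degree; a GF(2) sketch:

```python
def matmul_gf2(A, B):
    """Multiply two matrices over GF(2)."""
    return [[sum(A[i][t] * B[t][j] for t in range(len(B))) % 2
             for j in range(len(B[0]))] for i in range(len(A))]

# Minimal realization (A, B, C, D) of G(z) built in Example 1, over GF(2).
A = [[0, 0, 0], [1, 0, 0], [0, 0, 0]]
B = [[1, 0], [0, 0], [0, 1]]
C = [[0, 1, 1], [1, 0, 1], [0, 0, 0]]
D = [[0, 1], [1, 0], [1, 1]]

# A is nilpotent (A^2 = 0), so the series truncates exactly:
# G(z) = C (I - zA)^(-1) B z + D = D + CB z + CAB z^2.
coeffs = [D, matmul_gf2(C, B), matmul_gf2(C, matmul_gf2(A, B))]

# Coefficient matrices of G(z) = [z^2  z+1 ; z+1  z ; 1  1], degree by degree.
G_coeffs = [
    [[0, 1], [1, 0], [1, 1]],  # z^0
    [[0, 1], [1, 1], [0, 0]],  # z^1
    [[1, 0], [0, 0], [0, 0]],  # z^2
]
print(coeffs == G_coeffs)  # True: the realization reproduces G(z)
```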

3. Product Convolutional Codes

In this section, we introduce the product of two convolutional codes, called the horizontal and vertical codes, respectively. Assume that C h and C v are horizontal ( n h , k h , δ h ) and vertical ( n v , k v , δ v ) convolutional codes, respectively. Then, the product convolutional code (see [22,49]) C = C h ⊗ C v is defined to be the convolutional code whose codewords consist of all V ( z ) ∈ F [ z ] n v × n h whose columns belong to C v and whose rows belong to C h .
Encoding of the product convolutional code C can be done as follows (see [22,49]): Let G h ( z ) ∈ F [ z ] n h × k h and G v ( z ) ∈ F [ z ] n v × k v be generator matrices of the component convolutional codes C h and C v , respectively. Denote by U ( z ) ∈ F [ z ] k v × k h an information matrix. Now, we can apply row-column encoding; i.e., every column of U ( z ) is encoded using G v ( z ) , and then every row of the resulting matrix G v ( z ) U ( z ) is encoded using G h ( z ) as ( G v ( z ) U ( z ) ) G h ( z )^T . We can also apply column-row encoding; i.e., every row of U ( z ) is encoded using G h ( z ) , and then every column of the resulting matrix U ( z ) G h ( z )^T is encoded using G v ( z ) as G v ( z ) ( U ( z ) G h ( z )^T ) . As a consequence of the associativity of the product of matrices, we get the same matrix in both cases. Thus, the codeword matrix V ( z ) is given by
V ( z ) = G v ( z ) U ( z ) G h ( z ) T ,
and by using properties of the Kronecker product (see [50,51]), we have
vect V ( z ) = ( G h ( z ) ⊗ G v ( z ) ) vect U ( z )
where vect ( · ) is the operator that transforms a matrix into a vector by stacking the column vectors of the matrix below one another. Now, since
G ( z ) = G h ( z ) ⊗ G v ( z ) ∈ F [ z ] n h n v × k h k v
and rank G ( z ) = rank G h ( z ) · rank G v ( z ) = k h k v , it follows that G ( z ) is a generator matrix of the product convolutional code C = C h ⊗ C v . Note that C is a rate k h k v / n h n v convolutional code. We will compute its degree in Theorem 5 below.
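The identity vect V ( z ) = ( G h ( z ) ⊗ G v ( z ) ) vect U ( z ) can be checked directly. For brevity, the sketch below works with constant (degree-zero) generator matrices over GF(2) — hypothetical ones, not taken from the paper — and verifies the identity for every information matrix U:

```python
from itertools import product as cartesian

def matmul_gf2(A, B):
    """Multiply two matrices over GF(2)."""
    return [[sum(A[i][t] * B[t][j] for t in range(len(B))) % 2
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(M):
    return [list(r) for r in zip(*M)]

def kron(A, B):
    """Kronecker product of two matrices over GF(2)."""
    return [[A[i // len(B)][j // len(B[0])] * B[i % len(B)][j % len(B[0])] % 2
             for j in range(len(A[0]) * len(B[0]))]
            for i in range(len(A) * len(B))]

def vec(M):
    """Stack the columns of M into a single column vector."""
    return [[M[i][j]] for j in range(len(M[0])) for i in range(len(M))]

# Hypothetical constant generator matrices, chosen only for illustration.
G_h = [[1], [1], [1]]
G_v = [[1, 0], [0, 1], [1, 1]]

ok = True
for bits in cartesian([0, 1], repeat=2):
    U = [[bits[0]], [bits[1]]]
    lhs = vec(matmul_gf2(matmul_gf2(G_v, U), transpose(G_h)))  # vec(G_v U G_h^T)
    rhs = matmul_gf2(kron(G_h, G_v), vec(U))                   # (G_h ⊗ G_v) vec(U)
    ok = ok and lhs == rhs
print(ok)  # True: the vec/Kronecker identity holds for every U
```

The same identity holds entrywise for polynomial matrices, since the Kronecker-product manipulation is purely algebraic.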
The following two theorems were introduced in [22,49] without proof. We include them here, with proofs, for completeness and future reference. The first one establishes that the generator matrix of the product code is basic if the generator matrices of the constituent codes are also basic.
Theorem 2.
Assume that G h ( z ) ∈ F [ z ] n h × k h and G v ( z ) ∈ F [ z ] n v × k v are generator matrices of the horizontal ( n h , k h , δ h ) and vertical ( n v , k v , δ v ) convolutional codes C h and C v , respectively. If G h ( z ) and G v ( z ) are basic, then G ( z ) = G h ( z ) ⊗ G v ( z ) is basic.
Proof. 
Since G h ( z ) and G v ( z ) are basic matrices, there exist L h ( z ) ∈ F [ z ] k h × n h and L v ( z ) ∈ F [ z ] k v × n v such that L h ( z ) G h ( z ) = I k h and L v ( z ) G v ( z ) = I k v . Now, consider the polynomial matrix L ( z ) = L h ( z ) ⊗ L v ( z ) ∈ F [ z ] k h k v × n h n v . From the properties of the Kronecker product, it follows that L ( z ) G ( z ) = I k h k v . Consequently, G ( z ) is basic. □
The next theorem gives us the column degrees of a generator matrix of the product code as a function of the column degrees of the generator matrices of the constituent codes.
Theorem 3.
Assume that G h ( z ) ∈ F [ z ] n h × k h and G v ( z ) ∈ F [ z ] n v × k v are generator matrices of the horizontal ( n h , k h , δ h ) and vertical ( n v , k v , δ v ) convolutional codes C h and C v , respectively. If ν 1 ( h ) , ν 2 ( h ) , … , ν k h ( h ) and ν 1 ( v ) , ν 2 ( v ) , … , ν k v ( v ) are the column degrees of G h ( z ) and G v ( z ) , respectively, then the column degrees of G ( z ) = G h ( z ) ⊗ G v ( z ) are
ν 1 , ν 2 , … , ν k v , ν k v + 1 , ν k v + 2 , … , ν 2 k v , … , ν ( k h − 1 ) k v + 1 , ν ( k h − 1 ) k v + 2 , … , ν k h k v ,
with ν ℓ = ν i ( h ) + ν j ( v ) , where ℓ = ( i − 1 ) k v + j , for i = 1 , 2 , … , k h and j = 1 , 2 , … , k v .
Proof. 
Assume that
G h ( z ) = [ g 1 ( h ) ( z ) g 2 ( h ) ( z ) ⋯ g k h ( h ) ( z ) ] and G v ( z ) = [ g 1 ( v ) ( z ) g 2 ( v ) ( z ) ⋯ g k v ( v ) ( z ) ] .
From the properties of the Kronecker product, it follows that
G ( z ) = [ M 1 M 2 ⋯ M k h ] ,
where
M i = [ g i ( h ) ( z ) ⊗ g 1 ( v ) ( z ) g i ( h ) ( z ) ⊗ g 2 ( v ) ( z ) ⋯ g i ( h ) ( z ) ⊗ g k v ( v ) ( z ) ] , for i = 1 , 2 , … , k h .
Now, since the column degrees of g i ( h ) ( z ) and g j ( v ) ( z ) are ν i ( h ) and ν j ( v ) , respectively, it follows that the column degree of g i ( h ) ( z ) ⊗ g j ( v ) ( z ) is ν i ( h ) + ν j ( v ) , and the theorem holds. □
As an immediate consequence of the previous theorem, we have the following theorem:
Theorem 4.
Assume that G h ( z ) ∈ F [ z ] n h × k h and G v ( z ) ∈ F [ z ] n v × k v . If G h ( z ) and G v ( z ) are column reduced, then G ( z ) = G h ( z ) ⊗ G v ( z ) is column reduced.
Proof. 
Let G h^∞ and G v^∞ be the high-order coefficient matrices of G h ( z ) and G v ( z ) , respectively. If G^∞ is the high-order coefficient matrix of G ( z ) , from Theorem 3, it follows that
G^∞ = G h^∞ ⊗ G v^∞ ,
and, from the properties of the Kronecker product,
rank G^∞ = rank ( G h^∞ ⊗ G v^∞ ) = rank G h^∞ · rank G v^∞ = k h k v .
Therefore, G ( z ) is column reduced. □
Finally, as a consequence of Theorems 2 and 4, we obtain the following theorem that gives us the degree of the product code as a function of the degrees of the constituent codes.
Theorem 5.
Assume that C h and C v are horizontal ( n h , k h , δ h ) and vertical ( n v , k v , δ v ) convolutional codes, respectively. Then, the degree of C = C h ⊗ C v is δ h k v + k h δ v .
Proof. 
Assume that G h ( z ) ∈ F [ z ] n h × k h and G v ( z ) ∈ F [ z ] n v × k v are basic and column reduced generator matrices of C h and C v , respectively. With the notation of Theorem 3, ν 1 ( h ) , ν 2 ( h ) , … , ν k h ( h ) and ν 1 ( v ) , ν 2 ( v ) , … , ν k v ( v ) are the Forney indices of C h and C v , respectively, and, therefore, δ h = ∑ i = 1 k h ν i ( h ) and δ v = ∑ j = 1 k v ν j ( v ) . Moreover, from Theorems 2 and 4, G ( z ) = G h ( z ) ⊗ G v ( z ) is a basic and column reduced generator matrix for C . Again, with the notation of Theorem 3, ν ( i − 1 ) k v + j = ν i ( h ) + ν j ( v ) , for i = 1 , 2 , … , k h and j = 1 , 2 , … , k v , are the Forney indices of C , and, therefore,
∑ i = 1 k h ∑ j = 1 k v ( ν i ( h ) + ν j ( v ) ) = ∑ i = 1 k h ( ν i ( h ) k v + δ v ) = δ h k v + k h δ v
is the degree of C . □
We will use the above theorems in the next section to obtain a minimal state-space realization of the product convolutional code C = C h ⊗ C v .

4. State-Space Realizations of Product Convolutional Codes

Let us assume that ( A h , B h , C h , D h ) and ( A v , B v , C v , D v ) are minimal realizations of column reduced generator matrices of the ( n h , k h , δ h ) horizontal and ( n v , k v , δ v ) vertical codes C h and C v , respectively. In this section, we will obtain a minimal state-space realization ( A , B , C , D ) of the ( n , k , δ ) product convolutional code C = C h ⊗ C v , where n = n h n v , k = k h k v and δ = δ h k v + k h δ v . This means that we must find matrices A ∈ F δ × δ , B ∈ F δ × k , C ∈ F n × δ and D ∈ F n × k such that the pair ( A , B ) is reachable, the pair ( A , C ) is observable, and C ( I δ − z A )^(−1) B z + D is a basic and column reduced generator matrix for C .
We can assume, without loss of generality, that the matrices A h , A v , B h , and B v have the form of the matrices A and B in Theorem 1. That is,
A h = diag ( A 1 ( h ) , A 2 ( h ) , … , A k h ( h ) ) with A i ( h ) = [ 0^T 0 ; I ν i ( h ) − 1 0 ] ∈ F ν i ( h ) × ν i ( h ) , (4)
B h = diag ( B 1 ( h ) , B 2 ( h ) , … , B k h ( h ) ) with B i ( h ) = [ 1 0 ⋯ 0 ]^T ∈ F ν i ( h ) , (5)
A v = diag ( A 1 ( v ) , A 2 ( v ) , … , A k v ( v ) ) with A j ( v ) = [ 0^T 0 ; I ν j ( v ) − 1 0 ] ∈ F ν j ( v ) × ν j ( v ) , (6)
B v = diag ( B 1 ( v ) , B 2 ( v ) , … , B k v ( v ) ) with B j ( v ) = [ 1 0 ⋯ 0 ]^T ∈ F ν j ( v ) . (7)
The next theorem allows us to obtain a reachable pair ( A , B ) from the reachable pairs ( A h , B h ) and ( A v , B v ) .
Theorem 6.
Assume that ( A h , B h , C h , D h ) and ( A v , B v , C v , D v ) are minimal state-space realizations of the ( n h , k h , δ h ) horizontal and ( n v , k v , δ v ) vertical codes C h and C v , respectively, with A h , B h , A v , and B v as in expressions (4)–(7). For i = 1 , 2 , … , k h and j = 1 , 2 , … , k v , let ν ( i − 1 ) k v + j = ν i ( h ) + ν j ( v ) and consider
A ( i − 1 ) k v + j = [ 0^T 0 ; I ν ( i − 1 ) k v + j − 1 0 ] ∈ F ν ( i − 1 ) k v + j × ν ( i − 1 ) k v + j , B ( i − 1 ) k v + j = [ 1 0 ⋯ 0 ]^T ∈ F ν ( i − 1 ) k v + j ,
and define
A = diag ( A 1 , A 2 , … , A k ) ∈ F δ × δ , B = diag ( B 1 , B 2 , … , B k ) ∈ F δ × k .
Then, ( A , B ) is a reachable pair.
Proof. 
It is easy to see that rank [ λ I δ − A B ] = δ , for all λ ∈ F ¯ . Thus, the pair ( A , B ) is reachable. □
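The construction of Theorem 6 is easy to reproduce numerically. The sketch below assembles the block-diagonal pair ( A , B ) for the Forney indices ν^(h) = ( 2 , 1 ) and ν^(v) = ( 3 , 2 ) used in Examples 1 and 2 (so δ = δ h k v + k h δ v = 3 · 2 + 2 · 5 = 16 ) and confirms reachability over GF(2):

```python
def matmul_gf2(A, B):
    """Multiply two matrices over GF(2)."""
    return [[sum(A[i][t] * B[t][j] for t in range(len(B))) % 2
             for j in range(len(B[0]))] for i in range(len(A))]

def gf2_rank(M):
    """Rank over GF(2) by Gaussian elimination."""
    M = [row[:] for row in M]
    rank = 0
    for c in range(len(M[0])):
        piv = next((r for r in range(rank, len(M)) if M[r][c]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        for r in range(len(M)):
            if r != rank and M[r][c]:
                M[r] = [(a + b) % 2 for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

# Column degrees of the component codes: nu^(h) = (2, 1), nu^(v) = (3, 2).
nu_h, nu_v = [2, 1], [3, 2]
nu = [a + b for a in nu_h for b in nu_v]   # nu_l = nu_i^(h) + nu_j^(v)
delta, k = sum(nu), len(nu)                # delta = 16, k = 4

# Block-diagonal pair (A, B) of Theorem 6: one shift chain per input.
A = [[0] * delta for _ in range(delta)]
B = [[0] * k for _ in range(delta)]
off = 0
for l, n in enumerate(nu):
    B[off][l] = 1                          # B_l = (1, 0, ..., 0)^T
    for r in range(1, n):
        A[off + r][off + r - 1] = 1        # subdiagonal identity block
    off += n

# Reachability matrix [B  AB ... A^(delta-1) B]
R = [row[:] for row in B]
P = B
for _ in range(delta - 1):
    P = matmul_gf2(A, P)
    for i in range(delta):
        R[i] = R[i] + P[i]
print(gf2_rank(R) == delta)  # True: the pair (A, B) is reachable
```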
Assume again that ( A h , B h , C h , D h ) and ( A v , B v , C v , D v ) are minimal state-space realizations of the ( n h , k h , δ h ) horizontal and ( n v , k v , δ v ) vertical codes C h and C v , respectively, with A h , B h , A v , and B v as in expressions (4)–(7). From Theorem 1 and expressions (1) and (3), it follows that
G h ( z ) = C h E h ( z ) + D h and G v ( z ) = C v E v ( z ) + D v , (8)
where
E h ( z ) = diag ( E 1 ( h ) ( z ) , E 2 ( h ) ( z ) , … , E k h ( h ) ( z ) ) with E i ( h ) ( z ) = [ z z^2 ⋯ z^(ν i ( h )) ]^T , for i = 1 , 2 , … , k h , (9)
E v ( z ) = diag ( E 1 ( v ) ( z ) , E 2 ( v ) ( z ) , … , E k v ( v ) ( z ) ) with E j ( v ) ( z ) = [ z z^2 ⋯ z^(ν j ( v )) ]^T , for j = 1 , 2 , … , k v . (10)
Now, since G ( z ) = G h ( z ) ⊗ G v ( z ) , from expression (8) and the properties of the Kronecker product, we have that
G ( z ) = ( C h E h ( z ) + D h ) ⊗ ( C v E v ( z ) + D v ) = ( C h ⊗ C v ) ( E h ( z ) ⊗ E v ( z ) ) + ( C h ⊗ D v ) ( E h ( z ) ⊗ I k v ) + ( D h ⊗ C v ) ( I k h ⊗ E v ( z ) ) + D h ⊗ D v = [ C h ⊗ C v C h ⊗ D v D h ⊗ C v ] [ E h ( z ) ⊗ E v ( z ) ; E h ( z ) ⊗ I k v ; I k h ⊗ E v ( z ) ] + D h ⊗ D v = C ¯ E ¯ ( z ) + D ¯ . (11)
Note that D ¯ = D h ⊗ D v is a matrix of size n h n v × k h k v ; that is, n × k . Thus, we can take D = D ¯ . However, since C ¯ = [ C h ⊗ C v C h ⊗ D v D h ⊗ C v ] is a matrix of size n h n v × ( δ h δ v + δ h k v + k h δ v ) , that is, n × ( δ h δ v + δ ) , we cannot take the above matrix as the matrix C. The following example will help us to understand how we should proceed to obtain the matrix C from the matrix C ¯ in expression (11).
Example 2.
Let F = G F ( 2 ) be the Galois field of two elements and consider G h ( z ) , the column reduced matrix, and the minimal state-space realization ( A h , B h , C h , D h ) of G h ( z ) given in Example 1. That is, G h ( z ) = [ z^2 z + 1 ; z + 1 z ; 1 1 ] ∈ F [ z ] 3 × 2 and
A h = [ 0 0 0 ; 1 0 0 ; 0 0 0 ] , B h = [ 1 0 ; 0 0 ; 0 1 ] , C h = [ 0 1 1 ; 1 0 1 ; 0 0 0 ] , and D h = [ 0 1 ; 1 0 ; 1 1 ] .
Moreover, E h ( z ) = [ z 0 ; z^2 0 ; 0 z ] .
Let G v ( z ) = [ 1 + z + z^2 1 + z ; z 1 ; 1 + z^3 z ; 1 1 + z^2 ] ∈ F [ z ] 4 × 2 . Since ν 1 ( v ) = 3 , ν 2 ( v ) = 2 , and rank G v^∞ = 2 , it follows that G v ( z ) is column reduced. Now, consider the matrices
A v = [ 0 0 0 0 0 ; 1 0 0 0 0 ; 0 1 0 0 0 ; 0 0 0 0 0 ; 0 0 0 1 0 ] , B v = [ 1 0 ; 0 0 ; 0 0 ; 0 1 ; 0 0 ] , C v = [ 1 1 0 1 0 ; 1 0 0 0 0 ; 0 0 1 1 0 ; 0 0 0 0 1 ] , and D v = [ 1 1 ; 0 1 ; 1 0 ; 1 1 ] .
Moreover, E v ( z ) = [ z 0 ; z^2 0 ; z^3 0 ; 0 z ; 0 z^2 ] .
Now, from expression (11), the generator matrix G ( z ) = G h ( z ) G v ( z ) of the product convolutional code C = C h C v is given by
G ( z ) = C ¯ E ¯ ( z ) + D ¯
with
C ¯ = [ C h ⊗ C v C h ⊗ D v D h ⊗ C v ] , E ¯ ( z ) = [ E h ( z ) ⊗ E v ( z ) ; E h ( z ) ⊗ I k v ; I k h ⊗ E v ( z ) ] , and D ¯ = D h ⊗ D v ,
where
C h ⊗ C v = [ 0 0 0 0 0 1 1 0 1 0 1 1 0 1 0 ;
0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 ;
0 0 0 0 0 0 0 1 1 0 0 0 1 1 0 ;
0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 ;
1 1 0 1 0 0 0 0 0 0 1 1 0 1 0 ;
1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 ;
0 0 1 1 0 0 0 0 0 0 0 0 1 1 0 ;
0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 ;
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ;
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ;
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ;
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ] ,
C h ⊗ D v = [ 0 0 1 1 1 1 ; 0 0 0 1 0 1 ; 0 0 1 0 1 0 ; 0 0 1 1 1 1 ; 1 1 0 0 1 1 ; 0 1 0 0 0 1 ; 1 0 0 0 1 0 ; 1 1 0 0 1 1 ; 0 0 0 0 0 0 ; 0 0 0 0 0 0 ; 0 0 0 0 0 0 ; 0 0 0 0 0 0 ] ,
D h ⊗ C v = [ 0 0 0 0 0 1 1 0 1 0 ; 0 0 0 0 0 1 0 0 0 0 ; 0 0 0 0 0 0 0 1 1 0 ; 0 0 0 0 0 0 0 0 0 1 ; 1 1 0 1 0 0 0 0 0 0 ; 1 0 0 0 0 0 0 0 0 0 ; 0 0 1 1 0 0 0 0 0 0 ; 0 0 0 0 1 0 0 0 0 0 ; 1 1 0 1 0 1 1 0 1 0 ; 1 0 0 0 0 1 0 0 0 0 ; 0 0 1 1 0 0 0 1 1 0 ; 0 0 0 0 1 0 0 0 0 1 ] ,
E h ( z ) ⊗ E v ( z ) = [ z^2 0 0 0 ; z^3 0 0 0 ; z^4 0 0 0 ; 0 z^2 0 0 ; 0 z^3 0 0 ; z^3 0 0 0 ; z^4 0 0 0 ; z^5 0 0 0 ; 0 z^3 0 0 ; 0 z^4 0 0 ; 0 0 z^2 0 ; 0 0 z^3 0 ; 0 0 z^4 0 ; 0 0 0 z^2 ; 0 0 0 z^3 ] ,
E h ( z ) ⊗ I k v = [ z 0 0 0 ; 0 z 0 0 ; z^2 0 0 0 ; 0 z^2 0 0 ; 0 0 z 0 ; 0 0 0 z ] , I k h ⊗ E v ( z ) = [ z 0 0 0 ; z^2 0 0 0 ; z^3 0 0 0 ; 0 z 0 0 ; 0 z^2 0 0 ; 0 0 z 0 ; 0 0 z^2 0 ; 0 0 z^3 0 ; 0 0 0 z ; 0 0 0 z^2 ] .
As we can observe, C ¯ has 31 columns, but we need a matrix with 16 columns. Furthermore, E ¯ ( z ) does not have the structure given by expression (3).
However, considering the rows of E ¯ ( z ) that contain the entries z , z^2 , … , z^(ν ℓ) needed for each column ℓ of E ( z ) , we can move these rows to the appropriate positions and then, by Gaussian elimination from those rows, we can transform the matrix E ¯ ( z ) into the matrix [ E ( z ) ; O ] , with
E ( z ) = diag ( E 1 ( z ) , E 2 ( z ) , E 3 ( z ) , E 4 ( z ) ) with E 1 ( z ) = [ z z^2 z^3 z^4 z^5 ]^T , E 2 ( z ) = E 3 ( z ) = [ z z^2 z^3 z^4 ]^T , and E 4 ( z ) = [ z z^2 z^3 ]^T ,
and O the zero matrix of the appropriate size. This means that we can find an invertible matrix P ∈ F 31 × 31 such that
P E ¯ ( z ) = [ E ( z ) ; O ]
and, therefore, C ¯ E ¯ ( z ) = C E ( z ) , with C ∈ F 12 × 16 such that C ¯ P^(−1) = [ C C ˜ ] .
We can use the argument introduced in the above example to prove the following theorem.
Theorem 7.
Assume that ( A h , B h , C h , D h ) and ( A v , B v , C v , D v ) are minimal state-space realizations of the ( n h , k h , δ h ) horizontal and ( n v , k v , δ v ) vertical codes C h and C v , respectively, with A h , B h , A v , and B v as in expressions (4)–(7). Let A be the matrix defined in Theorem 6 and let C ¯ be the matrix in expression (11). Moreover, assume that
E ( z ) = diag ( E 1 ( z ) , E 2 ( z ) , … , E k ( z ) ) with E ℓ ( z ) = [ z z^2 ⋯ z^(ν ℓ) ]^T , for ℓ = 1 , 2 , … , k ,
where ν ℓ = ν i ( h ) + ν j ( v ) , with ℓ = ( i − 1 ) k v + j , for i = 1 , 2 , … , k h and j = 1 , 2 , … , k v , and consider the matrices E h ( z ) and E v ( z ) in expressions (9) and (10). If E ¯ ( z ) = [ E h ( z ) ⊗ E v ( z ) ; E h ( z ) ⊗ I k v ; I k h ⊗ E v ( z ) ] , then there exists an invertible matrix P ∈ F ( δ + δ h δ v ) × ( δ + δ h δ v ) such that
P E ¯ ( z ) = [ E ( z ) ; O ] .
Moreover, if C ¯ P^(−1) = [ C C ˜ ] , with C ∈ F n × δ , then the pair ( A , C ) is observable.
Proof. 
Note that the submatrix of E ¯ ( z ) given by
E ^ ( z ) = [ E h ( z ) ⊗ I k v ; diag ( z^(ν 1 ( h )) E v ( z ) , z^(ν 2 ( h )) E v ( z ) , … , z^(ν k h ( h )) E v ( z ) ) ] (12)
contains the necessary rows to construct the matrix E ( z ) . Thus, by using an appropriate permutation matrix Q ∈ F ( δ + δ h δ v ) × ( δ + δ h δ v ) , we have that
Q E ¯ ( z ) = [ E ( z ) ; E ˜ ( z ) ] .
Now, the entries in the first column of E ˜ ( z ) are 0 or z^t with 1 ≤ t ≤ ν 1 ( h ) + ν 1 ( v ) − 1 ; therefore, by using Gaussian elimination, we can transform these entries into 0. Once this operation is completed, the entries in the second column of the modified E ˜ ( z ) are, again, 0 or z^t with 1 ≤ t ≤ ν 1 ( h ) + ν 2 ( v ) − 1 and, therefore, we can also transform these entries into 0. We continue with this argument until we transform the matrix E ˜ ( z ) into the zero matrix. In other words, we have found an invertible matrix R ∈ F ( δ + δ h δ v ) × ( δ + δ h δ v ) such that
$$R \begin{bmatrix} E(z) \\ \tilde{E}(z) \end{bmatrix} = \begin{bmatrix} E(z) \\ O \end{bmatrix}.$$
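To make the elimination step concrete, the following small sketch (illustrative code, not from the paper) zeroes one column of the $\tilde{E}(z)$ part over $\mathbb{F}_2$: each monomial $z^t$ appearing in $\tilde{E}(z)$ is cancelled by subtracting the row of $E(z)$ that carries the same $z^t$ in that column. Entries are represented as sets of exponents with nonzero coefficient, and the concrete entry values are made up for illustration:

```python
# Each matrix entry is a polynomial in z over GF(2), stored as the set of
# exponents with nonzero coefficient (e.g. {2} represents z^2, set() is 0).

def row_sub(row_a, row_b):
    """Subtract row_b from row_a entrywise; over GF(2) this is the
    symmetric difference of exponent sets."""
    return [a ^ b for a, b in zip(row_a, row_b)]

# One column of the stacked matrix: on top, the E(z) block z, z^2, z^3
# (so nu = 3); below, illustrative rows of E~(z) whose entries in this
# column are 0 or z^t with 1 <= t <= nu - 1.
M = [[{1}], [{2}], [{3}],        # E(z) part
     [{2}], [set()], [{1}]]      # E~(z) part (made-up entries)

for i in range(3, len(M)):       # eliminate the E~(z) part
    for t in sorted(M[i][0]):    # each monomial z^t matches row t-1 of E(z)
        M[i] = row_sub(M[i], M[t - 1])

print(M[3:])  # [[set()], [set()], [set()]] -- the E~(z) part is now zero
```

Since every row of $E(z)$ carries a single monomial, each subtraction only touches the column being cleaned, which is why the column-by-column argument of the proof terminates.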
Thus, we can take $P = R Q$ and, from expression (11), it follows that $\bar{C}\,\bar{E}(z) = C E(z)$.
Now, by an argument similar to the one used in the proof of Theorem 1, it follows that the pair $(A, C)$ is observable. □
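The observability claims above can be checked numerically via the Kalman rank criterion: $(A, C)$ is observable if and only if the observability matrix obtained by stacking $C, CA, \ldots, CA^{\delta-1}$ has full rank $\delta$. A minimal sketch, computed over the reals with an illustrative pair $(A, C)$ that is not taken from the paper:

```python
import numpy as np

def is_observable(A, C):
    """Kalman rank test: (A, C) is observable iff the stacked
    observability matrix [C; CA; ...; CA^(delta-1)] has rank delta."""
    delta = A.shape[0]
    blocks = [C @ np.linalg.matrix_power(A, t) for t in range(delta)]
    return np.linalg.matrix_rank(np.vstack(blocks)) == delta

# Illustrative pair: a 3-dimensional shift matrix A with an output C that
# reads the last state component, so every state eventually reaches C.
A = np.array([[0., 0., 0.],
              [1., 0., 0.],
              [0., 1., 0.]])
C = np.array([[0., 0., 1.]])
print(is_observable(A, C))  # True
```

Reading the first component instead, `C = [[1., 0., 0.]]`, gives a non-observable pair, since the shift pushes information away from that output.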
The proof of the previous theorem tells us which rows of the matrix $\bar{E}(z)$ we must consider to obtain the matrix $E(z)$; therefore, it also tells us which columns of the matrix $\bar{C}$ we must consider. Specifically, the submatrix $\hat{E}(z)$ given in expression (12) helps us determine a submatrix of $\bar{C}$ that contains the columns needed to construct the matrix $C$. On the one hand, the block $E_h(z) \otimes I_{k_v}$ of $\hat{E}(z)$ means that we take all the columns of $C_h \otimes D_v$. On the other hand, if we assume that $C_h = \begin{bmatrix} C_1^{(h)} & C_2^{(h)} & \cdots & C_{k_h}^{(h)} \end{bmatrix}$, with
$$C_i^{(h)} = \begin{bmatrix} g_i^{(h)}(1) & g_i^{(h)}(2) & \cdots & g_i^{(h)}\bigl(\nu_i^{(h)}\bigr) \end{bmatrix}, \quad \text{for } i = 1, 2, \ldots, k_h,$$
then, from the properties of the Kronecker product,
$$C_h \otimes C_v = \begin{bmatrix} C_1^{(h)} \otimes C_v & C_2^{(h)} \otimes C_v & \cdots & C_{k_h}^{(h)} \otimes C_v \end{bmatrix},$$
with
$$C_i^{(h)} \otimes C_v = \begin{bmatrix} g_i^{(h)}(1) \otimes C_v & g_i^{(h)}(2) \otimes C_v & \cdots & g_i^{(h)}\bigl(\nu_i^{(h)}\bigr) \otimes C_v \end{bmatrix}, \quad \text{for } i = 1, 2, \ldots, k_h.$$
Therefore, the remaining rows of the matrix $\hat{E}(z)$ in expression (12) mean that we must take the columns $g_i^{(h)}\bigl(\nu_i^{(h)}\bigr) \otimes C_v$, for $i = 1, 2, \ldots, k_h$. Thus, by using the matrix $P^{-1}$, we have that
$$\bar{C} P^{-1} = \begin{bmatrix} g_1^{(h)}\bigl(\nu_1^{(h)}\bigr) \otimes C_v & g_2^{(h)}\bigl(\nu_2^{(h)}\bigr) \otimes C_v & \cdots & g_{k_h}^{(h)}\bigl(\nu_{k_h}^{(h)}\bigr) \otimes C_v & C_h \otimes D_v & D_h \otimes C_v \end{bmatrix} P^{-1}$$
$$= \begin{bmatrix} c_1(1) & c_1(2) & \cdots & c_1(\nu_1) & c_2(1) & c_2(2) & \cdots & c_2(\nu_2) & \cdots & c_k(1) & c_k(2) & \cdots & c_k(\nu_k) & \tilde{C} \end{bmatrix} = \begin{bmatrix} C & \tilde{C} \end{bmatrix},$$
with $k = k_h k_v$ and $\nu_\ell$ as in Theorem 7.
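The column selection above rests on a standard property of the Kronecker product: partitioning the left factor by column blocks partitions the product by column blocks, $\begin{bmatrix} A_1 & A_2 \end{bmatrix} \otimes B = \begin{bmatrix} A_1 \otimes B & A_2 \otimes B \end{bmatrix}$. A quick numerical sanity check of this identity, with arbitrary small matrices unrelated to the codes above:

```python
import numpy as np

rng = np.random.default_rng(0)
A1 = rng.integers(0, 2, size=(3, 2))  # two column blocks of the left factor
A2 = rng.integers(0, 2, size=(3, 1))
B = rng.integers(0, 2, size=(4, 5))

lhs = np.kron(np.hstack([A1, A2]), B)              # [A1 A2] (x) B
rhs = np.hstack([np.kron(A1, B), np.kron(A2, B)])  # [A1 (x) B   A2 (x) B]
print(np.array_equal(lhs, rhs))  # True
```

In particular, applying the identity column by column to $C_h \otimes C_v$ yields exactly the blocks $g_i^{(h)}(t) \otimes C_v$ used in the selection.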
Now, as a consequence of Theorems 6 and 7, we obtain a minimal state-space realization of the convolutional product code.
Corollary 1.
With the notation of Theorems 6 and 7, the system $(A, B, C, D)$, with $D = D_h \otimes D_v$, is a minimal realization of the convolutional product code $\mathcal{C} = \mathcal{C}_h \otimes \mathcal{C}_v$.
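As a small dimensional sanity check on the feedthrough matrix of Corollary 1: if $D_h \in \mathbb{F}^{n_h \times k_h}$ and $D_v \in \mathbb{F}^{n_v \times k_v}$, then $D = D_h \otimes D_v$ has size $n \times k = n_h n_v \times k_h k_v$, as expected for the product code. A sketch with hypothetical parameters (the dimensions are illustrative only):

```python
import numpy as np

n_h, k_h = 3, 2   # hypothetical horizontal code parameters
n_v, k_v = 4, 3   # hypothetical vertical code parameters

Dh = np.ones((n_h, k_h))
Dv = np.ones((n_v, k_v))
D = np.kron(Dh, Dv)  # feedthrough of the product code: D = Dh (x) Dv
print(D.shape)  # (12, 6), i.e., (n_h * n_v, k_h * k_v)
```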
Example 3.
For the matrices in Example 2, computing $\bar{C} P^{-1}$ explicitly and partitioning it as $\bar{C} P^{-1} = \begin{bmatrix} C & \tilde{C} \end{bmatrix}$ yields the matrix $C \in \mathbb{F}^{12 \times 16}$ of the minimal realization.

5. Conclusions and Future Work

In this paper, we presented a constructive methodology to obtain a minimal state-space representation $(A, B, C, D)$ of a convolutional product code from two minimal state-space representations, $(A_h, B_h, C_h, D_h)$ and $(A_v, B_v, C_v, D_v)$, of a horizontal and a vertical convolutional code, respectively. We considered driven-variable representations and showed that, even though the matrices $A$, $B$, and $D$ of the product convolutional code can be built in a straightforward way from the given representations $(A_h, B_h, C_h, D_h)$ and $(A_v, B_v, C_v, D_v)$, the matrix $C$ requires further analysis. We showed, however, that $C$ can still be computed by properly selecting the appropriate entries of a matrix that depends on $C_h$, $C_v$, $D_h$, and $D_v$. In this way, the resulting representation is minimal and can be computed in a relatively easy way.
An interesting line for future research would be to consider input–state–output representations instead of driven-variable representations and to study these different state-space representations in the context of convolutional product codes.

Author Contributions

Investigation, J.-J.C., D.N., R.P. and V.R.; writing–original draft, J.-J.C., D.N., R.P. and V.R.; writing–review and editing, J.-J.C., D.N., R.P. and V.R. All authors have read and agreed to the published version of the manuscript.

Funding

The research of the first, second, and fourth authors was supported by Spanish grant PID2019-108668GB-I00 of the Ministerio de Ciencia e Innovación of the Gobierno de España and grant VIGROB-287 of the Universitat d’Alacant. The research of the third author was supported by the Center for Research and Development in Mathematics and Applications (CIDMA) through the Portuguese Foundation for Science and Technology (FCT—Fundação para a Ciência e a Tecnologia), reference UIDB/04106/2020.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Blaum, M.; Brady, J.; Bruck, J.; Menon, J. EVENODD: An efficient scheme for tolerating double disk failures in RAID architectures. IEEE Trans. Comput. 1995, 44, 192–202.
2. Blaum, M.; Roth, R.M. New array codes for multiple phased burst correction. IEEE Trans. Inf. Theory 1993, 39, 66–77.
3. Cardell, S.D.; Climent, J.J. An approach to the performance of SPC product codes under the erasure channel. Adv. Math. Commun. 2016, 10, 11–28.
4. Climent, J.J.; Napp, D.; Pinto, R.; Simões, R. Series concatenation of 2D convolutional codes by means of input–state–output representations. Int. J. Control 2018, 91, 2682–2691.
5. DeCastro-García, N.; García-Planas, M. Concatenated linear systems over rings and their application to construction of concatenated families of convolutional codes. Linear Algebra Appl. 2018, 542, 624–647.
6. Elias, P. Error-free coding. Trans. IRE Prof. Group Inf. Theory 1954, 4, 29–37.
7. Napp, D.; Pinto, R.; Sidorenko, V. Concatenation of convolutional codes and rank metric codes for multi-shot network coding. Des. Codes Cryptogr. 2018, 86, 237–445.
8. Sidorenko, V.; Jiang, L.; Bossert, M. Skew-feedback shift-register synthesis and decoding interleaved Gabidulin codes. IEEE Trans. Inf. Theory 2011, 57, 621–632.
9. Climent, J.J.; Herranz, V.; Perea, C. Linear system modelization of concatenated block and convolutional codes. Linear Algebra Appl. 2008, 429, 1191–1212.
10. Climent, J.J.; Herranz, V.; Perea, C. Parallel concatenated convolutional codes from linear systems theory viewpoint. Syst. Control Lett. 2016, 96, 15–22.
11. Fornasini, E.; Pinto, R. Matrix fraction descriptions in convolutional codes. Linear Algebra Appl. 2004, 392, 119–158.
12. Forney, G.D., Jr. Minimal bases of rational vector spaces, with applications to multivariable linear systems. SIAM J. Control 1975, 13, 493–520.
13. Forney, G.D., Jr.; Johannesson, R.; Wan, Z.X. Minimal and canonical rational generator matrices for convolutional codes. IEEE Trans. Inf. Theory 1996, 42, 1865–1880.
14. Gluesing-Luerssen, H.; Schneider, G. State space realizations and monomial equivalence for convolutional codes. Linear Algebra Appl. 2007, 425, 518–533.
15. Herranz, V.; Napp, D.; Perea, C. 1/n turbo codes from linear system point of view. Rev. Real Acad. Cienc. Exactas Fis. Nat. Ser. A Mat. 2020, 114.
16. Massey, J.L.; Sain, M.K. Codes, automata, and continuous systems: Explicit interconnections. IEEE Trans. Autom. Control 1967, 12, 644–650.
17. McEliece, R.J. The algebraic theory of convolutional codes. In Handbook of Coding Theory; Pless, V.S., Huffman, W.C., Eds.; Elsevier: Amsterdam, The Netherlands, 1998; pp. 1065–1138.
18. Rosenthal, J. Connections between linear systems and convolutional codes. In Codes, Systems and Graphical Models; Marcus, B., Rosenthal, J., Eds.; The IMA Volumes in Mathematics and its Applications, Volume 123; Springer: New York, NY, USA, 2001; pp. 39–66.
19. Rosenthal, J. Some interesting problems in systems theory which are of fundamental importance in coding theory. In Proceedings of the IEEE Conference on Decision and Control, San Diego, CA, USA, 12 December 1997; pp. 1–6.
20. Rosenthal, J.; Schumacher, J.M.; York, E.V. On behaviors and convolutional codes. IEEE Trans. Inf. Theory 1996, 42, 1881–1891.
21. Rosenthal, J.; York, E.V. BCH convolutional codes. IEEE Trans. Inf. Theory 1999, 45, 1833–1844.
22. Bossert, M.; Medina, C.; Sidorenko, V. Encoding and distance estimation of product convolutional codes. In Proceedings of the 2005 IEEE International Symposium on Information Theory (ISIT 2005), Adelaide, SA, Australia, 4–9 September 2005; pp. 1063–1066.
23. Höst, S.; Johannesson, R.; Sidorenko, V.; Zigangirov, K.S.; Zyablov, V.V. Woven convolutional codes I: Encoder properties. IEEE Trans. Inf. Theory 2002, 48, 149–161.
24. Rosenthal, J. An algebraic decoding algorithm for convolutional codes. Prog. Syst. Control Theory 1999, 25, 343–360.
25. Lieb, J.; Rosenthal, J. Erasure decoding of convolutional codes using first order representations. Math. Control Signals Syst. 2021, 1–15.
26. Muñoz Castañeda, A.L.; Muñoz-Porras, J.M.; Plaza-Martín, F.J. Rosenthal’s decoding algorithm for certain 1-dimensional convolutional codes. IEEE Trans. Inf. Theory 2019, 65, 7736–7741.
27. Climent, J.J.; Herranz, V.; Perea, C. Input–state–output representation of convolutional product codes. In Coding Theory and Applications—Proceedings of the 4th International Castle Meeting on Coding Theory and Applications (4ICMCTA); Pinto, R., Rocha Malonek, P., Vettori, P., Eds.; CIM Series in Mathematical Sciences, Volume 3; Springer: Berlin, Germany, 2015; pp. 107–114.
28. Fuhrmann, P.A.; Helmke, U. The Mathematics of Networks of Linear Systems; Springer International Publishing: Cham, Switzerland, 2015.
29. Kailath, T. Linear Systems; Prentice-Hall: Upper Saddle River, NJ, USA, 1980.
30. Forney, G.D., Jr. Convolutional codes I: Algebraic structure. IEEE Trans. Inf. Theory 1970, 16, 720–738.
31. Johannesson, R.; Wan, Z.X. A linear algebra approach to minimal convolutional encoders. IEEE Trans. Inf. Theory 1993, 39, 1219–1233.
32. Johannesson, R.; Zigangirov, K.S. Fundamentals of Convolutional Coding; IEEE Press: New York, NY, USA, 1999.
33. Smarandache, R.; Gluesing-Luerssen, H.; Rosenthal, J. Constructions of MDS-convolutional codes. IEEE Trans. Inf. Theory 2001, 47, 2045–2049.
34. Piret, P. Convolutional Codes, an Algebraic Approach; MIT Press: Boston, MA, USA, 1988.
35. York, E.V. Algebraic Description and Construction of Error Correcting Codes: A Linear Systems Point of View. Ph.D. Thesis, Department of Mathematics, University of Notre Dame, Notre Dame, IN, USA, 1997.
36. Antsaklis, P.J.; Michel, A.N. A Linear Systems Primer; Birkhäuser: Boston, MA, USA, 2007.
37. Chen, C.T. Linear System Theory and Design, 3rd ed.; Oxford University Press: New York, NY, USA, 1999.
38. Kalman, R.E. Mathematical description of linear dynamical systems. J. Soc. Ind. Appl. Math. Ser. A Control 1963, 1, 152–192.
39. Hautus, M.L.J. Controllability and observability conditions of linear autonomous systems. Proc. Ned. Akad. Voor Wet. Ser. A 1969, 72, 443–448.
40. Kalman, R.E. Lectures on controllability and observability. In Controllability and Observability; Evangelisti, E., Ed.; Springer: Berlin, Germany, 1968; pp. 1–149.
41. Climent, J.J.; Herranz, V.; Perea, C. A first approximation of concatenated convolutional codes from linear systems theory viewpoint. Linear Algebra Appl. 2007, 425, 673–699.
42. Hutchinson, R.; Rosenthal, J.; Smarandache, R. Convolutional codes with maximum distance profile. Syst. Control Lett. 2005, 54, 53–63.
43. Zerz, E. On multidimensional convolutional codes and controllability properties of multidimensional systems over finite rings. Asian J. Control 2010, 12, 119–126.
44. Delchamps, D.F. State Space and Input-Output Linear Systems; Springer: New York, NY, USA, 1988.
45. De Schutter, B. Minimal state-space realization in linear system theory: An overview. J. Comput. Appl. Math. 2000, 121, 331–354.
46. Gilbert, E.G. Controllability and observability in multivariable control systems. J. Soc. Ind. Appl. Math. Ser. A Control 1963, 1, 128–151.
47. Kalman, R.E.; Falb, P.L.; Arbib, M.A. Topics in Mathematical System Theory; McGraw-Hill: New York, NY, USA, 1969.
48. Rosenthal, J.; Smarandache, R. Construction of convolutional codes using methods from linear systems theory. In Proceedings of the 35th Allerton Conference on Communications, Control and Computing, Monticello, IL, USA, 29 September–1 October 1997; pp. 953–960.
49. Medina, C.; Sidorenko, V.R.; Zyablov, V.V. Error exponents for product convolutional codes. Probl. Inf. Transm. 2006, 42, 167–182.
50. Brewer, J.W. Kronecker products and matrix calculus in system theory. IEEE Trans. Circuits Syst. 1978, 25, 772–781.
51. Graham, A. Kronecker Products and Matrix Calculus with Applications; Ellis Horwood: Chichester, West Sussex, UK, 1981.