Abstract
In this paper, we study product convolutional codes described by state-space representations. In particular, we investigate how to derive state-space representations of the product code from those of the horizontal and vertical convolutional codes. We present a systematic procedure to build such representations with minimal dimension, i.e., representations that are both reachable and observable.
1. Introduction
It is well-known that the combination of codes can yield a new code with better properties than the individual codes alone. Such combinations have been widely used in coding theory in different forms, e.g., concatenation, product codes, turbo codes, array codes, or EVENODD and interleaving methods [,,,,,,,]. The advantages of combining codes include, for instance, larger distance, lower decoding complexity, and improved burst error correction. In this paper, we focus on the so-called product codes, which are a natural generalization of interleaved schemes. More concretely, we focus on product convolutional codes.
In the context of block product codes, the codewords are constant matrices with entries in a finite field. We may consider that both rows and columns are encoded into error-correcting codes. Hence, for encoding, first the row redundant symbols are obtained (horizontal encoding), and then the column redundant symbols (vertical encoding). If the horizontal code has minimum distance $d_h$ and the vertical code has minimum distance $d_v$, it is easy to see that the product code has minimum distance $d_h d_v$. This class of product codes has been thoroughly studied and is widely used to correct burst and random errors via many different decoding procedures. However, the product of two convolutional codes has been less investigated, and many properties that are known for block codes remain to be studied in the convolutional context.
Naturally, the class of convolutional codes generalizes the class of linear block codes and is, therefore, mathematically more involved. In this context, the data are considered as a sequence, in contrast with block codes, which operate with fixed message blocks (matrices in this case). Even though convolutional codes split the data into blocks of a fixed rate as block codes do, the relative position of each block in the sequence is taken into account. The blocks are not encoded independently, and previously encoded data (matrices in this case) in the sequence affect the next encoded block. Because of this, convolutional codes have memory and can be viewed as linear systems over a finite field (see, for instance, [,,,,,,,,,,]). A description of convolutional codes can be provided by a time-invariant discrete linear system called a discrete-time state-space system in control theory (see [,,]). Hence, we consider product convolutional codes described by state-space representations. Convolutional codes have already been thoroughly investigated within this framework, and fundamental system-theoretic properties, such as observability, reachability, and minimality, have been derived in [,,,].
It is worth mentioning the results derived in [,] on fundamental algebraic properties of the encoders representing product convolutional codes. In addition, they showed that every product convolutional code can be represented as a woven code and introduced the notion of block distances. In [], it was shown that, if the generator matrices of the horizontal and vertical convolutional codes are minimal basic, then the generator matrix of the product code is also minimal basic. In this work, we continue this thread of research but within the state-space framework instead of working with generator matrices. We present a constructive methodology to build a minimal state-space representation for these codes from two minimal state-space representations of the corresponding horizontal and vertical convolutional codes. The derived representations are minimal and, therefore, reachable and observable. Moreover, they are easily constructed by sorting and selecting some of the entries of a given matrix built upon the state-space representations of $\mathcal{C}_h$ and $\mathcal{C}_v$. This is done directly, without using the encoder matrix representations of the convolutional codes.
Recently, there have been new advances in the original idea of deriving an algebraic decoding algorithm for convolutional codes using state-space representations. The idea was first proposed in [] and heavily uses the structure of these representations to derive a general procedure, which allows for extending known decoding algorithms for block codes (like, e.g., the Berlekamp–Massey algorithm) to convolutional codes. More concretely, the algorithm iteratively computes the state vector inside the trellis diagram, and, once this state vector is constructed, the algorithm computes, in an algebraic manner, a new state vector , where s is related to the observability index of the state representation. Recently, these ideas have been further developed in [,]. Hence, the ideas of this paper can be used to build a minimal state-space representation of a product convolutional code with the property that its decoding can be simplified by considering the simpler horizontal and vertical component codes and applying the decoding algorithms developed in [,].
In [], given an input–state–output representation of each of the convolutional codes $\mathcal{C}_h$ and $\mathcal{C}_v$, two input–state–output representations of the product convolutional code were introduced, but neither of them is minimal, even if the two given input–state–output representations are both minimal. In this paper, we give a solution to this problem.
The rest of the paper is organized as follows: In Section 2, we introduce the background on polynomial matrices and convolutional codes needed to understand the paper. In Section 3, we describe how a product convolutional code can be viewed as a convolutional code whose generator matrix is the Kronecker product of the corresponding generator matrices. In Section 4, we provide a state-space realization of the product convolutional code based on state-space realizations of the convolutional codes involved in the product. Finally, in Section 5, we present the conclusions and future work.
2. Preliminaries
Let $\mathbb{F}$ be a finite field, $\mathbb{F}[z]$ the ring of polynomials in the variable z with coefficients in $\mathbb{F}$, and $\mathbb{F}(z)$ the set of rational functions in the variable z with coefficients in $\mathbb{F}$.
Assume that k and n are positive integers with $k \leq n$, denote by $\mathbb{F}[z]^{n \times k}$ the set of all $n \times k$ matrices with entries in $\mathbb{F}[z]$, and denote by $\mathbb{F}[z]^{n}$ the set $\mathbb{F}[z]^{n \times 1}$.
A matrix $U(z) \in \mathbb{F}[z]^{k \times k}$ is called unimodular if it admits a polynomial inverse; that is, if its determinant is a nonzero element of $\mathbb{F}$ (see, for example, [,]).
Assume that $G(z) \in \mathbb{F}[z]^{n \times k}$. The internal degree of $G(z)$ is the maximum degree of the $k \times k$ minors of $G(z)$. We say that $G(z)$ is basic if its internal degree is the minimum of the internal degrees of the matrices $G(z)T(z)$, for all invertible matrices $T(z)$; i.e., the internal degree of $G(z)$ is as small as possible (see, for instance, [,,,,]). In particular, if $T(z)$ is unimodular, then $G(z)$ and $G(z)T(z)$ have the same internal degree. $G(z)$ is called right prime if, for every factorization $G(z) = \bar{G}(z)T(z)$, with $\bar{G}(z) \in \mathbb{F}[z]^{n \times k}$ and $T(z) \in \mathbb{F}[z]^{k \times k}$, necessarily $T(z)$ is unimodular (see, for instance, [,,]). Furthermore, $G(z)$ is basic if and only if any (and therefore all) of the following equivalent conditions are satisfied: $G(z)$ is right prime; $G(z)$ has a polynomial left inverse [,].
Assume that $G(z) \in \mathbb{F}[z]^{n \times k}$ and denote by $\nu_j$ the j-th column degree of $G(z)$. We say that $G(z)$ is column reduced if the rank of the high-order coefficient matrix $G_{hc}$ is k, where the j-th column of $G_{hc}$ is the coefficient of $z^{\nu_j}$ in the j-th column of $G(z)$. Equivalently, $G(z)$ is column reduced if and only if its internal and external degrees coincide, where the external degree of $G(z)$ is the number $\nu_1 + \nu_2 + \cdots + \nu_k$. Note that the internal degree of a polynomial matrix is always less than or equal to its external degree []. For any $G(z) \in \mathbb{F}[z]^{n \times k}$, there exists a unimodular matrix $U(z)$ such that $G(z)U(z)$ is column reduced. Moreover, if $G(z)$ and $G(z)U(z)$ are column reduced matrices with $U(z)$ unimodular, then $G(z)$ and $G(z)U(z)$ have the same column degrees, up to a permutation. Column reduced matrices are also called minimal matrices [,,]. Basic and column reduced matrices are also called minimal-basic matrices [,,] or canonical matrices [].
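Since column degrees and the high-order coefficient matrix drive much of what follows, a small computational sketch may help. The following Python functions are our own illustration (the names and the coefficient-array layout are ours, not the paper's): a polynomial matrix $G(z) = \sum_d G_d z^d$ over $GF(2)$ is stored as a 0/1 array of shape (deg+1, n, k), and column reducedness is tested via the rank of the high-order coefficient matrix.

```python
import numpy as np

def column_degrees(G):
    """nu_j: largest d with a nonzero coefficient of z^d in column j
    (assumes no column of G(z) is identically zero)."""
    return [max(d for d in range(G.shape[0]) if G[d, :, j].any())
            for j in range(G.shape[2])]

def high_order_matrix(G):
    """High-order coefficient matrix G_hc: column j is the coefficient of
    z^{nu_j} in the j-th column of G(z)."""
    nus = column_degrees(G)
    return np.stack([G[nus[j], :, j] for j in range(G.shape[2])], axis=1)

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    M, rank = M.copy() % 2, 0
    for col in range(M.shape[1]):
        pivots = [r for r in range(rank, M.shape[0]) if M[r, col]]
        if not pivots:
            continue
        M[[rank, pivots[0]]] = M[[pivots[0], rank]]  # swap pivot row up
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] = (M[r] + M[rank]) % 2          # eliminate (mod 2)
        rank += 1
    return rank

def is_column_reduced(G):
    """G(z) is column reduced iff G_hc has full column rank k."""
    return gf2_rank(high_order_matrix(G)) == G.shape[2]
```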
A rate $k/n$ convolutional code $\mathcal{C}$ is an $\mathbb{F}[z]$-submodule of rank k of the module $\mathbb{F}[z]^{n}$ (see [,,]). Since $\mathbb{F}[z]$ is a Principal Ideal Domain, a convolutional code always has a well-defined rank k, and there exists $G(z) \in \mathbb{F}[z]^{n \times k}$, of rank k, such that (see [])
$\mathcal{C} = \{ v(z) = G(z)u(z) : u(z) \in \mathbb{F}[z]^{k} \},$
where $u(z) \in \mathbb{F}[z]^{k}$ is the information vector, $v(z) \in \mathbb{F}[z]^{n}$ is the corresponding codeword, and $G(z)$ is the generator or encoder matrix of $\mathcal{C}$.
If $G(z)$ is a generator matrix of $\mathcal{C}$ and $U(z)$ is unimodular, then $G(z)U(z)$ is also a generator matrix of $\mathcal{C}$. Therefore, all generator matrices of $\mathcal{C}$ have the same internal degree. The degree or complexity $\delta$ of $\mathcal{C}$ is the internal degree of one (and therefore any) generator matrix and, therefore, is also equal to the external degree of one (and therefore any) column reduced generator matrix (see [,]). The column degrees of a basic and column reduced generator matrix of $\mathcal{C}$ are called the Forney indices of $\mathcal{C}$.
Since $\mathcal{C}$ always admits a generator matrix $G(z)$ which is basic and column reduced, the column degrees of $G(z)$ are the Forney indices of $\mathcal{C}$, and their sum is the degree $\delta$ of $\mathcal{C}$. From now on, we refer to a rate $k/n$ convolutional code with degree $\delta$ as an $(n, k, \delta)$ convolutional code.
An $(n, k, \delta)$ convolutional code $\mathcal{C}$ can be described by a time-invariant linear system (see [,,,]), denoted by $\Sigma = (A, B, C, D)$,
$x(t+1) = A x(t) + B u(t), \quad v(t) = C x(t) + D u(t), \quad x(0) = 0, \quad t \geq 0, \qquad (1)$
where $A \in \mathbb{F}^{m \times m}$, $B \in \mathbb{F}^{m \times k}$, $C \in \mathbb{F}^{n \times m}$, and $D \in \mathbb{F}^{n \times k}$. For each instant t, we call $x(t) \in \mathbb{F}^{m}$ the state vector, $u(t) \in \mathbb{F}^{k}$ the input vector, and $v(t) \in \mathbb{F}^{n}$ the output vector, and we say that the system has dimension m. In the literature on linear systems, the above representation is known as the state-space representation (see, for example, [,,,,]). If we define $u(z) = \sum_{t \geq 0} u(t) z^{t}$ and $v(z) = \sum_{t \geq 0} v(t) z^{t}$, it follows from expression (1) that $v(z) = G(z) u(z)$, where
$G(z) = D + C z (I_m - A z)^{-1} B \qquad (2)$
is the transfer matrix of the system. We say that $\Sigma = (A, B, C, D)$ is a realization of $G(z)$ if $G(z)$ is the transfer matrix of $\Sigma$.
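Under the convention of expression (2), the coefficients of the transfer matrix are $G_0 = D$ and $G_t = C A^{t-1} B$ for $t \geq 1$. A minimal sketch computing them over $GF(2)$ (the function name is ours, and the series form is our reading of expression (2)):

```python
def transfer_coefficients(A, B, C, D, num_terms):
    """First num_terms coefficients of G(z) = D + C z (I - A z)^{-1} B
    over GF(2): G_0 = D and G_t = C A^{t-1} B for t >= 1."""
    coeffs, P = [np.asarray(D) % 2], np.asarray(B) % 2
    for _ in range(1, num_terms):
        coeffs.append((C @ P) % 2)   # G_t = C A^{t-1} B
        P = (A @ P) % 2              # advance to A^t B
    return np.stack(coeffs)
```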
For a given transfer matrix $G(z)$, there are, in general, many possible realizations. A realization of $G(z)$ is called minimal if it has minimal dimension, and this happens if and only if the pair $(A, B)$ is reachable and the pair $(A, C)$ is observable (see, for instance, [,,]). Recall that the pair $(A, B)$ is called reachable if
$\operatorname{rank} \begin{bmatrix} B & AB & \cdots & A^{m-1}B \end{bmatrix} = m,$
or equivalently (see []), $\operatorname{rank} \begin{bmatrix} zI_m - A & B \end{bmatrix} = m$ for all $z \in \overline{\mathbb{F}}$, where $\overline{\mathbb{F}}$ is the algebraic closure of $\mathbb{F}$. Analogously, the pair $(A, C)$ is observable if and only if the pair $(A^{\top}, C^{\top})$ is reachable. The dimension of a minimal realization of a transfer matrix $G(z)$ is called the McMillan degree of $G(z)$. In the particular case that $G(z)$ is a column reduced generator matrix of a convolutional code $\mathcal{C}$, the McMillan degree of $G(z)$ coincides with the degree of $\mathcal{C}$.
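These rank tests translate directly into code. A sketch reusing gf2_rank from the earlier snippet; the observability test uses exactly the duality stated above:

```python
def is_reachable(A, B):
    """(A, B) is reachable iff rank [B, AB, ..., A^{m-1}B] = m over GF(2)."""
    m, blocks, P = A.shape[0], [], B % 2
    for _ in range(m):
        blocks.append(P)      # current block A^t B
        P = (A @ P) % 2
    return gf2_rank(np.hstack(blocks)) == m

def is_observable(A, C):
    """(A, C) is observable iff (A^T, C^T) is reachable."""
    return is_reachable(A.T % 2, C.T % 2)
```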
Reachability and observability represent two major concepts of control system theory. They were introduced by Kalman in [] in the context of systems theory, and, in [], the definitions of reachability and observability of convolutional codes were presented; see also [,,,]. These notions are not only important for characterizing minimality of our state-space realization but also describe the possibility of driving the state anywhere with an appropriate selection of inputs (reachability) and the ability to compute the state from the observation of the output sequence (observability).
A system $\Sigma = (A, B, C, D)$ is a realization of a convolutional code $\mathcal{C}$ if $\mathcal{C}$ is equal to the set of outputs corresponding to polynomial inputs and to zero initial conditions. The minimal dimension of a realization of $\mathcal{C}$ is equal to the degree of $\mathcal{C}$, and the minimal realizations of the column reduced generator matrices of $\mathcal{C}$ are minimal realizations of the code.
If $\Sigma = (A, B, C, D)$, with $A \in \mathbb{F}^{m \times m}$, $B \in \mathbb{F}^{m \times k}$, $C \in \mathbb{F}^{n \times m}$, and $D \in \mathbb{F}^{n \times k}$, is a non-minimal realization of a transfer matrix $G(z)$ with McMillan degree $\delta < m$, then, from Kalman's decomposition theorem (see, for example, [,,,,,,,,]), there exists an invertible matrix $T$ such that
where , , and the pair is reachable, the pair is observable, and
That is, we obtain a minimal realization of the transfer matrix $G(z)$. Moreover, if $\Sigma' = (A', B', C', D')$ is another minimal realization of $G(z)$, then there exists a unique invertible matrix $T$ such that $A' = TAT^{-1}$, $B' = TB$, $C' = CT^{-1}$, and $D' = D$.
The state-space representation in expression (1), also known as the driving variable representation, is different from the input–state–output representation (see []) given by
$x(t+1) = A x(t) + B u(t), \quad y(t) = C x(t) + D u(t), \quad v(t) = \begin{pmatrix} y(t) \\ u(t) \end{pmatrix}, \quad x(0) = 0, \quad t \geq 0,$
where $A \in \mathbb{F}^{m \times m}$, $B \in \mathbb{F}^{m \times k}$, $C \in \mathbb{F}^{(n-k) \times m}$, and $D \in \mathbb{F}^{(n-k) \times k}$. This input–state–output representation has been thoroughly studied by many authors [,,,,,,,,], and the codewords are the finite support input–output sequences corresponding to finite support state sequences.
The next theorem (see [,]) provides a state-space realization for a given polynomial matrix, and it will be very useful in Section 4.
Theorem 1.
Let $G(z) \in \mathbb{F}[z]^{n \times k}$ be a matrix with column degrees $\nu_1, \nu_2, \ldots, \nu_k$. Assume that $\nu_j \geq 1$ for $j = 1, 2, \ldots, k$, and consider the matrices
If and
then the pair $(A, B)$ is reachable. Moreover, if $G(z)$ is column reduced, then the pair $(A, C)$ is observable, and, therefore, $\Sigma = (A, B, C, D)$ is a minimal realization of $G(z)$.
For the realization of $G(z)$ introduced in the previous theorem, it follows from expression (2) that the transfer matrix of $\Sigma = (A, B, C, D)$ is precisely $G(z)$, where $D = G(0)$.
The following example will help us to understand the previous theorem.
Example 1.
Let $\mathbb{F}$ be the Galois field of two elements and consider the polynomial matrix
Since , , and , it follows that is column reduced. Now, consider the matrices
Then, according to Theorem 1, it follows that is a minimal state-space realization of with
Moreover, and
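Since the displayed matrices of Theorem 1 and Example 1 are not reproduced above, the following sketch implements what we understand to be the standard shift realization underlying the theorem: one nilpotent shift block of size $\nu_j$ per column, B selecting the top of each block, C holding the coefficients of $z, z^2, \ldots, z^{\nu_j}$, and $D = G(0)$. The block layout is our assumption.

```python
def shift_realization(G):
    """Shift realization of G(z), all column degrees assumed >= 1; the state
    dimension is m = nu_1 + ... + nu_k. Returns (A, B, C, D) over GF(2)."""
    nus = column_degrees(G)
    n, k, m = G.shape[1], G.shape[2], sum(nus)
    A = np.zeros((m, m), dtype=int)   # block-diagonal nilpotent shift blocks
    B = np.zeros((m, k), dtype=int)   # selects the top entry of each block
    C = np.zeros((n, m), dtype=int)   # coefficients of z, ..., z^{nu_j}
    D = G[0].copy()                   # constant coefficient G(0)
    pos = 0
    for j, nu in enumerate(nus):
        for i in range(nu - 1):
            A[pos + i + 1, pos + i] = 1
        B[pos, j] = 1
        for d in range(1, nu + 1):
            C[:, pos + d - 1] = G[d, :, j]
        pos += nu
    return A, B, C, D
```

On examples, transfer_coefficients(A, B, C, D, G.shape[0]) reproduces the coefficient array of G(z), which is consistent with expression (2) under our stated convention.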
3. Product Convolutional Codes
In this section, we introduce the product of two convolutional codes, called the horizontal and the vertical code, respectively. Assume that $\mathcal{C}_h$ is a horizontal $(n_h, k_h, \delta_h)$ convolutional code and $\mathcal{C}_v$ is a vertical $(n_v, k_v, \delta_v)$ convolutional code. Then, the product convolutional code $\mathcal{C}_h \otimes \mathcal{C}_v$ (see [,]) is defined to be the convolutional code whose codewords consist of all $n_v \times n_h$ polynomial matrices whose columns belong to $\mathcal{C}_v$ and whose rows belong to $\mathcal{C}_h$.
Encoding of the product convolutional code can be done as follows (see [,]): Let $G_h(z) \in \mathbb{F}[z]^{n_h \times k_h}$ and $G_v(z) \in \mathbb{F}[z]^{n_v \times k_v}$ be generator matrices of the component convolutional codes $\mathcal{C}_h$ and $\mathcal{C}_v$, respectively. Denote by $U(z) \in \mathbb{F}[z]^{k_v \times k_h}$ an information matrix. Now, we can apply row–column encoding; i.e., every column of $U(z)$ is encoded using $G_v(z)$, and then every row of the resulting matrix is encoded using $G_h(z)$ as $(G_v(z)U(z))G_h(z)^{\top}$. We can also apply column–row encoding; i.e., every row of $U(z)$ is encoded using $G_h(z)$, and then every column of the resulting matrix is encoded using $G_v(z)$ as $G_v(z)(U(z)G_h(z)^{\top})$. As a consequence of the associativity of the product of matrices, we get the same matrix in both cases. Thus, the codeword matrix is given by
$V(z) = G_v(z)\,U(z)\,G_h(z)^{\top},$
and by using properties of the Kronecker product (see [,]), we have
$\operatorname{vec}(V(z)) = (G_h(z) \otimes G_v(z))\,\operatorname{vec}(U(z)),$
where $\operatorname{vec}$ is the operator that transforms a matrix into a vector by stacking the column vectors of the matrix below one another. Now, since
$\operatorname{rank}(G_h(z) \otimes G_v(z)) = \operatorname{rank}(G_h(z))\,\operatorname{rank}(G_v(z)) = k_h k_v,$
it follows that $G_h(z) \otimes G_v(z)$ is a generator matrix of the product convolutional code $\mathcal{C}_h \otimes \mathcal{C}_v$. Note that $\mathcal{C}_h \otimes \mathcal{C}_v$ is a rate $k_h k_v / n_h n_v$ convolutional code. We will compute its degree in Theorem 5 below.
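Computationally, the Kronecker product of two polynomial matrices can be formed coefficient-wise, since the coefficient of $z^d$ in $(G_h \otimes G_v)(z)$ is $\sum_{i+j=d} G_{h,i} \otimes G_{v,j}$. A sketch in the coefficient-array layout of the earlier snippets:

```python
def poly_kron(Gh, Gv):
    """(G_h kron G_v)(z): the coefficient of z^d is the sum over i + j = d
    of kron(G_{h,i}, G_{v,j}), computed over GF(2)."""
    out = np.zeros((Gh.shape[0] + Gv.shape[0] - 1,
                    Gh.shape[1] * Gv.shape[1],
                    Gh.shape[2] * Gv.shape[2]), dtype=int)
    for i in range(Gh.shape[0]):
        for j in range(Gv.shape[0]):
            out[i + j] = (out[i + j] + np.kron(Gh[i], Gv[j])) % 2
    return out
```

For constant matrices, the underlying identity $\operatorname{vec}(G_v U G_h^{\top}) = (G_h \otimes G_v)\operatorname{vec}(U)$ can be checked directly with np.kron and flatten(order='F').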
The following two theorems were stated in [,] without proof. We include them here, with proof, for completeness and future reference. The first one establishes that the generator matrix of the product code is basic if the generator matrices of the constituent codes are basic.
Theorem 2.
Assume that $G_h(z) \in \mathbb{F}[z]^{n_h \times k_h}$ and $G_v(z) \in \mathbb{F}[z]^{n_v \times k_v}$ are generator matrices of the horizontal and vertical convolutional codes $\mathcal{C}_h$ and $\mathcal{C}_v$, respectively. If $G_h(z)$ and $G_v(z)$ are basic, then $G_h(z) \otimes G_v(z)$ is basic.
Proof.
Since $G_h(z)$ and $G_v(z)$ are basic matrices, there exist polynomial matrices $X_h(z) \in \mathbb{F}[z]^{k_h \times n_h}$ and $X_v(z) \in \mathbb{F}[z]^{k_v \times n_v}$ such that $X_h(z)G_h(z) = I_{k_h}$ and $X_v(z)G_v(z) = I_{k_v}$. Now, consider the polynomial matrix $X_h(z) \otimes X_v(z)$. From the properties of the Kronecker product, it follows that
$(X_h(z) \otimes X_v(z))(G_h(z) \otimes G_v(z)) = (X_h(z)G_h(z)) \otimes (X_v(z)G_v(z)) = I_{k_h k_v}.$
Consequently, $G_h(z) \otimes G_v(z)$ has a polynomial left inverse, and so it is basic. □
The next theorem gives us the column degrees of a generator matrix of the product code as a function of the column degrees of the generator matrices of the constituent codes.
Theorem 3.
Assume that $G_h(z) \in \mathbb{F}[z]^{n_h \times k_h}$ and $G_v(z) \in \mathbb{F}[z]^{n_v \times k_v}$ are generator matrices of the horizontal and vertical convolutional codes $\mathcal{C}_h$ and $\mathcal{C}_v$, respectively. If $\nu_1, \nu_2, \ldots, \nu_{k_h}$ and $\mu_1, \mu_2, \ldots, \mu_{k_v}$ are the column degrees of $G_h(z)$ and $G_v(z)$, respectively, then the column degrees of $G_h(z) \otimes G_v(z)$ are
$\gamma_{(i-1)k_v + j} = \nu_i + \mu_j, \quad \text{with } i = 1, 2, \ldots, k_h \text{ and } j = 1, 2, \ldots, k_v.$
Proof.
Assume that $G_h(z) = \begin{bmatrix} g_1(z) & g_2(z) & \cdots & g_{k_h}(z) \end{bmatrix}$ and $G_v(z) = \begin{bmatrix} h_1(z) & h_2(z) & \cdots & h_{k_v}(z) \end{bmatrix}$ are written column-wise. From the properties of the Kronecker product, it follows that
$G_h(z) \otimes G_v(z) = \begin{bmatrix} g_1(z) \otimes G_v(z) & g_2(z) \otimes G_v(z) & \cdots & g_{k_h}(z) \otimes G_v(z) \end{bmatrix},$
where the column $(i-1)k_v + j$ of $G_h(z) \otimes G_v(z)$ is $g_i(z) \otimes h_j(z)$. Now, since the column degrees of $g_i(z)$ and $h_j(z)$ are $\nu_i$ and $\mu_j$, respectively, it follows that the column degree of $g_i(z) \otimes h_j(z)$ is $\nu_i + \mu_j$, and the theorem holds. □
As an immediate consequence of the previous theorem, we have the following theorem:
Theorem 4.
Assume that $G_h(z) \in \mathbb{F}[z]^{n_h \times k_h}$ and $G_v(z) \in \mathbb{F}[z]^{n_v \times k_v}$. If $G_h(z)$ and $G_v(z)$ are column reduced, then $G_h(z) \otimes G_v(z)$ is column reduced.
Proof.
Let $G_{h,hc}$ and $G_{v,hc}$ be the high-order coefficient matrices of $G_h(z)$ and $G_v(z)$, respectively. If $G_{hc}$ is the high-order coefficient matrix of $G_h(z) \otimes G_v(z)$, from Theorem 3, it follows that
$G_{hc} = G_{h,hc} \otimes G_{v,hc},$
and, from the properties of the Kronecker product,
$\operatorname{rank}(G_{hc}) = \operatorname{rank}(G_{h,hc})\,\operatorname{rank}(G_{v,hc}) = k_h k_v.$
Therefore, $G_h(z) \otimes G_v(z)$ is column reduced. □
Finally, as a consequence of Theorems 2 and 4, we obtain the following theorem that gives us the degree of the product code as a function of the degrees of the constituent codes.
Theorem 5.
Assume that $\mathcal{C}_h$ and $\mathcal{C}_v$ are horizontal $(n_h, k_h, \delta_h)$ and vertical $(n_v, k_v, \delta_v)$ convolutional codes, respectively. Then, the degree of $\mathcal{C}_h \otimes \mathcal{C}_v$ is $k_v \delta_h + k_h \delta_v$.
Proof.
Assume that $G_h(z)$ and $G_v(z)$ are basic and column reduced generator matrices of $\mathcal{C}_h$ and $\mathcal{C}_v$, respectively. With the notation of Theorem 3, $\nu_1, \nu_2, \ldots, \nu_{k_h}$ and $\mu_1, \mu_2, \ldots, \mu_{k_v}$ are the Forney indices of $\mathcal{C}_h$ and $\mathcal{C}_v$, respectively, and, therefore, $\delta_h = \nu_1 + \nu_2 + \cdots + \nu_{k_h}$ and $\delta_v = \mu_1 + \mu_2 + \cdots + \mu_{k_v}$. Moreover, from Theorems 2 and 4, $G_h(z) \otimes G_v(z)$ is a basic and column reduced generator matrix for $\mathcal{C}_h \otimes \mathcal{C}_v$. Again, with the notation of Theorem 3, $\nu_i + \mu_j$, for $i = 1, 2, \ldots, k_h$ and $j = 1, 2, \ldots, k_v$, are the Forney indices of $\mathcal{C}_h \otimes \mathcal{C}_v$, and, therefore,
$\sum_{i=1}^{k_h} \sum_{j=1}^{k_v} (\nu_i + \mu_j) = k_v \delta_h + k_h \delta_v$
is the degree of $\mathcal{C}_h \otimes \mathcal{C}_v$. □
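Theorems 3 and 5 lend themselves to a numerical cross-check: assuming $G_h(z)$ and $G_v(z)$ are basic and column reduced, so that their column degrees are the Forney indices, the degree of the product code equals both the sum of the column degrees of the Kronecker generator matrix and the closed-form count of pairwise sums. A sketch reusing the earlier helpers (the function name is ours):

```python
def product_code_degree(Gh, Gv):
    """Degree of the product code per Theorem 5: the sum of all pairwise
    sums nu_i + mu_j of Forney indices, i.e. k_v*delta_h + k_h*delta_v."""
    nus, mus = column_degrees(Gh), column_degrees(Gv)
    return len(mus) * sum(nus) + len(nus) * sum(mus)

# Cross-check against Theorem 3 via the Kronecker product's column degrees:
# sum(column_degrees(poly_kron(Gh, Gv))) == product_code_degree(Gh, Gv)
```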
We will use the above theorems in the next section to obtain a minimal state-space realization of the product convolutional code $\mathcal{C}_h \otimes \mathcal{C}_v$.
4. State-Space Realizations of Product Convolutional Codes
More specifically, let us assume that $\Sigma_h = (A_h, B_h, C_h, D_h)$ and $\Sigma_v = (A_v, B_v, C_v, D_v)$ are minimal realizations of column reduced generator matrices $G_h(z)$ and $G_v(z)$ of the horizontal and vertical codes $\mathcal{C}_h$ and $\mathcal{C}_v$, respectively. In this section, we will obtain a minimal state-space realization $\Sigma = (A, B, C, D)$ of the product convolutional code $\mathcal{C}_h \otimes \mathcal{C}_v$. This means that we must find matrices A, B, C, and D such that the pair $(A, B)$ is reachable, the pair $(A, C)$ is observable, and the transfer matrix of $\Sigma$ is a basic and column reduced generator matrix for $\mathcal{C}_h \otimes \mathcal{C}_v$.
We can assume, without loss of generality, that the matrices $A_h$, $B_h$, $A_v$, and $B_v$ have the form of the matrices A and B in Theorem 1. That is,
The next theorem allows us to obtain a reachable pair $(A, B)$ from the reachable pairs $(A_h, B_h)$ and $(A_v, B_v)$.
Theorem 6.
Proof.
It is easy to see that for all . Thus, the pair $(A, B)$ is reachable. □
Assume again that $\Sigma_h = (A_h, B_h, C_h, D_h)$ and $\Sigma_v = (A_v, B_v, C_v, D_v)$ are minimal state-space realizations of the horizontal and vertical codes $\mathcal{C}_h$ and $\mathcal{C}_v$, respectively, with $A_h$, $B_h$, $A_v$, and $B_v$ as in expressions (4)–(7). From Theorem 1 and expressions (1) and (3), it follows that
where
Now, since , from expression (8) and the properties of the Kronecker product, we have that
Note that $D_h \otimes D_v$ is a matrix of size $n_h n_v \times k_h k_v$; thus, we can take $D = D_h \otimes D_v$. However, since the matrix in expression (11) does not have the size required of the matrix C (its number of columns does not match the state dimension $k_v \delta_h + k_h \delta_v$), we cannot take the above matrix as the matrix C. The following example will help us to understand how we should proceed to obtain the matrix C from the matrix in expression (11).
Example 2.
Let $\mathbb{F}$ be the Galois field of two elements and consider , the column reduced matrix, and the minimal state-space realization of given in Example 1. That is, and
Moreover,
Let Since , , and it follows that is column reduced. Now, consider the matrices
Moreover,
Now, from expression (11), the generator matrix of the product convolutional code is given by
with
where
As we can observe, has 31 columns, but we need a matrix with 16 columns. Furthermore, does not have the structure given by expression (3).
However, considering the rows of whose elements have been written in red, we can move these rows to the appropriate positions and then, by Gaussian elimination from those rows, we can transform the matrix into the matrix with
and O the zero matrix of the appropriate size. This means that we can find an invertible matrix such that
and, therefore with such that
We can use the argument introduced in the above example to prove the following theorem.
Theorem 7.
Assume that and are minimal state-space realizations of the horizontal and vertical codes and , respectively, with , , , and as in expressions (4)–(7). Let A be the matrix defined in Theorem 6 and let be the matrix in expression (11). Moreover, assume that
where , with , for and , and consider the matrices and in expressions (9) and (10). If then there exists an invertible matrix such that
Moreover, if with , then the pair $(A, C)$ is observable.
Proof.
Note that the submatrix of given by
contains the necessary rows to construct the matrix . Thus, by using an appropriate permutation matrix , we have that
Now, the entries in the first column of are 0 or with ; therefore, by using Gaussian elimination, we can transform these entries into 0. Once this operation is completed, the entries in the second column of the modified matrix are, again, 0 or with , and, therefore, we can transform these entries into 0. We continue with this argument until we transform the matrix into the zero matrix. In other words, we have found an invertible matrix such that
Thus, we can take and, from expression (11), it follows that
Now, by an argument similar to the one used in the proof of Theorem 1, it follows that the pair $(A, C)$ is observable. □
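The elimination used in the proof is ordinary Gaussian elimination over $GF(2)$, tracked by an invertible transformation. A generic sketch (the name T for the accumulated transformation is ours) returning both the reduced matrix and the invertible matrix realizing the row operations:

```python
def gf2_row_reduce(M):
    """Row reduce M over GF(2); returns (R, T) with T invertible and
    R = T M (mod 2) in reduced row-echelon form."""
    M, rank = M.copy() % 2, 0
    T = np.eye(M.shape[0], dtype=int)
    for col in range(M.shape[1]):
        pivots = [r for r in range(rank, M.shape[0]) if M[r, col]]
        if not pivots:
            continue
        p = pivots[0]
        M[[rank, p]], T[[rank, p]] = M[[p, rank]], T[[p, rank]]  # swap rows
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] = (M[r] + M[rank]) % 2   # add the pivot row (mod 2)
                T[r] = (T[r] + T[rank]) % 2
        rank += 1
    return M, T
```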
The proof of the previous theorem tells us which rows of the matrix we must consider to obtain the matrix . Therefore, it also tells us which columns of the matrix we must consider. Specifically, the submatrix given in expression (12) will help us to determine a submatrix of , which contains the necessary columns to construct the matrix C. For that, on the one hand, the block of means that we take all the columns of . On the other hand, if we assume that with
then, from the properties of the Kronecker product,
with
Therefore, the rest of the rows of the matrix in expression (12) mean that we must take the columns , for . Thus, by using the matrix , we have that
with and as in Theorem 7.
Now, as a consequence of Theorems 6 and 7, we obtain a minimal state-space realization of the product convolutional code.
Corollary 1.
With the notation of Theorems 6 and 7, the system $\Sigma = (A, B, C, D)$, with $D = D_h \otimes D_v$, is a minimal realization of the product convolutional code $\mathcal{C}_h \otimes \mathcal{C}_v$.
Example 3.
For the matrices in Example 2, it follows that
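Since the matrices of Examples 2 and 3 are not reproduced above, the following end-to-end sketch uses small hypothetical generator matrices of our own choosing. It assembles a minimal realization of a product code by applying the shift realization of Theorem 1 to $G_h(z) \otimes G_v(z)$, which is column reduced by Theorem 4; this is a computational companion to Corollary 1 rather than the paper's row-selection construction.

```python
# Hypothetical data (ours): G_h(z) = [1+z, z]^T and G_v(z) = [1, 1+z]^T,
# both column reduced with a single Forney index equal to 1.
Gh = np.array([[[1], [0]], [[1], [1]]])   # coefficient array, shape (2, 2, 1)
Gv = np.array([[[1], [1]], [[0], [1]]])   # coefficient array, shape (2, 2, 1)

G = poly_kron(Gh, Gv)                     # 4 x 1 generator of the product code
assert is_column_reduced(G)               # Theorem 4

A, B, C, D = shift_realization(G)         # minimal by Theorem 1
assert is_reachable(A, B) and is_observable(A, C)

# Degree of the product code: k_v*delta_h + k_h*delta_v = 1*1 + 1*1 = 2.
assert sum(column_degrees(G)) == product_code_degree(Gh, Gv) == 2
assert A.shape[0] == 2                    # state dimension equals the degree
```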
5. Conclusions and Future Work
In this paper, we presented a constructive methodology to obtain a minimal state-space representation of a product convolutional code from two minimal state-space representations, $\Sigma_h$ and $\Sigma_v$, of a horizontal and a vertical convolutional code, respectively. We considered driving variable representations and showed that, even if the matrices A, B, and D of the product convolutional code can be built in a straightforward way from the given representations $\Sigma_h$ and $\Sigma_v$, the matrix C requires further analysis. We showed, however, that C can still be computed if one properly selects the appropriate entries of a matrix built upon $\Sigma_h$ and $\Sigma_v$. In this way, the produced representation is minimal and can be computed in a relatively easy way.
An interesting line for future research would be to consider input–state–output representations instead of driving variable representations and to study these different state-space representations in the context of product convolutional codes.
Author Contributions
Investigation, J.-J.C., D.N., R.P. and V.R.; writing–original draft, J.-J.C., D.N., R.P. and V.R.; writing–review and editing, J.-J.C., D.N., R.P. and V.R. All authors have read and agreed to the published version of the manuscript.
Funding
The research of the first, second, and fourth authors was supported by Spanish grants PID2019-108668GB-I00 of the Ministerio de Ciencia e Innovación of the Gobierno de España and VIGROB-287 of the Universitat d’Alacant. The research of the third author was supported by The Center for Research and Development in Mathematics and Applications (CIDMA) through the Portuguese Foundation for Science and Technology (FCT—Fundação para a Ciência e a Tecnologia), references UIDB/04106/2020.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
References
- Blaum, M.; Brady, J.; Bruck, J.; Menon, J. EVENODD: An efficient scheme for tolerating double disk failures in RAID architectures. IEEE Trans. Comput. 1995, 42, 192–202. [Google Scholar] [CrossRef]
- Blaum, M.; Roth, R.M. New array codes for multiple phased burst correction. IEEE Trans. Inf. Theory 1993, 39, 66–77. [Google Scholar] [CrossRef]
- Cardell, S.D.; Climent, J.J. An approach to the performance of SPC product codes under the erasure channel. Adv. Math. Commun. 2016, 10, 11–28. [Google Scholar] [CrossRef]
- Climent, J.J.; Napp, D.; Pinto, R.; Simões, R. Series concatenation of 2D convolutional codes by means of input–state–output representations. Int. J. Control 2018, 91, 2682–2691. [Google Scholar] [CrossRef]
- DeCastro-García, N.; García-Planas, M. Concatenated linear systems over rings and their application to construction of concatenated families of convolutional codes. Linear Algebra Its Appl. 2018, 542, 624–647. [Google Scholar] [CrossRef]
- Elias, P. Error-free coding. Trans. Ire Prof. Group Inf. Theory 1954, 4, 29–37. [Google Scholar] [CrossRef]
- Napp, D.; Pinto, R.; Sidorenko, V. Concatenation of convolutional codes and rank metric codes for multi-shot network coding. Des. Codes Cryptogr. 2018, 86, 237–445. [Google Scholar] [CrossRef]
- Sidorenko, V.; Jiang, L.; Bossert, M. Skew-feedback shift-register synthesis and decoding interleaved Gabidulin codes. IEEE Trans. Inf. Theory 2011, 57, 621–632. [Google Scholar] [CrossRef]
- Climent, J.J.; Herranz, V.; Perea, C. Linear system modelization of concatenated block and convolutional codes. Linear Algebra Its Appl. 2008, 429, 1191–1212. [Google Scholar] [CrossRef]
- Climent, J.J.; Herranz, V.; Perea, C. Parallel concatenated convolutional codes from linear systems theory viewpoint. Syst. Control Lett. 2016, 96, 15–22. [Google Scholar] [CrossRef]
- Fornasini, E.; Pinto, R. Matrix fraction descriptions in convolutional codes. Linear Algebra Its Appl. 2004, 392, 119–158. [Google Scholar] [CrossRef]
- Forney, G.D., Jr. Minimal bases of rational vector spaces, with applications to multivariable linear systems. SIAM J. Control 1975, 13, 493–520. [Google Scholar] [CrossRef]
- Forney, G.D., Jr.; Johannesson, R.; Wan, Z.X. Minimal and canonical rational generator matrices for convolutional codes. IEEE Trans. Inf. Theory 1996, 42, 1865–1880. [Google Scholar] [CrossRef]
- Gluesing-Luerssen, H.; Schneider, G. State space realizations and monomial equivalence for convolutional codes. Linear Algebra Its Appl. 2007, 425, 518–533. [Google Scholar] [CrossRef]
- Herranz, V.; Napp, D.; Perea, C. 1/n turbo codes from linear system point of view. Rev. Real Acad. Cienc. Exactas Fis. Nat. Ser. A. Mat. 2020, 114. [Google Scholar] [CrossRef]
- Massey, J.L.; Sain, M.K. Codes, automata, and continuous systems: Explicit interconnections. IEEE Trans. Autom. Control 1967, 12, 644–650. [Google Scholar] [CrossRef]
- McEliece, R.J. The algebraic theory of convolutional codes. In Handbook of Coding Theory; Pless, V.S., Huffman, W.C., Eds.; Elsevier: North-Holland, The Netherlands, 1998; pp. 1065–1138. [Google Scholar]
- Rosenthal, J. Connections between linear systems and convolutional codes. In Codes, Systems and Graphical Models; Marcus, B., Rosenthal, J., Eds.; Springer: New York, NY, USA, 2001; Volume 123, The IMA Volumes in Mathematics and its Applications; pp. 39–66. [Google Scholar] [CrossRef]
- Rosenthal, J. Some interesting problems in systems theory which are of fundamental importance in coding theory. In Proceedings of the IEEE Conference on Decision and Control, San Diego, CA, USA, 12 December 1997; pp. 1–6. [Google Scholar]
- Rosenthal, J.; Schumacher, J.M.; York, E.V. On behaviors and convolutional codes. IEEE Trans. Inf. Theory 1996, 42, 1881–1891. [Google Scholar] [CrossRef]
- Rosenthal, J.; York, E.V. BCH convolutional codes. IEEE Trans. Inf. Theory 1999, 45, 1833–1844. [Google Scholar] [CrossRef]
- Bossert, M.; Medina, C.; Sidorenko, V. Encoding and distance estimation of product convolutional codes. In Proceedings of the 2005 IEEE International Symposium on Information Theory (ISIT 2005), Adelaide, SA, Australia, 4–9 September 2005; pp. 1063–1066. [Google Scholar] [CrossRef]
- Höst, S.; Johannesson, R.; Sidorenko, V.; Zigangirov, K.S.; Zyablov, V.V. Woven convolutional codes I: Encoder properties. IEEE Trans. Inf. Theory 2002, 48, 149–161. [Google Scholar] [CrossRef]
- Rosenthal, J. An algebraic decoding algorithm for convolutional codes. Prog. Syst. Control Theory 1999, 25, 343–360. [Google Scholar] [CrossRef]
- Lieb, J.; Rosenthal, J. Erasure decoding of convolutional codes using first order representations. Math. Control Signals Syst. 2021, 1–15. [Google Scholar] [CrossRef]
- Muñoz Castañeda, A.L.; Muñoz-Porras, J.M.; Plaza-Martín, F.J. Rosenthal’s decoding algorithm for certain 1-dimensional convolutional codes. IEEE Trans. Inf. Theory 2019, 65, 7736–7741. [Google Scholar] [CrossRef]
- Climent, J.J.; Herranz, V.; Perea, C. Input–state–output representation of convolutional product codes. In Coding Theory and Applications—Proceedings of the 4th International Castle Meeting on Coding Theory and Applications (4ICMCTA); CIM Series in Mathematical Sciences; Pinto, R., Rocha Malonek, P., Vettori, P., Eds.; Springer: Berlin, Germany, 2015; Volume 3, pp. 107–114. [Google Scholar] [CrossRef]
- Fuhrmann, P.A.; Helmke, U. The Mathematics of Networks of Linear Systems; Springer International Publishing AG: Cham, Switzerland, 2015. [Google Scholar]
- Kailath, T. Linear Systems; Prentice-Hall: Upper Saddle River, NJ, USA, 1980. [Google Scholar]
- Forney, G.D., Jr. Convolutional codes I: Algebraic structure. IEEE Trans. Inf. Theory 1970, 16, 720–738. [Google Scholar] [CrossRef]
- Johannesson, R.; Wan, Z.X. A linear algebra approach to minimal convolutional encoders. IEEE Trans. Inf. Theory 1993, 39, 1219–1233. [Google Scholar] [CrossRef]
- Johannesson, R.; Zigangirov, K.S. Fundamentals of Convolutional Coding; IEEE Press: New York, NY, USA, 1999. [Google Scholar]
- Smarandache, R.; Gluesing-Luerssen, H.; Rosenthal, J. Constructions of MDS-convolutional codes. IEEE Trans. Inf. Theory 2001, 47, 2045–2049. [Google Scholar] [CrossRef]
- Piret, P. Convolutional Codes, an Algebraic Approach; MIT Press: Boston, MA, USA, 1988. [Google Scholar]
- York, E.V. Algebraic Description and Construction of Error Correcting Codes: A Linear Systems Point of View. Ph.D. Thesis, Department of Mathematics, University of Notre Dame, Notre Dame, IN, USA, 1997. [Google Scholar]
- Antsaklis, P.J.; Michel, A.N. A Linear Systems Primer; Birkhäuser: Boston, MA, USA, 2007. [Google Scholar]
- Chen, C.T. Linear Systems Theory and Design, 3rd ed.; Oxford University Press: New York, NY, USA, 1999. [Google Scholar]
- Kalman, R.E. Mathematical description of linear dynamical systems. J. Soc. Ind. Appl. Math. Ser. A Control 1963, 1, 152–192. [Google Scholar] [CrossRef]
- Hautus, M.L.J. Controllability and observability condition for linear autonomous systems. Proc. Ned. Akad. Voor Wet. (Ser. A) 1969, 72, 443–448. [Google Scholar]
- Kalman, R.E. Lectures on Controllability and Observability. In Controllability and Observability; Evangelisti, E., Ed.; Springer: Berlin, Germany, 1968; pp. 1–149. [Google Scholar]
- Climent, J.J.; Herranz, V.; Perea, C. A first approximation of concatenated convolutional codes from linear systems theory viewpoint. Linear Algebra Its Appl. 2007, 425, 673–699. [Google Scholar] [CrossRef]
- Hutchinson, R.; Rosenthal, J.; Smarandache, R. Convolutional codes with maximum distance profile. Syst. Control Lett. 2005, 54, 53–63. [Google Scholar] [CrossRef]
- Zerz, E. On multidimensional convolutional codes and controllability properties of multidimensional systems over finite rings. Asian J. Control 2010, 12, 119–126. [Google Scholar] [CrossRef]
- Delchamps, D.F. State Space and Input-Output Linear Systems; Springer: New York, NY, USA, 1988. [Google Scholar]
- de Schutter, B. Minimal state-space realization in linear system theory: An overview. J. Comput. Appl. Math. 2000, 121, 331–354. [Google Scholar] [CrossRef]
- Gilbert, E.G. Controllability and observability in multivariable control systems. J. Soc. Ind. Appl. Math. Ser. A Control 1963, 1, 128–151. [Google Scholar] [CrossRef]
- Kalman, R.E.; Falb, P.L.; Arbib, M.A. Topics in Mathematical System Theory; McGraw-Hill: New York, NY, USA, 1969. [Google Scholar]
- Rosenthal, J.; Smarandache, R. Construction of convolutional codes using methods from linear systems theory. In Proceedings of the 35th Allerton Conference on Communications, Control and Computing, Monticello, IL, USA, 29 September–1 October 1997; pp. 953–960. [Google Scholar]
- Medina, C.; Sidorenko, V.R.; Zyablov, V.V. Error exponents for product convolutional codes. Probl. Inf. Transm. 2006, 42, 167–182. [Google Scholar] [CrossRef]
- Brewer, J.W. Kronecker products and matrix calculus in system theory. IEEE Trans. Circuits Syst. 1978, 25, 772–781. [Google Scholar] [CrossRef]
- Graham, A. Kronecker Products and Matrix Calculus with Applications; Ellis Horwood Limited: Chichester, West Sussex, UK, 1981. [Google Scholar]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).